Clustered Timers

For Robust Scalable Systems

Often, when someone asks how we are going to scale the Web application we're about to develop, we look at them, smile, and say, "Not a problem - we'll just cluster the application servers." Clustering our application across multiple servers provides us with the ability to handle large volumes of traffic and to scale systems by adding additional servers to the cluster.

In addition to providing scalability, application clusters make the system more robust by allowing for automatic system failover when a server fails. This way, when one server goes down, the application continues to run, albeit with slightly decreased performance. While it is true that the current generation of application servers makes it relatively pain-free to create a cluster, there are still several significant, if often overlooked, design issues that must be taken into account once the system is clustered.

When we run our Web application in a cluster, we have the exact same software running on each machine in the cluster. While this eliminates a host of configuration management difficulties, it does create other problems. Although we don't have to write different code for every possible machine in the cluster, there are times when this simplicity actually makes things more complex; running scheduled tasks is typically one of these areas. Scheduled tasks are used to execute procedures that need to run at certain fixed times or at fixed intervals. Typical examples of scheduled tasks within a Web application are report-generation tasks and tasks that send data to external systems that are only available within a certain time frame.

To understand why clustering affects how we design our application to handle scheduled tasks, let's consider a generic e-commerce Web application. To allow management to analyze sales trends, profits, inventory, etc., the system has been set up to periodically compile a set of reports and e-mail them to management. Clearly, management doesn't want to receive multiple e-mails containing the same reports, yet this is what we will get if we simply write a scheduled task and then cluster our system. When the appointed time to run the report comes up, all machines in the cluster will generate the same report and send it to management. This can be seen visually in Figure 1.

Perhaps the most straightforward way to solve this problem is to package the code that runs the scheduled tasks into a separate JAR file within the EAR file that contains the WAR file for the Web application. This EAR file is deployed to all the servers in the cluster; however, the JAR containing the scheduled tasks is configured to run on only one of the servers. This solves the problem by preventing the scheduled tasks from ever running on multiple machines. However, this solution has significant downsides. First, you have created additional configuration management problems: you need to carefully track which servers are set up to run the scheduled tasks and the exact deployment procedures that were used, so that when additional servers are added to the cluster the application is properly deployed on them.

The second problem is that you have effectively taken the scheduled tasks out of the cluster. Now, if the machine that is set up with the scheduled tasks fails, or its connection to the network fails, there is no backup or failover system. The tasks won't run. The remainder of this article investigates solutions to this problem that allow the scheduled tasks to remain part of the cluster and don't involve additional configuration management.

To stop every system in the cluster from performing the same scheduled task, report generation in this case, we have to utilize something outside of the application server cluster to track the state of our scheduled task. A perfect candidate for maintaining the state of our scheduled tasks is a shared database, and since nearly all applications already have access to a shared database, this is the resource we will use to solve this problem in our example (see Figure 2). It's worth mentioning that while a shared database is an ideal resource for solving this problem, it's not the only option. The solution presented here could be adapted to use flat files or some other shared resource external to the cluster.

Our external resource, the database in this case, will act as a mediator between competing machines in the cluster. We will create a table in the database that tracks scheduled tasks and their status. When a machine in the cluster wants to run one of the scheduled tasks, it first checks the status of that task in the database to see if some other machine is already running that task. If no other machine is running the task, the status of the task will be updated and that machine will run the task.

Another way of thinking about this solution is in terms of a concurrent method running on a single machine. If we see the scheduled task in these terms, it becomes clear that the best way to keep multiple threads from running the task at the same time is to use some sort of lock or semaphore. On a single machine we might reach for a synchronized block around the code we want to protect, but what we really want here is slightly different: a thread should attempt to acquire the lock and, if the lock is already held, simply skip its run rather than wait for its turn. In our distributed system, the database serves as that lock.
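A minimal single-JVM sketch of this acquire-or-skip behavior, using ReentrantLock.tryLock() purely for comparison with the clustered version built below (the class and method names here are illustrative, not from the article):

import java.util.concurrent.locks.ReentrantLock;

public class SingleJvmReportRunner {

    // Within one JVM, this lock plays the role the Tasks table will play
    // across the cluster.
    private static final ReentrantLock REPORT_LOCK = new ReentrantLock();

    public void generateReport() {
        // tryLock() returns immediately: true if we acquired the lock,
        // false if another thread is already generating the report.
        if (!REPORT_LOCK.tryLock()) {
            return; // skip this run; someone else is already doing the work
        }
        try {
            System.out.println("Creating report to be emailed...");
        } finally {
            REPORT_LOCK.unlock();
        }
    }
}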

We will call our database table "Tasks" and it will have three columns. The first column will be the name of the task, the second the status of the task, and the third the date and time that the task last changed status. The generic SQL script to produce this table is shown below.

CREATE TABLE `Tasks` (
  `TaskName` varchar(50) NOT NULL,
  `Status` varchar(25) NOT NULL,
  `StatusTime` datetime,
  PRIMARY KEY (`TaskName`)
);

Now that we have created our database table to serve as our mediator, we can create the class that accesses this table in order to determine if a particular instance of a Task can execute. We'll call this class TaskMonitor. (The source code for this article can be downloaded from www.sys-con.com/java/sourcec.cfm.) The class exposes two public methods, public static boolean acquireLock(String taskName) and public static void releaseLock(String taskName). Before a Task runs, it will need to call the acquireLock method of the TaskMonitor. If this method returns true, it's safe for the Task to run. If it returns false, it's not safe for the Task to run, as some other instance of this Task in the cluster is already executing. The key to understanding the TaskMonitor class is to understand the ACQUIRE_LOCK SQL query on lines 5-7.

What needs to be done is to determine if the Task in question, as identified by the field TaskName, is currently Idle, and if so, change its Status to Active. The crucial aspect of this is that it needs to happen atomically, that is, it must all happen as one single step. That's why we use a single update statement instead of writing both a select statement to see if the Task is currently Idle and an update to change its Status. In the case where we use the select statement first, it would be possible for the same select statement to be run by the other machines in the cluster before the update is executed. This would result in multiple Tasks running since they would all see the Idle state. By performing the entire process in an update statement, we take advantage of the automatic exclusive row locking that takes place in the database whenever an update statement is executed.

Now that we understand how the ACQUIRE_LOCK query works, the rest of the acquireLock method of the TaskMonitor is easy to follow. On line 23 the query is executed and the results are examined. The executeUpdate method returns the number of rows that were affected by the query. When the ACQUIRE_LOCK query successfully changes the Task from Idle to Active (as will be the case when this particular query is the first one in the cluster to run), one row will have been affected and the lockAcquired flag will be set to true. Otherwise, no rows will be affected and the lockAcquired flag will remain false.

The releaseLock method of TaskMonitor is meant to be called when a Task has finished executing. This method simply changes the status of the Task back to Idle. Both the releaseLock and the acquireLock methods also update the StatusTime field with the current date and time for record-keeping purposes.

One final note on the TaskMonitor class: the getConnection method shown in lines 75-85 should be upgraded before placing this class into production. As written, the method creates a connection to an instance of a MySQL database. A better practice in production would be to retrieve a connection from an existing connection pool.
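The complete TaskMonitor source accompanies the original article; the sketch below only illustrates the approach described above. The SQL text, exception handling, and MySQL connection details are assumptions made for illustration, and the line numbers cited in the prose refer to the downloadable listing, not to this sketch.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class TaskMonitor {

    // Atomically flips a task from Idle to Active. Because this is a single
    // UPDATE statement, the database's row locking guarantees that only one
    // machine in the cluster can succeed for a given TaskName.
    private static final String ACQUIRE_LOCK =
        "UPDATE Tasks SET Status = 'Active', StatusTime = ? "
      + "WHERE TaskName = ? AND Status = 'Idle'";

    // Returns the task to the Idle state so it can be acquired again.
    private static final String RELEASE_LOCK =
        "UPDATE Tasks SET Status = 'Idle', StatusTime = ? "
      + "WHERE TaskName = ?";

    public static boolean acquireLock(String taskName) {
        boolean lockAcquired = false;
        try (Connection con = getConnection();
             PreparedStatement ps = con.prepareStatement(ACQUIRE_LOCK)) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setString(2, taskName);
            // executeUpdate returns the number of rows affected: 1 when this
            // machine won the race and flipped the task from Idle to Active,
            // 0 when some other machine already holds the lock.
            lockAcquired = (ps.executeUpdate() == 1);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return lockAcquired;
    }

    public static void releaseLock(String taskName) {
        try (Connection con = getConnection();
             PreparedStatement ps = con.prepareStatement(RELEASE_LOCK)) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setString(2, taskName);
            ps.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    // The article's version connects directly to a MySQL instance; the URL
    // and credentials below are placeholders. In production, retrieve a
    // connection from a pooled DataSource (for example, via JNDI) instead.
    private static Connection getConnection() throws SQLException {
        return DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/appdb", "user", "password");
    }
}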

Together the Tasks database table and the TaskMonitor class provide a framework for ensuring that only one instance of a given Task is running at a particular time, no matter how many instances of the application are running within the clustered system. At this point we're ready to create our report generating Task.

Because we're concerned with managing Tasks in a clustered environment, and not with creating reports or using the javax.mail APIs, we'll create a simple Task, called ReportTask, to illustrate the concept. Because we want this Task to execute automatically on a schedule, we need to extend java.util.TimerTask. TimerTask is an abstract class that has one method that we have to implement, public void run(). This is the method where all the Task's work is done. For our simple example, ReportTask, we'll output some text to show that the Task is running. The code for this class is shown below.

1) import java.util.TimerTask;
2) public class ReportTask extends TimerTask {
3)     public void run() {
4)         if(TaskMonitor.acquireLock("ReportTask") == false)
5)             return;
6)         System.out.println("Creating report to be emailed...");
7)         TaskMonitor.releaseLock("ReportTask");
8)     }
9) }

The key thing to note here is that before the ReportTask actually performs its work, printing some text in this case, it first attempts to acquire the lock for this Task by making the call to acquireLock on line 4. If it fails to acquire the lock, it simply returns without performing its work. However, if it does successfully acquire the lock, then it's free to perform its work and it goes ahead and prints out its message on line 6. Once the Task is complete, it's vital that the lock be released. This is accomplished by calling releaseLock on line 7. If the lock is never released, this Task will never run again on any machine in the cluster. Ensuring that the lock is properly released is clearly not an issue with this simple example; however, in more complex tasks it can be tricky. Consider a Task where several different error conditions could cause the Task to terminate before running to completion. There are now potentially several places where the lock will have to be released.
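One way to make the release harder to forget (a variation on the run method above, not the article's listing) is to wrap the Task's work in a try/finally block so that releaseLock runs on every exit path, normal or otherwise:

public void run() {
    if (!TaskMonitor.acquireLock("ReportTask")) {
        return; // another machine in the cluster is already running this task
    }
    try {
        // The real work: generate the report and e-mail it.
        System.out.println("Creating report to be emailed...");
    } finally {
        // Runs whether the work completes or throws, so the lock is always
        // returned and the task can run again at its next scheduled time.
        TaskMonitor.releaseLock("ReportTask");
    }
}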

At this point, you've probably noticed a serious problem with our Task: we never populated the Tasks table with any tasks. As things stand, our ReportTask will never be able to acquire a lock and so will never run. Every Task that's going to be managed this way needs a corresponding row in the table. To rectify the situation for our example, we insert the ReportTask information into the Tasks table using the following SQL script:

insert into Tasks values ('ReportTask', 'Idle', null);

We've nearly finished setting up our system for managing clustered tasks. So far we've created the Tasks table and TaskMonitor class that serve as our external resource, and a TimerTask called ReportTask that will run in our cluster. All that remains is to create a Timer for running our ReportTask. Because we want to start the Timer for our task as soon as the application starts, we'll create a servlet called StartupServlet that does the work of creating our Timer. We ensure that StartupServlet is loaded immediately by adding the following lines to web.xml:

<servlet>
  <servlet-name>StartupServlet</servlet-name>
  <display-name>StartupServlet</display-name>
  <description>Used to create the Timers</description>
  <servlet-class>StartupServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

As our simple StartupServlet is not designed to handle requests, it doesn't need to override any method other than init(). When we create the Timer for running our ReportTask, it's important to use the overloaded constructor that creates the Timer's thread as a daemon thread; the default no-argument constructor does not. By making it a daemon thread, we ensure that the Timer runs for as long as our Web application runs and terminates when the application terminates. We don't want to try to generate reports if the application has been stopped for some reason.

After calculating how many milliseconds are in a day (we want our ReportTask to run once a day), we schedule the ReportTask to run daily, starting now. On line 12 we place the Timer that we created in the ServletContext. While this is not strictly necessary to keep the ReportTask running, by keeping a reference to the Timer available we are able to check easily on the status of the ReportTimer or cancel it entirely should the need arise.
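The StartupServlet source is also part of the article's download; a minimal sketch of the approach just described might look like the following. The ServletContext attribute name, the exact scheduling call, and the choice of HttpServlet as the base class are assumptions, and the line numbers cited above refer to the downloadable listing rather than this sketch.

import java.util.Timer;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class StartupServlet extends HttpServlet {

    // Roughly the number of milliseconds in one day.
    private static final long ONE_DAY = 1000L * 60 * 60 * 24;

    public void init() throws ServletException {
        // Passing true creates the Timer's thread as a daemon thread, so it
        // will not outlive the Web application.
        Timer reportTimer = new Timer(true);

        // Run the ReportTask now and then once every 24 hours.
        reportTimer.schedule(new ReportTask(), 0L, ONE_DAY);

        // Keep a reference in the ServletContext so the Timer can be
        // inspected or cancelled later if necessary.
        getServletContext().setAttribute("ReportTimer", reportTimer);
    }
}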

With the StartupServlet in place, we now have a very basic but workable system for running scheduled Tasks in a clustered environment, without having to worry about the same task running on all of the machines in the cluster simultaneously. It's important to note that if the tasks being executed complete in a very short period of time, you could still see duplicate executions of the same task when the clocks on the machines in the cluster are not in sync with one another. It is possible to extend this approach to address that problem, but doing so is outside the scope of this article. With a little bit of effort, the system can also be extended to allow for such things as programmatic modification of the running tasks, robust error handling, and recovery of frozen tasks.
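As one example of the last of these extensions (this is not covered in the article itself), the StatusTime column already holds enough information to recover a task whose lock was never released because the machine running it crashed. A hedged sketch of the idea, written as a hypothetical method that could be added to the TaskMonitor sketched earlier: any task that has been Active longer than some maximum expected run time is simply reset to Idle.

// Hypothetical helper, not part of the article's TaskMonitor: resets any
// task that has been marked Active for longer than maxActiveMillis, on the
// assumption that the machine running it died before calling releaseLock.
private static final String RECOVER_STALE_LOCKS =
    "UPDATE Tasks SET Status = 'Idle', StatusTime = ? "
  + "WHERE Status = 'Active' AND StatusTime < ?";

public static int recoverStaleLocks(long maxActiveMillis) {
    int recovered = 0;
    long now = System.currentTimeMillis();
    try (Connection con = getConnection();
         PreparedStatement ps = con.prepareStatement(RECOVER_STALE_LOCKS)) {
        ps.setTimestamp(1, new Timestamp(now));
        ps.setTimestamp(2, new Timestamp(now - maxActiveMillis));
        // Returns how many stuck tasks were reset; each is now free to run
        // again on its next scheduled attempt.
        recovered = ps.executeUpdate();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return recovered;
}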

More Stories By Clark D. Richey Jr.

Clark is a principal consultant with the RABA Technologies RiSC group for advanced research and development. In his spare time, he teaches the Java platform to students at Loyola College, where, as an associate professor, he shares his experiences with much enthusiasm. Clark is the founder of both JUGaccino, a Maryland-based JUG, and the StopLight and PermissionSniffer open source projects. He is also involved in implementing highly scalable, highly secure, service-oriented architectures using Jini.

Most Recent Comments
cbellonch 05/12/05 11:04:17 AM EDT

Hi,
Thanks for the article, it would be useful for our project. We've tried to download the code in:
· www.sys-con.com/java/sourcec.cfm
· http://www.sys-con.com/java/archives3/0903/Richey0903.zip

without success, are the links correct?

tbb 03/10/04 08:44:10 AM EST

I believe a class that implements ServletContextListener would be a better way to solve this problem than a servlet that loads on startup. (If your servlet container implements the servlet 2.3+ spec).
