Clustered Timers

For Robust Scalable Systems

Often, when someone asks how we are going to scale the Web application we're about to develop, we look at them, smile, and say, "Not a problem - we'll just cluster the application servers." Clustering our application across multiple servers provides us with the ability to handle large volumes of traffic and to scale systems by adding additional servers to the cluster.

In addition to providing scalability, application clusters make the system more robust by allowing for automatic failover when a server fails. This way, when one server goes down, the application continues to run, albeit with slightly decreased performance. While the current generation of application servers makes it relatively pain-free to create a cluster, there are still several significant, if often overlooked, design issues that must be taken into account once the system is clustered.

When we run our Web application in a cluster, we have the exact same software running on each machine in the cluster. While this eliminates a host of configuration management difficulties, it does create other problems. While we don't have to write different code for every possible machine in the cluster, there are times when this simplicity actually makes things more complex; the running of scheduled tasks is typically one of these areas. Scheduled tasks are used to execute procedures that need to run at certain fixed times or at fixed intervals. Typical examples of scheduled tasks within a Web application are report-generation tasks and tasks that send data to external systems that are only available within a certain time frame.

To understand why clustering affects how we design our application to handle scheduled tasks, let's consider a generic e-commerce Web application. To allow management to analyze sales trends, profits, inventory, etc., the system has been set up to periodically compile a set of reports and e-mail them to management. Clearly, management doesn't want to receive multiple e-mails containing the same reports, yet this is what we will get if we simply write a scheduled task and then cluster our system. When the appointed time to run the report comes up, all machines in the cluster will generate the same report and send it to management. This can be seen visually in Figure 1.

Perhaps the most straightforward way to solve this problem is to package the code that runs the scheduled tasks into a separate JAR file within the EAR file that contains the WAR file for the Web application. This EAR file is deployed to all the servers in the cluster; however, the JAR containing the scheduled tasks is configured to run on only one of them. This solves the problem by preventing the scheduled tasks from ever running on multiple machines. However, there are significant downsides to this solution. First, you have created additional configuration management problems: you need to carefully track which servers are set up to run the scheduled tasks and the exact deployment procedures that were used, so that when additional servers are added to the cluster, the application is properly deployed on them.

The second problem is that you have effectively taken the scheduled tasks out of the cluster. Now, if the machine that is set up with the scheduled tasks fails, or its connection to the network fails, there is no backup or failover system. The tasks won't run. The remainder of this article investigates solutions to this problem that allow the scheduled tasks to remain part of the cluster and don't involve additional configuration management.

To stop every system in the cluster from performing the same scheduled task, report generation in this case, we have to utilize something outside of the application server cluster to track the state of our scheduled task. A perfect candidate for maintaining the state of our scheduled tasks is a shared database, and since nearly all applications already have access to a shared database, this is the resource we will use to solve this problem in our example (see Figure 2). It's worth mentioning that while a shared database is an ideal resource for solving this problem, it's not the only option. The solution presented here could be adapted to use flat files or some other shared resource external to the cluster.

Our external resource, the database in this case, will act as a mediator between competing machines in the cluster. We will create a table in the database that tracks scheduled tasks and their status. When a machine in the cluster wants to run one of the scheduled tasks, it first checks the status of that task in the database to see if some other machine is already running that task. If no other machine is running the task, the status of the task will be updated and that machine will run the task.

Another way to understand this solution is to think in terms of a concurrent method running on a single machine. Seen in these terms, it becomes clear that the best way to keep multiple threads from running the task at the same time is to use some sort of semaphore. If this were a single method on one machine, we could easily do this by creating a synchronized block around the code we wanted to protect. When a thread attempts to enter the synchronized block, it first has to acquire the lock; if it fails to get the lock, it can't run. In our distributed system, the database plays the role of that lock.
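To make the analogy concrete, here is a minimal single-JVM sketch (not part of the original listings) of the synchronized-block version described above:

// Single-JVM analog of the clustered lock: the synchronized block plays the
// role that the Tasks row in the shared database plays across machines.
public class SingleJvmReportJob {

    private static final Object REPORT_LOCK = new Object();

    public void run() {
        synchronized (REPORT_LOCK) {
            // Only one thread can be inside this block at a time. In the
            // clustered version a competing caller skips the work instead
            // of waiting, but the mutual exclusion is the same idea.
            System.out.println("Creating report to be emailed...");
        }
    }
}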

We will call our database table "Tasks" and it will have three columns. The first column will be the name of the task, the second the status of the task, and the third the date and time that the task last changed status. The generic SQL script to produce this table is shown below.

CREATE TABLE Tasks (
    TaskName varchar(50) NOT NULL,
    Status varchar(25) NOT NULL,
    StatusTime datetime,
    PRIMARY KEY (TaskName)
);

Now that we have created the database table that will serve as our mediator, we can create the class that accesses this table to determine whether a particular instance of a Task may execute. We'll call this class TaskMonitor. (The source code for this article can be downloaded from www.sys-con.com/java/sourcec.cfm.) The class exposes two public methods: public static boolean acquireLock(String taskName) and public static void releaseLock(String taskName). Before a Task runs, it calls the acquireLock method of the TaskMonitor. If this method returns true, it's safe for the Task to run. If it returns false, it's not safe, because some other instance of this Task in the cluster is already executing. The key to understanding the TaskMonitor class is to understand the ACQUIRE_LOCK SQL query on lines 5-7.

The query must determine whether the Task in question, identified by the TaskName field, is currently Idle and, if so, change its Status to Active. The crucial aspect is that this must happen atomically, that is, as one single step. That's why we use a single update statement rather than a select statement to check whether the Task is Idle followed by an update to change its Status. If we issued the select first, other machines in the cluster could run the same select before our update executed; they would all see the Idle status and multiple copies of the Task would run. By performing the entire check and change in one update statement, we take advantage of the exclusive row locking the database performs automatically whenever an update statement is executed.

Now that we understand how the ACQUIRE_LOCK query works, the rest of the acquireLock method of the TaskMonitor is easy to follow. On line 23 the query is executed and the results are examined. The executeUpdate method returns the number of rows that were affected by the query. When the ACQUIRE_LOCK query successfully changes the Task from Idle to Active (as will be the case when this particular query is the first one in the cluster to run), one row will have been affected and the lockAcquired flag will be set to true. Otherwise, no rows will be affected and the lockAcquired flag will remain false.

The releaseLock method of TaskMonitor is meant to be called when a Task has finished executing. This method simply changes the status of the Task back to Idle. Both the releaseLock and the acquireLock methods also update the StatusTime field with the current date and time for record-keeping purposes.
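Since the downloadable source is no longer available, here is a minimal sketch of what a TaskMonitor along these lines might look like. The table and column names come from the article; the JDBC plumbing, the placeholder connection URL, and the executeUpdate helper are assumptions for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class TaskMonitor {

    // Atomically flips a task from Idle to Active; the WHERE clause is the lock test.
    private static final String ACQUIRE_LOCK =
        "UPDATE Tasks SET Status = 'Active', StatusTime = ? " +
        "WHERE TaskName = ? AND Status = 'Idle'";

    // Returns a task to Idle so another machine may run it next time.
    private static final String RELEASE_LOCK =
        "UPDATE Tasks SET Status = 'Idle', StatusTime = ? WHERE TaskName = ?";

    public static boolean acquireLock(String taskName) {
        // Only one machine in the cluster can see an affected row count of 1.
        return executeUpdate(ACQUIRE_LOCK, taskName) == 1;
    }

    public static void releaseLock(String taskName) {
        executeUpdate(RELEASE_LOCK, taskName);
    }

    private static int executeUpdate(String sql, String taskName) {
        try (Connection con = getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setString(2, taskName);
            return ps.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
            return 0; // if the database is unreachable, assume the lock was not acquired
        }
    }

    private static Connection getConnection() throws SQLException {
        // Placeholder direct MySQL connection, as in the article; the URL and
        // credentials are illustrative. See the pooled alternative below.
        return DriverManager.getConnection("jdbc:mysql://localhost/appdb", "user", "password");
    }
}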

One final note on the TaskMonitor class: the getConnection method shown in lines 75-85 should be upgraded before placing this class into production. As written, the method creates a connection to an instance of a MySQL database. A better practice in production would be to retrieve a connection from an existing connection pool.
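For example, a getConnection that borrows from a container-managed pool might look roughly like the sketch below; the JNDI name jdbc/AppDB is illustrative, not from the original listing.

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public final class PooledConnections {

    private PooledConnections() { }

    // Possible replacement for TaskMonitor.getConnection(): borrow a pooled
    // connection from a DataSource configured in the application server.
    public static Connection getConnection() throws SQLException {
        try {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/AppDB");
            return ds.getConnection();
        } catch (NamingException e) {
            throw new SQLException("DataSource lookup failed: " + e.getMessage());
        }
    }
}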

Together the Tasks database table and the TaskMonitor class provide a framework for ensuring that only one instance of a given Task is running at a particular time, no matter how many instances of the application are running within the clustered system. At this point we're ready to create our report generating Task.

Because we're concerned with managing Tasks in a clustered environment, and not with creating reports or using the javax.mail APIs, we'll create a simple Task, called ReportTask, to illustrate the concept. Because we want this Task to execute automatically on a schedule, we need to extend java.util.TimerTask. TimerTask is an abstract class that has one method that we have to implement, public void run(). This is the method where all the Task's work is done. For our simple example, ReportTask, we'll output some text to show that the Task is running. The code for this class is shown below.

1) import java.util.TimerTask;
2) public class ReportTask extends TimerTask {
3)     public void run() {
4)         if (TaskMonitor.acquireLock("ReportTask") == false)
5)             return;
6)         System.out.println("Creating report to be emailed...");
7)         TaskMonitor.releaseLock("ReportTask");
8)     }
9) }

The key thing to note here is that before the ReportTask actually performs its work, printing some text in this case, it first attempts to acquire the lock for this Task by making the call to acquireLock on line 4. If it fails to acquire the lock, it simply returns without performing its work. However, if it does successfully acquire the lock, then it's free to perform its work and it goes ahead and prints out its message on line 6. Once the Task is complete, it's vital that the lock be released. This is accomplished by calling releaseLock on line 7. If the lock is never released, this Task will never run again on any machine in the cluster. Ensuring that the lock is properly released is clearly not an issue with this simple example; however, in more complex tasks it can be tricky. Consider a Task where several different error conditions could cause the Task to terminate before running to completion. There are now potentially several places where the lock will have to be released.
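A common way to guarantee the release on every exit path, including exceptions, is to wrap the work in a try/finally block. The following variant of ReportTask is a sketch, not part of the original listing:

import java.util.TimerTask;

public class SafeReportTask extends TimerTask {
    public void run() {
        if (TaskMonitor.acquireLock("ReportTask") == false) {
            return; // another machine in the cluster is already running this task
        }
        try {
            System.out.println("Creating report to be emailed...");
            // ...report generation and mailing would go here...
        } finally {
            // Executed no matter how the try block exits, so a failed run
            // cannot leave the task stuck in the Active state.
            TaskMonitor.releaseLock("ReportTask");
        }
    }
}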

At this point, you've probably noticed a serious problem with our Task: we never populated the Tasks table with any tasks. As things stand, our ReportTask will never be able to acquire a lock and will never run. To rectify this we insert the ReportTask information into the Tasks table with the following SQL script; this step needs to be repeated for every Task that's going to be managed in this way:

insert into Tasks values ('ReportTask', 'Idle', null);

We've nearly finished setting up our system for managing clustered tasks. So far we've created an external resource (the Tasks table), the TaskMonitor class that mediates access to it, and a TimerTask called ReportTask that will run in our cluster. All that remains is to create a Timer for running our ReportTask. Because we want to start the Timer as soon as the application starts, we'll create a servlet called StartupServlet that does the work of creating our Timer. We ensure that StartupServlet is loaded immediately by adding the following lines to web.xml:

<servlet>
    <servlet-name>StartupServlet</servlet-name>
    <display-name>StartupServlet</display-name>
    <description>Used to create the Timers</description>
    <servlet-class>StartupServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>

As our simple StartupServlet is not designed to handle requests, it doesn't need to override any method other than init(). When we create the Timer for running our ReportTask, it's important that we use one of the overloaded constructors so that the Timer's thread is a daemon thread; the default no-argument constructor creates a non-daemon thread. By making it a daemon thread, we ensure that the Timer runs for as long as our Web application runs and terminates when the application shuts down, rather than keeping the JVM alive. We don't want to try to generate reports if the application has been stopped for some reason.

After calculating how many milliseconds are in a day (we want our ReportTask to run once a day), we schedule the ReportTask to run daily, starting now. On line 12 we place the Timer that we created in the ServletContext. While this is not strictly necessary to keep the ReportTask running, by keeping a reference to the Timer available we are able to check easily on the status of the ReportTimer or cancel it entirely should the need arise.
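Because the original listing isn't reproduced in the article, here is a sketch of how the StartupServlet's init() method might be written; the attribute name "ReportTimer" is illustrative.

import java.util.Timer;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class StartupServlet extends HttpServlet {

    // Milliseconds in one day: 1000 ms * 60 s * 60 min * 24 h.
    private static final long ONE_DAY = 1000L * 60 * 60 * 24;

    public void init() throws ServletException {
        // Passing true makes the Timer's thread a daemon thread.
        Timer reportTimer = new Timer(true);

        // Run the ReportTask immediately, then once a day thereafter.
        reportTimer.schedule(new ReportTask(), 0, ONE_DAY);

        // Keep a reference handy so the timer can be inspected or cancelled later.
        getServletContext().setAttribute("ReportTimer", reportTimer);
    }
}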

With the StartupServlet in place, we now have a basic but workable system for running scheduled Tasks in a clustered environment without having to worry about the same task running on all of the machines in the cluster simultaneously. It's important to note that if this scheme is used as presented and the tasks complete in a very short period of time, you could still see duplicate executions of the same task when the clocks on the machines are not in sync: a machine whose clock runs behind can fire after the first machine has already finished and released the lock, acquire it, and run the task again. While it is possible to extend this approach to address this problem, it's outside the scope of this article. With a little bit of effort, this system can also be extended to allow for such things as programmatic modification of the running tasks, robust error handling, and recovery of frozen tasks.

More Stories By Clark D. Richey Jr.

Clark is a principal consultant with the RABA Technologies RiSC group for advanced research and development. In his spare time he teaches the Java platform to students at Loyola College, where, as an associate professor, he shares his experiences with much enthusiasm. Clark is the founder of both JUGaccino, a Maryland-based JUG, and the StopLight and PermissionSniffer open source projects. He is also involved in implementing highly scalable, highly secure, service-oriented architectures using Jini.
