Clustered Timers

For Robust Scalable Systems

Often, when someone asks how we are going to scale the Web application we're about to develop, we look at them, smile, and say, "Not a problem - we'll just cluster the application servers." Clustering our application across multiple servers provides us with the ability to handle large volumes of traffic and to scale systems by adding additional servers to the cluster.

In addition to providing scalability, application clusters make the system more robust by allowing for automatic failover when a server fails. This way, when one server goes down, the application continues to run, albeit with slightly decreased performance. While the current generation of application servers makes it relatively pain-free to create a cluster, there are still several significant, if often overlooked, design issues that must be taken into account once the system is clustered.

When we run our Web application in a cluster, the exact same software runs on each machine in the cluster. While this eliminates a host of configuration management difficulties, it creates other problems. Although we don't have to write different code for every machine in the cluster, there are times when this uniformity actually makes things more complex; running scheduled tasks is typically one of those areas. Scheduled tasks are used to execute procedures that need to run at fixed times or at fixed intervals. Typical examples of scheduled tasks within a Web application are report-generation tasks and tasks that send data to external systems that are only available within a certain time frame.

To understand why clustering affects how we design our application to handle scheduled tasks, let's consider a generic e-commerce Web application. To allow management to analyze sales trends, profits, inventory, etc., the system has been set up to periodically compile a set of reports and e-mail them to management. Clearly, management doesn't want to receive multiple e-mails containing the same reports, yet this is what we will get if we simply write a scheduled task and then cluster our system. When the appointed time to run the report comes up, all machines in the cluster will generate the same report and send it to management. This can be seen visually in Figure 1.

Perhaps the most straightforward way to solve this problem is to package the code that runs the scheduled tasks into a separate JAR file within the EAR file that contains the WAR file for the Web application. The EAR file is deployed to all the servers in the cluster, but the JAR containing the scheduled tasks is configured to run on only one of them. This solves the problem by preventing the scheduled tasks from ever running on multiple machines. However, this solution has significant downsides. First, you have created additional configuration management problems: you need to carefully track which server is set up to run the scheduled tasks, and the exact deployment procedures that were used, so that when additional servers are added to the cluster the application is properly deployed on them.

The second problem is that you have effectively taken the scheduled tasks out of the cluster. Now, if the machine that is set up with the scheduled tasks fails, or its connection to the network fails, there is no backup or failover system. The tasks won't run. The remainder of this article investigates solutions to this problem that allow the scheduled tasks to remain part of the cluster and don't involve additional configuration management.

To stop every system in the cluster from performing the same scheduled task, report generation in this case, we have to utilize something outside of the application server cluster to track the state of our scheduled task. A perfect candidate for maintaining the state of our scheduled tasks is a shared database, and since nearly all applications already have access to a shared database, this is the resource we will use to solve this problem in our example (see Figure 2). It's worth mentioning that while a shared database is an ideal resource for solving this problem, it's not the only option. The solution presented here could be adapted to use flat files or some other shared resource external to the cluster.

Our external resource, the database in this case, will act as a mediator between competing machines in the cluster. We will create a table in the database that tracks scheduled tasks and their status. When a machine in the cluster wants to run one of the scheduled tasks, it first checks the status of that task in the database to see if some other machine is already running that task. If no other machine is running the task, the status of the task will be updated and that machine will run the task.

Another way of thinking about this solution is in terms of concurrent threads running on a single machine. Seen in those terms, it becomes clear that the best way to keep multiple threads from running the task at the same time is to use some sort of semaphore. If this were a single method on one machine, we could easily do this by putting a synchronized block around the code we wanted to protect. When a thread attempts to enter the synchronized block, it has to acquire the lock; if another thread already holds the lock, it can't run the protected code. In our distributed system, we use the database as the lock.
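To make the analogy concrete, here is a hypothetical single-JVM analogue of the lock we are about to build (it is not part of the downloadable source). Note that, unlike entering a synchronized block, acquireLock returns false rather than waiting, which is the "try, and skip this run if you lose" behavior the clustered version needs:

public class InMemoryTaskMonitor {

    // Single-machine stand-in for the database lock: a flag guarded by
    // the class monitor. Only one thread at a time can flip it.
    private static boolean active = false;

    public static synchronized boolean acquireLock() {
        if (active) {
            return false;   // another thread is already running the task
        }
        active = true;
        return true;
    }

    public static synchronized void releaseLock() {
        active = false;
    }
}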

We will call our database table "Tasks" and it will have three columns. The first column will be the name of the task, the second the status of the task, and the third the date and time that the task last changed status. The generic SQL script to produce this table is shown below.

CREATE TABLE `Tasks` (
    `TaskName` varchar(50) NOT NULL,
    `Status` varchar(25) NOT NULL,
    `StatusTime` datetime,
    PRIMARY KEY (`TaskName`)
);

Now that we have created the database table that will serve as our mediator, we can create the class that accesses this table to determine whether a particular instance of a Task may execute. We'll call this class TaskMonitor. (The source code for this article can be downloaded from www.sys-con.com/java/sourcec.cfm.) The class exposes two public methods: public static boolean acquireLock(String taskName) and public static void releaseLock(String taskName). Before a Task runs, it needs to call the acquireLock method of the TaskMonitor. If this method returns true, it's safe for the Task to run. If it returns false, it's not safe for the Task to run because some other instance of this Task in the cluster is already executing. The key to understanding the TaskMonitor class is to understand the ACQUIRE_LOCK SQL query on lines 5-7.

The query needs to determine whether the Task in question, identified by the TaskName field, is currently Idle and, if so, change its Status to Active. The crucial point is that this must happen atomically, that is, as a single step. That's why we use a single update statement instead of a select statement to check that the Task is currently Idle followed by an update to change its Status. If we used a select first, the same select could be run by other machines in the cluster before our update executed; they would all see the Idle state and multiple copies of the Task would run. By performing the entire check-and-change in one update statement, we take advantage of the exclusive row locking the database performs automatically whenever an update statement is executed.
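The downloadable source isn't reproduced in this article, but based on the description above, ACQUIRE_LOCK is essentially a conditional update along these lines (the exact text and the use of parameter markers for the task name and timestamp are assumptions):

UPDATE Tasks
   SET Status = 'Active', StatusTime = ?
 WHERE TaskName = ?
   AND Status = 'Idle';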

Now that we understand how the ACQUIRE_LOCK query works, the rest of the acquireLock method of the TaskMonitor is easy to follow. On line 23 the query is executed and the results are examined. The executeUpdate method returns the number of rows that were affected by the query. When the ACQUIRE_LOCK query successfully changes the Task from Idle to Active (as will be the case when this particular query is the first one in the cluster to run), one row will have been affected and the lockAcquired flag will be set to true. Otherwise, no rows will be affected and the lockAcquired flag will remain false.

The releaseLock method of TaskMonitor is meant to be called when a Task has finished executing. This method simply changes the status of the Task back to Idle. Both the releaseLock and the acquireLock methods also update the StatusTime field with the current date and time for record-keeping purposes.
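Because the downloadable source isn't reproduced here, the following is a compressed sketch of what TaskMonitor might look like. The method signatures, the table and column names, and the executeUpdate row-count check come from the text above; the JDBC plumbing, exception handling, and connection details are assumptions, and the line numbering will not match the downloadable version:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class TaskMonitor {

    // Atomically flips a task from Idle to Active; affects exactly one row
    // only if no other machine in the cluster has already claimed the task.
    private static final String ACQUIRE_LOCK =
        "UPDATE Tasks SET Status = 'Active', StatusTime = ? " +
        "WHERE TaskName = ? AND Status = 'Idle'";

    private static final String RELEASE_LOCK =
        "UPDATE Tasks SET Status = 'Idle', StatusTime = ? WHERE TaskName = ?";

    public static boolean acquireLock(String taskName) {
        boolean lockAcquired = false;
        try (Connection con = getConnection();
             PreparedStatement ps = con.prepareStatement(ACQUIRE_LOCK)) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setString(2, taskName);
            // executeUpdate returns the number of rows changed: 1 means this
            // machine won the race and may run the task, 0 means another did.
            lockAcquired = (ps.executeUpdate() == 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return lockAcquired;
    }

    public static void releaseLock(String taskName) {
        try (Connection con = getConnection();
             PreparedStatement ps = con.prepareStatement(RELEASE_LOCK)) {
            ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
            ps.setString(2, taskName);
            ps.executeUpdate();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Stand-in for the article's getConnection, which opens a direct
    // connection to a MySQL instance; the URL and credentials here are
    // hypothetical.
    private static Connection getConnection() throws Exception {
        return DriverManager.getConnection(
            "jdbc:mysql://localhost/appdb", "user", "password");
    }
}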

One final note on the TaskMonitor class: the getConnection method shown in lines 75-85 should be upgraded before placing this class into production. As written, the method creates a connection to an instance of a MySQL database. A better practice in production would be to retrieve a connection from an existing connection pool.
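For example, if the container exposes a pooled DataSource through JNDI, getConnection could be rewritten to borrow connections from the pool; the JNDI name below is hypothetical:

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Drop-in replacement for TaskMonitor.getConnection: look up a
// container-managed connection pool instead of opening a new MySQL
// connection on every call.
private static Connection getConnection() throws Exception {
    InitialContext ctx = new InitialContext();
    DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/TasksDB");
    return ds.getConnection();
}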

Together the Tasks database table and the TaskMonitor class provide a framework for ensuring that only one instance of a given Task is running at a particular time, no matter how many instances of the application are running within the clustered system. At this point we're ready to create our report generating Task.

Because we're concerned with managing Tasks in a clustered environment, and not with creating reports or using the javax.mail APIs, we'll create a simple Task, called ReportTask, to illustrate the concept. Because we want this Task to execute automatically on a schedule, we need to extend java.util.TimerTask. TimerTask is an abstract class that has one method that we have to implement, public void run(). This is the method where all the Task's work is done. For our simple example, ReportTask, we'll output some text to show that the Task is running. The code for this class is shown below.

1) import java.util.TimerTask;
2) public class ReportTask extends TimerTask {
3)     public void run() {
4)         if (TaskMonitor.acquireLock("ReportTask") == false)
5)             return;
6)         System.out.println("Creating report to be emailed...");
7)         TaskMonitor.releaseLock("ReportTask");
8)     }
9) }

The key thing to note here is that before the ReportTask actually performs its work, printing some text in this case, it first attempts to acquire the lock for this Task by making the call to acquireLock on line 4. If it fails to acquire the lock, it simply returns without performing its work. However, if it does successfully acquire the lock, then it's free to perform its work and it goes ahead and prints out its message on line 6. Once the Task is complete, it's vital that the lock be released. This is accomplished by calling releaseLock on line 7. If the lock is never released, this Task will never run again on any machine in the cluster. Ensuring that the lock is properly released is clearly not an issue with this simple example; however, in more complex tasks it can be tricky. Consider a Task where several different error conditions could cause the Task to terminate before running to completion. There are now potentially several places where the lock will have to be released.
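One common remedy, shown here as a variation on ReportTask rather than as the article's published code, is to do the work inside a try block and release the lock in a finally block, so that every exit path, normal or exceptional, releases it:

import java.util.TimerTask;

public class ReportTask extends TimerTask {
    public void run() {
        // Skip this run entirely if another machine already holds the lock.
        if (!TaskMonitor.acquireLock("ReportTask")) {
            return;
        }
        try {
            System.out.println("Creating report to be emailed...");
        } finally {
            // Runs on every exit path, including exceptions, so the task is
            // never left permanently marked Active in the Tasks table.
            TaskMonitor.releaseLock("ReportTask");
        }
    }
}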

At this point, you've probably noticed a serious problem with our Task: we never populated the Tasks table with any tasks. As things stand, our ReportTask will never be able to acquire a lock and so will never run. To rectify this we need to insert the ReportTask information into the Tasks table using the following SQL script, a step that must be repeated for every Task that's going to be managed this way:

insert into Tasks values ('ReportTask', 'Idle', null);

We've nearly finished setting up our system for managing clustered tasks. So far we've created an external resource (the Tasks table), the TaskMonitor class that mediates access to it, and a TimerTask called ReportTask that will run in our cluster. All that remains is to create a Timer for running our ReportTask. Because we want the Timer for our task to start as soon as the application starts, we'll create a servlet called StartupServlet that does the work of creating our Timer. We ensure that StartupServlet is loaded immediately by adding the following lines to web.xml:

<servlet>
    <servlet-name>StartupServlet</servlet-name>
    <display-name>StartupServlet</display-name>
    <description>Used to create the Timers</description>
    <servlet-class>StartupServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>

As our simple StartupServlet is not designed to handle requests, it doesn't need to override any method other than init(). When we create the Timer for running our ReportTask, it's important to use the overloaded constructor that creates the Timer's thread as a daemon thread; the default no-argument constructor does not. By making it a daemon thread, we ensure that the Timer runs for as long as our Web application runs and terminates when the application terminates. We don't want to try to generate reports if the application has been stopped for some reason.

After calculating how many milliseconds are in a day (we want our ReportTask to run once a day), we schedule the ReportTask to run daily, starting now. On line 12 we place the Timer that we created in the ServletContext. While this is not strictly necessary to keep the ReportTask running, by keeping a reference to the Timer available we are able to check easily on the status of the ReportTimer or cancel it entirely should the need arise.
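The StartupServlet source also isn't reproduced in the article; a minimal sketch consistent with the description above might look like the following (the ServletContext attribute name is an assumption, and the line numbering will not match the downloadable version):

import java.util.Timer;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class StartupServlet extends HttpServlet {

    public void init() throws ServletException {
        // Create the Timer as a daemon thread so it does not outlive the
        // application.
        Timer reportTimer = new Timer(true);

        // One day in milliseconds; run the ReportTask daily, starting now.
        long oneDay = 1000L * 60 * 60 * 24;
        reportTimer.schedule(new ReportTask(), 0, oneDay);

        // Keep a reference in the ServletContext so the timer can be
        // inspected or cancelled later.
        getServletContext().setAttribute("ReportTimer", reportTimer);
    }
}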

With the StartupServlet in place, we now have a basic but workable system for running scheduled Tasks in a clustered environment, without having to worry about the same task running on all of the machines in the cluster simultaneously. It's important to note that if this scheme is used exactly as presented and the tasks complete in a very short period of time, you could still see duplicate executions of the same task when the clocks on the machines are not in sync with each other. While it is possible to extend this approach to address that problem, doing so is outside the scope of this article. With a little effort, the system can also be extended to allow for such things as programmatic modification of the running tasks, robust error handling, and recovery of frozen tasks.

More Stories By Clark D. Richey Jr.

Clark is a principal consultant with the RABA Technologies RiSC group for advanced research and development. In his spare time, he teaches the Java platform to students at Loyola College, where, as an associate professor, he shares his experiences with much enthusiasm. Clark is the founder of both JUGaccino, a Maryland-based JUG, and the StopLight and PermissionSniffer open source projects. He is also involved in implementing highly scalable, highly secure, service-oriented architectures using Jini.

Most Recent Comments
cbellonch 05/12/05 11:04:17 AM EDT

Hi,
Thanks for the article, it would be useful for our project. We've tried to download the code in:
· www.sys-con.com/java/sourcec.cfm
· http://www.sys-con.com/java/archives3/0903/Richey0903.zip

without success, are the links correct?

tbb 03/10/04 08:44:10 AM EST

I believe a class that implements ServletContextListener would be a better way to solve this problem than a servlet that loads on startup. (If your servlet container implements the servlet 2.3+ spec).
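For reference, a sketch of that listener-based approach (the class name is hypothetical, and it would be registered with a <listener> element in web.xml instead of a load-on-startup servlet) might look like this:

import java.util.Timer;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class TimerContextListener implements ServletContextListener {

    private Timer reportTimer;

    public void contextInitialized(ServletContextEvent sce) {
        // Same setup as StartupServlet, run when the web application starts.
        reportTimer = new Timer(true);
        long oneDay = 1000L * 60 * 60 * 24;
        reportTimer.schedule(new ReportTask(), 0, oneDay);
        sce.getServletContext().setAttribute("ReportTimer", reportTimer);
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // Cancel the timer explicitly when the application is undeployed.
        if (reportTimer != null) {
            reportTimer.cancel();
        }
    }
}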
