Scaling Java and JSP Apps with Distributed Caching

Keeping up with the high volume of transactions in JSP applications

Java is the technology of choice for high-end enterprise applications. Among the most common applications developers build are JavaServer Pages web applications, also known as JSP applications. JSP has become one of the two standards for developing high-traffic web applications, the other being Microsoft ASP.NET. As part of the Java platform, JSP has been popular for a long time and has been instrumental in establishing the Web as a platform for high-traffic applications. Millions of people use JSP applications, and those numbers keep growing.

JSP applications have an application-tier architecture that scales very nicely. You can handle more and more users by adding more web servers to a load-balanced Web farm; as the transaction load grows, you just keep adding servers. That way you can handle more transactions and more concurrent users.

However, all good things come to an end: data storage and data access cannot keep up with this ever-higher volume of transactions, so they become a bottleneck in JSP applications. As the saying goes, "A chain is only as strong as its weakest link." While the JSP application tier is very scalable, data storage is not, and it eventually drags the whole application down.

Two types of data are primarily used in JSP applications. One is Servlet Session data. The other is regular application data that comes from the application database, which could be a relational database, a mainframe, or data reached through Web service calls. Both types of data storage become scalability bottlenecks under high transaction loads.

Figure 1: JSP Application Facing Data Storage Bottlenecks

How do you address this issue and remove these scalability bottlenecks? The goal is not just to improve performance, although that is always nice, but to improve scalability. Scalability here is defined as the ability to maintain good performance even under peak transaction load. If you have five users, your Web application is probably very fast. If you have 500,000 users, it will probably not only slow down but actually choke. With good scalability, your 500,000-user performance would be very similar to your five-user performance.

Distributed Cache Eliminates Data Storage Bottlenecks
An in-memory distributed cache is the way to remove these scalability bottlenecks in JSP applications. It lets you cache application data and avoid the expensive database trips that cause the bottlenecks. A distributed cache spans multiple inexpensive cache servers and pools their memory and CPU power to provide a very scalable architecture. It lets you keep adding cache servers to the distributed cache cluster as your transaction load increases, which gives you linear scalability for handling transactions in JSP applications.

Figure 2: Distributed Cache Removing Bottlenecks in a JSP Application

As shown in Figure 2, a distributed cache fits efficiently into the JSP application architecture; it provides the essential scalability and reduces pressure on the database. Note that unlike a database, which uses persistent storage, a distributed cache uses volatile memory as its store. It therefore ensures reliability by replicating data across multiple cache servers so that every item is kept on at least two of them; if any one server goes down, no data is lost.

There are two ways to use distributed caching in JSP applications. One is HTTP Session persistence. The second is application data caching, also called object caching. Each improves JSP application scalability in its own way.

Using Distributed Cache for HTTP Session Persistence
Just like any regular Web application, a JSP application uses HTTP Session to keep track of a user's session across multiple HTTP requests. Servlet engines typically provide five persistence options for HTTP Session. They are:

  1. Memory (single server without replication): This doesn't work in a multi-server load balanced Web farm running a JSP application and therefore is not scalable at all.
  2. File system persistence: This has performance and scalability issues because all sessions are persisted on a single file server, and disk-based access is not as fast as in-memory access.
  3. JDBC persistence: This also has serious performance and scalability issues because a database server is unable to scale linearly whereas a load balanced Web farm can.
  4. Cookie-based persistence: This is very limiting because the entire session has to be sent to the user's browser and then sent back to the Web server with the next HTTP request. It consumes a lot of bandwidth and slows down response time as a result.
  5. Clustered session persistence (replicated) by a Servlet Engine: Each Servlet Engine implements its own scheme for replicating HTTP Session. These schemes at least support multi-server load-balanced Web farms, with Session replication to ensure that no data is lost. However, the clustering and replication in the leading Servlet engines (Apache Tomcat, JBoss, WebLogic, and WebSphere) is not optimized for a high-transaction environment, so you quickly run into scalability bottlenecks.

As you can see, none of the above options are ideal for a high-transaction multi-server environment. Although clustered session persistence by a Servlet Engine handles a multi-server environment, it still can't cope with the extreme transaction load that your JSP application needs to handle.

The best option is to use a distributed cache for JSP Session persistence. The reason is that, unlike the Servlet Engine implementations of Session clustering and replication, a distributed cache scales linearly: you can keep adding cache servers as your transaction load increases, so you never run into scalability bottlenecks. In addition, a distributed cache usually provides various caching topologies, including an intelligent combination of data partitioning and data replication, so along with scalability you also get reliability through replication.

Depending on the distributed caching vendor you use, you may already have a plug-in HTTP filter. It automatically intercepts your HTTP calls and reads the JSP Session from the distributed cache before your JSP page executes. Then, after the JSP page is done and sends its response back to the user, the HTTP filter takes the JSP Session object and saves it back to the distributed cache. This means you don't have to write any special code for JSP Session persistence; you only make a configuration change.

Just plug in the HTTP filter and make changes in your configuration files and your JSP Sessions are automatically persisted in a distributed cache. However, you have to make sure that any object that you store in the JSP Session is serializable. Serialization is needed for shipping data across process boundaries and a distributed cache usually resides in its own process either on the Web server or on a separate dedicated server.
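
To make the filter mechanism concrete, here is a minimal, self-contained sketch of what such a session-persistence filter does. A ConcurrentHashMap stands in for the distributed cache so the example compiles on its own; a vendor's filter would call its cache API at the marked points and would serialize the attribute values, which is exactly why every session attribute must implement java.io.Serializable.

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import java.io.IOException;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachedSessionFilter implements Filter {

    // Stand-in for the distributed cache; a real filter would make
    // cache API calls here instead of using a local map.
    private static final Map<String, Map<String, Object>> cacheStandIn =
            new ConcurrentHashMap<String, Map<String, Object>>();

    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession(true);

        // 1. Before the JSP executes: restore the attributes saved for this
        //    session ID by whichever web server handled the previous request.
        Map<String, Object> saved = cacheStandIn.get(session.getId());
        if (saved != null) {
            for (Map.Entry<String, Object> e : saved.entrySet()) {
                session.setAttribute(e.getKey(), e.getValue());
            }
        }

        chain.doFilter(req, res); // 2. Run the JSP page itself.

        // 3. After the response: snapshot the session attributes back to the
        //    cache so any server in the farm can serve the next request.
        Map<String, Object> snapshot = new HashMap<String, Object>();
        Enumeration<?> names = session.getAttributeNames();
        while (names.hasMoreElements()) {
            String name = (String) names.nextElement();
            snapshot.put(name, session.getAttribute(name));
        }
        cacheStandIn.put(session.getId(), snapshot);
    }
}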

Using Distributed Cache for Application Data Caching
Just like a typical Web application, most JSP applications deal with data that comes from an application database. This database could be a relational database like Oracle, IBM DB2, SQL Server, or MySQL. It could also be a mainframe, or a Web service call to cloud-based storage. Either way, the data store typically cannot handle a growing number of transactions; it slows down and can even grind to a halt if you put too much pressure on it.

The second use of a distributed cache is application data caching. By caching application data, you significantly cut down on the expensive database trips that read the same data over and over and overwhelm the database server. This frees up the application database to handle writes more efficiently and serve a larger number of users. Another key benefit is that you can cache transactional (read-write) data in addition to read-only data. Transactional data is data that changes frequently, perhaps as often as every 20 to 30 seconds. It's still a good idea to cache this type of data because even during that short window your application may read it many times. Multiply that by the total number of users and transactions, and you quickly see that the overall traffic to the database drops dramatically.

In caching application data, the goal is to reduce those application database trips by about 70 to 90%. This means 70 to 90% of the time you should not even be going to the database. Instead, you should just be getting your data from the distributed cache.

While you are reducing those expensive database trips, you are also eliminating scalability bottlenecks in your application database. Most often you modify your application source code to make calls to a distributed cache API. The following is an example of how you can use a distributed cache in a JSP application for caching application data.

<%@page import="com.alachisoft.ncache.web.caching.*" %>
...
<%
    String cacheId = "mycache";
    Cache _cache = null;

    // Initialize the cache handle (connects to the named cache cluster)
    try {
        _cache = DistCache.initializeCache(cacheId);
    } catch (Exception e) {
        // handle or log the initialization failure
    }

    String key = "item:1";      // cache item name (illustrative)
    Object val = "some value";  // any serializable object (illustrative)
    Object obj = null;

    // Add the key/value pair to the cache with no expiration
    try {
        _cache.add(key, val, null, Cache.NoAbsoluteExpiration,
                   Cache.NoSlidingExpiration, CacheItemPriority.Default);
    } catch (Exception e) {
        // handle or log the add failure
    }

    // Fetch the object back from the cache by its key
    try {
        obj = _cache.get(key);
    } catch (Exception e) {
        // handle or log the fetch failure
    }
%>

Listing 1: Example of using a Distributed Cache in a JSP application
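
Listing 1 shows the raw initialize, add, and get calls. In practice, application data caching usually follows the cache-aside pattern: check the cache first and go to the database only on a miss. The fragment below combines the same calls from Listing 1 into that pattern; the Customer class and loadCustomerFromDb helper are hypothetical stand-ins for your own data-access code.

// Cache-aside read: try the cache first, fall back to the database on a miss.
Customer customer = null;
try {
    customer = (Customer) _cache.get(key);
} catch (Exception e) {
    // treat a cache error as a cache miss and fall through to the database
}

if (customer == null) {
    customer = loadCustomerFromDb(key); // hypothetical database helper

    try {
        // Cache the freshly loaded object so subsequent reads skip the database.
        _cache.add(key, customer, null, Cache.NoAbsoluteExpiration,
                   Cache.NoSlidingExpiration, CacheItemPriority.Default);
    } catch (Exception e) {
        // a failed insert only means another cache miss on the next read
    }
}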

Using Distributed Cache Topologies
Let's now return to the earlier point that a distributed cache is highly scalable while also providing intelligent data replication for reliability. A distributed cache usually offers multiple caching topologies to fit your environment. A caching topology consists of a data storage strategy and a client/server connection strategy.

A typical distributed cache would provide the following topologies to you:

  1. Mirrored Cache: This topology consists of two cache servers. One is active and the other is passive. All clients connect to the active server to do their reads and writes. All writes are asynchronously backed up to the passive cache server. If the active cache server goes down at runtime, the passive one becomes active and all clients connect to it automatically. You would use this normally if you only have one dedicated cache server and you use your database server or another server as the passive mirror. This topology handles reads and writes very efficiently but is limited in terms of storage capacity and transaction capacity since it cannot have more than two servers.
  2. Replicated Cache: This topology can have more than two servers. All are active and all contain an entire copy of the cache. Reads are super fast but writes are not as fast because they're made synchronously throughout the cache cluster. Also, adding more servers does not increase storage capacity. This topology is good when you're not making changes to cached data very frequently.
  3. Partitioned Cache: This topology can have more than two servers, all of them active. The cache is broken into partitions and each server contains one partition. As you add more servers, you grow both storage capacity and transaction capacity. This topology offers linear scalability but doesn't provide data reliability because there is no replication of data.
  4. Partitioned-Replicated Cache: This topology is similar to the Partitioned Cache except that it also provides data replication at the partition level. Doing this allows it to scale linearly just like the Partitioned Cache while at the same time providing data reliability through replication.
  5. Client Cache (aka Near Cache): This topology works with any of the above four topologies. It is basically a local cache near your application and sits on your Web/application server. However, it's not a standalone cache and is in fact connected to the cache cluster. It gets informed by the cache cluster whenever there is any data change so it can update itself automatically. Client Cache provides further scalability to your applications because you reduce trips even to the cache cluster.

The most popular caching topology is the Partitioned-Replicated Cache. As the name implies, this hybrid topology gives you the scalability of a partitioned cache and, at the same time, the reliability of a replicated cache, because every piece of data is copied to two different servers.

Figure 3: Example of a Partitioned-Replicated Caching Topology

Important Application Data Caching Features
A highly efficient distributed cache provides several important features for application data caching. They are:

  • Absolute and sliding expirations
  • Cache dependency for managing relational data in the cache
  • Cache synchronization with a database
  • Read-through and write-through
  • Groups and tags
  • SQL-like Cache Query Language
  • Event Notifications

Absolute and sliding expirations allow you to specify when individual cache items should expire and be automatically removed from the cache. You can specify either an absolute date-time or an interval of inactivity as the criterion (illustrated in the sketch below). Cache dependency is particularly useful for managing data relationships: the majority of cached data comes from relational databases and hence carries relationships with it. When you keep such data in the cache, you can rely on the cache to manage data integrity and thereby simplify your application.
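
To make the two expiration types concrete, here is a minimal sketch that reuses the _cache handle and add signature from Listing 1. It assumes the absolute-expiration slot accepts a java.util.Date, as the NoAbsoluteExpiration placeholder suggests; the exact types vary by vendor, so treat this as illustrative rather than definitive.

import java.util.Calendar;
import java.util.Date;
...
// Absolute expiration: evict the item at a fixed point in time
// (here, one minute from now), no matter how often it is read.
Calendar cal = Calendar.getInstance();
cal.add(Calendar.MINUTE, 1);
Date expireAt = cal.getTime();

_cache.add(key, val, null, expireAt,
           Cache.NoSlidingExpiration, CacheItemPriority.Default);

// A sliding expiration would instead pass an inactivity interval in the
// fifth argument so that every read resets the eviction clock; that
// interval's type is vendor-specific, so it is not shown here.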

Database synchronization also plays a big role in application data caching. The cache keeps a copy of data that lives in the database, so if that data changes in the database, the cache should ideally learn about the change and synchronize itself automatically. It can do that either by removing the stale item from the cache or by reloading a fresh copy from the database.

As for read-through and write-through: at times your application reads data directly from the database and caches it, but at other times you want the cache to read the data for you, because this simplifies your application code and provides other benefits. For the latter case you need read-through and write-through handlers. Groups and tags come into play for grouping multiple cached items in various ways so that you can easily locate them. A group allows each item to belong to only one group, whereas tags provide a many-to-many grouping of cached items. Both features give you great flexibility in fetching data and keeping track of it in the cache.

The last two major distributed caching features you should seek are SQL-like cache queries and event notifications. A typical cache fetch is based on a key, since every cached item has one. On certain occasions, however, you want to search for items based on other criteria. A cache query lets you issue an SQL-like query that searches the cache on object attributes rather than the key.

In the area of event notifications, your application often wants to be notified when some data changes in the cache. An efficient cache provides various event propagation mechanisms. One is key-based event notification, which is triggered by an individual cached item update. Second is a general-purpose event triggered whenever anything in the cache is updated or removed. Third is a continuous query that is triggered whenever an item in a criteria-based data set in the cache is updated or removed. All of these allow your applications to make full use of the cache.

High Availability of Distributed Cache
A rule of thumb to remember: you're using a distributed cache because you anticipate a high-transaction environment, which usually means your JSP application matters a great deal to your business. Therefore, you can't afford any unscheduled downtime, and even scheduled downtime should be very short and very infrequent.

Since a distributed cache runs in your data center as part of your JSP application, it must itself provide high availability. One critical aspect of this is that the cache cluster must be self-healing and dynamically configurable at runtime. Some caches provide a manually fixed cache cluster in which your application code creates and manages the cluster. Others use a master/slave architecture in which, if the master node goes down, all the slaves either stop working or become read-only. Both architectures are severely limiting and inflexible.

A highly efficient distributed cache uses peer-to-peer cache clustering that adjusts itself automatically at runtime, self-healing when you add or remove cache servers or when a cache server crashes. This is a highly important characteristic of a good distributed cache.

Conclusion
You should seriously consider incorporating a distributed cache both for application data caching and for session state storage if you are developing a JSP application targeted for a high transaction environment.

One last point - Caveat Emptor. There are a number of free distributed caches available, but remember the old saying: "there is no free lunch." You might avoid paying anything up front for a free distributed cache, yet in the long run the cost can become exorbitant. If your JSP application is business-critical, you must consider the total cost of ownership, not just the sticker price of a distributed cache or the fact that it's free.

More Stories By Iqbal Khan

Iqbal Khan is the President and Technology Evangelist of Alachisoft. Alachisoft provides NCache, a Java and .NET distributed cache for boosting performance and scalability in enterprise applications. Iqbal received his MS in Computer Science from Indiana University, Bloomington, in 1990. You can reach him at [email protected]
