Scaling Java and JSP Apps with Distributed Caching

Keeping up with the high volume of transactions in JSP applications

Java is the technology of choice for high-end enterprise applications, and the most common applications developers work on are JavaServer Pages web applications, also known as JSP applications. JSP has become one of the two standards for developing high-traffic web applications, the other being Microsoft ASP.NET. As part of the Java platform, JSP has been popular for a long time and has been instrumental in promoting Web technologies for high-traffic applications. Millions of people use JSP applications, and those numbers keep growing.

JSP applications have an architecture that scales very nicely: you handle more users by adding web servers to a load-balanced Web farm. As your transaction load increases, you just keep adding servers, and the farm handles more transactions and more concurrent users.

However, all good things come to an end, and in this case data storage and data access cannot keep up with the ever-higher volume of transactions, so they become the bottleneck in JSP applications. As the saying goes, "a chain is only as strong as its weakest link." However scalable the JSP architecture itself may be, data storage eventually drags it down.

There are two types of data primarily used in JSP applications. One is Servlet Session data. The other is regular application data that comes from the application database, which could be a relational database, a mainframe, or even a Web services call. Both types of data storage become scalability bottlenecks under high transaction loads.

Figure 1: JSP Application Facing Data Storage Bottlenecks

How do you address this issue and remove these scalability bottlenecks? The goal is not only to improve performance, although that is always nice, but to improve scalability, defined here as the ability to maintain good performance even under peak transaction load. With five users, your Web application is probably very fast. With 500,000 users, it will not only slow down but likely choke. With good scalability, your 500,000-user performance stays very close to your five-user performance.

Distributed Cache Eliminates Data Storage Bottlenecks
An in-memory distributed cache is the way to remove these scalability bottlenecks in JSP applications. It lets you cache application data and cut out the expensive database trips that cause the bottlenecks. A distributed cache spans multiple inexpensive cache servers and pools their memory and CPU power into a very scalable architecture. You can keep adding cache servers to the distributed cache cluster as your transaction load increases, which gives you linear scalability for handling transactions in JSP applications.

Figure 2: Distributed Cache Removing Bottlenecks in a JSP Application

As shown in Figure 2, a distributed cache fits efficiently into the JSP application architecture: it provides the needed scalability and takes pressure off the database. Note that unlike a database, which uses persistent storage, a distributed cache uses volatile memory as its store. It therefore ensures data reliability by replicating data across cache servers so that every item is kept on at least two of them; if any one server goes down, no data is lost.

There are two ways to use distributed caching in JSP applications. One is HTTP Session persistence. The other is application data caching, also called object caching. Both improve JSP application scalability in their own way.

Using Distributed Cache for HTTP Session Persistence
Just like any regular Web application, a JSP application uses the HTTP Session to keep track of a user's session across multiple HTTP requests. There are five standard persistence options for HTTP Session:

  1. Memory (single server without replication): This doesn't work in a multi-server, load-balanced Web farm running a JSP application and therefore is not scalable at all.
  2. File system persistence: This has performance and scalability issues because all sessions are persisted on a single file server, and disk-based access is not as fast as in-memory access.
  3. JDBC persistence: This also has serious performance and scalability issues because a database server cannot scale linearly the way a load-balanced Web farm can.
  4. Cookie-based persistence: This is very limiting because the entire session has to be sent to the user's browser and returned to the Web server with the next HTTP request. It consumes a lot of bandwidth and slows down response times.
  5. Clustered session persistence (replicated) by a Servlet Engine: Each Servlet Engine has implemented its own scheme for replicating HTTP Session. These schemes at least support multi-server, load-balanced Web farms with Session replication to ensure no data loss. But the clustering and replication in the leading Servlet engines (Apache Tomcat, JBoss, WebLogic, and WebSphere) are not well optimized for high-transaction environments, so you quickly run into scalability bottlenecks.

As you can see, none of the above options are ideal for a high-transaction multi-server environment. Although clustered session persistence by a Servlet Engine handles a multi-server environment, it still can't cope with the extreme transaction load that your JSP application needs to handle.

The best option is to use a distributed cache for JSP Session persistence. Unlike a Servlet Engine's implementation of Session clustering and replication, a distributed cache scales linearly, so you can keep adding cache servers as your transaction load increases and never run into scalability bottlenecks. In addition, a distributed cache usually offers multiple caching topologies, including an intelligent combination of data partitioning and data replication, so along with scalability you also get reliability through replication.

Depending on the distributed caching vendor you use, you may already have a plug-in HTTP Filter. It automatically intercepts HTTP calls and reads the JSP Session from the distributed cache before your JSP page executes. Then, when the page has finished and is sending its response back to the user, the HTTP Filter takes the JSP Session object and saves it back to the distributed cache. This means you don't have to write any special code for JSP Session persistence; you only make a configuration change.

Just plug in the HTTP filter, make the changes in your configuration files, and your JSP Sessions are automatically persisted in the distributed cache. However, you have to make sure that any object you store in the JSP Session is serializable. Serialization is needed to ship data across process boundaries, and a distributed cache usually runs in its own process, either on the Web server or on a separate dedicated server.
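For example, a session attribute class only needs to implement java.io.Serializable to qualify; the ShoppingCart class below is a hypothetical illustration, not part of any vendor API:

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Any object placed in the JSP Session must be Serializable so the
// HTTP filter can ship it to the distributed cache process.
public class ShoppingCart implements Serializable {
    private static final long serialVersionUID = 1L;
    private List<String> itemIds = new ArrayList<String>();

    public void addItem(String itemId) { itemIds.add(itemId); }
    public List<String> getItems()     { return itemIds; }
}

In a JSP page you store it the usual way, for example session.setAttribute("cart", cart); the filter then handles persistence to the cache transparently.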

Using Distributed Cache for Application Data Caching
Just like a typical Web application, most JSP applications deal with data coming from an application database. This could be a relational database like Oracle, IBM DB2, SQL Server, or MySQL; it could also be a mainframe or a Web service call to cloud-based storage. Either way, the data store typically cannot handle a growing number of transactions: it slows down quickly and can even grind to a halt if you put too much pressure on it.

The second use of a distributed cache is application data caching. By caching application data, you significantly cut down on the expensive database trips that read the same data over and over and overwhelm the database server. This frees up the application database to handle writes more efficiently and serve a larger number of users. Another key benefit is that you can cache transactional (read-write) data in addition to read-only data. Transactional data is data that changes frequently, perhaps as often as every 20 to 30 seconds. It is still worth caching, because even in that short window your application may read it many times; multiply that by the total number of users and transactions, and the overall traffic to the database drops dramatically.

In caching application data, the goal is to cut application database trips by about 70 to 90%, meaning 70 to 90% of the time you never go to the database at all and fetch your data from the distributed cache instead. At a 90% cache hit ratio, for example, an application performing 1,000 reads per second sends only 100 of them to the database.

While you are reducing those expensive database trips, you are also eliminating scalability bottlenecks in your application database. Most often you modify your application source code to make calls to a distributed cache API. The following is an example of how you can use a distributed cache in a JSP application for caching application data.

<%@ page import="com.alachisoft.ncache.web.caching.*" %>
...
<%
String cacheId = "mycache";
Cache _cache = null;

// Initialize the cache object (connects to the named cache cluster)
try {
    _cache = DistCache.initializeCache(cacheId);
} catch (Exception e) {
    application.log("Cache initialization failed", e);
}

// Add key (cache item name) and val (a Serializable object) to the cache
String key = "Customer:1001";   // sample key
String val = "John Doe";        // sample value; must be Serializable
try {
    _cache.add(key, val, null, Cache.NoAbsoluteExpiration,
               Cache.NoSlidingExpiration, CacheItemPriority.Default);
} catch (Exception e) {
    application.log("Cache add failed for key " + key, e);
}

// Get the object stored against a given key
Object obj = null;
try {
    obj = _cache.get(key);
} catch (Exception e) {
    application.log("Cache get failed for key " + key, e);
}
%>

Listing 1: Example of using a Distributed Cache in a JSP application
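Building on Listing 1, the most common usage pattern is cache-aside: check the cache first and go to the database only on a miss. The sketch below assumes the same API as Listing 1, plus a hypothetical Customer class and loadCustomerFromDatabase() helper; exception handling is omitted for brevity:

// Cache-aside read: try the cache first, fall back to the database on a miss
String key = "Customer:" + customerId;            // customerId assumed in scope
Customer customer = (Customer) _cache.get(key);   // assumed to return null on a miss

if (customer == null) {
    // Cache miss: the expensive database trip we want to make only 10-30% of the time
    customer = loadCustomerFromDatabase(customerId);  // hypothetical DAO call
    _cache.add(key, customer, null, Cache.NoAbsoluteExpiration,
               Cache.NoSlidingExpiration, CacheItemPriority.Default);
}
// Subsequent requests for the same key are now served from the cache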

Using Distributed Cache Topologies
Let's now go back to what was said earlier about a distributed cache being highly scalable while intelligently replicating data to ensure reliability. A distributed cache usually provides multiple caching topologies to suit different environments. A caching topology is a combination of a data storage strategy and a client/server connection strategy.

A typical distributed cache would provide the following topologies to you:

  1. Mirrored Cache: This topology consists of two cache servers. One is active and the other is passive. All clients connect to the active server to do their reads and writes. All writes are asynchronously backed up to the passive cache server. If the active cache server goes down at runtime, the passive one becomes active and all clients connect to it automatically. You would use this normally if you only have one dedicated cache server and you use your database server or another server as the passive mirror. This topology handles reads and writes very efficiently but is limited in terms of storage capacity and transaction capacity since it cannot have more than two servers.
  2. Replicated Cache: This topology can have more than two servers. All are active and all contain an entire copy of the cache. Reads are super fast but writes are not as fast because they're made synchronously throughout the cache cluster. Also, adding more servers does not increase storage capacity. This topology is good when you're not making changes to cached data very frequently.
  3. Partitioned Cache: This topology can have more than two servers, all active. The cache is broken down into partitions and each server contains one partition. As you add more servers, you grow both storage capacity and transaction capacity. This topology offers linear scalability but does not provide data reliability, because there is no replication.
  4. Partitioned-Replicated Cache: This topology is similar to the Partitioned Cache except that it also provides data replication at the partition level. Doing this allows it to scale linearly just like the Partitioned Cache while at the same time providing data reliability through replication.
  5. Client Cache (aka Near Cache): This topology works with any of the above four topologies. It is basically a local cache near your application and sits on your Web/application server. However, it's not a standalone cache and is in fact connected to the cache cluster. It gets informed by the cache cluster whenever there is any data change so it can update itself automatically. Client Cache provides further scalability to your applications because you reduce trips even to the cache cluster.

The most popular caching topology is the Partitioned-Replicated Cache. As the name implies, this hybrid topology combines the scalability of a partitioned cache with the reliability of a replicated cache: all data is scaled out across partitions while every piece of it is also copied to a second server.
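To make the partitioning idea concrete, here is a deliberately simplified sketch of how a key might map to a partition. Real products use smarter distribution maps that rebalance when servers join or leave, so treat this only as an illustration:

// Simplified illustration: map a cache key to one of N partitions.
// A production cache uses a distribution map that also rebalances
// partitions when cache servers join or leave the cluster.
static int partitionFor(String key, int partitionCount) {
    // Mask off the sign bit so the result of % is never negative
    return (key.hashCode() & 0x7fffffff) % partitionCount;
}

With three servers, a key like "Customer:1001" always lands on the same partition, so every client knows exactly which server to ask.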

Figure 3: Example of a Partitioned-Replicated Caching Topology

Important Application Data Caching Features
There are several important features a highly efficient distributed cache provides for application data caching. They are:

  • Absolute and sliding expirations
  • Cache dependency for managing relational data in the cache
  • Synchronize cache with a database
  • Read-through and write-through
  • Groups and tags
  • SQL-like Cache Query Language
  • Event Notifications

Absolute and sliding expirations let you specify when individual cache items should expire and be automatically removed from the cache: you give either an absolute date-time or an interval of inactivity as the criterion (see the sketch after this paragraph). Cache dependency is particularly useful for managing data relationships. Most cached data comes from relational databases and therefore carries relationships; keeping track of these in the cache lets the cache manage data integrity and simplifies your application.
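Here is that expiration sketch, reusing the add() overload from Listing 1; the java.util.Date parameter type is an assumption based on that signature, so check your vendor's API:

// Expire this item five minutes from now, regardless of how often it is read
java.util.Date fiveMinutesFromNow =
    new java.util.Date(System.currentTimeMillis() + 5 * 60 * 1000);
_cache.add("Product:42", product, null,   // product assumed in scope
           fiveMinutesFromNow,            // absolute expiration
           Cache.NoSlidingExpiration,     // no sliding expiration
           CacheItemPriority.Default);

A sliding expiration works the same way through the next parameter, except its countdown resets every time the item is accessed.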

Database synchronization also plays a big role in application data caching. The cache keeps a copy of data that lives in the database; if that data changes in the database, it's more effective if the cache can automatically learn about it and synchronize itself, either by removing the stale item from the cache or by reloading a fresh copy from the database.

As for read-through and write-through: sometimes your application reads data from the database directly and caches it, and other times you want the cache to read the data for you, because that simplifies your application code and brings other benefits. For the latter case you need read-through and write-through handlers (a sketch follows below). Groups and tags come into play for grouping multiple cached items in various ways so you can easily locate them. A group lets each item belong to only one group, whereas tags give you a many-to-many grouping of cached items. Both features provide great flexibility for fetching data and keeping track of it in the cache.
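The exact provider interface varies by vendor, so the sketch below is only a hypothetical shape of a read-through handler; it shows the division of labor, with the cache invoking your loader on a miss:

// Hypothetical read-through provider: on a cache miss the cache itself
// calls load(), so application code never queries the database directly.
public interface ReadThroughProvider {
    Object load(String key) throws Exception;
}

public class CustomerReadThrough implements ReadThroughProvider {
    private final CustomerDao customerDao;   // hypothetical DAO

    public CustomerReadThrough(CustomerDao customerDao) {
        this.customerDao = customerDao;
    }

    public Object load(String key) throws Exception {
        String customerId = key.substring("Customer:".length());
        return customerDao.findById(customerId);
    }
}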

The last two major distributed caching features you should seek are SQL-like cache queries and event notifications. A typical cache fetch is based on a key, since every cached item has one. On certain occasions, however, you want to search for items by other criteria. A cache query lets you issue an SQL-like query that searches the cache on object attributes rather than keys.

In the area of event notifications, your application often wants to be notified when some data changes in the cache. An efficient cache provides several event propagation mechanisms. One is a key-based notification, triggered when an individual cached item is updated. The second is a general-purpose event, triggered whenever anything in the cache is updated or removed. The third is a continuous query, triggered whenever an item in a criteria-based data set in the cache is updated or removed. All of these let your applications make full use of the cache.
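Vendor APIs differ here as well; the hypothetical sketch below shows the general shape of a key-based listener, with the registration call left as a vendor-specific comment:

// Hypothetical key-based event listener; the actual listener interface
// and registration call are vendor-specific.
public class PriceChangeListener {
    public void onItemUpdated(String key, Object newValue) {
        // Refresh any local state that depends on this cached item
        System.out.println("Cache item updated: " + key);
    }

    public void onItemRemoved(String key) {
        System.out.println("Cache item removed: " + key);
    }
}
// Registration (vendor-specific), for example:
// _cache.registerKeyListener("Product:42", new PriceChangeListener());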

High Availability of Distributed Cache
A rule of thumb to remember: you're using a distributed cache because you anticipate a high-transaction environment for your application, which usually means your JSP application has a significant impact on your business. You therefore can't afford any unscheduled downtime, and even scheduled downtime should be very short and infrequent.

Therefore, since a distributed cache runs in your data center as part of your JSP application, it must itself provide high availability. One critical aspect of this is that the cache cluster must be self-healing and fully dynamically configurable. Some caches provide a manually fixed cache cluster (your application code creates and manages the cluster). Others use a master/slave architecture in which, if the master node goes down, the slaves either stop working or become read-only. Both architectures are severely limiting and inflexible.

A highly efficient distributed cache uses peer-to-peer cache clustering that corrects itself automatically at runtime: it self-heals when you add or remove cache servers from the cluster or when a cache server crashes. This is a vital characteristic of a good distributed cache.

Conclusion
You should seriously consider incorporating a distributed cache both for application data caching and for session state storage if you are developing a JSP application targeted for a high transaction environment.

One last point: caveat emptor. There are currently a number of free distributed caches available, but remember the old, tried-and-true saying: "there is no free lunch." You may avoid forking over any money up front for a free distributed cache, but in the long run the cost can become exorbitant. If your JSP application is business-critical, you must consider the total cost of ownership, not just the price of the distributed cache or the fact that it's free.

More Stories By Iqbal Khan

Iqbal Khan is the President and Technology Evangelist of Alachisoft. Alachisoft provides NCache, a Java and .NET distributed cache for boosting performance and scalability in enterprise applications. Iqbal received his MS in Computer Science from Indiana University, Bloomington, in 1990. You can reach him at [email protected]
