Scaling Java and JSP Apps with Distributed Caching

Keeping up with the high volume of transactions in JSP applications

Java is the technology of choice for high-end enterprise applications, and the most common applications developers work on are JavaServer Pages web applications, also known as JSP applications. JSP has become one of the two dominant standards for developing high-traffic web applications, the other being Microsoft ASP.NET. As part of the Java platform, JSP has been popular for a long time and has been instrumental in promoting Web technologies for high-traffic applications. Millions of people use JSP applications, and those numbers keep growing.

The application tier of a JSP application scales very nicely: you can handle more users by adding web servers to a load-balanced Web farm. As your transaction load grows, you simply keep adding servers to the farm, and it handles more transactions and more concurrent users.

However, all good things come to an end; in this case, data storage and data access cannot keep up with the ever-higher volume of transactions, and they become the bottleneck in JSP applications. As the saying goes, "A chain is only as strong as its weakest link": while the JSP architecture itself is very scalable, data storage drags it down and creates a bottleneck.

JSP applications primarily use two types of data. One is Servlet Session data; the other is regular application data that comes from the application database. That database could be a relational database or a mainframe, or the data could come from a Web services call. Both types of data storage become scalability bottlenecks under high transaction loads.

Figure 1: JSP Application Facing Data Storage Bottlenecks

How do you address this issue and remove these scalability bottlenecks? The goal is not merely to improve performance, although that is always welcome, but to improve scalability: the ability to maintain good performance even under peak transaction load. With five users, your Web application is probably very fast. With 500,000 users, it will likely not just slow down but actually choke. With good scalability, your 500,000-user performance is very similar to your five-user performance.

Distributed Cache Eliminates Data Storage Bottlenecks
An in-memory distributed cache is the way to remove these scalability bottlenecks in JSP applications and improve scalability. It lets you cache application data and cut out the expensive database trips that cause the bottlenecks. A distributed cache spans multiple inexpensive cache servers and pools their memory and CPU power to provide a very scalable architecture. It lets you keep adding cache servers to the distributed cache cluster as your transaction load increases. This gives you linear scalability for handling transactions in JSP applications.

Figure 2: Distributed Cache Removing Bottlenecks in a JSP Application

As shown in Figure 2, a distributed cache fits efficiently into the JSP application architecture; it provides the essential scalability and reduces pressure on the database. Note that, unlike a database, which uses persistent storage, a distributed cache uses volatile memory as its store. Therefore, a distributed cache ensures reliability through data replication across cache servers, keeping all data on at least two servers so that no data is lost if any one server goes down.

There are two ways to use distributed caching in JSP applications. The first is HTTP Session persistence. The second is application data caching, also called object caching. Both improve JSP application scalability in their own way.

Using Distributed Cache for HTTP Session Persistence
Just like any regular Web application, a JSP application uses the HTTP Session to keep track of a user's session across multiple HTTP requests. Out of the box, there are five persistence options for the HTTP Session:

  1. Memory (single server without replication): This doesn't work in a multi-server load-balanced Web farm running a JSP application and therefore is not scalable at all.
  2. File system persistence: This has performance and scalability issues because all sessions are persisted on a single file server, and disk-based access is not as fast as in-memory access.
  3. JDBC persistence: This also has serious performance and scalability issues, because a database server cannot scale linearly, whereas a load-balanced Web farm can.
  4. Cookie-based persistence: This is very limiting, because the entire session has to be sent to the user's browser and then returned to the Web server with the next HTTP request. It also consumes a lot of bandwidth and slows down response times.
  5. Clustered session persistence (replicated) by a Servlet Engine: Each Servlet Engine implements its own scheme for replicating the HTTP Session. These schemes at least support multi-server load-balanced Web farms with Session replication so that no data is lost. But the clustering and replication in the leading Servlet engines (Apache Tomcat, JBoss, WebLogic, and WebSphere) are not well optimized for a high-transaction environment, so you quickly run into scalability bottlenecks.

As you can see, none of the above options are ideal for a high-transaction multi-server environment. Although clustered session persistence by a Servlet Engine handles a multi-server environment, it still can't cope with the extreme transaction load that your JSP application needs to handle.

The best option is to use a distributed cache for JSP Session persistence. The reason is that, unlike the Servlet Engine implementations of Session clustering and replication, a distributed cache scales linearly. This lets you keep adding cache servers to the mix as your transaction load increases; as a result, you never run into scalability bottlenecks. In addition, a distributed cache usually provides various caching topologies, including an intelligent combination of data partitioning and data replication, so along with scalability you also get reliability through replication.

Depending on the distributed caching vendor you use, you may already have a plug-in HTTP Filter. It automatically intercepts your HTTP calls and reads the JSP Session from the distributed cache before your JSP page executes. Then, once the JSP page is done and the response is being sent back to the user, the HTTP Filter takes the JSP Session object and saves it back to the distributed cache. This means you don't have to write any special code for JSP Session persistence; you only make a configuration change.

Just plug in the HTTP filter, make the changes in your configuration files, and your JSP Sessions are automatically persisted in the distributed cache. However, you must make sure that every object you store in the JSP Session is serializable. Serialization is needed to ship data across process boundaries, and a distributed cache usually runs in its own process, either on the Web server or on a separate dedicated server.
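To make this concrete, here is a minimal sketch of what such a session-persistence filter might look like. It uses the standard Servlet Filter API; the loadSessionFromCache and saveSessionToCache helpers are hypothetical placeholders for vendor-specific cache calls, and a real vendor-supplied filter would also handle locking, expiration, and failover.

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

// Minimal sketch of a session-persistence filter. The cache helpers below
// are hypothetical; a vendor-supplied filter is far more complete.
public class DistributedSessionFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
        // Typically reads the cache name and server list from init-params.
    }

    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;

        // Load session attributes from the distributed cache before the JSP runs.
        loadSessionFromCache(httpReq.getSession(true));

        chain.doFilter(req, res);  // let the JSP page execute

        // Save the (possibly modified) session back to the distributed cache.
        saveSessionToCache(httpReq.getSession(false));
    }

    public void destroy() { }

    // Hypothetical helper: fetch the serialized attributes for this session
    // ID from the cache and populate the HttpSession.
    private void loadSessionFromCache(HttpSession session) { /* ... */ }

    // Hypothetical helper: serialize the session attributes (they must
    // implement java.io.Serializable) and store them under the session ID.
    private void saveSessionToCache(HttpSession session) { /* ... */ }
}

The filter itself is registered once in web.xml with a filter and filter-mapping entry, which is the configuration change mentioned above.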

Using Distributed Cache for Application Data Caching
Just like a typical Web application, most JSP applications deal with data that comes from an application database. This could be a relational database like Oracle, IBM DB2, SQL Server, or MySQL; a mainframe; or a Web service call to cloud-based storage. Either way, the data store typically cannot handle a growing number of transactions: it quickly slows down, and can even grind to a halt if you put too much pressure on it.

The second use of a distributed cache is application data caching. By caching application data, you dramatically cut down on the expensive database trips that read the same data over and over and overwhelm the database server. This frees the database to handle writes more efficiently and serve a larger number of users. Another key benefit is that you can cache transactional (read-write) data in addition to read-only data. Transactional data is data that changes frequently, even as often as every 20 to 30 seconds. It is still worth caching, because even during that short window your application reads the data many times; multiplied across all users and transactions, the overall traffic to the database drops dramatically.

In caching application data, the goal is to reduce application database trips by about 70 to 90 percent. This means 70 to 90 percent of the time you should not be going to the database at all; you should be getting your data from the distributed cache instead. At a 90 percent cache hit rate, for example, one million reads generate only 100,000 database queries.

While you are reducing those expensive database trips, you are also eliminating scalability bottlenecks in your application database. Most often you modify your application source code to make calls to a distributed cache API. The following is an example of how you can use a distributed cache in a JSP application for caching application data.

<%@page import="com.alachisoft.ncache.web.caching.*" %>
...
<%
    String cacheId = "mycache";
    Cache _cache = null;

    // Initialize the cache object (connects to the named cache)
    try {
        _cache = DistCache.initializeCache(cacheId);
    }
    catch (Exception e) {
        // Handle or log the initialization failure
    }

    // Example key and value to cache; the value can be any serializable object
    String key = "Employee:1000";
    Object val = "John Smith";

    // Add the key/value pair to the cache with no expirations and default priority
    try {
        _cache.add(key, val, null, Cache.NoAbsoluteExpiration,
                   Cache.NoSlidingExpiration, CacheItemPriority.Default);
    }
    catch (Exception e) {
        // Handle or log the add failure
    }

    // Fetch the object back from the cache by its key
    Object obj = null;
    try {
        obj = _cache.get(key);
    }
    catch (Exception e) {
        // Handle or log the fetch failure
    }
%>

Listing 1: Example of using a Distributed Cache in a JSP application
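Listing 1 shows the raw add and get calls in isolation. In practice they are usually combined into a cache-aside read: check the cache first, and go to the database only on a miss. The sketch below reuses the API calls from Listing 1; loadEmployeeFromDatabase is a hypothetical stand-in for your own data-access code.

// Cache-aside read: try the distributed cache first, and fall back to the
// database only on a miss. Reuses the calls shown in Listing 1.
public Object getEmployee(Cache cache, String employeeId) throws Exception {
    String key = "Employee:" + employeeId;

    Object employee = cache.get(key);  // fast path: in-memory lookup
    if (employee == null) {
        // Slow path: one database trip, then cache for subsequent reads.
        employee = loadEmployeeFromDatabase(employeeId);  // hypothetical helper
        cache.add(key, employee, null, Cache.NoAbsoluteExpiration,
                  Cache.NoSlidingExpiration, CacheItemPriority.Default);
    }
    return employee;
}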

Using Distributed Cache Topologies
Let's now go back to what was said earlier about a distributed cache being highly scalable while intelligently providing data replication to ensure reliability. A distributed cache usually offers multiple caching topologies to fit your environment. A caching topology is a strategy for storing data across the cache cluster and for connecting clients to it.

A typical distributed cache would provide the following topologies to you:

  1. Mirrored Cache: This topology consists of two cache servers, one active and one passive. All clients connect to the active server for their reads and writes, and all writes are asynchronously backed up to the passive server. If the active cache server goes down at runtime, the passive one becomes active and all clients connect to it automatically. You would normally use this when you have only one dedicated cache server and use your database server or another machine as the passive mirror. This topology handles reads and writes very efficiently but is limited in both storage and transaction capacity, since it cannot have more than two servers.
  2. Replicated Cache: This topology can have more than two servers. All are active and each contains an entire copy of the cache. Reads are super fast, but writes are slower because they are made synchronously across the cache cluster. Also, adding more servers does not increase storage capacity. This topology is good when you do not change cached data very frequently.
  3. Partitioned Cache: This topology can have more than two servers, all active. The cache is broken into partitions and each server holds one partition. As you add servers, you grow both storage capacity and transaction capacity. This topology offers linear scalability but does not provide data reliability, since there is no replication (a minimal sketch of the key-to-partition mapping follows this list).
  4. Partitioned-Replicated Cache: This topology is similar to the Partitioned Cache except that it also provides data replication at the partition level. This lets it scale linearly just like the Partitioned Cache while also providing data reliability through replication.
  5. Client Cache (aka Near Cache): This topology works with any of the above four topologies. It is a local cache near your application that sits on your Web/application server. It is not a standalone cache, however; it stays connected to the cache cluster and is informed by the cluster whenever data changes, so it can update itself automatically. A Client Cache provides further scalability because it reduces trips even to the cache cluster.
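To make the partitioning idea concrete, here is a vendor-neutral sketch of how a cache client might map a key to a cache server. Real products use distribution maps, consistent hashing, and runtime rebalancing, so treat this only as an illustration of the concept; the PartitionRouter class is hypothetical.

// Vendor-neutral illustration of key-to-partition mapping in a partitioned
// cache. Real products use distribution maps and rebalance at runtime.
public class PartitionRouter {
    private final String[] cacheServers;

    public PartitionRouter(String[] cacheServers) {
        this.cacheServers = cacheServers;
    }

    // Each key deterministically hashes to one partition (one server), so
    // adding servers grows both storage capacity and transaction capacity.
    public String serverForKey(String key) {
        int bucket = (key.hashCode() & 0x7fffffff) % cacheServers.length;
        return cacheServers[bucket];
    }
}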

The most popular caching topology is the Partitioned-Replicated Cache. As the name implies, this hybrid topology combines the scalability of a partitioned cache with the reliability of a replicated cache: the cache scales linearly as you add servers, and all data is copied to two different servers so that none is lost.

Figure 3: Example of a Partitioned-Replicated Caching Topology

Important Application Data Caching Features
A highly efficient distributed cache provides several major features for application data caching. They are:

  • Absolute and sliding expirations
  • Cache dependency for managing relational data in the cache
  • Synchronize cache with a database
  • Read-through and write-through
  • Groups and tags
  • SQL-like Cache Query Language
  • Event Notifications

Absolute and sliding expirations let you specify when individual cache items should expire and be automatically removed from the cache; you specify either an absolute date-time or an interval of inactivity as the criterion. Cache dependency is particularly useful for managing data relationships: the majority of cached data comes from relational databases and hence carries relationships, and by tracking these relationships in the cache you let the cache manage data integrity and simplify your application.
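As an illustration, the add call from Listing 1 can carry a real expiration instead of the No*Expiration constants. The sketch below assumes the overload from Listing 1 accepts a java.util.Date for the absolute-expiration argument; check your vendor's API for the exact types it expects.

// Hedged sketch: expire this item five minutes from now (absolute expiration).
// Assumes the fourth argument of the add overload in Listing 1 accepts a
// java.util.Date; consult your cache vendor's API for the exact types.
java.util.Date fiveMinutesFromNow =
        new java.util.Date(System.currentTimeMillis() + 5 * 60 * 1000);

_cache.add(key, val, null,
           fiveMinutesFromNow,            // absolute expiration
           Cache.NoSlidingExpiration,     // no inactivity-based expiration
           CacheItemPriority.Default);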

Database synchronization also plays a big role in application data caching. The cache keeps a copy of data that lives in the database, so if that data changes in the database, it is far more effective if the cache can automatically learn about the change and synchronize itself, either by removing the item from the cache or by reloading a fresh copy from the database.

As for read-through and write-through: sometimes your application reads data directly from the database and caches it, but at other times you want the cache to read the data for you, because this simplifies your application code and provides other benefits; for the latter case you need read-through and write-through handlers. Groups and tags let you group multiple cached items in various ways so that you can easily locate them: a group allows each item to belong to only one group, whereas tags give you a many-to-many grouping of cached items. Both features provide great flexibility for fetching data and keeping track of it in the cache.

The last two major distributed caching features you should seek are SQL-like cache queries and event notifications. A typical cache fetch is based on a key, since every cached item has a key. On certain occasions, however, you want to search for items based on other criteria. A cache query lets you issue an SQL-like query that searches the cache on object attributes rather than the key.

In the area of event notifications, your application often wants to be notified when data changes in the cache. An efficient cache provides several event propagation mechanisms. One is key-based event notification, triggered when an individual cached item is updated. The second is a general-purpose event triggered whenever anything in the cache is updated or removed. The third is a continuous query, triggered whenever an item in a criteria-based data set in the cache is updated or removed. All of these let your application make full use of the cache.
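Vendor APIs for notifications differ, but the key-based variant generally amounts to registering a callback against a key. The interfaces below are hypothetical and only illustrate the shape of such an API, not any particular product's.

// Hypothetical interfaces illustrating key-based cache event notifications;
// actual products expose their own callback APIs for this pattern.
public class EventNotificationSketch {

    // Hypothetical listener contract.
    interface CacheEventListener {
        void onItemUpdated(String key, Object newValue);
        void onItemRemoved(String key);
    }

    // Hypothetical registration method a cache client might provide.
    interface NotifyingCache {
        void registerKeyListener(String key, CacheEventListener listener);
    }

    static void watchEmployee(NotifyingCache cache) {
        cache.registerKeyListener("Employee:1000", new CacheEventListener() {
            public void onItemUpdated(String key, Object newValue) {
                // e.g., refresh a local (near) cache copy of this item
            }
            public void onItemRemoved(String key) {
                // e.g., drop any data derived from this item
            }
        });
    }
}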

High Availability of Distributed Cache
A rule of thumb to remember: you are using a distributed cache because you anticipate a high-transaction environment for your application. This usually means your JSP application has a significant impact on your business. Therefore, you cannot afford any unscheduled downtime, and even scheduled downtime should be very short and very infrequent.

Therefore, since a distributed cache runs in your data center as part of your JSP application, it must itself be highly available. One critical aspect of this is that the cache cluster must be self-healing and fully dynamically configurable. Some caches provide a manually fixed cache cluster (your application code creates and manages the cluster). Others use a master/slave architecture in which, if the master node goes down, the slaves either stop working or become read-only. Both architectures are severely limiting and inflexible.

A highly efficient distributed cache instead uses peer-to-peer clustering that corrects itself automatically at runtime, healing itself when you add or remove cache servers or when a cache server crashes. This is a highly important characteristic of a good distributed cache.

Conclusion
You should seriously consider incorporating a distributed cache both for application data caching and for session state storage if you are developing a JSP application targeted for a high transaction environment.

One last point: caveat emptor. There are a number of free distributed caches available today, but remember the old tried-and-true saying, "there is no free lunch." A free distributed cache costs nothing up front, yet in the long run its total cost can become exorbitant. If your JSP application is business-critical, you must consider the total cost of ownership, not just the price of the distributed cache or the fact that it is free.

More Stories By Iqbal Khan

Iqbal Khan is the President and Technology Evangelist of Alachisoft. Alachisoft provides NCache, a Java and .NET distributed cache for boosting performance and scalability in enterprise applications. Iqbal received his MS in Computer Science from Indiana University, Bloomington, in 1990. You can reach him at [email protected]
