
The Challenges and Pitfalls of J2EE


J2EE applications are becoming the norm rather than the exception in today's distributed computing environment. But organizations are still facing the same issues with this technology set that they did with application models of yesteryear - how to ensure that they can scale quickly, respond dynamically, and maintain flexibility as their business environment changes.

These challenges have never been more pressing than they are in today's environment, where business models are changing rapidly as organizations cope with the realities of a cyclical economy.

This article focuses on some of the application characteristics of large-scale J2EE applications and is prescriptive in terms of what we have seen work and not work with large-scale Web applications. In particular, this article considers Web applications designed to support thousands of concurrent users. Areas covered include the importance and impact of up-front architectural decisions, development steps to help ensure smooth deployments, performance tuning, and production deployment planning and design.

Considering Architectural Approaches
Because of the popularity of both Java and J2EE, organizations have an unprecedented variety of portable software to draw on in the formation of their architectural standards. Since J2EE currently focuses more on application portability and less on the underlying implementation and operational characteristics of an application server, it's important for organizations to look beyond the J2EE programming model and concentrate on how key decisions will impact the operational characteristics of the deployed systems. Apart from decisions related to the underlying hardware and operating systems, the following architectural considerations will weigh most heavily on the performance of the operational environment.

  • Select application server: To an extent not seen with other Web and enterprise component platforms, the J2EE platform enables an organization to base the bulk of its development investment on the portable base platform and layered frameworks rather than on vendor-specific APIs and features. The adoption of J2EE has allowed organizations to worry less about comparing application development environments, but they still need to perform a comprehensive evaluation of the available J2EE-compliant application server products to ensure that the products meet the organization's operational requirements. Switching application servers midway through development of a J2EE-based application effort will moderately impact the delivery schedule. However, given the investment necessary to train operational teams on the product-specific processes and features, switching application server products near or during the deployment phase will result in significant additional costs and extended deployment delays.
  • Examine Web presentation layer frameworks: Almost any Web application will benefit greatly from the use of a prebuilt Web presentation layer framework. For example, the Apache Jakarta project's Cocoon and Struts Web presentation layer frameworks have become very popular with many J2EE developers. Although some teams will be hesitant to build the Web presentation layer on a third-party framework, most organizations end up developing a generic infrastructure that sits between the base J2EE Web container services and business presentation logic - whether or not they call it a framework. In addition to the time savings they generate, such frameworks are portable across almost any J2EE Web container implementation. Although the popular frameworks evolved with JSPs and servlets prior to the formal J2EE specification, the relative newness of the frameworks warrants caution when considering them for production use. It's advisable to thoroughly test any adopted framework in a production-like environment to ensure that stability and performance requirements are met.
  • Survey prebuilt components: As the architecture team identifies enterprise components that will be useful to many of the organization's business applications, the architects should first review prebuilt Java and EJB components, whether available through their application server vendor or through one of the several component marketplaces on the Web. As in the case of prebuilt frameworks, third-party components - even if they require customization - can greatly speed development efforts. The Flashline.com site references a wide variety of both EJB and Java components. As part of any component evaluation effort, each component should be thoroughly reviewed and tested for its performance and security characteristics.
  • Make hard decisions on RDBMS integration: One of the most contentious aspects of J2EE and EJBs is RDBMS access. It's worthwhile for an architecture team to invest a great deal of time with key application design team members to determine the organization's requirements with respect to RDBMS integration. This should be done even before testing the various approaches. Since approaches can run the gamut from lightweight, local-access DAO-style classes to heavier-weight, remotable CMP-based entity beans, there are huge design and implementation ramifications to adopting one approach over another. For example, using entity beans enables developers to rely on the underlying EJB container to manage complex aspects such as transactions, security, and remote communication. However, the container's role comes with the price of additional processing overhead. Many organizations fail to realize the downstream impact of their RDBMS integration approach on system performance until well into the deployment phase of an application.
  • Decide on enterprise connectivity solutions: From the standpoints of both development efficiency and runtime performance, it's generally preferable to employ prebuilt back-end connectors to shield developers of the Web presentation layer and new EJB-based business components from the detailed interface semantics of existing systems. Much like the RDBMS integration, back-end connectivity to enterprise systems can be a major source of headaches during deployment if the proper evaluation and testing of solutions is not performed in advance of development. Although the emerging J2EE Connector Architecture (JCA) is gaining adoption as a standards-based approach to enterprise connectors, many of the higher-end application servers already support some degree of prebuilt, albeit proprietary, connectors. A few of those vendors offer SDKs to enable organizations to quickly build their own connectors for back-end systems not addressed by the prebuilt connectors. Compared to relying on individual developers or development projects to implement efficient and secure access to back-end systems from their J2EE applications, the connector approach enables an organization to ensure consistent performance and security characteristics across applications.
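To make the lightweight end of the RDBMS-integration spectrum concrete, here is a minimal sketch of a local-access DAO. The Customer and CustomerDao names are hypothetical, and an in-memory map stands in for a JDBC-backed implementation so the example runs without a database:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical domain class; the fields are illustrative only.
class Customer {
    final String id;
    final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

// A lightweight, local-access DAO interface. Unlike a CMP entity bean,
// callers pay no container overhead for remote invocation; transactions
// and security are managed by the application, not the EJB container.
interface CustomerDao {
    Customer findById(String id);
    void save(Customer c);
}

// In-memory stand-in for the JDBC-backed implementation, so this sketch
// runs without a driver. A real DAO would issue SQL here.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<String, Customer> table = new HashMap<>();
    public Customer findById(String id) { return table.get(id); }
    public void save(Customer c) { table.put(c.id, c); }
}

public class DaoDemo {
    public static void main(String[] args) {
        CustomerDao dao = new InMemoryCustomerDao();
        dao.save(new Customer("42", "Acme Corp"));
        System.out.println(dao.findById("42").name); // prints Acme Corp
    }
}
```

A CMP entity bean covering the same data would instead let the container manage transactions, security, and remote communication, at the cost of the additional processing overhead described above.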

For J2EE-based applications, these are several of the most prominent architectural considerations. In each case, up-front investigation and testing will help ensure a solid base for development and deployment.

Designing and Developing Production-Ready Applications
The source of perhaps the largest impact on the viability of a J2EE application in production is the initial detailed design of the application. Even though many development teams are new to J2EE features, we often encounter cases in which the team performs very little, if any, performance testing of individual application components, let alone the complete application, prior to the production deployment phase. In truth, there's nothing magic about J2EE that would permit a development team to skip important steps in the basic development process. Key steps worth highlighting are:

  • Engage experts: For your first J2EE development effort, engage experts who have already implemented production J2EE-based systems. Since momentum behind J2EE has built rapidly and the standard has had approximately a year of exposure, there are now many more experienced professionals available to lead new teams through development and deployment of applications than there were even nine months ago.
  • Leverage proven design patterns: Become familiar with the popular design patterns that apply to your application problems. Proven patterns have typically evolved to the point of becoming the most efficient yet maintainable means of addressing a particular problem. A set of patterns applied to J2EE-based systems was recently published on java.sun.com at http://developer.java.sun.com/developer/technicalArticles/J2EE/patterns/.
  • Consider impact of session size on performance: If you're considering the use of session-replication features provided by the application server, consult the session-size guidelines provided by the application server vendor. These guidelines are important when determining how much information to store in the HttpSession of the Web application and/or instance variables of stateful session beans. Although management of large amounts of session data may yield acceptable performance when an application is deployed to a single application server instance, the performance characteristics may vary dramatically as session replication is configured at deployment time. This is often done in an effort to ensure availability of session data across system outages.
  • Perform code walkthroughs: It's critical that development teams review source code to ensure that resources are properly managed by the application. The J2EE environment provides many opportunities for sloppy housekeeping. Code reviews are an early and, therefore, relatively low-cost means of avoiding the following common mistakes:
    -Failure to close JDBC result sets, statements, and connections
    -Failure to remove unused stateful session beans
    -Failure to invalidate HttpSession
    Although the JVM will handle the garbage collection of unused object instances, the application server product and JDBC drivers implement their own strategies for cleaning up unused resources. To minimize the overhead imposed by such housekeeping tasks, it's important for the application to clean up after itself. If code reviews are not performed, then the next opportunity for catching resource issues is during either code profiling or initial tuning efforts.
  • Performance test various options: In your development environment, establish small-scale test harnesses to evaluate the relative performance of different configurations. For example, experiment with both Type 2 and Type 4 JDBC drivers to assess not only their compatibility with your application, but also the performance of each driver type. Use any of the commonly available load-generation tools in the development phase to simulate moderate loads against a portion of your application. Use the application server monitoring tools to gain insight into resource utilization prior to formal system and integration testing.
  • Perform code analysis and profiling: One of the biggest issues arising during the deployment of new J2EE-based applications is the lack of expected performance. Often, such performance problems can be readily identified in advance through the use of mature code profiling tools such as JProbe and OptimizeIt!. Although these products predate J2EE, they are able to hook into the leading application servers to provide a comprehensive view of code efficiency.
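The cleanup discipline behind the code-walkthrough checklist above can be sketched as follows. Stand-in classes replace java.sql's Connection, Statement, and ResultSet so the example runs without a database driver; the point is the nesting of finally blocks, which guarantees that each resource is closed in reverse order of acquisition even when the body throws:

```java
import java.util.ArrayList;
import java.util.List;

public class CleanupDemo {
    // Records the order in which resources are closed.
    static final List<String> closed = new ArrayList<>();

    // Stand-in for a JDBC resource; a real walkthrough would check
    // java.sql.ResultSet, Statement, and Connection instead.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        public void close() { closed.add(name); }
    }

    // The classic J2EE-era discipline: acquire, then close in a finally
    // block, innermost resource first. (Later Java versions can express
    // the same guarantee with try-with-resources.)
    static void queryWithCleanup() {
        Resource conn = new Resource("connection");
        try {
            Resource stmt = new Resource("statement");
            try {
                Resource rs = new Resource("resultSet");
                try {
                    // ... iterate over the result set here ...
                } finally {
                    rs.close();
                }
            } finally {
                stmt.close();
            }
        } finally {
            conn.close();
        }
    }

    public static void main(String[] args) {
        queryWithCleanup();
        System.out.println(closed); // prints [resultSet, statement, connection]
    }
}
```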

Understanding Operational Requirements
As application development ramps up and you begin to tune simple deployments of the application, the implementation team can establish the general layout of the operational environment based on the requirements of the application server product and the business application. This general layout will show the relative position of the system's external interfaces. Before the general layout can be transformed into a more concrete design, implementers need to factor in the following operational requirements of the new system:

  • Security
  • Availability
  • Performance
Mature organizations are well schooled in these facets of an operational environment. One of the key differences between J2EE-based application servers and older middleware is that high-end application servers provide many deployment configuration options that drastically impact all of these areas without affecting the application implementation.
  • Security: Is there a requirement to encrypt the browser to Web server communications for all or part of the application? Will the Web-server tier exist in a demilitarized zone (DMZ) separate from the application-server tier and back-end enterprise systems? Is encryption required between the Web servers and application servers? If so, is the encryption necessary for all interactions with the application? These are questions you have to consider.
  • Availability: Availability issues are also critical. What are the availability requirements of the application? Is the loss of service acceptable when a machine becomes unavailable? Can the loss of a user's session information be tolerated? What are the possible weak links with respect to the manner in which the application interacts with other aspects of the environment? Can these weak links be reasonably enhanced or are they constants? Many shops view a CPU busy rate of 80% as a high-water mark. What is your shop's standard?
  • Performance: Performance requirements pose challenges as well and must be addressed. What, for example, are the required response times experienced by the end users for various interactions with the application? What are the perceived steady state and peak user loads? What is the average and peak amount of data transferred per Web request? What is the expected growth in user load over the next 12 months?
For peak user loads, you must focus on the number of concurrent sessions being managed by the application server. We often find that organizations view peak user load as the number of possible users rather than the average number of concurrent users. Given this more realistic view of user loads, you'll often find the number of peak users drops dramatically on paper from hundreds of thousands or even millions to tens of thousands of concurrent users.
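One way to translate "possible users" into concurrent sessions is Little's Law: concurrent sessions are roughly the session arrival rate multiplied by the average session duration. The sketch below uses illustrative numbers, not measurements from any particular system:

```java
public class ConcurrencyEstimate {
    // Little's Law: concurrent sessions = arrival rate * average time
    // a session stays alive. Arrival rate is sessions started per hour;
    // duration is in minutes, so it is converted to hours.
    static long concurrentSessions(long sessionStartsPerHour, double avgSessionMinutes) {
        return Math.round(sessionStartsPerHour * (avgSessionMinutes / 60.0));
    }

    public static void main(String[] args) {
        // Illustrative: 120,000 session starts per hour at peak with a
        // 10-minute average session yields about 20,000 concurrent
        // sessions - far below a naive "millions of users" figure.
        System.out.println(concurrentSessions(120000, 10)); // prints 20000
    }
}
```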

Defining these operational requirements will help move you to the next stage in understanding the deployment environment. For the purposes of this article, let's assume that the operational security and availability requirements are such that multiple Web and app server instances, separated by a set of firewalls, will form the basis of the environment. In other words, you'll need separate tiers of machines to support the division between the DMZ and the secure, back-end business systems. You'll also need to plan on multiple instances of machines in each tier to enhance the availability of the application. Proceeding from these assumptions, the layout of the operational environment is further refined, though it does not yet address the exact number or size of machines required by the system. The next step is to develop a basic understanding of how to size a system.

Understanding Factors Affecting Sizing
The main factors affecting sizing are:

  • User load: The larger the load that's applied to a system, the greater the amount of hardware needed to satisfy that load.
  • Application design/implementation: An application that performs very little work will be able to handle many users for a given amount of hardware. In a relative sense, this kind of application often scales poorly as it spends a large percentage of its time waiting for shared resources (network, database, other enterprise systems, etc.). Conversely, applications that perform a great number of computations tend to require much more hardware per user, though these applications typically scale much better than those performing a small number of computations.
  • Hardware platform: Raw processor performance is critical for reducing the amount of hardware needed. Generally, Web applications do not include floating-point-intensive computation, which means integer performance is usually the most important factor. Even with high-speed processors, a server can scale poorly if shared resources cause significant contention. Usually, cache design and memory bandwidth play a big role in determining how much extra performance is achieved as processors are added to a server.
  • Safety margins: Additional capacity is normally designed into a solution. One reason for this is that user loads tend to increase over time. Another reason is that most businesses expect to keep their Web-based services available during planned and unplanned outages. Proper sizing and capacity planning is typically able to compensate both for the increase of loads as businesses grow and for partial system outages. Planned outages of a portion of the system may also be required as this permits an application system to be upgraded to a new version. To do this, a portion of the system is taken offline, while user requests are still handled on the active portion of the system.
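The safety-margin reasoning above amounts to simple arithmetic: inflate the peak load by the expected growth, divide by the shop's utilization ceiling, and add a spare machine to cover planned and unplanned outages. The figures below are illustrative assumptions, not recommendations:

```java
public class SafetyMargin {
    // Machines needed so that peak load stays under the shop's CPU
    // high-water mark (e.g., 80%) even after expected growth, with one
    // spare so the remaining machines absorb the load during an outage.
    static int machinesNeeded(double peakLoad, double growthFactor,
                              double maxUtilization, double perMachineCapacity) {
        double required = peakLoad * growthFactor / maxUtilization;
        int base = (int) Math.ceil(required / perMachineCapacity);
        return base + 1; // spare for planned/unplanned outages
    }

    public static void main(String[] args) {
        // Illustrative: 1,000 requests/s peak, 50% growth expected, an
        // 80% CPU ceiling, and 400 requests/s per machine:
        // ceil(1875 / 400) = 5 machines, plus one spare.
        System.out.println(machinesNeeded(1000, 1.5, 0.8, 400)); // prints 6
    }
}
```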

Predicting Performance
Given the factors affecting the sizing process and the general layout of the operational environment, how do you predict either the capacity of a given combination of hardware or the minimal hardware required to sustain a specified capacity? The best way to answer these questions is to take the data gathered above and plug it into the sizing calculator offered by your application server vendor. For example, iPlanet provides customers with two calculators to help size applications deployed to the iPlanet Application Server. The first calculator computes the size of a system (i.e., the number of CPUs and the number of machines in each tier) based on the factors described above. The second calculator computes the maximum capacity of a given hardware configuration (see Figure 1).

iPlanet built these calculators based on a combination of tests, including those for popular application workflows, and drew upon publicly available benchmark results for RDBMSs and processors. Both calculators assume a fully tuned system.

If your application server vendor does not make a calculator available, or the application workflows do not match those of your application, then you'll need to develop your own understanding of sizing based on the following steps:

  1. Determine performance on a single CPU: You must determine the largest load that can be sustained with a known amount of processing power. You can obtain that figure by measuring the performance of the application on a uniprocessor machine. You can either leverage the performance numbers of an existing application with similar processing characteristics or, ideally, use the results of basic performance testing done during development. Based on their experience with large-scale J2EE implementations and internal validation of their products, vendors of high-end application servers can usually provide base performance numbers for Web applications implementing a basic workflow.

    While determining performance on a single CPU, it's imperative that you begin to tune the basic environment. As with any performance test, you must ensure that none of the outlying systems (driver machines, Web servers, database machines, etc.) throttle the test. Otherwise, your performance numbers may be artificially low and will adversely impact the follow-on sizing numbers.

  2. Determine vertical scalability: You need to know how much additional performance is gained when you add processors. That is, you're indirectly measuring the amount of shared-resource contention that occurs on the server for this workflow. You either obtain this information based on additional load testing of the application on a multiprocessor system or leverage existing information from a similar application that has already been load tested. Running a series of performance tests on one to four CPUs generally provides a decent sense of the vertical scalability characteristics of the system. On Solaris, for example, it's easy to disable/enable processors. Based on your sizing estimates, it's important to exercise the application under load on systems of the target configuration.

    While determining the vertical scalability, ensure that availability requirements are factored into the configuration. For example, to guarantee that the failure of a single JVM does not result in a loss of all sessions, perform the vertical scalability tests with at least two JVMs and configure session replication between the JVMs.

  3. Determine horizontal scalability: You need to know how much additional performance is gained when you add servers. Again, benchmarking a cluster of application server systems is required if information on a similar application is not already available. Ensure that you take into consideration high-availability requirements and the attendant session-replication configuration as you lay out your horizontal scalability test environment. In this case, session replication occurs across application server instances deployed to multiple machines, in addition to session replication across JVMs within each application server instance.

Running this suite of tests will provide you with a solid understanding of the performance of the application server. Using this information, you can develop your own custom-sizing equations.
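As a sketch of what such a custom-sizing equation might look like, the following combines the three measurements - single-CPU throughput, vertical scaling efficiency, and horizontal scaling efficiency - under a simple linear contention model. Both the model and the numbers are illustrative assumptions; your own load-test results should replace them:

```java
public class SizingModel {
    // Throughput of one server: the first CPU delivers baseRps; each
    // added CPU contributes only a fraction of that, reflecting the
    // shared-resource contention measured in the vertical tests.
    static double serverThroughput(double baseRps, int cpus, double verticalEff) {
        return baseRps * (1 + (cpus - 1) * verticalEff);
    }

    // Servers needed for a target load, discounted by the efficiency
    // observed in the horizontal (cluster) scalability tests.
    static int serversNeeded(double targetRps, double perServerRps, double horizontalEff) {
        return (int) Math.ceil(targetRps / (perServerRps * horizontalEff));
    }

    public static void main(String[] args) {
        // Illustrative: 100 req/s on one CPU, four CPUs at 80% vertical
        // efficiency, a 2,000 req/s target at 90% horizontal efficiency.
        double perServer = serverThroughput(100, 4, 0.8);
        int servers = serversNeeded(2000, perServer, 0.9);
        System.out.printf("%.0f req/s per server, %d servers%n", perServer, servers);
        // prints: 340 req/s per server, 7 servers
    }
}
```

Vendor calculators such as iPlanet's embody far more detailed models than this; the value of building your own is that it forces you to measure each factor for your actual application.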

The increased programming flexibility and the great number of deployment time configuration options offered by J2EE and advanced application servers can cut the time it takes to develop and deploy highly available business services. Before these benefits can be realized, however, mature development and deployment practices must be established. In short, J2EE provides a powerful development paradigm, but one that requires careful operational planning to ensure success.

More Stories By Chris Kampmeier

Chris Kampmeier is group manager for technical evangelism within the application services group at iPlanet E-Commerce Solutions. He leads the team responsible for creating development and deployment content for the J2EE developer community including sample applications, coding tips, sizing and tuning guides as well as documentation for the application server family. Additionally, Chris is heavily involved in defining future product strategy. Prior to iPlanet, he spent more than two years at Sun Microsystems supporting field systems engineers, and seven years developing EFT switching systems at MasterCard International. He holds a BS in computer science from Northern Illinois University.

