Improving Reliability and Scalability of Middle-Tier Servers

Many three-tier applications built using various middleware products ultimately fail in production due to a lack of scalability, flexibility or reliability. This can trigger a need to migrate an application from one middleware product to another. In this article we'll discuss a process for porting servers between CORBA and EJB middleware implementations.

Object request brokers and application servers are popular middleware technologies you can readily find in the middle tier of distributed applications. These technologies impact the reliability and scalability of the applications they support since middleware introduces a higher degree of sophistication — and therefore greater complexity.

The arrival of distributed object technology into the mainstream of software development has led to the emergence of CORBA and RMI as standard object communication mechanisms. Moreover, thanks to the growing number of e-business applications being developed for the Web, application servers have gained recent prominence within software projects.

Cross-Server Interchangeability
As a way to address component interoperability and interchangeability across servers, the Enterprise JavaBeans specification defines an execution and services framework for server-side Java components. EJB relies on the underlying communication mechanism — typically CORBA or RMI — for exposing a component's public interface. Figure 1 illustrates how EJB components can interoperate with CORBA and Java objects.
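To make this concrete, here is a minimal sketch (not taken from the article) of what such a component's public interface might look like; the Account bean, its methods and the AccountHome factory are hypothetical. Notice that the interfaces say nothing about the wire protocol: the EJB container exposes them over RMI or RMI-IIOP, which is what lets Java and CORBA clients share the same component.

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;

// Hypothetical banking component. The remote interface declares the business
// methods a client can call; the home interface acts as the factory. The EJB
// container maps invocations on these interfaces onto the underlying
// communication mechanism (RMI or RMI-IIOP/CORBA), so Java and CORBA clients
// can interoperate with the same bean.
public interface Account extends EJBObject {
    void deposit(double amount) throws RemoteException;
    double getBalance() throws RemoteException;
}

interface AccountHome extends EJBHome {
    Account create(String owner) throws CreateException, RemoteException;
}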

Why Migrate?
The primary advantages of technologies like CORBA, RMI and EJB are interoperability and portability. As the use of these middleware standards continues to grow, requirements for functional richness and scalability help drive decisions about which implementation to choose. Changes in business and technical requirements, as well as vendor offerings, often result in the need to migrate between ORBs or application servers. More to the point, applications built using various middleware products all too often fail due to a lack of scalability, flexibility or reliability. When faced with this situation, the best course of action may be to migrate your application software to an alternative middleware product.

Migration usually involves porting your application from one vendor's middleware implementation to another, or even between implementations provided by the same vendor. Some vendors offer different application servers depending on whether you require robust EJB support for faster development versus CORBA underpinnings for enterprise scalability. In any event, migration raises a number of potential issues for the application components being ported and possibly also raises interoperability issues for components remaining on dissimilar implementations. (We'll deal with some of these issues shortly.)

Migration Process
Now let's explore a process for migrating applications between standard middleware implementations. The aims of this process are to capture baseline test cases and performance metrics, then port the application code — and finally to validate that the port was successful. To quantify the success of a migration effort it's essential to capture test cases and metrics methodically — for example, has the migration indeed resulted in increased scalability? Test automation helps streamline this process, providing a cost-effective way to successfully complete a migration effort and quantify the results.

The migration of an application involves more than just porting source code — it entails careful planning, analysis and validation of results. While CORBA and EJB define interoperability and interchangeability standards, they don't prescribe how your middleware provider may have implemented the underlying infrastructure or how you should architect your application components. So you might very well find that migration involves rearchitecting portions of your application to compensate for differences between the current middleware implementation and the new one.

The migration process consists of three phases: preparation, porting and validation. These phases, illustrated in Figure 2, are described in the next sections.

Preparation Phase
In preparation for a port, you must first create behavioral test cases and capture performance metrics. These tests and metrics should be set in the context of specific goals and targets for the ported application. For example, a goal may be to introduce automatic load balancing. Or a target may be to improve an application's performance by an order of magnitude.

The steps for the preparation phase are:

  1. Define goals and targets.
  2. Create behavioral test cases.
  3. Capture baseline performance metrics.
  4. Analyze the application architecture and identify potential issues.

1. Define goals and targets.
First we need to know where we're going. Typically, the major reason for migrating is to improve the reliability of an application that's targeted for production use. Increasingly such applications are e-business engines that will experience significant demands in terms of client requests, resource availability, and so on. By defining concrete goals and targets it should be possible to align your application's usage requirements with the capabilities of the target middleware implementation. For instance, does the middleware provide automatic failover or will you have to build this into your application?

2. Create behavioral test cases.
Test cases must be created so that the behavior of the ported application can be validated. Since middle-tier server applications can consist of multiple processes that interact through published APIs, this isn't a traditional regression testing exercise. Rather, test cases must be defined for each of an application's components (see Figure 3).

The challenge becomes how to accomplish this without having to write static test drivers for each component's API.

The following observations can be made about a middle-tier server:

  • Public interfaces are typically specified in IDL or Java.
  • Objects can be accessed dynamically via standard mechanisms such as CORBA's DII or Java's reflection API.
  • A client program can invoke methods and manipulate attributes of server objects irrespective of the implementation language and physical location of the server.

Test cases are most effectively created using an automated functional testing tool that can interact with object-based servers. (Segue's SilkPilot is such a tool for CORBA and EJB servers.) The general approach is to exploit dynamic invocation facilities, allowing you to connect to one or more servers, view information about live objects within the servers, invoke methods with parameters, and view or modify attributes.

Testing should be done within the context of the application's usage model. A banking application, for example, requires an account to be created before funds can be deposited. When an interactive test cycle is completed, a corresponding test case should be generated and saved. Test cases are run later during the validation phase of the migration effort to ensure that the newly ported application components are functioning properly.
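
As a rough illustration of the dynamic approach, and not an excerpt from any particular tool, the following Java sketch uses the reflection API to drive a server object through the banking usage model without compiling a static test driver against its interface. All names are hypothetical, and a tiny in-process fake stands in for the object you would normally obtain from the middleware's naming facility.

import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Drive a "server" object dynamically via reflection, so no static test
// driver has to be compiled against the component's interface.
public class DynamicTestCase {

    // In-process stand-in for a remote banking server, so the sketch runs on its own.
    public static class FakeBank {
        private final Map<String, Double> balances = new HashMap<>();
        public void openAccount(String id) { balances.put(id, 0.0); }
        public void deposit(String id, double amount) {
            balances.put(id, balances.get(id) + amount);
        }
        public double getBalance(String id) { return balances.get(id); }
    }

    public static void main(String[] args) throws Exception {
        // Normally: resolve the server object via JNDI or a CORBA naming service.
        Object bank = new FakeBank();

        // Follow the usage model: create an account before depositing funds.
        Method open = bank.getClass().getMethod("openAccount", String.class);
        open.invoke(bank, "ACCT-1");

        Method deposit = bank.getClass().getMethod("deposit", String.class, double.class);
        deposit.invoke(bank, "ACCT-1", 100.00);

        Method balance = bank.getClass().getMethod("getBalance", String.class);
        System.out.println("Balance after deposit: " + balance.invoke(bank, "ACCT-1"));
    }
}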

3. Capture baseline performance metrics.
You need to quantify the performance of the original application for comparative analysis during the validation phase. Measurement of an application's performance involves simulating usage models under various loads. It's highly advisable to use an automated load-testing tool — such as Segue's SilkPerformer — to accurately simulate message traffic and measure the capacity and scalability of your server applications (see Figure 4).

The first step in capturing performance metrics is to record message traffic for a typical set of interactions with the server. You can intercept and record IIOP communication used by ORBs and highly scalable application servers, for instance. Then you can create a load test by scaling up the recorded traffic to represent the anticipated usage volume, such as a thousand banking clients making deposits rather than just the one representative case used to generate the initial traffic.

Data values captured within the recorded traffic should be replaced with randomized values to create a realistic simulation. Each of the thousand simulated banking clients would thus have a unique account number and deposit amount, for instance. Workloads can then be defined in terms of machines in the network generating the workload, number of concurrent clients executed, transaction frequencies and duration of the simulation. Scalability measurements become extremely useful when obtained under various workloads, such as starting a simulation with 20 clients and then adding 10 more every 30 seconds up to a thousand concurrent clients. Performance measurements include the throughput of an application component and the response time as perceived by client applications.
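
A commercial load-testing tool replays recorded IIOP traffic and collects these measurements automatically, but a hand-rolled sketch shows the workload shape being described. Everything below is hypothetical, including the sendDeposit placeholder that stands in for one recorded client interaction.

import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the ramped workload described above: start 20 simulated clients,
// add 10 more every 30 seconds up to 1,000, and give each client randomized
// account data. A real run would also bound the total duration and aggregate
// throughput and response-time metrics.
public class RampLoadTest {
    private static final Random RANDOM = new Random();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService clients = Executors.newCachedThreadPool();
        int running = 0;
        while (running < 1000) {
            int batch = (running == 0) ? 20 : 10;
            for (int i = 0; i < batch && running < 1000; i++, running++) {
                clients.submit(RampLoadTest::simulateClient);
            }
            System.out.println("Concurrent clients: " + running);
            Thread.sleep(30_000);                          // ramp interval
        }
        clients.shutdown();
    }

    // One simulated client: a unique account number and randomized deposits.
    private static void simulateClient() {
        String account = "ACCT-" + RANDOM.nextInt(1_000_000);
        while (!Thread.currentThread().isInterrupted()) {
            double amount = 10 + RANDOM.nextInt(990);
            long start = System.nanoTime();
            sendDeposit(account, amount);
            long responseMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(account + " deposit took " + responseMs + " ms");
            try {
                Thread.sleep(1_000);                       // client think time
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    private static void sendDeposit(String account, double amount) {
        // Placeholder: replay the recorded deposit request against the server under test.
    }
}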

4. Analyze the application architecture and identify potential issues.
The architecture of the existing application must be analyzed and reconciled against the goals and targets for the application's performance when migrated to the new middleware. Decisions in the initial architecture were often influenced by limitations of the middleware originally used. So the migration may include some rearchitecting of the application to remove certain design concessions or workarounds that aren't necessary any longer.

Issues may also arise from the absence of particular features in the target middleware, or differences introduced by an alternate approach to implementing the underlying infrastructure; for example, multithreaded servers versus single-threaded/multiprocess. All potential issues should be identified as early in the migration process as possible.

Some specific architectural issues to consider include:

  • How clients connect to a server: For example, does your middleware implementation provide an API for binding directly to an object? Are you required to use a naming service, factory or other facility? (See the sketch following this list.)
  • Mechanisms for creating and exposing objects within servers: Are you required to use either a basic object adapter (BOA) or portable object adapter (POA)?
  • Object management: What activation modes are available? How is object lifecycle managed? Does the middleware implement a dedicated connection between each client and your object or does it pool connections?
  • Load balancing: Does the middleware implement a transaction-processing framework? Does it provide some other functionality such as object groups?
  • Threading: Can servers be safely implemented using threads, or does the middleware require single-threaded/multiprocess servers?
  • Nonstandard features: Does your application take advantage of vendor-specific features such as interceptors, locators, loaders or client-side caching? Are you using special features to accomplish "nonstandard" tasks like piggybacking extra data onto a message?
  • Fault resilience: Are you depending on middleware-dependent features such as activation modes, automatic connection reestablishment or a transaction-processing framework?
  • Transaction support: Does your application assume the middleware will handle transaction starts, commits and rollbacks?
  • Process colocation: Are you using special features to colocate clients and servers within a common address space?
  • Callbacks: Do your clients expect callbacks from your servers? How does the middleware support this?

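To illustrate the first issue above, the following sketch contrasts two common ways a client obtains its initial object reference. The JNDI half reuses the hypothetical AccountHome interface from the earlier sketch, and the binding names are likewise hypothetical.

import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

// Two styles of "find the server object": an EJB client typically goes
// through JNDI, while a classic CORBA client resolves the Naming Service and
// looks the object up by name. A vendor-specific bind() shortcut, where one
// exists, would replace either of these and is a porting issue in itself.
public class ClientBinding {

    // EJB-style lookup through JNDI (AccountHome as sketched earlier).
    static AccountHome lookupViaJndi() throws Exception {
        InitialContext ctx = new InitialContext();
        Object ref = ctx.lookup("java:comp/env/ejb/Account");
        return (AccountHome) PortableRemoteObject.narrow(ref, AccountHome.class);
    }

    // CORBA-style lookup through the Naming Service.
    static org.omg.CORBA.Object lookupViaCorbaNaming(String[] orbArgs) throws Exception {
        ORB orb = ORB.init(orbArgs, null);
        org.omg.CORBA.Object nsRef = orb.resolve_initial_references("NameService");
        NamingContextExt naming = NamingContextExtHelper.narrow(nsRef);
        return naming.resolve_str("Account");
    }
}
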
Porting Phase
With the preparation phase complete, you can port the application source code to the target middleware. Depending on the conclusions drawn from your analysis, it might be necessary to change the source code; this is quite likely if your migration effort includes taking advantage of features available only in the new middleware.

The steps for the porting phase are the same as for any porting effort: modify source code as necessary, recompile on the target middleware and platform, test that the application functions properly and repeat as required. When porting is completed, it's time to move on to the validation phase.

Validation Phase
After the application is operational on the new middleware, you must validate that your goals and targets have been achieved. The two steps for this phase are:

  1. Make sure that the application behaves properly.
  2. Measure the application's performance.

1. Make sure that the application behaves properly.
The test cases created during the preparation phase can be used to verify the behavior of the newly ported application. This is an API-level regression test. Each test case should be executed and the results reviewed to make sure that the new server components are responding properly.

2. Measure the application's performance.
The load tests created during the preparation phase can be rerun to generate new performance metrics. These measurements can be compared to the initial baseline metrics and performance targets to provide quantifiable evidence that the application's performance has indeed improved.

Final Thoughts
Middleware standards like CORBA and EJB provide a marvelous basis upon which to design and build three-tier applications. However, choosing the right product for the lifetime of an application is made difficult, perhaps even unlikely, by multiple middleware implementations, each with its own design approach, vendor-specific features and inevitable limitations. If improving the reliability of your CORBA or EJB application requires migrating to a new middleware implementation, the process outlined in this article should help ease the pain.

More Stories By Todd Scallan

Todd Scallan is the vice president of product and engineering at Axcient, where he is responsible for leading the development team and driving product for the Axcient platform. He has over 25 years of experience in a variety of senior-level product management, engineering and business development roles at companies including Interwoven, Segue Software and Black & White Software (acquired by Segue Software). Todd holds an MS in computing engineering, a BS in electrical engineering and has published numerous articles and papers on a range of computing topics.
