
Migrating Enterprise Applications Between J2EE Application Servers


If you search for "migrating J2EE application servers" on the de facto "re"search engine - Google - here are some of the results your query will return:

  • Migrating J2EE applications from Borland JBuilder to IBM WebSphere
  • Migrating J2EE applications from WebLogic to WebSphere application server
  • Migrating J2EE applications from earlier versions of the application server and from other application server platforms to Sun ONE application servers
  • Migrating your J2EE applications to JBoss
  • BEA WebLogic to JBoss migration program
When I first got involved in the planning for a project that involved the migration of applications between versions of IBM's WebSphere product, I naively thought - "This can't be that big a deal. After all, it's just different versions of the application server. And the changes in the Java platform between WebSphere versions are well documented. In fact, IBM even provides a Redbook for the migration..."

I couldn't have been more mistaken. The migration was a big deal. After working on a few of these projects, I can verify that the number of moving parts in a machine composed of various portfolios, frameworks, third-party vendors, and a variety of stakeholders made the planning for such an initiative a formidable undertaking.

This article covers the aspects of enterprise application migration that involve J2EE application servers, including the motivation, methodology, challenges, and the way to successfully undertake such an initiative. The focus is primarily on the migration of a large portfolio of applications, not individual applications. This article doesn't get into the basics of application server technologies, Java technologies, etc.; I feel it will be of most interest to architects, team leads, and technical project managers.

What's the Big Deal About Enterprise Applications?
Many of you have probably migrated applications between versions, hardware and software platforms, etc. The magnitude of the problem increases dramatically with the number of applications, especially when they are spread across a number of business portfolios. When we're considering applications in the J2EE context, enterprise applications typically pose the following challenges:

  • The applications are distributed across different tiers - client, business, and database - and they use a different mix of J2EE technologies, including JSPs, servlets, EJBs, JDBC, etc. The versions of the APIs used across the applications are usually not uniform, since they may have been developed at different times.
  • The integration requirements of each application may be unique. For example, some applications may require integration with an existing security framework that provides single sign-on. Others may require integration with third-party packaged products.
  • Besides the Java platform APIs, the applications probably use other technology components or frameworks that were developed in your organization for Java applications. These may include messaging frameworks, utilities for logging, exception handling, etc. The dependency of applications on such components adds another level of complexity to the migration effort.
  • Applications have varying levels of complexity, and it's hard to apply the same methodology for migration to each of them.
  • Application teams have differing levels of expertise, so while some teams see the benefits of a structured migration, others do not.
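The dependency on in-house frameworks mentioned above is easier to manage when vendor-specific code is isolated behind application-owned interfaces. The following is a minimal sketch of that idea; the names (`LogService`, `WebSphereLogService`, `JBossLogService`) and the factory are hypothetical, not from the article or any vendor API.

```java
// Hypothetical sketch: hiding a vendor-specific service behind an
// application-owned interface so a server migration touches one class,
// not every caller. All class names here are illustrative.
interface LogService {
    void log(String message);
}

// In a real application this class would delegate to the vendor's API.
class WebSphereLogService implements LogService {
    public void log(String message) {
        System.out.println("[was] " + message);
    }
}

class JBossLogService implements LogService {
    public void log(String message) {
        System.out.println("[jboss] " + message);
    }
}

// The binding is chosen from configuration rather than code, so a
// migration means changing one property, not rewriting applications.
final class LogServiceFactory {
    static LogService create(String server) {
        if ("jboss".equals(server)) {
            return new JBossLogService();
        }
        return new WebSphereLogService();
    }
}

public class MigrationIsolationDemo {
    public static void main(String[] args) {
        String server = System.getProperty("app.server", "websphere");
        LogService log = LogServiceFactory.create(server);
        log.log("application started");
    }
}
```

Applications that followed this kind of discipline when first built tend to land at the low end of the complexity spectrum described above.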
Why Migrate?
Chances are that all your enterprise applications have already been in place for a while and have been working satisfactorily. If such is the case, why migrate? After all, you could continue to support your applications on existing versions of the application server.

There are many factors that could lead your organization to consider the migration of applications. One of the main reasons a migration takes place across the entire organization is that support for the existing version of the application server is being phased out. In such cases, the organization has no choice but to move all the applications to a version that will be supported by the vendor in the future. A typical case is that of IBM WebSphere. IBM plans to phase out version 3.5 of its application server this year. This has led to the mass migration of applications hosted on 3.5 to the latest version (5.1).

Another reason organizations end up considering a large-scale migration of their J2EE applications is that vendors offer competitive rates and incentives. A typical case is that of Sun offering lucrative pricing on its Sun ONE server to entice organizations into migrating their applications. Another example is JBoss: as an open source product it is often the most cost-effective option, so some organizations are considering migrating their J2EE applications to it. For other organizations, it's simply a matter of consolidating their investment on a standard enterprise platform.

Other reasons include a shift in programming paradigms. Some organizations have recently adopted component-based applications and see a migration to an application server as the natural step to achieve this. Others have realized that departmental applications they have developed cannot scale to an enterprise level. However, in these cases, the effort involved is more along the lines of a rewrite rather than a migration.

I Thought Standardization on J2EE Took Care of Everything
The application server market continues to evolve. Migration is not only about moving non-J2EE applications toward the J2EE standard; very often, there is a requirement to migrate J2EE applications between application servers. Given J2EE's promise of interoperability, migrating enterprise applications between app servers is more complicated than it should be. This is because application server vendors typically provide extensions to the J2EE platform APIs in order to differentiate their products. In most cases, these extensions arose because the performance and rapid-development features offered by the pure J2EE APIs left much to be desired. For example, when the EJB model was first introduced, it was not practical for enterprise-scale applications; local interfaces only arrived in EJB 2.0. In the meantime, app server vendors had introduced their own optimizations to address performance and scalability. When migrating between application servers, the use of these proprietary extensions adds to the complexity of the migration. However, avoiding the extensions was often not a realistic option when the applications were originally developed.
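To make the split between standard and proprietary concrete, here is an illustrative EJB 2.0 `ejb-jar.xml` fragment (bean and class names are hypothetical). The `local-home`/`local` elements are the EJB 2.0 local view mentioned above; earlier beans exposed only the remote view.

```xml
<!-- Illustrative ejb-jar.xml fragment; names are hypothetical. -->
<entity>
  <ejb-name>AccountBean</ejb-name>
  <local-home>com.example.AccountLocalHome</local-home>
  <local>com.example.AccountLocal</local>
  <ejb-class>com.example.AccountBean</ejb-class>
  <persistence-type>Container</persistence-type>
  <prim-key-class>java.lang.String</prim-key-class>
  <reentrant>False</reentrant>
</entity>
```

The standard descriptor, however, says nothing about JNDI bindings, pooling, or caching; those live in vendor-specific descriptors such as `weblogic-ejb-jar.xml` or WebSphere's binding files, and it is precisely that proprietary layer that has to be re-created during a migration.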

Although migration is not only about moving to the J2EE standard, one of the main outcomes of the migration is the move to a standardized platform. This is the case whether the migration is between application server versions, between vendors' products, or between hardware and OS platforms. Figure 1 gives an example of the transition in the case of a migration from earlier versions of IBM WebSphere application server to version 5.1. As shown in the figure, the final state of migration leads to a standardized state of J2EE APIs, IDEs, databases, hardware, etc.

How to Plan for an Enterprise-Level Migration
Enterprise application migration requires careful and detailed planning. In a large organization, at the onset of such an initiative, the planning should be managed in independent tracks that tackle different dimensions of the migration. Figure 2 illustrates the phases and tracks in the planning phase of an enterprise-level migration. As illustrated, these tracks are executed over a number of phases. In the project inception phase, a core migration team should be formed to conduct the migration assessment. As a part of this phase, the team should come up with an initial project plan for the planning phase, determine the methodology and means to communicate with the application teams, identify the key stakeholders, etc. The information gathering phase involves interaction with the various application teams to gather the characteristics of each application. These characteristics include technical parameters such as the number and versions of JSPs/servlets/EJBs, integration requirements, etc., as well as information about releases, schedules, and dependencies between applications and application portfolios.
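The characteristics gathered in the information gathering phase lend themselves to a simple per-application inventory that the core team can sort and sequence. The sketch below is an assumption for illustration only: the field names and the scoring weights are invented, not a published methodology.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of a per-application record produced by the
// information gathering phase. Fields and weights are illustrative.
public class MigrationInventory {
    static final class AppProfile {
        final String name;
        final int ejbCount;            // EJBs to port
        final int servletJspCount;     // servlets + JSPs
        final boolean usesVendorExtensions;
        final List<String> dependsOn;  // other portfolio applications

        AppProfile(String name, int ejbCount, int servletJspCount,
                   boolean usesVendorExtensions, List<String> dependsOn) {
            this.name = name;
            this.ejbCount = ejbCount;
            this.servletJspCount = servletJspCount;
            this.usesVendorExtensions = usesVendorExtensions;
            this.dependsOn = dependsOn;
        }

        // Crude complexity score: EJBs, dependencies, and vendor
        // extensions dominate; weights are assumptions for illustration.
        int complexityScore() {
            int score = 3 * ejbCount + servletJspCount + 2 * dependsOn.size();
            return usesVendorExtensions ? score * 2 : score;
        }
    }

    public static void main(String[] args) {
        List<AppProfile> portfolio = Arrays.asList(
            new AppProfile("billing", 12, 40, true,
                           Arrays.asList("security-sso")),
            new AppProfile("intranet", 0, 25, false,
                           Collections.emptyList()));
        // Sequence the simplest applications first to build expertise.
        portfolio.stream()
                 .sorted(Comparator.comparingInt(AppProfile::complexityScore))
                 .forEach(a -> System.out.println(
                     a.name + " -> " + a.complexityScore()));
    }
}
```

Even a crude score like this gives the planning tracks a common vocabulary for sequencing waves of applications and flagging portfolio-level dependencies early.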

More Stories By Ajit Sagar

Ajit Sagar is Associate VP, Digital Transformation Practice at Infosys Limited. A seasoned IT executive with 20+ years of experience across various facets of the industry, including consulting, business development, architecture, and design, he is the architecture consulting and delivery lead for Infosys's Digital Transformation practice. He was also the Founding Editor of XML Journal and Chief Editor of Java Developer's Journal.

