How Garbage Collection Differs in the Three Big JVMs

Differences between Sun Hotspot, IBM WebSphere, and Oracle WebLogic

(Note: If you’re interested in WebSphere in a production environment, check out Michael's upcoming webinar with The Bon-Ton Stores)

Most articles about Garbage Collection ignore the fact that the Sun Hotspot JVM is not the only game in town. In fact, whenever you work with either IBM WebSphere or Oracle WebLogic you will run on a different runtime. While the concept of Garbage Collection is the same, the implementation is not, and neither are the default settings or the way to tune it. This often leads to unexpected problems during the first load tests or, in the worst case, after going live. So let's look at the different JVMs, what makes them unique and how to ensure that Garbage Collection runs smoothly.

The Garbage Collection ergonomics of the Sun Hotspot JVM
Everybody thinks they know how Garbage Collection works in the Sun Hotspot JVM, but let's take a closer look for reference.

Figure: The memory model of the Sun Hotspot JVM

The Generational Heap
The Hotspot JVM always uses a generational heap. Objects are first allocated in the young generation, specifically in the Eden space. Whenever Eden is full, a young generation garbage collection is triggered. This copies the few remaining live objects into the empty survivor space. In addition, objects that were copied to the survivor space in the previous garbage collection are checked and the live ones are copied as well. The result is that objects only exist in one survivor space, while Eden and the other survivor space are empty. This form of Garbage Collection is called a copy collection. It is fast as long as nearly all objects have died. In addition, allocation is always fast because no fragmentation occurs. Objects that survive a couple of garbage collections are considered old and are promoted into the Tenured/Old space.
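
As a rough illustration, the young generation layout and the promotion age can be controlled with standard Hotspot flags. The values below are made-up examples rather than recommendations, defaults differ between JVM versions, and myapp.jar simply stands for your own application:

    # total young generation size: 256 MB (placeholder value)
    # Eden is 8 times the size of a single survivor space
    # objects survive up to 15 young GCs before promotion to tenured
    java -Xmn256m -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 -jar myapp.jar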

Tenured Generation GCs
The Mark and Sweep algorithms used in the Tenured space are different because they do not copy objects. As we have seen in one of my previous posts, garbage collection takes longer the more objects are alive. Consequently, GC runs in the Tenured space are nearly always expensive, which is why we want to avoid them. To avoid them, we need to ensure that objects are only copied from Young to Old when they are long-lived, and in addition ensure that the Tenured space does not run full. Therefore generation sizing is the single most important optimization for the GC in the Hotspot JVM. If we cannot prevent objects from being copied to the Tenured space once in a while, we can use the Concurrent Mark and Sweep algorithm, which collects objects concurrently with the application.
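
To give an idea of what generation sizing looks like in practice, here is a hypothetical set of Hotspot options; the sizes are placeholders that have to be derived from measuring your own application:

    # fixed overall heap of 2 GB, of which 512 MB is reserved for the young generation
    java -Xms2g -Xmx2g -Xmn512m -jar myapp.jar
    # alternatively, size the young generation relative to the old one (old generation = 3x young)
    java -Xms2g -Xmx2g -XX:NewRatio=3 -jar myapp.jar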

Figure: Comparison of the different Garbage Collector Strategies

While that shortens the suspensions, it does not prevent them, and they will occur more frequently. The Tenured space also suffers from another problem: fragmentation. Fragmentation leads to slower allocation, longer sweep phases and eventually out-of-memory errors when the free holes become too small for big objects.

Figure: Java Heap before and after compacting

This is remedied by a compacting phase. The serial and parallel compacting GCs perform compaction on every GC run in the Tenured space. It is important to note that, while the parallel GC compacts every time, it does not compact the whole Tenured heap but only the areas that are worth the effort, meaning areas that have reached a certain level of fragmentation. In contrast, the Concurrent Mark and Sweep does not compact at all. Once objects cannot be allocated anymore, a serial major GC is triggered. When choosing the Concurrent Mark and Sweep strategy we have to be aware of that side effect.

The second big tuning option is therefore the choice of the right GC strategy. It has big implications for the impact the GC has on application performance. The last and least known tuning option concerns fragmentation and compaction. The Hotspot JVM does not provide many options to tune it, so the only way is to tune the code directly and reduce the number of allocations.
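
For reference, the strategy is chosen via command-line flags. This is a minimal sketch for a Java 6 era Hotspot JVM; flag availability and defaults depend on the JVM version, and myapp.jar is a placeholder:

    # serial collector for young and tenured generations
    java -XX:+UseSerialGC -jar myapp.jar
    # parallel, compacting collectors optimized for throughput
    java -XX:+UseParallelGC -XX:+UseParallelOldGC -jar myapp.jar
    # concurrent mark and sweep in tenured, optimized for short pauses
    java -XX:+UseConcMarkSweepGC -jar myapp.jar
    # -verbose:gc and -XX:+PrintGCDetails show what the chosen collector actually does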

There is another space in the Hotspot JVM that we have all come to love over the years: the Permanent Generation. It holds classes and the string constants that are part of those classes. Garbage Collection is executed in the Permanent Generation, but only during a major GC. You might want to read up on what a major GC actually is, as it does not simply mean an Old Generation GC. Because a major GC does not happen often and mostly nothing changes in the Permanent Generation, many people think that the Hotspot JVM does not do garbage collection there at all.

Over the years all of us have run into many different forms of OutOfMemory errors in the PermGen, and you will be happy to hear that Oracle intends to do away with it in future versions of Hotspot.
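
Until that happens, the usual workaround for PermGen pressure is to size it explicitly. A minimal sketch, with placeholder sizes:

    # initial and maximum size of the Permanent Generation
    java -XX:PermSize=128m -XX:MaxPermSize=256m -jar myapp.jar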

Oracle JRockit
Now that we have had a look at Hotspot, let us look at how Oracle JRockit differs. JRockit is used by Oracle WebLogic Server, and Oracle has announced that it will merge it with the Hotspot JVM in the future.

Heap Strategy

The biggest difference is the heap strategy itself. While Oracle JRockit does have a generational heap, it also supports a so-called continuous heap. In addition, the generational heap itself looks different.

Figure: Heap of the Oracle JRockit JVM

The young space is called the Nursery and it has only two areas. When objects are first allocated, they are placed in a so-called Keep Area. Objects in the Keep Area are not considered during garbage collection, while all other objects still alive are immediately promoted to Tenured. That has major implications for the sizing of the Nursery. While you can configure how often objects are copied between the two survivor spaces in the Hotspot JVM, JRockit promotes objects in their second Young Generation GC.

In addition to this difference, JRockit also supports a completely continuous heap that does not distinguish between young and old objects. In certain situations, like throughput-oriented batch jobs, this results in better overall performance. The problem is that this is the default setting on a server JVM and often not the right choice. A typical web application is not throughput- but response-time oriented, and you will need to explicitly choose the low pause time garbage collection mode or a generational garbage collection strategy.
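
As a rough sketch, the strategy and the Nursery size are selected on the JRockit command line. The option names below follow the R27/R28 documentation but differ between releases, so treat the exact spellings and the placeholder sizes as assumptions to verify for your version:

    # generational heap with a mostly concurrent old space collection, 256 MB nursery (check -Xns syntax for your release)
    java -Xgc:gencon -Xns:256m -Xms2g -Xmx2g -jar myapp.jar
    # generational heap with parallel old space collection (throughput oriented)
    java -Xgc:genpar -jar myapp.jar
    # continuous (single-spaced) heap, concurrent or parallel old space collection
    java -Xgc:singlecon -jar myapp.jar
    java -Xgc:singlepar -jar myapp.jar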

Mostly Concurrent Mark and Sweep
If you choose the Concurrent Mark and Sweep strategy, you should be aware of a couple of differences here as well. The mostly concurrent mark phase is divided into four parts:

  • Initial marking, where the root set of live objects is identified. This is done while the Java threads are paused.
  • Concurrent marking, where the references from the root set are followed in order to find and mark the rest of the live objects in the heap. This is done while the Java threads are running.
  • Precleaning, where changes in the heap during the concurrent mark phase are identified and any additional live objects are found and marked. This is done while the Java threads are running.
  • Final marking, where changes during the precleaning phase are identified and any additional live objects are found and marked. This is done while the Java threads are paused.

The sweeping is also done concurrently with your application but, in contrast to Hotspot, in two separate steps. The first half of the heap is swept first. During this phase threads are allowed to allocate objects in the second half. After a short synchronization pause the second half is swept, followed by another short final synchronization pause. The JRockit algorithm therefore stops more often than the Sun Hotspot JVM, but the remark phase should be shorter. Unlike the Hotspot JVM, you can tune the CMS by defining the percentage of free memory that triggers a GC run.
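
JRockit exposes this trigger as a command-line option; to my knowledge the R27-style option is -XXgcTrigger, but the exact argument format is an assumption you should check against the documentation of your release:

    # assumption: start a concurrent old space collection while about 25% of the heap is still free
    java -Xgc:gencon -XXgcTrigger:25% -jar myapp.jar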

JRockit performs compaction for all Tenured Generation GCs, including the Concurrent Mark and Sweep. It does so in an incremental mode for portions of the heap. You can tune this with various options, like the percentage of the heap that should be compacted each time or how many objects are compacted at most. In addition, you can turn off compaction completely or force a full compaction for every GC. This means that compaction is a lot more tunable in JRockit than in the Hotspot JVM, and the optimum depends very much on the application itself and needs to be carefully tested.
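
The compaction flag names changed between JRockit releases, so the following is only a sketch based on the R27-style options, with placeholder values; verify them against the documentation of the JRockit version you actually run:

    # compact a fixed percentage of the heap on each old space collection
    java -XXcompactRatio:10 -jar myapp.jar
    # disable compaction entirely, or force a full compaction on every old space collection
    java -XXnoCompaction -jar myapp.jar
    java -XXfullCompaction -jar myapp.jar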

Thread Local Allocation
Hotspot does use thread local allocation, but it is hard to find anything in the documentation about it or how to tune it. JRockit uses it by default. It allows threads to allocate objects without any need for synchronization, which is good for allocation speed. The size of a TLA (Thread Local Area) can be configured, and a large TLA can be beneficial for applications where multiple threads allocate a lot of objects. On the other hand, a TLA that is too large can lead to more fragmentation. As a TLA is used exclusively by one thread, its practical size is naturally limited by the number of threads. Thus both decreasing and increasing the default can be good or bad, depending on your application's architecture.
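
To make this concrete, JRockit lets you set minimum and preferred TLA sizes. The values here are placeholders and the exact suboptions depend on the JRockit release:

    # ask for larger thread local areas (minimum 4 KB, preferred 128 KB per thread)
    java -XXtlaSize:min=4k,preferred=128k -jar myapp.jar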

Large and small objects
JRockit also differentiates between large and small objects during allocation. The limit for when an object is considered large depends on the JVM version, the heap size, the garbage collection strategy and the platform used; it is usually somewhere between 2 and 128 KB. Large objects are allocated outside the thread local area and, in case of a generational heap, directly in the old generation. This makes a lot of sense when you start thinking about it: the young generation uses a copy collection, and at some point copying an object becomes more expensive than traversing it in every garbage collection.
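
In older JRockit releases (R27) this threshold could also be changed explicitly; to my knowledge the option was removed in later releases, so treat this as an assumption for your version:

    # treat objects larger than 4 KB as large objects (R27-style option, placeholder value)
    java -XXlargeObjectLimit:4k -jar myapp.jar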

No Permanent Generation
And finally it needs to be noted that JRockit does not have a Permanent Generation. All classes and string constants are allocated within the normal heap area. While that makes life easier on the configuration front, it means that classes can be garbage collected immediately once they are no longer used. In one of my future posts I will illustrate how this can lead to some hard-to-find performance problems.

The IBM JVM
The IBM JVM shares a lot of characteristics with JRockit: the default heap is a continuous one. Especially in WebSphere installations this is often the initial cause of bad performance. It differentiates between large and small objects with the same implications, and it uses thread local allocation by default. It also does not have a Permanent Generation. The IBM JVM supports a generational heap model as well, but it looks more like Sun's than JRockit's.

Figure: The IBM JVM generational heap

Allocate and Survivor act like Eden and Survivor in the Sun JVM. New objects are allocated in one area and copied to the other during garbage collection. In contrast to JRockit, the two areas are switched upon GC. This means that an object is copied multiple times between the two areas before it gets promoted to Tenured. Like JRockit, the IBM JVM has more options than Hotspot to tune the compaction phase. You can turn it off or force it to happen for every GC. In contrast to JRockit, the default triggers compaction based on a series of heuristics, but then performs a full compaction. This can be changed to an incremental one via a configuration flag.
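
A minimal sketch of the corresponding IBM JVM options; the policy names come from the IBM J9 documentation, while the nursery sizes are placeholders to adapt to your own measurements, and the flag for incremental compaction is left out because its name depends on the IBM JVM version:

    # switch from the default continuous heap (optthruput) to the generational collector
    java -Xgcpolicy:gencon -Xmns128m -Xmnx512m -jar myapp.jar
    # low pause alternative on a continuous heap
    java -Xgcpolicy:optavgpause -jar myapp.jar
    # force compaction on every GC, or disable it completely
    java -Xcompactgc -jar myapp.jar
    java -Xnocompactgc -jar myapp.jar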

We see that while the three JVMs essentially try to achieve the same goal, they do so via different strategies, which leads to different behaviour that needs different tuning. With Java 7 Oracle will finally declare G1 (Garbage First) production ready, and G1 is a different beast altogether, so stay tuned.
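
For completeness, G1 can already be enabled explicitly on recent Hotspot builds; a one-line sketch:

    # opt in to the Garbage First collector instead of the default
    java -XX:+UseG1GC -jar myapp.jar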

If you’re interested in hearing me discuss more about WebSphere in a production environment, then check out our upcoming webinar with The Bon-Ton Stores. I’ll be joined by Dan Gerard, VP of Technical & Web Services at Bon-Ton, to discuss the challenges they’ve overcome in operating a complex WebSphere production eCommerce site to deliver great web application performance and user experience. Reserve your seat today to hear me go into more detail about WebSphere and production eCommerce environments.

Related reading:

  1. The impact of Garbage Collection on Java performance
  2. Major GCs – Separating Myth from Reality
  3. JDK6 Update 23 changes CMS Collection counters
  4. The Top Java Memory Problems – Part 1
  5. Troubleshooting response time problems – why you cannot trust your system metrics

More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the Enterprise Java space. Before coming to CompuwareAPM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the center of excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage BigData solutions and how these technologies impact and change the application landscape.
