Turbo-Charging Java for Real-Time Applications

Accelerating code execution

The Java platform is usually perceived as inadequate for real-time applications because of its lack of determinism, that is, its unpredictable execution time.

For example, garbage collection (GC), which removes no-longer-needed Java objects and reduces memory overhead, may automatically and transparently freeze the system from time to time. Such behavior is obviously unacceptable in the real-time world. (A commonly recognized goal of real-time computing is to meet an application's time constraints.)

To address this issue, new Java Virtual Machines (JVMs) are being developed (e.g., JVMs with concurrent GC). In addition, the Real-Time Specification for Java (RTSJ, JSR-001) has been finalized.

Unfortunately, these solutions achieve predictability to the detriment of performance. For example, concurrent GC is less efficient than "stop the world GC" (which requires total CPU usage), and the memory model advocated by the RTSJ requires runtime checks that impact performance.

This article examines a new solution, one that provides determinism for real-time threads and also has the positive side effect of significantly "accelerating" code execution.

The High Cost of Object Creation
Creating new objects in Java has a significant memory/CPU impact. The impact is somewhat proportional to the object size, but creating even small objects is quite expensive. The memory has to be allocated and initialized and, eventually, when the object is no longer needed, garbage collection is used to free up the memory.

Avoiding memory allocation can significantly increase the performance of your application. (The J.A.D.E. library provides an XML parser significantly faster [2x-3x] than any conventional XML parser only because it does not perform dynamic allocation.)

To minimize object creation and its associated overhead, Java programmers can:

  • Use primitive types: For example, using primitive type "double" is 10 times faster and requires one-third less memory than creating instances of class java.lang.Double.
  • Use the "return value" parameter technique: The basic idea is to avoid object creation by passing a local static object to a function. The function returns this extra parameter after modifying its state to correspond to the desired value. Numerous examples of this technique can be found in the Java standard library (for example, Component.getLocation(Point rv)).
Both of these approaches are error-prone, however. Java primitive types cannot be strongly typed, and the "return value" parameter has to be mutable (modifiable at runtime), which is inherently unsafe (see "Item 13: Favor Immutability" in Effective Java Programming Language Guide by Joshua Bloch for a detailed explanation). The "return value" approach also unnecessarily increases the number of Java methods, because conventional methods "without" the additional parameter are still provided (for example, Component.getLocation()). To Sun's credit, allocation costs have dropped considerably in recent Java releases compared to, say, 1.3.x, especially for small, short-lived objects, thanks to the HotSpot generational collector - although the improvement still does not match the speed of outright object reuse.
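
For illustration, here is a minimal sketch of the "return value" parameter technique in plain JDK terms; the MouseTracker class and its fields are hypothetical, but the pattern mirrors java.awt.Component.getLocation(Point rv):

import java.awt.Point;

// Hypothetical class illustrating both styles.
final class MouseTracker {

   private int x, y;

   // Allocating style: creates a new Point on every call.
   Point getLocation() {
      return new Point(x, y);
   }

   // "Return value" parameter style: reuses the caller's Point,
   // mirroring java.awt.Component.getLocation(Point rv).
   Point getLocation(Point rv) {
      rv.setLocation(x, y);
      return rv;
   }
}

// Usage: one Point allocated up front, then reused on each iteration.
// Point cursor = new Point();
// while (running) {
//    tracker.getLocation(cursor); // no garbage generated here
//    ...
// }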

A Real-Time Solution for All Virtual Machines
Garbage collection occurs when memory is being allocated. Therefore, if "new" ready-to-use objects exist (and need not be allocated/initialized because they have been recycled), the memory/CPU is not stressed. As a result, code execution:

  1. Is faster
  2. Is not interrupted by the garbage collector (thereby providing more predictable scheduling)
  3. Has no assignment constraint, as all objects originate from the heap (see RTSJ assignment rules where heap objects cannot refer to scoped objects [JSR-001, pg. 8])
All Java Virtual Machines work in a "heap context" where objects are allocated on demand ("new") and recycled through garbage collection. To support object "recycling" in a transparent manner, we could either use some reference-counting mechanism or work with thread stacks. Due to possible circularities in the general case, the first approach is difficult to implement. The second approach is easier and faster, but the application has to ensure that stack objects are not referenced anymore after the stack is "popped." Fortunately, this risk can be greatly mitigated in practice (using the export method, as we'll see later), which makes this approach far more attractive as a general purpose solution.

Context Programming to the Rescue
Often the same piece of code has to behave differently based on some thread-local information. It's not always practical to pass this information as extra parameters in method calls. For example, arithmetic operations might depend on a common modulo number, or concurrent threads might log information to separate files. For such situations, the open source J.A.D.E. library defines specific zones called contexts, in which threads may execute independently from each other (see Java Addition to Default Environment, jade.dautelle.com). The scope of a context is defined by a try-finally block that starts with a static enter call and ends with a static exit call, the class name identifying the type of context; for example:

LocalContext.enter(); // Context used for local setting.
try {
   DEBUG.setValue(true);
   ...
} finally {
   LocalContext.exit();
}

Contexts can be nested; a nested context inherits the settings/behaviors of its outer contexts (unless those settings/behaviors are mutually exclusive). The same applies to concurrent threads executed while in the context's scope (see Listing 1).
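
Since Listing 1 is not reproduced here, the following sketch shows the nesting behavior using the DEBUG setting from the previous snippet. The LocalContext.Variable type, its constructor, and its getValue()/setValue() accessors are assumed from the J.A.D.E. API and may differ in detail:

import com.dautelle.realtime.LocalContext; // package name assumed

// A context-local setting with a default value of false
// (LocalContext.Variable is assumed from the J.A.D.E. API).
static final LocalContext.Variable DEBUG
   = new LocalContext.Variable(Boolean.FALSE);

...

LocalContext.enter();
try {
   DEBUG.setValue(Boolean.TRUE);   // visible in this context...
   LocalContext.enter();
   try {
      // ...and inherited by this nested context (and by any
      // concurrent threads executed while in its scope).
      boolean debug = ((Boolean) DEBUG.getValue()).booleanValue();
      ...
   } finally {
      LocalContext.exit();
   }
} finally {
   LocalContext.exit();            // outer setting discarded here
}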

Context programming is somewhat complementary to aspect-oriented programming. Whereas context programming is dynamic by nature (thread based), AOP is typically code based (AspectJ tool/compiler). Both can be used in conjunction to insert custom context code automatically.

The Pool Context
This context implements the "stack" approach mentioned earlier. It ensures that most of the CPU is spent performing the actual task rather than maintenance work such as memory allocation and garbage collection. In other words, the CPU is used at maximum efficiency.

Pool contexts allow objects to be recycled so that after the pool/stack of recycled objects gets large enough, no memory allocation need ever be performed.

As far as the application is concerned, pool objects need not be mutable; in fact, it's better (safer) if they are immutable. Remember that within a pool context, creating immutable objects is as efficient as reusing mutable objects.

All objects allocated while in a pool context are recycled at the same time, when the thread exits the pool context. Recycling is extremely fast and independent of the number of objects allocated (a lot faster than GC); it is almost instantaneous, basically consisting of resetting the pool/stack's pointers.
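
To make the "resetting the pool/stack's pointers" idea concrete, here is a deliberately simplified, stand-alone sketch of a stack-like pool; it is not J.A.D.E.'s implementation, only an illustration of why recycling takes constant time regardless of how many objects were handed out:

import java.awt.Point;

// Simplified illustration (not the J.A.D.E. implementation) of a
// stack-like object pool: objects are allocated lazily the first time
// a slot is used, then handed out again on every pass, and "recycling"
// is nothing more than resetting an index.
final class PointPool {

   private final Point[] points;
   private int index; // next free slot

   PointPool(int capacity) {
      points = new Point[capacity];
   }

   Point next() {
      if (points[index] == null) {
         points[index] = new Point(); // allocated once, reused afterwards
      }
      return points[index++];
   }

   // Constant-time recycling, independent of how many objects were used.
   void recycleAll() {
      index = 0;
   }
}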

Listing 2 illustrates how pool contexts can be used to accelerate calculations on multiple inputs.

As you can see in Listing 2, it may be necessary to export important results from the current pool context to the outer context to keep these results from being overwritten after the pool objects are recycled. In most cases, the only object that needs to be exported is the result of the operation; all intermediate/temporary objects can be ignored (they are automatically recycled).
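
Listing 2 is not reproduced here, but the pattern it describes looks roughly like the sketch below. PoolContext.enter()/exit() mirror the LocalContext calls shown earlier, and the export() call on the result is assumed from the J.A.D.E. real-time API (the helper types and methods are hypothetical):

// Sketch only: Input, Result and the compute/refine/finish helpers are
// hypothetical; PoolContext and export() are assumed from the
// com.dautelle.realtime package.
Result process(Input input) {
   PoolContext.enter();
   try {
      Result a = compute(input);       // pool-allocated, recycled on exit
      Result b = refine(a);            // pool-allocated, recycled on exit
      Result result = finish(b);       // pool-allocated...
      return (Result) result.export(); // ...moved to the outer context
   } finally {
      PoolContext.exit();              // recycles a, b and the original result
   }
}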

No Garbage Collection Ever
For some, a real-time application being interrupted by the garbage collector and consequently missing a deadline is simply not acceptable (considered a critical error in hard real time). Fortunately, by using pool contexts it's relatively easy to avoid running the garbage collector.

There will be no garbage collection ever as long as all your threads run in a pool context, only static constants are exported to the heap, and your system state can be updated without allocating new objects (e.g., StringBuffer instead of String, or FastMap instead of HashMap); see Figure 1. (Unlike HashMap, the FastMap class does not allocate a new entry each time an object is added to the collection.)

For concurrent access/modification of the system state, the use of a reentrant lock is recommended, such as com.dautelle.util.ReentrantLock or the new (JDK1.5) java.util.concurrent.locks.ReentrantLock. Provided that factory methods are used instead of the new keyword for object creation, most of the application code remains oblivious to the garbage collection issue. (The new keyword always allocates on the heap; the J.A.D.E. library cannot/does not change the virtual machine's behavior with regard to class instantiation.) Particular care should be taken with JDK library methods that may allocate temporary objects on the heap at each call (setup/initialization heap allocations are okay); these should be avoided or replaced by allocation-free alternatives (e.g., TypeFormat [J.A.D.E. class com.dautelle.util.TypeFormat] for parsing/formatting of primitive types). Listing 3 provides an example of a real-time handler processing UDP messages.
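
Listing 3 itself is not shown here; the following is a minimal sketch of the same idea, with PoolContext and its usage assumed from the J.A.D.E. API. The socket, packet, and buffer are allocated once during setup, so the steady-state loop never touches the heap:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import com.dautelle.realtime.PoolContext; // package name assumed

// Sketch of a real-time UDP handler in the spirit of Listing 3.
public class UdpHandler {

   public static void main(String[] args) throws Exception {
      DatagramSocket socket = new DatagramSocket(5000);
      byte[] buffer = new byte[512];
      DatagramPacket packet = new DatagramPacket(buffer, buffer.length);

      while (true) {
         socket.receive(packet); // blocks; reuses the same packet/buffer
         PoolContext.enter();
         try {
            // Parse and process the message using pool-allocated
            // temporaries only (e.g., TypeFormat for primitives).
            handle(buffer, packet.getLength());
         } finally {
            PoolContext.exit();  // recycles all temporaries at once
         }
      }
   }

   private static void handle(byte[] data, int length) {
      // Application-specific processing (hypothetical).
   }
}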

A Nice Side-Effect: Increase of Execution Speed
The cost of allocating an object on the heap is roughly proportional to the object's size. The cost of reusing an object, however, is independent of its size. In other words, the larger the object, the more performance gain you can expect from using a pool context. For example, adding 1024-bit immutable integers is up to five times faster (LargeInteger versus BigInteger, J.A.D.E. benchmark results). The high performance associated with pool contexts is due not only to object reuse but also to more efficient use of the CPU's internal cache (cache hits are far more frequent when objects are being reused).

Recycling objects is more powerful than merely recycling memory (a.k.a. GC). This is particularly true for objects requiring CPU-intensive setup at initialization (e.g., preallocated linked lists or tables). Unlike hardware-recycled objects, software-recycled objects are as good as new.

Limitations
The strength of Java resides mostly in its comprehensive library. Unfortunately, the Java API may allocate temporary objects on the heap, which may annihilate the performance gained from using pool contexts (if you save 100 allocations, that's good…but if the API does 1,000 allocations in the process of running your code, saving 100 allocations isn't as big a gain as might be imagined). One solution is for the JVM to support pool contexts, making the new keyword context-sensitive. This change would be backward compatible, as the default context is the heap context. Then the whole Java API would be more deterministic and execute faster.

Concurrent Context: Harnessing Hyper-Threading and Multiprocessors Potential
With the JDK1.5 Tiger release, significant work has gone into concurrent programming. Still, the JDK1.5 concurrency packages (java.util.concurrent, java.util.concurrent.atomic, and java.util.concurrent.locks) rely on the dynamic creation of new threads to take advantage of concurrent algorithms, which is usually a no-no in the real-time world. Furthermore, it's inefficient for low-level libraries (too much overhead), and synchronization can be tricky.

To address this particular issue, a concurrent context has been created. It allows real-time applications to take advantage of parallel algorithms on multiprocessor boards, or even on single processors with Hyper-Threading technology (which doubles the number of executing threads per processor), without creating new threads. This is achieved by keeping a limited number of threads on standby; these threads are then used on demand to perform concurrent executions. If all concurrent threads are busy, the current thread executes the concurrent operation itself.

Concurrent contexts are easy to use, provide automatic load balancing between processors with almost no overhead, and require no synchronization code, because the parent thread is not allowed to exit its concurrent context (it blocks on the exit() call) until all concurrent executions are complete. As soon as a concurrent thread completes its execution, it becomes available again for more work, so concurrent threads/processors stay busy most of the time.

Last but not least, concurrent contexts guarantee the same behavior whether the execution is performed by the current thread or by a concurrent thread, provided that the order of concurrent executions has no impact on the behavior. In particular, any exception raised by a concurrent thread is propagated to the parent thread, and concurrent threads execute in the same context as their parent.

ConcurrentContext.enter();
try {
   ConcurrentContext.execute(runnable1);
   ConcurrentContext.execute(runnable2);
   ...
} finally {
   ConcurrentContext.exit(); // Waits for all concurrent threads to complete.
}
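
As a concrete (hypothetical) illustration, the two runnables could each sum one half of an array. Each writes to its own slot of a preallocated array, so no synchronization is needed, and the parent thread blocks in exit() until both halves are done:

// Hypothetical example: summing the two halves of an array in parallel.
final double[] data = getData();      // shared, read-only input (hypothetical)
final double[] partial = new double[2];

ConcurrentContext.enter();
try {
   ConcurrentContext.execute(new Runnable() {
      public void run() { partial[0] = sum(data, 0, data.length / 2); }
   });
   ConcurrentContext.execute(new Runnable() {
      public void run() { partial[1] = sum(data, data.length / 2, data.length); }
   });
} finally {
   ConcurrentContext.exit();          // waits for both runnables to complete
}
double total = partial[0] + partial[1]; // sum() is a hypothetical helper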

Direct Memory Access: Struct and Union
It's not rare for real-time/embedded projects to use Java and C/C++ together. By mixing them, projects get the best of both worlds: the high-performance of C/C++ with the rapid development cycle typically associated with Java.

Until recently, data exchange was problematic because the storage layout of Java objects is not determined by the compiler; it is deferred to runtime and decided by the interpreter (or just-in-time compiler). This approach allows for dynamic loading and binding, but it also makes interfacing with C/C++ code difficult.

This particular issue has been addressed in the form of two public domain classes: Struct and Union. These two classes mimic the C struct and union types. They follow the same alignment rules, support the same features (e.g., bit fields, packing), and make it extremely easy to convert C header files to Java classes (one-to-one mapping).

Using these classes, embedded systems can map Java objects to a physical address to control hardware devices or communicate through shared memory with external apps.
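
For example, a C declaration such as struct Clock { unsigned short seconds, minutes, hours; }; might map to the Java class below. The Struct base class and its Unsigned16 member type are assumed from the J.A.D.E. API; the exact package and accessor names may differ:

import com.dautelle.io.Struct; // package name assumed

// One-to-one mapping of the C struct above; each member is declared
// in the same order as in the C header.
public class Clock extends Struct {
   public final Unsigned16 seconds = new Unsigned16();
   public final Unsigned16 minutes = new Unsigned16();
   public final Unsigned16 hours   = new Unsigned16();
}

Such an instance can then be mapped over a ByteBuffer backed by shared memory or a memory-mapped device region, and its fields read or written with get()/set() style accessors (names assumed).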

Conclusion
Garbage collection is not the only issue preventing Java from being used for real-time systems. Other issues include thread scheduling, accurate timers, synchronization overhead, lock queuing order, class initialization, and maximum interrupt response latency. Until now, however, garbage collection has definitely been a show-stopper: because of it, most real-time systems today are developed in C/C++ despite the existence of Java compilers.

The good news is that whereas before you had to use C/C++ and some real-time OS, now you can use GCJ/J.A.D.E. and the same real-time OS (with JNI/Struct for the interface).

Pool contexts are a substitute for the complicated memory model of the RTSJ. The RTSJ concepts of scoped memory and immortal memory, and the rules for transferring data between these areas, lead to a cumbersome programming style, and the runtime checks this model requires are a real performance killer. However, to see the full advantage of the pool-context approach for real time, you need a real-time kernel. Since the RTSJ (as its Reference Implementation or jRate) is the only real-time Java currently available, it would be interesting to see some results on top of it.

References

  • J.A.D.E. Real-Time FAQ: jade.dautelle.com/api/com/dautelle/realtime/package-summary.html#FAQ
  • RTJ API: rtj.org/doc/index.html
  • Ajile RTJ chips: www.ajile.com/downloads/aJ100Datasheet_1.3.pdf
  • JStamp: jrealtime.systronix.com/
  • Restriction of Java for Embedded Real-Time Systems: www.jopdesign.com/doc/rtjava.pdf
  • The Real-Time for Java Expert Group: www.rtj.org
  • Brosgol, B., et al. (2000). The Real-Time Specification for Java. Addison-Wesley.
  • RTSJ (JSR-001): www.rtj.org/rtsj-V1.0.pdf

    Most Recent Comments
    Pat 08/04/04 10:18:39 AM EDT

    If a real-time Java VM is what you need and you absolutely must have both determinism AND performance... Take a look at PERC from Aonix/NewMonics (www.aonix.com). These guys have been doing this from the beginning and have the best set of tools for building real-world RTJ apps.

    larry 08/03/04 10:40:01 PM EDT

    Interesting article but the author is out of date w.r.t. current state of the art with RTSJ.

    I implemented an RTSJ for J2SE on Solaris based on a 1.4.1 codebase. While there was some performance degradation from runtime checks, it was less than 15% for a 1.4.1 VM. Its real-time determinacy characteristics were comparable to, and in some cases exceeded, many real-time OSes.

    The algorithm we used for managing the checks between heap, immortal, and scoped memory was very efficient and can be found in the literature.

    With a well constructed commercial grade RTSJ VM performance is very good. One should not rely on the reference implementation to base viability estimations of the technology. The reference implementation is designed for correctness and was not intended for performance measurements.

    Anthony Berglas 07/13/04 06:05:09 AM EDT

    Does 1.4 optimize out allocations for inlined value parameters? E.g., does the following actually create any garbage?

    Foo foo() { return new Foo(123); }
    ...
    while (true) { // tight loop
       Foo f = foo();
       f.value...
    }
    // no references to f or things in f here.

    (But either out/byref parameters or being able to return multiple values at once should have been added to Java long ago!)
