Secrets of Java Serialization


Serialization in Java is an operation in which an object's internal state is translated into a stream of bytes. This binary stream, or image of the object, is created in an operating system-neutral network byte order. The image can be written to disk, stored in memory, or sent over a network to a different operating system. This amazing feat requires little or no work on the part of the programmer: just implement the Serializable interface, which contains no methods, pass your object to the writeObject() method of an ObjectOutputStream, and it's serialized! You can serialize an object to or from any I/O device that Java supports.
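For the simplest case, that really is the whole job. Here's a minimal sketch of a round trip (the Point class and its fields are our own illustration):

```java
import java.io.*;

// Implementing the marker interface is all the class itself must do.
class Point implements Serializable {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

public class WriteDemo {
    public static void main(String[] args) throws Exception {
        Point p = new Point(3, 4);

        // Serialize the object -- here to an in-memory buffer, but any
        // OutputStream (file, socket) works the same way.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(p);
        out.close();

        // Read it back; the reconstructed object carries the same state.
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        Point copy = (Point) in.readObject();
        System.out.println(copy.x + "," + copy.y);  // prints 3,4
    }
}
```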

The Serializable interface doesn't contain any code or data; it's a marker interface. If a hierarchy of classes is to be serialized, each class in the hierarchy must implement Serializable. All objects that are in an object hierarchy, or "web of objects," at runtime will be serialized. If any one of them doesn't implement Serializable, a NotSerializableException will be thrown.

Serialization works across platforms - an object serialized on Solaris can be read on Windows. This is a key component in making Java's write once, run anywhere philosophy real, as well as the core requirement that Java programs be able to effectively and easily communicate with each other.

The default serialization capabilities often do the job. When that's not the case, Java allows progressive customization of the process at the class level. A class can contain its own readObject() and writeObject() methods to add data to the stream, set variables to special values, and more. We've found that we use readObject() frequently to reinitialize fields that it doesn't make sense to serialize, such as a database connection handle. We use writeObject() a lot less.
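As a sketch of that readObject() pattern, here's a class with a transient "connection" that's rebuilt on deserialization (the class and field names are ours, and the connection is a stand-in String rather than a real database handle):

```java
import java.io.*;

// A transient field holds state that makes no sense to serialize (the
// database-handle case above). readObject() rebuilds it on arrival.
class Worker implements Serializable {
    String dbUrl = "jdbc:example";   // serialized normally
    transient String connection;     // never written to the stream

    Worker() { connection = open(dbUrl); }

    // Stand-in for acquiring a real resource.
    private static String open(String url) { return "connected:" + url; }

    // Called by the serialization machinery when an instance is read back.
    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();      // restore the non-transient fields
        connection = open(dbUrl);    // reinitialize the transient one
    }
}
```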

For those who really need a custom solution, Java provides the Externalizable interface, which allows (in fact, forces) the class that implements it to take complete control of reading and writing instances of itself to the I/O stream. In over a year of using serialization every day, we've never had to resort to this level of customization. Still, it's nice to know that it's there.
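For the curious, a minimal Externalizable class might look like this (Pair is our own example; note that Externalizable requires a public no-argument constructor):

```java
import java.io.*;

// With Externalizable, nothing is written by default; the class itself
// reads and writes every field it wants preserved.
public class Pair implements Externalizable {
    int a, b;

    public Pair() { }                          // required public no-arg ctor
    public Pair(int a, int b) { this.a = a; this.b = b; }

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(a);
        out.writeInt(b);
    }

    public void readExternal(ObjectInput in) throws IOException {
        a = in.readInt();
        b = in.readInt();
    }
}
```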

What Are Some Uses for Serialization?
Network
Serialization is much easier than writing message formats for each type of message that is to be passed between a client/server pair.

Just send the common objects back and forth and you're done. Designing messaging protocols is notoriously finicky. Once again Java saves the day and makes a previously arduous task a simple matter of a single method call.

Serialization over networks is the basis of Sun's communications infrastructure: RMI is built on it, as are the communications subsystems in Jini, and naming services such as JNDI can bind and look up serialized objects.

File I/O
As we stated before, an object may be serialized to any Java I/O byte stream class. So it's trivial to serialize an object or a whole hierarchy of them to a disk file. We use this feature of the language for two reasons:

  1. Saving an application program's state for the next time it's started.
  2. Caching objects on a disk before they're needed. These objects are complex hierarchies that are created from database access and extensive algorithmic processing, and thus must be created ahead of time.
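The save-and-restore idiom behind use 1 can be sketched in a few lines (the class name, method names, and the choice of an ArrayList as "state" are our own illustration):

```java
import java.io.*;
import java.util.ArrayList;

// Writing an application's state to disk and reading it back later.
public class StateDemo {
    public static void save(ArrayList<String> state, File f) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(state);
        }
    }

    @SuppressWarnings("unchecked")
    public static ArrayList<String> load(File f)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream(f))) {
            return (ArrayList<String>) in.readObject();
        }
    }
}
```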

RDBMS
Another use of serialization is to store complete objects as BLOBs in a relational database. This can provide a way to use traditional databases with an object-oriented programming language without needing to move to a true object-oriented database.

Tips and Traps
Trap 1: Hidden Caching
We found our first trap in a client/server setting. We wanted to resend objects when their values changed, but we saw that although an object had been sent again, it still had its original values on the receiving end. Much to our surprise, we discovered that the default behavior of Java serialization is to serialize each unique object (as determined by reference identity with all objects previously written to the stream) just once. The serialized form is cached, and only a back-reference to it is written on each subsequent send. This design decision speeds up serialization for applications that just pass objects as messages and don't modify them between sends. However, it's not obvious to beginning users that this caching is happening: each time you try to serialize the object, the receiver gets that first cached instance again. If you want the serialized output to reflect changes in the "source" object, call reset() on the ObjectOutputStream before writing it again.
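The trap and its cure can be seen in a few lines (Counter and the stream setup are our own illustration):

```java
import java.io.*;

class Counter implements Serializable {
    int n;
}

public class ResetDemo {
    public static void main(String[] args) throws Exception {
        Counter c = new Counter();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);

        c.n = 1;
        out.writeObject(c);   // full serialized image, n == 1
        c.n = 2;
        out.writeObject(c);   // cached! only a back-reference is written
        out.reset();          // forget previously written objects
        out.writeObject(c);   // now the n == 2 state goes out
        out.close();

        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        Counter first  = (Counter) in.readObject();
        Counter second = (Counter) in.readObject();  // same object as first
        Counter third  = (Counter) in.readObject();
        System.out.println(first.n + " " + second.n + " " + third.n);  // 1 1 2
    }
}
```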

Trap 2: Versioning
If you attempt to send a serialized object to a running program that knows only about an earlier version of the class, an exception will be thrown. This happens because every serializable class has a version stamp, the serialVersionUID; if you don't declare one in the source, the serialization runtime computes one from the class's structure. When the internal structure of the class changes, the computed stamp changes too. So when a serialized object is received by a program, the version recorded in the stream must match the version stamp of the class the program knows about. If they don't match, Java throws an InvalidClassException to prevent even worse things from happening, such as old code misreading the fields of a new object.

You can take over version stamp creation yourself, but watch carefully for versions that really aren't compatible (such as the addition or deletion of a new class member). If you do want to take control of versioning of a class yourself, you must declare the following in the class's source code:

static final long serialVersionUID = 12L; // 12 is just an example!

Whenever you make changes that would prevent compatible serialization, bump up the version number.

Rather than trying to keep track of this ourselves, we let Java do the work for us through the class java.io.ObjectStreamClass. We prevent the "incompatible version" problem by sharing JAR files between programs that send serialized objects back and forth. The common JAR files contain all the shared classes. This prevents two different programs from trying to use different versions of the same class.
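For example, ObjectStreamClass will report the version stamp for any serializable class (Sample is our own example):

```java
import java.io.*;

// The runtime's view of a class's version stamp, via ObjectStreamClass.
class Sample implements Serializable {
    private static final long serialVersionUID = 12L;   // declared by hand
    int value;
}

public class VersionDemo {
    public static void main(String[] args) {
        ObjectStreamClass osc = ObjectStreamClass.lookup(Sample.class);
        // Reports the declared serialVersionUID; had we not declared one,
        // this would be the hash the runtime computes from the structure.
        System.out.println(osc.getSerialVersionUID());  // prints 12
    }
}
```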

Sun also changes the serialization format occasionally. This is something to watch for if you're generating a serialized object store with some persistence, for instance, a disk cache that will remain around for some time. Fortunately, Sun seems intent on incorporating ways to work around the problem when it does make these changes. Sun posted the following on the Java Web site (www.java.sun.com/) with the release of version 1.2:

It was necessary to make a change to the serialization stream format in JDK 1.2 that isn't backwards compatible to all minor releases of JDK 1.1. To provide for cases where backwards compatibility is required, a capability has been added to indicate what PROTOCOL_VERSION to use when writing a serialization stream. The method ObjectOutputStream.useProtocolVersion takes as a parameter the protocol version to use to write the serialization stream.
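A sketch of that call, using the PROTOCOL_VERSION_1 constant from java.io.ObjectStreamConstants (the helper method is our own):

```java
import java.io.*;

public class ProtocolDemo {
    // Write an object using the older (pre-1.2) stream protocol so that
    // JDK 1.1 readers can still understand it. useProtocolVersion() must
    // be called before any object is written to the stream.
    static byte[] writeCompat(Object o) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.useProtocolVersion(ObjectStreamConstants.PROTOCOL_VERSION_1);
        out.writeObject(o);
        out.close();
        return buf.toByteArray();
    }
}
```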

Tip 1: Handling Static Variables
Java classes often hold some globally relevant value in a static class variable. We won't enter into the long history of the debate over the propriety of global variables - let's just say that programmers continue to find them useful and the alternatives suggested by purists aren't always practical.

For static variables that are initialized when declared, serialization doesn't present any special problems. The first time the class is used, the variable in question will be set to the correct value.

Some statics can't be initialized this way. They may, for instance, be set by a human during the running time of the program. Let's say we have a static variable that turns on debugging output in a class. This variable can be set on a server by sending it some message, perhaps from a monitor program. We'll also imagine that when the server gets this message, the operator wants debugging turned on in all subsequent uses of the class in the clients that are connected to that server.

The programmer is now faced with a difficulty. When a serialized instance of the class in question arrives at the client, the static variable's value doesn't come with it; static state is never written to the stream. The client simply sees whatever value its own copy of the class already holds, typically the default set when the class was initialized. How can the client programs receive the new correct value?

The programmer could create another message type and transmit that to the client; however, this requires a proliferation of message types, marring the simplicity that the use of serialization can achieve in messaging. The solution we've come up with is for the class that needs the static transmitted to include a "static transporter" inner class. This class knows about all the static variables in its outer class that must be set. It contains a member variable for each static variable that must be serialized. StaticTransporter copies the statics into its member variables in the writeObject() method of the class. The readObject() method "unwraps" this bundle and transmits the server's settings for the static variables to the client. Since it's an inner class, it'll be able to write to the outer class's static variables, regardless of the level of privacy with which they were declared.
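Here's one way the pattern might be sketched (all the names are ours; we use a static nested class so the transporter doesn't drag an outer instance into the stream):

```java
import java.io.*;

class Service implements Serializable {
    static boolean debugOn = false;   // static state: never serialized

    // The "static transporter": carries a snapshot of the statics.
    static class StaticTransporter implements Serializable {
        private boolean debug;

        private void writeObject(ObjectOutputStream out) throws IOException {
            debug = debugOn;          // snapshot the static before writing
            out.defaultWriteObject();
        }

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            debugOn = debug;          // push the server's value into the static
        }
    }
}
```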

Tip 2: Easy Cloning
Serialization provides a simple way to clone. Instead of writing ugly and hard-to-maintain clone methods, simply serialize the object to memory, read it back to a new reference, and you have a new deep copy. The deep-copy idea is important. When a shallow copy is performed (the default behavior of Java cloning), only references to data members are copied. If an object of type Foo holds a reference to a String s, when a Foo is cloned, both the original and the new copy will point to the same copy of s. Sometimes this is fine, but other times you need a deep copy.

In a deep copy the new object will get new copies of its data members, not just new pointers to the same data members. Most of the time when you want this behavior, you want it recursively so that the members' members are deep copied as well. A major problem with clone methods is the difficult code that's required for all the deep copying a class might require. To solve this problem we wrote a class called Cloner that removed the need to write clone methods at all. Cloner is small enough so it can be included here in its entirety:

public class Cloner {

    public Cloner() { }

    public static Object clone(Object o) throws Exception {
        ByteArrayOutputStream b = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(b);
        out.writeObject(o);
        out.close();
        ByteArrayInputStream bi = new ByteArrayInputStream(b.toByteArray());
        ObjectInputStream in = new ObjectInputStream(bi);
        Object no = in.readObject();
        return no;
    }
}

To use this class is easy:

Foo foo = new Foo();
Foo bar = (Foo)Cloner.clone(foo);

Now "bar" is a completely new deep copy of "foo"!

As Bruce Eckel points out in his excellent book Thinking in Java, it's an order of magnitude slower to clone this way. However, in situations where you need to do a deep copy and are more worried about development time than running time, this is a good alternative. And since objects aren't cloneable by default, you need access to their source to make them so. You can subclass a class just to make it serializable, but this only works if you're the one constructing it. If you need to deep copy a web of objects, some of which are created inside libraries that you don't have the source for, serialization may be your only alternative.

Tip 3: Transient
Make liberal use of the transient keyword to trim down the size of your serialized objects. Mark any class elements that don't need to be passed between programs as transient. Some of these elements, such as file handles, are useless when passed to another program. Other elements are needed, but can be re-created from other data members. This can be done by writing your own readObject() method.

For example, perhaps you store some objects in a linked list, but also keep them in a hashtable for quick lookup by value. Since the hashtable can easily be re-created from the list, you can mark it as transient. If the table is large, re-creating it can be much faster than serializing, transmitting, and deserializing it.
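A sketch of that idiom (Registry and its fields are our invention):

```java
import java.io.*;
import java.util.*;

// The list is serialized; the lookup table is transient and rebuilt.
class Registry implements Serializable {
    List<String> items = new ArrayList<>();
    transient Map<String, Integer> index = new HashMap<>();

    void add(String s) {
        index.put(s, items.size());
        items.add(s);
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        index = new HashMap<>();      // re-create the table from the list
        for (int i = 0; i < items.size(); i++)
            index.put(items.get(i), i);
    }
}
```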

The use of transient can also be important for security considerations. If a class contains data members that hold information that shouldn't be made public, such as a clear text version of a password or employees' salaries, a malicious program might be able to read them from the serialized byte stream. Declaring those members transient will prevent them from ever being written to serialized output. (Java also allows for the encryption of byte streams if further security measures are needed.)

Summary
Serialization is an important and useful addition to the Java language. However, before you make use of it, it's important to understand the pitfalls of the technique and know how to turn its strengths to your advantage. Sun has made it the foundation of Java's communications infrastructure. You can make it the foundation of yours as well.

More Stories By Gene Callahan

Gene Callahan, president of St. George Technologies, designs and implements Internet projects. He has written articles for several national and international industry publications.
Rob Dodson is a software developer who writes options-trading software in Java and C++ for OTA Limited Partnership.
Previous projects include weather analysis software, tactical programs for Navy submarines, and code for electronic shelf labels.


