XML Serialization of Java Objects

Java serialization was initially used to support remote method invocation (RMI), allowing argument objects to be passed between two virtual machines.

RMI works best when the two VMs contain compatible versions of the class being transmitted, and can reliably transmit a binary representation of the object based on its internal state. When an object is serialized, it must also serialize the objects to which its fields refer - resulting in what is commonly called an object graph of connected components. Although the transient keyword can be used to control the extent to which the serialization process penetrates the object graph, this level of control is seldom enough.

Many have tried to use Java's serialization to achieve the so-called "long-term persistence" of data - where the serialized form of a Java data structure is written to a file for later use. One such area is the development tools domain, in which designs must be saved for later use. Because the logic that saves and restores serialized objects is based on the internal structure of the constituent classes, any changes to those classes between the time that the object was saved and when it was retrieved may cause the deserialization process to fail outright; for example, a field was added or removed, existing fields were renamed or reordered, or the class's superclass or package was altered. Such changes are to be expected during the development process, and any mechanism that relies on the internal structure of all classes being identical between versions to work has the odds stacked against it. Over the last few years the "versioning issues" associated with Java's serialization mechanism have indeed proved to be insurmountable and have led to widespread abandonment of Java's serialization as a viable long-term persistence strategy in the development tools space.

To tackle these problems with Java serialization, a Java Specification Request (JSR 57) was created, titled "Long-Term Persistence for JavaBeans." The resulting API is included in JRE 1.4 as part of the java.beans package. This article describes the mechanism with which the JSR solved the problems of long-term persistence, and how you can take control of the way the XMLEncoder generates archives to represent the data in your application.

We'll start by dispelling two popular myths that have grown up around XML serialization: that it can only be used for JavaBeans and that all JavaBeans are GUI widgets. In fact, the XMLEncoder can support any public Java class; these classes don't have to be JavaBeans and they certainly don't have to be GUI widgets. The only constraint that the encoder places on the classes it can archive is that there must be a means to create and configure each instance through public method calls. If the class implements the getter/setter paradigm of the JavaBeans specification, the encoder can achieve its goal automatically - even for a class it knows nothing about. On top of this default behavior, the XMLEncoder comes with a small but very powerful API that allows it to be "taught" how to save instances of any class - even if they don't use any of the JavaBeans design patterns. In fact, most of the Swing classes deviate from the JavaBeans specification in some way, yet the XMLEncoder handles them via a set of rules with which it comes preconfigured.

The XMLEncoder is currently spec'ed to provide automatic support for all subclasses of Component in the SDK and all of their property types (recursively). This means that as well as being able to serialize all of the AWT and Swing GUI widgets, the XMLEncoder can also serialize primitive values (int, double, etc.), strings, dates, arrays, lists, hashtables (including all Collection classes), and many other classes that you might not think of as having anything to do with JavaBeans. The support for all these classes is not "hard-wired" into the XMLEncoder; instead it is provided to the encoder through the API that it exposes for general use. The variety in the APIs among even the small subset of classes mentioned earlier should give some idea of the generality and scope of the persistence techniques we'll cover in the next sections.
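To make this concrete, here is a minimal sketch - the file name and data are just examples - that archives an ordinary Map containing a List, with no GUI widgets and no hand-written JavaBeans, using nothing but the encoder's preconfigured rules:

import java.beans.XMLEncoder;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NonBeanArchive {
    public static void main(String[] args) throws Exception {
        // An ordinary data structure - no GUI widgets, no custom JavaBeans.
        Map order = new HashMap();
        order.put("customer", "Jane Doe");
        List items = new ArrayList();
        items.add("widget");
        items.add("sprocket");
        order.put("items", items);

        // The encoder's built-in rules handle Maps and Lists directly.
        XMLEncoder e = new XMLEncoder(
            new BufferedOutputStream(new FileOutputStream("order.xml")));
        e.writeObject(order);
        e.close();
    }
}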

Background
When problems are encountered with an object stream, they're hard to correct because the format is binary. An XML document is human readable, and therefore easier for a user to examine and manipulate when problems arise. To serialize objects to an XML document, use the class java.beans.XMLEncoder; to read objects, use the class java.beans.XMLDecoder.

One reason object streams are brittle is that they rely on the internal shape of the class remaining unchanged between encoding and decoding. The XMLEncoder takes a completely different approach here: instead of storing a bit-wise representation of the field values that make up an object's state, the XMLEncoder stores the steps necessary to create the object through its public API. There are two key factors that make XML files written this way remarkably robust when compared with their serialized counterparts.

First, many changes to a class's internal implementation can be made while preserving backward compatibility in its public APIs. In public libraries, this is often a requirement of new releases - as breaking a committed public API would break all the third-party code that had used the library in its older form. As a result, many software vendors have internal policies that prevent their developers from knowingly "breaking" any of the public APIs in new releases. While exceptions inevitably arise, they are on a much, much smaller scale than the internal changes that are made to the private implementations of the classes within the library. In this way, the XMLDecoder derives much of its resilience to versioning by aligning its requirements with those of developers who program against the APIs directly.

The second reason for the stability of the decoding process as implemented by the XMLDecoder is just as important. If you were to take an instance of any class, choose an arbitrary member variable, and set it to null, the behavior of that instance would be completely undefined in all subsequent operations - and a bug-free implementation would be entitled to fail catastrophically under these circumstances. This is effectively what happens when a field is added to a new version of a class, and it is why people cross their fingers when trying to deserialize an instance of a class that was written out with an older version. The XMLEncoder, by contrast, doesn't store a list of private fields but a program that represents the object's state. Here's an XML file representing a window with the title "Test":

<?xml version="1.0" encoding="UTF-8"?>
<java version="1.4.1" class="java.beans.XMLDecoder">
  <object class="javax.swing.JFrame">
    <void property="title">
      <string>Test</string>
    </void>
    <void property="visible">
      <boolean>true</boolean>
    </void>
  </object>
</java>

XML archives, written by XMLEncoder, have exactly the same information as a Java program - they're just written using an XML encoding rather than a Java one. Here's what the above program would look like in Java:

JFrame f = new JFrame();
f.setTitle("Test");
f.setVisible(true);

When a backward compatibility issue arises in one of the classes in the archive, it may cause one of the earlier statements to fail. A new version of the class might, for example, choose not to define the "setTitle()" method. When this happens, the XMLDecoder detects that this method is now missing from the class and doesn't try to call it. Instead, it issues a warning, ignores the offending statement, and continues with the other statements in the file. The critical point is that not calling the "setTitle()" method does not violate the contract of the implementation (as deleting an instance variable would), and the resulting instance should be a valid and fully functional Java object. If the resulting Java object fails in any way, an ordinary Java program could be written against its API to demonstrate a genuine bug in its implementation.
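If you want to observe or handle these warnings yourself, an ExceptionListener can be supplied when the decoder is constructed. Here's a minimal sketch, assuming the archive above was saved to a file called "frame.xml":

import java.beans.ExceptionListener;
import java.beans.XMLDecoder;
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import javax.swing.JFrame;

XMLDecoder d = new XMLDecoder(
    new BufferedInputStream(new FileInputStream("frame.xml")),
    null,                               // no owner object
    new ExceptionListener() {
        public void exceptionThrown(Exception ex) {
            // Called for each statement the decoder has to skip.
            System.err.println("Skipped statement: " + ex);
        }
    });
JFrame frame = (JFrame) d.readObject();
d.close();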

The vendors of popular Java libraries tend to devote significant resources toward programs to manage demonstrable bugs of this kind and enlist the support of the development community to work toward their eradication - Sun's "BugParade" is a well-known example. As a result of these kinds of programs, bugs that can be demonstrated by simple "setup code" tend to be rare in mature libraries. Once again, the XMLDecoder benefits here as it's able to ride on the coattails of the Java developer by using the public APIs of the classes instead of relying on special privileges to circumvent them.

Encoding of JavaBeans
To illustrate the XMLEncoder, this article shows serialization in a number of scenarios using an example Person class. These range from simple JavaBeans encoding through nondefault construction to custom initialization.

In the simplest scenario, the class Person has String fields for firstName and lastName, together with get and set methods.

public class Person {
    private String firstName;
    private String lastName;
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public void setFirstName(String str) { firstName = str; }
    public void setLastName(String str) { lastName = str; }
}

The following code creates an encoder and serializes a Person.

FileOutputStream os = new FileOutputStream("C:/cust.xml");
XMLEncoder encoder = new XMLEncoder(os);
Person p = new Person();
p.setFirstName("John");
encoder.writeObject(p);
encoder.close();

The XML file created shows that the Person class has been encoded and that its firstName property is the string "John".

<?xml version="1.0" encoding="UTF-8"?>
<java version="1.4.1" class="java.beans.XMLDecoder">
  <object class="Person">
    <void property="firstName">
      <string>John</string>
    </void>
  </object>
</java>

When the file is decoded with the XMLDecoder, the Person class will be instantiated with its default constructor, and the firstName property set by calling the method setFirstName("John").

FileInputStream is = new FileInputStream("C:/cust.xml");
XMLDecoder decoder = new XMLDecoder(is);
Person p = (Person)decoder.readObject();
decoder.close();

To understand how to leverage the encoder and decoder for custom serialization requires an understanding of the JavaBeans component model. This describes a class's interface in terms of a set of properties, each of which can have a get and set method. To determine the set of operations required to re-create an object, the XMLEncoder creates a prototype instance using its default constructor and then compares the value of each property between this prototype and the object being serialized. If any of the values don't match, the encoder adds that property value to the graph of objects to be serialized, and so on until it has a complete set of the objects and properties required to re-create the original object. When the encoder reaches objects that can't be broken down any further, such as Java's strings, ints, or doubles, it writes these values directly to the XML document as tag values. For a complete list of these primitive values and their associated tags, see http://java.sun.com/products/jfc/tsc/articles/persistence3/index.html.

To serialize an object, the XMLEncoder uses the Strategy pattern and delegates the logic to an instance of java.beans.PersistenceDelegate. The persistence delegate is given the object being serialized and is responsible for determining which API methods can be used to re-create the same instance in the VM in which it will be decoded. The XMLEncoder then executes that API to create the prototype instance, which it gives to the delegate together with the original object being serialized, so the delegate can determine the API methods needed to re-create the nondefault state.

The method XMLEncoder.setPersistenceDelegate(Class objectClass, PersistenceDelegate delegate) is used to set a customized delegate for an object class. To illustrate this we'll change the original Person class so that it no longer conforms to the standard JavaBeans model, and show how persistence delegates can be used to teach the XMLEncoder to successfully serialize each instance.

Constructor Arguments
One of the patterns that can be taught to the XMLEncoder is how to create an instance where there is no zero-argument constructor. The following is an example of this in which a Person must be constructed with its firstName and lastName as arguments.

public Person(String aFirstName, String aLastName) {
    firstName = aFirstName;
    lastName = aLastName;
}

In the absence of any customized delegate, the XMLEncoder uses the class java.beans.DefaultPersistenceDelegate. This expects the instance to conform to the JavaBeans component model with a zero-argument constructor and JavaBeans properties controlling its state. For the Person whose property values are supplied as constructor arguments, an instance of DefaultPersistenceDelegate can be created with the list of property names that represent the constructor arguments.

XMLEncoder e = new XMLEncoder(os);
Person p = new Person("John", "Smith");
e.setPersistenceDelegate(Person.class,
    new DefaultPersistenceDelegate(
        new String[] { "firstName", "lastName" }));
e.writeObject(p);

When the XMLEncoder creates the XML for the Person object, it uses the supplied instance of DefaultPersistenceDelegate, queries the values of the firstName and lastName properties, and creates the following XML document.

<?xml version="1.0" encoding="UTF-8"?>
<java version="1.4.1" class="java.beans.XMLDecoder">
  <object class="Person">
    <string>John</string>
    <string>Smith</string>
  </object>
</java>

The result is a record of the object's state, but written in such a way that the XMLDecoder can locate and call the public constructor of the Person class just as a Java program would. In the previous XML document, where Person was a standard JavaBeans component, the nondefault properties were specified with named <void property="propertyName"> tags containing the property values; here the constructor arguments appear directly inside the <object> tag.

Although custom encoding rules can be supplied to the XMLEncoder, this is not true of the XMLDecoder. The XML document represents the API steps to re-create the serialized objects in a target VM. One advantage of not having custom decoder rules is that only the environment that serializes the objects requires customization, whereas the target environment just requires the classes with unchanged APIs. This makes it ideal for the following scenario - serialization of an object graph within a development tool that has access to design-time customization, where the XML document will be read in a runtime environment that does not have access to the persistence delegates used during encoding.

Custom Instantiation
In addition to a class being constructed with property values as arguments, custom instantiation can include use of factory methods. An example of this would be if Person's constructor were package protected and instances of the Person class could only be created by calling a static createPerson() method defined in a PersonFactory class.

To write a persistence delegate requires a basic understanding of how the encoder creates its set of operations that will re-create the serialized objects when the stream is deserialized. The XMLEncoder uses the command pattern to record each of the required method calls as instances of the class java.beans.Statement. Each Statement represents an API call in which a method is sent to a target, together with any arguments. Commands that are responsible for the instantiation of objects are instances of java.beans.Expression, a subclass of Statement that returns a value. Each object in the graph is represented by the Expression that creates it and a set of Statements that are used to initialize it.
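To give a feel for these commands, here's a small illustrative sketch - not code you would normally write yourself, since the encoder and decoder build these objects internally - that expresses the earlier JFrame example as one Expression and two Statements:

import java.beans.Expression;
import java.beans.Statement;
import javax.swing.JFrame;

public class CommandSketch {
    public static void main(String[] args) throws Exception {
        // "new JFrame()" expressed as an Expression - a command with a return value.
        Expression create = new Expression(JFrame.class, "new", new Object[0]);
        JFrame frame = (JFrame) create.getValue();

        // "frame.setTitle("Test")" and "frame.setVisible(true)" as Statements.
        new Statement(frame, "setTitle", new Object[] { "Test" }).execute();
        new Statement(frame, "setVisible", new Object[] { Boolean.TRUE }).execute();
    }
}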

For general control of instantiation, a subclass of the PersistenceDelegate class should be created with a specialized instantiate() method. The return value is the java.beans.Expression that indicates to the encoder which method or constructor should be used to create (or retrieve) the object. The returned Expression includes the object, the target (normally the class that defines the constructor), the method name (normally the fake name "new," which indicates a constructor call), and the argument values that the method or constructor takes.

The first argument of the instantiate() method is the instance of the Person object being serialized, and the second object is the encoder (see Listing 1).
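Listing 1 isn't reproduced inline, but a minimal sketch of such a delegate, registered on the encoder, might look like the following; it assumes PersonFactory exposes a static createPerson() method that takes the last name, as the archive below suggests:

import java.beans.DefaultPersistenceDelegate;
import java.beans.Encoder;
import java.beans.Expression;

encoder.setPersistenceDelegate(Person.class, new DefaultPersistenceDelegate() {
    protected Expression instantiate(Object oldInstance, Encoder out) {
        Person p = (Person) oldInstance;
        // "Re-create oldInstance by calling PersonFactory.createPerson(lastName)".
        return new Expression(oldInstance, PersonFactory.class,
            "createPerson", new Object[] { p.getLastName() });
    }
});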

When the XMLEncoder serializes the Person instance, instead of the DefaultPersistenceDelegate that uses standard JavaBeans rules for properties, it uses the anonymous inner class we registered as the persistence delegate for Person.class. The resulting XML follows. The <object> tag now includes the static method createPerson as well as the class name, and the arguments are specified as child tags.

<?xml version="1.0" encoding="UTF-8"?>
<java version="1.4.1" class="java.beans.XMLDecoder">
  <object class="PersonFactory" method="createPerson">
    <string>Smith</string>
    <void property="firstName">
      <string>John</string>
    </void>
  </object>
</java>

The inner class created for the Person persistence delegate subclasses DefaultPersistenceDelegate, so the firstName property value of "John" is included in the XML document; however, no property tag is included for lastName. This is because the XMLEncoder compares the prototype instance of Person against the instance being serialized to determine which property values are not their defaults and need to be included in the XML document. The method that does this is protected void initialize(Class type, Object oldInstance, Object newInstance, Encoder out). The oldInstance argument is the object being serialized and the newInstance is the prototype. Because the prototype instance is created using the Expression returned by the persistence delegate's protected Expression instantiate(Object oldInstance, Encoder encoder) method, the newInstance argument already has its lastName set to the same value as the oldInstance, so the encoder doesn't see their values as different and does not serialize a property value for lastName.

Custom State
The DefaultPersistenceDelegate assumes that the state of the oldInstance can be determined and restored by using the JavaBeans component model for properties. The list of properties for a class is retrieved using the method java.beans.Introspector.getBeanInfo(Class aClass).getPropertyDescriptors(). Each property is an instance of java.beans.PropertyDescriptor and includes a get and set method. The Introspector uses a set of rules matching method name pairs to create properties, although these rules can be overridden by supplying a specific BeanInfo class. The BeanInfo class can use a different set of methods than those the Introspector would otherwise have determined as the property's get and set methods. However, it can't deal with scenarios in which there is no get and set pair at all. For these the persistence delegate needs to be customized; as an example, we'll give Person a multivalued property called nicknames.

private List nicknames = new ArrayList();
public void addNickname(String name) { nicknames.add(name); }
public List getNicknames() { return nicknames; }

Nicknames are added to the class one at a time using the addNickname() method, and the complete list is retrieved using getNicknames(). The encoder needs to iterate through the nicknames and create an archive that uses the addNickname() method to re-create the Person.

The persistence delegate will subclass DefaultPersistenceDelegate, which assumes construction of the class through Person's default constructor, and will override the initialize() method that's responsible for determining the statements required to re-create the state of the oldInstance (see Listing 2).

The persistence delegate iterates through the nicknames and for each one adds a statement to the encoder that specifies the API to re-create the nickname. For this the Statement includes the target of the method (the Person oldInstance), the method name (addNickname), and the arguments (the nickname) (see Listing 3).
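Listings 2 and 3 aren't reproduced inline; under the assumptions above, a minimal sketch of such a delegate, registered on the encoder, could look like this:

import java.beans.DefaultPersistenceDelegate;
import java.beans.Encoder;
import java.beans.Statement;
import java.util.Iterator;
import java.util.List;

encoder.setPersistenceDelegate(Person.class, new DefaultPersistenceDelegate() {
    protected void initialize(Class type, Object oldInstance,
                              Object newInstance, Encoder out) {
        // Let the default delegate record the ordinary read/write properties first.
        super.initialize(type, oldInstance, newInstance, out);
        List nicknames = ((Person) oldInstance).getNicknames();
        for (Iterator i = nicknames.iterator(); i.hasNext();) {
            // Record "oldInstance.addNickname(nickname)" in the archive.
            out.writeStatement(new Statement(oldInstance, "addNickname",
                new Object[] { i.next() }));
        }
    }
});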

Specifying Delegates in BeanInfo Classes
In the examples used so far the custom persistence delegate was set directly onto the XMLEncoder by calling the method setPersistenceDelegate(Class,PersistenceDelegate). This works if you're the author of the code that's responsible for performing the serialization, but in some scenarios another piece of software such as an IDE tool is responsible for encoding the JavaBeans. In this situation you must teach the tool about the delegate that it should use for your class; this is done by specifying the delegate class name in the BeanDescriptor for a string key of "persistenceDelegate". For example, if the Person class is going to be introduced into an IDE together with PersonBeanInfo, the getBeanDescriptor() method would be specialized.

public class PersonBeanInfo extends SimpleBeanInfo {
    public BeanDescriptor getBeanDescriptor() {
        BeanDescriptor result = new BeanDescriptor(Person.class);
        result.setValue("persistenceDelegate", PersonPersistenceDelegate.class);
        return result;
    }
}

If the PersonBeanInfo is not in the same package as the Person class, the search path of the Introspector in the tool will need to be updated to include the BeanInfo's package.
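For example, the search path can be extended programmatically; the package name below is purely hypothetical:

import java.beans.Introspector;

String[] path = Introspector.getBeanInfoSearchPath();
String[] extended = new String[path.length + 1];
System.arraycopy(path, 0, extended, 0, path.length);
extended[path.length] = "com.example.beaninfo";   // hypothetical BeanInfo package
Introspector.setBeanInfoSearchPath(extended);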

Another way in which BeanInfo classes can be used to leverage persistence is by marking properties as transient. When DefaultPersistenceDelegate is responsible for encoding the JavaBean, it looks at all the available read/write properties and compares the existing values on the object being serialized against the values on the prototype instance. To flag a property so that it will be ignored, the key "transient" should be set to the value Boolean.TRUE. For example, if the "firstName" property should be considered transient, the getPropertyDescriptors() method on PersonBeanInfo could be specialized as shown in Listing 4.
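Listing 4 isn't reproduced inline; a minimal sketch of how such a getPropertyDescriptors() method might look is shown below (the getBeanDescriptor() method from the previous example could sit alongside it in the same PersonBeanInfo class):

import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;
import java.beans.SimpleBeanInfo;

public class PersonBeanInfo extends SimpleBeanInfo {
    public PropertyDescriptor[] getPropertyDescriptors() {
        try {
            PropertyDescriptor firstName =
                new PropertyDescriptor("firstName", Person.class);
            PropertyDescriptor lastName =
                new PropertyDescriptor("lastName", Person.class);
            // Flag firstName so that persistence delegates ignore it.
            firstName.setValue("transient", Boolean.TRUE);
            return new PropertyDescriptor[] { firstName, lastName };
        } catch (IntrospectionException e) {
            return null;   // fall back to automatic introspection
        }
    }
}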

Conclusion
This article explained how the design of the XMLEncoder avoids many of the fundamental pitfalls of binary serialization and makes the case that XML archives produced by the XMLEncoder can be trusted as a reliable means to store valuable data over the long term. Central to the design of the XMLEncoder is the java.beans.DefaultPersistenceDelegate class, which provides a default serialization strategy based on the idea of properties as laid out in a JavaBeans component model.

We showed how custom delegates can be supplied to the encoder to teach it about idioms other than those of the JavaBeans component model, so classes that don't follow the JavaBeans conventions can be accommodated without changing their APIs. Because, in all cases, the decoder inflates object graphs using public API calls, deserialization is remarkably robust in the face of changes made to the classes referred to in the archives. If you need to save some critical data in your application to a file and are not interested in designing a new file format and coding the readers and writers for it, check out the XMLEncoder/XMLDecoder to see if they'll do it all for you.

References

  • Using XML Encoder on the Swing Connection: http://java.sun.com/products/jfc/tsc/articles/persistence4/index.html
  • JavaBeans: http://java.sun.com/products/javabeans/
About the Authors

Joe Winchester, Editor-in-Chief of Java Developer's Journal, was formerly JDJ's longtime Desktop Technologies Editor and is a software developer working on development tools for IBM in Hursley, UK.

Philip Milne is a software developer who worked at Sun as part of the Swing development team and now works as a consultant in London, UK. He can be contacted at [email protected].
