StAX: Java's XML Pull Parser Specification

An overview

Until recently, Java programmers had three options for accessing the XML infoset: DOM, JDOM, or SAX. With the release of the JSR 173 StAX specification, Java programmers now have a fourth option, one that gives them the efficiency of SAX with a convenient and extendable programming model. This article explores the rationale behind StAX's pull parsing model and describes how you can use the API to write cleaner Java code that extracts just the information you need from your XML documents. The article describes StAX's two API flavors, "cursor" and "event," and provides some of the reasons why the specification ended up containing two sets of reading and writing APIs. Example usage of each of the reading APIs and each of the writing APIs is provided.

Document Streaming vs Document Object Model
When creating code that processes XML documents there are two approaches to dealing with the XML infoset data: object model and streaming. With the object model approach, you first create an in-memory object tree that holds the complete infoset state for an XML document; once it is in memory you can freely navigate around the tree and even evaluate arbitrary XPath expressions against it. This flexibility comes at a price - the complete details of the XML document must be held in memory as objects for the entire duration of the document processing. Creating the document object graph requires considerable processor resources and a large memory footprint. This may not be a problem for small XML documents, but for large ones memory may become a bottleneck to application performance.

With the streaming approach the XML infoset is processed in a serial manner (as a depth first traversal of the XML infoset tree); once an element has been seen its state is discarded and may be garbage collected. Only the infoset state at the current point of the document is available at any one time, which clearly limits the types of processing that can be done and implies that you need to know what processing you are going to perform before reading in the XML document. If you don't think you will ever need to use XML streaming, try loading a 100 megabyte XML file into your latest application and watch what happens.

Streaming Pull Parsing vs Streaming Push Parsing
If you are creating an application that's memory limited, either because you are running on a constrained device (as in a phone) or your application is simultaneously processing several requests (as in an application server), you need to read your XML documents using a streaming model. In the past you were restricted to using the Simple API for XML (SAX). SAX was the first widely available API for reading XML in Java and provides a very low-level, efficient API that deals directly with the character data in the XML document. SAX uses a push processing model in which the SAX library reads the XML document and calls methods on your application objects as it encounters elements and text within the XML document. Although the SAX API is simple, the code application developers need to create to use SAX is not.

A "pull" parsing alternative to SAX's "push" parsing has lurked in the background for some time, but no longer: the recently ratified StAX specification now standardizes a pull parser for Java. StAX provides an alternative processing model where you call methods on the parser at your leisure and move the processing along at your command. The key difference here is that with SAX you don't have control of the application thread and can only accept invocations from the parser. In contrast, with pull parsing you own the application thread and you control when and where you call the XML parser.

This control over "when and where" leads to increased freedoms in application design, allowing you to either collect all your parsing code together or alternatively place your parsing code within the objects that understand that particular type of information.
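
To make the contrast concrete, here is a minimal sketch of a pull-parsing loop (a hypothetical example, not from the listings in this article; it assumes an InputStream named "in" is available). The application thread asks the parser for each event when it is ready for one:

        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        XMLStreamReader sr = XMLInputFactory.newInstance().createXMLStreamReader(in);
        // The application owns the loop: it pulls the next event when it
        // wants one, instead of waiting for callbacks as it would with SAX.
        while (sr.hasNext()) {
            if (sr.next() == XMLStreamConstants.START_ELEMENT) {
                System.out.println("start of element: " + sr.getLocalName());
            }
        }
        sr.close();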

Pull parsing has the following advantages over push parsing:

  • Parsing simple documents can now be done with simple code.
  • It's much simpler to write recursive descent parsing code for more complex documents.
  • More than one document can be read by an application at one time with just a single thread. This can be useful when part way through reading one document you need to read a second document.
  • The parser can be told to skip parts of the XML document that are not relevant to the application, which simplifies your code and may reduce processing time and memory churn.
  • You can create streaming pipelines in an object-oriented way that are efficient and simple to use.

XML Information Model
The StAX specification models an XML document as a set of events, and these events are pulled by the application and supplied in the order in which they are encountered in the XML document. The StAX specification defines the following types of events: Attribute, Characters, Comment, StartDocument, EndDocument, StartElement, EndElement, Namespace, DTD, EntityDeclaration, EntityReference, NotationDeclaration, and ProcessingInstruction.

The last five event types are only seen if your document contains a DTD. Each event has different properties associated with it depending on the type of event.

A StAX parser is an engine that reads a Unicode character stream and converts the textual data into events. Below is an example XML document:


<?xml version="1.0" encoding="UTF-8"?>
<pre1:foo xmlns:pre1="http://www.example.com/foons">
	<pre1:sometext>
			the text <![CDATA[<notallowed> as normal text]]> other text
	</pre1:sometext>
	<pre2:bar ratio="5.5" xmlns:pre2="http://www.example.com/barns"/>
</pre1:foo>

The above document would be parsed into the events shown in Figure 1.

As you can see, this small document would be converted (by default) into a stream of 14 primary events. Each colored circle in the figure is an event object. The large circles are the primary events that are seen by the application, while the small circles are secondary events that are generally accessed from some primary event.

Salient points about the event stream to note are:

  • Every StartElement event has a matching EndElement event, even for empty elements like <eg/>.
  • Attributes, although events, are not (normally) seen in the event stream but are instead accessible from their StartElement event.
  • Namespace events are not (normally) seen in the event stream; instead each namespace appears twice, first accessible from its StartElement and second from the corresponding EndElement.
  • XML character data may be split over more than one event and may crop up where you might not expect it.

While parsing an XML document the StAX parsing engine maintains a namespace stack. The namespace stack holds details of all the XML namespaces defined for the current element and its ancestors. This namespace stack is accessible through the interface javax.xml.namespace.NamespaceContext, which includes methods for looking up a namespace URI given a prefix and looking up a prefix given a namespace URI.
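
For example, jumping slightly ahead to the cursor API described below, with an XMLStreamReader "sr" positioned on a start element of the sample document, the namespace stack can be queried in both directions (a hypothetical snippet):

        import javax.xml.namespace.NamespaceContext;

        NamespaceContext ctx = sr.getNamespaceContext();
        // Prefix -> URI lookup
        String uri = ctx.getNamespaceURI("pre1");        // "http://www.example.com/foons"
        // URI -> prefix lookup (null if the URI is not in scope)
        String prefix = ctx.getPrefix("http://www.example.com/barns");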

Cursor vs Iterator
The Story of Two APIs

When programming in Java, object creation has traditionally been seen as the enemy of performance. One of the SAX API's main advantages is that very few superfluous objects are created during the parsing of an XML document. SAX even gives the application direct access into the parser's internal character buffer in order to read text content! This lean and mean approach leads to very efficient parsing of XML. One of the design goals for StAX was for it to be at least as fast as SAX.

Very early on during the process of creating the StAX specification two alternative API styles were proposed by the expert group. One, which I'll call "cursor," followed SAX's lean and mean approach; the other, which I'll call "iterator," was a modern object-oriented API utilizing immutable objects. The expert group looked long and hard at which API style we should go with, including doing a performance analysis of the two styles. What we discovered was that we were trying to support three different end-user developers:

  1. Library and infrastructure developers: The group who creates app servers and JAXM, JAXB, and JAX-RPC implementations, etc., and needs a very low-level and close-to-the-metal API with little overhead and few requirements for extensibility.
  2. J2ME developers: This group wants an XML pull parsing library that's small and simple and has a tiny footprint. They have little need for being able to extend the parser or modify the event stream.
  3. J2EE and J2SE developers: This large group generally wants a simple, efficient pull parser that naturally produces elegant bug-free code while allowing for more complex features such as stream modification and introducing new application event types.

We looked at many inventive ideas on how to create a single API that would allow us to "have our cake and eat it," but each of these ideas was rejected as they left us with a bad taste in our mouth and invariably produced a less-predictable API. In the end our desire to support these three developer types and our requirement that we be just as fast as SAX led the expert group to support both API styles.

Cursor API
The cursor API contains a central parser interface called XMLStreamReader that includes accessor methods for all the possible information you could retrieve from the XML information model. The parser interface contains methods for accessing the document encoding, element names, attributes, namespaces, text content, processing instructions, etc. Just as in SAX, methods are provided that allow access into the internal character buffer, and attribute and namespace information is exposed through methods that take integer indexes. The cursor approach is thus like a mirror image of SAX: all of the information model can be accessed via methods that return strings, so object allocation is kept to a bare minimum.

Listing 1 provides some example code that uses an XMLStreamReader instance called "sr" to read the example document shown above. The code uses "sr" to walk over the document and retrieve the text content of the element <pre1:sometext> and the value of the ratio attribute of the element <pre2:bar>.

Listing 1 assumes the XMLStreamReader sr has just been created, i.e., StartDocument will be the first event.
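
Listing 1 itself is not reproduced here, but a minimal sketch in the same spirit might look like the following (hypothetical code, using the element and attribute names from the sample document):

        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        String someText = null, ratio = null;
        while (sr.hasNext()) {
            if (sr.next() == XMLStreamConstants.START_ELEMENT) {
                if ("sometext".equals(sr.getLocalName())) {
                    // Reads the text and CDATA content up to the end tag
                    someText = sr.getElementText();
                } else if ("bar".equals(sr.getLocalName())) {
                    // Attribute lookup by namespace URI and local name;
                    // null means "do not check the namespace"
                    ratio = sr.getAttributeValue(null, "ratio");
                }
            }
        }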

Iterator API
The iterator API presents the event stream as an ordered list of immutable event objects. The StAX API defines a common base interface called XMLEvent and a subinterface for each of the event types listed in the XML information model. The iterator API contains a central parser interface called XMLEventReader with just five methods in it, the most important being nextEvent(), which returns the next event in the stream. The interface XMLEventReader extends java.util.Iterator so it can be passed into routines that can handle the standard Java Iterator.

The common super interface XMLEvent contains methods for finding the actual event type and downcasting to the event subtypes. Listing 2 shows the equivalent code for reading our example XML document but using the XMLEventReader "er".

Clearly in a real application the three QName objects would be held in static final fields but are left inline here to aid in comparing the code examples.
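
Listing 2 is likewise not reproduced here; a minimal sketch of the equivalent iterator-style code, with the three QName objects inline as just noted, might look like this (hypothetical code):

        import javax.xml.namespace.QName;
        import javax.xml.stream.events.Attribute;
        import javax.xml.stream.events.StartElement;
        import javax.xml.stream.events.XMLEvent;

        while (er.hasNext()) {
            XMLEvent e = er.nextEvent();
            if (e.isStartElement()) {
                StartElement se = e.asStartElement();
                if (se.getName().equals(new QName("http://www.example.com/foons", "sometext"))) {
                    // getElementText() is also available on XMLEventReader
                    String someText = er.getElementText();
                } else if (se.getName().equals(new QName("http://www.example.com/barns", "bar"))) {
                    Attribute a = se.getAttributeByName(new QName("ratio"));
                    String ratio = (a == null) ? null : a.getValue();
                }
            }
        }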

What can I do with the iterator API that I can't do with the cursor API?

As the XMLEvent subclasses are immutable objects, you can place them in arrays, lists, and maps and pass them through your application as you desire, even after the parser has moved on to later events.

You can create your own subtypes of XMLEvent that are either completely new information items or extensions of existing events but with extra methods.

You can modify an event stream by adding or removing events in a way that's much simpler to code than with the cursor API.
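
For example, because the events are immutable they can simply be collected and replayed or filtered later; a hypothetical sketch that buffers everything except comments:

        import java.util.ArrayList;
        import java.util.List;
        import javax.xml.stream.events.XMLEvent;

        List<XMLEvent> buffered = new ArrayList<XMLEvent>();
        while (er.hasNext()) {
            XMLEvent e = er.nextEvent();
            if (e.getEventType() != XMLEvent.COMMENT) {
                buffered.add(e);   // still valid after the parser moves on
            }
        }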

Which API Should I Use?
This decision can only be made by the individual developer depending on the specific situation; if one API fitted all needs we would not have ended up with two.

My personal rules of thumb:

  1. If you're programming on J2ME, use the cursor API.
  2. If you're creating low-level infrastructure or libraries and you need to achieve the best possible performance, use the cursor API.
  3. If you want to create pipelines of XML processing, use the iterator API.
  4. If you want to modify the event stream, use the iterator API.
  5. If you want to future proof your app by enabling pluggable processing of the event stream, use the iterator API.
  6. If in doubt, use the iterator API.

Performance: What's the Beef?
During the expert group design discussions for StAX, the author created a prototype pull parser that provided both API styles, and a series of performance tests were run using a wide range of XML test data. Because the internal parser implementation was common to all the tests, the results show the performance impact of accessing the parser through the iterator API style versus the cursor API style.

The performance differences between the API styles arise mostly from the extra objects that are created and later need to be garbage collected. The cursor and iterator APIs need to create the objects shown in Figure 2.

The cursor API does not need to create string objects for XML character data, as it provides direct access to the internal character buffer. The iterator API additionally needs to create the immutable event objects. The items colored yellow are string objects that may be cached to improve performance at the expense of some cache-management complexity.

Figure 3 shows the number of bytes of XML processed per millisecond, averaged over different sets of XML test files. The test was run on JDK 1.2.2 and JDK 1.4.2 for both API styles, with and without string caching enabled. The test files are loaded into memory so there is no I/O during the test runs.

Clearly there has been a massive (3-6 times) performance improvement across the board from JDK 1.2.2 to JDK 1.4.2. The iterator API without any string caching is 6.5 times faster on JDK 1.4.2.

String caching makes a big difference on JDK 1.2.2, running up to 50% faster, but only a small difference on JDK 1.4.2, showing less than a 5% improvement with perfect caching. Clearly we can now take Joshua Bloch at his word when he says that object pooling of lightweight objects is unnecessary with modern JVMs.

The overhead from using the iterator API style instead of the cursor API is around 25-30%. This sounds like a lot but remember this test program is doing 90% XML parsing and 10% application logic, whereas your typical application would probably be the other way round - 90% app logic and 10% parsing - which would drive the overhead down to about 3%, which is in the noise for most applications.

Of course your mileage may vary depending on your particular usage scenario. With the first generation of StAX parsers coming out soon, we'll see if this level of performance difference is also reflected in real StAX parsers.

StAX Input Factories
How do I create an instance of XMLStreamReader or XMLEventReader?

StAX parsers that implement either of these two APIs are created by the javax.xml.stream.XMLInputFactory, which follows the standard factory pattern. When we get JAXP support you'll be able to create instances via the JAXP APIs. Create a new instance of XMLInputFactory by calling newInstance(); this method will search for a StAX implementation using the standard techniques.

The input factory has a set of configuration parameters that control the features of the parser that will be created by the factory. Once you have an instance of the factory you can override the default configuration parameter values.

Some of the more interesting configuration parameters are:

  • javax.xml.stream.isCoalescing: Defaults to false, but when set to true will request a parser that coalesces all contiguous text into a single Characters event.
  • javax.xml.stream.supportDTD: Defaults to true, but when set to false will request a parser that does not support DTDs in XML documents.
  • javax.xml.stream.resolver: Can be used to set an implementation of XMLResolver, which is used during parsing to resolve external entities.

Below is the code to create an instance of XMLStreamReader on a File f:


        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader sr = factory.createXMLStreamReader(new FileInputStream(f));

And to create an XMLEventReader:

        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLEventReader er = factory.createXMLEventReader(new FileInputStream(f));

If we wanted a differently featured parser, we would set some configuration parameters before creating the parser as follows:


        XMLInputFactory factory = XMLInputFactory.newInstance();
        factory.setProperty("javax.xml.stream.isCoalescing", Boolean.TRUE);
        factory.setProperty("javax.xml.stream.supportDTD", Boolean.FALSE);
        XMLEventReader er = factory.createXMLEventReader(new FileInputStream(f));

Some of the standard configuration parameters are optional, meaning that some implementations may choose not to support the feature. You can check to see if a standard (or nonstandard) configuration parameter is supported by calling isPropertySupported() on the factory instance. Here's an example of using an optional feature:


        XMLInputFactory factory = XMLInputFactory.newInstance();
        if(factory.isPropertySupported("javax.xml.stream.isValidating")){
            factory.setProperty("javax.xml.stream.isValidating",Boolean.TRUE);
            factory.setProperty("javax.xml.stream.reporter",this);
        }
        XMLStreamReader sr = factory.createXMLStreamReader(new FileInputStream(f));

The above code instantiates a DTD validating parser if the implementation supports DTD validation.

Once you have created either an XMLStreamReader or an XMLEventReader parser you can find out what its configuration is by calling getProperty() on the parser.
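
For example, a hypothetical one-line check against the parser created above:

        // Was coalescing actually enabled on this parser instance?
        Boolean coalescing = (Boolean) sr.getProperty("javax.xml.stream.isCoalescing");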

One important design feature of StAX is that you must specify the properties of a parser before you create it; you cannot change a property value on an existing parser, nor set a new data source into it. The rationale behind these restrictions is that we wanted to enable optimized and modular implementations.

A StAX implementation could use a special parsing engine with its own optimized byte-to-Unicode decoding when reading from a java.io.InputStream, and a different parsing engine when a java.io.Reader is the data source. Another example is the javax.xml.stream.supportDTD property, which, when set to "false", could trigger an implementation to use a simpler, faster parsing engine. If any of the configuration parameters could be modified after the parser was created, these types of optimization would be considerably harder to implement.

What About Writing XML?
StAX is a truly bidirectional API and can be effectively used to generate XML, either from scratch or as the result of a StAX Event pipeline. The StAX writers are intelligent in that they maintain namespace stacks and can automatically generate namespace prefixes (if you don't care what they look like). The writers can also close elements using the correct prefix and localname.

Listings 3 and 4 show some code to generate our example XML file using someText and ratio variables. If you want to use the cursor API, the code appears as in Listing 3. If you want to use the iterator API, the code should look like Listing 4.
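
Neither listing is reproduced here, but a minimal cursor-style sketch that generates the sample document might look like this (hypothetical code; the someText and ratio strings are assumed to be in scope, and "out" is an OutputStream):

        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out, "UTF-8");
        w.writeStartDocument("UTF-8", "1.0");
        w.writeStartElement("pre1", "foo", "http://www.example.com/foons");
        w.writeNamespace("pre1", "http://www.example.com/foons");
        w.writeStartElement("pre1", "sometext", "http://www.example.com/foons");
        w.writeCharacters(someText);   // markup characters are escaped automatically
        w.writeEndElement();           // the writer closes pre1:sometext for us
        w.writeEmptyElement("pre2", "bar", "http://www.example.com/barns");
        w.writeNamespace("pre2", "http://www.example.com/barns");
        w.writeAttribute("ratio", ratio);
        w.writeEndElement();           // closes pre1:foo
        w.writeEndDocument();
        w.close();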

The writers automatically escape characters that are significant in XML markup, such as < or &, when they are found in character content or attribute values.

Summary
After a lot of hard work the StAX API specification has finally seen the light of day and Java application programmers now have a standard pull parser interface for XML. The StAX API has many advantages over the SAX API for developers, including a simpler programming model and the ability to modify the event stream data and extend the information model to allow the introduction of application-specific additions. J2ME programmers now have an XML API that matches their resource-constrained environments, while library developers now have a standard API to use that fits in with their client applications' threading models and performance expectations.

Acknowledgment
I would like to thank the members of the JSR 173 expert group for an interesting and stimulating set of discussions over the past two years.

About the Author

David has worked in the computing industry for over ten years in the areas of distributed systems, middleware, and IT solutions. He has a wide range of computing experience, having worked for Hewlett-Packard Laboratories and for HP's middleware divisions creating e-services technology. Lately David has worked in the area of Web services on both the Java and .NET platforms, and has been HP's contributing expert in the JCP for both JSR 31 and JSR 173.
