Dataflow Programming: A Scalable Data-Centric Approach to Parallelism

Dataflow allows developers to easily take advantage of today’s multicore processors

There are two major drivers behind the need to embrace parallelism: the dramatic shift to commodity multicore CPUs, and the striking increase in the amount of data being processed by the applications that run our enterprises. These two factors must be addressed by any approach to parallelism or we will find ourselves falling short of resolving the crisis that is upon us. While there are data-centric approaches that have generated interest, including Map-Reduce, dataflow programming is arguably the easiest parallel strategy to adopt for the millions of developers trained in serial programming.

The blog gives a nice summary of why parallel processing is important.

Hardware Support for Parallelism
Let's start with an overview of the parallelism supported by today's modern processors. First, there is processor-level parallelism involving instruction pipelining and other techniques handled within the processor itself. These are optimized by compilers and runtime environments such as the Java Virtual Machine, so this goodness is available to all developers without much effort on our part.

Recently, commodity multicore processors have brought parallelism into the mainstream. As we move into many-core systems, we now have what is essentially a "cluster in a box." But software has lagged behind hardware in the area of parallelism, and as a result many of today's multicore systems are woefully under-utilized. We need a paradigm shift to a new programming model that embraces this high level of parallelism from the start, making it easy for developers to create highly scalable applications. However, focusing only on cores doesn't take the whole system into account. Data-intensive applications by definition perform significant amounts of I/O. A parallel programming model must therefore overlap I/O operations with computation; otherwise we'll be unable to build applications that can keep the multicore monster fed and happy.

Virtualization is a popular way to divvy up multicore machines, essentially treating a single machine as multiple, separate machines. Each virtual slice has its own function to provide and each operates somewhat independently. This works well for splitting up IT functions such as email servers and web servers, but it doesn't help with the problem of crunching big data. For big data problems, taking advantage of the whole machine, the "cluster in a box," is imperative.

Scale-out, using multiple machines to execute big data jobs, is another way to implement parallelism. This technique has been around for ages and is seeing new instantiations in systems such as Hadoop, built on the Map-Reduce design pattern. Scaling out to large cluster systems certainly has its advantages and is absolutely required for Internet-scale data problems. It does, however, introduce inefficiencies that can be critical barriers to full utilization in smaller configurations (clusters of fewer than 100 nodes).

The Next Step for Hadoop
In a talk on Hadoop, Jeff Hammerbacher stated, "More programmer-friendly parallel dataflow languages await discovery, I think. MapReduce is one (small) step in that direction." His talk is summarized in this blog. As Jeff points out, Map-Reduce is a great first step, but is lacking as a programming model. Integrating dataflow with the scale-out capabilities available in frameworks such as Hadoop offers the next big step in handling big data.

Dataflow Programming
Dataflow architecture is based on the concept of using a dataflow graph for program execution. A dataflow graph consists of nodes that are computational elements, and edges that provide data paths between nodes. A dataflow graph is a directed acyclic graph (DAG). Figure 1 provides a snapshot of an executing dataflow application. Note how all of the nodes execute in parallel, flowing data through the graph in pipeline fashion.

Figure 1

Nodes in the graph do work by reading data from their input flow(s), transforming the data, and pushing the results to their outputs. Nodes that provide connectivity (sources and sinks) may have only output or only input flows. A graph is constructed by creating nodes and linking their data flows together. Once a graph is constructed and executed, the connectivity (source) nodes begin reading data and pushing it downstream. Downstream consumers read the data, process it, and send their results downstream. The result is pipeline parallelism: each node in the graph runs in parallel as the pipeline fills.
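
To make the model concrete, here is a minimal sketch in plain Java (this is not the Pervasive DataRush API; the Channel class and the end-of-stream sentinel are hypothetical, for illustration only) of a three-node graph in which a source, a transform and a sink each run on their own thread and pass records through bounded queues:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // A channel is simply a bounded queue between two nodes.
    final class Channel<T> {
        final BlockingQueue<T> queue = new ArrayBlockingQueue<>(1024);
    }

    public class MiniDataflow {
        static final String EOS = "<end-of-stream>";  // sentinel marking end of data

        public static void main(String[] args) throws Exception {
            Channel<String> raw = new Channel<>();
            Channel<String> upper = new Channel<>();

            // Source node: has only an output flow; pushes records downstream.
            Thread source = new Thread(() -> {
                try {
                    for (String s : new String[] {"alpha", "beta", "gamma"}) {
                        raw.queue.put(s);
                    }
                    raw.queue.put(EOS);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            // Transform node: reads its input flow, transforms, pushes to its output.
            Thread transform = new Thread(() -> {
                try {
                    for (String s; !(s = raw.queue.take()).equals(EOS); ) {
                        upper.queue.put(s.toUpperCase());
                    }
                    upper.queue.put(EOS);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            // Sink node: has only an input flow; consumes the final results.
            Thread sink = new Thread(() -> {
                try {
                    for (String s; !(s = upper.queue.take()).equals(EOS); ) {
                        System.out.println(s);
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            source.start(); transform.start(); sink.start();
            source.join(); transform.join(); sink.join();
        }
    }

Because every node runs on its own thread, results start arriving at the sink as soon as the source begins producing; that is the pipeline parallelism described above.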

Dataflow provides a computational model: a dataflow graph must first be constructed before it can be executed. This leads to very nice modularity: building blocks (nodes) can be plugged together in an endless number of ways to create complex applications. The model is analogous to the UNIX shell. With the shell, you can string together multiple commands into a pipeline for execution. Each command reads its input, does something with the data, and writes to its output. The commands operate independently in the sense that they don't care what is upstream or downstream from them; it is up to the pipeline composer (the end user) to assemble the pipeline so it processes the data as desired. Dataflow is very similar to this model, but provides more capabilities.

The dataflow architecture also provides flow control, which prevents fast producers from overrunning slower consumers. Flow control is inherent in the way dataflow works and puts no burden on the programmer to deal with issues such as deadlock or race conditions.
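
As a rough illustration of why no extra programmer effort is needed (plain Java again, not any particular product's API), a bounded queue between two nodes provides this throttling for free: when the consumer falls behind, the producer's put() call simply blocks until space frees up.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class FlowControlDemo {
        public static void main(String[] args) throws InterruptedException {
            // Small capacity so the fast producer quickly hits the limit.
            BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(4);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 20; i++) {
                        channel.put(i);             // blocks while the channel is full
                        System.out.println("produced " + i);
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 20; i++) {
                        int value = channel.take(); // blocks while the channel is empty
                        Thread.sleep(50);           // simulate a slow consumer
                        System.out.println("consumed " + value);
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            producer.start(); consumer.start();
            producer.join(); consumer.join();
        }
    }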

Dataflow is focused on data parallelism. As such, it is not a great fit for all computational problems. But as has become evident over the past few years, there are many domains of parallel problems and one solution or architecture will not solve all problems for all domains. Dataflow provides a different programming paradigm for most developers, so it requires a bit of a shift in thinking to a more data-centric way of designing solutions. But once that shift takes place, dataflow programming is a natural way to express data-centric solutions.

Dataflow Programming and Actors
Dataflow programming and the Actor model, available in languages such as Scala and Erlang, share many similarities. The Actor model provides for independent actors that communicate using message passing. Within an actor, pattern matching is used to determine how to handle each message. Messages are generally asynchronous, but synchronous behavior with flow control can be built on top of the Actor model with some effort.

In general, the Actor model is best used for task parallelism. For example, Erlang was originally developed within the telecom industry for building non-stop control systems. Dataflow, in contrast, is data-centric and therefore well suited for big data processing tasks.

Dataflow Goodness
As just mentioned, dataflow programming is a different paradigm, so it does require a shift in design thinking. This is not a critical issue, as the concepts behind dataflow are easy to grasp, and that matters: a parallel framework that provides great multicore utilization but takes months if not years to master is not all that helpful. Dataflow programming makes the simple things easy and the hard tasks possible.

Dataflow applications are simple to express. Dataflow uses a composition programming model based on a building blocks approach. This leads to very modular designs that provide a high amount of reuse.

Dataflow does a good job of abstracting away the details of parallel development. This is important because the lower-level tools for parallel application development already exist in libraries such as java.util.concurrent in the JDK. However, these libraries are low-level and require a high degree of expertise to use correctly. They rely on shared state that must be protected with synchronization, which can lead to race conditions, deadlocks, and extremely hard-to-debug problems.
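
For contrast, a hand-rolled version of even a simple shared-state computation with these primitives looks something like the hypothetical fragment below: every access path has to take the lock and release it in a finally block, and getting any of that wrong leads to exactly the races, deadlocks, and hard-to-debug problems mentioned above.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    public class SharedStateWordCount {
        private final Map<String, Integer> counts = new HashMap<>();
        private final ReentrantLock lock = new ReentrantLock();

        void count(String word) {
            lock.lock();                   // every access path must take the lock
            try {
                counts.merge(word, 1, Integer::sum);
            } finally {
                lock.unlock();             // forgetting this would hang other threads
            }
        }

        public static void main(String[] args) throws InterruptedException {
            SharedStateWordCount wc = new SharedStateWordCount();
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (String w : "to be or not to be".split(" ")) {
                pool.submit(() -> wc.count(w));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println(wc.counts);  // safe only after the pool has terminated
        }
    }

A dataflow node never sees this kind of code; its only interaction with the rest of the graph is through its input and output flows.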

Being based on a shared-nothing, immutable message-passing architecture makes dataflow a simplified programming model. The nodes within a dataflow graph don't have to worry about using synchronization techniques to protect shared memory. They are lock-free, so deadlock and race conditions are not a worry either. The dataflow architecture inherently handles these conditions, allowing developers to focus on the job at hand. And because the data streams are immutable, multiple readers can attach to a node's output, providing more flexibility and reuse in the programming model.

The immutability of the data flows also limits the side effects of nodes within a dataflow program. Nodes within a dataflow graph can only communicate over dataflow channels. By following this model, you are assured that no global state or state of other nodes can be affected by a node. Again, this helps to simplify the programming model. Developing new nodes is free of most of the worries normally involved with parallel programming.

The dataflow programming model is functional in style. Each node within a graph provides a very specific, continuous function on its input data. Programs are built by stitching these functions together in various ways to create complex applications.

Dataflow-based architecture elegantly takes advantage of multicore processors on a single machine (scale up). It's also a good architecture for scaling out to multiple machines. Nodes that run across machine boundaries can communicate over data channels using network sockets. This provides the same simple, flexible dataflow programming model in a distributed configuration.
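
A bare-bones sketch of that idea, assuming a simple line-oriented protocol and an arbitrarily chosen port rather than any product's actual wire format: the downstream node listens on a socket, and the upstream node's output channel simply writes records to it.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SocketChannelSketch {
        public static void main(String[] args) throws Exception {
            // Bind before connecting so the upstream node can't race the listener.
            ServerSocket server = new ServerSocket(9099);   // port chosen arbitrarily

            // Downstream node: accepts a connection and consumes records.
            Thread downstream = new Thread(() -> {
                try (Socket conn = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(conn.getInputStream()))) {
                    for (String record; (record = in.readLine()) != null; ) {
                        System.out.println("received: " + record);
                    }
                } catch (Exception e) { e.printStackTrace(); }
            });
            downstream.start();

            // Upstream node: its output channel writes records to the socket.
            try (Socket out = new Socket("localhost", 9099);
                 PrintWriter writer = new PrintWriter(out.getOutputStream(), true)) {
                for (String record : new String[] {"r1", "r2", "r3"}) {
                    writer.println(record);
                }
            }
            downstream.join();
            server.close();
        }
    }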

Dataflow and Big Data
The inherent pipeline parallelism built into dataflow programming makes it great for datasets ranging from thousands to billions of records. Applications written using dataflow techniques can scale easily to extremely large data sizes, generally without much strain on the memory system: a dataflow application eventually settles into a steady state of memory consumption, and the total amount of data pumped through the application doesn't affect that steady-state memory size.

Not all dataflow operators are friendly when it comes to memory consumption; many are designed specifically to load data into memory. For example, a hash join operator may load one of its data sources into an in-memory index. This is the nature of the operator and must be taken into account when using it.
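
A rough sketch of what such an operator does internally (the types and method names here are illustrative, not any particular library's API): the "build" side is loaded into an in-memory hash index, and the other side is then streamed past it, so memory use grows with the build side regardless of how large the streamed side is.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class HashJoinSketch {
        // Joins two record streams on an integer key held in column 0.
        static List<String> hashJoin(List<int[]> buildSide, List<int[]> probeSide) {
            // Build phase: index the (hopefully smaller) side entirely in memory.
            Map<Integer, int[]> index = new HashMap<>();
            for (int[] row : buildSide) {
                index.put(row[0], row);
            }
            // Probe phase: stream the other side past the index, row by row.
            List<String> joined = new ArrayList<>();
            for (int[] row : probeSide) {
                int[] match = index.get(row[0]);
                if (match != null) {
                    joined.add("key=" + row[0] + " build=" + match[1] + " probe=" + row[1]);
                }
            }
            return joined;
        }

        public static void main(String[] args) {
            List<int[]> build = Arrays.asList(new int[] {1, 10}, new int[] {2, 20});
            List<int[]> probe = Arrays.asList(new int[] {2, 200}, new int[] {3, 300});
            hashJoin(build, probe).forEach(System.out::println);
        }
    }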

Being pipelined in nature also allows for great overlap of I/O and computational tasks. As mentioned earlier, this is an important "whole" application approach that is highly critical to success in building big data applications.

Dataflow systems are also easy to embed in today's commonly used frameworks. For instance, a dataflow-based application can be executed within the context of a Map-Reduce application. Experimentation with the dataflow-based platform Pervasive DataRush has shown that Hadoop can be used to scale out an application, with DataRush running within each map step to parallelize the mapper and take advantage of multicore efficiencies. Allowing each mapper to handle larger chunks of data lets the overall Map-Reduce application run faster, since each mapper is itself parallelized.
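
The DataRush integration itself is beyond the scope of this article, but the general idea of parallelizing work inside a single mapper can be sketched with plain Java streams inside a standard Hadoop Mapper. This is a deliberate simplification that buffers the whole split in memory before processing it, something a true streaming dataflow pipeline would avoid.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MulticoreWordCountMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {

        // Buffer the split's lines so cleanup() can process them with all cores.
        // (A dataflow engine would stream them instead of buffering.)
        private final List<String> lines = new ArrayList<>();

        @Override
        protected void map(LongWritable key, Text value, Context context) {
            lines.add(value.toString());
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            // Use every local core to tokenize and count; Hadoop still handles
            // the scale-out across machines.
            Map<String, LongAdder> counts = new ConcurrentHashMap<>();
            lines.parallelStream()
                 .flatMap(line -> Arrays.stream(line.split("\\s+")))
                 .filter(w -> !w.isEmpty())
                 .forEach(w -> counts.computeIfAbsent(w, k -> new LongAdder()).increment());

            // context.write is not thread-safe, so emit results on this thread only.
            for (Map.Entry<String, LongAdder> e : counts.entrySet()) {
                context.write(new Text(e.getKey()), new LongWritable(e.getValue().sum()));
            }
        }
    }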

Summary
Dataflow is a software architecture that is based on the idea of continuous functions executing in parallel on data streams. It's focused on data-intensive applications, lending itself to today's big data challenges. Dataflow is easy to grasp and simple to express, and this design-time scalability can be as important as its run-time scalability.

Dataflow allows developers to easily take advantage of today's multicore processors and also fits well into a distributed environment. Tackling big data problems with dataflow is straightforward and ensures your applications will be able to scale in the future to meet the growing demands of your organization.

More Stories By Jim Falgout

Jim Falgout has 20+ years of large-scale software development experience and is active in the Java development community. As Chief Technologist for Pervasive DataRush, he’s responsible for setting innovative design principles that guide the company’s engineering teams as they develop new releases and products for partners and customers. He applied dataflow principles to help architect Pervasive DataRush.

Prior to Pervasive, Jim held senior positions with NexQL, Voyence, Net Perceptions/KD1, Convex Computer, Sequel Systems and E-Systems. Jim has a B.Sc. (Cum Laude) in Computer Science from Nicholls State University. He can be reached at [email protected]
