Dataflow Programming: A Scalable Data-Centric Approach to Parallelism

Dataflow allows developers to easily take advantage of today’s multicore processors

There are two major drivers behind the need to embrace parallelism: the dramatic shift to commodity multicore CPUs, and the striking increase in the amount of data being processed by the applications that run our enterprises. These two factors must be addressed by any approach to parallelism or we will find ourselves falling short of resolving the crisis that is upon us. While there are data-centric approaches that have generated interest, including Map-Reduce, dataflow programming is arguably the easiest parallel strategy to adopt for the millions of developers trained in serial programming.

The blog gives a nice summary of why parallel processing is important.

Hardware Support for Parallelism
Let's start with an overview of the parallelism support available in modern processors today. First there is processor-level parallelism involving instruction pipelining and other techniques handled by the processor. These are all optimized by compilers and runtime environments such as the Java Virtual Machine. This goodness is available to all developers without much effort on our part.

Recently, commodity multicore processors have brought parallelism into the mainstream. As we move into many-core systems, we now have what is essentially a "cluster in a box." But software has lagged behind hardware in the area of parallelism. As a result, many of today's multicore systems are woefully under-utilized. We need a paradigm shift to a new programming model that embraces this high level of parallelism from the start, making it easy for developers to create highly scalable applications. However, focusing only on cores doesn't take into account the whole system. Data-intensive applications by definition perform significant amounts of I/O. A parallel programming model must account for overlapping I/O operations with compute. Otherwise we'll be unable to build applications that can keep the multicore monster fed and happy.

Virtualization is a popular way to divvy up multicore machines. It essentially treats a single machine as multiple, separate machines. Each virtual slice has its own function to provide and each operates somewhat independently. This works well for splitting up IT functions such as email servers and web servers, but it doesn't help with the problem of crunching big data. For big data problems, taking advantage of the whole machine, the "cluster in a box," is imperative.

Scale-out, using multiple machines to execute big data jobs, is another way to implement parallelism. This technique has been around for ages and is seeing new instantiations in systems such as Hadoop, built on the Map-Reduce design pattern. Scaling out to large cluster systems certainly has its advantages and is absolutely required for Internet-scale data problems. It does, however, introduce inefficiencies that can be critical barriers to full utilization in smaller cluster configurations (clusters of fewer than 100 nodes).

The Next Step for Hadoop
In a talk on Hadoop, Jeff Hammerbacher stated, "More programmer-friendly parallel dataflow languages await discovery, I think. MapReduce is one (small) step in that direction." His talk is summarized in this blog. As Jeff points out, Map-Reduce is a great first step, but is lacking as a programming model. Integrating dataflow with the scale-out capabilities available in frameworks such as Hadoop offers the next big step in handling big data.

Dataflow Programming
Dataflow architecture is based on the concept of using a dataflow graph for program execution. A dataflow graph consists of nodes that are computational elements. The edges in a dataflow graph provide data paths between nodes. A dataflow graph is a directed acyclic graph (DAG). Figure 1 provides a snapshot of an executing dataflow application. Note how all of the nodes are executing in parallel, flowing data in a pipeline fashion.

Figure 1

Nodes in the graph do work by reading data from their input flow(s), transforming the data and pushing the results to their outputs. Nodes that provide connectivity, such as data sources and sinks, may have only output or only input flows. A graph is constructed by creating nodes and linking their data flows together. Once a graph is constructed and executed, the connectivity nodes begin reading data and pushing it downstream. Downstream consumers read the data, process it and send their results downstream. This results in pipeline parallelism, allowing each node in the graph to run in parallel as the pipeline begins to fill.
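To make the node mechanics concrete, here is a minimal sketch in plain Java (not the API of any particular dataflow product): a node is modeled as a task that pulls records from an input channel, applies a transformation and pushes the results downstream. The TransformNode class and the END end-of-stream sentinel are assumptions made purely for this illustration.

import java.util.concurrent.BlockingQueue;
import java.util.function.Function;

// A minimal "node": reads from its input channel, transforms each record,
// and writes the result to its output channel.
class TransformNode<I, O> implements Runnable {
    // Sentinel object signaling end-of-stream (a "poison pill").
    static final Object END = new Object();

    private final BlockingQueue<Object> input;
    private final BlockingQueue<Object> output;
    private final Function<I, O> fn;

    TransformNode(BlockingQueue<Object> input, BlockingQueue<Object> output, Function<I, O> fn) {
        this.input = input;
        this.output = output;
        this.fn = fn;
    }

    @SuppressWarnings("unchecked")
    public void run() {
        try {
            Object record;
            while ((record = input.take()) != END) {
                output.put(fn.apply((I) record));  // transform and push downstream
            }
            output.put(END);                       // propagate end-of-stream
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}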

Dataflow provides a computational model. A dataflow graph must first be constructed before it can be executed. This leads to a very nice modularity: creating building blocks (nodes) that can be plugged together in an endless number of ways to create complex applications. This model is analogous to the UNIX shell model. With the UNIX shell, you can string together multiple commands that are pipelined for execution. Each command reads its input, does something with the data and writes to its output. The commands operate independently in the sense that they don't care what is upstream or downstream from them. It is up to the pipeline composer (the end user) to assemble the pipeline correctly to process the data as intended. Dataflow is very similar to this model, but provides more capabilities, as the sketch below illustrates.
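Building on the TransformNode sketch above, the hedged example below strings a few nodes together much like a UNIX shell pipeline; the record format, queue sizes and stage functions are illustrative assumptions, not a real framework's API.

import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineExample {
    public static void main(String[] args) throws InterruptedException {
        // Bounded channels connect the nodes; bounding them gives flow control for free.
        BlockingQueue<Object> raw    = new ArrayBlockingQueue<>(1024);
        BlockingQueue<Object> parsed = new ArrayBlockingQueue<>(1024);
        BlockingQueue<Object> keys   = new ArrayBlockingQueue<>(1024);

        // Stitch the graph together: source -> parse -> extract key -> sink.
        Thread parse   = new Thread(new TransformNode<String, String[]>(raw, parsed, line -> line.split(",")));
        Thread extract = new Thread(new TransformNode<String[], String>(parsed, keys, fields -> fields[0].toUpperCase()));
        parse.start();
        extract.start();

        // A trivial source node: push a few records, then signal end-of-stream.
        for (String line : List.of("widget,42", "gadget,7", "gizmo,19")) {
            raw.put(line);
        }
        raw.put(TransformNode.END);

        // A trivial sink node: drain the results on the main thread.
        Object record;
        while ((record = keys.take()) != TransformNode.END) {
            System.out.println(record);
        }
        parse.join();
        extract.join();
    }
}

Each stage neither knows nor cares what runs upstream or downstream of it; wiring the channels together is the pipeline composer's job, just as it is in the shell.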

The dataflow architecture provides flow control. Flow control prevents fast producers from overrunning slower consumers. Flow control is inherent in the way dataflow works and puts no burden on the programmer to deal with issues such as deadlock or race conditions.
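The bounded channels in the sketch above are one simple way to get this behavior: a put() on a full channel blocks the producer until the consumer catches up. The toy program below (the channel size and timings are arbitrary assumptions) shows a fast producer being throttled by a slow consumer with no explicit locking.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FlowControlDemo {
    public static void main(String[] args) throws InterruptedException {
        // A deliberately tiny channel so the producer outruns the consumer quickly.
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(4);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    channel.put(i);   // blocks automatically once the channel is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    Thread.sleep(50); // a slow consumer
                    System.out.println("consumed " + channel.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}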

Dataflow is focused on data parallelism. As such, it is not a great fit for all computational problems. But as has become evident over the past few years, there are many domains of parallel problems and one solution or architecture will not solve all problems for all domains. Dataflow provides a different programming paradigm for most developers, so it requires a bit of a shift in thinking to a more data-centric way of designing solutions. But once that shift takes place, dataflow programming is a natural way to express data-centric solutions.

Dataflow Programming and Actors
Dataflow programming and the Actor model available in languages such as Scala and Erlang share many similarities. The Actor model provides for independent actors to communicate using message passing. Within an actor, pattern matching is used to allow the actor to determine how to handle a message. Messages are generally asynchronous, but synchronous behavior with flow control can be built on top of the Actor model with some effort.


In general, the Actor model is best used for task parallelism. For example, Erlang was originally developed within the telecom industry for building non-stop control systems. Dataflow is data centric and therefore well suited for big data processing tasks.

Dataflow Goodness
As just mentioned, dataflow programming is a different paradigm and so it does require somewhat of a shift in design thinking. This is not a critical issue as the concepts around dataflow are easy to grasp, which is a very important point. A parallel framework that provides great multicore utilization but takes months if not years to master is not all that helpful. Dataflow programming makes the simple things easy and the hard tasks possible.

Dataflow applications are simple to express. Dataflow uses a composition programming model based on a building blocks approach. This leads to very modular designs that provide a high amount of reuse.

Dataflow does a good job of abstracting the details of parallel development. This is important as all of the lower level tools for parallel application development are available today in frameworks such as the java.util.concurrent library available in the JDK. However, these libraries are low-level and require a high degree of expertise to use them correctly. They rely on shared state that must be protected using synchronization techniques that can lead to race conditions, deadlocks and extremely hard-to-debug problems.
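For contrast, the snippet below shows the kind of hand-rolled, lock-protected shared state those low-level approaches require; it is a deliberately simplified sketch, and real code with multiple locks and more complex invariants is exactly where deadlocks and race conditions creep in.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Hand-rolled shared state: every reader and writer must remember to lock,
// and any two locks acquired in different orders are a potential deadlock.
class SharedWordCounts {
    private final Map<String, Long> counts = new HashMap<>();
    private final ReentrantLock lock = new ReentrantLock();

    void increment(String word) {
        lock.lock();
        try {
            counts.merge(word, 1L, Long::sum);
        } finally {
            lock.unlock();
        }
    }

    long get(String word) {
        lock.lock();
        try {
            return counts.getOrDefault(word, 0L);
        } finally {
            lock.unlock();
        }
    }
}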

Being based on a shared-nothing, immutable message-passing architecture makes dataflow a simpler programming model. The nodes within a dataflow graph don't have to worry about using synchronization techniques to protect shared memory. They are lock-free, so deadlock and race conditions are not a worry either. The dataflow architecture inherently handles these conditions, allowing developers to focus on the job at hand. And since the data streams are immutable, multiple readers can attach to the output of a node. This feature provides more flexibility and reuse in the programming model.

The immutability of the data flows also limits the side effects of nodes within a dataflow program. Nodes within a dataflow graph can only communicate over dataflow channels. By following this model, you are assured that no global state or state of other nodes can be affected by a node. Again, this helps to simplify the programming model. Developing new nodes is free of most of the worries normally involved with parallel programming.

The dataflow programming model is functional in style. Each node within a graph provides a very specific, continuous function on its input data. Programs are built by stitching these functions together in various ways to create complex applications.
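In Java terms, each node's work can be thought of as a java.util.function.Function, and building a graph is conceptually function composition with parallel, flow-controlled channels inserted between the functions. The example below is only an analogy, not a dataflow runtime:

import java.util.function.Function;

public class FunctionCompositionAnalogy {
    public static void main(String[] args) {
        // Each "node" is a pure function of its input records.
        Function<String, String[]> parse     = line -> line.split(",");
        Function<String[], String> firstCol  = fields -> fields[0];
        Function<String, String>   normalize = String::toUpperCase;

        // Stitching nodes together is, conceptually, function composition --
        // a dataflow runtime just inserts parallel, flow-controlled channels in between.
        Function<String, String> pipeline = parse.andThen(firstCol).andThen(normalize);

        System.out.println(pipeline.apply("widget,42,blue"));  // prints WIDGET
    }
}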

Dataflow-based architecture elegantly takes advantage of multicore processors on a single machine (scale up). It's also a good architecture for scaling out to multiple machines. Nodes that run across machine boundaries can communicate over data channels using network sockets. This provides the same simple, flexible dataflow programming model in a distributed configuration.
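As a hedged sketch of how a dataflow channel can cross a machine boundary, the helpers below serialize records over a plain socket on one side and feed them into a local channel on the other; the "EOS" sentinel, the use of Java serialization, and the absence of batching, handshaking and failure handling are all simplifications for illustration.

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;

public class RemoteChannelSketch {
    // Downstream side: accept a connection and feed records into a local channel.
    static void receive(int port, BlockingQueue<Object> localChannel) throws Exception {
        try (ServerSocket server = new ServerSocket(port);
             Socket socket = server.accept();
             ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
            Object record;
            while (!"EOS".equals(record = in.readObject())) {
                localChannel.put(record);
            }
        }
    }

    // Upstream side: drain a local channel and ship each record across the wire.
    static void send(String host, int port, BlockingQueue<Object> localChannel) throws Exception {
        try (Socket socket = new Socket(host, port);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            Object record;
            while (!"EOS".equals(record = localChannel.take())) {
                out.writeObject(record);
            }
            out.writeObject("EOS"); // propagate end-of-stream to the remote node
        }
    }
}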

Dataflow and Big Data
The inherent pipeline parallelism built into dataflow programming makes dataflow great for datasets ranging from thousands to billions of records. Applications written using dataflow techniques can scale easily to extremely large data sizes, generally without much strain on the memory system, as a dataflow application will eventually enter a steady state of memory consumption. The overall amount of data pumped through the application doesn't affect that steady-state memory size.

Not all dataflow operators are friendly when it comes to memory consumption. Many are designed specifically to load data into memory. For example, a hash join operator may load one of its data sources into an in-memory index. This is the nature of the operator and must be taken into account when using it.
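To illustrate why such an operator is memory-hungry (a hedged sketch with made-up record shapes, not a real operator implementation), a hash join builds an in-memory index over one input, ideally the smaller one, and then streams the other input past it:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashJoinSketch {
    // Joins the streaming "probe" side against the fully loaded "build" side.
    // The whole build side is held in memory -- the cost noted above.
    static List<String> hashJoin(List<String[]> buildSide, List<String[]> probeSide) {
        // 1. Build an in-memory index keyed on the join column (column 0 here).
        Map<String, String[]> index = new HashMap<>();
        for (String[] row : buildSide) {
            index.put(row[0], row);
        }

        // 2. Stream the probe side past the index; only matches are emitted.
        List<String> joined = new ArrayList<>();
        for (String[] row : probeSide) {
            String[] match = index.get(row[0]);
            if (match != null) {
                joined.add(String.join(",", row) + " | " + String.join(",", match));
            }
        }
        return joined;
    }
}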

Being pipelined in nature also allows for great overlap of I/O and computational tasks. As mentioned earlier, this is an important "whole" application approach that is highly critical to success in building big data applications.

Dataflow systems are easily embeddable in today's commonly used systems. For instance, a dataflow-based application can easily be executed within the context of a Map-Reduce application. Experimentation with a dataflow-based platform named Pervasive DataRush has shown that Hadoop can be used to scale out an application while DataRush runs within each map step, parallelizing the mapper to take advantage of multicore efficiencies. Allowing each mapper to handle larger chunks of data lets the overall Map-Reduce application run faster, since each mapper is itself parallelized.
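The rough idea can be sketched with the standard Hadoop mapper API, as below; DataRush's actual API is not shown here, and the word-count example, thread-pool scheme and in-memory aggregation are assumptions made purely for illustration.

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// A mapper that parallelizes its own work across cores, so each map task can be
// handed a larger chunk of input and still keep the whole machine busy.
public class ParallelWordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private ExecutorService pool;
    private ConcurrentMap<String, Integer> counts;

    @Override
    protected void setup(Context context) {
        pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        counts = new ConcurrentHashMap<>();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) {
        final String line = value.toString();
        // Hand the per-record work to the internal pool instead of doing it inline.
        pool.submit(() -> {
            for (String word : line.split("\\s+")) {
                counts.merge(word, 1, Integer::sum);
            }
        });
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        // Emit from the single task thread, since the mapper Context is not thread-safe.
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            context.write(new Text(entry.getKey()), new IntWritable(entry.getValue()));
        }
    }
}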

Summary
Dataflow is a software architecture that is based on the idea of continuous functions executing in parallel on data streams. It's focused on data-intensive applications, lending itself to today's big data challenges. Dataflow is easy to grasp and simple to express, and this design-time scalability can be as important as its run-time scalability.

Dataflow allows developers to easily take advantage of today's multicore processors and also fits well into a distributed environment. Tackling big data problems with dataflow is straightforward and ensures your applications will be able to scale in the future to meet the growing demands of your organization.

More Stories By Jim Falgout

Jim Falgout has 20+ years of large-scale software development experience and is active in the Java development community. As Chief Technologist for Pervasive DataRush, he’s responsible for setting innovative design principles that guide the company’s engineering teams as they develop new releases and products for partners and customers. He applied dataflow principles to help architect Pervasive DataRush.

Prior to Pervasive, Jim held senior positions with NexQL, Voyence, Net Perceptions/KD1, Convex Computer, Sequel Systems and E-Systems. Jim has a B.Sc. (Cum Laude) in Computer Science from Nicholls State University. He can be reached at [email protected]
