
Dataflow Programming: A Scalable Data-Centric Approach to Parallelism

Dataflow allows developers to easily take advantage of today’s multicore processors

There are two major drivers behind the need to embrace parallelism: the dramatic shift to commodity multicore CPUs, and the striking increase in the amount of data being processed by the applications that run our enterprises. These two factors must be addressed by any approach to parallelism or we will find ourselves falling short of resolving the crisis that is upon us. While there are data-centric approaches that have generated interest, including Map-Reduce, dataflow programming is arguably the easiest parallel strategy to adopt for the millions of developers trained in serial programming.

The blog gives a nice summary of why parallel processing is important.

Hardware Support for Parallelism
Let's start with an overview of the supported parallelism available today in modern processors. First there is processor-level parallelism involving instruction pipelining and other techniques handled by the processor. These are all optimized by compilers and runtime environments such as the Java Virtual Machine. This goodness is available to all developers without much effort on our part.

Recently, commodity multicore processors have brought parallelism into the mainstream. As we move into many-core systems, we now have what is essentially a "cluster in a box." But software has lagged behind hardware in the area of parallelism, and as a result many of today's multicore systems are woefully under-utilized. We need a paradigm shift to a new programming model that embraces this high level of parallelism from the start, making it easy for developers to create highly scalable applications. However, focusing only on cores doesn't take the whole system into account. Data-intensive applications by definition perform significant amounts of I/O. A parallel programming model must account for overlapping I/O operations with computation; otherwise we'll be unable to build applications that can keep the multicore monster fed and happy.

Virtualization is a popular way to divvy up multicore machines. It essentially treats a single machine as multiple, separate machines. Each virtual slice has a function to provide and each operates somewhat independently. This works well for splitting up IT functions such as email servers and web servers, but it doesn't help with the problem of crunching big data. For big data problems, taking advantage of the whole machine, the "cluster in a box," is imperative.

Scale-out, using multiple machines to execute big data jobs, is another way to implement parallelism. This technique has been around for ages and is seeing new instantiations in systems such as Hadoop, built on the Map-Reduce design pattern. Scaling out to large cluster systems certainly has its advantages and is absolutely required for Internet-scale data problems. It does, however, introduce inefficiencies that can be critical barriers to full utilization in smaller configurations (clusters of fewer than roughly 100 nodes).

The Next Step for Hadoop
In a talk on Hadoop, Jeff Hammerbacher stated, "More programmer-friendly parallel dataflow languages await discovery, I think. MapReduce is one (small) step in that direction." His talk is summarized in this blog. As Jeff points out, Map-Reduce is a great first step, but is lacking as a programming model. Integrating dataflow with the scale-out capabilities available in frameworks such as Hadoop offers the next big step in handling big data.

Dataflow Programming
Dataflow architecture is based on the concept of using a dataflow graph for program execution. A dataflow graph consists of nodes that are computational elements; the edges provide data paths between nodes. A dataflow graph is a directed acyclic graph (DAG). Figure 1 provides a snapshot of an executing dataflow application. Note how all of the nodes are executing in parallel, flowing data in a pipeline fashion.

Figure 1: A snapshot of an executing dataflow application

Nodes in the graph do work by reading data from their input flow(s), transforming the data and pushing the results to their outputs. Nodes that provide connectivity to external data sources and sinks may have only output flows or only input flows. A graph is constructed by creating nodes and linking their data flows together. Once a graph is constructed and executed, the connectivity nodes begin reading data and pushing it downstream. Downstream consumers read the data, process it and send their results downstream. This results in pipeline parallelism, allowing each node in the graph to run in parallel as the pipeline begins to fill.
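
To make the node model concrete, here is a minimal sketch in plain Java; it is not the API of Pervasive DataRush or any other framework, and the TransformNode class and its END end-of-stream marker are hypothetical names introduced purely for illustration. The node loops over its input flow, applies a function to each record, and pushes the result to its output flow.

    import java.util.concurrent.BlockingQueue;
    import java.util.function.Function;

    // Minimal illustration of a dataflow node: read from the input flow,
    // transform each record, and push the result to the output flow.
    final class TransformNode<I, O> implements Runnable {
        static final Object END = new Object();  // hypothetical end-of-stream marker
        private final BlockingQueue<Object> input;
        private final BlockingQueue<Object> output;
        private final Function<I, O> transform;

        TransformNode(BlockingQueue<Object> input, BlockingQueue<Object> output,
                      Function<I, O> transform) {
            this.input = input;
            this.output = output;
            this.transform = transform;
        }

        @SuppressWarnings("unchecked")
        public void run() {
            try {
                for (Object record = input.take(); record != END; record = input.take()) {
                    output.put(transform.apply((I) record));  // push the transformed record downstream
                }
                output.put(END);                              // propagate end-of-stream to the consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }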

Dataflow provides a computational model. A dataflow graph must first be constructed before it can be executed. This leads to a very nice modularity: creating building blocks (nodes) that can be plugged together in an endless number of ways to create complex applications. This model is analogous to the UNIX shell model. With the UNIX shell, you can string together multiple commands that are pipelined for execution. Each command reads its input, does something with the data and writes to its output. The commands operate independently in the sense that they don't care what is upstream or downstream from them. It is up to the pipeline composer (the end user) to create the pipeline correctly to process the data as wanted. Dataflow is very similar to this model, but provides more capabilities.
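
To illustrate that composition style, the sketch below wires three stages together much like a shell pipeline, reusing the hypothetical TransformNode from the previous sketch; a real dataflow framework would hide this wiring behind its own graph-construction API.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Compose a small pipeline: source -> uppercase -> length -> sink.
    public class PipelineDemo {
        public static void main(String[] args) throws Exception {
            BlockingQueue<Object> a = new ArrayBlockingQueue<>(1024);
            BlockingQueue<Object> b = new ArrayBlockingQueue<>(1024);
            BlockingQueue<Object> c = new ArrayBlockingQueue<>(1024);

            // Each node runs independently; it only knows its own input and output flows.
            new Thread(new TransformNode<String, String>(a, b, String::toUpperCase)).start();
            new Thread(new TransformNode<String, Integer>(b, c, String::length)).start();

            // Source: push a few records, then signal end-of-stream.
            for (String s : new String[] {"data", "flow", "graph"}) {
                a.put(s);
            }
            a.put(TransformNode.END);

            // Sink: drain results as they arrive down the pipeline.
            for (Object r = c.take(); r != TransformNode.END; r = c.take()) {
                System.out.println(r);
            }
        }
    }

As in the shell analogy, neither stage knows or cares what sits upstream or downstream of it; the composer decides how the pieces fit together.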

The dataflow architecture provides flow control. Flow control prevents fast producers from overrunning slower consumers. Flow control is inherent in the way dataflow works and puts no burden on the programmer to deal with issues such as deadlock or race conditions.
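
One common way to realize this kind of flow control, shown here only as an illustrative sketch rather than any framework's actual mechanism, is a bounded channel: when the downstream buffer fills up, the producer blocks until the slower consumer catches up.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // The producer can only run ahead of the consumer by the channel's capacity
    // (4 records here); after that, put() blocks and the producer is throttled.
    public class FlowControlDemo {
        public static void main(String[] args) throws Exception {
            BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(4);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 100; i++) {
                        channel.put(i);       // blocks whenever the consumer falls 4 records behind
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();

            for (int i = 0; i < 100; i++) {
                Thread.sleep(10);             // deliberately slow consumer
                System.out.println(channel.take());
            }
            producer.join();
        }
    }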

Dataflow is focused on data parallelism. As such, it is not a great fit for all computational problems. But as has become evident over the past few years, there are many domains of parallel problems and one solution or architecture will not solve all problems for all domains. Dataflow provides a different programming paradigm for most developers, so it requires a bit of a shift in thinking to a more data-centric way of designing solutions. But once that shift takes place, dataflow programming is a natural way to express data-centric solutions.

Dataflow Programming and Actors
Dataflow programming and the Actor model available in languages such as Scala and Erlang share many similarities. The Actor model provides for independent actors to communicate using message passing. Within an actor, pattern matching is used to allow the actor to determine how to handle a message. Messages are generally asynchronous, but synchronous behavior with flow control can be built on top of the Actor model with some effort.
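
By way of comparison, here is a deliberately minimal actor-style sketch in plain Java rather than Erlang or Scala; the CounterActor and its message classes are hypothetical, and dispatching on the message type stands in for pattern matching. Note that the mailbox is unbounded and sends are asynchronous, which is exactly where flow control would have to be layered on.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // A toy actor: messages arrive asynchronously in a private mailbox, and the actor
    // dispatches on message type much as Erlang or Scala actors pattern match.
    public class CounterActor implements Runnable {
        public static final class Increment {
            final int amount;
            Increment(int amount) { this.amount = amount; }
        }
        public static final class Print {}

        private final BlockingQueue<Object> mailbox = new LinkedBlockingQueue<>();
        private int count = 0;  // state is private to the actor, so no locks are required

        public void send(Object message) {
            mailbox.offer(message);  // asynchronous, unbounded send
        }

        public void run() {
            try {
                while (true) {
                    Object msg = mailbox.take();
                    if (msg instanceof Increment) {
                        count += ((Increment) msg).amount;
                    } else if (msg instanceof Print) {
                        System.out.println("count = " + count);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // stop when interrupted
            }
        }
    }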

 

In general, the Actor model is best used for task parallelism. For example, Erlang was originally developed within the telecom industry for building non-stop control systems. Dataflow is data centric and therefore well suited for big data processing tasks.

Dataflow Goodness
As just mentioned, dataflow programming is a different paradigm and so it does require somewhat of a shift in design thinking. This is not a critical issue as the concepts around dataflow are easy to grasp, which is a very important point. A parallel framework that provides great multicore utilization but takes months if not years to master is not all that helpful. Dataflow programming makes the simple things easy and the hard tasks possible.

Dataflow applications are simple to express. Dataflow uses a composition programming model based on a building blocks approach. This leads to very modular designs that provide a high amount of reuse.

Dataflow does a good job of abstracting the details of parallel development. This is important as all of the lower level tools for parallel application development are available today in frameworks such as the java.util.concurrent library available in the JDK. However, these libraries are low-level and require a high degree of expertise to use them correctly. They rely on shared state that must be protected using synchronization techniques that can lead to race conditions, deadlocks and extremely hard-to-debug problems.
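
To illustrate what that low-level style looks like, here is a small sketch using java.util.concurrent directly; the workload is hypothetical. Every piece of shared state must be protected explicitly (here with an AtomicLong), and it is getting that protection wrong that produces the races, deadlocks, and hard-to-debug problems described above.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Hand-rolled parallelism: the programmer owns the thread pool, the shared
    // state, and the responsibility for protecting that state.
    public class SharedStateDemo {
        public static void main(String[] args) throws InterruptedException {
            AtomicLong total = new AtomicLong();   // shared state; an unsynchronized long would lose updates
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for (int task = 0; task < 4; task++) {
                pool.submit(() -> {
                    for (int i = 0; i < 1000000; i++) {
                        total.incrementAndGet();   // every update must be made thread-safe by hand
                    }
                });
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println("total = " + total.get());  // 4000000 only because AtomicLong guards the updates
        }
    }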

Being based on a shared-nothing, immutable message-passing architecture makes dataflow a simplified programming model. The nodes within a dataflow graph don't have to worry about using synchronization techniques to protect shared memory. They are lock-free, so deadlock and race conditions are not a worry either. The dataflow architecture inherently handles these conditions, allowing developers to focus on the job at hand. And since the data streams are immutable, multiple readers can attach to the output of a node. This feature provides more flexibility and reuse in the programming model.

The immutability of the data flows also limits the side effects of nodes within a dataflow program. Nodes within a dataflow graph can only communicate over dataflow channels. By following this model, you are assured that no global state or state of other nodes can be affected by a node. Again, this helps to simplify the programming model. Developing new nodes is free of most of the worries normally involved with parallel programming.
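
A brief sketch of that idea in plain Java, with hypothetical Trade and FanOut names: because the record is immutable, the same object can be handed by reference to any number of downstream readers without copying, locking, or fear of another node mutating it.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;

    // An immutable record: once constructed, its fields can never change.
    public final class Trade {
        private final String symbol;
        private final double price;

        public Trade(String symbol, double price) {
            this.symbol = symbol;
            this.price = price;
        }
        public String getSymbol() { return symbol; }
        public double getPrice()  { return price; }
    }

    // Fan-out: publish one immutable record to every attached reader's channel.
    class FanOut {
        private final List<BlockingQueue<Trade>> readers = new ArrayList<>();

        void attach(BlockingQueue<Trade> reader) { readers.add(reader); }

        void publish(Trade trade) throws InterruptedException {
            for (BlockingQueue<Trade> reader : readers) {
                reader.put(trade);  // no copy and no lock needed: the record cannot change
            }
        }
    }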

The dataflow programming model is functional in style. Each node within a graph provides a very specific, continuous function on its input data. Programs are built by stitching these functions together in various ways to create complex applications.

Dataflow-based architecture elegantly takes advantage of multicore processors on a single machine (scale up). It's also a good architecture for scaling out to multiple machines. Nodes that run across machine boundaries can communicate over data channels using network sockets. This provides the same simple, flexible dataflow programming model in a distributed configuration.

Dataflow and Big Data
The inherent pipeline parallelism built into dataflow programming makes dataflow a great fit for datasets ranging from thousands to billions of records. Applications written using dataflow techniques can scale easily to extremely large data sizes, generally without much strain on the memory system, because a dataflow application eventually enters a steady state of memory consumption. The overall amount of data pumped through the application doesn't affect that steady-state memory size.

Not all dataflow operators are friendly when it comes to memory consumption. Many are designed specifically to load data into memory. For example, a hash join operator may load one of its data sources into an in-memory index. This is the nature of the operator and must be taken into account when using it.
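
As a rough illustration (not any framework's actual operator), the build-then-probe structure of a hash join looks like the sketch below; memory consumption is driven by the size of the build side, and in a real dataflow operator the joined rows would be streamed downstream rather than collected into a list.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // A simple hash join: index the smaller ("build") input in memory, then stream
    // the larger ("probe") input past that index.
    public class HashJoinSketch {

        // Each row is a String[]; column 0 is the join key in both inputs.
        public static List<String[]> hashJoin(Iterable<String[]> buildSide, Iterable<String[]> probeSide) {
            Map<String, List<String[]>> index = new HashMap<>();
            for (String[] row : buildSide) {              // build phase: loads one whole input into memory
                index.computeIfAbsent(row[0], k -> new ArrayList<>()).add(row);
            }

            List<String[]> joined = new ArrayList<>();
            for (String[] probe : probeSide) {            // probe phase: streams the other input
                List<String[]> matches = index.get(probe[0]);
                if (matches == null) {
                    continue;
                }
                for (String[] match : matches) {
                    String[] out = new String[match.length + probe.length];
                    System.arraycopy(match, 0, out, 0, match.length);
                    System.arraycopy(probe, 0, out, match.length, probe.length);
                    joined.add(out);                      // in a dataflow node this row would be pushed downstream
                }
            }
            return joined;
        }
    }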

Being pipelined in nature also allows for great overlap of I/O and computational tasks. As mentioned earlier, this is an important "whole" application approach that is highly critical to success in building big data applications.

Dataflow systems are easily embedded in commonly used systems. For instance, a dataflow-based application can be executed within the context of a Map-Reduce application. Experimentation with a dataflow-based platform named Pervasive DataRush has shown that Hadoop can be used to scale out an application while DataRush runs within each map step, parallelizing the mapper to take advantage of multicore efficiencies. Because each mapper is itself parallelized and can handle larger chunks of data, the overall Map-Reduce application runs faster.
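
In outline, that kind of embedding might look like the sketch below. It uses the standard Hadoop Mapper API, but a plain thread pool stands in for the embedded DataRush graph (whose actual API is not shown here), and the record-processing logic and type parameters are hypothetical.

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // A mapper that fans its input records out to an internal parallel pipeline,
    // so a single map task can keep all of the cores on its node busy.
    public class ParallelChunkMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        private ExecutorService pipeline;

        @Override
        protected void setup(Context context) {
            // Stand-in for constructing an embedded dataflow graph inside the mapper.
            pipeline = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            final String line = value.toString();
            pipeline.submit(() -> process(line));  // the per-record work runs on the other cores
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            pipeline.shutdown();
            pipeline.awaitTermination(1, TimeUnit.HOURS);
        }

        private void process(String line) {
            // Hypothetical per-record work; emitting results back through context.write(...)
            // would need coordination across threads and is omitted from this sketch.
        }
    }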

Summary
Dataflow is a software architecture that is based on the idea of continuous functions executing in parallel on data streams. It's focused on data-intensive applications, lending itself to today's big data challenges. Dataflow is easy to grasp and simple to express, and this design-time scalability can be as important as its run-time scalability.

Dataflow allows developers to easily take advantage of today's multicore processors and also fits well into a distributed environment. Tackling big data problems with dataflow is straightforward and ensures your applications will be able to scale in the future to meet the growing demands of your organization.

More Stories By Jim Falgout

Jim Falgout has 20+ years of large-scale software development experience and is active in the Java development community. As Chief Technologist for Pervasive DataRush, he’s responsible for setting innovative design principles that guide the company’s engineering teams as they develop new releases and products for partners and customers. He applied dataflow principles to help architect Pervasive DataRush.

Prior to Pervasive, Jim held senior positions with NexQL, Voyence, Net Perceptions/KD1, Convex Computer, Sequel Systems and E-Systems. Jim has a B.Sc. (Cum Laude) in Computer Science from Nicholls State University. He can be reached at [email protected]
