Dataflow Programming: A Scalable Data-Centric Approach to Parallelism

Dataflow allows developers to easily take advantage of today’s multicore processors

There are two major drivers behind the need to embrace parallelism: the dramatic shift to commodity multicore CPUs, and the striking increase in the amount of data being processed by the applications that run our enterprises. Any approach to parallelism must address both factors, or we will fall short of resolving the crisis that is upon us. While other data-centric approaches such as Map-Reduce have generated interest, dataflow programming is arguably the easiest parallel strategy to adopt for the millions of developers trained in serial programming.

Hardware Support for Parallelism
Let's start with an overview of the parallelism available in modern processors. First there is processor-level parallelism involving instruction pipelining and other techniques handled by the processor itself. These are optimized by compilers and runtime environments such as the Java Virtual Machine, so this goodness is available to all developers without much effort on our part.

Recently, commodity multicore processors have brought parallelism into the mainstream. As we move into many-core systems, we essentially have a "cluster in a box." But software has lagged behind hardware in the area of parallelism, and as a result many of today's multicore systems are woefully under-utilized. We need a paradigm shift to a new programming model that embraces this high level of parallelism from the start, making it easy for developers to create highly scalable applications. Focusing only on cores, however, doesn't take the whole system into account. Data-intensive applications by definition perform significant amounts of I/O, so a parallel programming model must overlap I/O operations with computation. Otherwise we'll be unable to build applications that keep the multicore monster fed and happy.

Virtualization is a popular way to divvy up multicore machines, essentially treating a single machine as multiple, separate machines. Each virtual slice provides its own function and operates somewhat independently. This works well for splitting up IT functions such as email and web servers, but it doesn't help with the problem of crunching big data. For big data problems, taking advantage of the whole machine, the "cluster in a box," is imperative.

Scale-out, using multiple machines to execute big data jobs, is another way to implement parallelism. This technique has been around for ages and is seeing new instantiations in systems such as Hadoop, built on the Map-Reduce design pattern. Scaling out to large cluster systems certainly has its advantages and is absolutely required for Internet-scale data problems. It does, however, introduce inefficiencies that can be critical barriers to full utilization in smaller configurations (clusters of fewer than roughly 100 nodes).

The Next Step for Hadoop
In a talk on Hadoop, Jeff Hammerbacher stated, "More programmer-friendly parallel dataflow languages await discovery, I think. MapReduce is one (small) step in that direction." As Jeff points out, Map-Reduce is a great first step, but it is lacking as a programming model. Integrating dataflow with the scale-out capabilities available in frameworks such as Hadoop offers the next big step in handling big data.

Dataflow Programming
Dataflow architecture is based on the concept of using a dataflow graph for program execution. A dataflow graph consists of nodes that are computational elements; the edges provide data paths between nodes. A dataflow graph is a directed acyclic graph (DAG). Figure 1 provides a snapshot of an executing dataflow application. Note how all of the nodes execute in parallel, flowing data in pipeline fashion.

Figure 1: A snapshot of an executing dataflow graph, with every node running in parallel and data flowing through the pipeline.

Nodes in the graph do work by reading data from their input flow(s), transforming the data and pushing the results to their outputs. Nodes that provide connectivity to external sources or sinks may have only output flows or only input flows. A graph is constructed by creating nodes and linking their data flows together. Once a graph is constructed and executed, the connectivity nodes begin reading data and pushing it downstream. Downstream consumers read the data, process it and send their results on. The result is pipeline parallelism: each node in the graph runs in parallel as the pipeline fills.

Dataflow provides a computational model: a dataflow graph must first be constructed before it can be executed. This leads to a very nice modularity, creating building blocks (nodes) that can be plugged together in an endless number of ways to create complex applications. The model is analogous to the UNIX shell, where you string together multiple commands into a pipeline. Each command reads its input, does something with the data and writes to its output. The commands operate independently in the sense that they don't care what is upstream or downstream from them; it is up to the pipeline composer (the end user) to assemble the pipeline so it processes the data as desired. Dataflow is very similar to this model, but provides more capabilities.
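
To make this concrete, here is a minimal sketch in Java of a three-node pipeline: a source, a transform and a sink joined by bounded queues standing in for dataflow channels. The class name and end-of-stream sentinel are illustrative assumptions, not the API of any particular dataflow product. Each node runs on its own thread, so all three stages execute in parallel once the pipeline fills, much like a UNIX command pipeline.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal dataflow sketch: nodes are independent threads joined by
// bounded queues (the graph edges). Illustrative only, not a real
// dataflow product's API.
public class PipelineDemo {
    static final String EOF = "\u0000EOF"; // sentinel marking end of stream

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q1 = new ArrayBlockingQueue<>(16);
        BlockingQueue<String> q2 = new ArrayBlockingQueue<>(16);

        // Source node: pushes records downstream, then the EOF sentinel.
        Thread source = new Thread(() -> {
            try {
                for (String s : List.of("alpha", "beta", "gamma")) q1.put(s);
                q1.put(EOF);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Transform node: reads its input flow, applies its function,
        // pushes results to its output flow.
        Thread upper = new Thread(() -> {
            try {
                for (String s = q1.take(); !s.equals(EOF); s = q1.take())
                    q2.put(s.toUpperCase());
                q2.put(EOF);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Sink node: consumes the final flow.
        Thread sink = new Thread(() -> {
            try {
                for (String s = q2.take(); !s.equals(EOF); s = q2.take())
                    System.out.println(s);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        source.start(); upper.start(); sink.start();
        source.join(); upper.join(); sink.join();
    }
}
```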

The dataflow architecture provides flow control. Flow control prevents fast producers from overrunning slower consumers. Flow control is inherent in the way dataflow works and puts no burden on the programmer to deal with issues such as deadlock or race conditions.
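
One way to see this behavior in miniature: with a small, bounded channel, the producer's put() blocks as soon as the consumer falls behind, throttling the producer automatically. This is a toy illustration of the backpressure idea, not production code.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Backpressure demo (illustrative): a fast producer is throttled by a
// slow consumer because put() blocks once the bounded channel is full.
public class FlowControlDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(4); // tiny capacity

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    channel.put(i); // blocks while the channel holds 4 items
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    Thread.sleep(50); // simulate a slow consumer
                    System.out.println("consumed " + channel.take());
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
    }
}
```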

Dataflow is focused on data parallelism. As such, it is not a great fit for all computational problems. But as has become evident over the past few years, there are many domains of parallel problems and one solution or architecture will not solve all problems for all domains. Dataflow provides a different programming paradigm for most developers, so it requires a bit of a shift in thinking to a more data-centric way of designing solutions. But once that shift takes place, dataflow programming is a natural way to express data-centric solutions.

Dataflow Programming and Actors
Dataflow programming and the Actor model available in languages such as Scala and Erlang share many similarities. The Actor model provides for independent actors that communicate using message passing. Within an actor, pattern matching is used to determine how to handle each message. Messages are generally asynchronous, but synchronous behavior with flow control can be built on top of the Actor model with some effort.
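
For comparison, a bare-bones actor reduces to a thread draining an unbounded mailbox. In the illustrative Java sketch below (a toy, not Scala's or Erlang's actual actor runtime), send() is fire-and-forget and never blocks, which is exactly why flow control must be layered on separately.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Minimal actor sketch: an unbounded mailbox drained by a single thread.
// send() is asynchronous and never blocks, unlike a bounded dataflow channel.
public class MiniActor<M> {
    private final LinkedBlockingQueue<M> mailbox = new LinkedBlockingQueue<>();

    public MiniActor(Consumer<M> behavior) {
        Thread loop = new Thread(() -> {
            try {
                while (true) behavior.accept(mailbox.take()); // one message at a time
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        loop.setDaemon(true);
        loop.start();
    }

    public void send(M message) { mailbox.offer(message); } // fire-and-forget

    public static void main(String[] args) throws InterruptedException {
        MiniActor<String> logger = new MiniActor<>(msg -> System.out.println("got: " + msg));
        logger.send("hello");
        logger.send("world");
        Thread.sleep(100); // give the daemon thread time to drain the mailbox
    }
}
```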

In general, the Actor model is best used for task parallelism; Erlang, for example, was originally developed within the telecom industry for building non-stop control systems. Dataflow is data-centric and therefore well suited to big data processing tasks.

Dataflow Goodness
As just mentioned, dataflow programming is a different paradigm, so it requires somewhat of a shift in design thinking. This is not a critical issue because the concepts behind dataflow are easy to grasp, and that matters: a parallel framework that provides great multicore utilization but takes months if not years to master is not all that helpful. Dataflow programming makes the simple things easy and the hard tasks possible.

Dataflow applications are simple to express. Dataflow uses a composition programming model based on a building-blocks approach, which leads to very modular designs with a high degree of reuse.

Dataflow does a good job of abstracting away the details of parallel development. This is important because all of the lower-level tools for parallel application development are available today in libraries such as java.util.concurrent in the JDK. However, these libraries are low-level and require a high degree of expertise to use correctly. They rely on shared state that must be protected with synchronization techniques, which can lead to race conditions, deadlocks and extremely hard-to-debug problems.

Its shared-nothing, immutable message-passing architecture makes dataflow a simplified programming model. The nodes within a dataflow graph don't have to use synchronization techniques to protect shared memory; they are lock-free, so deadlock and race conditions are not a worry either. The dataflow architecture inherently handles these conditions, allowing developers to focus on the job at hand. And since the data streams are immutable, multiple readers can attach to a node's output, providing more flexibility and reuse in the programming model.

The immutability of the data flows also limits the side effects of nodes within a dataflow program. Nodes within a dataflow graph can only communicate over dataflow channels. By following this model, you are assured that no global state or state of other nodes can be affected by a node. Again, this helps to simplify the programming model. Developing new nodes is free of most of the worries normally involved with parallel programming.
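
A small sketch of what immutability buys: if the records flowing between nodes cannot be mutated, a fan-out point can hand the same instance to several downstream readers with no copying and no locks. The names here are hypothetical, and Java's record syntax (Java 16+) stands in for an immutable message type.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Immutable message + fan-out sketch: because Reading can't be mutated,
// the same instance is safely shared with every downstream reader.
public class FanOutDemo {
    record Reading(String key, double value) {} // immutable by construction

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Reading> readerA = new ArrayBlockingQueue<>(8);
        BlockingQueue<Reading> readerB = new ArrayBlockingQueue<>(8);
        List<BlockingQueue<Reading>> readers = List.of(readerA, readerB);

        Reading r = new Reading("sensor-1", 42.0);
        for (BlockingQueue<Reading> q : readers) q.put(r); // no defensive copy needed

        for (BlockingQueue<Reading> q : readers)
            System.out.println(q.take()); // each reader sees the same immutable record
    }
}
```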

The dataflow programming model is functional in style. Each node within a graph provides a very specific, continuous function on its input data. Programs are built by stitching these functions together in various ways to create complex applications.
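
In Java terms, each node is conceptually a function on its input, and building a graph is function composition. A hypothetical illustration using java.util.function (real dataflow nodes operate on streams of records rather than single values):

```java
import java.util.function.Function;

// Each node behaves like a function on its input; wiring nodes together
// is composition.
public class ComposeDemo {
    public static void main(String[] args) {
        Function<String, String> trim = String::trim;
        Function<String, Integer> parse = Integer::parseInt;
        Function<Integer, Integer> square = n -> n * n;

        // Stitch the "nodes" together into a pipeline.
        Function<String, Integer> pipeline = trim.andThen(parse).andThen(square);
        System.out.println(pipeline.apply("  7 ")); // prints 49
    }
}
```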

Dataflow-based architecture elegantly takes advantage of multicore processors on a single machine (scale up). It's also a good architecture for scaling out to multiple machines. Nodes that run across machine boundaries can communicate over data channels using network sockets. This provides the same simple, flexible dataflow programming model in a distributed configuration.

Dataflow and Big Data
The inherent pipeline parallelism built into dataflow programming makes it a great fit for datasets ranging from thousands to billions of records. Applications written using dataflow techniques scale easily to extremely large data sizes, generally without much strain on the memory system, because a dataflow application eventually reaches a steady state of memory consumption. The total amount of data pumped through the application doesn't affect that steady-state memory size.

Not all dataflow operators are friendly when it comes to memory consumption; some are designed specifically to load data into memory. For example, a hash join operator may load one of its data sources into an in-memory index. This is the nature of the operator and must be taken into account when using it.
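
A hash join reads one entire side into an in-memory index (the build phase) before streaming the other side past it (the probe phase), so its memory use grows with the build side. Below is a simplified sketch of the general technique, not any particular product's operator.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified hash join: the "build" side is fully loaded into memory,
// then the "probe" side streams past it one record at a time.
public class HashJoinDemo {
    public static void main(String[] args) {
        List<String[]> build = List.of(new String[]{"1", "Austin"},
                                       new String[]{"2", "Dallas"});
        List<String[]> probe = List.of(new String[]{"1", "Alice"},
                                       new String[]{"2", "Bob"},
                                       new String[]{"3", "Carol"});

        // Build phase: memory use is proportional to the build side's size.
        Map<String, String> index = new HashMap<>();
        for (String[] row : build) index.put(row[0], row[1]);

        // Probe phase: streams, using constant additional memory per record.
        List<String> joined = new ArrayList<>();
        for (String[] row : probe) {
            String city = index.get(row[0]);
            if (city != null) joined.add(row[1] + " @ " + city);
        }
        joined.forEach(System.out::println); // Alice @ Austin, Bob @ Dallas
    }
}
```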

Being pipelined in nature also allows for great overlap of I/O and computational tasks. As mentioned earlier, this is an important "whole" application approach that is highly critical to success in building big data applications.

Dataflow systems are easily embedded in today's commonly used systems. For instance, a dataflow-based application can be executed within the context of a Map-Reduce application. Experimentation with the dataflow-based platform Pervasive DataRush has shown that Hadoop can provide the scale-out while DataRush runs within each map step, parallelizing the mapper to take advantage of multicore efficiencies. Allowing each mapper to handle larger chunks of data lets the overall Map-Reduce application run faster, since each mapper is itself parallelized.

Summary
Dataflow is a software architecture based on the idea of continuous functions executing in parallel on data streams. It is focused on data-intensive applications, lending itself to today's big data challenges. Dataflow is easy to grasp and simple to express, and that design-time simplicity can be as important as its run-time scalability.

Dataflow allows developers to easily take advantage of today's multicore processors and also fits well into a distributed environment. Tackling big data problems with dataflow is straightforward and ensures your applications will be able to scale in the future to meet the growing demands of your organization.

More Stories By Jim Falgout

Jim Falgout has 20+ years of large-scale software development experience and is active in the Java development community. As Chief Technologist for Pervasive DataRush, he’s responsible for setting innovative design principles that guide the company’s engineering teams as they develop new releases and products for partners and customers. He applied dataflow principles to help architect Pervasive DataRush.

Prior to Pervasive, Jim held senior positions with NexQL, Voyence, Net Perceptions/KD1, Convex Computer, Sequel Systems and E-Systems. Jim has a B.Sc. (Cum Laude) in Computer Science from Nicholls State University. He can be reached at [email protected]
