By Jim Falgout
March 1, 2011 03:00 PM EST
There are two major drivers behind the need to embrace parallelism: the dramatic shift to commodity multicore CPUs, and the striking increase in the amount of data being processed by the applications that run our enterprises. Any approach to parallelism must address both factors, or it will fall short of resolving the crisis that is upon us. While data-centric approaches such as Map-Reduce have generated interest, dataflow programming is arguably the easiest parallel strategy to adopt for the millions of developers trained in serial programming.
Hardware Support for Parallelism
Let's start with an overview of the parallelism available in modern processors today. First there is processor-level parallelism: instruction pipelining and other techniques handled by the processor itself. These are optimized by compilers and runtime environments such as the Java Virtual Machine, so this goodness is available to all developers without much effort on our part.
Recently, commodity multicore processors have brought parallelism into the mainstream. As we move into many-core systems, we essentially have a "cluster in a box." But software has lagged behind hardware in the area of parallelism, and as a result many of today's multicore systems are woefully under-utilized. We need a paradigm shift to a new programming model that embraces this high level of parallelism from the start, making it easy for developers to create highly scalable applications. Focusing only on cores, however, ignores the rest of the system. Data-intensive applications by definition perform significant amounts of I/O. A parallel programming model must therefore account for overlapping I/O with computation; otherwise we'll be unable to build applications that can keep the multicore monster fed and happy.
Virtualization is a popular way to divvy up multicore machines, essentially treating a single machine as multiple, separate machines. Each virtual slice provides its own function and operates somewhat independently. This works well for splitting up IT functions such as email servers and web servers, but it doesn't help with the problem of crunching big data. For big data problems, taking advantage of the whole machine, the "cluster in a box," is imperative.
Scale-out, using multiple machines to execute big data jobs, is another way to implement parallelism. This technique has been around for ages and is seeing new instantiations in systems such as Hadoop, built on the Map-Reduce design pattern. Scaling out to large cluster systems certainly has its advantages and is absolutely required for Internet-scale data problems. It does, however, introduce inefficiencies that can be critical barriers to full utilization in smaller configurations (clusters of fewer than roughly 100 nodes).
The Next Step for Hadoop
In a talk on Hadoop, Jeff Hammerbacher stated, "More programmer-friendly parallel dataflow languages await discovery, I think. MapReduce is one (small) step in that direction." As Jeff points out, Map-Reduce is a great first step, but is lacking as a programming model. Integrating dataflow with the scale-out capabilities available in frameworks such as Hadoop offers the next big step in handling big data.
Dataflow architecture is based on the concept of using a dataflow graph for program execution. A dataflow graph consists of nodes that are computational elements; the edges provide data paths between nodes. A dataflow graph is a directed acyclic graph (DAG). Figure 1 provides a snapshot of an executing dataflow application. Note how all of the nodes execute in parallel, flowing data in pipeline fashion.
Nodes in the graph do work by reading data from their input flow(s), transforming it and pushing the results to their outputs. Nodes that provide connectivity (sources and sinks) may have only output or only input flows. A graph is constructed by creating nodes and linking their data flows together. Once the graph is executed, the source nodes begin reading data and pushing it downstream. Downstream consumers read the data, process it and send their results onward. The result is pipeline parallelism: each node in the graph runs in parallel as the pipeline fills.
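To make this concrete, here is a minimal sketch of such a pipeline in plain Java. It is not the API of any real dataflow product; the node structure, the bounded queues used as edges, and the end-of-stream sentinel are all illustrative inventions:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MiniDataflow {
    // Invented sentinel signalling end-of-stream on an edge.
    private static final Integer EOS = Integer.MIN_VALUE;

    public static void main(String[] args) throws InterruptedException {
        // Edges: bounded queues connecting the nodes.
        BlockingQueue<Integer> edge1 = new ArrayBlockingQueue<>(16);
        BlockingQueue<Integer> edge2 = new ArrayBlockingQueue<>(16);

        // Source node: generates data and pushes it downstream.
        Thread source = new Thread(() -> {
            try {
                for (int i = 1; i <= 100; i++) edge1.put(i);
                edge1.put(EOS);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Transform node: reads each value, squares it, pushes the result on.
        Thread transform = new Thread(() -> {
            try {
                for (Integer v = edge1.take(); !v.equals(EOS); v = edge1.take())
                    edge2.put(v * v);
                edge2.put(EOS);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Sink node: consumes results and prints them.
        Thread sink = new Thread(() -> {
            try {
                for (Integer v = edge2.take(); !v.equals(EOS); v = edge2.take())
                    System.out.println(v);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // All three nodes run concurrently; data flows through the pipeline.
        source.start(); transform.start(); sink.start();
        source.join(); transform.join(); sink.join();
    }
}
```

While the source is still generating values, the transform and sink are already processing earlier ones; that overlap is the pipeline parallelism described above.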
Dataflow provides a computational model: a dataflow graph must first be constructed before it can be executed. This leads to very nice modularity, with building blocks (nodes) that can be plugged together in an endless number of ways to create complex applications. The model is analogous to the UNIX shell. With the shell, you can string together multiple commands that are pipelined for execution. Each command reads its input, does something with the data and writes to its output. The commands operate independently in the sense that they don't care what is upstream or downstream from them; it is up to the pipeline composer (the end user) to assemble the pipeline so the data is processed as desired. Dataflow is very similar to this model, but provides more capabilities.
The dataflow architecture also provides flow control, which prevents fast producers from overrunning slower consumers. Flow control is inherent in the way dataflow works and puts no burden on the programmer to deal with issues such as deadlock or race conditions.
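In queue-based sketches like the one above, flow control falls out of the bounded edges: a put() on a full queue simply blocks until the consumer catches up. The toy below, with an arbitrarily small capacity and an artificial delay, makes that backpressure visible:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FlowControlDemo {
    public static void main(String[] args) throws InterruptedException {
        // Deliberately tiny edge: only 4 records may be in flight at once.
        BlockingQueue<Integer> edge = new ArrayBlockingQueue<>(4);

        // Fast producer: put() blocks whenever the edge is full, so the
        // producer is throttled to the consumer's pace with no extra code.
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    edge.put(i);
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        // Slow consumer: drains one record every 100 ms.
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    System.out.println("consumed " + edge.take());
                    Thread.sleep(100);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
        producer.join(); consumer.join();
    }
}
```

Watching the output, the producer races ahead by a few records and then settles into lock-step with the consumer, which is exactly the behavior flow control guarantees.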
Dataflow is focused on data parallelism. As such, it is not a great fit for every computational problem. But as has become evident over the past few years, there are many domains of parallel problems, and no single solution or architecture will solve all of them. Dataflow presents a different programming paradigm for most developers, so it requires a bit of a shift toward a more data-centric way of designing solutions. Once that shift takes place, though, dataflow programming is a natural way to express data-centric solutions.
Dataflow Programming and Actors
Dataflow programming and the Actor model available in languages such as Scala and Erlang share many similarities. The Actor model provides for independent actors that communicate using message passing. Within an actor, pattern matching is used to allow the actor to determine how to handle a message. Messages are generally asynchronous, but synchronous behavior with flow control can be built on top of the Actor model with some effort.
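Scala and Erlang support actors natively; as a rough, hypothetical illustration in Java terms, an actor is little more than a mailbox plus a dispatch loop that matches on message types:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ToyActor implements Runnable {
    // Invented stop message for shutting the actor down.
    public static final Object STOP = new Object();

    // The actor's mailbox: asynchronous and unbounded.
    private final BlockingQueue<Object> mailbox = new LinkedBlockingQueue<>();

    // Fire-and-forget send: the caller never blocks.
    public void send(Object msg) { mailbox.offer(msg); }

    @Override public void run() {
        try {
            while (true) {
                Object msg = mailbox.take();
                // "Pattern matching" on the message, Java-style.
                if (msg instanceof String)       System.out.println("text: " + msg);
                else if (msg instanceof Integer) System.out.println("count: " + msg);
                else if (msg == STOP)            return;
            }
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) {
        ToyActor actor = new ToyActor();
        new Thread(actor).start();
        actor.send("hello");
        actor.send(42);
        actor.send(STOP);
    }
}
```

Note that the mailbox here is unbounded, so nothing throttles a fast sender; that is the flow control which, as just mentioned, must be layered on top of the Actor model.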
In general, the Actor model is best suited to task parallelism; Erlang, for example, was originally developed within the telecom industry for building non-stop control systems. Dataflow, by contrast, is data centric and therefore well suited to big data processing tasks.
As just mentioned, dataflow programming is a different paradigm, so it does require somewhat of a shift in design thinking. This is not a critical barrier, as the concepts behind dataflow are easy to grasp, and that is an important point: a parallel framework that provides great multicore utilization but takes months if not years to master is not all that helpful. Dataflow programming makes the simple things easy and the hard tasks possible.
Dataflow applications are simple to express. Dataflow uses a composition programming model based on a building-blocks approach, which leads to very modular designs with a high degree of reuse.
Dataflow does a good job of abstracting away the details of parallel development. This matters because the lower-level tools for parallel application development are already available today in frameworks such as the java.util.concurrent library in the JDK. However, these libraries are low-level and require a high degree of expertise to use correctly. They rely on shared state that must be protected with synchronization techniques, which can lead to race conditions, deadlocks and extremely hard-to-debug problems.
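The classic lost-update race illustrates how easily shared mutable state goes wrong with these low-level tools. In this small demonstration, two threads increment an unprotected counter and updates silently vanish:

```java
public class LostUpdate {
    static long counter = 0;  // shared mutable state, unprotected

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            // counter++ is a read-modify-write: two threads can read the
            // same value, both add one, and one increment is lost.
            for (int i = 0; i < 1_000_000; i++) counter++;
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Almost certainly prints less than 2,000,000.
        System.out.println(counter);
    }
}
```

Fixing this requires an AtomicLong or explicit locking, and knowing when such protection is needed is exactly the expertise a dataflow model spares the developer.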
Being based on a shared-nothing, immutable message-passing architecture makes dataflow a simplified programming model. The nodes within a dataflow graph don't have to use synchronization techniques to protect shared state; they are lock-free, so deadlock and race conditions are not a worry either. The dataflow architecture inherently handles these conditions, allowing developers to focus on the job at hand. And since the data streams are immutable, multiple readers can attach to the same output flow, providing more flexibility and reuse in the programming model.
The immutability of the data flows also limits the side effects of nodes within a dataflow program. Nodes within a dataflow graph can only communicate over dataflow channels. By following this model, you are assured that no global state or state of other nodes can be affected by a node. Again, this helps to simplify the programming model. Developing new nodes is free of most of the worries normally involved with parallel programming.
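A small sketch of the fan-out this enables: because records never change after creation, the very same object can be handed to several downstream readers without copies or locks. The Record class and queue sizes here are illustrative, not any product's API:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FanOut {
    // An immutable record: all fields final, so it is safe to share
    // across threads without any synchronization.
    static final class Record {
        final String key; final double value;
        Record(String key, double value) { this.key = key; this.value = value; }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Record> readerA = new ArrayBlockingQueue<>(8);
        BlockingQueue<Record> readerB = new ArrayBlockingQueue<>(8);
        List<BlockingQueue<Record>> readers = List.of(readerA, readerB);

        // Fan-out: the *same* immutable object goes to every reader.
        Record r = new Record("x", 1.5);
        for (BlockingQueue<Record> q : readers) q.put(r);

        System.out.println(readerA.take().key + " / " + readerB.take().value);
    }
}
```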
The dataflow programming model is functional in style. Each node within a graph provides a very specific, continuous function on its input data. Programs are built by stitching these functions together in various ways to create complex applications.
Dataflow-based architecture elegantly takes advantage of multicore processors on a single machine (scale up). It's also a good architecture for scaling out to multiple machines. Nodes that run across machine boundaries can communicate over data channels using network sockets. This provides the same simple, flexible dataflow programming model in a distributed configuration.
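As a bare-bones, hypothetical sketch of what a cross-machine edge could look like, the writer end of a flow below streams records over a TCP socket and the reader end turns them back into an ordinary input stream. Real dataflow platforms add framing, batching, buffering, and failure handling on top:

```java
import java.io.*;
import java.net.*;

public class SocketEdge {
    // Reader end of a cross-machine edge: accepts one connection and
    // consumes a stream of int records until the writer closes.
    static void readerEnd(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port);
             Socket s = server.accept();
             DataInputStream in = new DataInputStream(
                     new BufferedInputStream(s.getInputStream()))) {
            while (true) {
                try { System.out.println("received " + in.readInt()); }
                catch (EOFException eof) { return; }  // upstream closed: end of flow
            }
        }
    }

    // Writer end: connects and pushes records downstream.
    static void writerEnd(String host, int port) throws IOException {
        try (Socket s = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(
                     new BufferedOutputStream(s.getOutputStream()))) {
            for (int i = 0; i < 10; i++) out.writeInt(i);
        }
    }

    public static void main(String[] args) throws Exception {
        Thread reader = new Thread(() -> {
            try { readerEnd(9099); } catch (IOException e) { e.printStackTrace(); }
        });
        reader.start();
        Thread.sleep(200);  // crude demo-only wait for the reader to bind its port
        writerEnd("localhost", 9099);
        reader.join();
    }
}
```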
Dataflow and Big Data
The inherent pipeline parallelism built into dataflow programming makes it great for datasets ranging from thousands to billions of records. Applications written using dataflow techniques scale easily to extremely large data sizes, generally without much strain on the memory system, because a dataflow application eventually enters a steady state of memory consumption. The overall amount of data pumped through the application doesn't affect that steady-state memory size.
Not all dataflow operators are friendly when it comes to memory consumption; some are designed specifically to load data into memory. For example, a hash join operator may load one of its data sources into an in-memory index. This is the nature of the operator and must be taken into account when using it.
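A stripped-down sketch of why: in a typical hash join, the build side is loaded entirely into an in-memory map while the probe side streams through record by record, so memory use is proportional to the build input rather than the total data volume. The data here is made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class HashJoinSketch {
    public static void main(String[] args) {
        // Build side: the smaller input is loaded fully into an in-memory index.
        Map<Integer, String> buildIndex = new HashMap<>();
        buildIndex.put(1, "alice");
        buildIndex.put(2, "bob");

        // Probe side: the larger input streams through one record at a time,
        // so only the build side's size matters for memory consumption.
        int[][] probe = { {1, 100}, {2, 250}, {3, 75}, {1, 40} };
        for (int[] row : probe) {
            String name = buildIndex.get(row[0]);
            if (name != null)                      // inner join: drop non-matches
                System.out.println(name + " -> " + row[1]);
        }
    }
}
```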
Being pipelined in nature also allows for great overlap of I/O and computational tasks. As mentioned earlier, this is an important "whole" application approach that is highly critical to success in building big data applications.
Dataflow systems are also easily embedded in today's commonly used systems. For instance, a dataflow-based application can be executed within the context of a Map-Reduce application. Experimentation with a dataflow-based platform named Pervasive DataRush has shown that Hadoop can provide the scale-out while DataRush runs within each map step, parallelizing the mapper to take advantage of multicore efficiencies. Because each mapper is itself parallelized, it can handle larger chunks of data, and the overall Map-Reduce application runs faster.
Dataflow is a software architecture that is based on the idea of continuous functions executing in parallel on data streams. It's focused on data-intensive applications, lending itself to today's big data challenges. Dataflow is easy to grasp and simple to express, and this design-time scalability can be as important as its run-time scalability.
Dataflow allows developers to easily take advantage of today's multicore processors and also fits well into a distributed environment. Tackling big data problems with dataflow is straightforward and ensures your applications will be able to scale in the future to meet the growing demands of your organization.