Taking Apache Spark for a Spin | @BigDataExpo #BigData

You might have read some of the articles about Apache Spark on the web and wondered whether you could try it out for yourself. Since Spark and Hadoop are designed for clusters, you might assume you need lots of nodes before you can experiment.

If you wanted to see what you could do with Spark, you could set up a home lab with a few servers from eBay. But there's no rule saying you need more than one machine just to learn Spark. Today's multi-core processors are like having a cluster already on your desk. Even better, with a laptop you can pick up your cluster and take it with you. Try doing that with your rack-mount servers.

What is Spark?
If you’re looking to try out Apache Spark, it helps to know what it actually is. Spark is a cluster computing framework that builds on the Hadoop ecosystem to support not only batch processing, but also real-time processing.

Spark consists of the Spark Core, which handles the actual dispatching, scheduling and I/O. Spark’s key feature is the Resilient Distributed Dataset, or RDD. RDDs are the basic data abstraction, containing a distributed list of elements. You can perform actions on RDDs, which return values, and transformations, which return new RDDs. It’s similar to functional programming, where functions return outputs and don’t have any side effects.
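The distinction can be sketched in plain Python (an analogy only, not the Spark API): a transformation produces a new dataset, while an action collapses a dataset into a value.

```python
# Plain-Python analogy for RDD semantics (not the actual Spark API).
data = [1, 2, 3, 4]

# Transformation-like step: returns a new dataset, leaving `data` untouched.
doubled = list(map(lambda x: x * 2, data))

# Action-like step: collapses the dataset into a single value.
total = sum(doubled)

print(doubled)  # [2, 4, 6, 8]
print(total)    # 20
```

In Spark, the same shape appears as rdd.map(...) for the transformation and rdd.reduce(...) or rdd.collect() for the action.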

Spark is so fast because it holds RDDs in memory—and because RDDs are lazily evaluated: transformations are not computed until an action on the RDDs requests some form of output.
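Python generators give a rough feel for lazy evaluation (again an analogy, not Spark's implementation): building the pipeline does no work, and nothing is computed until something consumes the result.

```python
calls = []

def expensive(x):
    # Record each invocation so we can see when work actually happens.
    calls.append(x)
    return x * x

numbers = [1, 2, 3]

# Like a transformation: creating the generator computes nothing yet.
squares = (expensive(n) for n in numbers)
assert calls == []  # no work done so far

# Like an action: consuming the generator triggers the computation.
result = list(squares)
assert calls == [1, 2, 3]
assert result == [1, 4, 9]
```

Spark exploits the same idea at cluster scale: because nothing runs until an action, it can plan and optimize the whole chain of transformations at once.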

Spark also gives you access to some powerful tools like the real-time Spark Streaming engine for streaming analytics and the MLlib machine learning library.

Installing Spark

Installing Spark is easy enough. While the MapR distribution is the better choice for production use, you can install Spark from the project website on your own machine, whether you’re running Windows or Linux. It’s a good idea to set up a virtual machine for exploring Spark, just to keep it separate and reduce the security risk of running a server on your machine; that way, you can simply turn it off when you don’t need it. Linux is a good choice because that’s what most servers will be running.

You can also install Spark from your favorite distribution’s package manager. The package manager will take care of dependencies such as Scala for you. Alternatively, you can install the pieces from their respective websites, or even build Spark from source if you want.

Using the REPL

One of Spark’s greatest strengths is its interactive capability. Like most modern languages, Spark offers a REPL, or Read-Eval-Print Loop. It works just like a Unix shell or a Python interactive prompt.

Spark itself is implemented in Scala, and you can use either Scala or Python interactively. Teaching these languages is beyond the scope of this article, but Python tends to be more familiar to people than Scala. In any case, if you’re interested at all in technologies like Spark, you likely have some programming experience, and either Scala or Python shouldn’t be too hard to pick up. Of course, if you have experience in Java, that will work for writing Spark applications as well, though the interactive shells support only Scala and Python.

When you’ve got Spark up and running, you’ll be able to try out all the actions and transformations on your data.

The Spark equivalent of a “Hello, world!” seems to be a word count.

Here is an example, shown in Python (in the pyspark shell, the SparkContext is available as sc):

counts = sc.textFile("hdfs://...") \
    .flatMap(lambda line: line.split()) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)

counts.collect()  # an action triggers the (lazy) computation
You can see that even in Python, Spark makes use of functional programming concepts such as maps and lambdas. The Spark documentation has an extensive reference of commands for both Python and Scala. The shell lets you quickly and easily experiment with data. Give it a try for yourself to see what Spark can really do.
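As a sanity check, the same pipeline can be mimicked in plain Python with the standard library (a local analogy, not the Spark API), which makes the roles of flatMap, map, and reduceByKey easier to see:

```python
from collections import defaultdict

lines = ["to be or not to be"]

# flatMap: split each line into words and flatten into one list.
words = [word for line in lines for word in line.split()]

# map: pair each word with a count of 1.
pairs = map(lambda word: (word, 1), words)

# reduceByKey: sum the counts for each distinct word.
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n

print(dict(counts))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

The difference, of course, is that Spark runs the same lambdas partition by partition across a cluster, while this version runs on a single list in one process.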

Conclusion

If you’ve been curious about Spark and its ability to offer both batch and stream processing, there’s no need to feel left out just because you don’t have your own cluster. Whether you’re a developer, a student, or a manager, you can get a taste of what Apache Spark has to offer. When you’re ready for production use, opt for the MapR Spark distribution for a complete, reliable version.

To further explore Spark, jump over to Getting Started with Apache Spark: From Inception to Production, a free interactive ebook by James A. Scott.

More Stories By Jim Scott

Jim has held positions running Operations, Engineering, Architecture and QA teams in the Consumer Packaged Goods, Digital Advertising, Digital Mapping, Chemical and Pharmaceutical industries. Jim has built systems that handle more than 50 billion transactions per day and his work with high-throughput computing at Dow Chemical was a precursor to more standardized big data concepts like Hadoop.
