Eating Our Own Dog Food – 2x Faster Hadoop MapReduce Jobs

How to analyze and optimize Hadoop jobs beyond just tweaking MapReduce options

For a while now I have been writing about how to analyze and optimize Hadoop jobs beyond just tweaking MapReduce options. The other day I took a look at some of our Outage Analyzer Hadoop jobs and put those words into action.

A simple analysis of the Outage Analyzer jobs with Compuware APM 5.5 identified three hotspots and two potential Hadoop problems in one of our biggest jobs. It took the responsible developer a couple of hours to fix them, and the result was a 2x improvement overall and a 6x improvement on the Reduce part of the job. Let's see how we achieved that.

About Outage Analyzer
Outage Analyzer is a free service provided by Compuware that displays in real time any availability problems with the most popular third-party content providers on the Internet. It's available at http://www.outageanalyzer.com. It uses real-time analytical processing technologies for anomaly detection and event correlation and classification. It stores billions of measurements taken from Compuware's global testing network every day in Hadoop and runs different MapReduce jobs to analyze the data. I examined the performance of these MapReduce jobs.

Identifying Worthwhile Jobs to Analyze
The first thing I did was look for a worthwhile job to analyze. To do this, I looked at cluster utilization broken down by user and job.

This chart visualizes the cluster CPU usage by user, giving a good indication of which user executes the most expensive jobs

What I found was that John was the biggest user of our cluster. So I looked at the jobs John was running.

These are all the jobs that John was running over the last several days. It's always the same one, consuming about the same amount of resources

The largest job by far was an analytics simulation of a full day of measurement data. This job is run often to test and tune changes to the analytics algorithms. Except for one of the runs, all of them lasted about 6.5 hours in real time and consumed nearly 100% of the cluster's CPU during that time. This looked like a worthy candidate for further analysis.

Identifying Which Job Phase to Focus On
My next step was to look at a breakdown of the job from two angles: consumption and real time. From a real-time perspective, map and reduce took about the same amount of time - roughly 3 hours each. This could also be nicely seen in the resource usage of the job.

This dashboard shows the overall cluster utilization during the time the job ran. 100% of the cluster CPU is used for most of that time

The significant drop in Load Average and the smaller drop in the other charts mark the end of the mapping phase and the start of pure reducing. What is immediately obvious is that the reducing phase, while lasting about the same time, does not consume as many resources as the map phase. The Load Average is significantly lower and the CPU utilization drops in several steps before the job is finished.

That is partly because we have a priority scheduler and reducing does not use all slots, but more importantly, reducing cannot be parallelized as much as mapping. Every optimization here counts twofold, because you cannot scale the problem away. While the mapping phase clearly consumes more resources, the reducing phase is a bottleneck and might therefore benefit even more from optimization.

The breakdown of job phase times shows that the mapping phase consumes twice as much time as reducing, even though we know that the real time of the two phases is about the same - 3 hours each

As we can see, the Map Time (the time spent in the mapping function, excluding merging, spilling and combining) is twice as high as the Reduce Time. The Reduce Time here represents the time that tasks actually spent in the reduce function, excluding shuffle and merge time (which is represented separately). As such, these two times represent the portions of the job that are directly impacted by the Map and Reduce code, which is usually custom code - and therefore tunable by our developers.

Analyzing Map and Reduce Performance
As a next step I used Compuware APM to get a high-level performance breakdown of the job's 3-hour mapping and reducing phases. A single click gave me this pretty clear picture of the mapping phase:

This is a hot spot analysis of our 3-hour mapping phase, which ran across 10 servers in our Hadoop cluster

The hot spot dashboard for the mapping phase shows that we spent the majority of the time (about 70%) in our own code, and that about half of that is CPU time. This indicates a lot of potential for improvement. Next, I looked at the reducing phase.

This hot spot analysis shows that we spend nearly all of the reducing time, across all reduce tasks, in our own code.

This shows that 99% of the reducing time is spent in our own code and not in the Hadoop framework itself. Since the reduce phase was clearly the winner in terms of potential, I looked at the details of that hot spot - and immediately found three hot spots that were responsible for the majority of the reduce time.

Three Simple Code Issues Consume 70% of the Reduce Time
This is what the method hot spots dashboard for the reduce phase showed.

These are the method hot spots for the reduce phase, showing that nearly everything is down to only three line items

The top three items in the method hot spots told me everything I needed to know. As it turned out, nearly all the other items listed were sub-hotspots of the topmost method (minimal sketches of all three fixes follow the list):

  1. SimpleDateFormat initialization:
    The five items marked in red are all due to the creation of a SimpleDateFormat object. As most of us find out very painfully early in our careers as Java developers, SimpleDateFormat is not thread-safe and cannot easily be used as a static variable. This is why the developer chose the easiest route and created a new one for every call, leading to about 1.5 billion creations of this object. The initialization of the formatter is actually very expensive and involves resource lookups, locale lookups and time calculations (seen in the separate line items listed here). This item alone consumed about 40% of our reduce time.
    Solution: We chose the well-known Joda framework (the code replacement was easy) and made the formatter a static final variable, completely removing this big hot spot from the equation.
  2. Regular expression matching (line two in the picture):
    In order to split the CSV input we were using java.lang.String.split. It is often forgotten that this method uses regular expressions underneath. RegEx matching is rather CPU-intensive and overkill for such a simple job, and it was consuming another 15-20% of the CPU time.
    Solution: We changed this to a simple string tokenizer.
  3. Exception throwing (line three in the picture):
    This one was especially interesting. While reading the input data we parse numeric values, and if a field is not a correct number, java.lang.Long.parseLong throws a NumberFormatException. Our code would catch it, mark the field as invalid and ignore the exception. The fact is that nearly every input record in that feed has an invalid or empty field that should contain a number. Throwing this exception billions of times consumed another 10% of our CPU time.
    Solution: We changed the code to avoid the exception altogether.
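To make the fixes concrete, here are three minimal sketches. They are illustrative reconstructions, not the actual Outage Analyzer code: the class names, the timestamp pattern and the field conventions are all assumptions. The first sketch shows the formatter fix. Joda-Time's DateTimeFormatter is immutable and thread-safe, so a single static final instance can be shared by all tasks in a JVM instead of constructing a new SimpleDateFormat on every call.

```java
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public class TimestampParser {
    // Joda-Time formatters are immutable and thread-safe, so one shared
    // instance replaces ~1.5 billion SimpleDateFormat constructions.
    // The pattern is a placeholder; the real input format is not shown here.
    private static final DateTimeFormatter TIMESTAMP_FORMAT =
            DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss");

    public static DateTime parse(String field) {
        return TIMESTAMP_FORMAT.parseDateTime(field);
    }
}
```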
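The second sketch replaces java.lang.String.split with a plain indexOf loop. It assumes a single-character separator and no quoted fields, which is enough to show the point: no regular expression is compiled or executed per record.

```java
import java.util.ArrayList;
import java.util.List;

public class CsvSplitter {
    // Splits on a single separator character using only indexOf/substring.
    // Unlike String.split, no regular expression machinery is involved.
    public static List<String> split(String line, char separator) {
        List<String> fields = new ArrayList<String>();
        int start = 0;
        int end;
        while ((end = line.indexOf(separator, start)) >= 0) {
            fields.add(line.substring(start, end));
            start = end + 1;
        }
        fields.add(line.substring(start)); // last (or only) field
        return fields;
    }
}
```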
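The third sketch avoids the exception by validating and parsing the digits directly, signalling a bad field with a null return instead of a thrown-and-caught NumberFormatException (filling in a stack trace is expensive at billions of records). Overflow checking is omitted for brevity.

```java
public class SafeLongParser {
    // Returns the parsed value, or null when the field is empty or not a
    // valid number - no NumberFormatException is ever created or thrown.
    public static Long tryParse(String field) {
        if (field == null || field.isEmpty()) {
            return null; // empty field: mark invalid, no exception
        }
        boolean negative = field.charAt(0) == '-';
        int i = negative ? 1 : 0;
        if (i == field.length()) {
            return null; // a lone "-" is not a number
        }
        long value = 0;
        for (; i < field.length(); i++) {
            char c = field.charAt(i);
            if (c < '0' || c > '9') {
                return null; // invalid character: mark invalid, no exception
            }
            value = value * 10 + (c - '0'); // overflow check omitted for brevity
        }
        return negative ? -value : value;
    }
}
```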

There we have it - three simple hot spots were consuming about 70% of our reduce CPU time. During analysis of the mapping portion I found the same hot spots again, where they contributed about 20-30% to the CPU time.

I sent this analysis to the developer and we decided to eat our own dog food, fix it and rerun the job to analyze the changes.

Job Done in Half the Time - Sixfold Improvement in Reduce Time
The result of the modified code exceeded our expectations by quite a bit. The immediate changes saw the job time reduced by 50%. Instead of lasting about 6.5 hours, it was done after 3.5. Even more impressive was that while the mapping time only went down by about 15%, the reducing time was slashed from 3 hours to 30 minutes.

This is the job's cluster CPU usage and Load Average after we made the changes

The cluster utilization shows a very clear picture. The overall utilization and load average during the mapping phase actually increased a bit, and instead of lasting 3 hours 10 minutes it was done after 2 hours and 40 minutes. While not huge, this is still a 15% improvement.

The reduce phase, on the other hand, shrank dramatically: from roughly 3 hours to 30 minutes. That means a couple of hours of development work led to an impressive sixfold performance improvement. We also see that the reduce phase is of course still not utilizing the whole cluster; it's the 100%-utilization portion of the job that got a lot shorter.

Conclusion
Three simple code fixes resulted in a 100% improvement of our biggest job and a sixfold speedup of the reduce portion. Suffice it to say that this totally surprised the owners of the job. The job was utilizing 100% of the cluster, which for them meant that from a Hadoop perspective things were running in an optimal fashion. While this is true, it doesn't mean that the job itself is efficient.

This example shows that optimizing MapReduce jobs beyond tinkering with Hadoop options can lead to a lot more efficiency without adding any more hardware - achieving the same result with fewer resources.

The hot spot analysis also revealed some Hadoop-specific hotspots that led us to change some job options and speed things up even more. More on that in my next blog.

More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the Enterprise Java space. Before coming to Compuware APM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the center of excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage big data solutions and how these technologies impact and change the application landscape.
