By Michael Kopp
April 29, 2013 10:15 AM EDT
Over the last couple of months I have been talking to more and more customers who are either bringing their Hadoop clusters into production or have already done so and are now getting serious about operations. This leads to some interesting discussions about how to monitor Hadoop properly, and one question pops up quite often: Do they need anything beyond Ganglia? If yes, what should they do beyond it?
As in every other system, monitoring in a Hadoop environment starts with the basics: system metrics - CPU, disk, memory, you know the drill. Of special importance in a Hadoop system is a well-balanced cluster; you don't want some nodes to be much more (or less) utilized than others. Besides CPU and memory utilization, disk utilization and of course I/O throughput are of high importance. After all, the most likely bottleneck in a Big Data system is I/O - whether during ingress (network and disk), while moving data around (e.g., the MapReduce shuffle on the network), or in straightforward reads and writes to disk.
The problem in a Hadoop system is of course its size. That is nothing new for us - some of our customers monitor well beyond 1,000 JVMs with CompuwareAPM. The "advantage" of a Hadoop system is its relative conformity: every node looks pretty much like the next. This is what Ganglia leverages.
Cluster Monitoring with Ganglia
What Ganglia is very good at is providing an overview of how a cluster is utilized. The load chart is particularly interesting:
This chart shows the CPU load on a 1,000-server cluster that has roughly 15,000 CPUs
It tells us the number of available cores in the system, the number of running processes (in theory a core can never handle more than one process at a time) and the 1-minute load average. If the system were fully utilized, the 1-minute load average would approach the total number of CPUs - on the cluster above, that would be a sustained load average near 15,000. Another view on this is the well-known CPU utilization chart:
CPU Utilization over the last day. While the utilization stays well below 10% we see a lot of I/O wait spikes.
While the load chart gives a good overall impression of usage, the utilization chart tells us the story of how the CPUs are used. While typical CPU charts show a single server, Ganglia specializes in showing whole clusters (the picture shows the CPU usage of a 1,000-machine cluster). In the depicted chart we see that the CPUs are experiencing a lot of I/O wait spikes, which points toward heavy disk I/O. Basically, disk I/O seems to be the reason that we cannot utilize our CPUs better at these times. But in general our cluster is underutilized in terms of CPU.
Trends are also easy to understand, as can be seen in this memory chart over a year.
Memory capacity and usage over a year
All this looks pretty good, so what is missing? The "so what" and the "why" are what is missing. If my memory demand is growing, I have no way of knowing why it is growing. If the CPU chart tells me that I spend a lot of time waiting, it does not tell me what to do about it, or why that is so. These questions are beyond the scope of Ganglia.
What about Hadoop specifics?
Ganglia also has a Hadoop plugin, which basically gives you access to all the usual Hadoop metrics (unfortunately a comprehensive list of Hadoop metrics is really hard to find - I'd appreciate it if somebody posted a link in the comments). There is a good explanation of which ones are interesting on Edward Capriolo's page: JoinTheGrid. Basically you can use those metrics to monitor the capacity and usage trends of HDFS and the NameNodes, and also how many jobs, mappers and reducers are running.
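Getting those metrics flowing is mostly a configuration exercise. As a hedged sketch - assuming the metrics2 framework and a Ganglia 3.1 gmond; the host name and port below are placeholders - hadoop-metrics2.properties would look something like this:

```
# Send Hadoop daemon metrics to a Ganglia 3.1 gmond (host/port are placeholders)
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
namenode.sink.ganglia.servers=gmond.example.com:8649
datanode.sink.ganglia.servers=gmond.example.com:8649
jobtracker.sink.ganglia.servers=gmond.example.com:8649
tasktracker.sink.ganglia.servers=gmond.example.com:8649
```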
Capacity of the DataNodes over time
Capacity of the NameNodes over time
The DataNode Operations give me an impression of I/O pressure on the Hadoop cluster
All these charts can of course be easily built in any modern monitoring or APM solution like CompuwareAPM, but Ganglia gives you a simple starting point - and it's free as in beer.
What's missing, again, is the "so what": If my jobs are running a lot longer than yesterday, what should I do? Why do they run longer? A Hadoop expert might dig into ten different charts around I/O, network and spilling, look at log files among other things, and make an educated guess as to what the problem might be. But we aren't all experts, nor do we have the time to dig into all of these metrics and log files all the time.
This is the reason that we and our customers are moving beyond Ganglia - to solve the "Why" and "So What" within time constraints.
Beyond the Basics #1 - Understanding Cluster Utilization
A use case that we get from customers is that they want to know which users or which pools (in case of the fair scheduler) are responsible for how much of the cluster utilization. LinkedIn just released White Elephant, a tool that parses MapReduce logs, builds some nice dashboards, and shows you which of your users occupy how much of your cluster. This is of course based on log file analysis and thus fine for after-the-fact analysis, but not for monitoring. With proper tools in place we can do the same thing in near real time.
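The aggregation step itself is simple once the logs are parsed. Here is a minimal sketch, assuming job records have already been extracted from the JobTracker logs - the JobRecord class and its fields are hypothetical, not White Elephant's actual data model:

```java
import java.util.HashMap;
import java.util.Map;

public class PerUserUsage {
    // Hypothetical record of one finished job, as parsed from the logs.
    static class JobRecord {
        final String user;
        final long cpuMillis; // total task CPU time attributed to the job
        JobRecord(String user, long cpuMillis) {
            this.user = user;
            this.cpuMillis = cpuMillis;
        }
    }

    // Sum CPU time per user so we can see who occupies the cluster.
    static Map<String, Long> aggregate(Iterable<JobRecord> jobs) {
        Map<String, Long> perUser = new HashMap<String, Long>();
        for (JobRecord j : jobs) {
            Long sum = perUser.get(j.user);
            perUser.put(j.user, (sum == null ? 0L : sum) + j.cpuMillis);
        }
        return perUser;
    }
}
```

The difference between this kind of batch log parsing and a monitoring solution is merely when the numbers arrive: the former tells you about yesterday, the latter about right now.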
CPU usage in the Hadoop cluster on a per-user basis
In this example I wanted to monitor which user consumed how much of my Amazon EMR cluster. If we see a user or pool that occupies a lot of the cluster, we can of course also see which jobs are running and how much of the cluster they occupy.
CPU usage in the Hadoop cluster on a per-job basis
And this will also tell us whether that job has always been there and simply uses a lot more resources now. That would be our cue to start analyzing what has changed.
Beyond the Basics #2 - Understanding why my jobs are slow(er)
If we want to understand why a job is slow we need to look at a high-level breakdown first.
In which phase of the MapReduce job do we spend the most time, and did we spend more time than yesterday? Understanding these timings in context with the respective job counters, like Map Input Records or Spilled Records, helps us understand why a phase took longer.
Overview of the time spent in different phases and the respective input/output counters
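These counters can also be read programmatically once a job finishes. A hedged sketch against the Hadoop 2.x mapreduce API - the spill-to-output ratio heuristic is mine, not a standard metric:

```java
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.TaskCounter;

public class SpillCheck {
    // Compare spilled records against map output records; a ratio well
    // above 1.0 means records hit the disk more than once (multiple
    // spill/merge passes), which inflates the spill phase.
    public static void report(Job completedJob) throws Exception {
        Counters counters = completedJob.getCounters();
        long mapOut  = counters.findCounter(TaskCounter.MAP_OUTPUT_RECORDS).getValue();
        long spilled = counters.findCounter(TaskCounter.SPILLED_RECORDS).getValue();
        double ratio = mapOut == 0 ? 0.0 : (double) spilled / mapOut;
        System.out.printf("map output=%d spilled=%d ratio=%.2f%n", mapOut, spilled, ratio);
    }
}
```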
At this point we will already have a pretty good idea as to what happened. We either simply have more data to crunch (more input data) or a portion of the MapReduce job consumes more CPU (code change?) or we spill more records to disk (code change or Hadoop config change?). We might also detect an unbalanced cluster in the performance breakdown.
This job is executing almost exclusively on a single node instead of being distributed across the cluster
In this case we want to check whether all the involved nodes processed the same amount of data.
Here we see a wide range from minimum to average to maximum on mapped input and output records - the data is not balanced
If the data volumes are even, the difference can again be found in the code (different kinds of computations). And if we are running against HBase, we might of course have an issue with HBase performance or distribution.
At the beginning of the job only a single HBase RegionServer consumes CPU while all others remain idle
On the other hand, if a lot of mapping time is spent in the garbage collector, then you should maybe invest in larger task JVMs.
The Performance Breakdown of this particular job shows considerable time in GC suspension
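The knob for that is the task JVM size. A minimal sketch, assuming the classic mapred.child.java.opts property (its default heap of -Xmx200m is quite small for memory-hungry mappers); the 1,024 MB value is only an example:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class BiggerTaskHeap {
    // Give each map/reduce task JVM more heap when GC suspension dominates.
    public static Job configure() throws IOException {
        Configuration conf = new Configuration();
        conf.set("mapred.child.java.opts", "-Xmx1024m"); // default is -Xmx200m
        // Job.getInstance is the Hadoop 2.x style; on 1.x use new Job(conf, name).
        return Job.getInstance(conf, "heap-tuned job");
    }
}
```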
If spilling data to disk is where we spend our time, we should take a closer look at that phase. It might turn out that all of our time is spent on disk wait.
If the disk were the bottleneck, we would see it in the disk I/O here
Now if disk write is our bottleneck, then really the only thing that we can do is reduce the map output records. Adding a combiner will not reduce the disk write (it will actually increase it, read here). In other words, combining only optimizes the shuffle phase, and thus the amount of data sent over the network, but not spill time!
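To make that distinction concrete, here is a hedged sketch of how a combiner is wired in (the SumReducer class is illustrative, not from any particular job):

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class CombinerSetup {
    // A typical summing reducer, reused as a combiner.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void wire(Job job) {
        // The combiner runs while map output is spilled and merged, so the
        // combined records are still written to disk; only the bytes that the
        // shuffle later moves across the network shrink.
        job.setCombinerClass(SumReducer.class);
    }
}
```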
And at the very detailed level we can look at single task executions and understand in detail what is really going on.
Detailed data about each map and reduce task attempt, as well as the spills and shuffles
Ganglia is a great tool for high-level monitoring of your Hadoop cluster utilization, but it is not enough. The fact that everybody is working on additional means to understand the Hadoop cluster (Hortonworks with Ambari, Cloudera with their Manager, LinkedIn with White Elephant, the Starfish project...) shows that a lot more is needed beyond simple monitoring. Even those more advanced monitoring tools do not always answer the "why" though, which is what we really need to do. This is where the Performance Management discipline can add a lot of value and really help you get the best out of your Hadoop cluster. In other words, don't just run Hadoop jobs at scale, run them efficiently and at scale!