By Michael Kopp
April 29, 2013 10:15 AM EDT
Over the last couple of months I have been talking to more and more customers who are either bringing their Hadoop clusters into production or have already done so and are now getting serious about operations. This leads to some interesting discussions about how to monitor Hadoop properly, and one question pops up quite often: Do they need anything beyond Ganglia? And if so, what should they do beyond it?
As in every other system, monitoring in a Hadoop environment starts with the basics: system metrics - CPU, disk, memory, you know the drill. Of special importance in a Hadoop system is a well-balanced cluster; you don't want some nodes to be much more (or less) utilized than others. Besides CPU and memory utilization, disk utilization and of course I/O throughput are of high importance. After all, the most likely bottleneck in a Big Data system is I/O - whether during ingress (network and disk), while moving data around (e.g., the MapReduce shuffle on the network), or in straightforward reads and writes to disk.
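Since I/O wait will come up again below, here is a minimal sketch of how you could spot-check it on a single Linux node (Ganglia's gmond collects the same numbers for you cluster-wide): it samples /proc/stat twice and reports the share of CPU time spent waiting on I/O. The sampling interval is just an example.

#!/usr/bin/env python
# Minimal sketch: sample /proc/stat twice on a Linux node and report the
# share of CPU time spent in I/O wait during the interval.
import time

def cpu_times():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as stat:
        return [int(x) for x in stat.readline().split()[1:]]

before = cpu_times()
time.sleep(5)                               # sampling interval, adjust as needed
after = cpu_times()
delta = [a - b for a, b in zip(after, before)]
iowait_pct = 100.0 * delta[4] / sum(delta)  # field 5 (index 4) is iowait
print("I/O wait over the last 5s: %.1f%%" % iowait_pct)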
The problem in a Hadoop system is of course its size. Nothing new for us; some of our customers monitor well beyond 1,000 JVMs with Compuware APM. The "advantage" in a Hadoop system is its relative conformity - every node looks pretty much like the others. This is what Ganglia leverages.
Cluster Monitoring with Ganglia
What Ganglia is very good at is providing an overview of how a cluster is utilized. The load chart is particularly interesting:
This chart shows the CPU load on a 1,000-server cluster that has roughly 15,000 CPUs
It tells us the number of available cores in the system, the number of running processes (in theory a core can never handle more than one process at a time) and the 1-minute load average. If the system were fully utilized, the 1-minute load average would approach the total number of CPUs. Another view on this is the well-known CPU utilization chart:
CPU Utilization over the last day. While the utilization stays well below 10% we see a lot of I/O wait spikes.
While the load chart gives a good overall impression of usage, the utilization chart tells us the story of how the CPUs are used. While typical CPU charts show a single server, Ganglia specializes in showing whole clusters (the picture shows the CPU usage of a 1,000-machine cluster). In the depicted chart we see that the CPUs are experiencing a lot of I/O wait spikes, which points toward heavy disk I/O. Basically it seems that disk I/O is the reason we cannot utilize our CPUs better at these times. In general, though, our cluster is well underutilized in terms of CPU.
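If you want the raw numbers behind such a load chart, a rough sketch like the following works against any gmond: it reads the XML that Ganglia's gmond serves on TCP port 8649 and compares the summed 1-minute load average against the total number of cores. The host name is a placeholder.

#!/usr/bin/env python
# Rough sketch: read the cluster state XML from a Ganglia gmond (TCP 8649)
# and compare the summed 1-min load average against the total core count.
import socket
import xml.etree.ElementTree as ET

GMOND_HOST, GMOND_PORT = "gmond.example.com", 8649   # placeholder host

def read_gmond_xml():
    chunks = []
    conn = socket.create_connection((GMOND_HOST, GMOND_PORT))
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        chunks.append(chunk)
    conn.close()
    return b"".join(chunks)

root = ET.fromstring(read_gmond_xml())
total_load, total_cores = 0.0, 0
for host in root.iter("HOST"):
    metrics = {m.get("NAME"): m.get("VAL") for m in host.iter("METRIC")}
    total_load += float(metrics.get("load_one", 0))
    total_cores += int(float(metrics.get("cpu_num", 0)))

print("cluster 1-min load %.0f vs. %d cores (%.0f%% of capacity)"
      % (total_load, total_cores, 100.0 * total_load / max(total_cores, 1)))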
Trends are also easy to understand, as can be seen in this memory chart over a year.
Memory capacity and usage over a year
All this looks pretty good, so what is missing? The "so what" and the "why" are what is missing. If my memory demand is growing, I have no way of knowing why it is growing. If the CPU chart tells me that I spend a lot of time waiting, it does not tell me what to do about it, or why that is so. These questions are beyond the scope of Ganglia.
What about Hadoop specifics?
Ganglia also has a Hadoop plugin, which basically gives you access to all the usual Hadoop metrics (unfortunately a comprehensive list of Hadoop metrics is really hard to find - I'd appreciate it if somebody posted a link in the comments). There is a good explanation of what is interesting on Edward Capriolo's page: JoinTheGrid. Basically you can use those metrics to monitor the capacity and usage trends of HDFS and the NameNodes, and also how many jobs, mappers and reducers are running.
Capacity of the DataNodes over time
Capacity of the Name Nodes over time
The DataNode Operations give me an impression of I/O pressure on the Hadoop cluster
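You don't strictly need the Ganglia plugin to get at these numbers either. As a hedged sketch, the NameNode's /jmx servlet (on its web port, 50070 by default in this Hadoop generation) returns the same capacity figures as JSON; the host name is a placeholder and the bean/attribute names may differ slightly between versions:

#!/usr/bin/env python
# Hedged sketch: pull HDFS capacity from the NameNode's /jmx servlet.
# Host name is a placeholder; bean/attribute names vary by Hadoop version.
import json
from urllib.request import urlopen

URL = ("http://namenode.example.com:50070/jmx"
       "?qry=Hadoop:service=NameNode,name=FSNamesystemState")

fs = json.load(urlopen(URL))["beans"][0]
total, used = fs["CapacityTotal"], fs["CapacityUsed"]
print("HDFS capacity: %.1f TB used of %.1f TB (%.1f%%)"
      % (used / 1e12, total / 1e12, 100.0 * used / total))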
All these charts can of course easily be built in any modern monitoring or APM solution like Compuware APM, but Ganglia gives you a simple starting point; and it's free as in beer.
What's missing, again, is the "so what"? If my jobs are running a lot longer than yesterday, what should I do? Why do they run longer? A Hadoop expert might dig into ten different charts around I/O, network and spilling, look at log files among other things, and make an educated guess as to what the problem might be. But we aren't all experts, nor do we have the time to dig into all of these metrics and log files all the time.
This is the reason that we and our customers are moving beyond Ganglia - to solve the "Why" and "So What" within time constraints.
Beyond the Basics #1 - Understanding Cluster Utilization
A use case that we get from customers is that they want to know which users or which pools (in the case of the fair scheduler) are responsible for how much of the cluster utilization. LinkedIn just released White Elephant, a tool that parses MapReduce logs, builds some nice dashboards and shows you which of your users occupy how much of your cluster. This is of course based on log file analysis and thus fine for after-the-fact analysis but not for monitoring. With proper tools in place we can do the same thing in near real time.
The CPU Usage in the Hadoop Cluster on per User basis
In this example I wanted to monitor which user consumed how much of my Amazon EMR cluster. If we see a user or pool that occupies a lot of the cluster, we can of course also see which jobs are running and how much of the cluster they occupy.
The CPU Usage in the Hadoop Cluster on per Job basis
This will also tell us whether that job has always been there and just uses a lot more resources now. That would be our cue to start analyzing what has changed.
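For readers who want to approximate the per-user view without a full APM deployment, here is a rough, White-Elephant-style sketch, assuming Hadoop 1.x job history files; the history location, line format and counter names vary by version and distribution, so treat it as a starting point rather than a finished tool. It sums the map and reduce slot time each user consumed:

#!/usr/bin/env python
# Rough sketch: walk completed job history files (Hadoop 1.x format assumed)
# and sum map/reduce slot time per user. Path and format are assumptions.
import os
import re
from collections import defaultdict

HISTORY_DIR = "/var/log/hadoop/history/done"   # assumption, adjust to your cluster

user_re = re.compile(r'USER="([^"]+)"')
slot_re = re.compile(r'\(SLOTS_MILLIS_(MAPS|REDUCES)\)\([^)]*\)\((\d+)\)')

slot_hours = defaultdict(float)
for dirpath, _, files in os.walk(HISTORY_DIR):
    for name in files:
        if name.endswith(".xml"):              # skip the per-job conf files
            continue
        text = open(os.path.join(dirpath, name), errors="ignore").read()
        user = user_re.search(text)
        if not user:
            continue
        millis = sum(int(v) for _, v in slot_re.findall(text))
        slot_hours[user.group(1)] += millis / 3600000.0

for user, hours in sorted(slot_hours.items(), key=lambda kv: -kv[1]):
    print("%-20s %8.1f slot-hours" % (user, hours))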
Beyond the Basics #2 - Understanding why my jobs are slow(er)
If we want to understand why a job is slow we need to look at a high-level breakdown first.
In which phase of the MapReduce job do we spend the most time, and did we spend more time there than yesterday? Understanding these timings in context with the respective job counters, like Map Input or Spilled Records, helps us understand why the phase took longer.
Overview of the time spent in different phases and the respective input/output counters
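To track those counters from run to run without clicking through the JobTracker UI every time, a hedged sketch like this compares a few of them between today's job and yesterday's via the hadoop job -counter command; the job IDs are placeholders and the counter group shown is the Hadoop 1.x name (newer versions use org.apache.hadoop.mapreduce.TaskCounter):

#!/usr/bin/env python
# Hedged sketch: compare a few job counters between two runs using the
# "hadoop job -counter" CLI. Job IDs are placeholders.
import subprocess

GROUP = "org.apache.hadoop.mapred.Task$Counter"   # TaskCounter on newer Hadoop
COUNTERS = ["MAP_INPUT_RECORDS", "MAP_OUTPUT_RECORDS", "SPILLED_RECORDS"]
TODAY, YESTERDAY = "job_201304290001_0042", "job_201304280001_0040"  # placeholders

def counter(job_id, name):
    out = subprocess.check_output(["hadoop", "job", "-counter", job_id, GROUP, name])
    return int(out.strip())

for name in COUNTERS:
    now, before = counter(TODAY, name), counter(YESTERDAY, name)
    change = 100.0 * (now - before) / max(before, 1)
    print("%-20s %15d vs %15d (%+.1f%%)" % (name, now, before, change))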
At this point we will already have a pretty good idea as to what happened. Either we simply have more data to crunch (more input data), a portion of the MapReduce job consumes more CPU (a code change?), or we spill more records to disk (a code or Hadoop configuration change?). We might also detect an unbalanced cluster in the performance breakdown.
This job is executing nearly exclusively on a single node instead of distributing
In this case we want to check whether all the involved nodes processed the same amount of data.
Here we see that there is a wide range from minimum, average to maximum on mapped input and output records. The data is not balanced
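The same min/avg/max check can be scripted. As a rough sketch, again assuming Hadoop 1.x job history files (the exact line and counter format differs between versions), this computes the spread of map input records across the map tasks of one job:

#!/usr/bin/env python
# Rough sketch: read one job's history file (Hadoop 1.x format assumed) and
# compute the min/avg/max of map input records across map tasks to spot skew.
import re
import sys

HISTORY_FILE = sys.argv[1]   # path to the completed job's history file

records_re = re.compile(r'\(MAP_INPUT_RECORDS\)\([^)]*\)\((\d+)\)')

per_task = []
for line in open(HISTORY_FILE, errors="ignore"):
    # Only look at map task lines; lines without counters simply won't match.
    if line.startswith("Task ") and 'TASK_TYPE="MAP"' in line:
        m = records_re.search(line)
        if m:
            per_task.append(int(m.group(1)))

if per_task:
    print("map tasks: %d  min: %d  avg: %d  max: %d"
          % (len(per_task), min(per_task),
             sum(per_task) // len(per_task), max(per_task)))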
If the data turns out to be balanced, the difference can again be found in the code (different kinds of computations). If we are running against HBase, we might of course have an issue with HBase performance or distribution.
At the beginning of the job only a single HBase region Server consumes CPU while all others remain idle
On the other hand, if a lot of mapping time is spent in the garbage collector then you should maybe invest in larger JVMs.
The Performance Breakdown of this particular job shows considerable time in GC suspension
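A quick way to quantify this, sketched here with a placeholder job ID and Hadoop 1.x counter names, is to compare the GC_TIME_MILLIS counter against CPU_MILLISECONDS:

#!/usr/bin/env python
# Hedged sketch: how much of a job's CPU time went into garbage collection.
# Job ID is a placeholder; counter group is the Hadoop 1.x name.
import subprocess

JOB_ID = "job_201304290001_0042"                 # placeholder
GROUP = "org.apache.hadoop.mapred.Task$Counter"  # TaskCounter on newer Hadoop

def counter(name):
    out = subprocess.check_output(["hadoop", "job", "-counter", JOB_ID, GROUP, name])
    return int(out.strip())

gc_ms = counter("GC_TIME_MILLIS")
cpu_ms = counter("CPU_MILLISECONDS")
ratio = 100.0 * gc_ms / max(cpu_ms, 1)
print("GC time: %.1f s (%.1f%% of CPU time)" % (gc_ms / 1000.0, ratio))
if ratio > 10:   # illustrative threshold
    print("Consider a larger task JVM heap (mapred.child.java.opts).")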
If spilling data to disk is where we spend our time, we should take a closer look at that phase. It might turn out that all of our time is spent on disk wait.
If the Disk were the bottleneck we would see it on disk I/O here
Now if disk write is our bottleneck, then really the only thing we can do is reduce the number of map output records. Adding a combiner will not reduce the disk writes (it will actually increase them; read here). In other words, combining only optimizes the shuffle phase, and thus the amount of data sent over the network, but not spill time!
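To make that distinction concrete, here is a minimal word-count combiner for Hadoop Streaming (passed via -combiner); it is an illustration only, assuming the usual sorted, tab-separated key/value lines that Streaming feeds it. It shrinks what the shuffle sends over the network - which, as argued above, is all a combiner buys you, not reduced spill time:

#!/usr/bin/env python
# Illustration only: a word-count combiner for Hadoop Streaming.
# It merges the mapper's "word<TAB>1" pairs so fewer bytes are shuffled
# over the network; as the article notes, this does not cut spill time.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        count += int(value)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, int(value)
if current_word is not None:
    print("%s\t%d" % (current_word, count))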
And at the very detailed level we can look at single task executions and understand in detail what is really going on.
The detailed data about each Map and Reduce Task Attempt, as well as the spills and shuffles
Ganglia is a great tool for high-level monitoring of your Hadoop cluster utilization, but it is not enough. The fact that everybody is working on additional means to understand a Hadoop cluster (Hortonworks with Ambari, Cloudera with their Manager, LinkedIn with White Elephant, the Starfish project...) shows that a lot more is needed beyond simple monitoring. Even those more advanced monitoring tools do not always answer the "why" though, which is what we really need. This is where the Performance Management discipline can add a lot of value and really help you get the best out of your Hadoop cluster. In other words, don't just run Hadoop jobs at scale, run them efficiently and at scale!