Why Performance Management Is Easier in Public than in On-Premise Clouds

Performance Management in public and in private clouds

Performance is one of the major concerns in the cloud. But the question should not really be whether or not the cloud performs; it is whether the application in question can and does perform in the cloud. The main problem is that application performance is either not managed at all or managed incorrectly, and therefore this question often remains unanswered. Granted, performance management in cloud environments is harder than in physical ones, but it can be argued that it is easier in public clouds than in on-premise clouds or even large virtualized environments. How do I come to that conclusion? Before answering that, let’s look at the unique challenges that virtualization in general – and clouds in particular – pose to the realm of APM.

Time is relative
The problem with timekeeping is well known in the VMware community. There is a very good VMware whitepaper that explains this in quite some detail. It doesn’t tell the whole story, however, because there are other virtualization solutions like Xen, KVM, Hyper-V and more, and all of them solve this problem differently. On top of that, the various guest operating systems behave very differently as well. I might write a whole article just about that, but the net result is that time measurement inside a guest is not accurate unless you know what you are doing. It might lag behind real time one moment and speed up to catch up the next. If your monitoring tool is aware of that and supports native timing calls, it can work around the problem and give you real response times. Unfortunately that leads to yet another problem: your VM is not running all the time. Like a process, it will get de-scheduled from time to time; unlike a process, however, it will not be aware of that. While real time is what matters for response time, this de-scheduling will skew your performance analysis on a deeper level.
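
To make that concrete, here is a minimal sketch (class and method names are illustrative, not taken from any monitoring product) that times the same piece of work with the wall clock and with the JVM’s monotonic timer. On physical hardware the two deltas agree; inside a guest whose clock lags and then catches up they can diverge noticeably.

    // Times one unit of work with both the wall clock and the monotonic timer.
    public class TimerComparison {
        public static void main(String[] args) throws Exception {
            long wallStart = System.currentTimeMillis(); // wall clock, subject to guest time correction
            long monoStart = System.nanoTime();          // monotonic timer, never adjusted backwards

            doTransaction();                             // stand-in for the measured work

            long wallMillis = System.currentTimeMillis() - wallStart;
            long monoMillis = (System.nanoTime() - monoStart) / 1_000_000;
            System.out.printf("wall clock: %d ms, monotonic: %d ms%n", wallMillis, monoMillis);
        }

        private static void doTransaction() throws InterruptedException {
            Thread.sleep(250); // placeholder workload
        }
    }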

The effects of timekeeping on response and execution time

If you measure real time, then Method B looks more expensive than it actually is. This might lead you down the wrong track when you look for a performance problem. If you measure apparent time you don’t have this problem, but your response times no longer reflect the real user experience. There are a few ways of handling this. Your monitoring solution can capture these de-schedule times and account for them all the way down against your execution times; the more granular your measurement, the more overhead this will produce. The more pragmatic approach is to account for it once per transaction and thus capture the impact the de-schedules have on your response time. Yet another approach is to periodically read the CPU steal time (either from vSphere or via mpstat on Xen) and correlate it with your transaction data. This gives you a better grasp on things. Even then it adds a level of uncertainty to your performance diagnostics, but at least you know the real response time and how fast your transactions really are. Bottom line: response time and execution time are no longer the same thing.
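
For the steal-time approach, the following sketch (assuming a Linux guest where the hypervisor exposes steal time in /proc/stat, as Xen and KVM do) reads the aggregated steal counter at an interval; the per-interval delta is what you would store alongside your transaction data. The class is illustrative, not part of any product.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    // Reads the aggregated CPU "steal" counter (in clock ticks since boot) from /proc/stat.
    public class StealTimeSampler {
        static long readStealTicks() throws Exception {
            List<String> lines = Files.readAllLines(Paths.get("/proc/stat"));
            for (String line : lines) {
                if (line.startsWith("cpu ")) {               // aggregated line for all CPUs
                    String[] f = line.trim().split("\\s+");
                    // fields: cpu user nice system idle iowait irq softirq steal ...
                    if (f.length > 8) {
                        return Long.parseLong(f[8]);         // steal is the 8th value after "cpu"
                    }
                }
            }
            return -1;                                       // steal column not exposed
        }

        public static void main(String[] args) throws Exception {
            long before = readStealTicks();
            Thread.sleep(5000);                              // sampling interval
            long after = readStealTicks();
            System.out.println("steal ticks in last interval: " + (after - before));
        }
    }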

The impact of shared environments
The sharing of resources is what makes virtualization and cloud environments compelling from a cost perspective. Most normal data centers have an average CPU utilization far below 20 percent. The reason is twofold: on the one hand they isolate the different applications by running them on different hardware; on the other hand they have to provision for peak load. By using virtualization you can put multiple “isolated” applications on the same hardware. Resource utilization is higher, but even then it rarely goes beyond 30-40 percent, as you still need to take peak load into account. But the peak loads for the different applications might occur at different times! The first order of business here is to find the optimal balance.

The first thing to realize is that your VM is treated like a process by the virtualization infrastructure. It gets a share of resources – how much can be configured. If it reaches the configured limit it has to wait. The same is true if the physical resources are exhausted. To drive utilization higher, virtualization and cloud environments overcommit: they allow, say, ten 2 GHz VMs on a 16 GHz physical machine. Most of the time this is perfectly fine, as not all VMs will demand 100 percent CPU at the same time. If there is not enough CPU to go around, some VMs will be de-scheduled and given a greater share the next time around. Most importantly, this is true not only for CPU but also for memory, disk and network I/O.
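
To put numbers on that example, a quick back-of-the-envelope calculation (purely illustrative):

    // Overcommit arithmetic for ten 2 GHz VMs on a 16 GHz host.
    public class OvercommitExample {
        public static void main(String[] args) {
            double hostGhz = 16.0;
            int vms = 10;
            double vmGhz = 2.0;

            double committedGhz = vms * vmGhz;               // 20 GHz promised to the guests
            double overcommitRatio = committedGhz / hostGhz; // 1.25
            double worstCaseSharePerVm = hostGhz / vms;      // 1.6 GHz if every VM demands 100% at once

            System.out.printf("overcommit ratio: %.2f, worst-case share per VM: %.1f GHz%n",
                    overcommitRatio, worstCaseSharePerVm);
        }
    }

As long as the VMs’ peaks do not coincide, nobody notices the overcommit; when they do coincide, every guest silently runs at a fraction of its nominal speed.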

What does this mean for performance management? It means that increasing load on one application, or a bug in that application, can negatively impact another application without you being aware of it. Unless you have a virtualization-aware monitoring solution that also monitors the other application, you will not see this. All you see is that application performance goes down!

When the load increases on one application it affects the other

With proper tools this is relatively easy to catch for CPU-related problems, but a lot harder for I/O-related issues. So you need to monitor both applications, their VMs and the underlying virtualization infrastructure, and correlate the information. That adds a lot of complexity. The virtualization vendors try to solve this by looking purely at VM- and host-level system metrics. What they forget is that high utilization of a resource does not mean the application is slow! And it is the application we care about.

OS metrics are worse than useless
Now for the good stuff: forget your guest operating system utilization metrics; they are not showing you what is really going on. There are several reasons for that. One is the timekeeping problem. Even if you and your monitoring tool use the right timer and measure time correctly, your operating system might not. In fact, most systems will not read the timer device all the time, but rely on the CPU frequency and counters to estimate time, as that is faster than reading the timer device. Since utilization metrics are always based on a total number of possible requests or instructions per time slice, they get skewed by that. This is true for every metric, not just CPU. The second problem is that the guest does not really know the upper limit of a resource, as the virtualization environment might overcommit. That means you may never be able to get 100 percent, or you may get it at one time but not another. A good example is the Amazon EC2 cloud. Although I cannot be sure, I suspect that the guest CPU metrics are actually correct: they report the CPU utilization of the underlying hardware, only you will never get 100 percent of that hardware. Without knowing how much of a share you get, these metrics are useless.

What does this mean? You can rely on absolute numbers like the number of I/O requests, the number of SQL statements and the amount of data sent over the wire for a specific application or transaction. But you do not know whether over-utilization of the physical hardware presents a bottleneck. There are two ways to solve this problem.

The first involves correlating resource and throughput metrics of your application with the reported utilization and throughput measures on the virtualization layer. In the case of VMware that means correlating detailed application- and transaction-level metrics with metrics provided by vSphere. On EC2 you can do the same with metrics provided by CloudWatch.

EC2 Cloud Monitoring Dashboard showing 3 instances
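
As a rough sketch of the CloudWatch side of that correlation (assuming the AWS SDK for Java v1 with credentials and region configured in the environment, and a placeholder instance id), you could pull the hypervisor-reported CPU utilization for the last hour and line it up with your own transaction timings for the same time buckets:

    import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
    import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
    import com.amazonaws.services.cloudwatch.model.Datapoint;
    import com.amazonaws.services.cloudwatch.model.Dimension;
    import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsRequest;
    import java.util.Date;

    public class CloudWatchCpuFetcher {
        public static void main(String[] args) {
            AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();

            Date end = new Date();
            Date start = new Date(end.getTime() - 3_600_000L);       // last hour

            GetMetricStatisticsRequest request = new GetMetricStatisticsRequest()
                    .withNamespace("AWS/EC2")
                    .withMetricName("CPUUtilization")
                    .withDimensions(new Dimension()
                            .withName("InstanceId")
                            .withValue("i-0123456789abcdef0"))       // placeholder instance id
                    .withStartTime(start)
                    .withEndTime(end)
                    .withPeriod(300)                                 // 5-minute buckets (basic monitoring)
                    .withStatistics("Average");

            for (Datapoint dp : cw.getMetricStatistics(request).getDatapoints()) {
                // correlate each bucket with the application's response times for the same window
                System.out.println(dp.getTimestamp() + " avg CPU% = " + dp.getAverage());
            }
        }
    }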

This is the approach recommended by some virtualization vendors. It is possible, but because of the complexity it requires a lot of expertise. You do, however, know which VM consumes how much of your resources. With a little calculation magic you can break this down to application and transaction level, at least on average. You need this for resource optimization and to decide which VMs should be moved to different physical hardware. It does not do you a lot of good for acute performance problems or troubleshooting, as you don’t know the actual impact of the resource shortage, or whether it has an impact at all. You might move a VM and not actually speed anything up. The real crux is that just because something is heavily used does not mean that it is the source of your performance problem! And of course this approach only works if you are in charge of the hardware, meaning it does not work with public clouds!

The second option is one that is, among others, proposed by Bernd Harzog, a well-known expert in the virtualization space. It is also the one that I would recommend.

Response time, response time, latency and more response time
On the Virtualization Practice blog, Bernd explains in detail why resource utilization does not help you with either performance management or capacity planning. Instead he points out that what really matters is the response time or throughput of your application. If your physical hardware or virtualization infrastructure runs into utilization problems, the easiest way to spot this is that it slows down. In effect that means the I/O requests done by your application are slowing down, and you can measure that. What’s more important is that you can turn this around: if your application performs fine, then whatever the virtualization or cloud infrastructure reports, there is no performance problem. To be more accurate, you only need to analyze the virtualization layer if your application performance monitoring shows that a large portion of your response time is down to CPU shortage, memory shortage or I/O latency. If that is not the case, then nothing is gained by optimizing the virtualization layer from a performance perspective.

Network impact on the transaction is minimal, even though network utilization is high

Diagnosing the virtualization layer
Of course, in the case of virtualization and private clouds you still need to diagnose an infrastructure response time problem once it is identified. You measure the infrastructure response time inside your application. If you have identified a bottleneck, meaning it slows down or makes up a big portion of your response time, you need to relate that infrastructure response time back to your virtualized infrastructure: which resource is slowing down? From there you can use the metrics provided by VMware (or whichever virtualization vendor you use) to diagnose the root cause of the bottleneck. The key is that you identify the problem based on actual impact and then use the infrastructure metrics to diagnose its cause.
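
As a minimal illustration of measuring infrastructure response time inside the application (the file path and buffer size are placeholders, and a real APM agent would instrument the I/O layer rather than wrap individual calls by hand), you can time a single disk read and attribute it to the current transaction:

    import java.io.RandomAccessFile;

    // Times one read against the (virtual) disk; the elapsed time is infrastructure
    // response time as seen by the application.
    public class DiskLatencyProbe {
        public static void main(String[] args) throws Exception {
            byte[] buffer = new byte[4096];
            try (RandomAccessFile file = new RandomAccessFile("/var/tmp/probe.dat", "r")) { // file must exist
                long start = System.nanoTime();
                int read = file.read(buffer);                          // the actual I/O request
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println("read " + read + " bytes in " + micros + " us");
            }
        }
    }

If such reads suddenly take milliseconds instead of microseconds, the virtualization layer (or the storage behind it) is where you continue the diagnosis.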

Layers add complexity
What this of course means is that you now have to manage performance on even more levels than before. It also means that you have to somehow manage which VMs run on the same physical host. We have already seen that the nature of the shared environment means that applications can impact each other. So a big part of managing the performance in a virtualized environment is to detect that impact and “tune” your environment in a way that both minimizes that impact and maximizes your resource usage and utilization. These are diametrically opposed goals!

Now what about clouds?
A cloud is by nature more dynamic than a “simple” virtualized environment. A cloud enables you to provision new environments on the fly and dispose of them again. This leads to spikes in utilization, which impact the performance of existing applications. So in the cloud the “minimize impact vs. maximize resource usage” goal becomes even harder to achieve. Cloud vendors usually provide management software to control the placement of your VMs. They move VMs around based on complex algorithms to try to achieve the impossible goal of high performance and high utilization. Their success is limited, because most of these management solutions ignore the application and only look at the virtualization layer to make these decisions. It’s a vicious cycle and the price you pay for better utilizing your data center and faster provisioning of new environments.

Maybe a bigger issue is capacity management. The shared nature of the environment prevents you from making straightforward predictions about capacity usage on a hardware level. You can get a long way by relating the requests done by your application on a transactional level to the capacity usage on the virtualization layer, but that is cumbersome and does not lead to accurate results. Then of course a cloud is dynamic and your application is distributed, so without a solution that measures all your transactions and auto-detects changes in the cloud environment, this can easily become a full-time job.

Another problem is that the only way to notice a real capacity problem is to determine whether the infrastructure response time degrades and negatively impacts your application. Remember: utilization does not equal performance, and you want high utilization anyway! But once you notice capacity problems, it is too late to order new hardware.

That means that you not only need to provision for peak loads, effectively over-provisioning again, but also need to take all those temporary and newly provisioned environments into account. A match made in planning hell.

Performance Management in a public cloud
First let me clarify the term public cloud here. While a public cloud has many characteristics, the most important ones for this article are that you don’t own the hardware, have limited control over it and can provision new instances on the fly.

If you think about this carefully you will notice immediately that you have fewer problems. You only care about the performance of your application and not at all about the utilization of the hardware – it’s not your hardware after all. That means there are no competing goals! Depending on your application, you will add a new instance if response time degrades on a specific tier or if you need more throughput than you currently achieve. You provision on the fly, meaning your capacity management is done on the fly as well: another problem solved. You still run in a shared environment and this will impact you, but your options are limited, as you cannot monitor or fix this directly. What you can do is measure the latency of the infrastructure. If you notice a slowdown you can talk to your vendor. Most of the time you will not care and will simply terminate the old instance and start a new one when infrastructure response time degrades. Chances are the new instance is started on a less utilized server, and that’s that. I won’t say that this is easy, and I do not say that this is better, but I do say that performance management is easier than in private clouds.
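
A heavily simplified sketch of that decision logic, assuming the AWS SDK for Java v1 and placeholder AMI and instance-type values (a real deployment would more likely adjust an Auto Scaling group than call RunInstances directly):

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.RunInstancesRequest;

    public class SimpleScaler {
        private static final double RESPONSE_TIME_SLA_MS = 500.0;     // illustrative threshold

        public static void scaleIfNeeded(double tier95thPercentileMs) {
            if (tier95thPercentileMs <= RESPONSE_TIME_SLA_MS) {
                return;                                               // application performs fine, do nothing
            }
            AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
            RunInstancesRequest request = new RunInstancesRequest()
                    .withImageId("ami-0abcdef1234567890")             // placeholder AMI of the slow tier
                    .withInstanceType("m5.large")                     // placeholder instance type
                    .withMinCount(1)
                    .withMaxCount(1);
            ec2.runInstances(request);                                // add capacity to the slow tier
        }
    }

The trigger is the application’s response time, not any utilization metric of the underlying hardware, which is exactly the point made above.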

Conclusion
Private and public cloud strategies are based on similar underlying technologies. Just because they are based on similar technologies, however, doesn’t mean that they are similar in terms of actual usage. In the private cloud, the goal is to become more efficient by dynamically and automatically allocating resources in order to drive up utilization while also lowering the management cost of those many instances. The problem is that driving up utilization and maintaining high performance are competing goals: the higher the utilization, the more the applications will impact one another. Reaching a balance is highly complex, and is made more complex by the dynamic nature of the private cloud.

In the public cloud, these competing goals are split between the cloud provider, who cares about utilization, and the application owner, who cares about performance. In the public cloud the application owner has limited options: he can measure application performance; he can measure the impact of infrastructure degradation on the performance of his business transactions; but he cannot resolve the actual degradation. All he can do is terminate slow instances and/or add new ones in the hope that they will perform better. In this way, performance in the public cloud is in fact easier to manage.

But whether it be public or private, you must actively manage performance in a cloud production environment. In the private cloud you need to maintain a balance between high utilization and application performance, which requires you to know what is going on under the hood. And without application performance management in the public cloud, application owners are at the mercy of cloud providers, whose goals are not necessarily aligned with their own.


More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the enterprise Java space. Before coming to CompuwareAPM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the center of excellence. He specializes in application performance management in large-scale production environments with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage Big Data solutions and how these technologies impact and change the application landscape.
