Why Performance Management Is Easier in Public than in On-Premise Clouds

Performance Management in public and in private clouds

Performance is one of the major concerns in the cloud. But the question should not really be whether the cloud performs; it should be whether the application in question can and does perform in the cloud. The main problem is that application performance is either not managed at all or managed incorrectly, so this question often remains unanswered. Granted, performance management in cloud environments is harder than in physical ones, but it can be argued that it is easier in public clouds than in on-premise clouds or even a large virtualized environment. How do I come to that conclusion? Before answering that, let's look at the unique challenges that virtualization in general, and clouds in particular, pose to the realm of APM.

Time is relative
The problem with timekeeping is well known in the VMware community. There is a very good VMware whitepaper that explains it in some detail. It doesn't tell the whole story, however, because there are other virtualization solutions like Xen, KVM and Hyper-V, and all of them solve this problem differently. On top of that, the various guest operating systems behave very differently as well. In fact I might write a whole article just about that, but the net result is that time measurement inside a guest is not accurate unless you know what you are doing. The guest clock might lag behind real time one moment and speed up to catch up the next. If your monitoring tool is aware of that and supports native timing calls, it can work around the problem and give you real response times. Unfortunately that leads to yet another problem: your VM is not running all the time. Like a process, it gets de-scheduled from time to time; unlike a process, however, it is not aware of that. While real time is what matters for response time, the de-schedules will skew your performance analysis on a deeper level.
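To make the distinction concrete, here is a minimal Java sketch (my own illustration, not taken from any particular tool) that times the same piece of work with the wall clock and with the JVM's monotonic timer. On physical hardware the two readings agree; inside a guest, the wall clock can jump when the guest catches up with real time, and both readings still include any time the VM spent de-scheduled:

```java
// Minimal sketch: timing the same call with a wall-clock and a monotonic timer.
// Inside a guest OS, both readings can include time the VM was de-scheduled,
// so neither equals the CPU time the method actually consumed.
public class TimerDemo {
    public static void main(String[] args) throws InterruptedException {
        long wallStart = System.currentTimeMillis(); // wall clock; can jump when the guest resyncs
        long monoStart = System.nanoTime();          // monotonic; never jumps backwards

        doWork();

        long wallMillis = System.currentTimeMillis() - wallStart;
        long monoMillis = (System.nanoTime() - monoStart) / 1_000_000;
        System.out.printf("wall clock: %d ms, monotonic: %d ms%n", wallMillis, monoMillis);
    }

    private static void doWork() throws InterruptedException {
        Thread.sleep(100); // stand-in for real work
    }
}
```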

The effects of timekeeping on response and execution time

If you measure real time, then Method B looks more expensive than it actually is. This might lead you down the wrong track when you look for a performance problem. If you measure apparent time, you don't have this problem, but your response times no longer reflect the real user experience. There are several ways of handling this. Your monitoring solution can capture the de-schedule times and account for them against your execution times; the more granular your measurement, the more overhead this produces. The more pragmatic approach is to account for it once per transaction and thus capture the impact that the de-schedules have on your response time. Yet another approach is to periodically read the CPU steal time (either from vSphere or via mpstat on Xen) and correlate it with your transaction data. This gives you a better grasp on things. Even then there remains a level of uncertainty in your performance diagnostics, but at least you know the real response time and how fast your transactions really are. Bottom line: those two are no longer the same thing.
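The raw steal number is easy to get at yourself. A minimal sketch, assuming a Linux guest (e.g., on Xen or KVM) whose /proc/stat exposes the steal column, the eighth value on the cpu line; sampling it around a time window gives you the ticks the hypervisor stole in that window, which you can then store alongside your transaction timestamps:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch: read the cumulative "steal" counter from /proc/stat on a Linux guest.
// Steal time is time the hypervisor ran something else while this VM wanted the CPU.
public class StealTimeSampler {
    /** Returns the cumulative steal time in clock ticks since boot. */
    static long readStealTicks() throws IOException {
        String cpuLine = Files.readAllLines(Paths.get("/proc/stat")).get(0);
        // /proc/stat: cpu user nice system idle iowait irq softirq steal ...
        String[] f = cpuLine.trim().split("\\s+");
        return Long.parseLong(f[8]);
    }

    public static void main(String[] args) throws Exception {
        long before = readStealTicks();
        Thread.sleep(5_000); // sampling interval
        long after = readStealTicks();
        System.out.println("steal ticks in last 5s: " + (after - before));
    }
}
```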

The impact of shared environments
The sharing of resources is what makes virtualization and cloud environments compelling from a cost perspective. Most traditional data centers have an average CPU utilization far below 20 percent. The reason is twofold: on the one hand they isolate the different applications by running them on separate hardware; on the other hand they have to provision for peak load. With virtualization you can put multiple "isolated" applications on the same hardware. Resource utilization is higher, but even then it rarely goes beyond 30 to 40 percent, as you still need to account for peak load. The peak loads of the different applications might occur at different times, though! The first order of business here is to find the optimal balance.

The first thing to realize is that your VM is treated like a process by the virtualization infrastructure. It gets a share of resources; how much is a matter of configuration. If it reaches the configured limit, it has to wait. The same is true if the physical resources are exhausted. To drive utilization higher, virtualization and cloud environments overcommit: they will, for example, allow ten 2 GHz VMs on a 16 GHz physical machine, promising 20 GHz of capacity where only 16 GHz exists. Most of the time this is perfectly fine, as not all VMs demand 100 percent CPU at the same time. If there is not enough CPU to go around, some VMs are de-scheduled and given a greater share the next time around. Most importantly, this is true not only for CPU but also for memory, disk and network I/O.

What does this mean for performance management? It means that increasing load on one application, or a bug in it, can impact another application negatively without you being aware of it. Without a virtualization-aware monitoring solution that also monitors the other application, you will not see this. All you see is that application performance goes down!

When the load increases on one application it affects the other

With proper tools this is relatively easy to catch for CPU-related problems, but a lot harder for I/O-related issues. You need to monitor both applications, their VMs and the underlying virtualization infrastructure, and correlate the information. That adds a lot of complexity. The virtualization vendors try to solve this by looking purely at VM- and host-level system metrics. What they forget is that high utilization of a resource does not mean the application is slow! And it is the application we care about.

OS metrics are worse than useless
Now for the good stuff: forget your guest operating system's utilization metrics, because they do not show what is really going on. There are several reasons for this. One is the timekeeping problem. Even if you and your monitoring tool use the right timer and measure time correctly, your operating system might not. In fact most systems do not read the timer device constantly, but use the CPU frequency and counters to estimate time, as that is faster than reading the timer device. Since utilization metrics are always based on a total number of possible requests or instructions per time slice, this estimation skews them. That is true for every metric, not just CPU. The second problem is that the guest does not really know the upper limit of a resource, as the virtualization environment might overcommit. That means you might never be able to get 100 percent, or you can get it at one time but not another. A good example is the Amazon EC2 cloud. Although I cannot be sure, I suspect the guest CPU metrics are actually correct: they report the CPU utilization of the underlying hardware, only you will never get 100 percent of the underlying hardware. Without knowing how big a share you get, those metrics are useless.

What does this mean? You can rely on absolute numbers, like the number of I/O requests, the number of SQL statements and the amount of data sent over the wire for a specific application or transaction. But you do not know whether over-utilization of the physical hardware presents a bottleneck. There are two ways to solve this problem.
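Before getting to those two options, here is a minimal sketch of what such absolute, per-transaction counters could look like. The hooks (onSqlStatement, onBytesSent) are hypothetical stand-ins for wherever your instrumentation intercepts those events; the point is that counts, unlike utilization percentages, are not distorted by virtual timekeeping or an unknown resource cap:

```java
import java.util.concurrent.atomic.LongAdder;

// Per-transaction absolute counters: "how many SQL statements, how many bytes"
// instead of "what percent of an unknown, overcommitted resource".
public class TransactionCounters {
    private final LongAdder sqlStatements = new LongAdder();
    private final LongAdder bytesSent = new LongAdder();

    public void onSqlStatement()        { sqlStatements.increment(); }
    public void onBytesSent(long bytes) { bytesSent.add(bytes); }

    public String summary() {
        return "sql=" + sqlStatements.sum() + ", bytesSent=" + bytesSent.sum();
    }
}
```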

The first involves correlating the resource and throughput metrics of your application with the utilization and throughput measures reported by the virtualization layer. In the case of VMware that means correlating detailed application- and transaction-level metrics with the metrics provided by vSphere. On EC2 you can do the same with the metrics provided by CloudWatch.

EC2 Cloud Monitoring Dashboard showing 3 instances
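As a sketch of what pulling those CloudWatch numbers looks like, the following assumes the AWS SDK for Java (v1) and a placeholder instance id. It fetches the average CPUUtilization of one instance in five-minute buckets for the last hour, ready to be lined up with application response times from the same window:

```java
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.*;

import java.util.Date;

// Pull one EC2 instance's average CPU utilization from CloudWatch so it can
// be correlated with application-level response times for the same window.
public class CloudWatchFetch {
    public static void main(String[] args) {
        AmazonCloudWatch cw = AmazonCloudWatchClientBuilder.defaultClient();

        GetMetricStatisticsRequest req = new GetMetricStatisticsRequest()
                .withNamespace("AWS/EC2")
                .withMetricName("CPUUtilization")
                .withDimensions(new Dimension()
                        .withName("InstanceId")
                        .withValue("i-0123456789abcdef0")) // placeholder id
                .withStartTime(new Date(System.currentTimeMillis() - 3600_000L)) // last hour
                .withEndTime(new Date())
                .withPeriod(300) // 5-minute buckets
                .withStatistics(Statistic.Average);

        for (Datapoint dp : cw.getMetricStatistics(req).getDatapoints()) {
            System.out.printf("%s avg CPU %.1f%%%n", dp.getTimestamp(), dp.getAverage());
        }
    }
}
```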

This is the approach recommended by some virtualization vendors. It is possible, but due to the complexity it requires a lot of expertise. You do, however, know which VM consumes how much of your resources. With a little calculation magic you can break this down to the application and transaction level, at least on average. You need this for resource optimization and for deciding which VMs should be moved to different physical hardware. It does not do you much good for acute performance problems or troubleshooting, though, as you don't know the actual impact of the resource shortage, or whether it has an impact at all. You might move a VM and not actually speed anything up. The real crux is that just because something is heavily used does not mean it is the source of your performance problem! And of course this approach only works if you are in charge of the hardware, meaning it does not work with public clouds!

The second option is one that is proposed by, among others, Bernd Harzog, a well-known expert in the virtualization space. It is also the one that I would recommend.

Response time, response time, latency and more response time
On the Virtualization Practice blog, Bernd explains in detail why resource utilization does not help you with either performance management or capacity planning. What really matters, he points out, is the response time or throughput of your application. If your physical hardware or virtualization infrastructure runs into utilization problems, the easiest way to spot it is that things slow down. In effect that means the I/O requests made by your application slow down, and you can measure that. What's more important is that you can turn this around: if your application performs fine, then whatever the virtualization or cloud infrastructure reports, there is no performance problem. To be more precise, you only need to analyze the virtualization layer if your application performance monitoring shows that a large portion of your response time is down to CPU shortage, memory shortage or I/O latency. If that is not the case, then nothing is gained by optimizing the virtualization layer from a performance perspective.

Network impact on the transaction is minimal, even though network utilization is high

Diagnosing the virtualization layer
Of course, in the case of virtualization and private clouds you still need to diagnose an infrastructure response time problem once it has been identified. You measure the infrastructure response time inside your application. If you have identified a bottleneck, meaning it slows down or makes up a big portion of your response time, you need to relate that infrastructure response time back to your virtualized infrastructure: which resource is slowing down? From there you can use the metrics provided by VMware (or whichever virtualization vendor you use) to diagnose the root cause of the bottleneck. The key is to identify the problem based on actual impact, and then use the infrastructure metrics to diagnose its cause.
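As an example of what measuring infrastructure response time inside the application can mean in practice, here is a minimal sketch that times a single JDBC query. Where the measurement goes (here it is just printed) would in reality be your monitoring solution; if these latencies stay flat, the virtualization layer is not your problem, whatever its utilization counters say:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch: measure the latency of one I/O operation (a database query)
// from inside the application, i.e., the "infrastructure response time" as seen
// by the transaction that depends on it.
public class TimedQuery {
    static int timedRowCount(Connection con, String sql) throws SQLException {
        long start = System.nanoTime();
        try (PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            int rows = 0;
            while (rs.next()) rows++;
            return rows;
        } finally {
            long latencyNanos = System.nanoTime() - start;
            // In a real setup, record(sql, latencyNanos) into your monitoring.
            System.out.printf("query took %.2f ms%n", latencyNanos / 1e6);
        }
    }
}
```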

Layers add complexity
What this means, of course, is that you now have to manage performance on even more levels than before. It also means that you have to manage which VMs run on the same physical host. We have already seen that the nature of the shared environment means that applications can impact each other. So a big part of managing performance in a virtualized environment is to detect that impact and "tune" your environment in a way that both minimizes the impact and maximizes your resource usage and utilization. These are diametrically opposed goals!

Now what about clouds?
A cloud is by nature more dynamic than a "simple" virtualized environment. It enables you to provision new environments on the fly and dispose of them again, which leads to spikes in utilization and performance impact on existing applications. So in the cloud, the "minimize impact vs. maximize resource usage" goal becomes even harder to achieve. Cloud vendors usually provide management software that controls the placement of your VMs and moves them around based on complex algorithms, trying to achieve the impossible goal of high performance and high utilization. Success is limited, because most of these management solutions ignore the application and look only at the virtualization layer to make their decisions. It's a vicious cycle, and the price you pay for better utilization of your data center and faster provisioning of new environments.

Maybe a bigger issue is capacity management. The shared nature of the environment prevents you from making straightforward predictions about capacity usage on the hardware level. You can get a long way by relating the requests your application makes on a transactional level to the capacity usage on the virtualization layer, but that is cumbersome and does not lead to accurate results. And of course a cloud is dynamic and your application is distributed, so without a solution that measures all your transactions and automatically detects changes in the cloud environment, you can easily make this a full-time job.

Another problem is that the only way to notice a real capacity problem is to determine whether the infrastructure response time degrades and negatively impacts your application. Remember: utilization does not equal performance, and you want high utilization anyway! But once you notice capacity problems, it is too late to order new hardware.

That means that you not only need to provision for peak loads, effectively over-provisioning again, you also need to take all those temporary and newly provisioned environments into account. A match made in planning hell.

Performance Management in a public cloud
First let me clarify the term public cloud. While a public cloud has many characteristics, the most important ones for this article are that you don't own the hardware, have limited control over it, and can provision new instances on the fly.

If you think about this carefully, you will notice immediately that you have fewer problems. You only care about the performance of your application and not at all about the utilization of the hardware; it's not your hardware, after all. That means there are no competing goals! Depending on your application, you will add a new instance if response time degrades on a specific tier or if you need more throughput than you currently achieve. You provision on the fly, which means your capacity management is done on the fly as well. Another problem solved. You still run in a shared environment, and this will impact you, but your options are limited, as you cannot monitor or fix it directly. What you can do is measure the latency of the infrastructure. If you notice a slowdown, you can talk to your vendor. Most of the time, though, you will not bother: you simply terminate the old instance and start a new one if infrastructure response time degrades. Chances are the new instance is started on a less utilized server, and that's that. I won't say that this is easy, and I do not say that it is better, but I do say that performance management here is easier than in private clouds.
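To illustrate the terminate-and-replace tactic, here is a sketch of such a decision rule. The ApmClient and CloudClient interfaces, the baseline and the threshold are all hypothetical stand-ins, not a real API:

```java
// Illustrative decision rule only: replace an instance whose infrastructure
// latency has drifted far from its baseline, instead of diagnosing shared
// hardware you cannot see.
public class InstanceRecycler {
    interface ApmClient   { double p95InfraLatencyMillis(String instanceId); }
    interface CloudClient { void terminate(String instanceId); String launchReplacement(); }

    private static final double BASELINE_MS = 5.0;
    private static final double TOLERANCE   = 3.0; // replace at 3x baseline

    static void check(String instanceId, ApmClient apm, CloudClient cloud) {
        double p95 = apm.p95InfraLatencyMillis(instanceId);
        if (p95 > BASELINE_MS * TOLERANCE) {
            // Start the replacement first (likely on a less loaded host),
            // then retire the slow instance.
            String fresh = cloud.launchReplacement();
            cloud.terminate(instanceId);
            System.out.println("replaced " + instanceId + " with " + fresh
                    + " (p95=" + p95 + " ms)");
        }
    }
}
```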

Conclusion
Private and public cloud strategies are based on similar underlying technologies, but that doesn't mean they are similar in terms of actual usage. In the private cloud, the goal is to become more efficient by dynamically and automatically allocating resources in order to drive up utilization while also lowering the management cost of all those instances. The problem is that driving up utilization and maintaining high performance are competing goals: the higher the utilization, the more the applications impact one another. Reaching a balance is highly complex, and is made more so by the dynamic nature of the private cloud.

In the public cloud, these competing goals are split between the cloud provider, who cares about utilization, and the application owner, who cares about performance. The application owner has limited options: he can measure application performance; he can measure the impact of infrastructure degradation on the performance of his business transactions; but he cannot resolve the actual degradation. All he can do is terminate slow instances and/or add new ones in the hope that they will perform better. In this way, performance in the public cloud is in fact easier to manage.

But whether public or private, you must actively manage performance in a cloud production environment. In the private cloud you need to maintain a balance between high utilization and application performance, which requires you to know what is going on under the hood. And without application performance management in the public cloud, application owners are at the mercy of cloud providers, whose goals are not necessarily aligned with their own.


More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the enterprise Java space. Before coming to Compuware APM dynaTrace, he was Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the Center of Excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage big data solutions and how these technologies impact and change the application landscape.