
Why Performance Management Is Easier in Public than in On-Premise Clouds

Performance Management in public and in private clouds

Performance is one of the major concerns in the cloud. But the question should not really be whether the cloud performs; it is whether the application in question can and does perform in the cloud. The main problem is that application performance is either not managed at all or managed incorrectly, so this question often remains unanswered. Granted, performance management in cloud environments is harder than in physical ones, but it can be argued that it is easier in public clouds than in on-premise clouds or even a large virtualized environment. How do I come to that conclusion? Before answering that, let's look at the unique challenges that virtualization in general – and clouds in particular – pose to the realm of APM.

Time is relative
The problem with timekeeping is well known in the VMware community. There is a very good VMware whitepaper that explains it in quite some detail. It doesn't tell the whole story, however, because there are other virtualization solutions like Xen, KVM, Hyper-V and more, and all of them solve this problem differently. On top of that, the various guest operating systems behave very differently as well. In fact I might write a whole article just about that, but the net result is that time measurement inside a guest is not accurate unless you know what you are doing: it might lag behind real time one moment and speed up to catch up the next. If your monitoring tool is aware of this and supports native timing calls, it can work around the problem and give you real response times. Unfortunately that leads to yet another issue. Your VM is not running all the time; like a process, it will get de-scheduled from time to time – but unlike a process, it will not be aware of that. While real time is important for response time, this will skew your performance analysis on a deeper level.
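To see the difference in practice, here is a minimal, purely illustrative Python sketch that measures a call with both the wall clock and the monotonic clock. Inside a guest the wall-clock delta can jump when the hypervisor resynchronizes the clock, while the monotonic delta at least never runs backwards; comparing the two is a cheap hint that timekeeping is off:

```python
import time

def timed_call(fn, *args):
    """Measure fn(*args) with both the wall clock and the monotonic clock.

    In a virtualized guest the wall clock may lag and then jump forward
    when the hypervisor catches it up with real time, so the two deltas
    can diverge; the monotonic clock never goes backwards.
    """
    wall_start = time.time()
    mono_start = time.monotonic()
    result = fn(*args)
    wall_ms = (time.time() - wall_start) * 1000.0
    mono_ms = (time.monotonic() - mono_start) * 1000.0
    return result, wall_ms, mono_ms

result, wall_ms, mono_ms = timed_call(sum, range(1000))
```

On physical hardware the two deltas will agree closely; a persistent gap between them on a guest points at the timer problems described above.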

The Effects of timekeeping on Response and Execution Time

If you measure real time, Method B looks more expensive than it actually is, which might lead you down the wrong track when you hunt for a performance problem. If you measure apparent time you don't have this problem, but your response times no longer reflect the real user experience. There are generally two ways of handling this. Your monitoring solution can capture these de-schedule times and charge them against your execution times – though the more granular the measurement, the more overhead this produces. The more pragmatic approach is to account for them once per transaction and thus capture the impact the de-schedules have on your response time. Yet another approach is to periodically read the CPU steal time (either from vSphere or via mpstat on Xen) and correlate it with your transaction data. This gives you a better grasp on things. Even then there remains a level of uncertainty in your performance diagnostics, but at least you know the real response time and how fast your transactions really are. Bottom line: those two are no longer the same thing.
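The steal-time approach can be sketched in a few lines. On Linux guests the aggregate `cpu` line of `/proc/stat` carries a `steal` field (the eighth counter), which is what mpstat reports as `%steal`; the sample numbers below are hypothetical:

```python
def steal_fraction(cpu_line):
    """Return the fraction of accounted jiffies the hypervisor withheld
    from this guest, parsed from the aggregate 'cpu' line of /proc/stat.

    Field order after the 'cpu' label:
    user nice system idle iowait irq softirq steal [guest guest_nice]
    """
    fields = cpu_line.split()
    if fields[0] != "cpu":
        raise ValueError("expected the aggregate 'cpu' line")
    ticks = [int(f) for f in fields[1:]]
    steal = ticks[7] if len(ticks) > 7 else 0
    return steal / float(sum(ticks))

# Hypothetical sample from a busy guest: roughly 5% of CPU time stolen.
sample = "cpu 4705 150 1120 16250 520 30 45 1180 0 0"
fraction = steal_fraction(sample)
```

Sampling this periodically and storing it next to your transaction timestamps is the correlation described above.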

The impact of shared environments
The sharing of resources is what makes virtualization and cloud environments compelling from a cost perspective. Most normal data centers have an average CPU utilization far below 20 percent. The reason is twofold: on the one hand they isolate the different applications by running them on different hardware; on the other hand they have to provision for peak load. By using virtualization you can put multiple "isolated" applications on the same hardware. Resource utilization is higher, but even then it rarely goes beyond 30-40 percent, as you still need to take peak load into account. But the peak loads of the different applications might occur at different times! The first order of business here is to find the optimal balance.

The first thing to realize is that your VM is treated like a process by the virtualization infrastructure. It gets a share of resources – how much is a matter of configuration. If it reaches the configured limit it has to wait; the same is true if the physical resources are exhausted. To drive utilization higher, virtualization and cloud environments overcommit: they allow, say, ten 2GHz VMs on a 16GHz physical machine. Most of the time this is perfectly fine, as not all VMs will demand 100 percent CPU at the same time. If there is not enough CPU to go around, some will be de-scheduled and given a greater share the next time around. Most importantly, this is true not only for CPU but also for memory, disk and network I/O.
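The overcommitment arithmetic is easy to sketch. This is a toy model under stated assumptions (a fair, equal split under full contention; real schedulers use shares, reservations and limits), but it shows why a "2GHz" VM is not always a 2GHz VM:

```python
def worst_case_share_ghz(host_ghz, vm_count, vm_ghz):
    """Worst-case CPU a single VM gets when every VM on the host demands
    100% at once.  With no contention each VM still gets its full vm_ghz,
    so the result is capped at the nominal allocation."""
    fair_slice = host_ghz / vm_count
    return min(vm_ghz, fair_slice)

# Ten 2 GHz VMs on a 16 GHz host: each may be throttled to 1.6 GHz.
share = worst_case_share_ghz(16.0, 10, 2.0)
```

The same shape of calculation applies to memory, disk and network I/O, only with less predictable scheduling.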

What does this mean for performance management? It means that increasing load on one application, or a bug in it, can negatively impact another without you being aware of it. Without a virtualization-aware monitoring solution that also monitors the other application, you will not see this. All you see is that application performance goes down!

When the load increases on one application it affects the other

With proper tools this is relatively easy to catch for CPU-related problems, but a lot harder for I/O-related issues. You need to monitor both applications, their VMs and the underlying virtualization infrastructure, and correlate the information – which adds a lot of complexity. The virtualization vendors try to solve this by looking purely at VM- and host-level system metrics. What they forget is that high utilization of a resource does not mean the application is slow! And it is the application we care about.

OS metrics are worse than useless
Now for the good stuff: forget your guest operating system's utilization metrics, because they are not showing you what is really going on. There are several reasons for this. One is the timekeeping problem. Even if you and your monitoring tool use the right timer and measure time correctly, your operating system might not. In fact most systems will not read the timer device all the time, but rely on the CPU frequency and counters to estimate time, as this is faster than reading the timer device. Since utilization metrics are always based on a total number of possible requests or instructions per time slice, they get skewed as a result – and this is true for every metric, not just CPU. The second problem is that the guest does not really know the upper limit of a resource, as the virtualization environment might overcommit. That means you may never be able to get 100 percent, or you can get it at one time but not another. A good example is the Amazon EC2 cloud. Although I cannot be sure, I suspect that the guest CPU metrics are actually correct: they report the CPU utilization of the underlying hardware, only you will never get 100 percent of the underlying hardware. Without knowing how big a share you get, these metrics are useless.

What does this mean? You can rely on absolute numbers like the number of I/O requests, the number of SQL Statements and the amount of data sent over the wire for a specific application or transaction. But you do not know whether an over-utilization of the physical hardware presents a bottleneck. There are two ways to solve this problem.
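Those absolute, per-transaction numbers are easy to collect in the application itself. A minimal, illustrative counter (this is a sketch, not any particular APM product's API) might look like this:

```python
class TransactionStats:
    """Per-transaction absolute counters: I/O requests, SQL statements,
    bytes on the wire.  Unlike guest utilization percentages, these raw
    counts remain trustworthy inside a virtualized guest."""

    def __init__(self):
        self.counts = {"sql": 0, "io": 0, "bytes_out": 0}

    def record(self, kind, amount=1):
        self.counts[kind] += amount

# Instrumentation hooks in the data access and network layers would
# call record(); here we simulate one transaction's activity.
stats = TransactionStats()
stats.record("sql")
stats.record("sql")
stats.record("io")
stats.record("bytes_out", 2048)
```

What such counters cannot tell you, as the paragraph above notes, is whether contention on the physical hardware is the bottleneck – that needs one of the two approaches below.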

The first involves correlating resource and throughput metrics of your application with the utilization and throughput measures reported on the virtualization layer. In the case of VMware that means correlating detailed application- and transaction-level metrics with metrics provided by vSphere; on EC2 you can do the same with metrics provided by CloudWatch.
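A sketch of that correlation step, assuming you have already sampled both layers (the actual vSphere or CloudWatch fetch is omitted): pair each application response-time sample with the nearest infrastructure utilization sample by timestamp, so the two layers can be inspected side by side:

```python
def correlate(app_samples, infra_samples):
    """Pair (timestamp, response_ms) application samples with the nearest
    (timestamp, utilization) infrastructure sample, e.g. one fetched from
    vSphere or CloudWatch.  Returns (timestamp, response_ms, utilization)."""
    paired = []
    for ts, response_ms in app_samples:
        nearest_ts, utilization = min(infra_samples, key=lambda s: abs(s[0] - ts))
        paired.append((ts, response_ms, utilization))
    return paired

# Hypothetical data: app samples every minute, infra samples on their own grid.
app = [(10, 120.0), (70, 480.0)]              # (seconds, milliseconds)
infra = [(0, 35.0), (60, 92.0), (120, 40.0)]  # (seconds, % CPU)
paired = correlate(app, infra)
```

The slow 480 ms transaction lines up with the 92% utilization sample – which is suggestive, but as the article argues below, correlation with high utilization alone does not prove causation.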

EC2 Cloud Monitoring Dashboard showing 3 instances

This is the approach recommended by some virtualization vendors. It is possible, but because of the complexity it requires a lot of expertise. You do, however, know which VM consumes how much of your resources, and with a little calculation magic you can break this down to the application and transaction level – at least on average. You need this for resource optimization and to decide which VMs should be moved to different physical hardware. It does not do you a lot of good in the case of acute performance problems or troubleshooting, though, as you don't know the actual impact of the resource shortage – or whether it has an impact at all. You might move a VM and not actually speed things up. The real crux is that just because something is heavily used does not mean it is the source of your performance problem! And of course this approach only works if you are in charge of the hardware, meaning it does not work with public clouds.

The second option is one that is, among others, proposed by Bernd Harzog, a well-known expert in the virtualization space. It is also the one that I would recommend.

Response time, response time, latency and more response time
On the Virtualization Practice blog, Bernd explains in detail why resource utilization does not help you with either performance management or capacity planning. Instead he points out that what really matters is the response time or throughput of your application. If your physical hardware or virtualization infrastructure runs into utilization problems, the easiest way to spot this is when things slow down. In effect that means the I/O requests done by your application are slowing down, and you can measure that. What's more important is that you can turn this around! If your application performs fine, then whatever the virtualization or cloud infrastructure reports, there is no performance problem. To be more precise, you only need to analyze the virtualization layer if your application performance monitoring shows that a large portion of your response time is down to CPU shortage, memory shortage or I/O latency. If that is not the case, then nothing is gained by optimizing the virtualization layer from a performance perspective.
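In code, that decision rule reads roughly like this (the contributor names and the 25% threshold are invented for illustration; a real tool would baseline these values):

```python
def worth_analyzing_virtualization(breakdown, threshold=0.25):
    """breakdown maps response-time contributors (seconds) for a
    transaction, e.g. {'app_code': .., 'cpu_wait': .., 'io_latency': ..}.
    Only when infrastructure waits make up a large share of response
    time is the virtualization layer worth diagnosing at all."""
    infra_keys = ("cpu_wait", "memory_wait", "io_latency")
    total = sum(breakdown.values())
    infra = sum(breakdown.get(k, 0.0) for k in infra_keys)
    return (infra / total) >= threshold

# Two hypothetical transactions: one dominated by application code,
# one dominated by infrastructure waits.
fast_path = {"app_code": 0.90, "cpu_wait": 0.02, "io_latency": 0.03}
slow_path = {"app_code": 0.40, "cpu_wait": 0.25, "io_latency": 0.30}
```

For the first transaction the answer is no – whatever vSphere reports, tuning the virtualization layer will not make it faster. Only the second justifies digging into the infrastructure.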

Network Impact on Transaction is minimal, even though network utilization is high

Diagnosing the virtualization layer
Of course, in the case of virtualization and private clouds you still need to diagnose an infrastructure response time problem once it is identified. You measure the infrastructure response time inside your application. If you have identified a bottleneck – meaning it slows down or makes up a big portion of your response time – you need to relate that infrastructure response time back to your virtualized infrastructure: which resource is slowing down? From there you can use the metrics provided by VMware (or whoever your virtualization vendor is) to diagnose the root cause of the bottleneck. The key is that you identify the problem based on actual impact and then use the infrastructure metrics to diagnose its cause.

Layers add complexity
What this of course means is that you now have to manage performance on even more levels than before. It also means that you have to somehow manage which VMs run on the same physical host. We have already seen that the nature of the shared environment means that applications can impact each other. So a big part of managing performance in a virtualized environment is to detect that impact and "tune" your environment in a way that both minimizes the impact and maximizes your resource usage and utilization. These are diametrically opposed goals!

Now what about clouds?
A cloud is by nature more dynamic than a "simple" virtualized environment. It enables you to provision new environments on the fly and dispose of them again. This leads to spikes in utilization, and thus to performance impact on existing applications. So in the cloud the "minimize impact vs. maximize resource usage" goal becomes even harder to achieve. Cloud vendors usually provide management software for the placement of your VMs; it moves them around based on complex algorithms to try to achieve the impossible goal of high performance and high utilization. The success is limited, because most of these management solutions ignore the application and only look at the virtualization layer to make their decisions. It's a vicious cycle, and the price you pay for better utilization of your datacenter and faster provisioning of new environments.

Maybe a bigger issue is capacity management. The shared nature of the environment prevents you from making straightforward predictions about capacity usage on the hardware level. You get a long way by relating the requests done by your application on a transactional level to the capacity usage on the virtualization layer, but that is cumbersome and does not lead to accurate results. Then of course a cloud is dynamic and your application is distributed, so without a solution that measures all your transactions and auto-detects changes in the cloud environment, you can easily make this a full-time job.

Another problem is that the only way to notice a real capacity problem is to determine whether the infrastructure response time degrades and negatively impacts your application. Remember: utilization does not equal performance, and you want high utilization anyway! But once you notice capacity problems, it is too late to order new hardware.

That means you not only need to provision for peak loads, effectively over-provisioning again, you also need to take all those temporary and newly-provisioned environments into account. A match made in planning hell.

Performance Management in a public cloud
First let me clarify the term public cloud. While a public cloud has many characteristics, the most important ones for this article are that you don't own the hardware, have limited control over it, and can provision new instances on the fly.

If you think about this carefully you will notice immediately that you have fewer problems. You only care about the performance of your application and not at all about the utilization of the hardware – it's not your hardware, after all. That means there are no competing goals! Depending on your application, you will add a new instance if response time degrades on a specific tier or if you need more throughput than you currently achieve. You provision on the fly, meaning your capacity management is done on the fly as well – another problem solved. You still run in a shared environment and this will impact you, but your options are limited, as you cannot monitor or fix this directly. What you can do is measure the latency of the infrastructure. If you notice a slowdown you can talk to your vendor; most of the time, though, you will not bother and will simply terminate the old instance and start a new one when infrastructure response time degrades. Chances are the new instance is started on a less utilized server, and that's that. I won't say that this is easy, and I do not say that it is better, but I do say that performance management here is easier than in private clouds.
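The "terminate and replace" remedy can be expressed as a simple policy. The thresholds and instance IDs below are invented, and the actual terminate/launch calls against a cloud provider's API are deliberately left out; this only sketches the selection step:

```python
def instances_to_recycle(infra_latency_ms, baseline_ms, factor=2.0):
    """Pick the instances whose measured infrastructure latency has
    drifted well above the fleet baseline.  In a public cloud you cannot
    fix the noisy neighbour, but you can replace the instance and hope
    the new one lands on a less utilized host."""
    return [
        instance_id
        for instance_id, latency in infra_latency_ms.items()
        if latency > factor * baseline_ms
    ]

# Hypothetical fleet: i-0bbb's I/O latency is four times the baseline.
fleet = {"i-0aaa": 4.0, "i-0bbb": 21.0, "i-0ccc": 5.5}
victims = instances_to_recycle(fleet, baseline_ms=5.0)
```

Everything the policy needs – per-instance infrastructure latency – is measurable from inside the application, which is precisely why no access to the provider's hardware metrics is required.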

Conclusion
Private and Public cloud strategies are based on similar underlying technologies. Just because they are based on similar technologies, however, doesn’t mean that they are similar in any way in terms of actual usage. In the private cloud, the goal is becoming more efficient by dynamically and automatically allocating resources in order to drive up utilization while also lowering management costs of those many instances. The problem with this is that driving up utilization and having high performance are competing goals. The higher the utilization the more the applications will impact one another. Reaching a balance is highly complex, and is made more complex due to the dynamic nature of the private cloud.

In the public cloud, these competing goals are split between the cloud provider, who cares about utilization, and the application owner, who cares about performance. In the public cloud the application owner has limited options: he can measure application performance; he can measure the impact of infrastructure degradation on the performance of his business transactions; but he cannot resolve the actual degradation. All he can do is terminate slow instances and/or add new ones in the hope that they will perform at a higher level. In this way, performance in the public cloud is in fact easier to manage.

But whether it be public or private, you must actively manage performance in a cloud production environment. In the private cloud you need to maintain a balance between high utilization and application performance, which requires you to know what is going on under the hood. And without application performance management in the public cloud, application owners are at the mercy of cloud providers, whose goals are not necessarily aligned with their own.

Related reading:

  1. The rise and fall of the machines – Watching out for clouds
  2. From Cloud Monitoring to Effective Cloud Management
  3. Integrated Cloud-based Load Testing and Performance Management from Keynote and dynaTrace
  4. Field Report – Application Performance Management in WebSphere Environments
  5. Troubleshooting response time problems – why you cannot trust your system metrics

More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the enterprise Java space. Before coming to CompuwareAPM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in its Center of Excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage big data solutions and how these technologies impact and change the application landscape.
