Why Performance Management Is Easier in Public than On-Premise Clouds

Performance Management in public and in private clouds

Performance is one of the major concerns in the cloud. But the question should not really be whether or not the cloud performs, but whether the application in question can and does perform in the cloud. The main problem is that application performance is either not managed at all or managed incorrectly, and therefore this question often remains unanswered. Granted, performance management in cloud environments is harder than in physical ones, but it can be argued that it is easier in public clouds than in on-premise clouds or even in a large virtualized environment. How do I come to that conclusion? Before answering that, let's look at the unique challenges that virtualization in general, and clouds in particular, pose to the realm of APM.

Time is relative
The problem with timekeeping is well known in the VMware community. There is a very good VMware whitepaper that explains this in quite some detail. It doesn't tell the whole story, however, because there are other virtualization solutions like Xen, KVM, Hyper-V and more, and all of them solve this problem differently. On top of that, the various guest operating systems behave very differently as well. In fact I might write a whole article just about that, but the net result is that time measurement inside a guest is not accurate unless you know what you are doing. It might lag behind real time and then speed up to catch up the next moment. If your monitoring tool is aware of that and supports native timing calls, it can work around this and give you real response times. Unfortunately that leads to yet another problem. Your VM is not running all the time: like a process, it will get de-scheduled from time to time; unlike a process, however, it will not be aware of that. While real time is important for response time, this will skew your performance analysis on a deeper level.
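The effect is easy to observe for yourself. Below is a minimal sketch in Java (assuming a guest with default timer settings; exact behavior varies by hypervisor and OS) that measures the same interval with the wall clock and with the JVM's monotonic timer. Inside a guest under load the two can diverge noticeably:

```java
// Minimal sketch: measure the same interval with two clock sources.
// Inside a virtualized guest the wall clock (currentTimeMillis) may lag
// behind and then jump forward to catch up, while nanoTime is monotonic
// but only reflects apparent guest time, not de-scheduled time.
public class ClockDrift {
    public static void main(String[] args) throws InterruptedException {
        long wallStart = System.currentTimeMillis();
        long monoStart = System.nanoTime();

        Thread.sleep(5_000); // stand-in for a transaction under measurement

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        long monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000;

        System.out.printf("wall clock: %d ms, monotonic: %d ms, drift: %d ms%n",
                wallElapsedMs, monoElapsedMs, wallElapsedMs - monoElapsedMs);
    }
}
```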

The effects of timekeeping on response and execution time

If you measure real time, then Method B looks more expensive than it actually is. This might lead you down the wrong track when you look for a performance problem. If you measure apparent time you don't have this problem, but your response times no longer reflect the real user experience. There are several ways of handling this. Your monitoring solution can capture these de-schedule times and charge them against your execution times at every measurement point; the more granular your measurement, the more overhead this produces. A more pragmatic approach is to account for this once per transaction and thus capture the "impact" that the de-schedules have on your response time. Yet another approach is to periodically read the CPU steal time (either from vSphere or via mpstat on Xen) and correlate it with your transaction data. This gives you a better grasp on things. Even then it adds a level of uncertainty to your performance diagnostics, but at least you know the real response time and how fast your transactions really are. Bottom line: response time and execution time are no longer the same thing.
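The last approach can be scripted without special tooling. The sketch below (Java, assuming a Linux guest on Xen or KVM, where /proc/stat exposes the same steal counter that mpstat reports) samples the aggregate CPU counters and computes the steal percentage over an interval, which can then be attached to the transactions recorded in that window:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch: sample the aggregate CPU counters from /proc/stat and compute the
// steal percentage over a short window. The "steal" field is the eighth
// counter on the "cpu" line of a Linux guest.
public class StealTimeSampler {

    // Returns {totalJiffies, stealJiffies} from the aggregate "cpu" line.
    private static long[] readCpuCounters() throws IOException {
        String cpuLine = Files.readAllLines(Paths.get("/proc/stat")).get(0);
        String[] f = cpuLine.trim().split("\\s+");
        long total = 0;
        for (int i = 1; i < f.length; i++) {
            total += Long.parseLong(f[i]);
        }
        long steal = f.length > 8 ? Long.parseLong(f[8]) : 0;
        return new long[] { total, steal };
    }

    public static void main(String[] args) throws Exception {
        long[] before = readCpuCounters();
        Thread.sleep(10_000); // sampling interval
        long[] after = readCpuCounters();

        double stealPct = 100.0 * (after[1] - before[1]) / (after[0] - before[0]);
        System.out.printf("CPU steal over the last interval: %.1f%%%n", stealPct);
    }
}
```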

The impact of shared environments
The sharing of resources is what makes virtualization and cloud environments compelling from a cost perspective. Most normal data centers have an average CPU utilization far below 20%. The reason is twofold: on the one hand they isolate the different applications by running them on different hardware; on the other hand they have to provision for peak load. By using virtualization you can put multiple "isolated" applications on the same hardware. Resource utilization is higher, but even then it does not go beyond 30-40 percent most of the time, as you still need to take peak load into account. But the peak loads of the different applications might occur at different times! The first order of business here is to find the optimal balance.

The first thing to realize is that your VM is treated like a process by the virtualization infrastructure. It gets a share of resources; how much can be configured. If it reaches the configured limit it has to wait. The same is true if the physical resources are exhausted. To drive utilization higher, virtualization and cloud environments overcommit: they allow, say, ten 2 GHz VMs on a 16 GHz physical machine. Most of the time this is perfectly fine, as not all VMs will demand 100 percent CPU at the same time. If there is not enough CPU to go around, some will be de-scheduled and will be given a greater share the next time around. Most importantly, this is true not only for CPU but also for memory, disk and network I/O.

What does this mean for performance management? It means that increasing load on one application, or a bug in that application, can negatively impact another without you being aware of it. Without a virtualization-aware monitoring solution that also monitors the other application you will not see this. All you see is that application performance goes down!

When the load increases on one application it affects the other

With proper tools this is relatively easy to catch for CPU-related problems, but a lot harder for I/O-related issues. So you need to monitor both applications, their VMs and the underlying virtualization infrastructure, and correlate the information. That adds a lot of complexity. The virtualization vendors try to solve this by looking purely at VM- and host-level system metrics. What they forget is that high utilization of a resource does not mean the application is slow! And it is the application we care about.

OS metrics are worse than useless
Now for the good stuff. Forget your guest operating system utilization metrics; they are not showing you what is really going on. There are several reasons why that is so. One is the timekeeping problem. Even if you and your monitoring tool use the right timer and measure time correctly, your operating system might not. In fact most systems will not read the timer device all the time, but rely on the CPU frequency and counters to estimate time, as that is faster than reading the timer device. As utilization metrics are always based on a total number of possible requests or instructions per time slice, they get skewed by that. This is true for every metric, not just CPU. The second problem is that the guest does not really know the upper limit for a resource, as the virtualization environment might overcommit. That means you may never be able to get 100%, or you may get it at one time but not another. A good example is the Amazon EC2 cloud. Although I cannot be sure, I suspect that the guest CPU metrics are actually correct: they correctly report the CPU utilization of the underlying hardware, only you will never get 100% of the underlying hardware. So without knowing how much of a share you get, they are useless.

What does this mean? You can rely on absolute numbers, like the number of I/O requests, the number of SQL statements and the amount of data sent over the wire for a specific application or transaction. But you do not know whether over-utilization of the physical hardware presents a bottleneck. There are two ways to solve this problem.

The first involves correlating resource and throughput metrics of your application with the reported utilization and throughput measures on the virtualization layer. In the case of VMware that means correlating detailed application- and transaction-level metrics with metrics provided by vSphere. On EC2 you can do the same with metrics provided by CloudWatch.

EC2 Cloud Monitoring Dashboard showing 3 instances
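The dashboard above shows the host-level view; the same metrics can also be pulled programmatically and joined with transaction data. A minimal sketch using the AWS SDK for Java (v1 class names; the instance id is a placeholder):

```java
import java.util.Date;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
import com.amazonaws.services.cloudwatch.model.Datapoint;
import com.amazonaws.services.cloudwatch.model.Dimension;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsRequest;
import com.amazonaws.services.cloudwatch.model.GetMetricStatisticsResult;

// Sketch: pull the CPUUtilization metric for one EC2 instance from CloudWatch
// so it can be correlated with the transaction-level response times captured
// by the APM tool.
public class CloudWatchCpuFetcher {
    public static void main(String[] args) {
        AmazonCloudWatchClient cloudWatch = new AmazonCloudWatchClient();

        long now = System.currentTimeMillis();
        GetMetricStatisticsRequest request = new GetMetricStatisticsRequest()
                .withNamespace("AWS/EC2")
                .withMetricName("CPUUtilization")
                .withDimensions(new Dimension()
                        .withName("InstanceId")
                        .withValue("i-0123456789abcdef0")) // placeholder instance id
                .withStartTime(new Date(now - 60 * 60 * 1000)) // last hour
                .withEndTime(new Date(now))
                .withPeriod(300)                               // 5-minute buckets
                .withStatistics("Average");

        GetMetricStatisticsResult result = cloudWatch.getMetricStatistics(request);
        for (Datapoint dp : result.getDatapoints()) {
            System.out.printf("%s  avg CPU %.1f%%%n", dp.getTimestamp(), dp.getAverage());
        }
    }
}
```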

This is the approach recommended by some virtualization vendors. It is possible, but because of the complexity it requires a lot of expertise. You do, however, know which VM consumes how much of your resources. With a little calculation magic you can break this down to application and transaction level, at least on average. You need this for resource optimization and to decide which VMs should be moved to different physical hardware. It does not do you a lot of good for acute performance problems or troubleshooting, as you don't know the actual impact of the resource shortage, or whether it has an impact at all. You might move a VM and not actually speed things up. The real crux is that just because something is heavily used does not mean that it is the source of your performance problem! And of course this approach only works if you are in charge of the hardware, meaning it does not work with public clouds!

The second option is one that is, among others, proposed by Bernd Harzog, a well-known expert in the virtualization space. It is also the one that I would recommend.

Response time, response time, latency and more response time
On the Virtualization Practice blog Bernd explains in detail why resource utilization does not help you with either performance management or capacity planning. Instead he points out that what really matters is the response time or throughput of your application. If your physical hardware or virtualization infrastructure runs into utilization problems, the easiest way to spot this is that things slow down. In effect that means that the I/O requests done by your application are slowing down, and you can measure that. What's more important is that you can turn this around! If your application performs fine, then whatever the virtualization or cloud infrastructure reports, there is no performance problem. To be more accurate, you only need to analyze the virtualization layer if your application performance monitoring shows that a high portion of your response time is down to CPU shortage, memory shortage or I/O latency. If that is not the case then nothing is gained by optimizing the virtualization layer from a performance perspective.

Network impact on the transaction is minimal, even though network utilization is high

Diagnosing the virtualization layer
Of course, in the case of virtualization and private clouds you still need to diagnose an infrastructure response time problem once it has been identified. You measure the infrastructure response time inside your application. If you have identified a bottleneck, meaning it slows down or makes up a big portion of your response time, you need to relate that infrastructure response time back to your virtualized infrastructure: which resource slows down? From there you can use the metrics provided by VMware (or whichever virtualization vendor you use) to diagnose the root cause of the bottleneck. The key is that you identify the problem based on actual impact and then use the infrastructure metrics to diagnose its cause.
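Measuring infrastructure response time inside the application is usually just a matter of timing the individual I/O calls. A minimal sketch in Java for a database call (the LatencyRecorder sink is hypothetical; in practice an APM agent records this per transaction):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: measure "infrastructure response time" as seen by the application
// by timing an individual database call with the monotonic clock. The same
// pattern applies to file and network I/O.
public class MeasuredQuery {

    public interface LatencyRecorder {
        void record(String layer, String operation, long micros);
    }

    public static ResultSet runMeasured(Connection con, String sql, LatencyRecorder recorder)
            throws SQLException {
        long start = System.nanoTime();
        PreparedStatement stmt = con.prepareStatement(sql);
        ResultSet rs = stmt.executeQuery();
        long latencyMicros = (System.nanoTime() - start) / 1_000;
        // Correlate this later with vSphere or CloudWatch data for the same window.
        recorder.record("jdbc", sql, latencyMicros);
        return rs;
    }
}
```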

Layers add complexity
What this of course means is that you now have to manage performance on even more levels than before. It also means that you have to somehow manage which VMs run on the same physical host. We have already seen that the nature of the shared environment means that applications can impact each other. So a big part of managing the performance in a virtualized environment is to detect that impact and “tune” your environment in a way that both minimizes that impact and maximizes your resource usage and utilization. These are diametrically opposed goals!

Now what about clouds?
A cloud is by nature more dynamic than a "simple" virtualized environment. A cloud enables you to provision new environments on the fly and dispose of them again. This leads to spikes in your utilization, and thus to performance impact on existing applications. So in the cloud the "minimize impact vs. maximize resource usage" goal becomes even harder to achieve. Cloud vendors usually provide you with management software to manage the placement of your VMs. They will move them around based on complex algorithms to try and achieve the impossible goal of high performance and high utilization. The success is limited, because most of these management solutions ignore the application and only look at the virtualization layer to make these decisions. It's a vicious cycle and the price you pay for better utilization of your data center and faster provisioning of new environments.

Maybe a bigger issue is capacity management. The shared nature of the environment prevents you from making straightforward predictions about capacity usage on the hardware level. You get a long way by relating the requests done by your application on a transactional level to the capacity usage on the virtualization layer, but that is cumbersome and does not lead to accurate results. Then of course a cloud is dynamic and your application is distributed, so without a solution that measures all your transactions and auto-detects changes in the cloud environment you can easily make this a full-time job.

Another problem is that the only way to notice a real capacity problem is to determine whether the infrastructure response time degrades and negatively impacts your application. Remember, utilization does not equal performance, and you want high utilization anyway! But once you notice capacity problems, it is too late to order new hardware.

That means you not only need to provision for peak loads, effectively over-provisioning again, but also need to take all those temporary and newly provisioned environments into account. A match made in planning hell.

Performance Management in a public cloud
First let me clarify the term public cloud here. While a public cloud has many characteristics, the most important ones for this article are that you don’t own the hardware, have limited control over it and can provision new instances on the fly.

If you think about this carefully you will notice immediately that you have fewer problems. You only care about the performance of your application and not at all about the utilization of the hardware; it's not your hardware after all. That means there are no competing goals! Depending on your application, you will add a new instance if response time degrades on a specific tier or if you need more throughput than you currently achieve. You provision on the fly, meaning your capacity management is done on the fly as well. Another problem solved. You still run in a shared environment and this will impact you, but your options are limited as you cannot monitor or fix this directly. What you can do is measure the latency of the infrastructure. If you notice a slowdown you can talk to your vendor. Most of the time you will not care and will simply terminate the old instance and start a new one if infrastructure response time degrades. Chances are the new instance is started on a less utilized server and that's that. I won't say that this is easy. I also do not say that this is better, but I do say that performance management is easier than in private clouds.
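The "replace the slow instance" tactic can even be automated. A rough sketch using the AWS SDK for Java (v1 class names; the latency threshold, AMI id and instance type are placeholders, not recommendations, and a real setup would drive this from measured transaction data):

```java
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.TerminateInstancesRequest;

// Sketch: if the measured infrastructure response time for an instance
// degrades past a threshold, start a replacement and terminate the old one.
public class SlowInstanceReplacer {

    private static final double LATENCY_THRESHOLD_MS = 50.0; // placeholder threshold

    public static void replaceIfSlow(AmazonEC2Client ec2, String instanceId,
                                     double measuredInfraLatencyMs) {
        if (measuredInfraLatencyMs <= LATENCY_THRESHOLD_MS) {
            return; // infrastructure response time is fine, nothing to do
        }
        // Start a replacement first; chances are it lands on a less utilized host.
        ec2.runInstances(new RunInstancesRequest()
                .withImageId("ami-12345678")   // placeholder AMI
                .withInstanceType("m1.large")  // placeholder type
                .withMinCount(1)
                .withMaxCount(1));

        // Then terminate the instance whose I/O latency degraded.
        ec2.terminateInstances(new TerminateInstancesRequest()
                .withInstanceIds(instanceId));
    }
}
```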

Conclusion
Private and public cloud strategies are based on similar underlying technologies. That does not mean, however, that they are similar in terms of actual usage. In the private cloud, the goal is to become more efficient by dynamically and automatically allocating resources in order to drive up utilization while also lowering the management costs of those many instances. The problem with this is that driving up utilization and having high performance are competing goals: the higher the utilization, the more the applications will impact one another. Reaching a balance is highly complex, and is made more complex by the dynamic nature of the private cloud.

In the public cloud, these competing goals are split between the cloud provider, who cares about utilization, and the application owner, who cares about performance. In the public cloud the application owner has limited options: he can measure application performance; he can measure the impact of infrastructure degradation on the performance of his business transactions; but he cannot resolve the actual degradation. All he can do is terminate slow instances and/or add new ones in the hope that they will perform better. In this way, performance in the public cloud is in fact easier to manage.

But whether it is public or private, you must actively manage performance in a cloud production environment. In the private cloud you need to maintain a balance between high utilization and application performance, which requires you to know what is going on under the hood. And without application performance management in the public cloud, application owners are at the mercy of cloud providers, whose goals are not necessarily aligned with their own.


More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the Enterprise Java space. Before coming to Compuware APM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the Center of Excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage big data solutions and how these technologies impact and change the application landscape.
