Application Performance Monitoring in Production

A step-by-step guide – Part 1

Setting up Application Performance Monitoring is a big task, but like everything else it can be broken down into simple steps. You have to know what you want to achieve and then figure out where to start. So let’s start at the beginning and take a top-down approach.

Know What You Want
The first thing to do is to be clear about what we want when monitoring the application. Let’s face it: we do not actually want to keep CPU utilization below 90 percent or network latency under one millisecond. Nor are we really interested in garbage collection activity or database connection pool utilization for their own sake. We monitor all of these things in order to reach our main goal. And the main goal for this article series is to ensure the health and stability of our application and business services. To ensure that, we need to leverage all of the metrics mentioned above.

What does health and stability of the application mean though? A healthy and stable application performs its function without errors and delivers accurate results within a predefined, satisfactory time frame. In technical terms this means low response time and/or high throughput and a low to nonexistent error rate. If we monitor and ensure this, then the health and stability of the application is likewise guaranteed.

Define Your KPIs
First we need to define what satisfactory performance means. In the case of an end-user-facing application, things like first impression and page load time are good KPIs. The good news is that “satisfactory” is relatively simple here, as users will tolerate load times of up to 3-4 seconds but get frustrated beyond that. Other interactions, like a credit card payment or a search, have very different thresholds, and you need to define them. In addition to response time you also need to define how many concurrent users you want, or need, to be able to serve without impacting the overall response time. These two KPIs, response time and concurrent users, will get you very far if you apply them at a granular enough level.

If we are talking about a transaction-oriented application, your main KPI will be throughput. The desired throughput will depend on the transaction type. Most likely you will have a time window in which you have to process a certain known number of transactions, which dictates what satisfactory performance means to you.
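As a purely illustrative example (the numbers are made up): if a nightly batch window of four hours has to absorb one million transactions, the application needs to sustain roughly 70 transactions per second on average, and noticeably more at peak if the load is not evenly spread across the window.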

Resource and hardware usage can be considered secondary KPIs. As long as the primary KPIs are not met, we will not look too closely at the secondary ones. On the other hand, as soon as the primary KPIs are met, optimizations should always be directed toward improving the secondary ones.

If we take a strict top-down approach and measure end-to-end, we will not need more detailed KPIs for response time or throughput. We will, of course, need to measure in more detail than that in order to ensure performance.

Know What, Where and How to Measure
In addition to specifying a KPI for, say, the response time of the search feature, we also need to define where we measure it.

The different places where we can measure response time

This picture shows several different places where we can measure the response time of our application. In order to have objective and comparable measurements we need to define where we measure it, and this needs to be communicated to all involved parties. This way you ensure that everybody talks about the same thing. In general, the closer you get to the end user, the closer you get to the real world, and also the harder it is to measure.

We also need to define how we measure. If we measure the average, we will need to define how that is calculated. Averages themselves are alright if you talk about throughput, but very inaccurate for response time. The average tells you nearly nothing about the actual user experience, because it ignores the volatility. Even if you are only interested in throughput, volatility matters: it is harder to plan capacity for a highly volatile application than for one that is stable. Personally I prefer percentiles over averages, as they give us a good picture of the distribution and thus the volatility.

50th, 75th, 90th and 95th percentile of End User Response time of the Page Load

In the picture we see that the page load time of our sample has very high volatility. While 50 percent of all page requests are loaded within 3 seconds, the slowest 10 percent take between 5 and 20 seconds! That not only bodes ill for our end user experience and performance goals, but also for our capacity management (we’d need to over-provision a lot to compensate). High volatility in itself indicates instability and is not desirable. It can also mean that we are not measuring response time at a granular enough level. It might not be enough to measure the response time of, e.g., payment transactions in general: CreditCard and DebitCard payment transactions might have very different characteristics, and we should measure them separately. Without that type of measurement, response time becomes meaningless, because we will not see performance problems and monitoring a trend will be impossible.
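To make that concrete, here is a minimal sketch (with made-up sample values and a simple nearest-rank percentile, not any particular monitoring product's calculation) of how percentiles expose a skew that the average hides:

import java.util.Arrays;

public class ResponseTimeStats {

    // Nearest-rank percentile of an already sorted sample (p between 0 and 100).
    static double percentile(double[] sorted, double p) {
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Illustrative page load times in seconds: most are fast, a few are very slow.
        double[] samples = {1.8, 2.1, 2.4, 2.9, 3.0, 3.1, 3.3, 4.8, 9.5, 19.0};
        Arrays.sort(samples);

        double avg = Arrays.stream(samples).average().orElse(0);
        System.out.printf("average         = %.1fs%n", avg);                      // ~5.2s
        System.out.printf("50th percentile = %.1fs%n", percentile(samples, 50));  // 3.0s
        System.out.printf("90th percentile = %.1fs%n", percentile(samples, 90));  // 9.5s
        System.out.printf("95th percentile = %.1fs%n", percentile(samples, 95));  // 19.0s
    }
}

With these illustrative numbers, the average of roughly 5 seconds looks almost acceptable, while the 90th and 95th percentiles reveal the long tail that real users actually experience.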

This brings us to the next point: what do we measure? Most monitoring solutions allow monitoring either at the URL level, the servlet level (JMX/app servers) or the network level. In many cases the URL level is good enough, as we can use pattern matching on specific URI parameters.

Create measures by matching the URI of our Application and Transaction type
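As an illustration of URL-level measurement, the following is a rough sketch of a servlet filter that times each request and buckets the measurement by URI prefix. The class name, URI patterns and the simple in-memory counters are hypothetical; a real monitoring solution would do this for you.

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Times every request and buckets the measurement by URI prefix, so that
// /shop/payment and /shop/search get separate response time series.
public class UriTimingFilter implements Filter {

    private final Map<String, LongAdder> totalMillis = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> requestCount = new ConcurrentHashMap<>();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String uri = ((HttpServletRequest) req).getRequestURI();
        String bucket = uri.startsWith("/shop/payment") ? "payment"
                      : uri.startsWith("/shop/search")  ? "search"
                      : "other";
        long start = System.nanoTime();
        try {
            chain.doFilter(req, res);
        } finally {
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            totalMillis.computeIfAbsent(bucket, k -> new LongAdder()).add(elapsedMillis);
            requestCount.computeIfAbsent(bucket, k -> new LongAdder()).increment();
        }
    }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}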

For Ajax, WebService transactions or SOA applications in general this will not be enough. WebService frameworks often provide a single URI entry point per application or service and distinguish between different business transactions in the SOAP message. Transaction-oriented applications have different transaction types with very different performance characteristics, yet the entry point to the application will nearly always be the same (e.g., JMS). The transaction type will only be available within the request and within the executed code. In our Credit/Debit card example we would most likely see this only as part of the SOAP message. So what we need to do is identify the transaction within our application. We can do this by modifying the code and providing the measures ourselves (e.g., via JMX). If we do not want to modify our code, we could also use aspects to inject the measurement, or use one of the many monitoring solutions that support this kind of categorization via business transactions.

We want to measure response time of requests that call a method with a given parameter

In our case we would measure the response time of every transaction and label it as a DebitCard payment transaction when the shown method is executed and its first argument is “DebitCard”. This way we can measure the different types of transactions even if they cannot be distinguished via the URI.
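A minimal, purely illustrative sketch of the do-it-yourself variant (all class and method names here are hypothetical, and an aspect or a monitoring tool would achieve the same without touching the calling code): wrap the payment service and use the value of the first parameter as the business transaction name.

// Hypothetical collaborators, shown only so the sketch is self-contained.
interface PaymentService { PaymentResult pay(String paymentType, Order order); }
interface TransactionRecorder { void record(String businessTransaction, long elapsedMillis); }
class Order { }
class PaymentResult { }

// Wraps the real service and labels every measurement with the value of the
// first parameter, so DebitCard and CreditCard payments are tracked separately.
public class InstrumentedPaymentService implements PaymentService {

    private final PaymentService delegate;       // the real, unmodified service
    private final TransactionRecorder recorder;  // hypothetical sink, e.g. backed by a JMX MBean

    public InstrumentedPaymentService(PaymentService delegate, TransactionRecorder recorder) {
        this.delegate = delegate;
        this.recorder = recorder;
    }

    @Override
    public PaymentResult pay(String paymentType, Order order) {
        long start = System.nanoTime();
        try {
            return delegate.pay(paymentType, order);
        } finally {
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            // "DebitCard" or "CreditCard" becomes the business transaction name.
            recorder.record("payment/" + paymentType, elapsedMillis);
        }
    }
}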

Think About Errors
Apart from performance we also need to take errors into account. Very often we see applications where most transactions respond within 1.5 seconds, while some respond a lot faster, e.g., in 0.2 seconds. More often than not these very fast transactions represent errors. The result is that the more errors you have, the better your average response time gets, which is of course misleading.

Show error rate, warning rate and response time of two business transactions

We need to count errors at the business transaction level as well. If you don’t want your response time skewed by those errors, you should exclude erroneous transactions from your response time measurement. The error rate of your transaction becomes another KPI on which you can put a static threshold. An increased error rate is often the first sign of an impending performance problem, so you should watch it carefully.
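A minimal sketch of that idea (a hypothetical class, not taken from any specific tool): keep error counts and response times per business transaction, and let only successful transactions contribute to the response time.

import java.util.concurrent.atomic.LongAdder;

// Per-business-transaction statistics: errors are counted separately and
// excluded from the response time measurement so they cannot skew it.
public class TransactionStats {

    private final LongAdder okCount = new LongAdder();
    private final LongAdder errorCount = new LongAdder();
    private final LongAdder okTotalMillis = new LongAdder();

    public void record(long elapsedMillis, boolean failed) {
        if (failed) {
            errorCount.increment();           // counts toward the error rate KPI only
        } else {
            okCount.increment();
            okTotalMillis.add(elapsedMillis); // only successful transactions affect response time
        }
    }

    public double errorRate() {
        long total = okCount.sum() + errorCount.sum();
        return total == 0 ? 0.0 : (double) errorCount.sum() / total;
    }

    public double averageResponseMillis() {
        long ok = okCount.sum();
        return ok == 0 ? 0.0 : (double) okTotalMillis.sum() / ok;
    }
}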

I will cover how to monitor errors in more detail in one of my next posts.

What Are Problems?
It sounds like a silly question but I decided to ask it anyway, because in order to detect problems, we first need to understand them.

ITIL defines a problem as a recurring incident or an incident with high impact. In our case this means that a single transaction that exceeds our response time goal is not considered a problem. If you are monitoring a big system you will not have the time or the means to analyze every single violation anyway. But it is a problem if the response time goal is exceeded by 20% of your end user requests. This is one key reason why I prefer percentiles over averages: I know I have a problem if the 80th percentile exceeds the response time goal.

The same can be said for errors and exceptions. A single exception or error might be interesting to the developer, so we should save the information so that it can be fixed in a later release. But as far as operations is concerned, it will be ignored if it only happens once or twice. On the other hand, if the same error happens again and again, we need to treat it as a problem, as it clearly violates our goal of ensuring a healthy application.

Alerting in a production environment must be set up around this idea. If we were to produce an alert for every single incident we would have a so-called alarm storm and would either go mad or ignore the alerts entirely. On the other hand, if we wait until the average is higher than the response time goal, customers will be calling our support line before we are even aware of the problem.
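One possible way to express that rule in code (a simplified sketch with a nearest-rank percentile and an arbitrary recent window, not a recommendation for any specific tool): alert when the 80th percentile of the window exceeds the response time goal, rather than on every slow request.

import java.util.Arrays;

// Evaluates the 80th percentile of a recent window against the response time goal,
// instead of raising an alert for every individual slow request.
public class PercentileAlertCheck {

    static boolean shouldAlert(double[] windowSeconds, double goalSeconds) {
        double[] sorted = windowSeconds.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(0.8 * sorted.length);   // nearest-rank 80th percentile
        double p80 = sorted[Math.max(0, rank - 1)];
        return p80 > goalSeconds;                          // more than 20% of requests miss the goal
    }

    public static void main(String[] args) {
        double[] lastFiveMinutes = {1.2, 1.4, 2.0, 2.2, 2.5, 2.8, 3.6, 4.1, 5.0, 9.7};
        System.out.println(shouldAlert(lastFiveMinutes, 3.0)); // true: p80 is 4.1s, above the 3s goal
    }
}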

Know Your System and Application
The goal of monitoring is to ensure proper performance. Knowing there is a problem is not enough; we need to isolate the root cause quickly. We can only do that if we know our application and which other resources or services it uses. It is best to have a system diagram or flow chart that describes our application. You will most likely want at least two or three different levels of detail for this.

  1. System Topology
    This should include all your applications, services, resources and the communication patterns at a high level. It gives us an idea of what exists and which other applications might influence ours.
  2. Application Topology
    This would concentrate on the topology of the application itself. It is a subset of the system topology and would only include communication flows as seen from that application's point of view. It would end where the application calls third-party applications.
  3. Transaction Response Flow
    Here we would see the individual business transaction type. This is the level we use for response time measurement.

Maintaining this can be tricky, but many monitoring tools provide it automatically these days. Once we know which other applications and services our transaction uses, we can break down the response time into its contributors. We do this by measuring the request on the calling side, inside our application and on the receiving end.

Show response time distribution throughout the system of a single transaction type

This way we get a definitive picture of where response time is spent. In addition, we will also see whether we lose time on the network in the form of latency.

Next Steps
At this point we can monitor the health, stability and performance of our application, and we can isolate the responsible tier in case we have a problem. If we do this for all of our applications, we will also get a good picture of how the applications impact each other. The next step is to monitor each application tier in more detail, including the resources it uses and system metrics. In the coming weeks I will explain how to monitor each of these tiers with the specific goal of enabling production-level root cause analysis. At every level we will focus on monitoring the tier from an application and transaction point of view, as this is the only way we can accurately measure the performance impact on the end user.

Finally, I will also cover System Monitoring. Our goal, however, is not to monitor and describe the system itself, but to measure how it affects the application. In terms of Application Performance Monitoring, system monitoring is an integral part and not a separate discipline.

More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the Enterprise Java space. Before coming to CompuwareAPM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the Center of Excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage Big Data solutions and how these technologies impact and change the application landscape.
