By Andreas Grabner
April 11, 2013 03:25 PM EDT
We have been blogging about the same problems and problem patterns we see while working with our customers over the past few years. The classic application performance landmines keep reappearing: inefficient database access, misconfigured frameworks, excessive memory usage, bloated web pages, and failure to follow common web performance best practices, among others.
More than two years ago we posted summary blogs of the Top Server-Side Performance Problems and the Top 10 Client-Side Performance Problems to give operations, architects, testers and developers easy-to-consume best practices. We feel that it is time to provide an update to these best practices as new problem patterns have since come into play. We also want to cover more than just problems that happen within your application by broadening the scope across the entire Application Delivery Chain. This includes all components between your end user and your back-end systems, databases and third-party services. The following illustrates which components are involved and what the typical errors are along the delivery chain.
Delivering an application to the end user has become more complex as it involves more components than ever before. This also leaves a lot of room for mistakes that impact end-user experience.
Let's now dig a little deeper into some of the highlighted problem areas. The following lists our Top Performance Landmines as reported by our customers, such as BonTon and Swarovski, as well as companies in the financial services, manufacturing and energy industries, among others. To make it easier to decide which landmines to read, we have added the target audience for each problem area.
Bloated Web Front Ends
Audience: Operations, Architects, Testers, Developers
Often companies focus on optimizing the performance of the applications they deliver by tuning the code, reducing SQL overhead, implementing application caching, and other items that are, for the most part, invisible to the customer using the application. However, all of this effort and activity can go completely unnoticed if the content being delivered to customers is bloated and inefficient.
Sources we track show that the average page delivered to customers has been steadily increasing in size and complexity over the last three to four years, and customers' performance expectations have been rising along with it. This continuous conflict between business goals and customer expectations needs to be understood in order to be managed effectively. What companies need to realize is that what they consider fast and efficient doesn't really matter: if the customers using the site believe it is slow and hard to use, they won't use it, and they will tell their friends about their poor experience.
Comparing your performance to top competitors in your industry as well as Internet leaders helps you set performance goals that can be achieved over time. Additionally, understanding why your customers leave your site can help you resolve customer experience issues: Is it a particular subset of customers who leave? Which page caused them to leave? Is there an application function on that page that is bloated and slow?
Comparing your site against peers in the same industry will help you understand where you rank.
Using caching, compression, CDNs, and a critical eye that asks questions about every new image, function, and feature you add, you can trim the weight of your site and deliver a better customer experience.
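Much of this trimming starts with plain configuration. The following is a minimal sketch, assuming a Java servlet stack: a filter that adds far-future cache headers for versioned static assets (the filter name and URL mapping are illustrative, and compression itself is typically enabled in the web server or CDN rather than in code):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Minimal sketch: serve versioned static assets (e.g., /static/app-v42.js)
// with far-future cache headers so returning visitors skip the download.
// Map this filter to /static/* in web.xml.
public class StaticCacheFilter implements Filter {
    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Safe only because the file name changes whenever the content changes
        ((HttpServletResponse) res).setHeader("Cache-Control", "public, max-age=31536000");
        chain.doFilter(req, res);
    }

    public void destroy() { }
}
```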
We discuss the performance degradation that can be traced to bloated front ends and how this affects site performance in Performance Improvement is not Performance Optimization and Super Bowl Sunday 2013 - Winners, Losers, and Casualties.
Slow Third-Party Content and CDNs
Audience: Operations, Architects, Testers
Focusing on your own content can leave you exposed to performance issues that originate outside your organization. With companies adding more content from third-party sources to their site, managing application performance becomes increasingly complex, even when these services are designed to improve performance.
During peak traffic events over the last 12 months - the holiday shopping season and the Super Bowl - two primary trends emerged: third-party services were overwhelmed when more than one of their customers reached peak traffic simultaneously, and CDNs buckled under flash loads far larger than even the busiest days their customers typically experience.
Monitoring and managing third parties means treating them as unique applications, with their own baselines, Service Level Agreements (SLAs) and Service Level Objectives (SLOs). It sometimes means asking tough questions of these services, such as:
- Have you load tested your systems to see what happens when three of your largest customers experience peak traffic simultaneously?
- What is the escalation path we should follow with your team when we discover a performance issue that is affecting our customers?
- How well did your system perform during the eight busiest hours over the last 12 months, not just the average performance?
Monitor the impact of slow third-party and CDN content on your page load time.
Finally, your team needs to be prepared for the scenario where a third-party service or CDN suffers a severe outage or begins to seriously degrade your site performance. Always have a Plan B, C, etc. that gives you the ability to mitigate the issue. These plans could include removing third-party tags, images, and content from your site entirely during peak traffic; load balancing between multiple CDNs; moving content to a secondary cloud provider; or even switching to a simple bare-bones site that removes all rich media until traffic returns to a normal level.
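On the server side, the same "Plan B" thinking applies to any third-party service you call inline. A minimal sketch, with a hypothetical recommendations endpoint and fallback: bound the call with tight timeouts and degrade gracefully instead of letting the dependency stall your own transaction:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: never let a third-party call run unbounded inside your own transaction.
public class ThirdPartyClient {
    public static String fetchRecommendations(String userId) {
        try {
            // Hypothetical third-party endpoint
            URL url = new URL("https://recs.example.com/api?user=" + userId);
            HttpURLConnection con = (HttpURLConnection) url.openConnection();
            con.setConnectTimeout(200); // fail fast instead of queueing request threads
            con.setReadTimeout(500);    // slow responses count as failures too
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
            }
            return body.toString();
        } catch (Exception e) {
            return ""; // Plan B: render the page without recommendations
        }
    }
}
```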
Unless you know how third parties affect your performance, there is no way for you to manage them effectively. Once you manage your third parties, you can take control of all aspects of your site performance.
More on third-party services and their effects on application performance is covered in: You only control 1/3 of your Page Load Performance!, Third Party Content Management applied: Four steps to gain control of your Page Load Performance!, The Ripple Effect of Facebook's Outage, Third-Party Issues and the Performance Ripple Effect, and Website's Vulnerability to Third-Party Services Exposed.
We also discuss third parties, most notably CDN performance in: Super Bowl Sunday 2013 - Winners, Losers, and Casualties, and Why Bon Ton needs real-time visibility into 85% of its content delivered by Akamai.
Wrong Usage of Frameworks
Audience: Architects, Developers
The following screenshot shows Hibernate executing the same SQL query multiple times instead of caching the result from the first query. This happens when Hibernate has not been configured to perform optimally for your specific needs:
Loading a person two times in a row, but no session cache involved
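To make the pattern concrete, here is a minimal sketch (the mapped Person entity, the id value, and the HibernateUtil helper are illustrative) of how the session cache avoids the duplicate SELECT within one session, while two short-lived sessions re-query the database unless a second-level cache is configured:

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class SessionCacheDemo {
    public static void main(String[] args) {
        // HibernateUtil and the mapped Person entity are assumed to exist
        SessionFactory factory = HibernateUtil.getSessionFactory();

        // Same session: the second get() is served from the session cache, no SQL
        Session session = factory.openSession();
        Person first = (Person) session.get(Person.class, 1L); // SELECT ... FROM person
        Person again = (Person) session.get(Person.class, 1L); // no database round trip
        session.close();

        // Two sessions: each get() hits the database again unless a
        // second-level cache is configured for the Person entity
        Session s1 = factory.openSession();
        Person a = (Person) s1.get(Person.class, 1L); // SELECT #1
        s1.close();
        Session s2 = factory.openSession();
        Person b = (Person) s2.get(Person.class, 1L); // SELECT #2: the duplicate query
        s2.close();                                   // seen in the screenshot
    }
}
```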
Finally, frameworks are constantly updated to improve functionality as well as performance and stability. Watch for these updates and upgrade your implemented framework version to benefit from the improvements. We have seen cases where, e.g., jQuery was never updated, leaving websites with bad performance on older browsers and sometimes even on newer browsers, when older versions of jQuery didn't leverage the capabilities of the latest IE, Firefox, Chrome or Safari.
Long-running CSS Class Name Lookups contribute about 80% to the Client-Side Load Time.
If you want to read more about common problems when using these types of frameworks, check out our blog series on Hibernate (The Session Cache, The Query Cache, Second Level Cache), the Top SharePoint Performance Mistakes or the 101 on jQuery Selector Performance.
Network Infrastructure Problems
Audience: Operations, Architects, Testers
Network infrastructure is an important component of every successful business operation. Performance problems experienced by end users can have various origins, so operations teams need Application Performance Monitoring solutions that enable them to isolate fault domains quickly and effortlessly.
Sometimes the answer is not obvious and performance problems can end up in a "war room" between infrastructure and application providers. The team needs to analyze whether the problem is present at all locations where the application is executed. In certain cases, the performance problems might be caused by external infrastructure used by some users.
Performance problems can be pretty costly. According to a report by the Aberdeen Group, they can reduce revenue by 9% and productivity by 64%. When services are based on SAP infrastructure, costs can rise to as much as $15,000 per minute of downtime. Even though SAP provides tools to monitor its components, a proper APM solution should deliver a holistic view over the entire infrastructure. Only then can the Operations team tell whether the problem lies with the SAP components, which were quite an investment to deploy, or with infrastructure unrelated to SAP or any application.
Overview of the SAP tier with the top under-performing modules and most affected users
The most obvious hint as to whether this is a network or an application problem comes from checking for Network and Server time outliers compared to the values of the baseline traffic. But eyeballing the reports is not enough to avoid problems. The first step toward proactive application performance management is learning to respond promptly to alerts triggered by the APM tool when key measures move outside of their usual range.
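The underlying check is simple enough to sketch. Assuming per-request network or server time samples in milliseconds (the window size and the three-sigma threshold are illustrative choices, not what any particular APM tool does), a rolling baseline can flag outliers like this:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of baseline alerting: flag samples that exceed the
// rolling mean plus three standard deviations of recent traffic.
public class BaselineAlert {
    private final Deque<Double> window = new ArrayDeque<Double>();
    private final int windowSize;

    public BaselineAlert(int windowSize) {
        this.windowSize = windowSize;
    }

    /** Records a response-time sample; returns true if it is an outlier. */
    public boolean record(double sampleMillis) {
        boolean outlier = false;
        if (window.size() >= windowSize) {
            double sum = 0, sumSquares = 0;
            for (double v : window) {
                sum += v;
                sumSquares += v * v;
            }
            double mean = sum / window.size();
            double variance = Math.max(0, sumSquares / window.size() - mean * mean);
            outlier = sampleMillis > mean + 3 * Math.sqrt(variance);
            window.removeFirst(); // slide the window forward
        }
        window.addLast(sampleMillis);
        return outlier;
    }
}
```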
Not Optimized for the Cloud
Audience: Operations, Architects
"The Cloud" comes with a great promise: endless resources for endless scalability and performance when I need it. This eliminates the need to buy a lot of hardware that sits idle most of the time but is only used during peak traffic periods. It also allows me to scale and perform far beyond what is expected without needing to wait for additional hardware to ship.
But there are some gotchas: throwing hardware at an application that is not designed to scale in a cloud environment won't leverage the possibilities the cloud provides. In fact, it often ends up being a very costly endeavor. One must also understand that The Cloud - unless we are talking about a private cloud - is an environment you don't own. Direct access to the underlying hardware is not as easy as when the hardware is located in the next room, which makes troubleshooting and monitoring much harder. The cloud is also not just an endless on-demand pool of CPU, memory and disk. It provides many other services, such as storage and messaging, which you must understand and monitor for performance, as these services are key components of your application.
Live monitoring of cloud instance usage and cost is recommended in order not to fall into a cost trap
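The idea behind such live monitoring can be sketched in a few lines. Everything here is hypothetical - the CloudApi interface, the hourly rate, and the budget stand in for whatever your provider's SDK and pricing actually look like:

```java
// Hypothetical sketch: poll the number of running instances and accumulate
// estimated spend so a runaway auto-scaling loop is caught early.
public class CostWatchdog {
    /** Stand-in for a real cloud provider SDK call. */
    public interface CloudApi {
        int countRunningInstances();
    }

    private static final double DOLLARS_PER_INSTANCE_HOUR = 0.12; // assumed rate
    private static final double DAILY_BUDGET_DOLLARS = 500.0;     // assumed budget

    public static void watch(CloudApi api) throws InterruptedException {
        double spentToday = 0;
        while (true) {
            int running = api.countRunningInstances();
            spentToday += running * DOLLARS_PER_INSTANCE_HOUR / 60; // one sample per minute
            if (spentToday > DAILY_BUDGET_DOLLARS) {
                System.err.println("Estimated cloud spend exceeded budget: $" + spentToday);
            }
            Thread.sleep(60 * 1000L);
        }
    }
}
```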
Related to these problem areas, you may want to read the following blog posts: Managing Hybrid Cloud Environments, Analyzing Performance of Windows Azure Storage, Why Performance Monitoring is easier in Public than onPremise Clouds and Monitoring your Clouds.
Too Many Database Calls
Audience: Architects, Testers, Developers
Inefficient database access is the problem we see most often within applications. It is nothing new, but as we still see it in almost every application we work with, it is critical enough to mention again. The first lesson learned is that the blame often lies not with the database but with the application's access patterns to the database. All too often we see a single web request execute thousands of database statements. There are multiple reasons for this: fetching more data than is actually needed, or fetching data inefficiently and then aggregating and computing it in the application rather than in a stored procedure. What is really interesting is that we see this problem pattern not only in distributed applications running on modern application servers; we also see it in "legacy" applications such as VB6 or even on the mainframe. The following screenshot highlights the transaction flow of an enterprise application that calls the mainframe. The mainframe transaction executes 225 SQL statements per transaction. A closer look typically reveals that the same statements are called hundreds of times for the reasons mentioned above:
The Transaction Flow highlights how services interact with each other, including the number of interactions with DB2, which indicates a potential architectural and performance problem.
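The classic shape of this problem is the "N+1 query" pattern. A minimal JDBC sketch (table and column names are illustrative) contrasts the per-row query loop with a single join that lets the database do the work:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class NPlusOneDemo {
    // Anti-pattern: one query for the list, then one query per row.
    static void nPlusOne(Connection con) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet orders = st.executeQuery("SELECT id, customer_id FROM orders")) {
            while (orders.next()) {
                // Executed once per order: hundreds of near-identical statements
                try (PreparedStatement ps = con.prepareStatement(
                        "SELECT name FROM customers WHERE id = ?")) {
                    ps.setLong(1, orders.getLong("customer_id"));
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next(); // consume the single row
                    }
                }
            }
        }
    }

    // Better: one round trip, the database performs the join.
    static void singleQuery(Connection con) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id")) {
            while (rs.next()) {
                // process each joined row
            }
        }
    }
}
```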
Besides these access pattern problems, we also see individual statements that take a long time to execute. In this case it is important not only to optimize statements on the database side by tweaking indices and the like, but also to analyze whether these queries can be optimized from within the application. We often see that too much data is retrieved from the database, which first gets parsed by the application (using extra memory) and is then thrown away (causing more GC activity). Another landmine is misconfigured connection pools, or application code that holds on to connections too long and ends up blocking other threads from accessing the database.
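The connection-holding variant is just as common. In this sketch (renderSlowPdf and the reports table are hypothetical), the fix is simply to borrow the pooled connection as late as possible and let try-with-resources return it immediately:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ReportJob {
    // Bad: borrowing the connection before the slow work blocks a pool
    // slot for the whole render, starving other threads:
    //
    //   Connection con = pool.getConnection();
    //   byte[] pdf = renderSlowPdf();   // pool slot held the entire time
    //   save(con, pdf);
    //   con.close();

    public void run(DataSource pool) throws Exception {
        byte[] pdf = renderSlowPdf();                 // slow work first, no connection held
        try (Connection con = pool.getConnection()) { // borrow late
            save(con, pdf);                           // short database write
        }                                             // returned to the pool immediately
    }

    private byte[] renderSlowPdf() {                  // hypothetical slow computation
        return new byte[0];
    }

    private void save(Connection con, byte[] pdf) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("INSERT INTO reports(data) VALUES (?)")) {
            ps.setBytes(1, pdf);
            ps.executeUpdate();
        }
    }
}
```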
The following screenshot shows the database queries executed by a single transaction, most of them taking very long to execute. The fix to this problem was to optimize these statements in both the application and in the database:
The architects in this case started by optimizing SQL statements that took a long time to execute and those that got executed several times within the same transaction.
For further reading check out our blogs with more detailed background on these problem patterns such as Don't let your load balancers ruin your holiday business or Saving MIPS and Money. For connection pool problems we also have one interesting blog named The reason I don't monitor connection pool usage.
Big Data Not Optimized
Audience: Operations, Architects, Testers, Developers
The amount of data that we and our applications have to process is constantly growing. Big Data solutions (NoSQL, MapReduce, ...) provide new approaches to storing and processing large amounts of data. But as with every technology, it needs to be used in a way that is optimized for your specific needs. It is a misconception that you can simply process more data by adding additional resources to, e.g., a MapReduce cluster in order to speed up data processing. This only works if you have implemented your jobs in a way that allows them to scale. The same is true for accessing data from a NoSQL database: the problems we see with relational databases also apply to Big Data solutions. If you make inefficient queries, or more queries than necessary, you are going to impact performance.
The following screenshot highlights a transaction that spends most of its time in MongoDB. A closer look into this revealed that the framework used to access MongoDB made a call to a size method of the cursor that then executed an additional query to MongoDB, which was totally unnecessary. In this example, eliminating that call reduced roundtrips to MongoDB and improved overall transaction performance by 15x:
Transactions that call JourneyCollection.getCount spend nearly half their time in MongoDB.
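A sketch of this anti-pattern with the legacy MongoDB Java driver (database, collection and query are illustrative): DBCursor.size() issues a separate count on the server, which is wasted work if you are about to iterate the results anyway:

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.MongoClient;

public class CursorCountDemo {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient("localhost"); // assumes a local mongod
        DBCollection journeys =
            client.getDB("demo").getCollection("journeys"); // illustrative names

        DBCursor cursor = journeys.find(new BasicDBObject("status", "active"));

        // Anti-pattern: size() triggers an extra server-side count
        // before a single document has been fetched
        int extraRoundTrip = cursor.size();

        // Cheaper: count while consuming the results you need anyway
        int count = 0;
        while (cursor.hasNext()) {
            cursor.next();
            count++;
        }
        System.out.println(count);
        client.close();
    }
}
```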
If you are using Big Data technologies such as Cassandra, MongoDB, Hadoop or the like, I suggest following up with these blog posts, which explain some of the problem patterns and highlight best practices: MongoDB Anti-Pattern, NoSQL vs Traditional Databases, Inside Cassandra Write Performance and What we can Learn from Cassandra Pagination. Also check out 15x Performance Improvements for Pig+HBase.
Undetected Memory Leaks
Audience: Architects, Testers, Developers
Memory and Garbage Collection problems are still very prominent in any enterprise application. One reason is that the very nature of Garbage Collection is often misunderstood. Besides traditional memory-related problems such as high memory usage and wrong cache usage strategies, we also see memory issues related to class loading, large classes and native memory. The following screenshot shows the problem of single objects consuming a lot of memory. This is not necessarily a bad idea when the data is needed, but too often it happens because information is kept in memory for no apparent reason, consuming memory that is then unavailable to other parts of the application.
Single Object that is responsible for a big portion of the memory being leaked
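A minimal sketch of how such an object typically comes into being (class name and payload size are illustrative): a static, unbounded "cache" that only ever grows:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a classic leak: a static map used as a cache with no eviction.
public class SessionRegistry {
    private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

    public static byte[] load(String sessionId) {
        byte[] state = CACHE.get(sessionId);
        if (state == null) {
            state = new byte[1024 * 1024]; // stands in for per-session state
            CACHE.put(sessionId, state);   // added on every new session, never removed
        }
        return state;                      // the map slowly consumes the whole heap
    }
}
```

A bounded cache - for example a LinkedHashMap with removeEldestEntry, or a dedicated cache library with eviction - avoids this unbounded growth.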
Traditional memory leaks often lead to out-of-memory exceptions and typically to crashes of the virtual machine. This has a negative impact on end users, as the current context of user sessions and active transactions may be lost.
High memory usage, on the other hand, can result in heavy garbage collection, which has a direct impact on end-user response time. Transactions that are suspended by long-running garbage collection can be sped up by tweaking garbage collection settings as well as by being less "wasteful" with memory.
Even incorrect implementations of equals/hashCode can lead to memory problems. To address these problems we wrote a full chapter on Memory Management in our Java Enterprise Performance book that explains concepts like How Garbage Collection works, Difference between JVMs, GC Tuning, High Memory Usage and the Root Cause, Class Load Related Problems and more. We have also blogged about specific memory scenarios - check out the following blogs: Memory Monitoring in WebSphere Environments, GC Bottlenecks in Heterogeneous Environments, Leak Detection in Production Environments, Top Memory Problems - Part I and Part II.
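The equals/hashCode case is easy to reproduce. In this sketch (CacheKey is illustrative), a set that is supposed to deduplicate keys instead grows with every logically identical instance:

```java
import java.util.HashSet;
import java.util.Set;

// Without equals()/hashCode(), two logically identical keys are distinct
// objects to a HashSet, so the "deduplicating" collection only grows.
public class EqualsHashcodeLeak {
    static class CacheKey {
        private final String userId;
        CacheKey(String userId) { this.userId = userId; }
        // equals()/hashCode() intentionally missing
    }

    private static final Set<CacheKey> SEEN = new HashSet<CacheKey>();

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            SEEN.add(new CacheKey("42")); // meant to be the "same" key each time
        }
        System.out.println(SEEN.size()); // prints 3, not 1
    }
}
```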
More to Come...
These landmines are some highlights with links to more detailed blog posts. As we continue to blog about these problem patterns, we plan to compile a second list of problems later this year. Keep watching our blog for more information and check out our online book on Java Enterprise Performance.