
Third-Party Content Management Applied

Four steps to gain control of your Page Load Performance

Today's websites are often cluttered with third-party content that slows down page load and rendering times, hampering user experience. In my first blog post, I discussed how third-party content impacts your website's performance and identified common problems with its integration. Today I want to share the experience I have had as a developer and consultant with the management of third-party content. Below, I will show you best practices for integrating third-party content and for convincing your business that it will benefit from establishing third-party management.

First the bad news: as a developer, you have to get the commitment for establishing third-party management, and for changing the integration of third-party content, from the highest business management level possible - ideally the CEO. Otherwise you will run into problems trying to implement improvements. The good news is that, in my experience, this is an achievable goal - you just have to raise the problems the right way, with hard facts. Let's start our journey toward implementing third-party content management from two possible starting points that I have seen in the past. The first is triggered when someone from the business has a bad user experience and wants to find out who is responsible for the slow pages. The second is that you as the developer know that your page is slow. No matter where you start, the first step you should take is to gather the hard facts.

Step 1: Detailed third-party content impact analysis
For a developer this isn't really difficult. The only thing we have to do is use the Web Performance Optimization Tool of our choice and take a look at the page load timing. What we get is a picture like the screenshot below. We as developers immediately recognize that we have a problem - but for the business this is a diagram that requires an explanation.

Since we want to convince them, we should make it easy for them to understand. Something that in my experience works well is to take the time and implement a URL parameter that turns off all the third-party content for a webpage. Then we can capture a second timeline from the same page without the third-party requests (see Figure 2). Everybody can now easily see that there are huge differences:
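Such a switch can be sketched in a few lines of plain JavaScript; the parameter name notpc and the script URL below are invented for illustration:

```javascript
// Minimal sketch of a query-string "kill switch" for third-party content.
// The parameter name "notpc" is an arbitrary choice for this example.
function thirdPartyEnabled(search) {
  // e.g. "?notpc=1" disables all third-party tags on the page
  return !/[?&]notpc=1(&|$)/.test(search);
}

// Gate every third-party snippet behind the switch instead of
// embedding it unconditionally in the page template.
function loadThirdParty(doc, src) {
  var script = doc.createElement('script');
  script.async = true;
  script.src = src;
  doc.body.appendChild(script);
}

if (typeof window !== 'undefined' && thirdPartyEnabled(window.location.search)) {
  loadThirdParty(document, 'https://example.com/analytics.js'); // placeholder URL
}
```

With the switch in place, capturing the "without third-party content" timeline is just a matter of appending the parameter to the page URL in your measurement tool.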

We can present these timelines to the business as well, but we still have to explain what all the boxes, timings, etc., mean. We should invest some more time and create a table like Table 1, where we compare the main Key Performance Indicators (KPIs).


KPI                     Page with TPC    Page without TPC
First Impression Time   ...              ...
Onload Time             ...              ...
Total Load Time         ...              ...
JavaScript Time         ...              ...
Number of Domains       ...              ...
Number of Resources     ...              ...
Total Pagesize          ...              ...
As a single page is not representative, we prepare similar tables for the five most important pages. Which pages those are depends on your website. Landing pages, product pages and pages on the "money path" are potentially interesting. Our web analytics tool can help us find them.

Step 2: Inform business about the impact
During step 1 we discovered that the impact is significant, we collected facts, and we still think we have to improve the performance of our application. Now it's time to present the results of the first step to the business. From my experience the best way to do this is a face-to-face meeting with the high-level business executives. CEOs, CTOs and other business unit executives are the appropriate attendees.

The presentation we give during this meeting should cover the following three major topics:

  • Case Study facts from other companies
  • The hard facts we have collected
  • Recommendations for improvements

Google, Bing, Amazon, etc., have published case studies that show the impact of slow pages on revenue and on users' interaction with the website. Amazon, for example, found that a page that is 100 ms slower reduces revenue by 1%. I have attached an example presentation to this blog, which should provide some guidance for our presentation and contains some more examples.

After this general information we can show the hard facts about our system, and as the business is now aware of the relationship between performance and revenue they normally listen carefully. Now we're no longer talking about time but money.

At the end of our presentation we make some recommendations on how we can improve the integration of third-party content. Don't be shy - no third-party content plugin is untouchable at this point. Some of the recommendations can only be decided by the business, not by the development team. Our goals for this meeting are to get the commitment to proceed, to get support from the executives when discussing the implementation alternatives, and to schedule a follow-up meeting with the same attendees to show improvements. It would be nice to get the executives' consent to the recommended improvements as well, but in my experience they seldom commit at this stage.

Step 3: Check third-party content implementation alternatives
Now that we have the commitments, we can start thinking about integration alternatives. If we stick to the standard implementation the provider recommends, we won't be able to make any improvements. We have to be creative and always try to create win-win situations. In this article I want to talk about four best practices I have encountered in the past.

Best Practice 1: Remove it
Every developer will now say "Okay, that's easy," and every businessperson will say "That's not possible because we need it!" But do you really need it? Let's take a closer look at social media plugins, tracking pixels and ads.

A lot of websites have integrated social media plugins like Twitter or Facebook, as they are very popular these days. Have you ever checked how often your users really use one of these plugins? A customer of ours had integrated five of them. After six months they checked how often each was used. It turned out that only one was used by anyone other than the QA department, which verifies after each release that all of them are working. With a little investigation they found that four of the five plugins could be removed because nobody used them.

What about tracking pixels? I have seen a lot of pages out there that have integrated not just one tracking pixel but five, seven or even more. Again, the question is: Do we really need all of them? It doesn't matter who we ask; we'll always get a good explanation as to why a particular pixel is needed, but stick to the goal of reducing it down to one pixel. Find out which one can deliver most or even all of the data that each department needs and remove all the others. Problems we might run into are user privileges and business objectives that are defined for teams and individuals on specific statistics. It takes some time to handle this but, in the end, things get easier because we have only one source for our numbers; we stop discussing which statistic delivers the correct values and which one to take the numbers from, as there is only one left. At one customer we removed five tracking pixels in one stroke. As this led to incredible performance improvements, their marketing department made an announcement to let customers know they care about their experience. This is a textbook example of creating a win-win situation, as mentioned earlier.

Other third-party content that is a candidate for removal is banner ads. Businessmen will now say this guy is crazy to make such a recommendation, but if your main business is not earning money by displaying ads, it might be worth taking a look. Take Amazon's number - 100 ms of additional page load time reduces revenue by one percent - and think of the example page where ads consume about 1000 ms of page load time, ten times as much. This would mean that we lose 10 * 1% = 10% of our revenue just because of ads. The question now is: "Are you really earning 10% or more of your total revenue with ads?" If not, you should consider removing ads from your page.
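The rule-of-thumb arithmetic above can be captured in a tiny helper. The 1%-per-100-ms factor is Amazon's published figure; treat the result as a rough estimate, not a prediction for your site:

```javascript
// Back-of-the-envelope estimate based on Amazon's published finding
// that every additional 100 ms of page load time costs ~1% of revenue.
// This is a rough rule of thumb, not a model of your own business.
function estimatedRevenueLossPercent(extraLoadTimeMs) {
  return (extraLoadTimeMs / 100) * 1; // 1% per 100 ms
}

// Ads adding ~1000 ms of page load time:
var loss = estimatedRevenueLossPercent(1000); // ~10 percent
```

Comparing this number against the share of revenue the ads actually generate gives the business a concrete basis for the removal decision.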

Best Practice 2: Move resource loading to after the onload event
Now that we have removed all unnecessary third-party content, some remains. For the user experience, apart from the total load time, the first impression time and the onload time are the most important timings. To improve them we can implement lazy loading, where parts of the page are loaded after the onload event via JavaScript; several libraries are available to help with this. There are two things you should be aware of: first, you are only moving the starting point of the download, so you are not reducing the download size of your page or the number of requests. Second, lazy loading only works when JavaScript is available in the user's browser, so you have to make sure that your page is usable without JavaScript. Candidates for deferred loading are plugins that only work if JavaScript is available or that are not vital to the usage of the page. Ads, social media plugins, maps, etc., are in most cases such candidates.
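A minimal sketch of such deferred loading, assuming a plain-JavaScript implementation rather than any specific library (the script URL is a placeholder; the DOM objects are passed in as parameters so the logic can be tested in isolation):

```javascript
// Inject a script element into the document; returns the created element.
function injectScript(doc, src) {
  var script = doc.createElement('script');
  script.async = true;
  script.src = src;
  doc.body.appendChild(script);
  return script;
}

// Defer a third-party script until after the onload event has fired.
// Note: this only shifts the start of the download - the page still
// transfers the same bytes, and it requires JavaScript to be enabled.
function loadAfterOnload(win, doc, src) {
  win.addEventListener('load', function () {
    injectScript(doc, src);
  });
}

// Browser usage (placeholder URL):
// loadAfterOnload(window, document, 'https://ads.example.com/ad.js');
```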

Best Practice 3: Load on user click
This is an interesting option if you want to integrate a social media plugin. The standard implementation, for example, of such a plugin looks like the figure below. It consists of a button to trigger the like/tweet action and the number of likes / tweets.

To improve this, the question that has to be answered is: Do the users really need to know how often the page was tweeted, liked, etc.? If the answer is no, we can save several requests and download volume. All we have to do is deliver a link that looks like the action button and, if the user clicks it, open a popup window or an overlay where the user can perform the necessary actions.
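A sketch of this pattern, assuming the publicly documented Twitter Web Intent and Facebook sharer endpoints (verify them against the providers' current documentation before relying on them):

```javascript
// Build share URLs only when needed - no plugin JavaScript is loaded
// up front. Endpoints are Twitter's Web Intent and Facebook's sharer.
function twitterShareUrl(pageUrl, text) {
  return 'https://twitter.com/intent/tweet?url=' +
    encodeURIComponent(pageUrl) +
    '&text=' + encodeURIComponent(text);
}

function facebookShareUrl(pageUrl) {
  return 'https://www.facebook.com/sharer/sharer.php?u=' +
    encodeURIComponent(pageUrl);
}

// In the browser, open a popup only on click:
// shareButton.addEventListener('click', function () {
//   window.open(twitterShareUrl(location.href, document.title),
//               'share', 'width=550,height=420');
// });
```

The page itself ships only a styled link; the provider's domain is contacted for the first time when the user actually decides to share.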

Best Practice 4: Maps vs. static Maps
This practice focuses on the integration of maps like Google Maps or Bing Maps on our page. What can be seen all around the Web are map integrations where the maps are very small and only used to give the user a hint as to where the point of interest is located. To show the user this hint, several JavaScript files and images have to be downloaded. In most cases the user doesn't need to zoom or reposition the map, and as the map is small it's also hard to use. Why not use the static map implementations that Bing Maps and Google Maps offer? To figure out the advantages of the static implementation, I created two HTML pages that show the same map: one uses the standard implementation, the other the static one.
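A sketch of the static variant, using the URL scheme of Google's Static Maps API (an API key is required; the key and the coordinates below are placeholders):

```javascript
// Build a Google Static Maps image URL. One <img> pointing at this URL
// replaces the whole interactive map bundle of scripts and tiles.
function staticMapUrl(lat, lng, zoom, width, height, apiKey) {
  return 'https://maps.googleapis.com/maps/api/staticmap' +
    '?center=' + lat + ',' + lng +
    '&zoom=' + zoom +
    '&size=' + width + 'x' + height +
    '&key=' + encodeURIComponent(apiKey);
}

// Example (placeholder coordinates and key):
// <img src="..." alt="Map of the location" width="300" height="200">
var url = staticMapUrl(48.30694, 14.28583, 14, 300, 200, 'YOUR_API_KEY');
```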

After capturing the timings we get the following results:


Standard Google Maps

Static Google Maps

Difference in %

First Impression Time

493 ms

324 ms


Onload Time

473 ms



Total Load Time

1801 ms

700 ms


JavaScript Time

563 ms

0 ms


Number of Domains




Number of Resources




Total Pagesize

636 Kb

77 Kb


When we take a closer look at the KPIs, we can see that every KPI is better for the static Google Maps implementation. Looking at the timing KPIs in particular, the first impression time and the onload time improve by 34% and 22%. The total load time decreases by more than a second - 61% less - and this really has a big impact on the user's experience.

Some people will argue that this approach is not applicable because they want to offer the map controls to their customers. But remember Best Practice 3 - Load on user click: as soon as the user states his intention of interacting with the map by clicking on it, we can offer him a bigger and easier-to-use map by opening a popup, an overlay or a new page. The only thing the developers have to do is surround the static image with a link tag.
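A minimal sketch of that click-to-upgrade step, with the window-opening callback injected as a parameter so the logic can be exercised outside a browser (all URLs are placeholders):

```javascript
// Attach a click handler to the static map image that opens the full
// interactive map. The open() callback is injected for testability;
// in a real page it would wrap window.open or a navigation.
function upgradeMapOnClick(img, interactiveUrl, open) {
  img.addEventListener('click', function (event) {
    if (event && event.preventDefault) event.preventDefault();
    open(interactiveUrl);
  });
}

// Browser usage (placeholder URL):
// upgradeMapOnClick(document.querySelector('#map'),
//                   'https://www.google.com/maps?q=48.3,14.3',
//                   function (url) { window.open(url, 'map'); });
```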

Step 4: Monitor the performance of your web application / third-party content
As we need to show improvements in our follow-up meeting with business executives, it's important to monitor how the performance of our website evolves over time. There are three things that should be monitored by business, operations and development:

  1. Third-party content usage by customers and generated business value - business monitoring
  2. The impact of new added third-party content - development monitoring
  3. The performance of third-party content in the client browser - operations monitoring

Business Monitoring
An essential part of the business monitoring should be a check as to whether the requested third-party features contribute to the business value. Is the feature used by the customer, or does it help us increase our revenue? We have to ask this question again and again - not only once at the beginning of the development, but every time business, development and operations meet to discuss web application performance. If the answer is ever "No, the feature is not adding value," remove it as soon as possible.

Operations Monitoring
There are only a few tools that help us monitor the impact of third-party content for our users. What we need is either a synthetic monitoring system like Gomez and Keynote provide, or a monitoring tool that really sits in our users' browsers and collects the data there like dynaTrace UEM.

Synthetic monitoring tools allow us to monitor the performance from specified locations all over the world. The only downside is that we are not getting data from our real users. With dynaTrace UEM we can monitor the third-party content performance of all our users wherever they are situated, and we get the timings they really experienced. The figure below shows a dashboard from dynaTrace UEM that contains all the important data from the operations point of view. The pie chart and the table below it indicate which third-party content provider has the biggest impact on the page load time, and its distribution. The three line charts on the right side show the request trend, the total page load time and onload time, and the average time that third-party content contributes to your page performance.

Development Monitoring
It is very important that the development team has the ability to compare the KPIs between two releases, and the differences between the pages with and without third-party content. We have already established functional web tests that integrate with a web performance optimization tool that delivers the necessary values for the KPIs. We just have to reuse the switch we established during Step 1 and run automatic tests on the pages we identified as the most important. From this moment on, we will always be able to automatically find regressions caused by third-party content.
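As a sketch of such a regression check (the KPI names, baseline numbers and the 10% tolerance below are made up for illustration; the real values would come from your web performance tool):

```javascript
// Compare the KPIs of the current release against a baseline and
// return the names of KPIs that regressed beyond the tolerance.
// All KPIs here are "lower is better" (times, counts, sizes).
function findRegressions(baseline, current, tolerance) {
  tolerance = tolerance || 0.1; // default: flag anything >10% worse
  var regressions = [];
  for (var kpi in baseline) {
    if (current[kpi] > baseline[kpi] * (1 + tolerance)) {
      regressions.push(kpi);
    }
  }
  return regressions;
}

// Illustrative numbers only:
var baseline = { onloadTimeMs: 1200, totalLoadTimeMs: 2500, requests: 40 };
var current  = { onloadTimeMs: 1900, totalLoadTimeMs: 2550, requests: 41 };
findRegressions(baseline, current); // -> ['onloadTimeMs']
```

Wiring this into the automated test run means a new third-party plugin that blows the budget fails the build instead of surfacing in production.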

We may also consider enhancing our switch and making each of the third-party plugins individually switchable. This allows us to check the overhead a new plugin adds to our page. It also helps us decide which plugin we want to keep when there are two or more similar plugins.

Last but not least, now that business, operations and development have all the necessary data to improve the user experience, we should meet regularly to check the performance trend of our page and find solutions to upcoming performance challenges.

It is not a big deal to start improving third-party content integration, but to succeed, business, development and operations have to work together. We have to be creative, we have to make compromises and we have to be ready to use different methods of integration - never stop aiming for a top-performing website. If we take things seriously, we can improve the experience of our users and thereby grow our business.

More Stories By Klaus Enzenhofer

Klaus Enzenhofer has several years of experience and expertise in the field of Web Performance Optimization and User Experience Management. He works as Technical Strategist in the Center of Excellence Team at dynaTrace Software. In this role he influences the development of the dynaTrace Application Performance Management Solution and the Web Performance Optimization Tool dynaTrace AJAX Edition. He gathered most of his web and performance experience developing and running large-scale web portals at Tiscover GmbH.
