I’m Not Scared of DevOps and You Shouldn’t Be Either

DevOps is speeding towards the IT world like a freight train, and the hype around it is deafening. There is no reason to be afraid of this change: it is the natural follow-on to the agile movement that revolutionized development just a few years ago. By definition, DevOps is the natural alignment of IT performance with business profitability. The impact of this has yet to be quantified, but it has been suggested that the route to the CEO's chair will run through the IT leaders who successfully make the transition to a DevOps model. If this still seems foreign to you, I recommend reading the DevOps Blog from IT Revolution and the OpsCode Blog, and checking out The Phoenix Project.

Despite all the talk around simple monitoring tools, breaking through the walls between Dev and Ops still poses a real challenge. This is because of a misunderstanding of Operations' real purpose: extracting real value from its resources. According to Kevin Behr, Operations is the act of harvesting value from IT resources; anything that prevents this from happening is a detriment to the business. This means that firefighting and war room sessions are a hindrance to the DevOps model. The following screenshots are good examples of a war room scenario.

Unexpected crashes of websites upon new rollouts still lead to "War Room" situations - despite all the good efforts of DevOps and Agile Delivery/Deployment

Successfully riding the DevOps train: Many of our production customers who made it through firefighting mode applied the principles of DevOps with a special focus on application performance. In this article we describe the steps and milestones companies need to go through to level up their Operations and Engineering teams and extract more value from their existing resources.

The Foundation of DevOps
CAMS (Culture, Automation, Measurement, Sharing) names the four key areas at the core of the DevOps movement. Culture is the hardest to change but also the most important, because it changes the way the different teams work together and share responsibility for the end users of their application. It promotes the use of development practices in operations to automate deployment, and it allows developers to learn from real-world Ops experience; that mutual exchange is what breaks down the walls.

The Lack of Performance Focus
An interesting fact based on the feedback we get from operations teams worldwide: the root cause of about 80% of site crashes or performance problems traces back to only about 20% of the recurring problem patterns. Want to learn more? Check out blog posts such as Top Performance Landmines in Production and I am sure you'll find issues you have already run into yourself.

Looking at these common problem patterns, it is clear that despite all the DevOps efforts, plenty of performance and scalability-related problems still make it into release deployments. Why is that? Because our organizations are still very much driven by business requirements that demand numerous new features pushed out in ever-shorter release cycles. Teams keep growing and are spread around the world. To keep up with the pace, third-party components are included in the code in place of in-house innovation. This "natural" evolution, however, is also the root cause of firefights and limits the benefits of DevOps, because there is too much focus on pushing functionality through the deployment pipeline and not enough focus on performance.

More developers across more locations, including more untested third-party code, with less time to focus on performance

Plugging Performance into DevOps
To focus developers on performance and avoid War Room scenarios, you must plug performance into the four pillars of CAMS (a test sketch follows the list):

  • Culture: Performance as Key Requirement in Dev, Test and Ops
  • Automation: Automated Performance Tests already in Continuous Integration
  • Measurement: Measure Key Performance Metrics in CI, Test and Ops
  • Sharing: Share the same tools and same performance data across Dev, Test and Ops
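
To make the Automation and Measurement pillars concrete, here is a minimal sketch of what an automated performance check in Continuous Integration could look like. It assumes JUnit 4 on the classpath and a hypothetical http://localhost:8080/home endpoint brought up by the build; the 500 ms budget is likewise an illustrative assumption, not a recommendation.

```java
import static org.junit.Assert.assertTrue;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

/**
 * Minimal performance smoke test meant to run with every CI build.
 * The URL and the 500 ms budget are illustrative assumptions.
 */
public class HomePagePerformanceTest {

    private static final String PAGE_URL = "http://localhost:8080/home"; // hypothetical endpoint
    private static final long BUDGET_MILLIS = 500;                       // assumed response-time budget

    @Test
    public void homePageStaysWithinResponseTimeBudget() throws Exception {
        long start = System.nanoTime();

        HttpURLConnection connection = (HttpURLConnection) new URL(PAGE_URL).openConnection();
        connection.setRequestMethod("GET");
        int status = connection.getResponseCode(); // forces the request to execute
        connection.disconnect();

        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        assertTrue("Expected HTTP 200 but got " + status, status == 200);
        assertTrue("Home page took " + elapsedMillis + " ms, budget is " + BUDGET_MILLIS + " ms",
                elapsedMillis <= BUDGET_MILLIS);
    }
}
```

A check like this runs in seconds, so it can gate every commit; deeper load behavior is covered later in the pipeline (see Milestone 3).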

There are several key milestones to consider:

Milestone 1: Level-Up Performance to Increase Feedback Between Ops and Development
The first step in any DevOps initiative is to get the Ops and Dev teams talking in order to relieve constraints on the business. This may be easy for small teams to accomplish, but the larger the organization, the more difficult it becomes, because the constraints are greater. Operations has to diminish these constraints on the business, and this is where APM solutions can help. Beware, though: not all solutions are created equal. As mentioned in the previous DevOps blog post, the drive to diminish constraints needs to be applied across the entire delivery chain, and monitoring alone just does not cut it. You need something that not only starts the process but allows the teams to continue to mature and grow. Simple monitoring tools fall short because they only help extinguish fires in operations, and that does nothing to drive down constraints over time. Remember, firefighting is not part of Operations' purpose, which means Operations should not be looking at fire extinguishers for its DevOps strategy.

Milestone 2: Level-Up Performance Thinking of Engineering
Both Operations and Test teams have a good understanding of performance because they deal with it every day. These teams need to educate engineering on the importance of performance as a key requirement of software engineering, and on how it plays out in large-scale environments under heavy load.

The Ops team shares data with engineering to highlight the performance behavior of their applications under real production load. This helps engineers prevent the top performance problems from entering production and thereby eliminates the need for firefights.

The test teams do their share by providing automated performance test frameworks and educating engineering on how to automate testing for these performance problem patterns.

Milestone 3: Level-Up Load and Capacity Testing
With development executing its own performance tests, it's time to level up the test team as well. On one hand, there is now more time to focus on large-scale load tests that need to be executed in a production-like environment. This helps to find "data-driven", scalability, and "third-party impacted" performance problems. Close collaboration with Ops ensures that tests can be executed either in the production environment or in a staging environment that mirrors production. Executing these tests in collaboration with Ops allows the teams to become more confident when releasing a new version and also supports proper capacity planning.

Running tests against the production system gives better input for capacity planning and uncovers heavy load application issues
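
As a rough illustration of what such a load test exercises, the following sketch spins up concurrent virtual users against a single URL and reports throughput and error counts. The target URL, user count, and request count are made-up assumptions; a real large-scale test would use a dedicated load-testing tool and, as described above, run against production or a production-like staging environment in collaboration with Ops.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Bare-bones load generator: N concurrent users each issuing M requests.
 * URL, user count, and request count are illustrative assumptions only.
 */
public class SimpleLoadTest {

    private static final String TARGET_URL = "http://staging.example.com/checkout"; // hypothetical
    private static final int CONCURRENT_USERS = 50;
    private static final int REQUESTS_PER_USER = 20;

    public static void main(String[] args) throws InterruptedException {
        final AtomicInteger failures = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(CONCURRENT_USERS);
        long start = System.nanoTime();

        for (int user = 0; user < CONCURRENT_USERS; user++) {
            pool.submit(new Runnable() {
                public void run() {
                    for (int i = 0; i < REQUESTS_PER_USER; i++) {
                        try {
                            HttpURLConnection c =
                                    (HttpURLConnection) new URL(TARGET_URL).openConnection();
                            if (c.getResponseCode() != 200) { // count non-OK responses as failures
                                failures.incrementAndGet();
                            }
                            c.disconnect();
                        } catch (Exception e) {
                            failures.incrementAndGet(); // connection errors also count as failures
                        }
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long elapsedSeconds = Math.max(1, (System.nanoTime() - start) / 1_000_000_000L);
        int total = CONCURRENT_USERS * REQUESTS_PER_USER;
        System.out.printf("%d requests in %ds (%d req/s), %d failures%n",
                total, elapsedSeconds, total / elapsedSeconds, failures.get());
    }
}
```

The throughput and failure numbers such a run produces are exactly the kind of input capacity planning needs.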

Milestone 4: Level-Up Performance Test Automation
The "traditional" testing teams are used to execute performance and scalability tests in their own environments at the end of a milestone. The goal is to provide these test frameworks and environments to engineering so that these basic performance tests can be executed automatically in the CI environment. In order for this to work you need to make sure that:

  1. The test frameworks are easy to use and accepted by developers
  2. They deliver the performance metrics needed to detect the common problem patterns
  3. They are fully integrated into continuous integration

Automatic integration tests run in CI to detect performance regressions on metrics such as # of SQL calls, page load time, # of JS files or images ...
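
To show what a CI-level check on the "# of SQL calls" metric might look like, here is a sketch that counts JDBC statements with a dynamic proxy and fails the build when a use case exceeds its statement budget. The in-memory H2 URL, the stand-in data-access method, and the budget of 5 are all hypothetical; APM tools can capture the same metric without code changes.

```java
import static org.junit.Assert.assertTrue;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.concurrent.atomic.AtomicInteger;

import org.junit.Test;

/**
 * Sketch of a CI regression test on the number of SQL statements a use case
 * executes. A sudden jump in this count often signals an N+1 query pattern.
 */
public class SqlCallCountRegressionTest {

    /** Wraps a JDBC Connection so that every statement creation is counted. */
    private static Connection countingConnection(final Connection target,
                                                 final AtomicInteger counter) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Throwable {
                        if (method.getName().equals("prepareStatement")
                                || method.getName().equals("createStatement")) {
                            counter.incrementAndGet();
                        }
                        return method.invoke(target, args);
                    }
                });
    }

    /** Stand-in (hypothetical) for the real data-access code under test. */
    private void loadOrdersForCustomer(Connection c, int customerId) throws Exception {
        c.prepareStatement("SELECT 1").executeQuery().close();
        c.prepareStatement("SELECT 1").executeQuery().close();
    }

    @Test
    public void orderLookupStaysWithinSqlBudget() throws Exception {
        AtomicInteger sqlCalls = new AtomicInteger();
        // assumes the H2 driver on the test classpath; any JDBC database works
        Connection counted = countingConnection(
                DriverManager.getConnection("jdbc:h2:mem:test"), sqlCalls);

        loadOrdersForCustomer(counted, 42);

        assertTrue("Use case executed " + sqlCalls.get()
                + " SQL statements, budget is 5; a jump here often signals an N+1 query",
                sqlCalls.get() <= 5);
    }
}
```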

What's Next? Build a Performance Center of Excellence
Many of our customers who jumped on the DevOps train a while back are now promoting a performance culture within their organizations. In the next few blog posts we will cover their best practices and tips for either building a separate "Performance Center of Excellence" team or leveling up existing DevOps teams to deliver software with high confidence and fewer War Room weekends.

More Stories By Andreas Grabner

Andreas Grabner has more than a decade of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for Compuware and leads the Compuware APM Center of Excellence team. In his role he influences the Compuware APM product strategy and works closely with customers in implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture-related topics, and regularly authors articles offering business and technology advice for Compuware’s About:Performance blog.
