Agile Performance Testing – Proactively Managing Performance

Just in case you haven't heard, Waterfall is out and Agile is in. For organizations that thrive on innovation, successful Agile development and continuous deployment processes are paramount to reducing go-to-market time, fast-tracking product enhancements, and quickly resolving defects.

Executed successfully, with the right team in place, Agile practices should result in higher functional product quality. Operating in small, focused teams that work in well-defined sprints with clearly groomed stories is ideal for early QA involvement and parallel test planning and execution.

But how do you manage non-functional performance quality in an Agile model? The reality is that traditional performance engineering and testing are often best performed over longer periods of time; workload characterization, capacity planning, script development, test user creation, test data development, multi-day soak tests, and more are not always easily adapted to two-week, or shorter, sprints. And the high velocity of development change often causes continuous, and sometimes large, ripples that disrupt a team's ability to keep up with these activities; has anyone ever had a data model change break their test dataset?

Before joining AppDynamics I faced this exact scenario as the Lead Performance Engineer for PayPal's Java Middleware team. PayPal was undergoing an Agile transformation, and our small team of historically matrix-aligned specialty engineers was challenged to adapt.

Here are my best practices and lessons learned, sometimes the hard way, for adapting performance-engineering practices to an Agile development model:

  1. Fully integrate yourself into the Sprint team, immediately. My first big success at PayPal was the day I had my desk moved to sit in the middle of the Dev team. I joined the water-cooler talk, attended every standup, shot Nerf missiles across the room, and wrote and groomed stories as a core part of the scrum team. Performance awareness, practices, and results organically increased because performance was a well-represented function within the team rather than an afterthought farmed out to a remote organization.
  2. Build multiple performance and stress test scenarios with distinct goals and execution schedules. Plan for longer soak and stress tests as part of the release process, but also have one or more per-sprint, or even nightly, performance tests that can be executed continually to proactively measure performance and identify defects as they are introduced. Consider it your mission to quantify the performance impact of a code change.
  3. Extend your Continuous Integration (CI) pipelines to include performance testing. At PayPal, I built custom integrations between Jenkins and JMeter to automate test execution and report generation. Our pipelines triggered automated nightly regressions on development branches, within a well-understood platform where QA and development could parameterize a workload, kick off a performance test, and interpret the resulting report. Unless you like working 18-hour days, I can't overstate the importance of building integrations into tools that are already adopted, or easily adopted, by the broader team. If you're using Jenkins, take a look at the Jenkins Performance Plugin.
  4. Define Key Performance Indicators (KPIs). In an Agile model you should expect smaller-scoped tests, executed at a higher frequency. It's critical to have a set of KPIs the group understands and buys into, so you can quickly look at a test and tell whether a) things look good, or b) something funky happened and additional investigation is needed. Some organizations have clearly defined non-functional criteria, or SLAs; many don't. Be Agile with your KPIs, and refine them over time. Here are some of the KPIs we commonly evaluated (a minimal sketch of an automated KPI check follows this list):
  • Percentile Response Time – 90th, 95th, 99th – Summary and Per-Transaction
  • Throughput – Summary and Per-Transaction
  • Garbage Collector (GC) Performance – % non-paused time, number of collections (major and minor), and collection times
  • Heap Utilization – Young Generation and Tenured Space
  • Resource Pools – Connection Pools and Thread Pools
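
As a concrete illustration of items 3 and 4, here is a minimal sketch of the kind of KPI gate a Jenkins job could invoke after a non-GUI JMeter run (for example, jmeter -n -t plan.jmx -l results.jtl). It is not the exact integration we built at PayPal; the file name, the 500 ms threshold, and the reliance on JMeter's default CSV result columns (timeStamp, elapsed, label) are illustrative assumptions. The gate computes per-transaction percentile response times and overall throughput, then exits non-zero so the pipeline fails when a KPI is breached:

    import java.nio.file.*;
    import java.util.*;
    import java.util.stream.*;

    // Minimal KPI gate sketch: parse a JMeter CSV results file (JTL) and fail the
    // build when a response-time percentile exceeds an agreed threshold.
    public class KpiGate {

        // Hypothetical threshold; be Agile with it and refine it over time.
        static final long P95_THRESHOLD_MS = 500;

        public static void main(String[] args) throws Exception {
            Path jtl = Paths.get(args.length > 0 ? args[0] : "results.jtl");
            List<String> lines = Files.readAllLines(jtl);

            // Locate columns by header name rather than by fixed position.
            List<String> header = Arrays.asList(lines.get(0).split(","));
            int tsCol = header.indexOf("timeStamp");
            int elapsedCol = header.indexOf("elapsed");
            int labelCol = header.indexOf("label");

            // Group elapsed times (ms) by transaction label; a naive split is fine
            // here because the columns we need precede any free-text fields.
            Map<String, List<Long>> byLabel = new TreeMap<>();
            long firstTs = Long.MAX_VALUE, lastTs = Long.MIN_VALUE, samples = 0;
            for (String line : lines.subList(1, lines.size())) {
                if (line.isEmpty()) continue;
                String[] f = line.split(",");
                long ts = Long.parseLong(f[tsCol]);
                firstTs = Math.min(firstTs, ts);
                lastTs = Math.max(lastTs, ts);
                byLabel.computeIfAbsent(f[labelCol], k -> new ArrayList<>())
                       .add(Long.parseLong(f[elapsedCol]));
                samples++;
            }

            double durationSec = Math.max(1.0, (lastTs - firstTs) / 1000.0);
            System.out.printf("Summary throughput: %.1f req/s%n", samples / durationSec);

            boolean breached = false;
            for (Map.Entry<String, List<Long>> e : byLabel.entrySet()) {
                List<Long> sorted = e.getValue().stream().sorted().collect(Collectors.toList());
                long p90 = percentile(sorted, 90), p95 = percentile(sorted, 95), p99 = percentile(sorted, 99);
                System.out.printf("%-30s p90=%dms p95=%dms p99=%dms%n", e.getKey(), p90, p95, p99);
                if (p95 > P95_THRESHOLD_MS) breached = true;  // KPI violated for this transaction
            }
            if (breached) System.exit(1);  // non-zero exit fails the Jenkins build
        }

        static long percentile(List<Long> sorted, int pct) {
            int idx = (int) Math.ceil(pct / 100.0 * sorted.size()) - 1;
            return sorted.get(Math.max(0, idx));
        }
    }

Failing the build on a breached KPI is what turns a nightly performance run from a report somebody might read into a defect the team deals with at the next standup.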

  5. Invest in best-of-breed tooling. With higher-velocity code change and release schedules, it's essential to have deep visibility into your performance environment. Embrace tooling, but consider these factors, which are impacted by Agile development:

  • Can your toolset automatically and continuously discover, map, and diagnose failures in a distributed system without asking you to configure which methods should be monitored? In an Agile team the code base is constantly shifting. If you have to configure method-level monitoring, you'll spend significant time maintaining tooling rather than solving problems.
  • Can the solution be enabled out of the box under heavy load? If the overhead of your tooling degrades performance under high load, it's ineffective in a performance environment. Don't let your performance monitoring become your performance problem.

When a vendor recommends reducing monitoring coverage to support load testing, consider a) how effective a tool that can't provide 100% visibility really is, and b) how much time you'll spend repeatedly reconfiguring monitoring to keep overhead acceptable.
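
Whichever tooling you adopt, the JVM-side KPIs above (GC behavior and heap utilization) can also be sampled with negligible overhead using the standard java.lang.management MXBeans. The sketch below is a minimal, illustrative sampler you might run alongside a test; the ten-second interval and the log format are assumptions, and it is a complement to, not a replacement for, an APM agent:

    import java.lang.management.*;

    // Minimal JVM-side KPI sampler: logs cumulative GC counts, accumulated GC time,
    // and heap utilization once per interval while a performance test runs.
    public class JvmKpiSampler implements Runnable {

        private final long intervalMs;

        public JvmKpiSampler(long intervalMs) { this.intervalMs = intervalMs; }

        @Override
        public void run() {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (!Thread.currentThread().isInterrupted()) {
                long gcCount = 0, gcTimeMs = 0;
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    gcCount += gc.getCollectionCount();  // young- and old-generation collectors
                    gcTimeMs += gc.getCollectionTime();  // accumulated collection time
                }
                MemoryUsage heap = memory.getHeapMemoryUsage();
                System.out.printf("gcCount=%d gcTimeMs=%d heapUsedMB=%d heapMaxMB=%d%n",
                        gcCount, gcTimeMs,
                        heap.getUsed() / (1024 * 1024),
                        heap.getMax() / (1024 * 1024));
                try {
                    Thread.sleep(intervalMs);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // stop sampling when the test ends
                }
            }
        }

        public static void main(String[] args) {
            // Illustrative usage: sample every 10 seconds for the duration of a test.
            new Thread(new JvmKpiSampler(10_000), "jvm-kpi-sampler").start();
        }
    }

Deltas between successive samples give you collections and GC time per interval, which maps directly onto the "% non-paused time" and collection-count KPIs.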

Performance testing within an Agile organization challenges us as engineers to adapt to a high velocity of change.  Applying best practices gives us the opportunity to work as part of the development team to proactively identify and diagnose performance defects as code changes are introduced.  Because the fastest way to resolve a defect in production is to fix it before it gets there.

Take five minutes to get complete visibility into the performance of your production applications with AppDynamics today.
