Cruise Control: Automation in Performance Testing | @DevOpsSummit #APM #DevOps

Automated performance testing is important for many different reasons

Listen closely to the background hum of any agile shop, and you'll likely hear this ongoing chant: Automate! Automate! Automate! While automation can be incredibly valuable to the agile process, there are some key things to keep in mind when it comes to automated performance testing.

Automated performance testing is important for many different reasons. It allows you to refactor or introduce change and test for acceptance with virtually no manual effort. You can also stay on the lookout for regression defects and test for things that just wouldn't come up manually. Ultimately, automated testing should save time and resources, so you can release code that is bug-free and ready for real-world use.
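For instance, a regression check like the one sketched below can run on every build with no manual effort. This is a minimal illustration rather than a NeoLoad script: the endpoint URL, request count, and 500 ms p95 threshold are assumptions you would replace with your own service and SLA.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch of an automated latency regression check (hypothetical
// endpoint and threshold). It issues a fixed number of requests, computes
// the 95th-percentile response time, and fails the build if the threshold
// is exceeded.
public class LatencySmokeCheck {
    private static final String ENDPOINT = "http://localhost:8080/api/health"; // assumption
    private static final int REQUESTS = 50;
    private static final long P95_THRESHOLD_MS = 500;                          // assumption

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT)).GET().build();

        List<Long> durationsMs = new ArrayList<>();
        for (int i = 0; i < REQUESTS; i++) {
            long start = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.discarding());
            durationsMs.add((System.nanoTime() - start) / 1_000_000);
        }

        Collections.sort(durationsMs);
        long p95 = durationsMs.get((int) Math.ceil(durationsMs.size() * 0.95) - 1);
        System.out.println("p95 latency: " + p95 + " ms");

        if (p95 > P95_THRESHOLD_MS) {
            System.err.println("Performance regression: p95 exceeds " + P95_THRESHOLD_MS + " ms");
            System.exit(1); // non-zero exit fails the CI step
        }
    }
}
```

Run as part of the build, the non-zero exit code fails the pipeline, which is what turns the check from a report into an alert.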

Recently, I spoke with performance specialist Brad Stoner about how to fit performance testing into agile development cycles. This post follows up with more detail on performance testing automation and recaps which performance tests are good candidates for automation. After all, automation is an important technique for any modern performance engineer to master.

Automation Without Direction
Most of the time, automation gets set up without performance testing in mind. Performance testing is, at best, an afterthought to the automation process. That leaves you, as a performance engineer, stuck with some pretty tricky scenarios. Maybe every test case is a functional use case, and adapting it for performance means going back and modifying it for scale or high concurrency. Or perhaps the data required for a large performance test is never put together, leaving you with a whole new pile of work to do.

Use cases are strung together in an uncoordinated way, so you have to create another document that describes how to use existing functional tests to conduct a load test. And of course, those test cases are stuck on "the happy path," making sure functionality works properly; they don't test edge cases or stress cases, and therefore don't identify performance defects.

None of these scenarios is desirable, but they can be easily rectified by incorporating performance objectives into your automation strategy from the start. You want to plan your approach to automation intelligently.

What Automation Is - and Isn't - Good For
You can't automate everything all the time. If you run daily builds, you can't do a massive load test every night. That idea would be even worse if you build several times a day. Instead, you'll have to pick and choose your test cases, mapping out what you do over periods of time in coordination with the release cycle for the app.
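As a rough illustration of mapping tests to the release cycle, the sketch below picks a test plan based on build cadence. The suite names, virtual-user counts, durations, and the BUILD_CADENCE environment variable are all assumptions, and the hand-off to an actual load-testing tool is left as a placeholder.

```java
// Illustrative sketch of scoping performance tests to the build cadence.
// The idea: lightweight checks on every commit, heavier load tests only
// where the release cycle can absorb them.
public class PerfTestSelector {

    enum Cadence { PER_COMMIT, NIGHTLY, PRE_RELEASE }

    public static void main(String[] args) {
        Cadence cadence = Cadence.valueOf(
                System.getenv().getOrDefault("BUILD_CADENCE", "PER_COMMIT")); // assumed variable

        switch (cadence) {
            case PER_COMMIT:
                // Fast smoke check: a handful of virtual users, a few minutes at most.
                run("smoke-latency-check", 5, 3);
                break;
            case NIGHTLY:
                // Moderate, generic load against the benchmarked happy-path scenarios.
                run("benchmark-load", 100, 30);
                break;
            case PRE_RELEASE:
                // Full-scale load, stress, and soak runs reserved for the release cycle.
                run("full-load", 1000, 60);
                run("stress-ramp", 2000, 45);
                run("soak", 300, 480);
                break;
        }
    }

    private static void run(String scenario, int virtualUsers, int minutes) {
        // Placeholder: hand off to your load-testing tool of choice.
        System.out.printf("Running %s with %d virtual users for %d minutes%n",
                scenario, virtualUsers, minutes);
    }
}
```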

Trying to cover too many use cases at once will kill your environment. Constantly high traffic patterns are next to impossible to maintain. Highly specific test scenarios can also cause difficulty, because you may need to adjust performance tests every time something changes. That's why it pays to be smart about what you automate.

Look for a manageable number of tests that can be run generically and regularly. Then, benchmark those tests. After that, you can focus your manual time on ad hoc testing, bottlenecks, or areas under active development. This focus will catch a ton of defects before they reach production.

Get Automation Working for You
Automation can be great, but it has to identify performance defects and alert you. Just as functional tests validate a defined plan of how an application should behave, performance tests should validate your application's service level agreement. Define which tests you want to automate. Is it for workload capacity? Or are you looking for stress, duration, and soak tests? Will you automate to find defects on the front end?

These checks are easy to automate, and you can do it at low cost. You'll want to establish benchmarks and baselines often to see whether performance degrades as the application is developed further. Testing with direction means that you don't test just for the sake of testing. You always test with a purpose and motive: to find and isolate performance defects. This is critical for a performance engineer, because you're always pushing the envelope of the application, and you need to know where that boundary lies.
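One low-cost way to automate that baseline comparison is a small gate step in the pipeline. The sketch below is an assumption-laden illustration: the properties files, metric names, and 15% tolerance are placeholders for whatever your tooling actually reports.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch of a baseline comparison step (file names, metric keys, and the
// 15% tolerance are assumptions). After each automated run, current metrics
// are compared with the stored baseline; degradation beyond the tolerance
// flags a performance defect.
public class BaselineGate {
    private static final double TOLERANCE = 0.15; // allow 15% drift before failing

    public static void main(String[] args) throws IOException {
        Properties baseline = load(Path.of("perf-baseline.properties")); // assumed file
        Properties current  = load(Path.of("perf-current.properties"));  // assumed file

        boolean regression = false;
        for (String metric : new String[] {"p95LatencyMs", "errorRatePct"}) {
            double base = Double.parseDouble(baseline.getProperty(metric));
            double now  = Double.parseDouble(current.getProperty(metric));
            if (now > base * (1 + TOLERANCE)) {
                System.err.printf("%s degraded: baseline=%.1f current=%.1f%n", metric, base, now);
                regression = true;
            }
        }
        if (regression) System.exit(1); // fail the pipeline so the defect is investigated now
    }

    private static Properties load(Path path) throws IOException {
        Properties props = new Properties();
        try (var in = Files.newInputStream(path)) {
            props.load(in);
        }
        return props;
    }
}
```

An improved result can also be promoted to become the new baseline, so the benchmark keeps pace with ongoing development.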

Get Ready for Smooth Sailing
Automated performance testing can be a huge time saver. To make the most of that time-saving potential, you want to do it right. Work smart by always testing with purpose. Ready to dive even deeper into these topics? Jump right in and check out the full webcast here, where we go into greater detail about automation strategies. You can also learn how Neotys can help you with the overall agile performance testing cycle.


More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
