Java IoT: Blog Post

Slow Tests Are the Symptom, Not the Cause

Test slowness is merely the symptom; what you should really address is the cause

Note that username and mailing_list_name are required keyword arguments and will raise an ArgumentError if not passed in (required keyword arguments were introduced in Ruby 2.1; they are not available in Ruby 2.0), whereas the other arguments get their default values (evaluated and then assigned) if not passed in.
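A minimal sketch of that keyword-argument behavior, independent of the article's code (greet is a hypothetical example method):

```ruby
# Required vs. optional keyword arguments (Ruby 2.1+).
# `greet` is a hypothetical method, not part of the article's code.
def greet(name:, greeting: 'Hello')
  "#{greeting}, #{name}!"
end

greet(name: 'Oren')                 # => "Hello, Oren!"
greet(name: 'Oren', greeting: 'Hi') # => "Hi, Oren!"

begin
  greet(greeting: 'Hi')             # name: has no default...
rescue ArgumentError => e
  e.message                         # ...so omitting it raises ArgumentError
end
```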


Using One Level of Abstraction Per Method

There is still something that doesn't feel quite right. Consider Kent Beck's advice:

Divide your program into methods that perform one identifiable task. Keep all of the operations in a method at the same level of abstraction.

The call method invokes a few other methods, but these methods operate at different levels of abstraction: notifying a user is a domain-level concept, whereas updating a user's attributes is a lower-level, persistence-related concept. Another way to put it: while the details of how exactly the user is notified are hidden, the details of updating the attributes are exposed. The fact that active record gives us multiple ways of updating attributes makes the problem even clearer: what if we wanted to update attributes using accessors and save using active record's save? These details are irrelevant at the abstraction level of the method we're in, and the method should not change if they do. notifies_user is treated as a role, while user is wrongly treated as active record's implementation of the user role.

The call to find_by! also operates at a lower level than notifies_user's, much like update_attributes. To fix both problems, we create an instance method and a class method in User:

class User < ActiveRecord::Base
  validates_uniqueness_of :username

  def add_to_mailing_list(list_name)
    update_attributes(mailing_list_name: list_name)
  end

  def self.find_by_username!(username)
    find_by!(username: username)
  end
end

The specific query to find the user or the specific active record used are now User's business. We achieved further decoupling from active record and made sure the abstraction level is right. If we keep using our own methods instead of active record's we've effectively made persistence and object lookup an implementation detail that's the concern of the model only.
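To make that role concrete, here is a hypothetical non-ActiveRecord object that could play the same lookup role (InMemoryUsers is an illustration for this post, not part of the article's code):

```ruby
# A hypothetical in-memory implementation of the user-lookup role.
# Anything that responds to find_by_username! can stand in for User.
class InMemoryUsers
  def initialize
    @users = {}
  end

  def add(user)
    @users[user.username] = user
  end

  # Mirrors User.find_by_username!: returns the user or raises
  # (KeyError here, where active record raises ActiveRecord::RecordNotFound).
  def find_by_username!(username)
    @users.fetch(username)
  end
end
```

Because callers depend only on the find_by_username! message, swapping this in requires no change to the code that looks users up.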

The end result looks like this:

The Complete Refactoring

# Before:
class MailingListsController < ApplicationController
  respond_to :json

  def add_user
    user = User.find_by!(username: params[:username])
    NotifiesUser.run(user, 'blog_list')
    user.update_attributes(mailing_list_name: 'blog_list')
    respond_with user
  end
end


# After:
class MailingListsController < ApplicationController
  respond_to :json

  def add_user
    user = AddsUserToList.(username: params[:username], mailing_list_name: 'blog_list')
    respond_with user
  end
end

class AddsUserToList
  def self.call(username:, mailing_list_name:, finds_user: User,
                notifies_user: NotifiesUser)
    user = finds_user.find_by_username!(username)
    notifies_user.(user, mailing_list_name)
    user.add_to_mailing_list(mailing_list_name)
    user
  end
end

The Tests

The class AddsUserToList can be tested using true, isolated unit tests: we can easily isolate the class under test and make sure it properly communicates with its collaborators. There is no database access, no heavy-handed request stubbing, and, if we want, no loading of the rails stack. In fact, I'd argue that any test that requires any of the above is not a unit test but rather an integration test (see the entire repo here).

describe AddsUserToList do
  let(:finds_user) { double('finds_user') }
  let(:notifies_user) { double('notifies_user') }
  let(:user) { double('user') }

  subject(:adds_user_to_list) { AddsUserToList }

  it 'registers a new user' do
    expect(finds_user).to receive(:find_by_username!).with('username').and_return(user)
    expect(notifies_user).to receive(:call).with(user, 'list_name')
    expect(user).to receive(:add_to_mailing_list).with('list_name')

    adds_user_to_list.(username: 'username', mailing_list_name: 'list_name',
                       finds_user: finds_user, notifies_user: notifies_user)
  end
end

Here we pass in mocks (initialized with #double) for each collaborator and expect them to receive the correct messages. We do not assert any values, specifically not the value of user.mailing_list_name. Instead we require that user receives the add_to_mailing_list message. We trust user to update the attributes; after all, this is a unit test for AddsUserToList, not for User.
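For readers who want to see what such doubles do under the hood, here is a minimal hand-rolled sketch (RecordingDouble is hypothetical and far cruder than rspec-mocks, but it captures the message-recording idea):

```ruby
# A tiny hand-rolled test double: records every message sent to it and
# returns canned values, roughly what rspec-mocks' double/receive provide.
class RecordingDouble
  attr_reader :messages

  def initialize(canned = {})
    @canned = canned   # { method_name => return_value }
    @messages = []
  end

  def method_missing(name, *args)
    @messages << [name, args]
    @canned[name]
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

finds_user = RecordingDouble.new(find_by_username!: :some_user)
finds_user.find_by_username!('username') # => :some_user
finds_user.messages                      # => [[:find_by_username!, ['username']]]
```

A real mocking library adds the other half, verifying afterwards that the expected messages were actually recorded.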

Note that pushing update_attributes into User helps us avoid a mocking pitfall: only mock types you own. Technically we do own User, but most of its interface comes down the inheritance tree from ActiveRecord::Base, which we don't own. There is no design feedback when you mock parts of an interface you don't own. Or rather, you do get feedback, but you can't act on it.

As you can see there is a close resemblance between the test code and the code it is testing. I don't see it as a problem. A unit test should verify that the object under test sends the correct messages to its collaborators, and in the case of AddsUserToList we have a controller-like object, and a controller's job is to... coordinate sending messages between collaborators. Sandi Metz talks about what you should and what you should not test here. To use her vocabulary, all we are testing here are outgoing command messages since these are the only messages this object sends. For that reason I think the resemblance is acceptable.

I should mention that TDD classicists have long criticized TDD mockists (like myself) for writing tests that are too coupled to the implementation. You can read more about it in Martin Fowler's Mocks Aren't Stubs.

I omit the controller and integration tests here, but please don't forget them in your code. They will be much simpler and there will be fewer of them if you extract service objects.

Some Numbers

How much faster is this test than a unit test that touches the database and loads rails and the application? Here are the results:

                    Single Test Runtime    Total Suite Runtime
'false' unit test   0.0530s                2.5s
true unit test      0.0005s                0.4s

A single test run is roughly a hundred times faster. The absolute times are rather small but the difference will be very noticeable when you have hundreds of unit tests or more. The total runtime in the "false" version takes roughly two seconds longer. This is the time it takes to load a trivial rails app on my machine. This will be significantly higher when the app grows in size and adds more gems.

The 'before' version's tests are harder to write and significantly slower because we bundle many responsibilities into a single class, the controller. The 'after' version is easier to test: we pass in mocks to override the default collaborators. This means that if the requirements change, we can replace the collaborators in AddsUserToList with other implementations with little or no code change. The controller has been reduced to the most basic task of coordinating a few objects.

Is the 'after' version better? I think it is. It's easier and faster to test, but more importantly the collaborators are clearly defined and treated as roles, not as specific implementations. As such, they can always be replaced by different implementations of the roles they play. We can now concentrate on the messages passing between the different roles in our system.
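Here is a self-contained sketch of that replaceability. It repeats a trimmed AddsUserToList so the snippet runs without Rails (defaults omitted); FakeUser, FakeUserStore, and NotifiesUserBySms are hypothetical stand-ins, not part of the article's code:

```ruby
# Swapping collaborators without changing the service object.
# The stand-ins only have to honor the same message interfaces.
class AddsUserToList
  def self.call(username:, mailing_list_name:, finds_user:, notifies_user:)
    user = finds_user.find_by_username!(username)
    notifies_user.(user, mailing_list_name)
    user.add_to_mailing_list(mailing_list_name)
    user
  end
end

FakeUser = Struct.new(:username, :mailing_list_name) do
  def add_to_mailing_list(list_name)
    self.mailing_list_name = list_name
  end
end

class FakeUserStore
  def self.find_by_username!(username)
    FakeUser.new(username)
  end
end

class NotifiesUserBySms
  def self.call(user, mailing_list_name)
    # send an SMS instead of an email (omitted)
  end
end

user = AddsUserToList.(username: 'oren',
                       mailing_list_name: 'blog_list',
                       finds_user: FakeUserStore,
                       notifies_user: NotifiesUserBySms)
user.mailing_list_name # => "blog_list"
```

Nothing in AddsUserToList changed; only the objects playing the roles did.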

When you practice TDD with mock objects you will almost be forced to inject your dependencies in order to mock collaborators. Extracting your business logic into service objects makes all this much easier, and further decoupling from active record makes the tests true unit tests that are also blazing fast.

This brings us closer to a lofty design goal stated by Kent Beck:

When you can extend a system solely by adding new objects without modifying any existing objects, then you have a system that is flexible and cheap to maintain.

Using mocks and dependency injection with TDD makes sure your system is designed for this form of modularity from the get go. You know you can replace your objects with a different implementation because this is exactly what you did in your tests when you passed in mocks. Such design guarantees that you can write true, isolated and thus fast, tests.

We want to thank Oren for making his original article available to the Codeship Blog. It is a pleasure republishing blog posts of such great quality. Let us know what you think about Oren's article in the comments!

If you liked Oren's post be sure to sign up for his newsletter to get tips about how to improve your code.

Go ahead and try Codeship for free! Set up Continuous Integration and Deployment for your GitHub and Bitbucket projects in only 3 minutes.

More Stories By Manuel Weiss

I am the cofounder of Codeship – a hosted Continuous Integration and Deployment platform for web applications. On the Codeship blog we love to write about Software Testing, Continuous Integration and Deployment. Also check out our weekly screencast series 'Testing Tuesday'!
