
Software Testing Shouldn't Be Rocket Science

Earthdate: October 15, 1997, and the Cassini spacecraft is launched. Mission: to boldly go and explore the planet Saturn.

Saturn is about 10 times farther away from the Sun than the Earth, and to get there required two orbits of the inner solar system, receiving gravitational kicks from Venus and Earth before doing a flyby of Jupiter to get a final assist toward Saturn.

Piggy-backed on Cassini was the Huygens probe, which would be dropped onto Saturn's moon Titan. Unlike most other moons in the solar system, which are barren, cratered, rocky places, Titan is covered by an atmosphere. The probe's purpose was to parachute through it, capturing data as it descended to the moon's surface. The data would be transmitted from the probe up to the Cassini craft, which would act as a relay and transmit it back to Earth, where the experiments' results would be analyzed.

On January 14, 2005, Huygens successfully landed on Titan's surface and provided some fantastic pictures of the moon (www.esa.int/SPECIALS/Cassini-Huygens/index.html). Even so, the mission suffered two major problems.

The first was that one of the radio channels Huygens was going to use to transmit data to Cassini failed. The remaining channel was used successfully, but because of the failure only half of Huygens' pictures came back and some experiments lost all their data. The cause was described as a "software commanding error"; the reality is that the receiver on Cassini was never programmed to switch on.

The second problem concerned the premise that Huygens transmits its collected data to the Cassini orbiter, which then relays it back to Earth. Three years after the launch, one of the space agency's employees became uneasy that this feature hadn't been tested enough in realistic conditions. The story is described in detail at www.spectrum.ieee.org/WEBONLY/publicfeature/oct04/1004titan.html and provides a sobering lesson in the importance of testing. This employee worked hard to convince colleagues and superiors of the importance of testing the link in real conditions, so a simulation was run: data was sent from Earth to Cassini mid-mission, as the craft hurtled toward Saturn, mimicking the conditions that would exist between Cassini and Titan at separation. The raw data was echoed back to Earth by Cassini and analyzed. It revealed a fundamental flaw.

Because Cassini and Huygens would be traveling at different speeds, there would be a Doppler shift. A Doppler shift occurs when waves are effectively compressed as the receiver and source move toward each other and stretched as they move apart. As the wavelength decreases, the frequency increases, meaning that Cassini would have to adjust its listening frequencies to account for its velocity relative to the Huygens transmitter. The decoding would also be affected. Digital data is split into ones and zeros and compared against a base signal to decipher it; the Doppler shift, however, would stretch and compress the lengths of the actual payload bits in the wave, so the digital signal couldn't be decoded correctly. A fix was required to rescue the $3.26 billion project.
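To put rough numbers on why this matters, here is a minimal sketch of the classical Doppler relation the passage describes. All figures in it are hypothetical, chosen only for illustration; they are not the actual Cassini-Huygens link parameters.

```python
# Classical Doppler relation: f_received = f_sent * (1 + v/c).
# All numeric values below are hypothetical, for illustration only.
C = 299_792_458.0  # speed of light, m/s

def doppler_shifted_freq(f_sent_hz: float, closing_speed_ms: float) -> float:
    """Frequency seen by the receiver; a positive speed means approaching."""
    return f_sent_hz * (1.0 + closing_speed_ms / C)

carrier = 2.040e9   # hypothetical ~2 GHz carrier
v = 5_500.0         # hypothetical closing speed, m/s

shift = doppler_shifted_freq(carrier, v) - carrier
print(f"carrier offset: {shift:.0f} Hz")

# The same factor compresses bit timing: at a hypothetical 8192 bit/s
# link, each received bit is slightly shorter than the decoder expects.
bit_period = 1.0 / 8192.0
received_period = bit_period / (1.0 + v / C)
print(f"bit period shrinks by {(bit_period - received_period) * 1e9:.1f} ns")
```

The second calculation is the heart of the bug: a carrier offset can be absorbed by retuning the receiver, but a decoder whose bit timing assumes an unshifted symbol rate will gradually drift out of step with the compressed data stream.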

Although Cassini's hardware allows its receiver to tune across a range of shifted frequencies, the firmware could not be modified after launch, even though a small fix would have sufficed. The solution chosen was to alter the trajectory of Cassini's orbits of Titan so that the radio transmissions would travel perpendicular to the craft's direction of motion, thereby reducing the Doppler shift.

The cost of the two Cassini bugs is huge. Coming back down to earth, they provoke questions about testing in general. A trait I've encountered at times in my career is for a program to be released knowing it is flawed, because the programmer hopes to ship the working version in a subsequent fixpack, ideally before the user has encountered the errant feature. Upgrading releases is easy for developers, but for a user who has to migrate data and schedule business downtime it's frustrating, and it must contribute to the perception that the latest release is not a set of fully baked features but a rollup of the previous version's fixes bundled with new features that perpetually introduce their own bugs.

Good testing is about attitude: a developer takes pride not just in the elegance or volume of his or her code, but in whether it meets the user's requirements and performs reliably in its first incarnation. I once heard a developer say that releasing buggy code was part of agile programming because it allowed more code/release/fix cycles. Apart from showing an immature grasp of the methodology, it betrayed a basic lack of pride in their work that they were trying to justify. The same excuses can also lead to bloatware, where code is piled upon code without tight design or adherence to the basic principles of software engineering. Is the problem that some developers are incapable of taking pride in the complete quality of their work, or that education and teaching, or marketing pressures in a commercial environment, still mean that buggy software is released? Next time you release code without proper testing, keep in mind the Cassini programmers who had to physically alter their craft's passage to find a solution. While we might have the next fixpack or release available to us, in space no one can hear your excuses.

More Stories By Joe Winchester

Joe Winchester, Editor-in-Chief of Java Developer's Journal, was formerly JDJ's longtime Desktop Technologies Editor and is a software developer working on development tools for IBM in Hursley, UK.

