Software Archeology: What Is It and Why Should Java Developers Care?

The Java language is mature, and most new Java projects don't start from scratch

The process of Software Archeology can save a significant amount of work, or in many cases rework. So what challenges do developers face when asked to take on these kinds of projects?

  • What have I just inherited?
  • What pieces should be saved?
  • Where are the scary sections of the code?
  • What kind of development team created this?
  • Where are the performance spots I should worry about?
  • What's missing that will most likely cause me significant problems downstream in the development process?
The overall approach breaks down into a six-step process. By the time a team has finished, and has reviewed what is there and what is not, this process can go a long way toward defining the go-forward project development strategy. The six steps are:
  • Visualization: a visual representation of the application's design.
  • Design Violations: an understanding of the health of the object model.
  • Style Violations: an understanding of the state the code is currently in.
  • Business Logic Review: the ability to test the existing source.
  • Performance Review: where are the bottlenecks in the source code?
  • Documentation: does the code have adequate documentation for people to understand what they're working on?
Most developers regard these steps as YAP (Yet Another Process), but in reality many of them should already be part of a developer's daily routine, so adopting them shouldn't be overwhelming. The next question is: can these tasks be done by hand? From a purely technical point of view, the answer is yes, but in today's world of shorter timelines and elevated user expectations, the time needed to do this by hand is unacceptable.

So if it really can't be done by hand, what tools are needed to get the job done? Let's break the process down step by step and look at the tools that could be used to complete each task. Some advanced IDEs include all of these tools, and there are open source tools that may be able to do some parts of the job.

Visualization is the first step to understanding what kind of code the developer will be working with. It always amazes me how many developers have never looked at a visualization of the code they've written. Many times, key architecture issues can be discovered just by looking at an object diagram of the system. Things like relationships between objects and levels of inheritance can be a real eye-opener. The old adage is true: a picture can be worth a thousand lines of code. When thinking about visualization in an object-oriented language like Java, UML diagrams seem to be widely used and understood. Being able to reverse-engineer the code into a class diagram is the first tool that's needed. Later in the process it will be important to reverse-engineer methods into sequence or communication diagrams for a better understanding of extremely complex classes and methods.

Once the system has been visualized and reviewed, the next step is to examine it for design violations. This can be done using static code metrics, which give the developer or team a way to check the health of the object design. Basic measurements like lines of code (LOC) or the ever-important cyclomatic complexity (CC) can tell the reviewer a great deal.

Many developers have no idea how big or small the application they're working on is, or where its most complex parts are located. Using a select number of metrics, developers can pinpoint "trouble" areas; these should be marked for further review, because they are normally the areas most likely to need modification. Further analysis can also be done on methods flagged as overly complex by generating sequence diagrams. These diagrams offer a condensed graphical representation and make it much easier for developers and management to understand the task of updating or changing the methods. Another valuable metric is JUnit test coverage (JUC). In many cases, inherited code comes with few or no JUnit tests, which should raise major concerns about making changes to the system. The biggest concern will most likely become how to ensure that changes made to the code, or the fixes implemented, are correct and don't break other parts of the system. By using the information generated by the metrics tools, developers get a better understanding of what's been inherited and some of the complications around the product.
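As a rough illustration of how a metrics tool arrives at a cyclomatic complexity number, the sketch below counts decision points in a snippet of Java source. The class name and token list are invented for this example; a real metrics tool parses the code rather than searching for substrings.

```java
public class ComplexityEstimate {
    // Naive cyclomatic-complexity estimate for a Java snippet: 1 plus the
    // number of decision points found by simple substring search. A real
    // metrics tool builds a parse tree; this sketch only illustrates the
    // idea (it would miscount tokens inside identifiers or string literals).
    static int estimate(String source) {
        String[] decisions = {"if", "for", "while", "case", "catch", "&&", "||", "?"};
        int complexity = 1;
        for (String token : decisions) {
            int idx = source.indexOf(token);
            while (idx != -1) {
                complexity++;
                idx = source.indexOf(token, idx + token.length());
            }
        }
        return complexity;
    }

    public static void main(String[] args) {
        String method = "if (a > 0 && b > 0) { for (int i = 0; i < n; i++) total += i; }";
        // One if, one &&, one for: three decision points, so complexity 4.
        System.out.println(estimate(method)); // prints 4
    }
}
```

Methods scoring far above the rest of the code base are exactly the "trouble" areas worth marking for review.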

Style violations help complete the picture of the inherited code. Many developers argue that static code audits should be run first, and that's true from a new-project perspective. When inheriting massive amounts of code, however, running metrics first usually gives more information about object health. Once the health of the object design has been determined, pointing to the areas of the code that need significant work, the audits can further refine that knowledge.

Static code audits include all kinds of rule checks that look for code consistency, standards, and bad practices. Audit tools like ours include 200+ audits and will help in understanding the complexity of the application under review. Advanced audit tools include rules for finding things like god classes, god methods, feature envy, and shotgun surgery. These advanced audits actually use some of the metrics to give reviewers more information. Take god methods, for example: a god method is a method that gets called from everywhere, meaning that from an object design standpoint it has too much responsibility, so changing that one method could have a dramatic effect on the entire system. Or look at feature envy, which is almost the exact opposite of a god class: a class that doesn't do much and perhaps should be refactored back into its calling class. When estimating how much time to give a particular enhancement, or determining what kind of code has been inherited, this kind of low-level understanding is worth a lot.
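One common form of feature envy, a method more interested in another class's data than its own, can be sketched in a few lines of Java. The Order and Invoice classes below are hypothetical, invented purely for illustration: the fix moves the calculation onto the class that owns the data.

```java
// Hypothetical classes for illustration; the names are not from the article.
class Order {
    double price;
    int quantity;
    double taxRate;

    Order(double price, int quantity, double taxRate) {
        this.price = price;
        this.quantity = quantity;
        this.taxRate = taxRate;
    }

    // After the refactoring, the total calculation lives with the data it uses.
    double total() {
        return price * quantity * (1 + taxRate);
    }
}

class Invoice {
    // Before: return o.price * o.quantity * (1 + o.taxRate);
    // That version reached into three fields of Order -- the envy smell an
    // advanced audit would flag.
    double amountDue(Order o) {
        return o.total();
    }
}

public class FeatureEnvyDemo {
    public static void main(String[] args) {
        Order order = new Order(10.0, 2, 0.5);
        System.out.println(new Invoice().amountDue(order)); // prints 30.0
    }
}
```

The behavior is unchanged; what improves is where responsibility lives, which is exactly what the audit is measuring.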

Business logic review focuses on the testability of an application. Using advanced metrics, the amount of testing in place can be determined in a few minutes. Inheriting a large amount of code and finding that no unit tests exist for it will have a dramatic effect on estimates for enhancements, or make the developers realize they probably have no way to verify that changes to the system are correct. The tools needed for testing business logic should include a code coverage product and an integrated unit testing product like JUnit. Having one of the two is okay, but having both opens many new testing possibilities. First, by running the unit tests with a code coverage tool, the code being tested can be verified. Code coverage can also stand in when the advanced audit tools discussed above aren't available, and a good code coverage tool will show all classes and methods included in a test run. An advanced audit like shotgun surgery will highlight a method that has many dependencies, but using unit testing and code coverage together ensures that changes to these kinds of methods can be fully tested and verified. Another advantage of a code coverage tool shows up in QA, which can run the product test scripts with code coverage turned on. This tells them two things: whether the test script is complete, and whether there's test coverage for all of the application's code. The good thing about this piece of Software Archeology is that it usually can only get better: by adding tests, the end result should be a better-running system.
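To make the coverage point concrete, here is a minimal sketch using plain runtime checks rather than an actual JUnit suite, so it stays self-contained. The discount rule is a hypothetical inherited business rule, invented for this example.

```java
public class DiscountCheck {
    // Hypothetical inherited business rule, invented for illustration.
    static double discount(double total, boolean loyalMember) {
        if (loyalMember) {
            return total - 10.0; // flat loyalty discount
        }
        return total;
    }

    public static void main(String[] args) {
        // One check per branch: a coverage tool run alongside these checks
        // would report both paths of discount() as exercised. With only the
        // first check, the non-member path would show up as a coverage gap,
        // and a change to that path would go out unverified.
        boolean loyalOk = discount(100.0, true) == 90.0;
        boolean regularOk = discount(100.0, false) == 100.0;
        System.out.println(loyalOk && regularOk ? "both branches pass" : "failure");
    }
}
```

In a real project the checks would live in a JUnit test class, but the principle is the same: the coverage report tells you which branches your tests never reached.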

A good profiler is key to the performance review. Using the tools and results from the business logic review, performance issues can be uncovered and fixed. A key figure to remember is that only around 5% of the code causes most performance issues, so having a handle on where the code is complex makes ongoing maintenance faster and easier.
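Short of a full profiler, a crude timing harness can at least confirm whether a suspect method sits in that hot 5%. The sketch below is a stand-in, not a profiler substitute: it measures wall-clock time for one call, whereas a real profiler also gives call trees and per-method breakdowns.

```java
public class HotspotTimer {
    // Minimal wall-clock timing sketch. Useful for a quick sanity check on a
    // suspect call; a real profiler remains the right tool for the full review.
    static long elapsedMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long ms = elapsedMillis(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        });
        System.out.println("suspect loop took " + ms + " ms");
    }
}
```

A handful of measurements like this, taken on the methods the metrics already flagged as complex, quickly separates the real bottlenecks from code that merely looks scary.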

The last step is documentation. All this work is great for the developer, reviewer, or team trying to understand the system, but it would be even better if it could be captured and used going forward. An automatic documentation generator saves time, reduces overhead, and helps keep the documentation up to date. This makes it easier for new members joining a team, or for the application to be handed off to another team.
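As a small example of documentation a generator can consume, the hypothetical class below carries standard Javadoc comments; running the JDK's javadoc tool over source like this produces browsable HTML reference pages with no extra writing effort.

```java
/**
 * Illustrative sketch: comments in this form feed the standard javadoc tool,
 * which generates browsable HTML documentation straight from the source.
 * The class and method here are hypothetical, not from the article.
 */
public class InterestCalculator {

    /**
     * Computes simple (non-compounding) interest.
     *
     * @param principal the starting amount
     * @param rate      the annual rate as a fraction, e.g. 0.25 for 25%
     * @param years     the number of years the principal accrues interest
     * @return the interest earned over the period
     */
    public static double simpleInterest(double principal, double rate, int years) {
        return principal * rate * years;
    }

    public static void main(String[] args) {
        System.out.println(simpleInterest(1000.0, 0.25, 2)); // prints 500.0
    }
}
```

Because the comments live beside the code, they are far more likely to be updated when the code changes than a separate document would be.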

The ideas behind Software Archeology are fairly straightforward; this article approached it from the perspective of inheriting a large amount of code and becoming responsible for it. Other expeditions into the code could turn up useful design patterns, great algorithms to reuse, or major pitfalls to avoid. Software is an asset, and Software Archeology can ensure we get the most out of that investment.

More Stories By Mike Rozlog

Mike Rozlog is with Embarcadero Technologies. In this role, he is focused on ensuring the family of Delphi developer products being created by Embarcadero meets the expectations of developers around the world. Much of his time is dedicated to discussing and explaining the technical and business aspects of Embarcadero's products and services to analysts and other audiences worldwide. Mike was formerly with CodeGear, a developer tools group that was acquired by Embarcadero in 2008. Previously, he spent more than eight years working for Borland in a number of positions, including a primary role as Chief Technical Architect. An established author, Mike has been published numerous times. His latest collaboration is Mastering JBuilder from John Wiley & Sons, Inc.


