Software Engineering and Code Quality Goals You Should Nail Before 2018

Responsible IT managers need to change the way they think about software development

When applications crash due to code quality issues, the common question is, "How could those experts have missed that?" The problem is, most people imagine software development as a room full of developers, keyboards clacking away with green, Matrix-esque code filling up the screen as they try to perfect the newest ground-breaking feature. In reality, however, most of the work developers do is maintenance: fixing bugs found in production code to bring it up to a higher level of code quality.

Not only does this severely reduce the business value IT can bring to the table, it also drives up the cost of developing and maintaining quality applications. And even though the IT industry has watched these costs climb for years, it has done little to stem the rising tide. The time has come to draw a line in the sand.

Capers Jones, VP and CTO of Namcook Analytics, recently released a collection of 20 goals software engineers should be aiming to reach by 2018, and we think it is a great starting point for getting software engineering teams focused on fixing the problems in front of them rather than just spinning their gears.

However, having ambitious goals is only part of the challenge. In our experience, most organizations aren't equipped to meet these goals because:

  • Functional testing isn't enough
  • Code analyzers are myopic
  • Productivity measurement is manual and laborious

Responsible IT managers need to change the way they think about software development and arm their teams with better tools and processes if they want to come close to achieving any of these goals. This starts with gaining better visibility into software risk, performance measurement, portfolio analysis, and quality improvement - and it needs to be instantaneous, not quarterly. The problems are happening now, in development, and management wastes precious time and money by waiting until testing to put it all together and work out the kinks.

Once management has a transparent view into the code quality of its application portfolio, it can shift its focus to achieving the software engineering goals outlined by Jones. They're great goals to aspire to, but let's make sure we're not putting the cart before the horse.

  1. Raise defect removal efficiency (DRE) from < 90.0% to > 99.5%. This is the most important goal for the industry. It cannot be achieved by testing alone but requires pre-test inspections and static analysis. DRE is measured by comparing all bugs found during development to those reported in the first 90 days by customers (a worked calculation follows this list).
  2. Lower software defect potentials from > 4.0 per function point to < 2.0 per function point. Defect potentials are the sum of bugs found in requirements, design, code, user documents, and bad fixes. Requirements and design bugs often outnumber code bugs. Achieving this goal requires effective defect prevention such as joint application design (JAD), quality function deployment (QFD), certified reusable components, and others. It also requires a complete software quality measurement program and better training in the common sources of defects found in requirements, design, and source code.
  3. Lower cost of quality (COQ) from > 45.0% of development to < 20.0% of development. Finding and fixing bugs has been the most expensive task in software for more than 50 years. A synergistic combination of defect prevention, pre-test inspections, and static analysis is needed to achieve this goal.
  4. Reduce average cyclomatic complexity from > 25.0 to < 10.0. Achieving this goal requires careful analysis of software structures, and of course it also requires measuring cyclomatic complexity for all modules (a measurement sketch follows this list).
  5. Raise test coverage from < 75.0% to > 98.5% for risks, paths, and requirements. Achieving this goal requires using mathematical design methods for test case creation, such as design of experiments. It also requires measurement of test coverage.
  6. Eliminate error-prone modules in large systems. Bugs are not randomly distributed. Achieving this goal requires careful measurements of code defects during development and after release with tools that can trace bugs to specific modules. Some companies such as IBM have been doing this for many years. Error-prone modules (EPM) are usually less than 5% of total modules but receive more than 50% of total bugs. Prevention is the best solution. Existing error-prone modules in legacy applications may require surgical removal and replacement.
  7. Eliminate security flaws in all software applications. As cyber-crime becomes more common, the need for better security grows more urgent. Achieving this goal requires use of security inspections, security testing, and automated tools that seek out security flaws. For major systems containing valuable financial or confidential data, ethical hackers may also be needed.
  8. Reduce the odds of cyber-attacks from > 10.0% to < 0.1%. Achieving this goal requires a synergistic combination of better firewalls, continuous anti-virus checking with constant updates to viral signatures, and increased immunity in the software itself by means of changes to basic architecture and permission strategies.
  9. Reduce bad-fix injections from > 7.0% to < 1.0%. Not many people know that about 7% of attempts to fix software bugs introduce new bugs in the fixes themselves, commonly called "bad fixes." When cyclomatic complexity tops 50, the bad-fix injection rate can soar to 25% or more. Reducing bad-fix injection requires measuring and controlling cyclomatic complexity, using static analysis for all bug fixes, testing all bug fixes, and inspecting all significant fixes prior to integration.
  10. Reduce requirements creep from > 1.5% per calendar month to < 0.25% per calendar month. Requirements creep has been an endemic problem of the software industry for more than 50 years. While prototypes, agile embedded users, and joint application design (JAD) are useful, it is technically possible to also use automated requirements models to improve requirements completeness.
  11. Lower the risk of project failure or cancellation on large 10,000 function point projects from > 35.0% to < 5.0%. Cancellation of large systems due to poor quality and cost overruns is an endemic problem of the software industry, and totally unnecessary. A synergistic combination of effective defect prevention and pre-test inspections and static analysis can come close to eliminating this far too common problem.
  12. Reduce the odds of schedule delays from > 50.0% to < 5.0%. Since the main reasons for schedule delays are poor quality and excessive requirements creep, solving some of the earlier problems in this list will also solve the problem of schedule delays. Most projects seem on time until testing starts, when huge quantities of bugs begin to stretch out the test schedule to infinity. Defect prevention combined with pre-test static analysis can reduce or eliminate schedule delays.
  13. Reduce the odds of cost overruns from > 40.0% to < 3.0%. Software cost overruns and software schedule delays have similar root causes; i.e. poor quality control combined with excessive requirements creep. Better defect prevention combined with pre-test defect removal can help to cure both of these endemic software problems.
  14. Reduce the odds of litigation on outsource contracts from > 5.0% to < 1.0%. Jones, who compiled these goals, has been an expert witness in 12 breach of contract cases. All of these cases seem to share similar root causes, which include poor quality control, poor change control, and very poor status tracking. A synergistic combination of early sizing and risk analysis prior to contract signing, plus effective defect prevention and pre-test defect removal, can lower the odds of software breach of contract litigation.
  15. Lower maintenance and warranty repair costs by > 75.0% compared to 2014 values. Starting in about 2000, the number of U.S. maintenance programmers began to exceed the number of development programmers. IBM discovered that effective defect prevention and pre-test defect removal reduced delivered defects to such low levels that maintenance costs were reduced by at least 45% and sometimes as much as 75%.
  16. Improve the volume of certified reusable materials from < 15.0% to > 75.0%. Custom designs and manual coding are intrinsically error-prone and inefficient no matter what methodology is used. The best way of converting software engineering from a craft to a modern profession would be to construct applications from libraries of certified reusable material; i.e. reusable requirements, design, code, and test materials. Certification to near zero-defect levels is a precursor, so effective quality control is on the critical path to increasing the volumes of certified reusable materials.
  17. Improve average development productivity from < 8.0 function points per month to > 16.0 function points per month. Productivity rates vary based on application size, complexity, team experience, methodologies, and several other factors. However, when all projects are viewed in aggregate, average productivity is below 8.0 function points per staff month. Doubling this rate requires a combination of better quality control and much higher volumes of certified reusable materials; probably 50% or more.
  18. Improve work hours per function point from > 16.5 to < 8.25. This goal and goal 17 are essentially the same but use different metrics, with one important difference: work hours are the same in every country. For example, a project in Sweden with 126 work hours per month will require the same number of work hours as a project in China with 184 work hours per month, but the Chinese project will need fewer calendar months than the Swedish one (a worked comparison follows this list).
  19. Shorten average software development schedules by > 35.0% compared to 2014 averages. The most common complaint of software clients and corporate executives at the CIO and CFO level is that big software projects take too long. Surprisingly it is not hard to make them shorter. A synergistic combination of better defect prevention, pre-test static analysis and inspections, and larger volumes of certified reusable materials can make significant reductions in schedule intervals.
  20. Raise maintenance assignment scopes from < 1,500 function points to > 5,000 function points. The metric "maintenance assignment scope" refers to the number of function points that one maintenance programmer can keep up and running during a calendar year. The range is from < 300 function points for buggy and complex software to > 5,000 function points for modern software released with effective quality control. The current average is about 1,500 function points. This is a key metric for predicting maintenance staffing for both individual projects and corporate portfolios (a staffing sketch follows this list). Achieving this goal requires effective defect prevention, effective pre-test defect removal, and effective testing using modern mathematically based test case design methods. It also requires low levels of cyclomatic complexity.
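
To make goal 1 concrete, here is a minimal sketch of the DRE arithmetic. The defect counts are hypothetical, chosen only to illustrate the calculation, not drawn from any real project:

```python
# Minimal sketch of the DRE calculation from goal 1.
# The defect counts below are illustrative placeholders, not data from the article.

def defect_removal_efficiency(found_before_release: int, found_in_first_90_days: int) -> float:
    """DRE = defects removed before release / total defects (internal plus first 90 days in the field)."""
    total = found_before_release + found_in_first_90_days
    return 1.0 if total == 0 else found_before_release / total

# Example: 950 bugs removed before release, 50 reported by customers in the first 90 days.
print(f"DRE = {defect_removal_efficiency(950, 50):.1%}")  # 95.0% -- above 90% but short of the 99.5% target
```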
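
Goals 4 and 9 both depend on actually measuring cyclomatic complexity. The sketch below uses Python's standard ast module with a deliberately simplified counting rule (1 plus one point per branching construct); a production code analyzer handles far more cases, but the idea is the same:

```python
# Minimal sketch of flagging functions that exceed the goal-4 complexity ceiling of 10.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    # Simplified rule: 1 + one point per branching construct; branches inside nested
    # functions are counted toward the enclosing function in this sketch.
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

def functions_over_threshold(source: str, threshold: int = 10):
    """Return (name, complexity) pairs for functions exceeding the threshold."""
    tree = ast.parse(source)
    scores = [(f.name, cyclomatic_complexity(f))
              for f in ast.walk(tree) if isinstance(f, ast.FunctionDef)]
    return [(name, score) for name, score in scores if score > threshold]

# Example usage against a source file:
# print(functions_over_threshold(open("some_module.py").read()))
```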
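
The country comparison in goal 18 reduces to simple arithmetic. This sketch assumes a hypothetical 1,000-function-point project at today's average of 16.5 work hours per function point:

```python
# Minimal sketch of the goal-18 comparison: identical work hours, different calendar durations.
# The 1,000-function-point project size is an illustrative assumption.

def calendar_months(function_points: float, hours_per_fp: float, work_hours_per_month: float) -> float:
    """Calendar months for a single-person effort = total work hours / hours available per month."""
    return (function_points * hours_per_fp) / work_hours_per_month

FP, HOURS_PER_FP = 1_000, 16.5
print(f"Sweden (126 h/month): {calendar_months(FP, HOURS_PER_FP, 126):.1f} months")  # ~131 months
print(f"China  (184 h/month): {calendar_months(FP, HOURS_PER_FP, 184):.1f} months")  # ~90 months
```

The total work hours are identical in both cases; only the calendar time differs, which is exactly the distinction the goal draws.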
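
Goal 20's metric translates directly into staffing estimates. This sketch assumes a hypothetical 150,000-function-point corporate portfolio:

```python
# Minimal sketch of staffing estimation from maintenance assignment scope (goal 20).
# The 150,000-function-point portfolio is an illustrative assumption.
import math

def maintenance_staff(portfolio_fp: float, assignment_scope_fp: float) -> int:
    """Programmers needed = portfolio size / function points one programmer can support per year."""
    return math.ceil(portfolio_fp / assignment_scope_fp)

PORTFOLIO = 150_000
print(maintenance_staff(PORTFOLIO, 1_500))  # 100 programmers at today's ~1,500 FP average
print(maintenance_staff(PORTFOLIO, 5_000))  # 30 programmers at the 5,000 FP goal
```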

Read the original blog entry...

More Stories By Lev Lesokhin

Lev Lesokhin is responsible for CAST's market development, strategy, thought leadership and product marketing worldwide. He has a passion for making customers successful, building the ecosystem, and advancing the state of the art in business technology. Lev comes to CAST from SAP, where he was Director, Global SME Marketing. Prior to SAP, Lev was at the Corporate Executive Board as one of the leaders of the Applications Executive Council, where he worked with the heads of applications organizations at Fortune 1000 companies to identify best management practices.
