Software complexity is a natural byproduct of the functional complexity that the code is attempting to enable. With multiple system interfaces and complex requirements, the complexity of software systems sometimes grows beyond control, rendering applications and portfolios overly costly to maintain and risky to enhance. Left unchecked, software complexity can run rampant in delivered projects, leaving behind bloated, cumbersome applications.
The software engineering discipline has established some common measures of software complexity. Perhaps the most widely used is McCabe's cyclomatic complexity metric, which measures the number of linearly independent paths through a piece of code's control flow: roughly, the number of decision points plus one. (McCabe also defined a separate essential complexity metric, which gauges how far a routine's control flow departs from well-structured constructs; the two are often conflated.)
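To make the definition concrete, here is a minimal sketch of cyclomatic complexity computed over Python source: count the decision points and add one for the entry path. The set of AST node types treated as decision points is an assumption for illustration, not an exhaustive treatment of every construct.

```python
import ast

# Node types counted as decision points in this sketch (an assumption;
# a production tool would handle more constructs, e.g. match statements).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: decision points + 1 for the entry path."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # → 3 (two branches plus the entry path)
```

Note that the `elif` contributes its own decision point, because Python represents it as a nested `if` in the syntax tree.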
Using cyclomatic complexity by itself, however, can produce misleading results. A module can be internally complex yet have few interactions with outside modules. Another can be internally simple yet so highly coupled to other modules that it raises the overall complexity of the codebase. In the first case the complexity metrics look bad; in the second they look good, but the picture is deceptive. It is therefore important to also measure the coupling and cohesion of the modules in the codebase to arrive at a true system-level software complexity measure.
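One simple, hedged proxy for the coupling side of that picture is efferent coupling, or fan-out: the number of distinct modules a file depends on. The sketch below counts only import statements; a real coupling analysis would also track call sites and shared data.

```python
import ast

def fan_out(source: str) -> int:
    """Count distinct top-level modules imported by a source file."""
    tree = ast.parse(source)
    deps = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return len(deps)

# A module whose body is trivial can still carry significant coupling.
simple_but_coupled = """
import os
import json
from collections import defaultdict
from itertools import chain

def merge(paths):
    pass
"""
print(fan_out(simple_but_coupled))  # → 4 distinct dependencies
```

A module like this would score well on cyclomatic complexity alone, which is exactly why coupling must be measured alongside it.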
Fred Brooks, in his landmark paper, "No Silver Bullet — Essence and Accidents of Software Engineering," asserts that there are two types of complexity. Essential complexity is the unavoidable complexity required to fulfill the functional requirements. Accidental complexity is the additional complexity introduced by poor design or a lack of complexity management. Left unchecked, accidental complexity worsens the total cost of ownership (TCO) of an application and adds risk to the business.
Excess software complexity can negatively impact developers' ability to manage the interactions between layers and components in an application. It can also make specific modules difficult to enhance and to test. Every piece of code must be assessed for its effect on the application's robustness and changeability. Software complexity is a major concern among organizations that manage numerous technologies and applications within a multi-tier infrastructure.
Without dependable software complexity metrics, it can be difficult and time-consuming to pinpoint the architectural hotspots from which risk and cost emanate. More importantly, continuous software complexity analysis enables project teams and technology management to get ahead of the problem and prevent excess complexity from taking root.
When measuring complexity, it is important to look holistically at coupling, cohesion, SQL complexity, use of frameworks, and algorithmic complexity. It is also important to have an accurate, repeatable set of complexity metrics, consistent across the technology layers of the application portfolio, to provide benchmarking for continued assessment as changes are implemented to meet business or user needs. A robust software complexity measurement program gives an organization a consistent baseline for assessing risk and cost as the portfolio evolves.
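The several dimensions above can be rolled into a single benchmarkable number for trend tracking. The sketch below combines normalized per-dimension scores with a weighted sum; the dimension names and weights are illustrative assumptions, not an industry standard.

```python
# Illustrative weights per complexity dimension (assumed, not standardized).
WEIGHTS = {
    "cyclomatic": 0.30,
    "coupling": 0.25,
    "cohesion_deficit": 0.20,   # lower cohesion -> higher deficit score
    "sql_complexity": 0.15,
    "algorithmic": 0.10,
}

def composite_score(metrics: dict) -> float:
    """Weighted sum of per-dimension scores, each normalized to 0-100."""
    return round(sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS), 2)

# Hypothetical baseline measurement for one application.
baseline = {"cyclomatic": 40, "coupling": 70, "cohesion_deficit": 30,
            "sql_complexity": 20, "algorithmic": 10}
print(composite_score(baseline))  # → 39.5
```

Recomputing the same score after each release gives the repeatable benchmark the measurement program calls for.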
Automated analysis based on defined software complexity algorithms provides a comprehensive assessment regardless of application size or frequency of analysis. Automation is objective, repeatable, consistent, and cost-effective. A software complexity measurement regime should be implemented by any organization attempting to increase the agility of software delivery.
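One common way to operationalize such a regime is a complexity gate in the delivery pipeline: a check that flags any module exceeding an agreed threshold. This is a minimal sketch; the module names, scores, and threshold are hypothetical.

```python
# Assumed project threshold; teams typically tune this per portfolio.
THRESHOLD = 15

def gate(module_scores: dict) -> list:
    """Return the modules whose complexity score exceeds the threshold."""
    return sorted(name for name, score in module_scores.items()
                  if score > THRESHOLD)

# Hypothetical per-module scores produced by an earlier analysis step.
scores = {"billing.py": 22, "auth.py": 9, "report.py": 17}
offenders = gate(scores)
if offenders:
    print("Complexity gate failed:", ", ".join(offenders))
    # In a real CI job, exit nonzero here (e.g. sys.exit(1)) to fail the build.
```

Running the gate on every commit is what makes the analysis continuous rather than a one-off audit.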
Learn more about CAST software complexity solutions.