CISQ. Software Quality Analysis Study 2020 – Part 1

This article contains a translation of part of the survey results – the “Engineering” section.
For convenience, here is a list of questions for which statistics were collected:

  • Do you currently use software quality analysis (SQA)?

  • Does your internal regulatory or quality function require your development teams to use SQA?

  • Is your team's level of autonomy related to the team's level of maturity in SQA and technical debt management?

  • What standards do developers use for SQA?

  • How often do you conduct SQA?

  • How often do you ignore SQA metrics?

  • What is the most common reason for ignoring SQA metrics?

  • Who benefits most from using SQA?

  • In what areas of code quality does SQA benefit the most?

  • Is your team using SQA data to improve processes?

  • Are you satisfied with using SQA tools?

The second part will contain a translation of the remaining two sections: “System integrators” and “Supplier management”.
----------

Introducing the State of the Industry report

The Consortium for Information and Software Quality (CISQ) has launched the State of the Industry survey, the first comprehensive software quality analysis survey covering tool vendors, system integrators, and managers and engineers in end-user organizations. The survey was open from July 2019 to January 2020. The impetus for this study was an alarming increase in the number of software quality incidents, as well as concerns among CISQ members that organizations were not meeting basic requirements. We wanted to understand how the shift to Agile development and DevOps is changing not only software quality practices, but also developers' attitudes and behavior regarding code quality. It is also important to understand how software quality standards are used by system integrators and end-user organizations: which standards apply, which sectors are driving their adoption, and how organizations benefit from them.

Methodology

Results in the State of the Industry Software Quality Analysis report are based on 82 online survey responses, 155 telephone conversations with enterprise IT leaders, their teams, and IT service provider managers, as well as discussions at CISQ-sponsored workshops and on LinkedIn forums. This report includes survey results, observations, and recommendations, and is divided into three sections: engineering, system integrators, and supplier management.

What do developers think about software quality analysis (SQA)?

The shift to Agile development and DevOps, and the increasing speed at which teams work, are unprecedented. Teams are releasing software at a rate never seen before. At the same time, we face much higher levels of risk and vulnerability.

Problems of low software quality and technical debt have existed since the advent of IT. It appears that we have reached a crisis point where we need to take these problems much more seriously and start treating developers as engineers. This means not only writing code, but also paying attention to other aspects of engineering, such as quality assurance, safety, reliability and long-term viability of the solutions being developed. With the shift to SaaS and cloud technologies, enterprises and engineering teams may think that this does not affect them, but this is not the case. SaaS solutions are created by engineers. If these solutions have weaknesses, it affects thousands, if not millions, of users.

In CISQ's State of the Industry report on software quality analysis, we wanted to test the hypothesis that reliability, security, performance efficiency, and maintainability, often lumped together as “non-functional requirements,” remain secondary to customer-facing features for product owners, managers, and teams. Let's move on to the survey results.

Are you currently using SQA?

Results: The majority of developers, 82%, report using software quality analysis. Moreover, 33% say they always use static code analysis, compared to 17% who always use dynamic code analysis. 32% of developers report that they often use static and dynamic code analysis, i.e. on a daily or weekly basis.

Observations: There was a perception that developers were reluctant to use software quality analysis in any form – static or dynamic. However, the data we collected shows that this is far from the truth, as more than 82% of developers claim to use software quality analysis in one form or another. We believe that the increased focus on continuous integration (CI) and continuous delivery (CD) in the DevOps community, as well as the availability of open source tools for analyzing software quality, are contributing to this.

Does your internal regulatory or quality function require your development teams to use SQA?

Results: 17% of developers report that the use of software quality analysis is mandatory as part of the internal regulatory function, a third note that it depends on the project, and another third believe that the regulatory function should require the use of SQA. These results show significant, and not yet fully realized, potential for closer collaboration between engineering and governance, risk, and compliance (GRC) functions to reduce software risk. The quality assurance (QA) function mirrors regulatory policy, with 17% of developers reporting that SQA is mandatory and 33% reporting that it depends on the project. However, a third of developers note that the QA function never requires the use of code analysis.

Observations: It is obvious that both the internal regulatory function and, surprisingly, the quality assurance function do not necessarily require the use of tools for analyzing code quality. It seems that the vast majority of developers make their own decision to use SQA. The project-specific approach remains dominant, and questions arise as to whether it is justified given the current level of IT and security risks.

Is your team's level of autonomy related to the team's level of maturity in SQA and technical debt management?

Results: 50% of developers report that the level of autonomy given to the team is informally related to the use of software quality analysis. 17% indicate that autonomy is formally linked to SQA. 33% of developers are unaware of any formal or informal connection between code analysis and team autonomy.

Observations: Agile and DevOps are based on an autonomous organizational structure with self-organizing teams. However, few organizations seem to require teams to earn that autonomy by following sound approaches and best practices. Only 17% of developers report that their teams are formally assessed on the quality of their code structure.

We find this somewhat surprising given the prevalence of technical debt issues in many organizations. Autonomy must be linked to best practices and behavior focused on software quality.

What standards do developers use for SQA?

Results: The most “frequently” used standards are the OWASP Top 10 and ISO 25000. Standards “sometimes” used include MISRA, MITRE CWE, SANS/CWE Top 25, and OMG/CISQ.

Observations: It is no surprise, given its prevalence in the industry, that the OWASP Top 10 list is one of the most frequently referenced standards. Note that there is a difference between being aware of and referencing a standard and developing code that conforms to that standard. MISRA is an example of what we expect to see more of in the future: industry-specific standards for software quality. We predict that cyber-physical devices and IoT will increase the use of domain-specific software quality standards.

How often do you conduct SQA?

Results: 60% of developers report that code is scanned daily or weekly, 20% indicate that code is scanned before release during the QA phase, and 20% report that scanning is reactive, performed on an as-needed basis when an issue arises.

Observations: More than half of developers report using SQA regularly, on a daily or weekly basis. This is expected given our view that DevOps and CI/CD are driving greater adoption of SQA. We should be careful not to overstate the importance of scan frequency: what matters most is what is done with the scan results and the refactoring that follows. There is still a high proportion of teams that do not perform SQA on a regular basis. Software quality analysis should be integrated into the DevOps toolkit.
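
As an illustration of what “integrated into the DevOps toolkit” can mean in practice, here is a minimal sketch of a CI quality-gate step. It assumes a Python codebase and the open-source analyzers flake8 and bandit; the tool choice, paths, and gate logic are illustrative assumptions, not part of the CISQ report.

```python
# A minimal sketch of a CI quality-gate step, assuming a Python codebase and
# the open-source analyzers flake8 (style/maintainability) and bandit
# (security). Tool choice, paths, and gate logic are illustrative only.
import subprocess
import sys

CHECKS = [
    ["flake8", "src"],        # static style and complexity checks
    ["bandit", "-r", "src"],  # static security checks
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print("Running:", " ".join(cmd))
        # Both tools exit non-zero when they report findings, so a plain
        # return-code check is enough for a simple quality gate.
        if subprocess.run(cmd).returncode != 0:
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

Running such a script on every push makes SQA part of the same feedback loop as the build and the unit tests, rather than a separate, occasional activity.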

How often do you ignore SQA metrics?

Results: 60% of developers report that they “sometimes” ignore SQA results, and 40% say they “rarely” do so.

Observations: While SQA tools have been around for some time, they are not perfect, and it should come as no surprise that they sometimes produce false positives, that is, they flag a vulnerability or weakness that does not actually exist in the code. Against that background, it is not surprising that developers sometimes choose to ignore the results. However, we have reason to hope that developers still take the time to refactor their code, as 40% say they only rarely ignore scan results. Development teams that frequently encounter false positives should invest in tuning their tools to reduce them. For the 60% of developers who “sometimes” ignore SQA reports, we believe this is due to using tools without proper setup and configuration. It is important that DevOps teams are supported by mature SQA tools that meet mature software quality standards, to minimize both false positives and false negatives.
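
One practical form of such tuning is to keep a reviewed baseline of accepted or known-false findings and fail the build only when new findings appear. The sketch below illustrates the idea with a hypothetical JSON findings format (the fields rule_id, file, snippet, and message are assumptions); real tools emit their own formats, such as SARIF, but the fingerprint-and-diff approach is the same.

```python
# Sketch: suppress previously reviewed findings via a baseline file and fail
# the build only on new findings. The findings format here is hypothetical
# (rule_id, file, snippet, message); real tools emit their own formats.
import hashlib
import json
from pathlib import Path

def fingerprint(finding: dict) -> str:
    """Stable identifier for a finding: rule + file + offending snippet."""
    key = f"{finding['rule_id']}|{finding['file']}|{finding.get('snippet', '')}"
    return hashlib.sha256(key.encode()).hexdigest()

def new_findings(report_path: str, baseline_path: str) -> list:
    findings = json.loads(Path(report_path).read_text())        # list of dicts
    baseline = set(json.loads(Path(baseline_path).read_text()))  # fingerprints
    return [f for f in findings if fingerprint(f) not in baseline]

if __name__ == "__main__":
    fresh = new_findings("sqa_report.json", "baseline.json")
    for f in fresh:
        print(f"{f['file']}: {f['rule_id']} - {f.get('message', '')}")
    # Only findings that are not in the reviewed baseline break the build.
    raise SystemExit(1 if fresh else 0)
```

The baseline itself should be reviewed periodically so that genuine weaknesses are not silently parked there forever.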

What is the most common reason for ignoring SQA metrics?

Results: The most common reason developers ignore SQA metrics is lack of time (40% of responses); 20% cite false positives, and the rest answered “not relevant” or “other.”

Observations: That only one-fifth of developers ignore SQA metrics because of false positives is a sign of increasing tool maturity; we expected this figure to be higher.

Businesses and application managers should note that 40% of developers ignore results due to lack of time. This runs contrary to Agile's value of delivering value to the customer and stems from a lack of understanding of the impact of non-functional requirements on the part of product owners and product managers. It is no surprise that, when we talk to developers, they often feel SQA is a waste of time when they receive conflicting signals from the business and its representatives. Software quality needs a champion within the company, and this issue must be led by the business.

Who benefits most from using SQA?

Results: The top 3 stakeholders who benefit most from SQA are, in order: 1) QA/Testers, 2) Developers, 3) Operations teams.

Observations: An interesting and somewhat surprising result is that so few developers believe that SQA directly benefits the end client or business sponsor. Given our previous point that 40% of developers ignore SQA due to lack of time, you can understand why they would pick up on the message from the business that it is not important to the client.

In what areas of code quality does SQA benefit the most?

Results: The top 3 areas of code quality that benefit most from SQA are, in order: 1) Reliability, 2) Maintainability, 3) Operational Efficiency.

Observations: It is clear that developers understand the relationship between SQA and the classic non-functional areas. What concerns us, going back to the earlier question about how often SQA metrics are ignored, is that these results are nonetheless set aside by teams. This also suggests an interesting observation. If developers believe that SQA does not provide significant value to the end customer, as we saw in the previous question, does that mean they do not consider reliability and performance important to the customer? These somewhat contradictory results bring us back to a familiar theme: non-functional requirements (NFRs) that are consciously or subconsciously viewed as unimportant. It is no surprise that reliability and safety rank high when it comes to SQA benefits.

Is your team using SQA data to improve processes?

Results: 60% of developers say their teams “rarely” use software analysis data in retrospectives or to improve processes, 20% do so “sometimes,” and 20% do so “always.”

Observations: Somewhat surprisingly, only one fifth of developers work in teams where SQA results are always used in the improvement process and in retrospectives. SQA is an indicator of the maturity of both individuals and teams in software development and is directly related to supporting practices and roles. We would have expected this figure to be higher. Our recommendation is to use SQA not only in retrospectives, but also as part of cross-training to help developers improve their technical and code architecture skills.

Are you satisfied with using SQA tools?

Results: 80% of developers report that they are “somewhat satisfied” with using code analysis tools, and 20% are “very satisfied.” None of the respondents reported being “dissatisfied.”

Observations: Although none of the survey or interview respondents directly said they disliked SQA tools, our impression is that there are people who are dissatisfied with using the tools but choose not to speak out about it. In most cases (80%) developers say they are only “somewhat satisfied.” This may be due to the lack of calibration and integration of many tools, leading to manual processes and false positives. Also, in general, developers may not like it when a machine (or anyone else) points out mistakes they've made.

We still have a way to go before SQA is seen as being as important as continuous integration (CI) and test-driven development (TDD). However, the conversation needs to start with the business and with moving away from the poorly named “non-functional requirements.” If reliability, performance, security, and maintainability are labeled as non-functional, they will continue to take a back seat to customer-facing features.

Conclusion

Time pressure on developers, product owners, and product managers exacerbates negative attitudes toward software quality analysis and non-functional requirements. We must acknowledge that developers ignore SQA results and are not particularly happy with using these tools. It is easy to blame the developers, but we believe this reflects the environment in which they operate. Our recommendation is to ban the term “non-functional requirements” (NFRs). Until we do that, NFRs will be treated as an afterthought. It is clear that developers are getting mixed signals. Despite the high percentage of developers using SQA, the percentage who use it proactively to improve processes and act on its findings is lower than we would like.

Recommendations to management

Application Managers and Scrum Masters must pay attention to their own behavior and attitude towards non-functional requirements. They must work proactively with product owners or project managers to ensure NFRs are given proper attention. We have found that conversations with the product management team that focus only on technical debt are not constructive; the discussion should be framed in terms of business outcomes and risks.

The management team must ensure that SQA is used correctly. For example, teams should tune SQA tools to reduce false positives, and management should ensure that SQA tools are integrated into the tool chain. We need to recognize that this may mean reorganizing the QA and testing functions so that teams do more quality testing related to NFRs.

Best practice is to ensure that the level of autonomy given to each team is clearly and consistently linked to the team's quantitative and qualitative key performance indicators (KPIs) or objectives and key results (OKRs), namely the team's ability to deliver high-quality code with a low level of technical debt and a high degree of maintainability, reliability, and performance. Obviously, code security is also a key indicator of team maturity. Teams that do not demonstrate consistent best practices in these areas will be limited in their release frequency, their ability to sign off changes without approval, and the frequency of code reviews.
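
To make the idea of earning autonomy through quality KPIs concrete, here is an illustrative sketch of an explicit autonomy policy. The metric names, thresholds, and tiers are hypothetical assumptions for illustration only, not taken from the CISQ report; each organization would define its own.

```python
# Illustrative only: one way to express "autonomy earned through quality KPIs"
# as an explicit policy. Metric names, thresholds, and tiers are hypothetical.
from dataclasses import dataclass

@dataclass
class TeamQualityKPIs:
    defect_density: float            # defects per KLOC from SQA scans
    tech_debt_ratio: float           # remediation effort / development effort
    critical_security_findings: int  # open critical findings

def autonomy_tier(kpis: TeamQualityKPIs) -> str:
    """Map a team's quality KPIs to a release-autonomy tier."""
    if kpis.critical_security_findings > 0:
        return "gated"        # every release needs explicit sign-off
    if kpis.defect_density < 0.5 and kpis.tech_debt_ratio < 0.05:
        return "autonomous"   # team releases on its own cadence
    return "supervised"       # periodic quality/architecture reviews required

print(autonomy_tier(TeamQualityKPIs(0.3, 0.04, 0)))  # -> autonomous
```

Writing the policy down in this form keeps the linkage between quality and autonomy transparent rather than implicit.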

Recognizing that we are in a world where organizations are giving development teams more autonomy, we must consider the need for enterprise-level standards. In this study, we found that teams with a designated person responsible for the standards the team will use have fewer software defect incidents, less technical debt, and more consistently configured tools, that is, fewer false positives. We therefore recommend that every team have a “quality champion” who promotes and supports the use of appropriate standards; this person does not have to take responsibility for quality, they simply serve as a champion for the team. We have found that communities of practice (CoPs) are a useful vehicle for developer-led standards development.

From a development culture perspective, and in line with Agile and DevOps, we need to shift the behavior and actions of teams from a quality assurance (QA) focus to a quality engineering (QE) focus. This goes beyond test-driven development (TDD) or behavior-driven development (BDD) practices and requires a fundamental, lean-inspired shift toward building quality in at every stage.

Development recommendations

It is surprising that the level of SQA usage is higher than we expected, but it is clear that we still have a long way to go before SQA is used effectively. Our first recommendation is for teams to stop using the term non-functional requirements (NFRs) and to train their business analysts and product owners on the importance of these qualities to the end customer.

We recommend treating SQA tools in the same way as CI/CD tools, integrating them into a highly automated tool chain. For this to be successful and not create problems for developers, SQA tools must be customized to the programming environment and relevant coding standards to reduce the number of false positives.
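
A minimal sketch of what “customized to the programming environment and relevant coding standards” might look like: the CI gate reads a project-level policy file and passes the team's agreed settings to the analyzer. It assumes a Python codebase and flake8; the policy file name and fields (sqa_policy.json, max_line_length, ignore_rules, paths) are hypothetical.

```python
# Sketch: a CI gate that reads a project-level policy file so the same
# automation adapts to each codebase's coding standard. Assumes a Python
# codebase and flake8; the policy file name and fields are hypothetical.
import json
import subprocess
import sys
from pathlib import Path

def run_gate(policy_path: str = "sqa_policy.json") -> int:
    policy = json.loads(Path(policy_path).read_text())
    # Example policy: {"max_line_length": 100, "ignore_rules": ["E203"], "paths": ["src"]}
    cmd = [
        "flake8",
        f"--max-line-length={policy['max_line_length']}",
        f"--extend-ignore={','.join(policy['ignore_rules'])}",
        *policy["paths"],
    ]
    print("Quality gate:", " ".join(cmd))
    return subprocess.run(cmd).returncode  # non-zero if violations were found

if __name__ == "__main__":
    sys.exit(run_gate())
```

Keeping the policy in the repository, next to the code it governs, also makes changes to the standard reviewable like any other change.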

Teams should use a data-driven approach to prioritize refactoring tasks. SQA tools, customized to the team's codebase and programming styles, can help prioritize the work and make the case to the Product Owner or Product Manager for allocating time to refactoring. To set prioritization goals and reduce internal disputes, we recommend consistent use of code quality standards alongside SQA tools. Additionally, teams should favor standards that can be checked automatically and do not require manual intervention.
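
As a sketch of the data-driven approach, one could combine finding severity with file churn so that weaknesses in frequently changed code rise to the top of the refactoring backlog. The findings format and severity weights below are hypothetical assumptions; the churn proxy uses standard git log options.

```python
# Sketch of a data-driven refactoring backlog: rank files by combining SQA
# finding severity with how often the file changes. The findings format and
# severity weights are hypothetical; churn comes from standard git log output.
import subprocess
from collections import Counter

SEVERITY_WEIGHT = {"critical": 8, "high": 4, "medium": 2, "low": 1}

def change_frequency(since: str = "6 months ago") -> Counter:
    """Count commits touching each file (a simple churn proxy)."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line.strip())

def prioritize(findings: list) -> list:
    """Return (file, score) pairs, highest refactoring priority first."""
    churn = change_frequency()
    scores = Counter()
    for f in findings:
        weight = SEVERITY_WEIGHT.get(f["severity"], 1)
        # A weakness in code that changes often costs more, so it ranks higher.
        scores[f["file"]] += weight * (1.0 + churn.get(f["file"], 0))
    return scores.most_common()

if __name__ == "__main__":
    sample = [{"file": "src/billing.py", "severity": "high"},
              {"file": "src/util.py", "severity": "low"}]
    for path, score in prioritize(sample):
        print(f"{score:8.1f}  {path}")
```

A ranked list like this gives the team concrete, shared evidence when negotiating refactoring time with the Product Owner.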

Given the low level of use of SQA metrics for process improvement, we recommend that all retrospectives include a review of SQA dashboards, even if it is just a quick review, to avoid complacency. If teams work in organizations where autonomy must be earned based on specific metrics, this becomes even more important.

Finally, teams need to create a compelling business case for using SQA, especially if they operate in environments where NFRs are viewed as secondary to customer-facing functionality. We have found that the best approach is to point out the cause-and-effect relationship between poor quality and customer experience, and the positive impact of a standards-based SQA approach in reducing technical refactoring, which today typically consumes 10-15% of sprint time.

“Non-functional requirements are not emphasized enough in project management and are one of the leading causes of budget overruns and project failures. Non-functional requirements are critical to success and more attention needs to be paid to preparing for maintainability, which is critical to total cost of ownership.”
Dr. Barry Boehm, Chief Scientist, SERC, TRW Professor of Software Engineering and Director of the Software Engineering Center at the University of Southern California
