Software Assurance Adoption through Open Source Tools

Posted: November 2, 2017 | By: Corbin Moyer, Patrick Hart

Software security engineering as a discipline is receiving increased attention across the Department of Defense (DoD) as a mission enabler. Historically, the DoD applied an engineering approach that was independent of the type of product: hardware and software followed the same generic engineering principles, focused on areas such as systems integration, reliability, maintainability, and test. Attention to aspects such as lifecycle cost has risen in recent years, but awareness of secure software development as a component of that cost has yet to reach mainstream operations of Program Management Offices (PMOs). It is no longer enough to test functionality alone; security is now a critical enabler of success, and change needs to happen.

During the early-to-mid 2000s, the industry at large was experiencing a shift in the maturation of software security processes and tools. The open community saw the establishment of the non-profit OWASP Foundation in 2004, after the Open Web Application Security Project (OWASP) began work in 2001 (1). OWASP creates and promotes freely and widely available documents, references, tools, and general knowledge for the sake of more secure software. In the Government space, the Information Assurance Technology Analysis Center (IATAC) and the Data and Analysis Center for Software (DACS) published a Software Security Assurance State-of-the-Art Report (SOAR) in 2007 (2). This SOAR discusses definitions of secure software along with advice for development lifecycles, secure coding recommendations, metrics, and design patterns for secure software. Also in 2007, the Air Force Application Software Assurance Center of Excellence (ASACoE) was beginning to assist programs with hands-on support (3). The message was clear: secure software matters to Industry and Government.

The implication from over 10 years of think tanks and gap analyses is that change does not happen with multi-hundred-page documents of policy and governance. Change happens at the program level with actionable, reasonable, achievable steps. Many legacy software acquisitions are already on multi-year contracts where expectations are firmly established. Specialized tools are everywhere, but securing tens of thousands or hundreds of thousands of dollars for commercial tools or expert personnel carries huge programmatic overhead, and PMOs are frequently hard-pressed for this extra room in the budget. While these commercial solutions may be high-quality, adoption has been slow due to barriers of cost, training, and lifecycle restrictions.

Adoption of Open Solutions

The Air Force Lifecycle Management Center (AFLCMC) Cyber Systems Engineering Division (EZC) has started using and advocating for free or open source tools. EZC is the home of the Air Force Command and Control (C2) and Rapid Cyber Acquisition (RCA) Security Controls Assessor (SCA). This SCA is responsible for security assessments of all Air Force developed C2 systems and software, along with cyber-related urgent operational needs. From years of prior assessments, there was a clear pattern that PMOs had accounted for and practiced system security but had not applied that same rigor and discipline to software.

EZC has advocated the philosophy of software security as a discipline of proper engineering rigor. Engineering principles and open source tools allow for productive and collaborative conversations with existing PMO resources. The idea of leveraging existing, readily available resources to stand up new processes resonates with program managers, as it removes barriers such as purchase orders and contract negotiations. It is unsurprising, then, that PMOs start to consider adoption of open source tools as an overnight opportunity. While this is a logical, pragmatic approach, it is not without special considerations. The remainder of this article discusses EZC’s recommended tools and lessons as a case study in the potential benefits to existing software acquisitions.

Lessons from the Lab

A handful of open source tools have been identified and used successfully in various stages of the software development lifecycle (SDLC). These include FindBugs, OWASP Dependency Check, OWASP Zed Attack Proxy (ZAP), cppcheck, Wireshark, and SonarQube. They were eventually chosen as go-to recommendations for programs in the SCA’s C2 and RCA portfolios. These tools have strong organizations, transparent management, welcoming communities, and are actively updated for new threats and vulnerabilities. While the SCA’s engineering team recognizes special considerations for utilizing open source tools on classified networks, those warrant a separate discussion.

It is first worth examining the licenses of these applications to make sure there are no restrictions on Government use. Most of the open source licenses examined are chiefly concerned with preventing a single party from profiting off the community’s tool; using a tool to make one’s own application better is well within the scope of common licenses, with a general expectation that any improvements propagate back to the community. A summary of the licenses for these applications can be seen in the following table. Note that specific questions an organization may want to ask, such as “Can the Government contribute back?” and “Can the Government make private modifications?”, are not covered here; this summary applies specifically to unmodified use in a development or lab environment.

Table 1: Tools and Licenses

Name                          License       Government Restriction?   Available From
FindBugs (Univ of Maryland)   GNU GPL 3.0   No                        gnu.org
Dependency Check (OWASP)      Apache 2.0    No                        github (DC)
ZAP (OWASP)                   Apache 2.0    No                        apache.org
Cppcheck                      GNU GPL 3.0   No                        github.com
Wireshark                     GNU GPL 2.0   No                        wireshark.org
SonarQube                     GNU GPL 3.0   No                        sonarqube.org

FindBugs is an application developed by the University of Maryland. Its primary purpose is finding bugs via “bug patterns” in Java applications, broken down into categories like “Bad Practice”, “Correctness”, “Malicious Code Vulnerability”, and “Dodgy Code” (4). Unlike typical static analysis tools, it analyzes compiled Java bytecode rather than source code. This has the benefit of not requiring access to the source, but means it has a higher potential for false positives. In practice, the tool is effective at noticing potentially bad patterns, but it does require manual curation to become truly useful. Contractors have shown minimal resistance to implementing it, although it often requires tight collaboration between Contractor and PMO (or EZC) engineers, and there are cases where this tight relationship is difficult to achieve.
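
As a purely hypothetical illustration (the class below is invented for this article, not drawn from any assessed program), the following Java snippet contains the kinds of patterns FindBugs typically reports, such as comparing strings by reference, ignoring a method’s return value, and dereferencing a possibly null value; exact detector names and categories vary by version.

```java
// Hypothetical example: patterns of the kind FindBugs typically flags.
import java.io.File;

public class AuditExample {

    // Typically flagged under "Correctness": compares String references with
    // == instead of equals(), so logically equal strings may not match.
    public boolean isAdmin(String role) {
        return role == "admin";
    }

    // Typically flagged under "Bad Practice": the boolean result of
    // File.delete() is ignored, so a failed deletion goes unnoticed.
    public void cleanUp(String path) {
        new File(path).delete();
    }

    // Typically flagged as a possible null pointer dereference: s is checked
    // for null on one path but dereferenced unconditionally afterwards.
    public int safeLength(String s) {
        if (s == null) {
            System.out.println("received a null string");
        }
        return s.length();
    }
}
```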

As of this writing, the most recent version of the tool was released in March of 2015. While the project has official sponsorship, it is not focused on regular updates; the focus is on supporting new technologies as they develop (e.g., Java 8) rather than constant patching. It has Apache Ant support for build integration and an Eclipse plugin for individual developers. For a large subset of contractors, this again points toward manual integration.

Overall, FindBugs has seen very successful use across the SCA’s portfolios. Its strongest use case is post-development scanning rather than continuous integration. Only Ant is officially supported for build integration; developers building with tools like Maven or Gradle will require more work, although some community support exists for those environments. The interface of the default application is bare-bones but functional, and PMO and developer engineers typically pick up the basics without much explanation, provided they understand software development. For PMOs struggling to implement static analysis, FindBugs has been a considerable success. With proper expectation management, the tool performs well and has helped identify many major flaws across the portfolios.

OWASP Dependency Check (OWASP DC) is a tool promoted as a mature flagship product by OWASP. It serves a simple but important function: it analyzes the dependencies (such as third-party libraries) in an application to see if there are any publicly known vulnerabilities. It currently supports Java and .NET, with limited support for Python, Ruby, and Node.js (5). Dependency Check collects build information to see if there is a Common Platform Enumeration (CPE) identifier for each dependency; if so, associated Common Vulnerabilities and Exposures (CVE) information is listed in an HTML-based report (5). It is still actively and frequently maintained.

The primary drawback of OWASP DC is its dependence on an Internet connection. Upon running, the tool attempts to reach the National Vulnerability Database (NVD) to download its data feeds (6). In classified environments the tool cannot connect, and if it has never been run before it cannot establish its local NVD data, which means it cannot complete a scan. This creates hurdles for Government applications on a secure network such as SIPRNet. The OWASP DC documentation provides some possible workarounds, such as mirroring the NVD locally. Another drawback is that its interface is the command line; it has some limited integration options, but may require manual tweaking. These problems must be solved on a case-by-case basis and may create a burden for the PMO or the developer.

OWASP DC has nonetheless proven very valuable for EZC engineers, especially among older or less actively developed projects. It is surprisingly common for developers of legacy applications to not fully understand their dependencies. This is especially true of research projects that have evolved into “full” applications in the field. OWASP DC has on several occasions provided EZC with necessary information to make sure databases, frameworks, and even cryptography libraries are tracked and updated. EZC continues to use and recommend it.

OWASP ZAP is an application security scanner and penetration tester. It is a powerful tool capable of providing proxy interception, web spidering and exploration, fuzz testing, and passive scanning (7). OWASP ZAP provides results tailored for keeping web-based applications safe from attack. It is another mature flagship product for the OWASP Foundation, and it is actively maintained and developed with a strong community. It has many build integration opportunities and has a relatively clean user interface. However, new users unfamiliar with the concepts of web application vulnerabilities may find its results confusing.

The application, then, is not limited by its features so much as by the audience that can adequately and accurately interpret and use its results. Web application vulnerability and penetration tests are traditionally led by “red teams” of specialized individuals. Because web application development is less common in government systems, there are fewer software engineers who specialize in this area. This means that an attempt to run ZAP or integrate it into the software development process may still miss problems due to misconfiguration or improper understanding of the results.

With these limitations in mind, only one developer in EZC’s experience has actively inserted ZAP into its development process. That implementation was specific to penetration testing and was inserted after the build process itself. It has proven to be a mixed blessing: raw results could not be fed back to every developer, and some did not fully understand the impact of findings or dismissed valid results as false positives. Careful review with EZC uncovered the need to start slowly. The need for a red team is not eliminated, and dedicated penetration testers remain valuable; however, every small step forward in cybersecurity is important, especially in automated build environments. EZC continues to work with the PMO to define a process that best utilizes the results.

The fourth tool recommended by EZC is cppcheck, a static analysis tool for C and C++ code. It features many integrations, including Eclipse, Jenkins, and several pre-commit hooks for version control (8). It specifically targets errors and does not point out every bad practice or style deviation, which has implications for its findings. Whereas a typical security application may over-report things that are not issues, cppcheck’s “zero false positive” philosophy means that in practice it is more likely to miss a real issue than to flag a non-issue. This is a good illustration of why using two static analysis tools, whenever possible, is the better approach.

Cppcheck is also in large part developed by one person, with a handful of volunteers in support. This creates an interesting point of contention for government use: while the tool has built a strong reputation for reliable results, there is a known single point of failure. According to public GitHub data, 55.9% of all commits from February 9, 2016 through February 11, 2017 came from this one individual (10). As of this writing it is still actively developed on GitHub. The codebase is also automatically scanned by Coverity, with all results published publicly (9). The combination of public source code and public code analysis helps instill confidence. EZC has recommended cppcheck with no ill effect, but this is a consideration any potential customer should be aware of.

Even with this knowledge available, EZC has recommended cppcheck to several applications with positive feedback. One contractor even integrated it into their build environment to continue scanning with every build, pushing results back to individual developers. The standalone interface is spartan but has a low learning curve. Result files are also easily exportable for sharing with other engineers or organizations. While cppcheck may work best in conjunction with a second tool, its results have proven reliable and useful.

With so much attention focused on code scanning and analysis, it is easy to overlook tools like Wireshark. Wireshark is a widely adopted packet analyzer for network capture and diagnostics. It has been a stalwart application, remaining under the leadership of its original developer, Gerald Combs (11). It is currently sponsored by Riverbed, an American IT networking company. Wireshark is in such widespread use in the security community that it has become a de facto standard for learning packet analysis.

Wireshark’s strengths and weaknesses are well documented and well understood. It is primarily useful after development, during application testing. EZC has successfully used it to analyze network activity in a lab environment, which is useful for identifying unwanted connections and verifying functionality. However, its effective use relies heavily on the user understanding networking and packet analysis. As such, it has a limited use case in the software development lifecycle. While EZC is very comfortable using and recommending it, its scope of appropriate use must be determined beforehand.

The final tool for consideration is SonarQube, perhaps the tool best suited for continuous integration. SonarQube is an analyzer for finding bugs, but it also provides overall “health” checks and a centralized place for developers and managers to view and collaborate on security-related information. It has support for several build systems, like Ant and Maven, as well as continuous integration engines like Jenkins and Bamboo. SonarQube bills itself as more of a security management tool and should be used alongside tools like cppcheck or ZAP.

SonarQube has the most modern user interface of the tools discussed so far, albeit with a relatively plain design. It performs best when used as part of a continuous integration platform (12). However, not all DoD acquisition developers are set up to operate this way. Many developers on existing programs still use waterfall development or a hybrid agile-waterfall approach; they do not have continuous integration engines like Jenkins set up, nor do they have a culture of continuous fixing and patching. This makes SonarQube difficult to adopt in this style of development. Additionally, any developer wishing to share results with a PMO may need to give the PMO Virtual Private Network (VPN) access or save and email reports. This eliminates some of the benefits of SonarQube and provides the PMO with no strong benefit over a program like Dependency Check.

With these expectations in mind, EZC frequently recommends that new projects build continuous test integration into software and security engineering from the start. SonarQube is one reliable free tool to accomplish this goal, and it has robust support. While it may be difficult to incorporate after the fact, it is useful for tracking security metrics throughout development, an incredibly important function for modern application development regardless of the tool used. Education is an important step in the current state of security, and EZC works closely with programs to establish security from the very start.

Recommendations

Each of these tools has a specific place in the software development lifecycle. The choice of what type of tool to use is almost as important as the choice of the tool itself. To this end, PMOs need to take a proactive approach to software security engineering based on where they are in application development. Table 2 below shows where in the development lifecycle a developer is most likely to discuss and utilize the tools outlined above. Note that a good software development lifecycle is cyclical and continuous, not linear as depicted in the table.

Table 2: Software Development Lifecycle Applications

Planning Analyzing Designing Implementing Testing Maintaining
FindBugs X X X
OWASP DC X X X
ZAP X X X
Cppcheck X X X
Wireshark X X
SonarQube X X X X X

The tools discussed in this article are heavily skewed towards the implementing, testing, and maintaining portion of the software development lifecycle. This appears to be the area in which developers traditionally receive the least amount of security training. It is also the part of the lifecycle where problems are hardest to spot by manual review. There is a trade-off between human review and automatic review, and the open source community seems to have tackled the implementation and maintenance stages first.

FindBugs, cppcheck, and SonarQube are especially useful when trying to map to the Software Engineering Institute (SEI) Secure Coding standards for the C, C++, and Java languages. These tools have many of their unique finding identifiers mapped to the recommendations and rules that comprise the SEI standards. The SEI website provides this mapping (13). EZC has found this valuable in mapping to the Risk Management Framework (RMF) and the Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Application Security and Development.
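
As a brief, hypothetical illustration of the kind of finding these mappings cover (the class and field names below are invented for this example), the following Java snippet returns a reference to a private mutable array. FindBugs reports this pattern under its “Malicious Code Vulnerability” category, and findings of this kind map to the SEI CERT Java rule on not returning references to private mutable class members (OBJ05-J).

```java
// Hypothetical example: a finding of the kind that static analysis tools
// map to SEI CERT secure coding rules (exposing private mutable state).
import java.util.Arrays;

public class MissionConfig {

    private final String[] allowedHosts = { "ops-server-1", "ops-server-2" };

    // Non-compliant: callers receive the internal array itself and can
    // modify it, silently changing security-relevant configuration.
    public String[] getAllowedHostsUnsafe() {
        return allowedHosts;
    }

    // Compliant alternative: return a defensive copy so internal state
    // cannot be altered from outside the class.
    public String[] getAllowedHosts() {
        return Arrays.copyOf(allowedHosts, allowedHosts.length);
    }
}
```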

Conclusion

PMOs must use increasingly limited resources to solve security engineering problems. As such, open source tools can readily be used as part of the development process. They are a reliable, actionable way for PMOs to make their systems more secure in the short term, and they can be an incredibly important piece of a robust application security program in the long run. Smart organizations recognize that these tools cannot be a substitute for software security engineering. Software should still abide by principles such as following open standards, using standard interfaces, and avoiding tight coupling. Once applications have a well-thought-out design and their usage has been accounted for, these automated tools can help developers find and fix bugs early.

Secure software development is about culture, drive, and expectation. EZC encourages the DoD at large to examine open source tools, embrace the secure software community, and share best practices. Open source tools are a great start and can be a catalyst or building block of a strong software security engineering program. Given the DoD’s advanced threat landscape and large software acquisition community, we hope to see broader embracing and adoption of open source software security tools and practices.

References

  1. About The Open Web Application Security Project. (2017, March 16). Retrieved April 04, 2017, from https://www.owasp.org/index.php/About_The_Open_Web_Application_Security_Project#The_OWASP_Foundation
  2. Goertzel, K. M., et al. (2007). Software Security Assurance: State-of-the-Art Report (SOAR). IATAC/DACS. Retrieved April 03, 2017, from http://www.dtic.mil/dtic/tr/fulltext/u2/a472363.pdf
  3. Kleffman, M., Maj. (2008). Application Software Assurance Center of Excellence (ASACOE). Retrieved April 04, 2017 from https://www.acsac.org/2008/program/case-studies/Kleffman.pdf
  4. FindBugs Bug Descriptions. (2015, March 06). Retrieved April 04, 2017, from http://findbugs.sourceforge.net/bugDescriptions.html
  5. OWASP Dependency Check. (2017, January 23). Retrieved April 04, 2017, from https://www.owasp.org/index.php/OWASP_Dependency_Check
  6. Dependency Check. (2017, January 22). Retrieved April 04, 2017, from http://jeremylong.github.io/DependencyCheck/data/index.html
  7. OWASP Zed Attack Proxy Project. (2017, April 04). Retrieved April 04, 2017, from https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
  8. Cppcheck. (2017, April 01). Retrieved April 04, 2017, from http://cppcheck.sourceforge.net/
  9. Coverity Scan: cppcheck. (2017, January 17). Retrieved April 04, 2017, from https://scan.coverity.com/projects/512
  10. Contributors to danmar/cppcheck. (n.d.). Retrieved April 04, 2017, from https://github.com/danmar/cppcheck/graphs/contributors?from=2016-02-09&to=2017-02-11&type=c
  11. Wireshark. (2017, March 03). Retrieved April 04, 2017, from https://www.wireshark.org/
  12. Features. (n.d.). Retrieved April 04, 2017, from https://www.sonarqube.org/features/
  13. SEI CERT Coding Standards. (2017, March 28). Retrieved April 07, 2017, from https://www.securecoding.cert.org/confluence/display/seccode/SEI+CERT+Coding+Standards
