Engineering Software Assurance into Weapons Systems During the DoD Acquisition Life Cycle

Posted: November 2, 2017 | By: Dr. Scott M. Brown, Mr. Thomas Hurt

Software assurance (SwA) is the “level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software, throughout the life cycle.” [4] The latest change to Department of Defense (DoD) Instruction (DoDI) 5000.02, Operation of the Defense Acquisition System [1], includes a new enclosure on cybersecurity (Enclosure 14) that outlines several required actions DoD acquisition Program Managers (PMs) must implement to ensure system security and related program security across the acquisition, sustainment, and operation life cycle.

This article provides the start of a SwA user’s guide: a set of recommended, tailorable “best practice” SwA activities a PM can take during development and sustainment of weapon and other systems. These best practices are grounded in software and systems engineering and include suggested activities; expectations for the conduct of Systems Engineering Technical Reviews (SETRs), with entrance and exit criteria; Program Protection Planning (PPP) considerations; and specific applications of SwA tools and methods during the DoD acquisition life cycle phases. The intent is to “engineer in” SwA up front, starting with system requirements and using existing methods, tools, and processes, rather than attempting to “bolt on” SwA at the end of system implementation. After system development, PMs have little latitude to address risks and vulnerabilities without dramatically impacting cost, schedule, and/or performance of the weapon system.

Background

In February 2017 a Defense Science Board Task Force on Cyber Supply Chain summarized the need for a full life cycle approach to SwA, stating “[b]ecause system configurations typically remain unchanged for very long periods of time, compromising microelectronics can create persistent vulnerabilities. Exploitation of vulnerabilities in microelectronics and embedded software can cause mission failure in modern weapon systems…. Cyber supply chain vulnerabilities may be inserted or discovered throughout the life cycle of a system. Of particular concern are the weapons the nation depends upon today; almost all were developed, acquired, and fielded without formal protection plans.” [2]

The Office of the Deputy Assistant Secretary of Defense for Systems Engineering (ODASD(SE)) leads DoD in key areas of cyber resilient systems, program protection, system security engineering (SSE), and system assurance to better understand and promote how the defense portfolio should handle evolving engineering and security challenges. The need for this focus is also reflected in National Defense Authorization Acts in recent years [3, 4, and 5] as well as in observations of programs by the Office of the Secretary of Defense (OSD), the Military Services, and defense agencies (e.g., National Security Agency, National Reconnaissance Office, and Missile Defense Agency).

Public Law 111-383, National Defense Authorization Act (NDAA) for Fiscal Year 2011, Section 932, STRATEGY ON COMPUTER SOFTWARE ASSURANCE, required the Secretary of Defense to submit a DoD strategy for assuring the security of software and software-based applications of critical systems. A key element of the strategy was to develop “[m]echanisms for protection against compromise of information systems through the supply chain or cyberattack by acquiring and improving automated tools for—(A) assuring the security of software and software applications during software development; (B) detecting vulnerabilities during testing of software; and (C) detecting intrusions during real-time monitoring of software applications.” The mandated report to Congress provided the strategy, which focused on information assurance and cybersecurity policy and guidance. The strategy included tools and techniques for test and evaluation (T&E) and for detecting and monitoring software vulnerabilities.

Public Law 112-239, NDAA for Fiscal Year 2013, Section 933, IMPROVEMENTS IN ASSURANCE OF COMPUTER SOFTWARE PROCURED BY THE DEPARTMENT OF DEFENSE, directed USD(AT&L) to develop policy that “requires use of appropriate automated vulnerability analysis tools in computer software code during the entire life cycle of a covered system, including during development, operational testing, operations and sustainment phases, and retirement.”

Public Law 113-66, the NDAA for Fiscal Year 2014, Section 937, JOINT FEDERATED CENTERS FOR TRUSTED DEFENSE SYSTEMS FOR THE DEPARTMENT OF DEFENSE, directed DoD to establish and charter a federation of capabilities to support trusted defense systems and ensure the security of software and hardware developed, acquired, maintained, and used by the Department. The statute stated a key charter responsibility of the federation was to set forth “the requirements for the federation to procure, manage, and distribute enterprise licenses for automated software vulnerability analysis tools.” The resulting Joint Federated Assurance Center (JFAC), chartered by the Deputy Secretary of Defense, declared Initial Operational Capability (IOC) in 2016. The JFAC is a federation of DoD organizations with a shared interest in promoting software and hardware assurance in defense acquisition programs, systems, and supporting activities. The JFAC has sought to:

  • Operationalize and institutionalize assurance capabilities and expert support to programs
  • Organize to better leverage the DoD, interagency, and public/private sector assurance-related capabilities, and
  • Influence research and development investments and activities to improve assurance technology, methodology, workforce training, and more.

In early 2017, the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) updated DoDI 5000.02 to include a new Enclosure 14, “Cybersecurity in the Defense Acquisition System.” The policy states in part, “Program managers, assisted by supporting organizations to the acquisition community, are responsible for the cybersecurity of their programs, systems, and information. This responsibility starts from the earliest exploratory phases of a program, with supporting technology maturation, through all phases of the acquisition. Acquisition activities include system concept trades, design, development, T&E, production, fielding, sustainment, and disposal.” PMs can request assistance from the JFAC, such as subject matter expertise; tools and capabilities to support program software and hardware assurance needs; knowledge; supporting software and hardware assurance contract requirements; access to state-of-the-art T&E; training; and licenses to a suite of software vulnerability analysis tools.

Technical risk management is a fundamental program management and engineering process that should be used to detect and mitigate vulnerabilities, defects, and weaknesses in SW and HW so they do not become breachable cyber vulnerabilities in deployed systems. Cyber vulnerabilities provide potential exploitation points for adversaries to disrupt mission success by stealing, altering, or destroying system functionality, information, or technology. PMs describe in their PPPs the program’s critical program information and mission-critical functions and components; the threats to and vulnerabilities of these items; and the plan to apply countermeasures to mitigate or remediate associated risks. Software is typically predominant in system functionality, and SwA is one of several countermeasures that should be used in an integrated approach that also includes information safeguarding, designed-in system protections, “defense-in-depth” and layered security, supply chain risk management (SCRM) and assurance, hardware assurance, anti-counterfeit practices, anti-tamper, and program security-related activities such as information security, operations security (OPSEC), personnel security, physical security, and industrial security. SwA vulnerabilities and risk-based remediation strategies are assessed, planned for, and included in the PPP from a time frame early enough that resources can be planned and obtained.

Based on the recent DoDI 5000.02 update on cybersecurity, DoD is leading development and implementation of the supporting practices, guidance, tools, and workforce competencies to ensure PMs have the ability to mitigate cybersecurity risks and vulnerabilities. Key assurance gaps exist in Program Management Office (PMO) activities to implement SwA within their programs, as identified in the JFAC SwA Capability Gap Analysis recently approved by the JFAC Steering Committee. This gap analysis is required by Public Law 113-66, National Defense Authorization Act (NDAA) for Fiscal Year 2014, Section 937. Where policy provides for the assessment of planning activities during development (e.g., DoDI 5000.02 mentions program support assessments and Preliminary Design Review (PDR) and Critical Design Review (CDR) assessments), software assurance should be an explicit consideration of those execution assessments.

Following are recommended SwA execution actions a PM can take during development, sustainment, and operation of weapon systems. These activities may be considered by DoD for inclusion in future policy and guidance.

Software Assurance in the DoD Acquisition Life Cycle

The Defense Acquisition Process, as provided in DoDI 5000.02, is a tailorable multi-phased development and sustainment process for all DoD programs, using six acquisition models. The phases, from Materiel Solution Analysis to Operations and Support, contain multiple milestones, decision points, and technical reviews. Within this process, program management, systems engineering, T&E, and other acquisition disciplines execute their own individual but interrelated processes, which include SwA.

The development and sustainment of software is a major portion of the total system life-cycle cost, and software assurance should be considered at every phase, milestone, decision point and technical review in the acquisition life cycle both to reduce cost and to repel cyberattacks. A range of SwA activities must be planned and executed to gain assurance that any system containing software will perform operationally as expected, and only as expected. These activities blend into the entire life cycle, from requirements, to design, to implementation, to testing, to fielding, and to operation of the software. Figure 1 shows the DoD acquisition life cycle, and the tables below describe activities that should be tailored and employed among the phases and technical reviews in its process. Some of these assurance activities are also applied iteratively during the software development life cycle (not shown) whenever and wherever those software development activities occur during the DoD acquisition life cycle, such as in block, agile, or DevOps approaches.

Figure 1: Software Assurance spans the entire DoD Acquisition life cycle.

Neglecting SwA in early life cycle activities (such as development planning, requirements, architecture assessment, design, and code development) will increase the cost of achieving assurance during later life cycle activities (such as operational testing and sustainment). But all life cycle phases require attention in the implementation of SwA. For example, thorough design and code review, use of static and origin analysis SwA tools, and follow-on remediation of findings will both complement testing and reduce the resources expended during testing. Some flaws are more readily found through SwA tools used during review and analysis, others through dynamic analysis in testing, and certain software vulnerabilities are only detectable through manual analysis. Also, the costs and benefits of specific assurance activities (e.g., code review, static code analysis, fuzz testing, and penetration testing) vary depending on the programming language, the development environment, the availability of source code, the attack surface, the characteristics of the program, interoperability with other systems, and the criticality of the software in the context of the system.
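
Because no single tool class detects every flaw, programs commonly consolidate findings from several analyzers before triage. The sketch below is a minimal illustration, assuming each SwA tool can export its results as a simple JSON list of findings with file, line, CWE, and tool fields (a simplified, hypothetical format; real tools emit formats such as SARIF that would need their own parsers). It deduplicates overlapping findings and groups them by CWE so reviewers can see which weakness classes multiple tools agree on.

```python
import json
from collections import defaultdict
from pathlib import Path


def load_findings(report_path):
    """Load one tool's report: a JSON list of {"file", "line", "cwe", "tool"} dicts.
    (Hypothetical simplified format; adapt to each tool's actual output.)"""
    return json.loads(Path(report_path).read_text())


def merge_findings(report_paths):
    """Deduplicate findings reported at the same file/line/CWE and record
    which tools reported each one."""
    merged = {}
    for path in report_paths:
        for f in load_findings(path):
            key = (f["file"], f["line"], f["cwe"])
            merged.setdefault(key, set()).add(f["tool"])
    return merged


def summarize_by_cwe(merged):
    """Group unique findings by CWE so reviewers can prioritize weakness classes."""
    by_cwe = defaultdict(list)
    for (file_, line, cwe), tools in sorted(merged.items()):
        by_cwe[cwe].append((file_, line, sorted(tools)))
    return by_cwe


if __name__ == "__main__":
    reports = ["tool_a_report.json", "tool_b_report.json"]  # placeholder file names
    summary = summarize_by_cwe(merge_findings(reports))
    for cwe, findings in summary.items():
        print(f"{cwe}: {len(findings)} unique finding(s)")
        for file_, line, tools in findings:
            print(f"  {file_}:{line}  reported by {', '.join(tools)}")
```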

Table 1 through Table 5 identify SwA considerations and specific activities associated with each phase of the acquisition life cycle. If a program is initiated later in the life cycle, for example at Milestone B, select activities from earlier phases may still be appropriate for consideration in later phases as determined by assessment of the tactical or operational use of the system compared with mission threads and system requirements. If a program is using an iterative development approach, SwA tools and methodology should be applied to individual software module development, then to integration testing and software builds so that vulnerabilities in software code are detected when they are generated, and remediated according to likelihood and consequence of adversarial attack.
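
The phrase “remediated according to likelihood and consequence of adversarial attack” can be made concrete with a simple risk-scoring pass over detected vulnerabilities. The sketch below is illustrative only: it uses an assumed 1-to-5 likelihood and consequence scale in the spirit of the DoD risk management guide [6], and a program would substitute its own scoring rules and thresholds.

```python
from dataclasses import dataclass


@dataclass
class Vulnerability:
    identifier: str   # e.g., an internal tracking ID or a CWE/CVE reference
    likelihood: int   # 1 (not likely) .. 5 (near certainty) -- assumed scale
    consequence: int  # 1 (minimal impact) .. 5 (mission failure) -- assumed scale


def risk_score(v: Vulnerability) -> int:
    """Simple likelihood x consequence score; a program may use its own matrix."""
    return v.likelihood * v.consequence


def remediation_order(vulns):
    """Highest-risk vulnerabilities first, so remediation effort follows risk."""
    return sorted(vulns, key=risk_score, reverse=True)


if __name__ == "__main__":
    backlog = [
        Vulnerability("VULN-0042", likelihood=4, consequence=5),
        Vulnerability("VULN-0117", likelihood=2, consequence=3),
        Vulnerability("VULN-0008", likelihood=5, consequence=2),
    ]
    for v in remediation_order(backlog):
        print(f"{v.identifier}: risk {risk_score(v)}")
```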

The Joint Federated Assurance Center (JFAC) website https://www.acq.osd.mil/se/initiatives/init_jfac.html, accessible by Common Access Card, provides a broad spectrum of assistance in planning and operation of assurance as an underpinning SSE activity. It also provides tools-as-a-service for all DoD programs and organizations in support of the listed activities. Four examples are assurance service providers, access to subject matter expertise, the Assessment Knowledge Base, and SwA engineering tools. The JFAC community spans DoD and can be of help at any point in the acquisition life cycle. Consider how JFAC might support a program’s needs in each of the tables below.

Table 1: Software Assurance Considerations During the Materiel Solution Analysis Phase – Source: Author

SOFTWARE ASSURANCE CONSIDERATIONS (MSA Phase)

  • Identify SwA roles, responsibilities, and assurance needs for the program (i.e., staffing, tools, training, etc.); plan for SwA training and resourcing.
  • For the risk management process, develop understanding of how the deployed system may be attacked via software and use that understanding to scope criticality and threat analyses that are summarized in the PPP.
  • Plan assessments and map tactical use threads, mission threads, system requirements, system interoperability, and functionality upgrades from the existing deployed system, and maintain the mapping as metadata through the last upgrade in sustainment (a minimal traceability sketch follows this table).
  • Identify system requirements that may map to software and SwA requirements to facilitate trade-offs and studies to optimize functional architecture and system design, and planning and resourcing to mitigate software vulnerabilities, risks, and life cycle cost. For an integration-intensive system that relies substantially on non-development/commercial off-the-shelf (COTS)/government off-the-shelf (GOTS) items, trade space analysis can provide important information to understand the feasibility of capability and mission requirements as well as assurance of the non-developmental software supply chain. Consider alternatives to refine the system concept of implementation and optimize for modularity and digital engineering; ensure contract language for assurance reduces technical and programmatic risk. Support contracts should be part of this early solution analysis, to articulate/manage government technical data rights that later impact SwA.
  • Select secure design and coding standards for the program based on system functionality.
  • Plan and resource for the use of automated tools that determine assurance for or that detect vulnerabilities, defects, and weaknesses in requirements, allocation of requirements to functional architecture, functional architecture, allocation of functions to system design, system design, allocation of design modules to software design, coding and unit testing, and integration testing. Identify JFAC SwA service providers to assist with SwA planning and services and engage as necessary.
  • Develop SwA activities interconnected across the system life cycle and document in the program software engineering planning document and in the program Integrated Master Schedule (IMS).
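
As noted above, the mapping from mission threads and system requirements to software and SwA requirements should be retained as metadata through sustainment. A minimal sketch of such a traceability record follows; the field names and requirement identifiers are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field


@dataclass
class TraceRecord:
    """One traceability link kept as metadata for downstream assessments.
    Field names are illustrative, not a prescribed DoD schema."""
    mission_thread: str
    system_requirement: str
    swa_requirements: list = field(default_factory=list)  # derived SwA requirements
    verification: list = field(default_factory=list)      # planned SwA tools/tests


def unmapped(records):
    """Flag system requirements with no derived SwA requirement -- a gap to review."""
    return [r.system_requirement for r in records if not r.swa_requirements]


if __name__ == "__main__":
    trace = [
        TraceRecord("Engage target", "SYS-REQ-101",
                    swa_requirements=["SWA-REQ-7: validate all guidance inputs"],
                    verification=["static analysis", "fuzz testing of input parser"]),
        TraceRecord("Report status", "SYS-REQ-214"),  # no SwA requirement yet
    ]
    print("System requirements lacking SwA coverage:", unmapped(trace))
```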

Table 2: Software Assurance Considerations During the Technology Maturation and Risk Reduction Phase – Source: Author

SOFTWARE ASSURANCE CONSIDERATIONS (TMRR Phase)

  • Incorporate SwA requirements, tool use, metrics, and assurance thresholds into solicitations. Architectures, designs, and code developed for prototyping are frequently reused later in development.
  • Assess system functional requirements and verification methods for inclusion of SwA tools, methodologies, and remediation across the development life cycle.
  • Assess that SwA requirements are correct and complete. Consider means of attack, such as insiders and adversaries using malicious inserts; system characteristics; interoperability with other systems; mission threads; and other factors. Assure that mapping and traceability are maintained as metadata for use in all downstream assessments.
  • Establish the baseline architecture and review it for weaknesses (e.g., using the Common Weakness Enumeration (CWE)), susceptibility to attack (e.g., using Common Attack Pattern Enumeration and Classification (CAPEC)), and likelihood of attack success for each detected weakness; identify potential attack entry points and mission impacts (see the sketch after this table). Consider which families of automated SwA engineering tools are needed for vulnerability or weakness detection.
  • Review architecture and design for adherence to secure design principles and assess soundness of architectural decisions considering likely means of attack; programming language choices; development environments; frameworks; and use of open source software, etc.
  • Identify and mitigate technical risks through competitive prototyping while engineering in assurance. System prototypes may be physical or mathematical models and simulations that emulate expected performance. High-risk concepts may require scaled models to reduce uncertainty that is too difficult to resolve purely by mathematical emulation. SW prototypes that reflect the results of key trade-off analyses should be demonstrated during the TMRR phase. These demonstrations will provide SW performance data (e.g., latency, security architecture, integration of legacy services, graceful function degradation and re-initiation, and scalability) to inform decisions on maturity. Further, EMD estimates (schedule and life cycle cost) often depend on reuse of SW components developed in TMRR; therefore, to prevent technical debt, SwA considerations must be taken into account in that prototype software.
  • Develop a comprehensive system-level architecture, then a design (addressing function integrity, assurance of the functional breakout, function interoperation, and separation of function) that covers the full scope of the system in order to maintain capabilities across multiple releases and provide the fundamental basis to fight through cyberattack. A program focused on a given SW build/release/increment may only produce artifacts for that limited scope; however, vulnerability assessments often interact, so they apply system-wide, across all builds/releases/increments and interfaces to interoperating systems, and must be maintained through development and sustainment. A PDR, for example, must maintain this system-level, longer-term, end-state perspective, as one of its functions is to provide an assessment of system maturity for the Milestone Decision Authority to consider prior to Milestone B.
  • Involve non-developmental item vendors in system design in order to assure that functional integration addresses actual vendor product capabilities. In an integration-intensive environment, system models may be difficult to develop and fully exploit if many system components come from proprietary sources or commercial vendors with restrictions on data rights, and validating system performance and security assumptions may be difficult or even impossible. Explore alternatives early and consider model-based systems engineering (MBSE) as a means to engineer in assurance. Proactive work with the vendor community to support model development informs downstream assessments, including in sustainment.
  • Establish and manage entry and exit criteria for SwA at each SETR in order to properly focus the scope of the reviews and achieve usable assessment results and thresholds. Increasing knowledge / definition of elements of the integrated system design should include details of support and data rights.
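
One way to make the architecture review above concrete is to record, for each attack entry point, the weaknesses (CWE) found there, the attack patterns (CAPEC) that could exploit them, the mission impact, and an assumed likelihood rating. The sketch below shows a simplified, illustrative record structure for that purpose; the specific identifiers, entry points, and scale are examples only.

```python
from dataclasses import dataclass


@dataclass
class ArchitectureWeakness:
    """A weakness found at an attack entry point during architecture review.
    Identifiers and scales in the example data are illustrative only."""
    entry_point: str        # e.g., external data link, maintenance port
    cwe: str                # weakness class identifier, e.g., "CWE-306"
    capec: str              # applicable attack pattern, e.g., "CAPEC-115"
    mission_impact: str     # narrative or category from criticality analysis
    attack_likelihood: int  # 1..5, assumed program-defined scale


def review_report(weaknesses):
    """Order findings so the most likely, mission-relevant weaknesses lead the review."""
    for w in sorted(weaknesses, key=lambda w: w.attack_likelihood, reverse=True):
        print(f"{w.entry_point}: {w.cwe} exploitable via {w.capec} "
              f"(likelihood {w.attack_likelihood}) -> impact: {w.mission_impact}")


if __name__ == "__main__":
    review_report([
        ArchitectureWeakness("maintenance data port", "CWE-306", "CAPEC-115",
                             "loss of guidance function", 4),
        ArchitectureWeakness("external data link message parser", "CWE-787", "CAPEC-100",
                             "corrupted track processing", 3),
    ])
```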

Table 3: Software Assurance Considerations During the Engineering and Manufacturing Development Phase – Source: Author

SOFTWARE ASSURANCE CONSIDERATIONS (EMD Phase)

  • Review architecture and design against secure design principles, including system element/function isolation, least common mechanism, least privilege, fault isolation, graceful degradation, function re-initialization, and input checking and validation. These principles are the engineering basis for system resilience.
  • Enforce secure coding practices through code inspection augmented by automated static and origin analysis tools, and secure code standards for the languages used.
  • Detect vulnerabilities, weaknesses and defects in the software as close to the point of generation as possible, prioritize according to likelihood and consequence of use by an adversary, remediate, and regression test.
  • Confirm SwA requirements, vulnerability remediations, and unresolved vulnerabilities are mapped to module test cases and to the final acceptance test cases. This provides a basis for assurance that will be used in downstream assessments and system changes in sustainment. Ensure program critical function software and critical components receive rigorous automated SwA tool assessment including static code analysis (SCA), origin analysis, and penetration and fuzz testing including application of test coverage analyzers. Multiple SCA tools should be used.
  • Ensure CDR software documentation represents the design, performance, and test requirements, and includes development and software/systems integration facilities for coding and integrating the deliverable software and assurance operations on the integrated development environment. Software and systems used for computer software configuration item (CSCI) development (e.g., simulations and emulations) should be assured whenever possible. Problem report metadata should include assurance factors such as CWE and Common Vulnerabilities and Exposures (CVE) numbers wherever relevant so that the data are usable for tracking, reporting, and assurance assessments. Legacy problem report tracking information can be used to profile and predict which types of software functions may accrue what levels of problem reports. Assessments of patterns of problem reports or vulnerabilities among software components of the system can provide valuable information to support program resource and progress decisions.
  • Address systems assurance (SW, HW, FW, function, interoperability) up front and early rather than delaying it until later software builds. For a program using an incremental software development approach, technical debt may accrue within a given build, and across multiple builds, without a plan or resources to remediate code vulnerabilities as they are generated. Technical reviews, both at the system and build levels, should have a minimum viable requirements and architecture baseline that includes SwA requirements and assured design architecture considerations, as well as ensuring fulfillment of a build-centric set of incremental review criteria and requirements that include assurance. This baseline should be retained for use through the last upgrade in sustainment. For build content that needs to evolve across builds, the PM and the systems engineer should ensure that system-level vulnerabilities, defects, and weaknesses are recorded and mitigated as soon as practical to ensure any related development or risk reduction activities occur in a timely manner. Configuration management and associated change control/review boards can facilitate the recording and management of build information and mapped assurance metadata.
  • Ensure all detectable vulnerabilities, defects, and weaknesses are remediated before each developmental module is checked into CM (see the check-in gate sketch after this table).
  • Install system components in a System Integration Lab (SIL) and assess continuously for assurance considerations throughout EMD. Assurance considerations include version update and CM of all COTS in the IDE, including assurance tools, operational assurance tools for the IDE, techniques using operational assurance tools to detect insider threats and malicious activity, and configuration control of the installed software and files. Details of the use of developmental system interfaces should be assessed and validated to ensure their scalability, suitability, and security for use. The emphasis in an integration-intensive system environment may be less on development and more on implementation and test. Progressive levels of integration, composition, and use should be obtained in order to evaluate ever higher levels of system performance, conducting automated penetration and fuzz testing, ultimately encompassing end-to-end testing based on user requirements and expectations. Assessment and test results should be maintained for downstream test activities and system changes. If the system is later breached, assurance metadata generated during EMD will be a basis to determine behavior, impacts, and remediations. “Glue” code and other scripted extensions to the system operational environment to enable capability should be handled in as rigorous a manner for assurance as any developed software, i.e., kept under strong configuration management, scanned with multiple SwA tools, and inspected; updates should be properly regression-tested and progressively integrated and tested.
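
As referenced in the check-in bullet above, a program can enforce “no detectable vulnerabilities at module check-in” with a small gate script run by the developer or the CM system before a change is accepted. The sketch below is a generic illustration: the analyzer command and its JSON output format are placeholders to be replaced with the program’s actual SwA tools and finding schema.

```python
import json
import subprocess
import sys

# Placeholder command: substitute the program's approved static analysis tool(s).
# The sketch assumes the tool can emit a JSON list of findings to stdout.
ANALYZER_CMD = ["example-sca-tool", "--format", "json"]


def scan(paths):
    """Run the (placeholder) analyzer over the changed files and return its findings."""
    result = subprocess.run(ANALYZER_CMD + list(paths),
                            capture_output=True, text=True, check=False)
    return json.loads(result.stdout) if result.stdout.strip() else []


def main(changed_files):
    findings = scan(changed_files)
    # Block check-in on any finding not already dispositioned as accepted risk.
    unremediated = [f for f in findings if not f.get("accepted_risk", False)]
    if unremediated:
        print(f"Check-in blocked: {len(unremediated)} unremediated finding(s).")
        for f in unremediated:
            print(f"  {f.get('file')}:{f.get('line')}  {f.get('cwe', 'unclassified')}")
        return 1
    print("No detectable vulnerabilities; check-in may proceed.")
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```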

Table 4: Software Assurance Considerations During the Production and Deployment Phase – Source: Author

SOFTWARE ASSURANCE CONSIDERATIONS (P&D Phase)

  • Continue to enforce secure design and coding practices for all SW changes, such as for installation modifications, through inspections and automated scans for vulnerabilities and weaknesses and maintain assessment results.
  • Conduct automated code vulnerability scans using SCA and origin analysis tools, with reporting and prioritization, and execute defect remediation consistent with program policy as system changes occur. Tool updates can detect additional vulnerabilities, and installations in deployment can change SW characteristics or code.
  • Conduct penetration testing using retained red-team or other automated test cases to detect any variations from expected system behavior.
  • Maintain and enhance the automated regression tests added for remediated vulnerabilities, and employ test coverage analyzers to ensure sufficient test coverage for remediations (see the regression test sketch after this table).
  • Progressive deployment of an integration-intensive system provides infrastructure and services and higher-level capabilities as each release is verified and validated. A rigorous release process includes configuration management and the use of regression test suites that include SwA tools. The PM, systems engineer, software engineer and systems security engineer should ensure user involvement in gaining understanding and approval of changes to design, functions, or operation that may result from vulnerability remediations.
  • Synchronize and time block builds as much as possible to avoid forced upgrades or other problems at end-user sites. End user sites that perform their own customization or tailoring of the system installation should ensure that changes are mapped from the standard configuration, recorded, and shared with the PMO or the integrator/developer so that problem reporting and resolution activities account for any operational and performance implications, and so vulnerability assessment data are updated. Any changes should be scanned at the deployment site with multiple SwA tools to assure that no detectable vulnerabilities were inserted. This information will be necessary for assessments in detecting breaches, and for remediation of breaches.
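
The bullet on regression tests above can be realized by turning each remediated vulnerability into a permanent automated test that replays the offending input and asserts safe behavior. The sketch below uses Python’s unittest against a hypothetical message parser; the function, vulnerability ID, and limits are illustrative assumptions, not part of any specific program.

```python
import unittest


def parse_track_message(raw: bytes) -> dict:
    """Hypothetical parser whose earlier version crashed on over-length messages
    (the remediated vulnerability this regression test guards against)."""
    if len(raw) > 1024:
        raise ValueError("message exceeds maximum length")
    return {"length": len(raw), "payload": raw}


class TestVuln0042Regression(unittest.TestCase):
    """Regression test retained for remediated vulnerability VULN-0042 (illustrative ID)."""

    def test_overlong_message_rejected_safely(self):
        # Previously exploitable input must now be rejected with a controlled error.
        with self.assertRaises(ValueError):
            parse_track_message(b"\x00" * 4096)

    def test_nominal_message_still_parses(self):
        # Remediation must not break nominal behavior.
        msg = parse_track_message(b"\x01\x02\x03")
        self.assertEqual(msg["length"], 3)


if __name__ == "__main__":
    unittest.main()
```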

Table 5: Software Assurance Considerations During the Operations and Support Phase – Source: Author

SOFTWARE ASSURANCE CONSIDERATIONS (O&S Phase)

  • The SW sustainment activity should take ownership of all assurance-related metadata and results in support of the PMO and system operation.
  • Continue to enforce secure design and coding practices through inspections and automated scans for vulnerabilities and weaknesses during sustainment system upgrades, revisions, Engineering Change Proposals, patches, and builds. System changes in sustainment can be as significant as new acquisitions.
  • Continue to conduct automated code vulnerability scans, reporting, and prioritization, and execute defect remediation.
  • Maintain and enhance automated regression tests for remediated vulnerabilities, and employ test coverage analyzers to assure sufficient test coverage.
  • Continue to conduct penetration testing using retained red-team or other automated test cases to detect any variations from expected system behavior.
  • Develop and use procedures to facilitate and ensure effective software configuration management and control. For example, require static and origin analysis scans for any changes to executable scripts or code, with all detected vulnerabilities remediated before the changes are approved. A defined block change or follow-on incremental development that delivers new or evolved capability, maintenance, security, safety, or urgent builds and upgrades to the field should be accomplished using this best practice. Procedures for updating and maintaining software on fielded systems often require individual user action, and may require specific training. There are inherent security risks involved in installing or modifying software on fielded weapon systems used in tactical activities. These risks should be anticipated and remediated during the MSA phase; for example, the software should have been designed so that device updates in tactical situations can be assured in situ to reduce or eliminate the opportunities for malicious insertion, corruption, or loss of software or data. Software updates to business and IT systems can also pose risks to operational availability through insider threats that should be anticipated and mitigated during the MSA phase. For example, scan glue code periodically and assess any unknown changes for malicious insertions, and scan all executable SW or scripts whenever changes are applied. For any changes that impact system function, assess the design to maintain separation of function. PMs and systems and software engineers should implement procedures and tools to assure the supply chain in order to reduce risk and prevent malicious insertions. The supply chain includes sources for COTS, GOTS, and open source libraries.
  • Maintain the test cases previously developed for automated penetration and fuzz testing tools used during operational testing or red-team operations, and conduct them asynchronously during system maintenance to detect changes in system function, operation, or timing from the baseline. Changes can be the result of malicious inserts by insiders that were previously undetected or were introduced during operations.
  • Plan for system upgrades/updates timed to limit the proliferation of releases and therefore focus available maintenance, assurance, and support resources. In an integration-intensive environment, security upgrades, technical refreshes, and maintenance releases can proliferate, causing loss of situational awareness of assurance posture at end-user sites. Configuration management and regression testing should be used to ensure system integrity and to maintain detailed situational awareness.
  • Use SwA tools such as origin analysis and penetration testing to detect changes in operational configuration between the deployed site and the tested baseline (a minimal baseline-comparison sketch follows this table).
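
A simple way to detect changes in operational configuration against the tested baseline, complementing the origin analysis and penetration testing mentioned above, is to compare cryptographic hashes of deployed files with a manifest recorded when the baseline was tested. A minimal sketch follows; the manifest format and paths are assumptions for illustration, not an established DoD standard.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(root: Path) -> dict:
    """Record relative path -> hash for every file under the installation root."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}


def compare_to_baseline(deployed_root: Path, manifest_path: Path) -> dict:
    """Report files that changed, appeared, or disappeared since the tested baseline."""
    baseline = json.loads(manifest_path.read_text())
    deployed = build_manifest(deployed_root)
    changed = [p for p in baseline if p in deployed and deployed[p] != baseline[p]]
    added = sorted(set(deployed) - set(baseline))
    missing = sorted(set(baseline) - set(deployed))
    return {"changed": changed, "added": added, "missing": missing}


if __name__ == "__main__":
    # Example usage with assumed locations; substitute the program's install
    # directory and the manifest captured with the tested baseline.
    report = compare_to_baseline(Path("/opt/weapon_system"), Path("baseline_manifest.json"))
    print(json.dumps(report, indent=2))
```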

Table 6: Software Assurance Success Criteria for Conduct of Technical Reviews – Source: Author

System Requirements Review (SRR)

Objective: Recommendation to proceed into development with acceptable risk. The level of understanding of top-level system requirements is adequate to support further requirements analysis and design activities. Government and contractor mutually understand system requirements, including (1) the preferred materiel solution (including its support concept) from the Materiel Solution Analysis (MSA) phase, (2) available technologies resulting from the prototyping efforts, and (3) maturity of interdependent systems.

SwA Success Criteria:

  • Select automated tools for design, vulnerability scan/analysis, etc.
  • Establish facilities, tools, equipment, staff, and funding.
  • Confirm the contractor Systems Engineering Management Plan (SEMP) includes SwA roles and responsibilities.
  • Determine security requirements for programming languages, architectures, the development environment, and the operational environment.
  • Identify secure design principles to guide architecture and design decisions.
  • Establish processes for ensuring adherence to secure design and coding standards.
  • Develop a plan for addressing SwA in legacy code.
  • Establish assurance requirements for software to deter, detect, react to, and recover from faults and attacks.
  • Perform initial SwA reviews and inspections, and establish tracking processes for completion of assurance requirements.

Preliminary Design Review (PDR)

Objective: Recommendation that the allocated baseline fully satisfies user requirements and that the developer is ready to begin detailed design with acceptable risk. The allocated baseline is established such that the design provides sufficient confidence to support 2366b certification. The preliminary design and basic system architecture support the capability need and achievement of affordability targets.

SwA Success Criteria:

  • Determine that the allocated baseline fully satisfies user requirements.
  • Review architecture and design against secure design principles, including system element isolation, least common mechanism, least privilege, fault isolation, and input checking and validation.
  • Determine whether initial SwA reviews and inspections received from assurance testing activities capture requirements appropriately.
  • Confirm that SwA requirements are mapped to module test cases and to the final acceptance test cases.
  • Establish automated regression testing procedures and tools as a core process.

Critical Design Review (CDR)

Objective: Recommendation to start fabricating, integrating, and testing test articles with acceptable risk. The product design is stable and performs as expected. The initial product baseline, established by the system detailed design documentation, confirms affordability/should-cost goals, with Government control of Class I changes as appropriate.

SwA Success Criteria:

  • Enforce secure coding practices through code inspection augmented by automated static analysis tools.
  • Detect vulnerabilities, weaknesses, and defects in the software; prioritize them; and remediate.
  • Assure chain of custody from development through sustainment for any known vulnerabilities and weaknesses remaining and for the mitigations planned.
  • Assure hash checking for delivered products.
  • Establish processes for timely remediation of known vulnerabilities (e.g., Common Vulnerabilities and Exposures (CVEs)) in fielded COTS components.
  • Ensure planned SwA testing provides variation in testing parameters, e.g., through application of test coverage analyzers.
  • Ensure program critical function software, Critical Program Information, and critical components receive rigorous test coverage.

Systems Engineering Technical Reviews (SETRs). Three reviews are particularly important to the development of all systems: the System Requirements Review (SRR), the Preliminary Design Review (PDR), and the Critical Design Review (CDR):

  • The SRR ensures the system under review is ready to proceed into initial system design. It ensures all system requirements and performance requirements derived from the Initial Capabilities Document or draft Capability Development Document are defined and testable and that they are consistent with cost, schedule, risk, technology readiness, and other system constraints.
  • The PDR assesses the maturity of the preliminary design supported by the results of requirements trades, prototyping, and critical technology demonstrations during the TMRR phase. The PDR establishes the allocated baseline and confirms the system under review is ready to proceed into detailed design (development of build-to drawings, software code-to documentation, and other fabrication documentation) with acceptable risk.
  • The CDR assesses design maturity, design build-to or code-to documentation, and remaining risks, and establishes the initial product baseline.

Table 6 proposes success criteria for selected SETRs. These criteria have been developed through assessment of the SwA content in numerous PPPs and through feedback from the Services and agencies via the JFAC, and they continue to be improved. The guidance presented in Table 6 should be tailored to the specific SETRs employed for a given acquisition program and to the characteristics of the program.

Initiatives

Software Assurance Capability Gap Analysis: In July 2016, the DoD JFAC SwA Technical Working Group identified 63 assurance-related DoD software and systems engineering gaps that impair the effective planning and execution of SwA within the DoD acquisition and sustainment process. The gaps are organized into seven categories:

  1. Life Cycle Planning and Execution;
  2. SwA Technology;
  3. Policy, Guidance, and Processes;
  4. Resources;
  5. Contracting and Legal;
  6. Metrics; and
  7. Federated Coordination.

The JFAC Steering Committee recently approved the congressionally mandated gap analysis document and directed the SwA Technical Working Group to develop a strategy to address the identified gaps. That strategy is in development; the gap analysis is published on the JFAC portal.

Cyber Integrator (CI): The U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) conducted a one-year pilot program in an ACAT I program to include a CI in the Program Management Organization [9]. The CI is an acquisition professional with a systems engineering background, charged with holistic assessment of software assurance, anti-tamper, hardware assurance, firmware assurance, and more, who provides planning recommendations to the Program Office to plan for and meet assurance and cybersecurity statute, policy, and guidance requirements for each phase of the acquisition life cycle.

As the principal assurance and cyber advisor to the PM, the CI:

  • horizontally assesses all system security disciplines to identify gaps beyond general statutory and policy compliance;
  • recommends means to fully comply with statutory and policy requirements and incorporate best practices to improve overall assurance posture;
  • plans PPP activities and determines costs to implement;
  • informs contract language needs, performs assessments, and makes updates to improve a program’s technical assurance posture;
  • conducts relevant full coverage scans; and
  • continuously monitors assurance activities, provides status, and maintains awareness of changing policy and guidance and derived cyber requirements.

AMRDEC observed that programs normally met all compliance requirements prior to an acquisition milestone, yet even with 100% “compliance,” residual assurance risk can remain in a system. AMRDEC’s goal is to reduce or eliminate this residual assurance risk by improving the system’s technical assurance posture while executing best practices along the way.

To aid the conduct of the CI’s responsibilities, AMRDEC, in conjunction with the JFAC, sponsored a CI Dashboard tool that supports planning, tracking, and reporting assurance and cyber-related activities and requirements. The CI Dashboard guides users through a series of informative survey questions to obtain the appropriate assurance and compliance (statutory and policy) and recommended guidance tailored to the program. The dashboard, shown in Figure 2, is populated by the responses to the survey questions and provides the PMO an “at-a-glance” status of all the ongoing assurance and cyber activities. A more detailed report is provided for activities that are not on track, and any highlighted area can redirect the user to the specific area of insufficiency. The CI Dashboard is a pilot that will be offered as a service via the JFAC website to all DoD programs and organizations without fee or other constraint.
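
To illustrate (not reproduce) the survey-driven approach described above, the sketch below shows how answers to a handful of questions might roll up into an at-a-glance status per assurance activity. The questions, activity names, and thresholds are assumptions for illustration only; the actual CI Dashboard is available through the JFAC.

```python
from dataclasses import dataclass


@dataclass
class SurveyQuestion:
    """One survey item mapped to an assurance activity (illustrative fields)."""
    activity: str      # e.g., "Static code analysis", "Anti-tamper planning"
    prompt: str
    answered_yes: bool


def rollup(questions):
    """Roll survey answers up to a per-activity status: green if all items are
    satisfied, red if none are, yellow otherwise (assumed thresholds)."""
    tallies = {}
    for q in questions:
        yes, total = tallies.get(q.activity, (0, 0))
        tallies[q.activity] = (yes + int(q.answered_yes), total + 1)
    colors = {}
    for activity, (yes, total) in tallies.items():
        colors[activity] = "green" if yes == total else ("red" if yes == 0 else "yellow")
    return colors


if __name__ == "__main__":
    answers = [
        SurveyQuestion("Static code analysis", "Are multiple SCA tools on contract?", True),
        SurveyQuestion("Static code analysis", "Are findings triaged each build?", False),
        SurveyQuestion("Anti-tamper planning", "Is an anti-tamper plan approved?", False),
    ]
    for activity, color in rollup(answers).items():
        print(f"{activity}: {color}")
```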

Figure 2: The Cyber Integrator Dashboard tracks cybersecurity activities across the entire DoD acquisition life cycle.

Conclusion

Software is the foundation of the systems comprising our nation’s military power. The primary mission capabilities of all current and foreseen weapon systems are implemented in software, and software will be 88% of the cost of DoD systems by 2024. However, the assurance that weapon system software is free of detectable vulnerabilities, defects, and weaknesses that could disrupt the mission, or prevent its achievement, is only now emerging as a technology and engineering discipline.

The key outcome is to understand that traditional cybersecurity has a shared mission with SSE in the reduction or elimination of critical vulnerabilities in the operation of weapon system and other software. Where cybersecurity has used perimeter or layered defenses to defend systems against cyberattack, SSE builds systems from the start that are engineered to be resilient and to fight through tactical cyberattack. SSE uses software assurance tools, techniques, and methodology to “engineer in assurance” from the beginning of concept development, and throughout the system life cycle, so that vulnerabilities are discovered and fixed at the earliest possible point by the engineering team. The software becomes resilient to cyberattack, so that when the adversary penetrates cybersecurity, the mission continues. Eighty-four percent of successful cyberattacks are directed specifically at the functions in the applications that achieve the system mission, and in each case the attacks are not observed by cybersecurity defenses. SSE is the discipline that will mitigate this problem.

This article considered assurance of the supply chain for software, malicious insertions, latent vulnerabilities in operations, and vulnerability detection and remediation during development, sustainment, and operations. It addressed software assurance roles and responsibilities, processes, products, tools, and software assurance capability gaps within the Department. We recommend a set of software assurance activities for each acquisition phase and technical review that a PM and their staff can tailor as appropriate to the life cycle phase and characteristics of their program. Inclusion of these activities in future authoritative guidance and tools (e.g., the Cyber Integrator) will aid PMOs in executing SwA more effectively and efficiently.

References

  1. Under Secretary of Defense for Acquisition, Technology and Logistics. DoDI 5000.02, Operation of the Defense Acquisition System. Instruction, Department of Defense. February 2, 2017.
  2. Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. Report of the Defense Science Board Task Force on Cyber Supply Chain. February 2017.
  3. Public Law 111-383. National Defense Authorization Act (NDAA) for Fiscal Year 2011. Section 932, Strategy on Computer Software Assurance.
  4. Public Law 112-239. National Defense Authorization Act (NDAA) for Fiscal Year 2013. Section 933, Improvements in Assurance of Computer Software Procured by the Department of Defense.
  5. Public Law 113-66. National Defense Authorization Act (NDAA) for Fiscal Year 2014. Section 937, Joint Federated Centers for Trusted Defense Systems for the Department of Defense.
  6. Office of the Deputy Assistant Secretary of Defense for Systems Engineering. Department of Defense Risk, Issue, Opportunities Management Guide for Defense Acquisition Programs. Guide, Department of Defense. 2015.
  7. Department of Defense Chief Information Officer. DoD Instruction 8510.01, Risk Management Framework (RMF) for DoD Information Technology (IT). Instruction, Department of Defense. 2016.
  8. Public Law 112-239. National Defense Authorization Act for Fiscal Year 2013. January 2, 2013.
  9. Goldsmith, Rob, and Steve Mills. Cyber Integrator: A Concept Whose Time Has Come. Defense AT&L Magazine. March–April 2015.
