An Overview of the Schedule Compliance Risk Assessment Methodology (SCRAM)


Posted: February 10, 2016 | By: Adrian Pitman, Elizabeth K. Clark, Brad Clark, Angela Tuffley

SCRAM Methodology

SCRAM has been used to find the root causes of schedule slippage and recommend improvements on programs that have experienced multiple or protracted schedule overruns. Moreover, SCRAM has proven extremely valuable in communicating schedule status and root causes of slippage to senior executives.  Several recent SCRAM assessments found that schedule slippage was, in part, due to factors outside of the program’s control.  Once aware of these factors, executive management was able to bring about changes to facilitate resolution.  Examples include late requirements levied by a senior external stakeholder and competition for operational assets that were required for system test on another program. Other examples were provided in each RCASS category discussed above.

In addition to its use on programs already experiencing problems, SCRAM provides a methodology for conducting an independent review of risk to a program's schedule. A SCRAM review produces three types of outputs:

  1. Identification and quantification of Schedule Compliance Risks (this includes identification of significant schedule drivers, root causes of existing slippage, risks to schedule and the potential impact on Program objectives)
  2. The “health” of the current program and schedule
  3. Recommendations for going forward


Figure 4. SCRAM Assessment Process Overview

In the DMO, a SCRAM assessment is conducted by a small team of highly experienced system and software engineering subject matter experts, along with a schedule specialist (someone who knows how to operate the project's scheduling tool and who is an expert in schedule preparation and construction).

There are seven key principles for this review methodology:

Minimal Disruption: Program information is collected one person at a time in an interview that usually lasts no more than one hour.

Rapid turn-around: For major programs, a SCRAM team typically spends one week on-site gathering information and data.  A second week is spent consolidating, corroborating, analyzing, and modeling the data, culminating in an executive presentation of the results. The RCASS model is used to structure the presentation to show the interrelationships (causes and effects). Finally, a written report is delivered by the end of the fourth week.

Independence: Review team members are organizationally independent of the program under review.

Non-Advocate: All significant issues and concerns are considered and reported regardless of source or origin.  The review does not favor the stakeholder, customer, end-user, acquisition office, or developer.

Non-Attribution: None of the information obtained on an assessment is attributed to any individual.  The focus is on identifying and mitigating risks to schedule.

Corroboration of Evidence: Reported findings and observations are based on at least two independent sources of corroboration.

Openness and Transparency: For the Monte Carlo analysis or software parametric analysis component of a SCRAM review, the developer is invited to assist in resolving data anomalies, witness the analysis process, and challenge model results. This transparency (no surprises) builds cooperation, trust, and confidence in the schedule forecast. However, the SCRAM Team is the final arbiter.
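To make the Monte Carlo component concrete, the following is a minimal sketch of the kind of schedule simulation such an analysis performs. The task names and three-point duration estimates are hypothetical placeholders, not data from an actual SCRAM review, and real analyses use the program's full schedule network rather than a simple serial chain.

```python
import random

# Hypothetical three-point estimates (optimistic, most likely, pessimistic)
# in working days for a simple serial chain of tasks.
tasks = {
    "detailed design":  (20, 30, 55),
    "implementation":   (40, 60, 110),
    "integration test": (15, 25, 60),
}

def simulate_once(tasks):
    """Draw one duration per task from a triangular distribution and
    sum the serial chain to get one possible completion time."""
    return sum(random.triangular(low, high, mode)
               for (low, mode, high) in tasks.values())

random.seed(42)
runs = sorted(simulate_once(tasks) for _ in range(10_000))

p50 = runs[len(runs) // 2]        # median forecast
p80 = runs[int(len(runs) * 0.8)]  # 80%-confidence forecast

print(f"P50 completion: {p50:.0f} days, P80 completion: {p80:.0f} days")
```

Presenting the forecast as percentiles (P50, P80) rather than a single date is what lets the developer challenge the inputs and the model rather than argue over one number.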

Interviews are conducted with key personnel, both acquisition office and developer, and the review questions are structured around RCASS categories. Interview comments are captured and then tagged to the relevant RCASS category. The review includes the examination of program development plans, management and product artifacts, risk databases, and the schedule health check discussed earlier.
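The tagging-and-corroboration workflow described above can be sketched as a small aggregation step. The category names and comments below are hypothetical placeholders (the actual RCASS taxonomy is described in the earlier section), and the two-source threshold reflects the corroboration principle listed above.

```python
from collections import defaultdict

# Hypothetical tagged interview comments: (RCASS category, source, comment).
comments = [
    ("Staffing",     "interview-A", "Key integration engineer reassigned mid-build."),
    ("Requirements", "interview-B", "Late requirement change from external stakeholder."),
    ("Staffing",     "interview-C", "Test team under-resourced for regression cycle."),
    ("Requirements", "interview-B", "Requirements baseline still unstable at CDR."),
]

by_category = defaultdict(list)
for category, source, text in comments:
    by_category[category].append((source, text))

# Only observations corroborated by at least two independent sources
# are reportable findings.
reportable = {cat: items for cat, items in by_category.items()
              if len({src for src, _ in items}) >= 2}

for cat, items in reportable.items():
    print(f"{cat}: {len(items)} comment(s) from independent sources")
```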

As previously stated, SCRAM can be applied to any major system engineering activity on a program (Figure 5). All of these activities have stakeholders, tools and facilities, requirements to be accomplished, possible help from subcontractors, a defined amount of work to be done, quality standards, staff to do the work, a timeframe in which to accomplish the work, and processes and infrastructure to support the work.


Figure 5. System Life Cycle Activities
