Schedule slippage is a symptom of overly optimistic planning or of other problems that negatively impact progress. SCRAM utilizes the Root Cause Analysis of Schedule Slippage (RCASS) model, which organizes these problems into ten information categories. These categories and relationships are adapted from McGarry and Boehm, and have been further refined based on experience with a number of SCRAM assessments.
The RCASS model is shown in Figure 1. An arrow indicates that issues in the source category affect the category the arrow points to. All arrows eventually lead to the bottom of the figure and to the two categories of main concern: Program Schedule & Duration and Project Execution. By uncovering issues in each category, it is possible to identify risks and problems affecting schedule compliance and the causes of delays.
Figure 1. RCASS Model
The following sections briefly describe each RCASS category and present some sample questions addressed by a SCRAM team; during a SCRAM assessment, the answers to these questions help to identify root causes of schedule slippage. A real-world example of an issue or problem in the category is also provided.
Stakeholders
Description: Issues in this category represent project turbulence and entropy caused by difficulties in synchronizing the project’s stakeholders.
Questions: Who are the stakeholders? How do they interact on requirements clarification, technical problems, and tradeoff analysis? Are one or more stakeholders imposing unrealistic constraints on implementation solutions or acceptance testing?
Example: One developer on a program described their stakeholders as being like a “100-headed hydra: nobody could say ‘yes’ and anyone could say ‘no.’” Stakeholder turbulence negatively impacts the ability to define a stable set of requirements.
Requirements
Description: Issues in this category represent the understanding and stability of the functional and non-functional requirements, performance requirements, system constraints, standards, etc. used to define and bound what is to be developed.
Questions: Are all of the requirements defined and understood? Have the requirements been agreed to? Are there regulatory and technical standards that have to be implemented? Is there a mapping of requirements to development builds and production components? Are there technical performance requirements that are being tracked? Are the interfaces to other systems well understood?
Example: One program misinterpreted a communication standard and discovered late in development an additional 3000 message format requirements implied by that one standard. Needless to say, the program schedule slipped.
In Figure 1, the arrow from Requirements to Subcontractors represents the handing off of program requirements to subcontractors so as to reduce the workload for the prime contractor. The arrow to Workload indicates that requirements are the basis of workload estimation and that workload increases with volatile or poorly defined requirements. Programs are often plagued by the IKIWISI (“I’ll Know It When I See It”) approach to requirements definition and sign-off, which creates unplanned rework.
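Although the source treats workload estimation qualitatively, the effect of requirements volatility on estimated workload can be sketched with a toy calculation. All counts, effort rates, and the rework assumption below are illustrative, not SCRAM-prescribed values:

```python
# Hypothetical sketch: a volatility-adjusted workload estimate built from
# requirement counts. The numbers are illustrative assumptions only.
BASE_HOURS_PER_REQUIREMENT = 24.0  # assumed average implement-and-verify effort


def estimated_workload_hours(n_requirements: int, volatility: float) -> float:
    """Inflate a base estimate by expected requirements churn.

    volatility: fraction of requirements expected to change (0.0-1.0);
    each changed requirement is assumed to cost roughly one extra
    requirement's worth of rework.
    """
    rework = n_requirements * volatility
    return (n_requirements + rework) * BASE_HOURS_PER_REQUIREMENT


stable = estimated_workload_hours(400, volatility=0.05)
churning = estimated_workload_hours(400, volatility=0.40)
print(f"stable: {stable:,.0f} h, churning: {churning:,.0f} h")
# -> stable: 10,080 h, churning: 13,440 h
```

The same requirement count yields a one-third larger workload once churn is accounted for, which is why ignoring volatility in the baseline estimate feeds directly into schedule slippage.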
Subcontractors
Description: Issues in this category represent the subcontractor products or services that will be delivered as part of the overall system. In Figure 1, the arrow from Subcontractors to Workload reflects the additional work needed to correct poor-quality products or handle late deliveries. Late products cause other system components to be delayed, having a ripple effect on workload and delivery schedules.
Questions: Are there subcontractors involved? When are their deliverables needed? How is subcontracted work coordinated, integrated, and accepted? Are subordinate schedules aligned and incorporated into an Integrated Master Schedule? Are system interfaces well enough defined for the subcontractor to deliver a product that works within the system?
Examples: One program had a subcontractor that claimed highly mature development processes. A visit to the subcontractor’s site revealed that developers were sidestepping those processes in order to make deadlines, incurring Technical Debt (defects). Another program had a subcontractor eight time zones away, which severely restricted coordination and virtual meetings and impacted schedule performance.
Functional Assets
Description: Issues in this category represent products developed independently of the project that will be used in the final product, i.e. assets that reduce the amount of new work that has to be done on a project. In Figure 1, the arrow from Functional Assets to Workload shows that incorrect assumptions about functional assets may impact the amount of work to be done.
Questions: What COTS, MOTS, NDI, or GFE products are being used on the program? Are they providing the required functionality, and are they meeting hardware constraints? Are there legacy products being used, and were they developed locally? Is the current product architecture defined and stable enough to evaluate and accept other pre-existing products? Do existing interface definitions accurately describe the actual product interfaces? What level of assurance accompanies each product? How will unused functions or features be managed?
Examples: A common program issue is the underperformance of pre-existing products, i.e. the legacy systems or COTS products do not work as advertised. Another common issue stems from underestimating the amount of code that must be written or modified when using a legacy product. One program reviewed planned to modify only 10% of a legacy system, but by the end of the development phase, 50% of the system had been modified to satisfy requirements, increasing the Workload dramatically.
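The 10%-planned versus 50%-actual modification growth above translates directly into effort. A minimal sketch, assuming a hypothetical legacy code-base size and effort rate (the source gives neither):

```python
# Illustrative impact of underestimating legacy modification on effort.
# LEGACY_SLOC and the effort rate are assumed values, not from the program
# described in the text; only the 10% vs 50% fractions come from the source.
LEGACY_SLOC = 500_000          # assumed size of the legacy system
HOURS_PER_KSLOC_MODIFIED = 40  # assumed effort per thousand lines modified


def modification_effort_hours(fraction_modified: float) -> float:
    """Effort to modify a given fraction of the legacy code base."""
    return LEGACY_SLOC * fraction_modified / 1000 * HOURS_PER_KSLOC_MODIFIED


planned = modification_effort_hours(0.10)  # plan: modify 10%
actual = modification_effort_hours(0.50)   # outcome: 50% modified
print(f"planned: {planned:,.0f} h, actual: {actual:,.0f} h, "
      f"growth: {actual / planned:.0f}x")
# -> planned: 2,000 h, actual: 10,000 h, growth: 5x
```

Because effort scales linearly with the fraction modified, a fivefold growth in scope is a fivefold growth in this slice of the Workload, regardless of the absolute numbers assumed.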
Workload
Description: Issues in this category represent the quantity of work to be done and provide a basis for estimating effort/staffing and duration. Issues with requirements, subcontractor products, functional assets, and rework may negatively impact this category.
Questions: Is the scope of work well understood? Is the workload changing for any reason, e.g. changing requirements, an unstable platform, or unplanned rework? Is workload being transferred to a later build? Has the amount of work to be done been quantified, e.g. the number of requirements, hardware and software configuration items, or test procedures to be developed? (Note that workload differs depending on the development life-cycle phase.)
Example: Many programs underestimate the amount of software code to be written and the amount of documentation to be developed and reviewed.
Staffing and Resources
Description: Issues in this category represent the availability, capability, and experience of the staff necessary to do the work, as well as the availability and capacity of other resources, such as test and integration labs. The arrow in Figure 1 points from Staffing and Resources to Schedule because issues in this category may negatively impact the amount of time needed (schedule) to do the ‘actual’ work.
Questions: Are the right people (with the right experience) working on the program, and are there enough people to do the work? Is the work force stable, or is there turnover? Are the key personnel qualified to lead their areas of work? Programs often suffer staffing issues related to high turnover, especially among experienced staff; bringing more people onto the program late makes things worse.
Example: An interesting example of a staffing issue on a program was that of the “star” software developer. This one person understood the most about how the software system worked. Even though he worked long hours, he was a bottleneck. He was so busy, he did not have time to respond to problems, train others or update design documentation.
Schedule and Duration
Description: This is a category of primary interest that is impacted by issues in the other categories. Issues in this category represent the task sequencing and calendar time needed to execute the workload by available staff and other resources (e.g. test labs).
Questions: What is the current schedule with respect to milestones, builds, and phases? What are the dependencies, when are they due, and are they linked into the schedule? What was the basis of the estimates used to construct timelines, e.g. were analogous projects or parametric models used to estimate duration? Is there any contingency built into the schedule, or is it success-oriented? What is the “health” of the current schedule?
Example: A typical behavior seen in programs that slip schedule is that early milestones or deadlines are missed, new requirements are added, and productivity is lower than estimated, yet schedule milestones do not change. Activities later in the development cycle then get their durations squeezed. A common remedy is to add more people late in the program to increase production. This typically slows down progress, due to the lack of familiarization and training, and increases communication overhead among development teams.
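The communication-overhead effect described above is commonly explained with the pairwise-channel count n(n-1)/2. The formula is a standard rule of thumb (often associated with Brooks’s law), not something the source states; this sketch simply shows how quickly coordination paths grow as staff are added:

```python
def communication_channels(team_size: int) -> int:
    """Number of pairwise communication paths in a team of n people: n(n-1)/2."""
    return team_size * (team_size - 1) // 2


# Adding staff late grows coordination paths quadratically while the
# newcomers are not yet productive -- one reason late hiring backfires.
for n in (5, 10, 20):
    print(f"{n:>2} people -> {communication_channels(n):>3} channels")
# ->  5 people ->  10 channels
#    10 people ->  45 channels
#    20 people -> 190 channels
```

Doubling the team from 10 to 20 more than quadruples the number of coordination paths, while each new person initially consumes, rather than adds, productive capacity.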
Project Execution
Description: Issues in this category stem from problems in communicating the schedule and monitoring and controlling the execution of the project in accordance with the project schedule. As shown in Figure 1, the capability to execute a project schedule is impacted by the feasibility and “health” of the schedule itself as well as by the effectiveness with which the scheduled tasks are executed. In relation to the latter issue of effectiveness, experience from multiple SCRAM assessments has highlighted the need to focus on Technical Progression and System Integration.
Questions: When was the schedule baselined? Is it being used as a communication, monitoring, and control tool? Is there an Integrated Master Schedule? How is progress being tracked? Does actual productivity match the estimated or planned productivity? Does everyone on the project have access to the schedule (at an appropriate level of detail)? Are System Integration and Formal Test phases conducted as discrete activities with specific, objective entry and exit criteria? Is the Technical Progression of the system under development based on objective evidence of a maturing system, and is the level of maturity commensurate with the resources and schedule consumed?
Example: Generally, programs report schedule problems as they enter the System Integration and Test phase. Progress stalls as tests become blocked whilst issues with system integration and test are resolved. This typically reflects a lack of adequate planning, grooming, and qualification testing prior to conducting formal testing.
Rework and Technical Debt
Description: Issues in this category represent additional work caused by the discovery of defects in the product and/or associated artefacts, as well as work that is deferred for short-term expediency (Technical Debt) and must eventually be resolved. Causes include rushing into development before requirements are fully understood, skipping inspections and verification testing due to lack of time, and deploying a product before the operating environment is ready. Technical Debt is often accrued with no plan to repay the debt until it is perhaps too late. The arrow in Figure 1 shows the disruptive impact that rework and Technical Debt have on Workload.
Questions: Has the likely amount of rework been estimated and planned? Are the compounding consequences of incurring intentional Technical Debt identified and understood?
Examples: Technical Debt is often incurred through the suspension of process (e.g. stopping peer reviews to meet deadlines) and other process short-cuts. Rework is often underestimated and not planned or prioritised for correction.
Management and Infrastructure
Description: This category impacts all of the above information categories. Issues in this category reflect the factors that impact the efficiency and effectiveness of getting work done, e.g. work environments and processes, use of management and technical software tools, management practices, etc. Efficiency is negatively impacted by a lack of tools, lack of facilities and burdensome security requirements. Effectiveness is negatively impacted by poor management practices such as in the areas of quality assurance, configuration management and process improvement.
Questions: Have the capacity requirements for the development system infrastructure (e.g. integration labs, network bandwidth, etc.) been explicitly estimated, based on an analysis of historical productivity and the operational performance needs of the system under development? Is an active process improvement program in place that is driven by best-practice assessment (e.g. CMMI)? Is the configuration management/change control system cycle time adequate to support development performance? Does the quality management system adequately support the program?
Example: It is common for programs to have inadequate system integration and test facilities in terms of capacity and/or fidelity, e.g. simulators, emulators, and live environments. On a major aircraft development program that involved very large-scale software development, it was found that the Configuration Change Management System could not keep pace with the software defect notification and resolution process, slowing down software release to systems integration.