The committee's recommendations for DOD's software policy address two broad objectives. The first part of this chapter describes appropriate principles for selection of a programming language, and Appendix A contains the committee's proposed modifications to a revised version of DOD Directive 3405.1 (DOD, 1987a), which was in the process of being redrafted during the course of this study. The second component of the committee's recommendations concerns the Software Engineering Plan Review, which is proposed as a method for implementing DOD's software policy and is described in the second part of this chapter.
The committee recommends that DOD approach programming language policy at three levels of precedence. The overall goal is to achieve the best combination of costs and benefits (each interpreted quite broadly, as explained below); a number of principles for acquisition of software follow from and are subordinate to this overriding goal. The second level of precedence interprets those principles as they apply to the choice of a programming language (at any level of programming). The third level specifies circumstances under which Ada is required for software development using a third-generation programming language (3GL).
This hierarchy expresses goals for software acquisition that are broader than the choice of programming language alone, clarifying the importance of many other decisions (such as decisions about whether to make, buy, or build components; design of the development process; and necessary skills) required to achieve DOD's goals.
The focus is on operational software. The policy does not apply to software developed, acquired, or used by DOD research and development activities funded by 6.1, 6.2, and 6.3a appropriations. However, research and development software efforts likely to lead to new DOD operational capabilities should include plans for the transition of such software to meet operational software policy requirements (these plans are described under "Approval Authority and Milestones" in the next section).
High quality, low cost, and timely delivery are the primary goals for software development. Here, "quality" and "cost" are interpreted broadly. Quality includes, but is not necessarily limited to, functionality, fitness for a purpose, assurance (including reliability, survivability, availability, safety, and information security), efficiency, ease of use, interoperability, future adaptability (including extensibility, maintainability, portability, scalability, and compliance with standards), and development of DOD's software expertise. Cost includes, but is not limited to, full life-cycle monetary costs (i.e., both short- and long-term costs) and the extent of use of other scarce resources such as expert personnel. Cost also includes assessment of program risk and monetary and non-monetary consequences of system failure. Timely delivery, or schedule, is listed as a third goal because it is difficult to classify as either a quality or cost factor. These overriding goals are reflected in the following statements, which the committee believes should serve as guidance for DOD software development.
1. Projects will use the highest-level language that meets quality, cost, and schedule constraints for each software component. Other things being equal, higher-level languages increase productivity and reduce cost. Specifically, 3GLs (high-order languages) are generally preferable to machine or assembly language; further, fourth-generation programming languages (4GLs), program generators, graphical user interface builders, and database query languages, such as Structured Query Language (SQL), are generally preferable to 3GLs. Modification of the lower-level language output from a higher-level language processor should be considered as programming at the lower level; that is, components written in a language should be maintained in that language, and the output of a language processor should be changed only in exceptional cases.

2. Standardized and non-proprietary languages are preferred. Using standardized languages increases the portability of code and programmers, and diminishes the possibility of "lock-in" to a single source. This principle applies at all language levels. Thus standard SQL is preferable to a proprietary database query language. In some cases, unusual or "niche" languages are the best choice; however, these choices need to be defended.

3. Projects should not develop new languages or language processors for them, except for domain-specific languages that provide directives for application generators. Such development is costly, in both the short and long term, and should require unusual justification.

4. All relevant quality, cost, and schedule factors should be considered in the choice of programming language for each component.
Applying these four principles, it is reasonable, for example, to use small "shell" scripts to "glue" together system components, rather than writing them in Ada or some other high-order language; however, large and complex shell scripts may violate the principles by being difficult to maintain. Likewise, large packages of spreadsheet macros, or other code written in (more or less) proprietary 4GLs, need to be considered carefully. The key is to ensure that decisions are made carefully, weighing all relevant economic and engineering cost, quality, and schedule factors. These requirements lead to the following recommended policy for the use of Ada.
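The kind of small "glue" script contemplated above can be sketched as follows. This is a hedged illustration, not DOD code: the file names, field layout, and threshold are all hypothetical, and the point is simply that a few lines of shell combining existing tools (awk, grep) can replace what would otherwise be a new high-order-language component.

```shell
#!/bin/sh
# Illustrative glue script (hypothetical data): filter a component's log
# output and report a summary, instead of writing a new 3GL program.
set -eu

workdir=$(mktemp -d)

# Stand-in for the output of an upstream component.
printf '%s\n' 'alpha 10' 'beta 55' 'gamma 72' > "$workdir/sensor.log"

# Glue step 1: use awk as the filtering component
# (keep names whose second field exceeds an illustrative threshold of 50).
awk '$2 > 50 { print $1 }' "$workdir/sensor.log" > "$workdir/flagged.txt"

# Glue step 2: use grep as the counting component.
count=$(grep -c . "$workdir/flagged.txt")
echo "flagged: $count"   # prints: flagged: 2
```

A script of this size is easy to inspect and maintain; the principles above caution only against letting such scripts grow large and complex, at which point a maintainable high-order-language implementation may again be the better choice.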
The committee believes that Ada should be presumed to be the best choice, and thus should be used for software development, for subsystems of DOD's operational software systems that meet all of the following criteria:
1. The subsystem is in a warfighting software application area as defined in Chapter 3. While Ada may still be a good choice for other systems, DOD policy should require that Ada be used only in areas where it has clear advantages and is most likely to maximize DOD's competitive position relative to that of its adversaries.

2. DOD will direct the maintenance of the software. If a vendor is serving a broader customer community, then maintenance costs are spread over a larger base and are thus of less concern. If DOD directs the maintenance, whether the maintenance is performed by DOD personnel or by a vendor, then DOD must cover the life-cycle cost, and Ada is assumed to be more cost-effective over an entire life-cycle.

3. The software subsystem is large (more than 10,000 lines of code) or the subsystem is critical. Small and non-critical subsystems, as a rule, incur lower development and maintenance costs, and thus present a weaker case for mandating Ada.

4. There is no better COTS, NDI, or 4GL software solution. If existing software or higher-level language solutions are suitable, new development solely to promote Ada should not be required.

5. There is no life-cycle cost-effectiveness justification for using another programming language.

6. New software is being developed or an existing subsystem is being re-engineered; a re-engineering is a modification substantial enough that rewriting the subsystem would be cost-effective. For systems meeting criteria 1 through 5, Ada is generally superior to other high-order languages, and conversion over time should be encouraged.
For systems that meet all of the above criteria, Ada (preferably Ada 95) must be used for the preponderance (95 percent) of new or modified software subsystems or components; up to 5 percent may be written in other languages to facilitate component integration and other functions.
Projects that meet all of the criteria except number 1 above must analyze Ada as an alternative. As explained in Chapter 2, Ada is generally preferred for custom software because, compared with other 3GLs, it encourages better software development practices, has better error checking and recovery capabilities, has better support in certain domains, is standardized and has a validation facility, contributes to commonality, and leads to high quality at lower life-cycle cost.
The committee recommends that DOD broaden its current policy on programming language to include a range of software engineering factors that have a greater overall influence on software capability than does choice of a particular language alone. This section addresses how policy guidance regarding these factors, as described in Chapter 2, can be translated into operational decisions in systems development. The principal mechanism is the Software Engineering Plan Review (SEPR).
The committee explored a number of approaches for integrating selection of a programming language with related review and approval processes for software engineering decisions. One approach was to integrate the programming language selection process with a Capability Maturity Model assessment (Paulk et al., 1993), but this type of assessment focuses more on organizational process maturity than on specific technical decisions made by a particular project. Another approach was to add review of programming language and software engineering decisions to the Defense Acquisition Board (DAB) and Major Automated Information Systems Review Council (MAISRC) Milestones I and II review processes (as defined by DOD Directive 5000.2-R (DOD, 1996c)). Key software decisions are generally covered well in MAISRC reviews but often fall below the threshold of visibility in DAB reviews, which cover most DOD-dominated software application areas.
The committee determined that DOD's best alternative to these two approaches was to require passage of a focused SEPR as part of a major system's DAB or MAISRC Milestone I and II reviews. Reviews of this kind, as used in commercial practice, have proven to be highly effective for examining software and system requirements, plans, architectural decisions, and programming language decisions at life-cycle points similar to DAB and MAISRC Milestones I and II. For example, the SEPR concept has been used successfully in large technology-dependent commercial and government organizations, including AT&T and Lucent Technologies (architecture review board (AT&T, 1993)), Citibank (building permit system), NASA (architecture reviews), and others.2
The SEPR process is intended to provide a forum for the following activities:
The SEPR process is intended to help program managers (and possibly contractors) achieve a best-practices level of decision making for the software engineering associated with major systems, as well as to assure consideration of organizational and life-cycle factors. Implementation details are established not by senior officials, but rather by product-line stakeholders and expert peers, who have an incentive to minimize unnecessary bureaucracy and documentation. The principal policy elements for systems subject to DAB and MAISRC reviews are the following:
The SEPR process has three elements: (1) a policy framework established for major software engineering decisions and for SEPRs; (2) involvement in the review by peers as well as the principal stakeholders in system design; and (3) software engineering common practices, SEPR evaluation criteria, and SEPR process policies developed at the service and command levels (these would be specific to each service, and possibly to PEOs, who could, for example, require conformance to particular architectural frameworks for a class of systems, such as a particular level of a common operating environment). These three elements are detailed below.
The purpose of the SEPR process is to embody institutional and long-term interests in requirements formulation, development, and post-deployment that might otherwise be neglected or compromised in favor of short-term goals. Such short-term expedients could arise as undesired results of incentives created in the acquisition process or for other reasons.
Early decisions concerning design, process, and other software engineering factors can have a significant influence on overall life-cycle cost and risk, and on the potential for product-line commonality and interoperability. For example, the following questions arise:
The committee recommends that the SAEs be in charge of carrying out the SEPR process at the DOD service level. The SAEs would establish milestones for the SEPR process, appoint expert reviewers and stakeholder representatives, and establish criteria for evaluation. The SAEs and their associated SEOs would be responsible for implementing these functions, although this responsibility could be delegated as detailed below. The appropriate counterparts in other DOD components would have corresponding responsibilities.
The most important element is participation in the SEPR by peer software managers experienced in the application area, as well as by key stakeholders as either advocates or reviewers. Because the staffing of the Software Engineering Plan Review Board (SEPRB) from stakeholder organizations can vary considerably among systems, SEPRB representation is divided into mandatory and discretionary categories. The SAE must appoint representatives from mandatory stakeholders, but can include discretionary stakeholders as appropriate to the software engineering plan to be reviewed.
For systems subject to Milestone Decision Authority (MDA) at the service level or in the Office of the Secretary of Defense, the mandatory list of stakeholders and peer reviewers includes the following:
The discretionary list of stakeholders depends on the characteristics of the system being developed, but could include the following:
For systems subject to MDA, approval authority for the process resides with the Assistant Secretary of Defense (C3I) or the SAE, depending on the class of system. The direct management of the SEPR process would be carried out by the SAEs and their associated SEOs, with possible delegation to the PEO level. But the actual approval authority should not be delegated beyond the SAE. The Assistant Secretary of Defense (C3I) would monitor the review process.
When significant deviations are needed from DOD's stated policy and principles, direct approval of the Assistant Secretary of Defense (C3I) may be required; this should be determined when approval authority is delegated to the SAEs. It is the intent of these recommendations, however, that policy be framed with sufficient flexibility and foresight that such deviations are not required in the ordinary conduct of business. It is also the intent that implementation be delegated to a level sufficient to ensure in-depth review of software engineering decisions. SEPRs are ongoing processes, with specific approvals pertinent to specific milestones. SEPRs must be linked, at a minimum, to DAB and MAISRC Milestones I and II.
Many smaller systems are subject to DOD software engineering policy, but not to MDA. For these systems, approval authority resides with the SAE, but there is flexibility with respect to delegation and the need for formal SEPRs. Normally, the SAE can delegate approval authority to a PEO or, for very small systems, to a major command. In the latter case, approval can be granted for a family of related small systems as a result of a software engineering plan for a single product-line. But the committee suggests that for non-MDA systems, a formal SEPR (or some more expedient review process) be required for warfighting software, and be a recommended practice, at the discretion of the approval authority, for other software.
Given that the Director of Defense Research and Engineering (DDR&E) is responsible for advanced (6.3a) research, the committee recommends that the DDR&E establish a software engineering review process that addresses issues pertinent to the efficient transition of software technologies associated with major 6.3a demonstration programs, including plans to modify prototype 6.3a software to conform to the committee's recommended policy on selection of programming language, as appropriate. The review criteria, which would be at the discretion of the DDR&E, would not need to use the SEPR process, thus enabling the DDR&E to manage the trade-off between efficient transitions, on the one hand, and responsiveness and flexibility of research programs to the emergence of new technologies and concepts, on the other.
As envisioned by the committee, the SEPR process requires program managers responsible for MDA system specification, development, and major re-engineering efforts to submit a software engineering plan, preceded by a request to the SAE to convene a SEPRB. The SAE, considering the recommendations of the program manager and the cognizant PEO, would then select stakeholder organizations, which appoint representatives. For smaller systems, the SAE and PEO roles are further delegated, as indicated above.
It is the committee's intent that the approval authority would work with the SEPRB and the program manager to develop a software engineering plan suitable for the project and in conformance with all DOD policies. No entry into the DAB Milestone I and II reviews could be initiated without concurrence of the approval authority. Criteria used for evaluation of the software engineering plan should be defined by the approval authority.
The software engineering plan should be a simple document3 and should cover areas relevant to the decision process, including the following:
The SEPR approval authority, in consultation with PEO and program manager representatives, would develop specific criteria to be reviewed. The criteria for review could include, for example:
For each subsystem or component in the system, the following areas should be addressed:
As experience is gained, SAEs, PEOs, and other stakeholders will develop service-specific or domain-specific refinements of the review criteria listed in the previous section. For example, a service may designate conformance with a common architectural framework as a review item. These refinements may attain the status of software engineering "codes" (analogous to building codes) particular to a service or PEO product-line. These would serve as "best-practices" documents that would necessarily evolve over time, according to requirements and technology developments. They would also enable program managers to develop expectations concerning the SEPR process on the basis of their conformance with such codes.
1. This section and Appendix A present similar material in different formats. Appendix A was prepared by the committee to serve as a proposed revision to DOD Directive 3405.1 (DOD, 1987a); this section discusses the principles and rationale underlying the committee's suggested changes to that policy document.