Lagging indicators in occupational safety and health are required reporting figures that provide industry-standard metrics of how many employees are injured or otherwise affected by a recordable injury each year. Recognizing the weaknesses of traditional lagging indicators in managing occupational safety, many organizations and industries have emphasized leading indicators: metrics that capture safety-related activities before an incident occurs. The assumption is that well-chosen leading indicators better reflect safety performance and the need for preventive action. This NCHRP synthesis explores issues related to safety leading indicator practices used by state departments of transportation (DOTs) to track and prevent occupational injuries and other incidents.
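As context, a common lagging indicator of this kind is the total recordable incident rate (TRIR), which normalizes recordable injuries to a standard exposure of 200,000 hours (100 full-time employees working a full year). The formulation below is the standard OSHA calculation, offered only as an illustration; it is not specific to the DOTs surveyed:

\[
\text{TRIR} = \frac{\text{number of recordable injuries} \times 200{,}000}{\text{total hours worked by all employees}}
\]

For example, an agency with 10 recordable injuries over 2,000,000 hours worked would report a TRIR of 1.0.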
An electronic survey was created and distributed to members of the North American Association of Transportation Safety and Health Officials (NAATSHO). Completed responses were received from 43 DOTs. Subsequent case example interviews were conducted with five states to gather additional details.
This synthesis presents the following findings related to safety leading indicator practices used by state DOTs.
Thirty-five of 43 responding DOTs document safety leading indicators (see Figure 3.8 and Table 3.1 in Chapter 3). Of those DOTs, 12 have been documenting safety leading indicators for more than 10 years, and 34 of the 35 have been doing so for more than a year (Figure 3.9). The most frequently noted indicators are “Tracking Safety Training Logs,” “Work Zone Traffic Control Training,” “First Aid/CPR Training,” and “Use of Safety Committees” (Table 3.1). The least frequently noted indicators are “Use of Stretch and Flex Programs,” “Reporting of Vehicle Operations (i.e., seat belt usage),” and “Employee Perceptions of a Just and Fair Safety Culture” (Table 3.1). These findings are corroborated by feedback from the case examples.
Safety leading indicators are often collected on set schedules. Indicators such as “Use of Toolbox Talks” (11 DOTs) and “Reporting of Vehicle Inspections” (eight DOTs) are documented daily or weekly (Table 3.1). “Use of Safety Committees” is documented at regular intervals: 11 DOTs document committee activity quarterly, while six document it monthly (Table 3.1). Feedback from the case examples cited toolbox talks and employee engagement occurring daily or weekly.
Twenty-four DOTs collect safety leading indicators on paper and store them electronically, 22 DOTs collect them using an electronic safety management system (SMS), and 13 DOTs both collect and store the data on paper (Figure 3.16). As indicated in the case examples, the ability to organize and use the collected safety data to present trends, target training, and highlight safety program performance is vital to the overall program. The data from safety leading indicators are typically used to identify short-term and long-term trends (20 DOTs each; Figure 3.17). Eighteen DOTs use the data as needed or as requested, 17 DOTs correlate the data with lagging indicators, and 14 DOTs visualize the data in a dashboard (Figure 3.17). Twenty-nine of 35 DOTs analyze the safety leading indicator data to improve the safety program, one DOT does not analyze the data, and five DOTs are unsure (Figure 3.18). Several of the case example interviewees anecdotally noted improvements based on their implementation of safety leading indicators; Virginia, in contrast, quantitatively assesses its efforts to measure improvements.
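As a minimal sketch of the kind of leading-to-lagging correlation described above, the following Python example computes a Pearson correlation coefficient from hypothetical monthly counts. The data and variable names are illustrative assumptions, not survey results; in practice, DOTs may perform such analysis in spreadsheets or SMS software.

```python
from statistics import mean

# Hypothetical monthly counts (illustrative only; not drawn from the survey).
toolbox_talks = [42, 38, 51, 47, 55, 60, 58, 49, 44, 52, 57, 61]  # leading indicator
recordable_incidents = [5, 6, 4, 5, 3, 2, 3, 4, 5, 3, 2, 2]       # lagging indicator

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson(toolbox_talks, recordable_incidents)
print(f"Toolbox talks vs. recordable incidents: r = {r:.2f}")
```

A negative coefficient here would be consistent with months of higher toolbox-talk activity coinciding with fewer recordable incidents, although correlation alone does not establish a preventive effect.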
DOTs noted a variety of implementation strategies for their safety leading indicator programs. Seventeen DOTs have dedicated staff managing the program, 15 DOTs have an employee suggestion or other feedback mechanism, 13 DOTs communicate or advertise the program, and 12 DOTs piloted the program in a specific region or district (Figure 3.21). Implementation strategies deemed successful include dedicated staff to manage the program (17 DOTs), an employee suggestion or other feedback mechanism (12 DOTs), communicating or advertising the program (11 DOTs), and an oversight team (11 DOTs) (Figure 3.22). Benefits noted for safety leading indicator programs include increased awareness of safety activities (24 DOTs), improved data for safety-related decision-making (21 DOTs), increased employee engagement in safety (20 DOTs), reduction of incident rates (19 DOTs), and reduction of workers’ compensation claims (16 DOTs) (Figure 3.25). Lack of employee buy-in and participation (17 DOTs), lack of staffing support for safety programs (15 DOTs), other duties or programs with higher priority (15 DOTs), and lack of funding (eight DOTs) are the most frequently noted challenges of a safety leading indicator program (Figure 3.24). Feedback from the case examples also identified employee buy-in as a challenge but highlighted that it can be overcome through communication and the building of trust.
Chapter 1 of this report introduces safety leading indicators and their use in measuring safety performance and describes the methodology of the synthesis. A literature review of safety leading indicators is discussed in Chapter 2. Results of the national survey are presented in Chapter 3. Specific case example interviews are detailed in Chapter 4. A summary of findings from the synthesis is outlined in Chapter 5. The References section lists the sources cited in the synthesis. Appendix A provides the survey questionnaire, and Appendix B details the survey results. Appendix C lists the case example questions. Finally, Appendix D provides references to DOT safety leading indicator policies.