Appendix A4 contains the findings from the Stage 1 interviews. The findings are grouped by the four project objectives. Within each group, we present the findings from the interviews and, in many cases, their implications for the development of the KM Guide. A fifth group holds notable findings that fell outside the four project objectives.
Finding 1: Most DOTs we spoke with had no performance metrics to measure their knowledge management function or capabilities. In most cases, there were no plans to develop metrics in the next year or two.
Finding 2: With respect to the KM Capability Maturity Model, we found that most of the DOTs we spoke with were at the lowest level of the Model. This assessment is based on the following observations.
Finding 3: Most state DOTs' KM efforts are small, new, and embryonic. The majority of KM programs are less than five years old.
Implication: The guidebook needs to include a KM Capability Maturity Model and KM performance metrics for the state DOTs. The KM CMM should focus on defining the lower levels of the model and helping the states establish and grow their KM functions.
Finding 4: All DOTs perform some KM activities. However, in many cases, the activities typically classified as knowledge management are not consolidated and performed in a single unit or function that is titled “knowledge management.” They are dispersed throughout the DOTs.
Finding 5: The KM activities undertaken at the state DOTs vary widely from state to state. Some of the KM activities being actively performed include:
Finding 6: There is no uniformity or consistency across the state DOTs with respect to their KM activities. We found no single KM activity routinely performed by the majority of state DOTs we spoke with.
Finding 7: All of the DOTs performed some KM activities in other business functions. A few examples of general KM activities performed by other functions and departments include document and file security policies; document and file retention policies; KM systems specification, evaluation, and selection (e.g., ECMS, DAMS, and enterprise search); metadata management; taxonomy/ontology development and management; autoclassification; business process documentation; and knowledge capture.
Finding 8: Some of the processes and activities typically classified as a part of knowledge management were not being performed in the DOTs. Some DOTs were not performing the KM activities listed in Finding 7.
Implication: The Guidebook should list the activities, processes, practices, and policies that are generally accepted as being part of the KM function.
Implication: The Guidebook should specify KM performance metrics at a granular level for various types of KM programs or practices but recognize that most state DOTs will not have implemented the specific KM program or practice. For example, the Guidebook may define metrics for storing digital content (e.g., digital images, drawings, audio and video content), but recognize that most state DOTs do not have a general digital asset management system nor the policies and practices to manage and protect their digital assets.
Finding 9: Several states have implemented a KM project or program that could be considered a KM leading practice in state DOTs. For example, one state had an outstanding knowledge portal [Kentucky], another had a robust process for creating “position books” [Virginia], a third had expertise profiles [Illinois], one state had an extensive SOP program [Maryland], and another had a large set of active CoPs [Michigan].
Implication: The KM Guidebook should include leading practices from DOTs because several interviewees requested KM examples drawn from the DOT community.
Finding 10: Most DOTs have not sought funding for KM staffing and projects. We believe this results from a lack of clearly defined projects with economic/operational justification.
Finding 11: No DOT has a robust approach to developing business cases for KM investments. Most DOTs admitted that they had not developed formal business cases for capital funding requests for knowledge management, and none had attempted to quantify the value of KM in terms of its impact on enterprise DOT metrics, such as shortening a construction project's cycle time, reducing costs, or increasing labor productivity by applying captured knowledge to improve operational decision-making.
Implication: The KM Guidebook should clearly describe how to develop business cases for knowledge management based on quantitative metrics and return-on-investment (ROI) calculations.
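To illustrate the kind of quantitative justification such a business case could rest on, a basic ROI calculation might take the following form. All dollar figures below are hypothetical and for illustration only; they are not drawn from the interviews.

```latex
% Hypothetical ROI illustration; all dollar figures are assumed.
\[
\text{ROI} \;=\; \frac{\text{Annual benefit} - \text{Annual cost}}{\text{Annual cost}} \times 100\%
\]
% Example: if applying captured knowledge shortens construction
% project cycle times enough to save \$500{,}000 per year, and the
% KM program costs \$200{,}000 per year to operate, then
\[
\text{ROI} \;=\; \frac{500{,}000 - 200{,}000}{200{,}000} \times 100\% \;=\; 150\%.
\]
```

A Guidebook chapter could walk DOT KM leads through estimating the benefit term from enterprise metrics like those named in Finding 11.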
Finding 12: The KM capabilities of state DOTs are closely tied to where the KM group sits within the organization. For example, the interviewees reported the following:
Finding 13: We found few organizational practices incorporated into the KM function from outside its organizational reporting relationship (see Finding 12). None of the interviewees mentioned applying standard business approaches and methodologies to support or enrich their KM programs, such as competitive benchmarking, Six Sigma, design thinking, Agile methodology, and value chain analysis. The closest exception was the Louisiana DOT KM lead, who mentioned he would like to use QCIP (Quality and Continuous Improvement Program) as a mediator among offices and as a repository for reports.
Finding 14: Today, the majority of KM leads at state DOTs sit in Human Resources or Workforce Development/Organizational Development functions.
Finding 15: There is no consensus among the state DOT KM interviewees on where KM programs should sit in an organization. Some think that it should be centralized. The core argument here is that the higher in the organization KM sits, the more leadership support it gets and, therefore, the more serious the effort will be perceived (Georgia). Others think that KM should be decentralized. In the case of NJDOT, no single office “owns” KM. It has developed in a collaborative environment and is shared and embedded throughout the organization. In the words of NCDOT, it should be decentralized and not in the Secretary’s office, so it is “employee-driven.”
The majority, however, think that a “hub-and-spoke” model is best. This refers to an organizational model with a centralized, coordinating KM body and KM staff positioned in the business units. The local KM staff can translate KM requirements to meet local needs. Maine DOT refers to this as a hybrid model. In some cases, such as NYDOT, the hybrid model is favored because it is an operational model that already exists.
Each of the three models has advantages and disadvantages, summarized in the table below.
Table: Advantages and Disadvantages of Three Models
Finding 16: The KM function seems to be better appreciated and understood internally when it is organizationally positioned closer to “knowledge-intensive” divisions and business units. For example, the KM lead in the State of Illinois DOT said, “Due to the placement of the [KM function/library] in the Bureau of Research, my work is better understood than when the library was housed in the Business Services Office. The Bureau of Research is connected to Engineering and is supervised by an engineer.”
Finding 17: An overwhelming percentage of State DOT KM programs did not have internal partnerships that supported key knowledge management activities such as retention, sharing, and development. Those that did have partnerships were solely relationship-based, not activity/operations-based. That is, the relationships existed because the individual had a long-standing relationship with someone in another department and leveraged that relationship to drive the KM effort. We believe the lack of partnerships is simply due to the newness of the KM function. There hasn’t been a requirement yet for inter-group processes and procedures.
However, based on an analysis of the interviews, there are some logical business units within a state department of transportation (DOT) for Knowledge Management (KM) programs to partner with. These include:
Finding 18: Many of the DOT KM leads did not have a complete and comprehensive understanding of the knowledge management field. Many KM groups were focused narrowly on the business function in which they worked. See Finding 12.
Implication: Include a section in the Guidebook that defines knowledge management and the knowledge management field. This section should use current (i.e., 2024) terminology and examples.
Finding 19: One of the questions posed to the interviewees addressed the topics they would like to see in a [general] KM guidebook. The interviewees suggested the following sections:
Note: Excluded from the list above were the suggestions that will be covered by this research project.
Implication: While the scope of the Guidebook for the NCHRP Project #23-17 is already defined, the NCHRP may want to fund subsequent projects to address the other topics.
Finding 20: Several state DOTs have experienced “fits and starts” in their KM efforts. A driving force for some states’ KM efforts has been nominating one or more employees to the AASHTO CKM committee. Some interviewees admitted that their state DOT doesn’t have a formal KM program.
Finding 21: The State DOT KM Leads are excited about this NCHRP project. The majority expressed a sincere desire to get a copy of the KM Guidebook when it is completed. Additionally, many interviewees asked to see our upcoming draft versions so they could get an early start applying the recommendations and solutions.
Implication: Add a step in the project plan to get input from the Stage 1 interviewees on draft versions of the KM Guidebook.