Suggested Citation: "6 Sample Machine Learning Applications." National Academies of Sciences, Engineering, and Medicine. 2024. Implementing and Leveraging Machine Learning at State Departments of Transportation. Washington, DC: The National Academies Press. doi: 10.17226/27902.

CHAPTER 6

Sample Machine Learning Applications

Introduction

This chapter describes three sample Machine Learning (ML) applications created to illustrate how ML solutions could be leveraged for different potential state department of transportation (DOT) uses. The presented ML solutions are intended for educational purposes and for raising awareness about different ML methods. The first two examples show how pre-trained deep learning (DL) models can be integrated into a custom program and used directly, without the need for model training. The third example is based on Bayesian Networks; it is trained with incident data from New York and requires retraining to be tailored to local data.

The first example pertains to detecting and counting vehicles in a video. The user sets up a virtual line across the travel lanes, and the ML model detects vehicles as they cross this line and records their timestamps. The second application detects stop signs in a set of images. The program automatically detects the presence of a stop sign in each image and retrieves the geocoordinates from the image’s metadata. The output includes the latitude and longitude coordinates along with a confidence score for the presence of a stop sign in each image. Such an application could support asset management. The third ML application addresses incident management: it predicts the duration of an incident based on the available incident data. It uses a Bayesian Network model and updates the predicted durations as new information about the incident becomes available.

For each ML application, step-by-step instructions are provided showing how the program can be installed and applied to given datasets. These are designed for someone with limited or no prior experience with ML methods. The code and documentation (e.g., a step-by-step user guide) are self-contained and available on GitHub, a public platform for sharing code and files. All three sample ML applications can be accessed from this GitHub page: https://github.com/ODU-TRI

These sample hands-on applications are expected to help DOT staff become more familiar with ML methods and the vast ML and deep learning resources available in the public domain. This will ultimately help the agencies make more informed decisions about the deployment of ML applications. The following sections provide an overview of each one of these examples.

Traffic Volume Counting Using YOLO Object Detector

To demonstrate the practical utility of publicly accessible deep learning (DL) technologies, we developed a program to automatically detect and count vehicles passing through a corridor in a specified direction. For this purpose, we implemented YOLO (You Only Look Once), a widely recognized object detection system. It takes the input frames from a video file and outputs a bounding box for every vehicle visible in the image. We used YOLO v5, known for its efficiency and accuracy, in this application. The program further allows users to define a specific reference line on the road, based on two user-selected points, to pinpoint the exact location where vehicles are to be counted in the video. A vehicle is counted once the center of its bounding box crosses this line from one frame to the next.


This automated counting process is executed for each frame of the input video. We use OpenCV for processing the video file, extracting frames, and drawing additional information, including the running count, on the output video. We also store the count data in a CSV file with a timestamp for each vehicle detected, using the pandas package in Python. This data can then be aggregated to extract counts for any desired interval, such as hourly counts.
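The counting rule described above reduces to a small geometric test: a vehicle is counted when its bounding-box center moves from one side of the user-defined line to the other between consecutive frames. The following is a minimal sketch of that test under the assumption that the detector (YOLO, run elsewhere in the pipeline) supplies the box centers; all coordinates and line endpoints below are hypothetical.

```python
def side(point, a, b):
    """Signed area test: >0 if point is left of the line a->b, <0 if right, 0 if on it."""
    return (b[0] - a[0]) * (point[1] - a[1]) - (b[1] - a[1]) * (point[0] - a[0])

def crossed(prev_center, curr_center, a, b):
    """True if a bounding-box center moved across the line a->b between two frames."""
    return side(prev_center, a, b) * side(curr_center, a, b) < 0

# Example: a reference line across the road and one vehicle center over two frames.
line_start, line_end = (100, 300), (500, 300)   # hypothetical user-selected points
count = 0
if crossed((320, 310), (320, 295), line_start, line_end):
    count += 1  # this vehicle would be counted (and its timestamp logged)
```

Because the sign of `side()` also tells which side the vehicle came from, the same test can count each travel direction separately, which is how counting in either direction (e.g., for wrong-way detection) can be supported.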

Figure 11. Sample output frame.

Figure 11 demonstrates a sample output of this program. In this video frame, each vehicle is enclosed in a bounding box with a category and confidence level on top of it. As vehicles pass the green line drawn by the user, the total count increases, and new records are created in the CSV file to store the timestamps. Such data could be used for various applications including AADT (annual average daily traffic) estimates, hourly or sub-hourly flow rates, ramp volumes, detecting wrong-way driving (the program can count volumes in either direction), creating cumulative volume plots, etc.
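As a sketch of the aggregation step, the per-vehicle timestamps stored in the CSV file can be rolled up to hourly counts with pandas; the column name and the example records below are illustrative assumptions.

```python
import pandas as pd

# Hypothetical per-vehicle records, as written to the output CSV file.
records = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-05-01 07:12:03", "2024-05-01 07:48:51",
        "2024-05-01 08:02:10", "2024-05-01 08:15:44", "2024-05-01 08:59:01",
    ])
})

# Count vehicles per hour; any interval string (e.g., "15min") works the same way.
hourly = records.set_index("timestamp").resample("1h").size()
```

The same resampled series can feed AADT estimates, sub-hourly flow rates, or cumulative volume plots.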

While YOLO is adept at recognizing a wide range of object categories, its ability to differentiate between various types of vehicles, such as buses and trucks, is somewhat limited due to its generalized training for many object classes. Therefore, for applications where distinguishing between specific vehicle types is crucial, we recommend fine-tuning YOLO on those particular vehicle categories to enhance its accuracy and performance in such specialized tasks. It should also be noted that the quality and resolution of the input video may affect the accuracy of the counts. Generally, higher resolution will result in better accuracy.


Stop Sign Detection Using YOLO and Automatic Geotag Retrieval

This application processes all images in a given folder to determine whether a stop sign is present in each image. It also automatically extracts geographical coordinates from the image’s metadata, if available, to record where the image was taken. Such metadata are typically available when the image is captured by a smartphone. This information can be used to produce maps and create an inventory of intersections with stop signs. For this application, a pre-trained deep learning model called YOLOv8 is used for stop sign detection, and the Python PIL library is used for extracting the geolocation data of each image.

The Python code processes all images in the given folder and summarizes the results in a CSV file that contains the image names, the confidence score from the YOLOv8 model in detecting stop signs, and the latitudes and longitudes. Additionally, copies of the images with bounding boxes around all detected objects are saved for reference. The pre-trained YOLOv8 model detects 80 different object classes. The traffic-related objects detectable by the pre-trained model include vehicles, stop signs, traffic lights, parking meters, and fire hydrants. Figure 12 shows a sample output image of the software with bounding boxes over the detected objects.
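The geotag retrieval step can be sketched as follows. GPS values in EXIF metadata are stored as degrees/minutes/seconds rationals plus a hemisphere reference, so the main work is the conversion to decimal degrees. The function names here are illustrative rather than the ones used in the repository, and Pillow is imported inside the function so the conversion helper runs without it.

```python
def dms_to_decimal(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) plus hemisphere ref to decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

def image_geotag(path):
    """Return (latitude, longitude) from an image's EXIF metadata, or None if absent."""
    from PIL import Image            # Pillow
    from PIL.ExifTags import GPSTAGS
    exif = Image.open(path).getexif()
    gps_raw = exif.get_ifd(0x8825)   # 0x8825 is the GPSInfo IFD tag
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None
    lat = dms_to_decimal(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
    lon = dms_to_decimal(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))
    return lat, lon
```

In practice, the per-image loop would call the detector and a helper like `image_geotag` together, writing one CSV row per image; images lacking GPS metadata would receive placeholder coordinates.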

Figure 12. Sample output image from YOLOv8.

Table 25 lists the geotag information of the processed images along with the confidence score of detecting a stop sign (a latitude or longitude of 999 indicates that no geotag was found in the image’s metadata). The user can set a threshold to screen out images without a stop sign. The value of this threshold may depend on the image quality and the specific requirements of the use case.

Table 25 Sample output table from the program.

File Name Confidence Score Latitude Longitude
1 stopsign.jpeg 0.977 999.00000 999.00000
2 mystopsign.jpg 0.962 36.85460 -76.29450
3 mystopsign2.jpg 0.946 36.85552 -76.29386
4 hfl_62801b.jpg 0.000 999.00000 999.00000
5 stop.jpg 0.970 999.00000 999.00000
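Applying the confidence threshold to such a table is straightforward in pandas; the 0.5 cutoff and the rows below mirror Table 25 but are otherwise illustrative.

```python
import pandas as pd

# Rows mirroring Table 25 (999.0 marks images without a geotag).
results = pd.DataFrame({
    "File Name": ["stopsign.jpeg", "mystopsign.jpg", "hfl_62801b.jpg"],
    "Confidence Score": [0.977, 0.962, 0.000],
    "Latitude": [999.0, 36.85460, 999.0],
    "Longitude": [999.0, -76.29450, 999.0],
})

# Keep only images where a stop sign was detected with sufficient confidence.
with_stop_sign = results[results["Confidence Score"] >= 0.5]
```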


Traffic Incident Duration (TID) Prediction Using Bayesian Networks

In this example, the main goal is to show how to predict traffic incident duration (TID) probabilistically using one of the well-known ML approaches, namely Bayesian Networks (BNs). Before providing a brief explanation of the BN-based TID prediction approach, it is important to describe the problem of predicting TID in the context of real-time incident management operations.

TID can be defined as the time between the occurrence of a traffic incident and its complete clearance. More precisely, this duration covers the entire incident management process, from the moment of the incident’s occurrence to the resolution of its impacts, including detection, response, clearance, and site recovery times.

Accurate prediction of incident duration during incident management operations is important to achieve effective incident management, improve incident response operations, and provide timely information to travelers regarding expected delays and alternative diversion routes, mitigating the impact of incidents on individual drivers’ travel plans. However, real-time TID prediction is inherently challenging due to the highly uncertain nature of traffic incidents, as well as of incident management operations, which require the coordination of various response teams depending on the nature and location of a specific incident. Factors influencing TID include incident characteristics (type, severity, location), environmental conditions (time of day, weather), and response attributes (response time, recovery strategy). To address these challenges, machine learning and probabilistic graphical models such as Bayesian Networks, along with increasingly available historical and real-time data, can be used to model and predict incident durations (Ozbay and Noyan 2006).

Bayesian Networks (BNs) are graphical models that utilize nodes and directed edges to represent probabilistic relationships. Essentially, BNs are structured as directed acyclic graphs, capturing both conditional dependencies and independencies among variables. To articulate this more rigorously, a Bayesian Network can be defined as a directed graph G = (N, L), where N denotes the collection of nodes and L represents the directed edges that connect these nodes. The parents of each node i are its direct predecessors, denoted A_i, and the network is established as follows:

A random variable x_i for each node i ∈ N.

A conditional probability distribution p(x_i | x_{A_i}) associated with each node, specifying the probability of x_i conditioned on the values of its parents x_{A_i}.

Below is an example of a simple Bayesian Network, where random variables are defined as nodes and the conditional dependencies are represented by directed arcs.

Figure 13. Example Bayesian Network.

In the BN formalism, the network in Figure 13 indicates that A is conditionally dependent upon B, C is conditionally dependent upon B, and A and C are conditionally independent given B. In addition, B has no parents and is unaffected by A and C. These graph structures and dependencies are built from historical data using the relevant estimation algorithms.
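The factorization implied by Figure 13 can be made concrete with a small numeric example. The conditional probability tables below are made-up values; the point is that the joint distribution factors as p(a, b, c) = p(b) p(a | b) p(c | b), from which the conditional independence of A and C given B follows.

```python
from itertools import product

# Made-up CPTs for the network of Figure 13 (B -> A, B -> C), all variables binary.
p_b = {0: 0.6, 1: 0.4}
p_a_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}  # p_a_given_b[b][a]
p_c_given_b = {0: {0: 0.2, 1: 0.8}, 1: {0: 0.5, 1: 0.5}}  # p_c_given_b[b][c]

def joint(a, b, c):
    """BN factorization: p(a, b, c) = p(b) * p(a | b) * p(c | b)."""
    return p_b[b] * p_a_given_b[b][a] * p_c_given_b[b][c]

# The eight joint probabilities sum to one, as they must.
total = sum(joint(a, b, c) for a, b, c in product((0, 1), repeat=3))
```

Dividing any joint entry by p(b) recovers p(a | b) p(c | b), which is exactly the conditional independence of A and C given B.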


The TID prediction model presented in this report is based on the BN model proposed by Ozbay and Noyan (2006) and Ozbay and Demiroluk (2014). An anonymized real-world dataset from a portion of the New York network is used. The dataset includes a total of 4,233 traffic incident records that occurred on both arterials and interstate highways in 2021. The attributes of incidents in this database used to train the BN are briefly described below. The main target variable is ‘Duration (min)’, which is converted into a categorical variable named ‘Duration_class’ with the four intervals shown in the table below.

Table 26 Categories for incident duration.

Duration (min) Intervals
Class 0 0-30
Class 1 30-60
Class 2 60-90
Class 3 >90
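As a sketch, the conversion of ‘Duration (min)’ into ‘Duration_class’ can be done with pandas. Whether a boundary value such as exactly 30 minutes falls into the lower or the upper class is an assumption here, since Table 26 does not specify it.

```python
import pandas as pd

durations = pd.Series([12, 45, 75, 120])  # example durations in minutes

# Bin edges follow Table 26; right=True places a boundary value in the lower class.
duration_class = pd.cut(
    durations,
    bins=[0, 30, 60, 90, float("inf")],
    labels=[0, 1, 2, 3],
    right=True,
)
```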

The independent variables that are hypothesized to influence the traffic incident duration include:

  1. `Direction`: Direction of the incident (0: both directions, 1: east, 2: west, 3: south, 4: north, 5: no information)
  2. `County`: The county where the incident occurred (0: County1, 1: County2, 2: County3, 3: County4, 4: County5)
  3. `Year`: The year of the incident
  4. `TOD`: Time of day (daytime or night-time) when the incident occurred (0: daytime, 1: nighttime)
  5. `Peak Hour`: Whether it is peak or off-peak hour when the incident occurred (0: off-peak, 1: peak hour)
  6. `Day of Week`: The day when the incident occurred (0: weekday, 1: weekend)
  7. `Season of Year`: The season when the incident occurred (0: spring and fall, 1: summer, 2: winter)
  8. `Injury involved`: Whether there is injury involved in the incident (0: no injury involved, 1: injuries)
  9. `Truck involved`: Whether there is a truck involved in the incident (0: no heavy vehicle, 1: heavy vehicle involved)
  10. `Lane closure type`: How many travel lanes were closed due to the incident (0: zero travel lanes, 1: one travel lane, 2: two or more travel lanes, 3: all travel lanes)
  11. `Fire truck involved`: Whether there is a fire truck involved in the incident (0: no fire truck involved, 1: fire truck involved)

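To illustrate how raw incident fields map to these codes, a small encoder might look like the following; the dictionary covers only a few of the variables above, and the text labels are illustrative.

```python
# Numeric encodings for a few of the variables listed above.
ENCODINGS = {
    "Direction": {"both": 0, "east": 1, "west": 2, "south": 3, "north": 4, "unknown": 5},
    "TOD": {"daytime": 0, "nighttime": 1},
    "Peak Hour": {"off-peak": 0, "peak": 1},
    "Day of Week": {"weekday": 0, "weekend": 1},
}

def encode(record):
    """Map a raw record's text values to the numeric codes used for training."""
    return {field: ENCODINGS[field][value]
            for field, value in record.items() if field in ENCODINGS}

row = encode({"Direction": "north", "TOD": "daytime", "Peak Hour": "peak"})
```
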
A Python script that employs a Bayesian Network toolbox in Python, along with the anonymized incident data described above, was developed to train and test the BN model shown in Figure 14. The script executes the steps briefly explained below.

  1. Data Preprocessing: The preprocessing of the incident data involves cleaning the data, handling missing values, and converting categorical variables into the numerical format shown above so that they can be used in the Python code. The Python script, titled Bayesian-net, reads this data before moving on to the BN estimation steps. A sample of the input data table is shown below, with the data fields described above.

Table 27 Sample incident data.

Incident Records Direction County TOD Peak Hour Day of Week Season of Year Injury involved Truck involved Lane closure type Fire truck involved Duration Class
1058 3 2 0 1 0 1 0 0 1 0 0
1016 2 1 0 1 0 0 0 0 1 0 0
139 3 3 0 1 0 0 0 0 2 0 0
1157 2 1 1 0 1 1 0 0 1 0 0
606 2 0 0 0 0 0 0 0 1 0
…….. …….. …….. …….. …….. …….. …….. …….. …….. …….. …….. ……..
3971 3 3 0 1 0 0 1 1 1 0 1
3080 4 3 1 0 1 0 1 1 2 0 1
693 3 3 0 0 0 1 0 0 2 0 0
1102 4 1 0 1 0 2 0 0 1 0 0
4900 1 1 0 0 0 0 0 1 1 1 2
  2. Network Structure Learning: Learning the structure of the Bayesian Network is achieved by defining the nodes (variables) and edges (dependencies) of the network. This is done automatically using the BN toolbox implemented in the script. The learned tree structure based on the input data is shown in Figure 14.
Figure 14. Example of the tree structure of the trained Bayesian Network generated using the sample historical data provided on GitHub.
  3. Parameter Learning: Once the BN structure is defined, the conditional probability distributions of each node in the network are estimated using the training data. The Python script generates a prediction of the main output, namely “Duration_class,” in terms of the probability of the duration being in each of the four categories. A sample output is given in the following table.

Table 28 Sample probabilities for each duration category.

Duration_class Prob (Duration_class)
Duration Class (0) 0.4030
Duration Class (1) 0.2991
Duration Class (2) 0.1760
Duration Class (3) 0.1219

For example, for this incident, the probability of the incident duration being between 0 and 30 minutes is 0.4030. The rest of the conditional dependencies among other nodes can be seen in the readme file as well as by running the Python script.
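When a single point prediction is needed rather than the full distribution, the most likely class can be read off the probabilities in Table 28:

```python
# Posterior probabilities from Table 28.
class_probs = {0: 0.4030, 1: 0.2991, 2: 0.1760, 3: 0.1219}

# The point prediction is the class with the highest posterior probability.
predicted_class = max(class_probs, key=class_probs.get)
```
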

  4. Model Validation: The trained BN is then validated by comparing its predictions with the actual values for a subset of the data, using metrics recommended in the literature for the validation of BNs. In this specific case, model performance on the validation dataset was evaluated using three well-known metrics for each class: Precision, Recall, and F1-score. Precision is the number of true positive (TP) outcomes divided by the sum of TP and false positive (FP) outcomes. Recall is the number of TP outcomes divided by the sum of TP and false negative (FN) outcomes. The F1-score is the harmonic mean of precision and recall; it is high only if both precision and recall are high, making it a robust single metric when the balance between the two is critical. The Python script generates these metrics in its output file. On the validation data, the BN model performs very well when predicting short-duration incidents (0-30 minutes): precision, recall, and F1-score are 0.9958, 0.9673, and 0.9814, respectively, underscoring the model’s high reliability in forecasting short incidents. For incidents of 30-60 minutes, the model still achieves good results, with an F1-score of 0.9669. Incidents of 60-90 minutes have a precision of 1.0000 but a lower recall of 0.6466, for an F1-score of 0.7850. For incidents exceeding 90 minutes, the model consistently identifies long-duration incidents (recall of 1.0000), but its precision drops to 0.6667, resulting in an F1-score of 0.8000. These scores are summarized in Table 29.

Table 29 Validation metrics for the Bayesian Network model.

Precision Recall F1-Score
Class 0 0.9958 0.9673 0.9814
Class 1 0.9360 1.0000 0.9669
Class 2 1.0000 0.6466 0.7850
Class 3 0.6667 1.0000 0.8000
  5. Prediction: Finally, the trained Bayesian Network is used to make predictions about traffic incident duration using a holdout sample that was not used for training. The overall prediction accuracy is found to be 91.6%, which compares favorably with the bounds given by Li et al. (2018); according to that paper, the accuracy of existing traffic incident duration prediction studies typically falls within a range of 60%-70%. For the true positive rate (TPR), values close to 1.0 are ideal, and values above 0.7 or 0.8 are often considered good. For the false positive rate (FPR), values close to 0 are ideal, and values below 0.1 or 0.05 are considered good. For the area under the ROC curve (AUC), values greater than 0.7 are considered acceptable, values greater than 0.8 good, and values greater than 0.9 excellent. The individual values related to the prediction accuracy of this model fall within these ranges.
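The per-class validation metrics above reduce to simple ratios of confusion counts. The counts in this sketch are synthetic, chosen only to mirror the shape of the Class 2 numbers (perfect precision, imperfect recall); they are not the actual counts from the validation run.

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1-score from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Synthetic example: no false positives, so precision is perfect, but the
# false negatives pull recall (and hence F1) down.
p, r, f1 = precision_recall_f1(tp=75, fp=0, fn=41)
```
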

The readme file included in the GitHub directory created for this example application contains a couple of simple case studies, like the ones given in Ozbay and Noyan (2006), to demonstrate how the trained BN works with missing data. This is an important exercise because it illustrates a key advantage of BNs: they can still make predictions even when some information about an incident is missing, which is very common in real-world scenarios, where incident-related information does not become available all at once. There are a few important points to consider when experimenting with this BN-based TID prediction example:

  1. The .csv file for training is provided for initial experimentation with the BN models. However, if one would like to use a different training file, the data conventions, including the data structure given in the readme file, must be followed.
  2. To run the Python script, one must set up the correct virtual environment; the five steps for setting it up are described in the readme file.

Application of the Bayesian Network

Two complementary scenarios are presented below to illustrate the inference capability of the BN given the availability of information.

Scenario #1: Predicting TID with missing Fire Truck and Injury information.

To illustrate the operational capabilities of the trained BN under real-world conditions, consider a hypothetical scenario in which a decision-maker inputs specific information regarding an incident. The incident is reported to have occurred in County 3, on a northbound route, during the daytime peak traffic hours of the summer season, involving a truck and resulting in a lane closure. However, because information arrives over time, details concerning any injuries and the presence of a fire truck at the scene are not yet known. The input array is therefore structured to represent the known parameters of the incident while omitting the fields for the unavailable data:


evidence = {'Direction': 4,          # north
            'County': 3,             # the county of the incident
            'TOD': 0,                # daytime
            'Peak Hour': 1,          # peak hour
            'Season of Year': 1,     # summer
            'Truck involved': 1,     # truck involved
            'Lane closure type': 1}  # one travel lane closed
# 'Injury involved' and 'Fire truck involved' are intentionally omitted (unknown).

The trained BN model when run with the input above produces the incident duration predictions for each class shown below.

Duration_Class Prob (Duration_class)
Duration Class (0) 0.0040
Duration Class (1) 0.4980
Duration Class (2) 0.3109
Duration Class (3) 0.1871

Scenario #2: Predicting TID with missing Fire Truck Involvement information.

Now, let us assume that the decision-maker receives additional information confirming an injury and updates the incident scenario as shown below. In this case, the only unknown is the fire truck information.

evidence = {'Direction': 4,          # north
            'County': 3,             # County 4
            'TOD': 0,                # daytime
            'Peak Hour': 1,          # peak hour
            'Season of Year': 1,     # summer
            'Injury involved': 1,    # injury now confirmed
            'Truck involved': 1,     # truck involved
            'Lane closure type': 1}  # one travel lane closed
# Only 'Fire truck involved' remains omitted (unknown).

From the new predictions shown below, it can be observed that the probability of this incident belonging to the duration class between 30 and 60 minutes increased substantially compared with Scenario #1. A similar increase is observed for duration class 3.

Duration_Class Prob (Duration_class)
Duration Class (0) 0.0002
Duration Class (1) 0.6302
Duration Class (2) 0.1327
Duration Class (3) 0.2369

A more detailed description of the usage of a trained BN model for TID can be found in Ozbay and Noyan (2006). In addition to using this trained model, one can import other incident data in a format like the one provided here and retrain the BN with the new data.
