The Intelligent Transportation Systems Joint Program Office (ITS JPO), U.S. Department of Transportation

STAGE 1B PRIMARY TRACK WINNERS

The U.S. Department of Transportation (U.S. DOT) announced the winners of the U.S. DOT Intersection Safety Challenge Stage 1B: System Assessment and Virtual Testing Primary Track at the 2025 Transportation Research Board (TRB) Annual Meeting.

The purpose of the Intersection Safety Challenge, a multi-stage prize competition, is to encourage teams of innovators and end-users to develop, prototype, and test intersection safety systems (ISS) that leverage emerging technologies, including artificial intelligence (AI) and machine learning (ML), to identify and mitigate unsafe conditions involving vehicles and vulnerable road users at roadway intersections. The Challenge draws on the expertise of researchers and practitioners from across the United States, including universities, State and local agencies, private sector developers, and other organizations.

In Stage 1A, participants submitted design concepts for their proposed intersection safety systems. Winners from Stage 1A were invited to participate in the Stage 1B Primary Track, where they tackled a series of technical challenges, including sensor fusion, classification, and path and conflict prediction, utilizing U.S. DOT-provided real-world sensor data collected on a closed course at the Federal Highway Administration (FHWA) Turner-Fairbank Highway Research Center (TFHRC).

For the Stage 1B Primary Track, U.S. DOT awarded 10 teams prize amounts ranging from $166,666 to $750,000 across two Tiers, for a total of $4,000,000 in prize awards. Tier 1 teams demonstrated the highest performance across the evaluation criteria while Tier 2 teams demonstrated strong performance across the evaluation criteria. The winners are listed below in alphabetical order by Team Lead within each Tier.

Congratulations to the following Stage 1B Primary Track winners!

Tier 1
Derq USA, Inc.
  • Utilizes an approach that fuses varied perception sensors and learns from historical data to build real-time situational awareness to monitor and analyze road user behavior.
  • Developed real-time cooperative perception technology based on computer vision, machine learning, and sensor fusion, with applications in traffic control, connected vehicles, and safety analytics, including illegal road-user movement detection, real-time near-miss (conflict) detection, and real-time crash detection for vehicles and vulnerable road users.
  • Approach includes a process for sensor calibration, including the alignment of camera and LiDAR data, and synchronization of timestamps for data integration.
    • Perception algorithms are employed to detect and classify road users across a variety of sensor feeds; detections are then tracked and fused into a common frame of reference.
  • Path prediction models are applied to anticipate the future movements of road users, feeding into conflict detection algorithms to identify potential collision scenarios.
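The timestamp synchronization step in Derq's pipeline can be illustrated with a minimal sketch: matching each camera frame to the nearest-in-time LiDAR sweep before fusing their detections. The function name, timestamp lists, and the 50 ms tolerance below are illustrative assumptions, not details of the team's actual implementation.

```python
import bisect

def match_nearest_timestamps(camera_ts, lidar_ts, max_gap=0.05):
    """For each camera frame time, find the index of the closest LiDAR
    sweep time; pairs farther apart than max_gap seconds are dropped.
    Both input lists must be sorted ascending (seconds)."""
    pairs = []
    for i, t in enumerate(camera_ts):
        j = bisect.bisect_left(lidar_ts, t)
        # candidate neighbors: the sweep just before and just after t
        candidates = [k for k in (j - 1, j) if 0 <= k < len(lidar_ts)]
        best = min(candidates, key=lambda k: abs(lidar_ts[k] - t))
        if abs(lidar_ts[best] - t) <= max_gap:
            pairs.append((i, best))
    return pairs
```

In practice the matched pairs would then be handed to extrinsic calibration and fusion; frames with no LiDAR sweep inside the tolerance are simply skipped.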
University of California, Los Angeles (UCLA) Mobility Lab
  • InfraShield system uses sensor fusion and path prediction technologies that leverage multimodal sensor data, including LiDAR, red-green-blue (RGB) cameras, and radar, to detect, classify, and track vulnerable road users and vehicles under challenging conditions.
  • Utilizes a late fusion approach to combine sensor data for object detection, classification, and tracking, addressing calibration issues and sensor limitations.
  • For path prediction, InfraShield employs machine learning models to forecast the future movements of road users, drawing on high-definition maps and historical object trajectory data. The models account for the diverse paths of vehicles and vulnerable road users, remain robust to noisy data, and can be used to identify conflict points using time-to-collision calculations.
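The time-to-collision calculation mentioned above can be sketched for the simplest case of two road users moving at constant velocity. This is a generic closed-form TTC computation, not UCLA's actual model; the function signature and the 1 m proximity radius are assumptions for illustration.

```python
def time_to_collision(p1, v1, p2, v2, radius=1.0):
    """Earliest time at which two constant-velocity road users come
    within `radius` meters of each other; None if they never do.
    Positions and velocities are 2D (x, y) tuples in meters and m/s."""
    # relative position and velocity of road user 2 with respect to 1
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    # solve |r + v*t|^2 = radius^2 for the smallest t >= 0
    a = vx * vx + vy * vy
    b = 2.0 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry - radius * radius
    if a == 0.0:                       # no relative motion
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                     # paths never come that close
        return None
    t = (-b - disc ** 0.5) / (2.0 * a)
    if t < 0.0:
        t = (-b + disc ** 0.5) / (2.0 * a)
    return t if t >= 0.0 else None
```

For example, two vehicles 10 m apart approaching head-on at 1 m/s each reach a 1 m separation after 4.5 s; a pair that is diverging returns None.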
University of Hawaii
  • Relies on sensor fusion across multiple modalities, including LiDAR, RGB cameras, thermal cameras, and signal data, to provide highly accurate 3D localization, open-vocabulary detection of even potentially unknown test-time classes, and multi-mode probabilistic path prediction, which are combined for conflict prediction.
  • The approach optimizes sensor utilization and fusion, allowing real-time inference on lower-cost devices, minimizing data curation costs, and ensuring good generalization across conditions, all of which are crucial for scaling to intersections across the nation.
University of Michigan
  • Team includes Mcity of the University of Michigan, General Motors Global R&D, Ouster, and Texas A&M University.
  • SAFETI real-time algorithms are designed to work with DOT-supplied sensor data, focusing on identifying and predicting the movement of vehicles and vulnerable road users at intersections.
  • The approach integrates 2D detection from images and 3D detection from LiDAR data, followed by sensor fusion and trajectory prediction, with a conflict detection module that evaluates potential collisions between agents in real-time.

Tier 2
Florida A&M University (FAMU) and Florida State University (FSU)
  • The Predictive Intersection Safety System (PREDISS) aims to leverage machine learning, controls, optimization, and connected and autonomous vehicle technologies to improve the safety of vulnerable road users at signalized intersections.
  • Approach fuses low-cost sensors' data to detect (differentiate and classify), localize, track, and predict the trajectories of vehicles and vulnerable road users.
    • Strikes a balance between compute power and practicality, factoring in the long-term goal of retrofitting such a system in intersections across the United States. Designed system in collaboration with the Tallahassee Advanced Traffic Management System (TATMS) to be deployable on existing infrastructure, including testing on live feeds.
    • Key design choices include: 1) modular architecture; 2) user-friendly calibration; 3) Python-based implementation; 4) efficient algorithms; and 5) adaptive fusion techniques.
Miovision (Global Traffic Technologies)
  • Team includes Miovision USA, Carnegie Mellon University, Amazon Web Services, and Telus.
  • Devised a perception, path prediction, and conflict prediction framework centered around RGB camera and LiDAR sensors.
  • Emphasizing decision-level sensor fusion, the approach combines independent detections from multiple strategically positioned cameras with granular 3D spatial detail from LiDAR data to enhance detection and localization.
    • Perception module encompasses components such as refined YOLO-based object detection and classification in 2D and LiDAR-based object detection in 3D, multi-camera object tracking based on DeepSORT, and advanced LiDAR-based 3D object localization.
    • Subsequent path prediction, bolstered by an expanded dataset and the AutoBots-Joint model, forecasts the future movements of each road user through complex intersection scenarios, using bird's-eye-view projections enriched by PCA-based ground-plane estimation.
    • Finally, the conflict prediction framework applies time to collision, a time-based surrogate safety measure, to capture complex interactions and anticipate potential collision scenarios, complemented by probabilistic filtering to reduce false positives.
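As a rough illustration of filtering conflict alerts to reduce false positives, the sketch below uses a simple hit-count filter over a sliding window of per-frame conflict flags. The team's actual probabilistic method is not described in the summary, so the windowing scheme, window size, and threshold here are all assumptions.

```python
from collections import deque

def filtered_alerts(frame_flags, window=5, min_hits=4):
    """Suppress spurious conflict detections: raise an alert only when
    at least `min_hits` of the last `window` per-frame conflict flags
    are positive. Yields one boolean per input frame."""
    recent = deque(maxlen=window)   # oldest flags fall off automatically
    for flag in frame_flags:
        recent.append(1 if flag else 0)
        yield sum(recent) >= min_hits
```

A single noisy detection therefore never triggers an alert on its own; only a sustained run of conflict flags does.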
Ohio State University
  • Approach leverages a late fusion strategy that integrates data from LiDAR, RGB cameras, and infrared sensors.
    • Utilizes a Euler-Region Proposal Network (E-RPN) to process Bird's Eye View (BEV) projections of LiDAR point cloud data.
    • Concurrently, a YOLOv10 network is employed for 2D object detection, and a ByteTrack2 tracker is used to track 2D bounding boxes over time. YOLO is applied independently to both RGB and infrared images to maximize detection accuracy.
    • By analyzing the velocity states of the tracked objects, the system predicts their future trajectories over a specified time horizon, assuming constant velocity and performing linear extrapolation.
    • Potential collisions are identified by examining the predicted trajectories in the x-y plane.
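The constant-velocity extrapolation and planar conflict check described in the last two bullets can be sketched as follows. The track dictionary layout, 3-second horizon, 0.1 s time step, and 2-meter proximity threshold are illustrative assumptions, not details of the Ohio State implementation.

```python
def extrapolate(track, horizon, dt=0.1):
    """Predict future (x, y) positions assuming constant velocity.
    `track` holds the latest position and velocity of one object."""
    x, y = track["pos"]
    vx, vy = track["vel"]
    steps = int(round(horizon / dt))
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]

def conflict(track_a, track_b, horizon=3.0, dt=0.1, threshold=2.0):
    """Flag a potential collision if the two predicted trajectories come
    within `threshold` meters of each other at the same time step."""
    for pa, pb in zip(extrapolate(track_a, horizon, dt),
                      extrapolate(track_b, horizon, dt)):
        if ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5 <= threshold:
            return True
    return False
```

For instance, a vehicle heading east at 5 m/s and a pedestrian crossing toward the same point would be flagged, while a pedestrian walking away from the vehicle's path would not.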
Orion Robotics Labs
  • Orion Robotics Labs is a small, woman-owned business in rural Colorado with expertise in machine learning, edge computing, sensing technologies, and robotics.
  • Developed a solution for detection, localization, classification and path/conflict detection to increase intersection safety by combining lightweight algorithms, fine-tuned calibrations and fast processing.
University of California, Riverside
  • Approach aims to develop an Intersection Safety System (ISS) that uses roadside sensor data, vehicle-to-everything (V2X) communications, and artificial intelligence (AI) to continuously monitor traffic, predict traffic states (including trajectories) and potential conflicts, and enhance vulnerable road user safety at signalized intersections; the Stage 1B work focused on roadside perception and collision prediction.
  • The approach developed centers around the following modules: 1) Data Processing; 2) Sensor Fusion; 3) Multi-Object Tracking; 4) Path Prediction; 5) Collision Prediction.
    • Integrates computer vision and other machine learning techniques for road user detection (including sub-classification and localization), tracking, path prediction, and conflict prediction at signalized intersections.
University of Washington
  • Developed a Cooperative Perception System (CPS) to generate a comprehensive understanding of intersection dynamics.
  • System integrates multiple sensors, including eight visual cameras, five thermal cameras, and two 3D LiDARs, enabling 3D object detection, classification, and path and conflict prediction.
  • Architecture of CPS is structured into three primary modules: Object Detection and Classification, 2D-3D Camera Calibration, and Tracking and Prediction.
    • Object Detection and Classification Module acts as the foundation of the CPS and processes incoming data from both visual and thermal cameras to detect and classify road users in various lighting conditions.
    • 2D-3D Camera Calibration Module converts 2D detection results into 3D object representations using a multi-sensor re-identification process that merges data from cameras and 3D LiDAR sensors.
    • Tracking and Prediction Module utilizes the DeepSORT algorithm to track the 3D detections, capturing crucial movement data such as trajectories, speeds, and orientations. This information feeds into a Seq2Seq prediction model, which processes the sequences of past object states to forecast future movements. The model predicts potential paths and identifies possible conflicts by calculating the time-to-collision (TTC), thus assessing the likelihood of hazardous interactions.

Improving the safety of pedestrians, bicyclists, and other vulnerable road users is of critical importance to achieving the U.S. DOT's vision of zero roadway deaths and serious injuries. The Intersection Safety Challenge supports these Departmental priorities, aligns with the U.S. DOT's National Roadway Safety Strategy (NRSS), and aims to set the stage for the future deployment of roadway intersection safety systems nationwide.

Given the overwhelming interest in Stage 1B, U.S. DOT is exploring ways to engage all interested parties in future stages of the Intersection Safety Challenge. For updates on opportunities to participate in the Intersection Safety Challenge in the coming year, please visit https://its.dot.gov/isc/.

PROGRAM VIDEOS


Intersection Safety Challenge - Program Video


Intersection Safety Challenge – Stage 1B Data Collection

PROBLEM

Intersection safety is a growing issue, especially for vulnerable road users.


Intersection Crashes

Each year, roughly one-quarter of traffic fatalities and about one-half of all traffic injuries in the United States are attributed to intersections (FHWA Safety).

Rising Vulnerable Road User Deaths

Vulnerable road user fatalities are on the rise with pedestrian fatalities up 13% and pedalcyclist fatalities up 2% in 2021 compared to 2020 (NHTSA).

PROPOSED SOLUTION

Leverage emerging technologies to improve intersection safety at scale in a new way.


Data Fusion Utilizing Existing and Emerging Sensors

Emerging, low-cost sensors can be deployed at intersections for improved sensing of vulnerable road users. Data from these sensors can be fused and used in new ways by AI.

+
Artificial Intelligence /Machine Learning

AI/ML can fuse data from multiple machine vision sensing modalities rapidly to improve situational awareness and anticipate potential conflicts.

=
Low-Cost, High-Value Opportunity for Integration at Scale

These existing technologies have not been deployed together at intersections broadly, offering an opportunity ripe for innovative collaboration.

This Challenge aligns with the National Roadway Safety Strategy (NRSS) and supplements existing U.S. DOT safety and equity efforts (e.g., FHWA Complete Streets, Proven Safety Countermeasures).

RESOURCES

CONNECT WITH US

Office of the Assistant Secretary for Research and Technology (OST-R), U.S. Department of Transportation (U.S. DOT)
1200 New Jersey Avenue, SE • Washington, DC 20590 • 800.853.1351
