Ruimin KE (柯锐岷)

I am an Assistant Professor in Civil Engineering (Smart Cities) at The University of Texas at El Paso. My group at UTEP is RECRUITING Ph.D. research assistants!

Previously, I worked at the Smart Transportation Applications and Research (STAR) Lab at the University of Washington (UW) as a Postdoctoral Research Associate. I received my Ph.D. in Intelligent Transportation Systems (Department of Civil Engineering) at UW, where I was advised by Prof. Yinhai Wang. My research interests and expertise are in Intelligent Transportation Systems and Smart Cities, with focuses on machine learning, video image processing, and Internet of Things applications. I am also completing a Master's degree in Computer Science at the University of Illinois at Urbana-Champaign. Earlier, I received a Master's degree from the Department of Civil and Environmental Engineering at the University of Washington, and my Bachelor's degree from the Department of Automation at Tsinghua University, where I was advised by Prof. Danya Yao.

In my spare time, I am also a badminton player and have won more than 70 medals and trophies since 2001. I was the junior champion of Sichuan Province (11 times) and of the City of Chengdu (15 times). Later, I played on the Tsinghua Badminton Team, winning 9 Beijing titles and a national runner-up finish for Tsinghua University. During my time at Tsinghua, I was the men's singles champion of Beijing (university students) for three consecutive years (2011, 2012, 2013). After moving to Seattle, I have won another 9 trophies, including the men's singles championship at the 2015 Northwest Husky Badminton Open, the men's singles runner-up at the 2016 WA State Badminton Open, and the men's doubles runner-up at the 2018 WA State Badminton Closed.

E-Mail / Google Scholar / ResearchGate / LinkedIn / Master's Thesis / Ph.D. Dissertation

News
  • 2021/04: I am thrilled to announce that I will be joining The University of Texas at El Paso as an Assistant Professor in Fall 2021!
  • 2021/03: I join the organizing committee of the 7th IEEE International Smart Cities Conference. The deadline for paper submission is May 31, 2021.
  • 2021/03: Online talk at Monash University.
  • 2021/03: Paper "The Effect of Human Mobility and Control Measures on Traffic Safety during COVID-19 Pandemic" is accepted by PLoS One.
  • 2021/02: I join the editorial board of the Journal of Big Data Analytics in Transportation.
  • 2021/01: Online talk at the Applications of Machine Learning to Transportation workshop organized by TRB standing committee on Artificial Intelligence and Advanced Computing Applications (AED50).
  • 2021/01: Paper "Risk Mitigation Planning for Revenue Service Testing of Bus Automated Emergency Braking" is accepted by Transportation Research Record.
  • 2021/01: Online talk at Dartmouth College.
  • 2020/12: Invited talk at Tongji University.
  • 2020/12: I successfully defended my Ph.D. dissertation!
  • 2020/11: I serve as an Area Editor for CICTP 2021.
  • 2020/11: Our team (Ziyuan, Shuyi, Ian, Luyang, and me) won the Mobility Track Prize of the 2020 MetroLab Student Cup, advancing to the final as one of the four finalists (50 teams in total).
  • 2020/10: Online talk at the 2020 IEEE Intelligent Vehicles (IV) Symposium Workshop on IoT in ITS.
  • 2020/09: I chair the session "Transportation and Mobility II" for the 2020 IEEE International Smart Cities Conference.
  • 2020/09: Invited lecture titled Computer Vision for Intelligent Transportation Systems to a high-school robotics team "High Definition".
  • 2020/07: Paper "Sensing Data Supported Traffic Flow Prediction via Denoising Schemes and ANN: A Comparison" is accepted by IEEE Sensors Journal.
  • 2020/06: Paper "High-Resolution Vehicle Trajectory Extraction and Denoising From Aerial Videos" is accepted by IEEE Transactions on Intelligent Transportation Systems.
  • 2020/05: Paper "Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Forecasting Network-wide Traffic State with Missing Values" is accepted by Transportation Research Part C: Emerging Technologies.
  • 2020/05: Paper "Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving" is accepted by Transportation Research Part C: Emerging Technologies.
  • 2020/05: Online talk at MIT.
  • 2020/04: Paper "Edge-Based Traffic Flow Data Collection Method Using Onboard Monocular Camera" is accepted by Journal of Transportation Engineering, Part A: Systems.
  • 2020/04: Paper "Evaluating the Nonlinear Correlation between Vertical Curve Features and Crash Frequency on Highways Using Random Forests" is accepted by Journal of Transportation Engineering, Part A: Systems.
  • 2020/03: Paper (first author) "A Smart, Efficient, and Reliable Parking Surveillance System with Edge Artificial Intelligence on IoT Devices" is accepted by IEEE Transactions on Intelligent Transportation Systems.
  • 2020/03: Paper "Learning Traffic as a Graph: A Gated Graph Wavelet Recurrent Neural Network for Network-scale Traffic Prediction" is accepted by Transportation Research Part C: Emerging Technologies.
  • 2020/03: Talk at Villanova University.
  • 2020/03: Paper "Testing an Automated Collision Avoidance and Emergency Braking System for Buses" is accepted by Transportation Research Record.
  • 2020/02: Paper (first author) "Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact" is accepted by Transportation Research Record.
  • 2020/02: Paper (first author) "Advanced Framework for Microscopic and Lane-level Macroscopic Traffic Parameters Estimation from UAV Video" is accepted by IET Intelligent Transport Systems.
  • 2020/01: Talk at Purdue University.
  • 2020/01: I received the Michael Kyte Outstanding Student of the Year Award presented by the Pacific Northwest Transportation Consortium (PacTrans), USDOT Region 10 University Transportation Center.

Selected Publications

A Smart, Efficient, and Reliable Parking Surveillance System With Edge Artificial Intelligence on IoT Devices
Ruimin Ke, Yifan Zhuang, Ziyuan Pu, Yinhai Wang*
IEEE Transactions on Intelligent Transportation Systems, 2020
abstract / bibtex / link

Cloud computing has been a mainstream computing service for years. Recently, with rapid urbanization, massive video surveillance data are produced at an unprecedented speed. A traditional solution to deal with the big data would require a large amount of computing and storage resources. With the advances in the Internet of Things (IoT), artificial intelligence, and communication technologies, edge computing offers a new solution to the problem by processing all or part of the data locally at the edge of a surveillance system. In this study, we investigate the feasibility of using edge computing for smart parking surveillance tasks, specifically, parking occupancy detection using the real-time video feed. The system processing pipeline is carefully designed with consideration of flexibility, online surveillance, data transmission, detection accuracy, and system reliability. It enables artificial intelligence at the edge by implementing an enhanced single shot multibox detector (SSD). A few more algorithms are developed either locally at the edge of the system or on the centralized data server, targeting optimal system efficiency and accuracy. Thorough field tests were conducted in the Angle Lake parking garage for three months. The experimental results are promising: the final detection method achieves over 95% accuracy in real-world scenarios with high efficiency and reliability. The proposed smart parking surveillance system is a critical component of smart cities and can be a solid foundation for future applications in intelligent transportation systems.
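
To illustrate the occupancy-detection idea, here is a minimal Python sketch of the decision step that would follow an SSD-style vehicle detector. It is a simplified illustration rather than the paper's implementation: the stall rectangles and the IoU threshold below are assumptions.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def stall_occupancy(stalls, detections, threshold=0.4):
    """Mark a stall occupied when some detected vehicle box overlaps it enough."""
    return {sid: any(iou(stall, det) > threshold for det in detections)
            for sid, stall in stalls.items()}

# Example: two hand-drawn stalls, one vehicle detection covering the first stall.
stalls = {"A1": (0, 0, 100, 200), "A2": (110, 0, 210, 200)}
detections = [(5, 10, 95, 190)]
print(stall_occupancy(stalls, detections))  # {'A1': True, 'A2': False}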

@article{9061155,
title={A Smart, Efficient, and Reliable Parking Surveillance System With Edge Artificial Intelligence on IoT Devices},
author={Ke, Ruimin and Zhuang, Yifan and Pu, Ziyuan and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2020},
volume={},
number={},
pages={1-13}
}
Edge Computing for Real-Time Near-Crash Detection for Smart Transportation Applications
Ruimin Ke, Zhiyong Cui, Yanlong Chen, Meixin Zhu, Frank Yang, Yinhai Wang*
arXiv Preprint, 2021
abstract / bibtex / link / demo

Traffic near-crash events serve as critical data sources for various smart transportation applications, such as surrogate safety measures for traffic safety research and corner-case data for automated vehicle testing. However, there are several key challenges in near-crash detection. First, extracting near-crashes from original data sources requires significant computing, communication, and storage resources. Also, existing methods lack efficiency and transferability, which bottlenecks prospective large-scale applications. To this end, this paper leverages the power of edge computing to address these challenges by processing the video streams from existing dashcams onboard in real time. We design a multi-thread system architecture that operates on edge devices and model the bounding boxes generated by object detection and tracking in linear complexity. The method is insensitive to camera parameters and backward compatible with different vehicles. The edge computing system has been evaluated with recorded videos and real-world tests on two cars and four buses for over ten thousand hours. It filters out irrelevant videos in real time, thereby saving labor costs, processing time, network bandwidth, and data storage. It collects not only event videos but also other valuable data such as road user type, event location, time to collision, vehicle trajectory, vehicle speed, brake switch, and throttle. The experiments demonstrate the promising performance of the system regarding efficiency, accuracy, reliability, and transferability. It is among the first efforts in applying edge computing to real-time traffic video analytics and is expected to benefit multiple sub-fields in smart transportation research and applications.
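
As a simplified illustration of working directly with tracked bounding boxes (the paper's model is more elaborate), a monocular time-to-collision estimate can be approximated from box expansion alone, without any camera parameters; the numbers below are made up.

def time_to_collision(h_prev, h_curr, dt):
    """Approximate TTC (seconds) from an object's bounding-box heights
    in two consecutive frames: for an approaching object the box grows,
    and TTC ~ h / (dh/dt)."""
    dh_dt = (h_curr - h_prev) / dt
    if dh_dt <= 0:                 # not approaching; no collision course
        return float("inf")
    return h_curr / dh_dt

# A pedestrian box growing from 80 px to 88 px over 0.1 s:
print(round(time_to_collision(80, 88, 0.1), 2))  # 1.1 s -> near-crash candidate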

@misc{ke2021edge,
title={Edge Computing for Real-Time Near-Crash Detection for Smart Transportation Applications},
author={Ruimin Ke and Zhiyong Cui and Yanlong Chen and Meixin Zhu and Hao (Frank) Yang and Yinhai Wang},
year={2021},
eprint={2008.00549},
archivePrefix={arXiv},
primaryClass={cs.RO}
}
Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact
Ruimin Ke, Wan Li, Zhiyong Cui, Yinhai Wang*
Transportation Research Record, 2020
abstract / bibtex / link / data (wsdot --> loopgroup data download)

Traffic speed prediction is a critically important component of intelligent transportation systems. Recently, with the rapid development of deep learning and transportation data science, a growing body of new traffic speed prediction models have been designed that achieve high accuracy and large-scale prediction. However, existing studies have two major limitations. First, they predict aggregated traffic speed rather than lane-level traffic speed; second, most studies ignore the impact of other traffic flow parameters on speed prediction. To address these issues, the authors propose a two-stream multi-channel convolutional neural network (TM-CNN) model for multi-lane traffic speed prediction considering traffic volume impact. In this model, the authors first introduce a new data conversion method that converts raw traffic speed data and volume data into spatial-temporal multi-channel matrices. Then the authors carefully design a two-stream deep neural network to effectively learn the features and correlations between individual lanes, in the spatial-temporal dimensions, and between speed and volume. Accordingly, a new loss function that considers the volume impact in speed prediction is developed. A case study using 1-year data validates the TM-CNN model and demonstrates its superiority. This paper contributes to two research areas: (1) traffic speed prediction, and (2) multi-lane traffic flow study.
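
A minimal sketch of the data-conversion idea, with made-up array sizes (288 five-minute steps, 10 stations, 2 lanes); this illustrates the multi-channel layout only, not the paper's code. The resulting speed and volume tensors would feed the two CNN streams.

import numpy as np

def to_multichannel(lane_series):
    """lane_series: dict lane_id -> 2-D array (time x station).
    Stacks lanes as channels into a (time, station, lane) tensor."""
    lanes = sorted(lane_series)
    return np.stack([lane_series[l] for l in lanes], axis=-1)

# Two lanes of synthetic 5-minute speed and volume readings at 10 stations.
speed = {0: np.random.rand(288, 10), 1: np.random.rand(288, 10)}
volume = {0: np.random.rand(288, 10), 1: np.random.rand(288, 10)}
speed_stream, volume_stream = to_multichannel(speed), to_multichannel(volume)
print(speed_stream.shape)  # (288, 10, 2): time x station x lane channels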

@article{ke2020TWO,
title={Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact},
author={Ke, Ruimin and Li, Wan and Cui, Zhiyong and Wang, Yinhai},
journal={Transportation Research Record},
pages={0361198120911052},
year={2020},
publisher={SAGE Publications Sage CA: Los Angeles, CA}
}
Advanced Framework for Microscopic and Lane-level Macroscopic Traffic Parameters Estimation from UAV Video
Ruimin Ke, Shuo Feng, Zhiyong Cui, Yinhai Wang*
IET Intelligent Transport Systems, 2020
abstract / bibtex / link / video

The unmanned aerial vehicle (UAV) is at the heart of modern traffic sensing research due to its advantages of low cost, high flexibility, and wide view range over traditional traffic sensors. Recently, increasing efforts in UAV-based traffic sensing have been made, and great progress has been achieved in the estimation of aggregated macroscopic traffic parameters. Compared with aggregated macroscopic traffic data, higher-resolution traffic data such as microscopic traffic parameters and lane-level macroscopic traffic parameters have received extensive attention, since they can help deeply understand traffic patterns and individual vehicle behaviours. However, little existing research can automatically estimate microscopic traffic parameters and lane-level macroscopic traffic parameters using UAV videos with a moving background. In this study, an advanced framework is proposed to bridge the gap. Specifically, three functional modules consisting of multiple processing streams and the interconnections among them are carefully designed with consideration of UAV video features and traffic flow characteristics. Experimental results on real-world UAV video data demonstrate promising performance of the framework in microscopic and lane-level macroscopic traffic parameters estimation. This research pushes the boundaries of the applicability of UAVs and has enormous potential to support advanced traffic sensing and management.
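
As one hedged example of handling the moving background (the paper's modules are more elaborate than this), a frame-to-frame homography estimated from feature matches can align consecutive frames so that residual motion mostly belongs to vehicles; the ORB/RANSAC parameters below are assumptions.

import cv2
import numpy as np

def stabilize(prev_gray, curr_gray):
    """Warp the previous frame onto the current one via a homography
    fitted to ORB feature matches, suppressing the UAV's ego-motion."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    return cv2.warpPerspective(prev_gray, H, (w, h))  # background-aligned frame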

@article{ke2020advanced,
title={Advanced framework for microscopic and lane-level macroscopic traffic parameters estimation from UAV video},
author={Ke, Ruimin and Feng, Shuo and Cui, Zhiyong and Wang, Yinhai},
journal={IET Intelligent Transport Systems},
year={2020},
publisher={IET}
}
Real-Time Traffic Flow Parameter Estimation From UAV Video Based on Ensemble Classifier and Optical Flow
Ruimin Ke, Zhibin Li, Jinjun Tang, Zewen Pan, Yinhai Wang*
IEEE Transactions on Intelligent Transportation Systems, 2019
abstract / bibtex / link / data

Recently, the availability of unmanned aerial vehicles (UAVs) opens up new opportunities for smart transportation applications, such as automatic traffic data collection. In such a trend, detecting vehicles and extracting traffic parameters from UAV video in a fast and accurate manner is becoming crucial in many prospective applications. However, from the methodological perspective, several limitations have to be addressed before the actual implementation of UAVs. This paper proposes a new and complete analysis framework for traffic flow parameter estimation from UAV video. This framework addresses the widely recognized issues of a UAV's irregular ego-motion, low estimation accuracy in dense traffic situations, and high computational complexity by designing and integrating four stages. In the first two stages an ensemble classifier (Haar cascade + convolutional neural network) is developed for vehicle detection, and in the last two stages a robust traffic flow parameter estimation method is developed based on optical flow and traffic flow theory. The proposed ensemble classifier is demonstrated to outperform state-of-the-art vehicle detectors designed for UAV-based vehicle detection. Traffic flow parameter estimations in both free flow and congested traffic conditions are evaluated, and the results turn out to be very encouraging. The dataset with 20,000 image samples used in this study is publicly accessible for benchmarking at http://www.uwstarlab.org/research.html.
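
A condensed Python sketch of the cascade-then-CNN pattern in the detection stage; the cascade file and the CNN verifier are placeholders, so this illustrates the ensemble structure rather than reproducing the trained models or their parameters.

import cv2

def detect_vehicles(frame, cascade_path, verify_fn, patch_size=(48, 48)):
    """Stage 1: a Haar cascade proposes candidate windows cheaply.
    Stage 2: a CNN (any callable returning a probability) filters them."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cascade_path)
    candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    accepted = []
    for (x, y, w, h) in candidates:
        patch = cv2.resize(gray[y:y + h, x:x + w], patch_size)
        if verify_fn(patch) > 0.5:     # CNN confirms the cascade's proposal
            accepted.append((x, y, w, h))
    return accepted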

@article{ke2018real,
title={Real-time traffic flow parameter estimation from UAV video based on ensemble classifier and optical flow},
author={Ke, Ruimin and Li, Zhibin and Tang, Jinjun and Pan, Zewen and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2019},
volume={20},
number={1},
pages={54-64},
publisher={IEEE}}
New Framework for Automatic Identification and Quantification of Freeway Bottlenecks Based on Wavelet Analysis
Ruimin Ke, Ziqiang Zeng, Ziyuan Pu, Yinhai Wang*
Journal of Transportation Engineering, Part A: Systems, 2018
(Featured in the Editor's Choice Section of the journal)
abstract / bibtex / link

As the amount of traffic congestion continues to grow, pinpointing freeway bottleneck locations and quantifying their impacts are crucial activities for traffic management and control. Previous bottleneck identification methods still have limitations. The first key limitation is that they cannot determine precise breakdown durations at a bottleneck in an objective manner. Second, the input data often need to be aggregated in an effort to ensure better robustness to noise, which significantly reduces the time resolution. The wavelet transform, as a powerful and efficient data-processing tool, has already been implemented in some transportation application scenarios to much benefit. However, there is still a wide gap between existing preliminary explorations of wavelet analysis in transportation research and a completely automatic bottleneck identification framework. This paper addresses several key issues in existing bottleneck identification approaches and also fills a gap in transportation-related wavelet applications. The experimental results demonstrate that the proposed method is able to locate the most severe bottlenecks and comprehensively quantify their impacts.
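
As a hedged illustration of why wavelets suit this task (a toy example, not the paper's formulation): a sharp speed breakdown produces large detail coefficients in a discrete wavelet transform, so thresholding them localizes breakdown onsets without pre-aggregating the raw data. All signal and threshold values below are synthetic.

import numpy as np
import pywt

np.random.seed(0)
# Synthetic speed series: free flow, a breakdown to 25 mph, then recovery.
speed = np.concatenate([np.full(200, 60.0), np.full(100, 25.0), np.full(200, 58.0)])
speed += np.random.normal(0, 2.0, speed.size)          # simulated sensor noise

coeffs = pywt.wavedec(speed, "db4", level=3)           # [cA3, cD3, cD2, cD1]
detail = coeffs[1]                                     # coarsest detail band
threshold = 5 * np.median(np.abs(detail))
onsets = np.where(np.abs(detail) > threshold)[0]
print(onsets)  # indices near the downsampled 200/300 marks: breakdown start/end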

@article{doi:10.1061/JTEPBS.0000168,
author = {Ruimin Ke and Ziqiang Zeng and Ziyuan Pu and Yinhai Wang },
title = {New Framework for Automatic Identification and Quantification of Freeway Bottlenecks Based on Wavelet Analysis},
journal = {Journal of Transportation Engineering, Part A: Systems},
volume = {144},
number = {9},
pages = {04018044},
year = {2018},
doi = {10.1061/JTEPBS.0000168}
}
Multi-Lane Traffic Pattern Learning and Forecasting Using Convolutional Neural Network
Ruimin Ke, Wan Li, Zhiyong Cui, Yinhai Wang*
COTA International Symposium on Emerging Trends in Transportation (ISETT), 2018
abstract / link

Recently, the emergence of deep learning has facilitated many research fields including transportation, especially traffic pattern recognition and traffic forecasting. While many efforts have been made in the exploration of new models for higher accuracy and larger scale, few existing studies focus on learning higher-resolution traffic patterns. The most representative example is the lack of research in multi-lane pattern mining and forecasting. To this end, this paper proposes a deep learning framework that can learn multi-lane traffic patterns and forecast lane-level short-term traffic conditions with high accuracy. Multi-lane traffic dynamics are modeled as a multi-channel spatial-temporal image in which each channel corresponds to a traffic lane. The constructed multi-channel image is then learned by a convolutional neural network, which can capture key traffic patterns and forecast multi-lane traffic flow parameters. One-year loop detector data for a freeway segment in Seattle are used for model validation. The results and analyses demonstrate the promising performance of the proposed method.
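
A minimal PyTorch sketch of the lanes-as-channels idea with assumed sizes (4 lanes, 48 input steps, 10 stations, 6-step forecast horizon); it shows the structure only, not the paper's architecture or hyperparameters.

import torch
import torch.nn as nn

class MultiLaneCNN(nn.Module):
    """Lanes enter as image channels, so ordinary 2-D convolutions learn
    cross-lane and spatial-temporal correlations jointly."""
    def __init__(self, n_lanes=4, n_stations=10, horizon=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_lanes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(64 * 4 * 4, n_lanes * horizon * n_stations)
        self.out_shape = (n_lanes, horizon, n_stations)

    def forward(self, x):                  # x: (batch, lane, time, station)
        return self.head(self.features(x).flatten(1)).view(-1, *self.out_shape)

model = MultiLaneCNN()
x = torch.randn(8, 4, 48, 10)              # 8 samples of the multi-channel image
print(model(x).shape)                      # torch.Size([8, 4, 6, 10])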

A Cost-effective Framework for Automated Vehicle-pedestrian Near-miss Detection through Onboard Monocular Vision
Ruimin Ke, Jerome Lutin, Jerry Spears, Yinhai Wang*
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017
abstract / bibtex / link / video / media coverage

Onboard monocular cameras have been widely deployed in both public transit and personal vehicles. Obtaining vehicle-pedestrian near-miss event data from onboard monocular vision systems may be cost-effective compared with onboard multiple-sensor systems or traffic surveillance videos. But extracting near-misses from onboard monocular vision is challenging, and little work has been published. This paper fills the gap by developing a framework to automatically detect vehicle-pedestrian near-misses through onboard monocular vision. The proposed framework can estimate depth and real-world motion information through monocular vision with a moving video background. The experimental results, based on processing over 30 hours of video data, demonstrate the ability of the system to capture near-misses by comparison with the events logged by the Rosco/MobilEye Shield+ system, which includes four cameras working cooperatively. The detection overlap rate reaches over 90% with the thresholds properly set.
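
For intuition on monocular depth, here is a textbook pinhole approximation, not the framework's actual depth estimator: with a known focal length in pixels and an assumed real-world pedestrian height, range follows from the projected bounding-box height alone. The focal length and height values are assumptions.

def pedestrian_distance(box_height_px, focal_px=1000.0, person_height_m=1.7):
    """Approximate range (m) to a pedestrian from monocular box height,
    using the pinhole relation: distance = f * H_real / h_pixels."""
    return focal_px * person_height_m / box_height_px

print(round(pedestrian_distance(120), 1))  # ~14.2 m for a 120-px-tall detection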

@INPROCEEDINGS{8014858,
author={R. Ke and J. Lutin and J. Spears and Y. Wang},
booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
title={A Cost-Effective Framework for Automated Vehicle-Pedestrian Near-Miss Detection Through Onboard Monocular Vision},
year={2017},
pages={898-905},
doi={10.1109/CVPRW.2017.124},
ISSN={2160-7516},
month={July},}
Real-Time Bidirectional Traffic Flow Parameter Estimation From Aerial Videos
Ruimin Ke, Zhibin Li, Sung Kim, John Ash, Zhiyong Cui, Yinhai Wang*
IEEE Transactions on Intelligent Transportation Systems, 2017
abstract / bibtex / link / demo

Unmanned aerial vehicles (UAVs) are gaining popularity in traffic monitoring due to their low cost, high flexibility, and wide view range. Traffic flow parameters such as speed, density, and volume extracted from UAV-based traffic videos are critical for traffic state estimation and traffic control and have recently received much attention from researchers. However, different from stationary surveillance videos, the camera platforms move with UAVs, and the background motion in aerial videos makes it very challenging to process for data extraction. To address this problem, a novel framework for real-time traffic flow parameter estimation from aerial videos is proposed. The proposed system identifies the directions of traffic streams and extracts traffic flow parameters of each traffic stream separately. Our method incorporates four steps that make use of the Kanade-Lucas-Tomasi (KLT) tracker, k-means clustering, connected graphs, and traffic flow theory. The KLT tracker and k-means clustering are used for interest-point-based motion analysis; then, four constraints are proposed to further determine the connectivity of interest points belonging to one traffic stream cluster. Finally, the average speed of a traffic stream as well as density and volume can be estimated using outputs from previous steps and reference markings. Our method was tested on five videos taken in very different scenarios. The experimental results show that in our case studies, the proposed method achieves about 96% and 87% accuracy in estimating average traffic stream speed and vehicle count, respectively. The method also achieves a fast processing speed that enables real-time traffic information estimation.
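
A condensed sketch of the first two steps (KLT tracking and k-means clustering in velocity space) using OpenCV and scikit-learn; the parameter values are assumptions, and the connectivity constraints and traffic-flow-theory steps are omitted.

import cv2
import numpy as np
from sklearn.cluster import KMeans

def motion_clusters(prev_gray, curr_gray, k=3):
    """Track Shi-Tomasi corners with the pyramidal KLT tracker, then
    k-means cluster the motion vectors; with k=3, clusters roughly
    correspond to the background and the two traffic streams."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_old = pts[status.ravel() == 1].reshape(-1, 2)
    good_new = nxt[status.ravel() == 1].reshape(-1, 2)
    vectors = good_new - good_old                  # velocity in px/frame
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
    return good_new, vectors, labels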

@ARTICLE{7546916,
author={R. Ke and Z. Li and S. Kim and J. Ash and Z. Cui and Y. Wang},
journal={IEEE Transactions on Intelligent Transportation Systems},
title={Real-Time Bidirectional Traffic Flow Parameter Estimation From Aerial Videos},
year={2017},
volume={18},
number={4},
pages={890-901},
doi={10.1109/TITS.2016.2595526},
ISSN={1524-9050},
month={April},}
Roadway surveillance video camera calibration using standard shipping container
Ruimin Ke, Zewen Pan, Ziyuan Pu, Yinhai Wang*
IEEE International Smart Cities Conference, 2017
abstract / bibtex / link

Surveillance video cameras have been increasingly deployed on roadway networks, providing important support for roadway management. While the information-rich video images are a valuable source of traffic data, these surveillance video cameras are typically designed for manual observation of roadway conditions, not for automatic traffic data collection. The benefits of turning these surveillance cameras into data collection cameras are obvious, but collecting traffic data would normally require the development of a cost-effective method to efficiently and accurately calibrate surveillance video cameras. This paper proposes such a robust and efficient method that calibrates surveillance video cameras using a standard shipping container as the reference object. The traditional camera calibration model can be simplified and the camera parameters can be recovered with precise mathematical derivation. After solving for all the camera parameters, the 3D world coordinates can be reconstructed from 2D image coordinates, thus enabling the collection of a variety of traffic data using surveillance video camera data.
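
To make the reference-object idea concrete: a standard ISO container has known dimensions (a 20-ft unit is 6.058 m long, 2.438 m wide, and 2.591 m high), so its visible corners provide 3D-2D correspondences for pose recovery. The sketch below uses cv2.solvePnP with made-up image coordinates and intrinsics, rather than the paper's simplified closed-form derivation.

import cv2
import numpy as np

# 3-D corners of the container's long side face (length x height, in meters).
object_pts = np.array([[0, 0, 0], [6.058, 0, 0],             # bottom edge
                       [6.058, 0, 2.591], [0, 0, 2.591]],    # top edge
                      dtype=np.float32)
# Corresponding pixel coordinates, hand-picked from a hypothetical frame.
image_pts = np.array([[320, 400], [900, 380],
                      [905, 150], [330, 170]], dtype=np.float32)
# An assumed intrinsic matrix (focal length 1000 px, principal point 640x360).
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, tvec.ravel())  # camera pose relative to the container's corner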

@INPROCEEDINGS{8090811,
author={R. Ke and Z. Pan and Z. Pu and Y. Wang},
booktitle={2017 International Smart Cities Conference (ISC2)},
title={Roadway surveillance video camera calibration using standard shipping container},
year={2017},
volume={},
number={},
pages={1-6},
doi={10.1109/ISC2.2017.8090811},
ISSN={},
month={Sept},}
Motion-vector clustering for traffic speed detection from UAV video
Ruimin Ke, Sung Kim, Zhibin Li, Yinhai Wang*
IEEE First International Smart Cities Conference, 2015
abstract / bibtex / link

A novel method for detecting the average speed of traffic from non-stationary aerial video is presented. The method first extracts interest points from a pair of frames and performs interest point tracking with an optical flow algorithm. The output of the optical flow is a set of motion vectors which are k-means clustered in velocity space. The centers of the clusters correspond to the average velocities of traffic and the background, and are used to determine the speed of traffic relative to the background. The proposed method is tested on a 70-frame test sequence of UAV aerial video, and achieves an average error for speed estimates of less than 12%.

@INPROCEEDINGS{7366230,
author={R. Ke and S. Kim and Z. Li and Y. Wang},
booktitle={2015 IEEE First International Smart Cities Conference (ISC2)},
title={Motion-vector clustering for traffic speed detection from UAV video},
year={2015},
volume={},
number={},
pages={1-5},
doi={10.1109/ISC2.2015.7366230},
ISSN={},
month={Oct},}
Lane-changes prediction based on adaptive fuzzy neural network
Jinjun Tang, Fang Liu, Wenhui Zhang, Ruimin Ke, Yajie Zou*
Expert Systems with Applications, 2018
abstract / bibtex / link

The lane changing maneuver is one of the most important driving behaviors. Unreasonable lane changes can cause serious collisions and consequent traffic delays. High-precision prediction of lane changing intent is helpful for improving driving safety. In this study, by fusing information from vehicle sensors, a lane changing predictor based on an Adaptive Fuzzy Neural Network (AFNN) is proposed to predict steering angles. The prediction model includes two parts: a fuzzy neural network based on Takagi–Sugeno fuzzy inference, in which an improved Least Squares Estimator (LSE) is adopted to optimize parameters, and an adaptive learning algorithm to update membership functions and the rule base. Experiments are conducted in a driving simulator under scenarios with different speed levels of the lead vehicle: 60 km/h, 80 km/h and 100 km/h. Prediction results show that the proposed method is able to accurately follow steering angle patterns. Furthermore, comparison of prediction performance with several machine learning methods further verifies the learning ability of the AFNN. Finally, a sensitivity analysis indicates that the heading angle and acceleration of the vehicle are also important factors for predicting lane changing behavior.
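
As a toy illustration of the Takagi-Sugeno inference at the core of such a model (the trained AFNN itself is more involved, and every number below is made up): Gaussian memberships weight per-rule linear consequents, and the prediction is their normalized weighted sum.

import numpy as np

def ts_predict(x, centers, sigmas, consequents):
    """First-order Takagi-Sugeno step: rule firing strength is the product
    of per-dimension Gaussian memberships; each rule's output is a linear
    model; the result is the firing-strength-weighted average."""
    w = np.prod(np.exp(-((x - centers) ** 2) / (2 * sigmas ** 2)), axis=1)
    y = consequents[:, :-1] @ x + consequents[:, -1]   # per-rule linear output
    return np.sum(w * y) / (np.sum(w) + 1e-12)

# Two rules over hypothetical inputs (speed gap, heading angle).
x = np.array([1.2, 0.3])
centers = np.array([[1.0, 0.0], [2.0, 0.5]])
sigmas = np.array([[0.5, 0.2], [0.5, 0.2]])
consequents = np.array([[0.8, 1.5, 0.1],   # rule 1: 0.8*gap + 1.5*angle + 0.1
                        [0.2, 2.0, 0.0]])  # rule 2
print(ts_predict(x, centers, sigmas, consequents))  # predicted steering angle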

@article{TANG2018452,
title = "Lane-changes prediction based on adaptive fuzzy neural network",
journal = "Expert Systems with Applications",
volume = "91",
pages = "452 - 463",
year = "2018",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2017.09.025",
author = "Jinjun Tang and Fang Liu and Wenhui Zhang and Ruimin Ke and Yajie Zou",
}
Active Safety-Collision Warning Pilot in Washington State
Jerry Spears, Jerome Lutin, Yinhai Wang, Ruimin Ke, Steven Clancy
TRANSIT-IDEA Program Project Final Report, 2017
abstract / bibtex / link / media coverage

The Rosco/Mobileye Shield+ system is a collision avoidance warning system (CAWS) specifically designed for transit buses. This project involved field testing and evaluation of the CAWS in revenue service over a three-month period. The system provides alerts and warnings to the bus driver for the following conditions that could lead to a collision: 1) changing lanes without activating a turn signal, 2) exceeding the posted speed limit, 3) monitoring headway with the vehicle leading the bus, 4) forward vehicle collision warning, and 5) pedestrian or cyclist collision warning in front of, or alongside the bus. Alerts and warnings are displayed to the driver by visual indicators located on the windshield and front pillars. Audible warnings are issued when collisions are imminent. Research objectives included: create a robust Rosco/Mobileye demonstration pilot for active/collision avoidance within the State of Washington on a minimum of 35 transit buses; determine the ease of retrofit of the existing fleet; develop a methodology for estimating the full cost savings of avoided collisions for each agency; develop a methodology and evaluation process for transit driver feedback and acceptance as well as bus passenger feedback; and provide detailed data and understanding on entrance barriers to this technology. The pilot test showed that although driver acceptance was mixed, there were large reductions in near-miss events for CAWS-equipped buses. Consequently, achieving driver acceptance will be a key factor in continued development and deployment of CAWS. As a result of comments received from the drivers, the vendor has begun a program to incorporate desired modifications to the system, including reducing false positives. A second major factor in achieving industry acceptance is to demonstrate the business case for CAWS to both transit agencies and system developers. Although the pilot project produced encouraging results, collisions, injuries and fatalities can be considered rare events. A much larger in-service test will be needed to demonstrate actual cost-savings.

@techreport{01643748,
author={Jerry Spears and Jerome Lutin and Yinhai Wang and Ruimin Ke and Steven Clancy},
title={Active Safety-Collision Warning Pilot in Washington State},
institution={Transportation Research Board},
type={TRANSIT-IDEA Program Project Final Report},
number={82},
pages={1-33},
year={2017},
month={May},}
Digital Roadway Interactive Visualization and Evaluation Network Applications to WSDOT Operational Data Usage
Yinhai Wang, Ruimin Ke, Weibin Zhang, Zhiyong Cui, Kristian Henrickson
Washington State Department of Transportation (WSDOT) Research Report, 2016
abstract / link / video / website

DRIVE Net is a region-wide, Web-based transportation decision support system that adopts digital roadway maps as the base, and provides data layers for integrating and analyzing a variety of data sources (e.g., traffic sensors, incident records). Moreover, DRIVE Net offers a platform for streamlining transportation analysis and decision making, and it serves as a practical tool for visualizing historical observations spatially and temporally. In its current implementation, DRIVE Net demonstrates the potential to be used as a standard tool for incorporating multiple data sets from different fields and as a platform for real-time decision making. In comparison with the previous version, the new DRIVE Net system is now able to handle more complex computational tasks, perform large-scale spatial processing, and support data sharing services to provide a stable and interoperable platform to process, analyze, visualize, and share transportation data. DRIVE Net’s capabilities include generating statistics for WSDOT’s Gray Notebook (GNB), including travel times, throughput productivity, and traffic delay calculations for both general purpose and HOV lanes, each of which is an important performance indicator in the WSDOT congestion report. The DRIVE Net system includes robust loop detector data processing and quality control methods to address the data quality issues impacting loop detectors throughout the state. The capabilities of the DRIVE Net system have been expanded to include safety modeling, hotspot identification, and incident-induced delay estimation. Specifically, the Safety Performance module includes functions that can be used to obtain traffic incident frequency, apply predictive models to estimate the safety performance of road segments, and visualize and compare observed incident counts and different predictive models. Additionally, a module providing multi-modal data analysis and visualization capabilities was developed as a pilot experiment for integration of heterogeneous data. This module includes pedestrian and bicycle, public transit, park and ride, Car2Go, and ferry data downloading and visualization. DRIVE Net now offers role-based access control, such that access privileges to different functions and data resources can be assigned on a group or individual basis. The new system is able to support more complex analytics and decision support features on a large-scale transportation network, and is expected to be of great practical use for both traffic engineers and researchers. With a modular structure and mature data integration and management framework, DRIVE Net can be expanded in the future to include a variety of additional data resources and analytical capabilities.


Teaching
Computer Vision for Intelligent Transportation Systems
As Invited Lecturer / Connecting with Professionals event for the "High Definition" Robotics Team
Lecture Video
Fall 2020
CET 590: Traffic Simulation and System Operations
As Graduate Instructor
Course Evaluation Score: 4.8 / 5
Fall 2019
CET 590: Traffic Simulation and System Operations
As Teaching Assistant with Prof. Yinhai Wang
Fall 2018
CET 412: Transportation Data Management and Analytics
As Guest Lecturer / Topic: Advances in Sensor Technology for Robust Traffic Data Collection
Winter 2019
EE(P) 502: Analytical Methods for Electrical Engineering
As Guest Lecturer / Topic: Computer Vision Applications in Transportation Engineering
Fall 2018
Engineering Discovery Days
As Guest Lecturer / Topic: Drone-Based Traffic Detection and Management
Spring 2016, 2017, 2018, 2019