Ruimin "Roman" KE
How to pronounce: /'reimin/ /kə/

I am a tenure-track Assistant Professor in the Department of Civil and Environmental Engineering at Rensselaer Polytechnic Institute. Our research group at RPI seeks talented and dedicated students to join the team. We also welcome collaboration with individuals and groups from academia and industry who share our research interests.

I received my Ph.D. and M.S. degrees in Transportation Engineering from the University of Washington, an M.S. degree in Computer Science from the University of Illinois at Urbana-Champaign, and a B.E. degree in Automation from Tsinghua University. Our research interests lie in Intelligent Transportation Systems and Automated Vehicles, with a focus on Machine Learning, Transportation Data Science, and Edge/Cloud Computing applications. I serve on the editorial boards of four journals: Scientific Reports (Nature), Transportation Research Record, Data Science for Transportation, and the Journal of Air Transport Management. I have been extremely privileged to receive multiple academic awards in recognition of my collaborative research accomplishments, including the 2023 TRB Best Paper Award, the 2022 Outstanding Paper Award from the IEEE DTPI Conference, the 2020-2021 COTA Best Dissertation Award, and an Editor's Choice Paper Award from the Journal of Transportation Engineering.

In my free time, I have played badminton since the age of eight and have achieved significant milestones in the sport, winning over 70 medals and trophies since 2001. I was the junior champion of Sichuan Province 11 times and of Chengdu City 15 times. Later, I joined the Tsinghua University Badminton Team, where I continued to excel: I won nine Beijing titles and helped Tsinghua finish as the national runner-up. I was the Beijing men's singles champion (university students) for three consecutive years (2011, 2012, 2013). After relocating to Seattle, I won nine more trophies, including the men's singles title at the 2015 Northwest Husky Badminton Open and runner-up finishes in men's singles at the 2016 WA State Badminton Open and men's doubles at the 2018 WA State Badminton Closed.

E-Mail / Google Scholar / ResearchGate / LinkedIn / GitHub / PhD Dissertation / RPI Profile

News
  • 2024/01: First TRB Edge Computing Subcommittee Meeting scheduled at TRB 2024, featuring four renowned speakers from academia and industry.
  • 2023/12: Invited webinar at NYC DOT.
  • 2023/10: Talha's first-author paper Deep Learning based Computer Vision Methods for Complex Traffic Environments Perception: A Review was accepted by the Data Science for Transportation journal.
  • 2023/10: Invited talk at USDOT Region 10 PacTrans Annual Conference.
  • 2023/08: A proposal I led as PI on Machine Learning in Transportation Education was awarded by the National Science Foundation.
  • 2023/08: Talha did a great job presenting the outcomes of the USDOT C2SMARTER project on transportation digital twins. Webinar YouTube Link. Project Final Report.
  • 2023/07: I am happy to announce that I will join Rensselaer Polytechnic Institute as an Assistant Professor starting from August 2023.
  • 2023/06: I was appointed as the Chair of the Edge Computing Subcommittee of the TRB Standing Committee on Artificial Intelligence and Advanced Computing Applications.
  • 2023/04: The first edition of my co-authored book Machine Learning for Transportation Research and Applications was published by Elsevier.
Selected Publications

Lightweight Edge Intelligence Empowered Near-crash Detection Towards Real-time Vehicle Event Logging
Ruimin Ke, Zhiyong Cui, Yanlong Chen, Meixin Zhu, Hao Yang, Yifan Zhuang, Yinhai Wang
IEEE Transactions on Intelligent Vehicles, 2023
abstract / bibtex / link / demo

A major role of automated vehicles is to serve as mobile sensors for event detection and data collection, which supports tactical automation in autonomous driving and post-analysis for traffic safety. However, most data collected during regular vehicle operations are not of interest, yet they consume a large amount of computation, communication, and storage resources on cloud servers. Vehicular edge computing has emerged as a promising paradigm for balancing these high costs of traditional cloud computing, but edge computers often have limited resources to support the high efficiency and intelligence required by advanced vehicular functions. Motivated by the existing challenges and new concepts, this paper proposes and tests a lightweight edge intelligence framework for vehicle event detection and logging that runs in an event-based and real-time manner. Specifically, this paper takes vehicle-vehicle and vehicle-pedestrian near-crashes as the events of interest. The lightweight algorithm design, which models the bounding boxes in object detection/tracking, enables real-time edge intelligence onboard a vehicle; the event-based data logging mechanism eliminates redundant onboard data and integrates multi-source information for individual near-crash events. Comprehensive open-road tests have been conducted on four transit vehicles.
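
For illustration, here is a minimal sketch of the event-triggering idea, assuming bounding-box tracks are already available from a detector/tracker: time-to-collision (TTC) is approximated from how quickly a leading object's box expands, and an event is logged only when the TTC falls below a threshold. The class, function names, and threshold are hypothetical, not the paper's implementation (Python).

# Hypothetical sketch: flag near-crash events from tracked bounding boxes by
# estimating time-to-collision (TTC) from the rate of box-height expansion.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    heights: list          # bounding-box heights (pixels), one per frame
    fps: float = 30.0

def estimate_ttc(track: Track) -> float:
    """Approximate TTC in seconds from the relative expansion rate of the box."""
    if len(track.heights) < 2:
        return float("inf")
    h_prev, h_curr = track.heights[-2], track.heights[-1]
    expansion = (h_curr - h_prev) / max(h_prev, 1e-6)    # relative growth per frame
    if expansion <= 0:                                   # shrinking or constant: not closing in
        return float("inf")
    return 1.0 / (expansion * track.fps)

def is_near_crash(track: Track, ttc_threshold_s: float = 1.5) -> bool:
    """Event-based trigger: log the event only when TTC drops below a threshold."""
    return estimate_ttc(track) < ttc_threshold_s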

@article{ke2023lightweight,
title={Lightweight Edge Intelligence Empowered Near-crash Detection Towards Real-time Vehicle Event Logging},
author={Ke, Ruimin and Cui, Zhiyong and Chen, Yanlong and Zhu, Meixin and Yang, Hao and Zhuang, Yifan and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Vehicles},
year={2023},
publisher={IEEE}
}
Efficient Procedure of Building University Campus Models for Digital Twin Simulation
Talha Azfar, Jeffrey Weidner, Adeeba Raheem, Ruimin Ke, Kelvin Cheu
IEEE Journal of Radio Frequency Identification, 2022
abstract / bibtex / link / tutorial presentation

Realistic digital geographical models of real-world locations are a necessary starting point for digital twin applications, especially for simulation and visualization. However, the visual fidelity of this first step is often neglected, since the effort involved is counterproductive to the main research focus. In this paper, we explore different tools and resources to assemble an efficient and convenient procedure to create a 3D digital model of a university campus that can support digital twin applications. Specifically, the terrain, buildings, and road network are combined into the CARLA project on Unreal Engine, enabling computer vision, traffic simulation, and autonomous driving experimentation.

@article{azfar2022efficient,
title={Efficient Procedure of Building University Campus Models for Digital Twin Simulation},
author={Azfar, Talha and Weidner, Jeffrey and Raheem, Adeeba and Ke, Ruimin and Cheu, Ruey Long},
journal={IEEE Journal of Radio Frequency Identification},
volume={6},
pages={769-773},
year={2022},
publisher={IEEE}
}
When Intelligent Transportation Systems Sensing Meets Edge Computing: Vision and Challenges
Xuan Zhou, Ruimin Ke, Hao Yang, Chenxi Liu
Applied Sciences, 2021
abstract / bibtex / link

The widespread use of mobile devices and sensors has motivated data-driven applications that leverage the power of big data to benefit many aspects of our daily life, such as health, transportation, the economy, and the environment. In the context of smart cities, intelligent transportation systems (ITS), as a main building block of modern cities, and edge computing (EC), as an emerging computing service that targets the limitations of cloud computing, have attracted increasing attention in the research community in recent years. It is widely believed that applying EC in ITS will bring considerable benefits to transportation systems regarding efficiency, safety, and sustainability. Despite the growing trend in ITS and EC research, a big gap in the existing literature is identified: the intersection between these two promising directions has been far from well explored. In this paper, we focus on a critical part of ITS, i.e., sensing, and conduct a review of the recent advances in ITS sensing and EC applications in this field. The key challenges in ITS sensing and future directions with the integration of edge computing are discussed.

@article{zhou2021intelligent,
title={When Intelligent Transportation Systems Sensing Meets Edge Computing: Vision and Challenges},
author={Zhou, Xuan and Ke, Ruimin and Yang, Hao and Liu, Chenxi},
journal={Applied Sciences},
volume={11},
number={20},
pages={9680},
year={2021},
publisher={Multidisciplinary Digital Publishing Institute}
}
A Smart, Efficient, and Reliable Parking Surveillance System With Edge Artificial Intelligence on IoT Devices
Ruimin Ke, Yifan Zhuang, Ziyuan Pu, Yinhai Wang
IEEE Transactions on Intelligent Transportation Systems, 2021
abstract / bibtex / link

Cloud computing has been a mainstream computing service for years. Recently, with rapid urbanization, massive video surveillance data are produced at an unprecedented speed. A traditional solution to handling such big data would require a large amount of computing and storage resources. With advances in the Internet of Things (IoT), artificial intelligence, and communication technologies, edge computing offers a new solution by processing all or part of the data locally at the edge of a surveillance system. In this study, we investigate the feasibility of using edge computing for smart parking surveillance tasks, specifically, parking occupancy detection using the real-time video feed. The system processing pipeline is carefully designed with consideration of flexibility, online surveillance, data transmission, detection accuracy, and system reliability. It enables artificial intelligence at the edge by implementing an enhanced single shot multibox detector (SSD). A few more algorithms are developed either locally at the edge of the system or on the centralized data server, targeting optimal system efficiency and accuracy. Thorough field tests were conducted in the Angle Lake parking garage for three months. The experimental results are promising: the final detection method achieves over 95% accuracy in real-world scenarios with high efficiency and reliability. The proposed smart parking surveillance system is a critical component of smart cities and can be a solid foundation for future applications in intelligent transportation systems.
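
As a rough illustration of the edge-side logic, and not the paper's enhanced SSD pipeline, the sketch below maps detected vehicle boxes to predefined stall regions by overlap and transmits a message only when a stall's occupancy state changes; the stall layout, detector output, and threshold are placeholders (Python).

# Hypothetical sketch: event-based occupancy updates at the edge. A stall is marked
# occupied when any detected vehicle box overlaps it enough; only state changes are sent.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def update_occupancy(stalls, detections, state, send_update, iou_threshold=0.3):
    """stalls: {stall_id: box}; detections: vehicle boxes from any detector; state: {stall_id: bool}."""
    for stall_id, stall_box in stalls.items():
        occupied = any(iou(stall_box, det) >= iou_threshold for det in detections)
        if state.get(stall_id) != occupied:              # transmit only on change to save bandwidth
            state[stall_id] = occupied
            send_update(stall_id, occupied)
    return state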

@article{9061155,
title={A Smart, Efficient, and Reliable Parking Surveillance System With Edge Artificial Intelligence on IoT Devices},
author={Ke, Ruimin and Zhuang, Yifan and Pu, Ziyuan and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2020},
volume={},
number={},
pages={1-13}
}
Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact
Ruimin Ke, Wan Li, Zhiyong Cui, Yinhai Wang
Transportation Research Record, 2020
abstract / bibtex / link / data (wsdot --> loopgroup data download)

Traffic speed prediction is a critically important component of intelligent transportation systems. Recently, with the rapid development of deep learning and transportation data science, a growing body of new traffic speed prediction models have been designed that achieved high accuracy and large-scale prediction. However, existing studies have two major limitations. First, they predict aggregated traffic speed rather than lane-level traffic speed; second, most studies ignore the impact of other traffic flow parameters in speed prediction. To address these issues, the authors propose a two-stream multi-channel convolutional neural network (TM-CNN) model for multi-lane traffic speed prediction considering traffic volume impact. In this model, the authors first introduce a new data conversion method that converts raw traffic speed data and volume data into spatial-temporal multi-channel matrices. Then the authors carefully design a two-stream deep neural network to effectively learn the features and correlations between individual lanes, in the spatial-temporal dimensions, and between speed and volume. Accordingly, a new loss function that considers the volume impact in speed prediction is developed. A case study using 1-year data validates the TM-CNN model and demonstrates its superiority. This paper contributes to two research areas: (1) traffic speed prediction, and (2) multi-lane traffic flow study.
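
A minimal sketch of the data-conversion idea under assumed shapes: lane-level speed and volume readings are arranged into time-by-lane matrices, stacked as two channels, and cut into sliding windows that a two-stream network could consume. The window length and example dimensions are illustrative, not the paper's settings (Python).

# Hypothetical sketch: build spatial-temporal multi-channel samples from lane-level data.
import numpy as np

def to_multichannel_windows(speed, volume, window=12, horizon=1):
    """speed, volume: arrays of shape (time_steps, num_lanes).
    Returns X of shape (samples, window, num_lanes, 2) and y of shape (samples, num_lanes)."""
    assert speed.shape == volume.shape
    stacked = np.stack([speed, volume], axis=-1)         # (time, lanes, 2 channels)
    xs, ys = [], []
    for t in range(window, stacked.shape[0] - horizon + 1):
        xs.append(stacked[t - window:t])                 # past window of both channels
        ys.append(speed[t + horizon - 1])                # future lane-level speeds as target
    return np.asarray(xs), np.asarray(ys)

# Example with random data: one day of 5-minute readings on a 4-lane segment.
speed = np.random.uniform(20, 65, size=(288, 4))
volume = np.random.uniform(0, 40, size=(288, 4))
X, y = to_multichannel_windows(speed, volume)
print(X.shape, y.shape)                                  # (276, 12, 4, 2) (276, 4)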

@article{ke2020TWO,
title={Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact},
author={Ke, Ruimin and Li, Wan and Cui, Zhiyong and Wang, Yinhai},
journal={Transportation Research Record},
pages={0361198120911052},
year={2020},
publisher={SAGE Publications Sage CA: Los Angeles, CA}
}
Advanced Framework for Microscopic and Lane-level Macroscopic Traffic Parameters Estimation from UAV Video
Ruimin Ke, Shuo Feng, Zhiyong Cui, Yinhai Wang
IET Intelligent Transport Systems, 2020
abstract / bibtex / link / video

The unmanned aerial vehicle (UAV) is at the heart of modern traffic sensing research due to its advantages of low cost, high flexibility, and wide view range over traditional traffic sensors. Recently, increasing efforts in UAV-based traffic sensing have been made, and great progress has been achieved in the estimation of aggregated macroscopic traffic parameters. Compared with aggregated macroscopic traffic data, higher-resolution traffic data such as microscopic traffic parameters and lane-level macroscopic traffic parameters have attracted extensive attention since they can help deeply understand traffic patterns and individual vehicle behaviours. However, little existing research can automatically estimate microscopic traffic parameters and lane-level macroscopic traffic parameters using UAV videos with a moving background. In this study, an advanced framework is proposed to bridge the gap. Specifically, three functional modules consisting of multiple processing streams, and the interconnections among them, are carefully designed with consideration of UAV video features and traffic flow characteristics. Experimental results on real-world UAV video data demonstrate promising performance of the framework in microscopic and lane-level macroscopic traffic parameter estimation. This research pushes the boundaries of the applicability of UAVs and has enormous potential to support advanced traffic sensing and management.

@article{ke2020advanced,
title={Advanced framework for microscopic and lane-level macroscopic traffic parameters estimation from UAV video},
author={Ke, Ruimin and Feng, Shuo and Cui, Zhiyong and Wang, Yinhai},
journal={IET Intelligent Transport Systems},
year={2020},
publisher={IET}
}
Real-Time Traffic Flow Parameter Estimation From UAV Video Based on Ensemble Classifier and Optical Flow
Ruimin Ke, Zhibin Li, Jinjun Tang, Zewen Pan, Yinhai Wang
IEEE Transactions on Intelligent Transportation Systems, 2019
abstract / bibtex / link / data

Recently, the availability of unmanned aerial vehicles (UAVs) has opened up new opportunities for smart transportation applications, such as automatic traffic data collection. In this trend, detecting vehicles and extracting traffic parameters from UAV video in a fast and accurate manner is becoming crucial in many prospective applications. However, from the methodological perspective, several limitations have to be addressed before the actual implementation of UAVs. This paper proposes a new and complete analysis framework for traffic flow parameter estimation from UAV video. The framework addresses the well-known issues of UAVs' irregular ego-motion, low estimation accuracy in dense traffic, and high computational complexity by designing and integrating four stages. In the first two stages an ensemble classifier (Haar cascade + convolutional neural network) is developed for vehicle detection, and in the last two stages a robust traffic flow parameter estimation method is developed based on optical flow and traffic flow theory. The proposed ensemble classifier is demonstrated to outperform state-of-the-art vehicle detectors designed for UAV-based vehicle detection. Traffic flow parameter estimation in both free-flow and congested traffic conditions is evaluated, and the results are very encouraging. The dataset of 20,000 image samples used in this study is publicly accessible for benchmarking at http://www.uwstarlab.org/research.html.
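
A minimal sketch of the two-stage ensemble idea: a fast Haar cascade proposes candidate vehicle regions, and a CNN verifies each crop. The cascade file path and the verify_with_cnn callback are hypothetical placeholders, not the paper's trained models (Python).

# Hypothetical sketch: cheap candidate generation followed by CNN verification.
import cv2

def detect_vehicles(frame_gray, cascade, verify_with_cnn, min_score=0.5):
    candidates = cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=3)
    detections = []
    for (x, y, w, h) in candidates:
        crop = frame_gray[y:y + h, x:x + w]
        if verify_with_cnn(crop) >= min_score:           # keep only CNN-confirmed vehicles
            detections.append((x, y, w, h))
    return detections

# Usage (assumed cascade file and a dummy verifier):
# cascade = cv2.CascadeClassifier("vehicle_haar_cascade.xml")
# dets = detect_vehicles(gray_frame, cascade, verify_with_cnn=lambda crop: 1.0)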

@article{ke2018real,
title={Real-time traffic flow parameter estimation from UAV video based on ensemble classifier and optical flow},
author={Ke, Ruimin and Li, Zhibin and Tang, Jinjun and Pan, Zewen and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2018},
volume={20},
number={1},
pages={54-64},
publisher={IEEE}}
New Framework for Automatic Identification and Quantification of Freeway Bottlenecks Based on Wavelet Analysis
Ruimin Ke, Ziqiang Zeng, Ziyuan Pu, Yinhai Wang
Journal of Transportation Engineering, Part A: Systems, 2018
(Featured in the Editor's Choice Section of the journal)
abstract / bibtex / link

As the amount of traffic congestion continues to grow, pinpointing freeway bottleneck locations and quantifying their impacts are crucial activities for traffic management and control. Among the previous bottleneck identification methods, limitations still exist. The first key limitation is that they cannot determine precise breakdown durations at a bottleneck in an objective manner. Second, the input data often needs to be aggregated in an effort to ensure better robustness to noise, which will significantly reduce the time resolution. Wavelet transform, as a powerful and efficient data-processing tool, has already been implemented in some transportation application scenarios to much benefit. However, there is still a wide gap between existing preliminary explorations of wavelet analysis in transportation research and a completely automatic bottleneck identification framework. This paper addresses several key issues in existing bottleneck identification approaches and also fills a gap in transportation-related wavelet applications. The experimental results demonstrate that the proposed method is able to locate the most severe bottlenecks and comprehensively quantify their impacts.
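
A minimal sketch of the wavelet idea, assuming a per-station speed time series: a discrete wavelet transform is applied, and the magnitude of the finest-scale detail coefficients flags abrupt speed breakdowns without pre-aggregating the data. The wavelet choice and threshold rule are illustrative, not the paper's calibrated settings (Python).

# Hypothetical sketch: score abrupt speed changes with wavelet detail coefficients.
import numpy as np
import pywt

def breakdown_score(speed_series, wavelet="db4", level=3):
    """Return a per-sample score that is large where the speed changes sharply."""
    coeffs = pywt.wavedec(np.asarray(speed_series, dtype=float), wavelet, level=level)
    detail = np.abs(coeffs[-1])                          # finest-scale detail coefficients
    return np.repeat(detail, 2)[: len(speed_series)]     # roughly align to original samples

def flag_breakdowns(speed_series, k=3.0):
    """Flag samples whose score exceeds mean + k * std (a simple adaptive threshold)."""
    score = breakdown_score(speed_series)
    return score > score.mean() + k * score.std()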

@article{doi:10.1061/JTEPBS.0000168,
author = {Ruimin Ke and Ziqiang Zeng and Ziyuan Pu and Yinhai Wang },
title = {New Framework for Automatic Identification and Quantification of Freeway Bottlenecks Based on Wavelet Analysis},
journal = {Journal of Transportation Engineering, Part A: Systems},
volume = {144},
number = {9},
pages = {04018044},
year = {2018},
doi = {10.1061/JTEPBS.0000168}
}
A Cost-effective Framework for Automated Vehicle-pedestrian Near-miss Detection through Onboard Monocular Vision
Ruimin Ke, Jerome Lutin, Jerry Spears, Yinhai Wang
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017
abstract / bibtex / link / video / media coverage

Onboard monocular cameras have been widely deployed in both public transit and personal vehicles. Obtaining vehicle-pedestrian near-miss event data from onboard monocular vision systems may be cost-effective compared with onboard multi-sensor systems or traffic surveillance videos. However, extracting near-misses from onboard monocular vision is challenging, and little work has been published. This paper fills the gap by developing a framework to automatically detect vehicle-pedestrian near-misses through onboard monocular vision. The proposed framework can estimate depth and real-world motion information through monocular vision with a moving video background. Experimental results based on processing over 30 hours of video data demonstrate the ability of the system to capture near-misses, by comparison with the events logged by the Rosco/Mobileye Shield+ system, which includes four cameras working cooperatively. The detection overlap rate reaches over 90% with the thresholds properly set.
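
One common monocular range cue, shown here only as a hedged illustration and not necessarily the paper's formulation: with a known camera height and a flat-ground assumption, the image row of a pedestrian's ground-contact point gives an approximate distance. All parameter values below are made up (Python).

# Hypothetical sketch: flat-ground pinhole geometry for monocular distance estimation.
def ground_plane_distance(y_feet_px, y_horizon_px, focal_px, camera_height_m):
    """Distance along the ground to a point whose contact row is y_feet_px
    (image rows increase downward; the point must lie below the horizon row)."""
    dy = y_feet_px - y_horizon_px
    if dy <= 0:
        return float("inf")                              # at or above the horizon: out of range
    return focal_px * camera_height_m / dy

# Example: feet at row 650, horizon at row 400, focal length 1000 px, camera 1.2 m high.
print(ground_plane_distance(650, 400, 1000.0, 1.2))      # ~4.8 m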

@INPROCEEDINGS{8014858,
author={R. Ke and J. Lutin and J. Spears and Y. Wang},
booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
title={A Cost-Effective Framework for Automated Vehicle-Pedestrian Near-Miss Detection Through Onboard Monocular Vision},
year={2017},
pages={898-905},
doi={10.1109/CVPRW.2017.124},
ISSN={2160-7516},
month={July},}
Real-Time Bidirectional Traffic Flow Parameter Estimation From Aerial Videos
Ruimin Ke, Zhibin Li, Sung Kim, John Ash, Yinhai Wang*
IEEE Transactions on Intelligent Transportation Systems, 2017
abstract / bibtex / link / demo

Unmanned aerial vehicles (UAVs) are gaining popularity in traffic monitoring due to their low cost, high flexibility, and wide view range. Traffic flow parameters such as speed, density, and volume extracted from UAV-based traffic videos are critical for traffic state estimation and traffic control and have recently received much attention from researchers. However, different from stationary surveillance videos, the camera platforms move with UAVs, and the background motion in aerial videos makes it very challenging to process for data extraction. To address this problem, a novel framework for real-time traffic flow parameter estimation from aerial videos is proposed. The proposed system identifies the directions of traffic streams and extracts traffic flow parameters of each traffic stream separately. Our method incorporates four steps that make use of the Kanade-Lucas-Tomasi (KLT) tracker, k-means clustering, connected graphs, and traffic flow theory. The KLT tracker and k-means clustering are used for interest-point-based motion analysis; then, four constraints are proposed to further determine the connectivity of interest points belonging to one traffic stream cluster. Finally, the average speed of a traffic stream as well as density and volume can be estimated using outputs from previous steps and reference markings. Our method was tested on five videos taken in very different scenarios. The experimental results show that in our case studies, the proposed method achieves about 96% and 87% accuracy in estimating average traffic stream speed and vehicle count, respectively. The method also achieves a fast processing speed that enables real-time traffic information estimation.
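
A minimal sketch of the final conversion step, under assumed values: a clustered stream's average pixel velocity is converted to real-world speed using a reference marking of known length, and volume is estimated from density via the fundamental relation q = k * v. The pixels-per-metre figure, segment length, and vehicle count are illustrative (Python).

# Hypothetical sketch: pixel speed -> km/h, then volume from density via q = k * v.
def stream_speed_kmh(mean_pixel_speed, fps, pixels_per_metre):
    """mean_pixel_speed: average optical-flow magnitude (pixels/frame) for one traffic stream."""
    metres_per_second = mean_pixel_speed * fps / pixels_per_metre
    return metres_per_second * 3.6

def stream_volume_vph(vehicle_count, segment_length_m, speed_kmh):
    """Density k = vehicles per km of observed segment; volume q = k * v (veh/h)."""
    density_veh_per_km = vehicle_count / (segment_length_m / 1000.0)
    return density_veh_per_km * speed_kmh

# Example: 3.2 px/frame at 30 fps with 8 px per metre, 12 vehicles over a 150 m segment.
v = stream_speed_kmh(3.2, 30, 8)                         # ~43.2 km/h
q = stream_volume_vph(12, 150, v)                        # ~3456 veh/h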

@ARTICLE{7546916,
author={R. Ke and Z. Li and S. Kim and J. Ash and Z. Cui and Y. Wang},
journal={IEEE Transactions on Intelligent Transportation Systems},
title={Real-Time Bidirectional Traffic Flow Parameter Estimation From Aerial Videos},
year={2018},
volume={18},
number={4},
pages={890-901},
doi={10.1109/TITS.2016.2595526},
ISSN={1524-9050},
month={April},}
Roadway surveillance video camera calibration using standard shipping container
Ruimin Ke, Zewen Pan, Ziyuan Pu, Yinhai Wang
IEEE International Smart Cities Conference, 2017
abstract / bibtex / link

Surveillance video cameras have been increasingly deployed on roadway networks, providing important support for roadway management. While the information-rich video images are a valuable source of traffic data, these surveillance video cameras are typically designed for manual observation of roadway conditions, not for automatic traffic data collection. The benefits of turning these surveillance cameras into data collection cameras are obvious, but collecting traffic data would normally require a cost-effective method to efficiently and accurately calibrate the cameras. This paper proposes such a robust and efficient method that calibrates surveillance video cameras using a standard shipping container as the reference object. The traditional camera calibration model can be simplified, and the camera parameters can be recovered with precise mathematical derivation. After solving for all the camera parameters, 3D world coordinates can be reconstructed from 2D image coordinates, thus enabling the collection of a variety of traffic data from surveillance video.
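
For illustration only, a generic perspective-n-point formulation rather than the paper's closed-form derivation: the known dimensions of a standard 40 ft container supply 3D reference points, and their image coordinates recover the camera pose. The image points and intrinsic matrix are assumed inputs (Python).

# Hypothetical sketch: recover camera pose from the eight corners of a 40 ft container.
import numpy as np
import cv2

L, W, H = 12.19, 2.44, 2.59                              # nominal container dimensions (metres)
object_points = np.array([
    [0, 0, 0], [L, 0, 0], [L, W, 0], [0, W, 0],          # bottom face corners
    [0, 0, H], [L, 0, H], [L, W, H], [0, W, H],          # top face corners
], dtype=np.float32)

def calibrate_from_container(image_points, camera_matrix, dist_coeffs=None):
    """image_points: (8, 2) pixel coordinates of the same corners, in the same order."""
    ok, rvec, tvec = cv2.solvePnP(object_points, np.asarray(image_points, np.float32),
                                  camera_matrix, dist_coeffs)
    return ok, rvec, tvec                                # camera rotation (Rodrigues) and translation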

@INPROCEEDINGS{8090811,
author={R. Ke and Z. Pan and Z. Pu and Y. Wang},
booktitle={2017 International Smart Cities Conference (ISC2)},
title={Roadway surveillance video camera calibration using standard shipping container},
year={2017},
volume={},
number={},
pages={1-6},
doi={10.1109/ISC2.2017.8090811},
ISSN={},
month={Sept},}
Motion-vector clustering for traffic speed detection from UAV video
Ruimin Ke, Sung Kim, Zhibin Li, Yinhai Wang
IEEE First International Smart Cities Conference, 2015
abstract / bibtex / link

A novel method for detecting the average speed of traffic from non-stationary aerial video is presented. The method first extracts interest points from a pair of frames and performs interest point tracking with an optical flow algorithm. The output of the optical flow is a set of motion vectors which are k-means clustered in velocity space. The centers of the clusters correspond to the average velocities of traffic and the background, and are used to determine the speed of traffic relative to the background. The proposed method is tested on a 70-frame test sequence of UAV aerial video, and achieves an average error for speed estimates of less than 12%.
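
A minimal sketch of the pipeline under stated assumptions: interest points are tracked between two grayscale frames with pyramidal Lucas-Kanade optical flow, the motion vectors are k-means clustered in velocity space, and each cluster's speed is measured relative to the largest cluster, which is assumed to be the background. The cluster count and parameters are illustrative (Python).

# Hypothetical sketch: motion-vector clustering on optical-flow tracks from a UAV frame pair.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def motion_vector_clusters(prev_gray, curr_gray, n_clusters=3):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=5)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    vectors = (nxt[good] - pts[good]).reshape(-1, 2)     # per-point motion in pixels/frame
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(vectors)
    return km.cluster_centers_, km.labels_

def relative_cluster_speeds(centers, labels):
    """Treat the largest cluster as background (UAV ego-motion) and report each cluster's
    motion magnitude relative to it, in pixels/frame."""
    background = centers[np.bincount(labels).argmax()]
    return np.linalg.norm(centers - background, axis=1)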

@INPROCEEDINGS{7366230,
author={R. Ke and S. Kim and Z. Li and Y. Wang},
booktitle={2015 IEEE First International Smart Cities Conference (ISC2)},
title={Motion-vector clustering for traffic speed detection from UAV video},
year={2015},
volume={},
number={},
pages={1-5},
doi={10.1109/ISC2.2015.7366230},
ISSN={},
month={Oct},}
Lane-changes prediction based on adaptive fuzzy neural network
Jinjun Tang, Fang Liu, Wenhui Zhang, Ruimin Ke, Yajie Zou
Expert Systems with Applications, 2018
abstract / bibtex / link

The lane changing maneuver is one of the most important driving behaviors. Unreasonable lane changes can cause serious collisions and consequent traffic delays. High-precision prediction of lane changing intent is helpful for improving driving safety. In this study, by fusing information from vehicle sensors, a lane changing predictor based on an Adaptive Fuzzy Neural Network (AFNN) is proposed to predict steering angles. The prediction model includes two parts: a fuzzy neural network based on Takagi–Sugeno fuzzy inference, in which an improved Least Squares Estimator (LSE) is adopted to optimize parameters, and an adaptive learning algorithm to update the membership functions and rule base. Experiments are conducted in a driving simulator under scenarios with different lead-vehicle speed levels: 60 km/h, 80 km/h, and 100 km/h. Prediction results show that the proposed method is able to accurately follow steering angle patterns. Furthermore, comparison of prediction performance with several machine learning methods further verifies the learning ability of the AFNN. Finally, a sensitivity analysis indicates that heading angle and vehicle acceleration are also important factors for predicting lane changing behavior.
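
A minimal sketch of zero-order Takagi-Sugeno inference, the building block named above, and not the paper's full adaptive network: Gaussian membership functions fire each rule, and the output is the firing-strength-weighted average of constant rule consequents. All array shapes are assumptions (Python).

# Hypothetical sketch: zero-order Takagi-Sugeno fuzzy inference with Gaussian memberships.
import numpy as np

def ts_inference(x, centers, widths, consequents):
    """x: input vector (n_features,); centers, widths: (n_rules, n_features);
    consequents: (n_rules,) constant rule outputs (zero-order TS)."""
    memberships = np.exp(-((x - centers) ** 2) / (2 * widths ** 2))   # Gaussian membership values
    firing = memberships.prod(axis=1)                                 # rule firing strength (AND via product)
    return float((firing * consequents).sum() / (firing.sum() + 1e-12))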

@article{TANG2018452,
title = "Lane-changes prediction based on adaptive fuzzy neural network",
journal = "Expert Systems with Applications",
volume = "91",
pages = "452 - 463",
year = "2018",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2017.09.025",
author = "Jinjun Tang and Fang Liu and Wenhui Zhang and Ruimin Ke and Yajie Zou",
}
Active Safety-Collision Warning Pilot in Washington State
Jerry Spears, Jerome Lutin, Yinhai Wang, Ruimin Ke, Steven Clancy
TRANSIT-IDEA Program Project Final Report, 2017
abstract / bibtex / link / media coverage

The Rosco/Mobileye Shield+ system is a collision avoidance warning system (CAWS) specifically designed for transit buses. This project involved field testing and evaluation of the CAWS in revenue service over a three-month period. The system provides alerts and warnings to the bus driver for the following conditions that could lead to a collision: 1) changing lanes without activating a turn signal, 2) exceeding posted speed limit, 3) monitoring headway with the vehicle leading the bus, 4) forward vehicle collision warning, and 5) pedestrian or cyclist collision warning in front of, or alongside the bus. Alerts and warnings are displayed to the driver by visual indicators located on the windshield and front pillars. Audible warnings are issued when collisions are imminent. Research objectives included: create a robust Rosco/Mobileye demonstration pilot for active/collision avoidance within the State of Washington on a minimum of 35 transit buses; determine the ease of retrofit of the existing fleet; develop a methodology for estimating the full costs savings of avoided collisions for each agency; develop a methodology and evaluation process for transit driver feedback and acceptance as well as bus passenger feedback; and provide detailed data and understanding on entrance barriers to this technology. The pilot test showed that although driver acceptance was mixed, there were large reductions in near-miss events for CAWS-equipped buses. Consequently, achieving driver acceptance will be a key factor in continued development and deployment of CAWS. As a result of comments received from the drivers, the vendor has begun a program to incorporate desired modifications to the system including reducing false positives. A second major factor in achieving industry acceptance is to demonstrate the business case for CAWS to both transit agencies and system developers. Although the pilot project produced encouraging results, collisions, injuries and fatalities can be considered rare events. A much larger in-service test will be needed to demonstrate actual cost-savings.

@Report{01643748,
author={Jerry Spears and Jerome Lutin and Yinhai Wang and Ruimin Ke and Steven Clancy},
journal={Transit IDEA Project},
title={Active Safety-Collision Warning Pilot in Washington State},
year={2017},
publisher={Transportation Research Board},
volume={},
number={82},
pages={1-33},
month={May},}
Digital Roadway Interactive Visualization and Evaluation Network Applications to WSDOT Operational Data Usage
Yinhai Wang, Ruimin Ke, Weibin Zhang, Zhiyong Cui, Kristian Henrickson
Washington State Department of Transportation (WSDOT) Research Report, 2016
abstract / link / video / website

DRIVE Net is a region-wide, Web-based transportation decision support system that adopts digital roadway maps as the base, and provides data layers for integrating and analyzing a variety of data sources (e.g., traffic sensors, incident records). Moreover, DRIVE Net offers a platform for streamlining transportation analysis and decision making, and it serves as a practical tool for visualizing historical observations spatially and temporally. In its current implementation, DRIVE Net demonstrates the potential to be used as a standard tool for incorporating multiple data sets from different fields and as a platform for real-time decision making. In comparison with the previous version, the new DRIVE Net system is now able to handle more complex computational tasks, perform large-scale spatial processing, and support data sharing services to provide a stable and interoperable platform to process, analyze, visualize, and share transportation data. DRIVE Net’s capabilities include generating statistics for WSDOT’s Gray Notebook (GNB), including travel times, throughput productivity, and traffic delay calculations for both general purpose and HOV lanes, each of which are important performance indicators in the WSDOT congestion report. The DRIVE Net system includes robust loop detector data processing and quality control methods to address the data quality issues impacting loop detectors throughout the state. The capabilities of the DRIVE Net system have been expanded to include safety modeling, hotspot identification, and incident induced delay estimation. Specifically, the Safety Performance module includes functions that can be used to obtain traffic incident frequency, apply predictive models to estimate the safety performance of road segments, and visualize and compare observed incident counts and different predictive models. Additionally, a module providing multi-modal data analysis and visualization capabilities was developed as a pilot experiment for integration of heterogeneous data. This module includes pedestrian and bicycle, public transit, park and ride, Car2Go, and ferry data downloading and visualization. DRIVE Net now offers role-based access control, such that access privileges to different functions and data resources can be assigned on a group or individual basis. The new system is able to support more complex analytics and decision support features on a large-scale transportation network, and is expected to be of great practical use for both traffic engineers and researchers. With a modular structure and mature data integration and management framework, DRIVE Net can be expanded in the future to include a variety of additional data resources and analytical capabilities.


Teaching
SC5302: Smart Cities Design
As Instructor / University of Texas at El Paso
Spring 2023
CE5390: Special Topics in Civil Engineering - Intelligent Transportation Systems
As Instructor / University of Texas at El Paso
Fall 2022
CE4340: Transportation Engineering
As Instructor / University of Texas at El Paso
Spring 2023, Spring 2022
Computer Vision for Intelligent Transportation Systems
As Invited Lecturer / Connecting with Professionals event for the "18225 High Definition" Robotics Team
Lecture Video
Fall 2020
CET590: Traffic Simulation and System Operations
As Pre-Doctoral Instructor / University of Washington
Fall 2019


Research Group and Opportunities
Current Group Members
Talha Azfar, PhD student - LinkedIn / Google Scholar
Alumni
Chengyue Wang, MS Student 2022-2023 at UTEP / Current Position: PhD Candidate at University of Macau
Prospective Students
I am currently looking for 1-2 motivated PhD students to join our group! If you are interested in working with me at RPI, please email me your CV, transcripts, GRE scores, and anything else you would like me to know. You can also apply directly through the RPI CEE programs and mention me in your application. I will try my best to respond to every applicant's email.
RPI students: (1) If you are already a graduate student at RPI, feel free to reach out. (2) I am also accepting undergraduate students for individual studies.