Ruimin "Roman" KE
How to pronounce? /'reimin/kə/

I am a tenure-track Assistant Professor of Transportation Engineering in the Department of Civil and Environmental Engineering at Rensselaer Polytechnic Institute (RPI). Our research group at RPI seeks talented and dedicated students to join our team. We are also open to collaborating with individuals and groups from both academia and industry who share our research interests.

I received my Ph.D. and M.S. degrees in Transportation Engineering from the University of Washington, an M.S. degree in Computer Science from the University of Illinois at Urbana-Champaign, and a B.E. degree in Automation from Tsinghua University. My research interests lie in Intelligent Transportation Systems, with a focus on Machine Learning and Edge/Cloud/Quantum Computing algorithms and models for a safe, equitable, and resilient transportation infrastructure system. I serve on the editorial boards of four journals: Nature Scientific Reports, Transportation Research Record, Data Science for Transportation, and the Journal of Air Transport Management. I have been extremely privileged to receive multiple academic awards in recognition of my collaborative research accomplishments, including the 2023 TRB Best Paper Award, the 2022 Outstanding Paper Award from the IEEE DTPI Conference, the 2020-2021 COTA Best Dissertation Award, and the Editor's Choice Paper Award from the Journal of Transportation Engineering.

In my free time, I play badminton; I have been playing since I was eight years old and have achieved significant milestones in the sport, winning over 70 medals and trophies since 2001. I was crowned junior champion of Sichuan Province 11 times and of Chengdu City 15 times. Later, I joined the Tsinghua University Badminton Team, where I continued to excel, winning nine Beijing titles and a national runner-up finish for Tsinghua University. I was the men's singles champion of Beijing (university students) for three consecutive years (2011, 2012, and 2013). After relocating to Seattle, I won nine more trophies, including the men's singles championship at the 2015 Northwest Husky Badminton Open, the men's singles runner-up at the 2016 WA State Badminton Open, the men's doubles runner-up at the 2018 WA State Badminton Closed, and third place in men's doubles at the 2023 New Jersey IBC Open. I also play tennis and pickleball.

E-Mail / Google Scholar / ResearchGate / LinkedIn / GitHub / PhD Dissertation / RPI Profile

News
  • 2024/11: The ChatSUMO paper is published in IEEE Transactions on Intelligent Vehicles. Congrats to Shuyang and Talha!
  • 2024/11: New project from IBM.
  • 2024/10: Our group has four papers accepted for presentation at TRB 2024.
  • 2024/10: We hosted Dr. Yao-Jan Wu, the Executive Director of the Arizona Transportation Institute (AZTI).
  • 2024/10: Invited guest lecture on transportation edge computing.
  • 2024/09: New projects from USDOT SEMPACT.
  • 2024/08: Welcome Kaicong Huang (National University of Singapore), who joins our group as a PhD research assistant.
  • 2024/06: Welcome Shuyang Li (University of Michigan-Ann Arbor) and Jiachen Ye (Tsinghua University), who officially join our group as research interns.
  • 2024/06: Our group, in collaboration with Google Inc, AIWaysion Inc, UW, and FSU, developed a comprehensive review paper on quantum computing in ITS.
  • 2024/05: The TRB Edge Computing Subcommittee organized a session at the AI in Motion Symposium, focusing on roadway infrastructure computing and vehicle computing, with four speakers: Dr. Chris Claudel, Dr. Chenxi Liu, Dr. Adian Cook, and Dr. Sidi Lu.
  • 2024/04: We hosted Dr. Yinhai Wang (University of Washington), Dr. Kris Henrickson (Uber), Dr. Ken Yang (AECOM), and Dr. Yiheng Feng (Purdue University).
  • 2024/02: US Patent "Determining a visibility measure based on an image of an environment" is published.
  • 2024/01: I am honored to be selected as a Board of Directors member of the Chinese Overseas Transportation Association (COTA).
  • 2024/01: First TRB Edge Computing Subcommittee Meeting scheduled at TRB 2024, featuring four renowned speakers from academia and industry.
  • 2023/12: Invited webinar at NYC DOT.
  • 2023/10: Talha's first-author paper "Deep Learning based Computer Vision Methods for Complex Traffic Environments Perception: A Review" is accepted by the Data Science for Transportation journal.
  • 2023/10: Invited talk at USDOT Region 10 PacTrans Annual Conference.
  • 2023/08: A proposal I led as PI on Machine Learning in Transportation Education has been awarded by the National Science Foundation.
  • 2023/07: I am happy to announce that I will join Rensselaer Polytechnic Institute as an Assistant Professor starting in August 2023.
  • 2023/06: I was appointed Chair of the Edge Computing Subcommittee of the TRB Standing Committee on AI and Advanced Computing Applications.
  • 2023/04: The first edition of my co-authored book Machine Learning for Transportation Research and Applications is published by Elsevier.
Selected Publications

ChatSUMO: Large Language Model for Automating Traffic Scenario Generation in Simulation of Urban MObility
Shuyang Li, Talha Azfar, Ruimin Ke
IEEE Transactions on Intelligent Vehicles, 2024
abstract / bibtex / link

Large Language Models (LLMs), capable of handling multi-modal input and outputs such as text, voice, images, and video, are transforming the way we process information. Beyond just generating textual responses to prompts, they can integrate with different software platforms to offer comprehensive solutions across diverse applications. In this paper, we present ChatSUMO, an LLM-based agent that integrates language processing skills to generate abstract and real-world simulation scenarios in the widely-used traffic simulator - Simulation of Urban MObility (SUMO). Our methodology begins by leveraging the LLM for user input, which adapts it to relevant keywords needed to run python scripts. These scripts are designed to convert specified regions into coordinates, fetch data from OpenStreetMap, transform it into a road network, and subsequently run SUMO simulations with the designated traffic conditions. The outputs of the simulations are then interpreted by the LLM resulting in informative comparisons and summaries. Users can continue the interaction and generate a variety of customized scenarios without prior traffic simulation expertise. Any city available from OpenStreetMap can be imported, and for demonstration, we created a real-world simulation for the city of Albany. ChatSUMO also allows simulation customization capabilities of edge edit, traffic light optimization, and vehicle edit by users through the web interface.

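For readers curious how the OSM-to-SUMO portion of this pipeline can be scripted, the sketch below shows one minimal way to fetch an OpenStreetMap extract, convert it with netconvert, generate simple random demand, and step the simulation through TraCI. The bounding box, file names, and demand settings are placeholder assumptions, and ChatSUMO's LLM prompt-handling and summarization stages are not shown.

# Hypothetical sketch of the OSM -> network -> SUMO portion of the pipeline
# described above (bounding box, file names, and trip settings are assumptions;
# the LLM keyword extraction and summarization steps are omitted).
import os
import subprocess
import requests
import traci  # available from the SUMO tools directory or via pip

# 1. Fetch raw OpenStreetMap data for a bounding box (min_lat, min_lon, max_lat, max_lon).
bbox = (42.64, -73.78, 42.68, -73.74)  # rough Albany, NY area -- illustrative only
osm_xml = requests.get(
    "https://overpass-api.de/api/map",
    params={"bbox": f"{bbox[1]},{bbox[0]},{bbox[3]},{bbox[2]}"},  # left,bottom,right,top
    timeout=120,
).text
with open("albany.osm", "w", encoding="utf-8") as f:
    f.write(osm_xml)

# 2. Convert the OSM extract into a SUMO road network.
subprocess.run(["netconvert", "--osm-files", "albany.osm", "-o", "albany.net.xml"], check=True)

# 3. Generate simple random demand with SUMO's bundled randomTrips.py
#    (assumes the SUMO_HOME environment variable points at the SUMO installation).
tools = os.path.join(os.environ["SUMO_HOME"], "tools")
subprocess.run(["python", os.path.join(tools, "randomTrips.py"),
                "-n", "albany.net.xml", "-r", "albany.rou.xml", "-e", "600"], check=True)

# 4. Run the simulation through TraCI and collect a simple summary statistic.
traci.start(["sumo", "-n", "albany.net.xml", "-r", "albany.rou.xml"])
arrived = 0
for _ in range(600):                      # 600 simulated seconds at 1 s steps
    traci.simulationStep()
    arrived += traci.simulation.getArrivedNumber()
traci.close()
print(f"Vehicles that completed their trips: {arrived}")
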
@article{li2024chatsumo,
title={ChatSUMO: Large Language Model for Automating Traffic Scenario Generation in Simulation of Urban MObility},
author={Li, Shuyang and Azfar, Talha and Ke, Ruimin},
journal={IEEE Transactions on Intelligent Vehicles},
pages={1-12},
year={2024},
publisher={IEEE}
}
Deep Learning‑Based Computer Vision Methods for Complex Traffic Environments Perception: A Review
Talha Azfar, Jinlong Li, Hongkai Yu, Kelvin Cheu, Yisheng Lv, Ruimin Ke
Data Science for Transportation, 2024
abstract / bibtex / link

Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducted an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data needed to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, and shake in the wind, while the traffic conditions are highly heterogeneous, with violation of rules and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. The possible ways of dealing with the challenges are also explored while prioritizing practical deployment.

@article{azfar2024deep,
title={Deep learning-based computer vision methods for complex traffic environments perception: A review},
author={Azfar, Talha and Li, Jinlong and Yu, Hongkai and Cheu, Ruey L and Lv, Yisheng and Ke, Ruimin},
journal={Data Science for Transportation},
volume={6},
number={1},
pages={1},
year={2024},
publisher={Springer}
}
Prototype Data Dashboard for Multi-Source Transportation and Community Health Data Analytics
Chengyue Wang, Talha Azfar, Ruimin Ke
International Conference on Transportation and Development 2024: Transportation Safety and Emerging Technologies
abstract / bibtex / link / Demo

This study had a primary objective of creating a versatile and scalable data dashboard prototype that highlights transportation and community health data, using El Paso as a case study. The dashboard serves as a centralized hub for various data sources within the city, facilitating visual analysis to uncover the intricate connections between these two crucial domains. It’s important to note that the primary aim was not to develop a final product for the city, but rather to establish a proof-of-concept, engage potential users, and conduct scenario analysis. The true impact of this study is expected to extend beyond the creation of the dashboard itself, as it has the potential to influence significant city-wide or regional policy changes based on the valuable insights derived from the comprehensive data analysis. This initiative aims to provide a robust tool for decision-makers and stakeholders to make informed choices that could ultimately benefit the community’s transportation and health outcomes.

@inproceedings{wang2024prototype,
title={Prototype Data Dashboard for Multi-Source Transportation and Community Health Data Analytics},
author={Wang, Chengyue and Azfar, Talha and Ke, Ruimin},
booktitle={International Conference on Transportation and Development 2024},
pages={699--709},
year={2024}
}
Digital Twin Technologies Towards Understanding the Interactions Between Transportation and Other Civil Infrastructure Systems: Phase 2
Ruimin Ke, Jeffrey Weidner, Adeeba Raheem, Ruey Long Cheu, Talha Azfar, Angelica Littles, Francisco Guilien, Emily Solem, Christopher Edens
USDOT C2SMART Final Report, 2023
abstract / bibtex / link / Webinar

The advent of Digital Twin (DT) technology signifies a significant stride towards the progressive transformation from physical to digital paradigms within the realm of civil engineering. Its predecessors, namely Computer-Aided Drafting (CAD) and Building Information Modeling (BIM), have revolutionized the industry by streamlining the documentation of designs, curtailing both time and cost investments. The primary objective of this research was to explore different ways to create Digital Twins for civil infrastructure and to explore the effectiveness of DT technology as a tool to create new visualizations and understand interactions between transportation and other related civil infrastructure systems. We used The University of Texas at El Paso (UTEP) campus as a living lab by creating DT models based on transportation network, structural modelling, and LiDAR scans of different parts of the campus. The transportation network and 3D model of the entire campus was combined for traffic simulation and real-time sensing at a roundabout, while the digital model of a pedestrian bridge was made for structural simulations with provisions for strain and tilt sensors. In addition, LiDAR scans of campus buildings were integrated into virtual reality (VR) applications for visualization in VR headsets. Multiple scenarios were tested for each DT to showcase how the technology can be applied to real world problems, resulting in greater ease of use and more informed design decisions. The project concluded with a stakeholder engagement workshop focused on utilizing DT technologies for research relevant to civil infrastructure in the El Paso region.

@article{ke2023digital,
title={Digital Twin Technologies Towards Understanding the Interactions Between Transportation and Other Civil Infrastructure Systems: Phase 2},
author={Ke, Ruimin and Weidner, Jeffrey and Raheem, Adeeba and Cheu, Ruey Long and Azfar, Talha and Littles, Angelica and Guilien, Francisco and Solem, Emily and Edens, Christopher},
year={2023},
publisher={Connected Communities for Smart Mobility Toward Accessible and Resilient Transportation for Equitably Reducing Congestion (C2SMARTER) Tier-1 University Transportation Center (UTC)}
}
Lightweight Edge Intelligence Empowered Near-crash Detection Towards Real-time Vehicle Event Logging
Ruimin Ke, Zhiyong Cui, Yanlong Chen, Meixin Zhu, Hao Yang, Yifan Zhuang, Yinhai Wang
IEEE Transactions on Intelligent Vehicles, 2023
abstract / bibtex / link / demo

A major role of automated vehicles is that vehicles serve as mobile sensors for event detection and data collection, which support tactical automation in autonomous driving and post-analysis for traffic safety. However, most data collected during regular operations of vehicles are not of interest, while it costs a large amount of computation, communication, and storage resources on the cloud servers. Vehicular edge computing has emerged as a promising paradigm to balance these high costs in traditional cloud computing. But edge computers often have limited resources to support the high efficiency and intelligence of advanced vehicular functions. Motivated by the existing challenges and new concepts, this paper proposes and tests a lightweight edge intelligence framework for vehicle event detection and logging that runs in an event-based and real-time manner. Specifically, this paper takes vehicle-vehicle and vehicle-pedestrian near-crashes as the events of interest. The lightweight algorithm design of modeling the bounding boxes in object detection/tracking enables real-time edge intelligence onboard a vehicle; The event-based data logging mechanism eliminates redundant data onboard and integrates multi-source information for individual near-crash events. Comprehensive open-road tests on four transit vehicles have been conducted.

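As a rough illustration of how bounding-box modeling can yield lightweight near-crash cues, the sketch below estimates a monocular time-to-collision proxy from the frame-to-frame growth of a tracked box and logs an event when it falls below a threshold. This is a generic heuristic with assumed names and thresholds, not the paper's actual algorithm.

# Illustrative monocular time-to-collision (TTC) proxy from bounding-box scaling.
# This is NOT the paper's exact algorithm -- just a common lightweight heuristic
# consistent with the "modeling the bounding boxes" idea described above.
from dataclasses import dataclass

@dataclass
class TrackedBox:
    track_id: int
    height_px: float        # bounding-box height in the current frame
    prev_height_px: float   # height in the previous frame

def ttc_from_box(box: TrackedBox, dt: float) -> float:
    """Approximate TTC (seconds) as h / (dh/dt). A looming (growing) box means the
    object is closing in; a small positive TTC indicates a potential near-crash."""
    growth = (box.height_px - box.prev_height_px) / dt
    if growth <= 0:          # box shrinking or constant: object not approaching
        return float("inf")
    return box.height_px / growth

def log_near_crash(tracks, dt=1 / 30, ttc_threshold=2.0):
    """Event-based logging: only tracks with a low TTC are kept, so routine frames
    never leave the edge device."""
    events = []
    for box in tracks:
        ttc = ttc_from_box(box, dt)
        if ttc < ttc_threshold:
            events.append({"track_id": box.track_id, "ttc_s": round(ttc, 2)})
    return events

# Example: a pedestrian box grew from 80 px to 90 px in one 30-fps frame.
print(log_near_crash([TrackedBox(track_id=7, height_px=90.0, prev_height_px=80.0)]))
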
@article{ke2023lightweight,
title={Lightweight Edge Intelligence Empowered Near-crash Detection Towards Real-time Vehicle Event Logging},
author={Ke, Ruimin and Cui, Zhiyong and Chen, Yanlong and Zhu, Meixin and Yang, Hao and Zhuang, Yifan and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Vehicles},
year={2023},
publisher={IEEE}
}
Efficient Procedure of Building University Campus Models for Digital Twin Simulation
Talha Azfar, Jeffrey Weidner, Adeeba Raheem, Ruimin Ke, Kelvin Cheu
IEEE Journal of Radio Frequency Identification, 2022
abstract / bibtex / link / tutorial presentation

Realistic digital geographical models of real-world locations are a necessary starting point for digital twin applications, especially for simulation and visualization. However, the visual fidelity of this first step is often neglected, since the effort involved is counterproductive to the main research focus. In this paper, we explore different tools and resources to assemble an efficient and convenient procedure to create a 3D digital model of a university campus that can support digital twin applications. Specifically, the terrain, buildings, and road network are combined into the CARLA project on Unreal Engine, enabling computer vision, traffic simulation, and autonomous driving experimentation.

@article{azfar2022efficient,
title={Efficient Procedure of Building University Campus Models for Digital Twin Simulation},
author={Azfar, Talha and Weidner, Jeffrey and Raheem, Adeeba and Ke, Ruimin and Cheu, Ruey Long},
journal={IEEE Journal of Radio Frequency Identification},
volume={6},
pages={769-773},
year={2022},
publisher={IEEE}
}
When Intelligent Transportation Systems Sensing Meets Edge Computing: Vision and Challenges
Ruimin Ke, Xuan Zhou, Hao Yang, Chenxi Liu
Applied Sciences, 2021
abstract / bibtex / link

The widespread use of mobile devices and sensors has motivated data-driven applications that can leverage the power of big data to benefit many aspects of our daily life, such as health, transportation, economy, and environment. Under the context of smart city, intelligent transportation systems (ITS), as a main building block of modern cities, and edge computing (EC), as an emerging computing service that targets addressing the limitations of cloud computing, have attracted increasing attention in the research community in recent years. It is well believed that the application of EC in ITS will have considerable benefits to transportation systems regarding efficiency, safety, and sustainability. Despite the growing trend in ITS and EC research, a big gap in the existing literature is identified: the intersection between these two promising directions has been far from well explored. In this paper, we focus on a critical part of ITS, i.e., sensing, and conduct a review of the recent advances in ITS sensing and EC applications in this field. The key challenges in ITS sensing and future directions with the integration of edge computing are discussed.

@article{zhou2021intelligent,
title={When Intelligent Transportation Systems Sensing Meets Edge Computing: Vision and Challenges},
author={Zhou, Xuan and Ke, Ruimin and Yang, Hao and Liu, Chenxi},
journal={Applied Sciences},
volume={11},
number={20},
pages={9680},
year={2021},
publisher={Multidisciplinary Digital Publishing Institute}
}
A Smart, Efficient, and Reliable Parking Surveillance System With Edge Artificial Intelligence on IoT Devices
Ruimin Ke, Yifan Zhuang, Ziyuan Pu, Yinhai Wang
IEEE Transactions on Intelligent Transportation Systems, 2021
abstract / bibtex / link

Cloud computing has been a main-stream computing service for years. Recently, with the rapid development in urbanization, massive video surveillance data are produced at an unprecedented speed. A traditional solution to deal with the big data would require a large amount of computing and storage resources. With the advances in Internet of things (IoT), artificial intelligence, and communication technologies, edge computing offers a new solution to the problem by processing all or part of the data locally at the edge of a surveillance system. In this study, we investigate the feasibility of using edge computing for smart parking surveillance tasks, specifically, parking occupancy detection using the real-time video feed. The system processing pipeline is carefully designed with the consideration of flexibility, online surveillance, data transmission, detection accuracy, and system reliability. It enables artificial intelligence at the edge by implementing an enhanced single shot multibox detector (SSD). A few more algorithms are developed either locally at the edge of the system or on the centralized data server targeting optimal system efficiency and accuracy. Thorough field tests were conducted in the Angle Lake parking garage for three months. The experimental results are promising that the final detection method achieves over 95% accuracy in real-world scenarios with high efficiency and reliability. The proposed smart parking surveillance system is a critical component of smart cities and can be a solid foundation for future applications in intelligent transportation systems.

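A minimal sketch of the edge-side detection idea, assuming a stock torchvision SSDlite model as a stand-in for the paper's enhanced SSD: vehicles are detected in a frame, and a stall is marked occupied when a detection overlaps its region of interest. The stall coordinates, class ids, and thresholds are illustrative placeholders.

# Hedged sketch of SSD-based parking occupancy detection. The paper uses an
# enhanced SSD tuned for the edge; here a stock torchvision SSDlite stands in,
# and stall ROIs / thresholds are made-up illustrative values.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()
VEHICLE_CLASSES = {3, 6, 8}  # COCO ids: car, bus, truck

def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    ix1, iy1, ix2, iy2 = max(ax1, bx1), max(ay1, by1), min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def stall_occupancy(frame_path, stalls, score_thr=0.5, iou_thr=0.3):
    """stalls: {stall_id: (x1, y1, x2, y2)} ROIs drawn once per camera view."""
    img = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        det = model([img])[0]
    boxes = [b.tolist() for b, s, c in zip(det["boxes"], det["scores"], det["labels"])
             if s >= score_thr and int(c) in VEHICLE_CLASSES]
    return {sid: any(iou(roi, b) >= iou_thr for b in boxes) for sid, roi in stalls.items()}

# Example with two hand-drawn stall ROIs (pixel coordinates are placeholders).
print(stall_occupancy("garage_cam.jpg", {"A1": (50, 200, 180, 320), "A2": (200, 200, 330, 320)}))
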
@article{9061155,
title={A Smart, Efficient, and Reliable Parking Surveillance System With Edge Artificial Intelligence on IoT Devices},
author={Ke, Ruimin and Zhuang, Yifan and Pu, Ziyuan and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2020},
volume={},
number={},
pages={1-13}
}
Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact
Ruimin Ke, Wan Li, Zhiyong Cui, Yinhai Wang
Transportation Research Record, 2020
abstract / bibtex / link / data (wsdot --> loopgroup data download)

Traffic speed prediction is a critically important component of intelligent transportation systems. Recently, with the rapid development of deep learning and transportation data science, a growing body of new traffic speed prediction models have been designed that achieved high accuracy and large-scale prediction. However, existing studies have two major limitations. First, they predict aggregated traffic speed rather than lane-level traffic speed; second, most studies ignore the impact of other traffic flow parameters in speed prediction. To address these issues, the authors propose a two-stream multi-channel convolutional neural network (TM-CNN) model for multi-lane traffic speed prediction considering traffic volume impact. In this model, the authors first introduce a new data conversion method that converts raw traffic speed data and volume data into spatial temporal multi-channel matrices. Then the authors carefully design a two-stream deep neural network to effectively learn the features and correlations between individual lanes, in the spatial temporal dimensions, and between speed and volume. Accordingly, a new loss function that considers the volume impact in speed prediction is developed. A case study using 1-year data validates the TM-CNN model and demonstrates its superiority. This paper contributes to two research areas: (1) traffic speed prediction, and (2) multi-lane traffic flow study.

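The toy PyTorch sketch below illustrates the two-stream layout described above: speed and volume are each arranged as a lanes-by-time matrix, passed through separate convolutional streams, and fused to predict next-step per-lane speeds, with a volume-weighted error term standing in for the paper's custom loss. Layer sizes and the weighting scheme are assumptions, not the published TM-CNN configuration.

# Toy PyTorch sketch of the two-stream idea described above. Layer sizes, the
# (lanes x time) layout, and the volume-weighted loss term are assumptions for
# illustration, not the paper's exact architecture.
import torch
import torch.nn as nn

class TwoStreamSpeedNet(nn.Module):
    def __init__(self, lanes=4, history=12):
        super().__init__()
        def stream():  # one small conv stream per input matrix (speed / volume)
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
            )
        self.speed_stream, self.volume_stream = stream(), stream()
        self.head = nn.Linear(2 * 16 * lanes * history, lanes)  # next-step speed per lane

    def forward(self, speed_mat, volume_mat):
        # each input: (batch, 1, lanes, history) spatial-temporal matrix
        feats = torch.cat([self.speed_stream(speed_mat), self.volume_stream(volume_mat)], dim=1)
        return self.head(feats)

def volume_weighted_mse(pred, target_speed, target_volume):
    """Stand-in loss: speed errors on high-volume lanes are weighted more heavily."""
    weights = 1.0 + target_volume / (target_volume.mean() + 1e-6)
    return (weights * (pred - target_speed) ** 2).mean()

model = TwoStreamSpeedNet(lanes=4, history=12)
speed, volume = torch.rand(8, 1, 4, 12), torch.rand(8, 1, 4, 12)
pred = model(speed, volume)
loss = volume_weighted_mse(pred, torch.rand(8, 4), torch.rand(8, 4))
loss.backward()
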
@article{ke2020TWO,
title={Two-Stream Multi-Channel Convolutional Neural Network for Multi-Lane Traffic Speed Prediction Considering Traffic Volume Impact},
author={Ke, Ruimin and Li, Wan and Cui, Zhiyong and Wang, Yinhai},
journal={Transportation Research Record},
pages={0361198120911052},
year={2020},
publisher={SAGE Publications Sage CA: Los Angeles, CA}
}
Advanced Framework for Microscopic and Lane-level Macroscopic Traffic Parameters Estimation from UAV Video
Ruimin Ke, Shuo Feng, Zhiyong Cui, Yinhai Wang
IET Intelligent Transport Systems, 2020
abstract / bibtex / link / video

Unmanned aerial vehicle (UAV) is at the heart of modern traffic sensing research due to its advantages of low cost, high flexibility, and wide view range over traditional traffic sensors. Recently, increasing efforts in UAV-based traffic sensing have been made, and great progress has been achieved on the estimation of aggregated macroscopic traffic parameters. Compared to aggregated macroscopic traffic data, there has been extensive attention on higher-resolution traffic data such as microscopic traffic parameters and lane-level macroscopic traffic parameters since they can help deeply understand traffic patterns and individual vehicle behaviours. However, little existing research can automatically estimate microscopic traffic parameters and lane-level macroscopic traffic parameters using UAV videos with a moving background. In this study, an advanced framework is proposed to bridge the gap. Specifically, three functional modules consisting of multiple processing streams and the interconnections among them are carefully designed with the consideration of UAV video features and traffic flow characteristics. Experimental results on real-world UAV video data demonstrate promising performances of the framework in microscopic and lane-level macroscopic traffic parameters estimation. This research pushes off the boundaries of the applicability of UAVs and has an enormous potential to support advanced traffic sensing and management.

@article{ke2020advanced,
title={Advanced framework for microscopic and lane-level macroscopic traffic parameters estimation from UAV video},
author={Ke, Ruimin and Feng, Shuo and Cui, Zhiyong and Wang, Yinhai},
journal={IET Intelligent Transport Systems},
year={2020},
publisher={IET}
}
Real-Time Traffic Flow Parameter Estimation From UAV Video Based on Ensemble Classifier and Optical Flow
Ruimin Ke, Zhibin Li, Jinjun Tang, Zewen Pan, Yinhai Wang
IEEE Transactions on Intelligent Transportation Systems, 2019
abstract / bibtex / link / data

Recently, the availability of unmanned aerial vehicle (UAV) opens up new opportunities for smart transportation applications, such as automatic traffic data collection. In such a trend, detecting vehicles and extracting traffic parameters from UAV video in a fast and accurate manner is becoming crucial in many prospective applications. However, from the methodological perspective, several limitations have to be addressed before the actual implementation of UAV. This paper proposes a new and complete analysis framework for traffic flow parameter estimation from UAV video. This framework addresses the well-concerned issues on UAV's irregular ego-motion, low estimation accuracy in dense traffic situation, and high computational complexity by designing and integrating four stages. In the first two stages an ensemble classifier (Haar cascade + convolutional neural network) is developed for vehicle detection, and in the last two stages a robust traffic flow parameter estimation method is developed based on optical flow and traffic flow theory. The proposed ensemble classifier is demonstrated to outperform the state-of-the-art vehicle detectors that designed for UAV-based vehicle detection. Traffic flow parameter estimations in both free flow and congested traffic conditions are evaluated, and the results turn out to be very encouraging. The dataset with 20,000 image samples used in this study is publicly accessible for benchmarking at http://www.uwstarlab.org/research.html.

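A hedged sketch of the two detection stages described above: a Haar cascade proposes candidate vehicle windows, and a small CNN verifies each patch. The cascade file and CNN weights are hypothetical placeholders; the paper trains both components on its own UAV dataset.

# Hedged sketch of the cascade + CNN ensemble idea. The cascade XML and the CNN
# weights are hypothetical placeholders, not the paper's trained models.
import cv2
import numpy as np
import torch
import torch.nn as nn

cascade = cv2.CascadeClassifier("uav_vehicle_cascade.xml")   # hypothetical trained cascade

class PatchCNN(nn.Module):
    """Tiny binary classifier (vehicle / not vehicle) over 32x32 grayscale patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 8 * 8, 2),
        )
    def forward(self, x):
        return self.net(x)

verifier = PatchCNN().eval()   # placeholder weights; load trained weights in practice

def detect_vehicles(gray_frame, score_thr=0.5):
    """Stage 1: cascade proposals; Stage 2: CNN verification of each proposal."""
    candidates = cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=3)
    keep = []
    for (x, y, w, h) in candidates:
        patch = cv2.resize(gray_frame[y:y + h, x:x + w], (32, 32)).astype(np.float32) / 255.0
        logits = verifier(torch.from_numpy(patch)[None, None])
        if torch.softmax(logits, dim=1)[0, 1] >= score_thr:
            keep.append((x, y, w, h))
    return keep
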
@article{ke2018real,
title={Real-time traffic flow parameter estimation from UAV video based on ensemble classifier and optical flow},
author={Ke, Ruimin and Li, Zhibin and Tang, Jinjun and Pan, Zewen and Wang, Yinhai},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2018},
volume={20},
number={1},
pages={54-64},
publisher={IEEE}}
New Framework for Automatic Identification and Quantification of Freeway Bottlenecks Based on Wavelet Analysis
Ruimin Ke, Ziqiang Zeng, Ziyuan Pu, Yinhai Wang
Journal of Transportation Engineering, Part A: Systems, 2018
(Featured in the Editor's Choice Section of the journal)
abstract / bibtex / link

As the amount of traffic congestion continues to grow, pinpointing freeway bottleneck locations and quantifying their impacts are crucial activities for traffic management and control. Among the previous bottleneck identification methods, limitations still exist. The first key limitation is that they cannot determine precise breakdown durations at a bottleneck in an objective manner. Second, the input data often needs to be aggregated in an effort to ensure better robustness to noise, which will significantly reduce the time resolution. Wavelet transform, as a powerful and efficient data-processing tool, has already been implemented in some transportation application scenarios to much benefit. However, there is still a wide gap between existing preliminary explorations of wavelet analysis in transportation research and a completely automatic bottleneck identification framework. This paper addresses several key issues in existing bottleneck identification approaches and also fills a gap in transportation-related wavelet applications. The experimental results demonstrate that the proposed method is able to locate the most severe bottlenecks and comprehensively quantify their impacts.

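As a hedged illustration of the wavelet idea, the snippet below applies a continuous wavelet transform (PyWavelets) to a detector speed series and flags samples with anomalously high wavelet energy as candidate breakdown onset/recovery points. The wavelet, scale, and threshold are illustrative choices; the full framework's bottleneck localization and impact quantification are not reproduced here.

# Hedged sketch: flag abrupt speed drops/recoveries in a detector's speed series
# using a continuous wavelet transform (PyWavelets). The scale, wavelet, and
# threshold below are illustrative choices, not the paper's calibrated settings.
import numpy as np
import pywt

def breakdown_candidates(speed_mph, scale=8, wavelet="mexh", z_thresh=3.0):
    """Return sample indices where wavelet energy at the chosen scale is anomalously
    high -- candidate breakdown onset/recovery points around a bottleneck."""
    coef, _ = pywt.cwt(np.asarray(speed_mph, dtype=float), [scale], wavelet)
    energy = np.abs(coef[0])
    z = (energy - energy.mean()) / (energy.std() + 1e-9)
    return np.where(z > z_thresh)[0]

# Synthetic example: free flow at ~60 mph, a sudden breakdown to ~20 mph, then recovery.
t = np.arange(600)                       # e.g., 600 samples of 20-second speed data
speed = np.where((t > 200) & (t < 400), 20.0, 60.0) + np.random.normal(0, 1.5, t.size)
print(breakdown_candidates(speed))       # indices should cluster near samples 200 and 400
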
@article{doi:10.1061/JTEPBS.0000168,
author = {Ruimin Ke and Ziqiang Zeng and Ziyuan Pu and Yinhai Wang },
title = {New Framework for Automatic Identification and Quantification of Freeway Bottlenecks Based on Wavelet Analysis},
journal = {Journal of Transportation Engineering, Part A: Systems},
volume = {144},
number = {9},
pages = {04018044},
year = {2018},
doi = {10.1061/JTEPBS.0000168}
}
A Cost-effective Framework for Automated Vehicle-pedestrian Near-miss Detection through Onboard Monocular Vision
Ruimin Ke, Jerome Lutin, Jerry Spears, Yinhai Wang
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017
abstract / bibtex / link / video / media coverage

Onboard monocular cameras have been widely deployed in both public transit and personal vehicles. Obtaining vehicle-pedestrian near-miss event data from onboard monocular vision systems may be cost-effective compared with onboard multiple-sensor systems or traffic surveillance videos. But extracting near-misses from onboard monocular vision is challenging and little work has been published. This paper fills the gap by developing a framework to automatically detect vehicle-pedestrian near-misses through onboard monocular vision. The proposed framework can estimate depth and real-world motion information through monocular vision with a moving video background. The experimental results based on processing over 30-hours video data demonstrate the ability of the system to capture near-misses by comparison with the events logged by the Rosco/MobilEye Shield+ system which includes four cameras working cooperatively. The detection overlap rate reaches over 90% with the thresholds properly set.

@INPROCEEDINGS{8014858,
author={R. Ke and J. Lutin and J. Spears and Y. Wang},
booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
title={A Cost-Effective Framework for Automated Vehicle-Pedestrian Near-Miss Detection Through Onboard Monocular Vision},
year={2017},
pages={898-905},
doi={10.1109/CVPRW.2017.124},
ISSN={2160-7516},
month={July},}
Real-Time Bidirectional Traffic Flow Parameter Estimation From Aerial Videos
Ruimin Ke, Zhibin Li, Sung Kim, John Ash, Zhiyong Cui, Yinhai Wang
IEEE Transactions on Intelligent Transportation Systems, 2017
abstract / bibtex / link / demo

Unmanned aerial vehicles (UAVs) are gaining popularity in traffic monitoring due to their low cost, high flexibility, and wide view range. Traffic flow parameters such as speed, density, and volume extracted from UAV-based traffic videos are critical for traffic state estimation and traffic control and have recently received much attention from researchers. However, different from stationary surveillance videos, the camera platforms move with UAVs, and the background motion in aerial videos makes it very challenging to process for data extraction. To address this problem, a novel framework for real-time traffic flow parameter estimation from aerial videos is proposed. The proposed system identifies the directions of traffic streams and extracts traffic flow parameters of each traffic stream separately. Our method incorporates four steps that make use of the Kanade-Lucas-Tomasi (KLT) tracker, k-means clustering, connected graphs, and traffic flow theory. The KLT tracker and k-means clustering are used for interest-point-based motion analysis; then, four constraints are proposed to further determine the connectivity of interest points belonging to one traffic stream cluster. Finally, the average speed of a traffic stream as well as density and volume can be estimated using outputs from previous steps and reference markings. Our method was tested on five videos taken in very different scenarios. The experimental results show that in our case studies, the proposed method achieves about 96% and 87% accuracy in estimating average traffic stream speed and vehicle count, respectively. The method also achieves a fast processing speed that enables real-time traffic information estimation.

@ARTICLE{7546916,
author={R. Ke and Z. Li and S. Kim and J. Ash and Z. Cui and Y. Wang},
journal={IEEE Transactions on Intelligent Transportation Systems},
title={Real-Time Bidirectional Traffic Flow Parameter Estimation From Aerial Videos},
year={2017},
volume={18},
number={4},
pages={890-901},
doi={10.1109/TITS.2016.2595526},
ISSN={1524-9050},
month={April},}
Roadway surveillance video camera calibration using standard shipping container
Ruimin Ke, Zewen Pan, Ziyuan Pu, Yinhai Wang
IEEE International Smart Cities Conference, 2017
abstract / bibtex / link

Surveillance video cameras have been increasingly deployed on roadway networks providing important support for roadway management. While the information-rich video images are a valuable source of traffic data, these surveillance video cameras are typically designed for manual observation of roadway conditions and are not for automatic traffic data collection. The benefits of turning these surveillance cameras into data collection cameras are obvious, but collecting traffic data would normally require the development of a cost-effective method to efficiently and accurately calibrate surveillance video cameras. This paper proposes such a robust and efficient method that calibrates surveillance video cameras using standard shipping container as the reference object. The traditional camera calibration model can be simplified and camera parameters can be recovered with precise mathematical derivation. After solving for all the camera parameters, the 3D object world coordinates can be reconstructed from 2D image coordinates, thus enabling the collection of a variety of traffic data using surveillance video camera data.

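To illustrate the "known reference object" idea, the sketch below recovers a camera pose from four container corners with OpenCV's generic solvePnP, using the standard external dimensions of a 40 ft container. The pixel coordinates and intrinsics are placeholders, and the paper's own simplified closed-form derivation is not reproduced here.

# Hedged sketch: recover camera pose from the known dimensions of a standard
# 40 ft shipping container (~12.19 x 2.44 x 2.59 m) using OpenCV's solvePnP.
# The paper derives a simplified closed-form calibration; this generic PnP
# version only illustrates the reference-object idea. Pixel coordinates and
# intrinsics below are placeholders.
import numpy as np
import cv2

L, W, H = 12.19, 2.44, 2.59      # container length, width, height in meters
object_pts = np.array([          # four visible corners in container coordinates
    [0, 0, 0], [L, 0, 0], [L, 0, H], [0, 0, H],
], dtype=np.float32)
image_pts = np.array([           # the same corners clicked in the image (placeholders)
    [412, 533], [880, 540], [874, 391], [420, 386],
], dtype=np.float32)

fx = fy = 1200.0                 # assumed focal length in pixels
cx, cy = 960.0, 540.0            # assumed principal point (1920x1080 frame)
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
print("pose recovered:", ok)
print("camera position in container frame (m):", (-R.T @ tvec).ravel())
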
@INPROCEEDINGS{8090811,
author={R. Ke and Z. Pan and Z. Pu and Y. Wang},
booktitle={2017 International Smart Cities Conference (ISC2)},
title={Roadway surveillance video camera calibration using standard shipping container},
year={2017},
volume={},
number={},
pages={1-6},
doi={10.1109/ISC2.2017.8090811},
ISSN={},
month={Sept},}
Motion-vector clustering for traffic speed detection from UAV video
Ruimin Ke, Sung Kim, Zhibin Li, Yinhai Wang
IEEE First International Smart Cities Conference, 2015
abstract / bibtex / link

A novel method for detecting the average speed of traffic from non-stationary aerial video is presented. The method first extracts interest points from a pair of frames and performs interest point tracking with an optical flow algorithm. The output of the optical flow is a set of motion vectors which are k-means clustered in velocity space. The centers of the clusters correspond to the average velocities of traffic and the background, and are used to determine the speed of traffic relative to the background. The proposed method is tested on a 70-frame test sequence of UAV aerial video, and achieves an average error for speed estimates of less than 12%.

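A minimal OpenCV/scikit-learn sketch of the pipeline outlined above: track interest points with pyramidal Lucas-Kanade optical flow, k-means cluster the resulting motion vectors in velocity space, and read traffic speed relative to the background from the separation of the two cluster centers. The video path, cluster count, and pixel-to-meter scale are assumptions.

# Hedged sketch of the pipeline outlined above: interest points -> LK optical
# flow -> k-means of motion vectors in velocity space. The video path, cluster
# count, and meters-per-pixel factor are illustrative assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

cap = cv2.VideoCapture("uav_clip.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=800, qualityLevel=0.01, minDistance=7)

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)

vectors = (p1 - p0).reshape(-1, 2)[status.ravel() == 1]   # per-point motion (dx, dy)
centers = KMeans(n_clusters=2, n_init=10).fit(vectors).cluster_centers_

# With a moving UAV, one cluster is background ego-motion and the other is the
# traffic stream; their separation is traffic speed relative to the ground.
rel_px_per_frame = np.linalg.norm(centers[0] - centers[1])
meters_per_pixel, fps = 0.05, 30.0                          # assumed scale factors
print(f"estimated stream speed = {rel_px_per_frame * meters_per_pixel * fps * 3.6:.1f} km/h")
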
@INPROCEEDINGS{7366230,
author={R. Ke and S. Kim and Z. Li and Y. Wang},
booktitle={2015 IEEE First International Smart Cities Conference (ISC2)},
title={Motion-vector clustering for traffic speed detection from UAV video},
year={2015},
volume={},
number={},
pages={1-5},
doi={10.1109/ISC2.2015.7366230},
ISSN={},
month={Oct},}
Lane-changes prediction based on adaptive fuzzy neural network
Jinjun Tang, Fang Liu, Wenhui Zhang, Ruimin Ke, Yajie Zou
Expert Systems with Applications, 2018
abstract / bibtex / link

Lane changing maneuver is one of the most important driving behaviors. Unreasonable lane changes can cause serious collisions and consequent traffic delays. High precision prediction of lane changing intent is helpful for improving driving safety. In this study, by fusing information from vehicle sensors, a lane changing predictor based on Adaptive Fuzzy Neural Network (AFFN) is proposed to predict steering angles. The prediction model includes two parts: fuzzy neural network based on Takagi–Sugeno fuzzy inference, in which an improved Least Squares Estimator (LSE) is adopt to optimize parameters; adaptive learning algorithm to update membership functions and rule base. Experiments are conducted in the driving simulator under scenarios with different speed levels of lead vehicle: 60 km/h, 80 km/h and 100 km/h. Prediction results show that the proposed method is able to accurately follow steering angle patterns. Furthermore, comparison of prediction performance with several machine learning methods further verifies the learning ability of the AFNN. Finally, a sensibility analysis indicates heading angles and acceleration of vehicle are also important factors for predicting lane changing behavior.

@article{TANG2018452,
title = "Lane-changes prediction based on adaptive fuzzy neural network",
journal = "Expert Systems with Applications",
volume = "91",
pages = "452 - 463",
year = "2018",
issn = "0957-4174",
doi = "https://doi.org/10.1016/j.eswa.2017.09.025",
author = "Jinjun Tang and Fang Liu and Wenhui Zhang and Ruimin Ke and Yajie Zou",
}
Active Safety-Collision Warning Pilot in Washington State
Jerry Spears, Jerome Lutin, Yinhai Wang, Ruimin Ke, Steven Clancy
TRANSIT-IDEA Program Project Final Report, 2017
abstract / bibtex / link / media coverage

The Rosco/Mobileye Shield+ system is a collision avoidance warning system (CAWS) specifically designed for transit buses. This project involved field testing and evaluation of the CAWS in revenue service over a three-month period. The system provides alerts and warnings to the bus driver for the following conditions that could lead to a collision: 1) changing lanes without activating a turn signal, 2) exceeding posted speed limit, 3) monitoring headway with the vehicle leading the bus, 4) forward vehicle collision warning, and 5) pedestrian or cyclist collision warning in front of, or alongside the bus. Alerts and warnings are displayed to the driver by visual indicators located on the windshield and front pillars. Audible warnings are issued when collisions are imminent. Research objectives included: create a robust Rosco/Mobileye demonstration pilot for active/collision avoidance within the State of Washington on a minimum of 35 transit buses; determine the ease of retrofit of the existing fleet; develop a methodology for estimating the full costs savings of avoided collisions for each agency; develop a methodology and evaluation process for transit driver feedback and acceptance as well as bus passenger feedback; and provide detailed data and understanding on entrance barriers to this technology. The pilot test showed that although driver acceptance was mixed, there were large reductions in near-miss events for CAWS-equipped buses. Consequently, achieving driver acceptance will be a key factor in continued development and deployment of CAWS. As a result of comments received from the drivers, the vendor has begun a program to incorporate desired modifications to the system including reducing false positives. A second major factor in achieving industry acceptance is to demonstrate the business case for CAWS to both transit agencies and system developers. Although the pilot project produced encouraging results, collisions, injuries and fatalities can be considered rare events. A much larger in-service test will be needed to demonstrate actual cost-savings.

@Report{01643748,
author={Jerry Spears and Jerome Lutin and Yinhai Wang and Ruimin Ke and Steven Clancy},
journal={Transit IDEA Project},
title={Active Safety-Collision Warning Pilot in Washington State},
year={2017},
publisher={Transportation Research Board},
volume={},
number={82},
pages={1-33},
month={May},}
Digital Roadway Interactive Visualization and Evaluation Network Applications to WSDOT Operational Data Usage
Yinhai Wang, Ruimin Ke, Weibin Zhang, Zhiyong Cui, Kristian Henrickson
Washington State Department of Transportation (WSDOT) Research Report, 2016
abstract / link / video / website

DRIVE Net is a region-wide, Web-based transportation decision support system that adopts digital roadway maps as the base, and provides data layers for integrating and analyzing a variety of data sources (e.g., traffic sensors, incident records). Moreover, DRIVE Net offers a platform for streamlining transportation analysis and decision making, and it serves as a practical tool for visualizing historical observations spatially and temporally. In its current implementation, DRIVE Net demonstrates the potential to be used as a standard tool for incorporating multiple data sets from different fields and as a platform for real-time decision making. In comparison with the previous version, the new DRIVE Net system is now able to handle more complex computational tasks, perform large-scale spatial processing, and support data sharing services to provide a stable and interoperable platform to process, analyze, visualize, and share transportation data. DRIVE Net’s capabilities include generating statistics for WSDOT’s Gray Notebook (GNB), including travel times, throughput productivity, and traffic delay calculations for both general purpose and HOV lanes, each of which are important performance indicators in the WSDOT congestion report. The DRIVE Net system includes robust loop detector data processing and quality control methods to address the data quality issues impacting loop detectors throughout the state. The capabilities of the DRIVE Net system have been expanded to include safety modeling, hotspot identification, and incident induced delay estimation. Specifically, the Safety Performance module includes functions that can be used to obtain traffic incident frequency, apply predictive models to estimate the safety performance of road segments, and visualize and compare observed incident counts and different predictive models. Additionally, a module providing multi-modal data analysis and visualization capabilities was developed as a pilot experiment for integration of heterogeneous data. This module includes pedestrian and bicycle, public transit, park and ride, Car2Go, and ferry data downloading and visualization. DRIVE Net now offers role-based access control, such that access privileges to different functions and data resources can be assigned on a group or individual basis. The new system is able to support more complex analytics and decision support features on a large-scale transportation network, and is expected to be of great practical use for both traffic engineers and researchers. With a modular structure and mature data integration and management framework, DRIVE Net can be expanded in the future to include a variety of additional data resources and analytical capabilities.


Teaching
CIVL6961: Machine Learning in Transportation
As Instructor / Rensselaer Polytechnic Institute
Fall 2024
CIVL2030: Intro to Transportation Engineering
As Instructor / Rensselaer Polytechnic Institute
Spring 2024
SC5302: Smart Cities Design
As Instructor / University of Texas at El Paso
Spring 2023
CE5390: Special Topics in Civil Engineering - Intelligent Transportation Systems
As Instructor / University of Texas at El Paso
Fall 2022
CE4340: Transportation Engineering
As Instructor / University of Texas at El Paso
Spring 2023, Spring 2022
Computer Vision for Intelligent Transportation Systems
As Invited Lecturer / Connecting with Professionals event for the "18225 High Definition" Robotics Team
Lecture Video
Fall 2020
CET590: Traffic Simulation and System Operations
As Pre-Doctoral Instructor / University of Washington
Fall 2019


Research Group and Opportunities
Current Group Members
Talha Azfar, PhD Candidate - LinkedIn / Google Scholar
Kaicong Huang, PhD student (M.S. in Robotics from National University of Singapore)
Jiachen Ye, Undergraduate Research Assistant (Tsinghua University)
Alumni
Shuyang Li, Research Intern 2024 at RPI / Current Position: MS Student at University of Michigan
Chengyue Wang, MS RA 2022-2023 at UTEP / Current Position: PhD Student at University of Macau
Abiral Aashish, MS RA 2022-2023 at UTEP / Current Position: PhD Student at University of Florida
Swapnil Samant, PhD RA 2022-2023 at UTEP / Current Position: Associate Research Scientist at Texas A&M Transportation Institute
Leslie Nunez, Undergraduate RA 2023 at UTEP / Current Position: Civil Engineering Designer at FXSA
Alejandra Yanez, Undergraduate RA 2023 at UTEP / Current Position: Student at UTEP
Prospective Students
I am currently looking for 1-2 motivated PhD students to join our group! If you are interested in working with me at RPI, please email me your CV, transcripts, GRE scores (if available), and any other information you would like to share. You can also apply directly through the RPI CEE graduate programs and mention me in your application. I will try my best to respond to every applicant's email.
RPI students: (1) If you are already a graduate student at RPI, feel free to reach out. (2) I am also accepting undergraduate students for individual studies.