Publications

SmartDashCam: Automatic Live Calibration for DashCams

 Gopi Krishna Tummala, Tanmoy Das, Prasun Sinha and Rajiv Ramnath
 18th International Conference on Information Processing in Sensor Networks (IPSN), colocated with CPS-IoT Week, 2019

Dashboard camera installations are becoming increasingly common due to the various Advanced Driver Assistance System (ADAS) services they provide. Though deployed primarily for crash recording, calibrating these cameras allows them to measure real-world distances, which can enable a broad spectrum of ADAS applications such as lane detection, safe driving-distance estimation, collision prediction, and collision prevention. Today, dashboard camera calibration is a tedious manual process that requires a trained professional to place a known pattern (e.g., chessboard-like) at a calibrated distance. In this paper, we propose SmartDashCam, a system for automatic, live calibration of dashboard cameras that continuously ensures highly accurate calibration values. SmartDashCam collects images of the large number of vehicles appearing in front of the camera and uses their coarse geometric shapes to derive the calibration parameters. In sharp contrast to the manual process, we propose using a large amount of data and machine-learning techniques to arrive at calibration accuracies comparable to the manual process. SmartDashCam, implemented using commodity dashboard cameras, estimates real-world distances with a mean error of 5.7%, which closely rivals the 4.1% mean error obtained from traditional manual calibration using known patterns.
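To illustrate why calibration enables distance measurement, here is a minimal sketch of the flat-road pinhole model commonly used with forward-facing cameras: once the focal length, mounting height, and horizon line are known, the distance to a point on the road follows from its vertical pixel coordinate. All names and the simplified model are illustrative assumptions, not SmartDashCam's actual formulation.

```python
def ground_distance(v_pixel, f_pixels, cam_height_m, horizon_v):
    """Estimate the distance (in meters) to a ground point imaged at
    vertical pixel coordinate v_pixel, assuming a flat road and a
    pinhole camera with focal length f_pixels (in pixels), mounted
    cam_height_m above the road, with the horizon at row horizon_v."""
    dv = v_pixel - horizon_v  # pixels below the horizon line
    if dv <= 0:
        raise ValueError("point must lie below the horizon")
    # Similar triangles: distance scales as height * focal / offset.
    return cam_height_m * f_pixels / dv
```

For example, with a 1000-pixel focal length and a camera mounted 1.2 m high, a road point imaged 100 pixels below the horizon is estimated at 12 m. Errors in any calibration parameter propagate directly into this estimate, which is why accurate (and live) calibration matters.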

AutoCalib: Automatic Traffic Camera Calibration at Scale

 Gopi Krishna Tummala, Romil Bhardwaj, Ganesan Ramalingam, Ramachandran Ramjee, and Prasun Sinha
 ACM Transactions on Sensor Networks (TOSN), 2018

Emerging smart cities are typically equipped with thousands of outdoor cameras. However, these cameras are usually not calibrated, i.e., information such as their precise mounting height and orientation is not available. Calibrating these cameras allows measurement of real-world distances from the video, thereby enabling a wide range of novel applications such as identifying speeding vehicles and city road planning. Unfortunately, robust camera calibration is a manual process today and is not scalable. In this paper, we propose AutoCalib, a system for scalable, automatic calibration of traffic cameras. AutoCalib exploits deep learning to extract selected key-point features from car images in the video and uses a novel filtering and aggregation algorithm to automatically produce a robust estimate of the camera calibration parameters from just hundreds of samples. We have implemented AutoCalib as a service on Azure that takes in a video segment and computes the camera calibration parameters. Using video from real-world traffic cameras, we show that AutoCalib is able to estimate real-world distances with an error of less than 12%.
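The filtering-and-aggregation idea can be sketched simply: each observed vehicle yields a noisy per-sample calibration estimate, and a robust aggregate is formed by discarding estimates far from the median before averaging. This is a hedged simplification; the tolerance parameter and function names are illustrative, not AutoCalib's actual algorithm.

```python
import statistics

def aggregate_calibrations(estimates, spread_tol=0.15):
    """Robustly aggregate noisy per-vehicle calibration estimates
    (e.g., focal length in pixels). Estimates further than
    spread_tol * median from the median are treated as outliers;
    the survivors are averaged."""
    med = statistics.median(estimates)
    inliers = [e for e in estimates if abs(e - med) <= spread_tol * med]
    return sum(inliers) / len(inliers)
```

With hundreds of samples, a median-based filter like this suppresses gross outliers (mis-detected keypoints, occluded vehicles) that would skew a plain mean.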

AutoCalib: Automatic Traffic Camera Calibration at Scale

 Gopi Krishna Tummala, Romil Bhardwaj, Ganesan Ramalingam, Ramachandran Ramjee, and Prasun Sinha
 Proceedings of the 4th ACM International Conference on Systems for Energy-Efficient Built Environments, 2017

Emerging smart cities are typically equipped with thousands of outdoor cameras. However, these cameras are usually not calibrated, i.e., information such as their precise mounting height and orientation is not available. Calibrating these cameras allows measurement of real-world distances from the video, thereby enabling a wide range of novel applications such as identifying speeding vehicles and city road planning. Unfortunately, robust camera calibration is a manual process today and is not scalable. In this paper, we propose AutoCalib, a system for scalable, automatic calibration of traffic cameras. AutoCalib exploits deep learning to extract selected key-point features from car images in the video and uses a novel filtering and aggregation algorithm to automatically produce a robust estimate of the camera calibration parameters from just hundreds of samples. We have implemented AutoCalib as a service on Azure that takes in a video segment and computes the camera calibration parameters. Using video from real-world traffic cameras, we show that AutoCalib is able to estimate real-world distances with an error of less than 12%.

Live View of On-Road Vehicular Information

 Gopi Krishna Tummala, Dong Li, and Prasun Sinha
 14th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), 2017

Inter-vehicular communication (IVC) can be leveraged to enhance collaborative vehicular applications such as traffic statistics, safety through accident prediction and prevention, and energy-efficient route planning. These applications require a live map of vehicles associated with their communication identities (e.g., IP/MAC addresses). This is particularly challenging to achieve in the presence of legacy vehicles, which might not have any sensing or IVC capabilities. Additionally, vehicles may have diverse sensing capabilities and conflicting estimates of the parameters of surrounding vehicles. We present RoadView, a system that builds a live map of surrounding vehicles by intelligently fusing the local maps created by individual vehicles. RoadView runs on top of existing local vehicular matching (LM) systems such as Foresight [10] or RoadMap [20], and is the first work to provide a live map of vehicles by leveraging collaboration across vehicles. Our simulations show that, across different adoption rates and traffic densities, RoadView robustly fuses information from a collection of local maps, enabling vehicles to sense 1.8x as many immediate neighboring vehicles on average compared to state-of-the-art LM algorithms.
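The fusion step can be sketched as merging per-vehicle local maps keyed by vehicle identity, with conflicting position estimates reconciled by confidence-weighted averaging. This is a deliberately simplified stand-in, assuming a flat map representation; RoadView's actual fusion handles richer conflicts and identities.

```python
def fuse_local_maps(local_maps):
    """Fuse a list of per-vehicle local maps into one global view.
    Each local map is {vehicle_id: (x, y, confidence)}. Conflicting
    estimates of the same vehicle are merged by confidence-weighted
    averaging; confidences accumulate."""
    fused = {}
    for lm in local_maps:
        for vid, (x, y, w) in lm.items():
            if vid in fused:
                fx, fy, fw = fused[vid]
                tw = fw + w
                fused[vid] = ((fx * fw + x * w) / tw,
                              (fy * fw + y * w) / tw, tw)
            else:
                fused[vid] = (x, y, w)
    return fused
```

Because each contributing vehicle may see only a few neighbors, the fused map covers more vehicles than any single local map, which is the intuition behind the 1.8x neighbor-sensing gain reported above.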

Soft-Swipe: Enabling High-Accuracy Pairing of Vehicles to Lanes using COTS

 Gopi Krishna Tummala, Derrick Cobb, Prasun Sinha and Rajiv Ramnath
 First ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services (colocated with ACM MobiCom), 2016

Proximity-based interactions underlie applications such as retail payments using smartphone apps like Apple Pay and Google Wallet, automated grocery store checkout, and vehicular transactions for toll payments. In these applications, a transaction takes place between two objects when they come close to each other, with relative proximity determining the pairing. In this paper, we present a new approach to enable highly accurate pairing of vehicles to specific lanes in a wide range of vehicle-based multi-lane service stations using general-purpose commodity communication and sensing technology. To evaluate its performance, we consider an example application of pairing vehicles to respective quality-check bays in an automobile manufacturing plant. Our proposed system, called Soft-Swipe, works by matching natural signatures (specifically, motion signatures) generated by the target object with the same signature detected by simple instrumentation of the environment (a video camera or an inexpensive sensor array). Soft-Swipe, implemented in a vehicle testing station, performed pairing with a median F-score of 96% using the vision-only system, 92% using the sensor-only system, and 99% using both.
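The motion-signature matching at Soft-Swipe's core can be sketched as correlating two speed-over-time traces: one generated by the vehicle itself and one observed by the environment's camera or sensor array, with the vehicle paired to the lane whose observed signature correlates best. The normalized-correlation formulation below is a generic illustrative choice, not necessarily the paper's exact metric.

```python
import math

def signature_similarity(sig_a, sig_b):
    """Normalized (Pearson-style) correlation between two equal-length
    motion signatures, e.g., speed-over-time traces captured by
    different sensing modalities. Returns a value in [-1, 1]; pairing
    assigns each vehicle to the lane with the highest score."""
    n = len(sig_a)
    ma, mb = sum(sig_a) / n, sum(sig_b) / n
    da = [a - ma for a in sig_a]
    db = [b - mb for b in sig_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

Normalization makes the score invariant to sensor gain, so a camera-derived trace and an accelerometer-derived trace of the same motion still correlate strongly.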

RoadMap: Mapping Vehicles to IP Addresses using Motion Signatures

 Gopi Krishna Tummala, Dong Li, Prasun Sinha
 First ACM International Workshop on Smart, Autonomous, and Connected Vehicular Systems and Services (colocated with ACM MobiCom), 2016

Inter-vehicular communication (IVC) can be used to enhance the sensing region of vehicles for improved safety on the roads. Many IVC-based applications require the relative locations and communication identities (e.g., IP addresses) of other collaborating vehicles for accurate identification. This is particularly challenging to achieve in the presence of legacy vehicles, which may not have any sensing or IVC capabilities. We present RoadMap, a system that matches IP addresses with the respective vehicles observed through a camera. It assumes adopted vehicles carry a smartphone or dashboard camera, used to identify vehicles in the field of view (FoV), along with IVC capability. Running in the adopted vehicles, RoadMap accurately matches information obtained through multiple sensing modalities (e.g., visual and electronic) by matching the motion trajectories of vehicles observed from the dashboard camera with the motion trajectories transmitted by other vehicles. To the best of our knowledge, RoadMap is the first work to explore motion trajectories of vehicles observed from a camera to create a map of vehicles by smartly fusing electronic and visual information. It has low hardware requirements and is designed to work in low-adoption-rate scenarios. In real-world experiments and simulations, RoadMap matches IP addresses with camera-observed vehicles with a median matching precision of 80%, a 20% improvement over existing schemes.
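The trajectory-matching idea can be sketched as an assignment problem: each camera-observed track is paired with the IVC-reported trajectory (keyed by IP address) that it most closely follows. The greedy, 1-D-trajectory version below is a simplified illustration under assumed data layouts, not RoadMap's actual matcher.

```python
def match_tracks(camera_tracks, reported_tracks):
    """Greedily match camera-observed trajectories to trajectories
    reported over IVC. camera_tracks: {track_id: [pos, ...]};
    reported_tracks: {ip_address: [pos, ...]} with equal-length,
    time-aligned samples. Returns {track_id: ip_address}."""
    def cost(t1, t2):
        # Mean per-sample distance between two aligned trajectories.
        return sum(abs(a - b) for a, b in zip(t1, t2)) / len(t1)

    pairs, used = {}, set()
    for cam_id, ct in camera_tracks.items():
        best = min((ip for ip in reported_tracks if ip not in used),
                   key=lambda ip: cost(ct, reported_tracks[ip]),
                   default=None)
        if best is not None:
            pairs[cam_id] = best
            used.add(best)
    return pairs
```

A production matcher would use an optimal assignment (e.g., Hungarian algorithm) and tolerate missing samples, but the greedy sketch conveys why distinctive motion makes the pairing unambiguous.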