Simulation · Apr – May 2025

Autonomous Vehicle Visualization Dashboard

A real-time dashboard that overlays lane detections, object tracks, pedestrian poses, and collision predictions on Tesla Model S video streams at 12 FPS.

YOLO · RAFT · OpenPose · Blender · PyTorch · Python · Optical Flow

Overview

This project builds a comprehensive autonomous vehicle perception visualization dashboard. The system processes dashcam video streams and overlays rich perception outputs — detected lanes, classified vehicles, pedestrian skeletons, and predicted collision zones — at 12 frames per second.

Approach

Object Detection & Classification: YOLO handles vehicle and pedestrian detection with 3D bounding box estimation for vehicle classification and orientation. Detected objects are tracked across frames using IoU-based association.
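The cross-frame association step can be sketched as greedy IoU matching. This is a minimal illustration, assuming axis-aligned `[x1, y1, x2, y2]` boxes; the function names and the 0.3 threshold are illustrative, not the project's actual code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thresh=0.3):
    """Greedy matching: pair each track with its best-overlapping
    detection above the threshold; leftovers spawn new tracks."""
    matches, unmatched = [], list(range(len(detections)))
    for t_idx, track in enumerate(tracks):
        best_iou, best_d = iou_thresh, None
        for d_idx in unmatched:
            score = iou(track, detections[d_idx])
            if score > best_iou:
                best_iou, best_d = score, d_idx
        if best_d is not None:
            matches.append((t_idx, best_d))
            unmatched.remove(best_d)
    return matches, unmatched
```

Greedy matching is O(tracks × detections) and good enough at dashcam object counts; a Hungarian solver would give globally optimal assignments at slightly higher cost.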

Motion Segmentation: RAFT optical flow is used to segment moving objects from the static background, enabling dynamic collision zones to be computed based on relative motion trajectories.
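The flow-to-mask step can be sketched as thresholding residual flow magnitude. This is a simplified stand-in for the pipeline's segmentation, assuming a dense RAFT flow field of shape (H, W, 2); subtracting the median flow is a crude ego-motion compensation, and the threshold value is illustrative:

```python
import numpy as np

def segment_moving(flow, thresh=2.0):
    """Split a dense flow field (H, W, 2) into moving vs. static pixels.

    The median flow approximates the dominant (background/ego) motion;
    pixels whose residual magnitude exceeds the threshold are flagged
    as independently moving objects.
    """
    background = np.median(flow.reshape(-1, 2), axis=0)
    residual = flow - background
    magnitude = np.linalg.norm(residual, axis=-1)
    return magnitude > thresh  # boolean (H, W) moving-object mask
```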

Pedestrian Pose Estimation: OpenPose provides full 2D skeleton estimation for detected pedestrians, enabling pose-based intent prediction (e.g., about to cross the road vs. walking parallel to it).
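One way such an intent cue could work is to track the lateral motion of the hip midpoint toward the roadway. The sketch below is a hypothetical heuristic, not the project's classifier: it assumes COCO-style keypoint indexing (11/12 as left/right hip), and `road_x` and the speed threshold are made-up parameters:

```python
def crossing_intent(keypoints_prev, keypoints_curr, road_x, fps=12):
    """Rough pose-based crossing cue from two consecutive frames.

    keypoints_*: sequences of (x, y) joints; indices 11/12 are assumed
    to be the hips (COCO convention - an assumption, not confirmed).
    Returns True when the hip midpoint moves toward road_x (the lane's
    horizontal image coordinate) faster than a tuned threshold.
    """
    def hip_mid(kps):
        (lx, ly), (rx, ry) = kps[11], kps[12]
        return ((lx + rx) / 2.0, (ly + ry) / 2.0)

    (x0, _), (x1, _) = hip_mid(keypoints_prev), hip_mid(keypoints_curr)
    lateral_speed = (x1 - x0) * fps           # pixels per second
    toward_road = (road_x - x1) * lateral_speed > 0
    return toward_road and abs(lateral_speed) > 30.0
```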

Visualization & Replay: All overlays are rendered at 12 FPS in real time. Annotated sequences are also exported to Blender for frame-accurate review and analysis.
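Holding a steady 12 FPS means pacing the render loop to a fixed frame budget. A minimal sketch of that pacing, using only the standard library (the generator name and reset policy are illustrative):

```python
import time

TARGET_FPS = 12
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~83 ms per frame

def paced_frames(frames, clock=time.monotonic, sleep=time.sleep):
    """Yield frames no faster than TARGET_FPS, sleeping off any
    leftover budget so overlays render at a steady cadence."""
    next_deadline = clock()
    for frame in frames:
        yield frame                      # draw overlays, present, etc.
        next_deadline += FRAME_BUDGET
        remaining = next_deadline - clock()
        if remaining > 0:
            sleep(remaining)
        else:
            next_deadline = clock()      # fell behind: reset, don't burst
```

Resetting the deadline when a frame overruns avoids a burst of catch-up frames after a slow inference step, trading momentary latency for a visually even cadence.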

Results

- 12 FPS — Real-time overlay
- 3D — Vehicle classification
- Full — Collision prediction

The pipeline runs at a consistent 12 FPS on GPU hardware with all perception modules active simultaneously. The multi-modal overlay (lanes + objects + poses + collision zones) provides significantly richer scene understanding than any single-model baseline.

Media

🎥 Demo video and project images coming soon.