Boston, MA
Robotics Engineer | Turning Perception & AI Into Human-Centered Robotic Solutions
I'm a robotics and perception engineer pursuing an M.S. in Robotics at Worcester Polytechnic Institute (expected May 2026), with a passion for building intelligent robots that see, think, and act.
With hands-on experience in end-to-end perception and control systems for mobile and manipulation robots, I specialize in bridging the gap between research algorithms and real-world prototypes. My work spans computer vision, sensor fusion, deep learning, and embedded deployment on platforms like Jetson.
I work best where hardware meets algorithms and perception drives autonomy, on projects with real-world impact.
If it involves robots sensing, deciding, or acting, I want to build it!
2D/3D vision, object detection, pose estimation, photogrammetry, depth fusion
Deep RL, Imitation Learning, CNNs, Transformers
Visual-Inertial Odometry, EKF/IEKF, SLAM
Kinematics, trajectory planning, computed-torque control
ROS2, Gazebo, Jetson, MuJoCo, Blender, Docker
Python, C/C++, PyTorch, TensorFlow, OpenCV
Built a comprehensive perception system with multi-camera calibration, depth fusion, and object recognition for a bi-manual mobile nurse assistant. Achieved <5% pose estimation error for medical objects.
Developed a marker-less pose estimator for medical objects using a vision-only pipeline with multi-view fusion. Eliminated physical markers, reducing setup time by ~30%.
Implemented DDPG and A3C from scratch for manipulation tasks. Built an imitation learning pipeline for peg-in-hole tasks using action-chunking transformers.
Created a >10,000-frame synthetic vision+IMU dataset. Implemented an MSCKF and a deep fusion network, achieving <5% trajectory recovery error on benchmarks.
Built a simulation and control stack for non-verbal gesture control. Achieved >90% gesture detection accuracy and reduced adaptation time by ~40%.
Developed a 12 FPS dashboard with lane detection, object tracking, pedestrian pose estimation, and collision prediction for Tesla Model S video streams.
Worcester, Massachusetts, USA
August 2024 – May 2026 (Expected)
CGPA: 4.00/4.00
Coursework: Robot Dynamics, Human–Robot Interaction, Computer Vision, Machine Learning for Robotics
Award: Dr. Glenn Yee Graduate Student Tuition Award
Bengaluru, Karnataka, India
November 2020 – July 2024
CGPA: 8.90/10.0
Relevant Coursework: Mathematics for Machine Learning, Deep Learning, Control Engineering
Leadership: President – Coding Club; Student Placement Coordinator; Head of Machine Learning division (Quantum Computing Research)
I'm always open to discussing robotics, research opportunities, or collaborations.