A few of the projects I worked on!

Smart Glasses for the visually and hearing impaired

Project Member: Romala Mishra

In this project, I developed a multimodal assistive system for individuals who are visually or hearing impaired by combining embedded hardware with deep learning-based visual perception. Using an Arduino Uno, I interfaced ultrasonic sensors to detect obstacles in three directions. These readings were transmitted to a Raspberry Pi, which issued real-time auditory alerts through a Bluetooth speaker, giving visually impaired users spatial awareness of their surroundings. To assist users who are deaf or hard of hearing, I deployed a CNN–Transformer gesture recognition model on the Raspberry Pi, which processed live camera input using OpenCV, TensorFlow, and Python to interpret sign language gestures. When a deaf user performs gestures in front of the camera (e.g., one mounted on the smart glasses), the system converts them into spoken output through the speaker, enabling communication with hearing individuals who do not understand sign language. The system demonstrates the potential of lightweight, real-time AI for enhancing accessibility through multimodal assistance.[Code].
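
Below is a minimal, hypothetical sketch of the gesture-to-speech loop: capture frames with OpenCV, classify them with a Keras model, and speak the predicted label with pyttsx3. The model path, label set, input size, and confidence threshold are illustrative assumptions, not the project's actual values.

```python
# Hypothetical sketch: capture frames, classify a gesture, speak the label.
import cv2
import numpy as np
import tensorflow as tf
import pyttsx3

MODEL_PATH = "gesture_cnn_transformer.h5"     # hypothetical exported model file
LABELS = ["hello", "thank_you", "yes", "no"]  # illustrative label set

model = tf.keras.models.load_model(MODEL_PATH)
tts = pyttsx3.init()
cap = cv2.VideoCapture(0)  # camera exposed as /dev/video0 on the Pi

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalise to the model's expected input (assumed 224x224 RGB).
    rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    x = np.expand_dims(rgb.astype("float32") / 255.0, axis=0)
    probs = model.predict(x, verbose=0)[0]
    if probs.max() > 0.8:  # only speak when the model is reasonably confident
        tts.say(LABELS[int(probs.argmax())])
        tts.runAndWait()

cap.release()
```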

Intelligent Ground Vehicle

Project Member: Romala Mishra

I developed a novel, lightweight U-Net-based deep learning model for accurate, real-time lane segmentation tailored to autonomous ground vehicle systems. The model was trained on lane annotations extracted from ROSBag files and optimized for deployment in real-world navigation scenarios. To ensure low-latency performance, I deployed the trained network on an NVIDIA Jetson Nano, enabling efficient on-board inference suitable for embedded robotic platforms. This work demonstrates the potential of compact semantic segmentation models in supporting autonomous navigation through precise lane perception.
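
The sketch below shows roughly what a lightweight U-Net for binary lane segmentation can look like in Keras. The layer widths, input resolution, and loss are illustrative choices, not the exact architecture trained for this project.

```python
# A minimal sketch of a lightweight U-Net for binary lane segmentation
# (layer sizes and input shape are illustrative, not the project's exact model).
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_lane_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)

    # Encoder: two downsampling stages keep the model small enough for a Jetson Nano.
    e1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling2D()(e1)
    e2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D()(e2)

    # Bottleneck
    b = conv_block(p2, 64)

    # Decoder with skip connections back to the encoder features.
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), e2])
    d2 = conv_block(u2, 32)
    u1 = layers.Concatenate()([layers.UpSampling2D()(d2), e1])
    d1 = conv_block(u1, 16)

    # Single-channel sigmoid output: per-pixel lane probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(d1)
    return Model(inputs, outputs)

model = build_lane_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```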

Vacbot - Autonomous Cleaning Bot

Project Members: Romala Mishra, Pratik Kumar Sahoo, Mrinal Misra

We developed an autonomous cleaning robot simulation equipped with SLAM (Simultaneous Localization and Mapping) to enable real-time area exploration and understanding of dynamic environments. The system utilized RRT (Rapidly-exploring Random Tree) for efficient global path planning, allowing the robot to systematically explore and map previously unmapped regions. To ensure safe and smooth navigation, I integrated MoveBase with local costmaps and a dynamic planner, enabling real-time, obstacle-aware path execution and collision-free movement throughout the environment. This project demonstrated the coordinated use of perception, planning, and control modules in a robotics simulation for autonomous navigation tasks.[Code].
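
As a rough illustration of the MoveBase integration, the snippet below sends a single navigation goal to the move_base action server in ROS 1 and waits for the result. The node name and goal coordinates are placeholders; in the project, goals came from the RRT-based exploration module.

```python
#!/usr/bin/env python
# Hypothetical sketch: send one navigation goal to move_base and wait for the result.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y, frame="map"):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # keep the current heading

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("explore_goal_sender")
    # Illustrative frontier point; the RRT explorer would supply these at runtime.
    send_goal(2.0, 1.5)
```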

Holonomic Art Bot

Project Members: Romala Mishra, Pratik Kumar Sahoo, Shantanu Panda, Monalisa Behera

We designed and implemented the controller logic for a 3-omni-wheel holonomic robot, enabling smooth and precise omnidirectional movement. The system extracted contour points from a given sketch, converting them into coordinate-based waypoints that defined the robot’s drawing trajectory. For global localization, an overhead camera was employed along with ArUco markers, allowing the robot to track its position in real-time. Using PID control and inverse kinematics, we computed individual wheel velocities to ensure accurate path following and faithful sketch rendering, demonstrating the integration of vision-based localization, motion planning, and control in a real-world robotic system.[Code].
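
A minimal sketch of the inverse kinematics step, assuming a standard 3-omni-wheel layout: it maps a desired body-frame velocity (vx, vy, omega) to individual wheel speeds. The wheel mounting angles and base radius are assumed values, not the actual robot's parameters.

```python
# Inverse kinematics sketch for a 3-omni-wheel holonomic base:
# body-frame twist (vx, vy, omega) -> linear speed of each wheel.
# Wheel angles and base radius L are assumptions, not the real robot's values.
import numpy as np

WHEEL_ANGLES = np.radians([90, 210, 330])  # wheel mounting angles around the chassis
L = 0.15  # distance from chassis centre to each wheel, in metres (assumed)

def wheel_speeds(vx, vy, omega):
    """Return the linear speeds of the three wheels for a desired body twist."""
    return np.array([
        -np.sin(a) * vx + np.cos(a) * vy + L * omega
        for a in WHEEL_ANGLES
    ])

# Example: translate along +x at 0.2 m/s while rotating slowly.
print(wheel_speeds(0.2, 0.0, 0.1))
```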

Mini Projects:


A list of small projects, each of which took less than a week to complete: