Sensor Fusion SLAM
Monocular Visual-Inertial SLAM for Mobile, Wearable, and Robotic Platforms.
Abstract
Developed a cross-platform integration of a lightweight monocular visual-inertial SLAM system:
- Targets mobile devices, self-driving vehicles, and smart glasses
- Supported across Android, iOS, Linux, Windows, macOS, and ROS
- Optimized for embedded hardware with real-time constraints and limited compute
- Designed to enable platform-specific deployment with minimal code duplication
Problem
Deploying a single SLAM solution across heterogeneous platforms involves major challenges:
- Sensor drivers and APIs differ drastically across OS/hardware
- Real-time performance tuning must consider device-specific limitations (e.g., ARM mobile SoCs vs. x86 vehicle platforms)
- Reusability and maintainability degrade without abstraction layers
- Debugging across heterogeneous targets (e.g., desktops, Android phones, embedded boards) is complex and time-consuming
Contribution
- Refactored and modularized the existing SLAM codebase to support multi-platform deployment
- Designed platform abstraction layers (sensors, IMU, camera input) for code portability and maintainability (see the interface sketch after this list)
- Ported the SLAM system to Android, iOS, and ROS with hardware-specific tuning for smart glasses and autonomous vehicles
- Integrated the SLAM pipeline into self-driving vehicle systems, with real-time pose feedback to navigation modules (a minimal ROS publishing sketch follows this list)
- Improved build/test automation for continuous integration across multiple target environments
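To illustrate the platform abstraction layer mentioned above, here is a minimal C++ sketch of how camera and IMU input can be hidden behind a common interface so the SLAM core never touches platform APIs. The type and class names (CameraFrame, ImuSample, SensorSource, SlamSystem) are illustrative assumptions, not the actual codebase.

```cpp
// Sketch of a sensor abstraction layer; names are hypothetical, not from the real code.
#include <cstdint>
#include <functional>
#include <memory>
#include <vector>

// Platform-agnostic data handed to the SLAM core.
struct CameraFrame {
    int64_t timestamp_ns;        // capture time in a common clock domain
    int width = 0, height = 0;
    std::vector<uint8_t> gray;   // 8-bit grayscale pixels, row-major
};

struct ImuSample {
    int64_t timestamp_ns;
    double accel[3];             // m/s^2
    double gyro[3];              // rad/s
};

// Each platform (Android Camera2/ASensor, iOS AVFoundation/CoreMotion, ROS topics, ...)
// implements this interface; the SLAM core depends only on the abstraction.
class SensorSource {
public:
    using FrameCallback = std::function<void(const CameraFrame&)>;
    using ImuCallback   = std::function<void(const ImuSample&)>;

    virtual ~SensorSource() = default;
    virtual void start(FrameCallback on_frame, ImuCallback on_imu) = 0;
    virtual void stop() = 0;
};

class SlamSystem {
public:
    void attach(std::unique_ptr<SensorSource> source) {
        source_ = std::move(source);
        source_->start(
            [this](const CameraFrame& f) { processFrame(f); },
            [this](const ImuSample& s)   { processImu(s); });
    }

private:
    void processFrame(const CameraFrame&) { /* feature tracking, keyframe logic */ }
    void processImu(const ImuSample&)     { /* IMU preintegration */ }
    std::unique_ptr<SensorSource> source_;
};
```

Keeping the data structs and callbacks platform-neutral is what lets the same tracking and optimization code build unchanged on Android, iOS, desktop, and ROS targets, with only the SensorSource implementation swapped per platform.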
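For the pose feedback to navigation modules, a minimal ROS 1 (roscpp) publisher might look like the sketch below. The node name, topic ("slam/pose"), frame id, and 30 Hz rate are assumptions for illustration; in the real pipeline the pose would come from the SLAM back-end rather than the placeholder identity pose used here.

```cpp
// Minimal sketch: publish SLAM pose estimates for downstream navigation over ROS 1.
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "vio_pose_publisher");
    ros::NodeHandle nh;
    ros::Publisher pose_pub =
        nh.advertise<geometry_msgs::PoseStamped>("slam/pose", 10);

    ros::Rate rate(30);  // roughly match the camera/tracking rate
    while (ros::ok()) {
        geometry_msgs::PoseStamped msg;
        msg.header.stamp = ros::Time::now();
        msg.header.frame_id = "map";
        msg.pose.orientation.w = 1.0;  // placeholder identity pose for the sketch
        pose_pub.publish(msg);

        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
```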
Result
- Successfully deployed the SLAM system across 6+ operating systems and multiple hardware targets
- Enabled real-time visual-inertial tracking on embedded platforms with limited compute and memory
- Delivered reliable pose tracking in smart glasses and self-driving vehicles under varied conditions
- Reduced per-platform maintenance burden through abstraction and shared code structure