Overview
In summer 2021, I worked in the Olin College Crowdsourcing and Machine-Learning (OCCaM) Lab
under advisor Paul Ruvolo. OCCaM Lab co-designs apps with and for the blind and visually impaired (B/VI) community.
The lab's flagship app, Clew, uses Apple's ARKit to guide B/VI users back along a path they previously walked with a sighted guide.
My project focused on Invisible Map, a prototype navigation app that extends Clew to support
full indoor navigation without a sighted guide. The idea: a building owner creates a single map of
the building, which can then be shared with B/VI users for independent navigation.
By the end of the summer, we had delivered a reliable proof-of-concept demo,
improving the app's SLAM accuracy and showing Invisible Map's potential to expand autonomy
for B/VI users.
Challenges
Invisible Map's core challenge was a Simultaneous Localization and Mapping (SLAM) problem:
while someone records a map, the app must track their position within the building while
simultaneously constructing the map itself.
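In practice, the app fuses ARKit's frame-to-frame motion estimates with detections of AprilTags mounted around the building. A standard way to formalize this, shown here as a generic pose-graph sketch rather than the lab's exact backend, is to minimize a weighted sum of measurement errors:

```latex
\min_{x_{1:T},\,\ell_{1:M}}\;
\sum_{t=1}^{T-1} \big\| f(x_t, x_{t+1}) - u_t \big\|^2_{\Omega_t}
\;+\; \sum_{(t,j)} \big\| h(x_t, \ell_j) - z_{t,j} \big\|^2_{\Omega_{t,j}}
```

Here the x_t are the phone's poses over time, the ℓ_j are AprilTag poses, u_t are odometry increments, z_{t,j} are tag detections, and the Ω matrices weight each measurement by its estimated reliability. Drift accumulates in the odometry terms; bad detections corrupt the tag terms. Two failure modes dominated in practice: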
- Drift - accumulating sensor error degraded pose estimates over time.
- Outliers in AprilTag detection - a single misread tag could skew a map, sometimes by meters (one standard mitigation is sketched after this list).
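A common mitigation for such outliers, and the intuition behind our weighting work, is a robust kernel that down-weights measurements with large residuals. Below is a minimal Swift sketch using Huber-style weights; `TagEdge` is a hypothetical stand-in for the graph's edge type, not the lab's actual data structures:

```swift
import Foundation

/// Hypothetical stand-in for a tag-observation edge in the pose graph.
struct TagEdge {
    var residualMeters: Double  // how far this detection disagrees with the current solution
    var weight: Double = 1.0    // information scaling applied by the solver
}

/// Huber weight: full weight for small residuals, decaying weight beyond
/// `delta`, so one wildly misread tag cannot drag the whole map by meters.
func huberWeight(residual: Double, delta: Double = 1.0) -> Double {
    let r = abs(residual)
    return r <= delta ? 1.0 : delta / r
}

/// Reweight all tag edges before re-solving the graph
/// (iteratively reweighted least squares).
func reweight(_ edges: inout [TagEdge]) {
    for i in edges.indices {
        edges[i].weight = huberWeight(residual: edges[i].residualMeters)
    }
}
```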
Results
Over the summer, my project partner and I transformed Invisible Map from
a struggling prototype into a working proof-of-concept demo.
With improved weighting strategies and LiDAR-enhanced AprilTag detection (sketched below),
the maps we produced were significantly more accurate and robust.
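The LiDAR enhancement can be illustrated as a depth cross-check: before accepting a detection, compare the tag's estimated distance against what the device's depth map reports along the same ray. This is a sketch under assumed names (`tagCenterPixel`, `estimatedTagDistance`, and the tolerance are placeholders; the real pipeline differs), and it requires an ARKit configuration with `.sceneDepth` frame semantics on a LiDAR device:

```swift
import ARKit

/// Cross-check an AprilTag detection against the LiDAR depth map.
/// `tagCenterPixel` is assumed to already be in depth-map pixel
/// coordinates (the depth map is lower resolution than the camera image).
func isDetectionPlausible(frame: ARFrame,
                          tagCenterPixel: CGPoint,
                          estimatedTagDistance: Float,
                          toleranceMeters: Float = 0.3) -> Bool {
    // No LiDAR depth available: fall back to accepting the detection.
    guard let depthMap = frame.sceneDepth?.depthMap else { return true }
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let x = min(max(Int(tagCenterPixel.x), 0), width - 1)
    let y = min(max(Int(tagCenterPixel.y), 0), height - 1)

    // Depth map pixels are Float32 meters along the camera ray.
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    let base = CVPixelBufferGetBaseAddress(depthMap)!
    let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
    let lidarDepth = row[x]

    // Reject detections whose estimated pose disagrees with what LiDAR sees.
    return abs(lidarDepth - estimatedTagDistance) <= toleranceMeters
}
```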
Future Work
We also began tackling path optimization. The existing "breadcrumb" approach
required users to retrace the map creator's path exactly, leading to inefficient routes.
Our contribution was a LiDAR-based algorithm to detect whether two points
are connected by a continuous floor, with no walls between them.
This raycasting method (sketched below) showed promise for detecting overlapping regions
and enabling shortcuts, though it remained a standalone demo for future researchers
to integrate into Invisible Map.
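The core idea can be sketched with ARKit raycasts: sample points along the segment between two candidate locations, confirm each has a horizontal surface directly below it, and confirm no vertical surface interrupts the straight line between them. The code below is an illustration of the technique rather than the demo itself; it assumes horizontal and vertical plane detection are running, and it simplifies the wall test:

```swift
import ARKit
import simd

/// Sample points along the segment from `a` to `b` (world coordinates) and
/// cast a ray straight down at each one. If every ray hits a horizontal
/// plane and no vertical plane blocks the line of sight, the two points
/// are likely connected by a continuous floor.
func connectedByFloor(_ a: simd_float3, _ b: simd_float3,
                      session: ARSession, samples: Int = 20) -> Bool {
    for i in 0...samples {
        let t = Float(i) / Float(samples)
        let p = a + t * (b - a)
        // Cast downward from slightly above the sampled point.
        let floorQuery = ARRaycastQuery(origin: p + simd_float3(0, 0.5, 0),
                                        direction: simd_float3(0, -1, 0),
                                        allowing: .existingPlaneGeometry,
                                        alignment: .horizontal)
        if session.raycast(floorQuery).isEmpty { return false } // gap: no floor here
    }
    // Simplified wall test: does any vertical plane sit on the segment?
    let dir = simd_normalize(b - a)
    let maxDist = simd_length(b - a)
    let wallQuery = ARRaycastQuery(origin: a, direction: dir,
                                   allowing: .existingPlaneGeometry,
                                   alignment: .vertical)
    let blocked = session.raycast(wallQuery).contains { result in
        let hit = simd_float3(result.worldTransform.columns.3.x,
                              result.worldTransform.columns.3.y,
                              result.worldTransform.columns.3.z)
        return simd_length(hit - a) < maxDist
    }
    return !blocked
}
```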