Image-Based Relative Navigation for Autonomous Space Exploration
Status: Active
Start Date: 2023-08-15
End Date: 2027-12-27
Description: Development of autonomous systems for deep space exploration, Rendezvous, Proximity Operations and Docking (RPOD), and On-Orbit Servicing (OOS) has been of growing interest to NASA, as shown by the latest decadal survey and the Space Technology Mission Directorate’s (STMD) Strategic Framework [1, 2]. These systems rely heavily on autonomous navigation capabilities centered around Optical Navigation (OPNAV), the use of resolved bodies in imagery for navigation purposes. Unlike traditional navigation techniques, OPNAV can recover a full pose estimate from a lightweight and cost-effective digital camera, a key benefit in deep space travel where communications may be limited. Through developments in image processing, pose estimation, and implementation, OPNAV has already played a significant role in furthering space exploration through missions such as OSIRIS-REx and Artemis I [3, 4]. The need for autonomous navigation, and therefore OPNAV, will only increase as the space exploration focus shifts from Earth-orbiting missions to cislunar and deep space applications with their inherent communication delays.
OPNAV is inextricably linked to the field of Relative Navigation (RelNav), a branch of navigation that determines the spacecraft’s pose relative to another spacecraft, body, or terrain feature. With techniques such as limb-scanning and surface mapping, NASA has already utilized OPNAV and RelNav on prominent missions. Most notably, the Artemis program has used OPNAV to navigate within cislunar space and image the lunar surface [4]. While this technology has flown on past missions, OPNAV and RelNav still face limitations in image processing and pose estimation optimality that must be explored further. Spacecraft image processing and feature detection are often difficult due to harsh illumination conditions, deployment anomalies, and more.
Classical and learned detection methods utilize models of a nominally deployed spacecraft to determine features. On a partially illuminated spacecraft, large shadowed regions are hard to predict and typically confuse feature detection methods. Similarly, features on a spacecraft experiencing deployment anomalies, such as incomplete deployment of solar panels or antennas, may be unrecognizable. This technology gap has been recognized by the European Space Agency (ESA) through the Satellite Pose Estimation Challenge (SPEC), yet the results indicate that even the top algorithms in the competition struggled when tested on real space imagery [5]. This challenge, alongside ongoing research, shows promise for both deep learning and analytical solutions to pose estimation using feature descriptors, yet the lack of consensus on methodology and the limited performance on real space imagery further highlight the technology gap within both pose estimation and feature detection. Bridging this gap is integral for future missions. For instance, future Artemis missions may utilize emerging RelNav technology in conjunction with traditional OPNAV to dock with spacecraft such as Gateway. Beyond Artemis, future components of the Mars Sample Return (MSR) campaign must perform autonomous navigation and RPOD operations around Mars to collect and return Martian samples [6]; MSR’s operations may benefit significantly from evolving OPNAV and RelNav technology. Developing efficient and robust image processing and pose estimation capabilities will not only enable improved navigation and docking operations for upcoming missions like Artemis and MSR, but also bring this technology one step closer to truly autonomous systems. As a Ph.D. student at the Georgia Institute of Technology (GT), I am interested in investigating the intersection of OPNAV and RelNav and its image processing, pose estimation, and implementation in navigation filtering.
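To make the pose-estimation step above concrete, the sketch below recovers a relative attitude and position from matched 3D feature points using the classical Kabsch (orthogonal Procrustes) solution. This is an illustrative minimal example, not the project's method: it assumes feature correspondence has already been solved (the hard part under harsh illumination and deployment anomalies), and all variable names are hypothetical.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Recover rotation R and translation t such that observed ≈ R @ model + t,
    given matched 3D feature points (Kabsch / orthogonal Procrustes)."""
    # Center both point sets on their centroids
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    M = (observed_pts - mu_o).T @ (model_pts - mu_m)
    U, _, Vt = np.linalg.svd(M)
    # Correct for a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = mu_o - R @ mu_m
    return R, t

# Synthetic check: features on a model spacecraft, rotated and translated
rng = np.random.default_rng(0)
model = rng.normal(size=(20, 3))          # hypothetical feature locations
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
observed = model @ R_true.T + t_true

R_est, t_est = estimate_pose(model, observed)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

In a navigation filter, an estimate like this would serve as a pose measurement to be fused with other sensing; real systems must also handle outlier correspondences (e.g., via RANSAC) and 2D-to-3D geometry rather than the idealized 3D-to-3D matching shown here.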
This topic will not only advance the autonomous navigation needed for upcoming NASA missions, but also further NASA’s Strategic Thrusts as laid out in the Strategic Framework. Additionally, it will equip me with relevant skills for future navigation work at NASA Johnson Space Center as a current Pathways Program Intern and future full-time engineer. Lastly, this investigation will benefit from the expertise of Dr. John Christian (PI), as OPNAV and RelNav research greatly aligns with his research lab, the Space Exploration and Analysis Laboratory (SEAL) at GT.
Lead Organization: Georgia Institute of Technology-Main Campus