Spacecraft and drones aimed at exploring our solar system are designed to operate in conditions where the smart use of onboard resources can determine the success or failure of the mission. Sensorimotor actions are thus often derived from high-level, quantifiable optimality principles assigned to each task, using well-established tools from optimal control theory. This Review discusses the main results obtained in training end-to-end neural architectures for the guidance and control of interplanetary transfers, planetary landings, and close-proximity operations, highlighting how the underlying neural models successfully learn these optimality principles.
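As an illustration only (not taken from the Review), the basic recipe behind such end-to-end guidance-and-control networks can be sketched as supervised imitation of an optimal feedback policy: a small feedforward network is trained to map spacecraft states to the controls found along precomputed optimal trajectories. The state and control dimensions, network sizes, and the randomly generated dataset below are placeholder assumptions; in practice the training pairs would come from an optimal control solver.

```python
# Minimal sketch: behavioral cloning of an optimal control policy (assumed setup).
import torch
import torch.nn as nn

STATE_DIM, CONTROL_DIM = 7, 3  # e.g. position, velocity, mass -> thrust vector (assumed sizes)

# Placeholder data standing in for (state, optimal control) pairs sampled
# from trajectories produced by an optimal control solver.
states = torch.randn(4096, STATE_DIM)
controls = torch.randn(4096, CONTROL_DIM)

# Small feedforward policy network mapping state to control.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, CONTROL_DIM),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    # Regress the network's output onto the optimal control at each sampled state.
    loss = loss_fn(policy(states), controls)
    loss.backward()
    optimizer.step()
```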