Future autonomous systems will employ complex sensing, computation, and communication components for their perception, planning, control, and coordination, and will be expected to operate in highly dynamic and uncertain environments with safety and security assurance. To realize this vision, we have to better understand and address the challenges from the “unknowns” – the unexpected disturbances from component faults, environmental interference, and malicious attacks, as well as the inherent uncertainties in system inputs, model inaccuracies, and machine learning techniques (particularly those based on neural networks). In this work, we will discuss these challenges, propose our approaches to addressing them, and present some of the initial results. In particular, we will introduce a cross-layer framework for modeling and mitigating execution uncertainties (e.g., timing violations, soft errors) with the weakly-hard paradigm, quantitative and formal methods for ensuring safe and time-predictable application of neural networks in both perception and decision making, and safety-assured adaptation strategies in dynamic environments.
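As background on the weakly-hard paradigm mentioned above, the following minimal Python sketch (illustrative only, not the paper's framework; the function and variable names are assumptions) checks whether a trace of deadline hits and misses satisfies an (m, K) weakly-hard constraint, i.e., at most m deadline misses occur in any window of K consecutive job activations.

```python
def satisfies_weakly_hard(misses, m, k):
    """Check the (m, K) weakly-hard constraint: in every window of K
    consecutive job activations, at most m deadlines are missed.

    `misses` is a boolean sequence where True marks a missed deadline.
    """
    if len(misses) < k:
        # Not enough history yet to complete a full window of K jobs.
        return True
    return all(sum(misses[i:i + k]) <= m for i in range(len(misses) - k + 1))


# Example: at most 1 miss allowed in any window of 3 consecutive jobs.
trace_ok = [False, True, False, False, True, False]
print(satisfies_weakly_hard(trace_ok, m=1, k=3))   # True

trace_bad = [False, True, True, False]
print(satisfies_weakly_hard(trace_bad, m=1, k=3))  # False: 2 misses within one window of 3
```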