Autonomy

Predicting Out-of-Distribution Performance of Deep Neural Networks Using Model Conformance

With the growing interest in using Deep Neural Networks (DNNs) in safety-critical cyber-physical systems, such as autonomous vehicles, providing assurance about the safe deployment of these models becomes ever more important. Safe deployment of deep learning models in the real world, where inputs can differ from the models' training environment, requires characterizing both their performance and the uncertainty of their predictions, particularly on novel and out-of-distribution (OOD) inputs. This has motivated the development of methods to predict the accuracy of a DNN in novel environments (i.e., environments unseen during training). These methods, however, assume access to some labeled data from the novel environment, which is unrealistic in many real-world settings. We propose an approach for predicting the accuracy of a DNN classifier under a shift from its training distribution without assuming access to labels for the inputs drawn from the shifted distribution. We demonstrate the efficacy of the proposed approach on two autonomous driving datasets: the GTSRB dataset for image classification, and the ONCE dataset with synchronized LiDAR and camera feeds used for object detection. We show that the proposed approach is applicable to predicting accuracy across different input modalities (camera images and LiDAR point clouds).
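To make the problem setting concrete, the sketch below illustrates one common label-free accuracy estimator (average thresholded confidence) as a baseline for this task; it is not the conformance-based method proposed here, and the softmax arrays and validation labels are hypothetical inputs assumed for illustration.

```python
import numpy as np

def estimate_accuracy_without_labels(probs_shifted, probs_val, labels_val):
    """Estimate classifier accuracy on an unlabeled, distribution-shifted set.

    Baseline idea (average thresholded confidence): pick a confidence
    threshold on a labeled in-distribution validation set so that the
    fraction of validation points above the threshold matches the
    validation accuracy, then report the fraction of shifted points
    whose confidence exceeds that threshold.
    """
    # Maximum softmax confidence per example.
    conf_val = probs_val.max(axis=1)
    conf_shifted = probs_shifted.max(axis=1)

    # Accuracy on the held-out (in-distribution) validation set.
    val_acc = (probs_val.argmax(axis=1) == labels_val).mean()

    # Threshold t chosen so that P(conf_val > t) is approximately val_acc.
    t = np.quantile(conf_val, 1.0 - val_acc)

    # Predicted accuracy on the shifted set: fraction above the threshold.
    return (conf_shifted > t).mean()
```

Such baselines rely only on the classifier's softmax outputs; the approach described in this work instead targets the same label-free setting while also handling non-image modalities such as LiDAR point clouds.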