Abstract: In this paper, we study how to obtain uncertainty bounds for neural networks (NNs) that capture both output data noise and model uncertainty. We present a distribution-free algorithmic approach based on a specific network architecture and loss function. The loss function we propose is motivated by simple axioms on "good" uncertainty bounds. Our algorithm ensures that the obtained bounds are themselves representable as NNs, which makes it particularly relevant in settings where one wants to use the uncertainty bounds of a trained NN in a subsequent optimization problem to promote exploration. We experimentally study our approach in low-dimensional settings (1D and 2D) via simulations and benchmark it against several state-of-the-art methods. Finally, we give a brief outlook on the practical relevance of our approach for designing a novel preference elicitation algorithm for combinatorial auctions (CAs).