
Constraining neural networks output by an interpolating loss function with region priors

Date
2020
Academic Conference
NeurIPS Workshop on Interpretable Inductive Biases and Physically Structured Learning (NeurIPS)
Authors
Hannes Bergkvist (Sony Europe B.V.)
Peter Exner (Sony Europe B.V.)
Paul Davidsson (Malmö University)
Research Areas
AI & Machine Learning

Abstract

Deep neural networks have the ability to generalize beyond observed training data. However, for some applications they may produce output that is known a priori to be invalid. If prior knowledge of valid output regions is available, one way of imposing constraints on deep neural networks is to introduce these priors in a loss function. In this paper, we introduce a novel way of constraining neural network output by using encoded regions with a loss function based on gradient interpolation. We evaluate our method on a positioning task, where a region map is used to reduce invalid position estimates. Results show that our approach is effective in decreasing invalid outputs for several geometrically complex environments.
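To make the idea concrete, the following is a minimal, hypothetical sketch of a region-prior penalty in the spirit described above (it is not the paper's implementation, and all names are illustrative). A binary map marks valid output regions; its distance transform yields a penalty surface that is zero inside valid regions and grows with distance outside them. Bilinearly interpolating that surface at the network's continuous position estimate gives a penalty that varies smoothly with the estimate, so it can be added to the task loss:

```python
import numpy as np

def distance_map(valid):
    """Brute-force Euclidean distance from each cell to the nearest valid cell.

    Fine for small illustrative maps; a real system would use an efficient
    distance transform.
    """
    ys, xs = np.nonzero(valid)
    pts = np.stack([ys, xs], axis=1).astype(float)
    h, w = valid.shape
    grid = np.stack(
        np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1
    ).astype(float)
    # (h, w, n_valid) pairwise distances, reduced to the nearest valid cell
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=-1)

def bilinear(pmap, y, x):
    """Bilinearly interpolate the penalty map at a continuous (y, x) position."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, pmap.shape[0] - 1), min(x0 + 1, pmap.shape[1] - 1)
    fy, fx = y - y0, x - x0
    top = (1 - fx) * pmap[y0, x0] + fx * pmap[y0, x1]
    bot = (1 - fx) * pmap[y1, x0] + fx * pmap[y1, x1]
    return (1 - fy) * top + fy * bot

def region_loss(pred, target, pmap, lam=0.1):
    """Task loss (squared error) plus the interpolated region penalty."""
    mse = float(np.sum((pred - target) ** 2))
    return mse + lam * bilinear(pmap, pred[0], pred[1])

valid = np.zeros((8, 8), dtype=bool)
valid[2:6, 2:6] = True          # one rectangular valid region
pmap = distance_map(valid)

# Penalty is zero for an estimate inside the valid region,
# positive for one outside it.
inside = region_loss(np.array([3.0, 3.0]), np.array([3.0, 3.0]), pmap)
outside = region_loss(np.array([0.0, 0.0]), np.array([0.0, 0.0]), pmap)
print(inside, outside)
```

Because the interpolated penalty is piecewise smooth in the predicted coordinates, gradient-based training can push estimates out of invalid areas; in an autograd framework the same interpolation would be written with differentiable operations.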