Deep neural networks can generalize beyond the observed training data. However, for some applications they may produce output that is known a priori to be invalid. If prior knowledge of valid output regions is available, one way of imposing constraints on a deep neural network is to introduce these priors into the loss function. In this paper, we introduce a novel way of constraining neural network output by using encoded regions together with a loss function based on gradient interpolation. We evaluate our method on a positioning task, where a region map is used to reduce invalid position estimates. Results show that our approach is effective at reducing invalid outputs in several geometrically complex environments.
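To make the idea concrete, the sketch below illustrates one possible form of such a region-constrained loss; it is an assumption-laden illustration, not the paper's implementation. It assumes a precomputed "invalidity map" that encodes the valid output regions (zero inside valid areas, growing with distance into invalid areas) and 2D position predictions expressed in the map's normalized [-1, 1] coordinates; the penalty is obtained by bilinearly interpolating the map at the predicted position, so its gradient pushes predictions toward valid regions.

```python
# Hypothetical sketch of a region-constrained loss (not the authors' code).
import torch
import torch.nn.functional as F

def region_constrained_loss(pred, target, invalidity_map, weight=1.0):
    """Task loss plus a differentiable penalty for predictions in invalid regions.

    pred, target:    (B, 2) positions in normalized [-1, 1] map coordinates (x, y)
    invalidity_map:  (1, 1, H, W) encoded region map; 0 where output is valid
    weight:          trade-off between the task loss and the region penalty
    """
    task_loss = F.mse_loss(pred, target)

    # Bilinear interpolation of the map at the predicted positions.
    # grid_sample is differentiable w.r.t. the sampling grid, so gradients
    # from the penalty steer predictions toward low-valued (valid) regions.
    grid = pred.view(1, -1, 1, 2)                       # (1, B, 1, 2)
    penalty = F.grid_sample(invalidity_map, grid,
                            mode='bilinear', align_corners=True)
    return task_loss + weight * penalty.mean()

# Toy example: a 64x64 map whose left half (x < 0) is invalid, with values
# growing with distance into the invalid region so the penalty has a gradient there.
xs = torch.linspace(-1.0, 1.0, 64)
invalidity_map = torch.clamp(-xs, min=0.0).expand(1, 1, 64, 64).contiguous()

pred = torch.tensor([[-0.5, 0.0]], requires_grad=True)  # prediction inside the invalid half
target = torch.tensor([[0.5, 0.0]])
loss = region_constrained_loss(pred, target, invalidity_map, weight=0.5)
loss.backward()  # pred.grad combines the task gradient and the region-penalty gradient
```

Note that a hard binary valid/invalid map would provide gradients only at region boundaries; encoding the map as a distance-like field, as in the toy example, is one way to keep the interpolated penalty informative away from the boundary.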