Bayesian Inverse Graphics for Few-Shot Concept Learning
Luis Octavio Arriaga Camargo, Jichen Guo, Rebecca Adam, Sebastian Houben, Frank Kirchner
In International Conference on Neural-Symbolic Learning and Reasoning (NeSy 2024), pages 141-165. Springer, 2024.

Abstract:

Humans excel at building generalizations of new concepts from just a single example. In contrast, current computer vision models typically require large amounts of training samples to achieve comparable accuracy. In this work we present a Bayesian model of perception that learns from minimal data: a prototypical probabilistic program of an object. Specifically, we propose a generative inverse-graphics model of primitive shapes to infer posterior distributions over physically consistent parameters from one or several images. We show how this representation can be used for downstream tasks such as few-shot classification and pose estimation. Our model outperforms existing few-shot neural-only classification algorithms and generalizes across varying lighting conditions, backgrounds, and out-of-distribution shapes. By design, our model is uncertainty-aware: it uses our new differentiable renderer to optimize global scene parameters through gradient descent, samples posterior distributions over object parameters with Markov chain Monte Carlo (MCMC), and employs a neural-based likelihood function.
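To illustrate the MCMC step mentioned in the abstract, here is a minimal toy sketch, not the paper's implementation: the disk "renderer", the Gaussian pixel-wise likelihood, and the parameter names (center and radius) are our own simplifications standing in for the actual inverse-graphics model and neural likelihood. It samples a posterior over object parameters by comparing renders against an observed image with random-walk Metropolis:

```python
import numpy as np

rng = np.random.default_rng(0)

def render(params, size=16):
    # Toy "renderer": a binary image of a filled disk parameterized by
    # center (cx, cy) and radius r. Stands in for a real graphics renderer.
    cx, cy, r = params
    ys, xs = np.mgrid[0:size, 0:size]
    return ((xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2).astype(float)

def log_likelihood(params, observed, sigma=0.1):
    # Gaussian pixel-wise likelihood between rendered and observed images
    # (the paper uses a neural-based likelihood instead).
    diff = render(params) - observed
    return -0.5 * np.sum(diff ** 2) / sigma ** 2

def metropolis(observed, init, n_steps=2000, step=0.5):
    # Random-walk Metropolis over the object parameters (cx, cy, r).
    samples = []
    current = np.array(init, dtype=float)
    current_ll = log_likelihood(current, observed)
    for _ in range(n_steps):
        proposal = current + rng.normal(0.0, step, size=current.shape)
        proposal_ll = log_likelihood(proposal, observed)
        # Accept with probability min(1, exp(proposal_ll - current_ll)).
        if np.log(rng.uniform()) < proposal_ll - current_ll:
            current, current_ll = proposal, proposal_ll
        samples.append(current.copy())
    return np.array(samples)

observed = render((8.0, 8.0, 4.0))                 # synthetic target image
samples = metropolis(observed, init=(5.0, 5.0, 3.0))
posterior_mean = samples[len(samples) // 2:].mean(axis=0)  # discard burn-in
```

The posterior mean of the second half of the chain recovers the generating parameters up to the flat regions induced by the discrete pixel grid.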

Links:

https://link.springer.com/chapter/10.1007/978-3-031-71167-1_8
https://www.arxiv.org/abs/2409.08351
https://github.com/oarriaga/bayesian-inverse-graphics


© DFKI GmbH
last modified on 27.02.2023