Most computer vision models are based on artificial intelligence (AI), in particular artificial neural networks. However, the outputs of these models are not always calibrated and do not indicate the uncertainty inherent in their predictions. In his paper "I Find Your Lack of Uncertainty in Computer Vision Disturbing," Dr. Matias Valdenegro draws attention to this problem, which is first and foremost a safety issue. For this work, he has now been awarded the Best Paper Award at the LatinX in CV Research Workshop, part of the annual Conference on Computer Vision and Pattern Recognition (CVPR), which took place in 2021 as a virtual event from June 19 to 25.
In his paper, the DFKI Robotics Innovation Center researcher conducts a meta-analysis of the existing computer vision literature and finds that many of the computer vision models in use today do not correctly quantify their uncertainty. This becomes a particular problem in applications that interact directly with humans. There, correct quantification of the uncertainty in the model's output is required in order to be aware of the model's limitations. For example, it would be desirable for the model to tell the user when it does not know the correct result, rather than providing an incorrect output. With his work, the researcher hopes to motivate the community to use machine learning models in computer vision that can estimate their own uncertainty and make it transparent. In addition, Dr. Valdenegro describes current and future challenges and provides the impetus for further important research in this area.
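To illustrate the idea of a model that signals when it does not know the answer: one widely used approach (not specific to this paper) is to combine the predictions of several models, or several stochastic forward passes, and treat their disagreement as an uncertainty estimate. The sketch below uses NumPy with invented logits for a hypothetical three-class classifier ensemble; the numbers and the ensemble size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(ensemble_logits):
    """Entropy of the mean class probability across ensemble members.

    High entropy means the members disagree or are individually
    unsure, i.e. the prediction should not be trusted blindly.
    """
    p = softmax(ensemble_logits).mean(axis=0)
    return float(-(p * np.log(p)).sum())

# Hypothetical logits from 5 ensemble members for two inputs:
# on the first input all members agree on class 0; on the second
# each member favors a different class.
confident_input = np.array([[4.0, 0.1, 0.2]] * 5)
ambiguous_input = np.array([[1.0, 0.9, 1.1],
                            [0.2, 1.5, 0.1],
                            [1.2, 0.1, 1.0],
                            [0.1, 1.1, 1.2],
                            [1.4, 0.2, 0.3]])

print("confident:", predictive_entropy(confident_input))  # low
print("ambiguous:", predictive_entropy(ambiguous_input))  # high
```

A system built this way can refuse to act, or defer to a human, whenever the entropy exceeds a threshold, which is exactly the kind of transparency about limitations the paper argues for.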
Matias Valdenegro's paper was honored as part of the LatinX in CV Research Workshop at the CVPR 2021 Conference, the leading conference in computer vision. The LatinX workshop series serves to promote and give visibility to the work of Latin American researchers who have been underrepresented in the machine learning and computer vision community.
Contact: Dr. Matias Valdenegro