Auditive emotion recognition for empathic AI-assistants
In Proceedings of the Sixteenth International Conference on Advances in Human-oriented and Personalized Mechanisms, Technologies, and Services (CENTRIC 2023), 13-17 November 2023, Valencia, Spain. Springer, November 2023.
This paper briefly introduces the project “AudEeKA”, whose aim is to use speech and other biosignals for emotion recognition to improve remote, but also direct, healthcare. The article examines the use cases, goals, and challenges of researching and implementing a possible solution. To gain additional insights, the main goal of the project is divided into multiple sub-goals, namely speech emotion recognition, stress detection and classification, and emotion detection from physiological signals. Similar projects are also considered, and project-specific requirements stemming from the use cases are introduced. Possible pitfalls and difficulties are outlined, most of which are associated with datasets; others emerge from the requirements, their accompanying restrictions, and first analyses in the area of speech emotion recognition, which are briefly presented and discussed. In addition, first approaches to solutions for every sub-goal, including the use of continual learning, and a draft of the planned architecture for the envisioned system are presented. This draft offers a possible solution for combining all sub-goals while reaching the main goal of a multimodal emotion recognition system.
Keywords: emotion recognition, speech emotion recognition, multimodal emotion recognition, continual learning