Affective Computing for Human Wellbeing

AI-driven Human-Computer Interaction systems

Resources

EMOVOME

The Emotional Voice Messages (EMOVOME) database is a speech dataset collected for emotion recognition in real-world conditions. It contains 999 spontaneous voice messages from 100 Spanish speakers, collected from real conversations on a messaging app. EMOVOME includes both expert and non-expert emotional annotations, covering valence and arousal dimensions, along with emotion categories for the expert annotations. Detailed participant information is provided, including sociodemographic data and personality trait assessments using the NEO-FFI questionnaire. Moreover, EMOVOME provides audio recordings of participants reading a given text, as well as transcriptions of all 999 voice messages. Additionally, baseline models for valence and arousal recognition are provided, using both the speech recordings and their transcriptions.
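Because EMOVOME provides both expert and non-expert valence/arousal ratings for the same messages, a natural first analysis is their agreement. A minimal sketch with pandas, using a synthetic table; the column names and values below are illustrative, not the actual EMOVOME schema:

```python
import pandas as pd

# Hypothetical layout: one row per voice message, with expert and
# non-expert valence ratings (columns are illustrative, not the
# actual EMOVOME file format).
annotations = pd.DataFrame({
    "message_id": [1, 2, 3, 4, 5],
    "valence_expert":     [0.8, -0.4, 0.1, -0.9, 0.5],
    "valence_non_expert": [0.7, -0.2, 0.0, -0.8, 0.6],
})

# Pearson correlation between the two annotation sources on valence.
agreement = annotations["valence_expert"].corr(annotations["valence_non_expert"])
print(f"expert vs. non-expert valence correlation: {agreement:.2f}")
```

The same pattern extends to arousal, or to comparing either annotation source against model predictions.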

https://arxiv.org/abs/2402.17496

License: Academic use, not-for-profit research.

Gómez-Zaragozá, L., del Amor, R., Vargas, E. P., Naranjo, V., Raya, M. A., & Marín-Morales, J. (2024). Emotional Voice Messages (EMOVOME) database: emotion recognition in spontaneous voice messages. arXiv preprint arXiv:2402.17496.

EDABE

We collected and published the Electrodermal Activity Artifact Correction Benchmark (EDABE) dataset, which includes raw electrodermal activity (EDA) signals together with manually corrected signals for use as ground truth. EDABE contains a total of 74.46 hours of EDA recordings affected by motion artifacts from 43 subjects, divided into a training set of 33 subjects (56.27 h) and a test set of 10 subjects (18.19 h).
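Pairing each raw, artifact-contaminated trace with a manually corrected ground truth means any correction method can be scored directly against EDABE. A minimal sketch of that evaluation with NumPy; the signals, sampling rate, and crude spike-removal "method" below are made up for illustration:

```python
import numpy as np

# Synthetic stand-ins: a clean "ground truth" EDA trace and a raw trace
# contaminated by a motion-artifact spike (EDABE provides real pairs).
t = np.linspace(0, 10, 1000)               # 10 s at 100 Hz (illustrative)
ground_truth = 2.0 + 0.1 * np.sin(0.5 * t)
raw = ground_truth.copy()
raw[400:450] += 1.5                        # simulated motion artifact

def rmse(a, b):
    """Root-mean-square error between two equally sampled signals."""
    return np.sqrt(np.mean((a - b) ** 2))

# A trivial stand-in for a real correction model: replace samples that
# deviate strongly from the median with the median itself.
baseline = np.median(raw)
corrected = raw.copy()
corrected[np.abs(raw - baseline) > 0.5] = baseline

print(f"raw RMSE:       {rmse(raw, ground_truth):.4f}")
print(f"corrected RMSE: {rmse(corrected, ground_truth):.4f}")
```

A real correction model (such as the LSTM-CNN trained on EDABE) would replace the median heuristic; the scoring against ground truth stays the same.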

https://data.mendeley.com/datasets/w8fxrg4pv5/2

License: Academic use, not-for-profit research.

Llanes-Jurado, J., Carrasco-Ribelles, L. A., Alcañiz, M., Soria-Olivas, E., & Marín-Morales, J. (2023). Automatic artifact recognition and correction for electrodermal activity based on LSTM-CNN models. Expert Systems with Applications, 230, 120581.

Stimuli set

Emotional Rooms

Emotional Rooms is a set of 360º stimuli developed for emotional elicitation in Virtual Reality. It includes four rooms modulated to elicit the four quadrants of the Circumplex Model of Affect.
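The Circumplex Model places affective states on two axes, valence and arousal, so each room targets one sign combination. A tiny helper mapping a (valence, arousal) point to its quadrant; the quadrant labels and example emotions are my own shorthand, not taken from the stimuli set:

```python
def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) point to its Circumplex quadrant.

    Axes are assumed to range over [-1, 1] with 0 as the neutral
    midpoint; labels are illustrative shorthand.
    """
    if valence >= 0 and arousal >= 0:
        return "high-arousal positive"   # e.g. joy, excitement
    if valence < 0 and arousal >= 0:
        return "high-arousal negative"   # e.g. anger, fear
    if valence < 0:
        return "low-arousal negative"    # e.g. sadness, boredom
    return "low-arousal positive"        # e.g. calm, relaxation

print(circumplex_quadrant(0.7, 0.8))    # → high-arousal positive
print(circumplex_quadrant(-0.6, -0.3))  # → low-arousal negative
```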

https://personales.upv.es/jamarmo/emotionalrooms/

License: Academic use, not-for-profit research.

Marin-Morales, J., Higuera-Trujillo, J. L., Greco, A., Guixeres, J., Llinares, C., Scilingo, E. P.,… & Valenza, G. (2018). Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors. Scientific reports, 8(1), 13657.

Code/Models

Electrodermal activity artifact correction

Description: A deep learning model based on LSTM and 1D-CNN layers that recognizes and corrects motion artifacts in electrodermal activity signals. It was trained on the EDABE dataset.
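The general shape of such a hybrid model is convolutional feature extraction followed by recurrent temporal modeling. A minimal sketch assuming PyTorch; the layer sizes, kernel widths, and window length are placeholders, not the published EDABE architecture (see the repository for the actual model):

```python
import torch
import torch.nn as nn

class EDACorrector(nn.Module):
    """Illustrative 1D-CNN + LSTM for EDA artifact correction.

    Input:  (batch, 1, time) raw EDA window.
    Output: (batch, 1, time) corrected EDA window.
    """

    def __init__(self, hidden: int = 32):
        super().__init__()
        # 1D convolutions extract local waveform features.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # The LSTM models longer-range temporal context.
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        # Per-timestep regression back to a single EDA value.
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)                    # (batch, 16, time)
        feats = feats.transpose(1, 2)          # (batch, time, 16)
        out, _ = self.lstm(feats)              # (batch, time, hidden)
        return self.head(out).transpose(1, 2)  # (batch, 1, time)

model = EDACorrector()
window = torch.randn(4, 1, 256)   # 4 windows of 256 samples each
corrected = model(window)
print(corrected.shape)            # torch.Size([4, 1, 256])
```

Training such a model on EDABE would minimize a reconstruction loss (e.g. MSE) between the model output and the manually corrected ground-truth signal.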

https://github.com/ASAPLableni/EDABE_LSTM_1DCNN

License: Academic use, not-for-profit research.

Llanes-Jurado, J., Carrasco-Ribelles, L. A., Alcañiz, M., Soria-Olivas, E., & Marín-Morales, J. (2023). Automatic artifact recognition and correction for electrodermal activity based on LSTM-CNN models. Expert Systems with Applications, 230, 120581.

Algorithms

Fixation identification in 3D immersive virtual reality.

Description: Fixation identification is an essential task in the extraction of relevant information from gaze patterns, and various algorithms are used in the identification process. However, the thresholds used in these algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject's head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points that belong to a fixation.
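The core of a dispersion-threshold (I-DT) method is a sliding window that grows while gaze dispersion stays below a threshold. A minimal sketch over 2D gaze samples; the VR-centred variant in the repository operates on gaze directions in 3D, and the threshold and window values here are illustrative, not the calibrated ones from the paper:

```python
def _dispersion(window):
    """Dispersion of a gaze window: (max_x - min_x) + (max_y - min_y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(points, dispersion_threshold=1.0, min_samples=4):
    """Dispersion-threshold (I-DT) fixation identification sketch.

    `points` is a list of (x, y) gaze samples at a fixed sampling rate.
    Returns (start, end) index pairs (end exclusive) of detected fixations.
    """
    fixations = []
    start = 0
    while start + min_samples <= len(points):
        end = start + min_samples
        if _dispersion(points[start:end]) <= dispersion_threshold:
            # Grow the window while dispersion stays under the threshold.
            while (end < len(points)
                   and _dispersion(points[start:end + 1]) <= dispersion_threshold):
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

# A fixation (tight cluster) followed by a saccade (spread-out samples).
gaze = [(0.0, 0.0), (0.1, 0.1), (0.05, 0.0), (0.1, 0.05),
        (5.0, 5.0), (9.0, 1.0), (2.0, 8.0), (7.0, 3.0)]
print(idt_fixations(gaze))   # → [(0, 4)]
```

Calibrating `dispersion_threshold` and the minimum window duration against features such as the number of fixations is precisely the problem the study addresses.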

https://github.com/ASAPLableni/VR-centred_I-DT_algorithm

License: Academic use, not-for-profit research.

Llanes-Jurado, J., Marín-Morales, J., Guixeres, J., & Alcañiz, M. (2020). Development and calibration of an eye-tracking fixation identification algorithm for immersive virtual reality. Sensors, 20(17), 4956.

Intelligent systems

Conversational virtual agent based on large language models.

This study created a chatbot to serve as the conversational engine behind a virtual human.
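LLM-driven conversational agents typically maintain a growing message history that is sent to the model on every turn. A minimal sketch of that loop with a stubbed reply function; the stub, the role names, and the system prompt are illustrative assumptions, not the actual interfaces of the repository:

```python
def generate_reply(history):
    """Stand-in for an LLM call: a real agent would send `history`
    to a language-model API and return its completion."""
    last_user = history[-1]["content"]
    return f"I hear you saying: {last_user}"

def chat_turn(history, user_text):
    """One turn of a history-based conversational loop."""
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system",
            "content": "You are an empathetic virtual human."}]
print(chat_turn(history, "I had a stressful day."))
print(len(history))   # system + user + assistant = 3
```

Keeping the full history in context is what lets the agent refer back to earlier turns when eliciting social emotions.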

https://github.com/ASAPLableni/LableniBOT

License: Academic use, not-for-profit research.

Llanes-Jurado, J., Gómez-Zaragozá, L., Minissi, M. E., Alcañiz, M., & Marín-Morales, J. (2024). Developing conversational Virtual Humans for social emotion elicitation based on large language models. Expert Systems with Applications, 246, 123261.