Affective Computing for Human Wellbeing

Artificial Intelligence for understanding and enhancing human wellbeing

Areas

Affective Computing

Affective Computing (AfC) is a field of study that focuses on developing systems capable of automatically recognizing, modeling, and expressing emotions. Proposed by Rosalind Picard in 1997, AfC is an interdisciplinary area that integrates psychology, computer science, data science, and biomedical engineering. The field has emerged from the recognition that emotions underlie processes such as perception, decision-making, creativity, memory, and social interaction. Numerous studies have focused on finding reliable methodologies to identify an individual's emotional state using machine learning algorithms.

AfC has become a significant research topic and has found substantial applications in health and wellbeing. For instance, in mental health, systems using AfC have been developed to monitor and treat disorders such as depression and anxiety by analyzing physiological and behavioral signals to detect emotional changes. In wellbeing, AfC has been used in the design of mobile applications and wearable devices that help individuals manage their stress and improve their emotional wellbeing through biofeedback techniques and continuous monitoring of their emotional state. Additionally, in the hospital context, AfC systems have been implemented to enhance the patient experience by adjusting the hospital environment (such as lighting and music) based on the patient's emotional state, promoting faster recovery and a more comfortable stay.

In this context, we use the term "affect" in its broadest sense, which includes emotions, moods, interpersonal stances, attitudes, and affective personality traits. This breadth makes it possible to address a wider range of emotional phenomena and their impact on different areas of human life.

Main papers:

Marín-Morales, J., Higuera-Trujillo, J. L., Greco, A., Guixeres, J., Llinares, C., Scilingo, E. P., … & Valenza, G. (2018). Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors. Scientific Reports, 8(1), 13657.

Marín-Morales, J., Llinares, C., Guixeres, J., & Alcañiz, M. (2020). Emotion recognition in immersive virtual reality: From statistics to affective computing. Sensors, 20(18), 5163.

Marín-Morales, J., Higuera-Trujillo, J. L., Greco, A., Guixeres, J., Llinares, C., Gentili, C., … & Valenza, G. (2019). Real vs. immersive-virtual emotional experience: Analysis of psycho-physiological patterns in a free exploration of an art museum. PLoS ONE, 14(10), e0223881.

Intelligent Systems

In the field of Affective Computing, both the analysis and the synthesis of affect are considered, as the two play equally important roles. The elicitation of behaviors is key: for example, to recognize whether a person is introverted or extroverted, we must place them in a situation or task that elicits behavioral patterns in which the differences between the two groups become observable.

Historically, passive stimuli such as images, videos, and sounds have been used to elicit emotions in laboratory settings so that the resulting responses can be measured. In recent years, however, virtual reality has enabled the recreation of more immersive environments, improving the simulation of real-life settings in the laboratory.

Recently, advances in artificial intelligence have enabled the creation of intelligent systems, such as virtual humans, that act as active stimuli in the elicitation of emotions. These stimuli are active because the participant can interact with them directly, which could revolutionize methods for eliciting human behaviors in laboratory settings. Additionally, there is a line of work dedicated to creating computational models that guide these intelligent systems to simulate humans in an ecologically valid way and elicit authentic human interactions in controlled environments.
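As a sketch of the idea, and under the assumption of a generic chat-style language model backend, the loop below alternates participant input with agent replies under an emotion-elicitation persona. Here query_llm is a hypothetical placeholder rather than any real API, and the system prompt is purely illustrative.

    # Minimal sketch of an LLM-driven virtual human for social emotion elicitation.
    # query_llm() is a hypothetical stand-in: plug in any real LLM client here.

    def query_llm(messages: list[dict]) -> str:
        """Hypothetical LLM call; replace with an actual model or API client."""
        raise NotImplementedError("connect a real language model backend here")

    SYSTEM_PROMPT = (
        "You are a virtual human taking part in a laboratory study. "
        "Hold a natural conversation while gently steering it toward "
        "personally meaningful topics that elicit genuine emotional responses."
    )

    def run_session(max_turns: int = 10) -> list[dict]:
        history = [{"role": "system", "content": SYSTEM_PROMPT}]
        for _ in range(max_turns):
            user_utterance = input("Participant: ")
            history.append({"role": "user", "content": user_utterance})
            reply = query_llm(history)            # agent's next utterance
            history.append({"role": "assistant", "content": reply})
            print(f"Virtual human: {reply}")
        return history                            # transcript for later annotation

    if __name__ == "__main__":
        run_session()

The full conversation history is returned so that transcripts can later be annotated and paired with the implicit measures described in the next area.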

Main papers:

Llanes-Jurado, J., Gómez-Zaragozá, L., Minissi, M. E., Alcañiz, M., & Marín-Morales, J. (2024). Developing conversational Virtual Humans for social emotion elicitation based on large language models. Expert Systems with Applications, 246, 123261.

Behaviour Measurement and Signal Processing

Once different stimuli have been recreated, whether passive, active, or real-life situations, it is essential to measure human responses to them. For this purpose, there are explicit and implicit measures. Historically, psychology has relied on explicit measures such as self-assessment questionnaires. These can be biased by factors such as social desirability, distortions in self-perception, and the subjectivity of the questions posed.

To complement these measures, implicit measures drawn from biomedical engineering can be used. These are divided into physiological and behavioral measures. Physiological measures include EEG, ECG, EDA, and fNIRS, while behavioral measures include voice (both prosodic and verbal), facial emotion recognition, body tracking, and eye tracking.

All these measures present a high degree of complexity and require rigorous processing to clean them of possible artifacts, as well as modeling to obtain the most useful information possible. Signal processing in the field of Affective Computing is crucial to ensure that the data obtained is accurate and relevant, allowing for the proper interpretation of the subjects’ emotional and behavioral responses.
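As a simple illustration of this kind of preprocessing, the sketch below uses SciPy to low-pass filter an electrodermal activity (EDA) signal and attenuate high-frequency artifacts. The 32 Hz sampling rate and 1 Hz cut-off are illustrative assumptions that would need to be tuned to the actual device and protocol; real pipelines, such as the LSTM-CNN approach cited below, are considerably more sophisticated.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def clean_eda(signal: np.ndarray, fs: float = 32.0, cutoff: float = 1.0) -> np.ndarray:
        """Low-pass filter an EDA signal to attenuate high-frequency artifacts.

        fs and cutoff are illustrative values: EDA is a slow signal, so a
        cut-off around 1 Hz is a common conservative choice, but both
        parameters should be adapted to the recording at hand.
        """
        b, a = butter(4, cutoff, btype="low", fs=fs)  # 4th-order Butterworth
        return filtfilt(b, a, signal)                 # zero-phase, no time shift

    # Usage: a synthetic 60 s recording with additive high-frequency noise.
    fs = 32.0
    t = np.arange(0, 60, 1 / fs)
    raw = 2 + 0.5 * np.sin(2 * np.pi * 0.05 * t) + 0.2 * np.random.randn(t.size)
    clean = clean_eda(raw, fs=fs)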

Main papers:

Llanes-Jurado, J., Marín-Morales, J., Guixeres, J., & Alcañiz, M. (2020). Development and calibration of an eye-tracking fixation identification algorithm for immersive virtual reality. Sensors, 20(17), 4956.

Llanes-Jurado, J., Carrasco-Ribelles, L. A., Alcañiz, M., Soria-Olivas, E., & Marín-Morales, J. (2023). Automatic artifact recognition and correction for electrodermal activity based on LSTM-CNN models. Expert Systems with Applications, 230, 120581.

Multimodal Machine Learning

Once we have measured the user, it is necessary to build models of the subjects' psychological states and traits in order to gain knowledge from their responses. For this purpose, we use both statistical machine learning, which builds models from previously engineered features, and deep learning, which builds end-to-end architectures directly from the signals.

In statistical machine learning, interpretability is crucial to provide as much information as possible about how the model works and thus generate knowledge. In this field, we study methods such as SHAP (SHapley Additive exPlanations) values and other interpretability techniques. These methods help break down and understand the model's decisions, facilitating the identification of the most influential features and providing clear and coherent explanations to end users.
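As a minimal sketch of this workflow, assuming the shap Python library together with scikit-learn, the example below trains a toy classifier on synthetic data and ranks features by mean absolute SHAP value. The feature names and the binary arousal label are illustrative assumptions, not data from any of our studies.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-ins for physiological features (names are assumptions).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy "high arousal" label
    feature_names = ["heart_rate", "eda_level", "hrv"]

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Model-agnostic explainer over the model's predictions.
    explainer = shap.Explainer(model.predict, X)
    explanation = explainer(X)                      # per-sample attributions

    # Mean absolute SHAP value per feature as a global importance ranking.
    importance = np.abs(explanation.values).mean(axis=0)
    for name, score in sorted(zip(feature_names, importance), key=lambda p: -p[1]):
        print(f"{name}: {score:.3f}")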

On the other hand, deep learning has revolutionized the way signals are processed, allowing, for example, the use of pre-trained models in speech emotion recognition, which have significantly improved performance on numerous problems. This is where transfer learning plays a fundamental role: it allows leveraging models pre-trained on large datasets and adapting them to specific tasks with smaller amounts of labeled data. This not only accelerates model development but also improves accuracy and efficiency, especially in contexts where labeled data is limited or costly to obtain.
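A minimal sketch of this strategy in PyTorch, assuming the Hugging Face transformers library and the public facebook/wav2vec2-base checkpoint (downloaded on first use), is shown below; the four-emotion classification head is an illustrative assumption.

    import torch
    import torch.nn as nn
    from transformers import Wav2Vec2Model

    class SpeechEmotionClassifier(nn.Module):
        """Frozen pre-trained speech encoder plus a small trainable emotion head."""

        def __init__(self, n_emotions: int = 4):
            super().__init__()
            self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
            for p in self.encoder.parameters():   # freeze the backbone so that
                p.requires_grad = False           # only the head is trained
            self.head = nn.Linear(self.encoder.config.hidden_size, n_emotions)

        def forward(self, waveform: torch.Tensor) -> torch.Tensor:
            # waveform: (batch, samples) of 16 kHz mono audio in [-1, 1]
            hidden = self.encoder(waveform).last_hidden_state  # (batch, time, dim)
            pooled = hidden.mean(dim=1)                        # average over time
            return self.head(pooled)                           # emotion logits

    # Usage: one second of dummy audio through the model.
    model = SpeechEmotionClassifier()
    logits = model(torch.randn(1, 16000))

Freezing the backbone keeps the number of trainable parameters small, which is precisely what makes the approach attractive when labeled emotional speech is scarce.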

Finally, it is important to consider multimodality. One of the biggest challenges with multimodal data is summarizing information from multiple modalities in a way that exploits complementary information and filters out redundant information. The heterogeneity of the data raises natural challenges such as different types of noise, alignment of modalities, and the handling of missing data. The effective integration of data from diverse sources, such as audio, video, and physiological signals, allows for a richer and more comprehensive understanding of the user's affective states. Advances in deep learning have facilitated the development of models that can learn joint representations of multiple modalities, improving the accuracy and robustness of predictions in affective computing.
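A minimal PyTorch sketch of one common strategy, encoding each modality separately and concatenating the embeddings into a joint representation, is shown below; the three modalities and their feature dimensions are illustrative assumptions.

    import torch
    import torch.nn as nn

    class LateFusionModel(nn.Module):
        """Encode each modality separately, then fuse embeddings for prediction."""

        def __init__(self, audio_dim=128, video_dim=256, physio_dim=16,
                     embed_dim=64, n_classes=2):
            super().__init__()
            # One small encoder per modality maps it into a shared embedding size.
            self.audio_enc = nn.Sequential(nn.Linear(audio_dim, embed_dim), nn.ReLU())
            self.video_enc = nn.Sequential(nn.Linear(video_dim, embed_dim), nn.ReLU())
            self.physio_enc = nn.Sequential(nn.Linear(physio_dim, embed_dim), nn.ReLU())
            # The classifier sees the concatenated (fused) joint representation.
            self.classifier = nn.Linear(3 * embed_dim, n_classes)

        def forward(self, audio, video, physio):
            fused = torch.cat([self.audio_enc(audio),
                               self.video_enc(video),
                               self.physio_enc(physio)], dim=-1)
            return self.classifier(fused)

    # Usage with random vectors standing in for real per-modality features.
    model = LateFusionModel()
    logits = model(torch.randn(8, 128), torch.randn(8, 256), torch.randn(8, 16))

Concatenation is the simplest fusion scheme; attention-based or gated fusion can additionally weight modalities by reliability and cope more gracefully with missing data.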

Main papers:

Marín-Morales, J., Higuera-Trujillo, J. L., Greco, A., Guixeres, J., Llinares, C., Scilingo, E. P., … & Valenza, G. (2018). Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors. Scientific Reports, 8(1), 13657.

Altozano, A., Minissi, M. E., Alcañiz, M., & Marín-Morales, J. (2023). Comparing Feature Engineering and End-to-End Deep Learning for Autism Spectrum Disorder Assessment based on Fullbody-Tracking. arXiv preprint arXiv:2311.14533.

Health Behaviour Informatics and Computational Psychiatry

The field of Affective Computing has high applicability in Health Behaviour Informatics, and particularly in computational psychiatry. This growing area seeks to provide precise quantitative models linking psychophysiological indicators, behaviors, and explicit responses, aiming to complement the traditional tools used in psychiatry.

Computational psychiatry can revolutionize the treatment of mental disorders through the analysis of large amounts of biometric and behavioral data. For example, in depression, advances in Affective Computing have enabled the development of tools that analyze facial responses, speech prosody, and physiological signals to assess the emotional state of patients in real time. These tools help personalize therapeutic interventions, allowing for a better understanding of disease progression and treatment efficacy. The detection of subtle changes in facial expression or tone of voice can indicate an imminent relapse, allowing for early intervention.

Moreover, computational psychiatry allows recognizing individual characteristics correlated with certain diseases and disorders, such as personality traits, attachment styles, or levels of social anxiety. The analysis of voice and facial expressions can reveal underlying levels of anxiety or tendencies toward social avoidance, which are important in mental health assessment.

Advances in computational psychiatry will equip clinicians with tools to monitor and diagnose patients and to advance precision medicine through objective biomarkers in conditions such as depression, bipolar disorder, schizophrenia, and autism. The integration of Affective Computing into computational psychiatry offers a promising pathway to enhance the detection, diagnosis, and treatment of mental disorders, providing mental health professionals with innovative and precise tools to understand and address the complexities of human behavior and emotional health.

Main papers:

Gómez-Zaragozá, L., Marín-Morales, J., Vargas, E. P., Giglioli, I. A. C., & Raya, M. A. (2023). An Online Attachment Style Recognition System Based on Voice and Machine Learning. IEEE Journal of Biomedical and Health Informatics.

Minissi, M. E., Altozano, A., Marín-Morales, J., Giglioli, I. A. C., Mantovani, F., & Alcañiz, M. (2024). Biosignal comparison for autism assessment using machine learning models and virtual reality. Computers in Biology and Medicine, 171, 108194.

Alcañiz, M., Chicchi Giglioli, I. A., Carrasco-Ribelles, L. A., Marín-Morales, J., Minissi, M. E., Teruel-García, G., … & Abad, L. (2022). Eye gaze as a biomarker in the recognition of autism spectrum disorder using virtual reality and machine learning: A proof of concept for diagnosis. Autism Research, 15(1), 131-145.

Alcañiz Raya, M., Marín-Morales, J., Minissi, M. E., Teruel Garcia, G., Abad, L., & Chicchi Giglioli, I. A. (2020). Machine learning and virtual reality on body movements' behaviors to classify children with autism spectrum disorder. Journal of Clinical Medicine, 9(5), 1260.