About

Biography

Adria Mallol-Ragolta is a Ph.D. candidate at the Department of Computer Science of the University of Augsburg, Augsburg (Germany), working at the Chair of Health Informatics at the Klinikum rechts der Isar (MRI) of the Technical University of Munich (TUM), Munich (Germany), under the supervision of Prof. Dr.-Ing. habil. Björn Schuller.

He received his B.Sc. degree in Audiovisual Telecommunication Systems Engineering from the Universitat Pompeu Fabra, Barcelona (Spain), in 2016, and his M.Sc. degree in Electrical Engineering from the University of Colorado, Colorado Springs (USA), in 2018.

His research interests include Artificial Intelligence, Affective Computing, Digital Health, and mHealth, with the goal of computationally understanding human states and traits using multimodal and ubiquitous computing solutions.

Research

Projects

SHIFT

MetamorphoSis of cultural Heritage Into augmented hypermedia assets For enhanced accessibiliTy and inclusion (#101060660)

EU Horizon 2020 Research & Innovation Action (RIA)

Runtime: 01.10.2022 – 30.09.2025
Role: Co-Author Proposal
Partners: Software Imagination & Vision, Foundation for Research and Technology, Massive Dynamic, Audeering, University of Augsburg/Technical University of Munich, Queen Mary University of London, Magyar Nemzeti Múzeum – Semmelweis Orvostörténeti Múzeum, The National Association of Public Librarians and Libraries in Romania, Staatliche Museen zu Berlin – Preußischer Kulturbesitz, The Balkan Museum Network, Initiative For Heritage Conservation, Eticas Research and Consulting, German Federation of the Blind and Partially Sighted

The SHIFT project is strategically conceived to deliver a set of loosely coupled technological tools that offer cultural heritage institutions the necessary impetus to stimulate growth and embrace the latest innovations in artificial intelligence, machine learning, multimodal data processing, digital content transformation methodologies, semantic representation, linguistic analysis of historical records, and the use of haptic interfaces to effectively and efficiently communicate new experiences to all citizens (including people with disabilities).

AUDI0NOMOUS

Agent-based Unsupervised Deep Interactive 0-shot-learning Networks Optimising Machines’ Ontological Understanding of Sound (#442218748)

DFG (German Research Foundation) Reinhart Koselleck-Projekt

Runtime: 01.01.2021 – 31.12.2025
Role: Participant
Partners: University of Augsburg/Technical University of Munich

Soundscapes are a constant component of our everyday acoustic environment: we are always surrounded by sounds, we react to them, and we create them. While computer audition, the understanding of audio by machines, has primarily been driven by the analysis of speech, the understanding of soundscapes has received comparatively little attention. AUDI0NOMOUS, a long-term project based on artificially intelligent systems, aims to achieve major breakthroughs in the analysis, categorisation, and understanding of real-life soundscapes. A novel approach, based around the development of four highly cooperative and interactive intelligent agents, is proposed to achieve this highly ambitious goal. Each agent will autonomously infer a deep and holistic comprehension of sound. A Curious Agent will collect unique data from web sources and social media; an Audio Decomposition Agent will decompose overlapping sounds; a Learning Agent will recognise an unlimited number of unlabelled sounds; and an Ontology Agent will translate the soundscapes into verbal ontologies. AUDI0NOMOUS will open up an entirely new dimension of comprehensive audio understanding; such knowledge will have a high and broad impact in disciplines of both the sciences and humanities, promoting advancements in health care, robotics, and smart devices and cities, amongst many others.

ForDigitHealth

Bavarian Research Network for the Healthy Use of Digital Technologies and Media (Bayerischer Forschungsverbund zum gesunden Umgang mit digitalen Technologien und Medien)

BayFOR (Bayerisches Staatsministerium für Wissenschaft und Kunst) Project

Runtime: 01.06.2019 - 31.05.2023
Role: Participant
Partners: University of Augsburg, Otto-Friedrich-University Bamberg, FAU Erlangen-Nuremberg, LMU Munich, JMU Würzburg

Digitalisation is fundamentally changing our society and our individual lives, which entails both opportunities and risks for our health. In part, our use of digital technologies and media leads to negative stress (distress), burnout, depression, and further health impairments. Stress, however, can also have a positive, stimulating effect (eustress), which is worth fostering. Technology design has advanced to the point where digital technologies and media, thanks to increasing artificial intelligence, adaptivity, and interactivity, can preserve and promote the health of their human users. The goal of the ForDigitHealth research network is to scientifically analyse, in all its diversity, the health effects of the growing presence and intensified use of digital technologies and media, specifically with regard to the emergence of digital distress and eustress and their consequences, and to develop and evaluate prevention and intervention options. In doing so, the research network aims to contribute to an appropriate, conscious, and health-promoting individual and collective use of digital technologies and media.

sustAGE

Smart environments for person-centered sustainable work and well-being (#826506)

EU Horizon 2020 Research & Innovation Action (RIA)

Runtime: 01.01.2019 – 30.06.2022
Role: Adjunct Scientific and Technical Manager, Workpackage Leader, Participant
Partners: Foundation for Research and Technology Hellas, Centro Ricerche Fiat SCPA, Software AG, Imaginary SRL, Forschungsgesellschaft für Arbeitsphysiologie und Arbeitsschutz e.V., Heraklion Port Authority S.A., Aegis IT Research UG, University of Augsburg, Aristotelio Panepistimio Thessalonikis, Universidad Nacional de Educación a Distancia

sustAGE aims to develop a person-centered solution for promoting the concept of “sustainable work” for EU industries. The project provides a paradigm shift in human-machine interaction, building upon seven strategic technology trends (IoT, machine learning, micro-moments, temporal reasoning, recommender systems, data analytics, and gamification) to deliver a composite system integrated with daily activities at work and outside, supporting employers and ageing employees to jointly increase well-being, wellness at work, and productivity. The manifold contribution focuses on supporting the employment and later retirement of older adults and on optimizing workforce management. The sustAGE platform guides workers on work-related tasks, recommends personalized cognitive and physical training activities with emphasis on game and social aspects, delivers warnings regarding occupational risks, and cares for workers’ proper positioning in tasks that will maximize team performance. By combining a broad range of innovation chain activities (namely, technology R&D, demonstration, prototyping, pilots, and extensive validation), the project aims to explore how health and safety at work, continuous training, and proper workforce management can prolong older workers’ competitiveness at work. The deployment of the proposed technologies in two critical industrial sectors and their extensive evaluation will lead to a ground-breaking contribution that will improve the performance and quality of life at work and beyond for many ageing adult workers.

Datasets

The MASCFLICHT Corpus

Face Mask Type and Coverage Area Recognition from Speech

A. Mallol-Ragolta, N. Urbach, S. Liu, A. Batliner, and B. Schuller

The MASCFLICHT corpus is a novel speech dataset for face mask type and coverage area recognition, collected with a Xiaomi Mi 10 smartphone. The dataset contains 2 h 27 m 55 s of data from 30 German speakers (15 f, 15 m) with a mean age of 25.7 (± 9.1) years. The data are split into three participant-independent and gender-balanced partitions (train/devel/test) and contain speech samples of the 30 participants recorded under five different conditions: i) without a face mask, ii) wearing a surgical mask with only the mouth covered, iii) wearing a surgical mask with both the mouth and the nose covered, iv) wearing an FFP2 mask with only the mouth covered, and v) wearing an FFP2 mask with both the mouth and the nose covered.

harAGE Corpus

Multimodal Dataset for Human Activity Recognition from Smartwatch Sensor Data

A. Mallol-Ragolta, A. Semertzidou, M. Pateraki, and B. Schuller

The harAGE corpus is a novel dataset for Human Activity Recognition (HAR) collected with a customised app running on a Garmin Vivoactive 3 smartwatch. The dataset contains 17 h 37 m 20 s of data from 30 participants (14 f, 16 m) with a mean age of 40.0 (± 8.3) years. The data are split into three participant-independent and gender-balanced partitions (train/devel/test) and contain samples of the 30 participants performing eight different activities (lying, sitting, standing, washing hands, walking, running, stairs climbing, and cycling).
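Both corpora above are partitioned in a participant-independent, gender-balanced way. The sketch below illustrates what such a partitioning scheme can look like; the participant IDs, the 15/15 gender counts, and the 60/20/20 ratio are illustrative assumptions and do not reproduce the official corpus partitions:

```python
import random

def split_participants(participants, seed=42):
    """Split participants into train/devel/test partitions such that
    (i) each participant appears in exactly one partition
    (participant-independence), and (ii) each partition contains a
    balanced number of female and male participants."""
    rng = random.Random(seed)
    partitions = {"train": [], "devel": [], "test": []}
    for gender in ("f", "m"):
        group = [pid for pid, g in participants if g == gender]
        rng.shuffle(group)
        # Illustrative 60/20/20 ratio; the real corpus ratios may differ.
        n_train, n_devel = int(0.6 * len(group)), int(0.2 * len(group))
        partitions["train"] += group[:n_train]
        partitions["devel"] += group[n_train:n_train + n_devel]
        partitions["test"] += group[n_train + n_devel:]
    return partitions

# Toy metadata: 30 hypothetical participants, 15 female (P00-P14)
# and 15 male (P15-P29).
participants = [(f"P{i:02d}", "f" if i < 15 else "m") for i in range(30)]
splits = split_participants(participants)
```

Because the split is performed over participants rather than over individual samples, no speaker's data can leak across partitions; stratifying per gender before splitting keeps each partition gender-balanced by construction.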

Dissemination

Publications

Download the full publications list as PDF.

Theses

  1. A. Mallol-Ragolta, Post-Traumatic Stress Disorder Severity Prediction on Web-based Trauma Recovery Treatments Through Electrodermal Activity Measurements. Master's thesis, University of Colorado Colorado Springs, Colorado Springs, CO, USA, July 2018. 149 pages.
  2. A. Mallol-Ragolta, Facial Shape Estimation Methods for Computing Physiological Signals with Non-Invasive Techniques. Bachelor's thesis, Universitat Pompeu Fabra, Barcelona, Spain, July 2016. 120 pages.

Code

  1. A. Mallol-Ragolta: "The sustAGE off-the-shelf sentiment analysis toolkit", Python, University of Augsburg, https://github.com/EIHW/sustAGE_SentimentAnalysis, September 2022.
  2. A. Mallol-Ragolta: "The sustAGE off-the-shelf arousal recognition toolkit", Python, University of Augsburg, https://github.com/EIHW/sustAGE_arousalRecognition, September 2022.
  3. A. Mallol-Ragolta: "The sustAGE smartwatch app and back-end server for data transmission and reception", Python, University of Augsburg, https://github.com/EIHW/sustAGE_SmartwatchApp, September 2022.
  4. B. Schuller, A. Batliner, S. Amiriparian, C. Bergler, M. Gerczuk, N. Holz, S. Bayerl, K. Riedhammer, A. Mallol-Ragolta, M. Pateraki, H. Coppock, I. Kiskin, M. Sinka, and S. Roberts: "The ACM Multimedia 2022 Computational Paralinguistics Challenge: Vocalisations, Stuttering, Activity, & Mosquitos", Python, University of Augsburg, https://github.com/EIHW/ComParE2022/tree/HAR-C, May 2022.
  5. M. M. Mohamed, M. A. Nessiem, A. Batliner, C. Bergler, S. Hantke, M. Schmitt, A. Baird, A. Mallol-Ragolta, V. Karas, S. Amiriparian, and B. W. Schuller: "Face Mask Recognition from Audio: The MASC Database and an Overview on the Mask Challenge", Java, University of Augsburg, https://github.com/EIHW/MaskDemoApp, October 2021.
  6. A. Mallol-Ragolta, M. Schmitt, A. Baird, N. Cummins, B. Schuller: "Performance Analysis of Unimodal and Multimodal Models in Valence-Based Empathy Recognition", Python, University of Augsburg, https://github.com/EIHW/OMGempathy2019, December 2018.

Press

Printing and Digital Press

  1. Stephanie Böhme, Marie Eva Keinert, Klara Capito, Lena Schindler-Gmelch, Adria Mallol-Ragolta, Robert Richer, Lydia Helene Rupp, Hannah Streit, Björn Schuller, Björn Eskofier, Matthias Berking, "„Mensch, ärgere dich d o c h!“ Erste Ergebnisse eines emotionsbasierten Approach-Avoidance Modification Trainings", in Spektrum der Wissenschaft, SciLogs, Spektrum der Wissenschaft Verlag/Nature Publishing Group, Germany, 31.05.2022.
  2. Adria Mallol-Ragolta, "Ens hem centrat en l’anàlisi de la tos per a la detecció de la Covid-19", in Empordà, Mairena Rivas, Spain, 22.07.2021.
  3. Adria Mallol-Ragolta, "Otomatik Yapay Zeka Öksürük Analizi ile COVID-19 Tespiti", in Teknoloji-Haber24, Turkey, 13.07.2021.
  4. Adria Mallol-Ragolta, "研究人员通过人工智能自动咳嗽分析检测COVID-19", in Sina Technology, China, 13.07.2021.
  5. Adria Mallol-Ragolta, "Detection of COVID-19 by automatic artificial intelligence cough analysis", in Industry Update, India, 12.07.2021.
  6. Adria Mallol-Ragolta, "Detection of COVID-19 via Automatic Artificial Intelligence Cough Analysis", in SciTechDaily, USA, 12.07.2021.
  7. Adria Mallol-Ragolta, "Detection of covid-19 via automatic cough analysis", in EurekAlert! – AAAS, USA, 05.07.2021.
  8. Adria Mallol-Ragolta, "Registren 1.040 àudios de persones tossint per crear una aplicació que detecti automàticament la covid-19", in Diari Més, Spain, 27.06.2021.
  9. Adria Mallol-Ragolta, "Un equip investiga la detecció de la covid-19 a través de l’anàlisi de la tos", in Regió 7, Spain, 23.06.2021.
  10. Adria Mallol-Ragolta, "Crean una aplicación de detección automática de Covid por la tos", in El Correo de Andalucía, Spain, 22.06.2021.
  11. Helena Cuesta, Adria Mallol-Ragolta, "Detection of covid-19 via automatic cough analysis", in Focus UPF, Spain, 22.06.2021.

Television

  1. Adria Mallol-Ragolta, "Primer encuentro presencial de SustAGE", TV interview broadcast on RTVE - UNED, Mayte Linares, Spain, 24:19, 03:48 mins, 10.12.2021.

Radio

  1. Adria Mallol-Ragolta, "[AUDIO]", radio interview broadcast on RNE – News, Laura Monill, Spain, 17:31, 01:33 mins, 17.08.2021.
  2. Adria Mallol-Ragolta, "[AUDIO]", radio interview broadcast on Canal Extremadura – El sol sale por el oeste, Antonio León y Nuria Labrador, 33:20, 09:33 mins, 28.06.2021.

Awards

  1. 2nd place (8 finalists) on the Unsupervised Detection of Psychotic Relapses track of the 2nd e-Prevention Challenge: Psychotic and Non-Psychotic Relapse Detection using Wearable-Based Digital Phenotyping, Adria Mallol-Ragolta, Anika Spiesberger, Andreas Triantafyllopoulos, and Björn Schuller: "TBD", in conjunction with the International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024), IEEE, Seoul, Korea, 14.-19.04.2024.
  2. 2022 Outstanding Article Award, Frontiers in Computer Science, Adria Mallol-Ragolta, Anastasia Semertzidou, Maria Pateraki, Björn Schuller: “Outer Product-Based Fusion of Smartwatch Sensor Data for Human Activity Recognition”, Frontiers Media, Switzerland, 01.02.2023.
  3. 3rd place (7 finalists) on the Generalized Track of the One-Minute-Gradual Empathy Prediction Challenge (OMG-Empathy 2019), Adria Mallol-Ragolta, Maximilian Schmitt, Alice Baird, Nicholas Cummins, Björn Schuller: “Performance Analysis of Unimodal and Multimodal Models in Valence-Based Empathy Recognition”, in conjunction with the 14th International Conference on Automatic Face & Gesture Recognition, IEEE, Lille, France, 14.-18.05.2019.

Lectures

WS: Winter Semester, SS: Summer Semester

  1. Seminar Digital Health Coordination
    (4 ECTS, since SS 2020) held at the University of Augsburg in Augsburg, Germany
  2. Seminar Computational Intelligence Coordination
    (4 ECTS, since SS 2020) held at the University of Augsburg in Augsburg, Germany

  1. Praktikum Digital Health
    (5 ECTS, WS 2021/22, WS 2022/23, SS 2023) held at the University of Augsburg in Augsburg, Germany with Prof. Dr.-Ing. habil. Björn Schuller and M.Sc. Alexander Kathan.
  2. Praktikum Speech Pathology
    (5 ECTS, WS 2019/20) held at the University of Augsburg in Augsburg, Germany with Prof. Dr.-Ing. habil. Björn Schuller and Dr. Nicholas Cummins.
  3. Übung zu Intelligente Signalanalyse in der Medizin / Intelligent Signal Analysis in Medicine Tutorial
    (5 ECTS, SS 2019) held at the University of Augsburg in Augsburg, Germany with Prof. Dr.-Ing. habil. Björn Schuller and Dr. Nicholas Cummins.
  4. Seminar Sports Informatics Coordination
    (4 ECTS, WS 2018/19 - WS 2019/20) held at the University of Augsburg in Augsburg, Germany
  5. Seminar Computer Audition Coordination
    (4 ECTS, WS 2018/19 - WS 2019/20) held at the University of Augsburg in Augsburg, Germany
  6. Seminar Embedded Intelligence for Health Care and Wellbeing Coordination
    (4 ECTS, WS 2018/19 - WS 2019/20) held at the University of Augsburg in Augsburg, Germany

Research Assistants

  1. Simon Pistrosch (06/2021 – 09/2022): Automatic Recognition of Emotions from Facial Videos, University of Augsburg

Master’s Theses

  1. Simone Pompe (10/2022): Inferring Blood Glucose Levels from Voice Biomarkers using XAI Techniques, University of Augsburg
  2. Simon Pistrosch (09/2022): Exploiting Appearance and Deep Features using Neural Networks with Attention for Face Emotion Recognition, University of Augsburg
  3. Qiang Chang (02/2021): Audiovisual Data-Driven Android App for Emotion Recognition, University of Augsburg
  4. Niklas Schröter (12/2020): An Analysis of Bias and Fairness in Affective Computing Orientated Machine Learning, University of Augsburg. Co-supervised with Dr. Nicholas Cummins

Bachelor's Theses

  1. Estimee Sakinna Temfack Guefack Malagah (01/2024): Alzheimer's Dementia Recognition Exploiting Self-Supervised Learning Representations of Clinical Interviews, University of Augsburg
  2. Carmen Berndt (06/2020): An investigation of sequence modelling strategies for speech-based depression detection, University of Augsburg. Co-supervised with Dr. Nicholas Cummins

Seminars

  1. Julia Maltan (08/2023): Alzheimer's Dementia Recognition Fusing Linguistics and Paralinguistics: A Literature Review, University of Augsburg
  2. Radu Cojocaru (08/2023): Recent Trends on Human Activity Recognition from Smartwatch Sensor Data, University of Augsburg
  3. Cagil Ceren Aslan (08/2023): Depression Recognition Fusing Linguistics and Paralinguistics: A Literature Review, University of Augsburg
  4. Nils Urbach (08/2022): Abnormal Face Mask Detection from Speech: Data Collection, and Initial Experiments, University of Augsburg
  5. Estimee Temfack (08/2022): Smartwatch Sensor Data Fusion for Human Activity Recognition, University of Augsburg
  6. Marcel Strobl (08/2022): Multi-Modal Fusion Methods from Wearable Sensor Data for Human Activity Recognition: A Systematic Literature Review, University of Augsburg
  7. Thao Dat Nguyen (03/2022): Improving Multi-Class Classification using Multi-Task Learning on the harAGE Dataset, University of Augsburg
  8. Thomas Wagner (08/2021): Human Activity Recognition from Wristwatch Data, University of Augsburg
  9. Simone Pompe (08/2021): Speech and Heart Rate Data Fusion for Affect Recognition, University of Augsburg
  10. Sebastian Vater (08/2021): Face mask detection from speech, University of Augsburg
  11. Tobias Artz (08/2021): An Introduction to Machine Translation, University of Augsburg
  12. Yuliia Oksymets (08/2020): Sentiment Analysis with Hierarchical Attention Networks, University of Augsburg

Projektmodul

  1. Anja Hager (11/2020): Implementation of an Empathic Chatbot, University of Augsburg

Activity

  1. A. Mallol-Ragolta: "Speech Technologies for Health: COVID-19, Masks, and Depression", Human Language Technology Lab, INESC-ID, Lisbon, Portugal, November 2024.
  2. A. Mallol-Ragolta: "Wearable Health Intelligence in the COVID-19 Crisis – A First Walk-Through", audEERING – Machine Learning Jourfix, online, Gilching, Germany, January 2024.
  3. A. Mallol-Ragolta: "The MASCFLICHT Corpus: Face Mask Type and Coverage Area Recognition from Speech", Deep Learning Barcelona Symposium, Barcelona, Spain, December 2023.
  4. A. Mallol-Ragolta: "Multi-Type Outer Product-Based Fusion of Respiratory Sounds for Detecting COVID-19", Deep Learning Barcelona Symposium, Barcelona, Spain, December 2022.
  5. A. Mallol-Ragolta: "Cough-Based COVID-19 Detection with Contextual Attention Convolutional Neural Networks and Gender Information", Deep Learning Barcelona Symposium, Barcelona, Spain, December 2021.
  6. A. Mallol-Ragolta: "Holistic wellbeing-oriented companion system for the ageing workforce", Workshop Adaptive and Conversational Approaches for Healthy Ageing & Work Ability, COADAPT, Online Event, November 2021.
  7. A. Mallol-Ragolta: "Towards an ubiquitous human-centred AI for health and wellbeing", Welcome Service — Scientific Talk, University of Augsburg, Augsburg, Germany, July 2021.
  8. A. Mallol-Ragolta: "Performance Analysis of Unimodal and Multimodal Models in Valence-Based Empathy Recognition", Embedded Intelligence for Health Care and Wellbeing (EIHW) Oberseminar, University of Augsburg, Augsburg, Germany, May 2019.

Grand Challenge Chairing

  1. Organisation and Data Chair of the ACM Multimedia Computational Paralinguistics ChallengE (ComParE), 30th ACM International Conference on Multimedia (ACM Multimedia), ACM, Lisbon, Portugal, 10.-14.10.2022.

Program Committees
  1. Program Committee member of the 26th International Conference on Multimodal Interaction (ICMI 2024), ACM, San José, Costa Rica, 04.-08.11.2024.
  2. Program Committee member of the 12th International Conference on Affective Computing and Intelligent Interaction (ACII 2024), IEEE, Glasgow, UK, 15.-18.09.2024.
  3. Program Committee member of the 1st Workshop on Multimodal Virtual Agents for Mental Health and Wellbeing – A new world with Foundational Models, held in conjunction with the 23rd ACM Conference on Interactive Virtual Agents (IVA 2023), ACM, Würzburg, Germany, 19.09.2023.
  4. Program Committee member of the 25th International Conference on Multimodal Interaction (ICMI 2023), ACM, Paris, France, 09.-13.10.2023.
  5. Program Committee member of the 11th International Conference on Affective Computing and Intelligent Interaction (ACII 2023), IEEE, Cambridge, MA, USA, 10.-13.09.2023.
  6. Program Committee member of the 24th International Conference on Multimodal Interaction (ICMI 2022), ACM, Bangalore, India, 07.-11.11.2022.
  7. Program Committee member of the 10th International Conference on Affective Computing and Intelligent Interaction (ACII 2022), IEEE, Nara, Japan, 18.-21.10.2022.
  8. Program Committee member of the 9th Audio/Visual Emotion Challenge and Workshop (AVEC 2019): “State of Mind, Depression and Cross-cultural Affect”, held in conjunction with the 27th ACM International Conference on Multimedia (MM 2019), ACM, Nice, France, 21.-25.10.2019.

Session Chairing
  1. Chairing Session "Speech Synthesis: Speaking Style, Emotion and Accents II", 23rd Annual Conference of the International Speech Communication Association (INTERSPEECH 2022), Incheon, Korea, 22.09.2022.
  2. Chairing Session "Technologies to Support OSH and Wellbeing in the Workplace", 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022) and the Affiliated Conferences, New York, USA, 26.07.2022.
  3. Chairing Session "Theme 10. General and Theoretical Informatics P1", 44th International Engineering in Medicine and Biology Conference (EMBC 2022), Glasgow, United Kingdom, 12.07.2022.
  4. Chairing Session "Human-centered Design I", 7th International Conference on Human Interaction & Emerging Technologies: Artificial Intelligence & Future Applications (IHIET-AI 2022), Lausanne, Switzerland – Virtual Event, 22.04.2022.

Journal reviewing

  1. Springer Artificial Intelligence Review, since 2024.
  2. Frontiers in Signal Processing, since 2022.
  3. IEEE Transactions on Affective Computing, since 2022 - IF: 10.506 (2021).
  4. Elsevier Computer Speech and Language, since 2022 - IF: 1.899 (2021).
  5. IEEE Transactions on Cybernetics, since 2019 - IF: 11.448 (2021).

Conference reviewing

  1. ICASSP 2025 – 50th International Conference on Acoustics, Speech and Signal Processing, Hyderabad, India, 06.-11.04.2025.
  2. ICMI 2024 – 26th International Conference on Multimodal Interaction, San José, Costa Rica, 04.-08.11.2024.
  3. ACII 2024 – 12th International Conference on Affective Computing and Intelligent Interaction, Glasgow, UK, 15.-18.09.2024.
  4. INTERSPEECH 2024 – 25th Annual Conference of the International Speech Communication Association, Kos Island, Greece, 01.-05.09.2024.
  5. MM 2023 – The ACM Multimedia 2023 Computational Paralinguistics ChallengE (ComParE 2023), Ottawa, Canada, TBD.
  6. ICMI 2023 – 25th International Conference on Multimodal Interaction, Paris, France, 09.-13.10.2023.
  7. ACII 2023 – 11th International Conference on Affective Computing and Intelligent Interaction, Cambridge, MA, USA, 10.-13.09.2023.
  8. ICMI 2022 — 24th International Conference on Multimodal Interaction, Bangalore, India, 07.-11.11.2022.
  9. ACII 2022 — 10th International Conference on Affective Computing and Intelligent Interaction, Nara, Japan, 18.-21.10.2022.
  10. ICMI 2021 — 23rd International Conference on Multimodal Interaction, Montreal, Canada, 18.-22.10.2021.
  11. ACII 2021 — 9th International Conference on Affective Computing and Intelligent Interaction, Virtual Event, 28.09.-01.10.2021.
  12. ACII 2019 — 8th International Conference on Affective Computing and Intelligent Interaction, Cambridge, United Kingdom, 03.-06.09.2019.
  13. MM 2019 — 9th Audio/Visual Emotion Challenge and Workshop (AVEC 2019), Nice, France, 21.10.2019.

Memberships
  1. ISCA – International Speech Communication Association. Member since 2019.