Where we come from


SCIENTIFIC
RESEARCH


audEERING’s technology is based on
decades of renowned scientific research.

YEARS OF DEVELOPMENT

audEERING’s roots at the TU Munich

audEERING was founded in 2012 as a spin-off of a research group led by the internationally renowned affective computing expert Prof. Dr.-Ing. Björn Schuller at Technische Universität München. Our sensAI technology builds on decades of scientific research by Prof. Schuller and his Machine Intelligence and Signal Processing group at TU Munich.

audEERING is the creator and owner of the well-known audio analysis toolkit openSMILE. The software enjoys an excellent reputation, as openSMILE is able to perform a wide array of tasks. It is applied in commercial products, scientific research, and academic projects alike.

Get started with audio analysis

Our open source solution

It is a widely used feature extraction and pattern recognition tool applied to a large variety of use cases. Want to know more? For further details and a free trial version of openSMILE, click the button below.

get openSMILE

SMILE is an acronym for Speech and Music Interpretation by Large-space Extraction. The openSMILE feature extraction tool enables you to extract large audio feature spaces in real time. It combines features from Music Information Retrieval and Speech Processing. Written in C++, the feature extractor components can be freely interconnected to create new and custom features, all via a simple configuration file. New components can be added to openSMILE via an intuitive binary plugin interface.
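To give a flavour of how components are interconnected via the configuration file, here is a minimal illustrative sketch of an openSMILE-style config: a wave source feeding a framer, an energy extractor, and a CSV sink, wired together through named data-memory levels. Component types and the reader/writer level mechanism follow openSMILE's documented conventions, but this fragment is a simplified example for illustration, not a complete, tested configuration.

```ini
;; Illustrative openSMILE chain: wave file -> frames -> RMS energy -> CSV
[componentInstances:cComponentManager]
instance[dataMemory].type = cDataMemory
instance[waveSource].type = cWaveSource
instance[framer].type = cFramer
instance[energy].type = cEnergy
instance[csvSink].type = cCsvSink

[waveSource:cWaveSource]
writer.dmLevel = wave      ; write raw samples to the "wave" level
filename = input.wav

[framer:cFramer]
reader.dmLevel = wave      ; read samples, write overlapping frames
writer.dmLevel = frames
frameSize = 0.025
frameStep = 0.010

[energy:cEnergy]
reader.dmLevel = frames    ; compute per-frame energy features
writer.dmLevel = energy
rms = 1

[csvSink:cCsvSink]
reader.dmLevel = energy    ; dump the feature stream to a CSV file
filename = output.csv
```

A configuration like this would typically be run from the command line, e.g. `SMILExtract -C myconfig.conf`; swapping or adding components is a matter of editing the file, not recompiling.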

Browse Our Publications

audEERING’s approach to the One-Minute-Gradual Emotion Challenge

A. Triantafyllopoulos, H. Sagha, F. Eyben, B. Schuller, “audEERING’s approach to the One-Minute-Gradual Emotion Challenge,” arXiv preprint arXiv:1805.01222

Detecting Vocal Irony

J. Deng, B. Schuller, “Detecting Vocal Irony,” in Language Technologies for the Challenges of the Digital Age: 27th International Conference, GSCL 2017, Vol. 10713, p. 11, Springer

Emotion-awareness for intelligent vehicle assistants: a research agenda

H. J. Vögel, C. Süß, T. Hubregtsen, V. Ghaderi, R. Chadowitz, E. André, … & B. Huet, “Emotion-awareness for intelligent vehicle assistants: a research agenda,” in Proceedings of the 1st International Workshop on Software Engineering for AI in Autonomous Systems, pp. 11-15, ACM

Robust Laughter Detection for Wearable Wellbeing Sensing

G. Hagerer, N. Cummins, F. Eyben, B. Schuller, “Robust Laughter Detection for Wearable Wellbeing Sensing,” in Proceedings of the 2018 International Conference on Digital Health, pp. 156-157, ACM

Deep neural networks for anger detection from real life speech data

J. Deng, F. Eyben, B. Schuller, F. Burkhardt, “Deep neural networks for anger detection from real life speech data,” in Proc. of 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), pp. 1-6, IEEE

Deep recurrent neural network-based autoencoders for acoustic novelty detection

E. Marchi, F. Vesperini, S. Squartini, B. Schuller, “Deep recurrent neural network-based autoencoders for acoustic novelty detection,” in Computational Intelligence and Neuroscience, 2017

Did you laugh enough today? – Deep Neural Networks for Mobile and Wearable Laughter Trackers

G. Hagerer, N. Cummins, F. Eyben, B. Schuller, “Did you laugh enough today? – Deep Neural Networks for Mobile and Wearable Laughter Trackers,” in Proc. Interspeech 2017, pp. 2044-2045

Automatic speaker analysis 2.0: Hearing the bigger picture

B. Schuller, “Automatic speaker analysis 2.0: Hearing the bigger picture,” in Proc. of 2017 International Conference on Speech Technology and Human-Computer Dialogue (SpeD), pp. 1-6, IEEE

Seeking the SuperStar: Automatic assessment of perceived singing quality

J. Böhm, F. Eyben, M. Schmitt, H. Kosch, B. Schuller, “Seeking the SuperStar: Automatic assessment of perceived singing quality,” in Proc. of 2017 International Joint Conference on Neural Networks (IJCNN), pp. 1560-1569, IEEE

Enhancing LSTM RNN-Based Speech Overlap Detection by Artificially Mixed Data

G. Hagerer, V. Pandit, F. Eyben, B. Schuller, “Enhancing LSTM RNN-Based Speech Overlap Detection by Artificially Mixed Data,” in Proc. 2017 AES International Conference on Semantic Audio

The effect of personality trait, age, and gender on the performance of automatic speech valence recognition

H. Sagha, J. Deng, B. Schuller, “The effect of personality trait, age, and gender on the performance of automatic speech valence recognition,” in Proc. 7th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2017), San Antonio, Texas, AAAC, IEEE, October 2017

Automatic Multi-lingual Arousal Detection from Voice Applied to Real Product Testing Applications

F. Eyben, M. Unfried, G. Hagerer, B. Schuller, “Automatic Multi-lingual Arousal Detection from Voice Applied to Real Product Testing Applications,” in Proc. 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017), New Orleans, LA, IEEE

Real-time Tracking of Speakers’ Emotions, States, and Traits on Mobile Platforms

E. Marchi, F. Eyben, G. Hagerer, B. Schuller, “Real-time Tracking of Speakers’ Emotions, States, and Traits on Mobile Platforms,” in Proc. INTERSPEECH 2016, San Francisco, California, USA, pp. 1182-1183

A Paralinguistic Approach To Speaker Diarisation: Using Age, Gender, Voice Likability and Personality Traits

Y. Zhang, F. Weninger, B. Liu, M. Schmitt, F. Eyben, B. Schuller, “A Paralinguistic Approach To Speaker Diarisation: Using Age, Gender, Voice Likability and Personality Traits,” in Proc. 2017 ACM Conference on Multimedia, Mountain View, California, USA, pp. 387-392

An Image-based Deep Spectrum Feature Representation for the Recognition of Emotional Speech

N. Cummins, S. Amiriparian, G. Hagerer, A. Batliner, S. Steidl, B. Schuller, “An Image-based Deep Spectrum Feature Representation for the Recognition of Emotional Speech,” in Proc. 2017 ACM Conference on Multimedia, Mountain View, California, USA, pp. 478-484

Snore sound recognition: On wavelets and classifiers from deep nets to kernels

K. Qian, C. Janott, J. Deng, C. Heiser, W. Hohenhorst, M. Herzog, N. Cummins, B. Schuller, “Snore sound recognition: On wavelets and classifiers from deep nets to kernels,” in Proc. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 3737-3740

Introducing the Weighted Trustability Evaluator for Crowdsourcing Exemplified by Speaker Likability Classification

Real-life voice activity detection with LSTM Recurrent Neural Networks and an application to Hollywood movies

F. Eyben, F. Weninger, S. Squartini, B. Schuller, “Real-life voice activity detection with LSTM Recurrent Neural Networks and an application to Hollywood movies,” in Proc. of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 483-487, 26-31 May 2013. doi: 10.1109/ICASSP.2013.6637694

Affect recognition in real-life acoustic conditions – A new perspective on feature selection

F. Eyben, F. Weninger, B. Schuller, “Affect recognition in real-life acoustic conditions – A new perspective on feature selection,” in Proc. of INTERSPEECH 2013, Lyon, France, pp. 2044-2048

Cross-Language Acoustic Emotion Recognition: An Overview and Some Tendencies

S. Feraru, D. Schuller, B. Schuller, “Cross-Language Acoustic Emotion Recognition: An Overview and Some Tendencies,” in Proc. 6th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2015), (Xi’an, P. R. China), AAAC, IEEE, pp. 125-131, September 2015

Speech Analysis in the Big Data Era

B. Schuller, “Speech Analysis in the Big Data Era,” in Proc. of the 18th International Conference on Text, Speech and Dialogue, TSD 2015, Lecture Notes in Artificial Intelligence (LNAI), Springer, September 2015, Satellite event of INTERSPEECH 2015

The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing

F. Eyben, K. Scherer, B. Schuller, J. Sundberg, E. André, C. Busso, L. Devillers, J. Epps, P. Laukka, S. Narayanan, K. Truong, “The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing,” IEEE Transactions on Affective Computing, 2015

Building Autonomous Sensitive Artificial Listeners (Extended Abstract)

M. Schröder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, M. Valstar, M. Wöllmer, “Building Autonomous Sensitive Artificial Listeners (Extended Abstract),” in Proc. of ACII 2015, Xi’an, China, invited for the Special Session on Most Influential Articles in IEEE Transactions on Affective Computing

Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies (Extended Abstract)

B. Schuller, B. Vlasenko, F. Eyben, M. Wöllmer, A. Stuhlsatz, A. Wendemuth, G. Rigoll, “Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies (Extended Abstract),” in Proc. of ACII 2015, Xi’an, China, invited for the Special Session on Most Influential Articles in IEEE Transactions on Affective Computing

Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification (Extended Abstract)

A. Metallinou, M. Wöllmer, A. Katsamanis, F. Eyben, B. Schuller, S. Narayanan, “Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification (Extended Abstract),” in Proc. of ACII 2015, Xi’an, China, invited for the Special Session on Most Influential Articles in IEEE Transactions on Affective Computing

iHEARu-PLAY: Introducing a game for crowdsourced data collection for affective computing

S. Hantke, T. Appel, F. Eyben, B. Schuller, “iHEARu-PLAY: Introducing a game for crowdsourced data collection for affective computing,” in Proc. 6th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2015), Xi’an, P. R. China, AAAC, IEEE, pp. 891-897, September 2015

Real-time Robust Recognition of Speakers’ Emotions and Characteristics on Mobile Platforms

F. Eyben, B. Huber, E. Marchi, D. Schuller, B. Schuller, “Real-time Robust Recognition of Speakers’ Emotions and Characteristics on Mobile Platforms,” in Proc. 6th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2015), Xi’an, P. R. China, AAAC, IEEE, pp. 778-780, September 2015


Progress never stops

Still active in research projects

Furthermore, audEERING is a consortium member of various funded research projects. Among others, we are a partner in several governmental projects funded by the European Commission and the German Federal Ministry of Education and Research (BMBF).