devAIce® Web API 4.1.0 Update


This latest release of the devAIce® Web API introduces updated dimensional and categorical emotion models in the Emotion (Large) module. In benchmarks, the new models prove significantly more robust against background noise and varying recording conditions than their predecessors, while their computational complexity remains unchanged.

devAIce® SDK 3.7.0 Update


Today, we are happy to announce the public release of devAIce® SDK 3.7.0. This update comes with several noteworthy model updates for emotion and age recognition, the deprecation of the Sentiment module, and numerous other minor tweaks, improvements, and fixes.

devAIce Web API 4.0.0 Update


We are proud to announce version 4.0.0, a major update to the devAIce® Web API that is available to customers today. Most notably, this release introduces a modernized and simplified set of API endpoints, all-new client libraries with support for more programming languages, OpenAPI compatibility, and an enhanced command-line interface tool. It also includes the recent model updates and performance improvements from the latest devAIce SDK release, including support for the Dominance emotion dimension and accuracy improvements of up to 15 percentage points.

devAIce® SDK 3.6.1 Update


The devAIce® team is proud to announce the availability of devAIce® SDK 3.6.1, which brings a number of major enhancements, exciting new functionality, and smaller fixes since the last publicly announced version, 3.4.0. This blog post summarizes the most important changes introduced in devAIce® SDK since then.

Voice AI Perceives Emotions in Every World


Human interaction is built on shared language, context, and world knowledge. As a Voice AI company, we know that emotion is a key part of that foundation. Emotional expression moves us, drives collective responses, and underpins the decisions we make. As virtual reality opens up new dimensions and augmented experiences, this key factor cannot be missing.

Closing the Valence Gap in Emotion Recognition


2021 has been an exciting year for our researchers working on the recognition of emotions from speech. Benefiting from recent advances in transformer-based architectures, we have for the first time built models that predict valence with nearly the same high precision as arousal.

Human in the Loop – How Do We Create AI?


Developing AI technology as we do at audEERING, we first need to understand human perception. In everyday life, perception enables us to recognize the emotional state of our communication partner across different situations. To train machine learning models, we must give the algorithm this essential input. So how do we at audEERING create AI?

ERIK – Emotion Recognition for Autism Therapy


Recognizing and expressing emotions is an essential part of human communication. To develop the socio-emotional communication skills of autistic children, therapy has to focus on exactly that. The ERIK project is developing a new form of such therapy.