This latest release of devAIce® Web API introduces updated dimensional and categorical emotion models in the Emotion (Large) module. In benchmarks, the new versions of these models prove significantly more robust against background noise and varying recording conditions than the previous models, while keeping their computational complexity unchanged.
Today, we are happy to announce the public release of devAIce® SDK 3.7.0. This release brings noteworthy model updates for emotion and age recognition, the deprecation of the Sentiment module, and numerous other minor tweaks, improvements, and fixes.
We are proud to announce version 4.0.0, a major update to the devAIce® Web API that is available to customers today. Most notably, this release introduces a modernized and simplified set of new API endpoints, all-new client libraries with support for more programming languages, OpenAPI compatibility, as well as an enhanced command-line interface tool. It also includes recent model updates and performance improvements from the latest devAIce SDK release, including support for the Dominance emotion dimension and accuracy improvements of up to 15 percentage points.
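Because the 4.0.0 endpoints follow an OpenAPI specification, a plain HTTP client is enough to integrate the service. The sketch below shows what such a call could look like in Python; the endpoint path, the `modules` parameter, the authentication header, and the response fields are illustrative assumptions, not the documented interface — please refer to the official API reference and OpenAPI schema for the actual names.

```python
import requests

# Hypothetical sketch of calling an emotion-analysis endpoint of the
# devAIce Web API over HTTP. Endpoint path, parameters, and response
# fields are assumptions for illustration only.

API_URL = "https://example.invalid/v4/analyze"   # placeholder base URL
API_KEY = "YOUR_API_KEY"                         # placeholder credential

with open("sample.wav", "rb") as audio_file:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": audio_file},
        data={"modules": "emotion-large"},        # assumed module selector
        timeout=60,
    )

response.raise_for_status()
result = response.json()

# Assumed response layout: dimensional emotion scores per analysed segment.
for segment in result.get("emotion", []):
    print(segment.get("arousal"), segment.get("valence"), segment.get("dominance"))
```

Alternatively, the published OpenAPI description can be fed to a standard client generator to produce typed bindings for your language of choice, which is the intent behind the all-new client libraries.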
The devAIce® team is proud to announce the availability of devAIce SDK 3.6.1, which comes with a number of major enhancements, exciting new functionality, and smaller fixes since the last publicly announced version, 3.4.0. This blog post summarizes the most important changes introduced in devAIce® SDK since then.
Human interaction is built on a shared language, context, and world knowledge. As a Voice AI company, we know that emotion is a key factor: emotional expression moves us, creates momentum and a collective response. It shapes society and underpins the decisions we make. When creating virtual realities, new dimensions, and augmented experiences, this factor cannot be missing.
2021 has been an exciting year for our researchers working on the recognition of emotions from speech. Benefiting from recent advances in transformer-based architectures, we have for the first time built models that predict valence with a precision as high as that for arousal.
We are proud to announce a new class of next-generation emotion models coming to devAIce with the latest 3.4.0 release of the devAIce™ SDK and Web API.
Developing AI technology as we do at audEERING, we need to understand human perception. In everyday life, perception enables us to recognize the emotional state of our communication partner in different situations. To teach a machine the same skill, we need to give the learning algorithm the essential input. So how do we at audEERING create AI?
The recognition and perception of emotional expression is an essential part of human communication. Therapy that aims to develop the socio-emotional communication skills of autistic children therefore has to focus on exactly this. In the ERIK project, a new form of such therapy is being developed.
Young developers from all over the world used audEERING’s emotion detection to create their own games. Have a look and enjoy a new way of gaming.
This year, I took a more practical approach and presented our visions for the future to you, with solid case studies behind each one of them.