Blog

If you have been following our gaming blogs, chances are you have also wondered about other applications of emotion recognition in the video game industry. Today, we will talk about one of them: gamer monitoring.
How should a robot sound? We are increasingly surrounded by talking machines, but are their voices adequate?
By Pascal Hecker, Junior Healthcare Researcher at audEERING and Luca Pettinari, Junior AI Researcher at audEERING and MSc Student in Biomedical Engineering at Università Politecnica delle Marche
Text-to-speech synthesis has made tremendous progress in recent years, but speaking style remains a challenge.
Likability is not a speaker trait but an individual evaluation by the recipient, which makes it an important issue for profitable speech communication.
Imagine being on the train on your way to work, listening to music without any disturbing sounds from your surroundings. Sounds too good to be true? It is in fact possible, if you have the right headphones.
We have again received a great award, which makes us very proud: the Innovation Award Bavaria 2018.
Two weeks ago, we wrote a short general article about emotions in video games. Make sure to check it out HERE if you missed it. This week, we want to get a bit more specific and see how it actually started and how it works.
By Felix Burkhardt, Director of Research at audEERING
AI becoming emotionally intelligent is discussed everywhere in the media and industry at the moment, but what is really the current status in the industry?
Have you ever wondered how cool it would be if games could also understand our emotions? Not clear enough? Alright. Let’s go through some simple examples.
Data collection is an important issue in machine learning research. If we use YouTube as a resource, do we really know what we are getting? And do we really know what’s in our audio data?