The online portal it-daily.net has published an article on audEERING’s novel approach to the automatic recognition of emotions for next-generation speech assistants. Dialog systems such as Apple’s Siri, Google Now, or Microsoft’s Cortana will soon appear more human-like by recognizing not only what was said, but also how it was said: Is the user happy or annoyed? Interested or bored? What are the user’s personality, age, and gender? All of this information, which is highly relevant for natural communication, can be extracted from speech signals by audEERING’s pattern recognition algorithms, setting new standards in intuitive human-machine communication. Read the full article here.