INTELLIGENT EMOTION DETECTION FROM AUDIO.
audEERING's UNITY PLUGIN ENABLES
NEW DEPTH OF IMMERSION IN GAME DESIGN.

The Unity Plugin

UNITY PLUGIN FOR EMOTION RECOGNITION

YOUR PLAYER’S FEELINGS IN REAL TIME


audEERING’s Unity Plugin leverages emotion AI to take the gaming experience to a new dimension for your players. We provide the most advanced software systems for real-time detection of emotions from audio. Contact us for your developer’s license.

Emotion input brings an unmatched depth of immersion into your game. Using embedded microphones or headsets is an easy and non-invasive way to capture your player’s feelings.
audEERING’s award-winning technology can distinguish between six basic emotions, which can be freely combined into dozens of emotional states to fit the specific needs of your game scenario. The lightweight Plugin performs on any gaming device.

How it works

EMOTION FROM AUDIO

THE DIRECT WAY TO A PLAYER’S FEELINGS

The human voice is our most natural means of communication and carries a wealth of information. Beyond language, the expression of feelings is a universal form of interaction. Let your game use this advantage by identifying your player’s mood.


1. LISTEN

Record your player's voice via any microphone.
Voice Activity Detection (VAD) automatically segments the speech
and sends the audio to the AI models.

2. DETECT

Analyzing over 6,000 features in the player's voice,
the Plugin identifies the basic emotion dimensions with their confidence levels,
in addition to six basic emotions and their intensity.

3. INTERACT

Based on the output values of every frame,
you can trigger in-game events fitted to the player's mood.
Let the army cheer after a pep talk, or have a pet come to you in VR.
The opportunities are endless.
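Consumed from script, the three steps above boil down to reading the per-frame output and reacting to it. A minimal sketch of the INTERACT step, in Python for brevity — the function and field names here are illustrative assumptions, not the plugin's actual API:

```python
# Illustrative sketch of the INTERACT step; `frame` stands in for one
# per-frame emotion result, and all names here are hypothetical.

def interact(frame):
    """Trigger a game event based on one frame of emotion output."""
    value, confidence = frame["excitement"]  # (value, confidence) pair
    if value > 0.7 and confidence > 0.5:
        return "army_cheer"  # e.g. let the army cheer after a pep talk
    return None  # no event this frame

# One frame as the DETECT step might produce it
frame = {"excitement": (0.85, 0.9), "sadness": (0.05, 0.8)}
print(interact(frame))  # -> army_cheer
```

In a real Unity project the same logic would live in a C# script's per-frame update, with the thresholds tuned to the game scenario.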


LIGHTWEIGHT PLUGIN

EASY TO INTEGRATE

audEERING's emotion detection plugin for Unity is lightweight and easy to integrate into any game design.

The Emotion AI API is also available in C#, C++, Java, and Python, making it easy to integrate into any project.
Find more information in our Technical Documentation.

  • Works with audio from headsets and device-integrated microphones (16 kHz, min. 8-bit depth)
  • VAD: voice activity detection
  • Progressive calibration to person and environment
  • 5.9 MB in total: ultra-lightweight Plugin
  • Confidence level and intensity for each emotion


MORE ON UNITY PLUGIN

DOWNLOAD NOW

PROJECT
GAME CHALLENGE 
2019


audEERING’s Unity Plugin is used in the GaCha 2019 Game Challenge for emotional gaming, helping young developers to get their hands on the new technology.

visit project page

UNITY PLUGIN
FACT SHEET


The plugin provides developers with a continuous scale for six basic emotions and three emotion dimensions, each with its respective confidence level. This means you can identify any emotional state. Read more in the fact sheet.

Download fact sheet

SHORT SOFTWARE
DOCUMENTATION


audEERING’s Unity Plugin is lightweight and easy to integrate. Drag and drop the scripts onto your game objects and see them reacting to emotions. Read more in the technical documentation.

Download Documentation


PERSONAL CONTACT

GET IN TOUCH WITH audEERING

Interested in how you can use audEERING’s Unity Plugin for your business? Our team at audEERING is ready to help.

YOUR PERSONAL CONTACT

Bernd Zeilmaier
Director Business Development

bzeilmaier@audeering.com
+49 8105 775 615 0

FURTHER QUESTIONS ON THE UNITY PLUGIN?

FIND ANSWERS IN OUR FAQ

How many emotions can the Plugin distinguish?

The emotions are categorized based on the latest psychological models. These models describe an emotional space spanned by the dimensions "pleasantness, urgency, dominance". From these core dimensions, all other states can be derived. audEERING’s Unity Plugin model distinguishes between six core emotions, the ones most relevant to the game industry. Other models in the SDK can distinguish up to 48 emotions.

  • Dimensions with confidence level:
    • Pleasantness
    • Urgency
    • Dominance (Control)
  • Emotions
    • Sadness
    • Boredom
    • Anger
    • Excitement
    • Relaxation
    • Happiness

Combining these values makes it possible to identify any emotional state.
For example, high urgency, low pleasantness, and low control mean fear.
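As a sketch of how such a mapping from dimensions to states might look in code — assuming, hypothetically, that each dimension is reported on a -1 to 1 scale (the "fear" rule follows the example above; the scale and the other rules are illustrative assumptions):

```python
def classify_state(pleasantness, urgency, dominance, threshold=0.5):
    """Map the three core dimensions to a named emotional state.
    The 'fear' rule follows the fear example; the -1..1 scale and
    the remaining rules are illustrative assumptions."""
    if urgency > threshold and pleasantness < -threshold and dominance < -threshold:
        return "fear"  # high urgency, low pleasantness, low control
    if pleasantness > threshold and urgency > threshold:
        return "excitement"
    return "neutral"

print(classify_state(pleasantness=-0.8, urgency=0.9, dominance=-0.7))  # -> fear
```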

Which microphone quality is needed for the Plugin?

audEERING’s technology has been tested on a variety of embedded microphones and headsets. As a rule of thumb: the better the quality and the lower the noise, the higher the accuracy. The ADC sampling rate should be at least 16 kHz, with a minimum depth of 8 bit (u-law). Processing of audio captured via remote wireless microphones is also possible.

How long must an audio sample be for emotion detection?

From a technical perspective, the model returns results in real time (every frame). The longer the audio received by the emotion model, the higher the accuracy of the result. Very long utterances, however, balance themselves out, as a human's emotional reaction does not last indefinitely.
In practice, the return interval should be at least half a second for reliable emotion detection, and no longer than 12 seconds. This can be freely adjusted to the scenario in the game. Another common practice is to read the values in short one-second intervals and sum them up until they reach a threshold.
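The interval-summing practice can be sketched as follows; the emotion values and the threshold here are made-up numbers for illustration:

```python
def intervals_to_threshold(interval_values, threshold):
    """Sum per-interval emotion values (e.g. anger read once per second)
    and return after how many intervals the threshold is reached."""
    total = 0.0
    for i, value in enumerate(interval_values, start=1):
        total += value
        if total >= threshold:
            return i
    return None  # threshold never reached

anger_per_second = [0.2, 0.3, 0.4, 0.5]  # hypothetical one-second readings
print(intervals_to_threshold(anger_per_second, threshold=0.8))  # -> 3
```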

How can I test the Unity Plugin?

Developer licenses are available. Do not hesitate to contact our Director of Business Development, Bernd Zeilmaier. He is glad to help you with business solutions, licensing options, and details.

Can you combine emotions?

You can freely create new values by combining available emotions into new emotional states. For example, Happiness and Urgency can be combined into Motivation.
This creates almost infinite possibilities for any gaming scenario.

From a technical perspective, you receive a JSON object with emotion results every frame. Each emotion has a value, and you can set a threshold for it based on your scenario.
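A minimal sketch of combining two per-frame values into a derived state, assuming a hypothetical JSON layout — the real field names may differ, so consult the technical documentation:

```python
import json

def motivation_from_frame(frame_json, threshold=0.6):
    """Average Happiness and Urgency from one frame's JSON result into a
    derived 'motivation' score; the field names are assumptions."""
    frame = json.loads(frame_json)
    score = (frame["happiness"]["value"] + frame["urgency"]["value"]) / 2
    return score, score >= threshold

frame_json = ('{"happiness": {"value": 0.8, "confidence": 0.9},'
              ' "urgency": {"value": 0.7, "confidence": 0.8}}')
score, motivated = motivation_from_frame(frame_json)
print(round(score, 2), motivated)  # -> 0.75 True
```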