A new depth of immersion in games.
entertAIn play is a Unity plugin,
enabling you to use emotions as input.
EMOTION RECOGNITION FOR UNITY
TAP INTO PLAYERS’ FEELINGS
Bring immersion to the next level – entertAIn play detects your players’ emotions through audio. Real-time analysis lets you implement novel gameplay scenarios.
entertAIn play provides you with six default emotions. Drag-and-drop integration is done in no time, and the plugin performs on any platform, from a VR headset to an everyday mobile phone. Join us and make sure that the future of gaming is not bound by buttons alone.
6 DEFAULT EMOTIONS
Based on established psychological models, entertAIn play provides you with six default emotion categories: anger, excitement, happiness, relaxation, boredom, and sadness. In addition, the core AI features allow you to create customized emotion categories.
NEW DEPTH OF IMMERSION
FULL VR INTEGRATION
Emotion input brings unmatched depth of immersion to your games, especially in VR scenarios. Using embedded microphones is a non-invasive way to capture your players’ feelings. entertAIn play performs excellently on mobile VR as well as stand-alone HMDs.
Whether it is a pep talk to your troops before the battle or emotional interaction with the characters in an RPG, entertAIn play has it covered. You can use emotions as active or passive input, realizing new gameplay scenarios.
How excited is your audience about your game? entertAIn play provides you with an objective analysis and reliable results on players’ emotional state.
Join the goddess Elise on her quest to defend the nexus against evil powers.
“Elise’s destiny” is the winner of the GaCha 2019 international Game Challenge.
Powered by entertAIn play, this game is the world’s first third-person shooter to use emotion AI.
EMOTION FROM AUDIO
THE DIRECT WAY TO THE PLAYERS’ FEELINGS
The human voice is the most natural way to communicate. It carries a lot of information. Beyond language, the expression of feelings is a universal manner of interaction. Let your games use emotions to their advantage.
Voice Activity Detection (VAD) automatically captures your players’ voices via any microphone.
The entertAIn play AI model analyzes over 6,000 voice features. It identifies the players’ emotions and their intensity in real time.
Based on the output values, you can then trigger in-game events suited to the player’s mood.
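As a minimal sketch of the capture–analyze–trigger pipeline described above (all names and thresholds here are illustrative assumptions, not the actual plugin API):

```python
# Hypothetical sketch: per-frame emotion scores drive game events.
# The function name, score keys, and event names are made up for
# illustration; the real plugin's API may differ.

def on_emotion_frame(scores, threshold=0.7):
    """Take one frame of emotion scores (0.0-1.0) and return the
    names of game events whose emotion crossed the threshold."""
    events = []
    if scores.get("anger", 0.0) >= threshold:
        events.append("npc_backs_away")
    if scores.get("excitement", 0.0) >= threshold:
        events.append("crowd_cheers")
    return events

# Example frame as the analysis step might emit it:
frame = {"anger": 0.82, "excitement": 0.35, "happiness": 0.1}
print(on_emotion_frame(frame))  # ['npc_backs_away']
```

In a real game this check would run inside the update loop, with one event handler per emotion category.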
LIGHTWEIGHT PLUGIN
DRAG AND DROP INTEGRATION
entertAIn play is lightweight and easy to integrate into any game design. Find more information in our technical documentation.
VAD: Voice activity detection
SPEAKER ID possible
5 MB IN TOTAL: ultra-lightweight plugin
Customizable emotion sensitivity
Multiplatform: Windows, MacOS, iOS and Android
EMOTION GAME CHALLENGE 2019
FEATURING entertAIn play
In the summer of 2019, young game developers were challenged to create a new chapter in the history of video game design. They used audEERING’s emotion detection and crafted their own games.
entertAIn play brings immersion to the next level by detecting your players’ emotions. Find more information in the factsheet.
GET IN TOUCH WITH audEERING
Are you interested in how entertAIn play can accelerate your product and business? Our team at audEERING is ready to help!
QUESTIONS ON entertAIn play?
FIND ANSWERS IN OUR FAQ
How many emotions can entertAIn play distinguish?
The emotions are categorized based on recent psychological models. In these models, emotional space is defined by three core dimensions: pleasantness, urgency, and dominance. From these core dimensions, all other states can be identified. The entertAIn play model distinguishes between six core emotions, which are the most common in the game industry. Other models in the SDK can distinguish up to 48 emotions.
The model returns each dimension with a confidence level:
- Pleasantness
- Urgency
- Dominance (Control)
A mixture of these values can identify any emotional state.
For example, high urgency, low pleasantness and low control means fear.
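The fear example above could be sketched as a simple mapping from the three dimension values to a discrete label. The thresholds and the non-fear branches below are illustrative assumptions, not the SDK’s actual logic:

```python
# Illustrative mapping from (pleasantness, urgency, dominance) scores
# in [0.0, 1.0] to a discrete emotion label. Thresholds are made up
# for the example; the real model returns confidence values.

def label_from_dimensions(pleasantness, urgency, dominance, hi=0.6, lo=0.4):
    if urgency >= hi and pleasantness <= lo and dominance <= lo:
        return "fear"        # high urgency, low pleasantness, low control
    if urgency >= hi and pleasantness >= hi:
        return "excitement"  # assumed mapping for illustration
    if urgency <= lo and pleasantness >= hi:
        return "relaxation"  # assumed mapping for illustration
    return "neutral"

print(label_from_dimensions(0.2, 0.9, 0.1))  # fear
```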
Which microphone quality is needed for entertAIn play?
audEERING’s technology has been tested on a variety of embedded microphones and headsets. As a rule of thumb, the better the quality and the lower the noise, the higher the accuracy. The ADC sampling rate should be at least 16 kHz, with a minimum bit depth of 8 bits (u-law). Audio captured via remote wireless microphones can also be processed.
How long must an audio sample be for emotion detection?
From a technical perspective, the model returns a result in real time (every frame). The longer the audio received by the emotion model, the higher the accuracy of the result. Very long utterances, however, balance themselves out, as a human’s emotional reaction does not last indefinitely.
In practice, the return interval should be at least half a second for reliable emotion detection, and no longer than 12 seconds. This can be fully adjusted to the scenario in the game. Another approach is to collect values in short one-second intervals and sum them up to see whether they reach a threshold.
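The interval-summing approach could look like the following sketch. The scores and threshold are illustrative; the real plugin delivers its own per-interval values:

```python
# Sketch of the "sum short intervals until a threshold is reached"
# pattern. Each entry is a hypothetical one-second emotion score.

def reaches_threshold(interval_scores, threshold):
    """Accumulate per-second scores; return True as soon as the
    running sum reaches the threshold."""
    total = 0.0
    for score in interval_scores:
        total += score
        if total >= threshold:
            return True
    return False

# Five one-second excitement readings:
print(reaches_threshold([0.3, 0.4, 0.5, 0.2, 0.1], threshold=1.0))  # True
```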
How can I test entertAIn play?
Developer licenses are available. Do not hesitate to contact our Director of Business Development, Bernd Zeilmaier. He is glad to help you with business solutions, licensing options, and details.
Can you combine emotions?
You can freely create new values by combining available emotions into new emotional states. For example, Happiness and Urgency can be combined into Motivation.
This creates almost infinite possibilities for any gaming scenario.
From a technical perspective, you get a JSON object with emotion results every frame. Each emotion has a value, and you can set a threshold for it based on your scenario.
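Consuming such a per-frame result and deriving a combined state could be sketched as follows. The field names, the averaging rule, and the threshold are assumptions for illustration; consult the SDK documentation for the actual JSON schema:

```python
import json

# Hypothetical per-frame result as the plugin might deliver it:
frame_json = '{"happiness": 0.8, "urgency": 0.7, "sadness": 0.05}'
frame = json.loads(frame_json)

# Derive "motivation" as the mean of happiness and urgency
# (an assumed combination rule, per the FAQ example):
motivation = (frame["happiness"] + frame["urgency"]) / 2
print(round(motivation, 2))  # 0.75

# Trigger game logic once the combined value crosses a threshold:
if motivation >= 0.7:
    print("motivation event triggered")
```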