Emotion feedback for play testing.
Deep insights into your players’ feelings
through AI-based voice analysis.
A NEW DEPTH
OF PLAY TESTING
Play testing with emotional feedback. Our emotion AI takes play testing to the next level. entertAIn observe analyzes your players’ emotions during test sessions. Detailed real-time reports on all devices provide you with valuable insights into the user experience.
Easy to use, perfect overview. Create new tests for each game and assign an unlimited number of test subjects, all monitored by entertAIn observe in real time. The emotion sensitivity can be adjusted to fit your gaming scenario.
The overview gives you quick insight into the test status and results. Reports with exact timestamps make it easy to analyze reactions to any in-game event. Aggregated results for each subject allow you to easily compare different versions in A/B test scenarios.
UNDER THE HOOD
AI EXCELLENCE IN AUDIO
A stack of technologies has been combined to create the AI models for entertAIn observe. Voice Activity Detection (VAD) is based on advanced neural networks and automatically filters the human voice from background noise. It is immune to artifacts such as clicking and typing, as it is trained to detect only the human voice.
The Audio Feature Extraction process is based on the science of Digital Signal Processing (DSP). This stage extracts more than 6,000 audio features, such as Loudness, Harmony, and Energy, from the player’s voice. Mapping these features into a multidimensional feature space allows the AI model to identify emotions.
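To picture what DSP-based feature extraction means in practice, here is a minimal sketch of two classic low-level descriptors, RMS energy and zero-crossing rate. This is an illustration only; the function names are our own and the example does not reflect audEERING’s actual feature set or pipeline.

```python
import math

def rms_energy(frame):
    """Root-mean-square energy of one audio frame (a list of samples)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign changes."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)

# A toy 8-sample frame with alternating signs: constant magnitude,
# so the RMS is 0.5 and every sample pair crosses zero.
frame = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
print(rms_energy(frame))          # 0.5
print(zero_crossing_rate(frame))  # 1.0
```

A production system computes thousands of such descriptors per frame; these two are merely the simplest representatives of the loudness- and pitch-related families mentioned above.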
NEW DEPTH OF IMMERSION
FULL VR INTEGRATION
Emotion input brings unmatched depth of immersion to your games, especially in VR scenarios. Using embedded microphones is a non-invasive way to capture your players’ feelings. entertAIn play performs excellently on mobile VR as well as on stand-alone HMDs.
EMOTION FROM AUDIO
ANALYZE YOUR PLAYER’S FEELINGS
The human voice is the most natural way to communicate. It carries a lot of information beyond spoken words. The expression of feelings is a universal manner of interaction, which can be analyzed by entertAIn observe.
Capture your subject’s voice via microphone. VAD distinguishes the human voice from background noise.
Analyzing over 6,000 voice characteristics, entertAIn observe identifies the current level of engagement in real time.
Based on the output values, you can now see how your designs affected the players’ emotions.
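The first step of this pipeline can be illustrated with a naive energy-based gate. This is a toy sketch under our own assumptions; as described above, the actual VAD in entertAIn observe is based on trained neural networks, not a simple threshold.

```python
def is_voice(frame, threshold=0.1):
    """Naive voice-activity gate: keep frames whose mean absolute
    amplitude exceeds a threshold; everything quieter is treated as
    background noise. (Illustrative only - real VAD is model-based.)"""
    energy = sum(abs(s) for s in frame) / len(frame)
    return energy > threshold

speech_frame = [0.4, -0.3, 0.5, -0.2]     # loud frame: mean energy 0.35
silence_frame = [0.01, -0.02, 0.0, 0.01]  # near-silent frame
print(is_voice(speech_frame))   # True
print(is_voice(silence_frame))  # False
```

An energy gate alone cannot tell speech from a keyboard click at the same loudness, which is exactly why the neural approach described above is needed.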
entertAIn observe uses state-of-the-art, AI-optimized models for emotion recognition. It gives you the tools to ensure that your game has the highest chance of success before release.
Real-time monitoring of engagement
Detailed reports on every test
Voice Activity Detection: background noise is cut out
Non-intrusive microphones enable natural testing
Natural environment: players play without feeling monitored
Web-based: PC, tablet, mobile gaming, etc.
entertAIn play brings immersion to the next level by detecting your players’ emotions. Find more information in the factsheet.
GET IN TOUCH WITH audEERING
Are you interested in how entertAIn observe can take your play testing to the next level? Our team at audEERING is ready to help!
MORE PRODUCTS ?
THE entertAIn FAMILY
AI for video games
entertAIn play is a plugin for the Unity game engine. Drag & drop emotion recognition from audio for your video games.
AI for parental control
Healthy gaming for children of any age. entertAIn family keeps you informed on your children’s emotions while playing.
QUESTIONS ON entertAIn family?
FIND ANSWERS IN OUR FAQ
How many emotions can entertAIn play distinguish?
The emotions are categorized based on the latest psychological models. These models describe an emotional space spanned by the dimensions pleasantness, urgency, and dominance. From these core dimensions, all other states can be derived. The entertAIn play model distinguishes between six core emotions that are most relevant to the games industry. Other models in the SDK can distinguish up to 48 emotions.
The model returns each dimension with a confidence level:
- Pleasantness (Valence)
- Urgency (Arousal)
- Dominance (Control)
A mixture of these values can identify any emotional state. For example, high urgency, low pleasantness, and low control indicate fear.
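That mapping could be sketched like this, using the three dimensions named in the answer above. The thresholds and rules here are hypothetical illustrations, not the product’s actual model:

```python
def classify(pleasantness, urgency, dominance, low=0.3, high=0.7):
    """Toy mapping from the three core dimensions to a named state.
    Thresholds and rules are illustrative, not entertAIn's model."""
    if urgency > high and pleasantness < low and dominance < low:
        return "fear"       # high urgency, low pleasantness, low control
    if urgency > high and pleasantness > high:
        return "joy"        # aroused and pleasant
    return "neutral"        # everything else, in this toy rule set

print(classify(pleasantness=0.1, urgency=0.9, dominance=0.2))  # fear
```

A real model works in the continuous dimensional space rather than with hard cut-offs, but the principle of deriving discrete states from the three dimensions is the same.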
Which microphone quality is needed for entertAIn play?
audEERING’s technology has been tested on a variety of embedded microphones and headsets. As a rule of thumb, the better the quality and the lower the noise, the higher the accuracy. The ADC sampling rate should be at least 16 kHz, with a minimum depth of 8 bit (u-law). Processing audio captured via remote wireless microphones is also possible.
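For WAV input, those minimums (16 kHz sampling rate, 8-bit depth) can be checked with Python’s standard `wave` module. This is a sketch of the check itself; the function name is our own and entertAIn’s actual input validation may differ:

```python
import io
import wave

def meets_minimum(wav_bytes, min_rate=16000, min_width_bytes=1):
    """Return True if a WAV file meets the rule-of-thumb minimums:
    at least a 16 kHz sampling rate and 8-bit (1-byte) sample depth."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        return (wf.getframerate() >= min_rate
                and wf.getsampwidth() >= min_width_bytes)

# Build a tiny in-memory 16 kHz, 16-bit mono WAV to demonstrate.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)       # 16-bit samples
    wf.setframerate(16000)   # 16 kHz
    wf.writeframes(b"\x00\x00" * 16)
print(meets_minimum(buf.getvalue()))  # True
```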
How long must an audio sample be for emotion detection?
From a technical perspective, the model returns a result in real time (every frame). The more audio the emotion model receives, the higher the accuracy of the result. Very long utterances, however, average themselves out, as a human’s emotional reaction does not last indefinitely.
In practice, the return interval should be at least half a second for reliable emotion detection, and no longer than 12 seconds. This can be fully adjusted to the scenario in the game. Another practice is to read the values in short one-second intervals and sum them up to see if they reach a threshold.
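The second practice mentioned above can be sketched as follows. The per-second scores and the threshold are assumed numbers chosen for illustration:

```python
def crosses_threshold(values, threshold):
    """Sum per-second emotion values and report after how many seconds
    the running total reaches the threshold (None if it never does)."""
    total = 0.0
    for second, v in enumerate(values, start=1):
        total += v
        if total >= threshold:
            return second
    return None

# Hypothetical per-second engagement scores from the model.
scores = [0.2, 0.3, 0.1, 0.4, 0.5]
print(crosses_threshold(scores, threshold=0.95))  # 4
```

Accumulating short intervals this way smooths out frame-level noise while still reacting within a few seconds of an in-game event.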
How can I test entertAIn play?
There are developer licenses available. Do not hesitate to contact our Director of Business Development, Bernd Zeilmaier. He will be glad to help you with business solutions, licensing options, and details.
Can you combine emotions?
You can freely create new values by combining available emotions into new emotional states. For example, Happiness and Urgency can be combined into Motivation.
This creates almost infinite possibilities for any gaming scenario.
From a technical perspective, you get a JSON object with emotion results in every frame. Each emotion has a value and you can set a threshold for it based on your scenario.
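Putting the two answers above together, a per-frame JSON result can be parsed and combined into a new state. The field names and the combination rule here are assumptions for illustration, not the documented entertAIn schema:

```python
import json

# Hypothetical per-frame result; field names are illustrative only.
frame_json = '{"happiness": 0.8, "urgency": 0.7, "fear": 0.1}'

def motivation(values):
    """Toy combination rule: average happiness and urgency into a new
    'motivation' score, as in the example above."""
    return (values["happiness"] + values["urgency"]) / 2

values = json.loads(frame_json)
score = motivation(values)
print(score)         # 0.75
print(score >= 0.5)  # True: 'motivation' passes a 0.5 threshold
```

Any derived state follows the same pattern: read the per-frame values, combine them, and compare the result against a threshold tuned to the gaming scenario.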