With the onset of the dark season, conversations about mental health become more frequent. This is a good time to show how technology can help us better understand the human experience.
audEERING’s Expression Recognition technology analyzes acoustic signals such as pitch, speech rate, and voice tension to recognize emotional expressions in context. But how can a machine learn empathy?
Empathy through data
Empathy begins with listening. That’s exactly what audEERING’s technology does.
Unlike pure “emotion recognition” systems, expression recognition does not consider the voice in isolation, but in its situational context. The technology recognizes how something is said, not just what is said.
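To make this concrete, here is a minimal sketch of how acoustic signals like pitch and speech rate can be extracted from a recording, using the open-source librosa library. This is an illustration only, not audEERING’s actual feature pipeline, and the proxies chosen here (onset density for speech rate, energy variability for “tension”) are simplifying assumptions.

```python
# Illustrative only: extract simple voice features with librosa.
# These proxies are simplifying assumptions, not audEERING's feature set.
import librosa
import numpy as np

def basic_voice_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000, mono=True)

    # Fundamental frequency (pitch) via the pYIN algorithm.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0_voiced = f0[voiced_flag]

    # Rough speech-rate proxy: acoustic onset events per second.
    onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    duration = len(y) / sr

    # Rough "tension" proxy: variability of short-time energy.
    rms = librosa.feature.rms(y=y)[0]

    return {
        "mean_pitch_hz": float(np.nanmean(f0_voiced)) if f0_voiced.size else 0.0,
        "pitch_std_hz": float(np.nanstd(f0_voiced)) if f0_voiced.size else 0.0,
        "onsets_per_second": len(onset_times) / duration if duration else 0.0,
        "energy_std": float(np.std(rms)),
    }
```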
A machine “learns” empathy by recognizing acoustic patterns in thousands of real-life speech samples that correlate with particular emotional expressions. Through continuous training on diverse, anonymized audio reference data, it learns to perceive subtle differences: between tension and excitement, calm and exhaustion, joy and irony. This creates a data-based understanding of empathy.
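A deliberately toy-sized sketch of this pattern-learning step: mapping feature vectors like the ones above to expression labels. The data, labels, and model below are hypothetical placeholders; production systems are trained on large annotated corpora with far more sophisticated models.

```python
# Toy sketch of the pattern-learning step. Features, labels, and values
# are hypothetical placeholders, not real training data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rows: [mean_pitch_hz, pitch_std_hz, onsets_per_second, energy_std]
X = np.array([
    [210.0, 45.0, 4.2, 0.09],  # excitement: higher, more variable pitch, fast
    [195.0, 41.0, 3.9, 0.08],
    [150.0, 12.0, 2.1, 0.02],  # calm: lower, steadier pitch, slower
    [145.0, 10.0, 1.9, 0.02],
    [158.0, 11.0, 1.2, 0.01],  # exhaustion: flat pitch, very slow
    [152.0,  9.0, 1.1, 0.01],
])
y = np.array(["excitement", "excitement", "calm", "calm",
              "exhaustion", "exhaustion"])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# A new sample with high, variable pitch and fast onsets...
print(model.predict([[205.0, 43.0, 4.0, 0.08]]))  # -> ['excitement']
```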

Practical example: Mental well-being at sea
A joint case study with TORM, Hilo, and Safetytech Accelerator shows how this works in practice. The aim of the pilot project was to assess the mental well-being of seafarers using audEERING’s AI SoundLab.
Over a period of three months, the voices of 31 crew members were recorded on trade routes, both through the Voyage Data Recorder system on board and through individual voice recordings. At the same time, Hilo’s maritime platform collected activity data so that emotional expressions could be linked to ship events.
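One plausible way to align the two data streams, sketched here with pandas; the column names, scores, and the two-hour matching tolerance are assumptions, since the actual TORM/Hilo data schema is not public.

```python
# Sketch: aligning per-recording expression scores with ship activity data
# by timestamp. Column names and values are assumed for illustration.
import pandas as pd

voice = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-03-01 08:05", "2024-03-01 14:30"]),
    "crew_id": ["A07", "A12"],
    "arousal": [0.81, 0.22],  # hypothetical expression scores
    "valence": [0.35, 0.60],
})

events = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-03-01 08:00", "2024-03-01 14:00"]),
    "activity": ["port maneuver", "open sea transit"],
})

# Attach the most recent ship event to each voice recording (within 2 hours).
voice = voice.sort_values("timestamp")
events = events.sort_values("timestamp")
linked = pd.merge_asof(voice, events, on="timestamp",
                       direction="backward", tolerance=pd.Timedelta("2h"))
print(linked[["timestamp", "crew_id", "arousal", "activity"]])
```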
The results speak for themselves:
During port activities, emotional fluctuations were significantly more intense than on the open sea. Heat maps revealed where stress and tension were particularly prevalent, for example on the bridge during maneuvers or during periods of increased workload.
These analyses provide valuable insights for the safety culture on board: they show when and where stress arises, without crew members having to be actively questioned or observed. This makes audEERING’s AI SoundLab a sensitive, respectful tool that can capture mental states without violating privacy.
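For illustration, here is how such a heat map could be aggregated from linked scores, sketched with pandas and matplotlib; the locations, hours, and values below are placeholders, not figures from the study.

```python
# Sketch: aggregating stress-related scores into a location x time heat map.
# Locations, hours, and scores below are illustrative placeholders.
import matplotlib.pyplot as plt
import pandas as pd

records = pd.DataFrame({
    "location": ["bridge", "bridge", "engine room", "mess", "bridge", "mess"],
    "hour": [8, 9, 8, 12, 14, 18],
    "arousal": [0.85, 0.80, 0.55, 0.25, 0.70, 0.20],  # hypothetical stress proxy
})

# Mean score per (location, hour) cell; unobserved cells stay empty (NaN).
grid = records.pivot_table(index="location", columns="hour",
                           values="arousal", aggfunc="mean")

fig, ax = plt.subplots()
im = ax.imshow(grid.to_numpy(), aspect="auto", cmap="magma")
ax.set_xticks(range(len(grid.columns)))
ax.set_xticklabels(grid.columns)
ax.set_yticks(range(len(grid.index)))
ax.set_yticklabels(grid.index)
ax.set_xlabel("hour of day")
fig.colorbar(im, ax=ax, label="mean arousal (stress proxy)")
plt.show()
```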
Ethical responsibility in AI
audEERING is committed to transparency, fair data annotation, and cultural diversity.
Empathy cannot be one-dimensional: emotions sound different in different cultures, and AI needs to understand that. That’s why training data is carefully curated, models are systematically checked for bias, and ethical guidelines are integrated into development.
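One elementary form such a bias check can take is comparing model accuracy across cultural or language groups on a held-out evaluation set; the group labels and the gap threshold below are illustrative assumptions, not a documented audEERING procedure.

```python
# Sketch: a simple per-group bias check on held-out evaluation results.
# Group labels and the 0.1 accuracy-gap threshold are illustrative choices.
import pandas as pd

eval_results = pd.DataFrame({
    "group": ["de", "de", "jp", "jp", "br", "br"],
    "correct": [1, 1, 1, 0, 0, 1],  # hypothetical per-sample outcomes
})

per_group = eval_results.groupby("group")["correct"].mean()
overall = eval_results["correct"].mean()

# Flag groups whose accuracy falls notably below the overall mean.
flagged = per_group[per_group < overall - 0.1]
print(per_group)
print("groups needing attention:", list(flagged.index))
```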
This ensures that AI remains not only powerful, but also humanly sensitive, especially in times when listening is more important than ever.
So when it gets dark outside, audEERING’s technology reminds us how much light there is in a voice.
