The market for autonomous driving is projected to grow from 5.7 billion US dollars in 2018 to 60 billion US dollars in 2030. This figure shows the potential of the technology and is one of the reasons why the German government provided funding for the project SEMULIN.
Better Communication between Driver and Car
The goal of the project is to develop a self-supporting, natural human-machine interface (HMI) for automated driving using multimodal input and output modalities, including facial expressions, gestures, gaze, and speech. Combined with information about the driver's surroundings (e.g. inside the car), this constitutes a holistic approach to an HMI adapted to the needs of the human senses. The system is intended to enhance the interaction between car and driver and create a better user experience in autonomous cars.
Emotion Detection in the Car
Artificial intelligence will be an integral part of this system. audEERING provides its well-known audio AI for emotion detection based on the voice, built on machine learning and affective computing. By enhancing the communication between driver and car, autonomous driving will become more natural and convenient.
Combination of Various Sensors
While audEERING focuses on analyzing the driver's voice, other partners in the consortium contribute complementary technologies: Blickshift provides eye-tracking, and Infineon contributes gesture detection. Combining the input from these sensors makes it possible to detect the driver's state and mood. The intelligent system in the car can then react accordingly and provide individualized recommendations.
Consortium of World Leaders in the Automotive Sector
The project is funded by the German Federal Ministry for Economic Affairs and Energy. Fraunhofer IIS leads the consortium of eight partners, which includes Infineon and Mercedes-Benz, among others. If you want to learn more about the project, please visit the Fraunhofer IIS website.