An evaluation of the status quo and the future of empathic AI-supported triage systems
PD Dr. Cornelius Werner is a neurologist and geriatrician specializing in movement disorders, neurogenic dysphagia, and neurorehabilitation. He heads the Department of Neurology and Geriatrics at Johanniter Hospital Genthin-Stendal and is committed to improving medical care in his region. He also leads a research group at RWTH Aachen University that focuses on neurorehabilitation in neurogenic dysphagia, particularly in neurodegenerative diseases, as well as aphasia. The autoAAT aphasia project led by Accenture aimed to automate the Aachen Aphasia Test and ran from July 2022 to July 2025. audEERING®, University Hospital of Bonn, and RWTH Aachen University were part of the project group.
Question 1: How do you perceive the current discussion around AI in patient intake and triage in your environment? Are there already pilot studies or concrete projects?
Cornelius Werner: From my perspective, that still feels very far off. Of course, AI in medicine is a major topic right now—almost everyone wants to position themselves in some way. But when it comes specifically to triage in the emergency department, that’s extremely challenging—even without AI. To be honest, apart from your project, I don’t currently know of any others seriously addressing this specific setting.
We shouldn’t fool ourselves: the everyday reality in an emergency room is one of constant crisis. You have intoxicated patients, panicked relatives, people who are completely unresponsive. And no one really knows what will happen next. The idea of deploying an AI system in that context seems far too risky for many—legally and practically. What I see is more of a conversation happening in spaces far removed from clinical reality. I haven’t heard a single colleague say, “If only we had a robot to handle this for us.”
Such systems are conceivable, but deployment should start in lower-pressure settings—for example, in the form of telephone triage, as discussed in some countries, or in on-call services that deal with less critical cases.
Question 2: In your view, what are the most important requirements for a system that communicates empathetically with patients—for example, in stressful or emergency situations?
CW: Above all, such a system must be able to do two things simultaneously. First, it must provide an intellectual assessment of the situation despite unstructured and often vague medical input. Second, it must offer emotional de-escalation. Almost everyone who comes into an emergency room is afraid—even if they appear aggressive or irritable.
We have to keep in mind: patients rarely describe symptoms in a structured way. They’ll talk about a “weird feeling” or “dizziness,” and it’s the job of the doctor—or the bot—to extract medically relevant information from this jumble of words using heuristically guided questions.
At the same time, an empathetic system must be context-sensitive. It’s not enough to recognize anger—it has to understand that fear is often behind it.
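The extraction step described here—turning a vague complaint like a "weird feeling" into targeted, medically relevant follow-up questions—can be pictured as a rule-based dialogue policy. The following is a minimal sketch; the keywords, question texts, and the `next_questions` helper are illustrative placeholders invented for this example, not validated clinical content:

```python
# Minimal sketch of heuristically guided follow-up questions for
# vague symptom descriptions. Keywords and question wording are
# illustrative placeholders, not a clinical instrument.

FOLLOW_UPS = {
    "dizzy": [
        "Does the room seem to spin, or do you feel faint?",
        "Did the dizziness start suddenly or gradually?",
    ],
    "weird feeling": [
        "Where in your body do you notice it?",
        "Is it tingling, numbness, or pressure?",
    ],
    "chest": [
        "Does the discomfort spread to your arm or jaw?",
        "Does it change when you breathe in deeply?",
    ],
}

def next_questions(utterance: str, max_q: int = 2) -> list[str]:
    """Pick follow-up questions whose trigger keyword appears
    in the patient's free-text utterance."""
    text = utterance.lower()
    questions = []
    for keyword, qs in FOLLOW_UPS.items():
        if keyword in text:
            questions.extend(qs)
    return questions[:max_q]

# A vague complaint triggers targeted questions; unmatched
# utterances would fall back to a human or a generic prompt.
print(next_questions("I just have this weird feeling and I'm a bit dizzy"))
```

A production system would of course replace the keyword table with a language model and clinically validated question trees, but the control flow—free text in, prioritized follow-up questions out—stays the same.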
Question 3: What benefits or risks do you see in an AI-based reception system that understands language, recognizes emotions, and performs triage?
CW: A bot like that would have one decisive advantage: patience. The 25th patient with unspecific dizziness would receive just as much friendliness as the first. That’s not something you can take for granted in day-to-day hospital work.
What I find particularly interesting is the idea of emotional triage—not just prioritizing medically, but also affectively. It’s not about giving preference to someone because they’re loud, but about recognizing who is at “1000 degrees” and who is closer to “90.” Who urgently needs emotional support so the situation doesn’t escalate? Early-warning systems like that could help prevent many conflicts—in essence, a kind of “waiting room management.”
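The emotional-triage idea above—ranking by medical urgency first while using an affect estimate as a separate early-warning signal—can be sketched as a priority queue over two scores. The scales, names, and the 0.8 escalation threshold below are assumptions made up for illustration:

```python
import heapq
from dataclasses import dataclass, field

# Illustrative sketch of combined medical + emotional triage.
# Score ranges and the escalation threshold are invented placeholders.

@dataclass(order=True)
class WaitingPatient:
    priority: tuple = field(init=False)
    name: str = field(compare=False)
    medical_urgency: int = field(compare=False)  # 1 (low) .. 5 (critical)
    distress: float = field(compare=False)       # 0.0 (calm) .. 1.0 ("1000 degrees")

    def __post_init__(self):
        # Medical urgency dominates; distress only breaks ties, so a loud
        # patient never jumps ahead of a medically more urgent one.
        self.priority = (-self.medical_urgency, -self.distress)

def needs_deescalation(p: WaitingPatient, threshold: float = 0.8) -> bool:
    """Early-warning flag: emotional support is needed regardless of rank."""
    return p.distress >= threshold

queue: list[WaitingPatient] = []
heapq.heappush(queue, WaitingPatient("A", medical_urgency=2, distress=0.9))
heapq.heappush(queue, WaitingPatient("B", medical_urgency=4, distress=0.2))
heapq.heappush(queue, WaitingPatient("C", medical_urgency=2, distress=0.3))

first = heapq.heappop(queue)                            # highest medical urgency
flagged = [p for p in queue if needs_deescalation(p)]   # waiting-room early warning
```

The design choice worth noting is the separation of the two signals: distress never overrides urgency in the queue order, but it drives an independent alert so staff can de-escalate before a situation boils over.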
Question 4: In your experience, how do patients and relatives respond to such technologies? Is there acceptance, skepticism, or specific expectations?
CW: That varies a lot. Often in the ER, it’s not the patients themselves who are speaking, but their relatives or the emergency personnel. In such situations, it’s less about acceptance and more about whether something works at all.
Rejection arises when people feel they’ve been parked in some kind of phone queue and no longer have access to a human being. But younger people are already using LLMs like ChatGPT as a kind of co-therapist. The threshold is very low for them. In nursing—especially among older patients—acceptance strongly depends on their familiarity with technology. There’s definitely an age gradient.
Question 5: What would need to be in place—technically, ethically, logistically—for your institution to consider deploying such a system?
CW: For any institution, two things matter: First, it has to save costs—ideally on personnel. Second, it has to be legally unassailable.
That means, quite specifically: a certified medical product, clearly defined liability issues, and no additional risks for the hospital’s insurance. But we’re still a long way from that. The emergency room is the Mount Everest of medicine. As I said earlier, I would start smaller—for example, with a preliminary telephone triage or a digital waiting room management system.
We also can’t underestimate the political dimensions. General practitioners don’t want to lose their cases, and hospitals don’t want to be overwhelmed by unnecessary emergencies. AI-based triage is not just a medical issue—it’s a very sensitive topic in terms of health policy as well.