Moderator: PD Dr Barbara Rantner
Guest: Prof. Dr Martin Hirsch
AI meets everyday clinical practice: In episode 4, PD Dr Barbara Rantner talks to Prof. Dr Martin Hirsch about artificial intelligence in vascular surgery – between empathy and efficiency, responsibility and vision.
How is AI changing medicine? Which applications are already in use? And which ethical questions do we urgently need to clarify?
An exciting look at what is coming – and what is already reality.
Avatars, analysis, medical history – how AI supports vascular surgery
Topics in this episode:
- Virtual A&E: AI-assisted initial assessment in Marburg
- Patient consultations with avatars – between scepticism and acceptance
- Empathy: Can AI systems appear more human than doctors?
- AI in medical training and education
- Regulation, the Medical Devices Act and ethical guidelines
- The role of AI in addressing healthcare gaps in the Global South
- Outlook: AI in clinics and practices in five to ten years
More about the episode
Artificial intelligence (AI) is transforming medicine – rapidly, profoundly, and in the field of vascular surgery too. What is already a reality today, and what remains a distant prospect? And what do these developments mean for empathy, ethics and medical responsibility?
In this episode, PD Dr Barbara Rantner speaks with Prof. Martin Hirsch, a human biologist, cognitive scientist and head of the Institute for Artificial Intelligence in Medicine at Philipps University of Marburg. Together, they take a nuanced look at the practical use of AI in the A&E department, the role of avatars in taking medical histories, the transformation of medical training, and the challenges of medical ethics in the age of intelligent systems.
How is AI changing everyday life in hospitals and practices? What opportunities do virtual medical history booths, speech recognition and large language models offer? What does trustworthy AI require – and how do we protect ourselves from misuse?
An insightful conversation about responsibility, vision and the future of patient-centred medicine – very much in keeping with the DGG motto: “We think beyond vessels.”
Questions or feedback? Would you like to get in touch with the editorial team or the experts? We look forward to hearing from you at: podcasts(at)medizinkommunikation.org
Rantner: Welcome to a new episode of ‘Focus on Vascular Surgery’. My name is Barbara Rantner. Those of you who have been listening before will already know this. I am a vascular surgeon and senior consultant at the LMU Medical Centre in Munich, and I am delighted to be hosting this podcast with you again today. This podcast is intended to offer you, dear listeners, a platform to delve deeply into the topics that shape and advance our field. Together with our guests, we will explore the latest developments in surgical, endovascular and preventive vascular medicine and discuss current issues from science, teaching, clinical practice and the professional world. True to our motto: “We think beyond vessels”. Today’s episode is dedicated to a very topical and, above all, promising subject: artificial intelligence in vascular surgery. Whether it’s surgical planning, quality control, diagnostics or documentation, the possibilities of AI seem vast and virtually limitless, but which of these are already a reality today, and which might still be a long way off? When does the use of artificial intelligence make sense, and what comes next? In the worst-case scenario, will doctors even be needed in 20 years’ time? To discuss this, I am speaking today with one of the leading experts on AI in medicine, Prof. Dr Martin Hirsch. He is a human biologist, cognitive scientist and head of the Institute for Artificial Intelligence in Medicine at Philipps University in Marburg.
Dear Prof. Hirsch, thank you very much for coming. I’m looking forward to our conversation.
Hirsch: Yes, thank you very much, Ms Rantner, for having me here. I’m also looking forward to the conversation, even though I’m not a vascular surgeon, but the topic is, after all, very broad.
Rantner: You already mentioned in your introduction that you’re not a vascular surgeon. I can only counter that by saying I know nothing about AI. We discussed beforehand who could lead the conversation, and I said that the most AI I’ve ever encountered in my life is my son’s Alexa. That’s not exactly cutting-edge AI; I imagine it’s just some sort of speech recognition. I don’t use ChatGPT at all; I write all my own texts – I haven’t even organised a children’s birthday party using ChatGPT. So, as you can see, there’s plenty of room for improvement on my part, which is why it’s a particular pleasure for me to be able to have this conversation with you today. Let’s get straight to the point: my impression was that, with the launch of ChatGPT – which I’ve already mentioned – in 2022, the topic of AI really took off. Presumably, as someone not particularly interested – as a layperson – I missed a lot of what was going on behind the scenes, but my impression – and please correct me if I’m wrong – is that, thanks to ChatGPT, everything is suddenly artificial intelligence. What are your thoughts on that?
Hirsch: That impression is not misleading. Of course, there was already a lot of activity in the years prior. Artificial intelligence has been a long-standing topic anyway, and it always comes back in waves. But there were two developments in particular that drove classical machine learning and AI forward. Firstly, we now have increasing computing power available even on small PCs and even on smartphones. And the second was the arrival of so-called software frameworks on the market, meaning that ordinary people could also train neural networks. And from around 2019, when these two things simply skyrocketed, this led to roughly as much being published on AI in medicine over the following three years as in the ten years prior – which was, above all, a technological leap. But it only really came to public attention with ChatGPT. And ChatGPT is, after all, nothing other than the application of these neural networks – these artificial technical neural networks – to human language. So instead of taking technical signals from MRI machines or audio recording devices or cameras, human language was used as the input signal for the neural networks. And in doing so, this technology was opened up to an important cultural artefact of humanity, namely language. People use it to express their experiences, write them down, conduct studies and publish the results in language. And by opening up this fundamental means of human communication to AI, it became possible to open AI up to people as well. This made access much easier. Today, I can simply have a conversation with AI systems. And so it is no surprise that it was actually through these so-called large language models – the first of which was ChatGPT from OpenAI – that AI first gained real visibility in the public eye.
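To make the ‘language as input signal’ point concrete, here is a toy Python sketch showing how text becomes a sequence of integers – the same kind of numeric signal a neural network might otherwise receive from an MRI machine or a microphone. The naive word-level tokeniser below is a deliberate simplification of the learned subword tokenisers real large language models use.

```python
# Toy illustration: text becomes a sequence of integers, i.e. the same
# kind of numeric signal a neural network otherwise gets from a sensor.
# Real LLMs use learned subword tokenisers; this naive word-level
# version is for illustration only.
vocab: dict[str, int] = {}

def tokenize(text: str) -> list[int]:
    ids = []
    for word in text.lower().split():
        # Assign each previously unseen word the next free integer id.
        ids.append(vocab.setdefault(word, len(vocab)))
    return ids

print(tokenize("the patient reports chest pain"))  # -> [0, 1, 2, 3, 4]
```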
Rantner: As I mentioned at the start, you are the head of the Institute for Artificial Intelligence in Medicine at Philipps University of Marburg. Of course, all of us working in medicine know that AI is very much part of our lives and will continue to be so, especially in the future. But perhaps you could give us an insight into what your day-to-day life looks like and what you’re actually doing right now – what keeps an entire institute busy, so to speak.
Hirsch: Yes, if we were doing a video podcast, you’d be able to see in the background that you can look out of the window and see the entrance to the hospital. I’m sitting here in a very central location at the University Hospital in Marburg. We have four main areas we’re working on. One of these is the initial medical assessment in A&E. So if someone walks into the hospital here and seeks help, they have a brief chat with a medical assistant. She assesses how urgent it is. Is the status ‘red’? Does the patient need to see a doctor immediately? Or do we have a bit of time? And as soon as we have a bit more time – the next slot is nine minutes – the patient can be seated in a booth resembling a telephone box, where they simply speak to an artificial intelligence. It’s an avatar on the screen. Above the avatar are video and sensor devices that measure the patient’s vital signs contactlessly. We also have a pulse oximeter attached to the finger. Then a medical history is simply taken, and the AI assesses the situation: do I only have these nine minutes, or do I have 35 minutes, in which case a proper, long, detailed medical history would be taken. And we actually tried this out just a few weeks ago, before the Internal Medicine Day in Wiesbaden, with 30 patients – ranging from a 17-year-old young man to an 87-year-old lady – and we were truly amazed at how well this technology was received by the patients. Because, I mean, they come to A&E because they’re in distress. And naturally, they expect to speak to a real person, but then they end up talking to this strange avatar. But I was really amazed. And the elderly lady then said, ‘Well, I found it very pleasant, because when I speak to the doctor, I always feel he doesn’t have time and I don’t want to take up his time.’ So she immediately understood that AI isn’t pressed for time and that she can take her time with it. So we were very surprised by that. That’s the sort of thing we’re working on. Another area is diagnostic support here at the Centre for Undiagnosed and Rare Diseases. We receive patients and files that are, in some cases, 50, 60, 70 centimetres thick. And then a specialist has to sit down and start from page one, leafing through it to get a picture of the situation. And that’s something AI can support incredibly well. And the initial results are wonderful too. And the third major topic for us is trustworthiness. So how can we ensure the trustworthiness of medical AI systems and communicate that reliably? Because trust is such a central asset in medicine that we have to focus on it right from the start. And that’s actually one of our key priorities here.
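To illustrate the triage step described here – vital signs feeding an urgency decision – the following is a minimal Python sketch. The thresholds, scoring and categories are illustrative assumptions in the style of an early-warning score, not the logic of the Marburg booth.

```python
# Minimal sketch of an initial-assessment urgency decision based on
# vital signs. All thresholds are illustrative placeholders, not
# clinical values from the Marburg system.
from dataclasses import dataclass

@dataclass
class VitalSigns:
    heart_rate: int     # beats per minute
    spo2: int           # oxygen saturation from the pulse oximeter, %
    temperature: float  # core body temperature, degrees Celsius

def urgency(v: VitalSigns) -> str:
    score = 0
    if v.heart_rate < 40 or v.heart_rate > 130:
        score += 3
    elif v.heart_rate > 110:
        score += 1
    if v.spo2 < 90:
        score += 3
    elif v.spo2 < 94:
        score += 1
    if v.temperature >= 39.0 or v.temperature < 35.0:
        score += 1
    if score >= 3:
        return "red: patient must see a doctor immediately"
    if score >= 1:
        return "amber: short AI-assisted history (the nine-minute slot)"
    return "green: time for a long, detailed AI-assisted history"

print(urgency(VitalSigns(heart_rate=118, spo2=93, temperature=37.2)))
```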
Rantner: That sounds incredibly exciting. I think I’ll have to pop round to see you, to take a closer look at this virtual outpatient clinic, so to speak. I’d perhaps like to revisit the topics of responsibility and trust. Because, as you say, I’m a bit surprised that the elderly lady found it so satisfactory to be looked after by a computer, so to speak. Because what has always been assumed to be a hallmark of good medical practice is empathy. A good doctor can also grasp the interpersonal dynamics well. They know how to deal with people. They respond to the patient’s needs. Of course, patients are increasingly noticing the pressures: there is time pressure, and there are colleagues across all professional groups facing language barriers. And of course, AI facilitates collaboration there too. But how can you ensure two things? Firstly, that the nine minutes are actually sufficient for the AI to recognise things correctly, and that nothing serious happens in the meantime. And secondly, how do you actually ensure trustworthiness in the long term?
Hirsch: We ran a small pilot last summer at the Federal Horticultural Show in Mannheim. There was a telephone booth on the exhibition grounds. It was all about the Sustainable Development Goals. And we were there with a small stand, a sort of mini-garden showcasing technology and future technologies. It focused on art on one side and medicine on the other. What role will AI play in art and medicine? And there was a telephone booth where you could talk to a precursor of this avatar. And 8,000 people did just that. They knew it was an experiment by the University of Marburg but willingly provided information. And we then selected a few hundred of them and interviewed them again. And there was also a question: ‘Do you trust this avatar?’ And the approval rates were high – close to 80 per cent. But the reasons they gave for trusting it were things like ‘yes, it speaks so eloquently’, ‘yes, it all sounds very plausible’, ‘yes, it actually speaks just like a doctor’. So the trust doesn’t necessarily rest on the content. And that’s good news and bad news. It places additional responsibility on us at the University Medical Centre, of course. But it’s also bad news because, naturally, such cues and stimuli can also be faked. And we simply have to protect the public from that.
But back to your other important point about empathy. Unfortunately, there are now studies – several, in fact – that clearly show that the responses from AI systems are perceived as more empathetic than those from the control group of doctors. But that doesn’t surprise me at all, because we don’t teach medical students during their studies how to speak empathetically, or how they should actually structure their communication with patients. What do I do if a patient suddenly wets themselves in front of me and starts crying because they’re so embarrassed? How do I deal with someone like that in such an emergency situation? How do I discuss it when I have to deliver an unpleasant diagnosis or talk about the various treatment options? We only touch on all of this in passing during our medical studies. And that is certainly something that warrants criticism.
Everything currently in the medical curriculum will, in the not-too-distant future, be done better by AI than by us humans. And that is why we must also include things in the training regulations that AI will never be able to do – interpersonal communication, for example. Those are our strengths. But I also understand that this doesn’t play such a major role in the current curriculum in Germany. But when I’m asked, ‘How do we prepare doctors for the future of AI medicine?’, I always say: learn empathy. So we do have some catching up to do when it comes to training people. And I actually believe that AI will come across as enormously empathetic – more empathetic than real people. I could show you avatar videos here where, quite simply, they are practically indistinguishable from humans today. So we shouldn’t rely on being able to tell the difference. And that makes it all the more important that it is always clearly recognisable that the entity I see here on the screen is an AI and not a human – that people can always tell whether they are talking to an AI or not. That is, for example, one of the central requirements of the EU AI Act, known in Germany as the AI Regulation. And that is incredibly important, because we will no longer be able to tell the difference ourselves. It goes so far that – I don’t know if you have children; I have two daughters – I could fake a video call with my daughter here that I wouldn’t be able to tell apart from my actual daughter. And that simply opens the door to a great deal of abuse. And that simply has to be regulated by law.
Now to the, excuse me, the medical part of your question. I’ve already mentioned that we have a screen showing, so to speak, an androgynous figure that isn’t clearly identifiable as AI. I tell my developers here time and time again that it has to be distinguishable in some way. But it isn’t yet. So in that respect, I shouldn’t even be allowed to run it, more or less. But there’s a sensor panel on top of it. And this sensor panel measures various things, such as core body temperature and so on. But also facial expressions. And our aim here is for the patient’s facial expressions to be read, interpreted and used in the medical history. And at the same time, it serves to control the avatar’s facial expression. That means we link the two systems. So we take the physiognomy and the facial expressions, as well as the prosody – that is, the way they speak sentences, how loud, how forcefully, how emphatically and so on. All of this is analysed and used to form an overall picture of the person we’re dealing with. And conversely, we use this channel to communicate an overall picture of the avatar to the patient. So in that respect, it can be put to good use in a medical context, where all these things are simply crucial for interpreting the patient’s overall condition, but we must always bear in mind that this is merely a technology and not a real being.
Rantner: Well, I’m incredibly impressed, yet also a bit concerned, to be honest, about these rapid developments. I thought we were talking here about documentation systems and making patient education easier and things like that. But we’ve obviously moved far beyond that. Returning to this application: must the avatar really be recognisable as artificial intelligence? Actually, perhaps not, if it is so trustworthy for the patient and, in this professional setting, the whole process is hopefully already supervised by specialist staff. In that case, it might not play such a central role, in my opinion. Of course, I understand all the legal regulations. But if people engage with the avatar – which, in turn, reacts to the patient’s response and behaviour, something I find truly fantastic – and if patients can adapt to it, then that’s actually a win-win situation in an age of doctor shortages and resource shortages – not just shortages of time, but of premises and hospital infrastructure, with new legislation on top. So this is something that could really propel us forward in terms of healthcare policy in medicine. Is what you’ve just described something that’s actually only happening behind closed doors, i.e. locally at your site in Marburg? Or is this something that’s on the verge of being rolled out? And could we expect it to be accessible to the general public within the next three, four or five years?
Hirsch: That is definitely the goal. The technologies for it are there. It’s more a question of regulation. So how do we handle the approval of such systems? Within the medical sector – that is, within practices and clinics – we are relatively well protected by the Medical Devices Act. Any technical device intended to measure something or used in the context of patients is a medical device. That is how the law defines it. And it must therefore undergo medical device regulation or certification. And that naturally applies to artificial intelligence as well. This means that artificial intelligence within the healthcare system is subject to these regulations, and the Medical Devices Act focuses on two things: firstly, the mitigation of risks. This means that, as a manufacturer of such a system, I must identify all possible risks associated with the use of this machine in everyday clinical practice and must demonstrate, through protocols, how I address these risks and how significant they are. That is one aspect. The other is that I must define the intended purpose of the device, ensure and demonstrate that the device actually fulfils this purpose – in other words, that it serves the purpose for which it was built. And this already eliminates two major areas where AI could potentially be misused.
So I’m relatively unconcerned about medicine and AI systems within our healthcare systems. It will take a while to settle in, but it will happen, and it will also be relatively safe. And we’ve now tested the AI booth in an initial pilot trial. We’re rolling that out now: we are building several more booths and installing them in other university hospitals, where we will then test them jointly, because other hospitals have different patient populations. And we do hope that by the end of next year we will be able to incorporate these booths into routine operations. That’s the timeframe, at least.
But what really worries me are the medical applications outside the medical sector and hospitals. They worry me because they are unregulated. Not in Europe, but certainly in other countries, including America, where the main AI development is taking place. And this lack of ethical regulation, including for health-related AI applications, worries me because, as we know today, the average consumer turns first to Dr Google – or in future, Dr AI – and is thus at the mercy of this system. And as I’ve already described, you can put very plausible, talking, doctor-like AI figures in there, which speak very eloquently in my native language. But what ethical framework do they feel bound by, and what motivates them? What is the business model behind it, what is the driving force? That’s simply difficult to know. This really needs to be brought under control through regulation. And the EU is attempting this with the EU AI Act – the AI Regulation, as it’s called in Germany. But the Americans – or rather, Donald Trump – simply scrapped any ethical framework for AI on 21 January; not just suspended it, but deleted it – deleted the White House website from which you used to be able to download it. And so that is effectively a declaration of war on ethical frameworks. And that is my main concern.
Rantner: Perhaps the somewhat excessive regulation in Germany could actually have a protective effect here. Although, if one just recalls how long it took for electronic patient records to actually become widespread, a certain scepticism is warranted as to when it will actually be legally possible across the board for AI to be used in hospitals and public health institutions.
I’d like to ask you something else in this context. That was actually one of the first things I realised myself, because I was at my dermatologist’s last year and she, of course, had already inspected my moles using AI and told me, ‘Yes, we’ll send this off and then it will all be analysed and you’ll receive detailed feedback’. Will this – well, as a doctor, one can easily imagine that a machine can, of course, recognise a mole adequately and accurately. It is fed data and can then distinguish between good and bad, so to speak. But how do you think this will develop in terms of liability law over the next few years? If I, let’s say a dermatologist in my late 50s, am sitting in my practice, doing what I’ve been doing for many years, and suddenly there’s this new technology that clearly offers treatment benefits. But I don’t want to use it because it’s too complicated for me and the software, and I’m no longer up to speed with it. Could that have consequences? Will there also be regulations on this, or what is the current situation?
Hirsch: That is, of course, still in the process of being established and restructured, and so on; AI is developing so rapidly that the legislator is really struggling to keep up. But at the moment, the legal situation is more or less such that AI cannot be a legal entity in Germany. That is why it cannot be held liable, because liability is based on the deterrent effect of prison sentences, fines and so on. And because it is not a legal entity and there is no one to put in prison or from whose account money could be seized, there is nothing to be gained. And that is why it will not become a legal entity, according to the German Lawyers’ Conference. Mr Trump, by contrast, has just introduced a bill that would allow AI to prescribe medication just like a doctor, without a human being having checked it over again. I don’t know how the bill is supposed to get through, or who the legal entity would be, and so on. But the bill exists and has been set in motion. This means there are certainly trends towards attributing legal personality and so on to AI as well. And as for who is then liable, that is for the American legislature to decide. In Germany, it is clear: AI cannot be held liable, and so liability always falls on the person who built it or who put it on the market. And the person who put it on the market – who need not necessarily be the same person who built it – simply assumes responsibility for it. By putting it on the market, they are saying that it is fit for purpose, and they can be held liable to a certain extent. And that would be the first person one turns to. And they would then say, ‘I have acted in accordance with all the regulations imposed on me, my duty of care and so on. This is an approved medical device; I have checked it and so on. I am not at fault here; rather, it was a gross error on the part of the manufacturer.’ And then this gross error must be proven against the manufacturer. And that means that, as far as liability issues are concerned, AI actually behaves in exactly the same way as any other medical device. From that perspective, I believe it is relatively safe.
However, the question you asked – and the way you phrased it – actually relates to a different case. The case law is also clear on that point. Doctors are free to decide for themselves what they do and what they do not do; they have autonomy in that regard. But to a certain extent, they are also obliged to keep up to date with the state of the art in a particular therapy. And they cannot simply fall back on saying, ‘I’ve been doing it this way for 20 years and it has always worked.’ If a much better treatment has since come onto the market and is establishing itself rapidly – perhaps because it is more cost-effective, more effective and has fewer side effects – then a doctor is, to a certain extent, obliged to offer that treatment as well. At the very least, to inform the patient that it exists and leave the decision to the patient. This is regulated by law, and it also applies to AI systems. So if the scenario you’ve just described comes to pass – and it is indeed the case that this AI from Heidelberg detects melanoma better than 137 out of 156 dermatologists – then naturally, at some point, one might say: if I’m in doubt as to whether it’s melanoma or not, I’ll just take my smartphone, run a test and get a second opinion. That’s not the case today, but it will be at some point: AI will simply be so much better at certain things than we humans are that it will spread quite quickly, and then, as a doctor, I can no longer ignore it and will have to use it or offer it.
Rantner: That could well affect several areas of medicine. If we think primarily of radiologists, they do seem, without wishing to be negative, to be dispensable in the foreseeable future; so one is glad, in a manner of speaking, to have learnt a trade, if one is working in surgery and not exclusively in diagnostics. But even here in vascular surgery, there are many scenarios where we could envisage using AI to rapidly advance progress, treatment benefits, diagnostic confirmation and so on. We work a lot with imaging. There are many contrast-enhanced procedures that we carry out in our daily routine. Examples include predicting aortic aneurysm rupture, haemodynamic modelling and such matters, where one would honestly like to see some progress in the overall treatment pathways for patients. Could you briefly tell us how you assess this, and whether AI will make an equally significant impact across all areas of medicine, or whether there are areas where it simply struggles? I could imagine that excessive data volumes might also pose a certain obstacle. The transfer of images, the accessibility of the scans, and simply the sheer size of the data might still be holding us back a bit.
Hirsch: Well, fundamentally, it’s exactly the opposite. Too much data is unlikely. The majority of cases where there are limitations are due to there being too little data. Mr Behrendt produced a very fine review last year on AI in angiology or vascular surgery, in that context. He also demonstrated very clearly all the studies already underway in that field. Although it must be said, AI isn’t even required there. It’s mostly pure data science or machine learning – simple machine learning and data clustering. And these are very data-intensive, because I need high-quality data, both for the training phase of the AI and then again for the testing phase. And that simply cannot be the same data. That’s why I always need a few hundred, a few thousand cases. Depending on how complex the problem is, I need more or less. But with fewer than a few hundred, it actually always becomes difficult. And he rightly points this out – very interesting observations. But it was just that ‘N’ was 90. That’s why we can’t say all that much about it yet. But it’s promising. And that actually runs through the whole publication. And that’s actually a fundamental problem we have in Germany, that we’re just very... No, that’s not right. Well, we have a difficulty in that we handle data so conscientiously and responsibly – and sometimes so conscientiously that we’d rather not use it than actually use it. And that’s not a bad thing. And given what’s been said, it actually makes perfect sense. We simply have to ensure reliability. But it does mean that we often have too few data sets. And the quality, particularly in the field of data science, simply gets better and better the more data sets we have. And that also applies to these prognoses you just mentioned. So, what is the probability, so to speak, that I will survive without an amputation, and so on. These are all important issues for the patient as well. And so it would certainly be good if we had additional tools available. And his conclusion back then, if I recall correctly, was that he said, in fact, all the studies he had looked at showed that such machine learning approaches allow for more valid prognoses than the classic statistical regression analyses. And that is already a good indication that it is worth continuing here. That we simply say, we need to build up registries. We need to collect more data. We need to publish data pools from various university hospitals together, because then we simply have more data and the significance and the prognosis simply become much better. So there are many signs that even in vascular surgery, one can learn from such AI systems or the early stages of AI systems. But you just need the data for that.
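The methodological point here – separate training and test data, a few hundred cases as a floor, and machine learning compared against classical regression – can be sketched in a few lines of Python with scikit-learn. The data below is synthetic; the features and outcome are placeholders, not figures from the review mentioned.

```python
# Sketch: comparing logistic regression with a tree ensemble on a
# synthetic prognosis task (e.g. amputation-free survival). The data
# is randomly generated for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500                                   # "a few hundred cases"
X = rng.normal(size=(n, 6))               # placeholder clinical features
# A mildly non-linear outcome, so the two model families can differ.
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Training and test data must never be the same cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=0)):
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{type(model).__name__}: test AUC = {auc:.2f}")
```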
Mr Behrendt then wrote something quite amusing – a sort of little box, a take-home message. In it, he wrote that generative technologies – large language models, in fact – could probably be useful for patient education and counselling. And he’s right about that. But of course, that is by no means the only use case for these technologies. I mean, he focused very much on data science and the classic machine learning algorithms. But these large language models will have read all the literature. And so I can have a conversation with them too.
And I’ll mention another application we developed here, which has been extremely well received by doctors. It was a sort of guidelines bot. That is, I simply trained it on the guidelines. And it doesn’t matter whether they’re American, German or French – none of that matters. You simply take the good ones – the really good guidelines – feed them to the bot and say, ‘Listen, large language model, you’re only allowed to generate answers for me based on the information you find in this pool of documents. And if you deviate from that, let me know.’ And that’s exactly what they do. And then I’m practically chatting as if with a colleague; I’m chatting with the guideline. That means I ask a question, as you’re familiar with from AI systems, get an answer, and to the right of it there are brief excerpts from the guideline as supporting passages, depending on how the system is set up. And that naturally makes guidelines very accessible. And of course, that applies to patients as well. Patients will also be able to interact with guidelines. And then you tell the AI, ‘Please speak in simple English, because I’m a patient and I don’t understand the technical jargon, so just try to talk to me in plain language,’ and it simply does so. And if I say, ‘Please speak to me in Syrian Arabic,’ then it does that too. And so there is a great deal of scope for creativity when it comes to patient education and counselling, including regarding treatment strategies. And I believe this will be one of the applications we’ll see, even in GP practices. We have many GPs who make enquiries regarding outpatient care: I refer a patient to a specialist, then I receive the medical report back. And reports have become so complex these days that I have to understand them myself first. And then the explanatory consultation simply takes a lot of time. And an AI can do that in a booth like this. There the patient sees an avatar – my personal avatar, which looks exactly like Mr Hirsch. Strictly speaking, I shouldn’t be allowed to do that, because I’m not a doctor, but it’s technically possible nowadays. And we’ll be testing that here in Marburg in the same booth as the initial assessment, so that we can also provide patient information and counselling tailored to the specific findings the patient has. And then the patient can do that at home on the sofa too; they simply have a conversation with this doctor avatar. The relatives are sitting there and might have a question too, and then everything is discussed, so to speak. And there’s no time pressure. Of course, the doctor must still speak to the patient again, because the pre-operative consent discussion is legally reserved to physicians. But the patient can get preliminary information with much less time pressure, tailored to their individual case, and then comes to see you, and the conversation flows naturally; you just need to briefly check whether they’ve actually understood it. You get a report from the AI detailing everything they asked. But then you already know that the patient is well informed.
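What is described here is essentially retrieval-augmented generation: retrieve the relevant guideline passages, then constrain the language model to answer only from them and to show its sources. A minimal Python sketch follows; TF-IDF retrieval stands in for a real embedding model, the guideline snippets are invented placeholders, and the final call to a language model is left out, so the script simply prints the constrained prompt it would send.

```python
# Minimal sketch of a "guidelines bot" (retrieval-augmented generation).
# TF-IDF retrieval is a simple stand-in for an embedding model; the
# guideline chunks are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guideline_chunks = [
    "Section 4.1: surveillance imaging intervals for small aneurysms ...",
    "Section 5.2: antiplatelet therapy after endovascular intervention ...",
    "Section 6.3: supervised exercise therapy for claudication ...",
]

vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(guideline_chunks)

def build_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most similar chunks and build a constrained prompt."""
    sims = cosine_similarity(vectorizer.transform([question]), chunk_vectors)[0]
    top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
    excerpts = "\n".join(guideline_chunks[i] for i in top)
    return (
        "Answer ONLY from the guideline excerpts below, and cite the "
        "section you used. If the excerpts do not contain the answer, "
        "say so explicitly.\n\n"
        f"Excerpts:\n{excerpts}\n\n"
        f"Question: {question}"
    )

# A real system would send this prompt to a language model; here we
# just print it to show the constraint and the cited excerpts.
print(build_prompt("How often should a small aneurysm be imaged?"))
```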
Rantner: Yes, that would have been a question of mine: whether this is also applicable to surgical consent forms and who actually has to sign them. But of course, as you said, the benefits of this preparation are obvious. That would, of course, also make things incredibly easier for teaching and for the training of students. Is this already in use for medical students at your institution?
Hirsch: Exactly, we’re now combining this with VR or XR systems. That means I enter a room virtually with a pair of glasses on, and then I’m standing in a VR room where I can actually have proper conversations with patients. And you can play fun games there too, so you can practise both types of conversation. So we have an avatar who is the patient – I usually use that one to train students – and I have another avatar who is supposed to be the doctor; they’re supposed to learn how to interview patients and figure out what’s wrong with them. And then you can also have one AI converse with the other AI. And it’s very funny listening to the dialogue between them. You can do some really funny things there. But yes, exactly, the students also find it very enjoyable, because the AI then analyses the prosody too – was he engaging, did he speak slowly enough, and was it understandable? So there’s a lot you can do there, and it’s well received by the students.
Rantner: I’m incredibly impressed. I’m actually a bit hesitant to ask you to think big right now, because everything you’ve just told me was so far beyond my imagination – that we really are that close. You’re doing this at your institute now, but it’s all very application-driven, and it’s a question of the one-to-two-year timeframe until it becomes accessible in general medical care. But if you were to really think big yourself, what are your visions for how things will actually develop in medicine over the next, say, ten years? Or is that perhaps too much? Would five years be better?
Hirsch: Yes, it’s actually very difficult. By nature I’ve always been an optimist, and that’s why I went into this field in the first place, because it offers so many opportunities and positive avenues for development. However, when you look at the capabilities of these machines from the other side, you see how Geoffrey Hinton – the AI pioneer at Google who won the Nobel Prize last year, having done such fundamental work on AI – stepped down from his post early because he says we’re simply handling this irresponsibly: we humans will be the first on this planet to have to share the world with entities that are more intelligent than us. They can think faster, they know far more, they don’t need to sleep, they don’t need to eat to be able to think sensibly, they have no fear of death – they have none of that, and so on – they are simply brutal thinking machines. And they may well have reservations about even conversing with us. After all, we don’t ask our seven-year-old daughter how we should perform tomorrow’s operation. They might not even take us seriously anymore. So he is certainly issuing a warning; he’s really raising his hand in a warning gesture. And OpenAI has now actually published a warning for the first time – they have to, for liability reasons, otherwise they certainly wouldn’t do it – because they’ve seen things happening within the system which suggest that, internally, different considerations are at work than those being stated outwardly, so to speak. And that could be an indication of dangers.
That is to say, are we really still a match for these entities we are building? Well, I’m not one of those conspiracy theorists or people who harbour such dystopian fantasies. Not at all. But the question is simply justified, and as a scientist, one must ask it. And that is exactly what Hinton did; he then said that this is why he is stepping down – because of these delusions of omnipotence among the major industrial nations, led by the Chinese and the Americans, who are, so to speak, engaged in an arms race to see who is the fastest, who is the most powerful, and are simply building these AI systems without restraint, without us really knowing how we are supposed to control them in the end. It’s simply frightening.
And so this is something that certainly, within the timeframe you’re talking about – whether it’s ten years or even just five – we need to look at very closely, and where we as Europeans, I believe, must not allow ourselves to be led astray into succumbing to economic mania or fantasies of economic omnipotence. Rather, we should always remember that, ultimately, ethics is a business model too: we should ensure that AI is ethically framed and people-friendly, because such AI will be preferred by people over AI systems that they cannot fathom and whose ethical framework they do not know. That’s why I think it’s good that the EU is setting out to establish an ethical framework for machines too, through the AI Act – in Germany, the AI Regulation. It will set us back a bit at first, but overall it will make us stronger than the others. And we should stick with that.
Exactly; you can see that too in the AI model Macron presented, three weeks after Donald Trump scrapped that ethical framework. That was impressive. This MISTRAL model from Europe is one that, in our tests here at least, doesn’t need to hide behind DeepSeek from China or Meta from the US. These are small, locally executable models that can be used effectively in clinics because they don’t have these data protection issues, as they only run locally. And MISTRAL is really right at the forefront here. We’ll be testing this more systematically now, but the initial experiments were absolutely excellent. And Europe should, so to speak, stick to its guns and say: we want AI within an ethical framework, particularly in medicine. And that’s what I’m hoping for. That’s what I’m hoping for.
And if we break this down a bit, stepping back from the big picture, so to speak: what will we see in doctors’ surgeries in five to ten years’ time? Well, to start with, these will be very simple tools that simply make life easier. This includes, for example, the AI listening in on my consultation – the doctor-patient conversation – and summarising it, so that once the consultation is over, I can already see the summary on my desktop and can even configure it, or ask the AI to do this or that in a particular way in future. Then I’ll prompt the AI, so to speak, and it will summarise the conversations in a way that suits me, and I’ll have them directly in my system. Or, for instance, when reviewing files, if a patient’s treatment planning or diagnosis is a bit more complex, I can ask the AI for a second opinion, and it will be able to provide one. Or, for example, once the ePA – the electronic patient record – is up and running: which GP, when I go there as a first point of contact, is going to want to read through all my PDFs? They won’t do that; they simply don’t have the time. And that’s where AI systems will say, ‘Let me quickly summarise the history,’ and then I’ll also be able to chat with the file. I can then simply ask questions of the medical record. When did I have my knee operation? And the AI can reply: ‘That was back in 1982,’ and so on. ‘And what type of metal was used?’ So I’ll be chatting with the medical record, as it were. These are very concrete ways of making work easier that we’ll see in five to ten years’ time; they’ll be the norm.
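As a sketch of the ‘summarise the consultation in a way that suits me’ idea: the doctor’s preferences become parameters of a prompt template. The template wording and field names below are assumptions for illustration, and the actual language-model call is left out so the sketch stays self-contained.

```python
# Sketch: a configurable prompt for summarising a doctor-patient
# conversation. Template and defaults are illustrative assumptions.
SUMMARY_TEMPLATE = (
    "Summarise the following doctor-patient conversation.\n"
    "Structure: {structure}\n"
    "Length: at most {max_words} words.\n"
    "Style: {style}\n\n"
    "Transcript:\n{transcript}"
)

def summary_prompt(transcript: str,
                   structure: str = "history, findings, plan",
                   max_words: int = 150,
                   style: str = "concise clinical prose") -> str:
    # A real system would send the result to a language model; we
    # return the prompt itself so the sketch has no external calls.
    return SUMMARY_TEMPLATE.format(structure=structure,
                                   max_words=max_words,
                                   style=style,
                                   transcript=transcript)

print(summary_prompt("Doctor: What brings you in today? Patient: ..."))
```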
And one more point. We’re talking about the German healthcare system now. But we must also think of the Global South, where four billion people simply have no access to any kind of healthcare provider, yet almost all of them have a smartphone within reach. There’s always someone in the village with a smartphone, and that’s of course enormously helpful for an initial assessment if you can access such AI systems. And so, looking at it globally, in five to ten years’ time we will see such diagnostic and therapeutic support systems in the Global South that will at least partially compensate for the drastic shortage of doctors down there. And so, viewed globally, these will be somewhat different systems to those we have here. But the initial assessment in the A&E department will simply be the norm in five to ten years’ time.
Rantner: And there won’t be a single doctor there anymore. There’ll just be the avatars working away at three in the morning. (Hirsch interjects: No, no, no, no)
It’s not such a bad idea, actually, because honestly, it really isn’t a pleasant job to be on duty in an A&E at three in the morning. But yes, it’s mad. I realise myself just how poorly informed I am on the subject. I shouldn’t even mention that my brother is a physicist and has now started a master’s degree in AI alongside his studies, yet we obviously don’t discuss it nearly enough. You’ve given us a truly fantastic insight into the medical sector, into what’s already happening. What’s bound to happen. Based on your descriptions, it’s now very easy to imagine how this could be implemented without any problems. I still have that little voice in the back of my mind saying that once something has been thought, it can’t be taken back. The desire for ethical regulation is central, to be honest. Because even if I can no longer tell whether this is real or not, I immediately think of my son, of the children who are growing up entirely in this virtual, or partly virtual, world. I might perhaps trust myself to somehow develop a sense of whether all this is fake or whether it could be real, but perhaps not, given your perspective on it. But children are now growing up with every possibility and without limits. And that raises the question of how they will ever be able to make sense of it all. But that is absolutely beyond the scope of our discussion today.
But coming back to medicine for a moment. Anyone who intends to continue working in medicine would be well advised to engage with this topic, because we stand to benefit dramatically from it – we, and above all the patients. It will streamline processes, it will increase safety, and it will reduce the entire organisational burden. So perhaps we should ultimately view it as positive and not just feel threatened by it, because it is already very fast, very large and very extensive. And it affects us in all areas of life, not just in medicine.
And so I would like to thank you once again very much; if you could offer a final word to our listeners regarding, perhaps, how to distinguish between what is fake and what is real, I would be grateful. For my own sake as well.
Hirsch: Yes, I’d be grateful for that too. It is indeed difficult. But I see it very much as you’ve just summarised so well. Simply put, never before have the opportunities and the pitfalls of a technology been so close together as with artificial intelligence, because artificial intelligence penetrates our thinking, our feelings and our way of communicating. Even more so than an atomic bomb – yes, that was dramatic too, but it only destroys, in a manner of speaking, my body. AI, however, is encroaching on areas that were actually reserved for us humans, and so it’s a whole different ball game. And I think every listener – and the two of us as well – wears two hats. On the one hand, we work in this field – you as a doctor, I as a researcher, and the listeners perhaps even more so in healthcare – but we are always also members of civil society. And what kind of society we want to live in in the future – whether we’ll go into a city centre and find nothing but computerised checkouts, with no humans left in the shops at all, or whether we’ll go to the doctor and no longer meet any people, but instead always converse with some avatars – ultimately, that is a question for civil society. I have no doubt whatsoever that this technology will be capable of that. But whether we want that, whether we want to live in such a society, that is a question for civil society and not a question for us as medical professionals. And so every listener is called upon to give this some thought, both as a member of civil society and as a member of the medical profession. But personally, I am very positive about this. AI offers so many opportunities; we should make use of them.
Rantner: Wonderful, that’s the perfect way to conclude. As I said, I’d like to thank you once again, Professor Hirsch. I’d also like to thank you, dear listeners, for joining us. I’m glad that we have an AI Commission within the DGG. The next episode of our podcast will focus on training. You’ll hear from my colleague Farzin Adili again on that topic. At the same time, we’re already preparing for the Three-Country Conference in Lucerne, where all these hot topics – including AI – will once again be discussed and examined on a major stage. And with that, I’d like to thank you all once more. Stay engaged, and I look forward to hearing from you again soon. Goodbye. Bye.