[Andreas J. W. Goldschmidt, Thomas M. Deserno, Alfred Winter, editors: KI in der Medizin. A review from a legal perspective]
Ulrich M. Gassner, University of Augsburg, Institute for Medical Devices Law, Augsburg, Germany
Bibliographic details
Andreas J. W. Goldschmidt, Thomas M. Deserno, Alfred Winter (editors)
KI in der Medizin – Folgenabschätzung für Forschung und Praxis
Publisher: medhochzwei Verlag, Heidelberg
Year of publication: 2025, 314 pages, price: €89.00
ISBN: 978-3-98800-141-2
ISBN (eBook): 978-3-98800-142-9
Review
Ethical challenges in AI-associated medical informatics, biometrics and epidemiology
Medicine is considered the most important field of application for AI. Hype or hope is no longer the question. Nevertheless, many questions about the specific risks of AI-based software in diagnostics and therapy, as well as in medical research, remain unanswered. The debate is far from over. The topic of AI in medicine, for example, was widely debated at the 129th Deutscher Ärztetag (German Medical Assembly) as recently as the end of May 2025. The basis for this was a paper published by the German Medical Association under the telling title ‘Of medical art with artificial intelligence’ [1], which consolidates three earlier expert reports: the statement ‘Artificial intelligence in medicine’, the thesis paper ‘Artificial intelligence in healthcare’ and ‘Decision support for medical practice through artificial intelligence’. In this general context, the scientific perspectives of medical informatics, biometrics and epidemiology naturally remain rather underexposed. It is therefore all the more commendable that the German Association for Medical Informatics, Biometry and Epidemiology (GMDS), and in particular the members of its Presidium Commission ‘Ethical Issues in Medical Informatics, Biometry and Epidemiology’, including three of the editors, have been addressing this topic for some time. The book reviewed here is based on three workshops organised by this Presidium Commission, and its 19 individual contributions can accordingly be assigned to the workshop topics, among them ‘Ethical Issues in Medical Informatics, Biometry and Epidemiology’ and ‘Ethical Principles in Medical Research with Examples’.
Contributions to the ethics of AI-associated medical informatics, biometrics and epidemiology
The individual contributions, each preceded by an abstract, examine the ethical, legal and social dimensions of AI-associated medical informatics, biometrics and epidemiology from different perspectives. The first contribution (Andreas J. W. Goldschmidt) is of a fundamental nature and deals with the definition of AI, large language models (LLMs), the German Medical Research Act and impact assessment for research. The following article by the members of the aforementioned Presidium Commission (Thomas M. Deserno, Birgit J. Gerecke, Andreas J. W. Goldschmidt, Alfred Winter) assesses the GMDS ethical guidelines of 2008 against the highly complex national, European and international normative setting and concludes that they remain relevant, even against the background of the disruptive developments triggered by AI. In some key respects, they could also provide a good basis for the AI guideline developed at the University Hospital Erlangen under the direction of Hans-Ulrich Prokosch, which came into force in summer 2024. This impressive pioneering work, one of the first AI guidelines at a German university hospital, is presented in detail (Hans-Ulrich Prokosch/Timo Apfelbacher/Sude Eda Koçman/Annika Clarner/Martin Schneider). The article rightly emphasises the relevance of the legal framework defined by the Medical Devices Regulation (MDR) and the Artificial Intelligence Act (AIA). The authors also do not underestimate the dynamics of development at all levels and convincingly stress that they understand their AI guideline as a learning system. The following two contributions (Peter Walcher, Felix Walcher) have a completely different, namely biographical-historical, focus.
The subsequent contribution deals with responsibility in handling health data in health services research, using the example of the active emergency admission register (Felix Walcher/Susanne Drynda/Ronny Otto/Jonas Bienzeisler/Niels Bienzeisler/Wiebke Schirmeister/Alexandra Ramshorn-Zimmer/Rainer Röhrig), and emphasises, among other things, the role of the researchers’ forum internum, their conscience, beyond all normative stipulations. The seventh article navigates the ‘regulatory jungle’ of AI use in medicine in a very prudent and accessible manner, also addressing the German Health Data Utilisation Act and liability issues (Karolina Lange-Kulmann/Marina Schulte/Thanos Rammos/Majula Jaiteh/Julian Pusch). However, the broad overview required in this interdisciplinary context evidently left no room for a problem-orientated and differentiated discussion. It should therefore be noted that some of the authors’ statements, for example on Rule 11 (cf. [2]) or on the principle of personal service provision (cf. [3]), appear too apodictic from a jurisprudential perspective and merely reflect the always changeable prevailing view. In the following article, the authors (Jonas Hügel/Nils Beyer/Robert Kossen/Alessandra Kuntz/Harald Kusch/Sophia Rheinländer/Sabine Solorz/Ulrich Sax) discuss ethical standards for dealing with the vagueness and uncertainty of predictions in personalised medicine and the genomic data relevant to it, drawing on the ethical guidelines of the GMDS, the so-called Washington formula developed by Beauchamp and Childress, and legal guidelines. Interdisciplinary dialogue and targeted training of all those involved are presented as the means of choice for overcoming ethical challenges.
The following article (Kai Wegkamp) focuses on quality, bias and benefit in the machine learning cycle of medical applications and emphasises that the interaction of a wide range of knowledge domains is crucial for their development and integration into patient care. It highlights that this cycle, with its various strongly interdependent levels, determines the overall quality and medical benefit of the AI application. The next article deals with ethical aspects of the digital transformation in healthcare, using cardiology as an example (Birgit J. Gerecke). The advantages and disadvantages of digital self-measurement through health apps and of AI-based imaging are discussed, among other things, against the yardstick of the so-called Washington formula. The article ends with the desideratum that medicine, even with AI, should remain the art of healing the sick. This is followed by the presentation of a fast-track procedure for the ethical assessment of non-medical research projects (Walter Swoboda/Martin Schmieder/Julia Krumme/Johannes Schobel/Karsten Weber), which was developed for the Joint Ethics Committee of Bavarian Universities of Applied Sciences and is unique in Germany. The developers currently still reject the use of LLM-based chatbots in order to ensure the absolute primacy of autonomous human decision-making. In view of the ever-increasing performance of generative AI and easily operationalisable alignment options, however, such a stance is becoming increasingly difficult to justify. The following wide-ranging overview article considers the necessary assessment of AI in medicine from the traditional overarching perspective of Health Technology Assessment (HTA) (Anna Moreno); it also addresses relevant legal requirements beyond the MDR, which, alongside the AIA, is primarily relevant for the development and production of AI-based software for medical purposes.
The author concludes that conventional methods of evidence evaluation are no longer sufficient in the context of AI and calls for ethical aspects to be given greater weight, including in terms of personnel. The following article on ethical implications in the practical implementation of digitalisation and AI in healthcare (Jan Appel) provides an impressive systematisation of these aspects. Among other things, the author criticises that ethical aspects at the border of normative conventions (inclusion, liberal democracy, ecological and social sustainability, charity and solidarity) are only insufficiently concretised in guidelines and legal regulations, and outlines possible solutions along the life cycle of an AI product used in the inpatient sector. Based on the premise that companies will increasingly have to deal with the consequences of the internal use of AI, the results of an international survey on the impact of AI in the workplace are then presented (Annika Wagner/Andreas J. W. Goldschmidt). The following article deals with the use of digital solutions and AI-based software for the diagnosis and treatment of rare diseases, which affect around 4–5 million people in Germany alone (Jannik Schaaf/Michael von Wagner/Holger Storf). The focus here is on an AI-based smart doctor portal for patients with rare diseases. The next article presents the introduction of an AI application at a maximum-care hospital from a practical perspective, using the example of Klinikum Darmstadt GmbH, and discusses the associated challenges (Clemens Maurer/Gerhard Ertl). The subsequent article deals with AI support for antibiotic stewardship (ABS), i.e. measures that serve to sustainably improve or ensure rational antibiotic prescribing (Juliane Eidenschink/André Sander/Daniel Diekmann). The authors formulate specific requirements for a corresponding Clinical Decision Support System (CDSS), such as a sufficient knowledge base and the issuing of warnings.
The following article then discusses the advantages and disadvantages of using generative AI in medical documentation and the coding of services, including external review procedures by medical services (Steffen Euler). According to the author, a fully AI-supported process not only helps to reduce the workload of those providing treatment, but can also safeguard operators and at the same time protect patients from unnecessary therapies. The book concludes with a contribution on the development of ‘GA-Lotse’, a sustainable open-source software for public health authorities (Peter Tinnemann/Stefanie Kaulich). Developed largely by the public health authority in Frankfurt am Main and already in practical use, ‘GA-Lotse’ is a modern, flexible and sustainable software solution that meets the specific needs of public health authorities, something that is absolutely essential after the experiences of the COVID-19 pandemic, and at the same time sets future-oriented standards for the entire public administration.
Summary
The anthology, with contributions by 48 authors, succeeds in reflecting the current state of knowledge of numerous disciplines in the field of AI in medicine almost completely. The range of analytical perspectives is extremely broad, extending from global standards to innovative projects at individual sites. Numerous contributions also impress with their interdisciplinary approach. In particular, ethical and legal requirements are compared and evaluated in terms of their practical impact. One point of criticism could be that some problem areas relevant to practice, such as the decisive role of industry standards (harmonised standards) for the development of Artificial Intelligence Medical Devices (AIMD) or the friction between the AIA and the MDR (e.g. with regard to safety requirements and research privileges), are not addressed. The volume is nevertheless impressive precisely because it ignores none of the diverse aspects associated with the triumphant advance of AI in medicine, but gives them the necessary critical attention at an excellent level. No one who seriously engages with the relevant issues will be able to ignore this work. In short, this volume represents the state of the art on AI in medicine.
Note
The English version of the review was created with the help of DeepL and ChatGPT.
Competing interests
The author declares that he has no competing interests.
References
[1] Bundesärztekammer. Von ärztlicher Kunst mit Künstlicher Intelligenz [Of medical art with artificial intelligence]. 2025. Available from: https://www.bundesaerztekammer.de/fileadmin/user_upload/BAEK/Politik/Programme-Positionen/Von_aerztlicher_Kunst_mit_Kuenstlicher_Intelligenz_27.05.2025.pdf
[2] Gassner UM. Medizinprodukte-Software (MDSW): Klärungen zu Regel 11 [Medical device software (MDSW): clarifications on Rule 11]. Medizinprodukte Journal (MDJ). 2024;2024(4):245-57.
[3] Gassner UM. Künstliche Intelligenz in der Medizin – no human in the loop. In: Koch A, Kubiciel M, Wollenschläger F, Wurmnest W, editors. 50 Jahre Juristische Fakultät Augsburg [Fifty Years of the Augsburg Law Faculty]. Tübingen: Mohr Siebeck; 2021. p. 243-271. DOI: 10.1628/978-3-16-160999-2