"Bioética del Medicamento" es una obra académica que aborda los dilemas éticos que enfrenta la medicina actual en relación con el uso de medicamentos. Editado por Rafael Amo Usanos y escrito por un equipo interdisciplinario de expertos en el tema, este libro se enfoca en la investigación y producción industrial de medicamentos, su papel en la sostenibilidad ecológica del planeta, su relación con la práctica clínica y su impacto en la justicia y salud globales. Con 231 páginas y una bibliografía completa, esta obra es una guía esencial para estudiantes, profesionales de la salud y cualquier persona interesada en profundizar en los aspectos éticos del uso de medicamentos.
Half title
BIOÉTICA DEL MEDICAMENTO
Series
DE LA UNIVERSIDAD PONTIFICIA COMILLAS
MADRID
CÁTEDRA DE BIOÉTICA
Dilemas Éticos de la Medicina Actual
No. 35
Series editor
Rafael Amo Usanos
ORDERS:
UPCO SERVICIO DE PUBLICACIONES
C/ Universidad de Comillas, 3
28049 Madrid
Tel.: 91 734 39 50 – Fax: 91 734 45 70
www.comillas.edu/es/
Title page
Rafael Amo Usanos (editor)
BIOÉTICA DEL MEDICAMENTO
Rafael Amo Usanos
Paula Franklin
Amelia Martín Uranga
José Antonio Sacristán
Sergio Surugi de Siqueira
José F. Zamarriego Izquierdo
Loreto Fernández-Oronoz Martín de Vidales
Rogelio Altisent
Pedro Luis Sánchez García
Carina Escobar
José Ramón Repullo Labrador
Juan Carlos Mampaso Martín-Buitrago
Mario Zarzuela Rescalvo
Ricardo Páez
David Curbelo Pérez
Carlo Casalone SJ
2023
Credits
Servicio de Biblioteca. Universidad Pontificia Comillas de Madrid
BIOÉTICA del medicamento / Rafael Amo Usanos (editor) ; [autores] Rafael Amo Usanos [y 15 más]. -- Madrid : Universidad Pontificia Comillas, 2023.
231 p. -- (Dilemas éticos de la medicina actual / Cátedra de Bioética ; 35)
Bibliografía.
D.L. M 7241-2023. -- ISBN 978-84-8468-972-0
1. Aspectos éticos. 2. Medicamentos. 3. Industria farmacéutica. 4. Bioética. I. Amo Usanos, Rafael (1972-), autor, editor literario
This publisher is a member of the Unión de Editoriales Universitarias Españolas (UNE), which guarantees the national and international distribution and commercialisation of its publications.
© 2023 UNIVERSIDAD PONTIFICIA COMILLAS
Universidad Comillas, 3
28049 Madrid
© 2023 All the authors
Cover design: Belén Recio Godoy
ISBN: 978-84-8468-972-0
Legal deposit: M-7241-2023
Layout and printing: Imprenta Kadmos
All rights reserved. The total or partial reproduction of this book by any electronic or mechanical means, including photocopying, magnetic recording or any information storage or retrieval system, is strictly prohibited without the written permission of the Universidad Pontificia Comillas.
CONTENTS
Presentation
Rafael Amo Usanos
The Ethics of Using AI in Healthcare
Paula Franklin
1. Introduction
2. Artificial Intelligence in healthcare
3. Re-imagining patient care
4. Broader ethical considerations
5. Implications of using AI in healthcare
6. Thought leadership to drive appropriate adoption
Bibliographic references
Bioethics of medicines and the lessons learned during the pandemic: public-private collaboration in medicines research
Amelia Martín Uranga
1. Biomedical research in Spain
2. A success story in the COVID pandemic
3. An opportunity for the country
4. Challenges ahead
5. Conclusions
Integrating clinical research into medical practice: ethically possible?
José Antonio Sacristán
1. Research ethics and the ethics of medical care
2. Evidence-based medicine and patient-centred medicine
3. Clinical research oriented towards the individual patient
4. Integrating clinical research into medical practice
5. Research in clinical practice: ethically possible?
6. Conclusion
Bibliographic references
Ethical aspects of clinical trials for new treatments in the COVID-19 pandemic: the experience in Brazil
Sergio Surugi de Siqueira
1. What the COVID-19 pandemic has shown us
2. Elements to be considered in research ethics during health emergencies
3. Problems encountered in studies of COVID-19 treatments in Brazil
4. Requirements for protocols for new treatments in health emergencies conducted in Brazil
5. Guidelines for obtaining consent in studies of new treatments in health emergencies
6. Recommendations for the various actors in the research ethics process
7. Positive points
8. Final considerations
Bibliographic references
The self-regulation system and the Spanish code of good practice of the pharmaceutical industry
José F. Zamarriego Izquierdo
Loreto Fernández-Oronoz Martín de Vidales
1. Introduction
2. Background
3. The Spanish self-regulation system
4. Purpose of the code and scope of application
5. The pharmaceutical industry’s commitment to transparency
6. Transfers of value to be published
7. Information that is published
8. The 2021 update of the code
Bibliographic references
Ethics of the prescribing physician’s relationship with the pharmaceutical industry
Rogelio Altisent
1. The young history of prescription ethics
2. Three cases for reflection
3. Public trust in the independence of medical professionals
4. Ethical elements of prescribing
5. The place of medical deontology
6. The late incorporation into deontology
7. Teaching prescription ethics
8. Managing conflicts of interest
9. Conclusion: funding continuing education and, in the meantime, transparency
Bibliographic references
Annex 1
Medicine funding and the sustainability of the National Health System
Pedro Luis Sánchez García
1. Introduction
2. Elements for analysing sustainability
3. The sustainability of public pharmaceutical expenditure
4. The costs of pharmaceutical innovation
5. The value contribution of medicines
6. Medicines and the sustainability of the SNS: what does the past teach us?
7. Medicines and the sustainability of the SNS: what can we expect from the future?
8. Conclusion
Bibliographic references
Ethics of economic sustainability and access to medicines: the patients’ perspective
Carina Escobar
1. Introduction
2. The role of patient organisations
3. The bioethical context after COVID-19
4. The right to information about medicines
5. Economic accessibility of medicines
6. Patient participation
7. Legislative framework for the protection of chronic patients
Bibliographic references
Ethics of economic sustainability and access to medicines: the perspective of the administration and the general interest
José Ramón Repullo Labrador
1. The general interest and the ethical principle of justice: keys to an appropriate view from the public authorities
2. The economic sustainability of health systems: the reality principle versus myopia or opportunism
3. The context: is an ethical approach possible amid so much indecency and complacency?
4. The difficult task of public intervention to preserve the general interest and justice
5. External and internal sustainability: two necessary fronts for action
6. The agenda of medical professionalism for moderating, through intelligence and ethics, the excesses of techno-medicine and their effects on sustainability
Bibliographic references
Ecological sustainability and recycling
Juan Carlos Mampaso Martín-Buitrago
Mario Zarzuela Rescalvo
1. The importance of bioethics in protecting the environment
2. Sustainable development is not possible without bioethics
3. Health and the environment, a univocal relationship
4. SIGRE, the pharmaceutical sector’s tool for caring for the health of nature
Bibliographic references
Global bioethics and access to medicines
Ricardo Páez
1. Introduction
2. Access to medicines worldwide and the factors contributing to their unequal distribution
3. Bioethical assessment: from needs-based justice and global justice
4. Strategies to follow
5. Conclusion
Bibliographic references
The One Health approach in pharmaceutical bioethics
David Curbelo Pérez
1. Introduction
2. Zoonotic events and One Health
3. One Health and public health policies
4. One Health in times of pandemic
5. From environmental bioethics to One Health
6. The urgent need for a new way of thinking and acting based on health promotion
7. One Health from the perspective of the pharmaceutical industry
8. Conclusion
Bibliographic references
The position of the Holy See on vaccines
Carlo Casalone SJ
1. The relationship between vaccines and abortion
2. Personal responsibility
3. The early stages of the vaccine process: research and testing
4. Economic dimension and commercial exploitation
5. Access, supply and communication
Bibliographic references
Presentation
Rafael Amo Usanos
Director of the Chair of Bioethics
We all remember that fateful March of 2020. The spread of the SARS-CoV-2 virus brought the world to a standstill, confined us all to our homes, put humanity to the test and changed our plans. Among them, the XXXV Interdisciplinary Seminar on Bioethics, which was to be held that May under the title Bioética del medicamento, was put on hold.
However, thanks to the efforts of everyone, and especially of physicians and the pharmaceutical industry, we were able to return to normality in record time compared with the duration of other pandemics. Thus, in May 2022 we were able to take up the plans we had made years earlier and, on 19 and 20 May, held the Seminar that should have taken place two years before.
Over those two days in May we returned to the topic the pandemic had interrupted, the bioethics of medicines, and the papers presented there are collected in this publication.
The bioethical implications of medicines had never before been addressed in the Seminars of this Chair. The topic may seem somewhat removed from our tradition, which has always dealt with questions much closer to fundamental bioethics or to clinical medicine. Yet it is not difficult to see the connection between medicines and nursing or medicine.
History also speaks to us of the bioethical dimension of medicines. In a way, it can be said that problems in the research of new drugs played an important role in the beginnings of bioethics, as reflected in all the literature on abuses in clinical research.
Bioethically speaking, the medicine, understood here broadly to include some medical devices as well, has a life of its own. Its bioethical relevance does not come only from its relationship with clinical practice. The problems of its research, its industrial production and its role in the ecological sustainability of the planet are part of the issues proper to the medicine itself. These topics formed the core of the interdisciplinary Seminar held by this Chair in 2022.
Other bioethical problems of medicines also arise, linked to their role in clinical practice, in the sustainability of health systems and in the global perspective of both justice and health.
Evidently, the treatment of any bioethical topic cannot ignore what happened during the COVID-19 pandemic, and the bioethics of medicines is no exception. For this reason, the seminar programme included the lessons learned during the pandemic, which are partly collected in the first chapters of this publication.
Public-private collaboration, which was essential for the development of medical devices, vaccines and medicines, is covered in the second chapter, written by Dr Amelia Martín Uranga. The role of the regulator, and how research ethics committees acted when they had to resolve extremely complex questions while the lists of seriously ill and deceased patients kept growing, was also addressed in the Seminar; although not included in this publication, those sessions can be viewed on tv.comillas.
The logic of the remaining chapters of this publication is the life of the medicine, a genetic logic. It begins with the initial research to develop it, continues with its industrial production, moves on to the difficult question of the economic sustainability of health systems in the face of growing pharmaceutical expenditure, and ends with the ecological question of drug pollution and recycling. To this life cycle is added the global or holistic sensibility of the epistemology of our time, which is how the questions of justice and global bioethics and the One Health approach must be treated.
On research ethics, José Sacristán set out the difficult but necessary relationship between clinical research and medical practice in the third chapter; and Sergio Surugi de Siqueira did not conceal the abuses that can occur in research, recounting a recent case in Brazil in the fourth chapter.
With regard to the pharmaceutical industry, both sides of the coin are presented: the fifth chapter, by José Zamarriego, shows how it is seen from within the industry, and Rogelio Altisent, in the sixth chapter, how it is experienced by the prescribing physician.
The most complex block is that of the sustainability of the system. Chapters seven to nine address the question from the three points of view involved: the industry, covered by Pedro Luis Sánchez; patients, by Carina Escobar; and the administration, by José Ramón Repullo.
Sustainability must also be understood ecologically, which is why the tenth chapter takes up the question of medicine recycling, addressed by Juan Carlos Mampaso and Mario Zarzuela.
The holistic perspective incorporated into the bioethics of medicines concerns the question of global justice, dealt with in chapter eleven by Ricardo Páez, and the question of so-called One Health, presented in chapter twelve by David Curbelo.
Finally, the book includes Carlo Casalone’s contribution on the position of the Catholic Church with respect to vaccines.
This Seminar and this publication could not have gone ahead without the help of Fundación SANITAS. This year we were fortunate to have the inaugural address by Dr Paula Franklin, Chief Medical Officer of Bupa. In this sense, the 2022 seminar was not a simple remake of the one that did not take place in 2020; it carried within it seeds of the future. She spoke about the bioethical dimensions of Artificial Intelligence, which will be the topic of the XXXVI Interdisciplinary Seminar on Bioethics in 2023.
I cannot finish without thanking those who helped us prepare this Seminar, both in 2020 and in 2022. Of course, our Chair Council, on whom we can always count. But I want to mention three people in particular: Juan Ramón Lacadena, who never ceases to help and surprise us, holds medal number 1 of the Real Academia de Farmacia, and back in 2019 helped us design topics and contact speakers. Amelia Martín Uranga, to whom we owe much, if not most, of what we learned in those days, for her advice and for facilitating contacts. And, finally, Iñaki Ferrando, who brought the future into this present: it was he who put us in touch with Dr Paula Franklin, whose contribution, as I have said, opens up new paths of research and work for us.
To everyone, and to God before anyone else, many thanks.
The Ethics of Using AI in Healthcare
Paula Franklin
Chief Medical Officer for the Bupa Group
1. Introduction
Good morning, my name is Paula Franklin and I’m the Chief Medical Officer for the Bupa Group. Many of you may know Bupa locally as Sanitas, supporting customers, patients and residents through our health insurance business as well as hospitals, clinics, dental centres and care homes.
Bupa is the parent company of Sanitas, and we care for millions of people around the world, in places like the UK (where I am from), Australia, Chile, Poland, Hong Kong and others. We have no shareholders and are a purpose-led organisation, helping people to live longer, healthier, happier lives and making a better world.
As Chief Medical Officer, I have the great privilege of supporting the many talented and experienced clinicians who deliver healthcare across our global company.
I want to be clear at the start of this talk that I am not an expert in AI. I am a medical doctor, I have studied bioethics and medical law and have spent my career working in healthcare with a particular passion for quality, efficiency and patient experience.
Healthcare systems across the world are all striving to deliver high quality care and services for patients. Most of them are also facing challenges with staffing and meeting patients’ needs and demands. Technologies, such as AI, have the potential to revolutionise aspects of healthcare such as improving efficiency, patient experience and outcomes, but their design and adoption in a healthcare setting bring a number of ethical considerations. Over the next 30 minutes I will try to address some of these, primarily from the perspectives of patients and clinicians. AI presents great opportunity, but also great risk if people are not placed at the heart of decision making regarding how this technology is applied. In the future AI will be integral to healthcare systems in ways we haven’t even identified yet, so the structures and controls put in place now will shape healthcare for decades to come.
2. Artificial Intelligence in healthcare
AI is often defined as the ability of computer systems to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, and decision-making (The Oxford Dictionary of Phrase and Fable, 2006). The breadth and complexity of artificial intelligence mean that there is no universal agreement on the definition of «AI» and so, for the purposes of this discussion, I will use «AI» as an overarching term which includes machine learning and deep learning.
This is not new. In 1972, Stanford University used AI to identify bacterial pathogens and recommend antibiotics (Buchanan, Shortliffe, 1984). The AI technology that was developed performed as well as Stanford’s experts, but it was never used in practice, because there was uncertainty about who would be responsible in the event of a wrong diagnosis and incorrect treatment. Even in this early use of AI the question of liability was raised, and with it the ethical question of whether it was appropriate to use the tool.
The opportunities for AI in healthcare, and the sophistication of AI, have evolved considerably since 1972. A vast amount of health data is generated every day from a range of sources. The growth of consumer technologies, including smartphone apps and wearables, has led to the design and use of tools that enable individuals to gather their own health-related data. In addition, the delivery of healthcare also generates vast amounts of data. A single patient typically generates up to 80 megabytes a year in data just from imaging and electronic medical records (Huesch, Mosher, 2017). AI has the potential to harness this data and transform the way that we manage our health.
Many see the role of AI as augmenting human intelligence rather than replacing it (Crigger, Khoury, 2019). Certainly, AI can drive operational efficiency and free up clinicians’ time to be spent where it is arguably more valuable: supporting patient care. For example, AI can automate workflows, prioritise questionable test results for clinician review or automate payments for treatments. AI can also support clinical decision-making, for example by analysing large amounts of health-related data to predict an individual’s risk for particular conditions or by remotely monitoring a patient’s vital signs and alerting the clinician when abnormal patterns are detected.
And the use of AI in healthcare is rapidly evolving. AI has already been approved to diagnose without a clinician’s direct involvement: in April 2018, the Food and Drug Administration in the US authorised the first AI device to diagnose diabetic retinopathy without the need for clinician interpretation of retinal images (Savoy, 2020).
While each AI technology can contribute significant value alone, additional potential lies in the synergies generated by using these technologies together across the entire patient journey, from prevention to diagnosis to treatment, and ongoing care.
3. Re-imagining patient care
I am going to talk through a possible patient care journey that incorporates AI, and I will highlight some ethical considerations that need to be taken into account as we go along.
Our patient today is Anna, a 50-year-old female who lives alone. She lives an active and healthy life and has no health concerns.
Anna is offered the opportunity to wear a device to track her blood pressure, heart rate and glucose levels as part of a new service from her healthcare provider.
The information from the device that Anna is wearing is monitored and analysed and the AI identifies abnormal or undesired patterns, or pattern changes.
When agreeing to wear the device, Anna should be aware of and consent to how her healthcare data will be used, in her own care and more widely, and with whom it will be shared: her clinicians, and potentially others such as researchers or public health organisations.
But under what circumstances, and to what extent, should the principles of informed consent be applied when AI is being used in Anna’s healthcare? In an experimental setting it is expected that the relevant consent would be obtained, but what about in routine healthcare delivery? To what extent do clinicians, healthcare organisations or indeed the organisations that sell such devices have a responsibility to educate the patient about the complexities of AI? How much openness do we really need?
This is still under debate. We do not require informed consent for every supportive process in clinical decision-making, for example we do not explain to a patient the process by which a drug has been proven to be effective for a particular condition. Therefore some people think that introducing informed consent in this way would mean holding AI to greater account than other parts of healthcare. Other people believe that the use of AI is altering the way that care has conventionally been delivered, so it should be explained to Anna and her consent for its use obtained.
An abnormal ECG reading from her wearable device triggers a message to Anna telling her to consult a clinician.
At her appointment, the clinician, Dr Monica Lopez, has received the data which prompted the alert, along with a list of potential diagnoses. Dr Lopez uses these data and her own observations to order additional investigations. Anna’s treatment plan is agreed.
In all this, as always, it’s important that Anna has trust and confidence in her doctor. The use of AI in the clinical pathway should support the doctor-patient relationship, not compromise it.
Wearable devices will continue to monitor Anna’s cardiovascular health, notifying her when intervention may be required.
AI has the ability to reduce the need for in-person consultations between patients and clinicians. While this may be time-saving and easier for both Anna and her doctor, it may also result in Anna feeling less connected with her healthcare provider unless other support is put in place.
Conversely, it could in fact, improve her sense of connection by increasing effective touchpoints with the healthcare system. This highlights the importance of understanding the needs and preferences of individual patients.
In this example, Anna lives alone. AI could empower Anna to evaluate her symptoms and support her with care at home.
Unfortunately, Anna develops a cough that does not get better. She arranges to see Dr Lopez.
Given Anna’s symptoms, there is a wide range of possible diagnoses. Dr Lopez requests a chest x-ray. An AI algorithm assesses the chest X-ray, highlights it as abnormal, and prioritises it for review by a clinician. Unfortunately, there is a concerning nodule on the X-ray which needs urgent follow-up with a specialist, Dr Sanchez.
Currently, AI cannot replace a healthcare professional’s ability to read Anna’s body language, provide empathy and compassion, or individualise the way in which these findings are communicated to her. While AI can take on some of the science, it is not yet possible for AI to replicate the art of medicine.
Up to this point in the pathway, I’ve been describing current realities. Now, we switch into what will be possible in the near future.
Anna has a CT scan of her chest to assess the concerning findings on the X-ray. AI assesses the nodule, which could still represent many diagnoses, and determines it to be suggestive of cancer. This is based on data from the image which is not obvious on visual inspection alone. The algorithm adds more certainty to Dr Sanchez’s decision to classify the abnormality as likely cancerous.
However, the algorithm’s prediction does not amount to a final diagnosis, which still rests with Dr Sanchez, nor does it inform Dr Sanchez about the best options for Anna, how to communicate the relative risks and benefits of the options, or how to support Anna with the choices that face her. This illustrates how AI is a source of information for, but not a replacement of, the clinician. It is also an illustration of the skills and qualities that are difficult to replicate through AI.
An urgent sample is taken, and the pathology confirms the presence of lung cancer. AI is used on both the radiology and the pathology images to predict how the tumour will respond to different treatments. Taking all of the information available, Anna and her doctors decide to proceed to chemotherapy along with resection of the nodule. Anna has benefited here in the time taken from the initial X-ray to starting treatment, which is significantly shorter than it would be in pathways without AI.
However, from both Dr Lopez and Dr Sanchez’s perspectives, AI throws up some difficulties in application. AI technologies and algorithms are complex but, ideally, they are explainable and transparent to users wherever possible. But the «black box» nature of some types of AI could be a barrier to clinician understanding.
There are, however, many current medical interventions where patients and doctors put their faith in the demonstrated benefits of an intervention, in evidence of its safety and efficacy, rather than in explanations of how that treatment works. If Anna had a headache, she could easily buy aspirin over the counter without understanding its mechanism of action. A doctor does not have to understand a vaccine’s specific mechanism of stimulating immunity before recommending it to a patient.
Where possible, AI models should allow clinicians to understand the data and calculations through which a recommendation has been made. Understanding the variables that have delivered this diagnostic prediction would help Dr Sanchez to be confident in the treatment recommendation so that she can support Anna in her decision making.
Understanding the variables can be challenging with some types of machine learning. For example, a type of deep learning algorithm called a «neural network» can outperform other approaches in cognitive tasks like the analysis and synthesis of text or images. However, it is harder to pinpoint the reasons for a neural network’s output because of the number of parameters, quantity of training data, and the number of connections that they are built on. The name and structure are inspired by the human brain, mirroring the way that biological neurons signal to each other. As with the workings of the brain, it can be hard to unravel and explain what is going on in a meaningful way. So there can be a trade-off between black box models, which seem to provide better insights and better outcomes even though we don’t know how, and explainable systems that may have a weaker output but give the benefit of knowing why a decision is made so its validity and fairness can be assessed.
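As a minimal illustrative sketch (using the scikit-learn library and synthetic data, none of which come from this talk), the trade-off can be made concrete: a logistic regression exposes one readable weight per input variable, while even a small neural network spreads the same decision across hundreds of internal weights that are much harder to interpret.

```python
# Illustrative sketch only: synthetic data, arbitrary model settings.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic "patients": 8 numeric features, one binary outcome.
X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Interpretable model: one coefficient per feature can be inspected directly.
linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("logistic regression accuracy:", round(linear.score(X_te, y_te), 3))
print("one weight per feature:", linear.coef_.round(2))

# "Black box": a small neural network distributes the decision across many
# weights in hidden layers, which is far harder to explain to a clinician.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("neural network accuracy:", round(mlp.score(X_te, y_te), 3))
print("weights to interpret:", sum(w.size for w in mlp.coefs_))
```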
Transparency is also important because AI technology can make mistakes or can become out of date as new data emerges. These need to be assessed and reported, particularly to developers and regulators, to ensure that they continue to work in the way that we expect them to. This will help to build trust that these tools are being deployed appropriately and that they are safely delivering improved outcomes.
Using AI, Anna’s chemotherapy dosage is personalised and, as treatment progresses, the regimen is dynamically adjusted to optimise her treatment. AI can monitor Anna’s response to treatment more accurately and in real time, as compared with clinicians alone. This will make treatment adjustment more efficient and should lead to better treatment outcomes.
However, later in Anna’s care journey, the AI suggests a different, more cost-efficient but equally effective chemotherapy regimen than the one Dr Sanchez intended to propose. Doctors will have varying reactions to this. Some will be happy to have this decision made for them. Others may feel that their expertise and experience are not being acknowledged or used.
This can be viewed as «peer disagreement» between two competent experts: the AI and Dr Sanchez. If current and accurate information has informed the AI recommendation, then it could be expected that this is indeed the best clinical treatment for the condition, so it is not quite the same as peer disagreement between doctors. However, the AI cannot take into account, or discuss with Anna, her preferences or her tolerance for particular side effects, and arrive at a regimen that is best for Anna, as opposed to clinically best.
Though some might say that the recommendation of an algorithm would be what is clinically best, because it combines the expertise of multiple experts and many data points, it is important to remember that AI algorithms can be wrong. Therefore, critical appraisal of these recommendations can be valid. Clinical trials of IBM’s Watson Oncology, an AI tool used in cancer management, were stopped in some clinics as doctors outside America did not have confidence in its recommendations and felt that the model reflected a US-centric approach to cancer treatment (The Nuffield Council on Bioethics, 2018).
But are we holding AI systems to a higher standard than we do humans, and if so is that justifiable? Generally, we accept that human beings make mistakes-even doctors! Experts estimate that misdiagnosis rates may be as high as 1 in 5 in some areas and yet most people trust their doctor and very few obtain a second opinion, even for serious conditions (National Academies of Sciences, Engineering, and Medicine, 2015). If AI had an error rate of 1 in 100 000, would that be considered acceptable? I think the main difference here is the scope of the impact. For the most part, if a doctor makes a mistake, it affects one patient, whereas an error in an AI algorithm could affect hundreds or thousands of patients. Our tolerance for errors should be low and we should seek to identify the source of the error, share learning and correct it. It is clear that this is even more important for AI.
This leads us to question the role of human oversight for AI. Should we maintain control over AI systems and ensure that human override is always possible? How should we balance the weight of data and analysis on one side with the experience, expertise and human understanding on the other?
4. Broader ethical considerations
Alongside the perspectives of our patient, Anna, and her doctors, there are other considerations in the healthcare system.
Data is a prerequisite for AI. And although this is not unique to AI, the privacy and security of data will be a priority for the use of any AI tool, especially in healthcare. This data holds significant value, given its potential to train such algorithms, to inform people about their own health, and to inform healthcare professionals and organisations about the health of their patients and customers. However, it can’t be assumed that all patients would consent to sharing the required data. Many people remain wary about sharing their health data. A recent consumer survey by an American health organisation found that whilst 72 % of respondents were willing to share their data with their doctor, just 25 % were willing to do so with a health technology company (Rock Health, 2021).
You might expect machines to be free from human prejudice, but algorithmic biases exist, and can lead to unfairness and discrimination.
The problem is the result of a core assumption in machine learning: that future data will be very similar to past data. Given that past datasets will contain racism, sexism and other kinds of discrimination, as might the humans involved in the development of the tools, machine learning is predisposed to perpetuate historical injustices. In other words, the quality of the outputs is dependent upon the quality of the inputs.
One example is a recent study in the US which found that an AI algorithm was falsely concluding that black patients were healthier than equally sick white patients (Obermeyer, Powers, Vogeli, Mullainathan, 2019). This was a result of using healthcare spending data as an indicator of health status and need. What wasn’t taken into account is the fact that there are racial biases in US healthcare spending: less money was being spent on black patients who had the same level of need.
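The mechanism described in that study can be illustrated with a deliberately simplified simulation; the group labels, the size of the spending gap and the enrolment threshold below are invented for the example and are not taken from the study itself.

```python
# Toy simulation of proxy-label bias: recorded spending is used as a
# stand-in for health need, but spending is systematically lower for group B.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B (hypothetical)
need = rng.normal(0.0, 1.0, n)         # true health need: identical across groups

# Spending tracks need, but half a standard deviation less is spent on group B.
spending = need - 0.5 * group + rng.normal(0.0, 0.3, n)

# A programme enrols the top 10 % of patients ranked by the spending proxy.
enrolled = spending >= np.quantile(spending, 0.9)

for g, name in [(0, "group A"), (1, "group B")]:
    m = group == g
    print(name, "| mean need:", round(need[m].mean(), 2),
          "| share enrolled:", round(enrolled[m].mean(), 3))
# Both groups have the same average need, yet group B is enrolled far less
# often: the ranking reproduces the bias baked into the spending data.
```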
Conversely, using AI and removing humans from decision-making could reduce bias. Several studies have found that human biases relating to gender, age and past medical history can result in relevant symptoms being disregarded and diagnoses being missed (Schulman et al., 1999; Croskerry, 2003; Pope et al., 2000). So machines could be less biased than humans.
Machines or AI systems cannot reconcile these differences themselves: their outputs will depend on what is put into them during their development. But, again, the black box nature of AI adds complexity here, with developers and decision-makers limited in their capacity to assess the level or impact of bias in the AI they are working with.
In addition to all these considerations, the use of AI should be consistent with global efforts to reduce the impact of human beings on the Earth’s environment, ecosystems and climate. AI systems should be designed and monitored to minimise their environmental consequences and increase energy efficiency, both in how they run and in the processes that their outputs are shaping.
5. Implications of using AI in healthcare
There is an old medical adage: «First, do no harm». Throughout all decision making, a healthcare team needs to weigh the benefits of an intervention against the risk of harm.
AI can improve healthcare delivery in ways that were previously out of reach and examples of this are already available to us. During the pandemic, Sanitas used an AI system to remotely monitor patients with COVID-19. If abnormal parameters were detected a clinician would contact them via videocall, offering a level of care outside the hospital that simply hadn’t been possible before.
As already mentioned, the most obvious risk of AI systems is that they will sometimes be wrong, and that a patient could be harmed as a result. For example, the incorrect diagnosis that a skin lesion is benign or the allocation of a hospital bed to one patient over another because AI incorrectly predicted that patient would benefit more.
When this happens it is not always clear who is responsible: the developer, payer, provider, or healthcare professional? Is any apology or reparation owed to the patient and, if so, from whom?
Ideally, responsibility and accountability should be clearly defined before using any AI system.
Where harms are the consequence of an error in the algorithm or in the data used to train the AI technology, accountability might be best placed with those who developed or tested the technology rather than with the clinical team working with it. However, currently healthcare professionals could be held accountable if they follow the recommendation of an AI tool that results in a medical error, or conversely, if they ignore an AI-guided recommendation that might have avoided a patient’s suffering or death. This creates a dilemma for the clinician that may negatively impact the use of AI: I’m at fault if I use it and if I don’t, so I need to rely on my own opinion as always, and therefore what has the AI added? Maybe I see this as just another input to my decision-making.
It is possible that, with improvements in AI, we will see a future where AI becomes standard care, and my choice not to use the technology could therefore be considered «malpractice».
An international study surveyed a number of pathologists on where medico-legal responsibility for diagnostic errors made by a human-AI combination should lie (Sarwar et al., 2019). Opinions were split between those who believed that the AI vendor and pathologist should be held equally liable, and others who believed responsibility remains primarily that of the pathologist. Only a minority said that the AI vendor should primarily be liable. In other words, they expected the pathologists to use their expertise and not rely on the AI.
These are evolving challenges and they remain unsettled in the laws of most countries. However, the proposed Artificial Intelligence Act may resolve some of these uncertainties in the European Union through its human-centred, risk-based approach. I understand that Spain is leading the way in this area, as it is playing an integral role in the development of the EU AI legal framework.
6. Thought leadership to drive appropriate adoption
The progress of a number of aspects of healthcare has been significantly accelerated by the COVID-19 pandemic. Much of the history of medical progress has come through the disruption of accepted positions and beliefs. It has been necessary to break the status quo to achieve better outcomes. With a clear focus on the outcomes of, and responsibility for, the patient, it can be difficult to adopt innovations and new ways of doing things, even in the glare of abundant clinical evidence. The thought leadership of doctors and their influence on peers and patients should not be understated. In AI, as in many other areas of health technology development, transformation will only occur if healthcare leaders have a key role in shaping the development and adoption of emerging technologies, and in ensuring that patients’ considerations remain central.
As AI tools are developed, the involvement of health professionals can ensure clinical credibility, help reduce the risk of bias, and bridge the gap between a complex medical reality and the needs and preferences of an individual patient. Through AI deployment and adoption, clinicians with the appropriate skills can integrate AI into care pathways, design communications standards and navigate the associated politics.
Governance procedures and protocols are also critical when adopting AI in a healthcare setting-especially considering the sometimes-challenging relationship between innovation and regulation. Does use of the technology satisfy requirements for safety, accuracy and efficacy, and are measures for quality control (and improvement over time) in place? Over time, an AI tool will be refined, learning from experience as it receives more feedback. This means that regulators, policymakers and the developers will need to re-review tools as time goes on. Regulatory frameworks need to allow for the potential that a model might change over time, and clinicians may need to continuously assess AI applications during actual use. Are there checks in place so that AI does not result in mental or physical harm that could be avoided by use of an alternative practice or conventional approach?
Public and private organisations across the world are developing guidelines for how to ethically develop and implement AI, but it is important to maintain accountability, as well as the regular audit of processes and outcomes.
I hope that, throughout my presentation, I have highlighted some important principles and considerations regarding the use of AI in patient care: whether the AI technology is worthy of our patients’ trust, and whether we can guarantee the absence of bias of any kind. Only once we have considered these principles should AI be implemented. And even after implementation, continued review is necessary.
The use of AI will and should increase: there is huge potential in healthcare, but there are risks as well as benefits.
As we move forward we must ensure that healthcare professionals have the right skills and knowledge, we must share learnings and align best practice around the globe, and we must keep our focus on the most important thing: the experience and outcomes for our patients.
Bibliographic references
Buchanan, B. G., and Shortliffe, E. H. (1984). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley. Available online at http://www.shortliffe.net/Buchanan-Shortliffe-1984/Contents.pdf
Crigger, E., and Khoury, C. (2019). Making Policy on Augmented Intelligence in Health Care. AMA J Ethics, 21(2), E188-191
Croskerry, P. (2003). The importance of cognitive errors in diagnosis and strategies to minimize them. Academic Medicine, 78(8), 775-780.
Huesch, M. D., and Mosher, T. J. (2017). Using It or Losing It? The Case for Data Scientists Inside Health Care. NEJM Catalyst. Available online at https://catalyst.nejm.org/case-data-scientists-inside-health-care
National Academies of Sciences, Engineering, and Medicine. (2015). Improving Diagnosis in Health Care. Washington, DC: The National Academies Press. https://doi.org/10.17226/21794
Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.
Pope, J. H., Aufderheide, T. P., Ruthazer, R., Woolard, R. H., Feldman, J. A., Beshansky, J. R., Griffith, J. L., and Selker, H. P. (2000). Missed diagnoses of acute cardiac ischemia in the emergency department. New England Journal of Medicine, 342(16), 1163-1170.
Rock Health. (2021, 26 February). Digital Health Consumer Adoption Report 2020. Available online.
Sarwar, S., Dent, A., Faust, K., Richer, M., Djuric, U., Van Ommeren, R., et al. (2019). Physician perspectives on integration of artificial intelligence into diagnostic pathology. NPJ Digit Med, 2 (1), 1-7.
Savoy, M. (2020). IDx-DR for Diabetic Retinopathy Screening. Am Fam Physician, 101(5), 307-308.
Schulman, K. A., Berlin, J. A., Harless, W., Kerner, J. F., Sistrunk, S., Gersh, B. J., Dubé, R., Taleghani, C. K., Burke, J. E., Williams, S., Eisenberg, J. M., Ayers, W., and Escarce, J. J. (1999). The effect of race and sex on physicians’ recommendations for cardiac catheterization. New England Journal of Medicine, 340(8), 618-626.
The Nuffield Council on Bioethics. (2018). Artificial Intelligence (AI) in healthcare and research. Available online at https://www.nuffieldbioethics.org/publications/ai-in-healthcare-and-research/what-is-ai/ethical-and-social-issues
The Oxford Dictionary of Phrase and Fable. (2006). Artificial Intelligence. Available online at https://www.oxfordreference.com/display/10.1093/oi/authority.20110803095426960;jsessionid=ECFE63FC30AA586F23237FBD0CFB1ABF
Bioethics of medicines and the lessons learned during the pandemic: public-private collaboration in medicines research
Amelia Martín Uranga
Associate Director for Clinical and Translational Research, Farmaindustria
1. Biomedical research in Spain
For some years now, clinical research has played a fundamental role in our country thanks, among other factors, to the fact that the pharmaceutical industry established here, both domestically owned and multinational, is very active in research and committed to promoting it.
The pharmaceutical industry’s investment in drug research and development in Spain exceeded 1,160 million euros in 2020 (Figure 1), according to Farmaindustria’s latest survey of R&D activities.
In fact, 60 % of its annual R&D investment, more than 700 million euros, is devoted to clinical trials. The industry’s investment in this area has grown at an average annual rate of 4.2 % over the last decade, from 460 million euros in 2010 to 697 million euros in 2020 (Figure 2).
Figure 1
Source: Farmaindustria.
Figure 2
Source: Farmaindustria.
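As a quick arithmetical check, and assuming the stated average annual rate is a compound annual growth rate over 2010-2020, the 4.2 % figure follows directly from the two endpoints cited above:

\[
\left(\frac{697}{460}\right)^{1/10} - 1 \approx 1.515^{0.1} - 1 \approx 0.042 \quad \text{(about 4.2 % per year)}
\]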
R&D investment carried out in the companies’ own research centres, intramural R&D, reached 632 million euros, representing 54 % of the total. In addition, the pharmaceutical industry devoted another 528 million euros to research contracts with hospitals, universities and public and private bodies, constituting extramural R&D; this is a key contribution to guaranteeing the financial sustainability of these centres and their largest source of private income (Figure 3).
All the autonomous communities received extramural investment from the innovative pharmaceutical industry, but Madrid and Catalonia stand out, each attracting around 28 % of the total (Figure 4).
Figure 3
Source: Farmaindustria.
Figure 4. Geographical distribution of extramural R&D investment (2020)
Source: Farmaindustria.
In short, clinical research is not only necessary for the development of new medicines. Here we can speak of the virtuous circle of clinical trials: they allow our healthcare professionals to take part in cutting-edge science, in cooperation with colleagues at centres in other countries, and to apply it to their care work, which helps to raise the quality of healthcare provision; they open up new opportunities for patients, which is especially relevant in serious cases that do not respond to the available arsenal; they attract funding from the sponsoring pharmaceutical companies to the participating hospitals, both public and private; and they generate savings.
2. A success story in the COVID pandemic
The coronavirus pandemic has revealed the importance of biomedical research around the world. Before the pandemic, Spain was already a world reference in clinical drug research, and since then it has consolidated this position, becoming the leading country in Europe, and the fourth in the world, in the number of clinical trials against the coronavirus.
This situation would not have been possible without the strong commitment of the different agents to maintaining our high level of biomedical research. Close cooperation between the AEMPS, researchers and pharmaceutical companies made it possible to normalise and maintain clinical research activity relatively quickly, in addition to seeking a response to a new disease.
Thus, the AEMPS Annual Report recorded 1,027 clinical trials authorised in 2020 and 996 in 2021, historic highs for our country that underpin our position as an international reference in this field. These figures also show that, for the different actors involved in research, the priorities during the pandemic were both to respond to the new disease and to maintain clinical trial activity in other pathologies with unmet medical needs, for the greatest benefit of patients.
