AI and healthcare

Keywords

artificial intelligence
trust
applied ethics

How to Cite

Alonso, M., Ortega Lozano, R., & M. Astobiza, A. (2026). AI and healthcare: some considerations on trust. Eikasía Revista De Filosofía, (134), 97–106. https://doi.org/10.57027/eikasia.134.1287

Abstract

This article explores the relational nature of trust in artificial intelligence (AI) systems applied to the medical field. In contrast to approaches that tend to locate trust in individual properties—either of the trusting subject or of the object of trust—we propose understanding it as an emergent phenomenon that can only unfold fully within the interaction between humans and technologies. We critically analyze several key concepts in the current debate—such as responsibility, accountability, anthropomorphism, and value alignment—showing that none of them can be unequivocally attributed to a single pole of the relationship. We argue that an adequate understanding of these constructs requires situating them within a relational perspective, where trust does not simply derive from technical qualities or subjective attitudes, but from shared structures of meaning, practices of co-responsibility, and appropriate institutional frameworks. This approach allows for a more precise engagement with the ethical challenges of medical AI and guides the design of systems that are not only efficient but also trustworthy in a robust sense.



Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
