AI in Clinical Research: Opportunities, Limitations, and Ethical Responsibilities

Levi Cheptora

Tue, 21 Oct 2025


Abstract

 

The integration of artificial intelligence (AI) into clinical research marks a profound shift, offering unprecedented opportunities for innovation, efficiency, and improved patient outcomes. This expert analysis demonstrates that while AI holds immense potential to revolutionize drug development by accelerating timelines and reducing costs, its deployment is a complex endeavor fraught with significant ethical and practical challenges. This report serves as a comprehensive guide that navigates this duality, meticulously detailing AI's transformative applications in optimizing trial design, patient recruitment, and drug discovery. Simultaneously, it provides a rigorous examination of the technology's core limitations, including algorithmic bias, the "black box" problem, and the complexities of informed consent and accountability. The report culminates in a forward-looking framework for responsible innovation, centered on the indispensable human-AI partnership, and a comparative analysis of the evolving global regulatory landscape. The central thesis is that to harness AI's full potential, researchers and institutions must proactively and transparently address its ethical dimensions, ensuring that innovation is always balanced with the foundational principles of patient safety, privacy, and autonomy.

 

Introduction: A New Era in Clinical Research

 

The landscape of clinical research is undergoing a fundamental transformation, driven by the rapid maturation and integration of artificial intelligence. AI, in this context, is not a singular technology but a diverse family of tools and methodologies designed to mimic human cognitive functions such as problem-solving, pattern recognition, and decision-making.1 Key subsets of AI, including machine learning (ML), natural language processing (NLP), and deep learning (DL), each possess distinct capabilities that are reshaping the research continuum.1 A recent and particularly powerful innovation is generative AI, which differs from traditional AI by its capacity to create new content based on its training data.1 This capability extends to generating synthetic data for modeling trial scenarios or even designing novel drug compounds, thereby fueling a new wave of innovation.1

The emergence of AI is a direct response to the long-standing and well-documented pain points of traditional clinical research. The drug development process remains notoriously lengthy and expensive, often spanning over a decade and costing billions of dollars, with a high rate of failure.5 The operational bottlenecks, such as slow and costly patient recruitment, complex data management, and the sheer scale of scientific data, have historically presented significant barriers to progress.3 AI's promise lies in its ability to address these inefficiencies by automating and optimizing key processes, thus serving as the primary motivation for its rapid adoption across the pharmaceutical and biotechnology sectors.5

This report is founded on the central premise that AI in clinical research presents a powerful duality. It is a transformative tool for accelerating innovation, yet it is also a source of profound ethical and practical challenges. To successfully realize the immense benefits of AI, the medical and scientific communities must proactively and responsibly navigate its inherent limitations. This paper will first explore the transformative opportunities presented by AI, followed by a detailed analysis of its challenges, culminating in a comprehensive discussion of the ethical imperative and a forward-looking roadmap for responsible innovation.

 

Section 1: The Opportunities: AI as a Catalyst for Innovation and Efficiency

 

The applications of AI in clinical research span the entire drug and device development lifecycle, from foundational discovery to trial optimization and patient monitoring. The technology's ability to process and analyze immense, complex datasets with speed and accuracy far beyond human capacity is its core value proposition.

 

1.1. Optimizing the Clinical Trial Lifecycle

 

AI's impact on clinical trials is characterized by increased efficiency and reduced costs across every phase.

 

Accelerated Protocol and Trial Design

 

AI is revolutionizing the traditionally manual process of clinical trial design. By analyzing vast amounts of historical trial data and real-world evidence, AI algorithms can identify patterns, predict outcomes, and refine protocols to a degree previously unimaginable.1 This capability is particularly enhanced by generative AI, which can rapidly produce draft protocols by analyzing existing data, a task that can cut planning time from days to minutes.4 This powerful predictive capacity also enables the creation of adaptive trial designs, where AI can dynamically adjust key parameters such as sample size, dose regimens, and treatment duration based on real-time interim results.10 This not only streamlines the research process but also enhances ethical integrity by allowing for the early discontinuation of inferior trial arms.11
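
To make the adaptive-design idea concrete, the short sketch below simulates a single interim "drop-the-loser" decision on synthetic responder data. The arm names, futility margin, and response rates are hypothetical assumptions, not values from any cited trial.

```python
# Illustrative sketch (not a validated trial-design tool): a simple interim
# "drop-the-loser" rule of the kind an adaptive design might encode.
# Arm names, thresholds, and data are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Simulated interim responder data for three arms (1 = response).
interim = {
    "placebo":   rng.binomial(1, 0.20, size=60),
    "low_dose":  rng.binomial(1, 0.28, size=60),
    "high_dose": rng.binomial(1, 0.45, size=60),
}

FUTILITY_MARGIN = 0.05  # minimum improvement over placebo required to continue

control_rate = interim["placebo"].mean()
decisions = {}
for arm, outcomes in interim.items():
    if arm == "placebo":
        continue
    # Drop arms showing no meaningful advantage at the interim look.
    decisions[arm] = "continue" if outcomes.mean() - control_rate >= FUTILITY_MARGIN else "drop"

print(f"placebo response rate: {control_rate:.2f}")
for arm, decision in decisions.items():
    print(f"{arm}: rate={interim[arm].mean():.2f} -> {decision}")
```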

 

Streamlined Patient Recruitment and Retention

 

One of the most significant and costly bottlenecks in clinical research is patient recruitment.8 AI can drastically shorten this process by sifting through electronic health records (EHRs), patient registries, and other unstructured data sources to identify eligible candidates.1 The use of natural language processing (NLP) is particularly effective in this domain, as it can parse unstructured clinical notes and medical histories to uncover eligibility signals that are often missed by traditional diagnostic codes.9 Case studies provide compelling evidence of this efficiency, with platforms reaching over 90% accuracy in matching patients to trials.11 Examples such as TrialGPT, which reduced screening time by 42.6% in real-life clinical trial matching, and IQVIA's AI-powered model, which improved target patient identification by 15×, underscore the concrete efficiency gains that AI delivers.10
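
As a rough illustration of this recruitment workflow, the following sketch pre-screens free-text notes with simple regular expressions. Production matchers such as TrialGPT rely on clinical language models, so this is only a toy stand-in; the patient notes and inclusion rules are invented.

```python
# Minimal sketch of eligibility pre-screening over unstructured notes.
# Real systems use clinical language models; this toy version uses regular
# expressions purely to illustrate the workflow.
import re

notes = {
    "patient_001": "62 y/o female, type 2 diabetes on metformin, HbA1c 8.4%, eGFR 55.",
    "patient_002": "45 y/o male, no history of diabetes, presents with seasonal allergies.",
    "patient_003": "70 y/o male, T2DM, HbA1c 7.9, declined prior insulin therapy.",
}

# Hypothetical inclusion signals for a diabetes trial.
inclusion_patterns = [
    re.compile(r"type 2 diabetes|t2dm", re.IGNORECASE),
    re.compile(r"hba1c\s*[:\s]?\s*\d+(\.\d+)?", re.IGNORECASE),
]

def flag_candidates(notes_by_id):
    """Return patients whose notes match every inclusion pattern."""
    candidates = []
    for pid, text in notes_by_id.items():
        if all(p.search(text) for p in inclusion_patterns):
            candidates.append(pid)
    return candidates

print(flag_candidates(notes))  # ['patient_001', 'patient_003']
```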

 

Enhanced Clinical Trial Operations and Data Management

 

Beyond recruitment, AI profoundly impacts data management and operational monitoring. It can automate data entry and cleaning, thereby eliminating human error and speeding up a traditionally laborious process.1 AI also enables real-time, risk-based monitoring (RBM) by continuously analyzing incoming trial data and flagging unusual patterns, such as shifts in laboratory values or protocol deviations.8 This proactive approach allows researchers to detect potential adverse events or safety concerns much sooner, leading to quicker interventions and more stable trial progression.8
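
The sketch below illustrates one way a risk-based monitoring pipeline might flag unusual laboratory values, using a robust (median/MAD) z-score on synthetic data. The column names, threshold, and values are assumptions for illustration, not a validated monitoring plan.

```python
# Sketch of a risk-based monitoring check: flag lab values that diverge
# sharply from the study-wide distribution using a robust (median/MAD)
# z-score. Data, column names, and thresholds are illustrative only.
import pandas as pd

labs = pd.DataFrame({
    "site": ["A", "A", "B", "B", "B", "C", "C"],
    "alt_u_per_l": [22, 25, 24, 27, 26, 88, 95],  # liver enzyme (ALT)
})

median = labs["alt_u_per_l"].median()
mad = (labs["alt_u_per_l"] - median).abs().median()

# A robust z-score is less distorted by the very outliers we want to catch.
labs["robust_z"] = 0.6745 * (labs["alt_u_per_l"] - median) / mad
labs["flag"] = labs["robust_z"].abs() > 3.5

# Summarise by site so monitors can prioritise follow-up visits.
print(labs.groupby("site")["flag"].sum())
```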

 

1.2. Pioneering New Frontiers in Science

 

AI is not merely optimizing existing processes; it is enabling new scientific capabilities that are fundamentally reshaping drug development and patient care.

 

The Role of AI in Drug Discovery and Repurposing

 

The traditional trial-and-error approach to drug discovery is being replaced by AI-driven predictive modeling. AI can rapidly analyze biological data to identify promising drug targets and design new compounds with greater precision.5 Case studies provide concrete evidence of this accelerated timeline. Exscientia, for example, introduced the first AI-designed drug molecule into human clinical trials, while Insilico Medicine reported the initiation of Phase 1 trials for an AI-discovered molecule, both significantly reducing traditional development timelines.5 AI also excels at drug repurposing by uncovering relationships between diseases and drugs through complex biological networks, as seen with thalidomide, repurposed for multiple myeloma, and sildenafil (Viagra), repurposed for erectile dysfunction.5
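
A toy sketch of the network-based repurposing idea follows: candidate drugs are ranked by how much their known protein targets overlap with a disease's implicated genes. The gene and target sets are hypothetical and chosen only to show the scoring logic.

```python
# Toy sketch of network-style drug repurposing: score candidate drugs by the
# overlap (Jaccard similarity) between their known protein targets and the
# genes implicated in a disease. All gene/target sets here are hypothetical.
disease_genes = {"TNF", "IL6", "JAK2", "STAT3"}

drug_targets = {
    "drug_a": {"JAK2", "JAK1", "STAT3"},
    "drug_b": {"COX1", "COX2"},
    "drug_c": {"TNF", "IL6"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

scores = {drug: jaccard(targets, disease_genes)
          for drug, targets in drug_targets.items()}

# Rank repurposing candidates by shared-mechanism score.
for drug, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{drug}: {score:.2f}")
```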

 

Advancing Personalized Medicine and Precision Therapeutics

 

AI's ability to analyze genetic data, clinical histories, and patient-reported outcomes enables a shift from a one-size-fits-all approach to highly personalized medicine.4 Machine learning models can be trained on past patient data to predict how an individual will respond to a specific therapy, thereby leading to more effective treatments and fewer side effects.7 This is a crucial step toward achieving a more precise and tailored model of care.
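
The following minimal sketch shows the general shape of such a response-prediction model, trained on synthetic data with hypothetical features (age, a biomarker, a genetic variant flag); it is not any specific published model.

```python
# Minimal sketch (synthetic data, hypothetical features) of training a model
# to predict whether a patient will respond to a given therapy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: age, a biomarker level, and a genetic variant flag.
X = np.column_stack([
    rng.normal(60, 10, n),        # age
    rng.normal(1.0, 0.3, n),      # biomarker
    rng.integers(0, 2, n),        # carries variant (0/1)
])
# Synthetic ground truth: responders are more likely with high biomarker + variant.
logits = -2.0 + 1.5 * X[:, 1] + 1.2 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

pred = model.predict_proba(X_test)[:, 1]
print(f"held-out AUC: {roc_auc_score(y_test, pred):.2f}")
```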

 

Augmenting Human Expertise: AI in Medical Image Analysis and Diagnostic Support

 

AI is proving to be a valuable co-pilot for clinicians and researchers. Its ability to provide consistent and objective analysis of medical images minimizes interpretation variability and strengthens data reliability.9 This is particularly critical for clinical endpoints that rely on image data.11 AI algorithms have demonstrated remarkable precision in diagnostic support, such as a model that achieved 87% sensitivity in distinguishing COVID-19 from other lung diseases and the SkinVision app, which reports a 95% early-detection rate for skin cancer.10 This augmentation of human expertise frees up clinicians for more complex tasks and decision-making, while the AI handles high-volume, repetitive analysis.1
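
Claims like the sensitivity figures above are typically verified against a reference standard. The short sketch below shows how sensitivity and specificity would be computed from a confusion matrix, using invented labels rather than any of the cited studies' data.

```python
# Sketch of how a diagnostic-support model's performance claims are checked
# against a reference standard. Labels here are synthetic.
from sklearn.metrics import confusion_matrix

reference = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]   # ground-truth disease status
model_out = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]   # model predictions

tn, fp, fn, tp = confusion_matrix(reference, model_out).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```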

 

| AI Use Case | Key Functionalities | Business Benefits | Patient Benefits |
|---|---|---|---|
| Protocol Design | Analyzes historical data, simulates trial scenarios, drafts protocols, and enables adaptive design.1 | Reduces planning time, lowers costs, and increases the likelihood of trial success.1 | Improves treatment efficacy by tailoring trial parameters based on real-time data.11 |
| Patient Recruitment | Parses unstructured EHRs, applies NLP, and uses predictive analytics to identify eligible candidates.11 | Accelerates trial timelines, reduces recruitment costs, and improves patient matching quality.9 | Increases access to relevant trials, reduces wait times, and improves retention rates.4 |
| Data Management | Automates data cleaning, standardizes data formats, and integrates disparate data sources.12 | Eliminates human error, improves data integrity, and speeds up data processing.1 | Leads to more accurate diagnoses and safer treatment plans by ensuring high-quality data.12 |
| Safety Monitoring | Continuously analyzes real-time data from wearables, flags anomalies, and detects potential adverse events.8 | Enables proactive risk management, reduces trial disruptions, and supports regulatory compliance.1 | Enhances patient safety through earlier detection of adverse events and complications.7 |
| Drug Discovery | Analyzes complex biological networks, predicts compound activity, and designs new molecules.3 | Accelerates drug development, lowers costs, and reduces the risk of high-failure pipelines.5 | Leads to faster access to novel and more effective therapies for a wide range of diseases.3 |

Table 1: Key Use Cases of AI and Their Business Benefits in Clinical Research

 

Section 2: The Limitations: Addressing AI's Inherent Challenges

 

Despite its immense promise, the integration of AI into clinical research is not without significant hurdles. These challenges are multifaceted, encompassing vulnerabilities in data and algorithms as well as human and organizational resistance.

 

2.1. Data and Algorithmic Vulnerabilities

 

The performance and reliability of any AI model are fundamentally tied to the quality and nature of the data on which it is trained.

 

The Problem of Algorithmic and Data Bias

 

A critical ethical and practical limitation is the risk of algorithmic bias. AI models are only as robust and equitable as their training data. When these datasets are unrepresentative or incomplete, the resulting algorithms can perpetuate and even amplify existing health inequities.14 This is not merely a technical problem but a digital reflection of deeply ingrained institutional biases in healthcare. For example, a widely used commercial algorithm from Optum was designed to predict healthcare costs as a proxy for illness severity.17 Since historical data reflected that less money was spent on Black patients with similar conditions due to systemic factors, the algorithm learned to underestimate their care needs.17 Consequently, it recommended healthier white patients for high-risk care management programs ahead of sicker Black patients.17 This process illustrates a dangerous feedback loop where existing disparities are encoded, reinforced, and scaled, making them even more difficult to detect and correct.16 This places the responsibility for bias not only on the AI developer but on the entire data ecosystem that informs the model.
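
A simple subgroup audit of the kind that exposed this problem can be sketched as follows: compare predicted risk scores and referral rates across groups at the same underlying illness burden. The data below is synthetic and exaggerated purely to illustrate the check, not drawn from the Optum case.

```python
# Sketch of a subgroup audit for cost-proxy bias: at the same illness burden,
# do predicted "risk" scores (and hence program referrals) differ by group?
# All values are synthetic and illustrative.
import pandas as pd

df = pd.DataFrame({
    "group":        ["white"] * 4 + ["black"] * 4,
    "n_conditions": [3, 3, 4, 4, 3, 3, 4, 4],     # proxy for true illness burden
    "pred_risk":    [0.62, 0.58, 0.71, 0.69,      # model score trained on cost
                     0.41, 0.44, 0.53, 0.50],
})

REFERRAL_CUTOFF = 0.55
df["referred"] = df["pred_risk"] >= REFERRAL_CUTOFF

# Compare scores and referral rates at equal illness burden.
audit = df.groupby(["n_conditions", "group"])[["pred_risk", "referred"]].mean()
print(audit)
```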

 

The Black Box Problem: A Barrier to Trust and Explainability

 

Many AI models, particularly deep neural networks, operate as "black boxes".2 This refers to the opacity of their decision-making processes, where the output may be accurate, but the reasoning behind it is not transparent or easily understood.21 This lack of explainability represents a fundamental crisis of trust that directly undermines the core principles of ethical AI, which require transparency and accountability.2 Clinicians must have confidence in a recommendation before acting on it, and patients have a right to understand why a treatment was recommended.14 The black box problem erodes this trust, acting as a primary inhibitor to widespread adoption in a domain where lives are at stake.

 

Data Quality and Integration Challenges

 

Even without bias, AI performance is highly dependent on the quality of input data. Incomplete, inconsistent, or unstructured datasets can undermine a model's accuracy and lead to misleading outputs, which can have serious consequences in clinical settings where decisions impact patient safety and regulatory submissions.12 Manual data entry remains a significant source of errors, including misentered lab values and duplicate records, and traditional systems struggle to process the sheer volume and complexity of data from various sources like EHRs, imaging systems, and wearable devices.12
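
Many of these data-quality problems can be caught by automated checks. The sketch below shows two such checks, duplicate detection and plausibility-range validation, on invented case report data; the column names and reference range are assumptions for illustration.

```python
# Sketch of automated data-quality checks of the kind AI/rule pipelines run on
# incoming case report data: duplicate detection and out-of-range lab values.
# Column names and limits are illustrative.
import pandas as pd

crf = pd.DataFrame({
    "subject_id": ["S01", "S02", "S02", "S03"],
    "visit":      ["wk4", "wk4", "wk4", "wk4"],
    "potassium_mmol_l": [4.1, 3.8, 3.8, 41.0],   # 41.0 is a likely decimal slip
})

# Flag exact duplicate records (same subject, visit, and values).
duplicates = crf[crf.duplicated()]

# Flag physiologically implausible values against a reference range.
out_of_range = crf[~crf["potassium_mmol_l"].between(2.0, 7.0)]

print("duplicates:\n", duplicates)
print("out of range:\n", out_of_range)
```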

 

2.2. Human and Organizational Hurdles

 

The successful integration of AI requires more than just technological solutions; it demands a readiness on the part of organizations and individuals to adapt.

 

Cost, Infrastructure, and Talent Gaps

 

Adopting AI solutions requires significant upfront investment in system upgrades, data integration frameworks, and validation tools.13 Beyond the financial costs, many organizations lack the multidisciplinary teams with a blend of expertise in data science, clinical operations, and regulatory affairs needed for successful AI deployment and governance.13

 

Cultural Resistance and the Necessity for Human-in-the-Loop Frameworks

 

The clinical research industry has historically been slow to adapt due to a "do no harm" philosophy and a deep-seated fear of liability and non-compliance.6 This cultural resistance to change is understandable but risks widening the gap between those who leverage AI for faster, cheaper trials and those who fall behind.6 A key principle for overcoming this resistance is the understanding that AI is designed to augment, not replace, human expertise.1 A "Human-in-the-Loop" (HITL) approach is essential, where human experts supervise AI outputs, interpret insights, and make the final critical decisions.1 This model ensures ethical compliance, provides auditable steps for regulatory purposes, and fosters a culture of collaboration and trust.1

 

Section 3: The Ethical Imperative: Upholding Responsible AI in Clinical Research

 

To successfully navigate the challenges of AI, the clinical research community must embrace a comprehensive framework of ethical principles and best practices. These principles must be embedded at every stage of the AI lifecycle, from design to deployment.

 

3.1. Foundational Ethical Principles

 

Patient Privacy and Data Security

 

The use of AI in clinical research involves the processing of highly sensitive patient data, which necessitates robust legal and regulatory frameworks to safeguard privacy.22 Key challenges include the risks of data breaches, unauthorized access, and the potential for AI algorithms to re-identify anonymized data.14 To mitigate these risks, solutions such as strong encryption, data anonymization techniques, and federated learning are essential.14 Federated learning, for instance, allows AI models to be trained on localized data, keeping sensitive information on-site without the need for central data transfer.14
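
A single round of federated-averaging-style training can be sketched in a few lines: each site fits a model locally and only the learned parameters are shared and averaged. Real federated systems iterate over many rounds with secure aggregation; the data and model here are deliberately simple and synthetic.

```python
# Conceptual sketch of one federated-averaging round: each site fits a model
# on its own data and only the coefficients (never patient records) are shared
# and averaged by the coordinator. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_site_data(n, shift):
    X = rng.normal(shift, 1.0, size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > shift).astype(int)
    return X, y

sites = [make_site_data(200, s) for s in (0.0, 0.3, 0.6)]

local_coefs, local_intercepts = [], []
for X, y in sites:
    local = LogisticRegression(max_iter=1000).fit(X, y)   # training stays on-site
    local_coefs.append(local.coef_[0])
    local_intercepts.append(local.intercept_[0])

# The coordinating server averages parameters, not data.
global_coef = np.mean(local_coefs, axis=0)
global_intercept = np.mean(local_intercepts)
print("federated coefficients:", np.round(global_coef, 2), round(global_intercept, 2))
```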

 

Informed Consent and Patient Autonomy

 

Informed consent is a cornerstone of patient rights, yet its application becomes more complex with the introduction of AI.14 Researchers and healthcare providers have a clear responsibility to disclose the use of AI, explain its role in an understandable way, and provide patients with the option to opt out.25 This requires providers to have sufficient knowledge of the technology to explain its functions, its potential benefits and risks, and the safeguards in place.26 A dynamic consent model that can be updated to reflect AI-driven changes throughout the trial is also critical to ensure ongoing patient engagement and autonomy.14

 

Accountability and Liability

 

The "black box" and "many hands" problems make it legally and morally difficult to assign responsibility for an error or a biased outcome.27 When multiple agents are involved in an AI system's development—from data scientists to software engineers and clinicians—the diffusion of responsibility can leave a victim of harm without recourse.27 Therefore, clear governance structures are paramount.24 These frameworks must define who is responsible for AI oversight and ensure that every AI-driven decision can be traced back to a human stakeholder who can provide a rationale and justification for a diagnosis or recommendation.24

 

3.2. Mitigating and Managing Ethical Risks

 

Strategies for Bias Detection and Mitigation

 

Combating bias requires a multi-pronged approach that begins at the genesis of the AI system. The first step is the proactive collection and use of diverse and representative datasets that reflect the population the technology is intended to serve.14 An inclusive development process that brings together statisticians, clinicians, and representatives from underrepresented populations can also help identify and address potential sources of bias before they are operationalized.16 Continuous monitoring and regular audits of AI models are also necessary to ensure fairness as new data is introduced and the models evolve.14

 

The Role of Explainable AI (XAI) in Building Trust

 

To directly address the black box problem, the industry is increasingly turning to Explainable AI (XAI) frameworks.2 XAI provides clear insights into how an AI model arrived at a particular decision or recommendation, allowing clinicians to validate the output and thereby fostering a culture of trust and collaboration.14 This transparency is a crucial step toward ensuring that AI is seen as a trustworthy and reliable partner in clinical decision-making.
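
One widely used, model-agnostic explainability technique is permutation importance: shuffle each input feature and measure how much held-out performance degrades. The sketch below applies it to a synthetic example in which the dominant feature is known in advance; it is only an illustration of the idea, not a full XAI framework.

```python
# Sketch of permutation importance: how much does held-out accuracy drop when
# each input feature is shuffled? Data is synthetic; the true driver is the
# biomarker, so it should dominate the importance scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 400
biomarker = rng.normal(0, 1, n)
age = rng.normal(60, 10, n)
noise = rng.normal(0, 1, n)
X = np.column_stack([biomarker, age, noise])
y = (biomarker + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in zip(["biomarker", "age", "noise"], result.importances_mean):
    print(f"{name}: importance={score:.3f}")
```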

 

Implementing a Human-AI Partnership Model

 

The success of AI in clinical research is ultimately contingent on its ability to serve as a powerful tool that augments human capabilities.1 The Human-in-the-Loop (HITL) model is the most effective way to operationalize this partnership.1 In this framework, AI handles high-volume data analysis and pattern recognition, while humans provide critical oversight, interpret the outputs, and make final decisions.1 This approach not only serves as a vital ethical safeguard but also ensures verifiable, auditable steps for regulatory compliance and increases trust among all stakeholders.1
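
A minimal sketch of such a gate, assuming a hypothetical anomaly-scoring model, is shown below: confident cases are handled automatically while borderline cases are queued for human adjudication. The thresholds and case data are illustrative.

```python
# Sketch of a human-in-the-loop gate: the system only auto-processes cases it
# is confident about; borderline cases are routed to a human reviewer queue.
# Thresholds and case data are illustrative.
def route(case_id, model_probability, threshold_low=0.2, threshold_high=0.8):
    """Return an action: auto-flag, auto-clear, or send to a human reviewer."""
    if model_probability >= threshold_high:
        return (case_id, "auto-flag for investigator review")
    if model_probability <= threshold_low:
        return (case_id, "auto-clear")
    return (case_id, "queue for human adjudication")

predictions = {"case_101": 0.93, "case_102": 0.55, "case_103": 0.07}
for case_id, prob in predictions.items():
    print(route(case_id, prob))
```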

 

| Ethical Principle | Associated Challenges | Responsible AI Solutions & Best Practices |
|---|---|---|
| Patient Privacy & Data Security | Data breaches and unauthorized access risks.14; re-identification of anonymized data.14; ethical dilemmas regarding data ownership and consent.14 | Implement strong encryption and data anonymization.14; use federated learning to keep data localized.14; develop standardized frameworks for patient data rights.14 |
| Fairness & Bias Mitigation | Algorithmic bias from unrepresentative training data.2; amplification of existing health inequities.16; lack of transparency on data sources.14 | Ensure diverse and representative datasets.14; conduct regular audits and recalibrate models.14; promote collaboration between AI experts and clinicians.14 |
| Transparency & Explainability | "Black box" nature of many AI models.14; difficulty in explaining AI-driven recommendations to patients and regulators.14 | Utilize Explainable AI (XAI) frameworks.14; ensure human oversight in all AI-assisted decisions.1; establish regulatory guidelines for explainability.14 |
| Informed Consent & Autonomy | Patients' lack of understanding of AI's role.25; difficulty in communicating AI-driven changes to trial protocols.14; ambiguity regarding a patient's option to "opt out".26 | Provide clear, simplified explanations of AI's involvement.14; implement dynamic consent models that update patients.14; document all discussions and patient preferences.26 |

Table 2: Ethical Principles, Challenges, and Responsible AI Solutions in Clinical Research

 

Section 4: The Future of AI in Clinical Research: What Comes Next

 

As AI continues its trajectory of exponential growth, its future in clinical research will be shaped by a rapidly evolving regulatory landscape and the emergence of new technologies.

 

4.1. The Evolving Regulatory Landscape

 

The rapid advancement of AI technology has created a pressing need for regulatory bodies to adapt and develop new frameworks.6 This creates a "regulatory vacuum" where a lack of legally binding regulations necessitates "soft law" and self-governance frameworks to fill the gap. The world's leading regulatory bodies are responding with diverse approaches.

  • United States (FDA): The FDA recognizes that its current regulatory paradigm was not designed for adaptive AI and machine learning technologies.28 The agency is actively working on a new framework that emphasizes collaboration with stakeholders, the development of predictable and clear regulatory approaches, and the promotion of harmonized standards based on principles like "Good Machine Learning Practice".28
  • Europe (EMA & EU AI Act): The European Medicines Agency (EMA) is focused on integrating AI into its regulatory systems to support decision-making and accelerate drug approvals.29 Its workplan includes providing guidance on the use of AI throughout the lifecycle of a medicine, creating frameworks for AI tools, and building the capacity of regulators to embrace the AI transformation.29 A broader framework is the EU AI Act, a first-of-its-kind comprehensive regulation that classifies AI applications based on risk.30 High-risk applications, such as those used to diagnose and treat cancer, are subject to specific legal requirements, reflecting a cautious but enabling approach to AI.30
  • Global Health Bodies (WHO): The World Health Organization (WHO) has published principle-based guidance that stresses an ethics-first approach to AI in healthcare.18 The WHO's guidance emphasizes the importance of designing and using AI systems in ways that respect patient privacy, promote equity, and mitigate biases from training data.23 This global perspective highlights a shared commitment to a human-rights and ethics-centered philosophy.

 

| Regulatory Body/Framework | Core Philosophy/Approach | Key Focus Areas | Key Publications/Frameworks |
|---|---|---|---|
| FDA (U.S.) | Agile, innovation-supporting approach; leveraging existing pathways. | Safety and effectiveness, collaboration with stakeholders, promotion of harmonized standards.28 | Good Machine Learning Practice Guiding Principles 28; "Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together" paper.28 |
| EMA (Europe) | Risk-based integration; leveraging AI to improve regulatory decision-making. | Guidance throughout the medicine lifecycle, developing tools, collaboration, and change management.29 | Reflection Paper on the use of AI in the medicinal product lifecycle.29 |
| EU AI Act (Europe) | Comprehensive, risk-based framework. | Defines unacceptable, high, and unregulated risks; sets legal requirements for high-risk applications.30 | EU AI Act.30 |
| WHO (Global) | Ethics-first, principle-based guidance. | Privacy, equity, human rights, bias mitigation, and stakeholder dialogue.18 | Ethics and governance of artificial intelligence for health: WHO guidance.27 |

Table 3: Comparative Overview of Global AI Regulatory Approaches

 

4.2. Emerging Trends and Technologies

 

Looking ahead, the clinical research landscape will be shaped by several emerging trends. Predictive analytics will play an increasingly large role, not just in protocol design but in forecasting patient dropouts, identifying high-risk sites, and predicting supply-chain needs in real time.13 Additionally, the concept of "digital twins" (virtual environments that can simulate patient cohorts and trial scenarios) is still at an early research stage but could offer a safe and cost-effective way to test protocol amendments or dosing schedules.13

The future also involves a continued expansion of decentralized clinical trials, where AI and remote monitoring tools will be instrumental.4 AI will be essential for analyzing the massive datasets collected from wearable devices and sensors to enable real-time safety monitoring and less disruptive patient participation.8 Finally, generative AI will remain a key driver of innovation, particularly through "lab-in-the-loop" strategies that streamline the traditional trial-and-error approach to drug discovery.3 Its ability to create high-fidelity synthetic data will be invaluable for training algorithms without compromising patient privacy.1
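
As a deliberately simplified stand-in for generative synthetic data, the sketch below fits a joint Gaussian to a handful of invented records and samples new rows from it. Production systems use far richer generative models (GANs, diffusion models, large language models); this only illustrates the idea of training on sampled rather than real records.

```python
# Very simplified stand-in for generative synthetic data: estimate a joint
# distribution from real records and sample new, non-identifiable rows from it.
# The variables and values here are synthetic and illustrative.
import numpy as np
import pandas as pd

real = pd.DataFrame({
    "age":          [54, 61, 67, 72, 58, 63],
    "systolic_bp":  [128, 135, 142, 150, 131, 138],
})

mean = real.mean().to_numpy()
cov = np.cov(real.to_numpy(), rowvar=False)

rng = np.random.default_rng(3)
synthetic = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=5).round(0),
    columns=real.columns,
)
print(synthetic)
```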

 

Conclusion: A Roadmap for Responsible Innovation

 

The analysis of artificial intelligence in clinical research reveals a powerful duality: it is a technology that can fundamentally enhance the speed, cost-effectiveness, and scientific rigor of the entire research lifecycle, yet it is also a source of significant ethical and practical challenges. As this report demonstrates, AI offers profound opportunities to optimize clinical research and pioneer new scientific frontiers. However, its successful and safe integration is not a given; it is contingent upon a rigorous and proactive approach to its inherent limitations, particularly those concerning data quality, algorithmic bias, and transparency.

Ethical responsibility is not a secondary consideration but the foundational principle for the successful and safe deployment of AI in clinical research. The patient's rights to privacy, autonomy, and a fair and equitable standard of care must remain paramount. To this end, a clear, actionable roadmap is necessary for all stakeholders. For researchers, it is imperative to champion Explainable AI and adopt a Human-in-the-Loop model, ensuring that human judgment remains at the forefront of all critical decisions. For institutions, the imperative is to invest in multidisciplinary teams and establish robust data governance frameworks that can manage the complexities of AI development and deployment. For regulatory bodies, the call is for accelerated, harmonized, and adaptable guidance that balances the need for rapid innovation with an unwavering commitment to patient safety. The ultimate message is that by embracing a collaborative, human-centric approach, the promise of AI can be realized without compromising the core ethical tenets of medical science.

 


Works cited

  1. AI in Clinical Trials: How It Will Shape the Future - Medrio, accessed August 26, 2025, https://medrio.com/blog/ai-in-clinical-trials/
  2. Responsible AI in Healthcare: What is Responsible AI? - Thoughtful AI, accessed August 26, 2025, https://www.thoughtful.ai/blog/responsible-ai-in-healthcare-what-is-responsible-ai
  3. AI and machine learning: Revolutionising drug discovery ... - Roche, accessed August 26, 2025, https://www.roche.com/stories/ai-revolutionising-drug-discovery-and-transforming-patient-care
  4. Revolutionizing Clinical Trials with Generative AI - Clinical Research News, accessed August 26, 2025, https://www.clinicalresearchnewsonline.com/news/2024/07/19/revolutionizing-clinical-trials-with-generative-ai
  5. AI in Pharma: Top 9 Use Cases You Should Know - Litslink, accessed August 26, 2025, https://litslink.com/blog/use-cases-of-ai-in-pharma-how-to-leverage-it
  6. Healthcare is Historically Slow to Adapt to Change: Why Clinical Trials Can't Afford it with AI, accessed August 26, 2025, https://acrpnet.org/2025/08/19/healthcare-is-historically-slow-to-adapt-to-change-why-clinical-trials-cant-afford-it-with-ai
  7. Revolutionizing clinical trials: the role of AI in accelerating medical breakthroughs - PMC, accessed August 26, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10720846/
  8. Forward Thinking for the Integration of AI into Clinical Trials - ACRP, accessed August 26, 2025, https://acrpnet.org/2023/06/20/forward-thinking-for-the-integration-of-ai-into-clinical-trials
  9. Clinical Trial Insight and Trends For 2024 And Beyond - ObvioHealth, accessed August 26, 2025, https://www.obviohealth.com/resources/clinical-trial-trends-2024
  10. Role of ML and AI in Clinical Trials Design: Use Cases, Benefits, accessed August 26, 2025, https://www.coherentsolutions.com/insights/role-of-ml-and-ai-in-clinical-trials-design-use-cases-benefits
  11. AI in Chemical Industry: Top Use Cases You Need To Know, accessed August 26, 2025, https://smartdev.com/ai-use-cases-in-clinical-trials/
  12. How AI is Improving Clinical Data Accuracy, Reducing Errors ..., accessed August 26, 2025, https://www.estenda.com/blog/how-ai-is-improving-clinical-data-accuracy-reducing-errors-enhancing-healthcare-data-integration
  13. AI in Clinical Data Management: Key Uses, Challenges, and ..., accessed August 26, 2025, https://www.quanticate.com/blog/ai-in-clinical-data-management
  14. AI in Clinical Trials: Ethical Considerations & Practices - ClinMax, accessed August 26, 2025, https://clinmax.com/ai-in-clinical-trials/
  15. Examining human-AI interaction in real-world healthcare beyond the ..., accessed August 26, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11923224/
  16. Overcoming AI Bias: Understanding, Identifying and Mitigating ..., accessed August 26, 2025, https://www.accuray.com/blog/overcoming-ai-bias-understanding-identifying-and-mitigating-algorithmic-bias-in-healthcare/
  17. Real-world examples of healthcare AI bias - Paubox, accessed August 26, 2025, https://www.paubox.com/blog/real-world-examples-of-healthcare-ai-bias
  18. AI in healthcare: legal and ethical considerations in this new frontier, accessed August 26, 2025, https://www.ibanet.org/ai-healthcare-legal-ethical
  19. fifty shades of black: about black box AI and explainability in healthcare, accessed August 26, 2025, https://academic.oup.com/medlaw/article-abstract/33/1/fwaf005/8003827
  20. The Human-AI Partnership: An Intelligence Leap for Healthcare ..., accessed August 26, 2025, https://dhinsights.org/news/the-human-ai-partnership-an-intelligence-leap-for-healthcare
  21. Defining the undefinable: the black box problem in healthcare artificial intelligence | Journal of Medical Ethics, accessed August 26, 2025, https://jme.bmj.com/content/48/10/764
  22. AI Ethics And Medical Research - Meegle, accessed August 26, 2025, https://www.meegle.com/en_us/topics/ai-ethics/ai-ethics-and-medical-research
  23. WHO outlines considerations for regulation of artificial intelligence for health, accessed August 26, 2025, https://www.who.int/news/item/19-10-2023-who-outlines-considerations-for-regulation-of-artificial-intelligence-for-health
  24. Responsible AI in Healthcare: Ensuring Patient Safety & Trust Through Testing - Bugasura, accessed August 26, 2025, https://bugasura.io/blog/responsible-ai-in-healthcare/
  25. Artificial Intelligence and Informed Consent | MedPro Group, accessed August 26, 2025, https://www.medpro.com/artificial-intelligence-informedconsent
  26. The Role of Informed Consent in Medical AI: Balancing Innovative ..., accessed August 26, 2025, https://www.capphysicians.com/articles/role-informed-consent-medical-ai-balancing-innovative-advancements-patient-rights
  27. Ethics and governance of artificial intelligence for health, accessed August 26, 2025, https://www.who.int/publications/i/item/9789240029200
  28. FDA: Artificial Intelligence & Medical Products | The American Health ..., accessed August 26, 2025, https://www.ahima.org/education-events/artificial-intelligence/artificial-intelligence-regulatory-resource-guide/fda-artificial-intelligence-medical-products/
  29. Artificial intelligence | European Medicines Agency (EMA), accessed August 26, 2025, https://www.ema.europa.eu/en/about-us/how-we-work/data-regulation-big-data-other-sources/artificial-intelligence
  30. EU Artificial Intelligence Act | Up-to-date developments and ..., accessed August 26, 2025, https://artificialintelligenceact.eu/
  31. Ethics and governance of artificial intelligence for health: WHO guidance - National Centre for Disease Informatics and Research, accessed August 26, 2025, https://www.ncdirindia.org/Downloads/WHO_AI_Ethics.pdf

 
