Executive summary (one sentence)
AI tools are powerful assistants for drafting, editing and idea generation — but in health and medicine they must be used with strict editorial, ethical and legal controls: disclose use, never let AI make clinical decisions, protect patient data (ICMJE, 2024; COPE, 2023).
Never let AI make clinical decisions. Clinical judgement must come from qualified people (WHO, 2025).
Never feed identifiable patient information to public, consumer AI without a compliant contract/BAA or an on-prem / PHI-capable solution. Treat AI like any other vendor that processes Protected Health Information (PHI) (HIPAA Journal, 2025).
Use AI as an assistant for low-risk and non-clinical drafting, editing and research-support tasks, then apply human expert review before publication:
Drafting & editing (admin / non-clinical): outlines, plain-language rewrites, bullet lists, email templates, speaker notes, social posts, SEO title ideas, metadata.
Summaries & idea generation: summarise non-sensitive literature, brainstorm article angles, create structured outlines (but verify sources).
Tone & accessibility: adapt language for patients vs clinicians; roleplay to test tone (e.g., “rewrite for a 10-year-old”); produce alternate headlines and CTAs.
Career / professional tasks: CV rewrites, LinkedIn bios, interview prep, proposal / pitch language.
These are permitted uses if you (a) verify facts, (b) check citations against primary sources, and (c) remove any PHI beforehand. (See the stepwise workflow checklist below.)
Do not use AI to:
Make or suggest clinical diagnoses, treatment decisions, drug dosing, or triage instructions. Clinical decisions must be made by clinicians (WHO, 2025).
Process or analyse identifiable patient data with public LLMs (no PHI in prompts) unless your organisation has a compliant agreement and controls (HIPAA Journal, 2025).
Generate medical advice for patients without clinician review. Any patient-facing recommendation must be clinically validated and safety-checked (WHO, 2025).
Publish AI output without human verification. Never “publish first, check later” (ICMJE, 2024).
Treat AI as an author or give it authorship credit. Editorial bodies forbid listing AI as an author — AI cannot take responsibility. Disclose use instead (COPE, 2023).
Use AI to simulate informed consent or replace authentic patient communication. AI cannot give or obtain valid consent, and machine-generated empathy carries no legal standing.
Automate clinical documentation or chart notes without quality controls. Errors can create legal and clinical risks (Foley & Lardner LLP).
Disclose AI use in manuscripts and submissions. Major journal guidance requires transparent reporting of AI assistance (in the methods, acknowledgements, or cover letter) (ICMJE, 2024).
AI cannot be an author. COPE and other publishers state that AI cannot take legal or personal responsibility for a work and therefore cannot hold authorship (COPE, 2023).
Cite primary sources, not the LLM. If the AI suggests facts, trace them back to peer-reviewed papers, guidelines, or official websites, and cite those primary sources directly (ICMJE, 2024).
U.S. (HIPAA): Public generative AI platforms typically are not HIPAA-compliant by default, and vendors often will not sign Business Associate Agreements (BAAs). Do not send PHI to consumer chatbots (HIPAA Journal, 2025).
EU (GDPR / EDPB): Personal-data protections apply when training or using AI models; EDPB guidance clarifies when model training/processing implicates GDPR (anonymisation, legal basis, rights). If you process EU personal data, follow GDPR rules and EDPB opinions (EDPB, 2024).
Medical devices & clinical AI: AI used as a clinical decision-support or diagnostic tool, or AI that affects patient treatment, may fall under medical device regulation (e.g., FDA SaMD guidance). If your content describes or markets AI clinical tools, check device/marketing regulations (FDA, 2025).
Ethics & human rights: WHO and other bodies recommend human-centred design, fairness, accountability and transparency for AI in health — integrate these principles in content and product descriptions (WHO, 2025).
Use this stepwise workflow every time:
1. Classify risk: Is the task clinical or non-clinical? Is PHI involved? If it is clinical or involves PHI, stop and use approved systems only.
2. De-identify any data before using AI for summarisation: remove direct identifiers and consider whether re-identification risk remains (EDPB, 2024). (A minimal redaction sketch follows this list.)
3. Choose an appropriate tool: if PHI is required, use enterprise/healthcare versions that offer BAAs and on-prem or private-cloud deployment; otherwise use consumer tools only for non-sensitive text (HIPAA Journal, 2025).
4. Prompt with constraints: tell the model to “only summarise — do not invent references,” and request sources. (Use the prompt templates below; a code sketch follows them.)
5. Verify facts and sources: every factual claim, number, guideline or recommendation must be traced to a primary source (peer-reviewed paper, guideline, regulator) and checked by an expert (ICMJE, 2024).
6. Document & disclose: note what the AI did, with which model/version and when. Use disclosure text in publications/acknowledgements (ICMJE, 2024).
7. Human review & sign-off: a clinician or specialist must approve any clinical content before use.
8. Save logs & prompts for auditability: who used AI, for what, and which output was accepted or edited. This supports governance and reproducibility.
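As a concrete illustration of steps 2 and 8, here is a minimal Python sketch pairing a regex-based redaction pass with a JSONL audit log. Everything in it (the pattern set, file names, record fields) is a hypothetical example, and simple regexes alone do not satisfy HIPAA Safe Harbor de-identification: names, addresses and other free-text identifiers will slip through. Treat it as a pre-flight check used alongside validated de-identification tooling, not a replacement for it.

```python
import json
import re
from datetime import datetime, timezone
from pathlib import Path

# Illustrative patterns only; real de-identification must cover all 18 HIPAA
# Safe Harbor identifier categories and should use validated tooling.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace obvious identifiers with placeholders; report what was found."""
    found = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return text, found

def log_ai_use(log_path: Path, user: str, task: str, prompt: str, accepted: bool) -> None:
    """Append one audit record per AI interaction (workflow step 8)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "task": task,
        "prompt": prompt,
        "output_accepted": accepted,
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    raw = "Patient Jane Roe, MRN: 8812345, seen 03/14/2024, call 555-867-5309."
    clean, flags = redact(raw)
    print(clean)   # identifiers replaced with [LABEL REDACTED] placeholders
    print(flags)   # ['PHONE', 'MRN', 'DATE'] -- note the name still slipped through
    log_ai_use(Path("ai_audit.jsonl"), "editor-01", "summarise trial note", clean, accepted=True)
```

Only the redacted text goes to the AI tool, and the audit file itself should sit behind access controls, since prompts can be as sensitive as the documents they summarise.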
For articles / journal submissions:
“Sections of this manuscript were drafted with the assistance of an AI language model. All factual statements, references and interpretations were verified and edited by the human authors, who accept full responsibility for the content.” (ICMJE, 2024)
For patient materials or public webpages:
“This content was prepared with help from an AI writing assistant and reviewed for clinical accuracy by [name/role]. For medical advice, consult a healthcare professional.”
“Summarise this paper into a 200-word patient leaflet in plain English. Do NOT include any patient identifiers. Provide citations as DOIs or links.”
“Create 6 headline options for a blog post on [topic]. Keep them accurate and non-sensational.”
“Draft email to a journal editor describing that AI was used for language editing; say which model/version and affirm human oversight.”
“List guideline sources (with links and DOIs) that support the statement: [insert claim].” — then independently verify links (a small DOI-check sketch appears after the verification questions below).
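To show what workflow step 4 looks like in code rather than in a chat window, the sketch below wraps a template in a reusable function using the OpenAI Python SDK. The model name, system rules and function name are placeholder assumptions; any provider's chat-completion API can be wrapped the same way.

```python
# A hypothetical wrapper around workflow step 4: constrained prompting.
# Assumes the official OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_RULES = (
    "You are an editorial assistant for health content. "
    "Only summarise the text provided. Do not invent references, statistics, "
    "or clinical recommendations. If a claim needs a source you do not have, "
    "reply 'source needed' instead of guessing."
)

def constrained_summary(source_text: str, audience: str = "patients") -> str:
    """Request a summary under explicit 'do not invent' constraints."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use the model your organisation has approved
        temperature=0.2,  # low temperature keeps output conservative
        messages=[
            {"role": "system", "content": SYSTEM_RULES},
            {
                "role": "user",
                "content": (
                    f"Summarise the following for {audience} in plain English, "
                    f"200 words maximum, with no patient identifiers:\n\n{source_text}"
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

Putting the “do not invent references” rule in the system message and keeping the temperature low applies the constraint to every call, rather than relying on each writer to retype it.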
“What sources did you use to generate that answer? Please provide DOI/URL.”
“Which parts are evidence-based and which are inferred or hypothetical?”
“Is the information current to [insert specific date]? If not, what’s the latest evidence?”
“Flag any statements that might be speculative, biased, or likely to need clinical verification.”
Two simple fallback questions are always worth asking: “What is that answer based on?” and “Link to the source of that information.”
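Because the template above ends with “then independently verify links”, a small script can do the first, mechanical pass: confirming that each cited DOI actually resolves. The function name and example DOI below are illustrative, and a successful response only proves the link is live, not that the source supports the claim, so expert review still follows.

```python
import requests  # third-party: pip install requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if a DOI resolves via doi.org (a liveness check only)."""
    url = f"https://doi.org/{doi.strip()}"
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Check every DOI an AI-assisted draft cites before publication.
cited_dois = ["10.1038/s41586-020-2649-2"]  # example DOI; substitute your draft's list
for doi in cited_dois:
    print(doi, "resolves" if doi_resolves(doi) else "BROKEN OR POSSIBLY FABRICATED")
```

Fabricated citations often carry well-formed but non-existent DOIs, so even this crude check catches a common failure mode of LLM-generated reference lists.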
“Confident sounding” ≠ correct. LLMs can hallucinate. Always verify (WHO, 2025).
Watch for outdated guidance. Guidelines and drug approvals change — check dates (FDA, 2025).
Protect privacy. If in doubt about data protection rules in your jurisdiction, consult your legal/privacy office before using AI with any patient-related content (HIPAA Journal, 2025).
Journal & publisher checks: Before submitting work where AI helped, review the target journal’s policy; many now require disclosure (ICMJE, 2024).
Below are the authoritative references used in drafting this guidance. Use them to build your governance and reading lists.
Key sources (APA style with links)
International Committee of Medical Journal Editors (ICMJE). (2024). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals (updated). Retrieved from https://www.icmje.org/icmje-recommendations.pdf
Committee on Publication Ethics (COPE). (2023). Authorship and AI tools. Retrieved from https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
World Health Organization (WHO). (2025). Ethics and governance of artificial intelligence for health (guidance). Retrieved from https://www.who.int/publications/i/item/9789240084759
U.S. Food and Drug Administration (FDA). (2025). Artificial intelligence and machine learning in software as a medical device (SaMD): Action plan and guidance pages. Retrieved from https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
HIPAA Journal. (2025, April 9). Is ChatGPT HIPAA compliant? Retrieved from https://www.hipaajournal.com/is-chatgpt-hipaa-compliant/
European Data Protection Board (EDPB). (2024). Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models. Retrieved from https://www.edpb.europa.eu/our-work-tools/our-documents/topic/artificial-intelligence_en
Use AI to speed writing, editing and ideation — but never to replace clinical judgment, and always apply strict data protection, verification and disclosure practices before publishing or sharing anything clinical or patient-facing (WHO, 2025).