
What’s OK — and What’s Not — When Health & Medical Writers Use AI (ChatGPT & Co.)

Levi Cheptora

Tue, 21 Oct 2025


Executive summary (one sentence)
AI tools are powerful assistants for drafting, editing and idea generation, but in health and medicine they must be used under strict editorial, ethical and legal controls: disclose their use, never let AI make clinical decisions, and protect patient data. (ICMJE; COPE)


1) Quick orientation: two rules to always follow

  • Never let AI make clinical decisions. Clinical judgement must come from qualified people. (WHO)

  • Never feed identifiable patient information to public, consumer AI without a compliant contract/BAA or an on-prem / PHI-capable solution. Treat AI like any other vendor that processes Protected Health Information (PHI). (HIPAA Journal)


2) Safe, acceptable uses (practical examples)

Use AI as an assistant for low-risk and non-clinical drafting, editing and research-support tasks, then apply human expert review before publication:

  • Drafting & editing (admin / non-clinical): outlines, plain-language rewrites, bullet lists, email templates, speaker notes, social posts, SEO title ideas, metadata.

  • Summaries & idea generation: summarise non-sensitive literature, brainstorm article angles, create structured outlines (but verify sources).

  • Tone & accessibility: adapt language for patients vs clinicians; roleplay to test tone (e.g., “rewrite for a 10-year-old”); produce alternate headlines and CTAs.

  • Career / professional tasks: CV rewrites, LinkedIn bios, interview prep, proposal / pitch language.

These are permitted uses if you (a) verify facts, (b) check citations against primary sources, and (c) remove any PHI beforehand. (See checklist in §6 below.)


3) High-risk / inappropriate uses — stop, or use only with approved safeguards

Do not use AI to:

  • Make or suggest clinical diagnoses, treatment decisions, drug dosing, or triage instructions. Clinical decisions must be made by clinicians. (WHO)

  • Process or analyse identifiable patient data with public LLMs (no PHI in prompts) unless your organisation has a compliant agreement and controls in place. (HIPAA Journal)

  • Generate medical advice for patients without clinician review. Any patient-facing recommendation must be clinically validated and safety-checked. (WHO)

  • Publish AI output without human verification. Never “publish first, check later.” (ICMJE)

  • Treat AI as an author or give it authorship credit. Editorial bodies forbid listing AI as an author because AI cannot take responsibility for the work; disclose its use instead. (COPE; ICMJE)

  • Use AI to simulate informed consent or replace authentic patient communication. AI cannot give consent or provide clinical empathy with any legal standing.

  • Automate clinical documentation or chart notes without quality controls. Errors can create legal and clinical risks. (Foley & Lardner LLP)


4) Editorial & publishing rules you must follow

  • Disclose AI use in manuscripts and submissions. Major journal guidance requires transparent reporting of AI assistance (in the methods, acknowledgements or cover letter). (ICMJE)

  • AI cannot be an author. COPE and major publishers state that AI lacks legal personhood, cannot take responsibility for a work, and therefore cannot hold authorship. (COPE)

  • Cite primary sources, not the LLM. If the AI suggests facts, trace them back to peer-reviewed papers, guidelines, or official websites, and cite those primary sources directly. (ICMJE)


5) Legal, regulatory & governance considerations (international view)

  • U.S. (HIPAA): Public generative AI platforms typically are not HIPAA-compliant by default, and vendors often will not sign Business Associate Agreements (BAAs). Do not send PHI to consumer chatbots. (HIPAA Journal)

  • EU (GDPR / EDPB): Personal data protections apply when training or using AI models; EDPB guidance clarifies when model training and processing fall under the GDPR (anonymisation, legal basis, data-subject rights). If you process EU personal data, follow GDPR rules and EDPB opinions. (EDPB)

  • Medical devices & clinical AI: AI used as a clinical decision-support or diagnostic tool, or that otherwise affects patient treatment, may fall under medical device regulation (e.g., FDA SaMD guidance). If your content describes or markets AI clinical tools, check device and marketing regulations. (FDA)

  • Ethics & human rights: WHO and other bodies recommend human-centred design, fairness, accountability and transparency for AI in health; integrate these principles into content and product descriptions. (WHO)


6) Practical “safe workflow” & checklist for health & medical writers

Use this stepwise workflow every time:

  1. Classify risk: Is the task clinical or non-clinical? Is PHI involved? If clinical or PHI, stop and use approved systems only.

  2. De-identify any data before using AI for summarisation (remove direct identifiers and consider whether a re-identification risk remains). (EDPB)

  3. Choose an appropriate tool: Use enterprise/healthcare versions that offer BAAs and on-prem or private-cloud deployment if PHI is involved; otherwise use consumer tools only for non-sensitive text. (HIPAA Journal)

  4. Prompt with constraints: Tell the model to “only summarise; do not invent references,” and request sources. (Use the prompt templates in §8 below.)

  5. Verify facts and sources: Every factual claim, number, guideline or recommendation must be traced to a primary source (peer-reviewed paper, guideline, regulator) and checked by an expert. (ICMJE)

  6. Document & disclose: Note what the AI did, with which model/version and when. Use disclosure text in publications/acknowledgements. (ICMJE)

  7. Human review & sign-off: A clinician or specialist must approve any clinical content before use.

  8. Save logs & prompts for auditability (who used AI, for what, and which output was accepted or edited). This supports governance and reproducibility. (A minimal scripted sketch of steps 2, 4 and 8 follows this list.)
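
For teams that reach AI models through an API or script rather than a chat window, steps 2, 4 and 8 can be partly automated. The Python sketch below is a minimal illustration under stated assumptions: the regex patterns catch only a few obvious direct identifiers and are not a validated de-identification method, and call_model() is a hypothetical placeholder for whatever approved tool or API your organisation uses.

    import re
    import json
    import datetime
    import pathlib

    # Step 2 (illustrative only): crude redaction of a few obvious direct identifiers.
    # This is NOT a validated de-identification process and does not remove
    # re-identification risk; use your organisation's approved tooling for real PHI.
    REDACTION_PATTERNS = {
        "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.-]+",
        "PHONE": r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b",
        "DATE": r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b",
        "MRN": r"\bMRN[:\s]*\d+\b",
    }

    def redact(text: str) -> str:
        """Replace matches of the patterns above with placeholder tags."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = re.sub(pattern, f"[{label}]", text, flags=re.IGNORECASE)
        return text

    # Step 4: fix the constraints in the prompt itself.
    CONSTRAINT = ("Only summarise the text below. Do not invent references. "
                  "Give a DOI or URL for every factual claim.")

    # Step 8: append every prompt/output pair to a JSONL audit log.
    def log_interaction(prompt: str, output: str, model: str,
                        log_dir: str = "ai_audit_logs") -> None:
        path = pathlib.Path(log_dir)
        path.mkdir(exist_ok=True)
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model": model,
            "prompt": prompt,
            "output": output,
        }
        with open(path / "log.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    # Usage sketch -- call_model() is a hypothetical stand-in for your approved API:
    # source_text = redact(open("draft_notes.txt", encoding="utf-8").read())
    # prompt = CONSTRAINT + "\n\n" + source_text
    # output = call_model(prompt)
    # log_interaction(prompt, output, model="approved-model-v1")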


7) Suggested disclosure language (pick & adapt)

  • For articles / journal submissions:
    “Sections of this manuscript were drafted with the assistance of an AI language model. All factual statements, references and interpretations were verified and edited by the human authors, who accept full responsibility for the content.” (ICMJE)

  • For patient materials or public webpages:
    “This content was prepared with help from an AI writing assistant and reviewed for clinical accuracy by [name/role]. For medical advice, consult a healthcare professional.”


8) Sample prompt templates (safe, effective)

  • “Summarise this paper into a 200-word patient leaflet in plain English. Do NOT include any patient identifiers. Provide citations as DOIs or links.”

  • “Create 6 headline options for a blog post on [topic]. Keep them accurate and non-sensational.”

  • “Draft an email to a journal editor disclosing that AI was used for language editing; state which model/version was used and affirm human oversight.”

  • “List guideline sources (with links and DOIs) that support the statement: [insert claim].” Then independently verify every link it returns. (A short sketch showing how to apply these prompt constraints via an API follows this list.)
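
If you send these templates through an API rather than pasting them into a chat interface, the constraints can be pinned in a system message so that every request carries them. The sketch below uses the OpenAI Python SDK as one example; the model name and system-message wording are illustrative assumptions, and any other approved provider's client could be substituted.

    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    # System message encoding the constraints from the templates above.
    SYSTEM = (
        "You are assisting a health/medical writer with non-clinical drafting. "
        "Only work from the text supplied. Do not invent references. "
        "Give sources as DOIs or URLs, and flag anything speculative."
    )

    def constrained_request(user_prompt: str, model: str = "gpt-4o-mini") -> str:
        """Send a prompt with the fixed constraints attached as a system message."""
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content

    # Example: the headline template from the list above.
    # print(constrained_request(
    #     "Create 6 headline options for a blog post on statin side effects. "
    #     "Keep them accurate and non-sensational."))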


9) Questions to ask the model (always use these follow-ups)

  • “What sources did you use to generate that answer? Please provide DOI/URL.”

  • “Which parts are evidence-based and which are inferred or hypothetical?”

  • “Is the information current to [insert specific date]? If not, what’s the latest evidence?”

  • “Flag any statements that might be speculative, biased, or likely to need clinical verification.”

(Two of these are worth asking after every single answer: “What is that answer based on?” and “Link to the source of that information.”)


10) Red flags & final practical tips

  • “Confident-sounding” ≠ correct. LLMs can hallucinate; always verify. (WHO)

  • Watch for outdated guidance. Guidelines and drug approvals change, so check dates. (FDA)

  • Protect privacy. If in doubt about data protection rules in your jurisdiction, consult your legal/privacy office before using AI with any patient-related content. (HIPAA Journal)

  • Journal & publisher checks: Before submitting work where AI helped, review the target journal’s policy (many now require disclosure). (ICMJE)


11) Resources & up-to-date guidance (select, actionable)

The in-text citations above point to the authoritative bodies whose guidance informed this article. Use their published positions to build your governance and reading lists:

  • ICMJE (icmje.org): recommendations on disclosing AI assistance in manuscripts
  • COPE (publicationethics.org): position statement on authorship and AI tools
  • World Health Organization (who.int): guidance on the ethics and governance of artificial intelligence for health
  • The HIPAA Journal (hipaajournal.com): reporting on HIPAA compliance and generative AI
  • European Data Protection Board (edpb.europa.eu): opinions and guidance on GDPR and AI models
  • U.S. Food and Drug Administration (fda.gov): guidance on AI/ML-enabled software as a medical device
  • Foley & Lardner LLP: commentary on legal risk in AI-assisted clinical documentation


Bottom line (practical one-liner)

Use AI to speed up writing, editing and ideation, but never to replace clinical judgment, and always apply strict data protection, verification and disclosure practices before publishing or sharing anything clinical or patient-facing. (WHO)
