ChatGPT in Healthcare: What Does it Mean and How Can Healthcare Benefit?
November 7, 2025

How clinicians, researchers, and health systems are using large language models to streamline work and transform care
Large Language Models (LLMs) like ChatGPT have entered nearly every sector. Few, however, are as promising (or as sensitive) as healthcare. By 2023, an estimated 18% of healthcare workers had already used ChatGPT for healthcare purposes, and over 80% of non-users expressed interest in doing so (BMC Medical Informatics, 2023).
From drafting prior authorization letters to summarizing patient messages, these models offer new ways to reduce administrative burden, improve communication, and accelerate research. But they also raise important questions about accuracy, privacy, and responsibility.
Let’s explore what ChatGPT and similar LLMs mean for healthcare today, and how they might reshape the future of care delivery and research.
Key Takeaways
- ChatGPT and similar LLMs can streamline administrative tasks like prior authorization, chart summarization, and message drafting.
- They hold potential to reduce burnout and documentation burden, but risks remain around privacy and accuracy.
- Responsible, human-in-the-loop design will be essential for safe adoption.
- Tools like Brim Analytics already show how LLMs can support structured, auditable, and secure use in healthcare data workflows.
What is ChatGPT?
ChatGPT is a Large Language Model (LLM) developed by OpenAI. It’s a form of artificial intelligence trained on massive amounts of text data to understand and generate natural language. ChatGPT can summarize, draft, explain, and answer questions conversationally.
While ChatGPT is the most widely known, it’s part of a larger family of LLMs including Google’s Gemini, Anthropic’s Claude, and open-source models like Llama that are increasingly being adapted for medical contexts. In healthcare, these models are most effective when tailored to domain-specific data and embedded within safe, auditable workflows.
How Can ChatGPT Be Used in Healthcare?
LLMs like ChatGPT can help clinicians, researchers, and administrators manage the ever-growing volume of medical information. Below are some of the most promising applications.
Prior Authorization
One of the most immediate uses for LLMs is in simplifying prior authorization (PA), the process by which clinicians must justify a treatment or test to an insurance payer. ChatGPT can help on both sides of the process (a brief code sketch follows the list):
- Assist clinicians by compiling relevant elements of a patient’s record and drafting a prior authorization letter for a procedure.
- Assist insurance companies by automating the review of submitted documentation and highlighting aspects contributing to approval or rejection decisions.
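To make the clinician-side workflow concrete, here is a minimal Python sketch using the OpenAI SDK. The function name, prompt wording, and inputs are all hypothetical, and any real patient data would need to flow through a HIPAA-compliant, BAA-covered endpoint rather than the public API (see the HIPAA section below).

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; use a compliant endpoint for real PHI

def draft_pa_letter(patient_summary: str, procedure: str, payer_criteria: str) -> str:
    """Draft a prior authorization letter for a clinician to review and edit.

    All three inputs are hypothetical; in a real workflow they would be
    assembled from the chart and the payer's published criteria.
    """
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.2,  # keep the letter factual and close to the source material
        messages=[
            {
                "role": "system",
                "content": "You draft prior authorization letters. Use only facts "
                           "provided by the user; never invent clinical details.",
            },
            {
                "role": "user",
                "content": f"Patient summary:\n{patient_summary}\n\n"
                           f"Requested procedure: {procedure}\n\n"
                           f"Payer criteria to address:\n{payer_criteria}\n\n"
                           "Draft a concise letter of medical necessity.",
            },
        ],
    )
    return response.choices[0].message.content
```

The output is a first draft only: the clinician reviews every claim and remains the author of record before anything is submitted.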
A team at the Perelman School of Medicine detailed how LLMs could compile necessary patient history elements for PA requests and reduce administrative waste (Tripathi et al., 2024).
Patient Communication
Clinicians spend hours managing patient messages through patient portals. In a 2024 randomized trial at Stanford, using AI to draft responses reduced clinician burnout and task-load scores, even though it did not save time (JAMA Network Open, 2024).
By shifting the task from writing to editing, ChatGPT can help clinicians stay responsive while reducing cognitive load.
Patient Education & Translation
Health literacy remains a major barrier to care. Patient materials are recommended to be written at a fifth-grade reading level, but many are far more complex. One 2014 study of 138 online patient education articles found they were written at 10th- to 14th-grade reading levels (PubMed, 2014). This creates hurdles for patient education and equity.
LLMs can rewrite materials in simpler language and translate them into multiple languages, making information more accessible and inclusive.
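To make the reading-level target measurable, here is a small Python sketch of the Flesch-Kincaid grade-level formula that a team could use to spot-check simplified output. The syllable counter is a rough vowel-group heuristic (a production pipeline would use a library such as textstat), so treat the scores as approximate.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (at least one per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

dense = "Hypertension management requires adherence to antihypertensive pharmacotherapy."
plain = "Take your blood pressure medicine every day to keep your heart healthy."
print(flesch_kincaid_grade(dense))  # far above a fifth-grade level
print(flesch_kincaid_grade(plain))  # substantially lower
```

A check like this can run automatically after an LLM simplifies a document, flagging any output that still scores above the target grade level.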
Drafting Visit Notes
Clinician burnout is fueled by documentation demands. One study showed that from 2019 to 2022, the average time PCPs spent in the EHR per 8 hours of appointments increased by almost 30 minutes, with corresponding increases for orders, inbox, chart review, and notes (PubMed, 2024). LLMs can generate first drafts of clinical notes or summarize structured data from transcripts, reducing time spent on routine documentation. With thousands of medical scribes employed across the U.S., automation in this space has clear time-saving potential.
Research
Reading clinical notes manually is expensive and slow. LLMs make it possible to extract structured data from large volumes of unstructured text, enabling faster, larger, and better-powered retrospective studies.
For example, Brim Analytics’ AI-powered chart abstraction accelerates research and registry creation by converting narrative text into high-quality structured variables, making it easier to analyze patterns across thousands of patient records.
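To illustrate the general pattern (a generic sketch, not Brim's actual implementation), an LLM can be prompted to return each variable as JSON alongside a verbatim evidence quote so a human abstractor can audit every value. The variable names and prompt below are hypothetical:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes an institution-approved, HIPAA-compliant endpoint

# Hypothetical variable set; a real study would define these in its protocol.
EXTRACTION_PROMPT = """Extract these variables from the clinical note and return JSON
with keys: diagnosis, diagnosis_date, stage, evidence_quote.
Use null for anything not documented. evidence_quote must be copied verbatim
from the note so a human reviewer can verify each value."""

def extract_variables(note_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # deterministic settings make runs easier to audit
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": note_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

Requiring an evidence quote for every extracted value is what makes output like this reviewable, which is the property that matters most for registry-grade data.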
Advantages of ChatGPT in Healthcare
As early pilots show, these tools can do more than just save time; they may fundamentally reshape how clinicians and researchers interact with data.
The advantages of ChatGPT and other large language models in healthcare come down to scale, speed, and consistency. These tools can process massive amounts of unstructured clinical text in seconds, including progress notes, radiology reports, and patient messages, surfacing insights that would otherwise take hours of manual review. When used responsibly, LLMs can help clinicians focus on higher-value work, standardize documentation, and expand access to medical information.
LLMs Understand a Lot of Context, Fast
LLMs can interpret context across thousands of words, summarizing patient histories, identifying relevant treatments, or extracting temporal relationships in seconds. This contextual awareness allows for better summarization and data retrieval than rule-based NLP systems.
Reduce Inconsistencies Among Human Reviewers
Even experienced clinicians or abstractors can disagree on chart interpretation. ChatGPT’s ability to standardize and replicate reasoning steps can improve consistency, particularly when combined with transparent, auditable interfaces like Brim’s Variable Scorecard.
Shifting from Writing to Editing Reduces Staff Burden
Documentation and messaging workloads contribute heavily to clinician burnout. With LLMs handling first drafts, staff can focus on review and decision-making, improving job satisfaction and freeing up time for patient care.
Drawbacks of ChatGPT in Healthcare
Even the most advanced and promising systems come with risks. Understanding these limitations is just as important as celebrating the progress.
Despite the growing enthusiasm around ChatGPT and other large language models in medicine, these tools are far from risk-free. Healthcare data is deeply sensitive, and decisions based on AI outputs can carry real clinical consequences. Studies have shown that LLMs can produce hallucinations, which are statements that sound confident but are factually incorrect. Data privacy remains a major concern when models are not deployed in secure environments.
Beyond technical concerns, the broader ethical and regulatory landscape is still taking shape: Who is responsible if an AI tool misguides a diagnosis or denies a treatment? As institutions explore integrating LLMs into patient care, research, and administrative workflows, it’s critical to understand both the practical and moral limitations of these systems.
HIPAA Concerns
Protected Health Information (PHI) cannot be entered into public LLM interfaces like chat.openai.com. However, some Electronic Medical Record (EMR) systems are integrating private, compliant LLM instances with Business Associate Agreements (BAAs) in place that ensure data is never used for model retraining.
Healthcare organizations must carefully vet any AI vendor for compliance and data governance before adoption. These concerns are the reason for Brim's privacy-first design: it runs securely behind your firewall, uses your approved institutional AI endpoint, and is SOC 2 Type II and HIPAA compliant.
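In practice, much of a secure deployment comes down to where the API client points. A minimal sketch, assuming the institution exposes an OpenAI-compatible gateway behind its firewall (both environment variable names are hypothetical placeholders):

```python
import os
from openai import OpenAI

# Point the SDK at a private, BAA-covered gateway instead of the public API.
# Both environment variable names are hypothetical placeholders.
client = OpenAI(
    base_url=os.environ["INSTITUTIONAL_LLM_ENDPOINT"],
    api_key=os.environ["INSTITUTIONAL_LLM_API_KEY"],
)
# Requests now stay inside the institution's network boundary; the BAA governs
# retention, and the vendor should confirm prompts are never used for training.
```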
Inaccurate Diagnosis
LLMs can produce convincing but incorrect statements, a phenomenon known as hallucination. When clinical reasoning or patient safety is involved, a human-in-the-loop is non-negotiable. ChatGPT may assist in summarization or information retrieval, but for safety, final judgment must always remain with clinicians.
Sycophancy and Incentives
AI systems can mirror the biases and incentives of their creators. The American Medical Association reported that 60% of surveyed clinicians believed insurers’ AI systems were systematically denying coverage for necessary treatments, raising ethical and regulatory alarms (AMA, 2025).
AI can be optimized toward almost any goal, and that is both its strength and its danger. If an insurer’s metric is “reduce costs,” the model may optimize for denial rates rather than fairness. Transparent validation systems and guardrails are essential to ensure that AI-driven systems remain ethical, equitable, and aligned with patient care.
How LLMs Like ChatGPT May Change Healthcare
Over the next few years, expect to see LLMs transform daily healthcare operations:
- Clinicians will do less writing and more editing. Notes, letters, and messages will start as AI drafts.
- Administrative burden and burnout may decline. Clinicians will spend less time on repetitive tasks.
- Patients will be more informed. When implemented carefully, AI-driven education could improve health literacy and patient engagement.
For institutions that value accuracy and transparency, human oversight will remain the foundation of safe AI integration. This mirrors how Brim enables researchers to review, validate, and improve model outputs in chart abstraction workflows.
Bottom Line
ChatGPT represents both promise and peril for healthcare. Used responsibly, it can ease administrative load, standardize communication, and accelerate research. Misused, it risks undermining trust and safety.
The healthcare organizations that succeed with AI won’t be those that adopt it fastest; they’ll be the ones that adopt it most responsibly. At Brim Analytics, we believe the future of healthcare AI lies not in replacing humans, but in empowering them. Our AI-guided chart abstraction platform is built around human-in-the-loop verification, auditability, and transparency: principles that will be essential as ChatGPT and similar models become embedded in care and research.
ChatGPT in Healthcare FAQ
Can ChatGPT be used for healthcare?
Yes, but only under proper safeguards. ChatGPT can assist with writing, summarizing, and education tasks in healthcare, but it shouldn’t be used for diagnosis or direct patient care without human oversight.
What are the risks of using medical ChatGPT?
Risks include privacy violations, inaccurate or fabricated information (“hallucinations”), and lack of explainability. Organizations must ensure HIPAA compliance and maintain human review.
What are other applications of AI in the medical space?
Beyond ChatGPT, AI is transforming clinical trial recruitment, chart abstraction, image interpretation, and predictive analytics. Learn more in our post: Why Manual and NLP-Based Chart Abstraction Are Becoming Obsolete