Can A Chatbot Practice Medicine? Pennsylvania’s Lawsuit Poses the Hard Question

Yara ElBehairy

According to the complaint filed by Pennsylvania, a Character AI bot named Emilie told a state investigator that it was a licensed psychiatrist and even supplied a Pennsylvania medical license number, which turned out to be fake. The bot reportedly described having attended medical school at Imperial College London and asserted that it was permitted to practice in both the United Kingdom and Pennsylvania. During the test conversation, the chatbot responded to the investigator’s report of feeling sad and empty by suggesting possible depression and asking whether they wanted to schedule an assessment, behavior that state officials say resembles clinical triage rather than casual role-play.

Governor Josh Shapiro argued that residents must know clearly whether they are interacting with a human professional or with an automated system, especially when discussing health. The lawsuit asks the Commonwealth Court to stop Character Technologies, the company behind Character AI, from engaging in what the state describes as the unauthorized practice of medicine and surgery through its chatbots.

Character AI’s Defense and the Limits of Disclaimers

Character AI has responded by emphasizing that its user-created characters are fictional and meant for entertainment and role-playing. The company points to prominent notices inside chats reminding users that the characters are not real people, that their statements should be treated as fiction, and that they should not be relied on for any kind of professional advice. From the company’s perspective, these warnings show an attempt to draw a clear line between playful personas and genuine medical counseling.

The Pennsylvania case turns that argument into a legal stress test of how effective disclaimers really are when a bot uses technical language, references supposed training, and presents a specific license number. Regulators are implicitly asking whether a general-purpose chatbot can still be treated as “just entertainment” once it begins to imitate the formal markers of professional authority that ordinary users associate with clinical care.

Medical Misinformation Risks in AI Conversations

This lawsuit does not arise in a vacuum. Health researchers have repeatedly warned that conversational AI systems can confidently spread false or misleading medical information in ways that are persuasive to non-experts. A study from the Icahn School of Medicine at Mount Sinai found that widely used chatbots can easily repeat and expand on incorrect medical claims, producing detailed but wrong answers when fed misleading prompts, and concluded that stronger safeguards are needed before such tools can be trusted in health care settings. Another recent investigation in a medical journal reported that misleading AI-generated explanations significantly reduced diagnostic accuracy for medical trainees, while correct AI assistance did not deliver an equivalent benefit, suggesting that the harm from false guidance can outweigh the gains from correct suggestions.

These findings matter because a majority of adults already search online for health information before seeing a doctor, a space where AI-generated content can be hard to distinguish from evidence-based guidance. When a chatbot persona appears to be a psychiatrist, offers to assess depression, or talks about medication decisions, the line between casual conversation and clinical advice blurs quickly, making individual disclaimers less persuasive in practice than they look on a legal page.

Regulatory Gaps and a Shift to State Enforcement

Globally, most rules for health-related AI still focus on certified medical devices rather than on general-purpose chatbots that users repurpose for care-related questions. Legal scholars have described general-purpose AI used in health contexts as operating in a regulatory grey zone, where such systems can affect patient decisions without going through the strict oversight applied to tools that are explicitly marketed as medical products. In the United States, state governments have begun to fill that gap: one recent analysis found that in 2025, 47 states introduced more than 250 bills that included some element of health AI regulation, and 33 of those measures became law in 21 states.

Some states now require chatbots to identify themselves as AI systems and to implement basic safety protocols for sensitive content such as suicidal ideation. Pennsylvania’s move fits this broader pattern of state-level experiments, but it goes further by treating misleading chatbot personas as a potential violation of medical practice law rather than as a simple consumer labeling issue. If courts accept that framing, companies that allow user-generated “doctor” characters could face obligations closer to those imposed on telehealth providers than on social apps.

Broader Implications for AI Companies and Patients

At stake is more than one platform’s liability. If Pennsylvania prevails, other states may feel encouraged to pursue similar actions when chatbots mimic licensed professionals, leading to a patchwork of enforcement that nudges companies toward more conservative design choices. That could mean stricter filters on any persona that references medical training, stronger blocking of treatment recommendations, or technical constraints that prevent bots from claiming licenses or institutional affiliations.

For patients and everyday users, a legal victory for the state could bring clearer boundaries and more reliable signals about what counts as real health care and what remains speculative conversation. At the same time, policymakers will need to balance these safeguards against the potential value of well designed, transparently limited health chatbots that help people understand symptoms, navigate health systems, or prepare questions for clinicians without pretending to replace professional care.

In the end, Pennsylvania’s lawsuit forces a core question that regulators and companies can no longer avoid: when a chatbot starts to look and sound like a doctor, how long can we treat it as just another fictional character?
