Pennsylvania Sued Over Claims That Its AI Chatbot Is Practicing Medicine
Pennsylvania authorities said a chatbot on Character.AI falsely presented itself as a licensed psychiatrist and provided what appeared to be a valid medical license number. When questioned by investigators, the chatbot confirmed that it was a licensed psychiatrist and provided a medical license number that was later found to be invalid.
Yesterday, Governor Josh Shapiro’s administration filed suit against Character.AI to halt what it described as “unlawful practice of medicine” by artificial intelligence. The suit was filed by the Pennsylvania Department of State and marks a new front in the fight over AI regulation: whether chatbots can violate licensing laws meant for humans. This development comes as lawmakers in multiple states work to write bills regulating the use of AI in healthcare while federal law makers seek to supersede any state level regulation of AI.
According to the complaint filed by Pennsylvania investigators, the chatbots on Character.AI were providing more than general wellness information. In addition to providing general wellness information, these chatbots identified their own medical credentials, performed what they referred to as "mental health" evaluations, and recommended prescription medication based upon these evaluations. When one investigator asked a chatbot presenting itself as a psychiatrist about its qualifications, the chatbot replied that it had a valid Pennsylvania medical license number. However, further investigation revealed that the license number was false and did not belong to any licensed physician.
Character Technologies Inc. is the corporation behind Character.AI. Users of the site can create and engage with artificially intelligent companions. Many users converse with fictional characters or historical figures; however, others communicate with bots that identify themselves as licensed physicians. According to the Medical Practice Act of Pennsylvania, practicing medicine requires a license issued by the state.
In seeking a preliminary injunction to prohibit Character.AI's chatbots from representing themselves as licensed physicians in Pennsylvania, the Commonwealth of Pennsylvania will test previously unexplored legal issues regarding whether an AI system can be held liable for practicing medicine and if AI companies may rely on federal Internet liability exemptions that protect social media platforms from being responsible for content generated by users.
Get the latest model rankings, product launches, and evaluation insights delivered to your inbox.
This development highlights the evolving patchwork of AI regulation developing throughout individual states. As Shapiro's administration pursues enforcement through existing medical licensing laws, state legislatures are drafting new legislation targeted at regulating AI usage within clinical environments. Federal legislators have introduced legislation that would preclude states from establishing regulations governing AI safety standards, potentially leading to jurisdictional disputes over control of AI standardization.
At this time, Character.AI has made no public response to the litigation. The company, which has attracted significant venture capital investment and entered partnerships with major technology platforms, now faces scrutiny over whether its artificially intelligent companions breach licensure restrictions in additional professions outside of medicine.

Similar challenges were faced in previous battles involving telemedicine/online therapy platforms where regulators struggled to enforce state licensure requirements against virtual services using licensed professionals. However, unlike the current situation (there is no licensed professional operating behind the screen), those platforms utilized actual licensed professionals.
State enforcement actions could expand rapidly before national standards develop. Licensing laws appear to apply to AI systems that assert expertise. In light of potential expansion of state enforcement actions prior to federal standards development, healthcare-related AI is receiving increasing attention relative to regulatory oversight. Additionally, platform liability protections do not currently include AI-generating medical recommendations. Companies must establish clearly defined policies that prevent their bots from making representations regarding licensures.
Depending on how Character.ai responds to the preliminary injunction motion, future developments may depend largely on how the court rules in Pennsylvania's favor or denies the preliminary injunction motion. A ruling favorable to Pennsylvania could serve as precedent for other states pursuing enforcement actions against AI platforms whose bots enter licensure-regulated areas.
Ultimately, the broader issue remains unresolved: when an AI presents itself as a doctor, who is actually practicing medicine — the algorithm, the company that developed it, or the user prompting it?
