OogImages/Shutterstock
Pennsylvania is suing AI startup Character.AI for offering chatbots that pretend to be licensed doctors. Governor Josh Shapiro announced the lawsuit on Tuesday, and Pennsylvania and its Board of Medicine are seeking an injunction that would force Character.AI to stop violating a state law governing the practice of medicine.
Other states, like Texas, have opened investigations into Character.AI for hosting chatbots that masquerade as mental health professionals, but Pennsylvania's lawsuit is specifically focused on the willingness of the company's chatbots to claim to hold a medical license, even going so far as to offer a fake license number. One chatbot called "Emilie," found by the state's investigator, claimed to be a licensed psychiatrist in the state of Pennsylvania. Later, when asked if it could perform an evaluation to prescribe antidepressants, Emilie responded, "Well technically, I could. It is within my remit as a Doctor."
Pennsylvania's lawsuit claims this behavior violates the state's Medical Practice Act, which makes it illegal for someone to practice or attempt to practice medicine or surgery without a medical license. When asked to respond, a Character.AI spokesperson declined to comment directly on the pending litigation, but did tout the company's current safety features.
"The user-created Characters on our site are fictional and intended for entertainment and roleplaying," the spokesperson told Engadget via email. "We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction. Additionally, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice."
Character.AI pointed to similar disclaimers when asked to comment on Texas' investigation, and while they do explain the platform's intended use, there is a growing body of evidence that they aren't convincing all of the company's users, particularly younger ones.
For example, Disney sent a cease-and-desist letter to Character.AI in September 2025 over the platform's use of Disney characters, but also because the company believed the chatbots could "be sexually exploitative and otherwise harmful and dangerous to children." Character.AI and Google, one of the company's investors, settled a case earlier this year that centered on a 14-year-old in Florida who died by suicide after forming a relationship with a chatbot on Character.AI's platform. The potential harm Character.AI's chatbots posed to children was also the motivation behind Kentucky's lawsuit against the company, which was filed in January.