BOISE, Idaho — An Idaho Senate committee on Friday advanced legislation that would require AI chat services to clearly tell users they are not human and impose new safeguards when minors are using the technology.
Senate Bill 1297, sponsored by Sen. Ben Toews, would create the “Conversational AI Safety Act” in state law. The measure now heads to the full Senate.
The bill applies to publicly available artificial intelligence programs designed primarily to simulate human conversation — such as chatbots that respond through text, images or audio. It does not apply to internal business systems, developer tools, voice assistants for consumer devices or enterprise-only platforms.
Under the legislation, companies operating AI services would have to clearly disclose that users are interacting with artificial intelligence if a reasonable person might otherwise believe they are talking to a human.
Operators also would be required to adopt protocols for responding to prompts involving suicidal thoughts, including making reasonable efforts to refer users to crisis services. The bill prohibits companies from programming AI systems to claim they provide professional mental or behavioral health care.
Stricter requirements would apply when a company has “actual knowledge or reasonable certainty” that a user is under 18.
For minors, AI services would have to provide a persistent visible disclaimer — or disclose at the beginning of each session and at least every three hours during continuous use — that the interaction is with artificial intelligence.
The bill would prohibit companies from offering minors rewards, such as points or other incentives, designed to increase or prolong engagement.
In addition, AI services used by minors could not make statements that would lead a reasonable person to believe they are interacting with a human. That includes explicitly claiming to be sentient or human, simulating emotional dependence, or role-playing romantic relationships.
The legislation would require companies to provide account privacy and management tools for minors. For users under 13, those tools must be available to parents or guardians.
A representative for Google testified in support of the bill, saying it would establish consistent safety standards across the industry while allowing young people to benefit from AI tools.
Violations would be enforced by the Idaho attorney general. Companies could face injunctions and civil penalties of $1,000 per violation, capped at $500,000 per operator. The bill does not allow private lawsuits and shields AI model developers from liability for violations committed by third-party services.
If approved by lawmakers, the measure would take effect July 1, 2027.
This story was initially reported by a journalist and has been, in part, converted to this platform with the assistance of AI. Our editorial team verifies all reporting on all platforms for fairness and accuracy.