The use of artificial intelligence in mental healthcare is exploding. From chatbots that offer a listening ear to algorithms that predict mental health crises, AI promises to expand access to care and improve treatment. But with this great promise comes great responsibility, and a pressing need for thoughtful regulation. In the absence of comprehensive federal guidance, a patchwork of state-level laws and policies is emerging, creating a complex and evolving landscape for providers, developers, and patients to navigate.
The States Taking the Lead
Several states have already taken significant steps to regulate the use of AI in mental healthcare, each with a slightly different approach:
Illinois: In a landmark move, Illinois passed the "Wellness and Oversight for Psychological Resources (WOPR) Act" in August 2025. This first-of-its-kind legislation explicitly prohibits the use of AI to provide therapy or make clinical decisions. AI tools can still be used for administrative and support tasks, but they cannot replace a licensed professional in the delivery of mental health services (State of Illinois, 2025).
Nevada: Nevada has also taken a firm stance, enacting a law that bans AI from being programmed to act as a mental health professional. The state has also placed restrictions on the use of AI systems by behavioral healthcare providers when treating patients (Manatt, Phelps & Phillips, LLP, 2025).
Utah: Utah’s approach focuses on transparency. The state now requires that any "mental health chatbot" clearly disclose to users that it is not a human. In addition, Utah has established an Office of Artificial Intelligence Policy to develop policies and regulations that will ensure the safe and ethical use of AI, including in the mental health sector (Manatt, Phelps & Phillips, LLP, 2025).
Other states, including California, Massachusetts, Michigan, Ohio, Pennsylvania, and Wisconsin, are also actively considering legislation related to AI in healthcare. This flurry of activity at the state level underscores the growing recognition of the need for oversight in this rapidly developing field.
Why Staying Up-to-Date Matters
The lack of a unified federal framework for AI in healthcare creates a "Wild West" environment in which regulations vary significantly from one state to another. This variability is challenging for developers who want to build and deploy AI tools on a national scale, and confusing for patients who may not be aware of the protections (or lack thereof) in their state.
Staying informed about these evolving regulations is critical for several reasons:
Patient Safety: Unregulated AI tools could provide inaccurate or harmful advice, putting patients at risk. Clear regulations help ensure that AI is used safely and effectively.
Privacy and Security: Mental health data is incredibly sensitive. Regulations are needed to protect this data from being misused or shared without consent.
Equity and Bias: AI algorithms are only as good as the data they are trained on. Without proper oversight, AI tools could perpetuate and even amplify existing biases in healthcare.
CTeL's Vision for a Responsible AI Future
Amidst this complex regulatory landscape, organizations like the Center for Telehealth & eHealth Law (CTeL) are playing a crucial role in shaping the future of AI in healthcare. CTeL is a national, non-profit organization that provides expert legal and regulatory analysis of telehealth and digital health issues.
CTeL has launched an AI Blue Ribbon Collaborative, which brings together legal, clinical, and scientific experts to provide unbiased, vendor-neutral guidance on the adoption of safe, effective AI and machine learning practices. The initiative aims to establish clear standards, rules of engagement, and best practices to ensure that AI technologies are used safely, ethically, and effectively.
CTeL advocates for a balanced approach to regulation—one that protects patients without stifling innovation. By fostering collaboration and providing expert guidance, CTeL is helping to build a future where AI can safely and effectively augment human intelligence to improve mental healthcare for all.
The Road Ahead
The journey to establish a comprehensive regulatory framework for AI in mental healthcare is just beginning. As more states take action and organizations like CTeL continue to lead the conversation, we can expect to see a more consistent and robust approach to oversight emerge. By staying informed and engaged, we can all play a part in ensuring that AI fulfills its promise to revolutionize mental healthcare in a safe, ethical, and equitable way.