As artificial intelligence becomes more common in healthcare, questions about who controls its use are mounting, especially in mental health services. Briana Last, an assistant professor of psychology at Stony Brook University, has co-authored an article in Nature Mental Health addressing these concerns.
The article, written with Gabriela Kattan Khazanov of Yeshiva University’s Ferkauf Graduate School of Psychology, argues that the development and application of AI in behavioral healthcare should involve those most affected by it—patients, clinicians, and communities. The authors reference research from psychology, public health, and technology ethics to support their position.
“Much of my research examines how the U.S. mental health system does not currently meet the needs of most people who depend on it,” Last said. “The people who need care the most are often the least likely to receive it, and the clinicians who deliver care to underserved communities are often underpaid and overworked.”
Last noted that patients and clinicians want accessible and affordable care that allows for autonomy. “Based on my and others’ scholarship, the message is fairly consistent,” she said. “People want care to be more accessible, affordable and agency-promoting. They want more autonomy and decision-making power over how care is distributed and delivered, with most people wanting more human connection, not less.”
Her concern grew as AI-powered chatbots for mental health began receiving attention from technology leaders. “Tech CEOs and even some researchers began to tell the public that these chatbots would help solve the mental health crisis,” she said. “Though that may be an effective sales pitch for investors, it is both unlikely and inconsistent with what most people seeking and delivering treatment want or need.”
The article highlights that decisions about AI use are shaped by human choices and business interests. “There’s a tendency to think that the proliferation of AI in mental healthcare is inevitable, as if the current use-cases of AI are just the natural course of history,” Last said. “That kind of technological determinism fails to recognize that there are humans and powerful private interests behind these decisions.”
Last pointed out that while private companies have led much innovation in AI for healthcare, public investment has also played a significant role: “The public has spent decades funding AI’s development, and they are currently paying for many of its costs,” she said. “They should have a say in how these technologies are used.”
She warned about potential negative effects if AI tools are developed mainly to cut costs or increase profits: “I am concerned that technology companies, employers and insurers will use these technologies to cut human-delivered mental healthcare and clinical training in the name of cost-cutting,” Last said. “We are already seeing this happening.” She added that this could make access to human clinicians increasingly limited: “Human-delivered mental healthcare might become a luxury good,” she said.
Rather than dismissing AI altogether, Last advocates for increased public investment and ownership in developing these technologies for mental health applications. She expressed doubts about whether regulation alone can address rapid technological change: “While regulation is necessary, I don’t think it’s sufficient,” she said. “Public investment and ownership can shift technological investments to prioritize care for the neediest — care which may not always be profitable, but will always be essential.”
The article calls for participatory research methods involving service users, clinicians, and communities throughout all stages of AI development—not just token consultation. “The people who will be routinely using these technologies should have a seat at the table at every stage of the research process, from idea generation to implementation,” Last said. “Right now, there’s a huge disconnect between what most people think and feel about AI and how AI is actually being deployed in mental healthcare.”
Providers also worry about training quality and the patient relationship when AI systems such as chatbots are introduced into care: “People have very serious concerns about how AI is and will be used in mental healthcare,” Last said. “When it comes to chatbots, there are real questions about safety and efficacy, especially for vulnerable individuals.”
Last emphasized that public institutions like Stony Brook University are committed to producing knowledge for societal benefit rather than profit: “They exist to produce knowledge that benefits society, not just private investors.” She added: “Whenever I open the SBU newsletter, I’m amazed by the innovative work happening here… It’s a testament to what publicly oriented research can achieve.”
She cautioned against expecting quick solutions from policy or technology alone: “Our mental healthcare system is plagued by a lot of problems, and I don’t think they can be fixed by a few top-down policies or technological innovations.” Instead, she supports what she calls a democratic approach, in which those affected by the technology play an active role in shaping it.
“The public already pays for many of the costs of AI technologies,” she said. “We need to start feeling empowered to have a say in how AI is developed and deployed.”
“It’s easy to feel like the current ways AI is being used in mental healthcare are inescapable,” Last concluded. “But if we remember that humans, not the technologies themselves, are making decisions about how AI is designed and deployed, we can begin to reimagine how AI could actually promote public mental health and well-being. Service users, the public, and providers deserve a real voice. The future of mental healthcare should be shaped by the people it is meant to serve.”



