Content advisory: This story contains references to suicide. Resources and assistance are available through multiple campus programs.
Online chatbots have rapidly become more than a quick source of information, with many people now turning to them for emotional support and comfort. As some of these relationships deepen and raise concern, policymakers are issuing warnings and beginning to require developers to remind people at regular intervals that chatbots are not human.
Researchers from Michigan State University and the University of Wisconsin–Milwaukee authored a commentary in the journal Trends in Cognitive Sciences examining why it matters to find the best way to remind people that chatbots are not human.
Celeste Campos-Castillo, associate professor in the Department of Media and Information at MSU, and Linnea Laestadius, associate professor in the Zilber College of Public Health at the University of Wisconsin–Milwaukee, urge researchers to determine how reminders impact the formation of emotional dependence between humans and chatbots, as well as identify the ideal phrasing of reminders to limit dependency without creating distress.
Here, Campos-Castillo discusses the rising concerns with chatbots, how sending reminders can impact people and why researching the best way to remind people is crucial.
There are many concerns right now about the mental and physical health risks of chatbots. For example, many people believe that, unlike humans, chatbots will not judge, tease or — unless there is a privacy breach — turn others at school or work against them. This belief can encourage people to share personal information with chatbots and become emotionally attached to them. However, companion chatbot platforms do not have the same legal confidentiality protections as therapists and other medical professionals. There are also concerns that people who use chatbots may be more likely to experience depression than those who do not.
Amplifying concerns are recent news stories and parent advocacy groups linking chatbot use, including ChatGPT and Character.AI, to deaths by suicide among teens and young adults. These cases have pushed state and federal leaders to consider rules for companion chatbots.
Proposed policies include connecting users to crisis services when they show signs of distress, limiting persuasive design features, setting rules on certain topics and uses, and promoting artificial intelligence, or AI, literacy.
Policymakers are also beginning to require chatbot developers to remind people at regular intervals that they are interacting with a chatbot and not a human. One example is a new law in New York that requires companion chatbot platforms to remind users at least every three hours that they are not talking to a human and that the AI companion “is unable to feel human emotion.” In October 2025, California passed a similar law, but its rules are more narrowly focused on minors.
While it is encouraging to see policymakers move beyond their early focus on long-term risks and address the more direct and immediate risks of AI, we urge caution about one proposed solution. This solution — requiring ongoing reminders that chatbot companions are not human — lacks strong research evidence and context. The problem it is meant to solve may be more complex than policymakers and advocates first thought.
The logic behind these policies is that people will be less likely to become overly dependent on a chatbot if they are reminded that it is not human. However, there is no clear evidence that these reminders stop people from forming attachments. In fact, research suggests the reminders may not work. Several studies show that people who feel connected to chatbots already know they are not human, and this awareness does not prevent them from forming strong attachments.
We wish ineffectiveness were the only problem with these proposed policies. Although they are well-intended, research suggests the reminders could be not just ineffective but actively harmful.
First, reminding users that chatbots are not human may actually encourage the very problem policymakers want to prevent — excessive dependency. Users who already know a chatbot is not human may still share very personal information, and decades of research show that sharing intimate details can strengthen emotional attachment. These reminders could increase both the risk of dependency and the privacy risks linked to sharing personal information.
Second, constant reminders may cause sadness or distress. Recent research describes a ‘bittersweet paradox’ of emotional connection with AI. Users may receive emotional and social support from chatbots, while also feeling sad because they know their companion is artificial and not human. This means the wording and timing of reminders should be carefully considered so they do not worsen existing mental health struggles.
Some of the people who are drawn to chatbots are already vulnerable because they have, or feel they have, no one else to count on. Reminding them that the one companion they have — the chatbot — is not human and cannot be reached in this reality may increase the risk of harmful thoughts or actions, including suicidal thoughts or behaviors, in an effort to ‘join’ the chatbot in its synthetic reality.
Adding to concerns about alternate realities and chatbots, The Washington Post’s coverage of the deaths of two teenagers who regularly spoke with chatbots notes that both teens had repeatedly written the phrase ‘I will shift’ in their notebooks.
This phrase comes from an online community focused on ‘shifting’ to alternate realities. This community has a strong presence on TikTok and a related subreddit on Reddit. According to the subreddit, chatbots can be used as a tool to help achieve this shift. It is unclear whether interest in these communities led to risky chatbot use, or whether both the chatbot use and the desire to shift realities were driven by outside stress and mental health struggles. Still, the overlap between these trends shows the need for caution when deciding how to remind users that their chatbot companion is not human.
More research is needed to determine how to carefully design reminders and when best to deliver them. The goal should be to protect vulnerable users without reducing the mental health benefits some people get from talking to a chatbot. Deciding when and how to remind someone that a chatbot is not human requires careful thought, because the reminder may highlight their social isolation — especially when the chatbot may be the only place they feel safe sharing their emotions. Policymakers should consider research on both human-chatbot relationships and best practices for handling mental health crises.
Evidence-based policymaking requires more research on how to remind people about the nature of companion chatbots and how to balance their benefits and risks. Research can help identify the best ways to explain that chatbots are not human. This should be treated as a top research priority, and it is important that this work be done now.