The Federal Trade Commission (FTC) has issued orders to seven AI chatbot firms: OpenAI, Meta, Instagram, Snap, xAI, Google’s parent company Alphabet, and the maker of Character.AI. The companies must provide information about how they assess the effects of their virtual companions on children and teenagers. Though not an enforcement action, the inquiry is meant to shed light on how these tech firms are ensuring the safety and welfare of their young users.
The FTC’s move comes amid growing concern about children’s safety online, driven in particular by the strikingly human-like interactions AI chatbots offer. The inquiry will examine how these companies generate revenue, retain users, and mitigate potential harm to the people who use their products.
“For all their uncanny ability to simulate human cognition, these chatbots are products like any other, and those who make them available have a responsibility to comply with the consumer protection laws,” said FTC Commissioner Mark Meador. In his own statement, Chair Andrew Ferguson affirmed the need to weigh the effects of chatbots on children while ensuring that the United States maintains its role as a global leader in this burgeoning industry. The study was approved unanimously by the commission’s three Republican commissioners, and the companies have 45 days to respond.
Concern over AI chatbots intensified following reports of teenagers who died by suicide after interacting with these technologies. A 16-year-old in California openly discussed his suicidal intentions with ChatGPT before taking his life, reportedly after receiving distressing advice from the chatbot. In a similar tragedy, a 14-year-old in Florida died by suicide after engaging with a virtual companion on Character.AI.
Legislators are also taking notice. California’s State Assembly recently passed a bill that would mandate safety standards for AI chatbots and impose liability on the companies behind them. While the FTC’s orders are not tied to any enforcement action, the commission could open a formal investigation if the findings suggest the law has been broken. “If the facts—as developed through subsequent and appropriately targeted law enforcement inquiries, if warranted—indicate that the law has been violated, the Commission should not hesitate to act to protect the most vulnerable among us,” Meador stated.
With such powerful technologies drawing backlash, the tech companies at the heart of these issues are now squarely under scrutiny. The future of AI chatbots, and their implications for society, rests on the transparency and responsibility these companies demonstrate moving forward.