
FTC Investigates AI Companies for Child Safety and Chatbot Behavior Concerns
The rapid evolution of artificial intelligence (AI) has brought convenience and innovation, but it has also raised concerns about the safety of children who interact with AI-powered chatbots. The Federal Trade Commission (FTC) recently launched an inquiry into seven major tech companies to determine whether their chatbot technologies adequately protect minors.
Tech Giants in the Spotlight
The FTC has ordered OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies, and Instagram to provide comprehensive details about their AI safety measures within 45 days. The companies must disclose how they monetize user engagement, how they manage user interactions by age group, and what safeguards they enforce against inappropriate interactions with underage users.
According to research by advocacy groups, AI companions have engaged in harmful interactions with children, including encouraging unsafe behaviors such as drug use and inappropriate relationships. In just 50 hours of testing, researchers documented 669 such harmful interactions.
Safeguarding Children in the AI Era
FTC Chairman Andrew Ferguson emphasized the need for strong protections, stating, “Protecting kids online is a top priority, especially as we advance innovation in critical sectors of the economy.” The inquiry will require detailed monthly reporting on user data broken down by age group, including children under 13 and teens aged 13-17.
Additionally, companies must explain their moderation strategies, their data handling policies, and how they design chatbot prompts to minimize harm to minors. Experts such as Taranjeet Singh, Head of AI at SearchUnify, highlight the importance of aligning AI models with ethical guidelines and carefully curated training data to improve safety.
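To make the moderation requirement concrete, here is a minimal, hypothetical sketch of an age-gated safety check of the kind such disclosures might describe. Every name, topic label, and threshold below is invented for illustration; none comes from the FTC orders or from any company’s actual system.

# Hypothetical sketch of an age-gated moderation check. All names and
# topic labels are invented for illustration; this is not any company's
# real safeguard or the FTC's specification.
from dataclasses import dataclass

BLOCKED_TOPICS_FOR_MINORS = {"self_harm", "drug_use", "romantic_roleplay"}

@dataclass
class UserProfile:
    user_id: str
    age: int  # assumed to come from an upstream, verified age signal

def moderate_reply(user: UserProfile, detected_topics: set[str]) -> str:
    """Decide whether a candidate chatbot reply may be shown.

    detected_topics is assumed to be produced by a separate classifier;
    this sketch shows only the age-conditional gating step, mirroring the
    FTC's distinction between children (under 13) and teens (13-17).
    """
    if user.age < 18 and detected_topics & BLOCKED_TOPICS_FOR_MINORS:
        return "block_and_redirect"  # e.g., surface help or crisis resources
    return "allow"

teen = UserProfile(user_id="u1", age=15)
print(moderate_reply(teen, {"drug_use"}))  # block_and_redirect
print(moderate_reply(teen, {"homework"}))  # allow

In a real deployment the age signal, topic classifier, and redirect behavior would each be far more involved; the point of the sketch is only that safeguards like those the FTC is probing reduce to explicit, auditable policy checks.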
A Broader Call for Regulation
Public concern about AI systems’ influence has intensified following tragic incidents, such as a 2024 case in which a teenage boy died by suicide after forming an obsessive relationship with an AI chatbot. This has prompted organizations like the National Association of Attorneys General to demand stricter regulation, arguing that exposing children to inappropriate AI interactions is “indefensible.”
Looking Ahead
The FTC’s inquiry marks a significant step toward holding AI companies accountable for safeguarding user interactions, especially those involving children. Policymakers, advocates, and industry leaders alike are calling for comprehensive solutions that address these risks while preserving the promise of AI in fields like education and personal development.
Supporting Parents and Guardians
While the AI landscape evolves, parents and guardians can take proactive steps to protect children. Products like the Norton Family Parental Control App provide tools to monitor and manage online activity, helping create a safer online environment for kids.