FTC probes AI ‘friendship’ chatbots for risks to kids


Regulators demand answers on privacy, disclosures, and safeguards

  • The FTC has launched a formal study asking several AI chatbot companies about their data handling, safety practices, and protections for children when these bots act like companions.

  • Firms under scrutiny include big names like Meta, OpenAI, Instagram, X.AI, Snap, and Alphabet. They must explain how they test for negative impacts on users and how they inform users and parents.

  • The focus is especially on potential risks to children and teens: how these bots are used, what protections are in place, and whether age restrictions, privacy rules, or disclosures are being followed.


The Federal Trade Commission (FTC) is taking a closer look at AI chatbots that are built to mimic humans — ones that might feel like friends or confidants. 

These bots use generative AI to carry on conversations that seem warm, emotional, or caring. Because of this, there’s concern that people (especially kids and teens) might start trusting them more than they should.

The FTC is using what’s called “6(b) orders” — tools that let the agency gather detailed information, not as part of a particular legal case but as a broad study. It’s asking seven major companies to provide data and explanations, including Alphabet (Google’s parent), Meta, OpenAI, Snap, Instagram, Character Technologies, and X.AI.

“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” FTC Chairman Andrew N. Ferguson said in a news release.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry. The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

What consumers should pay attention to

Here’s what this means for you — especially if you or someone in your family uses chatbots that feel like companions:

  1. Safety for Kids & Teens. The FTC wants to know what companies are doing before and after releasing these bots to detect harms. That includes emotional or psychological harm, misinformation, manipulation, or simply users relying too much on a bot instead of human help.

  2. Transparency & Disclosures. Are parents and users being told what these bots can (and can’t) do? Do people know how their data is stored, whether chats are shared, and how the bot was trained? The FTC specifically wants info on how companies disclose things like audience, data collection, risks, and how bots are advertised.

  3. Privacy & Data Handling. When you talk to a chatbot, that conversation becomes data — it may be saved, shared, or used for training. The FTC is asking companies to detail how they handle your inputs (what you say) and outputs (what the bot says back), and whether they share your information with others. For children, extra rules apply under the law (e.g. the Children’s Online Privacy Protection Act, or COPPA) that protect their data.

  4. Age Limits, Terms & Moderation. How do companies enforce rules about who can use the bots? If there are age limits, are they checked and enforced? What about moderation of content or behavior when things go wrong? The FTC wants to see how policies are enforced after the product is live.

Why it matters 

Even if a product seems harmless or fun, when it simulates emotions or friendship, it can influence how people think, feel, and act. Children and teens are less experienced at setting boundaries, recognizing risk, and distinguishing real relationships from simulated ones. Knowing that companies are being asked to show what safeguards are in place means there’s hope for stronger protections.

“I have been concerned by reports that AI chatbots can engage in alarming interactions with young users, as well as reports suggesting that companies offering generative AI companion chatbots might have been warned by their own employees that they were deploying the chatbots without doing enough to protect young users,” FTC Commissioner Melissa Holyoak said in a statement. 

“As use of AI companion chatbots continues to increase, I look forward to receiving and reviewing responses to the Section 6(b) orders we are issuing today.” 

