AI chatbots suitable only for adults are still appearing in children’s toys

A new report from the US Public Interest Research Group (PIRG) Education Fund has raised concerns about the increasing use of artificial intelligence chatbots in children’s toys, warning that some of these systems may not be suitable for younger users. According to the report, some AI-based toys incorporate chatbot technology that can generate responses similar to those used in adult-focused AI services, potentially exposing children to inappropriate or misleading content.

The study examined a variety of toys that incorporate conversational AI features, including interactive dolls, robots, and educational devices. Many of these products are built on large language models similar to those behind popular AI chatbots, allowing children to converse with toys that respond in natural language.

Although the technology can make toys more interactive and educational, PIRG researchers argue that the safeguards built into some products may not be strong enough to protect younger audiences. In particular, the report highlights that the underlying AI systems often originate from platforms designed for general users, not children.

As a result, responses generated by these toys may contain information or conversation topics better suited to adults than to children. The report also warns that AI can produce inaccurate or unpredictable answers, which could confuse younger users, who tend to trust toys as reliable sources of information.

Researchers who reviewed the toys’ documentation and privacy policies also found that some products rely heavily on cloud-based AI systems.

This means that the child’s voice interactions may be transmitted to an external server where the data can be processed and used to generate a response. Privacy advocates say this raises additional concerns about how children’s data is stored and used. Some toys may collect audio recordings, user messages, or other personal information during conversations. If these systems are not carefully designed to protect children’s privacy, data could potentially be misused or stored without clear safeguards.

The report also notes that many AI-based toys include disclaimers in their terms of service or product documentation. These disclaimers state that AI responses may not always be accurate or appropriate, and while the toys themselves are marketed directly to children, responsibility is effectively passed on to parents.

This is important because AI technology is increasingly being applied to everyday consumer products, including items designed specifically for young people. Toys that simulate conversation can have a powerful impact on children, who often treat them as friends or learning tools.

Experts say children may have trouble distinguishing between trustworthy information and guesswork, bias, and inaccurate AI-generated responses. As AI systems continue to advance, adapting these technologies for child safety will become increasingly important.

The findings also highlight broader regulatory issues.

Many countries have laws designed to protect children’s online privacy, such as the Children’s Online Privacy Protection Act (COPPA) in the United States, but these regulations were developed before the advent of generative AI.

Advocacy groups argue that regulators may need to update safety standards and guidelines to address how AI systems interact with children through connected devices.

The PIRG report calls on toy manufacturers to implement stronger safeguards, including stricter content filtering, clearer disclosures about their use of AI, and more transparent data practices. It also recommends that companies design AI systems specifically for children rather than repurposing models originally created for adults.

Researchers say that going forward, collaboration between tech companies, regulators, and child safety experts will be needed to ensure AI-based toys remain both innovative and safe.

As artificial intelligence becomes increasingly integrated into everyday products, the challenge will be balancing the benefits of interactive technology with the responsibility to protect young users from potential harm.
