OpenAI forms a new team to study child safety
Under scrutiny from activists — and parents — OpenAI has formed a new team to study ways to prevent its AI tools from being misused or abused by kids.
In a new job listing on its career page, OpenAI reveals the existence of a Child Safety team, which the company says is working with platform policy, legal and investigations groups within OpenAI as well as outside partners to manage “processes, incidents, and reviews” relating to underage users.
The team is currently looking to hire a child safety enforcement specialist, who’ll be responsible for applying OpenAI’s policies in the context of AI-generated content and working on review processes related to “sensitive” (presumably kid-related) content.
Tech vendors of a certain size dedicate a fair amount of resources to complying with laws like the U.S. Children’s Online Privacy Protection Rule, which mandate controls over what kids can and can’t access on the web, as well as what sorts of data companies can collect on them. So the fact that OpenAI is hiring child safety experts doesn’t come as a complete surprise, particularly if the company expects a significant underage user base one day. (OpenAI’s current terms of use require parental consent for children ages 13 to 18 and prohibit use by kids under 13.)
But the formation of the new team, which comes several weeks after OpenAI announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines and landed its first education customer, also suggests that OpenAI is wary of running afoul of policies pertaining to minors’ use of AI, and of the negative press that could follow.
Kids and teens are increasingly turning to GenAI tools for help not only with schoolwork but also with personal issues. According to a poll from the Center for Democracy and Technology, 29% of kids report having used ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.
Some see this as a growing risk.
Last summer, schools and colleges rushed to ban ChatGPT over plagiarism and misinformation fears. Since then, some have reversed their bans. But not all are convinced of GenAI’s potential for good, pointing to surveys like the U.K. Safer Internet Centre’s, which found that over half of kids (53%) report having seen people their age use GenAI in a negative way, for example by creating believable false information or images used to upset someone.
In September, OpenAI published documentation for using ChatGPT in classrooms, including prompts and an FAQ, to offer educators guidance on using GenAI as a teaching tool. In one of the support articles, OpenAI acknowledged that its tools, specifically ChatGPT, “may produce output that isn’t appropriate for all audiences or all ages” and advised “caution” with exposure to kids, even those who meet the age requirements.
Calls for guidelines on kids’ use of GenAI are growing.
The UN Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of GenAI in education, including implementing age limits for users and guardrails on data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” Audrey Azoulay, UNESCO’s director-general, said in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”