Internet users are getting younger; now the U.K. is weighing up whether AI can help protect them

[Image: Kid with iPhone. Image Credits: Getty Images]

Artificial intelligence has been in the crosshairs of governments over how it might be misused for fraud, disinformation and other malicious online activity. Now, a U.K. regulator wants to explore how AI is used on the other side: in the fight against malicious content involving children.

Ofcom, the regulator charged with enforcing the U.K.’s Online Safety Act, plans to launch a consultation on how AI and other automated tools are used today, and could be used in the future, to proactively detect and remove illegal content online. The focus is on protecting children from harmful content and on identifying child sexual abuse material that has previously been hard to detect.

The move coincides with Ofcom publishing research showing that younger users are more connected than ever before: some 84% of children as young as 3 or 4 years old are already going online, and nearly one-quarter of 5- to 7-year-olds surveyed already own their own smartphones.

The tools that Ofcom might introduce would be part of a wider set of proposals focused on online child safety. Consultations on the comprehensive proposals will start in the coming weeks, with the AI consultation to follow later this year, Ofcom said.

Mark Bunting, a director in Ofcom’s Online Safety Group, says its interest in AI will start with a look at how well the technology is used as a screening tool today.

“Some services do already use those tools to identify and shield children from this content,” he said in an interview with TechCrunch. “But there isn’t much information about how accurate and effective those tools are. We want to look at ways in which we can ensure that industry is assessing [that] when they’re using them, making sure that risks to free expression and privacy are being managed.”

One likely result will be Ofcom recommending how and what platforms should assess. That could lead not only to platforms adopting more sophisticated tooling, but also to fines if they fail to improve how they block content or to create better ways of keeping younger users from seeing it.
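
Bunting’s point about assessing accuracy and effectiveness has a concrete shape. As a minimal sketch, assuming a platform has a sample of content that human moderators have already labeled, the kind of measurement involved might look like the following; the function name, toy data and choice of metrics here are illustrative assumptions, not Ofcom’s methodology or any vendor’s actual tool.

```python
# Hypothetical sketch: scoring an automated screening tool against a
# human-labeled sample. All names and data below are invented for
# illustration; Ofcom has not published an evaluation methodology.

def evaluate_screening_tool(predictions, labels):
    """Compare a tool's flag/no-flag decisions against human review.

    predictions, labels: sequences of booleans, True = "harmful".
    Returns precision, recall and false-positive rate: the metrics that
    capture both missed harmful content (low recall) and wrongly
    blocked legitimate content (false positives).
    """
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(l and not p for p, l in zip(predictions, labels))
    tn = sum(not p and not l for p, l in zip(predictions, labels))

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    false_positive_rate = fp / (fp + tn) if fp + tn else 0.0
    return precision, recall, false_positive_rate


if __name__ == "__main__":
    # Invented toy sample: the tool's flags vs. human moderator labels.
    tool_flags   = [True, True, False, True, False, False, True, False]
    human_labels = [True, False, False, True, True, False, True, False]

    p, r, fpr = evaluate_screening_tool(tool_flags, human_labels)
    print(f"precision={p:.2f} recall={r:.2f} false_positive_rate={fpr:.2f}")
```

The trade-off Bunting alludes to lives in these numbers: tuning a tool to catch more harmful content (higher recall) typically means flagging more legitimate content too (a higher false-positive rate), which is where the risks to free expression and privacy come in.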

“As with a lot of online safety regulation, the responsibility sits with the firms to make sure that they’re taking appropriate steps and using appropriate tools to protect users,” he said.

The moves will find both supporters and critics. AI researchers are developing ways to detect deepfakes, for example, and AI is being deployed to verify users online. Yet many skeptics note that AI detection is far from foolproof, and some have been quick to dismiss the consultation as futile as a result.

According to Ofcom’s new research, children using online services are skewing younger than ever before, so much so that the regulator is now breaking out activity among ever-younger age brackets.

Mobile tech is particularly sticky with children, and its use is growing. In surveys of parents and students with between 2,000 and 3,400 respondents (depending on the question asked), nearly one-quarter of all 5- to 7-year-olds now own their own smartphones; when tablets are included, that proportion rises to 76%.

That same age bracket is also consuming much more media on those devices: 65% have made voice and video calls (versus 59% just a year ago), and half of the kids are watching streamed media (versus 39% a year ago).

Age restrictions on some mainstream social media apps are getting lower. But whatever the limits, they do not appear to be heeded in the U.K. anyway. Some 38% of 5- to 7-year-olds are using social media, Ofcom found. Meta’s WhatsApp, at 37%, is the most popular app among them.

And in possibly the first instance of Meta’s flagship image app being relieved to be less popular than ByteDance’s viral sensation, TikTok was found to be used by 30% of 5- to 7-year-olds, with Instagram at “just” 22%. Discord rounded out the list but was significantly less popular, at only 4%.

Around one-third (32%) of kids of this age are going online on their own, and 30% of parents said they were fine with their underage children having social media profiles. YouTube Kids remains the most popular network for younger users, at 48%.

Gaming, a perennial favorite with children, is now used by 41% of 5- to 7-year-olds, with 15% of kids in this age bracket playing shooter games.

While 76% of parents surveyed said that they talked to their young children about staying safe online, there is a gap, Ofcom points out, between what a child sees and what that child might report. In researching older children aged 8 to 17, Ofcom interviewed them directly and found that 32% reported they’d seen worrying content online, but only 20% of their parents said their children had reported anything.

Even accounting for some reporting inconsistencies, “The research suggests a disconnect between older children’s exposure to potentially harmful content online, and what they share with their parents about their online experiences,” Ofcom writes. And worrying content is just one challenge: deepfakes are also an issue. Among children aged 16-17, 25% said they were not confident about distinguishing fake content from real content on the Internet.

Updated with further detail from the research and further comment on the plans.
