Ofcom report finds 1 in 5 harmful content search results were 'one-click gateways' to more toxicity

Image Credits: Bruno Vincent / Getty Images

Move over, TikTok. Ofcom, the U.K. regulator enforcing the now-official Online Safety Act, is gearing up to scrutinize an even bigger target: search engines like Google and Bing, and the role they play in presenting self-injury, suicide and other harmful content at the click of a button, particularly to underage users.

A report commissioned by Ofcom and produced by the Network Contagion Research Institute found that major search engines — including Google, Microsoft’s Bing, DuckDuckGo, Yahoo and AOL — become “one-click gateways” to such content by facilitating easy, quick access to web pages, images and videos. One out of every five search results around basic self-injury terms links to further harmful content, the researchers wrote.

The report is timely and noteworthy because much of the focus around harmful content online in recent times has been on the influence and use of walled-garden social media sites like Instagram and TikTok.

This new research is, significantly, a first step in helping Ofcom understand and gather evidence of whether the potential threat is much larger: open-ended sites like Google.com attract more than 80 billion visits per month, compared with TikTok's roughly 1.7 billion monthly active users.

“Search engines are often the starting point for people’s online experience, and we’re concerned they can act as one-click gateways to seriously harmful self-injury content,” said Almudena Lara, Online Safety Policy Development director at Ofcom, in a statement. “Search services need to understand their potential risks and the effectiveness of their protection measures — particularly for keeping children safe online — ahead of our wide-ranging consultation due in Spring.”

Researchers analyzed some 37,000 result links across those five search engines for the report, Ofcom said. Using both common and more cryptic search terms (cryptic terms are meant to evade basic screening), they intentionally ran searches with "safe search" parental screening tools turned off, to mimic both the most basic ways people might engage with search engines and the worst-case scenarios.

The results were in many ways as bad and damning as you might guess.

Not only did 22% of the search results produce single-click links to harmful content (including instructions for various forms of self-harm), but that content accounted for a full 19% of the top-most links in the results (and 22% of the links down the first pages of results).

Image searches were particularly egregious, the researchers found. A full 50% of image searches returned harmful content, followed by web pages at 28% and video at 22%. The report concludes that one reason some of these harmful results may not be getting screened out by search engines is because algorithms may confuse self-harm imagery with medical and other legitimate media — highlighting one of the more persistent flaws found in non-human-based moderation.

The cryptic search terms (which are, despite their name, more standardized than you might think) were also generally better at evading screening algorithms. Using them made it six times more likely that a user would reach harmful content.

One thing that is not touched on in the report, but is likely to become a bigger issue over time, is the role that generative AI searches might play in this space.

So far, it appears that there are more controls being put into place to prevent platforms like ChatGPT from being misused for toxic purposes. The question will be whether users will figure out how to game that, and what that might lead to.

“We’re already working to build an in-depth understanding of the opportunities and risks of new and emerging technologies, so that innovation can thrive, while the safety of users is protected. Some applications of generative AI are likely to be in scope of the Online Safety Act and we would expect services to assess risks related to its use when carrying out their risk assessment,” an Ofcom spokesperson told TechCrunch.

It’s not all a nightmare: some 22% of search results were also flagged for being helpful in a positive way.

Ofcom may be using the report to get a better grasp of the issue at hand, but it is also an early signal to search engine providers of what they will need to be prepared to work on.

Ofcom has already made clear that children will be its first focus in enforcing the Online Safety Act. In the spring, Ofcom plans to open a consultation on its Protection of Children Codes of Practice, which aims to set out “the practical steps search services can take to adequately protect children.”

That will include taking steps to minimize the chances of children encountering harmful content around sensitive topics like suicide or eating disorders across the whole of the internet, including on search engines.

“Tech firms that don’t take this seriously can expect Ofcom to take appropriate action against them in future,” the Ofcom spokesperson said. That could include fines (which Ofcom said it would use only as a last resort) and, in the worst scenarios, court orders requiring ISPs to block access to services that do not comply with the rules. There could also potentially be criminal liability for executives who oversee services that violate the rules.

So far, Google has taken issue with some of the report’s findings and how it characterizes the company’s efforts, claiming that its parental controls do a lot of the important work that invalidates some of the findings.

“We are fully committed to keeping people safe online,” a Google spokesperson said in a statement to TechCrunch. “Ofcom’s study does not reflect the safeguards that we have in place on Google Search and references terms that are rarely used on Search. Our SafeSearch feature, which filters harmful and shocking search results, is on by default for users under 18, whilst the SafeSearch blur setting — a feature which blurs explicit imagery, such as self-harm content — is on by default for all accounts. We also work closely with expert organisations and charities to ensure that when people come to Google Search for information about suicide, self-harm or eating disorders, crisis support resource panels appear at the top of the page.”

Microsoft and DuckDuckGo did not initially respond to a request for comment.

Update: Microsoft responded. “Microsoft is deeply committed to creating safe experiences online, and we take seriously the responsibility to protect our users, particularly children, from harmful content and conduct online,” said a spokesperson. “We are mindful of our heightened responsibilities as a major technology company and will continue to work with Ofcom to take action against harmful content in search results.”

So did DuckDuckGo. “While DuckDuckGo gets its results from many sources, our primary source for traditional web links and image results is Bing,” said a spokesperson. “For issues in search results or problematic content, we encourage people to submit feedback directly on our search engine results page (by clicking on ‘Share Feedback,’ which can be found at the bottom right corner of the page).”
