Google's new AI aims to end abusive online comments using 'Perspective'


The internet is a tough place to have a conversation. Abuse has driven celebrities and ordinary folks from social media platforms that are ill-equipped to deal with it, and some publishers have switched off comment sections.

That’s why Google and Jigsaw (an early-stage incubator at Google parent company Alphabet) are working on a project called Perspective. It uses artificial intelligence to try to identify toxic comments, with the aim of reducing them. The Perspective API released Thursday will provide developers with a score of how likely users are to perceive a comment as toxic.

In turn, that score could be used to develop features like automatic post filtering or to provide users with feedback about what they’re writing before they submit it for publication. Starting on Thursday, developers can request access to Perspective’s API for use in projects they’re working on, and Jigsaw will approve them on a rolling basis.
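To illustrate how a developer might consume that score, here is a minimal Python sketch of a request to Perspective's Comment Analyzer endpoint. The URL, JSON fields, and the 0.8 threshold are assumptions for illustration based on Jigsaw's public documentation, not a definitive integration.

    # Hypothetical sketch: fetching a toxicity score from the Perspective API.
    # Endpoint, request shape, and field names are assumptions and may differ
    # from the released API.
    import requests

    API_KEY = "YOUR_API_KEY"  # issued after Jigsaw approves an access request
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity_score(text):
        """Return the estimated probability that readers perceive `text` as toxic."""
        body = {
            "comment": {"text": text},
            "languages": ["en"],                    # English only at launch
            "requestedAttributes": {"TOXICITY": {}},
        }
        response = requests.post(URL, json=body)
        response.raise_for_status()
        scores = response.json()["attributeScores"]
        return scores["TOXICITY"]["summaryScore"]["value"]

    # A publisher might hold high-scoring comments for human review,
    # or show the writer a warning before the comment is submitted.
    if toxicity_score("This is some kick-ass music right here") > 0.8:
        print("Queue for moderation")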

Many publishers have been unwilling or unable to spend the time aggressively moderating comments on their articles and have instead opted to shut them down entirely and cede the discussion to social media platforms. The New York Times, which helped with the development of Perspective, opens comments on roughly 10 percent of its articles because of the effort needed to manually moderate them.

Google also has a vested interest in better filtering abuse. Its YouTube comment sections can get filled with vitriol, leaving online video creators to add moderation duties to their already-long list of responsibilities. Using Perspective’s API to do the filtering could help alleviate that burden. 

Perspective’s interpretation of comments is imperfect, at best. A series of profane statements intended to forcefully express approval got flagged as likely to be toxic. The system marked “This is some kick-ass music right here” as 85 percent similar to comments other people said were toxic, for example.

Jigsaw, for its part, acknowledges the potential problems. Perspective’s website gives users a way to test out comments and provide feedback on whether they’re toxic, with the implication that the team will use that feedback to improve future iterations of the abuse-recognition technology.

“It’s still early days and we will get a lot of things wrong,” the company said on its website.

Right now, Perspective is only available in English, though it does seem to contextually understand some toxic comments in other languages. Jigsaw is taking applications from developers working in a wide variety of languages and will use those requests to inform which languages Perspective supports in the future.

In addition, Jigsaw said that it plans to release additional Perspective APIs that perform other functions. 
