Clarity raises $16M to fight deepfakes through detection


Image Credits: wildpixel / Getty Images

Fake porn of Taylor Swift. Photorealistic — but fictionalized — images of Gaza. The list of disconcerting deepfakes goes on, and — as deepfake-creating tools grow easier and cheaper to use — the waves of fakes are coming faster and fiercer.

According to a recent Pew Center poll, about two-thirds of Americans (66%) say they at least sometimes come across altered videos and images that are intended to mislead, with 15% encountering them often. In a separate survey of AI experts by Axios and Syracuse University, 62% said that misinformation will be the biggest challenge to maintaining the authenticity and credibility of news in an era of AI-generated content.

So what’s the answer? Is there one?

If you talk with folks like Michael Matias, a cybersecurity specialist and the co-founder and CEO of Clarity, they’ll tell you it’s deepfake detectors. Matias started Clarity with Gil Avriel and Natalie Fridman in 2022, with the goal of developing technology to spot AI-manipulated media — mainly video and audio.

Clarity is among the many vendors large and small racing to develop deepfake-spotting tools. Others include Reality Defender, which offers a platform to isolate text, video and image deepfakes, and Sentinel, which focuses on deepfaked images and videos.

It’s difficult, actually, to distinguish Clarity’s offerings from the others out there — at least for this writer. Like rival vendors, Clarity maintains a scanning tool, available via an app and API, that leverages several AI models trained to identify patterns characteristic of video, image and audio deepfake creation techniques. In addition, Clarity provides a form of watermarking that customers can use to indicate their content is legitimate.

But Matias insists that the differentiators lie not above but beneath the surface, with Clarity’s rapid response to new types of deepfakes.

“At its core, Clarity is leveraging AI but operating as a cybersecurity company,” Matias said. “Clarity treats deepfakes as viruses, acting like pathogens that quickly fork and replicate. As such, its solution was also built to fork and replicate to maintain adaptivity and resiliency … The team built infrastructure and AI models dedicated to accomplishing the ask.”

Of course, precision in the deepfake detection realm is a moving target. Even with the best expertise and tech stack money can buy, it’s a nearly impossible game to win considering the rate at which GenAI deepfake-creating apps are improving. That’s perhaps why some major players — including Google, Microsoft and AWS — are embracing more sophisticated watermarking and provenance metadata as alternative — albeit imperfect — deepfake-fighting measures.

Be that as it may, Clarity hasn’t had any trouble attracting backing. The New York-based, 13-employee startup recently closed a $16 million seed round co-led by Walden Catalyst Ventures and Bessemer Venture Partners with participation from Secret Chord Ventures, Ascend Ventures and Flying Fish Partners.

And it appears to have carved out a niche. Initially, Clarity — which sells subscription as well as pay-as-you-go plans — sought customers in news publishers and the public sector, including the Israeli government. (Matias claims that Clarity is helping authenticate and verify videos coming out of the Israel-Hamas conflict.) But it’s since expanded to identity verification providers and other, unnamed “large enterprises.”

“This is a fast-paced arms race, just like traditional cybersecurity,” Matias said. “Any company that wants to tackle deepfakes needs to move as fast as those creating and spreading them are.”
