Google hopeful of fix for Gemini's historical-image diversity issue within weeks

Demis Hassabis at MWC 2024. Image Credits: Natasha Lomas/TechCrunch

Google is hopeful it will soon be able to “unpause” the ability of its multimodal generative AI tool, Gemini, to depict people, according to DeepMind co-founder Demis Hassabis. The capability to respond to prompts for images of humans should be back online in the “next few weeks,” he said today.

Google suspended the Gemini capability last week after users pointed out the tool was producing historically incongruous images, such as depicting the U.S. Founding Fathers as a diverse group of people, rather than only white men.

Hassabis responded to questions about the product snafu during an onstage interview at the Mobile World Congress in Barcelona today.

Asked by a moderator, Wired’s Steven Levy, to explain what went wrong with the image-generation feature, Hassabis sidestepped a detailed technical explanation. Instead, he suggested the issue stemmed from Google failing to identify instances in which users are essentially after what he described as a “universal depiction.” The episode points to the “nuances that come with advanced AI,” he added.

“This is a field we’re all grappling with. So if you, for example, put in a prompt that asks for, ‘give me a picture of a person walking a dog or a nurse in a hospital,’ right, in those cases, you clearly want a sort of ‘universal depiction.’ Especially if you consider that as Google, we serve 200+ countries, you know, every country around the world — so you don’t know where the user’s coming from and what their background is going to be or what context they’re in. So you want to kind of show a very sort of universal range of possibilities there.”

Hassabis said the issue boiled down to a “well-intended feature,” meant to foster diversity in Gemini’s image outputs of people, having been applied “too bluntly, across all of it.”

Prompts that ask for content about historical people should “of course” result in “a much narrower distribution that you give back,” he added, hinting at how Gemini may tackle prompts for people in the future.

“We care, of course, about historical accuracy. And so we’ve taken that feature offline while we fix that and we hope to have that back online in the next — in very short order. Next couple of weeks, next few weeks.”

Responding to a follow-up question about how to prevent generative AI tools from being misappropriated by bad actors, such as authoritarian regimes looking to spread propaganda, Hassabis had no simple answer. The issue is “very complex,” he suggested — likely demanding a whole-of-society mobilization and response to determine and enforce limits.

“There’s really important research and debate that needs to happen — also with civil society and governments, not just tech companies,” he said. “It’s a social technical question that affects everyone and should involve everyone to discuss it. What values do we want these systems to have? What would they represent? How do you prevent bad actors accessing the same technologies and, what you’re talking about, which is repurposing them for harmful ends that were not intended by the creators of those systems.”

Touching on the challenge of open source, general-purpose AI models, which Google also offers, he added: “Customers want to use open source systems that they can fully control . . . But then the question comes is how do you ensure what people use downstream isn’t going to be harmful with those systems as they get increasingly more powerful?

“I think, today, it’s not an issue because the systems are still relatively nascent. But if you wind forward three, four or five years, and you start talking about next generation systems with planning capabilities and being able to act in the world and solve problems and goals, I think society really has to seriously think about these issues — of what happens if this proliferates, and then bad actors all the way from individuals to rogue states can make use of them as well.”

During the interview, Hassabis was also asked for his thoughts on AI devices and where the mobile market may be headed as generative AI continues to drive fresh developments in the space. He predicted a wave of “next generation smart assistants” that are useful in people’s everyday lives, rather than the “gimmicky” stuff of previous AI assistant generations, and suggested they may even reshape the mobile hardware people choose to carry.

“I think there’ll be questions about what is the right device type, even,” he suggested. “But in five plus years’ time, is the phone even really going to be the perfect form factor? Maybe we need glasses or some other things so that the AI system can actually see a bit of the context that you’re in and so be even more helpful in your daily life. So I think there’s all sorts of amazing things to be invented.”

