UK government urged to adopt more positive outlook for LLMs to avoid missing 'AI goldrush'

Image: Big Ben, Westminster and the House of Lords at sunset, London, England. Image Credits: Peterscode / Getty Images

The U.K. government is taking too “narrow” a view of AI safety and risks falling behind in the AI gold rush, according to a report released today.

The report, published by the parliamentary House of Lords’ Communications and Digital Committee, follows a months-long evidence-gathering effort involving input from a wide gamut of stakeholders, including big tech companies, academia, venture capitalists, media and government.

Among the key findings was that the government should refocus its efforts on the more near-term security and societal risks posed by large language models (LLMs), such as copyright infringement and misinformation, rather than become overly concerned with apocalyptic scenarios and hypothetical existential threats, which the report says are “exaggerated.”

“The rapid development of AI large language models is likely to have a profound effect on society, comparable to the introduction of the internet — that makes it vital for the Government to get its approach right and not miss out on opportunities, particularly not if this is out of caution for far-off and improbable risks,” the Communications and Digital Committee’s chairman Baroness Stowell said in a statement. “We need to address risks in order to be able to take advantage of the opportunities — but we need to be proportionate and practical. We must avoid the U.K. missing out on a potential AI goldrush.”

The findings come as much of the world grapples with a burgeoning AI onslaught that looks set to reshape industry and society, with OpenAI’s ChatGPT serving as the poster child of a movement that catapulted LLMs into the public consciousness over the past year. This hype has created excitement and fear in equal measure, and sparked all manner of debate around AI governance — President Biden recently issued an executive order with a view toward setting standards for AI safety and security, while the U.K. is striving to position itself at the forefront of AI governance through initiatives such as the AI Safety Summit, which gathered some of the world’s political and corporate leaders into the same room at Bletchley Park back in November.

At the same time, a divide is emerging over the extent to which this new technology should be regulated.

Regulatory capture

Meta’s chief AI scientist Yann LeCun recently joined dozens of signatories in an open letter calling for more openness in AI development, an effort designed to counter a growing push by tech firms such as OpenAI and Google to secure “regulatory capture of the AI industry” by lobbying against open AI R&D.

“History shows us that quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation,” the letter read. “Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there.”

And it’s this tension that serves as a core driving force behind the House of Lords’ “Large language models and generative AI” report, which calls for the government to make market competition an “explicit AI policy objective” to guard against regulatory capture from some of the current incumbents such as OpenAI and Google.

Indeed, the issue of “closed” versus “open” rears its head across several pages in the report, with the conclusion that “competition dynamics” will be pivotal not only to who ends up leading the AI/LLM market, but also to what kind of regulatory oversight ultimately works. The report notes:

At its heart, this involves a contest between those who operate ‘closed’ ecosystems, and those who make more of the underlying technology openly accessible. 

In its findings, the committee said that it examined whether the government should adopt an explicit position on this matter, vis-à-vis favouring an open or closed approach, concluding that “a nuanced and iterative approach will be essential.” But the evidence it gathered was somewhat colored by the stakeholders’ respective interests, it said.

For instance, while Microsoft and Google noted they were generally supportive of “open access” technologies, they believed that the security risks associated with openly available LLMs were too significant and thus required more guardrails. In Microsoft’s written evidence, for example, the company said that “not all actors are well-intentioned or well-equipped to address the challenges that highly capable [large language] models present.”

The company noted:

Some actors will use AI as a weapon, not a tool, and others will underestimate the safety challenges that lie ahead. Important work is needed now to use AI to protect democracy and fundamental rights, provide broad access to the AI skills that will promote inclusive growth, and use the power of AI to advance the planet’s sustainability needs.

Regulatory frameworks will need to guard against the intentional misuse of capable models to inflict harm, for example by attempting to identify and exploit cyber vulnerabilities at scale, or develop biohazardous materials, as well as the risks of harm by accident, for example if AI is used to manage large scale critical infrastructure without appropriate guardrails.

But on the flip side, open LLMs are more accessible and create a “virtuous circle” that allows more people to tinker with the technology and inspect what’s going on under the hood. Irene Solaiman, global policy director at AI platform Hugging Face, said in her evidence session that opening access to things like training data and publishing technical papers is a vital part of the risk-assessing process.

What is really important in openness is disclosure. We have been working hard at Hugging Face on levels of transparency […] to allow researchers, consumers and regulators in a very consumable fashion to understand the different components that are being released with this system. One of the difficult things about release is that processes are not often published, so deployers have almost full control over the release method along that gradient of options, and we do not have insight into the pre-deployment considerations.

Ian Hogarth, chair of the U.K. government’s recently launched AI Safety Institute, also noted that we’re in a position today where the frontier of LLMs and generative AI is being defined by private companies that are effectively “marking their own homework” as it pertains to assessing risk. Hogarth said:

That presents a couple of quite structural problems. The first is that, when it comes to assessing the safety of these systems, we do not want to be in a position where we are relying on companies marking their own homework. As an example, when [OpenAI’s LLM] GPT-4 was released, the team behind it made a really earnest effort to assess the safety of their system and released something called the GPT-4 system card. Essentially, this was a document that summarised the safety testing that they had done and why they felt it was appropriate to release it to the public. When DeepMind released AlphaFold, its protein-folding model, it did a similar piece of work, where it tried to assess the potential dual use applications of this technology and where the risk was.

You have had this slightly strange dynamic where the frontier has been driven by private sector organisations, and the leaders of these organisations are making an earnest attempt to mark their own homework, but that is not a tenable situation moving forward, given the power of this technology and how consequential it could be.

Regulatory capture, whether guarding against it or pursuing it, lies at the heart of many of these issues. The very same companies that are building leading LLM tools and technologies are also calling for regulation, which many argue is really about locking out those seeking to play catch-up. Thus, the report acknowledges concerns around industry lobbying for regulations, or government officials becoming too reliant on the technical know-how of a “narrow pool of private sector expertise” for informing policy and standards.

As such, the committee recommends “enhanced governance measures in DSIT [Department for Science, Innovation and Technology] and regulators to mitigate the risks of inadvertent regulatory capture and groupthink.”

This, according to the report, should:

…apply to internal policy work, industry engagements and decisions to commission external advice. Options include metrics to evaluate the impact of new policies and standards on competition; embedding red teaming, systematic challenge and external critique in policy processes; more training for officials to improve technical know‐how; and ensuring proposals for technical standards or benchmarks are published for consultation.

Narrow focus

However, this all leads to one of the main recurring thrusts of the report’s recommendations: that the AI safety debate has become too dominated by a narrowly focused narrative centered on catastrophic risk, particularly from “those who developed such models in the first place.”

Indeed, on the one hand the report calls for mandatory safety tests for “high-risk, high-impact models” — tests that go beyond voluntary commitments from a few companies. But at the same time, it says that concerns about existential risk are exaggerated and this hyperbole merely serves to distract from more pressing issues that LLMs are enabling today.

“It is almost certain existential risks will not manifest within three years, and highly likely not within the next decade,” the report concluded. “As our understanding of this technology grows and responsible development increases, we hope concerns about existential risk will decline. The Government retains a duty to monitor all eventualities — but this must not distract it from capitalising on opportunities and addressing more limited immediate risks.”

Capturing these “opportunities,” the report acknowledges, will require addressing some more immediate risks. This includes the ease with which mis- and dis-information can now be created and spread — through text-based mediums and with audio and visual “deepfakes” that “even experts find increasingly difficult to identify,” the report found. This is particularly pertinent as the U.K. approaches a general election.

“The National Cyber Security Centre assesses that large language models will ‘almost certainly be used to generate fabricated content; that hyper‐realistic bots will make the spread of disinformation easier; and that deepfake campaigns are likely to become more advanced in the run up to the next nationwide vote, scheduled to take place by January 2025’,” it said.

Moreover, the committee was unequivocal in its position on using copyrighted material to train LLMs, a practice OpenAI and other big tech companies have engaged in while arguing that training AI constitutes fair use. That argument is why artists and media companies such as The New York Times are pursuing legal cases against AI companies that use web content to train LLMs.

“One area of AI disruption that can and should be tackled promptly is the use of copyrighted material to train LLMs,” the report notes. “LLMs rely on ingesting massive datasets to work properly, but that does not mean they should be able to use any material they can find without permission or paying rightsholders for the privilege. This is an issue the Government can get a grip of quickly, and it should do so.”

It is worth stressing that the Lords’ Communications and Digital Committee doesn’t completely rule out doomsday scenarios. In fact, the report recommends that the government’s AI Safety Institute should carry out and publish an “assessment of engineering pathways to catastrophic risk and warning indicators as an immediate priority.”

Moreover, the report notes that there is a “credible security risk” from the snowballing availability of powerful AI models, which can easily be abused or malfunction. But despite these acknowledgements, the committee reckons that an outright ban on such models is not the answer, given the balance of probability that worst-case scenarios won’t come to fruition and the sheer difficulty of enforcing a ban. This is where it sees the government’s AI Safety Institute coming into play, recommending that the institute develop “new ways” to identify and track models once they are deployed in real-world scenarios.

“Banning them entirely would be disproportionate and likely ineffective,” the report noted. “But a concerted effort is needed to monitor and mitigate the cumulative impacts.”

For the most part, then, the report doesn’t deny that LLMs and the broader AI movement come with real risks. But it says the government needs to “rebalance” its strategy, with less focus on “sci-fi end-of-world scenarios” and more on the benefits the technology might bring.

“The Government’s focus has skewed too far towards a narrow view of AI safety,” the report says. “It must rebalance, or else it will fail to take advantage of the opportunities from LLMs, fall behind international competitors and become strategically dependent on overseas tech firms for a critical technology.”
