UK gov't touts $100M+ plan to fire up 'responsible' AI R&D


The U.K. government is finally publishing its response to an AI regulation consultation it kicked off last March, when it put out a white paper setting out a preference for relying on existing laws and regulators, combined with “context-specific” guidance, to lightly supervise the disruptive high tech sector.

The full response is being made available later this morning, so it wasn’t available for review at the time of writing (update: it’s now online here). But in a press release ahead of publication, the Department for Science, Innovation and Technology (DSIT) is spinning the plan as a boost to U.K. “global leadership” via targeted measures — including £100 million+ (~$125 million) in extra funding — to bolster AI regulation and fire up innovation.

Per DSIT’s press release, there will be £10 million (~$12.5 million) in additional funding for regulators to “upskill” for their expanded workload, i.e., figuring out how to apply existing sectoral rules to AI developments and actually enforcing existing laws on AI apps that breach those rules (including, it is envisaged, by developing their own tech tools).

“The fund will help regulators develop cutting-edge research and practical tools to monitor and address risks and opportunities in their sectors, from telecoms and healthcare to finance and education. For example, this might include new technical tools for examining AI systems,” DSIT writes. It did not provide any detail on how many additional staff could be recruited with the extra funding.

The release also touts a notably larger £90 million (~$113 million) in funding, which the government says will be used to establish nine research hubs around the U.K. to foster homegrown AI innovation in areas such as healthcare, math and chemistry.

The 90:10 funding split is suggestive of where the government wants most of the action to happen — with the bucket marked ‘homegrown AI development’ the clear winner here, while “targeted” enforcement on associated AI safety risks is envisaged as the comparatively small-time add-on operation for regulators. (Although it’s worth noting the government has previously announced £100 million for an AI taskforce, focused on safety R&D around advanced AI models.)

DSIT confirmed to TechCrunch that the £10 million fund for expanding regulators’ AI capabilities has not yet been established — saying the government is “working at pace” to get the mechanism set up. “However, it’s key that we do this properly in order to achieve our objectives and ensure that we are getting value for taxpayers’ money,” a department spokesperson told us. 

The £90 million funding for the nine AI research hubs covers five years, starting from February 1. “The funding has already been awarded with investments in the nine hubs ranging from £7.2 million to £10 million,” the spokesperson added. They did not offer details on the focus of the other six research hubs.

The other top-line headline today is that the government is sticking to its plan not to introduce any new legislation for artificial intelligence yet.

“The UK government will not rush to legislate, or risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective,” writes DSIT. “Instead, the government’s context-based approach means existing regulators are empowered to address AI risks in a targeted way.”

However, in an Executive Summary to its response to the consultation, Michelle Donelan, the secretary of state for science, innovation, and technology, writes that “the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured”.

Additionally, she suggests that “further targeted binding requirements” may be needed to tackle the challenges posed by “highly capable general-purpose AI systems”, to ensure the handful of AI giants behind these models are “accountable” for making their technologies “sufficiently safe”. But there are no binding requirements on the table as yet — as that would require new legislation.

“As AI systems advance in capability and societal impact, it is clear that some mandatory measures will ultimately be required across all jurisdictions to address potential AI-related harms, ensure public safety, and let us realise the transformative opportunities that the technology offers. However, acting before we properly understand the risks and appropriate mitigations would harm our ability to benefit from technological progress while leaving us unable to adapt quickly to emerging risks,” Donelan adds. “We are going to take our time to get this right — we will legislate when we are confident that it is the right thing to do.”

Staying the course is unsurprising, given the government is facing an election this year that polls suggest it will almost certainly lose. This looks like an administration that’s fast running out of time to write laws on anything; time is certainly dwindling in the current parliament. (And, well, passing legislation on a tech topic as complex as AI clearly isn’t in the current prime minister’s gift at this point in the political calendar.)

At the same time, the European Union has just locked in agreement on the final text of its own risk-based framework for regulating “trustworthy” AI — a long-brewing rulebook that looks set to start applying there later this year. The U.K.’s strategy of holding off on AI legislation thus starkly amplifies its divergence from the neighbouring bloc, which is taking the contrasting approach and moving forward (and further away from the U.K.’s position) by implementing its AI law.

The U.K. government evidently sees this tactic as rolling out the bigger welcome mat for AI developers. The EU, by contrast, reckons businesses, even disruptive high tech businesses, thrive on legal certainty, and it is unveiling its own package of AI support measures alongside its rulebook. Which of the two approaches, sector-specific guidelines or a set of prescribed legal risks, will woo the most growth-charging AI “innovation” remains to be seen.

“The UK’s agile regulatory system will simultaneously allow regulators to respond rapidly to emerging risks, while giving developers room to innovate and grow in the UK,” is DSIT’s boosterish line.

On business confidence specifically, the release flags how “key regulators”, including Ofcom and the Competition and Markets Authority (CMA), have been asked to publish their approach to managing AI by April 30. It says they will “set out AI-related risks in their areas, detail their current skillset and expertise to address them, and a plan for how they will regulate AI over the coming year” — suggesting AI developers operating under U.K. rules should prepare to read the regulatory tea leaves, across multiple sectoral AI enforcement priority plans, in order to quantify their own risk of landing in legal hot water.

One thing is clear: U.K. prime minister Rishi Sunak continues to be extremely comfortable in the company of techbros — whether he’s taking time out from his day job to interview Elon Musk for streaming on the latter’s own social media platform; finding time in his packed schedule to meet the CEOs of US AI giants and listen to their ‘existential risk’ lobbying agenda; or hosting a “global AI safety summit” to gather the tech faithful at Bletchley Park. So a policy choice that avoids any hard new rules right now was undoubtedly the obvious pick for him and his time-strapped government.

On the flip side, Sunak’s government does look to be in a hurry in another respect: distributing taxpayer funding to charge up homegrown “AI innovation”. The suggestion from DSIT is that these funds will be strategically targeted to ensure the accelerated high tech developments are “responsible” (whatever “responsible” means without a legal framework in place to define the contextual bounds in question).

As well as the aforementioned £90 million for the nine research hubs trailed in DSIT’s PR, there’s an announcement of £2 million in Arts & Humanities Research Council (AHRC) funding to support new research projects the government says “will help to define what responsible AI looks like across sectors such as education, policing and the creative industries”. These are part of the AHRC’s existing Bridging Responsible AI Divides (BRAID) program.

Additionally, £19 million will go toward 21 projects to develop “innovative trusted and responsible AI and machine learning solutions” aimed at accelerating deployment of AI technologies and driving productivity. (“This will be funded through the Accelerating Trustworthy AI Phase 2 competition, supported through the UKRI [UK Research & Innovation] Technology Missions Fund, and delivered by the Innovate UK BridgeAI program,” says DSIT.)

In a statement accompanying today’s announcements, Donelan added:

The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development.

I am personally driven by AI’s potential to transform our public services and the economy for the better — leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.

AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.

Today’s £100 million+ (total) in funding announcements is additional to the £100 million previously announced by the government for the aforementioned AI safety taskforce (turned AI Safety Institute), which is focused on so-called frontier (or foundational) AI models. DSIT confirmed this is new money when we asked.

We also asked about the criteria and processes for awarding AI projects U.K. taxpayer funding. We’ve heard concerns the government’s approach may be sidestepping the need for a thorough peer review process — with the risk of proposals not being robustly scrutinized in the rush to get funding distributed.

A DSIT spokesperson responded by denying there’s been any change to the usual UKRI processes. “UKRI funds research on a competitive basis,” they suggested. “Individual applications for research are assessed by relevant independent experts from academia and business. Each proposal for research funding is assessed by experts for excellence and, where applicable, impact.”

“DSIT is working with regulators to finalise the specifics [of project oversight] but this will be focused around regulator projects that support the implementation of our AI regulatory framework to ensure that we are capitalising on the transformative opportunities that this technology has to offer, while mitigating against the risks that it poses,” the spokesperson added.

On foundational model safety, DSIT’s PR suggests the AI Safety Institute will “see the UK working closely with international partners to boost our ability to evaluate and research AI models”. And the government is also announcing a further investment of £9 million, via the International Science Partnerships Fund, which it says will be used to bring together researchers and innovators in the U.K. and the U.S. — “to focus on developing safe, responsible, and trustworthy AI”.

The department’s press release goes on to describe the government’s response as laying out a “pro-innovation case for further targeted binding requirements on the small number of organisations that are currently developing highly capable general-purpose AI systems, to ensure that they are accountable for making these technologies sufficiently safe”.

“This would build on steps the UK’s expert regulators are already taking to respond to AI risks and opportunities in their domains,” it adds. (And on that front the CMA put out a set of principles it said would guide its approach towards generative AI last fall.)

The PR also talks effusively of “a partnership with the US on responsible AI”. Asked for more details on this, the spokesperson said the aim of the partnership is to “bring together researchers and innovators in bilateral research partnerships with the US focused on developing safer, responsible, and trustworthy AI, as well as AI for scientific uses” — adding that the hope is for “international teams to examine new methodologies for responsible AI development and use”.

“Developing common understanding of technology development between nations will enhance inputs to international governance of AI and help shape research inputs to domestic policy makers and regulators,” DSIT’s spokesperson added.

While they confirmed there will be no U.S.-style ‘AI safety and security’ Executive Order issued by Sunak’s government, they said the AI regulation White Paper consultation response dropping later today sets out “the next steps”.

This report was updated with a link to the government’s response to the consultation, once published; and with Secretary of State Donelan’s remarks about the reasons the government is not introducing AI legislation yet, as well as the case for putting some “binding requirements” on highly capable general-purpose AI systems at some point.
