OpenAI moves to shrink regulatory risk in EU around data privacy
While most of Europe was still knuckle deep in the holiday chocolate selection box late last month, ChatGPT maker OpenAI was busy firing out an email with details of an incoming update to its terms that looks intended to shrink its regulatory risk in the European Union.
The AI giant’s technology has come under early scrutiny in the region over ChatGPT’s impact on people’s privacy — with a number of open investigations into data protection concerns linked to how the chatbot processes people’s information and the data it can generate about individuals, including from watchdogs in Italy and Poland. (Italy’s intervention even triggered a temporary suspension of ChatGPT in the country until OpenAI revised the information and controls it provides users.)
“We have changed the OpenAI entity that provides services such as ChatGPT to EEA and Swiss residents to our Irish entity, OpenAI Ireland Limited,” OpenAI wrote in an email to users sent on December 28.
A parallel update to OpenAI’s Privacy Policy for Europe further stipulates:
If you live in the European Economic Area (EEA) or Switzerland, OpenAI Ireland Limited, with its registered office at 1st Floor, The Liffey Trust Centre, 117-126 Sheriff Street Upper, Dublin 1, D01 YC43, Ireland, is the controller and is responsible for the processing of your Personal Data as described in this Privacy Policy.
The new terms of use, which name its recently established Dublin-based subsidiary as the data controller for users in the European Economic Area (EEA) and Switzerland, where the bloc’s General Data Protection Regulation (GDPR) is in force, will start to apply on February 15, 2024.
Users are told that if they disagree with OpenAI’s new terms they may delete their account.
The GDPR’s one-stop-shop (OSS) mechanism allows companies that process Europeans’ data to streamline privacy oversight under a single lead supervisory authority located in the EU Member State where they are “main established”, as the regulatory jargon puts it.
Gaining this status effectively reduces the ability of privacy watchdogs located elsewhere in the bloc to unilaterally act on concerns. Instead they would typically refer complaints back to the main established company’s lead supervisor for consideration.
Other GDPR regulators still retain powers to intervene locally if they see urgent risks. But such interventions are typically temporary. They are also exceptional by nature, with the bulk of GDPR oversight funnelled via a lead authority. Hence the status has proved so appealing to Big Tech, enabling the most powerful platforms to streamline privacy oversight of their cross-border personal data processing.
Asked if OpenAI is working with Ireland’s privacy watchdog to obtain main establishment status for its Dublin-based entity, under the GDPR’s OSS, a spokeswoman for the Irish Data Protection Commission (DPC) told TechCrunch: “I can confirm that Open AI has been engaged with the DPC and other EU DPAs [data protection authorities] on this matter.”
OpenAI was also contacted for comment.
The AI giant opened a Dublin office back in September — hiring initially for a handful of policy, legal and privacy staffers in addition to some back office roles.
At the time of writing it has just five open positions based in Dublin, out of a total of 100 listed on its careers page, so local hiring still appears to be limited. A Brussels-based EU Member States policy & partnerships lead role it’s also currently recruiting for asks applicants to specify whether they’re available to work from the Dublin office three days per week. But the vast majority of the AI giant’s open positions are listed as San Francisco/U.S. based.
One of the five Dublin-based roles being advertised by OpenAI is for a privacy software engineer. The other four are for: account director, platform; international payroll specialist; media relations, Europe lead; and sales engineer.
Who OpenAI hires in Dublin, and how many, will be relevant to it obtaining main establishment status under the GDPR, as it’s not simply a case of filing a bit of legal paperwork and checking a box to gain the status. The company will need to convince the bloc’s privacy regulators that the Member State-based entity it’s named as legally responsible for Europeans’ data is actually able to influence decision-making around it.
That means having the right expertise and legal structures in place to exert influence and put meaningful privacy checks on a U.S. parent.
Put another way, opening up a front office in Dublin that simply signs off on product decisions that are made in San Francisco should not suffice.
That said, OpenAI may be looking with interest at the example of X, the company formerly known as Twitter, which has rocked all sorts of boats since a change of ownership in fall 2022 yet has not fallen out of the OSS since Elon Musk took over, despite the erratic billionaire owner taking a hatchet to X’s regional headcount, driving out relevant expertise and making what appear to be entirely unilateral product decisions. (So, well, go figure.)
If OpenAI gains GDPR main established status in Ireland, obtaining lead oversight by the Irish DPC, it would join the likes of Apple, Google, Meta, TikTok and X, to name a few of the multinationals that have opted to make their EU home in Dublin.
The DPC, meanwhile, continues to attract substantial criticism over the pace and cadence of its GDPR oversight of local tech giants. And while recent years have seen a number of headline-grabbing penalties on Big Tech finally rolling out of Ireland, critics point out the regulator often advocates for substantially lower penalties than its peers. Other criticisms include the glacial pace and/or unusual trajectory of the DPC’s investigations, or instances where it chooses not to investigate a complaint at all, or opts to reframe it in a way that sidesteps the key concern (on the latter, see, for example, this Google adtech complaint).
Any existing GDPR probes of ChatGPT, such as those by regulators in Italy and Poland, are likely to run their course, since they concern data processing that predates any future main establishment status the AI giant may gain, and they may still prove consequential in shaping the regional regulation of OpenAI’s generative AI chatbot. But it’s less clear how much impact they will ultimately have.
As a refresher, Italy’s privacy regulator has been looking into a long list of concerns about ChatGPT, including the legal basis OpenAI relies upon for processing people’s data to train its AIs. Poland’s watchdog, meanwhile, opened a probe following a detailed complaint about ChatGPT, including how the AI bot hallucinates (i.e. fabricates) personal data.
Notably, OpenAI’s updated European privacy policy also includes more details on the legal bases it claims for processing people’s data — with some new wording that phrases its claim to be relying on a legitimate interests legal basis to process people’s data for AI model training as being “necessary for our legitimate interests and those of third parties and broader society” [emphasis ours].
Whereas the current OpenAI privacy policy contains the much drier line on this element of its claimed legal basis: “Our legitimate interests in protecting our Services from abuse, fraud, or security risks, or in developing, improving, or promoting our Services, including when we train our models.”
This suggests OpenAI may intend to defend its vast, consentless harvesting of Internet users’ personal data for generative AI profit to concerned European privacy regulators by making some kind of public interest argument for the activity, in addition to its own (commercial) interests. However, the GDPR has a strictly limited set of (six) valid legal bases for processing personal data; data controllers can’t just pick ’n’ mix bits from this list to invent their own bespoke justification.
It’s also worth noting that GDPR watchdogs have already been trying to find common ground on how to tackle the tricky intersection of data protection law and big data-fuelled AIs via a taskforce set up within the European Data Protection Board last year, although it remains to be seen whether any consensus will emerge from the process. And given OpenAI’s move to establish a legal entity in Dublin as the controller of European users’ data, Ireland may well get the defining say, down the line, in the direction of travel when it comes to generative AI and privacy rights.
If the DPC becomes lead supervisor of OpenAI it would have the ability to, for example, slow the pace of any GDPR enforcement on the rapidly advancing tech.
Already, last April, in the wake of the Italian intervention on ChatGPT, the DPC’s current commissioner, Helen Dixon, warned against privacy watchdogs rushing to ban the tech over data concerns, saying regulators should take time to figure out how to enforce the bloc’s data protection law on AIs.
Note: U.K. users are excluded from OpenAI’s legal basis switch to Ireland, with the company specifying they fall under the purview of its U.S., Delaware-based corporate entity. (Since Brexit, the EU’s GDPR no longer applies in the U.K., although the country retains its own U.K. GDPR in national law, a data protection regulation still based on the European framework. That’s set to change as the U.K. diverges from the bloc’s gold standard on data protection via the rights-diluting ‘data reform’ bill currently passing through parliament.)