What does 'open source AI' mean, anyway?


Image Credits: Open Source Initiative (OSI) // Stefano Maffulli, OSI Executive Director

The struggle between open source and proprietary software is well understood. But the tensions that have permeated software circles for decades have spilled into the artificial intelligence space, in part because no one can agree on what “open source” really means in the context of AI.

The New York Times recently published a gushing appraisal of Meta CEO Mark Zuckerberg, noting how his “open source AI” embrace had made him popular once more in Silicon Valley. By most estimations, however, Meta’s Llama-branded large language models aren’t really open source, which highlights the crux of the debate.

It’s this challenge that the Open Source Initiative (OSI) is trying to address, led by executive director Stefano Maffulli (pictured above), through conferences, workshops, panels, webinars, reports and more.

AI ain’t software code

Image Credits: Westend61 via Getty

The OSI has been a steward of the Open Source Definition (OSD) for more than a quarter of a century, setting out how the term “open source” can, or should, be applied to software. A license that meets this definition can legitimately be deemed “open source,” though the OSI recognizes a spectrum of licenses ranging from extremely permissive to not quite so permissive.

But transposing legacy licensing and naming conventions from software onto AI is problematic. Joseph Jacks, open source evangelist and founder of VC firm OSS Capital, goes as far as to say that there is “no such thing as open-source AI,” noting that “open source was invented explicitly for software source code.” Further, “neural network weights” (NNWs) — a term used in the world of artificial intelligence to describe the parameters or coefficients through which the network learns during the training process — aren’t in any meaningful way comparable to software.

“Neural net weights are not software source code; they are unreadable by humans, [and they are not] debuggable,” Jacks notes. “Furthermore, the fundamental rights of open source also don’t translate over to NNWs in any congruent manner.”

These inconsistencies last year led Jacks and OSS Capital colleague Heather Meeker to come up with their own definition of sorts, around the concept of “open weights.” And Maffulli, for what it’s worth, agrees with them. “The point is correct,” he told TechCrunch. “One of the initial debates we had was whether to call it open source AI at all, but everyone was already using the term.”

Meta analysis

Image Credits: Larysa Amosova via Getty

Founded in 1998, the OSI is a not-for-profit public benefit corporation that works on a myriad of open source-related activities around advocacy, education and its core raison d’être: the Open Source Definition. Today, the organization relies on sponsorships for funding, with sponsors including Amazon, Google, Microsoft, Cisco, Intel, Salesforce and Meta.

Meta’s involvement with the OSI is particularly notable right now as it pertains to the notion of “open source AI.” Despite Meta hanging its AI hat on the open-source peg, the company has notable restrictions in place regarding how its Llama models can be used: Sure, they can be used gratis for research and commercial use cases, but app developers with more than 700 million monthly users must request a special license from Meta, which it will grant purely at its own discretion.

Meta’s language around its LLMs is somewhat malleable. While the company did call its Llama 2 model open source, with the arrival of Llama 3 in April, it retreated somewhat from the terminology, using phrases such as “openly available” and “openly accessible” instead. But in some places, it still refers to the model as “open source.”

“Everyone else that is involved in the conversation is perfectly agreeing that Llama itself cannot be considered open source,” Maffulli said. “People I’ve spoken with who work at Meta, they know that it’s a little bit of a stretch.”

On top of that, some might argue that there’s a conflict of interest here: Can a company that has shown a desire to piggyback off the open source branding also help fund the stewards of “the definition”?

This is one of the reasons why the OSI is trying to diversify its funding, recently securing a grant from the Sloan Foundation, which is helping to fund its multi-stakeholder global push to reach the Open Source AI Definition. TechCrunch can reveal this grant amounts to around $250,000, and Maffulli is hopeful that this can alter the optics around its reliance on corporate funding.

“That’s one of the things that the Sloan grant makes even more clear: We could say goodbye to Meta’s money anytime,” Maffulli said. “We could do that even before this Sloan Grant, because I know that we’re going to be getting donations from others. And Meta knows that very well. They’re not interfering with any of this [process], neither is Microsoft, or GitHub or Amazon or Google — they absolutely know that they cannot interfere, because the structure of the organization doesn’t allow that.”

Working definition of open source AI

Image Credits: Aleksei Morozov / Getty Images

The current Open Source AI Definition draft sits at version 0.0.8, constituting three core parts: the “preamble,” which lays out the document’s remit; the Open Source AI Definition itself; and a checklist that runs through the components required for an open source-compliant AI system.

As per the current draft, an Open Source AI system should grant freedoms to use the system for any purpose without seeking permission; to allow others to study how the system works and inspect its components; and to modify and share the system for any purpose.

But one of the biggest challenges has been around data — that is, can an AI system be classified as “open source” if the company hasn’t made the training dataset available for others to poke at? According to Maffulli, it’s more important to know where the data came from, and how a developer labeled, de-duplicated and filtered it. It’s also important, he says, to have access to the code that was used to assemble the dataset from its various sources.

“It’s much better to know that information than to have the plain dataset without the rest of it,” Maffulli said.

While having access to the full dataset would be nice (the OSI makes this an “optional” component), Maffulli says that it’s not possible or practical in many cases. This might be because the dataset contains confidential or copyrighted information that the developer doesn’t have permission to redistribute. Moreover, some machine learning models can be trained without the data itself ever being shared with the system, using techniques such as federated learning, differential privacy and homomorphic encryption.

And this perfectly highlights the fundamental differences between “open source software” and “open source AI”: The intentions might be similar, but they are not like-for-like comparable, and this disparity is what the OSI is trying to capture in its definition.

In software, source code and binary code are two views of the same artifact: They reflect the same program in different forms. But training datasets and the trained models they produce are distinct things: Even starting from the same dataset, you won’t necessarily be able to re-create the same model consistently.

“There is a variety of statistical and random logic that happens during the training that means it cannot make it replicable in the same way as software,” Maffulli added.
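That non-reproducibility is easy to see in miniature. Here’s an illustrative sketch (a hypothetical toy model, not anything from the OSI’s checklist or Meta’s code): fitting the same tiny dataset twice, with different random initializations and sampling orders, yields weights that land near the same values but are never bit-for-bit identical — unlike recompiling the same source code.

```python
# Toy illustration of training non-determinism: identical data,
# different random seeds, different resulting "model weights."
import random

# The same dataset both times: points on the line y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]

def train(seed):
    rng = random.Random(seed)
    # Random initialization of the two parameters (weight and bias).
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    # Stochastic gradient descent: random sample order per seed.
    for _ in range(1000):
        x, y = rng.choice(data)
        err = (w * x + b) - y
        w -= 0.01 * err * x
        b -= 0.01 * err
    return w, b

print(train(seed=0))  # close to (2.0, 1.0), but...
print(train(seed=1))  # ...not exactly equal to the first run
```

Both runs learn roughly the same line, yet the exact parameters differ — the gap that separates “here is the recipe” from “here is a byte-identical rebuild.”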

So an open source AI system should be easy to replicate, with clear instructions. And this is where the checklist facet of the Open Source AI Definition comes into play, which is based on a recently published academic paper called “The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence.”

This paper proposes the Model Openness Framework (MOF), a classification system that rates machine learning models “based on their completeness and openness.” The MOF demands that specific components of the AI model development be “included and released under appropriate open licenses,” including training methodologies and details around the model parameters.

Stable condition

Stefano Maffulli presenting at the Digital Public Goods Alliance (DPGA) members summit in Addis Ababa.
Image Credits: OSI

The OSI is calling the official launch of the definition the “stable version,” much like a company will do with an application that has undergone extensive testing and debugging ahead of prime time. The OSI is purposefully not calling it the “final release” because parts of it will likely evolve.

“We can’t really expect this definition to last for 26 years like the Open Source Definition,” Maffulli said. “I don’t expect the top part of the definition — such as ‘what is an AI system?’ — to change much. But the parts that we refer to in the checklist, those lists of components depend on technology? Tomorrow, who knows what the technology will look like.”

The stable Open Source AI Definition is expected to be rubber-stamped by the board at the All Things Open conference at the tail end of October, with the OSI embarking on a global roadshow in the intervening months spanning five continents, seeking more “diverse input” on how “open source AI” will be defined moving forward. But any final changes are likely to be little more than “small tweaks” here and there.

“This is the final stretch,” Maffulli said. “We have reached a feature complete version of the definition; we have all the elements that we need. Now we have a checklist, so we’re checking that there are no surprises in there; there are no systems that should be included or excluded.”
