Google, Elon Musk, and Mark Zuckerberg claim their AI is open source, but a new definition may challenge that. Companies behind generative artificial intelligence (AI) models, such as Meta's Llama and Elon Musk's Grok, claim their systems are open source.
However, there is little consensus on what open source AI truly is, Pascale Davies reported for United Press International (UPI).
A new working definition of the term, released by the Open Source Initiative (OSI), may change that. The OSI, the self-appointed steward of the term "open source," defines open source software as software whose source code is made publicly available for anyone to use, modify, and distribute.
The OSI's Open Source Definition includes 10 criteria, among them the availability of the source code for free or at a reasonable reproduction cost, non-discrimination against persons or fields of use, and licensing that does not place restrictions on other software.
However, AI systems are more difficult to assess against the OSI's 10 points, prompting the creation of a separate definition specifically for open source AI.
According to this definition, open source AI should be usable for any purpose without requiring permission from the company, and researchers should be able to freely examine how the system works.
The system must also be modifiable for any purpose, including altering its output, and shareable with or without modifications for any reason.
Additionally, AI companies must be transparent about the data used to train the system, the source code used to train and run the system, and the weights—numerical parameters that influence how an AI model performs.
Herein lies the problem: OpenAI, despite its name, is closed source, as its algorithms, models, and data sets are kept secret.