Open Source Definition for AI Models Needs a Change

Do you think open-source licenses should evolve?

We started 2023 with Artificial Intelligence (AI) as one of the biggest trends, and we saw many companies go all in on it.

Take, for instance, Mozilla, which entered 2023 with plans to work on open-source AI and develop various AI-powered solutions, or HuggingChat, the first open-source alternative to ChatGPT.

Meta is no stranger to this either; its Llama 2 large language model (LLM) has been making waves all year, and the company even announced a ChatGPT contender a few months back.

However, many have raised doubts about whether Meta's Llama 2 is as open as one would expect, and those doubts do seem justified when you look at its license.

The license doesn't allow using Llama 2 for services with over 700 million monthly active users, and it also cannot be used to improve other language models.

This also means that Meta's license for Llama 2 doesn't meet all the requirements of the Open Source Initiative's (OSI) Open Source Definition.

One can argue that the open-source licenses used by the likes of EleutherAI and Falcon 40B are good examples of how open-source licensing should be handled for AI.

But, Meta has a different take on it.


Open Source Licensing Needs to Evolve

In a conversation with The Verge, Joëlle Pineau, Meta's VP for AI research, defended the company's stance.

She says there is a need to balance the benefits of information sharing against the potential costs to Meta's business.

This take on open source has allowed their researchers to be more focused in how they handle their AI projects. She also adds:

Being open has internally changed how we approach research, and it drives us not to release anything that isn’t very safe and be responsible at the onset.

Joëlle also hopes that they can see the same level of enthusiasm for their generative AI models as they have seen in the past with their PyTorch initiative.

But the problem lies with current licensing schemes. She adds that these licenses were not designed for software that leverages large amounts of data from a multitude of sources.

This, in turn, gives limited liability to both users and developers, along with limited indemnity against (read: protection from) copyright infringement.

Furthermore, she added:

AI models are different from software because there are more risks involved, so I think we should evolve the current user licenses we have to fit AI models better.
But I’m not a lawyer, so I defer to them on this point.

I do agree with her on that; there is a need to update current licenses to better fit AI models, among other things.

And it appears that the OSI is already on the job. Stefano Maffulli, the executive director of the OSI, told The Verge that they understand the current OSI-approved licenses fall short for AI models.

They are in the process of reviewing how to work with AI developers to provide “transparent, permissionless, yet safe” access to models.

He also added:

We definitely have to rethink licenses in a way that addresses the real limitations of copyright and permissions in AI models while keeping many of the tenets of the open source community.

Regardless of what happens, it is clear that open-source standards will have to evolve to accommodate new and emerging tech, not just AI.

I am looking forward to seeing how open-source licensing changes in the coming years.

💬 What about you? Do you think that older open-source standards need to be updated?
