
Why AI needs (to be) Open Source to be Good

Large Language Model nuclear weapons?

“LLM nuclear bomb” generated with SDXL

Many have drawn a parallel between safeguarding the world against uncontrolled nuclear weapons and safeguarding the world against uncontrolled large language models. The potential harm of AI has, in my opinion, been greatly exaggerated.

First and foremost, it should be clear that any comparison to nuclear weapons is heavily loaded. While nuclear technology in general has many positive uses ranging from medical radioisotopes to transport and electricity generation, nuclear weapons are inherently and immensely destructive. Comparing language models to bombs comes off as a bit disingenuous, as their purpose and function are completely different.

The effects of AI are indirect, more complex and vastly more positive than those of bombs. If we are to understand what opening up access to AI models and their development entails, we must understand what effects open source has in general, and how those translate to AI in particular.

Open Source Software Development

Traditional open source development has a lot of benefits. Transparency allows for the creation of dependable software that does what the user expects it to do. The right to modifications creates “quality systems that meet and exceed the security and reliability metrics of their proprietary counterparts— at a much reduced cost”1.

Community collaboration allows for the sharing of technical knowledge across the globe, which further increases innovation. Open software can be customized and improved, and different forks of the same software can be optimized for different purposes.

The modern internet runs on open source

When you open a website, open source enables nearly every step of that information’s journey from the internet to you. W3Techs reports that 81.7% of all websites run on Unix-based operating systems. The Open Source Security and Risk Analysis 2023 report found that 96% of codebases audited by Synopsys contained open source. Even when a whole application is not open, it almost always has some open source components in it. The data transfer from the server to you uses open internet protocols. On your screen, whether you’re browsing with Safari, Chrome, Firefox or Edge, the core engine that renders the webpage is open source.

Open source has taken over the internet due to its ease-of-use, cost-effectiveness and speed of innovation. Building modern proprietary software from scratch, without standing on the shoulders of giants, is near-impossible. In an era where connectivity and information sharing reign supreme, there is no competition without embracing open source. AI is sure to follow.

Open-source AI is here

Meta’s Llama 2 is not open source (but it’s close enough)

Llama generated with SDXL, wearing a Meta bow tie

Strictly speaking, Meta’s Llama 2 model is not open source. It’s free to use and significantly more open than its competitors, but Meta still dictates terms in its Llama 2 Acceptable Use Policy: disallowing its use for developing other language models, blocking Meta’s largest competitors from using it at all, and prohibiting use for unsavory things such as violence or terrorism.

Meta also hasn’t opened up all the information related to its AI model. The Llama GitHub repo contains the model weights and starting code for the pretrained LLM, but the “secret sauce” is still behind locked doors: the training code, pre-training dataset, fine-tuning preferences and other materials Meta used are kept secret. This ensures nobody can recreate the model.

Nevertheless, Llama 2 is open enough for the philosophical discussion of whether sharing it openly is harmful and creates a problem for humankind. As a side note, the distinction between Meta’s relatively open license and true open source is still relevant: Meta is cashing in on the free labor of the open source movement while simultaneously stifling competition. The arrangement is mutually beneficial for many, and Meta should be applauded for its openness, but Llama 2 is not libre software.

Open-source AI gain-of-function is easy and fast

Building nuclear weapons requires great technical knowledge, the ability to acquire and enrich fissile material, and the means to precisely construct the ignition system of the fissile core. LLMs are comparatively easy to make, and especially easy to tweak and update. Building the initial model requires loads of processing power and a huge amount of data, though the latter was freely available on social media until recently. The lack of an available language corpus is the biggest hurdle to creating an LLM, and the processing power for the initial pre-training requires significant monetary investment. Open source eliminates both of these problems for everyone, including the common hobbyist.

“AI doing gain-of-function lab research” generated with SDXL

Improving already available AI models is something that can easily be done at home with the computing power of a gaming PC, or by paying a few euros to rent the hardware for a few hours. And with the initial language model already built, fine-tuning it to produce the most relevant output requires a small but highly curated, high-quality dataset rather than huge amounts of good-enough information.

Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B.

leaked Google document, We Have No Moat, And Neither Does OpenAI
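To make the at-home claim concrete, here is a minimal sketch of what hobbyist fine-tuning can look like with freely available tooling. The base model, dataset file and hyperparameters below are illustrative assumptions, not anything published by Meta or the memo quoted above; the key idea is attaching small LoRA adapters to an open model so that only a tiny fraction of the weights is trained, which is what keeps the hardware requirements at gaming-PC level.

```python
# Sketch: LoRA fine-tuning of an open base model on a small curated dataset.
# All names (model, file, hyperparameters) are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "openlm-research/open_llama_3b"            # any open base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token         # Llama-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small low-rank adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# A small, highly curated instruction dataset (hypothetical local file).
data = load_dataset("json", data_files="curated_examples.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llm-finetune", num_train_epochs=3,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

For larger models the same pattern is typically combined with quantization (QLoRA) or a few hours of rented GPU time, but the workflow stays the same: take an open model, add adapters, train on a small curated dataset.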

With the help of open AI models and development tools, small steps in development add up, leading to quick advances in AI capability through collaboration. Open AI is especially useful for private developers, making AI development possible for everybody. Furthermore, the development of smaller models is inherently incentivized, given that development naturally takes place on less powerful machines.

Theoretically, once AI can reliably improve itself, this improvement will speed up exponentially and can even lead to super-intelligence. This scenario, more than anything else, is talked about as an existential threat to humanity.

Existential threat of conscious super-intelligent AI

Now, let’s address the exaggeration part—conscious super-intelligent AI. Terms are important here. AI is not always AI: the word is used to refer both to artificial lifeforms in science fiction and to large language models. Large language models are complex algorithms that predict the next word of a sentence based on mathematical vector calculations. They’re no more conscious than your average calculator. Read Timothy B. Lee and Sean Trott’s excellent piece on Ars Technica, A jargon-free explanation of how AI large language models work, to demystify their function. We’re a long way off from Skynet taking global control and hunting people down as vermin.
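If “predict the next word” sounds abstract, the following minimal sketch shows it directly. GPT-2 is used only because it is small and fully open, not because it is state of the art; the prompt is an arbitrary example.

```python
# Sketch: ask a small open model to score candidate next tokens for a prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Open source software is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]        # a score for every vocabulary token

top = torch.topk(logits, k=5).indices             # the 5 highest-scoring continuations
print([tokenizer.decode(int(t)) for t in top])
```

Everything the model “knows” is contained in scores like these; there is no inner agent deciding what to say, only a function that ranks possible next words.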

It’s fine to think about the hypothetical case of conscious AI taking control—just as it’s fine to think about super-intelligent aliens coming to Earth and taking over. We might even want to prepare for these eventualities. But conscious AI requires something qualitatively different from large language models, and the openness of LLMs has no effect on whether a conscious AI will be born. The disruptive effects of AI on our society are here now, and should take precedence.

The trouble with uncensored AI

AI misuse has great potential for mischief: misinformation and fake content are easier to create than ever. Uncensored and jailbroken models can be leveraged to create hate speech and other offensive content. Manipulation attempts and phishing attacks take less time to create.

Large companies such as OpenAI and Meta go to great lengths to create guardrails for their AI—to limit its output to that which they, their investors and marketing partners deem safe and appropriate. This can be seen as a type of pre-censorship, as certain “opinions” are removed from the vocabulary of the AI, meaning that it cannot be used to publish those texts. Open-source models make it possible for anybody to bypass these limitations.

But [people fine-tuning their model] means that Meta’s finding that the model is very safe under their own preferred fine-tuning is approximately meaningless: It doesn’t describe how the model will actually be used.

Kelsey Piper, Why Meta’s move to make its new AI open source is more dangerous than you think

Trying to prevent usage of certain parts of the language model by installing guardrails is bound to fail when people can customize the model according to their preferences. That opens up possibilities for misuse. It also means that users are not forced to adhere to the sensibilities of a large corporation, which are susceptible to enshittification. Personalized AI follows the value system of the user.

Furthermore, mitigating misinformation and hate speech by building guardrails into closed AI models is not effective. Large bad actors will still create uncensored models for their own use, and guardrails are easy to break with some trial and error or basic web-search skills. AI guardrails are security theater, easily bypassed even without open models. Misinformation produced by AI needs to be mitigated the same way as other types of misinformation: with corrections and easy access to true information. The hate that fuels hate speech has never been extinguished by forceful suppression, and phishing attacks are better thwarted by login security (MFA) and security awareness.

Open AI models will undoubtedly be used to create uncensored models, and open uncensored models will be used for whatever people wish to use them for. AI is still a tool, and its application depends on its wielder. I don’t think we should worry about putting guardrails on AI any more than we worry about putting guardrails on sharp tools like axes. Guardrails are not effective, but the fight against misinformation and hate speech will not be won through censorship anyway, so perhaps the situation is not as bad as we might think.

Is open source AI Good or Bad?

Anybody can take an open AI model and further train it, and many of the general benefits of open source also apply to open AI models—they speed up AI adoption and development, and help collaboration. This can democratize AI, making it accessible to a broader range of people and organizations. Potential censorship is circumvented, and people can choose for themselves which models they want to use.

AI systems are often criticized for being ‘black boxes’ that make decisions without explaining how or why. This lack of transparency and accountability can lead to unethical use of AI, such as discrimination, bias, and invasion of privacy. Transparency with training materials allows users to review AI models more easily for bias, and makes it easier and more efficient to improve the models in an ethical way. It is possible to test the output of an AI model, but it takes more effort and can never be as comprehensive as a data audit for the pre-training dataset.

It all comes down to whether AI systems might be dangerous and, if they are, if we’ll be able to learn that before we release them. 

Kelsey Piper, Why Meta’s move to make its new AI open source is more dangerous than you think

A large language model can solely model language. Language in itself is not dangerous, though at its worst it can be used to incite violence and cause significant indirect harm. AI, and LLMs in particular, will cause great societal transformations as automated content generation displaces jobs and reshapes how we work. Open AI isn’t an exception in this regard, but neither does it exacerbate the issues.

It’s unclear how limiting AI development to closed-off models lessens the harmful consequences of AI. Conscious AI is far off, and its possible future dangers cannot justify limiting freedoms in the present unless the benefits of doing so are clear-cut. In fact, the openness of AI models helps democratize AI and allows smaller developers to take part in development. On consequentialist grounds, it is difficult to justify limiting AI development to closed-off corporate models, as there are no clear benefits to society that outweigh those brought by openness.

Good AI tools need to treat users as ends in themselves

Furthermore, there are other ethical considerations besides consequences. Deontology dictates that we act by universal normative rules. A famous formulation by Kant defines the Good as acting on universal, rational maxims that treat all people as ends in themselves—meaning, follow rules that value humans and their autonomy, without exception. If large corporations dictate how common people can and will use AI, it deprives users of the autonomy that is a fundamental human right.

I would argue that the only way AI can be good is if it is developed in a manner that serves all of humanity, ensuring the autonomy and liberty of its users through the use of transparent and open AI models.

  1. Boulanger, A. (2005). Open-source versus proprietary software: Is one more reliable and secure than the other? IBM Systems Journal, 44(2), 239–248.
