Will AI lead to the death of open source?

Mar 30, 2025 · 17 min read

In the last couple of years, I’ve watched the AI landscape transform through countless conversations with researchers, founders, and policymakers. Each group has their own vision of where we’re headed, but they all recognize a fundamental shift occurring: what began as a corporate-driven technological revolution is rapidly becoming a matter of national security.

The tension building at the heart of this transformation is between the ethos of open source that fueled decades of technological advancement and the increasingly strategic national interests at stake.

This isn’t merely about software development methodologies, it’s about who controls the most powerful technology humanity has ever created.

The stakes with AI feel different.

More consequential.

The forces pulling toward closed, controlled development are stronger than ever, while simultaneously, the case for openness has never been more compelling. These fundamentally contradictory strategies, pushed by opposing parties, are setting up a clash that will shape our technological future.

Which leads to one of the most significant technological questions of our lifetime.

Will open source AI development be under threat from government control?

And, perhaps more importantly, can it even be properly controlled in 2025?

Open Source is the foundation of modern tech

Before diving into geopolitics, it is important to note that nearly everything you interact with online today is powered by open source software. It’s not just a nice philosophy or a side project for passionate developers, it’s the foundation of the entire digital economy.

The browser you’re reading this on is probably built on Chromium or WebKit. The server delivering this content is likely running Linux with NGINX or Apache. And the app you are reading this on was likely built with frameworks like React, while the AI features behind it run on TensorFlow or PyTorch, all open source.

Even big tech companies that we sometimes think of as closed ecosystems, Google, Meta, Microsoft, rely heavily on open source infrastructure and contribute significantly to the ecosystem.

Some mistake this for charity, but it’s really strategic dependency. The leverage of having thousands of developers around the world build and maintain your core infrastructure is too valuable to pass up.

This approach has accelerated innovation at a pace that would have been unimaginable under a fully proprietary model. When a developer in Bangalore can improve code written in Seattle, which then gets enhanced by someone in Berlin, the rate of progress compounds exponentially.

When Google publishes a breakdown of its brand new SOTA architecture, the transformer, anyone can use it as the foundation for a whole new class of models we now call LLMs.
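That openness is more than an abstraction: anyone can pull an openly licensed transformer checkpoint off the shelf and build on it in a few lines. Here is a minimal sketch using the open source Hugging Face transformers library; the small GPT-2 checkpoint is just an illustrative example of a permissively licensed model, not anything specific to the labs discussed below.

```python
# A minimal sketch (illustrative, not from any particular lab): loading an openly
# published transformer model with the Hugging Face "transformers" library.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Build on the open architecture: run it, fine-tune it, or modify its layers freely.
inputs = tokenizer("Open research compounds because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same few lines work for far larger open models; what changes is the checkpoint name and the hardware you need to run it.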

AI development is the new age Manhattan Project

During World War II, the United States launched the Manhattan Project, a massive secret research initiative to develop the atomic bomb. Scientists who had freely published their work and collaborated internationally before the war suddenly found themselves working behind high security clearances, their discoveries classified. It was a deliberate attempt to keep the power of god locked behind closed doors.

But secrecy failed. Through espionage, the discoveries leaked, and soon a second country possessed the bomb. Today, nine nations have nuclear weapons.

AI is our generation’s Manhattan Project moment.

If nuclear physics represented a transformative technology that gave us the power to play god, advanced AI systems represent us trying to create god. The ability to shape thought itself. The ability to destabilize entire countries. The potential to transform or destroy the world.

The implications for national security are as obvious as they are alarming. Countries that lead in AI development gain advantages in intelligence gathering, information warfare, and economic competitiveness. The pressure to classify and control this technology grows more intense with each breakthrough.

But controlling information in the digital age is fundamentally different from what it was in the Manhattan Project era. When Oppenheimer and his team worked on the atomic bomb, information flowed through physical papers, face-to-face meetings, and controlled correspondence.

Today, code, research papers, and model weights flow freely across digital networks. A breakthrough at Google DeepMind can be implemented by a grad student in Taiwan within hours.

This reality has forced governments to find new chokepoints for control.

If they can’t contain the knowledge, perhaps they can restrict the means to implement it.

Controlling AI through hardware

Unlike software, which can flow freely across the internet, the advanced chips needed to develop cutting-edge AI can’t be downloaded. They must be manufactured through incredibly complex processes that only a handful of companies in the world can execute. This creates a natural chokepoint for controlling AI development.

The US has already leveraged this reality, implementing export controls that prevent companies like NVIDIA from selling their most advanced AI chips to certain countries. When the US told NVIDIA they couldn’t sell H100 GPUs to China, they effectively created a technological moat that’s nearly impossible to cross in the short term.

They also blocked manufacturing alternatives by limiting access to semiconductor manufacturing equipment, controlling the electronic design automation software needed for chip design, and recently expanding the controls to cover High-Bandwidth Memory (HBM), which is critical for modern AI accelerators.

This strategy represents a profound shift in how technology is controlled. Instead of trying to restrict information (which is nearly impossible in the internet age), governments are restricting the physical infrastructure needed to implement that information.

They are essentially saying, “You can have the blueprints for a nuclear reactor, but good luck getting the uranium.”

The economics are staggering. Training a frontier model can cost tens or even hundreds of millions of dollars in compute alone. This isn’t the kind of expense that a small startup or independent research lab can easily absorb. By controlling who gets access to the most powerful chips, governments can effectively decide who gets to play in the frontier AI space, regardless of how much knowledge is shared online.
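To make that figure concrete, here is a rough back-of-envelope estimate using the common approximation of about 6 FLOPs per parameter per training token. Every number below (model size, token count, GPU throughput, utilization, hourly price) is an illustrative assumption, not a report of any lab’s actual spend.

```python
# Back-of-envelope frontier training cost. All inputs are illustrative assumptions.
params = 1e12                 # assumed model size: 1 trillion parameters
tokens = 15e12                # assumed training set: 15 trillion tokens
total_flops = 6 * params * tokens   # ~6 FLOPs per parameter per token

gpu_peak_flops = 1e15         # assumed ~1 PFLOP/s low-precision peak per GPU
utilization = 0.4             # assumed fraction of peak actually sustained
gpu_hourly_cost = 2.50        # assumed cloud price per GPU-hour, in dollars

gpu_hours = total_flops / (gpu_peak_flops * utilization) / 3600
print(f"GPU-hours: {gpu_hours:,.0f}")                         # ~62,500,000 GPU-hours
print(f"Compute cost: ${gpu_hours * gpu_hourly_cost:,.0f}")   # ~$156,000,000
```

Tweak any assumption and the total swings wildly, but under almost any reasonable set of inputs the compute bill alone lands in the tens to hundreds of millions of dollars, which is exactly why controlling access to the chips is such an effective lever.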

However, companies are finding ways to navigate these restrictions. NVIDIA created the H800, a slightly downgraded version of the H100 that complies with export regulations for the Chinese market. Meanwhile, Chinese companies like Huawei are racing to develop domestic alternatives despite enormous technical challenges.

This hardware-based approach to control creates a multi-tiered world:

  • Countries with unrestricted access to the best chips
  • Countries with access to good but not cutting-edge chips
  • Countries effectively locked out of frontier AI development

But these restrictions also dramatically raise the geopolitical stakes. The concentration of advanced chip manufacturing in Taiwan through TSMC creates a single point of both technological and geopolitical leverage. For China, being cut off from advanced AI chips could increasingly look like an existential threat to their technological future, potentially making Taiwan an even more critical strategic objective. This dynamic creates a concerning parallel to other resource-based conflicts.

When nations feel their future is being threatened by denial of critical resources, they often take dramatic actions.

We have seen as recently as a few years ago that threats to a nation’s position on the global stage can lead to drastic action. Just as Russia’s invasion of Ukraine followed the potential expansion of NATO, restrictions on chip access could influence China’s calculus regarding Taiwan. Technology constraints can become casus belli when the alternative is permanent strategic disadvantage.

The burden of trust is no longer placed in the hands of a few, as during the Manhattan Project, nor is it placed on the whole world, as pure open source would suggest. Instead, it’s placed on those with access to silicon, creating a new kind of technological aristocracy based on access to compute.

But even this approach has its limits.

Innovation flourishes under constraints

The open source AI movement reflects a genuine belief that democratizing access to powerful AI systems will lead to better outcomes for humanity.

If more diverse voices can shape AI development, we’re less likely to end up with harmful systems that reflect only narrow corporate or governmental interests.

Making AI generally available is actually our best defense against catastrophe.

Their reasoning follows a compelling logic:

  1. If AI development is restricted to a handful of powerful entities, the technology will inevitably reflect their narrow interests and biases.
  2. Concentrated AI power increases the risk of misuse, whether intentional or accidental.
  3. Open development creates a robust ecosystem where harmful applications can be identified and countered by the community.
  4. Widespread access ensures no single actor can gain a decisive strategic advantage.

There is a lot of truth to this. Global access to frontier technologies means the next breakthrough is just as likely to emerge from a dorm room in Bangalore as from a research lab in Silicon Valley. Power isn’t concentrated in the hands of only a few.

But a fully open source approach to AI also carries much higher stakes.

It means open access for those who would use these tools for harm.

The same models that help students learn more effectively can generate misinformation at scale.

The algorithms that optimize supply chains can also identify vulnerabilities in critical infrastructure.

It also directly undercuts the actions countries can take against national security threats.

DeepSeek is the perfect example of this. Founded in China and facing significant export controls on advanced AI chips, the company should have been at a severe disadvantage compared to its Western counterparts. The hardware restrictions placed on China were specifically designed to slow their AI progress. And yet, in 2024, DeepSeek released their 236B parameter model as open source, rivaling systems built by companies with far greater resources.

What made this possible was not just DeepSeek’s ingenuity, but the open source ecosystem they could build upon. This represents a perfect example of what analysts are now calling “AI with Chinese characteristics”, a fusion of state guidance, private-sector innovation, and open-source collaboration, all carefully managed to serve long-term technological objectives.

By leveraging architecture insights from open models like LLaMA and Mistral, studying research papers from labs worldwide, and adapting techniques from the open source community, DeepSeek didn’t have to reinvent the wheel. The collective knowledge embedded in existing open source projects gave them a foundation that hardware restrictions couldn’t take away.

We’ve seen this pattern before in other sectors, but with critically different outcomes. When Chinese manufacturers flooded markets with cheap electric vehicles, solar panels, or smartphones, American policymakers could impose tariffs, trade restrictions, and import bans to protect domestic industries. These measures worked because physical products must cross borders, pass through customs, and can be effectively regulated.

But AI is fundamentally different.

You can’t put a tariff on knowledge or restrict the import of code that flows freely across the internet.

When DeepSeek’s team couldn’t access the latest Nvidia chips, they didn’t need to stop development, they simply adapted, building on publicly available research and open source foundations while developing clever architectural workarounds for the hardware limitations.

Unlike a car factory that needs specific components, AI development can continue despite hardware restrictions as long as the underlying knowledge remains accessible.

This compounding effect of open innovation is what makes hardware restrictions less effective than they might initially appear. When researchers can study, build upon, and improve each other’s work, progress accelerates in ways that simple resource constraints can’t fully contain.

DeepSeek’s team still had to overcome significant obstacles. Unable to simply throw more GPUs at the problem (the favored approach of many Western labs), they found clever architectural innovations that maximized efficiency. Their techniques for model training and optimization were born directly from the constraints they faced.
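One family of techniques associated with this efficiency-first style of engineering is mixture-of-experts routing, where only a few expert sub-networks are activated for each token, so compute per token is a fraction of the total parameter count. The sketch below is a generic illustration of top-k expert routing in PyTorch, with arbitrary assumed sizes; it is not DeepSeek’s actual architecture or code.

```python
# A generic top-k mixture-of-experts layer: a sketch of the efficiency idea,
# not any specific lab's implementation. Sizes are arbitrary assumptions.
import torch
import torch.nn as nn


class TopKMoE(nn.Module):
    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)   # scores each token per expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x):                              # x: (tokens, d_model)
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)     # keep only k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e               # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


layer = TopKMoE()
tokens = torch.randn(16, 512)
print(layer(tokens).shape)  # torch.Size([16, 512]); only 2 of 8 experts run per token
```

The point of the sketch is the scaling behavior: compute per token grows with the number of active experts rather than the total parameter count, which is one reason compute-constrained labs gravitate toward architectures like this.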

This illustrates once again the important truth that constraints often breed innovation. When the obvious path forward is blocked, researchers and engineers are forced to find alternative routes that might actually prove superior in the long run.

The DeepSeek case challenges the assumption that controlling hardware access can effectively contain AI development. While hardware restrictions create obstacles, they may ultimately just slow progress rather than prevent it entirely. Determined actors with sufficient expertise will find ways to innovate despite these limitations, especially when they can build on the foundation of open source work.

This doesn’t mean hardware controls are meaningless; they still shape the competitive landscape and influence who leads in AI development. But they’re not the impenetrable barrier that some policymakers might hope for. The cat and mouse game between restrictions and innovations will likely continue indefinitely, with neither side achieving a permanent advantage. Hardware control without control over the underlying software can only do so much.

This paradox, that open source both increases risks and fosters innovation that can circumvent restrictions, lies at the heart of the push-pull dynamics in AI governance.

Corporations and Governments are each playing their own game

The AI landscape has become a complex strategic chessboard where open source is both a weapon and a shield, and where corporate and governmental interests collide and sometimes align.

Using open source as a corporate strategy

Meta’s release of LLaMA wasn’t just technological philanthropy, it was a strategic move to prevent OpenAI and Google from capturing all the value in the AI stack. By open sourcing a powerful model, Meta effectively commoditized part of the AI value chain where they weren’t the leader, hoping to shift competition to areas where they might have advantages.

Their continued commitment to this strategy with LLaMA 2 and more recently LLaMA 3 shows that they see long-term strategic value in pushing against the closed model trend, even as they invest billions in AI development.

Stability AI built an entire business model around open source, betting that the value will accrue in customization, deployment, and specialized applications rather than in the base models themselves.

We’re even seeing hybrid approaches. Hugging Face has positioned itself as the GitHub of machine learning, creating a platform that supports both open and closed models while fostering a community that leans heavily toward openness.

For companies like OpenAI and Anthropic, keeping models closed aligns with both their safety concerns and their business models. But even they release research papers describing many of their methods, contributing to the open knowledge base while withholding specific implementations.

This corporate push-pull creates fascinating dynamics:

  • Each time a company like Meta or Stability AI releases a powerful open source model, it raises the baseline that proprietary models have to beat
  • This forces companies with closed models to keep pushing the frontier to maintain their advantage
  • The cycle accelerates innovation while constantly redrawing the line between open and closed

The government’s stake in AI development

Simultaneously, governments around the world are engaged in answering a complex question.

Does the competitive advantage of restricting access to cutting-edge AI outweigh the benefits of open innovation?

For national security establishments in the United States, Europe, and China, the instinct to control and restrict is strong and growing stronger. The export controls on semiconductor technology are just a preview.

We’re seeing new regulatory frameworks that impose constraints on the sharing of advanced AI models, training methodologies, and even theoretical research.

The EU’s AI Act, China’s new regulations on generative AI, and the Biden administration’s executive order on AI safety all point in the same direction: more government oversight and control.

But these restrictions face countervailing forces:

  1. Global Collaboration: The AI research community remains deeply international and collaborative. Many researchers actively resist classification of their work.
  2. Practical Reality: Truly effective containment of AI technology may be technically impossible, as the DeepSeek example shows.
  3. Innovation Speed: Countries that embrace openness might simply innovate faster than those that don’t.
  4. Market Demand: Businesses worldwide have developed a deep dependence on open source tools. Creating closed alternatives for every use case would be prohibitively expensive.

As national security concerns intensify, corporate strategies must navigate increasingly complex governmental priorities. It’s one of the biggest reasons you see every tech CEO with skin in the game spending so much time in Washington, lobbying for what they believe is the best approach.

We are caught in a tug of war between openness and control, with neither side likely to achieve total victory.

What science fiction tells us

Science fiction has long grappled with the implications of advanced artificial intelligence, offering both warnings and guideposts for our current predicament.

Isaac Asimov’s Three Laws of Robotics have served as a thought experiment for how we might constrain artificial intelligence:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The beauty of these laws was always in their simplicity and their contradictions. They created narrative tension precisely because they were insufficient for the complexity of the real world.

But implementing something like Asimov’s laws globally faces a fundamental challenge: it requires universal agreement in a deeply fragmented world.

The US, EU, China, and other major powers all have different values, priorities, and conceptions of what AI safety means.

China’s approach centers on alignment with their political system, prioritizing societal stability and national goals, while Western democracies emphasize individual rights and checks on government surveillance.

These differences make global governance frameworks nearly impossible to implement effectively.

Even if we could agree on a set of principles, enforcement would be another problem entirely.

In Asimov’s fiction, the Three Laws were hardwired into the positronic brains of robots, a technological solution to an ethical problem.

But with open source AI, anyone could potentially remove or modify safety constraints. A nation or organization that chooses to create AI without these constraints might gain significant advantages, creating powerful incentives to defect from any global agreement.

Perhaps Dune can offer us an even more relevant precedent. In the Dune universe, humanity experienced the “Butlerian Jihad,” a war against thinking machines that resulted in the complete prohibition of AI with the commandment: “Thou shalt not make a machine in the likeness of a human mind.”

What makes the Butlerian Jihad so fascinating is that it wasn’t just a policy decision, it was a violent, civilization-spanning rejection of AI after humanity had become thoroughly dependent on it.

Machines had taken over more and more human functions until they eventually enslaved humanity.

The machines weren’t inherently malevolent but were instead tools manipulated by a subset of humans to control others.

The final straw came when machine intelligence began to replace human thought and creativity, threatening the very essence of what made us human.

The Jihad wasn’t led by governments but emerged organically as a grassroots revolution when enough people recognized the existential threat. It took catastrophic events to trigger universal agreement that this technology needed to be abandoned, not just regulated, but completely prohibited. Only when faced with extinction did humanity find the will to enforce a truly global ban.

What’s equally fascinating is that the prohibition of AI didn’t stop technological progress, it just channeled it in different directions. Instead of artificial intelligence, the universe developed human potential to extraordinary degrees, creating Mentats (human computers) and the Bene Gesserit (with cognitive and physical abilities enhanced through training).

Society evolved whole new institutions and methodologies to fill the gap left by AI.

I don’t think we are headed towards our own version of a Butlerian Jihad but there are some interesting parallels. We’re already seeing calls from prominent figures for AI development moratoriums and increasingly stringent regulations.

Today’s policymakers are attempting to write their own version of safety protocols. But instead of elegant principles, we’re getting sprawling regulatory frameworks that still somehow manage to miss the core issues. The EU’s lengthy AI Act can’t possibly anticipate every use case, and the U.S. approach remains fragmented across multiple agencies.

The core tension remains unresolved: how do we balance innovation with security?

How do we prevent harmful uses without stifling beneficial ones?

Is doing it comprehensively and correctly even possible?

And if the technology is open source, does complying with regulation just set you up to fall behind nations that aren’t complying at all?

It’s a question we have long argued through our fiction, but one we now have to decide in reality.

Where do we go from here?

The complete death of open source in advanced AI isn’t inevitable, but every new geopolitical tension and demonstration of AI’s power pushes us further toward a more controlled model. The question is what form this control will take and how it might reshape innovation.

What’s unique about our current moment is that we’re making these decisions in a completely novel environment. When nuclear technology emerged, the world had centuries of experience with nation-state rivalries and industrial secrecy. But the internet has created an information environment unlike anything in history, one where the default is openness and where borders are porous by design.

We’ve built the entire modern tech stack on the assumption that knowledge should flow freely. Our development tools, our deployment pipelines, our collaborative workflows, they all assume a world where code and ideas move without friction across organizational and national boundaries.

Constraining AI development will mean rebuilding significant parts of this infrastructure. It will be messy, expensive, and almost certainly less efficient.

I am not even sure if it is truly possible.

But an incoming shift in that direction does present startup founders with interesting challenges and many opportunities:

  1. The value of proprietary AI will increase as open alternatives face growing restrictions
  2. Companies that can navigate complex regulatory landscapes will gain advantage
  3. Nations will compete to create AI innovation hubs with special regulatory status
  4. New business models will emerge around “trust” and “security” as differentiators
  5. The gap between cutting-edge and publicly-available AI capabilities will likely widen

Just as we saw a pushback against globalization, we will see the same dynamic trickle down into the development of AI. It is a paradigm shift that most companies and governments will pay a lot of money to stay on top of.

The Middle Path: A New Equilibrium

I don’t actually think AI will be the death of open source.

But I do think the current path of development will go through fundamental shifts.

I see a major swing in favor of state control, followed by a pushback, eventually settling into a new equilibrium that none of the current players would have chosen independently:

The Base Will Stay Open: Foundational AI technologies, especially those already widely distributed, will remain open source. The cat is out of the bag for many core algorithms. We’ll continue to see MIT and Apache licenses on the repositories that form the backbone of AI infrastructure.

The Frontier Will Fragment: Cutting-edge AI will operate under various levels of restriction in different jurisdictions. Some countries will embrace openness to gain competitive advantage, while others will impose tight controls for security reasons. This fragmentation will create multiple development paths with their own strengths and weaknesses. It will also lead many countries to use deregulation as a lever for attracting economic investment and political strength, much as Dubai is doing with crypto today.

Community Strength: The open source AI community won’t disappear; it may even grow stronger in response to restrictions, developing innovative ways to collaborate within new constraints. Just as DeepSeek found ways to innovate despite hardware limitations, the broader community will adapt to whatever restrictions emerge. And it will become even more vocal against the restrictions governments impose.

National Divergence: Different countries will experiment with different balances of openness and control, creating natural experiments in AI governance. Some will bet on openness, others on tight control. This diversity of approaches may actually be healthy for global innovation.

Responsible Openness: New norms will emerge about what should be shared and how, creating a more nuanced approach than the binary open/closed distinction we often use today. We may see the development of structured access regimes, tiered release strategies, and other approaches that navigate the middle ground. It will be a political talking point for every election going forward.

Is this the right approach?

Can this approach avert potential catastrophe?

Is it all just a slow march to the end of humanity?

I don’t know. But I think staying optimistic is the right approach.

Somehow, humans have managed to avert existential crises, even when survival has come down to the courage of one man refusing to press the button that would end the world.

I think messy, contradictory equilibriums are exactly what allow us to do this.

But as we keep building world-threatening technology that we keep losing control of, it remains to be seen how long our extraordinary adaptability can bail us out before it simply can’t.

I started a bi-weekly newsletter about IRL events, startups and building products! These articles will be posted there first so feel free to subscribe!

Thanks for reading,

Daivik Goel
