DeepSeek, Open Source and DePIN: A Powerful Combo for Decentralized AI?

Pete Harris, Principal, Lighthouse Partners, Inc.

Updated: Feb 8


Image of smartphone screen showing DeepSeek V3 Capabilities

The impact of the recent release of the V3 large language model from DeepSeek cannot be overstated, especially for the artificial intelligence sector in the USA. Its comparatively inexpensive development and impressive benchmark performance stunned many AI experts and led to massive stock price declines for several AI-related companies, including Nvidia and Oracle. And its heritage – DeepSeek was founded in 2023 as a spinout from High-Flyer Capital Management, a hedge fund based in Hangzhou, China – caused some trade observers to question whether it represented geopolitical payback for US government actions related to AI chip export controls and even moves to ban the social media service TikTok.


But it is DeepSeek’s decision to open source V3 – considered highly proficient at solving computer coding and mathematics problems – that has excited many in the tech world. And that includes those looking to implement the model (and its related open source R1 reasoning model) on Decentralized Physical Infrastructure Network (DePIN) platforms to underpin Decentralized AI (DeAI) services.


AI Brains vs GPU Brawn


Much has been written about how DeepSeek’s technologists put their smarts to great use to design, code and implement V3 with several complex optimizations, bucking the accepted “brute force” approach adopted by the likes of OpenAI, Anthropic and Google.


In the AI world, brute force means improving model performance by building extremely costly datacenters hosting tens or even hundreds of thousands of GPU chips to train models on datasets as large as the entire internet. Taking this approach to an extreme is the proposed Stargate joint venture, which might end up costing $500 billion.


Already, some observers have expressed skepticism at DeepSeek’s claimed use of a training cluster of just 2,048 Nvidia H800 GPUs and its stated V3 training cost of less than $6 million. And OpenAI has suggested that DeepSeek included data ‘distilled’ from OpenAI’s own models in its training inputs – which would be a breach of OpenAI’s terms of service. DeepSeek has not revealed the sources of the data used to train its models.


The Benefits of Open Source


DeepSeek has already received considerable criticism for its privacy and security issues, and for holes in its knowledge base that result from politically-driven censorship. While such criticisms – related to DeepSeek’s online chatbot, its mobile app, and its API service – are justified, they are unlikely to apply to deployments of DeepSeek’s open-source codebase, which is essentially a transparent building block for creating AI applications.


Because DeepSeek is providing its models as open source, service providers can download the model code for free, examine it for security flaws, modify it as needed for specific implementation scenarios, and then run it on their own servers and permissioned networks, avoiding the possibility that user profile and prompt data might be harvested (as routinely happens with cloud versions of models from OpenAI and others). Expect to see the open-source community pay great attention to potential privacy and security flaws, and to provide fixes that will be made available to all.


Service providers can also train the models further on datasets of their choice. While that represents a considerable cost, it allows the provider to include data from specialty and deep-web sources, which will likely be needed to address the business needs of industry verticals such as financial services, healthcare, and defense.


Leveraging DeepSeek for DePIN and DeAI


DeepSeek’s training efficiency and frugal GPU requirements – assuming the company’s claims are accurate – make its models an attractive option for smaller service providers and DePIN operators to offer.


For DePIN operators, DeepSeek allows participants with older GPU models to contribute compute to their networks, expanding those networks’ overall power and availability. Indeed, since DeepSeek’s R1 reasoning model (released soon after V3 and also open source) has several variants, including distilled versions small enough to run on a laptop, that route could massively expand supply-side opportunities for DePINs.
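As a rough illustration of how low the barrier to entry is, a small distilled R1 variant can be run locally with an off-the-shelf model runtime. The sketch below assumes the Ollama runtime is installed and that the `deepseek-r1` model tags shown are still current in Ollama’s library; treat both as assumptions to verify, not as the only deployment path.

```shell
# Hypothetical local deployment of a distilled DeepSeek R1 variant
# via the Ollama runtime (assumes Ollama is already installed).
# Model tags are illustrative -- check Ollama's model library for
# the current names and sizes.

# Pull a small distilled R1 variant that fits on a typical laptop
ollama pull deepseek-r1:1.5b

# Run a one-off prompt against the local model. Prompts and
# responses stay on the machine -- the privacy point made above.
ollama run deepseek-r1:1.5b "Explain what a DePIN is in one sentence."
```

Larger distilled variants (7B, 14B, 32B parameters and up) trade heavier hardware requirements for better output quality, which is what lets DePIN networks match heterogeneous GPU supply to different workloads.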


Akash Network, an established DePIN provider, is already offering DeepSeek models via its Supercloud service, which taps into a decentralized global network providing CPU, GPU and storage infrastructure that offers privacy and censorship resistance. Currently, more than 1,000 GPUs are available on the Supercloud.



DeepSeek R1 deployed on the Akash network

One Akash-based application – Venice.ai – has also begun offering DeepSeek R1 access to Pro users of its privacy-oriented chatbot. Notably, Venice does not store user prompts or model responses on its servers.


Having upended the AI world with its initial models, DeepSeek can be expected to remain a front-of-mind player, and its impact is likely to spur the development of other open-source models that will collectively both benefit from and fuel the DePIN deployment approach, powering further DeAI applications.


Where might this all lead? Perhaps a DePIN-based DeAI equivalent of Stargate that even research nonprofits and developing economies can afford? Democratization of AI for sure.
