DeepSeek has been making waves in the tech world, especially following the success of its R1 AI model, which has reportedly outperformed OpenAI’s o1 on reasoning benchmarks in math, science, and coding. This achievement catapulted DeepSeek to the top spot as the most downloaded free app in the US, surpassing even ChatGPT. Stock prices of major companies such as Microsoft, Meta, and NVIDIA fell sharply in response, as investors reacted to DeepSeek’s rise.
Meta’s chief AI scientist, Yann LeCun, attributes DeepSeek’s breakthrough to its open-source approach. However, security concerns are mounting: DeepSeek recently limited new user registrations temporarily in response to “large-scale malicious attacks” targeting its services. Existing users can continue using the app without interruption.
While industry experts praise DeepSeek for surpassing proprietary AI models, some critics downplay the accomplishment, noting that its open-source code can be viewed and modified by anyone at no cost. The innovation rests on DeepSeek’s open-source V3 model, on which R1 is built.
The model reportedly cost around $6 million to develop, a modest sum compared with the massive funds poured into other flagship models, whose progress has been slowed by the scarcity of high-quality training data.
The buzz over DeepSeek comes just as OpenAI and SoftBank announced their ambitious $500 billion Stargate Project. This project aims to enhance AI infrastructure across the U.S., with President Donald J. Trump calling it the largest AI infrastructure project in history, asserting that it will “keep the future of technology” within American borders.
While DeepSeek upholds OpenAI’s original mission of freely developing AI systems that benefit all of humanity, its open-source nature has raised security concerns for the Chinese startup, especially in light of the recent cyberattacks. Perhaps OpenAI’s CEO, Sam Altman, had a point when he suggested that keeping advanced AI models closed-source might be the safer route, as it “provides an easier way to hit the safety threshold.”