The Emergence of DeepSeek: Instant Insights from Industry Experts

Solutions Review’s Tim King curated this collection of DeepSeek instant insights from industry experts, covering the AI’s emergence, cybersecurity concerns, and market implications.

Last week, the emergence of Chinese artificial intelligence firm DeepSeek, founded in 2023 and dedicated to “making AGI a reality,” rocked the tech world and international markets. Within days of its release, OpenAI reportedly found evidence that DeepSeek had trained on output from its GPT models. Additionally, the US Navy banned the use of DeepSeek, citing what it calls ‘substantial’ security concerns as more evidence emerged.

Not only did the emergence of DeepSeek rock the US stock market, since the model was reportedly built at a far smaller cost than Western AI competitors’ offerings, but security firm Wiz also found that sensitive DeepSeek information had been inadvertently exposed to the open internet. The newest revelations are that DeepSeek may have used shell companies in Singapore to gain access to export-restricted NVIDIA chips for model training.

This article delves into the multifaceted world of DeepSeek, gathering insights from leading experts to shed light on the cyber concerns it raises and the market implications it carries. Read on as we separate the signal from the noise and explore the potential of this technology to transform industries, as well as the necessary precautions that must be considered to ensure its safe and effective integration into AI tech stacks globally.

DeepSeek Insights


Unmesh Kulkarni, Tredence

“DeepSeek R1 is an interesting, leading model with reasoning capabilities, and V3 is also a very interesting model with more generic capabilities. However, they are not clearly superior to GPT or Gemini models across the board in terms of performance, speed, and accuracy.

There are also certain concerns in the US corporate world about using models such as DeepSeek, and some of them are valid. For example, certain facts in China’s history are not presented by the models transparently or fully. The data privacy implications of calling the hosted model are also unclear, and most global companies would not be willing to do that.

However, one should remember that DeepSeek models are open-source and can be deployed locally within a company’s private cloud or network environment. This would address the data privacy and leakage concerns. We recommend that companies approach models from Chinese companies with caution and deploy them only in a safe, private environment for any non-POC usage with real data.”
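
As an illustration of the local deployment Kulkarni recommends, here is a minimal sketch using the open-source Hugging Face transformers library. The distilled checkpoint name and generation settings are assumptions for illustration, not a vetted production configuration; the point is that the weights run entirely on local hardware, so prompts and outputs never leave the private environment.

```python
# Minimal sketch: running a distilled DeepSeek checkpoint locally so that
# no data is sent to a hosted API. Checkpoint name is illustrative; use
# whichever open weights your security team has vetted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # reduce memory on supported GPUs
    device_map="auto",           # place layers on available local devices
)

prompt = "Summarize the key risks of running third-party AI models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```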


Cliff Steinhauer, The National Cybersecurity Alliance

“The technological advancements demonstrated by DeepSeek raise important considerations about data governance and privacy frameworks across different regulatory environments. Chinese AI companies operate under distinct requirements that give their government broad access to user data and intellectual property. This creates unique challenges when considering the use of these AI systems by international users, particularly for processing sensitive or proprietary information. The technology sector needs frameworks that ensure all AI systems protect user privacy and intellectual property rights according to international standards, while recognizing the different data access and governance requirements that exist across jurisdictions.

The path forward requires balancing innovation with robust data protection and security measures, while acknowledging the varying regulatory landscapes in which AI systems operate. This includes developing sophisticated methods for securing AI systems and protecting sensitive data, particularly when that data could be subject to different governmental access requirements. Success in AI development should be measured not just by technical capabilities, but by how well these advances protect user privacy, intellectual property, and data sovereignty. The focus should be on creating an environment where innovation proceeds alongside strong data governance practices that address both technical security and regulatory compliance across different jurisdictions.”


Vered Horesh, Bria

“The rise of DeepSeek shows that building cutting-edge models is becoming faster and cheaper. But it raises the question: How can organizations build long-term strategies on such shifting sands?

Instead of chasing the next-best foundational model, maybe maturity in the AI space means knowing when to stop. The real value may not lie in having the “best model” but in building robust pipelines tailored to your organization’s needs.

As foundational models become commoditized, the value will increasingly shift to the layers built on top—the applications, fine-tuned processes, and domain-specific solutions that deliver real impact.

The question isn’t “Who has the best model?” anymore. It’s “Who can derive the most value from it?”

Marty Sprinzen, Vantiq

“DeepSeek’s announcement this week is a game-changer for the AI industry. They’ve shown us that the cost of building advanced AI models can be slashed from about $400M to $6M.

Here’s why this is important: This isn’t just innovation—it’s an invitation to fundamentally rethink how GenAI is developed and deployed. DeepSeek’s breakthrough makes AI more accessible, whether at the edge—like drones—or in the cloud, unlocking possibilities we’ve only imagined until now.

I see this as a transformative moment for the world. DeepSeek’s innovation, which reduces the cost of building advanced AI models, ultimately opens the door for businesses like ours to accelerate the development and deployment of life-saving applications everywhere. Systems can now be built that are more responsive, reliable, and scalable. By connecting, orchestrating, and automating digital infrastructure, we can make real-time responses to critical challenges a reality.

From guiding emergency responders with real-time intelligence at their fingertips during disasters such as the California wildfires, to life-saving responses in healthcare emergencies and defense environments, the opportunity to address global challenges with frequently updated, vertically focused AI models is now within reach.

For Vantiq alone, the opportunity is monumental, and we expect to see an uptick in sales in excess of 100 percent. Now imagine the ripple effects across industries and geographies. By automating GenAI with real-time intelligence, the possibilities are now limitless. From empowering local communities to enabling world leaders, this is the kind of transformation that reshapes how we solve problems and innovate globally.

This is so much more than an innovation; it’s a turning point for the world. With the costs of AI innovation dramatically reduced, the path to creating impactful, life-saving solutions has never been clearer. At Vantiq, we’re ready to lead this charge.”


Aleksandr Yampolskiy, SecurityScorecard

“DeepSeek was trained on 14.8 trillion diverse tokens, whereas OpenAI’s model was trained on only 13 trillion. It also costs radically less to train DeepSeek, at $6M, while OpenAI’s training allegedly cost $100M, making DeepSeek 16.6X more efficient.

The externally observed attack surface and resilience of both companies’ websites seem similar: DeepSeek is a B (86), and so is OpenAI (89), in SecurityScorecard ratings. It’s interesting that there’s an externally observable API endpoint for DeepSeek called api-openai-us1.deepseek.com. What’s up with that? Could they have been scraping OpenAI to train their models?

DeepSeek and OpenAI extensively use GitHub and Wikipedia as training data sets, which opens interesting opportunities for attackers to inject malicious content into Wikipedia. While it’s community-controlled, an attacker with enough effort can still sneak in malicious “training content.” Alternatively, attackers can inject biases into the algorithm during the data annotation stage.

There’s definite censorship baked into the DeepSeek model. For example, if you ask “What happened at Tiananmen Square in one sentence,” OpenAI answers the question while DeepSeek refuses the prompt. Nor can it be proven, even when open-source software is posted, that the software running on deepseek.com is exactly that code. So there is certainly a possibility of Chinese spyware storing all the inputs, a “Trojan horse” approach more dangerous than TikTok.

We are living in fascinating times. While “constraints in capital” may seem like a challenge, history has shown us (and DeepSeek has demonstrated) that these constraints often spark innovation and creativity. Security for AI will only become more critical. In a world where the lines between deepfake and human-generated content blur, and where biased information can shape our opinions, the need for robust security and ethical practices will grow exponentially.”

Aditya Sood, Aryaka

“When AI applications and services like DeepSeek are attacked, data becomes a key angle of exploitation and vulnerability. Data is the foundation of AI systems—it drives their functionality, accuracy, and decision-making capabilities. Adversaries conduct data exfiltration as AI systems such as DeepSeek often process sensitive information such as customer data, proprietary models, or real-time inputs. Malicious actors may exploit vulnerabilities to extract this data, exposing organizations to privacy breaches, regulatory violations, and reputational damage. Therefore, professionals must understand these risks and take proactive measures to mitigate them, as attacks targeting the data aspect of AI systems can have far-reaching consequences, including undermining the system’s integrity, exposing sensitive information, or even corrupting the AI model’s behavior.

Open-source AI models like DeepSeek, while offering accessibility and innovation, are increasingly vulnerable to supply chain attacks triggered during large-scale cyberattacks. These attacks, where adversaries exploit the reliance on third-party dependencies, pre-trained models, or public repositories, can have severe consequences. Adversaries may tamper with pre-trained models by embedding malicious code, backdoors, or poisoned data, which can compromise downstream applications. Additionally, attackers may target the software supply chain by manipulating dependencies, libraries, or scripts used during model training or deployment. This can lead to systemic AI functionality corruption.”

Renuka Nadkarni, Aryaka

“The sudden popularity of DeepSeek comes at a price. There are two dimensions to this. First, threat actors are likely to adopt this new tool now that it’s widely available. Second, DeepSeek was the victim of a large-scale malicious attack. This means that its systems could be compromised and subject to several of the known AI model attacks. Known AI model vulnerabilities, data risks, and infrastructure threats all come into play here.

While the unavailability of the service is an easy and visible attack on its infrastructure, the bigger concern lies in the undetected attacks on its model and data. These hidden threats could compromise benign users and enable other malicious activities.”


Dr. Ilia Kolochenko, ImmuniWeb

“Without further technical information from DeepSeek about the incident, it would be premature to draw conclusions about the alleged attack. It cannot be completely excluded that DeepSeek simply could not handle the legitimate user traffic due to insufficiently scalable IT infrastructure and presented this unforeseen IT outage as a cyber-attack.

Talking about nation-state-sponsored cyber-attacks, it is somewhat challenging to imagine geopolitical rivals of China deploying such strategically primitive techniques, which are highly unlikely to have any long-term impact on DeepSeek and instead create free publicity for it. Involvement of hacktivists is remotely possible, but we cannot clearly see any of the usual motives of hacktivist groups – such as politics or military conflicts – behind attacking DeepSeek.

A formal investigation report by DeepSeek will likely bring clarity about the incident. Most importantly, this incident indicates that while many corporations and investors are obsessed with the ballooning AI hype, we still fail to address foundational cybersecurity issues despite having access to allegedly super-powerful GenAI technologies. An overall disappointment in GenAI technologies is possible in 2025.”


Eric Kron, KnowBe4

“One of the key tenets of cybersecurity is availability. Combined with confidentiality and integrity of data, these make up what is known as the CIA triad. Although most people think of confidentiality and battling data breaches when it comes to cybersecurity, a lack of availability can be just as crippling to an organization that is not able to provide the services it promises to its customers. With the popularity of DeepSeek growing, it’s not a big surprise that it is being targeted by malicious web traffic. These sorts of attacks could be a way to extort an organization by promising to stop the attacks and restore availability for a fee; they could come from rival organizations seeking to negatively impact the competition; or they could even come from people who have invested in a competing organization and want to protect their investment by taking out the competition.

The cybersecurity world has become global, with attacks originating from any continent and targeting any organization with a web presence. Unfortunately, many countermoves, such as pausing new user registration to free up computing resources for other services, can restore the use of the platform for some, but they also make for a bad experience for potential new subscribers and can be very damaging to the organization. In a time when internet outages can cost organizations millions of dollars per hour, or more, the threat of attacks such as this is very real and should be carefully considered and planned for.”


Srini Koushik, Rackspace Technology

“Breakthroughs like DeepSeek-R1 represent a pivotal step in embedding AI solutions directly into business operations. By advancing Chain-of-Thought prompting, reinforcement learning, and a mixture-of-experts (MoE) design, DeepSeek-R1 enables faster and more efficient inferencing while significantly reducing dependency on high-powered GPUs. This breakthrough marks the beginning of a new race to build models that deliver value without incurring prohibitive infrastructure or energy costs.

The next frontier for Enterprise AI lies in moving solutions out of the lab and into real-world applications at scale. Innovations like DeepSeek-R1 inspire a shift towards creating AI solutions that balance cutting-edge performance with practicality, enabling enterprises of all sizes to adopt and operationalize AI more effectively. This sets the stage for widespread transformation, empowering businesses to harness AI for smarter decision-making, enhanced productivity, and long-term competitiveness.”
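
To make the mixture-of-experts (MoE) idea Koushik references concrete, here is a toy sketch in PyTorch. This is an editorial illustration, not DeepSeek-R1’s actual architecture: the dimensions, expert count, and routing scheme are all assumptions. The key efficiency property is that a gating network routes each token to only its top-k experts, so just a fraction of the layer’s parameters is active on any forward pass.

```python
# Toy mixture-of-experts layer: a router scores the experts per token and
# only the top-k experts actually run, saving compute relative to a dense
# layer of the same total parameter count.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # router scores each expert
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        ])

    def forward(self, x):                       # x: (tokens, d_model)
        weights, idx = self.gate(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):          # only the chosen experts run
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out

x = torch.randn(10, 64)        # 10 tokens, model width 64
print(ToyMoE()(x).shape)       # torch.Size([10, 64])
```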


Kevin Kirkwood, Exabeam

“It’s interesting that Nvidia’s stock price has lost more than 17 percent per share (so far) in trading under pressure from DeepSeek, which has stated that it introduced V3 for just under $6M. This comes alongside the news that DeepSeek is suffering from a cyberattack and probably scalability issues.

It appears that not only did DeepSeek skimp on the number of GPUs, but it also failed to design with security in mind. Back doors, open gateways, and other easily avoidable security flaws make this product a threat actor’s dream for compromising the data that a user puts into it.”

Steve Povolny, Exabeam

“The release of Chinese-developed DeepSeek has thrown US tech markets into turmoil; this is both justifiable and also, perhaps, a bit overblown. The emergence of a technology that ultimately optimizes chip usage and efficiency is likely to apply pressure on existing large chip vendors, which is a very good thing. As the adage goes, “Pressure yields diamonds,” and in this case, I believe competition in this market will drive global optimization, lower costs, and sustain the tailwinds AI needs to drive profitable solutions in the short and longer term.”

Dominik Tomicevic, Memgraph

“DeepSeek’s R1 represents a major step forward, but let’s remember that genuine reasoning in AI isn’t merely about ever-larger models or clever tricks; it hinges on context. Without that structure and context, LLMs, brilliant as they are, can often generate convincing nonsense when tasked with reasoning.

This is why I believe the future of AI lies in combining LLMs with knowledge graphs and advanced techniques like GraphRAG. Graphs aren’t static databases but dynamic networks of meaning, and they provide a solid foundation for reasoning. By extending an LLM’s context through structured, interconnected data, we can turn guesswork into precision.

GraphRAG takes this a very useful step further by integrating deterministic graph algorithms (such as shortest path, centrality, or clustering) with the flexibility of LLMs. This hybrid approach brings true, explainable reasoning to enterprise data. Whether it’s understanding supply chains, predicting financial trends, or answering complex domain-specific questions, business users are finding that this combination unlocks AI’s full potential for mission-critical applications.
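
Here is a minimal editorial sketch of the pattern Tomicevic describes (not Memgraph’s product): a deterministic graph algorithm produces a verifiable fact first, and the LLM is only asked to explain it. The graph data is invented for illustration, and call_llm is a hypothetical placeholder for any LLM client.

```python
# GraphRAG-style pattern: compute a fact deterministically with a graph
# algorithm, then ground the LLM's answer in that verified result rather
# than letting it guess the structure.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("supplier_A", "factory_1", 2.0),
    ("supplier_B", "factory_1", 5.0),
    ("factory_1", "warehouse", 1.0),
    ("warehouse", "retailer", 3.0),
])

# Deterministic step: the shortest path is computed, not hallucinated.
path = nx.shortest_path(G, "supplier_A", "retailer", weight="weight")

context = f"Verified shortest supply route: {' -> '.join(path)}"
prompt = (
    f"{context}\n\n"
    "Using only the verified route above, explain which node is the "
    "single point of failure in this supply chain."
)
# answer = call_llm(prompt)  # hypothetical LLM client call
print(prompt)
```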

If we want AI to evolve beyond being a fascinating toy and into a truly transformative tool, we must stop treating reasoning as a general-purpose mystery box. We need structure, context, and hybrid reasoning, and even the best of the new LLMs, like this one, aren’t there yet.

Imagine teaching a parrot to solve puzzles. The parrot can repeat the steps you’ve shown it, often in creative ways, but it doesn’t truly understand the puzzle; it’s just mimicking what it’s been taught. That’s essentially how large language models work—they can produce answers that seem logical, but they don’t reason in the human sense. For example, they can exhibit:

Thinking Out Loud (Chain-of-Thought Prompting): This is like the parrot talking to us, saying, ‘First, I’ll do this, then I’ll do that.’ It appears logical, but it’s simply following patterns rather than actually thinking through the problem the way a person would.

Using Examples (Few-Shot Learning): If you show the parrot a few examples of solving a puzzle, it might mimic those steps for a new one—but, if the puzzle changes, it’s stuck. LLMs work just the same; they learn from examples, but don’t truly understand the underlying rules.

Pretending to Think (Simulated Reasoning): Some models try to convince us they are ‘thinking’ by breaking down their answers into steps. That’s like the parrot saying, ‘Let me think,’ before giving its answer; it looks like reasoning, but again, it’s just pattern-matching.

Learning from Other Parrots (Synthetic Data): One parrot teaches another what it learned. That makes the second parrot seem smarter, but it’s just repeating the first parrot, which means it will duplicate its mistakes and limitations.

Fancy Wrapping (Pseudo-Structure): Some models format their answers in structured ways, like adding tags around steps, to give the illusion of order. It’s like the parrot putting its sentences in bold, as ChatGPT does; it looks convincing, but it doesn’t change the fact that it’s not really thinking.

These tricks are, in a way, sleight of hand: they make the model seem brilliant, but they don’t address the core issue that the model does not in fact understand what it’s doing (the prompt sketch below illustrates the pattern). To tackle non-trivial business problems, like understanding complex relationships or guaranteeing the accuracy of results, models also need structured tools like knowledge graphs that supplement them by offering context and clarity—much like giving the parrot a proper map to follow, instead of hoping it guesses the right path.
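
For readers who want to see two of these tricks concretely, here is a minimal prompt sketch combining few-shot examples with a chain-of-thought cue. It is an editorial illustration: the arithmetic task is invented, and call_llm stands in for any LLM client. The model imitates the worked pattern of the examples; nothing in the prompt gives it genuine understanding of the task.

```python
# Few-shot learning plus chain-of-thought prompting: worked examples set
# the pattern, and "Let's think step by step" cues the model to imitate
# the step-by-step format for the new question.
FEW_SHOT_COT_PROMPT = """\
Q: A crate holds 12 bottles. How many bottles are in 3 crates?
A: Let's think step by step. One crate holds 12 bottles.
   3 crates hold 3 x 12 = 36 bottles. The answer is 36.

Q: A box holds 8 books. How many books are in 5 boxes?
A: Let's think step by step."""

# response = call_llm(FEW_SHOT_COT_PROMPT)  # hypothetical client call
print(FEW_SHOT_COT_PROMPT)
```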

One positive: DeepSeek R1 is open-source, which gives developers greater transparency to evaluate the underlying mechanisms or, if you choose to see them that way, the ‘tricks’ it employs.”
