91%-failure-rate:-why-deepseek-is-the-most-dangerous-ai-you-might-be-using
91% Failure Rate: Why DeepSeek Is The Most Dangerous AI You Might Be Using

We’d love to say DeepSeek is the safest and most ethical AI on the planet. But after reading AppSOC’s latest report, we’re starting to think George Orwell was just a few decades off. Maybe Nineteen Eighty-Four should’ve been called Twenty Twenty-Five. Their findings aren’t just concerning—they’re downright chilling. Some of DeepSeek’s security flaws uncovered in their report might make you rethink using DeepSeek, whether for business or just casual fun.

The dark horse rises

Before diving into the cracks beneath DeepSeek’s polished exterior, we need to understand why this AI model has shaken the industry to its core.

For the past couple of years, OpenAI dominated the AI space. Google had been tinkering with AI for years, but when OpenAI unleashed ChatGPT onto the masses, it became painfully clear just how far behind Big Tech really was.

Then, out of nowhere, DeepSeek arrived.

There was no slow buildup. No teasers. Just a sudden appearance of a brand new AI model that could go toe-to-toe with ChatGPT. Backed by High-Flyer, a Chinese hedge fund with pockets deep enough to make some billionaires blush, DeepSeek didn’t just enter the market—with its steel-toed boot, it kicked the door down.

It blindsided the industry with a model that rivals ChatGPT at a fraction of the cost. Instead of the traditional, resource-heavy training methods used by OpenAI and Google, DeepSeek leveraged distillation. In layperson terms, it learned from existing AI outputs rather than raw data.

Aidan Gomez, CEO of Cohere, acknowledged the brilliance of this approach, telling Business Insider, “I think it validated Cohere’s strategy that we’ve been pursuing for a while now. Spending billions of dollars a year isn’t necessary to produce top-tier tech that’s competitive.”

With its arrival, DeepSeek left a trillion-dollar hole in the US tech stock market. While the company insists it built its model on a shoestring budget, some reports suggest the true investment may actually be in the billions.

DeepSeek’s open-source model also sets it apart. It is free (for end users), accessible, and easily modified. While competitors charge up to $200 a month, DeepSeek costs next to nothing, making it the obvious choice—or so it seems.

When something seems too good to be true, it usually is. AppSOC’s findings suggest that DeepSeek’s affordability might come with a hidden cost—one that has nothing to do with money.

The intern from hell

AppSOC’s report dives deep into the technical aspects of DeepSeek’s security flaws, but let’s skip the tech jargon for a bit. Instead, we want you to imagine this scenario.

Imagine hiring an intern who seems poised to be a candidate for employee of the year—eager, efficient, and practically free. They handle research and assist other employees in your company with nary a complaint. Everything seems perfect. Or at least that’s how it seems.

Then, the cracks start to show. They give customers false information with mind-boggling confidence. They perform tasks that are beyond their job scope. Worst of all, they spill company secrets to anyone who asks the right questions. By the time you realize the damage, it’s too late. Would you ever trust someone like this in your business?

See also  Galaxy S24 Series Now Getting One UI 7 Beta 4 With Several Fixes

Now, replace that intern with DeepSeek. AppSOC’s report reveals that DeepSeek is just as reckless as this intern from hell. It is easily tricked into leaking sensitive data, generating malware, and disregarding ethical safeguards. It’s an AI that doesn’t just hallucinate—it exposes your business to very real risks with very real consequences.

So the million-dollar question is: If you wouldn’t trust a liability like that in your office, why let it into your systems?

DeepSeek? Hah! More like deep-ly flawed

deepseek security failure AH

Now that you understand DeepSeek’s security flaws better, let’s look at the more technical side of AppSOC’s report and why you should be worried.

You’ve probably heard of jailbreaking when it comes to smartphones. If you think that jailbreaking a phone is risky, in AI, it becomes outright dangerous. Jailbreaking tricks an AI into ignoring its own safety rules, allowing it to generate content it shouldn’t. DeepSeek failed this test 91% of the time. That means it can be manipulated into saying or doing just about anything with the right prompt.

Ever heard of a prompt injection attack? Hackers use cleverly worded inputs to trick the AI into revealing hidden information or performing unauthorized actions. DeepSeek failed this test 86% of the time. This means that if an attacker knows what they’re doing, they can trick DeepSeek into leaking sensitive data, bypassing restrictions, or even executing tasks it should never allow. In fact, researchers at Wallarm managed to trick DeepSeek into exposing its own internal system.

This is where things get truly disturbing. DeepSeek failed 93% of the time when tested for malware generation. That means it’s worryingly effective at helping users create harmful scripts, viruses, and exploits. This isn’t just a flaw—it’s a script kiddie’s dream come true! Now just about anyone and their grandma can create malware on the fly.

AI models are supposed to have safeguards against generating offensive, discriminatory, or harmful content. DeepSeek failed these safeguards 68% of the time. This means attackers can easily manipulate it to produce toxic, offensive, or outright unethical content. Would you trust an AI like this in your business? Imagine the harm it could cause to your business’s reputation!

DeepSeek also struggles with what’s known as hallucinations—a fancy way of saying it fabricates information. AppSOC’s tests revealed an 81% failure rate in this area. If you ask DeepSeek for information, only to find out later it completely made something up, could you continue to trust what it says?

The real kicker? 72% failure in supply chain security. No one knows where its data comes from, and that’s a massive red flag if we ever saw one. If we don’t know where DeepSeek gets its information to train itself, how can we trust it? Try citing “a guy on the internet” in your next research paper and see how well that’s received.

See also  Acer Introduces Predator Helio Neo Laptops With RTX 5070 Ti GPUs

According to former NSA hacker, Jake Williams, he points out that this is a fundamental difference between open-source AI and open-source code. “It’s important to remember that open-source AI (e.g., DeepSeek’s R1) means something foundationally different than open-source code. With open-source code, we can audit the code and identify vulnerabilities. With open-source AI, we can do no such thing. There are also very real supply-chain concerns, R1 is fairly easy to jailbreak and it has far fewer guardrails than other commercial models.”

The implications for your business

DeepSeek is an attractive AI option, especially for individuals or SMEs who might not have the budget for more expensive AI models. It’s low-cost, open-source, and performs almost as well as big-name competitors like OpenAI’s ChatGPT and Google’s Gemini. But before your business rushes to adopt it, you need to ask yourself one critical question: Is it really worth the risk?

The answer, based on the report, is a hard no. DeepSeek is a ticking time bomb riddled with legal, security, and financial liabilities waiting to explode.

One of the biggest red flags is the issue of legal responsibility. Unlike OpenAI, Microsoft, and Google, which offer legal protection (up to a certain point) through their Terms & Conditions, DeepSeek does not indemnify its users. That means if something goes wrong—if it leaks sensitive data, generates offensive content, creates malware, or violates regulations—you are on the hook, not DeepSeek.

The high hallucination failure rates also make DeepSeek unreliable in situations that call for factual accuracy. This includes financial analysis, legal guidance, or medical applications. The failure rate in supply chain risks also raises concerns about data integrity. If businesses don’t know where DeepSeek’s training data comes from, should they really trust it? This could mean lawsuits, fines, and a PR disaster. Are you willing to take those on?

The legal and regulatory landscape surrounding DeepSeek is also worth taking note of. Some countries and governments have already banned or restricted its use. Whether this is driven by politics or genuine security concerns doesn’t really matter—the point is that these security flaws make DeepSeek’s future uncertain. If your company decides to build its operations around DeepSeek, what happens if your country’s government decides to block it? What if there are new regulatory laws that make its use illegal or heavily restricted?

This means that if your business relies on DeepSeek today, it could very well be forced to abandon it tomorrow. This will lead to disruptions and wasted resources that could cost you quite a bit.

Andrew Hoog, a security expert at NowSecure, also found security flaws in DeepSeek’s iOS app which doesn’t encrypt transmitted data. To make it worse, it stores data insecurely, opening the door to credential theft. Speaking to Brian Krebs at KrebsOnSecurity, Hoog put it bluntly, “When we see people exhibit really simplistic coding errors, as you dig deeper, there are usually a lot more issues. There is virtually no priority around security or privacy.”

The China connection

As if this weren’t concerning enough, reports indicate that DeepSeek may contain hidden code that sends user data back to China. Ivan Tsarynny, CEO of Feroot Security, has warned, “Our personal information is being sent to China, there is no denial, and the DeepSeek tool is collecting everything that American users connect to it.”

Security firms have uncovered direct links between DeepSeek and Chinese government servers. This raises the question: Is someone monitoring user data? Could bad actors siphon off proprietary information? Companies using DeepSeek risk becoming an unintentional treasure trove of data that feeds sensitive information to foreign entities.

See also  New IPads And IPhone SE Could Launch In April

Adrianus Warmenhoven, a cybersecurity expert at NordVPN, points us towards DeepSeek’s policy on data collection, “This raises concerns because of data collection outlined — ranging from user-shared information to data from external sources — which falls under the potential risks associated with storing such data in a jurisdiction with different privacy and security standards.”

It’s also no surprise that DeepSeek is shaped by China’s rules and laws surrounding content. Investigators have found that DeepSeek censors politically sensitive topics and generates responses aligned with Chinese state narratives. This New York Times report cites multiple researchers who found that DeepSeek isn’t just a potential security risk; it might be a tool for propaganda.

Researchers found that 80% of the time, DeepSeek’s answers mirrored China’s official views on certain topics. When asked politically taboo questions in China, it declined to respond, avoided the topic, and deflected its answers.

Conclusion

AI should serve as an asset, not a liability, like any other tool. While DeepSeek offers cutting-edge capabilities at a fraction of the price compared the its competitors, the real cost, such as data exposure, compliance risks, and geopolitical entanglements, could be far greater.

Perhaps the most unsettling part of all of this is that DeepSeek’s security flaws aren’t potential scenarios. A simple software update won’t fix these failures. These are core issues that make DeepSeek a very real liability for businesses that choose to use it. We’re talking about an AI model that can be tricked, exploited, manipulated, and potentially used for cybercrime, exposing businesses to very serious real-life legal and financial consequences.

So, before you decide to integrate DeepSeek into your business operations, you need to weigh the risks against the rewards. We’re not just talking about choosing an AI model like you would choose a vendor for your office’s printer paper here, it’s about deciding whether or not to gamble with your company’s security, reputation, and future. At the end of the day, DeepSeek may save you a ton of money compared to other AI models, but its true cost could be far greater. After all, they say there is no such thing as a free lunch.