
Artificial Intelligence

DeepSeek: The Chinese startup challenging Silicon Valley

Chinese startup DeepSeek’s launch disrupted markets and sent shockwaves through Silicon Valley, challenging some of the fundamental assumptions about how artificial intelligence companies operate and scale.

In less than a couple of years, the Hangzhou-based newcomer has accomplished what many thought impossible: creating AI models that compete with industry giants while spending only a fraction of their competitors’ budgets on training models and running inference.

The impact at the time of the public launch was immediate and measurable. According to the South China Morning Post, major tech stocks, including Nvidia, Microsoft, and Meta, experienced significant declines as investors grappled with the implications of DeepSeek’s existence.

The startup’s free AI assistant application for iOS and Android, launched on January 10, quickly climbed to the top spot on Apple’s US App Store, displacing OpenAI’s ChatGPT and marking a historic first for a Chinese AI product in the American market.

What makes this particularly significant is DeepSeek’s technological approach. The Algorithmic Bridge reports the company has implemented several innovative solutions, including Multi-head Latent Attention (MLA) to reduce memory bottlenecks and Group Relative Policy Optimisation (GRPO) to streamline reinforcement learning.
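The core idea behind GRPO can be illustrated with a short sketch. This is a hypothetical illustration of the technique, not DeepSeek’s implementation; the function name and reward values are invented for the example. Where PPO-style reinforcement learning trains a separate value network to estimate a baseline, GRPO instead samples a group of responses per prompt and scores each one relative to the group’s own mean and spread, which removes the critic entirely:

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages: normalise each reward against the
    group's mean and standard deviation, so no learned value network
    (critic) is needed to provide a baseline."""
    rewards = np.asarray(rewards, dtype=float)
    std = rewards.std()
    if std == 0:
        # All responses scored identically: no learning signal.
        return np.zeros_like(rewards)
    return (rewards - rewards.mean()) / std

# For one prompt, sample a group of responses and score them
# (e.g. 1.0 for a correct answer, 0.0 for an incorrect one).
adv = grpo_advantages([0.0, 1.0, 0.5, 1.0])
# Responses scoring above the group mean receive positive advantage;
# those below it receive negative advantage.
```

The advantages then weight a clipped policy-gradient update much as in PPO, but the baseline comes for free from the group statistics rather than from a second model that must itself be trained, which is one way to cut the memory and compute budget.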

These advances allow DeepSeek to achieve comparable or superior results to US competitors while using significantly fewer resources. The company’s resource efficiency is striking: DeepSeek operates with fewer than 100,000 H100 GPUs, while Meta plans to deploy 1.3 million GPUs by late 2025.

The efficiency extends beyond hardware. The Algorithmic Bridge suggests that DeepSeek’s approach represents a tenfold improvement in resource utilisation when considering factors like development time and infrastructure costs.

However, the rapid rise into Western users’ consciousness wasn’t without challenges. The South China Morning Post reported that DeepSeek’s sudden popularity led to significant infrastructure stress, resulting in server crashes and cybersecurity concerns that forced temporary registration limits. The growing pains highlight the real-world challenges of scaling AI services, regardless of architectural efficiency.

The company’s commitment to open-source development and research transparency starkly contrasts the secretive approaches of major US tech companies. To many industry observers, open and locally-hosted AI may be the preferred deployment blueprint.

The company earned praise from prominent figures in the tech industry, including venture capitalist Marc Andreessen, who described DeepSeek’s developments as “one of the most amazing and impressive breakthroughs.”

The political implications of the events are significant. US President Donald Trump characterised DeepSeek’s emergence as a “wake-up call” for American industry, reflecting broader concerns about technological competition between the United States and China. He continues to battle Chinese competition in technology, imposing restrictive tariffs that have affected all corners of the globe.

However, the situation transcends simple national rivalry, representing a fundamental challenge to established thinking about AI development.

Looking ahead, several key questions remain. Can DeepSeek’s efficient approach scale to meet growing demand? Will established players adapt their strategies effectively in response? The Chinese company has demonstrated that algorithmic efficiency and open collaboration can rival raw computational power and secrecy as drivers of AI advancement.

The AI market disruption may ultimately benefit the entire field by forcing a re-evaluation of established practices and could potentially lead to more efficient, accessible AI development methods.

While DeepSeek’s achievements since springing into the public consciousness are remarkable, it’s important to note that major US tech companies have released advances of their own, and market volatility in the tech sector remains high.

What’s clear is that DeepSeek introduced a viable alternative to the capital-intensive approach that has dominated AI development. Whether this becomes the new industry standard or simply one of many successful strategies remains to be seen, but the company’s impact on the industry is already significant.

Photo by Markus Spiske

See also: DeepSeek restricts sign-ups amid ‘large-scale malicious attacks’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.



AI hacking tool exploits zero-day security vulnerabilities in minutes

A new AI tool – built to help companies find and fix their own security weaknesses – has been snatched up by cybercriminals, turned on its head, and used as a devastating hacking weapon exploiting zero-day vulnerabilities.

According to a report from cybersecurity firm Check Point, the framework – called Hexstrike-AI – is the turning point that security experts have been dreading, where the sheer power of AI is put directly into the hands of those who want to do harm.

A tool for good, twisted for bad

Hexstrike-AI was supposed to be one of the good guys. Its creators described it as a “revolutionary AI-powered offensive security framework” that was designed to help security professionals think like hackers to better protect their organisations.

Think of it as an AI “brain” that acts as a conductor for a digital orchestra. It directs over 150 different specialised AI agents and security tools to test a company’s defences, find weaknesses like zero-day vulnerabilities, and report back.
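The “conductor” pattern described above can be sketched in a few lines. This is an invented illustration of tool orchestration in general, not Hexstrike-AI’s actual code; the tool names, the registry, and the hard-coded plan are all assumptions for the example (a real system would have an AI planner infer the plan):

```python
from typing import Callable

# Registry of specialised tools the "conductor" can direct.
TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Decorator that registers a function as a named tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("port_scan")
def port_scan(target: str) -> str:
    # Stand-in for a real scanner; returns a simulated finding.
    return f"open ports on {target}: [simulated]"

@tool("report")
def report(findings: str) -> str:
    return f"REPORT: {findings}"

def orchestrate(objective: str, target: str) -> str:
    """Tiny stand-in for an AI planner: pick tools for the objective,
    run them in order, and feed each result into the next step."""
    plan = ["port_scan", "report"]  # a real planner would infer this
    result = target
    for step in plan:
        result = TOOLS[step](result)
    return result
```

The point of the pattern is that the operator supplies only a high-level objective; the planner chooses the tools and the sequencing, which is precisely what makes the same architecture useful to defenders and dangerous in the wrong hands.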

The problem? What makes a tool great for defenders also makes it incredibly attractive to attackers. Almost immediately after its release, chatter on the dark web lit up. Malicious actors weren’t just discussing the tool; they were actively figuring out how to weaponise it.

The race against zero-day vulnerabilities just got shorter

The timing for this AI hacking tool couldn’t have been worse. Just as Hexstrike-AI appeared, Citrix announced three major “zero-day” vulnerabilities in its popular NetScaler products. A zero-day is a flaw so new that there have been zero days to create a patch for it, leaving companies completely exposed.

Normally, exploiting such complex flaws requires a team of highly skilled hackers and days, if not weeks, of work. With Hexstrike-AI, that process has been reduced to less than 10 minutes.

The AI brain does all the heavy lifting. An attacker can give it a simple command like “exploit NetScaler,” and the system automatically figures out the best tools to use and the precise steps to take. It democratises hacking by turning it into a simple, automated process.

As one cybercriminal boasted on an underground forum: “Watching how everything works without my participation is just a song. I’m no longer a coder-worker, but an operator.”

What these new AI hacking tools mean for enterprise security

This isn’t just a problem for big corporations. The speed and scale of these new AI-powered attacks mean that the window for businesses to protect themselves from zero-day vulnerabilities is shrinking dramatically.

Check Point is urging organisations to take immediate action:

  • Get patched: The first and most obvious step is to apply the fixes released by Citrix for the NetScaler vulnerabilities.
  • Fight fire with fire: It’s time to adopt AI-driven defence systems that can detect and respond to threats at machine speed, because humans can no longer keep up.
  • Speed up defences: The days of taking weeks to apply a security patch are over.
  • Listen to the whispers: Monitoring dark web chatter is no longer optional; it’s a source of intelligence that can give you a much-needed head start on the next attack.

What once felt like a theoretical threat is now a very real and present danger. With AI now very much an actively weaponised hacking tool for exploiting zero-day vulnerabilities, the game has changed, and our approach to security has to change with it.

See also: AI security wars: Can Google Cloud defend against tomorrow’s threats?




Microsoft gives free Copilot AI services to US government workers

Millions of US federal government workers are about to get a new AI assistant on their devices for free in the form of Microsoft Copilot. The move is part of a deal between Microsoft and the US General Services Administration (GSA) that’s also expected to save taxpayers $3.1 billion in its first year.

The centrepiece of this huge new agreement is a full year of Microsoft 365 Copilot at no extra cost for government workers using the high-security G5 licence. This is a push to get the latest AI tools into the hands of public servants quickly and safely, aiming to improve how the government operates.

Microsoft pushes the US government into the AI era

This deal aims to place the US government at the forefront of AI adoption. It’s a direct response to the administration’s AI Action Plan, designed to bring the power of modern artificial intelligence to everything from managing citizen enquiries to analysing complex data.

“OneGov represents a paradigm shift in federal procurement that is leading to immense cost savings, achieved by leveraging the purchasing power of the entire federal government,” explained FAS Commissioner Josh Gruenbaum.

The free Copilot offer is specifically for users on the Microsoft 365 G5 plan, the premium tier for departments that handle sensitive information and require the tightest security protocols. But the benefits extend further, with the deal helping agencies to use AI for automating routine tasks, freeing up people to focus on the work that matters most.

The agreement also makes it cheaper and easier for different departments to modernise their technology. By offering big discounts on Azure cloud services and getting rid of data transfer fees, it tackles a major headache that has often slowed down collaboration between agencies.

Security is not an afterthought

Of course, giving AI access to government systems raises immediate security questions. The deal addresses this head-on, with Microsoft emphasising that its core cloud and AI services have already passed FedRAMP High security authorisation, a critical standard for handling sensitive government data.

While the full FedRAMP High certification for Copilot itself is expected soon, it has already been given a provisional green light by the Department of Defense. The package also includes advanced security tools like Microsoft Sentinel and Entra ID to support the government’s “zero trust” security goal.

GSA Deputy Administrator Stephen Ehikian strongly encouraged government agencies to take advantage of the new tools.

“GSA is proud to partner with technology companies, like Microsoft, to advance AI adoption across the federal government, a key priority of the Trump Administration,” said Ehikian. “We urge our federal partners to leverage these agreements, providing government workers with transformative AI tools that streamline operations, cut costs, and enhance results.”

Helping government agencies to use AI effectively

Microsoft is also putting money into making sure the technology is actually used effectively. The company has committed an extra $20 million for support and training, including workshops to help agencies get the most out of the new tools and find other areas to reduce waste.

All told, the package is estimated to deliver more than $6 billion in value over the next three years.

“With this new agreement with the US General Services Administration, including a no-cost Microsoft 365 Copilot offer, we will help federal agencies use AI and digital technologies to improve citizen services, strengthen security, and save taxpayers more than $3 billion in the first year alone,” commented Satya Nadella, Chairman and CEO of Microsoft.

For the millions of people working within the US government, this agreement with Microsoft means that an AI-powered assistant is set to change their daily work.

See also: Marketing AI boom faces crisis of consumer trust




What Rollup News says about battling disinformation

Swarm Network, a platform developing decentralised protocols for AI agents, recently announced the successful results of its first Swarm, a tool (perhaps “organism” is the better term) built to tackle disinformation. Called Rollup News, the swarm is not an app, a software platform, or a centralised algorithm. It is a decentralised collection of AI agents that collaborate to solve a bigger problem: platforms like X allow viral claims of any kind, some made by incredibly influential people, to spread unchecked. How can we know what is true?

Currently, we try to solve this problem through equally loud opposing voices offering facts or expert opinions. But if those sources come from a political side you oppose, why should you trust them? After all, these are people with their own motivations, and two additional issues arise: facts presented by a single person can easily get caught up in “fake news” accusations, and misinformation presented as “facts” can be used to attack the ground truth.

Unfortunately, this isn’t just a current trend that will eventually lose its popularity and fade out. The more technology and access to varied news sources we have, the harder it becomes not to treat those sources as equal. One might be a traditional outlet that is legally liable if it falsifies claims. Another might be a popular podcaster with an audience of millions, whose fear-mongering ties in nicely with the products in their merch store. If it stopped there, we could probably tell truth from fiction. But it isn’t that simple. Official news channels have a history of spinning the news with their own bias, or ignoring stories that are important to the public. On the other side, there are genuinely powerful influencers who seem hell-bent on finding the truth and reporting it, no matter what side of the political spectrum it hits.

The world has become both confusing and dangerous, and the old “sticks and stones” saying has been proven false. We have seen global elections swayed by disinformation, major policy shifts driven by false claims, and lives damaged and lost as the result of powerful people lying, but lying loudly enough and often enough to sway large groups into believing them, and convincing those same groups that any facts to the contrary are the actual “fake news”.

Fixing fact checking

Given how entrenched the disinformation industry is, and how slippery the truth has become, how can anyone hope to battle it? We have seen that people of all sides, realising that all news is skewed to some extent, will believe the sources that support their pre-existing beliefs.

A third-party source, backed by overwhelming evidence, is needed to arbitrate. The source should not have an opinion, its methods should be transparent, and everyone should be able to see the same thing. This is nearly impossible, but the Web3 industry has shown that these attributes are what make it incredibly powerful. Smart contracts handle billions in value daily, managing agreements between complete strangers anywhere on the globe. The information is validated and the decisions are transparent, then locked in via the blockchain. The model has moved trillions of dollars using these very powerful, and neutral, tools.

Combine this trust with the other element Web3 excels at: decentralisation. Now attach another fast-emerging technology, the AI agent, which is easily built and designed to perform one task very well. This system is the centre of Swarm Network’s model, and its first deployment is Rollup News.

The growing population of AI agents, the swarm, works collectively to scour the corners of X, find claims from users, and test their validity against sources found in the information space. The results of these assessments are posted on the blockchain once validated by a large enough group of independent agents. Selective human participation helps to ensure that context and other subtle areas are handled well. The human element is also decentralised, preventing any particular viewpoint from asserting itself, and anyone who tries to present fiction as fact is expelled.

Rollup News has been operating for several months, with impressive results: more than 128,000 users have been onboarded, with over 5,000 rollup requests daily in July 2025. Over 3 million tweets were processed during that time, which is impressive in its own right, but when you consider the designed scalability of Web3 and AI agents working together, it could prove pivotal in the battle against disinformation.
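The validation step described above, where a verdict is recorded only once a large enough group of independent agents agrees, can be sketched roughly as follows. Swarm Network has not published this logic; the function name, thresholds, and vote format here are invented purely for illustration:

```python
def quorum_verdict(votes, threshold=0.8, min_agents=5):
    """Accept a claim assessment only when enough independent agents
    agree. `votes` is a list of booleans: True means the agent found
    the claim supported by its sources, False means refuted.
    Returns "supported", "refuted", or None (no consensus yet)."""
    if len(votes) < min_agents:
        return None  # not enough independent assessments yet
    support = sum(votes) / len(votes)
    if support >= threshold:
        return "supported"
    if support <= 1 - threshold:
        return "refuted"
    return None  # split verdict: keep gathering assessments
```

Only once such a quorum is reached would the result be written to the chain, which is what makes the published verdict tamper-evident rather than the opinion of any single agent or operator.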

The start of something new?

Rollup News’ success and Swarm Network’s larger model teach us a few things about fixing today’s problems. They demonstrate that Web3 and AI can combine to deliver scalable solutions, and that small AI agents can work together effectively to solve giant challenges even without a centralised system. That decentralised environment, anchored by Web3, is the key to generating transparency and trust, and to allowing strangers anywhere in the world to work together. Finally, the tokenisation of such a system creates the incentives needed to attract more participants, fuelling its growth. As long as the system creates value, people will pay for its use, and those who help to validate and secure the decentralised network earn rewards. This type of truly free-market system can scale up or down with global demand faster than any traditional company.

Swarm Network’s founder, Yannick Myson, sums it up nicely: “Rollup News shows what’s possible when AI agents, human insight, and blockchain converge. This isn’t a prototype – it’s working, and it’s scaling.”

We should pay close attention to these lessons, as they offer a great deal of insight. First, the “truth-tech” sector, focused on using technology to combat misinformation and disinformation, now has a strong blueprint for combining blockchain and AI. Second, many other sectors need this level of global scaling and independent management, with untold value ready to be developed and launched.

Image source: Unsplash
