Artificial Intelligence
Why security chiefs demand urgent regulation of AI like DeepSeek

Anxiety is growing among Chief Information Security Officers (CISOs) in security operations centres, particularly around the Chinese AI firm DeepSeek.
AI was heralded as a new dawn for business efficiency and innovation, but for the people on the front lines of corporate defence, it’s casting some very long and dark shadows.
Four in five (81%) UK CISOs believe the Chinese AI chatbot requires urgent regulation from the government. They fear that without swift intervention, the tool could become the catalyst for a full-scale national cyber crisis.
This isn’t speculative unease; it’s a direct response to a technology whose data handling practices and potential for misuse are raising alarm bells at the highest levels of enterprise security.
The findings, commissioned by Absolute Security for its UK Resilience Risk Index Report, are based on a poll of 250 CISOs at large UK organisations. The data suggests that the theoretical threat of AI has now landed firmly on the CISO’s desk, and their reactions have been decisive.
In what would have been almost unthinkable a couple of years ago, over a third (34%) of these security leaders have already implemented outright bans on AI tools due to cybersecurity concerns. A similar number, 30 percent, have pulled the plug on specific AI deployments within their organisations.
This retreat is not a sign of Luddism but a pragmatic response to an escalating problem. Businesses are already facing complex and hostile threats, as evidenced by high-profile incidents like the recent Harrods breach. CISOs are struggling to keep pace, and the addition of sophisticated AI tools into the attacker’s arsenal is a challenge many feel ill-equipped to handle.
A growing security readiness gap for AI platforms like DeepSeek
The core of the issue with platforms like DeepSeek lies in their potential to expose sensitive corporate data and be weaponised by cybercriminals.
Three out of five (60%) CISOs predict a direct increase in cyberattacks as a result of DeepSeek’s proliferation. An identical proportion reports that the technology is already complicating their privacy and governance frameworks, making an already difficult job almost impossible.
This has prompted a shift in perspective. Once viewed as a potential silver bullet for cybersecurity, AI is now seen by a growing number of professionals as part of the problem. The survey reveals that 42 percent of CISOs now consider AI to be a bigger threat than a help to their defensive efforts.
Andy Ward, SVP International of Absolute Security, said: “Our research highlights the significant risks posed by emerging AI tools like DeepSeek, which are rapidly reshaping the cyber threat landscape.
“As concerns grow over their potential to accelerate attacks and compromise sensitive data, organisations must act now to strengthen their cyber resilience and adapt security frameworks to keep pace with these AI-driven threats.
“That’s why four in five UK CISOs are urgently calling for government regulation. They’ve witnessed how quickly this technology is advancing and how easily it can outpace existing cybersecurity defences.”
Perhaps most worrying is the admission of unpreparedness. Almost half (46%) of the senior security leaders confess that their teams are not ready to manage the unique threats posed by AI-driven attacks. They are witnessing the development of tools like DeepSeek outpacing their defensive capabilities in real-time, creating a dangerous vulnerability gap that many believe can only be closed by national-level government intervention.
“These are not hypothetical risks,” Ward continued. “The fact that organisations are already banning AI tools outright and rethinking their security strategies in response to the risks posed by LLMs like DeepSeek demonstrates the urgency of the situation.
“Without a national regulatory framework – one that sets clear guidelines for how these tools are deployed, governed, and monitored – we risk widespread disruption across every sector of the UK economy.”
Businesses are investing to avert crisis with their AI adoption
Despite this defensive posture, businesses are not planning a full retreat from AI. The response is more a strategic pause than a permanent stop.
Businesses recognise the immense potential of AI and are actively investing to adopt it safely. In fact, 84 percent of organisations are making the hiring of AI specialists a priority for 2025.
This investment extends to the very top of the corporate ladder, with 80 percent of companies committing to AI training at the C-suite level. The strategy appears to be dual-pronged: upskill the workforce to understand and manage the technology, and bring in the specialised talent needed to navigate its complexities.
The hope – and it is a hope, if not a prayer – is that building a strong internal foundation of AI expertise can act as a counterbalance to the escalating external threats.
The message from the UK’s security leadership is clear: they do not want to block AI innovation, but to enable it to proceed safely. To do that, they require a stronger partnership with the government.
The path forward involves establishing clear rules of engagement, government oversight, a pipeline of skilled AI professionals, and a coherent national strategy for managing the potential security risks posed by DeepSeek and the next generation of powerful AI tools that will inevitably follow.
“The time for debate is over. We need immediate action, policy, and oversight to ensure AI remains a force for progress, not a catalyst for crisis,” Ward concludes.
See also: Alan Turing Institute: Humanities are key to the future of AI

Artificial Intelligence
AI hacking tool exploits zero-day security vulnerabilities in minutes

A new AI tool – built to help companies find and fix their own security weaknesses – has been snatched up by cybercriminals, turned on its head, and used as a devastating hacking weapon exploiting zero-day vulnerabilities.
According to a report from cybersecurity firm Check Point, the framework – called Hexstrike-AI – is the turning point that security experts have been dreading, where the sheer power of AI is put directly into the hands of those who want to do harm.
A tool for good, twisted for bad
Hexstrike-AI was supposed to be one of the good guys. Its creators described it as a “revolutionary AI-powered offensive security framework” that was designed to help security professionals think like hackers to better protect their organisations.
Think of it as an AI “brain” that acts as a conductor for a digital orchestra. It directs over 150 different specialised AI agents and security tools to test a company’s defences, find weaknesses like zero-day vulnerabilities, and report back.
The problem? What makes a tool great for defenders also makes it incredibly attractive to attackers. Almost immediately after its release, chatter on the dark web lit up. Malicious actors weren’t just discussing the tool; they were actively figuring out how to weaponise it.
The race against zero-day vulnerabilities just got shorter
The timing for this AI hacking tool couldn’t have been worse. Just as Hexstrike-AI appeared, Citrix announced three major “zero-day” vulnerabilities in its popular NetScaler products. A zero-day is a flaw so new that defenders have had zero days to create a patch for it, leaving companies completely exposed.
Normally, exploiting such complex flaws requires a team of highly skilled hackers and days, if not weeks, of work. With Hexstrike-AI, that process has been reduced to less than 10 minutes.
The AI brain does all the heavy lifting. An attacker can give it a simple command like “exploit NetScaler,” and the system automatically figures out the best tools to use and the precise steps to take. It democratises hacking by turning it into a simple, automated process.
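Check Point has not published Hexstrike-AI’s internals, but the pattern it describes (an AI “brain” mapping one high-level command onto a toolbox of specialised utilities) is easy to picture. The following is a minimal, purely illustrative Python sketch of that dispatch pattern, not Hexstrike-AI’s actual code: every name in it (Orchestrator, Tool, the static plan table) is hypothetical, and the “tools” are harmless stubs.

```python
# Illustrative sketch of an AI orchestrator's dispatch loop -- hypothetical,
# not Hexstrike-AI's real implementation, and the tools are harmless stubs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # takes a target, returns a result summary

class Orchestrator:
    """Maps a high-level goal to a sequence of specialised tools."""

    def __init__(self, tools: list[Tool]):
        self.tools = {t.name: t for t in tools}

    def plan(self, goal: str) -> list[str]:
        # In a real system an AI model would produce this plan from the goal
        # and the tool descriptions; here we fake it with a static lookup.
        plans = {"scan netscaler": ["port_scan", "version_check", "report"]}
        return plans.get(goal.lower(), [])

    def execute(self, goal: str, target: str) -> list[str]:
        # Run each planned step in order and collect the tool outputs.
        results = []
        for step in self.plan(goal):
            tool = self.tools[step]
            results.append(f"{tool.name}: {tool.run(target)}")
        return results

# Usage: register stub tools, then issue a single high-level command.
stub = lambda target: f"checked {target}"
tools = [Tool(n, "stub tool", stub) for n in ("port_scan", "version_check", "report")]
print(Orchestrator(tools).execute("scan netscaler", "203.0.113.10"))
```

The point of the pattern is that the operator supplies only the goal; the orchestration layer supplies the expertise, which is exactly what collapses the skill barrier described above.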
As one cybercriminal boasted on an underground forum: “Watching how everything works without my participation is just a song. I’m no longer a coder-worker, but an operator.”
What these new AI hacking tools mean for enterprise security
This isn’t just a problem for big corporations. The speed and scale of these new AI-powered attacks mean that the window for businesses to protect themselves from zero-day vulnerabilities is shrinking dramatically.
Check Point is urging organisations to take immediate action:
- Get patched: The first and most obvious step is to apply the fixes released by Citrix for the NetScaler vulnerabilities.
- Fight fire with fire: It’s time to adopt AI-driven defence systems that can detect and respond to threats at machine speed, because humans can no longer keep up.
- Speed up defences: The days of taking weeks to apply a security patch are over.
- Listen to the whispers: Monitoring dark web chatter is no longer optional; it’s a source of intelligence that can give you a much-needed head start on the next attack.
What once felt like a theoretical threat is now a very real and present danger. With AI now very much an actively weaponised hacking tool for exploiting zero-day vulnerabilities, the game has changed, and our approach to security has to change with it.
See also: AI security wars: Can Google Cloud defend against tomorrow’s threats?
Artificial Intelligence
Microsoft gives free Copilot AI services to US government workers

Millions of US federal government workers are about to get a new AI assistant on their devices for free in the form of Microsoft Copilot. The move is part of a deal between Microsoft and the US General Services Administration (GSA) that’s also expected to save taxpayers $3.1 billion in its first year.
The centrepiece of this huge new agreement is a full year of Microsoft 365 Copilot at no extra cost for government workers using the high-security G5 licence. This is a push to get the latest AI tools into the hands of public servants quickly and safely, aiming to improve how the government operates.
Microsoft pushes the US government into the AI era
This deal aims to place the US government at the forefront of AI adoption. It’s a direct response to the administration’s AI Action Plan, designed to bring the power of modern artificial intelligence to everything from managing citizen enquiries to analysing complex data.
“OneGov represents a paradigm shift in federal procurement that is leading to immense cost savings, achieved by leveraging the purchasing power of the entire federal government,” explained Josh Gruenbaum, Commissioner of the GSA’s Federal Acquisition Service (FAS).
The free Copilot offer is specifically for users on the Microsoft 365 G5 plan, the premium tier for departments that handle sensitive information and require the tightest security protocols. But the benefits extend further, with the deal helping agencies to use AI for automating routine tasks, freeing up people to focus on the work that matters most.
The agreement also makes it cheaper and easier for different departments to modernise their technology. By offering big discounts on Azure cloud services and getting rid of data transfer fees, it tackles a major headache that has often slowed down collaboration between agencies.
Security is not an afterthought
Of course, giving AI access to government systems raises immediate security questions. The deal addresses this head-on, with Microsoft emphasising that its core cloud and AI services have already passed FedRAMP High security authorisation, a critical standard for handling sensitive government data.
While the full FedRAMP High certification for Copilot itself is expected soon, it has already been given a provisional green light by the Department of Defense. The package also includes advanced security tools like Microsoft Sentinel and Entra ID to support the government’s “zero trust” security goal.
GSA Deputy Administrator Stephen Ehikian strongly encouraged government agencies to take advantage of the new tools.
“GSA is proud to partner with technology companies, like Microsoft, to advance AI adoption across the federal government, a key priority of the Trump Administration,” said Ehikian. “We urge our federal partners to leverage these agreements, providing government workers with transformative AI tools that streamline operations, cut costs, and enhance results.”
Helping government agencies to use AI effectively
Microsoft is also putting money into making sure the technology is actually used effectively. The company has committed an extra $20 million for support and training, including workshops to help agencies get the most out of the new tools and find other areas to reduce waste.
All told, the package is estimated to deliver more than $6 billion in value over the next three years.
“With this new agreement with the US General Services Administration, including a no-cost Microsoft 365 Copilot offer, we will help federal agencies use AI and digital technologies to improve citizen services, strengthen security, and save taxpayers more than $3 billion in the first year alone,” commented Satya Nadella, Chairman and CEO of Microsoft.
For the millions of people working within the US government, this agreement with Microsoft means that an AI-powered assistant is set to change their daily work.
See also: Marketing AI boom faces crisis of consumer trust
Artificial Intelligence
What Rollup News says about battling disinformation

Swarm Network, a platform developing decentralised protocols for AI agents, recently announced the successful results of its first Swarm, a tool (perhaps “organism” is the better term) built to tackle disinformation. Called Rollup News, the swarm is not an app, a software platform, or a centralised algorithm. It is a decentralised collection of AI agents that collaborate to solve a bigger problem: platforms like X allow viral claims of any kind, some from incredibly influential people. How can we know what is true?
Currently, we try to solve this problem with equally loud opposing voices offering facts or expert opinions. But if those sources come from a political side you oppose, why should you trust them? After all, these are people with their own motivations, and two additional issues arise: facts presented by a single person can easily get caught up in “fake news” accusations, and misinformation presented as “fact” can be used to attack the ground truth.
Unfortunately, this isn’t just a passing trend that will eventually fade. The more technology and the more varied the news sources we have access to, the harder it becomes to avoid treating all of those sources as equal. One might be a traditional outlet that is legally liable if it falsifies claims. Another might be a popular podcaster with an audience of millions, whose fear-mongering ties in nicely with the products in their merch store. If it stopped there, we could probably tell truth from fiction. But it isn’t that simple. Official news channels have a history of spinning stories to fit their own bias, or of ignoring stories that matter to the public. On the other side, there are genuinely powerful influencers who seem hell-bent on finding the truth and reporting it, no matter which side of the political spectrum it hits.
The world has become both confusing and dangerous, and the old “sticks and stones” saying has been proven false. We have seen global elections swayed by disinformation, major policy shifts driven by false claims, and lives damaged and lost because powerful people lied, loudly enough and often enough to sway large groups into believing them, and to convince those same groups that any facts to the contrary are the real “fake news.”
Fixing fact checking
Given how entrenched the disinformation industry is, and how slippery the truth has become, how can anyone hope to battle it? We have seen that people on all sides, realising that all news is skewed to some extent, will believe the sources that support their pre-existing beliefs.
What is needed is a third-party source, backed by overwhelming evidence, to arbitrate. That source should have no opinion, its methods should be transparent, and everyone should be able to see the same thing. This sounds nearly impossible, but the Web3 industry has shown that these attributes are exactly what make it so powerful. Smart contracts handle billions in value daily, managing agreements between complete strangers anywhere on the globe. The information is validated, the decisions are transparent, and the results are locked in via the blockchain. The model has moved trillions of dollars using these powerful, and neutral, tools.
Combine this trust with the other element Web3 excels at: decentralisation. Now attach another fast-emerging technology, the AI agent, which is easily built and designed to perform one task very well. This combination sits at the centre of Swarm Network’s model, and its first deployment is Rollup News.

The growing population of AI agents, the swarm, is designed to work collectively: scour the corners of X, find claims from users, and test their validity against sources found in the information space. The results of these assessments are posted on the blockchain once validated by a large enough group of independent agents. Selective human participation helps to ensure that context and other subtleties are handled well. The human element is also decentralised, preventing any particular viewpoint from asserting itself, and anyone who tries to present fiction as fact is expelled.

Rollup News has been operating for several months, with astonishing results: more than 128,000 users have been onboarded, and the swarm handled over 5,000 rollup requests daily in July 2025, processing over 3 million tweets in that time. That is impressive in its own right, but given the designed scalability of Web3 and AI agents working together, it suggests this model could be the linchpin of the battle against disinformation.
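Swarm Network has not published its validation logic in this form, so the following Python sketch is only a rough illustration of the quorum idea described above: independent agents each return a verdict on a claim, and a result is committed only when a large enough share of them agree, with ambiguous cases escalated to human reviewers. Every name and threshold here is hypothetical.

```python
# Illustrative quorum-based claim validation -- a hypothetical sketch of the
# idea described above, not Swarm Network's published code.
import random

def agent_verdict(claim: str, agent_id: int) -> bool:
    """Stand-in for one independent AI agent's fact-check of a claim.
    A real agent would gather sources and weigh evidence; this stub just
    returns a pseudo-random vote per (claim, agent) pair."""
    rng = random.Random(hash((claim, agent_id)))
    return rng.random() > 0.3

def validate_claim(claim: str, n_agents: int = 9, quorum: float = 2 / 3) -> str:
    # Collect one independent verdict per agent, then check for consensus.
    votes = [agent_verdict(claim, i) for i in range(n_agents)]
    support = sum(votes) / n_agents
    if support >= quorum:
        return "VERIFIED"        # in the real system, committed on-chain
    if support <= 1 - quorum:
        return "REFUTED"
    return "ESCALATE_TO_HUMAN"   # ambiguous cases go to human reviewers

print(validate_claim("Example viral claim from X"))
```

The design choice doing the work here is independence plus a quorum: no single agent (or human) can push a verdict through, which is what makes the posted result credibly neutral.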
The start of something new?
Rollup News’ success and Swarm Network’s larger model teach us a few things about fixing today’s problems. They demonstrate that Web3 and AI can be combined into scalable solutions, and that small AI agents can work together effectively to tackle giant challenges, even with no centralised system. That decentralised environment, anchored by Web3, is the key to generating transparency and trust, and to allowing strangers anywhere in the world to work together. Finally, the tokenisation of such a system creates the incentives needed to attract more participants, fuelling its growth. As long as it creates value, people will pay to use it, and those who help validate and secure the decentralised network earn rewards. This type of truly free-market system can scale up or down with global demand faster than any traditional company. Swarm Network’s founder, Yannick Myson, sums it up nicely: “Rollup News shows what’s possible when AI agents, human insight, and blockchain converge. This isn’t a prototype – it’s working, and it’s scaling.”
We should pay close attention to these lessons, as they offer a great deal of insight. First, the “truth-tech” sector, which focuses on using technology to combat mis- and disinformation, now has a strong blueprint for combining blockchain and AI. Second, many other sectors need this level of global scaling and independent management, with untold value ready to be developed and launched.
Image source: Unsplash