Artificial Intelligence

Gen AI makes no financial difference in 95% of cases

Stocks in US AI technology companies fell in value at the close of trading yesterday, with the NASDAQ Composite index down 1.4%. Among those losing value were Palantir, down 9.4%, and Arm Holdings, down 5%. According to the Financial Times [paywall], Tuesday saw the biggest one-day fall in the market since the beginning of August.

Some traders attributed the falls to a report [PDF] released by an AI company, NANDA, which noted the high failure rate of many generative AI projects in commercial organisations. Project NANDA originated at the Massachusetts Institute of Technology Media Lab and describes itself as an organisation that's building an "agentic web." The paper has since been placed behind a survey wall, but is available for download from this site.

The report's authors state that only 5% of gen AI pilots reach production and produce measurable monetary value, with the vast majority of projects creating little impact on profit-and-loss metrics. NANDA's research comprised 52 structured interviews with enterprise decision-makers, researchers' analysis of more than 300 public AI initiatives and announcements, and a survey questionnaire completed by 153 company leaders. It measured return on investment over the six months after gen AI projects left pilot status.

While many organisations deploy AI in front-office or customer-facing business functions, successful projects tend to be found among back-office workflows, the paper says. It's in the mundane tasks of the back office where savings accrue, largely from a reduced need for third-party agencies and business process outsourcers (BPOs). The survey found that AI projects had little impact on overall internal staffing levels.

While 90% of staff stated they have personally benefited from using publicly available AIs, typically large language models like ChatGPT, those subjective gains do not translate to the institutional level. Around 40% of the companies surveyed pay for an LLM subscription.

Many owners of failed projects cited the lack of contextual awareness exhibited by generative AI models – that is, the ability to adapt to circumstances, change over time, and remember previous enquiries. NANDA states that partnering with an organisation that can supply such a system, and ensure it adapts to an organisation's specific circumstances, is the critical element for success. The paper highlights several quotes "derived from interviews," with 60-70% of respondents agreeing with statements such as "[The AI system] doesn't learn from our feedback" and "Too much manual context required each time."

The vertical most positively affected by gen AI was media & telecom, followed by professional services, healthcare & pharma, consumer & retail, and financial services. The energy & materials sector's rate of generative AI project launch is currently negligible, the paper says. In terms of business units, sales & marketing is where most projects are or were based, with finance & procurement least popular as a place to begin AI projects.

Complex tasks are those least likely to be entrusted to AI: managers said they would assign work like client management to an AI only 10% of the time, while tasks like summarising a report or writing an email would still go to a human on 70% of occasions.

The language of the published report and its lack of academic rigour suggest that its provenance and purpose are closer to marketing than to intellectual and technological discussion. The paper's authors urge strategic partnership with a knowledgeable vendor to increase generative AI projects' chances of success – a partnership which NANDA is, purely coincidentally, able to form one half of. There are "unprecedented opportunities for vendors who can deliver learning-capable, deeply integrated AI systems," the paper's conclusions state.

The headlines from the NANDA report make for sobering reading among decision-makers tasked with generative AI implementations, yet the paper's underlying messages are weakened by the intentions behind its publication. This week's stock prices could have been moved by a partisan survey from authors with obvious skin in the game, but it seems more likely that the reaction to the NANDA publication simply reflects trading floors' existing concerns about generative AI's practical effectiveness as a business tool.

(Image source: “Arthur Daley” by Tim Dennell is licensed under CC BY-NC-ND 2.0.)


AI hacking tool exploits zero-day security vulnerabilities in minutes

A new AI tool – built to help companies find and fix their own security weaknesses – has been snatched up by cybercriminals, turned on its head, and used as a devastating hacking weapon exploiting zero-day vulnerabilities.

According to a report from cybersecurity firm Check Point, the framework – called Hexstrike-AI – is the turning point that security experts have been dreading, where the sheer power of AI is put directly into the hands of those who want to do harm.

A tool for good, twisted for bad

Hexstrike-AI was supposed to be one of the good guys. Its creators described it as a "revolutionary AI-powered offensive security framework" designed to help security professionals think like hackers to better protect their organisations.

Think of it as an AI “brain” that acts as a conductor for a digital orchestra. It directs over 150 different specialised AI agents and security tools to test a company’s defences, find weaknesses like zero-day vulnerabilities, and report back.

The problem? What makes a tool great for defenders also makes it incredibly attractive to attackers. Almost immediately after its release, chatter on the dark web lit up. Malicious actors weren’t just discussing the tool; they were actively figuring out how to weaponise it.

The race against zero-day vulnerabilities just got shorter

The timing for this AI hacking tool couldn't have been worse. Just as Hexstrike-AI appeared, Citrix announced three major "zero-day" vulnerabilities in its popular NetScaler products. A zero-day is a flaw so new that there have been zero days to create a patch for it, leaving companies completely exposed.

Normally, exploiting such complex flaws requires a team of highly skilled hackers and days, if not weeks, of work. With Hexstrike-AI, that process has been reduced to less than 10 minutes.

The AI brain does all the heavy lifting. An attacker can give it a simple command like “exploit NetScaler,” and the system automatically figures out the best tools to use and the precise steps to take. It democratises hacking by turning it into a simple, automated process.

As one cybercriminal boasted on an underground forum: “Watching how everything works without my participation is just a song. I’m no longer a coder-worker, but an operator.”

What these new AI hacking tools mean for enterprise security

This isn’t just a problem for big corporations. The speed and scale of these new AI-powered attacks mean that the window for businesses to protect themselves from zero-day vulnerabilities is shrinking dramatically.

Check Point is urging organisations to take immediate action:

  • Get patched: The first and most obvious step is to apply the fixes released by Citrix for the NetScaler vulnerabilities.
  • Fight fire with fire: It’s time to adopt AI-driven defence systems that can detect and respond to threats at machine speed, because humans can no longer keep up.
  • Speed up defences: The days of taking weeks to apply a security patch are over.
  • Listen to the whispers: Monitoring dark web chatter is no longer optional; it’s a source of intelligence that can give you a much-needed head start on the next attack.

What once felt like a theoretical threat is now a very real and present danger. With AI now very much an actively weaponised hacking tool for exploiting zero-day vulnerabilities, the game has changed, and our approach to security has to change with it.

See also: AI security wars: Can Google Cloud defend against tomorrow’s threats?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events; click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


Microsoft gives free Copilot AI services to US government workers

Millions of US federal government workers are about to get a new AI assistant on their devices for free in the form of Microsoft Copilot. The move is part of a deal between Microsoft and the US General Services Administration (GSA) that’s also expected to save taxpayers $3.1 billion in its first year.

The centrepiece of this huge new agreement is a full year of Microsoft 365 Copilot at no extra cost for government workers using the high-security G5 licence. This is a push to get the latest AI tools into the hands of public servants quickly and safely, aiming to improve how the government operates.

Microsoft pushes the US government into the AI era

This deal aims to place the US government at the forefront of AI adoption. It’s a direct response to the administration’s AI Action Plan, designed to bring the power of modern artificial intelligence to everything from managing citizen enquiries to analysing complex data.

“OneGov represents a paradigm shift in federal procurement that is leading to immense cost savings, achieved by leveraging the purchasing power of the entire federal government,” explained FAS Commissioner Josh Gruenbaum.

The free Copilot offer is specifically for users on the Microsoft 365 G5 plan, the premium tier for departments that handle sensitive information and require the tightest security protocols. But the benefits extend further, with the deal helping agencies to use AI for automating routine tasks, freeing up people to focus on the work that matters most.

The agreement also makes it cheaper and easier for different departments to modernise their technology. By offering big discounts on Azure cloud services and getting rid of data transfer fees, it tackles a major headache that has often slowed down collaboration between agencies.

Security is not an afterthought

Of course, giving AI access to government systems raises immediate security questions. The deal addresses this head-on, with Microsoft emphasising that its core cloud and AI services have already passed FedRAMP High security authorisation, a critical standard for handling sensitive government data.

While the full FedRAMP High certification for Copilot itself is expected soon, it has already been given a provisional green light by the Department of Defense. The package also includes advanced security tools like Microsoft Sentinel and Entra ID to support the government’s “zero trust” security goal.

GSA Deputy Administrator Stephen Ehikian strongly encouraged government agencies to take advantage of the new tools.

“GSA is proud to partner with technology companies, like Microsoft, to advance AI adoption across the federal government, a key priority of the Trump Administration,” said Ehikian. “We urge our federal partners to leverage these agreements, providing government workers with transformative AI tools that streamline operations, cut costs, and enhance results.”

Helping government agencies to use AI effectively

Microsoft is also putting money into making sure the technology is actually used effectively. The company has committed an extra $20 million for support and training, including workshops to help agencies get the most out of the new tools and find other areas to reduce waste.

All told, the package is estimated to deliver more than $6 billion in value over the next three years.

“With this new agreement with the US General Services Administration, including a no-cost Microsoft 365 Copilot offer, we will help federal agencies use AI and digital technologies to improve citizen services, strengthen security, and save taxpayers more than $3 billion in the first year alone,” commented Satya Nadella, Chairman and CEO of Microsoft.

For the millions of people working within the US government, this agreement with Microsoft means that an AI-powered assistant is set to change their daily work.

See also: Marketing AI boom faces crisis of consumer trust



What Rollup News says about battling disinformation

Swarm Network, a platform developing decentralised protocols for AI agents, recently announced the successful results of its first Swarm, a tool (perhaps "organism" is the better term) built to tackle disinformation. Called Rollup News, the swarm is not an app, a software platform, or a centralised algorithm. It is a decentralised collection of AI agents that collaborate to solve a bigger problem: platforms like X allow viral claims of any kind, some made by incredibly influential people. How can we know what is true?

Currently, we try to solve this problem with equally loud opposing voices offering facts or expert opinions. But if those sources come from a political side you oppose, why should you trust them? After all, these are people with their own motivations, and two additional issues arise: facts presented by a single person can easily get caught up in "fake news" accusations, and misinformation presented as "fact" can be used to attack the ground truth.

Unfortunately, this isn’t just a current trend that will eventually lose its popularity and fade out. The more technology and access to varied news sources we have, the harder it becomes to not treat these sources equally. Some might be a traditional outlet that is legally liable if they falsify claims. Others might be a popular podcaster with an audience of millions, and whose fear-mongering ties nicely in with the products in their merch store. If it stopped at this, we could probably tell the truth from fiction. But it isn’t that simple. Official news channels have a history of spinning the news in their own bias, or ignoring other stories that are important to the public. On the other side, there are genuinely powerful influencers who seem to be hell bent on finding the truth and reporting it, no matter what side of the political spectrum it hits.

The world has become both confusing and dangerous, and the old "sticks and stones" saying has been proven false. After all, we have seen global elections swayed by disinformation, major policy shifts driven by false claims, and lives damaged and lost as the result of powerful people lying loudly and often enough to sway large groups of people into believing them, and convincing those same groups that any facts to the contrary are the actual "fake news."

Fixing fact checking

Given how challenging the disinformation industry is, and how slippery the truth is today, how can anyone hope to battle it? We have seen that people of all sides, realising that all news is skewed to some extent, will believe the sources that support their pre-existing beliefs.

A third-party source, backed by overwhelming evidence, is needed to arbitrate. The source should not have an opinion, its methods should be transparent, and everyone should be able to see the same thing. This is nearly impossible, but the Web3 industry has shown that these attributes are exactly what make it so powerful. Smart contracts handle billions in value daily, managing agreements between complete strangers anywhere on the globe. The information is validated and the decisions are transparent, then locked in via the blockchain. The model has moved trillions of dollars using these powerful, and neutral, tools.

Combine this trust with the other element Web3 excels in: decentralisation. Now attach another fast-emerging technology, the AI agent, which is easily built and designed to perform one task very well. This system is the centre of Swarm Network's model, and its first deployment is Rollup News. The growing population of AI agents – the swarm – is designed to work collectively to scour the corners of X, find claims from users, and test their validity using sources found in the information space. The results of these assessments are posted on the blockchain once validated by a large enough group of independent agents. Selective human participation helps to ensure that context and other subtle areas are handled well. The human element is also decentralised, preventing any particular viewpoint from asserting itself, and anyone who tries to present fiction as fact is expelled.

Rollup News has been operating for several months, with impressive results: more than 128,000 users have been onboarded, with over 5,000 rollup requests daily in July 2025. Over 3 million tweets were processed during that time – impressive in its own right, but given the designed scalability of Web3 and AI agents working together, it could prove a linchpin in the battle against disinformation.
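The validation step described above – locking in a verdict only once a large enough group of independent agents agrees – can be sketched as a simple quorum check. This is an illustrative model only, not Swarm Network's actual protocol: the `Assessment` type, the quorum size, and the agreement threshold are all assumptions made for the example.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Assessment:
    """One agent's judgement of a claim (hypothetical data shape)."""
    agent_id: str
    verdict: str  # e.g. "true", "false", or "unverifiable"

def reach_consensus(assessments, quorum=5, threshold=0.8):
    """Return the agreed verdict once enough independent agents concur,
    or None if there is not yet sufficient agreement."""
    # Keep only one vote per distinct agent, so duplicate submissions
    # from the same agent cannot inflate the tally.
    votes = {}
    for a in assessments:
        votes[a.agent_id] = a.verdict
    if len(votes) < quorum:
        return None  # not enough independent assessments yet
    tally = Counter(votes.values())
    verdict, count = tally.most_common(1)[0]
    if count / len(votes) >= threshold:
        return verdict  # in a system like Rollup News, this is the
                        # point at which the result would be recorded
    return None
```

In a real decentralised deployment the vote store and the final record would live on-chain rather than in a local dictionary, but the core idea – no single agent's opinion counts, only sufficient independent agreement – is the same.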

The start of something new?

Rollup News' success and Swarm Network's larger model teach us a few things about fixing today's problems. They demonstrate that Web3 and AI can combine to provide scalable solutions, and that small AI agents can effectively work together to solve giant challenges even without a centralised system. That decentralised environment, anchored by Web3, is the key to generating transparency and trust, and to allowing strangers anywhere in the world to work together. Finally, the tokenisation of such a system creates the incentives needed to attract more participants, fuelling its growth. As long as the system creates value, people will pay to use it, and those who help to validate and secure the decentralised network earn rewards. This type of truly free-market system can scale up or down with global demand faster than any traditional company. Swarm Network's founder, Yannick Myson, sums it up nicely: "Rollup News shows what's possible when AI agents, human insight, and blockchain converge. This isn't a prototype – it's working, and it's scaling."

We should pay close attention to these lessons, as they offer a great deal of insight. First, the "truth-tech" sector, which focuses on using technology to combat mis- and disinformation, now has a strong blueprint for combining blockchain and AI. Second, many other sectors need this level of global scaling and independent management, with untold value ready to be developed and launched.

Image source: Unsplash
