Artificial Intelligence

Rachel James, AbbVie: Harnessing AI for corporate cybersecurity

Cybersecurity is in the midst of a fresh arms race, and the powerful weapon of choice in this new era is AI.

AI offers a classic double-edged sword: a powerful shield for defenders and a potent new tool for those with malicious intent. Navigating this complex battleground requires a steady hand and a deep understanding of both the technology and the people who would abuse it.

To get a view from the front lines, AI News caught up with Rachel James, Principal AI ML Threat Intelligence Engineer at global biopharmaceutical company AbbVie.

“In addition to the built-in AI augmentation that has been vendor-provided in our current tools, we also use LLM analysis on our detections, observations, correlations and associated rules,” James explains.

James and her team are using large language models to sift through a mountain of security alerts, looking for patterns, spotting duplicates, and finding dangerous gaps in their defences before an attacker can.

“We use this to determine similarity, duplication and provide gap analysis,” she adds, noting that the next step is to weave in even more external threat data. “We are looking to enhance this with the integration of threat intelligence in our next phase.”
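James doesn’t detail the mechanics, but the similarity and duplication checks she describes can be illustrated with a toy token-overlap measure. This is a minimal stand-in for the LLM-based analysis, not AbbVie’s actual pipeline; the rule text and the 0.5 threshold are invented:

```python
# Toy sketch: flagging likely-duplicate detection rules via token overlap.
# The rule descriptions and threshold are illustrative assumptions.

def tokens(text: str) -> set:
    """Lower-cased word set for a rule description."""
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between two rule descriptions (0..1)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

rules = {
    "R1": "suspicious powershell encoded command execution",
    "R2": "powershell execution with encoded command detected",
    "R3": "outbound smb traffic to external host",
}

THRESHOLD = 0.5  # pairs at or above this score are duplicate candidates
names = sorted(rules)
duplicates = [
    (a, b, round(jaccard(rules[a], rules[b]), 2))
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if jaccard(rules[a], rules[b]) >= THRESHOLD
]
print(duplicates)  # R1 and R2 overlap heavily; R3 stands alone
```

A real deployment would use semantic embeddings rather than raw word overlap, but the shape of the workflow is the same: score every pair, then surface the candidates for an analyst to review.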

Central to this operation is a specialised threat intelligence platform called OpenCTI, which helps them build a unified picture of threats from a sea of digital noise.

AI is the engine that makes this cybersecurity effort possible, taking vast quantities of jumbled, unstructured text and neatly organising it into a standard format known as STIX. The grand vision, James says, is to use language models to connect this core intelligence with all other areas of their security operation, from vulnerability management to third-party risk.
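As a rough illustration of what “organising unstructured text into STIX” produces, here is a minimal sketch that packages one extracted observable as a STIX 2.1 indicator, using only the Python standard library. The alert name and IP address are invented, and a production pipeline (such as one feeding OpenCTI) would typically use the `stix2` library and an LLM or parser for the extraction step:

```python
# Hypothetical sketch: one extracted observable rendered as a minimal
# STIX 2.1 indicator object. Alert name and IP are invented examples.
import json
import uuid
from datetime import datetime, timezone

def make_indicator(name: str, ip: str) -> dict:
    """Build a minimal STIX 2.1 indicator for a single IPv4 observable."""
    now = datetime.now(timezone.utc).isoformat(
        timespec="milliseconds").replace("+00:00", "Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "pattern": f"[ipv4-addr:value = '{ip}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

indicator = make_indicator("C2 callback seen in alert triage", "203.0.113.7")
print(json.dumps(indicator, indent=2))
```

Once threat data is in this shared shape, the cross-referencing James describes (against vulnerability management, third-party risk, and so on) becomes a matter of matching structured objects rather than re-reading free text.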

Taking advantage of this power, however, comes with a healthy dose of caution. As a key contributor to a major industry initiative, James is acutely aware of the pitfalls.

“I would be remiss if I didn’t mention the work of a wonderful group of folks I am a part of – the ‘OWASP Top 10 for GenAI’ as a foundational way of understanding vulnerabilities that GenAI can introduce,” she says.

Beyond specific vulnerabilities, James points to three fundamental trade-offs that business leaders must confront:

  1. The risk that comes with the creative but often unpredictable nature of generative AI.
  2. The loss of transparency in how AI reaches its conclusions, a problem that grows as models become more complex.
  3. The danger of misjudging the real return on investment for any AI project, where hype can lead to overestimating the benefits or underestimating the effort required in such a fast-moving field.

To build a better cybersecurity posture in the AI era, you have to understand your attacker. This is where James’ deep expertise comes into play.

“This is actually my particular expertise – I have a cyber threat intelligence background and have conducted and documented extensive research into threat actors’ interest, use, and development of AI,” she notes.

James actively tracks adversary chatter and tool development through open-source channels and her own automated collections from the dark web, sharing her findings on her cybershujin GitHub. Her work also involves getting her own hands dirty.

“As the lead for the Prompt Injection entry for OWASP, and co-author of the Guide to Red Teaming GenAI, I also spend time developing adversarial input techniques myself and maintain a network of experts also in this field,” James adds.

So, what does this all mean for the future of the industry? For James, the path forward is clear. She points to a fascinating parallel she discovered years ago: “The cyber threat intelligence lifecycle is almost identical to the data science lifecycle foundational to AI ML systems.”

This alignment is a massive opportunity. “Without a doubt, in terms of the datasets we can operate with, defenders have a unique chance to capitalise on the power of intelligence data sharing and AI,” she asserts.

Her final message offers both encouragement and a warning for her peers in the cybersecurity world: “Data science and AI will be a part of every cybersecurity professional’s life moving forward, embrace it.”

Rachel James will be sharing her insights at this year’s AI & Big Data Expo Europe in Amsterdam on 24-25 September 2025. Be sure to check out her day two presentation on ‘From Principle to Practice – Embedding AI Ethics at Scale’.

See also: Google Cloud unveils AI ally for security teams

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.


AI hacking tool exploits zero-day security vulnerabilities in minutes

A new AI tool – built to help companies find and fix their own security weaknesses – has been snatched up by cybercriminals, turned on its head, and used as a devastating hacking weapon exploiting zero-day vulnerabilities.

According to a report from cybersecurity firm Check Point, the framework – called Hexstrike-AI – is the turning point that security experts have been dreading, where the sheer power of AI is put directly into the hands of those who want to do harm.

A tool for good, twisted for bad

Hexstrike-AI was supposed to be one of the good guys. Its creators described it as a “revolutionary AI-powered offensive security framework” designed to help security professionals think like hackers to better protect their organisations.

Think of it as an AI “brain” that acts as a conductor for a digital orchestra. It directs over 150 different specialised AI agents and security tools to test a company’s defences, find weaknesses like zero-day vulnerabilities, and report back.

The problem? What makes a tool great for defenders also makes it incredibly attractive to attackers. Almost immediately after its release, chatter on the dark web lit up. Malicious actors weren’t just discussing the tool; they were actively figuring out how to weaponise it.

The race against zero-day vulnerabilities just got shorter

The timing for this AI hacking tool couldn’t have been worse. Just as Hexstrike-AI appeared, Citrix announced three major “zero-day” vulnerabilities in its popular NetScaler products. A zero-day is a flaw so new that there have been zero days to create a patch for it, leaving companies completely exposed.

Normally, exploiting such complex flaws requires a team of highly skilled hackers and days, if not weeks, of work. With Hexstrike-AI, that process has been reduced to less than 10 minutes.

The AI brain does all the heavy lifting. An attacker can give it a simple command like “exploit NetScaler,” and the system automatically figures out the best tools to use and the precise steps to take. It democratises hacking by turning it into a simple, automated process.

As one cybercriminal boasted on an underground forum: “Watching how everything works without my participation is just a song. I’m no longer a coder-worker, but an operator.”

What these new AI hacking tools mean for enterprise security

This isn’t just a problem for big corporations. The speed and scale of these new AI-powered attacks mean that the window for businesses to protect themselves from zero-day vulnerabilities is shrinking dramatically.

Check Point is urging organisations to take immediate action:

  • Get patched: The first and most obvious step is to apply the fixes released by Citrix for the NetScaler vulnerabilities.
  • Fight fire with fire: It’s time to adopt AI-driven defence systems that can detect and respond to threats at machine speed, because humans can no longer keep up.
  • Speed up defences: The days of taking weeks to apply a security patch are over.
  • Listen to the whispers: Monitoring dark web chatter is no longer optional; it’s a source of intelligence that can give you a much-needed head start on the next attack.

What once felt like a theoretical threat is now a very real and present danger. With AI now very much an actively weaponised hacking tool for exploiting zero-day vulnerabilities, the game has changed, and our approach to security has to change with it.

See also: AI security wars: Can Google Cloud defend against tomorrow’s threats?


Microsoft gives free Copilot AI services to US government workers

Millions of US federal government workers are about to get a new AI assistant on their devices for free in the form of Microsoft Copilot. The move is part of a deal between Microsoft and the US General Services Administration (GSA) that’s also expected to save taxpayers $3.1 billion in its first year.

The centrepiece of this huge new agreement is a full year of Microsoft 365 Copilot at no extra cost for government workers using the high-security G5 licence. This is a push to get the latest AI tools into the hands of public servants quickly and safely, aiming to improve how the government operates.

Microsoft pushes the US government into the AI era

This deal aims to place the US government at the forefront of AI adoption. It’s a direct response to the administration’s AI Action Plan, designed to bring the power of modern artificial intelligence to everything from managing citizen enquiries to analysing complex data.

“OneGov represents a paradigm shift in federal procurement that is leading to immense cost savings, achieved by leveraging the purchasing power of the entire federal government,” explained FAS Commissioner Josh Gruenbaum.

The free Copilot offer is specifically for users on the Microsoft 365 G5 plan, the premium tier for departments that handle sensitive information and require the tightest security protocols. But the benefits extend further, with the deal helping agencies to use AI for automating routine tasks, freeing up people to focus on the work that matters most.

The agreement also makes it cheaper and easier for different departments to modernise their technology. By offering big discounts on Azure cloud services and getting rid of data transfer fees, it tackles a major headache that has often slowed down collaboration between agencies.

Security is not an afterthought

Of course, giving AI access to government systems raises immediate security questions. The deal addresses this head-on, with Microsoft emphasising that its core cloud and AI services have already passed FedRAMP High security authorisation, a critical standard for handling sensitive government data.

While the full FedRAMP High certification for Copilot itself is expected soon, it has already been given a provisional green light by the Department of Defense. The package also includes advanced security tools like Microsoft Sentinel and Entra ID to support the government’s “zero trust” security goal.

GSA Deputy Administrator Stephen Ehikian strongly encouraged government agencies to take advantage of the new tools.

“GSA is proud to partner with technology companies, like Microsoft, to advance AI adoption across the federal government, a key priority of the Trump Administration,” said Ehikian. “We urge our federal partners to leverage these agreements, providing government workers with transformative AI tools that streamline operations, cut costs, and enhance results.”

Helping government agencies to use AI effectively

Microsoft is also putting money into making sure the technology is actually used effectively. The company has committed an extra $20 million for support and training, including workshops to help agencies get the most out of the new tools and find other areas to reduce waste.

All told, the package is estimated to deliver more than $6 billion in value over the next three years.

“With this new agreement with the US General Services Administration, including a no-cost Microsoft 365 Copilot offer, we will help federal agencies use AI and digital technologies to improve citizen services, strengthen security, and save taxpayers more than $3 billion in the first year alone,” commented Satya Nadella, Chairman and CEO of Microsoft.

For the millions of people working within the US government, this agreement with Microsoft means that an AI-powered assistant is set to change their daily work.

See also: Marketing AI boom faces crisis of consumer trust


What Rollup News says about battling disinformation

Swarm Network, a platform developing decentralised protocols for AI agents, recently announced the successful results of its first Swarm, a tool (perhaps “organism” is the better term) built to tackle disinformation. Called Rollup News, the swarm is not an app, a software platform, or a centralised algorithm. It is a decentralised collection of AI agents that collaborate to solve a bigger problem: platforms like X allow viral claims of any kind, some from incredibly influential people. How can we know what is true?

Currently, we try to solve this problem through equally loud opposing voices offering facts or expert opinions. But if those sources come from a political side you oppose, why should you trust them? After all, these are people with their own motivations, and two further issues arise: facts presented by a single person can easily get caught up in “fake news” accusations, and misinformation presented as “fact” can be used to attack the ground truth.

Unfortunately, this isn’t just a current trend that will eventually fade out. The more technology and access to varied news sources we have, the harder it becomes not to treat those sources as equals. One might be a traditional outlet that is legally liable if it falsifies claims. Another might be a popular podcaster with an audience of millions, whose fear-mongering ties in nicely with the products in their merch store. If it stopped there, we could probably tell truth from fiction. But it isn’t that simple. Official news channels have a history of spinning stories to fit their own biases, or ignoring stories that matter to the public. On the other side, there are genuinely powerful influencers who seem hell-bent on finding the truth and reporting it, no matter which side of the political spectrum it hits.

The world has become both confusing and dangerous, and the old “sticks and stones” saying has been proven false. We have seen global elections swayed by disinformation, major policy shifts driven by false claims, and lives damaged and lost because powerful people lied loudly enough and often enough to sway large groups into believing them, and to convince those same groups that any facts to the contrary are the real “fake news.”

Fixing fact checking

Given how entrenched the disinformation industry is, and how slippery the truth has become, how can anyone hope to fight it? We have seen that people on all sides, realising that all news is skewed to some extent, will believe the sources that support their pre-existing beliefs.

A third-party source, backed by overwhelming evidence, is needed to arbitrate. The source should not have an opinion, its methods should be transparent, and everyone should be able to see the same thing. This sounds nearly impossible, but the Web3 industry has shown that exactly these attributes can be engineered. Smart contracts handle billions in value daily, managing agreements between complete strangers anywhere on the globe. The information is validated, the decisions are transparent, and the results are locked in via the blockchain. The model has moved trillions of dollars using these powerful, neutral tools.

Combine this trust with the other element Web3 excels at: decentralisation. Then attach another fast-emerging technology, the AI agent, which is easily built and designed to perform one task very well. This system is the centre of Swarm Network’s model, and its first deployment is Rollup News. The growing population of AI agents, the swarm, works collectively to scour the corners of X, find claims from users, and test their validity against sources found in the information space. The resulting assessments are posted on the blockchain once validated by a large enough group of independent agents. Selective human participation helps ensure that context and other subtle judgements are handled well. The human element is also decentralised, preventing any particular viewpoint from asserting itself, and anyone who tries to present fiction as fact is expelled.

Rollup News has been operating for several months, with impressive results: more than 128,000 users have been onboarded, with over 5,000 rollup requests daily in July 2025. Over 3 million tweets were processed during that time, which is notable in its own right, and combined with the designed scalability of Web3 and AI agents working together, it suggests a genuinely scalable weapon against disinformation.
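The quorum-then-record pattern, where a claim is only written down once enough independent agents agree, can be sketched in a few lines. This is an illustrative toy, not Rollup News’ actual consensus mechanism or on-chain format; the quorum size, the claim text, and the record fields are all assumptions:

```python
# Toy sketch of quorum-then-record: tally independent agent verdicts,
# and append only quorum-backed results to a hash-chained log.
# Quorum size, claim text, and record fields are illustrative assumptions.
import hashlib
import json

QUORUM = 3  # minimum number of agreeing agents before a verdict is recorded

def tally(verdicts):
    """Return 'true'/'false' if either side reaches quorum, else None."""
    if verdicts.count(True) >= QUORUM:
        return "true"
    if verdicts.count(False) >= QUORUM:
        return "false"
    return None

def append_record(chain, claim, verdict):
    """Append a verdict, linking it to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"claim": claim, "verdict": verdict, "prev": prev},
                      sort_keys=True)
    chain.append({"claim": claim, "verdict": verdict, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

chain = []
result = tally([True, True, True, False])  # three agents agree: quorum met
if result:
    append_record(chain, "Example viral claim from X", result)
print(len(chain), chain[0]["verdict"])
```

The hash chaining stands in for the blockchain anchoring described above: each record commits to its predecessor, so a tampered verdict breaks every later hash.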

The start of something new?

Rollup News’ success and Swarm Network’s larger model teach us a few things about fixing today’s problems. It demonstrates that Web3 and AI can be combined into scalable solutions, and that small AI agents can work together effectively to tackle giant challenges even without a centralised system. That decentralised environment, anchored by Web3, is the key to generating transparency and trust, and to allowing strangers anywhere in the world to work together. Finally, the tokenisation of such a system creates the incentives needed to attract more participants, fuelling its growth. As long as the system creates value, people will pay to use it, and those who help validate and secure the decentralised network earn rewards. This type of truly free-market system can scale up or down with global demand faster than any traditional company. Swarm Network’s founder, Yannick Myson, sums it up nicely: “Rollup News shows what’s possible when AI agents, human insight, and blockchain converge. This isn’t a prototype – it’s working, and it’s scaling.”

We should pay close attention to these lessons. First, the “truth-tech” sector, focused on using technology to combat mis- and disinformation, now has a strong blueprint for combining blockchain and AI. Second, many other sectors need this level of global scale and independent management, with untold value ready to be unlocked.

