Artificial Intelligence
UK urged to seize ‘once-in-20-years’ AI chip design opportunity

The Council for Science and Technology (CST) urges the UK to seize a “once-in-20-years opportunity” to build a world-class AI chip design industry, or risk becoming a nation that simply consumes, rather than creates, the technology that will define our future.
In a report published this week, the council argues the UK must get serious about designing its own AI chips. This isn’t just about economic growth; it’s about national security and sovereignty.
The market for specialised AI chips is exploding, set to grow by 30% every year and make up more than half of the entire global semiconductor industry by 2030. The question is, will the UK have a piece of that pie?
It’s about actual artificial intelligence
Let’s get one thing straight: this isn’t about trying to build giant, multi-billion-dollar manufacturing plants to compete with global titans.
The CST makes it clear that we often confuse chip design with chip manufacturing, and they are two completely different ball games. While a factory costs a fortune, designing a chip is a creative, knowledge-intensive process that plays to the UK’s strengths.
“There is a national tendency to conflate chip design (one of the fastest growing industries in the world) with chip manufacturing (one of the most expensive industries in the world),” the report points out.
The goal is ambitious but achievable: create the right conditions for UK companies to design 50 new AI chip products in the next five years. But to get there, we need to tackle some serious gaps in skills, funding, and strategy.
UK faces AI chip design skills gap
The biggest roadblock is that we simply don’t have enough people to do the work. The UK’s current chip industry is already short about 7,000 designers. To hit that target of 50 new AI chips, we’d need another 5,000 designers – bringing the total to 12,000 – in just five years.
Right now, we’re not even close to producing those numbers.
To fix this, the report urges the government to fund more university bursaries and fellowships to tempt students into the field. It also calls for a top-tier, nationally-recognised chip design course that can be rolled out across the country, getting more people skilled up, fast.
There’s also a golden opportunity in optoelectronics, the tech that uses light to transmit data, which is required for next-gen AI systems and an area where the UK already quite literally shines.
Realism and a coordinated plan
Of course, ambition needs to be matched by a smart, coordinated strategy. The CST report criticises the current siloed approach, in which different government departments – such as the Department for Science, Innovation and Technology (DSIT) and the Ministry of Defence – pursue their own plans despite sharing the same goals. They need to work together to spot opportunities for technology that serves both commercial and defence needs.
Industry experts agree that the focus on design is the right one, but they also caution that it won’t be a walk in the park.
Phillip Kaye, Co-Founder of Vespertec, puts it this way: “The UK might not be an AI superpower yet – but if we’re ever going to achieve that status, this would be the place to start. British-led semiconductor research has long been among the best in the world, so it makes sense for us to build on this existing advantage.”
However, he adds a reality check. “More and better semiconductors don’t immediately translate into a mature AI chip industry… Giants like NVIDIA still dominate in no small part because they’ve built these networks over decades.”
The report acknowledges this challenge, noting that UK startups need affordable access to the expensive design tools and licenses controlled by overseas giants. It suggests the government should step in and negotiate access on a national level, potentially as part of trade deals, to give our homegrown companies a fighting chance.
Without our own AI chip design industry, the UK faces a future where our critical infrastructure is powered by technology from a “single dominant supplier,” a situation the report calls “problematic for many reasons”.
But the feeling isn’t one of despair; it’s one of urgent opportunity. As Kaye concludes, with world-class companies like Arm still based here and momentum building, “there is reason to be genuinely hopeful about our place in the AI revolution.”
See also: DeepSeek reverts to Nvidia for R2 model after Huawei AI chip fails

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Agentic AI: Promise, scepticism, and its meaning for Southeast Asia

Agentic AI is being talked about as the next major wave of artificial intelligence, but what it means for enterprises is still being settled. Capgemini Research Institute estimates agentic AI could unlock as much as US$450 billion in economic value by 2028. Yet adoption is still limited: only 2% of organisations have scaled its use, and trust in AI agents is already starting to slip.
That tension – high potential but low deployment – is what Capgemini’s new research explores. Based on an April 2025 survey of 1,500 executives at large organisations in 14 countries, including Singapore, the report highlights trust and oversight as important factors in realising value. Nearly three-quarters of executives said the benefits of human involvement in AI workflows outweigh the costs. Nine out of ten described oversight as either positive or at least cost-neutral.
The message is clear: AI agents work best when paired with people, not left on autopilot.
Early steps, slow progress
Roughly a quarter of organisations have launched agentic AI pilots, while only 14% have moved into implementation. For the majority, deployment is still in the planning stage. The report describes this as a widening gap between intent and readiness, now one of the main barriers to capturing economic value.
The technology is not just theoretical – real-world applications are starting to emerge. One example is a personal shopping assistant that can search for items based on specific requests, generate product descriptions, answer questions, and place items in a cart using voice or text commands. While these tools typically stop short of completing financial transactions for security reasons, they already replicate many of the functions of a human assistant.
This raises bigger questions about the role of traditional websites. If AI can handle tasks like searching, comparing, and preparing purchases, will people still need to navigate online stores directly? For those who find busy websites overwhelming or difficult to navigate, an AI-driven interface may offer a simpler, more accessible option.
Defining agentic AI
To cut through the hype, AI News spoke with Jason Hardy, chief technology officer for artificial intelligence at Hitachi Vantara, about how enterprises in Asia-Pacific should think about the technology.
“Agentic AI is software that can decide, act, and refine its strategy on its own,” Hardy said. “Think of it as a team of domain experts that can learn from experience, coordinate tasks, and operate in real time. Generative AI creates content and is usually reactive to prompts. Agentic AI may use GenAI inside it, but its job is to pursue objectives and take action in dynamic environments.”
The distinction – between producing outputs and driving outcomes – captures the meaning of agentic AI for enterprise IT.
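Hardy's definition – software that can decide, act, and refine its strategy on its own – can be reduced to a simple loop. The toy objective and the step-size "strategy" below are illustrative assumptions, not any vendor's implementation; a minimal sketch:

```python
def run_agent(goal: int, state: int, max_steps: int = 50) -> int:
    """Toy agent loop: observe the gap to the objective, act on it, and
    refine the strategy (step size) when the current one would overshoot."""
    step = 4  # initial, coarse strategy
    for _ in range(max_steps):
        if state == goal:                  # objective reached
            break
        if abs(goal - state) < step:       # refine: coarse steps would overshoot
            step = 1
        state += step if state < goal else -step  # act on the environment
    return state

print(run_agent(goal=5, state=0))  # reaches the objective: 5
```

The point of the sketch is the contrast Hardy draws: a generative model returns one output per prompt, while an agent keeps observing, acting, and adjusting until the objective is met.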
Why adoption is accelerating
According to Hardy, adoption is being driven by scale and complexity. “Enterprises are drowning in complexity, risk, and scale. Agentic AI is catching on because it does more than analyse. It optimises storage and capacity on the fly, automates governance and compliance, anticipates failures before they occur, and responds to security threats in real time. That shift from ‘insight’ to ‘autonomous action’ is why adoption is accelerating,” he explained.
Capgemini’s research supports this. The study found that while confidence in agentic AI is uneven, early deployments are proving useful when the technology takes on routine but essential IT tasks.
Where value is emerging
Hardy pointed to IT operations as the strongest use case so far. “Automated data classification, proactive storage optimisation, and compliance reporting save teams hours each day, while predictive maintenance and real-time cybersecurity responses reduce downtime and risk,” he said.
The impact goes beyond efficiency. The capabilities mean systems can detect problems before they escalate, allocate resources more effectively, and contain security incidents more quickly. “Early users are already using agentic AI to remediate incidents proactively before they escalate, strengthening reliability and performance in hybrid environments,” Hardy added.
For now, IT remains the most practical starting point: deployment there offers measurable results and is central to how enterprises manage both costs and risk.
Southeast Asia’s starting point
For Southeast Asian organisations, Hardy said the first priority is getting the data right. “Agentic AI delivers value only when enterprise data is properly classified, secured, and governed,” he explained.
Infrastructure also matters: agentic AI requires systems that can support multi-agent orchestration, persistent memory, and dynamic resource allocation. Without this foundation, adoption will remain limited in scope.
Many enterprises may choose to begin with IT operations, where agentic AI can pre-empt outages and optimise performance before rolling out to wider business functions.
Reshaping core workflows
Hardy expects agentic AI to reshape workflows in IT, supply chain management, and customer service. “In IT operations, agentic AI can anticipate capacity needs, rebalance workloads, and reallocate resources in real time. It can also automate predictive maintenance, preventing hardware failures before they occur,” he said.
Cybersecurity is another area of promise. “In cybersecurity, agentic AI is able to detect anomalies, isolate affected systems, and trigger immutable backups in seconds, reducing response times and mitigating potential damage,” Hardy noted.
The capabilities are not limited to proof-of-concept trials. Early deployments already show how agentic AI can strengthen reliability and resilience in hybrid environments.
Skills and leadership
Adoption will also require new human skills. “Agentic AI will shift the human role from execution to oversight and orchestration,” Hardy said. Leaders will need to set boundaries and monitor autonomous systems, ensuring they stay within ethical and organisational limits.
For managers, the change means less focus on administrative tasks and more on mentoring, innovation, and strategy. HR teams will need to build governance skills like auditing readiness and create new structures for integrating agentic AI effectively.
The workforce impact will be uneven. The World Economic Forum predicts that AI could create 11 million jobs in Southeast Asia by 2030 and displace nine million. Women and Gen Z are expected to face the sharpest disruptions, with more than 70% of women and up to 76% of younger workers in roles vulnerable to AI.
This highlights the urgency of reskilling, and major investments are already underway, with Microsoft committing $1.7 billion in Indonesia and rolling out training programmes in Malaysia and the wider region. Hardy stressed that capacity building must be inclusive, rapid, and strategic.
What comes next
Looking three years ahead, Hardy believes many leaders will underestimate the pace of change. “The first wave of benefits is already visible in IT operations: agentic AI is automating tasks like data classification, storage optimisation, predictive maintenance, and cybersecurity response, freeing teams to focus on higher-level strategic work,” he said.
But the larger surprise may be at the economic and business model level. IDC projects AI and generative AI could add around US$120 billion to the GDP of the ASEAN-6 by 2027. Hardy sees the implications as broader and faster than many expect. “This suggests the impact will be much faster and more material than many leaders currently anticipate,” he said.
In Indonesia, more than 57% of job roles are expected to be augmented or disrupted by AI, a reminder that transformation will not be limited to IT. It will cut across how businesses are structured, how they manage risk, and how they create value.
Balancing autonomy with oversight
The Capgemini findings and Hardy’s insights converge on the same theme: agentic AI holds huge promise, but its meaning in practice depends on balancing autonomy with trust and human oversight.
The technology may help enterprises lower costs, improve reliability, and unlock new revenue streams. But without a focus on governance, reskilling, and infrastructure readiness, adoption risks stalling.
For Southeast Asia, the question is not whether agentic AI will take hold, but how quickly – and whether enterprises can balance autonomy with accountability as machines begin to take on more responsibility for business decisions.
(Photo by Igor Omilaev)
See also: Beyond acceleration: the rise of agentic AI
Marketing AI boom faces crisis of consumer trust

The vast majority (92%) of marketing professionals are using AI in their day-to-day operations, turning it from a buzzword into a workhorse.
According to SAP Emarsys – which took the pulse of over 10,000 consumers and 1,250 marketers – while businesses are seeing real benefits from AI, shoppers are becoming increasingly distrustful, especially when it comes to their personal data. This divide could easily unravel the personalised shopping experience that brands are working so hard to build.
The rush to bring AI into marketing has been fast and decisive. As Sara Richter, CMO at SAP Emarsys, puts it, “AI marketing is now fully in motion: it has transitioned from the theoretical to the practical as marketers welcome AI into their strategies and test possibilities.”
For businesses, the appeal is obvious. 71 percent of marketers say AI helps them launch campaigns faster, saving them over two hours on average for each one. This newfound efficiency is doing what we often hear AI is best at: freeing up teams from repetitive work. 72 percent report they can now focus on more creative and strategic tasks.
The results are hitting the bottom line, too. 60 percent of marketers have seen customer engagement climb, and 58 percent report a boost in customer loyalty since bringing AI on board.
But shoppers are telling a different story. The report reveals a “personalisation gap,” where the efforts of marketers just aren’t hitting the mark. Even with heavy investment in AI-driven tailoring, 40 percent of consumers feel that brands just don’t get them as people—a huge jump from 25 percent last year. To make matters worse, 60 percent say the marketing emails they receive are mostly irrelevant.
Dig deeper, and you find a real crisis of confidence in how personal data is being handled for AI marketing. 63 percent of consumers globally don’t trust AI with their data, up from 44 percent in 2024. In the UK, it’s even more stark, with 76 percent of shoppers feeling uneasy.
This collapse in trust is happening just as new rules come into play. A year after the EU’s AI Act was introduced, more than a third (37%) of UK marketers have overhauled their approach to AI, with 44% stating their use of the technology has become more ethical.
This creates a tension that the whole industry is talking about: how to be responsible without killing innovation. While the AI Act provides a clearer rulebook, over a quarter (28%) of marketing professionals are worried that rigid regulations could stifle creativity.
As Dr Stefan Wenzell, Chief Product Officer at SAP Emarsys, says, “regulation must strike a balance – protecting consumers without slowing innovation. At SAP Emarsys, we believe responsible AI is about building trust through clarity, relevance, and smart data use.”
The message for retailers is loud and clear: prove your worth. People are happy to use AI when it actually helps them. Over half of shoppers agree that AI makes shopping easier (55%) and faster (53%), using it to find products, compare prices, or come up with gift ideas. The interest in helpful AI is there, but it has to come with a promise of transparency and privacy.
Some brands are getting this right by focusing on people, not just the technology. Sterling Doak, Head of Marketing at iconic guitar maker Gibson, says it’s about thinking differently.
“If I can find a utility [AI] that can help my staff think more strategically and creatively, that’s needed because we’re a very creative business at the core,” Doak explains. For Gibson, AI serves human creativity rather than just automating tasks.
It’s a similar story for Australian retailer City Beach, which used AI marketing to keep its customers coming back. Mike Cheng, the company’s Head of Digital, discovered AI was the ideal tool for spotting and winning back customers who were about to leave.
“AI was able to predict where people were churning or defecting at a 1:1 level, and this allowed us to send campaigns based on customers’ individual lifecycle,” Cheng notes. Their approach brought back 48 percent of those customers within three months.
What these success stories have in common is a focus on solving real problems for people. As retailers venture deeper into what SAP Emarsys calls the “Engagement Era,” the way forward is becoming clearer. Investment in AI isn’t slowing down—64 percent of marketers are planning to increase their spend next year.
The technology isn’t the problem; it’s how it’s being used. Retailers need to close the gap between what they’re doing and what their customers are feeling. That means going beyond basic personalisation to offer real value, being open about how data is used, and proving that sharing information leads to a better experience.
The AI revolution is here, but for it to truly succeed, marketing professionals need to remember the person on the other side of the screen.
See also: Google Vids gets AI avatars and image-to-video tools
AI security wars: Can Google Cloud defend against tomorrow’s threats?

In Google’s sleek Singapore office at Block 80, Level 3, Mark Johnston stood before a room of technology journalists at 1:30 PM with a startling admission: after five decades of cybersecurity evolution, defenders are still losing the war. “In 69% of incidents in Japan and Asia Pacific, organisations were notified of their own breaches by external entities,” the Director of Google Cloud’s Office of the CISO for Asia Pacific revealed, his presentation slide showing a damning statistic – most companies can’t even detect when they’ve been breached.
What unfolded during the hour-long “Cybersecurity in the AI Era” roundtable was an honest assessment of how Google Cloud AI technologies are attempting to reverse decades of defensive failures, even as the same artificial intelligence tools empower attackers with unprecedented capabilities.
The historical context: 50 years of defensive failure
The crisis isn’t new. Johnston traced the problem back to cybersecurity pioneer James P. Anderson’s 1972 observation that “systems that we use really don’t protect themselves” – a challenge that has persisted despite decades of technological advancement. “What James P. Anderson said back in 1972 still applies today,” Johnston said, highlighting how fundamental security problems remain unsolved even as technology evolves.
The persistence of basic vulnerabilities compounds this challenge. Google Cloud’s threat intelligence data reveals that “over 76% of breaches start with the basics” – configuration errors and credential compromises that have plagued organisations for decades. Johnston cited a recent example: “Last month, a very common product that most organisations have used at some point in time, Microsoft SharePoint, also has what we call a zero-day vulnerability…and during that time, it was attacked continuously and abused.”
The AI arms race: Defenders vs. attackers

Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, describes the current landscape as “a high-stakes arms race” where both cybersecurity teams and threat actors employ AI tools to outmanoeuvre each other. “For defenders, AI is a valuable asset,” Curran explains in a media note. “Enterprises have implemented generative AI and other automation tools to analyse vast amounts of data in real time and identify anomalies.”
However, the same technologies benefit attackers. “For threat actors, AI can streamline phishing attacks, automate malware creation and help scan networks for vulnerabilities,” Curran warns. The dual-use nature of AI creates what Johnston calls “the Defender’s Dilemma.”
Google Cloud AI initiatives aim to tilt these scales in favour of defenders. Johnston argued that “AI affords the best opportunity to upend the Defender’s Dilemma, and tilt the scales of cyberspace to give defenders a decisive advantage over attackers.” The company’s approach centres on what they term “countless use cases for generative AI in defence,” spanning vulnerability discovery, threat intelligence, secure code generation, and incident response.
Project Zero’s Big Sleep: AI finding what humans miss
One of Google’s most compelling examples of AI-powered defence is Project Zero’s “Big Sleep” initiative, which uses large language models to identify vulnerabilities in real-world code. Johnston shared impressive metrics: “Big Sleep found a vulnerability in an open source library using Generative AI tools – the first time we believe that a vulnerability was found by an AI service.”
The program’s evolution demonstrates AI’s growing capabilities. “Last month, we announced we found over 20 vulnerabilities in different packages,” Johnston noted. “But today, when I looked at the big sleep dashboard, I found 47 vulnerabilities in August that have been found by this solution.”
The progression from manual human analysis to AI-assisted discovery represents what Johnston describes as a shift “from manual to semi-autonomous” security operations, where “Gemini drives most tasks in the security lifecycle consistently well, delegating tasks it can’t automate with sufficiently high confidence or precision.”
The automation paradox: Promise and peril
Google Cloud’s roadmap envisions progression through four stages: Manual, Assisted, Semi-autonomous, and Autonomous security operations. In the semi-autonomous phase, AI systems would handle routine tasks while escalating complex decisions to human operators. The ultimate autonomous phase would see AI “drive the security lifecycle to positive outcomes on behalf of users.”
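The semi-autonomous stage described above amounts to a confidence-gated hand-off: the system acts on its own only when it is sufficiently sure, and escalates everything else. The names, threshold, and actions below are illustrative assumptions, not part of Google Cloud's product; a minimal sketch:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    confidence: float  # system's confidence in its proposed action, 0.0-1.0

def triage(finding: Finding, threshold: float = 0.9) -> str:
    """Semi-autonomous pattern: remediate automatically only when confidence
    clears the threshold; otherwise escalate the decision to a human."""
    if finding.confidence >= threshold:
        return f"auto-remediate: {finding.description}"
    return f"escalate to analyst: {finding.description}"

print(triage(Finding("block known-bad IP", 0.97)))      # routine, handled
print(triage(Finding("quarantine prod server", 0.55)))  # complex, escalated
```

Tuning the threshold is where the human "copilot" role lives: lower it and more decisions run unattended, raise it and more land on an analyst's queue.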

However, this automation introduces new vulnerabilities. When asked about the risks of over-reliance on AI systems, Johnston acknowledged the challenge: “There is the potential that this service could be attacked and manipulated. At the moment, when you see tools that these agents are piped into, there isn’t a really good framework to authorise that that’s the actual tool that hasn’t been tampered with.”
Curran echoes this concern: “The risk to companies is that their security teams will become over-reliant on AI, potentially sidelining human judgment and leaving systems vulnerable to attacks. There is still a need for a human ‘copilot’ and roles need to be clearly defined.”
Real-world implementation: Controlling AI’s unpredictable nature
Google Cloud’s approach includes practical safeguards to address one of AI’s most problematic characteristics: its tendency to generate irrelevant or inappropriate responses. Johnston illustrated this challenge with a concrete example of contextual mismatches that could create business risks.
“If you’ve got a retail store, you shouldn’t be having medical advice instead,” Johnston explained, describing how AI systems can unexpectedly shift into unrelated domains. “Sometimes these tools can do that.” The unpredictability represents a significant liability for businesses deploying customer-facing AI systems, where off-topic responses could confuse customers, damage brand reputation, or even create legal exposure.
Google’s Model Armor technology addresses this by functioning as an intelligent filter layer. “Having filters and using our capabilities to put health checks on those responses allows an organisation to get confidence,” Johnston noted. The system screens AI outputs for personally identifiable information, filters content inappropriate to the business context, and blocks responses that could be “off-brand” for the organisation’s intended use case.
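Conceptually, a filter layer of this kind sits between the model and the user, screening each response before it is released. The patterns and blocked categories below are illustrative assumptions for a hypothetical retail deployment, not the actual Model Armor implementation:

```python
import re

# Illustrative screening rules only; real products use far richer detectors.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]
OFF_TOPIC_TERMS = {"diagnosis", "prescription"}  # medical terms, off-brand for retail

def screen_response(text: str) -> str:
    """Block model output containing PII or off-context content; pass the rest."""
    if any(p.search(text) for p in PII_PATTERNS):
        return "[blocked: possible personal data]"
    if any(term in text.lower() for term in OFF_TOPIC_TERMS):
        return "[blocked: off-topic for this deployment]"
    return text

print(screen_response("Your order ships tomorrow."))          # passes through
print(screen_response("Contact jane@example.com for help."))  # blocked
```

The design choice is that the filter is deterministic and auditable, which is what lets an organisation "get confidence" in outputs from a non-deterministic model.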
The company also addresses the growing concern about shadow AI deployment. Organisations are discovering hundreds of unauthorised AI tools in their networks, creating massive security gaps. Google’s sensitive data protection technologies attempt to address this by scanning in multiple cloud providers and on-premises systems.
The scale challenge: Budget constraints vs. growing threats
Johnston identified budget constraints as the primary challenge facing Asia Pacific CISOs, occurring precisely when organisations face escalating cyber threats. The paradox is stark: as attack volumes increase, organisations lack the resources to adequately respond.
“We look at the statistics and objectively say, we’re seeing more noise – may not be super sophisticated, but more noise is more overhead, and that costs more to deal with,” Johnston observed. The increase in attack frequency, even when individual attacks aren’t necessarily more advanced, creates a resource drain that many organisations cannot sustain.
The financial pressure intensifies an already complex security landscape. “They are looking for partners who can help accelerate that without having to hire 10 more staff or get larger budgets,” Johnston explained, describing how security leaders face mounting pressure to do more with existing resources while threats multiply.
Critical questions remain
Despite Google Cloud AI’s promising capabilities, several important questions persist. When challenged about whether defenders are actually winning this arms race, Johnston acknowledged: “We haven’t seen novel attacks using AI to date,” but noted that attackers are using AI to scale existing attack methods and create “a wide range of opportunities in some aspects of the attack.”
The effectiveness claims also require scrutiny. While Johnston cited a 50% improvement in incident report writing speed, he admitted that accuracy remains a challenge: “There are inaccuracies, sure. But humans make mistakes too.” The acknowledgement highlights the ongoing limitations of current AI security implementations.
Looking forward: Post-quantum preparations
Beyond current AI implementations, Google Cloud is already preparing for the next paradigm shift. Johnston revealed that the company has “already deployed post-quantum cryptography between our data centres by default at scale,” positioning for future quantum computing threats that could render current encryption obsolete.
The verdict: Cautious optimism required
The integration of AI into cybersecurity represents both unprecedented opportunity and significant risk. While the AI technologies by Google Cloud demonstrate genuine capabilities in vulnerability detection, threat analysis, and automated response, the same technologies empower attackers with enhanced capabilities for reconnaissance, social engineering, and evasion.
Curran’s assessment provides a balanced perspective: “Given how quickly the technology has evolved, organisations will have to adopt a more comprehensive and proactive cybersecurity policy if they want to stay ahead of attackers. After all, cyberattacks are a matter of ‘when,’ not ‘if,’ and AI will only accelerate the number of opportunities available to threat actors.”
The success of AI-powered cybersecurity ultimately depends not on the technology itself, but on how thoughtfully organisations implement these tools while maintaining human oversight and addressing fundamental security hygiene. As Johnston concluded, “We should adopt these in low-risk approaches,” emphasising the need for measured implementation rather than wholesale automation.
The AI revolution in cybersecurity is underway, but victory will belong to those who can balance innovation with prudent risk management – not those who simply deploy the most advanced algorithms.
See also: Google Cloud unveils AI ally for security teams