Artificial Intelligence
Inside Tim Cook’s push to get Apple back in the AI race

While other tech companies push out AI tools at full speed, Apple is taking its time. Its Apple Intelligence features – shown off at WWDC – won’t reach most users until at least 2025 or even 2026. Some see this as Apple falling behind, but the company’s track record suggests it prefers to launch only when products are ready.
In contrast, competitors like Microsoft, OpenAI, and Google have already shipped AI features widely – often with bugs and unreliable results, and usually whether or not users ask for them. AI assistants today still struggle with accuracy, consistency, and usefulness in many tasks.
Apple seems to be watching from the sidelines, waiting for the tech to mature. Instead of flooding iOS with half-working tools, it’s holding back. That strategy may pay off if users lose patience with AI that overpromises and underdelivers.
Apple has done this before – launching smartwatches and tablets late, but with stronger products. And since it already owns the hardware and software, and controls its own app store, it can afford to wait.
If current AI tools don’t improve soon, Apple’s slower, more cautious rollout might look less like hesitation and more like smart planning.
That measured approach doesn’t mean Apple is sitting still. Behind the scenes, the company is ramping up investment, hiring, and internal coordination to prepare for an AI shift. That strategy was on full display during a recent all-hands meeting at Apple’s headquarters, where CEO Tim Cook rallied employees and laid out the company’s AI ambitions.
Apple is getting serious about artificial intelligence, and Cook wants everyone at the company on board. As reported by Bloomberg, during a rare all-company gathering at its Cupertino HQ, he spoke directly to employees about what’s next. His message was clear: Apple has to win in AI – and now is the time to make that happen.
Cook called AI a once-in-a-generation shift, comparing its impact to that of the internet, smartphones, and cloud computing. “Apple must do this. Apple will do this. This is sort of ours to grab,” he said, according to people who were there. He promised Apple would spend what it takes to compete.
The company has been slower than others to roll out AI tools. Apple Intelligence – its main AI offering – was introduced long after companies like OpenAI, Google, and Microsoft launched their own products. And even when Apple finally announced its plans, the reaction was underwhelming.
See also: Why Apple is playing it slow with AI
But Cook pointed out that Apple has often shown up late to new technology – only to redefine it. “There was a PC before the Mac; there was a smartphone before the iPhone,” he reminded employees. “There were many tablets before the iPad.” Apple didn’t invent those categories, he said, it just made them work better.
Building the future of Siri
Much of the company’s current AI work centres on Siri, its voice assistant. Apple had originally planned a major overhaul as part of Apple Intelligence, adding features powered by large language models. But that rollout was delayed, leading to internal shakeups and a rethink of the entire system.
Craig Federighi, Apple’s software chief, told employees that trying to merge old and new versions of Siri didn’t work. The team tried to keep the original system for basic tasks like setting timers, while adding generative AI features for more complex requests. But that hybrid setup didn’t meet Apple’s standards. “We realised that approach wasn’t going to get us to Apple quality,” he said.
Now, the team is rebuilding Siri from the ground up. A completely new version is in the works, expected as early as spring 2026. Federighi said the results so far have been strong and could lead to more improvements than originally planned. “There is no project people are taking more seriously,” he told staff.
A key figure behind this new direction is Mike Rockwell, the executive who led development on Apple’s Vision Pro headset. Rockwell and his software team are now leading Siri’s redesign. Federighi said they’ve “supercharged” the work and brought a new level of focus.
Investing in AI talent and tools
Apple is also expanding its AI team quickly. Cook said the company hired 12,000 people in the past year, with 40% of them joining research and development – and many of those hires are focused on AI.
Part of the work involves hardware. Apple is building new chips specifically designed for AI, including a more powerful server chip known internally as “Baltra.” The company is also opening an AI server farm in Houston to support future projects.
Beyond Siri, Apple is quietly building what could become a major AI tool. According to Bloomberg's Mark Gurman, Apple has formed a team called "Answers, Knowledge, and Information" (AKI). The group's job is to create search that works more like ChatGPT – giving direct answers rather than just showing links.
The AKI team is led by Robby Walker, who reports to AI chief John Giannandrea, and Apple has already started hiring engineers for the group. While details are still limited, the project appears to include backend systems, search algorithms, and potentially even a standalone app.
A push to move faster
Cook also encouraged employees to start using AI more in their work. “All of us are using AI in a significant way already, and we must use it as a company as well,” he said. He told employees to bring ideas to their managers and find ways to get AI tools into products faster.
The sense of urgency was echoed during Apple’s recent earnings call. The company posted strong results, with nearly 10% growth in the June quarter – enough to ease concerns about slowing iPhone sales and weak results from the Chinese market. Cook told investors Apple would “significantly” increase its spending on AI.
Yet challenges remain. Apple expects to face a $1.1 billion hit from tariffs this quarter and continues to deal with antitrust pressures in the US and Europe, where regulators are watching closely to see how the company runs its App Store and handles user data.
Cook acknowledged these issues at the staff meeting, saying Apple would continue pushing regulators to adopt rules that don’t hurt privacy or user experience. “We need to continue to push on the intention of the regulation,” he said, “instead of these things that destroy the user experience and user privacy and security.”
New stores, new markets
Beyond AI, Cook touched on Apple’s retail strategy. The company plans to open new stores in emerging markets, including India, the United Arab Emirates, and China. A store in Saudi Arabia is also on the way. Apple is also putting more focus on its online store.
“We need to be in more countries,” Cook said, adding that most of Apple’s future growth will come from new markets. That doesn’t mean existing regions will be ignored, but the company sees more opportunity in expanding its global footprint.
What’s next for Apple products
While Cook didn’t reveal any product details, he said, “I have never felt so much excitement and so much energy before as right now.”
Reports suggest Apple is working on several new devices, including a foldable iPhone, new smart glasses, updated home devices, and robotics. A major iPhone redesign is also rumoured for its 20th anniversary next year.
Cook didn’t confirm any of this, but he hinted at big things ahead. “The product pipeline, which I can’t talk about: It’s amazing, guys. It’s amazing,” he said. “Some of it you’ll see soon, some of it will come later, but there’s a lot to see.”
Cautious but confident
Apple’s cautious approach to AI may have slowed it down, but internally, the company seems to believe that slow and steady might win the race. Cook’s message to employees was clear: Apple can still define what useful, responsible AI looks like – and it’s all hands on deck to get there.
(Photo by: Apple via YouTube)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Marketing AI boom faces crisis of consumer trust

The vast majority (92%) of marketing professionals are using AI in their day-to-day operations, turning it from a buzzword into a workhorse.
According to SAP Emarsys – which took the pulse of over 10,000 consumers and 1,250 marketers – while businesses are seeing real benefits from AI, shoppers are becoming increasingly distrustful, especially when it comes to their personal data. This divide could easily unravel the personalised shopping experience that brands are working so hard to build.
The rush to bring AI into marketing has been fast and decisive. As Sara Richter, CMO at SAP Emarsys, puts it, “AI marketing is now fully in motion: it has transitioned from the theoretical to the practical as marketers welcome AI into their strategies and test possibilities.”
For businesses, the appeal is obvious. 71 percent of marketers say AI helps them launch campaigns faster, saving them over two hours on average for each one. This newfound efficiency is doing what we often hear AI is best at: freeing up teams from repetitive work. 72 percent report they can now focus on more creative and strategic tasks.
The results are hitting the bottom line, too. 60 percent of marketers have seen customer engagement climb, and 58 percent report a boost in customer loyalty since bringing AI on board.
But shoppers are telling a different story. The report reveals a "personalisation gap," where the efforts of marketers just aren't hitting the mark. Even with heavy investment in AI-driven tailoring, 40 percent of consumers feel that brands just don't get them as people – a huge jump from 25 percent last year. To make matters worse, 60 percent say the marketing emails they receive are mostly irrelevant.
Dig deeper, and you find a real crisis of confidence in how personal data is being handled for AI marketing. 63 percent of consumers globally don’t trust AI with their data, up from 44 percent in 2024. In the UK, it’s even more stark, with 76 percent of shoppers feeling uneasy.
This collapse in trust is happening just as new rules come into play. A year after the EU’s AI Act was introduced, more than a third (37%) of UK marketers have overhauled their approach to AI, with 44% stating their use of the technology has become more ethical.
This creates a tension that the whole industry is talking about: how to be responsible without killing innovation. While the AI Act provides a clearer rulebook, over a quarter (28%) of marketing professionals are worried that rigid regulations could stifle creativity.
As Dr Stefan Wenzell, Chief Product Officer at SAP Emarsys, says, “regulation must strike a balance – protecting consumers without slowing innovation. At SAP Emarsys, we believe responsible AI is about building trust through clarity, relevance, and smart data use.”
The message for retailers is loud and clear: prove your worth. People are happy to use AI when it actually helps them. Over half of shoppers agree that AI makes shopping easier (55%) and faster (53%), using it to find products, compare prices, or come up with gift ideas. The interest in helpful AI is there, but it has to come with a promise of transparency and privacy.
Some brands are getting this right by focusing on people, not just the technology. Sterling Doak, Head of Marketing at iconic guitar maker Gibson, says it’s about thinking differently.
“If I can find a utility [AI] that can help my staff think more strategically and creatively, that’s needed because we’re a very creative business at the core,” Doak explains. For Gibson, AI serves human creativity rather than just automating tasks.
It’s a similar story for Australian retailer City Beach, which used AI marketing to keep its customers coming back. Mike Cheng, the company’s Head of Digital, discovered AI was the ideal tool for spotting and winning back customers who were about to leave.
“AI was able to predict where people were churning or defecting at a 1:1 level, and this allowed us to send campaigns based on customers’ individual lifecycle,” Cheng notes. Their approach brought back 48 percent of those customers within three months.
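City Beach's actual models aren't described in the report, but the idea of scoring churn risk "at a 1:1 level" and triggering win-back campaigns can be illustrated with a toy sketch. The features, weights, and threshold below are invented for the example, not the retailer's real system:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    days_since_last_order: int
    orders_last_90_days: int
    opened_last_email: bool

def churn_score(c: Customer) -> float:
    """Toy per-customer churn score in [0, 1]: higher means more likely
    to defect. Weights are illustrative, not tuned on real data."""
    score = min(c.days_since_last_order / 120, 1.0) * 0.6        # recency
    score += 0.25 if c.orders_last_90_days == 0 else 0.0         # frequency
    score += 0.0 if c.opened_last_email else 0.15                # engagement
    return min(score, 1.0)

def winback_segment(customers, threshold=0.5):
    """Customers whose score crosses the threshold get a win-back campaign."""
    return [c.customer_id for c in customers if churn_score(c) >= threshold]

customers = [
    Customer("c1", days_since_last_order=150, orders_last_90_days=0,
             opened_last_email=False),
    Customer("c2", days_since_last_order=10, orders_last_90_days=4,
             opened_last_email=True),
]
print(winback_segment(customers))  # → ['c1']
```

In a production system the hand-set weights would be replaced by a trained model, but the campaign logic – score each customer, then message only those above a risk threshold – stays the same.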
What these success stories have in common is a focus on solving real problems for people. As retailers venture deeper into what SAP Emarsys calls the "Engagement Era," the way forward is becoming clearer. Investment in AI isn't slowing down – 64 percent of marketers are planning to increase their spend next year.
The technology isn’t the problem; it’s how it’s being used. Retailers need to close the gap between what they’re doing and what their customers are feeling. That means going beyond basic personalisation to offer real value, being open about how data is used, and proving that sharing information leads to a better experience.
The AI revolution is here, but for it to truly succeed, marketing professionals need to remember the person on the other side of the screen.
See also: Google Vids gets AI avatars and image-to-video tools
AI security wars: Can Google Cloud defend against tomorrow’s threats?

In Google’s sleek Singapore office at Block 80, Level 3, Mark Johnston stood before a room of technology journalists at 1:30 PM with a startling admission: after five decades of cybersecurity evolution, defenders are still losing the war. “In 69% of incidents in Japan and Asia Pacific, organisations were notified of their own breaches by external entities,” the Director of Google Cloud’s Office of the CISO for Asia Pacific revealed, his presentation slide showing a damning statistic – most companies can’t even detect when they’ve been breached.
What unfolded during the hour-long “Cybersecurity in the AI Era” roundtable was an honest assessment of how Google Cloud AI technologies are attempting to reverse decades of defensive failures, even as the same artificial intelligence tools empower attackers with unprecedented capabilities.
The historical context: 50 years of defensive failure
The crisis isn't new. Johnston traced the problem back to cybersecurity pioneer James P. Anderson's 1972 observation that "systems that we use really don't protect themselves" – a challenge that has persisted despite decades of technological advancement. "What James P. Anderson said back in 1972 still applies today," Johnston said, highlighting how fundamental security problems remain unsolved even as technology evolves.
The persistence of basic vulnerabilities compounds this challenge. Google Cloud’s threat intelligence data reveals that “over 76% of breaches start with the basics” – configuration errors and credential compromises that have plagued organisations for decades. Johnston cited a recent example: “Last month, a very common product that most organisations have used at some point in time, Microsoft SharePoint, also has what we call a zero-day vulnerability…and during that time, it was attacked continuously and abused.”
The AI arms race: Defenders vs. attackers

Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, describes the current landscape as “a high-stakes arms race” where both cybersecurity teams and threat actors employ AI tools to outmanoeuvre each other. “For defenders, AI is a valuable asset,” Curran explains in a media note. “Enterprises have implemented generative AI and other automation tools to analyse vast amounts of data in real time and identify anomalies.”
However, the same technologies benefit attackers. “For threat actors, AI can streamline phishing attacks, automate malware creation and help scan networks for vulnerabilities,” Curran warns. The dual-use nature of AI creates what Johnston calls “the Defender’s Dilemma.”
Google Cloud AI initiatives aim to tilt these scales in favour of defenders. Johnston argued that “AI affords the best opportunity to upend the Defender’s Dilemma, and tilt the scales of cyberspace to give defenders a decisive advantage over attackers.” The company’s approach centres on what they term “countless use cases for generative AI in defence,” spanning vulnerability discovery, threat intelligence, secure code generation, and incident response.
Project Zero’s Big Sleep: AI finding what humans miss
One of Google’s most compelling examples of AI-powered defence is Project Zero’s “Big Sleep” initiative, which uses large language models to identify vulnerabilities in real-world code. Johnston shared impressive metrics: “Big Sleep found a vulnerability in an open source library using Generative AI tools – the first time we believe that a vulnerability was found by an AI service.”
The program’s evolution demonstrates AI’s growing capabilities. “Last month, we announced we found over 20 vulnerabilities in different packages,” Johnston noted. “But today, when I looked at the big sleep dashboard, I found 47 vulnerabilities in August that have been found by this solution.”
The progression from manual human analysis to AI-assisted discovery represents what Johnston describes as a shift “from manual to semi-autonomous” security operations, where “Gemini drives most tasks in the security lifecycle consistently well, delegating tasks it can’t automate with sufficiently high confidence or precision.”
The automation paradox: Promise and peril
Google Cloud’s roadmap envisions progression through four stages: Manual, Assisted, Semi-autonomous, and Autonomous security operations. In the semi-autonomous phase, AI systems would handle routine tasks while escalating complex decisions to human operators. The ultimate autonomous phase would see AI “drive the security lifecycle to positive outcomes on behalf of users.”

However, this automation introduces new vulnerabilities. When asked about the risks of over-reliance on AI systems, Johnston acknowledged the challenge: “There is the potential that this service could be attacked and manipulated. At the moment, when you see tools that these agents are piped into, there isn’t a really good framework to authorise that that’s the actual tool that hasn’t been tampered with.”
Curran echoes this concern: “The risk to companies is that their security teams will become over-reliant on AI, potentially sidelining human judgment and leaving systems vulnerable to attacks. There is still a need for a human ‘copilot’ and roles need to be clearly defined.”
Real-world implementation: Controlling AI’s unpredictable nature
Google Cloud’s approach includes practical safeguards to address one of AI’s most problematic characteristics: its tendency to generate irrelevant or inappropriate responses. Johnston illustrated this challenge with a concrete example of contextual mismatches that could create business risks.
“If you’ve got a retail store, you shouldn’t be having medical advice instead,” Johnston explained, describing how AI systems can unexpectedly shift into unrelated domains. “Sometimes these tools can do that.” The unpredictability represents a significant liability for businesses deploying customer-facing AI systems, where off-topic responses could confuse customers, damage brand reputation, or even create legal exposure.
Google’s Model Armor technology addresses this by functioning as an intelligent filter layer. “Having filters and using our capabilities to put health checks on those responses allows an organisation to get confidence,” Johnston noted. The system screens AI outputs for personally identifiable information, filters content inappropriate to the business context, and blocks responses that could be “off-brand” for the organisation’s intended use case.
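The mechanics of such a filter layer are simple to sketch. The following is not Google's Model Armor – the rules, patterns, and term lists here are invented for illustration – but it shows the basic idea of screening a model's reply for PII-like strings and off-topic content before it ever reaches a customer:

```python
import re

# Illustrative screening rules for a filter that sits between the
# model and the customer-facing channel. Patterns are examples only.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]
# Terms a retail deployment shouldn't discuss, e.g. medical advice.
OFF_TOPIC_TERMS = {"diagnosis", "prescription", "dosage"}

def screen_response(text: str) -> tuple[bool, str]:
    """Return (allowed, reason): block replies that leak PII-like
    strings or drift into domains the deployment shouldn't cover."""
    for pat in PII_PATTERNS:
        if pat.search(text):
            return False, "possible PII in response"
    lowered = text.lower()
    for term in OFF_TOPIC_TERMS:
        if term in lowered:
            return False, f"off-topic term: {term}"
    return True, "ok"

print(screen_response("Your order ships tomorrow."))        # allowed
print(screen_response("Contact me at jane@example.com."))   # blocked: PII
print(screen_response("The recommended dosage is 200mg."))  # blocked: off-topic
```

Real filter layers use classifiers rather than keyword lists, but the contract is the same: every model output passes through a policy check, and anything flagged is blocked or rewritten instead of shown to the user.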
The company also addresses the growing concern about shadow AI deployment. Organisations are discovering hundreds of unauthorised AI tools in their networks, creating massive security gaps. Google's sensitive data protection technologies attempt to address this by scanning across multiple cloud providers and on-premises systems.
The scale challenge: Budget constraints vs. growing threats
Johnston identified budget constraints as the primary challenge facing Asia Pacific CISOs, occurring precisely when organisations face escalating cyber threats. The paradox is stark: as attack volumes increase, organisations lack the resources to adequately respond.
“We look at the statistics and objectively say, we’re seeing more noise – may not be super sophisticated, but more noise is more overhead, and that costs more to deal with,” Johnston observed. The increase in attack frequency, even when individual attacks aren’t necessarily more advanced, creates a resource drain that many organisations cannot sustain.
The financial pressure intensifies an already complex security landscape. “They are looking for partners who can help accelerate that without having to hire 10 more staff or get larger budgets,” Johnston explained, describing how security leaders face mounting pressure to do more with existing resources while threats multiply.
Critical questions remain
Despite Google Cloud AI’s promising capabilities, several important questions persist. When challenged about whether defenders are actually winning this arms race, Johnston acknowledged: “We haven’t seen novel attacks using AI to date,” but noted that attackers are using AI to scale existing attack methods and create “a wide range of opportunities in some aspects of the attack.”
The effectiveness claims also require scrutiny. While Johnston cited a 50% improvement in incident report writing speed, he admitted that accuracy remains a challenge: “There are inaccuracies, sure. But humans make mistakes too.” The acknowledgement highlights the ongoing limitations of current AI security implementations.
Looking forward: Post-quantum preparations
Beyond current AI implementations, Google Cloud is already preparing for the next paradigm shift. Johnston revealed that the company has “already deployed post-quantum cryptography between our data centres by default at scale,” positioning for future quantum computing threats that could render current encryption obsolete.
The verdict: Cautious optimism required
The integration of AI into cybersecurity represents both unprecedented opportunity and significant risk. While Google Cloud's AI technologies demonstrate genuine capabilities in vulnerability detection, threat analysis, and automated response, the same technologies empower attackers with enhanced capabilities for reconnaissance, social engineering, and evasion.
Curran’s assessment provides a balanced perspective: “Given how quickly the technology has evolved, organisations will have to adopt a more comprehensive and proactive cybersecurity policy if they want to stay ahead of attackers. After all, cyberattacks are a matter of ‘when,’ not ‘if,’ and AI will only accelerate the number of opportunities available to threat actors.”
The success of AI-powered cybersecurity ultimately depends not on the technology itself, but on how thoughtfully organisations implement these tools while maintaining human oversight and addressing fundamental security hygiene. As Johnston concluded, “We should adopt these in low-risk approaches,” emphasising the need for measured implementation rather than wholesale automation.
The AI revolution in cybersecurity is underway, but victory will belong to those who can balance innovation with prudent risk management – not those who simply deploy the most advanced algorithms.
See also: Google Cloud unveils AI ally for security teams
Google Vids gets AI avatars and image-to-video tools

Google is rolling out a raft of powerful new generative AI features for Vids designed to take the pain out of video creation.
Between wrestling with complicated software, finding someone willing to be on camera, and then spending hours editing out all the “ums” and “ahs,” video production often feels more trouble than it’s worth. Google is aiming to change that narrative with Vids.
So far, it seems to be finding its audience. Google announced that Vids has already rocketed past one million monthly active users, a clear sign that teams are crying out for simpler ways to bring their ideas to life with video.
Your photos now move, and avatars do the talking
Among the latest additions is the ability to turn static images into motion pictures. Imagine you’ve got a great photo of a new product but need something more engaging for a social media post or presentation. You can now upload that picture to Vids, type a quick prompt describing what you want to happen, and Google’s Veo AI will turn it into an eight-second animated clip, complete with sound. It’s a simple way to create eye-catching, brand-aligned content in minutes.
For anyone who dreads being on camera, the new AI avatars will be a welcome relief. This feature lets you produce a polished video without ever stepping in front of a lens. You write your script, choose from a selection of digital presenters, and the AI handles the delivery. It’s perfect for creating consistent training guides, product demos, or team updates without worrying about lighting, background noise, or re-recording twenty takes to get it right.
Google is also tackling the tedious task of editing. A new automatic transcript trimming tool listens to your recordings and, with a few clicks, snips out all the filler words and awkward silences. Speaking from plenty of experience, that will be a huge time-saver.
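Google hasn't published how its trimming tool works, but the general technique is straightforward once you have a timestamped transcript from speech-to-text. This hypothetical sketch drops filler words and collapses long silences into cut points; the filler list and gap threshold are assumptions for the example:

```python
FILLERS = {"um", "uh", "ah", "er"}

def trim_cuts(words, max_gap=0.75):
    """words: list of (word, start, end) tuples, in seconds, from a
    speech-to-text transcript. Returns the (start, end) segments to
    keep, skipping filler words and silences longer than max_gap."""
    keep = []
    for word, start, end in words:
        if word.lower().strip(".,") in FILLERS:
            continue  # cut the filler word itself
        if keep and start - keep[-1][1] <= max_gap:
            keep[-1] = (keep[-1][0], end)  # short gap: extend current segment
        else:
            keep.append((start, end))      # long silence: start a new segment
    return keep

transcript = [
    ("So", 0.0, 0.3), ("um", 0.4, 0.7), ("the", 0.8, 0.9),
    ("feature", 0.9, 1.4), ("uh", 3.0, 3.2), ("works", 3.3, 3.6),
]
print(trim_cuts(transcript))  # → [(0.0, 1.4), (3.3, 3.6)]
```

An editor built on this idea would then render only the kept segments, which is why a full pass over a recording can come down to "a few clicks."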
Building on this, the company confirmed that familiar tools from Google Meet – like noise cancellation, custom backgrounds, and appearance filters – are set to arrive next month. Google Vids will also soon support portrait and square formats, making it much easier to create content for different platforms.
Getting started with Google Vids
With these new tools, Google is trying to make video creation as routine as building a slide deck.
The company is broadening access to Google Vids, making it available to more Workspace customers on business and education plans. Better yet, a basic version of the Vids editor is now completely free for all consumers, offering a range of templates to help you create anything from a tutorial to a party invitation.
To get everyone up to speed, Google has also launched a new "Vids on Vids" instructional series. The video guides walk you through the entire process, demonstrate the best features, and offer practical tips to help you create professional-looking content quickly.
Real businesses are seeing the benefit
Companies are already putting Vids to work. At Mercer International, a global manufacturing firm, it’s being used for employee safety training.
Alistair Skey, CIO of Mercer International, said: “Google Vids has given us the ability to create safety content, developed and curated by our organisation rather than having to go to market to hire very expensive resources to produce that for us.”
It’s also a story of speed and scale. Forest Donovan from the data platform Fullstory was impressed by the efficiency gains. “The amount of [high gloss] content we can create in a matter of hours versus what would normally take weeks has been astounding,” he said.
By embedding these powerful yet simple AI tools directly into its Workspace suite, Google is making the case that video is no longer the exclusive domain of specialist creative teams. It’s becoming a fundamental tool for everyday communication, and these updates just made it accessible to everyone.
See also: Google Cloud unveils AI ally for security teams