Artificial Intelligence
Can speed and safety truly coexist in the AI race?

An AI safety criticism from an OpenAI researcher, aimed at a rival, opened a window into the industry’s struggle: a battle against itself.
It started with a warning from Boaz Barak, a Harvard professor currently on leave and working on safety at OpenAI. He called the launch of xAI’s Grok model “completely irresponsible,” not because of its headline-grabbing antics, but because of what was missing: a public system card, detailed safety evaluations, the basic artefacts of transparency that have become the fragile norm.
It was a clear and necessary call. But a candid reflection from ex-OpenAI engineer Calvin French-Owen, posted just three weeks after he left the company, shows us the other half of the story.
French-Owen’s account suggests a large number of people at OpenAI are indeed working on safety, focusing on very real threats like hate speech, bio-weapons, and self-harm. Yet he delivers the crucial caveat: “Most of the work which is done isn’t published,” he wrote, adding that OpenAI “really should do more to get it out there.”
Here, the simple narrative of a good actor scolding a bad one collapses. In its place, we see the real, industry-wide dilemma laid bare. The whole AI industry is caught in the ‘Safety-Velocity Paradox,’ a deep, structural conflict between the need to move at breakneck speed to compete and the moral need to move with caution to keep us safe.
French-Owen suggests that OpenAI is in a state of controlled chaos, having tripled its headcount to over 3,000 in a single year, where “everything breaks when you scale that quickly.” This chaotic energy is channelled by the immense pressure of a “three-horse race” to AGI against Google and Anthropic. The result is a culture of incredible speed, but also one of secrecy.
Consider the creation of Codex, OpenAI’s coding agent. French-Owen calls the project a “mad-dash sprint,” where a small team built a revolutionary product from scratch in just seven weeks.
This is a textbook example of velocity. French-Owen describes working until midnight most nights and even through weekends to make it happen; that is the human cost of that velocity. In an environment moving this fast, is it any wonder that the slow, methodical work of publishing AI safety research feels like a distraction from the race?
This paradox isn’t born of malice, but of a set of powerful, interlocking forces.
There is the obvious competitive pressure to be first. There is also the cultural DNA of these labs, which began as loose groups of “scientists and tinkerers” and still value world-changing breakthroughs over methodical processes. And there is a simple problem of measurement: it is easy to quantify speed and performance, but exceptionally difficult to quantify a disaster that was successfully prevented.
In the boardrooms of today, the visible metrics of velocity will almost always shout louder than the invisible successes of safety. The way forward, however, cannot be about pointing fingers; it must be about changing the fundamental rules of the game.
We need to redefine what it means to ship a product, making the publication of a safety case as integral as the code itself. We need industry-wide standards that prevent any single company from being competitively punished for its diligence, turning safety from a feature into a shared, non-negotiable foundation.
Most of all, though, we need to cultivate a culture within AI labs where every engineer – not just the safety department – feels a sense of responsibility.
The race to create AGI is not about who gets there first; it is about how we arrive. The true winner will not be the company that is merely the fastest, but the one that proves to a watching world that ambition and responsibility can, and must, move forward together.
See also: Military AI contracts awarded to Anthropic, OpenAI, Google, and xAI
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Shah Muhammad, Sweco: How AI is building the future of our cities

Shah Muhammad, who leads AI Innovation at the design and engineering firm Sweco, offers his insights into how AI is building the cities of the future.
Ever been stuck in traffic and thought, “Surely, there’s a better way to design this city?” Or walked past a giant new building and wondered if it would be an energy-guzzling monster?
For decades, building our towns and cities has been a slow, complicated process, often relying on educated guesswork. But what if we could give city planners superpowers? What if they could test-drive a dozen different futures before a single shovel hits the ground?
That’s exactly what’s starting to happen. And the secret ingredient is AI.
“AI is revolutionising urban design and infrastructure planning at Sweco by optimising processes, enhancing decision-making, and improving sustainability outcomes,” Shah explains. “It allows us to analyse vast amounts of data, simulate various scenarios, and create more efficient and resilient urban environments.”
Shah’s point is that AI gives his team the ability to ask the big questions that will impact people’s lives when designing the cities of the future: “What’s the smartest way to build this neighbourhood to cut down on traffic jams and pollution? How can we design a building that stays cool in a heatwave without huge electricity bills?” The AI can run the numbers on thousands of possibilities to find the best path forward.
Of course, the real world is messy. It’s not a neat and tidy computer simulation. It’s full of unpredictable weather, unexpected delays, and the beautiful chaos of human life. This is the number one headache.
“The biggest challenge in applying data-driven models to physical environments is the complexity and variability of real-world conditions,” Shah says. “Ensuring that models accurately represent these conditions and can adapt to changing conditions is crucial.”
So, how do they deal with that? They start with the basics. They get their house in order. Before they even think about AI, they make sure the information it learns from is rock-solid and trustworthy.
“To ensure data quality and interoperability across projects, we implement rigorous data governance practices, standardise data formats, and use interoperable software tools,” he says.
That might sound a bit technical, but think of it this way: they’re making sure everyone on the team is singing from the same hymn sheet. When all the different software tools can talk to each other and everyone trusts the information, the AI can do its job properly. It “enables seamless data exchange and collaboration among different teams and stakeholders.”
But of all the things AI can do, this next part might be the most hopeful when using it to design future cities. It shows that this technology can have a real heart.
“There are many projects where AI has made a measurable impact on sustainability, making it hard to single out one,” he reflects. “However, if I were to choose, I would highlight a project where AI was used to preserve biodiversity by identifying endangered species and providing this information to researchers.”
In this scenario, technology is giving nature a voice in the planning meeting. It’s like the AI raising its hand and saying, “Hang on, let’s be careful here, there’s a family of rare birds living in this area.” It allows us to build with respect for the world around us.
So, what’s the next chapter? According to Shah, it’s about turning that crystal ball into a real-time guide.
“According to me, the biggest opportunity for AI in the AEC sector lies in predictive analytics and automation,” Shah explains. “By anticipating future trends, identifying potential issues early, and automating routine tasks, AI can greatly enhance efficiency, reduce costs, and improve the overall quality of projects.”
This could mean safer bridges, roads that need fewer repairs, and less disruption to our lives. It means freeing up talented people from the boring tasks to focus on building the cities of the future that are more in tune with the people who call them home.
Shah Muhammad is speaking at AI & Big Data Expo Europe in Amsterdam on 24-25 September 2025 where he will be hosting a presentation on ‘Leveraging Generative and Agentic AI for Intelligent Process Automation’. Find out more about the event and how to attend here.
See also: Zuckerberg outlines Meta’s AI vision for ‘personal superintelligence’

Zuckerberg outlines Meta’s AI vision for ‘personal superintelligence’

Meta CEO Mark Zuckerberg has laid out his blueprint for the future of AI, and it’s about giving you “personal superintelligence”.
In a letter, the Meta chief painted a picture of what’s coming next, and he believes it’s closer than we think. He says his teams are already seeing early signs of progress.
“Over the last few months we have begun to see glimpses of our AI systems improving themselves,” Zuckerberg wrote. “The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.”
So, what does he want to do with it? Forget AI that just automates boring office work; Zuckerberg’s vision for personal superintelligence is far more intimate. He imagines a future where technology serves our individual growth, not just our productivity.
In his words, the real revolution will be “everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.”
But here’s where it gets interesting. He drew a clear line in the sand, contrasting his vision against a very different, almost dystopian alternative that he believes others are pursuing.
“This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output,” he stated.
Meta, Zuckerberg says, is betting on the individual when it comes to AI superintelligence. The company believes that progress has always come from people chasing their own dreams, not from living off the scraps of a hyper-efficient machine.
If he’s right, we’ll spend less time wrestling with software and more time creating and connecting. This personal AI would live in devices like smart glasses, understanding our world because they can “see what we see, hear what we hear.”
Of course, he knows this is powerful, even dangerous, stuff. Zuckerberg admits that superintelligence will bring new safety concerns and that Meta will have to be careful about what they release to the world. Still, he argues that the goal must be to empower people as much as possible.
Zuckerberg believes we’re at a crossroads right now. The choices we make in the next few years will decide everything.
“The rest of this decade seems likely to be the decisive period for determining the path this technology will take,” he warned, framing it as a choice between “personal empowerment or a force focused on replacing large swaths of society.”
Zuckerberg has made his choice. He’s focusing Meta’s enormous resources on building this personal superintelligence future.
See also: Forget the Turing Test, AI’s real challenge is communication
Google’s Veo 3 AI video creation tools are now widely available

Google has made its most powerful AI video creator, Veo 3, available for everyone to use on its Vertex AI platform. And for those who need to work quickly, a speedier version called Veo 3 Fast is also ready to go for quick creative work.
Ever had a brilliant idea for a video but found yourself held back by the cost, time, or technical skills needed to create it? This tool aims to offer a faster way to turn your text ideas into everything from short films to product demos.
Since May, 70 million videos have been created, showing a huge global appetite for these AI video creation tools. Businesses are diving in as well, generating over 6 million videos since getting early access in June.
The real-world applications for Veo 3
So, what does this look like in the real world? From global design platforms to major advertising agencies, companies are already putting Veo 3 to work. Take design platform Canva: it is building Veo directly into its software to make video creation simple for its users.
Cameron Adams, Co-Founder and Chief Product Officer at Canva, said: “Enabling anyone to bring their ideas to life – especially their most creative ones – has been core to Canva’s mission ever since we set out to empower the world to design.
“By democratising access to a powerful technology like Google’s Veo 3 inside Canva AI, your big ideas can now be brought to life in the highest quality video and sound, all from within your existing Canva subscription. In true Canva fashion, we’ve built this with an intuitive interface and simple editing tools in place, all backed by Canva Shield.”
For creative agencies like BarkleyOKRP, the big wins are speed and quality. They claim to have been so impressed with the latest version that they went back and remade videos.
Julie Ray Barr, Senior Vice President Client Experience at BarkleyOKRP, commented: “The rapid advancements from Veo 2 to Veo 3 within such a short time frame on this project have been nothing short of remarkable.
“Our team undertook the task of re-creating numerous music videos initially produced with Veo 2 once Veo 3 was released, primarily due to the significantly improved synchronization between voice and mouth movements. The continuous daily progress we are witnessing is truly extraordinary.”
It’s even changing how global companies connect with local customers. The investing platform eToro used Veo 3 to create 15 different, fully AI-generated versions of a single advertisement, each customised to a specific country with its own native language.
Shay Chikotay, Head of Creative & Content at eToro, said: “With Veo 3, we produced 15 fully AI‑generated versions of our ad, each in the native language of its market, all while capturing real emotion at scale.
“Ironically, AI didn’t reduce humanity; it amplified it. Veo 3 lets us tell more stories, in more tongues, with more impact.”
Google gives creators a powerful AI video creation tool
Veo 3 and Veo 3 Fast are packed with features designed to give you the control to tell complete stories.
- Create scenes with sound. The AI generates video and audio at the same time, so you can have characters that speak with accurate lip-syncing and sound effects that fit the scene.
- High-quality results. The models produce video in high-definition (1080p), making it good enough for professional marketing campaigns and demos.
- Reach a global audience easily. Veo 3’s ability to generate dialogue natively makes it much simpler to produce a video once and then translate the dialogue for many different languages.
- Bring still images to life. A new feature, coming in August, will let you take a single photo, add a text prompt, and watch as Veo animates it into an 8-second video clip.
Of course, with such powerful technology, safety is a key concern. Google has built Veo 3 for responsible enterprise use. Every video frame is embedded with an invisible digital watermark from SynthID to help combat misinformation. The service is also covered by Google’s indemnity for generative AI, giving businesses that extra layer of security.
See also: Google’s newest Gemini 2.5 model aims for ‘intelligence per dollar’