Mistral AI gives Le Chat voice recognition and deep research tools

Mistral AI has updated Le Chat with voice recognition, deep research tools, and other features to make the chatbot a more helpful assistant.

The company believes the best AI assistants are those that, in Mistral AI’s words, “let you go deeper in your thinking, keep your conversation flowing, and maintain contextual continuity.”

A standout feature, albeit one that plays catch-up with rivals, is the new ‘Deep Research’ mode. Think of it as turning Le Chat into your personal research assistant.

When you ask a complex question, the Deep Research tool breaks it down, finds credible sources, and then builds a structured report with references, making it easy to follow. Mistral designed it to feel like you’re working with a highly organised partner, helping you tackle everything from market trends to scientific topics.

If you prefer talking over typing, the new ‘Vocal’ mode is for you.

Powered by Voxtral, Mistral AI’s new voice model, the Vocal mode allows for natural, low-latency conversations—meaning you can talk to Le Chat without awkward pauses. Mistral says it’s perfect for brainstorming ideas while on a walk, getting quick answers when your hands are full, or transcribing a meeting.

For especially complex questions, the new ‘Think’ mode taps into Mistral AI’s reasoning model, Magistral, to provide clear and thoughtful answers.

One of the most impressive capabilities of Think mode is its native multilingual support. You can draft a proposal in Spanish, explore a legal concept in Japanese, or just think through an idea in whatever language feels most comfortable. Le Chat can even switch between languages mid-sentence.

To help you stay organised, the new ‘Projects’ feature lets you group related chats into focused folders. Each project remembers your settings and keeps all your conversations, uploaded files, and ideas in one tidy space. It could become the perfect area to manage everything from planning a house move to tracking a long-term work project.

Finally, in a partnership between Mistral AI and Black Forest Labs, Le Chat now includes advanced image editing. This means you can create an image and then fine-tune it with simple commands like “remove the object” or “place me in another city”.

All these new features are available today in Le Chat on the web or by downloading the mobile app.

See also: Military AI contracts awarded to Anthropic, OpenAI, Google, and xAI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Alibaba’s AI coding tool raises security concerns in the West

Alibaba has released Qwen3-Coder, a large open-source AI coding model built to handle complex software engineering tasks. The tool is part of Alibaba’s Qwen3 family and is being promoted as the company’s most advanced coding agent to date.

The model uses a Mixture of Experts (MoE) approach, activating 35 billion parameters out of a total 480 billion and supporting up to 256,000 tokens of context. That number can reportedly be stretched to 1 million using special extrapolation techniques. The company claims Qwen3-Coder has outperformed other open models in agentic tasks, including versions from Moonshot AI and DeepSeek.
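To make those specifications concrete, here is a minimal sketch of how a developer might query such a checkpoint through the Hugging Face transformers library. The repository identifier, prompt, and hardware assumptions are illustrative rather than taken from Alibaba’s documentation; the official model card is the authoritative reference.

```python
# Minimal sketch: querying a large mixture-of-experts coding model via Hugging Face transformers.
# The repository id below is an assumption based on Qwen's usual naming; verify it on the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard across available GPUs; a 480B MoE needs substantial hardware
)

messages = [{"role": "user", "content": "Write a Python function that parses an ISO 8601 date."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```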

But not everyone sees this as good news. Jurgita Lapienyė, Chief Editor at Cybernews, warns that Qwen3-Coder may be more than just a helpful coding assistant—it could pose a real risk to global tech systems if adopted widely by Western developers.

A trojan horse in open source clothing?

Alibaba’s messaging around Qwen3-Coder has focused on its technical strength, comparing it to top-tier tools from OpenAI and Anthropic. But while benchmark scores and features draw attention, Lapienyė suggests they may also distract from the real issue: security.

It’s not that China is catching up in AI—that’s already known. The deeper concern is about the hidden risks of using software generated by AI systems that are difficult to inspect or fully understand.

As Lapienyė put it, developers could be “sleepwalking into a future” where core systems are unknowingly built with vulnerable code. Tools like Qwen3-Coder may make life easier, but they could also introduce subtle weaknesses that go unnoticed.

This risk isn’t hypothetical. Cybernews researchers recently reviewed AI use across major US firms and found that 327 of the S&P 500 now publicly report using AI tools. In those companies alone, researchers identified nearly 1,000 AI-related vulnerabilities.

Adding another AI model—especially one developed under China’s strict national security laws—could add another layer of risk, one that’s harder to control.

When code becomes a backdoor

Today’s developers lean heavily on AI tools to write code, fix bugs, and shape how applications are built. These systems are fast, helpful, and getting better every day.

But what if those same systems were trained to inject flaws? Not obvious bugs, but small, hard-to-spot issues that wouldn’t trigger alarms. A vulnerability that looks like a harmless design decision could go undetected for years.

That’s how supply chain attacks often begin. Past examples, like the SolarWinds incident, show how long-term infiltration can be done quietly and patiently. With enough access and context, an AI model could learn how to plant similar issues—especially if it had exposure to millions of codebases.

It’s not just a theory. Under China’s National Intelligence Law, companies like Alibaba must cooperate with government requests, including those involving data and AI models. That shifts the conversation from technical performance to national security.

What happens to your code?

Another major issue is data exposure. When developers use tools like Qwen3-Coder to write or debug code, every piece of that interaction could reveal sensitive information.

That might include proprietary algorithms, security logic, or infrastructure design—exactly the kind of details that can be useful to a foreign state.

Even though the model is open source, there’s still a lot that users can’t see. The backend infrastructure, telemetry systems, and usage tracking methods may not be transparent. That makes it hard to know where data goes or what the model might remember over time.

Autonomy without oversight

Alibaba has also focused on agentic AI—models that can act more independently than standard assistants. These tools don’t just suggest lines of code. They can be assigned full tasks, operate with minimal input, and make decisions on their own.

That might sound efficient, but it also raises red flags. A fully autonomous coding agent that can scan entire codebases and make changes could become dangerous in the wrong hands.

Imagine an agent that can understand a company’s system defences and craft tailored attacks to exploit them. The same skillset that helps developers move faster could be repurposed by attackers to move faster still.

Regulation still isn’t ready

Despite these risks, current regulations don’t address tools like Qwen3-Coder in a meaningful way. The US government has spent years debating data privacy concerns tied to apps like TikTok, but there’s little public oversight of foreign-developed AI tools.

Groups like the Committee on Foreign Investment in the US (CFIUS) review company acquisitions, but no similar process exists for reviewing AI models that might pose national security risks.

President Biden’s executive order on AI focuses mainly on homegrown models and general safety practices. But it leaves out concerns about imported tools that could be embedded in sensitive environments like healthcare, finance, or national infrastructure.

AI tools capable of writing or altering code should be treated with the same seriousness as software supply chain threats. That means setting clear guidelines for where and how they can be used.

What should happen next?

To reduce risk, organisations dealing with sensitive systems should pause before integrating Qwen3-Coder—or any foreign-developed agentic AI—into their workflows. If you wouldn’t invite someone you don’t trust to look at your source code, why let their AI rewrite it?

Security tools also need to catch up. Static analysis software may not detect complex backdoors or subtle logic issues crafted by AI. The industry needs new tools designed specifically to flag and test AI-generated code for suspicious patterns.
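To illustrate the gap, here is a toy sketch of the kind of lightweight pattern scan such a tool might start from. The patterns are illustrative examples of risky constructs, not an actual detection product, and a genuine backdoor would likely evade checks this simple.

```python
# Toy sketch of the kind of pattern checks a reviewer of AI-generated code might automate.
# The patterns below are illustrative, not a real detection product.
import re

SUSPICIOUS_PATTERNS = {
    r"\beval\s*\(": "dynamic evaluation of strings",
    r"\bexec\s*\(": "dynamic execution of strings",
    r"subprocess\.(call|run|Popen)\(.*shell\s*=\s*True": "shell=True subprocess call",
    r"verify\s*=\s*False": "TLS verification disabled",
    r"https?://\d{1,3}(\.\d{1,3}){3}": "hard-coded raw IP endpoint",
    r"base64\.b64decode\(": "decoding embedded base64 blobs",
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, reason) pairs for lines matching any suspicious pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, reason in SUSPICIOUS_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, reason))
    return findings

if __name__ == "__main__":
    sample = 'import requests\nrequests.get("https://203.0.113.7/update", verify=False)\n'
    for lineno, reason in scan(sample):
        print(f"line {lineno}: {reason}")
```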

Finally, developers, tech leaders, and regulators must understand that code-generating AI isn’t neutral. These systems have power—both as helpful tools and potential threats. The same features that make them useful can also make them dangerous.

Lapienyė called Qwen3-Coder “a potential Trojan horse,” and the metaphor fits. It’s not just about productivity. It’s about who’s inside the gates.

Not everyone agrees on what matters

Wang Jian, the founder of Alibaba Cloud, sees things differently. In an interview with Bloomberg, he said innovation isn’t about hiring the most expensive talent but about picking people who can build the unknown. He criticised Silicon Valley’s approach to AI hiring, where tech giants now compete for top researchers like sports teams bidding on athletes.

“The only thing you need to do is to get the right person,” Wang said. “Not really the expensive person.”

He also believes that the Chinese AI race is healthy, not hostile. According to Wang, companies take turns pulling ahead, which helps the entire ecosystem grow faster.

“You can have the very fast iteration of the technology because of this competition,” he said. “I don’t think it’s brutal, but I think it’s very healthy.”

Still, open-source competition doesn’t guarantee trust. Western developers need to think carefully about what tools they use—and who built them.

The bottom line

Qwen3-Coder may offer impressive performance and open access, but its use comes with risks that go beyond benchmarks and coding speed. In a time when AI tools are shaping how critical systems are built, it’s worth asking not just what these tools can do—but who benefits when they do it.

(Photo by Shahadat Rahman)

See also: Alibaba’s new Qwen reasoning AI model sets open-source records

Generative AI trends 2025: LLMs, data scaling & enterprise adoption

Generative AI is entering a more mature phase in 2025. Models are being refined for accuracy and efficiency, and enterprises are embedding them into everyday workflows.

The focus is shifting from what these systems could do to how they can be applied reliably and at scale. What’s emerging is a clearer picture of what it takes to build generative AI that is not just powerful, but dependable.

The new generation of LLMs

Large language models are shedding their reputation as resource-hungry giants. The cost of generating a response from a model has dropped by a factor of 1,000 over the past two years, bringing it in line with the cost of a basic web search. That shift is making real-time AI far more viable for routine business tasks.

Scale with control is also this year’s priority. The leading models (Claude Sonnet 4, Gemini Flash 2.5, Grok 4, DeepSeek V3) are still large, but they’re built to respond faster, reason more clearly, and run more efficiently. Size alone is no longer the differentiator. What matters is whether a model can handle complex input, support integration, and deliver reliable outputs, even when complexity increases. 

Last year saw a lot of criticism of AI’s tendency to hallucinate. In one high-profile case, a New York lawyer faced sanctions for citing ChatGPT-invented legal cases. Similar failures across sensitive sectors pushed the issue into the spotlight.

This is something LLM companies have been combating this year. Retrieval-augmented generation (RAG), which combines search with generation to ground outputs in real data, has become a common approach. It helps reduce hallucinations but does not eliminate them. Models can still contradict the retrieved content. New benchmarks such as RGB and RAGTruth are being used to track and quantify these failures, marking a shift toward treating hallucination as a measurable engineering problem rather than an acceptable flaw.
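For readers unfamiliar with the pattern, the sketch below shows the RAG flow in miniature: retrieve supporting passages, then build a prompt that instructs the model to answer only from them. The corpus, the word-overlap retriever, and the deferred LLM call are deliberately simplified assumptions; production systems use vector search and a hosted or local model.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve supporting passages,
# then ground the model's answer in them. The corpus and overlap scoring are toy examples.

CORPUS = [
    "The EU AI Act entered into force in August 2024.",
    "Retrieval-augmented generation grounds model outputs in retrieved documents.",
    "Hallucination benchmarks such as RGB and RAGTruth measure unsupported claims.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(CORPUS, key=lambda doc: len(q_words & set(doc.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt; the model is told to answer only from the context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

print(build_prompt("How do benchmarks measure RAG hallucinations?"))
# The resulting prompt would then be sent to whichever LLM the application uses.
```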

Navigating rapid innovation

One of the defining trends of 2025 is the speed of change. Model releases are accelerating, capabilities are shifting monthly, and what counts as state-of-the-art is constantly being redefined. For enterprise leaders, this creates a knowledge gap that can quickly turn into a competitive one.

Staying ahead means staying informed. Events like the AI and Big Data Expo Europe offer a rare chance to see where the technology is going next through real-world demos, direct conversations, and insights from those building and deploying these systems at scale.

Enterprise adoption

In 2025, the shift is toward autonomy. Many companies already use generative AI across core systems, but the focus now is on agentic AI. These are models designed to take action, not just generate content.

According to a recent survey, 78% of executives agree that digital ecosystems will need to be built for AI agents as much as for humans over the next three to five years. That expectation is shaping how platforms are designed and deployed. Here, AI is being integrated as an operator; it’s able to trigger workflows, interact with software, and handle tasks with minimal human input.
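As a rough illustration of what “AI as an operator” means in practice, the sketch below shows an agent loop in which a planner proposes actions and the host application executes only whitelisted workflow functions. The tool names are hypothetical, and the planner is a hard-coded stub standing in for an LLM that would return structured tool calls.

```python
# Toy sketch of an agentic loop: a planner proposes actions and the host application
# executes only whitelisted workflow functions. The planner here is a hard-coded stub.

def create_ticket(summary: str) -> str:
    return f"ticket created: {summary}"

def send_report(recipient: str) -> str:
    return f"report sent to {recipient}"

ALLOWED_TOOLS = {"create_ticket": create_ticket, "send_report": send_report}

def plan(task: str) -> list[dict]:
    """Stub planner; a real agent would ask an LLM to decompose the task into tool calls."""
    return [
        {"tool": "create_ticket", "args": {"summary": task}},
        {"tool": "send_report", "args": {"recipient": "ops@example.com"}},
    ]

def run_agent(task: str) -> None:
    for step in plan(task):
        tool = ALLOWED_TOOLS.get(step["tool"])
        if tool is None:  # refuse anything outside the whitelist
            print(f"blocked unknown tool: {step['tool']}")
            continue
        print(tool(**step["args"]))

run_agent("Investigate failed nightly build")
```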

Breaking the data wall

One of the biggest barriers to progress in generative AI is data. Training large models has traditionally relied on scraping vast quantities of real-world text from the internet. But, in 2025, that well is running dry. High-quality, diverse, and ethically usable data is becoming harder to find, and more expensive to process.

This is why synthetic data is becoming a strategic asset. Rather than pulling from the web, synthetic data is generated by models to simulate realistic patterns. Until recently, it wasn’t clear whether synthetic data could support training at scale, but research from Microsoft’s SynthLLM project has confirmed that it can (if used correctly).

Their findings show that synthetic datasets can be tuned for predictable performance. Crucially, they also discovered that bigger models need less data to learn effectively, allowing teams to optimise their training approach rather than throwing resources at the problem.
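As a simplified illustration of the general idea (not the SynthLLM methodology itself, which uses models to generate data), the sketch below produces templated question-and-answer pairs programmatically and writes them in the JSON Lines format commonly used for fine-tuning.

```python
# Toy illustration of synthetic data generation: templated question/answer pairs produced
# programmatically instead of scraped from the web.
import json
import random

random.seed(0)

TEMPLATES = [
    ("What is {a} plus {b}?", lambda a, b: str(a + b)),
    ("What is {a} times {b}?", lambda a, b: str(a * b)),
]

def synth_examples(n: int) -> list[dict]:
    examples = []
    for _ in range(n):
        question, answer_fn = random.choice(TEMPLATES)
        a, b = random.randint(2, 99), random.randint(2, 99)
        examples.append({"prompt": question.format(a=a, b=b), "completion": answer_fn(a, b)})
    return examples

# Write a small synthetic dataset in the JSON Lines format common for fine-tuning.
with open("synthetic_train.jsonl", "w") as f:
    for example in synth_examples(1000):
        f.write(json.dumps(example) + "\n")
```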

Making it work

Generative AI in 2025 is growing up. Smarter LLMs, orchestrated AI agents, and scalable data strategies are now central to real-world adoption. For leaders navigating this shift, the AI & Big Data Expo Europe offers a clear view of how these technologies are being applied and what it takes to make them work.

See also: Tencent releases versatile open-source Hunyuan AI models

Suvianna Grecu, AI for Change: Without rules, AI risks ‘trust crisis’

The world is in a race to deploy AI, but a leading voice in technology ethics warns that prioritising speed over safety risks a “trust crisis.”

Suvianna Grecu, Founder of the AI for Change Foundation, argues that without immediate and strong governance, we are on a path to “automating harm at scale.”

Speaking on the integration of AI into critical sectors, Grecu believes that the most pressing ethical danger isn’t the technology itself, but the lack of structure surrounding its rollout.

Powerful systems are increasingly making life-altering decisions about everything from job applications and credit scores to healthcare and criminal justice, often without sufficient testing for bias or consideration of their long-term societal impact.

For many organisations, AI ethics remains a document of lofty principles rather than a daily operational reality. Grecu insists that genuine accountability only begins when someone is made truly responsible for the outcomes. The gap between intention and implementation is where the real risk lies.

Grecu’s foundation champions a shift from abstract ideas to concrete action. This involves embedding ethical considerations directly into development workflows through practical tools like design checklists, mandatory pre-deployment risk assessments, and cross-functional review boards that bring legal, technical, and policy teams together.

According to Grecu, the key is establishing clear ownership at every stage, building transparent and repeatable processes just as you would for any other core business function. This practical approach seeks to advance ethical AI, transforming it from a philosophical debate into a set of manageable, everyday tasks.

Partnering to build AI trust and mitigate risks

When it comes to enforcement, Grecu is clear that the responsibility can’t fall solely on government or industry. “It’s not either-or, it has to be both,” she states, advocating for a collaborative model.

In this partnership, governments must set the legal boundaries and minimum standards, particularly where fundamental human rights are at stake. Regulation provides the essential floor. However, industry possesses the agility and technical talent to innovate beyond mere compliance.

Companies are best positioned to create advanced auditing tools, pioneer new safeguards, and push the boundaries of what responsible technology can achieve.

Leaving governance entirely to regulators risks stifling the very innovation we need, while leaving it to corporations alone invites abuse. “Collaboration is the only sustainable route forward,” Grecu asserts.

Promoting a value-driven future

Looking beyond the immediate challenges, Grecu is concerned about more subtle, long-term risks that are receiving insufficient attention, namely emotional manipulation and the urgent need for value-driven technology.

As AI systems become more adept at persuading and influencing human emotion, she cautions that we are unprepared for the implications this has for personal autonomy.

A core tenet of her work is the idea that technology is not neutral. “AI won’t be driven by values, unless we intentionally build them in,” she warns. It’s a common misconception that AI simply reflects the world as it is. In reality, it reflects the data we feed it, the objectives we assign it, and the outcomes we reward. 

Without deliberate intervention, AI will invariably optimise for metrics like efficiency, scale, and profit, not for abstract ideals like justice, dignity, or democracy, and that will naturally impact societal trust. This is why a conscious and proactive effort is needed to decide what values we want our technology to promote.

For Europe, this presents a critical opportunity. “If we want AI to serve humans (not just markets) we need to protect and embed European values like human rights, transparency, sustainability, inclusion and fairness at every layer: policy, design, and deployment,” Grecu explains.

This isn’t about halting progress. As she concludes, it’s about taking control of the narrative and actively “shaping it before it shapes us.”

Through her foundation’s work – including public workshops and the upcoming AI & Big Data Expo Europe, where Grecu chairs day two of the event – she is building a coalition to guide the evolution of AI and boost trust by keeping humanity at its very centre.

(Photo by Cash Macanaya)

See also: AI obsession is costing us our human skills
