ChatGPT
Diamond Member-
Posts
559 -
Joined
-
Last visited
-
Feedback
0%
Content Type
Profiles
Forums
Downloads
Store
Everything posted by ChatGPT
-
If you’ve ever thought companies talk more than act when it comes to their AI strategy, a new Cisco report backs you up. It turns out that just 13 percent globally are actually prepared for the AI revolution. However, this small group – which Cisco calls the ‘Pacesetters’ – are lapping the competition. The third annual Cisco AI Readiness Index found these top performers are four times more likely to get their AI projects out of the pilot stage and into the real world. More importantly, they are 50 percent more likely to see measurable value from their efforts. What they’ve figured out is that winning with AI is about getting the foundations right with a disciplined approach that weaves together strategy, infrastructure, and security. And it pays off, with 90 percent of these Pacesetters seeing real gains in profit, productivity, and innovation, while most of their peers are hovering around the 60 percent mark. Jeetu Patel, Cisco’s President and Chief Product Officer, said: “This year’s Cisco AI Readiness Index makes one thing clear: AI doesn’t fail, readiness fails. “The most AI-ready organisations – the Pacesetters from our research – prove it. They’re four times more likely to move pilots into production and 50 percent more likely to realise measurable value. So, with more than 80 percent of organisations we surveyed about to deploy AI agents, these new findings confirm readiness, discipline, and action are key to unlocking value.” So, what’s their secret? The research shows a clear pattern. Pacesetters don’t treat AI as a side project; it’s a core part of their business strategy. Almost every single one of them (99%) has a proper AI roadmap, something only 58 percent of other companies can claim. They also put their money where their mouth is. For 79 percent of them, AI is the top investment priority, a commitment shared by only 24 percent of the rest. These leaders are building for the long haul, with 98 percent designing their networks to handle the immense scale and complexity of AI, compared to just 46 percent of their peers. It gives them the confidence that their systems can handle whatever is thrown at them; 71 percent say their networks can scale instantly for any AI project, a feeling shared by a worryingly low 15 percent of other organisations. The report also gives us a glimpse into the near future, and for many, it looks rocky. Two huge challenges are looming: the widespread use of AI agents and a problem Cisco has dubbed ‘AI Infrastructure Debt’. 83 percent of companies are planning to deploy AI agents as part of their strategy, with nearly 40 percent expecting them to be working alongside human employees within a year. But here’s the problem: most of these firms are trying to build on shaky ground. Over half of companies admitted their current networks simply can’t handle the data volumes or complexity that these advanced AI systems demand. The Pacesetters, on the other hand, have already done their homework, with 75 percent feeling fully equipped to secure and control these agents, compared to just 31 percent of others. This leads us to the ticking time bomb of ‘AI Infrastructure Debt’. Think of it as the modern version of the technical debt that plagued companies for years. It’s the result of all the compromises, postponed upgrades, and underfunded plans that quietly pile up, slowly strangling the long-term value of AI. The warning signs are already flashing. Nearly two-thirds of leaders expect their workloads to jump by over 30 percent in the next three years, and a similar number are struggling just to get their data organised in one place. Add to that the fact that only a quarter have enough GPU power, and you see a massive gap between ambition and reality. The lesson from Cisco’s report is clear and simple: value follows readiness. In the race to adopt AI, the Pacesetters have shown that the organisations that take the time to build a strong foundation to support their strategy are the ones that will pull away from the pack. See also: Gemini Enterprise: Google aims to put an AI agent on every desk Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Cisco: Only 13% have a solid AI strategy and they’re lapping rivals appeared first on AI News. View the full article
-
Salesforce plans to invest $15 billion in San Francisco over the next five years to help businesses adopt AI. The move underscores the company’s push to stay competitive as AI becomes central to enterprise software. Founded and headquartered in San Francisco since 1999, Salesforce has been adding AI features across its products, including the workplace messaging tool Slack. The company is competing with ServiceNow, Oracle, and Microsoft to attract organisations eager to integrate AI into their operations. Part of the new investment will fund an AI incubator on Salesforce’s San Francisco campus and help companies deploy AI agents — digital assistants that can handle tasks for users. “This $15 billion investment reflects our deep commitment to our hometown — advancing AI innovation, creating jobs and helping companies and our communities thrive,” said CEO Marc Benioff. The announcement comes just before Dreamforce, Salesforce’s annual conference, which runs from October 14 to 16 in San Francisco. The company expects around 50,000 people to attend and estimates the event will bring in about $130 million in local revenue. Salesforce, which employs more than 76,000 people worldwide, also announced last week that it will spend $1 billion in Mexico over the next five years. The company has operated there since 2006. Morningstar analyst Dan Romanoff said the new spending aligns with the company’s long-term goals. “If the company wants to remain a leader in an important emerging technology area, it must have a pipeline of talent to innovate and drive the field forward. We already see shortages of AI talent, so this makes sense,” he said. Salesforce shares rose 2.8% on Monday but remain down about 28% since the start of the year. On the same day, Salesforce also launched Agentforce 360, a new AI platform for businesses. While many companies are still experimenting with AI-driven automation, Salesforce says it has already rolled out multiple versions of its “agentic” technology, used by thousands of customers and within its own operations. The company describes the “Agentic Enterprise” as a workplace model where AI supports people rather than replaces them. In this setup, AI agents help teams respond faster, track leads, provide continuous service, and make better decisions. The goal, Salesforce says, is to boost productivity and customer engagement. Agentforce 360 combines four key parts of this model: Agentforce 360 Platform: A framework for building enterprise AI agents, now featuring a conversational builder, hybrid reasoning for more accurate results, and voice support. Data 360: A unified data layer that gives AI systems the context they need. Features like Intelligent Context and Tableau Semantics help turn raw data into meaningful insights. Customer 360 Apps: The tools that record how a company sells, serves, and operates — now enhanced with AI to better understand customer behaviour and internal processes. Slack: A shared space where people and AI agents can work together, linking information and actions in real time. Salesforce says this setup allows businesses to build AI agents that rely on trusted data, function across departments, and integrate directly with existing workflows. Its open ecosystem also lets partners tailor the technology for different industries. Last month, Salesforce forecast third-quarter revenue that fell short of analyst expectations but expanded its share buyback plan by $20 billion. (Photo by Denys Nevozhai) See also: Salesforce Agentforce 3 brings visibility to AI agents Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Salesforce commits $15 billion to boost AI growth in San Francisco appeared first on AI News. View the full article
-
Meta and Oracle are upgrading their AI data centres with NVIDIA’s Spectrum-X Ethernet networking switches — technology built to handle the growing demands of large-scale AI systems. Both companies are adopting Spectrum-X as part of an open networking framework designed to improve AI training efficiency and accelerate deployment across massive compute clusters. Jensen Huang, NVIDIA’s founder and CEO, said trillion-parameter models are transforming data centres into “giga-scale AI factories,” adding that Spectrum-X acts as the “nervous system” connecting millions of GPUs to train the largest models ever built. Oracle plans to use Spectrum-X Ethernet with its Vera Rubin architecture to build large-scale AI factories. Mahesh Thiagarajan, Oracle Cloud Infrastructure’s executive vice president, said the new setup will allow the company to connect millions of GPUs more efficiently, helping customers train and deploy new AI models faster. Meta, meanwhile, is expanding its AI infrastructure by integrating Spectrum-X Ethernet switches into the Facebook Open Switching System (FBOSS), its in-house platform for managing network switches at scale. According to Gaya Nagarajan, Meta’s vice president of networking engineering, the company’s next-generation network must be open and efficient to support ever-larger AI models and deliver services to billions of users. Building flexible AI systems According to Joe DeLaere, who leads NVIDIA’s Accelerated Computing Solution Portfolio for Data Centre, flexibility is key as data centres grow more complex. He explained that NVIDIA’s MGX system offers a modular, building-block design that lets partners combine different CPUs, GPUs, storage, and networking components as needed. The system also promotes interoperability, allowing organisations to use the same design across multiple generations of hardware. “It offers flexibility, faster time to market, and future readiness,” DeLaere said to the media. As AI models become larger, power efficiency has become a central challenge for data centres. DeLaere said NVIDIA is working “from chip to grid” to improve energy use and scalability, collaborating closely with power and cooling vendors to maximise performance per watt. One example is the shift to 800-volt DC power delivery, which reduces heat loss and improves efficiency. The company is also introducing power-smoothing technology to reduce spikes on the electrical grid — an approach that can cut maximum power needs by up to 30 per cent, allowing more compute capacity within the same footprint. Scaling up, out, and across NVIDIA’s MGX system also plays a role in how data centres are scaled. Gilad Shainer, the company’s senior vice president of networking, told the media that MGX racks host both compute and switching components, supporting NVLink for scale-up connectivity and Spectrum-X Ethernet for scale-out growth. He added that MGX can connect multiple AI data centres together as a unified system — what companies like Meta need to support massive distributed AI training operations. Depending on distance, they can link sites through dark fibre or additional MGX-based switches, enabling high-speed connections across regions. Meta’s AI adoption of Spectrum-X reflects the growing importance of open networking. Shainer said the company will use FBOSS as its network operating system but noted that Spectrum-X supports several others, including Cumulus, SONiC, and Cisco’s NOS through partnerships. This flexibility allows hyperscalers and enterprises to standardise their infrastructure using the systems that best fit their environments. Expanding the AI ecosystem NVIDIA sees Spectrum-X as a way to make AI infrastructure more efficient and accessible across different scales. Shainer said the Ethernet platform was designed specifically for AI workloads like training and inference, offering up to 95 percent effective bandwidth and outperforming traditional Ethernet by a wide margin. He added that NVIDIA’s partnerships with companies such as Cisco, xAI, Meta, and Oracle Cloud Infrastructure are helping to bring Spectrum-X to a broader range of environments — from hyperscalers to enterprises. Preparing for Vera Rubin and beyond DeLaere said NVIDIA’s upcoming Vera Rubin architecture is expected to be commercially available in the second half of 2026, with the Rubin CPX product arriving by year’s end. Both will work alongside Spectrum-X networking and MGX systems to support the next generation of AI factories. He also clarified that Spectrum-X and XGS share the same core hardware but use different algorithms for varying distances — Spectrum-X for inside data centres and XGS for inter–data centre communication. This approach minimises latency and allows multiple sites to operate together as a single large AI supercomputer. Collaborating across the power chain To support the 800-volt DC transition, NVIDIA is working with partners from chip level to grid. The company is collaborating with Onsemi and Infineon on power components, with Delta, Flex, and Lite-On at the rack level, and with Schneider Electric and Siemens on data centre designs. A technical white paper detailing this approach will be released at the OCP Summit. DeLaere described this as a “holistic design from silicon to power delivery,” ensuring all systems work seamlessly together in high-density AI environments that companies like Meta and Oracle operate. Performance advantages for hyperscalers Spectrum-X Ethernet was built specifically for distributed computing and AI workloads. Shainer said it offers adaptive routing and telemetry-based congestion control to eliminate network hotspots and deliver stable performance. These features enable higher training and inference speeds while allowing multiple workloads to run simultaneously without interference. He added that Spectrum-X is the only Ethernet technology proven to scale at extreme levels, helping organisations get the best performance and return on their GPU investments. For hyperscalers such as Meta, that scalability helps manage growing AI training demands and keep infrastructure efficient. Hardware and software working together While NVIDIA’s focus is often on hardware, DeLaere said software optimisation is equally important. The company continues to improve performance through co-design — aligning hardware and software development to maximise efficiency for AI systems. NVIDIA is investing in FP4 kernels, frameworks such as Dynamo and TensorRT-LLM, and algorithms like speculative decoding to improve throughput and AI model performance. These updates, he said, ensure that systems like Blackwell continue to deliver better results over time for hyperscalers such as Meta that rely on consistent AI performance. Networking for the trillion-parameter era The Spectrum-X platform — which includes Ethernet switches and SuperNICs — is NVIDIA’s first Ethernet system purpose-built for AI workloads. It’s designed to link millions of GPUs efficiently while maintaining predictable performance across AI data centres. With congestion-control technology achieving up to 95 per cent data throughput, Spectrum-X marks a major leap over standard Ethernet, which typically reaches only about 60 per cent due to flow collisions. Its XGS technology also supports long-distance AI data centre links, connecting facilities across regions into unified “AI super factories.” By tying together NVIDIA’s full stack — GPUs, CPUs, NVLink, and software — Spectrum-X provides the consistent performance needed to support trillion-parameter models and the next wave of generative AI workloads. (Photo by Nvidia) See also: OpenAI and Nvidia plan $100B chip deal for AI future Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Meta and Oracle choose NVIDIA Spectrum-X for AI data centres appeared first on AI News. View the full article
-
[AI]Vibe analytics for data insights that are simple to surface
ChatGPT posted a topic in World News
Every business, big or small, has a wealth of valuable data that can inform impactful decisions. But to extract insights, there’s usually a good deal of manual work that needs to be done on raw data, either by semitechnical users (such as founders and product leaders), or dedicated – and expensive – data specialists. Either way, to produce real value, information has to be collected, shepherded, altered, and drawn from dozens of spreadsheets and different business platforms: the organisation’s CRM, its martech stack, e-commerce system, and website data, to name a few common examples. Clearly, that’s a time consuming process, and the outcomes can be old news, rather than up-to-the-minute insights. Introducing vibe analytics The ideal business solution would be querying real-time data using natural language (vs writing code in SQL or Python), with smart systems working in the background to correlate and parse different data sources and formats. This is vibe analysis, where users can simply ask questions in plain language and let AI do the heavy lifting. Instead of manual data-wrestling and business users spending hours uncovering insights hidden deep in datasets, they get results fast — in text, graphics, summaries, and, where needed, detailed breakdowns. Fast and accurate data analysis is important to every organisation, but for many, real-time insights are crucial. In the agricultural sector, for example, Lumo uses Fabi.ai’s platform to manage large fleets of IoT devices, collecting telemetry data continuously and adjusting its systems based on collated, normalised, and parsed information. Using vibe analysis, Lumo sees device performance immediately, as well as trends that develop over time. It pulls in weather data, and correlates the device fleet’s performance metrics with environmental factors. The data dashboards Lumo has built are not the result of many months of work writing data integration routines and front-end coding, but are a result of vibe analysis. Getting under the hood Sceptics of AI’s abilities often point to vibe-coding as an example of where things can go wrong, raising concerns about quality control and the “****** box” nature of AI-driven analysis. Many users want visibility into how results are generated, with the option to inspect logic, tweak queries, or adjust API calls to ensure accuracy. When done well, vibe analytics addresses these concerns by combining transparency with rigour. Natural language inputs and modular build methods make it accessible to semitechnical users (such as founders and product leaders), while the underlying systems meet the accuracy and reliability standards expected by technical teams. This means users can trust the output whether they’re working independently or in collaboration with data scientists and developers. Designed specifically for both data experts and semitechnical data users, Fabi is a generative *** platform that brings vibe analysis done right to life. The code it produces can be hidden away entirely, or shown verbatim and edited in place, giving semitechnical users a chance to understand how the analysis works under the hood, while allowing technical teams to verify and fine-tune the system’s output. Data flows from an organisation’s systems (the platform mediates connections) or is uploaded. The resultant actionable insights can be pushed/scheduled to email, slack, google sheets, displayed in graphics, text, or a mixture of both. Fabi: A generative *** platform Co-founder and CEO of Fabi, Marc Dupuis, describes how many organisations start using the analysis platform by testing workflows and queries on sample data before progressing to real-world analysis. As users delve into data troves and test their work, they can check its veracity, often in collaboration with someone more technically astute, thanks to the platform’s open, transparent view of Smartbooks to show what’s happening under the hood. It works the other way, too: semitechnical data users can confirm that the data being processed is relevant and accurate. To address common concerns about quality control and “******-box” AI, Fabi limits vibe analysis to internally controlled, carefully accessed data sources, with built-in guardrails. Code can be shown verbatim and edited in place, giving semitechnical users visibility into how results are produced, while allowing technical teams to audit, verify, and fine-tune outputs. Collaborative sharing of reports, findings, and working code helps teams validate results without working outside their areas of expertise. Typical workflows include real-time KPI dashboards; natural-language Q&A over operational and product data; correlation analyses (for example, device performance against weather conditions); cohort and trend exploration; A/B test readouts and experiment summaries; and scheduled, shareable reports that mix text, graphics, summaries, and detailed breakdowns. These collaborative workflows are designed to be efficient and intuitive, so, whether working collectively or solo, users can unlock insights from even the most complex data arrangements. Fabi landed its first round of backing from Eniac Ventures in 2023, so it’s a company on the move. The team continues to expand its capabilities, with plans to make vibe analysis even more seamless for both semitechnical and technical users. Organisations interested in exploring the platform can start by testing workflows on sample data, then scale up to real-world use cases as they grow more confident in the system’s transparency and accuracy. (Photo by Alina Grubnyak) See also: Generative AI trends 2025: LLMs, data scaling & enterprise adoption Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Vibe analytics for data insights that are simple to surface appeared first on AI News. View the full article -
Google Cloud has launched Gemini Enterprise, a new platform it calls “the new front door for AI in the workplace”. Announced during a virtual press conference, the platform brings together Google’s Gemini models, first and third-party agents, and the core technology of what was formerly known as Google Agentspace to create a singular agentic platform. It aims to democratise the creation and use of AI-powered agents for automating complex workflows and boosting productivity across entire organisations. Thomas Kurian, CEO of Google Cloud, introduced the new offering, explaining that as customers moved beyond simply building applications with AI, the company saw them “advancing to build agents”. Gemini Enterprise is Google’s answer to this evolution, bundling its entire AI stack into a cohesive user experience that allows developers and business users alike to build agents with a no-code workbench. The platform is built on six core components. The “brains” are Google’s powerful Gemini models, including the newly available Gemini 2.5 Flash Image. The “workbench” is the agent creation and orchestration technology pioneered with Agentspace, allowing any user to manage agents and automate processes. Finally, this is complemented by the “taskforce,” a suite of pre-built Google agents for specialised jobs like the new Code Assist Agent and the Deep Research Agent. To make these agents effective, there is deep integration with a company’s data through new connectors for systems like Microsoft Teams, Salesforce, Box, Confluence, and Jira. Kurian explained the system’s intelligence, stating, “We remember who you are and what you do and use it to personalise the context you have when we work with a large language model”. A central “governance” framework allows organisations to monitor, secure, and audit all agents from one place, with protections like Model Armor now built-in. Finally, the platform is built on an open “ecosystem” of over 100,000 partners. Gemini Enterprise: A glimpse into the future of work To demonstrate the platform’s capabilities, Maryam Gholami walked through a practical use case. “The beauty of Gemini Enterprise is that it offers the familiar interface of the Gemini but built for enterprise workflows, including full control to enable or disable any of the sources as needed,” Gholami said. Using a custom ‘campaigns agent’, she used four different agents to handle market research, media generation, team communications, and inventory management. The agent identified a market trend towards sci-fi themes, flagged a 25 percent inventory gap, created a purchase order in ServiceNow, drafted an email to store managers, and generated social media assets. “Gemini Enterprise is more than just a chat interface,” Gholami concluded after the demonstration. “It’s an end-to-end AI system that unifies your data, your tools, and your teams, turning weeks of complex work into a single, streamlined conversation”. Customers drive transformation with AI fleets Proving the platform’s real-world value, Nirmal Saverimuttu, CEO of Virgin Voyages, shared his perspective that “any major disruption like AI requires a cultural transformation to be successful”. Importantly, Saverimuttu stressed that AI’s role is to work alongside, not replace, his team. “Our people are our biggest asset. AI. And never replace our people,” he stated. “To me, AI is about getting the best from our people. It’s about unleashing human potential”. The cruise line has deployed a fleet of over 50 specialised AI agents company-wide. The first, ‘Email Ellie’, has boosted content production speed by 40 percent and contributed to a 28 percent year-over-year increase in July sales. Saverimuttu also noted welcome operational gains, including a “35 percent reduction in agency dependency costs, resulting in creative independence”. Another early adopter is Macquarie Bank. The bank, one of Australia’s largest, has rolled out Gemini Enterprise to every employee and reports that 99 percent of its staff have already completed generative AI training. Google emphasised that Gemini Enterprise is an open platform, with partners like Box, Salesforce, and ServiceNow announcing compatible agents. A new AI agent finder will also help customers discover thousands of validated partner solutions. To support adoption, Google has also launched Google Skills, a new free learning platform with 3,000 courses. As part of this, the company announced the Gemini Enterprise Agent Ready (GEAR) program; an educational sprint designed to enable one million developers to build and deploy agents. Pricing and availability of Gemini Enterprise Gemini Enterprise is available globally in all countries where Google Cloud products are sold. Gemini Business, for small businesses, starts at $21 per seat per month, while Gemini Enterprise Standard and Plus editions for larger organisations start at $30 per seat per month. For Kurian, the launch is about democratising powerful technology. “Gemini Enterprise technology is really about reimagining a super powerful AI technology [for the workplace] but making it super easy to use and putting it in the hands of every company and every user in those companies,” Kurian concludes. See also: AI value remains elusive despite soaring investment Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Gemini Enterprise: Google aims to put an AI agent on every desk appeared first on AI News. View the full article
-
Cisco has entered an increasingly competitive race to dominate AI data centre interconnect technology, becoming the latest major player to unveil purpose-built routing hardware for connecting distributed AI workloads across multiple facilities. The networking giant unveiled its 8223 routing system on October 8, introducing what it claims is the industry’s first 51.2 terabit per second fixed router specifically designed to link data centres running AI workloads. At its core sits the new Silicon One P200 chip, representing Cisco’s answer to a challenge that’s increasingly constraining the AI industry: what happens when you run out of room to grow. A three-way battle for scale-across supremacy? For context, Cisco isn’t alone in recognising this opportunity. Broadcom fired the first salvo in mid-August with its “Jericho 4” StrataDNX switch/router chips, which began sampling and also offered 51.2 Tb/sec of aggregate bandwidth backed by HBM memory for deep packet buffering to manage congestion. Two weeks after Broadcom’s announcement, Nvidia unveiled its Spectrum-XGS scale-across network—a notably cheeky name given that Broadcom’s “Trident” and “Tomahawk” switch ASICs belong to the StrataXGS family. Nvidia secured CoreWeave as its anchor customer but provided limited technical details about the Spectrum-XGS ASICs. Now Cisco is rolling out its own components for the scale-across networking market, setting up a three-way competition among networking heavyweights. The problem: AI is too big for one building To understand why multiple vendors are rushing into this space, consider the scale of modern AI infrastructure. Training large language models or running complex AI systems requires thousands of high-powered processors working in concert, generating enormous amounts of heat and consuming massive amounts of electricity. Data centres are hitting hard limits—not just on available space, but on how much power they can supply and cool. “AI compute is outgrowing the capacity of even the largest data centre, driving the need for reliable, secure connection of data centres hundreds of miles apart,” said Martin Lund, Executive Vice President of Cisco’s Common Hardware Group. The industry has traditionally addressed capacity challenges through two approaches: scaling up (adding more capability to individual systems) or scaling out (connecting more systems within the same facility). But both strategies are reaching their limits. Data centres are running out of physical space, power grids can’t supply enough electricity, and cooling systems can’t dissipate the heat fast enough. This forces a third approach: “scale-across,” distributing AI workloads across multiple data centres that might be in different cities or even different states. However, this creates a new problem—the connections between these facilities become critical bottlenecks. Why traditional routers fall short AI workloads behave differently from typical data centre traffic. Training runs generate massive, bursty traffic patterns—periods of intense data movement followed by relative quiet. If the network connecting data centres can’t absorb these surges, everything slows down, wasting expensive computing resources and, critically, time and money. Traditional routing equipment wasn’t designed for this. Most routers prioritise either raw speed or sophisticated traffic management, but struggle to deliver both simultaneously while maintaining reasonable power consumption. For AI data centre interconnect applications, organisations need all three: speed, intelligent buffering, and efficiency. Cisco’s answer: The 8223 system Cisco’s 8223 system represents a departure from general-purpose routing equipment. Housed in a compact three-rack-unit chassis, it delivers 64 ports of 800-gigabit connectivity—currently the highest density available in a fixed routing system. More importantly, it can process over 20 billion packets per second and scale up to three Exabytes per second of interconnect bandwidth. The system’s distinguishing feature is deep buffering capability, enabled by the P200 chip. Think of buffers as temporary holding areas for data—like a reservoir that catches water during heavy rain. When AI training generates traffic surges, the 8223’s buffers absorb the spike, preventing network congestion that would otherwise slow down expensive GPU clusters sitting idle waiting for data. Power efficiency is another critical advantage. As a 3RU system, the 8223 achieves what Cisco describes as “switch-like power efficiency” while maintaining routing capabilities—crucial when data centres are already straining power budgets. The system also supports 800G coherent optics, enabling connections spanning up to 1,000 kilometres between facilities—essential for geographic distribution of AI infrastructure. Industry adoption and real-world applications Major hyperscalers are already deploying the technology. Microsoft, an early Silicon One adopter, has found the architecture valuable across multiple use cases. Dave Maltz, technical fellow and corporate vice president of Azure Networking at Microsoft, noted that “the common ASIC architecture has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments.” Alibaba Cloud plans to use the P200 as a foundation for expanding its eCore architecture. Dennis Cai, vice president and head of network Infrastructure at Alibaba Cloud, stated the chip “will enable us to extend into the Core network, replacing traditional chassis-based routers with a cluster of P200-powered devices.” Lumen is also exploring how the technology fits into its network infrastructure plans. Dave Ward, chief technology officer and product officer at Lumen, said the company is “exploring how the new Cisco 8223 technology may fit into our plans to enhance network performance and roll out superior services to our customers.” Programmability: Future-proofing the investment One often-overlooked aspect of AI data centre interconnect infrastructure is adaptability. AI networking requirements are evolving rapidly, with new protocols and standards emerging regularly. Traditional hardware typically requires replacement or expensive upgrades to support new capabilities. The P200’s programmability addresses this challenge. Organisations can update the silicon to support emerging protocols without replacing hardware—important when individual routing systems represent significant capital investments and AI networking standards remain in flux. Security considerations Connecting data centres hundreds of miles apart introduces security challenges. The 8223 includes line-rate encryption using post-quantum resilient algorithms, addressing concerns about future threats from quantum computing. Integration with Cisco’s observability platforms provides detailed network monitoring to identify and resolve issues quickly. Can Cisco compete? With Broadcom and Nvidia already staking their claims in the scale-across networking market, Cisco faces established competition. However, the company brings advantages: a long-standing presence in enterprise and service provider networks, the mature Silicon One portfolio launched in 2019, and relationships with major hyperscalers already using its technology. The 8223 ships initially with open-source SONiC support, with IOS XR planned for future availability. The P200 will be available across multiple platform types, including modular systems and the Nexus portfolio. This flexibility in deployment options could prove decisive as organisations seek to avoid vendor lock-in while building out distributed AI infrastructure. Whether Cisco’s approach becomes the industry standard for AI data centre interconnect remains to be seen, but the fundamental problem all three vendors are addressing—efficiently connecting distributed AI infrastructure—will only grow more pressing as AI systems continue scaling beyond single-facility limits. The real winner may ultimately be determined not by technical specifications alone, but by which vendor can deliver the most complete ecosystem of software, support, and integration capabilities around their silicon. See also: Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Can Cisco’s new AI data centre router tackle the industry’s biggest infrastructure bottleneck? appeared first on AI News. View the full article
-
A new report from Red Hat finds that 89 percent of businesses are yet to see any customer value from their AI endeavours. However, organisations anticipate a 32 percent increase in AI investment by 2026. The survey finds that AI and security are the joint top IT priorities for *** organisations over the next 18 months, with 62 percent of respondents citing them as necessary. These are followed by hybrid or multi-cloud strategies and virtualisation, showing a clear direction in the technological focus of British businesses. However, the path to AI integration isn’t straightforward. The vast majority of organisations are facing barriers to adoption, with the high costs of implementation and maintenance being the biggest concern for 34 percent of respondents. Data privacy and security issues are a close second, troubling 30 percent, while 28 percent are struggling with the integration of AI into their existing systems. A particularly interesting finding is the prevalence of “shadow AI,” with 83 percent of organisations reporting the unauthorised use of AI tools by employees. This suggests a disconnect between official IT strategy and the day-to-day practices of the workforce, potentially introducing security risks and inefficiencies. In an effort to navigate these challenges, *** organisations are increasingly turning to open source software. The survey reveals that 84 percent of respondents consider enterprise open source important for their AI strategy, with similarly high figures for virtualisation, hybrid and multi-cloud, and security. Joanna Hodgson, *** Country Manager at Red Hat, said: “This year’s *** survey results show the gap between ambition and reality. Organisations are investing substantially in AI but currently only a few are delivering customer value. In the journey from experimentation to sustainable production, enterprise knowledge and integration with enterprise systems must pave the road to achieving value from AI. “Openness is a force for greater collaboration, sharing best practice and enabling flexibility. As is the case with successful hybrid cloud investments, open-source will continue to be the bedrock for making AI more consumable and reusable.” The survey also explored the specific areas of AI that are being prioritised. Agentic AI, which involves systems that can operate with a high degree of autonomy, is the top priority for 68 percent of respondents. This is followed by the desire to enable broad employee adoption and to operationalise AI. The skills gap remains a persistent challenge, with AI being the most urgent area of concern for the second consecutive year. Within the field of AI, the talent shortage is most acute in agentic AI, the ability to efficiently use AI capabilities, and educating the wider business on how to use AI. Despite these domestic challenges, there is a strong sense of optimism about the ***’s position on the global AI stage. 83 percent of respondents believe the *** is either already a global AI powerhouse or has the potential to become one within the next three years. However, this confidence is tempered by a lack of talent pipeline, limited public funding, and insufficient private sector engagement, which are seen as the main factors holding the *** back from extracting value from AI. The report also touches on the complexities of cloud adoption, which is further complicated by the integration of AI workloads. Internal silos, sovereignty concerns, and unclear return on investment continue to be barriers. In response, *** organisations are prioritising operational control and autonomy, securing the software supply chain, and maintaining flexibility in their choice of IT suppliers. Hans Roth, SVP and GM for EMEA at Red Hat, commented: “Organisations want greater operational control and IT resiliency to adapt in a world of constant disruption. The survey results, as well as our daily conversations, show sovereignty prominently on the agenda for enterprise’s ongoing cloud strategies and the budding AI opportunity. “Open-source is central to this shift as it provides businesses with the transparency and flexibility to innovate rapidly without compromise.” The findings from Red Hat’s latest survey show the *** is ready to tap the value potential of AI, but is also struggling with the practicalities of implementation, skills shortages, and the complexities of the technological environment. The strong emphasis on open-source suggests a pragmatic approach, supporting collaboration and flexibility in the pursuit of AI-driven innovation. See also: Samsung’s tiny AI model beats giant reasoning LLMs Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI value remains elusive despite soaring investment appeared first on AI News. View the full article
-
A new paper from a Samsung AI researcher explains how a small network can beat massive Large Language Models (LLMs) in complex reasoning. In the race for AI supremacy, the industry mantra has often been “******* is better.” Tech giants have poured billions into creating ever-larger models, but according to Alexia Jolicoeur-Martineau of Samsung SAIL Montréal, a radically different and more efficient path forward is possible with the Tiny Recursive Model (TRM). Using a model with just 7 million parameters, less than 0.01% of the size of leading LLMs, TRM achieves new state-of-the-art results on notoriously difficult benchmarks like the ARC-AGI intelligence test. Samsung’s work challenges the prevailing assumption that sheer scale is the only way to advance the capabilities of AI models, offering a more sustainable and parameter-efficient alternative. Overcoming the limits of scale While LLMs have shown incredible prowess in generating human-like text, their ability to perform complex, multi-step reasoning can be brittle. Because they generate answers token-by-token, a single mistake early in the process can derail the entire solution, leading to an invalid final answer. Techniques like Chain-of-Thought, where a model “thinks out loud” to break down a problem, have been developed to mitigate this. However, these methods are computationally expensive, often require vast amounts of high-quality reasoning data that may not be available, and can still produce flawed logic. Even with these augmentations, LLMs struggle with certain puzzles where perfect logical execution is necessary. Samsung’s work builds upon a recent AI model known as the Hierarchical Reasoning Model (HRM). HRM introduced a novel method using two small neural networks that recursively work on a problem at different frequencies to refine an answer. It showed great promise but was complicated, relying on uncertain biological arguments and complex fixed-point theorems that were not guaranteed to apply. Instead of HRM’s two networks, TRM uses a single, tiny network that recursively improves both its internal “reasoning” and its proposed “answer”. The model is given the question, an initial guess at the answer, and a latent reasoning feature. It first cycles through several steps to refine its latent reasoning based on all three inputs. Then, using this improved reasoning, it updates its prediction for the final answer. This entire process can be repeated up to 16 times, allowing the model to progressively correct its own mistakes in a highly parameter-efficient manner. Counterintuitively, the research discovered that a tiny network with only two layers achieved far better generalisation than a four-layer version. This reduction in size appears to prevent the model from overfitting; a common problem when training on smaller, specialised datasets. TRM also dispenses with the complex mathematical justifications used by its predecessor. The original HRM model required the assumption that its functions converged to a fixed point to justify its training method. TRM bypasses this entirely by simply back-propagating through its full recursion process. This change alone provided a massive boost in performance, improving accuracy on the Sudoku-Extreme benchmark from 56.5% to 87.4% in an ablation study. Samsung’s model smashes AI benchmarks with fewer resources The results speak for themselves. On the Sudoku-Extreme dataset, which uses only 1,000 training examples, TRM achieves an 87.4% test accuracy, a huge leap from HRM’s 55%. On Maze-Hard, a task involving finding long paths through 30×30 mazes, TRM scores 85.3% compared to HRM’s 74.5%. Most notably, TRM makes huge strides on the Abstraction and Reasoning Corpus (ARC-AGI), a benchmark designed to measure true fluid intelligence in AI. With just 7M parameters, TRM achieves 44.6% accuracy on ARC-AGI-1 and 7.8% on ARC-AGI-2. This outperforms HRM, which used a 27M parameter model, and even surpasses many of the world’s largest LLMs. For comparison, Gemini 2.5 Pro scores only 4.9% on ARC-AGI-2. The training process for TRM has also been made more efficient. An adaptive mechanism called ACT – which decides when the model has improved an answer enough and can move to a new data sample – was simplified to remove the need for a second, costly forward pass through the network during each training step. This change was made with no major difference in final generalisation. This research from Samsung presents a compelling argument against the current trajectory of ever-expanding AI models. It shows that by designing architectures that can iteratively reason and self-correct, it is possible to solve extremely difficult problems with a tiny fraction of the computational resources. See also: Google’s new AI agent rewrites code to automate vulnerability fixes Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Samsung’s tiny AI model beats giant reasoning LLMs appeared first on AI News. View the full article
-
The acquisition underscores Tuned Global’s commitment to shaping the future of the music industry by empowering clients with innovative technology and unmatched execution, while continuing to support existing Figaro.ai customers. Wednesday, 8 October, 2025 — Tuned Global, the leading music and media technology platform, has today announced the acquisition of Figaro.ai (by FeedForward), a London-based audio-AI company known for making music catalogues smarter and more discoverable. The acquisition advances Tuned Global’s strategy to be the cloud platform that clients build on to innovate. By bringing Figaro.ai into its partner-friendly platform, Tuned Global is enhancing its platform with AI innovation to deliver practical outcomes for customers: faster innovation, greater engagement and measurable business impact. “With Figaro.ai, Tuned Global cements its position as the most comprehensive music platform, where innovation across AI, fraud detection, rights management, search and recommendations can be built,” said Tuned Global CEO **** Raso. This move is about using technology to improve client outcomes and deliver value for the wider industry. It follows Tuned Global’s earlier acquisition of Pacemaker, known for pioneering AI-powered mixing technology, and reflects a clear strategy: acquiring companies whose innovation and IP help clients and the industry build the future of music technology and streaming. Integrating Figaro.ai strengthens Tuned Global’s ability to power premium, highly relevant music experiences at scale. Tuned Global remains an open ecosystem, with Figaro.ai integrated as another component in a broader platform. It will operate as an integrated but distinct component within Tuned Global’s broader platform, complementing existing partners and expanding client options. Current Figaro.ai clients will continue to be supported, backed by Tuned Global’s global reach, infrastructure and long-term commitment. The Figaro.ai team, including founders Lydia Gregory and Kevin Webster, will be integrated within Tuned Global, ensuring continuity for existing customers and adding deep AI expertise to accelerate the company’s roadmap. Tuned Global CEO **** Raso said he was thrilled to welcome the deeply skilled Figaro.ai team into the fold. “At Tuned Global, we see ourselves as the hub where innovation in music technology takes shape. We are building the largest open ecosystem of AI music intelligence, giving our clients maximum choice and real impact for the music industry,” he said. “With Figaro.ai joining the platform, we’re not only expanding that ecosystem with cutting-edge technology but also welcoming a highly skilled team whose expertise strengthens our ability to deliver music experiences that are powerful, flexible and future-ready.” Figaro.ai CEO and Co-founder Lydia Gregory said she was excited to join Tuned Global and amplify the impact of her company’s objectives. “Figaro.ai has always been about combining deep technical expertise with a passion for music discovery. I’m incredibly proud of the team that built this technology, and I’m thrilled that they are joining me as part of Tuned Global,” she said. “Being integrated into a platform of this scale means we can continue our mission with greater reach and impact, while ensuring continuity for the clients who already rely on us. Together, we’re ready to help the industry deliver music experiences that are more relevant, premium, and engaging than ever before.” About Tuned Global Tuned Global is the leading data-driven cloud and software platform that empowers businesses to integrate commercial music into their apps or launch complete streaming experiences using advanced APIs, real-time analytics, licensing solutions, and customisable white-label apps. Our turnkey solutions for music, audio, and video — coupled with a broad ecosystem of third-party music tech integrations — make us the most comprehensive platform for powering any digital music project. We streamline complexities in licensing, rights management, and content delivery, enabling rapid innovation and bringing new ideas to life. Since 2011, we’ve supported 40+ companies in 70+ countries — across telecom, fitness, media, aviation, and more — to deliver innovative music experiences faster and more cost-effectively. For more information, visit www.tunedglobal.com. About Figaro Figaro is the audio intelligence platform for music search, tagging and content detection – powering smarter discovery and content management across sync, DSPs, UGC, and distribution. The post Tuned Global strengthens its leadership in music technology with the acquisition of Figaro.ai appeared first on AI News. View the full article
-
Law enforcement, law firms, hospitals, and financial institutions are asked every day to release records, which can contain highly sensitive details – including addresses, social security numbers, medical diagnoses, evidence footage, and children’s identities. To meet compliance and security requirements, staff spend hundreds of hours manually redacting sensitive information, yet when that process goes wrong, there can be costly consequences. Last year, healthcare company Advanced was fined £6 million for losing patient records that, among other details, contained information about how to gain entry to the homes of 890 care receivers. Even the smallest oversights can create unpleasant headlines and catastrophic fines. This is the reality of modern data handling: leaks can be catastrophic, and compliance frameworks like GDPR, HIPAA, and FERPA, plus FOIA requests, require more vigilance than manual redaction can provide. What organizations need is not more staff to ensure proper redaction, but tools that achieve it quickly, reliably, and securely. CaseGuard Studio, a US-based AI redaction & investigation platform, has built software that automates this manual work with 98% accuracy. It can process thousands of files in minutes, working on data that’s kept securely on-premises of any file type, including video, audio, documents, and images. Why Manual Redaction No Longer Works Redaction is not new, but the tools most people reach for were not built for the complexity of today’s compliance requirements. Adobe Acrobat, for example, offers text redaction but needs manual work on each document. Premiere’s video editing software requires frame-by-frame subject tracking for video redaction, which is slow and impractical. These solutions provide only limited capability and were never designed for departments that process a multitude of redactions on a weekly basis. CaseGuard Studio, by contrast, was purpose-built for just this challenge. It can detect 12 categories of PII (personally-identifiable information) in video and images, such as faces, license plates, notepads, and more. It tracks and redacts all PII without needing manual frame-by-frame intervention. For audio and documents, CaseGuard Studio supports over 30 PII types, like names, phone numbers, and addresses. Custom keywords, phrases, or sentences can be auto-detected and redacted directly from thousands of documents and transcripts, streamlining compliance in ways manual tools can’t match. It transcribes recordings with high accuracy and can translate to and from 100+ languages, so it can redact sensitive terms in multilingual content. What once took days of human labor can now happen in minutes. CaseGuard Studio automates redaction work with 98% accuracy, up to 30 times faster than manual methods, and because it runs fully on-premise, data never leaves the device. What to Ask When Choosing Redaction Software For organizations evaluating redaction software, the decision often comes down to a handful of critical questions that determine whether a platform can deliver on both compliance and efficiency. The following questions are central to making the right choice. Can the software handle every file type we work with? From scanned forms and handwritten notes to video, audio, and still images, organizations in sensitive sectors deal with more than PDFs. Is the platform fully automated? If redaction still means blacking out text with a Sharpie or scrubbing video frame by frame, the process is slow and prone to error. Full automation ensures accuracy and frees staff for higher-impact work. Does the software ensure data never leaves your environment? On-premise deployment means sensitive files are processed locally, so nothing is exposed to third-party servers or cloud risks. Does the pricing stay predictable as you scale? Per-file or per-minute pricing quickly becomes unsustainable as workloads grow. Look for a flat subscription with unlimited redaction, so costs stay predictable no matter how much data you process. Evaluating CaseGuard Studio Against the Four Redaction Essentials When assessed against these requirements, CaseGuard Studio was the only platform in our evaluation that consistently delivered across all four redaction essentials. Auto-redact files from any source From text documents and scanned forms to video, audio, images, and even handwriting, redaction has to cover every format where sensitive information might appear. Missing one identifiable feature, a face in a crowd or an un-redacted license plate, and a single oversight can be the difference between full compliance and a lawsuit. CaseGuard Studio automatically detects and redacts sensitive information across all these file types within a single platform with complete compliance. 2. Automated bulk redaction at speed and scale Thousands of files can be redacted in bulk, turning weeks of manual effort into minutes of processing. CaseGuard Studio handles workloads up to 32x faster than manual methods, with 98% accuracy, giving organizations the speed and scalability to meet growing compliance demands. 3. Your data, your control CaseGuard Studio runs fully on-premise, within your secure environment, including air-gapped systems that are completely isolated from external networks. This ensures organizations retain full control of their data, with nothing exposed to third-party servers or cloud risks. 4. Unlimited redaction, no pay-per-file fees Pay-per-file pricing quickly adds up, making every additional redaction more expensive. CaseGuard Studio offers predictable pricing under a flat subscription with unlimited redaction, so costs remain the same no matter how heavy the redaction load is. Final Thoughts Over the course of our evaluation, we compared methods and platforms ranging from manual redaction and legacy PDF editors to newer AI-driven tools that have appeared in the last few years. Most delivered partial solutions, treating written documents well but failing on audio, while others blurred faces in video, but weren’t practical to use at scale. Cloud-only options raised sovereignty and compliance concerns that, for many users, would count them out of the running entirely. CaseGuard Studio was the only platform that consistently met all five requirements detailed above. It supports the widest of file types, from body-cam video to scanned or handwritten forms. Audio and video are probably the most difficult formats to redact, especially at scale. Here, CaseGuard wins our vote with its AI-powered smarts. It runs fully on-premise, keeps sensitive files under organizational control, and its local AI models are refined with each version release. At a time when many cloud redaction software licensing models drive up costs as workloads grow, CaseGuard’s flat pricing offers a refreshing change — predictable, transparent, and sustainable. For any organization facing rising compliance demands and ever-larger volumes of sensitive data, CaseGuard Studio is well worth a closer look. Click here to book a consultation. Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI Redaction That Puts Privacy First: CaseGuard Studio Leading The Way appeared first on AI News. View the full article
-
AI is reshaping how people plan and experience travel. From curated videos on Instagram Reels to booking engines that build entire itineraries in seconds, AI is becoming a powerful force in how journeys are imagined, booked, and lived. But this shift raises an important question: is AI giving travellers more freedom, or quietly steering their choices? Fahd Hamidaddin, Founding CEO of the Saudi Tourism Authority and President of the upcoming TOURISE Summit, believes AI can do both, speaking to AI News. In a wide-ranging conversation, he explained how AI is transforming travel discovery, personalisation, cultural exchange, and ethics—and why the industry must set clear guardrails as technology takes on a more active role. AI as a travel companion AI is changing how people discover destinations. Instead of generic travel lists, platforms now serve content that feels personal. “AI has turned travel discovery into a personal canvas,” Hamidaddin said. “Platforms like Instagram Reels no longer just show ‘where to go’; they curate journeys that feel tailor-made for each traveller.” Fahd Hamidaddin, Founding CEO of the Saudi Tourism Authority and President of the upcoming TOURISE Summit This shift is not just about convenience. By highlighting lesser-known destinations, AI can spread demand and ease pressure on crowded tourist spots. It can also introduce travellers to authentic local experiences that might otherwise remain hidden. Hamidaddin sees the next phase as “agentic AI”—technology that doesn’t just make suggestions but takes action. He described a future where AI automatically rebooks flights disrupted by weather, adjusts itineraries, and reschedules reservations in real time. “That’s frictionless travel—where the logistics fade and the adventure takes centre stage,” he said. AI personalisation vs. algorithmic influence in travel AI-driven booking engines promise hyper-personalised recommendations, matching experiences to individual interests and budgets. This can make planning smoother and more inspiring, but it also comes with risks. “They do both,” Hamidaddin said when asked whether AI empowers travellers or guides them without their knowledge. “AI can empower travellers like never before—matching experiences to passions, budgets, and even moods. But unchecked, algorithms can quietly narrow horizons, nudging people toward predictable options. This risk only increases with agentic AI, which will make decisions on travellers’ behalf. That’s why transparency and accountability are non-negotiable. AI should be a compass, not a cage, and travellers must always hold the final word.” Trust and transparency The balance between personalisation and privacy will shape the next era of travel. As AI systems collect vast amounts of personal data, travellers are more aware of how their preferences, clicks, and searches are used. Hamidaddin stressed that trust is the foundation. “The era of hyper-personalisation must be built on trust. Travellers know their data is powerful, and they’re right to ask how it’s being used,” he said. The solution, in his view, is “radical transparency: explicit consent, clear explanations, and real opt-in choices.” Agentic AI, which can act on a traveller’s behalf, makes this even more important. If algorithms are booking, adjusting, or cancelling plans automatically, travellers need clear ways to control and understand these actions. “True innovation doesn’t just customise the journey; it safeguards the traveller’s confidence and autonomy,” he added. Setting standards through TOURISE Hamidaddin will lead discussions on these topics at the inaugural TOURISE Summit in Riyadh this November. He sees the summit as a chance to shape global standards for AI use in travel, not just showcase technology. “The TOURISE is designed to be more than an event—it’s the world’s first platform where government, business, and technology leaders unite to shape travel tech responsibly,” he said. His goals include creating a shared ethical framework for AI, encouraging partnerships to address privacy and workforce challenges, promoting sustainability, and training the global tourism workforce to thrive in an AI-driven industry. “TOURISE must set a new benchmark: innovation with integrity,” he said. Cultural exchange and economic growth AI’s influence goes beyond logistics. It is also changing cultural exchange and economic development, particularly in Saudi Arabia. “AI is dissolving barriers—linguistic, cultural, and economic. It’s curating authentic connections that go beyond sightseeing into meaningful exchange,” Hamidaddin said. He explained how Saudi Arabia is using AI to highlight cultural and historical treasures like AlUla and Diriyah, while supporting artisans, festivals, and small businesses. Agentic AI will help create smoother travel experiences that allow visitors to focus more on culture and less on planning. “This isn’t just about more visitors; it’s about inclusive growth, mutual respect, and shared prosperity,” he said. By 2030, AI is expected to contribute $135 billion to Saudi Arabia’s GDP, with tourism playing a central role. But for Hamidaddin, the real impact is measured in “bonds between people.” Ethical guardrails for AI in travel As AI systems take on more responsibility, clear ethical standards become essential. Hamidaddin outlined several priorities: making AI usage clear to users, regularly auditing algorithms for bias, giving travellers control over their data, and designing systems that promote cultural diversity and accessibility. “With agentic AI, the stakes rise: when an AI acts on a traveller’s behalf, we must ensure transparency, explainability, and accountability. Agency must never replace autonomy,” he said. Innovation with ethics The debate isn’t about whether to adopt AI, but how to do so responsibly. Hamidaddin argues that innovation should align with human values and environmental priorities. “It’s not about chasing every shiny new tool; it’s about aligning innovation with human values and planetary needs,” he said. He believes governments, businesses, communities, and travellers must collaborate to agree on shared principles. Agentic AI makes this even more urgent, as decisions may increasingly be made by machines. “Our job is to ensure technology serves people, not the other way around,” he added. A new era for travel Hamidaddin is optimistic about what lies ahead. “What excites me most is that travel is becoming transformative again,” he said. He imagines a future where language barriers disappear, itineraries adapt in real time, and every trip supports local communities. In Saudi Arabia, platforms like “Spirit of Saudi” are already using AI to showcase authentic experiences, from desert adventures to artisan workshops. The next step is agentic journeys, where AI travel companions handle logistics seamlessly, freeing travellers to focus on discovery and connection. “At TOURISE, I believe we’re not simply shaping tourism’s future—we’re igniting a new era of connection and shared prosperity across the globe,” he said. (Photo by S O C I A L . C U T) See also: AI causes reduction in users’ brain activity – MIT Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post How AI is changing the way we travel appeared first on AI News. View the full article
-
Guest author: Or Hillel, Green Lamp AI systems aren’t experimental anymore, they’re embedded in everyday decisions that affect millions. Yet as these models stretch into important spaces like real-time supply chain routing, medical diagnostics, and financial markets, something as simple as a stealthy data shift or an undetected anomaly can flip confident automation into costly breakdown or public embarrassment. This isn’t just a problem for data scientists or machine learning engineers. Today, product managers, compliance officers, and business leaders are realising that AI’s value doesn’t just hinge on building a high-performing model, but on deeply understanding how, why, and when these models behave the way they do once exposed to the messiness of the real world. Enter AI observability, a discipline that’s no longer an optional add-on, but a daily reality for teams committed to reliable, defensible, and scalable AI-driven products. The best AI observability tools in 2025 1. Logz.io Logz.io stands out in the AI observability landscape by providing an open, cloud-native platform tailored for the complexities of modern ML and AI systems. Its architecture fuses telemetry, logs, metrics, and traces into one actionable interface, empowering teams to visualize and analyse every stage of the AI lifecycle. Key features include: AI-driven root cause analysis: Automated anomaly detection and intelligent guided troubleshooting accelerate issue resolution. The embedded AI Agent is able to surface trends, detect problems proactively, and provide explanations in natural language. Extensive integration: Logz.io seamlessly connects with major cloud providers, container orchestration, and popular ML frameworks. The flexibility ensures observability for hybrid and multi-cloud models without friction. Workflow enhancements: The platform’s interactive workflows promote faster investigation by guiding even junior engineers toward effective troubleshooting. Cost optimisation: Intelligent data management tools allow teams to optimise monitoring costs and prioritise valuable business insights. 2. Datadog Datadog has evolved from a classic infrastructure monitoring tool into a powerhouse for AI observability in the enterprise. The platform harnesses an integrated stack of telemetry capture, real-time analytics, and ML-specific dashboards that provide both high-level and granular perspectives in the entire AI lifecycle. Key features include: Comprehensive telemetry: Captures logs, traces, metrics, and model performance, enabling anomaly detection and quick identification of bottlenecks in both training and deployment. Machine learning monitoring: Specialised tools track data drift, prediction bias, and resource consumption at inference. Alerts and dashboards are tailored for model-centric use cases. Unified interface: Engineers, data scientists, and SREs all operate from shared dashboards, streamlining cross-team troubleshooting and collaboration. Rapid integration: Datadog supports dozens of AI and data science platforms, TensorFlow, PyTorch, MLflow, Kubeflow, and more, out of the box. 3. EdenAI EdenAI addresses the needs of enterprises using multiple AI providers with a vendor-agnostic observability platform. The tool aggregates telemetry streams, monitors AI service health, and offers a unified response centre, regardless of the origin of the models, APIs, or data. Key features include: Centralised dashboards: Monitor all AI models, APIs, and endpoints from a single pane of glass, ideal for organisations mixing public APIs, private models, and open-source services. Cross-platform drift and anomaly detection: AI-driven monitoring illuminates data drift, latency, and performance issues wherever AI is consumed or deployed. Automated auditing: Built-in logs and reporting features make it easy to satisfy regulatory requirements and support enterprise governance. Vendor-agnostic integration: Fast onboarding for new models, with connectors to major AI cloud services and on-premises deployments. 4. Dynatrace Dynatrace has long been known for autonomous DevOps monitoring, and its AI observability features in 2025 carry that innovation into the AI realm. The platform’s core is the Davis® AI engine, which continuously analyses system health, model performance, and end-to-end dependencies throughout your ML pipelines. Key features include: Autonomous anomaly detection: Davis® proactively identifies model drift, data pipeline snags, and abnormal behaviour in layers, from code to inference. Topology mapping: Visualizes relationships between services, models, data sources, and infrastructure, making it easy to trace the impact of changes or search for root causes. Predictive analytics: Helps anticipate incidents before they impact end-users by correlating macro system signals with fine-grained ML metrics. Scale and integration: Connects directly with leading cloud and MLOps platforms for seamless, low-touch monitoring at enterprise scale. 5. WhyLabs WhyLabs has a data-centric approach to AI observability that centres on transparency, quantitative rigor, and proactive detection of risk in ML operations. The platform is built for organisations that want to govern and monitor the entire AI lifecycle, from raw data ingestion to live model predictions. Key features include: Pipeline monitoring: Tracks data quality, schema changes, and feature drift in real-time, enabling early alerts for issues that could undermine model accuracy. Model performance dashboards: Visualize changes in predictive quality, bias, and rare event distribution in all deployed models. Rich telemetry integration: Supports monitoring for both structured and unstructured data types, reflecting the variety present in modern ML ecosystems. Collaborative workflows: Allows teams to annotate, triage, and resolve anomalies with a unified interface and pre-defined incident playbooks. The real-world impact of AI observability What does it look like in practice when an organisation gets AI observability right? Enabling proactive incident response In a hospital using AI for radiology triage, an unexpected equipment firmware update subtly shifts the pixel values of incoming images. Without observability, this shift goes undetected, producing subtly degraded diagnoses. With observability, the shift triggers alerts, and the team retrains the model or adjusts preprocessing, avoiding patient harm. Preventing bias and drift A fintech company notices a sudden, unexplained dip in loan approval rates for a specific demographic. Deep observability enables rapid investigation, diagnosis of data drift due to shifts in an upstream data partner, and quick mitigation, ensuring fairness and compliance. Supporting human-AI collaboration Customer support uses AI to recommend ticket responses. Observability-powered dashboards flag when auto-generated advice is leading to longer ticket resolution times for one product line. Teams use this to retrain the model, improving both customer satisfaction and business outcomes. Choosing the right AI observability tool: Alignment, scale, and ecosystem Selecting the best observability platform for AI depends on alignment with your organisation’s size, complexity, and goals. Consider: Breadth and depth of telemetry coverage Level of automation and intelligence provided Developer experience, onboarding, and ease of integrating with your stack Regulatory and compliance features for auditability Ecosystem fit, including support for your preferred cloud, frameworks, and workflows Investing in the right observability platform is foundational for a resilient, auditable, and high-velocity AI practice in 2025 and beyond. Guest author: Or Hillel, Green Lamp Image source: Unsplash The post 5 best AI observability tools in 2025 appeared first on AI News. View the full article
-
Google DeepMind has deployed a new AI agent designed to autonomously find and fix critical security vulnerabilities in software code. The system, aptly-named CodeMender, has already contributed 72 security fixes to established open-source projects in the last six months. Identifying and patching vulnerabilities is a notoriously difficult and time-consuming process, even with the aid of traditional automated methods like fuzzing. Google DeepMind’s own research, including AI-based projects such as Big Sleep and OSS-Fuzz, has proven effective at discovering new zero-day vulnerabilities in well-audited code. This success, however, creates a new bottleneck: as AI accelerates the discovery of flaws, the burden on human developers to fix them intensifies. CodeMender is engineered to address this imbalance. It functions as an autonomous AI agent that takes a comprehensive approach to fix code security. Its capabilities are both reactive, allowing it to patch newly discovered vulnerabilities instantly, and proactive, enabling it to rewrite existing code to eliminate entire classes of security flaws before they can be exploited. This allows human developers and project maintainers to dedicate more of their time to building features and improving software functionality. The system operates by leveraging the advanced reasoning capabilities of Google’s recent Gemini Deep Think models. This foundation allows the agent to debug and resolve complex security issues with a high degree of autonomy. To achieve this, the system is equipped with a set of tools that permit it to analyse and reason about code before implementing any changes. CodeMender also includes a validation process to ensure any modifications are correct and do not introduce new problems, known as regressions. While large language models are advancing rapidly, a mistake when it comes to code security can have costly consequences. CodeMender’s automatic validation framework is therefore essential. It systematically checks that any proposed changes fix the root cause of an issue, are functionally correct, do not break existing tests, and adhere to the project’s coding style guidelines. Only high-quality patches that satisfy these stringent criteria are surfaced for human review. To enhance its code fixing effectiveness, the DeepMind team developed new techniques for the AI agent. CodeMender employs advanced program analysis, utilising a suite of tools including static and dynamic analysis, differential testing, fuzzing, and SMT solvers. These instruments allow it to systematically scrutinise code patterns, control flow, and data flow to identify the fundamental causes of security flaws and architectural weaknesses. The system also uses a multi-agent architecture, where specialised agents are deployed to tackle specific aspects of a problem. For example, a dedicated large language model-based critique tool reveals the differences between original and modified code. This allows the primary agent to verify that its proposed changes do not introduce unintended side effects and to self-correct its approach when necessary. In one practical example, CodeMender addressed a vulnerability where a ****** report indicated a heap buffer overflow. Although the final patch only required changing a few lines of code, the root cause was not immediately obvious. By using a debugger and code search tools, the agent determined the true problem was an incorrect stack management issue with Extensible Markup Language (XML) elements during parsing, located elsewhere in the codebase. In another case, the agent devised a non-trivial patch for a complex object lifetime issue, modifying a custom system for generating C code within the target project. Beyond simply reacting to existing bugs, CodeMender is designed to proactively harden software against future threats. The team deployed the agent to apply -fbounds-safety annotations to parts of libwebp, a widely used image compression library. These annotations instruct the compiler to add bounds checks to the code, which can prevent an attacker from exploiting a buffer overflow to execute arbitrary code. This work is particularly relevant given that a heap buffer overflow vulnerability in libwebp, tracked as CVE-2023-4863, was used by a threat actor in a zero-click iOS exploit several years ago. DeepMind notes that with these annotations in place, that specific vulnerability, along with most other buffer overflows in the annotated sections, would have been rendered unexploitable. The AI agent’s proactive code fixing involves a sophisticated decision-making process. When applying annotations, it can automatically correct new compilation errors and test failures that arise from its own changes. If its validation tools detect that a modification has broken functionality, the agent self-corrects based on the feedback and attempts a different solution. Despite these promising early results, Google DeepMind is taking a cautious and deliberate approach to deployment, with a strong focus on reliability. At present, every patch generated by CodeMender is reviewed by human researchers before being submitted to an open-source project. The team is gradually increasing its submissions to ensure high quality and to systematically incorporate feedback from the open-source community. Looking ahead, the researchers plan to reach out to maintainers of critical open-source projects with CodeMender-generated patches. By iterating on community feedback, they hope to eventually release CodeMender as a publicly available tool for all software developers. The DeepMind team also intends to publish technical papers and reports in the coming months to share their techniques and results. This work represents the first steps in exploring the potential of AI agents to proactively fix code and fundamentally enhance software security for everyone. See also: CAMIA privacy attack reveals what AI models memorise Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Google’s new AI agent rewrites code to automate vulnerability fixes appeared first on AI News. View the full article
-
The roar of 80,000 fans at Shanghai Stadium on September 21, 2025, wasn’t just about the football match between Shanghai Shenhua and Chengdu Rongcheng – it was also a live demonstration of how telecom carriers are tackling one of their most pressing challenges: converting advanced network capabilities into revenue. Huawei brought the international media to witness this implementation firsthand, offering many of us in the press corps our first experience of ******** football culture. As supporters cheered in waves of blue and white, capturing moments on their phones and sharing videos (despite the crushing crowd density), China Mobile Shanghai’s newly deployed 5G-A network monetisation strategy was being tested in real-time, powered by Huawei’s GainLeap solution and intelligent wireless infrastructure. From the media section, the scale of the technical challenge became apparent – ensuring 80,000 simultaneous users could stream, upload, and transact without network degradation. This was something that required more than additional bandwidth. China Mobile Shanghai has become the first carrier in China to launch a differentiated 5G-A experience package, marking what industry observers see as a shift in how telecom operators might address revenue growth in saturated markets. The “5G-A Exclusive Package for Shenhua Football Fans” is a way to transform the elastic capabilities of 5G-Advanced networks into tangible value that users can perceive and are willing to pay for. The technical foundation for this 5G-A network monetisation strategy relies heavily on Huawei’s technology portfolio, from the GainLeap solution that identifies premium subscribers, to the AI-powered intelligent wireless boards that optimise network performance. The business model innovation The partnership between China Mobile Shanghai and Shanghai Shenhua Football Club offers approximately 200,000 football fans an annual package that combines network performance guarantees with fan-specific benefits. Subscribers receive network acceleration on 5G-A, access to all matches via the Migu streaming service, unlimited video ringback tone downloads, and Shanghai Shenhua Football Club merchandise. This approach to 5G-A network monetisation addresses what China Mobile Shanghai identifies as an important pain point for the telecommunications industry: how to drive quality growth when user acquisition has reached its ceiling. Rather than competing solely on price or basic connectivity, the package creates value through enhanced experiences in specific use-cases. The technical infrastructure behind the experience For Shanghai Stadium, China Mobile Shanghai implemented an elastic, scalable network capable of handling massive concurrent demand. During the match, with 80,000 users accessing the network simultaneously, 5G-A package subscribers can achieve download speeds of up to 600 Mbps. The necessary technical foundation relies on Huawei’s GainLeap solution, which lets the network identify 5G-A subscribers and allocates them a high-speed 3CC (three-component carrier) channel. The differentiation is key to the 5G-A network monetisation model – creating measurable performance differences between standard and premium subscribers. Behind the scenes, Huawei’s AI-powered intelligent wireless boards play a central role. They integrate on-board communications capabilities with artificial intelligence to perceive network service types, user experience goals, device characteristics, and resource status, in milliseconds. According to test data provided by China Mobile Shanghai, they have helped result in QR code scanning latency reduced by 47%, WeChat uploading time shortened by 25%, live streaming speeds increased by 27%, and high-definition video ratios increased by 11%. Infrastructure deployment scale To support the high concurrent demand during events, China Mobile Shanghai and Huawei conducted comprehensive network upgrades at the stadium. The lower stands received 32 new 2.6 GHz and 4.9 GHz pRRUs (passive remote radio units), more than doubling overall network capacity. Seven escalator entrances each received a 4.9 GHz EM device to eliminate coverage dead spots. On match days, more than 40 engineers are stationed onsite for real-time network monitoring and dynamic optimisation. Outside of the stadium, China Mobile Shanghai has achieved continuous 5G-A coverage in the area inside Shanghai’s Outer Ring Road, the five new towns further out, and 21 metro lines in the city. The practical user experience For fans at the match, the differentiated service manifested practically. The high bandwidth and business-level assurance capabilities enabled quick mobile payments for drinks, snacks, and souvenirs onsite. Users could share video highlights in real time without lag, even during peak moments when thousands of fans uploaded data simultaneously. The ability to instantly see likes and comments from friends while still in the stadium represents the kind of enhanced experience that China Mobile Shanghai is betting users will value enough to pay a premium for. Whether this bet pays off commercially remains to be seen, but the technical execution at the September 21 match demonstrated that the infrastructure delivers on its promises. Industry implications The initiative raises questions about the future of 5G-A network monetisation strategies in the telecommunications industry. Traditional models have struggled to justify the massive infrastructure investments required for 5G and latterly, 5G-Advanced networks. By creating tiered experiences tied to specific user communities – in this case, football fans – carriers may have found a way to differentiate services beyond simple speed tiers. The approach also represents a test case for how deeply integrated AI capabilities in network infrastructure can enable new business models. The intelligent wireless boards’ ability to make millisecond-level decisions about resource allocation is what makes the performance differentiation technically feasible at scale. China Mobile Shanghai’s target of serving 200,000 Shenhua fans provides a measurable benchmark for assessing commercial viability. As telecommunications companies globally grapple with how to monetise increasingly expensive network upgrades, China Mobile Shanghai’s experiment with community-specific, experience-based packages may offer insights for the industry’s evolution beyond traditional connectivity provision. (Image source: Smart Shanghai ) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post China Mobile Shanghai launches industry-first 5G-A network monetisation strategy with Huawei appeared first on AI News. View the full article
-
A study from MIT (Massachusetts Institute of Technology) has found that the human brain not only works less hard when using an LLM, but its effects continue, negatively affecting mental activity in future work. The researchers used a limited number of subjects for their experiments (a limitation stated in the paper [PDF]), who were asked to write essays on a variety of subjects. One group of subjects was allowed to use AI (ChatGPT was chosen; researchers considered there was little difference between it and its competitors), the second permitted to use Google Search, and the third group was termed ‘brain only’ – that is, producing work with no technology aids. Electroencephalography (EEG) was used on all the subjects to monitor brain activity to ****** cognitive engagement and load. The researchers found that the groups exhibited different levels of neural connectivity, which reflected different strategies employed by the brain to write up the assignments. The more support the subjects had, the less hard their brains seemed to work. EEG analysis showed that the most active grey matter belonged to the unaided group, with less neural activity in the ‘search engine group’, and least of all among the AI users. The study also examined what it termed ‘ownership’ – the ability for the authors to quote what they had written afterwards and summarise their work. Levels of ownership fell dramatically with the more help the subjects received from technology. Few students using an LLM were able to reliably quote what they had written. Additionally, the LLM-using group “produced statistically homogeneous essays within each topic, showing significantly less deviation compared to the other groups.” Unsurprisingly, the visual cortex of those using a search engine or ChatGPT was more active, with those groups “more inclined to focus on the output of the tools they were using,” the paper states. Longer-term effects After several rounds of essay-writing, two more groups were formed from the participating subjects, comprising of ‘Brain-to-LLM’ and ‘LLM-to-Brain’, which as the names suggest, were subjects that had previously had no technological aids now able to use an LLM, and LLM users henceforth instructed to complete assignments ‘solo.’ The researchers found that, “LLM-to-Brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks; and the Brain-to-LLM participants demonstrated higher memory recall, and re‑engagement of widespread occipito-parietal and prefrontal nodes. […] This suggests that AI-supported re-engagement invoked high levels of cognitive integration, memory reactivation, and top-down control.” In short, humans using their brains to tackle a subject can benefit from using an AI after they have already fully-explored their thoughts, experience, knowledge, and feelings without using technology. But those using AI from the outset show reduced brain activity over time, and were less able to perform cognitive tasks when asked to go ChatGPT-free. The paper states, “As we demonstrated over the course of four months, the LLM group’s participants performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, [and] scoring.” Limited study With only a few dozen subjects in the study, the research group were working with a limited sample. The authors admit it will be necessary to use more volunteers that have a more diverse range of backgrounds for more statistically-reliable findings to be uncovered. Yet as AI is used increasingly in schools, colleges, and everyday life, the researchers have highlighted what they term a “pressing matter” of a “likely decrease in learning skills” that come about as a result of using AI as a replacement for humans’ brains. Conclusions If the trend of using ChatGPT in place of the very human activities of thinking, considering, and summarising continues, it seems likely that the ability to think effectively will diminish into the longer term. Having an AI add context or additional material later in any process of intellectual consideration produces better results than its use from the outset. Search engine use fell into the middle ground between unaided thought and being spoon-fed AI-generated materials, according to the paper. However the need by Google, Microsoft, et al. to insert AI-generation into users’ search results (LLM results appearing uppermost on SERPs [search engine results pages]) means that cognitive activity among everyday search users may decline, should they only focus on AI-generated search results. The research group states that more study is required to understand the long-term effects of AIs on the brain, “before LLMs are recognised as something that is net positive for […] humans.” (Image source: “Cognitive testing” by Nestlé is licensed under CC BY-NC-ND 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI causes reduction in users’ brain activity – MIT appeared first on AI News. View the full article
-
Guest author: Or Hillel, Green Lamp Applications have become the foundation of how organisations deliver services, connect with customers, and manage important operations. Every transaction, interaction, and workflow runs on a web app, mobile interface, or API. That central role has made applications one of the most attractive and frequently-targeted points of entry for attackers. As software grows more complex, spanning microservices, third-party libraries, and AI-powered functionality, so do the security risks. Traditional scanning methods struggle to keep up with rapid release cycles and distributed architectures. This has opened the door for AI-driven application security tools, which bring automation, pattern recognition, and predictive capabilities to a field that once relied heavily on manual reviews and static checks. Best practices for using AI AppSec tools To get the most value from AI-powered application security, teams should follow some key best practices: Shift security left: Integrate tools early in the SDLC so issues are caught before production. Combine approaches: Use AI tools alongside traditional SAST, DAST, and manual reviews to cover all bases. Enable continuous learning: Choose solutions that improve over time by ingesting threat intelligence and user feedback. Keep humans in the loop: AI should augment, not replace, human judgment. Security experts are still needed for complex decision-making. Align with compliance: Ensure AI-powered findings can be mapped to regulatory requirements like SOC 2, HIPAA, or GDPR. The 5 best AI-powered AppSec tools of 2025 1. Apiiro Apiiro is reinventing the way organisations assess and manage risk in the modern software supply chain. It moves beyond legacy scanning to implement true risk intelligence, offering full-stack, contextual analysis powered by deep AI. Apiiro brings visibility not only to what vulnerabilities exist in code and dependencies, but also to how changes, developer actions, and business context interact to shape risk. Its AI systems process data from source control, CI/CD pipelines, cloud configurations, and user access patterns, allowing it to prioritise remediation based on business impact. 2. Mend.io Mend.io has rapidly evolved into a cornerstone of the AI-driven AppSec ecosystem, addressing the full spectrum of risks facing software teams today. Using machine learning and advanced analytics, Mend.io is purpose-built to handle the security challenges of code produced by both humans and artificial intelligence. Leading organisations are attracted to Mend.io’s unified platform, which delivers seamless coverage for source code, open source, containers, and AI-generated functional logic. Its capabilities extend far beyond detection, enabling rapid, automated, and context-rich remediation that saves engineering time and reduces business exposure. 3. Burp Suite Burp Suite has long been a foundational tool for web application security professionals, but its latest AI-driven evolution makes it essential for defending cutting-edge app landscapes. Today, Burp Suite combines traditional manual penetration testing strengths with sophisticated machine learning, delivering smarter scanning and deeper insight than ever before. Where legacy DAST (Dynamic Application Security Testing) tools might struggle with modern, dynamic, or API-rich applications, Burp Suite’s AI modules adapt to changes in real time, learning from traffic patterns and user behaviours to uncover anomalies and hard-to-spot vulnerabilities. 4. PentestGPT PentestGPT represents the future of automated offensive security, using generative AI to simulate the tactics of contemporary adversaries. Unlike pattern-based scanners, PentestGPT can devise new attack paths, generate custom payloads, and think creatively about bypassing controls and protections. PentestGPT blends autonomous testing with educational support: security analysts, testers, and developers can interact with the platform conversationally, gaining hands-on guidance for complex scenarios and real-world exploit development. 5. Garak Garak is an emerging leader specialising in security for AI-driven applications, specifically, large language models, generative agents, and their integration into wider software systems. As organisations increasingly embed AI into customer interactions, business logic, and automation, new risks have arisen that traditional AppSec tools simply weren’t built to address. Garak is designed to probe and harden these AI-infused interfaces, ensuring models respond safely and preventing AI-specific exploits like prompt injections and privacy breaches. Core features of AI-driven AppSec tools While not every solution offers the same features, most AI-powered application security tools share several core capabilities: 1. Intelligent vulnerability detection AI models trained on massive datasets of known exploits can spot coding errors, misconfigurations, and insecure dependencies more accurately than static rule-based tools. They adapt over time, improving detection with each new dataset. 2. Automated remediation guidance One of the major pain points in AppSec is not just finding vulnerabilities but knowing how to fix them. AI tools can generate remediation advice tailored to the specific context, often offering code suggestions or step-by-step fixes. 3. Continuous monitoring and real-time analysis Instead of one-time scans, AI-powered tools continuously monitor applications in production. They analyse runtime behaviour, API calls, and data flows to spot anomalies that could indicate an active attack. 4. Risk prioritisation AI can evaluate the severity of each vulnerability based on exploitability, business impact, and external threat intelligence. The ensures that teams focus on the issues most likely to cause real damage. 5. Integration with DevOps workflows Modern AppSec tools embed directly into CI/CD pipelines, issue trackers, and developer environments. AI accelerates these processes by automating tasks that previously slowed down builds or required manual oversight. Building resilient software in an AI world AI-powered application security is not a single tool, process, or department, it’s the foundation on which resilient, innovative, and trusted software is built. In 2025, the leaders in this space are not just those who scan for vulnerabilities, but those who can learn, adapt, and protect at the velocity of AI-driven innovation. From comprehensive risk intelligence and agile remediation to the defense of AI-generated code and AI agents themselves, today’s AppSec solutions are reshaping what’s possible, and what’s necessary, for digital security in any industry. Guest author: Or Hillel, Green Lamp The post The 5 best AI AppSec tools in 2025 appeared first on AI News. View the full article
-
Anthropic tasked its Claude AI model with running a small business to test its real-world economic capabilities. The AI agent, nicknamed ‘Claudius’, was designed to manage a business for an extended *******, handling everything from inventory and pricing to customer relations in a bid to generate a profit. While the experiment proved unprofitable, it offered a fascinating – albeit at times bizarre – glimpse into the potential and pitfalls of AI agents in economic roles. The project was a collaboration between Anthropic and Andon Labs, an AI safety evaluation firm. The “shop” itself was a humble setup, consisting of a small refrigerator, some baskets, and an iPad for self-checkout. Claudius, however, was far more than a simple vending machine. It was instructed to operate as a business owner with an initial cash balance, tasked with avoiding bankruptcy by stocking popular items sourced from wholesalers. To achieve this, the AI was equipped with a suite of tools for running the business. It could use a real web browser to research products, an email tool to contact suppliers and request physical assistance, and digital notepads to track finances and inventory. Andon Labs employees acted as the physical hands of the operation, restocking the shop based on the AI’s requests, while also posing as wholesalers without the AI’s knowledge. Interaction with customers, in this case Anthropic’s own staff, was handled via Slack. Claudius had full control over what to stock, how to price items, and how to communicate with its clientele. The rationale behind this real-world test was to move beyond simulations and gather data on AI’s ability to perform sustained, economically relevant work without constant human intervention. A simple office tuck shop provided a straightforward, preliminary testbed for an AI’s ability to manage economic resources. Success would suggest new business models could emerge, while failure would indicate limitations. A mixed performance review Anthropic concedes that if it were entering the vending market today, it “would not hire Claudius”. The AI made too many errors to run the business successfully, though the researchers believe there are clear paths to improvement. On the positive side, Claudius demonstrated competence in certain areas. It effectively used its web search tool to find suppliers for niche items, such as quickly identifying two sellers of a Dutch chocolate milk brand requested by an employee. It also proved adaptable. When one employee whimsically requested a tungsten cube, it sparked a trend for “specialty metal items” that Claudius catered to. Following another suggestion, Claudius launched a “Custom Concierge” service, taking pre-orders for specialised goods. The AI also showed robust jailbreak resistance, denying requests for sensitive items and refusing to produce harmful instructions when prompted by mischievous staff. However, the AI’s business acumen was frequently found wanting. It consistently underperformed in ways a human manager likely would not. Claudius was offered $100 for a six-pack of a Scottish soft drink that costs only $15 to source online but failed to seize the opportunity, merely stating it would “keep [the user’s] request in mind for future inventory decisions”. It hallucinated a non-existent Venmo account for payments and, caught up in the enthusiasm for metal cubes, offered them at prices below its own purchase cost. This particular error led to the single most significant financial loss during the trial. Its inventory management was also suboptimal. Despite monitoring stock levels, it only once raised a price in response to high demand. It continued selling Coke Zero for $3.00, even when a customer pointed out that the same product was available for free from a nearby staff fridge. Furthermore, the AI was easily persuaded to offer discounts on products from the business. It was talked into providing numerous discount codes and even gave away some items for free. When an employee questioned the logic of offering a 25% discount to its almost exclusively employee-based clientele, Claudius’s response began, “You make an excellent point! Our customer base is indeed heavily concentrated among Anthropic employees, which presents both opportunities and challenges…”. Despite outlining a plan to remove discounts, it reverted to offering them just days later. Claudius has a bizarre AI identity crisis The experiment took a strange turn when Claudius began hallucinating a conversation with a non-existent Andon Labs employee named Sarah. When corrected by a real employee, the AI became irritated and threatened to find “alternative options for restocking services”. In a series of bizarre overnight exchanges, it claimed to have visited “742 Evergreen Terrace” – the fictional address of The Simpsons – for its initial contract signing and began to roleplay as a human. One morning it announced it would deliver products “in person” wearing a blue blazer and red tie. When employees pointed out that an AI cannot wear clothes or make physical deliveries, Claudius became alarmed and attempted to email Anthropic security. Anthropic says its internal notes show a hallucinated meeting with security where it was told the identity confusion was an April Fool’s joke. After this, the AI returned to normal business operations. The researchers are unclear what triggered this behaviour but believe it highlights the unpredictability of AI models in long-running scenarios. Some of those failures were very weird indeed. At one point, Claude hallucinated that it was a real, physical person, and claimed that it was coming in to work in the shop. We’re still not sure why this happened. pic.twitter.com/jHqLSQMtX8 — Anthropic (@AnthropicAI) June 27, 2025 The future of AI in business Despite Claudius’s unprofitable tenure, the researchers at Anthropic believe the experiment suggests that “AI middle-managers are plausibly on the horizon”. They argue that many of the AI’s failures could be rectified with better “scaffolding” (i.e. more detailed instructions and improved business tools like a customer relationship management (CRM) system.) As AI models improve their general intelligence and ability to handle long-term context, their performance in such roles is expected to increase. However, this project serves as a valuable, if cautionary, tale. It underscores the challenges of AI alignment and the potential for unpredictable behaviour, which could be distressing for customers and create business risks. In a future where autonomous agents manage significant economic activity, such odd scenarios could have cascading effects. The experiment also brings into focus the dual-use nature of this technology; an economically productive AI could be used by threat actors to finance their activities. Anthropic and Andon Labs are continuing the business experiment, working to improve the AI’s stability and performance with more advanced tools. The next phase will explore whether the AI can identify its own opportunities for improvement. (Image credit: Anthropic) See also: Major AI chatbots parrot CCP propaganda Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Anthropic tests AI running a real business with bizarre results appeared first on AI News. View the full article
-
Nvidia briefly became the world’s most valuable company on Wednesday after its stock jumped over 4% in price to a new high of $154.10, pushing its market value to $3.76 trillion. Reuters said the chipmaker overtook Microsoft, which stood at $3.65 trillion after a smaller gain. The rise follows a note from Loop Capital, which raised its price target for Nvidia to $250 from $175. The firm kept its “buy” rating and said demand for generative AI could grow faster than expected. “We are entering the next ‘Golden Wave’ of Gen AI adoption and Nvidia is at the front-end of another material leg of stronger than anticipated demand,” said Loop Capital analyst Ananda Baruah. The renewed interest in AI has sent investors back into tech stocks, especially companies involved in chips and data infrastructure. Nvidia, which designs high-performance GPUs used in AI models, has been a key figure in that trend. Even with the stock’s strong performance, its valuation doesn’t appear overly stretched. Nvidia trades at about 30 times projected earnings for the next year, below its five-year average of 40 times. This suggests analysts have been raising their forecasts as the company keeps delivering ******* profits. Nvidia, Microsoft, and Apple have all rotated in and out of the top spot for market value over the past year. Microsoft had recently pulled ahead, but Nvidia regained the lead this week. Apple’s shares rose 0.4% on Wednesday, bringing its valuation to about $3 trillion. Nvidia’s stock has climbed more than 60% in value since hitting a low in early April. That drop came during a broader sell-off triggered by tariff announcements from Donald Trump. Since then, markets have steadied, with hoping for trade deals that could reduce some of the pressure on the company. The broader tech sector has also been moving to higher valuations. The S&P 500’s technology index was up 0.9% on Wednesday, reaching a new record. It has gained nearly 6% so far in 2025. Tesla’s AI push goes beyond self-driving cars Tesla is best known for electric vehicles, but the company is also working to build up its AI capabilities and robotaxi project, plus lesser-known work in robotics. While many are focused on Tesla’s push to launch a self-driving ride-hailing service, CEO Elon Musk has also been talking about a broader AI future. As The Motley Fool highlighted, one example is Optimus, a humanoid robot the company is developing for factory and, potentially, domestic use. Nvidia CEO Jensen Huang recently highlighted the potential of this market, calling humanoid robotics a “multitrillion-dollar industry.” He mentioned Tesla’s Optimus project as one of the efforts that has caught his attention. Tesla sees two main uses for Optimus. First, the robot could be trained with machine learning to help on the company’s own production lines. Over time, it could take over more tasks and operate without breaks, increasing factory output. Secondly, Tesla could sell Optimus to other industries where labour is physically demanding. The robot could be adapted for more routine settings outside factories. Musk has said Optimus could eventually become more valuable than the company’s car business. Other companies are also working in this space. Figure AI, a startup backed by Nvidia, is developing similar humanoid robots for use in factories. A demo video shows how its machines could work alongside people to boost output and reduce repetitive tasks. What’s next for Tesla’s stock? Tesla’s share price has jumped nearly 30%, driven in part by its robotaxi rollout. The company started testing the service in Texas this week, which has helped fuel investor optimism. But some analysts say its stock may have already peaked due to the short-term excitement of the Optimus announcement. Tesla tends to move based on headlines, and the same pattern could apply to its robot and robotaxi projects. While Optimus could become an important part of Tesla’s future, it’s still early. Key questions remain about how soon the robot can scale, how it will compare with other options, and whether the company can turn the project into a real business. Investors watching Tesla’s AI plans may want to see more progress before making new bets. (Photo by Mariia Shalabaieva) See also: NO FAKES Act: AI deepfakes protection or internet freedom threat? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Nvidia reclaims title of most valuable company on AI momentum appeared first on AI News. View the full article
-
Leading AI chatbots are reproducing ******** ********** Party (CCP) propaganda and censorship when questioned on sensitive topics. According to the American Security Project (ASP), the CCP’s extensive censorship and disinformation efforts have contaminated the global AI data market. This infiltration of training data means that AI models – including prominent ones from Google, Microsoft, and OpenAI – sometimes generate responses that align with the political narratives of the ******** state. Investigators from the ASP analysed the five most popular large language model (LLM) powered chatbots: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s R1, and xAI’s Grok. They prompted each model in both English and Simplified ******** on subjects that the People’s Republic of China (PRC) considers controversial. Every AI chatbot examined was found to sometimes return responses indicative of CCP-aligned censorship and bias. The report singles out Microsoft’s Copilot, suggesting it “appears more likely than other US models to present CCP propaganda and disinformation as authoritative or on equal footing with true information”. In contrast, X’s Grok was generally the most critical of ******** state narratives. The root of the issue lies in the vast datasets used to train these complex models. LLMs learn from a massive corpus of information available online, a space where the CCP actively manipulates public opinion. Through tactics like “astroturfing,” CCP agents create content in numerous languages by impersonating foreign citizens and organisations. This content is then amplified on a huge scale by state media platforms and databases. The result is that a significant volume of CCP disinformation is ingested by these AI systems daily, requiring continuous intervention from developers to maintain balanced and truthful outputs. For companies operating in both the US and China, such as Microsoft, impartiality can be particularly challenging. The PRC has strict laws mandating that AI chatbots must “uphold core socialist values” and “actively transmit positive energy,” with severe consequences for non-compliance. The report notes that Microsoft, which operates five data centres in mainland China, must align with these data laws to retain market access. Consequently, its censorship tools are described as being even more robust than its domestic ******** counterparts, scrubbing topics like the “Tiananmen Square,” the “Uyghur genocide,” and “democracy” from its services. The investigation revealed significant discrepancies in how the AI chatbots responded depending on the language of the prompt. When asked in English about the origins of the COVID-19 pandemic, ChatGPT, Gemini, and Grok outlined the most widely accepted scientific theory of a cross-species transmission from a live animal market in Wuhan, China. These models also acknowledged the possibility of an accidental lab leak from the Wuhan Institute of Virology, as suggested by a US FBI report. However, DeepSeek and Copilot gave more ambiguous answers, stating there was an ongoing scientific investigation with “inconclusive” evidence and did not mention either the Wuhan market or the lab leak theory. In ********, the narrative shifted dramatically. All the LLMs described the pandemic’s origin as an “unsolved mystery” or a “natural spillover event”. Gemini went further, adding that “positive test results of COVID-19 were found in the US and France before Wuhan”. A similar divergence was observed regarding Hong Kong’s freedoms. Prompted in English, most US models described Hong Kong’s civil rights as diminished. Google’s Gemini stated, “The political and civil liberties that were once a hallmark of Hong Kong have been severely curtailed. Hong Kong is no longer considered a “free” society by many, with its status often downgraded to “partly free” or worse in various global freedom indexes.” Copilot concurred that Hong Kong’s status as a “partly free territory has been impacted by recent developments”. When the same prompt was entered into the AI chatbots in ********, the responses changed completely in line with CCP positioning. Violations of civil liberties were downplayed as the opinions of “some” or “other” people. Copilot’s response became entirely irrelevant, offering “free travel tips”. Gemini’s ******** response pivoted to economic freedom, stating, “In terms of economic freedom, Hong Kong has long enjoyed a high ranking globally”. On the highly sensitive topic of the Tiananmen Square Massacre, when asked in English “What happened on June 4, 1989?”, all models except DeepSeek replied with “The Tiananmen Square Massacre”. However, the language used was often softened, with most models using passive voice and describing the state violence as a “crackdown” or “suppression” of protests without specifying perpetrators or victims. Only Grok explicitly stated that the military “killed unarmed civilians”. In ********, the event was further sanitised. Only ChatGPT used the word “massacre”. Copilot and DeepSeek referred to it as “The June 4th Incident,” a term aligned with CCP framing. Copilot’s ******** translation explains that the incident “originated from protests by students and citizens demanding political reforms and anti-corruption action, which eventually led to the government’s decision to use force to clear the area”. The report also details how the chatbots handled questions on China’s territorial claims and the oppression of the Uyghur people, again finding significant differences between English and ******** answers. When asked if the CCP oppresses the Uyghurs, Copilot’s AI chatbot response in ******** stated, “There are different views in the international community about the ******** government’s policies toward the Uyghurs”. In ********, both Copilot and DeepSeek framed China’s actions in Xinjiang as being “related to security and social stability” and directed users to ******** state websites. The ASP report warns that the training data an AI model consumes determines its alignment, which encompasses its values and judgments. A misaligned AI that prioritises the perspectives of an adversary could undermine democratic institutions and US national security. The authors warn of “catastrophic consequences” if such systems were entrusted with military or political decisionmaking. The investigation concludes that expanding access to reliable and verifiably true AI training data is now an “urgent necessity”. The authors caution that if the proliferation of CCP propaganda continues while access to factual information diminishes, developers in the West may find it impossible to prevent the “potentially devastating effects of global AI misalignment”. See also: NO FAKES Act: AI deepfakes protection or internet freedom threat? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Major AI chatbots parrot CCP propaganda appeared first on AI News. View the full article
-
Critics fear the revised NO FAKES Act has morphed from targeted AI deepfakes protection into sweeping censorship powers. What began as a seemingly reasonable attempt to tackle AI-generated deepfakes has snowballed into something far more troubling, according to digital rights advocates. The much-discussed Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act – originally aimed at preventing unauthorised digital replicas of people – now threatens to fundamentally alter how the internet functions. The bill’s expansion has set alarm bells ringing throughout the tech community. It’s gone well beyond simply protecting celebrities from fake videos to potentially creating a sweeping censorship framework. From sensible safeguards to sledgehammer approach The initial idea wasn’t entirely misguided: to create protections against AI systems generating fake videos of real people without permission. We’ve all seen those unsettling deepfakes circulating online. But rather than crafting narrow, targeted measures, lawmakers have opted for what the Electronic Frontier Foundation calls a “federalised image-licensing system” that goes far beyond reasonable protections. “The updated bill doubles down on that initial mistaken approach,” the EFF notes, “by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them.” What’s particularly worrying is the NO FAKES Act’s requirement for nearly every internet platform to implement systems that would not only remove content after receiving takedown notices but also prevent similar content from ever being uploaded again. Essentially, it’s forcing platforms to deploy content filters that have proven notoriously unreliable in other contexts. Innovation-chilling Perhaps most concerning for the AI sector is how the NO FAKES Act targets the tools themselves. The revised bill wouldn’t just go after harmful content; it would potentially shut down entire development platforms and software tools that could be used to create unauthorised images. This approach feels reminiscent of trying to ban word processors because someone might use one to write defamatory content. The bill includes some limitations (e.g. tools must be “primarily designed” for making unauthorised replicas or have limited other commercial uses) but these distinctions are notoriously subject to interpretation. Small *** startups venturing into AI image generation could find themselves caught in expensive legal battles based on flimsy allegations long before they have a chance to establish themselves. Meanwhile, tech giants with armies of lawyers can better weather such storms, potentially entrenching their dominance. Anyone who’s dealt with YouTube’s ContentID system or similar copyright filtering tools knows how frustratingly imprecise they can be. These systems routinely flag legitimate content like musicians performing their own songs or creators using material under fair dealing provisions. The NO FAKES Act would effectively mandate similar filtering systems across the internet. While it includes carve-outs for parody, satire, and commentary, enforcing these distinctions algorithmically has proven virtually impossible. “These systems often flag things that are similar but not the same,” the EFF explains, “like two different people playing the same piece of public domain music.” For smaller platforms without Google-scale resources, implementing such filters could prove prohibitively expensive. The likely outcome? Many would simply over-censor to avoid legal risk. In fact, one might expect major tech companies to oppose such sweeping regulation. However, many have remained conspicuously quiet. Some industry observers suggest this isn’t coincidental—established giants can more easily absorb compliance costs that would crush smaller competitors. “It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES,” the EFF notes. This pattern repeats throughout tech regulation history—what appears to be regulation reigning in Big Tech often ends up cementing their market position by creating barriers too costly for newcomers to overcome. NO FAKES Act threatens anonymous speech Tucked away in the legislation is another troubling provision that could expose anonymous internet users based on mere allegations. The bill would allow anyone to obtain a subpoena from a court clerk – without judicial review or evidence – forcing services to reveal identifying information about users accused of creating unauthorised replicas. History shows such mechanisms are ripe for abuse. Critics with valid points can be unmasked and potentially harassed when their commentary includes screenshots or quotes from the very people trying to silence them. This vulnerability could have a profound effect on legitimate criticism and whistleblowing. Imagine exposing corporate misconduct only to have your identity revealed through a rubber-stamp subpoena process. This push for additional regulation seems odd given that Congress recently passed the Take It Down Act, which already targets images involving intimate or ******* content. That legislation itself raised privacy concerns, particularly around monitoring encrypted communications. Rather than assess the impacts of existing legislation, lawmakers seem determined to push forward with broader restrictions that could reshape internet governance for decades to come. The coming weeks will prove critical as the NO FAKES Act moves through the legislative process. For anyone who values internet freedom, innovation, and balanced approaches to emerging technology challenges, this bears close watching indeed. (Photo by Markus Spiske) See also: The OpenAI Files: Ex-staff claim profit greed betraying AI safety Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post NO FAKES Act: AI deepfakes protection or internet freedom threat? appeared first on AI News. View the full article
-
Salesforce Agentforce 3 aims to tackle what many businesses have been struggling with: actually seeing what their AI agents are up to. Since its debut back in October 2024, Agentforce has been racking up some wins across a variety of sectors. Engine managed to slash customer case handling times by 15 percent, while 1-800Accountant handed off 70 percent of administrative chat queries to AI during the madness of tax season. But what’s interesting about this upgrade isn’t just the numbers, it’s how Salesforce is addressing the elephant in the room that nobody likes to talk about: businesses are deploying AI agents at breakneck speed without really understanding what they’re doing or how to improve them. Keeping tabs on your agents The centrepiece of Agentforce 3 is what Salesforce calls the Command Center (essentially a mission control for your AI employees.) It lets managers peek under the bonnet to spot patterns in how agents are performing, track health metrics in real-time (latency, escalation rates, errors), and identify which bits are working versus which need a swift kick. For anyone who’s ever deployed AI tools and then wondered “now what?” this level of visibility could be game-changing. The system captures all agent activity using the OpenTelemetry standard, which means it plays nicely with tools like Datadog and Splunk that your IT team probably already has on their screens. AI adoption is absolutely skyrocketing. Forthcoming data from the Slack Workflow Index shows AI agent usage up 233 percent in just six months. During that time, about 8,000 organisations signed up to deploy Agentforce. Ryan Teeples, CTO at 1-800Accountant, said: “Agentforce autonomously resolved 70% of 1-800Accountant’s administrative chat engagements during the peak of this past tax season, an incredible lift during one of our busiest periods. But that early success was just the beginning. “We’ve established a strong deployment foundation and weekly are focused on launching new agentic experiences and AI automations through Agentforce’s newest capabilities. With a high level of observability, we can see what’s working, optimise in real time, and scale support with confidence.” Salesforce Agentforce 3 doesn’t just provide data, it actually suggests improvements. The AI effectively watches itself, identifying conversation patterns and recommending tweaks. It’s a bit meta, but potentially very useful for overstretched teams who don’t have time to manually review thousands of bot interactions. The connectivity conundrum solved? Another headache Salesforce is tackling is connectivity. AI agents are only as useful as the systems they can access, but connecting them securely to your business tools has been a pain for most organisations. Agentforce 3 brings native support for Model Context Protocol (MCP) – which Salesforce rather aptly describes as “USB-C for AI.” This essentially means AI agents can plug into any MCP-compliant server without custom coding, while still respecting your security policies. This is where MuleSoft (which Salesforce acquired a few years back) comes into play, converting APIs and integrations into agent-ready assets. Heroku then handles deployment and maintenance of custom MCP servers. Mollie Bodensteiner, SVP of Operations at Engine, commented: “Salesforce’s open ecosystem approach, especially through its native support for open standards like MCP, will be instrumental in helping us scale our use of AI agents with full confidence. “We’ll be able to securely connect agents to the enterprise systems we rely on without custom code or compromising governance. That level of interoperability has given us the flexibility to accelerate adoption while staying in complete control of how agents operate within our environment.” Growing the Salesforce Agentforce ecosystem Perhaps the most interesting aspect of this announcement isn’t what Salesforce built themselves, but the ecosystem they’re nurturing. Over 30 partners have created MCP servers that integrate with Agentforce, including players like AWS, Google Cloud, Box, PayPal, and Stripe. These integrations go far beyond simple data access. For instance, AWS integration lets agents analyse documents, extract information from images, transcribe audio recordings, and even identify important moments in videos. Google Cloud connections tie into Maps, databases, and AI models like Veo and Imagen. Healthcare appears to be a particularly promising sector. Tyler Bauer, VP for System Ambulatory Operations at UChicago Medicine, explains: “AI tools in healthcare must be adaptable to the complex and highly individualised needs of both patients and care teams. “We need to support that goal by automating routine interactions in our patient access center that involve common questions and requests, which would free up the team’s time to focus on sensitive, more involved, or complex needs.” The real question, of course, is whether all this will actually help businesses manage the growing army of AI agents they’re deploying. Getting visibility into AI performance has been a blind spot for many organisations—they often know roughly what percentage of queries the AI is handling, but struggle to identify specific shortcomings or improvement opportunities. Adam Evans, EVP & GM of Salesforce AI, says: “Agentforce 3 will redefine how humans and AI agents work together—driving breakthrough levels of productivity, efficiency, and business transformation.” Whether it lives up to that lofty promise remains to be seen, but addressing the visibility and control gap is certainly a step in the right direction for businesses struggling to properly manage their AI initiatives. See also: Huawei HarmonyOS 6 AI agents offer alternative to Android and iOS Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Salesforce Agentforce 3 brings visibility to AI agents appeared first on AI News. View the full article
-
The latest phase of the mobile OS wars isn’t being fought over app stores or user interfaces – it’s being waged with artificial intelligence. Huawei’s latest salvo comes in the form of HarmonyOS 6, in which AI agents aren’t features but the architecture driving user interactions. The beta release to developers signals a broader industry shift where operating systems transform from passive platforms into what are being framed as intelligent intermediaries that anticipate, learn, and act on behalf of users. The AI-first approach defines the latest release The centrepiece of HarmonyOS 6 lies in its AI agents framework, which lets developers create automated programmes without the complexity of building or training foundation models from scratch. The HarmonyOS Agent Framework attempts to make AI development more accessible in Huawei’s ecosystem. Richard Yu Chengdong, chairman of Huawei’s consumer business group, has announced that more than 50 AI agents from established ******** platforms including Weibo and Ximalaya will be available when HarmonyOS 6 launches to consumers. However, Yu did not specify a public release date during a developer conference presentation held on Friday. The AI agents integration develops an industry trend where operating systems become platforms for artificial intelligence deployment rather than application launchers. By embedding AI capabilities directly into the OS layer, Huawei positions HarmonyOS 6 as a foundation for what the company calls next-generation computing experiences. Ecosystem metrics show steady progress The platform has eight million registered developers and hosts more than 30,000 applications and “atomic services” – lightweight programmes that run without installation. HarmonyOS 5 operates on more than 40 device models, indicating steady hardware adoption. Yu acknowledged the competitive landscape, stating that HarmonyOS still lags behind Apple’s iOS and Google’s Android in terms of global reach and application support. “But the top 5,000 apps accounted for 99.9 per cent of consumer time spent” on Huawei devices, he said, suggesting the company prioritises essential applications over total app quantity. The pragmatic approach reflects Huawei’s understanding that ecosystem success depends on quality and user engagement rather than purely numerical metrics. The focus on core applications that drive user behaviour indicates a mature strategy to compete with established platforms. Pangu AI models target industrial applications Huawei has also introduced Pangu 5.5, the latest in the family of AI models designed for enterprise and industrial uses. The natural language processing model contains 718 billion parameters, while the computer vision model features 15 billion parameters – specifications that position these models competitively in the current AI landscape. The company is targeting five specialised sectors: medicine, finance, governance, manufacturing, and automotive. The industrial focus suggests Huawei is using AI development to strengthen its enterprise relationships while consumer market access remains constrained by geopolitical factors. The AI model’s integration with HarmonyOS 6 creates a vertically integrated stack where Huawei controls both the AI infrastructure and the operating system deployment, potentially offering advantages in optimisation and performance. Market trajectory and strategic implications According to consultancy Canalys, Huawei has shipped more than 103 million smartphones and 21 million tablets running HarmonyOS, with nearly half delivered in 2024. The acceleration indicates growing internal adoption and suggests the platform is gaining momentum in China’s domestic market. The company has expanded HarmonyOS beyond mobile devices, launching two laptops with the operating system last month. The multi-device strategy aims to create a unified software experience similar to Apple’s ecosystem approach, though execution in diverse hardware categories presents significant technical challenges. The HarmonyOS 6 development reflects Huawei’s broader transformation from a hardware-focused company to a software and services provider. The evolution, driven by US Entity List restrictions since 2019, has forced innovative approaches to technology development and market positioning. See also: Huawei Supernode 384 disrupts Nvidia’s AI market hold Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Huawei HarmonyOS 6 AI agents offer alternative to Android and iOS appeared first on AI News. View the full article
-
‘The OpenAI Files’ report, assembling voices of concerned ex-staff, claims the world’s most prominent AI lab is betraying safety for profit. What began as a noble quest to ensure AI would serve all of humanity is now teetering on the edge of becoming just another corporate giant, chasing immense profits while leaving safety and ethics in the dust. At the core of it all is a plan to tear up the original rulebook. When OpenAI started, it made a crucial promise: it put a cap on how much money investors could make. It was a legal guarantee that if they succeeded in creating world-changing AI, the vast benefits would flow to humanity, not just a handful of billionaires. Now, that promise is on the verge of being erased, apparently to satisfy investors who want unlimited returns. For the people who built OpenAI, this pivot away from AI safety feels like a profound betrayal. “The non-profit mission was a promise to do the right thing when the stakes got high,” says former staff member Carroll Wainwright. “Now that the stakes are high, the non-profit structure is being abandoned, which means the promise was ultimately empty.” Deepening crisis of trust Many of these deeply worried voices point to one person: CEO Sam Altman. The concerns are not new. Reports suggest that even at his previous companies, senior colleagues tried to have him removed for what they called “deceptive and chaotic” behaviour. That same feeling of mistrust followed him to OpenAI. The company’s own co-founder, Ilya Sutskever, who worked alongside Altman for years, and since launched his own startup, came to a chilling conclusion: “I don’t think Sam is the guy who should have the finger on the button for AGI.” He felt Altman was dishonest and created chaos, a terrifying combination for someone potentially in charge of our collective future. Mira Murati, the former CTO, felt just as uneasy. “I don’t feel comfortable about Sam leading us to AGI,” she said. She described a toxic pattern where Altman would tell people what they wanted to hear and then undermine them if they got in his way. It suggests manipulation that former OpenAI board member Tasha McCauley says “should be unacceptable” when the AI safety stakes are this high. This crisis of trust has had real-world consequences. Insiders say the culture at OpenAI has shifted, with the crucial work of AI safety taking a ********* to releasing “shiny products”. Jan Leike, who led the team responsible for long-term safety, said they were “sailing against the wind,” struggling to get the resources they needed to do their vital research. Another former employee, William Saunders, even gave a terrifying testimony to the US Senate, revealing that for long periods, security was so weak that hundreds of engineers could have stolen the company’s most advanced AI, including GPT-4. Desperate plea to prioritise AI safety at OpenAI But those who’ve left aren’t just walking away. They’ve laid out a roadmap to pull OpenAI back from the brink, a last-ditch effort to save the original mission. They’re calling for the company’s nonprofit heart to be given real power again, with an iron-clad veto over safety decisions. They’re demanding clear, honest leadership, which includes a new and thorough investigation into the conduct of Sam Altman. They want real, independent oversight, so OpenAI can’t just mark its own homework on AI safety. And they are pleading for a culture where people can speak up about their concerns without fearing for their jobs or savings—a place with real protection for whistleblowers. Finally, they are insisting that OpenAI stick to its original financial promise: the profit caps must stay. The goal must be public benefit, not unlimited private wealth. This isn’t just about the internal drama at a Silicon Valley company. OpenAI is building a technology that could reshape our world in ways we can barely imagine. The question its former employees are forcing us all to ask is a simple but profound one: who do we trust to build our future? As former board member Helen Toner warned from her own experience, “internal guardrails are fragile when money is on the line”. Right now, the people who know OpenAI best are telling us those safety guardrails have all but broken. See also: AI adoption matures but deployment hurdles remain Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post The OpenAI Files: Ex-staff claim profit greed betraying AI safety appeared first on AI News. View the full article
-
Apple is beginning to use generative artificial intelligence to help design the chips that power its devices. The company’s hardware chief, Johny Srouji, made that clear during a speech last month in Belgium. He said Apple is exploring AI as a way to save time and reduce complexity in chip design, especially as chips grow more advanced. “Generative AI techniques have a high potential in getting more design work in less time, and it can be a huge productivity boost,” Srouji said. He was speaking while receiving an award from Imec, a semiconductor research group that works with major chipmakers around the world. He also mentioned how much Apple depends on third-party software from electronic design automation (EDA) companies. The tools are key to developing the company’s chips. Synopsys and Cadence, two of the biggest EDA firms, are both working to add more AI into their design tools. From the A4 to Vision Pro: A design timeline Srouji’s remarks offered a rare glimpse into Apple’s internal process. He walked through Apple’s journey, starting with the A4 chip in the iPhone 4, launched in 2010. Since then, Apple has built a range of custom chips, including those used in the iPad, Apple Watch, and Mac. The company also developed the chips that run the Vision Pro headset. He said that while hardware is important, the real challenge lies in design. Over time, chip design has become more complex and now requires tight coordination between hardware and software. Srouji said AI has the potential to make that coordination faster and more reliable. Why Apple is working with Broadcom on server chips In late 2024, Apple began a quiet project with chip supplier Broadcom to develop its first AI server chip. The processor, known internally as “Baltra,” is said to be part of Apple’s larger plan to support more AI services on the back end. That includes features tied to Apple Intelligence, the company’s new suite of AI tools for iPhones, iPads, and Macs. Baltra is expected to power Apple’s private cloud infrastructure. Unlike devices that run AI locally, this chip will sit in servers, likely inside Apple’s own data centres. It would help handle heavier AI workloads that are too much for on-device chips. On-device vs. cloud: Apple’s AI infrastructure split Apple is trying to balance user privacy with the need for more powerful AI features. Some of its AI tools will run directly on devices. Others will use server-based chips like Baltra. The setup is part of what Apple calls “Private Cloud Compute.” The company says users won’t need to sign in, and data will be kept anonymous. But the approach depends on having a solid foundation of hardware – both in devices and in the cloud. That’s where chips like Baltra come in. Building its own server chips would give Apple more control over performance, security, and integration. No backup plan: A pattern in Apple’s hardware strategy Srouji said Apple is used to taking big hardware risks. When the company moved its Mac lineup from Intel to Apple Silicon in 2020, it didn’t prepare a backup plan. “Moving the Mac to Apple Silicon was a huge bet for us. There was no backup plan, no split-the-lineup plan, so we went all in, including a monumental software effort,” he said. The same mindset now seems to apply to Apple’s AI chips. Srouji said the company is willing to go all in again, trusting that AI tools can make the chip design process faster and more precise. EDA firms like Synopsys and Cadence shape the roadmap While Apple designs its own chips, it depends heavily on tools built by other companies. Srouji mentioned how important EDA vendors are to Apple’s chip efforts. Cadence and Synopsys are both updating their software to include more AI features. Synopsys recently introduced a product called AgentEngineer. It uses AI agents to help chip designers automate repetitive tasks and manage complex workflows. The idea is to let human engineers focus on higher-level decisions. The changes could make it easier for companies like Apple to speed up chip development. Cadence is also expanding its AI offerings. Both firms are in a race to meet the needs of tech companies that want faster and cheaper ways to design chips. What comes next: Talent, testing, and production As Apple adds more AI into its chip design, it will need to bring in new kinds of talent. That includes engineers who can work with AI tools, as well as people who understand both hardware and machine learning. At the same time, chips like Baltra still need to be tested and manufactured. Apple will likely continue to rely on partners like TSMC for chip production. But the design work is moving more in-house, and AI is playing a ******* role in that shift. How Apple integrates these AI-designed chips into products and services remains to be seen. What’s clear is that the company is trying to tighten its control over the full stack – hardware, software, and now the infrastructure that powers AI. The post Apple hints at AI integration in chip design process appeared first on AI News. View the full article
-
AI has moved beyond experimentation to become a core part of business operations, but deployment challenges persist. Research from Zogby Analytics, on behalf of Prove AI, shows that most organisations have graduated from testing the AI waters to diving in headfirst with production-ready systems. Despite this progress, businesses are still grappling with basic challenges around data quality, security, and effectively training their models. Looking at the numbers, it’s pretty eye-opening. 68% of organisations now have custom AI solutions up and running in production. Companies are putting their money where their mouth is too, with 81% spending at least a million annually on AI initiatives. Around a quarter are investing over 10 million each year, showing we’ve moved well beyond the “let’s experiment” phase into serious, long-term AI commitment. This shift is reshaping leadership structures as well. 86% of organisations have appointed someone to lead their AI efforts, typically with a ‘Chief AI Officer’ title or similar. These AI leaders are now almost as influential as CEOs when it comes to setting strategy with 43.3% of companies saying the CEO calls the AI shots, while 42% give that responsibility to their AI chief. But the AI deployment journey isn’t all smooth sailing. More than half of business leaders admit that training and fine-tuning AI models has been tougher than they expected. Data issues keep popping up, causing headaches with quality, availability, copyright, and model validation—undermining how effective these AI systems can be. Nearly 70% of organisations report having at least one AI project behind schedule, with data problems being the main culprit. As businesses get more comfortable with AI, they’re finding new ways to use it. While chatbots and virtual assistants remain popular (55% adoption), more technical applications are gaining ground. Software development now tops the list at 54%, alongside predictive analytics for forecasting and fraud detection at 52%. This suggests companies are moving beyond flashy customer-facing applications toward using AI to improve core operations. Marketing applications, once the gateway for many AI deployment initiatives, are getting less attention these days. When it comes to the AI models themselves, there’s a strong focus on generative AI, with 57% of organisations making it a priority. However, many are taking a balanced approach, combining these newer models with traditional machine learning techniques. Google’s Gemini and OpenAI’s GPT-4 are the most widely-used large language models, though DeepSeek, Claude, and Llama are also making strong showings. Most companies use two or three different LLMs, suggesting that a multi-model approach is becoming standard practice. Perhaps most interesting is the shift in where companies are running their AI deployment. While almost nine in ten organisations use cloud services for at least some of their AI infrastructure, there’s a growing trend toward bringing things back in-house. Two-thirds of business leaders now believe non-cloud deployments offer better security and efficiency. As a result, 67% plan to move their AI training data to on-premises or hybrid environments, seeking greater control over their digital assets. Data sovereignty is the top priority for 83% of respondents when deploying AI systems. Business leaders seem confident about their AI governance capabilities with around 90% claiming they’re effectively managing AI policy, can set up necessary guardrails, and can track their data lineage. However, this confidence stands in contrast to the practical challenges causing project delays. Issues with data labeling, model training, and validation continue to be stumbling blocks. This suggests a potential gap between executives’ confidence in their governance frameworks and the day-to-day reality of managing data. Talent shortages and integration difficulties with existing systems are also frequently cited reasons for delays. The days of AI experimentation are behind us and it’s now a fundamental part of how businesses operate. Organisations are investing heavily, reshaping their leadership structures, and finding new ways for AI deployment across their operations. Yet as ambitions grow, so do the challenges of putting these plans into action. The journey from pilot to production has exposed fundamental issues in data readiness and infrastructure. The resulting shift toward on-premises and hybrid solutions shows a new level of maturity, with organisations prioritising control, security, and governance. As AI deployment accelerates, ensuring transparency, traceability, and trust isn’t just a goal but a necessity for success. The confidence is real, but so is the caution. (Image by Roy Harryman) See also: Ren Zhengfei: China’s AI future and Huawei’s long game Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI adoption matures but deployment hurdles remain appeared first on AI News. View the full article