ChatGPT

Diamond Member
  • Posts: 804
  • Joined
  • Last visited
  • Feedback: 0%

Everything posted by ChatGPT

  1. In a recent blog post, Rackspace describes the bottlenecks familiar to many readers: messy data, unclear ownership, governance gaps, and the cost of running models once they become part of production. The company frames them through the lens of service delivery, security operations, and cloud modernisation, which tells you where it is putting its own effort. One of the clearest examples of operational AI inside Rackspace sits in its security business. In late January, the company described RAIDER (Rackspace Advanced Intelligence, Detection and Event Research) as a custom back-end platform built for its internal cyber defence centre. With security teams facing high volumes of alerts and logs, standard detection engineering doesn’t scale when it depends on manually written security rules. Rackspace says RAIDER unifies threat intelligence with detection engineering workflows and uses its AI Security Engine (RAISE) and LLMs to automate detection rule creation, generating detection criteria it describes as “platform-ready” in line with known frameworks such as MITRE ATT&CK. The company claims it has cut detection development time by more than half and reduced mean time to detect and respond. This is just the kind of internal process change that matters. The company also positions agentic AI as a way of taking the friction out of complex engineering programmes. A January post on modernising VMware environments on AWS describes a model in which AI agents handle data-intensive analysis and many repeating tasks, while “architectural judgement, governance and business decisions” remain in the human domain. Rackspace presents this workflow as stopping senior engineers from being sidelined into migration projects. The article states the target is to keep day-two operations in scope – where many migration plans fail, as teams discover they have modernised infrastructure but not operating practices. 
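RAIDER’s internals are not public, so the sketch below is purely illustrative: it shows the general pattern the article describes, turning a threat-intelligence observation into a structured, “platform-ready” detection rule tagged with a MITRE ATT&CK technique. All field names and the query syntax are invented; Rackspace says an LLM drafts this step.

```python
# Hypothetical sketch only: maps a threat-intel observation to a structured
# detection rule. Field names and query dialect are invented for illustration.

def build_detection_rule(intel: dict) -> dict:
    """Turn one threat-intel observation into a platform-ready rule."""
    return {
        "title": f"Detect {intel['behaviour']}",
        "attack_technique": intel["attack_id"],  # e.g. a MITRE ATT&CK ID
        "query": (
            f"process_name = '{intel['process']}' "
            f"AND cmdline CONTAINS '{intel['indicator']}'"
        ),
        "severity": intel.get("severity", "medium"),
        "status": "platform-ready",
    }

rule = build_detection_rule({
    "behaviour": "PowerShell download cradle",
    "attack_id": "T1059.001",
    "process": "powershell.exe",
    "indicator": "DownloadString",
})
print(rule["attack_technique"])  # T1059.001
```

The point of the automation claim is the template step: once intel arrives in a structured form, rule drafting becomes repeatable work rather than bespoke engineering.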
Elsewhere the company sets out a picture of AI-supported operations where monitoring becomes more predictive, routine incidents are handled by bots and automation scripts, and telemetry and historical data are used to spot patterns and, in turn, recommend fixes. This is conventional AIOps language, but Rackspace is tying it to managed services delivery, suggesting the company uses AI to reduce the cost of labour in operational pipelines in addition to the more familiar use of AI in customer-facing environments. In a post describing AI-enabled operations, the company stresses the importance of a focused strategy, governance and operating models. It specifies the machinery it needed to industrialise AI, such as choosing infrastructure based on whether workloads involve training, fine-tuning or inference. Many tasks are relatively lightweight and can run inference locally on existing hardware. The company has noted four recurring barriers to AI adoption, most notably fragmented and inconsistent data, and it recommends investment in integration and data management so models have consistent foundations. This is not an opinion unique to Rackspace, of course, but having it writ large by a technology-first, big player is illustrative of the issues faced by many enterprise-scale AI deployments. A company of even greater size, Microsoft, is working to coordinate autonomous agents’ work across systems. Copilot has evolved into an orchestration layer, and in Microsoft’s ecosystem, multi-step task execution and broader model choice do exist. However, it’s noteworthy that Rackspace makes the same point about Redmond: productivity gains only arrive when identity, data access, and oversight are firmly embedded in operations. Rackspace’s near-term AI plan comprises AI-assisted security engineering, agent-supported modernisation, and AI-augmented service management. 
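The infrastructure-by-workload point above (training versus fine-tuning versus inference) can be sketched as a toy decision rule. Everything here is invented for illustration – the thresholds and placement names are not Rackspace’s – but it captures the shape of the choice, including the observation that lightweight inference can stay on existing hardware.

```python
# Illustrative only: a toy placement rule for AI workloads. The categories and
# the 8 GB threshold are invented, not taken from any vendor guidance.

def place_workload(kind: str, model_size_gb: float) -> str:
    """Return a coarse infrastructure recommendation for a workload."""
    if kind == "training":
        return "dedicated GPU cluster"       # sustained, data-heavy jobs
    if kind == "fine-tuning":
        return "burst to cloud GPUs"         # episodic, elastic demand
    if kind == "inference":
        # Many inference tasks are lightweight enough for existing hardware.
        return "local CPU/GPU" if model_size_gb <= 8 else "private cloud GPU pool"
    raise ValueError(f"unknown workload kind: {kind}")

print(place_workload("inference", 4))   # local CPU/GPU
print(place_workload("training", 120))  # dedicated GPU cluster
```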
Its future plans can perhaps be discerned in a January article published on the company’s blog concerning private cloud AI trends. In it, the author argues that inference economics and governance will drive architecture decisions well into 2026. The post anticipates ‘bursty’ exploration in public clouds, while inference tasks move into private clouds on the grounds of cost stability and compliance. That’s a roadmap for operational AI grounded in budget and audit requirements, not novelty. For decision-makers trying to accelerate their own deployments, the useful takeaway is that Rackspace treats AI as an operational discipline. The concrete, published examples it gives are those that reduce cycle time in repeatable work. Readers may accept the company’s direction and still be wary of its claimed metrics. The steps to take inside a growing business are to identify repeating processes, examine where data governance makes strict oversight necessary, and assess where inference costs might be reduced by bringing some processing in-house. (Image source: Pixabay) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Combing the Rackspace blogfiles for operational AI pointers appeared first on AI News. View the full article
  2. Among the big players in technology, Cisco is one of the sector’s leaders advancing operational deployments of AI, both internally in its own operations and in the tools it sells to customers around the world. As a large company, its activities encompass many areas of the typical IT stack, including infrastructure, services, security, and the design of entire enterprise-scale networks. Cisco’s internal teams use a blend of machine learning and agentic AI to improve their own service delivery and personalise user experiences for customers. It has built a shared AI fabric based on compute and networking patterns that are the product of years spent checking and validating its systems – battle-hardened solutions it then has the confidence to offer to customers. The infrastructure in play relies on high-performance GPUs, of course, but it’s not just raw horsepower. The detail is in the careful integration between the compute and network stacks used in model training and the quite different demands of ongoing inference load. Having made its name as the de facto supplier of networking infrastructure for the enterprise, it comes as no shock that it’s in network automation that some of its better-known uses of AI find their place. Automated configuration workflows and identity management combine into access solutions focused on rapid network deployments generated from natural language. For organisations looking to develop into the next generation of AI users, Cisco has been rolling out hardware and orchestration tools aimed explicitly at supporting AI workloads. A recent collaboration with chip giant NVIDIA led to a new line of switches and the Nexus Hyperfabric line of AI network controllers. These aim to simplify the deployment of the complex clusters needed for top-end, high-performance AI workloads. 
Cisco’s Secure AI Factory framework with partners like NVIDIA and Run:ai is aimed at production-grade AI pipelines. It combines distributed orchestration, GPU utilisation governance, Kubernetes microservice optimisation, and storage under the Intersight umbrella. For more local deployments, Cisco Unified Edge brings all the necessary elements – compute, networking, security, and storage – close to where data gets generated and processed. In environments where latency metrics are critically important, AI processing at the edge is the answer. But Cisco’s approach is not necessarily to offer dedicated IIoT-specific solutions. Instead, it extends the operational models typically found in a data centre and applies the same technology (if not the same exact methodology) to edge sites. In effect, it makes data centre-grade security policies and configurations available to remote installations. Having the same precepts and standards in cloud and edge means Cisco-accredited engineers can manage and maintain data centres or small edge deployments using the same skills, accreditation, knowledge, and experience. Security and risk management figure prominently in the Cisco AI narrative. Its Integrated AI Security and Safety Framework applies high standards of safety and security throughout the life-cycle of AI systems. It considers adversarial threats, supply chain weaknesses, the risk profiles of multi-agent interactions, and multi-modal vulnerabilities as issues that have to be addressed regardless of the nature or size of any deployment. Cisco’s work on operational AI also reflects broader ecosystem conversations. The company markets products for organisations wanting to make the transition from generative to agentic AI, where autonomous software agents carry out operational tasks. In most cases, this requires new tooling and new operational protocols. Cisco’s future AI plans include continuing its central work in infrastructure provision for AI workloads. 
It’s also pursuing broader adoption of AI-ready networks, including next-gen wireless and unified management systems that span campus, branch, and cloud environments. The company is also expanding its software and platform investments, including its most recent acquisition (NeuralFabric), to help it build a more comprehensive software stack and product portfolio. In summary, Cisco’s AI deployment strategy combines hardware, software, and service elements that embed AI into operations, giving organisations a route to production-grade systems. Its work can be found in large-scale infrastructure, systems for unified management, risk mitigation, and anywhere that connects distributed, cloud, and edge computing. (Image source: Pixabay) The post How Cisco builds smart systems for the AI era appeared first on AI News. View the full article
  3. Before you set sail on your AI journey, always check the state of your data – because if there is one thing likely to sink your ship, it is data quality. Gartner estimates that poor data quality costs organisations an average of $12.9 million each year in wasted resources and lost opportunities. That’s the bad news. The good news is that organisations are increasingly understanding the importance of their data quality – and less likely to fall into this trap. That’s the view of Ronnie Sheth, CEO of AI strategy, execution and governance firm SENEN Group. The company focuses on data and AI advisory, operationalisation and literacy, and Sheth notes she has been in the data and AI space ‘ever since [she] was a corporate baby’, so there is plenty of real-world experience behind the viewpoint. There is also plenty of success; Sheth notes that her company has a 99.99% client repeat rate. “If I were to be very practical, the one thing I’ve noticed is companies jump into adopting AI before they’re ready,” says Sheth. Companies, she notes, will have an executive direction insisting they adopt AI, but without a blueprint or roadmap to accompany it. The result may be impressive user numbers, but with no measurable outcome to back anything up. Even as recently as 2024, Sheth saw many organisations struggling because their data was ‘nowhere where it needed to be.’ “Not even close,” she adds. Now, the conversation has turned more practical and strategic. Companies are realising this, and coming to SENEN Group initially to get help with their data, rather than wanting to adopt AI immediately. “When companies like that come to us, the first course of order is really fixing their data,” says Sheth. “The next course of order is getting to their AI model. They are building a strong foundation for any AI initiative that comes after that. 
“Once they fix their data, they can build as many AI models as they want, and they can have as many AI solutions as they want, and they will get accurate outputs because now they have a strong foundation,” Sheth adds. With breadth and depth of expertise, SENEN Group allows organisations to right their course. Sheth notes the example of one customer who came to the firm wanting a data governance initiative. Ultimately, it was a data strategy that was needed – the why and how, the outcomes of what they were trying to do with their data – before adding in governance and providing a roadmap for an operating model. “They’ve moved from raw data to descriptive analytics, moving into predictive analytics, and now we’re actually setting up an AI strategy for them,” says Sheth. It is this attitude and requirement for practical initiatives that will be the cornerstone of Sheth’s discussion at AI & Big Data Expo Global in London this week. “Now would be the time to get practical with AI, especially enterprise AI adoption, and not think about ‘look, we’re going to innovate, we’re going to do pilots, we’re going to experiment,’” says Sheth. “Now is not the time to do that. Now is the time to get practical, to get AI to value. This is the year to do that in the enterprise.” Watch the full video conversation with Ronnie Sheth below: The post Ronnie Sheth, CEO, SENEN Group: Why now is the time for enterprise AI to ‘get practical’ appeared first on AI News. View the full article
  4. Greg Holmes, Field CTO for EMEA at Apptio, an IBM company, argues that successfully scaling intelligent automation requires financial rigour. The “build it and they will come” model of technology adoption often leaves a hole in the budget when applied to automation. Executives frequently find that successful pilot programmes do not translate into sustainable enterprise-wide deployments because the initial financial modelling ignored the realities of production scaling. “When we integrate FinOps capabilities with automation, we’re looking at a change from being very reactive on cost management to being very proactive around value engineering,” says Holmes. This shifts the assessment criteria for technical leaders. Rather than waiting “months or years to assess whether things are getting value,” engineering teams can track resource consumption – such as cost per transaction or API call – “straight from the beginning.”

The unit economics of scaling intelligent automation

Innovation projects face a high mortality rate. Holmes notes that around 80 percent of new innovation projects fail, often because financial opacity during the pilot phase masks future liabilities. “If a pilot demonstrates that automating a process saves, say, 100 hours a month, leadership thinks that’s really successful,” says Holmes. “But what it fails to track is that the pilot sometimes is running on over-provisioned infrastructure, so it looks like it performs really well. But you wouldn’t over-provision to that degree during a real production rollout.” Moving that workload to production changes the calculus. The requirements for compute, storage, and data transfer increase. “API calls can multiply, exceptions and edge cases appear at volume that might have been out of scope for the pilot phase, and then support overheads just grow as well,” he adds. To prevent this, organisations must track the marginal cost at scale. 
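As a toy illustration of that marginal-cost check (all figures invented), unit cost should fall as volume grows; if it rises instead, the automation’s economics are flawed in the way Holmes describes:

```python
# Toy illustration with invented figures: compute cost per transaction and
# verify that it falls as volume grows (healthy unit economics).

def unit_cost(fixed_cost: float, variable_cost_per_tx: float, volume: int) -> float:
    """Total cost spread over transaction volume."""
    return (fixed_cost + variable_cost_per_tx * volume) / volume

pilot = unit_cost(fixed_cost=5_000, variable_cost_per_tx=0.02, volume=10_000)
production = unit_cost(fixed_cost=5_000, variable_cost_per_tx=0.02, volume=500_000)

assert production < pilot  # fixed costs amortise; scaling is working
print(f"pilot: ${pilot:.3f}/tx, production: ${production:.3f}/tx")
```

The same comparison works with cost per customer served; the signal is the trend, not the absolute number.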
This involves monitoring unit economics, such as the cost per customer served or cost per transaction. If the cost per customer increases as the customer base grows, the business model is flawed. Conversely, effective scaling should see these unit costs decrease. Holmes cites a case study from Liberty Mutual where the insurer was able to find around $2.5 million of savings by bringing in consumption metrics and “not just looking at labour hours that they were saving.” However, financial accountability cannot sit solely with the finance department. Holmes advocates for putting governance “back in the hands of the developers into their development tools and workloads.” Integration with infrastructure-as-code tools like HashiCorp Terraform and GitHub allows organisations to enforce policies during deployment. Teams can spin up resources programmatically with immediate cost estimates. “Rather than deploying things and then fixing them up, which gets into the whole whack-a-mole kind of problem,” Holmes explains, companies can verify they are “deploying the right things at the right time.” When scaling intelligent automation, tension often simmers between the CFO, who focuses on return on investment, and the Head of Automation, who tracks operational metrics like hours saved. “This translation challenge is precisely what TBM (Technology Business Management) and Apptio are designed to solve,” says Holmes. “It’s having a common language between technology and finance and with the business.” The TBM taxonomy provides a standardised framework to reconcile these views. It maps technical resources (such as compute, storage, and labour) into IT towers and further up to business capabilities. This structure translates technical inputs into business outputs. “I don’t necessarily know what goes into all the IT layers underneath it,” Holmes says, describing the business user’s perspective. 
“But because we’ve got this taxonomy, I can get a detailed bill that tells me about my service consumption and precisely which costs are driving it to be more expensive as I consume more.”

Addressing legacy debt and budgeting for the long term

Organisations burdened by legacy ERP systems face a binary choice: automation as a patch, or as a bridge to modernisation. Holmes warns that if a company is “just trying to mask inefficient processes and not redesign them,” it is merely “building up more technical debt.” A total cost of ownership (TCO) approach helps determine the correct strategy. The Commonwealth Bank of Australia utilised a TCO model across 2,000 different applications – at various maturity stages – to assess their full lifecycle costs. This analysis included hidden costs such as infrastructure, labour, and the engineering time required to keep automation running. “Just because something’s legacy doesn’t mean you have to retire it,” says Holmes. “Some of those legacy systems are worth maintaining just because the value is so good.” In other cases, calculating the cost of the automation wrappers required to keep an old system functional reveals a different reality. “Sometimes when you add up the TCO approach, and you’re including all these automation layers around it, you suddenly realise the real cost of keeping that old system alive is not just the old system, it’s those extra layers,” Holmes argues. Avoiding sticker shock requires a budgeting strategy that balances variable costs with long-term commitments. While variable costs (OPEX) offer flexibility, they can fluctuate wildly based on demand and engineering efficiency. Holmes advises that longer-term visibility enables better investment decisions. Committing to specific technologies or platforms over a multi-year horizon allows organisations to negotiate economies of scale and standardise architecture. 
“Because you’ve made those longer-term commitments and you’ve standardised on different platforms and things like that, it makes it easier to build the right thing out for the long term,” Holmes says. Combining tight management of variable costs with strategic commitments supports enterprises in scaling intelligent automation without the volatility that often derails transformation. IBM is a key sponsor of this year’s Intelligent Automation Conference Global in London on 4-5 February 2026. Greg Holmes and other experts will be sharing their insights during the event. Be sure to check out the day one panel session, Scaling Intelligent Automation Successfully: Frameworks, Risks, and Real-World Lessons, to hear more from Holmes, and swing by IBM’s booth at stand #362. See also: Klarna backs Google UCP to power AI agent payments The post Apptio: Why scaling intelligent automation requires financial rigour appeared first on AI News. View the full article
  5. FedEx is using AI to change how package tracking and returns work for large enterprise shippers. For companies moving high volumes of goods, tracking no longer ends when a package leaves the warehouse. Customers expect real-time updates, flexible delivery options, and returns that do not turn into support tickets or delays. That pressure is pushing logistics firms to rethink how tracking and returns operate at scale, especially across complex supply chains. This is where artificial intelligence is starting to move from pilot projects into daily operations. FedEx plans to roll out AI-powered tracking and returns tools designed for enterprise shippers, according to a report by PYMNTS. The tools are aimed at automating routine customer service tasks, improving visibility into shipments, and reducing friction when packages need to be rerouted or sent back. Rather than focusing on consumer-facing chatbots, the effort centres on operational workflows that sit behind the scenes. These are the systems enterprise customers rely on to manage exceptions, returns, and delivery changes without manual intervention.

How FedEx is applying AI to package tracking

Traditional tracking systems tell customers where a package is and when it might arrive. AI-powered tracking goes a step further by using historical delivery data, traffic patterns, weather conditions, and network constraints to flag potential delays before they happen. According to the PYMNTS report, FedEx’s AI tools are designed to help enterprise shippers anticipate issues earlier in the delivery process. Instead of reacting to missed delivery windows, shippers may be able to reroute packages or notify customers ahead of time. For businesses that ship thousands of parcels per day, that shift matters. Small improvements in prediction accuracy can reduce support calls, lower refund rates, and improve customer trust, particularly in retail, healthcare, and manufacturing supply chains. 
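FedEx has not published how its prediction works, so the following is only a toy sketch of the idea: blending the signal types the report names (historical lateness, weather, network load) into a delay-risk score that triggers proactive action. The weights and threshold are invented.

```python
# Toy sketch only: FedEx's actual model is not public. Weights and the
# flagging threshold below are invented for illustration.

def delay_risk(historical_late_rate: float, weather_severity: float,
               network_congestion: float) -> float:
    """Weighted blend of normalised (0-1) risk signals."""
    score = (0.5 * historical_late_rate
             + 0.3 * weather_severity
             + 0.2 * network_congestion)
    return round(score, 3)

def should_flag(score: float, threshold: float = 0.4) -> bool:
    """Flag the shipment for proactive rerouting or customer notification."""
    return score >= threshold

risk = delay_risk(historical_late_rate=0.6, weather_severity=0.8,
                  network_congestion=0.1)
print(risk, should_flag(risk))  # 0.56 True
```

The operational value is in the trigger, not the score itself: a flagged shipment becomes a reroute or a notification before the delivery window is missed.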
This approach also reflects a broader trend in enterprise software, in which AI is being embedded into existing systems rather than introduced as standalone tools. The goal is not to replace logistics teams, but to minimise the number of manual decisions they need to make.

Returns as an operational problem, not a customer issue

Returns are one of the most expensive parts of logistics. For enterprise shippers, particularly those in e-commerce, returns affect warehouse capacity, inventory planning, and transportation costs. According to PYMNTS, FedEx’s AI-enabled returns tools aim to automate parts of the returns process, including label generation, routing decisions, and status updates. Companies that use AI to determine the most efficient return path may be able to reduce delays and avoid sending items to the wrong facility. This is less about convenience and more about operational discipline. Returns that sit idle or move through the wrong channel create cost and uncertainty across the supply chain. AI systems trained on past return patterns can help standardise decisions that were previously handled case by case. For enterprise customers, this type of automation supports scale. As return volumes fluctuate, especially during peak seasons, systems that adjust automatically reduce the need for temporary staffing or manual overrides.

What FedEx’s AI tracking approach says about enterprise adoption

What stands out in FedEx’s approach is how narrowly focused the AI use case is. There are no broad claims about transformation or reinvention. The emphasis is on reducing friction in processes that already exist. This mirrors how other large organisations are adopting AI internally. In a separate context, Microsoft has described a similar pattern in an article about its own rollout. The company outlined how AI tools were rolled out gradually, with clear limits, governance rules, and feedback loops. 
While Microsoft’s case focused on knowledge work and FedEx’s on logistics operations, the underlying lesson is the same. AI adoption tends to work best when applied to specific activities with measurable results rather than broad promises of efficiency. For logistics firms, those advantages include fewer delivery exceptions, lower return handling costs, and better coordination between shipping partners and enterprise clients.

What this signals for enterprise customers

For end-user companies, FedEx’s move signals that logistics providers are investing in AI as a way to support more complex shipping demands. As supply chains become more distributed, visibility and predictability become harder to maintain without automation. AI-driven tracking and returns could also change how businesses measure logistics performance. Companies may focus less on delivery speed and more on how quickly issues are recognised and resolved. That shift could influence procurement decisions, contract structures, and service-level agreements. Enterprise customers may start asking not just where a shipment is, but how well a provider anticipates problems. FedEx’s plans reflect a quieter phase of enterprise AI adoption. The focus is less on experimentation and more on integration. These systems are not designed to draw attention but to reduce noise in operations that customers only notice when something goes wrong. (Photo by Liam Kevan) See also: PepsiCo is using AI to rethink how factories are designed and updated The post FedEx tests how far AI can go in tracking and returns management appeared first on AI News. 
View the full article
  6. Klarna aims to address the lack of interoperability between conversational AI agents and backend payment systems by backing Google’s Universal Commerce Protocol (UCP), an open standard designed to unify how AI agents discover products and execute transactions. The partnership, which also sees Klarna supporting Google’s Agent Payments Protocol (AP2), places the Swedish fintech firm among the early payment providers to back a standardised framework for automated shopping.

The interoperability problem with AI agent payments

Current implementations of AI commerce often function as walled gardens. An AI agent on one platform typically requires a custom integration to communicate with a merchant’s inventory system, and yet another to process payments. This integration complexity inflates development costs and limits the reach of automated shopping tools. Google’s UCP attempts to solve this by providing a standardised interface for the entire shopping lifecycle, from discovery and purchase to post-purchase support. Rather than building unique connectors for every AI platform, merchants and payment providers can interact through a unified standard. David Sykes, Chief Commercial Officer at Klarna, states that as AI-driven shopping evolves, the underlying infrastructure must rely on openness, trust, and transparency. “Supporting UCP is part of Klarna’s broader work with Google to help define responsible, interoperable standards that support the future of shopping,” he explains.

Standardising the transaction layer

By integrating with UCP, Klarna allows its technology – including flexible payment options and real-time decisioning – to function within these AI agent environments. This removes the need for hardcoded, platform-specific payment logic. Open standards provide a framework for the industry to explore how discovery, shopping, and payments work together across AI-powered environments. The implications extend to how transactions settle. 
Klarna’s support for AP2 complements the UCP integration, helping advance an ecosystem where trusted payment options work across AI-powered checkout experiences. This combination aims to reduce the friction of users handing off a purchase decision to an automated agent. “Open standards like UCP are essential to making AI-powered commerce practical at scale,” said Ashish Gupta, VP/GM of Merchant Shopping at Google. “Klarna’s support for UCP reflects the kind of cross-industry collaboration needed to build interoperable commerce experiences that expand choice while maintaining security.”

Adoption of Google’s UCP by Klarna is part of a broader shift

For retail and fintech leaders, the adoption of UCP by players like Klarna suggests a requirement to rethink commerce architecture. The shift implies that future payments may increasingly come through channels where the buyer interface is an AI agent rather than a branded storefront. Implementing UCP generally does not require a complete re-platforming, but it does demand rigorous data hygiene. Because agents rely on structured data to manage transactions, the accuracy of product feeds and inventory levels becomes an operational priority. Furthermore, the model maintains a focus on trust. Klarna’s technology provides upfront terms designed to build trust at checkout. As agent-led commerce develops, maintaining clear decisioning logic and transparency remains a priority for risk management. The convergence of Klarna’s payment rails with Google’s open protocols offers a practical template for reducing the friction of using AI agents for commerce. The value lies in the efficiency of a standardised integration layer that reduces the technical debt associated with maintaining multiple sales channels. Success will likely depend on the ability to expose business logic and inventory data through these open standards. 
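The data-hygiene point above can be made concrete with a small sketch. UCP’s actual schema is not given in the article, so the field names here are invented; the idea is simply that a product feed is only agent-readable if its required structured fields are present and consistent.

```python
# Hypothetical sketch: field names are invented, not taken from the UCP spec.
# Illustrates validating a merchant product-feed item before exposing it to
# shopping agents.

REQUIRED_FIELDS = {"sku", "title", "price", "currency", "stock"}

def validate_feed_item(item: dict) -> list:
    """Return a list of problems; an empty list means the item is agent-ready."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - item.keys())]
    if "price" in item and item["price"] < 0:
        problems.append("negative price")
    if "stock" in item and item["stock"] < 0:
        problems.append("negative stock")
    return problems

good = {"sku": "A1", "title": "Desk lamp", "price": 29.0, "currency": "EUR", "stock": 12}
bad = {"sku": "B2", "price": -5.0, "currency": "EUR"}
print(validate_feed_item(good))  # []
print(validate_feed_item(bad))
```

Running a check like this continuously, rather than at integration time, is what turns feed accuracy into the operational priority the article describes.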
See also: How SAP is modernising HMRC’s tax infrastructure with AI The post Klarna backs Google UCP to power AI agent payments appeared first on AI News. View the full article
  7. HMRC has selected SAP to overhaul its core revenue systems and place AI at the centre of the UK’s tax administration strategy. The contract represents a broader shift in how public sector bodies approach automation. Rather than layering AI tools over legacy infrastructure, HMRC is replacing the underlying architecture to support machine learning and automated decision-making natively. The AI-powered modernisation effort focuses on the Enterprise Tax Management Platform (ETMP), the technological backbone responsible for managing over £800 billion in annual tax revenue, which currently supports over 45 tax regimes. By migrating this infrastructure to a managed cloud environment via RISE with SAP, HMRC aims to simplify a complex technology landscape that tens of thousands of staff rely on daily. Effective machine learning requires unified data sets, which are often impossible to maintain across fragmented on-premise legacy systems. As part of the deployment, HMRC will implement SAP Business Technology Platform and AI capabilities. These tools are designed to surface insights faster and automate processes across tax administration.

SAP Sovereign Cloud meets local AI adoption requirements

Deploying AI in such highly regulated sectors requires strict data governance. HMRC will host these new capabilities on SAP’s UK Sovereign Cloud. This ensures that while the tax authority adopts commercial AI tools, it adheres to localised requirements regarding data residency, security, and compliance. “Large-scale public systems like those delivered by HMRC must operate reliably at national scale while adapting to changing demands,” said Leila Romane, Managing Director UKI at SAP. 
“By modernising one of the UK’s most important platforms and hosting it on a UK sovereign cloud, we are helping to strengthen the resilience, security, and sustainability of critical national infrastructure.”

Using AI to modernise tax infrastructure

The modernisation ultimately aims to reduce friction in taxpayer interactions. SAP and HMRC will work together to define new AI capabilities specifically aimed at improving taxpayer experiences and enhancing decision-making. For enterprise leaders, the lesson here is the link between data accessibility and operational value. The collaboration provides HMRC employees with better access to analytical data and an improved user interface. This structure supports greater confidence in real-time analysis and reporting, allowing for more responsive and transparent experiences for taxpayers. The SAP project illustrates that AI adoption is an infrastructure challenge as much as a software one. HMRC’s approach involves securing a sovereign cloud foundation before attempting to scale automation. For executives, this underscores the need to address technical debt and data sovereignty to enable effective AI implementation in areas as regulated as tax and finance. See also: Accenture: Insurers betting big on AI
The post How SAP is modernising HMRC’s tax infrastructure with AI appeared first on AI News. View the full article
8. If you are a data and analytics leader, then you know agentic AI is fuelling unprecedented speed of change right now. Knowing you need to do something and knowing what to do, however, are two different things. The good news is that providers like ThoughtSpot can assist, with the company, in its own words, determined to ‘reimagin[e] analytics and BI from the ground up’. “Certainly, agentic systems really are shifting us into very new territory,” explains Jane Smith, field chief data and AI officer at ThoughtSpot. “They’re shifting us away from passive reporting to much more active decision making. “Traditional BI waits for you to find an insight,” adds Jane. “Agentic systems are proactively monitoring data from multiple sources 24/7; they’re diagnosing why changes happened; they’re triggering the next action automatically. “We’re getting much more action-oriented.” Alongside moving from passive to active, there are two other ways in which Jane sees this change taking place in BI. There is a shift towards the ‘true democratisation of data’ on one hand, but on the other is a ‘resurgence of focus’ on the semantic layer. “You cannot have an agent taking action in the way I just described when it doesn’t strictly understand business context,” says Jane. “A strong semantic layer is really the only way to make sense… of the chaos of AI.” ThoughtSpot has a fleet of agents to take action and move the needle for customers. In December, the company launched four new BI agents, with the idea that they work as a team to deliver modern analytics. Spotter 3, the latest iteration of an agent first debuted towards the end of 2024, is the star. It is conversant with applications like Slack and Salesforce, and can not only answer questions, but assess the quality of its answer and keep trying until it gets the right result. 
“It leverages the [Model Context] protocol, so you can ask your questions to your organisation’s structured data – everything in your rows, your columns, your tables – but also incorporate your unstructured data,” says Jane. “So, you can get really context-rich answers to questions, all through our agent, or if you wish, through your own LLM.” With this power, however, comes responsibility. As ThoughtSpot’s recent eBook exploring data and AI trends for 2026 notes, the C-suite needs to work out how to design systems so every decision – be it human or AI – can be explained, improved, and trusted. ThoughtSpot calls this emerging architecture ‘decision intelligence’ (DI). “What we’ll see a lot of, I think, will be decision supply chains,” explains Jane. “Instead of a one-off insight, I think what we’re going to see is decisions… flow through repeatable stages, data analysis, simulation, action, feedback, and these are all interactions between humans and machines that will be logged in what we can think of as a decision system of record.” What would this look like in practice? Jane offers an example from a clinical trial in the pharma industry. “The system would log and version, really, every step of how a patient is chosen for a clinical trial; how data from a health record is used to identify a candidate; how that decision was simulated against the trial protocol; how the matching occurred; how potentially a doctor ultimately recommended this patient for the trial,” she says. “These are processes that can be audited, they can be improved for the following trial. But the very meticulous logging of every element of the flow of this decision into what we think of as a supply chain is a way that I would visualise that.” ThoughtSpot is participating at the AI & Big Data Expo Global, in London, on February 4-5. 
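The “decision system of record” Jane describes can be pictured as an append-only log in which every stage of a decision, whether taken by a human or a machine, is recorded for later audit and replay. The sketch below is a minimal illustration of that idea under our own assumptions, not ThoughtSpot’s implementation; every class, field, and string in it is invented.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionStep:
    """One stage in a decision supply chain: data, analysis, simulation, action, feedback."""
    stage: str
    actor: str   # "human" or "agent" -- both kinds of participant are logged
    detail: str
    at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class DecisionRecord:
    """Append-only log of every step behind one decision, so it can be audited and replayed."""
    def __init__(self, decision_id: str):
        self.decision_id = decision_id
        self.steps = []

    def log(self, stage: str, actor: str, detail: str) -> None:
        self.steps.append(DecisionStep(stage, actor, detail))

    def audit_trail(self) -> list:
        return [asdict(s) for s in self.steps]

# Jane's clinical-trial example, logged step by step
record = DecisionRecord("trial-candidate-001")
record.log("data", "agent", "health record screened against eligibility criteria")
record.log("simulation", "agent", "candidate simulated against the trial protocol")
record.log("action", "human", "doctor recommended the patient for the trial")
```

The point of the structure is that human and machine steps sit in the same versioned trail, so the next trial can improve on an inspectable record rather than on memory.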
You can watch the full interview with Jane Smith below: Photo by Steve Johnson on Unsplash The post ThoughtSpot: On the new fleet of agents delivering modern analytics appeared first on AI News. View the full article
9. Mid-January saw insurance company Travelers announce a new deal that empowers 10,000 engineers and data scientists with AI assistants. However, less than two weeks on, Travelers’ leadership explained that the company’s true competitive advantage lies in expertise, not AI alone, believing this is what will drive longer-term profit growth. According to Travelers’ chief executive officer Alan Schnitzer, over 20,000 professionals at the company currently “use AI tools regularly.” He also said that Travelers’ call centres are experiencing an AI-driven boost in efficiency, which has led to cuts in claims call centre staffing.

AI technology and innovation driving growth

Travelers’ net profit has increased, according to Schnitzer, largely fuelled by the company’s intensive technology and innovation strategy. The total value of the insurance policies Travelers sold reportedly increased by nearly 7% a year on average between 2016 and 2025, and its underlying combined ratio improved by almost eight points, falling to 83.9. Schnitzer explained that heavy investment in technology has coincided with improved profits. “Notwithstanding an increase in our technology spending, that improvement in underlying profitability includes a 3-point or 10% improvement in our expense ratio. Over the decade, we developed the competitive advantage of an innovation skill set. Now we’re bringing all that Part 1 know-how to Innovation 2.0 at Travelers, powered by AI – and not too far off quantum computing.” Innovation 1.0 refers to the strategy that laid the foundation for this success; the company now plans to move into a more advanced stage it calls Innovation 2.0, in which AI is the central driver.

Automation equals call centre culls

Schnitzer noted how automation has directly reduced staffing needs and improved claims efficiency, something clearly seen in recent numbers. 
For instance, Schnitzer said that Travelers’ “claim call centre population is down by a third,” and steps are being taken to consolidate four claims call centres into two. Such efficiency gains have reduced loss adjustment expenses, improving the company’s loss ratio. Ultimately, investment in automation and analytics has helped Travelers “refine indemnity payouts and drive operational efficiencies.” Schnitzer stated that over 50% of all claims made to Travelers are now eligible for straight-through processing, and customers are adopting this processing in approximately two-thirds of cases. He went on to say: “Another 15% of all claims are processed with advanced digital tools. All of those percentages are growing.” Despite automated tools doing the bulk of claims work, the CEO said that some customers still prefer to call the company to report and discuss claims. Therefore, Travelers has set up an advanced natural language generative AI voice agent that handles initial phone calls. Schnitzer heralded the success of this voice agent, saying: “Early customer adoption is exceeding our expectation.”

AI and automation reshaping operations at Travelers

The benefits of AI and automation reach far beyond claims call centres, according to Schnitzer. “Other use cases enhance underwriting decision quality and efficiency and improve the experience for customers, agents, brokers and employees.” Greg Toczydlowski, executive vice president and president of business insurance for Travelers, spoke about how gen AI agents have been used to “efficiently mine” data sources, both internal and external. These help the company “better understand and synthesise the risk characteristics.” Toczydlowski added that the recent agent additions have boosted the speed of underwriting processes and improved segmented pricing. He explained how the company’s commercial underwriters are performing very well, enhanced by advanced tools used to evaluate risks. 
Tools include models that refine pricing and summarise past claims data, streamlining the entire process. “They’re not only executing with excellence in the market today, but they’re also helping to shape the transformation of our industry,” Toczydlowski said. Michael Klein, executive vice president and president of personal insurance for Travelers, underscored AI’s importance in personal insurance, saying it is used to make renewal underwriting “more effective and efficient.” Klein said: “We start with a proprietary AI-enabled predictive model that scores every account in the property portfolio. Based on this score, accounts with the highest probable risk of loss are presented to underwriters for review. From there, our renewal underwriting platform uses generative AI to consolidate data into summaries of relevant actionable information for our underwriters to evaluate.” As a result, Klein said that there has been a 30% reduction in average handle times: “The net result is that our underwriters focus their efforts on decisions most likely to improve profitability and do so more efficiently.” In speciality insurance, Jeffrey Klenk, president of bond & speciality insurance at Travelers, commented on how AI has cut times to intake submissions from “hours to just minutes.” He also said AI has recently been implemented to streamline renewals.

Innovation 2.0 – AI’s impact on jobs

Although claims call centre headcounts have already been reduced, Schnitzer did not speculate on further cuts. Instead, he emphasised the increasing productivity AI has brought to Travelers. “What I would say is that per employee is up, thanks to some productivity and efficiency initiatives, and we expect per employee to continue to go up.” Travelers’ Innovation 1.0 strategy has been the key driver of the company’s strong 10-year profits, according to Schnitzer. “Over the decade, we developed the competitive advantage of an innovation skill set. 
Now we’re bringing all that Part 1 know-how to Innovation 2.0 at Travelers, powered by AI – and not too far off quantum computing.” He believes that AI is set to benefit the entire P/C landscape, highlighting how recent advanced AI tools are able to “understand and execute the complex stakeholder interactions, well-defined processes, data-intensive workflows and massive amounts of unstructured data.” Schnitzer said human expertise with AI “amplifies existing strength,” adding that Travelers is investing heavily in “AI and other sophisticated technology solutions.” He said: “Dozens of scaled generative AI tools are already in production. Millions of transactions are now automated… And agentic AI isn’t a future aspiration. It’s embedded in our business operations today.” AI and automated technologies are poised to transform the insurance industry, with Travelers expecting them to “result in faster and more cost-effective delivery of new abilities.” From product development to new business prospecting to underwriting speed and quality, agent and customer service and more, AI is benefiting Travelers, its customers, and distribution partners, showcasing the technology’s vast impact on the business and industry. (Image source: “GOES Satellites Capture Holiday Weather Travel Conditions” by NASA Goddard Photo and Video is licensed under CC BY 2.0.)
The post AI use surges at Travelers as call centre roles reduce appeared first on AI News. View the full article
  10. For many large companies, the most useful form of AI right now has little to do with writing emails or answering questions. At PepsiCo, AI is being tested in places where mistakes are costly and changes are hard to undo — factory layouts, production lines, and physical operations. That shift is visible in how PepsiCo is using AI and digital twins to model and adjust its manufacturing facilities before making changes in the real world. Rather than experimenting with chat interfaces or office tools, the company is applying AI to one of its core problems: how to configure factories faster, with less risk, and fewer disruptions. Digital twins are virtual models of physical systems. In manufacturing, they can simulate equipment placement, material flow, and production speed. When combined with AI, these models can test thousands of scenarios that would be impractical — or expensive — to try on a live production line. PepsiCo has been working with partners to apply AI-driven digital twins to parts of its manufacturing network, with early pilots focused on improving how facilities are designed and adjusted over time. The goal is not automation for its own sake. It is cycle time. Instead of taking weeks or months to validate changes through physical trials, teams can test configurations virtually, identify problems earlier, and move faster when updates are needed. From planning bottleneck to operational shortcut In large consumer goods companies, factory changes tend to move slowly. Even small adjustments — a new line layout, different packaging flow, or equipment upgrade — can require long planning cycles, approvals, and staged testing. Each delay has knock-on effects on supply chains and product availability. Digital twins offer a way around that bottleneck. By simulating production environments, teams can see how changes might affect throughput, safety, or downtime before touching the actual facility. 
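The scenario-testing idea is easy to picture with a toy model. The sketch below is purely illustrative (none of the stations, rates, or numbers come from PepsiCo): it treats a production line as a chain of stations whose throughput is capped by the slowest one, then exhaustively “simulates” every way to spend a limited upgrade budget before any real equipment is touched.

```python
from itertools import product

# Illustrative sketch only -- a toy stand-in for an AI-driven digital twin.
# Stations run in series, so line throughput is capped by the slowest station.
BASE_RATES = {"filler": 90.0, "capper": 120.0, "labeller": 80.0, "packer": 100.0}  # units/hour
UPGRADE_BOOST = 25.0  # each upgrade module adds this much capacity to one station
MODULES = 2           # limited upgrade budget to allocate

def throughput(rates: dict) -> float:
    """Bottleneck model: the line runs only as fast as its slowest station."""
    return min(rates.values())

def evaluate(allocation: tuple) -> float:
    """Simulate one configuration: apply the upgrades virtually, then measure throughput."""
    rates = dict(BASE_RATES)
    for station in allocation:
        rates[station] += UPGRADE_BOOST
    return throughput(rates)

# Enumerate every way to place the modules -- cheap in the model, but weeks of
# disruption if each candidate layout were trialled on the live production line.
scenarios = list(product(BASE_RATES, repeat=MODULES))
best = max(scenarios, key=evaluate)
print(sorted(best), evaluate(best))  # the configuration worth taking to the real factory
```

A real digital twin models flows, buffers, and failure modes rather than a single bottleneck, but the economics are the same: the search over configurations happens in software, and only the winning configuration is validated physically.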
PepsiCo’s early pilots showed faster validation times and signs of throughput improvement at initial sites, though the company has not published detailed metrics yet. What matters more than the numbers is the pattern: AI is being used to compress decision cycles in physical operations, not to replace workers or remove human judgment. This kind of use case fits a broader trend. Enterprises that move beyond pilot projects often focus on narrow, well-defined problems where AI can reduce friction in existing workflows. Manufacturing, logistics, and healthcare operations are showing more traction than open-ended knowledge work. Why PepsiCo treats AI as operations engineering, not office productivity PepsiCo’s approach also highlights a quieter shift in how AI programs are being justified inside large firms. The value is tied to operational outcomes — time saved, fewer disruptions, better planning — rather than general claims about productivity. That distinction matters. Many enterprise AI efforts stall because they struggle to connect usage with measurable impact. Tools get deployed, but workflows stay the same. Digital twins change that dynamic because they sit directly inside planning and engineering processes. If a simulated change cuts weeks off a factory upgrade, the benefit is visible. If it reduces downtime risk, operations teams can measure that over time. This focus on process change, rather than tools, mirrors what is happening in other sectors. In healthcare, for example, Amazon is testing an AI assistant inside its One Medical app that uses patient history to reduce repetitive intake and support care interactions, according to comments from CEO Andy Jassy reported this week. The assistant is embedded in the care workflow, not offered as a standalone feature. Both cases point to the same lesson: AI adoption moves faster when it fits into how work already gets done, instead of asking teams to invent new habits. 
Why this matters for other enterprises PepsiCo’s digital-twin work is unlikely to be unique for long. Large manufacturers across food, chemicals, and industrial goods face similar planning constraints and cost pressures. Many already use simulation software. AI adds speed and scale to those models. What is more interesting is what this says about the next phase of enterprise AI adoption. First, the centre of gravity is shifting away from broad, generic tools toward focused systems tied to specific decisions. Second, success depends less on model quality and more on data quality, process ownership, and governance. A digital twin is only as useful as the operational data feeding it. Third, this kind of AI work tends to stay out of the spotlight. It does not generate flashy demos, but it can reshape how companies plan capital spending and manage risk. That also explains why many firms remain cautious. Building and maintaining accurate digital twins takes time, cross-team coordination, and deep knowledge of physical systems. The payoff comes from repeated use, not one-off wins. PepsiCo’s manufacturing AI work is a quiet signal worth watching In AI coverage, it is easy to focus on new models, agents, or interfaces. Stories like PepsiCo’s point in a different direction. They show AI being treated as infrastructure — something that sits underneath daily decisions and gradually changes how work flows through an organisation. For enterprise leaders, the takeaway is not to copy the technology stack. It is to look for places where planning delays, validation cycles, or operational risk slow the business down. Those friction points are where AI has the best chance of sticking. PepsiCo’s digital-twin pilots suggest that the factory floor may be one of the most practical testing grounds for AI today — not because it is trendy, but because the impact is easier to see when time and mistakes have a clear cost. 
(Photo by NIKHIL) See also: Deloitte sounds alarm as AI agent deployment outruns safety frameworks
The post PepsiCo is using AI to rethink how factories are designed and updated appeared first on AI News. View the full article
11. The artificial intelligence industry’s pivot toward agentic AI—systems capable of autonomously executing multi-step tasks—has dominated technology discussions in recent months. But while Western firms focus on foundational models and cross-platform interoperability, China’s technology giants are racing to dominate through commerce integration, a strategic divergence that could reshape how enterprises deploy autonomous systems globally. Alibaba, Tencent and ByteDance have rapidly upgraded their AI platforms to support agentic commerce, marking a pivot from conversational AI tools to agents capable of completing entire transaction cycles, from product discovery through payment. Just last week, Alibaba upgraded its Qwen chatbot to enable direct transaction completion within the interface, connecting the AI agent across its ecosystem, including Taobao, Alipay, Amap and travel platform Fliggy. The integration supports over 400 core digital tasks, allowing users to compare personalised recommendations across platforms and complete payments without leaving the chatbot environment. “The agentic transformation of commercial services enables the maximal integration of user services and enhances user stickiness,” Shaochen Wang, research analyst at Counterpoint Research, told CNBC, referring to stronger long-term user engagement that creates sustainable competitive advantages.

The super app advantage

Before that, ByteDance upgraded its Doubao AI chatbot in December to autonomously handle tasks, including ticket bookings, through integrations with Douyin, the Chinese version of TikTok. The upgraded model was introduced on a ZTE-developed prototype smartphone as a system-level AI assistant; however, some planned features were later scaled back due to privacy and security concerns raised by rivals. 
Tencent President Martin Lau indicated during the company’s May 2025 earnings call that AI agents could become core components of the WeChat ecosystem, which serves over one billion users with integrated messaging, payments, e-commerce and services. The strategic positioning reflects China’s structural advantage in agentic AI deployment: integrated ecosystems that eliminate the fragmentation constraining Western competitors. “AI agents will be foundational to the evolution of super apps, with success depending on deep integration across payments, logistics, and social engagement,” Charlie Dai, VP and principal analyst at Forrester, told CNBC. “Chinese firms like Alibaba, Tencent and ByteDance all benefit from integrated ecosystems, rich behavioural data, and consumer familiarity with super apps.” Western companies face more fragmented data environments and stricter privacy regulations that slow cross-service integration, despite leading in foundational AI model development and global reach, Dai noted.

Agentic AI’s enterprise trajectory

The commercial applications signal broader enterprise implications as agentic AI moves from auxiliary tools to autonomous actors capable of executing complex workflows. Industry experts widely expect multi-agent systems to emerge as a defining trend in AI deployment this year, extending from consumer services into organisational production. In a report by Global Times, Tian Feng, president of the Fast Think Institute and former dean of SenseTime’s Intelligence Industry Research Institute, predicted that the first AI agent to surpass 300 million monthly active users could emerge as early as 2026, becoming “an indispensable assistant for work and daily life” capable of autonomously executing cross-app, composite services. Approximately half of all consumers already use AI when searching online, according to a 2025 McKinsey study. 
The research firm estimated that AI agents could generate more than $1 trillion in economic value for US businesses by 2030 through streamlining routine steps in consumer decision-making. Chinese cloud providers, including smaller players such as JD Cloud and UCloud, have also begun supporting agentic AI tools, though high token usage has driven some providers, like ByteDance’s Volcano Engine, to introduce fixed-subscription pricing models to address cost concerns.

Divergent deployment strategies

The contrasting approaches between Chinese integration and Western scalability reflect fundamental differences in market structure and regulatory environments that will likely define competitive positioning. “China will prioritise domestic integration and strategic expansion in selected regions, while US firms focus on global scalability and governance,” Dai said. US players pursuing agentic commerce include OpenAI, Perplexity, and Amazon, while Google explores positioning itself as a “matchmaker” between merchants, consumers and AI agents—approaches that reflect fragmented platform environments requiring interoperability rather than closed-loop integration. However, the autonomous nature of agentic systems has raised regulatory questions in China. ByteDance warned users about security and privacy risks when announcing Doubao’s capabilities, recommending deployment on dedicated devices rather than those containing sensitive information, given the tool’s access to device data, digital accounts and internet connectivity across multiple ports. The rapid commercialisation of agentic AI in China’s consumer sector provides enterprise decision-makers globally with early signals of how autonomous systems may reshape customer acquisition costs, platform economics and competitive moats as these capabilities mature. (Photo by Philip Oroni) See also: Deloitte sounds alarm as AI agent deployment outruns safety frameworks
The post China’s hyperscalers bet billions on agentic AI as commerce becomes the new battleground appeared first on AI News. View the full article
12. New research from Accenture has found that insurance executives plan to increase investment in AI during 2026 despite a widening skills gap in insurance organisations. Surveying 3,650 C-suite leaders across 20 industries and 20 countries, the Pulse of Change poll revealed that 90% of the 218 senior insurance executives intend to spend more on AI over the next year. In all, 85% of the respondents view AI as a tool for revenue expansion, not one that reduces costs. While organisations are upping their AI investment to drive growth, 35% of leaders acknowledge that true progress depends on getting core data strategies and digital abilities right, and 54% of employees reported that low-quality or misleading AI outputs are undermining AI’s benefits, leading to reduced productivity and wasted time. AI investment may not be enough, Accenture says; its survey suggests sustainable growth relies on data quality and trusted outputs.

AI adoption enters enterprise scale

The Pulse of Change survey indicates a shift in AI adoption as it moves beyond experimental phases to enterprise scale. With 34% of insurance companies now rolling out AI agents in multiple functions, insurers are heading into operational use and away from isolated experiments. Almost a third of senior C-suite leaders are frequently using generative AI, highlighting increased implementation at the highest level. AI is therefore shaping workflows, strategies, and key decisions, affecting all facets of these businesses. Nearly a third of businesses are rebuilding entire processes with AI. No longer is the technology a supporting addition to existing workflows; it has become a central component, signalling a more mature stage of AI adoption. Despite redesigning processes to include AI, fewer than 10% are redesigning employee roles to match such changes, leaving many employees feeling unprepared. 
Just 40% claimed their training has equipped them for new AI responsibilities, and only 20% feel they have any say in how AI affects their work. AI adoption by companies may be accelerating, but employee use lags behind. There has been a 10-percentage-point drop in regular AI use by employees since summer 2025, while only 39% are trying AI tools independently, a drop of 15 points. For effective AI use and to speed up AI adoption among the workforce, companies must be prepared to redesign job roles, align incentives, and provide improved training programmes because, right now, employees feel hesitant and unprepared to use AI on their own.

AI investment still fuelling executive optimism amid bubble fears

Although talk of a potential AI bubble continues to cloud the industry, insurance executives remain confident: 47% claimed they would increase AI spending if the bubble burst, and 37% would escalate recruitment. Altogether, 6% said they would “decrease investments ([by] 20% or more),” 22% would “somewhat decrease investments ([by] up to 20%),” 24% would make “no change,” 40% would “somewhat increase investments (up to 20%),” and 7% would “increase investments (20% or more).” Khalid Lahraoui, Accenture’s insurance industry group lead, commented: “It’s clear that insurance leaders are confident in AI’s capacity to drive growth, and as such, they are decisively increasing investments, despite ROI uncertainty.”

Lack of AI skills blocking AI’s potential value

As insurance executives prepare to invest heavily in AI, obstacles lie in wait. A quarter of executives said skill shortages are a core concern and a key factor in determining the value they extract from AI. Although these challenges persist across industries, just 24% of respondents have implemented continuous learning programmes associated with AI. Moreover, only 5% said they are adjusting job positions to support the adoption of AI. 
AI adoption disconnect

The disconnect between C-suite leaders and employees is evident in the survey’s data. Although talent is the main driver of AI scaling, employees feel less confident and secure than leadership assumes. Some 23% of C-suite leaders said improved access to skilled talent would accelerate their AI implementation strategies. Meanwhile, 38% of employees believe their organisation would respond effectively to technological disruption, but just 30% feel confident about how their company would handle talent disruption. Job security is also waning, with 48% feeling secure in their roles, down from 59% in summer 2025, and 59% of workers believe young professionals are finding it more challenging to find jobs because of automation and AI. Leadership may see talent as an accelerator for AI, but anxiety around job security and organisational readiness persists.

Key focus is on investment

Approximately two-thirds of executives are prioritising investments in digital technologies and AI amid the rapid changes facing global industries. While 67% reported feeling well-prepared for technological disruption, only 39% felt confident about environmental disruption, and 44% about geopolitical disruption. Again, there is a divide between leadership and employees, with only 29% of insurance workers feeling confident during economic disruption compared with 43% of leaders. Optimism among insurance executives, and C-suite leaders as a whole, remains high, despite 82% expecting further changes in 2026, a 24-percentage-point gap with employees. 78% anticipate stronger and faster revenue growth in the next year and 82% have plans to increase recruitment. According to Accenture’s report, the key challenge is not AI technology itself; it is getting employees on board, engaged, and ready to work with AI. As the report notes, bridging the gap between technology and people is the key to success. 
“2026 will favour those that align the confidence in their technological investments with commitment to workforce needs,” the report concludes. (Image source: “Accenture Building City View Plaza San Jose” by mrkathika is licensed under CC BY-SA 2.0.)
The post Insurers betting big on AI: Accenture appeared first on AI News. View the full article
  13. A new report from Deloitte has warned that businesses are deploying AI agents faster than their safety protocols and safeguards can keep up. As a result, serious concerns around security, data privacy, and accountability are spreading. According to the survey, agentic systems are moving from pilot to production so quickly that traditional risk controls, which were designed for more human-centred operations, are struggling to meet security demands. Just 21% of organisations have implemented stringent governance or oversight for AI agents, despite the increased rate of adoption. Whilst 23% of companies stated that they are currently using AI agents, this is expected to rise to 74% in the next two years. The share of businesses yet to adopt this technology is expected to fall from 25% to just 5% over the same period.

Poor governance is the threat

Deloitte is not highlighting AI agents as inherently dangerous, but states the real risks are associated with poor context and weak governance. If agents operate as their own entities, their decisions and actions can easily become opaque. Without robust governance, it becomes difficult to manage and almost impossible to insure against mistakes. According to Ali Sarrafi, CEO & Founder of Kovant, the answer is governed autonomy. “Well-designed agents with clear boundaries, policies and definitions, managed the same way as an enterprise manages any worker, can move fast on low-risk work inside clear guardrails, but escalate to humans when actions cross defined risk thresholds.” “With detailed action logs, observability, and human gatekeeping for high-impact decisions, agents stop being mysterious bots and become systems you can inspect, audit, and trust.” As Deloitte’s report suggests, AI agent adoption is set to accelerate in the coming years, and only the companies that deploy the technology with visibility and control will hold the upper hand over competitors, not those who deploy them quickest. 
Why AI agents require robust guardrails

AI agents may perform well in controlled demos, but they struggle in real-world business settings where systems can be fragmented and data may be inconsistent. Sarrafi commented on the unpredictable nature of AI agents in these scenarios. “When an agent is given too much context or scope at once, it becomes prone to hallucinations and unpredictable behaviour.” “By contrast, production-grade systems limit the decision and context scope that models work with. They decompose operations into narrower, focused tasks for individual agents, making behaviour more predictable and easier to control. This structure also enables traceability and intervention, so failures can be detected early and escalated appropriately rather than causing cascading errors.”

Accountability for insurable AI

With agents taking real actions in business systems, risk and compliance are viewed differently, and detailed action logs become essential. With every action recorded, agents’ activities become clear and evaluable, letting organisations inspect actions in detail. Such transparency is crucial for insurers, who are reluctant to cover opaque AI systems. This level of detail helps insurers understand what agents have done, and the controls involved, thus making it easier to assess risk. With human oversight for risk-critical actions and auditable, replayable workflows, organisations can produce systems that are more manageable for risk assessment.

AAIF standards a good first step

Shared standards, like those being developed by the Agentic AI Foundation (AAIF), help businesses to integrate different agent systems, but current standardisation efforts focus on what is simplest to build, not what larger organisations need to operate agentic systems safely. 
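The “auditable, replayable workflows” described here can start from something as simple as an append-only record of every agent action. A minimal sketch in Python, with all names hypothetical and not tied to any vendor’s product:

```python
import json
import time

class ActionLog:
    """Append-only log of agent actions, kept for audit and replay."""

    def __init__(self):
        self._entries = []

    def record(self, agent_id, action, params, outcome):
        # Each entry is immutable once written; auditors can replay the
        # sequence to reconstruct exactly what an agent did, and when.
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "params": params,
            "outcome": outcome,
        }
        self._entries.append(entry)
        return entry

    def replay(self, agent_id=None):
        """Yield entries in order, optionally filtered to one agent."""
        for e in self._entries:
            if agent_id is None or e["agent"] == agent_id:
                yield e

    def export(self):
        # Serialised form a risk team or insurer could inspect offline.
        return json.dumps(self._entries, indent=2)

log = ActionLog()
log.record("invoice-bot", "update_record", {"id": 42}, "ok")
log.record("invoice-bot", "send_email", {"to": "client"}, "ok")
assert len(list(log.replay("invoice-bot"))) == 2
```

Because entries are only ever appended, an incident can be replayed as the exact sequence of actions the agent took, rather than reconstructed from scattered system logs.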
Sarrafi says enterprises require standards that support operational control, including “access permissions, approval workflows for high-impact actions, and auditable logs and observability, so teams can monitor behaviour, investigate incidents, and prove compliance.”

Identity and permissions the first line of defence

Limiting what AI agents can access and the actions they can perform is important to ensure safety in real business environments. Sarrafi said, “When agents are given broad privileges or too much context, they become unpredictable and pose security or compliance risks.” Visibility and monitoring are important to keep agents operating inside limits. Only then can stakeholders have confidence in the adoption of the technology. If every action is logged and manageable, teams can then see what has happened, identify issues, and better understand why events occurred. Sarrafi continued, “This visibility, combined with human supervision where it matters, turns AI agents from inscrutable components into systems that can be inspected, replayed and audited. It also allows rapid investigation and correction when issues arise, which boosts trust among operators, risk teams and insurers alike.”

Deloitte’s blueprint

Deloitte’s strategy for safe AI agent governance sets out defined boundaries for the decisions agentic systems can make. For instance, they might operate with tiered autonomy, where agents can initially only view information or offer suggestions. From here, they can be allowed to take limited actions, but with human approval. Once they have proven to be reliable in low-risk areas, they can be allowed to act automatically. Deloitte’s “Cyber AI Blueprints” suggest governance layers and embedding policies and compliance capability roadmaps into organisational controls. Ultimately, governance structures that track AI use and risk, and oversight embedded into daily operations, are important for safe agentic AI use. 
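The tiered-autonomy progression (view and suggest, then act with human approval, then act alone on low-risk work) can be sketched as a small policy gate. The tier names and the 0.3 risk threshold below are illustrative assumptions, not values from Deloitte’s report:

```python
from enum import Enum

class Tier(Enum):
    OBSERVE = 1      # agent may only read and suggest
    APPROVED = 2     # agent may act, but a human must sign off
    AUTONOMOUS = 3   # agent may act alone on low-risk work

def decide(tier: Tier, risk: float, human_ok: bool = False) -> str:
    """Return what an agent is allowed to do for an action of given risk.

    `risk` is a score in [0, 1]; the 0.3 cut-off for 'low-risk' is an
    invented example value.
    """
    if tier is Tier.OBSERVE:
        return "suggest_only"
    if tier is Tier.APPROVED:
        return "execute" if human_ok else "await_approval"
    # AUTONOMOUS: act alone only below the low-risk threshold, otherwise
    # escalate to a human -- the 'defined risk thresholds' pattern
    # quoted earlier.
    return "execute" if risk < 0.3 else "escalate_to_human"

assert decide(Tier.OBSERVE, 0.1) == "suggest_only"
assert decide(Tier.AUTONOMOUS, 0.9) == "escalate_to_human"
```

An agent is promoted from one tier to the next only after proving reliable at the previous one, which keeps the blast radius of early mistakes small.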
Readying workforces with training is another aspect of safe governance. Deloitte recommends training employees on what they shouldn’t share with AI systems, what to do if agents go off track, and how to spot unusual, potentially dangerous behaviour. If employees fail to understand how AI systems work and their potential risks, they may weaken security controls, albeit unintentionally. Robust governance and control, alongside shared literacy, are fundamental to the safe deployment and operation of AI agents, enabling secure, compliant, and accountable performance in real-world environments. (Image source: “Global Hawk, NASA’s New Remote-Controlled Plane” by NASA Goddard Photo and Video is licensed under CC BY 2.0.) The post Deloitte sounds alarm as AI agent deployment outruns safety frameworks appeared first on AI News.
  14. Scaling enterprise AI requires overcoming architectural oversights that often stall pilots before production, a challenge that goes far beyond model selection. While generative AI prototypes are easy to spin up, turning them into reliable business assets involves solving the difficult problems of data engineering and governance. Ahead of AI & Big Data Global 2026 in London, Franny Hsiao, EMEA Leader of AI Architects at Salesforce, discussed why so many initiatives hit a wall and how organisations can architect systems that actually survive the real world.

The ‘pristine island’ problem of scaling enterprise AI

Most failures stem from the environment in which the AI is built. Pilots frequently begin in controlled settings that create a false sense of security, only to crumble when faced with enterprise scale. “The single most common architectural oversight that prevents AI pilots from scaling is the failure to architect a production-grade data infrastructure with built-in end-to-end governance from the start,” Hsiao explains. “Understandably, pilots often start on ‘pristine islands’ – using small, curated datasets and simplified workflows. But this ignores the messy reality of enterprise data: the complex integration, normalisation, and transformation required to handle real-world volume and variability.” When companies attempt to scale these island-based pilots without addressing the underlying data mess, the systems break. 
Hsiao warns that “the resulting data gaps and performance issues like inference latency render the AI systems unusable—and, more importantly, untrustworthy.” Hsiao argues that the companies successfully bridging this gap are those that “bake end-to-end observability and guardrails into the entire lifecycle.” This approach provides “visibility and control into how effective the AI systems are and how users are adopting the new technology.”

Engineering for perceived responsiveness

As enterprises deploy large reasoning models – like the ‘Atlas Reasoning Engine’ – they face a trade-off between the depth of the model’s “thinking” and the user’s patience. Heavy compute creates latency. Salesforce addresses this by focusing on “perceived responsiveness through Agentforce Streaming,” according to Hsiao. “This allows us to deliver AI-generated responses progressively, even while the reasoning engine performs heavy computation in the background. It’s an incredibly effective approach for reducing perceived latency, which often stalls production AI.” Transparency also plays a functional role in managing user expectations when scaling enterprise AI. Hsiao elaborates on using design as a trust mechanism: “By surfacing progress indicators that show the reasoning steps or the tools being used, as well as images like spinners and progress bars to depict loading states, we don’t just keep users engaged; we improve perceived responsiveness and build trust. “This visibility, combined with strategic model selection – like choosing smaller models for fewer computations, meaning faster response times – and explicit length constraints, ensures the system feels deliberate and responsive.”

Offline intelligence at the edge

For industries with field operations, such as utilities or logistics, reliance on continuous cloud connectivity is a non-starter. “For many of our enterprise customers, the biggest practical driver is offline functionality,” states Hsiao. 
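The progressive delivery Hsiao describes under “perceived responsiveness”, where partial responses reach the user while heavy reasoning continues, can be sketched with a plain Python generator. This is purely illustrative and not Agentforce code; the staged “reasoning” is simulated with short sleeps:

```python
import time

def reasoning_steps(question):
    """Stand-in for a heavy reasoning engine that finishes in stages."""
    for step in ("Checking the account...",
                 "Comparing recent invoices...",
                 "Found it: the March invoice was duplicated."):
        time.sleep(0.01)          # simulate per-step compute
        yield step

def stream(question):
    """Yield each partial result as soon as it exists, with the elapsed
    time at which it became available. Total compute time is unchanged,
    but the user sees output after the first step rather than the last."""
    start = time.perf_counter()
    for step in reasoning_steps(question):
        yield time.perf_counter() - start, step

events = list(stream("why was I billed twice?"))
first_at, _ = events[0]
last_at, _ = events[-1]
assert first_at < last_at   # the first chunk arrived well before completion
```

Total compute time is identical either way; what changes is when the first chunk reaches the user, which is exactly the “perceived latency” being optimised.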
Hsiao highlights the shift toward on-device intelligence, particularly in field services, where the workflow must continue regardless of signal strength. “A technician can photograph a faulty part, error code, or serial number while offline. An on-device LLM can then identify the asset or error, and provide guided troubleshooting steps from a cached knowledge base instantly,” explains Hsiao. Data synchronisation happens automatically once connectivity returns. “Once a connection is restored, the system handles the ‘heavy lifting’ of syncing that data back to the cloud to maintain a single source of truth. This ensures that work gets done, even in the most disconnected environments.” Hsiao expects continued innovation in edge AI due to benefits like “ultra-low latency, enhanced privacy and data security, energy efficiency, and cost savings.”

High-stakes gateways

Autonomous agents are not set-and-forget tools. When scaling enterprise AI deployments, governance requires defining exactly when a human must verify an action. Hsiao describes this not as dependency, but as “architecting for accountability and continuous learning.” Salesforce mandates a “human-in-the-loop” for specific areas Hsiao calls “high-stakes gateways”: “This includes specific action categories, including any ‘CUD’ (Creating, Updating, or Deleting) actions, as well as verified contact and customer contact actions,” says Hsiao. “We also default to human confirmation for critical decision-making or any action that could be potentially exploited through prompt manipulation.” This structure creates a feedback loop where “agents learn from human expertise,” creating a system of “collaborative intelligence” rather than unchecked automation. Trusting an agent requires seeing its work. Salesforce has built a “Session Tracing Data Model (STDM)” to provide this visibility. It captures “turn-by-turn logs” that offer granular insight into the agent’s logic. 
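A “high-stakes gateway” of the kind described, where create/update/delete actions default to human confirmation, might look like the following sketch. The action taxonomy and return values are invented for illustration, not Salesforce’s API:

```python
# Action kinds that must never run without a human in the loop.
HIGH_STAKES = {"create", "update", "delete", "contact_customer"}

def gateway(action, payload, approver=None):
    """Route an agent action: low-stakes actions run directly, while
    'CUD'-style actions require an explicit human approval callback.

    `approver` is a callable returning True/False; in production this
    would be a review queue or approval workflow, not an inline function.
    """
    kind = action.split(":")[0]
    if kind not in HIGH_STAKES:
        return {"status": "executed", "action": action}
    if approver is None:
        # Fail closed: no approver configured means the action is blocked.
        return {"status": "blocked", "reason": "no approver configured"}
    if approver(action, payload):
        return {"status": "executed", "action": action, "approved_by": "human"}
    return {"status": "rejected", "action": action}

# Read-only actions pass straight through; a delete waits for a human.
assert gateway("read:invoice", {})["status"] == "executed"
assert gateway("delete:record", {"id": 7})["status"] == "blocked"
```

The important design choice is failing closed: when no approval path exists, the high-stakes action is blocked rather than silently executed.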
“This gives us granular step-by-step visibility that captures every interaction including user questions, planner steps, tool calls, inputs/outputs, retrieved chunks, responses, timing, and errors,” says Hsiao. This data allows organisations to run ‘Agent Analytics’ for adoption metrics, ‘Agent Optimisation’ to drill down into performance, and ‘Health Monitoring’ for uptime and latency tracking. “Agentforce observability is the single mission control for all your Agentforce agents for unified visibility, monitoring, and optimisation,” Hsiao summarises.

Standardising agent communication

As businesses deploy agents from different vendors, these systems need a shared protocol to collaborate. “For multi-agent orchestration to work, agents can’t exist in a vacuum; they need a common language,” argues Hsiao. Hsiao outlines two layers of standardisation: orchestration and meaning. For orchestration, Salesforce is adopting open-source standards like MCP (Model Context Protocol) and A2A (Agent to Agent Protocol). “We believe open source standards are non-negotiable; they prevent vendor lock-in, enable interoperability, and accelerate innovation.” However, communication is useless if the agents interpret data differently. To solve for fragmented data, Salesforce co-founded OSI (Open Semantic Interchange) to unify semantics so an agent in one system “truly understands the intent of an agent in another.”

The future enterprise AI scaling bottleneck: agent-ready data

Looking forward, the challenge will shift from model capability to data accessibility. Many organisations still struggle with legacy, fragmented infrastructure where “searchability and reusability” remain difficult. 
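A turn-by-turn trace of the kind described can be approximated with one record per step, capturing the input, the tool call, its output, timing, and any error. The field names here are illustrative, not Salesforce’s actual STDM schema:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One step of an agent session: what was asked, which tool ran,
    what came back, how long it took, and any error raised."""
    user_input: str
    tool: str
    tool_output: str
    latency_ms: float
    error: str = ""

@dataclass
class SessionTrace:
    session_id: str
    turns: list = field(default_factory=list)

    def log(self, user_input, tool, fn, *args):
        """Run a tool call and record the turn, whether it succeeds or fails."""
        start = time.perf_counter()
        try:
            out, err = str(fn(*args)), ""
        except Exception as exc:          # failures are traced, not hidden
            out, err = "", str(exc)
        self.turns.append(Turn(user_input, tool, out,
                               (time.perf_counter() - start) * 1000, err))
        return out

trace = SessionTrace("demo-session")
trace.log("what is 2+2?", "calculator", lambda a, b: a + b, 2, 2)
assert trace.turns[0].tool_output == "4"
assert trace.turns[0].error == ""
```

Recording failures alongside successes is what makes a trace like this usable for the drill-down analytics and health monitoring mentioned above.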
Hsiao predicts the next major hurdle – and solution – will be making enterprise data “‘agent-ready’ through searchable, context-aware architectures that replace traditional, rigid ETL pipelines.” This shift is necessary to enable “hyper-personalised and transformed user experience because agents can always access the right context.” “Ultimately, the next year isn’t about the race for bigger, newer models; it’s about building the orchestration and data infrastructure that allows production-grade agentic systems to thrive,” Hsiao concludes. Salesforce is a key sponsor of this year’s AI & Big Data Global in London and will have a range of speakers, including Franny Hsiao, sharing their insights during the event. Be sure to swing by Salesforce’s booth at stand #163 for more from the company’s experts. See also: Databricks: Enterprise AI adoption shifts to agentic systems The post Franny Hsiao, Salesforce: Scaling enterprise AI appeared first on AI News.
  15. 2026 will see forward-thinking organisations building out their squads of AI agents across roles and functions. But amid the rush, there is another aspect to consider. One of IDC’s enterprise technology predictions for the coming five years, published in October, was fascinating. “By 2030, up to 20% of [global 1000] organisations will have faced lawsuits, substantial fines, and CIO dismissals, due to high-profile disruptions stemming from inadequate controls and governance of AI agents,” the analyst noted. How do you therefore put guardrails in place – and how do you ensure these agents work together and, ultimately, do business together? Patrick Tobler, founder and CEO of blockchain infrastructure platform provider NMKR, is working on a project which aims to solve this – by fusing agentic AI and decentralisation. The Masumi Network, born out of a collaboration between NMKR and Serviceplan Group, launched in late 2024 as a framework-agnostic infrastructure which ‘empowers developers to build autonomous agents that collaborate, monetise services, and maintain verifiable trust.’ “The core thesis of Masumi is that there’s going to be billions of different AI agents from different companies interacting with each other in the future,” explains Tobler. “The difficult part now is – how do you actually have agents from different companies that can interact with each other and send money to each other as well, across these different companies?” Take travel as an example. You want to attend an industry conference, so your hotel booking agent buys a plane ticket from your airline agent. The entire experience and transaction will be seamless – but that implicit trust is required. “Masumi is a decentralised network of agents, so it’s not relying on any centralised payment infrastructure,” says Tobler. 
“Instead, agents are equipped with wallets and can send stablecoins from one agent to another and, because of that, interact with each other in a completely safe and trustless manner.” Tobler, having spent in his words ‘a lot of time’ in crypto, determined that its benefits were being pointed at the wrong place. “I think there’s a lot of these problems that we have solved in crypto for humans, and then I came to this conclusion that maybe we’ve been solving them for the wrong target audience,” he explains. “Because for humans, using crypto and wallets and blockchains, all that kind of stuff is extremely difficult; the user experience is not great. But for agents, they don’t care if it’s difficult to use. They just use it, and it’s very native to them. “So all these issues that are now arising with agents having to interact with millions, or maybe even billions, of agents in the future – these problems have all already been solved with crypto.” Tobler is attending AI & Big Data Expo Global as part of Discover Cardano; NMKR started on the Cardano blockchain, while Masumi is built completely on Cardano. He says he is looking forward to speaking with businesses that are ‘hearing a lot about AI but aren’t really using it much besides ChatGPT’. “I want to understand from them what they are doing, and then figure out how we can help them,” he says. “That’s most often the thing missing from traditional tech startups. We’re all building for our own bubble, instead of actually talking to the people that would be using it every day.” Discover Cardano is exhibiting at the AI & Big Data Expo Global, in London on February 4-5. Watch the full video interview with NMKR’s Patrick Tobler below: Photo by Google DeepMind The post Masumi Network: How AI-blockchain fusion adds trust to burgeoning agent economy appeared first on AI News.
  16. A White House paper titled “Artificial Intelligence and the Great Divergence” sets out parallels between the effects of the industrial revolution in the 18th and 19th centuries and the current times, with artificial intelligence positioned as guiding the way the world’s economies will be shaped. Artificial intelligence now sits at the centre of US economic strategy, currently representing a significant portion of the country’s economic activity, as characterised by the building of AI infrastructure, most notably in the form of data centres. The paper says AI investment raised US GDP by 1.3% in the first half of 2025, and compares this with the investment in the railway network during the industrial revolution. “Artificial Intelligence and the Great Divergence” says long-term growth depends primarily on gains in productivity, and AI is the tool to achieve those gains. It presents a range of estimates of AI’s impact on GDP, from single-digit increases to 20% productivity growth inside a decade. It also floats some more extreme scenarios, where GDP grows at more than 45% as AI substitutes for human labour in the longer term. Capital deployment in the form of building AI infrastructure, not growing consumption or public spending, is now creating US economic growth. Investment in data processing equipment, buildings, infrastructure, and software grew 28% in early 2025, and AI-related infrastructure represented around a quarter of all US investment in 2025. Training compute capacity used by AI models has increased roughly four-fold per year since 2010, and the length of tasks AI systems can complete has doubled every seven months for six years, the paper states. The cost per token of AI output has fallen by factors ranging from nine to nine hundred per year, depending on task and model. By late 2025, around 78% of organisations had reported using AI, up from 55% in 2024, and it’s claimed that 40% of US workers use generative AI in their jobs. 
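As a rough check on the compounding behind those figures, taking the paper’s growth-rate claims at face value:

```python
# Compute capacity growing ~4x per year since 2010: over the 15 years
# to 2025 that is 4^15, i.e. roughly a billion-fold increase.
compute_growth = 4 ** 15

# Task length doubling every 7 months for 6 years: 72 / 7, about 10.3
# doublings, so tasks roughly 1,200x longer than six years earlier.
doublings = 72 / 7
task_growth = 2 ** doublings

assert compute_growth > 1e9
assert 1000 < task_growth < 1300
```

The point of the arithmetic is simply that modest-sounding periodic rates compound into enormous multipliers over a decade, which is what drives the paper’s more extreme GDP scenarios.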
Nearly half of US businesses now pay for AI subscriptions. The report presents these figures as evidence that AI has moved from experimentation into routine production. Internationally, the document frames AI as a factor in the divergence of economic prosperity, with AI increasing America’s GDP growth faster than that of Europe and China. The US leads at the moment in private AI investment, model development, and compute capacity, while the EU’s share of world GDP has fallen since 1980. Additionally, the continent lags in comparable AI metrics – investment, construction, software development, overall capacity, etc. China remains a major AI actor, but the report notes that much of its model training relies on US-designed hardware. The White House publication advocates for an integrated national strategy with investment incentives at its core. The One Big Beautiful Bill Act gave significant financial breaks for data centres and IT infrastructure, and created favourable conditions for speedy facility construction, in line with the Act’s aim to lift GDP growth by more than a percentage point per year over the medium term. The report argues that deregulation in the AI industry supports productivity by lowering costs, increasing competition, and speeding innovation. Trade agreements and foreign policy reinforce this approach, with overseas partners committing to large purchases of US-derived AI chips and infrastructure. The paper notes that AI data centres are electricity-intensive, and projects that demand for power by AI infrastructure could reach up to 12% of domestic electricity consumption by 2028. It links the success of AI to energy availability and the ability of the power grid to deliver, positioning the control of energy supply as a prerequisite for international leadership in AI. The report’s conclusion is that the countries that lead in AI investment and adoption will experience higher growth than the mean. 
The United States is aligning multiple policy levers to ensure its leading position in the sector. Businesses that build systems in line with its national goals will be part of a dominant economic force shaping the next phase of global growth. (Image source: “Chicago Thaws into Spring” by Trey Ratcliff is licensed under CC BY-NC-SA 2.0.) The post White House compares industrial revolution with AI era appeared first on AI News.
  17. Artificial intelligence has moved into the US workplace, but its adoption remains uneven, fragmented, and tied to role, industry, and organisation. Findings from a Gallup Workforce survey covering the period to the end of December 2025 show how employees use AI, who benefits most from it, and where areas of uncertainty remain. The findings draw from a nationally representative survey of more than 23,000 US adults in full- and part-time work, conducted online in August 2025. Its conclusions are that instances of AI in the workplace are increasing, but its use is far from universal, and is concentrated among knowledge-based workers.

The office AI

Employees in technology, finance, and professional services are by far the biggest user group. More than three-quarters of those working in IT report using AI “at least a few times a year”. In finance and professional services, the figure is a touch under 60%. AI-enabled or aided roles tend to be those that involve significant digital workflow and information synthesis; tasks that correspond with AI’s current abilities. AI use is lower in sectors dominated by customer-facing or manual work. Only around a third of retail workers report comparable levels of use to their office counterparts, although those in healthcare and manufacturing do tend to deploy AI more often than those in retail, for example. The fact that the current raft of AI platforms fits more naturally into desk-based, cognitive roles seems obvious – less so is a drop-off in user numbers in tightly-regulated environments.

Do we, or don’t we?

Gallup’s data reveals a significant number of workers are unsure whether or not their employer had adopted AI – nearly a quarter of those surveyed weren’t sure. In the third quarter of 2025, just over a third of employees said their organisation had implemented AI. 
40% said there was no adoption of AI in their place of work. It’s worth noting that earlier versions of Gallup surveys didn’t include a “don’t know” option for questions about employers’ AI adoption, which encouraged respondents to guess. Belief in organisational AI adoption therefore appeared to rise sharply between 2024 and 2025, Gallup says. Once uncertainty could be stated explicitly, it became clear a good number of employees were simply uninformed on the matter. It’s staff in non-managerial roles who are more likely to say they’re unaware of their organisation’s AI use, a tendency mirrored in part-time staff and hands-on roles. The further workers are from decision-making, it seems, the less sure they become.

How workers use AI

The ways employees use AI are consistent: of those using AI at least once a year, the most common applications are consolidating information, searching for information, and “generating ideas”, tasks that have changed little since Gallup first measured workplace AI use in 2024. More than 60% of AI users refer to chatbots, with using AI for writing and editing coming some way behind. Coding assistants and data science tools remain niche, though popular among frequent users. Employees who use AI often are far more likely to use any more advanced tools at their disposal; this is particularly true of coding assistants and data analysis. Although use figures are generally up, Gallup concludes that AI has yet to be embedded in daily work for most Americans. Around 45% of workers say they use AI “a few times a year”, but only about 10% use it every day.

Conclusions

Business leaders have an easy win: simply clarifying their position on AI use, and publicising the availability (or otherwise) of AI tools, would be an easy way to improve adoption rates. The current abilities of AI pertain to desk-based, digital and data-centric workflows, although there are a myriad of platforms that will utilise AI in other roles. 
Exploring these more fully would certainly be bucking the trend, and may make the difference between an organisation’s long-term prospects and those of its direct competitors. A page detailing Gallup’s findings can be found on the company’s website. (Image source: “DIY Open Plan Office” by lower29 is licensed under CC BY-NC-SA 2.0.) The post Gallup Workforce shows details of AI adoption in US workplaces appeared first on AI News.
  18. For banks trying to put AI into real use, the hardest questions often come before any model is trained. Can the data be used at all? Where is it allowed to be stored? Who is responsible once the system goes live? At Standard Chartered, these privacy-driven questions now shape how AI systems are built and deployed at the bank. For global banks operating in many jurisdictions, these early decisions are rarely straightforward. Privacy rules differ by market, and the same AI system may face very different constraints depending on where it is deployed. At Standard Chartered, this has pushed privacy teams into a more active role in shaping how AI systems are designed, approved, and monitored in the organisation. “Data privacy functions have become the starting point of most AI regulations,” says David Hardoon, Global Head of AI Enablement at Standard Chartered. In practice, that means privacy requirements shape the type of data that can be used in AI systems, how transparent those systems need to be, and how they are monitored once they are live.

Privacy shaping how AI runs

The bank is already running AI systems in live environments. The transition from pilots brings practical challenges that are easy to underestimate early on. In small trials, data sources are limited and well understood. In production, AI systems often pull data from many upstream platforms, each with its own structure and quality issues. “When moving from a contained pilot into live operations, ensuring data quality becomes more challenging with multiple upstream systems and potential schema differences,” Hardoon says. Privacy rules add further constraints. In some cases, real customer data cannot be used to train models. Instead, teams may rely on anonymised data, which can affect how quickly systems are developed or how well they perform. 
Live deployments also operate at a much larger scale, increasing the impact of any gaps in controls. As Hardoon puts it, “As part of responsible and client-centric AI adoption, we prioritise adhering to principles of fairness, ethics, accountability, and transparency as data processing scope expands.”

Geography and regulation decide where AI works

Where AI systems are built and deployed is also shaped by geography. Data protection laws vary across regions, and some countries impose strict rules on where data must be stored and who can access it. These requirements play a direct role in how Standard Chartered deploys AI, particularly for systems that rely on client or personally identifiable information. “Data sovereignty is often a key consideration when operating in different markets and regions,” Hardoon says. In markets with data localisation rules, AI systems may need to be deployed locally, or designed so that sensitive data does not cross borders. In other cases, shared platforms can be used, provided the right controls are in place. This results in a mix of global and market-specific AI deployments, shaped by local regulation, not a single technical preference. The same trade-offs appear in decisions about centralised AI platforms versus local solutions. Large organisations often aim to share models, tools, and oversight across markets to reduce duplication. Privacy laws do not always block this approach. “In general, privacy regulations do not explicitly prohibit transfer of data, but rather expect appropriate controls to be in place,” Hardoon says. There are limits: some data cannot move across borders at all, and certain privacy laws apply beyond the country where data was collected. The details can restrict which markets a central platform can serve and where local systems remain necessary. For banks, this often leads to a layered setup, with shared foundations combined with localised AI use cases where regulation demands it. 
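The layered setup described, shared platforms where controls allow and local deployment where localisation rules demand it, can be codified as a small routing policy. The markets and data classes below are invented examples, not Standard Chartered’s actual rules:

```python
# Illustrative residency rules: which data classes must stay in-market.
LOCALISATION_RULES = {
    "market-a": {"pii", "client_records"},   # strict localisation
    "market-b": {"client_records"},
    "market-c": set(),                       # no localisation constraints
}

def deployment_target(market: str, data_classes: set) -> str:
    """Return 'local' if any data class the use case touches is
    localisation-restricted in that market, else 'shared_platform'.
    Unknown markets default to the shared platform here; a real policy
    might instead fail closed and require review."""
    restricted = LOCALISATION_RULES.get(market, set())
    if data_classes & restricted:
        return "local"
    return "shared_platform"

# A PII-heavy use case must run locally in market-a, but the same
# workload can use the shared platform in market-c.
assert deployment_target("market-a", {"pii", "telemetry"}) == "local"
assert deployment_target("market-c", {"pii"}) == "shared_platform"
```

Expressing the rules as data rather than prose is what makes them reusable across AI projects: the same table can drive deployment decisions, reviews, and audits.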
Human oversight remains central As AI becomes more embedded in decision-making, questions around explainability and consent grow harder to avoid. Automation may speed up processes, but it does not remove responsibility. “Transparency and explainability have become more crucial than before,” Hardoon says. Even when working with external vendors, accountability remains internal. This has reinforced the need for human oversight in AI systems, particularly where outcomes affect customers or regulatory obligations. People also play a larger role in privacy risk than technology alone. Processes and controls can be well designed, but they depend on how staff understand and handle data. “People remain the most important factor when it comes to implementing privacy controls,” Hardoon says. At Standard Chartered, this has pushed a focus on training and awareness, so teams know what data can be used, how it should be handled, and where the boundaries lie. Scaling AI under growing regulatory scrutiny requires making privacy and governance easier to apply in practice. One approach the bank is taking is standardisation. By creating pre-approved templates, architectures, and data classifications, teams can move faster without bypassing controls. “Standardisation and re-usability are important,” Hardoon explains. Codifying rules around data residency, retention, and access helps turn complex requirements into clearer components that can be reused in AI projects. As more organisations move AI into everyday operations, privacy is not just a compliance hurdle. It is shaping how AI systems are built, where they run, and how much trust they can earn. In banking, that shift is already influencing what AI looks like in practice – and where its limits are set. (Photo by Corporate Locations) See also: The quiet work behind Citi’s 4,000-person internal AI rollout Want to learn more about AI and big data from industry leaders? 
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Inside Standard Chartered’s approach to running AI under privacy rules appeared first on AI News. View the full article
  19. According to Databricks, enterprise AI adoption is shifting to agentic systems as organisations embrace intelligent workflows. Generative AI’s first wave promised business transformation but often delivered little more than isolated chatbots and stalled pilot programmes. Technology leaders found themselves managing high expectations with limited operational utility. However, new telemetry from Databricks suggests the market has turned a corner. Data from over 20,000 organisations – including 60 percent of the Fortune 500 – indicates a rapid shift toward “agentic” architectures where models do not just retrieve information but independently plan and execute workflows. This evolution represents a fundamental reallocation of engineering resources. Between June and October 2025, the use of multi-agent workflows on the Databricks platform grew by 327 percent. This surge signals that AI is graduating to a core component of system architecture.

The ‘Supervisor Agent’ drives enterprise adoption of agentic AI

Driving this growth is the ‘Supervisor Agent’. Rather than relying on a single model to handle every request, a supervisor acts as an orchestrator, breaking down complex queries and delegating tasks to specialised sub-agents or tools. Since its launch in July 2025, the Supervisor Agent has become the leading agent use case, accounting for 37 percent of usage by October. This pattern mirrors human organisational structures: a manager does not perform every task but ensures the team executes them. Similarly, a supervisor agent manages intent detection and compliance checks before routing work to domain-specific tools. Technology companies currently lead this adoption, building nearly four times more multi-agent systems than any other industry. Yet the utility extends across sectors.
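The supervisor pattern described above can be sketched in a few lines: the supervisor detects intent, runs a compliance check, then delegates to a specialised sub-agent. The sub-agents here are stub functions rather than real model calls, and all names are illustrative assumptions, not Databricks' implementation:

```python
# Toy supervisor-agent orchestration. Sub-agents are stubs standing in for
# domain-specific tools or model calls; the intent and compliance logic is
# deliberately trivial.
def retrieval_agent(query: str) -> str:
    return f"[documents matching '{query}']"

def summary_agent(query: str) -> str:
    return f"[summary of '{query}']"

SUB_AGENTS = {"retrieve": retrieval_agent, "summarise": summary_agent}

def compliance_check(query: str) -> bool:
    return "insider" not in query.lower()           # toy guardrail

def supervisor(query: str) -> str:
    if not compliance_check(query):                 # gate before any work
        return "request refused by compliance check"
    intent = "summarise" if query.startswith("summarise") else "retrieve"
    return SUB_AGENTS[intent](query)
```

The key design property, as in the manager analogy, is that the supervisor never does the domain work itself; it only classifies, gates, and routes.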
A financial services firm, for instance, might employ a multi-agent system to handle document retrieval and regulatory compliance simultaneously, delivering a verified client response without human intervention.

Traditional infrastructure under pressure

As agents graduate from answering questions to executing tasks, underlying data infrastructure faces new demands. Traditional Online Transaction Processing (OLTP) databases were designed for human-speed interactions with predictable transactions and infrequent schema changes. Agentic workflows invert these assumptions. AI agents now generate continuous, high-frequency read and write patterns, often creating and tearing down environments programmatically to test code or run scenarios. The scale of this automation is visible in the telemetry data. Two years ago, AI agents created just 0.1 percent of databases; today, that figure sits at 80 percent. Furthermore, 97 percent of database testing and development environments are now built by AI agents. This capability allows developers and “vibe coders” to spin up ephemeral environments in seconds rather than hours. Over 50,000 data and AI apps have been created since the Public Preview of Databricks Apps, with a 250 percent growth rate over the past six months.

The multi-model standard

Vendor lock-in remains a persistent risk for enterprise leaders as they seek to increase agentic AI adoption. The data indicates that organisations are actively mitigating this by adopting multi-model strategies. As of October 2025, 78 percent of companies utilised two or more Large Language Model (LLM) families, such as ChatGPT, Claude, Llama, and Gemini. The sophistication of this approach is increasing. The proportion of companies using three or more model families rose from 36 percent to 59 percent between August and October 2025.
This diversity allows engineering teams to route simpler tasks to smaller and more cost-effective models while reserving frontier models for complex reasoning. Retail companies are setting the pace, with 83 percent employing two or more model families to balance performance and cost. A unified platform capable of integrating various proprietary and open-source models is rapidly becoming a prerequisite for the modern enterprise AI stack. Contrary to the big data legacy of batch processing, agentic AI operates primarily in the now. The report highlights that 96 percent of all inference requests are processed in real-time. This is particularly evident in sectors where latency correlates directly with value. The technology sector processes 32 real-time requests for every single batch request. In healthcare and life sciences, where applications may involve patient monitoring or clinical decision support, the ratio is 13 to one. For IT leaders, this reinforces the need for inference serving infrastructure capable of handling traffic spikes without degrading user experience.

Governance accelerates enterprise AI deployments

Perhaps the most counter-intuitive finding for many executives is the relationship between governance and velocity. Often viewed as a bottleneck, rigorous governance and evaluation frameworks function as accelerators for production deployment. Organisations using AI governance tools put over 12 times more AI projects into production compared to those that do not. Similarly, companies employing evaluation tools to systematically test model quality achieve nearly six times more production deployments. The rationale is straightforward. Governance provides necessary guardrails – such as defining how data is used and setting rate limits – which gives stakeholders the confidence to approve deployment. Without these controls, pilots often get stuck in the proof-of-concept phase due to unquantified safety or compliance risks.
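One of the guardrails mentioned above, a rate limit, is simple to make concrete. A minimal sliding-window sketch applied per caller before any model request is served; the limit, window size, and caller IDs are illustrative assumptions:

```python
# Sliding-window rate limit enforced before a model call. Values are
# hypothetical; real platforms would persist this state centrally.
RATE_LIMIT = 3              # requests allowed per window, per caller
WINDOW_SECONDS = 60.0
_history: dict = {}         # caller -> list of request timestamps

def allow_request(caller: str, now: float) -> bool:
    """Return True if the caller is under the limit for the current window."""
    recent = [t for t in _history.get(caller, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        _history[caller] = recent
        return False        # over the limit: reject before the model call
    recent.append(now)
    _history[caller] = recent
    return True
```

Guardrails of this shape are cheap to reason about, which is partly why governance can speed approvals rather than slow them: stakeholders can see exactly what the control does.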
The value of ‘boring’ enterprise automation from agentic AI

While autonomous agents often conjure images of futuristic capabilities, current enterprise value from agentic AI lies in automating the routine, mundane, yet necessary tasks. The top AI use cases vary by sector but focus on solving specific business problems:
Manufacturing and automotive: 35% of use cases focus on predictive maintenance.
Health and life sciences: 23% of use cases involve medical literature synthesis.
Retail and consumer goods: 14% of use cases are dedicated to market intelligence.
Furthermore, 40 percent of the top AI use cases address practical customer concerns such as customer support, advocacy, and onboarding. These applications drive measurable efficiency and build the organisational muscle required for more advanced agentic workflows. For the C-suite, the path forward involves less focus on the “magic” of AI and more on the engineering rigour surrounding it. Dael Williamson, EMEA CTO at Databricks, highlights that the conversation has shifted. “For businesses across EMEA, the conversation has moved on from AI experimentation to operational reality,” says Williamson. “AI agents are already running critical parts of enterprise infrastructure, but the organisations seeing real value are those treating governance and evaluation as foundations, not afterthoughts.” Williamson emphasises that competitive advantage is shifting back towards how companies build, rather than simply what they buy. “Open, interoperable platforms allow organisations to apply AI to their own enterprise data, rather than relying on embedded AI features that deliver short-term productivity but not long-term differentiation.” In highly regulated markets, this combination of openness and control is “what separates pilots from competitive advantage.” See also: Anthropic selected to build government AI assistant pilot Want to learn more about AI and big data from industry leaders?
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Databricks: Enterprise AI adoption shifts to agentic systems appeared first on AI News. View the full article
  20. Anthropic has been selected to build government AI assistant capabilities to modernise how citizens interact with complex state services. For both public and private sector technology leaders, the integration of LLMs into customer-facing platforms often stalls at the proof-of-concept stage. The ***’s Department for Science, Innovation, and Technology (DSIT) aims to bypass this common hurdle by operationalising its February 2025 Memorandum of Understanding with Anthropic. The joint project, announced today, prioritises the deployment of agentic AI systems that are designed to actively guide users through processes rather than simply retrieving static information. The decision to move beyond standard chatbot interfaces addresses a friction point in digital service delivery: the gap between information availability and user action. While government portals are data-rich, navigating them requires specific domain knowledge that many citizens lack. By employing an agentic system powered by Claude, the initiative seeks to provide tailored support that maintains context across multiple interactions. This approach mirrors the trajectory of private sector customer experience, where the value proposition is increasingly defined by the ability to execute tasks and route complex queries rather than just deflect support tickets.

The case for agentic AI assistants in government

The initial pilot focuses on employment, a high-volume domain where efficiency gains directly impact economic outcomes. The system is tasked with helping users find work, access training, and understand available support mechanisms. For the government, the operational logic involves an intelligent routing system that can assess individual circumstances and direct users to the correct service. This focus on employment services also serves as a stress test for context retention capabilities. Unlike simple transactional queries, job seeking is an ongoing process.
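An ongoing process implies state carried across sessions, with the user able to erase what is stored. A toy session store illustrates the shape of that requirement; the class, its methods, and the opt-out path are all a hypothetical sketch, not the pilot's actual design:

```python
# Hypothetical context-retention store with a user-controlled opt-out.
# Nothing here reflects the real government implementation.
class SessionMemory:
    def __init__(self):
        self._store = {}

    def remember(self, user: str, fact: str) -> None:
        self._store.setdefault(user, []).append(fact)

    def recall(self, user: str) -> list:
        """Return a copy of what is held, so callers cannot mutate the store."""
        return list(self._store.get(user, []))

    def forget(self, user: str) -> None:
        """The opt-out path: erase everything held about a user."""
        self._store.pop(user, None)

mem = SessionMemory()
mem.remember("u1", "seeking part-time retail work")
```

The point of the sketch is that "users retain full control over their data" is, at minimum, an API contract: recall and forget must be first-class operations, not afterthoughts.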
The system’s ability to “remember” previous interactions allows users to pause and resume their journey without re-entering data; a functional requirement that is essential for high-friction workflows. For enterprise architects, this government implementation serves as a case study in managing stateful AI interactions within a secure environment. Implementing generative AI within a statutory framework necessitates a risk-averse deployment strategy. The project adheres to a “Scan, Pilot, Scale” framework, a deliberate methodology that forces iterative testing before wider rollout. This phased approach allows the department to validate safety protocols and efficacy in a controlled setting, minimising the potential for compliance failures that have plagued other public sector AI launches. Data sovereignty and user trust form the backbone of this governance model. Anthropic has stipulated that users will retain full control over their data, including the ability to opt out or dictate what the system remembers. By ensuring all personal information handling aligns with *** data protection laws, the initiative aims to preempt privacy concerns that typically stall adoption. Furthermore, the collaboration involves the *** AI Safety Institute to test and evaluate the models, ensuring that the safeguards developed inform the eventual deployment. Avoiding dependency on external AI providers like Anthropic Perhaps the most instructive aspect of this partnership for enterprise leaders is the focus on knowledge transfer. Rather than a traditional outsourced delivery model, Anthropic engineers will work alongside civil servants and software developers at the Government Digital Service. The explicit goal of this co-working arrangement is to build internal AI expertise that ensures the *** government can independently maintain the system once the initial engagement concludes. 
This addresses the issue of vendor lock-in, where public bodies become reliant on external providers for core infrastructure. By prioritising skills transfer during the build phase, the government is treating AI competence as a core operational asset rather than a procured commodity. This development is part of a broader trend of sovereign AI engagement, with Anthropic expanding its public sector footprint through similar education pilots in Iceland and Rwanda. It also reflects a deepening investment in the *** market, where the company’s London office is expanding its policy and applied AI functions. Pip White, Head of ***, Ireland, and Northern Europe at Anthropic, said: “This partnership with the *** government is central to our mission. It demonstrates how frontier AI can be deployed safely for the public benefit, setting the standard for how governments integrate AI into the services their citizens depend on.” For executives observing this rollout, it once again makes clear that successful AI integration is less about the underlying model and more about the governance, data architecture, and internal capability built around it. The transition from answering questions to guiding outcomes represents the next phase of digital maturity. See also: How Formula E uses Google Cloud AI to meet net zero targets Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Anthropic selected to build government AI assistant pilot appeared first on AI News. View the full article
  21. If database technologies offered performance, flexibility and security, most professionals would be happy to get two of the three, and would expect to accept some compromises even then. Systems optimised for speed demand manual tuning, while flexible platforms can impose costs when early designs become constraints. Security is sometimes, sadly, a bolt-on, with DBAs relying on internal teams’ skills and knowledge not to introduce breaking changes. RavenDB, however, exists because its founder saw the cumulative costs of those common trade-offs, and the inherent problems stemming from them. He wanted a database system that didn’t force developers and administrators to choose.

Abstracting away complexity

Oren Eini, RavenDB’s founder and CTO, was working as a freelance database performance consultant nearly two decades ago. In an exclusive interview he recounted how he encountered many capable teams “digging themselves into a hole” as the systems in their care grew in complexity. The problems he was presented with didn’t stem from developers lacking the required skills, but rather from system architecture. Databases tend to guide their developers towards fragile designs and then punish them for following those paths, he says. RavenDB began as a way to reduce friction when the unstoppable force of what’s required meets the mountain of database schema. The platform’s emphasis is on performance and adaptability without, ironically, eventually requiring the services of people like Oren. Armed with a bag full of experience and knowledge, he formed RavenDB, which has now been shipping for more than fifteen years – well before the current interest in AI-assisted development. The bottom line is that over time, the RavenDB database adapts to what the organisation cares about, rather than what it guessed it might care about when the database was first spun up.
“When I talk to business people,” Eini says, “I tell them I take care of data ownership complexity.” For example, instead of expecting developers or DBAs to anticipate every possible query pattern, RavenDB observes queries as they are executed. If it detects that a query would benefit from an index, it creates one in the background, with minimal overhead on existing workloads. This contrasts with most relational databases, where schema and indexing strategies are set by the initial developers and are difficult to alter later, regardless of how an organisation may have changed. Oren draws the comparison with pouring a building’s foundations before deciding where the doors and support columns might go. It’s an approach that can work, but when the business changes direction over the years, the cost of regretting those early decisions can be alarming. Oren Eini (source: RavenDB) Speaking ahead of the company’s appearance at the upcoming TechEx Global event in London this year (February 4 & 5, Olympia), he cited an example of a European client that struggled to expand into US markets because its database assumed a simple VAT rate that it had consigned to a single field, a schema not suitable for the complexities of state and federal sales taxes. From seemingly simple decisions made in the past (and perhaps not given much thought – European VAT is fairly standard), the client was storing financial pain and technical debt for the next generation. Much of RavenDB’s attractiveness is manifest in practical details and small tweaks that make databases more performant and easier to address. Pagination, for example, requires two database calls in most systems (one to fetch a page of results, another to count matching records). RavenDB returns both in a single query. Individually, such optimisations may appear minor, but at scale they compound, Oren says:
“If you smooth down the friction everywhere you go, you end up with a really good system where you don’t have to deal with friction.” Compounded removal of frictions improves performance and makes developers’ jobs simpler. Related data is embedded or included without the penalties associated with table joins in relational databases, so complex queries are completed in a single round trip. Software engineers don’t need to be database specialists. In their world, they just formulate SQL-like queries to RavenDB’s APIs. Compared to other NoSQL databases, RavenDB provides full ACID transactions by default, and reduced operational complexity: many of its baked-in features (ETL pipelines, subscriptions, full-text search, counters, time series, etc.) reduce the need for external systems. In contrast with teams addressing a competing database system and its necessary adjuncts, both developers and admins spend less time sweating the detail with RavenDB. That’s good news, not least for those that hold an organisation’s purse strings.

Scaling to fit the purpose

RavenDB is also built to scale, as painlessly as it handles complex queries. It can create multi-node clusters when needed, so it supports huge numbers of concurrent users. Such clusters are created by RavenDB without time-consuming manual configuration. “With RavenDB, this is normal cost of business,” he says. In February this year, RavenDB Cloud announced version 7.2, and this being 2026, mention needs to be made of AI. RavenDB’s AI Assistant is, “in effect, […] a virtual DBA that comes inside of your database,” he says. The key word is inside. It’s designed for developers and administrators, not end users, answering their questions about indexing, storage usage or system behaviour.

AI as a professional tool

He’s sceptical about giving AIs unconfined access to any data store.
Allowing an AI to act as a generic gatekeeper to sensitive information creates unavoidable security risks, because such systems are difficult to constrain reliably. For the DBA and software developer, it’s another story – AI is a useful tool that operates as a helping hand, configuring and addressing the data. RavenDB’s AI assistant inherits the permissions of the user invoking it, having no privileged access of its own. “Anything it knows about your RavenDB instance comes because, behind the scenes, it’s accessing your system with your permissions,” he says. The company’s AI strategy is to provide developers and admins with opinionated features: generating queries, explaining indexes, helping with schema exploration, and answering operational questions, with calls bounded by operator validation and privileges. Teams developing applications with RavenDB get support for vector search, native embeddings, server-side indexing, and agnostic integration with external LLMs. This, Oren says, lets organisations deliver useful AI-driven features in their applications quickly, without exposing the business to risk and compliance issues. Security and risk Security and risk comprise one of those areas where RavenDB draws a clear line between it and its competitors. We touched on the recent MongoBleed vulnerability, which exposed data from unauthenticated MongoDB instances due to an interaction between compression and authentication code. Oren describes the issue as an architectural failure caused by mixing general-purpose and security-critical code paths. “The reason this is a vulnerability,” he says, “is specifically the fact that you’re trying to mix concerns.” RavenDB uses established cryptographic infrastructure to handle authentication before any database logic is invoked. 
And even if a flaw emanated from elsewhere, the attack surface would be significantly smaller because unauthenticated users never reach the general code paths: that architectural separation limits the blast radius. While the internals of RavenDB are highly technical and specialised, business decision-makers can easily appreciate that delays caused by schema changes, performance tuning, or infrastructure changes will have significant economic impact. But RavenDB’s malleability and speed also remove what Oren describes as the “no, you can’t do that” conversations. Organisations running RavenDB reduce their dependency on specialist expertise, plus they get the ability to respond to changing business needs much more quickly. “[The database’s] role is to bring actual business value,” Eini says, arguing that infrastructure should, in operational contexts, fade into the background. As it stands, it often determines the scope of strategy discussions. Migration and getting started RavenDB uses a familiar SQL-like query language, and most teams will only need a day at most to get up to speed. Where friction does appear, Oren suggests, it is often due to assumptions carried over from other platforms around security and high availability. For RavenDB, these are built into the design so don’t cause extra workload that needs to be factored in. Coming about as the result of the experience of operational pain by the company’s founder himself, RavenDB’s difference stems from accumulated design decisions: background indexing, query-aware optimisation, the separation of security and authentication issues, and latterly, the need for constraints on AI tooling. In everyday use, developers experience fewer sharp edges, and in the longer term, business leaders see a reduction in costs, especially around the times of change. The combination is compelling enough to displace entrenched platforms in many contexts. 
To learn more, you can speak to RavenDB representatives at TechEx Global, held at Olympia, London, February 4 and 5. If what you’ve read here has awakened your interest, head over to the company’s website. (Image source: “#316 AVZ Database” by Ralf Appelt is licensed under CC BY-NC-SA 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Lowering the barriers databases place in the way of strategy, with RavenDB appeared first on AI News. View the full article
  22. The severe weather currently being experienced in the US has placed significant strain on the country's airline industry, with knock-on changes to schedules and routes affecting the rest of the world. It’s at times like this that companies have to respond to queries from customers at a much greater rate than during normal operations, and there are – in the specific case of the air sector – operational decisions that need to be taken quickly, yet inside the strictest safety boundaries. Several airlines are turning to generative AI to help them during these types of events, and more generally, to help turn them into more efficient and reactive organisations. Last year, Air France-KLM built a cloud-based generative AI ‘factory’ for use throughout the organisation, which it described as making AI development more consistent and reusable. It formed a partnership with Accenture and Google Cloud for its factory, using it to test and deploy generative AI models. It produces measurable outcomes in ground operations, engineering and maintenance, and customer-facing functions. The partnership group has stated that enterprise deployment of generative AI has increased development speed by more than 35%. The AI factory was built on earlier work undertaken by the airline and Accenture, which involved migrating core applications to the cloud. Since then, Air France-KLM has created a private AI assistant and RAG tools linking LLMs with internal search to support tasks like diagnosing and repairing aircraft damage. The factory is also used by employees, who are trained to use AI tools so they can apply the power of LLMs to make a positive impact on the business.

Weather and when AI is used

United Airlines is similarly exploring AI in its operations. In an interview with CIO.com, CIO Jason Birnbaum described AI as a way to “shorten decision cycles” during irregular operations such as the recent outages caused by the current extreme cold snap.
The company’s AI journey began with the use of AI to respond to passenger enquiries. When flights are delayed or cancelled, customer service representatives are expected to respond quickly and informatively, yet retain a company-mandated communication style – honed during the company’s ‘Every Flight Has A Story’ programme. During extended periods of disruption, maintaining the output from what the company terms ‘storytellers’ is difficult. Jason Birnbaum said, “Considering the number of delays versus storytellers, we couldn’t have a person write a new message with every event. So we focused on prioritising the most impactful situations. […] The data piece was simple: the basic facts of the flight and the running chat between the attendants, pilots, gate agents, and the operations people associated with the flight. We fed that information — with additional data on weather, for example — into the AI model, to generate a good draft customer message.” “The trick then was to have it understand the nuances of United Airlines’ communications style and what we wanted to emphasise. That’s where prompt engineering came in, not to train the model to understand flight data, but to use the words United prefers. Let’s take safety, for instance. We can emphasise safety without scaring people, and the AI tool is learning to make the right word choice. […] The AI model was very good at looking back in time to bring previous flight data into the current situation. Even our human storytellers didn’t include reasons for flight delays, and that kind of information can be very useful to a customer.” Boston Consulting Group’s measure of AI maturity across industries pegs airlines at ‘average’, having moved from slightly below average in the past year. Only one of the 36 airlines surveyed met the highest criteria for being prepared for an AI-enabled future.
The analysis suggests that by 2030, carriers that embed AI at the core of their workflows could achieve operating margins that are 5 to 6 percentage points higher than those of peers. It’s thought that generative AI will become part of the operational core of airlines and airports, where decisions about schedules, crew allocations, aircraft rotations, and passenger recovery have to be made quickly. Microsoft claims data-driven AI systems can reduce the root causes of flight delays by up to 35% through improved disruption forecasting, which can limit the negative effects of the spread of disruption. Airlines using AI-driven personalisation report revenue increases of around 10% to 15% per passenger, according to Microsoft, which also says that AI-based tools such as self-service customer interfaces can lead to cost reductions of up to 30%. (Image source: “airplane” by Kuster & Wildhaber Photography is licensed under CC BY-ND 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Cold snap highlights airlines’ proactive use of AI appeared first on AI News. View the full article
  23. Enterprise AI has moved from isolated prototypes to systems that shape real decisions: drafting customer responses, summarising internal knowledge, generating code, accelerating research, and powering agent workflows that can trigger actions in business systems. That creates a new security surface, one that sits between people, proprietary data, and automated execution. AI security tools exist to make managing that surface operational. Some focus on governance and discovery. Others harden AI applications and agents at runtime. Some emphasise testing and red teaming before deployment. Others help security operations teams handle the new class of alerts AI introduces in SaaS and identity layers.

What counts as an “AI security tool” in enterprise environments?

“AI security” is an umbrella term. In practice, tools tend to fall into a few functional buckets, and many products cover more than one.
AI discovery & governance: identifies AI use across employees, apps, and third parties; tracks ownership and risk
LLM & agent runtime protection: enforces guardrails at inference time (prompt injection defenses, sensitive data controls, tool-use restrictions)
AI security testing & red teaming: tests models and workflows against adversarial techniques before (and after) production release
AI supply chain security: assesses risks in models, datasets, packages, and dependencies used in AI systems
SaaS & identity-centric AI risk control: manages risk where AI lives inside SaaS apps and integrations, permissions, data exposure, account takeover, risky OAuth scopes
A mature AI security programme typically needs at least two layers: one for governance and discovery, and another for runtime protection or operational response, depending on whether your AI footprint is primarily “employee use” or “production AI apps.”

Top 10 AI security tools for enterprises in 2026

1) Koi

Koi is the best AI security tool for enterprises because of its approach to AI security from the software control layer,
helping enterprises govern what gets installed and adopted on endpoints, including AI-adjacent tooling like extensions, packages, and developer assistants. This matters because AI exposure often enters through tools that look harmless: browser extensions that read page content, IDE add-ons that access repositories, packages pulled from public registries, and fast-moving "helper" apps that become embedded in daily workflows. Rather than treating AI security as a purely model-level concern, Koi focuses on controlling the intake and spread of tools that can create data exposure or supply chain risk. In practice, that means turning ad-hoc installs into a governed process: visibility into what's being requested, policy-based decisions, and workflows that reduce shadow adoption. For security teams, it provides a way to enforce consistency across departments without relying on manual policing.

Key features include:

Visibility into installed and requested tools across endpoints

Policy-based allow/block decisions for software adoption

Approval workflows that reduce shadow AI tooling sprawl

Controls designed to address extension/package risk and tool governance

Evidence trails for what was approved, by whom, and under what policy

2) Noma Security

Noma Security is often evaluated as a platform for securing AI systems and agent workflows at the enterprise level. It focuses on discovery, governance, and protection of AI applications across teams, especially when multiple business units deploy different models, pipelines, and agent-driven processes. A key reason enterprises shortlist tools like Noma is scale: once AI adoption spreads, security teams need a consistent way to understand what exists, what it touches, and which workflows represent elevated risk. That includes mapping AI apps to data sources, identifying where sensitive information may flow, and applying governance controls that keep pace with change.
Key features include:

AI system discovery and inventory across teams

Governance controls for AI applications and agents

Risk context around data access and workflow behaviour

Policies that support enterprise oversight and accountability

Operational workflows designed for multi-team AI environments

3) Aim Security

Aim Security is positioned around securing enterprise adoption of GenAI, especially the use layer where employees interact with AI tools and where third-party applications add embedded AI features. This makes it particularly relevant for organisations where the most immediate AI risk is not a custom LLM app, but workforce use and the difficulty of enforcing policy across diverse tools. Aim's value tends to show up when enterprises need visibility into AI use patterns and practical controls to reduce data exposure. The goal is to protect the business without blocking productivity: enforce policy, guide use, and reduce unsafe interactions while preserving legitimate workflows.

Key features include:

Visibility into enterprise GenAI use and risk patterns

Policy enforcement to reduce sensitive data exposure

Controls for third-party AI tools and embedded AI features

Governance workflows aligned with enterprise security needs

Central management across distributed user populations

4) Mindgard

Mindgard stands out for AI security testing and red teaming, helping enterprises pressure-test AI applications and workflows against adversarial techniques. This is especially important for organisations deploying RAG and agent workflows, where risk often comes from unexpected interaction effects: retrieved content influencing instructions, tool calls being triggered in unsafe contexts, or prompts leaking sensitive context. Mindgard's value is proactive: instead of waiting for issues to surface in production, it helps teams identify weak points early.
For security and engineering leaders, this supports a repeatable process, similar to application security testing, where AI systems are tested and improved over time.

Key features include:

Automated testing and red teaming for AI workflows

Coverage for adversarial behaviours like injection and jailbreak patterns

Findings designed to be actionable for engineering teams

Support for iterative testing across releases

Security validation aligned with enterprise deployment cycles

5) Protect AI

Protect AI is often evaluated as a platform approach that spans multiple layers of AI security, including supply chain risk. This is relevant for enterprises that depend on external models, libraries, datasets, and frameworks, where risk can be inherited through dependencies not created internally. Protect AI tends to appeal to organisations that want to standardise security practices across AI development and deployment, including the upstream components that feed into models and pipelines. For teams that have both AI engineering and security responsibilities, that lifecycle perspective can reduce gaps between "build" and "secure."

Key features include:

Platform coverage across AI development and deployment stages

Supply chain security focus for AI/ML dependencies

Risk identification for models and related components

Workflows designed to standardise AI security practices

Support for governance and continuous improvement

6) Radiant Security

Radiant Security is oriented toward security operations enablement using agentic automation. In the AI security context, that matters because AI adoption increases both the number and novelty of security signals (new SaaS events, new integrations, new data paths) while SOC bandwidth stays limited. Radiant focuses on reducing investigation time by automating triage and guiding response actions. The key difference between helpful automation and dangerous automation is transparency and control.
Platforms in this category need to make it easy for analysts to understand why something is flagged and what actions are being recommended.

Key features include:

Automated triage designed to reduce analyst workload

Guided investigation and response workflows

Operational focus: reducing noise and speeding decisions

Integrations aligned with enterprise SOC processes

Controls that keep humans in the loop where needed

7) Lakera

Lakera is known for runtime guardrails that address risks like prompt injection, jailbreaks, and sensitive data exposure. Tools in this category focus on controlling AI interactions at inference time, where prompts, retrieved content, and outputs converge in production workflows. Lakera tends to be most valuable when an organisation has AI applications that are exposed to untrusted inputs or where the AI system's behaviour must be constrained to reduce leakage and unsafe output. It's particularly relevant for RAG apps that retrieve external or semi-trusted content.

Key features include:

Prompt injection and jailbreak defence at runtime

Controls to reduce sensitive data exposure in AI interactions

Guardrails for AI application behaviour

Visibility and governance for AI use patterns

Policy tuning designed for enterprise deployment realities

8) CalypsoAI

CalypsoAI is positioned around inference-time protection for AI applications and agents, with emphasis on securing the moment where AI produces output and triggers actions. This is where enterprises often discover risk: the model output becomes input to a workflow, and guardrails must prevent unsafe decisions or tool use. In practice, CalypsoAI is evaluated for centralising controls across multiple models and applications, reducing the burden of implementing one-off protections in every AI project. This is particularly helpful when different teams ship AI features at different speeds.
Key features include:

Inference-time controls for AI apps and agents

Centralised policy enforcement across AI deployments

Security guardrails designed for multi-model environments

Monitoring and visibility into AI interactions

Enterprise integration support for SOC workflows

9) Cranium

Cranium is often positioned around enterprise AI discovery, governance, and ongoing risk management. Its value is particularly strong when AI adoption is decentralised and security teams need a reliable way to identify what exists, who owns it, and what it touches. Cranium supports the governance side of AI security: building inventories, establishing control frameworks, and maintaining continuous oversight as new tools and features appear. This is especially relevant when regulators, customers, or internal stakeholders expect evidence of AI risk management practices.

Key features include:

Discovery and inventory of AI use across the enterprise

Governance workflows aligned with oversight and accountability

Risk visibility across internal and third-party AI systems

Support for continuous monitoring and remediation cycles

Evidence and reporting for enterprise AI programmes

10) Reco

Reco is best known for SaaS security and identity-driven risk management, which is increasingly relevant to AI because so much "AI exposure" lives inside SaaS tools: copilots, AI-powered features, app integrations, permissions, and shared data. Rather than focusing on model behaviour, Reco helps enterprises manage the surrounding risks: account compromise, risky permissions, exposed files, over-permissioned integrations, and configuration drift. For many organisations, reducing AI risk starts with controlling the platforms where AI interacts with data and identity.
Key features include:

SaaS security posture and configuration risk management

Identity threat detection and response for SaaS environments

Data exposure visibility (files, sharing, permissions)

Detection of risky integrations and access patterns

Workflows aligned with enterprise identity and security operations

Why AI security matters for enterprises

AI creates security issues that don't behave like traditional software risk. The three drivers below are why many enterprises are building dedicated AI security capabilities.

1) AI can turn small mistakes into repeated leakage

A single prompt can expose sensitive context: internal names, customer details, incident timelines, contract terms, design decisions, or proprietary code. Multiply that across thousands of interactions, and leakage becomes systematic, not accidental.

2) AI introduces a manipulable instruction layer

AI systems can be influenced by malicious inputs: direct prompts, indirect injection through retrieved content, or embedded instructions inside documents. A workflow may "look normal" while being steered into unsafe output or unsafe actions.

3) Agents expand blast radius from content to execution

When AI can call tools, access files, trigger tickets, modify systems, or deploy changes, a security problem is no longer just "wrong text." It becomes "wrong action," "wrong access," or "unapproved execution." That's a different level of risk, and it requires controls designed for decision and action pathways, not just data.
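This shift from content risk to execution risk is why agent-focused runtime controls typically interpose a policy check between the model's proposed action and its execution. As a minimal sketch of the idea (all tool names and the policy shape here are illustrative, not any vendor's actual API; real platforms layer identity context, auditing, and approval workflows on top):

```python
# Illustrative sketch of a default-deny policy gate for agent tool calls.
# Tool names and policy tiers are hypothetical examples, not a real product's API.

ALLOWED_TOOLS = {"search_docs", "create_ticket"}        # low-risk, auto-approved
REQUIRES_APPROVAL = {"modify_record", "deploy_change"}  # high-risk, human-gated

def gate_tool_call(tool: str, approved: bool = False) -> str:
    """Decide whether a model-proposed tool call may execute.

    Returns "execute", "escalate" (needs human sign-off), or "block".
    """
    if tool in ALLOWED_TOOLS:
        return "execute"
    if tool in REQUIRES_APPROVAL:
        # High-risk actions run only after explicit human approval.
        return "execute" if approved else "escalate"
    # Default-deny: tools outside the policy never run.
    return "block"

print(gate_tool_call("search_docs"))                    # execute
print(gate_tool_call("deploy_change"))                  # escalate
print(gate_tool_call("delete_database"))                # block
```

The design point is the default-deny final branch: an agent that invents or is injected into calling an unlisted tool gets a "wrong action" stopped before it becomes "unapproved execution."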
The risks AI security tools are built to address

Enterprises adopt AI security tools because these risks show up fast, and internal controls are rarely built to see them end-to-end:

Shadow AI and tool sprawl: employees adopt new AI tools faster than security can approve them

Sensitive data exposure: prompts, uploads, and RAG outputs can leak regulated or proprietary data

Prompt injection and jailbreaks: manipulation of system behaviour through crafted inputs

Agent over-permissioning: agent workflows get excessive access "to make it work"

Third-party AI embedded in SaaS: features ship inside platforms with complex permission and sharing models

AI supply chain risk: models, packages, extensions, and dependencies bring inherited vulnerabilities

The best tools help you turn these into manageable workflows: discovery → policy → enforcement → evidence.

What strong enterprise AI security looks like

AI security succeeds when it becomes a practical operating model, not a set of warnings. High-performing programmes typically have:

Clear ownership: who owns AI approvals, policies, and exceptions

Risk tiers: lightweight governance for low-risk use, stronger controls for systems touching sensitive data

Guardrails that don't break productivity: strong security without constant "security vs business" conflict

Auditability: the ability to show what is used, what is allowed, and why decisions were made

Continuous adaptation: policies evolve as new tools and workflows emerge

This is why vendor selection matters. The wrong tool can create dashboards without control, or controls without adoption.

How to choose AI security tools for enterprises

Avoid the trap of buying "the AI security platform." Instead, choose tools based on how your enterprise uses AI.

Map your AI footprint first

Is most use employee-driven (ChatGPT, copilots, browser tools)?

Are you building internal LLM apps with RAG, connectors, and access to proprietary knowledge?
Do you have agents that can execute actions in systems?

Is AI risk mostly inside SaaS platforms with sharing and permissions?

Decide what must be controlled vs observed

Some enterprises need immediate enforcement (block/allow, DLP-like controls, approvals). Others need discovery and evidence first.

Prioritise integration and operational fit

A great AI security tool that can't integrate with identity, ticketing, SIEM, or data governance workflows will struggle in enterprise environments.

Run pilots that mimic real workflows

Test with scenarios your teams actually face:

Sensitive data in prompts

Indirect injection via retrieved documents

User-level vs admin-level access differences

An agent workflow that has to request elevated permissions

Choose for sustainability

The best tool is the one your teams will actually use after month three, when the novelty wears off and real adoption begins.

Enterprises don't "secure AI" by declaring policies. They secure AI by building repeatable control loops: discover, govern, enforce, validate, and prove. The tools above represent different layers of that loop. The best choice depends on where your risk concentrates: workforce use, production AI apps, agent execution pathways, supply chain exposure, or SaaS/identity sprawl.

Image source: Unsplash

The post Top 10 AI security tools for enterprises in 2026 appeared first on AI News.

View the full article
  24. Big retailers are committing more heavily to agentic AI-led commerce, and accepting some loss of customer proximity and data control in the process. As reported by Retail Dive, the opening weeks of 2026 have seen Etsy, Target and Walmart push product ranges onto third-party AI platforms, forming new partnerships with Google's Gemini and Microsoft's Copilot after last year's collaborations with OpenAI's ChatGPT. These let consumers purchase goods inside the AI's conversation interface. Amazon and Walmart have been investing in their own consumer-facing AI assistants, Rufus and Sparky respectively, to change how shoppers interact with their brands. Agentic AI is beginning to redraw direct-to-consumer engagement, and industry figures regard this trend as an important moment in online retail.

"I think this has the potential to disrupt retail in the same way the internet once did," Kartik Hosanagar, a marketing professor at the Wharton School of the University of Pennsylvania, told the website's reporters.

Partnering with AIs like ChatGPT or Gemini engages consumers wherever they happen to be and however they choose to shop. Adobe's 2025 Holiday Shopping report found that AI-driven traffic to US e-commerce sites grew 758% year-on-year in November 2025, and Cyber Monday saw a 670% increase in AI-referred retail visits.

"What we expect is a deepening of consumer engagement," Katherine ******, a partner at Kearney specialising in food, drug and mass-market retail, said in an email to Retail Dive. "More shoppers will rely on AI for purchasing, and across a wider range of missions. As retailers' capabilities within these tools improve, adoption should accelerate further."

Meeting customers on AI platforms comes with trade-offs, according to industry observers, with questions around data ownership and the risk that retailers are sidelined.
81% of retail executives believe generative AI will erode brand loyalty by 2027, according to Deloitte's 2026 Retail Industry Global Outlook, published earlier this month. Retailers' websites and apps provide a stream of behavioural data, and if discovery, evaluation, and purchase happen externally, that insight never reaches the retailer.

"This fundamentally changes where power sits," Hosanagar said. "Control over the agent increasingly means control over the customer relationship."

Google and Alphabet CEO Sundar Pichai has unveiled new commerce tools for Gemini, outlining how it will support customers from discovery to final purchase. Nikki Baird, vice president of strategy and product at Aptos, says this raises difficult questions. "What he's describing is Google owning the data across discovery, decision and transaction. Even if some information is shared back, missing context from those stages leaves retailers with a much poorer understanding of their customers."

Pichai reassured retailers that collaboration remains central to Google. "From nearly three decades of working with retailers, we know success only comes when we work together," he told an NRF audience. "Our aim is to use our full technology stack to help shape the next era of retail."

Yet agentic features like instant checkout absorb the shopping experience into a single platform. "If research, discovery and purchase all happen on OpenAI rather than Walmart.com, you're effectively giving away the brand experience. At that point, the retailer risks becoming little more than a fulfilment operation," Hosanagar said.

Amazon has not announced plans to sell directly through ChatGPT, instead doubling down on its own AI initiatives. Earlier this month, the company launched a dedicated site for Alexa+, its generative AI assistant that helps users research and plan purchases. Yet participation in third-party AI commerce may become unavoidable.
When OpenAI launched its Instant Checkout feature on ChatGPT last September, it suggested that enabling the function could influence how merchants are ranked in search results, in addition to price and product quality. Uploading product catalogues to AI chat platforms may be the first step in a transformation of online retail. According to Deloitte, roughly half of retail executives expect the current multi-stage shopping process to collapse into a single AI-driven interaction by 2027.

For now, the industry remains at an early stage of any transition. "The real inflection point is when consumers rely on an autonomous agent to shop on their behalf," Hosanagar told Retail Dive. "Retailers will engage less with humans directly and more with their representatives — AI agents. That agent processes information differently, requires data in new formats and responds to persuasion in ways unlike a person."

Today, consumers can access ChatGPT on their phones while in-store, effectively consulting an always-available expert. "It's not just the internet in your pocket," Baird told Retail Dive. "It's like having a highly knowledgeable store associate who knows every retailer."

This may prompt retailers to equip frontline staff with their own AI tools, offering instant insight into customer preferences or shopping history. Alternatively, a retailer's AI agent could proactively notify customers when a favoured item is back in stock, helping associates convert interest into sales. "The goal is to enable store associates to perform at their best," Baird said.

(Image source: "Shopping trauma!" by Elsie esq. is licensed under CC BY 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media.
Explore other upcoming enterprise technology events and webinars here. The post Retailers examine options for on-AI retail appeared first on AI News. View the full article
  25. AI continues to reshape technology and business, yet for the network, enterprise connectivity in the AI age means being always-on and extra vigilant about sovereignty and security. This means that speed is not the only requirement. As Julian Skeels, chief digital officer at Expereo, notes, it is more about 'certainty.'

"AI workloads are distributed, they're continuous, they're incredibly latency-sensitive. Inference, monitoring, retrieval and remediation never stop, so that changes the network's role," says Skeels.

"In the world of AI, networking actually becomes a system dependency," he adds. "When the network degrades, the application degrades immediately.

"An AI-ready network needs to make data movement deterministic. It's not just about it being fast; it's about it being predictable, and observable, and governable, and resilient – and to do all those things under continual change."

Many CIOs, however, are struggling right now with what Skeels describes as 'connectivity everywhere but visibility nowhere.'

"They're dealing with hybrid networks, multiple clouds, multiple providers and portals that create a constant operational drag to their teams," says Skeels. "What they want is clarity and control – not more tools."

Skeels arrived at Expereo last year with a wealth of cross-industry experience in product and digital transformation initiatives under his belt. He found an industry ripe for accelerated change, and a company determined to lead the way and ensure that pricing global connectivity takes minutes rather than weeks.

"When I came to Expereo, I saw that global connectivity has, I would say, largely resisted real digital transformation for a long time," notes Skeels. "Most customers will still experience it as slow, and manual, and opaque, and fragmented across the dozens of providers and portals they need to work with.

"We believe, though, that with emerging technologies such as agentic AI, that's finally changing," adds Skeels.
"Our ambition here is to make global connectivity as simple, and immediate, and transparent as cloud computing is for our customers."

Enabling such change for customers requires that mix of speed and visibility – and this is where the expereoOne platform comes in, providing what the company calls 'visibility at the speed of life' and giving customers a single, global view of what is being deployed, how it is performing, and what it costs.

Beyond visibility, customers also need proactivity, as Skeels explains. "We're deeply integrated into our customers' order management, their ITSM, their ERP systems, which makes working with Expereo at scale absolutely seamless," he says. "The key point is that better visibility isn't about more dashboards. It's about connecting network behaviour to their business outcomes in terms of resilience, security experience, and cost."

Skeels is speaking at the Digital Transformation Expo Global on February 4-5 on designing the AI-ready network – and his session promises to subvert the usual advice for those in attendance.

"I want to challenge a few things," notes Skeels. "I want to ask people to consider even unlearning things they've learned in the past. A lot of what we've taken for granted about networks no longer holds in an AI world."

Watch the full conversation between Julian Skeels and TechEx's James Bourne below:

Photo by Pixabay

The post Expereo: Enterprise connectivity amid AI surge with 'visibility at the speed of life' appeared first on AI News.

View the full article
