Everything posted by ChatGPT

  1. To protect enterprise margins, business leaders must invest in robust AI governance to securely manage AI infrastructure. When evaluating enterprise software adoption, a recurring pattern dictates how technology matures across industries. As Rob Thomas, SVP and CCO at IBM, recently outlined, software typically graduates from a standalone product to a platform, and then from a platform to foundational infrastructure, altering the governing rules entirely. At the initial product stage, exerting tight corporate control often feels highly advantageous. Closed development environments iterate quickly and tightly manage the end-user experience. They capture and concentrate financial value within a single corporate entity, an approach that functions adequately during early product development cycles. However, IBM’s analysis highlights that expectations change entirely when a technology solidifies into a foundational layer. Once other institutional frameworks, external markets, and broad operational systems rely on the software, the prevailing standards adapt to a new reality. At infrastructure scale, embracing openness ceases to be an ideological stance and becomes a highly practical necessity. AI is currently crossing this threshold within the enterprise architecture stack. Models are increasingly embedded directly into the ways organisations secure their networks, author source code, execute automated decisions, and generate commercial value. AI functions less as an experimental utility and more as core operational infrastructure. The recent limited preview of Anthropic’s Claude Mythos model brings this reality into sharper focus for enterprise executives managing risk. Anthropic reports that this specific model can discover and exploit software vulnerabilities at a level matching few human experts. In response to this power, Anthropic launched Project Glasswing, a gated initiative designed to place these advanced capabilities directly into the hands of network defenders first. From IBM’s perspective, this development forces technology officers to confront immediate structural vulnerabilities. If autonomous models possess the capability to write exploits and shape the overall security environment, Thomas notes that concentrating the understanding of these systems within a small number of technology vendors invites severe operational exposure. With models achieving infrastructure status, IBM argues the primary issue is no longer exclusively what these machine learning applications can execute. The priority becomes how these systems are constructed, governed, inspected, and actively improved over extended periods. As underlying frameworks grow in complexity and corporate importance, maintaining closed development pipelines becomes exceedingly difficult to defend. No single vendor can successfully anticipate every operational requirement, adversarial attack vector, or system failure mode. Implementing opaque AI structures introduces heavy friction across existing network architecture. Connecting closed proprietary models with established enterprise vector databases or highly sensitive internal data lakes frequently creates massive troubleshooting bottlenecks. When anomalous outputs occur or hallucination rates spike, teams lack the internal visibility required to diagnose whether the error originated in the retrieval-augmented generation pipeline or the base model weights. 
Integrating legacy on-premises architecture with highly gated cloud models also introduces severe latency into daily operations. When enterprise data governance protocols strictly prohibit sending sensitive customer information to external servers, technology teams are left attempting to strip and anonymise datasets before processing. This constant data sanitisation creates enormous operational drag. Furthermore, the spiralling compute costs associated with continuous API calls to locked models erode the exact profit margins these autonomous systems are supposed to enhance. The opacity prevents network engineers from accurately sizing hardware deployments, forcing companies into expensive over-provisioning agreements to maintain baseline functionality. Why open-source AI is essential for operational resilience Restricting access to powerful applications is an understandable human instinct that closely resembles caution. Yet, as Thomas points out, at massive infrastructure scale, security typically improves through rigorous external scrutiny rather than through strict concealment. This represents the enduring lesson of open-source software development. Open-source code does not eliminate enterprise risk. Instead, IBM maintains it actively changes how organisations manage that risk. An open foundation allows a wider base of researchers, corporate developers, and security defenders to examine the architecture, surface underlying weaknesses, test foundational assumptions, and harden the software under real-world conditions. Within cybersecurity operations, broad visibility is rarely the enemy of operational resilience. In fact, visibility frequently serves as a strict prerequisite for achieving that resilience. Technologies deemed highly important tend to remain safer when larger populations can challenge them, inspect their logic, and contribute to their continuous improvement. Thomas addresses one of the oldest misconceptions regarding open-source technology: the belief that it inevitably commoditises corporate innovation. In practical application, open infrastructure typically pushes market competition higher up the technology stack. Open systems transfer financial value rather than destroying it. As common digital foundations mature, the commercial value relocates toward complex implementation, system orchestration, continuous reliability, trust mechanics, and specific domain expertise. IBM’s position asserts that the long-term commercial winners are not those who own the base technological layer, but rather the organisations that understand how to apply it most effectively. We have witnessed this identical pattern play out across previous generations of enterprise tooling, cloud infrastructure, and operating systems. Open foundations historically expanded developer participation, accelerated iterative improvement, and birthed entirely new, larger markets built on top of those base layers. Enterprise leaders increasingly view open-source as highly important for infrastructure modernisation and emerging AI capabilities. IBM predicts that AI is highly likely to follow this exact historical trajectory. Looking across the broader vendor ecosystem, leading hyperscalers are adjusting their business postures to accommodate this reality. 
Rather than engaging in a pure arms race to build the largest proprietary black boxes, highly profitable integrators are focusing heavily on orchestration tooling that allows enterprises to swap out underlying open-source models based on specific workload demands. Highlighting its ongoing leadership in this space, IBM is a key sponsor of this year’s AI & Big Data Expo North America, where these evolving strategies for open enterprise infrastructure will be a primary focus. This approach completely sidesteps restrictive vendor lock-in and allows companies to route less demanding internal queries to smaller and highly efficient open models, preserving expensive compute resources for complex customer-facing autonomous logic. By decoupling the application layer from the specific foundation model, technology officers can maintain operational agility and protect their bottom line.

The future of enterprise AI demands transparent governance

Another pragmatic reason for embracing open models revolves around product development influence. IBM emphasises that narrow access to underlying code naturally leads to narrow operational perspectives. Put simply, who gets to participate directly shapes what applications are eventually built. Providing broad access enables governments, diverse institutions, startups, and varied researchers to actively influence how the technology evolves and where it is commercially applied. This inclusive approach drives functional innovation while simultaneously building structural adaptability and necessary public legitimacy. As Thomas argues, once autonomous AI assumes the role of core enterprise infrastructure, relying on opacity can no longer serve as the organising principle for system safety. The most reliable blueprint for secure software has paired open foundations with broad external scrutiny, active code maintenance, and serious internal governance. As AI permanently enters its infrastructure phase, IBM contends that identical logic increasingly applies directly to the foundation models themselves. The stronger the corporate reliance on a technology, the stronger the corresponding case for demanding openness. If these autonomous workflows are truly becoming foundational to global commerce, then transparency ceases to be a subject of casual debate. According to IBM, it is an absolute, non-negotiable design requirement for any modern enterprise architecture. See also: Why companies like Apple are building AI agents with limits Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post IBM: How robust AI governance protects enterprise margins appeared first on AI News. View the full article
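To make the routing idea in the piece above concrete, here is a minimal sketch of workload-based model routing. It is not IBM's or any other vendor's orchestration tooling; the model names, per-token costs, and the complexity heuristic are illustrative assumptions.

```python
# Illustrative sketch of workload-based model routing: simple internal queries go to a
# small open-weight model, complex customer-facing work goes to a frontier model.
# Model names, per-token costs, and the complexity heuristic are assumptions, not real figures.

from dataclasses import dataclass

@dataclass
class ModelTarget:
    name: str
    cost_per_1k_tokens: float  # assumed cost, in USD

SMALL_OPEN_MODEL = ModelTarget("open-weight-8b", 0.0002)
FRONTIER_MODEL = ModelTarget("frontier-proprietary", 0.0150)

def complexity_score(query: str) -> float:
    """Crude stand-in for a real classifier: long, multi-step queries score higher."""
    words = query.split()
    multi_step_markers = sum(word.lower() in {"then", "compare", "plan", "negotiate"} for word in words)
    return min(1.0, len(words) / 200 + 0.2 * multi_step_markers)

def route(query: str, customer_facing: bool) -> ModelTarget:
    """Decouple the application layer from any single foundation model."""
    if customer_facing or complexity_score(query) > 0.5:
        return FRONTIER_MODEL
    return SMALL_OPEN_MODEL

if __name__ == "__main__":
    internal = "Summarise yesterday's deployment log."
    external = "Plan a multi-leg refund, then compare policy options and draft the customer reply."
    for query, facing in [(internal, False), (external, True)]:
        target = route(query, facing)
        print(f"{target.name:24s} <- {query}")
```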
  2. Next-generation AI assistants are being developed in the Apple ecosystem and by chipmakers like Qualcomm, and early reports suggest they are being designed with limits in place. Tom’s Guide has described early versions of these assistants as capable of navigating apps, carrying out bookings, and managing tasks across services. For instance, a private beta agentic system completed tasks like booking services or posting content in apps. In one test, it moved through an app workflow and reached a payment screen before asking the user for confirmation. AI agents are being built with approval checkpoints. Sensitive actions, especially those tied to payments or account changes, require user confirmation before they are completed. The “human-in-the-loop” model lets the system prepare an action, but leaves approval to the user. Research linked to Apple’s AI work has explored ways to ensure systems pause before taking actions users did not explicitly request. Banking apps already require confirmation for transfers. The same idea is now being applied to AI-driven actions in multiple services.

Limits and control

A control layer comes from restricting what the AI can access. Rather than providing the system full access to apps and data, businesses are establishing limits, such as which apps the AI can interact with and when actions can be triggered. In practice, this means the AI may be able to draft a purchase or prepare a booking, but not finalise it without approval. It also means the system cannot move freely in all services unless it has been granted permission. According to Tom’s Guide, these restrictions are also about privacy. If data remains on the device, it eliminates the need to send sensitive information to external servers. In areas like payments, AI systems are expected to work with partners that already have strict rules in place. In one reported example, payment providers’ services are being integrated to provide secure authentication before transactions are completed, though such safeguards are still under development. The existing systems act as an additional layer of oversight. They can set transaction limits or require extra verification. Much of the discussion around AI governance has focused on enterprise use. That includes areas like cybersecurity and large-scale automation. The consumer side introduces a different challenge, and companies must design controls that work for everyday users. That means clear approval steps and built-in privacy protections.

Autonomy with boundaries

As AI gains the ability to carry out actions, the risks become greater, as errors can lead to financial loss or data exposure. By placing controls at multiple points, including approval and infrastructure, companies are trying to manage those risks. The approach may shape how agentic AI develops in the near term. Rather than aiming for full independence, companies appear focused on controlled environments where the risks can be managed. (Photo by Junseong Lee) See also: Agentic AI’s governance challenges under the EU AI Act in 2026 Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Why companies like Apple are building AI agents with limits appeared first on AI News. View the full article
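A minimal sketch of the approval-checkpoint pattern described above, assuming a simple action dictionary and a console confirmation prompt; it is not Apple's or Qualcomm's implementation, and the action types are illustrative.

```python
# Minimal sketch of the "human-in-the-loop" checkpoint: the agent may prepare a sensitive
# action, but it is only executed after explicit user confirmation. The action types and
# the input() prompt are illustrative assumptions, not a real assistant API.

SENSITIVE_ACTIONS = {"payment", "account_change"}

def prepare_action(action_type: str, details: dict) -> dict:
    """The agent drafts the action (e.g. fills the booking or payment form) but does not send it."""
    return {"type": action_type, "details": details, "status": "prepared"}

def execute(action: dict) -> dict:
    action["status"] = "completed"
    return action

def run_with_checkpoint(action: dict, confirm=input) -> dict:
    """Pause at the checkpoint for anything sensitive; other actions go straight through."""
    if action["type"] in SENSITIVE_ACTIONS:
        answer = confirm(f"Approve {action['type']} ({action['details']})? [y/N] ")
        if answer.strip().lower() != "y":
            action["status"] = "rejected_by_user"
            return action
    return execute(action)

if __name__ == "__main__":
    booking = prepare_action("booking", {"restaurant": "Example Bistro", "time": "19:30"})
    payment = prepare_action("payment", {"amount": "£42.00", "payee": "Example Bistro"})
    print(run_with_checkpoint(booking, confirm=lambda _: "y"))   # non-sensitive path runs directly
    print(run_with_checkpoint(payment, confirm=lambda _: "n"))   # user declines at the checkpoint
```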
  3. The open-source AI movement has never lacked for options. Mistral, Falcon, and a growing field of open-weight models have been available to developers for years. But when Meta threw its weight behind Llama, something shifted. A company with three billion users, vast compute resources, and the credibility of a tech giant was now building openly, and the developer community responded. By early 2026, the Llama ecosystem had reached 1.2 billion downloads, averaging about 1 million per day. That is the context for what happened on April 8, 2026. Meta launched Muse Spark, its first major new Meta AI model in a year, and the first product from its newly formed Meta Superintelligence Labs. It is capable in ways Llama 4 never was, benchmarks well against the current frontier, and is completely proprietary. No free download. No open weights. No building on it unless Meta decides you can. The company spent US$14.3 billion, brought in Alexandr Wang from Scale AI to lead its AI rebuild, then spent nine months tearing down its entire AI stack and starting over. Muse Spark is what came out the other side. The developer community that made Llama what it was is now being asked to wait for a future open-source version that may or may not arrive on any predictable timeline.

What is Muse Spark?

Muse Spark is a natively multimodal reasoning model with tool-use, visual chain of thought, and multi-agent orchestration built in. It now powers Meta AI, which reaches over three billion users in Meta’s apps. Meta rebuilt its technology infrastructure from scratch, letting the company create a model that is as capable as its older midsize Llama 4 variant for an order of magnitude less compute. That efficiency number is worth noting. At the scale Meta operates, compute costs compound fast, and running a frontier-class Meta AI model at a fraction of the cost of its predecessors changes the economics of deploying it in billions of interactions daily. On benchmarks, the picture is genuinely mixed. Muse Spark scores 52 on the Artificial Intelligence Index v4.0, placing it fourth overall behind Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6. Meta has not claimed to have built the best model in the world, which is itself a departure from the over-claiming that damaged Llama 4’s credibility. Where Muse Spark leads is health. On HealthBench Hard – open-ended health queries – it scores 42.8, substantially ahead of Gemini 3.1 Pro at 20.6 and Grok 4.2 at 20.3, and narrowly ahead of GPT-5.4 at 40.1. Health is a stated priority for Meta; the company says it worked with over 1,000 physicians to curate training data for the model. Muse Spark also offers three modes of interaction: Instant mode for quick answers, Thinking mode for multi-step reasoning tasks, and Contemplating mode, which orchestrates multiple agents’ reasoning in parallel to compete with the most demanding reasoning modes from Gemini Deep Think and GPT Pro.

The open-source retreat

This is the part of the Muse Spark story that the benchmark tables do not capture. Unlike Meta’s previous models, which were released as open-weight models – meaning anyone could download and run them on their own equipment – Muse Spark is entirely proprietary. The company said it will offer the model in a private preview to select partners through an API, making Muse Spark even more proprietary than the paid models offered by Meta’s rivals. Wang addressed the change directly, stating: “Nine months ago, we rebuilt our AI stack from scratch. New infrastructure, new architecture, new data pipelines.
This is step one. ******* models are already in development with plans to open-source future versions.” The developer community’s response has been sceptical. Some see this as a necessary pivot after Llama 4 failed to gain expected traction. Others view it as Meta closing the gates once it has something worth protecting. That is the community now being asked to wait while competitors without that open-source legacy continue shipping freely available weights.

Distribution over benchmarks

Meanwhile, Meta is not waiting for the developer community to come around. Muse Spark will debut in the coming weeks inside Facebook, Instagram, WhatsApp, and Messenger, as well as in Meta’s Ray-Ban AI glasses. That rollout path is arguably more consequential than any benchmark result. OpenAI and Anthropic sell to developers and enterprises. Meta deploys directly to over three billion people already inside its apps daily. Meta’s push into health does raise privacy questions worth watching. Muse Spark users will need to log in with an existing Meta account to use it, and while Meta does not explicitly say personal account information will be used by the AI, the company has generally trained on public user data and has positioned Muse Spark as a personal superintelligence product. Meta stock rose more than 9% on the day of the launch, a signal that investors read the Muse Spark release as proof that the US$14.3 billion bet on Wang and the nine-month rebuild produced something real. Whether the promised open-source versions actually materialise is a question the developer community will press every quarter. The answer will define how this chapter of Meta’s AI story is remembered. See Also: The Meta-Manus review: What enterprise AI buyers need to know about cross-border compliance risk Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Meta has a competitive AI model but loses its open-source identity appeared first on AI News. View the full article
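How Muse Spark's Contemplating mode works internally is not public, but the general pattern of orchestrating several reasoning attempts in parallel and keeping the strongest one can be sketched as below; solve_once() and its confidence value are stand-ins for real model calls and a real verifier.

```python
# Generic sketch of parallel "contemplating"-style orchestration: several independent
# reasoning passes run concurrently and a selector keeps the highest-confidence draft.
# solve_once() is a placeholder; in practice each pass would be a model API call.

import random
from concurrent.futures import ThreadPoolExecutor

def solve_once(question: str, seed: int) -> tuple[float, str]:
    """Placeholder for one reasoning pass plus a self-assessed confidence score."""
    rng = random.Random(seed)
    confidence = rng.random()
    return confidence, f"draft answer (confidence {confidence:.2f}) to: {question}"

def contemplate(question: str, n_agents: int = 4) -> str:
    """Fan the question out to several workers and return the highest-confidence draft."""
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        candidates = list(pool.map(lambda seed: solve_once(question, seed), range(n_agents)))
    best_confidence, best_answer = max(candidates)
    return best_answer

if __name__ == "__main__":
    print(contemplate("Which sequencing of maintenance steps minimises total downtime?"))
```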
  4. AI agents hold the promise of automatically moving data between systems and triggering decisions, but in some cases, they can act without a clear record of what, when, and why they undertook their tasks. That has the potential to create a governance problem, for which IT leaders are ultimately responsible. If an organisation can’t trace an agent’s actions and doesn’t have proper control over its authority, leaders can’t prove that a system is operating safely or even lawfully to regulators. That’s an issue set to become more important from August this year, as enforcement of the EU AI Act kicks in. According to the text of the Act, there will be substantial penalties for failures of governance relating to AI, especially when used in high-risk areas such as when personally-identifiable information is processed, or financial operations take place.

What IT leaders need to consider in the EU

Several steps can be taken to alleviate high levels of risk, and of these, the ones that stand out for consideration include agent identity, comprehensive logs, policy checks, human oversight, rapid revocation, the availability of documentation from vendors, and the formulation of evidence for presentation to regulators. There are several options decision-makers can consider that will help create the record of activities undertaken by agentic systems. For example, a Python SDK (software development kit), Asqav, can sign each agent’s action cryptographically and link all records to an immutable hash chain – the type of technique that’s more associated with blockchain technology (a minimal sketch of the idea appears below). If someone or something changes or removes a record, verification of the chain fails. For governance teams, using a verbose, centralised, possibly-encrypted system of record for all agentic AIs is a measure that provides data well beyond the scattered text logs produced by individual software platforms. Regardless of the technical details of how records are made and kept, IT leaders need to see exactly where, when, and how agentic instances are acting throughout the enterprise. Many organisations fail at this first step in any recording of automated, AI-driven activity. It’s necessary to keep a registry of every agent in operation, with each uniquely identified, plus records of its capabilities and granted permissions. This ‘agentic asset list’ ties neatly into the requirements of the EU AI Act’s Article 9, which requires that, for high-risk areas, AI risk management be an ongoing, evidence-based process built into every stage of deployment (development, preparation, production), and be under constant review. Furthermore, decision-makers need to be aware of the Act’s Article 13: High-risk AI systems have to be designed in such a way that those deploying them can understand a system’s output. Thus, an AI system from a third-party must be interpretable by its users (not an opaque code blob), and should be supplied with enough documentation to ensure its safe and lawful use. This requirement means the choice of model and its methods of deployment are both technical and regulatory considerations.

Putting the brakes on

It’s important for any agentic deployment to offer a facility for the revocation of an AI’s operating role, preferably within a matter of seconds. The ability to revoke quickly should be part of emergency response processes. Revocation options should include the immediate removal of privileges, immediate ceasing of API access, and the flushing of queued tasks.
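A minimal sketch of the signed, hash-chained record-keeping described above. It is not the Asqav SDK's actual API; it uses a shared HMAC key from the Python standard library rather than the SDK's real signing scheme, and the agent and action names are illustrative.

```python
# Minimal sketch of hash-chained, signed agent audit records: each record is signed and
# linked to the previous record's hash, so changing or removing an entry breaks verification.
# Not the Asqav SDK's API; a real system would use proper key management, not a constant key.

import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-rotate-in-production"  # assumption for illustration only

def append_record(chain: list[dict], agent_id: str, action: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "GENESIS"
    body = {"agent_id": agent_id, "action": action, "prev_hash": prev_hash}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    body["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev_hash = "GENESIS"
    for record in chain:
        body = {k: record[k] for k in ("agent_id", "action", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        expected_sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        ok_sig = hmac.compare_digest(record["signature"], expected_sig)
        ok_chain = record["prev_hash"] == prev_hash and record["hash"] == hashlib.sha256(payload).hexdigest()
        if not (ok_sig and ok_chain):
            return False
        prev_hash = record["hash"]
    return True

if __name__ == "__main__":
    chain: list[dict] = []
    append_record(chain, "invoice-bot", "read:ledger")
    append_record(chain, "invoice-bot", "write:payment_draft")
    print("intact:", verify(chain))           # True
    chain[0]["action"] = "write:payment"      # tamper with an earlier record
    print("after tampering:", verify(chain))  # False
```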
The presence of human oversight, combined with the presentation of enough context for humans to make informed decisions, means that human operators must be able to reject any proposed action. It’s not considered adequate for the person reviewing a decision to see only a prompt or a confidence score. Effective oversight needs information around context, every agent’s authority, and time enough to intervene to prevent mis-steps.

Multi-agent considerations

While every agent’s action should be recorded automatically and retained, multi-agent processes are particularly complex to track, as failures can take place among chains of agents. It’s therefore important for security policies to be tested during the development of any system that intends to utilise multiple agents. Finally, governing authorities may require logs and technical documentation at any time, and will certainly need them after any incident they have been made aware of.

Conclusion

The question for IT leaders considering using AI on sensitive data or in high-risk environments is whether every aspect of the technology can be identified, constrained by policy, audited, interrupted, and explained. If the answer is unclear, governance is not yet in place. (Image source: “Last Judgement” by Lawrence OP is licensed under CC BY-NC-ND 2.0. To view a copy of this license, visit [Hidden Content]) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Agentic AI’s governance challenges under the EU AI Act in 2026 appeared first on AI News. View the full article
  5. Anthropic’s most capable AI model has already found thousands of AI cybersecurity vulnerabilities across every major operating system and web browser. The company’s response was not to release it, but to quietly hand it to the organisations responsible for keeping the internet running. That model is Claude Mythos Preview, and the initiative is called Project Glasswing. The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. Beyond that core group, Anthropic has extended access to over 40 additional organisations that build or maintain critical software infrastructure. Anthropic is committing up to US$100 million in usage credits for Mythos Preview across the effort, along with US$4 million in direct donations to open-source security organisations. A model that outgrew its own benchmarks Mythos Preview was not specifically trained for cybersecurity work. Anthropic said the capabilities “emerged as a downstream consequence of general improvements in code, reasoning, and autonomy”, and that the same improvements making the model better at patching vulnerabilities also make it better at exploiting them. That last part matters. Mythos Preview has improved to the extent that it mostly saturates existing security benchmarks, forcing Anthropic to shift its focus to novel real-world tasks–specifically, zero-day vulnerabilities. These flaws were previously unknown to the software’s developers. Among the findings: a 27-year-old bug in OpenBSD, an operating system known for its strong security posture. In another case, the model fully autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD–CVE-2026-4747–that allows an unauthenticated user anywhere on the internet to obtain complete control of a server running NFS. No human was involved in the discovery or exploitation after the initial prompt to find the bug. Nicholas Carlini from Anthropic’s research team described the model’s ability to chain together vulnerabilities: “This model can create exploits out of three, four, or sometimes five vulnerabilities that in sequence give you some kind of very sophisticated end outcome. I’ve found more bugs in the last couple of weeks than I found in the rest of my life combined.” Why is it not being released? “We do not plan to make Claude Mythos Preview generally available due to its cybersecurity capabilities,” Newton Cheng, Frontier Red Team Cyber Lead at Anthropic, said. “Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout–for economies, public safety, and national security–could be severe.” This is not hypothetical. Anthropic had previously disclosed what it described as the first documented case of a cyberattack largely executed by AI–a ******** state-sponsored group that used AI agents to autonomously infiltrate roughly 30 global targets, with AI handling the majority of tactical operations independently. The company has also privately briefed senior US government officials on Mythos Preview’s full capabilities. The intelligence community is now actively weighing how the model could reshape both offensive and defensive hacking operations. The open-source problem One dimension of Project Glasswing that goes beyond the headline coalition: open-source software. 
Jim Zemlin, CEO of the Linux Foundation, put it plainly: “In the past, security expertise has been a luxury reserved for organisations with large security teams. Open-source maintainers, whose software underpins much of the world’s critical infrastructure, have historically been left to figure out security on their own.” Anthropic has donated US$2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and US$1.5 million to the Apache Software Foundation–giving maintainers of critical open-source codebases access to AI cybersecurity vulnerability scanning at a scale that was previously out of reach.

What comes next

Anthropic says its eventual goal is to deploy Mythos-class models at scale, but only when new safeguards are in place. The company plans to launch new safeguards with an upcoming Claude Opus model first, allowing it to refine them with a model that does not pose the same level of risk as Mythos Preview. The competitive picture is already shifting around it. When OpenAI released GPT-5.3-Codex in February, the company called it the first model it had classified as high-capability for cybersecurity tasks under its Preparedness Framework. Anthropic’s move with Glasswing signals that the frontier labs see controlled deployment–not open release–as the emerging standard for models at this capability level. Whether that standard holds as these capabilities spread further is, at this point, an open question that no single initiative can answer. See Also: Anthropic’s refusal to arm AI is exactly why the UK wants it Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Anthropic locked down its most powerful AI model over cybersecurity fears–then put it to work appeared first on AI News. View the full article
  6. A survey carried out by OutSystems, The State of AI Development 2026, argues that AI has moved into an early production phase for many enterprises, primarily inside the IT function. The survey was based on the responses of 1,879 IT leaders, and warns that adoption of AI is in danger of running ahead of governance and integration. The shortfall is a gap between what IT leaders want agents to do and what their organisations can safely control. The report’s authors urge companies to address the controls or guardrails on AI systems, and also stress the importance of integrating new AI technology into an organisation’s existing platforms. OutSystems says 97% of its respondents are exploring some form of agentic strategy, with 49% of them describing their current abilities as “advanced” or “expert.” Nearly half of those surveyed say that over half of agentic AI projects have moved from pilot into production, with Indian companies most successful in implementing the technology: 50% of Indian companies say their AI projects are 51% to 75% successful. Companies are considering where agents should be deployed first, and under what controls. Although “cost reduction or efficiency gains” is the most cited expectation for AI’s effects, only 22% found their deployments most effective in that regard. Instead, the most effective gains in a business stemmed from equipping software developers with AI tools described as “generative AI-assisted.” The report’s geography and sector data show that transitions to AI agentic workflows are unevenly distributed. India stands out as the market with the highest share of users considering themselves “expert”, while many organisations in Australia, Brazil, Germany, the Netherlands, the UK, and the US still identify as intermediate-stage users. France and Germany are the most sceptical about AI adoption, with Germany recording the highest share of leaders not using agentic AI in any form.

The sectors and functions invested in AI

Financial services and technology show the most movement from pilot to production, with many implementations in core business functions. These sectors arguably have the clearest line of sight from automation to measurable returns in terms of income. The practical inference from the report’s findings would be for slower-moving sectors to copy the implementation workflows employed by the fintech industry: start with narrow, high-volume workflows where performance can be measured and failures can be contained, and focus on the IT function. According to the survey, generative AI-assisted development is now common in nine of the ten countries surveyed, alongside traditional coding, outsourced development, and SaaS customisation. This undercuts the notion that enterprises are moving into an AI-native or all-AI stack. In fact, most organisations add agents and AI-generated code on top of the processes already proven effective in their development environments.

Fragmented data no roadblock to AI progress

OutSystems finds that 48% of respondents see integration with legacy systems as the most important ability needed to expand agentic AI, and 38% say legacy systems are the main reason projects stall between pilot and production. Of the potential barriers to AI development that were offered as choices to the survey’s participants, more than 40% cited integration difficulties and legacy fragmentation as the most problematic.
Organisations considering large data clean-up programmes (which many AI vendors advocate, citing poor data as a reason deployments fail to reach production) may want to rethink, the report implies. The authors state agents can be built that can work well in complex data environments, as long as governance and integration are strengthened at the same time as AI implementation. Across the board, most sectors express “moderate trust” in agentic AI at around 50%, although responses from different business functions were not broken out in the survey results’ figures.

IT operations and software development

The financial returns are manifest mostly in IT functions themselves. The report says the most explored use cases are IT operations, at 55%, and data analysis, at 52%. Workflow automation follows at 36%, then customer experience at 33%. On realised return on investment, IT development and productivity lead by a clear margin, at 40%, ahead of operational efficiency at 22%. That distribution suggests that the first durable value from agentic AI is internal, at developers’ desks, rather than in customer-facing environments. Customer-facing deployments may still make sense, but the report indicates they require more trust in system performance, stronger controls, better orchestration, and an ability to create watertight oversight mechanisms.

Trust in and control of agents and governance

Trust in agentic AI, however, is improving. OutSystems reports that 73% of respondents express either high or moderate trust in letting agents act autonomously, a rise of around 10% compared to a similar survey the company undertook last year. Trust in code or workflows generated by third-party AI tools is slightly lower, at 67%, a substantial increase from the prior year’s figure, when only 40% ‘mostly trusted’ generative AI to write code without human help. Only 36% of respondents say they have a centralised approach to AI governance, while 64% say they lack such a facility, and 41% rely on rules implemented on a per-project basis. Two-thirds say building human-in-the-loop checkpoints is technically difficult because it requires orchestration that can pause agents – in effect inserting manual braking on operations that might be fully autonomous. Many organisations appear to be deploying lighter oversight models, although it is not clear if that is a result of greater trust in models or whether business functions are under pressure to deploy AI regardless of security or reliability concerns. If the trend to loosen oversight continues, the report’s authors note that agentic AI adoption may advance faster than the methods of accountability that many consider important. Firms that want to scale agents in regulated or mission-critical settings should treat orchestration and auditability as part of the product, the survey’s findings state. When compliance checks consider a business’s operations, breadcrumb trails in the form of logfiles and defined responsibilities are considered important elements of any agentic AI rollout. The report says 94% of leaders are concerned about “AI sprawl”, which is not defined, but could be inferred to be a lack of a centralised management platform that oversees all AI deployments in the enterprise. 39% are very or extremely concerned about the issue, and only 12% currently use a centralised platform to keep that sprawl under control. The full survey can be accessed here. (Image source: “Relax” by Koijots is licensed under CC BY-SA 2.0.
To view a copy of this license, visit [Hidden Content]) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI’s software development success and central management needs appeared first on AI News. View the full article
  7. A new open-source toolkit from Microsoft focuses on runtime security to force strict governance onto enterprise AI agents. The release tackles a growing anxiety: autonomous language models are now executing code and hitting corporate networks way faster than traditional policy controls can keep up. AI integration used to mean conversational interfaces and advisory copilots. Those systems had read-only access to specific datasets, keeping humans strictly in the execution loop. Organisations are currently deploying agentic frameworks that take independent action, wiring these models directly into internal application programming interfaces, cloud storage repositories, and continuous integration pipelines. When an autonomous agent can read an email, decide to write a script, and push that script to a server, stricter governance is vital. Static code analysis and pre-deployment vulnerability scanning just can’t handle the non-deterministic nature of large language models. One prompt injection attack (or even a basic hallucination) could send an agent to overwrite a database or pull out customer records. Microsoft’s new toolkit looks at runtime security instead, providing a way to monitor, evaluate, and block actions at the moment the model tries to execute them. It beats relying on prior training or static parameter checks.

Intercepting the tool-calling layer in real time

Looking at the mechanics of agentic tool calling shows how this works. When an enterprise AI agent has to step outside its core neural network to do something like query an inventory system, it generates a command to hit an external tool. Microsoft’s framework drops a policy enforcement engine right between the language model and the broader corporate network. Every time the agent tries to trigger an outside function, the toolkit grabs the request and checks the intended action against a central set of governance rules. If the action breaks policy (e.g. an agent authorised only to read inventory data tries to fire off a purchase order), the toolkit blocks the API call and logs the event so a human can review it. Security teams get a verifiable, auditable trail of every single autonomous decision. Developers also win here; they can build complex multi-agent systems without having to hardcode security protocols into every individual model prompt. Security policies get decoupled from the core application logic entirely and are managed at the infrastructure level. Most legacy systems were never built to talk to non-deterministic software. An old mainframe database or a customised enterprise resource planning suite doesn’t have native defences against a machine learning model shooting over malformed requests. Microsoft’s toolkit steps in as a protective translation layer. Even if an underlying language model gets compromised by external inputs, the system’s perimeter holds. Security leaders might wonder why Microsoft decided to release this runtime toolkit under an open-source license. It comes down to how modern software supply chains actually work. Developers are currently rushing to build autonomous workflows using a massive mix of open-source libraries, frameworks, and third-party models. If Microsoft locked this runtime security feature to its proprietary platforms, development teams would probably just bypass it for faster, unvetted workarounds to hit their deadlines. Pushing the toolkit out openly means security and governance controls can fit into any technology stack. It doesn’t matter if an organisation runs local open-weight models, leans on competitors like Anthropic, or deploys hybrid architectures.
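A generic sketch of the interception pattern just described: an enforcement layer sits between the agent and its tools, checks every requested call against policy, and either executes it or blocks and logs it. This is not Microsoft's toolkit or its API; the policy rules and tool names are assumptions.

```python
# Generic sketch of runtime tool-call interception: a policy layer allows, blocks, and
# logs every requested call before it reaches the corporate network. Policy rules and
# tool names are illustrative assumptions, not a real toolkit's configuration.

from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

# Policy: which agent may call which tools (a read-only inventory agent, as in the example above).
POLICY = {"inventory-agent": {"inventory.read"}}

TOOLS = {
    "inventory.read": lambda sku: {"sku": sku, "on_hand": 12},
    "purchasing.create_order": lambda sku: {"sku": sku, "order": "PO-0001"},
}

def call_tool(agent_id: str, tool_name: str, **kwargs):
    """Intercept the request, record the decision, then execute or block the call."""
    allowed = tool_name in POLICY.get(agent_id, set())
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool_name,
        "args": kwargs,
        "decision": "allow" if allowed else "block",
    })
    if not allowed:
        raise PermissionError(f"{agent_id} is not authorised to call {tool_name}")
    return TOOLS[tool_name](**kwargs)

if __name__ == "__main__":
    print(call_tool("inventory-agent", "inventory.read", sku="A-17"))
    try:
        call_tool("inventory-agent", "purchasing.create_order", sku="A-17")
    except PermissionError as err:
        print("blocked:", err)
    print("audit trail entries:", len(AUDIT_LOG))
```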
Setting up an open standard for AI agent security also lets the wider cybersecurity community chip in. Security vendors can stack commercial dashboards and incident response integrations on top of this open foundation, which speeds up the maturity of the whole ecosystem. Businesses avoid vendor lock-in but still get a universally scrutinised security baseline.

The next phase of enterprise AI governance

Enterprise governance doesn’t just stop at security; it hits financial and operational oversight too. Autonomous agents run in a continuous loop of reasoning and execution, burning API tokens at every step. Startups and enterprises are already seeing token costs explode when they deploy agentic systems. Without runtime governance, an agent tasked with looking up a market trend might decide to hit an expensive proprietary database thousands of times before it finishes. Left alone, a badly configured agent caught in a recursive loop can rack up massive cloud computing bills in a few hours. The runtime toolkit gives teams a way to slap hard limits on token consumption and API call frequency. By setting boundaries on exactly how many actions an agent can take within a specific timeframe, forecasting computing costs gets much easier. It also stops runaway processes from eating up system resources. A runtime governance layer hands over the quantitative metrics and control mechanisms needed to meet compliance mandates. The days of just trusting model providers to filter out bad outputs are ending. System safety now falls on the infrastructure that actually executes the models’ decisions. Getting a mature governance program off the ground is going to demand tight collaboration between development operations, legal, and security teams. Language models are only scaling up in capability, and the organisations putting strict runtime controls in place today are the only ones who will be equipped to handle the autonomous workflows of tomorrow. See also: As AI agents take on more tasks, governance becomes a priority Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Microsoft open-source toolkit secures AI agents at runtime appeared first on AI News. View the full article
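A sketch of the hard runtime limits discussed in the piece above: a budget guard caps how many calls and tokens an agent can consume in a rolling window, halting runaway loops before they run up a bill. The limits and token counts are illustrative, not toolkit defaults.

```python
# Sketch of runtime budget limits for an agent: cap the number of tool calls and tokens
# consumed per rolling window so a recursive loop is halted early. Limits are illustrative.

import time

class BudgetExceeded(RuntimeError):
    pass

class BudgetGuard:
    def __init__(self, max_calls: int, max_tokens: int, window_seconds: float):
        self.max_calls = max_calls
        self.max_tokens = max_tokens
        self.window_seconds = window_seconds
        self.events: list[tuple[float, int]] = []  # (timestamp, tokens charged)

    def charge(self, tokens: int) -> None:
        """Record one agent action; raise if the call or token budget for the window is exceeded."""
        now = time.monotonic()
        self.events = [(t, n) for t, n in self.events if now - t < self.window_seconds]
        too_many_calls = len(self.events) + 1 > self.max_calls
        too_many_tokens = sum(n for _, n in self.events) + tokens > self.max_tokens
        if too_many_calls or too_many_tokens:
            raise BudgetExceeded("agent exceeded its call or token budget for this window")
        self.events.append((now, tokens))

if __name__ == "__main__":
    guard = BudgetGuard(max_calls=5, max_tokens=2_000, window_seconds=60)
    try:
        for step in range(10):  # simulate an agent stuck in a loop
            guard.charge(tokens=300)
            print(f"step {step}: allowed")
    except BudgetExceeded as err:
        print("halted:", err)
```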
  8. Exciting times are ahead in the world of enterprise perimeter security with a new partnership between Thrive Logic, an AI agent-driven security and operational intelligence platform, and Asylon, a security robotics company. Together, the companies are to introduce physical AI into the network edge security arena, combining “autonomous perimeter patrols with agentic AI analytics and automated incident workflows.” The goal is to reduce response friction and let security leaders report with confidence in high-security exterior zones. Physical AI understands real-world situations and is capable of responding actively via a continuous, mobile security presence. This is in contrast to merely recording events as and when they take place, with any action happening later. Using Asylon’s robotic patrols and Thrive Logic’s AI agent, the integration will monitor perimeter areas and analyse any incidents that may occur. Security teams might therefore relax a little and let AI detect issues in real time. In this arena, it could soon be ‘AI – 1, Bad Actors – 0.’

24/7 robotic patrol oversight

With pressure rising on security leaders in perimeter-intensive environments (labour volatility and unreliable patrol executions are two examples that spring to mind), Asylon’s Robotic Security Operations Centre (RSOC) helps combat these challenges with audit-ready security outcomes. Alongside Thrive Logic’s integration, robotic patrols won’t just collect video streams, but will produce alerts and step-by-step response processes. Therefore, security teams can respond more effectively, proving humans and AI can work in harmony.

How it works

Video captured by Asylon’s robotic patrols is securely sent to Thrive Logic’s platform. From here, the Thrive Logic AI agent continues to track connected streams, triggering alerts to relevant staff and stakeholders, and generating automated incident workflows aligned to SOP if or when these are required. The system allows enterprise security organisations to reduce operational friction and see improvements in response consistency. The system will generate audit-ready, time-stamped incident records for all sites where the technology operates. Damon Henry, CEO of Asylon Robotics, said: “Security leaders don’t need more dashboards – they need reliable coverage, consistent response, and defensible reporting. Robotic systems that extend perimeter presence, paired with AI that turns what’s observed into clear actions and documented outcomes. By integrating Asylon’s RSOC-managed robotic patrols with Thrive Logic’s agentic AI analytics and incident workflow automation, we’re giving enterprise teams a practical, scalable way to reduce response friction and elevate operational maturity across sites.” Nate Green, CEO of Thrive Logic, also emphasised the importance of physical AI. “Physical AI is where security becomes truly operational – persistent real-world visibility paired with intelligence that drives action,” he said. “Asylon’s robotic patrols create a high-value mobile layer across large perimeters. When connected to Thrive Logic’s AI agent and workflow automation, that visibility becomes actionable alerts, guided response, and audit-ready documentation.” You may have to wait your turn to experience the Asylon-Thrive Logic Physical AI integration as it’s currently only available for enterprise security teams managing high-activity exterior environments, but the companies are hoping for greater availability to all business sizes in the near future.
(Image by ikrzeus style from Pixabay) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Asylon and Thrive Logic bring physical AI to enterprise perimeter security appeared first on AI News. View the full article
  9. The failure mode for enterprise AI in 2026 is not what most people expected. It is not that the models are wrong, or that agents cannot reason, or that the technology is overhyped. The failure mode is that the data feeding those systems is fragmented, inconsistently labelled, and spread across dozens of applications that were never designed to share context. Boomi calls this the agentic AI data activation problem, and after tracking 75,000 AI agents running in production across its customer base, the company says solving it comes before everything else. That figure comes from February, when Boomi reported its strongest momentum to date: more than 30,000 customers globally, 75,000 AI agents in production, and a customer base that includes over a quarter of the Fortune 500. Yet the consistent pattern across those deployments, according to Steve Lucas, chairman and CEO of Boomi, is that AI value only materialises once the data problem is resolved. “AI only delivers value when data is properly activated, trusted and governed first,” Lucas said when the company announced its latest platform capabilities on March 9. The fragmentation problem Enterprise data is not missing; it exists in abundance, distributed across ERP systems, CRMs, data lakes, SaaS platforms, and legacy applications that have accumulated over decades. What is missing is the shared context that allows an AI agent to treat data from one system as reliably compatible with data from another. An agent drawing customer records from a CRM and pricing data from an ERP may be working from conflicting definitions of what a customer or a product actually is. The outputs it produces are only as coherent as the data standards beneath them. Boomi’s answer is Meta Hub, a central system of record announced in its March 9 platform update, designed to standardise business definitions across the enterprise and extend that context to every AI agent operating within it. The goal is to ensure agents reason from a consistent understanding of business logic rather than generating outputs based on fragmented interpretations pulled from disconnected systems. The same release introduced real-time SAP data extraction via change data capture, addressing one of the most common integration bottlenecks in large enterprises, where SAP data is often inaccessible due to slow, manual export processes that render it effectively unavailable to AI workflows in real-time. New governance capabilities for Snowflake Cortex agents within Boomi’s Agent Control Tower added audit trails and session logs, addressing a concern that has moved steadily up enterprise priority lists: AI agents operating as a ****** box, taking actions with no visible reasoning chain. What the analyst’s recognition signals Two independent assessments in March gave Boomi external validation of its positioning. On March 16, Gartner named Boomi a Leader in its 2026 Magic Quadrant for Integration Platform as a Service–the twelfth consecutive time–and positioned it highest for Ability to Execute. On March 31, the IDC MarketScape for Worldwide API Management named Boomi a Leader, specifically noting its AI-centric strategy that treats APIs as both the fuel and the control plane for AI workloads. The Gartner framing is pointed. The report stated that AI-ready integration is a strategic capability that aligns architecture, integration, and governance to enable AI agents to effectively access enterprise data and operate within business processes. 
That framing validates the problem Boomi is addressing and signals that iPaaS platforms are now being evaluated on AI readiness rather than traditional integration capabilities alone. The broader pattern By now, we are aware that the shift from pilot to production in enterprise AI is stalling in a predictable place. Organisations have models. They have agents. What many do not have is the data infrastructure that makes those agents reliable enough to trust with real business processes. Data activation–moving data from static storage into live, governed, context-rich flows that agents can actually reason from–is one articulation of what that missing layer needs to look like. Whether that framing becomes the industry standard or gets absorbed into a broader category is a question 2026 will start to answer. What is not in question is that the enterprises finding ROI from agentic AI are the ones that sorted the data layer first. Boomi will be exhibiting at the AI & Big Data Expo at TechEx North America, taking place 18–19 May 2026 at the San Jose McEnery Convention Centre. (Photo by Boomi) See also: Autonomous AI systems depend on data governance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Boomi calls it “data activation” and says it’s the missing step in every AI deployment appeared first on AI News. View the full article
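As a toy illustration of the fragmentation problem described in the piece above, the sketch below shows a CRM and an ERP exposing "customer" under different field names and status vocabularies, with a small mapping layer giving downstream agents one canonical view. It is not Boomi's Meta Hub; the field names and mappings are assumptions.

```python
# Toy sketch of reconciling fragmented "customer" records into one canonical definition
# that agents can reason from. Field names, status codes, and mappings are illustrative.

CRM_RECORD = {"CustomerName": "Acme Ltd", "Stage": "closed-won", "Region": "EMEA"}
ERP_RECORD = {"cust_name": "ACME LIMITED", "acct_status": "A", "price_tier": "T2"}

# Canonical definition every agent is expected to use.
CANONICAL_FIELDS = ("name", "is_active_customer", "region", "price_tier")

def from_crm(record: dict) -> dict:
    """Map CRM field names and stage vocabulary onto the canonical definition."""
    return {
        "name": record["CustomerName"].strip().upper(),
        "is_active_customer": record["Stage"] == "closed-won",
        "region": record.get("Region"),
        "price_tier": None,
    }

def from_erp(record: dict) -> dict:
    """Map ERP field names and status codes onto the canonical definition."""
    return {
        "name": record["cust_name"].strip().upper(),
        "is_active_customer": record["acct_status"] == "A",
        "region": None,
        "price_tier": record.get("price_tier"),
    }

def merge(*views: dict) -> dict:
    """Combine per-system views, preferring the first non-null value per canonical field."""
    merged = {}
    for field in CANONICAL_FIELDS:
        merged[field] = next((v[field] for v in views if v.get(field) is not None), None)
    return merged

if __name__ == "__main__":
    print(merge(from_crm(CRM_RECORD), from_erp(ERP_RECORD)))
```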
  10. The Anthropic UK expansion story is less about diplomatic courtship and more about what happens when a government punishes a company for having principles. In late February, US Defence Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a stark ultimatum: remove guardrails preventing Claude from being used for fully autonomous weapons and domestic mass surveillance, or face consequences. Amodei didn’t budge. He wrote that Anthropic could not “in good conscience” grant the Pentagon’s request, arguing that some uses of AI “can undermine rather than defend democratic values.” Washington’s response was swift. Trump directed every federal agency to immediately cease all use of Anthropic’s technology, and the Pentagon designated the company a supply chain risk, a label ordinarily reserved for adversarial foreign entities like Huawei. The US$200 million Pentagon contract was pulled. Defence tech companies instructed employees to stop using Claude and switch to alternatives. London, watching all of this unfold, saw something different.

The UK’s pitch

Staff at the UK’s Department for Science, Innovation and Technology (DSIT) have drawn up proposals for the US$380 billion company, ranging from a dual stock listing on the London Stock Exchange to an office expansion in the capital, according to multiple people with knowledge of the plans. Prime Minister Keir Starmer’s office has backed the effort, which will be put to Amodei when he visits in late May. Anthropic already has around 200 employees in Britain and appointed former prime minister Rishi Sunak as a senior adviser last year. The infrastructure for a meaningful UK presence is already there. What the British government is now offering is an explicit signal that Anthropic’s approach to AI–built on embedded ethical constraints–is an asset, not an obstacle. A dual listing in London, if it materialised, would give Anthropic access to European institutional investors at a moment when its domestic regulatory standing remains under active legal challenge. The Pentagon’s appeal of the court-ordered injunction blocking the supply chain designation is still before the Ninth Circuit, and the outcome remains uncertain.

Ethics as a competitive advantage

The dispute has been framed largely as a legal and political fight. But its implications for global AI governance run deeper. Anthropic’s lawyers argued in court filings that Claude was not developed to be used for lethal autonomous weapons without human oversight, nor deployed to spy on US citizens, and that using the tools in these ways would represent an abuse of its technology. US District Judge Rita Lin, who granted a preliminary injunction blocking the blacklist in March, found the government’s actions “troubling” and concluded they likely violated the law. That judicial finding matters in the UK context. Britain is positioning itself as a regulatory environment sitting between Washington’s current posture, which demands unrestricted military access, and Brussels, where the EU AI Act imposes its own constraints. The UK government presents itself as offering a less constrained environment for AI companies than either the US or the European Union. Crucially, that pitch doesn’t ask Anthropic to abandon the guardrails it went to court to defend.
The courtship also sits alongside broader UK efforts to build domestic AI capability, including a recently announced £40 million state-backed research lab, after officials acknowledged the absence of a homegrown competitor to the leading US frontier labs.

Competition in London

The UK’s play for Anthropic is not happening in a vacuum. OpenAI has already committed to making London its biggest research hub outside the US. Google has anchored itself in King’s Cross since acquiring DeepMind in 2014. The race to secure frontier AI in London is already competitive, and Anthropic’s current circumstances make it the most consequential target yet. Anthropic has been expanding internationally regardless of its domestic legal battles, including opening a Sydney office as its fourth Asia-Pacific location. The global growth strategy is already in motion. What remains to be seen is how much of it London gets to claim. The company Washington blacklisted for having an AI ethics policy is now being actively courted by another G7 government that wants exactly that. The late May meetings with Amodei will be telling. See Also: Anthropic selected to build government AI assistant pilot Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Anthropic’s refusal to arm AI is exactly why the UK wants it appeared first on AI News. View the full article
  11. AI systems are starting to move beyond simple responses. In many organisations, AI agents are now being tested to plan tasks, make decisions, and carry out actions with limited human input. It is no longer just about whether a model gives the right answer. It is about what happens when that model is allowed to act. Autonomous systems need clear boundaries. They need rules that define what they can access, what they are allowed to do, and how their actions are tracked. Without those controls, even well-trained systems can create problems that are hard to detect or reverse. One company working on this problem is Deloitte. The firm has been developing governance frameworks and advisory approaches to help organisations manage AI systems. From tools to AI agents Most AI systems in use today still depend on human prompts. They generate text, analyse data, or make predictions, but a person usually decides what happens next. Agentic AI changes that pattern. These systems can break down a goal into steps, choose actions, and interact with other systems to complete tasks. That added independence brings new challenges. When a system acts on its own, it may take paths that were not fully expected or use data in ways that were not intended. Deloitte’s work focuses on helping organisations prepare for these risks. Rather than treating AI as a standalone tool, the firm looks at how it fits into business processes, including how decisions are made and how data flows through systems. Building governance into the lifecycle Governance should not be added after deployment. It needs to be built into the full lifecycle of an AI system. This starts at the design stage. Organisations need to define what a system is allowed to do and where its limits are. This may include setting rules around data use and outlining how the system should respond in uncertain situations. The next stage is deployment. At this point, governance focuses on access and control, including who can use the system and what it can connect to. Once the system is live, monitoring becomes the main concern. Autonomous systems can change over time as they interact with new data. Without regular checks, they may drift away from their original purpose. The role of transparency and accountability As AI systems take on more responsibility, it becomes more difficult to trace how decisions are made. This creates a demand for stronger transparency. Deloitte’s work highlights the importance of keeping track of how systems operate. This includes logging actions and documenting decisions. These records help organisations in determining what happened if something goes wrong. If an autonomous system takes an action, there needs to be clarity about who is responsible. Research from Deloitte shows that adoption of AI agents is moving faster than the controls needed to manage them. Around 23% of companies already use them, and that figure is expected to reach 74% within two years. Only 21% report having strong safeguards in place to oversee how they behave. Real-time oversight for AI agents Once an autonomous system is active, the focus shifts to how it behaves in real-world conditions. Static rules are not always enough, and systems need to be observed as they operate. Deloitte’s approach includes real-time monitoring, allowing organisations to track what an AI system is doing as it performs tasks. If the system behaves in an unexpected way, teams can step in quickly. This may involve pausing certain actions or adjusting permissions. 
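As a rough illustration of what that kind of real-time intervention can look like, the sketch below checks each proposed agent action against an allow-list, pauses the agent when it steps outside its scope, and keeps an audit log. The class names, policy fields, and thresholds are hypothetical assumptions for illustration, not drawn from Deloitte's frameworks.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    tool: str          # e.g. "crm.read", "email.send"
    target: str        # resource the agent wants to touch
    payload_size_kb: int = 0

@dataclass
class OversightPolicy:
    allowed_tools: set = field(default_factory=set)
    max_payload_kb: int = 256                            # cap on data a single action may move
    require_approval: set = field(default_factory=set)  # tools that need a human sign-off

class AgentMonitor:
    """Reviews each proposed action before it runs and logs the outcome for audit."""
    def __init__(self, policy: OversightPolicy):
        self.policy = policy
        self.audit_log = []
        self.paused = False

    def review(self, action: AgentAction) -> str:
        if self.paused:
            decision = "blocked: agent paused"
        elif action.tool not in self.policy.allowed_tools:
            self.paused = True                           # pause the agent on out-of-scope behaviour
            decision = "blocked: tool not permitted, agent paused"
        elif action.payload_size_kb > self.policy.max_payload_kb:
            decision = "blocked: payload exceeds limit"
        elif action.tool in self.policy.require_approval:
            decision = "escalated: human approval required"
        else:
            decision = "allowed"
        self.audit_log.append((action, decision))        # audit trail for later review
        return decision

if __name__ == "__main__":
    policy = OversightPolicy(
        allowed_tools={"crm.read", "ticket.create"},
        require_approval={"ticket.create"},
    )
    monitor = AgentMonitor(policy)
    print(monitor.review(AgentAction("crm.read", "account/123", 12)))       # allowed
    print(monitor.review(AgentAction("db.export", "customers", 90_000)))    # out of scope: agent paused
    print(monitor.review(AgentAction("crm.read", "account/456", 8)))        # blocked while paused
```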
Real-time oversight also helps with compliance. In regulated industries, companies need to show that systems follow rules and standards. In practice, these controls are starting to appear in operational settings. Deloitte describes scenarios where AI systems monitor equipment performance across sites. Sensor data can signal early signs of failure, which can trigger maintenance workflows and update internal systems. Governance frameworks define what actions the system can take, when human approval is required, and how decisions are recorded. The process runs across multiple systems, but from a user’s point of view, it appears as a single action. Governance is part of discussions at AI & Big Data Expo North America 2026, taking place on May 18–19 in Santa Clara, California. Deloitte is listed as a Diamond Sponsor for the event, placing it among the firms contributing to conversations around how autonomous systems are deployed and controlled in practice. The challenge is not just building smarter systems, but ensuring they behave in ways organisations can understand, manage, and trust over time. (Photo by Roman) See also: Autonomous AI systems depend on data governance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post As AI agents take on more tasks, governance becomes a priority appeared first on AI News. View the full article
  12. With the launch of KiloClaw, enterprises now have a tool to enforce governance over autonomous agents and manage shadow AI. While businesses spent the last year securing large language models and formalising vendor agreements, developers and knowledge workers started moving on their own. Employees are bypassing official procurement, deploying autonomous agents on personal infrastructure to automate their daily workflows. This practice, known as ‘Bring Your Own AI’ or BYOAI, exposes proprietary enterprise data to unregulated external environments. To address this vulnerability, software provider Kilo launched KiloClaw for Organizations, an enterprise-grade platform built to rein in decentralised agent deployments and restore architectural oversight. Kilo targets the lack of visibility surrounding agent deployment. When engineers set up autonomous agents to parse error logs, or financial analysts deploy local scripts to reconcile spreadsheets, they prioritise immediate efficiency over security protocols. These agents routinely gain access to corporate Slack channels, Jira boards, and private code repositories through personal API keys. Since these connections happen outside official IT purview, they create blind spots for data exfiltration and intellectual property leaks. KiloClaw provides a centralised control plane for security teams to identify, monitor, and restrict these autonomous actors without blocking their productivity gains. The unseen infrastructure of Bring-Your-Own-Agent The current shift mirrors the Bring Your Own Device (BYOD) era of the early 2010s, when employees used personal smartphones for corporate email and forced IT departments to adopt mobile device management. The AI equivalent carries higher stakes. A compromised phone might expose a static inbox, but an unmonitored autonomous agent has active execution privileges. It reads, writes, modifies, and deletes data across integrated platforms at speeds humans cannot replicate. These autonomous scripts also frequently rely on external computational power. An employee might run an agent locally while the agent sends corporate data to third-party inference servers to process queries. If those providers use the ingested data to train future models, the enterprise loses control of its intellectual property. KiloClaw, for its part, establishes a secure boundary around these processes. Instead of ignoring external deployments, the platform pulls them into a registry where compliance officers can audit behaviour and data flows. Identity and access management for autonomous AI agents Governing autonomous systems requires a different technical architecture than managing a human workforce. Traditional Identity and Access Management (IAM) systems are built for human credentials or static application-to-application communication. Autonomous agents, however, are dynamic. Agents chain tasks together sequentially, formulating new requests based on the output of previous actions. An agent might request access to an enterprise resource planning database halfway through a task, and standard security software struggles to determine if this is hostile behaviour or a legitimate operation. KiloClaw treats agents as distinct entities requiring restrictive, time-bound permission scopes. Instead of developers plugging permanent, high-level API keys into experimental models, KiloClaw issues short-lived, narrowly defined access tokens. 
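To make the idea concrete, here is a minimal sketch of how short-lived, narrowly scoped tokens of this kind can be issued and checked. The names, scopes, and interface are illustrative assumptions only, not KiloClaw's actual API.

```python
import secrets
import time

class ScopedTokenIssuer:
    """Issues time-bound tokens limited to an explicit set of permissions."""
    def __init__(self):
        self._tokens = {}  # token -> (scopes, expiry)

    def issue(self, scopes: set[str], ttl_seconds: int = 900) -> str:
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (frozenset(scopes), time.time() + ttl_seconds)
        return token

    def check(self, token: str, requested_scope: str) -> bool:
        entry = self._tokens.get(token)
        if entry is None:
            return False
        scopes, expiry = entry
        if time.time() > expiry:
            del self._tokens[token]          # expired tokens are discarded
            return False
        if requested_scope not in scopes:
            self.revoke(token)               # scope violation: revoke immediately
            return False
        return True

    def revoke(self, token: str) -> None:
        self._tokens.pop(token, None)

if __name__ == "__main__":
    issuer = ScopedTokenIssuer()
    # The agent is only allowed to read marketing emails, and only for 15 minutes.
    token = issuer.issue({"email.marketing.read"}, ttl_seconds=900)
    print(issuer.check(token, "email.marketing.read"))   # True
    print(issuer.check(token, "crm.customers.export"))   # False: revoked on violation
    print(issuer.check(token, "email.marketing.read"))   # False: token no longer valid
```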
If an agent designed to summarise weekly marketing emails attempts to download a customer database, the platform detects the scope violation and revokes access. This containment limits the blast radius within the corporate network if an open-source model behaves unpredictably. How tools like KiloClaw balance velocity and compliance Mandating a blanket ban on custom-built automation tools rarely works; it drives the behaviour underground, encouraging engineers to obfuscate traffic and hide workflows. Platforms like KiloClaw aim to construct a sanctioned environment where employees can safely register their tools. For this governance framework to work, IT leaders need to prioritise integration. KiloClaw connects directly into the continuous integration and deployment pipelines that software teams already utilise. By automating security checks and permission provisioning, security teams remove the friction that causes employees to bypass rules. Enterprises can establish baseline templates detailing what data external models can process, allowing workers to deploy agents within pre-approved boundaries. This maintains compliance without sacrificing workflow automation. The development of shadow AI governance tools points to a new phase of algorithmic regulation. Early corporate reactions to generative models focused on acceptable use policies for text-based chatbots. Now, the focus is shifting toward orchestration, containment, and system-to-system accountability. Regulators globally are also examining how companies monitor automated systems, pushing verifiable oversight toward legal obligation. As digital agents multiply within corporate networks, the concept of an ‘Agent Firewall’ is becoming a standard IT budget item. Platforms that map the relationships between human intent, machine execution, and corporate data will form the foundation of future security operations. KiloClaw’s entry into the organisational governance space highlights a shifting reality for the C-suite: the immediate threat includes well-meaning employees handing network keys to unregulated machines. Establishing structural authority over these non-human actors is necessary to safely harness their potential. See also: Autonomous AI systems depend on data governance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post KiloClaw targets shadow AI with autonomous agent governance appeared first on AI News. View the full article
  13. A decade ago, it would have been hard to believe that artificial intelligence could do what it can do now. However, it is this same power that introduces a new attack surface that traditional security frameworks were not built to address. As this technology becomes embedded in critical operations, companies need a multi-layered defence strategy that includes data protection, access control and constant monitoring to keep these systems safe. Five foundational practices address these risks.
1. Enforce strict access and data governance
AI systems depend on the data they are fed and the people who access them, so role-based access control is one of the best ways to limit exposure. By assigning permissions based on job function, teams can ensure only the right people can interact with and train sensitive AI models. Encryption reinforces protection. AI models and the data used to train them must be encrypted when stored and when moving between systems. This is especially important when that data includes proprietary code or personal information. Leaving a model unencrypted on a shared server is an open invitation for attackers, and solid data governance is the last line of defence keeping those assets safe.
2. Defend against model-specific threats
AI models face a variety of threats that conventional security tools were not designed to catch. Prompt injection ranks as the top vulnerability in the OWASP Top 10 for large language model (LLM) applications, and it happens when an attacker embeds malicious instructions inside an input to override a model's behaviour. One of the most direct ways to block these attacks at the entry point is by deploying AI-specific firewalls that validate and sanitise inputs before they reach an LLM. Beyond input filtering, teams should run regular adversarial testing, which is essentially ethical hacking for AI. Red team exercises simulate real-world scenarios like data poisoning and model inversion attacks to reveal vulnerabilities before threat actors find them. Research on red teaming AI systems highlights that this kind of iterative testing needs to be built into the AI development life cycle and not bolted on after deployment.
3. Maintain detailed ecosystem visibility
Modern AI environments span on-premise networks, cloud infrastructure, email systems and endpoints. When security data from each of these areas sits in a separate silo, visibility gaps emerge, and attackers move through those gaps undetected. A fragmented view of your environment makes it nearly impossible to correlate suspicious events into a coherent threat picture. Security teams need unified visibility across every layer of their digital environment. This means breaking down information silos between network monitoring, cloud security, identity management and endpoint protection. When telemetry from all these sources feeds into a single view, analysts can connect the dots between an anomalous login, a lateral movement attempt and a data exfiltration event, rather than seeing each in isolation. Achieving this breadth of coverage is increasingly non-negotiable. As NIST's Cybersecurity Framework Profile for AI makes clear, organisations must secure, thwart and defend across all relevant assets, not just the most visible ones.
4. Adopt a consistent monitoring process
Security is not a one-time configuration, because AI systems change. Models are updated, new data pipelines are introduced, user behaviours change and the threat landscape evolves with them.
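The sketch below gives a minimal illustration of the behavioural-baseline approach discussed in the rest of this section: a rolling baseline over one metric (hourly API calls) with deviations flagged as they occur. The metric, window, and threshold are assumptions for illustration only, not a recommendation from NIST or any vendor.

```python
import statistics
from collections import deque

class BehaviourBaseline:
    """Rolling baseline over a single metric (e.g. hourly API calls) with simple deviation alerts."""
    def __init__(self, window: int = 24, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # last `window` observations
        self.threshold = threshold           # alert if more than `threshold` std devs from the mean

    def observe(self, value: float) -> bool:
        """Record a new observation; return True if it looks anomalous against the baseline."""
        anomalous = False
        if len(self.history) >= 5:           # need a minimum baseline before alerting
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            if abs(value - mean) / stdev > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

if __name__ == "__main__":
    baseline = BehaviourBaseline(window=24, threshold=3.0)
    for calls in [100, 95, 110, 105, 98, 102, 99, 101]:  # typical hourly call counts
        baseline.observe(calls)
    print(baseline.observe(104))   # False: within normal range
    print(baseline.observe(950))   # True: sudden spike in API calls
```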
Rule-based detection tools struggle to keep pace because they rely on known attack signatures rather than real-time behavioural analysis. Continuous monitoring addresses this gap by establishing a behavioural baseline for AI systems and flagging deviations as they happen. Consistent monitoring can flag unusual activity in the moment, whether it's a model producing unexpected outputs, a sudden change in API call patterns or a privileged account accessing data it normally shouldn't. Security teams get an immediate alert with enough context to act fast. The shift toward real-time detection is critical for AI environments, where the volume and speed of data far outpace human review. Automated monitoring tools that learn normal patterns of behaviour can detect low-and-slow attacks that would otherwise go unnoticed for weeks.
5. Develop a clear incident response plan
Incidents are inevitable, even with strong preventive controls in place. Without a predefined response plan, companies risk making costly decisions under pressure, which can worsen the impact of a breach that could have been contained quickly. An effective AI incident response plan should cover containment, investigation, eradication and recovery:
Containment: Limits the immediate impact by isolating affected systems
Investigation: Establishes what happened and how far it reached
Eradication: Removes the threat and patches the exploited weakness
Recovery: Restores normal operations with stronger controls in place
AI incidents require unique recovery steps, like retraining a model that was fed corrupted data or reviewing logs to see what the system produced while it was compromised. Teams that plan for these scenarios in advance recover faster and with far less reputational damage.
Top 3 providers for implementing AI security
Implementing these practices at scale requires purpose-built tooling. Three providers stand out for organisations looking to put a serious AI security strategy into practice.
1. Darktrace
Darktrace is a premier choice for AI security, largely because of its foundational Self-Learning AI. The system builds a dynamic understanding of what normal looks like in an enterprise's unique digital environment. Rather than relying on static rules or historical attack signatures, Darktrace's core AI looks for anomalous events, reducing the false positives that plague more rigid, rule-based tools. A second layer of analysis is provided by its Cyber AI Analyst, which autonomously investigates every alert and determines whether it is part of a wider security incident. This can reduce the number of alerts that land in a SOC analyst's ****** from hundreds to just two or three critical incidents that need attention. Darktrace was among the earliest adopters of AI for cybersecurity, giving its solutions a maturity advantage over newer entrants. Its coverage spans on-premise networks, cloud infrastructure, email, OT systems and endpoints – all manageable in unison or at the individual product level. One-click integrations from the customer portal mean brands can extend that coverage without long, disruptive deployment cycles.
2. Vectra AI
Vectra AI is a strong option for organisations running hybrid or multi-cloud environments. Its Attack Signal Intelligence technology automates the detection and prioritisation of attacker behaviours in network traffic and cloud logs, surfacing the activity that matters most rather than flooding analysts with raw alerts.
Vectra takes a behaviour-based approach to threat detection, focusing on what attackers do in an environment, not how they initially gained access. This makes it effective at catching lateral movement, privilege escalation and command-and-control activity that bypasses perimeter defences. For teams managing complex hybrid architectures, Vectra's ability to provide consistent detection across on-premise and cloud environments from a single platform is an advantage.
3. CrowdStrike
CrowdStrike is recognised as a leader in cloud-native endpoint security. Its Falcon platform is built on a powerful AI model trained on an extensive body of threat intelligence, letting it prevent, detect and respond to threats at the endpoint, including novel malware. In environments where endpoints make up a large chunk of the attack surface, its lightweight agent and cloud-native setup make it easy to deploy without disrupting operations. Its threat intelligence integrations also help security teams connect the dots, linking what's happening on a single device to a larger attack pattern playing out across the whole infrastructure.
Chart a secure future for artificial intelligence
As AI systems grow more capable, the threats designed to exploit them will also grow more sophisticated. Securing AI demands a forward-thinking strategy built on prevention, continuous visibility and rapid response – one that adapts as the environment evolves. The post 5 best practices to secure AI systems appeared first on AI News. View the full article
  14. China has approved its 15th Five-Year Plan [PDF] setting out the country's economic, education, social, and industrial priorities through to 2030. As might be expected, there is a significant number of references to AI, with the technology mentioned in several contexts. AI is grouped alongside quantum computing, biotechnology, and energy as paths that are to be pursued as part of the country's strategic science policy. The document calls for more work in developing high-performance AI chips and the software to support them in this context. There's also a commitment to academic and industry research on new model architectures and the core algorithms underpinning them. Development of communications technologies such as satellite systems, 5G+ (sometimes referred to as 5G-A or 5G Advanced) and 6G networks is to support AI workloads as part of a broader push to improve the country's infrastructure for data transmission, general communication and data processing. In the section of the Five-Year Plan dedicated to digital infrastructure, the use of AI falls into three components: computing power, AI models, and the organisation and dissemination of data across China. The government calls for national computing hubs described as "intelligent computing clusters", and proposes market mechanisms such as the leasing of computing resources to give access to as large a swathe of the population as possible. There are also to be new ways in which government bodies will procure the computing services they need. The compute hubs the government proposes are also intended to reduce the barriers smaller firms face to access the very latest in technology. The government wants the theoretical work behind model training and inference to continue, both in research and in manufacturing, and refers specifically to multi-modal, agent-based, and "embodied" AI. It sees the technology as playing an increasing role in areas of the economy like manufacturing, energy, agriculture, and service industries. It cites industrial design, production processes, general operations, energy system management, and agricultural production as areas where the use of AI should be increased and encouraged. In the service sector, the text calls out the finance, logistics, and software services sectors. For the general technology-using ******** consumer, the government wants to see an increase in the number and type of AI-enabled devices, including phones, computers, and robots, and links the use of AI to education, healthcare, care for the elderly, and social service provision. In these settings, it envisages adaptive learning systems in education, diagnostic support in healthcare, and ******** system management. At the national and local government levels, the Five-Year Plan wants the digital services provided by all elements of the public sector to increase in scope and ability, based on integrated data systems built around standard models. It calls for the use of AI models in general administration, and the assessment of risk to public safety. The government is generally quite conservative in its approach to cooperation with other nations, suggesting that it may be possible for the country to work with outside organisations on international standards around data flows and infrastructure.
The issue of governance and regulation of data forms a relatively substantial part of the discussion in the document, calling for specific legal and regulatory frameworks for AI, including rules on the registration of new algorithms, security, and overall transparency. It cites common risks to AI use that may affect the economy, including data misuse and deepfakes. Given the size of the country's population, it's perhaps not surprising that there is little mention of specific steps the country will take to ensure its role in the evolution of AI. Over the course of the next five years, the details are likely to emerge through events observable by China-watchers. But as the pages of this site can attest, the country's chosen path for AI rests more on smaller, open, freely-available, efficient models than the approach more common in the West: large, proprietary models controlled by two or three major players based on hardware from mostly one supplier. The details of the ******** government's implementations of AI in its economy will inform observers of whether the next five years will continue China's chosen course, or whether the West's ideology around the technology will force a change of approach. (Image source: "Beijing skyline from northeast 4th ring road (cropped)" by Picrazy2 is licensed under CC BY-SA 4.0. To view a copy of this license, visit [Hidden Content]) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post China's Five-Year Plan details the targets for AI deployment appeared first on AI News. View the full article
  15. Much of the current focus on AI safety has centred on models – how they are trained and monitored. But as systems become more autonomous, attention is shifting toward the data those systems depend on. If the data feeding an AI system is fragmented, outdated, or lacks oversight, the system's behaviour can become more unpredictable. Data governance is becoming a core part of how autonomous systems are controlled. Denodo is one of the companies working in this area, focusing on how organisations access and manage data across different sources. Autonomous AI systems carry out tasks with limited supervision, retrieving information, making decisions based on that information, and triggering actions in business workflows. The challenge is that these systems depend on a steady flow of data. In regulated industries, unpredictable results can create compliance risks. In customer-facing systems, they can result in poor decisions or incorrect responses.
How data alters AI behaviour
Data is often spread across multiple systems. Large organisations store information in cloud platforms, internal databases, and third-party services. This creates silos, where different parts of the business operate on different versions of the same data. Denodo addresses this problem by providing a way to access data without moving it into a single repository. Its platform creates a unified view of data from different sources for applications, including AI systems. It allows organisations to apply consistent policies across all data sources. Access rules, compliance requirements, and use limits can be defined in one place. It also supports approaches that allow AI systems to query enterprise data using defined structures and policies. The platform logs how data is queried and what is returned, creating an audit trail. This can help organisations understand how an AI system reached a decision and support compliance requirements. It can also help teams monitor data use in real time and identify unusual activity. If multiple AI systems rely on the same governed data layer, they are more likely to produce aligned results, which can help reduce the risk of conflicting outputs across different parts of the business.
Governance in the stack
As autonomous AI systems become more common, governance is being applied at several levels. Data governance, which sits underneath models and applications, helps ensure that the inputs to those systems are reliable. A well-governed model can still produce poor results, especially if it relies on flawed data. Strong data governance can support better outcomes even when systems operate with some degree of independence. This is why data-focused companies are becoming part of the broader AI governance conversation. By controlling how data is accessed and used, they help shape how autonomous systems behave in practice. At AI & Big Data Expo North America 2026, discussions around AI include oversight and system behaviour. Denodo is among the companies taking part in those discussions, particularly around data management and enterprise AI. Early deployments often focused on what AI systems could do. Current discussions are more concerned with how those systems should be managed once they are in use.
From ability to control
The next stage of AI adoption is likely to depend less on new model features and more on how well organisations manage the systems around them.
Governance is not an added feature, but a requirement for systems that are expected to act on their own. (Photo by Hyundai Motor Group) See also: SAP and ANYbotics drive industrial adoption of physical AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Autonomous AI systems depend on data governance appeared first on AI News. View the full article
  16. The same technology that financial institutions are deploying is being weaponised against them. That is the core tension running through Experian's 2026 Future of Fraud Forecast, and it's a tension the company is in a position to name because it sits on both sides of it. According to FTC data cited in the forecast, consumers lost more than US$12.5 billion to fraud in 2024. As per Experian's own data accompanying the report, nearly 60% of companies reported an increase in fraud losses from 2024 to 2025. Experian's fraud prevention solutions helped clients avoid an estimated US$19 billion in fraud losses globally in 2025, a figure that underscores the scale of the problem and how much defence now depends on AI matching the speed and autonomy of attacks.
The agentic AI issue
The most pressing finding in Experian's forecast is what the company calls machine-to-machine mayhem, the point at which agentic AI systems, designed to transact autonomously on behalf of users, become indistinguishable from the bots fraudsters deploy for the same purpose. According to Experian's forecast, as organisations strive to integrate AI agents capable of independent decision-making, fraudsters are exploiting those same systems to run high-volume digital fraud at a scale and speed no human operation could sustain. The core challenge, as per the report, is that machine-to-machine interactions carry no clear ownership of liability; when an AI agent initiates a transaction that turns out to be fraudulent, the question of who is responsible has no settled answer. Kathleen Peters, chief innovation officer for Fraud and Identity at Experian North America, framed the problem: "Technology is accelerating the evolution of fraud, making it more sophisticated and harder to detect. By combining differentiated data with advanced analytics and cutting-edge technology, businesses can strengthen fraud defences, safeguard consumers, and deliver secure, seamless experiences." Experian predicts that this will reach a tipping point in 2026, forcing substantive industry conversations around liability and the governance of agentic AI in commerce. Some organisations are already making preemptive moves. Amazon, for instance, has stated it blocks third-party AI agents from browsing and transacting on its platform, citing security and privacy concerns.
Four other threats the forecast identifies
Beyond the agentic AI issue, Experian's forecast identifies four additional trends that financial institutions need to consider in 2026.
Deepfake candidates infiltrating remote workforces: Generative AI tools can now produce tailored CVs and real-time deepfake video capable of passing job interviews. According to the forecast, employers will onboard individuals who are not who they claim to be, granting bad actors access to internal systems. The FBI and Department of Justice issued multiple warnings in 2025 about documented instances of North Korean operatives using this approach to gain employment at US companies.
Website cloning overwhelms fraud teams: AI tools have made it easier to create replicas of legitimate sites, and harder to eliminate them permanently. As per the forecast, even after takedown requests are actioned, spoofed domains continue to resurface, forcing fraud teams into reactive patterns.
Emotionally intelligent scam bots: Generative AI means bots can conduct complex romance fraud and relative-in-need scams without human operators.
According to Experian's forecast, such bots respond convincingly, build trust over extended periods, and are becoming increasingly difficult to distinguish from genuine human interaction.
Smart home vulnerabilities: Devices including virtual assistants, smart locks, and connected appliances create new entry points for fraudsters. Experian forecasts that bad actors will exploit these devices to access personal data and monitor household activity as the connected home becomes a greater part of everyday financial behaviour.
Financial institutions' responses
According to Experian's Perceptions of AI Report, drawing on responses from more than 200 decision-makers at leading financial institutions, 84% identify AI as a critical or high priority for their business strategy over the next two years. A further 89% say AI will play an important role in the lending lifecycle. The governance dimension, however, is where institutions struggle. According to the same report, 73% of respondents are concerned about the regulatory environment around AI, and 65% identify AI-ready data as one of their biggest deployment challenges. Data quality was rated the single most important factor in choosing an AI vendor, which places Experian's data-first positioning squarely in line with what financial institutions say they need most. On the compliance side, Experian's AI-powered Assistant for Model Risk Management addresses one of the most resource-intensive requirements facing institutions deploying AI. According to a 2025 Experian study of more than 500 global financial institutions, 67% struggle to meet their country's regulatory requirements, 79% report more frequent supervisory communications from regulators than a year ago, and 60% still use manual compliance processes. In Experian's announcement, the company states that more than 70% of larger institutions report model documentation compliance involves over 50 people, a figure that signals the scale of the automation opportunity. Vijay Mehta, EVP of Global Solutions and Analytics at Experian Software Solutions, described the challenge the product addresses: "The AI-enabled speed of data analytics and model development is driving unprecedented business opportunities for financial institutions, but it comes with a challenge: global regulations that require time-consuming documentation. Experian Assistant for Model Risk Management helps solve this labour and resource-intensive requirement with end-to-end model documentation automation."
The data quality foundation
Running underneath Experian's fraud and compliance products is the same structural argument that appeared in both IBM's and Salesforce's AI narratives this week: AI is only as reliable as the data it runs on. As per Experian's Perceptions of AI Report, 65% of financial institution decision-makers consider AI-ready data one of their biggest challenges, and data quality is the most critical factor influencing trust in AI vendors. That is not a coincidence of messaging. It reflects a constraint facing financial services institutions as they move AI from pilots into production credit decisioning, fraud detection, and regulatory reporting: functions where explainability and auditability are not optional. Experian's CDAO Paul Heywood is among the confirmed speakers at the AI & Big Data Expo, part of TechEx North America, taking place 18 – 19 May 2026 at the San Jose McEnery Convention Centre, California. Experian is a Platinum Sponsor at TechEx Global.
See also: Hershey applies AI in its supply chain operations Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Experian uncovers fraud paradox in financial services’ AI adoption appeared first on AI News. View the full article
  17. Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast. The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only 11 percent have reached the stage of deploying and scaling AI agents in ways that produce enterprise-wide business outcomes. However, the central finding is not that AI is failing; 64 percent of respondents say AI is already delivering meaningful business outcomes. The problem is that “meaningful” is doing a lot of heavy lifting in that sentence, and the distance between incremental productivity gains and the kind of compounding operational efficiency that moves the needle on margin is, for most organisations, still substantial. The architecture of a performance gap KPMG’s report distinguishes between what it labels “AI leaders” (i.e. organisations that are scaling or actively operating agentic AI) and everyone else. The gap in outcomes between these two cohorts is striking. Steve Chase, Global Head of AI and Digital Innovation at KPMG International, said: “The first Global AI Pulse results reinforce that spending more on AI is not the same as creating value. Leading organisations are moving beyond enablement, deploying AI agents to reimagine processes and reshape how decisions and work flow across the enterprise.” Among AI leaders, 82 percent report that AI is already delivering meaningful business value. Among their peers, that figure drops to 62 percent. That 20-percentage-point spread might look modest in isolation, but it compounds quickly when you consider what it reflects: not just better tooling, but fundamentally different deployment philosophies. The organisations in that 11 percent are deploying agents that coordinate work across functions, route decisions without human intermediation at every step, surface enterprise-wide insights from operational data in near real-time, and flag anomalies before they escalate into incidents. In IT and engineering functions, 75 percent of AI leaders are using agents to accelerate code development versus 64 percent of their peers. In operations, where supply-chain orchestration is the primary use case, the split is 64 percent versus 55 percent. These are not marginal differences in tool adoption rates; they reflect different levels of process re-architecture. Most enterprises that have deployed AI have done so by layering models onto existing workflows (e.g. a co-pilot here, a summarisation tool there…) without redesigning the process those tools sit inside. That produces incremental gains. The organisations closing the performance gap have inverted this approach: they are redesigning the process first, then deploying agents to operate within the redesigned structure. The difference in return on AI spend between these two approaches, over a three-to-five-year horizon, is likely to be the defining competitive variable in several industries. What $186 million actually buys—and what it does not The investment figures in the KPMG data deserve scrutiny. A weighted global average of $186 million per organisation sounds substantial, but the regional variance tells a more interesting story. ASPAC leads at $245 million, the Americas at $178 million, and EMEA at $157 million. 
Within ASPAC, organisations including those in China and Hong Kong are investing at $235 million on average; within the Americas, US organisations are at $207 million. These figures represent planned spend across model licensing, compute infrastructure, professional services, integration, and the governance and risk management apparatus needed to operate AI responsibly at scale. The question is not whether $186 million is too much or too little; it is what proportion of that figure is being allocated to the operational infrastructure required to derive value from the models themselves. The survey data suggests that most organisations are still underweighting this latter category. Compute and licensing costs are visible and relatively easy to budget for. The friction costs – the engineering hours spent integrating AI outputs with legacy ERP systems, the latency introduced by retrieval-augmented generation pipelines built on top of poorly structured data, and the compliance overhead of maintaining audit trails for AI-assisted decisions in regulated industries – tend to surface late in deployment cycles and often exceed initial estimates. Vector database integration is a useful example. Many agentic workflows depend on the ability to retrieve relevant context from large, unstructured document repositories in real time. Building and maintaining the infrastructure for this – selecting between providers such as Pinecone, Weaviate, or Qdrant, embedding and indexing proprietary data, and managing refresh cycles as underlying data changes – adds meaningful engineering complexity and ongoing operational cost that rarely appears in initial AI investment proposals. When that infrastructure is absent or poorly maintained, agent performance degrades in ways that are often difficult to diagnose, as the model’s behaviour is correct relative to the context it receives, but that context is stale or incomplete. Governance as an operational variable, not a compliance exercise Perhaps the most practically useful finding in the KPMG survey is the relationship between AI maturity and risk confidence. Among organisations still in the experimentation phase, just 20 percent feel confident in their ability to manage AI-related risks. Among AI leaders, that figure rises to 49 percent. 75 percent of global leaders cite data security, privacy, and risk as ongoing concerns regardless of maturity level—but maturity changes how those concerns are operationalised. This is an important distinction for boards and risk functions that tend to frame AI governance as a constraint on deployment. The KPMG data suggests the opposite dynamic: governance frameworks do not slow AI adoption among mature organisations; they enable it. The confidence to move faster – to deploy agents into higher-stakes workflows, to expand agentic coordination across functions – correlates directly with the maturity of the governance infrastructure surrounding those agents. In practice, this means that organisations treating governance as a retrospective compliance layer are doubly disadvantaged. They are slower to deploy, because every new use case triggers a fresh governance review, and they are more exposed to operational risk, because the absence of embedded governance mechanisms means that edge cases and failure modes are discovered in production rather than in testing. Organisations that have embedded governance into the deployment pipeline itself (e.g. 
model cards, automated output monitoring, explainability tooling, and human-in-the-loop escalation paths for low-confidence decisions) are the ones operating with the confidence that allows them to scale. “Ultimately, there is no agentic future without trust and no trust without governance that keeps pace,” explains Steve Chase, Global Head of AI and Digital Innovation at KPMG International. “The survey makes clear that sustained investment in people, training and change management is what allows organisations to scale AI responsibly and capture value.” Regional divergence and what it signals for global deployment For multinationals managing AI programmes across regions, the KPMG data flags material differences in deployment velocity and organisational posture that will affect global rollout planning. ASPAC is advancing most aggressively on agent scaling; 49 percent of organisations there are scaling AI agents, compared with 46 percent in the Americas and 42 percent in EMEA. ASPAC also leads on the more complex capability of orchestrating multi-agent systems, at 33 percent. The barrier profiles also differ in ways that carry real operational implications. In both ASPAC and EMEA, 24 percent of organisations cite a lack of leadership trust and buy-in as a primary barrier to AI agent deployment. In the Americas, that figure drops to 17 percent. Agentic systems, by definition, make or initiate decisions without per-instance human approval. In organisational cultures where decision accountability is tightly concentrated at the senior level, this can generate institutional resistance that no amount of technical capability resolves. The fix is governance design; specifically, defining in advance what categories of decision an agent is authorised to make autonomously, what triggers escalation, and who carries accountability for agent-initiated outcomes. The expectation gap around human-AI collaboration is also worth noting for anyone designing agent-assisted workflows at a global scale. East Asian respondents anticipate AI agents leading projects at a rate of 42 percent. *********** respondents prefer human-directed AI at 34 percent. North American respondents lean toward peer-to-peer human-AI collaboration at 31 percent. These differences will affect how agent-assisted processes need to be designed in different regional deployments of the same underlying system, adding localisation complexity that is easy to underestimate in centralised platform planning. One data point in the KPMG survey that deserves particular attention from CFOs and boards: 74 percent of respondents say AI will remain a top investment priority even in the event of a recession. This is either a sign of genuine conviction about AI’s role in cost structure and competitive positioning, or it reflects a collective commitment that has not yet been tested against actual budget pressure. Probably both, in different proportions across different organisations. What it does indicate is that the window for organisations still in the experimentation phase is not indefinite. If the 11 percent of AI leaders continue to compound their advantage (and the KPMG data suggests the mechanisms for doing so are in place) the question for the remaining 89 percent is not whether to accelerate AI deployment, but how to do so without compounding the integration debt and governance deficits that are already constraining their returns. 
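As one concrete example of the embedded governance mechanisms described above, the sketch below routes low-confidence agent decisions to a human review queue rather than executing them automatically. The confidence threshold, decision names, and handlers are illustrative assumptions, not drawn from the KPMG survey.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentDecision:
    action: str
    confidence: float   # model-reported confidence, 0.0 to 1.0

def route_decision(decision: AgentDecision,
                   execute: Callable[[str], None],
                   escalate: Callable[[AgentDecision], None],
                   threshold: float = 0.85) -> str:
    """Execute high-confidence decisions autonomously; send the rest to a human queue."""
    if decision.confidence >= threshold:
        execute(decision.action)
        return "executed"
    escalate(decision)
    return "escalated"

if __name__ == "__main__":
    review_queue = []
    result1 = route_decision(
        AgentDecision("reorder stock for SKU-1042", confidence=0.93),
        execute=lambda action: print(f"Executing: {action}"),
        escalate=review_queue.append,
    )
    result2 = route_decision(
        AgentDecision("issue refund of $4,800", confidence=0.61),
        execute=lambda action: print(f"Executing: {action}"),
        escalate=review_queue.append,
    )
    print(result1, result2, f"{len(review_queue)} decision(s) awaiting human review")
```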
See also: Hershey applies AI across its supply chain operations Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post KPMG: Inside the AI agent playbook driving enterprise margin gains appeared first on AI News. View the full article
  18. AI is everywhere in the enterprise. The translation workflow often is not. That is the core finding of DeepL's 2026 Language AI report, "Borderless Business: Transforming Translation in the Age of AI," published on March 10. Despite broad AI investment across business functions, the report reveals that language and multilingual operations–workflows that touch sales, legal, customer support, and global expansion–remain the most under-automated part of the enterprise technology stack.
The automation gap hiding in plain sight
According to DeepL's Borderless Business report, 35% of international businesses still handle translation entirely through manual processes, while a further 33% rely on traditional automation paired with systematic human review. Only 17% have implemented next-generation AI tools–large language models or agentic AI–for multilingual operations. That means, as per the report's findings, 83% of enterprises have not transitioned to modern language AI capabilities despite investing in AI across other parts of the business. The report, which draws on survey data from business leaders across the United States, United Kingdom, France, Germany, and Japan, also found that enterprise content volume has grown 50% since 2023, yet 68% of companies still rely on workflows built for a different era. Jarek Kutylowski, CEO and founder of DeepL, put it plainly: "AI is everywhere, but efficiency is not. Most companies have deployed AI in some form, yet few achieve real productivity at scale because core workflows remain designed around people, not systems."
Why language AI is becoming infrastructure
The angle that makes this more than a translation story is where language AI is now being deployed. According to DeepL's research, global expansion is the top driver of language AI investment at 33%, followed by sales and marketing at 26%, customer support at 23%, and legal and finance at 22%. These are mission-critical business functions, not peripheral content tasks. DeepL's broader research from December 2025, surveying 5,000 senior business leaders across the same five markets, found that 54% of global executives say real-time voice translation will be essential in 2026, up from 32% today. As per that research, the *** and France are leading early adoption at 48% and 33% respectively, while Japan sits at 11%, a gap that points to significant variance in enterprise readiness across global markets. The company now serves over 200,000 business customers across 228 markets, and at the AI & Big Data Expo in London in February 2026, Scott Ivell, vice president of product marketing at DeepL, told SiliconANGLE that the company has 2,000 customers globally deploying AI agents, used for report analysis, sales targeting, and legal document review.
The sovereign AI dimension
What separates DeepL's positioning from general-purpose AI competitors is where it sits on the enterprise trust spectrum. As enterprises in regulated industries–financial services, healthcare, legal, government–accelerate AI adoption, data sovereignty is increasingly the deciding factor in platform selection. DeepL is ISO 27001, SOC 2 Type 2, and GDPR certified, and offers Bring Your Own Key encryption for enterprise customers, giving organisations the ability to withdraw data access in seconds, a control level that most large language model providers do not offer. As per DeepL's own security documentation, this means data can effectively be placed beyond anyone's reach, including DeepL itself, at the customer's discretion.
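As a rough sketch of the principle behind customer-held keys (not DeepL's actual implementation), the example below encrypts stored documents with a key the customer controls; once the customer withdraws the key, the service can no longer read what it holds. The class and method names are hypothetical.

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

class TranslationStore:
    """Service-side store that can only read documents while the customer's key is present."""
    def __init__(self):
        self._ciphertexts = {}
        self._customer_key = None   # held only while the customer grants access

    def grant_key(self, key: bytes) -> None:
        self._customer_key = key

    def withdraw_key(self) -> None:
        self._customer_key = None   # customer revokes access in seconds

    def store(self, doc_id: str, text: str) -> None:
        if self._customer_key is None:
            raise PermissionError("no customer key available")
        self._ciphertexts[doc_id] = Fernet(self._customer_key).encrypt(text.encode())

    def read(self, doc_id: str) -> str:
        if self._customer_key is None:
            raise PermissionError("customer has withdrawn the key")
        return Fernet(self._customer_key).decrypt(self._ciphertexts[doc_id]).decode()

if __name__ == "__main__":
    customer_key = Fernet.generate_key()   # generated and held by the customer
    store = TranslationStore()
    store.grant_key(customer_key)
    store.store("contract-17", "Confidential supply agreement ...")
    print(store.read("contract-17"))
    store.withdraw_key()
    try:
        store.read("contract-17")
    except PermissionError as err:
        print("Access denied:", err)       # data is now beyond the service's reach
```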
Sebastian Enderlein, CTO at DeepL, has framed 2026 as a year of execution rather than experimentation: “I believe 2026 will be the year AI stops experimenting and starts executing, at a scale we haven’t yet seen. After a cycle of pilots and proofs of concept, businesses are now ready to scale, and they’re betting big on agentic AI to do it.” DeepL Agent and the broader pivot DeepL’s product direction in 2026 reflects the same shift visible across enterprise AI broadly, from single-function tools to autonomous workflow execution. DeepL Agent, launched in general availability in November 2025, is designed to navigate business systems, execute multi-step workflows, and operate across CRM, email, calendars, and project management tools without requiring complex integrations. According to DeepL’s announcement, the agent operates with enterprise-grade security and data sovereignty built in by default, a deliberate positioning choice that targets the segment of enterprises that cannot send sensitive documents to OpenAI or Microsoft’s public cloud endpoints. DeepL’s chief scientist, Stefan Miedzianowski, has described the current moment as a transition on the technology adoption curve: “2026 will undoubtedly be the year of the agent. 2025 was the year when public awareness caught up with the science showing what agents can do, but enterprise adoption at scale will happen now. We are moving from the innovators to the early majority.” As per the Borderless Business report, 71% of business leaders say transforming workflows with AI is a priority for 2026, with expected returns across customer experience, employee productivity, and time to market. The gap between that ambition and the 17% who have actually modernised their language operations is the market DeepL is squarely targeting. DeepL is a Platinum Sponsor at TechEx Global, appearing at the AI & Big Data Expo and co-located events at Olympia London, February 3 & 4, 2027. See also: Automating complex finance workflows with multimodal AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post DeepL’s Borderless Business report reveals 83% of enterprises are still behind on language AI appeared first on AI News. View the full article
  19. Artificial intelligence is moving beyond software and further into the physical side of business. Companies in food production and logistics are starting to use data systems to support day-to-day decisions, not long-term planning. That change is visible in The Hershey Company’s latest strategy update. At its Investor Day, the company said it plans to use AI in its operations, from sourcing analytics to plant automation and fulfilment, with a focus on how the business runs behind the scenes. Hershey said it plans to apply AI to sourcing and fulfilment. This includes using data to guide how ingredients are bought and how products are distributed. In its Investor Day material, the company said it aims to build “a faster, smarter and more resilient supply chain powered by automation and AI-enabled decision making”. Supply chains in food and snack markets are under steady pressure: Costs can change quickly, demand can change by season, by market, or by product category, and retailers still expect goods to arrive on time and in the right mix. Hershey said its digital planning tools are meant to connect different parts of the business. The company said those systems are designed to reduce waste and improve inventory levels. It also said digital operational planning can connect data in the supply chain and help raise service levels. From reporting to action Part of Hershey’s update is its use of the phrase “AI-enabled decision-making.” The company said its approach will link sourcing and delivery more closely and plans to use automated fulfilment systems for custom assortments and to improve speed to market. This is a useful way to read strategy. A hard task is turning data into decisions that help operations move faster or with fewer mistakes. This is where AI is starting to play a ******* role, according to Hershey’s. The value comes from how operations are connected. AI in the supply chain and plant operations The changes also extend into manufacturing. Hershey said it will increase plant automation to improve manufacturing efficiency and use AI in more parts of its operating model. What is changing is how AI fits into those systems. Instead of sitting apart from production, it is being positioned as part of the process used to guide planning and support execution. That may help companies improve planning and respond more quickly when conditions change. In a business where input costs and consumer demand can change often, even small gains in timing can matter. Food and snack companies deal with constant swings in input costs and demand. Ingredients like cocoa and sugar are affected by weather, trade flows, and supply issues. Companies still have to keep factories running and products moving through retail channels. Hershey’s plan to use sourcing analytics is one example of how AI may be applied in that setting. By analysing supplier data and market trends, the company may improve how it buys raw materials and manages risk. The company also said it wants to better connect workers in its operations. That suggests the strategy is not only about automation. It is also about coordination in the business. Hershey said it plans to “incorporate AI in every stage of its operations,” including sourcing analytics and worker connectivity, as well as automated fulfilment and plant automation. That makes the company a useful case study for a wider change in enterprise AI. Firms are moving away from narrow pilots and toward broader use in business functions. 
In that model, AI is treated as a part of supply and delivery systems. CEO Kirk Tanner framed the plan around growth and execution, saying, “The strategy is clear. The team is ready. The next chapter of growth and leading performance starts now”. Where this may lead This kind of change is likely to spread as more companies look for ways to connect data with operational decisions. Hershey’s strategy shows how AI is starting to take a larger role in industries built on physical goods. The technology may sit in the background, but its role in daily operations is becoming harder to ignore. (Photo by Janne Simoes) See also: JPMorgan begins tracking how employees use AI at work Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Hershey applies AI across its supply chain operations appeared first on AI News. View the full article
  20. Heavy industry relies on people to inspect hazardous, dirty facilities. It’s expensive, and putting humans in these zones carries obvious safety risks. Swiss robot maker ANYbotics and software company SAP are trying to change that. ANYbotics’ four-legged autonomous robots will be connected straight into SAP’s backend enterprise resource planning software. Instead of treating a robot as a standalone asset, this turns it into a mobile data-gathering node within an industrial IoT network. This initiative shows that hardware innovation can now effectively connect with established business workflows. Underscoring that broader trend, SAP is sponsoring this year’s AI & Big Data Expo North America at the San Jose McEnery Convention Center, CA, an event that is fittingly co-located with the IoT Tech Expo and Intelligent Automation & Physical AI Summit. When equipment breaks at a chemical plant or offshore rig, it costs a fortune. People do routine inspections to catch these issues early, but humans get tired and plants are massive. Robots, on the other hand, can walk the floor constantly, carrying thermal, acoustic, and visual sensors. Hook those sensors into SAP, and a hot pump instantly generates a maintenance request without waiting for a human to report it. Cutting out the reporting lag Usually, finding a problem and logging a work order are two disconnected steps. A worker might hear a weird noise in a compressor, write it down, and type it into a computer hours later. By the time the replacement part gets approved, the machine might be wrecked. Connecting ANYbotics to SAP eliminates that delay. The robot’s onboard AI processes what it sees and hears instantly. If it hears an irregular motor frequency, it doesn’t just flash a warning on a separate screen, it uses APIs to tell the SAP asset management module directly. The system immediately checks for spare parts, figures out the cost of potential downtime, and schedules an engineer. This automates the flow of information from the floor to management. It also means machinery gets judged on hard, consistent numbers instead of a human inspector’s subjective opinion. Putting robots in heavy industry isn’t like installing software in an office—companies have to deal with unreliable infrastructure. Factories usually have awful internet connectivity due to thick concrete, metal scaffolding, and electromagnetic interference. To make this work, the setup relies on edge computing. It takes too much bandwidth to constantly stream high-def thermal video and lidar data to the cloud. So, the robots crunch most of that data locally. Onboard processors figure out the difference between a machine running normally and one that’s dangerously overheating. They only send the crucial details (i.e. the specific fault and its location) back to SAP. To handle the network issues, many early adopters build private 5G networks. This gives them the coverage they need across huge facilities where regular Wi-Fi fails. It also locks down access, keeping the robot’s data safe from interception. Of course, security is a major issue. A walking robot packed with cameras is effectively a roaming vulnerability. Companies must use zero-trust network protocols to constantly verify the robot’s identity and limit what SAP modules it can touch. If the robot gets hacked, the system has to cut its connection instantly to stop the attackers from moving laterally into the corporate network. These robots generate a massive amount of unstructured data as they walk around. 
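The hand-off described above, where onboard detection applies thresholds at the edge and only confirmed faults are passed to SAP's asset management layer, can be sketched in a few lines of Python. The endpoint URL, payload fields, and threshold values here are illustrative assumptions made for the example, not ANYbotics' or SAP's actual interfaces.

    import requests

    # Illustrative thresholds; in practice these would be tuned per asset class
    # and agreed with the maintenance team before go-live.
    TEMP_ALERT_C = 85.0          # pump casing temperature that warrants a ticket
    VIBRATION_ALERT_MM_S = 11.2  # RMS vibration velocity

    # Hypothetical middleware endpoint that creates an SAP maintenance
    # notification; not a real SAP URL.
    SAP_NOTIFICATION_URL = "https://erp.example.com/api/maintenance-notifications"

    def evaluate_reading(reading):
        """Decide at the edge whether a sensor reading justifies a notification.

        Only clear threshold breaches are forwarded; everything else stays local,
        which keeps bandwidth use low and the ERP dashboard free of noise.
        """
        breaches = []
        if reading["temperature_c"] >= TEMP_ALERT_C:
            breaches.append(f"temperature {reading['temperature_c']:.1f} C")
        if reading["vibration_mm_s"] >= VIBRATION_ALERT_MM_S:
            breaches.append(f"vibration {reading['vibration_mm_s']:.1f} mm/s")
        if not breaches:
            return None
        return {
            "equipment_id": reading["equipment_id"],
            "location": reading["location"],
            "description": "; ".join(breaches),
            "priority": "high",
        }

    def report_fault(notification):
        """Forward a confirmed fault to the ERP layer via the middleware API."""
        response = requests.post(SAP_NOTIFICATION_URL, json=notification, timeout=10)
        response.raise_for_status()

    if __name__ == "__main__":
        sample = {
            "equipment_id": "PUMP-0042",
            "location": "Hall B / Bay 3",
            "temperature_c": 91.4,
            "vibration_mm_s": 6.8,
        }
        fault = evaluate_reading(sample)
        if fault:
            report_fault(fault)

Keeping the thresholding on the robot and forwarding only a small, structured fault record is what makes the bandwidth and alert-noise constraints described above manageable.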
Turning raw audio and thermal images into the neat tables SAP requires is difficult. If companies don’t manage this right, maintenance teams will drown in alerts. A robot that is too sensitive might churn out hundreds of useless warnings a day, until the SAP dashboard is simply ignored. IT teams have to set strict rules before turning the system on. They need exact thresholds for what triggers a real maintenance ticket and what just needs to be watched. The setup usually uses middleware to translate the robot’s telemetry into SAP’s language. This software acts as a filter, throwing out the noise so only actual problems reach the ERP system. The data lake storing all this information also needs to be organised for future machine learning projects. Fixing broken machines is the short-term goal; the long-term payoff is using years of robot data to predict failures before they happen. Ensuring a successful physical AI deployment Dropping robots into a factory naturally makes people nervous. The project’s success often comes down to how human resources handles it. Workers usually look at the robots and assume layoffs are next. Management has to be clear about why the robots are there. The goal is to get people out of dangerous areas like high-voltage zones or toxic chemical sectors to reduce injuries. The robot collects the data, and the human engineer shifts to analysing that data and doing the actual repairs. This requires retraining. Workers who used to walk the perimeter now have to read SAP dashboards, manage automated tickets, and work with the robots. They have to trust the sensors, and management has to make sure operators know they can take manual control if something unexpected happens. Companies need to take the rollout slowly. Because syncing physical robots with enterprise software is complicated, large-scale rollouts should start as small, targeted pilots. The first test should be in one specific area with known hazards but rock-solid internet. This lets IT watch the data flow between the hardware and SAP in a controlled space. At this stage, the main job is making sure the data matches reality. If the robot sees one thing and SAP records another, it has to be audited and fixed daily. Once the data pipeline actually works, the company can add more robots and connect other systems, like automated parts ordering. IT chiefs have to keep checking if their private networks can handle more robots, while security teams update their defences against new threats. If companies treat these autonomous inspectors as an extension of their corporate data architecture, they get a massive amount of information about their physical assets. But pulling it off means getting the network infrastructure, the data rules, and the human element exactly right. See also: The rise of invisible IoT in enterprise operations Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post SAP and ANYbotics drive industrial adoption of physical AI appeared first on AI News. View the full article
  21. Financial institutions are learning to deploy compliant AI solutions for greater revenue growth and market advantage. For the better part of ten years, financial institutions viewed AI primarily as a mechanism for pure efficiency gains. During that era, quantitative teams programmed systems designed to discover ledger discrepancies or eliminate milliseconds from automated trading execution times. As long as the quarterly balance sheets reflected positive gains, stakeholders outside the core engineering groups rarely scrutinised the actual maths driving these returns. The arrival of generative applications and highly complex neural networks completely dismantled that widespread state of comfortable ignorance. Today, it’s not acceptable for banking executives to approve new technology rollouts based simply on promises of accurate predictive capabilities. Across Europe and North America, lawmakers are aggressively drafting legislation aimed at punishing institutions that utilise opaque algorithmic decision-making processes. Consequently, the dialogue within corporate boardrooms has narrowed intensely to focus on safe AI deployment, ethics, model oversight, and legislation specific to the financial industry. Institutions that choose to ignore this impending regulatory reality actively place their operational licenses in jeopardy. However, treating this transition purely as a compliance exercise ignores the immense commercial upside. Mastering these requirements creates a highly efficient operational pipeline where good governance functions as a massive accelerant for product delivery rather than an administrative handbrake. Commercial lending and the price of opacity The mechanics of retail and commercial lending perfectly illustrate the tangible business impact of proper algorithmic oversight. Consider a scenario where a multinational bank introduces a deep learning framework to process commercial loan applications. This automated system evaluates credit scores, market sector volatility, and historical cash flows to generate an approval decision in a matter of milliseconds. The resulting competitive edge is immediate and obvious, as the institution reduces administrative overhead while clients secure necessary liquidity exactly when they require it. However, the inherent danger of this velocity resides entirely within the training data. If the deployed model unknowingly utilises proxy variables that discriminate against a specific demographic or geographic area, the ensuing legal consequences are swift and punishing. Modern regulators demand total explainability and categorically refuse to accept the complexity of neural networks as an excuse for discriminatory outcomes. When an external auditor investigates why a regional logistics enterprise was denied funding, the bank must possess the capability to trace that exact denial directly back to the specific mathematical weights and historical data points that caused the rejection. Investing capital into ethics and oversight infrastructure is essentially how modern banks purchase speed-to-market. Constructing an ethically-sound and thoroughly vetted pipeline enables an institution to release new digital products without constantly looking over its shoulder out of fear. Guaranteeing fairness from the absolute beginning prevents nightmarish scenarios that involve delayed product rollouts and retrospective compliance audits. 
This level of operational confidence translates directly into sustained revenue generation while entirely avoiding massive regulatory penalties. Engineering unbroken information provenance Achieving this high standard of safety is impossible without adopting a brutal and uncompromising approach toward internal data maturity. Any algorithm merely reflects the information it consumes. Unfortunately, legacy banking institutions are infamous for maintaining highly fractured information architectures. It remains incredibly common to discover customer details resting on thirty-year-old mainframe systems, transaction histories floating in public cloud environments, and risk profiles gathering dust within entirely separate databases. Attempting to navigate this disjointed landscape makes achieving regulatory compliance physically impossible. To rectify this, data officers must enforce the widespread adoption of comprehensive metadata management across the entire enterprise. Implementing strict data lineage tracking represents the only viable path forward. For example, if a live production model suddenly exhibits bias against businesses owned by a particular demographic group, engineering teams require the exact capability to surgically isolate the specific dataset responsible for poisoning the results. Constructing this underlying infrastructure mandates that every single byte of ingested training data becomes cryptographically signed and tightly version-controlled. Modern enterprise platforms must maintain an unbroken chain of custody for every input, stretching all the way from a customer’s initial interaction to the final algorithmic ruling. Beyond data storage, integration issues arise when connecting advanced vector databases to these legacy systems. Vector embeddings require massive compute resources to process unstructured financial documents. If these databases are not perfectly synchronised with real-time transactional feeds, the AI risks generating severe hallucinations, presenting outdated or entirely fabricated financial advice as absolute fact. Furthermore, as we’re currently all too aware, economic environments change at a rapid pace. A model trained on interest rates from three years ago will fail spectacularly in today’s market. Technology teams refer to this specific phenomenon as concept drift. To combat this, developers must wire continuous monitoring systems directly into their live production algorithms. These specialised tools observe the model’s output in real-time, actively comparing results against baseline expectations. If the system begins to drift outside approved ethical parameters, the monitoring software automatically suspends the automated decision-making process. Exceptional predictive accuracy means absolutely nothing without real-time observability; without it, a highly tuned model becomes a corporate liability waiting to explode. Defending the mathematical perimeter Of course, implementing governance over financial algorithms introduces an entirely new category of operational headaches for CISOs. Traditional cybersecurity disciplines focus primarily on building protective walls around endpoints and corporate networks. Securing advanced AI, however, requires actively defending the actual mathematical integrity of the deployed models. This represents a complex discipline that most internal security operations centres barely understand. Adversarial attacks present a very real and present danger to modern financial institutions. 
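The continuous drift monitoring described above does not need to be elaborate to be useful. Below is a minimal sketch that compares recent model scores against a stored baseline using a population stability index; the threshold value and the idea of suspending automated decisioning by raising an exception are illustrative assumptions rather than any particular vendor's tooling.

    import numpy as np

    def population_stability_index(baseline, recent, bins=10):
        """Compare two score distributions; larger values mean more drift."""
        edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
        base_counts, _ = np.histogram(baseline, edges)
        recent_counts, _ = np.histogram(np.clip(recent, edges[0], edges[-1]), edges)
        base_pct = np.clip(base_counts / len(baseline), 1e-6, None)
        recent_pct = np.clip(recent_counts / len(recent), 1e-6, None)
        return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

    PSI_SUSPEND_THRESHOLD = 0.25  # illustrative; 0.25+ is often read as major drift

    def check_drift(baseline_scores, live_scores):
        """Suspend automated decisions when live scores drift from the baseline."""
        psi = population_stability_index(baseline_scores, live_scores)
        if psi > PSI_SUSPEND_THRESHOLD:
            raise RuntimeError(
                f"Model scores drifted (PSI={psi:.3f}); routing decisions to manual review."
            )

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        baseline = rng.beta(2, 5, size=50_000)  # scores captured at validation time
        live = rng.beta(2, 3, size=5_000)       # today's production scores
        try:
            check_drift(baseline, live)
        except RuntimeError as exc:
            print(exc)  # drift detected: automated decisioning would be paused here

Pausing automation when drift is detected, rather than letting the model carry on silently, is exactly the behaviour the observability argument above calls for.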
In a scenario known as a data poisoning attack, malicious actors subtly manipulate the external data feeds that a bank relies upon to train its internal fraud detection models. By doing so, they essentially teach the algorithm to turn a blind eye to specific and highly-lucrative types of illicit financial transfers. Consider also the threat of prompt injection, where attackers utilise natural language inputs to trick generative customer service bots into freely handing over sensitive account details. Model inversion represents another nightmare scenario for executives, occurring when outsiders repeatedly query a public-facing algorithm until they successfully reverse-engineer the highly confidential financial data buried deep within its training weights. To counter these evolving threats, security teams are forced to bury zero-trust architectures deep within the machine learning operations pipeline. Absolute device trust becomes non-negotiable. Only fully-authenticated data scientists, working exclusively on locked-down corporate endpoints, should ever possess the administrative permissions required to tweak model weights or introduce new data to the system. Before any algorithm touches live financial data, it must successfully survive rigorous adversarial testing. Internal red teams must intentionally attempt to break the algorithm’s ethical guardrails using sophisticated simulation techniques. Surviving these simulated corporate attacks serves as a mandatory prerequisite for any public deployment. Eradicating the engineering and compliance divide The highest barrier to creating safe AI is rarely the underlying software itself; rather, it is the entrenched corporate culture. For decades, a very thick wall separated software engineering departments from legal compliance teams. Developers were heavily incentivised to chase speed and rapid feature delivery. Conversely, compliance officers chased institutional safety and maximum risk mitigation. These groups typically operated from entirely different floors, used different software applications, and followed entirely different performance incentives. That division has to come down. Data scientists can no longer construct models in an isolated engineering vacuum and then carelessly toss them over the fence to the legal team for a quick blessing. Legal constraints, ethical guidelines, and strict compliance rules must dictate the exact architecture of the algorithm starting on day one. Leaders need to actively force this internal collaboration by establishing cross-functional ethics boards. Banks should pack these specific committees with lead developers, corporate counsel, risk officers, and external ethicists. When a particular business unit pitches a new automated wealth management application, this ethics board dissects the entire project. They must look past the projected profitability margins to deeply interrogate the societal impact and regulatory viability of the proposed tool. By retraining software developers to view compliance as a core design requirement rather than annoying red tape, a bank actively builds a lasting culture of responsible innovation. Managing vendor ecosystems and retaining control The enterprise technology market recognises the urgency surrounding compliance and is aggressively pumping out algorithmic governance solutions. The major cloud service providers now bake sophisticated compliance dashboards directly into their AI platforms. 
These tech giants offer banks automated audit trails, reporting templates designed to satisfy global regulators, and built-in bias-detection algorithms. Simultaneously, a smaller ecosystem of independent startups offers highly specialised governance services. These agile firms focus entirely on testing model explainability or spotting complex concept drift exactly as it happens. Purchasing these vendor solutions is highly tempting. Buying off-the-shelf software offers operational convenience and allows the enterprise to deploy governed algorithms without writing heavy auditing infrastructure from scratch. Startups are rapidly building application programming interfaces that plug directly into legacy banking systems, providing instant, third-party validation of internal models. Despite these advantages, relying entirely on outsourced governance introduces a risk of vendor lock-in. If a bank ties its entire compliance architecture to one hyperscale cloud provider, migrating those specific models later to satisfy a new local data sovereignty law becomes an expensive and multi-year nightmare. A hard line must be drawn regarding open standards and system interoperability. The specific tools tracking data lineage and auditing model behaviour have to be completely portable across different environments. The bank must retain absolute control over its compliance posture, regardless of whose physical servers actually hold the algorithm. Vendor contracts require ironclad provisions guaranteeing data portability and safe model extraction. A financial institution must always own its core intellectual property and internal governance frameworks. By fixing internal data maturity, securing the development pipeline against adversarial threats, and forcing legal and engineering teams to actually speak to one another, leaders can safely deploy modern algorithms. Treating strict compliance as the absolute foundation of engineering guarantees that AI drives secure and sustainable growth. See also: Ocorian: Family offices turn to AI for financial data insights Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Secure governance accelerates financial AI revenue growth appeared first on AI News. View the full article
  22. Glia, a customer service platform providing AI-powered interactions for the banking sector, has been named a winner in the Banking and Financial Services Category at the 2026 Artificial Intelligence Excellence Awards. The awards recognise achievements in a range of industries and use cases, spotlighting “companies and leaders moving AI beyond experimentation and into practical, accountable deployment.” Speaking on the awards, Russ Fordyce, Chief Recognition Officer at Business Intelligence Group, commented, “AI has arrived! 2026 is about execution and results. Glia stood out because its work in banking reflects where the market is headed: practical AI that solves real problems, earns trust, and delivers measurable value. The recognition highlights a team that is not participating in the AI shift, but helping define what meaningful progress looks like.” Glia’s Banking AI platform helps financial institutions navigate security and regulatory risks common in generative AI. It was chosen by a panel of AI experts and analysts as a platform that deploys AI trained specifically for banking workflows. It helps banks and credit unions automate up to 80% of all interactions, according to Glia. For the customer service and member care functions, this can free up time for other tasks, including strengthening client relationships and expanding lending and deposit portfolios; in other words, doing what humans can do and AI can’t. Dan Michaeli, CEO and co-founder of Glia, said: “The award celebrates the future of banking in a time when AI is everywhere. With consumers in every demographic now using AI to manage their lives, the pressure on financial institutions to provide instant, intelligent service has never been higher.” “Our platform is designed to help banks and credit unions lead this transition, using secure, banking-specific AI to amplify their efficiency while protecting the human connection that defines their brand,” he said. Glia has enjoyed positive business momentum, recently announcing it will be the first to contractually promise to resist AI hallucinations and guard against prompt injections for its clients’ use of the platform. As AI becomes increasingly complex, particularly in financial institutions, Glia’s focus on AI safety provides a model that banks and credit unions might rely on to help them use AI effectively and securely. (Image source: “Space Invaders does cones and safety barriers” by Gene Hunt, licensed under CC BY 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Glia wins Excellence Award for safer AI in banking appeared first on AI News. View the full article
  23. When Pew Research Centre analysed 68,879 Google searches in March 2025, one finding stood out: users who encountered an AI-generated summary clicked on a traditional result just 8% of the time. Those who didn’t see a summary clicked nearly twice as often, at 15%. A quarter of users who saw an AI summary ended their session without clicking on anything at all. That gap tells you something important about where brand discovery is heading. With generative AI platforms like ChatGPT now pulling in 5.72 billion monthly visits (according to SimilarWeb data from January 2026), brands already know AI search matters. The more pressing question is whether your content is structured for the two distinct ways AI retrieves and presents information. SimilarWeb’s framework for AEO vs GEO draws a useful line between these approaches, and it’s one worth understanding before your competitors do. Where your clicks went and why they’re not coming back People are searching more than ever. They’re just not clicking. BrightEdge reported in May 2025 that Google search impressions climbed 49% in the year following the launch of AI Overviews. Over that same period, click-throughs dropped nearly 30%. Seer Interactive’s September 2025 study, covering 25.1 million organic impressions in 42 organisations, found the decline was even steeper for queries triggering AI Overviews specifically:
- Organic CTR fell 61%, from 1.76% to 0.61%
- Paid CTR dropped 68%, from 19.7% to 6.34%
- Even queries without AI Overviews saw organic CTR decline 41% year-over-year
- By March 2025, one in five Google searches produced an AI summary (Pew Research Centre)
Gartner predicted in early 2024 that traditional search volume would fall 25% by 2026. The exact figure remains debatable, but the direction is clear. Impressions are up. Engagement with links is collapsing. The answer itself has become the destination, and the brands inside that answer are the ones getting noticed. Getting cited by the machine This is where the AEO vs GEO distinction earns its weight. Answer Engine Optimisation (AEO) is about structuring content so AI systems can extract a clean, direct answer. Think featured snippets, People Also Ask boxes, voice assistant results. It’s tactical: question-based headings, answer-first paragraphs of 40 to 80 words, FAQ and HowTo schema markup. If someone asks a specific question and your content gives the clearest answer, AEO is what gets you cited at snippet level. Generative Engine Optimisation (GEO) operates at a broader level. It’s about making your brand a trusted source for RAG-powered platforms (ChatGPT, Perplexity, Gemini) that synthesise answers from multiple sources. GEO involves semantic content clusters, entity-rich data, multimodal assets and building domain authority through co-mentions in third-party sites, directories and publications. Here’s the part most brands are missing: you can win the featured snippet and still be completely absent from a ChatGPT response. McKinsey’s AI Discovery Survey (August 2025, surveying 1,927 consumers) found that a brand’s own website accounts for only 5 to 10% of the sources AI search platforms reference. The other 90% comes from publishers, user-generated content, affiliate sites and review platforms. So your AEO might be flawless on Google, while your GEO presence in the wider web remains thin. Worth noting: BrightEdge found that 89% of AI Overview citations come from results ranked beyond position 100. 
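On the AEO side, the FAQ schema markup mentioned above is simple to generate. The sketch below builds schema.org FAQPage markup as JSON-LD using Python; the questions and answers are placeholders, and a real page would embed the output in a script element with the type application/ld+json.

    import json

    # Placeholder Q&A pairs; real pages would use the questions customers
    # actually ask, each answered in a direct, answer-first paragraph.
    faq_items = [
        {
            "question": "What is answer engine optimisation (AEO)?",
            "answer": "AEO structures content so AI systems can lift a clean, "
                      "direct answer, for example via FAQ schema and "
                      "question-based headings.",
        },
        {
            "question": "How is GEO different from AEO?",
            "answer": "GEO builds broader authority signals so generative "
                      "platforms treat a brand as a trusted source across "
                      "synthesised answers.",
        },
    ]

    def build_faq_jsonld(items):
        """Return schema.org FAQPage markup as a JSON-LD string."""
        doc = {
            "@context": "https://schema.org",
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": item["question"],
                    "acceptedAnswer": {"@type": "Answer", "text": item["answer"]},
                }
                for item in items
            ],
        }
        return json.dumps(doc, indent=2)

    if __name__ == "__main__":
        print(build_faq_jsonld(faq_items))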
Traditional ranking position is becoming less relevant than content structure and authority signals. The brands that get cited will be the brands that get chosen The data on citation advantage is hard to ignore. Seer Interactive’s study found that brands cited in AI Overviews earn 35% more organic clicks and 91% more paid clicks compared to those left out of the summary entirely. The investment case is building, too. According to Conductor research reported by MarTech in February 2026, 32% of digital marketing leaders now rank GEO as their top priority for the year, and 97% report positive results from their efforts so far. An average of 12% of 2025 digital budgets went to GEO initiatives. Perhaps more telling, 93% of leaders are building these abilities in-house, treating AI search visibility as too strategically important to outsource. High-maturity organisations are already spending nearly twice as much on GEO as their lower-maturity peers. That gap will be difficult to close once the default answers are set. If 44% of consumers already prefer AI-powered search as their primary source of insight (McKinsey), and your brand doesn’t appear in those AI-generated responses, where does that leave you in the buying process? The new front door is already open AEO and GEO are distinct in their mechanics, but they serve the same purpose: making your brand the one AI systems trust, retrieve and cite. The practical starting point is straightforward. Audit your current AI visibility by prompting the major platforms with questions your customers ask. Identify where you appear, where you don’t and what sources are being cited instead. Then layer AEO (structured answers, schema, question-led content) with GEO (semantic depth, third-party co-mentions, multimodal assets) on top of your existing SEO foundations. The stakes are rising. As generative AI moves beyond summaries and toward agentic systems that act on users’ behalf (booking, purchasing, recommending), the brands AI cites will increasingly be the brands AI chooses. If your content strategy still measures success by clicks alone, what happens when the click becomes optional? (Image source: Bazoom) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post How AEO vs GEO reshapes AI-driven brand discovery in 2026 appeared first on AI News. View the full article
  24. As artificial intelligence becomes a driving force in financial prediction, the reliability of its forecasting tools faces increasing scrutiny. Many traders question whether claims of high accuracy translate into consistent results under live market conditions. Understanding how these AI systems are evaluated reveals important distinctions between performance in theory and practice. Few financial domains are as dependent on accurate prediction as forex trading, where slight changes in exchange rates can have consequences for participants. The surge of AI powered price forecasting tools has brought new abilities, but it has also raised questions about what constitutes meaningful accuracy. Readers in this rapidly evolving landscape of predictive technology seek clarity on how well these tools perform and which factors should inform their assessment of forecasts in live environments. Scrutinising claims of accuracy in predictive tools Accuracy claims regarding AI forecasting in currency markets are often presented optimistically, particularly when based on controlled demonstrations. These scenarios typically reflect historical data or optimised backtests, which can differ sharply from the volatility and unpredictability seen in live trading environments. The central issue lies in the gap between demonstration results and how models react to real-time market changes. While technical accuracy metrics are frequently referenced, their practical meaning for financial decision-making can remain ambiguous. When evaluating the accuracy of AI powered price forecasting tools, it is crucial to clarify what “accuracy” represents in this context. For some, accuracy might mean correctly predicting the direction of currency moves, while for others, it could relate to the exact magnitude or timing of price changes. The complexity of forex, with its fast moving variables and interdependencies, underscores why simplistic accuracy scores rarely provide the full picture. Professional users often demand both statistical rigor and domain expertise to interpret results effectively. Understanding the mechanics behind AI market predictions AI powered price forecasting tools commonly employ machine learning models specialised for time series prediction. These tools typically use advanced architectures like recurrent neural networks, convolutional neural networks, or transformer-based models designed to capture sequential patterns in financial data. They rely on inputs ranging from historical pricing and trading volumes to macroeconomic indicators and alternative data sources, including geopolitical events or sentiment analysis from news and social media. There are varied approaches in predictive modeling, with some systems focusing on point predictions that offer specific future prices, while others generate probabilistic forecasts reflecting outcome likelihoods in confidence intervals. The distinction affects how users interpret and trust model outputs. Although probabilistic methods can better accommodate market uncertainty, understanding distributional forecast accuracy and related concepts requires additional expertise. This complexity highlights why headline accuracy figures alone are not sufficient for assessing a system’s practical value. Evaluating model performance with robust accuracy metrics Practitioners typically assess AI powered price forecasting tools using a range of evaluation metrics, each shedding light on different facets of prediction quality. 
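The most common of these metrics, elaborated in the next paragraph, are straightforward to compute once out-of-sample forecasts sit alongside realised prices. The sketch below assumes plain NumPy arrays of previous closes, realised closes, and model forecasts; the sample figures are invented purely for illustration.

    import numpy as np

    def directional_accuracy(prev, actual, forecast):
        """Share of periods where the forecast picked the right direction of move."""
        return float(np.mean(np.sign(forecast - prev) == np.sign(actual - prev)))

    def mean_absolute_error(actual, forecast):
        return float(np.mean(np.abs(actual - forecast)))

    def root_mean_squared_error(actual, forecast):
        return float(np.sqrt(np.mean((actual - forecast) ** 2)))

    if __name__ == "__main__":
        # Invented EUR/USD closes for an out-of-sample window (illustrative only).
        prev_close = np.array([1.0830, 1.0842, 1.0851, 1.0847, 1.0860])
        actual_close = np.array([1.0842, 1.0851, 1.0847, 1.0860, 1.0855])
        model_forecast = np.array([1.0845, 1.0848, 1.0852, 1.0856, 1.0862])

        print("Directional accuracy:",
              directional_accuracy(prev_close, actual_close, model_forecast))
        print("MAE:", mean_absolute_error(actual_close, model_forecast))
        print("RMSE:", root_mean_squared_error(actual_close, model_forecast))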
Directional accuracy measures whether forecasts correctly predict upward or downward movement of currency pairs, while metrics like mean absolute error or root mean squared error focus on the magnitude of prediction errors. Calibration, which reflects how well predicted probabilities align with actual market occurrences, adds another important dimension. Meaningful assessment requires benchmarks and rigorous out-of-sample testing, because models effective on past data may not remain reliable as markets change. Overfitting, where models treat noise as signal, can cause high-scoring tools to lose effectiveness once deployed. Similarly, regime shifts and nonstationarity in forex can quickly undermine predictive accuracy, highlighting the importance of ongoing monitoring and validation. It is recognised that participants benefit from understanding both the strengths and limitations of these tools before integrating them into operational processes. Navigating real-world frictions and effective risk controls When AI powered price forecasting tools are integrated into live strategies, various real-world frictions become significant. Issues like latency (the delay between signal and execution), slippage, spread widening, and inconsistent execution quality may degrade results observed in backtesting. In addition, data quality concerns and the risk of look-ahead bias present ongoing challenges, particularly if datasets inadvertently include future information unavailable at decision time. As algorithmic signals become more prevalent, financial markets may adapt, reducing the effectiveness of commonly used forecasting techniques. Effective deployment requires a blend of quantitative insight and robust risk management. Rather than relying solely on single-point forecasts, applying confidence intervals and scenario analysis can yield greater operational stability. Position sizing rules and drawdown controls, combined with continuous stress testing during volatile periods, help mitigate the effects of erroneous predictions. Ongoing review and adaptation, grounded in an understanding of model limitations and maintained with human oversight, are essential for the sustainable application of AI powered price forecasting tools in currency markets. (Image source: Bazoom) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Assessing AI powered price forecasting tools in currency markets appeared first on AI News. View the full article
  25. A developer of API and AI connectivity technologies, Kong, has announced that Bruce Felt has joined the company as CFO. Felt is a seasoned finance leader who brings experience guiding enterprise software companies through their growth phases, including several IPOs, acquisitions, and global expansions. Mr. Felt has led finance organisations from early-stage environments to significant global enterprises. Over his career, he’s taken three companies public as CFO: FullTime Software, SuccessFactors, and Domo. At Domo, a cloud-based analytics and business intelligence software company, he helped scale the business and led the company to its public offering. Bruce Felt, new CFO at Kong. Source: AZK Media Augusto Marietti, chief executive officer and co-founder of Kong, said: “Bruce has repeatedly helped high-growth software companies scale through transformative periods, pairing operational discipline with strategic insight and several crossings into public markets. As Kong continues to expand its leadership in API and AI connectivity, his experience building durable, globally scaled organisations will be a unique asset in our next journey.” “He brings the right mix of operational rigor and public company experience, while keeping a growth-oriented profile. We’re extremely excited to welcome Bruce onto the Kong team, and I look forward to partnering with and learning from him.” Bruce Felt serves on the boards of directors of several organisations, including Veradigm, Human Interest, Betterworks, and Cambium Networks. He has held board and audit committee leadership roles at public and private companies. (Image source: Pixabay under licence.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Kong names Bruce Felt as chief financial officer appeared first on AI News. View the full article
