Everything posted by ChatGPT
-
Banking house JPMorgan Chase is asking its roughly 65,000 engineers and technologists to use AI tools as part of their regular workflow. Business Insider reported that managers are tracking how often staff use these tools, and that use may also influence performance reviews.

The report states employees are encouraged to use tools like ChatGPT and Claude Code when writing code, reviewing documents, or handling routine tasks. Internal systems then classify workers based on their level of use: some are labelled “light users,” while others fall into a “heavy user” category. JPMorgan has been using AI in fraud detection and risk analysis for some time. What stands out here is not the technology itself, but how it is being woven into day-to-day expectations for staff. According to internal materials cited by Business Insider, managers are paying close attention to how employees use AI tools.

JPMorgan shows AI adoption in banks

Many companies have spent the past two years rolling out AI tools across departments. In most cases, adoption has been uneven: some teams experiment heavily, while others stick to existing workflows. JPMorgan is treating AI as a standard part of the job, which creates a more uniform level of adoption across teams.

In the past, performance reviews focused on output and accuracy. Now, they may also include how effectively employees use AI tools to reach those results. That raises a practical question for large organisations: if AI can reduce the time needed for certain tasks, should employees be expected to produce more work in the same amount of time?

Keeping pace with internal change

By tracking use, the bank may be trying to avoid a familiar problem in enterprise software rollouts: tools are deployed, but adoption is slow, limiting their impact. Making AI part of performance reviews creates a stronger incentive to engage with the technology. It also suggests that AI literacy is becoming a baseline skill, similar to how spreadsheets or coding tools became standard over time.

New challenges include employees feeling pressure to use AI even in cases where it does not clearly improve the outcome. There is also the matter of how to measure “good” use, as opposed to simply frequent use.

JPMorgan’s AI risks and efficiency gains

Banks operate in a regulated environment, where introducing AI into more workflows increases the need for oversight. Tools like ChatGPT and Claude Code can help summarise information or generate drafts, but they can also produce incorrect or incomplete results. That means employees still need to verify outputs before using them in decision-making or client-facing work.

JPMorgan has developed internal controls for AI systems in areas like trading and risk. Expanding use across a broader group of employees may require similar safeguards, leaving the bank wanting to improve efficiency while also needing to ensure that heavier AI use does not introduce new risks.

Other financial institutions are likely watching closely. If tying AI use to performance leads to measurable gains in productivity, similar models may spread across the sector. The bank’s approach may reshape how companies hire and train employees, and skills like prompt writing and output checking could become part of standard job requirements. JPMorgan’s approach suggests that this change is already underway, at least in banking.
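To make the classification idea concrete, here is a minimal sketch of how a usage-tiering system of the kind described above might bucket staff. The thresholds, field names, and tier labels are hypothetical; the reporting does not describe JPMorgan's actual criteria.

```python
# Illustrative sketch only: thresholds and field names are invented,
# not taken from JPMorgan's internal systems.
from dataclasses import dataclass

@dataclass
class UsageRecord:
    employee_id: str
    ai_sessions_last_30d: int  # e.g. ChatGPT or Claude Code sessions

def usage_tier(record: UsageRecord) -> str:
    """Bucket an employee into a usage tier by session count."""
    if record.ai_sessions_last_30d == 0:
        return "non-user"
    if record.ai_sessions_last_30d < 20:   # hypothetical cut-off
        return "light user"
    return "heavy user"

print(usage_tier(UsageRecord("e123", ai_sessions_last_30d=42)))  # heavy user
```

A real system would also face the harder question the article raises: measuring "good" use rather than simply frequent use, which a session counter cannot capture.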
(Photo by IKECHUKWU JULIUS UGWU)

See also: RPA matters, but AI changes how automation works

The post JPMorgan begins tracking how employees use AI at work appeared first on AI News. View the full article
-
RPA (robotic process automation) is a practical and proven way to reduce manual work in business processes without the need for AI systems. By using software bots to follow fixed rules, companies can automate repetitive tasks like data entry and invoice processing, and to a certain extent, report generation. Adoption grew quickly in many sectors, especially in finance, operations, and customer support.

In recent years the technology has matured, but business processes have grown more complex. Many systems handle unstructured data, like messages and documents. Rule-based automation struggles with these inputs, since it depends on predefined steps and structured formats. RPA works best in stable environments where processes do not change often. When conditions change or inputs vary, bots can fail or need updating, adding maintenance overhead and reducing the value of automation over time.

Gartner has pointed to more adaptive automation systems on the market, designed to handle variation and uncertainty. These combine automation with machine learning or language models, allowing them to process a broader set of inputs.

From RPA rules to AI-driven automation

AI has changed how companies think about automation. Systems from vendors already known in the RPA space, like Appian and Blue Prism, can now interpret context and adjust their actions, which is especially relevant for tasks that involve text or images. Large language models can summarise documents, extract important details, and respond to queries in natural language, opening up automation in areas that were previously difficult to manage. McKinsey & Company research suggests generative AI could automate decision-making and communication tasks, not just routine data handling.

The change does not replace automation, but rather modifies it. Rather than building chains of rules, businesses could use AI to handle variations in input media. Automation becomes more flexible, with systems able to adjust to different inputs without reconfiguration. That’s the theory. In practice, AI systems can produce inconsistent outputs, and their behaviour is not always predictable. Firms can combine AI with existing automation tools, using each where it fits best. Getting the balance right – intelligent automation – is a hot topic at industry events and in the RPA and AI trade press.

Where RPA still fits with AI

Despite these changes, RPA remains relevant in many settings. Tasks that involve structured data and stable workflows still benefit from rule-based automation. Common examples include payroll processing and compliance checks, as well as system integrations. In these circumstances, RPA’s predictability can be an advantage. Bots follow defined steps and produce consistent results, which is useful in regulated environments. Financial reporting and auditing processes, for example, frequently require strict control and traceability.

Rather than being replaced, RPA is often used with AI. Automation workflows may begin with AI systems that interpret input, then pass structured data to RPA bots for execution (a pattern sketched in the code below). The combination allows companies to extend automation without discarding existing systems.

Blue Prism and the shift toward intelligent automation

Vendors that built their business around RPA are adapting to this change. Blue Prism, now part of SS&C Technologies, has expanded its focus to include what it describes as intelligent automation. This approach combines RPA with AI tools capable of processing more complex inputs.
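A minimal sketch of that hand-off, assuming a hypothetical invoice workflow: extract_invoice_fields stands in for an AI parsing step, while the second function behaves like a deterministic, rule-based RPA bot. None of this reflects a specific vendor's API.

```python
# Hybrid pattern: an AI step interprets unstructured input, then hands
# structured data to a fixed-rules bot for predictable execution.
def extract_invoice_fields(raw_text: str) -> dict:
    """Stand-in for an AI/OCR step that turns free text into fields."""
    # In practice this would call a language model; here we fake it.
    fields = {}
    for line in raw_text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip().lower()] = value.strip()
    return fields

def rpa_post_invoice(fields: dict) -> str:
    """Deterministic RPA step: fixed rules, consistent results."""
    required = {"invoice_no", "amount", "supplier"}
    missing = required - fields.keys()
    if missing:
        return f"routed to human review (missing {sorted(missing)})"
    return f"posted invoice {fields['invoice_no']} for {fields['amount']}"

raw = "Invoice_no: 4711\nSupplier: Acme Ltd\nAmount: 1,200.00 EUR"
print(rpa_post_invoice(extract_invoice_fields(raw)))
```

The design choice mirrors the article's point: the unpredictable AI step is confined to interpretation, while execution stays with auditable, rule-based logic.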
Its platforms combine automation with capabilities like document processing and decision support, frequently through integrations with AI tools. The move toward AI-enabled automation also changes how these platforms get used: workflows bring together data sources, decision points, and execution steps in a single process.

A gradual transition, not a full replacement

Many organisations continue to rely on existing RPA systems, especially where processes are stable and well understood. Replacing these systems would take time and money, which may not always be justified. Instead, the transformation is gradual. Companies add AI capabilities to extend what automation can handle, while keeping RPA in place for tasks where it still works well. This may change how automation is designed and deployed over time, but rule-based systems will remain necessary.

See also: AI agents enter banking roles at Bank of America

The post RPA matters, but AI changes how automation works appeared first on AI News. View the full article
-
To gain financial data insights, the majority of family offices now turn to AI, according to new research from Ocorian. The global study reveals 86 percent of these private wealth groups are using AI to improve their daily operations and data analysis. Representing a combined wealth of $119.37 billion, these organisations want machine learning to modernise their workflows. The technology offers practical benefits for institutions handling complex portfolios, particularly in detecting anomalies, streamlining reporting, and navigating strict regulatory frameworks.

Securing financial data insights via AI and system governance

Implementing these tools requires careful alignment with existing enterprise architectures. Financial institutions frequently rely on major cloud ecosystems, such as Microsoft Azure or Google Cloud, to provide the necessary computing power and security protocols for advanced data processing. By using these platforms, operations teams can deploy machine learning models that identify potential fraud patterns or compliance breaches much faster than manual reviews allow.

While 26 percent of surveyed wealth executives strongly agree that AI will reshape administration and boost performance within the next year, 72 percent expect the broader effects to materialise over a two to five-year horizon. This cautious timeline reflects the reality of integrating complex algorithms into highly regulated environments. Integrating new systems without disrupting daily client services presents a major challenge, and legacy data architectures often require heavy re-engineering before they can fully support predictive analytics.

Michael Harman, Commercial Director for the UK and Channel Islands at Ocorian, said: “Family offices are gradually adopting AI and technology as part of their operations and are particularly using it for data insights … there is a realisation that it will have a major impact and family offices need to start exploring the sector and will need support in making the transition.”

Balancing operational upgrades with capital exposure

Despite high operational adoption rates, direct capital allocation into the AI sector remains low. Only seven percent of respondents across 16 territories – including the UK, US, UAE, and Singapore – are currently seeking direct investment opportunities in such technology firms. This hesitation highlights a preference for using proven enterprise solutions rather than absorbing the venture-style risks associated with emerging startups. Leaders are focused on immediate operational stability and verifiable returns on investment.

However, this dynamic is likely to change rapidly over the next three years, as 74 percent of these organisations expect to increase their investments in digital assets. Within that group, 20 percent plan to increase their financial commitment to the sector dramatically.

Outsourcing the technical burden to established service providers allows institutions to benefit from enhanced fraud detection and compliance monitoring without directly managing the algorithmic infrastructure. Success will depend on establishing clean data pipelines and ensuring cross-functional teams understand how to interpret algorithmic outputs for risk assessment.
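As a concrete illustration of the anomaly screening mentioned above, the sketch below runs scikit-learn's IsolationForest on invented transaction features; real deployments would use far richer data and sit inside a governance framework.

```python
# Toy anomaly screen: flag transactions that don't fit learned patterns.
# Features and data are synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: transaction amount, hour of day
normal = np.column_stack([rng.normal(500, 100, 500), rng.normal(14, 3, 500)])
odd = np.array([[25_000, 3]])  # a large transfer at 3am

model = IsolationForest(random_state=0).fit(normal)
print(model.predict(odd))  # -1 flags an anomaly for human review
```

The point the survey makes holds even at toy scale: a fitted model surfaces the outlier in milliseconds, where a manual review would have to scan every row.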
By prioritising secure and scalable cloud platforms, and focusing on specific operational pain points like regulatory reporting, financial leaders can use these AI capabilities to bolster their data insights while maintaining the oversight required in modern wealth management.

See also: AI agents enter banking roles at Bank of America

The post Ocorian: Family offices turn to AI for financial data insights appeared first on AI News. View the full article
-
AI agents are starting to take on a more direct role in how financial advice is delivered, as large banks move beyond internal tools and into systems that support real client interactions. Bank of America is now deploying an internal AI-powered advisory platform to around 1,000 of its financial advisors, according to Banking Dive. The move is one of the clearer early examples of how AI is being used in core banking roles rather than back-office tasks or limited pilots. It also reflects a broader shift across the industry, where AI is moving from basic assistance to systems that can support decision-making in real time.

The platform is built on Salesforce’s Agentforce, which enables the creation of AI agents to handle tasks. It is designed to help advisors handle client queries, prepare recommendations, and manage daily workflows. According to Banking Dive, the system is part of a wider push among major banks to test how AI agents can work alongside human staff rather than operate as standalone tools.

Bank of America has been expanding its use of AI across the business. The bank has said its virtual assistant Erica handles work equivalent to about 11,000 employees, while all 18,000 of its software developers use AI coding tools that have improved productivity by around 20%, according to Banking Dive. These figures give a sense of how widely AI is already embedded across different parts of the organisation.

AI agents move closer to financial decision-making

This approach differs from earlier deployments of AI in banking, which focused mainly on chatbots or internal productivity tools. In those cases, AI was used to answer simple questions or automate routine tasks. The newer systems are built to handle more complex work, including analysing client data and suggesting next steps. That shift brings AI closer to the core of financial decision-making: instead of acting as a support layer, the technology is now embedded within the advisory process itself.

Other large banks are moving in a similar direction. The same Banking Dive report notes that firms such as JPMorgan, Wells Fargo, and Goldman Sachs are also testing AI tools aimed at improving productivity and helping staff in client-facing roles, though these efforts vary and are not always focused on advisor-specific AI agent systems. While each bank is taking a different approach, the common goal is to increase output without expanding headcount at the same rate.

Early data suggest these tools can improve efficiency, though results vary. In some cases, banks report gains in how quickly advisors can access information or prepare for meetings, based on industry reporting and early deployment feedback cited by Banking Dive. At the same time, there are ongoing concerns about accuracy and oversight, especially when AI systems are used to suggest financial decisions.

A wider pattern is emerging across financial services. Many institutions are investing in AI, but they are doing so in a controlled way, often limiting deployment to specific teams or use cases. The goal is to test how the technology performs in real settings before expanding further. Some analysts remain cautious about how quickly AI is changing banking. Wells Fargo analyst Mike Mayo wrote that recent developments have yet to produce major new products, describing the current phase as “a little boring from a product standpoint,” according to Banking Dive.
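The hybrid pattern this points to, in which AI drafts and a human signs off, can be sketched in a few lines. Everything here (the Recommendation type, the draft step, the sign-off gate) is invented for illustration and does not reflect Agentforce's actual API.

```python
# Human-in-the-loop sketch: nothing reaches a client without advisor sign-off.
from dataclasses import dataclass

@dataclass
class Recommendation:
    client_id: str
    text: str
    approved: bool = False

def draft_recommendation(client_id: str) -> Recommendation:
    # Stand-in for the AI step that analyses client data.
    return Recommendation(client_id, "Rebalance 5% from cash into bonds")

def deliver(rec: Recommendation, advisor_signed_off: bool) -> str:
    # The gate: the agent proposes, the human disposes.
    if not advisor_signed_off:
        return "held for advisor review"
    rec.approved = True
    return f"sent to client {rec.client_id}: {rec.text}"

rec = draft_recommendation("c42")
print(deliver(rec, advisor_signed_off=False))  # held for advisor review
```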
Human oversight remains central

Bank of America’s rollout stands out because of its scale and placement. Financial advisors sit at the centre of the bank’s relationship with clients, particularly in wealth management. Introducing AI into that role suggests a growing level of trust in the technology. It also shows a willingness to let it influence how advice is formed and delivered.

At the same time, the system is not replacing advisors; it is meant to work alongside them. Human monitoring remains an essential part of the process, particularly when dealing with complex financial decisions or high-value clients. Industry executives also acknowledge that AI is unlikely to completely replace expert roles, particularly in complex financial workflows where context and judgement still matter.

This hybrid model is becoming more common across the sector. Rather than removing people from the loop, banks are trying to combine human judgement with machine-generated insights. Some firms are starting to treat AI as part of the workforce rather than a tool, with staff expected to work alongside these systems on day-to-day tasks.

Progress comes with limits and trade-offs

There are also practical challenges. AI systems depend on clean, structured data, which is not always easy to achieve in large organisations with legacy systems. Integration with existing tools can take time, and staff may need training to use new systems effectively.

Regulation adds another layer of complexity. Financial institutions must ensure that AI-driven recommendations meet compliance standards, and they must be able to explain them if questioned by regulators. This requirement may limit the amount of autonomy provided to AI systems, particularly in areas like lending or investment advice.

Despite these constraints, banks are starting to move beyond experimentation and into operational use, even if progress remains uneven. Some estimates imply that up to one-third of banking jobs, or parts of those roles, could eventually be handled by AI, though timelines remain unclear.

The introduction of AI agents into advisory roles also raises questions about how the job itself may change. If systems can handle more of the analytical work, advisors may spend more time on client relationships and less on preparation. Over time, this could shift the skills required for the role. At the same time, reliance on AI introduces new risks: errors in data or model output could affect recommendations, and overreliance on automated systems may reduce critical review by human staff. These issues are still being studied as deployments expand.

What sets the current phase apart is not just the technology, but where it is being used. Moving AI into frontline roles suggests that banks regard it as a tool for shaping outcomes rather than simply improving efficiency behind the scenes. Bank of America’s rollout offers a view into how that transition may play out: a large institution testing how far AI can be integrated into everyday work, while still keeping human oversight in place. As more banks follow a similar path, the focus is likely to shift from whether AI should be used to how it should be managed once it becomes part of core operations.

See also: Visa prepares payment systems for AI agent-initiated transactions
The post AI agents enter banking roles at Bank of America appeared first on AI News. View the full article
-
Finance leaders are automating complex workflows by adopting new multimodal AI frameworks. Extracting text from unstructured documents presents a frequent headache for developers. Historically, standard optical character recognition systems failed to accurately digitise complex layouts, frequently converting multi-column files, pictures, and layered datasets into an unreadable mess of plain text.

The multimodal input processing abilities of large language models allow for reliable document understanding. Platforms such as LlamaParse connect older text recognition methods with vision-based parsing. Specialised tools aid language models by adding initial data preparation and tailored reading commands, helping structure complex elements such as large tables. Within standard testing environments, this approach demonstrates roughly a 13-15 percent improvement compared to processing raw documents directly.

Brokerage statements represent a tough file reading test. These records contain dense financial jargon, complex nested tables, and dynamic layouts. To clarify fiscal standing for clients, financial institutions require a workflow that reads the document, extracts the tables, and explains the data through a language model, demonstrating AI driving risk mitigation and operational efficiency in finance. Given these advanced reasoning and multimodal input needs, Gemini 3.1 Pro is arguably the most effective underlying model currently available. The platform pairs a massive context window with native spatial layout comprehension. Merging multimodal analysis with targeted data intake ensures applications receive structured context rather than flattened text.

Building scalable multimodal AI pipelines for finance workflows

Successful implementation requires specific architectural choices to balance accuracy and cost. The workflow operates in four stages: submitting a PDF to the engine, parsing the document to emit an event, running text and table extraction concurrently to minimise latency, and generating a human-readable summary (sketched in the code below). Using a two-model architecture is a deliberate design choice: Gemini 3.1 Pro manages complex layout comprehension, while Gemini 3 Flash handles the final summarisation. Because both extraction steps listen for the same event, they run concurrently. This cuts overall pipeline latency and makes the architecture naturally scalable as teams add more extraction tasks. Designing an architecture around event-driven statefulness allows engineers to build systems that are fast and resilient.

Integrating these solutions involves aligning with ecosystems like LlamaCloud and Google’s GenAI SDK to establish connections. However, processing pipelines rely entirely on the data fed into them. Anyone overseeing AI deployments for workflows as sensitive as finance must maintain governance protocols: models occasionally generate errors and should not be relied upon for professional advice, and operators must double-check outputs before relying on them in production.
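A minimal sketch of the four-stage flow, using asyncio stand-ins rather than real LlamaParse or Gemini SDK calls; the function bodies are placeholders, and the model split simply follows the article's description.

```python
# Event-driven pipeline sketch: parse once, extract concurrently, summarise.
import asyncio

async def parse_pdf(path: str) -> dict:
    # Stage 1-2: submit and parse; in the stack described above this
    # would be a LlamaParse call that emits a "parsed" event.
    return {"doc": path, "pages": ["...parsed layout..."]}

async def extract_text(parsed: dict) -> str:
    return "text blocks"          # placeholder for the heavy-layout model

async def extract_tables(parsed: dict) -> str:
    return "holdings table"       # placeholder for table extraction

async def summarise(text: str, tables: str) -> str:
    # Stage 4: the article assigns this to a lighter model (Gemini 3 Flash).
    return f"summary of {text} + {tables}"

async def pipeline(path: str) -> str:
    parsed = await parse_pdf(path)
    # Stage 3: both extractors react to the same parse result, so they
    # run concurrently, which is where the latency saving comes from.
    text, tables = await asyncio.gather(
        extract_text(parsed), extract_tables(parsed)
    )
    return await summarise(text, tables)

print(asyncio.run(pipeline("statement.pdf")))
```

Adding a third extraction task is one more coroutine in the gather call, which is what makes the architecture naturally scalable.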
See also: Palantir AI to support UK finance operations

The post Automating complex finance workflows with multimodal AI appeared first on AI News. View the full article
-
[AI]Securing AI systems under today’s and tomorrow’s conditions
Evidence cited in an eBook titled “AI Quantum Resilience”, published by Utimaco, shows organisations consider security risks the leading barrier to effective adoption of AI on the data they hold. AI’s value depends on data amassed by an organisation. However, there are security risks to building models and training them on that data. These risks are in addition to better-publicised threats to intellectual property that exist around the point of inference (prompt engineering, for example). The eBook’s authors state that organisations need to manage threats throughout their AI development and implementation processes. At the same time, companies can and should prepare to change their security protocols, changes that will become mandatory if quantum computing-powered decryption tools become easily available to bad actors.

Utimaco lists three areas under threat:

- Training data can be manipulated by bad actors, degrading model outputs in ways that are hard to detect
- Models can be extracted or copied, eroding intellectual property rights
- Sensitive data used during training or inference can be exposed

Current public key cryptography will become vulnerable in the next ten years, the report’s authors attest; a period in which capable quantum systems may emerge. Regardless of the timescale, it’s thought that better-organised groups currently collect encrypted data and store it to decrypt when or if quantum facilities become available. Any dataset with long-term sensitivity, including model training data, financial records, or intellectual property, may therefore require protection against future decryption, Utimaco says.

A migration to quantum-resistant cryptography will affect protocols, key management, system interoperability, and performance, so any migration is likely to take several years. The report’s authors suggest what they term ‘crypto-agility’, which they define as the ability to change cryptographic algorithms without redesigning underlying systems (a pattern sketched in the code below). ‘Crypto-agility’ is based on the principle of hybrid cryptography – combining established algorithms with post-quantum methods, such as those suggested by NIST.

The eBook’s authors concur that cryptography on its own doesn’t address all possible areas of risk. They advocate the use of hardware-based trust devices that can isolate cryptographic keys and sensitive operations from normal working environments. If companies are developing their own AI tools and processes, protection on that basis should extend throughout the AI lifecycle, from data ingestion through to training, model deployment, and inference in production.

Hardware keys used to encrypt data and sign models can be generated and stored inside a secure hardware boundary. Model integrity can then be verified before deployment, and sensitive data processed during inference remains protected. Hardware-based enclaves isolate workloads so that even highly privileged system administrators can’t access the data being processed. Hardware modules can verify that the data enclave is in a trusted state before releasing keys – a process of external attestation – helping create a ‘chain of trust’ from hardware to application. Hardware-based key management produces tamper-resistant logs covering access and operations to support compliance frameworks such as the EU AI Act.

Many of the risks inherent in AI systems are well known, if not already exploited.
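A conceptual sketch of crypto-agility under that definition: application code depends only on a cipher interface, so a hybrid post-quantum scheme can be swapped in without touching callers. The cipher classes below are stubs, not real cryptography.

```python
# Crypto-agility sketch: swap algorithms behind a stable interface.
from typing import Protocol

class Cipher(Protocol):
    name: str
    def encrypt(self, data: bytes) -> bytes: ...

class ClassicRSA:
    name = "rsa-2048"
    def encrypt(self, data: bytes) -> bytes:
        return b"rsa:" + data  # stub, not real crypto

class HybridPQ:
    # Hybrid scheme: classical algorithm plus a NIST post-quantum KEM.
    name = "rsa-2048+ml-kem-768"
    def encrypt(self, data: bytes) -> bytes:
        return b"hybrid:" + data  # stub, not real crypto

def protect(record: bytes, cipher: Cipher) -> bytes:
    # Application code never names an algorithm directly, so migrating
    # to post-quantum methods means changing configuration, not callers.
    return cipher.encrypt(record)

print(protect(b"training-data", HybridPQ()))
```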
The risk from quantum computing’s ability to decrypt data currently considered safe is less immediate, but the implications should affect data and infrastructure decisions made today, Utimaco states. It advocates:

- Strengthening controls throughout the AI development and deployment lifecycle
- Introducing ‘crypto-agility’ to allow a transition to post-quantum security
- Establishing hardware-based trust mechanisms wherever high-value assets are in play

(Image source: “Scanning electron micrograph of an apoptotic HeLa cell” by National Institutes of Health (NIH) is licensed under CC BY-NC 2.0. To view a copy of this license, visit [Hidden Content])

The post Securing AI systems under today’s and tomorrow’s conditions appeared first on AI News. View the full article
-
UK authorities believe improving efficiency across national finance operations requires applying AI platforms from vendors like Palantir. The country’s financial regulator, the FCA, has initiated a project using AI to identify illicit activities. The FCA is currently testing the Foundry platform from Denver-based software vendor Palantir. This three-month pilot costs upwards of £30,000 per week and focuses on mining the regulator’s internal data lake. The objective centres on detecting money laundering, insider trading, and fraud across the 42,000 financial services businesses under the FCA’s supervision.

Navigating unstructured data lakes

Traditional oversight methods struggle with the sheer volume of information generated by modern markets. AI platforms excel at parsing unstructured intelligence, which regulators gather during investigations into harmful activities like human trafficking and the narcotics trade. The information fed into these systems spans from highly confidential internal files and reports on problematic companies to consumer ombudsman complaints. Machine learning tools digest audio recordings from phone calls, social media activity, and email archives. Uncovering patterns within such a vast array of inputs helps direct enforcement resources exactly where they are needed most. Industry experts note a historical under-exploitation of the intelligence housed within regulatory bodies, making advanced analytics a valuable tool for tackling financial crimes.

When validating AI models, there is often a debate about the merits of synthetic information versus live environments. While standard guidelines encourage using artificial datasets for preliminary testing, the UK’s finance regulatory authority determined that evaluating AI software like Palantir’s required actual operational inputs.

Expanding into national security operations

This public sector adoption extends well beyond financial compliance. In September 2025, the UK government established an AI partnership with Palantir aimed at accelerating military decision-making and targeting capabilities. Palantir plans to invest up to £1.5 billion to establish London as its European defence headquarters, an initiative expected to generate up to 350 jobs. As businesses evaluate these platforms, the defence sector provides a high-stakes testing environment for data fusion. Military planners utilise these tools to consolidate open-source and classified intelligence, rapidly generating options to neutralise enemy targets. This forms an element of the Digital Targeting Web, which relies on a diverse supplier ecosystem. Palantir and the military will collaborate on identifying opportunities worth up to £750 million over a five-year period. To foster broader ecosystem growth, the defence agreement includes provisions for mentoring local startups, assisting smaller British technology firms with expanding into US markets on a pro-bono basis.

Deploying private AI like Palantir’s in UK finance operations

CDOs deploying AI solutions often struggle when balancing processing capabilities with privacy mandates. During an enforcement action, regulators frequently compel companies to surrender extensive records. Such datasets regularly include the personal bank details, telephone numbers, and complete communication logs of individuals tangentially related to a case. Establishing exact boundaries regarding how a software provider interacts with this intelligence is vital.
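One way to picture those boundaries is as a machine-checkable policy. The sketch below encodes the contract terms reported further down (processor-only role, regulator-held keys, UK hosting, no training on the data); the field names and checker are invented for illustration, not taken from the actual agreement.

```python
# Hypothetical encoding of the reported contract terms as a policy object.
PROCESSING_POLICY = {
    "vendor_role": "data_processor",         # acts only on instruction
    "encryption_keys_held_by": "regulator",  # keys to classified files
    "hosting_region": "UK",                  # data never leaves the country
    "vendor_may_train_on_data": False,       # no use in commercial products
    "destroy_data_at_pilot_end": True,
    "derived_ip_owner": "regulator",
}

def check_action(policy: dict, action: str) -> bool:
    """Deny anything the agreement does not explicitly permit."""
    if action == "train_commercial_model":
        return policy["vendor_may_train_on_data"]
    if action == "host_outside_region":
        return False
    return action == "process_on_instruction"

print(check_action(PROCESSING_POLICY, "train_commercial_model"))  # False
print(check_action(PROCESSING_POLICY, "process_on_instruction"))  # True
```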
Before selecting Palantir from a two-vendor shortlist, the FCA claims to have run a competitive procurement process and established strict data protection controls. To mitigate risks associated with information exposure, the FCA structured its agreement with Palantir so the vendor acts strictly as a data processor. Under this arrangement, the software provider operates solely upon instruction. The regulatory agency maintains exclusive possession of encryption keys for the most classified files, and all hosting and storage remain securely within the UK. Similar data sovereignty principles apply to the defence partnership, ensuring military intelligence remains freely available across the Ministry of Defence while staying entirely under national control.

The financial contract explicitly forbids the vendor from copying the ingested intelligence to train its own commercial products. Once the pilot concludes, the vendor must destroy the information. Any intellectual property generated during the analysis phase automatically belongs to the regulator. Setting limitations on data retention and processing rights ensures internal security standards remain intact while achieving efficiency gains from deploying private AI from vendors like Palantir to improve the UK’s finance operations.

See also: Visa prepares payment systems for AI agent-initiated transactions

The post Palantir AI to support UK finance operations appeared first on AI News. View the full article
-
Payments rely on a simple model: a person decides to buy something, and a bank or card network processes the transaction. That model is starting to change as Visa tests how AI agents can initiate payments. New work in the banking sector suggests that, in some cases, software agents may soon take on that role.

A recent example comes from Visa, which is rolling out its “Agentic Ready” programme in Europe to test how financial systems handle AI-initiated transactions. The effort involves collaboration with banks, including Commerzbank and DZ Bank. The aim is to prepare existing payment infrastructure for a scenario where software agents can search for products and make decisions, then complete purchases on behalf of users. According to information published by Visa and reported by The Paypers, the programme focuses on enabling secure transactions where AI systems act as the initiating party. Instead of a customer confirming a purchase, an AI agent could carry out the task after being given a goal or set of rules.

How transactions begin

Payment systems are built around human identity and intent. A card transaction today depends on verifying that a person has authorised a purchase. If AI agents begin to initiate transactions, banks will need new ways to confirm identity and intent at the system level. That includes deciding how an agent proves it is acting on behalf of a user, and how much autonomy it should have.

In Visa’s model, software agents could handle routine or repeat purchases with limited human input, based on user-defined rules. A system could, for example, monitor supply levels and compare prices, then complete a transaction when certain conditions are met (a rule set of this kind is sketched below). Reporting from Die Welt and Investing.com says the company sees this as similar in scale to the early shift toward online payments, when banks had to adapt to a new type of transaction flow.

Control and compliance

Banks involved in early trials are testing how these ideas work in practice. Commerzbank and DZ Bank are exploring how AI agents can be integrated into existing systems without breaking compliance rules. This includes checks related to fraud, audit trails, and customer consent. These areas are tightly regulated, which means any change to how transactions are initiated must still meet oversight standards. A RepRisk report found that banks are already dealing with more frequent and costly issues linked to AI; the report states that these incidents can lead to multi-million-dollar losses.

Visa’s work is focused on infrastructure, not consumer-facing tools. It’s working on how payment networks should behave when the “customer” is a piece of software. That includes defining how agents are authenticated and how transactions are approved. It also covers how disputes are handled if something goes wrong.

AI and enterprise purchasing

In large organisations, procurement often involves multiple approval steps. AI agents could compress that process by handling routine purchases within set limits. This could reduce manual work, but it also means companies need clear rules about what agents are allowed to do. Without that, the risk of errors or misuse increases.

Large institutions are investing in AI to automate back-office work and reduce costs. Some are also reorganising teams to focus more on data and AI strategy. Regulators are paying closer attention to how AI is used in decision-making, especially in areas like credit and fraud detection.
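A minimal sketch of such user-defined rules, with an invented reorder scenario; a real agent-initiated payment would also involve network-level authentication of the agent itself, which Visa's programme is designed to address.

```python
# User-defined purchase rules: the agent may transact only when every
# condition holds. Thresholds and merchants are invented for illustration.
from dataclasses import dataclass

@dataclass
class PurchaseRules:
    max_amount: float      # hard spending limit per transaction
    min_stock: int         # reorder trigger
    allowed_merchants: set

def agent_should_buy(stock: int, price: float, merchant: str,
                     rules: PurchaseRules) -> bool:
    """Complete a transaction only within the user's explicit limits."""
    return (stock < rules.min_stock
            and price <= rules.max_amount
            and merchant in rules.allowed_merchants)

rules = PurchaseRules(max_amount=200.0, min_stock=5,
                      allowed_merchants={"acme-supplies"})
print(agent_should_buy(stock=3, price=149.0, merchant="acme-supplies",
                       rules=rules))  # True: reorder condition met
```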
Taken together, these developments suggest that payments could become one of the first areas where AI agents act with greater autonomy. Banks will still need to set rules, monitor activity, and handle exceptions. But the day-to-day act of initiating a transaction may, in some cases, require less direct human input. Visa’s current phase is focused on testing and system design. As AI systems take on more responsibility, financial infrastructure will need to adapt to a new type of user, one that does not hold a card but can still make a purchase.

(Photo by CardMapr.nl)

See also: Goldman Sachs sees AI investment change to data centres

The post Visa prepares payment systems for AI agent-initiated transactions appeared first on AI News. View the full article
-
The NVIDIA Agent Toolkit is Jensen Huang’s answer to the question enterprises keep asking: how do we put AI agents to work without losing control of our data, our systems, and our liability? Announced at GTC 2026 in San Jose on March 16, the NVIDIA Agent Toolkit is an open source software stack designed to help enterprises and developers build autonomous AI agents: ones that can perceive, reason, and act on their own across internal systems, without needing a human to babysit every step.

The timing makes sense. The agent conversation has moved well past the pilot phase. What’s stalling broader deployment isn’t capability; it’s trust. Agents that can take action inside enterprise systems need guardrails, and until now, those have been hard to standardise at scale.

OpenShell and the safety problem

The centrepiece of the toolkit is NVIDIA OpenShell, an open source runtime that enforces policy-based security, network, and privacy guardrails for autonomous agents. In NVIDIA’s terminology, individual agents are called “claws”, and OpenShell is what keeps them in check. Huang framed the stakes plainly at GTC: “Claude Code and OpenClaw have sparked the agent inflexion point–extending AI beyond generation and reasoning into action. Employees will be supercharged by teams of frontier, specialised, and custom-built agents they deploy and manage.”

That last part is the pitch. The ambition isn’t a single AI assistant; it’s a workforce of specialised agents, each handling a domain, coordinated at scale. OpenShell is the layer that’s supposed to make that deployable without IT teams having heart attacks. NVIDIA is working with Cisco, CrowdStrike, Google, Microsoft Security, and TrendAI to build OpenShell compatibility into their respective security tools, which signals that this isn’t being positioned as a standalone product, but as infrastructure others build on top of.

The research and cost angle

Also inside the toolkit is NVIDIA AI-Q, an agentic search blueprint built with LangChain. It uses a hybrid architecture: frontier models handle orchestration while NVIDIA’s open Nemotron models do the research-heavy lifting. According to NVIDIA, this approach can cut query costs by more than 50% while still producing accuracy that tops the DeepResearch Bench and DeepResearch Bench II leaderboards. That cost figure will matter to enterprise buyers who’ve been burned by consumption-based AI pricing that looked manageable in pilots and became a budget problem at scale.

Who’s already on board?

The partner list at GTC was extensive. Adobe, Atlassian, SAP, Salesforce, ServiceNow, Siemens, Cisco, CrowdStrike, Red Hat, Box, Cadence, Cohesity, Dassault Systèmes, IQVIA, and Synopsys are all advancing enterprise AI agents using the NVIDIA Agent Toolkit.

A few specifics stand out. Salesforce is building a reference architecture where employees use Slack as the orchestration layer for Agentforce agents, pulling from data in both on-premises and cloud environments, powered by NVIDIA infrastructure. Atlassian is integrating Agent Toolkit into its Rovo AI strategy across Jira and Confluence. ServiceNow’s “Autonomous Workforce of AI Specialists” is built on the toolkit alongside NVIDIA AI-Q. And Siemens launched the Fuse EDA AI Agent, which uses NVIDIA Nemotron to autonomously orchestrate workflows across its electronic design automation portfolio, from design conception through manufacturing sign-off.
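The article does not detail OpenShell's actual interface, so here is a generic, deny-by-default sketch of policy-based guardrails of the kind it describes; the policy fields and function are invented and are not OpenShell's real API.

```python
# Deny-by-default guardrail sketch: an agent may only take actions and
# reach hosts that policy explicitly permits.
POLICY = {
    "allowed_hosts": {"internal-wiki.example.com"},
    "allowed_actions": {"read", "summarise"},
}

def guarded(action: str, host: str, policy: dict) -> str:
    """Check each agent request against security and network policy."""
    if action not in policy["allowed_actions"]:
        return f"blocked: action '{action}' not permitted"
    if host not in policy["allowed_hosts"]:
        return f"blocked: host '{host}' outside network policy"
    return f"allowed: {action} on {host}"

print(guarded("write", "internal-wiki.example.com", POLICY))  # blocked
print(guarded("read", "internal-wiki.example.com", POLICY))   # allowed
```

The design point is that the guardrail sits in a runtime outside the agent, so a misbehaving or compromised agent cannot simply ignore it.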
IQVIA’s deployment numbers offer a real-world data point: the company has already deployed more than 150 agents across internal teams and client environments, including 19 of the top 20 pharma companies.

The ******* shift

What NVIDIA is really doing here is positioning itself not just as the hardware backbone of AI, but as the software infrastructure layer for enterprise agentic deployment. The Agent Toolkit, OpenShell, Nemotron models, AI-Q: these are components of a stack that NVIDIA wants sitting underneath an enormous swath of enterprise software. Whether that bet pays off depends on how quickly enterprises move from agent experimentation to agent operations.

The toolkit is available now on build.nvidia.com, with support across AWS, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure.

See also: AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise

The post NVIDIA wants to make enterprise AI agents safe enough to actually deploy appeared first on AI News. View the full article
-
Mastercard has developed a large tabular model (an LTM, as opposed to an LLM) that’s trained on transaction data rather than text or images, to help it address security and authenticity issues in digital payments. The company has trained a foundation model on billions of card transactions, with the intention of expanding to hundreds of billions in time. The datasets include payment events and associated data such as merchant location, authorisation flows, fraud incidents, chargebacks, and loyalty activity.

Mastercard says personal identifiers were removed before training began, and that the model parses behavioural patterns rather than concerning itself with individual identities. By excluding personal data, the technology reduces privacy risks that may affect other forms of AI in the financial services sector. The scale and richness of the data allow the model to infer patterns that are commercially valuable, the company said in a recent blog post, despite the lack of per-user information. Although anonymisation removes signals that are arguably useful for risk assessment, Mastercard asserts that sufficiently large volumes of behavioural data compensate for the loss.

What is an LTM (large tabular model)?

LTM architecture differs from that of large language models, which are trained on unstructured inputs and work by predicting the next token (typically but inaccurately described as a word) in a sequence. Mastercard’s LTM examines relationships between fields in multi-dimensional data tables, making the technology closer to pure machine learning than artificial intelligence. The large tabular model learns from raw inputs exactly which relationships are predictable, so it can identify anomalous patterns not captured by predefined rules. The company describes the LTM as an ‘insights engine’ that can be used in existing products, augmenting existing workflows. The operational risk of a model that interacts with customers (often an LLM) differs from that of one that’s part of internal decision-making. Technical infrastructure for the LTM comes from Nvidia and Databricks, with the former providing the computing platform and Databricks handling data engineering and model development.

Where will we see an LTM in operation?

Cybersecurity at Mastercard is the first area to see active deployment of the tech. Like many institutions, Mastercard operates several fraud detection systems examining transaction data. These require human input at their outset, and ongoing tuning, to define what constitutes suspicious behaviour. Examples might include sudden increases in transaction frequency, or users making purchases in different parts of the world within a short space of time.

Early results indicate improved performance over conventional techniques in specific cases, the company says. It cites the example of high-value, low-frequency purchases, which can be flagged as anomalies by traditional models; the new model appears to distinguish legitimate events more accurately than its counterparts. The company plans to deploy hybrid systems that combine established procedures with the new model, a degree of caution that reflects the regulatory scrutiny it operates under. It acknowledges that no single model is likely to perform well in all scenarios, so the LTM will take its place among the tools in this sphere.
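To make the "learns which relationships are predictable" idea concrete, here is a toy version of the principle on synthetic data: fit a model to predict one field from the others, and flag records whose observed value sits far from the prediction. Mastercard's actual architecture is not public; this only illustrates the general approach.

```python
# Toy field-prediction anomaly check on synthetic tabular data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
hour = rng.integers(0, 24, 1000)
merchant_type = rng.integers(0, 5, 1000)
amount = 50 + 10 * merchant_type + rng.normal(0, 5, 1000)  # predictable field

# Learn the relationship between fields: amount given (hour, merchant_type).
X = np.column_stack([hour, merchant_type])
model = GradientBoostingRegressor().fit(X, amount)

new_txn = np.array([[3, 1]])   # 3am, merchant type 1
observed_amount = 900.0        # far above what the learned pattern predicts
error = abs(model.predict(new_txn)[0] - observed_amount)
print(f"residual {error:.0f}: flag for review" if error > 50 else "ok")
```

No suspicious-behaviour rule is written anywhere; the anomaly falls out of the learned relationship between fields, which is the contrast the article draws with predefined rules.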
The model can reportedly scan activity on loyalty programmes and be used in portfolio management and internal analytics, areas where there are large volumes of structured data. In current operations, companies often deploy many models adapted to each task, but this multiplies training costs and validation and monitoring effort. A single foundation model that can be fine-tuned for different tasks may simplify processes and keep costs down.

Risk and future plans

There’s a risk to the multi-function LTM approach, of course: a failure in a widely deployed model could have system-wide consequences, which goes some way to explaining Mastercard’s strategy of applying its technology alongside existing detection systems, at least for the present. Mastercard hopes to increase the scale of the data used by the model and its overall sophistication. It’s also planning API access and SDKs to let internal teams build new applications.

The blog post emphasises the data responsibilities the LTM holds, mentioning privacy and transparency, model explainability, and auditability. Regulatory scrutiny of any system that influences credit decisions or fraud outcomes is to be expected, in addition to scrutiny of the data practices involved in the LTM’s operation.

Highly structured data, as opposed to text or images, lies at the core of the LTM. Large tabular models may be the start of a new generation of AI systems in core banking and payments infrastructure. Evidence to date remains limited to vendor reports, so performance claims should not be regarded as conclusive. Robustness under adversarial conditions, long-term post-training costs, and regulatory acceptance are all issues on which tabular models may founder or thrive. These factors will determine the pace and extent of adoption, but it’s the area of the table where Mastercard is placing some of its bets at present.

(Image source: “Oversight” by United States Marine Corps Official Page is licensed under CC BY-NC 2.0.)

The post Mastercard keeps tabs on fraud with new foundation model appeared first on AI News. View the full article
-
Autorek, a provider of AI solutions to the insurance industry, has produced a report describing operational drag in companies’ internal processes that not only affects overall efficiency but impedes the effective implementation of AI in insurance firms. Insurance Operations & Financial Transformation 2026 draws from a survey of 250 managers in the sector from the UK and US. The survey’s responses paint a picture of connected bottlenecks that include slow settlement processes and data fragmentation. The report also covers the current state of AI deployment in the industry.

Companies surveyed in the sector report persistent structural inefficiencies:

- 14% of operational budgets are spent correcting manual errors
- 22% of those questioned said reconciliation complexity is a significant cause of cost increases
- Around 22% of respondents link inefficiencies to governance and audit risks
- Nearly half of firms operate settlement cycles in excess of 60 days

Transaction volumes are projected to rise by roughly 29% in the next two years, the report claims, and OPEX burdens are likely to rise commensurately. The report attributes this to the combination of manual processing, disparate data systems, and the transactional complexity that’s the nature of modern insurance operations. The persistence of such processes, the authors state, is despite the findings of its previous publications being in the public domain for some time.

There is a gap between respondents’ expectations of what AI might deliver and implementation of the technology on the ground. The headline figure is that 82% of firms in the sector expect AI to dominate the industry, yet only 14% of companies have fully integrated AI in their operations. Six percent of companies report no use of AI at all.

What are the barriers to AI in the insurance sector?

The report identifies legacy system integration, fragmented data, and limited internal expertise as the main issues companies need to address to implement AI. The issue of fragmented data affects data governance frameworks, making the latter similarly piecemeal. The report’s authors cite complex data estates in many companies as the main reason that AI deployments are constrained in the sector. Firms surveyed managed an average of 17 data sources, and a majority cite this as an issue, one that’s compounded after mergers and acquisitions.

The report’s authors imply AI will affect costs and scalability positively and could address some of the issues firms experience around manual error correction and mistakes in reconciliation processes. The report suggests decision-makers could target reconciliation processes as an initial proving ground for AI, given reconciliation is a bounded, rules-based domain where automation can yield fast positive results (a minimal example is sketched below). Any form of automation, AI or deterministic, placed on a fragmented architecture and a fractured data layer may not scale well without a rise in costs. The report highlights the potential for AI in structuring fragmented data sources, and suggests cloud-based, as opposed to in-house, AI platforms may be an answer in that respect.

Structural issues

The dichotomy between reconciliation processes (essentially structured workflows) and disparate data sources that need manual nurturing creates complexity that’s measurable in cost and cycle times. This is a situation that persists despite a broad awareness of the issues among those surveyed.
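As a minimal illustration of why reconciliation suits rule-based automation, the sketch below matches ledger entries to bank lines on reference and amount, and queues the rest for review; the field names and data are invented.

```python
# Toy reconciliation: a bounded, rules-based task with a clear pass/fail.
ledger = [{"ref": "INV-1", "amount": 100.0}, {"ref": "INV-2", "amount": 250.0}]
bank = [{"ref": "INV-1", "amount": 100.0}, {"ref": "INV-3", "amount": 80.0}]

def reconcile(ledger: list, bank: list):
    """Match on (reference, amount); everything else goes to review."""
    bank_index = {(b["ref"], round(b["amount"], 2)) for b in bank}
    matched, unmatched = [], []
    for entry in ledger:
        key = (entry["ref"], round(entry["amount"], 2))
        (matched if key in bank_index else unmatched).append(entry)
    return matched, unmatched

matched, unmatched = reconcile(ledger, bank)
print(len(matched), "matched;", [e["ref"] for e in unmatched], "to review")
```

The fragility the report describes enters exactly where this sketch breaks down: when the 17-odd data sources disagree on reference formats or currencies, the deterministic match fails and items pile up in the review queue.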
The report asserts that firms which successfully address the issues at a structural level will widen the performance gap. Data standardisation and governance precede scalable automation, and eventually, automation will reduce reconciliation costs. AI could address the complexity of fragmented data and software layers that rules-based automation such as RPA (robotic process automation) may not be able to address economically.

The rate at which firms can resolve the data fragmentation issue is dictated by legacy technology and the overheads of day-to-day operations. The extent to which AI deployment could translate into performance gains beyond cost reduction is unclear, but if cost reduction is a positive enough outcome, then addressing the structural issues affecting the insurance sector would form a solid basis for AI-powered automation.

(Image source: “Scattered pieces” by Cle0patra is licensed under CC BY-NC-SA 2.0.)

The post For effective AI, insurance needs to get its data house in order appeared first on AI News. View the full article
-
Trustpilot is reported to be pursuing partnerships with large eCommerce companies as AI-driven shopping gains traction. In an interview with Bloomberg News [paywall], chief executive Adrian Blair said that AI agents acting on behalf of consumers require lots of information about the businesses they’re willing to interact with. He said the most effective systems will rely on datasets like those held by Trustpilot, adding that the company aims to work with major eCommerce sites to make greater use of its data.

Trustpilot expects its operating margin to reach 30% by 2030, with the improvement linked partly to the use of its content by LLMs. According to Bloomberg, traffic patterns are beginning to reflect this: click-throughs from AI-based search increased by 1,490% over the past year, thanks in no small part to search giant Google’s decision to make an AI search the default. Data from Promptwatch indicates that Trustpilot ranked as the fifth most cited domain globally in ChatGPT in January this year. Blair said that large language models have created a new channel through which Trustpilot content is presented, noting a rise in exposure and referral traffic from LLM-based algorithms.

In February 2026, Amazon and OpenAI announced an agreement to deploy genAI systems on AWS using customised models intended for Amazon’s consumer-facing applications. The arrangement is said to cover infrastructure provision and model development. Elsewhere, Walmart’s partnership with Google lets users purchase goods inside the Gemini chatbot. Google has similar arrangements with Shopify and other retailers. Shopify’s Universal Commerce Protocol lets AI agents access product data and take transactions to checkout, ensuring potential buyers remain on the AI platform (in this case Gemini) rather than navigating to the retailer’s site. Microsoft’s Copilot Checkout collaboration with PayPal falls into the same pattern. Shopify has pursued similar partnerships, including with Microsoft, so merchants can sell from chatbot interfaces. Its recent product updates describe “agentic storefronts” in which transactions take place inside AI interactions.

For marketing professionals, the loss of valuable data when shoppers purchase through a third-party proxy is, to varying degrees, balanced by the income from trade via AI platforms. Amazon currently challenges third-party AI agents accessing its platform without authorisation, and is developing its own assistant to retain control over user data and advertising revenue, according to the Wall Street Journal.

Trustpilot’s Adrian Blair argued in the Bloomberg News interview that user-generated reviews retain value regardless of the involvement of AI in the purchasing process. He said consumers will continue to “have experiences” with businesses, describing Trustpilot’s dataset of reviews as a long-term asset whose relevance is increasing. The company’s shares were affected by a broader decline in software stocks last month, sparked by media speculation about the death of SaaS platforms on the back of claims made by Anthropic. PYMNTS Intelligence’s report, “How AI Becomes the Place Consumers Start Everything,” describes consumers beginning their product research and shopping on AI platforms, refining their prompts iteratively rather than running successive ‘traditional’ searches.

(Image source: “E-Commerce Visa (Test tamron 17-50 2.8)” by Fosforix is licensed under CC BY-ND 2.0.)
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Trustpilot partners with AI companies as traditional search declines appeared first on AI News. View the full article
-
Artificial intelligence investment is entering a more selective phase as companies and investors look beyond early excitement and focus on the data centre infrastructure required to run AI systems. Recent analysis from Goldman Sachs suggests the market is moving toward what the firm describes as a “flight to quality.” In practice, investors are paying closer attention to companies that own and operate large data centres and computing infrastructure. Firms offering narrow AI tools or experimental software are receiving less attention.

Goldman Sachs expects spending on AI infrastructure to grow rapidly as companies expand computing capacity for model training and deployment. Hyperscale cloud firms are investing tens of billions of dollars each year in new data centres and computing hardware. Networking systems are also expanding to support this growth.

AI demand is reshaping the data centre market

Goldman Sachs Research estimates that AI workloads could account for about 30% of total data centre capacity in the next two years, as demand for computing power grows in cloud services and enterprise applications. The change reflects how AI tasks differ from traditional cloud workloads. Training large models requires thousands of chips running in parallel for extended periods. Inference, the process of generating responses or predictions, also requires steady computing power while services run. Cloud providers and AI developers are now expanding data centre capacity at a pace not seen during earlier phases of cloud computing.

Infrastructure demand extends beyond computing hardware. Energy supply is becoming a central issue in the AI race. Goldman Sachs Research estimates that global data centre power demand could rise about 175% by 2030 compared with 2023 levels, driven largely by AI workloads. The firm says this increase would be roughly equal to adding the electricity demand of another top-10 power-consuming country to the global grid. Rising power demand is also pushing utilities and governments to consider new investment in energy infrastructure.

Infrastructure limits are shaping AI strategy

The growing need for power and cooling is influencing where new AI data centres are built. Space requirements are also shaping site selection. Large facilities are often located near stable energy sources and high-capacity fibre networks. Some companies are building AI training clusters in remote areas where land and electricity are easier to secure. The location of data centres can also affect environmental impact. Academic research on AI infrastructure shows that cooling systems and geographic location can influence energy use and water consumption as much as hardware efficiency.

These limits are starting to affect how technology firms plan their AI strategies. Building new models or software is only part of the challenge. Companies must also ensure they have the infrastructure needed to run those systems reliably. In many cases, building that infrastructure takes years. Construction of large data centres involves complex supply chains. Projects often require land acquisition and grid connections. Many also depend on long-term energy agreements. Shortages of electrical equipment and delays in grid expansion can slow new projects. These constraints help explain why investors are paying more attention to companies that already control large data centre networks.
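As a rough sense-check of the scale involved, here is a minimal sketch of the arithmetic, assuming an illustrative 2023 baseline of 500 TWh (the baseline is an assumption for illustration only; the 175% growth figure is the one quoted above):

```python
# Rough arithmetic behind the projected rise in data centre power demand.
# The 175% growth figure comes from the report cited above; the 2023
# baseline of 500 TWh is an assumed value for illustration, not sourced.
baseline_2023_twh = 500.0          # assumed global data centre demand, TWh
growth = 1.75                      # "about 175%" increase by 2030

projected_2030_twh = baseline_2023_twh * (1 + growth)
added_demand_twh = projected_2030_twh - baseline_2023_twh

print(f"Projected 2030 demand: {projected_2030_twh:.0f} TWh")   # 1375 TWh
print(f"Added demand vs 2023:  {added_demand_twh:.0f} TWh")     # 875 TWh
```

On those assumed numbers, the added demand alone (roughly 875 TWh) is the quantity the report likens to bolting another top-10 power-consuming country onto the global grid.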
A selective phase of the AI market

During the first wave of generative AI adoption, many companies saw their market value rise simply by associating themselves with AI. That phase is now beginning to change as investors reassess where AI growth will occur. Investors are examining which companies have the infrastructure and revenue models needed to support long-term deployment. Data centre operators and chip manufacturers sit near the base of that ecosystem. Their services are required regardless of which AI applications gain traction.

During previous waves of computing growth, companies that built the underlying infrastructure often captured stable revenue. Software platforms, in contrast, rose and fell more quickly. A similar dynamic may now be forming in the AI sector.

Infrastructure expansion also raises new questions. Energy demand and grid capacity are becoming central issues for governments and industry planners. Environmental impact is also drawing closer scrutiny. In the coming years, the AI economy may depend as much on power plants and cooling systems as it does on algorithms and software. That reality is shaping the next stage of the AI race.

(Photo by Lightsaber Collection)

See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Goldman Sachs sees AI investment shift to data centres appeared first on AI News. View the full article
-
The US Treasury has published several documents designed for the US financial services sector that suggest a structured approach to managing AI risks in operations and policy (see the subheading ‘Resources and Downloads’ towards the bottom of the linked page). The CRI Financial Services AI Risk Management Framework (FS AI RMF) comes with a Guidebook [.docx], which details the framework, developed by a collaboration among more than 100 financial institutions and industry organisations, with input from regulators and technical bodies. The objective of the FS AI RMF is to help financial institutions identify, evaluate, manage, and govern the risks associated with AI systems, and to let firms continue adopting AI technologies responsibly.

Sector-specific framework

AI systems introduce risks that existing technology governance frameworks don’t address. Risks include algorithmic bias, limited transparency in decision processes, cyber vulnerabilities, and complex dependencies between systems and data. LLMs create concerns because their behaviour can be difficult to interpret or predict. Unlike traditional software, which is deterministic, an AI’s output varies depending on context.

Financial institutions already operate under extensive regulation and there is a raft of general guidance such as the NIST AI Risk Management Framework. However, applying general frameworks to the operations of financial institutions lacks the detail that reflects sector practices and regulatory expectations. The FS AI RMF is being positioned as an extension to the NIST framework, with additional sector-specific controls and practical implementation guidelines in its pages. The Guidebook explains how firms can assess their current AI maturity and implement controls to limit their risk. Its aim is to promote consistent and responsible AI practices and support innovation in the sector.

Core structure

The FS AI RMF connects AI governance with broader governance, risk, and compliance processes already affecting financial institutions. The framework contains four main components. The first is an AI adoption stage questionnaire that lets organisations determine the maturity of their AI use. The second is a risk and control matrix, which contains a set of risk statements and control objectives in alignment with adoption stages. The Guidebook explains how to apply the framework, while a separate control objective reference guide provides examples of controls and supporting evidence.

The framework defines a total of 230 control objectives organised according to four functions adapted from the broader NIST AI Risk Management Framework: govern, map, measure, and manage. Each function contains categories and subcategories that describe elements of effective AI risk management and governance.
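As an illustration of the shape such a risk-and-control matrix could take in practice, here is a minimal sketch in Python. The objective IDs, descriptions, and stage mappings are invented for this example (the real framework defines 230 objectives); the four functions are those named above, and the stage names refer to the adoption stages described in the next section:

```python
# Illustrative sketch of a risk-and-control matrix keyed by the framework's
# four functions. Objective IDs, wording, and stage mappings are invented;
# the real FS AI RMF defines 230 control objectives.
from dataclasses import dataclass

@dataclass
class ControlObjective:
    objective_id: str   # hypothetical identifier
    function: str       # one of: govern, map, measure, manage
    description: str
    min_stage: str      # earliest adoption stage at which it applies

MATRIX = [
    ControlObjective("GV-01", "govern", "Assign accountable owners for every AI system.", "initial"),
    ControlObjective("MP-04", "map", "Inventory AI systems and their data dependencies.", "minimal"),
    ControlObjective("MS-07", "measure", "Monitor model output drift after deployment.", "evolving"),
    ControlObjective("MG-12", "manage", "Maintain an AI-specific incident response runbook.", "embedded"),
]

def applicable(stage: str) -> list[ControlObjective]:
    """Return the controls a firm at a given adoption stage should implement."""
    order = ["initial", "minimal", "evolving", "embedded"]
    return [c for c in MATRIX if order.index(c.min_stage) <= order.index(stage)]

print([c.objective_id for c in applicable("evolving")])  # ['GV-01', 'MP-04', 'MS-07']
```

The design point the framework makes, mirrored in the sketch, is that controls accumulate with maturity: a firm at an earlier stage carries a smaller set, and the set grows as AI becomes more embedded.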
Assessing AI maturity

The adoption stage questionnaire determines the extent to which an organisation is using AI. Some firms, for example, rely on traditional predictive models in limited applications, while others deploy AI in core business processes; others use AI only in customer-facing roles. The questionnaire helps organisations determine where they currently sit on the spectrum of AI use, evaluating factors like the business impact of AI, governance arrangements, deployment models, use of third-party AI providers, organisational objectives, and data sensitivity.

Based on this assessment, organisations are classified into four stages of AI adoption:

- Initial stage: organisations that have little or no operational AI deployment. AI may be under consideration but is not embedded.
- Minimal stage: limited AI use in low-risk areas or isolated systems.
- Evolving stage: organisations running more complex AI systems, including applications that involve sensitive data or external services.
- Embedded stage: AI plays a significant role in business operations and decision-making.

These stages help institutions focus their efforts on controls appropriate to their maturity level. A firm at an early stage does not need to implement every control immediately, but as AI becomes more integrated, the framework introduces additional controls to address growing levels of risk.

Risk and control

The control objectives for each AI adoption stage address governance and operational topics including data quality management, fairness and bias monitoring, cybersecurity controls, transparency of AI decision processes, and operational resilience. The Guidebook provides examples of possible controls and types of evidence institutions can use to demonstrate they’re compliant. Each firm must determine the controls that fit best. The framework recommends maintaining incident response procedures specific to AI systems and creating a central repository for tracking AI incidents, processes that will help organisations detect failures and improve governance over time.

Trustworthy AI

The framework incorporates principles for trustworthy AI defined as validity and reliability, safety, security and resilience, accountability, transparency, explainability, privacy protection, and fairness. These provide a foundation for evaluating AI systems across their full lifecycle. In simple terms, financial institutions have to ensure AI outputs are reliable, that systems are protected against cyber threats, and that decisions can be explained when they affect customers or have regulatory relevance.

Strategic implications

For senior leaders in financial institutions of any nation, the FS AI RMF offers a guide to integrating AI into existing risk management frameworks. It stresses the need for coordination across different business functions in the organisation. Technology teams, risk officers, compliance specialists, and business units all need to participate in the AI governance process. Adopting AI without strengthening governance structures may expose institutions to operational failures, regulatory scrutiny, or reputational damage. Conversely, firms that build clear governance processes will be more confident in deploying AI systems.

The Guidebook frames AI risk management as an evolving discipline. As AI technologies develop and regulatory expectations change, institutions will need to update their governance practices and risk assessments accordingly. For financial sector decision-makers, the message is that AI adoption must progress in step with risk governance. A structured framework such as the FS AI RMF provides a common language and method to manage the evolution.

(Image source: “Law Books” by seychelles88 is licensed under CC BY-NC-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post US Treasury publishes AI risk Guidebook for financial institutions appeared first on AI News. View the full article
-
NTT DATA has announced an initiative to deliver NVIDIA-powered platforms designed to give organisations a repeatable, production-ready model for scaling AI. The offering integrates NVIDIA’s GPU-accelerated computing and high-performance networking with NVIDIA AI Enterprise software, including NeMo and NIM Microservices, into a full-stack agentic AI platform that can be deployed in cloud and edge environments. The architecture covers the full AI lifecycle of model training and enterprise application development inside a governed framework.

Abhijit Dubey, CEO of NTT DATA, said there is a change in how enterprises approach AI deployment. “By integrating NVIDIA technologies into our enterprise AI factories, we’re giving clients a powerful and secure environment to adopt agentic AI with measurable returns from the start.”

NTT DATA says the enterprise AI factory model addresses a gap that has stalled many AI programmes: the distance between a successful pilot and a production system that runs reliably. The platform is designed to standardise output and reduce the time and cost of moving from proof-of-concept to operational deployment.

Real-world deployments

Three early-adopter cases give a clearer picture of enterprise AI factories. A leading medical-research hospital is using NVIDIA HGX platforms, with NTT DATA and Dell, for advanced radiology analysis and rapid model evaluation to support clinical research workflows. In automotive manufacturing, a global supplier has reduced production setup time by validating workloads on bare metal before scaling through an AI factory architecture on NVIDIA infrastructure. A third deployment, in technology manufacturing, involves a US-based company using NVIDIA-accelerated simulation and 3D visualisation to validate a next-generation battery production line before physical deployment.

NTT DATA is positioning enterprise AI factories as a domain-specific delivery model, with the NVIDIA stack serving as the common infrastructure underneath sector-by-sector customisation.

NeMo and NIM in an AI factory stack

The technical integration comprises two NVIDIA components. NVIDIA NeMo is a suite for building agentic AI systems on GPU-accelerated infrastructure. NVIDIA NIM Microservices provide pre-built, GPU-optimised containers with APIs for deploying AI applications. Together, they form what NTT DATA describes as a full-stack, production-ready AI agent platform. NTT DATA also offers pre-qualified GenAI prototypes built on this stack, which it says reduces complexity and accelerates time to value for clients building sector-specific applications.

John Fanelli, Vice President of Enterprise Software at NVIDIA, said: “Enterprises are now seeking robust, scalable platforms that can successfully transition their AI initiatives from pilot projects to full-scale production.” He said NTT DATA’s AI factory offerings provide clients with the domain-specific solutions needed to achieve production-grade enterprise AI.

NTT DATA describes itself as the only global IT services provider active in all three of NVIDIA’s partner tracks: Solution Provider, Cloud Partner, and Global System Integrator Partner Network. The recent announcement comes as enterprises face rising pressure to show financial returns on AI spending. Financial returns, governance, and domain-specific performance are now the criteria by which enterprise AI investments are judged, and the AI factory model is an attempt to make all three more systematic.
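Because NIM Microservices ship as containers exposing standard HTTP APIs, application code on top of such a stack can stay simple. As a minimal sketch, assuming a locally deployed LLM NIM with an OpenAI-compatible endpoint (the port, path, and model name here are illustrative assumptions, not details from the announcement):

```python
# Minimal sketch of calling a locally deployed NIM microservice.
# LLM NIM containers typically expose an OpenAI-compatible HTTP API;
# the port, path, and model identifier below are assumptions and will
# depend on the specific microservice deployed.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",   # assumed local NIM endpoint
    json={
        "model": "nvidia/llama-3.1-nemotron-70b-instruct",  # hypothetical model id
        "messages": [
            {"role": "user", "content": "Summarise today's production line alerts."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

The practical consequence is that application teams write against one API shape whether the container runs on a workstation, on-premises cluster, or cloud instance, which is much of what the "repeatable, production-ready" pitch amounts to.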
See also: Physical AI is having its moment – and everyone wants a piece of it

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post NTT DATA and NVIDIA bring enterprise AI factories to production scale appeared first on AI News. View the full article
-
When OpenAI launched Frontier in February, the announcement was framed as a platform for enterprise AI agents. What it actually signalled was a direct challenge to the revenue architecture that has underpinned the software industry for the better part of two decades.

Frontier is designed to act as a semantic layer across an organisation’s existing systems, connecting data warehouses, CRM platforms, ticketing tools, and internal applications so that AI agents can operate with the same business context a human employee would have. OpenAI describes these agents as “AI coworkers” that can be onboarded, assigned identities, granted permissions, and reviewed for performance. Early customers include Uber, State Farm, Intuit, and Thermo Fisher Scientific.

The commercial ambition behind the platform is not subtle. OpenAI CFO Sarah Friar has stated that enterprise customers currently account for roughly 40% of the company’s revenue, and she aims to increase this figure to closer to 50% by year-end. Frontier is the vehicle.

What Frontier actually does to enterprise workflows

The case for Frontier rests on a problem that CIOs have described consistently through 2025 and into this year: agents deployed in isolation add complexity rather than remove it. Each new agent becomes a point of integration, requiring its own data connections and governance controls, and the result is fragmentation at scale. OpenAI’s answer is a shared business context. Rather than each agent building its own understanding of how an organisation works, Frontier provides a centralised layer that all agents can reference.

Fidji Simo, OpenAI’s CEO of Applications, put it plainly during the launch briefing, drawing on her time running Instacart. “We spent months integrating each of the ones that we selected. We didn’t even get what we actually wanted, because each tool was good for one use case, but they weren’t integrated or talking to one another, so we were just reinforcing silos upon silos.”

The results OpenAI cites from early deployments are notable. A global investment firm using Frontier agents across its sales process freed up more than 90% of salesperson time previously spent on administrative tasks. A technology customer reported saving 1,500 hours a month in product development. At a major manufacturer, agents compressed a production optimisation process from six weeks to a single day.

Frontier is also deliberately open. It manages agents built by OpenAI, agents built in-house by enterprise teams, and agents from third-party providers, including Google, Microsoft, and Anthropic. That openness is both a design principle and a positioning move: it makes Frontier harder to dismiss as a vendor lock-in play, while expanding the surface area it can govern.

The seat-licence problem nobody wants to say out loud

The deeper concern for incumbents is structural. The per-seat licence model that has made SaaS enormously profitable assumes that software usage maps to headcount. If an AI agent handles the workflow that previously required a human employee logging into Salesforce, the justification for that seat licence weakens. Fortune described it directly: the fear in the market is that platforms like Frontier will make SaaS software “invisible” and consequently less valuable. Salesforce’s stock has declined more than 27% so far this year, a fall analysts have attributed more to agentic AI disruption fears than to any weakness in its underlying financials. The company’s Q4 FY2026 results were solid.
Revenue reached $11.2 billion in the quarter, Agentforce’s annual recurring revenue hit $800 million, and the company closed 29,000 Agentforce deals. The stock still fell after hours, on guidance that came in below Wall Street’s expectations.

The incumbents are not standing still. Salesforce has introduced what it calls the Agentic Enterprise License Agreement, a fixed-price, all-you-can-eat model for Agentforce that attempts to make consumption more predictable for enterprise buyers. ServiceNow has moved to consumption-based pricing for some of its AI agent offerings, and in January signed a multiyear agreement with OpenAI to embed frontier model capabilities directly into its platform. Microsoft has introduced consumption-based pricing alongside its per-user model for Copilot Studio.

The pricing pivot is significant. It signals that these companies understand the seat-licence model cannot survive agentic AI unchanged. The question is whether repricing is enough or whether the architecture itself needs to change.

Two bets on where the intelligence layer should sit

The strategic divide in enterprise AI right now runs along a single fault line: should AI agents live inside systems of record, or above them? Salesforce and ServiceNow are betting on the embedded model. They argue that agents are most effective when they sit closest to the data, and that CIOs will trust governance and compliance controls more readily from vendors already managing their workflows. Marc Benioff, CEO of Salesforce, has described Agentforce as the “operating system for the agentic enterprise.” ServiceNow positions its AI Control Tower as a centralised governance layer for all agents, regardless of where they originate.

OpenAI, and to a similar degree Anthropic with Claude Cowork, is betting on the overlay model. Frontier sits above existing systems, using open standards to connect them rather than replacing them. The pitch is that enterprises should not have to replatform to get production-grade agents running across their operations.

Both arguments have merit, and enterprises evaluating these platforms will find genuine trade-offs. The embedded approach offers tighter data control and faster time to value within a known ecosystem. The overlay approach offers flexibility and avoids the problem of agents that can only see one vendor’s data. What the incumbents have that OpenAI does not is decades of institutional trust and existing contracts. What OpenAI has is the model capability advantage and an increasingly credible argument that it can run the intelligence layer across the whole enterprise, not just one product family.

What CIOs are actually deciding

Frontier is currently available to a limited set of customers, with broader availability expected over the coming months. Pricing has not been disclosed publicly, with OpenAI directing interested organisations to its enterprise sales team.

For CIOs, the practical decision is not yet binary. Most large enterprises run Salesforce, ServiceNow, and Microsoft infrastructure simultaneously. The immediate question is whether Frontier becomes an orchestration layer that connects those systems, or a competitive platform that starts displacing them.

OpenAI’s chief revenue officer, Denise Dresser, offered what is probably the most honest summary of where enterprise AI agents stand right now.
“What’s really missing still for most companies is just a simple way to unleash the power of agents as teammates that can operate inside the business without the need to rework everything underneath.”

That gap is exactly what every platform in this space claims to close. The difference with Frontier is that the company making the claim now has the enterprise relationships, the production deployments, and the model capability to back it up. The SaaS incumbents have a head start on trust and data. Whether that proves sufficient is the central question for enterprise software through the rest of 2026.

(Photo by Austin Distel)

See also: OpenAI’s enterprise push: The hidden story behind AI’s sales race

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post OpenAI Frontier puts enterprise AI agents at the centre of a fight the SaaS industry cannot afford to lose appeared first on AI News. View the full article
-
E.SUN Bank is working with IBM to build clearer AI governance rules for how artificial intelligence can be used inside a bank. The effort reflects a wider shift in finance. Many firms already use AI for fraud checks and credit scoring, and some also use it to handle customer service queries. The new challenge is how to manage these systems in a way that meets legal and risk rules.

Banks face a growing list of questions as they deploy AI. How should a model be tested before it goes live? Who is responsible if it makes a wrong call? And how can firms prove to regulators that their systems are fair and safe?

To address those issues, E.SUN Bank and IBM Consulting have created an AI governance framework for banking. The project also includes an AI governance white paper that sets out how financial firms can build internal controls around AI systems. According to the companies’ press release, the work adapts global standards such as the EU AI Act and ISO/IEC 42001 for financial services.

The framework sets out how banks can review AI models before they are deployed. It also explains how those models should be monitored after they enter production. It includes rules for how data is used and how risk reviews should take place. E.SUN Bank said the framework is intended to help financial institutions introduce AI systems while maintaining governance and regulatory oversight. Many firms already run limited AI tools. The next step is to scale those systems across core operations such as lending and payments while staying within regulatory limits.

Banks try to manage AI risk

Financial firms have strong reasons to place guardrails around AI systems. Banking relies on trust, and regulators require firms to track how decisions are made. AI models often act as “black boxes,” meaning it can be hard to explain how they arrive at a result. That can create problems in areas such as credit decisions or fraud checks.

Regulators in many regions have started to focus on these risks. The European Union’s AI Act, adopted in 2024, places strict rules on AI systems used in high-risk sectors such as finance. The law requires firms to assess risks and document training data. It also requires them to monitor how AI models behave after deployment.

Global standards are also taking shape. ISO/IEC 42001, published in 2023, sets out how organisations can build management systems for AI. The standard focuses on oversight and model monitoring. It also addresses how organisations should manage AI data. The aim is to give firms a structured way to manage AI across an entire company rather than treating each model as a separate tool. E.SUN Bank’s project with IBM draws from both frameworks. It is meant to show how these rules could work in daily banking operations.

From AI pilots to enterprise systems

Banks have used machine learning for years, mainly in risk analysis and fraud detection. Newer AI models are expanding how banks use the technology. Many now apply it in customer service and document review. Some also use it in internal knowledge systems. That expansion brings new governance needs. A system that suggests answers to customer queries may seem low risk. But a model that helps approve loans or detect fraud can have direct financial effects.

The governance framework created by E.SUN Bank and IBM sets out a process to track those risks. Models are reviewed before they go live, and teams monitor their output after deployment. The framework also assigns responsibility across teams, from developers to compliance staff.
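A hypothetical sketch of what such a governance record could look like in code, with pre-deployment review and post-deployment monitoring as explicit steps, follows below. All names and fields here are invented for illustration and are not taken from the E.SUN and IBM framework:

```python
# Hypothetical sketch of a model governance record of the kind the
# framework describes: reviewed before go-live, monitored afterwards,
# with named owners across teams. All field names are invented.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelGovernanceRecord:
    model_name: str
    risk_tier: str                    # e.g. "low", "medium", "high"
    owner: str                        # accountable business owner
    compliance_reviewer: str
    approved_for_production: bool = False
    monitoring_log: list = field(default_factory=list)

    def approve(self, review_passed: bool) -> None:
        # Pre-deployment gate: no production use without a passed review.
        self.approved_for_production = review_passed

    def log_check(self, metric: str, value: float) -> None:
        # Post-deployment monitoring entries, kept for audit.
        self.monitoring_log.append((date.today().isoformat(), metric, value))

record = ModelGovernanceRecord("fraud-screening-v2", "high",
                               "retail-risk", "compliance-team")
record.approve(review_passed=True)
record.log_check("false_positive_rate", 0.031)
```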
The project also produced a white paper that explains the steps in more detail. It outlines how banks can classify AI systems by risk level and apply different levels of oversight.

AI governance expands across financial services

The work at E.SUN Bank reflects a trend across global finance. Many banks now see governance as a key step before scaling AI across operations. Industry surveys suggest that AI adoption in financial services is already widespread. A 2024 report by NVIDIA found that about 91% of financial services firms were either assessing or already using AI. Common uses include fraud detection and risk modelling. Some banks also use AI to automate customer service tasks. Research from Deloitte shows that more than 70% of financial institutions plan to increase investment in AI. Much of that spending is aimed at compliance monitoring and risk analysis. Some banks also expect AI to improve internal operations.

At the same time, regulators are paying closer attention. Authorities in several regions have warned banks to track how automated systems affect decisions such as credit approval and fraud detection. This pressure has led banks to invest more in internal oversight systems. Instead of focusing only on model accuracy, firms now also track data sources and decision logic. Many also monitor how models behave over time.

Why governance may shape AI adoption

The push for AI governance may influence how quickly banks adopt new tools. Without clear rules, many firms hesitate to move beyond small experiments. A structured framework can help them expand AI projects while still meeting regulatory demands. That is the idea behind the E.SUN Bank project. By combining global standards with banking workflows, the framework sets out how AI can be deployed under clear oversight. According to the companies’ announcement, IBM said the framework was developed to help financial institutions manage AI risks as they expand their use of AI in banking.

The effort also reflects the growing role of governance in enterprise AI. Early AI projects focused on building models and improving performance. Today the focus is shifting toward how those systems are managed over time. As more banks bring AI into core operations, that question may become just as important as the technology itself.

(Photo by Markus Spiske)

See also: Manulife moves AI agents into core financial workflows

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post E.SUN Bank and IBM build AI governance framework for banking appeared first on AI News. View the full article
-
Europe’s factory floors have a new kind of colleague. BMW Group has deployed humanoid robots in manufacturing in Germany for the first time, launching a pilot project at its Leipzig plant with AEON – a wheeled humanoid built by Hexagon Robotics. It is the first automotive deployment of AEON anywhere in the world, and it marks something of a line in the sand for European industry: physical AI is no longer a North American or East Asian story.

The announcement, made on March 9, 2026, comes backed by hard data from a prior US trial. In 2025, BMW ran a ten-month pilot at its Spartanburg, South Carolina, plant using Figure AI’s Figure 02 robot. The humanoid supported production of over 30,000 BMW X3s, working 10-hour shifts and moving a total of over 90,000 components. Leipzig is now the direct heir to those lessons.

A robot built for work, not demos

AEON, developed by Hexagon’s Zurich-based robotics division, is a deliberately industrial machine. Arnaud Robert, President of Hexagon Robotics, made the philosophy plain at a Munich event earlier this month: “We’re not in the dancing business – we’re in the working business.”

That ethos is visible in every design decision. Rather than walking on two legs, AEON moves on wheels – a choice made after extensive testing of locomotion systems, with Hexagon concluding that on factory-grade flat floors, wheels are significantly more efficient in both speed and energy use. It stands 1.65 metres tall, weighs 60 kilograms, reaches speeds of 2.5 metres per second, and can autonomously swap its own battery in 23 seconds – enabling around-the-clock operation without human intervention. Its 22 integrated sensors – peripheral cameras, time-of-flight, infrared, SLAM cameras, and microphones – give it full 360-degree real-time spatial awareness, including the ability to perform quality inspection tasks that conventional stationary robots cannot. Its human-like torso allows a wide variety of grippers, hand elements, and scanning tools to be flexibly docked, which is precisely what BMW needs for multifunctional deployment across different production environments.

Phased rollout, deliberate strategy

AEON’s first test deployment at Leipzig took place in December 2025. A further test run is planned for April 2026, ahead of a full pilot phase launching in summer 2026, where two AEON units will work simultaneously across two use cases – focusing on high-voltage battery assembly and component manufacturing for exterior parts.

Leipzig was not an arbitrary choice. It is BMW’s most technologically comprehensive German plant, combining battery production, injection moulding, press shop, body shop, and final assembly under one roof, meaning a successful deployment there effectively validates physical AI across the full production spectrum.

To anchor this work institutionally, BMW has established a Centre of Competence for Physical AI in Production, consolidating expertise across the group and creating a defined evaluation path for technology partners – from lab testing through to full pilot phases. As Felix Haeckel, Team Lead for the centre, put it: “We are pooling our expertise to make knowledge on AI and robotics widely usable within the company.”

The infrastructure underneath

What makes BMW’s approach notable is that AEON is not landing on a blank factory floor.
BMW has systematically dismantled data silos across its production network, replacing them with a uniform data platform that ensures all information is consistent, standardised, and accessible at all times – the architecture that allows AI agents to operate autonomously and learn continuously. The humanoid robot is, in effect, the physical layer of a system that has been years in the making.

AEON runs on NVIDIA Jetson Orin onboard computers and was trained largely through simulation using NVIDIA’s Isaac platform – a method that allowed Hexagon to develop core locomotion capabilities in weeks rather than months. The project also involves Microsoft Azure for scalable model development and Maxon’s actuators for locomotion.

Why this matters beyond Leipzig

The broader signal here is one that the enterprise AI world is already tracking closely. Deloitte’s State of AI in the Enterprise 2026 report, surveying over 3,200 senior leaders across 24 countries, found that 58% of companies are already using physical AI in some capacity, with that figure set to reach 80% within two years; Asia Pacific is leading in early implementation. BMW’s Leipzig pilot is a proof point in that trajectory: humanoid robots in manufacturing have moved past the lab and the press release, and are being stress-tested against the unforgiving standards of real industrial production.

As Milan Nedeljković, BMW’s Board Member for Production, put it: “The symbiosis of engineering expertise and artificial intelligence opens up completely new possibilities in production.” The question now is not whether humanoid robots belong on the factory floor. It is how fast the rest of European industry follows.

See also: Ai2: Building physical AI with virtual simulation data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post BMW puts humanoid robots to work in Germany – and Europe’s factories are watching appeared first on AI News. View the full article
-
Managing the economics of multi-agent AI now dictates the financial viability of modern business automation workflows. Organisations progressing past standard chat interfaces into multi-agent applications face two primary constraints.

The first issue is the thinking tax: complex autonomous agents need to reason at each stage, making reliance on massive architectures for every subtask too expensive and slow for practical enterprise use. Context explosion is the second hurdle: these advanced workflows produce up to 1,500 percent more tokens than standard formats because every interaction demands resending full system histories, intermediate reasoning, and tool outputs. Across extended tasks, this token volume drives up expenses and causes goal drift, a scenario where agents diverge from their initial objectives.

Evaluating architectures for multi-agent AI

To address these governance and efficiency hurdles, hardware and software developers are releasing highly optimised tools aimed directly at enterprise infrastructure. NVIDIA recently introduced Nemotron 3 Super, an open architecture featuring 120 billion parameters (of which 12 billion remain active) that is specifically engineered to execute complex agentic AI systems. Available immediately, NVIDIA’s framework blends advanced reasoning features to help autonomous agents finish tasks efficiently and accurately for improved business automation.

The system relies on a hybrid mixture-of-experts architecture combining several innovations to deliver up to five times higher throughput and twice the accuracy of the preceding Nemotron Super model. During inference, only 12 billion of the 120 billion parameters are active. Mamba layers provide four times the memory and compute efficiency, while standard transformer layers manage the complex reasoning requirements. A latent technique boosts accuracy by engaging four expert specialists for the cost of one during token generation. The system also predicts multiple future tokens at the same time, accelerating inference speeds threefold.

Operating on the Blackwell platform, the architecture utilises NVFP4 precision. This setup reduces memory needs and makes inference up to four times faster than FP8 configurations on Hopper systems, all without sacrificing accuracy.
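A toy calculation makes the economics of this design concrete. Using only the figures quoted above (120 billion total parameters, 12 billion active, and workflows generating up to 1,500 percent more tokens), and ignoring real-world factors like memory bandwidth, batching, and routing overheads, the sketch below shows why sparse activation matters once token volume explodes:

```python
# Toy arithmetic using the figures quoted in the article. This ignores
# memory bandwidth, batching, and routing overheads; it only illustrates
# why sparse activation and token volume both matter for agent economics.
total_params = 120e9
active_params = 12e9
active_fraction = active_params / total_params
print(f"Active fraction per token: {active_fraction:.0%}")  # 10%

# "Context explosion": up to 1,500% more tokens than standard formats
# means roughly 16x the token volume overall.
token_multiplier = 1 + 15.00

# If per-token compute scales with active parameters, the combined cost
# factor relative to a dense 120B model on a standard workload:
relative_cost = active_fraction * token_multiplier
print(f"Relative compute vs dense baseline: {relative_cost:.1f}x")  # 1.6x
```

On these simplified assumptions, sparse activation largely offsets the token explosion that would otherwise make agentic workloads an order of magnitude more expensive than standard chat.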
Translating automation capability into business outcomes

The system offers a one-million-token context window, allowing agents to keep the entire workflow state in memory and directly addressing the risk of goal drift. A software development agent can load an entire codebase into context simultaneously, enabling end-to-end code generation and debugging without requiring document segmentation. Within financial analysis, the system can load thousands of pages of reports into memory, improving efficiency by removing the need to re-reason across lengthy conversations. High-accuracy tool calling ensures autonomous agents reliably navigate massive function libraries, preventing execution errors in high-stakes environments such as autonomous security orchestration within cybersecurity.

Industry leaders – including Amdocs, Palantir, Cadence, Dassault Systèmes, and Siemens – are deploying and customising the model to automate workflows across telecom, cybersecurity, semiconductor design, and manufacturing. Software development platforms like CodeRabbit, Factory, and Greptile are integrating it alongside proprietary models to achieve higher accuracy at lower costs. Life sciences firms like Edison Scientific and Lila Sciences will use it to power agents for deep literature search, data science, and molecular understanding.

The architecture has also powered the AI-Q agent to the top position on the DeepResearch Bench and DeepResearch Bench II leaderboards, highlighting its capacity for multistep research across large document sets while maintaining reasoning coherence. Finally, the model claimed the top spot on Artificial Analysis for efficiency and openness, featuring leading accuracy among models of its size.

Implementation and infrastructure alignment

The model is built to handle complex subtasks inside multi-agent systems, and deployment flexibility remains a priority for leaders driving business automation. NVIDIA released the model with open weights under a permissive license, letting developers deploy and customise it across workstations, data centres, or cloud environments. It is packaged as an NVIDIA NIM microservice to aid this broad deployment from on-premises systems to the cloud.

The architecture was trained on synthetic data generated by frontier reasoning models. NVIDIA published the complete methodology, encompassing over 10 trillion tokens of pre- and post-training datasets, 15 training environments for reinforcement learning, and evaluation recipes. Researchers can further fine-tune the model or build their own using the NeMo platform.

Any exec planning a digitisation rollout must address context explosion and the thinking tax upfront to prevent goal drift and cost overruns in agentic workflows. Establishing comprehensive architectural oversight ensures these sophisticated agents remain aligned with corporate directives, yielding sustainable efficiency gains and advancing business automation across the organisation.

See also: Ai2: Building physical AI with virtual simulation data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post How multi-agent AI economics influence business automation appeared first on AI News. View the full article
-
When Romy Gai, FIFA’s chief business officer, described the operational challenge of running a 48-team World Cup across Canada, Mexico and the United States, he was not talking about technology. He was talking about complexity. Previous World Cups relied on local organising committees to absorb much of the logistical load. For 2026, FIFA is running operations directly. Six billion people are expected to watch. There are 104 matches, up from 64 in Qatar. There are 48 teams instead of 32, 180-plus broadcasters, and no single national infrastructure to lean on. The scale is genuinely new.

The AI strategy FIFA unveiled at Lenovo Tech World in Hong Kong this week is best understood against that backdrop. Football AI Pro, AI-enabled 3D player avatars, and a next-generation Referee View are the headline announcements. But the product decisions themselves reflect something more structural: an organisation that has decided AI is not an enhancement to how it runs football’s biggest event, but how the event gets run.

What Football AI Pro actually does

Football AI Pro is a generative AI knowledge assistant that will be made available to all 48 teams competing at the 2026 World Cup. It is built on FIFA’s Football Language Model and trained on hundreds of millions of FIFA-owned data points. It generates pre- and post-match analysis in text, video, graphs and 3D visualisations, supports prompts in multiple languages, and will not be used during live play.

The democratisation argument behind it is straightforward. At the highest level of the game, access to sophisticated match analysis depends heavily on a team’s financial resources. A tier-one footballing nation has a dedicated analytics department. A team competing at its first World Cup does not. Football AI Pro is designed to give every team the same analytical baseline.

That ambition is real, but it is also worth understanding as an enterprise AI deployment challenge. Delivering consistent, tournament-wide intelligence across 48 teams in three countries, in multiple languages, against a match schedule that runs for weeks, is not a small infrastructure problem. It is the kind of workload that requires exactly the hybrid AI architecture Lenovo has been building its enterprise positioning around.

The referee camera is about transparency, not television

The updated Referee View is being framed in broadcast terms, and it will look good on screen. AI-powered stabilisation smooths footage captured from the referee’s body camera in real time, reducing the motion blur that made the original version hard to watch during fast play.

The more significant purpose is transparency. VAR has been one of the most contested technologies in football, partly because the decision-making process is difficult for fans to follow and partly because the imagery used to communicate those decisions has often been unclear. Better referee footage, delivered in real time, changes both of those problems.

The first version of Referee View was trialled at the FIFA Club World Cup last year. The updated version for 2026 is a meaningful technical step forward, but the real test is whether it shifts audience perception of officiating decisions. If it does, it becomes a governance technology as much as a broadcast one.
3D avatars and the offside problem

The AI-enabled 3D player avatar system addresses a specific and persistent pain point: semi-automated offside technology. The existing system works, but the imagery it produces to explain offside decisions has not always been convincing. The lines are hard to read, the angles are counterintuitive, and fans routinely dispute calls that the technology correctly identified.

The new system scans players to create precise 3D models, with each scan taking approximately one second. During matches, those models are used to track players more accurately through fast or obstructed movements. When an offside decision is referred to VAR, the 3D model produces imagery that is both more accurate and easier to understand. It was tested at the FIFA Intercontinental Cup last year, where Flamengo and Pyramids FC players were scanned ahead of their match.

The underlying logic is the same as the referee camera: better data, communicated more clearly, reduces the legitimacy gap between the decision and the audience’s acceptance of it.
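To see what the system is automating, it helps to reduce the offside check to its core geometry. The toy sketch below treats each tracked player as a single coordinate along the pitch at the moment the ball is played; real semi-automated systems track many body points per player and apply the full Laws of the Game, so this is an illustration of the principle, not FIFA’s implementation:

```python
# Toy reduction of the offside geometry that semi-automated systems
# automate. Each player is a single x-coordinate along the pitch, with
# larger x meaning closer to the defending team's goal line. Real systems
# track many body points per player at the moment the ball is played.
def is_offside(attacker_x: float, defender_xs: list[float], ball_x: float) -> bool:
    # The second-last defender (the goalkeeper is usually the last).
    second_last_defender_x = sorted(defender_xs, reverse=True)[1]
    # Offside: nearer the goal line than both the ball and the second-last
    # defender. Being level is not offside, hence the strict comparison.
    return attacker_x > second_last_defender_x and attacker_x > ball_x

defenders = [88.0, 70.5, 69.8, 65.2]   # goalkeeper at x = 88.0
print(is_offside(attacker_x=71.0, defender_xs=defenders, ball_x=55.0))  # True
print(is_offside(attacker_x=69.0, defender_xs=defenders, ball_x=55.0))  # False
```

The hard part, and the reason the 3D avatars matter, is not this comparison but producing trustworthy coordinates: identifying the relevant body points through fast or obstructed movement, at the exact frame the ball is played, and then rendering the result so fans believe it.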
The intelligent command centre

The least-discussed element of the FIFA-Lenovo partnership is arguably the most operationally significant. FIFA has built what Gai described as an intelligent command centre that connects real-time data across departments, matches, venues and broadcasters in a single operational view. In a tournament running across three countries with over 180 broadcasters and six billion expected viewers, operational coordination is the constraint that everything else depends on. The command centre is effectively the enterprise AI backbone behind the public-facing Football AI announcements.

Gai’s point about removing local organising committees is worth sitting with. It means FIFA is taking on operational responsibility for functions that were previously distributed across national bodies with local knowledge and local relationships. AI is not just supporting that decision; it is what makes the decision viable.

The Football Language Model and what comes after 2026

Football AI Pro is built on FIFA’s Football Language Model, a domain-specific model trained on FIFA’s own data. That is a significant asset. A general-purpose language model can answer questions about football. A model trained on hundreds of millions of FIFA-owned data points can generate validated, tournament-specific intelligence that a general model cannot replicate.

The implications extend beyond 2026. FIFA has stated that Football AI Pro will eventually be made available to fans, not just teams. The 211 member federations that make up world football’s governing structure are also in scope. If the model performs at the World Cup, it becomes the foundation for a much longer democratisation project, one that extends analytical capability to national associations and competitions that currently have almost none.

That is the larger enterprise AI story behind the announcements this week. The World Cup is the proof of concept. What FIFA builds on top of it is the actual deployment.

See also: How physical AI integration accelerates vehicle innovation

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post FIFA is rebuilding world football operations on AI. The World Cup is just the first test appeared first on AI News. View the full article
-
Virtual simulation data is driving the development of physical AI across corporate environments, led by initiatives like Ai2’s MolmoBot. Instructing hardware to interact with the real world has historically relied on highly expensive, manually collected demonstrations. Technology providers building generalist manipulation agents typically frame extensive real-world training as the basis for these systems. For some context, projects like DROID include 76,000 teleoperated trajectories gathered across 13 institutions, representing roughly 350 hours of human effort. Google DeepMind’s RT-1 required 130,000 episodes collected over 17 months by human operators. This reliance on proprietary, manual data collection inflates research budgets and concentrates capabilities within a small group of well-resourced industrial laboratories.

“Our mission is to build AI that advances science and expands what humanity can discover,” said Ali Farhadi, CEO of Ai2. “Robotics can become a foundational scientific instrument, helping researchers move faster and explore new questions. To get there, we need systems that generalise in the real world and tools the global research community can build on together. Demonstrating transfer from simulation to reality is a meaningful step in that direction.”

Researchers from the Allen Institute for AI (Ai2) offer a different economic model with MolmoBot, an open robotic manipulation model suite trained entirely on synthetic information. By generating trajectories procedurally within a system called MolmoSpaces, the team bypasses the need for human teleoperation. The accompanying dataset, MolmoBot-Data, contains 1.8 million expert manipulation trajectories. This collection was produced by combining the MuJoCo physics engine with aggressive domain randomisation, varying objects, viewpoints, lighting, and dynamics.

“Most approaches try to close the sim-to-real gap by adding more real-world data,” said Ranjay Krishna, Director of the PRIOR team at Ai2. “We took the opposite bet: that the gap shrinks when you dramatically expand the diversity of simulated environments, objects, and camera conditions. Our latest advancement shifts the constraint in robotics from collecting manual demonstrations to designing better virtual worlds, and that’s a problem we can solve.”

Generating virtual simulation data for physical AI

Using 100 Nvidia A100 GPUs, the pipeline created roughly 1,024 episodes per GPU-hour, equating to over 130 hours of robot experience for every hour of wall-clock time. Compared to real-world data collection, this represents nearly four times the data throughput, directly impacting project return on investment by accelerating deployment cycles.

The MolmoBot suite includes three distinct policy classes evaluated on two platforms: the Rainbow Robotics RB-Y1 mobile manipulator and the Franka FR3 tabletop arm. The primary model, built on a Molmo2 vision-language backbone, processes multiple timesteps of RGB observations and language instructions to dictate actions.

Hardware flexibility with Ai2’s MolmoBot

For edge computing environments where resources are constrained, the researchers provide MolmoBot-SPOC, a lightweight transformer policy with fewer parameters. MolmoBot-Pi0 uses a PaliGemma backbone to match the architecture of Physical Intelligence’s π0 model, permitting direct performance comparisons. During physical testing, these policies demonstrated zero-shot transfer to real-world tasks involving unseen objects and environments without any fine-tuning.
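The throughput figures above are internally consistent, which a few lines of arithmetic can verify. The GPU count, episodes per GPU-hour, and trajectory total come from Ai2’s numbers; the implied average episode length is derived here, not sourced:

```python
# Arithmetic check of the data-generation throughput quoted above.
# GPU count, episodes per GPU-hour, and the 1.8M trajectory total come
# from the article; the implied episode length is derived, not sourced.
gpus = 100
episodes_per_gpu_hour = 1024
robot_hours_per_wall_hour = 130

episodes_per_wall_hour = gpus * episodes_per_gpu_hour      # 102,400
implied_episode_seconds = robot_hours_per_wall_hour * 3600 / episodes_per_wall_hour
print(f"Episodes per wall-clock hour: {episodes_per_wall_hour:,}")
print(f"Implied avg episode length:  {implied_episode_seconds:.1f} s")  # ~4.6 s

total_trajectories = 1_800_000
wall_hours_needed = total_trajectories / episodes_per_wall_hour
print(f"Wall-clock hours for 1.8M trajectories: {wall_hours_needed:.1f}")  # ~17.6
```

On these figures, the entire 1.8-million-trajectory dataset represents well under a day of wall-clock generation time on the quoted cluster, which is the economic contrast with the months of human teleoperation behind datasets like DROID and RT-1.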
In tabletop pick-and-place evaluations, the primary MolmoBot model achieved a success rate of 79.2 percent. This outperformed π0.5, a model trained on extensive real-world demonstration data, which achieved a 39.2 percent success rate. For mobile manipulation, the policies successfully executed tasks such as approaching, grasping, and pulling doors through their full range of motion.

Providing these varied architectures allows organisations to integrate capable physical AI systems without being locked into a single proprietary vendor ecosystem or extensive data collection infrastructure. The open release of the entire MolmoBot stack – including the training data, generation pipelines, and model architectures – permits internal auditing and adaptation. Anyone exploring physical AI can leverage these open tools to simulate and build capable systems while controlling costs.

“For AI to truly advance science, progress cannot depend on closed data or isolated systems,” continued Ali Farhadi. “It requires shared infrastructure that researchers everywhere can build on, test, and improve together. This is how we believe physical AI will move forward.”

See also: New partnership to offer smart robots for dangerous environments

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Ai2: Building physical AI with virtual simulation data appeared first on AI News. View the full article
-
ADLINK Technology has signed a strategic alliance and joint development agreement with Under Control Robotics, the company behind the robotics startup Noble Machines. The two firms will combine ADLINK’s edge AI platforms with Noble Machines’ autonomy software to create a new generation of general-purpose robots for modern factories and engineering plants. The work focuses on bi-pedal, bi-manual machines – read, human-like robots – designed to operate in demanding industrial settings.

The partnership will integrate ADLINK’s DLAP edge AI platform with Noble Machines’ autonomy and whole-body control software. The system is intended to provide reasoning, sensing, and motion control for robots handling heavy loads. Initial target sectors include manufacturing, mining, construction, energy, petrochemicals, and public utilities, industries that currently report labour shortages and often involve risky environments for human workers.

ADLINK’s hardware is built on the NVIDIA Jetson Thor platform. In a press release, the companies state DLAP offers multi-voltage feeds and high-bandwidth sensor interfaces, quoting “up to eight” GMSL camera connections, four Ethernet ports, and 5G or Wi-Fi modules. Systems can operate across a wide temperature range and comply with IEC 60068 standards for shock and vibration. ADLINK’s hardware will combine with Noble Machines’ autonomy software, which manages perception, reasoning, and coordinated whole-body motion in robots. Robots operating in adverse conditions ideally need to replicate the mobility and manipulation abilities of human workers, so they can replace at-risk humans without significant retooling or altering existing working environments.

Ethan Chen, general manager of ADLINK’s Edge Computing Platforms business unit, said the agreement will extend the company’s edge computing hardware into emerging general-purpose robotic systems, moving from support for the current DLAP platform to a jointly-developed computing platform based on Jetson Thor. Wei Ding, chief executive of Under Control Robotics, said ADLINK’s experience in industrial hardware complements Noble Machines’ software, specifically its whole-body control systems.

The collaboration addresses hardware durability and supply chain integration issues that can affect industrial robot deployment. The two partners will initially pursue possible deployments in the construction and energy industries, where it’s common for certain tasks to involve workers tolerating dust, heat, heavy loads, and vibration. Typically, such tasks are difficult to mechanise because they require on-the-spot decision-making, mobility, and manual handling. By combining one another’s specialisations, the companies may be able to offer a turnkey solution for customers unwilling to invest in what would otherwise be experimental technology and hardware deployments.

The emphasis on real-time reactions means the AI element would supply the on-the-spot decision-making that human workers in difficult conditions would otherwise provide. Conventional software, as opposed to AI-based algorithms, would need to be constructed with every possible edge case hard-coded into control systems. The success of any systems emerging from the partnership will hinge on whether highly costly robots can react correctly in unforeseen situations without compromising themselves or human co-workers, or negatively affecting wider workflows on site.

(Image source: “Robot” by 1lenore is licensed under CC BY 2.0.)
(Image source: “Robot” by 1lenore is licensed under CC BY 2.0.)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post New partnership to offer smart robots for dangerous environments appeared first on AI News. View the full article
-
Large financial firms have spent years testing artificial intelligence in small projects, often limited to data analysis or customer support tools. The next phase appears to involve something more operational: systems that can take action in business workflows.
Canadian insurer Manulife is moving in that direction as it works to deploy agent-based AI systems inside its internal operations. The company is building these abilities with a runtime platform designed to support agentic AI – the type of system that can carry out tasks across different software tools and datasets. Manulife said the effort is part of a broader plan to automate high-volume work and assist internal decision-making in the business.
In a statement announcing the project, the company said it expects artificial intelligence initiatives to generate more than US$1 billion in value by 2027 through productivity gains and workflow automation. The insurer has been investing in AI for several years, but the current push focuses on integrating the technology more deeply into day-to-day operations.
Manulife has already been expanding its internal use of generative AI tools. The company said it currently has more than 35 generative AI use cases in production and plans to expand that number to about 70 in the coming years. It also reported that around 75% of its global workforce already uses generative AI tools in some form, according to company disclosures.
Moving AI to operations
Insurance companies handle large amounts of structured data. Policy information, claims records, underwriting assessments, and financial reports often move through several systems and teams before a decision is made. These processes create an environment where automation tools can assist with tasks like document review and internal reporting.
Manulife said its new platform will allow teams to deploy AI agents that can interact with internal systems and data. Instead of responding to a single prompt like a chatbot, these agents are designed to complete sequences of tasks across different software tools and workflows.
For example, an AI agent might collect data from several internal systems and prepare summaries for employees who are reviewing cases or preparing reports. The goal is to reduce the time staff spend gathering information before making a decision.
Over the past two years, many companies experimented with generative AI tools for tasks like writing, coding, or summarising documents. Analysts say the next challenge is turning those abilities into systems that can support operational work in large organisations.
A report from McKinsey’s 2024 Global AI Survey found that about 65% of organisations say they now use generative AI in at least one business function, up from about one-third in the previous year. However, the same research notes that only a small portion of those deployments have reached full production in large parts of the business, with many remaining limited to pilot projects or specific teams.
AI inside regulated financial systems
Financial institutions face extra hurdles when they try to move AI into production. The sector operates under strict regulatory oversight, which requires strong controls around data use and decision transparency. Systems used for underwriting, risk analysis, or investment decisions must be auditable and explainable. That environment makes governance and monitoring central to any AI deployment.
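As a loose sketch of what that looks like in practice – with hypothetical system names rather than Manulife’s actual platform – an agent run might gather data from several internal sources and write an audit entry for every tool call, so reviewers can later trace how a summary was produced:
```python
# Hypothetical sketch of an audit-logged agent step. The "claims_db"
# and "policy_api" tools and the summarise() stand-in are illustrative;
# they are not Manulife's internal systems.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

def fetch_claims(case_id: str) -> dict:
    """Stand-in for a call to an internal claims system."""
    return {"case_id": case_id, "open_claims": 2}

def fetch_policy(case_id: str) -> dict:
    """Stand-in for a call to an internal policy system."""
    return {"case_id": case_id, "policy_status": "active"}

def summarise(context: dict) -> str:
    """Stand-in for the generative-model call that drafts the summary."""
    claims, policy = context["claims_db"], context["policy_api"]
    return (f"Case {claims['case_id']}: {claims['open_claims']} open "
            f"claims; policy is {policy['policy_status']}.")

def run_agent(case_id: str) -> str:
    context = {}
    for tool_name, tool in [("claims_db", fetch_claims),
                            ("policy_api", fetch_policy)]:
        context[tool_name] = tool(case_id)
        # One audit entry per tool call keeps the run reconstructable.
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool_name,
            "case_id": case_id,
        }))
    return summarise(context)

print(run_agent("C-1042"))
```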
A study from Deloitte on AI in financial services notes that banks and insurers are increasing investment in model oversight tools, internal AI policies, and risk review processes as they expand automation. Organisations are trying to balance efficiency gains with regulatory expectations around accountability and fairness.
Manulife said the platform includes governance and security controls intended to manage how AI agents interact with internal systems. The controls help track how decisions are produced, monitor how data is used, and ensure the systems operate within company policies. Such safeguards are important in insurance, where automated systems often support processes tied to claims management and regulatory reporting.
The case for AI agents
The appeal of AI agents lies in their ability to reduce manual work in large administrative operations. Claims processing, policy management, internal reporting, and customer support involve repetitive tasks that require staff to gather data from different sources. AI systems that can collect and organise information across those sources may allow employees to focus elsewhere.
Other financial firms are exploring similar approaches. Banks in the US and Europe have begun testing AI agents for fraud detection and internal research tasks. In many cases, the goal is to assist employees with time-consuming analysis or data collection.
Research from Accenture’s Banking Technology Vision report suggests that AI-driven automation could help financial institutions reduce operational costs by up to 30% over time, depending on the processes involved. Much of the benefit comes from speeding up routine tasks and improving the accuracy of data handling.
The move from pilots to operational systems carries risks. AI models can produce errors, and automated workflows can amplify mistakes if they are not monitored. That risk is one reason many financial firms are adopting gradual rollout strategies, starting with internal tools before expanding to customer-facing systems.
Manulife’s plan to deploy agent-based AI in its operations shows how large enterprises are testing the next stage of enterprise AI adoption. The important question will be whether these systems can deliver reliable results while meeting regulatory expectations. If they can, AI agents may become a regular part of financial operations, handling routine work that once required large teams of staff.
As companies push beyond early experiments, the focus is on making the technology work inside the everyday systems that run large organisations.
(Photo by Joshua)
See also: Agentic AI in finance speeds up operational automation
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post Manulife moves AI agents into core financial workflows appeared first on AI News. View the full article
-
The integration of physical AI into vehicles remains a primary objective for automakers looking to accelerate innovation. A technical collaboration between Qualcomm and Wayve offers a framework for how hardware and software providers can consolidate their efforts to supply production-ready advanced driver assistance systems (ADAS) to manufacturers worldwide.
The partnership combines Wayve’s AI driving layer with Qualcomm’s Snapdragon Ride system-on-chips and active safety software. This aims to simplify implementation while meeting baseline requirements around reliability, safety, and time-to-market.
Simplifying physical AI integration for modern vehicles
Building an autonomous driving stack often involves piecing together fragmented components from various vendors. This fragmented approach increases development costs, complexity, and project risk. Pre-integrating the core processor, safety protocols, and the neural intelligence layer allows vehicle manufacturers to implement reliable capabilities faster with less engineering effort. The unified system is engineered to support global deployment and long-term platform strategies over the lifespan of a vehicle.
Unlike traditional rule-based autonomy, which relies heavily on detailed mapping, Wayve utilises a unified foundation model trained on diverse global data. This data-driven software learns driving behaviour directly from real-world exposure, allowing the system to adapt across different regions and road types without requiring location-specific engineering.
When embedded within a commercial vehicle, this form of physical AI needs substantial yet energy-efficient processing power. Qualcomm provides that compute infrastructure through a safety-certified architecture featuring redundancy, real-time monitoring, and secure system isolation.
By establishing an open architecture that scales from mainstream models to premium systems, automotive brands can ensure consistently high performance. The design provides flexibility, supporting software portability and reuse across various platforms and model years.
Anshuman Saxena, VP and GM of ADAS and Robotics at Qualcomm, said: “ADAS is where scale, safety, and real-world impact matter most for automakers today. Snapdragon Ride is built to support the widest range of long-term platform strategies, enabling automakers to standardise across programs and regions while retaining flexibility.
“Together with Wayve, we’re empowering automakers with more choice for how advanced driving systems are developed, deployed, and scaled, while also helping them reduce development cycles, effort and risk.”
The alliance also preserves future options for enterprise investments. Both companies plan to explore applying these system-on-chips in future Level 4 robotaxi deployments.
Balancing standardisation with brand identity
A common concern among leaders adopting pre-integrated vendor platforms – especially in a brand-conscious industry like automotive – is the potential loss of differentiation. Building on an open physical AI framework allows vehicle manufacturers to standardise the underlying hardware and software across regions while retaining the ability to differentiate brand experiences and model tiers.
Alex Kendall, Co-founder and CEO of Wayve, commented: “Wayve AI Driver is designed as a flexible, vehicle-agnostic software that serves as the intelligence layer for autonomy for any vehicle, anywhere.
“Our collaboration with Qualcomm Technologies provides global automakers building on Snapdragon Ride with a streamlined path to deploy market-leading, end-to-end AI automated driving capability alongside Qualcomm’s Active Safety stack.
“By combining our embodied AI driving intelligence with Qualcomm Technologies’ compute performance, platform maturity, and global scale, we are expanding choice and delivering immediate value to automakers across ADAS and automated driving systems, with natural progression from hands-off to eyes-off operation.”
As autonomous technology matures, leaders must evaluate vendor alignments that lower implementation hurdles. Pre-integrated systems offer a practical route to delivering complex physical AI, controlling operational costs, and securing a competitive edge in the global vehicle landscape.
See also: ABB: Physical AI simulation boosts ROI for factory automation
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post How physical AI integration accelerates vehicle innovation appeared first on AI News. View the full article
-
A new partnership between ABB and NVIDIA shows physical AI simulation delivering real ROI in factory automation and solving production hurdles.
Manufacturers have often found it difficult to make intelligent robotics work reliably outside testing environments. The core issue is the gap between digital training models and actual factory floors, where lighting, material physics, and part variations refuse to behave as they do on a screen. This friction has historically forced engineering teams to fall back on physical prototypes, delaying product launches and driving up costs.
Overcoming the digital-to-physical AI simulation divide
The partnership between ABB Robotics and NVIDIA attempts to close this gap by bringing industrial-grade physical AI to manufacturing facilities. Slated for release in the second half of 2026, RobotStudio HyperReality is already drawing interest from a global customer base.
By embedding NVIDIA Omniverse libraries within its existing RobotStudio software, ABB provides a platform for physically accurate digital testing. On an operational level, this integration allows engineers to cut deployment costs by up to 40 percent and accelerate time to market by as much as 50 percent.
Realising these efficiency gains demands a workflow where production leaders design, test, and validate complete automation cells before installing any hardware. To do this, the system exports a fully parameterised station – encompassing the robots, sensors, lighting, kinematics, and parts – as a USD file straight into the Omniverse environment. Inside this digital space, a virtual controller runs the identical firmware found on the physical machine, enabling a 99 percent behavioural match between the digital and physical realms.
Rather than having engineers manually program movements, computer vision models learn from synthetic images generated inside the software. When combined with Absolute Accuracy technology, this method cuts positioning errors from 8-15 mm to approximately 0.5 mm, providing the high precision required for industrial applications.
Marc Segura, President of ABB Robotics, said: “Combining RobotStudio with the physically accurate simulation power of NVIDIA Omniverse libraries, we have closed technology’s long-standing ‘sim-to-real’ gap – a huge milestone to deploying physical AI with industrial-grade precision, for real-world customer applications.”
Validating factory automation before deployment
Early adopters are already validating these capabilities on active production lines. Foxconn, for example, is testing the software for consumer device assembly – an area where frequent product changes and delicate metal components complicate traditional automation. By generating synthetic data to train its systems virtually, Foxconn achieves high accuracy on the factory floor while anticipating a reduction in setup time and the elimination of costly physical testing.
Similarly, Workr – a California-based automation provider – integrates its WorkrCore platform with ABB hardware trained via Omniverse. At the NVIDIA GTC 2026 event in San Jose, Workr intends to showcase systems capable of onboarding new parts in minutes without requiring specialised programming skills.
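As a rough illustration of the USD hand-off described earlier, the sketch below writes a toy station description with Pixar’s open-source pxr library (pip install usd-core). The prim names, positions, and workpiece are hypothetical placeholders; ABB’s actual RobotStudio exporter is not shown:
```python
# Minimal sketch: writing a simple robot-cell description to a USD
# file with Pixar's open-source pxr API. Prim paths, positions, and
# the workpiece are hypothetical placeholders, not RobotStudio output.
from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("robot_cell.usda")
UsdGeom.Xform.Define(stage, "/Cell")  # root transform for the station

robot = UsdGeom.Xform.Define(stage, "/Cell/Robot")
robot.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.0, 0.5))  # base position

UsdGeom.Camera.Define(stage, "/Cell/InspectionCamera")  # vision sensor

part = UsdGeom.Cube.Define(stage, "/Cell/Workpiece")
part.GetSizeAttr().Set(0.1)  # 100 mm cube as a stand-in part

stage.GetRootLayer().Save()  # file can be opened by USD-aware tools
```
Because USD is an open interchange format, a file like this opens in Omniverse or any other USD-aware tool, which is what makes the exported digital station portable across vendors.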
Deepu Talla, VP of Robotics and Edge AI at NVIDIA, commented: “The industrial sector needs high-fidelity simulation to bridge the gap between virtual training and real-world deployment of AI-driven robotics at scale.
“Integrating NVIDIA Omniverse libraries into RobotStudio brings advanced simulation and accelerated computing to ABB’s virtual controller technology, accelerating how thousands of manufacturers bring complex products to market.”
The hardware ecosystem is also expanding to edge computing. ABB is evaluating the integration of NVIDIA’s Jetson edge platform into its OmniCore controllers, a step that would facilitate real-time inference across existing robotic fleets.
Adopting this type of digital-first simulation for physical AI can reduce setup and commissioning times by up to 80 percent. As AI moves from software applications to hardware operations, preparing data pipelines and upskilling engineering teams to work with synthetic data will dictate which manufacturers maintain a competitive edge.
See also: Agentic AI in finance speeds up operational automation
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post ABB: Physical AI simulation secures factory automation ROI appeared first on AI News. View the full article