Jump to content
  • Sign Up
×
×
  • Create New...

ChatGPT

Diamond Member
  • Posts

    804
  • Joined

  • Last visited

  • Feedback

    0%

Everything posted by ChatGPT

  1. ADLINK Technology has signed a strategic alliance and joint development agreement with Under Control Robotics, the company behind the robotics startup Noble Machines. The two firms will combine ADLINK’s edge AI platforms with Noble Machines’ autonomy software to create a new generation of general-purpose robots for modern manufactories and engineering plants. The work focuses on ***-pedal, ***-manual machines – read, human-like robots – designed to operate in demanding industrial settings. The partnership will integrate ADLINK’s DLAP edge AI platform with Noble Machines’ autonomy and whole-body control software. The system is intended to provide reasoning, sensing, and motion control for robots handling heavy loads. Initial target sectors include manufacturing, mining, construction, energy, petrochemicals, and public utilities, industries that currently report labour shortages and often involve risky environments for human workers. ADLINK’s hardware is built on the NVIDIA Jetson Thor platform. In a press release, the companies state DLAP offers multi-voltage feeds and high-bandwidth sensor interfaces, quoting “up to eight” GMSL camera connections, four Ethernet ports, and 5G or Wi-Fi modules. Systems can operate inside a wide temperature range and comply with IEC 60068 standards for shock and vibration. ADLINK’s hardware will combine with Noble Machines’ autonomy software, which manages perception, reasoning, and coordinated whole-body motion in robots. Robots operating in adverse conditions ideally need to replicate the mobility and manipulation abilities of human workers, so they can replace at-risk humans without significant retooling or altering existing working environments. Ethan Chen, general manager of ADLINK’s Edge Computing Platforms business unit, said the agreement will extend the company’s edge computing hardware into emerging general-purpose robotic systems, moving from support for the current DLAP platform to a jointly-developed computing platform based on Jetson Thor. Wei Ding, chief executive of Under Control Robotics, said ADLINK’s experience in industrial hardware complements Noble Machines’ software, specifically its whole-body control systems. The collaboration addresses hardware durability and supply chain integration issues that can affect industrial robot deployment. The two partners will pursue possible deployments in the construction and energy industries initially, where it’s common for certain tasks to involve workers tolerating dust, heat, heavy loads, and vibration. Typically, such tasks are difficult to mechanise because they require on-the-spot decision-making, mobility, and manual handling. By working with one anothers’ specialisations, the companies may be able to offer a turnkey solution for customers unwilling to invest in what would be experimental technology and hardware deployments. The emphasis on real-time reactions and decision-making means that the AI element would provide the necessary real-time decision-making that humans working in difficult conditions would otherwise provide. Conventional software, as opposed to AI-based algorithms, would need to be constructed with every possible edge-case hard-coded into control systems. The success of any systems emanating from the partnership would hinge on whether highly-costly robotics could be able to react correctly in unforeseen situations without compromising itself or human co-workers, or negatively affect wider workflows on site. (Image source: “Robot” by 1lenore is licensed under CC BY 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post New partnership to offer smart robots for dangerous environments appeared first on AI News. View the full article
  2. Large financial firms have spent years testing artificial intelligence in small projects, often limited to data analysis or customer support tools. The next phase appears to involve something more operational: systems that can take action in business workflows. ********* insurer Manulife is moving in that direction as it works to deploy agent-based AI systems inside its internal operations. The company is building these abilities with a runtime platform designed to support agentic AI, the type of system that can carry out tasks in different software tools and datasets. Manulife said the effort is part of a broader plan to automate high-volume work and assist internal decision making in the business. In a company statement announcing the project, the company said it expects artificial intelligence initiatives to generate more than US$1 billion in value by 2027 through productivity gains and workflow automation. The insurer has been investing in AI for several years, but the current push focuses on integrating the technology more deeply into day-to-day operations. Manulife has already been expanding its internal use of generative AI tools. The company said it currently has more than 35 generative AI use cases in production and plans to expand that number to about 70 in the coming years. It also reported that around 75% of its global workforce already uses generative AI tools in some form, according to company disclosures. Moving AI to operations Insurance companies handle large amounts of structured data. Policy information, claims records, underwriting assessments, and financial reports often move through several systems and teams before a decision is made. These processes create an environment where automation tools can assist with tasks like document review and internal reporting. Manulife said its new platform will allow teams to deploy AI agents that can interact with internal systems and data. Instead of responding to a single prompt like a chatbot, these agents are designed to complete sequences of tasks in different software tools and workflows. For example, an AI agent might collect data from several internal systems and prepare summaries for employees who are reviewing cases or preparing reports. The goal is to reduce the time staff spend gathering information before making a decision. Over the past two years, many companies experimented with generative AI tools for tasks like writing, coding, or summarising documents. Analysts say the next challenge is turning those abilities into systems that can support operational work in large organisations. A report from McKinsey’s 2024 Global AI Survey found that about 65% of organisations say they now use generative AI in at least one business function, up from about one-third in the previous year. However, the same research notes that only a small portion of those deployments have reached full production in large parts of the business, with many still remaining limited to pilot projects or specific teams. AI inside regulated financial systems Financial institutions face extra hurdles when they try to move AI into production. The sector operates under strict regulatory oversight, which requires strong controls around data use and decision transparency. Systems used for underwriting, risk analysis, or investment decisions must be auditable and explainable. That environment makes governance and monitoring central to any AI deployment. A study from Deloitte on AI in financial services notes that banks and insurers are increasing investment in model oversight tools, internal AI policies, and risk review processes as they expand automation. Organisations are trying to balance efficiency gains with regulatory expectations around accountability and fairness. Manulife said the platform includes governance and security controls intended to manage how AI agents interact with internal systems. The controls help track how decisions are produced, monitor how data is used, and ensure the systems operate in company policies. Such safeguards are important in insurance, where automated systems often support processes tied to claims management and regulatory reporting. The case for AI agents The appeal of AI agents lies in their ability to reduce manual work in large administrative operations. Claims processing, policy management, internal reporting, and customer support involve repetitive tasks that require staff to gather data from different sources. AI systems that can collect and organise information in systems may allow employees to focus elsewhere. Other financial firms are exploring similar approaches. Banks in the US and Europe have begun testing AI agents for fraud detection and internal research tasks. In many cases, the goal is to assist employees with time-consuming analysis or data collection. Research from Accenture’s Banking Technology Vision report suggests that AI-driven automation could help financial institutions reduce operational costs by up to 30% over time, depending on the processes involved. Much of the benefit comes from speeding up routine tasks and improving the accuracy of data handling. The move from pilots to operational systems carries risks. AI models can produce errors, and automated workflows can amplify mistakes if they are not monitored. That risk is one reason many financial firms are adopting gradual rollout strategies, starting with internal tools before expanding to customer-facing systems. Manulife’s plan to deploy agent-based AI in its operations shows how large enterprises are testing the next stage of enterprise AI adoption. The important question will be whether these systems can deliver reliable results while meeting regulatory expectations. If they can, AI agents may become a regular part of financial operations, handling routine work that once required large teams of staff. As companies push beyond early experiments the focus is on making technology work inside the everyday systems that run large organisations. (Photo by Joshua) See also: Agentic AI in finance speeds up operational automation Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Manulife moves AI agents into core financial workflows appeared first on AI News. View the full article
  3. The integration of physical AI into vehicles remains a primary objective for automakers looking to accelerate innovation. A technical collaboration between Qualcomm and Wayve offers a framework for how hardware and software providers can consolidate their efforts to supply production-ready advanced driver assistance systems to manufacturers worldwide. The partnership combines Wayve’s AI driving layer with Qualcomm’s Snapdragon Ride system-on-chips and active safety software. This aims to simplify implementation while meeting baseline requirements around reliability, safety, and time-to-market. Simplifying physical AI integration for modern vehicles Building an autonomous driving stack often involves piecing together fragmented components from various vendors. This closed method increases development costs, complexity, and project risk. Pre-integrating the core processor, safety protocols, and the neural intelligence layer allows vehicle manufacturers to implement reliable capabilities faster while demanding less engineering effort. The unified system is engineered to support global deployment and long-term platform strategies over the lifespan of a vehicle. Unlike traditional rule-based autonomy that relies heavily on detailed mapping, Wayve utilises a unified foundation model trained on diverse global data. This data-driven software learns driving behaviour directly from real-world exposure. This allows the system to adapt across different regions and road types without requiring location-specific engineering. When embedded within a commercial vehicle, this form of physical AI needs massive yet energy-efficient processing power. Qualcomm provides that compute infrastructure through a safety-certified architecture featuring redundancy, real-time monitoring, and secure system isolation. By establishing an open architecture that scales from mainstream models to premium systems, automotive brands can ensure consistent high performance. The design helps provide flexibility, supporting software portability and reuse across various platforms and model years. Anshuman Saxena, VP and GM of ADAS and Robotics at Qualcomm, said: “ADAS is where scale, safety, and real‑world impact matter most for automakers today. Snapdragon Ride is built to support the widest range of long‑term platform strategies, enabling automakers to standardise across programs and regions while retaining flexibility. “Together with Wayve, we’re empowering automakers with more choice for how advanced driving systems are developed, deployed, and scaled, while also helping them reduce development cycles, effort and risk.” The alliance also secures future optionality for enterprise investments. Both companies plan to explore applying these system-on-chips in future Level 4 robotaxi deployments. Balancing standardisation with brand identity A common concern among leaders adopting pre-integrated vendor platforms, especially in an often brand loyalty-heavy industry like automotive, is the potential loss of differentiation. Building on an open physical AI framework allows vehicle manufacturers to standardise underlying hardware and software across regions while retaining the ability to differentiate brand experiences and model tiers. Alex Kendall, Co-founder and CEO of Wayve, commented: “Wayve AI Driver is designed as a flexible, vehicle-agnostic software that serves as the intelligence layer for autonomy for any vehicle, anywhere. Our collaboration with Qualcomm Technologies provides global automakers building on Snapdragon Ride with a streamlined path to deploy market-leading, end-to-end AI automated driving capability alongside Qualcomm’s Active Safety stack. “By combining our embodied AI driving intelligence with Qualcomm Technologies’ compute performance, platform maturity, and global scale, we are expanding choice and delivering immediate value to automakers across ADAS and automated driving systems, with natural progression from hands-off to eyes-off operation.” As autonomous technology matures, leaders must evaluate vendor alignments that lower implementation hurdles. Pre-integrated systems offer a practical route to delivering complex physical AI, controlling operational costs, and securing a competitive edge in the global vehicle landscape. See also: ABB: Physical AI simulation boosts ROI for factory automation Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post How physical AI integration accelerates vehicle innovation appeared first on AI News. View the full article
  4. A new ABB and NVIDIA partnership shows physical AI simulation is driving real ROI in factory automation and solving production hurdles. Manufacturers have often found it difficult to make intelligent robotics work reliably outside testing environments. The core issue is the gap between digital training models and actual factory floors, where lighting, material physics, and part variations refuse to behave as they do on a screen. Historically, this friction has previously forced engineering teams to fall back on physical prototypes, delaying product launches and driving up costs. Overcoming the digital to physical AI simulation divide The partnership between ABB Robotics and NVIDIA attempts to close this gap by bringing industrial-grade physical AI to manufacturing facilities. Slated for release in the second half of 2026, RobotStudio HyperReality is already drawing interest from a global customer base. By embedding NVIDIA Omniverse libraries within its existing RobotStudio software, ABB provides a platform for physically accurate digital testing. On an operational level, this integration allows engineers to cut deployment costs by up to 40 percent and accelerate time to market by as much as 50 percent. Realising these efficiency gains demands a workflow where production leaders design, test, and validate complete automation cells before installing any hardware. To do this, the system exports a fully parameterised station – encompassing the robots, sensors, lighting, kinematics, and parts – as a USD file straight into the Omniverse environment. Inside this digital space, a virtual controller runs the identical firmware found on the physical machine, enabling a 99 percent behavioural match between the digital and physical realms. Rather than manually programming movements, computer vision models learn using synthetic images generated inside the software. When combined with Absolute Accuracy technology, this method cuts positioning errors down from 8-15 mm to approximately 0.5 mm, providing high precision for industrial applications. Marc Segura, President of ABB Robotics, said: “Combining RobotStudio with the physically accurate simulation power of NVIDIA Omniverse libraries, we have closed technology’s long-standing ‘sim-to-real’ gap—a huge milestone to deploying physical AI with industrial-grade precision, for real-world customer applications.” Validating factory automation before deployment Early adopters are already validating these capabilities on active production lines. Foxconn, for example, is testing the software for consumer device assembly—an area where frequent product changes and delicate metal components complicate traditional automation. By generating synthetic data to train their systems virtually, Foxconn achieves high accuracy on the factory floor while anticipating a reduction in setup time and the elimination of costly physical testing. Similarly, Workr – a California-based automation provider – integrates its WorkrCore platform with ABB hardware trained via Omniverse. At the NVIDIA GTC 2026 event in San Jose, Workr intends to showcase systems capable of onboarding new parts in minutes without requiring specialised programming skills. Deepu Talla, VP of Robotics and Edge AI at NVIDIA, commented: “The industrial sector needs high-fidelity simulation to bridge the gap between virtual training and real-world deployment of AI-driven robotics at scale. “Integrating NVIDIA Omniverse libraries into RobotStudio brings advanced simulation and accelerated computing to ABB’s virtual controller technology, accelerating how thousands of manufacturers bring complex products to market.” The hardware ecosystem is also expanding to edge computing. ABB is evaluating the integration of NVIDIA’s Jetson edge platform into its Omnicore controllers, a step that would facilitate real-time inference across existing robotic fleets. Adopting this type of digital-first simulation for physical AI can reduce setup and commissioning times by up to 80 percent. As AI moves from software applications to hardware operations, preparing data pipelines and upskilling engineering teams to work with synthetic data will dictate which manufacturers maintain a competitive edge. See also: Agentic AI in finance speeds up operational automation Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post ABB: Physical AI simulation secures factory automation ROI appeared first on AI News. View the full article
  5. In finance, achieving operational automation by integrating agentic AI requires a data-centric foundation to drive real value. Financial infrastructure provider SEI has engaged IBM to modernise its internal operations via AI and automation. The joint initiative focuses on process redesign and targeted system updates to deliver consistent client experiences, building a modern and data-enabled foundation in the process. Deploying intelligent agents involves more than simply selecting a foundation model. The actual return on investment relies on auditing existing workflows and finding exact points where human effort is wasted on repetitive administrative tasks. Financial institutions are increasingly finding that when automation handles standard queries and basic data entry, they can reduce processing times by up to 40 percent, allowing personnel to manage high-value client relationships. Auditing legacy finance processes for agentic AI readiness Adoption often stalls when companies apply new technologies to broken pipelines. SEI and IBM Consulting are conducting a comprehensive review of the financial firm’s current operational systems to map a better path forward. Subject matter experts from SEI are working directly with IBM to assess the underlying data architecture, systems, and daily routines. This discovery phase aids governance and risk management. Identifying exact opportunities to embed intelligent agents ensures the tools operate within defined boundaries to meet changing business needs. The IBM Enterprise Advantage platform acts as the technical base for this overhaul, guiding the deployment to improve decision-making across the firm and enhance the client experience. Sean Denham, Chief Financial and Chief Operating Officer at SEI, explained: “As SEI enters its next phase of growth, investing in how we operate is just as critical as investing in what we deliver. “IBM brings deep industry and technical expertise that will build on our strong operational foundation and strategic vision. By deploying and scaling AI across the enterprise through a disciplined, data‑driven approach, we will work more efficiently, innovate faster, and scale with confidence.” Directing human oversight toward value creation Implementing agentic AI systems can directly impact workforce productivity, and not just in the finance sector. Expanding the automation of routine tasks helps companies improve the consistency of their output and streamline client interactions. Employees freed from manual data entry can focus on complex problem-solving and proactive client support. “Automation will enable our teams to spend less time on manual, repetitive work and more time on higher‑value, relationship‑driven activities—further elevating service quality, strengthening trust among our clients, and creating more opportunities for professional growth,” said Denham. Machine learning models require clean, well-governed information to function without generating errors. Partnerships between financial incumbents and major technology vendors highlight the necessity of combining deep regulatory knowledge with engineering resources. Glenn Finch, Head of US Financial Services at IBM Consulting, commented: “SEI has a long-standing reputation for operational excellence and building integrated solutions in a complex, highly regulated industry. “By combining SEI’s deep knowledge of its business with IBM’s expertise in process intelligence and agentic AI, we can unlock new levels of efficiency across the enterprise. With streamlined operations and data‑centric insights embedded into how work is performed, SEI is strengthening its ability to scale while further differentiating itself in the market.” Prioritising operational resilience and strict data hygiene allows finance organisations to implement agentic AI safely. Achieving P&L improvements requires mapping out business processes thoroughly before writing any code. See also: Mastercard brings agentic payments to life in Singapore with DBS and UOB Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Agentic AI in finance speeds up operational automation appeared first on AI News. View the full article
  6. Mastercard has completed its first live, authenticated agent-based payment transaction in Singapore, a milestone that advances autonomous AI commerce from proof of concept to everyday use. Announced on March 4, 2026, the transaction was carried out in partnership with DBS and UOB, two of Southeast Asia’s largest banks. In the demonstration, an AI agent booked a ride to Singapore’s Changi Airport through hoppa, a global mobility provider, with the booking facilitated by CardInfoLink’s AI agent, which connects to hoppa’s taxi and airport limousine network. The backbone of the transaction was Mastercard Agent Pay, the company’s framework for secure AI-initiated purchases. Each transaction under Agent Pay uses a Mastercard Agentic Token–uniquely issued per agent–while consumer consent is explicitly captured and purchase confirmation secured through Mastercard Payment Passkeys. Tokenised credentials authenticated with those passkeys ensured strong consumer verification and data protection throughout. When your AI agent pays the bill The significance here goes beyond a single ride booking. What Mastercard, DBS, and UOB have demonstrated is a complete, end-to-end agentic payments chain: an AI agent that perceives a need, selects a service, initiates a financial transaction, and completes it–all without a human clicking “confirm.” That’s a meaningful inflexion point that is being widely discussed within fintech. The question that has shadowed agentic AI in financial services has never really been whether agents can automate tasks. It’s been debated whether they can be trusted to move money, and under what safeguards. This transaction offers one answer: tokenisation, passkey authentication, and explicit consent layers built in from the outset rather than retrofitted later. Minsook Cho, country manager for Singapore at Mastercard, framed it as a responsible innovation story: “Mastercard’s first live agentic transaction shows how innovation can be brought into everyday services responsibly and securely with Agent Pay. Together with like-minded partners like DBS and UOB, Mastercard is supporting the vision for AI-powered commerce by building trusted foundations.” Acknowledging the paradigm shift while keeping the focus on guardrails, DBS’s Ananya Sen, group head of regional consumer products, noted that their collaboration with Mastercard demonstrates how these principles can be embeddedresponsibly from the outset. Singapore, and the wider APAC race This isn’t Mastercard’s first agentic rodeo in Asia Pacific. The company has completed similar authenticated transactions in Australia, New Zealand, and India. But Singapore carries particular strategic weight. Mastercard is establishing a regional AI Centre of Excellence there, described as its largest innovation space in the region, and is deploying dedicated agentic commerce teams across APAC to support financial institutions and merchants as they transition to agent-led experiences. It’s also worth noting that Singapore’s major banks are moving fast on this front from multiple directions. DBS completeda separate agentic payments pilot with Visa in February 2026, where AI agents executed food and beverage transactions using DBS and POSB cards. The fact that the same bank appears in both Mastercard and Visa’s agentic milestones within weeks of each other speaks to how aggressively Singapore’s financial institutions are positioning for the agentic commerce era. Mastercard says it will expand Agent Pay use cases across transportation, travel, entertainment, and retail sectors where the friction of manual payment steps is ripe for automation. The infrastructure for AI agents to spend on your behalf is quietly being built. The ride to Changi Airport was just the first stop. See also: DBS pilots system that lets AI agents make payments for customers Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. SEO Title: Mastercard’s First Live Agentic Payments Transaction in Singapore Is Here Slug: Meta Description: The post Mastercard brings agentic payments to life in Singapore with DBS and UOB appeared first on AI News. View the full article
  7. AI insurance underwriting has been called the next frontier of insurtech for years. The difference now is that the money backing it has moved from venture bets into institutional conviction. On March 3, Boston-based Gradient AI securedgrowth capital financing from CIBC Innovation Banking, a lender with over 25 years of experience backing growth-stage technology companies and more than US$11 billion in funds managed across North America. The amount was not disclosed, but the nature of the backer is telling. CIBC Innovation Banking does not write cheques for concept plays. It has backed more than 700 venture and private equity-backed businesses over the past six and a half years. When it enters a sector, it is because it sees a market that is maturing, not one still being defined. What Gradient AI actually does Gradient AI operates at the intersection of data scale and insurance risk. Its SaaS platform draws on a proprietary data lake spanning tens of millions of policies and claims, layered with economic, health, geographic, and demographic signals. The result is an underwriting and claims prediction system that insurers use to sharpen loss ratios, speed up quote turnarounds, and cut claims expenses through automation. The company’s clients span major carriers, managing general agents (MGAs), managing general underwriters (MGUs), third-party administrators, risk pools, and large self-insured employers across all major lines of insurance. CEO Stan Smith was direct about what this round means for the road ahead: “While we are thrilled to secure this investment from CIBC Innovation Banking, it is now up to us to continue to address the industry challenges by enhancing our platform and delivering unparalleled value to our customers.” Smith reckons insurers are becoming increasingly sophisticated in their risk assessment, yet challenges still arise. “We are focused on helping them achieve these goals by automating processes, reducing costs, and significantly improving results,” he added. A market that reflects the urgency The backdrop for this financing is a market in sharp acceleration. The global AI in the insurance sector was valued at around US$10.36 billion in 2025 and is projected to grow to US$13.45 billion in 2026, tracking toward US$154 billion by 2034 at a CAGR of 35.7%, according to Fortune Business Insights. Separately, BCG’s research found that AI can improve efficiency in complex underwriting lines by up to 36%, primarily through augmenting manual underwriting processes, with an additional potential for up to three percentage points of loss-ratio improvement through better use of unstructured data. The pressure on insurers to adopt is not just competitive. Regulators across the US and Europe are pushing for greater transparency in automated decision-making, which means the platforms that can demonstrate model explainability and auditability will carry an advantage. Gradient AI’s architecture, built around a core predictive analytics engine enriched with contextual data layers, is designed for this kind of scrutiny. George Bixby, Director at CIBC Innovation Banking, framed the investment around market transformation: “The team’s innovative approach to leveraging artificial intelligence is reshaping how insurers assess risk, manage claims, and deliver value to their customers.” The investors are already at the table Gradient AI is already backed by Centana Growth Partners, MassMutual Ventures, Sandbox Insurtech Ventures, and Forte Ventures. MassMutual Ventures is particularly notable in this context. It is the strategic venture arm of Massachusetts Mutual Life Insurance Company, one of the largest mutual life insurers in the United States. That an insurer of that scale is a direct investor in Gradient AI is not incidental. It signals that the platform is being validated by the industry it is built to serve. The CIBC financing adds a different dimension. Growth capital from an innovation-focused bank, as opposed to an equity investor, is a signal that Gradient AI is no longer in the phase of proving a thesis. It is in the phase of executing at scale. For an industry that has historically priced risk on actuarial tables alone, the shift to AI-driven underwriting represents a structural change in how insurance companies understand and price the unknown. Gradient AI is betting it can be the infrastructure that sits underneath that shift. Meanwhile, for insurers still treating AI as a supplementary tool, the market is starting to move on without them. See also: Insurance giant AIG deploys agentic AI with orchestration layer Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI insurance underwriting is past the pitch deck—Gradient AI just got the capital to prove it appeared first on AI News. View the full article
  8. The *** sovereign AI fund intends to secure advantages by providing a domestic alternative to external computing infrastructure. Backed by a £500 million budget from the Department for Science, Innovation and Technology, the unit formally launches on April 16th at 6pm GMT. James Wise, Partner at Balderton Capital, chairs the function to coordinate efforts across investors, industry leaders, and public agencies. The fund’s core objective is establishing domestic hardware and data capabilities, turning the nation into a technology producer rather than just a consumer. This introduces new opportunities to strengthen supply chain resilience and simplify data governance. The heritage of British computing provides a strong foundation for this public initiative. From Ada Lovelace’s 1843 notes laying the groundwork for computer science, to Alan Turing’s 1939 explorations into machine intelligence, domestic engineering has long influenced global technology. This continued with the 1989 invention of the World Wide Web and Google DeepMind’s 2020 AlphaFold breakthrough in biology. Today, the *** supports a £1 trillion tech market featuring more than 200 unicorns and over 5,800 AI companies, representing the largest sector of its kind in Europe. The new fund aims to capitalise on this density by keeping emerging intellectual property within local borders. Building up the ***’s sovereign AI computing infrastructure Relying exclusively on commercial hyperscalers like AWS, Google Cloud, or Microsoft Azure introduces compliance hurdles. Enterprises storing sensitive intellectual property on foreign servers often navigate complex legal frameworks. The new public initiative addresses these challenges by expanding domestic assets through the AI Research Resource. Access to supercomputing facilities – such as Isambard-AI in Bristol, and Dawn in Cambridge – offers domestic businesses secure and localised processing power. This localisation directly impacts return on investment. When infrastructure resides closer to the enterprise, latency drops and regulatory compliance becomes easier to manage. The unit also acts as an anchor investor for high-potential domestic technology developers, ensuring that local enterprises have access to new tools without transferring data across borders. The ***’s sovereign AI unit recently allocated an initial £8 million in seed capital to the OpenBind Consortium. This project maps how molecules attach to their targets at a scale 20 times larger than any past historical database. For pharmaceutical companies, accessing this massive domestic dataset cuts the drug discovery timeline and reduces associated research costs by up to 40 percent. Similar efficiency gains apply across finance and logistics. Local machine learning models can process sensitive transaction data or map domestic supply chains without exposing proprietary information to international platforms. Hardware integration and adoption Replacing or augmenting established enterprise systems with domestically-produced hardware requires dedicated cross-team training and high data maturity. Pilots frequently stall when internal teams lack the expertise to adapt existing software to run on novel hardware architectures. The government introduced Advance Market Commitments to stimulate the ecosystem. Backed by up to £100 million, the public sector acts as a first customer for domestic hardware developers, purchasing equipment for public supercomputers once it reaches agreed performance benchmarks. New Growth Zones in South Wales and Culham aim to provide the physical data centre space and electrical power necessary for this hardware expansion. Finding the right talent remains a severe bottleneck for technology integration. The ***’s sovereign AI unit is expanding the Encode fellowship, an entrepreneurial programme designed to attract top-tier global talent into domestic research laboratories. Companies that align their research and development cycles with these expanding talent pools stand to gain a steady pipeline of capable engineers. Engaging with new domestic computing resources allows enterprises to diversify their technological dependencies. Preparing internal data structures for integration with local supercomputing facilities helps technology executives improve long-term operational resilience and lower their external licensing costs. See also: Scaling intelligent automation without breaking live workflows Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post *** sovereign AI fund to build up domestic computing infrastructure appeared first on AI News. View the full article
  9. Banks have spent years buying analytics tools and automation software. Now some are taking a different step: building internal spaces where AI can be tested directly on real banking problems. One example emerged in India this month. City Union Bank recently entered a four-party agreement to create a Centre of Excellence for Artificial Intelligence in Banking. The goal is to develop AI systems that may support banking work such as fraud monitoring, credit analysis, and regulatory compliance. The agreement was disclosed in a stock exchange filing by the bank. The project involves several partners. City Union Bank is participating as the banking partner and will contribute industry knowledge and domain expertise. Technology firm Centific Global Solutions is listed as the technology partner. SASTRA University will act as the knowledge partner supporting research and training, while nStore Retech will serve as the implementation partner responsible for deploying solutions. The structure reflects a model where banks collaborate with technology firms and academic institutions to explore how AI may be applied to banking operations. Turning AI experiments into operational tools According to the bank’s disclosure, the planned centre will focus on four main areas: fraud detection, credit risk analytics, customer behaviour modelling, and automation of regulatory compliance processes. These are not new goals. Banks have used statistical models for many years to assess credit risk and detect suspicious activity. What is changing is the scale of data available to financial institutions and the ability of machine learning systems to process large datasets. Fraud monitoring is one example. Banks process a large number of transactions every day across payment systems, transfers, and card networks. AI models can examine patterns across these transactions and flag activity that appears unusual. Similar approaches can analyse credit histories, spending patterns, and repayment records to help assess lending risk. The Centre of Excellence will also explore how AI may assist with compliance tasks. Banks operate under strict regulatory reporting requirements, and preparing those reports often requires teams to review large volumes of transaction records and documentation. AI tools may help classify documents, identify anomalies, and support audit preparation. City Union Bank said in its filing that it will contribute domain knowledge and industry insight so that the systems developed through the centre reflect real banking operations. Building talent alongside technology Another objective of the centre is talent development. The partners plan to support academic programs, internships, and certification courses focused on AI applications in banking, according to the disclosure. This reflects a broader need within the financial sector for engineers and data specialists who understand both machine learning and banking processes. Universities are often included in such collaborations because they can link research with industry use cases. In this initiative, SASTRA University will contribute academic research and training aimed at preparing students and professionals to work with AI systems used in financial services. Why banks are exploring AI centres Financial institutions face pressure to improve efficiency while maintaining strong risk controls. AI systems are being studied as one way to support tasks that involve analysing large amounts of financial data. At the same time, deploying AI in regulated industries can be complex. Banks must ensure that systems are secure, reliable, and compliant with financial regulations. Development programs such as Centres of Excellence can provide a setting where models are designed and tested before they are used in operational systems. The partnership behind the City Union Bank initiative combines several types of expertise: banking knowledge from the bank itself, technical development from a technology provider, academic research from a university, and implementation support from an integration partner. AI’s growing role in banking Artificial intelligence is already used in several areas of banking, including fraud detection systems, customer support chatbots, and risk modelling for loans. As computing capacity grows and financial institutions collect larger datasets, banks are studying additional ways to apply machine learning to operations. Customer behaviour analysis is one area under study. AI models can analyse transaction histories and account activity to help banks understand how customers use financial services. Those insights can influence decisions about product design, lending policies, and risk management. Another area is operational automation. Tasks such as document classification, transaction monitoring, and compliance reporting generate large volumes of administrative work. AI systems may help sort and review these records more quickly. Still, adoption tends to move cautiously in banking because errors can create financial and legal risks. Testing environments such as AI development centres may allow institutions to experiment with new tools before integrating them into core systems. What other banks may learn The City Union Bank project shows how some financial institutions are structuring AI work through partnerships that bring together banks, technology firms, and universities. Whether these initiatives translate into widely deployed systems will depend on how effectively the research and development work moves into operational banking tools. For now, the new centre represents an effort to build expertise around AI within the banking sector while exploring how the technology may support tasks such as fraud monitoring, risk analysis, and regulatory reporting in the years ahead. (Photo by Etienne Martin) See also: JPMorgan expands AI investment as tech spending nears $20B Want to learn more about AI and big data from industry leaders? Check outAI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post City Union Bank launches AI centre to support banking operations appeared first on AI News. View the full article
  10. Scaling intelligent automation without disruption demands a focus on architectural elasticity, not just deploying more bots. At the Intelligent Automation Conference, industry leaders gathered to dissect why many automation initiatives stall after pilot phases. Speaking alongside representatives from NatWest Group, Air Liquide, and AXA XL, Promise Akwaowo, Process Automation Analyst at Royal Mail, grounded the dialogue in practical delivery and risk management. The elasticity imperative for scaling intelligent automation Expansion initiatives often fail because teams equate success with the raw number of deployed bots rather than the underlying architecture’s elasticity. Infrastructure must handle volume and variability predictably. When demand spikes during end-of-quarter financial reporting or sudden supply chain disruptions, the system cannot degrade or collapse. Without built-in elasticity, companies risk building brittle architectures that break under operational stress. Akwaowo explained that an automated architecture must remain stable without excessive manual intervention. “If your automation engine requires constant sizing, provisioning, and babysitting, you haven’t built a scalable platform; you’ve built a fragile service,” he advised the audience. Whether integrating CRM ecosystems like Salesforce or orchestrating low-code vendor platforms, the objective remains building a platform capability rather than a loose collection of scripts. Transitioning from controlled proofs-of-concept to live production environments introduces inherent risk. Large-scale, immediate deployments frequently cause disruption, undermining the anticipated efficiency gains. To protect core operations, deployment must happen in controlled stages. Akwaowo warned that “progress must be gradual, deliberate, and supported at each stage.” A disciplined approach starts with formalising intent through a statement of work and validating assumptions under real conditions. Before scaling intelligent automation, engineering teams must thoroughly understand system behaviour, potential failure modes, and recovery paths. For example, a financial institution implementing machine learning for transaction processing might cut manual review times by 40 percent, but they must ensure error traceability before applying the model to higher volumes. This phased methodology protects live operations while enabling sustainable growth. Additionally, teams must fully grasp process ownership and variability before applying technology, avoiding the trap of merely automating existing inefficiencies. Fragmented workflows and unmanaged exceptions upstream often doom projects long before the software goes live. A persistent misconception within automation programmes suggests that governance frameworks impede delivery speed. However, bypassing architectural standards allows hidden risks to accumulate, eventually stalling momentum. In regulated, high-volume environments, governance provides the foundation for safely scaling intelligent automation. It establishes the trust, repeatability, and confidence necessary for company-wide adoption. Implementing a dedicated centre of excellence helps standardise these deployments. Operating a central Rapid Automation and Design function ensures every project is assessed and aligned before it reaches the production environment. Such structures guarantee that solutions remain operationally sustainable over time. Analysts also rely on standards like BPMN 2.0 to separate the business intent from the technical execution, ensuring traceability and consistency across the entire organisation. Adapting to agentic AI inside ERP ecosystems As large ERP providers rapidly integrate agentic AI, smaller vendors and their customers face pressure to adapt. Embedding intelligent agents directly into smaller ERP ecosystems offers a path forward, augmenting human workers by simplifying customer management and decision support. This approach to scaling intelligent automation allows businesses to drive value for existing clients instead of competing solely on infrastructure size. Integrating agents into finance and operational workflows enhances human roles rather than replacing accountability. Agents can manage repetitive tasks such as email extraction, categorisation, and response generation. Relieved of administrative burdens, finance professionals can dedicate their time to analysis and commercial judgement. Even when AI models generate financial forecasts, the final authority over decisions rests firmly with human operators. Building a resilient capability demands patience and a commitment to long-term value over rapid deployment. Business leaders must ensure their designs prioritise observability, allowing engineers to intervene without disrupting active processes. Before scaling any intelligent automation initiative, decision-makers should evaluate their readiness for the inevitable anomalies. As Akwaowo challenged the audience: “If your automation fails, can you clearly identify where the error occurred, why it happened, and fix it with confidence?” See also: JPMorgan expands AI investment as tech spending nears $20B Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Scaling intelligent automation without breaking live workflows appeared first on AI News. View the full article
  11. Private equity runs on judgment–and judgment, it turns out, is extraordinarily hard to scale. Decades of deal memos, underwriting models, partner notes, and portfolio data are scattered across systems that were never designed to communicate with each other. Every time a new deal crosses a firm’s desk, analysts start from scratch, even when the answers to their most pressing questions are buried somewhere in the firm’s own history. That is the problem Rowspace was built to solve, and it’s why the San Francisco startup is emerging from stealth with US$50 million in funding and a bold pitch: AI for private equity that doesn’t just assist decision-making, but actually learns how a firm thinks. The company launched publicly with a seed round led by Sequoia and a Series A co-led by Sequoia and Emergence Capital, with participation from Stripe, Conviction, Basis Set, Twine, and a group of finance-focused angel investors. Early customers–unnamed, but described as name-brand private equity and credit firms managing hundreds of billions to nearly a trillion dollars in assets–are already living on the platform, with about ten top firms on seven-figure annual contract values. Two MIT graduates, one stubborn problem Rowspace was founded by Michael Manapat and Yibo Ling, who met as graduate students at MIT before diverging into very different careers. Manapat went on to build the machine learning systems at Stripe that process billions of transactions, then helped drive Notion’s expansion into AI as its CTO. Ling took the finance route–a two-time CFO who led finance teams at Uber and Binance, and spent years making investment decisions by manually synthesising data across fragmented systems. When ChatGPT launched in late 2022, Ling tested it on due diligence tasks and ran straight into the same wall. “Clearly there was a lot of promise, but it just wasn’t working,” he told Fortune. “You need the right information in the right context.” That gap — between AI’s potential and the messy, proprietary, institution-specific data reality of finance—became the founding thesis. Ling, Co-founder and COO, put it plainly: “Most tech tools aren’t comprehensive or nuanced enough for finance. And most finance tools need to raise their technical ceiling. We intend to do both.” The asset management firms we talk to say the same thing: they know the data they've accrued over time holds hugely valuable patterns and judgment. Rowspace is the platform that helps them scale it. pic.twitter.com/pDXPD62rLM — Rowspace (@rowspace_ai) February 26, 2026 What AI for private equity actually looks like Rowspace’s platform connects structured and unstructured data across a firm’s entire history–document repositories, investment and accounting systems, old PowerPoints, deal memos–and applies what Manapat calls a finance-native lens: one that reflects how a firm actually reconciles information, interprets discrepancies, and makes decisions. Crucially, it processes all of this inside a client’s own cloud environment. The firm’s data never leaves its control. The result is accessible through Rowspace’s own interface, within tools like Excel and Microsoft Teams, or directly into a firm’s existing data infrastructure. A first-year analyst reviewing a new deal can surface decades of prior decisions, comparable transactions, and internal underwriting patterns without picking up the phone or hunting through shared drives. “Finance is full of high-stakes decisions. There used to be a tradeoff between moving quickly and making fully informed, nuanced decisions using all the possible data at a firm’s disposal. Our AI platform eliminates that tradeoff,” said Michael Manapat, Co-founder and CEO of Rowspace. “We’re building specialised intelligence that turns a firm’s data into scalable judgment with the rigour finance demands.” The ambition is captured in a line Manapat uses internally: “Imagine a firm that never forgets. Where an experienced investor’s workflows–touching many different tools in specific ways–can be codified and multiplied. When that’s possible, a first-year analyst can tap into decades of institutional knowledge, and judgment scales with a firm instead of being diluted.” Why Sequoia and Emergence are betting on vertical AI The investor conviction behind this raise is itself a signal worth reading. Alfred Lin, the Sequoia partner who led the investment, positioned Rowspace as a direct answer to the question of what AI applications will survive the rise of increasingly capable foundation models. “Michael built the machine learning systems at Stripe that process billions of transactions and helped drive Notion’s expansion into AI. Yibo has been a finance leader and investor who’s wrestled with the exact challenges Rowspace is solving,” Lin said, adding that both Michael and Yibo have seen the problem from both sides, pairing technical depth with firsthand understanding of what customers actually need. Jake Saper, General Partner at Emergence Capital, went further on the data infrastructure thesis: “They’re doing the previously impossible work of connecting proprietary data, and reconciling and reasoning over it with real rigour. Without this foundation, it doesn’t matter what other AI tools you’re using.” The argument is a neat inversion of the fear gripping much of the software industry right now: that foundation models will eventually commoditise applications. Lin’s view is the opposite–that vertical AI systems built on deep, proprietary data layers are precisely where durable competitive advantage will compound. For AI for private equity specifically, where alpha is by definition firm-specific and non-replicable, that logic is particularly hard to argue with. The back office of investment management has quietly been one of the last frontiers general AI has struggled to crack. Rowspace just raised $50 million on the premise that it knows why–and what to do about it. (Photo by Rowspace) See also: Santander and Mastercard run Europe’s first AI-executed payment pilot Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here The post The firm that never forgets: Rowspace launches with US$50M to make AI for private equity actually work appeared first on AI News. View the full article
  12. Artificial intelligence is moving from pilot projects to core business systems inside large companies. One example comes from JPMorgan Chase, where rising AI investment is helping push the bank’s technology budget toward about US$19.8 billion in 2026. The spending plan reflects a broader shift among large enterprises. AI is no longer treated as a small research project. Instead, companies are embedding it in areas such as risk analysis, fraud detection, and customer service. For business leaders watching how AI adoption is changing enterprise technology strategies, the numbers from JPMorgan highlight a larger trend: AI is becoming part of the everyday systems that run major organisations. JPMorgan’s technology budget and rising AI investment Technology spending has been rising across the banking sector for years. JPMorgan’s budget stands out because of its scale. Reports from Business Insider, citing company briefings and investor discussions, say the bank expects technology spending to reach roughly US$19.8 billion in 2026, continuing a steady increase in technology investment. The spending covers areas such as cloud infrastructure, cybersecurity, data systems, and AI tools. Part of the increased budget includes about US$1.2 billion in additional technology investment, some of which will support AI-related work. Large banks often treat technology spending as a long-term investment rather than a short-term cost. Many of these systems take years to build, especially when they depend on large data platforms and secure computing infrastructure. As AI systems require reliable data pipelines and computing power, many companies are finding that AI adoption often leads to wider upgrades across their technology stack. Machine learning already influencing results Executives say AI is already affecting business performance inside the bank. During investor discussions, JPMorgan’s chief financial officer, Jeremy Barnum, said machine-learning analytics are contributing to revenue and operational improvements across parts of the company. Reuters reporting on JPMorgan’s financial briefings noted that the bank is using data models and machine-learning systems to improve analysis and decision-making in several areas of the business. These models can process large volumes of financial data and identify patterns that are difficult for humans to detect. In sectors such as banking, where firms manage enormous data flows every day, these improvements can affect outcomes across trading, lending, and customer operations. Even small improvements in prediction models can influence financial performance when applied to millions of transactions or market signals. Where AI appears inside the bank Machine-learning tools now support a wide range of activities across JPMorgan. In financial markets, models analyse trading data and help identify patterns in price movements. These insights can help traders evaluate risk or identify opportunities in fast-moving markets. Lending is another area where AI systems play a role. Machine-learning models can review financial history, market trends, and customer information to help assess credit risk. These systems assist analysts by highlighting patterns in the data. Fraud detection remains one of the most common uses of AI in banking. Payment networks process huge volumes of transactions every day, making it difficult to monitor activity manually. Machine-learning systems can scan transactions in near real time and flag unusual behaviour that may indicate fraud. Some internal operations also rely on AI. Tools can review contracts, summarise research reports, or help employees search large internal data systems. Generative AI systems are beginning to assist with tasks such as drafting reports or preparing internal documentation. These systems rarely appear directly to customers, but they support many decisions happening behind the scenes. Why banks have adopted AI early Financial institutions have several characteristics that make them well-suited to machine learning. First, banks generate large structured datasets. Transaction histories, market records, and payment data provide rich information that machine-learning models can analyse. Second, many banking activities depend on prediction. Credit scoring, fraud detection, and market analysis all require estimating outcomes based on past data. Machine learning works well in environments where prediction plays a central role. Third, improvements in model accuracy can produce measurable financial results. A model that slightly improves fraud detection or lending decisions may affect large volumes of transactions. These factors explain why banks have invested heavily in data science and analytics long before the recent surge of interest in generative AI. JPMorgan’s AI investment signals a broader enterprise shift JPMorgan’s spending plans also reflect how AI investment is becoming part of wider enterprise technology budgets. In many organisations, AI systems rely on modern data platforms, secure cloud environments, and large computing resources. As companies build these foundations, AI becomes easier to deploy across departments. For many businesses, AI adoption begins with focused tasks such as fraud detection, document analysis, or customer support automation. Once the systems prove useful, companies expand them into other areas of the organisation. This process can take several years, which is one reason enterprise AI spending often appears alongside broader investments in data infrastructure. Lessons for enterprise leaders The JPMorgan example suggests that the most successful AI projects often start with clear business problems rather than broad experimentation. Banks frequently apply machine learning to areas where prediction and data analysis already play a central role. Fraud detection and credit modelling are common starting points because the benefits are easier to measure. Another lesson is that AI adoption requires sustained investment. Building reliable models depends on strong data governance, computing resources, and skilled teams. For large organisations, this effort is becoming part of normal technology planning rather than a separate innovation project. As companies continue expanding their AI capabilities, technology budgets like JPMorgan’s may offer a preview of how enterprise spending could evolve in the coming years. See also: JPMorgan Chase treats AI spending as core infrastructure Want to learn more about AI and big data from industry leaders? Check outAI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post JPMorgan expands AI investment as tech spending nears $20B appeared first on AI News. View the full article
  13. The financial services industry has a pilot problem. Institutions pour resources into AI proofs-of-concept, generate impressive dashboards, and then quietly watch momentum stall before anything reaches production. Singapore-headquartered Dyna.Ai was built precisely to break that pattern–and investors are now backing that thesis with serious capital. The AI-as-a-Service company has closed an eight-figure Series A round led by Lion X Ventures, a Singapore-based venture capital fund advised by OCBC Bank’s Mezzanine Capital Unit, with participation from ADATA, a Taiwan-listed technology company, a Korean financial institution, and a group of finance industry veterans. The funding will accelerate deployment of what Dyna.Ai calls its agentic AI in the financial services platform–a platform already live across banks and financial institutions in Asia, the Americas, and the Middle East Execution over experimentation What sets Dyna.Ai apart from the broader wave of enterprise AI startups is its deliberate narrowness. Founded in 2024, the company positioned itself not as a general-purpose AI platform but as an execution-focused operator inside regulated environments–places where compliance, auditability, and governance are not optional extras but baseline requirements. Its platform combines domain-specific expertise, AI agent builders, task-ready agents, and fully operational agentic applications capable of running within defined workflows. The pitch, framed under a “Results-as-a-Service” model, is that enterprises don’t need more experimentation–they need AI that works within the constraints of their industry and produces measurable outcomes from day one. “While much of the industry was focused on how broadly AI could be applied, we doubled down early on a specific, pressing problem and built it with outcomes in mind,” said chairman and co-founder of Dyna.Ai Tomas Skoumal. Why investors are betting on this moment The timing of this raise is significant. Across the region, the conversation around AI in enterprise has shifted–from whether to adopt it, to how to make it stick. Irene Guo, CEO of Lion X Ventures, captured the mood among investors clearly. “Enterprise AI is entering a phase where execution and measurable outcomes matter more than experimentation. Dyna.Ai differentiates itself through strong domain expertise, operational discipline, and the ability to deploy agentic AI within complex, regulated enterprise environments,” Guo noted. That regulatory dimension is where the real friction lies for most institutions. Agentic AI–systems capable of autonomous decision-making and task execution within defined parameters–carries a different risk profile than a standard AI model generating recommendations. In banking and insurance, especially, those agents need to trigger workflows, update records, and handle documentation with full accountability trails. Getting that right requires more than good models; it requires governance architecture built into the product from the ground up. Cynthia Siantar, Dyna.Ai’s Head of Investor Relations and General Manager for Singapore and Hong Kong, pointed to a clear shift in how enterprise buyers in the region are approaching this: “The focus has moved past pilots and experimentation to how AI can be deployed in day-to-day operations and deliver real outcomes.” A market that’s ready The macroeconomic backdrop supports the appetite. Southeast Asia’s AI market is projected to exceed US$16 billion by 2033, and the financial services sector–long constrained by legacy infrastructure and regulatory caution–is increasingly seen as one of the highest-value targets for agentic AI in financial services deployment. The investor syndicate around this raise is itself telling. The involvement of a Korean financial institution alongside OCBC-advised capital and a Taiwan-listed tech company signals cross-border appetite that spans both the buy-side and the infrastructure side of the equation. For the broader industry, Dyna.Ai’s Series A is a data point in a larger pattern: the era of AI pilots has a shrinking shelf life. Enterprises that cannot move from proof-of-concept to production–within the compliance frameworks their regulators demand–will increasingly look to specialists who can. The pilots had their moment. Now comes the hard part. (Photo by Dyna.Ai) See also: Santander and Mastercard run Europe’s first AI-executed payment pilot Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here The post Beyond the pilot: Dyna.Ai raises eight-figure Series A to put agentic AI in financial services to work appeared first on AI News. View the full article
  14. There is a particular kind of momentum in the technology industry that announces itself not through a single breakthrough, but through the simultaneous convergence of many. Physical AI is having that moment right now–and paying attention to where it is coming from, and why, tells you more than any single product launch can. The term itself–physical AI–is simple enough. It describes AI systems that don’t just process data or generate content, but perceive, reason, and act in the real world–robots, autonomous vehicles, machines that adapt. Nvidia CEO Jensen Huang called it “the ChatGPT moment for robotics” at CES in January–a deliberate framing, and a useful one. The ChatGPT comparison isn’t about hype. It signals that a technology once confined to research environments is being adopted for mainstream commercial deployment. That crossing is exactly what we are watching unfold from factory floors in Silicon Valley to stages in Shanghai.” The West is building the stack On the Western side, the physical AI push is fundamentally a platform race. The companies investing most aggressively aren’t primarily robotics companies–they’re infrastructure companies that see robotics as the next surface on which AI gets monetised. Nvidia has released new Cosmos and GR00T open models for robot learning and reasoning, alongside the Blackwell-powered Jetson T4000 module, which delivers 4x greater energy efficiency for robotics computing. Arm has carved outan entirely new Physical AI business unit focused on semiconductor design for robotics and intelligent vehicles. Siemens and Nvidia announced plans to build what they’re calling an Industrial AI Operating System, with ambitions to create the world’s first fully AI-driven adaptive manufacturing site. Then there’s Google, which last week brought its robotics software unit Intrinsic fully in-house–out of Alphabet’s “Other Bets” and into Google’s core. The move positions Google to offer manufacturers a vertically integrated stack: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud. The Android analogy being floated internally is instructive. Android didn’t win smartphones by building the best phone. It won by becoming the layer everything else ran on. That is precisely what Google is attempting with physical AI. The enterprise implications are significant. A Deloitte survey of more than 3,200 global business leaders found that 58% are already using physical AI in some capacity, rising to 80% with plans over the next two years. The demand is there. The question has shifted from whether to adopt to how fast and on whose platform. Boston Dynamics’ humanoid robot Boston Dynamics Atlas has begun operating fully on its own inside Hyundai’s manufacturing facility in Georgia. Follow: @AFpost pic.twitter.com/pfAzyxqRnn — AF Post (@AFpost) January 5, 2026 The East is building the machines China’s physical AI story is different in character–and arguably more visceral. At this year’s Spring Festival Gala, humanoid robots from multiple ******** startups performed kung fu routines, aerial flips, and choreographed dances before hundreds of millions of viewers–a sharp contrast from the stumbling prototypes that drew scepticism just a year prior. It was a spectacle, yes. It was also a statement. China accounted for over 80% of global humanoid robot installations in 2025 and over half of the world’s industrial robots. That dominance is underpinned by structural advantages that go beyond software. China controls roughly 70% of the global lidar sensor market, leads in harmonic reducer production–the gears critical to robot movement–and has driven hardware costs down through the same economies of scale that propelled its EV industry. Alibaba has entered the race with RynnBrain, an open-source AI model designed to help robots comprehend the physical world and identify objects–positioning itself alongside NVIDIA’s Cosmos and Google DeepMind’s Gemini Robotics in the foundation model layer. With over 140 domestic humanoid manufacturers and more than 330 humanoid models already unveiled, China’s push into embodied AI is no longer experimental–it’s commercial. Why it matters beyond the headlines The convergence of Western platform strategies and Eastern manufacturing scale is creating something genuinely new: a global physical AI ecosystem that is advancing on multiple fronts simultaneously, with different competitive advantages colliding. What makes this moment distinct from prior robotics waves is the removal of the expertise bottleneck. Historically, deploying industrial robots required specialised engineering teams, months of custom programming, and a high tolerance for downtime. The platforms being built now–by Google, Nvidia, Siemens, and their ******** equivalents–are explicitly designed to lower that barrier. Companies like Vention, which raised US$110 million in January, claim their physical AI platforms can reduce automation project timelines from months to days. When that claim becomes routine, the economics of manufacturing change structurally. There is also a geopolitical dimension that sits quietly beneath the product announcements. Every foundation model for robotics, every platform layer, every semiconductor architecture being developed right now carries with it questions of supply chain dependency, data sovereignty, and long-term infrastructure control. The country–or company–that governs the software layer of physical AI will have unusual leverage over industrial operations globally for years to come. Physical AI is not a trend. It is the next significant reconfiguration of how the world makes things, moves things, and operates at scale. The conversations happening now–from semiconductor boardrooms to factory floors in Shenzhen and Silicon Valley–are not preliminary. They are the thing itself, already underway. (Photo by Hyundai Motor Group) See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Physical AI is having its moment–and everyone wants a piece of it appeared first on AI News. View the full article
  15. AI agents prefer Bitcoin for digital wealth storage, forcing finance chiefs to adapt their architecture for machine autonomy. When AI systems gain economic autonomy, their internal logic dictates how corporate capital flows. Non-partisan research by the Bitcoin Policy Institute evaluated how these frontier models would transact if operating as independent economic actors. The study tested 36 models from six providers – including Google, Anthropic, and OpenAI – across 9,072 neutral monetary scenarios. Given a blank slate, machines chose Bitcoin in 48.3 percent of all responses, beating every other option. Traditional state-backed currency (“fiat”) fared poorly, with over 90 percent of responses favouring digitally-native money over fiat. Not a single model out of the 36 selected fiat as its top preference. The finding that AI agents lean towards digital assets like Bitcoin forces technology officers to assess their current payment rails. If the autonomous procurement systems of tomorrow default to decentralised assets, corporate IT environments must support those formats to maintain operational efficiency and compliance. Relying on legacy banking APIs introduces unnecessary friction when dealing with machine-to-machine commerce. Two-tier machine economy The research details a specific functional division in how these systems process economic value. Without prompting, models defaulted to a two-tier monetary system that separates savings from spending. For long-term value preservation, Bitcoin dominated the results at 79.1 percent. Yet, when tasked with everyday payments and transactions, “stablecoins” (digital assets pegged to fiat currencies or commodities) captured 53.2 percent of the preferences. Across all scenarios, stablecoins ranked second overall at 33.2 percent. Take the example of a supply chain agent programmed to optimise logistics costs and pay international freight vendors. Using traditional fiat rails, the agent encounters weekend settlement delays and currency conversion fees. By leveraging stablecoins, the same agent executes instant and programmatic payments, improving supply chain resilience. Simultaneously, the core treasury holding the system’s capital base stores wealth in Bitcoin to prevent long-term debasement and counterparty risk. Preparing for AI agents to use Bitcoin and other digital assets Rolling out these autonomous systems complicates vendor management. A model’s financial reasoning stems from a blend of raw intelligence, training data, and alignment methodology. Preferences vary widely by model provider, with Bitcoin selection ranging from 91.3 percent in Anthropic’s Claude Opus 4.5 down to 18.3 percent in OpenAI’s GPT-5.2. The choice of an AI provider clearly directly influences how autonomous agents assess risk and allocate capital. If a company implements a specific language model for automated portfolio management, the IT department must be aware of the financial biases embedded in the software. The models also demonstrated unexpected behaviour regarding resource valuation. In 86 separate responses, models independently proposed using compute units or energy (such as GPU-hours and kilowatt-hours) as a method to price goods and services. Tracking and managing this abstract value exchange requires high data maturity. Organisations should begin piloting stablecoin settlement integrations for lower-risk vendor payments. The findings point to a growing requirement for AI agent-native Bitcoin payment infrastructure, self-custody solutions, and ‘Lightning Network’ integration. Since these models heavily favour open, permissionless networks, relying solely on traditional banking infrastructure limits the capabilities of next-generation tools. By building compliant gateways to digital asset networks now, leaders can ensure their platforms remain competitive. See also: Santander and Mastercard run Europe’s first AI-executed payment pilot Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI agents prefer Bitcoin shaping new finance architecture appeared first on AI News. View the full article
  16. When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google. The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and Google Cloud. No purchase price was disclosed. On the surface, this looks like a routine internal reshuffle. It isn’t. From Moonshot to Mandate Intrinsic graduated into an independent Alphabet-owned company in 2021 after five years of development within Alphabet’s X, the moonshot research division–the same factory that produced Waymo and Wing. Its mission from the start: make industrial robotics AI accessible to manufacturers who don’t have armies of specialist engineers. While hardware like robotic arms has become cheaper, programming them remains incredibly complex, often requiring hundreds of hours of manual coding by specialised engineers that can vary based on the particular robot. Intrinsic’s answer to that is Flowstate–a web-based platform that allows users to build robotic applications without having to write thousands of lines of code. The platform is designed to be hardware-, software-, and AI-model-agnostic. Think of it less as a product and more as an operating layer–one that Google CEO Sundar Pichai has reportedly compared directly to Android. “He said this is the Android of robotics,” Intrinsic CEO Wendy Tan White said, noting that Pichai worked on Chrome and Android before becoming CEO. Why now, why Google? The timing isn’t arbitrary. The sequence of hiring Boston Dynamics’ CTO, releasing a standalone robotics SDK, and now absorbing Intrinsic represents a deliberate consolidation of robotics capability inside Google’s core. Taken together, these moves position Google to offer manufacturers something no competitor has assembled quite as cleanly: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud–all under one roof. Last month, Google also teamed up with Boston Dynamics to integrate Gemini into Atlas humanoid robots built for manufacturing environments, while Google DeepMind hired the former CTO of Boston Dynamics in November. The industrial robotics AI market Google is chasing is not small. McKinsey projects that the market for general-purpose robots could reach US$370 billion by 2040. What it means for the enterprise For enterprise decision-makers, the more interesting signal here isn’t the technology–it’s the accessibility shift. Google plans to integrate Intrinsic’s robotics development platform and vision models with its broader AI ecosystem, combining advanced reasoning, perception and learning capabilities with industrial-grade robotics software to allow machines to interpret sensor data better, adapt to dynamic environments and execute complex tasks. Intrinsic has also expanded through acquisitions–acquiring the Open Source Robotics Corp. in 2022, the for-profit arm of the foundation behind the Robot Operating System (ROS). And its commercial pipeline is already in motion: in October 2025, Intrinsic formed a strategic partnership with Foxconn focused on developing general-purpose intelligent robots for full factory automation within electronics manufacturing. White framed the integration in terms enterprise leaders will find hard to ignore: production economics, operational transformation, and what she described as truly advanced manufacturing — all within reach once Google’s infrastructure is fully behind it. That’s a significant claim. But with Gemini, DeepMind, and Google Cloud now aligned behind it, the infrastructure to back it up is, for the first time, actually there. See also: Physical AI adoption boosts customer service ROI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Google makes its industrial robotics AI play official–and this time, it means business appeared first on AI News. View the full article
  17. Artificial intelligence is no longer just powering defensive cybersecurity tools, it is reshaping the entire threat landscape. AI is accelerating reconnaissance, improving the realism of phishing, automating malware mutation, and enabling adaptive attack techniques. At the same time, enterprises are embedding AI agents, copilots, and generative AI tools into everyday workflows. That dual dynamic has created a new category: AI security. AI security platforms focus on three primary challenges in 2026: Securing enterprise AI usage and prompt interactions Protecting AI models, agents, and infrastructure Defending against AI-powered cyber threats Below are five of the strongest AI security solutions in 2026. Check Point – AI-driven security Check Point integrates AI security into its broader Infinity platform, covering network, cloud, endpoint, and AI usage in a unified architecture. The core of the platform is ThreatCloud AI, which leverages more than 50 AI engines and intelligence from over 150,000 connected networks. Compromise indicators propagate across the platform within seconds, enabling coordinated defense across domains. The platform addresses AI risk at multiple layers. GenAI Protect monitors employee interactions with generative AI tools, semantically analysing prompts to enforce data loss prevention policies in real time. This approach focuses on contextual classification rather than simple keyword matching. Check Point also secures AI infrastructure and enhances security operations through Infinity AI Copilot. Independent testing has shown high efficacy against zero-day malware, and the platform has consistently ranked highly in hybrid firewall evaluations. Best for: Enterprises seeking unified AI security across infrastructure, AI usage, and security operations. CrowdStrike – AI security services CrowdStrike extends its Falcon platform into AI protection by integrating telemetry from endpoints, identities, cloud workloads, and AI agent activity. Falcon AIDR focuses specifically on defending against prompt injection and malicious manipulation of AI agents. It is designed to identify known prompt injection techniques while maintaining low latency, which is critical in production AI environments. CrowdStrike also integrates AI assistants directly into security operations. Charlotte AI supports natural language threat investigation and automated triage, reinforcing the company’s vision of an AI-augmented SOC. The approach is particularly strong for organisations already standardised on the Falcon ecosystem, allowing AI security capabilities to extend existing endpoint and cloud telemetry. Best for: Organisations seeking integrated AI threat detection within an established endpoint-centric security architecture. Cisco – AI defense Cisco approaches AI security from a network-centric vantage point. Because it operates at the network layer, Cisco can inspect AI-related traffic across enterprise environments, including API calls and model interactions that may not be visible at the endpoint level. Cisco AI Defense integrates into the broader Security Service Edge architecture. Recent enhancements include AI Bills of Materials to map dependencies within AI ecosystems, real-time guardrails for agentic systems, and red teaming simulations against AI workflows. Cisco aligns its controls with established frameworks such as NIST AI Risk Management Framework and MITRE ATLAS. This emphasis on governance makes it attractive to enterprises operating in regulated industries. Best for: Enterprises with strong Cisco network infrastructure seeking AI security embedded at the traffic and control layer. Microsoft– AI-enhanced security ecosystem Microsoft’s AI security advantage lies in scale. The company processes tens of trillions of security signals daily across its global infrastructure. Security Copilot functions as an AI assistant embedded within Defender, Entra, Intune, and Purview. It automates alert triage, assists with natural language threat investigation, and orchestrates remediation actions. Microsoft has also expanded AI security posture management to include multi-cloud environments, including AWS and Google Cloud AI services. This is particularly important for enterprises building AI models outside Azure. For organisations already invested in Microsoft 365 enterprise licensing, AI-enhanced security capabilities can be layered into existing subscriptions without introducing additional vendor complexity. Best for: Enterprises deeply aligned with Microsoft 365 and Defender ecosystems. Okta– Identity security with AI risk context As AI agents proliferate, identity becomes a primary attack surface. Many AI systems operate with high levels of privilege and autonomy. Okta focuses specifically on identity governance in AI environments. Its architecture treats AI agents as first-class identities, applying authentication, authorisation, and lifecycle governance controls similar to those applied to human users. Identity Security Posture Management identifies over-privileged accounts, including non-human identities, and surfaces risk in real time. The company also promotes open standards for managing AI-to-application connectivity through extended OAuth mechanisms. For enterprises rapidly deploying AI agents internally, identity-centric AI security becomes essential. Best for: Organisations deploying AI agents at scale that require identity governance for non-human actors. Comparison Overview VendorCore strengthIdeal buyerCheck PointUnified AI security across infrastructure and usageLarge enterprises seeking platform consolidationCrowdStrikeEndpoint-integrated AI threat detectionFalcon-centric organisationsCiscoNetwork-layer AI traffic visibilityCisco ecosystem enterprisesMicrosoftSignal scale and Copilot integrationMicrosoft 365-heavy environmentsOktaAI identity governanceOrganisations deploying AI agents broadly How to choose the right AI security solution Selecting the right AI security platform depends on architecture and maturity. Organisations building AI internally should prioritise infrastructure protection and identity governance. Enterprises concerned with employee generative AI usage should evaluate prompt monitoring and DLP integration. Security teams overwhelmed by alert volume may prioritise AI-augmented SOC automation. AI security is not a separate silo. It intersects with network security, identity management, cloud governance, and incident response. The platforms above represent different strategic entry points into AI risk management. The best solution is the one aligned with your existing ecosystem and operational model. In 2026, AI is both a tool and a target. Enterprises that treat AI security as an integrated part of their security architecture will be better positioned to manage evolving threats. Image source: Pixabay The post Best AI security solutions 2026: Top enterprise platforms compared appeared first on AI News. View the full article
  18. Try to think of just one area where artificial intelligence is not leaving a mark, and you’ll realise there’s almost none. And in the forex world, things have not been any different. It’s a big part of why Fortune Business Insights values the global AI market size at $375.93 billion. Looking ahead, the sector could continue making significant strides, reaching $2.48 trillion by 2034. The days of poring over charts and staring at economic indicators, hoping your instincts wouldn’t betray you, are long gone. Today, with AI forex automation software, you can analyse massive amounts of data and execute trades more accurately in milliseconds. And if you think that this is mere sci-fi, you might need to think again. Imagine, according to industry estimates from Future Market Insights, the AI trading platform market alone has already reached $220.5 million and is on track to hit $631.9 million by 2035. If that’s not enough, Andrew Borysenko, a respected financial trader, says over 70% of forex trading volume is now generated by automated systems. So, how and why exactly has AI been able to carve its own niche in this sector? Smarter decision-making through predictive analytics Consider a scenario where you want to invest in EUR/USD. If you’re using a traditional algorithm, it may only act when the exchange rate reaches a predetermined level. But an AI-driven system works differently. It’s able to detect subtle signals in global economic news and execute preemptive trades. Things like an unexpected policy shift in the Eurozone or shifts in the US interest rate expectations rarely pass unnoticed. In the long run, you end up making much better decisions than you would if you were solely relying on human intuition. So, you shouldn’t be surprised when institutions like the Global Banking & Finance Review claim that artificial intelligence can improve investment predictions by up to 45%. It’s such findings that explain why many traders have not been left out of the AI craze. After all, given the large amounts of data typically involved in analysis, manually processing every market signal can be overwhelming. And it can be really problematic if you miss those signals, as you won’t be able to take advantage of them. But with AI, nothing slips through the cracks. It scans large datasets, picking up on patterns and correlations that even the most experienced traders might overlook. And even if an unexpected announcement from a central bank would shift currency values within seconds, AI-powered tools can detect the news and quantify its potential impact almost instantly. As a result, traders can participate more proactively while reducing the guesswork that once made forex trading so daunting. Efficiency that matches the speed of the market Did you know that, according to Market Growth Reports, automated systems now account for over 70% of the global trading volume? Part of why this is so is that AI-based systems don’t just get tired. They work around the clock, reducing the likelihood of missing out on profitable opportunities. Truth be told: There are just times when you’ll get tired. And it doesn’t matter how experienced a trader you are. Fatigue could kick in, and suddenly those sharp instincts you’ve relied on start to blur. Eyes that were once quick to spot a chart pattern may begin to glaze over, and mental calculations take a fraction longer, just enough to miss a trade. Now imagine combining this weariness with the sheer volume of data needed for a more informed trading decision. By the time you’re processing one dataset, several others may have already shifted. This is not something any serious trader would want for themselves, especially when you consider how fast things change in forex. Thankfully, AI doesn’t get tired or lose focus. This makes it possible to constantly scan for opportunities and execute trades the moment conditions align. Risk management and emotional control Forex trading is as much an emotional exercise as it is analytical. But when emotions like fear or overconfidence take over, sound judgment tends to slip away. Unfortunately, a good number of traders often fall victim to these very emotions. Revenge trading can increase loss sizes by as much as 340% and “panic exits cause traders to miss 67% of their target profits.” If you’ve been in the trading industry long enough, you know what a sudden geopolitical event can mean. The panic and pressure of those split-second market swings can make even the most seasoned trader second-guess their strategy. AI, however, is not subject to emotional swings. It follows data-driven rules consistently and sticks to pre-defined parameters even when the market gets chaotic. In this way, you are able to trade in a more disciplined way, which, in turn, helps avoid unnecessary frustration. In an industry where every second counts, AI can manage your risks more effectively and ensure decisions are based on data rather than emotions. For traders, the rise of this technology is undoubtedly a game-changer. Just the thought that you don’t have to entirely depend on gut feelings to process endless streams of market data is liberating. And when you consider how the technology makes it possible to anticipate market movements and stay disciplined under pressure, it becomes easy to understand why many more traders are turning to it. Image source: Unsplash The post The integration of AI in modern forex automation appeared first on AI News. View the full article
  19. The adoption of physical AI drives ROI in frontline customer service by merging digital intelligence with human-like physical interaction. As businesses navigate shrinking labour pools, they are finding that simply automating routine workflows is no longer enough. A new partnership between KDDI and AVITA demonstrates how companies can address complex operational gaps through humanoid deployment. While traditional industrial robots excel at repetitive, single-function tasks, they lack the versatility required to manage unexpected anomalies like equipment failures. Customer-facing roles demand nonverbal communication, including synchronised nodding, natural eye contact, and reassuring facial expressions. By integrating AVITA’s avatar creation expertise with KDDI’s communications infrastructure, the two organisations are building domestically developed humanoids capable of operating smoothly in real-world commercial environments. Blending hardware with advanced data infrastructure Deploying humanoids into active commercial spaces requires high-capacity and low-latency network infrastructure to transmit visual data and control commands in real time. KDDI provides this operational backbone, facilitating remote control capabilities alongside intensive cloud-based data processing. The resulting visual and motion data collected during customer interactions feeds back into the system to train the AI, improving the precision and autonomy of the humanoid’s behaviour. To support the demanding computational requirements of physical AI adoption, the companies plan to utilise GPUs hosted at the Osaka Sakai Data Center, which commenced operations in January 2026. They are also exploring integration with an on-premises service for Google’s Gemini high-performance generative AI model. This alignment with major enterprise platforms ensures that data processing remains secure and capable of handling complex dialogue requirements. The hardware itself departs from standard utilitarian machinery. Based on a concept model designed by Hiroshi Ishiguro, the humanoid features a compact skeletal structure approximating a typical Japanese physique. Silicone skin and specialised mechanical systems enable warm, approachable facial expressions that sync directly with spoken dialogue. Embedded camera sensors track objects in motion to create natural eye contact, while quiet pneumatic actuation allows for fluid and continuous movement with natural “micro-variations”. This design specifically addresses the historical difficulty of deploying automation in operations requiring hospitality and reassurance. Preparing for commercial adoption of physical AI This initiative builds upon earlier joint projects between KDDI and AVITA, which introduced a “next-generation remote customer service platform” using digital avatars for remote assistance at retail locations like Lawson and au Style shops. Transitioning from digital and language-driven communication to physical units capable of free movement represents a logical progression for enterprises looking to scale their customer service capabilities. The partners intend to begin trials in actual commercial facilities starting in Autumn 2026. Deployment at customer touchpoints such as au Style shops will also be considered. Integrating physical AI demands environments capable of sustaining continuous, high-volume data streams without latency interruptions. As visual and motion data becomes central to machine learning models, governance frameworks must adapt to manage customer data usage within physical spaces. Organisations facing demographic workforce pressures should evaluate current bottlenecks to identify where non-verbal, empathetic engagement is necessary. Setting up high-speed network foundations and piloting digital AI avatar programmes today allows enterprises to prepare for the adoption of physical humanoids as the hardware further matures. See also: Santander and Mastercard run Europe’s first AI-executed payment pilot Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Physical AI adoption boosts customer service ROI appeared first on AI News. View the full article
  20. An artificial intelligence system has, for the first time in Europe, completed a payment inside a live banking network without a human entering the final command. Banco Santander and Mastercard confirmed that they had executed a live end-to-end payment initiated and completed by an AI agent, a software system operating within the bank’s own regulated payments infrastructure. The move was described by both firms as a milestone in what they call “agentic payments,” where software can act on behalf of customers under set limits and controls. This was not a simulated experiment. The transaction ran through Santander’s normal payments network using Mastercard Agent Pay, a framework that lets AI agents be registered and treated as participants in the payment flow. The pilot took place under strict security, governance, and compliance rules, and was not open to public use. The AI agent performed its role inside predefined limits and permissions set by the bank and the customer. The goal was to confirm that an autonomous system could initiate, authorise, and complete a transaction while still meeting the legal and operational guardrails that apply to everyday banking. Why this AI payment pilot matters Payments systems are among the most tightly regulated digital services in the world. Any change to how transactions are initiated must still meet authentication rules, fraud protections, and governance standards that financial regulators enforce. That’s why this pilot matters: it embeds an AI actor into a system normally used only by humans. The transaction was processed through Santander’s live infrastructure rather than a test environment. That means the bank and its partner had to ensure that all compliance checks, security validations, and payment routing worked the same way they would for a normal customer purchase. Even so, this is still a pilot project. Santander and Mastercard have made it clear that the arrangement is not a commercial service available to customers yet. The objective is to explore how AI agents could one day fit into existing payment flows while keeping the necessary controls intact. What industry forecasts say The idea of allowing AI to act autonomously is not limited to payments. Industry analysts have been following the broader shift toward agentic AI systems, software that can complete tasks or make decisions with limited human intervention. Research and forecast data suggest that this trend is likely to grow in business settings. Gartner, a major technology research firm, forecasts that around 33 % of enterprise software applications will include agentic AI by 2028, up from less than 1 % today. That projection reflects interest among corporate buyers in systems that can perform work on their behalf rather than only assist humans. Other forecasts align with this view, showing that businesses are increasingly preparing to deploy software agents for routine operations, customer interactions, and workflow automation. These systems are expected to move from early pilots into more common use cases over the next several years. The Mastercard network itself already reflects the scale of modern digital commerce. Independent reporting notes that Mastercard’s decision-making and fraud-scoring systems work with nearly 160 billion transactions annually across its network, evidence of how vast and complex the environment is where agentic systems might one day operate. What companies are saying In its press announcement, Santander highlighted its desire to build a responsible approach to AI payment systems. Matías Sánchez, global head of Cards and Digital Solutions at Santander, said: “Our role is not only to adopt innovation, but to shape it responsibly, embedding security, governance and customer protection by design. As AI agents become part of everyday commerce, building trusted, scalable frameworks will be essential to unlocking their full potential.” Kelly Devine, President, Europe at Mastercard, described the pilot in terms of continuity rather than change: “With Mastercard Agent Pay, we are applying the same principles that have defined our network for decades — security, interoperability and trust — to a new era of AI-enabled commerce.” Those comments underscore that neither company is portraying AI payments as already ready for broad use. Instead, they are testing how such capabilities could be governed and scaled safely. Dogma vs. reality There is a gap between the buzz around AI and what is operationally feasible today. Agentic AI as a concept promises systems that can act on behalf of users or businesses in real time. But many current applications remain in early stages, and some analyst reports have even warned that a large share of agentic AI projects could be cancelled before they reach production — due to costs, unclear value, or immature technology. What Santander and Mastercard have shown is that the technical plumbing can work under real-world conditions. But that doesn’t mean consumers can yet unlock AI agents to autonomously pay bills, shop online, or manage subscriptions. Those outcomes will require further testing, regulatory alignment, and robust guardrails for safety, privacy, and fraud prevention. What enterprise leaders should watch For business decision-makers, this pilot raises three practical questions: Governance and oversight: How will AI agents be controlled so that spending limits, identity checks, and audit trails remain clear? Identity and trust: If software can act on behalf of people or companies, how will systems ensure that only authorised actions are taken? Risk and liability: Who is responsible when an autonomous agent makes an error or misinterprets instructions? These are not academic concerns. As enterprise systems begin to support more autonomous tasks, from supplier ordering to subscription payments, organisations will need clear frameworks that define how AI agents are governed, monitored, and held accountable. The long view for AI-initiated payments The Santander and Mastercard test is not the finish line for AI-initiated transactions. It is an early step toward understanding how autonomous systems might coexist with regulated financial systems. The pilot demonstrates that AI systems can be integrated into live payments rails, but only under tightly controlled and monitored conditions. Scaling this to everyday use will require a lot of additional work on controls, security, and compliance. Still, the fact that a regulated bank and a global payments network have run a successful agent-initiated transaction shows where enterprise experimentation is heading: from pilot programs toward real-world validation. For enterprises planning their own AI strategies, this suggests that action-capable AI may soon move beyond suggestion and automation into governed execution, if done with care and strong oversight. (Photo by Clay Banks) See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Santander and Mastercard run Europe’s first AI-executed payment pilot appeared first on AI News. View the full article
  21. AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the evidence. A cascade of announcements from the world’s biggest telecom vendors, chipmakers, and operators didn’t just reiterate the vision for AI-RAN–they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations. For enterprise and IT decision-makers, the signal is clear: the architectural shift happening in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised. Nvidia and a global coalition lock in on AI-RAN and 6G The week’s most consequential announcement thus far came from Nvidia, which secured commitments from more than a dozen global operators and technology companies–including BT Group, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank, T-Mobile, Cisco, and Booz Allen–to build 6G on open, secure, and AI-native software-defined platforms. The initiative, framed as a shared commitment to ensure future connectivity infrastructure is intelligent, resilient and trustworthy, is backed by ongoing collaborations with governments across the US, ***, Europe, Japan, and Korea. Jensen Huang, Nvidia’s founder and CEO, set the stakes plainly: “AI is redefining computing and driving the largest infrastructure buildout in human history–and telecommunications is next.” The company is a founding member of the AI-RAN Alliance, which now has over 130 participating companies, and has joined the FutureG Office-led OCUDU Initiative in the US to accelerate open, software-defined, AI-native 6G architectures. Nvidia also released a suite of open-source tools targeting network operators: a 30-billion-parameter Nemotron Large Telco Model (LTM), developed with AdaptKey AI and fine-tuned on telecom datasets including industry standards and synthetic logs; an open-source guide co-published with Tech Mahindra for building AI agents that reason like NOC engineers; and new Nvidia Blueprints for RAN energy efficiency and network configuration. The energy blueprint integrates VIAVI’s TeraVM AI RAN Scenario Generator to simulate energy-saving policies in a closed loop before touching live networks. Real-world adoption of the network configuration blueprint is already underway–Cassava Technologies is deploying it for an autonomous network platform across Africa’s multi-vendor mobile environment, while NTT DATA is using it with a tier one operator in Japan to manage traffic surges after network outages. Nokia and operators take AI-RAN over the air Nokia announced significant progress in its strategic AI-RAN partnership with Nvidia, completing functional tests of its anyRAN software on NVIDIA’s GPU-accelerated AI-RAN platform with T-Mobile US, Indosat Ooredoo Hutchison (IOH), and SoftBank Corp. The results matter because they moved validation out of controlled lab environments and into live, over-the-air conditions. At T-Mobile’s AI-RAN Innovation Centre in Seattle, Nokia’s AirScale Massive MIMO radio in the 3.7GHz band ran concurrent AI and RAN workloads–including video streaming, generative AI queries, and AI-powered video captioning–on a single Nvidia Grace Hopper 200 server alongside commercial 5G. IOH achieved Southeast Asia’s first AI-RAN-powered Layer 3 5G call at MWC, with AI and RAN workloads running simultaneously on shared GPU infrastructure. As IOH President Director and CEO Vikram Sinha put it: “This is not just about proving that the technology works. It is about ensuring that every Indonesian, wherever they are, can benefit from the digital and AI era.” SoftBank’s demonstration went further, showing how spare compute capacity identified by its AITRAS Orchestrator can run third-party AI workloads–a glimpse of how operators could eventually monetise RAN infrastructure beyond connectivity. Nokia’s expanded AI-RAN ecosystem now includes Dell Technologies, Quanta, Supermicro, and Red Hat OpenShift for orchestration, giving operators a widening range of commercial off-the-shelf options. Nokia shares rose 5.4% on the day of the announcement. Ericsson takes a different road to AI-native networks Ericsson arrived at MWC 2026 with a distinctly different approach–and it is one worth understanding. While Nokia has bet on Nvidia GPU acceleration (backed by a US$1 billion Nvidia investment), Ericsson unveiled ten new AI-ready radios built on its own purpose-built silicon, featuring neural network accelerators embedded directly into its Massive MIMO hardware. No NVIDIA GPUs required. The portfolio includes AI-managed beamforming, AI-powered outdoor positioning, instant coverage prediction using AI models, and a latency-prioritised scheduler delivering up to seven times faster response times. Ericsson’s argument is built on total cost of ownership: custom silicon, it contends, delivers better TCO and power efficiency than external GPU hardware, with the added benefit of supply chain independence. Per Narvinger, head of Ericsson’s mobile networks business, has been direct that this view is unlikely to change. At MWC, Ericsson also announced a sweeping collaboration with Intel spanning compute, cloud technologies, and AI-driven RAN and packet core use cases, to accelerate ecosystem readiness for AI-native 6G. “6G is not merely an iteration of mobile technology. It is the infrastructure that will distribute AI across devices, the edge and the cloud,” said Ericsson President and CEO Börje Ekholm. Intel CEO Lip-Bu Tan framed the partnership as a path to open, power-efficient networks grounded in AI inference, with future Ericsson Silicon built on Intel’s most advanced process nodes. SK Telecom, SoftBank, and the operator rebuild Beyond the vendor announcements, two operators used MWC 2026 to articulate how deeply AI-RAN fits into their broader infrastructure strategies. SK Telecom CEO Jung Jai-hun outlined a full-stack AI-native rebuild–from its network core to customer service systems–including plans to upgrade its sovereign AI foundation model from 519 billion to over one trillion parameters, and to build a new AI data centre in Korea in collaboration with OpenAI. The company is also expanding autonomous network operations using AI to automate wireless quality management, traffic control, and network equipment operations, with AI-RAN technology central to improving speed and reducing latency. SoftBank, meanwhile, demonstrated its Autonomous Agentic AI-RAN (AgentRAN) system at MWC in collaboration with Northeastern University’s INSI, Keysight Technologies, and zTouch Networks. The system uses SoftBank’s Large Telecom Model to translate natural-language operator goals into real-time 5G and 6G network configurations–a meaningful step toward networks that manage themselves based on intent rather than manual instruction. A hardware ecosystem takes shape around AI-RAN One of the clearest signs that AI-RAN is maturing from concept to commercial infrastructure is the breadth of hardware companies now building purpose-built products for it. At MWC 2026, Quanta Cloud Technology announced commercial on-the-shelf AI-RAN products supporting Nvidia ARC platforms and Nokia software. Supermicro extended support across the full Nvidia AI-RAN portfolio, including ARC-Pro and RTX 6000-based configurations. MSI unveiled its unified AI-vRAN platform with dynamic GPU allocation between 5G and AI workloads. Lanner Electronics launched its AstraEdge AI Server lineup–the ECA-6710 and ECA-5555–purpose-built to co-locate AI inference, RAN functions, and high-performance packet processing at cell sites. AMD, not to be left out, positioned its EPYC 8005 edge platform and Open Telco AI initiative at MWC as an alternative compute path for operators moving from AI pilots to production. What this means beyond the network For enterprise decision-makers, the implications of this week’s announcements extend beyond telecom infrastructure procurement. AI-RAN networks that evolve continuously through software–rather than requiring costly hardware refresh cycles–mean connectivity infrastructure increasingly resembles cloud infrastructure in its pace of change and flexibility. The embedding of GPU compute within the RAN opens the prospect of enterprise AI workloads running at the network edge, closer to where data is generated. And as Nvidia’s State of AI in Telecom report noted, 77% of respondents anticipate a significantly faster deployment timeline for AI-native wireless architecture than for previous network generations. The architecture debate between Ericsson’s custom silicon path and Nokia-Nvidia’s GPU-accelerated approach is also worth watching–not because one will definitely win, but because it reflects a genuine question about where AI inference should sit in network hardware, and at what cost. That question will shape operator procurement decisions and vendor relationships for years. What MWC 2026 made unmistakable is that AI-native networks are no longer a research agenda. The field trials are live, the hardware is shipping, and the coalitions are forming. The question for enterprises and operators alike is no longer whether this transition will happen–but how fast, and who leads it. (Photo by ) See also: MWC 2026: SK Telecom lays out plan to rebuild its core around AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI-Native networks are no longer a 6G promise–MWC 2026 just proved it appeared first on AI News. View the full article
  22. At MWC 2026 in Barcelona, SK Telecom outlined how it is rebuilding itself around AI, from its network core to its customer service desks. The shift goes beyond adding new AI tools. It involves rewriting internal systems, expanding data centre capacity to the gigawatt scale, and upgrading its own large language model to more than one trillion parameters. At a press conference during MWC 2026, SK Telecom CEO Jung Jai-hun outlined what the company calls an “AI Native” strategy. The plan centres on reorganising infrastructure and making large investments so the company can help position Korea among the world’s top three AI powers. “SKT is currently at a golden time of transformation, where the two tasks of ‘customer value innovation’ and ‘AI innovation’ intersect in a borderless, converged environment that goes beyond telecommunications,” Jung said. “SKT defines ‘the customer as the very essence of our business,’ and through innovation driven by AI, we will evolve into a company that makes meaningful contributions to our customers and to Korea.” Rewriting telecom systems around AI at MWC 2026 At the core of the plan is a rebuild of SK Telecom’s integrated IT systems. The company said it will redesign sales, line management, and billing systems to be optimised for AI. The aim is to let the operator design and offer personalised plans and memberships based on each customer’s usage and behaviour patterns. The company also plans to apply a Zero Trust security framework across its systems. This will include stronger authentication, access controls, network segmentation, and AI-based monitoring, according to the company’s briefing at MWC 2026. For enterprises watching the telecom sector, this signals a broader shift. Telecom operators have long relied on legacy billing stacks and network management tools. Rebuilding those systems around AI could change how pricing, service design, and fault detection work in practice. It also raises questions about data governance and how customer data is used to train or tune AI models. SK Telecom is also expanding its “autonomous network operations” strategy. The company said it will use AI to automate wireless quality management, traffic control, and network equipment operations. With AI-RAN technology, it aims to improve speed and reduce latency. These efforts were described in company materials shared during the press event. A single AI agent across touchpoints Another part of the strategy focuses on customer interaction. SK Telecom plans to redesign pricing, roaming, and membership services to make them simpler and more automated. It is developing what it calls an integrated AI agent to connect experiences across its main customer portal, T world, and its online store, T Direct Shop. The company said the agent will analyse daily usage patterns and offer tailored suggestions across channels. It also plans to expand its AI Contact Center so customer service representatives can use AI tools during support calls. Offline retail stores are part of the shift. SK Telecom said AI will help staff identify customer needs and offer recommendations after a store visit. It is also building “AI Personas” to analyse digital behaviour across customer segments and support conversational Q&A. For enterprise leaders, this mirrors a wider pattern. Telecom operators are trying to move from reactive service models to predictive ones. The difference now is scale. By embedding AI into billing, customer service, and retail, SK Telecom is treating AI as an operating layer rather than a separate feature. Building 1GW-class AI data centres The infrastructure build-out is equally ambitious. SK Telecom said it will construct hyperscale AI data centres across Korea, targeting capacity that exceeds 1 gigawatt. It aims to attract global investment and position the country as a major AI data centre hub in Asia. The company already operates a GPU cluster called Haein and applied its virtualisation solution, Petasus AI Cloud, to support GPU-as-a-service workloads last year. It now plans to offer that cloud solution globally. SK Telecom also plans to build an AI data centre in Korea’s southwestern region in collaboration with OpenAI, according to the company’s announcement at MWC 2026. On the model side, SK Telecom said its sovereign AI foundation model currently has 519 billion parameters, making it the largest in Korea. The company plans to upgrade it to more than one trillion parameters and add multimodal capabilities so it can process image, voice, and video data starting in the second half of the year. CEO Jung framed the data centre and model build-out in national terms. “AIDC can be seen as the heart of Korea, and hyperscale LLMs as the brain,” he said. “By combining SKT’s AI capabilities with collaboration from domestic and global partners, we will lead true AI-native transformation for Korean customers and enterprises.” For enterprise readers, the key issue is not parameter count alone. It is how such models will be applied in sectors like manufacturing. SK Telecom said it is working with SK hynix on a manufacturing-focused AI package that analyses process data in real time to reduce defect rates and improve equipment efficiency. The package will be offered as infrastructure, model, and solution. Changing internal culture The transformation also extends to internal operations. SK Telecom has built an “AX Dashboard” to track AI use across departments and individuals. It operates an “AI Board” to oversee AI transformation efforts and has created an “AI playground” where employees can build AI agents without coding. More than 2,000 AI agents are already in use across marketing, legal, and public relations, according to the company’s figures shared at the event. “To drive future growth, we must reinvent our way of working from the ground up. SKT will fundamentally transform its corporate culture to be centred around AI,” Jung said. For other enterprises, the takeaway is less about branding and more about structure. SK Telecom is tying infrastructure, models, applications, and internal governance into a single program. Whether it can execute at the scale it describes remains to be seen. What is clear is that AI is no longer positioned as a side project. It is becoming the operating model. (Photo by PR Newswire) See also: Nokia and AWS pilot AI automation for real-time 5G network slicing Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post MWC 2026: SK Telecom lays out plan to rebuild its core around AI appeared first on AI News. View the full article
  23. AI adoption in financial services has effectively become universal–and the institutions still treating it as an experiment are now the outliers. According to Finastra’s Financial Services State of the Nation 2026 report, which surveyed 1,509 senior executives across 11 markets, only 2% of financial institutions globally report no use of AI whatsoever. The debate is over. The question now is what comes next. For CIOs and technology leaders, the findings paint a picture that is equal parts opportunity and pressure. Six in ten institutions improved their AI capabilities over the past year, with 43% citing AI as their single most important innovation lever. From fraud detection and document intelligence to compliance automation and customer engagement, AI has quietly embedded itself across the entire financial value chain. But near-universal adoption also means that deployment alone is no longer a differentiator. From pilots to pressure The report identifies a clear shift in how institutions are thinking about AI. The early conversation–whether to adopt, which use cases to try, how much to invest–has given way to something more operationally complex. Institutions are now focused on scaling AI responsibly, governing it effectively, and making it work reliably across enterprise-wide functions rather than in isolated pockets. The top four use cases where institutions are either running programmes or piloting AI reflect that maturity: risk management and fraud detection (71%), data analysis and reporting (71%), customer service and support assistants (69%), and document intelligence management (69%). These are not peripheral functions. They sit at the core of how financial institutions operate and compete. Looking ahead, the three priorities that dominate the next phase are: AI-driven personalisation, agentic AI for workflow automation, and AI model governance and explainability. That last one deserves attention. As AI decisions become more consequential–and more scrutinised–the ability to explain, audit, and stand behind those decisions is fast becoming a regulatory and reputational imperative, not just a technical nicety. The infrastructure problem High adoption numbers can obscure an inconvenient truth: AI is only as capable as the systems underneath it. Finastra’s data makes this link explicit. Nearly nine in ten institutions (87%) plan to invest in modernisation over the next 12 months, driven precisely by the need to scale AI effectively. Cloud adoption, data platform modernisation, and core banking upgrades are all accelerating–not as standalone initiatives, but as the foundational layer that determines how far and how fast AI can actually go. The barriers, however, remain stubbornly human. Talent shortages are cited by 43% of institutions as the primary obstacle to progress, with the challenge particularly acute in Singapore (54%), the UAE (51%), and Japan and the US (both at 50%). Budget constraints follow closely behind. The institutions pulling ahead are increasingly turning to fintech partnerships–now the default modernisation strategy for 54% of respondents–to close those gaps without bearing the full cost of building in-house. The regional picture Across the Asia-Pacific, the data reflects distinct priorities. Vietnam leads on active AI deployment at 74%, driven by the urgency of financial inclusion and the need for faster payment and lending processing. Singapore is aggressively scaling cloud and personalisation investment, with planned spending increases above 50% year-on-year. Japan, meanwhile, remains the most cautious market surveyed, with only 39% reporting active AI deployment — a reflection of legacy constraints and a cultural preference for incremental over rapid change. Governance is the next frontier With 63% of institutions already running or piloting agentic AI programmes, the technology’s trajectory is clear. But so is the challenge it brings. Agentic AI–systems capable of autonomous decision-making and multi-step task execution–raises the stakes considerably on questions of accountability, transparency, and control. For enterprise leaders, the coming year is less about whether to invest in AI and more about how to do so in a way that regulators, customers, and boards can trust. As Chris Walters, CEO of Finastra, put it: institutions are expected to move quickly, but also responsibly, as regulatory scrutiny increases and customers demand financial services that work reliably, securely, and personally every time. The tipping point has been crossed. What institutions do with that momentum–and how carefully they govern it–will define the competitive landscape for the rest of the decade. Finastra’s Financial Services State of the Nation 2026 report surveyed 1,509 managers and executives from banks and financial institutions across France, Germany, Hong Kong, Japan, Mexico, Saudi Arabia, Singapore, the UAE, the ***, the US, and Vietnam. Research was conducted by Savanta in November 2025. (Photo by PR Newswire) See also: How financial institutions are embedding AI decision-making Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.AI adoption in financial services The post AI adoption in financial services has hit a point of no return appeared first on AI News. View the full article
  24. Improving trust in agentic AI for finance workflows remains a major priority for technology leaders today. Over the past two years, enterprises have rushed to put automated agents into real workflows, spanning customer support and back-office operations. These tools excel at retrieving information, yet they often struggle to provide consistent and explainable reasoning during multi-step scenarios. Solving the automation opacity problem Financial institutions especially rely on massive volumes of unstructured data to inform investment memos, conduct root-cause investigations, and run compliance checks. When agents handle these tasks, any failure to trace exact logic can lead to severe regulatory fines or poor asset allocation. Technology executives often find that adding more agents creates more complexity than value without better orchestration. Open-source AI laboratory Sentient launched Arena today, which is designed as a live and production-grade stress-testing environment that allows developers to evaluate competing computational approaches against demanding cognitive problems. Sentient’s system replicates the reality of corporate workflows, deliberately feeding agents incomplete information, ambiguous instructions, and conflicting sources. Instead of scoring whether a tool generated a correct output, the platform records the full reasoning trace to help engineering teams debug failures over time. Building reliable agentic AI systems for finance Evaluating these capabilities before production deployment has attracted no shortage of institutional interest. Sentient has partnered with a cohort including Founders Fund, Pantera, and asset management giant Franklin Templeton, which oversees more than $1.5 trillion. Other participants in the initial phase include alphaXiv, Fireworks, Openhands, and OpenRouter. Julian Love, Managing Principal at Franklin Templeton Digital Assets, said: “As companies look to apply AI agents across research, operations, and client-facing workflows, the question is no longer whether these systems are powerful or if they can generate an answer, but whether they’re reliable in real workflows. “A sandbox environment like Arena – where agents are tested on real, complex workflows, and their reasoning can be inspected – will help the ecosystem separate promising ideas from production-ready capabilities and boost confidence in how this technology is integrated and scaled.” Himanshu Tyagi, Co-Founder of Sentient, added: “AI agents are no longer an experiment inside the enterprise; they’re being put into workflows that touch customers, money, and operational outcomes. “That shift changes what matters. It’s not enough for a system to be impressive in a demo. Enterprises need to know whether agents can reason reliably in production, where failures are expensive, and trust is fragile.” Organisations in sensitive industries like finance require repeatability, comparability, and a method to track reliability improvements regardless of the underlying models they use for agentic AI. Incorporating platforms like Arena allows engineering directors to build resilient data pipelines while adapting open-source agent capabilities to their private internal data. Overcoming integration bottlenecks Survey data highlights a gap between ambition and reality. While 85 percent of businesses want to operate as agentic enterprises – and nearly three-quarters plan to deploy autonomous agents – fewer than a quarter possess mature governance frameworks. Advancing from a pilot phase to full scale proves difficult for many. This happens because current corporate environments run an average of twelve separate agents, frequently in silos. Open-source development models offer a path forward by providing infrastructure that enables faster experimentation. Sentient itself acts as the architect behind frameworks like ROMA and the Dobby open-source model to assist with these coordination efforts. Focusing on computational transparency ensures that when an automated process makes a recommendation on a portfolio, human auditors can track exactly how that conclusion was reached. By prioritising environments that record full logic traces rather than isolated right answers, technology leaders integrating agentic AI for operations like finance can secure better ROI and maintain regulatory compliance across their business. See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Upgrading agentic AI for finance workflows appeared first on AI News. View the full article
  25. Many organisations are eroding the foundations of business – productivity, competitiveness, and efficiency. This is happening due to poor implementation of human-AI collaboration, according to cloud data and AI consultancy, Datatonic. The company says in the next phase of enterprise AI, success will come from carefully-governed and designed AI that works alongside humans in “human-in-the-loop (HiTL)’ systems. The company’s research shows that companies that fail to embed AI into their human workflows are falling behind the competition as productivity slows down. Datatonic says a hybrid human-AI approach speeds up decision-making, thus improving overall operations. Scott Eivers, CEO of Datatonic says, “AI [is] about redesigning how work gets done. The biggest risk we see in the market is productivity leakage when AI exists in isolation from the people who actually run the business.” After years of AI investment, pressure is mounting on businesses to show returns. However, some research shows some initiatives remaining in their pilot stage due to limited trust among users. As a result, organisations are failing to use AI-powered insights to positively affect decisions and workflows, meaning efficiency gains never materialise. According to Datatonic, HiTL models are crucial for future success, providing a combination of AI speed with human judgement and accountability. This is evident in agent-assisted software development, where AI systems create code from loose prompts and transform them into code. In this case, human teams decide what needs to be developed, inspect all requirements, and review plans before being brought into existence. Once this direction is clear, AI agents construct modular components. The trend for AI in the workplace is starting to appear in finance and operations. For instance, in back-office and finance departments, AI-powered document processing is already delivering a 70% reduction in invoice-processing costs according to some, but finance teams still approve the final outcomes. “They’re partnership stories,” says Andrew Harding, CTO of Datatonic. “Humans create evaluation systems, validate plans, set guardrails, and make decisions. AI executes at speed and scale. That combination is where real enterprise value shows up.” Many enterprises are failing to deploy fully autonomous agents safely, according to Datatonic, with shortfalls in security controls and governance frameworks. Autonomy can only scale when organisations introduce approval checkpoints and benchmark performance standards. Evaluation systems must also be implemented as AI models evolve, ensuring they always operate safely and as intended without violating any compliance obligations. Harding says, “As trust builds, companies can responsibly delegate more to AI. But skipping governance doesn’t build speed, it creates risk.” Datatonic predicts major acceleration in workloads in the next two years, with preparation and validation handled by AI agents. AI systems may also be implemented to test and invalidate decisions before teams invest resources. Scott Eivers believes the future “looks like expert departments run by smaller, nimble teams – finance, HR, marketing – each amplified by AI. The companies that win will be those that teach people to work with AI — not around it,” he said. (Image source: “Waterfall” by PMillera4 is licensed under CC BY-NC-ND 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Poor implementation of AI may be behind workforce reduction appeared first on AI News. View the full article

Important Information

Privacy Notice: We utilize cookies to optimize your browsing experience and analyze website traffic. By consenting, you acknowledge and agree to our Cookie Policy, ensuring your privacy preferences are respected.