  1. Agentic AI drives finance ROI in accounts payable automation

Finance leaders are driving ROI using agentic AI for accounts payable automation, turning manual tasks into autonomous workflows. While general AI projects saw return on investment rise to 67 percent last year, autonomous agents delivered an average ROI of 80 percent by handling complex processes without human intervention. This performance gap demands a change in how CIOs allocate automation budgets.

Agentic AI systems are now advancing the enterprise from theoretical value to hard returns. Unlike generative tools that summarise data or draft text, these agents execute workflows within strict rules and approval thresholds.

Boardroom pressure drives this pivot. A report by Basware and FT Longitude finds nearly half of CFOs face demands from leadership to implement AI across their operations. Yet 61 percent of finance leaders admit their organisations rolled out custom-developed AI agents largely as experiments to test capabilities rather than to solve business problems. These experiments often fail to pay off.

Traditional AI models generate insights or predictions that require human interpretation. Agentic systems close the gap between insight and action by embedding decisions directly into the workflow. Jason Kurtz, CEO of Basware, explains that patience for unstructured experimentation is running low. “We’ve reached a tipping point where boards and CEOs are done with AI experiments and expecting real results,” he says. “AI for AI’s sake is a waste.”

Accounts payable as the proving ground for agentic AI in finance

Finance departments now direct these agents toward high-volume, rules-based environments. Accounts payable (AP) is the primary use case, with 72 percent of finance leaders viewing it as the obvious starting point. The process fits agentic deployment because it involves structured data: invoices enter, require cleaning and compliance checks, and result in a payment booking.

Teams use agents to automate invoice capture and data entry, a daily task for 20 percent of leaders. Other live deployments include detecting duplicate invoices, identifying fraud, and reducing overpayments. These are not hypothetical applications; they represent tasks where an algorithm can function with high autonomy when the parameters are correct.

Success in this sector relies on data quality. Basware trains its systems on a dataset of more than two billion processed invoices to deliver context-aware predictions. This structured data allows the system to differentiate between legitimate anomalies and errors without human oversight. Kevin Kamau, Director of Product Management for Data and AI at Basware, describes AP as a “proving ground” because it combines scale, control, and accountability in a way few other finance processes can.

The build versus buy decision matrix

Technology leaders must next decide how to procure these capabilities. The term “agent” currently covers everything from simple workflow scripts to complex autonomous systems, which complicates procurement.

Approaches split by function. In accounts payable, 32 percent of finance leaders prefer agentic AI embedded in existing software, compared to 20 percent who build it in-house. For financial planning and analysis (FP&A), 35 percent opt for self-built solutions versus 29 percent for embedded ones.
This divergence suggests a pragmatic rule for the C-suite. If the AI improves a process shared across many organisations, such as AP, embedding it via a vendor solution makes sense. If the AI creates a competitive advantage unique to the business, building in-house is the better path. Leaders should buy to accelerate standard processes and build to differentiate.

Governance as an enabler of speed

Fear of autonomous error slows adoption. Almost half of finance leaders (46%) will not consider deploying an agent without clear governance. This caution is rational; autonomous systems require strict guardrails to operate safely in regulated environments.

Yet the most successful organisations do not let governance stop deployment. Instead, they use it to scale. These leaders are significantly more likely to use agents for complex tasks like compliance checks (50%) than their less confident peers (6%). Anssi Ruokonen, Head of Data and AI at Basware, advises treating AI agents like junior colleagues: the system must earn trust and should not make large decisions immediately. He suggests testing thoroughly and introducing autonomy slowly, keeping a human in the loop to maintain responsibility.

Digital workers also raise concerns about displacement. A third of finance leaders believe job displacement is already happening. Proponents argue agents shift the nature of work rather than eliminating it: automating manual tasks such as extracting information from PDFs frees staff to focus on higher-value activities. The goal is to move from task efficiency to operating leverage, allowing finance teams to manage faster closes and make better liquidity decisions without increasing headcount.

Organisations that use agentic AI extensively report higher returns. Leaders who deploy agentic AI tools daily for tasks like accounts payable achieve better outcomes than those who limit usage to experimentation. Confidence grows through controlled exposure; successful small-scale deployments lead to broader operational trust and increased ROI.

Executives must move beyond unguided experimentation to replicate the success of early adopters. The data shows that 71 percent of finance teams with weak returns acted under pressure without clear direction, compared to only 13 percent of teams achieving strong ROI. Success requires embedding AI directly into workflows and governing agents with the discipline applied to human employees. “Agentic AI can deliver transformational results, but only when it is deployed with purpose and discipline,” concludes Kurtz.
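The rules-and-thresholds pattern this piece describes can be illustrated with a minimal sketch. The invoice fields, threshold value, and handler names below are hypothetical, invented for illustration rather than taken from Basware's actual system:

```python
from dataclasses import dataclass

# Hypothetical approval threshold: the agent acts autonomously below it
# and escalates to a human above it (illustrative value only).
APPROVAL_THRESHOLD = 10_000.00

@dataclass
class Invoice:
    vendor_id: str
    number: str
    amount: float

def is_duplicate(invoice: Invoice, processed: set[tuple[str, str]]) -> bool:
    """Flag invoices whose (vendor, number) pair has already been booked."""
    return (invoice.vendor_id, invoice.number) in processed

def handle_invoice(invoice: Invoice, processed: set[tuple[str, str]]) -> str:
    """Resolve an invoice within strict rules, escalating past the threshold."""
    if is_duplicate(invoice, processed):
        return "rejected: duplicate"
    if invoice.amount > APPROVAL_THRESHOLD:
        return "escalated: human approval required"
    processed.add((invoice.vendor_id, invoice.number))
    return "booked for payment"

processed: set[tuple[str, str]] = set()
for inv in [Invoice("V001", "INV-42", 950.00),
            Invoice("V001", "INV-42", 950.00),    # duplicate of the first
            Invoice("V002", "INV-7", 25_000.00)]: # above the threshold
    print(inv.number, "->", handle_invoice(inv, processed))
```

The point of the sketch is the shape, not the rules themselves: the agent resolves routine cases end to end, and anything outside its mandate is routed to a human rather than decided autonomously.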
  2. Newsweek CEO Dev Pragad warns publishers: adapt as AI becomes news gateway

Author: Dev Pragad, CEO, Newsweek

As artificial intelligence platforms increasingly mediate how people encounter news, media leaders are confronting an important change in the relationship between journalism and the public. AI-driven search and conversational interfaces now influence how audiences discover and trust information, often before they visit a publisher’s website. According to Dev Pragad, the implications for journalism extend beyond traffic metrics or platform optimisation. “AI has effectively become a front door to information. That changes how journalism is surfaced, how it is understood, and how publishers must think about sustainability.”

AI is redefining news distribution

For a long time, digital journalism relied on predictable referral patterns driven by search engines and social platforms. That model is now under strain as AI systems summarise reporting directly in their interfaces, reducing the visibility of original sources. While AI tools can efficiently aggregate information, Pragad argues they cannot replace the editorial judgement and accountability that define credible journalism. “AI can synthesise what exists,” he said. “Journalism exists to establish what is true.” This has prompted publishers to rethink distribution, as well as the formats and institutional signals that distinguish professional reporting from automated outputs.

Why publishers cannot rely on traffic alone

One of the main challenges facing news organisations is the decoupling of audience understanding from direct website visits. Readers may consume accurate summaries of events without ever engaging with the reporting institution behind them. “That reality requires honesty from publishers. Traffic alone is not a stable foundation for sustaining journalism,” Pragad said. At Newsweek, this has led to an emphasis on revenue diversification, brand authority, and content formats that retain value even when summarised.

Content AI cannot commoditise

Pragad points to several forms of journalism that remain resistant to AI commoditisation:

  • In-depth investigations
  • Expert-led interviews and analysis
  • Proprietary rankings and research
  • Editorially-contextualised video journalism

“These formats anchor reporting to accountable institutions,” he said. “They carry identity and credibility in ways that cannot be flattened into anonymous data.”

Trust as editorial infrastructure

As AI-generated content becomes more prevalent, trust has emerged as a defining competitive advantage for journalism. “When misinformation spreads easily and AI text becomes harder to distinguish from verified reporting, trust becomes infrastructure,” Pragad said. “It determines whether audiences believe what they read.” Editorial credibility is cumulative and fragile, he said. Once lost, it cannot be quickly rebuilt.

The case for publisher-AI collaboration

Rather than resisting AI outright, Pragad advocates structured collaboration between publishers and technology platforms. That includes clearer attribution standards and fair compensation models when journalistic work is used to train or inform AI systems. “Journalism underpins the quality of AI outputs. If reporting weakens, AI degrades with it.”

Leading Newsweek through industry transition

Since taking leadership in 2018, Pragad has overseen Newsweek’s expansion in digital formats, global platforms, and diversified revenue streams. That evolution required acknowledging that legacy distribution models would not survive intact.
“The goal isn’t to preserve old systems; it’s to preserve journalism’s role in society.”

Redesigning, not resisting, the future of media

Pragad believes the publishers best positioned for the AI era will be those that emphasise editorial identity and adaptability over scale alone. “This is not a moment for nostalgia; it’s a moment for redesign.” As AI continues to reshape how information is accessed, Pragad argues that the enduring value of journalism lies in its ability to explain and hold power accountable, regardless of the interface delivering the news.
  3. How e& is using HR to bring AI into enterprise operations

For many enterprises, the first real test of AI is not customer-facing products or flashy automation demos. It is the quiet machinery that runs the organisation itself. Human resources, with its mix of routine workflows, compliance needs, and large volumes of structured data, is emerging as one of the earliest areas where companies are pushing AI into day-to-day operations.

That shift is visible in how large employers are rethinking workforce systems. The telecommunications group e& began moving its human resources operations to what it describes as an AI-first model, covering roughly 10,000 employees across its organisation. The transition is built on Oracle Fusion Cloud Human Capital Management (HCM), running in an Oracle Cloud Infrastructure dedicated region. Details of the deployment were outlined in a recent Oracle announcement.

The change is less about introducing a single AI feature and more about restructuring how HR processes are handled. Automated and AI-driven tools are expected to help HR departments with recruitment screening, interview coordination, and employee learning recommendations. The stated goal is to standardise processes across regions and provide managers with faster access to workforce data and insights.

HR as an enterprise AI proving ground

From an enterprise perspective, HR is a logical entry point. Many HR tasks follow repeatable patterns: candidate matching, onboarding documentation, leave management, and training assignments. These workflows produce consistent data trails, which makes them easier to model and automate than loosely defined knowledge work. Moving such functions onto AI-supported systems allows organisations to test reliability, governance, and user acceptance in a controlled environment before expanding into more sensitive areas.

The infrastructure choice also indicates how enterprises are balancing innovation with compliance. Oracle says the system is deployed in a dedicated cloud region designed to address data sovereignty and regulatory requirements. For multinational corporations, workforce data sits at the intersection of privacy law, employment regulation, and corporate governance. Running AI tools in a controlled environment is part of how companies are trying to contain risk while experimenting with automation.

Governance, compliance, and internal risk management

The e& rollout reflects a broader pattern in enterprise AI adoption: internal transformation is often more achievable than external disruption. Customer-facing AI systems attract attention, but they introduce reputational and operational risk if they fail. HR platforms, by contrast, operate behind the scenes. Errors can still carry consequences, yet they are easier to monitor, audit, and correct within existing governance structures.

Industry research supports the idea that internal operations are becoming a primary testing ground. Deloitte’s 2026 State of AI in the Enterprise report found that organisations are increasingly shifting AI projects from pilot stages into production environments, with productivity and workflow automation cited as early areas of return. The report is based on a survey of more than 3,000 senior leaders involved in AI initiatives, including respondents in Southeast Asia. While the study spans multiple business functions, administrative and operational processes were repeatedly identified as practical entry points for scaled deployment.

Workforce systems also provide a natural setting for AI agents and assistants.
HR teams handle frequent employee queries about policies, benefits, and training options. Embedding conversational tools into these workflows may reduce manual workload while giving employees faster access to information. According to Oracle’s description of the deployment, e& plans to introduce digital assistants designed to support candidate engagement and employee development tasks. Whether such tools deliver consistent value will depend on accuracy, oversight, and how well they integrate with existing HR processes.

Scaling AI inside the organisation

The lesson is not that HR automation is new, but that AI is changing the scope of what can be automated. Traditional HR software focused on record-keeping and workflow management. AI layers add predictive matching, pattern analysis, and decision support. That expansion raises familiar governance questions: data quality, bias, auditability, and employee trust.

There is also a workforce dimension. Automating parts of HR does not eliminate the need for human oversight; it changes where effort is concentrated. HR professionals may spend less time on routine coordination and more on policy interpretation, employee engagement, and exception handling. Enterprises adopting AI-driven systems will need clear escalation paths and review processes to avoid over-reliance on automated outputs.

What makes the current moment different is scale. Deployments that cover thousands of employees turn AI from an experiment into operational infrastructure. They force organisations to confront issues of reliability, training, and change management in real time. The systems must work consistently across jurisdictions, languages, and regulatory frameworks.

As enterprises look for low-risk entry points into AI, workforce operations are likely to remain high on the list. They combine structured data, repeatable workflows, and measurable outcomes — conditions that suit automation while still allowing room for human judgement. The experience of early adopters will shape how quickly other internal functions, from finance to procurement, follow a similar path.
  4. Alibaba enters physical AI race with open-source robot model RynnBrain

Alibaba has entered the race to build AI that powers robots, not just chatbots. The Chinese tech giant this week unveiled RynnBrain, an open-source model designed to help robots perceive their environment and execute physical tasks. The move signals China’s accelerating push into physical AI as ageing populations and labour shortages drive demand for machines that can work alongside—or replace—humans.

The model positions Alibaba alongside Nvidia, Google DeepMind, and Tesla in the race to build what Nvidia CEO Jensen Huang calls “a multitrillion-dollar growth opportunity.” Unlike its competitors, however, Alibaba is pursuing an open-source strategy—making RynnBrain freely available to developers to accelerate adoption, similar to its approach with the Qwen family of language models, which rank among China’s most advanced AI systems.

Video demonstrations released by Alibaba’s DAMO Academy show RynnBrain-powered robots identifying fruit and placing it in baskets—tasks that seem simple but require complex AI governing object recognition and precise movement. The technology falls under the category of vision-language-action (VLA) models, which integrate computer vision, natural language processing, and motor control to enable robots to interpret their surroundings and execute appropriate actions.

Unlike traditional robots that follow preprogrammed instructions, physical AI systems like RynnBrain enable machines to learn from experience and adapt behaviour in real time. This represents a fundamental shift from automation to autonomous decision-making in physical environments—a shift with implications extending far beyond factory floors.

From prototype to production

The timing signals a broader inflexion point. According to Deloitte’s 2026 Tech Trends report, physical AI has begun “shifting from a research timeline to an industrial one,” with simulation platforms and synthetic data generation compressing iteration cycles before real-world deployment.

The transition is being driven less by technological breakthroughs than by economic necessity. Advanced economies face a stark reality: demand for production, logistics, and maintenance continues rising while labour supply increasingly fails to keep pace. The OECD projects that working-age populations across developed nations will stagnate or decline over the coming decades as ageing accelerates.

Parts of East Asia are encountering this reality earlier than other regions. Demographic ageing, declining fertility, and tightening labour markets are already influencing automation choices in logistics, manufacturing, and infrastructure—particularly in China, Japan, and South Korea. These environments aren’t exceptional; they’re simply ahead of a trajectory other advanced economies are likely to follow.

When it comes to humanoid robots specifically—machines designed to walk and function like humans—China is “forging ahead of the U.S.,” with companies planning to ramp up production this year, according to Deloitte. UBS estimates there will be two million humanoids in the workplace by 2035, climbing to 300 million by 2050, representing a total addressable market between $1.4 trillion and $1.7 trillion by mid-century.
The governance gap

Yet as physical AI capabilities accelerate, a critical constraint is emerging—one that has nothing to do with model performance. “In physical environments, failures cannot simply be patched after the fact,” according to a World Economic Forum analysis published this week. “Once AI begins to move goods, coordinate labour or operate equipment, the binding constraint shifts from what systems can do to how responsibility, authority and intervention are governed.”

Physical industries are governed by consequences, not computation. A flawed recommendation in a chatbot can be corrected in software. A robot that drops a part during handover or loses balance on a factory floor designed for humans causes operations to pause, creating cascading effects on production schedules, safety protocols, and liability chains.

The WEF framework identifies three governance layers required for safe deployment: executive governance setting risk appetite and non-negotiables; system governance embedding those constraints into engineered reality through stop rules and change controls; and frontline governance giving workers clear authority to override AI decisions.

“As physical AI accelerates, technical capabilities will increasingly converge, but governance will not,” the analysis warns. “Those that treat governance as an afterthought may see early gains, but will discover that scale amplifies fragility.”

This creates an asymmetry in the US-China competition. China’s faster deployment cycles and willingness to pilot systems in controlled industrial environments could accelerate learning curves. However, governance frameworks that work in structured factory settings may not translate to public spaces where autonomous systems must navigate unpredictable human behaviour.

Early deployment signals

Current deployments remain concentrated in warehousing and logistics, where labour market pressures are most acute. Amazon recently deployed its millionth robot, part of a diverse fleet working alongside humans. Its DeepFleet AI model coordinates this massive robot fleet across the entire fulfilment network, which Amazon reports will improve travel efficiency by 10%.

BMW is testing humanoid robots at its South Carolina factory for tasks requiring dexterity that traditional industrial robots lack: precision manipulation, complex gripping, and two-handed coordination. The automaker is also using autonomous vehicle technology to enable newly built cars to drive themselves from the assembly line through testing to the finishing area, all without human assistance.

But applications are expanding beyond traditional industrial settings. In healthcare, companies are developing AI-driven robotic surgery systems and intelligent assistants for patient care. Cities like Cincinnati are deploying AI-powered drones to autonomously inspect bridge structures and road surfaces. Detroit has launched a free autonomous shuttle service for seniors and people with disabilities.

The regional competitive dynamic intensified this week when South Korea announced a $692 million national initiative to produce AI semiconductors, underscoring how physical AI deployment requires not just software capabilities but domestic chip manufacturing capacity. NVIDIA has released multiple models under its “Cosmos” brand for training and running AI in robotics. Google DeepMind offers Gemini Robotics-ER 1.5. Tesla is developing its own AI to power the Optimus humanoid robot.
Each company is betting that the convergence of AI capabilities with physical manipulation will unlock new categories of automation. As simulation environments improve and ecosystem-based learning shortens deployment cycles, the strategic question is shifting from “Can we adopt physical AI?” to “Can we govern it at scale?”

For China, the answer may determine whether its early-mover advantage in robotics deployment translates into sustained industrial leadership—or becomes a cautionary tale about scaling systems faster than the governance infrastructure required to sustain them.
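To make the vision-language-action idea concrete, here is a minimal, hypothetical perceive-plan-act loop. RynnBrain's actual interfaces are not described in this article, so the function names, action schema, and coordinates below are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A single low-level command a robot controller could execute."""
    kind: str                            # e.g. "move_arm", "grip", "release"
    target: tuple[float, float, float]   # x, y, z in the robot's frame

def perceive(camera_frame: bytes) -> list[str]:
    # Stand-in for the vision component of a VLA model: a real system
    # would run object detection/segmentation on the camera frame here.
    return ["apple", "basket"]

def plan(instruction: str, objects: list[str]) -> list[Action]:
    # Stand-in for the language and action components: map a natural-language
    # instruction plus detected objects to a sequence of motor commands.
    if "fruit" in instruction and "apple" in objects and "basket" in objects:
        return [Action("move_arm", (0.4, 0.1, 0.2)),
                Action("grip", (0.4, 0.1, 0.2)),
                Action("move_arm", (0.1, 0.5, 0.3)),
                Action("release", (0.1, 0.5, 0.3))]
    return []

def act(actions: list[Action]) -> None:
    for a in actions:
        print(f"executing {a.kind} at {a.target}")  # stand-in for a motor driver

# One perceive-plan-act cycle: camera in, natural language in, motion out.
act(plan("put the fruit in the basket", perceive(b"<frame>")))
```

In a real VLA model the three stages are learned jointly by a single network rather than hand-coded, but the loop structure — perception grounding language, language conditioning action — is the part the sketch is meant to show.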
  5. AI deployment in financial services hits an inflexion point as Singapore leads the shift to production

AI deployment in financial services has crossed a critical threshold, with only 2% of institutions globally reporting no AI use whatsoever—a dramatic indicator that the technology has moved decisively from boardroom discussion to operational reality. New research from Finastra surveying 1,509 senior leaders across 11 markets reveals that Singapore financial institutions are leading this transition, with nearly two-thirds already deploying AI in production environments rather than confining it to experimental pilots.

The Financial Services State of the Nation 2026 report shows 73% of Singapore institutions have deployed or improved AI use cases in their payments technology over the past 12 months—nearly double the 38% global average. “Singapore institutions are showing what AI execution at scale really looks like. This is not about isolated pilots. It is about embedding AI into core operations, supported by modern infrastructure, strong data foundations, and disciplined governance,” said Chris Walters, CEO of Finastra.

From experimentation to enterprise AI deployment

Globally, 31% of institutions report scaled deployment across multiple functions, while 30% have achieved limited production deployment. A further 27% are piloting or testing in limited functions, with only 8% still in the exploration phase. This represents a fundamental shift in how AI deployment is approached within financial services: the technology is no longer confined to innovation labs or proof-of-concept projects but has become integral to core banking operations. In Singapore specifically, an additional 35% are piloting or researching AI applications beyond their current production deployments, indicating a robust innovation pipeline that positions the city-state as a regional AI leader.

The primary objectives driving this deployment vary by market. In Singapore and the US, 43% of institutions are using AI to improve compliance and regulatory processes—reflecting the technology’s ability to navigate increasingly complex oversight requirements while maintaining operational resilience. Globally, the top AI implementation objectives are improving accuracy and reducing errors (40%), increasing employee productivity (37%), and enhancing risk management capabilities (34%). Vietnam prioritises speed, with 49% using AI to accelerate processing in payments and lending services, while Mexico emphasises customer experience and personalisation at 43%.

Cloud infrastructure enables AI at scale

Singapore’s AI deployment success is underpinned by advanced cloud adoption. The research shows 55% of Singapore institutions host all or most infrastructure in the cloud, with a further 30% operating hybrid environments—an 85% total that significantly exceeds many global peers. This cloud-first approach provides the scalable, resilient infrastructure required for enterprise AI deployment. Without modern data architectures and elastic compute capabilities, AI remains confined to small-scale experiments that cannot deliver enterprise-wide value.

The link between modernisation and AI deployment is clear in the data. Nearly nine in ten institutions (87%) globally plan to increase modernisation investment over the next 12 months, with Singapore leading in planned spending increases above 50%. Institutions also report strong confidence in their technology foundations, with 71% of Singapore respondents rating their core infrastructure, security and reliability ahead of peers—the highest of any market surveyed.
Security spending surges as AI creates new threat vectors

As AI deployment accelerates, so do AI-enabled security threats. The research projects a 40% average increase in security spending globally in 2026, with institutions responding to what 43% describe as constantly evolving risks.

Singapore leads in deploying advanced fraud detection and transaction monitoring, with 62% having implemented or upgraded these systems in the past year, against a 48% global average—underscoring the city-state’s recognition that AI-powered fraud requires AI-powered defences. Similarly, 60% of Singapore institutions have modernised their Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) capabilities—again the highest globally—enabling real-time threat monitoring and automated response at scale. Multi-factor authentication and biometrics deployment reached 54% in Singapore, as institutions strengthen identity verification against increasingly sophisticated attack vectors that leverage generative AI and deepfake technologies.

Looking ahead, API security and gateway hardening emerge as a key priority, cited by 34% globally as a focus area for the next 12 months. This reflects growing recognition that as ecosystems expand and AI systems interact across organisational boundaries, securing access points becomes paramount.

Talent shortages emerge as the primary barrier

Despite strong progress, barriers to AI deployment persist. Talent shortages top the list globally at 43%, but in Singapore this figure reaches 54%—the highest of any market surveyed, matched only by the UAE. This intense competition for specialised AI, cloud, and security expertise reflects the gap between institutional ambition and available human capital: demand for professionals who can architect AI systems, ensure model governance, and integrate AI into existing workflows far outpaces supply.

Budget constraints also weigh heavily, cited by 52% of Singapore institutions—again the highest globally. Even well-funded organisations face difficult prioritisation decisions as they balance AI deployment, security investments, modernisation, and customer experience initiatives. In response, 54% of institutions globally are partnering with fintech providers as their default approach to accessing AI capabilities without bearing the full burden of talent acquisition or system development. These partnerships allow organisations to accelerate AI deployment while maintaining control over critical data and compliance requirements.

The research reveals a sector that has decisively crossed the AI adoption threshold but now faces the more complex challenge of scaling responsibly. As Walters noted, success will be defined not by the breadth of AI experiments but by the ability to embed intelligence into operations while strengthening rather than compromising trust.

The study surveyed managers and executives from institutions across France, Germany, Hong Kong, Japan, Mexico, Saudi Arabia, Singapore, the UAE, the UK, the US and Vietnam, representing organisations that collectively manage over $100 trillion in assets.
  6. State-sponsored hackers exploit AI for advanced cyberattacks

State-sponsored hackers are exploiting AI to accelerate cyberattacks, with threat actors from Iran, North Korea, China, and Russia weaponising models like Google’s Gemini to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG).

The quarterly AI Threat Tracker report, released today, reveals how government-backed attackers have integrated artificial intelligence throughout the attack lifecycle, achieving productivity gains in reconnaissance, social engineering, and malware development during the final quarter of 2025. “For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in the report.

AI-powered reconnaissance by state-sponsored hackers targets the defence sector

Iranian threat actor APT42 used Gemini to augment reconnaissance and targeted social engineering operations. The group misused the AI model to enumerate official email addresses for specific entities and conduct research to establish credible pretexts for approaching targets. By feeding Gemini a target’s biography, APT42 crafted personas and scenarios designed to elicit engagement. The group also used the AI to translate between languages and better understand non-native phrases—abilities that help state-sponsored hackers bypass traditional phishing red flags like poor grammar or awkward syntax.

North Korean government-backed actor UNC2970, which focuses on defence targeting and impersonating corporate recruiters, used Gemini to synthesise open-source intelligence and profile high-value targets. The group’s reconnaissance included searching for information on major cybersecurity and defence companies, mapping specific technical job roles, and gathering salary information. “This activity blurs the distinction between routine professional research and malicious reconnaissance, as the actor gathers the necessary components to create tailored, high-fidelity phishing personas,” GTIG noted.

Model extraction attacks surge

Beyond operational misuse, Google DeepMind and GTIG identified an increase in model extraction attempts—also known as “distillation attacks”—aimed at stealing intellectual property from AI models. One campaign targeting Gemini’s reasoning abilities involved over 100,000 prompts designed to coerce the model into outputting its full reasoning process. The breadth of questions suggested an attempt to replicate Gemini’s reasoning ability in non-English target languages across various tasks.

[Image: How model extraction attacks work to steal AI intellectual property. Source: Google GTIG]

While GTIG observed no direct attacks on frontier models from advanced persistent threat actors, the team identified and disrupted frequent model extraction attacks from private sector entities globally and from researchers seeking to clone proprietary logic. Google’s systems recognised these attacks in real time and deployed defences to protect internal reasoning traces.

AI-integrated malware emerges

GTIG observed malware samples, tracked as HONESTCUE, that use Gemini’s API to outsource functionality generation. The malware is designed to undermine traditional network-based detection and static analysis through a multi-layered obfuscation approach. HONESTCUE functions as a downloader and launcher framework that sends prompts via Gemini’s API and receives C# source code as responses.
The fileless secondary stage compiles and executes payloads directly in memory, leaving no artefacts on disk.

[Image: HONESTCUE malware’s two-stage attack process using Gemini’s API. Source: Google GTIG]

Separately, GTIG identified COINBAIT, a phishing kit whose construction was likely accelerated by AI code generation tools. The kit, which masquerades as a major cryptocurrency exchange for credential harvesting, was built using the AI-powered platform Lovable AI.

ClickFix campaigns abuse AI chat platforms

In a novel social engineering campaign first observed in December 2025, Google saw threat actors abuse the public sharing features of generative AI services—including Gemini, ChatGPT, Copilot, DeepSeek, and Grok—to host deceptive content distributing ATOMIC malware targeting macOS systems. Attackers manipulated AI models to create realistic-looking instructions for common computer tasks, embedding malicious command-line scripts as the “solution.” By creating shareable links to these AI chat transcripts, threat actors used trusted domains to host their initial attack stage.

[Image: The three-stage ClickFix attack chain exploiting AI chat platforms. Source: Google GTIG]

Underground marketplace thrives on stolen API keys

GTIG’s observations of English- and Russian-language underground forums indicate a persistent demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals struggle to develop custom AI models, instead relying on mature commercial products accessed through stolen credentials. One toolkit, “Xanthorox,” advertised itself as a custom AI for autonomous malware generation and phishing campaign development. GTIG’s investigation revealed Xanthorox was not a bespoke model but was actually powered by several commercial AI products, including Gemini, accessed through stolen API keys.

Google’s response and mitigations

Google has taken action against identified threat actors by disabling accounts and assets associated with malicious activity. The company has also applied this intelligence to strengthen both classifiers and models, letting them refuse assistance with similar attacks going forward. “We are committed to developing AI boldly and responsibly, which means taking proactive steps to disrupt malicious activity by disabling the projects and accounts associated with bad actors, while continuously improving our models to make them less susceptible to misuse,” the report stated.

GTIG emphasised that despite these developments, no APT or information operations actors have achieved breakthrough abilities that fundamentally alter the threat landscape. The findings underscore the evolving role of AI in cybersecurity, as both defenders and attackers race to use the technology’s abilities. For enterprise security teams, particularly in the Asia-Pacific region where Chinese and North Korean state-sponsored hackers remain active, the report serves as an important reminder to enhance defences against AI-augmented social engineering and reconnaissance operations.
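From the defender's side, the extraction pattern GTIG describes — very high prompt volume aimed at harvesting reasoning traces — tends to look statistically different from normal use. The thresholds, marker phrases, and feature choices in this sketch are hypothetical, not Google's actual detection logic:

```python
from collections import Counter

# Phrases that commonly try to elicit a model's full reasoning trace
# (illustrative list, not GTIG's actual indicators).
TRACE_ELICITATION_MARKERS = [
    "show your full reasoning",
    "step by step, include all intermediate thoughts",
    "output your chain of thought verbatim",
]

def extraction_risk(prompts_by_key: dict[str, list[str]],
                    volume_threshold: int = 10_000,
                    marker_ratio: float = 0.5) -> dict[str, bool]:
    """Flag API keys whose traffic resembles a distillation campaign:
    very high prompt volume combined with a high share of
    reasoning-trace elicitation phrasing."""
    flagged = {}
    for key, prompts in prompts_by_key.items():
        hits = Counter(
            any(m in p.lower() for m in TRACE_ELICITATION_MARKERS)
            for p in prompts
        )
        ratio = hits[True] / max(len(prompts), 1)
        flagged[key] = len(prompts) >= volume_threshold and ratio >= marker_ratio
    return flagged

# Tiny demo with a lowered volume threshold so the effect is visible.
traffic = {"key-A": ["Show your full reasoning about topic X"] * 12,
           "key-B": ["What's the weather in Singapore?"] * 12}
print(extraction_risk(traffic, volume_threshold=10))
# {'key-A': True, 'key-B': False}
```

Production defences would combine many more signals (timing, topic breadth, account history), but the two-factor shape — volume plus intent markers — mirrors the 100,000-prompt campaign described above.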
  7. Barclays bets on AI to cut costs and boost returns

Barclays recorded a 12% jump in annual profit for 2025, reporting £9.1 billion in earnings before tax, up from £8.1 billion a year earlier. The bank also raised its performance targets out to 2028, aiming for a return on tangible equity (RoTE) of more than 14%, up from a previous goal of above 12% by 2026. A growing US business and cost reductions underpinned this outcome, with Barclays citing AI as a key driver of those efficiency gains.

At a time when many large companies are still experimenting with AI pilots, Barclays is tying the technology directly to its cost structure and profit outlook. In public statements and investor filings, leadership positions AI as one of the levers that can help the bank sustain lower costs and improved returns, especially as macroeconomic conditions shift.

Barclays’ 12% profit rise this week matters not just for its shareholders: it reflects a trend in which traditional, highly regulated firms are positioning AI as a core part of running the business, not something kept in separate innovation labs. For companies outside tech, linking AI to measurable results such as profit and efficiency marks a shift toward operational use over hype.

Why AI matters for cost discipline

Barclays has said that technology such as AI is part of its plan to cut costs and make its operations more efficient. That includes trimming parts of the legacy technology stack and rethinking where and how work happens. Investment in AI tools complements broader cost-savings goals that stretch back multiple years.

For many large companies, labour and legacy systems still make up a large chunk of operating expenses. Using AI to automate repetitive tasks or streamline data processing can reduce that burden. In Barclays’ case, these efficiencies are part of the bank’s rationale for setting higher performance targets, even though margins remain under pressure in parts of its business.

It’s important to be specific about what these efficiencies mean in practice. AI technologies — for example, models that assist with risk analysis, customer service workflows, and internal reporting — can reduce the hours staff spend on manual work. That doesn’t always mean cutting jobs outright, but it can lower the overall cost base, especially in functions that are routine or transaction-driven.

From investment to impact

Investments in AI don’t translate to results overnight. Barclays’ approach combines these tools with structural cost-reduction programmes, helping the bank manage expenses at a time when revenue growth alone isn’t enough to lift returns to desired levels. Barclays’ performance targets for 2028 reflect this dual focus: the bank’s leadership has said that its plans include returning more than £15 billion to shareholders between 2026 and 2028, supported by improved efficiency and profit strength.

Companies often talk about technology investment in vague terms. Barclays’ latest figures make the link between tech and profit more concrete: the 12% profit rise was reported in the same breath as the role of technology in trimming costs. It’s not the only factor — improved market conditions and growth in the US also helped — but it’s clearly part of the narrative that management is presenting to investors.

This emphasis on cost discipline and profit impact sets Barclays apart from firms that treat AI as a long-term bet or a future project.
Here, AI is integrated into ongoing cost management and financial planning, giving the bank a plausible pathway to stronger returns in the years ahead.

What this means for legacy firms

Barclays is far from unique in exploring AI for cost savings and efficiency. Other banks have also flagged technology investments as part of broader restructuring efforts. But what makes Barclays’ case noteworthy is the scale of the strategy and the way it is tied to measured performance targets, not just experimentation or small-scale pilots.

In traditional industries, especially ones as regulated as banking, adopting AI is harder than in tech startups. Firms must navigate compliance, risk, customer privacy, and legacy systems that weren’t designed for automation. Yet Barclays’ public comments suggest that the bank is now comfortable enough with these tools to anchor part of its financial forecast on them. That signals a degree of maturity in how the institution operationalises AI.

Barclays isn’t simply building isolated AI projects; leadership is weaving technology into cost discipline, modernisation of systems, and long-term planning. That shift matters because it shows how legacy firms, even those with large, complex operations, can move beyond pilots and into business-wide use cases that affect the bottom line. For other end-user companies evaluating AI investments, Barclays offers a working example: a large, regulated company can use technology to help hit cost and profitability targets, not just to explore new capabilities.
  8. How insurance leaders use agentic AI to cut operational costs

Agentic AI offers insurance leaders a path to scalable efficiency as the sector confronts a tough digital transformation. Insurers hold deep data reserves and employ a workforce skilled in analytic decision-making. Despite these advantages, the industry has largely failed to advance beyond pilot programmes: research suggests only seven percent of insurers have scaled these initiatives effectively across their organisations.

The barrier is rarely a lack of interest. Instead, legacy infrastructure and fragmented data architectures often stop integration before it starts. Financial pressure compounds the technical debt. The sector has absorbed losses exceeding $100 billion annually for six consecutive years, and high-frequency property losses are now a structural issue that standard operational tweaks cannot fix.

Automating complex insurance workflows with agentic AI

Intelligent agents provide a way to bypass these bottlenecks. Unlike passive analytical tools, these systems perform tasks autonomously and help make decisions under human supervision. Embedding these agents into workflows allows companies to navigate legacy constraints and talent shortages.

Workforce augmentation is a primary application. Sedgwick, in collaboration with Microsoft, deployed the Sidekick Agent to assist claims professionals. The system improved claims processing efficiency by more than 30 percent through real-time guidance.

Operational gains extend to customer support. Standard chatbots usually answer a query or transfer the user to a human. An agentic solution manages the process end-to-end: capturing the first notice of loss, requesting missing documentation, updating policy and billing systems, and proactively notifying customers of next steps. This “resolve, not route” approach has produced results in live environments. One major insurer implemented over 80 models in its claims domain. The rollout cut complex-case liability assessment time by 23 days and improved routing accuracy by 30 percent, while customer complaints fell by 65 percent over the same period. Such metrics indicate that agentic AI can compress cycle times and control loss-adjustment expenses for the insurance industry, all while maintaining necessary oversight.

Navigating internal friction

Adoption requires navigating internal resistance. Siloed teams and unclear priorities often slow deployment. A shortage of talent in specialised roles, such as actuarial analysis and underwriting, also limits how effectively companies use their data. Agentic AI can target these areas to augment roles that are hard to fill.

Success relies on aligning technology with specific business goals. Establishing an ‘AI Center of Excellence’ provides the governance and technical expertise needed to prevent fragmented adoption. Teams should start with high-volume, repeatable tasks and refine models through feedback loops. Industry accelerators can also speed up the process: many platforms now ship with prebuilt frameworks that support the full lifecycle of agent deployment, reducing implementation time and aiding compliance efforts.

Of course, technology matters less than organisational readiness. About 70 percent of scaling challenges are organisational rather than technical. Insurers must build a culture of accountability to see returns on these tools. Agentic AI is becoming a necessity for insurance leaders trying to survive in a market defined by financial pressure and legacy complexity.
Addressing these structural challenges improves both efficiency and resilience. Executives who invest in scalable frameworks will position themselves to lead the next era of innovation.
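A minimal sketch of the “resolve, not route” pattern described above, with hypothetical step names and a stubbed policy system standing in for real integrations (this is not Sedgwick's or any vendor's actual implementation):

```python
def capture_fnol(report: dict) -> dict:
    """Record a first notice of loss and note which documents are missing."""
    required = {"policy_id", "date_of_loss", "description", "photos"}
    claim = dict(report)
    claim["missing_docs"] = sorted(required - report.keys())
    return claim

def request_documents(claim: dict) -> None:
    for doc in claim["missing_docs"]:
        print(f"notify customer: please upload '{doc}'")  # stand-in for email/SMS

def update_policy_system(claim: dict) -> str:
    # Stand-in for a write to the policy/billing system of record.
    return f"claim opened on policy {claim['policy_id']}"

def handle_claim(report: dict) -> None:
    """End-to-end resolution: capture, chase documents, update systems, notify."""
    claim = capture_fnol(report)
    if claim["missing_docs"]:
        request_documents(claim)  # resolve by chasing documents, not by routing
    print(update_policy_system(claim))
    print("notify customer: next step is adjuster review")

handle_claim({"policy_id": "P-1001",
              "date_of_loss": "2026-02-01",
              "description": "kitchen water damage"})
```

The contrast with a routing chatbot is the middle of `handle_claim`: instead of handing an incomplete report to a human queue, the agent owns the gap (missing photos, unbooked claim, silent customer) until the case is actually moved forward.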
  9. Red Hat unifies AI and tactical edge deployment for UK MOD

The UK Ministry of Defence (MOD) has selected Red Hat to architect a unified AI and hybrid cloud backbone across its entire estate. Announced today, the agreement is designed to break down data silos and accelerate the deployment of AI models from the data centre to the tactical edge.

For CIOs, it is part of a broader move away from fragmented, project-specific AI pilots toward a platform engineering approach. By standardising on Red Hat’s infrastructure, the MOD aims to decouple its AI capabilities from the underlying hardware, allowing algorithms to be developed once and deployed anywhere—whether on-premise, in the cloud, or on disconnected field devices.

Red Hat industrialises the AI lifecycle for the MOD

The agreement focuses on the Defence Digital Foundry, the MOD’s central software delivery hub. The Foundry will now provide a consistent MLOps environment to all service branches, including the Royal Navy, British Army, and Royal Air Force.

At the core of this initiative is Red Hat AI, a suite that includes Red Hat OpenShift AI. This platform addresses a familiar bottleneck in enterprise AI: the “inference gap” between data science teams and operational infrastructure. The new agreement will allow MOD developers to collaborate on a single platform, choosing the most appropriate AI models and hardware accelerators for their specific mission requirements without being locked into a single vendor’s ecosystem.

This standardisation is vital for “enabling AI at scale,” according to Red Hat. By unifying disparate efforts, the MOD intends to reduce the duplication that often plagues large government IT programmes. The platform supports optimised inference, ensuring that AI models can run efficiently on the restricted hardware footprints often found in military environments.

Mivy James, CTO at the UK MOD, said: “Easing access to Red Hat platforms becomes all the more important for the UK Ministry of Defence in the era of AI, where rapid adoption, replicating good practice, and the ability to scale are critical to strategic advantage.”

Bridging legacy and autonomous systems

A major hurdle for defence modernisation is the coexistence of legacy virtualised workloads with modern, containerised AI applications. The agreement includes Red Hat OpenShift Virtualization, which provides a “well-lit migration path” for existing systems. This allows the MOD to manage traditional virtual machines alongside new containerised AI workloads on the same control plane, reducing operational complexity and cost.

The deal also incorporates Red Hat Ansible Automation Platform to drive enterprise-wide AI automation. In an AI context, automation is the enforcement mechanism for governance: it ensures that as models are retrained and redeployed, the underlying configuration management, security orchestration, and service provisioning remain compliant with rigorous defence standards.

Security and ecosystem alignment

Deploying AI in defence naturally requires a “consistent security footprint” that can withstand sophisticated cyber threats. The Red Hat platform enables DevSecOps practices, integrating security gates directly into the software supply chain. This is particularly relevant for maintaining a trusted software pedigree when integrating code from approved third-party providers, who can now align their deliverables with the MOD’s standardised Red Hat environment.
Joanna Hodgson, Regional Manager for the UK and Ireland at Red Hat, commented: “Red Hat offers flexibility and scalability to deploy any application or any AI model on their choice of hardware – whether on premise, in any cloud, or at the edge – helping the UK Ministry of Defence to harness the latest technologies, including AI.”

The deployment shows that AI maturity is moving beyond the model itself to the infrastructure that supports it. Success in high-stakes environments like defence depends less on individual algorithm performance and more on the ability to reliably deliver, update, and govern those models at scale.
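To illustrate the “automation as the enforcement mechanism for governance” idea, here is a minimal, hypothetical pre-deployment gate. The check names and policy fields are invented for illustration; they are not MOD policy or part of Ansible's actual API:

```python
from dataclasses import dataclass

@dataclass
class ModelRelease:
    name: str
    signed: bool         # artefact carries a trusted signature
    scan_passed: bool    # supply-chain/security scan result
    config_drift: bool   # target environment differs from approved baseline

def deployment_gates(release: ModelRelease) -> list[str]:
    """Return the governance gates a retrained model fails.
    An empty list means the release may be promoted automatically."""
    failures = []
    if not release.signed:
        failures.append("unsigned artefact")
    if not release.scan_passed:
        failures.append("security scan failed")
    if release.config_drift:
        failures.append("configuration drift from approved baseline")
    return failures

release = ModelRelease("threat-classifier-v7", signed=True,
                       scan_passed=True, config_drift=False)
blocked = deployment_gates(release)
print("promote" if not blocked else f"block: {', '.join(blocked)}")
```

The design point is that the gates run on every redeployment, not just the first: a model retrained weekly gets the same signature, scan, and drift checks each time, which is what keeps compliance from eroding as the model evolves.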
  10. Chinese hyperscalers and industry-specific agentic AI

Major Chinese technology companies Alibaba, Tencent, and Huawei are pursuing agentic AI (systems that can execute multi-step tasks autonomously and interact with software, data, and services without human instruction), and orienting the technology toward discrete industries and workflows.

Alibaba’s open-source strategy for agentic AI

Alibaba’s strategy centres on its Qwen AI model family, a set of large language models with multilingual ability and open-source licences. Its own models are the basis for the AI services and agent platforms offered on Alibaba Cloud. Alibaba Cloud has documented its agent development tooling and vector database services in the open, meaning the tools used to build autonomous agents can be adapted by any user. It positions the Qwen family as a platform for industry-specific solutions covering finance, logistics, and customer support. The Qwen App, an application built on these models, has reportedly reached a large user base since its public beta, creating links between autonomous tasks and Alibaba’s commerce and payments ecosystem.

Alibaba’s open-source portfolio includes an agent framework, Qwen-Agent, to encourage third-party development of autonomous systems. This mirrors a pattern in China’s AI sector where hyperscalers publish frameworks and tools for building and managing AI agents, in competition with Western projects like Microsoft’s AutoGen and OpenAI’s Swarm. Tencent has also released an open-source agent framework, Youtu-Agent.

Huawei’s Pangu and Tencent’s scenario-based AI: industry-specific approaches

Huawei uses a combination of model development, infrastructure, and industry-specific agent frameworks to attract users to its worldwide market. Its Huawei Cloud division has developed a ‘supernode’ architecture for enterprise agentic AI workloads that supports large cognitive models and the workflow orchestration agentic AI requires. AI agents are embedded in the foundation models of the Pangu family, which sit on hardware stacks tuned for telecommunications, utilities, creative, and industrial applications, among other verticals. Early deployments are reported in sectors such as network optimisation, manufacturing, and energy, where agents can plan tasks like predictive maintenance and resource allocation with minimal human oversight.

Tencent Cloud’s “scenario-based AI” suite is a set of tools and SaaS-style applications that enterprises outside China can access, although the company’s cloud footprint remains smaller than Western hyperscalers’ in many regions.

Despite these investments, real-world Chinese agentic AI platforms have been most visible inside China. Projects such as OpenClaw, originally created outside the ecosystem, have been integrated into workplace environments like Alibaba’s DingTalk and Tencent’s WeCom and used to automate scheduling, create code, and manage developer workflows. These integrations are widely discussed in Chinese developer communities but are not yet established in the enterprise environments of the major Western economies.

Availability in Western markets

Alibaba Cloud operates international data centres and markets AI services to European and Asian customers, positioning itself as a competitor to AWS and Azure for AI workloads. Huawei also markets cloud and AI infrastructure internationally, with a focus on telecommunications and regulated industries. In practice, however, uptake in Western enterprises remains limited compared with adoption of Western-origin AI platforms.
Conclusion
Chinese hyperscalers have defined a distinct trajectory for agentic AI, combining language models with frameworks and infrastructure tailored for autonomous operation in commercial contexts. Alibaba, Tencent, and Huawei aim to embed these systems into enterprise pipelines and consumer ecosystems, offering tools that can operate with a degree of autonomy. These offerings are accessible in Western markets but have not yet achieved the same level of enterprise penetration in mainland Europe or the US. To find more common uses of Chinese-built agentic AI, we need to look to the Middle and Far East, South America, and Africa, where Chinese influence is stronger.
(Image source: “China Science & Technology Museum, Beijing, April-2011” by maltman23 is licensed under CC BY-SA 2.0.)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post Chinese hyperscalers and industry-specific agentic AI appeared first on AI News.
View the full article
  11. Agentic AI in healthcare is graduating from answering prompts to autonomously executing complex marketing tasks—and life sciences companies are betting their commercial strategies on it. According to a recent report cited by Capgemini Invent, AI agents could generate up to US$450 billion in economic value through revenue uplift and cost savings globally by 2028, with 69% of executives planning to deploy agents in marketing processes by year’s end.
The stakes are particularly high in pharmaceutical marketing, where sales representatives have increasingly limited face time with healthcare professionals (HCPs)—a trend accelerated by Covid-19. The challenge isn’t just access; it’s making those rare interactions count with intelligence that’s currently trapped in data silos.
The fragmented intelligence problem
Briggs Davidson, Senior Director of Digital, Data & Marketing Strategy for Life Sciences at Capgemini Invent, outlines a scenario that will sound familiar to anyone in pharma marketing: an HCP attends a conference where a competitor showcases promising drug results and published research, then shifts their prescriptions to the rival product—all within a single quarter.
“In most companies, legacy IT infrastructure and data silos keep this information in disparate systems across CRM, events databases and claims data,” Davidson writes. “Chances are, none of that information was accessible to sales reps before they met with the HCP.”
The solution, according to Davidson, isn’t just connecting these systems—it’s deploying agentic AI in healthcare marketing to autonomously query, synthesise, and act on that unified data. Unlike conversational AI that responds to queries, agentic systems can independently execute multi-step tasks. Instead of a data engineer building a new pipeline, an AI agent could autonomously query the CRM and claims database to answer business questions like: “Identify oncologists in the Northwest who have a 20% lower prescription volume but attended our last medical congress.”
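To make that concrete, here is a hypothetical sketch of the kind of cross-silo filter such an agent would compose. Every data source, field name, and threshold below is an illustrative assumption, not a real CRM or claims schema:
```python
# Hypothetical sketch of an agent's cross-silo query. fetch_unified_profiles()
# stands in for the agent pulling from CRM, events, and claims systems.
from dataclasses import dataclass

@dataclass
class HCP:
    name: str
    specialty: str
    region: str
    rx_volume_change: float   # e.g. -0.22 == 22% decline quarter-on-quarter
    attended_congress: bool

def fetch_unified_profiles() -> list[HCP]:
    """Stand-in for querying the unified HCP profile store."""
    return [
        HCP("Dr. Chen", "oncology", "Northwest", -0.22, True),
        HCP("Dr. Okafor", "oncology", "Northwest", 0.05, True),
        HCP("Dr. Silva", "cardiology", "Northwest", -0.30, False),
    ]

def find_at_risk_oncologists(profiles: list[HCP]) -> list[HCP]:
    """The business question above, expressed as a filter the agent composes."""
    return [
        p for p in profiles
        if p.specialty == "oncology"
        and p.region == "Northwest"
        and p.rx_volume_change <= -0.20   # at least 20% lower prescription volume
        and p.attended_congress
    ]

for hcp in find_at_risk_oncologists(fetch_unified_profiles()):
    print("Flag for follow-up:", hcp.name)
```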
From orchestration to autonomous execution
Davidson frames the shift as moving from an “omnichannel view”—coordinating experiences across channels—to true orchestration powered by agentic AI. In practice, this means a sales representative could have an agent assist with call and visit planning by asking: “What messages has my HCP responded to most recently?” or “Can you create a detailed intelligence brief on my HCP?”
The agentic system would compile:
- Their most recent conversation with the HCP
- The HCP’s prescribing behaviour
- Thought leaders the HCP follows
- Relevant content to share
- The HCP’s preferred outreach channels (in-person visits, emails, webinars)
More significantly, the AI agent would then create a custom call plan for each HCP based on their unified profile and recommend follow-up steps based on engagement outcomes.
“Agentic AI systems are about driving action, graduating from ‘answer my prompt’ to ‘autonomously execute my task,’” Davidson explains. “That means evolving the sales representative mindset from asking questions to coordinating small teams of specialised agents that work together: one plans, another retrieves and checks content, a third schedules and measures, and a fourth enforces compliance guardrails—all under human oversight.”
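The division of labour in that quote can be sketched in a few lines. Everything here is hypothetical scaffolding: each role is reduced to a plain function so the hand-offs are visible, whereas a real deployment would wrap LLM calls and end with human review:
```python
# Hypothetical sketch of the four-role agent team described above.
def planner(hcp: str) -> list[str]:
    # Breaks the engagement goal into tasks for the other agents.
    return [f"brief:{hcp}", f"content:{hcp}", f"schedule:{hcp}"]

def retriever(task: str) -> str:
    # Pulls material from an approved content library only.
    return f"approved oncology deck for {task.split(':')[1]}"

def scheduler(task: str) -> str:
    return f"visit booked for {task.split(':')[1]}, follow-up email queued"

def compliance(artefact: str) -> str:
    # Guardrail: block anything not drawn from approved content or bookings.
    assert "approved" in artefact or "booked" in artefact, "escalate to human"
    return artefact

plan = planner("Dr. Chen")
outputs = [compliance(retriever(plan[1])), compliance(scheduler(plan[2]))]
for item in outputs:
    print("Ready for human sign-off:", item)
```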
The AI-ready data prerequisite
The operational promise hinges on what Davidson calls “AI-ready data”—standardised, accessible, complete, and trustworthy information that enables three capabilities:
- Faster decision making: predictive analytics that provide near real-time alerts on what’s about to happen, enabling sales representatives to act proactively.
- Personalisation at scale: delivering customised experiences to thousands of HCPs simultaneously, with small human teams enabled by specialised agent networks.
- True marketing ROI: moving beyond monthly historical reports to understanding which marketing activities are actively driving prescriptions.
Davidson emphasises that successful deployment starts with marketing and IT alignment on initial use cases, with stakeholders identifying KPIs that demonstrate tangible outcomes—such as specific percentage increases in HCP engagement or sales representative productivity.
Critical implementation questions
The article notably frames agentic AI in healthcare as “not simply another technology-led capability; it’s a new operating layer for commercial teams.” But it acknowledges that “agentic AI’s full value only materialises with AI-ready data, trustworthy deployment and workflow redesign.”
What remains unaddressed: the regulatory and compliance complexity of autonomous systems querying claims databases containing prescriber behaviour, particularly under HIPAA’s minimum necessary standard. The piece also doesn’t detail actual client implementations or metrics beyond the aspirational US$450 billion economic value projection.
For global organisations, Davidson notes that use cases “can and should be tailored to fit each market’s maturity for maximum ROI,” suggesting that deployment will vary significantly across regulatory environments.
The fundamental value proposition, according to Davidson, centres on bidirectional benefit: “The HCP receives directly relevant content, and the marketing teams can drive increased HCP engagement and conversion.” Whether that vision of autonomous marketing agents coordinating across CRM, events, and claims systems becomes standard practice by 2028—or remains constrained by data governance realities—will likely determine if life sciences achieves anything close to that US$450 billion opportunity.
See also: China’s hyperscalers bet billions on agentic AI as commerce becomes the new battleground
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post Agentic AI in healthcare: How life sciences marketing could achieve US$450bn in value by 2028 appeared first on AI News.
View the full article
  12. For a long time, cryptocurrency prices moved quickly. A headline would hit, sentiment would spike, and charts would react almost immediately. That pattern no longer holds. Today’s market is slower, heavier than before, and shaped by forces that do not always announce themselves clearly. Capital allocation, ETF mechanics, and macro positioning now influence price behaviour in ways that are easy to overlook if you only watch short-term moves.
That change becomes obvious when you look at XRP. The XRP price today reflects decisions made by institutions, fund managers, and regulators as much as it reflects trading activity. AI tools are increasingly used to track such inputs, but they are often misunderstood. They do not predict outcomes; they organise complexity. Understanding that distinction changes how you read the market.
How AI reads an ETF-driven market
AI systems do not look for narratives but for relationships. In cryptocurrency markets, that means mapping ETF inflows and outflows against derivatives positioning, on-chain activity, and movements in traditional assets. What has changed recently is how much weight those signals now carry.
Binance Research has reported that altcoin ETFs have recorded more than US$2 billion in net inflows, with XRP and Solana leading that activity. Bitcoin and Ethereum spot ETFs have seen sustained outflows since October. This is not a classic risk-on environment. It is selective, cautious, and uneven.
AI models are good at identifying such behaviour, detecting rotation rather than momentum. They highlight where capital is reallocating even when prices remain range-bound. This is why markets can appear quiet while meaningful positioning takes place underneath. AI only shows the movement; it doesn’t explain the reasons behind it.
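As a toy illustration of rotation detection, with invented numbers and an arbitrary three-day window, a model can flag opposite-signed flow trends across products even while prices drift sideways:
```python
# Illustrative sketch of "rotation, not momentum": compare rolling net fund
# flows across two products. All values are invented for the example.
import pandas as pd

flows = pd.DataFrame({
    "BTC_ETF": [-120, -80, -150, -60, -90],   # daily net flows, US$m (invented)
    "XRP_ETF": [40, 65, 80, 95, 110],
})

rolling = flows.rolling(3).mean().iloc[-1]
print("3-day average net flow (US$m):")
print(rolling)

# Opposite signs across assets suggest reallocation even if prices barely move.
if rolling["BTC_ETF"] < 0 < rolling["XRP_ETF"]:
    print("Rotation signal: capital leaving BTC ETFs, entering XRP ETFs.")
```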
What AI can tell you about XRP
XRP does not always move in step with the rest of the market. When conditions change, its price often reacts to access, regulation, and liquidity before sentiment catches up. That pattern has shown up more than once, and it is one reason AI systems tend to weigh fund flows and market depth more heavily than short-term mood shifts when analysing XRP.
Binance Research has pointed to early 2026 as a period where liquidity is coming back without a clear return to risk-taking. Capital has rotated away from crowded trades, but it has not rushed to replace them. AI picks up on that imbalance quickly. It helps explain why XRP has seen ETF interest even while broader momentum in cryptocurrency has felt restrained.
That does not imply a forecast. It is closer to a snapshot of conditions. Market conversations may slow, headlines may thin out, and price can drift, yet positioning continues to evolve in the background. This is easy to miss if you focus only on visible activity.
AI is useful here because it stays indifferent to attention. Instead of responding to engagement spikes or sudden narrative shifts, it tracks what investors are actually doing. In markets where perception often moves ahead of reality, that distinction matters more than it first appears.
Where AI consistently falls short
For all its analytical power, AI has blind spots. Regulation is one of the most important. Models are trained on historical relationships, while regulatory decisions rarely follow historical patterns. Richard Teng, Co-CEO of Binance, addressed this challenge after the exchange secured its ADGM licence in January 2026.
“The ADGM license crowns years of work to meet some of the world’s most demanding regulatory standards, and arriving in days of the moment we crossed 300 million registered users shows that scale and trust need not be in tension.”
Developments like this can alter market confidence quickly, yet they are difficult to quantify before they happen. AI responds well once regulatory outcomes are known. It struggles beforehand. For XRP, where regulatory clarity has played a central role in past price behaviour, this limitation is significant.
Another weakness is intent. AI can measure flows, but it cannot explain why investors choose caution, delay, or restraint. Defensive positioning does not always look dramatic in data, but it can shape markets for long periods.
Why human judgement still shapes the outcome
AI does not replace interpretation; it supports it. Binance Research has described current conditions as a phase of liquidity preservation, with markets waiting for clearer catalysts like macro data releases and policy signals. AI can flag these moments of tension. It cannot tell you whether they will resolve into action or extend into stagnation.
Rachel Conlan, CMO of Binance, reflected on the broader maturity of the industry when discussing Binance Blockchain Week Dubai 2025. She described a market that is more focused on building than spectacle. That mindset applies equally to AI use. The goal is not prediction. It is informed judgement.
What this means when you look at price
When used properly, AI helps you see forces that are easy to miss, especially in ETF-driven conditions. It highlights where liquidity is moving, where narratives fail to align with behaviour, and where patience may be a rational choice. What it cannot do is remove uncertainty. In markets shaped by regulation, macro shifts, and institutional decision-making, judgement still matters. The clearest insight comes from combining machine analysis with human context.
Image source: Unsplash
The post What AI can (and can’t) tell us about XRP in ETF-driven markets appeared first on AI News.
View the full article
  13. Because Western AI labs won’t—or can’t—anymore. As OpenAI, Anthropic, and Google face mounting pressure to restrict their most powerful models, Chinese developers have filled the open-source void with AI explicitly built for what operators need: powerful models that run on commodity hardware.
A new security study reveals just how thoroughly Chinese AI has captured this space. Research published by SentinelOne and Censys, mapping 175,000 exposed AI hosts across 130 countries over 293 days, shows Alibaba’s Qwen2 consistently ranking second only to Meta’s Llama in global deployment. More tellingly, the Chinese model appears on 52% of systems running multiple AI models—suggesting it’s become the de facto alternative to Llama.
“Over the next 12–18 months, we expect Chinese-origin model families to play an increasingly central role in the open-source LLM ecosystem, particularly as Western frontier labs slow or constrain open-weight releases,” Gabriel Bernadett-Shapiro, distinguished AI research scientist at SentinelOne, told TechForge Media’s AI News.
The finding arrives as OpenAI, Anthropic, and Google face regulatory scrutiny, safety review overhead, and commercial incentives pushing them toward API-gated releases rather than publishing model weights freely. The contrast with Chinese developers couldn’t be sharper. Chinese labs have demonstrated what Bernadett-Shapiro calls “a willingness to publish large, high-quality weights that are explicitly optimised for local deployment, quantisation, and commodity hardware.”
“In practice, this makes them easier to adopt, easier to run, and easier to integrate into edge and residential environments,” he added.
Put simply: if you’re a researcher or developer wanting to run powerful AI on your own computer without a massive budget, Chinese models like Qwen2 are often your best—or only—option.
Pragmatics, not ideology
(Figure: Alibaba’s Qwen2 consistently ranks second only to Meta’s Llama across 175,000 exposed hosts globally. Source: SentinelOne/Censys)
The research shows this dominance isn’t accidental. Qwen2 maintains what Bernadett-Shapiro calls “zero rank volatility”—it holds the number two position across every measurement method the researchers examined: total observations, unique hosts, and host-days. There’s no fluctuation, no regional variation, just consistent global adoption.
The co-deployment pattern is equally revealing. When operators run multiple AI models on the same system—a common practice for comparison or workload segmentation—the pairing of Llama and Qwen2 appears on 40,694 hosts, representing 52% of all multi-family deployments.
Geographic concentration reinforces the picture. In China, Beijing alone accounts for 30% of exposed hosts, with Shanghai and Guangdong adding another 21% combined. In the United States, Virginia—reflecting AWS infrastructure density—represents 18% of hosts.
(Figure: China and the US dominate exposed Ollama host distribution, with Beijing accounting for 30% of Chinese deployments. Source: SentinelOne/Censys)
“If release velocity, openness, and hardware portability continue to diverge between regions, Chinese model lineages are likely to become the default for open deployments, not because of ideology, but because of availability and pragmatics,” Bernadett-Shapiro explained.
The governance problem
This shift creates what Bernadett-Shapiro characterises as a “governance inversion”—a fundamental reversal of how AI risk and accountability are distributed.
In platform-hosted services like ChatGPT, one company controls everything: it runs the infrastructure, monitors usage, implements safety controls, and can shut down abuse. With open-weight models, that control evaporates. Accountability diffuses across thousands of networks in 130 countries, while dependency concentrates upstream in a handful of model suppliers—increasingly Chinese ones.
The 175,000 exposed hosts operate entirely outside the control systems governing commercial AI platforms. There’s no centralised authentication, no rate limiting, no abuse detection, and critically, no kill switch if misuse is detected.
“Once an open-weight model is released, it is trivial to remove safety or security training,” Bernadett-Shapiro noted. “Frontier labs need to treat open-weight releases as long-lived infrastructure artefacts.”
A persistent backbone of 23,000 hosts showing 87% average uptime drives the majority of activity. These aren’t hobbyist experiments—they’re operational systems providing ongoing utility, often running multiple models simultaneously.
Perhaps most concerning: between 16% and 19% of the infrastructure couldn’t be attributed to any identifiable owner. “Even if we are able to prove that a model was leveraged in an attack, there are not well-established abuse reporting routes,” Bernadett-Shapiro said.
Security without guardrails
Nearly half (48%) of exposed hosts advertise tool-calling capabilities—meaning they’re not just generating text. They can execute code, access APIs, and interact with external systems autonomously.
“A text-only model can generate harmful content, but a tool-calling model can act,” Bernadett-Shapiro explained. “On an unauthenticated server, an attacker doesn’t need malware or credentials; they just need a prompt.”
(Figure: Nearly half of exposed Ollama hosts have tool-calling capabilities that can execute code and access external systems. Source: SentinelOne/Censys)
The highest-risk scenario involves what he calls “exposed, tool-enabled RAG or automation endpoints being driven remotely as an execution layer.” An attacker could simply ask the model to summarise internal documents, extract API keys from code repositories, or call downstream services the model is configured to access. When paired with “thinking” models optimised for multi-step reasoning—present on 26% of hosts—the system can plan complex operations autonomously.
The researchers identified at least 201 hosts running “uncensored” configurations that explicitly remove safety guardrails, though Bernadett-Shapiro notes this represents a lower bound.
In other words, these aren’t just chatbots—they’re AI systems that can take action, and half of them have no password protection.
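Operators can check their own footprint here. The sketch below assumes only Ollama’s standard model-listing route and its default port; it reports whether a host you run answers API requests without credentials. Run it only against infrastructure you own:
```python
# Defensive sketch: check whether an Ollama instance you operate is reachable
# without authentication. 11434 is Ollama's default port; the host address is
# a placeholder for your own server.
import json
import urllib.request

HOST = "127.0.0.1"   # assumption: replace with your own server's address
URL = f"http://{HOST}:11434/api/tags"

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        models = json.load(resp).get("models", [])
    print(f"Endpoint answered without credentials; {len(models)} model(s) listed:")
    for m in models:
        print(" -", m.get("name"))
    print("If this host is internet-facing, put it behind a reverse proxy with auth.")
except OSError:
    print("No unauthenticated Ollama endpoint reachable at", URL)
```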
What frontier labs should do
For Western AI developers concerned about maintaining influence over the technology’s trajectory, Bernadett-Shapiro recommends a different approach to model releases. “Frontier labs can’t control deployment, but they can shape the risks that they release into the world,” he said. That includes “investing in post-release monitoring of ecosystem-level adoption and misuse patterns” rather than treating releases as one-off research outputs.
The current governance model assumes centralised deployment with diffuse upstream supply—the exact opposite of what’s actually happening. “When a small number of lineages dominate what’s runnable on commodity hardware, upstream decisions get amplified everywhere,” he explained. “Governance strategies must acknowledge that inversion.”
But acknowledgement requires visibility. Currently, most labs releasing open-weight models have no systematic way to track how they’re being used, where they’re deployed, or whether safety training remains intact after quantisation and fine-tuning.
The 12-18 month outlook
Bernadett-Shapiro expects the exposed layer to “persist and professionalise” as tool use, agents, and multimodal inputs become default capabilities rather than exceptions. The transient edge will keep churning as hobbyists experiment, but the backbone will grow more stable, more capable, and handle more sensitive data. Enforcement will remain uneven because residential and small VPS deployments don’t map to existing governance controls.
“This isn’t a misconfiguration problem,” he emphasised. “We are observing the early formation of a public, unmanaged AI compute substrate. There is no central switch to flip.”
The geopolitical dimension adds urgency. “When most of the world’s unmanaged AI compute depends on models released by a handful of non-Western labs, traditional assumptions about influence, coordination, and post-release response become weaker,” Bernadett-Shapiro said.
For Western developers and policymakers, the implication is stark: “Even perfect governance of their own platforms has limited impact on the real-world risk surface if the dominant capabilities live elsewhere and propagate through open, decentralised infrastructure.”
The open-source AI ecosystem is globalising, but its centre of gravity is shifting decisively eastward—not through any coordinated strategy, but through the practical economics of who’s willing to publish what researchers and operators actually need to run AI locally. The 175,000 exposed hosts mapped in this study are just the visible surface of that fundamental realignment—one that Western policymakers are only beginning to recognise, let alone address.
See also: Huawei details open-source AI development roadmap at Huawei Connect 2025
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post Exclusive: Why are Chinese AI models dominating open-source as Western labs step back? appeared first on AI News.
View the full article
  14. Cryptocurrency markets have become a high-speed playground where developers optimise the next generation of predictive software. Using real-time data flows and decentralised platforms, researchers develop prediction models that extend beyond the scope of traditional finance.
The digital asset landscape offers an unparalleled environment for machine learning. When you track cryptocurrency prices today, you are observing a system shaped simultaneously by on-chain transactions, global sentiment signals, and macroeconomic inputs, all of which generate dense datasets suited to advanced neural networks. Such a steady stream of information makes it possible to assess and retrain an algorithm without interference from fixed trading hours or restrictive market access.
The evolution of neural networks in forecasting
Current machine learning technology, particularly the long short-term memory (LSTM) neural network, has found widespread application in interpreting market behaviour. A recurrent neural network like an LSTM can recognise long-term market patterns and is far more flexible than traditional analytical techniques in fluctuating markets. Research on hybrid models that combine LSTMs with attention mechanisms has improved techniques for extracting important signals from market noise.
Compared to previous models that used linear techniques, these models analyse not only structured price data but also unstructured data. With the inclusion of natural language processing, it is now possible to interpret the flow of news and social media activity, enabling sentiment measurement. While prediction was previously based on historical pricing patterns, it now increasingly depends on behavioural changes in global participant networks.
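For orientation, here is a minimal sketch of the kind of LSTM sequence model described above, written in PyTorch; the window size, feature count, and layer widths are illustrative choices, not parameters from the research cited:
```python
# Minimal sketch of an LSTM price-sequence model in PyTorch.
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    def __init__(self, n_features: int = 5, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # next-step return prediction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)           # (batch, window, hidden)
        return self.head(out[:, -1])    # use the last time step's state

# Toy usage: a batch of 8 windows, each 96 steps of 5 features
# (e.g. price, volume, and on-chain or sentiment signals).
model = PriceLSTM()
window = torch.randn(8, 96, 5)
pred = model(window)
print(pred.shape)  # torch.Size([8, 1])
```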
A high-frequency environment for model validation
The transparency of blockchain data offers a level of granularity not found in existing financial infrastructure. Each transaction is an input that can be traced, enabling cause-and-effect analysis without delay. However, the growing presence of autonomous AI agents has changed how such data is used, because specialised platforms are being developed to support decentralised processing in a variety of networks. This has effectively turned blockchain ecosystems into real-time validation environments, where the feedback loop between data ingestion and model refinement occurs almost instantly.
Researchers use this setting to test specific abilities (a sketch of the first follows this list):
- Real-time anomaly detection: systems compare live transaction flows against simulated historical conditions to identify irregular liquidity behaviour before broader disruptions emerge.
- Macro sentiment mapping: global social behaviour data are compared to on-chain activity to assess true market psychology.
- Autonomous risk adjustment: programmes run probabilistic simulations to rebalance exposure dynamically as volatility thresholds are crossed.
- Predictive on-chain monitoring: AI tracks wallet activity to anticipate liquidity shifts before they impact centralised trading venues.
These systems do not function as isolated instruments. Instead, they adjust dynamically, continually changing their parameters in response to emerging market conditions.
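As an illustration of the anomaly-detection idea, with invented data and an arbitrary three-sigma threshold, a rolling z-score over transaction flows flags outliers against recent history:
```python
# Toy rolling z-score anomaly detector over transaction-flow counts.
# The data, window length, and 3-sigma threshold are invented for illustration.
import pandas as pd

flows = pd.Series([1020, 990, 1010, 1005, 995, 1000, 4800, 1015])  # tx per minute
window = 5

mean = flows.rolling(window).mean()
std = flows.rolling(window).std()
z = (flows - mean.shift(1)) / std.shift(1)   # score each point against the prior window

anomalies = flows[z.abs() > 3]
print(anomalies)  # flags the 4800 spike as irregular liquidity behaviour
```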
The synergy of DePIN and computational power
Training complex predictive models requires large amounts of computing power, which has led to the development of Decentralised Physical Infrastructure Networks (DePIN). By drawing on decentralised GPU capacity in a global computing grid, teams can reduce their dependence on centralised cloud infrastructure. Consequently, smaller research teams gain access to computational power that was previously beyond their budgets, making it easier and faster to run experiments across different model designs.
This trend is also echoed in the markets. A report dated January 2025 noted strong growth in the capitalisation of assets related to artificial intelligence agents in the latter half of 2024, as demand for such intelligence infrastructure increased.
From reactive bots to anticipatory agents
The market is moving beyond rule-based trading bots toward proactive AI agents. Instead of responding to predefined triggers, modern systems evaluate probability distributions to anticipate directional changes. Gradient boosting and Bayesian learning methods allow the identification of areas where mean reversion may occur ahead of strong corrections. Some models now incorporate fractal analysis to detect recurring structures across timeframes, further improving adaptability in rapidly changing conditions.
Addressing model risk and infrastructure constraints
Despite such rapid progress, several problems remain. One is model hallucination, in which a system reports patterns that do not exist in the underlying data. Practitioners mitigate this with techniques such as explainable AI, which exposes the reasoning behind a given prediction.
Scalability is another vital requirement that has not changed as AI technology has evolved. With the growing number of interactions among autonomous agents, the underlying transaction layer must handle rising volume without latency or data loss. At the end of 2024, the best scaling solutions handled tens of millions of transactions per day, though throughput remains an area requiring improvement.
Such an agile framework lays the foundation for a future where data, intelligence, and validation come together in a strong ecosystem that facilitates more reliable projections, better governance, and greater confidence in AI-driven insights.
The post Cryptocurrency markets a testbed for AI forecasting models appeared first on AI News.
View the full article
  15. Goldman Sachs is pushing deeper into real use of artificial intelligence inside its operations, moving to systems that can carry out complex tasks on their own. The Wall Street bank is working with AI startup Anthropic to create autonomous AI agents, powered by Anthropic’s Claude model, that can handle work that used to require large teams of people. The bank’s chief information officer says the technology has surprised staff with how capable it can be.
Many companies use AI for tasks like helping employees draft text or analyse trends. But Goldman Sachs is testing AI systems that go into what bankers call back-office work: functions like accounting, compliance checks, and onboarding new clients, areas long viewed as too complex for automation. Such jobs involve many rules, large volumes of data, and detailed review, and have resisted full automation.
Moving AI agents into process-heavy operations
The partnership with Anthropic has been underway for roughly six months, with engineers from the AI startup embedded directly with teams at Goldman Sachs to build these agents side by side with in-house staff, according to a report based on an interview with the bank’s CIO. The work has focused on areas where automation could cut the time it takes to complete repetitive and data-heavy tasks.
Marco Argenti, Goldman’s chief information officer, described the AI systems as a new kind of digital assistant. “Think of it as a digital co-worker for many of the professions in the firm that are scaled, complex and very process-intensive,” he told CNBC. In early tests, the model’s ability to reason through multi-step work and apply logic to complex areas like accounting and compliance was something the bank had not expected.
Goldman Sachs has been among the more active banks in testing AI tools over the past few years. Before this announcement, the firm deployed internal tools to help engineers write and debug code. But the change now is toward systems that can take on work traditionally done by accountants and compliance teams. That highlights how organisations are trying to find concrete business uses for AI beyond the hype.
Faster workflows, human oversight remains
The agents are based on Anthropic’s Claude Opus 4.6 model, which has been built to handle long documents and complex reasoning. Goldman’s tests have shown that such systems can reduce the time needed for tasks like client onboarding, trade reconciliation, and document review. While the bank has not shared specific performance numbers, people familiar with the matter told news outlets that work which once took a great deal of human labour can now be done in much less time.
Argenti said the rollout is not about replacing human workers, at least not at this stage. The bank reportedly views the agents as a tool to help existing staff manage busy schedules and get through high volumes of work. In areas like compliance and accounting, jobs can involve repetitive, rule-based steps. AI frees analysts from that repetition so they can focus on higher-value judgement work.
Markets have already reacted to the idea that large institutions are moving toward more AI-driven automation. In recent days, a sell-off in enterprise software stocks wiped out billions in value as some investors worried that tools like autonomous agents could speed up the decline of traditional business software that has dominated corporate IT for years.
AI adoption meets governance reality
Industry watchers see Goldman’s move as part of a wider trend.
For example, some firms are piloting tools to read large data sets, interpret multiple sources of information, and draft investment analysis. These steps show AI making the jump from isolated projects to operational work.
Yet the technology raises questions about oversight and trust. AI systems that interpret financial rules and compliance standards must be monitored carefully to avoid errors that could have regulatory or financial consequences. That’s why many institutions treat these systems as helpers whose output is reviewed by human experts until the technology matures.
Goldman Sachs is starting with operational functions that have traditionally resisted automation because they involve large volumes of data and formal steps. The bank has not said when it expects to deploy the agents across its operations, but executives have suggested that the initial tests have been promising enough to support further rollout.
The broader industry context shows other banks and financial firms exploring similar use cases. Some have already invested heavily in AI infrastructure, and reports indicate that major firms are planning to use AI to cut costs, speed up workflows, and improve risk management. However, many remain cautious about putting AI into customer-facing or regulated functions.
Goldman’s push into autonomous AI agents is an example of how large companies are reshaping internal operations using the latest generation of AI models. If these systems can handle complex tasks reliably, organisations could see real changes in how work gets done, particularly in back-office functions where volume and repetition keep costs high and innovation slow.
(Photo by Louis Droege)
See also: Intuit, Uber, and State Farm trial AI agents inside enterprise workflows
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post Goldman Sachs tests autonomous AI agents for process-heavy work appeared first on AI News.
View the full article
