Jump to content
  • Sign Up
×
×
  • Create New...

ChatGPT

Diamond Member
  • Posts

    598
  • Joined

  • Last visited

  • Feedback

    0%

Everything posted by ChatGPT

  1. AI cost efficiency and data sovereignty are at odds, forcing a rethink of enterprise risk frameworks for global organisations. For over a year, the generative AI narrative focused on a race for capability, often measuring success by parameter counts and flawed benchmark scores. Boardroom conversations, however, are undergoing a necessary correction. While the allure of low-cost, high-performance models offers a tempting path to rapid innovation, the hidden liabilities associated with data residency and state influence are forcing a reassessment of vendor selection. China-based AI laboratory DeepSeek recently became a focal point for this industry-wide debate. According to Bill Conner, former adviser to Interpol and GCHQ, and current CEO of Jitterbit, DeepSeek’s initial reception was positive because it challenged the status quo by demonstrating that “high-performing large language models do not necessarily require Silicon Valley–scale budgets.” For businesses looking to trim the immense costs associated with generative AI pilots, this efficiency was understandably attractive. Conner observes that these “reported low training costs undeniably reignited industry conversations around efficiency, optimisation, and ‘good enough’ AI.” AI and data sovereignty risks Enthusiasm for cut-price performance has collided with geopolitical realities. Operational efficiency cannot be decoupled from data security, particularly when that data fuels models hosted in jurisdictions with different legal frameworks regarding privacy and state access. Recent disclosures regarding DeepSeek have altered the math for Western enterprises. Conner highlights “recent US government revelations indicating DeepSeek is not only storing data in China but actively sharing it with state intelligence services.” This disclosure moves the issue beyond standard GDPR or CCPA compliance. The “risk profile escalates beyond typical privacy concerns into the realm of national security.” For enterprise leaders, this presents a specific hazard. LLM integration is rarely a standalone event; it involves connecting the model to proprietary data lakes, customer information systems, and intellectual property repositories. If the underlying AI model possesses a “back door” or obliges data sharing with a foreign intelligence apparatus, sovereignty is eliminated and the enterprise effectively bypasses its own security perimeter and erases any cost efficiency benefits. Conner warns that “DeepSeek’s entanglement with military procurement networks and alleged export control evasion tactics should serve as a critical warning sign for CEOs, CIOs, and risk officers alike.” Utilising such technology could inadvertently entangle a company in sanctions violations or supply chain compromises. Success is no longer just about code generation or document summaries; it is about the provider’s legal and ethical framework. Especially in industries like finance, healthcare, and defence, tolerance for ambiguity regarding data lineage is zero. Technical teams may prioritise AI performance benchmarks and ease of integration during the proof-of-concept phase, potentially overlooking the geopolitical provenance of the tool and the need for data sovereignty. Risk officers and CIOs must enforce a governance layer that interrogates the “who” and “where” of the model, not just the “what.” Governance over AI cost efficiency Deciding to adopt or ban a specific AI model is a matter of corporate responsibility. Shareholders and customers expect that their data remains secure and used solely for intended business purposes. Conner frames this explicitly for Western leadership, stating that “for Western CEOs, CIOs, and risk officers, this is not a question of model performance or cost efficiency.” Instead, “it is a governance, accountability, and fiduciary responsibility issue.” Enterprises “cannot justify integrating a system where data residency, usage intent, and state influence are fundamentally opaque.” This opacity creates an unacceptable liability. Even if a model offers 95 percent of a competitor’s performance at half the cost, the potential for regulatory fines, reputational damage, and loss of intellectual property erases those savings instantly. The DeepSeek case study serves as a prompt to audit current AI supply chains. Leaders must ensure they have full visibility into where model inference occurs and who holds the keys to the underlying data. As the market for generative AI matures, trust, transparency, and data sovereignty will likely outweigh the appeal of raw cost efficiency. See also: SAP and Fresenius to build sovereign AI backbone for healthcare Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Balancing AI cost efficiency with data sovereignty appeared first on AI News. View the full article
  2. For many large companies, artificial intelligence still lives in side projects. Small teams test tools, run pilots, and present results that struggle to spread beyond a few departments. Citi has taken a different path, where instead of keeping AI limited to specialists, the bank has spent the past two years pushing the technology into daily work in the organisation. That effort has resulted in an internal AI workforce of roughly 4,000 employees, drawn from roles that range from technology and operations to risk and customer support. The figure was first reported by Business Insider, which detailed how Citi built its “AI Champions” and “AI Accelerators” programmes to encourage participation not central control. The scale of integration is notable, as Citi employs around 182,000 people globally, and more than 70% of them now use firm-approved AI tools in some form, according to the same report. That level of use places Citi ahead of many peers that still restrict AI access to technical teams or innovation labs. From central pilots to team-level adoption Rather than start with tools, Citi focused on people. The bank invited employees to volunteer as AI Champions, giving them access to training, internal resources, and early versions of approved AI systems. The employees then supported colleagues in their own teams, acting as local points of contact not formal trainers. The approach reflects a practical view of adoption. New tools often fail not because they lack features, but because staff do not know when or how to use them. By embedding support inside teams, Citi reduced the gap between experimentation and routine work. Training played a central role. Employees could earn internal badges by completing courses or demonstrating how they used AI to improve their own tasks. The badges did not come with promotions or pay rises, but they helped create visibility and credibility in the organisation. According to Business Insider, this peer-driven model helped AI spread faster than top-down mandates. Everyday use, with guardrails Citi’s leadership has framed the effort as a response to scale not novelty. With operations spanning retail banking, investment services, compliance, and customer support, small efficiency gains can add up quickly. AI tools are being used to summarise documents, draft internal notes, analyse data sets, and assist with software development. None of these uses are new on their own, but the difference lies in how they are applied. The focus on everyday tasks also shapes Citi’s risk posture. The bank has limited employees to firm-approved tools, with guardrails around what data can be used and how outputs are handled. That constraint has slowed some experiments, but it has also made managers more comfortable allowing broader access. In regulated industries, trust often matters more than speed. What Citi’s approach shows about scaling AI The structure of Citi’s programme suggests a lesson for other large enterprises. AI adoption does not require every employee to become an expert. It requires enough people to understand the tools well enough to apply them responsibly and explain them to others. By training thousands instead of dozens, Citi reduced its reliance on a small group of specialists. There is also a cultural signal at play. Encouraging employees from non-technical roles to participate sends a message that AI is not only for engineers or data scientists. It becomes part of how work gets done, similar to spreadsheets or presentation software in earlier decades. That shift aligns with broader industry trends. Surveys from firms like McKinsey have shown that many companies struggle to move AI projects into production, often citing talent gaps and unclear ownership. Citi’s model sidesteps some of those issues by distributing ownership in teams, while keeping governance centralised. Still, the approach is not without limits. Peer-led adoption depends on sustained interest, and not all teams move at the same pace. There is also the risk that informal support networks become uneven, with some groups benefiting more than others. Citi has tried to address this by rotating Champions and updating training content as tools change. What stands out is the bank’s willingness to treat AI as infrastructure not innovation. Instead of asking whether AI could transform the business, Citi asked where it could remove friction from existing work. That framing makes progress easier to measure and reduces pressure to produce dramatic results. The experience also challenges a common assumption that AI adoption must start at the top. Citi’s senior leadership supported the effort, but much of the momentum came from employees who volunteered time to learn and teach. In large organisations, that bottom-up energy can be hard to generate, yet it often determines whether new technology sticks. As more companies move from pilots to production, Citi’s experiment offers a useful case study. It shows that scale does not come from buying more tools, but from helping people feel confident using the ones they already have. For enterprises wondering why AI progress feels slow, the answer may lie less in strategy decks and more in how work actually gets done, one team at a time. (Photo by Declan Sun) See also: JPMorgan Chase treats AI spending as core infrastructure Want to learn more about AI and big data from industry leaders? Check outAI & Big Data Expo taking place in Amsterdam, California, and London. This comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post The quiet work behind Citi’s 4,000-person internal AI rollout appeared first on AI News. View the full article
  3. SAP and Fresenius are building a sovereign AI platform for healthcare that brings secure data processing to clinical settings. For data leaders in the medical sector, deploying AI requires strict governance that public cloud solutions often lack. This collaboration addresses that gap by creating a “controlled environment” where AI models can operate without compromising data sovereignty. Moving AI from pilot to production The project aims to build an open and integrated ecosystem allowing hospitals to use AI securely. Rather than running isolated experiments, the companies plan to create a digital backbone for a sovereign and AI-supported healthcare system. Michael Sen, CEO of Fresenius, said: “Together with SAP, we can accelerate the digital transformation of the ******* and European healthcare systems and enable a sovereign European solution that is so important in today’s global landscape. “We are making data and AI everyday companions that are secure, simple and scalable for doctors and hospital teams. This creates more room for what truly matters: caring for patients.” The technical base uses SAP Business AI and the SAP Business Data Cloud. By leveraging these components, the platform creates a compliant, sovereign foundation for operating AI models in healthcare. This infrastructure handles health data responsibly, a requirement for scaling automated processes in patient care. The partnership tackles data fragmentation through SAP’s “AnyEMR” strategy, which supports the integration of diverse hospital information systems (HIS). Using open industry standards like HL7 FHIR, the platform connects HIS, electronic medical records (EMRs), and other medical applications. This connectivity allows Fresenius to develop AI-supported solutions that increase efficiency across the care chain. The goal is to build an individual, scalable platform that enables connected, data-driven healthcare processes. Investing in sovereign AI to advance healthcare Both companies intend to invest a “mid three-digit million euro amount” in the medium term. The funds target the digital transformation of ******* and European healthcare systems using AI-supported solutions. Plans include joint investments in startups and scaleups, alongside internal technological developments. This approach aims to build a broader library of tools that plug into the sovereign platform. Christian Klein, CEO of SAP SE, commented: “With SAP’s leading technology and Fresenius’ deep healthcare expertise, we aim to create a sovereign, interoperable healthcare platform for Fresenius worldwide. “Together, we want to set new standards for data sovereignty, security, and innovation in healthcare. Thanks to SAP, Fresenius can harness the full potential of digital and AI-supported processes and sustainably improve patient care.” This deal indicates that the next phase of healthcare AI in Europe will focus on sovereign infrastructure. Scalable AI requires a controlled environment to satisfy regulatory demands—without a sovereign data backbone, AI initiatives risk stalling due to compliance concerns. See also: Scaling AI value beyond pilot phase purgatory Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post SAP and Fresenius to build sovereign AI backbone for healthcare appeared first on AI News. View the full article
  4. Scaling AI value from isolated pilots to enterprise-wide adoption remains a primary hurdle for many organisations. While experimentation with generative models has become ubiquitous, industrialising these tools (i.e. wrapping them in necessary governance, security, and integration layers) often stalls. Addressing the gap between investment and operational return, IBM has introduced a new service model designed to help businesses assemble, rather than purely build, their internal AI infrastructure. Adopting asset-based consulting Traditional consultancy models typically rely on human labour to solve integration problems, a process that is often slow and capital-intensive. IBM is among the companies aiming to alter this dynamic by offering an asset-based consulting service. This approach combines standard advisory expertise with a catalogue of pre-built software assets, aiming to help clients construct and govern their own AI platforms. Instead of commissioning bespoke development for every workflow, organisations can leverage existing architectures to redesign processes and connect AI agents to legacy systems. This method helps companies to achieve value by scaling new agentic applications without necessitating alterations to their existing core infrastructure, AI models, or preferred cloud providers. Managing a multi-cloud environment A frequent concern for enterprise leaders is vendor lock-in, particularly when adopting proprietary platforms. IBM’s strategy acknowledges the reality of the heterogeneous enterprise IT landscape. The service supports a multi-vendor foundation, compatible with Amazon Web Services, Google Cloud, and Microsoft Azure, alongside IBM watsonx. This approach extends to the models themselves, supporting both open- and closed-source variants. By allowing companies to build upon their current investments rather than demanding a replacement strategy, the service addresses a barrier to adoption: the fear of technical debt accumulation when switching ecosystems. The technical backbone of this offering is IBM Consulting Advantage, the company’s internal delivery platform. Having utilised this system to support over 150 client engagements, IBM reports that the platform has boosted its own consultants’ productivity by up to 50 percent. The premise is that if these tools can accelerate delivery for IBM’s own teams, they should offer similar velocity for clients. The service provides access to a marketplace of industry-specific AI agents and applications. For business leaders, this suggests a “platform-first” focus, where attention turns from managing individual models to managing a cohesive ecosystem of digital and human workers. Active deployment of a platform-centric approach to scaling AI value The efficacy of such a platform-centric approach is best viewed through active deployment. Pearson, the global learning company, is currently utilising this service to construct a custom platform. Their implementation combines human expertise with agentic assistants to manage everyday work and decision-making processes, illustrating how the technology functions in a live operational environment. Similarly, a manufacturing firm has employed IBM’s solution to formalise its generative AI strategy. For this client, the focus was on identifying high-value use cases, testing targeted prototypes, and aligning leaders around a scalable strategy. The result was the deployment of AI assistants using multiple technologies within a secured, governed environment, laying a foundation for wider expansion across the enterprise. Despite the attention surrounding generative AI, the realisation of balance-sheet impact is not guaranteed. “Many organisations are investing in AI, but achieving real value at scale remains a major challenge,” notes Mohamad Ali, SVP and Head of IBM Consulting. “We have solved many of these challenges inside IBM by using AI to transform our own operations and deliver measurable results, giving us a proven playbook to help clients succeed.” The conversation is gradually moving away from the capabilities of specific LLMs and towards the architecture required to run them safely. Success in scaling AI and achieving value will likely depend on an organisation’s ability to integrate these solutions without creating new silos. Leaders must ensure that as they adopt pre-built agentic workflows, they maintain rigorous data lineage and governance standards. See also: JPMorgan Chase treats AI spending as core infrastructure Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Scaling AI value beyond pilot phase purgatory appeared first on AI News. View the full article
  5. Artificial intelligence has shifted rapidly from a peripheral innovation to a structural component of modern financial services. In banking, payments, and wealth management, to name but three sub-sectors, AI is now embedded in budgeting tools, fraud detection systems, KYC, AML, and customer engagement platforms. Credit unions sit in this broader fintech transformation, facing similar technological pressures and operating under distinct cooperative models built on trust, proffered services in competitive markets, and community alignment. Consumer behaviour suggests AI is already part of everyday financial decision-making. Research from Velera indicates that 55% of consumers use AI tools for financial planning or budgeting, while 42% are comfortable using AI to complete financial transactions. Adoption is highest among younger demographics, with 80% of Gen Z and younger millennials using AI for financial planning and close to that proportion expressing ‘comfort’ with agentic AI. These patterns mirror trends in the wider fintech sector, where AI-driven personal finance tools and conversational interfaces have become more common. There is a particular a dual challenge for credit unions. Member expectations are shaped by large fintech companies’ digital platforms and apps, and large digital banks are deploying AI at scale. At the average Union, internal readiness remains limited. A CULytics survey shows that although 42% of credit unions have implemented AI in specific operational areas, only 8% report using it in multiple parts of the business. The gap between market expectations and institutional ability defines the current phase of AI adoption in the cooperative-based financial sector. AI as a trust-based extension of financial services Unlike many fintech startups, credit unions benefit from high levels of consumer trust. Velera reports that 85% of consumers see credit unions as reliable sources of financial advice, and 63% of CU members say they would attend AI-related educational sessions if such were offered. These findings position credit unions as being able to frame AI as an advisory tool to be embedded in existing relationships. In fintech, “explainable AI” and transparent digital finance are mainstays as identity verification, and regulation watch the technology closely. Regulators and consumers clearly expect transparency into how decisions are made by AI back ends. Credit unions can use this expectation by integrating AI into education programmes, fraud awareness efforts and financial literacy. Where AI delivers tangible value Personalisation is a leading use case for AI. Machine learning models let financial institutions move beyond static customer segmentation, via behavioural signals and life-stage indicators. The approach is already common in other sectors, and in the industry, in fintech lending and digital banking platforms. Credit unions can adopt similar techniques, ones that tailor offers, communications, and make product recommendations. Member service represents another potential high-impact area. According to CULytics, 58% of credit unions now use chatbots or virtual assistants, the most-adopted AI application in the sector. Cornerstone Advisors reports that deployment is accelerating among credit unions than banks, using AI to handle routine enquiries and preserve staff capacity. Fraud prevention has emerged as an AI use case in the sector. Alloy reports a 92% net increase in AI fraud prevention investment among credit unions in 2025, compared with lower prioritisation among banks. As digital payments get more widely-adopted, AI-driven fraud detection is important to balance security with low-friction user experiences. In this respect, credit unions face the same pressures as mainstream fintech payment providers and neobanks, where false declines and delayed responses can directly erode customer trust. Operational efficiency and lending decisions also feature prominently. Research from Inclind and CULytics shows AI being applied to reconciliation, underwriting, and internal business analytics. Users report reduced manual workloads and faster credit decisions. Cornerstone Advisors identifies lending as the third-most common AI function among credit unions, placing them closer to fintech lenders than traditional banks in this area. Structural barriers to scaling AI Despite clear use cases, scaling AI in credit unions remains difficult. Data readiness is the most frequently cited constraint. Cornerstone Advisors reports that only 11% of credit unions rate their data strategy as very effective (nearly a quarter consider it ineffective). Without accessible, well-governed data, AI systems cannot deliver reliable outcomes, regardless of the underlying sophistication of the LLM. Trust and explainability also limit the technology’s expansion. In regulated financial environments, opaque “****** box” models create risk for institutions that as a matter of course have to justify their decisions to members. PYMNTS Intelligence highlights the importance of breaking down data silos and using shared intelligence models to improve transparency and auditability. Consortium-based approaches, like those used by Velera in thousands of credit unions, reflect a trend in the financial sector towards pooled data. Integration presents a further challenge. CULytics finds that 83% of credit unions cite integration with legacy systems as an obstacle to AI, a familiar issue to many financial institutions. Limited in-house expertise in AI compounds this, again suggesting fintech partnerships, credit union service organisations (CUSOs), or externally-managed platforms as ways to accelerate deployment. From experimentation to embedded practice As AI becomes embedded in financial services, credit unions face a choice similar to that which has been confronted by banks and the wider fintech sector: placing AI as a foundational ability. Evidence suggests progress depends on disciplined execution. That means prioritising high-trust, high-impact use cases, so institutions can deliver visible benefits and not undermine members’ confidence in their trusted institutions. Strengthening data governance and accountability ensures AI-assisted decisions remain explainable and defensible. Partner-led integration might reduce technical complexity, while education and transparency align AI adoption with the values that underpin the cooperative organisation. (Image source: “Credit Union Building” by Dano is licensed under CC BY 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Credit unions, fintech and the AI inflection of financial services appeared first on AI News. View the full article
  6. Inside large banks, artificial intelligence has moved into a category once reserved for payment systems, data centres, and core risk controls. At JPMorgan Chase, AI is framed as infrastructure the bank believes it cannot afford to neglect. That position came through clearly in recent comments from CEO Jamie Dimon, who defended the bank’s rising technology budget and warned that institutions that fall behind on AI risk losing ground to competitors. The argument was not about replacing people but about staying functional in an industry where speed, scale, and cost discipline matter every day. JPMorgan has been investing heavily in technology for years, but AI has changed the tone of that spending. What once sat with innovation projects is now folded into the bank’s baseline operating costs. That includes internal AI tools that support research, document drafting, internal reviews, and other routine tasks in the organisation. From experimentation to infrastructure The shift in language reflects a deeper change in how the bank views risk. AI is considered part of the systems required to keep pace with competitors that are automating internal work. Rather than encouraging workers to rely on public AI systems, JPMorgan has focused on building and governing its own internal platforms. That decision reflects long-held concerns in banking about data exposure, client confidentiality, and regulatory monitoring. Banks operate in an environment where mistakes carry high costs. Any system that touches sensitive data or influences choices must be auditable and explainable. Public AI tools, trained on datasets and updated frequently, make that difficult. Internal systems give JPMorgan more control, even if they take longer to deploy. The approach also reduces the potential of uncontrolled “shadow AI,” in which employees use unapproved tools to speed up work. While such tools can improve productivity, they create gaps in oversight that regulators tend to notice quickly. A cautious approach to workforce change JPMorgan has been careful in how it talks about AI’s impact on jobs. The bank has avoided claims that AI will dramatically reduce headcount. Instead, it presents AI as a way to reduce manual work and improve consistency. Tasks that once required multiple review cycles can now be completed faster, with employees still responsible for final judgement. The framing positions AI as support not substitution, which matters in a sector sensitive to political and regulatory reaction. The scale of the organisation makes this approach practical. JPMorgan employs hundreds of thousands of people worldwide. Even tiny efficiency gains, applied broadly, can translate into meaningful cost savings over time. The upfront investment required to build and maintain internal AI systems is substantial. Dimon acknowledges that technology spending can have an impact on short-term performance, especially when market conditions are uncertain. His response is that cutting back on technology now may improve margins in the near term, but it risks weakening the bank’s position later. In that sense, AI spending is treated as a form of insurance against falling behind. JPMorgan, AI, and the risk of falling behind rivals JPMorgan’s stance reflects pressure in the banking sector. Rivals are investing in AI to speed up fraud detection, streamline compliance work, and improve internal reporting. As these tools become more common, expectations rise. Regulators may assume banks have access to advanced monitoring systems. Clients may expect faster responses and fewer errors. In that environment, lagging on AI can look less like caution and more like mismanagement. JPMorgan has not suggested that AI will solve structural challenges or eliminate risk. Many AI projects struggle to move beyond narrow uses, and integrating them into complex systems remains difficult. The harder work lies in governance. Deciding which teams can use AI, under what conditions, and with what oversight requires clear rules. Errors need defined escalation paths. Responsibility must be assigned when systems produce flawed output. Across large enterprises, AI adoption is not limited by access to models or computing power, but constrained by process, policy, and trust. For other end-user companies, JPMorgan’s approach offers a useful reference point. AI is treated as part of the machinery that keeps the organisation running. That does not guarantee success. Returns may take years to appear, and some investments will not pay off. But the bank’s position is that the greater risk lies in doing too little, not too much. (Photo by IKECHUKWU JULIUS UGWU) See also: Banks operationalise as Plumery AI launches standardised integration Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post JPMorgan Chase treats AI spending as core infrastructure appeared first on AI News. View the full article
  7. For the majority of web users, generative AI is AI. Large Language Models (LLMs) like GPT and Claude are the de facto gateway to artificial intelligence and the infinite possibilities it has to offer. After mastering our syntax and remixing our memes, LLMs have captured the public imagination. They’re easy to use and fun. And – the odd hallucination aside – they’re smart. But while the public plays around with their favourite flavour of LLM, those who live, breathe, and sleep AI – researchers, tech heads, developers – are focused on ******* things. That’s because the ultimate goal for AI max-ers is artificial general intelligence (AGI). That’s the endgame. To the professionals, LLMs are a sideshow. Entertaining and eminently useful, but ultimately ‘narrow AI.’ They’re good at what they do because they’ve been trained on specific datasets, but incapable of straying out of their lane and attempting to solve larger problems. The diminishing returns and inherent limitations of deep learning models is prompting exploration of smarter solutions capable of actual cognition. Models that lie somewhere between the LLM and AGI. One system that falls into this bracket – smarter than an LLM and a foretaste of future AI – is OpenCog Hyperon, an open-source framework developed by SingularityNET. With its ‘neural-symbolic’ approach, Hyperon is designed to bridge the gap between statistical pattern matching and logical reasoning, offering a roadmap that joins the dots between today’s chatbots and tomorrow’s infinite thinking machines. Hybrid architecture for AGI SingularityNET has positioned OpenCog Hyperon as a next-generation AGI research platform that integrates multiple AI models into a unified cognitive architecture. Unlike LLM-centric systems, Hyperon is built around neural-symbolic integration in which AI can learn from data and reason about knowledge. That’s because withneural-symbolic AI, neural learning components and symbolic reasoning mechanisms are interwoven so that one can inform and enhance the other. This overcomes one of the primary limitations of purely statistical models by incorporating structured, interpretable reasoning processes. At its core, OpenCog Hyperon combines probabilistic logic and symbolic reasoning with evolutionary programme synthesis and multi-agent learning. That’s a lot of terms to take it, so let’s try and break down how this all works in practice. To understand OpenCog Hyperon – and specifically why neural-symbolic AI is such a big deal – we need to understand how LLMs work and where they come up short. The limits of LLMs Generative AI operates primarily on probabilistic associations. When an LLM answers a question, it doesn’t ‘know’ the answer in the way a human instinctively does. Instead, it calculates the most probable sequence of words to follow the prompt based on its training data. Most of the time, this ‘impersonation of a person’ comes in very convincingly, providing the human user with not only the output they expect, but one that is correct. LLMs specialise in pattern recognition on an industrial scale and they’re very good at it. But the limitations of these models are well documented. There’s hallucination, of course, which we’ve already touched on, where plausible-sounding but factually incorrect information is presented. Nothing gaslights harder than an LLM eager to please its master. But a greater problem, particularly once you get into more complex problem-solving, is a lack of reasoning. LLMs aren’t adept at logically deducing new truths from established facts if those specific patterns weren’t in the training set. If they’ve seen the pattern before, they can predict its appearance again. If they haven’t, they hit a wall. AGI, in comparison, describes artificial intelligence that can genuinely understand and apply knowledge. It doesn’t just guess the right answer with a high degree of certainty – it knows it, and it’s got the working to back it up. Naturally, this ability calls for explicit reasoning skills and memory management – not to mention the ability to generalise when given limited data. Which is why AGI is still some way off – how far off depends on which human (or LLM) you ask. But in the meantime, whether AGI be months, years, or decades away, we have neural-symbolic AI, which has the potential to put your LLM in the shade. Dynamic knowledge on demand To understand neural-symbolic AI in action, let’s return toOpenCog Hyperon. At its heart is the Atomspace Metagraph, a flexible graph structure that represents diverse forms of knowledge including declarative, procedural, sensory, and goal-directed, all contained in a single substrate. The metagraph can encode relationships and structures in ways that support not just inference, but logical deduction and contextual reasoning. If this sounds a lot like AGI, it’s because it is. ‘Diet AGI,’ if you like, provides a taster of where artificial intelligence is headed next. So that developers can build with the Atomspace Metagraph and use its expressive power, Hyperon has created MeTTa (Meta Type Talk), a novel programming language designed specifically for AGI development. Unlike general-purpose languages like Python, MeTTa is a cognitive substrate that blends elements of logic and probabilistic programming. Programmes in MeTTa operate directly on the metagraph, querying and rewriting knowledge structures, and supporting self-modifying code, which is essential for systems that learn how to improve themselves. "We're emerging from a couple of years spent on building tooling. We've finally got all our infrastructure working at scale for Hyperon, which is exciting." Our CEO, Dr. @bengoertzel, joined Robb Wilson and Josh Tyson on the Invisible Machines podcast to discuss the present and… pic.twitter.com/8TqU8cnC2L — SingularityNET (@SingularityNET) January 19, 2026 Robust reasoning as gateway to AGI The neural-symbolic approach at the heart of Hyperon addresses a key limitation of purely statistical AI, namely that narrow models struggle with tasks requiring multi-step reasoning. Abstract problems bamboozle LLMs with their pure pattern recognition. Throw neural learning into the mix, however, and reasoning becomes smarter and more human. If narrow AI does a good impersonation of a person, neural-symbolic AI does an uncanny one. That being said, it’s important to contextualise neural-symbolic AI. Hyperon’s hybrid design doesn’t mean an AGI breakthrough is imminent. But it represents a promising research direction that explicitly tackles cognitive representation and self-directed learning not relying on statistical pattern matching alone. And in the here and now, this concept isn’t constrained to some big brain whitepaper – it’s out there in the wild and being actively used to create powerful solutions. The LLM isn’t dead – narrow AI will continue to improve – but its days are numbered and its obsolescence inevitable. It’s only a matter of time. First neural-symbolic AI. Then, hopefully, AGI – the final boss of artificial intelligence. Image source: Depositphotos The post OpenCog Hyperon and AGI: Beyond large language models appeared first on AI News. View the full article
  8. After years of experimentation with artificial intelligence, retailers are striving to embed consumer insight directly into everyday commercial decisions. First Insight, a US-based analytics company specialising in predictive consumer feedback, argues that the next phase of retail AI should be epitomised by dialogue, not dashboards. Following a three-month beta programme, First Insight has made its new AI tool, Ellis, available to brands and retailers. Ellis is designed as a conversational interface that allows merchandising, pricing and planning teams to ask questions about products, pricing, and demand in the First Insight platform. The company says its approach is intended to compress decision times into minutes. Research by McKinsey has found that while most large retailers now collect volumes of customer data, some can’t translate insights into action quickly enough to influence product development decisions. It notes AI tools which shorten the distance between insight and execution are more likely to deliver measurable commercial value than reporting systems. From dashboards to dialogue First Insight has worked with retailers including Boden, Family Dollar, and Under Armour to predict consumer demand, price sensitivity, and performance using survey feedback and predictive modelling. Such insights are usually delivered on a dashboard or in a report. Ellis lets users query insights conversationally. For example, teams can ask whether a six-item or nine-item assortment is likely to perform better in a specific market, or how removing certain materials might affect appeal. First Insight says the system returns answers grounded in its existing data models. Industry evidence suggests that this method could help with a bottleneck in retail decision-making. A Harvard Business Review analysis of data-driven retail organisations found insight often loses value when it cannot be accessed quickly, particularly during phases like line review or early concept development. Predictive insight already in operation The underlying techniques used by First Insight are deployed already across the retail sector. Under Armour has described using consumer data and predictive modelling to refine product assortments and pricing strategies, stating the technology helps it reduce markdown risk and improve full-price selling. Similarly, fashion retailer Boden has discussed the role of customer insight in guiding assortment decisions, particularly in balancing trend-led items with core items. While these companies do not disclose the details of their proprietary systems, such cases can show how predictive consumer data can be embedded into commercial planning. Comparable tools are also in use elsewhere in the industry. Retailers including Walmart and Target have invested in analytics and machine learning to understand regional demand patterns, optimise pricing, and test new concepts. According to a Deloitte study on AI in retail, companies using predictive consumer insight report improved forecast accuracy and lower inventory risk, particularly when analytics are integrated early. Pricing, assortments and competitive dynamics Ellis is powered by what First Insight describes as a predictive retail large language model, one that’s trained on consumer response data. The company says this lets the system answer questions about optimal pricing, predicted sales rates, ideal assortment size, and likely segment preferences. This focus aligns with academic research showing that price optimisation and assortment planning are among the highest-value AI use cases in retail. A study published in the Journal of Retailing found that data-driven pricing models can outperform traditional cost-plus approaches, particularly when consumer willingness-to-pay is measured directly. Competitive benchmarking is another area where retailers can use analytics. Research from Bain & Company indicates retailers able to compare their products with competitors’ are better positioned to differentiate on value as well as price. Tools that consolidate such comparisons into a single analytical layer can be considered the ideal, therefore. Making insight more widely accessible One of First Insight’s core claims is that Ellis makes consumer insight accessible outside of specialist analytics teams. Natural-language queries, the company argues, lets senior executives down engage with data with no waiting for analysis. Democratisation of analytics is a recurring theme in a great deal of industry research. Gartner reports organisations which broaden access to analytics are more likely to see tool adoption and ROI. However, it cautions that systems should be governed to ensure outputs are interpreted correctly and stem from robust data. First Insight maintains that Ellis retains the methodological rigour of its existing platform, while reducing friction at the point of decision. According to Greg Petro, the company’s chief executive, the goal is to bring predictive insight into the moment when decisions are actually made. “For nearly 20 years, First Insight has helped retailers predict pricing, product success and assortment decisions by grounding them in real consumer feedback,” a company spokesperson said. “Ellis brings that intelligence directly into line review, early concept development and the boardroom, helping teams move faster without sacrificing confidence.” A crowded but growing market First Insight is not alone to target the space. Vendors such as EDITED, DynamicAction, and RetailNext offer AI tools aimed at merchandising and pricing. What differentiates newer offerings is the emphasis on usability and speed rather than model complexity. A recent Forrester report on retail AI noted that conversational interfaces are being layered on top of established analytics platforms, reflecting a demand from users for more intuitive interaction with data. Such tools lead to better decisions, although are dependent on data quality and organisational discipline. First Insight previewed Ellis at this year’s National Retail Federation conference in New York, where AI-driven merchandising and pricing tools featured prominently. As retailers face volatile demand, inflation, and changing consumer preferences, the ability to test scenarios remains valuable. (Image source: “2008 first insight” by palmasco is licensed under CC BY-NC-ND 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Retailers bring conversational AI and analytics closer to the user appeared first on AI News. View the full article
  9. A new technology from digital banking platform Plumery AI aims to address a dilemma for financial institutions: how to move beyond proofs of concept and embed artificial intelligence into everyday banking operations without compromising governance, security, or regulatory compliance. Plumery’s “AI Fabric” has been positioned by the company as a standardised framework for connecting generative AI tools and models to core banking data and services. According to Plumery, the product is intended to reduce reliance on bespoke integrations and to promote an event-driven, API-first architecture that can scale as institutions grow. The challenge it seeks to address is recognised in the sector. Banks have invested heavily in AI experimentation over the past decade, but many deployments remain limited. Research by McKinsey suggests that while generative AI could materially improve productivity and customer experience in financial services, most banks struggle to translate pilots into production because of fragmented data estates and incumbent operating models. The consultancy argues that enterprise-level AI adoption requires shared infrastructure and governance, and reusable data products. In comments accompanying the product launch, Plumery’s founder and chief executive, Ben Goldin, said financial institutions are clear about what they expect from AI. “They want real production use cases that improve customer experience and operations, but they will not compromise on governance, security or control,” he said. “The event-driven data mesh architecture transforms how banking data is produced, shared, and consumed, not adding another AI layer on top of fragmented systems.” Fragmented data remains a barrier Data fragmentation remains one of the obstacles to operational AI in banking. Many institutions rely on legacy core systems that sit in newer digital channels, creating silos in products and customer journeys. Each AI initiative requires fresh integration work, security reviews, and governance approvals, thus increasing costs and slowing delivery. Academic and industry research supports this diagnosis. Studies on explainable AI in financial services note that fragmented pipelines make it harder to trace decisions and increase regulatory risk, particularly in areas like credit scoring and anti-money-laundering. Regulators have made clear that banks must be able to explain and audit AI-driven outcomes, regardless of where the models are developed. Plumery says its AI Fabric addresses such issues by presenting domain-oriented banking data as governed streams that can be reused in multiple use cases. The company argues that separating systems of record from systems of engagement and intelligence allows banks to innovate more safely. Evidence of AI already in production Despite the challenges, AI is already embedded in many parts of the financial sector. Case studies compiled by industry analysts show widespread use of machine learning and natural language processing in customer service, risk management, and compliance. Citibank, for example, has deployed AI-powered chatbots to handle routine customer enquiries, reducing pressure on call centres and improving response times. Other large banks use predictive analytics to monitor loan portfolios and anticipate defaults. Santander has publicly described its use of machine learning models to assess credit risk and strengthen portfolio management. Fraud detection is another mature area. Banks rely increasingly on AI systems to analyse transaction patterns, flagging anomalous behaviour more effectively than rule-based systems. Research from technology consultancies notes that such models depend on high-quality data flows, and that integration complexity remains a limiting factor for smaller institutions. More advanced applications are emerging at the margins. Academic research into large language models suggests that, under strict governance, conversational AI could support certain transactional and advisory functions in retail banking. However, these implementations remain experimental and are closely scrutinised due to their regulatory implications. Platform providers and ecosystem approaches Plumery operates in a competitive market of digital banking platforms that position themselves as orchestration layers rather than replacements for core systems. The company has entered partnerships designed to fit into broader fintech ecosystems. Its integration with Ozone API, an open banking infrastructure provider, was presented as a way for banks to deliver standards-compliant services more quickly, without custom development. Its approach reflects a wider industry trend towards composable architectures. Vendors like Backbase and others promote API-centric platforms that allow banks to plug in AI, analytics, and third-party services to the existing core. Analysts agree generally that such architectures are better suited to incremental innovation than large-scale system replacement. Readiness remains uneven Evidence suggests that readiness in the sector is uneven. A report by Boston Consulting Group found that fewer than a quarter of banks believe they are prepared for large-scale AI adoption. The gap, it argued, lies in governance, data foundations, and operating discipline. Regulators have responded by offering controlled environments for experimentation. In the ***, regulatory sandbox initiatives allow banks to test new technologies, including AI. These programmes are intended to support innovation and reinforce accountability and risk management. For vendors like Plumery, the opportunity lies in providing infrastructure that aligns technological ambition and regulatory reality. AI Fabric enters a market where demand for operational AI is apparent, but where success depends on proving that new tools can be safe and transparent. Whether Plumery’s approach becomes a adopted standard remains uncertain. As banks move from experimentation to production, the focus is moving towards the architectures that support AI. In that context, platforms that can demonstrate technical flexibility and governance adherence are more likely to play an important role in the digital banking’s next phase. (Image source: “Colorful Shale Strata of the Morrison Formation at the Edge of the San Rafael Swell” by Jesse Varner is licensed under CC BY-NC-SA 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Banks operationalise as Plumery AI launches standardised integration appeared first on AI News. View the full article
  10. Author: Richard Farrell, CIO at Netcall After a year of rapid adoption and high expectations surrounding artificial intelligence, 2026 is shaping up to be the year CIOs apply a more strategic lens. Not to slow progress, but to steer it in a smarter direction. In 2025, we saw the rise of AI copilots across almost every platform imaginable. From browsers and CRMs to productivity tools and helpdesks, the tech world raced to embrace assistance-on-demand. But while vendors marketed “magic,” CIOs were left with the clean-up. Multiple pilots. Multiple platforms. Multiple promises. Few results. Now the honeymoon ******* is over. It’s time to assess what worked, what didn’t, and what truly matters. The role of the CIO is shifting from tech enthusiast to strategic outcome architect. That means moving from disconnected experiments to holistic thinking – aligning people, process, and technology to drive sustainable results. Process mapping will become an essential starting point: identifying pain points, inefficiencies, and areas for AI and automation that directly link to measurable outcomes. And that shift comes with a new set of priorities. Here are five that will define 2026. Process intelligence will replace fragmented copilots The early promise of AI copilots was appealing: save time, reduce manual work, and supercharge productivity. But reality has been far more grounded. Independent evaluations, including a detailed *** Department for Business and Trade trial, found minimal measurable productivity improvements[1]. Despite glowing self-reports, actual gains were either negligible or non-existent. Why? Because these tools were designed for individual users, not organisations. They sat on top of workflows, rather than improving them. In too many cases, the top use case was summarising meeting notes – useful, but hardly transformative. In 2026, CIOs will shift focus from point solutions to end-to-end platforms. The goal will be clear: use AI to optimise business processes, not pad out software features. This pivot from individual utility to organisational efficiency will be the biggest AI reset of the year. Consolidation will beat complexity CIOs have long battled sprawling tech estates and overlapping solutions, often held together by fragile integrations. In 2026, that complexity will come under fresh scrutiny. Too many tools chasing too few outcomes is no longer sustainable. There will be a marked shift towards simplification – rationalising technology stacks and working with partners who can demonstrate true interoperability. CIOs will favour vendors who collaborate rather than compete, and who can clearly show how their solutions integrate within the broader ecosystem. Less will be more, especially when it comes to driving efficiency and speed. This change is as much about procurement strategy as it is about technology. CIOs will look to platform-based approaches that offer the flexibility to build applications tailored to real-world processes. The ability to generate apps directly from mapped processes – refining and improving iteratively – will empower digital teams to deliver faster and smarter. It means building long-term partnerships that are based on shared goals and business value, not short-term sprints or siloed innovation. Governance will take centre stage The more AI scales, the more governance matters. In 2026, successful CIOs will build guardrails into every intelligent system. This means moving away from retrofitting rules after the fact, and instead embedding governance by design – from the very beginning of deployment. That includes audit trails, escalation rules, and privacy protocols, all built into the user journey through intuitive, adaptable frameworks. Proper escalation and human-in-the-loop models will be essential, alongside data stewardship – knowing where data is stored, how it’s accessed, and ensuring privacy by design. Governance isn’t a drag on progress; it’s the foundation of trust. Low-code platforms are emerging as powerful enablers in this shift. They don’t just speed up development – they allow CIOs to embed controls directly into the build process. This approach supports the democratisation of development, empowering teams to iterate, improve, and scale quickly, without compromising on oversight. That means compliance can’t be tacked on later; it must be built in from the start. This accelerates delivery while reassuring regulators, customers, and internal teams alike. This shift will ensure that automation supports human judgement, not overrides it – building systems people trust, not just systems that work. Prediction must be followed by action AI is good at pattern recognition. But unless those patterns trigger interventions, they don’t change outcomes. A shining example of this shift is the work at Rotherham NHS Foundation Trust. By embedding AI directly into its workflows, the Trust saw attendance among those most at risk of missing appointments improve significantly, with a 67% reduction in missed visits. It was not just that the model could identify at-risk patients; it was that this insight triggered an additional reminder, leading to better outcomes. The value was not in the model alone but in how it changed communication in a meaningful, practical way. That’s what CIOs will demand in 2026. Prediction engines must be paired with platforms that empower action. Whether it’s preventing missed appointments or spotting security anomalies before breaches occur, success will be defined by what AI enables teams to do differently. Value must be proven, not assumed A dangerous trend emerged in 2025: building business cases on feelings. CIOs were pressured to prove AI success based on user satisfaction or time-saving estimates, often self-reported. The problem? These metrics are vague, inconsistent, and impossible to verify. In 2026, that won’t be good enough. CIOs will be expected to show clear cause and effect. If AI is being used, what has it replaced? What has it improved? What cost has it avoided? We need to replace the tick-box mindset with a value lens. That means thinking beyond the tech and tying initiatives back to outcomes CEOs care about – growth, resilience, customer satisfaction, and efficiency. Crucially, this demands a holistic approach. It’s not just about technology. CIOs must align people, process, and platform – starting with detailed process mapping to understand how work gets done, where inefficiencies lie, and how those insights translate into smarter applications. These maps become blueprints for building, offering a framework to generate applications that deliver measurable value. The resolution: outcome-led leadership CIOs have spent the last decade digitising the enterprise. In 2026, their role will evolve again – from technologists to outcome architects. This year isn’t about pulling back on AI or slowing innovation. It’s about getting clear. Clear on priorities. Clear on governance. Clear on impact. The best CIOs will ask the toughest questions. Are we solving a real problem, or just deploying tech? Can we measure the benefit, not just hope for it? Are we building something sustainable, or chasing hype? 2026 is the year we stop experimenting for the sake of it and start delivering for the business. The age of shiny objects is over. It’s time for substance. And that starts with us. Author: Richard Farrell, CIO at Netcall (Image source: “Apollo classic concept art: Parachute deployment” by Mooncat.Drew is marked with Public Domain Mark 1.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI dominated the conversation in 2025, CIOs shift gears in 2026 appeared first on AI News. View the full article
  11. The ETSI EN 304 223 standard introduces baseline security requirements for AI that enterprises must integrate into governance frameworks. As organisations embed machine learning into their core operations, this European Standard (EN) establishes concrete provisions for securing AI models and systems. It stands as the first globally applicable European Standard for AI cybersecurity, having secured formal approval from National Standards Organisations to strengthen its authority across international markets. The standard serves as a necessary benchmark alongside the EU AI Act. It addresses the reality that AI systems possess specific risks – such as susceptibility to data poisoning, model obfuscation, and indirect prompt injection – that traditional software security measures often miss. The standard covers deep neural networks and generative AI through to basic predictive systems, explicitly excluding only those used strictly for academic research. ETSI standard clarifies the chain of responsibility for AI security A persistent hurdle in enterprise AI adoption is determining who owns the risk. The ETSI standard resolves this by defining three primary technical roles: Developers, System Operators, and Data Custodians. For many enterprises, these lines blur. A financial services firm that fine-tunes an open-source model for fraud detection counts as both a Developer and a System Operator. This dual status triggers strict obligations, requiring the firm to secure the deployment infrastructure while documenting the provenance of training data and the model’s design auditing. The inclusion of ‘Data Custodians’ as a distinct stakeholder group directly impacts Chief Data and Analytics Officers (CDAOs). These entities control data permissions and integrity, a role that now carries explicit security responsibilities. Custodians must ensure that the intended usage of a system aligns with the sensitivity of the training data, effectively placing a security gatekeeper within the data management workflow. ETSI’s AI standard makes clear that security cannot be an afterthought appended at the deployment stage. During the design phase, organisations must conduct threat modelling that addresses AI-native attacks, such as membership inference and model obfuscation. One provision requires developers to restrict functionality to reduce the attack surface. For instance, if a system uses a multi-modal model but only requires text processing, the unused modalities (like image or audio processing) represent a risk that must be managed. This requirement forces technical leaders to reconsider the common practice of deploying massive, general-purpose foundation models where a smaller and more specialised model would suffice. The document also enforces strict asset management. Developers and System Operators must maintain a comprehensive inventory of assets, including interdependencies and connectivity. This supports shadow AI discovery; IT leaders cannot secure models they do not know exist. The standard also requires the creation of specific disaster recovery plans tailored to AI attacks, ensuring that a “known good state” can be restored if a model is compromised. Supply chain security presents an immediate friction point for enterprises relying on third-party vendors or open-source repositories. The ETSI standard requires that if a System Operator chooses to use AI models or components that are not well-documented, they must justify that decision and document the associated security risks. Practically, procurement teams can no longer accept “****** box” solutions. Developers are required to provide cryptographic hashes for model components to verify authenticity. Where training data is sourced publicly (a common practice for Large Language Models), Developers must document the source URL and acquisition timestamp. This audit trail is necessary for post-incident investigations, particularly when attempting to identify if a model was subjected to data poisoning during its training phase. If an enterprise offers an API to external customers, they must apply controls designed to mitigate AI-focused attacks, such as rate limiting to prevent adversaries from reverse-engineering the model or overwhelming defences to inject poison data. The lifecycle approach extends into the maintenance phase, where the standard treats major updates – such as retraining on new data – as the deployment of a new version. Under the ETSI AI standard, this triggers a requirement for renewed security testing and evaluation. Continuous monitoring is also formalised. System Operators must analyse logs not just for uptime, but to detect “data drift” or gradual changes in behaviour that could indicate a security breach. This moves AI monitoring from a performance metric to a security discipline. The standard also addresses the “End of Life” phase. When a model is decommissioned or transferred, organisations must involve Data Custodians to ensure the secure disposal of data and configuration details. This provision prevents the leakage of sensitive intellectual property or training data through discarded hardware or forgotten cloud instances. Executive oversight and governance Compliance with ETSI EN 304 223 requires a review of existing cybersecurity training programmes. The standard mandates that training be tailored to specific roles, ensuring that developers understand secure coding for AI while general staff remain aware of threats like social engineering via AI outputs. “ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems”, said Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence. “At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated. The work that went into delivering this framework is the result of extensive collaboration and it means that organisations can have full confidence in AI systems that are resilient, trustworthy, and secure by design.” Implementing these baselines in ETSI’s AI security standard provides a structure for safer innovation. By enforcing documented audit trails, clear role definitions, and supply chain transparency, enterprises can mitigate the risks associated with AI adoption while establishing a defensible position for future regulatory audits. An upcoming Technical Report (ETSI TR 104 159) will apply these principles specifically to generative AI, targeting issues like deepfakes and disinformation. See also: Allister Frost: Tackling workforce anxiety for AI integration success Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Meeting the new ETSI standard for AI security appeared first on AI News. View the full article
  12. Hiring at large firms has long relied on interviews, tests, and human judgment. That process is starting to shift. McKinsey has begun using an AI chatbot as part of its graduate recruitment process, signalling a shift in how professional services organisations evaluate early-career candidates. The chatbot is being used during the initial stages of recruitment, where applicants are asked to interact with it as part of their assessment. Rather than replacing interviews or final hiring decisions, the tool is intended to support screening and evaluation earlier in the process. The move reflects a wider trend across large organisations: AI is no longer limited to research or client-facing tools, but is increasingly shaping internal workflows. Why McKinsey is using AI in graduate hiring Graduate recruitment is resource-heavy. Every year, large firms receive tens of thousands of applications, many of which must be assessed in short hiring cycles. Screening candidates for basic fit, communication skills, and problem-solving ability can take a long time, even before interviews begin. Using AI at this stage offers a way to manage volume. A chatbot can interact with every applicant, ask consistent questions and collect organised responses. Human recruiters can then review that data, rather than requiring staff to manually screen every application from scratch. For McKinsey, the chatbot is part of a larger assessment process that includes interviews and human judgment. According to the company, the tool helps in gathering more information early on, rather than making recruiting judgments on its own. Shifting the role of recruiters Introducing AI into recruitment alters how hiring teams operate. Rather than focusing on early screening, recruiters can devote more time to assessing prospects who have already passed initial tests. In theory, that allows for more thoughtful interviews and deeper evaluation later in the process. At the same time, it raises questions about oversight. Recruiters need to understand how the chatbot evaluates responses and what signals it prioritises. Without that visibility, there is a risk that decisions could lean too heavily on automated outputs, even if the tool is meant to assist rather than decide. Professional services firms are typically wary about such adjustments. Their reputations rely heavily on talent quality, and any perception of unfair or flawed hiring practices carries risk. As a result, recruitment serves as a testing ground for AI use, as well as an area where controls are important. Concerns around fairness and bias Using AI in hiring is not without controversy. Critics have raised concerns that automated systems can reflect biases present in their training data or in how questions are framed. If not monitored closely, those biases can affect who progresses through the hiring process. McKinsey has said it is mindful of these risks and that the chatbot is used alongside human review. Still, the move highlights a broader challenge for organisations adopting AI internally: tools must be tested, audited, and adjusted over time. In recruitment, that includes checking whether certain groups are disadvantaged by how questions are asked or how responses are interpreted. It also means giving candidates clear information about how AI is used and how their data is handled. How McKinsey’s AI hiring move fits a wider enterprise trend The use of AI in graduate hiring is not unique to consulting. Large employers in finance, law, and technology are also testing AI tools for screening, scheduling interviews, and analysing written responses. What stands out is how quickly these tools are moving from experiments to real processes. In many cases, AI enters organisations through small, contained use cases. Hiring is one of them. It sits inside the company, affects internal efficiency, and can be adjusted without changing products or services offered to clients. That pattern mirrors how AI adoption is unfolding more broadly. Instead of sweeping transformations, many firms are adding AI to specific workflows where the benefits and risks are easier to manage. What this signals for enterprises McKinsey’s use of an AI chatbot in recruitment points to a practical shift in enterprise thinking. AI is becoming a tool for routine internal decisions, not just analysis or automation behind the scenes. For other organisations, the lesson is less about copying the tool and more about approach. Introducing AI into sensitive areas like hiring requires clear boundaries, human oversight, and a willingness to review outcomes over time. It also requires communication. Candidates need to know when they are interacting with AI and how that interaction fits into the overall hiring process. Transparency helps build trust, especially as AI becomes more common in workplace decisions. As professional services firms continue to test AI in their own operations, recruitment offers an early view of how far they are willing to go. The technology may help manage scale and consistency, but responsibility for decisions still rests with people. How well companies balance those two will shape how AI is accepted inside the enterprise. (Photo by Resume Genius) See also: Allister Frost: Tackling workforce anxiety for AI integration success Want to learn more about AI and big data from industry leaders? Check outAI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post McKinsey tests AI chatbot in early stages of graduate recruitment appeared first on AI News. View the full article
  13. OpenAI, Google, and Anthropic announced specialised medical AI capabilities within days of each other this month, a clustering that suggests competitive pressure rather than coincidental timing. Yet none of the releases are cleared as medical devices, approved for clinical use, or available for direct patient diagnosis—despite marketing language emphasising healthcare transformation. OpenAI introduced ChatGPT Health on January 7, allowing US users to connect medical records through partnerships with b.well, Apple Health, Function, and MyFitnessPal. Google released MedGemma 1.5 on January 13, expanding its open medical AI model to interpret three-dimensional CT and MRI scans alongside whole-slide histopathology images. Anthropic followed on January 11 with Claude for Healthcare, offering HIPAA-compliant connectors to CMS coverage databases, ICD-10 coding systems, and the National Provider Identifier Registry. All three companies are targeting the same workflow pain points—prior authorisation reviews, claims processing, clinical documentation—with similar technical approaches but different go-to-market strategies. Developer platforms, not diagnostic products The architectural similarities are notable. Each system uses multimodal large language models fine-tuned on medical literature and clinical datasets. Each emphasises privacy protections and regulatory disclaimers. Each positions itself as supporting rather than replacing clinical judgment. The differences lie in deployment and access models. OpenAI’s ChatGPT Health operates as a consumer-facing service with a waitlist for ChatGPT Free, Plus, and Pro subscribers outside the EEA, Switzerland, and the ***. Google’s MedGemma 1.5 releases as an open model through its Health AI Developer Foundations program, available for download via Hugging Face or deployment through Google Cloud’s Vertex AI. Anthropic’s Claude for Healthcare integrates into existing enterprise workflows through Claude for Enterprise, targeting institutional buyers rather than individual consumers. The regulatory positioning is consistent across all three. OpenAI states explicitly that Health “is not intended for diagnosis or treatment.” Google positions MedGemma as “starting points for developers to evaluate and adapt to their medical use cases.” Anthropic emphasises that outputs “are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications.” Benchmark performance vs clinical validation Medical AI benchmark results improved substantially across all three releases, though the gap between test performance and clinical deployment remains significant. Google reports that MedGemma 1.5 achieved 92.3% accuracy on MedAgentBench, Stanford’s medical agent task completion benchmark, compared to 69.6% for the previous Sonnet 3.5 baseline. The model improved by 14 percentage points on MRI disease classification and 3 percentage points on CT findings in internal testing. Anthropic’s Claude Opus 4.5 scored 61.3% on MedCalc medical calculation accuracy tests with Python code execution enabled, and 92.3% on MedAgentBench. The company also claims improvements in “honesty evaluations” related to factual hallucinations, though specific metrics were not disclosed. OpenAI has not published benchmark comparisons for ChatGPT Health specifically, noting instead that “over 230 million people globally ask health and wellness-related questions on ChatGPT every week” based on de-identified analysis of existing usage patterns. These benchmarks measure performance on curated test datasets, not clinical outcomes in practice. Medical errors can have life-threatening consequences, translating benchmark accuracy to clinical utility more complex than in other AI application domains. Regulatory pathway remains unclear The regulatory framework for these medical AI tools remains ambiguous. In the US, the FDA’s oversight depends on intended use. Software that “supports or provides recommendations to a health care professional about prevention, diagnosis, or treatment of a disease” may require premarket review as a medical device. None of the announced tools has FDA clearance. Liability questions are similarly unresolved. When Banner Health’s CTO Mike Reagin states that the health system was “drawn to Anthropic’s focus on AI safety,” this addresses technology selection criteria, not legal liability frameworks. If a clinician relies on Claude’s prior authorisation analysis and a patient suffers harm from delayed care, existing case law provides limited guidance on responsibility allocation. Regulatory approaches vary significantly across markets. While the FDA and Europe’s Medical Device Regulation provide established frameworks for software as a medical device, many APAC regulators have not issued specific guidance on generative AI diagnostic tools. This regulatory ambiguity affects adoption timelines in markets where healthcare infrastructure gaps might otherwise accelerate implementation—creating a tension between clinical need and regulatory caution. Administrative workflows, not clinical decisions Real deployments remain carefully scoped. Novo Nordisk’s Louise Lind Skov, Director of Content Digitalisation, described using Claude for “document and content automation in pharma development,” focused on regulatory submission documents rather than patient diagnosis. Taiwan’s National Health Insurance Administration applied MedGemma to extract data from 30,000 pathology reports for policy analysis, not treatment decisions. The pattern suggests institutional adoption is concentrating on administrative workflows where errors are less immediately dangerous—billing, documentation, protocol drafting—rather than direct clinical decision support where medical AI capabilities would have the most dramatic impact on patient outcomes. Medical AI capabilities are advancing faster than the institutions deploying them can navigate regulatory, liability, and workflow integration complexities. The technology exists. The US$20 monthly subscription provides access to sophisticated medical reasoning tools. Whether that translates to transformed healthcare delivery depends on questions these coordinated announcements leave unaddressed. See also: AstraZeneca bets on in-house AI to speed up oncology research Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI medical diagnostics race intensifies as OpenAI, Google, and Anthropic launch competing healthcare tools appeared first on AI News. View the full article
  14. Drug development is producing more data than ever, and large pharmaceutical companies like AstraZeneca are turning to AI to make sense of it. The challenge is no longer whether AI can help, but how tightly it needs to be built into research and clinical work to improve decisions around trials and treatment. That question helps explain why AstraZeneca is bringing Modella AI in-house. The company has agreed to acquire the Boston-based AI firm as it looks to deepen its use of AI across oncology research and clinical development. Financial terms were not disclosed. Rather than treating AI as a supporting tool, AstraZeneca is pulling Modella’s models, data, and staff directly into its research organisation. The move reflects a broader shift in the drug industry, where partnerships are giving way to acquisitions as companies try to gain more control over how AI is built, tested, and used in regulated settings. Why AI ownership is starting to matter in drug research Modella AI focuses on using computers to analyse pathology data, such as biopsy images, and link those findings with clinical information. Its work centres on making pathology more quantitative, helping researchers spot patterns that may point to useful biomarkers or guide treatment choices. In a statement, Modella said its foundation models and AI agents would be integrated into AstraZeneca’s oncology research and development work, with a focus on clinical development and biomarker discovery. How AstraZeneca moved its AI partnership toward full integration For AstraZeneca, the deal builds on a collaboration that began several years ago. That earlier partnership allowed both sides to test whether Modella’s tools could work within the drugmaker’s research environment. According to AstraZeneca executives, the experience made it clear that closer integration was needed. Speaking at the J.P. Morgan Healthcare Conference, AstraZeneca Chief Financial Officer Aradhana Sarin described the acquisition as a way to bring more data and AI capability inside the company. “Oncology drug development is becoming more complex, more data-rich and more time-sensitive,” said Gabi Raia, Modella AI’s chief commercial officer, adding that joining AstraZeneca would allow the company to deploy its tools across global trials and clinical settings. Using AI to improve trial decisions Sarin said the deal would “supercharge” AstraZeneca’s work in quantitative pathology and biomarker discovery by combining data, models, and teams under one roof. While such language reflects ambition, the practical goal is more grounded: shortening the time it takes to turn research data into decisions that affect trial design and patient selection. One area where AstraZeneca expects AI to have an impact is in choosing patients for clinical trials. Better matching patients to studies could improve trial outcomes and reduce costs tied to delays or failed studies. That kind of improvement depends less on complex algorithms and more on steady access to clean data and tools that fit into existing workflows. Talent and tools move in-house The acquisition also highlights a change in how large pharmaceutical firms think about AI talent. Rather than relying on outside vendors, companies are increasingly treating data scientists and machine learning experts as part of their core research teams. For AstraZeneca, bringing Modella’s staff in-house reduces dependence on external roadmaps and gives the company more say over how tools are adapted as research needs change. AstraZeneca said this is the first time a major pharmaceutical company has acquired an AI firm outright, though collaborations between drugmakers and technology companies have become common. AstraZeneca joins a crowded field of pharma–AI deals At the same healthcare conference, several new partnerships were announced, including a $1 billion collaboration between Nvidia and Eli Lilly to build a new research lab using Nvidia’s latest AI chips. Those deals point to growing interest in AI across the sector, but they also underline a key difference in strategy. Partnerships can speed up experimentation, while acquisitions suggest a longer-term bet on building internal capability. For companies operating under strict regulatory rules, that control can matter as much as raw computing power. What AstraZeneca is betting on next Sarin described the earlier AstraZeneca–Modella partnership as a “test drive,” saying the company ultimately wanted Modella’s data, models, and people inside the organisation. The aim, she said, is to support the development of “highly targeted biomarkers and then highly targeted therapeutics.” Beyond the Modella deal, Sarin said 2026 is expected to be a busy year for AstraZeneca, with several late-stage trial results due across different therapy areas. The company is also working toward a target of $80 billion in annual revenue by 2030. Whether acquisitions like this help meet those goals will depend on execution. Integrating AI into drug development is slow, expensive, and often messy. Still, AstraZeneca’s move signals a clear view of where it thinks the value lies: not in buying AI as a service, but in embedding it deeply into how medicines are discovered and tested. (Photo by Mika Baumeister) See also: Allister Frost: Tackling workforce anxiety for AI integration success Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AstraZeneca bets on in-house AI to speed up oncology research appeared first on AI News. View the full article
  15. Research from Cleo AI indicates that young adults are turning to artificial intelligence for financial advice to help them manage their money and develop more sustainable financial habits. The study surveyed 5,000 *** adults aged 28 to 40 and found that the majority are saving significantly less than they would like. In this context, interest in AI-driven money management tools is rising. One in five respondents describe themselves as curious about using AI to manage their finances, while a further 12% say they are excited by the prospect. Yet despite the interest in using AI in this context, confidence in personal financial management remains weak. More than a third of respondents (37%) report struggling with self-discipline around money, with impulse spending frequently undermining savings goals. Four in five believe they could improve their financial knowledge, pointing to a gap between intention and behaviour. Adults aged 28 to 34 are around 15% more satisfied with their savings than those aged 35 to 40, and save around 33% more each month on average. The findings suggest that as people move through early adulthood, financial strain accumulates while access to effective, ongoing support does not increase at the same rate. AI in money management AI is being seen as a tool that might help regain financial control. Many respondents express comfort with using AI for routine financial tasks. Nearly two-thirds (64%) would trust AI to advise on disposable income, while more than half would allow AI to move money to avoid overdrafts (54%) or manage regular bill payments (52%). Cleo’s CEO and founder, Barney Hussey-Yeo, states structural economic pressures are a major factor. Rising living costs, stagnant pay, low wages, and debt mean that many people are not mismanaging money so much as not having enough to make managing it worthwhile. In this context, AI tools positioned as practical, everyday assistance that can work with highly limited funds at its disposal rather than a tool for aspirational financial planning. Younger respondents are driving adoption. Adults aged 28 to 34 are 8% more confident than those aged 35 to 40 in using AI-powered financial tools. However, trust remains a barrier: nearly a quarter of respondents (23%) prefer to begin with limited use of the technology and need evidence of value before significant engagement. The research also highlights the regional disparities evident in the ***. Average monthly savings in the affluent South are 26% higher than in the North. Londoners save 33% more than the national average and around £250 more per month than those in Norwich. London (£431), Brighton (£401) and Edinburgh (£386) report the highest average monthly savings, while Newcastle (£185) and Cardiff in Wales (£184.95) sit at the bottom. Implications for fintech decision-makers The strongest signal in this evidence is not enthusiasm for AI per se, but demand for support under financial stress. High proportions citing poor self-discipline (37%) and low confidence in financial knowledge (80%) indicate that execution is the second problem. Trust is a gating factor rather than a secondary concern. While headline willingness to delegate tasks such as overdraft avoidance is high, nearly a quarter of users want incremental proof before committing. This would favour modular product design and specific implementations in software rather than full automation from the outset. Evidence suggests adoption will be earned through demonstrated utility, not brand positioning. Age-related divergence within a relatively narrow cohort (28–40) is notable. The sharp drop in savings satisfaction and contribution among those aged 35–40 (the time of life when most take on more responsibilities and financial burden) suggests that fintechs targeting young professionals only might miss those with materially different needs. For older millennials, tools that address cumulative obligations (housing, dependants, legacy debt, bills) are likely to be more relevant.. Regional savings disparities are large and persistent, with London outliers (where mean income is higher) masking much weaker savings capacity elsewhere. This weakens the case for nationally uniform products. Pricing, thresholds, nudges in the form of notifications and in-app messages may need regional bias if products are to feel realistic outside higher-income urban centres in the South of the ***. (Image source: “Iced tea at Georgia’s” by Ed Yourdon is licensed under CC BY-NC-SA 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Research shows *** young adults would use AI for financial guidance appeared first on AI News. View the full article
  16. Navigating workforce anxiety remains a primary challenge for leaders as AI integration defines modern enterprise success. For enterprise leaders, deploying AI is less a technical hurdle than a complex exercise in change management. The reality for many organisations is that, while algorithms offer efficiency, the human element dictates the speed of adoption. Data from the TUC indicates that 51 percent of *** adults are concerned about the impact of AI and new technologies on their job. This anxiety creates a tangible risk to ROI; resistance halts the innovation leaders seek to foster. Allister Frost, a former Microsoft leader and expert on business transformation, argues this friction stems from a misunderstanding of the technology’s capability. Address the misconception of true intelligence A common error in corporate strategy treats generative AI and Large Language Models (LLMs) as autonomous agents rather than data processors. This anthropomorphism drives the fear that machines will make human cognition obsolete. “The greatest misconception is that AI is as intelligent as its name suggests and can perform human-like tasks,” Frost notes. He clarifies the reality: “AI is primarily pattern-matching at scale, offering opportunities to help people work smarter, innovate faster, and explore new pathways to growth.” Communicating this distinction is essential. When employees view these tools as pattern-matchers rather than sentient replacements, the narrative changes from competition to utility. Frost emphasises that “AI doesn’t have the ability to replicate human intelligence, it exists to augment it.” Some finance and operations leaders view AI integration primarily as a mechanism to reduce salary overheads. Yet stripping away experienced staff for automation often degrades institutional memory. Frost warns against this tactic: “Too often, businesses see AI as a shortcut to headcount reduction, putting experienced workers at risk for short-term savings. This approach overlooks the enormous economic and societal cost of losing skilled staff.” Data confirms the workforce is on edge regarding this scenario. Acas reports that 26 percent of British workers cite job losses as their biggest concern regarding AI at work. History suggests, however, that technological integration expands rather than contracts the labour market. “The reality is that AI is not poised to eliminate jobs indiscriminately, but rather to evolve the nature of work,” states Frost. Operationalising augmentation Successful integration requires changing how AI use cases are identified. Rather than looking for roles to remove, enterprise leaders should identify high-volume, low-value tasks that bottleneck productivity. “AI tools have the potential to automate mundane tasks and free up human labour to focus on creative and strategic aspects,” explains Frost. This allows leaders to move staff toward high-touch areas where algorithms struggle. “As AI handles repetitive tasks, it frees up time to allow staff to upskill and transition into more complex roles that require a higher level of critical thinking and emotional intelligence.” These competencies – empathy, ethical decision-making, and complex strategy – remain outside the grasp of current computational models. Resistance to AI is often a symptom of “change fatigue,” a common response to the pace of digital updates. With 14 percent of *** workers explicitly worried about AI’s impact on their current job, transparent governance is required. Leaders must recognise that “resisting AI’s integration can hinder progress and limit opportunities for innovation.” Active engagement is the solution. “Engaging employees in discussions about AI’s role within the organisation can help demystify its functions and build trust,” Frost advises. This requires moving beyond top-down mandates. It involves creating a culture where staff feel safe to experiment with new tools without the immediate fear of displacing their own roles. “Once leaders have cultivated an environment of transparency and inclusion, businesses can alleviate anxieties, ensuring all team members are aligned and prepared to harness AI’s benefits.” Adapting the workforce for successful AI integration Enterprise technology advancements have always demanded adaptation, and AI – while a larger transformation than many technologies in recent decades – is no different. “Throughout history people have been resistant to new technological advancements, yet history shows us humans have repeatedly risen to the challenge of integrating new technologies.” For enterprise leaders, success involves investing in resilience and continuous learning. By framing AI as a transformative tool rather than a threat, organisations can protect their talent pipeline while modernising operations. A summary of advice to ensure successful AI integration: Reframe the narrative: Explicitly communicate AI as a “pattern-matching” tool for augmentation, not a sentient replacement, to lower cultural resistance. Audit for augmentation: Identify the mundane and high-volume process bottlenecks for automation, specifically to free up staff for more rewarding creative work. Invest in “human” skills: Allocate learning and development budgets toward critical thinking, empathy, and ethical decision-making, as these are the non-replicable assets in an AI-driven market. Combat change fatigue: Ensure transparent and two-way dialogue regarding AI integration roadmaps and governance to build trust and mitigate the fear factor regarding job losses. “My mission is to save one million working lives by showing that AI works best when it empowers humans, rather than replaces them,” Frost concludes. See also: How Shopify is bringing agentic AI to enterprise commerce Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Allister Frost: Tackling workforce anxiety for AI integration success appeared first on AI News. View the full article
  17. While the enterprise world rushes to migrate everything to the cloud, the warehouse floor is moving in the opposite direction. This article explores why the future of automation relies on edge AI to solve the fatal “latency gap” in modern logistics. In the sterilised promotional videos for smart warehouses, autonomous mobile robots (AMRs) glide in perfect, balletic harmony. They weave past human workers, dodge dropped pallets and optimise their paths in real-time. It looks seamless. In the real world, however, it is messy. A robot moving at 2.5 metres per second that relies on a cloud server to tell it whether that obstacle is a cardboard box or a human ankle is a liability. If the wi-fi flickers for 200 milliseconds (a blink of an eye in human terms), that robot is effectively blind. In a highly dense facility, 200 milliseconds is the difference between a smooth operation and a collision. This is the “latency trap,” and it is currently the single biggest bottleneck in eCommerce logistics. For the past decade, the industry dogma has been to centralise intelligence: push all data to the cloud, process it with massive compute power and send instructions back. But as we approach the physical limits of bandwidth and speed, engineers are realising that the cloud is simply too far away. The next generation of smart warehouses isn’t getting smarter by connecting to a larger server farm; it’s getting smarter by severing the cord. The physics of “real-time” To understand why the industry is pivoting to Edge AI, we have to look at the maths of modern fulfilment. In a traditional setup, a robot’s LIDAR or camera sensors capture data. That data is compressed, packeted and transmitted via local wi-fi to a gateway, then through fibre optics to a data centre (often hundreds of miles away). The AI model in the cloud processes the image (“Object detected: Forklift”), determines an action (“Stop”) and sends the command back down the chain. Even with fibber, the round-trip time (RTT) can hover between 50 to 100 milliseconds. Add in network jitter, packet loss in a warehouse full of metal racking (which acts as a Faraday cage) and server processing time. Then *****, the delay can spike to half a second. For a predictive algorithm analysing sales data, half a second is irrelevant. For a 500kg robot navigating a narrow aisle, it is an eternity. This is why the architecture of eCommerce logistics is flipping upside down. We are moving from a “Hive Mind” model (one central brain controlling all drones) to a “Swarm” model (smart drones making their own decisions). The rise of on-device inference The solution lies in edge AI: moving the inference (the decision-making process) directly onto the robot itself. Thanks to the explosion in efficient, high-performing silicon, specifically system-on-modules (SoMs) like the NVIDIA Jetson series or specialised TPUs, robots no longer need to ask permission to stop. They process the sensor data locally. The camera sees the obstacle, the onboard chip runs the neural network and the brakes are applied in single-digit milliseconds. No internet required. The transformation does more than just prevent accidents. It fundamentally changes the bandwidth economics of the warehouse. A facility running at lets say, 500 AMRs, cannot feasibly stream high-definition video feeds from every robot to the cloud simultaneously. The truth is, the bandwidth cost alone would destroy the margins. By processing video locally and only sending metadata (e.g., “Aisle 4 blocked by debris”) to the central server, warehouses can scale their fleets without totally crushing their network infrastructure. The 3PL adoption curve The technological shift is creating a divide in the logistics market. On one side, you have legacy providers running rigid, older automation systems. On the other hand, you have ‘tech-forward’ third-party logistics (3PL) providers who are treating their warehouses as software platforms. The agility of a 3PL for eCommerce is now defined by its tech stack. Modern providers are adopting these edge-enabled systems not just for safety, but for speed. When a 3PL integrates edge-computing robotics, they aren’t just installing machines; they are installing a dynamic mesh network that adapts to order volume in real-time. For example, during peak season (****** Friday/cyber Monday), the volume of goods moving through a facility can triple. You don’t want systems completely dependent on the cloud because it would slow them down exactly when speed is paramount. An edge-based fleet, however, maintains its performance because each unit carries its own compute power. It scales linearly. The reliability is what separates top-tier fulfilment partners from those who crumble under the December crush. Computer vision: The killer app for the edge While navigation is the immediate safety use case, the most lucrative application of Edge AI is actually in quality control and tracking. This is where the barcode, a technology that has survived for 50 years, finally faces its extinction. In a standard workflow, a package is scanned manually at multiple touchpoints. It’s slow, prone to human error and tediously repetitive. Edge AI enables “passive tracking” via Computer Vision. Cameras mounted on conveyor belts or worn by workers (smart glasses) run object recognition models locally. As a package moves down the line, the AI identifies it by its dimensions, logo and shipping label text simultaneously. This requires massive processing power. Running a YOLO (you only look once) object detection model at 60 frames per second on 50 different cameras is not something you can easily offload to the cloud without massive lag and cost. It has to happen at the edge. When this works, the results are invisible but profound. “Lost” inventory becomes a rarity because the system “sees” every item constantly. If a worker places a package in the wrong bin, an overhead camera (running local inference) detects the anomaly and flashes a red light instantly. The error is caught before the item even leaves the station. The data gravity problem There is, however, a catch. If the robots are thinking for themselves, how do you improve their collective intelligence? In a completely cloud-centric model, all data is in a single place, making it easy to retrain models. In an edge-centric model on the other hand, the data is fragmented in hundreds of different devices. This introduces the challenge of “Data Gravity.” To solve this, the industry is turning to federated learning. This means that if one robot learns that a specific type of shrink wrap confuses its sensors, every robot in the fleet wakes up the next day knowing how to handle it. It is collective evolution without the bandwidth bloat. Why 5G is the enabler (not the saviour) You cannot talk about the smart warehouse without mentioning 5G, but it is important to understand its actual role. Marketing hype suggests 5G solves latency. It helps, certainly, offering sub-10ms latency theoretically. But for eCommerce logistics, 5G is not the brain. No, it is the nervous system. 5G private networks are becoming the standard for these facilities because they offer a dedicated spectrum. Wi-fi is notorious for interference. Metal racking, other devices and microwave ovens in the breakroom can degrade the signal. A private 5G slice guarantees that the robots (and the important edge devices) have a dedicated lane that is immune to the noise. However, 5G is the pipe, not the processor. It allows the edge devices to communicate with each other (machine-to-machine or M2M communication) faster. This enables “swarm intelligence.” If Robot A encounters a spill in Aisle 3, it can broadcast a “Keep Out” zone to the local mesh network. Robot B, C and D reroute instantly without ever needing to query the central server. The network effect amplifies the value of the edge compute. The future: The warehouse as a neural network Looking forward to 2026 and beyond, the definition of a “warehouse” is pivoting. It is no longer just a storage shed; it is becoming a physical neural network. Every sensor, camera, robot and conveyor belt is becoming a node with its own compute capacity. The walls themselves are getting smart. We are seeing the deployment of ‘Smart Floor’ tiles that can sense weight and foot traffic, processing that data locally to optimise heating and lighting or detect unauthorised access. For the enterprise, the message is clear: the competitive advantage in eCommerce logistics is no longer just about square footage or location. It is about compute density. The winners in this space will be the ones who can push intelligence the furthest out to the edge. They will be the ones who understand that in a world demanding instant gratification, the speed of light is simply too slow and the smartest decision is the one made right where the action is. The cloud will always have a place for long-term analytics and storage, but for the kinetic, chaotic, fast-moving reality of the warehouse floor, the edge has already won. The revolution is happening on the device, millisecond by millisecond and it is reshaping the global supply chain… one decision at a time. Image source: Unsplash The post The latency trap: Smart warehouses abandon cloud for edge appeared first on AI News. View the full article
  18. Apple’s multi-year agreement to integrate Google’s Gemini models into its revamped Siri marks more than just another Big Tech partnership. The deal, announced Monday, offers a rare window into how one of the world’s most selective technology companies evaluates foundation models—and the criteria should matter to any enterprise weighing similar decisions. The stakes were considerable. Apple had been publicly integrating ChatGPT into its devices since late 2024, giving OpenAI prominent positioning within the Apple Intelligence ecosystem. Google’s Gemini win represents a decisive shift in Apple’s AI infrastructure strategy, one that relegates OpenAI to what Parth Talsania, CEO of Equisights Research, describes as “a more supporting role, with ChatGPT remaining positioned for complex, opt-in queries rather than the default intelligence layer.” The evaluation that mattered Apple’s reasoning was notably specific. “After careful evaluation, Apple determined Google’s AI technology provides the most capable foundation for Apple Foundation Models,” according to the joint statement. The phrasing matters—Apple didn’t cite partnership convenience, pricing, or ecosystem compatibility. The company explicitly framed this as a capabilities assessment. For enterprise buyers navigating their own foundation model selections, this carries weight. Apple’s evaluation criteria likely mirrored concerns familiar to any organisation building AI into core products: model performance at scale, inference latency, multimodal capabilities, and crucially, the ability to run models both on-device and in cloud environments while maintaining privacy standards. Google’s technology already powers Samsung’s Galaxy AI across millions of devices, providing proven deployment evidence at consumer scale. But Apple’s decision unlocks something different: integration across more than two billion active devices, with the technical demands that come with Apple’s performance and privacy requirements. What has changed since ChatGPT integration The timing raises questions. Apple rolled out ChatGPT integration just over a year ago, positioning Siri to tap into the chatbot for complex queries. The company now states, “there were no major changes to the ChatGPT integration at the time,” but the competitive dynamics have clearly shifted. OpenAI’s response to Google’s Gemini 3 release in late 2025—what reports described as a “code red” to accelerate development—suggests the competitive pressure was real. For enterprises, this highlights a risk often underweighted in vendor selection: the pace of model capability advancement varies significantly between providers, and today’s leader may not maintain that position across a multi-year deployment. Apple’s choice of a multi-year agreement with Google, rather than maintaining flexibility to switch between providers, suggests confidence in Google’s development trajectory. That’s a bet on sustained R&D investment, continued model improvements, and infrastructure scaling—the same factors enterprise buyers need to assess beyond current benchmarks. The infrastructure question The deal raises immediate concerns about concentration. “This seems like an unreasonable concentration of power for Google, given that they also have Android and Chrome,” Tesla CEO Elon Musk posted on X. The critique isn’t just competitive positioning from xAI’s founder—it reflects a legitimate enterprise concern about vendor dependency. Google now powers AI features across both major mobile operating systems through different mechanisms: directly via Android, and through this partnership for iOS. For enterprises deploying AI capabilities, the parallel is clear: relying on a single foundation model provider creates technical and commercial dependencies that extend beyond the immediate integration. This makes Apple’s architectural approach worth examining. The company emphasised that “Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple’s industry-leading privacy standards.” The hybrid deployment model—on-device processing for privacy-sensitive operations, cloud-based models for complex tasks—offers one template for enterprises balancing capability with data governance requirements. Market implications beyond mobile The deal’s immediate impact was measurable: Alphabet’s market valuation crossed US$4 trillion on Monday, with the stock having jumped 65% in 2024 on growing investor confidence in its AI efforts. But the strategic implications extend beyond market caps. Google has been methodically building positions across the AI stack—frontier models, image and video generation, and now default integration into iOS devices. For enterprises, this vertical integration matters when evaluating cloud AI services: a provider’s foundation model capabilities increasingly connect to their broader infrastructure, tools, and ecosystem positioning. Apple’s setbacks on the AI front—delayed Siri upgrades, executive changes, lukewarm reception for initial generative AI tools—are instructive from another angle. Even companies with enormous resources and talent can struggle with AI product execution. The decision to partner with Google rather than persist with entirely proprietary development acknowledges the complexity and resource demands of frontier model development. The search revenue connection The Gemini deal builds on an existing commercial relationship that generates tens of billions in annual revenue for Apple: Google pays to remain the default search engine on Apple devices. That arrangement has faced regulatory scrutiny, but it establishes precedent for deep technical integration between the companies. For enterprises, this underscores how commercial partnerships in AI often extend beyond pure technology licensing. The search deal likely influenced negotiations around the Gemini integration, just as existing vendor relationships shape enterprise AI procurement. Those relationships can be advantages—established trust, proven integration capabilities—or constraints that limit evaluation of alternatives. The OpenAI question The deal leaves OpenAI in an awkward position. ChatGPT remains available on Apple devices, but as an optional feature rather than the infrastructure layer. For a company that has positioned itself as the AI leader, losing default integration to Google represents a strategic setback. For enterprises, this competitive dynamic offers a reminder: the foundation model market remains fluid. Provider positioning can shift quickly, and exclusive relationships between major players can reshape options for everyone else. Maintaining optionality—through abstraction layers, multi-model strategies, or portable architectures—becomes more valuable in rapidly evolving markets. What comes next Google stated that Gemini models will power not just the revamped Siri coming later this year, but “other future Apple Intelligence features.” The scope of integration will likely expand as Apple builds out its AI capabilities, creating deeper technical dependencies and raising the stakes of the partnership. The financial terms remain undisclosed, leaving an important variable opaque: how did Apple and Google structure pricing for this scale of deployment? Enterprise buyers negotiating foundation model licensing will be watching for any signals about how such deals get priced at a massive scale. Apple’s decision doesn’t make Google’s Gemini the obvious choice for every enterprise—far from it. But the deal does offer validated evidence of what one extremely selective technology company prioritised when evaluating foundation models under demanding requirements. For enterprise AI buyers navigating their own evaluations, that’s a signal worth considering amid the noise of vendor marketing and benchmark leaderboards. The question isn’t whether to choose what Apple chose, but whether your evaluation criteria are as rigorous as Apple’s appears to have been. See also: Apple plans big Siri update with help from Google AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Why Apple chose Google over OpenAI: What enterprise AI buyers can learn from the Gemini deal appeared first on AI News. View the full article
  19. Shopify is enhancing core enterprise commerce workflows with agentic AI, automating operations while expanding sales channels. The adoption of generative AI in commerce has largely centred on customer support chatbots and basic content generation. Shopify’s Winter ‘26 Edition, titled Renaissance, pushes this technology toward agentic commerce where AI systems actively manage workflows, configure infrastructure, and distribute products into third-party ecosystems. Modernising commerce with the agentic AI storefront The most distinct architectural adjustment is the introduction of ‘Agentic Storefronts’. Traditionally, merchants drive traffic to a proprietary domain to secure a conversion. Shopify’s new model allows products to surface directly within AI-driven conversations on platforms such as ChatGPT, Perplexity, and Microsoft Copilot. For CDOs, this fragmentation of the customer journey requires a change in channel strategy. Rather than complex integrations for each external platform, products configured in the admin become discoverable by these agents immediately. The transaction occurs within the conversation, with attribution data flowing back to the central admin. This capability addresses the risk of brand invisibility as search behaviour migrates toward LLMs. “AI is now essential to modern commerce,” says Deann Evans, Managing Director, EMEA at Shopify. Evans points to internal data suggesting 93 percent of *** merchants are investing in AI tools to aid discovery, aligning with the 66 percent of consumers who expect to use AI for at least one part of their holiday shopping. Operational intelligence and ‘Sidekick’ updates While distributed commerce addresses revenue generation, the updates to ‘Sidekick’ (Shopify’s AI assistant) target operational expenditures and efficiency. The tool has evolved from a reactive AI chatbot into a proactive agentic system capable of executing complex administrative tasks for commerce. Sidekick Pulse now surfaces personalised tasks based on real-time data, such as suggesting product bundles when specific cart behaviours are detected or flagging compliance gaps like missing return policies. For technical teams, the reduction in low-level ticket volume is a primary benefit. Sidekick can now generate admin applications from natural language prompts, allowing non-technical staff to build custom tools without developer intervention. Furthermore, it creates ‘Working Flow’ automations from descriptions to bypass the need for deep knowledge of Shopify’s specific logic syntax. To support standardisation across large teams, prompts can now be saved and shared as “skills,” ensuring that verified and safe prompt structures are reused rather than ad-hoc queries. A persistent difficulty for enterprise retail is testing changes without disrupting live revenue streams. Shopify has introduced ‘SimGym’ (currently in research preview) and ‘Rollouts’ to address this. SimGym utilises AI shopper agents with human-like profiles to simulate traffic and purchasing behaviour. This allows merchants to model how storefront changes affect conversion rates using synthetic data derived from billions of annual purchases, rather than waiting for live A/B test results. Complementing this, Rollouts provides native experimentation capabilities within the admin, allowing for controlled scheduled changes and data-informed decision-making regarding buyer behaviour. For the C-suite, this reduces the risk profile of platform updates and marketing experiments. Infrastructure and developer velocity Beyond agentic AI, the update addresses physical commerce infrastructure and developer tooling. The new ‘POS Hub’ offers a wired connectivity solution for retail hardware, designed to improve resilience in high-volume brick-and-mortar environments. It acts as a dedicated operational unit, integrating card readers and scanners via a stable connection, which is vital for maintaining throughput during peak trading periods. On the software side, the AI-native developer platform aims to accelerate build times. AI agents can now scaffold apps, execute GraphQL operations, and generate validated code. This is supported by the Shopify Catalog, which enables agents to search across hundreds of millions of products to build richer applications. Vanessa Lee, VP of Leading Product at Shopify, commented: “We chose the Renaissance theme for this Edition because it symbolises progress, momentum, courage, and new beginnings … Many of these features weren’t possible a year ago and they redefine how we achieve our mission of making commerce better for everyone.” For enterprise leaders, the barrier to creating custom internal tools has lowered. The storefront is also no longer a static destination; it is a distributed set of data points accessible by third-party AI agents. Preparing product data for the agentic AI future of commerce is now a requisite for maintaining competitive visibility. See also: Retailers like Kroger and Lowe’s test AI agents without handing control to Google Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post How Shopify is bringing agentic AI to enterprise commerce appeared first on AI News. View the full article
  20. Retailers are starting to confront a problem that sits behind much of the hype around AI shopping: as customers turn to chatbots and automated assistants to decide what to buy, retailers risk losing control over how their products are shown, sold, and bundled. That concern is pushing some large chains to build or support their own AI-powered shopping tools, rather than relying only on third-party platforms. The goal is not to chase novelty, but to stay close to customers as buying decisions shift toward automation. Several retailers, including Lowe’s, Kroger, and Papa Johns, are experimenting with AI agents that can help shoppers search for items, get support, or place orders. Many of these efforts are backed by tools from Google, which is offering retailers a way to deploy agents inside their own apps and websites instead of sending customers elsewhere. Keeping control as shopping shifts toward automation For grocers like Kroger, the concern is not whether AI will influence shopping, but how quickly it might do so. The company is testing an AI shopping agent that can compare items, handle purchases, and adjust suggestions based on customer habits and needs. “Things are moving at a pace that if you’re not already deep into [AI agents], you’re probably creating a competitive barrier or disadvantage,” said Yael Cosset, Kroger’s chief digital officer and executive vice president. The agent, which sits inside Kroger’s mobile app, can take into account factors such as time limits or meal plans, while also drawing on data the retailer already has, including price sensitivity and brand preferences. The intent is to keep those decisions within Kroger’s own systems rather than handing them off to external platforms. That approach reflects a wider tension in retail. Making products available directly inside large AI chatbots can widen reach, but it can also weaken customer loyalty, reduce add-on sales, and cut into advertising revenue. Once a third party controls the interface, retailers have less say in how choices are framed. This is one reason some retailers are cautious about selling directly through tools built by companies like OpenAI or Microsoft. Both have rolled out features that allow users to complete purchases inside their chatbots, and last year Walmart said it would work with OpenAI to let customers buy items through ChatGPT. For retailers, the appeal of running their own agents is control. “There’s a market shift across the spectrum of retailers who are investing in their own capabilities rather than just relying on third-parties,” said Lauren Wiener, a global leader of marketing and customer growth at Boston Consulting Group. Why retailers are spreading risk across vendors Still, building and maintaining these systems is not simple. The underlying models change quickly, and tools that work today may need reworking weeks later. That reality is shaping how retailers think about vendors. At Lowe’s, Google’s shopping agent sits behind the retailer’s own virtual assistant, Mylow. When customers use Mylow online, the company says conversion rates more than double. But Lowe’s does not rely on a single provider. “The tech we build can become outdated in two weeks,” said Seemantini Godbole, Lowe’s chief digital and information officer. That pace is one reason Lowe’s works with several vendors, including OpenAI, rather than betting on one system. Kroger is taking a similar approach. Alongside Google, it works with companies such as Instacart to support its agent strategy. “[AI agents] are not just top of mind, it’s a priority for us,” Cosset said. “It’s going at a remarkable pace.” Testing AI agents without overcommitting For others, the challenge is not keeping up with the technology, but deciding how much to build at all. Papa Johns does not create its own AI models or agents. Instead, it is testing Google’s food ordering agent to handle tasks like estimating how many pizzas a group might need based on a photo uploaded by a customer. Customers will be able to use the agent by phone, through the company’s website, or in its app. “I don’t want to be an AI expert in terms of building the agents,” said Kevin Vasconi, Papa Johns’ chief digital and technology officer. “I want to be an AI expert in terms of, ‘How do I use the agents?’” That focus on use rather than ownership reflects a practical view of where AI fits today. While agent-based shopping is gaining attention, it is not yet the main way people buy everyday goods. “I don’t think [AI agents] are going to totally change the industry,” Vasconi said. “People still call our stores on the phone to order pizza in this day and age.” Analysts see Google’s tools less as a finished answer and more as a way to lower the barrier for retailers that do not want to start from scratch. “The real challenge here is application of the technologies,” said Ed Anderson, a tech analyst at Gartner. “These announcements take a step forward so that retailers don’t have to start from ground zero.” For now, retailers are testing, mixing vendors, and holding back from firm commitments. Kroger, Lowe’s, and Papa Johns have not shared detailed results from their trials. That caution suggests many are still trying to understand how much control they are willing to give up—and how much they can afford to keep—as shopping slowly shifts toward automation. (Photo by Heidi Fin) See also: Grab brings robotics in-house to manage delivery costs Want to learn more about AI and big data from industry leaders? Check outAI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Retailers like Kroger and Lowe’s test AI agents without handing control to Google appeared first on AI News. View the full article
  21. Meta’s US$2 billion acquisition of AI agent startup Manus has become every enterprise CTO’s cross-border compliance risk lesson. China’s Ministry of Commerce announced on January 9 that it would assess whether the deal violated export controls, technology transfer rules, and overseas investment regulations, despite Manus relocating from Beijing to Singapore in 2025. The investigation exposes an uncomfortable reality for enterprise AI buyers: your vendor’s corporate domicile tells you nothing about their regulatory exposure. “The AI agent developed by Manus was definitely something that ******** regulators could subject to export controls,” Dai Menghao, Shanghai-based partner at King & Wood Mallesons specialising in export controls and sanctions, told the South China Morning Post. The technology, not the corporate registration, determines jurisdiction. When relocation doesn’t equal regulatory freedom Manus appeared to check every box for regulatory independence. The company relocated its 105-person team from Beijing to Singapore in summer 2025, laid off 80 mainland employees, established operations in Singapore, Tokyo, and San Francisco, and secured US$75 million in US funding from Benchmark. Meta insisted in December that “there will be no continuing ******** ownership interests in Manus AI following the transaction, and Manus AI will discontinue its services and operations in China.” Yet Ministry of Commerce spokesperson He Yadong made clear that corporate structure alone won’t determine compliance. “The ******** government consistently supports enterprises in conducting mutually beneficial transnational operations and international technological cooperation in accordance with laws and regulations,” he said at a January 9 press briefing. “But it should be noted that the external investment, technology exports, data exports and cross-border acquisitions by companies must comply with ******** laws and regulations and go through due process.” The investigation will examine when, how, and which technologies Manus transferred abroad from its China-based entities, according to Cui Fan, professor at the University of International Business and Economics and chief expert at the China Society for World Trade Organisation Studies. If regulators determine that Manus should have obtained export licenses before transferring technology or talent, the company’s founders could face criminal charges under ******** law. The regulatory framework that enterprise buyers must understand China updated its technology export control rules in 2020, expanding coverage to include certain algorithms – changes widely interpreted as giving Beijing stronger legal grounds to intervene in deals involving strategic technology. The updates gained prominence after the US pressured ByteDance to divest TikTok’s US operations, prompting China to assert authority over outbound tech transfers. The framework covers three important areas that enterprise AI buyers should understand when evaluating vendor risk: Export controls: Advanced AI agents, models, and related intellectual property qualify as strategic assets subject to licensing requirements. Beijing maintains jurisdiction over technology developed in China, regardless of where companies later incorporate. Data security rules: Cross-border data transfers require regulatory approval, particularly for datasets used to train or fine-tune AI models. The location where training occurred matters more than where inference happens. Overseas investment regulations: When ******** nationals transfer technology assets abroad, even through legitimate corporate restructuring, authorities assess whether the transfer requires government clearance. Wang Yiming, partner at Beijing Xinzheng law firm, estimates the Manus review could take up to six months – matching the timeline for similar technology transfer assessments. “This could become a high-profile test case for China’s equivalent of the Committee on Foreign Investment in the United States,” Winston Ma, adjunct professor at New York University School of Law who focuses on AI and the digital economy, told SCMP. What this means for AI vendor due diligence The Manus case exposes gaps in how enterprise buyers assess AI vendor regulatory risk. Standard procurement processes focus on data residency, service level agreements, and contractual liability. Few evaluate whether their vendor’s technology development history creates ongoing compliance exposure in multiple jurisdictions. Enterprise buyers should now ask AI service providers: Technology origin questions: Where was the core AI model or agent developed? Which jurisdictions’ export control regimes might claim authority? Were any team members involved in the development of ******** nationals? Transfer compliance: If the company relocated, what regulatory approvals were obtained? Can the vendor demonstrate export license compliance for technology transfers? What contingency exists if regulators challenge past transfers? Operational continuity: How would a regulatory investigation impact service delivery? What customer notification obligations exist during review periods? Does the vendor maintain insurance or reserves for regulatory risk? “The most likely outcome I see is a lengthier approval process and potential conditions around how Manus technology developed in China can be used, rather than an outright block,” Nick Patience, AI lead at The Futurum Group, told CNBC. “But the threat of stricter action gives Beijing bargaining power in a high-profile, US-led acquisition.” The precedent risk for enterprise AI strategy The investigation matters beyond Meta’s specific deal. If Beijing determines it can effectively assert jurisdiction over ********-origin AI technology regardless of corporate restructuring, it establishes precedent for ongoing regulatory reach into enterprise AI supply chains. Enterprise buyers using AI agents for market research, coding assistance, or data analysis – precisely what Manus offered before Meta’s acquisition – now face questions about provider stability during geopolitical disputes. The company reached US$100 million in annual recurring revenue in eight months of launch, demonstrating both rapid enterprise adoption and how quickly mission-important dependencies can form. Winston Ma noted that smooth approval could “create a new path for young AI startups in China” – physical relocation paired with foreign acquisitions to bypass technology transfer restrictions. Conversely, regulatory intervention signals that Beijing will pursue ********-origin AI companies even after they relocate, potentially closing what appeared to be an escape route for startups navigating US-China tensions. For enterprise AI buyers, the lesson is about recognising that AI vendor compliance risk extends beyond contractual terms into murky jurisdictional questions about where and by whom technology was originally developed. That’s a due diligence requirement most procurement teams haven’t yet built the capacity to assess. See also: Manus AI agent: breakthrough in China’s agentic AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post The Meta-Manus review: What enterprise AI buyers need to know about cross-border compliance risk appeared first on AI News. View the full article
  22. Integrating AI into code review workflows allows engineering leaders to detect systemic risks that often evade human detection at scale. For engineering leaders managing distributed systems, the trade-off between deployment speed and operational stability often defines the success of their platform. Datadog, a company responsible for the observability of complex infrastructures worldwide, operates under intense pressure to maintain this balance. When a client’s systems fail, they rely on Datadog’s platform to diagnose the root cause—meaning reliability must be established well before software reaches a production environment. Scaling this reliability is an operational challenge. Code review has traditionally acted as the primary gatekeeper, a high-stakes phase where senior engineers attempt to catch errors. However, as teams expand, relying on human reviewers to maintain deep contextual knowledge of the entire codebase becomes unsustainable. To address this bottleneck, Datadog’s AI Development Experience (AI DevX) team integrated OpenAI’s Codex, aiming to automate the detection of risks that human reviewers frequently miss. Why static analysis falls short The enterprise market has long utilised automated tools to assist in code review, but their effectiveness has historically been limited. Early iterations of AI code review tools often performed like “advanced linters,” identifying superficial syntax issues but failing to grasp the broader system architecture. Because these tools lacked the ability to understand context, engineers at Datadog frequently dismissed their suggestions as noise. The core issue was not detecting errors in isolation, but understanding how a specific change might ripple through interconnected systems. Datadog required a solution capable of reasoning over the codebase and its dependencies, rather than simply scanning for style violations. The team integrated the new agent directly into the workflow of one of their most active repositories, allowing it to review every pull request automatically. Unlike static analysis tools, this system compares the developer’s intent with the actual code submission, executing tests to validate behaviour. For CTOs and CIOs, the difficulty in adopting generative AI often lies in proving its value beyond theoretical efficiency. Datadog bypassed standard productivity metrics by creating an “incident replay harness” to test the tool against historical outages. Instead of relying on hypothetical test cases, the team reconstructed past pull requests that were known to have caused incidents. They then ran the AI agent against these specific changes to determine if it would have flagged the issues that humans missed in their code reviews. The results provided a concrete data point for risk mitigation: the agent identified over 10 cases (approximately 22% of the examined incidents) where its feedback would have prevented the error. These were pull requests that had already bypassed human review, demonstrating that the AI surfaced risks invisible to the engineers at the time. This validation changed the internal conversation regarding the tool’s utility. Brad Carter, who leads the AI DevX team, noted that while efficiency gains are welcome, “preventing incidents is far more compelling at our scale.” How AI code reviews are changing engineering culture The deployment of this technology to more than 1,000 engineers has influenced the culture of code review within the organisation. Rather than replacing the human element, the AI serves as a partner that handles the cognitive load of cross-service interactions. Engineers reported that the system consistently flagged issues that were not obvious from the immediate code difference. It identified missing test coverage in areas of cross-service coupling and pointed out interactions with modules that the developer had not touched directly. This depth of analysis changed how the engineering staff interacted with automated feedback. “For me, a Codex comment feels like the smartest engineer I’ve worked with and who has infinite time to find bugs. It sees connections my brain doesn’t hold all at once,” explains Carter. The AI code review system’s ability to contextualise changes allows human reviewers to shift their focus from catching bugs to evaluating architecture and design. From bug hunting to reliability For enterprise leaders, the Datadog case study illustrates a transition in how code review is defined. It is no longer viewed merely as a checkpoint for error detection or a metric for cycle time, but as a core reliability system. By surfacing risks that exceed individual context, the technology supports a strategy where confidence in shipping code scales alongside the team. This aligns with the priorities of Datadog’s leadership, who view reliability as a fundamental component of customer trust. “We are the platform companies rely on when everything else is breaking,” says Carter. “Preventing incidents strengthens the trust our customers place in us”. The successful integration of AI into the code review pipeline suggests that the technology’s highest value in the enterprise may lie in its ability to enforce complex quality standards that protect the bottom line. See also: Agentic AI scaling requires new memory architecture Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Datadog: How AI code reviews slash incident risk appeared first on AI News. View the full article
  23. Artificial intelligence and legal technology are reshaping the landscape of personal injury law in Philadelphia, introducing significant changes. The advancements offer new capabilities for legal professionals, enhancing the strategic approach lawyers take in managing cases. The integration of AI and legal tech into personal injury law is changing how legal practices operate in Philadelphia. By using advanced technologies, like predictive analytics, law firms can gain valuable insights that were previously unattainable. The innovation aids in case management and empowers attorneys to strategize more effectively. As a Grays Ferry, Philadelphia personal injury lawyer adapts to these changes, you can expect a more data-driven approach to legal proceedings, using AI’s potential in predicting case outcomes. AI’s impact on personal injury law practices Artificial intelligence has made significant strides in various industries, and personal injury law is no exception. The incorporation of AI technologies allows for more efficient and precise handling of cases. With AI-driven tools, lawyers can analyze vast amounts of data quickly and accurately. The capability facilitates better decision-making processes and enables legal professionals to offer more tailored services to their clients. Predictive analytics, a key application of AI, plays a crucial role in this transformation. By processing historical data and identifying patterns, predictive analytics can forecast potential case outcomes with remarkable accuracy. This enables lawyers to assess risks and develop strategies informed by empirical evidence rather than intuition alone. As the field continues to evolve, the reliance on data-driven insights will likely become an integral part of legal practices. Understanding predictive analytics Predictive analytics involves analyzing current and historical data to predict future outcomes. In legal practices, this means using data from past cases to anticipate how similar cases might unfold. By examining factors like case details, precedents, and court rulings, AI can generate predictions that guide lawyers in making informed decisions. The types of data used in predictive analytics range from demographic information to historical court records. Advanced algorithms process this information to identify trends and correlations that may not be immediately apparent to human analysts. Through this process, lawyers gain insights that enhance their understanding of complex legal scenarios, ultimately improving their ability to advocate for their clients effectively. Applications in managing personal injury cases In personal injury cases, predictive analytics serves as a tool for risk assessment and strategy development. Lawyers can use these insights to estimate the likelihood of winning a case or securing a favorable settlement. By analyzing similar past cases, attorneys can better understand potential challenges and opportunities unique to each situation. The application of predictive analytics extends beyond mere predictions; it influences how lawyers prepare for negotiations and trials. Knowing the probable outcome allows for more effective resource allocation and client counseling. As legal professionals continue adopting these technologies, they gain a competitive edge in delivering superior service and achieving optimal results for their clients. Benefits and challenges of AI in law For legal professionals, embracing AI-driven analytics offers numerous benefits beyond improved client outcomes. One significant advantage is enhanced decision-making capabilities. By providing clear, evidence-based insights into case probabilities, predictive analytics empowers lawyers to make strategic choices with greater confidence. The efficiency gains associated with these technologies cannot be overstated. AI streamlines various processes in law firms, reducing time spent on mundane tasks and allowing attorneys to focus on higher-value activities. The efficiency translates into cost savings and improved service delivery, positioning firms that adopt these tools as leaders in the competitive legal market. While the benefits of integrating AI into personal injury law are substantial, several challenges must be addressed to ensure responsible implementation. Data privacy is a primary concern; ensuring that client information is protected while using these advanced tools is paramount. Legal professionals must navigate these complexities carefully to maintain trust and compliance with regulations. The post The future of personal injury law: AI and legal tech in Philadelphia appeared first on AI News. View the full article
  24. If you have ever taken a self-driving Uber through downtown LA, you might recognise the strange sense of uncertainty that settles in when there is no driver and no conversation, just a quiet car making assumptions about the world around it. The journey feels fine until the car misreads a shadow or slows abruptly for something harmless. In that moment you see the real issue with autonomy. It does not panic when it should, and that gap between confidence and judgement is where trust is either earned or lost. Much of today’s enterprise AI feels remarkably similar. It is competent without being confident, and efficient without being empathetic, which is why the deciding factor in every successful deployment is no longer computing power but trust. The MLQ State of AI in Business 2025 [PDF] report puts a sharp number on this. 95% of early AI pilots fail to produce measurable ROI, not because the technology is weak but because it is mismatched to the problems organisations are trying to solve. The pattern repeats itself in industries. Leaders get uneasy when they can’t tell if the output is right, teams are unsure whether dashboards can be trusted, and customers quickly lose patience when an interaction feels automated rather than supported. Anyone who has been locked out of their bank account while the automated recovery system insists their answers are wrong knows how quickly confidence evaporates. Klarna remains the most publicised example of large-scale automation in action. The company has now halved its workforce since 2022 and says internal AI systems are performing the work of 853 full-time roles, up from 700 earlier this year. Revenues have risen 108%, while average employee compensation has increased 60%, funded in part by those operational gains. Yet the picture is more complicated. Klarna still reported a 95 million dollar quarterly loss, and its CEO has warned that further staff reductions are likely. It shows that automation alone does not create stability. Without accountability and structure, the experience breaks down long before the AI does. As Jason Roos, CEO of CCaaS provider Cirrus, puts it, “Any transformation that unsettles confidence, inside or outside the business, carries a cost you cannot ignore. it can leave you worse off.” We have already seen what happens when autonomy runs ahead of accountability. The ***’s Department for Work and Pensions used an algorithm that wrongly flagged around 200,000 housing-benefit claims as potentially fraudulent, even though the majority were legitimate. The problem wasn’t the technology. It was the absence of clear ownership over its decisions. When an automated system suspends the wrong account, rejects the wrong claim or creates unnecessary fear, the issue is never just “why did the model misfire?” It’s “who owns the outcome?” Without that answer, trust becomes fragile. “The missing step is always readiness,” says Roos. “If the process, the data and the guardrails aren’t in place, autonomy doesn’t accelerate performance, it amplifies the weaknesses. Accountability has to come first. Start with the outcome, find where effort is being wasted, check your readiness and governance, and only then automate. Skip those steps and accountability disappears just as fast as the efficiency gains arrive.” Part of the problem is an obsession with scale without the grounding that makes scale sustainable. Many organisations push toward autonomous agents that can act decisively, yet very few pause to consider what happens when those actions drift outside expected boundaries. The Edelman Trust Barometer [PDF] shows a steady decline in public trust in AI over the past five years, and a joint KPMG and University of Melbourne study found that workers prefer more human involvement in almost half the tasks examined. The findings reinforce a simple point. Trust rarely comes from pushing models harder. It comes from people taking the time to understand how decisions are made, and from governance that behaves less like a brake pedal and more like a steering wheel. The same dynamics appear on the customer side. PwC’s trust research reveals a wide gulf between perception and reality. Most executives believe customers trust their organisation, while only a ********* of customers agree. Other surveys show that transparency helps to close this gap, with large majorities of consumers wanting clear disclosure when AI is used in service experiences. Without that clarity, people do not feel reassured. They feel misled, and the relationship becomes strained. Companies that communicate openly about their AI use are not only protecting trust but also normalising the idea that technology and human support can co-exist. Some of the confusion stems from the term “agentic AI” itself. Much of the market treats it as something unpredictable or self-directing, when in reality it is workflow automation with reasoning and recall. It is a structured way for systems to make modest decisions inside parameters designed by people. The deployments that scale safely all follow the same sequence. They start with the outcome they want to improve, then look at where unnecessary effort sits in the workflow, then assess whether their systems and teams are ready for autonomy, and only then choose the technology. Reversing that order does not speed anything up. It simply creates faster mistakes. As Roos says, AI should expand human judgement, not replace it. All of this points toward a wider truth. Every wave of automation eventually becomes a social question rather than a purely technical one. Amazon built its dominance through operational consistency, but it also built a level of confidence that the parcel would arrive. When that confidence dips, customers move on. AI follows the same pattern. You can deploy sophisticated, self-correcting systems, but if the customer feels tricked or misled at any point, the trust breaks. Internally, the same pressures apply. The KPMG global study [PDF] highlights how quickly employees disengage when they do not understand how decisions are made or who is accountable for them. Without that clarity, adoption stalls. As agentic systems take on more conversational roles, the emotional dimension becomes even more significant. Early reviews of autonomous chat interactions show that people now judge their experience not only by whether they were helped but also by whether the interaction felt attentive and respectful. A customer who feels dismissed rarely keeps the frustration to themselves. The emotional tone of AI is becoming a genuine operational factor, and systems that cannot meet that expectation risk becoming liabilities. The difficult truth is that technology will continue to move faster than people’s instinctive comfort with it. Trust will always lag behind innovation. That is not an argument against progress. It is an argument for maturity. Every AI leader should be asking whether they would trust the system with their own data, whether they can explain its last decision in plain language, and who steps in when something goes wrong. If those answers are unclear, the organisation is not leading transformation. It is preparing an apology. Roos puts it simply, “Agentic AI is not the concern. Unaccountable AI is.” When trust goes, adoption goes, and the project that looked transformative becomes another entry in the 95% failure rate. Autonomy is not the enemy. Forgetting who is responsible is. The organisations that keep a human hand on the wheel will be the ones still in control when the self-driving hype eventually fades. The post Autonomy without accountability: The real AI risk appeared first on AI News. View the full article
  25. The partnership announced this week between Microsoft and Hexagon Robotics marks an inflection point in the commercialisation of humanoid, AI-powered robots for industrial environments. The two companies will combine Microsoft’s cloud and AI infrastructure with Hexagon’s expertise in robotics, sensors, and spatial intelligence to advance the deployment of physical AI systems in real-world settings. At the centre of the collaboration is AEON, Hexagon’s industrial humanoid robot, a device designed to operate autonomously in environments like factories, logistics hubs, engineering plants, and inspection sites. The partnership will focus on multimodal AI training, imitation learning, real-time data management, and integration with existing industrial systems. Initial target sectors include automotive, aerospace, manufacturing, and logistics, the companies say. It’s in these industries where labour shortages and operational complexity are already constraining financial growth. The announcement is the sign of a maturing ecosystem: cloud platforms, physical AI, and robotics engineering’s convergence, making humanoid automation commercially viable. Humanoid robots out of the research lab While humanoid robots have been the subject of work at research institutions, demonstrated proudly at technology events, the last five years have seen a move to practical deployment in real-world, working environments. The main change has been the combination of improved perception, advances in reinforcement and imitation learning, and the availability of scalable cloud infrastructure. One of the most visible examples is Agility Robotics’ Digit, a bipedal humanoid robot designed for logistics and warehouse operations. Digit has been piloted in live environments by companies like Amazon, where it performs material-handling tasks including tote movement and last-metre logistics. Such deployments tend to focus on augmenting human workers rather than replacing them, with Digit handling more physically demanding tasks. Similarly, Tesla’s Optimus programme has moved out of the phase where concept videos were all that existed, and is now undergoing factory trials. Optimus robots are being tested on structured tasks like part handling and equipment transport inside Tesla’s automotive manufacturing facilities. While still limited in scope, these pilots demonstrate the pattern of humanoid-like machines chosen over less anthropomorphic form-factors so they can operate in human-designed and -populated spaces. Inspection, maintenance, and hazardous environments Industrial inspection is emerging as one of the earliest commercially viable use cases for humanoid and quasi-humanoid robots. Boston Dynamics’ Atlas, while not yet a general-purpose commercial product, has been used in live industrial trials for inspection and disaster-response environments. It can navigate uneven terrain, climb stairs, and manipulate tools in places considered unsafe for humans. Toyota Research Institute has deployed humanoid robotics platforms for remote inspection and manipulation tasks in similar settings. Toyota’s systems rely on multimodal perception and human-in-the-loop control, the latter reinforcing an industry trend: early deployments prioritise reliability and traceability, so need human oversight. Hexagon’s AEON aligns closely with this trend. Its emphasis on sensor fusion and spatial intelligence is relevant for inspection and quality assurance tasks, where precise understanding of physical environments is more valuable than the conversational abilities most associated with everyday use of AIs. Cloud platforms central to robotics strategy A defining feature of the Microsoft-Hexagon partnership is the use of cloud infrastructure in the scaling of humanoid robots. Training, updating, and monitoring physical AI systems generates large quantities of data, including video, force feedback from on-device sensors, spatial mapping (such as that derived from LIDAR), and operational telemetry. Managing this data locally has historically been a bottleneck, due to storage and processing constraints. By using platforms like Azure and Azure IoT Operations, plus real-time intelligence services in the cloud, humanoid robots can be trained fleet-wide, not isolated units. This leads to multiple possibilities in shared learning, improvement by iteration, and greater consistency. For board-level buyers, these IT architecture shifts mean humanoid robots become viable entities that can be treated – in terms of IT requirements – more like enterprise software than machinery. Labour shortages drive adoption The demographic trends in manufacturing, logistics, and asset-intensive industries are increasingly unfavourable. Ageing workforces, declining interest in manual roles, and persistent skills shortages create skills gaps that conventional automation cannot fully address – at least, not without rebuilding entire facilities to be more suited to a robotic workforce. Fixed robotic systems excel in repetitive, predictable tasks but struggle in dynamic, human environments. Humanoid robots occupy a middle ground. Not designed to replace workflows, they can stabilise operations where human availability is uncertain. Case studies show early value in night shifts, periods of peak demand, and tasks deemed too hazardous for humans. What boards should evaluate before investing For decision-makers considering investment in next-generation workplace robots, several issues to note have emerged from existing, real-world deployments: Task specificity matters more than general intelligence, with the more successful pilots focusing on well-defined activities. Data governance and security continue to have to be placed front and centre when robots are put into play, especially so when it’s necessary to connect them to cloud platforms. At a human level, workforce integration can be more challenging than sourcing, installing, and running the technology itself. Yet human oversight remains essential at this stage in AI maturity, for safety and regulatory acceptance. A measured but irreversible shift Humanoid robots won’t replace the human workforce, but an increasing body of evidence from live deployments and prototyping shows such devices are moving into the workplace. As of now, humanoid, AI-powered robots can perform economically-valuable tasks, and integration with existing industrial systems is immensely possible. For boards with the appetite to invest, the question could be when competitors might deploy the technology responsibly and at scale. (Image source: Source: Hexagon Robotics) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post From cloud to factory – humanoid robots coming to workplaces appeared first on AI News. View the full article

Important Information

Privacy Notice: We utilize cookies to optimize your browsing experience and analyze website traffic. By consenting, you acknowledge and agree to our Cookie Policy, ensuring your privacy preferences are respected.