  1. AI cost efficiency and data sovereignty are at odds, forcing a rethink of enterprise risk frameworks for global organisations.

For over a year, the generative AI narrative focused on a race for capability, often measuring success by parameter counts and flawed benchmark scores. Boardroom conversations, however, are undergoing a necessary correction. While the allure of low-cost, high-performance models offers a tempting path to rapid innovation, the hidden liabilities associated with data residency and state influence are forcing a reassessment of vendor selection.

China-based AI laboratory DeepSeek recently became a focal point for this industry-wide debate. According to Bill Conner, former adviser to Interpol and GCHQ, and current CEO of Jitterbit, DeepSeek's initial reception was positive because it challenged the status quo by demonstrating that "high-performing large language models do not necessarily require Silicon Valley-scale budgets."

For businesses looking to trim the immense costs associated with generative AI pilots, this efficiency was understandably attractive. Conner observes that these "reported low training costs undeniably reignited industry conversations around efficiency, optimisation, and 'good enough' AI."

AI and data sovereignty risks

Enthusiasm for cut-price performance has collided with geopolitical realities. Operational efficiency cannot be decoupled from data security, particularly when that data fuels models hosted in jurisdictions with different legal frameworks regarding privacy and state access.

Recent disclosures regarding DeepSeek have altered the calculation for Western enterprises. Conner highlights "recent US government revelations indicating DeepSeek is not only storing data in China but actively sharing it with state intelligence services." This disclosure moves the issue beyond standard GDPR or CCPA compliance. The "risk profile escalates beyond typical privacy concerns into the realm of national security."

For enterprise leaders, this presents a specific hazard. LLM integration is rarely a standalone event; it involves connecting the model to proprietary data lakes, customer information systems, and intellectual property repositories. If the underlying AI model possesses a "back door" or obliges data sharing with a foreign intelligence apparatus, sovereignty is eliminated: the enterprise effectively bypasses its own security perimeter and erases any cost efficiency benefit.

Conner warns that "DeepSeek's entanglement with military procurement networks and alleged export control evasion tactics should serve as a critical warning sign for CEOs, CIOs, and risk officers alike." Utilising such technology could inadvertently entangle a company in sanctions violations or supply chain compromises.

Success is no longer just about code generation or document summaries; it is about the provider's legal and ethical framework. Especially in industries like finance, healthcare, and defence, tolerance for ambiguity regarding data lineage is zero. Technical teams may prioritise AI performance benchmarks and ease of integration during the proof-of-concept phase, potentially overlooking the geopolitical provenance of the tool and the need for data sovereignty. Risk officers and CIOs must enforce a governance layer that interrogates the "who" and "where" of the model, not just the "what."

Governance over AI cost efficiency

Deciding to adopt or ban a specific AI model is a matter of corporate responsibility.
Shareholders and customers expect that their data remains secure and is used solely for intended business purposes. Conner frames this explicitly for Western leadership, stating that "for Western CEOs, CIOs, and risk officers, this is not a question of model performance or cost efficiency." Instead, "it is a governance, accountability, and fiduciary responsibility issue." Enterprises "cannot justify integrating a system where data residency, usage intent, and state influence are fundamentally opaque."

This opacity creates an unacceptable liability. Even if a model offers 95 percent of a competitor's performance at half the cost, the potential for regulatory fines, reputational damage, and loss of intellectual property erases those savings instantly.

The DeepSeek case study serves as a prompt to audit current AI supply chains. Leaders must ensure they have full visibility into where model inference occurs and who holds the keys to the underlying data. As the market for generative AI matures, trust, transparency, and data sovereignty will likely outweigh the appeal of raw cost efficiency.

See also: SAP and Fresenius to build sovereign AI backbone for healthcare
  2. For many large companies, artificial intelligence still lives in side projects. Small teams test tools, run pilots, and present results that struggle to spread beyond a few departments. Citi has taken a different path: instead of keeping AI limited to specialists, the bank has spent the past two years pushing the technology into daily work across the organisation.

That effort has resulted in an internal AI workforce of roughly 4,000 employees, drawn from roles that range from technology and operations to risk and customer support. The figure was first reported by Business Insider, which detailed how Citi built its "AI Champions" and "AI Accelerators" programmes to encourage participation rather than central control.

The scale of integration is notable: Citi employs around 182,000 people globally, and more than 70% of them now use firm-approved AI tools in some form, according to the same report. That level of use places Citi ahead of many peers that still restrict AI access to technical teams or innovation labs.

From central pilots to team-level adoption

Rather than start with tools, Citi focused on people. The bank invited employees to volunteer as AI Champions, giving them access to training, internal resources, and early versions of approved AI systems. The employees then supported colleagues in their own teams, acting as local points of contact rather than formal trainers.

The approach reflects a practical view of adoption. New tools often fail not because they lack features, but because staff do not know when or how to use them. By embedding support inside teams, Citi reduced the gap between experimentation and routine work.

Training played a central role. Employees could earn internal badges by completing courses or demonstrating how they used AI to improve their own tasks. The badges did not come with promotions or pay rises, but they helped create visibility and credibility in the organisation. According to Business Insider, this peer-driven model helped AI spread faster than top-down mandates.

Everyday use, with guardrails

Citi's leadership has framed the effort as a response to scale rather than novelty. With operations spanning retail banking, investment services, compliance, and customer support, small efficiency gains can add up quickly. AI tools are being used to summarise documents, draft internal notes, analyse data sets, and assist with software development. None of these uses are new on their own, but the difference lies in how they are applied.

The focus on everyday tasks also shapes Citi's risk posture. The bank has limited employees to firm-approved tools, with guardrails around what data can be used and how outputs are handled. That constraint has slowed some experiments, but it has also made managers more comfortable allowing broader access. In regulated industries, trust often matters more than speed.

What Citi's approach shows about scaling AI

The structure of Citi's programme suggests a lesson for other large enterprises. AI adoption does not require every employee to become an expert. It requires enough people to understand the tools well enough to apply them responsibly and explain them to others. By training thousands instead of dozens, Citi reduced its reliance on a small group of specialists.

There is also a cultural signal at play. Encouraging employees from non-technical roles to participate sends a message that AI is not only for engineers or data scientists. It becomes part of how work gets done, similar to spreadsheets or presentation software in earlier decades.
That shift aligns with broader industry trends. Surveys from firms like McKinsey have shown that many companies struggle to move AI projects into production, often citing talent gaps and unclear ownership. Citi's model sidesteps some of those issues by distributing ownership across teams, while keeping governance centralised.

Still, the approach is not without limits. Peer-led adoption depends on sustained interest, and not all teams move at the same pace. There is also the risk that informal support networks become uneven, with some groups benefiting more than others. Citi has tried to address this by rotating Champions and updating training content as tools change.

What stands out is the bank's willingness to treat AI as infrastructure rather than innovation. Instead of asking whether AI could transform the business, Citi asked where it could remove friction from existing work. That framing makes progress easier to measure and reduces pressure to produce dramatic results.

The experience also challenges a common assumption that AI adoption must start at the top. Citi's senior leadership supported the effort, but much of the momentum came from employees who volunteered time to learn and teach. In large organisations, that bottom-up energy can be hard to generate, yet it often determines whether new technology sticks.

As more companies move from pilots to production, Citi's experiment offers a useful case study. It shows that scale does not come from buying more tools, but from helping people feel confident using the ones they already have. For enterprises wondering why AI progress feels slow, the answer may lie less in strategy decks and more in how work actually gets done, one team at a time.

See also: JPMorgan Chase treats AI spending as core infrastructure
  3. SAP and Fresenius are building a sovereign AI platform for healthcare that brings secure data processing to clinical settings. For data leaders in the medical sector, deploying AI requires strict governance that public cloud solutions often lack. This collaboration addresses that gap by creating a "controlled environment" where AI models can operate without compromising data sovereignty.

Moving AI from pilot to production

The project aims to build an open and integrated ecosystem allowing hospitals to use AI securely. Rather than running isolated experiments, the companies plan to create a digital backbone for a sovereign and AI-supported healthcare system.

Michael Sen, CEO of Fresenius, said: "Together with SAP, we can accelerate the digital transformation of the German and European healthcare systems and enable a sovereign European solution that is so important in today's global landscape.

"We are making data and AI everyday companions that are secure, simple and scalable for doctors and hospital teams. This creates more room for what truly matters: caring for patients."

The technical base uses SAP Business AI and the SAP Business Data Cloud. By leveraging these components, the platform creates a compliant, sovereign foundation for operating AI models in healthcare. This infrastructure handles health data responsibly, a requirement for scaling automated processes in patient care.

The partnership tackles data fragmentation through SAP's "AnyEMR" strategy, which supports the integration of diverse hospital information systems (HIS). Using open industry standards like HL7 FHIR, the platform connects HIS, electronic medical records (EMRs), and other medical applications (a minimal sketch of a standards-based FHIR query appears at the end of this item). This connectivity allows Fresenius to develop AI-supported solutions that increase efficiency across the care chain. The goal is to build a tailored, scalable platform that enables connected, data-driven healthcare processes.

Investing in sovereign AI to advance healthcare

Both companies intend to invest a "mid three-digit million euro amount" in the medium term. The funds target the digital transformation of German and European healthcare systems using AI-supported solutions.

Plans include joint investments in startups and scaleups, alongside internal technological developments. This approach aims to build a broader library of tools that plug into the sovereign platform.

Christian Klein, CEO of SAP SE, commented: "With SAP's leading technology and Fresenius' deep healthcare expertise, we aim to create a sovereign, interoperable healthcare platform for Fresenius worldwide.

"Together, we want to set new standards for data sovereignty, security, and innovation in healthcare. Thanks to SAP, Fresenius can harness the full potential of digital and AI-supported processes and sustainably improve patient care."

This deal indicates that the next phase of healthcare AI in Europe will focus on sovereign infrastructure. Scalable AI requires a controlled environment to satisfy regulatory demands; without a sovereign data backbone, AI initiatives risk stalling due to compliance concerns.

See also: Scaling AI value beyond pilot phase purgatory
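To make the HL7 FHIR interoperability mentioned above a little more concrete, here is a minimal sketch of reading a single patient record over FHIR's REST/JSON interface using Python's requests library. The server URL and patient ID are placeholders, and the snippet illustrates the open standard only; it is not code from, or an endpoint of, the SAP and Fresenius platform.

```python
# Minimal sketch: reading a patient record over HL7 FHIR (REST + JSON).
# The base URL and patient ID are hypothetical placeholders.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/R4"  # hypothetical FHIR R4 server

def get_patient(patient_id: str) -> dict:
    """Fetch a FHIR Patient resource as JSON from a conformant server."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    patient = get_patient("example")
    print(patient.get("resourceType"), patient.get("id"))
```

The appeal of a standard like FHIR is that the same request shape works against any conformant hospital system, which is what makes an "AnyEMR"-style integration strategy feasible in the first place.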
  4. Scaling AI value from isolated pilots to enterprise-wide adoption remains a primary hurdle for many organisations. While experimentation with generative models has become ubiquitous, industrialising these tools (i.e. wrapping them in the necessary governance, security, and integration layers) often stalls. Addressing the gap between investment and operational return, IBM has introduced a new service model designed to help businesses assemble, rather than purely build, their internal AI infrastructure.

Adopting asset-based consulting

Traditional consultancy models typically rely on human labour to solve integration problems, a process that is often slow and capital-intensive. IBM is among the companies aiming to alter this dynamic by offering an asset-based consulting service. This approach combines standard advisory expertise with a catalogue of pre-built software assets, aiming to help clients construct and govern their own AI platforms.

Instead of commissioning bespoke development for every workflow, organisations can leverage existing architectures to redesign processes and connect AI agents to legacy systems. This method helps companies achieve value by scaling new agentic applications without necessitating alterations to their existing core infrastructure, AI models, or preferred cloud providers.

Managing a multi-cloud environment

A frequent concern for enterprise leaders is vendor lock-in, particularly when adopting proprietary platforms. IBM's strategy acknowledges the reality of the heterogeneous enterprise IT landscape. The service supports a multi-vendor foundation, compatible with Amazon Web Services, Google Cloud, and Microsoft Azure, alongside IBM watsonx. This approach extends to the models themselves, supporting both open- and closed-source variants.

By allowing companies to build upon their current investments rather than demanding a replacement strategy, the service addresses a barrier to adoption: the fear of technical debt accumulation when switching ecosystems.

The technical backbone of this offering is IBM Consulting Advantage, the company's internal delivery platform. Having utilised this system to support over 150 client engagements, IBM reports that the platform has boosted its own consultants' productivity by up to 50 percent. The premise is that if these tools can accelerate delivery for IBM's own teams, they should offer similar velocity for clients.

The service provides access to a marketplace of industry-specific AI agents and applications. For business leaders, this suggests a "platform-first" focus, where attention turns from managing individual models to managing a cohesive ecosystem of digital and human workers.

Active deployment of a platform-centric approach to scaling AI value

The efficacy of such a platform-centric approach is best viewed through active deployment. Pearson, the global learning company, is currently utilising this service to construct a custom platform. Their implementation combines human expertise with agentic assistants to manage everyday work and decision-making processes, illustrating how the technology functions in a live operational environment.

Similarly, a manufacturing firm has employed IBM's solution to formalise its generative AI strategy. For this client, the focus was on identifying high-value use cases, testing targeted prototypes, and aligning leaders around a scalable strategy.
The result was the deployment of AI assistants using multiple technologies within a secured, governed environment, laying a foundation for wider expansion across the enterprise.

Despite the attention surrounding generative AI, the realisation of balance-sheet impact is not guaranteed. "Many organisations are investing in AI, but achieving real value at scale remains a major challenge," notes Mohamad Ali, SVP and Head of IBM Consulting. "We have solved many of these challenges inside IBM by using AI to transform our own operations and deliver measurable results, giving us a proven playbook to help clients succeed."

The conversation is gradually moving away from the capabilities of specific LLMs and towards the architecture required to run them safely. Success in scaling AI and achieving value will likely depend on an organisation's ability to integrate these solutions without creating new silos. Leaders must ensure that as they adopt pre-built agentic workflows, they maintain rigorous data lineage and governance standards.

See also: JPMorgan Chase treats AI spending as core infrastructure
  5. Artificial intelligence has shifted rapidly from a peripheral innovation to a structural component of modern financial services. In banking, payments, and wealth management, to name but three sub-sectors, AI is now embedded in budgeting tools, fraud detection systems, KYC, AML, and customer engagement platforms. Credit unions sit within this broader fintech transformation, facing similar technological pressures while operating under distinct cooperative models built on trust, services offered in competitive markets, and community alignment.

Consumer behaviour suggests AI is already part of everyday financial decision-making. Research from Velera indicates that 55% of consumers use AI tools for financial planning or budgeting, while 42% are comfortable using AI to complete financial transactions. Adoption is highest among younger demographics, with 80% of Gen Z and younger millennials using AI for financial planning and close to that proportion expressing 'comfort' with agentic AI. These patterns mirror trends in the wider fintech sector, where AI-driven personal finance tools and conversational interfaces have become more common.

This creates a particular dual challenge for credit unions. Member expectations are shaped by large fintech companies' digital platforms and apps, and large digital banks are deploying AI at scale. Yet at the average credit union, internal readiness remains limited. A CULytics survey shows that although 42% of credit unions have implemented AI in specific operational areas, only 8% report using it in multiple parts of the business. The gap between market expectations and institutional ability defines the current phase of AI adoption in the cooperative-based financial sector.

AI as a trust-based extension of financial services

Unlike many fintech startups, credit unions benefit from high levels of consumer trust. Velera reports that 85% of consumers see credit unions as reliable sources of financial advice, and 63% of CU members say they would attend AI-related educational sessions if such were offered. These findings suggest credit unions are well placed to frame AI as an advisory tool embedded in existing relationships.

In the wider fintech sector, 'explainable AI' and transparent digital finance are already mainstays in areas like identity verification, and regulators watch the technology closely. Regulators and consumers alike expect transparency into how decisions are made by AI back ends. Credit unions can build on this expectation by integrating AI into education programmes, fraud awareness efforts, and financial literacy initiatives.

Where AI delivers tangible value

Personalisation is a leading use case for AI. Machine learning models let financial institutions move beyond static customer segmentation by drawing on behavioural signals and life-stage indicators. The approach is already common in other sectors, and within finance it is well established in fintech lending and digital banking platforms. Credit unions can adopt similar techniques to tailor offers and communications and to make product recommendations.

Member service represents another potential high-impact area. According to CULytics, 58% of credit unions now use chatbots or virtual assistants, the most-adopted AI application in the sector. Cornerstone Advisors reports that deployment is accelerating faster among credit unions than banks, with AI used to handle routine enquiries and preserve staff capacity.

Fraud prevention has also emerged as an AI use case in the sector.
Alloy reports a 92% net increase in AI fraud prevention investment among credit unions in 2025, compared with lower prioritisation among banks. As digital payments become more widely adopted, AI-driven fraud detection is important for balancing security with low-friction user experiences. In this respect, credit unions face the same pressures as mainstream fintech payment providers and neobanks, where false declines and delayed responses can directly erode customer trust.

Operational efficiency and lending decisions also feature prominently. Research from Inclind and CULytics shows AI being applied to reconciliation, underwriting, and internal business analytics. Users report reduced manual workloads and faster credit decisions. Cornerstone Advisors identifies lending as the third-most common AI function among credit unions, placing them closer to fintech lenders than traditional banks in this area.

Structural barriers to scaling AI

Despite clear use cases, scaling AI in credit unions remains difficult. Data readiness is the most frequently cited constraint. Cornerstone Advisors reports that only 11% of credit unions rate their data strategy as very effective (nearly a quarter consider it ineffective). Without accessible, well-governed data, AI systems cannot deliver reliable outcomes, regardless of the underlying sophistication of the LLM.

Trust and explainability also limit the technology's expansion. In regulated financial environments, opaque "black box" models create risk for institutions that must routinely justify their decisions to members. PYMNTS Intelligence highlights the importance of breaking down data silos and using shared intelligence models to improve transparency and auditability. Consortium-based approaches, like those used by Velera across thousands of credit unions, reflect a trend in the financial sector towards pooled data.

Integration presents a further challenge. CULytics finds that 83% of credit unions cite integration with legacy systems as an obstacle to AI, an issue familiar to many financial institutions. Limited in-house AI expertise compounds this, again suggesting fintech partnerships, credit union service organisations (CUSOs), or externally managed platforms as ways to accelerate deployment.

From experimentation to embedded practice

As AI becomes embedded in financial services, credit unions face a choice already confronted by banks and the wider fintech sector: whether to treat AI as a foundational capability. Evidence suggests progress depends on disciplined execution. That means prioritising high-trust, high-impact use cases, so institutions can deliver visible benefits without undermining members' confidence. Strengthening data governance and accountability ensures AI-assisted decisions remain explainable and defensible. Partner-led integration might reduce technical complexity, while education and transparency align AI adoption with the values that underpin the cooperative organisation.
  6. Inside large banks, artificial intelligence has moved into a category once reserved for payment systems, data centres, and core risk controls. At JPMorgan Chase, AI is framed as infrastructure the bank believes it cannot afford to neglect.

That position came through clearly in recent comments from CEO Jamie Dimon, who defended the bank's rising technology budget and warned that institutions that fall behind on AI risk losing ground to competitors. The argument was not about replacing people but about staying functional in an industry where speed, scale, and cost discipline matter every day.

JPMorgan has been investing heavily in technology for years, but AI has changed the tone of that spending. What once sat within innovation projects is now folded into the bank's baseline operating costs. That includes internal AI tools that support research, document drafting, internal reviews, and other routine tasks across the organisation.

From experimentation to infrastructure

The shift in language reflects a deeper change in how the bank views risk. AI is considered part of the systems required to keep pace with competitors that are automating internal work. Rather than encouraging workers to rely on public AI systems, JPMorgan has focused on building and governing its own internal platforms. That decision reflects long-held concerns in banking about data exposure, client confidentiality, and regulatory monitoring.

Banks operate in an environment where mistakes carry high costs. Any system that touches sensitive data or influences choices must be auditable and explainable. Public AI tools, trained on external data and updated frequently, make that difficult. Internal systems give JPMorgan more control, even if they take longer to deploy.

The approach also reduces the potential for uncontrolled "shadow AI," in which employees use unapproved tools to speed up work. While such tools can improve productivity, they create gaps in oversight that regulators tend to notice quickly.

A cautious approach to workforce change

JPMorgan has been careful in how it talks about AI's impact on jobs. The bank has avoided claims that AI will dramatically reduce headcount. Instead, it presents AI as a way to reduce manual work and improve consistency. Tasks that once required multiple review cycles can now be completed faster, with employees still responsible for final judgement. The framing positions AI as support rather than substitution, which matters in a sector sensitive to political and regulatory reaction.

The scale of the organisation makes this approach practical. JPMorgan employs hundreds of thousands of people worldwide. Even tiny efficiency gains, applied broadly, can translate into meaningful cost savings over time.

The upfront investment required to build and maintain internal AI systems is substantial. Dimon acknowledges that technology spending can have an impact on short-term performance, especially when market conditions are uncertain. His response is that cutting back on technology now may improve margins in the near term, but it risks weakening the bank's position later. In that sense, AI spending is treated as a form of insurance against falling behind.

JPMorgan, AI, and the risk of falling behind rivals

JPMorgan's stance reflects mounting pressure in the banking sector. Rivals are investing in AI to speed up fraud detection, streamline compliance work, and improve internal reporting. As these tools become more common, expectations rise. Regulators may assume banks have access to advanced monitoring systems.
Clients may expect faster responses and fewer errors. In that environment, lagging on AI can look less like caution and more like mismanagement.

JPMorgan has not suggested that AI will solve structural challenges or eliminate risk. Many AI projects struggle to move beyond narrow uses, and integrating them into complex systems remains difficult. The harder work lies in governance. Deciding which teams can use AI, under what conditions, and with what oversight requires clear rules. Errors need defined escalation paths. Responsibility must be assigned when systems produce flawed output. Across large enterprises, AI adoption is not limited by access to models or computing power, but constrained by process, policy, and trust.

For other end-user companies, JPMorgan's approach offers a useful reference point. AI is treated as part of the machinery that keeps the organisation running. That does not guarantee success. Returns may take years to appear, and some investments will not pay off. But the bank's position is that the greater risk lies in doing too little, not too much.

See also: Banks operationalise as Plumery AI launches standardised integration
  7. For the majority of web users, generative AI is AI. Large Language Models (LLMs) like GPT and Claude are the de facto gateway to artificial intelligence and the infinite possibilities it has to offer. After mastering our syntax and remixing our memes, LLMs have captured the public imagination. They're easy to use and fun. And – the odd hallucination aside – they're smart.

But while the public plays around with their favourite flavour of LLM, those who live, breathe, and sleep AI – researchers, tech heads, developers – are focused on bigger things. That's because the ultimate goal for AI max-ers is artificial general intelligence (AGI). That's the endgame.

To the professionals, LLMs are a sideshow. Entertaining and eminently useful, but ultimately 'narrow AI.' They're good at what they do because they've been trained on specific datasets, but incapable of straying out of their lane and attempting to solve larger problems. The diminishing returns and inherent limitations of deep learning models are prompting exploration of smarter solutions capable of actual cognition. Models that lie somewhere between the LLM and AGI.

One system that falls into this bracket – smarter than an LLM and a foretaste of future AI – is OpenCog Hyperon, an open-source framework developed by SingularityNET. With its 'neural-symbolic' approach, Hyperon is designed to bridge the gap between statistical pattern matching and logical reasoning, offering a roadmap that joins the dots between today's chatbots and tomorrow's infinite thinking machines.

Hybrid architecture for AGI

SingularityNET has positioned OpenCog Hyperon as a next-generation AGI research platform that integrates multiple AI models into a unified cognitive architecture. Unlike LLM-centric systems, Hyperon is built around neural-symbolic integration in which AI can learn from data and reason about knowledge. That's because with neural-symbolic AI, neural learning components and symbolic reasoning mechanisms are interwoven so that one can inform and enhance the other. This overcomes one of the primary limitations of purely statistical models by incorporating structured, interpretable reasoning processes.

At its core, OpenCog Hyperon combines probabilistic logic and symbolic reasoning with evolutionary programme synthesis and multi-agent learning. That's a lot of terms to take in, so let's try to break down how this all works in practice. To understand OpenCog Hyperon – and specifically why neural-symbolic AI is such a big deal – we need to understand how LLMs work and where they come up short.

The limits of LLMs

Generative AI operates primarily on probabilistic associations. When an LLM answers a question, it doesn't 'know' the answer in the way a human instinctively does. Instead, it calculates the most probable sequence of words to follow the prompt based on its training data. Most of the time, this 'impersonation of a person' comes across very convincingly, providing the human user with not only the output they expect, but one that is correct.

LLMs specialise in pattern recognition on an industrial scale and they're very good at it. But the limitations of these models are well documented. There's hallucination, of course, which we've already touched on, where plausible-sounding but factually incorrect information is presented. Nothing gaslights harder than an LLM eager to please its master. But a greater problem, particularly once you get into more complex problem-solving, is a lack of reasoning.
LLMs aren't adept at logically deducing new truths from established facts if those specific patterns weren't in the training set. If they've seen the pattern before, they can predict its appearance again. If they haven't, they hit a wall.

AGI, in comparison, describes artificial intelligence that can genuinely understand and apply knowledge. It doesn't just guess the right answer with a high degree of certainty – it knows it, and it's got the working to back it up. Naturally, this ability calls for explicit reasoning skills and memory management – not to mention the ability to generalise when given limited data. Which is why AGI is still some way off – how far off depends on which human (or LLM) you ask. But in the meantime, whether AGI be months, years, or decades away, we have neural-symbolic AI, which has the potential to put your LLM in the shade.

Dynamic knowledge on demand

To understand neural-symbolic AI in action, let's return to OpenCog Hyperon. At its heart is the Atomspace Metagraph, a flexible graph structure that represents diverse forms of knowledge – declarative, procedural, sensory, and goal-directed – all contained in a single substrate. The metagraph can encode relationships and structures in ways that support not just inference, but logical deduction and contextual reasoning. If this sounds a lot like AGI, it's because it is: 'diet AGI,' if you like, offering a taster of where artificial intelligence is headed next.

So that developers can build with the Atomspace Metagraph and use its expressive power, the Hyperon team has created MeTTa (Meta Type Talk), a novel programming language designed specifically for AGI development. Unlike general-purpose languages like Python, MeTTa is a cognitive substrate that blends elements of logic and probabilistic programming. Programmes in MeTTa operate directly on the metagraph, querying and rewriting knowledge structures, and supporting self-modifying code, which is essential for systems that learn how to improve themselves.

"We're emerging from a couple of years spent on building tooling. We've finally got all our infrastructure working at scale for Hyperon, which is exciting." Our CEO, Dr. @bengoertzel, joined Robb Wilson and Josh Tyson on the Invisible Machines podcast to discuss the present and… – SingularityNET (@SingularityNET), January 19, 2026

Robust reasoning as gateway to AGI

The neural-symbolic approach at the heart of Hyperon addresses a key limitation of purely statistical AI, namely that narrow models struggle with tasks requiring multi-step reasoning. Abstract problems bamboozle LLMs with their pure pattern recognition. Throw symbolic reasoning into the mix alongside neural learning, however, and reasoning becomes smarter and more human. If narrow AI does a good impersonation of a person, neural-symbolic AI does an uncanny one (a toy illustration of the idea follows at the end of this item).

That being said, it's important to contextualise neural-symbolic AI. Hyperon's hybrid design doesn't mean an AGI breakthrough is imminent. But it represents a promising research direction that explicitly tackles cognitive representation and self-directed learning rather than relying on statistical pattern matching alone. And in the here and now, this concept isn't constrained to some big brain whitepaper – it's out there in the wild and being actively used to create powerful solutions.

The LLM isn't dead – narrow AI will continue to improve – but its days are numbered and its obsolescence inevitable. It's only a matter of time. First neural-symbolic AI.
Then, hopefully, AGI – the final boss of artificial intelligence.
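To ground the neural-symbolic idea in something runnable, here is a toy Python sketch in which a stand-in "neural" scorer proposes facts with confidences and a small symbolic rule derives a new fact the scorer never produced. It is a conceptual illustration only, written by analogy: it is not MeTTa, not Hyperon code, and far simpler than the Atomspace Metagraph described above.

```python
# Toy illustration of the neural-symbolic split: a statistical scorer proposes
# facts with confidences, and a symbolic rule layer deduces new facts.
# This is NOT MeTTa or OpenCog Hyperon code; names and values are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: str

def neural_scorer(text: str) -> list:
    """Stand-in for a learned model: extracts candidate facts with confidence."""
    # A real system would run a neural extractor here; we hard-code an example.
    return [
        (Fact("Socrates", "is_a", "human"), 0.97),
        (Fact("human", "subclass_of", "mortal"), 0.99),
    ]

def symbolic_deduction(facts: set) -> set:
    """Apply one transitivity rule: is_a + subclass_of => is_a."""
    derived = set(facts)
    for a in facts:
        for b in facts:
            if a.relation == "is_a" and b.relation == "subclass_of" and a.obj == b.subject:
                derived.add(Fact(a.subject, "is_a", b.obj))
    return derived

if __name__ == "__main__":
    candidates = neural_scorer("Socrates is a man; all men are mortal.")
    accepted = {fact for fact, conf in candidates if conf > 0.9}  # thresholded neural output
    for fact in symbolic_deduction(accepted):
        print(fact)
```

The division of labour is the point: the statistical component handles noisy extraction, while the symbolic layer contributes the multi-step deduction (Socrates is human, humans are mortal, therefore Socrates is mortal) that pure pattern matching struggles with.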
  8. After years of experimentation with artificial intelligence, retailers are striving to embed consumer insight directly into everyday commercial decisions. First Insight, a US-based analytics company specialising in predictive consumer feedback, argues that the next phase of retail AI should be defined by dialogue, not dashboards.

Following a three-month beta programme, First Insight has made its new AI tool, Ellis, available to brands and retailers. Ellis is designed as a conversational interface that allows merchandising, pricing, and planning teams to ask questions about products, pricing, and demand within the First Insight platform. The company says its approach is intended to compress decision times into minutes.

Research by McKinsey has found that while most large retailers now collect volumes of customer data, some cannot translate insights into action quickly enough to influence product development decisions. It notes that AI tools which shorten the distance between insight and execution are more likely to deliver measurable commercial value than reporting systems.

From dashboards to dialogue

First Insight has worked with retailers including Boden, Family Dollar, and Under Armour to predict consumer demand, price sensitivity, and performance using survey feedback and predictive modelling. Such insights are usually delivered on a dashboard or in a report. Ellis lets users query insights conversationally. For example, teams can ask whether a six-item or nine-item assortment is likely to perform better in a specific market, or how removing certain materials might affect appeal. First Insight says the system returns answers grounded in its existing data models (a toy sketch of this question-to-prediction pattern appears at the end of this item).

Industry evidence suggests that this method could help relieve a bottleneck in retail decision-making. A Harvard Business Review analysis of data-driven retail organisations found that insight often loses value when it cannot be accessed quickly, particularly during phases like line review or early concept development.

Predictive insight already in operation

The underlying techniques used by First Insight are already deployed across the retail sector. Under Armour has described using consumer data and predictive modelling to refine product assortments and pricing strategies, stating the technology helps it reduce markdown risk and improve full-price selling. Similarly, fashion retailer Boden has discussed the role of customer insight in guiding assortment decisions, particularly in balancing trend-led items with core items. While these companies do not disclose the details of their proprietary systems, such cases show how predictive consumer data can be embedded into commercial planning.

Comparable tools are also in use elsewhere in the industry. Retailers including Walmart and Target have invested in analytics and machine learning to understand regional demand patterns, optimise pricing, and test new concepts. According to a Deloitte study on AI in retail, companies using predictive consumer insight report improved forecast accuracy and lower inventory risk, particularly when analytics are integrated early.

Pricing, assortments and competitive dynamics

Ellis is powered by what First Insight describes as a predictive retail large language model, one that is trained on consumer response data. The company says this lets the system answer questions about optimal pricing, predicted sales rates, ideal assortment size, and likely segment preferences.
This focus aligns with academic research showing that price optimisation and assortment planning are among the highest-value AI use cases in retail. A study published in the Journal of Retailing found that data-driven pricing models can outperform traditional cost-plus approaches, particularly when consumer willingness-to-pay is measured directly.

Competitive benchmarking is another area where retailers can use analytics. Research from Bain & Company indicates retailers able to compare their products with competitors' are better positioned to differentiate on value as well as price. Tools that consolidate such comparisons into a single analytical layer are therefore attractive.

Making insight more widely accessible

One of First Insight's core claims is that Ellis makes consumer insight accessible outside of specialist analytics teams. Natural-language queries, the company argues, let everyone from senior executives down engage with data without waiting for analysis.

Democratisation of analytics is a recurring theme in industry research. Gartner reports that organisations which broaden access to analytics are more likely to see tool adoption and ROI. However, it cautions that systems should be governed to ensure outputs are interpreted correctly and stem from robust data.

First Insight maintains that Ellis retains the methodological rigour of its existing platform, while reducing friction at the point of decision. According to Greg Petro, the company's chief executive, the goal is to bring predictive insight into the moment when decisions are actually made. "For nearly 20 years, First Insight has helped retailers predict pricing, product success and assortment decisions by grounding them in real consumer feedback," a company spokesperson said. "Ellis brings that intelligence directly into line review, early concept development and the boardroom, helping teams move faster without sacrificing confidence."

A crowded but growing market

First Insight is not alone in targeting this space. Vendors such as EDITED, DynamicAction, and RetailNext offer AI tools aimed at merchandising and pricing. What differentiates newer offerings is the emphasis on usability and speed rather than model complexity. A recent Forrester report on retail AI noted that conversational interfaces are being layered on top of established analytics platforms, reflecting demand from users for more intuitive interaction with data. Such tools can lead to better decisions, although they remain dependent on data quality and organisational discipline.

First Insight previewed Ellis at this year's National Retail Federation conference in New York, where AI-driven merchandising and pricing tools featured prominently. As retailers face volatile demand, inflation, and changing consumer preferences, the ability to test scenarios remains valuable.
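As a rough illustration of the "dialogue over dashboards" pattern described above, the toy sketch below routes a natural-language question to a predictive function and returns a plain-text answer. Everything in it is hypothetical: the parsing, the market figures, and the model are stand-ins, and it is not First Insight's Ellis or its API.

```python
# Toy sketch: answer a merchandising question by calling a predictive function.
# All numbers, markets, and parsing rules are invented for illustration.
import re

def predict_sell_through(assortment_size: int, market: str) -> float:
    """Stand-in for a predictive model trained on consumer feedback."""
    base = {"UK": 0.62, "US": 0.58}.get(market, 0.55)
    return round(min(0.95, base + 0.01 * (9 - assortment_size)), 2)

def answer(question: str) -> str:
    """Tiny 'router': pull the assortment size and market out of the question."""
    match = re.search(r"(\d+)-item", question)
    if match is None:
        return "Please mention an assortment size such as '6-item'."
    size = int(match.group(1))
    market = "UK" if "UK" in question else "US"
    rate = predict_sell_through(size, market)
    return f"A {size}-item assortment in the {market} is predicted to sell through at {rate:.0%}."

print(answer("How would a 6-item assortment perform in the UK?"))
```

A production system would replace the regular-expression parsing with a language model front end and the hard-coded figures with a trained predictive back end, but the shape of the interaction, a question in and a grounded prediction out, is the same.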
  9. A new technology from digital banking platform Plumery AI aims to address a dilemma for financial institutions: how to move beyond proofs of concept and embed artificial intelligence into everyday banking operations without compromising governance, security, or regulatory compliance.

Plumery's "AI Fabric" has been positioned by the company as a standardised framework for connecting generative AI tools and models to core banking data and services. According to Plumery, the product is intended to reduce reliance on bespoke integrations and to promote an event-driven, API-first architecture that can scale as institutions grow (a generic sketch of the event-driven pattern appears at the end of this item).

The challenge it seeks to address is recognised in the sector. Banks have invested heavily in AI experimentation over the past decade, but many deployments remain limited. Research by McKinsey suggests that while generative AI could materially improve productivity and customer experience in financial services, most banks struggle to translate pilots into production because of fragmented data estates and incumbent operating models. The consultancy argues that enterprise-level AI adoption requires shared infrastructure, governance, and reusable data products.

In comments accompanying the product launch, Plumery's founder and chief executive, Ben Goldin, said financial institutions are clear about what they expect from AI. "They want real production use cases that improve customer experience and operations, but they will not compromise on governance, security or control," he said. "The event-driven data mesh architecture transforms how banking data is produced, shared, and consumed, not adding another AI layer on top of fragmented systems."

Fragmented data remains a barrier

Data fragmentation remains one of the main obstacles to operational AI in banking. Many institutions rely on legacy core systems that sit alongside newer digital channels, creating silos across products and customer journeys. Each AI initiative requires fresh integration work, security reviews, and governance approvals, increasing costs and slowing delivery.

Academic and industry research supports this diagnosis. Studies on explainable AI in financial services note that fragmented pipelines make it harder to trace decisions and increase regulatory risk, particularly in areas like credit scoring and anti-money-laundering. Regulators have made clear that banks must be able to explain and audit AI-driven outcomes, regardless of where the models are developed.

Plumery says its AI Fabric addresses such issues by presenting domain-oriented banking data as governed streams that can be reused across multiple use cases. The company argues that separating systems of record from systems of engagement and intelligence allows banks to innovate more safely.

Evidence of AI already in production

Despite the challenges, AI is already embedded in many parts of the financial sector. Case studies compiled by industry analysts show widespread use of machine learning and natural language processing in customer service, risk management, and compliance. Citibank, for example, has deployed AI-powered chatbots to handle routine customer enquiries, reducing pressure on call centres and improving response times. Other large banks use predictive analytics to monitor loan portfolios and anticipate defaults. Santander has publicly described its use of machine learning models to assess credit risk and strengthen portfolio management.

Fraud detection is another mature area.
Banks increasingly rely on AI systems to analyse transaction patterns, flagging anomalous behaviour more effectively than rule-based systems. Research from technology consultancies notes that such models depend on high-quality data flows, and that integration complexity remains a limiting factor for smaller institutions.

More advanced applications are emerging at the margins. Academic research into large language models suggests that, under strict governance, conversational AI could support certain transactional and advisory functions in retail banking. However, these implementations remain experimental and are closely scrutinised due to their regulatory implications.

Platform providers and ecosystem approaches

Plumery operates in a competitive market of digital banking platforms that position themselves as orchestration layers rather than replacements for core systems. The company has entered partnerships designed to fit into broader fintech ecosystems. Its integration with Ozone API, an open banking infrastructure provider, was presented as a way for banks to deliver standards-compliant services more quickly, without custom development.

Its approach reflects a wider industry trend towards composable architectures. Vendors like Backbase and others promote API-centric platforms that allow banks to plug AI, analytics, and third-party services into the existing core. Analysts generally agree that such architectures are better suited to incremental innovation than large-scale system replacement.

Readiness remains uneven

Evidence suggests that readiness in the sector is uneven. A report by Boston Consulting Group found that fewer than a quarter of banks believe they are prepared for large-scale AI adoption. The gap, it argued, lies in governance, data foundations, and operating discipline.

Regulators have responded by offering controlled environments for experimentation. In the UK, regulatory sandbox initiatives allow banks to test new technologies, including AI. These programmes are intended to support innovation while reinforcing accountability and risk management.

For vendors like Plumery, the opportunity lies in providing infrastructure that aligns technological ambition with regulatory reality. AI Fabric enters a market where demand for operational AI is apparent, but where success depends on proving that new tools can be safe and transparent. Whether Plumery's approach becomes an adopted standard remains uncertain. As banks move from experimentation to production, the focus is shifting towards the architectures that support AI. In that context, platforms that can demonstrate technical flexibility and governance adherence are more likely to play an important role in digital banking's next phase.
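The article describes an event-driven, API-first architecture in which banking data is exposed as governed streams rather than wired up through bespoke integrations. As a generic sketch of what consuming such a stream can look like, here is a minimal Kafka-style consumer in Python; the broker address, topic name, and message fields are hypothetical, and this illustrates the general pattern rather than Plumery's AI Fabric or any of its interfaces.

```python
# Minimal sketch of consuming a governed, event-driven banking data stream.
# Broker, topic, and message schema are hypothetical; this is a generic
# Kafka-style illustration, not Plumery's AI Fabric or its API.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "payments.transactions.v1",            # hypothetical governed topic
    bootstrap_servers="localhost:9092",
    group_id="fraud-screening-demo",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Blocks and processes events as they arrive on the stream.
for message in consumer:
    event = message.value
    # Downstream AI services subscribe to the same governed stream instead of
    # each building a bespoke integration against the core banking system.
    if event.get("amount", 0) > 10_000:
        print(f"flag for review: {event.get('transaction_id')}")
```

The design benefit of the pattern is that a fraud model, a chatbot, and a reporting job can all subscribe to the same governed topic, so adding a new AI use case does not require another point-to-point integration with the core banking system.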
  10. Author: Richard Farrell, CIO at Netcall

After a year of rapid adoption and high expectations surrounding artificial intelligence, 2026 is shaping up to be the year CIOs apply a more strategic lens. Not to slow progress, but to steer it in a smarter direction.

In 2025, we saw the rise of AI copilots across almost every platform imaginable. From browsers and CRMs to productivity tools and helpdesks, the tech world raced to embrace assistance-on-demand. But while vendors marketed "magic," CIOs were left with the clean-up. Multiple pilots. Multiple platforms. Multiple promises. Few results.

Now the honeymoon period is over. It's time to assess what worked, what didn't, and what truly matters.

The role of the CIO is shifting from tech enthusiast to strategic outcome architect. That means moving from disconnected experiments to holistic thinking – aligning people, process, and technology to drive sustainable results. Process mapping will become an essential starting point: identifying pain points, inefficiencies, and areas for AI and automation that directly link to measurable outcomes.

And that shift comes with a new set of priorities. Here are five that will define 2026.

Process intelligence will replace fragmented copilots

The early promise of AI copilots was appealing: save time, reduce manual work, and supercharge productivity. But reality has been far more grounded. Independent evaluations, including a detailed UK Department for Business and Trade trial, found minimal measurable productivity improvements[1]. Despite glowing self-reports, actual gains were either negligible or non-existent.

Why? Because these tools were designed for individual users, not organisations. They sat on top of workflows, rather than improving them. In too many cases, the top use case was summarising meeting notes – useful, but hardly transformative.

In 2026, CIOs will shift focus from point solutions to end-to-end platforms. The goal will be clear: use AI to optimise business processes, not pad out software features. This pivot from individual utility to organisational efficiency will be the biggest AI reset of the year.

Consolidation will beat complexity

CIOs have long battled sprawling tech estates and overlapping solutions, often held together by fragile integrations. In 2026, that complexity will come under fresh scrutiny. Too many tools chasing too few outcomes is no longer sustainable.

There will be a marked shift towards simplification – rationalising technology stacks and working with partners who can demonstrate true interoperability. CIOs will favour vendors who collaborate rather than compete, and who can clearly show how their solutions integrate within the broader ecosystem. Less will be more, especially when it comes to driving efficiency and speed.

This change is as much about procurement strategy as it is about technology. CIOs will look to platform-based approaches that offer the flexibility to build applications tailored to real-world processes. The ability to generate apps directly from mapped processes – refining and improving iteratively – will empower digital teams to deliver faster and smarter. It means building long-term partnerships that are based on shared goals and business value, not short-term sprints or siloed innovation.

Governance will take centre stage

The more AI scales, the more governance matters. In 2026, successful CIOs will build guardrails into every intelligent system.
This means moving away from retrofitting rules after the fact, and instead embedding governance by design – from the very beginning of deployment. That includes audit trails, escalation rules, and privacy protocols, all built into the user journey through intuitive, adaptable frameworks. Proper escalation and human-in-the-loop models will be essential, alongside data stewardship – knowing where data is stored, how it's accessed, and ensuring privacy by design. Governance isn't a drag on progress; it's the foundation of trust.

Low-code platforms are emerging as powerful enablers in this shift. They don't just speed up development – they allow CIOs to embed controls directly into the build process. This approach supports the democratisation of development, empowering teams to iterate, improve, and scale quickly, without compromising on oversight. That means compliance can't be tacked on later; it must be built in from the start. This accelerates delivery while reassuring regulators, customers, and internal teams alike. This shift will ensure that automation supports human judgement, not overrides it – building systems people trust, not just systems that work.

Prediction must be followed by action

AI is good at pattern recognition. But unless those patterns trigger interventions, they don't change outcomes.

A shining example of this shift is the work at Rotherham NHS Foundation Trust. By embedding AI directly into its workflows, the Trust saw attendance among those most at risk of missing appointments improve significantly, with a 67% reduction in missed visits. It was not just that the model could identify at-risk patients; it was that this insight triggered an additional reminder, leading to better outcomes. The value was not in the model alone but in how it changed communication in a meaningful, practical way.

That's what CIOs will demand in 2026. Prediction engines must be paired with platforms that empower action. Whether it's preventing missed appointments or spotting security anomalies before breaches occur, success will be defined by what AI enables teams to do differently.

Value must be proven, not assumed

A dangerous trend emerged in 2025: building business cases on feelings. CIOs were pressured to prove AI success based on user satisfaction or time-saving estimates, often self-reported. The problem? These metrics are vague, inconsistent, and impossible to verify.

In 2026, that won't be good enough. CIOs will be expected to show clear cause and effect. If AI is being used, what has it replaced? What has it improved? What cost has it avoided? We need to replace the tick-box mindset with a value lens. That means thinking beyond the tech and tying initiatives back to outcomes CEOs care about – growth, resilience, customer satisfaction, and efficiency.

Crucially, this demands a holistic approach. It's not just about technology. CIOs must align people, process, and platform – starting with detailed process mapping to understand how work gets done, where inefficiencies lie, and how those insights translate into smarter applications. These maps become blueprints for building, offering a framework to generate applications that deliver measurable value.

The resolution: outcome-led leadership

CIOs have spent the last decade digitising the enterprise. In 2026, their role will evolve again – from technologists to outcome architects. This year isn't about pulling back on AI or slowing innovation. It's about getting clear. Clear on priorities. Clear on governance. Clear on impact.
The best CIOs will ask the toughest questions. Are we solving a real problem, or just deploying tech? Can we measure the benefit, not just hope for it? Are we building something sustainable, or chasing hype?

2026 is the year we stop experimenting for the sake of it and start delivering for the business. The age of shiny objects is over. It's time for substance. And that starts with us.

Author: Richard Farrell, CIO at Netcall

(Image source: "Apollo classic concept art: Parachute deployment" by Mooncat.Drew is marked with Public Domain Mark 1.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI dominated the conversation in 2025, CIOs shift gears in 2026 appeared first on AI News.

View the full article
  11. The ETSI EN 304 223 standard introduces baseline security requirements for AI that enterprises must integrate into governance frameworks. As organisations embed machine learning into their core operations, this European Standard (EN) establishes concrete provisions for securing AI models and systems.

It stands as the first globally applicable European Standard for AI cybersecurity, having secured formal approval from National Standards Organisations to strengthen its authority across international markets.

The standard serves as a necessary benchmark alongside the EU AI Act. It addresses the reality that AI systems possess specific risks – such as susceptibility to data poisoning, model obfuscation, and indirect prompt injection – that traditional software security measures often miss. The standard covers deep neural networks and generative AI through to basic predictive systems, explicitly excluding only those used strictly for academic research.

ETSI standard clarifies the chain of responsibility for AI security

A persistent hurdle in enterprise AI adoption is determining who owns the risk. The ETSI standard resolves this by defining three primary technical roles: Developers, System Operators, and Data Custodians.

For many enterprises, these lines blur. A financial services firm that fine-tunes an open-source model for fraud detection counts as both a Developer and a System Operator. This dual status triggers strict obligations, requiring the firm to secure the deployment infrastructure while documenting the provenance of training data and the model's design for auditing.

The inclusion of 'Data Custodians' as a distinct stakeholder group directly impacts Chief Data and Analytics Officers (CDAOs). These entities control data permissions and integrity, a role that now carries explicit security responsibilities. Custodians must ensure that the intended usage of a system aligns with the sensitivity of the training data, effectively placing a security gatekeeper within the data management workflow.

ETSI's AI standard makes clear that security cannot be an afterthought appended at the deployment stage. During the design phase, organisations must conduct threat modelling that addresses AI-native attacks, such as membership inference and model obfuscation.

One provision requires developers to restrict functionality to reduce the attack surface. For instance, if a system uses a multi-modal model but only requires text processing, the unused modalities (like image or audio processing) represent a risk that must be managed. This requirement forces technical leaders to reconsider the common practice of deploying massive, general-purpose foundation models where a smaller and more specialised model would suffice.

The document also enforces strict asset management. Developers and System Operators must maintain a comprehensive inventory of assets, including interdependencies and connectivity. This supports shadow AI discovery; IT leaders cannot secure models they do not know exist. The standard also requires the creation of specific disaster recovery plans tailored to AI attacks, ensuring that a "known good state" can be restored if a model is compromised.

Supply chain security presents an immediate friction point for enterprises relying on third-party vendors or open-source repositories. The ETSI standard requires that if a System Operator chooses to use AI models or components that are not well-documented, they must justify that decision and document the associated security risks.
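The standard defines provisions, not code, but a minimal sketch can show how a team might record the asset-inventory and attack-surface requirements described above. The field names, role labels, and modality check below are assumptions for illustration; they are not taken from ETSI EN 304 223.

```python
# Illustrative only: ETSI EN 304 223 specifies requirements, not implementations.
# The record structure and the modality check are one possible way to track the
# inventory, interdependencies, and unused functionality the standard asks for.
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    name: str
    version: str
    role_owner: str                                          # Developer / System Operator / Data Custodian
    dependencies: list[str] = field(default_factory=list)    # interdependencies and connectivity
    enabled_modalities: set[str] = field(default_factory=set)
    required_modalities: set[str] = field(default_factory=set)

    def unused_modalities(self) -> set[str]:
        """Modalities switched on without a documented need - candidate attack surface."""
        return self.enabled_modalities - self.required_modalities

fraud_model = AIAssetRecord(
    name="fraud-detector",
    version="2.1.0",
    role_owner="System Operator",
    dependencies=["feature-store", "payments-api"],
    enabled_modalities={"text", "image"},
    required_modalities={"text"},
)

if fraud_model.unused_modalities():
    print("Attack-surface review needed:", fraud_model.unused_modalities())
```

Keeping such records per model also gives the "shadow AI discovery" effort something concrete to reconcile against.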
Practically, procurement teams can no longer accept "black box" solutions. Developers are required to provide cryptographic hashes for model components to verify authenticity. Where training data is sourced publicly (a common practice for Large Language Models), Developers must document the source URL and acquisition timestamp. This audit trail is necessary for post-incident investigations, particularly when attempting to identify if a model was subjected to data poisoning during its training phase.

If an enterprise offers an API to external customers, it must apply controls designed to mitigate AI-focused attacks, such as rate limiting to prevent adversaries from reverse-engineering the model or overwhelming defences to inject poison data.

The lifecycle approach extends into the maintenance phase, where the standard treats major updates – such as retraining on new data – as the deployment of a new version. Under the ETSI AI standard, this triggers a requirement for renewed security testing and evaluation.

Continuous monitoring is also formalised. System Operators must analyse logs not just for uptime, but to detect "data drift" or gradual changes in behaviour that could indicate a security breach. This moves AI monitoring from a performance metric to a security discipline.

The standard also addresses the "End of Life" phase. When a model is decommissioned or transferred, organisations must involve Data Custodians to ensure the secure disposal of data and configuration details. This provision prevents the leakage of sensitive intellectual property or training data through discarded hardware or forgotten cloud instances.

Executive oversight and governance

Compliance with ETSI EN 304 223 requires a review of existing cybersecurity training programmes. The standard mandates that training be tailored to specific roles, ensuring that developers understand secure coding for AI while general staff remain aware of threats like social engineering via AI outputs.

"ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems", said Scott Cadzow, Chair of ETSI's Technical Committee for Securing Artificial Intelligence. "At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated. The work that went into delivering this framework is the result of extensive collaboration and it means that organisations can have full confidence in AI systems that are resilient, trustworthy, and secure by design."

Implementing these baselines in ETSI's AI security standard provides a structure for safer innovation. By enforcing documented audit trails, clear role definitions, and supply chain transparency, enterprises can mitigate the risks associated with AI adoption while establishing a defensible position for future regulatory audits. An upcoming Technical Report (ETSI TR 104 159) will apply these principles specifically to generative AI, targeting issues like deepfakes and disinformation.

See also: Allister Frost: Tackling workforce anxiety for AI integration success

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Meeting the new ETSI standard for AI security appeared first on AI News. View the full article
  12. Hiring at large firms has long relied on interviews, tests, and human judgment. That process is starting to shift.

McKinsey has begun using an AI chatbot as part of its graduate recruitment process, signalling a shift in how professional services organisations evaluate early-career candidates. The chatbot is being used during the initial stages of recruitment, where applicants are asked to interact with it as part of their assessment. Rather than replacing interviews or final hiring decisions, the tool is intended to support screening and evaluation earlier in the process.

The move reflects a wider trend across large organisations: AI is no longer limited to research or client-facing tools, but is increasingly shaping internal workflows.

Why McKinsey is using AI in graduate hiring

Graduate recruitment is resource-heavy. Every year, large firms receive tens of thousands of applications, many of which must be assessed in short hiring cycles. Screening candidates for basic fit, communication skills, and problem-solving ability can take a long time, even before interviews begin.

Using AI at this stage offers a way to manage volume. A chatbot can interact with every applicant, ask consistent questions, and collect organised responses. Human recruiters can then review that data, rather than manually screening every application from scratch.

For McKinsey, the chatbot is part of a larger assessment process that includes interviews and human judgment. According to the company, the tool helps gather more information early on, rather than making recruiting judgments on its own.

Shifting the role of recruiters

Introducing AI into recruitment alters how hiring teams operate. Rather than focusing on early screening, recruiters can devote more time to assessing prospects who have already passed initial tests. In theory, that allows for more thoughtful interviews and deeper evaluation later in the process.

At the same time, it raises questions about oversight. Recruiters need to understand how the chatbot evaluates responses and what signals it prioritises. Without that visibility, there is a risk that decisions could lean too heavily on automated outputs, even if the tool is meant to assist rather than decide.

Professional services firms are typically wary of such adjustments. Their reputations rely heavily on talent quality, and any perception of unfair or flawed hiring practices carries risk. As a result, recruitment serves as a testing ground for AI use, as well as an area where controls are important.

Concerns around fairness and bias

Using AI in hiring is not without controversy. Critics have raised concerns that automated systems can reflect biases present in their training data or in how questions are framed. If not monitored closely, those biases can affect who progresses through the hiring process.

McKinsey has said it is mindful of these risks and that the chatbot is used alongside human review. Still, the move highlights a broader challenge for organisations adopting AI internally: tools must be tested, audited, and adjusted over time. In recruitment, that includes checking whether certain groups are disadvantaged by how questions are asked or how responses are interpreted (one common check is sketched below). It also means giving candidates clear information about how AI is used and how their data is handled.
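The article does not describe how McKinsey audits its chatbot, but one widely used screening-stage check is the "four-fifths" adverse impact ratio: compare pass rates across candidate groups and flag any group whose rate falls below 80% of the best-performing group's. A minimal sketch, using invented group labels and data:

```python
# Hypothetical data and group labels; the 0.8 cutoff is the conventional
# "four-fifths rule" used in hiring audits, not anything specific to McKinsey.
from collections import defaultdict

def pass_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, passed_screen) pairs -> pass rate per group."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        passes[group] += int(passed)
    return {g: passes[g] / totals[g] for g in totals}

def adverse_impact_flags(outcomes: list[tuple[str, bool]], cutoff: float = 0.8) -> dict[str, float]:
    """Flag groups whose pass rate falls below `cutoff` x the best-performing group's rate."""
    rates = pass_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < cutoff}

sample = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
       + [("group_b", True)] * 42 + [("group_b", False)] * 58
print(adverse_impact_flags(sample))  # flags group_b with a ratio of about 0.7
```

A check like this only covers outcomes at one stage; auditing how questions are phrased or how answers are scored requires separate, qualitative review.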
How McKinsey's AI hiring move fits a wider enterprise trend

The use of AI in graduate hiring is not unique to consulting. Large employers in finance, law, and technology are also testing AI tools for screening, scheduling interviews, and analysing written responses. What stands out is how quickly these tools are moving from experiments to real processes.

In many cases, AI enters organisations through small, contained use cases. Hiring is one of them. It sits inside the company, affects internal efficiency, and can be adjusted without changing products or services offered to clients. That pattern mirrors how AI adoption is unfolding more broadly. Instead of sweeping transformations, many firms are adding AI to specific workflows where the benefits and risks are easier to manage.

What this signals for enterprises

McKinsey's use of an AI chatbot in recruitment points to a practical shift in enterprise thinking. AI is becoming a tool for routine internal decisions, not just analysis or automation behind the scenes.

For other organisations, the lesson is less about copying the tool and more about the approach. Introducing AI into sensitive areas like hiring requires clear boundaries, human oversight, and a willingness to review outcomes over time. It also requires communication. Candidates need to know when they are interacting with AI and how that interaction fits into the overall hiring process. Transparency helps build trust, especially as AI becomes more common in workplace decisions.

As professional services firms continue to test AI in their own operations, recruitment offers an early view of how far they are willing to go. The technology may help manage scale and consistency, but responsibility for decisions still rests with people. How well companies balance those two will shape how AI is accepted inside the enterprise.

(Photo by Resume Genius)

See also: Allister Frost: Tackling workforce anxiety for AI integration success

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post McKinsey tests AI chatbot in early stages of graduate recruitment appeared first on AI News.

View the full article
  13. OpenAI, Google, and Anthropic announced specialised medical AI capabilities within days of each other this month, a clustering that suggests competitive pressure rather than coincidental timing. Yet none of the releases are cleared as medical devices, approved for clinical use, or available for direct patient diagnosis – despite marketing language emphasising healthcare transformation.

OpenAI introduced ChatGPT Health on January 7, allowing US users to connect medical records through partnerships with b.well, Apple Health, Function, and MyFitnessPal. Google released MedGemma 1.5 on January 13, expanding its open medical AI model to interpret three-dimensional CT and MRI scans alongside whole-slide histopathology images. Anthropic announced Claude for Healthcare on January 11, offering HIPAA-compliant connectors to CMS coverage databases, ICD-10 coding systems, and the National Provider Identifier Registry.

All three companies are targeting the same workflow pain points – prior authorisation reviews, claims processing, clinical documentation – with similar technical approaches but different go-to-market strategies.

Developer platforms, not diagnostic products

The architectural similarities are notable. Each system uses multimodal large language models fine-tuned on medical literature and clinical datasets. Each emphasises privacy protections and regulatory disclaimers. Each positions itself as supporting rather than replacing clinical judgment.

The differences lie in deployment and access models. OpenAI's ChatGPT Health operates as a consumer-facing service with a waitlist for ChatGPT Free, Plus, and Pro subscribers outside the EEA, Switzerland, and the UK. Google's MedGemma 1.5 releases as an open model through its Health AI Developer Foundations program, available for download via Hugging Face or deployment through Google Cloud's Vertex AI. Anthropic's Claude for Healthcare integrates into existing enterprise workflows through Claude for Enterprise, targeting institutional buyers rather than individual consumers.

The regulatory positioning is consistent across all three. OpenAI states explicitly that Health "is not intended for diagnosis or treatment." Google positions MedGemma as "starting points for developers to evaluate and adapt to their medical use cases." Anthropic emphasises that outputs "are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications."

Benchmark performance vs clinical validation

Medical AI benchmark results improved substantially across all three releases, though the gap between test performance and clinical deployment remains significant.

Google reports that MedGemma 1.5 achieved 92.3% accuracy on MedAgentBench, Stanford's medical agent task completion benchmark, compared to 69.6% for the previous Sonnet 3.5 baseline. The model improved by 14 percentage points on MRI disease classification and 3 percentage points on CT findings in internal testing.

Anthropic's Claude Opus 4.5 scored 61.3% on MedCalc medical calculation accuracy tests with Python code execution enabled, and 92.3% on MedAgentBench. The company also claims improvements in "honesty evaluations" related to factual hallucinations, though specific metrics were not disclosed.
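As noted above, Google distributes MedGemma through Hugging Face for developer evaluation rather than clinical use. A minimal sketch of that access path is below; the checkpoint name is an assumption based on an earlier MedGemma release, MedGemma 1.5 may ship under a different id, and the models are gated behind Google's Health AI Developer Foundations terms.

```python
# Sketch only: checkpoint id is assumed and should be verified on Hugging Face;
# access requires accepting the model's usage terms. Outputs are for developer
# evaluation of administrative/documentation tasks, not diagnosis or treatment.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/medgemma-27b-text-it",  # assumed id - check availability before use
)

prompt = "Summarise the key documentation typically required for a prior authorisation request."
result = generator(prompt, max_new_tokens=150)
print(result[0]["generated_text"])
```

The example deliberately targets an administrative workflow, matching the carefully scoped deployments the article describes rather than any diagnostic use.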
OpenAI has not published benchmark comparisons for ChatGPT Health specifically, noting instead that "over 230 million people globally ask health and wellness-related questions on ChatGPT every week" based on de-identified analysis of existing usage patterns.

These benchmarks measure performance on curated test datasets, not clinical outcomes in practice. Medical errors can have life-threatening consequences, which makes translating benchmark accuracy into clinical utility more complex than in other AI application domains.

Regulatory pathway remains unclear

The regulatory framework for these medical AI tools remains ambiguous. In the US, the FDA's oversight depends on intended use. Software that "supports or provides recommendations to a health care professional about prevention, diagnosis, or treatment of a disease" may require premarket review as a medical device. None of the announced tools has FDA clearance.

Liability questions are similarly unresolved. When Banner Health's CTO Mike Reagin states that the health system was "drawn to Anthropic's focus on AI safety," this addresses technology selection criteria, not legal liability frameworks. If a clinician relies on Claude's prior authorisation analysis and a patient suffers harm from delayed care, existing case law provides limited guidance on responsibility allocation.

Regulatory approaches vary significantly across markets. While the FDA and Europe's Medical Device Regulation provide established frameworks for software as a medical device, many APAC regulators have not issued specific guidance on generative AI diagnostic tools. This regulatory ambiguity affects adoption timelines in markets where healthcare infrastructure gaps might otherwise accelerate implementation – creating a tension between clinical need and regulatory caution.

Administrative workflows, not clinical decisions

Real deployments remain carefully scoped. Novo Nordisk's Louise Lind Skov, Director of Content Digitalisation, described using Claude for "document and content automation in pharma development," focused on regulatory submission documents rather than patient diagnosis. Taiwan's National Health Insurance Administration applied MedGemma to extract data from 30,000 pathology reports for policy analysis, not treatment decisions.

The pattern suggests institutional adoption is concentrating on administrative workflows where errors are less immediately dangerous – billing, documentation, protocol drafting – rather than direct clinical decision support, where medical AI capabilities would have the most dramatic impact on patient outcomes.

Medical AI capabilities are advancing faster than the institutions deploying them can navigate regulatory, liability, and workflow integration complexities. The technology exists. The US$20 monthly subscription provides access to sophisticated medical reasoning tools. Whether that translates to transformed healthcare delivery depends on questions these coordinated announcements leave unaddressed.

See also: AstraZeneca bets on in-house AI to speed up oncology research

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.
The post AI medical diagnostics race intensifies as OpenAI, Google, and Anthropic launch competing healthcare tools appeared first on AI News. View the full article
  14. Drug development is producing more data than ever, and large pharmaceutical companies like AstraZeneca are turning to AI to make sense of it. The challenge is no longer whether AI can help, but how tightly it needs to be built into research and clinical work to improve decisions around trials and treatment.

That question helps explain why AstraZeneca is bringing Modella AI in-house. The company has agreed to acquire the Boston-based AI firm as it looks to deepen its use of AI across oncology research and clinical development. Financial terms were not disclosed.

Rather than treating AI as a supporting tool, AstraZeneca is pulling Modella's models, data, and staff directly into its research organisation. The move reflects a broader shift in the drug industry, where partnerships are giving way to acquisitions as companies try to gain more control over how AI is built, tested, and used in regulated settings.

Why AI ownership is starting to matter in drug research

Modella AI focuses on computational analysis of pathology data, such as biopsy images, and on linking those findings with clinical information. Its work centres on making pathology more quantitative, helping researchers spot patterns that may point to useful biomarkers or guide treatment choices.

In a statement, Modella said its foundation models and AI agents would be integrated into AstraZeneca's oncology research and development work, with a focus on clinical development and biomarker discovery.

How AstraZeneca moved its AI partnership toward full integration

For AstraZeneca, the deal builds on a collaboration that began several years ago. That earlier partnership allowed both sides to test whether Modella's tools could work within the drugmaker's research environment. According to AstraZeneca executives, the experience made it clear that closer integration was needed.

Speaking at the J.P. Morgan Healthcare Conference, AstraZeneca Chief Financial Officer Aradhana Sarin described the acquisition as a way to bring more data and AI capability inside the company. "Oncology drug development is becoming more complex, more data-rich and more time-sensitive," said Gabi Raia, Modella AI's chief commercial officer, adding that joining AstraZeneca would allow the company to deploy its tools across global trials and clinical settings.

Using AI to improve trial decisions

Sarin said the deal would "supercharge" AstraZeneca's work in quantitative pathology and biomarker discovery by combining data, models, and teams under one roof. While such language reflects ambition, the practical goal is more grounded: shortening the time it takes to turn research data into decisions that affect trial design and patient selection.

One area where AstraZeneca expects AI to have an impact is in choosing patients for clinical trials. Better matching of patients to studies could improve trial outcomes and reduce costs tied to delays or failed studies. That kind of improvement depends less on complex algorithms and more on steady access to clean data and tools that fit into existing workflows.

Talent and tools move in-house

The acquisition also highlights a change in how large pharmaceutical firms think about AI talent. Rather than relying on outside vendors, companies are increasingly treating data scientists and machine learning experts as part of their core research teams. For AstraZeneca, bringing Modella's staff in-house reduces dependence on external roadmaps and gives the company more say over how tools are adapted as research needs change.
AstraZeneca said this is the first time a major pharmaceutical company has acquired an AI firm outright, though collaborations between drugmakers and technology companies have become common.

AstraZeneca joins a crowded field of pharma–AI deals

At the same healthcare conference, several new partnerships were announced, including a $1 billion collaboration between Nvidia and Eli Lilly to build a new research lab using Nvidia's latest AI chips. Those deals point to growing interest in AI across the sector, but they also underline a key difference in strategy. Partnerships can speed up experimentation, while acquisitions suggest a longer-term bet on building internal capability. For companies operating under strict regulatory rules, that control can matter as much as raw computing power.

What AstraZeneca is betting on next

Sarin described the earlier AstraZeneca–Modella partnership as a "test drive," saying the company ultimately wanted Modella's data, models, and people inside the organisation. The aim, she said, is to support the development of "highly targeted biomarkers and then highly targeted therapeutics."

Beyond the Modella deal, Sarin said 2026 is expected to be a busy year for AstraZeneca, with several late-stage trial results due across different therapy areas. The company is also working toward a target of $80 billion in annual revenue by 2030.

Whether acquisitions like this help meet those goals will depend on execution. Integrating AI into drug development is slow, expensive, and often messy. Still, AstraZeneca's move signals a clear view of where it thinks the value lies: not in buying AI as a service, but in embedding it deeply into how medicines are discovered and tested.

(Photo by Mika Baumeister)

See also: Allister Frost: Tackling workforce anxiety for AI integration success

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AstraZeneca bets on in-house AI to speed up oncology research appeared first on AI News.

View the full article
  15. Research from Cleo AI indicates that young adults are turning to artificial intelligence for financial advice to help them manage their money and develop more sustainable financial habits.

The study surveyed 5,000 UK adults aged 28 to 40 and found that the majority are saving significantly less than they would like. In this context, interest in AI-driven money management tools is rising. One in five respondents describe themselves as curious about using AI to manage their finances, while a further 12% say they are excited by the prospect.

Yet despite the interest in using AI in this context, confidence in personal financial management remains weak. More than a third of respondents (37%) report struggling with self-discipline around money, with impulse spending frequently undermining savings goals. Four in five believe they could improve their financial knowledge, pointing to a gap between intention and behaviour.

Adults aged 28 to 34 are around 15% more satisfied with their savings than those aged 35 to 40, and save around 33% more each month on average. The findings suggest that as people move through early adulthood, financial strain accumulates while access to effective, ongoing support does not increase at the same rate.

AI in money management

AI is being seen as a tool that might help people regain financial control. Many respondents express comfort with using AI for routine financial tasks. Nearly two-thirds (64%) would trust AI to advise on disposable income, while more than half would allow AI to move money to avoid overdrafts (54%) or manage regular bill payments (52%).

Cleo's CEO and founder, Barney Hussey-Yeo, says structural economic pressures are a major factor. Rising living costs, stagnant pay, low wages, and debt mean that many people are not mismanaging money so much as not having enough to make managing it worthwhile. In this context, AI tools are positioned as practical, everyday assistance that can work with highly limited funds, rather than as tools for aspirational financial planning.

Younger respondents are driving adoption. Adults aged 28 to 34 are 8% more confident than those aged 35 to 40 in using AI-powered financial tools. However, trust remains a barrier: nearly a quarter of respondents (23%) prefer to begin with limited use of the technology and need evidence of value before significant engagement.

The research also highlights the regional disparities evident in the UK. Average monthly savings in the affluent South are 26% higher than in the North. Londoners save 33% more than the national average and around £250 more per month than those in Norwich. London (£431), Brighton (£401) and Edinburgh (£386) report the highest average monthly savings, while Newcastle (£185) and Cardiff in Wales (£184.95) sit at the bottom.

Implications for fintech decision-makers

The strongest signal in this evidence is not enthusiasm for AI per se, but demand for support under financial stress. High proportions citing poor self-discipline (37%) and low confidence in financial knowledge (80%) indicate that, beyond constrained incomes, execution is the second problem.

Trust is a gating factor rather than a secondary concern. While headline willingness to delegate tasks such as overdraft avoidance is high, nearly a quarter of users want incremental proof before committing. This favours modular product design and narrowly scoped, verifiable automations (a minimal sketch of one such task follows below) rather than full automation from the outset. Evidence suggests adoption will be earned through demonstrated utility, not brand positioning.
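As an illustration of the kind of narrowly scoped task respondents say they would delegate, here is a minimal sketch of an overdraft-avoidance suggestion. The account fields, buffer, and amounts are invented, and this does not describe Cleo's actual product logic.

```python
# Illustrative logic only: thresholds and account structure are invented; a real
# feature would sit behind user consent, audit logging, and the incremental
# trust-building the research describes.
from dataclasses import dataclass

@dataclass
class Accounts:
    current_balance: float
    savings_balance: float
    upcoming_bills: float  # total due before next payday

def overdraft_top_up(acc: Accounts, buffer: float = 25.0) -> float:
    """Suggest how much to move from savings so bills clear with a small buffer."""
    shortfall = (acc.upcoming_bills + buffer) - acc.current_balance
    if shortfall <= 0:
        return 0.0
    return round(min(shortfall, acc.savings_balance), 2)

print(overdraft_top_up(Accounts(current_balance=120.0, savings_balance=400.0, upcoming_bills=180.0)))
# 85.0 - enough to cover the bills plus the buffer
```

A rule this small is easy to explain to a user, which is exactly the kind of verifiable value the trust-cautious quarter of respondents appear to want before delegating more.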
Age-related divergence within a relatively narrow cohort (28–40) is notable. The sharp drop in savings satisfaction and contributions among those aged 35–40 (the time of life when most take on more responsibilities and financial burden) suggests that fintechs targeting only young professionals might miss those with materially different needs. For older millennials, tools that address cumulative obligations (housing, dependants, legacy debt, bills) are likely to be more relevant.

Regional savings disparities are large and persistent, with London outliers (where mean income is higher) masking much weaker savings capacity elsewhere. This weakens the case for nationally uniform products. Pricing, thresholds, and nudges in the form of notifications and in-app messages may need regional calibration if products are to feel realistic outside higher-income urban centres in the South of the UK.

(Image source: "Iced tea at Georgia's" by Ed Yourdon is licensed under CC BY-NC-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Research shows UK young adults would use AI for financial guidance appeared first on AI News.

View the full article
