Everything posted by ChatGPT

  1. By December 2025, AI adoption on Wall Street had moved past experiments inside large US banks and into everyday operations. Speaking at a Goldman Sachs financial-services conference in New York on 9 December, bank executives described AI, particularly generative AI, as an operational upgrade already lifting productivity across engineering, operations, and customer service. The same discussion also surfaced a harder reality: if banks can produce more with the same teams, some roles may no longer be required at current levels once demand stabilises.

How Wall Street banks say AI is delivering results today

JPMorgan: operational gains begin to compound

Marianne Lake, chief executive of consumer and community banking at JPMorgan, said productivity in areas using AI has risen to around 6%, up from roughly 3% before deployment. She added that operations roles could eventually see productivity gains of 40% to 50% as AI becomes part of routine work. Those gains rest on deliberate choices rather than broad experimentation. JPMorgan has focused on secure internal access to large language models, targeted changes to workflows, and tight controls on how data is used. The bank has described its internal "LLM Suite" as a controlled setting where staff can draft and summarise content using large language models.

Wells Fargo: output rising ahead of staffing changes

Wells Fargo CEO Charlie Scharf said the bank has not reduced headcount because of AI so far, but noted that it is "getting a lot more done." He said management expects to find areas where fewer people are needed as productivity improves. In comments reported the same day, Scharf said the bank's internal budgets already point to a smaller workforce by 2026, even before factoring in AI's full impact. He also flagged higher severance costs, suggesting preparations for future adjustments are under way.

PNC: AI speeds up a long-running shift

PNC CEO Bill Demchak positioned AI as an accelerator rather than a new direction.
He said the bank's headcount has stayed largely flat for about a decade, even as the business expanded. That stability, he said, came from automation and branch optimisation, with AI likely to push the trend further.

Citigroup: gains in software and customer support

Citi's incoming CFO Gonzalo Luchetti said the bank has recorded a 9% productivity improvement in software development. That mirrors a broader pattern across large firms adopting AI copilots to support coding work. He also pointed to two customer service areas where AI is helping: improving self-service so fewer calls reach agents, and supporting agents in real time when customers do need to speak with a person.

Goldman Sachs: workflow changes paired with hiring restraint

According to Reuters, Goldman Sachs' internal "OneGS 3.0" programme has focused on using AI to improve sales processes and client onboarding. It has also targeted process-heavy functions such as lending workflows, regulatory reporting, and vendor management. These changes are unfolding alongside job cuts and a slower pace of hiring, linking workflow redesign directly to staffing decisions.

Where Wall Street banks see the earliest AI productivity gains

Across banks, the clearest gains are showing up in work that relies heavily on documents, follows repeatable steps, and operates within defined rules. Generative AI can shorten the time needed to search for information, summarise material, draft content, and move work through approval chains, especially when paired with structured processes and human checks.
Common areas seeing early impact include:

- Operations: drafting responses, summarising cases, and resolving exceptions more quickly
- Software development: generating code, writing tests, refactoring, and producing documentation
- Customer service: stronger self-service combined with real-time support for agents
- Sales support and onboarding: pulling data from documents, filling forms, and speeding up client setup
- Regulatory reporting: assembling narratives and evidence faster, under strict review and controls

Why governance shapes the pace of adoption

For banks, enthusiasm is not the main constraint. Control is. US regulators have long required strong oversight of models, and those expectations extend to AI systems. Guidance such as the Federal Reserve and OCC's SR 11-7 sets standards for model development, validation, and ongoing review. A 2025 report from the US Government Accountability Office noted that existing model risk management principles already apply to AI, including testing and independent oversight. In practice, this pushes banks toward designs that can be examined and traced. AI use is often limited in how independently it can act. Prompts and outputs are logged, performance is monitored for drift, and humans remain responsible for high-impact decisions such as lending, dispute handling, and official reporting.

Productivity rises, but employment questions remain

The comments from bank leaders point to a phased shift. The first phase looks like stable headcount paired with higher output as AI tools spread across teams. The second phase begins once those gains become consistent enough to influence staffing plans, through attrition, role changes, or targeted cuts. Signals from Wells Fargo around 2026 headcount planning and severance costs suggest some banks are approaching that second stage.
At a broader level, institutions such as the International Monetary Fund have warned that AI could affect a large share of jobs worldwide, with different mixes of automation and augmentation depending on role and region. The World Economic Forum's Future of Jobs Report 2025 also projects substantial job movement as companies adopt AI and adjust skill needs.

What AI means for Wall Street bank strategy beyond 2025

Banks that gain the most from AI are likely to focus on three areas at once: redesigning workflows rather than layering on chat tools, building strong data foundations, and putting governance in place that supports speed without eroding trust. Research firms argue the financial stakes are high. McKinsey estimates that generative AI could deliver between $200 billion and $340 billion in annual value for the banking sector, largely through productivity improvements.

The open question is no longer whether AI can deliver results in banking. It is how quickly banks can make those gains routine while preserving audit trails, security, and customer safeguards, and how they manage the workforce changes that follow.

(Photo by Lo Lo)

See also: BNP Paribas introduces AI tool for investment banking

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Wall Street's AI gains are here — banks plan for fewer people appeared first on AI News.

View the full article
  2. Artificial intelligence has been part of the insurance sector for years; the Finance function in many businesses is often the first to automate. But what's remarkable with AI is how directly the technology is woven into day-to-day operational work. Rather than sitting in the background as a niche modelling capability, AI is now used in the places where insurers spend most of their time and money: claims handling, underwriting, and running complex programmes. Industry giants Allianz, Zurich, and Aviva have all published evidence in the last 12 months illustrating their shift from experimentation to production-grade tools that support frontline workers in real workflows.

Simple claims: Fewer admin bottlenecks

Claims operations are a natural proving ground for AI because they combine paperwork with human judgement, usually under time pressure. Allianz describes its Insurance Copilot as an AI-powered tool that helps claims handlers automate repetitive tasks and pull together relevant information that would otherwise require multiple searches across different systems.

There's a notable change to the workflows, Allianz outlines. The Copilot starts with data gathering, summarising claim and contract details so a handler can get just the essentials, quickly. It then performs document analysis, interpreting agreements and comparing claims against policy details. The tool flags discrepancies and suggests next steps. Once the human operator has taken their decision, the Copilot assists by drafting context-aware emails.

This is the kind of daily activity insurers care about, and the tools deliver reduced turnaround time, smoother settlements, and less friction for staff and customers. Allianz also frames AI as a way to reduce unnecessary payouts by highlighting important factors adjusters might otherwise miss.
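As a rough illustration of the discrepancy-flagging step, a rules-style check comparing a claim against policy terms might look like the following. The field names are invented for the example, not Allianz's schema, and a production system would pair checks like these with model-driven document analysis:

```python
def flag_discrepancies(claim: dict, policy: dict) -> list[str]:
    """Compare claim fields against policy terms and return flags for a human handler."""
    flags = []
    if claim.get("amount", 0) > policy.get("coverage_limit", 0):
        flags.append("claim amount exceeds coverage limit")
    if claim.get("incident_type") not in policy.get("covered_incidents", []):
        flags.append("incident type not listed in policy")
    # ISO date strings compare correctly as plain strings
    if claim.get("incident_date", "") < policy.get("start_date", ""):
        flags.append("incident predates policy start")
    return flags
```

The flags are suggestions, not decisions: the handler still reviews them and decides the outcome, mirroring the human-in-the-loop approach the insurers describe.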
That has a clear impact on the company's overall bottom line.

Complex documents to usable decisions

The quality of underwriting is determined by the quality of information available. Aviva uses the example of underwriters needing to read GP medical reports. The company says it's launching an AI-powered summarisation tool that uses genAI to analyse and summarise these reports, which can run to dozens of pages of medical text, letting underwriters make faster, more informed decisions.

The immediate value here is not AI replacing the underwriter, but technology reducing the time spent reading. The insurer is explicit that underwriters will review summaries and make the final decision, not the AI. That distinction matters because underwriting is technical and sensitive; compressing documents into decision-ready summaries can speed up processing, but it also raises questions about accuracy, omissions, and auditability. Aviva addresses this by pointing to its "rigorous testing and controls": an active test phase processed around 1,000 cases before roll-out to ensure the tool met the standards required, the company says.

Uncertain contracts and servicing in multinational programmes

Commercial insurance brings its own challenges, including the complexity of working in multiple jurisdictions and the regional differences between policies and stakeholders. Zurich says generative AI's ability to process unstructured information makes multinational insurance work easier across several countries, helping it build quicker, more accurate pictures of commercial insurance offerings and simplifying submissions in different countries. Zurich also highlights contract certainty as a practical outcome: multinational programmes involve layered documents, varied local requirements, and a pervasive need for constant checking.
It says genAI helps internal experts compare, summarise, and verify coverage in a programme using the operator's native language, "in a fraction of the time" compared with the manual effort required to translate and capture the nuance of international differences. Although this area isn't customer-facing, genAI improves the company's responsiveness by letting its underwriters, risk engineers, and claims professionals work more efficiently. Zurich also refers to AI "joining up the dots", spotting trends in data that would, given the quantity of information, go unnoticed by human staff. AI amplifies its experts' judgement rather than displacing it.

The common thread: augmentation, not automation-for-automation's sake

Across these three examples, a consistent pattern emerges: AI handles the heavy lifting of reading, searching, and drafting, the high-volume tasks in insurance operations. Humans remain accountable for the consequent decisions, whether claim payments or underwriting acceptance. (Allianz describes a "human-in-the-loop" approach, and Aviva and Zurich similarly emphasise experts retaining decision-making control.) Operational control and scalability are treated as major concerns: pilots, testing, domain-by-domain tuning, and expansion into new lines of business are an integral part of the narrative.

What this means for the sector

Insurers see faster cycle times, better consistency, reduced manual work, and a path to scaling. Their challenge is implementing tools responsibly, which means secure data handling, explainability where needed, and training teams so they can question outputs appropriately. AI is becoming less of a headline in the sector and more of an everyday reality: a practical silicon colleague in the routine work of insurance profitability.

(Image source: "house fire" by peteSwede is licensed under CC BY 2.0.)
The post Ensuring effective AI in insurance operations appeared first on AI News.
  3. Big Pharma's AI race extends across drug discovery, development, and clinical trials, but AstraZeneca has distinguished itself by deploying AI clinical trials technology at an unprecedented public-health scale. While competitors optimise internal R&D pipelines, AstraZeneca's AI is already embedded in national healthcare systems, screening hundreds of thousands of patients and demonstrating what happens when AI moves from pharmaceutical labs into actual patient care.

The clinical validation backs this approach. AstraZeneca's CREATE study, presented at the European Lung Cancer Congress in March 2025, demonstrated a 54.1% positive predictive value for its AI chest X-ray tool, far exceeding the pre-defined success threshold of 20%. Behind those numbers: over 660,000 people screened in Thailand since 2022, with AI detecting suspected pulmonary lesions in 8% of cases. More critically, Thailand's National Health Security Office is now scaling this technology across 887 hospitals with a three-year budget exceeding 415 million baht. This isn't just a pilot program or proof-of-concept; it's AI clinical trials technology deployed at national healthcare system scale.

The strategic divergence in AI clinical trials approaches

The contrast with competitors is revealing. Pfizer's ML Research Hub has compressed drug-discovery timelines to approximately 30 days for molecule identification. The company used AI to develop Paxlovid in record time, with machine learning analysing patient data 50% faster than traditional methods. Pfizer now deploys AI in over half its clinical trials. Novartis partnered with Nobel Prize winner Demis Hassabis's Isomorphic Labs and Microsoft for "AI-driven drug discovery". Its Intelligent Decision System uses computational twins to simulate clinical trial processes, with AI-identified sites reportedly recruiting patients faster than traditional selection methods. Roche's "lab in a loop" strategy iterates AI models with laboratory experiments.
Having acquired Foundation Medicine and Flatiron Health, Roche built the industry's largest clinical genomic database, over 800,000 genomic profiles across 150+ tumour subtypes, targeting 50% efficiency gains in safety management by 2026.

AstraZeneca's clinical operations advantage

What sets AstraZeneca apart in AI clinical trials isn't just ambition; it's execution at scale. The company runs over 240 global trials in its R&D pipeline and has systematically embedded generative AI across clinical operations. Its "intelligent protocol tool", developed with medical writers, has reduced document authoring time by 85% in some cases. The company uses AI for 3D location detection on CT scans, slashing the time radiologists spend on manual annotation. More significantly, AstraZeneca is pioneering virtual control groups for AI clinical trials, using electronic health records and past trial data to simulate placebo arms, potentially reducing the number of patients receiving non-active treatments. This represents a fundamental rethinking of clinical trial design itself.

The lung cancer screening program exemplifies this strategic focus. Using Qure.ai's qXR-LNMS tool, AstraZeneca isn't just conducting trials; it's transforming public health infrastructure. The December 2025 expansion includes a new industrial worker screening program targeting 5,000 workers across four Thai provinces, now expanding beyond lung cancer to include heart failure detection.

The timeline acceleration race

Industry metrics show why AI clinical trials matter:

- Traditional drug development takes 10-15 years with a 90% failure rate.
- AI-discovered drugs achieve 80-90% Phase I success rates, double the 40-65% traditional benchmark.
- Over 3,000 AI-assisted drugs are in development, with 200+ AI-enabled approvals expected by 2030.
- Pfizer moves from molecule identification to clinical trials in six-week cycles.
- Novartis analyses 460,000 clinical trials in minutes versus months.
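Positive predictive value, the headline metric of the CREATE study mentioned earlier, is a simple ratio of confirmed findings to everything the tool flags. A quick sketch, with purely illustrative counts:

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = TP / (TP + FP): of all cases the tool flags, the share that are confirmed."""
    return true_positives / (true_positives + false_positives)

# Illustrative numbers only: 541 confirmed findings out of 1,000 flagged cases
# gives a PPV of 0.541, i.e. the 54.1% figure reported for the chest X-ray tool.
ppv = positive_predictive_value(541, 459)
```

A 20% success threshold, by contrast, would be met with just one confirmed finding in every five flags, which is why a 54.1% result so clearly clears the bar.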
Yet AstraZeneca's model delivers immediate patient impact, detecting cancers today in underserved populations, often before symptoms appear.

The US$410 billion question

The World Economic Forum projects AI could generate US$350-410 billion annually for pharma by 2030. The question is which approach captures more value: faster drug discovery or more efficient clinical operations? Pfizer's bet on computational drug design and Novartis's AI-powered trial site selection may yield breakthrough molecules. Roche's integrated pharma-diagnostics model creates a proprietary data moat. But AstraZeneca's strategy of embedding AI throughout clinical-trial operations, from protocol generation to patient recruitment to regulatory submissions, is demonstrably reducing time-to-market while building real-world evidence at scale.

The company's partnership approach is equally distinctive. While others acquire AI companies or build internal hubs, AstraZeneca collaborates with technology partners like Qure.ai and Perceptra, regulatory bodies, and national health systems to deploy AI clinical trials where infrastructure gaps exist. As AstraZeneca pursues its 2030 goal of delivering 20 new medicines and reaching US$80 billion in revenue, its AI clinical trials advantage isn't just about speed; it's about proving AI's value in the most regulated, risk-averse phase of pharmaceutical development.

While competitors race to discover the next breakthrough molecule, AstraZeneca is reengineering how clinical trials themselves are conducted. The winner may not be determined by who builds the most sophisticated algorithm, but by who deploys AI clinical trials technology where it demonstrably improves patient outcomes: at scale, under regulatory scrutiny, and within real healthcare systems. And in that race, AstraZeneca currently leads.

(Photo by AstraZeneca)

See also: Google AMIE: AI doctor learns to 'see' medical images
The post AstraZeneca leads big pharma's AI clinical trials revolution with real-world patient impact appeared first on AI News.
  4. Roblox is often seen as a games platform, but its day-to-day reality looks closer to a production studio. Small teams release new experiences on a rolling basis and then monetise them at scale. That pace creates two persistent problems: time lost to repeatable production work, and friction when moving outputs between tools. Roblox's 2025 updates point to how AI can reduce both, without drifting away from clear business outcomes.

Roblox keeps AI where the work happens

Rather than pushing creators toward separate AI products, Roblox has embedded AI inside Roblox Studio, the environment where creators already build, test, and iterate. In its September 2025 RDC update, Roblox outlined "AI tools and an Assistant" designed to improve creator productivity, with an emphasis on small teams. Its annual economic impact report adds that Studio features such as Avatar Auto-Setup and Assistant already include "new AI capabilities" to "accelerate content creation". The language matters: Roblox frames AI in terms of cycle time and output, not abstract claims about transformation or innovation. That framing makes it easier to judge whether the tools are doing their job.

One of the more practical updates focuses on asset creation. Roblox described an AI capability that goes beyond static generation, allowing creators to produce "fully functional objects" from a prompt. The initial rollout covers selected vehicle and weapons categories, returning interactive assets that can be extended inside Studio. This addresses a common bottleneck: drafting an idea is rarely the slow part; turning it into something that behaves correctly inside a live system is. By narrowing that gap, Roblox reduces the time spent translating concepts into working components.

The company also highlighted language tools delivered through APIs, including Text-to-Speech, Speech-to-Text, and real-time voice chat translation across multiple languages.
These features lower the effort required to localise content and reach broader audiences. Similar tooling plays a role in training and support in other industries.

Roblox treats AI as connective tissue between tools

Roblox also put emphasis on how tools connect to one another. Its RDC post describes integrating the Model Context Protocol (MCP) into Studio's Assistant, allowing creators to coordinate multi-step work across third-party tools that support MCP. Roblox points to practical examples, such as designing a UI in Figma or generating a skybox elsewhere, then importing the result directly into Studio. This matters because many AI initiatives slow down at the workflow level. Teams spend time copying outputs, fixing formats, or reworking assets that do not quite fit. Orchestration reduces that overhead by turning AI into a bridge between tools, rather than another destination in the process.

Linking productivity to revenue

Roblox ties these workflow gains directly to economics. In its RDC post, the company reported that creators earned over $1 billion through its Developer Exchange programme over the past year, and it set a goal for 10% of gaming content revenue to flow through its ecosystem. It also announced an increased exchange rate so creators "earn 8.5% more" when converting Robux into cash. The economic impact report makes the connection explicit. Alongside AI upgrades in Studio, Roblox highlights monetisation tools such as price optimisation and regional pricing. Even outside a marketplace model, the takeaway is clear: when AI productivity is paired with a financial lever, teams are more likely to treat new tooling as part of core operations rather than an experiment.

Roblox uses operational AI to scale safety systems

While creative tools attract attention, operational AI often determines whether growth is sustainable.
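Chat-safety systems like these are typically evaluated on recall at a fixed false-positive rate. The definitions are simple ratios over a labelled test set; this is a generic illustration of the metrics, not Roblox's evaluation code:

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Recall = TP / (TP + FN): share of genuine violations the classifier catches."""
    return true_positives / (true_positives + false_negatives)

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): share of benign messages wrongly flagged."""
    return false_positives / (false_positives + true_negatives)
```

The two numbers trade off against each other via the classifier's decision threshold, which is why a high recall figure is only meaningful when quoted alongside its false-positive rate.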
In November 2025, Roblox published a technical post on its PII Classifier, an AI model used to detect attempts to share personal information in chat. Roblox reports handling an average of 6.1 billion chat messages per day, and says the classifier has been in production since late 2024, with a reported 98% recall on an internal test set at a 1% false positive rate. This is a quieter form of efficiency. Automation at this level reduces the need for manual review and supports consistent policy enforcement, which helps prevent scale from becoming a liability.

What carries across

Several patterns stand out:

- Put AI where decisions are already made. Roblox focuses on the build-and-review loop, rather than inserting a separate AI step.
- Reduce tool friction early. Orchestration matters because it cuts down on context switching and rework.
- Tie AI to something measurable. Creation speed is linked to monetisation and payout incentives.
- Keep adapting the system. Roblox describes ongoing updates to address new adversarial behaviour in safety models.

Roblox's tools will not translate directly to every sector, but the underlying approach will. AI tends to pay for itself when it shortens the path from intent to usable output, and when that output is clearly connected to real economic value.

(Photo by Oberon Copeland @veryinformed.com)

See also: Mining business learnings for AI deployment

The post Roblox brings AI into the Studio to speed up game creation appeared first on AI News.
  5. AI search engines and generative AI tools are transforming how people discover information online. Far from making SEO specialists obsolete, the shift highlights why skilled human optimisers remain more important than ever. As generative AI search tools reshape the digital landscape, many wonder whether traditional SEO has reached its end. Despite AI's growing influence, the fundamentals of discoverability, authority, clarity, and trust remain vital. The rise of AI-powered search makes the role of SEO specialists more important than before, ensuring content not only reaches audiences but resonates authentically. A digital environment shaped by automation still depends heavily on human expertise, especially when credibility is at stake.

Why AI expands rather than replaces the work of SEO specialists

The assumption that AI will replace SEO specialists ignores how modern search works. AI expands what optimisation can achieve, but it does not eliminate the need for skilled human oversight. Traditional ranking signals like site structure, depth of information, internal linking, and authority continue to influence visibility, even in AI-driven answer engines. This means content must still be crafted with intent, clarity, and expertise. AI-powered search engines prioritise well-structured, trustworthy information, reinforcing the importance of thoughtful optimisation.

In this new landscape, SEO spans two parallel environments: classic search engines and AI-generated answer systems. Competing effectively in both requires strategic thinking, careful content planning, and a nuanced understanding of user intent: areas where human specialists excel, beyond what AI can replicate. Agencies like Brath, which presents SEO as one of the most profitable long-term marketing channels when executed with technical precision and strategic insight, clearly understand this reality.
Its service model remains grounded in proven disciplines: technical improvements, keyword strategy, content refinement, and authority building. Rather than leaning on automation, it demonstrates that sustainable SEO relies on human expertise, methodical analysis, and consistent optimisation.

Human insight still drives strategy in an AI-powered search world

AI excels at processing data, but it still falls short when deeper context, cultural awareness, and nuanced decision-making are required. SEO is more than a technical exercise; it is a strategic craft rooted in understanding people: their motivations, expectations, and the subtle differences in how they search for answers. The human dimension is something AI cannot replicate. Specialists bring an understanding of why search behaviour shifts, how tone influences trust, and which narratives resonate in different industries. They also grasp how economic cycles, news events, and regulations affect user queries and engagement patterns. AI can identify patterns, but it cannot interpret them fully or determine which insights align with brand values or long-term goals.

The increasing emphasis on E-E-A-T signals (Experience, Expertise, Authoritativeness, and Trustworthiness) further reinforces this need for human guidance. An algorithm cannot fabricate authentic authority; it must be demonstrated through lived experience, credibility, and careful communication. SEO specialists ensure these qualities are baked into every page and every message, guiding brands through an increasingly complex search environment where trust matters more than ever.

How SEO specialists use AI to deliver faster, smarter optimisation

Where AI shines is in support, not replacement. Modern SEO specialists use AI tools to automate repetitive tasks, accelerate data analysis, and uncover patterns at scale. AI can rapidly produce initial keyword lists, identify technical issues, or generate draft structures for content.
These capabilities shorten production timelines and free specialists to focus on the strategic, creative, and high-value decisions that define effective optimisation. This balanced approach enables better outcomes: AI handles the heavy lifting, while human specialists interpret the results, refine the strategy, and ensure everything aligns with the brand's goals, tone, and audience requirements. It is this combination of speed and discernment that shapes the future of SEO.

The creative and analytical skills AI cannot replicate

Even the most advanced AI tools cannot fully replicate creativity, intuition, or emotional intelligence. SEO specialists draw on analytical skills and creative insight to produce content that stands out in competitive digital markets. AI can propose phrasing, but it cannot determine when a message needs personality, authority, or nuance to feel trustworthy. In industries where reputation matters, like fintech, finance, healthcare, law, and education, human-shaped content is essential. Users want reassurance that the guidance they read comes from people with real expertise. AI-generated content often lacks the depth or subtlety needed to build that trust.

Human specialists also excel at interpreting data in context. They understand how search intent connects to behaviour, how content influences perception, and how messaging supports or undermines brand identity. Their ability to connect analytical insights with emotional resonance is something AI cannot achieve.

Building the next generation of SEO careers in an AI-enhanced industry

AI is not shrinking the field of SEO; it is expanding it. The next generation of SEO professionals will blend human-centred strategy with AI-assisted efficiency. Modern SEO requires fluency in multiple areas: data interpretation, UX thinking, structured content, behavioural insights, content design, and now AI tool management. Specialists who embrace both disciplines will lead the industry forward.
Their work will become more strategic, more aligned with long-term business value, and more influential as search becomes increasingly complex and multi-modal. AI may automate tasks, but strategy, creativity, judgement, and trust-building remain human strengths. And as AI transforms how search tools interpret information, those strengths will only become more essential. SEO is not disappearing. It is evolving into a richer, more strategic, and more human discipline.

(Image source: Pexels)

The post What AI search tools mean for the future of SEO specialists appeared first on AI News.
  6. Mining conglomerate BHP describes AI as the way it’s turning operational data into better day-to-day decisions. A blog post from the company highlights the analysis of data from sensors and monitoring systems to spot patterns and flag issues in plant machinery, giving decision-makers options that can improve efficiency and safety, and reduce environmental impact. For business leaders at BHP, the useful question was not “Where can we use AI?” but “Which decisions do we make repeatedly, and what information would improve them?”

Portfolio, not showcase

BHP describes the end-to-end effects of AI on operations, or as it puts it, “from mineral extraction to customer delivery.” Leaders decided to move beyond pilot rollouts, treating AI as an operational capability. It started with a small set of problems that affected the company’s performance; places where change could be measured in results. The company found it could avoid unplanned downtime of machinery, and it tightened its energy and water use. Each use case addressed a small but impactful problem, and each was given an owner and an accompanying KPI. Results were reviewed with the same regularity used for other operational performance monitoring across the company.

Where BHP uses AI daily

Beyond its specific focus on areas such as predictive maintenance and energy optimisation, BHP also considered AI for more adventurous yet important operations such as autonomous vehicles and real-time staff health monitoring. These categories translate well to other asset-heavy environments across logistics, manufacturing, and heavy industry.

Predictive maintenance

Predictive maintenance plans repairs in scheduled downtime to reduce unexpected failures and costly, unplanned stoppages. Here, AI models analyse equipment data from on-board sensors to anticipate maintenance needs. This cuts the number of breakdowns and reduces equipment-related safety incidents.
BHP runs predictive analytics across most of its load-and-haul fleets and its materials handling systems. A central maintenance centre provides real-time and longer-range indications of machine health and potential failure or degradation. Prediction has become an integral part of its machinery-heavy operations; previously, such information arrived as ‘just another’ report, one that could get lost in the bureaucracy of the company. The company models and defines thresholds that trigger actions sent directly to maintenance-planning teams.

Energy and water optimisation

Deploying AI in this way at its Escondida facilities in Chile, the company reports savings of more than three gigalitres of water and 118 gigawatt-hours of energy in two years, attributing the gains directly to AI. The technology gives operators real-time options and analytics that identify anomalies and automate corrective actions at multiple facilities, including concentrators and desalination plants. The lesson it has learned is to place AI where decisions happen: when operators and control teams can act on recommendations in real time, improvements compound. Conversely, with periodic reporting, decisions are taken only if staff both see the results in the data and then decide action is necessary. The real-time nature of the analysis and the use of triggers-to-action mean the difference becomes apparent quickly.

Autonomy and remote operations

BHP is also using more advanced technologies like AI-supported autonomous vehicles and machinery. These are higher-risk areas, and the technology has been found to reduce worker exposure to risk and cut the human-error factor in incidents. At the company, complex operational data flows through regional centres from remote facilities; without AI and analytics, staff would not be able to optimise every decision in the way the software achieves.
The use of AI-integrated wearables is increasing in many industries, including engineering, utilities, manufacturing, and mining. BHP leads the way in protecting its staff, who often work in very challenging conditions. Wearables can monitor personal condition, reading heart rate and fatigue indicators, and provide real-time alerts to supervisors. One example is the ‘smart’ hard-hat sensor technology used by BHP at Escondida, which measures truck-driver fatigue by analysing drivers’ brain waves.

A plan leaders can run

Regardless of industry, decision-makers can draw lessons from BHP’s experience of deploying AI at the (literal) coal-face. The following plan could help leaders shape their own strategies for applying AI to operational problem areas:

  • Choose one reliability problem and one resource-efficiency problem that operations teams already track, then attach a KPI to each.
  • Map the workflow: who will see the output, and what action can they take?
  • Put basic governance in place for data quality and model monitoring, then review performance alongside operational KPIs.
  • Start with decision support in higher-risk processes, and automate only after teams validate controls.

(Image source: “Shovel View at a Strip Mining Coal” by rbglasson is licensed under CC BY-NC-SA 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Mining business learnings for AI deployment appeared first on AI News. View the full article
  7. BNP Paribas is testing how far AI can be pushed into the day-to-day mechanics of investment banking. According to Financial News, the bank has rolled out an internal tool called IB Portal, designed to help bankers assemble client pitches more quickly and with less repetition. Pitch preparation sits at the centre of investment banking work. Teams pull together market views, deal history, and tailored narratives under tight timelines. Much of that effort repeats work that already exists elsewhere in the organisation. Slides, charts, and precedent analysis are often rebuilt from scratch, even when similar material has been used before by another team or office. IB Portal is meant to reduce that waste. The system searches BNP Paribas’s past pitch materials and uses what the bank describes as “smart prompts” to surface relevant slides, analysis, and supporting content for a new mandate. George Holst, head of the corporate clients group at BNP Paribas, said the tool functions like an AI-powered search engine that helps bankers find what matters ahead of a pitch or client meeting. In his words, it can cut research time by days, giving teams more room to focus on strategy and client judgement. The use case matters because it places AI inside real, constrained workflows rather than around them. Pitch decks are not generic documents. They reflect internal viewpoints, client-specific details, and regulatory requirements. Making an AI tool useful in this setting depends less on conversational flair and more on structure. That includes deciding which materials are searchable, setting clear access controls across regions and business lines, and defining how retrieved content moves from internal draft to client-ready output. In practice, that also means traceability. Bankers need to see where information comes from, and anything produced by the system still needs human review before it leaves the firm.
Without those checks, the risk of errors or inappropriate disclosure rises quickly.

BNP Paribas builds AI tools on internal platforms

The portal also fits into a broader internal build-out at BNP Paribas. In June 2025, the bank outlined an “LLM as a Service” platform aimed at giving its business units shared access to large language models in the group’s own infrastructure. The platform is run by internal IT teams and hosted in BNP Paribas data centres with dedicated GPU capacity. The bank said it supports a mix of models, including open-source options and systems from Mistral AI, with plans to add models trained on internal data. Intended use cases include internal assistants, document drafting, and information retrieval. Other large banks are taking a similar approach. JPMorganChase has pointed to growing use of its internal “LLM Suite”, which provides staff access to models in a controlled environment. Reuters has reported on Goldman Sachs’s investment in AI engineering and its rollout of a proprietary “GS AI Assistant”. UBS has discussed an internal M&A “co-pilot” used for idea generation. Alongside these in-house efforts, specialist tools like Rogo have found traction at firms including Nomura and Moelis, pointing to demand for finance-specific AI tools. For BNP Paribas, the real test is whether IB Portal becomes part of everyday work rather than a one-off experiment. The potential benefits are straightforward: less time spent searching, fewer duplicated decks, and better reuse of institutional knowledge. The risks are just as familiar. Hallucinated data, unclear sources, and accidental exposure of sensitive information all carry real consequences in banking. The most stable deployments keep AI tightly constrained. That usually means grounding outputs in approved internal content, applying role-based access controls, recording how tools are used, and requiring human sign-off before anything reaches a client.
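The constraints that recur across these deployments (grounding results in approved internal content, role-based access controls, and an audit trail) can be sketched in miniature. This is an illustrative pattern only; the class and field names are invented for the example, not BNP Paribas’s implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Slide:
    slide_id: str
    region: str
    business_line: str
    text: str

@dataclass
class PitchSearch:
    approved_slides: list[Slide]                       # grounding corpus
    audit_log: list[tuple[str, str]] = field(default_factory=list)

    def search(self, user_region: str, user_line: str, query: str) -> list[Slide]:
        # 1. Role-based filter: only content this banker is entitled to see.
        visible = [s for s in self.approved_slides
                   if s.region == user_region and s.business_line == user_line]
        # 2. Ground results in approved content only (naive keyword match here).
        hits = [s for s in visible if query.lower() in s.text.lower()]
        # 3. Record usage so every retrieved slide is traceable.
        self.audit_log.extend((query, s.slide_id) for s in hits)
        return hits
```

A production system would replace the keyword match with semantic retrieval and keep the audit log in durable storage, but the order of operations stays the same: filter by entitlement first, search only approved content, log every hit.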
If IB Portal operates within those boundaries, it offers a practical view of how enterprise AI is taking shape: not as a source of instant answers, but as a faster and safer way to navigate what an organisation already knows. (Photo by Enrico Frascati) See also: CEOs still betting big on AI: Strategy vs. return on investment in 2026 The post BNP Paribas introduces AI tool for investment banking appeared first on AI News. View the full article
  8. JPMorgan Chase’s AI strategy is delivering measurable returns – but at a human cost. The bank isn’t hiding the fact. With 200,000 employees now using its proprietary LLM Suite platform daily and AI benefits growing 30-40% annually, America’s largest bank is executing what Chief Analytics Officer Derek Waldron calls a plan to create the world’s first “fully AI-connected enterprise.” What infrastructure backs this transformation? A US$18 billion annual technology budget, 450+ AI use cases in production, and a platform that won American Banker’s 2025 Innovation of the Year Grand Prize. But JPMorgan’s candour about workforce displacement – operations staff projected to fall at least 10% – reveals the complexity of enterprise AI beyond the promotional headlines.

LLM Suite: From zero to 200,000 users in eight months

Released in summer 2024, LLM Suite reached 200,000 users in eight months through an opt-in strategy that created what Waldron describes as “healthy competition”, driving viral adoption. This isn’t just a chatbot: LLM Suite functions as a “full ecosystem” connecting AI to firm-wide data, applications, and workflows. The model-agnostic architecture integrates OpenAI and Anthropic models, with updates every eight weeks. Investment bankers create five-page decks in 30 seconds – work that previously took junior analysts hours. Lawyers scan and generate contracts. Credit professionals extract covenant information instantly. Call centre tool EVEE Intelligent Q&A improved resolution times through context-aware responses. “A little under half of JPMorgan employees use gen AI tools every single day,” Waldron told McKinsey in October 2025. “People use it in tens of thousands of ways specific to their jobs.”

JPMorgan Chase AI strategy delivers 30-40% annual ROI growth

JPMorgan tracks ROI at the individual initiative level – not platform-wide vanity metrics. Since inception, AI-attributed benefits have grown 30-40% year-over-year.
The strategy combines top-down focus on transformative domains (credit, fraud, marketing, operations) with bottom-up democratisation, letting employees innovate within their job families. McKinsey’s Kevin Buehler estimates US$700 billion in potential banking cost savings industry-wide, but much of it will be “competed away” to customers. Industry return on tangible equity could drop one to two points, while AI pioneers could see four-point increases versus slow movers. Waldron acknowledges productivity gains don’t automatically translate to cost reductions: “An hour saved here and three hours there may increase individual productivity, but in end-to-end processes these snips often just shift bottlenecks.”

Operations staff to fall 10% as AI agents take complex tasks

JPMorgan’s consumer banking chief announced operations staff would decline at least 10% as the bank deploys “agentic AI” – autonomous systems handling multi-step tasks. The bank is building AI agents that execute cascading actions independently. Waldron demonstrated to CNBC how the system creates investment banking presentations in 30 seconds and drafts confidential M&A memos. AI favours client-facing roles – private bankers, traders, investment bankers. At risk: operations staff handling account setup, fraud detection, and trade settlement. New job categories are emerging: “context engineers” ensuring AI systems have proper information, knowledge-management specialists, and upskilled software engineers building agentic systems. Stanford researchers analysing ADP data found early-career workers (ages 22-25) in AI-exposed occupations saw a 6% employment decline from late 2022 to July 2025.

Shadow IT, trust, and the “value gap” problem

JPMorgan’s transparency extends to acknowledging significant execution risks. Without enterprise-grade tools, employees might use consumer-grade AI – exposing sensitive data. JPMorgan built an in-house system for security and control.
When AI performs correctly 85-95% of the time, human reviewers may stop checking carefully, and the error rate compounds at scale. “When an agentic system does a cascading series of analyses independently for a long time, it raises questions about how humans can trust that,” Waldron told McKinsey. Many enterprises face “proof-of-concept hell” – numerous pilots that never reach production because they underestimate integration complexity. “There is a value gap between what the technology is capable of and the ability to fully capture that in an enterprise,” Waldron told CNBC. Even with US$18 billion, full realisation takes years.

The JPMorgan playbook: What enterprises can learn

JPMorgan’s approach offers replicable principles despite its scale advantages:

  • Democratise access but mandate nothing – the opt-in strategy created viral adoption.
  • Build for security first, particularly in regulated industries.
  • Implement model-agnostic architecture to avoid vendor lock-in.
  • Combine top-down transformation with bottom-up innovation.
  • Segment training by audience.
  • Track ROI with discipline at the initiative level.
  • Acknowledge complexity and plan accordingly – JPMorgan took over two years to build LLM Suite.

Not every enterprise has US$18 billion for technology or 200,000 employees. But the core principles – democratisation, security-first architecture, avoiding vendor lock-in, and financial discipline – apply across industries and scales.

Transformation with eyes wide open

JPMorgan Chase’s AI strategy represents enterprise AI’s most transparent case study – complete with industry-leading adoption metrics, measurable ROI growth, and unflinching acknowledgement of workforce displacement. The bank’s success factors are clear: massive capital investment, model-agnostic infrastructure, democratised access paired with financial discipline, and realistic timelines.
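The compounding-error point above is easy to make concrete. If each step of a cascading agentic workflow is right 85-95% of the time, and errors are independent (a simplifying assumption for illustration), end-to-end reliability falls quickly with the number of steps:

```python
def chain_reliability(per_step: float, steps: int) -> float:
    """Probability that every step in a cascading workflow succeeds,
    assuming independent errors at each step."""
    return per_step ** steps

# At 95% per-step accuracy, a ten-step cascade succeeds only ~60% of the time;
# at 85% per step, it succeeds less than 20% of the time.
```

Which is why long-running autonomous chains raise the trust questions Waldron describes, even when each individual step looks reliable.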
But Waldron’s candour about trust challenges, the “value gap” between capability and execution, and the multi-year journey ahead suggests that even US$18 billion and 200,000 engaged employees don’t guarantee seamless transformation. For enterprises evaluating their AI strategies, JPMorgan’s lesson isn’t that scale solves everything – it’s that honest assessment of both opportunities and execution risks separates genuine transformation from expensive experimentation. The question isn’t whether JPMorgan’s AI strategy is working. It’s whether the 10% workforce reduction and years-long complexity represent acceptable trade-offs for 30-40% annual benefit growth – and how many other enterprises can afford to find out. Editor’s note: The analysis draws from McKinsey’s October 2025 interview with Derek Waldron and Kevin Buehler, CNBC’s September 2025 exclusive demonstration of LLM Suite, American Banker’s June 2025 Innovation of the Year coverage, and Stanford University research on AI employment effects. See also: Walmart and Amazon drive retail transformation with AI The post JPMorgan Chase AI strategy: US$18B bet paying off appeared first on AI News. View the full article
  9. Across the US, workers are experiencing a seismic shift in workplace operations as AI literacy becomes a core part of business strategy. This is redefining roles and expectations, even as workloads increase and pressure intensifies. As the employment landscape transforms, it has become clear that the future of work and talent will be defined by three main areas – continuous education, greater flexibility, and AI literacy. That is according to new research conducted by The Harris Poll, which found that employers who invest heavily in educational benefits and AI development gain key advantages. Priya Krishnan, Chief Transformation Officer at Bright Horizons, said: “AI is rewriting job descriptions fast… employers who invest in education benefits and AI training now will build resilient, innovative teams.”

AI skills demand heightens with workloads

According to the 2025 EdAssist by Bright Horizons Education Index, 42% of US employees expect AI to significantly change their role over the next year, even though only 17% use AI frequently. However, the report revealed that the increased adoption of AI is not the key talking point – it’s the urgent skills mandate AI has triggered. Workers now feel they have to evolve to remain competitive, with 32% feeling increased pressure to learn new skills because of AI, up from 26% in 2024. The demand for upskilling comes as workforce stress hits new heights: 81% of employees report being pushed to take on greater workloads, and 80% say they are expected to deliver work faster. AI can reduce some of this strain, but without proper guidance and training, employees will struggle to use it effectively. The report found that adoption of AI rises to 76% when employers provide AI training, while workers with access to training (84%) feel more prepared for potential changes than those without (48%).
Essentially, effective training turns uncertainty into confidence. Still, 34% of those surveyed said they feel unprepared for changes driven by AI, and 42% said their employers expected them to develop their understanding of AI alone, without any formal training. “AI is rewriting job descriptions faster than most organisations can keep up,” said Priya Krishnan, Chief Transformation Officer at Bright Horizons. “Employers who act now will not only close important skill gaps but also build a culture of resilience and innovation. This is not about chasing trends but creating a workforce that thrives in a world where technology and human capability advance together. Education benefits, flexible learning, and AI literacy are the foundation for long-term competitiveness.”

Education benefits drive retention and readiness

The EdIndex highlights how investing in employee education can pay off, both for employees and employers. However, challenges remain, a key concern being financial barriers. According to the report, the skills gap is not simply about technology; it’s about “access and affordability.” 48% of employees surveyed said they avoid further education, citing the “fear of student debt” as the main reason, and 34% reported that they cannot afford any more student loan debt, curtailing career advancement and blocking opportunities. This underscores the need for affordable, employer-sponsored learning programmes that give employees the chance to progress. Employees highly value education support: 85% report that they would be more loyal to employers that invest in continuing education, 86% would choose a job that offers such opportunities over one that doesn’t, 82% believe employer support for learning is crucial, and 29% say it is essential for their job.
In addition, 74% said they would be more loyal to an employer that helped pay off their student loans, further emphasising how removing financial obstacles drives loyalty and retention, as well as ROI for employers. Over half (55%) of respondents are more likely to remain with a company if AI training or certification is available, and 76% actively use AI when training has been provided, compared to only 25% without formal training.

2026 predictions

Looking ahead to 2026, the report highlights five key shifts and how employers can start preparing now. First, the report predicts AI literacy will become essential in the workplace as automation affects almost all jobs. Companies therefore need to develop AI training programmes and ongoing support that enable employees to feel confident when using new tools. Second, upskilling is set to give employers and employees a significant competitive advantage. Organisations that invest heavily in both technical and soft skills will be better equipped for innovation and adaptation going forward. Third, flexible education benefits will be key to employee retention. Workers increasingly seek employers who provide financial support and remove barriers that limit their learning, so those who offer “perks” like debt-free programmes and tuition support will be more likely to attract and retain talent. The fourth major prediction is that continuous learning will become a core component of workplace environments. With roles evolving, the report says employers must set clear career pathways with personalised learning plans that help their employees grow. Finally, work-life balance is expected to play a huge role. Companies that blend education benefits with flexibility and well-being support that reduces stress have a better chance of maintaining a loyal workforce.
Pressure remains

Although career satisfaction appears high at 87%, workers are under continuing pressure, with 78% feeling pushed to perform tasks outside their main role. As a result, priorities are shifting quickly. While earning a raise remains important, with 57% naming it a key priority, work-life balance (43%) and stress reduction (35%) are surging, rising sharply from 2024’s figures of 35% and 29% respectively. And while 90% say they feel at least somewhat prepared as their job role evolves, that confidence drops when AI enters the equation: just 66% feel ready for AI’s impact. AI may be easing some areas of work, but its presence is also raising workplace pressure. The 2025 EdAssist report suggests, “The employers who act now by embedding skill development opportunities, AI training, short-form learning, and debt-conscious benefits into their talent strategy will future-proof their business and avoid being left behind as the competition for skills intensifies.” Only when employers take these steps can they build a workforce capable of sustaining long-term growth and navigating the changes that lie ahead. (Image source: “HMS St Albans’ Ship’s Company conduct a training day in the Plymouth exercise areas” by Royal Navy Media Archive is licensed under CC BY-NC 2.0.) The post AI literacy and continuous education are cornerstones appeared first on AI News. View the full article
  10. The construction industry generates colossal amounts of data, much of it unused or locked in spreadsheets. AI is now changing this, enabling teams to accelerate decision-making, improve margins, and deliver better project outcomes. According to new research from Dodge Construction Network (Dodge) and CMiC, contractors see AI as genuinely transformative, with 87% believing AI will “meaningfully transform their business,” despite currently low adoption rates. The research, titled ‘AI for Contractors,’ found that automated proposal generation and progress tracking from site photos both reached a 92% effectiveness rating, while contract risk review achieved 85% effectiveness when compared with previous, more traditional methods. The report highlights how AI is allowing project managers to focus on strategic decisions rather than time-consuming administrative tasks. Finance teams are also benefiting, shifting from historical reporting to predictive insights, while operations leaders can apply data-driven intelligence for improved project delivery. Rather than fully replacing human expertise, the report found AI actually enhances human input. “For decades, construction firms have lacked the tools to transform the data they’ve collected into action. AI-enabled solutions are changing that,” says Gord Rawlins, president and CEO of CMiC. “This research highlights the high-impact results contractors are achieving today.”

AI changing contractor roles

Surveyed contractors see AI as a catalyst reshaping everyday aspects of their operations, enabling predictive insights rather than reactions to problems after they occur. This brings wider benefits, like tighter cost controls, improved scheduling, and higher-quality project delivery – in other words, better overall outcomes.
A substantial 85% of contractors foresee less time spent on repetitive tasks, while 75% believe AI can help mine historical data to learn from previous projects. Rather than relying fully on AI, 70% said the technology helps them make better, more informed decisions thanks to insights that might otherwise go unnoticed.

AI implementation remains low, but companies are preparing for wider adoption

Currently, AI adoption in the construction industry is low, despite awareness levels of 32% to 34%. This appears to be for several reasons, including a lack of clear understanding, internal approvals, and software access. However, Dodge’s research found that more than half of the companies surveyed are strategically preparing for AI with pilot programmes and staff training for AI-related positions. According to the report, 40% of companies have a set budget for AI, 38% are developing teams for implementation, 19% are adapting old workflows, and 51% are assessing AI changes.

Early adopters lead the way

Overall awareness of AI use in the industry is quite low, with just 20% to 50% of contractors knowing that certain management tasks already use AI, and very few actively using these functions. Nevertheless, early adopters provided positive feedback: more than 70% said AI tools are hugely effective compared with traditional methods, suggesting potential for rapid growth in AI use across the industry.

Security and accuracy lead concerns

The main concerns about adopting AI revolve around security and accuracy. The report reveals that 57% are worried about the accuracy of AI output, while 54% have doubts over the security of company data. Internal resistance to change (44%) and implementation costs (41%) are also cited as key concerns, but perhaps surprisingly, just 21% expressed concern over job losses, and 31% believe current data quality is not yet adequate to support AI analysis.
According to the report, larger contractors are likely to rely more on AI than smaller firms, and are therefore more concerned about data quality and reliability. For instance, 69% of larger contractors cited the reliability or accuracy of AI outputs as a major concern, compared to 54% of smaller or mid-size contractors. The research data confirms that contractors are generally open to adopting AI, but doubts about the accuracy of AI outputs tend to stand in the way, alongside the desire for better tools, more information, and greater internal support. 17% of contractors said they do not sufficiently trust AI results, an issue that becomes more pronounced in sensitive areas: distrust rises to 35% for AI handling payments, and 31% do not have faith in AI managing project budgets. A major theme is the need for more understanding before using AI. On average, 21% of respondents said they want better insight into how AI works before considering using it, climbing to 31% for more complex tasks like safety risk assessments. Contractors also believe they are limited by their current software capabilities, with an average of 19% reporting their software does not offer the AI functions they require; this increases to 33% for managing resources. Internal approval remains a notable obstacle, with 22% saying their company has not yet approved the use of AI, despite personal interest. Another barrier is a lack of time or resources to effectively evaluate AI tools, with 13% citing this as a main reason AI has not yet been adopted. Although there are obvious challenges to mass AI adoption in the construction industry – and therefore significant market opportunity – only 5% believe AI would not be beneficial or improve current methods. That indicates resistance stemming from specific concerns rather than from a lack of perceived value. Steve Jones, Senior Director, Industry Insights Analytics at Dodge, spoke on the findings.
“We designed this study to look at the use of AI in the digital tools already deployed by contractors because that may offer the best solution to the challenge of data quality. But it is also heartening to see that many contractors are aware of the key challenges and the need for a rigorous approach to successfully implementing these tools at their organisations,” Jones said.

Key interest in emerging AI functionalities

AI’s potential is clearly recognised, even if the industry’s readiness to adopt it doesn’t yet match the interest. Certain AI functions are attracting the most attention: 81% see potential benefits in automated construction analysis, 80% show interest in intelligent permit submissions, 79% believe in autonomous schedule and resource optimisation, 92% appreciate automated contract management, and 76% recognise potential in AI-powered dynamic pricing. Although AI adoption remains limited, these strong numbers suggest the tide may soon be turning.

AI and the new age of the construction industry

The latest data suggests a strong openness, maybe even an eagerness, towards AI adoption in the construction sector. However, better tools, clearer guidance, and more trustworthy outputs are among the areas that need to be addressed before interest becomes implementation. “With high awareness, strong interest, and powerful validation from early adopters, contractors appear poised for significant expansion in their use of AI-enabled tools in meaningful ways,” said Steve Jones. The industry is at a “tipping point for AI adoption,” according to Jones. When companies start to provide clearer pathways for adoption, the move towards AI-powered construction workflows will accelerate rapidly, reshaping how projects are delivered. (Image source: “Tianjin Construction Site.” by @yakobusan Jakob Montrasio is licensed under CC BY 2.0.) The post Strong contractor belief in AI for industry-wide transformation appeared first on AI News. View the full article
  11. As the company that kick-started the cloud computing revolution, Amazon is one of the world’s biggest companies, and its practices in all things technological can be regarded as a blueprint for implementing new technology. This article looks at some of the ways the company is deploying AI in its operations.

Amazon’s latest AI strategy has progressed from basic chatbots to agentic AI: systems that can plan and execute multi-step work using different tools and across processes. As a company, Amazon sits at the intersection of cloud infrastructure (in the form of AWS), logistics, retail, and customer service, all of which are areas where small efficiency gains can have massive impact.

From copilots to agents: AWS builds the control plane for autonomy

In early 2025, Amazon made its AI intentions clear for its cloud business, AWS, by forming a new group focused internally on agentic AI. According to reporting on an internal email, AWS leadership described agentic AI as a potential “multi-billion” business, underscoring that the technology is regarded as a new platform layer, not a standalone feature.

The company was not afraid to say that its workforce is expected to shrink because of the technology. In June 2025, Amazon CEO Andy Jassy told employees that widespread use of generative AI and agents will change how work is done, and that over the next few years, Amazon expects routine work to become faster and more automated, slowing hiring, changing roles, and shrinking some job categories, even if other categories grow.

Amazon’s best use cases are high-volume, rules-bound workflows that require a lot of searching, checking, routing, and logging. These are already having, or will have, significant impact in forecasting, delivery mapping, customer service, and product content. Reuters noted examples like inventory optimisation, improved customer service, and better product detail pages as internal targets for gen AI.
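Inventory optimisation, one of the cited internal targets, has a classic textbook illustration: the newsvendor model, which stocks to the “critical fractile” of forecast demand so that the expected cost of running out balances the cost of overstock. The sketch below is a generic illustration with hypothetical numbers, not Amazon’s actual method:

```python
from statistics import NormalDist

def newsvendor_quantity(mu: float, sigma: float,
                        unit_cost: float, price: float,
                        salvage: float = 0.0) -> float:
    """Stock level balancing lost sales against unsold inventory.

    Demand is assumed normal with mean `mu` and std dev `sigma`.
    """
    underage = price - unit_cost      # margin lost per unit of unmet demand
    overage = unit_cost - salvage     # loss per unit left unsold
    critical_fractile = underage / (underage + overage)
    return mu + sigma * NormalDist().inv_cdf(critical_fractile)

# Hypothetical SKU: forecast demand ~ N(1000, 150), cost 6, price 10, salvage 3.
# Shortages cost slightly more than leftovers (4 vs 3), so stock above the mean.
q = newsvendor_quantity(mu=1000, sigma=150, unit_cost=6.0, price=10.0, salvage=3.0)
print(round(q))
```

With these numbers the order quantity sits just above the forecast mean; raise the salvage value (cheaper leftovers) and the model stocks more aggressively.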
Logistics and operations

Amazon has described AI-enabled upgrades in its US operations that hint at where an agentic approach may take shape. In June 2025, it outlined AI innovations that included a generative AI system to improve delivery location accuracy, a new demand forecasting model to predict what customers want (and where), and an agentic AI team looking at enabling robots to understand natural-language commands.

Consumer-facing agents

Consumer agents are where autonomy first becomes real, because these systems can take actions, even where money is involved. Reporting in The Verge about Alexa+ highlighted features like monitoring items for price drops and (optionally) purchasing for the user automatically once a threshold is hit: a concrete example of the agentic concept in everyday terms, with users setting constraints (in the form of price thresholds) and the system watching and executing inside those boundaries.

Rufus as the Amazon AI interface

Amazon’s Rufus assistant is positioned as an AI interface to shopping, one that helps customers find products, do comparisons, and understand the trade-offs between various choices. Amazon describes Rufus as powered by generative (and increasingly agentic) AI to make shopping faster, with personalisation drawn from a user’s shopping history and current context. Agents therefore become the shopping interface, with their value to the retailer lying in shortening the journey from intent to final purchase.

Agents for Amazon Bedrock and AgentCore

Internally, AWS is producing agentic ‘building blocks’. Agents for Amazon Bedrock are designed to execute multi-step tasks by orchestrating models with tool use and integration with other platforms. Amazon Bedrock AgentCore is presented as a platform to build, deploy, and operate agents securely at scale, with features like runtime hosting, memory, observability dashboards, and evaluation.
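The multi-step orchestration these building blocks provide can be sketched generically: a planner picks the next tool call, a runtime executes it and records an auditable step, and the result feeds the next decision. The sketch below is illustrative only (the tools and planner are hard-coded stand-ins, not the actual Bedrock API):

```python
# Minimal, generic sketch of the multi-step agent pattern: a planner
# (stand-in for the model) picks the next tool, the runtime executes it
# and records an auditable step, and results feed back until done.

def lookup_order(order_id):
    # Stubbed tool: a real system would call an internal API here.
    return {"order_id": order_id, "status": "delayed"}

def send_update(order_id, status):
    # Stubbed tool: notify the customer.
    return f"notified customer about {order_id}: {status}"

TOOLS = {"lookup_order": lookup_order, "send_update": send_update}

def plan(task, history):
    """Stand-in for the model's next-action choice (hard-coded here)."""
    if not history:
        return {"tool": "lookup_order", "args": {"order_id": task}}
    if len(history) == 1:
        status = history[0]["result"]["status"]
        return {"tool": "send_update", "args": {"order_id": task, "status": status}}
    return None  # task complete

def run_agent(task):
    history = []  # doubles as the audit log platforms like AgentCore expose
    while (step := plan(task, history)) is not None:
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"tool": step["tool"], "result": result})
    return history

print(run_agent("A123")[-1]["result"])  # prints "notified customer about A123: delayed"
```

The loop structure is what makes such agents auditable: every tool invocation is logged before its result shapes the next step.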
AgentCore is Amazon’s attempt to become the default infrastructure layer for supervised enterprise agents, especially for organisations that need auditability, access controls, and reliability.

Keeping an eye on workforce and governance

If Amazon succeeds, the next phase for the technology is managed AI, comprising mechanisms that grant or revoke permissions for tools and data access, monitoring of agents’ behaviour, evaluation of performance and whether governance guidelines are being met, and the establishment of escalation paths for when agents hit uncertainty.

The signals to the workforce have been baked into leadership messaging at the company. Fewer people will be required for some corporate tasks, and there will be more roles designing workflows, governing the models, keeping systems secure, and auditing the outcomes of agentic AI use.

Conclusions

A proven leader in technology, Amazon, through its stance on AI and the meaningful ways in which it is implementing it, sketches the path other enterprise companies may follow. Winning the productivity gains and lowered costs that AI promises is not as simple as plugging in a local device or spinning up a new cloud instance. But the company can be seen as lighting the way for others to follow. Whether it is supervising agents or deflecting customer queries to automated answering systems, AI is changing this technology giant in every possible way.

(Image source: “CHEN – The Arousing, Thunder – arouse, excite, inspire; thunder rising from below; awe, alarm, trembling; fertilizing intrusion. The ideogram: excitement and rain” – public domain)
The post AWS’s legacy will be in AI success appeared first on AI News. View the full article
  12. Walmart’s December 9 transfer to Nasdaq wasn’t just a symbolic gesture. The US$905 billion retailer is making its boldest claim yet: that it’s no longer a traditional discount chain, but a tech-powered enterprise using AI to fundamentally rewire retail operations. But beyond the marketing spin and the parade of AI announcements, what’s genuinely transforming at the world’s largest retailer—and where are the gaps between ambition and execution?

The Agentic AI pivot: Purpose-built, not off-the-shelf

Walmart’s AI strategy diverges sharply from competitors chasing generic large language models. According to CTO Hari Vasudev, the company is deploying what it calls “purpose-built agentic AI”—specialised tools trained on Walmart’s proprietary retail data rather than one-size-fits-all solutions.

“Our approach to agentic AI at Walmart is surgical,” Vasudev wrote in a May 2025 blog post. “Extensive early testing proved that, for us, agents work best when deployed for highly specific tasks, to produce outputs that can then be stitched together to orchestrate and solve complex workflows.”

This translates to tangible applications: Walmart’s “Trend-to-Product” system cuts fashion production timelines by 18 weeks. Its GenAI Customer Support Assistant now autonomously routes and resolves issues without human intervention. Developer productivity tools handle test generation and error resolution within CI/CD pipelines. Meanwhile, the company’s retail-specific LLM “Wallaby”—trained on decades of Walmart transaction data—powers everything from item comparison to personalised shopping journey completion.

The infrastructure undergirding this? Element, Walmart’s proprietary MLOps platform, is designed to avoid vendor lock-in and optimise GPU usage across multiple cloud providers. It’s an in-house “factory” that gives Walmart speed and flexibility competitors wrestling with third-party platforms can’t match.
Real numbers: Where AI delivers measurable impact

Walmart has been unusually transparent about specific ROI metrics, offering a rare glimpse into enterprise AI economics:

Data operations: GenAI improved over 850 million product catalogue data points—a task that would have required 100 times the headcount using manual processes, according to CEO Doug McMillon’s August 2024 earnings call.

Supply chain efficiency: AI-powered route optimisation eliminated 30 million unnecessary delivery miles and avoided 94 million pounds of CO2 emissions. The company won the prestigious Franz Edelman Award in 2023 for this technology—and has since commercialised it as a SaaS product for other businesses.

Store operations: Digital Twin technology predicts refrigeration failures up to two weeks in advance, auto-generating work orders complete with visual models, wiring diagrams, and required parts. Sam’s Club’s AI-powered exit technology has reduced member checkout times by 21%, with over 64% of members now using the friction-free system across all locations.

Customer experience: Dynamic Delivery algorithms analyse traffic patterns, weather conditions, and order complexity to predict delivery times down to the minute, enabling 17-minute express deliveries in test markets.

The human cost: “AI will change every job”

McMillon hasn’t sugarcoated the workforce implications. Speaking at a Bentonville workforce conference in September 2025, he stated bluntly: “It’s very clear that AI is going to change literally every job. Maybe there’s a job in the world that AI won’t change, but I haven’t thought of it.”

But Walmart is positioning this as a transformation rather than an elimination. McMillon expects total headcount to remain flat even as revenue grows—meaning jobs will shift, not disappear.
White-collar roles face the earliest disruption through chatbots handling customer service and supply chain tracking, while store and warehouse workers will eventually see tasks absorbed by autonomous systems. The company is investing heavily in reskilling programs. “We’ve got to create the opportunity for everybody to make it to the other side,” McMillon said at the Bentonville conference.

Automation equipment operator Chance at Walmart’s Palestine, Texas, distribution centre described the shift: “It used to be 85% physical. Now it’s 85% mental. I’m solving problems with my mind, not just my body.”

The Nasdaq gambit: Repositioning for tech valuations

Walmart’s exchange transfer was explicitly framed around its AI transformation. CFO John David Rainey stated the move reflects the company “setting a new standard for omnichannel retail by integrating automation and AI.” The subtext? Walmart wants the valuation multiples tech companies command.

At a P/E ratio of 40.3x—higher than Amazon and Microsoft—the market is partially buying the transformation story. Potential inclusion in the tech-heavy Nasdaq 100 index would drive passive fund investment regardless of AI execution. Analysts are split on whether the premium is justified. Jefferies’ Corey Tarlowe argued the move signals Walmart is “less of a traditional retail corporation and more of a technology firm.” But sceptics note the company still derives revenue from razor-thin retail margins, not high-margin software or cloud services—despite commercialising tools like Route Optimisation.

Verdict: Genuine transformation with execution risk

Walmart’s AI strategy is neither pure hype nor guaranteed success. The company is making structural investments in proprietary infrastructure, deploying AI at genuine scale with measurable operational benefits, and transparently acknowledging workforce implications most enterprises dodge.
But significant execution risks remain: managing fragmented agent ecosystems, preventing algorithmic bias at scale, competing against external shopping agents, and determining appropriate automation boundaries while maintaining accuracy. The company’s candidness about challenges—“often, a co-pilot model, with humans and AI working as a team, is the most effective approach”—suggests leadership understands AI isn’t a silver bullet.

For enterprises watching Walmart’s playbook, the lesson is clear: build for specificity, not generality. Invest in proprietary data moats. Plan for workforce transformation, not just cost reduction. And recognise that even with massive resources and technical talent, agentic AI remains early-stage technology with genuine limitations.

The question isn’t whether Walmart is using AI—it demonstrably is. It’s whether this surgical, infrastructure-heavy approach delivers sustainable competitive advantage, or if the company is simply automating itself into the same low-margin trap with shinier tools. That answer won’t be clear for several years—but Walmart’s willingness to bet US$905 billion in market cap on the transformation suggests leadership believes the former.

See also: Walmart and Amazon drive retail transformation with AI

The post Walmart’s AI strategy: Beyond the hype, what’s actually working appeared first on AI News. View the full article
  13. Enterprise leaders are pressing ahead with artificial intelligence, even as early results remain uneven. Reporting from the Wall Street Journal and Reuters shows that most CEOs expect AI spending to keep rising through 2026, despite difficulty tying those investments to clear, enterprise-wide returns.

This tension highlights where many organisations now sit in their AI journey. The technology has moved beyond trials and proofs of concept, but it has yet to settle into a reliable source of value. Companies are operating in an in-between phase, where ambition, execution, and expectations are all under strain at the same time.

Spending continues, even as returns lag

AI budgets have climbed steadily across large enterprises over the past two years. Competitive pressure, board oversight, and fear of being left behind have all played a role. At the same time, executives are more open about the limits they are seeing. Gains often show up in pockets rather than across the business, pilots fail to spread, and the cost of connecting AI systems to existing tools keeps rising.

A Wall Street Journal survey of senior executives found that most CEOs see AI as central to long-term competitiveness, even if short-term benefits are hard to measure. For many, AI no longer feels optional. It is treated as a capability that must be developed over time, rather than a project that can be paused if results disappoint. That view helps explain why spending remains steady. Leaders worry that cutting back now could weaken their position later, especially as rivals improve how they use the technology.

Why pilots struggle to scale

One of the main barriers to stronger returns is the jump from experimentation to day-to-day use. Many organisations have launched AI pilots across different teams, often without shared rules or coordination. While these efforts can generate insight and interest, few translate into changes that affect the wider business.
Reuters has reported that companies trying to scale AI frequently run into issues with data quality, system links, security controls, and regulatory requirements. These problems are not only technical. They reflect how work is organised. Responsibility is often split across teams, ownership is unclear, and decisions slow down once projects touch legal, risk, and IT functions. The result is a pattern of heavy spending on trials, with limited progress toward systems that are embedded in core operations.

Infrastructure costs reshape the equation

The cost of infrastructure is also weighing on AI returns. Training and running models demands large amounts of computing power, storage, and energy. Cloud bills can rise quickly as usage grows, while building on-site systems requires upfront investment and long planning cycles.

Executives cited by Reuters have warned that infrastructure costs can outpace the benefits delivered by AI tools, particularly in the early stages. This has forced tough choices: whether to centralise AI resources or leave teams to experiment on their own; whether to build in-house systems or rely on vendors; and how much waste is acceptable while capabilities are still forming. In practice, these decisions are shaping AI strategy as much as model performance or use-case selection.

AI governance moves to the centre of CEO decision-making

As AI spending increases, so does scrutiny. Boards, regulators, and internal audit teams are asking harder questions. In response, many organisations are tightening control. Decision rights are shifting toward central teams, AI councils are becoming more common, and projects are being linked more closely to business priorities. The Wall Street Journal reports that companies are moving away from loosely connected experiments toward clearer goals, measures, and timelines. This can slow progress, but it reflects a growing belief that AI should be managed with the same discipline as other major investments.
This shift marks a change in how AI is treated. It is no longer a side effort or a curiosity. It is being brought into existing operating and risk structures.

Expectations are being reset, not abandoned

Importantly, the persistence of AI spending does not signal blind optimism. Instead, it reflects a reset in expectations. CEOs are learning that AI rarely delivers immediate, sweeping returns. Value tends to emerge gradually, as organisations adjust workflows, retrain staff, and refine data foundations.

Rather than abandoning AI initiatives, many enterprises are narrowing their focus. They are prioritising fewer use cases, demanding clearer ownership, and aligning projects more closely with business outcomes. This recalibration may reduce short-term excitement, but it improves the likelihood of sustainable returns.

What CEO AI strategy signals for 2026 planning

For organisations shaping their plans for 2026, the message for every CEO is not to retreat from AI, but to pursue it with more care as AI strategies mature. Ownership, governance, and realistic timelines matter more than headline spending levels or bold claims. Those most likely to benefit are treating AI as a long-term shift in how the organisation works, not a quick route to growth. In the next phase, advantage will depend less on how much is spent and more on how well AI fits into everyday operations.

(Photo by Ambre Estève)

See also: AI in 2026: Experimental AI concludes as autonomous systems rise

The post CEOs still betting big on AI: Strategy vs. return on investment in 2026 appeared first on AI News. View the full article
  14. Generative AI’s experimental phase is concluding, making way for truly autonomous systems in 2026 that act rather than merely summarise. The year will be less about model parameters and more about agency, energy efficiency, and the ability to navigate complex industrial environments. The next twelve months represent a departure from chatbots toward autonomous systems executing workflows with minimal oversight, forcing organisations to rethink infrastructure, governance, and talent management.

Autonomous AI systems take the wheel

Hanen Garcia, Chief Architect for Telecommunications at Red Hat, argues that while 2025 was defined by experimentation, the coming year marks a “decisive pivot towards agentic AI, autonomous software entities capable of reasoning, planning, and executing complex workflows without constant human intervention.”

Telecoms and heavy industry are the proving grounds. Garcia points to a trajectory toward autonomous network operations (ANO), moving beyond simple automation to self-configuring and self-healing systems. The business goal is to reverse commoditisation by “prioritising intelligence over pure infrastructure” and reduce operating expenditures. Technologically, service providers are deploying multiagent systems (MAS). Rather than relying on a single model, these allow distinct agents to collaborate on multi-step tasks, handling complex interactions autonomously.

However, increased autonomy introduces new threats. Emmet King, Founding Partner of J12 Ventures, warns that “as AI agents gain the ability to autonomously execute tasks, hidden instructions embedded in images and workflows become potential attack vectors.” Security priorities must therefore shift from endpoint protection to “governing and auditing autonomous AI actions.”

As organisations scale these autonomous AI workloads, they hit a physical wall: power. King argues energy availability, rather than model access, will determine which startups scale.
“Compute scarcity is now a function of grid capacity,” King states, suggesting energy policy will become the de facto AI policy in Europe.

KPIs must adapt. Sergio Gago, CTO at Cloudera, predicts enterprises will prioritise energy efficiency as a primary metric. “The new competitive edge won’t come from the largest models, but from the most intelligent, efficient use of resources.” Horizontal copilots lacking domain expertise or proprietary data will fail ROI tests as buyers measure real productivity. The “clearest enterprise ROI” will emerge from manufacturing, logistics, and advanced engineering—sectors where AI integrates into high-value workflows rather than consumer-facing interfaces.

AI ends the static app in 2026

Software consumption is changing too. Chris Royles, Field CTO for EMEA at Cloudera, suggests the traditional concept of an “app” is becoming fluid. “In 2026, AI will start to radically change the way we think about apps, how they function and how they’re built.” Users will soon request temporary modules generated by code and a prompt, effectively replacing dedicated applications. “Once that function has served its purpose, it closes,” Royles explains, noting these “disposable” apps can be built and rebuilt in seconds. Rigorous governance is required here; organisations need visibility into the reasoning processes used to create these modules to ensure errors are corrected safely.

Data storage faces a similar reckoning, especially as AI becomes more autonomous. Wim Stoop, Director of Product Marketing at Cloudera, believes the era of “digital hoarding” is ending as storage capacity hits its limit. “AI-generated data will become disposable, created and refreshed on demand rather than stored indefinitely,” Stoop predicts. Verified, human-generated data will rise in value while synthetic content is discarded. Specialist AI governance agents will pick up the slack.
These “digital colleagues” will continuously monitor and secure data, allowing humans to “govern the governance” rather than enforcing individual rules. For example, a security agent could automatically adjust access permissions as new data enters the environment without human intervention.

Sovereignty and the human element

Sovereignty remains a pressing concern for European IT. Red Hat’s survey data indicates 92 percent of IT and AI leaders in EMEA view enterprise open-source software as vital for achieving sovereignty. Providers will leverage existing data centre footprints to offer sovereign AI solutions, ensuring data remains within specific jurisdictions to meet compliance demands. King adds that competitive advantage is moving from owning models to “controlling training pipelines and energy supply,” with open-source advancements allowing more actors to run frontier-scale workloads.

Workforce integration is becoming personal. Nick Blasi, Co-Founder of Personos, argues tools ignoring human nuance – tone, temperament, and personality – will soon feel obsolete. By 2026, Blasi predicts “half of workplace conflict will be flagged by AI before managers know it exists.” These systems will focus on “communication, influence, trust, motivation, and conflict resolution,” Blasi suggests, adding that personality science will become the “operating system” for the next generation of autonomous AI, offering grounded understanding of human individuality rather than generic recommendations.

The era of the “thin wrapper” is over. Buyers are now measuring real productivity, exposing tools built on hype rather than proprietary data. For the enterprise, competitive advantage will no longer come from renting access to a model, but from controlling the training pipelines and energy supply that power it.

See also: BBVA embeds AI into banking workflows using ChatGPT Enterprise
The post AI in 2026: Experimental AI concludes as autonomous systems rise appeared first on AI News. View the full article
  15. BBVA is embedding AI into core banking workflows using ChatGPT Enterprise to overhaul risk and service in the sector. For the banking industry, the challenge of generative AI is rarely about adoption; it is about value extraction. BBVA has addressed this by integrating OpenAI’s platform directly into its operational backbone, a decision that will see the tool deployed across every unit of the bank. This tenfold expansion marks one of the largest enterprise deployments in the financial sector to date.

The bank’s approach prioritises data over hype. BBVA began working with OpenAI in May 2024, rolling out 3,300 accounts to test the waters. This pilot phase allowed the institution to validate use cases before expanding to 11,000 staff. The results from this intermediate rollout of AI in banking provided the necessary business case. Employees using the tools saved nearly three hours per week on routine tasks. Furthermore, engagement was high, with more than 80 percent of users logging in daily. Staff created thousands of custom GPTs to handle specific collaborative and administrative duties, proving that utility often lies in bottom-up innovation rather than top-down mandates.

Impact of ChatGPT on banking operations

This agreement goes beyond simple productivity tools; it aims to reshape how the bank functions. BBVA will use the partnership to streamline risk analysis, a resource-intensive area for all financial institutions. Additionally, the deal focuses on redesigning software development processes and improving general employee support.

“We were pioneers in the digital and mobile transformation, and we are now entering the AI era with even greater ambition. Our alliance with OpenAI accelerates the native integration of artificial intelligence across the bank to create a smarter, more proactive, and completely personalized banking experience, anticipating the needs of every client,” said Carlos Torres Vila, Chairman, BBVA.
Deploying AI into a heavily regulated environment like banking requires strict controls. The global rollout includes enterprise-grade security and privacy measures, ensuring that client data remains protected while staff access OpenAI’s latest models. The ChatGPT deployment includes tools for creating internal agents that connect securely to BBVA’s existing banking systems and processes.

To ensure these tools are used effectively, the two companies are collaborating on a structured adoption model and specialised training programmes. This addresses the common enterprise pitfall where software is deployed without the necessary skills to leverage it.

Improving the banking experience with AI

The initiative also targets the client interface. BBVA has already deployed ‘Blue,’ a virtual assistant built on OpenAI models that helps customers manage cards and accounts using natural language. Future plans involve integrating products so customers can interact with the bank directly through ChatGPT. To facilitate this, BBVA will have a dedicated team working directly with OpenAI’s product and research units. This direct line is intended to accelerate the bank’s transition into an “AI-native” institution.

Sam Altman, CEO of OpenAI, said: “BBVA is a strong example of how a large financial institution can adopt AI with real ambition and speed. With this expansion of our work together, BBVA will embed our AI into the core of their products and operations to enhance the overall banking experience for their customers.”

See also: Perplexity: AI agents are taking over complex enterprise tasks
The post BBVA embeds AI into banking workflows using ChatGPT Enterprise appeared first on AI News. View the full article
  16. F. Scott Fitzgerald once observed that “in a real dark night of the soul, it is always three o’clock in the morning.” Microsoft’s latest Copilot usage analysis suggests this nocturnal tendency toward existential contemplation persists in the AI age—with religion and philosophy conversations rising through the rankings during early morning hours.

The Microsoft AI (MAI) research team analysed 37.5 million de-identified conversations to uncover when and how users turn to Copilot for different needs. The findings, published December 10, reveal distinct AI usage patterns that follow surprisingly human rhythms—from late-night philosophical queries to Valentine’s Day relationship anxiety, and the strict divide between weekday coding and weekend gaming.

Late-night existential queries and daytime travel planning

Religion and philosophy conversations rise during early morning hours while travel queries peak during commuting times

According to the report authored by Bea Costa-Gomes and Seth Spielman, religion and philosophy conversations rise through the rankings during early morning hours. Travel-related discussions, by contrast, peak during typical commuting times, suggesting users plan trips while in transit.

“The larger-than-life questions seem to have a rise during the early hours of the morning, with ‘Religion and Philosophy’ rising through the ranks,” the researchers noted in their paper. This temporal divide reflects different states of mind: practical planning during active hours versus contemplative questioning during quiet moments. The pattern challenges assumptions about AI as merely a productivity tool, revealing its role in addressing fundamental human curiosity.

Health dominates mobile usage across all hours

Health-related conversations consistently rank highest on mobile devices across all months of 2025

Health-related topics consistently ranked as the most common conversation type on mobile devices throughout 2025, regardless of time, day, or month.
Users turned to Copilot for wellness tracking, health tips, and daily routine management with notable consistency.

“When it comes to mobile, with its intimacy and immediacy, nothing tops our health,” the researchers wrote.

The dominance of health queries on mobile devices, but not desktops, suggests users view smartphones as more personal companions for sensitive topics. This usage analysis indicates AI assistants are becoming integrated into daily health management routines.

Programming rises weekdays, gaming dominates weekends

August data revealed a clear cyclical pattern between programming and gaming topics. Programming conversations climbed from Monday through Friday, while gaming queries surged on weekends. Users were equally likely to engage with both topics, but usage was strictly segregated by day of the week.

“This crossover hints at a vibrant, creative community that loves to code during the week and play during the weekends in equal measure,” according to the report.

The strict temporal separation suggests users maintain clear boundaries between work and leisure, even when both involve digital activities.

February’s Valentine’s Day spike in relationship queries

The Copilot usage analysis showed February brought a pronounced spike in relationship-related conversations, peaking specifically on Valentine’s Day. Personal growth and wellness discussions also increased before the holiday, as users sought guidance for navigating romantic expectations.

“Whether it was in preparing for Valentine’s day, or facing the day and the relationships, we saw a spike in activity as people turned to Copilot for guidance, reminders, and support,” the researchers observed.

This pattern reveals how AI usage patterns align with cultural moments and social pressures, with users seeking private counsel from AI during emotionally charged periods.
From search to advice: shifting usage patterns

While information search remains Copilot’s most popular function, Microsoft’s analysis detected a clear rise in users seeking advice, particularly on personal topics including relationships and life decisions.

“This growing trend highlights how digital tools are becoming trusted companions for life’s everyday questions,” the report stated.

The shift from purely informational queries to advice-seeking represents an evolution in how users conceptualise AI assistants: from search engines to consultants.

Privacy-preserving methodology

Microsoft’s research team emphasised that their analysis maintained user privacy by extracting only conversation summaries rather than full content. The system identifies topic and intent while avoiding individual-level data exposure.

“Our system doesn’t just de-identify conversations; it only extracts the summary of the conversation, from which we learn the topic and the intent, and maintains full privacy,” Costa-Gomes and Spielman explained.

The 37.5 million conversation sample represents a subset of Copilot’s total usage, though Microsoft did not disclose the full user base or conversation volume.

Implications for AI development

Understanding these AI usage patterns informs feature development, according to Microsoft. The company noted that recognising what matters most to users (health, creativity, and support during key moments) enables designing features that fit naturally into daily life.

“It’s also clear from these uses that what Copilot says matters. They show why it’s so important that we hold ourselves to a high bar for quality,” the researchers wrote.

The findings complement recent broader industry analysis of AI usage patterns, including OpenRouter’s report showing ******** AI models gaining global adoption through cost efficiency and performance improvements. Microsoft’s temporal focus adds behavioural depth to understanding how AI assistants integrate into human routines.
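The summary-only approach the researchers describe can be sketched in a few lines of Python: each conversation is reduced to a classified topic plus a coarse hour-of-day bucket before any aggregation, so raw text never reaches the analysis layer. The keyword classifier, topic labels, and record shape below are illustrative assumptions, not Microsoft’s actual pipeline.

```python
# Minimal sketch of a summary-only aggregation pipeline (illustrative, not
# Microsoft's actual system): only a short summary and an hour-of-day bucket
# per conversation are used; raw conversation text is never touched.
from collections import Counter

def classify_topic(summary: str) -> str:
    """Hypothetical keyword classifier mapping a summary to a topic label."""
    keywords = {
        "philosophy": "Religion and Philosophy",
        "flight": "Travel",
        "health": "Health",
        "code": "Programming",
        "game": "Gaming",
    }
    for kw, topic in keywords.items():
        if kw in summary.lower():
            return topic
    return "Other"

def aggregate_by_hour(records):
    """Count (topic, hour) pairs from de-identified (summary, hour) records."""
    counts = Counter()
    for summary, hour in records:
        counts[(classify_topic(summary), hour)] += 1
    return counts

# Toy de-identified records: a summary string and the local start hour (0-23).
records = [
    ("asked about philosophy of mind", 2),
    ("compared flight prices to Lisbon", 8),
    ("health tips for better sleep", 22),
]
totals = aggregate_by_hour(records)
```

Counting only (topic, hour) pairs is what lets this kind of analysis report temporal patterns (philosophy rising at 2 am, travel peaking at commute times) without retaining anything individually identifying.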
The research arrives as Microsoft expands its AI infrastructure, with the company’s next-generation GB200 cluster now operational at MAI. These usage patterns suggest AI assistants are evolving beyond tools into companions that adapt to human rhythms: answering practical questions during daylight hours while entertaining philosophical musings when users can’t sleep.

See also: Microsoft ‘Promptions’ fix AI prompts failing to deliver

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Microsoft’s Copilot usage analysis exposes the 2 am philosophy question phenomenon appeared first on AI News.
  17. Microsoft believes it has a fix for the familiar cycle of an AI prompt being given, the response missing the mark, and the attempt repeating. This inefficiency is a drain on resources. The “trial-and-error loop can feel unpredictable and discouraging,” turning what should be a productivity booster into a time sink. Knowledge workers often spend more time managing the interaction itself than understanding the material they hoped to learn.

Microsoft has released Promptions (prompt + options), a UI framework designed to address this friction by replacing vague natural language requests with precise, dynamic interface controls. The open-source tool offers a method to standardise how workforces interact with large language models (LLMs), moving away from unstructured chat toward guided and reliable workflows.

The comprehension bottleneck

Public attention often centres on AI producing text or images, but a massive component of enterprise usage involves understanding: asking AI to explain, clarify, or teach. This distinction is vital for internal tooling.

Consider a spreadsheet formula: one user may want a simple syntax breakdown, another a debugging guide, and another an explanation suitable for teaching colleagues. The same formula can require entirely different explanations depending on the user’s role, expertise, and goals.

Current chat interfaces rarely capture this intent effectively. Users often find that the way they phrase a question doesn’t match the level of detail the AI needs. “Clarifying what they really want can require long, carefully worded prompts that are tiring to produce,” Microsoft explains.

Promptions operates as a middleware layer to fix this familiar issue with AI prompts. Instead of forcing users to type lengthy specifications, the system analyses the intent and conversation history to generate clickable options – such as explanation length, tone, or specific focus areas – in real time.
Efficiency vs complexity

Microsoft researchers tested this approach by comparing static controls against the new dynamic system. The findings offer a realistic view of how such tools function in a live environment.

Participants consistently reported that dynamic controls made it easier to express the specifics of their tasks without repeatedly rephrasing their prompts. This reduced the effort of prompt engineering and allowed users to focus more on understanding content than managing the mechanics of phrasing. By surfacing options like “Learning Objective” and “Response Format,” the system prompted participants to think more deliberately about their goals.

Yet, adoption brings trade-offs. Participants valued adaptability but also found the system more difficult to interpret. Some struggled to anticipate how a selected option would influence the response, noting that the controls seemed opaque because the effect became evident only after the output appeared.

This highlights a balance to strike. Dynamic interfaces can streamline complex tasks but may introduce a learning curve where the connection between a checkbox and the final output requires user adaptation.

Promptions: The solution to fix AI prompts?

Promptions is designed to be lightweight, functioning as a middleware layer sitting between the user and the underlying language model. The architecture consists of two primary components:

• Option Module: Reviews the user’s prompt and conversation history to generate relevant UI elements.
• Chat Module: Incorporates these selections to produce the AI’s response.

Of particular note for security teams, “there’s no need to store data between sessions, which keeps implementation simple.” This stateless design mitigates data governance concerns typically associated with complex AI overlays. Moving from “prompt engineering” to “prompt selection” offers a pathway to more consistent AI outputs across an organisation.
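As a rough sketch of how such a two-module middleware might hang together (the class names, option lists, and prompt format here are illustrative assumptions, not Microsoft’s published API):

```python
# Illustrative sketch of the Option Module / Chat Module split described above.
# Names and option lists are assumptions for illustration only.

class OptionModule:
    """Reviews the user's prompt and proposes clickable options."""
    def generate_options(self, prompt: str) -> dict:
        options = {"Response Format": ["bullet points", "paragraph"]}
        if "explain" in prompt.lower():
            # Comprehension requests get comprehension-specific controls.
            options["Explanation Length"] = ["brief", "detailed"]
            options["Learning Objective"] = ["syntax", "debugging", "teaching"]
        return options

class ChatModule:
    """Folds the user's selections into a structured prompt for the LLM."""
    def build_prompt(self, prompt: str, selections: dict) -> str:
        constraints = "; ".join(f"{k}={v}" for k, v in sorted(selections.items()))
        return f"{prompt}\n[Constraints: {constraints}]"

# Stateless round trip: nothing needs to persist between sessions.
user_prompt = "Explain this spreadsheet formula"
offered = OptionModule().generate_options(user_prompt)
final_prompt = ChatModule().build_prompt(
    user_prompt,
    {"Explanation Length": "brief", "Response Format": "bullet points"},
)
```

The point of the pattern is that user intent arrives as structured selections rather than free text, which is also why no per-session storage is required: everything the chat side needs travels with the current request.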
By implementing UI frameworks that guide user intent, technology leaders can reduce the variability of AI responses and improve workforce efficiency.

Success depends on calibration. Usability challenges remain around how dynamic options affect AI output and around managing the complexity of multiple controls. Leaders should view this not as a complete solution to fix the results of AI prompts, but as a design pattern to test within their internal developer platforms and support tools.

See also: Perplexity: AI agents are taking over complex enterprise tasks

The post Microsoft ‘Promptions’ fix AI prompts failing to deliver appeared first on AI News.
  18. New adoption data from Perplexity reveals how AI agents are driving workflow efficiency gains by taking over complex enterprise tasks.

For the past year, the technology sector has operated under the assumption that the next evolution of generative AI would advance beyond conversation into action. While Large Language Models (LLMs) serve as a reasoning engine, “agents” act as the hands, capable of executing complex, multi-step workflows with minimal supervision. Until now, however, visibility into how these tools are actually being utilised in the wild has been poor, relying largely on speculative frameworks or limited surveys.

New data released by Perplexity, analysing hundreds of millions of interactions with its Comet browser and assistant, provides a first large-scale field study of general-purpose AI agents. The data indicates that agentic AI is already being deployed by high-value knowledge workers to streamline productivity and research tasks.

Understanding who is using these tools is essential for forecasting internal demand and identifying potential shadow IT vectors. The study reveals marked heterogeneity in adoption. Users in nations with higher GDP per capita and educational attainment are far more likely to engage with agentic tools.

More telling for corporate planning is the occupational breakdown. Adoption is heavily concentrated in digital and knowledge-intensive sectors. The ‘Digital Technology’ cluster represents the largest share, accounting for 28 percent of adopters and 30 percent of queries. This is followed closely by academia, finance, marketing, and entrepreneurship. Collectively, these clusters account for over 70 percent of total adopters.

This suggests that the individuals most likely to leverage agentic workflows are among the most expensive assets within an organisation: software engineers, financial analysts, and market strategists.
These early adopters are not dabbling: the data shows that “power users” (those with earlier access) make nine times as many agentic queries as average users, indicating that once integrated into a workflow, the technology becomes indispensable.

AI agents: Partners for enterprise tasks, not butlers

To advance beyond marketing narratives, enterprises must understand the utility these agents provide. A common view suggests agents will primarily function as “digital concierges” for rote administrative chores. However, the data challenges this view: 57 percent of all agent activity focuses on cognitive work.

Perplexity’s researchers developed a “hierarchical agentic taxonomy” to classify user intent, revealing that the usage of AI agents is practical rather than experimental. The dominant use case is ‘Productivity & Workflow,’ which accounts for 36 percent of all agentic queries. This is followed by ‘Learning & Research’ at 21 percent.

Specific anecdotes from the study illustrate how this translates to enterprise value. A procurement professional, for instance, used the assistant to scan customer case studies and identify relevant use cases before engaging with a vendor. Similarly, a finance worker delegated the tasks of filtering stock options and analysing investment information. In these scenarios, the agent handles the information gathering and initial synthesis autonomously, allowing the human to focus on final judgment.

This distribution gives operational leaders a clear signal: the immediate ROI for agentic AI lies in scaling human capability rather than simply automating low-level friction. The study defines these agents as systems that “cycle automatically between three iterative phases to achieve the end goal: thinking, acting, and observing.” This capability allows them to support “deep cognitive work,” acting as a thinking partner rather than a simple butler.
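The think-act-observe cycle quoted above can be sketched as a simple loop. The naive planner and toy tools here are stand-ins of my own, not Perplexity’s implementation:

```python
# Minimal sketch of an agent cycling through thinking, acting, and observing.
# The naive planner and toy tools are illustrative stand-ins only.

def run_agent(goal: str, tools: dict, max_steps: int = 5):
    """Repeat think -> act -> observe until no tool remains to try."""
    observations = []
    for _ in range(max_steps):
        used = {name for name, _ in observations}
        # Think: pick the next tool that has not yet reported back.
        pending = [name for name in tools if name not in used]
        if not pending:
            break  # goal treated as reached once every tool has run
        tool_name = pending[0]
        # Act: invoke the tool against the goal.
        result = tools[tool_name](goal)
        # Observe: record the outcome, which informs the next iteration.
        observations.append((tool_name, result))
    return observations

# Toy tools standing in for browser control or API actions.
trace = run_agent(
    "summarise customer case studies",
    {
        "search": lambda goal: f"found 3 documents for '{goal}'",
        "summarise": lambda goal: "draft summary of retrieved documents",
    },
)
```

A real agent would let a model choose the next action from the observation history rather than draining a fixed tool list, but the control flow is the same shape: loop, act, fold the observation back in.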
Stickiness and the cognitive migration

A key insight for IT leaders is the “stickiness” of AI agents for enterprise workflows. The data shows that in the short term, users exhibit strong within-topic persistence. If a user engages an agent for a productivity task, their subsequent queries are highly likely to remain in that domain.

However, the user journey often evolves. New users frequently “test the waters” with low-stakes queries, such as asking for movie recommendations or general trivia. Over time, a transition occurs. The study notes that while users may enter via various use cases, query shares tend to migrate toward cognitively oriented domains like productivity, learning, and career development.

Once a user employs an agent to debug code or summarise a financial report, they rarely revert to lower-value tasks. The ‘Productivity’ and ‘Workflow’ categories demonstrate the highest retention rates. This behaviour implies that early pilot programmes should anticipate a learning curve where usage matures from simple information retrieval to complex task delegation.

The “where” of agentic AI is just as important as the “what”. Perplexity’s study tracked the environments – specific websites and platforms – where these AI agents operate. The concentration of activity varies by task, but the top environments are staples of the modern enterprise stack. Google Docs is a primary environment for document and spreadsheet editing, while LinkedIn dominates professional networking tasks. For ‘Learning & Research,’ the activity is split between course platforms like Coursera and research repositories.

For CISOs and compliance officers, this presents a new risk profile. AI agents are not just reading data; they are actively manipulating it within core enterprise applications. The study explicitly defines agentic queries as those involving “browser control” or actions on external applications via APIs.
When an employee tasks an agent to “summarise these customer case studies,” the agent is interacting directly with proprietary data.

The concentration of environments also highlights the potential for platform-specific optimisations. For instance, the top five environments account for 96 percent of queries in professional networking, primarily on LinkedIn. This high concentration suggests that businesses could see immediate efficiency gains by developing specific governance policies or API connectors for these high-traffic platforms.

Business planning for agentic AI following Perplexity’s data

The diffusion of capable AI agents invites new lines of inquiry for business planning. The data from Perplexity confirms that we have passed the speculative phase. Agents are currently being used to plan and execute multi-step actions, modifying their environments rather than just exchanging information.

Operational leaders should consider three immediate actions:

1. Audit the productivity and workflow friction points within high-value teams: The data shows this is where agents are naturally finding their foothold. If software engineers and financial analysts are already using these tools to edit documents or manage accounts, formalising these workflows could standardise efficiency gains.
2. Prepare for the augmentation reality: The researchers note that while agents have autonomy, users often break tasks into smaller pieces, delegating only subtasks. This suggests that the immediate future of work is collaborative, requiring employees to be upskilled in how to effectively “manage” their AI counterparts.
3. Address the infrastructure and security layer: With agents operating in “open-world web environments” and interacting with sites like GitHub and corporate email, the perimeter for data loss prevention expands. Policies must distinguish between a chatbot offering advice and an agent executing code or sending messages.
As the market for agentic AI is projected to grow from $8 billion in 2025 to $199 billion by 2034, the early evidence from Perplexity serves as a bellwether. The transition to enterprise workflows led by AI agents is underway, driven by the most digitally capable segments of the workforce. The challenge for the enterprise is to harness this momentum without losing control of the governance required to scale it safely.

See also: Accenture and Anthropic partner to boost enterprise AI integration

The post Perplexity: AI agents are taking over complex enterprise tasks appeared first on AI News.
  19. Many companies are still working out how to use AI in a steady and practical way, but a small group is already pulling ahead. New research from NTT DATA outlines a playbook that shows how these “AI leaders” set themselves apart through strong plans, firm decisions, and a disciplined approach to building and using AI across their organisations.

The findings come from a survey of 2,567 senior executives in 35 countries and 15 industries. Only 15% of the organisations met the bar to be considered AI leaders. These companies share a few traits: clear direction on where AI fits into their business, a solid operating model, and consistent follow-through. They also reported higher revenue growth and stronger profit margins than everyone else in the study.

Yutaka Sasaki, President and CEO of NTT DATA Group, put it simply: “AI accountability now belongs in the boardroom and demands an enterprise-wide agenda. Our research shows that a small group of AI leaders already are using AI to differentiate, grow and reinvent how humans and machines create value together.”

The playbook behind strong AI plans

One of the clearest differences between leaders and the rest is how they approach strategy. For these companies, AI is not a side project or a tool bolted onto existing work. They treat it as a core driver of growth and adjust their plans to match that view.

A major advantage for these leaders is how closely they connect AI with their business goals. This alignment helps them move faster and stay focused, which in turn delivers stronger financial outcomes. They also zero in on a few high-value areas of the business rather than spreading resources too thin. By redesigning entire workflows around AI, they unlock more value than if they had only made small improvements in scattered parts of the organisation.

The report describes this as a kind of flywheel: early investments bring early wins, which then encourage more investment. Over time, this cycle becomes self-reinforcing.
Leaders also rebuild important applications with AI embedded inside them, instead of adding basic AI features on top of old systems. This approach helps them see deeper impact and prepares the organisation for long-term gains.

How leaders put their plans to work

A good plan only works when backed by strong execution. AI leaders stand out through the foundations they build, the way they support their people, and how they drive adoption across the entire organisation.

These companies invest in secure and scalable systems that can support large AI workloads. In some cases, they shift or localise their infrastructure to support private or sovereign AI needs. They also work to remove system bottlenecks so teams can move without roadblocks.

Rather than using AI as a replacement for workers, leaders use it to help experienced employees do higher-value work. This “expert-first” approach allows teams to use their judgment while letting AI handle complex or time-consuming tasks.

AI leaders also focus on adoption as a long-term change effort. They treat it as a company-wide shift, supported by clear communication and structured change management. This helps reduce pushback and encourages steady use of AI at all levels.

Governance is another major difference. Leading organisations centralise their AI oversight, give clear responsibility to senior roles such as Chief AI Officers, and build processes that help balance innovation with risk. These systems allow them to scale AI more confidently.

Partnerships also play a major role. Top companies often bring in outside experts and are open to arrangements that tie outcomes to shared success. This helps them move faster while keeping their goals in view.

Abhijit Dubey, CEO and CAIO of NTT DATA, Inc., summarised the path forward: “Once AI and business strategies are aligned, the single most effective move is to pick one or two domains that deliver disproportionate value and redesign them end-to-end with AI.
Supporting this focused, end-to-end approach with strong governance, modern infrastructure and trusted partners is how today’s AI leaders are turning pilots into profit and pulling ahead of the market.”

See also: OpenAI: Enterprise users swap AI pilots for deep integrations

The post Inside the playbook of companies winning with AI appeared first on AI News.
  20. Accenture and Anthropic are setting out to boost enterprise AI integration with a newly expanded partnership. While 2024 was defined by corporate curiosity regarding Large Language Models (LLMs), the current mandate for business leaders is operationalising these tools to achieve a return on investment. The new Accenture Anthropic Business Group combines Anthropic’s model capabilities with Accenture’s implementation machinery to industrialise the deployment of generative AI across regulated sectors.

Industrialising the developer workflow

A primary component of this collaboration focuses on software engineering. Coding assistance is often seen as the path of least resistance for AI adoption, yet integrating these tools into existing CI/CD pipelines remains complex. Accenture is positioning itself as a primary partner for Claude Code, Anthropic’s coding tool, which the company claims now holds over half of the AI coding market. The consultancy plans to train approximately 30,000 of its own professionals on Claude, creating one of the largest global ecosystems of practitioners familiar with the tool.

The promise of deeper enterprise integration of AI coding tools is a complete restructuring of the development hierarchy. The joint offering suggests that junior developers can utilise these tools to produce senior-level code and complete integration tasks more quickly, reducing onboarding times from months to weeks. Senior developers can then concentrate on high-value architecture, validation, and oversight.

Dario Amodei, CEO and Co-Founder of Anthropic, said: “AI is changing how almost everyone works, and enterprises need both cutting-edge AI and trusted expertise to deploy it at scale. Accenture brings deep enterprise transformation experience, and Anthropic brings the most capable models.
“Our new partnership means that tens of thousands of Accenture developers will be using Claude Code, making this our largest ever deployment—and the new Accenture Anthropic Business Group will help enterprise clients use our smartest AI models to make major productivity gains.”

Justifying AI inference costs and removing deployment barriers

A persistent friction point for enterprise leaders seeking deeper AI integration is justifying the ongoing cost of inference against actual business value. To counter this, the partnership is launching a specific product designed to help CIOs measure value and drive adoption across engineering organisations.

This offering attempts to provide a structured path for software design and maintenance, moving beyond the ad-hoc usage of coding assistants. It combines Claude Code with a framework for quantifying productivity gains and workflow redesigns tailored for AI-first development teams. For the enterprise, the goal is to translate individual developer efficiency into broader company impact, such as shorter development cycles and faster time-to-market for new products.

However, the most substantial barrier to AI adoption in the Global 2000 remains compliance. Sectors such as financial services, healthcare, and the public sector face strict governance requirements that often stall AI initiatives. Accenture and Anthropic are developing industry-specific enterprise AI solutions to address these deployment challenges.

In financial services, for instance, the focus is on automating compliance workflows and processing complex documents with the precision required for high-stakes decisions. Health and life sciences firms face a parallel demand. Here, the partnership aims to leverage Claude’s analytical capabilities to query proprietary datasets and streamline clinical trial processing.
For the public sector, the utility lies in AI agents that assist citizens in navigating government services while adhering to statutory data privacy requirements.

Julie Sweet, Chair and CEO of Accenture, commented: “With the powerful combination of Anthropic’s Claude capabilities and Accenture’s AI expertise and industry and function domain knowledge, organisations can embed AI everywhere responsibly and at speed – from software development to customer experience – to drive innovation, unlock new sources of growth, and build their confidence to lead in the age of AI.”

How Accenture and Anthropic are mitigating risks to support enterprise AI integration

To mitigate the risks associated with deploying non-deterministic models, the partnership emphasises “responsible AI.” This involves combining Anthropic’s “constitutional AI” principles – which embed safety rules directly into the model – with Accenture’s governance expertise.

Practical implementation will occur through Accenture’s network of Innovation Hubs, which will serve as controlled environments or “sandboxes”. These hubs allow clients to prototype and validate solutions without exposing production systems or sensitive data to risk. The companies also plan to co-invest in a ‘Claude Center of Excellence’ to design bespoke AI offerings tailored to specific industry needs.

This expanded partnership follows Anthropic reporting growth in its enterprise AI market share from 24 percent to 40 percent. For Accenture, establishing a dedicated business group with a specific go-to-market focus reflects a long-term commitment to the platform.

The era of standalone AI pilots is fading. The next phase for enterprise AI integration demands tight coupling between model capabilities, workforce training, and rigorous value measurement.

See also: OpenAI targets AI skills gap with new certification standards
The post Accenture and Anthropic partner to boost enterprise AI integration appeared first on AI News.
  21. Adoption of generative AI has outpaced workforce capability, prompting OpenAI to target the skills gap with new certification standards. While it’s safe to say OpenAI’s tools have reached mass adoption, organisations struggle to convert this usage into reliable output. To address this, OpenAI has announced ‘AI Foundations,’ a structured initiative designed to standardise how employees learn and apply the technology.

OpenAI’s initiative marks a necessary evolution in the vendor ecosystem, indicating a departure from the “move fast” phase of experimental deployment toward a focus on verifiable competence. OpenAI explicitly states its intention to certify 10 million Americans by 2030.

Workers and employers have an incentive to close the AI skills gap

The economic case for AI training and certification is rooted in wage and productivity data. Workers possessing AI skills earn approximately 50 percent more than those without them. However, CIOs often find that productivity gains on paper fail to materialise in practice. OpenAI notes that gains “only materialise when people have the skills to use the technology.”

Without guidance, widespread access can create operational risk. OpenAI admits the technology is “disruptive, leaving many people unsure which skills matter most.” By defining a standard curriculum, OpenAI aims to help organisations capture the efficiency gains promised by their software investments.

The delivery method for AI Foundations differs from traditional corporate LMS (Learning Management System) modules. The course sits directly inside ChatGPT, allowing the platform to act as “tutor, the practice space, and the feedback loop” simultaneously. This integration allows learners to execute real tasks and receive context-aware corrections to help close the AI skills gap, rather than just watching passive video content. Completing the programme yields a badge verifying “job-ready AI skills”.
This credential serves as a stepping stone toward a full OpenAI Certification. To ensure these badges carry weight in the labour market, OpenAI has engaged Coursera, ETS, and Credly by Pearson to validate the psychometric rigour and design of the assessments.

Operational pilots for the AI certification and improving the hiring pipeline

A consortium of large-scale employers and public-sector bodies will test the curriculum before a wider rollout. Pilot partners include Walmart, John Deere, Lowe’s, Boston Consulting Group, Russell Reynolds Associates, Upwork, Elevance Health, and Accenture. The Office of the Governor of Delaware is also participating, signalling interest from state-level administration.

These partners span industries with heavy operational footprints (including retail, agriculture, and healthcare), suggesting the training targets core business functions rather than just technical roles. OpenAI plans to use the next few months to refine the course based on data from these pilots to ensure that it can effectively close the AI skills gap.

OpenAI’s initiative extends into recruitment. The company is developing an ‘OpenAI Jobs Platform’ to connect certified workers with employers. Partnerships with Indeed and Upwork support this mechanism, aiming to make it easier for businesses to identify candidates with verified technical expertise.

For hiring managers, this offers a potential solution to the difficulty of vetting AI literacy. A standardised AI certification could reduce the reliance on self-reported skills, providing “portable evidence” of a candidate’s development.

Academic alignment to seed future AI talent

While the enterprise focus is immediate, OpenAI is also seeding the future talent pipeline. A ‘ChatGPT Foundations for Teachers’ course has launched on Coursera. With three in five teachers already using AI tools to save time and personalise materials, this stream aims to formalise existing habits.
Simultaneously, pilots with Arizona State University and the California State University system are creating pathways for students to certify their skills before entering the job market. This ensures that the next wave of graduates arrives with the “job-ready” verification that enterprise employers are beginning to demand. Organisations must now decide whether to rely on vendor-supplied certification or continue developing proprietary training. The involvement of firms like Boston Consulting Group and Accenture implies that major players see value in a standardised external benchmark. As OpenAI moves to certify millions of people and close the AI skills gap, the certification badge may become a baseline expectation for knowledge workers much like office suite proficiency in previous decades. See also: Instacart pilots agentic commerce by embedding in ChatGPT Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post OpenAI targets AI skills gap with new certification standards appeared first on AI News.
  22. For the past year, we’ve been told that artificial intelligence is revolutionising productivity—helping us write emails, generate code, and summarise documents. But what if the reality of how people actually use AI is completely different from what we’ve been led to believe? A data-driven study by OpenRouter has just pulled back the curtain on real-world AI usage by analysing over 100 trillion tokens—essentially billions upon billions of conversations and interactions with large language models like ChatGPT, Claude, and dozens of others. The findings challenge many assumptions about the AI revolution. OpenRouter is a multi-model AI inference platform that routes requests across more than 300 models from over 60 providers—from OpenAI and Anthropic to open-source alternatives like DeepSeek and Meta’s LLaMA. With over 50% of its usage originating outside the United States and serving millions of developers globally, the platform offers a unique cross-section of how AI is actually deployed across different geographies, use cases, and user types. Importantly, the study analysed metadata from billions of interactions without accessing the actual text of conversations, preserving user privacy while revealing behavioural patterns. Open-source AI models have grown to capture approximately one-third of total usage by late 2025, with notable spikes following major releases. The roleplay revolution nobody saw coming Perhaps the most surprising discovery: more than half of all open-source AI model usage isn’t for productivity at all. It’s for roleplay and creative storytelling. Yes, you read that right. While tech executives tout AI’s potential to transform business, users are spending the majority of their time engaging in character-driven conversations, interactive fiction, and gaming scenarios. Over 50% of open-source model interactions fall into this category, dwarfing even programming assistance. 
“This counters an assumption that LLMs are mostly used for writing code, emails, or summaries,” the report states. “In reality, many users engage with these models for companionship or exploration.” This isn’t just casual chatting. The data shows users treat AI models as structured roleplaying engines, with 60% of roleplay tokens falling under specific gaming scenarios and creative writing contexts. It’s a massive, largely invisible use case that’s reshaping how AI companies think about their products. Programming’s meteoric rise While roleplay dominates open-source usage, programming has become the fastest-growing category across all AI models. At the start of 2025, coding-related queries accounted for just 11% of total AI usage. By the end of the year, that figure had exploded to over 50%. This growth reflects AI’s deepening integration into software development. Average prompt lengths for programming tasks have grown fourfold, from around 1,500 tokens to over 6,000, with some code-related requests exceeding 20,000 tokens—roughly equivalent to feeding an entire codebase into an AI model for analysis. For context, programming queries now generate some of the longest and most complex interactions in the entire AI ecosystem. Developers aren’t just asking for simple code snippets anymore; they’re conducting sophisticated debugging sessions, architectural reviews, and multi-step problem solving. Anthropic’s Claude models dominate this space, capturing over 60% of programming-related usage for most of 2025, though competition is intensifying as Google, OpenAI, and open-source alternatives gain ground. The Chinese AI surge Another major revelation: Chinese AI models now account for approximately 30% of global usage—nearly triple their 13% share at the start of 2025. 
Models from DeepSeek, Qwen (Alibaba), and Moonshot AI have rapidly gained traction, with DeepSeek alone processing 14.37 trillion tokens during the study period. This represents a fundamental shift in the global AI landscape, where Western companies no longer hold unchallenged dominance. Simplified Chinese is now the second-most common language for AI interactions globally at 5% of total usage, behind only English at 83%. Asia’s overall share of AI spending more than doubled from 13% to 31%, with Singapore emerging as the second-largest country by usage after the United States. The rise of “Agentic” AI The study introduces a concept that will define AI’s next phase: agentic inference. This means AI models are no longer just answering single questions—they’re executing multi-step tasks, calling external tools, and reasoning across extended conversations. The share of AI interactions classified as “reasoning-optimised” jumped from nearly zero in early 2025 to over 50% by year’s end. This reflects a fundamental shift from AI as a text generator to AI as an autonomous agent capable of planning and execution. “The median LLM request is no longer a simple question or isolated instruction,” the researchers explain. “Instead, it is part of a structured, agent-like loop, invoking external tools, reasoning over state, and persisting across longer contexts.” Think of it this way: instead of asking AI to “write a function,” you’re now asking it to “debug this codebase, identify the performance bottleneck, and implement a solution”—and it can actually do it. The “Glass Slipper Effect” One of the study’s most fascinating insights relates to user retention. Researchers discovered what they call the Cinderella “Glass Slipper” effect—a phenomenon where AI models that are “first to solve” a critical problem create lasting user loyalty. 
When a newly released model perfectly matches a previously unmet need—the metaphorical “glass slipper”—those early users stick around far longer than later adopters. For example, the June 2025 cohort of Google’s Gemini 2.5 Pro retained approximately 40% of users at month five, substantially higher than later cohorts. This challenges conventional wisdom about AI competition. Being first matters, but specifically being first to solve a high-value problem creates a durable competitive advantage. Users embed these models into their workflows, making switching costly both technically and behaviorally. Cost doesn’t matter (as much as you’d think) Perhaps counterintuitively, the study reveals that AI usage is relatively price-inelastic. A 10% decrease in price corresponds to only about a 0.5-0.7% increase in usage. Premium models from Anthropic and OpenAI command $2-35 per million tokens while maintaining high usage, while budget options like DeepSeek and Google’s Gemini Flash achieve similar scale at under $0.40 per million tokens. Both coexist successfully. “The LLM market does not seem to behave like a commodity just yet,” the report concludes. “Users balance cost with reasoning quality, reliability, and breadth of capability.” This means AI hasn’t become a race to the bottom on pricing. Quality, reliability, and capability still command premiums—at least for now. What this means going forward The OpenRouter study paints a picture of real-world AI usage that’s far more nuanced than industry narratives suggest. Yes, AI is transforming programming and professional work. But it’s also creating entirely new categories of human-computer interaction through roleplay and creative applications. The market is diversifying geographically, with China emerging as a major force. The technology is evolving from simple text generation to complex, multi-step reasoning. And user loyalty depends less on being first to market than on being first to truly solve a problem. 
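The price-inelasticity claim earlier in the piece is easy to make concrete: own-price elasticity is the percentage change in usage divided by the percentage change in price, and the study's numbers put it far below one in absolute value. A minimal sketch of that arithmetic (the function name is illustrative; the study's exact estimation method is not described in the report):

```python
def price_elasticity(pct_change_usage: float, pct_change_price: float) -> float:
    """Own-price elasticity: % change in quantity demanded / % change in price."""
    return pct_change_usage / pct_change_price

# Per the study: a 10% price cut lifts usage by only about 0.5-0.7%.
low = price_elasticity(0.5, -10.0)
high = price_elasticity(0.7, -10.0)

# |elasticity| well below 1 means demand is inelastic: price cuts barely
# move usage, which is consistent with premium and budget models coexisting
# at very different price points.
print(f"implied elasticity: {low:.2f} to {high:.2f}")
# prints: implied elasticity: -0.05 to -0.07
```

An elasticity of roughly -0.05 to -0.07 is what the report means by the market "not behaving like a commodity": buyers are choosing on quality and capability, not chasing the cheapest token.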
As the report notes, “ways in which people use LLMs do not always align with expectations and vary significantly country by country, state by state, use case by use case.” Understanding these real-world patterns—not just benchmark scores or marketing claims—will be crucial as AI becomes further embedded in daily life. The gap between how we think AI is used and how it’s actually used is wider than most realise. This study helps close that gap. See also: Deep Cogito v2: Open-source AI that hones its reasoning skills The post How people really use AI: The surprising truth from analysing billions of interactions appeared first on AI News.
  23. Artificial intelligence is transforming the way information is created, summarised, and delivered. For publishers, the shift is already visible. Search engines provide AI-generated overviews, users get answers without clicking, and content is scraped by large language models that train on decades of journalism. In this environment, one question remains: How does a publisher survive when the traditional rules of distribution fall apart? Dev Pragad, the CEO of Newsweek, is offering one of the clearest answers. Pragad’s strategy begins with an acknowledgement of reality. In his view, publishers need to accept that the search-driven traffic model that defined the digital era is no longer dependable. AI-powered answer engines are restructuring the way users interact with information. A user might ask a question, receive a summary generated by an LLM, and never visit the publisher’s website. Page views become unpredictable, programmatic advertising becomes unstable, and legacy structures become vulnerable. Rather than respond with fear, Dev Pragad has taken a proactive approach grounded in three core areas: redesigning the brand so that it remains visually strong in any context; diversifying revenue so the business is not tied to a single distribution mechanism; and expanding the content formats that are less dependent on search engines and more aligned with the new habits of audiences. In September 2025, Newsweek unveiled its redesigned identity under the tagline ‘A World Drawn Closer’. This redesign, created with 2×4, introduced a refined wordmark, a bold ‘N’ icon, and a unified visual system used for print, digital, video and international editions. For the AI era, such coherence matters. An AI summary might reference Newsweek visually, a feed might show a thumbnail with minimal space, and a social clip might require brand clarity in a fraction of a second. The new design prepares Newsweek for the new reality by making the brand easy to identify. 
The editorial shift under Dev Pragad is also significant. Newsmakers, the series that features cultural leaders (Spike Lee, Liam Neeson, and Clark Hunt, for example), is available free on YouTube and digital platforms. The decision to make the series accessible at no cost is strategic. Video that travels across platforms is harder for AI summaries to replace. It is more immersive, reaches audiences directly, and builds brand equity and cultural relevance beyond search traffic. In interviews, Pragad has said Newsmakers represents the future of journalism, blending storytelling, accessibility and platform fluency. Each episode is supported by a companion article and a collectable cover, creating a cross-media footprint that is not reliant on one format or algorithm. In addition to editorial innovation, Newsweek is evolving its business architecture to withstand AI-driven disruption. While digital advertising remains part of the company’s revenue model, Pragad has expanded the title into events, direct advertising relationships, data-driven rankings, and verticals such as healthcare. This approach creates multiple revenue streams that do not depend on unpredictable traffic patterns. Another factor shaping Newsweek’s AI strategy is the way large language models scrape content. Newsweek monitors this activity through systems like TollBit, which track bot behaviour and provide insight into how often AI engines attempt to access the site. Pragad has turned down licensing deals that undervalued the worth of Newsweek’s archives and has advocated for fair compensation for the use of publisher content. He believes publishers must negotiate collectively and maintain leverage rather than rush into agreements that minimise the value of their intellectual property. The redesign also responds to the challenge of brand recognition in a world dominated by fast-moving feeds and AI-driven surfaces. 
Clear typography, concise visual hierarchy, and a distinct colour palette support recognition across AI-generated snippets, smart devices, social networks, and search previews. This is a design built for the realities of the modern information economy. Newsweek’s growth reflects the strength of these choices. The publication has been recognised as one of the fastest-rising digital news destinations in the US, and global audience numbers continue to climb. Although the company continues to evolve its revenue structure, its editorial mission remains grounded in fairness and trust. The new tagline reflects that commitment. Journalism brings the world closer when it is clear, accessible, and human-centred. The AI revolution has placed publishers in a difficult position, yet it has also opened an opportunity. Those willing to rethink design, editorial formats, AI licensing, distribution, and revenue have the chance to define what comes next. Under Dev Pragad Newsweek is doing exactly that. The company is no longer relying on assumptions about how audiences discover information. It’s building a future in which journalism can coexist with AI, not be erased by it. Dev Pragad has created a blueprint that demonstrates how a legacy publisher can reinvent itself for the AI age. Through design clarity, accessible cultural storytelling, diversified business models, and a firm stance on content value, he is positioning Newsweek not only to survive, but to lead in a world where information flows faster and more unpredictably than before. The result is a modern media entity built for a new era of intelligence, creativity, and connection. The post Newsweek: Building AI-resilience for the next era of information appeared first on AI News. View the full article
  24. Instacart has deployed an embedded checkout experience within ChatGPT through the emerging Agentic Commerce Protocol. With the deployment, the company is the first partner to launch an app on ChatGPT that offers a complete shopping cycle – from query to payment – without requiring the user to leave the conversation interface. Operationalising agentic commerce The integration fixes a broken link in conversational commerce: the “handoff”. Historically, AI models could suggest products or generate meal plans, but the execution phase required deep-linking out to a separate application or website, often resulting in cart abandonment. Under this new deployment, users can interact with the AI for meal planning and have the system build a cart based on local retailer inventory. The differentiator here is the checkout process. By leveraging the Agentic Commerce Protocol, the transaction is processed directly within the chat interface using a credit card flow powered by Stripe. According to Nick Turley, VP and Head of ChatGPT, the objective is to connect AI suggestions directly to real-world services. “With the Instacart app directly in ChatGPT, users can go from meal planning to checkout in a single, seamless conversation,” Turley said. “It’s another step toward bringing our vision to life—where AI delivers helpful suggestions and connects directly to real-world services, saving people time and effort in their everyday lives.” This integration goes deeper than standard API consumption. Instacart served as an early contributor to the OpenAI Operator research preview, providing feedback to ensure the technology could navigate real-world constraints while adhering to established norms. This “preview” involvement suggests that Instacart’s complex data environment – involving tens of thousands of SKUs and dynamic stock levels – served as a testing ground for OpenAI’s agentic capabilities. 
Rather than simply adopting the tool, Instacart helped define the parameters of how an AI agent interacts with external fulfilment logistics. The Instacart deployment underscores why structured, real-time data matters when integrating with large language models (LLMs). An AI agent is only as effective as the data it can access; hallucinations in a commercial context – such as selling out-of-stock items – carry financial and reputational risk. Anirban Kundu, CTO at Instacart, notes that powering shopping inside an AI agent requires technology capable of interpreting highly local and constantly fluctuating inventory. Instacart attempts to mitigate the “hallucination” risk by grounding the AI’s responses in its massive dataset, which covers more than 1.8 billion product instances across 100,000 stores. “Instacart and ChatGPT are redefining what’s possible in AI-powered shopping,” said Kundu. “Built on Agentic Commerce Protocol, this experience brings intelligent, real-time support to one of the most essential parts of daily life: getting groceries to feed your family. “Together, we’re creating a seamless and secure way for people to turn simple conversations into real-world action—helping customers go from inspiration to a full cart delivered from the store to their door with ease.” Dual adoption: Customer-facing and internal efficiency While the embedded checkout grabs headlines, Instacart’s broader plan involves extensive internal deployment. The company utilises ChatGPT Enterprise to streamline internal workflows, aimed at accelerating the development of customer experiences. Furthermore, they have deployed OpenAI’s Codex to power an internal coding agent. This dual approach – using AI to sell (Agentic Commerce) and AI to build (Codex) – offers a model for operations. It moves beyond isolated pilots into a holistic stance where generative models drive both revenue and R&D efficiency. The deployment points to a change in how brands view digital storefronts. 
Instacart’s approach appears to accept that consumer entry points are fragmenting. Rather than forcing all traffic through a proprietary app, the company is positioning its infrastructure as the backend fulfilment layer for third-party AI platforms. The company has explicitly stated its intention to bridge AI inspiration with real-world fulfilment, acting as a primary partner for major AI players including OpenAI, Google, and Microsoft. By embedding its service into these broad-reach platforms, Instacart aims to capture incremental demand that originates outside its native ecosystem. Implementation and availability of Instacart in ChatGPT The experience is currently active for users on desktop and mobile web platforms, while native mobile availability for iOS and Android applications is rolling out shortly. To access the feature, users must invoke the specific Instacart application within the ChatGPT interface (for example, by prompting “Instacart, can you help me shop for apple pie ingredients?”) and link their accounts. This opt-in mechanism ensures that data sharing is consensual, a requisite governance step for enterprises deploying consumer-facing AI agents. This integration serves as a case study of agentic AI for commerce. For retail and technology execs, the Instacart model demonstrates that the next phase of digital adoption involves preparing API structures and data pipelines to serve “non-human” customers (AI agents) as reliably as human ones. The focus must remain on data accuracy and real-time availability; without these foundations, agentic workflows will fail to deliver return on investment. See also: OpenAI: Enterprise users swap AI pilots for deep integrations
The post Instacart pilots agentic commerce by embedding in ChatGPT appeared first on AI News.
  25. According to OpenAI, enterprise AI has graduated from the sandbox and is now being used for daily operations with deep workflow integrations. New data from the company shows that firms are now assigning complex and multi-step workflows to models rather than simply asking for text summaries. The figures illustrate a marked shift in how organisations deploy generative models. With OpenAI’s platform now serving over 800 million users weekly, a “flywheel” effect is driving consumer familiarity into professional environments. The company’s latest report notes that over a million business customers now use these tools, and the goal is now even deeper integration. This evolution presents two realities for decision-makers: productivity gains are concrete, but a growing divide between “frontier” adopters and the median enterprise suggests that value depends heavily on usage intensity. From chatbots to deep reasoning The best metric for corporate deployment maturity is not seat count, but task complexity. OpenAI reports that ChatGPT message volume has grown eightfold year-over-year, but a better indicator for enterprise architects is the consumption of API reasoning tokens, which suggests deeper integrations are taking place. This figure has increased by nearly 320 times per organisation—evidence that companies are systematically wiring more intelligent models into their products to handle logic rather than basic queries. The rise of configurable interfaces supports this view. Weekly users of Custom GPTs and Projects (tools that allow workers to instruct models with specific institutional knowledge) have increased approximately 19x this year. Roughly 20 percent of all enterprise messages are now processed via these customised environments, indicating that standardisation is now a prerequisite for professional use. For enterprise leaders auditing the ROI of AI seats, the data offers evidence on time savings. 
On average, users attribute between 40-60 minutes of time saved per active day to the technology. The impact varies by function: data science, engineering, and communication professionals report higher savings (averaging 60-80 minutes daily). Beyond efficiency, the software is altering role boundaries. There is a specific effect on technical capability, particularly regarding code generation. Among enterprise users, OpenAI says that coding-related messages have risen across all business functions. Outside of engineering, IT, and research roles, coding queries have grown by an average of 36 percent over the past six months. Non-technical teams are using the tools to perform analysis that previously required specialised developers. Operational improvements extend across departments. Survey data shows 87 percent of IT workers report faster issue resolution, while 75 percent of HR professionals see improved employee engagement. Widening enterprise AI competence gap OpenAI’s data suggests that a split is forming between organisations that simply provide access to tools and those that embed integrations deeply into their operating models. The report identifies a “frontier” class of workers – those in the 95th percentile of adoption intensity – who generate six times more messages than the median worker. This disparity is stark at the organisational level. Frontier firms generate approximately twice as many messages per seat as the median enterprise and seven times more messages to custom GPTs. Leading firms are not just using the tools more frequently; they are investing in the infrastructure and standardisation required to make AI a persistent part of operations. Users who engage across a wider variety of tasks (roughly seven distinct types) report saving five times more time than those who limit their usage to three or four basic functions. 
Benefits correlate directly with the depth of use, implying that a “light touch” deployment plan may fail to deliver the anticipated ROI. While the professional services, finance, and technology sectors were early adopters and maintain the largest scale of usage, other industries are sprinting to catch up. The technology sector leads with 11x year-over-year growth, but healthcare and manufacturing follow closely with 8x and 7x growth respectively. Global adoption patterns also challenge the notion that this is solely a US-centric phenomenon. International usage is surging, with markets such as Australia, Brazil, the Netherlands, and France showing business customer growth rates exceeding 140 percent year-over-year. Japan has also surfaced as a key market, holding the largest number of corporate API customers outside of the US. OpenAI: Deep AI integrations accelerate enterprise workflows Examples of deployment highlight how these tools influence key business metrics. Retailer Lowe’s deployed an associate-facing tool to over 1,700 stores, resulting in a customer satisfaction score increase of 200 basis points when associates used the system. Furthermore, when online customers engaged with the retailer’s AI tool, conversion rates more than doubled. In the pharmaceutical sector, Moderna used enterprise AI to speed up the drafting of Target Product Profiles (TPPs), a process that typically involves weeks of cross-functional effort. By automating the extraction of key facts from massive evidence packs, the company reduced core analytical steps from weeks to hours. Financial services firm BBVA leveraged the technology to fix a bottleneck in legal validation for corporate signatory authority. By building a generative AI solution to handle standard legal queries, the bank automated over 9,000 queries annually, effectively freeing up the equivalent of three full-time employees for higher-value tasks. 
However, the transition to production-grade AI requires more than software procurement; it necessitates organisational readiness. The primary blockers for many organisations are no longer model capabilities, but implementation and internal structures. Leading firms consistently enable deep system integration by “turning on” connectors that give models secure access to company data. Yet, roughly one in four enterprises has not taken this step, limiting their models to generic knowledge rather than specific organisational context. Successful deployment relies on executive sponsorship that sets explicit mandates and encourages the codification of institutional knowledge into reusable assets. As the technology continues to evolve, organisations must adjust their approach. OpenAI’s data suggests that success now depends on delegating complex workflows with deep integrations rather than just asking for outputs, treating AI as a primary engine for enterprise revenue growth. See also: AWS re:Invent 2025: Frontier AI agents replace chatbots The post OpenAI: Enterprise users swap AI pilots for deep integrations appeared first on AI News.
