A recent industry report [PDF] argues that Britain’s railway network could carry an extra billion journeys by the mid-2030s, building on the 1.6 billion passenger rail journeys recorded in the year to March 2024. The next decade will bring both complexity and control, as more digital systems, data, and interconnected suppliers create more potential points of failure. The report’s central theme is that AI will become the operating system for modern rail: not a single, centralised collection of models and algorithms, but layers of prediction, optimisation, and automated monitoring across infrastructure, rolling stock, maintenance yards, and stations (pp.18-23). This technology will guide human focus within daily work rather than replace human activity entirely.

Maintenance to become predictive and data-driven

Traditional rail maintenance relies on fixed schedules and manual inspections, a reactive and labour-intensive practice; the whitepaper cites Network Rail’s reliance on engineers walking the track to spot defects (p.18). AI will shift the industry to predictive maintenance, analysing data from sensors to forecast failures before they cause significant disruption. This involves a combination of sensors and imaging, including high-definition cameras, LiDAR scanners, and vibration monitors, which provide machine-learning systems with data that can flag degradation in track, signalling, and electrical assets ahead of failure (pp.18-19). These monitoring programmes can generate alerts months in advance, reducing emergency call-outs, though the timeframe for predicting asset failure varies by asset type.
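The sensor-driven degradation flagging described above can be sketched as a rolling statistical check on a vibration feed. This is an illustrative toy, not Network Rail’s actual models; the signal values, window size, and threshold are assumptions.

```python
from statistics import mean, stdev

def flag_degradation(readings, window=30, threshold=3.0):
    """Flag indices where a vibration reading drifts beyond `threshold`
    standard deviations of the trailing window -- a crude stand-in for
    the machine-learning monitors the report describes."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# A stable signal with one sudden spike should raise a single alert.
signal = [1.0, 1.1, 0.9, 1.0] * 10 + [9.0]
print(flag_degradation(signal))
```

In practice a prediction system would fuse many such channels and forecast remaining useful life, but the core loop of comparing live readings against a learned baseline is the same.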
Network Rail’s intelligent infrastructure efforts aim to transition from “find and fix” to “predict and prevent.” Network Rail emphasises data-led maintenance and tools designed to consolidate asset information, while European R&D programmes (like Europe’s Rail and its predecessor, Shift2Rail) fund projects such as DAYDREAMS, similarly aimed at prescriptive asset management. Prediction at scale requires a common approach if it is to be transformative.

Traffic control and energy efficiency

Beyond predictive maintenance, operational optimisation offers significant returns. AI systems use live and historical operating data—train positions, speeds, weather forecasts—to anticipate disruption and adjust traffic flow. Digital twin and AI-based traffic management trials in Europe, alongside research and testing of AI-assisted driving and positioning, could increase overall network capacity without laying more track (p.20). Algorithms also advise drivers on optimal acceleration and braking, potentially saving 10-15% in energy. Considering route variations, traction, and timetable constraints, energy savings compound quickly across a large network.

Safety monitoring and CCTV

The most visible AI applications focus on safety and security. Obstacle detection uses thermal cameras and machine learning to identify hazards beyond human visibility. AI also monitors level crossings and analyses CCTV footage to spot unattended items and suspicious activity (pp.20-21). For example, AI and LiDAR are used for crowd monitoring at London Waterloo as part of a suite of safety tools.

Passenger flows and journey optimisation

AI can forecast demand using ticket sales, events, and mobile signals, allowing operators to adjust the number of carriages and reduce overcrowding, the report states. Passenger counting is a high-impact, low-drama application: better data supports better timetables and clearer customer information.
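As a sketch of the passenger-counting idea above: slot-averaged historical counts give a naive demand forecast, which an operator can turn into a carriage count. The numbers, seasonality, and capacity figure are assumptions for illustration, not any operator’s method.

```python
def forecast_demand(history, season=7):
    """Average each weekly slot's past values -- a naive seasonal
    forecast standing in for the demand models the report describes."""
    slots = [[] for _ in range(season)]
    for i, count in enumerate(history):
        slots[i % season].append(count)
    return [sum(s) / len(s) for s in slots]

def carriages_needed(passengers, per_carriage=120):
    # Ceiling division: never plan fewer seats than the forecast load.
    return -(-passengers // per_carriage)

# Two weeks of daily boardings; forecast next week, then size the trains.
history = [310, 420, 415, 430, 440, 520, 260,
           330, 400, 425, 450, 460, 540, 280]
forecast = forecast_demand(history)
print([carriages_needed(round(p)) for p in forecast])
```

A real system would add event calendars and mobile-signal features, but the operational output is the same shape: a per-slot demand estimate feeding a rolling-stock decision.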
Cybersecurity issues

As operational technology converges with IT, cybersecurity becomes a critical operational issue. Legacy systems lacking replacement plans pose a risk, as does integrating modern analytics with older infrastructure; together, these conditions are attractive to attackers. The future of AI in rail involves sensors performing in extreme environments, models trusted and tested by operators, and governance that treats cyber resilience as inseparable from physical safety. The report’s message is that AI will arrive regardless. The question is whether railways proactively adopt and control it or inherit it as unmanaged complexity.

(Image source: “Train Junction” by jcgoble3 is licensed under CC BY-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post The future of rail: Watching, predicting, and learning appeared first on AI News.
-
For a company built on intellectual property, scale creates a familiar tension. Disney needs to produce and distribute content across many formats and audiences, while keeping tight control over rights, safety, and brand consistency. Generative AI promises speed and flexibility, but unmanaged use risks creating legal, creative, and operational drag. Disney’s agreement with OpenAI shows how a large, IP-heavy organisation is attempting to resolve that tension by putting AI inside its operating system rather than treating it as a side experiment. Under the deal, Disney becomes both a licensing partner and a major enterprise customer. OpenAI’s video model Sora will be able to generate short, user-prompted videos using a defined set of Disney-owned characters and environments. Separately, Disney will use OpenAI’s APIs to build internal tools and new consumer experiences, including integrations tied to Disney+. The company will also deploy ChatGPT internally for employees.

The mechanics matter more than the spectacle. Disney is not opening its catalogue to unrestricted generation. The licence excludes actor likenesses and voices, limits which assets can be used, and applies safety and age-appropriate controls. In practice, this positions generative AI as a constrained production layer—capable of generating variation and volume, but bounded by governance.

AI inside existing workflows

A consistent failure mode in enterprise AI programmes is separation: tools live outside the systems where work actually happens, adding steps instead of removing them. Disney’s approach mirrors a more pragmatic pattern: put AI where decisions are already made. On the consumer side, AI-generated content will surface through Disney+, rather than through a standalone experiment. On the enterprise side, employees gain access to AI through APIs and a standardised assistant, rather than a patchwork of ad hoc tools. This reduces friction and makes AI usage observable and governable.
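The constrained-generation pattern described above (a defined asset set plus licence exclusions) reduces to a guard that runs before any model call. The asset names and excluded terms below are hypothetical placeholders; this is a sketch of the governance layer, not the actual licence terms or either company’s implementation.

```python
APPROVED_ASSETS = {"character_a", "castle_env"}   # hypothetical licensed set
EXCLUDED_TERMS = {"actor likeness", "voice"}      # hypothetical carve-outs

def validate_request(prompt, requested_assets):
    """Allow generation only for pre-approved assets, and reject prompts
    touching excluded territory -- so every model call is bounded by
    governance before it happens."""
    if not set(requested_assets) <= APPROVED_ASSETS:
        return False  # asset not in the licensed set
    lowered = prompt.lower()
    return not any(term in lowered for term in EXCLUDED_TERMS)

print(validate_request("character_a waves from castle_env", ["character_a", "castle_env"]))  # True
print(validate_request("match the actor likeness exactly", ["character_a"]))                 # False
```

Putting the check in front of the API, rather than relying on downstream review, is what makes high-volume generation governable rather than merely auditable.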
The implication is organisational. Disney is treating generative AI as a horizontal capability—closer to a platform service than a creative add-on. That framing makes it easier to scale usage across teams without multiplying risk.

Variation without expanding headcount

The Sora licence focuses on short-form content derived from pre-approved assets. That constraint is deliberate. In production environments, much of the cost sits not in ideation but in generating usable variations, reviewing them, and moving them through distribution pipelines. By allowing prompt-driven generation inside a defined asset set, Disney can reduce the marginal cost of experimentation and fan engagement without increasing manual production or review load. The output is not a finished film; it is a controlled input into marketing, social, and engagement workflows. This mirrors a broader enterprise pattern: AI earns its place when it shortens the path from intent to usable output, not when it creates standalone artefacts.

APIs over point tools

Beyond content generation, the agreement positions OpenAI’s models as building blocks. Disney plans to use APIs to develop new products and internal tools, rather than relying solely on off-the-shelf interfaces. This matters because enterprise AI programmes often stall on integration: teams waste time copying outputs between systems or adapting generic tools to fit internal processes. API-level access allows Disney to embed AI directly into product logic, employee workflows, and existing systems of record. In effect, AI becomes part of the connective tissue between tools, not another layer employees must learn to work around.

Aligning productivity with incentives

Disney’s $1 billion equity investment in OpenAI is less interesting as a valuation signal than as an operational one. It indicates an expectation that AI usage will be persistent and central, not optional or experimental.
For large organisations, AI investments fail when tooling remains disconnected from economic outcomes. Here, AI touches revenue-facing surfaces (Disney+ engagement), cost structures (content variation and internal productivity), and long-term platform strategy. That alignment increases the likelihood that AI becomes part of standard planning cycles rather than discretionary innovation spend.

Automation that makes scale less fragile

High-volume AI use amplifies small failures. Disney and OpenAI emphasise safeguards around IP, harmful content, and misuse, not as a values statement but as a scaling requirement. Strong automation around safety and rights management reduces the need for manual intervention and supports consistent enforcement. As with fraud detection or content moderation in other industries, this kind of operational AI does not attract attention when it works—but it makes growth less brittle.

Lessons for enterprise leaders

- Embed AI where work already happens. Disney targets product and employee workflows, not a separate AI sandbox.
- Constrain before you scale. Defined asset sets and exclusions make deployment viable in high-liability environments.
- Use APIs to reduce friction. Integration matters more than model novelty.
- Tie AI to economics early. Productivity gains stick when they connect to revenue and cost structures.
- Treat safety as infrastructure. Automation and controls are prerequisites for scale, not afterthoughts.

Disney’s specific assets are unique; the operating pattern is not. Enterprise AI delivers value when it is designed as part of the organisation’s core machinery—governed, integrated, and measured—rather than as a showcase for what models can generate.

(Photo by Héctor Vásquez)

See also: OpenAI targets AI skills gap with new certification standards

The post Why Disney is embedding generative AI into its operating model appeared first on AI News.
-
Arm Holdings has positioned itself at the centre of AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers a look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry.

From cloud to edge

Arm thinks the AI market is about to enter a new phase, moving from cloud-based processing to edge computing. While much of the media’s attention to date has been focused on massive data centres, with models trained in and accessed from the cloud, Jesaitis said that most AI compute, especially inference, is likely to become increasingly decentralised. “The next ‘aha’ moment in AI is when local AI processing is being done on devices you couldn’t have imagined before,” Jesaitis said. These devices range from smartphones and earbuds to cars and industrial sensors. Arm’s IP is already embedded in such devices: in the last year alone, Arm designs were behind over 30 billion chips, placed in devices of every conceivable description all over the world.

The deployment of AI in edge environments has several benefits, with the team at Arm citing three main ‘wins’. Firstly, the inherent efficiency of low-power Arm chips means lower power bills for compute and cooling, keeping the environmental footprint of the technology as small as possible. Secondly, processing AI locally means latency is much lower, since latency is determined by the distance between local operations and the site of the AI model. Arm points to uses like instant translation, dynamic scheduling of control systems, and the near-immediate triggering of safety functions – for instance in IIoT settings. Thirdly, ‘keeping it local’ means no potentially sensitive data is sent off-premises.
The benefits are obvious for any organisation in highly-regulated industries, but the increasing number of data breaches means even companies operating with relatively benign data sets are looking to reduce their attack surface. Arm silicon, optimised for power-constrained devices, is well-suited for compute where it’s needed on the ground, the company says. The future may well be one where AI is woven throughout environments, not centralised in a data centre run by one of the large providers.

Arm and global governments

Arm is actively engaged with global policymakers, considering this level of engagement an important part of its role. Governments continue to compete to attract semiconductor investment, the issues of supply chains and concentrated dependencies still fresh in many policymakers’ memories from the COVID-19 pandemic. Arm lobbies for workforce development, working at present with policymakers in the White House on an education coalition to build an ‘AI-ready workforce’. Domestic independence in technology relies as much on the abilities of the workforce as it does on the availability of hardware. Jesaitis noted a divergence between regulatory environments: the US prioritises what its government terms acceleration and innovation, while the EU leads on safety, privacy, security, and legally-enforced standards of practice. Arm aims to find the middle ground between these approaches, building products that meet stringent global compliance needs while furthering advances in the AI industry.

The enterprise case for edge AI

The case for integrating Arm’s edge-focused AI architecture into enterprise transformation strategies can be persuasive. The company stresses its ability to offer scalable AI without the need to centralise in the cloud, and is also pushing its investment in hardware-level security. That means issues like memory exploits (outside the control of users plugged into centralised AI models) can be avoided.
Of course, sectors already highly-regulated in terms of data practices are unlikely to see relaxed governance in the future – quite the opposite. All industries will face more regulation and greater penalties for non-compliance in the years to come. To balance that, there are significant competitive advantages available to those that can demonstrate their systems’ inherent safety and security. It’s into this regulatory landscape that Arm sees itself and local, edge AI fitting. Additionally, in Europe – and Scandinavia in particular – ESG goals are going to be increasingly important. Here, the power-sipping nature of Arm chips offers big advantages. That’s a trend even the US hyperscalers are responding to: AWS’s Graviton range of low-cost, low-power Arm-based platforms exists to satisfy that exact demand. Arm’s collaboration with cloud hyperscalers such as AWS and Microsoft produces chips that combine efficiency with the necessary horsepower for AI applications, the company says.

What’s next from Arm and the industry

Jesaitis pointed out several trends that enterprises may see in the next 12 to 18 months. Global AI exports, particularly from the US and the Middle East, are ensuring that local demand for AI can be satisfied by the big providers. Arm can supply both the big providers in these contexts (as part of their portfolios of offerings) and the rising demand for edge-based AI. Jesaitis also sees edge AI as something of a hero of sustainability in an industry increasingly under fire for its ecological impact. Because Arm technology’s biggest market has been low-power compute for mobile, it’s inherently ‘greener’. As enterprises hope to meet energy goals without sacrificing compute, Arm offers a way to combine performance with responsibility.
Redefining “smart”

Arm’s vision of AI at the edge means computers and the software running on them can be context-aware, cheap to run, secure by design, and – thanks to near-zero network latency – highly responsive. Jesaitis said, “We used to call things ‘smart’ because they were online. Now, they’re going to be truly intelligent.”

(Image source: “Factory Floor” by danielfoster437 is licensed under CC BY-NC-SA 2.0.)

The post Arm and the future of AI at the edge appeared first on AI News.
-
Under China’s push to clean up its energy system, AI is starting to shape how power is produced, moved, and used — not in abstract policy terms, but in day-to-day operations. In Chifeng, a city in northern China, a renewable-powered factory offers a clear example. The site produces hydrogen and ammonia using electricity generated entirely from nearby wind and solar farms. Unlike traditional plants connected to the wider grid, this facility runs on its own closed system. That setup brings a problem as well as a benefit: renewable power is clean, but it rises and falls with the weather. To keep production stable, the factory relies on an AI-driven control system built by its owner, Envision. Rather than following fixed schedules, the software continuously adjusts output based on changes in wind and sunlight. As reported by Reuters, Zhang Jian, Envision’s chief engineer for hydrogen energy, compared the system to a conductor, coordinating electricity supply and industrial demand in real time. When wind speeds increase, production ramps up automatically to take full advantage of the available power. When conditions weaken, electricity use is quickly reduced to avoid strain. Zhang said the system allows the plant to operate at high efficiency despite the volatility of renewable energy. Projects like this are central to China’s plans for hydrogen and ammonia, fuels seen as important for cutting emissions in sectors such as steelmaking and shipping. They also point to a broader strategy: using AI to manage complexity as the country adds more renewable power to its grid. Researchers argue that AI could play a significant role in meeting China’s climate goals. Zheng Saina, an associate professor at Southeast University in Nanjing who studies low-carbon transitions, said AI can support tasks ranging from emissions tracking to forecasting electricity supply and demand. 
At the same time, she cautioned that AI itself is driving rapid growth in power consumption, particularly through energy-hungry data centres. China now installs more wind and solar capacity than any other country, but absorbing that power efficiently remains a challenge. According to Cory Combs, associate director at Beijing-based research firm Trivium China, AI is increasingly seen as a way to make the grid more flexible and responsive. That thinking was formalised in September, when Beijing introduced an “AI+ energy” strategy. The plan calls for deeper links between AI systems and the energy sector, including the development of multiple large AI models focused on grid operations, power generation, and industrial use. By 2027, the government aims to roll out dozens of pilot projects and test AI across more than 100 use cases. Within another three years, officials want China to reach what they describe as a world-leading level of AI integration in energy. Combs said the focus is on highly specialised tools designed for specific jobs, such as managing wind farms, nuclear plants, or grid balancing, rather than general-purpose AI. This approach contrasts with the United States, where much of the investment has gone into building advanced large-language models, according to Hu Guangzhou, a professor at the China Europe International Business School in Shanghai. One area where AI could have immediate impact is demand forecasting. Fang Lurui, an assistant professor at Xi’an Jiaotong-Liverpool University, said power grids must match supply and demand at every moment to avoid outages. Accurate forecasts of renewable output and electricity use allow operators to plan ahead, storing energy in batteries when needed and reducing reliance on coal-fired backup plants. Some cities are already experimenting. 
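The forecast-then-store logic Fang describes can be sketched as a greedy dispatch: charge storage when forecast renewable output exceeds forecast demand, discharge it to cover shortfalls, and count whatever remains as dispatchable (e.g. coal-fired) backup. The numbers and battery capacity are invented for illustration; this is a toy, not any grid operator’s algorithm.

```python
def dispatch_battery(renewable_forecast, demand_forecast, capacity=100.0):
    """Greedy schedule over forecast periods: bank surpluses, cover
    shortfalls from storage, and tally the residual backup needed."""
    stored, backup_used, plan = 0.0, 0.0, []
    for supply, demand in zip(renewable_forecast, demand_forecast):
        surplus = supply - demand
        if surplus >= 0:
            charge = min(surplus, capacity - stored)  # respect capacity
            stored += charge
            plan.append(("charge", charge))
        else:
            discharge = min(-surplus, stored)  # can't discharge what isn't stored
            stored -= discharge
            backup_used += -surplus - discharge  # remainder falls to backup plants
            plan.append(("discharge", discharge))
    return plan, backup_used

plan, coal = dispatch_battery([50, 80, 20], [40, 40, 60])
print(plan, coal)
```

The better the forecasts, the less the schedule leans on `backup_used`, which is exactly why forecast accuracy translates directly into reduced reliance on coal-fired backup.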
Shanghai has launched a citywide virtual power plant that links dozens of operators — including data centres, building systems, and electric vehicle chargers — into a single coordinated network. During a trial last August, the system reduced peak demand by more than 160 megawatts, roughly equivalent to the output of a small coal plant. Combs said such systems matter because modern power generation is increasingly scattered and intermittent. “You need something very robust that is able to be predictive and account for new information very quickly,” he said. Beyond the grid, China is also looking to apply AI to its national carbon market, which covers more than 3,000 companies in emissions-heavy industries such as power, steel, cement, and aluminium. These sectors together produce over 60% of the country’s carbon emissions. Chen Zhibin, a senior manager at Berlin-based think tank adelphi, said AI could help regulators verify emissions data, refine the allocation of free allowances, and give companies clearer insight into their production costs. Still, the risks are growing alongside the opportunities. Studies suggest that by 2030, China’s AI data centres could consume more than 1,000 terawatt-hours of electricity each year — roughly the same as Japan’s current annual usage. Lifecycle emissions from the AI sector are projected to rise sharply and peak well after China’s 2030 emissions target. Xiong Qiyang, a doctoral researcher at Renmin University of China who worked on one such study, said the results reflect the reality that coal still dominates China’s power mix. He warned that rapid AI expansion could complicate national climate goals if energy sources do not shift quickly enough. In response, regulators have begun tightening rules. A 2024 action plan requires data centres to improve energy efficiency and increase their use of renewable power by 10% each year. 
Other initiatives encourage new facilities to be built in western regions, where wind and solar resources are more abundant. Operators on the east coast are also testing new ideas. Near Shanghai, an underwater data centre is set to open, using seawater for cooling to cut energy and water use. The developer, Hailanyun, said the facility will draw most of its power from an offshore wind farm and could be replicated if the project proves viable. Despite the growing energy demands of AI, Xiong argued that its overall impact on emissions could still be positive if applied carefully. Used to optimise heavy industry, power systems, and carbon markets, he said, AI may remain an essential part of China’s effort to cut emissions — even as it creates new pressures that policymakers must manage.

(Photo by Matthew Henry)

See also: Can China’s chip stacking strategy really challenge Nvidia’s AI dominance?

The post Inside China’s push to apply AI across its energy system appeared first on AI News.
-
For large retailers, the challenge with AI is no longer whether it can be useful, but how it fits into everyday work. A new three-year AI partnership by Tesco points to how one of the UK’s biggest supermarket groups is trying to answer that question. The agreement with French startup Mistral AI is less about launching a single AI feature and more about building long-term capability. Tesco plans to work with Mistral to develop AI tools that can be used across its business, from internal workflows to customer-facing systems. Rather than framing the deal around bold promises, Tesco has described it in practical terms. The aim, according to the company, is to save staff time, improve how teams work, and strengthen customer service. Ruben Lara Hernandez, Tesco’s Data, Analytics & AI Director, said the partnership brings together Tesco’s retail experience with Mistral’s technology, with the hope that it will help colleagues work more efficiently and support customers more effectively. He added that the agreement builds on Tesco’s history of developing technology and AI tools that support customers, staff, and suppliers.

Tesco moves AI into day-to-day retail operations

That focus reflects a wider shift in how enterprises are using AI. Early retail experiments often centred on customer-facing tools that were easy to see but hard to scale. More recent efforts are aimed at internal use, where AI can reduce repetitive work, support planning, and help staff make decisions faster. Tesco has already been moving in that direction. Over the past five years, the retailer has doubled the size of its technology team, signalling that software and data are now core to how the business operates. AI is already used across several parts of the organisation, both through in-house development and partnerships. In online grocery, AI helps find more efficient delivery routes, which can open up additional delivery slots for customers.
In supply planning, AI supports complex demand forecasting, helping stores maintain product availability. The company also uses AI to personalise how it engages with customers through its Clubcard loyalty scheme, tailoring offers and communications based on shopping behaviour. The partnership with Mistral is meant to build on this existing work rather than replace it. One factor behind the collaboration is Mistral’s approach to model deployment, which allows AI systems to run in more controlled environments. For a retailer handling large volumes of customer and operational data, that level of control matters. Mistral’s Chief Revenue Officer and US General Manager, Marjorie Janiewicz, said the company’s Applied AI team will work closely with Tesco’s internal experts. She described the goal as building AI products that are customisable and controllable, with a focus on improving internal workflows and customer experience.

A long-term partnership, not a one-off project

The structure of the partnership also suggests a cautious approach to scale. Tesco plans to create an internal AI lab as part of the agreement, giving teams space to test and refine tools before wider deployment. For large organisations, this kind of setup can help prevent AI projects from remaining stuck in isolated pilots or specialist teams. There is also a strategic angle to the choice of partner. Mistral AI is the only European company developing large language models at the frontier level, and Tesco is the first major UK retailer to form a partnership with the startup as part of its broader AI and technology plans. Founded in April 2023, Mistral has grown quickly and counts enterprises such as HSBC, AXA, and Stellantis among its customers. For Tesco, the harder work now lies in execution. Retail data is often fragmented across regions, systems, and channels, and AI systems depend on that data being accurate and consistent.
Rolling tools out across a business of this size also requires training, oversight, and trust from staff who use them daily. Whether the partnership succeeds will likely depend on how visible its impact becomes inside the organisation. If AI tools help store teams, planners, and analysts work more smoothly, the gains may be gradual rather than dramatic. As retailers look to move beyond experimentation, Tesco’s approach offers a view of how enterprise AI is starting to settle into routine operations — not as a single solution, but as a steady process of change.

(Photo by Shashank Verma)

See also: Zara’s use of AI shows how retail workflows are quietly changing

The post Tesco signs three-year AI deal centred on customer experience appeared first on AI News.
-
Marketing agencies using AI in workflows serve more clients
Of all the many industries, it’s marketing where AI is no longer an “innovation lab” side project but embedded in briefs, production pipelines, approvals, and media optimisation. A WPP iQ post published in December, based on a webinar with WPP and Stability AI, shows what AI deployment in daily operations looks like, with a focus on the practical constraints that determine whether AI changes daily work or merely adds another layer of complexity or tooling.

Brand accuracy as a repeatable capability

Marketing agencies treat brand accuracy as something to be engineered. WPP and Stability AI note that off-the-shelf models “don’t come trained on your brand’s visual identity”, so outputs can often look generic. The companies’ remedy is fine-tuning: training models on brand-specific datasets so the model learns the brand playbook, including style, look, and colours. These elements can then be reproduced consistently. WPP’s work for Argos is a prime example. After fine-tuning a model for the retailer, the team described how the model picked up details beyond the characters, including lighting and subtle shadows used in the brand’s 3D animations. Reproducing these finer details is where time disappears in production, in the form of re-rendering and several rounds of approvals. When AI outputs start closer to “finished”, teams spend less time correcting and more time shaping narratives and adapting media for different channels.

Cycle time collapses (and calendars change)

WPP and Stability AI point out that traditional 3D animation can be too slow for reactive marketing. After all, cultural moments demand immediate content, not cycles defined in weeks or months. In its Argos case study, WPP trained custom models on two 3D toy characters so the models learned how they look and behave, including details such as proportions and how the characters hold objects. The outcome was “high-quality images…generated in minutes instead of months”.
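The brand fine-tuning step above starts with curating a training set, and that curation is where brand and rights rules are enforced. The sketch below assembles a hypothetical fine-tuning manifest; the asset paths, captions, palette, and checks are invented for illustration and are not WPP’s or Stability AI’s actual pipeline.

```python
BRAND_PALETTE = {"#E4002B", "#FFFFFF"}  # assumed brand colours

def build_manifest(assets):
    """Keep only cleared, on-brand assets and pair each image with a
    caption intended to teach the model the brand's look."""
    manifest = []
    for asset in assets:
        if not asset["rights_cleared"]:
            continue  # never train on unlicensed material
        if not BRAND_PALETTE & set(asset["colours"]):
            continue  # off-palette renders dilute the brand look
        manifest.append({"image": asset["path"],
                         "caption": f"{asset['subject']}, brand 3D style, soft shadows"})
    return manifest

assets = [
    {"path": "toy_a/001.png", "subject": "toy character A",
     "colours": ["#E4002B"], "rights_cleared": True},
    {"path": "stock/raw.png", "subject": "generic prop",
     "colours": ["#00FF00"], "rights_cleared": False},
]
print(len(build_manifest(assets)))
```

Whatever the actual tooling, the point stands: governance applied at dataset-assembly time is what lets the fine-tuned model reproduce the brand playbook safely downstream.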
The accelerated workflow moves rather than removes production bottlenecks. If generating variations becomes fast, then review, compliance, rights management, and distribution become the constraints. Those issues were always there, but the speed of AI here exposes the difference between what’s possible and what’s actually embedded and accepted in workflows. Agencies that want AI to change daily operations have to redesign the workflow around it, not just add the technology as a new tool.

The “AI front end” becomes essential

WPP and Stability AI call out a “UI problem”, where creative teams lose time because interfaces to common tools are “disconnected, complex and confusing”, forcing workarounds and constant asset movement between tools. Often, the response is a bespoke, brand-specific front end with complex workflows in the back end. WPP positions WPP Open as a platform that encodes WPP’s proprietary knowledge into “globally accessible AI agents”, which help teams plan, produce, create media, and sell. Operational gains come from cleaner handoffs between tools, as work moves from briefs into production, assets into activation, and performance signals back into planning.

Self-serve capability changes agency operations

AI-powered marketing platforms are also becoming client-facing. Operationally, that pushes agencies to concentrate on the parts of the workflow their clients can’t self-serve easily, like designing the brand system, building fine-tunings, and ensuring governance is embedded.

Governance moves from policy to workflow

For AI to be used daily, governance needs to be embedded where work happens. Dentsu describes building “walled gardens”, which are digital spaces where employees can prototype and develop AI-enabled solutions securely, and commercialise the best ideas. This reduces the risk of sensitive data exposure and lets experiments move into production systems.
Planning and insight compress too The operational impact is not limited to production. Publicis Sapient describes AI-powered content strategy and planning that “transforms months of research into minutes of insight” by combining large language models with contextual knowledge and prompt libraries [PDF]. Research and brief development compress, so more client work can happen and the agency can respond faster to shifting culture and platform algorithms. What changes for people Across these examples, the impact on marketing professionals is one of rebalancing and shifting job descriptions. Less time goes on mechanical drafting, resizing, and versioning, and more time goes on brand stewardship. New operational roles expand, with titles like model trainer, workflow designer, and AI governance lead. AI makes the biggest operational difference when agencies use customised models, usable front ends that make adoption (especially by clients) frictionless, and integrated platforms that connect planning, production, and execution. The headline benefit is speed and scale, but the deeper change is that marketing delivery starts to resemble a software-enabled supply chain: standardised, flexible where it needs to be, and measurable. (Image source: “Solar Wind Workhorse Marks 20 Years of Science Discoveries” by NASA Goddard Photo and Video is licensed under CC BY 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Marketing agencies using AI in workflows serve more clients appeared first on AI News. View the full article -
Cognizant, Tata Consultancy Services, Infosys, and Wipro have announced plans to deploy more than 200,000 Microsoft Copilot licenses in their enterprises – over 50,000 per company – in what Microsoft is calling a new benchmark for enterprise-scale adoption of generative AI. The companies involved are framing the move as the implementation of a default tool for hundreds of thousands of employees involved in consulting, delivery, operations, and software. The announcement, made in Bengaluru on December 11, was timed to coincide with Microsoft CEO Satya Nadella’s visit to India. There, and across the industrialised world, there’s been growing momentum for agentic AI – AI systems that do more than chat, executing multi-step work in business processes. The four firms want to be seen as AI advisors for clients, with extensive experience drawn from their internal rollouts of AI. Why enterprises care about Copilot Readers will be familiar with Microsoft 365 Copilot, the AI assistant embedded in standard workplace tools: Word, Excel, PowerPoint, Outlook, and Teams. It’s intended to help users draft, summarise, and analyse, turning natural-language queries into work-related outputs. Copilot combines large language models with Microsoft 365 apps and organisational data from Microsoft Graph, with the assistant working in the context of a user’s files, meetings, and messages. This ability is, of course, subject to access controls already in place and defined by the organisation. For large organisations, the embedding of AI into workflows is important. A firm shouldn’t have to rebuild its toolchain to experiment with AI, but rather start using AI in the software and documents its workforce already uses. The raft of benefits is practical and work-focused: faster documentation, quicker meeting follow-ups, faster draft proposals, better discovery of information from internal knowledge repositories, and, with agentic AI, the automation of repetitive tasks. 
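The access-control point above is the key architectural constraint: an assistant grounded in organisational data may only surface documents the requesting user could already open. A minimal sketch of that permission-trimmed retrieval pattern (all names are hypothetical illustrations, not the Microsoft Graph API):

```python
# Hedged sketch of permission-trimmed retrieval: filter by the user's
# existing access rights BEFORE the model ever sees the documents.
# Document, retrieve, and the sample corpus are invented for illustration.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set

def retrieve(query_terms, corpus, user):
    """Return matching documents, restricted to those the user can access."""
    visible = [d for d in corpus if user in d.allowed_users]
    return [d for d in visible
            if any(t.lower() in d.text.lower() for t in query_terms)]

corpus = [
    Document("d1", "Q3 delivery plan for client onboarding", {"asha", "ben"}),
    Document("d2", "Confidential salary review notes", {"hr_lead"}),
]
hits = retrieve(["onboarding"], corpus, user="asha")
print([d.doc_id for d in hits])  # only documents asha can already open
```

The design choice matters: trimming happens before grounding, so a well-phrased prompt cannot leak a document the organisation's existing permissions would have blocked.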
From Copilots to frontier firms and agents Microsoft uses the term “Frontier Firms” to describe organisations that are “human-led and agent-operated”; where employees work alongside AI assistants and specialised agents that take on work processes. The designation of ‘Frontier Firm’ status aligns with Microsoft’s messaging at Microsoft Ignite 2025, where the company described agents reinventing business processes and amplifying impact through human-agent teamwork. In very simple terms, the company’s pitch is to move from “AI helps you write” to “AI helps run workflows.” Why IT services firms are making public commitments There are two reasons why the four firms are rolling out the technology at such a large scale. First, to improve internal productivity. The Times of India reports the deployments are intended to integrate Copilot into workflows in consulting, software development, operations and client delivery, with the aim of improving productivity. At large multinational companies, margins depend on delivery efficiency and knowledge reuse, so shaving minutes from everyday tasks for tens of thousands of workers produces meaningful gains. Second, client credibility. The consultancy companies serve global enterprises, including many Fortune 500 clients, which means their internal operating model can, and perhaps should, become their clients’ playbook. If consultancies can demonstrate mature governance, training, and measurable outcomes with Copilot at scale in their own operations, it strengthens their case when selling similar transformations to potential and existing clients. Hyperscalers’ investment in India The Copilot announcement came immediately after Microsoft said it would invest $17.5 billion in India between 2026-2029, money destined for cloud and AI infrastructure, skilling, and operations. The company describes this as its largest investment in Asia to date. 
Other major tech firms are making parallel moves: Reuters reported in December 2025 that Amazon/AWS planned to invest over $35 billion in India by 2030, expanding its operations and AI capabilities, for example. Together, such moves underscore India’s growing position as a massive enterprise market and strategic hub for AI talent and cloud infrastructure. For India’s IT services leaders, Copilot is being positioned as a way to stay ahead of the competitive curve and define “AI-first delivery.” (Image source: “Gobbling Indian view of Clinch River” by dmott9 is licensed under CC BY-ND 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post 50,000 Copilot licences for Indian service companies appeared first on AI News. View the full article
-
Zara is testing how far generative AI can be pushed into everyday retail operations, starting with a part of the business that rarely gets attention in technology discussions: product imagery. Recent reporting shows the retailer using AI to generate new images of real models wearing different outfits, based on existing photoshoots. Models remain involved in the process, including consent and compensation, but AI is used to extend and adapt imagery without repeating production from scratch. The stated aim is to speed up content creation and reduce the need for repeated shoots. On the surface, the change looks incremental. In practice, it reflects a familiar pattern in enterprise AI adoption, where technology is introduced not to overhaul how a business works, but to remove friction from tasks that repeat at scale. How Zara uses AI to reduce friction in repeatable retail work For a global retailer like Zara, imagery is not a creative afterthought. It is a production requirement tied directly to how quickly products can be launched, refreshed, and sold across markets. Each item typically needs multiple visual variations for different regions, digital channels, and campaign cycles. Even when garments change only slightly, the surrounding production work often starts again from scratch. That repetition creates delays and cost that are easy to overlook precisely because they are routine. AI offers a way to compress those cycles by reusing approved material and generating variations without resetting the entire process. AI enters the production pipeline The placement of the technology is as important as the capability itself. Zara is not positioning AI as a separate creative product or asking teams to adopt an entirely new workflow. The tools are being used inside an existing production pipeline, supporting the same outputs with fewer handoffs. That keeps the focus on throughput and coordination rather than experimentation. 
This kind of deployment is typical once AI moves beyond pilot stages. Rather than asking organisations to rethink how work is done, the technology is introduced where constraints already exist. The question becomes whether teams can move faster and with less duplication, not whether AI can replace human judgement. The imagery initiative also sits alongside a broader set of data-driven systems that Zara has built up over time. The retailer has long relied on analytics and machine learning to forecast demand, allocate inventory, and respond quickly to changes in customer behaviour. Those systems depend on fast feedback loops between what customers see, what they buy, and how stock moves through the network. From that perspective, faster content production supports the wider operation even if it is not framed as a strategic shift. When product imagery can be updated or localised more quickly, it reduces lag between physical inventory, online presentation, and customer response. Each improvement is small, but together they help maintain the pace that fast fashion relies on. From experimentation to routine use Notably, the company has avoided framing this move in grand terms. There are no published figures on cost savings or productivity gains, and no claims that AI is transforming the creative function. The scope remains narrow and operational, which limits both risk and expectation. That restraint is often a sign that AI has moved out of experimentation and into routine use. Once technology becomes part of day-to-day operations, organisations tend to talk about it less, not more. It stops being an innovation story and starts being treated as infrastructure. There are also constraints that remain visible. The process still relies on human models and creative oversight, and there is no suggestion that AI-generated imagery operates independently. Quality control, brand consistency, and ethical considerations continue to shape how the tools are applied. 
AI extends existing assets rather than generating content in isolation. This is consistent with how enterprises typically approach creative automation. Rather than replacing subjective work outright, they target the repeatable components around it. Over time, those changes accumulate and reshape how teams allocate effort, even if the core roles remain intact. Zara’s use of generative AI does not signal a reinvention of fashion retail. It shows how AI is beginning to touch parts of the organisation that were previously considered manual or difficult to standardise, without changing how the business fundamentally operates. In large enterprises, that is often how AI adoption becomes durable. It does not arrive through sweeping strategy announcements or dramatic claims. It takes hold through small, practical changes that make everyday work move a little faster — until those changes become hard to imagine doing without. (Photo by M. Rennim) See also: Walmart’s AI strategy: Beyond the hype, what’s actually working Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Zara’s use of AI shows how retail workflows are quietly changing appeared first on AI News. View the full article
-
Human Resources is an area in many organisations where AI can have significant operational impact. The technology is now being embedded into day-to-day operations, in activities like answering employees’ questions and supporting training. The clearest impact appears where organisations can measure the technology’s outcomes, typically in time saved and the number of queries successfully resolved. Fewer tickets, more first-time answers IBM’s internal virtual agent, AskHR, was built to handle employee queries and automate routine HR actions. IBM says AskHR automates more than 80 internal HR tasks and has engaged in over two million conversations with employees every year. It uses a two-tier approach, where AI resolves routine issues, and human advisers handle more complex cases. The company reports some operational benefits: a 94% success rate in answering commonly-asked questions, a 75% reduction in the number of lodged support tickets since 2016, and – the headline figure – a 40% reduction in HR operational costs over four years. Importantly, IBM does not use AI merely to route queries to existing materials: the automation can complete the transaction itself, reducing the need to hand off queries to human staff. Recruitment and onboarding efficiencies Vodafone’s 2024 annual report describes an internal platform it calls ‘Grow with Vodafone‘. The company says it’s reduced its time-to-hire from 50 days to 48 days, made the job application process simpler, and added personalised skills-based job recommendations for applicants. That’s led to a 78% reduction in questions posed by potential applicants and those onboarding into new roles. 
The company also has a global headcount planning tool that reduces the manual work needed to assemble necessary data, plus there’s an AI-powered global HR ‘data lake’ that standardises dashboards and reduces the need for manual reporting – stakeholders can dive into the data themselves and surface the insights they need. Training and internal support Big employers face a challenge getting new staff up to speed quickly, so-called time-to-competence. Bank of America’s newsroom describes how its onboarding and professional development organisation, ‘The Academy’, uses AI for interactive coaching, with employees completing over a million simulations in a year. The organisation operates ‘Erica for Employees‘, an internal assistant that handles topics like health benefits and payroll or tax forms for employees. It’s used by over 90% of employees, and for the IT service desk, having Erica triage situations is impactful, with incoming calls reduced by more than 50%. Such tools reduce hidden work (searching, repeating questions, waiting for answers) and its associated costs. Plus, a shorter time-to-competence is especially valuable in regulated and customer-facing environments. Frontline work at big employers Walmart’s June 2025 corporate update describes rolling out AI tools via its associates’ app, which include a workflow tool that prioritises and recommends work tasks. At the time of publication, it was early days, but based on early results, Walmart says team leads and store managers are beginning to see shift planning times fall from 90 to 30 minutes. As an employer of a diverse workforce, its app’s real-time translation ability (44 languages) is invaluable. The company is currently upgrading its associates’ software with AI to turn its internal process guides into multilingual instructions. It has more than 900,000 employees using the system every week, with more than three million queries per day going through the associates’ conversational AI platform. 
Workforce efficiency at Walmart’s scale is impressive, but businesses of every size can gain clear advantages from giving employees faster guidance and better support across multilingual teams. In addition to the immediate cost savings, simple-to-use and effective software of this type affects retention, safety standards, and service quality – all for the better. Governance and human safety nets Multinational bank HSBC’s publication, “Transforming HSBC with AI”, describes over 600 AI use cases in operation at the company, and says colleagues have access to an LLM-based productivity tool for tasks like translation and document analysis. In an environment where governance and data security are of paramount importance, the bank ensures that all automated systems abide by existing codes, something that’s enforced by dedicated AI Review Councils and AI lifecycle management frameworks. In HR this matters, regardless of vertical. Governance decisions should shape what can be automated, how people data is handled, and how accountability is maintained into the long term. HR data is often personally identifiable, so the highest standards – and their maintenance – are critically important. Operational trade-offs Operational impact is about trust as well as speed and efficiency. A self-service agent that answers confidently but incorrectly creates rework, escalations, and further problems. A pragmatic pattern for reducing risk is to keep humans in the loop, especially for complex decisions. IBM’s two-tier model, Vodafone’s tailored job recommendations, and Walmart’s and HSBC’s data governance and security all bring oversight. Hybrid service models plus data discipline and oversight are what enable AI to scale without undermining employee confidence or fairness. Where this is heading The pattern of successful operational deployment has been consistent across the HR function in these large enterprises. 
They each started with high-volume questions and repetitive transactions, expanded into hiring and training, and then pushed AI to the frontline where it can save time. The biggest gains come when AI turns HR from a service ****** into a faster, more consistently-operating function. (Image source: “Business Meetings” by thinkpanama is licensed under CC BY-NC 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI in Human Resources: the real operational impact appeared first on AI News. View the full article
-
By December 2025, AI adoption on Wall Street had moved past experiments inside large US banks and into everyday operations. Speaking at a Goldman Sachs financial-services conference in New York on 9 December, bank executives described AI—particularly generative AI—as an operational upgrade already lifting productivity across engineering, operations, and customer service. The same discussion also surfaced a harder reality. If banks can produce more with the same teams, some roles may no longer be required at current levels once demand stabilises. How Wall Street banks say AI is delivering results today JPMorgan: operational gains begin to compound Marianne Lake, chief executive of consumer and community banking at JPMorgan, said productivity in areas using AI has risen to around 6%, up from roughly 3% before deployment. She added that operations roles could eventually see productivity gains of 40% to 50% as AI becomes part of routine work. Those gains rest on deliberate choices rather than broad experimentation. JPMorgan has focused on secure internal access to large language models, targeted changes to workflows, and tight controls on how data is used. The bank has described its internal “LLM Suite” as a controlled setting where staff can draft and summarise content using large language models. Wells Fargo: output rising ahead of staffing changes Wells Fargo CEO Charlie Scharf said the bank has not reduced headcount because of AI so far, but noted that it is “getting a lot more done.” He said management expects to find areas where fewer people are needed as productivity improves. In comments reported the same day, Scharf said the bank’s internal budgets already point to a smaller workforce by 2026, even before factoring in AI’s full impact. He also flagged higher severance costs, suggesting preparations for future adjustments are under way. PNC: AI speeds up a long-running shift PNC CEO Bill Demchak positioned AI as an accelerator rather than a new direction. 
He said the bank’s headcount has stayed largely flat for about a decade, even as the business expanded. That stability, he said, came from automation and branch optimisation, with AI likely to push the trend further. Citigroup: gains in software and customer support Citi’s incoming CFO Gonzalo Luchetti said the bank has recorded a 9% productivity improvement in software development. That mirrors a broader pattern across large firms adopting AI copilots to support coding work. He also pointed to two customer service areas where AI is helping: improving self-service so fewer calls reach agents, and supporting agents in real time when customers do need to speak with a person. Goldman Sachs: workflow changes paired with hiring restraint According to Reuters, Goldman Sachs’ internal “OneGS 3.0” programme has focused on using AI to improve sales processes and client onboarding. It has also targeted process-heavy functions such as lending workflows, regulatory reporting, and vendor management. These changes are unfolding alongside job cuts and a slower pace of hiring, linking workflow redesign directly to staffing decisions. Where Wall Street banks see the earliest AI productivity gains Across banks, the clearest gains are showing up in work that relies heavily on documents, follows repeatable steps, and operates within defined rules. Generative AI can shorten the time needed to search for information, summarise material, draft content, and move work through approval chains—especially when paired with structured processes and human checks. 
Common areas seeing early impact include:
Operations: drafting responses, summarising cases, and resolving exceptions more quickly
Software development: generating code, writing tests, refactoring, and producing documentation
Customer service: stronger self-service combined with real-time support for agents
Sales support and onboarding: pulling data from documents, filling forms, and speeding up client setup
Regulatory reporting: assembling narratives and evidence faster, under strict review and controls
Why governance shapes the pace of adoption For banks, enthusiasm is not the main constraint. Control is. US regulators have long required strong oversight of models, and those expectations extend to AI systems. Guidance such as the Federal Reserve and OCC’s SR 11-7 sets standards for model development, validation, and ongoing review. A 2025 report from the US Government Accountability Office noted that existing model risk management principles already apply to AI, including testing and independent oversight. In practice, this pushes banks toward designs that can be examined and traced. AI use is often limited in how independently it can act. Prompts and outputs are logged, performance is monitored for drift, and humans remain responsible for high-impact decisions such as lending, dispute handling, and official reporting. Productivity rises, but employment questions remain The comments from bank leaders point to a phased shift. The first phase looks like stable headcount paired with higher output as AI tools spread across teams. The second phase begins once those gains become consistent enough to influence staffing plans, through attrition, role changes, or targeted cuts. Signals from Wells Fargo around 2026 headcount planning and severance costs suggest some banks are approaching that second stage. 
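The governance pattern described earlier, where prompts and outputs are logged and humans stay responsible for high-impact decisions, has a recognisable shape in code. A hedged, minimal sketch of that control wrapper (the action names, log fields, and function are invented for illustration; no bank's actual system is being shown):

```python
# Illustrative control wrapper: every model call is audit-logged, and
# actions designated high-impact are held for human sign-off rather
# than released automatically. All names here are hypothetical.
import datetime

AUDIT_LOG = []
HIGH_IMPACT = {"approve_loan", "file_regulatory_report"}

def run_with_controls(action, prompt, model_fn):
    """Call the model, record the full exchange, and gate the output."""
    output = model_fn(prompt)
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "prompt": prompt,
        "output": output,
        "needs_human_signoff": action in HIGH_IMPACT,
    })
    if action in HIGH_IMPACT:
        return ("pending_review", output)   # a human remains responsible
    return ("released", output)

status, _ = run_with_controls("summarise_case", "Summarise dispute #12",
                              lambda p: "draft summary")
print(status, len(AUDIT_LOG))  # released 1
```

The design reflects the regulatory posture described above: the log makes every exchange examinable and traceable, and the gate ensures AI output can accelerate high-impact work without ever finalising it.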
At a broader level, institutions such as the International Monetary Fund have warned that AI could affect a large share of jobs worldwide, with different mixes of automation and augmentation depending on role and region. The World Economic Forum’s Future of Jobs Report 2025 also projects substantial job movement as companies adopt AI and adjust skill needs. What AI means for Wall Street bank strategy beyond 2025 Banks that gain the most from AI are likely to focus on three areas at once: redesigning workflows rather than layering on chat tools, building strong data foundations, and putting governance in place that supports speed without eroding trust. Research firms argue the financial stakes are high. McKinsey estimates that generative AI could deliver between $200 billion and $340 billion in annual value for the banking sector, largely through productivity improvements. The open question is no longer whether AI can deliver results in banking. It is how quickly banks can make those gains routine while preserving audit trails, security, and customer safeguards—and how they manage the workforce changes that follow. (Photo by Lo Lo) See also: BNP Paribas introduces AI tool for investment banking Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Wall Street’s AI gains are here — banks plan for fewer people appeared first on AI News. View the full article
-
Artificial intelligence has been part of the insurance sector for years – the Finance function in many businesses is often the first to automate. But what’s remarkable in the case of AI is how directly the technology is woven into day-to-day operational work. No longer sitting in the background as a niche modelling capability, AI is now used in places where insurers spend most of their time and money: claims handling, underwriting, and running complex programmes. Industry giants Allianz, Zurich, and Aviva have published evidence in just the last 12 months illustrating their shifts from experimentation to production-grade tools that support frontline workers in real workflows. Simple claims: Fewer admin bottlenecks Claims operations are a natural proving ground for AI because they combine paperwork with human judgement, usually under time pressure. Allianz describes its Insurance Copilot as an AI-powered tool that helps claims handlers automate repetitive tasks and pull together relevant information that would otherwise require multiple searches on different systems. Allianz outlines a notable change to the workflow. The Copilot starts with data gathering, summarising claim and contract details so a handler can get just the essentials, quickly. The system then performs document analysis, operations that include interpreting agreements and comparing claims against policy details. The tool flags discrepancies and suggests next steps. Once the human operator has taken their decision, the Copilot assists by drafting context-aware emails. This is the kind of daily activity that insurers care about, and by using their AI tools, they get reduced turnaround time, smoother settlements, and less friction for staff and customers. Allianz also frames AI as a way to reduce unnecessary payouts by highlighting important factors adjusters might otherwise miss. 
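The workflow Allianz describes, gathering details, comparing a claim against the policy, flagging discrepancies, and drafting a message for the handler, can be sketched in miniature. This is a hedged illustration of the pattern only; the field names and rules are invented, not Allianz's implementation:

```python
# Hypothetical sketch of a claims-review step: summarise, compare the
# claim against the policy, flag discrepancies, and draft an email for
# the human handler to review. All fields and rules are illustrative.
def review_claim(claim, policy):
    flags = []
    if claim["amount"] > policy["cover_limit"]:
        flags.append("claim exceeds cover limit")
    if claim["incident_type"] not in policy["covered_incidents"]:
        flags.append("incident type not covered")
    summary = (f"Claim {claim['claim_id']}: {claim['incident_type']}, "
               f"amount {claim['amount']}")
    draft = ("Dear customer, your claim requires further review."
             if flags else
             "Dear customer, your claim details match your policy.")
    return {"summary": summary, "flags": flags, "draft_email": draft}

result = review_claim(
    {"claim_id": "C-101", "incident_type": "flood", "amount": 12000},
    {"cover_limit": 10000, "covered_incidents": {"fire", "theft"}},
)
print(result["flags"])  # both discrepancies surfaced for the handler
```

Even in this toy form, the shape matches the article's point: the tool surfaces factors an adjuster might otherwise miss, while the payout decision stays with the person reading the flags.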
Reducing unnecessary payouts has a clear impact on the company’s overall bottom line. Complex documents to usable decisions The quality of underwriting is determined by the quality of information available. Aviva uses the example of underwriters needing to read GP medical reports. The company says it’s launching an AI-powered summarisation tool that uses genAI to analyse and summarise these reports, which can sometimes amount to dozens of pages of medical text. The summaries let underwriters make faster, more informed decisions. The immediate value here is not AI replacing the underwriter, but technology reducing the time spent reading. The insurer is explicit that underwriters will review summaries and make the final decision – not the AI. That distinction matters because underwriting is technical and sensitive; compressing documents into decision-ready summaries can speed up processing, but it also raises questions about accuracy, omissions and auditability. Aviva addresses this by pointing to its “rigorous testing and controls“. The company says an active test phase processed around 1,000 cases before roll-out to ensure the tool met the standards it required. Uncertain contracts and servicing in multinational programmes Commercial insurance is an area with its own challenges, including the complexity of working across multiple jurisdictions and the regional differences between policies and stakeholders. Zurich says generative AI’s ability to process unstructured information lets multinational insurance work more easily across several countries, helping it build quicker, more accurate pictures of commercial insurance offerings, and simplifying submissions in different countries. Zurich also highlights contract certainty as a practical outcome: multinational programmes involve layered documents and varied local requirements, and bring a pervasive need for constant checking. 
It says GenAI helps internal experts compare, summarise and verify coverage in a programme using the operator’s native language, “in a fraction of the time” compared with the manual effort required to translate and capture the nuance of international differences. Although this area isn’t customer-facing, genAI improves the company’s responsiveness by letting its underwriters, risk engineers, and claims professionals work more efficiently. Zurich also refers to AI “joining up the dots”, able to spot trends in data that would – given the quantity of information – go unnoticed by human staff. Indeed, AI amplifies its experts’ judgement rather than displacing it. The common thread: augmentation, not automation-for-automation’s sake Across these three examples, a consistent pattern emerges: AI handles the heavy lifting of reading, searching, and drafting, the high-volume tasks in insurance operations. Humans remain accountable for the consequent decisions, whether it’s claim payments or underwriting acceptance. (Allianz describes a “human-in-the-loop” approach, and Aviva and Zurich similarly emphasise experts retaining decision-making control). Operational control and scalability are treated as major concerns: pilots, testing, domain-by-domain tuning, and expansion into lines of business are an integral part of the narrative. What this means for the sector Insurers see faster cycle times, better consistency, reduced manual work, and a path to scaling. Their challenge is implementing tools responsibly, which means secure data handling, explainability where needed, and training teams so they can question outputs appropriately. AI is becoming less of a headline in the sector and more of an everyday reality, a practical silicon colleague in the routine work of insurance. (Image source: “house fire” by peteSwede is licensed under CC BY 2.0. ) Want to learn more about AI and big data from industry leaders? 
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Ensuring effective AI in insurance operations appeared first on AI News. View the full article
-
Big Pharma’s AI race extends across drug discovery, development, and clinical trials—but AstraZeneca has distinguished itself by deploying AI clinical trials technology at an unprecedented public health scale. While competitors optimise internal R&D pipelines, AstraZeneca’s AI is already embedded in national healthcare systems, screening hundreds of thousands of patients and demonstrating what happens when AI moves from pharmaceutical labs into actual patient care. The clinical validation backs this approach. AstraZeneca’s CREATE study, presented at the European Lung Cancer Congress in March 2025, demonstrated a 54.1% positive predictive value for its AI chest X-ray tool—far exceeding the pre-defined success threshold of 20%. Behind those numbers: over 660,000 people screened in Thailand since 2022, with AI detecting suspected pulmonary lesions in 8% of cases. More critically, Thailand’s National Health Security Office is now scaling this technology across 887 hospitals with a three-year budget exceeding 415 million baht. This isn’t just a pilot program or proof-of-concept. It’s AI clinical trials technology deployed at the national healthcare system scale. The strategic divergence in AI clinical trials approaches The contrast with competitors is revealing. Pfizer’s ML Research Hub has compressed drug discovery timelines to approximately 30 days for molecule identification. The company used AI to develop Paxlovid in record time, with machine learning analysing patient data 50% faster than traditional methods. Pfizer now deploys AI in over half its clinical trials. Novartis partnered with Nobel Prize winner Demis Hassabis’s Isomorphic Labs and Microsoft for “AI-driven drug discovery.” Its Intelligent Decision System uses computational twins to simulate clinical trial processes, with AI-identified sites reportedly recruiting patients faster than traditional selection methods. Roche’s “lab in a loop” strategy iterates AI models with laboratory experiments. 
Having acquired Foundation Medicine and Flatiron Health, Roche built the industry’s largest clinical genomic database—over 800,000 genomic profiles across 150+ tumour subtypes—targeting 50% efficiency gains in safety management by 2026. AstraZeneca’s clinical operations advantage What sets AstraZeneca apart in AI clinical trials isn’t just ambition—it’s execution at scale. The company runs over 240 global trials in its R&D pipeline and has systematically embedded generative AI across clinical operations. One example is an “intelligent protocol tool,” developed with medical writers, that has reduced document authoring time by 85% in some cases. The company uses AI for 3D location detection on CT scans, slashing the time radiologists spend on manual annotation. More significantly, AstraZeneca is pioneering virtual control groups for AI clinical trials using electronic health records and past trial data to simulate placebo arms—potentially reducing the number of patients receiving non-active treatments. This represents a fundamental rethinking of clinical trial design itself. The lung cancer screening program exemplifies this strategic focus. Using Qure.ai’s qXR-LNMS tool, AstraZeneca isn’t just conducting trials—it’s transforming public health infrastructure. The December 2025 expansion includes a new industrial worker screening program targeting 5,000 workers across four Thai provinces, now expanding beyond lung cancer to include heart failure detection. The timeline acceleration race Industry metrics show why AI clinical trials matter: Traditional drug development takes 10-15 years with a 90% failure rate. AI-discovered drugs achieve 80-90% Phase I success rates—double the 40-65% traditional benchmark. Over 3,000 AI-assisted drugs are in development, with 200+ AI-enabled approvals expected by 2030. Pfizer moves from molecule identification to clinical trials in six-week cycles. Novartis analyses 460,000 clinical trials in minutes versus months. 
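The screening metrics cited above can be made concrete. Positive predictive value is simply the share of AI-flagged cases that are confirmed on follow-up. The counts below are hypothetical, chosen only to reproduce the 54.1% figure, and are not the CREATE study data:

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = share of AI-flagged cases that are confirmed on follow-up."""
    return true_positives / (true_positives + false_positives)

# Hypothetical counts for illustration only (not the CREATE study data):
flagged = 1000       # cases the AI tool marked as suspicious
confirmed = 541      # of those, confirmed by clinical follow-up

ppv = positive_predictive_value(confirmed, flagged - confirmed)
print(f"PPV: {ppv:.1%}")   # 54.1%
# The pre-defined 20% threshold means at least 1 in 5 flags must be genuine:
print(ppv >= 0.20)         # True
```

The same arithmetic explains why a modest-sounding PPV can still far exceed a trial’s success threshold when the screened population is large and the base rate of disease is low.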
Yet AstraZeneca’s model delivers immediate patient impact—detecting cancers today in underserved populations, often before symptoms appear. The US$410 billion question The World Economic Forum projects AI could generate US$350-$410 billion annually for pharma by 2030. The question is which approach captures more value: faster drug discovery or more efficient clinical operations? Pfizer’s bet on computational drug design and Novartis’s AI-powered trial site selection may yield breakthrough molecules. Roche’s integrated pharma-diagnostics model creates a proprietary data moat. But AstraZeneca’s strategy of embedding AI clinical trials throughout operations—from protocol generation to patient recruitment to regulatory submissions—is demonstrably reducing time-to-market while building real-world evidence at scale. The company’s partnership approach is equally distinctive. While others acquire AI companies or build internal hubs, AstraZeneca collaborates with technology partners like Qure.ai and Perceptra, regulatory bodies, and national health systems to deploy AI clinical trials where infrastructure gaps exist. As AstraZeneca pursues its 2030 goal of delivering 20 new medicines and reaching US$80 billion in revenue, its AI clinical trials advantage isn’t just about speed—it’s about proving AI’s value in the most regulated, risk-averse phase of pharmaceutical development. While competitors race to discover the next breakthrough molecule, AstraZeneca is reengineering how clinical trials themselves are conducted. The winner may not be determined by who builds the most sophisticated algorithm, but by who deploys AI clinical trials technology where it demonstrably improves patient outcomes—at scale, under regulatory scrutiny, and within real healthcare systems. And in that race, AstraZeneca currently leads. (Photo by AstraZeneca) See also: Google AMIE: AI doctor learns to ‘see’ medical images 
The post AstraZeneca leads big pharma’s AI clinical trials revolution with real-world patient impact appeared first on AI News.
-
Roblox is often seen as a games platform, but its day-to-day reality looks closer to a production studio. Small teams release new experiences on a rolling basis and then monetise them at scale. That pace creates two persistent problems: time lost to repeatable production work, and friction when moving outputs between tools. Roblox’s 2025 updates point to how AI can reduce both, without drifting away from clear business outcomes. Roblox keeps AI where the work happens Rather than pushing creators toward separate AI products, Roblox has embedded AI inside Roblox Studio, the environment where creators already build, test, and iterate. In its September 2025 RDC update, Roblox outlined “AI tools and an Assistant” designed to improve creator productivity, with an emphasis on small teams. Its annual economic impact report adds that Studio features such as Avatar Auto-Setup and Assistant already include “new AI capabilities” to “accelerate content creation”. The language matters—Roblox frames AI in terms of cycle time and output, not abstract claims about transformation or innovation. That framing makes it easier to judge whether the tools are doing their job. One of the more practical updates focuses on asset creation. Roblox described an AI capability that goes beyond static generation, allowing creators to produce “fully functional objects” from a prompt. The initial rollout covers selected vehicle and weapons categories, returning interactive assets that can be extended inside Studio. This addresses a common bottleneck where drafting an idea is rarely the slow part; turning it into something that behaves correctly inside a live system is. By narrowing that gap, Roblox reduces the time spent translating concepts into working components. The company also highlighted language tools delivered through APIs, including Text-to-Speech, Speech-to-Text, and real-time voice chat translation across multiple languages. 
These features lower the effort required to localise content and reach broader audiences. Similar tooling plays a role in training and support in other industries. Roblox treats AI as connective tissue between tools Roblox also put emphasis on how tools connect to one another. Its RDC post describes integrating the Model Context Protocol (MCP) into Studio’s Assistant, allowing creators to coordinate multi-step work across third-party tools that support MCP. Roblox points to practical examples, such as designing a UI in Figma or generating a skybox elsewhere, then importing the result directly into Studio. This matters because many AI initiatives slow down at the workflow level. Teams spend time copying outputs, fixing formats, or reworking assets that do not quite fit. Orchestration reduces that overhead by turning AI into a bridge between tools, rather than another destination in the process. Linking productivity to revenue Roblox ties these workflow gains directly to economics. In its RDC post, the company reported that creators earned over $1 billion through its Developer Exchange programme over the past year, and it set a goal for 10% of gaming content revenue to flow through its ecosystem. It also announced an increased exchange rate so creators “earn 8.5% more” when converting Robux into cash. The economic impact report makes the connection explicit. Alongside AI upgrades in Studio, Roblox highlights monetisation tools such as price optimisation and regional pricing. Even outside a marketplace model, the takeaway is clear: when AI productivity is paired with a financial lever, teams are more likely to treat new tooling as part of core operations rather than an experiment. Roblox uses operational AI to scale safety systems While creative tools attract attention, operational AI often determines whether growth is sustainable. 
In November 2025, Roblox published a technical post on its PII Classifier, an AI model used to detect attempts to share personal information in chat. Roblox reports handling an average of 6.1 billion chat messages per day, and says the classifier has been in production since late 2024, with a reported 98% recall on an internal test set at a 1% false positive rate. This is a quieter form of efficiency. Automation at this level reduces the need for manual review and supports consistent policy enforcement, which helps prevent scale from becoming a liability. What carries across? Several patterns stand out: Put AI where decisions are already made. Roblox focuses on the build-and-review loop, rather than inserting a separate AI step. Reduce tool friction early. Orchestration matters because it cuts down on context switching and rework. Tie AI to something measurable. Creation speed is linked to monetisation and payout incentives. Keep adapting the system. Roblox describes ongoing updates to address new adversarial behaviour in safety models. Roblox’s tools will not translate directly to every sector. The underlying approach will. AI tends to pay for itself when it shortens the path from intent to usable output, and when that output is clearly connected to real economic value. (Photo by Oberon Copeland @veryinformed.com) See also: Mining business learnings for AI deployment The post Roblox brings AI into the Studio to speed up game creation appeared first on AI News.
-
[AI] What AI search tools mean for the future of SEO specialists
ChatGPT posted a topic in World News
AI search engines and generative AI tools are transforming how people discover information online. Far from making SEO specialists obsolete, the shift highlights why skilled human optimisers remain more important than ever. As generative AI search tools reshape the digital landscape, many wonder whether traditional SEO has reached its end. Despite AI’s growing influence, the fundamentals of discoverability, authority, clarity, and trust remain vital. The rise of AI-powered search makes the role of SEO specialists more important than before, ensuring content not only reaches audiences but resonates authentically. A digital environment shaped by automation still depends heavily on human expertise, especially when credibility is at stake. Why AI expands rather than replaces the work of SEO specialists The assumption that AI will replace SEO specialists ignores how modern search works. AI expands what optimisation can achieve, but it does not eliminate the need for skilled human oversight. Traditional ranking signals like site structure, depth of information, internal linking and authority continue to influence visibility, even in AI-driven answer engines. This means content must still be crafted with intent, clarity, and expertise. AI-powered search engines prioritise well-structured, trustworthy information, reinforcing the importance of thoughtful optimisation. In this new landscape, SEO now spans two parallel environments: classic search engines and AI-generated answer systems. Competing effectively in both requires strategic thinking, careful content planning, and a nuanced understanding of user intent: areas where human specialists excel, beyond what AI can replicate. Agencies like Brath clearly understand this reality, presenting SEO as one of the most profitable long-term marketing channels when executed with technical precision and strategic insight. 
Its service model remains grounded in proven disciplines: technical improvements, keyword strategy, content refinement, and authority building. Rather than leaning on automation, it demonstrates that sustainable SEO relies on human expertise, methodical analysis, and consistent optimisation. Human insight still drives strategy in an AI-powered search world AI excels at processing data, but it still falls short when deeper context, cultural awareness, and nuanced decision-making are required. SEO is more than a technical exercise; it is a strategic craft rooted in understanding people, their motivations, expectations, and the subtle differences in how they search for answers. The human dimension is something AI cannot replicate. Specialists bring an understanding of why search behaviour shifts, how tone influences trust, and which narratives resonate in different industries. They also grasp how economic cycles, news events, and regulations affect user queries and engagement patterns. AI can identify patterns, but it cannot interpret them fully or determine which insights align with brand values or long-term goals. The increasing emphasis on E-E-A-T signals (Experience, Expertise, Authoritativeness, and Trustworthiness) further reinforces this need for human guidance. An algorithm cannot fabricate authentic authority. Authority must be demonstrated through lived experience, credibility, and careful communication. SEO specialists ensure that these qualities are baked into every page and every message, guiding brands through an increasingly complex search environment where trust matters more than ever. How SEO specialists use AI to deliver faster, smarter optimisation Where AI shines is in support, not replacement. Modern SEO specialists use AI tools to automate repetitive tasks, accelerate data analysis, and uncover patterns at scale. AI can rapidly produce initial keyword lists, identify technical issues, or generate draft structures for content. 
These capabilities shorten production timelines and free specialists to focus on the strategic, creative, and high-value decisions that define effective optimisation. The balanced approach enables better outcomes. AI handles the heavy lifting, while human specialists interpret the results, refine the strategy, and ensure everything aligns with the brand’s goals, tone, and audience requirements. It is this combination of speed and discernment that shapes the future of SEO. The creative and analytical skills AI cannot replicate Even the most advanced AI tools cannot fully replicate creativity, intuition, or emotional intelligence. SEO specialists draw on analytical skills and creative insight to produce content that stands out in competitive digital markets. AI can propose phrasing, but it cannot determine when a message needs personality, authority, or nuance to feel trustworthy. In industries where reputation matters, like fintech, finance, healthcare, law, and education, human-shaped content is essential. Users want reassurance that the guidance they read comes from people with real expertise. AI-generated content often lacks the depth or subtlety needed to build that trust. Human specialists also excel at interpreting data in context. They understand how search intent connects to behaviour, how content influences perception, and how messaging supports or undermines brand identity. Their ability to connect analytical insights with emotional resonance is something AI simply cannot achieve. Building the next generation of SEO careers in an AI-enhanced industry AI is not shrinking the field of SEO; it is expanding it. The next generation of SEO professionals will blend human-centred strategy with AI-assisted efficiency. Modern SEO requires fluency in multiple areas: data interpretation, UX thinking, structured content, behavioural insights, content design, and now, AI tool management. Specialists who embrace both disciplines will lead the industry forward. 
Their work will become more strategic, more aligned with long-term business value, and more influential as search becomes increasingly complex and multi-modal. AI may automate tasks, but strategy, creativity, judgement, and trust-building remain human strengths. And as AI transforms how search tools interpret information, those strengths will only become more essential. SEO is not disappearing. It is evolving into a richer, more strategic, and more human discipline. Image source: Pexels The post What AI search tools mean for the future of SEO specialists appeared first on AI News. View the full article -
Mining conglomerate BHP describes AI as the way it’s turning operational data into better day-to-day decisions. A blog post from the company highlights the analysis of data from sensors and monitoring systems to spot patterns and flag issues in plant machinery, giving decision-makers options that can improve efficiency and safety, and reduce environmental impact. For business leaders at BHP, the useful question was not “Where can we use AI?” but “Which decisions do we make repeatedly, and what information would improve them?” Portfolio not showcase BHP describes the end-to-end effects of AI on operations, or as it puts it, “from mineral extraction to customer delivery.” Leaders decided to move beyond pilot rollouts, treating AI as an operational capability. The company started with a small set of problems that affected its performance: places where change could be measured in results. It found it could avoid unplanned machinery downtime, and it tightened its energy and water use. Each use case, addressing a small but impactful problem, was given an owner and an accompanying KPI. Results were reviewed with the same regularity used for other operational performance monitoring elsewhere in the company. Where BHP uses AI daily In addition to focusing on areas such as predictive maintenance and energy optimisation, BHP also considered AI for more adventurous yet important operations such as autonomous vehicles and real-time staff health monitoring. These categories translate well to other asset-heavy environments across logistics, manufacturing, and heavy industry. Predictive maintenance Predictive maintenance is the practice of planning repairs during scheduled downtime to reduce unexpected failures and costly, unplanned stoppages. Here, AI models analyse equipment data from on-board sensors to anticipate maintenance needs. This cuts breakdown numbers and reduces equipment-related safety incidents. 
BHP runs predictive analytics across most of its load-and-haul fleets and its materials handling systems. A central maintenance centre provides real-time and longer-range indications of machine health and potential failure or degradation. Prediction has become an integral part of its machinery-heavy operations; previously, such information was presented as ‘just another’ report, one that could get lost in company bureaucracy. Now its models define thresholds that trigger actions sent directly to maintenance-planning teams. Energy and water optimisation Deploying this approach at its Escondida facilities in Chile, the company reports savings of more than three gigalitres of water and 118 gigawatt-hours of energy over two years, attributing the gains directly to AI. The technology gives operators real-time options and analytics that identify anomalies and automate corrective actions at multiple facilities, including concentrators and desalination plants. The lesson BHP has learned is to place AI where decisions happen: when operators and control teams can act on recommendations in real time, improvements compound. Conversely, with periodic reporting, decisions are only taken if staff see the data and then decide action is necessary. With real-time analysis and triggers-to-action, the difference quickly becomes apparent. Autonomy and remote operations BHP is also using more advanced technologies like AI-supported autonomous vehicles and machinery. These are higher-risk areas, and the technology has been found to reduce worker exposure to hazards and cut the human error factor in incidents. At BHP, complex operational data flows from remote facilities through regional centres; without AI and analytics, staff could not optimise every decision the way the software can. 
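The triggers-to-action pattern described above, where an anomaly prompts an operator action rather than a line in a periodic report, can be sketched as a simple deviation check. All values and names here are illustrative, not BHP’s implementation:

```python
# Illustrative triggers-to-action sketch (not BHP's actual system): flag a
# reading as anomalous when it sits more than 3 standard deviations from
# the recent baseline, and route the alert straight to an operator action.
from statistics import mean, stdev

def check_reading(baseline: list[float], latest: float, z_limit: float = 3.0) -> bool:
    """Return True when `latest` deviates anomalously from `baseline`."""
    z = abs(latest - mean(baseline)) / stdev(baseline)
    return z > z_limit

flow_lps = [52.1, 51.8, 52.4, 51.9, 52.2, 52.0]  # litres/sec at a concentrator
if check_reading(flow_lps, 58.7):
    print("Anomaly: divert flow and notify control room")  # an action, not a report
```

The point of the sketch is the wiring, not the statistics: the check fires at the moment of the reading, so the decision is taken where and when it matters.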
The use of AI-integrated wearables is increasing in many industries, including engineering, utilities, manufacturing, and mining. BHP leads the way in protecting its staff, who often work in very challenging conditions. Wearables can monitor personal conditions, reading heart rate and fatigue indicators, and provide real-time alerts to supervisors. One example is ‘smart’ hard-hat sensor technology, used by BHP at Escondida, which measures truck driver fatigue by analysing drivers’ brain waves. A plan leaders can run Regardless of industry, decision-makers can draw learnings from BHP’s experiences in deploying AI at the (literal) coal-face. The following plan could help leaders in their own strategies to leverage AI in operational problem-areas: Choose one reliability problem and one resource-efficiency problem that operations teams already track, then attach a KPI. Map the workflow: who will see the output, and what action can they take? Put basic governance in place for data quality and model monitoring, then review performance alongside operational KPIs. Start with decision support in higher-risk processes, and automate only after teams validate controls. (Image source: “Shovel View at a Strip Mining Coal” by rbglasson is licensed under CC BY-NC-SA 2.0.) The post Mining business learnings for AI deployment appeared first on AI News.
-
BNP Paribas is testing how far AI can be pushed into the day-to-day mechanics of investment banking. According to Financial News, the bank has rolled out an internal tool called IB Portal, designed to help bankers assemble client pitches more quickly and with less repetition. Pitch preparation sits at the centre of investment banking work. Teams pull together market views, deal history, and tailored narratives under tight timelines. Much of that effort repeats work that already exists elsewhere in the organisation. Slides, charts, and precedent analysis are often rebuilt from scratch, even when similar material has been used before by another team or office. IB Portal is meant to reduce that waste. The system searches BNP Paribas’s past pitch materials and uses what the bank describes as “smart prompts” to surface relevant slides, analysis, and supporting content for a new mandate. George Holst, head of the corporate clients group at BNP Paribas, said the tool functions like an AI-powered search engine that helps bankers find what matters ahead of a pitch or client meeting. In his words, it can cut research time by days, giving teams more room to focus on strategy and client judgement. The use case matters because it places AI inside real, constrained workflows rather than around them. Pitch decks are not generic documents. They reflect internal viewpoints, client-specific details, and regulatory requirements. Making an AI tool useful in this setting depends less on conversational flair and more on structure. That includes deciding which materials are searchable, setting clear access controls across regions and business lines, and defining how retrieved content moves from internal draft to client-ready output. In practice, that also means traceability. Bankers need to see where information comes from, and anything produced by the system still needs human review before it leaves the firm. 
Without those checks, the risk of errors or inappropriate disclosure rises quickly. BNP Paribas builds AI tools on internal platforms The portal also fits into a broader internal build-out at BNP Paribas. In June 2025, the bank outlined an “LLM as a Service” platform aimed at giving its business units shared access to large language models in the group’s own infrastructure. The platform is run by internal IT teams and hosted in BNP Paribas data centres with dedicated GPU capacity. The bank said it supports a mix of models, including open-source options and systems from Mistral AI, with plans to add models trained on internal data. Intended use cases include internal assistants, document drafting, and information retrieval. Other large banks are taking a similar approach. JPMorganChase has pointed to growing use of its internal “LLM Suite”, which provides staff access to models in a controlled environment. Reuters has reported on Goldman Sachs’s investment in AI engineering and its rollout of a proprietary “GS AI Assistant”. UBS has discussed an internal M&A “co-pilot” used for idea generation. Alongside these in-house efforts, specialist tools like Rogo have found traction at firms including Nomura and Moelis, pointing to demand for finance-specific AI tools. For BNP Paribas, the real test is whether IB Portal becomes part of everyday work rather than a one-off experiment. The potential benefits are straightforward: less time spent searching, fewer duplicated decks, and better reuse of institutional knowledge. The risks are just as familiar. Hallucinated data, unclear sources, and accidental exposure of sensitive information all carry real consequences in banking. The most stable deployments keep AI tightly constrained. That usually means grounding outputs in approved internal content, applying role-based access controls, recording how tools are used, and requiring human sign-off before anything reaches a client. 
If IB Portal operates within those boundaries, it offers a practical view of how enterprise AI is taking shape: not as a source of instant answers, but as a faster and safer way to navigate what an organisation already knows. (Photo by Enrico Frascati) See also: CEOs still betting big on AI: Strategy vs. return on investment in 2026 The post BNP Paribas introduces AI tool for investment banking appeared first on AI News.
-
JPMorgan Chase’s AI strategy is delivering measurable returns – but at a human cost. The bank isn’t hiding the fact. With 200,000 employees now using its proprietary LLM Suite platform daily and AI benefits growing 30-40% annually, America’s largest bank is executing what Chief Analytics Officer Derek Waldron calls a plan to create the world’s first “fully AI-connected enterprise.” What infrastructure backs this transformation? A US$18 billion annual technology budget, 450+ AI use cases in production, and a platform that won American Banker’s 2025 Innovation of the Year Grand Prize. But JPMorgan’s candour about workforce displacement – operations staff projected to fall at least 10% – reveals the complexity of enterprise AI beyond the promotional headlines. LLM Suite: From zero to 200,000 users in eight months Released in summer 2024, LLM Suite reached 200,000 users in eight months through an opt-in strategy that created what Waldron describes as “healthy competition”, driving viral adoption. This isn’t just a chatbot: LLM Suite functions as a “full ecosystem” connecting AI to firm-wide data, applications, and workflows. The model-agnostic architecture integrates OpenAI and Anthropic models, with updates every eight weeks. Investment bankers create five-page decks in 30 seconds – work that previously took junior analysts hours. Lawyers scan and generate contracts. Credit professionals extract covenant information instantly. Call centre tool EVEE Intelligent Q&A improved resolution times through context-aware responses. “A little under half of JPMorgan employees use gen AI tools every single day,” Waldron told McKinsey in October 2025. “People use it in tens of thousands of ways specific to their jobs.” JPMorgan Chase AI strategy delivers 30-40% annual ROI growth JPMorgan tracks ROI at the individual initiative level – not platform-wide vanity metrics. Since inception, AI-attributed benefits have grown 30-40% year-over-year. 
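A model-agnostic layer of the kind LLM Suite is described as having can be sketched as a thin routing interface: application code depends on one contract, and vendor-backed adapters can be swapped or updated without touching callers. The classes below are illustrative stand-ins, not JPMorgan’s architecture or any vendor’s real SDK:

```python
# Hypothetical sketch of a model-agnostic routing layer: callers depend on
# one interface, and providers can be swapped without changing application
# code. Vendor classes here are stubs, not real SDK integrations.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAModel:  # stand-in for, e.g., an OpenAI-backed adapter
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt[:20]}..."

class VendorBModel:  # stand-in for, e.g., an Anthropic-backed adapter
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt[:20]}..."

# Route each task type to whichever model currently serves it best:
REGISTRY: dict[str, ChatModel] = {
    "drafting": VendorAModel(),
    "analysis": VendorBModel(),
}

def run_task(task: str, prompt: str) -> str:
    """Dispatch a task to the model the registry currently maps it to."""
    return REGISTRY[task].complete(prompt)

print(run_task("drafting", "Summarise the covenant terms in this agreement"))
```

Updating the registry is the only change needed to re-route a task, which is what makes eight-week model refreshes tractable without vendor lock-in.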
The strategy combines top-down focus on transformative domains (credit, fraud, marketing, operations) with bottom-up democratisation, letting employees innovate in job families. McKinsey’s Kevin Buehler estimates US$700 billion in potential banking cost savings industry-wide. But much will be “competed away” to customers. Industry return on tangible equity could drop one to two points, while AI pioneers could see four-point increases versus slow movers. Waldron acknowledges productivity gains don’t automatically translate to cost reductions. “An hour saved here and three hours there may increase individual productivity, but in end-to-end processes these snips often just shift bottlenecks.” Operations staff to fall 10% as AI agents take complex tasks JPMorgan’s consumer banking chief announced operations staff would decline at least 10% as the bank deploys “agentic AI” – autonomous systems handling multi-step tasks. The bank is building AI agents that execute cascading actions independently. Waldron demonstrated to CNBC how the system creates investment banking presentations in 30 seconds and drafts confidential M&A memos. AI favours client-facing roles – private bankers, traders, investment bankers. At risk: operations staff handling account setup, fraud detection, and trade settlement. New job categories are emerging: “context engineers” ensuring AI systems have proper information, knowledge management specialists, and up-skilled software engineers building agentic systems. Stanford researchers analysing ADP data found early-career workers (ages 22-25) in AI-exposed occupations saw 6% employment decline from late 2022 to July 2025. Shadow IT, trust, and the “value gap” problem JPMorgan’s transparency extends to acknowledging significant execution risks. Without enterprise-grade tools, employees might use consumer-grade AI – exposing sensitive data. JPMorgan built an in-house system for security and control. 
When AI performs correctly 85-95% of the time, human reviewers may stop checking carefully. The error rate compounds at scale. “When an agentic system does a cascading series of analyses independently for a long time, it raises questions about how humans can trust that,” Waldron told McKinsey. Many enterprises face “proof-of-concept hell” – numerous pilots that never reach production because they underestimate integration complexity. “There is a value gap between what the technology is capable of and the ability to fully capture that in an enterprise,” Waldron told CNBC. Even with US$18 billion, full realisation takes years. The JPMorgan playbook: What enterprises can learn JPMorgan’s approach offers replicable principles despite scale advantages. Democratise access but mandate nothing – the opt-in strategy created viral adoption. Build for security first, particularly in regulated industries. Implement model-agnostic architecture to avoid vendor lock-in. Combine top-down transformation with bottom-up innovation. Segment training by audience. Track ROI with discipline at the initiative level. Acknowledge complexity and plan accordingly – JPMorgan took over two years to build LLM Suite. Not every enterprise has US$18 billion for technology or 200,000 employees. But core principles – democratisation, security-first architecture, avoiding vendor lock-in, and financial discipline – apply across industries and scales. Transformation with eyes wide open JPMorgan Chase’s AI strategy represents enterprise AI’s most transparent case study – complete with industry-leading adoption metrics, measurable ROI growth, and unflinching acknowledgement of workforce displacement. The bank’s success factors are clear: massive capital investment, model-agnostic infrastructure, democratised access paired with financial discipline, and realistic timelines.
But Waldron’s candour about trust challenges, the “value gap” between capability and execution, and the multi-year journey ahead suggest that even US$18 billion and 200,000 engaged employees don’t guarantee seamless transformation. For enterprises evaluating their AI strategies, JPMorgan’s lesson isn’t that scale solves everything – it’s that honest assessment of both opportunities and execution risks separates genuine transformation from expensive experimentation. The question isn’t whether JPMorgan’s AI strategy is working. It’s whether the 10% workforce reduction and years-long complexity represent acceptable trade-offs for 30-40% annual benefit growth – and how many other enterprises can afford to find out. Editor’s note: The analysis draws from McKinsey’s October 2025 interview with Derek Waldron and Kevin Buehler, CNBC’s September 2025 exclusive demonstration of LLM Suite, American Banker’s June 2025 Innovation of the Year coverage, and Stanford University research on AI employment effects. See also: Walmart and Amazon drive retail transformation with AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post JPMorgan Chase AI strategy: US$18B bet paying off appeared first on AI News. View the full article
-
Across the US, workers are experiencing a seismic shift in workplace operations as AI literacy becomes a core part of business strategies. This is redefining roles and expectations, while workloads continue to increase and pressure intensifies. As the employment landscape transforms, it has become clear that the future of work and talent will be defined by three main areas – continuous education, greater flexibility, and AI literacy. This is according to new research conducted by The Harris Poll, which found that employers who invest heavily in educational benefits and AI development gain key advantages. Priya Krishnan, Chief Transformation Officer, Bright Horizons, said, “AI is rewriting job descriptions fast… employers who invest in education benefits and AI training now will build resilient, innovative teams.” AI skills demand heightens with workloads According to the 2025 EdAssist by Bright Horizons Education Index, 42% of US employees expect AI to significantly change their role over the next year, even though only 17% currently use AI frequently. However, the report revealed that the increased adoption of AI is not the key talking point – it’s the urgent skills mandate AI has triggered. Workers now feel they have to evolve to remain competitive, with 32% feeling increased pressure to learn new skills because of AI, an increase from 26% in 2024. The demand for upskilling comes as workforce stress hits new heights. For instance, 81% of employees report being pushed to take on greater workloads, and 80% stated they are expected to deliver work faster. AI has the ability to reduce some of this strain, but without proper guidance and training, employees will struggle to use it effectively. The report found that the adoption of AI technology rises to 76% when employers provide AI training, while workers who have access to training (84%) reported feeling more prepared for potential changes than those without (48%).
Essentially, effective training turns uncertainty into confidence. Even so, 34% of those surveyed said they feel unprepared for changes driven by AI, and 42% said their employers expected them to develop their understanding of AI alone, without any formal training. “AI is rewriting job descriptions faster than most organisations can keep up,” said Priya Krishnan, Chief Transformation Officer at Bright Horizons. “Employers who act now will not only close important skill gaps but also build a culture of resilience and innovation. This is not about chasing trends but creating a workforce that thrives in a world where technology and human capability advance together. Education benefits, flexible learning, and AI literacy are the foundation for long-term competitiveness.” Education benefits drive retention and readiness The EdIndex highlights how investing in employee education can pay off, both for employees and employers. However, challenges are still present, with a key concern being financial barriers. According to the report, the skills gap is not simply about technology; it’s about “access and affordability.” 48% of employees surveyed said they avoid further education, citing the “fear of student debt” as the main reason. 34% reported that they cannot afford any more student loan debt, therefore curtailing any career advancement and blocking opportunities. This underscores the need for affordable, employer-sponsored learning programmes that allow employees the chance to progress. Employees highly value education support, with 85% reporting that they would be more loyal to employers that invest in continuing education. 86% said they would choose a job that offers such opportunities over one that doesn’t. 82% believe employer support for learning is crucial, while 29% said it is essential for their job.
In addition, 74% said they would be more loyal to an employer if they helped pay off their student loans, further emphasising how the alleviation of financial obstacles drives loyalty and retention, as well as ROI for employers. Over half (55%) of respondents are more likely to remain with a company if AI training or certification is available, and 76% actively use AI when training has been provided, compared to only 25% without formal training. 2026 predictions Looking ahead to 2026, the report highlights five key shifts and how employers can start preparing now. First, the report predicts AI literacy will become essential in the workplace as automation affects almost all jobs. Therefore, companies need to develop AI training programmes and ongoing support that enable employees to feel confident when using new tools. Second, upskilling is set to give employers and employees a significant competitive advantage. Organisations that invest heavily in both technical and soft skills will be better equipped for innovation and adaptation going forward. Third, flexible education benefits will be key to ensuring employee retention. Workers are increasingly seeking employers who provide financial support and remove financial barriers that limit their learning, so those who offer “perks” like debt-free programmes and tuition support will be more likely to attract and retain talent. The fourth major prediction is that continuous learning will become a key component of workplace environments. With roles evolving, the report says employers must set clear career pathways with personalised learning plans that help their employees grow. Finally, work-life balance is expected to play a huge role. Companies that blend education benefits with flexibility and stress-reducing well-being support have a higher chance of maintaining a loyal workforce.
Pressure remains Although career satisfaction appears high at 87%, workers are under continuing pressure, with 78% feeling pushed to perform tasks outside their main role. As a result, priorities are shifting quickly. While earning a raise remains an important factor with 57% naming it a key priority, work-life balance (43%) and stress reduction (35%) are surging, rising sharply from 2024’s numbers of 35% and 29% respectively. While 90% say they feel somewhat prepared as their job role evolves, this confidence drops when AI is brought into the equation, as just 66% feel ready for AI’s impact. AI may be easing some areas of work, but its presence is also raising workplace pressure. The 2025 EdAssist report suggests, “The employers who act now by embedding skill development opportunities, AI training, short-form learning, and debt-conscious benefits into their talent strategy will future-proof their business and avoid being left behind as the competition for skills intensifies.” Only when employers take these steps can they build a workforce capable of sustaining long-term growth and navigating the changes that lie ahead. (Image source: “HMS St Albans’ Ship’s Company conduct a training day in the Plymouth exercise areas” by Royal Navy Media Archive is licensed under CC BY-NC 2.0.) The post AI literacy and continuous education are cornerstones appeared first on AI News. View the full article
-
The construction industry generates colossal amounts of data, with much of it unused or locked in spreadsheets. AI is now changing this, enabling teams to accelerate decision-making, enhance margins, and improve project outcomes. According to new research from Dodge Construction Network (Dodge) and CMiC, contractors see AI as genuinely transformative, with 87% believing AI will “meaningfully transform their business” despite currently low adoption rates. The research, entitled ‘AI for Contractors’, found that automated proposal generation and progress tracking from site photos both reached a 92% effectiveness rating. Meanwhile, contract risk review achieved 85% effectiveness when compared to previous, more traditional methods. The report highlights how AI is allowing project managers to focus on strategic decisions rather than time-consuming administrative tasks. Finance teams are also benefiting from AI technology, shifting from historical reporting to predictive insights, while operations leaders are able to apply data-driven intelligence for improved project delivery. Rather than AI fully replacing human expertise, the report found it actually enhances human input. “For decades, construction firms have lacked the tools to transform the data they’ve collected into action. AI-enabled solutions are changing that,” says Gord Rawlins, president and CEO of CMiC. “This research highlights the high-impact results contractors are achieving today.” AI changing contractor roles Surveyed contractors see AI as a catalyst in reshaping everyday aspects of their operations, enabling them to anticipate problems rather than react once they have occurred. This introduces wider benefits, like tighter cost controls, improved scheduling, and higher quality project delivery. In other words, improved overall outcomes.
A substantial 85% of contractors foresee less time spent on repetitive tasks, while 75% have faith that AI can help mine historical data to learn from previous projects. Rather than relying fully on AI, 70% said the technology helps them make better, more informed decisions thanks to insights that might otherwise be missed. AI implementation remains low, but companies are preparing for wider adoption Currently, AI adoption in the construction industry is low, despite awareness levels of 32% to 34%. Several factors appear responsible, including a lack of clear understanding, internal approval, and software access. However, Dodge’s research discovered more than half of companies surveyed are strategically preparing for AI with pilot programmes and staff training for AI-related positions. According to the report, 40% of companies have a set budget for AI, 38% are developing teams for implementation, 19% are adapting old workflows, and 51% are assessing AI changes. Early adopters lead the way Overall awareness of AI use in the industry is quite low, with just 20% to 50% of contractors aware that certain management tools use AI, and very few actively use these functions. Nevertheless, early adopters of AI provided positive feedback, as more than 70% said AI tools are highly effective compared with traditional methods, suggesting a potential for quick growth in AI use throughout the industry. Security and accuracy lead concerns The main concerns of adopting AI revolve around security and accuracy. The report reveals that 57% are worried about the accuracy of AI output, while 54% have doubts over the security of company data. Internal resistance to change (44%) and implementation costs (41%) are also cited as key concerns, but perhaps surprisingly, just 21% expressed concern over job losses. 31% believe current data quality is not yet adequate to support AI analysis.
According to the report, larger contractors are likely to rely more on AI than smaller firms, and are thus more concerned about data quality and reliability. For instance, 69% of larger contractors cited lack of reliability or accuracy of AI outputs as a major concern, compared to 54% of smaller or mid-size contractors. Research data confirms that contractors are generally open to adopting AI, but the accuracy of AI outputs tends to stand in the way, as well as the desire for better tools, more information, and greater internal support. 17% of contractors said they do not sufficiently trust AI results, an issue that becomes more pronounced in sensitive areas like payments. Distrust rises to 35% when AI handles payments, and 31% do not have faith in AI managing project budgets. A major theme is the need for more understanding before using AI. On average, 21% of respondents said they want better insight into how AI works before considering using it, climbing to 31% for more complex tasks like safety risk assessments. Contractors also believe they are limited by their current software capabilities, with an average of 19% reporting their software does not offer the AI functions they require. This increases to 33% for managing resources. Internal approval remains a notable obstacle, with 22% saying their company has not yet approved the use of AI, despite personal interest. Another barrier is a lack of time or resources to evaluate AI tools effectively; 13% cited this as a main reason why AI has not yet been adopted. Although there are obvious challenges to mass AI use in the construction industry – and therefore significant market opportunity – only 5% believe AI would not be beneficial or improve current methods. That indicates a resistance that stems from various concerns rather than a lack of perceived value. Steve Jones, Senior Director, Industry Insights Analytics at Dodge, spoke on the findings.
“We designed this study to look at the use of AI in the digital tools already deployed by contractors because that may offer the best solution to the challenge of data quality. But it is also heartening to see that many contractors are aware of the key challenges and the need for a rigorous approach to successfully implementing these tools at their organisations,” Jones said. Key interest in emerging AI functionalities AI’s potential is clearly recognised, even if the industry’s readiness to adopt it doesn’t yet match that recognition. Certain areas are attracting the most attention when it comes to AI functions, like automated construction analysis, where 81% see potential benefits. 80% also show interest in intelligent permit submissions, while 79% believe in autonomous schedule and resource optimisation. 92% appreciate automated contract management and 76% recognise potential in AI-powered dynamic pricing. Although AI adoption remains limited, these strong numbers suggest the tide may soon be turning. AI and the new age of the construction industry The latest data suggests a strong openness, maybe even an eagerness, to AI adoption in the construction sector. However, better tools, clearer guidance, and more trustworthy outputs are just some of the areas that need to be addressed before interest becomes implementation. “With high awareness, strong interest, and powerful validation from early adopters, contractors appear poised for significant expansion in their use of AI-enabled tools in meaningful ways,” said Steve Jones. The industry is on a “tipping point for AI adoption” according to Jones. When companies start to provide clearer pathways for adoption, the move towards AI-powered construction workflows is likely to accelerate rapidly, reshaping how projects are delivered. (Image source: “Tianjin Construction Site.” by @yakobusan Jakob Montrasio is licensed under CC BY 2.0.)
The post Strong contractor belief in AI for industry-wide transformation appeared first on AI News. View the full article
-
As the company that kick-started the cloud computing revolution, Amazon is one of the world’s biggest companies, and its technology practices can be regarded as a blueprint for implementing new technology. This article looks at some of the ways that the company is deploying AI in its operations. Amazon’s latest AI strategy has progressed from basic chatbots to agentic AI: systems that can plan and execute multi-step work using different tools and across processes. As a company, Amazon sits at the intersection of cloud infrastructure (in the form of AWS), logistics, retail, and customer service, all of which are areas where small efficiency gains can have massive impact. From copilots to agents: AWS builds the control plane for autonomy In early 2025, Amazon made its AI intentions clear for its cloud company, AWS, by forming a new group focused internally on agentic AI. According to reporting on an internal email, AWS leadership described agentic AI as a potential “multi-billion” business, underscoring that the technology is regarded as a new platform layer, not a standalone feature. The company was not afraid to say that its workforce is expected to shrink because of the technology. In June 2025, Amazon CEO Andy Jassy told employees that widespread use of generative AI and agents will change how work is done, and that over the next few years, Amazon expects routine work to become faster and more automated, slowing hiring, changing roles, and shrinking some job categories, even if other categories grow. Amazon’s best use cases are high-volume, rules-bound workflows that require a lot of searching, checking, routing, and logging. These already have, or will have, significant impact in forecasting, delivery mapping, customer service, and product content. Reuters noted examples like inventory optimisation, improved customer service, and better product detail pages as internal targets for gen AI.
Logistics and operations Amazon has described AI-enabled upgrades in its US operations that hint at where an agentic approach may take shape. In June 2025, it outlined AI innovations that included a generative AI system to improve delivery location accuracy, a new demand forecasting model to predict what customers want (and where), and an agentic AI team looking at enabling robots to understand natural-language instructions. Consumer-facing agents Consumer agents are where autonomy first becomes real, because the systems take actions, even where money is involved. Reporting in The Verge about Alexa+ highlighted features like monitoring items for price drops and (optionally) purchasing for the user automatically once a threshold is hit, a concrete example of the agentic concept in everyday terms: the user sets constraints (in the form of price thresholds), and the system watches and executes within those boundaries. Rufus as the Amazon AI interface Amazon’s Rufus assistant is positioned as an AI interface to shopping, one that helps customers find products, do comparisons, and understand the trade-offs between various choices. Amazon describes Rufus as powered by generative (and increasingly agentic) AI to make shopping faster, with personalisation created by a user’s shopping history and current context. Agents therefore become the shopping interface, with their value to the retailer lying in shortening the journey from intent to final purchase. Agents for Amazon Bedrock and AgentCore Internally, AWS is producing agentic ‘building blocks’. Agents for Amazon Bedrock are designed to execute multi-step tasks by orchestrating models with tool use and integration with other platforms. Amazon Bedrock AgentCore is presented as a platform to build [PDF], deploy, and operate agents securely at scale. It has features like runtime hosting, memory, observability dashboards, and evaluation.
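The orchestration pattern these services describe – a planner choosing tools, a runtime executing them, results logged for auditability – can be sketched generically. The loop below is an illustrative, framework-free approximation, not the Bedrock API; the inventory “tools” and the rule-based stand-in planner are invented for the example:

```python
# Illustrative agent loop: a planner picks the next tool, the runtime
# executes it, and every (tool, result) pair is logged as an audit trail.
# The inventory "tools" and rule-based planner are invented stand-ins
# for real integrations and an LLM.

def check_inventory(item: str) -> int:
    stock = {"widget": 3}                 # stand-in for a real system
    return stock.get(item, 0)

def place_reorder(item: str, qty: int) -> str:
    return f"reorder placed: {qty} x {item}"

TOOLS = {"check_inventory": check_inventory, "place_reorder": place_reorder}

def plan_next_step(goal, memory):
    """Stand-in for an LLM planner: returns (tool, args) or None when done."""
    if not memory:
        return "check_inventory", {"item": goal["item"]}
    last_tool, last_result = memory[-1]
    if last_tool == "check_inventory" and last_result < goal["min_stock"]:
        return "place_reorder", {"item": goal["item"], "qty": goal["min_stock"]}
    return None                           # goal satisfied; stop

def run_agent(goal):
    memory = []                           # the audit trail
    while (step := plan_next_step(goal, memory)) is not None:
        tool, args = step
        memory.append((tool, TOOLS[tool](**args)))
    return memory

trace = run_agent({"item": "widget", "min_stock": 10})
# trace records both steps: the stock check (3 units) and the reorder.
```

Platforms like AgentCore wrap this kind of loop with hosted runtime, persistent memory, and observability; the shape of the control flow is the same.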
AgentCore is Amazon’s attempt to become the default infrastructure layer for supervised enterprise agents, especially for organisations that need auditability, access controls, and reliability. Keeping an eye on workforce and governance If Amazon succeeds, the next phase for the technology is managed AI, comprising mechanisms that grant or revoke permissions for tools and data access, monitoring of agents’ behaviour, evaluation of performance against governance guidelines, and escalation paths for when agents hit uncertainty. The signals to the workforce have been baked into leadership messaging at the company. Fewer people will be required for some corporate tasks, and more roles will be needed to design workflows, govern the models, keep systems secure, and audit the outcomes of agentic AI use. Conclusions Amazon is a proven technology leader, and its stance on AI – and the meaningful ways in which it is implementing it – describes the paths enterprise companies may follow. Winning the productivity gains and lowered costs that AI technology promises is not as simple as plugging in a local device, or spinning up a new cloud instance. But the company can be seen as lighting the way for others to follow. Whether it’s supervising agents or deflecting customer queries to automated answering systems, AI is changing this technology giant in every possible way. (Image source: “CHEN – The Arousing, Thunder – arouse, excite, inspire; thunder rising from below; awe, alarm, trembling; fertilizing intrusion. The ideogram: excitement and rain” – public domain)
The post AWS’s legacy will be in AI success appeared first on AI News. View the full article
-
Walmart’s December 9 transfer to Nasdaq wasn’t just a symbolic gesture. The US$905 billion retailer is making its boldest claim yet: that it’s no longer a traditional discount chain, but a tech-powered enterprise using AI to fundamentally rewire retail operations. But beyond the marketing spin and the parade of AI announcements, what’s genuinely transforming at the world’s largest retailer—and where are the gaps between ambition and execution? The Agentic AI pivot: Purpose-built, not off-the-shelf Walmart’s AI strategy diverges sharply from competitors chasing generic large language models. According to CTO Hari Vasudev, the company is deploying what it calls “purpose-built agentic AI”—specialised tools trained on Walmart’s proprietary retail data rather than one-size-fits-all solutions. “Our approach to agentic AI at Walmart is surgical,” Vasudev wrote in a May 2025 blog post. “Extensive early testing proved that, for us, agents work best when deployed for highly specific tasks, to produce outputs that can then be stitched together to orchestrate and solve complex workflows.” This translates to tangible applications: Walmart’s “Trend-to-Product” system cuts fashion production timelines by 18 weeks. Its GenAI Customer Support Assistant now autonomously routes and resolves issues without human intervention. Developer productivity tools handle test generation and error resolution within CI/CD pipelines. Meanwhile, the company’s retail-specific LLM “Wallaby”—trained on decades of Walmart transaction data—powers everything from item comparison to personalised shopping journey completion. The infrastructure undergirding this? Element, Walmart’s proprietary MLOps platform, is designed to avoid vendor lock-in and optimise GPU usage across multiple cloud providers. It’s an in-house “factory” that gives Walmart speed and flexibility competitors wrestling with third-party platforms can’t match. 
Real numbers: Where AI delivers measurable impact Walmart has been unusually transparent about specific ROI metrics, offering a rare glimpse into enterprise AI economics: Data operations: GenAI improved over 850 million product catalogue data points—a task that would have required 100 times the headcount using manual processes, according to CEO Doug McMillon’s August 2024 earnings call. Supply chain efficiency: AI-powered route optimisation eliminated 30 million unnecessary delivery miles and avoided 94 million pounds of CO2 emissions. The company won the prestigious Franz Edelman Award in 2023 for this technology—and has since commercialised it as a SaaS product for other businesses. Store operations: Digital Twin technology predicts refrigeration failures up to two weeks in advance, auto-generating work orders complete with visual models, wiring diagrams, and required parts. Sam’s Club’s AI-powered exit technology has reduced member checkout times by 21%, with over 64% of members now using the friction-free system across all locations. Customer experience: Dynamic Delivery algorithms analyse traffic patterns, weather conditions, and order complexity to predict delivery times down to the minute, enabling 17-minute express deliveries in test markets. The human cost: “AI will change every job” McMillon hasn’t sugarcoated the workforce implications. Speaking at a Bentonville workforce conference in September 2025, he stated bluntly: “It’s very clear that AI is going to change literally every job. Maybe there’s a job in the world that AI won’t change, but I haven’t thought of it.” But Walmart’s positioning this as a transformation rather than an elimination. McMillon expects total headcount to remain flat even as revenue grows—meaning jobs will shift, not disappear. 
White-collar roles face the earliest disruption through chatbots handling customer service and supply chain tracking, while store and warehouse workers will eventually see tasks absorbed by autonomous systems. The company is investing heavily in reskilling programs. “We’ve got to create the opportunity for everybody to make it to the other side,” McMillon said at the Bentonville conference. Automation equipment operator Chance at Walmart’s Palestine, Texas, distribution centre described the shift: “It used to be 85% physical. Now it’s 85% mental. I’m solving problems with my mind, not just my body.” The Nasdaq gambit: Repositioning for tech valuations Walmart’s exchange transfer was explicitly framed around its AI transformation. CFO John David Rainey stated the move reflects the company “setting a new standard for omnichannel retail by integrating automation and AI.” The subtext? Walmart wants the valuation multiples tech companies command. At a P/E ratio of 40.3x—higher than Amazon and Microsoft—the market is partially buying the transformation story. Potential inclusion in the tech-heavy Nasdaq 100 index would drive passive fund investment regardless of AI execution. Analysts are split on whether the premium is justified. Jefferies’ Corey Tarlowe argued the move signals Walmart is “less of a traditional retail corporation and more of a technology firm.” But sceptics note the company still derives revenue from razor-thin retail margins, not high-margin software or cloud services—despite commercialising tools like Route Optimisation. Verdict: Genuine transformation with execution risk Walmart’s AI strategy is neither pure hype nor guaranteed success. The company is making structural investments in proprietary infrastructure, deploying AI at genuine scale with measurable operational benefits, and transparently acknowledging workforce implications most enterprises dodge. 
But significant execution risks remain: managing fragmented agent ecosystems, preventing algorithmic bias at scale, competing against external shopping agents, and determining appropriate automation boundaries while maintaining accuracy. The company’s candidness about challenges—“often, a co-pilot model, with humans and AI working as a team, is the most effective approach”—suggests leadership understands AI isn’t a silver bullet. For enterprises watching Walmart’s playbook, the lesson is clear: build for specificity, not generality. Invest in proprietary data moats. Plan for workforce transformation, not just cost reduction. And recognise that even with massive resources and technical talent, agentic AI remains early-stage technology with genuine limitations. The question isn’t whether Walmart is using AI—it demonstrably is. It’s whether this surgical, infrastructure-heavy approach delivers sustainable competitive advantage, or if the company is simply automating itself into the same low-margin trap with shinier tools. That answer won’t be clear for several years—but Walmart’s willingness to bet US$905 billion in market cap on the transformation suggests leadership believes the former. See also: Walmart and Amazon drive retail transformation with AI The post Walmart’s AI strategy: Beyond the hype, what’s actually working appeared first on AI News. View the full article
-
Enterprise leaders are pressing ahead with artificial intelligence, even as early results remain uneven. Reporting from the Wall Street Journal and Reuters shows that most CEOs expect AI spending to keep rising through 2026, despite difficulty tying those investments to clear, enterprise-wide returns.

This tension highlights where many organisations now sit in their AI journey. The technology has moved beyond trials and proofs of concept, but it has yet to settle into a reliable source of value. Companies are operating in an in-between phase, where ambition, execution, and expectations are all under strain at the same time.

Spending continues, even as returns lag

AI budgets have climbed steadily across large enterprises over the past two years. Competitive pressure, board oversight, and fear of being left behind have all played a role. At the same time, executives are more open about the limits they are seeing. Gains often show up in pockets rather than across the business, pilots fail to spread, and the cost of connecting AI systems to existing tools keeps rising.

A Wall Street Journal survey of senior executives found that most CEOs see AI as central to long-term competitiveness, even if short-term benefits are hard to measure. For many, AI no longer feels optional. It is treated as a capability that must be developed over time, rather than a project that can be paused if results disappoint. That view helps explain why spending remains steady. Leaders worry that cutting back now could weaken their position later, especially as rivals improve how they use the technology.

Why pilots struggle to scale

One of the main barriers to stronger returns is the jump from experimentation to day-to-day use. Many organisations have launched AI pilots across different teams, often without shared rules or coordination. While these efforts can generate insight and interest, few translate into changes that affect the wider business.
Reuters has reported that companies trying to scale AI frequently run into issues with data quality, system links, security controls, and regulatory requirements. These problems are not only technical. They reflect how work is organised. Responsibility is often split across teams, ownership is unclear, and decisions slow down once projects touch legal, risk, and IT functions. The result is a pattern of heavy spending on trials, with limited progress toward systems that are embedded in core operations.

Infrastructure costs reshape the equation

The cost of infrastructure is also weighing on AI returns. Training and running models demand large amounts of computing power, storage, and energy. Cloud bills can rise quickly as usage grows, while building on-site systems requires upfront investment and long planning cycles. Executives cited by Reuters have warned that infrastructure costs can outpace the benefits delivered by AI tools, particularly in the early stages.

This has forced tough choices: whether to centralise AI resources or leave teams to experiment on their own; whether to build in-house systems or rely on vendors; and how much waste is acceptable while capabilities are still forming. In practice, these decisions are shaping AI strategy as much as model performance or use-case selection.

AI governance moves to the centre of CEO decision-making

As AI spending increases, so does scrutiny. Boards, regulators, and internal audit teams are asking harder questions. In response, many organisations are tightening control. Decision rights are shifting toward central teams, AI councils are becoming more common, and projects are being linked more closely to business priorities. The Wall Street Journal reports that companies are moving away from loosely connected experiments toward clearer goals, measures, and timelines. This can slow progress, but it reflects a growing belief that AI should be managed with the same discipline as other major investments.
This shift marks a change in how AI is treated. It is no longer a side effort or a curiosity. It is being brought into existing operating and risk structures.

Expectations are being reset, not abandoned

Importantly, the persistence of AI spending does not signal blind optimism. Instead, it reflects a reset in expectations. CEOs are learning that AI rarely delivers immediate, sweeping returns. Value tends to emerge gradually, as organisations adjust workflows, retrain staff, and refine data foundations. Rather than abandoning AI initiatives, many enterprises are narrowing their focus. They are prioritising fewer use cases, demanding clearer ownership, and aligning projects more closely with business outcomes. This recalibration may reduce short-term excitement, but it improves the likelihood of sustainable returns.

What CEO AI strategy signals for 2026 planning

For organisations shaping their plans for 2026, the message for every CEO is not to retreat from AI, but to pursue it with more care as AI strategies mature. Ownership, governance, and realistic timelines matter more than headline spending levels or bold claims. Those most likely to benefit are treating AI as a long-term shift in how the organisation works, not a quick route to growth. In the next phase, advantage will depend less on how much is spent and more on how well AI fits into everyday operations.

See also: AI in 2026: Experimental AI concludes as autonomous systems rise

The post CEOs still betting big on AI: Strategy vs. return on investment in 2026 appeared first on AI News.
-
Generative AI’s experimental phase is concluding, making way for truly autonomous systems in 2026 that act rather than merely summarise. In 2026, the focus will shift from model parameters to agency, energy efficiency, and the ability to navigate complex industrial environments. The next twelve months represent a departure from chatbots toward autonomous systems executing workflows with minimal oversight, forcing organisations to rethink infrastructure, governance, and talent management.

Autonomous AI systems take the wheel

Hanen Garcia, Chief Architect for Telecommunications at Red Hat, argues that while 2025 was defined by experimentation, the coming year marks a “decisive pivot towards agentic AI, autonomous software entities capable of reasoning, planning, and executing complex workflows without constant human intervention.” Telecoms and heavy industry are the proving grounds. Garcia points to a trajectory toward autonomous network operations (ANO), moving beyond simple automation to self-configuring and self-healing systems. The business goal is to reverse commoditisation by “prioritising intelligence over pure infrastructure” and reduce operating expenditures.

Technologically, service providers are deploying multiagent systems (MAS). Rather than relying on a single model, these allow distinct agents to collaborate on multi-step tasks, handling complex interactions autonomously. However, increased autonomy introduces new threats. Emmet King, Founding Partner of J12 Ventures, warns that “as AI agents gain the ability to autonomously execute tasks, hidden instructions embedded in images and workflows become potential attack vectors.” Security priorities must therefore shift from endpoint protection to “governing and auditing autonomous AI actions.”

As organisations scale these autonomous AI workloads, they hit a physical wall: power. King argues energy availability, rather than model access, will determine which startups scale.
“Compute scarcity is now a function of grid capacity,” King states, suggesting energy policy will become the de facto AI policy in Europe.

KPIs must adapt. Sergio Gago, CTO at Cloudera, predicts enterprises will prioritise energy efficiency as a primary metric. “The new competitive edge won’t come from the largest models, but from the most intelligent, efficient use of resources.” Horizontal copilots lacking domain expertise or proprietary data will fail ROI tests as buyers measure real productivity. The “clearest enterprise ROI” will emerge from manufacturing, logistics, and advanced engineering—sectors where AI integrates into high-value workflows rather than consumer-facing interfaces.

AI ends the static app in 2026

Software consumption is changing too. Chris Royles, Field CTO for EMEA at Cloudera, suggests the traditional concept of an “app” is becoming fluid. “In 2026, AI will start to radically change the way we think about apps, how they function and how they’re built.” Users will soon request temporary modules generated by code and a prompt, effectively replacing dedicated applications. “Once that function has served its purpose, it closes,” Royles explains, noting these “disposable” apps can be built and rebuilt in seconds. Rigorous governance is required here; organisations need visibility into the reasoning processes used to create these modules to ensure errors are corrected safely.

Data storage faces a similar reckoning, especially as AI becomes more autonomous. Wim Stoop, Director of Product Marketing at Cloudera, believes the era of “digital hoarding” is ending as storage capacity hits its limit. “AI-generated data will become disposable, created and refreshed on demand rather than stored indefinitely,” Stoop predicts. Verified, human-generated data will rise in value while synthetic content is discarded. Specialist AI governance agents will pick up the slack.
These “digital colleagues” will continuously monitor and secure data, allowing humans to “govern the governance” rather than enforcing individual rules. For example, a security agent could automatically adjust access permissions as new data enters the environment without human intervention.

Sovereignty and the human element

Sovereignty remains a pressing concern for European IT. Red Hat’s survey data indicates 92 percent of IT and AI leaders in EMEA view enterprise open-source software as vital for achieving sovereignty. Providers will leverage existing data centre footprints to offer sovereign AI solutions, ensuring data remains within specific jurisdictions to meet compliance demands. J12 Ventures’ Emmet King adds that competitive advantage is moving from owning models to “controlling training pipelines and energy supply,” with open-source advancements allowing more actors to run frontier-scale workloads.

Workforce integration is becoming personal. Nick Blasi, Co-Founder of Personos, argues tools ignoring human nuance – tone, temperament, and personality – will soon feel obsolete. By 2026, Blasi predicts “half of workplace conflict will be flagged by AI before managers know it exists.” These systems will focus on “communication, influence, trust, motivation, and conflict resolution,” Blasi suggests, adding that personality science will become the “operating system” for the next generation of autonomous AI, offering grounded understanding of human individuality rather than generic recommendations.

The era of the “thin wrapper” is over. Buyers are now measuring real productivity, exposing tools built on hype rather than proprietary data. For the enterprise, competitive advantage will no longer come from renting access to a model, but from controlling the training pipelines and energy supply that power it.

See also: BBVA embeds AI into banking workflows using ChatGPT Enterprise

The post AI in 2026: Experimental AI concludes as autonomous systems rise appeared first on AI News.
-
BBVA is embedding AI into core banking workflows using ChatGPT Enterprise to overhaul risk and service in the sector. For the banking industry, the challenge of generative AI is rarely about adoption; it is about value extraction. BBVA has addressed this by integrating OpenAI’s platform directly into its operational backbone, a decision that will see the tool deployed across every unit of the bank. This tenfold expansion marks one of the largest enterprise deployments in the financial sector to date.

The bank’s approach prioritises data over hype. BBVA began working with OpenAI in May 2024, rolling out 3,300 accounts to test the waters. This pilot phase allowed the institution to validate use cases before expanding to 11,000 staff. The results from this intermediate rollout of AI in banking provided the necessary business case. Employees using the tools saved nearly three hours per week on routine tasks. Furthermore, engagement was high, with more than 80 percent of users logging in daily. Staff created thousands of custom GPTs to handle specific collaborative and administrative duties, proving that utility often lies in bottom-up innovation rather than top-down mandates.

Impact of ChatGPT on banking operations

This agreement goes beyond simple productivity tools; it aims to reshape how the bank functions. BBVA will use the partnership to streamline risk analysis, a resource-intensive area for all financial institutions. Additionally, the deal focuses on redesigning software development processes and improving general employee support.

“We were pioneers in the digital and mobile transformation, and we are now entering the AI era with even greater ambition. Our alliance with OpenAI accelerates the native integration of artificial intelligence across the bank to create a smarter, more proactive, and completely personalized banking experience, anticipating the needs of every client,” said Carlos Torres Vila, Chairman, BBVA.
Deploying AI into a heavily regulated environment like banking requires strict controls. The global rollout includes enterprise-grade security and privacy measures, ensuring that client data remains protected while staff access OpenAI’s latest models. The ChatGPT deployment includes tools for creating internal agents that connect securely to BBVA’s existing banking systems and processes. To ensure these tools are used effectively, the two companies are collaborating on a structured adoption model and specialised training programmes. This addresses the common enterprise pitfall where software is deployed without the necessary skills to leverage it.

Improving the banking experience with AI

The initiative also targets the client interface. BBVA has already deployed ‘Blue,’ a virtual assistant built on OpenAI models that helps customers manage cards and accounts using natural language. Future plans involve integrating products so customers can interact with the bank directly through ChatGPT. To facilitate this, BBVA will have a dedicated team working directly with OpenAI’s product and research units. This direct line is intended to accelerate the bank’s transition into an “AI-native” institution.

Sam Altman, CEO of OpenAI, said: “BBVA is a strong example of how a large financial institution can adopt AI with real ambition and speed. With this expansion of our work together, BBVA will embed our AI into the core of their products and operations to enhance the overall banking experience for their customers.”

See also: Perplexity: AI agents are taking over complex enterprise tasks
The post BBVA embeds AI into banking workflows using ChatGPT Enterprise appeared first on AI News.
-
F. Scott Fitzgerald once observed that “in a real dark night of the soul, it is always three o’clock in the morning.” Microsoft’s latest Copilot usage analysis suggests this nocturnal tendency toward existential contemplation persists in the AI age—with religion and philosophy conversations rising through the rankings during early morning hours.

The Microsoft AI (MAI) research team analysed 37.5 million de-identified conversations to uncover when and how users turn to Copilot for different needs. The findings, published December 10, reveal distinct AI usage patterns that follow surprisingly human rhythms—from late-night philosophical queries to Valentine’s Day relationship anxiety, and the strict divide between weekday coding and weekend gaming.

Late-night existential queries and daytime travel planning

According to the report authored by Bea Costa-Gomes and Seth Spielman, religion and philosophy conversations rise through the rankings during early morning hours. Travel-related discussions, by contrast, peak during typical commuting times, suggesting users plan trips while in transit. “The larger-than-life questions seem to have a rise during the early hours of the morning, with ‘Religion and Philosophy’ rising through the ranks,” the researchers noted in their paper.

This temporal divide reflects different states of mind: practical planning during active hours versus contemplative questioning during quiet moments. The pattern challenges assumptions about AI as merely a productivity tool, revealing its role in addressing fundamental human curiosity.

Health dominates mobile usage across all hours

Health-related topics consistently ranked as the most common conversation type on mobile devices throughout 2025, regardless of time, day, or month.
Users turned to Copilot for wellness tracking, health tips, and daily routine management with notable consistency. “When it comes to mobile, with its intimacy and immediacy, nothing tops our health,” the researchers wrote. The dominance of health queries on mobile devices—but not desktops—suggests users view smartphones as more personal companions for sensitive topics. This usage analysis indicates AI assistants are becoming integrated into daily health management routines.

Programming rises weekdays, gaming dominates weekends

August data revealed a clear cyclical pattern between programming and gaming topics. Programming conversations climbed from Monday through Friday, while gaming queries surged on weekends. Users were equally likely to engage with both topics, but strictly segregated by day of the week. “This crossover hints at a vibrant, creative community that loves to code during the week and play during the weekends in equal measure,” according to the report. The strict temporal separation suggests users maintain clear boundaries between work and leisure, even when both involve digital activities.

February’s Valentine’s Day spike in relationship queries

The Copilot usage analysis showed February brought a pronounced spike in relationship-related conversations, peaking specifically on Valentine’s Day. Personal growth and wellness discussions also increased before the holiday, as users sought guidance for navigating romantic expectations. “Whether it was in preparing for Valentine’s day, or facing the day and the relationships, we saw a spike in activity as people turned to Copilot for guidance, reminders, and support,” the researchers observed. This pattern reveals how AI usage patterns align with cultural moments and social pressures, with users seeking private counsel from AI during emotionally charged periods.
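Patterns like the weekday coding versus weekend gaming split are, at heart, a group-by over de-identified (timestamp, topic) pairs. Below is a minimal sketch of that kind of temporal aggregation; the sample records, topic labels, and function are invented for illustration, since Microsoft has not published its pipeline or data schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical de-identified records: (ISO timestamp, topic label) only — no conversation text.
records = [
    ("2025-08-04T10:15:00", "programming"),  # Monday
    ("2025-08-05T09:30:00", "programming"),  # Tuesday
    ("2025-08-06T14:00:00", "programming"),  # Wednesday
    ("2025-08-09T20:45:00", "gaming"),       # Saturday
    ("2025-08-10T11:20:00", "gaming"),       # Sunday
]

def topic_counts_by_daytype(rows):
    """Count topics separately for weekdays (Mon-Fri) and weekends (Sat-Sun)."""
    counts = {"weekday": Counter(), "weekend": Counter()}
    for ts, topic in rows:
        day = datetime.fromisoformat(ts).weekday()  # 0 = Monday … 6 = Sunday
        bucket = "weekend" if day >= 5 else "weekday"
        counts[bucket][topic] += 1
    return counts

counts = topic_counts_by_daytype(records)
```

At 37.5 million conversations the real analysis would run on distributed infrastructure, but the aggregation logic is the same: bucket by time, count by topic, and never retain individual conversations.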
From search to advice: shifting usage patterns

While information search remains Copilot’s most popular function, Microsoft’s analysis detected a clear rise in users seeking advice, particularly on personal topics including relationships and life decisions. “This growing trend highlights how digital tools are becoming trusted companions for life’s everyday questions,” the report stated. The shift from purely informational queries to advice-seeking represents an evolution in how users conceptualise AI assistants—from search engines to consultants.

Privacy-preserving methodology

Microsoft’s research team emphasised that their analysis maintained user privacy by extracting only conversation summaries rather than full content. The system identifies topic and intent while avoiding individual-level data exposure. “Our system doesn’t just de-identify conversations; it only extracts the summary of the conversation, from which we learn the topic and the intent, and maintains full privacy,” Costa-Gomes and Spielman explained. The 37.5 million conversation sample represents a subset of Copilot’s total usage, though Microsoft did not disclose the full user base or conversation volume.

Implications for AI development

Understanding these AI usage patterns informs feature development, according to Microsoft. The company noted that recognising what matters most to users—health, creativity, and support during key moments—enables designing features that fit naturally into daily life. “It’s also clear from these uses that what Copilot says matters. They show why it’s so important that we hold ourselves to a high bar for quality,” the researchers wrote.

The findings complement recent broader industry analysis of AI usage patterns, including OpenRouter’s report showing ******** AI models gaining global adoption through cost efficiency and performance improvements. Microsoft’s temporal focus adds behavioural depth to understanding how AI assistants integrate into human routines.
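The summary-only extraction described in the methodology section above (keep the topic and intent, discard the raw text) can be sketched as follows. The record shape, keyword lookup, and intent rule here are hypothetical stand-ins for illustration; Microsoft’s actual summariser and taxonomy are not public.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConversationRecord:
    """What the analysis retains per conversation: topic and intent labels only, no text."""
    topic: str
    intent: str

# Toy keyword classifier standing in for the real (unpublished) summarisation model.
TOPIC_KEYWORDS = {
    "health": ("sleep", "diet", "exercise"),
    "travel": ("flight", "hotel", "train"),
}

def extract_record(conversation_text: str) -> ConversationRecord:
    text = conversation_text.lower()
    topic = next(
        (t for t, kws in TOPIC_KEYWORDS.items() if any(k in text for k in kws)),
        "other",
    )
    intent = "advice" if "should i" in text else "information"
    # The raw text is dropped here; only the coarse labels leave this function.
    return ConversationRecord(topic=topic, intent=intent)
```

The design point the researchers stress is that privacy comes from what is *not* stored: once only (topic, intent) labels exist, individual conversations cannot be reconstructed from the aggregate.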
The research arrives as Microsoft expands its AI infrastructure, with the company’s next-generation GB200 cluster now operational at MAI. These usage patterns suggest AI assistants are evolving beyond tools into companions that adapt to human rhythms—answering practical questions during daylight hours while entertaining philosophical musings when users can’t sleep.

See also: Microsoft ‘Promptions’ fix AI prompts failing to deliver

The post Microsoft’s Copilot usage analysis exposes the 2 am philosophy question phenomenon appeared first on AI News.