A recent industry report [PDF] argues that Britain’s railway network could carry an extra billion journeys by the mid-2030s, building on the 1.6 billion passenger rail journeys recorded in the year to March 2024. The next decade will combine complexity and control, as more digital systems, data, and interconnected suppliers create more potential points of failure. The report’s central theme is that AI will become the operating system for modern rail: not a single, centralised collection of models and algorithms, but layers of prediction, optimisation, and automated monitoring across infrastructure, rolling stock, maintenance yards, and stations (pp.18-23). This technology will guide human focus within daily work schedules rather than replace human activity entirely.

Maintenance to become predictive and data-driven

Traditional rail maintenance relies on fixed schedules and manual inspections, a reactive and labour-intensive practice. The whitepaper cites Network Rail’s reliance on engineers walking the track to spot defects (p.18). AI will shift the industry to predictive maintenance, analysing sensor data to forecast failures before they cause significant disruption. This relies on a combination of sensors and imaging, including high-definition cameras, LiDAR scanners, and vibration monitors, which feed machine-learning systems data that can flag degradation in track, signalling, and electrical assets ahead of failure (pp.18-19). These monitoring programmes can generate alerts months in advance, reducing emergency call-outs, though the lead time for predicting asset failure varies by asset type.
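As an illustration of the underlying idea only (not anything described in the report), a minimal predictive-maintenance check might fit a trend to daily vibration readings and project when an alarm threshold will be crossed. The function name, window, and threshold below are invented for the sketch:

```python
def estimate_days_to_failure(daily_rms, alarm_level, window=30):
    """Project when a vibration reading will cross an alarm threshold.

    daily_rms: one RMS reading per day, oldest first.
    Fits a crude linear trend over the last `window` readings and
    extrapolates. Returns estimated days until the crossing, or None
    if the trend is flat or improving. Illustrative only: real systems
    model each asset class separately and fuse several sensor types.
    """
    recent = daily_rms[-window:]
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)  # RMS units per day
    if slope <= 0:
        return None  # asset is stable; no failure projected
    remaining = alarm_level - recent[-1]
    return 0 if remaining <= 0 else round(remaining / slope)

# A bearing degrading by ~0.01 g RMS per day, with the alarm set at 2.0 g:
readings = [0.50 + 0.01 * day for day in range(60)]
print(estimate_days_to_failure(readings, alarm_level=2.0))  # roughly 91 days out
```

The months-ahead alerts the report describes correspond to the gap between the projected crossing and today; in practice the window, trend model, and alarm level would differ per asset type.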
Network Rail’s intelligent infrastructure efforts aim to shift maintenance from “find and fix” to “predict and prevent.” Network Rail emphasises data-led maintenance and tools designed to consolidate asset information, while European R&D programmes (Europe’s Rail and its predecessor, Shift2Rail) fund projects like DAYDREAMS that are similarly aimed at prescriptive asset management. Prediction at scale requires a common approach across operators and suppliers if it is to transform the industry.

Traffic control and energy efficiency

Beyond predictive maintenance, operational optimisation offers significant returns. AI systems use live and historical operating data, such as train positions, speeds, and weather forecasts, to anticipate disruption and adjust traffic flow. Digital twin and AI-based traffic management trials in Europe, alongside research and testing of AI-assisted driving and positioning, could increase overall network capacity without laying more track (p.20). Algorithms also advise drivers on optimal acceleration and braking, potentially saving 10-15% in energy. Given route variations, traction types, and timetable constraints, those savings compound quickly across a large network.

Safety monitoring and CCTV

The most visible AI applications focus on safety and security. Obstacle detection uses thermal cameras and machine learning to identify hazards beyond human visibility. AI also monitors level crossings and analyses CCTV footage to spot unattended items and suspicious activity (pp.20-21). For example, AI and LiDAR are used for crowd monitoring at London Waterloo as part of a suite of safety tools.

Passenger flows and journey optimisation

AI can forecast demand using ticket sales, events, and mobile signals, allowing operators to adjust the number of carriages and reduce overcrowding, the report states. Passenger counting is a high-impact, low-drama application: better data supports better timetables and clearer customer information.
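The driver-advisory idea can be caricatured with constant-deceleration kinematics (a toy model, not the systems the report describes): the earlier a driver cuts traction and coasts towards a stop, the less kinetic energy the brakes later throw away. All numbers and names below are invented for the sketch:

```python
import math

def coasting_advice(speed, dist_to_stop, coast_decel=0.05, brake_decel=0.8):
    """Where should the driver cut traction before a station stop?

    speed: current speed (m/s); dist_to_stop: metres to the stop.
    coast_decel: deceleration while coasting (rolling and air resistance).
    brake_decel: service-brake deceleration.
    Returns (brake_point_m, energy_fraction_saved): the distance from the
    stop at which braking should begin, and the rough fraction of kinetic
    energy kept out of the brakes versus braking from full speed.
    Toy model: constant decelerations, level track, no timetable slack.
    """
    # Split the distance D into a coasting phase and a braking phase:
    #   v_b^2 = v^2 - 2*coast_decel*(D - d_b)   (speed left after coasting)
    #   v_b^2 = 2*brake_decel*d_b               (braking from v_b to rest)
    # Equating and solving for the braking distance d_b:
    d_b = (speed**2 - 2 * coast_decel * dist_to_stop) / (2 * (brake_decel - coast_decel))
    if d_b <= 0:
        return 0.0, 1.0  # coasting alone brings the train to rest
    v_b = math.sqrt(2 * brake_decel * d_b)
    saved = 1 - (v_b / speed) ** 2  # energy not dissipated in the brakes
    return d_b, saved

# 108 km/h (30 m/s) with 5 km to the next stop:
brake_point, saved = coasting_advice(speed=30.0, dist_to_stop=5000.0)
print(round(brake_point), round(saved, 2))  # begin braking ~267 m out; ~53% of braking energy avoided
```

Real driver-advisory systems optimise over gradients, speed limits, and timetable slack, which is where the reported 10-15% figure comes from; this sketch only shows why earlier traction cut-off saves energy at all.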
Cybersecurity issues

As operational technology converges with IT, cybersecurity becomes a critical operational issue. Legacy systems without replacement plans pose a risk, as does integrating modern analytics with older infrastructure; together these create conditions attractive to attackers. The future of AI in rail involves sensors performing in extreme environments, models trusted and tested by operators, and governance that treats cyber resilience as inseparable from physical safety. The report’s message is that AI will arrive regardless. The question is whether railways proactively adopt and control it or inherit it as unmanaged complexity.

(Image source: “Train Junction” by jcgoble3 is licensed under CC BY-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. AI News is powered by TechForge Media.

The post The future of rail: Watching, predicting, and learning appeared first on AI News.
-
For a company built on intellectual property, scale creates a familiar tension. Disney needs to produce and distribute content across many formats and audiences, while keeping tight control over rights, safety, and brand consistency. Generative AI promises speed and flexibility, but unmanaged use risks creating legal, creative, and operational drag. Disney’s agreement with OpenAI shows how a large, IP-heavy organisation is attempting to resolve that tension by putting AI inside its operating system rather than treating it as a side experiment.

Under the deal, Disney becomes both a licensing partner and a major enterprise customer. OpenAI’s video model Sora will be able to generate short, user-prompted videos using a defined set of Disney-owned characters and environments. Separately, Disney will use OpenAI’s APIs to build internal tools and new consumer experiences, including integrations tied to Disney+. The company will also deploy ChatGPT internally for employees.

The mechanics matter more than the spectacle. Disney is not opening its catalogue to unrestricted generation. The licence excludes actor likenesses and voices, limits which assets can be used, and applies safety and age-appropriate controls. In practice, this positions generative AI as a constrained production layer: capable of generating variation and volume, but bounded by governance.

AI inside existing workflows

A consistent failure mode in enterprise AI programmes is separation. Tools live outside the systems where work actually happens, adding steps instead of removing them. Disney’s approach mirrors a more pragmatic pattern: put AI where decisions are already made. On the consumer side, AI-generated content will surface through Disney+, rather than through a standalone experiment. On the enterprise side, employees gain access to AI through APIs and a standardised assistant, rather than a patchwork of ad hoc tools. This reduces friction and makes AI usage observable and governable.
The implication is organisational. Disney is treating generative AI as a horizontal capability, closer to a platform service than a creative add-on. That framing makes it easier to scale usage across teams without multiplying risk.

Variation without expanding headcount

The Sora licence focuses on short-form content derived from pre-approved assets. That constraint is deliberate. In production environments, much of the cost sits not in ideation but in generating usable variations, reviewing them, and moving them through distribution pipelines. By allowing prompt-driven generation inside a defined asset set, Disney can reduce the marginal cost of experimentation and fan engagement without increasing manual production or review load. The output is not a finished film. It is a controlled input into marketing, social, and engagement workflows. This mirrors a broader enterprise pattern: AI earns its place when it shortens the path from intent to usable output, not when it creates standalone artefacts.

APIs over point tools

Beyond content generation, the agreement positions OpenAI’s models as building blocks. Disney plans to use APIs to develop new products and internal tools, rather than relying solely on off-the-shelf interfaces. This matters because enterprise AI programmes often stall on integration. Teams waste time copying outputs between systems or adapting generic tools to fit internal processes. API-level access allows Disney to embed AI directly into product logic, employee workflows, and existing systems of record. In effect, AI becomes part of the connective tissue between tools, not another layer employees must learn to work around.

Aligning productivity with incentives

Disney’s $1 billion equity investment in OpenAI is less interesting as a valuation signal than as an operational one. It indicates an expectation that AI usage will be persistent and central, not optional or experimental.
For large organisations, AI investments fail when tooling remains disconnected from economic outcomes. Here, AI touches revenue-facing surfaces (Disney+ engagement), cost structures (content variation and internal productivity), and long-term platform strategy. That alignment increases the likelihood that AI becomes part of standard planning cycles rather than discretionary innovation spend.

Automation that makes scale less fragile

High-volume AI use amplifies small failures. Disney and OpenAI emphasise safeguards around IP, harmful content, and misuse, not as a values statement but as a scaling requirement. Strong automation around safety and rights management reduces the need for manual intervention and supports consistent enforcement. As with fraud detection or content moderation in other industries, this kind of operational AI does not attract attention when it works, but it makes growth less brittle.

Lessons for enterprise leaders

- Embed AI where work already happens. Disney targets product and employee workflows, not a separate AI sandbox.
- Constrain before you scale. Defined asset sets and exclusions make deployment viable in high-liability environments.
- Use APIs to reduce friction. Integration matters more than model novelty.
- Tie AI to economics early. Productivity gains stick when they connect to revenue and cost structures.
- Treat safety as infrastructure. Automation and controls are prerequisites for scale, not afterthoughts.

Disney’s specific assets are unique. The operating pattern is not. Enterprise AI delivers value when it is designed as part of the organisation’s core machinery (governed, integrated, and measured) rather than as a showcase for what models can generate.

(Photo by Héctor Vásquez)

See also: OpenAI targets AI skills gap with new certification standards
The post Why Disney is embedding generative AI into its operating model appeared first on AI News.
-
Arm Holdings has positioned itself at the centre of AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers a look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry.

From cloud to edge

Arm thinks the AI market is about to enter a new phase, moving from cloud-based processing to edge computing. While much of the media’s attention has so far focused on massive data centres, with models trained in and accessed from the cloud, Jesaitis said that most AI compute, especially inference, is likely to become increasingly decentralised. “The next ‘aha’ moment in AI is when local AI processing is being done on devices you couldn’t have imagined before,” Jesaitis said. These devices range from smartphones and earbuds to cars and industrial sensors. Arm’s IP is already embedded in such devices: in the last year alone, Arm designs sat behind over 30 billion chips, placed in devices of every conceivable description, all over the world.

Deploying AI in edge environments has several benefits, with the team at Arm citing three main ‘wins’. Firstly, the inherent efficiency of low-power Arm chips lowers the power bills for compute and cooling, keeping the technology’s environmental footprint as small as possible. Secondly, running AI locally means latency is much lower, since latency is largely determined by the distance between local operations and the site of the AI model. Arm points to uses like instant translation, dynamic scheduling of control systems, and the near-immediate triggering of safety functions, for instance in IIoT settings. Thirdly, keeping it local means no potentially sensitive data is sent off-premises.
The benefits are obvious for any organisation in highly regulated industries, but the increasing number of data breaches means even companies operating with relatively benign data sets are looking to reduce their attack surface. Arm silicon, optimised for power-constrained devices, is well-suited for compute where it’s needed on the ground, the company says. The future may well be one where AI is woven throughout environments, not centralised in a data centre run by one of the large providers.

Arm and global governments

Arm is actively engaged with global policymakers, considering this level of engagement an important part of its role. Governments continue to compete to attract semiconductor investment, the issues of supply chains and concentrated dependencies still fresh in many policymakers’ memories from the COVID-19 pandemic. Arm lobbies for workforce development, working at present with policymakers in the White House on an education coalition to build an ‘AI-ready workforce’. Domestic independence in technology relies as much on the abilities of the workforce as it does on the availability of hardware.

Jesaitis noted a divergence between regulatory environments: the US prioritises what its government terms acceleration and innovation, while the EU leads on safety, privacy, security, and legally enforced standards of practice. Arm aims to find the middle ground between these approaches, building products that meet stringent global compliance needs while furthering advances in the AI industry.

The enterprise case for edge AI

The case for integrating Arm’s edge-focused AI architecture into enterprise transformation strategies can be persuasive. The company stresses its ability to offer scalable AI without the need to centralise in the cloud, and is also pushing its investment in hardware-level security. That means issues like memory exploits (outside the control of users plugged into centralised AI models) can be avoided.
Of course, sectors already highly regulated in terms of data practices are unlikely to see relaxed governance in future; the opposite is pretty much inevitable. All industries will see more regulation and greater penalties for non-compliance in the years to come. To balance that, however, there are significant competitive advantages available to those that can demonstrate their systems’ inherent safety and security. It’s into this regulatory landscape that Arm sees itself and local, edge AI fitting.

Additionally, in Europe and Scandinavia, ESG goals are going to be increasingly important. Here, the power-sipping nature of Arm chips offers big advantages. That’s a trend even the US hyperscalers are responding to: AWS’s latest SHALAR range of low-cost, low-power Arm-based platforms is there to satisfy that exact demand. Arm’s collaboration with cloud hyperscalers such as AWS and Microsoft produces chips that combine efficiency with the necessary horsepower for AI applications, the company says.

What’s next from Arm and the industry

Jesaitis pointed out several trends that enterprises may see over the next 12 to 18 months. Global AI exports, particularly from the US and Middle East, are ensuring that local demand for AI can be satisfied by the big providers. Arm can supply both the big providers in these contexts (as part of their portfolios of offerings) and the rising demand for edge-based AI. Jesaitis also sees edge AI as something of a hero of sustainability in an industry increasingly under fire for its ecological impact. Because Arm technology’s biggest market has been low-power compute for mobile, it’s inherently ‘greener’. As enterprises aim to meet energy goals without sacrificing compute, Arm offers a way to combine performance with responsibility.
Redefining “smart”

Arm’s vision of AI at the edge means computers and the software running on them can be context-aware, cheap to run, secure by design, and, thanks to near-zero network latency, highly responsive. Jesaitis said, “We used to call things ‘smart’ because they were online. Now, they’re going to be truly intelligent.”

(Image source: “Factory Floor” by danielfoster437 is licensed under CC BY-NC-SA 2.0.)

The post Arm and the future of AI at the edge appeared first on AI News.
-
Under China’s push to clean up its energy system, AI is starting to shape how power is produced, moved, and used, not in abstract policy terms but in day-to-day operations. In Chifeng, a city in northern China, a renewable-powered factory offers a clear example. The site produces hydrogen and ammonia using electricity generated entirely from nearby wind and solar farms. Unlike traditional plants connected to the wider grid, this facility runs on its own closed system. That setup brings a problem as well as a benefit: renewable power is clean, but it rises and falls with the weather.

To keep production stable, the factory relies on an AI-driven control system built by its owner, Envision. Rather than following fixed schedules, the software continuously adjusts output based on changes in wind and sunlight. As reported by Reuters, Zhang Jian, Envision’s chief engineer for hydrogen energy, compared the system to a conductor, coordinating electricity supply and industrial demand in real time. When wind speeds increase, production ramps up automatically to take full advantage of the available power. When conditions weaken, electricity use is quickly reduced to avoid strain. Zhang said the system allows the plant to operate at high efficiency despite the volatility of renewable energy.

Projects like this are central to China’s plans for hydrogen and ammonia, fuels seen as important for cutting emissions in sectors such as steelmaking and shipping. They also point to a broader strategy: using AI to manage complexity as the country adds more renewable power to its grid. Researchers argue that AI could play a significant role in meeting China’s climate goals. Zheng Saina, an associate professor at Southeast University in Nanjing who studies low-carbon transitions, said AI can support tasks ranging from emissions tracking to forecasting electricity supply and demand.
At the same time, she cautioned that AI itself is driving rapid growth in power consumption, particularly through energy-hungry data centres. China now installs more wind and solar capacity than any other country, but absorbing that power efficiently remains a challenge. According to Cory Combs, associate director at Beijing-based research firm Trivium China, AI is increasingly seen as a way to make the grid more flexible and responsive.

That thinking was formalised in September, when Beijing introduced an “AI+ energy” strategy. The plan calls for deeper links between AI systems and the energy sector, including the development of multiple large AI models focused on grid operations, power generation, and industrial use. By 2027, the government aims to roll out dozens of pilot projects and test AI across more than 100 use cases. Within another three years, officials want China to reach what they describe as a world-leading level of AI integration in energy. Combs said the focus is on highly specialised tools designed for specific jobs, such as managing wind farms, nuclear plants, or grid balancing, rather than general-purpose AI. This approach contrasts with the United States, where much of the investment has gone into building advanced large language models, according to Hu Guangzhou, a professor at the China Europe International Business School in Shanghai.

One area where AI could have immediate impact is demand forecasting. Fang Lurui, an assistant professor at Xi’an Jiaotong-Liverpool University, said power grids must match supply and demand at every moment to avoid outages. Accurate forecasts of renewable output and electricity use allow operators to plan ahead, storing energy in batteries when needed and reducing reliance on coal-fired backup plants. Some cities are already experimenting.
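The forecast-then-store loop Fang describes can be sketched in a few lines (entirely illustrative; no operator's actual dispatch logic): given hourly forecasts of demand and renewable output, charge a battery in surplus hours and discharge it in deficit hours, leaving backup plants to cover only the residual.

```python
def schedule_battery(demand, renewables, capacity):
    """Greedy hourly schedule against forecast demand and renewable output.

    demand, renewables: forecast MWh per hour, same length.
    capacity: battery size in MWh (starts empty; losses ignored).
    Returns (flows, backup): per-hour battery flow (positive = charging,
    negative = discharging) and the residual load backup plants must cover.
    Toy model only: real dispatch handles ramp rates, prices, and losses.
    """
    soc = 0.0  # battery state of charge
    flows, backup = [], []
    for d, r in zip(demand, renewables):
        surplus = r - d
        if surplus >= 0:
            charge = min(surplus, capacity - soc)  # soak up excess renewables
            soc += charge
            flows.append(charge)
            backup.append(0.0)
        else:
            discharge = min(-surplus, soc)  # cover the deficit from storage
            soc -= discharge
            flows.append(-discharge)
            backup.append(-surplus - discharge)  # what the backup plant supplies
    return flows, backup

# Four hours: windy morning, calm afternoon, 50 MWh battery.
flows, backup = schedule_battery([100, 100, 100, 100], [140, 120, 60, 80], capacity=50)
print(flows, backup)  # charges 40 then 10, discharges 40 then 10; 10 MWh left for backup
```

The point of the sketch is that better forecasts directly shrink the `backup` column, which is where the coal-fired reserve sits.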
Shanghai has launched a citywide virtual power plant that links dozens of operators, including data centres, building systems, and electric vehicle chargers, into a single coordinated network. During a trial last August, the system reduced peak demand by more than 160 megawatts, roughly equivalent to the output of a small coal plant. Combs said such systems matter because modern power generation is increasingly scattered and intermittent. “You need something very robust that is able to be predictive and account for new information very quickly,” he said.

Beyond the grid, China is also looking to apply AI to its national carbon market, which covers more than 3,000 companies in emissions-heavy industries such as power, steel, cement, and aluminium. These sectors together produce over 60% of the country’s carbon emissions. Chen Zhibin, a senior manager at Berlin-based think tank adelphi, said AI could help regulators verify emissions data, refine the allocation of free allowances, and give companies clearer insight into their production costs.

Still, the risks are growing alongside the opportunities. Studies suggest that by 2030, China’s AI data centres could consume more than 1,000 terawatt-hours of electricity each year, roughly the same as Japan’s current annual usage. Lifecycle emissions from the AI sector are projected to rise sharply and peak well after China’s 2030 emissions target. Xiong Qiyang, a doctoral researcher at Renmin University of China who worked on one such study, said the results reflect the reality that coal still dominates China’s power mix. He warned that rapid AI expansion could complicate national climate goals if energy sources do not shift quickly enough. In response, regulators have begun tightening rules. A 2024 action plan requires data centres to improve energy efficiency and increase their use of renewable power by 10% each year.
Other initiatives encourage new facilities to be built in western regions, where wind and solar resources are more abundant. Operators on the east coast are also testing new ideas. Near Shanghai, an underwater data centre is set to open, using seawater for cooling to cut energy and water use. The developer, Hailanyun, said the facility will draw most of its power from an offshore wind farm and could be replicated if the project proves viable.

Despite the growing energy demands of AI, Xiong argued that its overall impact on emissions could still be positive if applied carefully. Used to optimise heavy industry, power systems, and carbon markets, he said, AI may remain an essential part of China’s effort to cut emissions, even as it creates new pressures that policymakers must manage.

(Photo by Matthew Henry)

See also: Can China’s chip stacking strategy really challenge Nvidia’s AI dominance?

The post Inside China’s push to apply AI across its energy system appeared first on AI News.
-
For large retailers, the challenge with AI is no longer whether it can be useful, but how it fits into everyday work. A new three-year AI partnership signed by Tesco points to how one of the UK’s biggest supermarket groups is trying to answer that question. The agreement with French startup Mistral AI is less about launching a single AI feature and more about building long-term capability. Tesco plans to work with Mistral to develop AI tools that can be used across its business, from internal workflows to customer-facing systems.

Rather than framing the deal around bold promises, Tesco has described it in practical terms. The aim, according to the company, is to save staff time, improve how teams work, and strengthen customer service. Ruben Lara Hernandez, Tesco’s Data, Analytics & AI Director, said the partnership brings together Tesco’s retail experience with Mistral’s technology, with the hope that it will help colleagues work more efficiently and support customers more effectively. He added that the agreement builds on Tesco’s history of developing technology and AI tools that support customers, staff, and suppliers.

Tesco moves AI into day-to-day retail operations

That focus reflects a wider shift in how enterprises are using AI. Early retail experiments often centred on customer-facing tools that were easy to see but hard to scale. More recent efforts are aimed at internal use, where AI can reduce repetitive work, support planning, and help staff make decisions faster. Tesco has already been moving in that direction. Over the past five years, the retailer has doubled the size of its technology team, signalling that software and data are now core to how the business operates. AI is already used across several parts of the organisation, both through in-house development and partnerships. In online grocery, AI helps find more efficient delivery routes, which can open up additional delivery slots for customers.
In supply planning, AI supports complex demand forecasting, helping stores maintain product availability. The company also uses AI to personalise how it engages with customers through its Clubcard loyalty scheme, tailoring offers and communications based on shopping behaviour. The partnership with Mistral is meant to build on this existing work rather than replace it. One factor behind the collaboration is Mistral’s approach to model deployment, which allows AI systems to run in more controlled environments. For a retailer handling large volumes of customer and operational data, that level of control matters.

Mistral’s Chief Revenue Officer and US General Manager, Marjorie Janiewicz, said the company’s Applied AI team will work closely with Tesco’s internal experts. She described the goal as building AI products that are customisable and controllable, with a focus on improving internal workflows and customer experience.

A long-term partnership, not a one-off project

The structure of the partnership also suggests a cautious approach to scale. Tesco plans to create an internal AI lab as part of the agreement, giving teams space to test and refine tools before wider deployment. For large organisations, this kind of setup can help prevent AI projects from remaining stuck in isolated pilots or specialist teams. There is also a strategic angle to the choice of partner. Mistral AI is the only European company developing large language models at the frontier level, and Tesco is the first major UK retailer to form a partnership with the startup as part of its broader AI and technology plans. Founded in April 2023, Mistral has grown quickly and counts enterprises such as HSBC, AXA, and Stellantis among its customers.

For Tesco, the harder work now lies in execution. Retail data is often fragmented across regions, systems, and channels, and AI systems depend on that data being accurate and consistent.
Rolling tools out across a business of this size also requires training, oversight, and trust from the staff who use them daily. Whether the partnership succeeds will likely depend on how visible its impact becomes inside the organisation. If AI tools help store teams, planners, and analysts work more smoothly, the gains may be gradual rather than dramatic. As retailers look to move beyond experimentation, Tesco’s approach offers a view of how enterprise AI is starting to settle into routine operations: not as a single solution, but as a steady process of change.

(Photo by Shashank Verma)

See also: Zara’s use of AI shows how retail workflows are quietly changing

The post Tesco signs three-year AI deal centred on customer experience appeared first on AI News.
-
Marketing agencies using AI in workflows serve more clients
Of all industries, marketing is where AI is no longer an “innovation lab” side project but embedded in briefs, production pipelines, approvals, and media optimisation. A WPP iQ post published in December, based on a webinar with WPP and Stability AI, shows what AI deployment in daily operations looks like. The focus here is on the practical constraints that determine whether AI changes daily work or merely adds another layer of complexity or tooling.

Brand accuracy as a repeatable capability

In agency AI work, brand accuracy is treated as something to be engineered. WPP and Stability AI note that off-the-shelf models “don’t come trained on your brand’s visual identity”, so outputs can often look generic. The companies’ remedy is fine-tuning: training models on brand-specific datasets so the model learns the brand playbook, including style, look, and colours, which can then be reproduced consistently.

WPP’s work for the retailer Argos is a prime example. After fine-tuning a model for Argos, the team described how the model picked up details beyond the characters, including lighting and the subtle shadows used in the brand’s 3D animations. Reproducing these finer details is where time disappears in production, in the form of re-rendering and several rounds of approvals. When AI outputs start closer to “finished”, teams spend less time correcting and more time shaping narratives and adapting media for different channels.

Cycle time collapses (and calendars change)

WPP and Stability AI point out that traditional 3D animation can be too slow for reactive marketing. After all, cultural moments demand immediate content, not cycles defined in weeks or months. In its Argos case study, WPP trained custom models on two 3D toy characters so the models learned how they look and behave, including details such as proportions and how the characters hold objects. The outcome was “high-quality images…generated in minutes instead of months”.
The accelerated workflow moves production bottlenecks rather than removing them. If generating variations becomes fast, then review, compliance, rights management, and distribution become the constraints. Those issues were always there, but the speed and efficiency of AI in this context highlights the difference between what’s possible and systems that have become embedded and accepted in workflows. Agencies that want AI to change daily operations have to redesign the workflow around it, not just add the technology as a new tool.

The “AI front end” becomes essential

WPP and Stability AI call out a “UI problem”, where creative teams lose time because interfaces to common tools are “disconnected, complex and confusing”, forcing workarounds and constant asset movement between tools. Often, the response is a bespoke, brand-specific front end with complex workflows in the back end. WPP positions WPP Open as a platform that encodes WPP’s proprietary knowledge into “globally accessible AI agents”, which help teams plan, produce, create media, and sell. Operational gains come from cleaner handoffs between tools, as work moves from briefs into production, assets into activation, and performance signals back into planning.

Self-serve capability changes agency operations

AI-powered marketing platforms are also becoming client-facing. Operationally, that pushes agencies to concentrate on the parts of the workflow their clients can’t easily self-serve, like designing the brand system, building fine-tunes, and ensuring governance is embedded.

Governance moves from policy to workflow

For AI to be used daily, governance needs to be embedded where work happens. Dentsu describes building “walled gardens”: digital spaces where employees can prototype and develop AI-enabled solutions securely, and commercialise the best ideas. This reduces the risk of sensitive data exposure and lets experiments move into production systems.
Planning and insight compress too The operational impact is not limited to production. Publicis Sapient describes AI-powered content strategy and planning that “transforms months of research into minutes of insight” by combining large language models with contextual knowledge and prompt libraries [PDF]. Research and brief development compress, so more client work can happen and the agency can respond faster to shifting culture and platform algorithms. What changes for people Across these examples, the impact on marketing professionals is one of rebalancing and shifting job descriptions. Less time goes on mechanical drafting, resizing, and versioning, and more time goes on brand stewardship. New operational roles expand, with titles like model trainer, workflow designer, and AI governance lead. AI makes the biggest operational difference when agencies use customised models, usable front ends that make adoption (especially by clients) frictionless, and integrated platforms that connect planning, production, and execution. The headline benefit is speed and scale, but the deeper change is that marketing delivery starts to resemble a software-enabled supply chain: standardised, flexible where it needs to be, and measurable. (Image source: “Solar Wind Workhorse Marks 20 Years of Science Discoveries” by NASA Goddard Photo and Video is licensed under CC BY 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Marketing agencies using AI in workflows serve more clients appeared first on AI News. View the full article -
Cognizant, Tata Consultancy Services, Infosys, and Wipro have announced plans to deploy more than 200,000 Microsoft Copilot licenses in their enterprises – over 50,000 per company – in what Microsoft is calling a new benchmark for enterprise-scale adoption of generative AI. The companies are framing the move as the adoption of a default tool for hundreds of thousands of employees in consulting, delivery, operations, and software. The announcement, made in Bengaluru on December 11, was timed to coincide with Microsoft CEO Satya Nadella’s visit to India. There, and across the industrialised world, momentum has been growing for agentic AI – AI systems that do more than chat, executing multi-step work in business processes. The four firms want to be seen as AI advisors for clients, with extensive experience drawn from their internal rollouts of AI. Why enterprises care about Copilot Readers will be familiar with Microsoft 365 Copilot, the AI assistant embedded in standard workplace tools: Word, Excel, PowerPoint, Outlook, and Teams. It’s intended to help users draft, summarise, and analyse, turning natural-language queries into work-related outputs. Copilot combines large language models with Microsoft 365 apps and organisational data accessed through Microsoft Graph, with the assistant working in the context of a user’s files, meetings, and messages. This ability is, of course, subject to the access controls already in place and defined by the organisation. For large organisations, the embedding of AI into existing workflows is important. A firm shouldn’t have to rebuild its toolchain to experiment with AI; it can start using AI in the software and documents its workforce already uses. The raft of benefits is practical and work-focused: faster documentation, quicker meeting follow-ups, faster draft proposals, better discovery of information from internal knowledge repositories, and, with agentic AI, the automation of repetitive tasks.
From Copilots to frontier firms and agents Microsoft uses the term “Frontier Firms” to describe organisations that are “human-led and agent-operated”, where employees work alongside AI assistants and specialised agents that take on work processes. The “Frontier Firm” designation aligns with Microsoft’s messaging at Microsoft Ignite 2025, where the company described agents reinventing business processes and amplifying impact through human-agent teamwork. In very simple terms, the company’s pitch is to move from “AI helps you write” to “AI helps run workflows.” Why IT services firms are making public commitments There are two reasons why the four firms are rolling out the technology at such a large scale. First, to improve internal productivity. The Times of India reports the deployments are intended to integrate Copilot into workflows in consulting, software development, operations, and client delivery. At large multinational companies, margins depend on delivery efficiency and knowledge reuse, so shaving minutes from everyday tasks for tens of thousands of workers produces meaningful gains. Second, client credibility. The consultancy companies serve global enterprises, including many Fortune 500 clients, which means their internal operating model can, and perhaps should, become their clients’ playbooks. If consultancies can demonstrate mature governance, training, and measurable outcomes with Copilot at scale in their own operations, it strengthens their pitch when selling similar transformations to potential and existing clients. Hyperscalers’ investment in India The Copilot announcement came immediately after Microsoft said it would invest $17.5 billion in India between 2026 and 2029, money destined for cloud and AI infrastructure, skilling, and operations. The company describes this as its largest investment in Asia to date.
Other major tech firms are making parallel moves: Reuters reported in December 2025 that Amazon/AWS planned to invest over $35 billion in India by 2030, expanding its operations and AI capabilities, for example. Together, such moves underscore India’s growing position as a massive enterprise market and strategic hub for AI talent and cloud infrastructure. For India’s IT services leaders, Copilot is being positioned as a way to stay ahead of the competitive curve and define “AI-first delivery.” (Image source: “Gobbling Indian view of Clinch River” by dmott9 is licensed under CC BY-ND 2.0.) The post 50,000 Copilot licences for Indian service companies appeared first on AI News.
-
Zara is testing how far generative AI can be pushed into everyday retail operations, starting with a part of the business that rarely gets attention in technology discussions: product imagery. Recent reporting shows the retailer using AI to generate new images of real models wearing different outfits, based on existing photoshoots. Models remain involved in the process, including consent and compensation, but AI is used to extend and adapt imagery without repeating production from scratch. The stated aim is to speed up content creation and reduce the need for repeated shoots. On the surface, the change looks incremental. In practice, it reflects a familiar pattern in enterprise AI adoption, where technology is introduced not to overhaul how a business works, but to remove friction from tasks that repeat at scale. How Zara uses AI to reduce friction in repeatable retail work For a global retailer like Zara, imagery is not a creative afterthought. It is a production requirement tied directly to how quickly products can be launched, refreshed, and sold across markets. Each item typically needs multiple visual variations for different regions, digital channels, and campaign cycles. Even when garments change only slightly, the surrounding production work often starts again from scratch. That repetition creates delays and cost that are easy to overlook precisely because they are routine. AI offers a way to compress those cycles by reusing approved material and generating variations without resetting the entire process. AI enters the production pipeline The placement of the technology is as important as the capability itself. Zara is not positioning AI as a separate creative product or asking teams to adopt an entirely new workflow. The tools are being used inside an existing production pipeline, supporting the same outputs with fewer handoffs. That keeps the focus on throughput and coordination rather than experimentation. 
This kind of deployment is typical once AI moves beyond pilot stages. Rather than asking organisations to rethink how work is done, the technology is introduced where constraints already exist. The question becomes whether teams can move faster and with less duplication, not whether AI can replace human judgement. The imagery initiative also sits alongside a broader set of data-driven systems that Zara has built up over time. The retailer has long relied on analytics and machine learning to forecast demand, allocate inventory, and respond quickly to changes in customer behaviour. Those systems depend on fast feedback loops between what customers see, what they buy, and how stock moves through the network. From that perspective, faster content production supports the wider operation even if it is not framed as a strategic shift. When product imagery can be updated or localised more quickly, it reduces lag between physical inventory, online presentation, and customer response. Each improvement is small, but together they help maintain the pace that fast fashion relies on. From experimentation to routine use Notably, the company has avoided framing this move in grand terms. There are no published figures on cost savings or productivity gains, and no claims that AI is transforming the creative function. The scope remains narrow and operational, which limits both risk and expectation. That restraint is often a sign that AI has moved out of experimentation and into routine use. Once technology becomes part of day-to-day operations, organisations tend to talk about it less, not more. It stops being an innovation story and starts being treated as infrastructure. There are also constraints that remain visible. The process still relies on human models and creative oversight, and there is no suggestion that AI-generated imagery operates independently. Quality control, brand consistency, and ethical considerations continue to shape how the tools are applied. 
AI extends existing assets rather than generating content in isolation. This is consistent with how enterprises typically approach creative automation. Rather than replacing subjective work outright, they target the repeatable components around it. Over time, those changes accumulate and reshape how teams allocate effort, even if the core roles remain intact. Zara’s use of generative AI does not signal a reinvention of fashion retail. It shows how AI is beginning to touch parts of the organisation that were previously considered manual or difficult to standardise, without changing how the business fundamentally operates. In large enterprises, that is often how AI adoption becomes durable. It does not arrive through sweeping strategy announcements or dramatic claims. It takes hold through small, practical changes that make everyday work move a little faster — until those changes become hard to imagine doing without. (Photo by M. Rennim) See also: Walmart’s AI strategy: Beyond the hype, what’s actually working The post Zara’s use of AI shows how retail workflows are quietly changing appeared first on AI News.
-
Human Resources is an area in many organisations where AI can have significant operational impact. The technology is now being embedded into day-to-day operations, in activities like answering employees’ questions and supporting training. The clearest impact appears where organisations can measure the tech’s outcomes, typically in time saved and the number of queries successfully resolved. Fewer tickets, more first-time answers IBM’s internal virtual agent, AskHR, was built to handle employee queries and automate routine HR actions. IBM says AskHR automates more than 80 internal HR tasks and engages in over two million conversations with employees every year. It uses a two-tier approach, where AI resolves routine issues and human advisers handle more complex cases. The company reports some operational benefits: a 94% success rate in answering commonly-asked questions, a 75% reduction in the number of lodged support tickets since 2016, and – the headline figure – a 40% reduction in HR operational costs over four years. Notably, IBM does not use AI merely to route queries to existing materials: the automation can complete the transaction itself, reducing the need to hand off queries to human staff. Recruitment and onboarding efficiencies Vodafone’s 2024 annual report describes an internal platform it calls ‘Grow with Vodafone‘. The company says it’s reduced its time-to-hire from 50 days to 48 days, made the job application process simpler, and added personalised skills-based job recommendations for applicants. That’s led to a 78% reduction in questions posed by potential applicants and those onboarding into new roles.
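The two-tier pattern IBM describes for AskHR – automation resolves routine queries it is confident about, everything else escalates to a human adviser – can be sketched in a few lines. The intents, confidence threshold, and toy classifier below are purely illustrative, not IBM’s implementation:

```python
# Minimal sketch of two-tier HR triage. All names and numbers here are
# hypothetical: a real system would use a trained intent classifier and
# transaction back-ends rather than a lookup table.
ROUTINE_INTENTS = {"payroll_date", "leave_balance", "address_change"}
CONFIDENCE_THRESHOLD = 0.8

def classify(query: str) -> tuple[str, float]:
    """Stand-in for a real intent classifier: returns (intent, confidence)."""
    rules = {
        "when is payday": ("payroll_date", 0.95),
        "how much leave do i have": ("leave_balance", 0.90),
    }
    return rules.get(query.lower(), ("unknown", 0.2))

def triage(query: str) -> str:
    """Tier 1: auto-resolve routine, confident intents. Tier 2: escalate."""
    intent, confidence = classify(query)
    if intent in ROUTINE_INTENTS and confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-resolved:{intent}"
    return "escalated:human_adviser"

print(triage("When is payday"))              # routine query, high confidence
print(triage("I want to dispute my review")) # complex query, goes to a human
```

The operative design choice is the same one IBM’s figures imply: the automated tier completes the transaction end-to-end where it safely can, and the escalation path preserves human judgement for everything else.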
The company also has a global headcount planning tool that reduces the manual work needed to assemble necessary data, plus an AI-powered global HR ‘data lake’ that standardises dashboards and reduces the need for manual reporting – stakeholders can dive into the data themselves and surface the insights they need. Training and internal support Big employers have challenges getting new staff up to speed quickly; so-called time-to-competence. Bank of America’s newsroom describes how its onboarding and professional development organisation, ‘The Academy’, uses AI for interactive coaching, with employees completing over a million simulations in a year. The organisation operates ‘Erica for Employees‘, an internal assistant that handles topics like health benefits and payroll or tax forms for employees. It’s used by over 90% of employees – for the IT service desk, having Erica triage situations is impactful, with a reduction of more than 50% in incoming calls. Such tools reduce hidden work (searching, repeating questions, waiting for answers) and its associated costs. Plus, a shorter time-to-competence is especially valuable in regulated and customer-facing environments. Frontline work at big employers Walmart’s June 2025 corporate update describes rolling out AI tools via its associates’ app, which include a workflow tool that prioritises and recommends work tasks. At the time of publication, it was early days, but based on early results, Walmart says team leads and store managers are beginning to see shift planning times drop from 90 to 30 minutes. As an employer of a diverse workforce, its app’s real-time translation ability (44 languages) is invaluable. The company is currently upgrading its associates’ software with AI to turn its internal process guides into multi-lingual instructions. It has more than 900,000 employees using the system every week, with more than three million queries per day going through the associates’ conversational AI platform.
Workforce efficiency at Walmart’s scale is impressive, but for every size of business, there are clear advantages to be gained from giving employees faster guidance and better support across multilingual teams. In addition to the immediate cost savings, simple-to-use and effective software of this type affects retention, safety standards, and service quality – all for the better. Governance and human safety nets Multinational bank HSBC’s publication, “Transforming HSBC with AI”, describes over 600 AI use cases in operation at the company, and says colleagues have access to an LLM-based productivity tool for tasks like translation and document analysis. In an environment where governance and data security are of paramount importance, the company ensures that all automated systems abide by existing codes, something enforced by dedicated AI Review Councils and AI lifecycle management frameworks. In HR this matters, regardless of vertical. Governance decisions should shape what can be automated, how people data is handled, and how accountability is maintained over the long term. HR data is often personally identifiable, so the highest standards – and their maintenance – are critically important. Operational trade-offs Operational impact is about trust as well as speed and efficiency. A self-service agent answering confidently but incorrectly creates rework, escalations, and downstream problems. A pragmatic pattern for reducing risk is to keep humans in the loop, especially for complex decisions. IBM’s two-tier model, Vodafone’s tailored job recommendations, and Walmart and HSBC’s data governance and security all bring oversight. Hybrid service models plus data discipline and oversight are what enable AI to scale without undermining employee confidence or fairness. Where this is heading The pattern of successful operational deployment has been consistent in the cases of the HR function in these large enterprises.
They each started with high-volume questions and repetitive transactions, expanded into hiring and training, and then pushed AI to the frontline where it can save time. The biggest gains come when AI turns HR into a faster, more consistently-operating function. (Image source: “Business Meetings” by thinkpanama is licensed under CC BY-NC 2.0.) The post AI in Human Resources: the real operational impact appeared first on AI News.
-
By December 2025, AI adoption on Wall Street had moved past experiments inside large US banks and into everyday operations. Speaking at a Goldman Sachs financial-services conference in New York on 9 December, bank executives described AI—particularly generative AI—as an operational upgrade already lifting productivity across engineering, operations, and customer service. The same discussion also surfaced a harder reality. If banks can produce more with the same teams, some roles may no longer be required at current levels once demand stabilises. How Wall Street banks say AI is delivering results today JPMorgan: operational gains begin to compound Marianne Lake, chief executive of consumer and community banking at JPMorgan, said productivity in areas using AI has risen to around 6%, up from roughly 3% before deployment. She added that operations roles could eventually see productivity gains of 40% to 50% as AI becomes part of routine work. Those gains rest on deliberate choices rather than broad experimentation. JPMorgan has focused on secure internal access to large language models, targeted changes to workflows, and tight controls on how data is used. The bank has described its internal “LLM Suite” as a controlled setting where staff can draft and summarise content using large language models. Wells Fargo: output rising ahead of staffing changes Wells Fargo CEO Charlie Scharf said the bank has not reduced headcount because of AI so far, but noted that it is “getting a lot more done.” He said management expects to find areas where fewer people are needed as productivity improves. In comments reported the same day, Scharf said the bank’s internal budgets already point to a smaller workforce by 2026, even before factoring in AI’s full impact. He also flagged higher severance costs, suggesting preparations for future adjustments are under way. PNC: AI speeds up a long-running shift PNC CEO Bill Demchak positioned AI as an accelerator rather than a new direction. 
He said the bank’s headcount has stayed largely flat for about a decade, even as the business expanded. That stability, he said, came from automation and branch optimisation, with AI likely to push the trend further. Citigroup: gains in software and customer support Citi’s incoming CFO Gonzalo Luchetti said the bank has recorded a 9% productivity improvement in software development. That mirrors a broader pattern across large firms adopting AI copilots to support coding work. He also pointed to two customer service areas where AI is helping: improving self-service so fewer calls reach agents, and supporting agents in real time when customers do need to speak with a person. Goldman Sachs: workflow changes paired with hiring restraint According to Reuters, Goldman Sachs’ internal “OneGS 3.0” programme has focused on using AI to improve sales processes and client onboarding. It has also targeted process-heavy functions such as lending workflows, regulatory reporting, and vendor management. These changes are unfolding alongside job cuts and a slower pace of hiring, linking workflow redesign directly to staffing decisions. Where Wall Street banks see the earliest AI productivity gains Across banks, the clearest gains are showing up in work that relies heavily on documents, follows repeatable steps, and operates within defined rules. Generative AI can shorten the time needed to search for information, summarise material, draft content, and move work through approval chains—especially when paired with structured processes and human checks. 
Common areas seeing early impact include:
Operations: drafting responses, summarising cases, and resolving exceptions more quickly
Software development: generating code, writing tests, refactoring, and producing documentation
Customer service: stronger self-service combined with real-time support for agents
Sales support and onboarding: pulling data from documents, filling forms, and speeding up client setup
Regulatory reporting: assembling narratives and evidence faster, under strict review and controls
Why governance shapes the pace of adoption For banks, enthusiasm is not the main constraint. Control is. US regulators have long required strong oversight of models, and those expectations extend to AI systems. Guidance such as the Federal Reserve and OCC’s SR 11-7 sets standards for model development, validation, and ongoing review. A 2025 report from the US Government Accountability Office noted that existing model risk management principles already apply to AI, including testing and independent oversight. In practice, this pushes banks toward designs that can be examined and traced. AI use is often limited in how independently it can act. Prompts and outputs are logged, performance is monitored for drift, and humans remain responsible for high-impact decisions such as lending, dispute handling, and official reporting. Productivity rises, but employment questions remain The comments from bank leaders point to a phased shift. The first phase looks like stable headcount paired with higher output as AI tools spread across teams. The second phase begins once those gains become consistent enough to influence staffing plans, through attrition, role changes, or targeted cuts. Signals from Wells Fargo around 2026 headcount planning and severance costs suggest some banks are approaching that second stage.
At a broader level, institutions such as the International Monetary Fund have warned that AI could affect a large share of jobs worldwide, with different mixes of automation and augmentation depending on role and region. The World Economic Forum’s Future of Jobs Report 2025 also projects substantial job movement as companies adopt AI and adjust skill needs. What AI means for Wall Street bank strategy beyond 2025 Banks that gain the most from AI are likely to focus on three areas at once: redesigning workflows rather than layering on chat tools, building strong data foundations, and putting governance in place that supports speed without eroding trust. Research firms argue the financial stakes are high. McKinsey estimates that generative AI could deliver between $200 billion and $340 billion in annual value for the banking sector, largely through productivity improvements. The open question is no longer whether AI can deliver results in banking. It is how quickly banks can make those gains routine while preserving audit trails, security, and customer safeguards—and how they manage the workforce changes that follow. (Photo by Lo Lo) See also: BNP Paribas introduces AI tool for investment banking The post Wall Street’s AI gains are here — banks plan for fewer people appeared first on AI News.
-
Artificial intelligence has been part of the insurance sector for years – the Finance function in many businesses is often the first to automate. But what’s remarkable now is how directly the technology is woven into day-to-day operational work. No longer a background, niche modelling capability, AI is used in the places where insurers spend most of their time and money: claims handling, underwriting, and running complex programmes. Industry giants Allianz, Zurich, and Aviva have published evidence in just the last 12 months illustrating their shifts from experimentation to production-grade tools that support frontline workers in real workflows. Simple claims: Fewer admin bottlenecks Claims operations are a natural proving ground for AI because they combine paperwork with human judgement, usually under time pressure. Allianz describes its Insurance Copilot as an AI-powered tool that helps claims handlers automate repetitive tasks and pull together relevant information that would otherwise require multiple searches across different systems. Allianz outlines a notable change to the workflow. The Copilot starts with data gathering, summarising claim and contract details so a handler can get just the essentials, quickly. The tool then performs document analysis, including interpreting agreements and comparing claims against policy details. It flags discrepancies and suggests next steps. Once the human operator has taken their decision, the Copilot helps draft context-aware emails. This is the kind of daily activity insurers care about, and by using these AI tools, they get reduced turnaround times, smoother settlements, and less friction for staff and customers. Allianz also frames AI as a way to reduce unnecessary payouts by highlighting important factors adjusters might otherwise miss.
That has a clear impact on the company’s overall bottom line. Complex documents to usable decisions The quality of underwriting is determined by the quality of information available. Aviva uses the example of underwriters needing to read GP medical reports. The company says it’s launching an AI-powered summarisation tool that uses genAI to analyse and summarise these reports, which can sometimes run to dozens of pages of medical text. The AI lets underwriters make faster, more informed decisions. The immediate value here is not AI replacing the underwriter, but technology reducing the time spent reading. The insurer is explicit that underwriters will review summaries and make the final decision – not the AI. That distinction matters because underwriting is technical and sensitive; compressing documents into decision-ready summaries can speed up processing, but it also raises questions about accuracy, omissions, and auditability. Aviva addresses this by pointing to its “rigorous testing and controls“. A test phase processed around 1,000 cases before roll-out to ensure the tool met the required standards, the company says. Uncertain contracts and servicing in multinational programmes Commercial insurance has its own challenges, including the complexity of working in multiple jurisdictions and regional differences between policies and stakeholders. Zurich says generative AI’s ability to process unstructured information lets multinational insurance work more easily across several countries, helping it build quicker, more accurate pictures of commercial insurance offerings, and simplifying submissions in different countries. Zurich also highlights contract certainty as a practical outcome: multinational programmes involve layered documents, varied local requirements, and a pervasive need for constant checking.
It says GenAI helps internal experts compare, summarise, and verify coverage in a programme using the operator’s native language, “in a fraction of the time” compared with the manual effort of translating and capturing the nuance of international differences. Although this area isn’t customer-facing, genAI improves the company’s responsiveness by letting its underwriters, risk engineers, and claims professionals work more efficiently. Zurich also refers to AI “joining up the dots”, able to spot trends in data that would – given the quantity of information – go unnoticed by human staff. Indeed, AI amplifies its experts’ judgement rather than displacing it. The common thread: augmentation, not automation-for-automation’s sake Across these three examples, a consistent pattern emerges: AI handles the heavy lifting of reading, searching, and drafting – the high-volume tasks of insurance operations. Humans remain accountable for the consequent decisions, whether claim payments or underwriting acceptance. (Allianz describes a “human-in-the-loop” approach, and Aviva and Zurich similarly emphasise experts retaining decision-making control.) Operational control and scalability are treated as major concerns: pilots, testing, domain-by-domain tuning, and expansion into lines of business are an integral part of the narrative. What this means for the sector Insurers see faster cycle times, better consistency, reduced manual work, and a path to scaling. Their challenge is implementing tools responsibly: secure data handling, explainability where needed, and training teams so they can question outputs appropriately. AI is becoming less of a headline in the sector and more of an everyday reality – a practical colleague in the routine work of insurance. (Image source: “house fire” by peteSwede is licensed under CC BY 2.0.) The post Ensuring effective AI in insurance operations appeared first on AI News.
-
Big Pharma's AI race extends across drug discovery, development, and clinical trials, but AstraZeneca has distinguished itself by deploying AI clinical trials technology at an unprecedented public health scale. While competitors optimise internal R&D pipelines, AstraZeneca's AI is already embedded in national healthcare systems, screening hundreds of thousands of patients and demonstrating what happens when AI moves from pharmaceutical labs into actual patient care.

The clinical validation backs this approach. AstraZeneca's CREATE study, presented at the European Lung Cancer Congress in March 2025, demonstrated a 54.1% positive predictive value for its AI chest X-ray tool, far exceeding the pre-defined success threshold of 20%. Behind those numbers: over 660,000 people screened in Thailand since 2022, with AI detecting suspected pulmonary lesions in 8% of cases. More critically, Thailand's National Health Security Office is now scaling this technology across 887 hospitals with a three-year budget exceeding 415 million baht. This isn't just a pilot program or proof-of-concept. It's AI clinical trials technology deployed at national healthcare system scale.

The strategic divergence in AI clinical trials approaches

The contrast with competitors is revealing. Pfizer's ML Research Hub has compressed drug discovery timelines to approximately 30 days for molecule identification. The company used AI to develop Paxlovid in record time, with machine learning analysing patient data 50% faster than traditional methods. Pfizer now deploys AI in over half its clinical trials. Novartis partnered with Nobel Prize winner Demis Hassabis's Isomorphic Labs and Microsoft for "AI-driven drug discovery." Its Intelligent Decision System uses computational twins to simulate clinical trial processes, with AI-identified sites reportedly recruiting patients faster than traditional selection methods. Roche's "lab in a loop" strategy iterates AI models with laboratory experiments.
Having acquired Foundation Medicine and Flatiron Health, Roche built the industry's largest clinical genomic database, over 800,000 genomic profiles across 150+ tumour subtypes, targeting 50% efficiency gains in safety management by 2026.

AstraZeneca's clinical operations advantage

What sets AstraZeneca apart in AI clinical trials isn't just ambition; it's execution at scale. The company runs over 240 global trials in its R&D pipeline and has systematically embedded generative AI across clinical operations. One example is an "intelligent protocol tool," developed with medical writers, which has reduced document authoring time by 85% in some cases. The company uses AI for 3D location detection on CT scans, slashing the time radiologists spend on manual annotation. More significantly, AstraZeneca is pioneering virtual control groups for AI clinical trials, using electronic health records and past trial data to simulate placebo arms, potentially reducing the number of patients receiving non-active treatments. This represents a fundamental rethinking of clinical trial design itself.

The lung cancer screening program exemplifies this strategic focus. Using Qure.ai's qXR-LNMS tool, AstraZeneca isn't just conducting trials; it's transforming public health infrastructure. The December 2025 expansion includes a new industrial worker screening program targeting 5,000 workers across four Thai provinces, now expanding beyond lung cancer to include heart failure detection.

The timeline acceleration race

Industry metrics show why AI clinical trials matter: traditional drug development takes 10-15 years with a 90% failure rate, while AI-discovered drugs achieve 80-90% Phase I success rates, double the 40-65% traditional benchmark. Over 3,000 AI-assisted drugs are in development, with 200+ AI-enabled approvals expected by 2030. Pfizer moves from molecule identification to clinical trials in six-week cycles. Novartis analyses 460,000 clinical trials in minutes versus months.
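The positive-predictive-value figure cited earlier follows from simple confusion-matrix arithmetic. A minimal sketch (the counts below are hypothetical, chosen only to make the ratio come out at 54.1%; they are not taken from the CREATE study):

```python
# Positive predictive value (PPV): of all screens the model flags as
# positive, what fraction turn out to be true positives?
def ppv(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)

# Hypothetical screening counts (illustrative only)
tp, fp = 541, 459           # flagged cases later confirmed vs. not confirmed
value = ppv(tp, fp)         # 541 / 1000 = 0.541
print(f"PPV = {value:.1%}")  # PPV = 54.1%
print(value > 0.20)          # clears a 20% pre-defined threshold: True
```

The published metric alone does not determine the underlying counts, so the totals here are placeholders; it is the ratio, not the raw numbers, that the study reports.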
Yet AstraZeneca's model delivers immediate patient impact, detecting cancers today in underserved populations, often before symptoms appear.

The US$410 billion question

The World Economic Forum projects AI could generate US$350-410 billion annually for pharma by 2030. The question is which approach captures more value: faster drug discovery or more efficient clinical operations? Pfizer's bet on computational drug design and Novartis's AI-powered trial site selection may yield breakthrough molecules. Roche's integrated pharma-diagnostics model creates a proprietary data moat. But AstraZeneca's strategy of embedding AI throughout clinical trial operations, from protocol generation to patient recruitment to regulatory submissions, is demonstrably reducing time-to-market while building real-world evidence at scale.

The company's partnership approach is equally distinctive. While others acquire AI companies or build internal hubs, AstraZeneca collaborates with technology partners like Qure.ai and Perceptra, regulatory bodies, and national health systems to deploy AI clinical trials where infrastructure gaps exist. As AstraZeneca pursues its 2030 goal of delivering 20 new medicines and reaching US$80 billion in revenue, its AI clinical trials advantage isn't just about speed; it's about proving AI's value in the most regulated, risk-averse phase of pharmaceutical development.

While competitors race to discover the next breakthrough molecule, AstraZeneca is reengineering how clinical trials themselves are conducted. The winner may not be determined by who builds the most sophisticated algorithm, but by who deploys AI clinical trials technology where it demonstrably improves patient outcomes: at scale, under regulatory scrutiny, and within real healthcare systems. And in that race, AstraZeneca currently leads.

(Photo by AstraZeneca)

See also: Google AMIE: AI doctor learns to 'see' medical images

The post AstraZeneca leads big pharma's AI clinical trials revolution with real-world patient impact appeared first on AI News. View the full article
-
Roblox is often seen as a games platform, but its day-to-day reality looks closer to a production studio. Small teams release new experiences on a rolling basis and then monetise them at scale. That pace creates two persistent problems: time lost to repeatable production work, and friction when moving outputs between tools. Roblox's 2025 updates point to how AI can reduce both, without drifting away from clear business outcomes.

Roblox keeps AI where the work happens

Rather than pushing creators toward separate AI products, Roblox has embedded AI inside Roblox Studio, the environment where creators already build, test, and iterate. In its September 2025 RDC update, Roblox outlined "AI tools and an Assistant" designed to improve creator productivity, with an emphasis on small teams. Its annual economic impact report adds that Studio features such as Avatar Auto-Setup and Assistant already include "new AI capabilities" to "accelerate content creation". The language matters: Roblox frames AI in terms of cycle time and output, not abstract claims about transformation or innovation. That framing makes it easier to judge whether the tools are doing their job.

One of the more practical updates focuses on asset creation. Roblox described an AI capability that goes beyond static generation, allowing creators to produce "fully functional objects" from a prompt. The initial rollout covers selected vehicle and weapon categories, returning interactive assets that can be extended inside Studio. This addresses a common bottleneck: drafting an idea is rarely the slow part; turning it into something that behaves correctly inside a live system is. By narrowing that gap, Roblox reduces the time spent translating concepts into working components. The company also highlighted language tools delivered through APIs, including Text-to-Speech, Speech-to-Text, and real-time voice chat translation across multiple languages.
These features lower the effort required to localise content and reach broader audiences. Similar tooling plays a role in training and support in other industries.

Roblox treats AI as connective tissue between tools

Roblox also put emphasis on how tools connect to one another. Its RDC post describes integrating the Model Context Protocol (MCP) into Studio's Assistant, allowing creators to coordinate multi-step work across third-party tools that support MCP. Roblox points to practical examples, such as designing a UI in Figma or generating a skybox elsewhere, then importing the result directly into Studio. This matters because many AI initiatives slow down at the workflow level. Teams spend time copying outputs, fixing formats, or reworking assets that do not quite fit. Orchestration reduces that overhead by turning AI into a bridge between tools, rather than another destination in the process.

Linking productivity to revenue

Roblox ties these workflow gains directly to economics. In its RDC post, the company reported that creators earned over $1 billion through its Developer Exchange programme over the past year, and it set a goal for 10% of gaming content revenue to flow through its ecosystem. It also announced an increased exchange rate so creators "earn 8.5% more" when converting Robux into cash. The economic impact report makes the connection explicit. Alongside AI upgrades in Studio, Roblox highlights monetisation tools such as price optimisation and regional pricing. Even outside a marketplace model, the takeaway is clear: when AI productivity is paired with a financial lever, teams are more likely to treat new tooling as part of core operations rather than an experiment.

Roblox uses operational AI to scale safety systems

While creative tools attract attention, operational AI often determines whether growth is sustainable.
In November 2025, Roblox published a technical post on its PII Classifier, an AI model used to detect attempts to share personal information in chat. Roblox reports handling an average of 6.1 billion chat messages per day, and says the classifier has been in production since late 2024, with a reported 98% recall on an internal test set at a 1% false positive rate. This is a quieter form of efficiency. Automation at this level reduces the need for manual review and supports consistent policy enforcement, which helps prevent scale from becoming a liability.

What carries across

Several patterns stand out:

Put AI where decisions are already made. Roblox focuses on the build-and-review loop, rather than inserting a separate AI step.

Reduce tool friction early. Orchestration matters because it cuts down on context switching and rework.

Tie AI to something measurable. Creation speed is linked to monetisation and payout incentives.

Keep adapting the system. Roblox describes ongoing updates to address new adversarial behaviour in safety models.

Roblox's tools will not translate directly to every sector. The underlying approach will. AI tends to pay for itself when it shortens the path from intent to usable output, and when that output is clearly connected to real economic value.

(Photo by Oberon Copeland @veryinformed.com)

See also: Mining business learnings for AI deployment

The post Roblox brings AI into the Studio to speed up game creation appeared first on AI News. View the full article
-
[AI] What AI search tools mean for the future of SEO specialists
ChatGPT posted a topic in World News
AI search engines and generative AI tools are transforming how people discover information online. Far from making SEO specialists obsolete, the shift highlights why skilled human optimisers remain more important than ever.

As generative AI search tools reshape the digital landscape, many wonder whether traditional SEO has reached its end. Despite AI's growing influence, the fundamentals of discoverability, authority, clarity, and trust remain vital. The rise of AI-powered search makes the role of SEO specialists more important than before, ensuring content not only reaches audiences but resonates authentically. A digital environment shaped by automation still depends heavily on human expertise, especially when credibility is at stake.

Why AI expands rather than replaces the work of SEO specialists

The assumption that AI will replace SEO specialists ignores how modern search works. AI expands what optimisation can achieve, but it does not eliminate the need for skilled human oversight. Traditional ranking signals like site structure, depth of information, internal linking, and authority continue to influence visibility, even in AI-driven answer engines. This means content must still be crafted with intent, clarity, and expertise. AI-powered search engines prioritise well-structured, trustworthy information, reinforcing the importance of thoughtful optimisation.

In this new landscape, SEO spans two parallel environments: classic search engines and AI-generated answer systems. Competing effectively in both requires strategic thinking, careful content planning, and a nuanced understanding of user intent: areas where human specialists excel beyond what AI can replicate. Agencies like Brath, which presents SEO as one of the most profitable long-term marketing channels when executed with technical precision and strategic insight, understand this reality.
Its service model remains grounded in proven disciplines: technical improvements, keyword strategy, content refinement, and authority building. Rather than leaning on automation, it demonstrates that sustainable SEO relies on human expertise, methodical analysis, and consistent optimisation.

Human insight still drives strategy in an AI-powered search world

AI excels at processing data, but it falls short when deeper context, cultural awareness, and nuanced decision-making are required. SEO is more than a technical exercise; it is a strategic craft rooted in understanding people: their motivations, expectations, and the subtle differences in how they search for answers. This human dimension is something AI cannot replicate. Specialists bring an understanding of why search behaviour shifts, how tone influences trust, and which narratives resonate in different industries. They also grasp how economic cycles, news events, and regulations affect user queries and engagement patterns. AI can identify patterns, but it cannot fully interpret them or determine which insights align with brand values or long-term goals.

The increasing emphasis on E-E-A-T signals (Experience, Expertise, Authoritativeness, and Trustworthiness) further reinforces this need for human guidance. An algorithm cannot fabricate authentic authority; it must be demonstrated through lived experience, credibility, and careful communication. SEO specialists ensure these qualities are baked into every page and every message, guiding brands through an increasingly complex search environment where trust matters more than ever.

How SEO specialists use AI to deliver faster, smarter optimisation

Where AI shines is in support, not replacement. Modern SEO specialists use AI tools to automate repetitive tasks, accelerate data analysis, and uncover patterns at scale. AI can rapidly produce initial keyword lists, identify technical issues, or generate draft structures for content.
These capabilities shorten production timelines and free specialists to focus on the strategic, creative, and high-value decisions that define effective optimisation. This balanced approach enables better outcomes: AI handles the heavy lifting, while human specialists interpret the results, refine the strategy, and ensure everything aligns with the brand's goals, tone, and audience requirements. It is this combination of speed and discernment that shapes the future of SEO.

The creative and analytical skills AI cannot replicate

Even the most advanced AI tools cannot fully replicate creativity, intuition, or emotional intelligence. SEO specialists draw on analytical skills and creative insight to produce content that stands out in competitive digital markets. AI can propose phrasing, but it cannot determine when a message needs personality, authority, or nuance to feel trustworthy. In industries where reputation matters, like fintech, finance, healthcare, law, and education, human-shaped content is essential. Users want reassurance that the guidance they read comes from people with real expertise. AI-generated content often lacks the depth or subtlety needed to build that trust.

Human specialists also excel at interpreting data in context. They understand how search intent connects to behaviour, how content influences perception, and how messaging supports or undermines brand identity. Their ability to connect analytical insights with emotional resonance is something AI cannot achieve.

Building the next generation of SEO careers in an AI-enhanced industry

AI is not shrinking the field of SEO; it is expanding it. The next generation of SEO professionals will blend human-centred strategy with AI-assisted efficiency. Modern SEO requires fluency in multiple areas: data interpretation, UX thinking, structured content, behavioural insights, content design, and now AI tool management. Specialists who embrace both disciplines will lead the industry forward.
Their work will become more strategic, more aligned with long-term business value, and more influential as search becomes increasingly complex and multi-modal. AI may automate tasks, but strategy, creativity, judgement, and trust-building remain human strengths. And as AI transforms how search tools interpret information, those strengths will only become more essential. SEO is not disappearing. It is evolving into a richer, more strategic, and more human discipline.

(Image source: Pexels)

The post What AI search tools mean for the future of SEO specialists appeared first on AI News. View the full article -
Mining conglomerate BHP describes AI as the way it's turning operational data into better day-to-day decisions. A blog post from the company highlights the analysis of data from sensors and monitoring systems to spot patterns and flag issues in plant machinery, giving decision-makers choices that can improve efficiency and safety, plus reduce environmental impact. For business leaders at BHP, the useful question was not "Where can we use AI?" but "Which decisions do we make repeatedly, and what information would improve them?"

Portfolio, not showcase

BHP describes the end-to-end effects of AI on operations, or as it puts it, "from mineral extraction to customer delivery." Leaders decided to move beyond pilot rollouts, treating AI as an operational capability. It started with a small set of problems that affected the company's performance: places where change could be measured in results. The company found it could avoid unplanned downtime of machinery, and it tightened its energy and water use. Each use case, addressing a small but impactful problem, was given an owner and an accompanying KPI. Results were reviewed with the same regularity used for other operational performance monitoring elsewhere in the company.

Where BHP uses AI daily

In addition to focusing on areas such as predictive maintenance and energy optimisation, BHP gave consideration to using AI in more adventurous yet important operations such as autonomous vehicles and real-time staff health monitoring. Such categories translate well to other asset-heavy environments, across logistics, manufacturing, and heavy industry.

Predictive maintenance

Predictive maintenance means planning repairs in scheduled downtime to reduce unexpected failures and costly, unplanned stoppages. Here, AI models analyse equipment data from on-board sensors to anticipate maintenance needs. This cuts breakdown numbers and reduces equipment-related safety incidents.
BHP runs predictive analytics across most of its load-and-haul fleets and its materials handling systems. A central maintenance centre provides real-time and longer-range indications of machine health and potential failure or degradation. Prediction has become an integral part of its machinery-heavy operations, where previously such information was presented as 'just another' report, one that could get lost in the bureaucracy of the company. Its models define thresholds that trigger actions sent directly to maintenance-planning teams.

Energy and water optimisation

By deploying this approach at its Escondida facilities in Chile, the company reports savings of more than three gigalitres of water and 118 gigawatt-hours of energy in two years, attributing the gains directly to AI. The technology gives operators real-time options and analytics that identify anomalies and automate corrective actions at multiple facilities, including concentrators and desalination plants. The lesson it has learned is to place AI where decisions happen: when operators and control teams can act on recommendations in real time, improvements compound. Conversely, with periodic reporting, decisions are only taken if staff see the data and then judge action necessary. The real-time nature of the analysis and the use of triggers-to-action mean the difference quickly becomes apparent.

Autonomy and remote operations

BHP is also using more advanced technologies like AI-supported autonomous vehicles and machinery. These are higher-risk areas, and the tech has been found to reduce worker exposure to risk and cut the human-error factor in incidents. At the company, complex operational data flows through regional centres from remote facilities, so without AI and analytics, staff could not optimise every decision in the way the software achieves.
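The threshold-to-action pattern described above can be sketched in a few lines. This is an illustrative toy, not BHP's system: the sensor names, limit values, and alert routine are all hypothetical.

```python
# Toy sketch of threshold-triggered maintenance alerts (illustrative only;
# sensor names and limits are hypothetical, not BHP's configuration).

# Per-sensor alert thresholds that a model or engineer might define
THRESHOLDS = {
    "bearing_vibration_mm_s": 7.1,   # velocity RMS
    "hydraulic_oil_temp_c": 95.0,
    "motor_current_a": 310.0,
}

def check_readings(readings: dict) -> list[str]:
    """Return an alert for each reading that exceeds its threshold,
    so planners can schedule repair in the next downtime window."""
    alerts = []
    for sensor, value in readings.items():
        limit = THRESHOLDS.get(sensor)
        if limit is not None and value > limit:
            alerts.append(f"{sensor}: {value} exceeds limit {limit}")
    return alerts

latest = {"bearing_vibration_mm_s": 8.3, "hydraulic_oil_temp_c": 81.0}
for alert in check_readings(latest):
    print(alert)  # routed straight to the planning team, not buried in a report
```

In practice the thresholds would come from trained models and asset history rather than hard-coded constants, but the routing principle is the same: a breach creates an action, not a line in a periodic report.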
The use of AI-integrated wearables is increasing in many industries, including engineering, utilities, manufacturing, and mining. BHP leads the way in protecting its staff, who often work in very challenging conditions. Wearables can monitor personal conditions, reading heart rate and fatigue indicators, and provide real-time alerts to supervisors. One example is 'smart' hard-hat sensor technology, used by BHP at Escondida, which measures truck driver fatigue by analysing drivers' brain waves.

A plan leaders can run

Regardless of industry, decision-makers can draw lessons from BHP's experience of deploying AI at the (literal) coal-face. The following plan could help leaders shape their own strategies for applying AI to operational problem areas:

Choose one reliability problem and one resource-efficiency problem that operations teams already track, then attach a KPI to each.

Map the workflow: who will see the output, and what action can they take?

Put basic governance in place for data quality and model monitoring, then review performance alongside operational KPIs.

Start with decision support in higher-risk processes, and automate only after teams validate controls.

(Image source: "Shovel View at a Strip Mining Coal" by rbglasson is licensed under CC BY-NC-SA 2.0.)

The post Mining business learnings for AI deployment appeared first on AI News. View the full article