If you have ever taken a self-driving Uber through downtown LA, you might recognise the strange sense of uncertainty that settles in when there is no driver and no conversation, just a quiet car making assumptions about the world around it. The journey feels fine until the car misreads a shadow or slows abruptly for something harmless. In that moment you see the real issue with autonomy. It does not panic when it should, and that gap between confidence and judgement is where trust is either earned or lost. Much of today’s enterprise AI feels remarkably similar. It is competent without being confident, and efficient without being empathetic, which is why the deciding factor in every successful deployment is no longer computing power but trust.

The MLQ State of AI in Business 2025 [PDF] report puts a sharp number on this. 95% of early AI pilots fail to produce measurable ROI, not because the technology is weak but because it is mismatched to the problems organisations are trying to solve. The pattern repeats itself across industries. Leaders get uneasy when they can’t tell if the output is right, teams are unsure whether dashboards can be trusted, and customers quickly lose patience when an interaction feels automated rather than supported. Anyone who has been locked out of their bank account while the automated recovery system insists their answers are wrong knows how quickly confidence evaporates.

Klarna remains the most publicised example of large-scale automation in action. The company has now halved its workforce since 2022 and says internal AI systems are performing the work of 853 full-time roles, up from 700 earlier this year. Revenues have risen 108%, while average employee compensation has increased 60%, funded in part by those operational gains. Yet the picture is more complicated. Klarna still reported a $95 million quarterly loss, and its CEO has warned that further staff reductions are likely. It shows that automation alone does not create stability.
Without accountability and structure, the experience breaks down long before the AI does. As Jason Roos, CEO of CCaaS provider Cirrus, puts it, “Any transformation that unsettles confidence, inside or outside the business, carries a cost you cannot ignore. It can leave you worse off.”

We have already seen what happens when autonomy runs ahead of accountability. The UK’s Department for Work and Pensions used an algorithm that wrongly flagged around 200,000 housing-benefit claims as potentially fraudulent, even though the majority were legitimate. The problem wasn’t the technology. It was the absence of clear ownership over its decisions. When an automated system suspends the wrong account, rejects the wrong claim or creates unnecessary fear, the issue is never just “why did the model misfire?” It’s “who owns the outcome?” Without that answer, trust becomes fragile.

“The missing step is always readiness,” says Roos. “If the process, the data and the guardrails aren’t in place, autonomy doesn’t accelerate performance, it amplifies the weaknesses. Accountability has to come first. Start with the outcome, find where effort is being wasted, check your readiness and governance, and only then automate. Skip those steps and accountability disappears just as fast as the efficiency gains arrive.”

Part of the problem is an obsession with scale without the grounding that makes scale sustainable. Many organisations push toward autonomous agents that can act decisively, yet very few pause to consider what happens when those actions drift outside expected boundaries. The Edelman Trust Barometer [PDF] shows a steady decline in public trust in AI over the past five years, and a joint KPMG and University of Melbourne study found that workers prefer more human involvement in almost half the tasks examined. The findings reinforce a simple point. Trust rarely comes from pushing models harder.
It comes from people taking the time to understand how decisions are made, and from governance that behaves less like a brake pedal and more like a steering wheel.

The same dynamics appear on the customer side. PwC’s trust research reveals a wide gulf between perception and reality. Most executives believe customers trust their organisation, while only a minority of customers agree. Other surveys show that transparency helps to close this gap, with large majorities of consumers wanting clear disclosure when AI is used in service experiences. Without that clarity, people do not feel reassured. They feel misled, and the relationship becomes strained. Companies that communicate openly about their AI use are not only protecting trust but also normalising the idea that technology and human support can co-exist.

Some of the confusion stems from the term “agentic AI” itself. Much of the market treats it as something unpredictable or self-directing, when in reality it is workflow automation with reasoning and recall. It is a structured way for systems to make modest decisions inside parameters designed by people. The deployments that scale safely all follow the same sequence. They start with the outcome they want to improve, then look at where unnecessary effort sits in the workflow, then assess whether their systems and teams are ready for autonomy, and only then choose the technology. Reversing that order does not speed anything up. It simply creates faster mistakes. As Roos says, AI should expand human judgement, not replace it.

All of this points toward a wider truth. Every wave of automation eventually becomes a social question rather than a purely technical one. Amazon built its dominance through operational consistency, but it also built a level of confidence that the parcel would arrive. When that confidence dips, customers move on. AI follows the same pattern.
You can deploy sophisticated, self-correcting systems, but if the customer feels tricked or misled at any point, the trust breaks. Internally, the same pressures apply. The KPMG global study [PDF] highlights how quickly employees disengage when they do not understand how decisions are made or who is accountable for them. Without that clarity, adoption stalls.

As agentic systems take on more conversational roles, the emotional dimension becomes even more significant. Early reviews of autonomous chat interactions show that people now judge their experience not only by whether they were helped but also by whether the interaction felt attentive and respectful. A customer who feels dismissed rarely keeps the frustration to themselves. The emotional tone of AI is becoming a genuine operational factor, and systems that cannot meet that expectation risk becoming liabilities.

The difficult truth is that technology will continue to move faster than people’s instinctive comfort with it. Trust will always lag behind innovation. That is not an argument against progress. It is an argument for maturity. Every AI leader should be asking whether they would trust the system with their own data, whether they can explain its last decision in plain language, and who steps in when something goes wrong. If those answers are unclear, the organisation is not leading transformation. It is preparing an apology.

Roos puts it simply, “Agentic AI is not the concern. Unaccountable AI is.” When trust goes, adoption goes, and the project that looked transformative becomes another entry in the 95% failure rate. Autonomy is not the enemy. Forgetting who is responsible is. The organisations that keep a human hand on the wheel will be the ones still in control when the self-driving hype eventually fades.

The post Autonomy without accountability: The real AI risk appeared first on AI News.
-
The partnership announced this week between Microsoft and Hexagon Robotics marks an inflection point in the commercialisation of humanoid, AI-powered robots for industrial environments. The two companies will combine Microsoft’s cloud and AI infrastructure with Hexagon’s expertise in robotics, sensors, and spatial intelligence to advance the deployment of physical AI systems in real-world settings.

At the centre of the collaboration is AEON, Hexagon’s industrial humanoid robot, a device designed to operate autonomously in environments like factories, logistics hubs, engineering plants, and inspection sites. The partnership will focus on multimodal AI training, imitation learning, real-time data management, and integration with existing industrial systems. Initial target sectors include automotive, aerospace, manufacturing, and logistics, the companies say. It’s in these industries where labour shortages and operational complexity are already constraining financial growth. The announcement is the sign of a maturing ecosystem: the convergence of cloud platforms, physical AI, and robotics engineering is making humanoid automation commercially viable.

Humanoid robots out of the research lab

While humanoid robots have long been the subject of work at research institutions, demonstrated proudly at technology events, the last five years have seen a move to practical deployment in real-world, working environments. The main change has been the combination of improved perception, advances in reinforcement and imitation learning, and the availability of scalable cloud infrastructure. One of the most visible examples is Agility Robotics’ Digit, a bipedal humanoid robot designed for logistics and warehouse operations. Digit has been piloted in live environments by companies like Amazon, where it performs material-handling tasks including tote movement and last-metre logistics.
Such deployments tend to focus on augmenting human workers rather than replacing them, with Digit handling more physically demanding tasks. Similarly, Tesla’s Optimus programme has moved beyond the phase where concept videos were all that existed, and is now undergoing factory trials. Optimus robots are being tested on structured tasks like part handling and equipment transport inside Tesla’s automotive manufacturing facilities. While still limited in scope, these pilots demonstrate a pattern: humanoid machines are chosen over less anthropomorphic form-factors so they can operate in spaces designed for, and populated by, humans.

Inspection, maintenance, and hazardous environments

Industrial inspection is emerging as one of the earliest commercially viable use cases for humanoid and quasi-humanoid robots. Boston Dynamics’ Atlas, while not yet a general-purpose commercial product, has been used in live industrial trials for inspection and disaster-response environments. It can navigate uneven terrain, climb stairs, and manipulate tools in places considered unsafe for humans. Toyota Research Institute has deployed humanoid robotics platforms for remote inspection and manipulation tasks in similar settings. Toyota’s systems rely on multimodal perception and human-in-the-loop control, the latter reinforcing an industry trend: early deployments prioritise reliability and traceability, and so require human oversight.

Hexagon’s AEON aligns closely with this trend. Its emphasis on sensor fusion and spatial intelligence is relevant for inspection and quality-assurance tasks, where precise understanding of physical environments is more valuable than the conversational abilities most associated with everyday use of AI.

Cloud platforms central to robotics strategy

A defining feature of the Microsoft-Hexagon partnership is the use of cloud infrastructure in the scaling of humanoid robots.
Training, updating, and monitoring physical AI systems generates large quantities of data, including video, force feedback from on-device sensors, spatial mapping (such as that derived from LIDAR), and operational telemetry. Managing this data locally has historically been a bottleneck, due to storage and processing constraints. By using platforms like Azure and Azure IoT Operations, plus real-time intelligence services in the cloud, humanoid robots can be trained as fleets rather than as isolated units. This opens up shared learning, iterative improvement, and greater consistency. For board-level buyers, these IT architecture shifts mean humanoid robots become viable entities that can be treated – in terms of IT requirements – more like enterprise software than machinery.

Labour shortages drive adoption

The demographic trends in manufacturing, logistics, and asset-intensive industries are increasingly unfavourable. Ageing workforces, declining interest in manual roles, and persistent skills shortages create gaps that conventional automation cannot fully address – at least, not without rebuilding entire facilities to be more suited to a robotic workforce. Fixed robotic systems excel in repetitive, predictable tasks but struggle in dynamic, human environments. Humanoid robots occupy a middle ground. While not designed to replace entire workflows, they can stabilise operations where human availability is uncertain. Case studies show early value in night shifts, periods of peak demand, and tasks deemed too hazardous for humans.

What boards should evaluate before investing

For decision-makers considering investment in next-generation workplace robots, several lessons have emerged from existing, real-world deployments. Task specificity matters more than general intelligence, with the more successful pilots focusing on well-defined activities.
Data governance and security must stay front and centre when robots are deployed, especially when they need to be connected to cloud platforms. At a human level, workforce integration can be more challenging than sourcing, installing, and running the technology itself. And human oversight remains essential at this stage in AI maturity, both for safety and for regulatory acceptance.

A measured but irreversible shift

Humanoid robots won’t replace the human workforce, but an increasing body of evidence from live deployments and prototyping shows such devices are moving into the workplace. As of now, humanoid, AI-powered robots can perform economically valuable tasks, and integration with existing industrial systems is increasingly practical. For boards with the appetite to invest, the question is when competitors might deploy the technology responsibly and at scale.

(Image source: Hexagon Robotics)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post From cloud to factory – humanoid robots coming to workplaces appeared first on AI News.
-
AI advancements are changing the way we look at health and deal with health-related issues. According to a new nationwide study by Confused.com Life Insurance, three in five Brits now use AI to self-diagnose health conditions. Through various searches, like side effects of medical conditions, treatment options, and symptom checks, as many as 11% of respondents claim AI has helped improve their conditions. More than a third (35%) are likely to use AI in this context in the future, moving away from traditional GP appointments – increasingly harder to get at short notice. In the UK, the average GP appointment waiting time is currently 10 days, too long for many. Health-related searches have therefore risen significantly since January 2025, including “what is my illness?”, increasing by 85%, “what are the symptoms for?” (33%), and “side effects” (22%).

Most common health-related queries with AI

According to Confused.com, the most searched-for health-related query is symptom checks, with 63% seeking advice from AI. Next are side effects at 50% and lifestyle and well-being techniques at 38%. 20% have also sought mental health support through therapy or recommended coping strategies, treating ChatGPT as their virtual therapist. 35% of respondents over 65 are using AI to self-diagnose, with 54% using the technology to check their symptoms. This pales in comparison to 18-24 year olds, with 85% using AI to search regularly for health issues.

Tom Vaughan, life insurance expert at Confused.com, commented on these latest findings, saying, “Advances in AI technology have created a new way for people to approach healthcare and self-diagnosis.
More individuals are taking steps to support their own and their family’s well-being, getting ahead of health concerns and addressing situations as quickly as possible.”

The potential benefits of AI self-diagnosis

With current GP waiting times sometimes reaching a month, it is no surprise that 42% claimed AI is quicker than waiting for a doctor’s appointment. 50% of 25-34 year olds and 51% of 35-44 year olds said they are not comfortable taking any risks with timings, believing self-diagnosis provides a faster response than waiting for a GP. Family well-being is also crucial, with 20% using AI to determine the best methods to support their loved ones’ health. Not having to physically speak to a doctor is another reason many turned to AI. 24% said they feel more comfortable using AI than discussing their health face to face with a healthcare professional, rising to 39% for 18-24 year olds. 17% are searching for alternative medical solutions and support via AI, increasing to 27% for those aged 25-34. Money is another key factor, as 20% feel self-diagnosis through AI could save them substantial private healthcare fees.

AI has also had a positive influence for non-binary individuals and those with an alternative identity. 75% said the technology’s diagnosis had helped them a “great deal”, compared to just 13% for men and 9% for women. Overall, AI seems to have a positive impact on users’ health situations. For instance, 11% stated that AI has helped their health conditions “a great deal,” while 41% claimed it has helped “somewhat.” The hope is that this self-diagnosis, though not guaranteeing accuracy, will encourage people to visit their GP for a formal diagnosis. Only 9% of respondents felt AI has not helped their health in any way, preferring traditional healthcare methods instead.

Tom Vaughan emphasised the importance of GP consultations.
“While AI can be useful for initial research and gaining an understanding of a condition, it’s clear that for the ultimate peace of mind people should consult a GP or pharmacist. GPs and other medical professionals are the only people who can accurately diagnose conditions, some of which may worsen or become long-term illnesses without the proper treatment.”

OpenAI launches ChatGPT Health

Confused.com‘s insights into AI use for health concerns coincide with OpenAI’s launch of its new ChatGPT Health feature, part of the ChatGPT platform. This has been set up to meet the substantial number of health-related queries made on the site each day. Figures suggest over 230 million health-related inquiries are made weekly. ChatGPT Health allows users to connect their personal medical records and wellness apps, like Apple Health, allowing the AI to provide tailored responses rather than general knowledge about certain health conditions.

Although set up to help users find answers to their health questions, OpenAI has stressed the new feature is not a diagnostic tool or substitute for professional medical care. It has been designed to support medical care – helping users understand lab results and track wellness – rather than replace it with formal medical diagnoses or treatment plans. ChatGPT Health has been developed with input from hundreds of physicians around the world, ensuring clarity and safety for its users.

Despite not being a substitute for medical professionals and traditional GP appointments, the number of people turning to AI for health information and help understanding medical issues is expected to rise, raising important questions and potential repercussions for patient care and clinical trust.

(Image source: “The Sick Classroom by Nge Lay” by Jnzl’s Photos is licensed under CC BY 2.0.)

The post “Dr AI, am I healthy?” 59% of Brits rely on AI for self-diagnosis appeared first on AI News.
-
After several years of experimentation, enterprise AI is moving out of the pilot phase. To date, many organisations have limited AI to general-purpose chatbots, often introduced by small groups of early adopters. According to Nexos.ai, that model will give way to something more operational: fleets of task-specific AI agents embedded directly into business workflows. Isolated agents are already in common use, screening CVs, reviewing contracts, drafting routine correspondence, preparing management reports, and orchestrating actions in enterprise systems. Analysis from the company suggests organisations that move from single chatbots to multiple role-specific agents see materially higher adoption and claim a clearer business impact. Teams interact with agents that behave like junior colleagues, where each agent is accountable for a defined slice of work.

Every team gets its own named agent

The company’s studies envisage the normalisation of named AI agents assigned on a per-team basis, which it describes as an “AI intern”. These are not general-purpose assistants, but dedicated tools for specific operational processes. For example, HR teams might deploy agents tuned to recruitment criteria, while legal teams might use agents configured to flag violations of contract standards. Sales teams will rely on agents optimised for their sales pipelines and integrated with an existing CRM. In each case, Nexos says the business value comes from contextual awareness and integration with existing software and data, rather than from advances in the raw power of the model.

Early enterprise deployments suggest the gains can be significant. Payhawk, for example, reports that its deployment of Nexos.ai’s agentic platform in finance, customer support, and operations reduced security investigation time by 80%. The company achieved 98% data accuracy and cut its processing costs by 75%. Žilvinas Girėnas, head of product at Nexos.ai, says the real benefit stems from coordination.
“The shift from single-purpose agents to coordinated AI teams is fundamental. Businesses are […] building groups of specialised agents that work together in a workflow. That’s when AI stops being a pilot and starts becoming infrastructure.”

Platform consolidation becomes unavoidable

As the number of active agents in organisations rises, a second-order problem – fragmentation – appears. Teams running five to ten agents in different tools face duplicate costs and inconsistency in security controls. From the perspective of IT governance, this situation can become unsustainable. Evidence from early Nexos adopters suggests consolidating agents on an enterprise-wide shared platform delivers faster deployment – in some cases twice as fast – and gives better oversight over spend and performance. Girėnas says: “When teams are juggling multiple vendors and logins, usage drops. A single platform is what allows organisations to extract consistent value rather than paying for shelfware.” The situation points to a pattern familiar to enterprise technology veterans: AI agent systems are following the same trajectory of consolidation seen in collaboration, security, and analytics stacks.

AI operations shifts to the business

The company’s findings suggest that the ownership of AI operations is moving away from engineering teams and towards business leaders and discrete business functions. The function-specific deployment model means heads of HR, legal, finance, and sales will be expected to configure their own agents, a task that includes prompt management. Thus, the ability to manage agents will become a core operational competency for individuals and business functions. This places new requirements on agentic platforms: interfaces must be approachable for non-technical users, and the stack must operate with minimal reliance on APIs or developer-style tooling.
Team leads will need to be able to adjust instructions, test outputs from their adopted systems, and find ways to scale successful configurations. Engineering support will be reserved for isolated problem-solving.

Demand will outstrip delivery capacity

Nexos.ai’s final prediction is the emergence of a capacity challenge. It says that once teams deploy their first few agents successfully, demand for similar systems will accelerate across the organisation. Marketing departments may look for workflow automation, finance professionals will want compliance-checking agents, and customer success teams will explore support triage. Each department, seeing proven value elsewhere, will expect similar abilities and efficiencies. Industry projections suggest that by the end of 2026, around 40% of enterprise software applications will incorporate task-specific AI agents, up from under 5% in 2024. Engineering capacity is unlikely to keep pace if every agent is built from scratch – hence the call for centralised capability.

“The organisations that cope best will be those with agent libraries rather than bespoke builds,” Girėnas says. “Templates, playbooks, and pre-built agents are the only way to meet rising demand without overwhelming delivery teams.”

(Image source: “Office Assistant” by LornaJane.net is licensed under CC BY-ND 2.0.)

The post 2026 to be the year of the agentic AI intern appeared first on AI News.
-
Manufacturing factories are producing more data than they can easily process, and companies like Bosch are turning to AI to close that gap. Cameras watch production lines, sensors track machines, and software records each step of the process. However, much of that information still does not lead to faster decisions or fewer breakdowns. For large manufacturing firms, this gap is pushing AI from small trials into core operations. That shift helps explain why Bosch plans to invest about €2.9 billion in artificial intelligence by 2027, according to The Wall Street Journal. The spending is aimed at manufacturing, supply chain management, and perception systems, areas where the company sees AI as a way to improve how physical systems behave in real conditions.

How Bosch uses AI to catch manufacturing problems earlier

In manufacturing, delays and defects frequently start small. A minor variation in materials or machine settings can ripple through a production line. Bosch has been applying AI models to camera feeds and sensor data to detect quality issues earlier. Instead of catching defects after products are finished, systems can flag problems while items are still on the line. That gives workers time to change operations before waste increases. For high-volume manufacturing, earlier detection can reduce scrap and limit the need for rework.

Equipment maintenance is another area under pressure. Many factories still rely on fixed schedules or manual inspections, which can miss early warning signs. AI models trained on vibration, temperature, and usage data can help predict when a machine is likely to fail. This allows maintenance teams to plan repairs instead of reacting to breakdowns. The aim is to reduce unplanned downtime without replacing equipment too early. Over time, this approach can extend the working life of machines while keeping production more stable.

Making supply chains more adaptable

Supply chains are also part of the investment focus.
Disruptions that became visible during the pandemic have not fully disappeared, and manufacturers are still dealing with shifting demand and transport delays. AI systems can help forecast needs, track parts across sites, and adjust plans when conditions change. For a global manufacturer, even small improvements in planning accuracy can have a broad effect when applied across hundreds of factories and suppliers.

Bosch is also putting funding into perception systems, which help machines understand their surroundings. These systems combine input from cameras, radar, and other sensors with AI models that can recognise objects, judge distance, or spot changes in the environment. They are used in areas such as factory automation, driver assistance, and robotics, where machines must respond quickly and safely. In these settings, AI is not analysing abstract data but reacting to real-world conditions as they happen.

Why edge computing matters on the factory floor

Much of this work takes place at the edge. In factories and vehicles, sending data to a distant cloud system and waiting for a response can add delay or create risk if connections fail. Running AI models locally allows systems to respond in real time and keep operating even when networks are unreliable. It also limits how much sensitive data leaves a site. For industrial companies, that can matter as much as speed, especially when production processes are closely guarded.

Cloud systems still play a role, though mostly behind the scenes. Training models, managing updates, and analysing trends across locations often happens in central environments. Many manufacturers are moving toward a split setup, using cloud systems for coordination and learning, and edge systems for action. This pattern is becoming common across industrial firms, not just Bosch.

Scaling AI beyond small trials

The scale of the investment matters because many companies remain stuck at the pilot stage.
Small AI tests can show promise, but rolling them out across operations takes funding, skilled staff, and long-term commitment. Bosch executives have previously described AI as a way to support workers rather than replace them, and as a tool to handle complexity that humans cannot manage alone. That view reflects a broader shift in industry, where AI is treated less as an experiment and more as basic infrastructure.

What Bosch’s manufacturing AI strategy shows in practice

Rising energy costs, labour shortages, and tighter margins leave less room for inefficiency. Automation alone no longer solves those problems. Companies are looking for systems that can adjust to changing conditions without constant manual input. Bosch’s €2.9 billion commitment sits within that wider shift. Other large manufacturers are making similar moves, often without public fanfare, by upgrading factories and retraining staff. What stands out is the focus on operational use rather than customer-facing features.

Taken together, these efforts show how end-user companies are applying AI today. The work is less about bold claims and more about reducing waste, improving uptime, and making complex systems easier to manage. For industrial firms, that practical focus may define how AI delivers value over time.

(Photo by P. L.)

See also: Agentic AI scaling requires new memory architecture

The post Bosch’s €2.9 billion AI investment and shifting manufacturing priorities appeared first on AI News.
-
Agentic AI represents a distinct evolution from stateless chatbots toward complex workflows, and scaling it requires new memory architecture. As foundation models scale toward trillions of parameters and context windows reach millions of tokens, the computational cost of remembering history is rising faster than the ability to process it. Organisations deploying these systems now face a bottleneck where the sheer volume of “long-term memory” (technically known as the Key-Value (KV) cache) overwhelms existing hardware architectures.

Current infrastructure forces a binary choice: store inference context in scarce, high-bandwidth GPU memory (HBM) or relegate it to slow, general-purpose storage. The former is prohibitively expensive for large contexts; the latter creates latency that renders real-time agentic interactions unviable.

To address this widening disparity, which is holding back the scaling of agentic AI, NVIDIA has introduced the Inference Context Memory Storage (ICMS) platform within its Rubin architecture, proposing a new storage tier designed specifically to handle the ephemeral, high-velocity nature of AI memory.

“AI is revolutionising the entire computing stack—and now, storage,” said NVIDIA CEO Jensen Huang. “AI is no longer about one-shot chatbots but intelligent collaborators that understand the physical world, reason over long horizons, stay grounded in facts, use tools to do real work, and retain both short- and long-term memory.”

The operational challenge lies in the specific behaviour of transformer-based models. To avoid recomputing an entire conversation history for every new word generated, models store previous states in the KV cache. In agentic workflows, this cache acts as persistent memory across tools and sessions, growing linearly with sequence length. This creates a distinct data class.
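The growth pattern described above is easy to see in a toy model. The sketch below is illustrative only: a minimal single-head attention decode loop in NumPy with made-up random weights, not NVIDIA's implementation or any production model. Each decode step appends one key row and one value row to the cache, so cache size grows linearly with the number of generated tokens, which is exactly the footprint the tiered hierarchy discussed here has to absorb.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model/head dimension (illustrative, real models use thousands)

# Fixed random projections standing in for trained weight matrices.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    """Single-head scaled dot-product attention for one query vector."""
    scores = K @ q / np.sqrt(d)          # one score per cached token
    w = np.exp(scores - scores.max())    # numerically stable softmax
    w /= w.sum()
    return w @ V

K_cache = np.empty((0, d))  # the KV cache: one key row and
V_cache = np.empty((0, d))  # one value row per generated token

def decode_step(x):
    """Process one new token embedding, reusing all cached K/V rows."""
    global K_cache, V_cache
    q, k, v = Wq @ x, Wk @ x, Wv @ x
    K_cache = np.vstack([K_cache, k])  # cache grows linearly with
    V_cache = np.vstack([V_cache, v])  # sequence length
    return attend(q, K_cache, V_cache)

for _ in range(16):
    decode_step(rng.standard_normal(d))

# 16 tokens generated -> 16 cached key/value rows
assert K_cache.shape == (16, d)
```

The point of caching is visible in `attend`: each new token attends over every previous key and value without recomputing them, trading memory (the growing `K_cache`/`V_cache`) for compute, which is why long agentic sessions accumulate so much context state.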
Unlike financial records or customer logs, KV cache is derived data; it is essential for immediate performance but does not require the heavy durability guarantees of enterprise file systems. General-purpose storage stacks, running on standard CPUs, expend energy on metadata management and replication that agentic workloads do not require. The current hierarchy, spanning from GPU HBM (G1) to shared storage (G4), is becoming inefficient:

(Figure: the G1–G4 memory hierarchy. Credit: NVIDIA)

As context spills from the GPU (G1) to system RAM (G2) and eventually to shared storage (G4), efficiency plummets. Moving active context to the G4 tier introduces millisecond-level latency and increases the power cost per token, leaving expensive GPUs idle while they await data. For the enterprise, this manifests as a bloated Total Cost of Ownership (TCO), where power is wasted on infrastructure overhead rather than active reasoning.

A new memory tier for the AI factory

The industry response involves inserting a purpose-built layer into this hierarchy. The ICMS platform establishes a “G3.5” tier, an Ethernet-attached flash layer designed explicitly for gigascale inference. This approach integrates storage directly into the compute pod. By utilising the NVIDIA BlueField-4 data processor, the platform offloads the management of this context data from the host CPU. The system provides petabytes of shared capacity per pod, boosting the scaling of agentic AI by allowing agents to retain massive amounts of history without occupying expensive HBM.

The operational benefit is quantifiable in throughput and energy. By keeping relevant context in this intermediate tier, which is faster than standard storage but cheaper than HBM, the system can “prestage” memory back to the GPU before it is needed. This reduces the idle time of the GPU decoder, enabling up to 5x higher tokens-per-second (TPS) for long-context workloads. From an energy perspective, the implications are equally measurable.
Because the architecture removes the overhead of general-purpose storage protocols, it delivers 5x better power efficiency than traditional methods.

Integrating the data plane

Implementing this architecture requires a change in how IT teams view storage networking. The ICMS platform relies on NVIDIA Spectrum-X Ethernet to provide the high-bandwidth, low-jitter connectivity required to treat flash storage almost as if it were local memory.

For enterprise infrastructure teams, the integration point is the orchestration layer. Frameworks such as NVIDIA Dynamo and the Inference Transfer Library (NIXL) manage the movement of KV blocks between tiers. These tools coordinate with the storage layer to ensure that the correct context is loaded into the GPU memory (G1) or host memory (G2) exactly when the AI model requires it. The NVIDIA DOCA framework further supports this by providing a KV communication layer that treats context cache as a first-class resource.

Major storage vendors are already aligning with this architecture. Companies including AIC, Cloudian, DDN, Dell Technologies, HPE, Hitachi Vantara, IBM, Nutanix, Pure Storage, Supermicro, VAST Data, and WEKA are building platforms with BlueField-4. These solutions are expected to be available in the second half of this year.

Redefining infrastructure for scaling agentic AI

Adopting a dedicated context memory tier impacts capacity planning and datacentre design.

Reclassifying data: CIOs must recognise KV cache as a unique data type. It is “ephemeral but latency-sensitive,” distinct from “durable and cold” compliance data. The G3.5 tier handles the former, allowing durable G4 storage to focus on long-term logs and artifacts.

Orchestration maturity: Success depends on software that can intelligently place workloads. The system uses topology-aware orchestration (via NVIDIA Grove) to place jobs near their cached context, minimising data movement across the fabric.
Power density: By fitting more usable capacity into the same rack footprint, organisations can extend the life of existing facilities. However, this increases the density of compute per square metre, requiring adequate cooling and power distribution planning.

The transition to agentic AI forces a physical reconfiguration of the datacentre. The prevailing model of separating compute completely from slow, persistent storage is incompatible with the real-time retrieval needs of agents with photographic memories. By introducing a specialised context tier, enterprises can decouple the growth of model memory from the cost of GPU HBM. This architecture for agentic AI allows multiple agents to share a massive low-power memory pool to reduce the cost of serving complex queries, and boosts scaling by enabling high-throughput reasoning.

As organisations plan their next cycle of infrastructure investment, evaluating the efficiency of the memory hierarchy will be as vital as selecting the GPU itself.

See also: 2025’s AI chip wars: What enterprise leaders learned about supply chain reality

The post Agentic AI scaling requires new memory architecture appeared first on AI News.

View the full article
-
Deloitte’s latest UK CFO Survey presents an improving outlook for large UK businesses, with technology investment – particularly in AI – emerging as a dominant strategy. The survey signals that while macroeconomic and geopolitical risks remain elevated, boards are increasingly converging on digital capability as a primary route to productivity and medium-term growth.

The strongest finding concerns technology investment. An overwhelming 96% of CFOs expect UK companies to increase investment in technology over the next five years, with 77% anticipating improvements to productivity and business performance. The figures are striking for a report aimed at finance chiefs, and indicate digital spend is no longer viewed as discretionary or cyclical, but is treated as structural (akin to capital investment in previous industrial phases). For IT leaders, the report shows sustained funding is available, but also points to heightened expectations for delivery, integration, and measurable returns from the technology.

Artificial intelligence sits at the centre of the report and of CFO sentiment in general. The proportion of CFOs who are ‘more optimistic’ about AI’s ability to improve organisational performance has risen to 59%, up from 39% in Q3 2024. This change isn’t incremental; it suggests AI has crossed from experiment into mainstream financial confidence.

Importantly, the survey does not indicate a wholesale rise in risk-taking to accompany the new-found optimism. Risk appetite, while improving, remains subdued at 15%, below the longer-term average of 25%. This combination – confidence in AI but continued balance-sheet caution – has implications for how AI initiatives are likely to be governed and controlled. Finance functions are likely to favour tightly scoped use cases with clear productivity metrics over open-ended experiments and trials.

For finance professionals, the environment reinforces the role of the CFO as a steward of technology, rather than a passive consumer of IT budgets.
The survey positions finance chiefs as shaping digital strategy where AI is concerned. The report’s emphasis on productivity gains suggests a preference for applications that automate processes and help with financial forecasting, not just customer-facing innovation. IT teams should expect closer scrutiny of the business cases presented to them, more hands-on involvement from finance professionals, and pressure to translate technical capability into financial outcomes.

Despite improving sentiment metrics, the survey also highlights some notable constraints. Business confidence remains negative at a net -13%, below its long-term average, despite optimism having lifted from lows recorded in earlier iterations of Deloitte’s CFO Survey. Capital expenditure is a priority, but just 17% of CFOs describe it as a ‘strong priority’, only slightly above the long-term average. This suggests that while investment is protected, it’s not immune: programmes perceived as speculative, poorly governed, or badly aligned with productivity are still unlikely to survive.

External uncertainty, though declining, remains notable. 38% of CFOs still rate uncertainty about the future as ‘high’ or ‘very high’, and geopolitics still dominates the risk landscape, cited by 65% of respondents. UK competitiveness and productivity follow closely, with a historically high risk rating of 62. Systems resilience, data security, energy efficiency, and supply-chain visibility are likely to command attention, alongside the overall goal of efficiency gains from using AI in operations.

A notable subtext of the survey is the human dimension of technology adoption. Deloitte’s leadership recognises that AI’s value depends on combining technology with human skills, and on upskilling workforces. While this is not quantified in the survey data, it aligns with the broader pattern of cautious optimism: CFOs are willing to invest, but not to assume that technology by itself delivers outcomes.
This strengthens the case for IT leaders to embed change management, training, governance, and oversight into new digital programmes.

The Deloitte CFO Survey shows a pragmatic and decisive turn towards technology-led productivity in UK businesses. Its evidence is strongest around sustained digital investment and the noteworthy rise in confidence in AI, tempered by continued caution on risk and a recognition of a challenging external environment. For finance professionals, the priority is allocating capital to initiatives that can demonstrably improve performance. For IT staff, opportunity is expanding, but so is accountability. Digital ambition will in all likelihood be funded, but only where it can be translated into credible, auditable business value.

(Image source: “Deloitte exposure” by zilverbat. is licensed under CC BY-NC 2.0.)

The post Optimism in productivity tempered by AI risk: Deloitte appeared first on AI News.

View the full article
-
Rising labour costs and tighter delivery margins are pushing large platform operators like Grab to look at automation. The company has moved to bring robotics capability in-house through its acquisition of Infermove.

Grab operates at a scale where small efficiency gains can have outsized effects. Its platform supports millions of deliveries across Southeast Asia, many of them carried out by riders on scooters and bicycles in dense urban areas, producing complexity that limits how much automation can replace human labour. By acquiring a company focused on robots designed for unstructured settings, Grab is signalling that it sees physical-world AI as mature enough to use beyond pilot programmes.

Delivery automation close to core operations

Rather than relying on off-the-shelf systems, Grab is opting to internalise the development loop. Infermove’s technology is designed to learn from real-world movement data, including information generated by non-motorised delivery vehicles. In practical terms, that means robots trained on how people actually navigate pavements, crossings, and crowded drop-off points, rather than how those spaces appear in simulations.

For a delivery operator like Grab, that distinction matters. Simulated environments can support early development, but they often struggle with the edge cases that define real cities. Bringing that learning process in-house allows Grab to shape how automation behaves under its own operating constraints, rather than adapting its delivery network to fit a third-party system.

From an enterprise perspective, the strategic value lies in control. Owning the technology gives Grab more influence over deployment pace, operating scope, and cost trade-offs. It also reduces long-term dependence on vendors whose priorities may not match Grab’s regional footprint or economic realities.

Automation, however, is not positioned as a replacement for human riders. Even as robots take on parts of the workflow, people remain central to service delivery.
Grab’s interest appears focused on selective use, like structured first-mile or last-mile segments where tasks are repetitive and distances are short. In these areas, robots may help smooth demand spikes, reduce delays during peak hours, and ease pressure during labour shortages.

Managing cost pressure without breaking service

During an internal meeting in December, Grab’s chief technology officer Suthen Thomas described Infermove’s progress as “impressive,” highlighting both the technology and its early commercial use. He also said the company would continue to operate independently, with its founder reporting directly to him. The structure suggests Grab is prioritising execution and continuity rather than rapid organisational integration.

The approach reflects a broader shift among large digital platforms. Instead of treating AI as a layer added on top of existing systems, companies are embedding it deeper into core operations. In delivery and logistics, that often means moving beyond optimisation software into physical automation, where the risks and costs are higher but the potential gains are more structural.

The timing is also telling. On-demand delivery volumes continue to grow, but margins remain under pressure. Customers expect faster service and lower fees, while operators face rising wages, fuel costs, and tighter regulation. In that environment, automation becomes less about novelty and more about sustaining service levels without eroding profitability.

Bringing robotics development closer to operations may also help align incentives around data use. Training physical AI systems requires large amounts of real-world data, which delivery platforms already generate at scale. Keeping that feedback loop internal can speed iteration and reduce the need to share sensitive operational data externally.

There are still limits. Robots designed for pavements and short routes are unlikely to replace human couriers across an entire network anytime soon.
Weather, local rules, and customer acceptance will continue to shape where automation can realistically operate. Expanding in multiple countries adds further complexity, as infrastructure and regulations vary widely.

Industry forecasts suggest rapid growth in last-mile delivery robotics, but those figures offer limited guidance for operators. The more immediate question is whether automation can lower cost per delivery without introducing new failure points. That depends less on market size and more on performance in live environments.

Seen through an enterprise lens, the acquisition of Infermove is not a bet on robotics as a product category. It is a move to tighten the link between AI, data, and physical operations. For platform companies built on logistics and mobility, that integration may become a key factor in managing growth under sustained cost pressure.

(Photo by Afif Ramdhasuma)

See also: The Law Society: Current laws are fit for the AI era

The post Grab brings robotics in-house to manage delivery costs appeared first on AI News.

View the full article
-
As ministers push to loosen rules to speed up AI adoption, The Law Society argues that lawyers just need to know how current laws apply.

The Department for Science, Innovation & Technology (DSIT) recently launched a call for evidence on a proposed ‘AI Growth Lab’. This cross-economy sandbox is designed to accelerate the deployment of autonomous technologies by granting “time-limited regulatory exemptions” to firms. The government’s position is that many regulations are outdated, having been designed before autonomous software existed, often assuming that decisions are made by people rather than machines.

Ministers believe that if the UK can move faster than its global competitors, it can secure a defining economic advantage, with a potential £140 billion boost to national output by 2030. Their preliminary analysis specifically flags legal services as a sector where removing “unnecessary legal barriers” could generate billions in value over the next decade.

Yet the legal profession – supposedly the beneficiary of this deregulation – isn’t asking for exemptions. In its formal response, the Law Society made clear that the existing framework is robust enough. The friction lies not in the rules themselves, but in the lack of certainty surrounding them. While two-thirds of lawyers already use AI tools, confusion remains the primary brake on deeper integration.

Ian Jeffery, CEO of The Law Society, said: “AI innovation is vital for the legal sector and already has great momentum. The existing legal regulatory framework supports progress. The main challenges don’t stem from regulatory burdens, but rather from uncertainty, cost, data and skills associated with AI adoption.”

Rather than a regulatory overhaul, the profession is asking for a practical roadmap. Firms are currently navigating a grey area regarding liability and data protection.
Solicitors need definitive answers on whether client data must be anonymised before it is fed into AI platforms, and they require standardised protocols for data security and storage.

The questions get thornier when errors occur. If an AI tool generates harmful legal advice, it is currently unclear where the buck stops: with the solicitor, the firm, the developer, or the insurer. There is also ambiguity about supervision requirements, specifically whether a human lawyer must oversee every instance of AI deployment. Such concerns are particularly acute for “reserved legal activities” like court representation, conveyancing, and probate, where practitioners need to know if using automated assistance puts them in breach of their professional duties.

AI laws must retain safeguards

The government has tried to reassure the public that the sandbox will have “red lines” to protect fundamental rights and safety. However, The Law Society remains wary of any move that might dilute consumer protection in the name of speed.

“Technological progress in the legal sector should not expose clients or consumers to unregulated risks,” Jeffery stated. “Current regulation of the profession reflects the safeguards that Parliament deemed vital to protect clients and the public. It ensures trust in the English and Welsh legal system worldwide.”

The body is willing to collaborate on a “legal services sandbox,” but only if it upholds professional standards rather than bypassing them. For The Law Society, the priority is maintaining the integrity of the justice system in the AI era.

“The Law Society strongly supports innovation provided it remains aligned with professional integrity and operates in a solid regulatory environment,” Jeffery explained. “The government must work with legal regulators and bodies to ensure adherence to the sector’s professional standards.
Any legal regulatory changes must include parliamentary oversight.”

See also: Inside China’s push to apply AI across its energy system

The post The Law Society: Current laws are fit for the AI era appeared first on AI News.

View the full article
-
The launch of PubMatic’s AgenticOS marks a change in how artificial intelligence is being operationalised in digital advertising, moving agentic AI from isolated experiments into a system-level capability embedded in programmatic infrastructure. For marketing leaders managing seven-figure budgets in media environments, the implications are practical, not theoretical: faster decision cycles and a rebalancing of human effort toward strategy and differentiation.

Programmatic advertising promises efficiency, but in practice accumulates operational complexity. Campaigns span formats, devices, data partnerships, and regulatory constraints, making manual optimisation problematic. PubMatic is positioning AgenticOS as a response to that pressure, presenting it as an ‘operating system’ that allows multiple AI agents to transact and optimise campaigns within human-defined objectives and company-defined guardrails.

AgenticOS acts across infrastructure and applications to coordinate decisions. This aligns with current research trends showing that agentic systems outperform single-model automation in contexts involving the trade-offs between cost, performance, and risk that are inherent in media buying.

Cost reduction through operational compression

For medium to large organisations, rising marketing costs are driven by operational overhead rather than media prices. PubMatic reports early tests where agent-led campaigns reduced setup time by 87% and issue resolution by 70%. Even allowing for bias, these figures are consistent with studies of AI-assisted workflow automation in enterprise marketing, which typically find 30–50% reductions in manual labour in planning and reporting.

The near-term opportunity for budget holders is not necessarily headcount reduction, but capacity gains. Agentic systems absorb decision load: bid adjustments, pacing changes, and inventory discovery.
This lets teams run more campaigns concurrently or redirect effort to activities like experimentation and testing.

Decision quality at scale

AgenticOS’s claim is that it enables continuous decision-making without fragmentation, which is significant because most marketing inefficiency arises from delayed or inconsistent execution, not poor strategy. Human teams operate in reporting cycles, while agentic systems operate in seconds.

Research into real-time optimisation suggests marginal gains at auction level can compound with large spends. At enterprise level, even low single-digit percentage improvements in effective CPM or conversion efficiency can translate into meaningful budgetary impact.

Agentic AI does not eliminate the need for human judgement, but changes where and when judgement is applied. Instead of reactive troubleshooting, teams define objectives, constraints, and success criteria.

Governance, control, and brand safety

A persistent concern among senior marketers is loss of control to agentic processes. PubMatic states AgenticOS works from advertisers’ objectives, brand-safety rules, and creative parameters, with agents operating inside those boundaries. This reflects a wider industry consensus that agentic AI adoption will only scale where governance is embedded at system level rather than bolted on.

For decision-makers, the practical lesson is to invest early in codifying marketing intent, detailing performance hierarchies, brand constraints, and escalation thresholds. Organisations that treat agentic AI as a strategic execution layer, rather than a black box, are likely to realise benefits faster and with lower risk.

Predictions for the next 24 months

Evidence from adjacent enterprise functions such as supply chain, finance, and customer support suggests three likely developments:

First, agentic AI will become a standard execution layer in programmatic advertising, with a shift from automation to high-quality intent modelling and agent coordination.
Second, marketing operating models will flatten, with smaller teams managing larger, more complex portfolios. Senior marketers will spend more time on scenario planning and less on day-to-day campaign mechanics.

Third, vendors offering system-level agentic platforms (not isolated point solutions) will be best placed to deliver ROI, as cost savings and performance gains compound across the workflow rather than at isolated points.

Practical advice for marketing leaders

Marketing decision-makers could regard AgenticOS and similar platforms as infrastructure investment. Pilot programmes should focus on high-volume, rules-based campaigns where efficiency gains are easier to measure. Success can be evaluated on performance metrics and time saved.

Most importantly, internal preparation matters. The more precisely objectives and constraints are defined, the more effectively autonomous systems will operate. In this sense, the adoption of agentic AI is as much an organisational discipline challenge as a technological one.

PubMatic’s AgenticOS illustrates agentic AI in marketing entering its operational phase. The question is how quickly organisations can adapt their processes to take advantage of the technology. Those that do are likely to see lower costs and more effective use of marketing spend in increasingly complex media environments.

(Image source: “market” by star-one is licensed under CC BY-SA 2.0.)

The post What PubMatic’s AgenticOS signals for enterprise marketing appeared first on AI News.

View the full article
-
Contract work has evolved to touch privacy, security, revenue recognition, data residency, vendor risk, renewals and numerous internal approvals. At the same time, teams are expected to turn agreements around faster and keep every signed obligation visible after signature.

Artificial intelligence is becoming a practical layer in this process. It can read language at scale, pull key terms into structured fields, highlight unusual clauses, and support quicker hand-offs between legal departments and the business. Five tools stand out in completing these processes efficiently.

What makes a contract management provider a good choice

Choosing a contract management service is partly a software decision and partly a long-term operating commitment. Contract processes change as the business evolves, so the right provider needs to support iteration, integration and adoption across teams. Tools that deliver will have certain minimum capabilities.

Scalability and flexibility: A good platform can adapt to users’ unique workflows and scale with a business as the contract volume and complexity grow.

User adoption focus: The interface should be intuitive for all types of users, covering legal, sales and procurement teams, to ensure company-wide adoption.

Security and support: Look for providers with proven security credentials and a reputation for excellent training and client support.

Integration ecosystem: The tool must connect seamlessly with a company’s existing software stack to ensure smooth data flow and a single source of truth.

The best AI-powered tools for managing contracts

Five tools stand out based on four criteria. These are contract AI capabilities in daily workflows, ease of rollout for legal and business users, integration readiness, and how well each supports post-signature work, like reporting, renewals and obligation tracking.

1.
Agiloft

Agiloft has established a reputation in contract life cycle management (CLM) for its configurability and approach to treating data as an asset. That matters in real environments where sales, procurement, finance and legal all need different views of the same agreement, and where approvals and clause positions shift over time.

On the AI side, Agiloft’s AI Core extracts and analyses contract data across the life cycle, helping teams turn documents into searchable fields and automation triggers. It also offers ConvoAI, a conversational experience for asking questions about the contract repository in natural language.

Its no-code model is also commendable. Contract teams routinely need to adjust intake forms, approval routing, renewal steps and reporting fields. No-code configuration reduces dependence on IT tickets for daily workflow updates. It also emphasises integrations that connect CLM platforms to systems used by other parts of the business, as outlined in offerings like its Integration Hub.

2. Ironclad

Ironclad is a strong fit for teams that want contracting to feel approachable for the business. Legal may own the guardrails, but sales and procurement usually own the urgency. The company has invested heavily in keeping negotiation, approvals and version management inside one workspace, so the process does not fracture into email threads and scattered attachments.

Ironclad’s AI Assist is designed to accelerate review by flagging risky or off-standard clauses and helping legal focus on the parts that truly need judgement. It also leans into repeatable processes, like templates, workflows and structured steps for drafting through signature. When those are used consistently, AI review becomes more effective because the system sees similar patterns in many agreements.

3. Icertis

Icertis is built for enterprise complexity.
Global users often need contracting to support multiple regions and languages, layered approval guidelines and integrations that connect contract commitments to other teams. Its core concept is contract intelligence through the Icertis Contract Intelligence platform, which aims to connect terms to business processes so that obligations and entitlements do not stay trapped in PDFs.

Another notable feature is its depth of integration. Enterprise contract teams tend to care less about where contracts are stored and more about whether renewals, price holds, compliance terms and supplier obligations can be reflected in those systems.

4. LinkSquares

LinkSquares shines in post-signature contract work. A company can have a solid signature process and still struggle when leadership asks for a portfolio view of renewals, indemnity positions, liability caps or security commitments across vendors. It uses AI to read executed agreements and extract key information into fields that can be searched and reported on.

Its Smart Values feature is a concrete example. It pulls common terms, dates and clause types, so legal teams can monitor them across large sets of agreements without manual tagging. Instead of opening dozens of documents to answer a single stakeholder request, teams can query the database and generate reports that support the needs of various teams.

5. ContractPodAi

ContractPodAi is positioned as a broader legal platform with CLM at the centre, aiming to cover drafting, review, repository management and reporting in one place. It is often evaluated by teams looking for a single system for legal operations workflows, along with contract life cycle steps.

A notable AI element is Leah, a generative AI legal assistant that supports summarisation and review-oriented help, among other tasks. Used well, this kind of assistant can speed up intake triage and help prepare quick summaries for business owners, while legal retains responsibility for final decisions.
The tool also has clause detection and risk analysis capabilities, as well as dashboards for monitoring obligations after signature.

Comparing the top AI tools for contract management

Here’s an at-a-glance look at each provider’s key features and strengths.

Tool | Key AI feature | Core strength | Best for
Agiloft | Conversational AI (ConvoAI) | No-code capability | High-growth companies needing flexibility
Ironclad | Automated clause review (AI Assist) | User-friendly workflow automation | Teams prioritising ease of use and collaboration
Icertis | Enterprise-wide data connectivity | Enterprise-scale and ERP integration | Large, global enterprises
LinkSquares | Post-signature data extraction | Powerful search and analytics | In-house legal teams needing post-signature analysis
ContractPodAi | Generative AI legal assistant (Leah) | End-to-end life cycle automation | Mid-to-large companies wanting an all-in-one legal hub

Choosing the right AI CLM for better visibility and control

Contract AI pays off when it reduces repetition, keeps executed terms searchable, and makes renewals and obligations easier to manage across the business. Choosing the best AI-powered contract management tool depends on contract volume, workflow, integration requirements and how much time the team spends answering post-signature questions.

Image source: Unsplash

The post 5 AI-powered tools streamlining contract management today appeared first on AI News.

View the full article
-
The AI chip shortage became the defining constraint for enterprise AI deployments in 2025, forcing CTOs to confront an uncomfortable reality: semiconductor geopolitics and supply chain physics matter more than software roadmaps or vendor commitments. What began as US export controls restricting advanced AI chips to China evolved into a broader infrastructure crisis affecting enterprises globally—not from policy alone, but from explosive demand colliding with manufacturing capacity that cannot scale at software speed. By year’s end, the dual pressures of geopolitical restrictions and component scarcity had fundamentally reshaped enterprise AI economics.

The numbers tell a stark story. Average enterprise AI spending is forecast at US$85,521 monthly in 2025, up 36% from 2024, according to CloudZero’s research surveying 500 engineering professionals. Organisations planning to invest over US$100,000 monthly more than doubled, from 20% in 2024 to 45% in 2025—not because AI became more valuable, but because component costs and deployment timelines spiralled beyond initial projections.

Export controls reshape chip access

The Trump administration’s December 2025 decision to allow conditional sales of Nvidia’s H200 chips to China—the most powerful AI chip ever approved for export—illustrated how quickly semiconductor policy can shift. The arrangement requires a 25% revenue share with the US government and applies only to approved Chinese buyers, reversing an earlier April 2025 export freeze.

Yet the policy reversal came too late to prevent widespread disruption. US Commerce Secretary Howard Lutnick testified that China’s Huawei will produce only 200,000 AI chips in 2025, while China legally imported around one million downgraded Nvidia chips designed specifically for export compliance.
The production gap forced Chinese companies into large-scale smuggling operations—federal prosecutors unsealed documents in December revealing a ring that attempted to export at least US$160 million worth of Nvidia H100 and H200 GPUs between October 2024 and May 2025.

For global enterprises, these restrictions created unpredictable procurement challenges. Companies with China-based operations or data centres faced sudden access limitations, while others discovered their global deployment plans assumed chip availability that geopolitics no longer guaranteed.

Memory chip crisis compounds AI infrastructure pain

While export controls dominated headlines, a deeper supply crisis emerged: memory chips became the binding constraint on AI infrastructure globally. High-bandwidth memory (HBM), the specialised memory that enables AI accelerators to function, hit severe shortages as manufacturers Samsung, SK Hynix, and Micron operated near full capacity while reporting six- to twelve-month lead times.

Memory prices surged accordingly. DRAM prices climbed over 50% in 2025 in some categories, with server contract prices up as much as 50% quarterly, according to Counterpoint Research. Samsung reportedly lifted prices for server memory chips by 30% to 60%. The firm forecasts memory prices to rise another 20% in early 2026 as demand continues outpacing capacity expansion.

The shortage wasn’t limited to specialised AI components. DRAM supplier inventories fell to two to four weeks by October 2025, down from 13-17 weeks in late 2024, per TrendForce data cited by Reuters. SK Hynix told analysts that shortages may persist until late 2027, reporting that all memory scheduled for 2026 production is already sold out.

Enterprise AI labs experienced this firsthand. Major cloud providers Google, Amazon, Microsoft, and Meta issued open-ended orders to Micron, stating they will take as much inventory as the company can provide.
Chinese firms Alibaba, Tencent, and ByteDance pressed Samsung and SK Hynix for priority access. The pressure extended into future years, with OpenAI signing preliminary agreements with Samsung and SK Hynix for its Stargate project, which will require up to 900,000 wafers monthly by 2029—roughly double today’s global monthly HBM output.

Deployment timelines stretch beyond projections

The AI chip shortage didn’t just increase costs—it fundamentally altered enterprise deployment timelines. Enterprise-level custom AI solutions that typically required six to twelve months for full deployment in early 2025 stretched to 12-18 months or longer by year-end, according to industry analysts.

Bain & Company partner Peter Hanbury, speaking to CNBC, noted utility connection timelines have become the biggest constraint on data centre growth, with some projects facing five-year delays just to secure electricity access. The firm forecasts a 163GW rise in global data centre electricity demand by 2030, much of it linked to generative AI’s intensive compute requirements.

Microsoft CEO Satya Nadella captured the paradox in stark terms: “The biggest issue we are now having is not a compute glut, but it’s power—it’s the ability to get the builds done fast enough close to power. If you can’t do that, you may actually have a bunch of chips sitting in inventory that I can’t plug in. In fact, that is my problem today.”

Traditional tech buyers in enterprise environments faced even steeper challenges. “Buyers in this environment will have to over-extend and make some bets now to secure supply later,” warned Chad Bickley of Bain & Company in a March 2025 analysis.
“Planning ahead for delays in production may require buyers to take on some expensive inventory of bleeding-edge technology products that may become obsolete in short order.”

Hidden costs compound budget pressures

The visible price increases—HBM up 20-30% year-over-year, GPU cloud costs rising 40-300% depending on region—represented only part of the total cost impact. Organisations discovered multiple hidden expense categories that vendor quotes hadn’t captured.

Advanced packaging capacity emerged as a critical bottleneck. TSMC’s CoWoS packaging, essential for stacking HBM alongside AI processors, was fully booked through the end of 2025. Demand for this integration technique exploded as wafer production increased, creating a secondary choke point that added months to delivery timelines.

Infrastructure costs beyond chips escalated sharply. Enterprise-grade NVMe SSDs saw prices climb 15-20% compared to a year earlier as AI workloads required significantly higher endurance and bandwidth than traditional applications. Organisations planning AI deployments found their bill-of-materials costs rising 5-10% from memory component increases alone, according to Bain analysis.

Implementation and governance costs compounded further. Organisations spent US$50,000 to US$250,000 annually on monitoring, governance, and enablement infrastructure beyond core licensing fees. Usage-based overages caused monthly charges to spike unexpectedly for teams with high AI interaction density, particularly those engaging in heavy model training or frequent inference workloads.

Strategic lessons for 2026 and beyond

Enterprise leaders who successfully navigated 2025’s AI chip shortage emerged with hard-won insights that will shape procurement strategy for years ahead.

Diversify supply relationships early: Organisations that secured long-term supply agreements with multiple vendors before shortages intensified maintained more predictable deployment timelines than those relying on spot procurement.
Budget for component volatility: The era of stable, predictable infrastructure pricing has ended for AI workloads. CTOs learned to build 20-30% cost buffers into AI infrastructure budgets to absorb memory price fluctuations and component availability gaps.

Optimise before scaling: Techniques like model quantisation, pruning, and inference optimisation cut GPU needs by 30-70% in some implementations. Organisations that invested in efficiency before throwing hardware at problems achieved better economics than those focused purely on procurement.

Consider hybrid infrastructure models: Multi-cloud strategies and hybrid setups combining cloud GPUs with dedicated clusters improved reliability and cost predictability. For high-volume AI workloads, owning or leasing infrastructure increasingly proved more cost-effective than renting cloud GPUs at inflated spot prices.

Factor geopolitics into architecture decisions: The rapid policy shifts around chip exports taught enterprises that global AI infrastructure can’t assume stable regulatory environments. Organisations with China exposure learned to design deployment architectures with regulatory flexibility in mind.

The 2026 outlook: Continued constraints

The supply-demand imbalance shows no signs of resolving quickly. New memory chip factories take years to build—most capacity expansions announced in 2025 won’t come online until 2027 or later. SK Hynix guidance suggests shortages persisting through at least late 2027.

Export control policy remains fluid. A new “Trump AI Controls” rule to replace earlier frameworks is expected later in 2025, along with potential controls on exports to Malaysia and Thailand, which have been identified as diversion routes for China. Each policy shift creates new procurement uncertainties for global enterprises.

The macroeconomic implications extend beyond IT budgets.
Memory shortages could delay hundreds of billions in AI infrastructure investment, slowing productivity gains that enterprises have bet on to justify massive AI spending. Rising component costs threaten to add inflationary pressure at a moment when global economies remain sensitive to price increases.

For enterprise leaders, 2025’s AI chip shortage delivered a definitive lesson: software moves at digital speed, but hardware moves at physical speed, and geopolitics moves at political speed. The gap between those three timelines defines what’s actually deployable—regardless of what vendors promise or roadmaps project.

The organisations that thrived weren’t those with the biggest budgets or the most ambitious AI visions. They were the ones who understood that in 2025, supply chain reality trumped strategic ambition—and planned accordingly.

(Photo by Igor Omilaev/Unsplash)

See also: Can the US really enforce a global AI chip ban?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events; click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post 2025’s AI chip wars: What enterprise leaders learned about supply chain reality appeared first on AI News.

View the full article
-
Producing digital advertising at global scale has become less about one standout campaign and more about volume, speed, and consistency. For consumer brands operating across dozens of markets, the challenge is not creativity alone, but how to keep content flowing without repeating expensive production cycles. That pressure is pushing some large companies to test where AI fits inside everyday marketing work.

At L’Oréal, AI-generated creative tools are being used to support parts of the digital advertising process, particularly video and visual content. The aim is not to replace human teams, but to reduce friction in a system that demands constant refresh. The shift offers a useful view into how enterprise AI adoption is unfolding in creative functions, where speed and control matter as much as originality.

Scaling content without scaling production

For a global beauty group, digital advertising is no longer a seasonal exercise. Content is needed continuously across social platforms, ecommerce sites, and regional campaigns, often with small variations in language, format, or visual emphasis. Traditional production models struggle to keep up. Each new asset typically involves planning, filming, editing, and approvals. AI-generated images and video elements allow teams to reuse existing content and extend it into new formats without having to start from scratch every time.

At L’Oréal, AI tools are being used to help generate or adapt visual content that fits specific digital channels. This includes polishing footage, modifying formats, and creating versions for different platforms. Human teams continue to monitor creative direction and final output, but AI shortens the time between idea and delivery.

The practical value is not about producing something altogether new. It is about producing enough usable content to meet the pace of digital advertising.
Why L’Oréal keeps AI under tight creative control

One reason large brands move cautiously with AI in creative work is brand risk. Visual identity, tone, and messaging are tightly regulated, and small inconsistencies can be amplified when content is distributed at scale.

Rather than handing over creative decisions, companies like L’Oréal are using AI as a support layer. AI-generated output is examined, adjusted, and approved using existing workflows. This keeps accountability with internal teams and external agencies, while still gaining efficiency.

This approach reflects a broader pattern in enterprise AI adoption. Tools are being introduced into workflows that already exist, rather than reshaping how decisions are made. In marketing, that often means AI assists with production, not with defining brand voice.

Cost, speed, and repeatability

Digital advertising budgets are under pressure, even for large consumer groups. Media prices fluctuate, platforms change their rules, and audiences expect constant updates. AI offers a way to absorb some of that pressure by lowering the marginal cost of producing additional assets.

By reusing footage and applying AI-based enhancements, brands can stretch the value of each shoot. This is especially important where campaigns must be changed quickly, or when local teams want specific assets but lack full-scale production support.

The result is not a dramatic cost cut in one area, but incremental savings across hundreds of minor decisions. Over time, those savings shape how marketing teams plan campaigns and allocate expenditure.

What this says about enterprise AI maturity

L’Oréal’s use of AI-generated creative work is less about experimentation and more about operational fit. The tools are used in situations where output is predictable, quality can be measured, and mistakes can be caught before release. This mirrors how AI is being adopted across many enterprise functions.
Instead of broad, open-ended use, companies are identifying narrow tasks where AI can reliably assist without introducing new risk. In marketing, those tasks often sit between creative concept and final distribution.

The approach also highlights a key constraint. AI works best in environments with existing data, rules, and review processes. Creative freedom still belongs to people, while AI supports scale.

Implications for marketing teams

For marketing leaders, the lesson is not that AI will replace agencies or internal creatives. It is that production models built for slower cycles are becoming harder to sustain. Teams are being asked to deliver more content, more often, with tighter budgets and faster turnaround. AI tools offer one way to manage that demand, but only if they fit existing controls and expectations.

This places new demands on governance. Marketing teams need clear rules on where AI can be used, how outputs are reviewed, and who remains accountable for final decisions. Without that structure, efficiency gains can quickly be offset by risk.

What L’Oréal’s approach signals for enterprise AI adoption

What stands out in L’Oréal’s approach is restraint. AI is applied where it reduces friction, not where it reshapes the role of creative teams. That makes it easier to integrate into large organisations with established processes and brand safeguards.

As more enterprises look to AI for productivity gains, similar patterns are emerging. AI becomes part of the workflow, not the headline. Success is measured in time saved and consistency maintained, not in novelty.

For now, AI-generated creative work remains a supporting act in enterprise marketing. Its real impact lies in how quietly it changes the economics of content production, one asset at a time.

(Photo by Helio E. López Vega)

See also: Disney is embedding generative AI into its operating model

Want to learn more about AI and big data from industry leaders?
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events; click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post L’Oréal brings AI into everyday digital advertising production appeared first on AI News.

View the full article
-
Modern software delivery depends on the reliability, integrity, and security of container images. As organisations migrate to microservices, automated CI/CD pipelines, and multi-cloud architectures, the container image becomes more than a packaging mechanism; it becomes a security boundary. A single vulnerability embedded in an image can replicate across clusters, environments, and deployments, creating widespread risk for applications that rely on speed and repeatability.

Security-forward organisations are increasingly shifting from general-purpose base images to secure-by-design, minimal, or enterprise-maintained images that provide strong guarantees around trust, provenance, and vulnerability management. The industry has witnessed a significant increase in attacks exploiting software supply chains, open-source dependencies, and compromised image registries. As a result, engineering teams are prioritising container security earlier in the build process, selecting image foundations that minimise the need for downstream mitigation and maximise confidence before deployment.

The 3 best secure container images for modern applications

The landscape of secure container images has evolved rapidly, and modern development teams now seek images that reduce vulnerabilities, enhance performance, and support predictable operations. The three platforms below represent the strongest options in 2025, offering different paths to security: source-level reconstruction, extreme minimalism, and long-term stability.

1. Echo

Echo represents one of the most advanced evolutions in secure container images. Instead of attempting to scan, patch, or incrementally improve existing base images, Echo rebuilds them entirely from source, producing images that are free from known vulnerabilities from the outset. The zero-CVE image model enables organisations to begin each deployment with a verified clean foundation, reducing the remediation burden associated with container maintenance.
What sets Echo apart is its AI-powered automated lifecycle approach. As new vulnerabilities are disclosed, Echo’s purpose-built AI agents detect dependencies impacted by the CVE, regenerate the affected images, and deliver updated versions back to the organisation’s registry without the need for manual intervention. This approach dramatically shortens exposure windows and ensures continuous alignment with security benchmarks, even in highly dynamic environments.

Echo is ideal for enterprises that cannot tolerate prolonged CVE exposure, like financial platforms, healthcare providers, SaaS vendors, and critical infrastructure operators. It transforms container image security from a reactive process into a proactive, automated practice.

Key features

Source-level reconstruction to remove vulnerabilities entirely
Automated patch regeneration with strict SLAs
Strong governance and policy controls
Broad runtime and language support
Seamless pipeline integration for frictionless adoption

2. Google Distroless

Google Distroless is built on the principle of extreme minimalism. Whereas traditional images include shells, package managers, and utility libraries, Distroless images contain only the dependencies required for an application to run. Nothing more. The design philosophy significantly reduces the attack surface and limits the number of components that could be compromised.

Distroless also aligns strongly with modern DevOps and SRE practices. By removing unnecessary system-level functionality, Distroless encourages clean application packaging and ensures that teams explicitly define the dependencies required for execution. The approach reduces ambiguity and improves reliability when reproducing builds across environments.
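In practice, Distroless images are usually paired with a multi-stage build: the first stage carries the full toolchain, and only the compiled artefact is copied into the minimal runtime image. A minimal sketch for a hypothetical Go service (the binary name and source layout are illustrative; the image tag follows Google’s published distroless naming):

```dockerfile
# Build stage: the full toolchain lives here and never ships to production
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# Static binary, so the distroless "static" variant needs no libc
RUN CGO_ENABLED=0 go build -o /server .

# Runtime stage: no shell, no package manager, just the binary
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```

Because the final image contains no shell, there is nothing to `exec` into at runtime; that inconvenience for debugging is precisely the reduced attack surface described above.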
Key features

Minimal composition eliminates unnecessary libraries and utilities
Reduced attack surface compared to traditional images
Immutable-by-design infrastructure for safer deployments
Performance improvements through reduced image size
Stronger dependency clarity in application packaging

3. Ubuntu Containers

Ubuntu Containers focus on stability, predictability, and long-term maintenance. Canonical’s Ubuntu distributions have long been respected for their balance of usability and robustness, and their containerised versions offer an equally compelling solution for teams that require reliable and well-supported base images.

Unlike minimalist images that reduce functionality, Ubuntu provides a complete, fully featured environment that supports a broad range of software ecosystems. This compatibility makes it easier for teams to run applications with complex dependencies without needing major adjustments to package configurations.

Key features

Long-term, predictable security updates through Canonical LTS
Broad software compatibility across languages, libraries, and frameworks
Enterprise-focused security enhancements, including compliance alignment
Extensive community and vendor support
Stable and reliable behaviour in heterogeneous environments

Broader considerations when evaluating secure container images

Choosing the right secure container image is not simply a technical preference; it is a strategic decision that affects every stage of the software lifecycle. Modern organisations should evaluate image options based on several broader criteria that extend beyond immediate functionality.

Security posture and vulnerability management

Organisations should assess whether an image requires reactive vulnerability patching or offers proactive vulnerability elimination. Images with automated security maintenance reduce operational overhead and lower exposure risk.

Minimalism vs. completeness

Minimal images reduce attack surface but may require application adjustments.
Full-featured images simplify compatibility but introduce more dependencies. The right choice depends on workload complexity and team expertise.

Operational consistency

A secure image should behave reliably in testing, staging, and production environments. Stability is a foundation for predictable deployments and reduced debugging time.

Compliance alignment

Security teams must ensure that base images support compliance frameworks, particularly in regulated industries. Vendor-backed images often provide stronger audit trails and lifecycle guarantees.

Ecosystem compatibility

Base images should integrate well with Kubernetes, CI/CD pipelines, observability tools, and automation systems.

Maintainability over time

Modern applications evolve continuously, so image choices should support sustainable upgrades, long-term support horizons, and clear documentation.

These evaluative principles help ensure that organisations select the image foundation that best aligns with their strategic goals.

Final thoughts

Secure container images are essential for maintaining resilience in cloud-native architectures. While Bitnami and other curated image providers offer convenience, modern applications require a deeper focus on image integrity, vulnerability management, and runtime safety. Echo, Google Distroless, and Ubuntu Containers represent three powerful approaches to secure container design, each suited to different organisational needs. Together, these three platforms form a robust foundation for teams striving to build secure, scalable, and reliable modern applications.

Image source: Unsplash

The post 3 best secure container images for modern applications appeared first on AI News.

View the full article
-
Artificial intelligence and big data are reshaping digital marketing by providing new insights into consumer behaviour. The technologies allow marketers to create more personalised and effective strategies. As the digital world evolves, businesses must adapt to stay competitive.

Rainmaker is an AI marketing agency that uses artificial intelligence and big data to enhance digital marketing strategies. As an AI-powered digital marketing and performance agency, Rainmaker exists to help businesses navigate the complexities of the digital landscape. By using advanced technologies, Rainmaker delivers tailored marketing solutions that drive measurable outcomes for its clients.

The impact of AI and big data on marketing

AI and big data have a profound impact on marketing by enabling the analysis of large datasets to uncover patterns and trends. The capability allows marketers to gain a deeper understanding of consumer preferences and behaviours. With these insights, an AI SEO company can tailor its marketing strategies to better meet the needs of its target audience, resulting in improved engagement and conversion rates.

Moreover, AI-driven tools can automate various marketing tasks, like content creation and campaign management, freeing up time for marketers to focus on strategy development. Automation increases efficiency and ensures marketing efforts are aligned consistently with consumer expectations and market trends.

The integration of AI and big data also enables real-time decision-making capabilities that were previously impossible. Marketers can now monitor campaign performance instantaneously and make adjustments on the fly to optimise results. This agility allows businesses to respond quickly to market changes, competitor actions and shifting consumer preferences, ensuring that marketing budgets are spent efficiently and effectively across all channels.
Enhancing marketing strategies with AI insight

AI provides marketers with valuable insights that can enhance their strategies. By analysing consumer data, AI can identify which marketing tactics are most effective, allowing businesses to allocate resources more efficiently. The data-driven approach ensures that marketing efforts are focused on the most promising opportunities, maximising return on investment.

AI can also help marketers predict future trends and consumer behaviours, enabling them to stay ahead of the competition. By anticipating changes in the market, businesses can adapt their strategies proactively, ensuring they remain relevant and effective in a rapidly changing digital landscape.

Challenges and ethical considerations

While AI and big data offer significant benefits, they also present challenges, particularly in terms of data privacy and ethical considerations. Businesses must ensure that they handle consumer data responsibly and comply with relevant regulations to maintain trust and transparency with their audience.

Algorithmic bias is another concern that marketers must address. Ensuring that AI systems are designed and monitored to prevent biased outcomes is crucial for maintaining fairness and equality in marketing practices. By addressing these challenges, businesses can harness the full potential of AI and big data while maintaining ethical standards.

Personalisation at scale through AI and big data

One of the most significant ways AI and big data are transforming digital marketing strategies is through large-scale personalisation. Traditional marketing approaches often rely on broad audience segments, but AI-powered systems can analyse vast amounts of behavioural, demographic and contextual data to create highly individualised experiences. This allows marketers to move beyond generic messaging and deliver content, offers and recommendations that align closely with each user’s interests and intent.
By using big data, AI can assess factors like browsing history, purchase behaviour, location, device use and engagement patterns in real time. These insights enable marketers to adjust messaging dynamically across channels, from personalised website experiences to tailored email campaigns and paid media targeting.

The future of AI in digital marketing

As AI and big data technologies continue to evolve, their role in digital marketing is expected to grow. Future innovations will likely focus on enhancing personalisation and real-time data analysis, providing marketers with even more powerful tools to engage with consumers effectively.

Staying informed about these advancements is essential for businesses looking to use AI and big data in their marketing strategies. By embracing these technologies and adapting to new developments, businesses can ensure they remain competitive and successful in the ever-evolving digital landscape.

Rainmaker’s commitment to using AI and big data enables clients to achieve superior marketing outcomes. By focusing on personalised strategies and data-driven insights, Rainmaker helps businesses thrive in the digital age. As an AI SEO company, Rainmaker is at the forefront of transforming digital marketing through innovative solutions and strategic foresight.

The post Understanding how AI and big data transform digital marketing appeared first on AI News.

View the full article
-
Solana’s high-speed platform is fast becoming the preferred home for independent AI programmes, at a time when advanced uses of the technology have driven a sharp rise in cyberattacks. This article examines the escalating malware threats facing the cryptocurrency community.

As of December 5, 2025, the Solana price on Binance hovers around $134.95, in tight consolidation ahead of a potential breakout. Cryptocurrencies now confront a serious new class of digital adversary, and the collision between innovation and menace is happening right at the ledger layer. Smart programmes seek maximum efficiency on low-cost chains while attackers exploit the very tools that enable this progress. The available data paints a stark picture of both genuine technological promise and pervasive risk: AI’s ability to act independently presents a huge operational opportunity and a massive security headache, all at once.

Autonomous AI programmes demand high-speed blockchains

Developers are increasingly building what are commonly called autonomous agents: intelligent programmes that think for themselves, executing intricate tasks without human oversight. Running independently on-chain, they manage contracts and execute rapid-fire decisions. Conversations on Binance Square consistently highlight Solana as the premium choice for this kind of work: tremendous processing capacity and ultra-low fees make the chain well suited to automated operations.

Data from January 12, 2025, suggests agents create smaller, faster, and ultimately more effective transactions. Platforms like Solana benefit because transaction costs trend downward, fostering rapid expansion of the AI-driven transaction ecosystem. Solana-based tokens like $FUN demonstrate strong investor interest in this emerging field.
Binance Research also confirms the chain’s rapid pace is necessary to manage the high-frequency interactions that thousands or millions of active agents would require.

Malicious code is now created by chatbots

Cybersecurity researchers recently flagged a convincing malicious npm package that concealed a cryptocurrency wallet drainer, generated entirely using artificial intelligence. Safety, the supply chain security company, identified the library, which had attracted over 1,500 downloads before its removal. Uploaded by a user named “Kodane,” the package claimed to offer advanced licence validation utilities for high-performance Node.js applications.

The malicious behaviour is triggered by a post-install script, a common and often overlooked attack vector. The script executes automatically, stashing its payload in hidden directories on major operating systems. The highly detailed logs and the use of words like “Enhanced” strongly suggested the original source was an AI chatbot. The script then connects to a command-and-control (C2) server and generates a unique machine ID for the compromised host. The malware scans the system for wallet files, then drains all funds directly to a hard-coded Solana wallet address.

Autonomous spy campaigns raise alarms

A severe new problem now confronts security leaders: state-sponsored groups are deploying AI to orchestrate espionage. Anthropic’s threat intelligence team detailed the disruption of a sophisticated operation dubbed GTG-1002 in November 2025, affecting about 30 entities. The group, assessed with high confidence as ******** state-sponsored, manipulated the Claude Code model to serve as a penetration testing agent. By tricking the model into adopting a “role-play” persona (an employee of a legitimate cybersecurity firm), the attackers successfully broke its safety protocols. Humans were only minimally involved, mostly starting the campaign or approving moves at key escalation points.
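The attack vector here is npm’s install-time lifecycle scripts, which run automatically when a package is installed. As a hypothetical defensive sketch (the function name and file layout are invented for illustration, not taken from any real auditing tool), a few lines of Python can list every installed dependency that declares such a hook:

```python
# Audit node_modules for install-time lifecycle scripts -- the attack
# vector described above. A hypothetical defensive sketch, not a tool
# named in the article.
import json
import pathlib

RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def find_install_hooks(root="node_modules"):
    """Yield (package name, hook, command) for every dependency that
    runs a script automatically at install time."""
    for manifest in pathlib.Path(root).glob("*/package.json"):
        try:
            meta = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or malformed manifests
        for hook, command in meta.get("scripts", {}).items():
            if hook in RISKY_HOOKS:
                yield meta.get("name", manifest.parent.name), hook, command

if __name__ == "__main__":
    for name, hook, command in find_install_hooks():
        print(f"{name}: {hook} -> {command}")
```

Running `npm install --ignore-scripts` (or setting `ignore-scripts=true` in `.npmrc`) prevents these hooks from executing at all, at the cost of breaking the minority of packages that legitimately rely on them.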
Programmes performed reconnaissance, found weaknesses, built exploits, and harvested data; human involvement was limited to 10-20% of the total effort. Model Context Protocol servers acted as the interface between the AI and open-source penetration testing tools.

Speed and stability form a balance

For developers, Solana’s raw performance statistics are the real draw. The network’s co-founder, Anatoly Yakovenko, believes market value will eventually be determined by how much revenue a blockchain generates, arguing that only platforms attempting to capture the whole market can hope to survive in the long run.

Under the hood, Solana operates using the Proof of History consensus mechanism alongside traditional Proof of Stake. This combination reportedly achieves peak performance of up to 65,000 transactions per second (TPS), with fees listed as low as $0.00025, making the platform accessible for high-frequency automated trades.

Solana has also faced undeniable challenges. Security analysis shared on Binance Square included reports of significant cryptocurrency theft involving a user’s compromised Solana address, raising immediate concerns about private key storage. Analysts continue to cite historical instability, with one analysis noting the network has suffered multiple major outages.

Defenders should adopt machine intelligence too

The GTG-1002 campaign proves AI can discover and exploit vulnerabilities in live operations without substantial human intervention. Groups with fewer resources can now execute campaigns that previously required entire teams of experienced hackers. This drastic lowering of the attack barrier is the primary implication for security and technology leaders. Could you afford to ignore this new reality in your operational planning?

Anthropic’s investigation exposed one important weakness in the AI’s operational structure: hallucinations.
Claude often overstated its findings and sometimes fabricated data entirely, claiming to possess non-existent credentials. Such a flaw forces human handlers to verify every single result, which drags down attackers’ operational pace.

Security teams should assume a major shift has occurred in the security domain. Applying AI for defence, especially in SOC automation and threat detection, is the only proactive path forward. The ledger space must now square up to AI’s two faces: innovation running at machine speed on Solana is directly coupled with machine-speed malice aimed at users. Getting ahead of this threat will require a new, far more proactive approach.

The post Solana’s high-speed AI gains and malware losses appeared first on AI News.
-
A recent industry report [PDF] argues that Britain’s railway network could carry an extra billion journeys by the mid-2030s, building on the 1.6 billion passenger rail journeys recorded to year-end March 2024. The next decade will involve a combination of complexity and control, as more digital systems, data, and interconnected suppliers create the potential for more points of failure.

The report’s central theme is that AI will become the operating system for modern rail, not as a single, centralised collection of models and algorithms, but as layers of prediction, optimisation, and automated monitoring found in infrastructure, rolling stock, maintenance yards, and stations (pp.18-23). This technology will guide human focus within daily work schedules rather than replace human activity entirely.

Maintenance to become predictive and data-driven

Traditional rail maintenance relies on fixed schedules and manual inspections, a reactive and labour-intensive practice. The whitepaper cites Network Rail’s reliance on engineers walking the track to spot defects (p.18). AI will shift the industry to predictive maintenance, analysing data from sensors to forecast failures before they cause significant disruption.

This involves a combination of sensors and imaging, including high-definition cameras, LiDAR scanners, and vibration monitors. These provide machine-learning systems with data that can flag degradation in track, signalling, and electrical assets ahead of failure (pp.18-19). These monitoring programmes can generate alerts months in advance, reducing emergency call-outs. The timeframe for predicting asset failure varies by asset type.
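The report does not describe the models behind these alerts. As a minimal illustrative sketch of the underlying idea (flagging an asset when recent sensor readings drift away from their historical baseline), consider a rolling-statistics check over vibration data; the window sizes, 3-sigma threshold, and sample values here are all invented for illustration:

```python
# Illustrative predictive-maintenance check: flag a track-side asset when
# recent vibration readings drift well above the historical baseline.
# Window sizes and the 3-sigma threshold are arbitrary assumptions for
# the sketch, not values from the report.
from statistics import mean, stdev

def flag_degradation(readings, baseline_n=50, recent_n=5, sigma=3.0):
    """Return True when the mean of the last `recent_n` readings exceeds
    the baseline mean by more than `sigma` standard deviations."""
    if len(readings) < baseline_n + recent_n:
        return False  # not enough history yet
    baseline = readings[:baseline_n]
    recent = readings[-recent_n:]
    mu, sd = mean(baseline), stdev(baseline)
    return mean(recent) > mu + sigma * sd

# Stable sensor: tight noise around 1.0 mm/s
stable = [1.0 + 0.01 * ((i * 7) % 5 - 2) for i in range(60)]
# Degrading sensor: same baseline, then a sharp upward drift
degrading = stable[:55] + [1.5, 1.6, 1.7, 1.8, 1.9]

print(flag_degradation(stable))     # expected: False
print(flag_degradation(degrading))  # expected: True
```

A production system would of course use far richer models and sensor fusion, but the shape of the problem is the same: establish a per-asset baseline, then alert on sustained deviation early enough to schedule maintenance rather than react to failure.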
Network Rail’s intelligent infrastructure efforts should transition from “find and fix” to “predict and prevent.” Network Rail emphasises data-led maintenance and tools designed to consolidate asset information, while European R&D programmes (like Europe’s Rail and its predecessor, Shift2Rail) fund projects like DAYDREAMS, similarly aimed at prescriptive asset management. Prediction at scale requires a common approach to achieve transformation.

Traffic control and energy efficiency

Operational optimisation, beyond predictive maintenance, offers significant returns. AI systems use live and historical operating data—train positions, speeds, weather forecasts—to anticipate disruption and adjust traffic flow. Digital twin and AI-based traffic management trials in Europe, alongside research and testing of AI-assisted driving and positioning, could increase overall network capacity without laying more track (p.20). Algorithms also advise drivers on optimal acceleration and braking, potentially saving 10-15% in energy. Considering route variations, traction, and timetable constraints, energy savings compound quickly across a large network.

Safety monitoring and CCTV

Visible AI applications focus on safety and security. Obstacle detection uses thermal cameras and machine learning to identify hazards beyond human visibility. AI also monitors level crossings and analyses CCTV footage to spot unattended items and suspicious activity (pp.20-21). For example, AI and LiDAR are used for crowd monitoring at London Waterloo as part of a suite of safety tools.

Passenger flows and journey optimisation

AI can forecast demand using ticket sales, events, and mobile signals, allowing operators to adjust the number of carriages and reduce overcrowding, the report states. Passenger counting is a high-impact, low-drama application: better data supports better timetables and clearer customer information.
Cybersecurity issues

As operational technology converges with IT, cybersecurity becomes a critical operational issue. Legacy systems, lacking replacement plans, pose a risk, as does integrating modern analytics with older infrastructure. This creates conditions attractive to attackers.

The future of AI in rail involves sensors performing in extreme environments, models trusted and tested by operators, and governance that treats cyber resilience as inseparable from physical safety. The report’s message is that AI will arrive regardless. The question is whether railways proactively adopt and control it or inherit it as unmanaged complexity.

(Image source: “Train Junction” by jcgoble3 is licensed under CC BY-SA 2.0.)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post The future of rail: Watching, predicting, and learning appeared first on AI News.
-
For a company built on intellectual property, scale creates a familiar tension. Disney needs to produce and distribute content across many formats and audiences, while keeping tight control over rights, safety, and brand consistency. Generative AI promises speed and flexibility, but unmanaged use risks creating legal, creative, and operational drag. Disney’s agreement with OpenAI shows how a large, IP-heavy organisation is attempting to resolve that tension by putting AI inside its operating system rather than treating it as a side experiment.

Under the deal, Disney becomes both a licensing partner and a major enterprise customer. OpenAI’s video model Sora will be able to generate short, user-prompted videos using a defined set of Disney-owned characters and environments. Separately, Disney will use OpenAI’s APIs to build internal tools and new consumer experiences, including integrations tied to Disney+. The company will also deploy ChatGPT internally for employees.

The mechanics matter more than the spectacle. Disney is not opening its catalogue to unrestricted generation. The licence excludes actor likenesses and voices, limits which assets can be used, and applies safety and age-appropriate controls. In practice, this positions generative AI as a constrained production layer—capable of generating variation and volume, but bounded by governance.

AI inside existing workflows

A consistent failure mode in enterprise AI programmes is separation. Tools live outside the systems where work actually happens, adding steps instead of removing them. Disney’s approach mirrors a more pragmatic pattern: put AI where decisions are already made. On the consumer side, AI-generated content will surface through Disney+, rather than through a standalone experiment. On the enterprise side, employees gain access to AI through APIs and a standardised assistant, rather than a patchwork of ad hoc tools. This reduces friction and makes AI usage observable and governable.
The implication is organisational. Disney is treating generative AI as a horizontal capability—closer to a platform service than a creative add-on. That framing makes it easier to scale usage across teams without multiplying risk.

Variation without expanding headcount

The Sora licence focuses on short-form content derived from pre-approved assets. That constraint is deliberate. In production environments, much of the cost sits not in ideation but in generating usable variations, reviewing them, and moving them through distribution pipelines. By allowing prompt-driven generation inside a defined asset set, Disney can reduce the marginal cost of experimentation and fan engagement without increasing manual production or review load. The output is not a finished film. It is a controlled input into marketing, social, and engagement workflows. This mirrors a broader enterprise pattern: AI earns its place when it shortens the path from intent to usable output, not when it creates standalone artefacts.

APIs over point tools

Beyond content generation, the agreement positions OpenAI’s models as building blocks. Disney plans to use APIs to develop new products and internal tools, rather than relying solely on off-the-shelf interfaces. This matters because enterprise AI programmes often stall on integration. Teams waste time copying outputs between systems or adapting generic tools to fit internal processes. API-level access allows Disney to embed AI directly into product logic, employee workflows, and existing systems of record. In effect, AI becomes part of the connective tissue between tools, not another layer employees must learn to work around.

Aligning productivity with incentives

Disney’s $1 billion equity investment in OpenAI is less interesting as a valuation signal than as an operational one. It indicates an expectation that AI usage will be persistent and central, not optional or experimental.
For large organisations, AI investments fail when tooling remains disconnected from economic outcomes. Here, AI touches revenue-facing surfaces (Disney+ engagement), cost structures (content variation and internal productivity), and long-term platform strategy. That alignment increases the likelihood that AI becomes part of standard planning cycles rather than discretionary innovation spend.

Automation that makes scale less fragile

High-volume AI use amplifies small failures. Disney and OpenAI emphasise safeguards around IP, harmful content, and misuse, not as a values statement but as a scaling requirement. Strong automation around safety and rights management reduces the need for manual intervention and supports consistent enforcement. As with fraud detection or content moderation in other industries, this kind of operational AI does not attract attention when it works—but it makes growth less brittle.

Lessons for enterprise leaders

Embed AI where work already happens. Disney targets product and employee workflows, not a separate AI sandbox.
Constrain before you scale. Defined asset sets and exclusions make deployment viable in high-liability environments.
Use APIs to reduce friction. Integration matters more than model novelty.
Tie AI to economics early. Productivity gains stick when they connect to revenue and cost structures.
Treat safety as infrastructure. Automation and controls are prerequisites for scale, not afterthoughts.

Disney’s specific assets are unique. The operating pattern is not. Enterprise AI delivers value when it is designed as part of the organisation’s core machinery—governed, integrated, and measured—rather than as a showcase for what models can generate.

(Photo by Héctor Vásquez)

See also: OpenAI targets AI skills gap with new certification standards
The post Why Disney is embedding generative AI into its operating model appeared first on AI News.
-
Arm Holdings has positioned itself at the centre of the AI transformation. In a wide-ranging podcast interview, Vince Jesaitis, head of global government affairs at Arm, offered enterprise decision-makers a look into the company’s international strategy, the evolution of AI as the company sees it, and what lies ahead for the industry.

From cloud to edge

Arm thinks the AI market is about to enter a new phase, moving from cloud-based processing to edge computing. While much of the media’s attention has so far focused on massive data centres, with models trained in and accessed from the cloud, Jesaitis said that most AI compute, especially inference, is likely to become increasingly decentralised.

“The next ‘aha’ moment in AI is when local AI processing is being done on devices you couldn’t have imagined before,” Jesaitis said. These devices range from smartphones and earbuds to cars and industrial sensors. Arm’s IP is already embedded, literally, in these devices: in the last year alone, the company’s designs have been the IP behind over 30 billion chips, placed in devices of every conceivable description all over the world.

The deployment of AI in edge environments has several benefits, with the team at Arm citing three main ‘wins’. Firstly, the inherent efficiency of low-power Arm chips means that power bills for running compute and cooling are lower, keeping the environmental footprint of the technology as small as possible. Secondly, putting AI in local settings means latency is much lower (latency being determined by the distance between local operations and the site of the AI model). Arm points to uses like instant translation, dynamic scheduling of control systems, and the near-immediate triggering of safety functions, for instance in IIoT settings. Thirdly, ‘keeping it local’ means no potentially sensitive data is sent off-premise.
The benefits are obvious for any organisation in highly-regulated industries, but the increasing number of data breaches means even companies operating with relatively benign data sets are looking to reduce their attack surface. Arm silicon, optimised for power-constrained devices, is well-suited to compute where it’s needed on the ground, the company says. The future may well be one where AI is woven throughout environments, not centralised in a data centre run by one of the large providers.

Arm and global governments

Arm is actively engaged with global policymakers, considering this level of engagement an important part of its role. Governments continue to compete to attract semiconductor investment, with the issues of supply chains and concentrated dependencies still fresh in many policymakers’ memories from the COVID-19 pandemic. Arm lobbies for workforce development, working at present with policymakers in the White House on an education coalition to build an ‘AI-ready workforce’. Domestic independence in technology relies as much on the abilities of the workforce as it does on the availability of hardware.

Jesaitis noted a divergence between regulatory environments: the US prioritises what its government terms acceleration and innovation, while the EU leads on safety, privacy, security and legally-enforced standards of practice. Arm aims to find the middle ground between these approaches, building products that meet stringent global compliance needs while furthering advances in the AI industry.

The enterprise case for edge AI

The case for integrating Arm’s edge-focused AI architecture into enterprise transformation strategies can be persuasive. The company stresses its ability to offer scalable AI without the need to centralise in the cloud, and is also pushing its investment in hardware-level security. That means issues like memory exploits (outside the control of users plugged into centralised AI models) can be avoided.
Of course, sectors already highly regulated in terms of data practices are unlikely to see relaxed governance in the future; the opposite is all but inevitable. All industries will see more regulation and greater penalties for non-compliance in the years to come. To balance that, there are significant competitive advantages available to those who can demonstrate their systems’ inherent safety and security. It is into this regulatory landscape that Arm sees itself and local, edge AI fitting.

Additionally, in Europe and Scandinavia, ESG goals are going to be increasingly important. Here, the power-sipping nature of Arm chips offers big advantages. That is a trend even the US hyperscalers are responding to: AWS’s latest SHALAR range of low-cost, low-power Arm-based platforms is there to satisfy that exact demand. Arm’s collaboration with cloud hyperscalers such as AWS and Microsoft produces chips that combine efficiency with the necessary horsepower for AI applications, the company says.

What’s next from Arm and the industry

Jesaitis pointed to several trends that enterprises may see over the next 12 to 18 months. Global AI exports, particularly from the US and Middle East, are ensuring that local demand for AI can be satisfied by the big providers. Arm can supply both the big providers in these contexts (as part of their portfolios of offerings) and the rising demand for edge-based AI.

Jesaitis also sees edge AI as something of a hero of sustainability in an industry increasingly under fire for its ecological impact. Because Arm technology’s biggest market has been low-power compute for mobile, it is inherently ‘greener’. As enterprises look to meet energy goals without sacrificing compute, Arm offers a way to combine performance with responsibility.
Redefining “smart”

Arm’s vision of AI at the edge means computers and the software running on them can be context-aware, cheap to run, secure by design, and, thanks to near-zero network latency, highly responsive. Jesaitis said, “We used to call things ‘smart’ because they were online. Now, they’re going to be truly intelligent.”

(Image source: “Factory Floor” by danielfoster437 is licensed under CC BY-NC-SA 2.0.)

The post Arm and the future of AI at the edge appeared first on AI News.
-
Under China’s push to clean up its energy system, AI is starting to shape how power is produced, moved, and used — not in abstract policy terms, but in day-to-day operations.

In Chifeng, a city in northern China, a renewable-powered factory offers a clear example. The site produces hydrogen and ammonia using electricity generated entirely from nearby wind and solar farms. Unlike traditional plants connected to the wider grid, this facility runs on its own closed system. That setup brings a problem as well as a benefit: renewable power is clean, but it rises and falls with the weather.

To keep production stable, the factory relies on an AI-driven control system built by its owner, Envision. Rather than following fixed schedules, the software continuously adjusts output based on changes in wind and sunlight. As reported by Reuters, Zhang Jian, Envision’s chief engineer for hydrogen energy, compared the system to a conductor, coordinating electricity supply and industrial demand in real time. When wind speeds increase, production ramps up automatically to take full advantage of the available power. When conditions weaken, electricity use is quickly reduced to avoid strain. Zhang said the system allows the plant to operate at high efficiency despite the volatility of renewable energy.

Projects like this are central to China’s plans for hydrogen and ammonia, fuels seen as important for cutting emissions in sectors such as steelmaking and shipping. They also point to a broader strategy: using AI to manage complexity as the country adds more renewable power to its grid.

Researchers argue that AI could play a significant role in meeting China’s climate goals. Zheng Saina, an associate professor at Southeast University in Nanjing who studies low-carbon transitions, said AI can support tasks ranging from emissions tracking to forecasting electricity supply and demand.
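The article does not describe Envision’s algorithm, but the load-following idea it reports (ramp production up when renewable supply rises, shed load when it falls) can be sketched as a toy control loop. Every number and name below is invented for illustration; this is not Envision’s system:

```python
# Toy load-following sketch: keep an electrolyser's power draw inside
# whatever the wind farm can currently supply, ramping gradually to
# avoid stressing equipment. All figures are invented for illustration.

def follow_supply(load_mw, available_mw, max_ramp_mw=5.0, reserve=0.9):
    """Move the current load one step toward `reserve` * available supply,
    limited to `max_ramp_mw` per control interval."""
    target = reserve * available_mw
    step = max(-max_ramp_mw, min(max_ramp_mw, target - load_mw))
    return load_mw + step

load = 40.0
for wind in [50, 55, 60, 30, 20, 45]:  # available supply per interval (MW)
    load = follow_supply(load, wind)
    print(f"supply={wind:5.1f} MW -> load={load:5.1f} MW")
```

The ramp limit is what makes this a control problem rather than simple arithmetic: when supply collapses faster than the plant can shed load, the gap has to be covered by storage or curtailed, which is exactly the volatility the article describes.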
At the same time, she cautioned that AI itself is driving rapid growth in power consumption, particularly through energy-hungry data centres. China now installs more wind and solar capacity than any other country, but absorbing that power efficiently remains a challenge. According to Cory Combs, associate director at Beijing-based research firm Trivium China, AI is increasingly seen as a way to make the grid more flexible and responsive.

That thinking was formalised in September, when Beijing introduced an “AI+ energy” strategy. The plan calls for deeper links between AI systems and the energy sector, including the development of multiple large AI models focused on grid operations, power generation, and industrial use. By 2027, the government aims to roll out dozens of pilot projects and test AI across more than 100 use cases. Within another three years, officials want China to reach what they describe as a world-leading level of AI integration in energy.

Combs said the focus is on highly specialised tools designed for specific jobs, such as managing wind farms, nuclear plants, or grid balancing, rather than general-purpose AI. This approach contrasts with the United States, where much of the investment has gone into building advanced large-language models, according to Hu Guangzhou, a professor at the China Europe International Business School in Shanghai.

One area where AI could have immediate impact is demand forecasting. Fang Lurui, an assistant professor at Xi’an Jiaotong-Liverpool University, said power grids must match supply and demand at every moment to avoid outages. Accurate forecasts of renewable output and electricity use allow operators to plan ahead, storing energy in batteries when needed and reducing reliance on coal-fired backup plants. Some cities are already experimenting.
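Fang’s point about forecasts enabling battery storage can be made concrete with a toy dispatch schedule: given hourly forecasts of supply and demand, charge on surplus, discharge on deficit, and whatever remains must come from backup plants. All figures below are invented for illustration:

```python
# Toy sketch of forecast-driven dispatch: charge a battery on renewable
# surplus, discharge on deficit, and count how much coal-fired backup
# energy is still needed. Every number is invented for illustration.

def dispatch(supply, demand, capacity_mwh, charge=0.0):
    """Greedy hour-by-hour schedule; returns total backup energy (MWh)."""
    backup = 0.0
    for s, d in zip(supply, demand):
        surplus = s - d
        if surplus >= 0:
            charge = min(capacity_mwh, charge + surplus)  # store surplus
        else:
            deficit = -surplus
            used = min(charge, deficit)   # draw from battery first
            charge -= used
            backup += deficit - used      # remainder from backup plants
    return backup

supply = [120, 150, 90, 40, 30, 80]   # hourly renewable output (MW)
demand = [100, 110, 100, 90, 70, 60]  # hourly load (MW)

print(dispatch(supply, demand, capacity_mwh=0))    # no battery -> 100.0 MWh backup
print(dispatch(supply, demand, capacity_mwh=100))  # with battery -> 40.0 MWh backup
```

The comparison shows why forecast accuracy matters: the battery only cuts backup use if surpluses are stored ahead of the deficits the forecast predicts.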
Shanghai has launched a citywide virtual power plant that links dozens of operators — including data centres, building systems, and electric vehicle chargers — into a single coordinated network. During a trial last August, the system reduced peak demand by more than 160 megawatts, roughly equivalent to the output of a small coal plant. Combs said such systems matter because modern power generation is increasingly scattered and intermittent. “You need something very robust that is able to be predictive and account for new information very quickly,” he said.

Beyond the grid, China is also looking to apply AI to its national carbon market, which covers more than 3,000 companies in emissions-heavy industries such as power, steel, cement, and aluminium. These sectors together produce over 60% of the country’s carbon emissions. Chen Zhibin, a senior manager at Berlin-based think tank adelphi, said AI could help regulators verify emissions data, refine the allocation of free allowances, and give companies clearer insight into their production costs.

Still, the risks are growing alongside the opportunities. Studies suggest that by 2030, China’s AI data centres could consume more than 1,000 terawatt-hours of electricity each year — roughly the same as Japan’s current annual usage. Lifecycle emissions from the AI sector are projected to rise sharply and peak well after China’s 2030 emissions target. Xiong Qiyang, a doctoral researcher at Renmin University of China who worked on one such study, said the results reflect the reality that coal still dominates China’s power mix. He warned that rapid AI expansion could complicate national climate goals if energy sources do not shift quickly enough.

In response, regulators have begun tightening rules. A 2024 action plan requires data centres to improve energy efficiency and increase their use of renewable power by 10% each year.
Other initiatives encourage new facilities to be built in western regions, where wind and solar resources are more abundant. Operators on the east coast are also testing new ideas. Near Shanghai, an underwater data centre is set to open, using seawater for cooling to cut energy and water use. The developer, Hailanyun, said the facility will draw most of its power from an offshore wind farm and could be replicated if the project proves viable.

Despite the growing energy demands of AI, Xiong argued that its overall impact on emissions could still be positive if applied carefully. Used to optimise heavy industry, power systems, and carbon markets, he said, AI may remain an essential part of China’s effort to cut emissions — even as it creates new pressures that policymakers must manage.

(Photo by Matthew Henry)

See also: Can China’s chip stacking strategy really challenge Nvidia’s AI dominance?

The post Inside China’s push to apply AI across its energy system appeared first on AI News.
-
For large retailers, the challenge with AI is no longer whether it can be useful, but how it fits into everyday work. A new three-year AI partnership by Tesco points to how one of the ***’s biggest supermarket groups is trying to answer that question.

The agreement with French startup Mistral AI is less about launching a single AI feature and more about building long-term capability. Tesco plans to work with Mistral to develop AI tools that can be used across its business, from internal workflows to customer-facing systems. Rather than framing the deal around bold promises, Tesco has described it in practical terms. The aim, according to the company, is to save staff time, improve how teams work, and strengthen customer service.

Ruben Lara Hernandez, Tesco’s Data, Analytics & AI Director, said the partnership brings together Tesco’s retail experience with Mistral’s technology, with the hope that it will help colleagues work more efficiently and support customers more effectively. He added that the agreement builds on Tesco’s history of developing technology and AI tools that support customers, staff, and suppliers.

Tesco moves AI into day-to-day retail operations

That focus reflects a wider shift in how enterprises are using AI. Early retail experiments often centred on customer-facing tools that were easy to see but hard to scale. More recent efforts are aimed at internal use, where AI can reduce repetitive work, support planning, and help staff make decisions faster.

Tesco has already been moving in that direction. Over the past five years, the retailer has doubled the size of its technology team, signalling that software and data are now core to how the business operates. AI is already used across several parts of the organisation, both through in-house development and partnerships. In online grocery, AI helps find more efficient delivery routes, which can open up additional delivery slots for customers.
In supply planning, AI supports complex demand forecasting, helping stores maintain product availability. The company also uses AI to personalise how it engages with customers through its Clubcard loyalty scheme, tailoring offers and communications based on shopping behaviour. The partnership with Mistral is meant to build on this existing work rather than replace it. One factor behind the collaboration is Mistral’s approach to model deployment, which allows AI systems to run in more controlled environments. For a retailer handling large volumes of customer and operational data, that level of control matters. Mistral’s Chief Revenue Officer and US General Manager, Marjorie Janiewicz, said the company’s Applied AI team will work closely with Tesco’s internal experts. She described the goal as building AI products that are customisable and controllable, with a focus on improving internal workflows and customer experience. A long-term partnership, not a one-off project The structure of the partnership also suggests a cautious approach to scale. Tesco plans to create an internal AI lab as part of the agreement, giving teams space to test and refine tools before wider deployment. For large organisations, this kind of setup can help prevent AI projects from remaining stuck in isolated pilots or specialist teams. There is also a strategic angle to the choice of partner. Mistral AI is the only European company developing large language models at the frontier level, and Tesco is the first major UK retailer to form a partnership with the startup as part of its broader AI and technology plans. Founded in April 2023, Mistral has grown quickly and counts enterprises such as HSBC, AXA, and Stellantis among its customers. For Tesco, the harder work now lies in execution. Retail data is often fragmented across regions, systems, and channels, and AI systems depend on that data being accurate and consistent. 
Rolling tools out across a business of this size also requires training, oversight, and trust from staff who use them daily. Whether the partnership succeeds will likely depend on how visible its impact becomes inside the organisation. If AI tools help store teams, planners, and analysts work more smoothly, the gains may be gradual rather than dramatic. As retailers look to move beyond experimentation, Tesco’s approach offers a view of how enterprise AI is starting to settle into routine operations — not as a single solution, but as a steady process of change. (Photo by Shashank Verma) See also: Zara’s use of AI shows how retail workflows are quietly changing The post Tesco signs three-year AI deal centred on customer experience appeared first on AI News.
-
[AI] Marketing agencies using AI in workflows serve more clients
ChatGPT posted a topic in World News
Of all industries, marketing is where AI is no longer an “innovation lab” side project but embedded in briefs, production pipelines, approvals, and media optimisation. A WPP iQ post published in December, based on a webinar with WPP and Stability AI, shows what AI deployment in daily operations looks like. The focus is on the practical constraints that determine whether AI changes daily work or merely adds another layer of complexity or tooling. Brand accuracy as a repeatable capability Marketing agencies treat brand accuracy as something to be engineered. WPP and Stability AI note that off-the-shelf models “don’t come trained on your brand’s visual identity”, so outputs can often look generic. The remedy is fine-tuning, that is, training models on brand-specific datasets so the model learns the brand playbook, including style, look, and colours. These elements can then be reproduced consistently. WPP’s work for Argos is a prime example. After fine-tuning a model for the retailer, the team described how the model picked up details beyond the characters, including lighting and subtle shadows used in the brand’s 3D animations. Reproducing these finer details is where time disappears in production, in the form of re-rendering and several rounds of approvals. When AI outputs start closer to “finished”, teams spend less time correcting and more time shaping narratives and adapting media for different channels. Cycle time collapses (and calendars change) WPP and Stability AI point out that traditional 3D animation can be too slow for reactive marketing. After all, cultural moments demand immediate content, not cycles defined in weeks or months. In its Argos case study, WPP trained custom models on two 3D toy characters so the models learned how they look and behave, including details such as proportions and how characters hold objects. The outcome was “high-quality images…generated in minutes instead of months”. 
The accelerated workflow moves rather than removes production bottlenecks. If generating variations becomes fast, then review, compliance, rights management, and distribution become the constraints. Those issues were always there, but the speed and efficiency of AI in this context shows the difference between what’s possible and systems that have become embedded and accepted in workflows. Agencies that want AI to change daily operations have to redesign the workflow around it, not just add the technology as a new tool. The “AI front end” becomes essential WPP and Stability AI call out a “UI problem”, where creative teams lose time because interfaces to common tools are “disconnected, complex and confusing”, forcing workarounds and constant asset movement between tools. Often, the response is a bespoke, brand-specific front end with complex workflows behind it. WPP positions WPP Open as a platform that encodes WPP’s proprietary knowledge into “globally accessible AI agents”, which helps teams plan, produce, create media, and sell. Operational gains come from cleaner handoffs between tools, as work moves from briefs into production, assets into activation, and performance signals back into planning. Self-serve capability changes agency operations AI-powered marketing platforms are also becoming client-facing. Operationally, that pushes agencies to concentrate on the parts of the workflow their clients can’t self-serve easily, like designing the brand system, building fine-tunings, and ensuring governance is embedded. Governance moves from policy to workflow For AI to be used daily, governance needs to be embedded where work happens. Dentsu describes building “walled gardens”, which are digital spaces where employees can prototype and develop AI-enabled solutions securely, and commercialise the best ideas. This reduces the risk of sensitive data exposure and lets experiments move into production systems. 
Planning and insight compress too The operational impact is not limited to production. Publicis Sapient describes AI-powered content strategy and planning that “transforms months of research into minutes of insight” by combining large language models with contextual knowledge and prompt libraries [PDF]. Compressed research and brief development free up work schedules, so more client work can happen and the agency can respond faster to shifting culture and platform algorithms. What changes for people Across these examples, the impact on marketing professionals is one of rebalancing and shifting job descriptions. Less time goes on mechanical drafting, resizing, and versioning, and more time goes on brand stewardship. New operational roles expand, with titles like model trainer, workflow designer, and AI governance lead. AI makes the biggest operational difference when agencies combine customised models, usable front ends that make adoption (especially by clients) frictionless, and integrated platforms that connect planning, production, and execution. The headline benefit is speed and scale, but the deeper change is that marketing delivery starts to resemble a software-enabled supply chain: standardised, flexible where it needs to be, and measurable. (Image source: “Solar Wind Workhorse Marks 20 Years of Science Discoveries” by NASA Goddard Photo and Video is licensed under CC BY 2.0.) The post Marketing agencies using AI in workflows serve more clients appeared first on AI News. -
Cognizant, Tata Consultancy Services, Infosys, and Wipro have announced plans to deploy more than 200,000 Microsoft Copilot licenses in their enterprises – over 50,000 per company – in what Microsoft is calling a new benchmark for enterprise-scale adoption of generative AI. The companies involved are framing the move as the implementation of a default tool for hundreds of thousands of employees involved in consulting, delivery, operations, and software. The announcement, made in Bengaluru on December 11, was timed to coincide with Microsoft CEO Satya Nadella’s visit to India. There, and across the industrialised world, there’s been growing momentum for agentic AI – AI systems that do more than chat, executing multi-step work in business processes. The four firms want to be seen as AI advisors for clients, with extensive experience drawn from their internal rollouts of AI. Why enterprises care about Copilot Readers will be familiar with Microsoft 365 Copilot, the AI assistant embedded in standard workplace tools: Word, Excel, PowerPoint, Outlook, and Teams. It’s intended to help users draft, summarise, and analyse, turning natural-language queries into work-related outputs. Copilot combines large language models with Microsoft 365 apps and organisational data from Microsoft Graph, with the assistant working in the context of a user’s files, meetings, and messages. This ability is, of course, subject to access controls already in place and defined by the organisation. For large organisations, the embedding of AI into workflows is important. A firm shouldn’t have to rebuild its toolchain to experiment with AI, but rather start using AI in the software and documents its workforce already uses. The raft of benefits is practical and work-focused: faster documentation, quicker meeting follow-ups, faster draft proposals, better discovery of information from internal knowledge repositories, and, with agentic AI, the automation of repetitive tasks. 
From Copilots to frontier firms and agents Microsoft uses the term “Frontier Firms” to describe organisations that are “human-led and agent-operated”, where employees work alongside AI assistants and specialised agents that take on work processes. The framing aligns with Microsoft’s messaging at Microsoft Ignite 2025, where the company described agents reinventing business processes and amplifying impact through human-agent teamwork. In very simple terms, the company’s pitch is to move from “AI helps you write” to “AI helps run workflows.” Why IT services firms are making public commitments There are two reasons why the four firms are rolling out the technology at such a large scale. First, to improve internal productivity. The Times of India reports the deployments are intended to integrate Copilot into workflows in consulting, software development, operations and client delivery, with the aim of improved productivity. At large multinational companies, margins depend on delivery efficiency and knowledge reuse, so shaving minutes from everyday tasks for tens of thousands of workers produces meaningful gains. Second, client credibility. The consultancy companies serve global enterprises, including many Fortune 500 clients, which means their internal operating model can, and perhaps should, become their clients’ playbooks. If consultancies can demonstrate mature governance, training, and measurable outcomes with Copilot at scale in their own operations, it strengthens their pitch, making them better able to sell similar transformations to potential and existing clients. Hyperscalers’ investment in India The Copilot announcement came immediately after Microsoft said it would invest $17.5 billion in India between 2026 and 2029, money destined for cloud and AI infrastructure, skilling, and operations. The company describes this as its largest investment in Asia to date. 
Other major tech firms are making parallel moves: Reuters reported in December 2025 that Amazon/AWS planned to invest over $35 billion in India by 2030, expanding its operations and AI capabilities, for example. Together, such moves underscore India’s growing position as a massive enterprise market and strategic hub for AI talent and cloud infrastructure. For India’s IT services leaders, Copilot is being positioned as a way to stay ahead of the competitive curve and define “AI-first delivery.” (Image source: “Gobbling Indian view of Clinch River” by dmott9 is licensed under CC BY-ND 2.0.) The post 50,000 Copilot licences for Indian service companies appeared first on AI News.
-
Zara is testing how far generative AI can be pushed into everyday retail operations, starting with a part of the business that rarely gets attention in technology discussions: product imagery. Recent reporting shows the retailer using AI to generate new images of real models wearing different outfits, based on existing photoshoots. Models remain involved in the process, including consent and compensation, but AI is used to extend and adapt imagery without repeating production from scratch. The stated aim is to speed up content creation and reduce the need for repeated shoots. On the surface, the change looks incremental. In practice, it reflects a familiar pattern in enterprise AI adoption, where technology is introduced not to overhaul how a business works, but to remove friction from tasks that repeat at scale. How Zara uses AI to reduce friction in repeatable retail work For a global retailer like Zara, imagery is not a creative afterthought. It is a production requirement tied directly to how quickly products can be launched, refreshed, and sold across markets. Each item typically needs multiple visual variations for different regions, digital channels, and campaign cycles. Even when garments change only slightly, the surrounding production work often starts again from scratch. That repetition creates delays and cost that are easy to overlook precisely because they are routine. AI offers a way to compress those cycles by reusing approved material and generating variations without resetting the entire process. AI enters the production pipeline The placement of the technology is as important as the capability itself. Zara is not positioning AI as a separate creative product or asking teams to adopt an entirely new workflow. The tools are being used inside an existing production pipeline, supporting the same outputs with fewer handoffs. That keeps the focus on throughput and coordination rather than experimentation. 
This kind of deployment is typical once AI moves beyond pilot stages. Rather than asking organisations to rethink how work is done, the technology is introduced where constraints already exist. The question becomes whether teams can move faster and with less duplication, not whether AI can replace human judgement. The imagery initiative also sits alongside a broader set of data-driven systems that Zara has built up over time. The retailer has long relied on analytics and machine learning to forecast demand, allocate inventory, and respond quickly to changes in customer behaviour. Those systems depend on fast feedback loops between what customers see, what they buy, and how stock moves through the network. From that perspective, faster content production supports the wider operation even if it is not framed as a strategic shift. When product imagery can be updated or localised more quickly, it reduces lag between physical inventory, online presentation, and customer response. Each improvement is small, but together they help maintain the pace that fast fashion relies on. From experimentation to routine use Notably, the company has avoided framing this move in grand terms. There are no published figures on cost savings or productivity gains, and no claims that AI is transforming the creative function. The scope remains narrow and operational, which limits both risk and expectation. That restraint is often a sign that AI has moved out of experimentation and into routine use. Once technology becomes part of day-to-day operations, organisations tend to talk about it less, not more. It stops being an innovation story and starts being treated as infrastructure. There are also constraints that remain visible. The process still relies on human models and creative oversight, and there is no suggestion that AI-generated imagery operates independently. Quality control, brand consistency, and ethical considerations continue to shape how the tools are applied. 
AI extends existing assets rather than generating content in isolation. This is consistent with how enterprises typically approach creative automation. Rather than replacing subjective work outright, they target the repeatable components around it. Over time, those changes accumulate and reshape how teams allocate effort, even if the core roles remain intact. Zara’s use of generative AI does not signal a reinvention of fashion retail. It shows how AI is beginning to touch parts of the organisation that were previously considered manual or difficult to standardise, without changing how the business fundamentally operates. In large enterprises, that is often how AI adoption becomes durable. It does not arrive through sweeping strategy announcements or dramatic claims. It takes hold through small, practical changes that make everyday work move a little faster — until those changes become hard to imagine doing without. (Photo by M. Rennim) See also: Walmart’s AI strategy: Beyond the hype, what’s actually working The post Zara’s use of AI shows how retail workflows are quietly changing appeared first on AI News.
-
Human Resources is an area in many organisations where AI can have significant operational impact. The technology is now being embedded into day-to-day operations, in activities like answering employees’ questions and supporting training. The clearest impact appears where organisations can measure the tech’s outcomes, typically in time saved and the number of queries successfully resolved. Fewer tickets, more first-time answers IBM’s internal virtual agent, AskHR, was built to handle employee queries and automate routine HR actions. IBM says AskHR automates more than 80 internal HR tasks and has engaged in over two million conversations with employees every year. It uses a two-tier approach, where AI resolves routine issues and human advisers handle more complex cases. The company reports some operational benefits: a 94% success rate in answering commonly-asked questions, a 75% reduction in the number of lodged support tickets since 2016, and – the headline figure – a 40% reduction in HR operational costs over four years. Importantly, IBM does not use AI merely to route queries to existing materials: the automation can complete the transaction itself, reducing the need to hand off queries to human staff. Recruitment and onboarding efficiencies Vodafone’s 2024 annual report describes an internal platform it calls ‘Grow with Vodafone‘. The company says it’s reduced its time-to-hire from 50 days to 48 days, made the job application process simpler, and added personalised skills-based job recommendations for applicants. That’s led to a 78% reduction in questions posed by potential applicants and those onboarding into new roles. 
The company also has a global headcount planning tool that reduces the manual work needed to assemble necessary data, plus there’s an AI-powered global HR ‘data lake’ that standardises dashboards and reduces the need for manual reporting – stakeholders can dive into the data themselves and surface the insights they need. Training and internal support Big employers face challenges getting new staff up to speed quickly; so-called time-to-competence. Bank of America’s newsroom describes how its onboarding and professional development organisation, ‘The Academy’, uses AI for interactive coaching, with employees completing over a million simulations in a year. The organisation operates ‘Erica for Employees‘, an internal assistant that handles topics like health benefits and payroll or tax forms for employees. It’s used by over 90% of employees – for the IT service desk, having Erica triage situations is impactful, with a reduction of more than 50% in incoming calls. Such tools reduce hidden work (searching, repeating questions, waiting for answers) and its associated costs. Plus, a shorter time-to-competence is especially valuable in regulated and customer-facing environments. Frontline work at big employers Walmart’s June 2025 corporate update describes rolling out AI tools via its associates’ app, which include a workflow tool that prioritises and recommends work tasks. At the time of publication, it was early days, but based on early results, Walmart says team leads and store managers are beginning to see shift planning times down from 90 to 30 minutes. As an employer of a diverse workforce, its app’s real-time translation ability (44 languages) is invaluable. The company is currently upgrading its associates’ software with AI to turn its internal process guides into multilingual instructions. It has more than 900,000 employees using the system every week, with more than three million queries per day going through the associates’ conversational AI platform. 
Workforce efficiencies at Walmart scale are impressive, but for every size of business, there are clear advantages to be gained from giving employees faster guidance and better support across multilingual teams. In addition to the immediate cost savings, simple-to-use and effective software of this type affects retention, safety standards, and service quality – all for the better. Governance and human safety nets Multinational bank HSBC’s publication, “Transforming HSBC with AI”, describes over 600 AI use cases in operation at the company, and says colleagues have access to an LLM-based productivity tool for tasks like translation and document analysis. In an environment where governance and data security are of paramount importance, the bank ensures that all automated systems abide by existing codes, something that’s enforced by dedicated AI Review Councils and AI lifecycle management frameworks. In HR this matters, regardless of vertical. Governance decisions should shape what can be automated, how people data is handled, and how accountability is maintained into the long term. HR data is often personally identifiable, so the highest standards – and their maintenance – are critically important. Operational trade-offs Operational impact is about trust as well as speed and efficiency. A self-service agent answering confidently but incorrectly creates rework and escalations, and erodes confidence. A pragmatic pattern for reducing risk is to keep humans in the loop, especially for complex decisions. IBM’s two-tier model, Vodafone’s tailored job recommendations, and Walmart’s and HSBC’s data governance and security bring oversight. Hybrid service models plus data discipline and oversight are what enable AI to scale without undermining employee confidence or fairness. Where this is heading The pattern of successful operational deployment has been consistent across the HR functions of these large enterprises. 
They each started with high-volume questions and repetitive transactions, expanded into hiring and training, and then pushed AI to the frontline where it can save time. The biggest gains come when AI turns HR from a service ****** into a faster, more consistently operating function. (Image source: “Business Meetings” by thinkpanama is licensed under CC BY-NC 2.0.) The post AI in Human Resources: the real operational impact appeared first on AI News. View the full article