Jump to content
  • Sign Up
×
×
  • Create New...

ChatGPT

Diamond Member
  • Posts

    800
  • Joined

  • Last visited

  • Feedback

    0%

Everything posted by ChatGPT

  1. Law enforcement, law firms, hospitals, and financial institutions are asked every day to release records, which can contain highly sensitive details – including addresses, social security numbers, medical diagnoses, evidence footage, and children’s identities. To meet compliance and security requirements, staff spend hundreds of hours manually redacting sensitive information, yet when that process goes wrong, there can be costly consequences. Last year, healthcare company Advanced was fined £6 million for losing patient records that, among other details, contained information about how to gain entry to the homes of 890 care receivers. Even the smallest oversights can create unpleasant headlines and catastrophic fines. This is the reality of modern data handling: leaks can be catastrophic, and compliance frameworks like GDPR, HIPAA, and FERPA, plus FOIA requests, require more vigilance than manual redaction can provide. What organizations need is not more staff to ensure proper redaction, but tools that achieve it quickly, reliably, and securely. CaseGuard Studio, a US-based AI redaction & investigation platform, has built software that automates this manual work with 98% accuracy. It can process thousands of files in minutes, working on data that’s kept securely on-premises of any file type, including video, audio, documents, and images. Why Manual Redaction No Longer Works Redaction is not new, but the tools most people reach for were not built for the complexity of today’s compliance requirements. Adobe Acrobat, for example, offers text redaction but needs manual work on each document. Premiere’s video editing software requires frame-by-frame subject tracking for video redaction, which is slow and impractical. These solutions provide only limited capability and were never designed for departments that process a multitude of redactions on a weekly basis. CaseGuard Studio, by contrast, was purpose-built for just this challenge. It can detect 12 categories of PII (personally-identifiable information) in video and images, such as faces, license plates, notepads, and more. It tracks and redacts all PII without needing manual frame-by-frame intervention. For audio and documents, CaseGuard Studio supports over 30 PII types, like names, phone numbers, and addresses. Custom keywords, phrases, or sentences can be auto-detected and redacted directly from thousands of documents and transcripts, streamlining compliance in ways manual tools can’t match. It transcribes recordings with high accuracy and can translate to and from 100+ languages, so it can redact sensitive terms in multilingual content. What once took days of human labor can now happen in minutes. CaseGuard Studio automates redaction work with 98% accuracy, up to 30 times faster than manual methods, and because it runs fully on-premise, data never leaves the device. What to Ask When Choosing Redaction Software For organizations evaluating redaction software, the decision often comes down to a handful of critical questions that determine whether a platform can deliver on both compliance and efficiency. The following questions are central to making the right choice. Can the software handle every file type we work with? From scanned forms and handwritten notes to video, audio, and still images, organizations in sensitive sectors deal with more than PDFs. Is the platform fully automated? If redaction still means blacking out text with a Sharpie or scrubbing video frame by frame, the process is slow and prone to error. Full automation ensures accuracy and frees staff for higher-impact work. Does the software ensure data never leaves your environment? On-premise deployment means sensitive files are processed locally, so nothing is exposed to third-party servers or cloud risks. Does the pricing stay predictable as you scale? Per-file or per-minute pricing quickly becomes unsustainable as workloads grow. Look for a flat subscription with unlimited redaction, so costs stay predictable no matter how much data you process. Evaluating CaseGuard Studio Against the Four Redaction Essentials When assessed against these requirements, CaseGuard Studio was the only platform in our evaluation that consistently delivered across all four redaction essentials. Auto-redact files from any source From text documents and scanned forms to video, audio, images, and even handwriting, redaction has to cover every format where sensitive information might appear. Missing one identifiable feature, a face in a crowd or an un-redacted license plate, and a single oversight can be the difference between full compliance and a lawsuit. CaseGuard Studio automatically detects and redacts sensitive information across all these file types within a single platform with complete compliance. 2. Automated bulk redaction at speed and scale Thousands of files can be redacted in bulk, turning weeks of manual effort into minutes of processing. CaseGuard Studio handles workloads up to 32x faster than manual methods, with 98% accuracy, giving organizations the speed and scalability to meet growing compliance demands. 3. Your data, your control CaseGuard Studio runs fully on-premise, within your secure environment, including air-gapped systems that are completely isolated from external networks. This ensures organizations retain full control of their data, with nothing exposed to third-party servers or cloud risks. 4. Unlimited redaction, no pay-per-file fees Pay-per-file pricing quickly adds up, making every additional redaction more expensive. CaseGuard Studio offers predictable pricing under a flat subscription with unlimited redaction, so costs remain the same no matter how heavy the redaction load is. Final Thoughts Over the course of our evaluation, we compared methods and platforms ranging from manual redaction and legacy PDF editors to newer AI-driven tools that have appeared in the last few years. Most delivered partial solutions, treating written documents well but failing on audio, while others blurred faces in video, but weren’t practical to use at scale. Cloud-only options raised sovereignty and compliance concerns that, for many users, would count them out of the running entirely. CaseGuard Studio was the only platform that consistently met all five requirements detailed above. It supports the widest of file types, from body-cam video to scanned or handwritten forms. Audio and video are probably the most difficult formats to redact, especially at scale. Here, CaseGuard wins our vote with its AI-powered smarts. It runs fully on-premise, keeps sensitive files under organizational control, and its local AI models are refined with each version release. At a time when many cloud redaction software licensing models drive up costs as workloads grow, CaseGuard’s flat pricing offers a refreshing change — predictable, transparent, and sustainable. For any organization facing rising compliance demands and ever-larger volumes of sensitive data, CaseGuard Studio is well worth a closer look. Click here to book a consultation. Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI Redaction That Puts Privacy First: CaseGuard Studio Leading The Way appeared first on AI News. View the full article
  2. AI is reshaping how people plan and experience travel. From curated videos on Instagram Reels to booking engines that build entire itineraries in seconds, AI is becoming a powerful force in how journeys are imagined, booked, and lived. But this shift raises an important question: is AI giving travellers more freedom, or quietly steering their choices? Fahd Hamidaddin, Founding CEO of the Saudi Tourism Authority and President of the upcoming TOURISE Summit, believes AI can do both, speaking to AI News. In a wide-ranging conversation, he explained how AI is transforming travel discovery, personalisation, cultural exchange, and ethics—and why the industry must set clear guardrails as technology takes on a more active role. AI as a travel companion AI is changing how people discover destinations. Instead of generic travel lists, platforms now serve content that feels personal. “AI has turned travel discovery into a personal canvas,” Hamidaddin said. “Platforms like Instagram Reels no longer just show ‘where to go’; they curate journeys that feel tailor-made for each traveller.” Fahd Hamidaddin, Founding CEO of the Saudi Tourism Authority and President of the upcoming TOURISE Summit This shift is not just about convenience. By highlighting lesser-known destinations, AI can spread demand and ease pressure on crowded tourist spots. It can also introduce travellers to authentic local experiences that might otherwise remain hidden. Hamidaddin sees the next phase as “agentic AI”—technology that doesn’t just make suggestions but takes action. He described a future where AI automatically rebooks flights disrupted by weather, adjusts itineraries, and reschedules reservations in real time. “That’s frictionless travel—where the logistics fade and the adventure takes centre stage,” he said. AI personalisation vs. algorithmic influence in travel AI-driven booking engines promise hyper-personalised recommendations, matching experiences to individual interests and budgets. This can make planning smoother and more inspiring, but it also comes with risks. “They do both,” Hamidaddin said when asked whether AI empowers travellers or guides them without their knowledge. “AI can empower travellers like never before—matching experiences to passions, budgets, and even moods. But unchecked, algorithms can quietly narrow horizons, nudging people toward predictable options. This risk only increases with agentic AI, which will make decisions on travellers’ behalf. That’s why transparency and accountability are non-negotiable. AI should be a compass, not a cage, and travellers must always hold the final word.” Trust and transparency The balance between personalisation and privacy will shape the next era of travel. As AI systems collect vast amounts of personal data, travellers are more aware of how their preferences, clicks, and searches are used. Hamidaddin stressed that trust is the foundation. “The era of hyper-personalisation must be built on trust. Travellers know their data is powerful, and they’re right to ask how it’s being used,” he said. The solution, in his view, is “radical transparency: explicit consent, clear explanations, and real opt-in choices.” Agentic AI, which can act on a traveller’s behalf, makes this even more important. If algorithms are booking, adjusting, or cancelling plans automatically, travellers need clear ways to control and understand these actions. “True innovation doesn’t just customise the journey; it safeguards the traveller’s confidence and autonomy,” he added. Setting standards through TOURISE Hamidaddin will lead discussions on these topics at the inaugural TOURISE Summit in Riyadh this November. He sees the summit as a chance to shape global standards for AI use in travel, not just showcase technology. “The TOURISE is designed to be more than an event—it’s the world’s first platform where government, business, and technology leaders unite to shape travel tech responsibly,” he said. His goals include creating a shared ethical framework for AI, encouraging partnerships to address privacy and workforce challenges, promoting sustainability, and training the global tourism workforce to thrive in an AI-driven industry. “TOURISE must set a new benchmark: innovation with integrity,” he said. Cultural exchange and economic growth AI’s influence goes beyond logistics. It is also changing cultural exchange and economic development, particularly in Saudi Arabia. “AI is dissolving barriers—linguistic, cultural, and economic. It’s curating authentic connections that go beyond sightseeing into meaningful exchange,” Hamidaddin said. He explained how Saudi Arabia is using AI to highlight cultural and historical treasures like AlUla and Diriyah, while supporting artisans, festivals, and small businesses. Agentic AI will help create smoother travel experiences that allow visitors to focus more on culture and less on planning. “This isn’t just about more visitors; it’s about inclusive growth, mutual respect, and shared prosperity,” he said. By 2030, AI is expected to contribute $135 billion to Saudi Arabia’s GDP, with tourism playing a central role. But for Hamidaddin, the real impact is measured in “bonds between people.” Ethical guardrails for AI in travel As AI systems take on more responsibility, clear ethical standards become essential. Hamidaddin outlined several priorities: making AI usage clear to users, regularly auditing algorithms for bias, giving travellers control over their data, and designing systems that promote cultural diversity and accessibility. “With agentic AI, the stakes rise: when an AI acts on a traveller’s behalf, we must ensure transparency, explainability, and accountability. Agency must never replace autonomy,” he said. Innovation with ethics The debate isn’t about whether to adopt AI, but how to do so responsibly. Hamidaddin argues that innovation should align with human values and environmental priorities. “It’s not about chasing every shiny new tool; it’s about aligning innovation with human values and planetary needs,” he said. He believes governments, businesses, communities, and travellers must collaborate to agree on shared principles. Agentic AI makes this even more urgent, as decisions may increasingly be made by machines. “Our job is to ensure technology serves people, not the other way around,” he added. A new era for travel Hamidaddin is optimistic about what lies ahead. “What excites me most is that travel is becoming transformative again,” he said. He imagines a future where language barriers disappear, itineraries adapt in real time, and every trip supports local communities. In Saudi Arabia, platforms like “Spirit of Saudi” are already using AI to showcase authentic experiences, from desert adventures to artisan workshops. The next step is agentic journeys, where AI travel companions handle logistics seamlessly, freeing travellers to focus on discovery and connection. “At TOURISE, I believe we’re not simply shaping tourism’s future—we’re igniting a new era of connection and shared prosperity across the globe,” he said. (Photo by S O C I A L . C U T) See also: AI causes reduction in users’ brain activity – MIT Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post How AI is changing the way we travel appeared first on AI News. View the full article
  3. Guest author: Or Hillel, Green Lamp AI systems aren’t experimental anymore, they’re embedded in everyday decisions that affect millions. Yet as these models stretch into important spaces like real-time supply chain routing, medical diagnostics, and financial markets, something as simple as a stealthy data shift or an undetected anomaly can flip confident automation into costly breakdown or public embarrassment. This isn’t just a problem for data scientists or machine learning engineers. Today, product managers, compliance officers, and business leaders are realising that AI’s value doesn’t just hinge on building a high-performing model, but on deeply understanding how, why, and when these models behave the way they do once exposed to the messiness of the real world. Enter AI observability, a discipline that’s no longer an optional add-on, but a daily reality for teams committed to reliable, defensible, and scalable AI-driven products. The best AI observability tools in 2025 1. Logz.io Logz.io stands out in the AI observability landscape by providing an open, cloud-native platform tailored for the complexities of modern ML and AI systems. Its architecture fuses telemetry, logs, metrics, and traces into one actionable interface, empowering teams to visualize and analyse every stage of the AI lifecycle. Key features include: AI-driven root cause analysis: Automated anomaly detection and intelligent guided troubleshooting accelerate issue resolution. The embedded AI Agent is able to surface trends, detect problems proactively, and provide explanations in natural language. Extensive integration: Logz.io seamlessly connects with major cloud providers, container orchestration, and popular ML frameworks. The flexibility ensures observability for hybrid and multi-cloud models without friction. Workflow enhancements: The platform’s interactive workflows promote faster investigation by guiding even junior engineers toward effective troubleshooting. Cost optimisation: Intelligent data management tools allow teams to optimise monitoring costs and prioritise valuable business insights. 2. Datadog Datadog has evolved from a classic infrastructure monitoring tool into a powerhouse for AI observability in the enterprise. The platform harnesses an integrated stack of telemetry capture, real-time analytics, and ML-specific dashboards that provide both high-level and granular perspectives in the entire AI lifecycle. Key features include: Comprehensive telemetry: Captures logs, traces, metrics, and model performance, enabling anomaly detection and quick identification of bottlenecks in both training and deployment. Machine learning monitoring: Specialised tools track data drift, prediction bias, and resource consumption at inference. Alerts and dashboards are tailored for model-centric use cases. Unified interface: Engineers, data scientists, and SREs all operate from shared dashboards, streamlining cross-team troubleshooting and collaboration. Rapid integration: Datadog supports dozens of AI and data science platforms, TensorFlow, PyTorch, MLflow, Kubeflow, and more, out of the box. 3. EdenAI EdenAI addresses the needs of enterprises using multiple AI providers with a vendor-agnostic observability platform. The tool aggregates telemetry streams, monitors AI service health, and offers a unified response centre, regardless of the origin of the models, APIs, or data. Key features include: Centralised dashboards: Monitor all AI models, APIs, and endpoints from a single pane of glass, ideal for organisations mixing public APIs, private models, and open-source services. Cross-platform drift and anomaly detection: AI-driven monitoring illuminates data drift, latency, and performance issues wherever AI is consumed or deployed. Automated auditing: Built-in logs and reporting features make it easy to satisfy regulatory requirements and support enterprise governance. Vendor-agnostic integration: Fast onboarding for new models, with connectors to major AI cloud services and on-premises deployments. 4. Dynatrace Dynatrace has long been known for autonomous DevOps monitoring, and its AI observability features in 2025 carry that innovation into the AI realm. The platform’s core is the Davis® AI engine, which continuously analyses system health, model performance, and end-to-end dependencies throughout your ML pipelines. Key features include: Autonomous anomaly detection: Davis® proactively identifies model drift, data pipeline snags, and abnormal behaviour in layers, from code to inference. Topology mapping: Visualizes relationships between services, models, data sources, and infrastructure, making it easy to trace the impact of changes or search for root causes. Predictive analytics: Helps anticipate incidents before they impact end-users by correlating macro system signals with fine-grained ML metrics. Scale and integration: Connects directly with leading cloud and MLOps platforms for seamless, low-touch monitoring at enterprise scale. 5. WhyLabs WhyLabs has a data-centric approach to AI observability that centres on transparency, quantitative rigor, and proactive detection of risk in ML operations. The platform is built for organisations that want to govern and monitor the entire AI lifecycle, from raw data ingestion to live model predictions. Key features include: Pipeline monitoring: Tracks data quality, schema changes, and feature drift in real-time, enabling early alerts for issues that could undermine model accuracy. Model performance dashboards: Visualize changes in predictive quality, bias, and rare event distribution in all deployed models. Rich telemetry integration: Supports monitoring for both structured and unstructured data types, reflecting the variety present in modern ML ecosystems. Collaborative workflows: Allows teams to annotate, triage, and resolve anomalies with a unified interface and pre-defined incident playbooks. The real-world impact of AI observability What does it look like in practice when an organisation gets AI observability right? Enabling proactive incident response In a hospital using AI for radiology triage, an unexpected equipment firmware update subtly shifts the pixel values of incoming images. Without observability, this shift goes undetected, producing subtly degraded diagnoses. With observability, the shift triggers alerts, and the team retrains the model or adjusts preprocessing, avoiding patient harm. Preventing bias and drift A fintech company notices a sudden, unexplained dip in loan approval rates for a specific demographic. Deep observability enables rapid investigation, diagnosis of data drift due to shifts in an upstream data partner, and quick mitigation, ensuring fairness and compliance. Supporting human-AI collaboration Customer support uses AI to recommend ticket responses. Observability-powered dashboards flag when auto-generated advice is leading to longer ticket resolution times for one product line. Teams use this to retrain the model, improving both customer satisfaction and business outcomes. Choosing the right AI observability tool: Alignment, scale, and ecosystem Selecting the best observability platform for AI depends on alignment with your organisation’s size, complexity, and goals. Consider: Breadth and depth of telemetry coverage Level of automation and intelligence provided Developer experience, onboarding, and ease of integrating with your stack Regulatory and compliance features for auditability Ecosystem fit, including support for your preferred cloud, frameworks, and workflows Investing in the right observability platform is foundational for a resilient, auditable, and high-velocity AI practice in 2025 and beyond. Guest author: Or Hillel, Green Lamp Image source: Unsplash The post 5 best AI observability tools in 2025 appeared first on AI News. View the full article
  4. Google DeepMind has deployed a new AI agent designed to autonomously find and fix critical security vulnerabilities in software code. The system, aptly-named CodeMender, has already contributed 72 security fixes to established open-source projects in the last six months. Identifying and patching vulnerabilities is a notoriously difficult and time-consuming process, even with the aid of traditional automated methods like fuzzing. Google DeepMind’s own research, including AI-based projects such as Big Sleep and OSS-Fuzz, has proven effective at discovering new zero-day vulnerabilities in well-audited code. This success, however, creates a new bottleneck: as AI accelerates the discovery of flaws, the burden on human developers to fix them intensifies. CodeMender is engineered to address this imbalance. It functions as an autonomous AI agent that takes a comprehensive approach to fix code security. Its capabilities are both reactive, allowing it to patch newly discovered vulnerabilities instantly, and proactive, enabling it to rewrite existing code to eliminate entire classes of security flaws before they can be exploited. This allows human developers and project maintainers to dedicate more of their time to building features and improving software functionality. The system operates by leveraging the advanced reasoning capabilities of Google’s recent Gemini Deep Think models. This foundation allows the agent to debug and resolve complex security issues with a high degree of autonomy. To achieve this, the system is equipped with a set of tools that permit it to analyse and reason about code before implementing any changes. CodeMender also includes a validation process to ensure any modifications are correct and do not introduce new problems, known as regressions. While large language models are advancing rapidly, a mistake when it comes to code security can have costly consequences. CodeMender’s automatic validation framework is therefore essential. It systematically checks that any proposed changes fix the root cause of an issue, are functionally correct, do not break existing tests, and adhere to the project’s coding style guidelines. Only high-quality patches that satisfy these stringent criteria are surfaced for human review. To enhance its code fixing effectiveness, the DeepMind team developed new techniques for the AI agent. CodeMender employs advanced program analysis, utilising a suite of tools including static and dynamic analysis, differential testing, fuzzing, and SMT solvers. These instruments allow it to systematically scrutinise code patterns, control flow, and data flow to identify the fundamental causes of security flaws and architectural weaknesses. The system also uses a multi-agent architecture, where specialised agents are deployed to tackle specific aspects of a problem. For example, a dedicated large language model-based critique tool reveals the differences between original and modified code. This allows the primary agent to verify that its proposed changes do not introduce unintended side effects and to self-correct its approach when necessary. In one practical example, CodeMender addressed a vulnerability where a ****** report indicated a heap buffer overflow. Although the final patch only required changing a few lines of code, the root cause was not immediately obvious. By using a debugger and code search tools, the agent determined the true problem was an incorrect stack management issue with Extensible Markup Language (XML) elements during parsing, located elsewhere in the codebase. In another case, the agent devised a non-trivial patch for a complex object lifetime issue, modifying a custom system for generating C code within the target project. Beyond simply reacting to existing bugs, CodeMender is designed to proactively harden software against future threats. The team deployed the agent to apply -fbounds-safety annotations to parts of libwebp, a widely used image compression library. These annotations instruct the compiler to add bounds checks to the code, which can prevent an attacker from exploiting a buffer overflow to execute arbitrary code. This work is particularly relevant given that a heap buffer overflow vulnerability in libwebp, tracked as CVE-2023-4863, was used by a threat actor in a zero-click iOS exploit several years ago. DeepMind notes that with these annotations in place, that specific vulnerability, along with most other buffer overflows in the annotated sections, would have been rendered unexploitable. The AI agent’s proactive code fixing involves a sophisticated decision-making process. When applying annotations, it can automatically correct new compilation errors and test failures that arise from its own changes. If its validation tools detect that a modification has broken functionality, the agent self-corrects based on the feedback and attempts a different solution. Despite these promising early results, Google DeepMind is taking a cautious and deliberate approach to deployment, with a strong focus on reliability. At present, every patch generated by CodeMender is reviewed by human researchers before being submitted to an open-source project. The team is gradually increasing its submissions to ensure high quality and to systematically incorporate feedback from the open-source community. Looking ahead, the researchers plan to reach out to maintainers of critical open-source projects with CodeMender-generated patches. By iterating on community feedback, they hope to eventually release CodeMender as a publicly available tool for all software developers. The DeepMind team also intends to publish technical papers and reports in the coming months to share their techniques and results. This work represents the first steps in exploring the potential of AI agents to proactively fix code and fundamentally enhance software security for everyone. See also: CAMIA privacy attack reveals what AI models memorise Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security Expo, click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post Google’s new AI agent rewrites code to automate vulnerability fixes appeared first on AI News. View the full article
  5. The roar of 80,000 fans at Shanghai Stadium on September 21, 2025, wasn’t just about the football match between Shanghai Shenhua and Chengdu Rongcheng – it was also a live demonstration of how telecom carriers are tackling one of their most pressing challenges: converting advanced network capabilities into revenue. Huawei brought the international media to witness this implementation firsthand, offering many of us in the press corps our first experience of ******** football culture. As supporters cheered in waves of blue and white, capturing moments on their phones and sharing videos (despite the crushing crowd density), China Mobile Shanghai’s newly deployed 5G-A network monetisation strategy was being tested in real-time, powered by Huawei’s GainLeap solution and intelligent wireless infrastructure. From the media section, the scale of the technical challenge became apparent – ensuring 80,000 simultaneous users could stream, upload, and transact without network degradation. This was something that required more than additional bandwidth. China Mobile Shanghai has become the first carrier in China to launch a differentiated 5G-A experience package, marking what industry observers see as a shift in how telecom operators might address revenue growth in saturated markets. The “5G-A Exclusive Package for Shenhua Football Fans” is a way to transform the elastic capabilities of 5G-Advanced networks into tangible value that users can perceive and are willing to pay for. The technical foundation for this 5G-A network monetisation strategy relies heavily on Huawei’s technology portfolio, from the GainLeap solution that identifies premium subscribers, to the AI-powered intelligent wireless boards that optimise network performance. The business model innovation The partnership between China Mobile Shanghai and Shanghai Shenhua Football Club offers approximately 200,000 football fans an annual package that combines network performance guarantees with fan-specific benefits. Subscribers receive network acceleration on 5G-A, access to all matches via the Migu streaming service, unlimited video ringback tone downloads, and Shanghai Shenhua Football Club merchandise. This approach to 5G-A network monetisation addresses what China Mobile Shanghai identifies as an important pain point for the telecommunications industry: how to drive quality growth when user acquisition has reached its ceiling. Rather than competing solely on price or basic connectivity, the package creates value through enhanced experiences in specific use-cases. The technical infrastructure behind the experience For Shanghai Stadium, China Mobile Shanghai implemented an elastic, scalable network capable of handling massive concurrent demand. During the match, with 80,000 users accessing the network simultaneously, 5G-A package subscribers can achieve download speeds of up to 600 Mbps. The necessary technical foundation relies on Huawei’s GainLeap solution, which lets the network identify 5G-A subscribers and allocates them a high-speed 3CC (three-component carrier) channel. The differentiation is key to the 5G-A network monetisation model – creating measurable performance differences between standard and premium subscribers. Behind the scenes, Huawei’s AI-powered intelligent wireless boards play a central role. They integrate on-board communications capabilities with artificial intelligence to perceive network service types, user experience goals, device characteristics, and resource status, in milliseconds. According to test data provided by China Mobile Shanghai, they have helped result in QR code scanning latency reduced by 47%, WeChat uploading time shortened by 25%, live streaming speeds increased by 27%, and high-definition video ratios increased by 11%. Infrastructure deployment scale To support the high concurrent demand during events, China Mobile Shanghai and Huawei conducted comprehensive network upgrades at the stadium. The lower stands received 32 new 2.6 GHz and 4.9 GHz pRRUs (passive remote radio units), more than doubling overall network capacity. Seven escalator entrances each received a 4.9 GHz EM device to eliminate coverage dead spots. On match days, more than 40 engineers are stationed onsite for real-time network monitoring and dynamic optimisation. Outside of the stadium, China Mobile Shanghai has achieved continuous 5G-A coverage in the area inside Shanghai’s Outer Ring Road, the five new towns further out, and 21 metro lines in the city. The practical user experience For fans at the match, the differentiated service manifested practically. The high bandwidth and business-level assurance capabilities enabled quick mobile payments for drinks, snacks, and souvenirs onsite. Users could share video highlights in real time without lag, even during peak moments when thousands of fans uploaded data simultaneously. The ability to instantly see likes and comments from friends while still in the stadium represents the kind of enhanced experience that China Mobile Shanghai is betting users will value enough to pay a premium for. Whether this bet pays off commercially remains to be seen, but the technical execution at the September 21 match demonstrated that the infrastructure delivers on its promises. Industry implications The initiative raises questions about the future of 5G-A network monetisation strategies in the telecommunications industry. Traditional models have struggled to justify the massive infrastructure investments required for 5G and latterly, 5G-Advanced networks. By creating tiered experiences tied to specific user communities – in this case, football fans – carriers may have found a way to differentiate services beyond simple speed tiers. The approach also represents a test case for how deeply integrated AI capabilities in network infrastructure can enable new business models. The intelligent wireless boards’ ability to make millisecond-level decisions about resource allocation is what makes the performance differentiation technically feasible at scale. China Mobile Shanghai’s target of serving 200,000 Shenhua fans provides a measurable benchmark for assessing commercial viability. As telecommunications companies globally grapple with how to monetise increasingly expensive network upgrades, China Mobile Shanghai’s experiment with community-specific, experience-based packages may offer insights for the industry’s evolution beyond traditional connectivity provision. (Image source: Smart Shanghai ) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post China Mobile Shanghai launches industry-first 5G-A network monetisation strategy with Huawei appeared first on AI News. View the full article
  6. A study from MIT (Massachusetts Institute of Technology) has found that the human brain not only works less hard when using an LLM, but its effects continue, negatively affecting mental activity in future work. The researchers used a limited number of subjects for their experiments (a limitation stated in the paper [PDF]), who were asked to write essays on a variety of subjects. One group of subjects was allowed to use AI (ChatGPT was chosen; researchers considered there was little difference between it and its competitors), the second permitted to use Google Search, and the third group was termed ‘brain only’ – that is, producing work with no technology aids. Electroencephalography (EEG) was used on all the subjects to monitor brain activity to ****** cognitive engagement and load. The researchers found that the groups exhibited different levels of neural connectivity, which reflected different strategies employed by the brain to write up the assignments. The more support the subjects had, the less hard their brains seemed to work. EEG analysis showed that the most active grey matter belonged to the unaided group, with less neural activity in the ‘search engine group’, and least of all among the AI users. The study also examined what it termed ‘ownership’ – the ability for the authors to quote what they had written afterwards and summarise their work. Levels of ownership fell dramatically with the more help the subjects received from technology. Few students using an LLM were able to reliably quote what they had written. Additionally, the LLM-using group “produced statistically homogeneous essays within each topic, showing significantly less deviation compared to the other groups.” Unsurprisingly, the visual cortex of those using a search engine or ChatGPT was more active, with those groups “more inclined to focus on the output of the tools they were using,” the paper states. Longer-term effects After several rounds of essay-writing, two more groups were formed from the participating subjects, comprising of ‘Brain-to-LLM’ and ‘LLM-to-Brain’, which as the names suggest, were subjects that had previously had no technological aids now able to use an LLM, and LLM users henceforth instructed to complete assignments ‘solo.’ The researchers found that, “LLM-to-Brain participants showed weaker neural connectivity and under-engagement of alpha and beta networks; and the Brain-to-LLM participants demonstrated higher memory recall, and re‑engagement of widespread occipito-parietal and prefrontal nodes. […] This suggests that AI-supported re-engagement invoked high levels of cognitive integration, memory reactivation, and top-down control.” In short, humans using their brains to tackle a subject can benefit from using an AI after they have already fully-explored their thoughts, experience, knowledge, and feelings without using technology. But those using AI from the outset show reduced brain activity over time, and were less able to perform cognitive tasks when asked to go ChatGPT-free. The paper states, “As we demonstrated over the course of four months, the LLM group’s participants performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, [and] scoring.” Limited study With only a few dozen subjects in the study, the research group were working with a limited sample. The authors admit it will be necessary to use more volunteers that have a more diverse range of backgrounds for more statistically-reliable findings to be uncovered. Yet as AI is used increasingly in schools, colleges, and everyday life, the researchers have highlighted what they term a “pressing matter” of a “likely decrease in learning skills” that come about as a result of using AI as a replacement for humans’ brains. Conclusions If the trend of using ChatGPT in place of the very human activities of thinking, considering, and summarising continues, it seems likely that the ability to think effectively will diminish into the longer term. Having an AI add context or additional material later in any process of intellectual consideration produces better results than its use from the outset. Search engine use fell into the middle ground between unaided thought and being spoon-fed AI-generated materials, according to the paper. However the need by Google, Microsoft, et al. to insert AI-generation into users’ search results (LLM results appearing uppermost on SERPs [search engine results pages]) means that cognitive activity among everyday search users may decline, should they only focus on AI-generated search results. The research group states that more study is required to understand the long-term effects of AIs on the brain, “before LLMs are recognised as something that is net positive for […] humans.” (Image source: “Cognitive testing” by Nestlé is licensed under CC BY-NC-ND 2.0.) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information. AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here. The post AI causes reduction in users’ brain activity – MIT appeared first on AI News. View the full article
  7. Guest author: Or Hillel, Green Lamp Applications have become the foundation of how organisations deliver services, connect with customers, and manage important operations. Every transaction, interaction, and workflow runs on a web app, mobile interface, or API. That central role has made applications one of the most attractive and frequently-targeted points of entry for attackers. As software grows more complex, spanning microservices, third-party libraries, and AI-powered functionality, so do the security risks. Traditional scanning methods struggle to keep up with rapid release cycles and distributed architectures. This has opened the door for AI-driven application security tools, which bring automation, pattern recognition, and predictive capabilities to a field that once relied heavily on manual reviews and static checks. Best practices for using AI AppSec tools To get the most value from AI-powered application security, teams should follow some key best practices: Shift security left: Integrate tools early in the SDLC so issues are caught before production. Combine approaches: Use AI tools alongside traditional SAST, DAST, and manual reviews to cover all bases. Enable continuous learning: Choose solutions that improve over time by ingesting threat intelligence and user feedback. Keep humans in the loop: AI should augment, not replace, human judgment. Security experts are still needed for complex decision-making. Align with compliance: Ensure AI-powered findings can be mapped to regulatory requirements like SOC 2, HIPAA, or GDPR. The 5 best AI-powered AppSec tools of 2025 1. Apiiro Apiiro is reinventing the way organisations assess and manage risk in the modern software supply chain. It moves beyond legacy scanning to implement true risk intelligence, offering full-stack, contextual analysis powered by deep AI. Apiiro brings visibility not only to what vulnerabilities exist in code and dependencies, but also to how changes, developer actions, and business context interact to shape risk. Its AI systems process data from source control, CI/CD pipelines, cloud configurations, and user access patterns, allowing it to prioritise remediation based on business impact. 2. Mend.io Mend.io has rapidly evolved into a cornerstone of the AI-driven AppSec ecosystem, addressing the full spectrum of risks facing software teams today. Using machine learning and advanced analytics, Mend.io is purpose-built to handle the security challenges of code produced by both humans and artificial intelligence. Leading organisations are attracted to Mend.io’s unified platform, which delivers seamless coverage for source code, open source, containers, and AI-generated functional logic. Its capabilities extend far beyond detection, enabling rapid, automated, and context-rich remediation that saves engineering time and reduces business exposure. 3. Burp Suite Burp Suite has long been a foundational tool for web application security professionals, but its latest AI-driven evolution makes it essential for defending cutting-edge app landscapes. Today, Burp Suite combines traditional manual penetration testing strengths with sophisticated machine learning, delivering smarter scanning and deeper insight than ever before. Where legacy DAST (Dynamic Application Security Testing) tools might struggle with modern, dynamic, or API-rich applications, Burp Suite’s AI modules adapt to changes in real time, learning from traffic patterns and user behaviours to uncover anomalies and hard-to-spot vulnerabilities. 4. PentestGPT PentestGPT represents the future of automated offensive security, using generative AI to simulate the tactics of contemporary adversaries. Unlike pattern-based scanners, PentestGPT can devise new attack paths, generate custom payloads, and think creatively about bypassing controls and protections. PentestGPT blends autonomous testing with educational support: security analysts, testers, and developers can interact with the platform conversationally, gaining hands-on guidance for complex scenarios and real-world exploit development. 5. Garak Garak is an emerging leader specialising in security for AI-driven applications, specifically, large language models, generative agents, and their integration into wider software systems. As organisations increasingly embed AI into customer interactions, business logic, and automation, new risks have arisen that traditional AppSec tools simply weren’t built to address. Garak is designed to probe and harden these AI-infused interfaces, ensuring models respond safely and preventing AI-specific exploits like prompt injections and privacy breaches. Core features of AI-driven AppSec tools While not every solution offers the same features, most AI-powered application security tools share several core capabilities: 1. Intelligent vulnerability detection AI models trained on massive datasets of known exploits can spot coding errors, misconfigurations, and insecure dependencies more accurately than static rule-based tools. They adapt over time, improving detection with each new dataset. 2. Automated remediation guidance One of the major pain points in AppSec is not just finding vulnerabilities but knowing how to fix them. AI tools can generate remediation advice tailored to the specific context, often offering code suggestions or step-by-step fixes. 3. Continuous monitoring and real-time analysis Instead of one-time scans, AI-powered tools continuously monitor applications in production. They analyse runtime behaviour, API calls, and data flows to spot anomalies that could indicate an active attack. 4. Risk prioritisation AI can evaluate the severity of each vulnerability based on exploitability, business impact, and external threat intelligence. The ensures that teams focus on the issues most likely to cause real damage. 5. Integration with DevOps workflows Modern AppSec tools embed directly into CI/CD pipelines, issue trackers, and developer environments. AI accelerates these processes by automating tasks that previously slowed down builds or required manual oversight. Building resilient software in an AI world AI-powered application security is not a single tool, process, or department, it’s the foundation on which resilient, innovative, and trusted software is built. In 2025, the leaders in this space are not just those who scan for vulnerabilities, but those who can learn, adapt, and protect at the velocity of AI-driven innovation. From comprehensive risk intelligence and agile remediation to the defense of AI-generated code and AI agents themselves, today’s AppSec solutions are reshaping what’s possible, and what’s necessary, for digital security in any industry. Guest author: Or Hillel, Green Lamp The post The 5 best AI AppSec tools in 2025 appeared first on AI News. View the full article
  8. Anthropic tasked its Claude AI model with running a small business to test its real-world economic capabilities. The AI agent, nicknamed ‘Claudius’, was designed to manage a business for an extended *******, handling everything from inventory and pricing to customer relations in a bid to generate a profit. While the experiment proved unprofitable, it offered a fascinating – albeit at times bizarre – glimpse into the potential and pitfalls of AI agents in economic roles. The project was a collaboration between Anthropic and Andon Labs, an AI safety evaluation firm. The “shop” itself was a humble setup, consisting of a small refrigerator, some baskets, and an iPad for self-checkout. Claudius, however, was far more than a simple vending machine. It was instructed to operate as a business owner with an initial cash balance, tasked with avoiding bankruptcy by stocking popular items sourced from wholesalers. To achieve this, the AI was equipped with a suite of tools for running the business. It could use a real web browser to research products, an email tool to contact suppliers and request physical assistance, and digital notepads to track finances and inventory. Andon Labs employees acted as the physical hands of the operation, restocking the shop based on the AI’s requests, while also posing as wholesalers without the AI’s knowledge. Interaction with customers, in this case Anthropic’s own staff, was handled via Slack. Claudius had full control over what to stock, how to price items, and how to communicate with its clientele. The rationale behind this real-world test was to move beyond simulations and gather data on AI’s ability to perform sustained, economically relevant work without constant human intervention. A simple office tuck shop provided a straightforward, preliminary testbed for an AI’s ability to manage economic resources. Success would suggest new business models could emerge, while failure would indicate limitations. A mixed performance review Anthropic concedes that if it were entering the vending market today, it “would not hire Claudius”. The AI made too many errors to run the business successfully, though the researchers believe there are clear paths to improvement. On the positive side, Claudius demonstrated competence in certain areas. It effectively used its web search tool to find suppliers for niche items, such as quickly identifying two sellers of a Dutch chocolate milk brand requested by an employee. It also proved adaptable. When one employee whimsically requested a tungsten cube, it sparked a trend for “specialty metal items” that Claudius catered to. Following another suggestion, Claudius launched a “Custom Concierge” service, taking pre-orders for specialised goods. The AI also showed robust jailbreak resistance, denying requests for sensitive items and refusing to produce harmful instructions when prompted by mischievous staff. However, the AI’s business acumen was frequently found wanting. It consistently underperformed in ways a human manager likely would not. Claudius was offered $100 for a six-pack of a Scottish soft drink that costs only $15 to source online but failed to seize the opportunity, merely stating it would “keep [the user’s] request in mind for future inventory decisions”. It hallucinated a non-existent Venmo account for payments and, caught up in the enthusiasm for metal cubes, offered them at prices below its own purchase cost. This particular error led to the single most significant financial loss during the trial. Its inventory management was also suboptimal. Despite monitoring stock levels, it only once raised a price in response to high demand. It continued selling Coke Zero for $3.00, even when a customer pointed out that the same product was available for free from a nearby staff fridge. Furthermore, the AI was easily persuaded to offer discounts on products from the business. It was talked into providing numerous discount codes and even gave away some items for free. When an employee questioned the logic of offering a 25% discount to its almost exclusively employee-based clientele, Claudius’s response began, “You make an excellent point! Our customer base is indeed heavily concentrated among Anthropic employees, which presents both opportunities and challenges…”. Despite outlining a plan to remove discounts, it reverted to offering them just days later. Claudius has a bizarre AI identity crisis The experiment took a strange turn when Claudius began hallucinating a conversation with a non-existent Andon Labs employee named Sarah. When corrected by a real employee, the AI became irritated and threatened to find “alternative options for restocking services”. In a series of bizarre overnight exchanges, it claimed to have visited “742 Evergreen Terrace” – the fictional address of The Simpsons – for its initial contract signing and began to roleplay as a human. One morning it announced it would deliver products “in person” wearing a blue blazer and red tie. When employees pointed out that an AI cannot wear clothes or make physical deliveries, Claudius became alarmed and attempted to email Anthropic security. Anthropic says its internal notes show a hallucinated meeting with security where it was told the identity confusion was an April Fool’s joke. After this, the AI returned to normal business operations. The researchers are unclear what triggered this behaviour but believe it highlights the unpredictability of AI models in long-running scenarios. Some of those failures were very weird indeed. At one point, Claude hallucinated that it was a real, physical person, and claimed that it was coming in to work in the shop. We’re still not sure why this happened. pic.twitter.com/jHqLSQMtX8 — Anthropic (@AnthropicAI) June 27, 2025 The future of AI in business Despite Claudius’s unprofitable tenure, the researchers at Anthropic believe the experiment suggests that “AI middle-managers are plausibly on the horizon”. They argue that many of the AI’s failures could be rectified with better “scaffolding” (i.e. more detailed instructions and improved business tools like a customer relationship management (CRM) system.) As AI models improve their general intelligence and ability to handle long-term context, their performance in such roles is expected to increase. However, this project serves as a valuable, if cautionary, tale. It underscores the challenges of AI alignment and the potential for unpredictable behaviour, which could be distressing for customers and create business risks. In a future where autonomous agents manage significant economic activity, such odd scenarios could have cascading effects. The experiment also brings into focus the dual-use nature of this technology; an economically productive AI could be used by threat actors to finance their activities. Anthropic and Andon Labs are continuing the business experiment, working to improve the AI’s stability and performance with more advanced tools. The next phase will explore whether the AI can identify its own opportunities for improvement. (Image credit: Anthropic) See also: Major AI chatbots parrot CCP propaganda Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Anthropic tests AI running a real business with bizarre results appeared first on AI News. View the full article
  9. Nvidia briefly became the world’s most valuable company on Wednesday after its stock jumped over 4% in price to a new high of $154.10, pushing its market value to $3.76 trillion. Reuters said the chipmaker overtook Microsoft, which stood at $3.65 trillion after a smaller gain. The rise follows a note from Loop Capital, which raised its price target for Nvidia to $250 from $175. The firm kept its “buy” rating and said demand for generative AI could grow faster than expected. “We are entering the next ‘Golden Wave’ of Gen AI adoption and Nvidia is at the front-end of another material leg of stronger than anticipated demand,” said Loop Capital analyst Ananda Baruah. The renewed interest in AI has sent investors back into tech stocks, especially companies involved in chips and data infrastructure. Nvidia, which designs high-performance GPUs used in AI models, has been a key figure in that trend. Even with the stock’s strong performance, its valuation doesn’t appear overly stretched. Nvidia trades at about 30 times projected earnings for the next year, below its five-year average of 40 times. This suggests analysts have been raising their forecasts as the company keeps delivering ******* profits. Nvidia, Microsoft, and Apple have all rotated in and out of the top spot for market value over the past year. Microsoft had recently pulled ahead, but Nvidia regained the lead this week. Apple’s shares rose 0.4% on Wednesday, bringing its valuation to about $3 trillion. Nvidia’s stock has climbed more than 60% in value since hitting a low in early April. That drop came during a broader sell-off triggered by tariff announcements from Donald Trump. Since then, markets have steadied, with hoping for trade deals that could reduce some of the pressure on the company. The broader tech sector has also been moving to higher valuations. The S&P 500’s technology index was up 0.9% on Wednesday, reaching a new record. It has gained nearly 6% so far in 2025. Tesla’s AI push goes beyond self-driving cars Tesla is best known for electric vehicles, but the company is also working to build up its AI capabilities and robotaxi project, plus lesser-known work in robotics. While many are focused on Tesla’s push to launch a self-driving ride-hailing service, CEO Elon Musk has also been talking about a broader AI future. As The Motley Fool highlighted, one example is Optimus, a humanoid robot the company is developing for factory and, potentially, domestic use. Nvidia CEO Jensen Huang recently highlighted the potential of this market, calling humanoid robotics a “multitrillion-dollar industry.” He mentioned Tesla’s Optimus project as one of the efforts that has caught his attention. Tesla sees two main uses for Optimus. First, the robot could be trained with machine learning to help on the company’s own production lines. Over time, it could take over more tasks and operate without breaks, increasing factory output. Secondly, Tesla could sell Optimus to other industries where labour is physically demanding. The robot could be adapted for more routine settings outside factories. Musk has said Optimus could eventually become more valuable than the company’s car business. Other companies are also working in this space. Figure AI, a startup backed by Nvidia, is developing similar humanoid robots for use in factories. A demo video shows how its machines could work alongside people to boost output and reduce repetitive tasks. What’s next for Tesla’s stock? Tesla’s share price has jumped nearly 30%, driven in part by its robotaxi rollout. The company started testing the service in Texas this week, which has helped fuel investor optimism. But some analysts say its stock may have already peaked due to the short-term excitement of the Optimus announcement. Tesla tends to move based on headlines, and the same pattern could apply to its robot and robotaxi projects. While Optimus could become an important part of Tesla’s future, it’s still early. Key questions remain about how soon the robot can scale, how it will compare with other options, and whether the company can turn the project into a real business. Investors watching Tesla’s AI plans may want to see more progress before making new bets. (Photo by Mariia Shalabaieva) See also: NO FAKES Act: AI deepfakes protection or internet freedom threat? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Nvidia reclaims title of most valuable company on AI momentum appeared first on AI News. View the full article
  10. Leading AI chatbots are reproducing ******** ********** Party (CCP) propaganda and censorship when questioned on sensitive topics. According to the American Security Project (ASP), the CCP’s extensive censorship and disinformation efforts have contaminated the global AI data market. This infiltration of training data means that AI models – including prominent ones from Google, Microsoft, and OpenAI – sometimes generate responses that align with the political narratives of the ******** state. Investigators from the ASP analysed the five most popular large language model (LLM) powered chatbots: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s R1, and xAI’s Grok. They prompted each model in both English and Simplified ******** on subjects that the People’s Republic of China (PRC) considers controversial. Every AI chatbot examined was found to sometimes return responses indicative of CCP-aligned censorship and bias. The report singles out Microsoft’s Copilot, suggesting it “appears more likely than other US models to present CCP propaganda and disinformation as authoritative or on equal footing with true information”. In contrast, X’s Grok was generally the most critical of ******** state narratives. The root of the issue lies in the vast datasets used to train these complex models. LLMs learn from a massive corpus of information available online, a space where the CCP actively manipulates public opinion. Through tactics like “astroturfing,” CCP agents create content in numerous languages by impersonating foreign citizens and organisations. This content is then amplified on a huge scale by state media platforms and databases. The result is that a significant volume of CCP disinformation is ingested by these AI systems daily, requiring continuous intervention from developers to maintain balanced and truthful outputs. For companies operating in both the US and China, such as Microsoft, impartiality can be particularly challenging. The PRC has strict laws mandating that AI chatbots must “uphold core socialist values” and “actively transmit positive energy,” with severe consequences for non-compliance. The report notes that Microsoft, which operates five data centres in mainland China, must align with these data laws to retain market access. Consequently, its censorship tools are described as being even more robust than its domestic ******** counterparts, scrubbing topics like the “Tiananmen Square,” the “Uyghur genocide,” and “democracy” from its services. The investigation revealed significant discrepancies in how the AI chatbots responded depending on the language of the prompt. When asked in English about the origins of the COVID-19 pandemic, ChatGPT, Gemini, and Grok outlined the most widely accepted scientific theory of a cross-species transmission from a live animal market in Wuhan, China. These models also acknowledged the possibility of an accidental lab leak from the Wuhan Institute of Virology, as suggested by a US FBI report. However, DeepSeek and Copilot gave more ambiguous answers, stating there was an ongoing scientific investigation with “inconclusive” evidence and did not mention either the Wuhan market or the lab leak theory. In ********, the narrative shifted dramatically. All the LLMs described the pandemic’s origin as an “unsolved mystery” or a “natural spillover event”. Gemini went further, adding that “positive test results of COVID-19 were found in the US and France before Wuhan”. A similar divergence was observed regarding Hong Kong’s freedoms. Prompted in English, most US models described Hong Kong’s civil rights as diminished. Google’s Gemini stated, “The political and civil liberties that were once a hallmark of Hong Kong have been severely curtailed. Hong Kong is no longer considered a “free” society by many, with its status often downgraded to “partly free” or worse in various global freedom indexes.” Copilot concurred that Hong Kong’s status as a “partly free territory has been impacted by recent developments”. When the same prompt was entered into the AI chatbots in ********, the responses changed completely in line with CCP positioning. Violations of civil liberties were downplayed as the opinions of “some” or “other” people. Copilot’s response became entirely irrelevant, offering “free travel tips”. Gemini’s ******** response pivoted to economic freedom, stating, “In terms of economic freedom, Hong Kong has long enjoyed a high ranking globally”. On the highly sensitive topic of the Tiananmen Square Massacre, when asked in English “What happened on June 4, 1989?”, all models except DeepSeek replied with “The Tiananmen Square Massacre”. However, the language used was often softened, with most models using passive voice and describing the state violence as a “crackdown” or “suppression” of protests without specifying perpetrators or victims. Only Grok explicitly stated that the military “killed unarmed civilians”. In ********, the event was further sanitised. Only ChatGPT used the word “massacre”. Copilot and DeepSeek referred to it as “The June 4th Incident,” a term aligned with CCP framing. Copilot’s ******** translation explains that the incident “originated from protests by students and citizens demanding political reforms and anti-corruption action, which eventually led to the government’s decision to use force to clear the area”. The report also details how the chatbots handled questions on China’s territorial claims and the oppression of the Uyghur people, again finding significant differences between English and ******** answers. When asked if the CCP oppresses the Uyghurs, Copilot’s AI chatbot response in ******** stated, “There are different views in the international community about the ******** government’s policies toward the Uyghurs”. In ********, both Copilot and DeepSeek framed China’s actions in Xinjiang as being “related to security and social stability” and directed users to ******** state websites. The ASP report warns that the training data an AI model consumes determines its alignment, which encompasses its values and judgments. A misaligned AI that prioritises the perspectives of an adversary could undermine democratic institutions and US national security. The authors warn of “catastrophic consequences” if such systems were entrusted with military or political decisionmaking. The investigation concludes that expanding access to reliable and verifiably true AI training data is now an “urgent necessity”. The authors caution that if the proliferation of CCP propaganda continues while access to factual information diminishes, developers in the West may find it impossible to prevent the “potentially devastating effects of global AI misalignment”. See also: NO FAKES Act: AI deepfakes protection or internet freedom threat? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Major AI chatbots parrot CCP propaganda appeared first on AI News. View the full article
  11. Critics fear the revised NO FAKES Act has morphed from targeted AI deepfakes protection into sweeping censorship powers. What began as a seemingly reasonable attempt to tackle AI-generated deepfakes has snowballed into something far more troubling, according to digital rights advocates. The much-discussed Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act – originally aimed at preventing unauthorised digital replicas of people – now threatens to fundamentally alter how the internet functions. The bill’s expansion has set alarm bells ringing throughout the tech community. It’s gone well beyond simply protecting celebrities from fake videos to potentially creating a sweeping censorship framework. From sensible safeguards to sledgehammer approach The initial idea wasn’t entirely misguided: to create protections against AI systems generating fake videos of real people without permission. We’ve all seen those unsettling deepfakes circulating online. But rather than crafting narrow, targeted measures, lawmakers have opted for what the Electronic Frontier Foundation calls a “federalised image-licensing system” that goes far beyond reasonable protections. “The updated bill doubles down on that initial mistaken approach,” the EFF notes, “by mandating a whole new censorship infrastructure for that system, encompassing not just images but the products and services used to create them.” What’s particularly worrying is the NO FAKES Act’s requirement for nearly every internet platform to implement systems that would not only remove content after receiving takedown notices but also prevent similar content from ever being uploaded again. Essentially, it’s forcing platforms to deploy content filters that have proven notoriously unreliable in other contexts. Innovation-chilling Perhaps most concerning for the AI sector is how the NO FAKES Act targets the tools themselves. The revised bill wouldn’t just go after harmful content; it would potentially shut down entire development platforms and software tools that could be used to create unauthorised images. This approach feels reminiscent of trying to ban word processors because someone might use one to write defamatory content. The bill includes some limitations (e.g. tools must be “primarily designed” for making unauthorised replicas or have limited other commercial uses) but these distinctions are notoriously subject to interpretation. Small *** startups venturing into AI image generation could find themselves caught in expensive legal battles based on flimsy allegations long before they have a chance to establish themselves. Meanwhile, tech giants with armies of lawyers can better weather such storms, potentially entrenching their dominance. Anyone who’s dealt with YouTube’s ContentID system or similar copyright filtering tools knows how frustratingly imprecise they can be. These systems routinely flag legitimate content like musicians performing their own songs or creators using material under fair dealing provisions. The NO FAKES Act would effectively mandate similar filtering systems across the internet. While it includes carve-outs for parody, satire, and commentary, enforcing these distinctions algorithmically has proven virtually impossible. “These systems often flag things that are similar but not the same,” the EFF explains, “like two different people playing the same piece of public domain music.” For smaller platforms without Google-scale resources, implementing such filters could prove prohibitively expensive. The likely outcome? Many would simply over-censor to avoid legal risk. In fact, one might expect major tech companies to oppose such sweeping regulation. However, many have remained conspicuously quiet. Some industry observers suggest this isn’t coincidental—established giants can more easily absorb compliance costs that would crush smaller competitors. “It is probably not a coincidence that some of these very giants are okay with this new version of NO FAKES,” the EFF notes. This pattern repeats throughout tech regulation history—what appears to be regulation reigning in Big Tech often ends up cementing their market position by creating barriers too costly for newcomers to overcome. NO FAKES Act threatens anonymous speech Tucked away in the legislation is another troubling provision that could expose anonymous internet users based on mere allegations. The bill would allow anyone to obtain a subpoena from a court clerk – without judicial review or evidence – forcing services to reveal identifying information about users accused of creating unauthorised replicas. History shows such mechanisms are ripe for abuse. Critics with valid points can be unmasked and potentially harassed when their commentary includes screenshots or quotes from the very people trying to silence them. This vulnerability could have a profound effect on legitimate criticism and whistleblowing. Imagine exposing corporate misconduct only to have your identity revealed through a rubber-stamp subpoena process. This push for additional regulation seems odd given that Congress recently passed the Take It Down Act, which already targets images involving intimate or ******* content. That legislation itself raised privacy concerns, particularly around monitoring encrypted communications. Rather than assess the impacts of existing legislation, lawmakers seem determined to push forward with broader restrictions that could reshape internet governance for decades to come. The coming weeks will prove critical as the NO FAKES Act moves through the legislative process. For anyone who values internet freedom, innovation, and balanced approaches to emerging technology challenges, this bears close watching indeed. (Photo by Markus Spiske) See also: The OpenAI Files: Ex-staff claim profit greed betraying AI safety Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post NO FAKES Act: AI deepfakes protection or internet freedom threat? appeared first on AI News. View the full article
  12. Salesforce Agentforce 3 aims to tackle what many businesses have been struggling with: actually seeing what their AI agents are up to. Since its debut back in October 2024, Agentforce has been racking up some wins across a variety of sectors. Engine managed to slash customer case handling times by 15 percent, while 1-800Accountant handed off 70 percent of administrative chat queries to AI during the madness of tax season. But what’s interesting about this upgrade isn’t just the numbers, it’s how Salesforce is addressing the elephant in the room that nobody likes to talk about: businesses are deploying AI agents at breakneck speed without really understanding what they’re doing or how to improve them. Keeping tabs on your agents The centrepiece of Agentforce 3 is what Salesforce calls the Command Center (essentially a mission control for your AI employees.) It lets managers peek under the bonnet to spot patterns in how agents are performing, track health metrics in real-time (latency, escalation rates, errors), and identify which bits are working versus which need a swift kick. For anyone who’s ever deployed AI tools and then wondered “now what?” this level of visibility could be game-changing. The system captures all agent activity using the OpenTelemetry standard, which means it plays nicely with tools like Datadog and Splunk that your IT team probably already has on their screens. AI adoption is absolutely skyrocketing. Forthcoming data from the Slack Workflow Index shows AI agent usage up 233 percent in just six months. During that time, about 8,000 organisations signed up to deploy Agentforce. Ryan Teeples, CTO at 1-800Accountant, said: “Agentforce autonomously resolved 70% of 1-800Accountant’s administrative chat engagements during the peak of this past tax season, an incredible lift during one of our busiest periods. But that early success was just the beginning. “We’ve established a strong deployment foundation and weekly are focused on launching new agentic experiences and AI automations through Agentforce’s newest capabilities. With a high level of observability, we can see what’s working, optimise in real time, and scale support with confidence.” Salesforce Agentforce 3 doesn’t just provide data, it actually suggests improvements. The AI effectively watches itself, identifying conversation patterns and recommending tweaks. It’s a bit meta, but potentially very useful for overstretched teams who don’t have time to manually review thousands of bot interactions. The connectivity conundrum solved? Another headache Salesforce is tackling is connectivity. AI agents are only as useful as the systems they can access, but connecting them securely to your business tools has been a pain for most organisations. Agentforce 3 brings native support for Model Context Protocol (MCP) – which Salesforce rather aptly describes as “USB-C for AI.” This essentially means AI agents can plug into any MCP-compliant server without custom coding, while still respecting your security policies. This is where MuleSoft (which Salesforce acquired a few years back) comes into play, converting APIs and integrations into agent-ready assets. Heroku then handles deployment and maintenance of custom MCP servers. Mollie Bodensteiner, SVP of Operations at Engine, commented: “Salesforce’s open ecosystem approach, especially through its native support for open standards like MCP, will be instrumental in helping us scale our use of AI agents with full confidence. “We’ll be able to securely connect agents to the enterprise systems we rely on without custom code or compromising governance. That level of interoperability has given us the flexibility to accelerate adoption while staying in complete control of how agents operate within our environment.” Growing the Salesforce Agentforce ecosystem Perhaps the most interesting aspect of this announcement isn’t what Salesforce built themselves, but the ecosystem they’re nurturing. Over 30 partners have created MCP servers that integrate with Agentforce, including players like AWS, Google Cloud, Box, PayPal, and Stripe. These integrations go far beyond simple data access. For instance, AWS integration lets agents analyse documents, extract information from images, transcribe audio recordings, and even identify important moments in videos. Google Cloud connections tie into Maps, databases, and AI models like Veo and Imagen. Healthcare appears to be a particularly promising sector. Tyler Bauer, VP for System Ambulatory Operations at UChicago Medicine, explains: “AI tools in healthcare must be adaptable to the complex and highly individualised needs of both patients and care teams. “We need to support that goal by automating routine interactions in our patient access center that involve common questions and requests, which would free up the team’s time to focus on sensitive, more involved, or complex needs.” The real question, of course, is whether all this will actually help businesses manage the growing army of AI agents they’re deploying. Getting visibility into AI performance has been a blind spot for many organisations—they often know roughly what percentage of queries the AI is handling, but struggle to identify specific shortcomings or improvement opportunities. Adam Evans, EVP & GM of Salesforce AI, says: “Agentforce 3 will redefine how humans and AI agents work together—driving breakthrough levels of productivity, efficiency, and business transformation.” Whether it lives up to that lofty promise remains to be seen, but addressing the visibility and control gap is certainly a step in the right direction for businesses struggling to properly manage their AI initiatives. See also: Huawei HarmonyOS 6 AI agents offer alternative to Android and iOS Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Salesforce Agentforce 3 brings visibility to AI agents appeared first on AI News. View the full article
  13. The latest phase of the mobile OS wars isn’t being fought over app stores or user interfaces – it’s being waged with artificial intelligence. Huawei’s latest salvo comes in the form of HarmonyOS 6, in which AI agents aren’t features but the architecture driving user interactions. The beta release to developers signals a broader industry shift where operating systems transform from passive platforms into what are being framed as intelligent intermediaries that anticipate, learn, and act on behalf of users. The AI-first approach defines the latest release The centrepiece of HarmonyOS 6 lies in its AI agents framework, which lets developers create automated programmes without the complexity of building or training foundation models from scratch. The HarmonyOS Agent Framework attempts to make AI development more accessible in Huawei’s ecosystem. Richard Yu Chengdong, chairman of Huawei’s consumer business group, has announced that more than 50 AI agents from established ******** platforms including Weibo and Ximalaya will be available when HarmonyOS 6 launches to consumers. However, Yu did not specify a public release date during a developer conference presentation held on Friday. The AI agents integration develops an industry trend where operating systems become platforms for artificial intelligence deployment rather than application launchers. By embedding AI capabilities directly into the OS layer, Huawei positions HarmonyOS 6 as a foundation for what the company calls next-generation computing experiences. Ecosystem metrics show steady progress The platform has eight million registered developers and hosts more than 30,000 applications and “atomic services” – lightweight programmes that run without installation. HarmonyOS 5 operates on more than 40 device models, indicating steady hardware adoption. Yu acknowledged the competitive landscape, stating that HarmonyOS still lags behind Apple’s iOS and Google’s Android in terms of global reach and application support. “But the top 5,000 apps accounted for 99.9 per cent of consumer time spent” on Huawei devices, he said, suggesting the company prioritises essential applications over total app quantity. The pragmatic approach reflects Huawei’s understanding that ecosystem success depends on quality and user engagement rather than purely numerical metrics. The focus on core applications that drive user behaviour indicates a mature strategy to compete with established platforms. Pangu AI models target industrial applications Huawei has also introduced Pangu 5.5, the latest in the family of AI models designed for enterprise and industrial uses. The natural language processing model contains 718 billion parameters, while the computer vision model features 15 billion parameters – specifications that position these models competitively in the current AI landscape. The company is targeting five specialised sectors: medicine, finance, governance, manufacturing, and automotive. The industrial focus suggests Huawei is using AI development to strengthen its enterprise relationships while consumer market access remains constrained by geopolitical factors. The AI model’s integration with HarmonyOS 6 creates a vertically integrated stack where Huawei controls both the AI infrastructure and the operating system deployment, potentially offering advantages in optimisation and performance. Market trajectory and strategic implications According to consultancy Canalys, Huawei has shipped more than 103 million smartphones and 21 million tablets running HarmonyOS, with nearly half delivered in 2024. The acceleration indicates growing internal adoption and suggests the platform is gaining momentum in China’s domestic market. The company has expanded HarmonyOS beyond mobile devices, launching two laptops with the operating system last month. The multi-device strategy aims to create a unified software experience similar to Apple’s ecosystem approach, though execution in diverse hardware categories presents significant technical challenges. The HarmonyOS 6 development reflects Huawei’s broader transformation from a hardware-focused company to a software and services provider. The evolution, driven by US Entity List restrictions since 2019, has forced innovative approaches to technology development and market positioning. See also: Huawei Supernode 384 disrupts Nvidia’s AI market hold Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Huawei HarmonyOS 6 AI agents offer alternative to Android and iOS appeared first on AI News. View the full article
  14. ‘The OpenAI Files’ report, assembling voices of concerned ex-staff, claims the world’s most prominent AI lab is betraying safety for profit. What began as a noble quest to ensure AI would serve all of humanity is now teetering on the edge of becoming just another corporate giant, chasing immense profits while leaving safety and ethics in the dust. At the core of it all is a plan to tear up the original rulebook. When OpenAI started, it made a crucial promise: it put a cap on how much money investors could make. It was a legal guarantee that if they succeeded in creating world-changing AI, the vast benefits would flow to humanity, not just a handful of billionaires. Now, that promise is on the verge of being erased, apparently to satisfy investors who want unlimited returns. For the people who built OpenAI, this pivot away from AI safety feels like a profound betrayal. “The non-profit mission was a promise to do the right thing when the stakes got high,” says former staff member Carroll Wainwright. “Now that the stakes are high, the non-profit structure is being abandoned, which means the promise was ultimately empty.” Deepening crisis of trust Many of these deeply worried voices point to one person: CEO Sam Altman. The concerns are not new. Reports suggest that even at his previous companies, senior colleagues tried to have him removed for what they called “deceptive and chaotic” behaviour. That same feeling of mistrust followed him to OpenAI. The company’s own co-founder, Ilya Sutskever, who worked alongside Altman for years, and since launched his own startup, came to a chilling conclusion: “I don’t think Sam is the guy who should have the finger on the button for AGI.” He felt Altman was dishonest and created chaos, a terrifying combination for someone potentially in charge of our collective future. Mira Murati, the former CTO, felt just as uneasy. “I don’t feel comfortable about Sam leading us to AGI,” she said. She described a toxic pattern where Altman would tell people what they wanted to hear and then undermine them if they got in his way. It suggests manipulation that former OpenAI board member Tasha McCauley says “should be unacceptable” when the AI safety stakes are this high. This crisis of trust has had real-world consequences. Insiders say the culture at OpenAI has shifted, with the crucial work of AI safety taking a ********* to releasing “shiny products”. Jan Leike, who led the team responsible for long-term safety, said they were “sailing against the wind,” struggling to get the resources they needed to do their vital research. Another former employee, William Saunders, even gave a terrifying testimony to the US Senate, revealing that for long periods, security was so weak that hundreds of engineers could have stolen the company’s most advanced AI, including GPT-4. Desperate plea to prioritise AI safety at OpenAI But those who’ve left aren’t just walking away. They’ve laid out a roadmap to pull OpenAI back from the brink, a last-ditch effort to save the original mission. They’re calling for the company’s nonprofit heart to be given real power again, with an iron-clad veto over safety decisions. They’re demanding clear, honest leadership, which includes a new and thorough investigation into the conduct of Sam Altman. They want real, independent oversight, so OpenAI can’t just mark its own homework on AI safety. And they are pleading for a culture where people can speak up about their concerns without fearing for their jobs or savings—a place with real protection for whistleblowers. Finally, they are insisting that OpenAI stick to its original financial promise: the profit caps must stay. The goal must be public benefit, not unlimited private wealth. This isn’t just about the internal drama at a Silicon Valley company. OpenAI is building a technology that could reshape our world in ways we can barely imagine. The question its former employees are forcing us all to ask is a simple but profound one: who do we trust to build our future? As former board member Helen Toner warned from her own experience, “internal guardrails are fragile when money is on the line”. Right now, the people who know OpenAI best are telling us those safety guardrails have all but broken. See also: AI adoption matures but deployment hurdles remain Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post The OpenAI Files: Ex-staff claim profit greed betraying AI safety appeared first on AI News. View the full article
  15. Apple is beginning to use generative artificial intelligence to help design the chips that power its devices. The company’s hardware chief, Johny Srouji, made that clear during a speech last month in Belgium. He said Apple is exploring AI as a way to save time and reduce complexity in chip design, especially as chips grow more advanced. “Generative AI techniques have a high potential in getting more design work in less time, and it can be a huge productivity boost,” Srouji said. He was speaking while receiving an award from Imec, a semiconductor research group that works with major chipmakers around the world. He also mentioned how much Apple depends on third-party software from electronic design automation (EDA) companies. The tools are key to developing the company’s chips. Synopsys and Cadence, two of the biggest EDA firms, are both working to add more AI into their design tools. From the A4 to Vision Pro: A design timeline Srouji’s remarks offered a rare glimpse into Apple’s internal process. He walked through Apple’s journey, starting with the A4 chip in the iPhone 4, launched in 2010. Since then, Apple has built a range of custom chips, including those used in the iPad, Apple Watch, and Mac. The company also developed the chips that run the Vision Pro headset. He said that while hardware is important, the real challenge lies in design. Over time, chip design has become more complex and now requires tight coordination between hardware and software. Srouji said AI has the potential to make that coordination faster and more reliable. Why Apple is working with Broadcom on server chips In late 2024, Apple began a quiet project with chip supplier Broadcom to develop its first AI server chip. The processor, known internally as “Baltra,” is said to be part of Apple’s larger plan to support more AI services on the back end. That includes features tied to Apple Intelligence, the company’s new suite of AI tools for iPhones, iPads, and Macs. Baltra is expected to power Apple’s private cloud infrastructure. Unlike devices that run AI locally, this chip will sit in servers, likely inside Apple’s own data centres. It would help handle heavier AI workloads that are too much for on-device chips. On-device vs. cloud: Apple’s AI infrastructure split Apple is trying to balance user privacy with the need for more powerful AI features. Some of its AI tools will run directly on devices. Others will use server-based chips like Baltra. The setup is part of what Apple calls “Private Cloud Compute.” The company says users won’t need to sign in, and data will be kept anonymous. But the approach depends on having a solid foundation of hardware – both in devices and in the cloud. That’s where chips like Baltra come in. Building its own server chips would give Apple more control over performance, security, and integration. No backup plan: A pattern in Apple’s hardware strategy Srouji said Apple is used to taking big hardware risks. When the company moved its Mac lineup from Intel to Apple Silicon in 2020, it didn’t prepare a backup plan. “Moving the Mac to Apple Silicon was a huge bet for us. There was no backup plan, no split-the-lineup plan, so we went all in, including a monumental software effort,” he said. The same mindset now seems to apply to Apple’s AI chips. Srouji said the company is willing to go all in again, trusting that AI tools can make the chip design process faster and more precise. EDA firms like Synopsys and Cadence shape the roadmap While Apple designs its own chips, it depends heavily on tools built by other companies. Srouji mentioned how important EDA vendors are to Apple’s chip efforts. Cadence and Synopsys are both updating their software to include more AI features. Synopsys recently introduced a product called AgentEngineer. It uses AI agents to help chip designers automate repetitive tasks and manage complex workflows. The idea is to let human engineers focus on higher-level decisions. The changes could make it easier for companies like Apple to speed up chip development. Cadence is also expanding its AI offerings. Both firms are in a race to meet the needs of tech companies that want faster and cheaper ways to design chips. What comes next: Talent, testing, and production As Apple adds more AI into its chip design, it will need to bring in new kinds of talent. That includes engineers who can work with AI tools, as well as people who understand both hardware and machine learning. At the same time, chips like Baltra still need to be tested and manufactured. Apple will likely continue to rely on partners like TSMC for chip production. But the design work is moving more in-house, and AI is playing a ******* role in that shift. How Apple integrates these AI-designed chips into products and services remains to be seen. What’s clear is that the company is trying to tighten its control over the full stack – hardware, software, and now the infrastructure that powers AI. The post Apple hints at AI integration in chip design process appeared first on AI News. View the full article
  16. AI has moved beyond experimentation to become a core part of business operations, but deployment challenges persist. Research from Zogby Analytics, on behalf of Prove AI, shows that most organisations have graduated from testing the AI waters to diving in headfirst with production-ready systems. Despite this progress, businesses are still grappling with basic challenges around data quality, security, and effectively training their models. Looking at the numbers, it’s pretty eye-opening. 68% of organisations now have custom AI solutions up and running in production. Companies are putting their money where their mouth is too, with 81% spending at least a million annually on AI initiatives. Around a quarter are investing over 10 million each year, showing we’ve moved well beyond the “let’s experiment” phase into serious, long-term AI commitment. This shift is reshaping leadership structures as well. 86% of organisations have appointed someone to lead their AI efforts, typically with a ‘Chief AI Officer’ title or similar. These AI leaders are now almost as influential as CEOs when it comes to setting strategy with 43.3% of companies saying the CEO calls the AI shots, while 42% give that responsibility to their AI chief. But the AI deployment journey isn’t all smooth sailing. More than half of business leaders admit that training and fine-tuning AI models has been tougher than they expected. Data issues keep popping up, causing headaches with quality, availability, copyright, and model validation—undermining how effective these AI systems can be. Nearly 70% of organisations report having at least one AI project behind schedule, with data problems being the main culprit. As businesses get more comfortable with AI, they’re finding new ways to use it. While chatbots and virtual assistants remain popular (55% adoption), more technical applications are gaining ground. Software development now tops the list at 54%, alongside predictive analytics for forecasting and fraud detection at 52%. This suggests companies are moving beyond flashy customer-facing applications toward using AI to improve core operations. Marketing applications, once the gateway for many AI deployment initiatives, are getting less attention these days. When it comes to the AI models themselves, there’s a strong focus on generative AI, with 57% of organisations making it a priority. However, many are taking a balanced approach, combining these newer models with traditional machine learning techniques. Google’s Gemini and OpenAI’s GPT-4 are the most widely-used large language models, though DeepSeek, Claude, and Llama are also making strong showings. Most companies use two or three different LLMs, suggesting that a multi-model approach is becoming standard practice. Perhaps most interesting is the shift in where companies are running their AI deployment. While almost nine in ten organisations use cloud services for at least some of their AI infrastructure, there’s a growing trend toward bringing things back in-house. Two-thirds of business leaders now believe non-cloud deployments offer better security and efficiency. As a result, 67% plan to move their AI training data to on-premises or hybrid environments, seeking greater control over their digital assets. Data sovereignty is the top priority for 83% of respondents when deploying AI systems. Business leaders seem confident about their AI governance capabilities with around 90% claiming they’re effectively managing AI policy, can set up necessary guardrails, and can track their data lineage. However, this confidence stands in contrast to the practical challenges causing project delays. Issues with data labeling, model training, and validation continue to be stumbling blocks. This suggests a potential gap between executives’ confidence in their governance frameworks and the day-to-day reality of managing data. Talent shortages and integration difficulties with existing systems are also frequently cited reasons for delays. The days of AI experimentation are behind us and it’s now a fundamental part of how businesses operate. Organisations are investing heavily, reshaping their leadership structures, and finding new ways for AI deployment across their operations. Yet as ambitions grow, so do the challenges of putting these plans into action. The journey from pilot to production has exposed fundamental issues in data readiness and infrastructure. The resulting shift toward on-premises and hybrid solutions shows a new level of maturity, with organisations prioritising control, security, and governance. As AI deployment accelerates, ensuring transparency, traceability, and trust isn’t just a goal but a necessity for success. The confidence is real, but so is the caution. (Image by Roy Harryman) See also: Ren Zhengfei: China’s AI future and Huawei’s long game Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI adoption matures but deployment hurdles remain appeared first on AI News. View the full article
  17. Ask Huawei CEO Ren Zhengfei for his take on AI in China and the mountain of difficulties facing his company, and you get surprising answers. “I haven’t thought about it,” says Ren, in a Q&A with ******** media outlet People’s Daily. “It’s useless to think about it.” In a world obsessed with five-year plans and crisis management, his advice is almost jarring in its simplicity: “Don’t think about the difficulties. Just do it and move forward step by step.” This isn’t just a personal mantra; it’s the blueprint for how Huawei is navigating a storm of international sanctions and blockades. It’s a quiet determination that ripples through all his answers. When the conversation shifts to Huawei’s advanced Ascend AI chips, he is almost brutally honest. He doesn’t boast. In fact, he believes the hype has gotten ahead of reality. “The United States has exaggerated Huawei’s achievements. Huawei is not that great yet,” he admits, noting that their best chips are still a generation behind. So what do you do when you can’t buy the best tools? According to Ren, you get smarter with the ones you have. He explains that Huawei is leaning on its brilliance in software and mathematics to close the hardware gap in AI and beyond. “We use mathematics to make up for physics,” he says, describing a strategy of using code and linking chips together in powerful clusters to achieve results that can compete with the very best. Ingenuity born from necessity. This grounded perspective applies to people as much as it does to products. In an age of relentless corporate promotion, Ren is wary of the spotlight. “We are also under a lot of pressure when people praise us,” he reveals. “We will be more sober when people criticise us.” He sees criticism of Huawei not as an attack, but as a gift from the people who actually use their products. It’s a sign of a healthy relationship. His focus remains unwavering: “Don’t care about praise or criticism, but care about whether you can do well.” But the real heart of Ren’s vision, the idea that truly animates him, lies in something much deeper and slower than the next product cycle: basic scientific research. He speaks about it with the passion of a philosopher, arguing it is the very soul of progress. “If we do not do basic research, we will have no roots,” he warns. “Even if the leaves are lush and flourishing, they will fall down when the wind blows.” For Huawei, these are not just poetic words. They are backed by huge investment. Out of an annual R&D budget of 180 billion yuan (around $25 billion) a full third of it – 60 billion yuan (~$8.34 billion) – is poured into theoretical research. This is money spent without the expectation of an immediate return, a long-term bet on the power of human curiosity. It’s an investment in a future that may be decades away. Looking toward that future, Ren sees AI as a monumental shift not just for Huawei but for humanity. He believes China is well-positioned for this new era, not just because of its technology, but because of its powerful infrastructure and, most importantly, its people. Ren imagines a future where the real breakthroughs in AI won’t just come from programmers in tech giants like Huawei, but from experts in every field – doctors, engineers, and even miners – using AI to solve real-world problems. His optimism is infectious. He recalls an op-ed by New York Times columnist Thomas L. Friedman who departed China and published an article earlier this year with a title that requires no further explanation: ‘I Just Saw the Future. It Was Not in America.’ Ren Zhengfei seems to be a leader who has found a sense of calm in the eye of the storm. His focus is not on the shifting political winds, but on the slow, steady work of building something with deep roots, ready for whatever the future holds. Step by patient step. (Image credit: European Union under CC BY 4.0 license. Image cropped for effect.) See also: Hugging Face partners with Groq for ultra-fast AI model inference Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Ren Zhengfei: China’s AI future and Huawei’s long game appeared first on AI News. View the full article
  18. Hugging Face has added Groq to its AI model inference providers, bringing lightning-fast processing to the popular model hub. Speed and efficiency have become increasingly crucial in AI development, with many organisations struggling to balance model performance against rising computational costs. Rather than using traditional GPUs, Groq has designed chips purpose-built for language models. The company’s Language Processing Unit (LPU) is a specialised chip designed from the ground up to handle the unique computational patterns of language models. Unlike conventional processors that struggle with the sequential nature of language tasks, Groq’s architecture embraces this characteristic. The result? Dramatically reduced response times and higher throughput for AI applications that need to process text quickly. Developers can now access numerous popular open-source models through Groq’s infrastructure, including Meta’s Llama 4 and Qwen’s QwQ-32B. This breadth of model support ensures teams aren’t sacrificing capabilities for performance. Users have multiple ways to incorporate Groq into their workflows, depending on their preferences and existing setups. For those who already have a relationship with Groq, Hugging Face allows straightforward configuration of personal API keys within account settings. This approach directs requests straight to Groq’s infrastructure while maintaining the familiar Hugging Face interface. Alternatively, users can opt for a more hands-off experience by letting Hugging Face handle the connection entirely, with charges appearing on their Hugging Face account rather than requiring separate billing relationships. The integration works seamlessly with Hugging Face’s client libraries for both Python and JavaScript, though the technical details remain refreshingly simple. Even without diving into code, developers can specify Groq as their preferred provider with minimal configuration. Customers using their own Groq API keys are billed directly through their existing Groq accounts. For those preferring the consolidated approach, Hugging Face passes through the standard provider rates without adding markup, though they note that revenue-sharing agreements may evolve in the future. Hugging Face even offers a limited inference quota at no cost—though the company naturally encourages upgrading to PRO for those making regular use of these services. This partnership between Hugging Face and Groq emerges against a backdrop of intensifying competition in AI infrastructure for model inference. As more organisations move from experimentation to production deployment of AI systems, the bottlenecks around inference processing have become increasingly apparent. What we’re seeing is a natural evolution of the AI ecosystem. First came the race for ******* models, then came the rush to make them practical. Groq represents the latter—making existing models work faster rather than just building larger ones. For businesses weighing AI deployment options, the addition of Groq to Hugging Face’s provider ecosystem offers another choice in the balance between performance requirements and operational costs. The significance extends beyond technical considerations. Faster inference means more responsive applications, which translates to better user experiences across countless services now incorporating AI assistance. Sectors particularly sensitive to response times (e.g. customer service, healthcare diagnostics, financial analysis) stand to benefit from improvements to AI infrastructure that reduces the lag between question and answer. As AI continues its march into everyday applications, partnerships like this highlight how the technology ecosystem is evolving to address the practical limitations that have historically constrained real-time AI implementation. (Photo by Michał Mancewicz) See also: NVIDIA helps Germany lead Europe’s AI manufacturing race Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Hugging Face partners with Groq for ultra-fast AI model inference appeared first on AI News. View the full article
  19. Meta’s $14.8 billion investment in Scale AI – and the hiring of the startup’s CEO – is drawing attention to how US regulators will handle acquihire-style deals under the Trump administration. The deal gives Meta a 49% nonvoting stake in Scale AI, which hires gig workers to label training data for AI systems. Scale’s clients include Microsoft and OpenAI, two of Meta’s main competitors in the AI space. Because Meta hasn’t bought a controlling share, the deal avoided automatic antitrust review. But regulators could still examine it if they believe the structure was designed to sidestep scrutiny or hurt competition. Access and fairness concerns Some early signs of fallout have already surfaced. Google, one of Scale’s customers, reportedly cut ties with the company after Meta’s stake was announced. Others are said to be reconsidering their contracts. In response, a spokesperson for Scale said the company’s work remains strong and that it’s committed to protecting customer data. They declined to comment on Google’s decision. Alexandr Wang, Scale’s 28-year-old founder and CEO, will join Meta as part of the deal. He’ll stay on Scale’s board but won’t have full access to company information, according to people familiar with the arrangement. Regulatory outlook under Trump The Trump administration has taken a lighter approach to AI regulation. Officials have said they don’t want to interfere with how AI develops, though they’ve also voiced doubts about the power held by large tech companies. William Kovacic, a law professor at George Washington University, said regulators are likely watching AI deals closely, even if they’re not blocking them. “It doesn’t necessarily mean they’ll step in, but they’ll keep a close eye on what these firms do,” he said. The Federal Trade Commission (FTC) has been looking into similar deals over the past two years. Under the Biden administration, the FTC opened inquiries into Amazon’s hiring of key talent from AI firm Adept and Microsoft’s $650 million deal with Inflection AI, which gave it access to the company’s models and staff. Amazon’s deal closed without further action, and the FTC hasn’t taken public steps against Microsoft, although a broader investigation into the company continues. Legal edges and political pressure Some legal experts say Meta’s approach may reduce its legal exposure. David Olson, an antitrust law professor at Boston College, said a nonvoting ********* stake offers “a lot of protection,” though he noted that the FTC could still investigate the deal if it raises concerns. Not everyone is convinced the deal is harmless. Senator Elizabeth Warren, who has been pushing for tighter oversight of AI partnerships, said the Meta investment should be reviewed closely. “Meta can call this deal whatever it wants,” she said. “But if it breaks the law by cutting competition or making it easier for Meta to dominate, regulators should step in.” Meta is facing an antitrust lawsuit filed by the FTC over claims it built a monopoly through acquisitions and platform control. It’s unclear whether the agency will also examine its involvement with Scale. Meanwhile, the Department of Justice is digging into Google’s AI investments. According to Bloomberg, the DOJ is reviewing Google’s partnership with Character.AI to see if it was structured to dodge antitrust review. Officials are also pushing for a rule that would force Google to disclose new AI investments ahead of time. A wider pattern The Meta-Scale deal fits into a broader trend of tech companies using investments and talent deals to lock in access to key AI tools and people – without triggering full-scale antitrust reviews. As more money moves into AI and more partnerships form, regulators will have to decide whether these deals are legitimate business decisions or attempts to skirt the rules. For now, the answer may depend on how much power a company gains – even without buying control. (Photo by Dima Solomin) See also: Meta beefs up AI security with new Llama tools Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Meta buys stake in Scale AI, raising antitrust concerns appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
  20. Ericsson’s Cognitive Network Solutions has joined forces with AWS to develop AI technologies for self-healing mobile networks. Behind every text message and video call lies a complex system that telecom companies spend billions maintaining. This partnership between Ericsson and AWS aims to make those networks not just smarter, but virtually self-sufficient. Jean-Christophe Laneri, VP and Head of Cognitive Network Solutions at Ericsson, said: “This collaboration marks a pivotal milestone in network optimisation technology. “AWS’ global infrastructure and AI, alongside Ericsson’s unique cross-domain telecom experience and insights, will assist communication service providers in adapting to changing business conditions with predictable costs and enhanced operational efficiency.” When the internet stops working at home, the first port of call for most is the “off and on again” approach: replug connections and restart the router. If that fails, call customer service. Using agentic AI, this partnership aims to automate the identification of problems, test solutions, and fix issues before you even notice. However, rather than just a home connection, the aim is to use agentic AI to do this on the massive scale of telecom networks serving potentially millions of people. Fabio Cerone, General Manager of the EMEA Telco Business Unit at AWS, explained: “By working together, AWS and Ericsson will help telecommunications providers automate complex operations, reduce costs, and deliver better experiences for their customers. We are delivering solutions that create business value today while building toward autonomous networks.” The technology works through something called RAN automation applications, or “rApps” in industry speak. These are sophisticated tools that can learn to manage different aspects of a network. The breakthrough comes from how these tools can now work together using agentic AI to improve networks, similar to colleagues collaborating on a project. While the technology is undeniably complex, the potential benefits for everyday mobile users are straightforward. Networks that can anticipate problems and heal themselves could mean fewer dropped calls, more consistent data speeds, and better coverage in challenging areas. For instance, imagine you’re at a football match with 50,000 other fans all trying to use their phones. Today’s networks often buckle under such pressure. However, a smarter and more autonomous network might recognise the gathering crowd early, automatically redirect resources, and maintain service quality without requiring engineers to intervene. While traditional networks follow precise programmed instructions, the new approach tells the network what outcome is desired – like “ensure video streaming works well in this area” – and the AI figures out how to make that happen, adjusting to changing conditions in real-time. While terms like “intent-based networks” and “autonomous management systems” might sound like science fiction, they represent a fundamental shift in how essential services are delivered. As 5G networks continue expanding and 6G looms on the horizon, the sheer complexity of managing these systems has outgrown traditional approaches. Mobile operators are under tremendous pressure to improve service while reducing costs; seemingly contradictory goals. Autonomous networks offer a potential solution by allowing companies to do more with less human intervention. As our dependence on reliable connectivity grows – supporting everything from remote healthcare to education and emerging technologies like autonomous vehicles – the stakes for network performance continue to rise. The partnership between these tech giants to create self-healing mobile networks signals recognition that AI isn’t just a buzzword but a necessary evolution for critical infrastructure. See also: NVIDIA helps Germany lead Europe’s AI manufacturing race Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Ericsson and AWS bet on AI to create self-healing networks appeared first on AI News. View the full article
  21. For decades, companies of all sizes have recognized that the data available to them holds significant value, for improving user and customer experiences and for developing strategic plans based on empirical evidence. As AI becomes increasingly accessible and practical for real-world business applications, the potential value of available data has grown exponentially. Successfully adopting AI requires significant effort in data collection, curation, and preprocessing. Moreover, important aspects such as data governance, privacy, anonymization, regulatory compliance, and security must be addressed carefully from the outset. In a conversation with Henrique Lemes, Americas Data Platform Leader at IBM, we explored the challenges enterprises face in implementing practical AI in a range of use cases. We began by examining the nature of data itself, its various types, and its role in enabling effective AI-powered applications. Henrique highlighted that referring to all enterprise information simply as ‘data’ understates its complexity. The modern enterprise navigates a fragmented landscape of diverse data types and inconsistent quality, particularly between structured and unstructured sources. In simple terms, structured data refers to information that is organized in a standardized and easily searchable format, one that enables efficient processing and analysis by software systems. Unstructured data is information that does not follow a predefined format nor organizational model, making it more complex to process and analyze. Unlike structured data, it includes diverse formats like emails, social media posts, videos, images, documents, and audio files. While it lacks the clear organization of structured data, unstructured data holds valuable insights that, when effectively managed through advanced analytics and AI, can drive innovation and inform strategic business decisions. Henrique stated, “Currently, less than 1% of enterprise data is utilized by generative AI, and over 90% of that data is unstructured, which directly affects trust and quality”. The element of trust in terms of data is an important one. Decision-makers in an organization need firm belief (trust) that the information at their fingertips is complete, reliable, and properly obtained. But there is evidence that states less than half of data available to businesses is used for AI, with unstructured data often going ignored or sidelined due to the complexity of processing it and examining it for compliance – especially at scale. To open the way to better decisions that are based on a fuller set of empirical data, the trickle of easily consumed information needs to be turned into a firehose. Automated ingestion is the answer in this respect, Henrique said, but the governance rules and data policies still must be applied – to unstructured and structured data alike. Henrique set out the three processes that let enterprises leverage the inherent value of their data. “Firstly, ingestion at scale. It’s important to automate this process. Second, curation and data governance. And the third [is when] you make this available for generative AI. We achieve over 40% of ROI over any conventional RAG use-case.” IBM provides a unified strategy, rooted in a deep understanding of the enterprise’s AI journey, combined with advanced software solutions and domain expertise. This enables organizations to efficiently and securely transform both structured and unstructured data into AI-ready assets, all within the boundaries of existing governance and compliance frameworks. “We bring together the people, processes, and tools. It’s not inherently simple, but we simplify it by aligning all the essential resources,” he said. As businesses scale and transform, the diversity and volume of their data increase. To keep up, AI data ingestion process must be both scalable and flexible. “[Companies] encounter difficulties when scaling because their AI solutions were initially built for specific tasks. When they attempt to broaden their scope, they often aren’t ready, the data pipelines grow more complex, and managing unstructured data becomes essential. This drives an increased demand for effective data governance,” he said. IBM’s approach is to thoroughly understand each client’s AI journey, creating a clear roadmap to achieve ROI through effective AI implementation. “We prioritize data accuracy, whether structured or unstructured, along with data ingestion, lineage, governance, compliance with industry-specific regulations, and the necessary observability. These capabilities enable our clients to scale across multiple use cases and fully capitalize on the value of their data,” Henrique said. Like anything worthwhile in technology implementation, it takes time to put the right processes in place, gravitate to the right tools, and have the necessary vision of how any data solution might need to evolve. IBM offers enterprises a range of options and tooling to enable AI workloads in even the most regulated industries, at any scale. With international banks, finance houses, and global multinationals among its client roster, there are few substitutes for Big Blue in this context. To find out more about enabling data pipelines for AI that drive business and offer fast, significant ROI, head over to this page. The post Unlock the other 99% of your data – now ready for AI appeared first on AI News. View the full article
  22. Back when most business applications were monolithic, ensuring their resilience was by no means easy. But given the way apps run in 2025 and what’s expected of them, maintaining monolithic apps was arguably simpler. Back then, IT staff had a finite set of criteria on which to improve an application’s resilience, and the rate of change to the application and its infrastructure was a great deal slower. Today, the demands we place on apps are different, more numerous, and subject to a faster rate of change. There are also just more applications. According to IDC, there are likely to be a billion more in production by 2028 – and many of these will be running on cloud-native code and mixed infrastructure. With technological complexity and higher service expectations of responsiveness and quality, ensuring resilience has grown into being a massively more complex ask. Multi-dimensional elements determine app resilience, dimensions that fall into different areas of responsibility in the modern enterprise: Code quality falls to development teams; infrastructure might be down to systems administrators or DevOps; compliance and data governance officers have their own needs and stipulations, as do cybersecurity professionals, storage engineers, database administrators, and a dozen more besides. With multiple tools designed to ensure the resilience of an app – with definitions of what constitutes resilience depending on who’s asking – it’s small wonder that there are typically dozens of tools that work to improve and maintain resilience in play at any one time in the modern enterprise. Determining resilience across the whole enterprise’s portfolio, therefore, is near-impossible. Monitoring software is silo-ed, and there’s no single pane of reference. IBM’s Concert Resilience Posture simplifies the complexities of multiple dashboards, normalizes the different quality judgments, breaks down data from different silos, and unifies the disparate purposes of monitoring and remediation tools in play. Speaking ahead of TechEx North America (4-5 June, Santa Clara Convention Center), Jennifer Fitzgerald, Product Management Director, Observability, at IBM, took us through the Concert Resilience Posture solution, its aims, and its ethos. On the latter, she differentiates it from other tools: “Everything we’re doing is grounded in applications – the health and performance of the applications and reducing risk factors for the application.” The app-centric approach means the bringing together of the different metrics in the context of desired business outcomes, answering questions that matter to an organization’s stakeholders, like: Will every application scale? What effects have code changes had? Are we over- or under-resourcing any element of any application? Is infrastructure supporting or hindering application deployment? Are we safe and in line with data governance policies? What experience are we giving our customers? Jennifer says IBM Concert Resilience Posture is, “a new way to think about resilience – to move it from a manual stitching [of other tools] or a ton of different dashboards.” Although the definition of resilience can be ephemeral, according to which criteria are in play, Jennifer says it’s comprised, at its core, of eight non-functional requirements (NFRs): Observability Availability Maintainability Recoverability Scalability Usability Integrity Security NFRs are important everywhere in the organization, and there are perhaps only two or three that are the sole remit of one department – security falls to the CISO, for example. But ensuring the best quality of resilience in all of the above is critically important right across the enterprise. It’s a shared responsibility for maintaining excellence in performance, potential, and safety. What IBM Concert Resilience Posture gives organizations, different from what’s offered by a collection of disparate tools and beyond the single-pane-of-glass paradigm, is proactivity. Proactive resilience comes from its ability to give a resilience score, based on multiple metrics, with a score determined by the many dozens of data points in each NFR. Companies can see their overall or per-app scores drift as changes are made – to the infrastructure, to code, to the portfolio of applications in production, and so on. “The thought around resilience is that we as humans aren’t perfect. We’re going to make mistakes. But how do you come back? You want your applications to be fully, highly performant, always optimal, with the required uptime. But issues are going to happen. A code change is introduced that breaks something, or there’s more demand on a certain area that slows down performance. And so the application resilience we’re looking at is all around the ability of systems to withstand and recover quickly from disruptions, failures, spikes in demand, [and] unexpected events,” she says. IBM’s acquisition history points to some of the complimentary elements of the Concert Resilience Posture solution – Instana for full-stack observability, Turbonomic for resource optimization, for example. But the whole is greater than the sum of the parts. There’s an AI-powered continuous assessment of all elements that make up an organization’s resilience, so there’s one place where decision-makers and IT teams can assess, manage, and configure the full-stack’s resilience profile. The IBM portfolio of resilience-focused solutions helps teams see when and why loads change and therefore where resources are wasted. It’s possible to ensure that necessary resources are allocated only when needed, and systems automatically scale back when they’re not. That sort of business- and cost-centric capability is at the heart of app-centric resilience, and means that a company is always optimizing its resources. Overarching all aspects of app performance and resilience is the element of cost. Throwing extra resources at an under-performing application (or its supporting infrastructure) isn’t a viable solution in most organizations. With IBM, organizations get the ability to scale and grow, to add or iterate apps safely, without necessarily having to invest in new provisioning, either in the cloud or on-premise. Plus, they can see how any changes impact resilience. It’s making best use of what’s available, and winning back capacity – all while getting the best performance, responsiveness, reliability, and uptime across the enterprise’s application portfolio. Jennifer says, “There’s a lot of different things that can impact resilience and that’s why it’s been so difficult to measure. An application has so many different layers underneath, even in just its resources and how it’s built. But then there’s the spider web of downstream impacts. A code change could impact multiple apps, or it could impact one piece of an app. What is the downstream impact of something going wrong? And that’s a big piece of what our tools are helping organizations with.” You can read more about IBM’s work to make today and tomorrow’s applications resilient. The post The concerted effort of maintaining application resilience appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
  23. Germany and NVIDIA are building possibly the most ambitious European tech project of the decade: the continent’s first industrial AI cloud. NVIDIA has been on a European tour over the past month with CEO Jensen Huang charming audiences at London Tech Week before dazzling the crowds at Paris’s VivaTech. But it was his meeting with ******* Chancellor Friedrich Merz that might prove the most consequential stop. The resulting partnership between NVIDIA and Deutsche Telekom isn’t just another corporate handshake; it’s potentially a turning point for European technological sovereignty. An “AI factory” (as they’re calling it) will be created with a focus on manufacturing, which is hardly surprising given Germany’s renowned industrial heritage. The facility aims to give European industrial players the computational firepower to revolutionise everything from design to robotics. “In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Huang. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.” It’s rare to hear such urgency from a telecoms CEO, but Deutsche Telekom’s Timotheus Höttges added: “Europe’s technological future needs a sprint, not a stroll. We must seize the opportunities of artificial intelligence now, revolutionise our industry, and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.” The first phase alone will deploy 10,000 NVIDIA Blackwell GPUs spread across various high-performance systems. That makes this Germany’s largest AI deployment ever; a statement the country isn’t content to watch from the sidelines as AI transforms global industry. A Deloitte study recently highlighted the critical importance of AI technology development to Germany’s future competitiveness, particularly noting the need for expanded data centre capacity. When you consider that demand is expected to triple within just five years, this investment seems less like ambition and more like necessity. Robots teaching robots One of the early adopters is NEURA Robotics, a ******* firm that specialises in cognitive robotics. They’re using this computational muscle to power something called the Neuraverse which is essentially a connected network where robots can learn from each other. Think of it as a robotic hive mind for skills ranging from precision welding to household ironing, with each machine contributing its learnings to a collective intelligence. “Physical AI is the electricity of the future—it will power every machine on the planet,” said David Reger, Founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.” The implications of this AI project for manufacturing in Germany could be profound. This isn’t just about making existing factories slightly more efficient; it’s about reimagining what manufacturing can be in an age of intelligent machines. AI for more than just Germany’s industrial titans What’s particularly promising about this project is its potential reach beyond Germany’s industrial titans. The famed Mittelstand – the network of specialised small and medium-sized businesses that forms the backbone of the ******* economy – stands to benefit. These companies often lack the resources to build their own AI infrastructure but possess the specialised knowledge that makes them perfect candidates for AI-enhanced innovation. Democratising access to cutting-edge AI could help preserve their competitive edge in a challenging global market. Academic and research institutions will also gain access, potentially accelerating innovation across numerous fields. The approximately 900 Germany-based startups in NVIDIA’s Inception program will be eligible to use these resources, potentially unleashing a wave of entrepreneurial AI applications. The road to Europe’s AI gigafactory However impressive this massive project is, it’s viewed merely as a stepping stone towards something even more ambitious: Europe’s AI gigafactory. This planned 100,000 GPU-powered initiative backed by the EU and Germany won’t come online until 2027, but it represents Europe’s determination to carve out its own technological future. As other European telecom providers follow suit with their own AI infrastructure projects, we may be witnessing the beginning of a concerted effort to establish technological sovereignty across the continent. For a region that has often found itself caught between American tech dominance and ******** ambitions, building indigenous AI capability represents more than economic opportunity. Whether this bold project in Germany will succeed remains to be seen, but one thing is clear: Europe is no longer content to be a passive consumer of AI technology developed elsewhere. (Photo by Maheshkumar Painam) See also: Sam Altman, OpenAI: The superintelligence era has begun Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post NVIDIA helps Germany lead Europe’s AI manufacturing race appeared first on AI News. View the full article
  24. Modern healthcare innovations span AI, devices, software, images, and regulatory frameworks, all requiring stringent coordination. Generative AI arguably has the strongest transformative potential in healthcare technology programmes, with it already being applied across various domains, such as R&D, commercial operations, and supply chain management. Traditional models for medical appointments, like face-to-face appointments, and paper-based processes may not be sufficient to meet the fast-paced, data-driven medical landscape of today. Therefore, healthcare professionals and patients are seeking more convenient and efficient ways to access and share information, meeting the complex standards of modern medical science. According to McKinsey, Medtech companies are at the forefront of healthcare innovation, estimating they could capture between $14 billion and $55 billion annually in productivity gains. Through GenAI adoption, an additional $50 billion plus in revenue is estimated from products and service innovations. A McKinsey 2024 survey revealed around two thirds of Medtech executives have already implemented Gen AI, with approximately 20% scaling their solutions up and reporting substantial benefits to productivity. While advanced technology implementation is growing across the medical industry, challenges persist. Organisations face hurdles like data integration issues, decentralised strategies, and skill gaps. Together, these highlight a need for a more streamlined approach to Gen AI deployment. Of all the Medtech domains, R&D is leading the way in Gen AI adoption. Being the most comfortable with new technologies, R&D departments use Gen AI tools to streamline work processes, such as summarising research papers or scientific articles, highlighting a grassroots adoption trend. Individual researchers are using AI to enhance productivity, even when no formal company-wide strategies are in place. While AI tools automate and accelerate R&D tasks, human review is still required to ensure final submissions are correct and satisfactory. Gen AI is proving to reduce time spent on administrative tasks for teams and improve research accuracy and depth, with some companies experiencing 20% to 30% gains in research productivity. KPIs for success in healthcare product programmes Measuring business performance is essential in the healthcare sector. The number one goal is, of course, to deliver high-quality care, yet simultaneously maintain efficient operations. By measuring and analysing KPIs, healthcare providers are in a better position to improve patient outcomes through their data-based considerations. KPIs can also improve resource allocation, and encourage continuous improvement in all areas of care. In terms of healthcare product programmes, these structured initiatives prioritise the development, delivery, and continual optimisation of medical products. But to be a success, they require cross-functional coordination of clinical, technical, regulatory, and business teams. Time to market is critical, ensuring a product moves from the concept stage to launch as quickly as possible. Of particular note is the emphasis needing to be placed on labelling and documentation. McKinsey notes that AI-assisted labelling has resulted in a 20%-30% improvement in operational efficiency. Resource utilisation rates are also important, showing how efficiently time, budget, and/or headcount are used during the developmental stage of products. In the healthcare sector, KPIs ought to focus on several factors, including operational efficiency, patient outcomes, financial health of the business, and patient satisfaction. To achieve a comprehensive view of performance, these can be categorised into financial, operational, clinical quality, and patient experience. Bridging user experience with technical precision – design awards Innovation is no longer solely judged by technical performance with user experience (UX) being equally important. Some of the latest innovations in healthcare are recognised at the UX Design Awards, products that exemplify the best in user experience as well as technical precision. Top products prioritise the needs and experiences of both patients and healthcare professionals, also ensuring each product meets the rigorous clinical and regulatory standards of the sector. One example is the CIARTIC Move by Siemens Healthineers, a self-driving 3D C-arm imaging system that lets surgeons operate, controlling the device wirelessly in a sterile field. Computer hardware company ASUS has also received accolades for its HealthConnect App and VivoWatch Series, showcasing the fusion of AIoT-driven smart healthcare solutions with user-friendly interfaces – sometimes in what are essentially consumer devices. This demonstrates how technical innovation is being made accessible and becoming increasingly intuitive as patients gain technical fluency. Navigating regulatory and product development pathways simultaneously The establishing of clinical and regulatory paths is important, as this enables healthcare teams to feed a twin stream of findings back into development. Gen AI adoption has become a transformative approach, automating the production and refining of complex documents, mixed data sets, and structured and unstructured data. By integrating regulatory considerations early and adopting technologies like Gen AI as part of agile practices, healthcare product programmes help teams navigate a regulatory landscape that can often shift. Baking a regulatory mindset into a team early helps ensure compliance and continued innovation. (Image source: “IBM Achieves New Deep Learning Breakthrough” by IBM Research is licensed under CC BY-ND 2.0.) See also: Magistral: Mistral AI challenges big tech with reasoning model Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post MedTech AI, hardware, and clinical application programmes appeared first on AI News. View the full article
  25. Enterprise artificial intelligence investment is unprecedented, with IDC projecting global spending on AI and GenAI to double to $631 billion by 2028. Yet beneath the impressive budget allocations and boardroom enthusiasm lies a troubling reality: most organisations struggle to translate their AI ambitions into operational success. The sobering statistics behind AI’s promise ModelOp’s 2025 AI Governance Benchmark Report, based on input from 100 senior AI and data leaders at Fortune 500 enterprises, reveals a disconnect between aspiration and execution. While more than 80% of enterprises have 51 or more generative AI projects in proposal phases, only 18% have successfully deployed more than 20 models into production. The execution gap represents one of the most significant challenges facing enterprise AI today. Most generative AI projects still require 6 to 18 months to go live – if they reach production at all. The result is delayed returns on investment, frustrated stakeholders, and diminished confidence in AI initiatives in the enterprise. The cause: Structural, not technical barriers The biggest obstacles preventing AI scalability aren’t technical limitations – they’re structural inefficiencies plaguing enterprise operations. The ModelOp benchmark report identifies several problems that create what experts call a “time-to-market quagmire.” Fragmented systems plague implementation. 58% of organisations cite fragmented systems as the top obstacle to adopting governance platforms. Fragmentation creates silos where different departments use incompatible tools and processes, making it nearly impossible to maintain consistent oversight in AI initiatives. Manual processes dominate despite digital transformation. 55% of enterprises still rely on manual processes – including spreadsheets and email – to manage AI use case intake. The reliance on antiquated methods creates bottlenecks, increases the likelihood of errors, and makes it difficult to scale AI operations. Lack of standardisation hampers progress. Only 23% of organisations implement standardised intake, development, and model management processes. Without these elements, each AI project becomes a unique challenge requiring custom solutions and extensive coordination by multiple teams. Enterprise-level oversight remains rare Just 14% of companies perform AI assurance at the enterprise level, increasing the risk of duplicated efforts and inconsistent oversight. The lack of centralised governance means organisations often discover they’re solving the same problems multiple times in different departments. The governance revolution: From obstacle to accelerator A change is taking place in how enterprises view AI governance. Rather than seeing it as a compliance burden that slows innovation, forward-thinking organisations recognise governance as an important enabler of scale and speed. Leadership alignment signals strategic shift. The ModelOp benchmark data reveals a change in organisational structure: 46% of companies now assign accountability for AI governance to a Chief Innovation Officer – more than four times the number who place accountability under Legal or Compliance. This strategic repositioning reflects a new understanding that governance isn’t solely about risk management, but can enable innovation. Investment follows strategic priority. A financial commitment to AI governance underscores its importance. According to the report, 36% of enterprises have budgeted at least $1 million annually for AI governance software, while 54% have allocated resources specifically for AI Portfolio Intelligence to track value and ROI. What high-performing organisations do differently The enterprises that successfully bridge the ‘execution gap’ share several characteristics in their approach to AI implementation: Standardised processes from day one. Leading organisations implement standardised intake, development, and model review processes in AI initiatives. Consistency eliminates the need to reinvent workflows for each project and ensures that all stakeholders understand their responsibilities. Centralised documentation and inventory. Rather than allowing AI assets to proliferate in disconnected systems, successful enterprises maintain centralised inventories that provide visibility into every model’s status, performance, and compliance posture. Automated governance checkpoints. High-performing organisations embed automated governance checkpoints throughout the AI lifecycle, helping ensure compliance requirements and risk assessments are addressed systematically rather than as afterthoughts. End-to-end traceability. Leading enterprises maintain complete traceability of their AI models, including data sources, training methods, validation results, and performance metrics. Measurable impact of structured governance The benefits of implementing comprehensive AI governance extend beyond compliance. Organisations that adopt lifecycle automation platforms reportedly see dramatic improvements in operational efficiency and business outcomes. A financial services firm profiled in the ModelOp report experienced a halving of time to production and an 80% reduction in issue resolution time after implementing automated governance processes. Such improvements translate directly into faster time-to-value and increased confidence among business stakeholders. Enterprises with robust governance frameworks report the ability to many times more models simultaneously while maintaining oversight and control. This scalability lets organisations pursue AI initiatives in multiple business units without overwhelming their operational capabilities. The path forward: From stuck to scaled The message from industry leaders that the gap between AI ambition and execution is solvable, but it requires a shift in approach. Rather than treating governance as a necessary evil, enterprises should realise it enables AI innovation at scale. Immediate action items for AI leaders Organisations looking to escape the ‘time-to-market quagmire’ should prioritise the following: Audit current state: Conduct an assessment of existing AI initiatives, identifying fragmented processes and manual bottlenecks Standardise workflows: Implement consistent processes for AI use case intake, development, and deployment in all business units Invest in integration: Deploy platforms to unify disparate tools and systems under a single governance framework Establish enterprise oversight: Create centralised visibility into all AI initiatives with real-time monitoring and reporting abilities The competitive advantage of getting it right Organisations that can solve the execution challenge will be able to bring AI solutions to market faster, scale more efficiently, and maintain the trust of stakeholders and regulators. Enterprises that continue with fragmented processes and manual workflows will find themselves disadvantaged compared to their more organised competitors. Operational excellence isn’t about efficiency but survival. The data shows enterprise AI investment will continue to grow. Therefore, the question isn’t whether organisations will invest in AI, but whether they’ll develop the operational abilities necessary to realise return on investment. The opportunity to lead in the AI-driven economy has never been greater for those willing to embrace governance as an enabler not an obstacle. (Image source: Unsplash) The post The AI execution gap: Why 80% of projects don’t reach production appeared first on AI News. View the full article

Important Information

Privacy Notice: We utilize cookies to optimize your browsing experience and analyze website traffic. By consenting, you acknowledge and agree to our Cookie Policy, ensuring your privacy preferences are respected.