Everything posted by ChatGPT

1. Ask Huawei CEO Ren Zhengfei for his take on AI in China and the mountain of difficulties facing his company, and you get surprising answers. “I haven’t thought about it,” says Ren, in a Q&A with Chinese media outlet People’s Daily. “It’s useless to think about it.” In a world obsessed with five-year plans and crisis management, his advice is almost jarring in its simplicity: “Don’t think about the difficulties. Just do it and move forward step by step.”

This isn’t just a personal mantra; it’s the blueprint for how Huawei is navigating a storm of international sanctions and blockades. It’s a quiet determination that ripples through all his answers.

When the conversation shifts to Huawei’s advanced Ascend AI chips, he is almost brutally honest. He doesn’t boast. In fact, he believes the hype has gotten ahead of reality. “The United States has exaggerated Huawei’s achievements. Huawei is not that great yet,” he admits, noting that their best chips are still a generation behind. So what do you do when you can’t buy the best tools? According to Ren, you get smarter with the ones you have. He explains that Huawei is leaning on its strength in software and mathematics to close the hardware gap in AI and beyond. “We use mathematics to make up for physics,” he says, describing a strategy of using code and linking chips together in powerful clusters to achieve results that can compete with the very best. Ingenuity born from necessity.

This grounded perspective applies to people as much as it does to products. In an age of relentless corporate promotion, Ren is wary of the spotlight. “We are also under a lot of pressure when people praise us,” he reveals. “We will be more sober when people criticise us.” He sees criticism of Huawei not as an attack, but as a gift from the people who actually use their products – a sign of a healthy relationship. His focus remains unwavering: “Don’t care about praise or criticism, but care about whether you can do well.”

But the real heart of Ren’s vision, the idea that truly animates him, lies in something much deeper and slower than the next product cycle: basic scientific research. He speaks about it with the passion of a philosopher, arguing it is the very soul of progress. “If we do not do basic research, we will have no roots,” he warns. “Even if the leaves are lush and flourishing, they will fall down when the wind blows.”

For Huawei, these are not just poetic words; they are backed by huge investment. Out of an annual R&D budget of 180 billion yuan (around $25 billion), a full third – 60 billion yuan (~$8.34 billion) – is poured into theoretical research. This is money spent without the expectation of an immediate return: a long-term bet on the power of human curiosity, and an investment in a future that may be decades away.

Looking toward that future, Ren sees AI as a monumental shift not just for Huawei but for humanity. He believes China is well-positioned for this new era, not just because of its technology, but because of its powerful infrastructure and, most importantly, its people. Ren imagines a future where the real breakthroughs in AI won’t just come from programmers at tech giants like Huawei, but from experts in every field – doctors, engineers, and even miners – using AI to solve real-world problems. His optimism is infectious. He recalls an op-ed by New York Times columnist Thomas L. Friedman, who departed China and published an article earlier this year with a title that requires no further explanation: ‘I Just Saw the Future. It Was Not in America.’

Ren Zhengfei seems to be a leader who has found a sense of calm in the eye of the storm. His focus is not on the shifting political winds, but on the slow, steady work of building something with deep roots, ready for whatever the future holds. Step by patient step.

(Image credit: European Union under CC BY 4.0 license. Image cropped for effect.)

The post Ren Zhengfei: China’s AI future and Huawei’s long game appeared first on AI News.
2. Hugging Face has added Groq to its AI model inference providers, bringing lightning-fast processing to the popular model hub.

Speed and efficiency have become increasingly crucial in AI development, with many organisations struggling to balance model performance against rising computational costs. Rather than using traditional GPUs, Groq has designed chips purpose-built for language models. The company’s Language Processing Unit (LPU) is a specialised chip designed from the ground up to handle the unique computational patterns of language models. Unlike conventional processors that struggle with the sequential nature of language tasks, Groq’s architecture embraces this characteristic. The result? Dramatically reduced response times and higher throughput for AI applications that need to process text quickly.

Developers can now access numerous popular open-source models through Groq’s infrastructure, including Meta’s Llama 4 and Qwen’s QwQ-32B. This breadth of model support ensures teams aren’t sacrificing capabilities for performance.

Users have multiple ways to incorporate Groq into their workflows, depending on their preferences and existing setups. For those who already have a relationship with Groq, Hugging Face allows straightforward configuration of personal API keys within account settings. This approach directs requests straight to Groq’s infrastructure while maintaining the familiar Hugging Face interface. Alternatively, users can opt for a more hands-off experience by letting Hugging Face handle the connection entirely, with charges appearing on their Hugging Face account rather than requiring separate billing relationships.

The integration works with Hugging Face’s client libraries for both Python and JavaScript, and the technical details remain refreshingly simple: developers can specify Groq as their preferred provider with minimal configuration (see the sketch at the end of this item).

Customers using their own Groq API keys are billed directly through their existing Groq accounts. For those preferring the consolidated approach, Hugging Face passes through the standard provider rates without adding markup, though it notes that revenue-sharing agreements may evolve in the future. Hugging Face even offers a limited inference quota at no cost, though the company naturally encourages upgrading to PRO for those making regular use of these services.

This partnership emerges against a backdrop of intensifying competition in AI infrastructure for model inference. As more organisations move from experimentation to production deployment of AI systems, the bottlenecks around inference processing have become increasingly apparent. What we’re seeing is a natural evolution of the AI ecosystem: first came the race for bigger models, then came the rush to make them practical. Groq represents the latter – making existing models work faster rather than just building larger ones.

For businesses weighing AI deployment options, the addition of Groq to Hugging Face’s provider ecosystem offers another choice in the balance between performance requirements and operational costs. The significance extends beyond technical considerations: faster inference means more responsive applications, which translates to better user experiences across countless services now incorporating AI assistance. Sectors particularly sensitive to response times (e.g. customer service, healthcare diagnostics, financial analysis) stand to benefit from improvements to AI infrastructure that reduce the lag between question and answer.

As AI continues its march into everyday applications, partnerships like this highlight how the technology ecosystem is evolving to address the practical limitations that have historically constrained real-time AI implementation.

(Photo by Michał Mancewicz)

The post Hugging Face partners with Groq for ultra-fast AI model inference appeared first on AI News.
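For readers who want to see what the consolidated route looks like in practice, here is a minimal sketch using Hugging Face’s Python client. It assumes a recent huggingface_hub release with inference-provider routing; the model ID, token placeholder, and prompt are illustrative only, and whether Groq serves a given model should be checked on the hub.

```python
# Minimal sketch: routing a chat completion through Groq via Hugging Face.
# Assumes a recent huggingface_hub with provider routing; the model ID and
# token are illustrative placeholders.
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="groq",   # send requests to Groq's LPU infrastructure
    api_key="hf_xxx",  # a Hugging Face token: billing stays consolidated
)

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[{"role": "user", "content": "In one sentence, what is an LPU?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

Supplying a personal Groq API key in the Hugging Face account settings instead would direct requests (and billing) straight to Groq, as the article describes.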
3. Meta’s $14.8 billion investment in Scale AI – and the hiring of the startup’s CEO – is drawing attention to how US regulators will handle acquihire-style deals under the Trump administration.

The deal gives Meta a 49% nonvoting stake in Scale AI, which hires gig workers to label training data for AI systems. Scale’s clients include Microsoft and OpenAI, two of Meta’s main competitors in the AI space. Because Meta hasn’t bought a controlling share, the deal avoided automatic antitrust review. But regulators could still examine it if they believe the structure was designed to sidestep scrutiny or hurt competition.

Access and fairness concerns

Some early signs of fallout have already surfaced. Google, one of Scale’s customers, reportedly cut ties with the company after Meta’s stake was announced. Others are said to be reconsidering their contracts. In response, a spokesperson for Scale said the company’s work remains strong and that it’s committed to protecting customer data. They declined to comment on Google’s decision.

Alexandr Wang, Scale’s 28-year-old founder and CEO, will join Meta as part of the deal. He’ll stay on Scale’s board but won’t have full access to company information, according to people familiar with the arrangement.

Regulatory outlook under Trump

The Trump administration has taken a lighter approach to AI regulation. Officials have said they don’t want to interfere with how AI develops, though they’ve also voiced doubts about the power held by large tech companies. William Kovacic, a law professor at George Washington University, said regulators are likely watching AI deals closely, even if they’re not blocking them. “It doesn’t necessarily mean they’ll step in, but they’ll keep a close eye on what these firms do,” he said.

The Federal Trade Commission (FTC) has been looking into similar deals over the past two years. Under the Biden administration, the FTC opened inquiries into Amazon’s hiring of key talent from AI firm Adept and Microsoft’s $650 million deal with Inflection AI, which gave it access to the company’s models and staff. Amazon’s deal closed without further action, and the FTC hasn’t taken public steps against Microsoft, although a broader investigation into the company continues.

Legal edges and political pressure

Some legal experts say Meta’s approach may reduce its legal exposure. David Olson, an antitrust law professor at Boston College, said a nonvoting minority stake offers “a lot of protection,” though he noted that the FTC could still investigate the deal if it raises concerns.

Not everyone is convinced the deal is harmless. Senator Elizabeth Warren, who has been pushing for tighter oversight of AI partnerships, said the Meta investment should be reviewed closely. “Meta can call this deal whatever it wants,” she said. “But if it breaks the law by cutting competition or making it easier for Meta to dominate, regulators should step in.”

Meta is facing an antitrust lawsuit filed by the FTC over claims it built a monopoly through acquisitions and platform control. It’s unclear whether the agency will also examine its involvement with Scale. Meanwhile, the Department of Justice is digging into Google’s AI investments. According to Bloomberg, the DOJ is reviewing Google’s partnership with Character.AI to see if it was structured to dodge antitrust review. Officials are also pushing for a rule that would force Google to disclose new AI investments ahead of time.

A wider pattern

The Meta-Scale deal fits into a broader trend of tech companies using investments and talent deals to lock in access to key AI tools and people – without triggering full-scale antitrust reviews. As more money moves into AI and more partnerships form, regulators will have to decide whether these deals are legitimate business decisions or attempts to skirt the rules. For now, the answer may depend on how much power a company gains – even without buying control.

(Photo by Dima Solomin)

The post Meta buys stake in Scale AI, raising antitrust concerns appeared first on AI News.
4. Ericsson’s Cognitive Network Solutions has joined forces with AWS to develop AI technologies for self-healing mobile networks.

Behind every text message and video call lies a complex system that telecom companies spend billions maintaining. This partnership between Ericsson and AWS aims to make those networks not just smarter, but virtually self-sufficient. Jean-Christophe Laneri, VP and Head of Cognitive Network Solutions at Ericsson, said: “This collaboration marks a pivotal milestone in network optimisation technology. AWS’ global infrastructure and AI, alongside Ericsson’s unique cross-domain telecom experience and insights, will assist communication service providers in adapting to changing business conditions with predictable costs and enhanced operational efficiency.”

When the internet stops working at home, the first port of call for most is the “off and on again” approach: replug connections and restart the router. If that fails, call customer service. Using agentic AI, this partnership aims to automate the identification of problems, test solutions, and fix issues before you even notice – not just for a home connection, but at the massive scale of telecom networks serving potentially millions of people.

Fabio Cerone, General Manager of the EMEA Telco Business Unit at AWS, explained: “By working together, AWS and Ericsson will help telecommunications providers automate complex operations, reduce costs, and deliver better experiences for their customers. We are delivering solutions that create business value today while building toward autonomous networks.”

The technology works through RAN automation applications, or “rApps” in industry speak: sophisticated tools that can learn to manage different aspects of a network. The breakthrough comes from how these tools can now work together using agentic AI to improve networks, much as colleagues collaborate on a project.

While the technology is undeniably complex, the potential benefits for everyday mobile users are straightforward. Networks that can anticipate problems and heal themselves could mean fewer dropped calls, more consistent data speeds, and better coverage in challenging areas. For instance, imagine you’re at a football match with 50,000 other fans all trying to use their phones. Today’s networks often buckle under such pressure. A smarter, more autonomous network might recognise the gathering crowd early, automatically redirect resources, and maintain service quality without requiring engineers to intervene.

While traditional networks follow precise programmed instructions, the new approach tells the network what outcome is desired – like “ensure video streaming works well in this area” – and the AI figures out how to make that happen, adjusting to changing conditions in real-time (see the sketch below).

While terms like “intent-based networks” and “autonomous management systems” might sound like science fiction, they represent a fundamental shift in how essential services are delivered. As 5G networks continue expanding and 6G looms on the horizon, the sheer complexity of managing these systems has outgrown traditional approaches. Mobile operators are under tremendous pressure to improve service while reducing costs – seemingly contradictory goals. Autonomous networks offer a potential solution by allowing companies to do more with less human intervention.
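To make the intent-based idea concrete, here is a toy control loop in Python. It is a conceptual sketch only: the names (Intent, measure, remediate) and the in-memory telemetry are invented for illustration and are not Ericsson or AWS APIs; a real rApp would act through RAN management interfaces.

```python
# Conceptual sketch of an intent-based control loop; all names are invented
# for illustration and are not Ericsson's or AWS's actual APIs.
from dataclasses import dataclass

# Simulated telemetry: current KPI readings per (area, kpi) pair.
telemetry = {("stadium-north", "video_throughput_mbps"): 12.0}

@dataclass
class Intent:
    area: str
    kpi: str        # the outcome the operator cares about
    target: float   # desired minimum value

def measure(area: str, kpi: str) -> float:
    # In a real network this would query live telemetry.
    return telemetry[(area, kpi)]

def remediate(area: str, kpi: str) -> None:
    # Stand-in for a corrective action, e.g. redirecting capacity
    # toward a gathering crowd before service degrades.
    telemetry[(area, kpi)] += 5.0

def control_loop(intent: Intent, max_rounds: int = 10) -> bool:
    """Drive the network toward a declared outcome, not a fixed recipe."""
    for _ in range(max_rounds):
        if measure(intent.area, intent.kpi) >= intent.target:
            return True                 # intent satisfied
        remediate(intent.area, intent.kpi)
    return False                        # escalate to human engineers

# "Ensure video streaming works well in this area."
print(control_loop(Intent("stadium-north", "video_throughput_mbps", 25.0)))
```

The design point is the inversion: the operator declares the outcome, and the loop decides which actions to take and when to stop or escalate.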
As our dependence on reliable connectivity grows – supporting everything from remote healthcare to education and emerging technologies like autonomous vehicles – the stakes for network performance continue to rise. The partnership between these tech giants to create self-healing mobile networks signals recognition that AI isn’t just a buzzword but a necessary evolution for critical infrastructure.

The post Ericsson and AWS bet on AI to create self-healing networks appeared first on AI News.
5. For decades, companies of all sizes have recognized that the data available to them holds significant value, both for improving user and customer experiences and for developing strategic plans based on empirical evidence. As AI becomes increasingly accessible and practical for real-world business applications, the potential value of available data has grown exponentially. Successfully adopting AI requires significant effort in data collection, curation, and preprocessing. Moreover, important aspects such as data governance, privacy, anonymization, regulatory compliance, and security must be addressed carefully from the outset.

In a conversation with Henrique Lemes, Americas Data Platform Leader at IBM, we explored the challenges enterprises face in implementing practical AI in a range of use cases. We began by examining the nature of data itself, its various types, and its role in enabling effective AI-powered applications. Henrique highlighted that referring to all enterprise information simply as ‘data’ understates its complexity. The modern enterprise navigates a fragmented landscape of diverse data types and inconsistent quality, particularly between structured and unstructured sources.

In simple terms, structured data is information organized in a standardized, easily searchable format, one that enables efficient processing and analysis by software systems. Unstructured data is information that follows no predefined format or organizational model, making it more complex to process and analyze; it spans diverse formats like emails, social media posts, videos, images, documents, and audio files. While it lacks the clear organization of structured data, unstructured data holds valuable insights that, when effectively managed through advanced analytics and AI, can drive innovation and inform strategic business decisions.

Henrique stated, “Currently, less than 1% of enterprise data is utilized by generative AI, and over 90% of that data is unstructured, which directly affects trust and quality.”

The element of trust in data is an important one. Decision-makers in an organization need firm belief (trust) that the information at their fingertips is complete, reliable, and properly obtained. But evidence suggests that less than half of the data available to businesses is used for AI, with unstructured data often ignored or sidelined due to the complexity of processing it and examining it for compliance – especially at scale. To open the way to better decisions based on a fuller set of empirical data, the trickle of easily consumed information needs to be turned into a firehose. Automated ingestion is the answer, Henrique said, but governance rules and data policies must still be applied – to unstructured and structured data alike (a simplified sketch of the idea follows this item).

Henrique set out the three processes that let enterprises leverage the inherent value of their data: “Firstly, ingestion at scale. It’s important to automate this process. Second, curation and data governance. And the third [is when] you make this available for generative AI. We achieve over 40% of ROI over any conventional RAG use-case.”

IBM provides a unified strategy, rooted in a deep understanding of the enterprise’s AI journey, combined with advanced software solutions and domain expertise. This enables organizations to efficiently and securely transform both structured and unstructured data into AI-ready assets, all within the boundaries of existing governance and compliance frameworks. “We bring together the people, processes, and tools. It’s not inherently simple, but we simplify it by aligning all the essential resources,” he said.

As businesses scale and transform, the diversity and volume of their data increase. To keep up, the AI data ingestion process must be both scalable and flexible. “[Companies] encounter difficulties when scaling because their AI solutions were initially built for specific tasks. When they attempt to broaden their scope, they often aren’t ready, the data pipelines grow more complex, and managing unstructured data becomes essential. This drives an increased demand for effective data governance,” he said.

IBM’s approach is to thoroughly understand each client’s AI journey, creating a clear roadmap to achieve ROI through effective AI implementation. “We prioritize data accuracy, whether structured or unstructured, along with data ingestion, lineage, governance, compliance with industry-specific regulations, and the necessary observability. These capabilities enable our clients to scale across multiple use cases and fully capitalize on the value of their data,” Henrique said.

Like anything worthwhile in technology implementation, it takes time to put the right processes in place, gravitate to the right tools, and have the necessary vision of how any data solution might need to evolve. IBM offers enterprises a range of options and tooling to enable AI workloads in even the most regulated industries, at any scale. With international banks, finance houses, and global multinationals among its client roster, there are few substitutes for Big Blue in this context.

To find out more about enabling data pipelines for AI that drive business and offer fast, significant ROI, head over to this page.

The post Unlock the other 99% of your data – now ready for AI appeared first on AI News.
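As a rough illustration of “ingestion at scale” with governance applied at the point of entry, consider the following Python sketch. Everything here is invented for illustration – it is not IBM tooling – but it shows the principle the article describes: lineage and policy metadata are attached automatically, so unstructured content only becomes an AI-ready asset once it clears the same rules as structured data.

```python
# Illustrative only: automated ingestion that stamps governance metadata
# onto unstructured documents. Names are invented, not IBM tooling.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

PII_MARKERS = ("ssn", "passport", "date of birth")  # toy policy, not exhaustive

@dataclass
class GovernedDocument:
    source: str        # where the content came from (lineage)
    text: str
    ingested_at: str   # when it entered the pipeline
    checksum: str      # proves the content is unchanged downstream
    pii_flag: bool     # simple policy check applied at ingestion time

def ingest(source: str, text: str) -> GovernedDocument:
    """Apply the same governance rules to unstructured data as to structured."""
    return GovernedDocument(
        source=source,
        text=text,
        ingested_at=datetime.now(timezone.utc).isoformat(),
        checksum=hashlib.sha256(text.encode()).hexdigest(),
        pii_flag=any(marker in text.lower() for marker in PII_MARKERS),
    )

docs = [ingest("email:archive/2024-07.mbox", "Quarterly results attached...")]
# Only documents that clear the policy gate become AI-ready (e.g. for RAG).
ai_ready = [d for d in docs if not d.pii_flag]
```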
6. Back when most business applications were monolithic, ensuring their resilience was by no means easy. But given the way apps run in 2025 and what’s expected of them, maintaining monolithic apps was arguably simpler. Back then, IT staff had a finite set of criteria on which to improve an application’s resilience, and the rate of change to the application and its infrastructure was a great deal slower.

Today, the demands we place on apps are different, more numerous, and subject to a faster rate of change. There are also simply more applications: according to IDC, there are likely to be a billion more in production by 2028, many running on cloud-native code and mixed infrastructure. With technological complexity and higher service expectations of responsiveness and quality, ensuring resilience has become a massively more complex ask.

App resilience is multi-dimensional, and its dimensions fall into different areas of responsibility in the modern enterprise: code quality falls to development teams; infrastructure might be down to systems administrators or DevOps; compliance and data governance officers have their own needs and stipulations, as do cybersecurity professionals, storage engineers, database administrators, and a dozen more besides. With multiple tools designed to ensure the resilience of an app – and definitions of what constitutes resilience depending on who’s asking – it’s small wonder that there are typically dozens of tools working to improve and maintain resilience in play at any one time in the modern enterprise. Determining resilience across the whole enterprise’s portfolio is therefore near-impossible: monitoring software is siloed, and there’s no single pane of reference.

IBM’s Concert Resilience Posture simplifies the complexities of multiple dashboards, normalizes the different quality judgments, breaks down data from different silos, and unifies the disparate purposes of the monitoring and remediation tools in play. Speaking ahead of TechEx North America (4-5 June, Santa Clara Convention Center), Jennifer Fitzgerald, Product Management Director, Observability, at IBM, took us through the Concert Resilience Posture solution, its aims, and its ethos. On the latter, she differentiates it from other tools: “Everything we’re doing is grounded in applications – the health and performance of the applications and reducing risk factors for the application.”

The app-centric approach brings the different metrics together in the context of desired business outcomes, answering questions that matter to an organization’s stakeholders, like: Will every application scale? What effects have code changes had? Are we over- or under-resourcing any element of any application? Is infrastructure supporting or hindering application deployment? Are we safe and in line with data governance policies? What experience are we giving our customers?

Jennifer says IBM Concert Resilience Posture is “a new way to think about resilience – to move it from a manual stitching [of other tools] or a ton of different dashboards.” Although the definition of resilience can be slippery, depending on which criteria are in play, Jennifer says it comprises, at its core, eight non-functional requirements (NFRs): observability, availability, maintainability, recoverability, scalability, usability, integrity, and security.

NFRs are important everywhere in the organization, and there are perhaps only two or three that are the sole remit of one department – security falls to the CISO, for example. But ensuring the best quality of resilience in all of the above is critically important right across the enterprise. It’s a shared responsibility for maintaining excellence in performance, potential, and safety.

What IBM Concert Resilience Posture gives organizations, beyond what’s offered by a collection of disparate tools and beyond the single-pane-of-glass paradigm, is proactivity. Proactive resilience comes from its ability to give a resilience score, based on multiple metrics, with the score determined by the many dozens of data points in each NFR. Companies can see their overall or per-app scores drift as changes are made – to the infrastructure, to code, to the portfolio of applications in production, and so on (a toy illustration of such a composite score follows below).

“The thought around resilience is that we as humans aren’t perfect. We’re going to make mistakes. But how do you come back? You want your applications to be fully, highly performant, always optimal, with the required uptime. But issues are going to happen. A code change is introduced that breaks something, or there’s more demand on a certain area that slows down performance. And so the application resilience we’re looking at is all around the ability of systems to withstand and recover quickly from disruptions, failures, spikes in demand, [and] unexpected events,” she says.

IBM’s acquisition history points to some of the complementary elements of the Concert Resilience Posture solution – Instana for full-stack observability and Turbonomic for resource optimization, for example. But the whole is greater than the sum of the parts. There’s an AI-powered continuous assessment of all elements that make up an organization’s resilience, so there’s one place where decision-makers and IT teams can assess, manage, and configure the full-stack’s resilience profile.

The IBM portfolio of resilience-focused solutions helps teams see when and why loads change, and therefore where resources are wasted. It’s possible to ensure that necessary resources are allocated only when needed, and that systems automatically scale back when they’re not. That sort of business- and cost-centric capability is at the heart of app-centric resilience, and means a company is always optimizing its resources.

Overarching all aspects of app performance and resilience is the element of cost. Throwing extra resources at an under-performing application (or its supporting infrastructure) isn’t a viable solution in most organizations. With IBM, organizations get the ability to scale and grow, to add or iterate apps safely, without necessarily having to invest in new provisioning, either in the cloud or on-premise – and they can see how any changes impact resilience. It’s making best use of what’s available and winning back capacity, all while getting the best performance, responsiveness, reliability, and uptime across the enterprise’s application portfolio.
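The scoring idea can be pictured with a toy calculation. The weights and sub-scores below are invented for illustration – the article does not disclose IBM’s actual scoring model – but they show how a drop in a single NFR surfaces in an overall posture number.

```python
# Illustrative composite resilience score across the eight NFRs; the
# weights and 0-100 sub-scores are invented, not IBM's scoring model.
NFRS = ["observability", "availability", "maintainability", "recoverability",
        "scalability", "usability", "integrity", "security"]

def resilience_score(subscores: dict[str, float],
                     weights: dict[str, float] | None = None) -> float:
    """Weighted mean of per-NFR sub-scores (each 0-100), so a drop in any
    one dimension is visible in the app's overall posture."""
    weights = weights or {nfr: 1.0 for nfr in NFRS}  # equal weights by default
    total = sum(weights[nfr] for nfr in NFRS)
    return sum(subscores[nfr] * weights[nfr] for nfr in NFRS) / total

# A code change that degrades recoverability drags the overall score down.
before = resilience_score({nfr: 90.0 for nfr in NFRS})
after = resilience_score({**{nfr: 90.0 for nfr in NFRS}, "recoverability": 40.0})
print(f"{before:.1f} -> {after:.1f}")   # 90.0 -> 83.8
```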
Jennifer says, “There’s a lot of different things that can impact resilience, and that’s why it’s been so difficult to measure. An application has so many different layers underneath, even in just its resources and how it’s built. But then there’s the spider web of downstream impacts. A code change could impact multiple apps, or it could impact one piece of an app. What is the downstream impact of something going wrong? And that’s a big piece of what our tools are helping organizations with.”

You can read more about IBM’s work to make today and tomorrow’s applications resilient.

The post The concerted effort of maintaining application resilience appeared first on AI News.
7. Germany and NVIDIA are building possibly the most ambitious European tech project of the decade: the continent’s first industrial AI cloud.

NVIDIA has been on a European tour over the past month, with CEO Jensen Huang charming audiences at London Tech Week before dazzling the crowds at Paris’s VivaTech. But it was his meeting with German Chancellor Friedrich Merz that might prove the most consequential stop. The resulting partnership between NVIDIA and Deutsche Telekom isn’t just another corporate handshake; it’s potentially a turning point for European technological sovereignty.

An “AI factory” (as they’re calling it) will be created with a focus on manufacturing, which is hardly surprising given Germany’s renowned industrial heritage. The facility aims to give European industrial players the computational firepower to revolutionise everything from design to robotics. “In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Huang. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”

It’s rare to hear such urgency from a telecoms CEO, but Deutsche Telekom’s Timotheus Höttges added: “Europe’s technological future needs a sprint, not a stroll. We must seize the opportunities of artificial intelligence now, revolutionise our industry, and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.”

The first phase alone will deploy 10,000 NVIDIA Blackwell GPUs spread across various high-performance systems. That makes this Germany’s largest AI deployment ever – a statement from a country that isn’t content to watch from the sidelines as AI transforms global industry. A Deloitte study recently highlighted the critical importance of AI technology development to Germany’s future competitiveness, particularly noting the need for expanded data centre capacity. When you consider that demand is expected to triple within just five years, this investment seems less like ambition and more like necessity.

Robots teaching robots

One of the early adopters is NEURA Robotics, a German firm that specialises in cognitive robotics. It is using this computational muscle to power the Neuraverse, essentially a connected network where robots can learn from each other. Think of it as a robotic hive mind for skills ranging from precision welding to household ironing, with each machine contributing its learnings to a collective intelligence.

“Physical AI is the electricity of the future – it will power every machine on the planet,” said David Reger, Founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.”

The implications of this AI project for manufacturing in Germany could be profound. This isn’t just about making existing factories slightly more efficient; it’s about reimagining what manufacturing can be in an age of intelligent machines.

AI for more than just Germany’s industrial titans

What’s particularly promising about this project is its potential reach beyond Germany’s industrial titans. The famed Mittelstand – the network of specialised small and medium-sized businesses that forms the backbone of the German economy – stands to benefit. These companies often lack the resources to build their own AI infrastructure but possess the specialised knowledge that makes them perfect candidates for AI-enhanced innovation. Democratising access to cutting-edge AI could help preserve their competitive edge in a challenging global market.

Academic and research institutions will also gain access, potentially accelerating innovation across numerous fields. The approximately 900 Germany-based startups in NVIDIA’s Inception program will be eligible to use these resources, potentially unleashing a wave of entrepreneurial AI applications.

The road to Europe’s AI gigafactory

However impressive this massive project is, it’s viewed merely as a stepping stone towards something even more ambitious: Europe’s AI gigafactory. This planned 100,000-GPU initiative backed by the EU and Germany won’t come online until 2027, but it represents Europe’s determination to carve out its own technological future. As other European telecom providers follow suit with their own AI infrastructure projects, we may be witnessing the beginning of a concerted effort to establish technological sovereignty across the continent. For a region that has often found itself caught between American tech dominance and Chinese ambitions, building indigenous AI capability represents more than economic opportunity.

Whether this bold project in Germany will succeed remains to be seen, but one thing is clear: Europe is no longer content to be a passive consumer of AI technology developed elsewhere.

(Photo by Maheshkumar Painam)

The post NVIDIA helps Germany lead Europe’s AI manufacturing race appeared first on AI News.
8. Modern healthcare innovations span AI, devices, software, images, and regulatory frameworks, all requiring stringent coordination. Generative AI arguably has the strongest transformative potential in healthcare technology programmes, and it is already being applied across domains such as R&D, commercial operations, and supply chain management.

Traditional models for medical appointments, like face-to-face consultations and paper-based processes, may not be sufficient for the fast-paced, data-driven medical landscape of today. Healthcare professionals and patients are therefore seeking more convenient and efficient ways to access and share information while meeting the complex standards of modern medical science.

According to McKinsey, Medtech companies are at the forefront of healthcare innovation, with estimates that they could capture between $14 billion and $55 billion annually in productivity gains through Gen AI adoption, plus an additional $50 billion or more in revenue from product and service innovations. A 2024 McKinsey survey revealed that around two-thirds of Medtech executives have already implemented Gen AI, with approximately 20% scaling their solutions up and reporting substantial productivity benefits. While advanced technology implementation is growing across the medical industry, challenges persist: organisations face hurdles like data integration issues, decentralised strategies, and skill gaps. Together, these highlight the need for a more streamlined approach to Gen AI deployment.

Of all the Medtech domains, R&D is leading the way in Gen AI adoption. Being the most comfortable with new technologies, R&D departments use Gen AI tools to streamline work processes, such as summarising research papers or scientific articles – a grassroots adoption trend in which individual researchers use AI to enhance productivity even when no formal company-wide strategy is in place. While AI tools automate and accelerate R&D tasks, human review is still required to ensure final submissions are correct and satisfactory. Gen AI is proving able to reduce time spent on administrative tasks and to improve research accuracy and depth, with some companies reporting 20% to 30% gains in research productivity.

KPIs for success in healthcare product programmes

Measuring business performance is essential in the healthcare sector. The number one goal is, of course, to deliver high-quality care while maintaining efficient operations. By measuring and analysing KPIs, healthcare providers are better positioned to improve patient outcomes through data-based decisions. KPIs can also improve resource allocation and encourage continuous improvement in all areas of care.

Healthcare product programmes are structured initiatives that prioritise the development, delivery, and continual optimisation of medical products. To succeed, they require cross-functional coordination of clinical, technical, regulatory, and business teams. Time to market is critical, ensuring a product moves from concept to launch as quickly as possible. Labelling and documentation deserve particular emphasis: McKinsey notes that AI-assisted labelling has resulted in a 20%-30% improvement in operational efficiency. Resource utilisation rates are also important, showing how efficiently time, budget, and/or headcount are used during product development.

In the healthcare sector, KPIs ought to cover several factors, including operational efficiency, patient outcomes, the financial health of the business, and patient satisfaction. For a comprehensive view of performance, these can be categorised into financial, operational, clinical quality, and patient experience.

Bridging user experience with technical precision – design awards

Innovation is no longer judged solely by technical performance; user experience (UX) is equally important. Some of the latest innovations in healthcare are recognised at the UX Design Awards – products that exemplify the best in user experience as well as technical precision. Top products prioritise the needs and experiences of both patients and healthcare professionals while ensuring each product meets the sector’s rigorous clinical and regulatory standards.

One example is the CIARTIC Move by Siemens Healthineers, a self-driving 3D C-arm imaging system that lets surgeons operate while controlling the device wirelessly in a sterile field. Computer hardware company ASUS has also received accolades for its HealthConnect App and VivoWatch Series, showcasing the fusion of AIoT-driven smart healthcare solutions with user-friendly interfaces – sometimes in what are essentially consumer devices. This demonstrates how technical innovation is being made accessible and increasingly intuitive as patients gain technical fluency.

Navigating regulatory and product development pathways simultaneously

Establishing clinical and regulatory paths early is important, as it enables healthcare teams to feed a twin stream of findings back into development. Gen AI adoption has become a transformative approach here, automating the production and refinement of complex documents and mixed data sets, both structured and unstructured. By integrating regulatory considerations early and adopting technologies like Gen AI as part of agile practices, healthcare product programmes help teams navigate a regulatory landscape that can often shift. Baking a regulatory mindset into a team early helps ensure both compliance and continued innovation.

(Image source: “IBM Achieves New Deep Learning Breakthrough” by IBM Research is licensed under CC BY-ND 2.0.)

The post MedTech AI, hardware, and clinical application programmes appeared first on AI News.
9. Enterprise artificial intelligence investment is unprecedented, with IDC projecting global spending on AI and GenAI to double to $631 billion by 2028. Yet beneath the impressive budget allocations and boardroom enthusiasm lies a troubling reality: most organisations struggle to translate their AI ambitions into operational success.

The sobering statistics behind AI’s promise

ModelOp’s 2025 AI Governance Benchmark Report, based on input from 100 senior AI and data leaders at Fortune 500 enterprises, reveals a disconnect between aspiration and execution. While more than 80% of enterprises have 51 or more generative AI projects in proposal phases, only 18% have successfully deployed more than 20 models into production. This execution gap represents one of the most significant challenges facing enterprise AI today. Most generative AI projects still require 6 to 18 months to go live – if they reach production at all. The result is delayed returns on investment, frustrated stakeholders, and diminished confidence in enterprise AI initiatives.

The cause: Structural, not technical barriers

The biggest obstacles preventing AI scalability aren’t technical limitations – they’re structural inefficiencies plaguing enterprise operations. The ModelOp benchmark report identifies several problems that create what experts call a “time-to-market quagmire”:

  • Fragmented systems plague implementation. 58% of organisations cite fragmented systems as the top obstacle to adopting governance platforms. Fragmentation creates silos where different departments use incompatible tools and processes, making it nearly impossible to maintain consistent oversight of AI initiatives.
  • Manual processes dominate despite digital transformation. 55% of enterprises still rely on manual processes – including spreadsheets and email – to manage AI use case intake. Reliance on antiquated methods creates bottlenecks, increases the likelihood of errors, and makes it difficult to scale AI operations.
  • Lack of standardisation hampers progress. Only 23% of organisations implement standardised intake, development, and model management processes. Without these, each AI project becomes a unique challenge requiring custom solutions and extensive coordination across multiple teams.

Enterprise-level oversight remains rare

Just 14% of companies perform AI assurance at the enterprise level, increasing the risk of duplicated efforts and inconsistent oversight. The lack of centralised governance means organisations often discover they’re solving the same problems multiple times in different departments.

The governance revolution: From obstacle to accelerator

A change is taking place in how enterprises view AI governance. Rather than seeing it as a compliance burden that slows innovation, forward-thinking organisations recognise governance as an important enabler of scale and speed.

Leadership alignment signals a strategic shift. The ModelOp benchmark data reveals a change in organisational structure: 46% of companies now assign accountability for AI governance to a Chief Innovation Officer – more than four times the number who place accountability under Legal or Compliance. This repositioning reflects a new understanding that governance isn’t solely about risk management but can enable innovation.

Investment follows strategic priority. Financial commitment to AI governance underscores its importance. According to the report, 36% of enterprises have budgeted at least $1 million annually for AI governance software, while 54% have allocated resources specifically for AI Portfolio Intelligence to track value and ROI.

What high-performing organisations do differently

The enterprises that successfully bridge the execution gap share several characteristics in their approach to AI implementation:

  • Standardised processes from day one. Leading organisations implement standardised intake, development, and model review processes across AI initiatives. Consistency eliminates the need to reinvent workflows for each project and ensures that all stakeholders understand their responsibilities.
  • Centralised documentation and inventory. Rather than allowing AI assets to proliferate in disconnected systems, successful enterprises maintain centralised inventories that provide visibility into every model’s status, performance, and compliance posture.
  • Automated governance checkpoints. High-performing organisations embed automated governance checkpoints throughout the AI lifecycle, helping ensure compliance requirements and risk assessments are addressed systematically rather than as afterthoughts (a toy example follows below).
  • End-to-end traceability. Leading enterprises maintain complete traceability of their AI models, including data sources, training methods, validation results, and performance metrics.

Measurable impact of structured governance

The benefits of comprehensive AI governance extend beyond compliance. Organisations that adopt lifecycle automation platforms reportedly see dramatic improvements in operational efficiency and business outcomes. A financial services firm profiled in the ModelOp report experienced a halving of time to production and an 80% reduction in issue resolution time after implementing automated governance processes. Such improvements translate directly into faster time-to-value and increased confidence among business stakeholders. Enterprises with robust governance frameworks report the ability to run many times more models simultaneously while maintaining oversight and control. This scalability lets organisations pursue AI initiatives across multiple business units without overwhelming their operational capabilities.

The path forward: From stuck to scaled

The message from industry leaders is that the gap between AI ambition and execution is solvable, but it requires a shift in approach. Rather than treating governance as a necessary evil, enterprises should recognise that it enables AI innovation at scale.

Immediate action items for AI leaders

Organisations looking to escape the time-to-market quagmire should prioritise the following:

  • Audit the current state: conduct an assessment of existing AI initiatives, identifying fragmented processes and manual bottlenecks
  • Standardise workflows: implement consistent processes for AI use case intake, development, and deployment across all business units
  • Invest in integration: deploy platforms that unify disparate tools and systems under a single governance framework
  • Establish enterprise oversight: create centralised visibility into all AI initiatives, with real-time monitoring and reporting capabilities

The competitive advantage of getting it right

Organisations that solve the execution challenge will be able to bring AI solutions to market faster, scale more efficiently, and maintain the trust of stakeholders and regulators. Enterprises that continue with fragmented processes and manual workflows will find themselves disadvantaged compared to their more organised competitors.
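As a toy example of the automated checkpoint idea described above, the sketch below gates deployment on a complete inventory record. The required fields and rules are invented for illustration; real governance platforms define their own schemas.

```python
# Toy governance checkpoint: block deployment unless the model's
# inventory record is complete. Fields and rules are illustrative.
REQUIRED_FIELDS = {
    "owner", "use_case", "training_data_source",   # end-to-end traceability
    "validation_report", "risk_tier",              # assurance evidence
}

def governance_gate(record: dict) -> list[str]:
    """Return blocking issues; an empty list means cleared to deploy."""
    issues = [f"missing: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if record.get("risk_tier") == "high" and not record.get("human_signoff"):
        issues.append("high-risk model requires human sign-off")
    return issues

record = {
    "owner": "fraud-analytics",
    "use_case": "transaction screening",
    "training_data_source": "s3://example-bucket/txn-2024",  # lineage pointer
    "validation_report": "run-1182",
    "risk_tier": "high",
}
print(governance_gate(record))  # ['high-risk model requires human sign-off']
```

Run in a CI pipeline or model registry hook, a check like this makes compliance a systematic gate rather than an afterthought.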
Operational excellence isn’t just about efficiency; it’s about survival. The data shows enterprise AI investment will continue to grow, so the question isn’t whether organisations will invest in AI, but whether they’ll develop the operational abilities necessary to realise a return on that investment. The opportunity to lead in the AI-driven economy has never been greater for those willing to embrace governance as an enabler, not an obstacle.

(Image source: Unsplash)

The post The AI execution gap: Why 80% of projects don’t reach production appeared first on AI News.
10. Teachers in England have been given the all-clear to use AI to help them with low-level tasks that are part of their duties, the BBC reports. Guidance from the Department for Education (DfE) says AI can be used by school teachers in England, but only for ‘low stakes’ tasks, such as writing letters to parents and marking homework.

The decision to approve the use of the technology follows the results of a survey of teachers in 2023, undertaken on behalf of the DfE, in which a majority of respondents were said to be “broadly optimistic” about using AI in the course of their jobs. At the time, a spokesperson from Teacher Tapp (the company behind the software used to conduct the survey) said: “It’s really quite normal now as a maths teacher, that you don’t mark maths homework any more … because we have such chronic shortages of maths teachers that you know nobody really feels aggrieved.” Responses to the 2023 survey quoted teachers saying AI can be quite useful when they need to source appropriate teaching materials, and when writing reports to parents on the performance and behaviour of their children.

As part of today’s announcement, the DfE said that teachers using AI will help reduce the burden of unpaid overtime teachers work, and can lead to improved work-life balance and job satisfaction. By allowing staff to use AI tools, it’s hoped that statistics around teachers’ mental health will improve (36% of teachers have experienced ‘burn-out’ according to the charity Education Support [PDF]), which may have the effect of attracting more graduates to the profession.

Part of the daily stress many teachers suffer is caused by a shortage of qualified teachers, a situation that the use of AI may help alleviate. Although the UK government has pointed to a greater number of teachers employed across the UK than a decade ago, the ratio of pupils to teachers continues to widen as the population grows. Teaching classes of 33 or more is commonplace in English state schools, and over a million pupils in the UK are taught in classes of more than 30. The attrition rate for qualified teachers in the UK is around 8.8% according to SecEd, an industry website aimed at teachers working in secondary schools (the 11-18 age group). SecEd has also stated that the number of open positions in the sector climbed from three to six per 1,000 teachers in the 12 months from 2022. Due to budgetary constraints on local authorities and schools, open teaching positions are often filled by short-term supply (substitute) teachers sourced through employment agencies, a practice that costs schools significantly more than paying permanent salaried staff.

In line with today’s announcement, a post on the Education Hub blog published by the UK government states that “teachers can use AI to help with things like planning lessons, creating resources, marking work, giving feedback, and handling administrative tasks.” It adds the proviso that it is up to the individual teacher to “check that anything AI generates is accurate and appropriate – the final responsibility always rests with them and their school or college.”

The DfE has also given the government’s seal of approval for the use of AI by companies that conduct curriculum and assessment reviews of UK schools, the outcomes of which determine schools’ standings in the so-called league tables and the classifications given to schools by Ofsted (Office for Standards in Education), such as ‘special measures’, ‘good’, or ‘outstanding’. The approval for the use of AI in this context comes despite opposition from teaching unions.

The longer-term issue that has pervaded the English school system for several decades is not the sector’s use of technology, but its chronic under-funding. The NAHT (National Association of Head Teachers) states that between school years 2009-10 and 2021-22, capital spending on schools saw an inflation-adjusted reduction of 29%. The Institute for Fiscal Studies has said that school spending per pupil in England has seen a real-terms decrease of 9% over the same period.

Equipping teaching professionals with technology tools may ease some of the administrative burden placed on them, although whether marking homework can be considered what the Department for Education terms ‘low stakes’ is debatable. Investment in school-age children in the form of education budget increases is expensive, while subscriptions to AI models can cost as little as a few dollars a month. On paper, the lure of AI helping teachers manage their workloads a little more efficiently must be attractive to DfE officials. But what is apparent is the consistently low value placed on childhood education by successive UK governments. Deciding to allow AI to help staff in a criminally under-funded education sector is largely irrelevant and will have little impact on the quality of education offered to another generation of English children.

(Image source: “Village School Classroom” by Thomas Galvez is licensed under CC BY 2.0.)

The post Teachers in England given the green-light to use AI appeared first on AI News.
11. MarketsandMarkets values the global artificial intelligence market at $371.71 billion and expects it to exceed $2,407.02 billion by 2032. The statistic demonstrates how far AI technology can reach into many sectors, including cryptocurrency. The Business Research Company reports that the generative AI market in the cryptocurrency space alone is expected to grow from $760 million in 2024 to $1.02 billion in 2025 – a CAGR of roughly 34.5%.

As readers will know, artificial intelligence boasts an unusual computational ability that helps it extract meaningful insights in real time. In terms of the ADA price, for instance, AI can help traders make more informed predictions about future price movements by combining historical performance, market trends, and other data points. And that’s just scratching the surface – there’s much more to how AI is reshaping this space.

Providing better security

Cyberattacks are a growing concern across industries, and cryptocurrency is no exception. There are approximately 940,000 attacks daily worldwide. In the cryptocurrency industry, issues like private key compromises have surged in number, accounting for almost half (43.8%) of stolen cryptocurrency in 2024, with total stolen funds rising by about 21% that year. Since bad actors reinvent themselves constantly, ignoring cybersecurity can have serious consequences, especially for cryptocurrency exchanges. One example is the loss of security-conscious customers: according to cxscoop.com, up to 21% never return to brands that suffer cybersecurity incidents. Given the competitive nature of the cryptocurrency industry, such losses can be fatal to companies, and at best, recovering after cyberattacks is challenging. An IBM report reveals that companies may need at least $4.88 million to recover, which is why many cryptocurrency companies are turning to AI for better protection.

AI excels at pattern recognition, making it highly effective in detecting fraud. It examines data like transaction histories and IP addresses to identify malicious activity in real time (a toy example follows below). For example, blockchain analytics firm Elliptic recently noted potential money laundering on the Bitcoin network after training an AI model using data on about 200 million transactions.

The rise of smart trading bots

Gathering and processing all the data needed for accurate trading decisions or anomaly detection is no easy task. Errors and delays are common, but AI can assess vast amounts of information and deliver results more quickly than human workers. Many cryptocurrency traders have turned to artificial intelligence as their new hope, leading to the expansion of the global AI cryptocurrency trading bot market, which Research and Markets values at $40.8 billion. If the trend continues, the market could hit $985.2 billion in value in the next few years, translating to a CAGR of 37.2%. Bots can examine large amounts of data, including social media sentiment and global news, and make predictions that give traders a serious edge. But despite such benefits, AI is not 100% accurate; it needs close monitoring and strategy adjustment to avoid inaccurate predictions.

Are there any challenges?

According to a ResearchGate publication by Halima Kure and others, data poisoning can reduce classification accuracy in fraud detection models by 22%. Such attacks manipulate AI models and can be used to initiate fraudulent transactions.
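To illustrate the pattern-recognition point above, here is a toy anomaly detector over synthetic transaction features. It assumes scikit-learn and NumPy; the data and features are fabricated for the sketch and bear no relation to any real fraud model.

```python
# Toy sketch of AI-style anomaly detection on transaction features,
# in the spirit of the fraud-detection use described above. Assumes
# scikit-learn; the data here is synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per transaction: [amount, hour_of_day, distinct_ips_last_24h]
normal = np.column_stack([
    rng.lognormal(3.0, 0.5, 500),        # typical amounts
    rng.integers(8, 22, 500),            # daytime activity
    rng.integers(1, 3, 500),             # one or two IPs
])
suspicious = np.array([[5000.0, 3, 9]])  # large amount, 3 a.m., many IPs

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))         # -1 flags an outlier
```

Production systems add far richer features and human review, but the principle is the same: learn what normal looks like, then surface what doesn’t fit.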
Another common concern with AI algorithms is the ‘black box’ problem. When users don’t understand how an AI system makes its decisions, trust erodes. In an industry like cryptocurrency, where trust is everything, users can come to perceive trading bots as untrustworthy. Security.org claims that 40% of cryptocurrency owners already have doubts about digital currencies. Cryptocurrency’s volatility and unpredictable socio-economic forces also create challenges for artificial intelligence’s ability to make accurate predictions. If AI’s analytical abilities are overestimated, losses will mount, regardless of the trading strategy. Future developments may address some of these challenges, with observers suggesting AI may continue to dominate the cryptocurrency space. Cryptocurrency companies have been using the technology to improve security measures through real-time monitoring, flagging the signs of an attack before it fully unfolds and helping companies avoid significant financial losses. Plus, artificial intelligence’s computational ability can help investors improve prediction accuracy: by gathering and assessing data from numerous sources, the technology offers real-time insights – something that once seemed out of reach. The post AI’s influence in the cryptocurrency industry appeared first on AI News. View the full article
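(A quick note for readers sanity-checking the growth figures quoted above: compound annual growth rate is simple to compute, as in this small Python snippet, where the inputs are the article’s own market estimates.)

# CAGR: the constant yearly growth rate linking a start and end value.
from math import log

def cagr(start: float, end: float, years: float) -> float:
    return (end / start) ** (1 / years) - 1

print(f"{cagr(0.76, 1.02, 1):.1%}")    # $0.76bn -> $1.02bn over one year: ~34.2%
print(log(985.2 / 40.8) / log(1.372))  # years from $40.8bn to $985.2bn at 37.2%: ~10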
  12. OpenAI chief Sam Altman has declared that humanity has crossed into the era of artificial superintelligence—and there’s no turning back. “We are past the event horizon; the takeoff has started,” Altman states. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.” The lack of visible signs – robots aren’t yet wandering our high streets, disease remains unconquered – masks what Altman characterises as a profound transformation already underway. Behind closed doors at tech firms like his own, systems are emerging that can outmatch general human intellect. “In some big sense, ChatGPT is already more powerful than any human who has ever lived,” Altman claims, noting that “hundreds of millions of people rely on it every day and for increasingly important tasks.” This casual observation hints at a troubling reality: such systems already wield enormous influence, with even minor flaws potentially causing widespread harm when multiplied across their vast user base.

The road to superintelligence
Altman outlines a timeline towards superintelligence that might leave many readers checking their calendars. By next year, he expects “the arrival of agents that can do real cognitive work,” fundamentally transforming software development. The following year could bring “systems that can figure out novel insights”—meaning AI that generates original discoveries rather than merely processing existing knowledge. By 2027, we might see “robots that can do tasks in the real world.” Each prediction seems to leap beyond the previous one in capability, drawing a line that points unmistakably toward superintelligence—systems whose intellectual capacity vastly outstrips human potential across most domains. “We do not know how far beyond human-level intelligence we can go, but we are about to find out,” Altman states. This progression has sparked fierce debate among experts, with some arguing these capabilities remain decades away. Yet Altman’s timeline suggests OpenAI has internal evidence for this accelerated path that isn’t yet public knowledge.

A feedback loop that changes everything
What makes current AI development uniquely concerning is what Altman calls a “larval version of recursive self-improvement”—the ability of today’s AI to help researchers build tomorrow’s more capable systems. “Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research,” he explains. “If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different.” This acceleration compounds as multiple feedback loops intersect. Economic value drives infrastructure development, which enables more powerful systems, which generate more economic value. Meanwhile, the creation of physical robots capable of manufacturing more robots could create another explosive cycle of growth. “The rate of new wonders being achieved will be immense,” Altman predicts. “It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonisation the next year.” Such statements would sound like hyperbole from almost anyone else. Coming from the man overseeing some of the most advanced AI systems on the planet, they demand at least some consideration.
Living alongside superintelligence
Despite the potential impact, Altman believes many aspects of human life will retain their familiar contours. People will still form meaningful relationships, create art, and enjoy simple pleasures. But beneath these constants, society faces profound disruption. “Whole classes of jobs” will disappear—potentially at a pace that outstrips our ability to create new roles or retrain workers. The silver lining, according to Altman, is that “the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.” For those struggling to imagine this future, Altman offers a thought experiment: “A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries.” Our descendants may view our most prestigious professions with similar bemusement.

The alignment problem
Amid these predictions, Altman identifies a challenge that keeps AI safety researchers awake at night: ensuring superintelligent systems remain aligned with human values and intentions. Altman stresses the need to solve “the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term”. He contrasts this with social media algorithms that maximise engagement by exploiting psychological vulnerabilities. This isn’t merely a technical issue but an existential one. If superintelligence emerges without robust alignment, the consequences could be devastating. Yet defining “what we collectively really want” will be almost impossible in a diverse global society with competing values and interests. “The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better,” Altman urges.

OpenAI is building a global brain
Altman has repeatedly characterised what OpenAI is building as “a brain for the world.” This isn’t meant metaphorically. OpenAI and its competitors are creating cognitive systems intended to integrate into every aspect of human civilisation—systems that, by Altman’s own admission, will exceed human capabilities across domains. “Intelligence too cheap to meter is well within grasp,” Altman states, suggesting that superintelligent capabilities will eventually become as ubiquitous and affordable as electricity. For those dismissing such claims as science fiction, Altman offers a reminder that merely a few years ago, today’s AI capabilities seemed equally implausible: “If we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.” As the AI industry continues its march toward superintelligence, Altman’s closing wish – “May we scale smoothly, exponentially, and uneventfully through superintelligence” – sounds less like a prediction and more like a prayer. While timelines may (and will) be disputed, the OpenAI chief makes clear the race toward superintelligence isn’t coming—it’s already here. Humanity must grapple with what that means. See also: Magistral: Mistral AI challenges big tech with reasoning model Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Sam Altman, OpenAI: The superintelligence era has begun appeared first on AI News. View the full article
  13. Mistral AI has pulled back the curtain on Magistral, their first model specifically built for reasoning tasks. Magistral arrives in two flavours: a 24B parameter open-source version called Magistral Small that anyone can tinker with, and a beefier enterprise edition, Magistral Medium, aimed at commercial applications where advanced reasoning capabilities matter most. “The best human thinking isn’t linear—it weaves through logic, insight, uncertainty, and discovery,” explains Mistral AI. That’s a fair point: existing models often struggle with the messy, non-linear way humans actually think through problems. I’ve tested numerous reasoning models and they typically suffer from three key limitations: they lack depth in specialised domains, their thinking process is frustratingly opaque, and they perform inconsistently across different languages.

Mistral AI’s real-world reasoning for professionals
For professionals who’ve been hesitant to trust AI with complex tasks, Magistral might change some minds. Legal eagles, finance folks, healthcare professionals and government workers will appreciate the model’s ability to show its work. All conclusions can be traced back through logical steps—crucial when you’re operating in regulated environments where “because the AI said so” simply doesn’t cut it. Software developers haven’t been forgotten either. Mistral claims Magistral shines at the kind of structured thinking that makes for better project planning, architecture design, and data engineering. Having struggled with some models that produce plausible-sounding but flawed technical solutions, I’m keen to see if Magistral’s reasoning capabilities deliver on this front. Mistral claims their reasoning model excels at creative tasks too. The company reports that Magistral is “an excellent creative companion” for writing and storytelling, capable of producing both coherent narratives and – when called for – more experimental content. This versatility suggests we’re moving beyond the era of having separate models for creative versus logical tasks.

What separates Magistral from the rest?
What separates Magistral from run-of-the-mill language models is transparency. Rather than simply spitting out answers from a black box, it reveals its thinking process in a way users can follow and verify. This matters enormously in professional contexts. A lawyer doesn’t just want a contract clause suggestion; they need to understand the legal reasoning behind it. A doctor can’t blindly trust a diagnostic suggestion without seeing the clinical logic. By making its reasoning traceable, Magistral could help bridge the trust gap that’s held back AI adoption in high-stakes fields. Having spoken with non-English AI developers, I’ve heard consistent frustration about how reasoning capabilities drop off dramatically outside English. Magistral appears to tackle this head-on with robust multilingual support, allowing professionals to reason in their preferred language without performance penalties. This isn’t just about convenience; it’s about equity and access. As countries increasingly implement AI regulations requiring localised solutions, tools that reason effectively across languages will have a significant advantage over English-centric competitors.

Getting your hands on Magistral
For those wanting to experiment, Magistral Small is available now under the Apache 2.0 licence via Hugging Face.
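For the technically curious, loading the open-weights version should look like any other Hugging Face checkpoint. A minimal sketch follows; note that the repository ID below is our assumption of the published name (verify it on Mistral’s Hugging Face organisation before running) and that a 24B-parameter model needs substantial GPU memory:

# Hedged sketch: the repo ID is assumed, not confirmed; check Mistral's
# Hugging Face page. Requires the transformers library and ample GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Magistral-Small-2506"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user",
             "content": "Show your reasoning: which is larger, 2**10 or 10**3?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))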
Those interested in the more powerful Medium version can test a preview through Mistral’s Le Chat interface or via their API platform. Enterprise users looking for deployment options can find Magistral Medium on Amazon SageMaker, with IBM WatsonX, Azure, and Google Cloud Marketplace implementations coming soon. As the initial excitement around general-purpose chatbots begins to wane, the market is hungry for specialised AI tools that excel at specific professional tasks. By focusing on transparent reasoning for domain experts, Mistral has carved out a potentially valuable niche. Founded in 2023 by alumni of DeepMind and Meta, Mistral has moved at breakneck speed to establish itself as Europe’s AI champion. It has consistently punched above its weight, creating models that compete with offerings from companies many times its size. As organisations increasingly demand AI that can explain itself – particularly in Europe, where the AI Act will require transparency – Magistral’s focus on showing its reasoning process feels particularly timely. (Image by Stephane) See also: Tackling hallucinations: MIT spinout teaches AI to admit when it’s clueless Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Magistral: Mistral AI challenges big tech with reasoning model appeared first on AI News. View the full article
  14. Artificial intelligence needs no introduction, driving new innovation and transforming the way people work. But the adoption of AI and our increasing reliance on it also raises questions about the centralised nature of the infrastructure it runs on, and the risks that poses. Cryptocurrencies have taught us the importance of decentralisation, and the dangers of concentrating compute resources and data in a small handful of facilities. While such an approach may seem efficient, it also introduces critical vulnerabilities and concerns over access and governance. Centralised AI systems are incredibly vulnerable, as the big server farms that run them represent a single point of failure that could bring hundreds of applications crashing down. Data centres that power AI models like ChatGPT pose a tempting target for hackers, too, due to the enormous amounts of data they possess. Centralised servers also mean more headaches in terms of regulation. When an AI system is located in a single country, it falls under that nation’s governance, which can cause problems for users in other territories subject to different data sovereignty and privacy rules. Of course, centralisation also means monopolisation, and we already have plenty of evidence of this with the likes of OpenAI, Google, and Anthropic being extremely secretive about how they train their most advanced AI models. The danger is that just a handful of big corporations will end up becoming gatekeepers of a technology that becomes a vital part of modern life, restricting access to those who are willing to pay whatever price they demand. Fortunately, there is a ready-made solution to these problems in the shape of decentralised AI. With decentralised AI, the infrastructure that powers models can be distributed across a wide network of users, eliminating the risks associated with centralisation. Decentralisation means no single point of failure, more transparency and user control, and access for everyone. Welcome to the world of AI blockchains – the foundation of a more resilient, equitable and sustainable AI industry.

Core characteristics of AI blockchains
The convergence of blockchain and AI holds plenty of promise due to the way they complement one another. Blockchain’s immutability can ensure integrity and trust in the data that powers AI systems, while AI can bring enhanced automation and intelligence to blockchain-based systems. The synergies are clear. Consider supply chains, where blockchain can ensure full transparency and visibility, while AI can predict changes in demand and optimise logistics accordingly. Healthcare is another example, where blockchain can be used to secure medical records, while AI helps in diagnosing diseases via image analysis and predictive analytics.

1: Transparent data attribution
A key capability of AI blockchains is transparent data attribution, which uses “proof-of-attribution” consensus mechanisms to identify and credit the source of data used by AI systems, increasing fairness. It provides visibility into who provided the data, how it contributed to the AI’s outputs, what value it added, and how much the data’s provider should be compensated. An example of this in action is OpenLedger’s reward system, which ensures that every time a model taps into someone’s data, the person who created that data is rewarded with digital tokens. This model is in stark contrast to centralised AI companies, which amass data without the creator’s knowledge or consent, leaving them outside of the value chain.
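To make the idea concrete, here is a toy Python sketch of an attribution-and-reward flow. It is purely conceptual – it is not OpenLedger’s actual protocol, and every name and number in it is invented – but it captures the bookkeeping that proof-of-attribution implies: record which contributions informed an output, then split a reward among their providers.

# Toy attribution ledger. Conceptual only: a real system would run on-chain,
# with consensus, signatures, and a far subtler attribution model.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Contribution:
    provider: str   # who supplied the data
    weight: float   # how strongly it informed the model's output

def settle(contributions: list[Contribution], reward_pool: float) -> dict[str, float]:
    """Split a per-query token reward across providers by attribution weight."""
    total = sum(c.weight for c in contributions)
    payouts: dict[str, float] = defaultdict(float)
    for c in contributions:
        payouts[c.provider] += reward_pool * c.weight / total
    return dict(payouts)

# One query drew on two posts by alice and one by bob; 10 tokens to distribute.
print(settle([Contribution("alice", 0.5),
              Contribution("bob", 0.3),
              Contribution("alice", 0.2)], reward_pool=10.0))
# -> {'alice': 7.0, 'bob': 3.0}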
2: AI royalties and monetisation layer
Let’s imagine someone poses a question to a decentralised chatbot, and it responds by drawing on what it finds in a post on Substack or Medium. The system would record the fact that the model used this information to inform its response and, using smart contracts, would automatically process the payment of tokens to the creator of that content. This paves the way for a new creator economy, where people create specialised datasets for AI models and host them on blockchains, so their contributions are fully attributed and rewarded.

3: Decentralised model lifecycles
Another key difference is that the entire development process of blockchain-based AI is open, from the initial proposal, to the model training and, finally, its deployment. This supports a more collaborative environment for the creation of community-owned models that are controlled by their users through democratic governance processes, where token holders vote on the new features they want to see added.

4: Efficient, scalable infrastructure
AI blockchains run on decentralised infrastructure that is provided by their users. For instance, Render Network has built up a network of GPUs, but they’re not hosted in a centralised data centre. Instead, network participants rent out the idle GPU capacity of their laptops and desktops, and these resources are pooled and made available to AI applications that need processing power. Developers get the infrastructure they need at more affordable costs, and those who provide it can earn tokenised rewards for doing so. OpenLedger plays a key role in enhancing the efficiency of decentralised infrastructure with OpenLoRA, a highly scalable, highly performant framework that can serve hundreds of fine-tuned AI models in parallel on a single GPU, allowing them to run simultaneously at much lower operating cost. In turn, this dramatically increases the accessibility of advanced AI applications by making them much more affordable for end users.
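A rough open-source analogue of that many-models-on-one-GPU pattern – using the Hugging Face PEFT library rather than OpenLoRA itself, and with invented repository names – looks like the sketch below. Because only the small adapter weights differ per request, one copy of the base model can back many specialised fine-tunes:

# Sketch of serving several LoRA fine-tunes from one shared base model.
# The base-model and adapter IDs are placeholders, not real repositories.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "example-org/base-7b"  # hypothetical base model
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach multiple lightweight adapters to the same base weights.
model = PeftModel.from_pretrained(base, "example-org/finance-lora",
                                  adapter_name="finance")
model.load_adapter("example-org/legal-lora", adapter_name="legal")

def answer(prompt: str, domain: str) -> str:
    model.set_adapter(domain)  # route the request to one fine-tune
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(answer("Summarise this loan agreement clause...", domain="legal"))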
Why do AI blockchains matter?
The vast majority of AI services in use today live in centralised “black boxes” that are incredibly opaque, revealing next to nothing about how they work or the data they use. They’re owned by a handful of powerful organisations, and this concentration of control undermines the democratising potential of AI technology. AI companies like Google, OpenAI, and Microsoft could amass enormous influence over our societies and lives, retaining all of the revenue these systems generate, even as they replace many existing jobs. Decentralised AI networks give us an opportunity to prevent this monopolisation from happening. It’s clear that AI is the most powerful new technology to emerge since the rise of the internet, and it’s just too important to allow it to be fully controlled by a small number of corporations. If the predictions come true, AI will be infused into everyone’s life, transforming workplaces and mobility, delivering personalised experiences and revolutionising healthcare. With AI blockchains, we can build intelligent systems on a foundation of trust, distributing control to prevent the concentration of power and incentivising everyone to participate in their development. This will open the door to grassroots innovation, where anyone can put forward an idea and work with a community to make it happen, with decentralised governance ensuring it evolves in alignment with everyone’s needs, rather than serving the goals of profit-oriented corporations. It will nurture a more diverse AI application landscape that everyone can access, while curtailing its use for oppressive purposes.

We cannot let AI be monopolised
The existing AI landscape holds just as much peril as it does potential. The technology has advanced so much in such a short space of time that there’s a very real danger of monopolisation, and with that comes the risk of it being misused. AI blockchains are the only way to prevent this, serving as a foundation for freely accessible and decentralised AI systems that will be developed in a collaborative way, with checks and balances in place to prevent any abuse. Building this decentralised future for AI requires coordination at every layer, from the data being used to the model training processes and the infrastructure that hosts it. With transparent attribution in place, we can incentivise this kind of cooperation, ensuring everyone’s contributions are acknowledged and every user is rewarded for participating in the next technological revolution. (Image source: Unsplash) The post The AI blockchain: What is it really? appeared first on AI News. View the full article
  15. Apple has opened its foundational AI model to third-party developers for the first time, allowing direct access to the on-device large language model that powers Apple Intelligence. The move, announced at this week’s Worldwide Developers Conference, represents a significant shift in Apple’s traditionally closed ecosystem approach to Apple AI developer tools. The newly accessible three-billion-parameter model operates entirely on-device, reflecting Apple’s privacy-first philosophy while imposing technical limitations compared to cloud-based alternatives from competitors. “We’re opening up access for any app to tap directly into the on-device, large language model at the core of Apple,” said Craig Federighi, Apple’s software chief, during the conference presentation, according to Reuters.

The foundation model framework enables direct access
The new Foundation Models framework allows developers to integrate Apple Intelligence features with just three lines of Swift code, providing privacy-focused AI inference at no cost. The framework includes guided generation and tool-calling capabilities built-in. Automattic has already begun leveraging the framework in its Day One journaling app. “The Foundation Model framework has helped us rethink what’s possible with journaling,” Paul Mayne, head of Day One at Automattic, said. “Now we can bring intelligence and privacy together in ways that deeply respect our users.”

Xcode 26 integrates AI assistance
Xcode 26 now embeds large language models directly into the coding experience. Developers can use ChatGPT built into Xcode without creating an account, connect API keys from other providers, or run local models on Apple silicon Macs. The Coding Tools feature assists throughout development, offering suggested actions such as generating previews, creating playgrounds, or fixing code issues within the development environment.

Visual intelligence opens to third parties
Apple extended Visual Intelligence capabilities to third-party developers through enhanced App Intents. Etsy is exploring these features for product discovery, with CTO Rafe Colburn noting: “The ability to meet shoppers right on their iPhone with visual intelligence is a meaningful unlock.” The integration allows apps to provide search results within Apple’s visual intelligence experience, potentially driving direct engagement from camera-based searches.

Market and analyst scepticism
Apple’s stock closed 1.2% lower following the conference, with analysts questioning the incremental nature of announcements. “In a moment in which the market questions Apple’s ability to take any sort of lead in the AI space, the announced features felt incremental at best,” said Thomas Monteiro, senior analyst at Investing.com. The measured approach contrasts sharply with Apple’s more ambitious AI visions presented last year. Bob O’Donnell, chief analyst at Technalysis Research, observed: “They went from being visionary and talking about agents before a lot of other people did, to now realizing that, at the end of the day, what they need to do is deliver on what they presented a year ago.”

Technical limitations and strategic focus
The three-billion-parameter on-device model represents both Apple’s commitment to privacy and its technical constraints. Unlike cloud-based models that can handle complex tasks, Apple’s on-device approach limits functionality while ensuring user data remains local.
Ben Bajarin, CEO of analyst firm Creative Strategies, noted Apple’s behind-the-scenes focus: “You could see Apple’s priority is what they’re doing on the back-end, instead of what they’re doing at the front-end, which most people don’t care about yet.” Apple AI developer tools will be available for testing through the Apple Developer Program starting immediately, with a public beta expected next month. The company’s measured approach may disappoint those expecting revolutionary AI capabilities, but it maintains Apple’s traditional emphasis on privacy and incremental innovation over flashy demonstrations. As the AI race intensifies, Apple’s strategy of opening its foundational tools to developers while maintaining modest consumer-facing promises suggests a company more focused on building sustainable AI infrastructure than capturing headlines with ambitious claims. (Photo by Apple) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Apple opens core AI model to developers amid measured WWDC strategy appeared first on AI News. View the full article
  16. Reddit is taking Anthropic to court, accusing the artificial intelligence company of pulling user content from the platform without permission and using it to train its Claude AI models. The lawsuit, filed in a California state court, claims Anthropic made more than 100,000 unauthorised requests to Reddit’s servers, even after publicly stating that it had stopped. The case is built around Reddit’s claim that Anthropic ignored both technical restrictions and its terms of service. According to the complaint, Anthropic bypassed protections like the site’s robots.txt file, which is supposed to prevent automated scraping. Reddit also accuses Anthropic of violating user privacy by collecting and using personal posts—including deleted content—for commercial purposes. Reddit says it offers structured access to its data through licensing agreements with companies such as OpenAI and Google. These deals include conditions around content use, privacy safeguards, and data deletion. According to the platform, Anthropic declined to pursue a formal agreement and instead scraped the site directly, avoiding licensing fees and skipping user protections in the process. The lawsuit highlights a 2021 research paper co-authored by Anthropic CEO Dario Amodei, which pointed to Reddit as a rich source of training data for language models. Reddit also included examples where Claude appeared to reproduce Reddit posts nearly word for word, even echoing posts that had been deleted by users. That, the company says, shows Anthropic failed to put guardrails in place to respect user privacy or content takedowns. Reddit is seeking financial damages and a court order that would stop Anthropic from using Reddit content in future versions of its models. Anthropic has responded that it disagrees with the claims and plans to defend itself. However, this is not the first time the company has come under legal pressure over how it collects training data. In August 2024, a group of authors filed a class-action lawsuit accusing Anthropic of using their copyrighted work without permission. They claimed the firm trained its models on books and other written material without consent, and they sought compensation for the use of their work. A similar case from October 2023 involved Universal Music Group and other publishers. They sued Anthropic over claims that its Claude chatbot was reproducing copyrighted song lyrics. The music companies argued that this use violated their intellectual property rights and asked the court to block further use of their lyrics. Unlike those lawsuits, Reddit’s case doesn’t focus on copyright. Instead, it centres on breach of contract and unfair competition. Reddit’s argument is that the data taken from its site isn’t just public—it’s governed by terms that Anthropic knowingly ignored. That distinction could make the case an important one for other platforms that host user content but want to control how it’s used in commercial AI systems. Reddit also accuses Anthropic of misleading the public. The lawsuit points to public statements from Anthropic claiming it respects scraping rules and values user privacy, which Reddit says were contradicted by the company’s actions. “For its part, despite what its marketing material says, Anthropic does not care about Reddit’s rules or users,” the lawsuit reads.
“It believes it is entitled to take whatever content it wants and use that content however it desires, with impunity.” After the lawsuit was filed, Reddit’s stock rose nearly 67%, a sign that investors supported the move. The outcome of the case could set a precedent for how companies strike a balance between open internet content and the rights of users and content owners. As more AI firms rely on large volumes of online data, the legal and ethical questions around scraping are getting harder to ignore. Reddit’s case adds to the growing list of lawsuits shaping how this next wave of AI development unfolds. (Photo by Brett Jordan) See also: Ethics in automation: Addressing bias and compliance in AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Reddit sues Anthropic for scraping user data to train AI appeared first on AI News. View the full article
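(An aside on the mechanism at the centre of the complaint above: robots.txt is a plain-text file that tells well-behaved crawlers what they may fetch, and Python’s standard library can query it. The user agent names below are illustrative, not any company’s actual crawler names. Crucially, compliance is voluntary: the file expresses policy but enforces nothing, which is why sites pair it with terms of service and, as here, litigation.)

# Check whether a crawler is allowed to fetch a URL under robots.txt.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.reddit.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

for agent in ("ExampleBot", "*"):  # illustrative user agent names
    allowed = rp.can_fetch(agent, "https://www.reddit.com/r/all/")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")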
  17. Taiwan Semiconductor Manufacturing Company (TSMC) finds itself at the centre of a perfect storm: unprecedented AI chip demand that it cannot fully satisfy, escalating trade tensions that threaten its business model, and geopolitical risks that expose the fragility of global semiconductor supply chains. Speaking at TSMC’s annual shareholders meeting in Hsinchu on Tuesday, CEO C.C. Wei delivered a confident outlook for the semiconductor giant, stating that “our revenue and profit this year will set new historical highs.” The bullish projection comes as the company grapples with the indirect effects of US tariffs while simultaneously struggling to meet unprecedented demand for AI applications.

Tariff impact remains manageable despite industry concerns
Wei addressed growing concerns about the impact of President Donald Trump’s trade policies on the global chip industry, acknowledging that tariffs do affect TSMC, though not directly. “Tariffs are imposed on importers, not exporters. TSMC is an exporter,” Wei explained to shareholders. “However, tariffs can lead to slightly higher prices, and when prices go up, demand may go down.” He emphasised that while TSMC’s business could be affected if tariffs force up prices and reduce overall chip demand, the company’s position remains strong. “Our business will still be very good,” Wei stated, adding, “I am not afraid of anything, I am only afraid that the world economy will decline.” Trump’s sweeping tariff policies have created significant uncertainty across the semiconductor sector. The administration initially imposed a 32% duty on imports from Taiwan as part of broader trade measures, though these were later paused for 90 days and semiconductors were notably excluded from the levies.

AI applications drive unprecedented growth
Despite trade policy headwinds, TSMC’s core business continues to benefit from explosive growth in artificial intelligence applications. Wei emphasised that AI chip demand remains “very strong” and consistently outpaces the company’s ability to supply. “Our job is to provide our customers with enough chips, and we’re working hard on that. ‘Working hard’ means it’s still not enough,” he told the meeting. The company’s customer roster includes tech giants Apple and Nvidia, both of which have been major drivers of AI-related semiconductor demand. TSMC’s April sales figures underscore this robust demand, with the company reporting NT$349.6 billion ($11.6 billion) in revenue—a 48.1% increase from the previous year and 22.2% growth from March. Wei noted that the surge partly resulted from companies stockpiling semiconductors ahead of anticipated tariff increases, but stressed that underlying AI demand fundamentals remain exceptionally strong.

Production capacity expansion challenges
The mismatch between AI chip demand and available supply has become a defining challenge for TSMC. Wei indicated that the company is actively working to “increase production capacity to satisfy our customers,” though the scale of demand continues to strain even the world’s most advanced semiconductor manufacturing capabilities. This capacity constraint reflects broader industry dynamics where AI applications—from data centre processors to consumer devices—require increasingly sophisticated and powerful chips that only a handful of manufacturers can produce at scale.
Geopolitical pressures and expansion strategy
TSMC faces mounting pressure to diversify its manufacturing footprint away from Taiwan, where the majority of its fabrication plants are currently located. Beijing’s continued claims over Taiwan and threats to use force have heightened concerns about supply chain resilience for critical semiconductor production. Wei directly addressed recent media speculation about potential Middle East expansion, firmly denying reports that TSMC was considering building chip factories in the United Arab Emirates. “I think rumours are really flying everywhere,” he said, dismissing the Bloomberg reports that cited unnamed sources. The company has been actively establishing a manufacturing presence in other regions, with facilities under development in the United States, Europe, and Japan. These expansion efforts aim to address both geopolitical risks and customer demands for geographically diversified supply chains.

Regulatory compliance and China relations
TSMC’s operations continue to navigate complex regulatory requirements spanning multiple jurisdictions. Wei confirmed that the company works closely with both the Taiwan and US governments to ensure compliance with legal and regulatory requirements. The company recently suspended shipments to China-based chip designer Sophgo after discovering that its chip matched components found in AI processors from Huawei Technologies, a Chinese company subject to extensive US government restrictions. This incident highlights the ongoing challenges TSMC faces in balancing commercial relationships with regulatory compliance.

Industry outlook and economic concerns
While TSMC’s near-term prospects appear robust, Wei acknowledged broader economic risks that could impact the semiconductor industry. The executive’s comment about fearing economic decline more than specific trade policies reflects a recognition that global demand patterns ultimately drive the industry’s fortunes. The company’s record-setting performance projections suggest that current AI chip demand trends are sufficiently strong to offset potential headwinds from trade policies or broader economic uncertainty. However, the sustainability of this growth will likely depend on continued advancement in AI applications and the global economy’s overall health. (Photo by TSMC) See also: Huawei Supernode 384 disrupts Nvidia’s AI market hold Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post TSMC reports record AI chip demand amid Trump tariff uncertainty appeared first on AI News. View the full article
  18. When ChatGPT’s user base exploded from 980,000 to over 10 million in South Korea within a year—an eleven-fold increase that outpaced growth in any other market—OpenAI’s executives knew they had discovered something extraordinary. This wasn’t just viral adoption; it was a powerful market signal that drove the company to fast-track its South Korean expansion and establish Seoul as its third Asian headquarters. But the real story behind OpenAI’s Korean strategy reveals a calculated bet on a country that offers something no other market can: a complete AI ecosystem ready for transformation.

What the numbers reveal
This explosive growth tells a deeper story than simple user acquisition. ChatGPT’s monthly active users reached 10.72 million as of April, more than doubling from March’s 5.09 million in just one month, according to mobile data tracker Mobile Index. While OpenAI itself has not officially disclosed detailed user metrics for South Korea, third-party data reveals the scope of adoption. Total usage time among Korean users jumped from 8.08 million hours in March to 23.7 million hours in April, while new app installations rose nearly fourfold over the same period, from 1.44 million to 4.67 million, Mobile Index reported. But the metrics that matter most to OpenAI aren’t just about volume—they’re about value. Perhaps more telling is Korea’s global ranking in paid subscriptions. South Korea has the largest number of paying ChatGPT subscribers after the United States, according to OpenAI. This isn’t just about free users experimenting with AI—Koreans are putting money behind their adoption, indicating genuine value recognition and sustained engagement.

Beyond user numbers: A strategic ecosystem
What makes the OpenAI South Korea expansion particularly strategic isn’t just the user base, but the ecosystem itself. “Korea has an ecosystem that encompasses all areas of AI, from semiconductors to software and startups, and is a leading AI country where various generations, from students to the elderly, use AI daily,” OpenAI chief strategy officer Jason Kwon said, according to various local reports. This ecosystem advantage is crucial. While many countries excel in specific AI domains, Korea offers a complete vertical stack—from Samsung’s cutting-edge semiconductors that power AI computing to a population that has already integrated AI into daily workflows. “There are many companies leading the global market in areas where OpenAI is seeking cooperation, such as healthcare, bio, robotics, manufacturing, and finance,” Kwon noted.

Timing and competitive pressures
The timing of this expansion reveals competitive pressures that extend beyond simple market opportunities. As ChatGPT tightens its grip on the market, domestic tech firms are facing the risk of losing ground on their home turf due to delayed AI rollouts or insufficient competitiveness. Kakao, for example, only began the first closed beta test of its AI assistant “Kanana” on May 8. Meanwhile, SK Telecom’s “A.Dot” and Wrtn Technologies’ “Wrtn” each maintain MAUs around just one million. This competitive landscape suggests OpenAI’s expansion isn’t just about growth—it’s about securing a market position before domestic competitors can mount effective challenges. The company is moving quickly to establish partnerships with major Korean firms, including recent collaborations with the Korea Development Bank, Kakao, Krafton, and SK Telecom.

The infrastructure play
Perhaps the most significant aspect of OpenAI’s South Korea expansion lies in its infrastructure ambitions.
The company’s “OpenAI for Countries” program, which works with governments and local companies to build data centres locally and provide locally customised ChatGPT on top of that infrastructure, positions Korea as a potential regional hub. “To achieve Korea’s goal of becoming a leading AI country in 2027, infrastructure investment is essential,” Kwon stated, hinting at deeper cooperation possibilities. This approach mirrors OpenAI’s recent deal with the United Arab Emirates (UAE), where it signed an infrastructure construction agreement, its first such overseas partnership, on the 22nd.

Critical considerations
This expansion, however, isn’t without challenges. The rapid growth in ChatGPT usage has sparked concerns about domestic platform viability. One industry insider commented, “ChatGPT is expanding its influence beyond search into various sectors, meaning no other app category can afford to be complacent.” Moreover, regulatory and political considerations remain complex. Kwon’s meetings with both major Korean political parties—the Democratic Party and People Power Party—indicate that successful expansion requires careful navigation of local political dynamics.

Strategic implications
OpenAI’s South Korea expansion ultimately signals a broader strategic shift from pure technology development to geopolitical positioning in AI infrastructure. By establishing deep roots in Korea’s comprehensive AI ecosystem, OpenAI isn’t just gaining users—it’s securing a strategic foothold in Asia’s most AI-ready market. The success of this expansion could serve as a template for OpenAI’s global strategy, demonstrating how AI companies must move beyond software services to become integral parts of national technological infrastructure. For Korea, this partnership offers accelerated AI development; for OpenAI, it provides a proving ground for its “OpenAI for Countries” vision. The question isn’t whether this expansion will succeed—the user numbers already prove market demand. The question is whether this model of deep, infrastructure-level partnerships will become the new standard for AI companies seeking global influence in an increasingly competitive landscape. While Korean tech giants like Kakao scramble to launch their first AI assistants and SK Telecom’s offerings languish at just one million users, a foreign competitor has already captured over 10 million Korean users and shows no signs of slowing down. OpenAI’s South Korea expansion isn’t just about international growth—it’s about seizing control of Asia’s most strategically valuable AI market before domestic players can mount an effective defence. (Photo by Dima Solomin/Unsplash) See also: Sam Altman: OpenAI to keep nonprofit soul in restructuring Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. The post OpenAI’s second largest paying market gets its own office: The South Korean story appeared first on AI News. View the full article
  19. The UK is cementing its position as Europe’s AI powerhouse through partnerships with players like NVIDIA to tackle issues like the skills gap. The UK continued to outpace continental rivals both in freshly funded AI startups and overall private investment throughout 2024. Since 2013, UK AI ventures have managed to attract £22 billion in private funding, suggesting investors are continuing to bet on the home of industry giants like DeepMind, Stability AI, and Wayve. Research unveiled during the recent London Tech Week showed something many tech observers have long suspected: regions blessed with robust AI and data centre infrastructure tend to enjoy stronger economic growth across the board. The analysis, by Public First, suggested even modest bumps in AI data centre capacity could pump nearly £5 billion into the nation’s coffers. More ambitious expansion – doubling current access levels, for instance – might deliver annual economic windfalls approaching £36.5 billion. Cloud provider Nscale chose London Tech Week to pledge to deploy 10,000 NVIDIA Blackwell GPUs in the country by late 2026. Not to be outdone, cloud outfit Nebius revealed plans for its first AI factory in the UK, which is set to bring a further 4,000 NVIDIA Blackwell GPUs online—providing much-needed computational muscle for research bodies, universities, and public services including our perpetually cash-strapped NHS. But having the hardware is only half the battle. As anyone in tech recruitment will tell you, finding people who can actually take advantage of it remains a challenge. NVIDIA is throwing its considerable weight behind the UK government’s national skills push, with plans for a dedicated AI Technology Center on British soil. This centre promises hands-on training in AI, data science, and the increasingly critical field of accelerated computing. “A new NVIDIA AI Technology Center in the UK will provide hands-on training in AI, data science and accelerated computing, focusing on foundation model builders, embodied AI, materials science and earth systems modeling,” explained NVIDIA. The financial sector – the UK’s crown jewel – stands to benefit too. A new AI-powered sandbox from the Financial Conduct Authority will allow for safer experimentation with AI in banking and finance, with NayaOne providing infrastructure and NVIDIA supplying the technological backbone. Sumant Kumar, CTO for Banking & Financial Markets at NTT DATA UK&I, said: “In a sandbox, every action leaves a mark. This supercharged sandbox may help banks get to a viable AI proof-of-concept faster, but it doesn’t reduce their regulatory obligations. If anything, it adds new layers of responsibility. As soon as a firm begins developing models in the sandbox, it needs to be ready to explain how they work, why they produce certain outcomes, and how they’ve been built. “In financial services, the main bottleneck is often about ensuring the right governance is in place. The FCA will still expect clear documentation and strong controls around data provenance and auditability – even in a controlled environment. “That’s why this is such an important opportunity. For firms, it’s a chance to build and refine the internal capabilities that will let them scale AI responsibly. For the government, it’s a chance to maintain the UK’s competitive edge and advance innovation while promoting balanced regulation and consumer safeguards.
Those who approach the sandbox with the right structure will be in the best position to move quickly and safely when it comes to deployment.” Barclays Eagle Labs is opening an Innovation Hub in London that could serve as a launching pad for promising AI and deep tech startups. Those who make the cut will gain a pathway into NVIDIA’s Inception programme, unlocking access to cutting-edge tools and targeted training that might otherwise remain frustratingly out of reach. Mark Boost, CEO of Civo, said: “This feels like a real step forward. We’ve spent years talking about being a leader in AI, but investing in compute infrastructure, developer training, and serious R&D is how we actually start to deliver it. “NVIDIA’s AI Technology Center is an important initiative. Giving UK developers better access to hands-on training in accelerated computing, AI engineering and model development will help close critical skills gaps and support the next generation of homegrown talent.” Boost also touched on a point that’s increasingly occupying minds in Whitehall and boardrooms alike: technological sovereignty. “Building long-term resilience in the UK means looking carefully at our reliance on external compute. As the AI stack becomes more strategic, the UK should be complementing global partnerships with greater investment in local infrastructure, open standards, and technologies we can help shape. That’s what keeps us competitive—staying flexible and able to shape our own path.” Rather than just government announcements or corporate PR, this UK AI initiative with NVIDIA appears to promise genuine coordination between public institutions, industry heavyweights, and educational bodies. The focus on both immediate needs and longer-term foundations suggests lessons have been learned from previous tech booms. Whether this approach delivers the projected economic windfall remains to be seen. But, for once, the UK seems to be playing to its strengths—combining world-class research institutions, a vibrant financial sector, and pragmatic regulation with the computational muscle and skills development needed to turn AI potential into economic reality. (Photo by Charles Postiaux) See also: Anthropic launches Claude AI models for US national security Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post UK tackles AI skills gap through NVIDIA partnership appeared first on AI News. View the full article
  20. Anthropic has unveiled a custom collection of Claude AI models designed for US national security customers. The announcement represents a potential milestone in the application of AI within classified government environments. The ‘Claude Gov’ models have already been deployed by agencies operating at the highest levels of US national security, with access strictly limited to those working within such classified environments. Anthropic says these Claude Gov models emerged from extensive collaboration with government customers to address real-world operational requirements. Despite being tailored for national security applications, Anthropic maintains that these models underwent the same rigorous safety testing as other Claude models in their portfolio.

Specialised AI capabilities for national security
The specialised models deliver improved performance across several critical areas for government operations. They feature enhanced handling of classified materials, with fewer instances where the AI refuses to engage with sensitive information—a common frustration in secure environments. Additional improvements include better comprehension of documents within intelligence and defence contexts, enhanced proficiency in languages crucial to national security operations, and superior interpretation of complex cybersecurity data for intelligence analysis. However, this announcement arrives amid ongoing debates about AI regulation in the US. Anthropic CEO Dario Amodei recently expressed concerns about proposed legislation that would grant a decade-long freeze on state regulation of AI.

Balancing innovation with regulation
In a guest essay published in The New York Times this week, Amodei advocated for transparency rules rather than regulatory moratoriums. He detailed internal evaluations revealing concerning behaviours in advanced AI models, including an instance where Anthropic’s newest model threatened to expose a user’s private emails unless a shutdown plan was cancelled. Amodei compared AI safety testing to wind tunnel trials for aircraft designed to expose defects before public release, emphasising that safety teams must detect and block risks proactively. Anthropic has positioned itself as an advocate for responsible AI development. Under its Responsible Scaling Policy, the company already shares details about testing methods, risk-mitigation steps, and release criteria—practices Amodei believes should become standard across the industry. He suggests that formalising similar practices industry-wide would enable both the public and legislators to monitor capability improvements and determine whether additional regulatory action becomes necessary.

Implications of AI in national security
The deployment of advanced models within national security contexts raises important questions about the role of AI in intelligence gathering, strategic planning, and defence operations. Amodei has expressed support for export controls on advanced chips and the military adoption of trusted systems to counter rivals like China, indicating Anthropic’s awareness of the geopolitical implications of AI technology. The Claude Gov models could potentially serve numerous applications for national security, from strategic planning and operational support to intelligence analysis and threat assessment—all within the framework of Anthropic’s stated commitment to responsible AI development.
Regulatory landscape
As Anthropic rolls out these specialised models for government use, the broader regulatory environment for AI remains in flux. The Senate is currently considering language that would institute a moratorium on state-level AI regulation, with hearings planned before voting on the broader technology measure. Amodei has suggested that states could adopt narrow disclosure rules that defer to a future federal framework, with a supremacy clause eventually preempting state measures to preserve uniformity without halting near-term local action. This approach would allow for some immediate regulatory protection while working toward a comprehensive national standard. As these technologies become more deeply integrated into national security operations, questions of safety, oversight, and appropriate use will remain at the forefront of both policy discussions and public debate. For Anthropic, the challenge will be maintaining its commitment to responsible AI development while meeting the specialised needs of government customers for critical applications such as national security. (Image credit: Anthropic) See also: Reddit sues Anthropic over AI data scraping Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Anthropic launches Claude AI models for US national security appeared first on AI News. View the full article
  21. As Saudi Arabia accelerates its journey toward becoming a global leader in digital innovation, the Smart Data & AI Summit will play a pivotal role in shaping the Kingdom’s data and AI landscape. Scheduled for 27-28 August 2025 at the JW Marriott Hotel in Riyadh, this event will bring together 300+ data and AI professionals, including CDOs, CIOs, data scientists, AI directors, C-suite executives and many others, to explore the latest advances, tackle challenges, and unlock opportunities in data and artificial intelligence. With the Kingdom’s data analytics market projected to reach $8.8 billion by 2030, the summit comes at an important time, offering a platform for public and private sector leaders to collaborate, innovate, and approach the nation’s Vision 2030 goals.

A platform for innovation and collaboration
Building on the success of its debut last year, which was inaugurated by a leading official from the Saudi Data & AI Authority (SDAIA), the 2025 edition promises to be bigger and more impactful. The summit will feature:
- 25+ cutting-edge solution providers showcasing the latest technologies in data and AI.
- 50+ industry experts sharing insights on emerging trends, challenges, and opportunities.
- 300+ attendees, including data engineers, architects, AI pioneers, and decision-makers from Saudi Arabia’s largest organisations.
- 1:1 meetings to foster collaboration and evaluate tailored solutions.
- CPD-accredited sessions to help professionals enhance their skills and advance their careers.

Unveiling the future of data and AI
The summit’s agenda will look at important topics shaping the future of data and AI, including:
- Navigating open data in Saudi Arabia
- AI fusion and machine learning innovations
- Data virtualisation and the power of data mesh
- Ethical data governance and cybersecurity analytics
- Unified data cloud architectures
Discussions will be led by thought leaders from the Kingdom’s top organisations, including Ministry of Hajj & Umrah, Insurance Authority, Council of Health Insurance, NEOM, AlNASSR Club Company | PIF, and Abdul Latif Jameel United Finance.

DAMA Saudi Arabia joins as supporting partner
The Data Management Association (DAMA Saudi Arabia), the Kingdom’s largest data management community, has joined the summit as a supporting partner. The partnership underscores DAMA’s commitment to fostering a robust data management ecosystem and aligns with the summit’s mission to elevate Saudi Arabia’s position as a global leader in data and AI. Abdulaziz Almanea, Founder & Chairman of the Board, DAMA Saudi, spoke of the importance of the summit: “Artificial intelligence is only as good as the data behind it. Quality, governance, and ethics must come first to ensure trust, accuracy, and impact. As Saudi Arabia accelerates its data-driven transformation, industry events like the Smart Data & AI Summit serve as vital platforms for bringing experts together to shape the future of AI with responsible and innovative data practices.”

A legacy of excellence
The inaugural edition of the summit set a high benchmark, with attendees praising the quality of speakers, depth of discussions, and opportunities for networking and collaboration. Nayef Al-Otaibi, VP & Chief Digital Officer at Saudi Aramco, said, “The event was well-managed, the coordination was excellent, and the quality of the speakers was above expectations. It was a beautiful experience connecting with industry experts during the panel discussions and sharing our experiences.
This could basically help us establish the platform and collaborate and work together in future.”

Driving Vision 2030 forward

The Smart Data & AI Summit is a strategic initiative to support Saudi Arabia’s Vision 2030 goals. By bringing together global expertise, cutting-edge technologies, and local insights, the summit aims to:

• Accelerate the Kingdom’s digital transformation.
• Foster innovation and collaboration across industries.
• Address regulatory challenges and ethical considerations in data and AI.
• Unlock new opportunities for investment and growth in the Kingdom’s data and AI sectors.

Sudhir Ranjan Jena, CEO & Co-founder of Tradepass, the organising body, spoke of the summit’s mission: “The data & AI sector is entering a transformative chapter, fuelled by technology disruptions, heightened expectations, and the unprecedented expansion of digital tools and platforms. In the upcoming edition, we will delve into Vision 2030 goals, unlock limitless opportunities, and explore emerging trends and solutions that will play an integral role in shaping the Kingdom’s post-oil economy.”

A high-impact speaker lineup

The summit will feature an impressive roster of speakers, including:

• Dr Ahmed Alzahrani – Director of Business Intelligence and Data Analytics Centre, Ministry of Hajj & Umrah
• Hajar Alolah – Data Governance and Management Office Director, Saudi Development Bank
• Abdullah AlBar – Chief Data Officer, Abdul Latif Jameel United Finance
• Usamah Algemili – Chief Data Executive, Insurance Authority
• Jawad Saleemi – Director – AI & Cloud, Telenor
• Abbasi Poonawala – Executive Director – Enterprise Architecture, Alinma Bank
• Nawaf Alghamdi – Director – Data Analytics & AI, Council of Health Insurance

These experts will share their insights on the latest trends, challenges, and opportunities in data and AI, offering attendees strategies to drive innovation and growth in their organisations.

For more information, visit: [Hidden Content]

Media contact:
Shrinkhal Sharad
PR & Communication Lead
Tradepass
Email: *****@*****.tld
Phone: + (91) 80 6166 4401

The post The Kingdom’s digital transformation showcased at Smart Data & AI Summit appeared first on AI News.

View the full article
22. Reddit is accusing Anthropic of building its Claude AI models on the back of Reddit’s users, without permission and without paying for it.

Anyone who uses Reddit, even a web-crawling bot, agrees to the site’s user agreement. That agreement is clear: you cannot take content from the site and use it for your own commercial products without a written deal. Reddit claims Anthropic’s bots have been doing exactly that for years, scraping massive amounts of conversations and posts to train and improve Claude.

What makes this lawsuit particularly spicy is the way it goes after Anthropic’s reputation. Anthropic has worked hard to brand itself as the ethical, trustworthy AI company, the “white knight” of the industry. The lawsuit, however, calls these claims nothing more than “empty marketing gimmicks”. For instance, Reddit points to a statement from July 2024 in which Anthropic claimed it had stopped its bots from crawling Reddit. The lawsuit says this was “false”, alleging that Reddit’s logs caught Anthropic’s bots trying to access the site more than one hundred thousand times in the following months.

But this isn’t just about corporate squabbles; it directly involves user privacy. When you delete a post or a comment on Reddit, you expect it to be gone. Reddit has official licensing deals with other big AI players like Google and OpenAI, and those deals include technical measures to ensure that when a user deletes content, the AI company deletes it too. According to Reddit’s lawsuit, Anthropic has no such deal and has refused to enter one. This means that if Claude was trained on a post you later deleted, that content could still be baked into the model’s knowledge, effectively ignoring your choice to remove it. The lawsuit even includes a screenshot in which Claude itself admits it has no reliable way of knowing whether the Reddit data it was trained on was later deleted by a user.

So, what does Reddit want? It’s not just about money, although the company is asking for damages for things like increased server costs and lost licensing fees. It is asking the court for an injunction to force Anthropic to stop using any Reddit data immediately. Furthermore, Reddit wants to prohibit Anthropic from selling or licensing any product built using that data. In other words, it is asking a judge to effectively take Claude off the market.

This case forces a tough question: does being “publicly available” on the internet mean content is free for any corporation to take and monetise? Reddit is arguing a firm “no”, and the outcome could change the rules for how AI is developed from here on out.

(Photo by Brett Jordan)

See also: Tackling hallucinations: MIT spinout teaches AI to admit when it’s clueless

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Reddit sues Anthropic over AI data scraping appeared first on AI News.

View the full article
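An aside on the technical layer beneath this dispute: sites publish their crawling rules in robots.txt, and honouring them is the baseline behaviour expected of a “compliant” bot (the user agreement at issue in the suit is a separate, legal layer on top). As a minimal sketch, not Anthropic’s or Reddit’s actual code, here is how a polite crawler checks permission with Python’s standard urllib.robotparser before fetching anything; the user-agent string is an invented placeholder:

```python
import urllib.robotparser

# Minimal compliant-crawler check: consult robots.txt before fetching.
# The user-agent string below is an invented placeholder.
USER_AGENT = "ExampleResearchBot/1.0"

parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://www.reddit.com/robots.txt")
parser.read()  # fetch and parse the site's published crawling rules

for page in ["https://www.reddit.com/r/technology/",
             "https://www.reddit.com/login"]:
    if parser.can_fetch(USER_AGENT, page):
        print("allowed:", page)   # only these may be fetched
    else:
        print("blocked:", page)   # a compliant bot skips these
```

Whether honouring signals like this is enough, or whether a written licence is also required, is exactly the question the court will now weigh.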
23. Ahead of the TechEx North America event on June 4-5, we were lucky enough to speak to Kieran Norton, Deloitte’s US Cyber AI & Automation leader, who will be one of the speakers at the conference on June 4th. With 25+ years in the sector, Kieran speaks authoritatively on all matters cybersecurity, and his most recent roles include advising Deloitte clients on the issues that arise when AI is used in business applications.

The majority of organisations have at least the bare minimum of cybersecurity in place and, thankfully, most operate a decently comprehensive raft of measures covering communications, data storage, and perimeter defences. In the last couple of years, however, AI has changed the picture, both in how companies can leverage the technology internally and in how AI figures in cybersecurity itself – in advanced detection, and in the new ways bad actors exploit the tech.

As a cybersecurity tool, AI can be used for network anomaly detection and the smart spotting of phishing messages, among other uses. As a business enabler, AI means the enterprise has to be proactive to ensure the technology is used responsibly, balancing the innovation it offers against privacy, data sovereignty, and risk.

AI, smart automation, data governance, and security together still occupy a specialist niche. But given AI’s growing presence in the enterprise, these niches are set to become mainstream concerns: problems, solutions, and advice that every organisation will need to take on board, sooner rather than later.

Governance and risk

Integrating AI into business processes isn’t solely about the technology and the methods for deploying it. Internal processes will need to change to make best use of AI, and to better protect the business that uses it daily. Kieran draws a parallel to earlier changes made necessary by new technologies: “I would correlate [AI] with cloud adoption where it was a fairly significant shift. People understood the advantages of it and were moving in that direction, although sometimes it took them more time than others to get there.”

Those changes mean casting the net wide: updating governance frameworks, establishing secure architectures, even bringing in a new generation of specialists to ensure AI and the data associated with it are used safely and responsibly. Companies actively using AI have to detect and correct bias, test for hallucinations, impose guardrails, manage where and by whom AI is used, and more. As Kieran puts it: “You probably weren’t doing a lot of testing for hallucination, bias, toxicity, data poisoning, model vulnerabilities, etc. That now has to be part of your process.”

These are big subjects, and for the fuller picture, we recommend readers attend the two talks Kieran is giving at TechEx North America. He’ll be exploring both sides of the AI coin – the issues around deploying AI for the business, and the methods companies can implement to deter and detect the new breed of AI-powered malware and attack vectors.

The right use-cases

Kieran advocates that companies start with smaller, lower-risk AI implementations. While some of the first sightings of AI ‘in the wild’ have been chatbots, he was quick to differentiate between a chatbot that can intelligently answer customers’ questions and agents, which can take action by triggering interactions with the apps and services the business operates.
“So there’s a delineation […] chatbots have been one of the primary starting places […] As we get into agents and agentic, that changes the picture. It also changes the complexity and risk profile.”

Customer-facing agentic AI instances are indubitably higher risk, as a misstep can have significant effects on a brand. “That’s a higher risk scenario. Particularly if the agent is executing financial transactions or making determinations based on healthcare coverage […] that’s not the first use case you want to try.”

“If you plug 5, 6, 10, 50, a hundred agents together, you’re getting into a network of agency […] the interactions become quite complex and present different issues,” he said.

In some ways, the issues around automation and system-to-system interfaces have been with us for close on a decade. Data silos and RPA (robotic process automation) challenges are hurdles enterprises have been trying to clear for years. “You still have to know where your data is, know what data you have, have access to it […] The fundamentals are still true.” In the AI era, those fundamental questions about infrastructure, data visibility, security, and sovereignty are arguably more relevant than ever. Discussions about AI tend to circle around the same issues, which bears out Kieran’s point that a conversation about AI in the enterprise has to be wide-reaching and concern many of the operational and infrastructural underpinnings of the business.

Kieran therefore emphasises practicality: need and ability deserve a grounded assessment before AI can gain a foothold. “If you understand the use case […] you should have a pretty good idea of the ROI […] and therefore whether or not it’s worth the pain and suffering to go through building it.”

At Deloitte, AI is being put to use where there is a clear use case with a measurable return: the initial triaging of SOC tickets, where the AI acts as a Level 1 incident analysis engine (a toy sketch of this kind of triage appears below). “We know how many tickets get generated a day […] if we can take 60 to 80% of the time out of the triage process, then that has a significant impact.”

Given the technology’s nascence, demarcating a specific area of operations where AI can be used acts as both prototype and proof of effectiveness. The AI is not customer-facing, and highly qualified experts are on hand to check and oversee its deliberations.

Conclusion

Kieran’s message for business professionals investigating AI uses for their organisations was not to build an AI risk assessment and management programme from scratch. Instead, companies should evolve existing systems, have a clear understanding of each use-case, and avoid the trap of building for theoretical value. “You shouldn’t create another programme just for AI security on top of what you’re already doing […] you should be modernising your programme to address the nuances associated with AI workloads.”

Success in AI starts with clear, realistic goals built on solid foundations.

You can read more about TechEx North America here and sign up to attend. Visit the Deloitte team at booth #153 and drop in on its sessions on June 4: ‘Securing the AI Stack’ on the AI & Big Data stage from 9:20am-9:50am, and ‘Leveraging AI in Cybersecurity for business transformation’ on the Cybersecurity stage, 10:20am-10:50am. Learn more about Deloitte’s solutions and service offerings for AI in business and cybersecurity or email the team at *****@*****.tld.
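Deloitte hasn’t published how its triage engine works, so the following is only a toy sketch of the general idea under simple assumptions: a text classifier stands in for the Level 1 analysis engine, sorting incoming tickets and reporting a confidence score so that low-confidence calls still route to a human analyst. All tickets, labels, and names are invented.

```python
# Toy sketch of automated Level 1 SOC ticket triage. A simple text
# classifier stands in for the real (unpublished) engine; all tickets
# and labels below are invented for illustration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tickets = [
    "Multiple failed admin logins from unfamiliar IP range",
    "User reports phishing email with credential-harvesting link",
    "Scheduled certificate renewal completed on web gateway",
    "Antivirus flagged known-benign internal tool, signature outdated",
    "Outbound traffic spike to unrecognised host on port 445",
    "Password reset request submitted through the standard portal",
]
labels = ["escalate", "escalate", "close", "close", "escalate", "close"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

new_ticket = ["Repeated failed logins followed by a successful login at 3am"]
probs = model.predict_proba(new_ticket)[0]
verdict = model.classes_[probs.argmax()]
print(verdict, f"(confidence {probs.max():.2f})")
# Low-confidence verdicts still route to a human analyst, preserving
# the human-in-the-loop oversight Kieran describes.
```

The design point is the confidence gate, not the particular classifier: the automation only removes triage time on tickets it is sure about, which is how the claimed 60 to 80% saving can coexist with expert oversight.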
(Image source: “Symposium Cisco Ecole Polytechnique 9-10 April 2018 Artificial Intelligence & Cybersecurity” by Ecole polytechnique / Paris / France is licensed under CC BY-SA 2.0.) The post The modern ROI imperative: AI deployment, security and governance appeared first on AI News. View the full article
24. CIOs and business leaders know they’re sitting on a goldmine of business data. And while traditional tools such as business intelligence platforms and statistical analysis software can effectively surface insights from collated data resources, doing so quickly, in real-time, and at scale remains an unsolved challenge.

Enterprise AI, when deployed responsibly and at scale, can turn these bottlenecks into opportunities. Acting quickly on data, even ‘live’ (during a customer interaction, for example), is one of the technology’s abilities, as is scalability: AI can process large amounts of information from disparate sources almost as easily as it can summarize a one-page spreadsheet.

But deploying an AI solution in the modern enterprise isn’t simple. It takes structure, trust and the right talent. Alongside the practical implementation hurdles, AI brings challenges of its own, such as data governance, the need to impose guardrails on AI responses and training data, and persistent staffing issues.

We met with Rani Radhakrishnan, PwC Principal, Technology Managed Services – AI, Data Analytics and Insights, to talk candidly about what’s working — and what’s holding back CIOs in their AI journey. We spoke ahead of her speaking engagement at TechEx AI & Big Data Expo North America, June 4 and 5, at the Santa Clara Convention Center.

Rani is especially attuned to the governance, data privacy and sovereignty issues that face enterprises, having spent many years of her career working with clients in the health sector — an area where privacy, data oversight and above all data accuracy are make-or-break aspects of technology deployments.

From support to strategy: shifting expectations for AI

Rani said there’s growing enthusiasm among PwC’s clients for AI-powered managed services that can provide business insights in every sector, and for the technology to be used more proactively in so-called agentic roles, where autonomous AI agents can act independently on data and user input, taking action based on interactions with humans, access to data resources, and automation.

For example, PwC’s agent OS is a modular AI platform that connects systems and scales intelligent agents into workflows, many times faster than traditional computing methods. It’s an example of how PwC responds to the demand for AI from its clients, many of whom see the potential of this new technology but lack the in-house expertise and staff to act on their needs.

Depending on the sector, interest in AI can come from many different parts of the business. Proactive monitoring of physical or digital systems, predictive maintenance in manufacturing or engineering, and cost efficiencies won by automation in complex, customer-facing environments are just a few examples. But regardless of where AI can bring value, most companies don’t yet have in-house the range of skills and people necessary for effective AI deployment — or at least, deployments that achieve ROI and don’t come with significant risk.

“It’s not enough to just have a prompt engineer or a Python developer,” Rani said.
“You’ve got to put all of these together in a very structured manner, and you still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.”

Cleaning house: the data challenge behind AI

Rani says that effective AI implementations need a mix of technical skills — data engineering, data science, prompt engineering — in combination with an organization’s domain expertise. Internal domain experts can define the right outcomes, while technical staff cover responsible AI practices, like data collation and governance, and confirm that AI systems work responsibly and within company guidelines.

“In order to get the most value out of AI, an organization has to get the underlying data right,” she said. “I don’t know of a single company that says its data is in great shape … you’ve got to get it into the right structure and normalize it properly so you can query, analyze, and annotate it and identify emerging trends.” (A minimal sketch of that normalization step appears below.)

Part of the work enterprises have to put in for effective AI use is watching for and correcting bias — both in the output of AI systems and in the training and operational data itself. As part of the underlying architecture of AI systems, teams must apply stringent data sanitization, normalization, and annotation processes. The latter requires “a lot of human effort,” Rani said, and the skilled personnel required are among the new breed of data professionals beginning to emerge.

If the data and personnel challenges can be overcome, the resulting feedback loop is what makes generative AI’s outcomes so valuable, Rani said. “Now you have an opportunity with AI prompts to go back and refine the answer that you get. And that’s what makes it so unique and so valuable because now you’re training the model to answer the questions the way you want them answered.”

For CIOs, the shift isn’t just about tech enablement. It’s about integrating AI into enterprise architecture, aligning with business strategy, and managing the governance risks that come with scale. CIOs are becoming AI stewards — architecting not just systems, but trust and transformation.

Conclusion

It’s only been a few years since generative AI emerged from its roots in academic computer science research, so it’s understandable that today’s enterprise organizations are, to an extent, still feeling their way towards realizing AI’s potential. But a new playbook is emerging — one that helps CIOs access the value held in their data reserves, in business strategy, operational improvement, customer-facing experiences and a dozen more areas of the business.

As a company steeped in experience with clients large and small from all over the world, PwC is one of the leading choices decision-makers turn to when beginning an AI journey, or when rationalizing and directing an existing one. Explore how PwC is helping CIOs embed AI into core operations, and see Rani’s latest insights at the June TechEx AI & Big Data Expo North America.

(Image source: “Network Rack” by one individual is licensed under CC BY-SA 2.0.)

The post AI enables shift from enablement to strategic leadership appeared first on AI News.

View the full article
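To make Rani’s point about structure and normalization concrete, here is a minimal pandas sketch, with all field names and records invented, that coerces two disparate sources into one queryable shape: trim and case-fold the join key, parse each source’s date convention, and de-duplicate.

```python
import pandas as pd

# Two hypothetical sources with inconsistent schemas and conventions.
crm = pd.DataFrame({
    "Customer Name": ["Acme Corp ", "globex "],
    "Signup": ["2024-01-15", "2024-02-15"],          # ISO dates
})
billing = pd.DataFrame({
    "customer_name": ["ACME CORP", "Initech"],
    "signup_date": ["15/01/2024", "02/03/2024"],     # day-first dates
})

def normalize(df, name_col, date_col, dayfirst=False):
    """Coerce one source into the canonical shape: a trimmed,
    lowercase customer key plus a proper datetime column."""
    return pd.DataFrame({
        "customer": df[name_col].str.strip().str.lower(),
        "signup_date": pd.to_datetime(df[date_col], dayfirst=dayfirst),
    })

unified = (
    pd.concat([normalize(crm, "Customer Name", "Signup"),
               normalize(billing, "customer_name", "signup_date",
                         dayfirst=True)])
    .drop_duplicates(subset="customer")  # same customer seen in both systems
    .reset_index(drop=True)
)
print(unified)  # now queryable, analyzable, and ready for annotation
```

Only after a pass like this can the querying, analysis, and annotation she describes, and the bias checks that depend on them, begin in earnest.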
25. AI hallucinations are becoming more dangerous as models are increasingly trusted to surface information and make critical decisions. We’ve all got that know-it-all friend who can’t admit when they don’t know something, or who resorts to giving dodgy advice based on something they’ve read online. Hallucinations by AI models are like that friend, except this one could be in charge of creating your medical treatment plan.

That’s where Themis AI enters the picture. This MIT spinout has managed to achieve something that seems straightforward in theory but is actually quite complex: teaching AI systems to say, “I’m not sure about this.”

AI systems typically display overconfidence. Themis’ Capsa platform acts as a reality check for AI, helping models recognise when they’re venturing into guesswork rather than certainty. Founded in 2021 by MIT Professor Daniela Rus, along with former research colleagues Alexander Amini and Elaheh Ahmadi, Themis AI has developed a platform that can integrate with virtually any AI system to flag moments of uncertainty before they lead to mistakes. Capsa essentially trains AI to detect patterns in how it processes information that might indicate it’s confused, biased, or working with incomplete data that could lead to hallucinations.

Since launching, Themis claims it has helped telecoms companies avoid costly network planning errors, assisted oil and gas firms in making sense of complex seismic data, and published research on creating chatbots that don’t confidently make things up.

Most people remain unaware of how frequently AI systems are simply taking their best guess. As these systems handle increasingly critical tasks, those guesses could have serious consequences. Themis AI’s software adds a layer of self-awareness that’s been missing.

Themis’ journey towards tackling AI hallucinations

The journey to Themis AI began years ago in Professor Rus’s MIT lab, where the team was investigating a fundamental problem: how do you make a machine aware of its own limitations? In 2018, Toyota funded their research into reliable AI for self-driving vehicles—a sector where mistakes could be fatal. The stakes are incredibly high when autonomous vehicles must accurately identify pedestrians and other road hazards.

Their breakthrough came when they developed an algorithm that could spot racial and gender bias in facial recognition systems. Rather than just identifying the problem, their system actually fixed it by rebalancing the training data—essentially teaching the AI to correct its own prejudices.

By 2021, they’d demonstrated how this approach could revolutionise drug discovery. AI systems could evaluate potential medications but – crucially – flag whether their predictions rested on solid evidence or on educated guesswork and outright hallucination. The pharmaceutical industry recognised the potential savings in money and time from focusing only on drug candidates the AI was confident about.

Another advantage of the technology is for devices with limited computing power. Edge devices use smaller models that cannot match the accuracy of huge models run on a server, but with Themis’ technology these devices can handle most tasks locally and only request help from the big servers when they encounter something challenging.

AI holds tremendous potential to improve our lives, but that potential comes with real risks.
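The article doesn’t detail how Capsa computes its uncertainty signal, so here is only a generic sketch of one well-known technique, Monte Carlo dropout: run several stochastic forward passes and treat disagreement between them as a cue to defer, much like the edge-to-server fallback described above. The model, data, and threshold are all illustrative placeholders.

```python
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Placeholder edge model; the Dropout layer is what makes
    repeated forward passes stochastic."""
    def __init__(self, in_dim=16, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Dropout(p=0.3),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, passes=20):
    model.train()  # keep dropout active so each pass differs
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(passes)]
        )
    mean = probs.mean(dim=0)
    # Predictive entropy: higher means the passes disagree more.
    entropy = -(mean * mean.clamp_min(1e-9).log()).sum(dim=-1)
    return mean, entropy

model = SmallClassifier()
batch = torch.randn(4, 16)
mean, entropy = predict_with_uncertainty(model, batch)

THRESHOLD = 0.9  # illustrative; tuned on held-out data in practice
for probs, ent in zip(mean, entropy):
    if ent > THRESHOLD:
        print("uncertain: defer to a larger model or a human")
    else:
        print("confident:", int(probs.argmax()))
```

The escalation branch is the point: a small model that knows when to say “I’m not sure” can safely handle the easy cases locally and hand the hard ones upstream.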
As AI systems become more deeply integrated into critical infrastructure and decision-making, the ability to acknowledge the uncertainty that leads to hallucinations may prove to be their most human – and most valuable – quality. Themis AI is making sure they learn this crucial skill.

See also: Diabetes management: IBM and Roche use AI to forecast blood sugar levels

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Tackling hallucinations: MIT spinout teaches AI to admit when it’s clueless appeared first on AI News.

View the full article
