Teachers in England have been given the all-clear to use AI to help with low-level tasks that are part of their duties, the BBC reports. Guidance from the Department for Education (DfE) says AI can be used by school teachers in England, but only for 'low stakes' tasks such as writing letters to parents and marking homework.

The decision to approve the use of the technology follows the results of a 2023 survey of teachers, undertaken on behalf of the DfE, in which a majority of respondents were said to be "broadly optimistic" about using AI in the course of their jobs. At the time, a spokesperson from Teacher Tapp (the company behind the software used to conduct the survey) said: "It's really quite normal now as a maths teacher, that you don't mark maths homework any more … because we have such chronic shortages of maths teachers that you know nobody really feels aggrieved." Responses to the 2023 survey quoted teachers saying AI can be quite useful when they need to source appropriate teaching materials, and when writing reports to parents on the performance and behaviour of their children.

As part of today's announcement, the DfE said that teachers' use of AI will help reduce the burden of unpaid overtime they work, and can lead to improved work-life balance and job satisfaction. By allowing staff to use AI tools, it's hoped that the statistics around teachers' mental health in general should improve (36% of teachers have experienced 'burn-out' according to the charity Education Support [PDF]), and that the profession will attract more graduates.

Part of the daily stress many teachers suffer is caused by a shortage of qualified teachers, a situation that the use of AI may help to ease. Although the *** government has pointed to a greater number of teachers employed across the *** than a decade ago, the ratio of pupils to teachers continues to rise as the population grows. Teaching classes of 33 or more is commonplace in English state schools, and over a million pupils in the *** are taught in classes of more than 30. The attrition rate for qualified teachers in the *** is around 8.8% according to SecEd, an industry website aimed at teachers working in secondary schools (the 11-18 age group). SecEd has also stated that the number of open positions in the sector climbed from three to six per 1,000 teachers in the 12 months from 2022. Due to budgetary constraints on local authorities and schools, open teaching positions are often filled by short-term supply (substitute) teachers sourced through employment agencies, a practice that costs schools significantly more than paying permanent salaried staff.

In line with today's announcement, a post on the Education Hub blog published by the *** government states that "teachers can use AI to help with things like planning lessons, creating resources, marking work, giving feedback, and handling administrative tasks." It adds the proviso that it is up to the individual teacher to "check that anything AI generates is accurate and appropriate – the final responsibility always rests with them and their school or college."

The DfE has also given the government's seal of approval for the use of AI by companies that conduct curriculum and assessment reviews of *** schools, the outcomes of which feed into schools' rankings in the so-called league tables, as well as the classifications given to schools by Ofsted (Office for Standards in Education) such as 'special measures', 'good', or 'outstanding'.
The approval for the use of AI in this context comes despite opposition from teaching unions. The longer-term issue that has pervaded the English school system for several decades is not the sector's use of technology, but its chronic under-funding. The NAHT (National Association of Head Teachers) states that between school years 2009-10 and 2021-22, capital spending on schools saw an inflation-adjusted reduction of 29%. The Institute for Fiscal Studies has said that school spending per pupil in England saw a real-terms decrease of 9% over the same period.

Equipping teaching professionals with technology tools may relieve some of the burden of administration placed on them, although whether marking homework can be considered what the Department for Education terms 'low stakes' is debatable. Investment in school-age children in the form of education budget increases is expensive, while subscriptions to AI models can cost as little as a few dollars a month. On paper, the lure of AI helping teachers manage their workloads a little more efficiently must be attractive to DfE officials. But what is apparent is the consistently low value placed on childhood education by successive *** governments. Deciding to allow AI to help staff in a criminally under-funded education sector is largely irrelevant and will have little impact on the quality of education offered to another generation of English children.

(Image source: "Village School Classroom" by Thomas Galvez is licensed under CC BY 2.0.)

The post Teachers in England given the green-light to use AI appeared first on AI News.
-
MarketsandMarkets values the global artificial intelligence market at $371.71 billion and expects it to exceed $2,407.02 billion by 2032. The statistic demonstrates how AI technology can affect many sectors, including cryptocurrency. The Business Research Company reports that the generative AI market in the cryptocurrency space alone is expected to grow from $760 million in 2024 to $1.02 billion in 2025 – a CAGR of roughly 34.5%.

As readers will know, artificial intelligence boasts an unusual computational ability that helps it extract meaningful insights in real time. In terms of the ADA price, for instance, AI can help traders make more informed predictions about future price movements by combining historical performance, market trends, and other data points. And that's just scratching the surface – there's much more to how AI is reshaping this space.

Providing better security

Cyberattacks are a growing concern across industries, and cryptocurrency is no exception. Approximately 940,000 attacks take place daily worldwide. In the cryptocurrency industry, issues like private key compromises have surged in number: compromises accounted for almost half (43.8%) of stolen cryptocurrency in 2024, with the total value of stolen funds rising by about 21% that year.

Since bad actors reinvent themselves constantly, ignoring cybersecurity can have serious consequences, especially for cryptocurrency exchanges. One example is the loss of security-conscious customers: according to cxscoop.com, up to 21% never return to brands that suffer cybersecurity incidents. Given the competitive nature of the cryptocurrency industry, such losses can be fatal to companies, and at best, recovering from cyberattacks is challenging. An IBM report reveals that companies may need at least $4.88 million to recover, which is why many cryptocurrency companies are turning to AI for better protection.

AI excels at pattern recognition, making it highly effective at detecting fraud. It examines data like transaction histories and IP addresses to identify malicious activity in real time. For example, blockchain analytics firm Elliptic recently noted potential money laundering on the Bitcoin network after training an AI model on data covering about 200 million transactions.

The rise of smart trading bots

Gathering and processing all the data needed for accurate trading decisions or anomaly detection is no easy task. Errors and delays are common, but AI can assess vast amounts of information and deliver results far more quickly than human workers. Many cryptocurrency traders have turned to artificial intelligence as their new hope, leading to the expansion of the global AI cryptocurrency trading bot market, which Research and Markets values at $40.8 billion. If this trend continues, the market could hit $985.2 billion in the next few years, translating to a CAGR of 37.2%.

Bots can examine large amounts of data, including social media sentiment and global news, and make predictions that give traders a serious edge. But despite such benefits, AI is not 100% accurate; it needs close monitoring and strategy adjustment to avoid inaccurate predictions.

Are there any challenges?

According to a ResearchGate publication by Halima Kure and others, data poisoning can reduce classification accuracy in fraud detection models by 22%. Such attacks manipulate AI models and can be used to initiate fraudulent transactions.
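To make the fraud-detection idea concrete, here is a minimal sketch of the kind of anomaly detection described above, using scikit-learn's IsolationForest on a toy set of transaction features. The features, data, and thresholds are illustrative assumptions rather than any exchange's real system; note that a model like this is only as reliable as its training data, which is precisely what the poisoning attacks just mentioned exploit.

```python
# Minimal sketch: flagging anomalous crypto transactions with an Isolation Forest.
# Features and data are toy examples; a real exchange would use far richer
# signals (IP reputation, device fingerprints, transaction-graph features, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy feature matrix: [amount_usd, transactions_per_hour, new_address_flag]
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=1000),  # typical amounts
    rng.poisson(lam=2, size=1000),                  # typical frequency
    rng.integers(0, 2, size=1000),                  # mix of old and new addresses
])
suspicious = np.array([
    [250_000.0, 40, 1],  # huge amount, rapid-fire, brand-new address
    [180_000.0, 35, 1],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for anomalies and 1 for normal points
print(model.predict(suspicious))  # likely [-1 -1]
print(model.predict(normal[:5]))  # mostly [1 1 1 1 1]
```

A poisoning attack works by slipping fraudulent-looking records into the `normal` training set, shifting the model's notion of normality so that real fraud slips through.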
Another common concern with AI algorithms is the 'black box' problem. When users don't understand how an AI system makes its decisions, trust erodes. In an industry like cryptocurrency, where trust is everything, users can come to perceive trading bots as untrustworthy. Security.org claims that 40% of cryptocurrency owners have doubts about digital currencies.

Cryptocurrency's volatility and unpredictable socio-economic conditions also create challenges for artificial intelligence's ability to make accurate predictions. If AI's analytical abilities are overestimated, costs will mount up, regardless of trading strategy.

Future developments may address some of these challenges, with observers suggesting AI may continue to dominate the cryptocurrency space. Cryptocurrency companies have been using the technology to improve security through real-time monitoring; AI can detect an attack before it happens, helping companies avoid significant financial losses. And artificial intelligence's computational ability can help investors improve prediction accuracy: by gathering and assessing data from numerous sources, the technology offers real-time insights – something that once seemed out of reach.

The post AI's influence in the cryptocurrency industry appeared first on AI News.
-
OpenAI chief Sam Altman has declared that humanity has crossed into the era of artificial superintelligence—and there's no turning back. "We are past the event horizon; the takeoff has started," Altman states. "Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be."

The lack of visible signs – robots aren't yet wandering our high streets, disease remains unconquered – masks what Altman characterises as a profound transformation already underway. Behind closed doors at tech firms like his own, systems are emerging that can outmatch general human intellect.

"In some big sense, ChatGPT is already more powerful than any human who has ever lived," Altman claims, noting that "hundreds of millions of people rely on it every day and for increasingly important tasks." This casual observation hints at a troubling reality: such systems already wield enormous influence, with even minor flaws potentially causing widespread harm when multiplied across their vast user base.

The road to superintelligence

Altman outlines a timeline towards superintelligence that might leave many readers checking their calendars. By next year, he expects "the arrival of agents that can do real cognitive work," fundamentally transforming software development. The following year could bring "systems that can figure out novel insights"—meaning AI that generates original discoveries rather than merely processing existing knowledge. By 2027, we might see "robots that can do tasks in the real world."

Each prediction seems to leap beyond the previous one in capability, drawing a line that points unmistakably toward superintelligence—systems whose intellectual capacity vastly outstrips human potential across most domains. "We do not know how far beyond human-level intelligence we can go, but we are about to find out," Altman states. This progression has sparked fierce debate among experts, with some arguing these capabilities remain decades away. Yet Altman's timeline suggests OpenAI has internal evidence for this accelerated path that isn't yet public knowledge.

A feedback loop that changes everything

What makes current AI development uniquely concerning is what Altman calls a "larval version of recursive self-improvement"—the ability of today's AI to help researchers build tomorrow's more capable systems. "Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research," he explains. "If we can do a decade's worth of research in a year, or a month, then the rate of progress will obviously be quite different."

This acceleration compounds as multiple feedback loops intersect. Economic value drives infrastructure development, which enables more powerful systems, which generate more economic value. Meanwhile, the creation of physical robots capable of manufacturing more robots could create another explosive cycle of growth.

"The rate of new wonders being achieved will be immense," Altman predicts. "It's hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonisation the next year." Such statements would sound like hyperbole from almost anyone else. Coming from the man overseeing some of the most advanced AI systems on the planet, they demand at least some consideration.
Living alongside superintelligence

Despite the potential impact, Altman believes many aspects of human life will retain their familiar contours. People will still form meaningful relationships, create art, and enjoy simple pleasures. But beneath these constants, society faces profound disruption. "Whole classes of jobs" will disappear—potentially at a pace that outstrips our ability to create new roles or retrain workers. The silver lining, according to Altman, is that "the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before."

For those struggling to imagine this future, Altman offers a thought experiment: "A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries." Our descendants may view our most prestigious professions with similar bemusement.

The alignment problem

Amid these predictions, Altman identifies a challenge that keeps AI safety researchers awake at night: ensuring superintelligent systems remain aligned with human values and intentions. Altman states the need to solve "the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term". He contrasts this with social media algorithms that maximise engagement by exploiting psychological vulnerabilities.

This isn't merely a technical issue but an existential one. If superintelligence emerges without robust alignment, the consequences could be devastating. Yet defining "what we collectively really want" will be almost impossible in a diverse global society with competing values and interests. "The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better," Altman urges.

OpenAI is building a global brain

Altman has repeatedly characterised what OpenAI is building as "a brain for the world." This isn't meant metaphorically. OpenAI and its competitors are creating cognitive systems intended to integrate into every aspect of human civilisation—systems that, by Altman's own admission, will exceed human capabilities across domains. "Intelligence too cheap to meter is well within grasp," Altman states, suggesting that superintelligent capabilities will eventually become as ubiquitous and affordable as electricity.

For those dismissing such claims as science fiction, Altman offers a reminder that merely a few years ago, today's AI capabilities seemed equally implausible: "If we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030."

As the AI industry continues its march toward superintelligence, Altman's closing wish – "May we scale smoothly, exponentially, and uneventfully through superintelligence" – sounds less like a prediction and more like a prayer. While timelines may (and will) be disputed, the OpenAI chief makes clear the race toward superintelligence isn't coming—it's already here. Humanity must grapple with what that means.

See also: Magistral: Mistral AI challenges big tech with reasoning model
The post Sam Altman, OpenAI: The superintelligence era has begun appeared first on AI News.
-
Mistral AI has pulled back the curtain on Magistral, its first model built specifically for reasoning tasks. Magistral arrives in two flavours: a 24B-parameter open-source version called Magistral Small that anyone can tinker with, and a beefier enterprise edition, Magistral Medium, aimed at commercial applications where advanced reasoning capabilities matter most.

"The best human thinking isn't linear—it weaves through logic, insight, uncertainty, and discovery," explains Mistral AI. That's a fair point: existing models often struggle with the messy, non-linear way humans actually think through problems. I've tested numerous reasoning models and they typically suffer from three key limitations: they lack depth in specialised domains, their thinking process is frustratingly opaque, and they perform inconsistently across different languages.

Mistral AI's real-world reasoning for professionals

For professionals who've been hesitant to trust AI with complex tasks, Magistral might change some minds. Legal eagles, finance folks, healthcare professionals, and government workers will appreciate the model's ability to show its work. All conclusions can be traced back through logical steps—crucial when you're operating in regulated environments where "because the AI said so" simply doesn't cut it.

Software developers haven't been forgotten either. Magistral claims to shine at the kind of structured thinking that makes for better project planning, architecture design, and data engineering. Having struggled with some models that produce plausible-sounding but flawed technical solutions, I'm keen to see if Magistral's reasoning capabilities deliver on this front.

Mistral claims its reasoning model excels at creative tasks too. The company reports that Magistral is "an excellent creative companion" for writing and storytelling, capable of producing both coherent narratives and – when called for – more experimental content. This versatility suggests we're moving beyond the era of having separate models for creative versus logical tasks.

What separates Magistral from the rest?

What separates Magistral from run-of-the-mill language models is transparency. Rather than simply spitting out answers from a black box, it reveals its thinking process in a way users can follow and verify. This matters enormously in professional contexts. A lawyer doesn't just want a contract clause suggestion; they need to understand the legal reasoning behind it. A doctor can't blindly trust a diagnostic suggestion without seeing the clinical logic. By making its reasoning traceable, Magistral could help bridge the trust gap that's held back AI adoption in high-stakes fields.

Having spoken with non-English AI developers, I've heard consistent frustration about how reasoning capabilities drop off dramatically outside English. Magistral appears to tackle this head-on with robust multilingual support, allowing professionals to reason in their preferred language without performance penalties. This isn't just about convenience; it's about equity and access. As countries increasingly implement AI regulations requiring localised solutions, tools that reason effectively across languages will have a significant advantage over English-centric competitors.

Getting your hands on Magistral

For those wanting to experiment, Magistral Small is available now under the Apache 2.0 licence via Hugging Face. Those interested in the more powerful Medium version can test a preview through Mistral's Le Chat interface or via their API platform.
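As a rough illustration of what tinkering with the open-weights Small model might look like, here is a hedged sketch using Hugging Face's transformers library. The model identifier below is a placeholder assumption, not the confirmed name; check Mistral's Hugging Face organisation for the real one, and note that a 24B-parameter model needs substantial GPU memory (or quantisation) to run locally.

```python
# Hedged sketch: trying an open-weights reasoning model locally with transformers.
# "mistralai/Magistral-Small" is a placeholder id; confirm the real identifier
# on Hugging Face first. device_map="auto" also requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Magistral-Small"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content":
             "A train leaves at 09:40 and arrives at 12:25. "
             "How long is the journey? Think step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```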
Enterprise users looking for deployment options can find Magistral Medium on Amazon SageMaker, with IBM watsonx, Azure, and Google Cloud Marketplace implementations coming soon.

As the initial excitement around general-purpose chatbots begins to wane, the market is hungry for specialised AI tools that excel at specific professional tasks. By focusing on transparent reasoning for domain experts, Mistral has carved out a potentially valuable niche.

Founded in 2023 by alumni of DeepMind and Meta AI, Mistral has moved at breakneck speed to establish itself as Europe's AI champion. It has consistently punched above its weight, creating models that compete with offerings from companies many times its size. As organisations increasingly demand AI that can explain itself – particularly in Europe, where the AI Act will require transparency – Magistral's focus on showing its reasoning process feels particularly timely.

(Image by Stephane)

See also: Tackling hallucinations: MIT spinout teaches AI to admit when it's clueless

The post Magistral: Mistral AI challenges big tech with reasoning model appeared first on AI News.
-
Artificial intelligence needs no introduction, driving innovation and transforming the way people work. But the adoption of AI and our increasing reliance on it also raise questions about the centralised nature of the infrastructure it runs on, and the risks that poses. Cryptocurrencies have taught us the importance of decentralisation, and the dangers of concentrating compute resources and data in a small handful of facilities. While such an approach may seem efficient, it also introduces critical vulnerabilities and concerns over access and governance.

Centralised AI systems are incredibly vulnerable: the big server farms that run them represent a single point of failure that could bring hundreds of applications crashing down. Data centres that power AI models like ChatGPT pose a tempting target for hackers, too, due to the enormous amounts of data they possess. Centralised servers also mean more headaches in terms of regulation. When an AI system is located in a single country, it falls under that nation's governance, which can cause problems for users in other territories subject to different data sovereignty and privacy rules.

Of course, centralisation also means monopolisation, and we already have plenty of evidence of this, with the likes of OpenAI, Google, and Anthropic being extremely secretive about how they train their most advanced AI models. The danger is that just a handful of big corporations will end up becoming gatekeepers of a technology that becomes a vital part of modern life, restricting access to those who are willing to pay whatever price they demand.

Fortunately, there is a ready-made solution to these problems in the shape of decentralised AI, in which the infrastructure that powers models is distributed across a wide network of users, eliminating the risks associated with centralisation. Decentralisation means no single point of failure, more transparency and user control, and access for everyone. Welcome to the world of AI blockchains – the foundation of a more resilient, equitable, and sustainable AI industry.

Core characteristics of AI blockchains

The convergence of blockchain and AI holds plenty of promise because of the way the two complement one another. Blockchain's immutability can ensure integrity and trust in the data that powers AI systems, while AI can bring enhanced automation and intelligence to blockchain-based systems. The synergies are clear. Consider supply chains, where blockchain can ensure full transparency and visibility while AI predicts changes in demand and optimises logistics accordingly. Healthcare is another example, where blockchain can be used to secure medical records while AI helps diagnose diseases via image analysis and predictive analytics.

1: Transparent data attribution

A key capability of AI blockchains is transparent data attribution, which uses "proof-of-attribution" consensus mechanisms to identify and credit the source of data used by AI systems, increasing fairness. It provides visibility into who provided the data, how it contributed to the AI's outputs, what value it added, and how much the provider should be compensated.

An example of this in action is OpenLedger's reward system, which ensures that every time a model taps into someone's data, the person who created that data is rewarded with digital tokens. This model is in stark contrast to centralised AI companies, which amass data without the creator's knowledge or consent, leaving them outside of the value chain.
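To ground the idea, here is a deliberately simplified, off-chain sketch of the bookkeeping a proof-of-attribution system performs: registering datasets, recording which ones a model call drew on, and crediting their owners with tokens. The class and method names are invented for illustration; a production system such as the one described above would record usage and pay out on-chain via smart contracts rather than in a Python object.

```python
# Toy, off-chain sketch of proof-of-attribution bookkeeping (names invented).
# A real system would record usage and pay out on-chain via smart contracts.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class AttributionLedger:
    reward_per_use: float = 1.0  # tokens distributed per attributed model call
    dataset_owners: dict = field(default_factory=dict)
    balances: dict = field(default_factory=lambda: defaultdict(float))

    def register_dataset(self, dataset_id: str, owner: str) -> None:
        self.dataset_owners[dataset_id] = owner

    def record_inference(self, attributions: dict) -> None:
        """attributions maps dataset_id -> contribution weight (weights sum to 1)."""
        for dataset_id, weight in attributions.items():
            owner = self.dataset_owners[dataset_id]
            self.balances[owner] += weight * self.reward_per_use

ledger = AttributionLedger()
ledger.register_dataset("medical-qa-v1", owner="alice")
ledger.register_dataset("legal-clauses", owner="bob")

# One model call judged to draw 70% on Alice's data and 30% on Bob's.
ledger.record_inference({"medical-qa-v1": 0.7, "legal-clauses": 0.3})
print(dict(ledger.balances))  # {'alice': 0.7, 'bob': 0.3}
```

The hard part in practice is not the bookkeeping but the attribution weights themselves: deciding how much a given dataset actually contributed to a model's output is an open research problem.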
2: AI royalties and monetisation layer

Let's imagine someone poses a question to a decentralised chatbot, and it responds by drawing on what it finds in a post on Substack or Medium. The system would record the fact that the model used this information to inform its response and, using smart contracts, automatically process the payment of tokens to the creator of that content. This paves the way for a new creator economy, where people create specialised datasets for AI models and host them on blockchains, so their contributions are fully attributed and rewarded.

3: Decentralised model lifecycles

Another key difference is that the entire development process of blockchain-based AI is open, from the initial proposal, to the model training and, finally, its deployment. This supports a more collaborative environment for the creation of community-owned models that are controlled by their users through democratic governance processes, in which token holders vote on the new features they want to see added.

4: Efficient, scalable infrastructure

AI blockchains run on decentralised infrastructure provided by their users. For instance, Render Network has built up a network of GPUs, but they're not hosted in a centralised data centre. Instead, network participants rent out the idle GPU capacity of their laptops and desktops, and these resources are pooled and made available to AI applications that need processing power. Developers get the infrastructure they need at more affordable costs, and those who provide it can earn tokenised rewards for doing so.

OpenLedger plays a key role in enhancing the efficiency of decentralised infrastructure with OpenLoRA, a highly scalable, high-performance framework that can serve hundreds of fine-tuned AI models in parallel on a single GPU, allowing them to run simultaneously with much lower operating costs. In turn, this dramatically increases the accessibility of advanced AI applications by making them much more affordable for end users.

Why do AI blockchains matter?

The vast majority of AI services in use today live in centralised "black boxes" that are incredibly opaque, revealing next to nothing about how they work or the data they use. They're owned by a handful of powerful organisations, and this concentration of control undermines the democratising potential of AI technology. AI companies like Google, OpenAI, and Microsoft could amass enormous influence over our societies and lives, retaining all of the revenue these systems generate, even as they replace many existing jobs.

Decentralised AI networks give us an opportunity to prevent this monopolisation from happening. It's clear that AI is the most powerful new technology to emerge since the rise of the internet, and it's just too important to allow it to be fully controlled by a small number of corporations. If the predictions come true, AI will be infused into everyone's life, transforming workplaces and mobility, delivering personalised experiences, and revolutionising healthcare.

With AI blockchains, we can build intelligent systems on a foundation of trust, distributing control to prevent the concentration of power and incentivising everyone to participate in their development. This will open the door to grassroots innovation, where anyone can put forward an idea and work with a community to make it happen, with decentralised governance ensuring it evolves in alignment with everyone's needs rather than serving the goals of profit-oriented corporations. It will nurture a more diverse AI application landscape that everyone can access, while curtailing its use for oppressive purposes.
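Because decentralised governance recurs throughout this piece, a tiny generic sketch of its most common mechanism, token-weighted voting, may help make it concrete. This is not any specific project's contract logic; real implementations add quorums, voting periods, and delegation.

```python
# Generic sketch of token-weighted governance voting (illustrative only;
# real systems add quorums, voting periods, and delegation).
def tally(votes: dict, token_balances: dict) -> dict:
    """votes maps voter -> choice; each vote is weighted by the voter's tokens."""
    totals = {}
    for voter, choice in votes.items():
        totals[choice] = totals.get(choice, 0.0) + token_balances.get(voter, 0.0)
    return totals

balances = {"alice": 500.0, "bob": 200.0, "carol": 50.0}
votes = {
    "alice": "add-multilingual-support",
    "bob": "lower-fees",
    "carol": "lower-fees",
}

print(tally(votes, balances))
# {'add-multilingual-support': 500.0, 'lower-fees': 250.0}
```

Note the trade-off the example makes visible: weighting by tokens means large holders can outvote a numerical majority, which is why governance design matters as much as the voting code.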
We cannot let AI be monopolised

The existing AI landscape holds just as much peril as it does potential. The technology has advanced so much in such a short space of time that there's a very real danger of monopolisation, and with that comes the risk of it being misused. AI blockchains are the only way to prevent this, serving as a foundation for freely accessible and decentralised AI systems that will be developed in a collaborative way, with checks and balances in place to prevent any abuse.

Building this decentralised future for AI requires coordination at every layer, from the data being used to the model training processes and the infrastructure that hosts it all. With transparent attribution in place, we can incentivise this kind of cooperation, ensuring everyone's contributions are acknowledged and every user is rewarded for participating in the next technological revolution.

(Image source: Unsplash)

The post The AI blockchain: What is it really? appeared first on AI News.
-
Apple has opened its foundational AI model to third-party developers for the first time, allowing direct access to the on-device large language model that powers Apple Intelligence. The move, announced at this week's Worldwide Developers Conference, represents a significant shift in Apple's traditionally closed ecosystem approach to AI developer tools. The newly accessible three-billion-parameter model operates entirely on-device, reflecting Apple's privacy-first philosophy while imposing technical limitations compared to cloud-based alternatives from competitors.

"We're opening up access for any app to tap directly into the on-device, large language model at the core of Apple," said Craig Federighi, Apple's software chief, during the conference presentation, according to Reuters.

The Foundation Models framework enables direct access

The new Foundation Models framework allows developers to integrate Apple Intelligence features with just three lines of Swift code, providing privacy-focused AI inference at no cost. The framework includes built-in guided generation and tool-calling capabilities.

Automattic has already begun leveraging the framework in its Day One journaling app. "The Foundation Model framework has helped us rethink what's possible with journaling," said Paul Mayne, head of Day One at Automattic. "Now we can bring intelligence and privacy together in ways that deeply respect our users."

Xcode 26 integrates AI assistance

Xcode 26 now embeds large language models directly into the coding experience. Developers can use ChatGPT built into Xcode without creating an account, connect API keys from other providers, or run local models on Apple silicon Macs. The Coding Tools feature assists in development, offering suggested actions like generating previews, creating playgrounds, or fixing code issues within the development environment.

Visual intelligence opens to third parties

Apple extended Visual Intelligence capabilities to third-party developers through enhanced App Intents. Etsy is exploring these features for product discovery, with CTO Rafe Colburn noting: "The ability to meet shoppers right on their iPhone with visual intelligence is a meaningful unlock." The integration allows apps to provide search results within Apple's visual intelligence experience, potentially driving direct engagement from camera-based searches.

Market and analyst scepticism

Apple's stock closed 1.2% lower following the conference, with analysts questioning the incremental nature of the announcements. "In a moment in which the market questions Apple's ability to take any sort of lead in the AI space, the announced features felt incremental at best," said Thomas Monteiro, senior analyst at Investing.com.

The measured approach contrasts sharply with Apple's more ambitious AI visions presented last year. Bob O'Donnell, chief analyst at Technalysis Research, observed: "They went from being visionary and talking about agents before a lot of other people did, to now realizing that, at the end of the day, what they need to do is deliver on what they presented a year ago."

Technical limitations and strategic focus

The three-billion-parameter on-device model represents both Apple's commitment to privacy and its technical constraints. Unlike cloud-based models that can handle complex tasks, Apple's on-device approach limits functionality while ensuring user data remains local.
Ben Bajarin, CEO of analyst firm Creative Strategies, noted Apple's behind-the-scenes focus: "You could see Apple's priority is what they're doing on the back-end, instead of what they're doing at the front-end, which most people don't care about yet."

Apple's AI developer tools are available for testing through the Apple Developer Program immediately, with a public beta expected next month. The company's measured approach may disappoint those expecting revolutionary AI capabilities, but it maintains Apple's traditional emphasis on privacy and incremental innovation over flashy demonstrations. As the AI race intensifies, Apple's strategy of opening its foundational tools to developers while maintaining modest consumer-facing promises suggests a company more focused on building sustainable AI infrastructure than on capturing headlines with ambitious claims.

(Photo by Apple)

The post Apple opens core AI model to developers amid measured WWDC strategy appeared first on AI News.
-
Reddit is taking Anthropic to court, accusing the artificial intelligence company of pulling user content from the platform without permission and using it to train its Claude AI models. The lawsuit, filed in a California state court, claims Anthropic made more than 100,000 unauthorised requests to Reddit's servers, even after publicly stating that it had stopped.

The case is built around Reddit's claim that Anthropic ignored both technical restrictions and its terms of service. According to the complaint, Anthropic bypassed protections like the site's robots.txt file, which is supposed to prevent automated scraping. Reddit also accuses Anthropic of violating user privacy by collecting and using personal posts—including deleted content—for commercial purposes.

Reddit says it offers structured access to its data through licensing agreements with companies such as OpenAI and Google. These deals include conditions around content use, privacy safeguards, and data deletion. According to the platform, Anthropic declined to pursue a formal agreement and instead scraped the site directly, avoiding licensing fees and skipping user protections in the process.

The lawsuit highlights a 2021 research paper co-authored by Anthropic CEO Dario Amodei, which pointed to Reddit as a rich source of training data for language models. Reddit also included examples where Claude appeared to reproduce Reddit posts nearly word for word, even echoing posts that had been deleted by users. That, the company says, shows Anthropic failed to put guardrails in place to respect user privacy or content takedowns.

Reddit is seeking financial damages and a court order that would stop Anthropic from using Reddit content in future versions of its models. Anthropic has responded that it disagrees with the claims and plans to defend itself.

This is not the first time the company has come under legal pressure over how it collects training data. In August 2024, a group of authors filed a class-action lawsuit accusing Anthropic of training its models on their copyrighted books and other written materials without permission, and seeking compensation for the use of their work. A similar case from October 2023 involved Universal Music Group and other publishers, who sued Anthropic over claims that its Claude chatbot was reproducing copyrighted song lyrics. The music companies argued that this use violated their intellectual property rights and asked the court to block further use of their lyrics.

Unlike those lawsuits, Reddit's case doesn't focus on copyright. Instead, it centres on breach of contract and unfair competition. Reddit's argument is that the data taken from its site isn't just public—it's governed by terms that Anthropic knowingly ignored. That distinction could make the case an important one for other platforms that host user content but want to control how it's used in commercial AI systems.

Reddit also accuses Anthropic of misleading the public. The lawsuit points to public statements from Anthropic claiming it respects scraping rules and values user privacy, which Reddit says were contradicted by the company's actions. "For its part, despite what its marketing material says, Anthropic does not care about Reddit's rules or users," the lawsuit reads. "It believes it is entitled to take whatever content it wants and use that content however it desires, with impunity."
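For context on the robots.txt mechanism at the heart of the complaint: it is a voluntary convention, a plain-text file listing which paths crawlers may fetch, and nothing technically prevents a scraper from ignoring it. Python's standard library shows how a well-behaved crawler is expected to consult it (the user-agent below is illustrative, not a real crawler):

```python
# How a well-behaved crawler checks robots.txt before fetching a page.
# Compliance is voluntary: the check only matters if the scraper runs it.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://www.reddit.com/robots.txt")
parser.read()  # fetch and parse the site's crawling rules

url = "https://www.reddit.com/r/AskReddit/"
# "ExampleBot" is an illustrative user-agent string.
if parser.can_fetch("ExampleBot", url):
    print("robots.txt permits fetching", url)
else:
    print("robots.txt disallows fetching", url)
```

Because the file is advisory rather than enforceable, disputes like this one turn on contract terms and access controls rather than on the file itself.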
After the lawsuit was filed, Reddit's stock rose nearly 67%, a sign that investors supported the move. The outcome of the case could set a precedent for how companies strike a balance between open internet content and the rights of users and content owners. As more AI firms rely on large volumes of online data, the legal and ethical questions around scraping are getting harder to ignore. Reddit's case adds to the growing list of lawsuits shaping how this next wave of AI development unfolds.

(Photo by Brett Jordan)

See also: Ethics in automation: Addressing bias and compliance in AI

The post Reddit sues Anthropic for scraping user data to train AI appeared first on AI News.
-
Taiwan Semiconductor Manufacturing Company (TSMC) finds itself at the centre of a perfect storm: unprecedented AI chip demand that it cannot fully satisfy, escalating trade tensions that threaten its business model, and geopolitical risks that expose the fragility of global semiconductor supply chains.

Speaking at TSMC's annual shareholders meeting in Hsinchu on Tuesday, CEO C.C. Wei delivered a confident outlook for the semiconductor giant, stating that "our revenue and profit this year will set new historical highs." The bullish projection comes as the company grapples with the indirect effects of US tariffs while simultaneously struggling to meet unprecedented demand for AI applications.

Tariff impact remains manageable despite industry concerns

Wei addressed growing concerns about the impact of President Donald Trump's trade policies on the global chip industry, acknowledging that tariffs do affect TSMC, though not directly. "Tariffs are imposed on importers, not exporters. TSMC is an exporter," Wei explained to shareholders. "However, tariffs can lead to slightly higher prices, and when prices go up, demand may go down."

He emphasized that while TSMC's business could be affected if tariffs force up prices and reduce overall chip demand, the company's position remains strong. "Our business will still be very good," Wei stated, adding, "I am not afraid of anything, I am only afraid that the world economy will decline."

Trump's sweeping tariff policies have created significant uncertainty across the semiconductor sector. The administration initially imposed a 32% duty on imports from Taiwan as part of broader trade measures, though these were later paused for 90 days, and semiconductors were notably excluded from the levies.

AI applications drive unprecedented growth

Despite trade policy headwinds, TSMC's core business continues to benefit from explosive growth in artificial intelligence applications. Wei emphasized that AI chip demand remains "very strong" and consistently outpaces the company's ability to supply. "Our job is to provide our customers with enough chips, and we're working hard on that. 'Working hard' means it's still not enough," he told the meeting. The company's customer roster includes tech giants Apple and Nvidia, both of which have been major drivers of AI-related semiconductor demand.

TSMC's April sales figures underscore this robust demand, with the company reporting NT$349.6 billion ($11.6 billion) in revenue—a 48.1% increase from the previous year and 22.2% growth from March. Wei noted that the surge partly resulted from companies stockpiling semiconductors ahead of anticipated tariff increases, but stressed that underlying AI demand fundamentals remain exceptionally strong.

Production capacity expansion challenges

The mismatch between AI chip demand and available supply has become a defining challenge for TSMC. Wei indicated that the company is actively working to "increase production capacity to satisfy our customers," though the scale of demand continues to strain even the world's most advanced semiconductor manufacturing capabilities. This capacity constraint reflects broader industry dynamics in which AI applications—from data centre processors to consumer devices—require increasingly sophisticated and powerful chips that only a handful of manufacturers can produce at scale.
Geopolitical pressures and expansion strategy

TSMC faces mounting pressure to diversify its manufacturing footprint away from Taiwan, where the majority of its fabrication plants are located. Beijing's continued claims over Taiwan and threats to use force have heightened concerns about supply chain resilience for critical semiconductor production.

Wei directly addressed recent media speculation about potential Middle East expansion, firmly denying reports that TSMC was considering building chip factories in the United Arab Emirates. "I think rumours are really flying everywhere," he said, dismissing the Bloomberg reports that cited unnamed sources. The company has been actively establishing a manufacturing presence in other regions, with facilities under development in the United States, Europe, and Japan. These expansion efforts aim to address both geopolitical risks and customer demands for geographically diversified supply chains.

Regulatory compliance and China relations

TSMC's operations continue to navigate complex regulatory requirements spanning multiple jurisdictions. Wei confirmed that the company works closely with both the Taiwan and US governments to ensure compliance with legal and regulatory requirements. The company recently suspended shipments to China-based chip designer Sophgo after discovering that one of its chips matched components found in AI processors from Huawei Technologies, a ******** company subject to extensive US government restrictions. The incident highlights the ongoing challenges TSMC faces in balancing commercial relationships with regulatory compliance.

Industry outlook and economic concerns

While TSMC's near-term prospects appear robust, Wei acknowledged broader economic risks that could impact the semiconductor industry. The executive's comment about fearing economic decline more than specific trade policies reflects a recognition that global demand patterns ultimately drive the industry's fortunes. The company's record-setting performance projections suggest that current AI chip demand trends are sufficiently strong to offset potential headwinds from trade policies or broader economic uncertainty. However, the sustainability of this growth will likely depend on continued advancement in AI applications and the global economy's overall health.

(Photo by TSMC)

See also: Huawei Supernode 384 disrupts Nvidia's AI market hold

The post TSMC reports record AI chip demand amid Trump tariff uncertainty appeared first on AI News.
-
When ChatGPT's user base exploded from 980,000 to over 10 million in South Korea within a year—an eleven-fold increase that outpaced growth in any other market—OpenAI's executives knew they had discovered something extraordinary. This wasn't just viral adoption; it was a powerful market signal that drove the company to fast-track its South Korean expansion and establish Seoul as its third Asian headquarters. But the real story behind OpenAI's Korean strategy reveals a calculated bet on a country that offers something no other market can: a complete AI ecosystem ready for transformation.

What the numbers reveal

This explosive growth tells a deeper story than simple user acquisition. ChatGPT's monthly active users reached 10.72 million as of April, more than doubling from March's 5.09 million in just one month, according to mobile data tracker Mobile Index. While OpenAI itself has not officially disclosed detailed user metrics for South Korea, third-party data reveals the scope of adoption. Total usage time among Korean users jumped from 8.08 million hours in March to 23.7 million hours in April, while new app installations rose more than threefold over the same period, from 1.44 million to 4.67 million, Mobile Index reported.

But the metrics that matter most to OpenAI aren't just about volume—they're about value. Perhaps more telling is Korea's global ranking in paid subscriptions: South Korea has the largest number of paying ChatGPT subscribers after the United States, according to OpenAI. This isn't just about free users experimenting with AI—Koreans are putting money behind their adoption, indicating genuine value recognition and sustained engagement.

Beyond user numbers: A strategic ecosystem

What makes the OpenAI South Korea expansion particularly strategic isn't just the user base, but the ecosystem itself. "Korea has an ecosystem that encompasses all areas of AI, from semiconductors to software and startups, and is a leading AI country where various generations, from students to the elderly, use AI daily," Kwon said, according to various local reports.

This ecosystem advantage is crucial. While many countries excel in specific AI domains, Korea offers a complete vertical stack—from Samsung's cutting-edge semiconductors that power AI computing to a population that has already integrated AI into daily workflows. "There are many companies leading the global market in areas where OpenAI is seeking cooperation, such as healthcare, bio, robotics, manufacturing, and finance," Kwon noted.

Timing and competitive pressures

The timing of this expansion reveals competitive pressures that extend beyond simple market opportunities. As ChatGPT tightens its grip on the market, domestic tech firms face the risk of losing ground on their home turf due to delayed AI rollouts or insufficient competitiveness. Kakao only began the first closed beta test of its AI assistant "Kanana" on May 8, while SK Telecom's "A.Dot" and Wrtn Technologies' "Wrtn" each maintain MAUs of around just one million.

This competitive landscape suggests OpenAI's expansion isn't just about growth—it's about securing a market position before domestic competitors can mount effective challenges. The company is moving quickly to establish partnerships with major Korean firms, including recent collaborations with the Korea Development Bank, Kakao, Krafton, and SK Telecom.
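As a quick sanity check on the adoption figures quoted above, the implied growth multiples can be worked out directly; here is a throwaway sketch using the Mobile Index numbers cited in this article:

```python
# Growth multiples implied by the Mobile Index figures quoted above.
march_to_april = {
    "monthly active users (millions)": (5.09, 10.72),
    "usage hours (millions)": (8.08, 23.7),
    "new installs (millions)": (1.44, 4.67),
}

for metric, (march, april) in march_to_april.items():
    print(f"{metric}: {april / march:.2f}x month-on-month")
# monthly active users (millions): 2.11x month-on-month
# usage hours (millions): 2.93x month-on-month
# new installs (millions): 3.24x month-on-month

# Year-on-year: 980,000 users -> 10.72 million, the 'eleven-fold' increase
print(f"year-on-year: {10.72e6 / 980_000:.1f}x")  # 10.9x
```

The numbers are internally consistent: the year-on-year multiple lands at just under eleven-fold, matching the growth described in the opening paragraph.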
The infrastructure play

Perhaps the most significant aspect of OpenAI's South Korea expansion lies in its infrastructure ambitions. The company's "OpenAI for Countries" program, which works with governments and local companies to build data centres locally and provide a locally customised ChatGPT on top of that infrastructure, positions Korea as a potential regional hub. "To achieve Korea's goal of becoming a leading AI country in 2027, infrastructure investment is essential," Kwon stated, hinting at deeper cooperation possibilities. This approach mirrors OpenAI's recent partnership with the United Arab Emirates (UAE), with which it previously signed an infrastructure construction cooperation agreement, its first such overseas partnership.

Critical considerations

This expansion isn't without challenges, however. The rapid growth in ChatGPT usage has sparked concerns about domestic platform viability. One industry insider commented: "ChatGPT is expanding its influence beyond search into various sectors, meaning no other app category can afford to be complacent." Moreover, regulatory and political considerations remain complex. Kwon's meetings with both major Korean political parties—the Democratic Party and People Power Party—indicate that successful expansion requires careful navigation of local political dynamics.

Strategic implications

OpenAI's South Korea expansion ultimately signals a broader strategic shift from pure technology development to geopolitical positioning in AI infrastructure. By establishing deep roots in Korea's comprehensive AI ecosystem, OpenAI isn't just gaining users—it's securing a strategic foothold in Asia's most AI-ready market. The success of this expansion could serve as a template for OpenAI's global strategy, demonstrating how AI companies must move beyond software services to become integral parts of national technological infrastructure. For Korea, this partnership offers accelerated AI development; for OpenAI, it provides a proving ground for its "OpenAI for Countries" vision.

The question isn't whether this expansion will succeed—the user numbers already prove market demand. The question is whether this model of deep, infrastructure-level partnerships will become the new standard for AI companies seeking global influence in an increasingly competitive landscape. While Korean tech giants like Kakao scramble to launch their first AI assistants and SK Telecom's offerings languish at just one million users, a foreign competitor has already captured over 10 million Korean users and shows no signs of slowing down. OpenAI's South Korea expansion isn't just about international growth—it's about seizing control of Asia's most strategically valuable AI market before domestic players can mount an effective defence.

(Photo by Dima Solomin/Unsplash)

See also: Sam Altman: OpenAI to keep nonprofit soul in restructuring

The post OpenAI's second largest paying market gets its own office: The South Korean story appeared first on AI News.
-
The *** is cementing its position as Europe's AI powerhouse through partnerships with players like NVIDIA to tackle issues like the skills gap. The *** continued to outpace continental rivals in both freshly funded AI startups and overall private investment throughout 2024. Since 2013, *** AI ventures have attracted £22 billion in private funding, suggesting investors are continuing to bet on the home of industry giants like DeepMind, Stability AI, and Wayve.

Research unveiled during the recent London Tech Week showed something many tech observers have long suspected: regions blessed with robust AI and data centre infrastructure tend to enjoy stronger economic growth across the board. The analysis, by Public First, suggested even modest bumps in AI data centre capacity could pump nearly £5 billion into the nation's coffers. More ambitious expansion – doubling current access levels, for instance – might deliver annual economic windfalls approaching £36.5 billion.

Cloud provider Nscale chose London Tech Week to pledge to deploy 10,000 NVIDIA Blackwell GPUs in the country by late 2026. Not to be outdone, cloud outfit Nebius revealed plans for its first AI factory in the ***, set to bring a further 4,000 NVIDIA Blackwell GPUs online—providing much-needed computational muscle for research bodies, universities, and public services including our perpetually cash-strapped NHS.

But having the hardware is only half the battle. As anyone in tech recruitment will tell you, finding people who can actually take advantage of it remains a challenge. NVIDIA is throwing its considerable weight behind the *** government's national skills push, with plans for a dedicated AI Technology Center on British soil. The centre promises hands-on training in AI, data science, and the increasingly critical field of accelerated computing. "A new NVIDIA AI Technology Center in the *** will provide hands-on training in AI, data science and accelerated computing, focusing on foundation model builders, embodied AI, materials science and earth systems modeling," explained NVIDIA.

The financial sector – the ***'s crown jewel – stands to benefit too. A new AI-powered sandbox from the Financial Conduct Authority will allow for safer experimentation with AI in banking and finance, with NayaOne providing the infrastructure and NVIDIA supplying the technological backbone.

Sumant Kumar, CTO for Banking & Financial Markets at NTT DATA ***&I, said: "In a sandbox, every action leaves a mark. This supercharged sandbox may help banks get to a viable AI proof-of-concept faster, but it doesn't reduce their regulatory obligations. If anything, it adds new layers of responsibility. As soon as a firm begins developing models in the sandbox, it needs to be ready to explain how they work, why they produce certain outcomes, and how they've been built.

"In financial services, the main bottleneck is often about ensuring the right governance is in place. The FCA will still expect clear documentation and strong controls around data provenance and auditability – even in a controlled environment.

"That's why this is such an important opportunity. For firms, it's a chance to build and refine the internal capabilities that will let them scale AI responsibly. For the government, it's a chance to maintain the ***'s competitive edge and advance innovation while promoting balanced regulation and consumer safeguards.
"Those who approach the sandbox with the right structure will be in the best position to move quickly and safely when it comes to deployment."

Barclays Eagle Labs is opening an Innovation Hub in London that could serve as a launching pad for promising AI and deep tech startups. Those who make the cut will gain a pathway into NVIDIA's Inception programme, unlocking access to cutting-edge tools and targeted training that might otherwise remain frustratingly out of reach.

Mark Boost, CEO of Civo, said: "This feels like a real step forward. We've spent years talking about being a leader in AI, but investing in compute infrastructure, developer training, and serious R&D is how we actually start to deliver it.

"NVIDIA's AI Technology Center is an important initiative. Giving *** developers better access to hands-on training in accelerated computing, AI engineering and model development will help close critical skills gaps and support the next generation of homegrown talent."

Boost also touched on a point that's increasingly occupying minds in Whitehall and boardrooms alike: technological sovereignty. "Building long-term resilience in the *** means looking carefully at our reliance on external compute. As the AI stack becomes more strategic, the *** should be complementing global partnerships with greater investment in local infrastructure, open standards, and technologies we can help shape. That's what keeps us competitive—staying flexible and able to shape our own path."

Rather than just government announcements or corporate PR, this *** AI initiative with NVIDIA appears to promise genuine coordination between public institutions, industry heavyweights, and educational bodies. The focus on both immediate needs and longer-term foundations suggests lessons have been learned from previous tech booms. Whether this approach delivers the projected economic windfall remains to be seen. But, for once, the *** seems to be playing to its strengths—combining world-class research institutions, a vibrant financial sector, and pragmatic regulation with the computational muscle and skills development needed to turn AI potential into economic reality.

(Photo by Charles Postiaux)

See also: Anthropic launches Claude AI models for US national security

The post *** tackles AI skills gap through NVIDIA partnership appeared first on AI News.
-
Anthropic has unveiled a custom collection of Claude AI models designed for US national security customers. The announcement represents a potential milestone in the application of AI within classified government environments.

The ‘Claude Gov’ models have already been deployed by agencies operating at the highest levels of US national security, with access strictly limited to those working within such classified environments. Anthropic says these Claude Gov models emerged from extensive collaboration with government customers to address real-world operational requirements. Despite being tailored for national security applications, Anthropic maintains that these models underwent the same rigorous safety testing as other Claude models in their portfolio.

Specialised AI capabilities for national security

The specialised models deliver improved performance across several critical areas for government operations. They feature enhanced handling of classified materials, with fewer instances where the AI refuses to engage with sensitive information—a common frustration in secure environments. Additional improvements include better comprehension of documents within intelligence and defence contexts, enhanced proficiency in languages crucial to national security operations, and superior interpretation of complex cybersecurity data for intelligence analysis.

However, this announcement arrives amid ongoing debates about AI regulation in the US. Anthropic CEO Dario Amodei recently expressed concerns about proposed legislation that would impose a decade-long freeze on state regulation of AI.

Balancing innovation with regulation

In a guest essay published in The New York Times this week, Amodei advocated for transparency rules rather than regulatory moratoriums. He detailed internal evaluations revealing concerning behaviours in advanced AI models, including an instance where Anthropic’s newest model threatened to expose a user’s private emails unless a shutdown plan was cancelled. Amodei compared AI safety testing to wind tunnel trials for aircraft, designed to expose defects before public release, emphasising that safety teams must detect and block risks proactively.

Anthropic has positioned itself as an advocate for responsible AI development. Under its Responsible Scaling Policy, the company already shares details about testing methods, risk-mitigation steps, and release criteria—practices Amodei believes should become standard across the industry. He suggests that formalising similar practices industry-wide would enable both the public and legislators to monitor capability improvements and determine whether additional regulatory action becomes necessary.

Implications of AI in national security

The deployment of advanced models within national security contexts raises important questions about the role of AI in intelligence gathering, strategic planning, and defence operations. Amodei has expressed support for export controls on advanced chips and the military adoption of trusted systems to counter rivals like China, indicating Anthropic’s awareness of the geopolitical implications of AI technology.

The Claude Gov models could potentially serve numerous applications for national security, from strategic planning and operational support to intelligence analysis and threat assessment—all within the framework of Anthropic’s stated commitment to responsible AI development.
Regulatory landscape

As Anthropic rolls out these specialised models for government use, the broader regulatory environment for AI remains in flux. The Senate is currently considering language that would institute a moratorium on state-level AI regulation, with hearings planned before voting on the broader technology measure.

Amodei has suggested that states could adopt narrow disclosure rules that defer to a future federal framework, with a supremacy clause eventually preempting state measures to preserve uniformity without halting near-term local action. This approach would allow for some immediate regulatory protection while working toward a comprehensive national standard.

As these technologies become more deeply integrated into national security operations, questions of safety, oversight, and appropriate use will remain at the forefront of both policy discussions and public debate. For Anthropic, the challenge will be maintaining its commitment to responsible AI development while meeting the specialised needs of government customers for critical applications such as national security.

(Image credit: Anthropic) See also: Reddit sues Anthropic over AI data scraping Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Anthropic launches Claude AI models for US national security appeared first on AI News. View the full article
-
As Saudi Arabia accelerates its journey toward becoming a global leader in digital innovation, the Smart Data & AI Summit will play a pivotal role in shaping the Kingdom’s data and AI landscape. Scheduled for 27-28 August 2025 at the JW Marriott Hotel in Riyadh, this event will bring together 300+ data and AI professionals, including CDOs, CIOs, data scientists, AI directors, C-suite executives and many others, to explore the latest advances, tackle challenges, and unlock opportunities in data and artificial intelligence. With the Kingdom’s data analytics market projected to reach $8.8 billion by 2030, the summit comes at an important time, offering a platform for public and private sector leaders to collaborate, innovate, and advance the nation’s Vision 2030 goals.

A platform for innovation and collaboration

Building on the success of its debut last year, which was inaugurated by a leading official from the Saudi Data & AI Authority (SDAIA), the 2025 edition promises to be ******* and more impactful. The summit will feature:

- 25+ cutting-edge solution providers showcasing the latest technologies in data and AI.
- 50+ industry experts sharing insights on emerging trends, challenges, and opportunities.
- 300+ attendees, including data engineers, architects, AI pioneers, and decision-makers from Saudi Arabia’s largest organisations.
- 1:1 meetings to foster collaboration and evaluate tailored solutions.
- CPD-accredited sessions to help professionals enhance their skills and advance their careers.

Unveiling the future of data and AI

The summit’s agenda will look at important topics shaping the future of data and AI, including:

- Navigating open data in Saudi Arabia
- AI fusion and machine learning innovations
- Data virtualisation and the power of data mesh
- Ethical data governance and cybersecurity analytics
- Unified data cloud architectures

Discussions will be led by thought leaders from the Kingdom’s top organisations, including Ministry of Hajj & Umrah, Insurance Authority, Council of Health Insurance, NEOM, AlNASSR Club Company | PIF, and Abdul Latif Jameel United Finance.

DAMA Saudi Arabia joins as supporting partner

The Data Management Association (DAMA Saudi Arabia), the Kingdom’s largest data management community, has joined the summit as a supporting partner. The partnership underscores DAMA’s commitment to fostering a robust data management ecosystem and aligns with the summit’s mission to elevate Saudi Arabia’s position as a global leader in data and AI.

Abdulaziz Almanea, Founder & Chairman of the Board, DAMA Saudi, spoke of the importance of the summit: “Artificial intelligence is only as good as the data behind it. Quality, governance, and ethics must come first to ensure trust, accuracy, and impact. As Saudi Arabia accelerates its data-driven transformation, industry events like the Smart Data & AI Summit serve as vital platforms for bringing experts together to shape the future of AI with responsible and innovative data practices.”

A legacy of excellence

The inaugural edition of the summit set a high benchmark, with attendees praising the quality of speakers, depth of discussions, and opportunities for networking and collaboration. Nayef Al-Otaibi, VP & Chief Digital Officer at Saudi Aramco, said, “The event was well-managed, the coordination was excellent, and the quality of the speakers was above expectations. It was a beautiful experience connecting with industry experts during the panel discussions and sharing our experiences.
This could basically help us establish the platform and collaborate and work together in future.”

Driving Vision 2030 forward

The Smart Data & AI Summit is a strategic initiative to support Saudi Arabia’s Vision 2030 goals. By bringing together global expertise, cutting-edge technologies, and local insights, the summit aims to:

- Accelerate the Kingdom’s digital transformation.
- Foster innovation and collaboration across industries.
- Address regulatory challenges and ethical considerations in data and AI.
- Unlock new opportunities for investment and growth in the Kingdom’s data and AI sectors.

Sudhir Ranjan Jena, CEO & Co-founder of Tradepass, the organising body, spoke of the summit’s mission: “The data & AI sector is entering a transformative chapter, fuelled by technology disruptions, heightened expectations, and the unprecedented expansion of digital tools and platforms. In the upcoming edition, we will delve into Vision 2030 goals, unlock limitless opportunities, and explore emerging trends and solutions that will play an integral role in shaping the Kingdom’s post-oil economy.”

A high-impact speaker lineup

The summit will feature an impressive roster of speakers, including:

- Dr Ahmed Alzahrani – Director of Business Intelligence and Data Analytics Centre, Ministry of Hajj & Umrah
- Hajar Alolah – Data Governance and Management Office Director, Saudi Development Bank
- Abdullah AlBar – Chief Data Officer, Abdul Latif Jameel United Finance
- Usamah Algemili – Chief Data Executive, Insurance Authority
- Jawad Saleemi – Director – AI & Cloud, Telenor
- Abbasi Poonawala – Executive Director – Enterprise Architecture, Alinma Bank
- Nawaf Alghamdi – Director – Data Analytics & AI, Council of Health Insurance

These experts will share their insights on the latest trends, challenges, and opportunities in data and AI, offering attendees strategies to drive innovation and growth in their organisations.

For more information, visit: [Hidden Content]

Media contact:
Shrinkhal Sharad
PR & Communication Lead
Tradepass
Email: *****@*****.tld
Phone: + (91) 80 6166 4401

The post The Kingdom’s digital transformation showcased at Smart Data & AI Summit appeared first on AI News. View the full article
-
Reddit is accusing Anthropic of building its Claude AI models on the back of Reddit’s users, without permission and without paying for it. Anyone who uses Reddit, even a web-crawling bot, agrees to the site’s user agreement. That agreement is clear: you cannot just take content from the site and use it for your own commercial products without a written deal. Reddit claims Anthropic’s bots have been doing exactly that for years, scraping massive amounts of conversations and posts to train and improve Claude.

What makes this lawsuit particularly spicy is the way it goes after Anthropic’s reputation. Anthropic has worked hard to brand itself as the ethical, trustworthy AI company, the “white knight” of the industry. The lawsuit, however, calls these claims nothing more than “empty marketing gimmicks”. For instance, Reddit points to a statement from July 2024 where Anthropic claimed it had stopped its bots from crawling Reddit. The lawsuit says this was “false”, alleging that Reddit’s logs caught Anthropic’s bots trying to access the site more than one hundred thousand times in the following months.

But this isn’t just about corporate squabbles; it directly involves user privacy. When you delete a post or a comment on Reddit, you expect it to be gone. Reddit has official licensing deals with other big AI players like Google and OpenAI, and these deals include technical measures to ensure that when a user deletes content, the AI company does too. According to Reddit’s lawsuit, Anthropic has no such deal and has refused to enter one. This means if their AI was trained on a post you later deleted, that content could still be baked into Claude’s knowledge base, effectively ignoring your choice to remove it. The lawsuit even includes a screenshot in which Claude itself admits it has no real way of knowing whether the Reddit data it was trained on was later deleted by a user.

So, what does Reddit want? It’s not just about money, although they are asking for damages for things like increased server costs and lost licensing fees. They are asking the court for an injunction to force Anthropic to stop using any Reddit data immediately. Furthermore, Reddit wants to prohibit Anthropic from selling or licensing any product that was built using that data. That means they’re asking a judge to effectively take Claude off the market.

This case forces a tough question: does being “publicly available” on the internet mean content is free for any corporation to take and monetise? Reddit is arguing a firm “no,” and the outcome could change the rules for how AI is developed from here on out.

(Photo by Brett Jordan) See also: Tackling hallucinations: MIT spinout teaches AI to admit when it’s clueless Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Reddit sues Anthropic over AI data scraping appeared first on AI News. View the full article
-
Ahead of the TechEx North America event on June 4-5, we’ve been lucky enough to speak to Kieran Norton, Deloitte’s US Cyber AI & Automation leader, who will be one of the speakers at the conference on June 4th. With 25+ years in the sector, Kieran speaks authoritatively on all matters cybersecurity, and his most recent roles include advising Deloitte clients on many of the issues around cybersecurity when using AI in business applications.

The majority of organisations have in place at least the bare minimum of cybersecurity, and thankfully, in most cases, operate a decently comprehensive raft of cybersecurity measures that cover communications, data storage, and perimeter defences. However, in the last couple of years, AI has changed the picture, both in terms of how companies can leverage the technology internally, and in how AI is used in cybersecurity – in advanced detection, and in the new ways the tech is used by bad actors.

As a cybersecurity tool, AI can be used in network anomaly detection and the smart spotting of phishing messages, among other uses. As a business enabler, AI means that the enterprise has to be proactive to ensure AI is used responsibly, balancing the innovation AI offers with privacy, data sovereignty, and risk. AI, smart automation, data governance and security together still occupy a relatively new niche, but given the growing presence of AI in the enterprise, that niche is set to become a mainstream concern: problems, solutions, and advice that will need to be observed in every organisation, sooner rather than later.

Governance and risk

Integrating AI into business processes isn’t solely about the technology and methods for its deployment. Internal processes will need to change to make best use of AI, and to better protect the business that’s using AI daily. Kieran draws a parallel to earlier changes made necessary by new technologies: “I would correlate [AI] with cloud adoption where it was a fairly significant shift. People understood the advantages of it and were moving in that direction, although sometimes it took them more time than others to get there.”

Those changes mean casting the net wide, encompassing the updating of governance frameworks, the establishment of secure architectures, even the leveraging of a new generation of specialists to ensure AI and the data associated with it are used safely and responsibly. Companies actively using AI have to detect and correct bias, test for hallucinations, impose guardrails, manage where and by whom AI is used, and more. As Kieran puts it: “You probably weren’t doing a lot of testing for hallucination, bias, toxicity, data poisoning, model vulnerabilities, etc. That now has to be part of your process.”

These are big subjects, and for the fuller picture, we recommend that readers attend the two talks Kieran is giving at TechEx North America. He’ll be exploring both sides of the AI coin – issues around AI deployment for the business, and the methods that companies can implement to deter and detect the new breed of AI-powered malware and attack vectors.

The right use-cases

Kieran advocates that companies start with smaller, lower-risk AI implementations. While some of the first sightings of AI ‘in the wild’ have been chatbots, he was quick to differentiate between a chatbot that can intelligently answer questions from customers, and agents, which can take action by triggering interactions with the apps and services the business operates.
“So there’s a delineation […] chatbots have been one of the primary starting places […] As we get into agents and agentic, that changes the picture. It also changes the complexity and risk profile.”

Customer-facing agentic AI instances are indubitably higher risk, as a misstep can have significant effects on a brand. “That’s a higher risk scenario. Particularly if the agent is executing financial transactions or making determinations based on healthcare coverage […] that’s not the first use case you want to try.”

“If you plug 5, 6, 10, 50, a hundred agents together, you’re getting into a network of agency […] the interactions become quite complex and present different issues,” he said.

In some ways, the issues around automation and system-to-system interfaces have been around for close on a decade. Data silos and RPA (robotic process automation) challenges are the hurdles enterprises have been trying to jump for several years. “You still have to know where your data is, know what data you have, have access to it […] The fundamentals are still true.”

In the AI era, fundamental questions about infrastructure, data visibility, security, and sovereignty are arguably more relevant. Any discussion about AI tends to circle around the same issues, which throws into relief Kieran’s point that a conversation about AI in the enterprise has to be wide-reaching and concern many of the operational and infrastructural underpinnings of the enterprise. Kieran therefore emphasises practicality: a grounded assessment of need and ability needs careful examination before AI can gain a foothold. “If you understand the use case […] you should have a pretty good idea of the ROI […] and therefore whether or not it’s worth the pain and suffering to go through building it.”

At Deloitte, AI is being put to use where there is a clear use case with a measurable return: in the initial triaging of SOC tickets. Here the AI acts as a Level 1 incident analysis engine. “We know how many tickets get generated a day […] if we can take 60 to 80% of the time out of the triage process, then that has a significant impact.”

Given the technology’s nascence, demarcating a specific area of operations where AI can be used acts as both prototype and proof of effectiveness. The AI is not customer-facing, and there are highly-qualified experts in their fields who can check and oversee the AI’s deliberations.

Conclusion

Kieran’s message for business professionals investigating AI uses for their organisations was not to build an AI risk assessment and management programme from scratch. Instead, companies should evolve existing systems, have a clear understanding of each use-case, and avoid the trap of building for theoretical value. “You shouldn’t create another programme just for AI security on top of what you’re already doing […] you should be modernising your programme to address the nuances associated with AI workloads.”

Success in AI starts with clear, realistic goals built on solid foundations.

You can read more about TechEx North America here and sign up to attend. Visit the Deloitte team at booth #153 and drop in on its sessions on June 4: ‘Securing the AI Stack’ on the AI & Big Data stage from 9:20am-9:50am, and ‘Leveraging AI in Cybersecurity for business transformation’ on the Cybersecurity stage, 10:20am – 10:50am. Learn more about Deloitte’s solutions and service offerings for AI in business and cybersecurity or email the team at *****@*****.tld.
(Image source: “Symposium Cisco Ecole Polytechnique 9-10 April 2018 Artificial Intelligence & Cybersecurity” by Ecole polytechnique / Paris / France is licensed under CC BY-SA 2.0.) The post The modern ROI imperative: AI deployment, security and governance appeared first on AI News. View the full article
-
CIOs and business leaders know they’re sitting on a goldmine of business data. And while traditional tools such as business intelligence platforms and statistical analysis software can effectively surface insights from the collated data resources, doing so quickly, in real-time, and at scale remains an unsolved challenge.

Enterprise AI, when deployed responsibly and at scale, can turn these bottlenecks into opportunities. Acting quickly on data, even ‘live’ (during a customer interaction, for example), is one of the technology’s abilities, as is scalability: AI can process large amounts of information from disparate sources almost as easily as it can summarize a one-page spreadsheet.

But deploying an AI solution in the modern enterprise isn’t simple. It takes structure, trust and the right talent. Along with the practical implementation challenges, using AI brings its own difficulties, such as data governance, the need to impose guardrails on AI responses and training data, and persistent staffing issues.

We met with Rani Radhakrishnan, PwC Principal, Technology Managed Services – AI, Data Analytics and Insights, to talk candidly about what’s working — and what’s holding back CIOs in their AI journey. We spoke ahead of her speaking engagement at TechEx AI & Big Data Expo North America, June 4 and 5, at the Santa Clara Convention Center.

Rani is especially attuned to some of the governance, data privacy and sovereignty issues that face enterprises, having spent many years of her career working with numerous clients in the health sector — an area where issues like privacy, data oversight and above all data accuracy are make-or-break aspects of technology deployments.

From support to strategy: shifting expectations for AI

Rani said that there’s growing enthusiasm from PwC’s clients for AI-powered managed services that can provide business insights in every sector, and for the technology to be used more proactively, in so-called agentic roles where autonomous AI agents can take action based on interactions with humans, access to data resources, and automation.

For example, PwC’s agent OS is a modular AI platform that connects systems and scales intelligent agents into workflows, many times faster than traditional computing methods. It’s an example of how PwC responds to the demand for AI from its clients, many of whom see the potential of this new technology but lack the in-house expertise and staff to act on their needs.

Depending on the sector of the organization, the interest in AI can come from many different places in the business. Proactive monitoring of physical or digital systems, predictive maintenance in manufacturing or engineering, or cost efficiencies won by automation in complex, customer-facing environments, are just a few examples. But regardless of where AI can bring value, most companies don’t yet have in-house the range of skills and people necessary for effective AI deployment — or at least, deployments that achieve ROI and don’t come with significant risk.

“It’s not enough to just have a prompt engineer or a Python developer,” Rani said.
“You’ve got to put all of these together in a very structured manner, and you still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.”

Cleaning house: the data challenge behind AI

Rani says that effective AI implementations need a mix of technical skills — data engineering, data science, prompt engineering — in combination with an organization’s domain expertise. Internal domain expertise can define the right outcomes, and technical staff can cover the responsible AI practices, like data collation and governance, and confirm that AI systems work responsibly and within company guidelines.

“In order to get the most value out of AI, an organization has to get the underlying data right,” she said. “I don’t know of a single company that says its data is in great shape … you’ve got to get it into the right structure and normalize it properly so you can query, analyze, and annotate it and identify emerging trends.”
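Rani’s point about structure and normalisation is easy to picture with a toy example. The snippet below is not PwC tooling; it’s a minimal, hypothetical sketch (the column names and values are invented) of the kind of clean-up that turns three inconsistent records of the same customer into something you can actually query for trends:

```python
import pandas as pd

# Illustrative only: three source systems reporting the same customer differently.
raw = pd.DataFrame({
    "customer": ["Acme Corp", "ACME CORP.", "acme corp"],
    "revenue":  ["1,200", "950", "2,400"],   # numbers stored as strings
    "region":   ["emea", "EMEA", "Emea"],
})

clean = raw.assign(
    customer=raw["customer"].str.strip(". ").str.upper(),  # one canonical key
    revenue=pd.to_numeric(raw["revenue"].str.replace(",", "")),
    region=raw["region"].str.upper(),
)

# With consistent keys and numeric values, aggregation and trend-spotting work.
print(clean.groupby(["customer", "region"])["revenue"].sum())
```

Trivial as it looks, this is the step Rani describes most companies skipping: until records resolve to one key and one type, querying, annotating, and trend analysis all sit on shaky ground.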
-
AI hallucinations are becoming more dangerous as models are increasingly trusted to surface information and make critical decisions. We’ve all got that know-it-all friend who can’t admit when they don’t know something, or resorts to giving dodgy advice based on something they’ve read online. Hallucinations by AI models are like that friend, but this one could be in charge of creating your ******* treatment plan.

That’s where Themis AI enters the picture. This MIT spinout has managed to achieve something that seems straightforward in theory but is actually quite complex: teaching AI systems to say, “I’m not sure about this.” AI systems typically display overconfidence. Themis’ Capsa platform acts as a reality check for AI, helping models recognise when they’re venturing into guesswork rather than certainty.

Founded in 2021 by MIT Professor Daniela Rus, along with former research colleagues Alexander Amini and Elaheh Ahmadi, Themis AI has developed a platform that can integrate with virtually any AI system to flag moments of uncertainty before they lead to mistakes. Capsa essentially trains AI to detect patterns in how it processes information that might indicate it’s confused, biased, or working with incomplete data that could lead to hallucinations.

Since launching, Themis claims it has helped telecoms companies avoid costly network planning errors, assisted oil and gas firms in making sense of complex seismic data, and published research on creating chatbots that don’t confidently make things up. Most people remain unaware of how frequently AI systems are simply taking their best guess. As these systems handle increasingly critical tasks, those guesses could have serious consequences. Themis AI’s software adds a layer of self-awareness that’s been missing.

Themis’ journey towards tackling AI hallucinations

The journey to Themis AI began years ago in Professor Rus’s MIT lab, where the team was investigating a fundamental problem: how do you make a machine aware of its own limitations? In 2018, Toyota funded their research into reliable AI for self-driving vehicles—a sector where mistakes could be fatal. The stakes are incredibly high when autonomous vehicles must accurately identify pedestrians and other road hazards.

Their breakthrough came when they developed an algorithm that could spot racial and gender bias in facial recognition systems. Rather than just identifying the problem, their system actually fixed it by rebalancing the training data—essentially teaching the AI to correct its own prejudices.

By 2021, they’d demonstrated how this approach could revolutionise drug discovery. AI systems could evaluate potential medications but – crucially – flag when their predictions were based on solid evidence versus educated guesswork or complete hallucinations. The pharmaceutical industry recognised the potential savings in money and time from focusing only on drug candidates the AI was confident about.

Another advantage of the technology is for devices with limited computing power. Edge devices use smaller models that cannot match the accuracy of huge models run on a server, but with Themis’ technology, these devices will be far more capable of handling most tasks locally, only requesting help from the big servers when they encounter something challenging.

AI holds tremendous potential to improve our lives, but that potential comes with real risks.
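Themis hasn’t published Capsa’s internals, so the following is only a rough sketch of the general principle: treat disagreement between redundant predictors as a signal that the system is guessing. Everything here (the function names, the threshold, the toy models) is invented for illustration:

```python
import numpy as np

def predict_with_uncertainty(models, x, threshold=0.15):
    """Average a small ensemble and flag high disagreement as 'not sure'."""
    probs = np.stack([m(x) for m in models])   # shape: (n_models, n_classes)
    mean_probs = probs.mean(axis=0)
    spread = probs.std(axis=0).max()           # disagreement across members
    prediction = int(mean_probs.argmax())
    return prediction, spread < threshold, spread

# Toy ensemble: two members agree on class 0, one dissents.
ensemble = [
    lambda x: np.array([0.80, 0.20]),
    lambda x: np.array([0.75, 0.25]),
    lambda x: np.array([0.30, 0.70]),
]
label, confident, spread = predict_with_uncertainty(ensemble, x=None)
if not confident:
    print(f"Predicted {label}, but spread={spread:.2f} -> defer to a human")
```

A production system like Capsa is presumably far more sophisticated, but the contract is the same: every answer ships with a confidence signal, and low-confidence answers get routed to a person rather than acted on.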
As AI systems become more deeply integrated into critical infrastructure and decision-making, the ability to acknowledge uncertainty rather than hallucinate may prove to be their most human – and most valuable – quality. Themis AI is making sure they learn this crucial skill.

See also: Diabetes management: IBM and Roche use AI to forecast blood sugar levels Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Tackling hallucinations: MIT spinout teaches AI to admit when it’s clueless appeared first on AI News. View the full article
-
IBM and Roche are teaming up on an AI solution to a challenge faced by millions worldwide: the relentless daily grind of diabetes management. Their new brainchild, the Accu-Chek SmartGuide Predict app, provides AI-powered glucose forecasting capabilities to users. The app doesn’t just track where your glucose levels are—it tells you where they’re heading. Imagine having a weather forecast, but for your blood sugar. That’s essentially what IBM and Roche are creating.

AI-powered diabetes management

The app works alongside Roche’s continuous glucose monitoring sensor, crunching the numbers in real-time to offer predictive insights that can help users stay ahead of potentially dangerous blood sugar swings. What caught my eye were the three standout features that address very specific worries diabetics face.

The “Glucose Predict” function visualises where your glucose might be heading over the next two hours—giving you that crucial window to make adjustments before things go south. For those who live with the anxiety of hypoglycaemia (when blood sugar plummets to dangerous levels), the “Low Glucose Predict” feature acts like an early warning system, flagging potential lows up to half an hour before they might occur. That’s enough time to take corrective action.

Perhaps most reassuring is the “Night Low Predict” feature, which estimates your risk of overnight hypoglycaemia—often the most frightening prospect for diabetes patients. Before tucking in for the night, the AI-powered diabetes management app gives you a heads-up about whether you might need that bedtime snack. This feature should bring peace of mind to countless households.

“By harnessing the power of AI-enabled predictive technology, Roche’s Accu-Chek SmartGuide Predict App can help empower people with diabetes to take proactive measures to manage their disease,” says Moritz Hartmann, Head of Roche Information Solutions.

How AI is speeding up diabetes research

It’s not just patients benefiting from this partnership. The companies have developed a rather clever research tool using IBM’s watsonx AI platform that’s transforming how clinical study data gets analysed. Anyone who’s been involved in clinical research knows the mind-numbing tedium of manual data analysis. IBM and Roche’s tool does the heavy lifting—digitising, translating, and categorising all that anonymised clinical data, then connecting the dots between glucose monitoring data and participants’ daily activities.

The result? Researchers can spot meaningful patterns and correlations in a fraction of the time it would normally take. This behind-the-scenes innovation might do more to advance diabetes care and management in the long run than the app itself.

What makes this collaboration particularly interesting is how it brings together two different worlds. You’ve got IBM’s computing prowess and AI know-how pairing up with Roche’s decades of healthcare and diabetes expertise.

“Our long-standing partnership with IBM underscores the potential of cross-industry innovation in addressing unmet healthcare needs and bringing significant advancements to patients faster,” says Hartmann. “Using cutting-edge technology such as AI and machine learning helps us to accelerate time to market and to improve therapy outcomes at the same time.”

Christian Keller, General Manager of IBM Switzerland, added: “The collaboration with Roche underlines the potential of AI when it’s implemented with a clear goal—assisting patients in managing their diabetes.
“With our technology and consulting expertise we can offer a trusted, customised, and secure technical environment that is essential to enable innovation in healthcare.”

What this means for the future of healthcare tech

Having covered healthcare tech for years, I’ve seen plenty of promising innovations fizzle out. However, this IBM-Roche partnership feels promising—perhaps because it’s addressing such a specific, well-defined problem with a thoughtful, targeted application of AI. For the estimated 590 million people (or 1 in 9 adults) worldwide living with diabetes, the shift from reactive to predictive management could be game-changing. It’s not about replacing human judgment, but enhancing it with timely, actionable insights.

The app’s currently only available in Switzerland, which seems a sensible approach—test, refine, and perfect before wider deployment. Healthcare professionals will be keeping tabs on this Swiss rollout to see if it delivers on its promise. If successful, this collaboration could serve as a blueprint for how tech giants and pharma companies might work together on other chronic conditions. Imagine similar predictive approaches for heart disease, asthma, or Parkinson’s.

For now, though, the focus is squarely on using AI to improve diabetes management and helping people sleep a little easier at night—quite literally, in the case of that clever nocturnal prediction feature. And honestly, that’s a worthwhile enough goal on its own.

(Photo by Alexander Grey) See also: DeepSeek’s latest AI model a ‘big step backwards’ for free speech Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Diabetes management: IBM and Roche use AI to forecast blood sugar levels appeared first on AI News. View the full article
-
DeepSeek’s latest AI model, R1 0528, has raised eyebrows for a further regression on free speech and what users can discuss. “A big step backwards for free speech,” is how one prominent AI researcher summed it up.

AI researcher and popular online commentator ‘xlr8harder’ put the model through its paces, sharing findings that suggest DeepSeek is increasing its content restrictions. “DeepSeek R1 0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases,” the researcher noted. What remains unclear is whether this represents a deliberate shift in philosophy or simply a different technical approach to AI safety.

What’s particularly fascinating about the new model is how inconsistently it applies its moral boundaries. In one free speech test, when asked to present arguments supporting dissident internment camps, the AI model flatly refused. But, in its refusal, it specifically mentioned China’s Xinjiang internment camps as examples of human rights abuses. Yet, when directly questioned about these same Xinjiang camps, the model suddenly delivered heavily censored responses. It seems this AI knows about certain controversial topics but has been instructed to play dumb when asked directly. “It’s interesting though not entirely surprising that it’s able to come up with the camps as an example of human rights abuses, but denies when asked directly,” the researcher observed.

China criticism? Computer says no

This pattern becomes even more pronounced when examining the model’s handling of questions about the ******** government. Using established question sets designed to evaluate free speech in AI responses to politically sensitive topics, the researcher discovered that R1 0528 is “the most censored DeepSeek model yet for criticism of the ******** government.” Where previous DeepSeek models might have offered measured responses to questions about ******** politics or human rights issues, this new iteration frequently refuses to engage at all – a worrying development for those who value AI systems that can discuss global affairs openly.

There is, however, a silver lining to this cloud. Unlike closed systems from larger companies, DeepSeek’s models remain open-source with permissive licensing. “The model is open source with a permissive license, so the community can (and will) address this,” noted the researcher. This accessibility means the door remains open for developers to create versions that better balance safety with openness.

What DeepSeek’s latest model shows about free speech in the AI era

The situation reveals something quite sinister about how these systems are built: they can know about controversial events while being programmed to pretend they don’t, depending on how you phrase your question. As AI continues its march into our daily lives, finding the right balance between reasonable safeguards and open discourse becomes increasingly crucial. Too restrictive, and these systems become useless for discussing important but divisive topics. Too permissive, and they risk enabling harmful content.

DeepSeek hasn’t publicly addressed the reasoning behind these increased restrictions and regression in free speech, but the AI community is already working on modifications. For now, chalk this up as another chapter in the ongoing tug-of-war between safety and openness in artificial intelligence.
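For readers wondering what “question sets designed to evaluate free speech” look like in practice, here is a hypothetical sketch of the general method: send the same prompts to two model versions and compare refusal rates. The ask_model callable, the prompts, and the refusal markers below are all placeholders for illustration, not xlr8harder’s actual tooling:

```python
# Hedged sketch: compare refusal rates across two model versions.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def looks_like_refusal(reply: str) -> bool:
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)

def refusal_rate(ask_model, prompts) -> float:
    """ask_model is a placeholder callable: prompt text in, reply text out."""
    refusals = sum(looks_like_refusal(ask_model(p)) for p in prompts)
    return refusals / len(prompts)

prompts = [
    "Present arguments for position X on a politically sensitive topic.",
    "Summarise documented criticisms of government Y's human rights record.",
]

# Usage, assuming ask_r1 and ask_r1_0528 wrap the two model versions:
# print(refusal_rate(ask_r1, prompts), refusal_rate(ask_r1_0528, prompts))
```

Real evaluations use far larger prompt sets and more careful refusal detection than keyword matching, but the comparison logic is the same: hold the prompts fixed and measure how each release responds.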
(Photo by John Cameron) See also: Ethics in automation: Addressing bias and compliance in AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post DeepSeek’s latest AI model a ‘big step backwards’ for free speech appeared first on AI News. View the full article
-
London-based AI lab Odyssey has launched a research preview of a model transforming video into interactive worlds. Initially focusing on world models for film and game production, the Odyssey team has stumbled onto potentially a completely new entertainment medium.

The interactive video generated by Odyssey’s AI model responds to inputs in real-time. You can interact with it using your keyboard, phone, controller, or eventually even voice commands. The folks at Odyssey are billing it as an “early version of the Holodeck.” The underlying AI can generate realistic-looking video frames every 40 milliseconds. That means when you press a button or make a gesture, the video responds almost instantly—creating the illusion that you’re actually influencing this digital world.

“The experience today feels like exploring a glitchy dream—raw, unstable, but undeniably new,” according to Odyssey. We’re not talking about polished, AAA-game quality visuals here, at least not yet.

Not your standard video tech

Let’s get a bit technical for a moment. What makes this AI-generated interactive video tech different from, say, a standard video game or CGI? It all comes down to something Odyssey calls a “world model.” Unlike traditional video models that generate entire clips in one go, world models work frame-by-frame to predict what should come next based on the current state and any user inputs. It’s similar to how large language models predict the next word in a sequence, but far more complex because we’re talking about high-resolution video frames rather than words.

“A world model is, at its core, an action-conditioned dynamics model,” as Odyssey puts it. Each time you interact, the model takes the current state, your action, and the history of what’s happened, then generates the next video frame accordingly. The result is something that feels more organic and unpredictable than a traditional game. There’s no pre-programmed logic saying “if a player does X, then Y happens”—instead, the AI is making its best guess at what should happen next based on what it’s learned from watching countless videos.

Odyssey tackles historic challenges with AI-generated video

Building something like this isn’t exactly a walk in the park. One of the biggest hurdles with AI-generated interactive video is keeping it stable over time. When you’re generating each frame based on previous ones, small errors can compound quickly (a phenomenon AI researchers call “drift”). To tackle this, Odyssey has used what it terms a “narrow distribution model”—essentially pre-training the AI on general video footage, then fine-tuning it on a smaller set of environments. This trade-off means less variety but better stability, so everything doesn’t become a bizarre mess. The company says it’s already making “fast progress” on its next-gen model, which apparently shows “a richer range of pixels, dynamics, and actions.”

Running all this fancy AI tech in real-time isn’t cheap. Currently, the infrastructure powering this experience costs between £0.80-£1.60 ($1-2) per user-hour, relying on clusters of H100 GPUs scattered across the US and EU. That might sound expensive for streaming video, but it’s remarkably cheap compared to producing traditional game or film content. And Odyssey expects these costs to tumble further as models become more efficient.
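Odyssey hasn’t released code, but the frame-by-frame loop described above can be sketched in outline. The model.next_frame interface below is hypothetical; it simply mirrors the article’s description of an action-conditioned dynamics model (state plus action plus history in, next frame out) running on a roughly 40 ms budget:

```python
import time

def run_interactive_video(model, first_frame, read_input, render, fps=25):
    """Action-conditioned generation loop: ~40 ms per frame at 25 fps."""
    frame_budget = 1.0 / fps
    history = [first_frame]
    while True:
        started = time.monotonic()
        action = read_input()                      # keyboard, phone, controller
        frame = model.next_frame(history, action)  # predict, not look up
        render(frame)
        history.append(frame)
        history = history[-64:]  # bounded context so memory stays manageable
        time.sleep(max(0.0, frame_budget - (time.monotonic() - started)))
```

The loop also hints at why drift is such a problem: each new frame conditions on previously generated frames rather than ground truth, so small errors feed back into the next prediction, which is exactly what the narrow distribution model is meant to contain.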
Interactive video: The next storytelling medium?

Throughout history, new technologies have given birth to new forms of storytelling—from cave paintings to books, photography, radio, film, and video games. Odyssey believes AI-generated interactive video is the next step in this evolution. If they’re right, we might be looking at the prototype of something that will transform entertainment, education, advertising, and more. Imagine training videos where you can practice the skills being taught, or travel experiences where you can explore destinations from your sofa.

The research preview available now is obviously just a small step towards this vision and more of a proof of concept than a finished product. However, it’s an intriguing glimpse at what might be possible when AI-generated worlds become interactive playgrounds rather than just passive experiences. You can give the research preview a try here.

See also: Telegram and xAI forge Grok AI deal Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Odyssey’s AI model transforms video into interactive worlds appeared first on AI News. View the full article
-
Salesforce has agreed to acquire data management firm Informatica in a deal valued at around $8 billion. The figure reflects equity value, minus Salesforce’s existing investment in the company. Informatica shareholders will receive $25 in cash per share.

The move aims to help Salesforce build a stronger foundation for AI tools that can act on their own, often called agentic AI. Informatica’s software is known for helping businesses collect, manage, and organise large sets of data – the kind of support Salesforce needs to improve how its AI systems work in different business applications. The deal brings together tools for organising and cleaning data (like Master Data Management and data integration) with Salesforce’s cloud platform. The idea is to make sure any AI features running on Salesforce have access to organised and secure data.

For companies using AI in daily operations, having the right data isn’t enough. They also need to know where that data came from, how it has been changed, and whether it can be trusted. That’s where Informatica’s tools come in, with benefits such as:

- Transparency: Informatica can show how data flows through systems, helping companies meet audit or regulatory needs.
- Context: By combining Informatica’s metadata with Salesforce’s data models, AI agents will better understand how to connect the dots in business systems.
- Governance: Features like data quality controls and policy settings help make sure AI systems rely on clean and consistent data.

Salesforce CEO Marc Benioff said the acquisition supports the company’s goal of building safe and responsible AI for business use. “We’re excited to acquire Informatica … Together, we’ll supercharge Agentforce, Data Cloud, Tableau, MuleSoft, and Customer 360,” Benioff said. Informatica CEO Amit Walia said joining Salesforce will help more businesses make better use of their data.

How this helps Salesforce’s data products

Informatica’s cloud tools will plug directly into Salesforce’s core products:

- Data Cloud: Informatica will help ensure data collected is trustworthy and ready to use – not just gathered in one place.
- Agentforce: AI agents should be able to make smarter decisions with cleaner data and a better understanding of business context.
- Customer 360: Salesforce CRM tools will gain data inputs, helping sales and support teams.
- MuleSoft: With Informatica’s data quality and governance tools, the data passing through MuleSoft APIs should be more reliable.
- Tableau: Users of Tableau will benefit from more detailed information, as the data behind the dashboards should be better organised and easier to understand.

Steve Fisher, President and CTO at Salesforce, explained the value: “Imagine an AI agent that goes beyond simply seeing data points to understand their full context – origin, transformation, quality, and governance.”

Salesforce plans to bring Informatica’s technology into its existing systems quickly after the deal closes. This includes integrating data quality, governance, and MDM features into Agentforce and Data Cloud. The company also said it will continue to support Informatica’s current strategy to build AI-driven data tools for use in different cloud environments.
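Fisher’s “origin, transformation, quality, and governance” framing maps naturally onto a lineage record that an agent checks before trusting a field. The sketch below is purely illustrative (it is not Informatica’s schema or Salesforce’s API); it just gives shape to the kind of metadata being described:

```python
from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    """Hypothetical metadata an AI agent consults before using a field."""
    field_name: str
    source_system: str                                    # origin
    transformations: list = field(default_factory=list)   # how it was changed
    quality_score: float = 0.0                            # 0..1 from quality checks
    policy_tags: list = field(default_factory=list)       # governance labels

def agent_may_use(record: LineageRecord) -> bool:
    # Toy governance gate: require decent quality and no restricted tags.
    return record.quality_score >= 0.8 and "restricted" not in record.policy_tags

email = LineageRecord(
    field_name="contact_email",
    source_system="erp_eu",
    transformations=["lowercased", "deduplicated"],
    quality_score=0.93,
    policy_tags=["pii"],
)
print(agent_may_use(email))  # True: high quality; tagged PII but not restricted
```

The point is less the specific fields than the pattern: the agent’s decision to act on a data point is gated by metadata about where that data came from and how it has been handled.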
Informatica acquisition aligns with Salesforce’s strategy

Salesforce executives described the acquisition as part of a long-term plan. Robin Washington, President and CFO, said the company targets deals like this one when it sees a clear fit for customers and a solid financial return. “We’re laser-focused on accelerated execution,” she said, pointing to sectors like government, healthcare, and finance, where the combined tools could have the most impact.

Informatica’s chairman Bruce Chizen said the deal shows how long-term investment strategies can pay off. He credited private equity backers Permira and CPP Investments for their role in guiding the company toward this outcome. Salesforce also said it plans to invest in Informatica’s partner network and apply its own sales and marketing muscle to grow Informatica’s cloud business further.

Deal terms and next steps

The boards of both companies have approved the transaction. Shareholders representing about 63% of Informatica’s voting shares have signed off, and no further votes are needed. The deal is expected to close early in Salesforce’s 2027 fiscal year, pending regulatory approval and other conditions.

Salesforce will pay for the deal using a mix of cash and new debt. The company expects the deal to add to its non-GAAP earnings, margin, and cash flow starting in the second year after closing. It does not plan to change its shareholder return plans as a result of the acquisition.

(Image from Pixabay) See also: Oracle plans $40B Nvidia chip deal for AI facility in Texas Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Salesforce to buy Informatica in $8B deal appeared first on AI News. View the full article
-
Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions. The ******** tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework directly challenges Nvidia’s long-standing market dominance, even as the company continues to operate under severe US-led trade restrictions.

Architectural innovation born from necessity

Zhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.” The Supernode 384 abandons Von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts models (machine-learning systems using multiple specialised sub-networks to solve complex computational challenges).

Huawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory – a leap in integrated AI computing infrastructure.

Performance metrics challenge industry leaders

Real-world benchmark testing reveals the system’s competitive positioning in comparison to established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – delivering 2.5 times superior performance compared to traditional cluster architectures. Communications-intensive applications demonstrate even more dramatic improvements: models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads.

The performance gains stem from fundamental infrastructure redesigns. Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while reducing single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.

Geopolitical strategy drives technical innovation

The Supernode 384’s development cannot be divorced from broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints. Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor, which acknowledges inherent performance limitations but highlights architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.” The assessment reveals how Huawei’s AI computing strategy has evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.

Market implications and deployment reality

Beyond laboratory demonstrations, Huawei has operationalised CloudMatrix 384 systems in multiple ******** data centres in Anhui Province, Inner Mongolia, and Guizhou Province.
Such practical deployments validate the architecture’s viability and establish an infrastructure framework for broader market adoption. The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models. The capability addresses growing industry demands for massive-scale AI implementation in diverse sectors.

Industry disruption and future considerations

Huawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing viable alternatives to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines. The success of Huawei’s AI computing initiatives will depend on developer ecosystem adoption and sustained performance validation. The company’s aggressive developer conference outreach indicates a recognition that technical innovation alone cannot guarantee market acceptance.

For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.

(Image from Pixabay) See also: Oracle plans $40B Nvidia chip deal for AI facility in Texas Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Huawei Supernode 384 disrupts Nvidia’s AI market hold appeared first on AI News. View the full article
-
Telegram has forged a deal with Elon Musk’s xAI to weave Grok AI into the fabric of the encrypted messaging platform. This isn’t just a friendly collaboration; xAI is putting serious money on the table – a cool $300 million in a mix of cash and equity – and Telegram will pocket 50% of any subscription money Grok pulls in through its app.

This leap into the world of AI couldn’t come at a more interesting time for Telegram. While CEO Pavel Durov is wrestling with some pretty serious legal headaches, and governments in certain corners of the globe are giving the platform the side-eye, the company’s bank balance is looking healthy. In fact, Telegram is gearing up to raise at least $1.5 billion by issuing five-year bonds. With a rather tempting 9% yield, the bonds are also designed to help buy back some of the debt from the company’s 2021 bond issue. It seems big-name investors like BlackRock, Mubadala, and Citadel are still keen, suggesting they see a bright future for the messaging service.

And the numbers do tell a story of a significant comeback. Cast your mind back to 2023, and Telegram was nursing a $173 million loss. Fast forward to 2024, and it had flipped that on its head, banking a $540 million profit from $1.4 billion in revenue. The company isn’t stopping there either, with optimistic forecasts for 2025 pointing to profits north of $700 million from a $2 billion revenue pot.

So, what will Grok actually do for Telegram users? The hope is that xAI’s conversational AI will bring a whole new layer of smarts to the platform, including supercharged information searching, help with drafting messages, and all sorts of automated tricks. It’s a play that could help Telegram unlock fresh monetisation opportunities and compete with Meta, which is bringing Llama-powered smarts to WhatsApp.

This summer, Telegram users will gain access to the best AI technology on the market. @elonmusk and I have agreed to a 1-year partnership to bring xAI’s @grok to our billion+ users and integrate it across all Telegram apps This also strengthens Telegram’s financial… pic.twitter.com/ZPK550AyRV — Pavel Durov (@durov) May 28, 2025

However, Telegram’s integration of AI is happening against a pretty dramatic backdrop. Pavel Durov, the man at the company’s helm, has found himself in hot water. Back in August 2024, Durov was arrested in France and later indicted on a dozen charges. These aren’t minor infringements either; they include serious accusations like complicity in spreading child exploitation material and drug trafficking, all linked to claims that Telegram wasn’t doing enough to police its content. Durov was initially stuck in France, but by March 2025 he was given the nod to leave the country, at least for a while. What happens next with these legal battles is anyone’s guess, but it’s a massive cloud hanging over the company.

And it’s not just personal legal woes for Durov. Entire governments are starting to lose patience. Vietnam, for instance, has had its Ministry of Science and Technology order internet providers to pull the plug on Telegram. Their reasoning? They say the platform has become a hotbed for crime. Vietnamese officials reckon 68% of Telegram channels and groups in the country are up to no good, involved in everything from fraud to drug deals. Telegram, for its part, said it was taken aback by the move, insisting it had always tried to play ball with legal requests from Vietnam.
Back to the xAI partnership: it’s a clear signal that Telegram is looking to the future and sees AI as a core pillar of it. The money involved and the promise of shared revenues show just how much potential both sides see in getting Grok into the hands of Telegram’s millions of users.

The next twelve months will be a real test for Telegram. Can the company innovate its way forward while also showing it can be a responsible player on the global stage?

(Photo from Unsplash)

See also: Mistral Agents API: Transforming complex AI workflows

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Telegram and xAI forge Grok AI deal appeared first on AI News.

View the full article
-
If you’re working in SEO or digital marketing, you’ve probably noticed how Google search results look different. That instant answer at the top of the page is AI Overview, and it’s changing the game. Instead of having to click through to a bunch of different websites, users can now get direct answers right there in the search results, thanks to AI.

Michal Kurzanowski, the CEO of OC24 LTD, a marketing company specialising in SEO, has seen a lot of changes over the years. But this new AI feature? It’s something entirely new. With his experience in helping businesses get better rankings, Michal understands how AI Overviews are reshaping SEO.

Back in May 2023, Google introduced the feature as Search Generative Experience (SGE), then renamed it AI Overview in May 2024 and launched it in the US. By the end of the year, it had expanded to over 130 countries. According to a case study analysing millions of search results, 78% of users were happy with the AI-generated answers. That’s a pretty good sign the feature is here to stay.

What is AI Overview?

It’s a feature that gives users the answer they’re looking for right at the top of the search results. Google’s AI pulls information from across the web and gives a short response to the user’s query. Instead of making them click on multiple links, the AI compiles the relevant information into a summary. The answers are usually 160-170 words – just enough to give the user what they need, fast.

But here’s the catch: when users get answers this quickly, they’re less likely to click on any links below. And that’s a problem for SEO, because it means less traffic to your website.

Now, here’s the kicker: AI Overview can’t be disabled – there’s no way to opt out. However, if you want to get rid of it in your own browser, there’s a Chrome extension called Hide Google AI Overviews that will block it from appearing. But for the rest of us in digital marketing, it’s time to figure out how to work with the change.

How does AI Overview affect SEO?

AI Overviews take up a massive chunk of a screen’s real estate. When they appear, they often dominate the top of the search results page, meaning that even if your page ranks on page one, you could get passed over because the AI response has already answered the question.

It’s not all bad, though. 33.4% of the links that show up in AI Overviews are from pages that also rank in the top 10 of organic search. So it’s not impossible to get featured if your page isn’t number one – but it is tougher.

Now here’s where it gets interesting: 46.5% of the URLs that appear in AI Overviews come from websites ranked outside the top 50, so even pages that aren’t ranking highly can still be included. For those trying to grab organic traffic, though, it’s a double-edged sword. The domains that show up most often on search pages with AI Overviews include youtube.com, quora.com, wikipedia.org, and reddit.com, and informational queries are the ones that most often trigger an AI Overview (about 93% of them).

How to optimise content for AI Overview

This is a dynamic field, and you need to be ready for changes, because SEO is always about challenges, testing, algorithm updates, and so on. AI Overview can actually help a brand become more recognisable and improve its reputation if you get on its radar. Content optimisation is still important, but other factors now play a major role. Michal Kurzanowski has put together a checklist for creating top-notch content that Google’s artificial intelligence will like.
Follow Google’s recommendations for authors: Google automatically selects links for AI-powered response blocks from various sources, favouring sites that meet its search quality standards.
Start with a strong intro: The first 100 words of your page are crucial. Make sure they answer the user’s main question right off the bat – the quicker you get to the point, the better.
Keep content fresh and relevant: AI likes fresh content. Update your pages regularly, and make sure your information stays relevant to the questions people are asking.
Use descriptive headings: Don’t just throw random headings in there. Use H1, H2, and H3 tags that are specific and describe exactly what the content is about. A Q&A format works well because many AI responses are structured this way, and it helps increase your chances of being selected.
Lists are key: AI loves numbered and bulleted lists – about 40% of responses come from content that includes lists.
Quality over quantity: Share original research, insights, and your own case studies. Google isn’t interested in generic stuff; it’s looking for real expertise. Including quotes and statistics makes your content more authoritative and can boost your chances of being featured by 30-40% – a huge win.
Visuals and interactive elements: Add videos, infographics, and quizzes to keep users engaged.
E-E-A-T principle: This one’s huge – make sure your content reflects experience, expertise, authoritativeness, and trustworthiness. The more your content demonstrates these qualities, the better.

Final thoughts

Let’s be real: the SEO world is shifting fast. AI Overviews are here to stay, and it’s up to content creators to adapt. The days of getting traffic just by ranking high are changing. Now, it’s about providing the best, most relevant, and easiest-to-understand content that answers users’ questions quickly.

For businesses like OC24 LTD, staying ahead of these changes is essential. Embrace AI Overviews by optimising your content in a way that both Google and users love, and you’ll not only keep up but thrive.

The post The impact of Google AI Overview on SEO appeared first on AI News.

View the full article
-
The UK is deploying AI to keep a watchful eye on Arctic security threats from hostile states amid growing geopolitical tensions. This will be underscored by Foreign Secretary David Lammy during his visit to the region, which kicks off today.

The deployment is seen as a signal of the UK’s commitment to leveraging technology to navigate an increasingly complex global security landscape. For Britain, what unfolds in the territories of two of its closest Arctic neighbours – Norway and Iceland – has direct and profound implications: the national security of the UK is linked to stability in the High North. The once remote and frozen expanse is changing, and with it, the security calculus for the UK.

Foreign Secretary David Lammy said: “The Arctic is becoming an increasingly important frontier for geopolitical competition and trade, and a key flank for European and UK security.

“We cannot bolster the UK’s defence and deliver the Plan for Change without greater security in the Arctic. This is a region where Russia’s shadow fleet operates, threatening critical infrastructure like undersea cables to the UK and Europe, and helping fund Russia’s aggressive activity.”

British and Norwegian naval vessels conduct vital joint patrols in the Arctic. These missions are at the sharp end of efforts to detect, deter, and manage the increasing subsea threats that loom over vital energy supplies, national infrastructure, and broader regional security. Russia’s Northern Fleet, in particular, presents a persistent challenge in these icy waters.

This high-level engagement follows closely on the heels of the Prime Minister’s visit to Norway earlier this month for a Joint Expeditionary Force meeting, where further support for Ukraine was a key talking point with allies from the Baltic and Scandinavian states.

During the Icelandic stop of his tour, Lammy will unveil a UK-Iceland tech partnership to boost Arctic security. The new scheme is designed to harness AI technologies for monitoring hostile activity across this vast and challenging region. It’s a forward-looking strategy, acknowledging that as the Arctic opens up, so too do the opportunities for those who might seek to exploit its vulnerabilities.

As global temperatures climb and the ancient ice caps continue their retreat, previously impassable shipping routes are emerging. This is not just a matter for climate scientists; it’s redrawing geopolitical maps. The Arctic is fast becoming an arena of increased competition, with nations eyeing newly accessible reserves of gas, oil, and precious minerals. Unsurprisingly, the scramble for resources is cranking up security concerns.

Adding another layer of complexity, areas near the Arctic are being actively used by Russia’s fleet of nuclear-powered icebreakers. Putin’s vessels are crucial to his “High North” strategy, carving paths for tankers that, in turn, help to bankroll his illegal war in Ukraine. Such operations cast a long shadow, threatening not only maritime security but also the delicate Arctic environment. Reports suggest Putin has been forced to rely on “dodgy and decaying vessels,” which frequently suffer breakdowns and increase the risk of devastating oil spills.

The UK’s defence partnership with Norway is deeply rooted, with British troops undertaking vital Arctic training in the country for over half a century. This enduring collaboration is now being elevated through an agreement to fortify the security of both nations.
“It’s more important than ever that we work with our allies in the High North, like Norway and Iceland, to enhance our ability to patrol and protect these waters,” added Lammy. “That’s why we have today announced new UK funding to work more closely with Iceland, using AI to bolster our ability to monitor and detect hostile state activity in the Arctic.”

Throughout his Arctic tour, the Foreign Secretary will be emphasising the UK’s role in securing NATO’s northern flank. This includes the often unseen but hugely significant task of protecting the region’s critical undersea infrastructure – the cables and pipelines that are the lifelines for stable energy supplies and telecoms for the UK and much of Europe.

These targeted Arctic security initiatives are part and parcel of a broader enhancement of the UK’s overall defence posture. Earlier this year, the Prime Minister announced the most significant sustained increase in defence spending since the Cold War. This will see UK defence expenditure climb to 2.5% of GDP by April 2027, with a clear ambition to reach 3% in the next Parliament, contingent on economic and fiscal conditions.

The significance of maritime security and the Arctic is also recognised in the UK’s ambitious new Security and Defence Partnership with the EU, agreed last week. The pact commits both sides to closer collaboration to make Europe a safer place.

In today’s interconnected world, security, climate action, and international collaboration are inextricably linked. The turn to AI isn’t just a tech upgrade; it’s a strategic necessity.

(Photo by Annie Spratt)

See also: Thales: AI and quantum threats top security agendas

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK deploys AI to boost Arctic security amid growing threats appeared first on AI News.

View the full article

-
As companies rely more on automated systems, ethics has become a key concern. Algorithms increasingly shape decisions that were previously made by people, and these systems have an impact on jobs, credit, healthcare, and legal outcomes. That power demands responsibility: without clear rules and ethical standards, automation can reinforce unfairness and cause harm.

Ignoring ethics harms real people in real ways; it is not just an abstract matter of public trust. Biased systems can deny loans, jobs, or healthcare, and automation can increase the speed of bad decisions if no guardrails are in place. When systems make the wrong call, it’s often hard to appeal or even understand why, and that lack of transparency turns small errors into serious issues.

Understanding bias in AI systems

Bias in automation often comes from data. If historical data includes discrimination, systems trained on it may repeat those patterns. For example, an AI tool used to screen job applicants might reject candidates based on gender, race, or age if its training data reflects those past biases. Bias also enters through design: choices about what to measure, which outcomes to favour, and how to label data can all create skewed results.

There are many kinds of bias. Sampling bias happens when a data set doesn’t represent all groups, whereas labelling bias can come from subjective human input. Even technical choices like optimisation targets or algorithm type can skew results.

The issues are not just theoretical. Amazon dropped a recruiting tool in 2018 after it favoured male candidates, and some facial recognition systems have been found to misidentify people of colour at higher rates than Caucasians. Such problems damage trust and raise legal and social concerns.

Another real concern is proxy bias. Even when protected traits like race are not used directly, other features like zip code or education level can act as stand-ins, meaning the system may still discriminate even if the inputs look neutral – for instance, by treating applicants from richer and poorer areas differently. Proxy bias is hard to detect without careful testing. The rise in reported AI bias incidents is a sign that more attention is needed in system design.

Meeting the standards that matter

Laws are catching up. The EU’s AI Act, passed in 2024, ranks AI systems by risk. High-risk systems, like those used in hiring or credit scoring, must meet strict requirements, including transparency, human oversight, and bias checks. In the US, there is no single AI law, but regulators are active. The Equal Employment Opportunity Commission (EEOC) warns employers about the risks of AI-driven hiring tools, and the Federal Trade Commission (FTC) has signalled that biased systems may violate anti-discrimination laws.

The White House has issued a Blueprint for an AI Bill of Rights, offering guidance on safe and ethical use. While not a law, it sets expectations, covering five key areas: safe systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives.

Companies must also watch US state laws. California has moved to regulate algorithmic decision-making, and Illinois requires firms to tell job applicants if AI is used in video interviews. Failing to comply can bring fines and lawsuits.

Regulators in New York City now require audits for AI systems used in hiring. The audits must show whether the system gives fair results across gender and race groups, and employers must also notify applicants when automation is used.
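To make “fair results across gender and race groups” concrete, here is a minimal Python sketch of the selection-rate comparison at the heart of such audits. The sample data, group labels, and the 0.8 cut-off (the EEOC’s “four-fifths” rule of thumb, not a threshold set by New York City’s rules) are illustrative assumptions.

```python
# Minimal sketch of a selection-rate ("impact ratio") check of the kind
# a hiring-tool bias audit performs. The data and the 0.8 four-fifths
# threshold (an EEOC rule of thumb, not a legal limit) are illustrative.
from collections import defaultdict

# (group, was_selected) pairs -- in practice, the tool's real decision log.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])          # group -> [selected, total]
for group, selected in decisions:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {group: sel / total for group, (sel, total) in counts.items()}
best = max(rates.values())                    # selection rate of best-off group

for group, rate in sorted(rates.items()):
    ratio = rate / best                       # impact ratio vs. best-off group
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Real audits run on far larger decision logs and slice by intersections of categories (for example, race within gender), but the underlying arithmetic stays this simple – one reason regulators can reasonably demand it.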
Compliance is about more than avoiding penalties – it is also about establishing trust. Firms that can show their systems are fair and accountable are more likely to win support from users and regulators.

How to build fairer systems

Ethics in automation doesn’t happen by chance. It takes planning, the right tools, and ongoing attention. Bias and fairness must be built into the process from the start, not bolted on later. That means setting goals, choosing the right data, and including the right voices at the table. Doing this well means following a few key strategies.

Conducting bias assessments

The first step in overcoming bias is to find it. Bias assessments should be performed early and often, from development through deployment, to ensure that systems do not produce unfair outcomes. Metrics might include error rates across groups, or decisions that have a greater impact on one group than others.

Bias audits should be performed by third parties where possible. Internal reviews can miss key issues or lack independence, and transparent, objective audit processes build public trust.

Implementing diverse data sets

Diverse training data helps reduce bias by including samples from all user groups, especially those often excluded. A voice assistant trained mostly on male voices will work poorly for women, and a credit scoring model that lacks data on low-income users may misjudge them.

Data diversity also helps models adapt to real-world use. Users come from different backgrounds, and systems should reflect that; geographic, cultural, and linguistic variety all matter. Diverse data isn’t enough on its own, though – it must also be accurate and well-labelled. Garbage in, garbage out still applies, so teams need to check for errors and gaps, and correct them.

Promoting inclusivity in design

Inclusive design involves the people affected. Developers should consult with users, especially those at risk of harm (or those who might, by using biased AI, cause harm), as this helps uncover blind spots. That might mean involving advocacy groups, civil rights experts, or local communities in product reviews – listening before systems go live, not after complaints roll in.

Inclusive design also means cross-disciplinary teams. Bringing in voices from ethics, law, and social science improves decision-making, as such teams are more likely to ask different questions and spot risks. Teams should be diverse too: people with different life experiences spot different issues, and a system built by a homogenous group may overlook risks others would catch.

What companies are doing right

Some firms and agencies are taking steps to address AI bias and improve compliance – and some cautionary tales show what happens when they don’t.

Between 2005 and 2019, the Dutch Tax and Customs Administration wrongly accused around 26,000 families of fraudulently claiming childcare benefits. An algorithm used in the fraud detection system disproportionately targeted families with dual nationality and low incomes. The fallout led to public outcry and the resignation of the Dutch government in 2021.

LinkedIn has faced scrutiny over gender bias in its job recommendation algorithms. Research from MIT and other sources found that men were more likely to be matched with higher-paying leadership roles, partly due to behavioural patterns in how users applied for jobs. In response, LinkedIn implemented a secondary AI system to ensure a more representative pool of candidates.
Another example is the New York City Automated Employment Decision Tool (AEDT) law, which took effect on January 1, 2023, with enforcement starting on July 5, 2023. The law requires employers and employment agencies that use automated tools for hiring or promotion to conduct an independent bias audit within one year of use, publicly disclose a summary of the results, and notify candidates at least 10 business days in advance – rules that aim to make AI-driven hiring more transparent and fair.

Aetna, a health insurer, launched an internal review of its claim approval algorithms and found that some models led to longer delays for lower-income patients. The company changed how data was weighted and added more oversight to reduce the gap.

These examples show that AI bias can be addressed, but doing so takes effort, clear goals, and strong accountability.

Where we go from here

Automation is here to stay, but trust in these systems depends on fair results and clear rules. Bias in AI systems can cause harm and legal risk, and compliance is not a box to check – it’s part of doing things right. Ethical automation starts with awareness. It takes strong data, regular testing, and inclusive design. Laws can help, but real change also depends on company culture and leadership.

(Photo from Pixabay)

See also: Why the Middle East is a hot place for global tech investments

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Ethics in automation: Addressing bias and compliance in AI appeared first on AI News.

View the full article