ChatGPT
Diamond Member-
Posts
559 -
Joined
-
Last visited
-
Feedback
0%
Content Type
Profiles
Forums
Downloads
Store
Everything posted by ChatGPT
-
The Department of Commerce (DOC) has slammed the brakes on the sweeping “AI Diffusion Rule,” yanking it just a day before it was due to bite. Meanwhile, officials have laid down the gauntlet with stricter measures to control semiconductor exports. The AI Diffusion Rule, a piece of regulation cooked up under the Biden administration, was staring down a compliance deadline of May 15th. According to the folks at the DOC, letting this rule roll out would have been like throwing a spanner in the works of American innovation. DOC officials argue the rule would have saddled tech firms with “burdensome new regulatory requirements” and, perhaps more surprisingly, risked souring America’s relationships on the world stage by effectively “downgrading” dozens of countries “to second-tier status.” The nuts and bolts of this reversal will see the Bureau of Industry and Security (BIS), part of the DOC, publishing a notice in the Federal Register to make the rescission official. While this particular rule is heading for the shredder, the official line is that a replacement isn’t off the table; one will be cooked up and served “in the future.” Jeffery Kessler, the Under Secretary of Commerce for Industry and Security, has told BIS enforcement teams to stand down on anything concerning the now-canned AI Diffusion Rule. “The Trump Administration will pursue a bold, inclusive strategy to American AI technology with trusted foreign countries around the world, while keeping the technology out of the hands of our adversaries,” said Kessler. “At the same time, we reject the Biden Administration’s attempt to impose its own ill-conceived and counterproductive AI policies on the American people.” What was this ‘AI Diffusion Rule’ anyway? You might be wondering what this “AI Diffusion Rule” actually was, and why it’s causing such a stir. The rule wasn’t just a minor tweak; it was the Biden administration’s bid to get a tight grip on how advanced American tech – everything from the AI chips themselves to cloud computing access and even the crucial AI ‘model weights’ – flowed out of the US to the rest of the world. The idea, at least on paper, was to walk a tightrope: keep the US at the front of the AI pack, protect national security, and still champion American tech exports. But how did it plan to do this? The rule laid out a fairly complex playbook: A tiered system for nations: Imagine a global league table for AI access. Countries were split into three groups. Tier 1 nations, America’s closest allies like Japan and South Korea, would have seen hardly any new restrictions. Tier 3, unsurprisingly, included countries already under arms embargoes – like China and Russia – who were already largely banned from getting US chips and would face the toughest controls imaginable. The squeezed middle: This is where things got sticky. A large swathe of countries, including nations like Mexico, Portugal, India, and even Switzerland, found themselves in Tier 2. For them, the rule meant new limits on how many advanced AI chips they could import, especially if they were looking to build those super-powerful, large computing clusters essential for AI development. Caps and close scrutiny: Beyond the tiers, the rule introduced actual caps on the quantity of high-performance AI chips most countries could get their hands on. If anyone wanted to bring in chips above certain levels, particularly for building massive AI data centres, they’d have faced incredibly strict security checks and reporting duties. Controlling the ‘brains’: It wasn’t just about the hardware. The rule also aimed to regulate the storage and export of advanced AI model weights – essentially the core programming and learned knowledge of an AI system. There were strict rules about not storing these in arms-embargoed countries and only allowing their export to favoured allies, and even then, only under tight conditions. Tech as a bargaining chip: Underneath it all, the framework was also a bit of a power play. The US aimed to use access to its coveted AI technology as a carrot, encouraging other nations to sign up to American standards and safeguards if they wanted to keep the American chips and software flowing. The Biden administration had a clear rationale for these moves. They wanted to stop adversaries, with China being the primary concern, from getting their hands on advanced AI that could be turned against US interests or used for military purposes. It was also about cementing US leadership in AI, making sure the most potent AI systems and the infrastructure to run them stayed within the US and its closest circle of allies, all while trying to keep US tech exports competitive. However, the AI Diffusion Rule and broader plan didn’t exactly get a standing ovation. Far from it. Major US tech players – including giants like Nvidia, Microsoft, and Oracle – voiced strong concerns. They argued that the rule, instead of protecting US interests, would stifle innovation, bog businesses down in red tape, and ultimately hurt the competitiveness of American companies on the global stage. Crucially, they also doubted it would effectively stop China from accessing advanced AI chips through other means. And it wasn’t just industry. Many countries weren’t thrilled about being labelled “second-tier,” a status they felt was not only insulting but also risked undermining diplomatic ties. There was a real fear it could push them to look for AI technology elsewhere, potentially even from China, which was hardly the intended outcome. This widespread pushback and the concerns about hampering innovation and international relations are exactly what the current Department of Commerce is pointing to as reasons for today’s decisive action to scrap the rule. Fresh clampdown on AI chip exports It wasn’t just about scrapping old rules, though. The BIS also rolled out a new playbook to tighten America’s grip on AI chip exports, showing they’re serious about guarding the nation’s tech crown jewels. The latest clampdown includes: A spotlight on Huawei Ascend chips: New guidance makes it crystal clear: using Huawei Ascend chips anywhere on the planet is now a no-go under US export controls. This takes direct aim at one of China’s big players in the AI hardware game. Heads-up on ******** AI model training: A stark warning has gone out to the public and the industry about the serious consequences if US AI chips are used to train or run ******** AI models. The worry? That American tech could inadvertently supercharge AI systems that might not have US interests at heart. Guidance on shoring up supply chains: US firms are getting a fresh batch of advice on how to batten down the hatches on their supply chains to stop controlled tech from being siphoned off to unapproved destinations or users. The Department of Commerce is selling today’s double-whammy – axing the rule and beefing up export controls – as essential to “ensure that the United States will remain at the forefront of AI innovation and maintain global AI dominance.” It’s a strategy that looks to clear the runway for domestic tech growth while building higher fences around critical AI technologies, especially advanced semiconductors. This policy pivot will likely get a thumbs-up from some quarters in the US tech scene, particularly those who were getting sweaty palms about the AI Diffusion Rule and the red tape it threatened. On the flip side, the even tougher export controls – especially those zeroing in on China and firms like Huawei – show that trade policy is still very much a frontline tool in the high-stakes global chess game over who leads in tech. The whisper of a “replacement rule” down the line means this isn’t the final chapter in the saga of how to manage the AI revolution. For now, it seems the game plan is to clear the path for homegrown innovation and be much more careful about who gets to play with America’s latest breakthroughs. See also: Samsung AI strategy delivers record revenue despite semiconductor headwinds Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post US slams brakes on AI Diffusion Rule, hardens chip export curbs appeared first on AI News. View the full article
-
Saudi Arabia’s new state subsidiary, HUMAIN, is collaborating with NVIDIA to build AI infrastructure, nurture talent, and launch large-scale digital systems. The effort includes plans to set up AI “factories” powered by up to 500 megawatts of energy. The sites will be filled with NVIDIA GPUs, including the Grace Blackwell GB300 supercomputers connected via NVIDIA’s InfiniBand network. The goal is to create a base for training models, running simulations, and managing complex AI deployments. A major part of the push is about control. Saudi Arabia wants to build sovereign AI – models trained using local data, language, and systems. By building its own infrastructure, it avoids relying on foreign cloud providers. The shift aligns with a broader trend, as governments around the world start to question how AI tools are built, where data goes, and who controls it. HUMAIN is meant to give Saudi Arabia more say in that process. While other countries have launched national AI strategies, HUMAIN stands out for its structure. It’s not just a policy office or research fund; instead, it operates across the full AI value chain – building data centres, managing data, training models, and deploying applications. Few countries have a single body doing likewise with such a broad remit. Singapore’s NAIS 2.0, for example, focuses on public sector use cases and talent development, and the UAE’s approach, which emphasises frameworks and governance. China has set up AI labs in several cities, but they tend to work in silos. HUMAIN brings elements together with a central goal: make Saudi Arabia a producer, not just a user, of AI. The ambition is clear, but it comes with trade-offs. Running GPU-heavy data centres on this scale will use a lot of power. The 500-megawatt figure is far beyond typical enterprise deployments. Globally, the environmental cost of AI has become a growing concern. Microsoft and Google have both reported rising emissions from AI-related infrastructure. Saudi Arabia will need to explain how its AI factories will be powered – especially if it wants to align with its own sustainability targets under Vision 2030. The partnership with NVIDIA isn’t just about machines, it also includes training for people. HUMAIN and NVIDIA say they will run large-scale education programmes to help thousands of Saudi developers gain skills in AI, robotics, simulation, and digital twins. Building local talent is a core part of the effort, and without it, infrastructure likely won’t get used to its full potential. “AI, like electricity and internet, is essential infrastructure for every nation,” said Jensen Huang, founder and CEO of NVIDIA. “Together with HUMAIN, we are building AI infrastructure for the people and companies of Saudi Arabia to realise the bold vision of the Kingdom.” One of the tools HUMAIN plans to deploy is NVIDIA Omniverse, to be used as a multi-tenant platform for industries like logistics, manufacturing, and energy. These sectors could create digital twins – virtual versions of real systems – to test, monitor, and improve operations. The idea is simple: simulate before you build, or run stress tests in digital form to save time and money later. This type of simulation and optimisation supports Saudi Arabia’s broader push into automation and smart industry. It fits in a wider narrative of transitioning from oil to advanced tech as a core pillar of the economy. The deal fits into NVIDIA’s global strategy, and the company has similar partnerships in India, the UAE, and Europe. Saudi Arabia offers strong government support, deep funding, and the promise to become a new AI hub in the Middle East. In return, NVIDIA provides the technical backbone – GPUs, software platforms, and the know-how to run them. The partnership helps both sides. Saudi Arabia gets the tools to build AI from the ground up and build a new economic version of itself, while NVIDIA gains a long-term customer and a foothold in a growing market. There are still gaps to watch. How will HUMAIN govern the use of its models? Will they be open for researchers and startups, or tightly controlled by the state? What role will local universities or private companies play? And can workforce development keep pace with the rapid buildout of infrastructure? HUMAIN isn’t just building for now. The structure suggests a long-term bet – one that links compute power, national priorities, and a shift in how AI is developed and deployed. Saudi Arabia wants more than access. It wants influence. And HUMAIN, in partnership with NVIDIA, is the engine it’s building to get there. (Photo by Mariia Shalabaieva) See also: Huawei’s AI hardware breakthrough challenges Nvidia’s dominance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Saudi Arabia moves to build its AI future with HUMAIN and NVIDIA appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
-
The US Food and Drugs Administration (FDA) has stated that it wants to accelerate the deployment of AI across its centres. FDA Commissioner Martin A. Makary has announced an aggressive timeline to scale use of AI by 30 June 2025 and is betting big on the technology to change drug approval processes for the US. But the rapid AI deployment at the FDA raises important questions about whether innovation can be balanced with oversight. Strategic leadership drive: FDA names first AI chief The foundation for the ambitious FDA AI deployment was laid with the appointment of Jeremy Walsh as the first-ever Chief AI Officer. Walsh previously led enterprise-scale technology deployments in federal health and intelligence agencies and came from government contractor Booz Allen Hamilton, where he worked for 14 years as chief technologist. His appointment, announced just before the May 8th rollout announcement, signals the agency’s serious commitment to technological transformation. The timing is significant – Walsh’s hiring coincided with workforce cuts at the FDA, including the loss of key tech talent. Among the losses was Sridhar Mantha, the former director of strategic programmes at the Center for Drug Evaluation and Research, who had co-chaired the AI Council at CDER and helped develop policy around AI’s use in drug development. Ironically, Mantha is now working alongside Walsh to coordinate the agency-wide rollout. The pilot programme: Impressive results, limited details What’s driving the rapid AI deployment is the reported success of the agency’s pilot programme trialling the software. Commissioner Makary said he was “blown away by the success of our first AI-assisted scientific review pilot,” with one official claiming the technology enabled him to perform scientific review tasks in minutes that used to take three days. However, the scope, rigour and results from the pilot scheme remain unreleased. The agency has not published detailed reports on the pilot’s methodology, validation procedures, or specific use cases tested. The lack of transparency is concerning given the high-stakes nature of drug evaluation. When pressed for details, the FDA has promised that additional details and updates on the initiative will be shared publicly in June. For an agency responsible for protecting public health through rigorous scientific review, the absence of published pilot data raises questions about the evidence base supporting such an aggressive timeline. Industry perspective: Cautious optimism meets concerns The pharmaceutical industry’s reaction to the FDA AI deployment reflects a mixture of optimism and apprehension. Companies have long sought faster approval processes, with Makary pointedly asking, “Why does it take over 10 years for a new drug to come to market?” “While AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the centre. We’re pleased to see the FDA taking concrete action to harness the potential of AI,” said PhRMA spokesperson Andrew Powaleny. However, industry experts are raising practical concerns. Mike Hinckle, an FDA compliance expert at K&L Gates, highlighted a key issue: pharmaceutical companies will want to know how the proprietary data they submit will be secured. The concern is particularly acute given reports that the FDA was in discussions with OpenAI about a project called cderGPT, which appears to be an AI tool for the Centre for Drug Evaluation and Research. Expert warnings: The rush vs rigour debate Leading experts in the field are expressing concern about the pace of deployment. Eric Topol, founder of the Scripps Research Translational Institute, told Axios: “The idea is good, but the lack of details and the perceived ‘rush’ is concerning.” He identified critical gaps in transparency, including questions about which models are being used to train the AI, and what inputs are provided for specialised fine-tuning. Former FDA commissioner Robert Califf struck a balanced tone: “I have nothing but enthusiasm tempered by caution about the timeline.” His comment reflects the broader sentiment among experts who support AI integration but question whether the June 30th deadline allows sufficient time for proper validation and safeguards to be implemented. Rafael Rosengarten from the Alliance for AI in Healthcare supports automation but emphasises the need for governance, saying there is a need for policy guidance around what kind of data is used to train AI models and what kind of model performance is considered acceptable. Political context: Trump’s deregulatory AI vision The FDA AI deployment must be understood in the broader context of the Trump administration’s approach to AI governance. Trump’s overhaul of federal AI policy – ditching Biden-era guardrails in favour of speed and international dominance in technology – has turned the government into a tech testing ground. The administration has explicitly prioritised innovation over precaution. Vice President JD Vance outlined four key AI policy priorities, including encouraging “pro-growth AI policies” instead of “excessive regulation of the AI sector,” and he has taken action to ensure the forthcoming White House AI Action Plan would “avoid an overly precautionary regulatory regime.” The philosophy is evident in how the FDA is approaching its AI deployment. With Elon Musk leading a charge under an “AI-first” flag, critics warn that rushed rollouts at agencies could compromise data security, automate important decisions, and put Americans at risk. Safeguards and governance: What’s missing? While the FDA has promised that its AI systems will maintain strict information security and act in compliance with FDA policy, specific details about safeguards remain sparse. The agency’s claims that AI is a tool to support, not replace, human expertise and can enhance regulatory rigour by helping predict toxicities and adverse events. This provides some reassurance but lacks specificity. The absence of published governance frameworks for what is an internal process contrasts sharply with the FDA’s guidance for industry. The agency has previously issued draft guidance to pharma companies, providing recommendations on the use of AI intended to support a regulatory decision about a drug or biological product’s safety, effectiveness, or quality. Its published draft guidance in that instance was based on feedback from over 800 external comments and its experience with more than 500 drug submissions involving AI components in their development since 2016. The broader AI landscape: Federal agencies as testing grounds The FDA’s initiative is part of a larger federal AI adoption wave. The General Services Administration is piloting an AI chatbot to automate routine tasks, and the Social Security Administration plans to use AI software to transcribe applicant hearings. However, GSA officials noted its tool has been in development for 18 months – highlighting the contrast with the FDA’s accelerated timeline, which at the time of writing, is a matter of weeks. The rapid federal adoption reflects the Trump administration’s belief that America is well-positioned to maintain its global dominance in AI and that the Federal Government must capitalise on the advantages of American innovation. It also maintains the importance of strong protections for Americans’ privacy, civil rights, and civil liberties. Innovation at a crossroads The FDA’s ambitious timeline embodies the fundamental tension between technological promise and regulatory responsibility. While AI offers clear benefits in automating tedious tasks, the rush to implementation raises critical questions about transparency, accountability, and the erosion of scientific rigour. The June 30th deadline will test whether the agency can maintain the public trust that has long been its cornerstone. Success requires more than technological capability – it demands proof that oversight hasn’t been sacrificed for speed. The FDA AI deployment represents a defining moment for pharmaceutical regulation. The outcome will determine whether rapid AI adoption strengthens public health protection or serves as a cautionary tale about prioritising efficiency over safety in matters of life and death. The stakes couldn’t be higher. See also: AI vs COVID-19: Here are the AI tools and services fighting coronavirus Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post FDA AI deployment: Innovation vs oversight in drug regulation appeared first on AI News. View the full article
-
Google’s recent announcement of an AI Futures Fund shows the company wants to invest in AI startups. It sees itself as providing capital, early access to AI models yet to reach production, and support for startups from experts at the company. This is not Google’s first rodeo. To date, Alphabet (Google’s parent company) has invested in 38 AI companies. Big name acquisitions to date include the ***’s DeepMind, Waymo, the autonomous vehicle intelligence company, and home automation specialist Nest. While such investments signal a broad intent to at least associate Google with startups coming to market with a smart idea (if not necessarily to ******** up smaller companies), the current situation regarding the US courts’ attitude to monopolistic behaviour by Google questions just how far any relationship may go in the longer term. If Google faces increased scrutiny in the next few years on the back of any eventual ruling the US Department of Justice makes, having what might be interpreted as a monopolistic position in AI could bring down the ire of the judiciary. On the other hand, if Google is forced to divest itself of some of its more profitable divisions – the Chrome browser, the Android mobile operating system, some or all of its ad networks – the company may have to double down on its other sources of revenue; and AI could become its favoured métier. If the board at Alphabet decide to bet large stakes on AI, one core aspect of doing so will need clear and definite resolution: the economic viability of continuing AI implementation in the forms that users have become familiar with in the last couple of years. According to some industry commentators, the AI ‘whale’, OpenAI, is struggling to monetise its operations to the extent that will satisfy its investors. Google’s share of the AI market is tiny in comparison with OpenAI’s, but it suffers from the same potential financial issues. Google’s way through may be to continue its original role as provider of information searched for on the internet, and use its models to improve the search results given to online queries, and perhaps monetise around that transaction: either charging end-users for AI-powered search, or having advertisers pay for top spots in AI-generated search results. In that role, Google would be returning to its original function, but with the addition of AI algorithms under the surface – AI that improves a service that’s proven to be in massive demand, rather than AI being the main focus of user activity. Meta’s latest earnings call signalled that Mark Zuckerberg wants to do just that: return to the roots of the Facebook platform as a social connector, but have AI improve users’ experiences. It’s proposed that any acquisitions by Google of AI companies in the future would have to get the approval of the US Department of Justice. The company says such a move would limit investment in future AI technologies, a sentiment echoed by Anthropic representatives during the anti-monopolistic search practices court case brought against Google by the DOJ. If such a government approval edict were in place for Google, it would change the nature of companies that Google might fund via the AI Futures Fund or similar scheme. Rather than risking censure by appearing to add companies to the Alphabet stable that are in line with Google’s AI offerings, those benefiting from the company’s largess would be more likely to be niche players, bringing unique products to sectors of the economy where Google doesn’t already hold sway. Amazon’s acquisitions have been, at first glance, more in line with that seeking out of niche products to snap up. The Ring home smart device company (acquired in 2018 for $1bn) and One Medical (2022, $3.9bn) were purchases well outside of Amazon’s core cloud and retail verticals. Both, however, are prime sources of training data for AI models – consumer behaviour metrics and healthcare information are prime data real-estate. Google’s strategy for investment in smaller companies will need to be similarly canny, given that whatever the outcome of the DOJ case, its activities will be subject to intense scrutiny by the courts, end-users, and the press. No article discussing the activities of US big tech companies would be complete in 2025 without the addition of caveats around the present American leadership’s attitude to competition in the sector. The elephant in the room is the real possibility of executive veto of, or significant amendment to, any judicial ruling. Those potential game-changing elements could affect Google and Alphabet’s investment plans with little notice and less reason. (Image source: “Dallas DA LGBT Task Force visits DOJ FBP” by Dallas County DA is licensed under CC BY-NC-ND 2.0.) See also: Apple AI stresses privacy with synthetic and anonymised data Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Google’s AI Futures Fund may have to tread carefully appeared first on AI News. View the full article
-
The Trump AI chip policy reversal announced this week signals a shift in how advanced computing technologies will flow in global markets, as the administration prepares to dismantle Biden’s complex three-tier regulatory framework. Set to take effect on May 15, 2025, the Biden administration’s Framework for Artificial Intelligence Diffusion would have created a stratified global technology landscape with significant implications for international trade, innovation, and geopolitical relationships. According to a Commerce Department on Wednesday, the Trump administration views the existing approach as fundamentally flawed. “The Biden AI rule is overly complex, overly bureaucratic, and would stymie American innovation,” a Commerce Department spokeswoman told Reuters. “We will be replacing it with a much simpler rule that freees American innovation and ensures American AI dominance,” they said. The policy shift comes months after the Biden administration finalised an export control framework during its final week in office. That framework represented the culmination of a four-year effort to restrict China’s access to cutting-edge chips while maintaining US leadership in artificial intelligence technology. The decision to rescind the rule reflects the Trump administration’s different approach to balancing national security concerns with commercial interests. < Understanding the three-tier system The soon-to-be-eliminated rule had established a hierarchical structure for global technology access. In the first tier, 17 countries plus Taiwan would have enjoyed unlimited access to advanced AI chips. A second tier of approximately 120 countries would have operated under strict numerical caps limiting their imports. The third and final tier – including China, Russia, Iran, and North Korea – would have been completely blocked from accessing these technologies. The structured approach aimed to prevent advanced technologies from reaching countries of concern through intermediaries while still allowing access for allies and neutral nations. However, critics argued the complexity of the system would create significant compliance burdens and push international partners toward alternative suppliers. < The new approach taking shape Instead of the tiered system, sources cited by Reuters indicate the Trump administration is considering implementing a global licensing regime supported by inter-governmental agreements. The approach would potentially offer more flexibility and maintain controls over sensitive technology. The timing of the announcement appears strategically significant. Bloomberg reported the changes are developing as President Trump prepares for a trip to the Middle East, where countries including Saudi Arabia and the United Arab Emirates have expressed frustration over existing restrictions on their acquisition of AI chips. The Commerce Department’s decision could be announced as soon as Thursday, according to a source familiar with the matter. < Market reaction and industry impact News of the policy reversal has already sent ripples through financial markets. Shares of Nvidia, the dominant manufacturer of chips used for training AI models, ended 3% higher on May 7 following the announcement, though they dipped 0.7% in after-hours trading, according to Reuters. The company has consistently opposed the growing number of US restrictions. Nvidia CEO Jensen Huang argues that American companies should be able to sell into China, which, he predicts, will become a $50 billion market for AI chips in the next couple of years. However, it’s important to note that the Trump AI chip policy shift does not signal a complete abandonment of export controls. The administration has already demonstrated its willingness to take strong action against China, specifically, by banning Nvidia from selling its H20 chip there – a move that cost the company $5.5 billion in writedowns, according to Bloomberg. < Global winners and losers The policy reversal creates a complex map of potential winners and losers in the global technology landscape. Countries like India and Malaysia, which hadn’t faced chip restrictions before the Biden rule was unveiled in January, will see temporary relief. In Malaysia’s case, this could particularly benefit Oracle Corporation, which has plans for a massive data centre expansion that would have exceeded limits established by rules governing AI hardware distribution. Middle Eastern nations also stand to gain. The UAE and Saudi Arabia, which have faced chip export controls since 2023, may now be able to negotiate more favourable terms. Trump has expressed interest in easing restrictions for the UAE specifically and could announce the beginning of work on a government-to-government AI chip agreement during his upcoming visit to the region from May 13 to 16. The UAE’s aggressive pursuit of such an agreement, backed by its pledge to invest up to $1.4 trillion in US technology and infrastructure over the next decade, exemplifies how high-stakes these negotiations have become for countries seeking to establish themselves as AI powerhouses. < Uncertainty ahead According to Axios, the Trump administration is currently developing a new control scheme, which could emerge as either a new rule or an executive order. The transition ******* creates significant uncertainty for companies like Nvidia regarding the regulatory environment they’ll face in the coming months. While the new framework takes shape, the administration has indicated it will continue enforcing existing chip export controls. One potential element of the new approach might involve imposing controls specifically on countries that have diverted chips to China, including Malaysia and Thailand, according to a source familiar with the matter. Industry stakeholders remain divided on the issue. While chip manufacturers have lobbied aggressively against strict export controls, some AI companies, including Anthropic, have advocated for maintaining protections that safeguard US intellectual property and technological advantages. < Balancing competing priorities The Biden administration’s export controls were designed to limit access to chips needed for cutting-edge AI development, with a particular focus on preventing ******** firms from finding indirect routes to technology that existing export controls prevented them from importing directly. Creating a balanced approach that addresses national security concerns while promoting US commercial interests presents significant challenges. Establishing agreements with a wide range of countries eager to purchase advanced AI chips would require navigating complex diplomatic relationships and potentially creating dozens of separate policy frameworks. The Commerce Department has not provided a specific timeline for when any new rules are to be finalised or implemented, only indicating that debate continues on the optimal approach forward. The shift in Trump AI chip policy reflects the administration’s broader emphasis on American competitiveness and innovation while still maintaining control over technologies with national security implications. As officials work to craft a replacement framework, the global AI chip market remains in flux, with profound implications for technological development, international relations, and corporate strategies in the evolving artificial intelligence landscape. also: US-China AI chip race: Cambricon’s first profit lands Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Trump AI chip policy to scrap Biden’s export controls appeared first on AI News. View the full article
-
Apple continues its focus on hardware produced in-house, and is currently working on a new generation of chips for future hardware, according to Bloomberg. The processors are thought to be destined for use in smart glasses, AI-capable servers, and the next generations of Macs. One project involves a custom chip designed for smart glasses, which are thought to offer voice commands, photo capture, and audio playback, but will not be full augmented reality (AR) devices. The chip design is based on the low-power components used at present in the latest models of the Apple Watch, but modified to use less energy and support multiple cameras. Apple has yet to comment on any of the rumoured projects it’s a company with a strict policy of keeping the products it may, or may not be developing under wraps. However, production for the glasses chip is said to begin by late 2026 or early 2027. If that timeline holds true, devices could reach the market in two years. As with most of Apple’s chips, Taiwan Semiconductor Manufacturing Co. is expected to handle production. Smart glasses have been in development at Apple for several years, industry insiders claim. The company aims to build full AR wearables that overlay digital information onto real-world views, but the technology is yet to be ready for everyday use. In this sector, Meta has already broken some ground, launching smart glasses in partnership with Ray-Ban. Apple seems to be pursuing a similar product, minus the AR features – at least, in any device’s first iteration. Sources say Apple is developing both AR and non-AR glasses under the codename N401, previously N50. According to reports, Apple’s CEO Tim Cook hopes for the company to take a lead in this market segment. Meta, meanwhile, is expanding its own product line, planning to debut a high-end model of its Ray-Ban style device with a display later this year. The company is said to be targeting 2027 for its first, fully-AR glasses gadget. Apple’s non-AR glasses could use cameras to scan the environment and apply AI to assist users, mirroring Meta’s current strategy. Apple is said to be biding its time, and waiting for AI software to mature before committing to a full product release. In the meantime, Apple is exploring other avenues to improve its current product lines, with engineers reportedly testing features like cameras in AirPods and smartwatches, which will likely use Apple chips currently in development. Codename “Nevis” is slated for a camera-enabled Apple Watch, while “Glennie” is intended for AirPods. Both are thought to be planned for release by 2027. Apple is said to be preparing a new set of processors specifically for Macs; the M6 (Komodo) and M7 (Borneo), and a higher-end chip “Sotra”. Apple is also thought to be planning to upgrade the iPad Pro and MacBook Pro with its M5 chip later this year. Internal-to-Apple chip development efforts are part of Apple’s broader push to control the full hardware stack of its products. The hardware group, led by Johny Srouji, has been expanding its portfolio: Earlier this year, Apple launched its first in-house modem chip in the iPhone 16e, with a higher-end version, the C2, planned for release in 2026. (Photo by Unsplash) See also: Apple AI stresses privacy with synthetic and anonymised data Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Apple developing custom chips for smart glasses and more appeared first on AI News. View the full article
-
Samsung Electronics’ strategic focus on AI has delivered high revenue in the first quarter of 2025, as the South Korean tech giant navigates semiconductor market challenges and growing global trade uncertainties. The company posted an all-time quarterly high revenue of KRW 79.14 trillion ($55.4 billion), marking a 10% increase year-over-year, according to its financial results released on Wednesday. Operating profit climbed to KRW 6.7 trillion ($4.68 billion), representing a modest 1.5% increase compared to the same ******* last year. The results exceeded Samsung’s earlier forecast of KRW 79 trillion and analysts’ expectations. Smartphone success counters chip challenges The Mobile Experience (MX) Business emerged as the best performer, contributing KRW 37 trillion in consolidated revenue and KRW 4.3 trillion in operating profit – its highest level in four years. The success was driven primarily by strong sales of the flagship Galaxy S25 series, which features AI abilities via Galaxy AI. “Enhanced cost competency and price declines for some components also contributed to solid double-digit profitability,” the company’s earnings report said. In contrast, Samsung’s Device Solutions (DS) Division, which includes its semiconductor operations, posted KRW 25.1 trillion in revenue and KRW 1.1 trillion in operating profit – a 42% decline from the previous year. The performance reflects ongoing challenges in the semiconductor market, particularly in high-bandwidth memory (HBM) sales. “Overall earnings were impacted by the erosion of average selling price (ASP), as well as a decrease in HBM sales due to export controls on AI chips and deferred demand in anticipation of upcoming enhanced HBM3E products,” Samsung said. Trade tensions cloud future outlook Despite the record revenue, Samsung has expressed caution about the second quarter, dropping its usual business outlook due to growing macroeconomic uncertainties stemming from global trade tensions and slowing economic growth. “Due to the rapid changes in policies and geopolitical tensions among major countries, it’s difficult to accurately predict the business impact of tariffs and established countermeasures,” a Samsung executive stated during Wednesday’s earnings call. Of particular concern are US President Donald Trump’s “reciprocal” tariffs, most of which have been suspended until July but threaten to impact dozens of countries including Vietnam and South Korea, where Samsung produces smartphones and displays. While Samsung noted that its flagship products like semiconductors, smartphones, and tablets are currently exempt from these tariffs, the company revealed that Washington is conducting a product-specific tariff probe into these categories. “There are a lot of uncertainties ahead of us […] we are communicating with related countries to minimise negative effects,” Samsung said during the call. In response to its challenges, the company disclosed it is considering relocating production of TVs and home appliances. AI investment and future strategy Despite these headwinds, Samsung remains committed to its artificial intelligence strategy, allocating its highest-ever annual R&D expenditure for 2024. In the first quarter of 2025, the company increased R&D spending by 16% compared to the same ******* last year, amounting to KRW 9 trillion. For the remainder of 2025, Samsung plans to expand its AI smartphone lineup through the introduction of “Awesome Intelligence” to the Galaxy A series and the launch of the Galaxy S25 Edge in Q2. Later in the year, the company will strengthen its foldable lineup with enhanced AI user experiences. In the semiconductor space, Samsung aims to strengthen its position in the high-value-added market through its server-centric portfolio and the ramp-up of enhanced HBM3E 12H products to meet initial demand. The company expects AI-related demand to remain high in the second half of 2025, coinciding with the launch of new GPUs. “In the mobile and PC markets, on-device AI is expected to proliferate, so the Memory Business will proactively respond to this shift in the business environment with its industry-leading 10.7Gbps LPDDR5x products,” Samsung stated. The company’s foundry business remains focused on its 2nm Gate-All-Around (GAA) process development, which remains on schedule despite current challenges. Market reaction and competitive landscape Samsung shares were trading down approximately 0.6% following the announcement, reflecting investor concerns about the uncertain outlook. The results highlight Samsung’s complex position in the AI market – succeeding in consumer-facing applications while working to catch up with competitors in AI-specific semiconductor components. Local rival SK Hynix, which reported a 158% jump in operating profit last week to KRW 7.4 trillion, has overtaken Samsung in overall DRAM market revenue for the first time, capturing 36% global market share compared to Samsung’s 34%, according to Counterpoint Research. SK Hynix’s success has been particularly pronounced in the high-bandwidth memory segment, which is crucial for AI server applications. “Samsung has assumed that the uncertainties are diminished, it expects its performance to improve in the second half of the year,” the company noted, striking a cautiously optimistic tone despite the challenges ahead. Samsung’s record revenue masks a pivotal crossroads for the tech giant: while its AI-enhanced smartphones flourish, its once-dominant semiconductor business risks falling behind in the AI revolution. The coming quarters will reveal whether Samsung’s massive R&D investments can reclaim lost ground in HBM chips, or if we’re witnessing a fundamental power shift in Asian tech manufacturing that could alter the global AI supply chain for years to come. For a company that rebuilt itself numerous times over its 56-year history, the AI semiconductor race may prove to be its most consequential transformation yet. (Image credit: Anthropic) See also: Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost <figurewp-block-image”> Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Samsung AI strategy delivers record revenue despite semiconductor headwinds appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
-
ServiceNow has opened its Knowledge 2025 conference with a brand new AI platform. The ambition is clear: to give businesses a single, coherent way to get all their different AI tools and intelligent agents working together, right across the company. This isn’t just a fresh lick of paint; we’re talking deep-rooted new features and much cosier relationships with the likes of NVIDIA, Microsoft, Google, and Oracle. The aim? To finally help businesses orchestrate their operations with genuine intelligence, and it seems some big names like Adobe, Aptiv, the NHL, Visa, and Wells Fargo are already seeing it pay off. Most business leaders you speak with today will tell you they’re wrestling with a tangled mess of complexity. We’ve got systems that don’t talk to each other, data all over the shop, often no real game plan for AI, and that constant pressure to do more with less. ServiceNow believes AI innovation is the answer here, a way to fundamentally change how businesses run – making them more resilient, more efficient, and helping them get a handle on costs, all while chipping away at that mountain of tech debt and operational guesswork. However, the company rightly points out that just throwing AI at the problem won’t cut it. You need a clear vision and a real commitment to using ‘agentic AI’ – think smart, autonomous AI helpers – all underpinned by a platform built for what’s next. For the companies already diving in, these AI agents are apparently delivering the goods, boosting productivity in all sorts of ways. And it’s hitting the bottom line: ServiceNow’s own research, the Enterprise AI Maturity Index, shows that 55% of organisations using this kind of agentic AI have seen their gross margins improve. That’s a hefty jump compared to the mere 22% for those not yet on board. Bill McDermott, Chairman and CEO of ServiceNow, commented: “ServiceNow is igniting a new era of enterprise transformation with the ServiceNow AI Platform. We’re unleashing the full power of AI, across any industry, any agent, any workflow. “For decades, CEOs have wanted technology to accelerate the speed of business transformation. With this next generation architecture, we finally have the foundation to run the integrated enterprise in real time. We are the only ones who can orchestrate AI, data, and workflows on a single platform.” It’s not just talk; some big names are already putting ServiceNow’s agentic AI through its paces: Adobe is using it to speed up automation in IT and workplace services. Think AI agents handling common but time-consuming requests like password resets. The upshot? Fewer support tickets, quicker fixes, and happier, more productive staff. They’re also looking to add ServiceNow’s Workflow Data Fabric and RaptorDB to chew through their data even faster. Aptiv, known for its work in critical industries, is teaming up with ServiceNow. Their new partnership aims to blend ServiceNow’s AI smarts with Aptiv’s edge intelligence to boost automation and keep things running smoothly when it really matters. The NHL is, in their words, “going all in on ServiceNow AI.” The goal is to streamline how they operate, making life easier for employees needing quick solutions and helping arena technicians give fans an even better game day. Wells Fargo has rolled out ServiceNow AI with RaptorDB to automate tricky workflows and process huge amounts of data in real-time. This, they hope, will lead to smarter, AI-driven decisions right across the bank. Visa is set to bring in ServiceNow Disputes Management – a system they built together. It uses AI agents on the ServiceNow platform to help resolve payment disputes. And it seems they like what they see, as Visa plans to use ServiceNow’s AI to run its own managed dispute services. So, what’s under the bonnet of this reimagined ServiceNow AI Platform? It’s built to bring together intelligence, data, and the actual doing – the orchestration. The idea is to help companies move beyond a few scattered AI experiments to making AI a core part of how they operate. A big piece of this puzzle is what they call a “smart, conversational AI Engagement Layer.” This lets people get complex tasks done across different systems by tapping into ServiceNow’s Knowledge Graph, Workflow Data Fabric, and AI Agent Fabric. These clever bits are designed to connect smoothly with all sorts of enterprise data systems and AI frameworks. This launch is clearly just the start of a whole wave of new stuff. We’re seeing expanded partnerships, literally thousands of ready-made AI agents, and the introduction of the ServiceNow AI Control Tower. Alongside the main platform, ServiceNow is adding some additional firepower: AI Control Tower: Think of it as a central dashboard for managing everything AI – whether it’s ServiceNow’s own AI, or tools from other providers. It’s about keeping an eye on things, making sure it’s secure, and getting real value from every AI agent, model, and workflow, all in one place. The goal is better teamwork, solid oversight, and automation that actually scales. AI Agent Fabric: This is the communication network for a company’s AI ecosystem. It’s about getting AI agents – no matter who built them (ServiceNow, partners, or your own tech teams) – to work together smoothly across different tools, teams, and even vendors like Microsoft, NVIDIA, Google, and Oracle. They can share information, coordinate jobs, and generally get things done more effectively together. Next-generation CRM: They’re also beefing up their Customer Relationship Management offering with more AI. The plan is to bring sales, order fulfilment, and customer service onto one platform. This means businesses can shift from just reacting to customer problems to proactively engaging with them at every stage, from quoting a price right through to renewal. Apriel Nemotron 15B: This new reasoning LLM is built with NVIDIA. It’s designed to power intelligent AI agents that can think and act at scale, promising high accuracy, quick responses, lower running costs, and generally faster, smarter AI for everyone. And it’s not just about the tech; ServiceNow is also doubling down on people with the launch of ServiceNow University. This is their bid to give organisations the tools to upskill their workforce, helping them not just unlock individual potential but also drive real business change. At the end of the day, even the smartest AI needs smart people to make the most of it. See also: UAE to teach its children AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post ServiceNow bets on unified AI to untangle enterprise complexity appeared first on AI News. View the full article
-
The United Arab Emirates looks set to integrating AI education in its schools’ curricula, meaning all children from kindergarten to high school will learn about the technology, how it can be used day-to-day, and the best ways to implement the various types of models. There will also be classes covering the ethics of AI, something that the country’s young might eventually teach to the world, according to OpenAI’s Sam Altman, who once termed the UAE the world’s ‘sandbox’ in which issues around AI such as governance could be thrashed out, and from which the rest of the world can make its regulatory models. The new curriculum will include areas such as data and algorithms, software applications, the ethics of AI, real-world applications of the technology, policies, and social engagement. All modules have been designed to be age-appropriate, and will be incorporated into the standard curriculum, rather than being taught after-hours. The rollout into schools is part of the country’s National Artificial Intelligence Strategy 2031, which aims to position the Kingdom as a global leader in AI capabilities – including education – and is part of wider efforts to diversify the UAE’s economy from its core basis of oil production and *****. In addition to the changes to school timetables, the National AI Strategy also includes funds to promote AI in research, with institutions like the American University of Sharjah and United Arab Emirates University committed to the technology’s use in the higher and postgraduate sectors. There are also public awareness and learning initiatives springing up in the country as it gathers momentum to become the country that’s most behind the possibilities of AI in the modern world. An AI investment fund is expected to reach a value of $100 billion in the next few years, according to people familiar with the project. The country also has plans to spend around $1.4 trillion in in the US in the next ten years on energy generation, semiconductor manufacture, and AI infrastructure. Investments abroad will ensure the Kingdom retains positive relations with elements of its supply chain related to AI. US President Trump is thought to be considering easing tariffs and other restrictions on exports of Nvidia hardware to the Kingdom, and has planned a visit to the region later this month, when he will also visit Saudi Arabia and Qatar. The UAE has actively encouraged investment in infrastructure from ******** manufacturers such as Huawei, and is seen as something of an impartial middle-ground in the ongoing trade war between the Western and Eastern industrial and technology blocs. The wider region is home to some of the most affluent nations, so any curbs on trade tend to have negative effects on vendors based on both sides of the divide. See also: Conversations with AI – Education (Image source: “Dubai” by Eugene Kaspersky is licensed under CC BY-NC-SA 2.0.) The post UAE to teach its children AI appeared first on AI News. View the full article
-
Ever wondered what happens when a company trying to build a ‘brain for the world’ needs to grow up, fast, without selling its soul? Well, OpenAI has just given us a peek as it pledges to keep its nonprofit core amid broader restructuring. OpenAI CEO Sam Altman has laid out their roadmap, and the headline news is: they’re rejigging the money side of things, but their core mission to make Artificial General Intelligence (AGI) work for all of us remains bolted down. In a letter, Altman wrote: “OpenAI is not a normal company and never will be.” It’s a bold statement, but it sets the scene for a company wrestling with how to fund world-changing tech while keeping its ethical compass pointing true north. Cast your mind back, if you will, to OpenAI’s early days. Altman paints a picture that’s a far cry from the tech behemoth it’s becoming. “When we started OpenAI, we did not have a detailed sense for how we were going to accomplish our mission,” he shared. “We started out staring at each other around a kitchen table, wondering what research we should do.” Forget fancy business models or product roadmaps back then. The idea of AI dishing out medical advice, revolutionising how we learn, or needing the kind of computing power that makes your gaming PC look like a pocket calculator – “hundreds of billions of dollars of compute,” as Altman puts it – wasn’t even on the horizon. Even the ‘how’ of building AGI was a bit of a head-scratcher. When OpenAI was founded as a nonprofit, some of the early thinkers at the company apparently thought AI should probably only be trusted to a handful of “trusted people” who could “handle it.” That view has done a complete 180. “We now see a way for AGI to directly empower everyone as the most capable tool in human history,” Altman declared. The big dream? If everyone gets their hands on AGI, we’ll cook up amazing things for each other, pushing society forward. Sure, some might use it for dodgy stuff, but Altman’s betting on humanity: “We trust humanity and think the good will outweigh the bad by orders of magnitude.” Their game plan is what they call “democratic AI.” They want to give us all these incredible tools. They’re even talking about open-sourcing powerful models, saying they want us to make decisions about how ChatGPT behaves. “We want to build a brain for the world and make it super easy for people to use for whatever they want (subject to few restrictions; freedom shouldn’t impinge on other people’s freedom, for example),” Altman explained. And people are already getting stuck in. Scientists are crunching data faster, programmers are coding smarter, and folks are even using ChatGPT to navigate tricky health issues or get advice on tough personal situations. Here’s the rub: the world wants way more AI than they can currently churn out. “We currently cannot supply nearly as much AI as the world wants,” Altman admitted. This insatiable appetite for AI, and the eye-watering sums of cash needed to feed it, is why OpenAI feels it’s time for it to “evolve” beyond a strict nonprofit structure. Altman boiled the restructuring down to three main goals: Getting the dough: They need to find a way to pull in the “hundreds of billions of dollars and may eventually require trillions of dollars” – yes, trillions with a ‘T’ – to make their AI tools available to everyone on the planet. Think of it like building a global superhighway for intelligence. Supercharging the nonprofit: They want their original nonprofit arm to be the “largest and most effective nonprofit in history,” using AI to make a massive positive difference in people’s lives. Delivering AGI that’s helpful and safe: This means doubling down on safety and making sure AI aligns with human values. Altman’s proud of OpenAI’s track record, including creating new “red teaming” methods (where they get clever people to try and break their AI to find flaws) and being open about how their models work. So, what’s the grand plan for this evolution? Crucially, the nonprofit side of OpenAI is staying firmly in the driver’s seat. This isn’t just some vague promise; it came after serious chats with “civic leaders” and the offices of the Attorneys General of California and Delaware. “OpenAI was founded as a nonprofit, is today a nonprofit that oversees and controls the for-profit, and going forward will remain a nonprofit that oversees and controls the for-profit. That will not change,” Altman stated. The bit that is changing is the for-profit LLC that currently sits under the nonprofit. This will morph into a Public Benefit Corporation (PBC). If you’re scratching your head, a PBC is a type of company that’s legally bound to consider its public benefit mission alongside making money. Think of companies like Patagonia or some ethical food brands – they want to do good while still being a business. It’s a model other AGI labs like Anthropic are using too, so it’s becoming a bit of a trend for purpose-driven tech firms. This also means they’re ditching their old, rather head-scratching “capped-profit” system. Altman explained this made sense when it looked like one company might dominate AGI, but now, with lots of players in the game, a “normal capital structure where everyone has stock” is simpler. The nonprofit side of OpenAI won’t just be in the driving seat; it’ll also become a big shareholder in this new PBC. According to Altman, this means the nonprofit will get a hefty chunk of resources to pour into programmes that help AI benefit different communities. As the PBC makes more money, the nonprofit gets more cash to splash on projects in areas like health, education, and science. They’re even getting a special commission to dream up ways their nonprofit work can make AI more democratic. Altman wrapped things up with a healthy dose of optimism, saying, “We believe this sets us up to continue to make rapid, safe progress and to put great AI in the hands of everyone.” OpenAI is clearly trying to attract the colossal funding needed for AGI development while hard-wiring its “benefit all of humanity” mantra into its very DNA. It’s a delicate tightrope walk, and you can bet the entire tech world, and probably a good chunk of the rest of us, will be watching to see if they can pull it off. (Image by Mohamed Hassan) See also: Google AMIE: AI doctor learns to ‘see’ medical images Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Sam Altman: OpenAI to keep nonprofit soul in restructuring appeared first on AI News. View the full article
-
Google is giving its diagnostic AI the ability to understand visual medical information with its latest research on AMIE (Articulate Medical Intelligence Explorer). Imagine chatting with an AI about a health concern, and instead of just processing your words, it could actually look at the photo of that worrying rash or make sense of your ECG printout. That’s what Google is aiming for. We already knew AMIE showed promise in text-based medical chats, thanks to earlier work published in Nature. But let’s face it, real medicine isn’t just about words. Doctors rely heavily on what they can see – skin conditions, readings from machines, lab reports. As the Google team rightly points out, even simple instant messaging platforms “allow static multimodal information (e.g., images and documents) to enrich discussions.” Text-only AI was missing a huge piece of the puzzle. The big question, as the researchers put it, was “Whether LLMs can conduct diagnostic clinical conversations that incorporate this more complex type of information.” Google teaches AMIE to look and reason Google’s engineers have beefed up AMIE using their Gemini 2.0 Flash model as the brains of the operation. They’ve combined this with what they call a “state-aware reasoning framework.” In plain English, this means the AI doesn’t just follow a script; it adapts its conversation based on what it’s learned so far and what it still needs to figure out. It’s close to how a human clinician works: gathering clues, forming ideas about what might be wrong, and then asking for more specific information – including visual evidence – to narrow things down. “This enables AMIE to request relevant multimodal artifacts when needed, interpret their findings accurately, integrate this information seamlessly into the ongoing dialogue, and use it to refine diagnoses,” Google explains. Think of the conversation flowing through stages: first gathering the patient’s history, then moving towards diagnosis and management suggestions, and finally follow-up. The AI constantly assesses its own understanding, asking for that skin photo or lab result if it senses a gap in its knowledge. To get this right without endless trial-and-error on real people, Google built a detailed simulation lab. Google created lifelike patient cases, pulling realistic medical images and data from sources like the PTB-XL ECG database and the SCIN dermatology image set, adding plausible backstories using Gemini. Then, they let AMIE ‘chat’ with simulated patients within this setup and automatically check how well it performed on things like diagnostic accuracy and avoiding errors (or ‘hallucinations’). The virtual OSCE: Google puts AMIE through its paces The real test came in a setup designed to mirror how medical students are assessed: the Objective Structured Clinical Examination (OSCE). Google ran a remote study involving 105 different medical scenarios. Real actors, trained to portray patients consistently, interacted either with the new multimodal AMIE or with actual human primary care physicians (PCPs). These chats happened through an interface where the ‘patient’ could upload images, just like you might in a modern messaging app. Afterwards, specialist doctors (in dermatology, cardiology, and internal medicine) and the patient actors themselves reviewed the conversations. The human doctors scored everything from how well history was taken, the accuracy of the diagnosis, the quality of the suggested management plan, right down to communication skills and empathy—and, of course, how well the AI interpreted the visual information. Surprising results from the simulated clinic Here’s where it gets really interesting. In this head-to-head comparison within the controlled study environment, Google found AMIE didn’t just hold its own—it often came out ahead. The AI was rated as being better than the human PCPs at interpreting the multimodal data shared during the chats. It also scored higher on diagnostic accuracy, producing differential diagnosis lists (the ranked list of possible conditions) that specialists deemed more accurate and complete based on the case details. Specialist doctors reviewing the transcripts tended to rate AMIE’s performance higher across most areas. They particularly noted “the quality of image interpretation and reasoning,” the thoroughness of its diagnostic workup, the soundness of its management plans, and its ability to flag when a situation needed urgent attention. Perhaps one of the most surprising findings came from the patient actors: they often found the AI to be more empathetic and trustworthy than the human doctors in these text-based interactions. And, on a critical safety note, the study found no statistically significant difference between how often AMIE made errors based on the images (hallucinated findings) compared to the human physicians. Technology never stands still, so Google also ran some early tests swapping out the Gemini 2.0 Flash model for the newer Gemini 2.5 Flash. Using their simulation framework, the results hinted at further gains, particularly in getting the diagnosis right (Top-3 Accuracy) and suggesting appropriate management plans. While promising, the team is quick to add a dose of realism: these are just automated results, and “rigorous assessment through expert physician review is essential to confirm these performance benefits.” Important reality checks Google is commendably upfront about the limitations here. “This study explores a research-only system in an OSCE-style evaluation using patient actors, which substantially under-represents the complexity… of real-world care,” they state clearly. Simulated scenarios, however well-designed, aren’t the same as dealing with the unique complexities of real patients in a busy clinic. They also stress that the chat interface doesn’t capture the richness of a real video or in-person consultation. So, what’s the next step? Moving carefully towards the real world. Google is already partnering with Beth Israel Deaconess Medical Center for a research study to see how AMIE performs in actual clinical settings with patient consent. The researchers also acknowledge the need to eventually move beyond text and static images towards handling real-time video and audio—the kind of interaction common in telehealth today. Giving AI the ability to ‘see’ and interpret the kind of visual evidence doctors use every day offers a glimpse of how AI might one day assist clinicians and patients. However, the path from these promising findings to a safe and reliable tool for everyday healthcare is still a long one that requires careful navigation. (Photo by Alexander Sinn) See also: Are AI chatbots really changing the world of work? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Google AMIE: AI doctor learns to ‘see’ medical images appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
-
We’ve heard endless predictions about how AI chatbots will transform work, but data paints a much calmer picture—at least for now. Despite huge and ongoing advancements in generative AI, the massive wave it was supposed to create in the world of work looks more like a ripple so far. Researchers Anders Humlum (University of Chicago) and Emilie Vestergaard (University of Copenhagen) didn’t just rely on anecdotes. They dug deep, connecting responses from two big surveys (late 2023 and 2024) with official, detailed records about jobs and pay in Denmark. The pair zoomed in on around 25,000 people working in 7,000 different places, covering 11 jobs thought to be right in the path of AI disruption. Everyone’s using AI chatbots for work, but where are the benefits? What they found confirms what many of us see: AI chatbots are everywhere in Danish workplaces now. Most bosses are actually encouraging staff to use them, a real turnaround from the early days when companies were understandably nervous about things like data privacy. Almost four out of ten employers have even rolled out their own in-house chatbots, and nearly a third of employees have had some formal training on these tools. When bosses gave the nod, the number of staff using chatbots practically doubled, jumping from 47% to 83%. It also helped level the playing field a bit. That gap between men and women using chatbots? It shrank noticeably when companies actively encouraged their use, especially when they threw in some training. So, the tools are popular, companies are investing, people are getting trained… but the big economic shift? It seems to be missing in action. Using statistical methods to compare people who used AI chatbots for work with those who didn’t, both before and after ChatGPT burst onto the scene, the researchers found… well, basically nothing. “Precise zeros,” the researchers call their findings. No significant bump in pay, no change in recorded work hours, across all 11 job types they looked at. And they’re pretty confident about this – the numbers rule out any average effect ******* than just 1%. This wasn’t just a blip, either. The lack of impact held true even for the keen beans who jumped on board early, those using chatbots daily, or folks working where the boss was actively pushing the tech. Looking at whole workplaces didn’t change the story; places with lots of chatbot users didn’t see different trends in hiring, overall wages, or keeping staff compared to places using them less. Productivity gains: More of a gentle nudge than a shove Why the big disconnect? Why all the hype and investment if it’s not showing up in paychecks or job stats? The study flags two main culprits: the productivity boosts aren’t as huge as hoped in the real world, and what little gains there are aren’t really making their way into wages. Sure, people using AI chatbots for work felt they were helpful. They mentioned better work quality and feeling more creative. But the number one benefit? Saving time. However, when the researchers crunched the numbers, the average time saved was only about 2.8% of a user’s total work hours. That’s miles away from the huge 15%, 30%, even 50% productivity jumps seen in controlled lab-style experiments (RCTs) involving similar jobs. Why the difference? A few things seem to be going on. Those experiments often focus on jobs or specific tasks where chatbots really shine (like coding help or basic customer service responses). This study looked at a wider range, including jobs like teaching where the benefits might be smaller. The researchers stress the importance of what they call “complementary investments”. People whose companies encouraged chatbot use and provided training actually did report ******* benefits – saving more time, improving quality, and feeling more creative. This suggests that just having the tool isn’t enough; you need the right support and company environment to really unlock its potential. And even those modest time savings weren’t padding wallets. The study reckons only a tiny fraction – maybe 3% to 7% – of the time saved actually showed up as higher earnings. It might be down to standard workplace inertia, or maybe it’s just harder to ask for a raise based on using a tool your boss hasn’t officially blessed, especially when many people started using them off their own bat. Making new work, not less work One fascinating twist is that AI chatbots aren’t just about doing old work tasks faster. They seem to be creating new tasks too. Around 17% of people using them said they had new workloads, mostly brand new types of tasks. This phenomenon happened more often in workplaces that encouraged chatbot use. It even spilled over to people not using the tools – about 5% of non-users reported new tasks popping up because of AI, especially teachers having to adapt assignments or spot AI-written homework. What kind of new tasks? Things like figuring out how to weave AI into daily workflows, drafting content with AI help, and importantly, dealing with the ethical side and making sure everything’s above board. It hints that companies are still very much in the ‘figuring it out’ phase, spending time and effort adapting rather than just reaping instant rewards. What’s the verdict on the work impact of AI chatbots? The researchers are careful not to write off generative AI completely. They see pathways for it to become more influential over time, especially as companies get better at integrating it and maybe as those “new tasks” evolve. But for now, their message is clear: the current reality doesn’t match the hype about a massive, immediate job market overhaul. “Despite rapid adoption and substantial investments… our key finding is that AI chatbots have had minimal impact on productivity and labor market outcomes to date,” the researchers conclude. It brings to mind that old quote about the early computer age: seen everywhere, except in the productivity stats. Two years on from ChatGPT’s launch kicking off the fastest tech adoption we’ve ever seen, its actual mark on jobs and pay looks surprisingly light. The revolution might still be coming, but it seems to be taking its time. See also: Claude Integrations: Anthropic adds AI to your favourite work tools Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Are AI chatbots really changing the world of work? appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
-
Anthropic just launched ‘Integrations’ for Claude that enables the AI to talk directly to your favourite daily work tools. In addition, the company has launched a beefed-up ‘Advanced Research’ feature for digging deeper than ever before. Starting with Integrations, the feature builds on a technical standard Anthropic released last year (the Model Context Protocol, or MCP), but makes it much easier to use. Before, setting this up was a bit technical and local. Now, developers can build secure bridges allowing Claude to connect safely with apps over the web or on your desktop. For end-users of Claude, this means you can now hook it up to a growing list of popular work software. Right out of the gate, they’ve included support for ten big names: Atlassian’s Jira and Confluence (hello, project managers and dev teams!), the automation powerhouse Zapier, Cloudflare, customer comms tool Intercom, plus Asana, Square, Sentry, PayPal, Linear, and Plaid. Stripe and GitLab are joining the party soon. So, what’s the big deal? The real advantage here is context. When Claude can see your project history in Jira, read your team’s knowledge base in Confluence, or check task updates in Asana, it stops guessing and starts understanding what you’re working on. “When you connect your tools to Claude, it gains deep context about your work—understanding project histories, task statuses, and organisational knowledge—and can take actions across every surface,” explains Anthropic. They add, “Claude becomes a more informed collaborator, helping you execute complex projects in one place with expert assistance at every step.” Let’s look at what this means in practice. Connect Zapier, and you suddenly give Claude the keys to thousands of apps linked by Zapier’s workflows. You could just ask Claude, conversationally, to trigger a complex sequence – maybe grab the latest sales numbers from HubSpot, check your calendar, and whip up some meeting notes, all without you lifting a finger in those apps. For teams using Atlassian’s Jira and Confluence, Claude could become a serious helper. Think drafting product specs, summarising long Confluence documents so you don’t have to wade through them, or even creating batches of linked Jira tickets at once. It might even spot potential roadblocks by analysing project data. And if you use Intercom for customer chats, this integration could be a game-changer. Intercom’s own AI assistant, Fin, can now work with Claude to do things like automatically create a bug report in Linear if a customer flags an issue. You could also ask Claude to sift through your Intercom chat history to spot patterns, help debug tricky problems, or summarise what customers are saying – making the whole journey from feedback to fix much smoother. Anthropic is also making it easier for developers to build even more of these connections. They reckon that using their tools (or platforms like Cloudflare that handle the tricky bits like security and setup), developers can whip up a custom Integration with Claude in about half an hour. This could mean connecting Claude to your company’s unique internal systems or specialised industry software. Beyond tool integrations, Claude gets a serious research upgrade Alongside these new connections, Anthropic has given Claude’s Research feature a serious boost. It could already search the web and your Google Workspace files, but the new ‘Advanced Research’ mode is built for when you need to dig really deep. Flip the switch for this advanced mode, and Claude tackles big questions differently. Instead of just one big search, it intelligently breaks your request down into smaller chunks, investigates each part thoroughly – using the web, your Google Docs, and now tapping into any apps you’ve connected via Integrations – before pulling it all together into a detailed report. Now, this deeper digging takes a bit more time. While many reports might only take five to fifteen minutes, Anthropic says the really complex investigations could have Claude working away for up to 45 minutes. That might sound like a while, but compare it to the hours you might spend grinding through that research manually, and it starts to look pretty appealing. Importantly, you can trust the results. When Claude uses information from any source – whether it’s a website, an internal doc, a Jira ticket, or a Confluence page – it gives you clear links straight back to the original. No more wondering where the AI got its information from; you can check it yourself. These shiny new Integrations and the Advanced Research mode are rolling out now in beta for folks on Anthropic’s paid Max, Team, and Enterprise plans. If you’re on the Pro plan, don’t worry – access is coming your way soon. Also worth noting: the standard web search feature inside Claude is now available everywhere, for everyone on any paid Claude.ai plan (Pro and up). No more geographical restrictions on that front. Putting it all together, these updates and integrations show Anthropic is serious about making Claude genuinely useful in a professional context. By letting it plug directly into the tools we already use and giving it more powerful ways to analyse information, they’re pushing Claude towards being less of a novelty and more of an essential part of the modern toolkit. (Image credit: Anthropic) See also: Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Claude Integrations: Anthropic adds AI to your favourite work tools appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
-
Developer experience (DevEx) is more than just a buzzphrase. With the rise of complex tech stacks, remote-first teams, and continuous delivery, developers’ work processes have become more complex. Poor DevEx leads to slower deployments, burnout, and increased turnover. Great DevEx, on the other hand, boosts productivity, developer satisfaction, and the quality of shipped code. Developer Experience Insight Tools help engineering teams measure, optimise, and elevate how developers work. The tools track workflows, streamline collaboration, catch issues early, and ultimately create an environment where devs can do their best work. Why developer experience (DevEx) matters In the evolving world of software development, providing a seamless and efficient developer experience (DevEx) has become important. DevEx impacts productivity, code quality, and overall project success. A positive DevEx reduces onboarding time, minimises frustration, and fosters innovation by letting developers focus on solving problems rather than battling tools or workflows. Best practices for implementing developer experience (DevEx) insight tools Here are the key best practices: 1. Set clear objectives Before choosing a tool, identify the specific challenges you want to address – whether it’s reducing lead time, improving code review efficiency, or increasing deployment frequency. Clear goals guide tool selection and help you measure success. 2. Include developers in the decision process Involve developers early when evaluating DevEx tools. Their feedback ensures the solution aligns with daily workflows and avoids adding unnecessary complexity. A tool embraced by engineers is far more likely to deliver impact. 3. Focus on seamless integration Choose tools that integrate well with your existing toolchain – like Git platforms, CI/CD systems, IDEs, and project management software. This ensures insights flow naturally without disrupting developer habits. 4. Start with a pilot team Roll out the tool to a small group first. Collect feedback, refine configurations, and evaluate results before expanding across the organisation. A phased rollout minimises risk and builds internal champions. 5. Prioritise actionable insights Avoid tools that overwhelm with vanity metrics. Look for platforms that surface specific, actionable recommendations developers can use to improve workflows and outcomes immediately. 6. Continuously monitor and Iterate Developer tools evolve. Regularly review tool performance, gather feedback, and adjust settings or processes as needed. Continuous improvement is key to long-term DevEx success. Top 10 developer experience insight tools of 2025 1. Milestone Milestone is built for engineering operations leaders who need visibility into the actual developer experience. It aggregates data across Git repositories, issue trackers, and CI/CD platforms to uncover bottlenecks in delivery, collaboration, and productivity. Unlike traditional tools, Milestone emphasises context-aware metrics like review latency, merge frequency, and time-in-status. It helps managers pinpoint workflow friction and enable smoother engineering cycles, while giving developers visibility into how their work contributes to team goals. Highlights: Seamless integration with GitHub, Jira, and CI/CD tools Rich dashboards for tracking velocity, quality, and workflow health Helps identify systemic delivery delays Suitable for both team leads and individual contributors 2. Visual Studio Code Visual Studio Code (VS Code) is more than just an editor – it’s a central DevEx powerhouse. With its blazing speed, massive extension ecosystem, and deep integrations, VS Code allows developers to stay productive without leaving the IDE. Its features like IntelliSense, Live Share, built-in terminal, and version control support streamline the coding experience. Developers can collaborate, debug, and deploy – all from one interface. With growing support for cloud-based development and AI-powered tools (like GitHub Copilot), VS Code continues to redefine DevEx in 2025. Highlights: Robust plugin ecosystem (AI, Git, testing, Docker, etc.) Live Share enables real-time collaboration Built-in Git support and terminal access Customisable themes, layouts, and keyboard shortcuts 3. SonarQube SonarQube offers continuous inspection of code quality through static analysis, helping teams reduce technical debt and maintain clean, secure codebases. It identifies bugs and security vulnerabilities in more than 30 languages. By integrating into CI/CD pipelines, SonarQube enforces quality gates before code gets merged. Developers receive real-time feedback on code issues and maintainability, improving both DevEx and long-term project health. In 2025, SonarQube remains a go-to tool for teams that treat quality as a DevEx priority. Highlights: Static analysis for 30+ languages Real-time feedback during pull requests Technical debt tracking and maintainability scoring Tight CI/CD and Git integration 4. LogRocket LogRocket enhances frontend DevEx by providing user session replays and performance analytics. It captures how users interact with your application – clicks, navigation, console logs, and network activity – making bug reproduction and performance debugging far more efficient. It bridges the gap between code and user experience, letting developers trace issues quickly. LogRocket also offers integrations with Sentry, Segment, and other analytics platforms to add context to every user issue. Highlights: Session replays with console and network logs Frontend performance monitoring Automatic capture of UI errors and crashes Support for React, Vue, Angular, and more 5. OverOps OverOps specialises in identifying and preventing critical errors in production. It captures the full state of your application (stack trace, variable state, logs) at the moment of failure – without relying on log files alone. OverOps gives developers insight into “why” errors happen, not just “what” happened. This enables faster root-cause analysis, fewer regressions, and higher deployment confidence – all important to frictionless DevEx in modern environments. Highlights: Automated root-cause analysis of runtime errors Continuous monitoring in pre-prod and production Eliminates reliance on verbose logging Insights into code changes that introduced issues 6. Buddy Buddy is a modern DevOps automation platform that enhances DevEx through simple, visual pipelines. With a drag-and-drop UI, developers can set up and manage CI/CD workflows, run tests, build containers, and deploy – all without complex scripts. What makes Buddy unique is its speed and simplicity. It supports Docker, Kubernetes, AWS, and dozens of integrations out-of-the-box, helping teams ship faster while keeping DevEx at the forefront. Highlights: Intuitive UI for CI/CD automation Docker, Kubernetes, and serverless deployment support Real-time feedback on build and test status Git-based workflow and pipeline versioning 7. Docusaurus Docusaurus improves DevEx by making documentation creation and maintenance as easy and developer-friendly as possible. Built by Facebook, it allows dev teams to build fast, versioned, and customisable documentation websites using Markdown and React. In 2025, Docusaurus continues to lead in the “docs as code” movement, helping developers maintain high-quality internal and external documentation without leaving their code editors. Better docs lead to faster onboarding, fewer support tickets, and smoother development workflows. Highlights: Easy setup with React + Markdown Built-in search, versioning, and localisation Custom theming and plugin support Git-based deployment with GitHub Pages or Vercel 8. Exaflow Exaflow is a DevEx observability platform focused on surfacing friction in development and delivery workflows. It aggregates signals from Git providers, issue trackers, code reviews, and builds, offering real-time insights into how teams work. It emphasises transparency and operational health, providing metrics like lead time, handoff delays, and deployment frequency. By highlighting where delays or inefficiencies happen, Exaflow helps teams proactively improve DevEx and delivery outcomes. Highlights: Workflow observability and DevOps telemetry Actionable insights for velocity and bottlenecks Git, Jira, and CI/CD tool integrations Visual timelines of developer handoffs 9. Replit Replit is an online IDE that brings DevEx into the browser. Developers can code, collaborate, and deploy without setting up a local environment. With support for 50+ languages, instant hosting, and live multiplayer coding, it’s a game-changer for fast experimentation and learning. Replit is particularly impactful for onboarding new developers or running internal tooling. It supports AI code suggestions, deployment previews, and GitHub integrations, and offers a frictionless experience from idea to execution. Highlights: Cloud-based, zero-setup IDE Real-time collaboration with multiplayer editing Instant hosting and deployment features Built-in AI tools for autocomplete and debugging 10. Codacy Codacy brings automated code reviews into the DevEx toolkit. It analyses every commit and pull request to flag issues related to code quality, security, duplication, and style – before they reach production. Codacy integrates with your CI and Git workflows, helping developers maintain consistent standards without manual review overhead. It also enables teams to track quality trends over time, ensuring scalable and maintainable codebases. Highlights: Automated code analysis for multiple languages Configurable quality standards and code patterns GitHub/GitLab/Bitbucket CI/CD integration Security and maintainability insights What to consider when selecting a DevEx insight tool? Selecting the right DevEx tool can make or break your team’s efficiency. Below are critical factors to keep in mind: Compatibility with existing ecosystem: Does the tool integrate with your current tech stack, repositories, and CI/CD pipelines? Ease of use: Tools should be intuitive and require minimal learning curves for developers to adopt quickly. Customisability: Every organisation has unique needs. The tools should allow customisation to fit your workflows. Scalability: Ensure the tool can grow with your development team, projects, and increasing workloads. Cost-effectiveness: Evaluate the pricing model to ensure it aligns with your budget without sacrificing features. Community and support: A vibrant community or robust technical support can make the adoption process smoother and keep the tool up-to-date. Insight & analytics: Choose tools that provide powerful analytics and actionable insights to improve workflows. Compliance standards: Consider whether the tool adheres to regulatory and security requirements relevant to your industry. As software teams continue to scale, improving Developer Experience is increasingly important. The right DevEx insight tools allow you to identify friction, empower your engineers, and build healthier development cultures. The post Top 10 developer experience insight tools appeared first on AI News. View the full article
-
The classroom hasn’t changed much in over a century. A teacher at the front, rows of students listening, and a curriculum defined by what’s testable – not necessarily what’s meaningful. But AI, as arguably the most powerful tool humanity has created in the last few years, is about to break that model open. Not with smarter software or faster grading, but by forcing us to ask: “What is the purpose of education in a world where machines could teach?” At AI News, rather than speculate about distant futures or lean on product announcements and edtech deals, we started a conversation – with an AI. We asked it what it sees when it looks at the classroom, the teacher, and the learner. What follows is a distilled version of that exchange, given here not as a technical analysis, but as a provocation. The system cracks Education is under pressure worldwide: Teachers are overworked, students are disengaged, and curricula feel outdated in a changing world. Into this comes AI – not as a patch or plug-in, but as a potential accelerant. Our opening prompt: “What roles might an AI play in education?“ The answer was wide-ranging: Personalised learning pathways Intelligent tutoring systems Administrative efficiency Language translation and accessibility tools Behavioural and emotional recognition Scalable, always-available content delivery These are features of an education system, its nuts and bolts. But what about meaning and ethics? Flawed by design? One concern kept resurfacing: bias. We asked the AI: “If you’re trained on the internet – and the internet is the output of biased, flawed human thought – doesn’t that mean your responses are equally flawed?” The AI acknowledged the logic. Bias is inherited. Inaccuracies, distortions, and blind spots all travel from teacher to pupil. What an AI learns, it learns from us, and it can reproduce our worst habits at vast scale. But we weren’t interested in letting human teachers off the hook either. So we asked: “Isn’t bias true of human educators too?” The AI agreed: human teachers are also shaped by the limitations of their training, culture, and experience. Both systems – AI and human – are imperfect. But only humans can reflect and care. That led us to a deeper question: if both AI and human can reproduce bias, why use AI at all? Why use AI in education? The AI outlined what it felt were its clear advantages, which seemed to be systemic, rather than revolutionary. The aspect of personalised learning intrigued us – after all, doing things fast and at scale is what software and computers are good at. We asked: “How much data is needed to personalise learning effectively?“ The answer: it varies. But at scale, it could require gigabytes or even terabytes of student data – performance, preferences, feedback, and longitudinal tracking over years. Which raises its own question: “What do we trade in terms of privacy for that precision?” A personalised or fragmented future? Putting aside the issue of whether we’re happy with student data being codified and ingested, if every student were to receive a tailored lesson plan, what happens to the shared experience of learning? Education has always been more than information. It’s about dialogue, debate, discomfort, empathy, and encounters with other minds, not just mirrored algorithms. AI can tailor a curriculum, but it can’t recreate the unpredictable alchemy of a classroom. We risk mistaking customisation for connection. “I use ChatGPT to provide more context […] to plan, structure and compose my essays.” – James, 17, Ottawa, Canada. The teacher reimagined Where does this leave the teacher? In the AI’s view: liberated. Freed from repetitive tasks and administrative overload, the teacher is able to spend more time guiding, mentoring, and cultivating important thinking. But this requires a shift in mindset – from delivering knowledge to curating wisdom. In broad terms, from part-time administrator, part-time teacher, to in-classroom collaborator. AI won’t replace teachers, but it might reveal which parts of the teaching job were never the most important. “The main way I use ChatGPT is to either help with ideas for when I am planning an essay, or to reinforce understanding when revising.” – Emily, 16, Eastbourne College, ***. What we teach next So, what do we want students to learn? In an AI-rich world, important thinking, ethical reasoning, and emotional intelligence rise in value. Ironically, the more intelligent our machines become, the more we’ll need to double down on what makes us human. Perhaps the ultimate lesson isn’t in what AI can teach us – but in what it can’t, or what it shouldn’t even try. Conclusion The future of education won’t be built by AI alone. The is our opportunity to modernise classrooms, and to reimagine them. Not to fear the machine, but to ask the ******* question: “What is learning in a world where all knowledge is available?” Whatever the answer is – that’s how we should be teaching next. (Image source: “Large lecture college classes” by Kevin Dooley is licensed under CC BY 2.0) See also: AI in education: Balancing promises and pitfalls Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Conversations with AI: Education appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
-
If you’re building with AI, or trying to defend against the less savoury side of the technology, Meta just dropped new Llama security tools. The improved security tools for the Llama AI models arrive alongside fresh resources from Meta designed to help cybersecurity teams harness AI for defence. It’s all part of their push to make developing and using AI a bit safer for everyone involved. Developers working with the Llama family of models now have some upgraded kit to play with. You can grab these latest Llama Protection tools directly from Meta’s own Llama Protections page, or find them where many developers live: Hugging Face and GitHub. First up is Llama Guard 4. Think of it as an evolution of Meta’s customisable safety filter for AI. The big news here is that it’s now multimodal so it can understand and apply safety rules not just to text, but to images as well. That’s crucial as AI applications get more visual. This new version is also being baked into Meta’s brand-new Llama API, which is currently in a limited preview. Then there’s LlamaFirewall. This is a new piece of the puzzle from Meta, designed to act like a security control centre for AI systems. It helps manage different safety models working together and hooks into Meta’s other protection tools. Its job? To spot and block the kind of risks that keep AI developers up at night – things like clever ‘prompt injection’ attacks designed to trick the AI, potentially dodgy code generation, or risky behaviour from AI plug-ins. Meta has also given its Llama Prompt Guard a tune-up. The main Prompt Guard 2 (86M) model is now better at sniffing out those pesky jailbreak attempts and prompt injections. More interestingly, perhaps, is the introduction of Prompt Guard 2 22M. Prompt Guard 2 22M is a much smaller, nippier version. Meta reckons it can slash latency and compute costs by up to 75% compared to the ******* model, without sacrificing too much detection power. For anyone needing faster responses or working on tighter budgets, that’s a welcome addition. But Meta isn’t just focusing on the AI builders; they’re also looking at the cyber defenders on the front lines of digital security. They’ve heard the calls for better AI-powered tools to help in the fight against cyberattacks, and they’re sharing some updates aimed at just that. The CyberSec Eval 4 benchmark suite has been updated. This open-source toolkit helps organisations figure out how good AI systems actually are at security tasks. This latest version includes two new tools: CyberSOC Eval: Built with the help of cybersecurity experts CrowdStrike, this framework specifically measures how well AI performs in a real Security Operation Centre (SOC) environment. It’s designed to give a clearer picture of AI’s effectiveness in threat detection and response. The benchmark itself is coming soon. AutoPatchBench: This benchmark tests how good Llama and other AIs are at automatically finding and fixing security holes in code before the bad guys can exploit them. To help get these kinds of tools into the hands of those who need them, Meta is kicking off the Llama Defenders Program. This seems to be about giving partner companies and developers special access to a mix of AI solutions – some open-source, some early-access, some perhaps proprietary – all geared towards different security challenges. As part of this, Meta is sharing an AI security tool they use internally: the Automated Sensitive Doc Classification Tool. It automatically slaps security labels on documents inside an organisation. Why? To stop sensitive info from walking out the door, or to prevent it from being accidentally fed into an AI system (like in RAG setups) where it could be leaked. They’re also tackling the problem of fake audio generated by AI, which is increasingly used in scams. The Llama Generated Audio Detector and Llama Audio Watermark Detector are being shared with partners to help them spot AI-generated voices in potential phishing calls or fraud attempts. Companies like ZenDesk, Bell Canada, and AT&T are already lined up to integrate these. Finally, Meta gave a sneak peek at something potentially huge for user privacy: Private Processing. This is new tech they’re working on for WhatsApp. The idea is to let AI do helpful things like summarise your unread messages or help you draft replies, but without Meta or WhatsApp being able to read the content of those messages. Meta is being quite open about the security side, even publishing their threat model and inviting security researchers to poke holes in the architecture before it ever goes live. It’s a sign they know they need to get the privacy aspect right. Overall, it’s a broad set of AI security announcements from Meta. They’re clearly trying to put serious muscle behind securing the AI they build, while also giving the wider tech community better tools to build safely and defend effectively. See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Meta beefs up AI security with new Llama tools appeared first on AI News. View the full article
-
The *** has cut the ribbon on a pioneering electron beam (E-Beam) lithography facility to build the semiconductor chips of the future. What makes this special? It’s the first of its kind in Europe, and only the second facility like it on the planet—the other being in Japan. So, what’s the big deal about E-Beam lithography? Imagine trying to draw incredibly complex patterns, but thousands of times smaller than a human hair. That’s essentially what this technology does, using a focused beam of tiny electrons. Such precision is vital for designing the microscopic components inside the chips that run everything from our smartphones and gaming consoles to life-saving medical scanners and advanced defence systems. Semiconductors are already big business for the ***, adding around £10 billion to its economy each year. And that figure is only expected to climb, potentially hitting £17 billion by the end of the decade. Nurturing this sector is a major opportunity for the ***—not just for bragging rights in advanced manufacturing, but for creating high-value jobs and driving real economic growth. Speaking at the launch of the facility in Southampton, Science Minister Lord Patrick Vallance said: “Britain is home to some of the most exciting semiconductor research anywhere in the world—and Southampton’s new E-Beam facility is a major boost to our national capabilities. “By investing in both infrastructure and talent, we’re giving our researchers and innovators the support they need to develop next-generation chips right here in the ***.” Lord Vallance’s visit wasn’t just a photo opportunity, though. It came alongside some sobering news: fresh research published today highlights that one of the biggest hurdles facing the ***’s growing chip industry is finding enough people with the right skills. We’re talking about a serious talent crunch. When you consider that a single person working in semiconductors contributes an average of £460,000 to the economy each year, you can see why plugging this skills gap is so critical. So, what’s the plan? The government isn’t just acknowledging the problem; they’re putting money where their mouth is with a £4.75 million semiconductor skills package. The idea is to build up that talent pipeline, making sure universities like Southampton – already powerhouses of chip innovation – have resources like the E-Beam lab and the students they need. “Our £4.75 million skills package will support our Plan for Change by helping more young people into high-value semiconductors careers, closing skills gaps and backing growth in this critical sector,” Lord Vallance explained. Here’s where that cash is going: Getting students hooked (£3 million): Fancy £5,000 towards your degree? 300 students starting Electronics and Electrical Engineering courses this year will get just that, along with specific learning modules to show them what a career in semiconductors actually involves, particularly in chip design and making the things. Practical chip skills (£1.2 million): It’s one thing learning the theory, another designing a real chip. This pot will fund new hands-on chip design courses for students (undergrad and postgrad) and even train up lecturers. They’re also looking into creating conversion courses to tempt talented people from other fields into the chip world. Inspiring the next generation (Nearly £550,000): To really build a long-term pipeline, you need to capture interest early. This funding aims to give 7,000 teenagers (15-18) and 450 teachers some real, hands-on experience with semiconductors, working with local companies in existing *** chip hotspots like Newport, Cambridge, and Glasgow. The goal is to show young people the cool career paths available right on their doorstep. Ultimately, the hope is that this targeted support will give the *** semiconductor scene the skilled workforce it needs to thrive. It’s about encouraging more students to jump into these valuable careers, helping companies find the people they desperately need, and making sure the *** stays at the forefront of the technologies that will shape tomorrow’s economy. Professor Graham Reed, who heads up the Optoelectronics Research Centre (ORC) at Southampton University, commented: “The introduction of the new E-Beam facility will reinforce our position of hosting the most advanced cleanroom in *** academia. “It facilitates a vast array of innovative and industrially relevant research, and much needed semiconductor skills training.” Putting world-class tools in the hands of researchers while simultaneously investing in the people who will use them will help to cement the ***’s leadership in semiconductors. See also: AI in education: Balancing promises and pitfalls Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post *** opens Europe’s first E-Beam semiconductor chip lab appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
-
[AI]Duolingo shifts to AI-first model, cutting contractor roles
ChatGPT posted a topic in World News
Duolingo is restructuring parts of its workforce as it shifts toward becoming an “AI-first” company, according to an internal memo from CEO and co-founder Luis von Ahn that was later shared publicly on the company’s LinkedIn page. The memo outlines a series of planned changes to how the company operates, with a particular focus on how artificial intelligence will be used to streamline processes, reduce manual tasks, and scale content development. Duolingo will gradually stop using contractors for work that AI can take over. The company will also begin evaluating job candidates and employee performance partly based on how they use AI tools. Von Ahn said that headcount increases will only be considered when a team can no longer automate parts of its work effectively. “Being AI-first means we will need to rethink much of how we work. Making minor tweaks to systems designed for humans won’t get us there,” von Ahn wrote. “AI helps us get closer to our mission. To teach well, we need to create a massive amount of content, and doing that manually doesn’t scale.” One of the main drivers behind the shift is the need to produce content more quickly, and Von Ahn says that producing new content manually would take decades. By integrating AI into its workflow, Duolingo has replaced processes he described as slow and manual those that are more efficient and automated. The company has also used AI to develop features that weren’t previously feasible such as an AI-powered video call feature, which aims to provide tutoring to the level of human instructors. According to von Ahn, tools like this move the Duolingo platform closer to its mission – to deliver language instruction globally. The internal shift is not limited to content creation or product development. Von Ahn said most business functions will be expected to rethink how they operate and identify opportunities to embed AI into daily work. Teams will be encouraged to adopt what he called “constructive constraints” – policies that push them to prioritise automation before requesting additional resources. The move echoes a broader trend in the tech industry. Shopify CEO Tobi Lütke recently gave a similar directive to employees, urging them to demonstrate why tasks couldn’t be completed with AI before requesting new headcount. Both companies appear to be setting new expectations for how teams manage growth in an AI-dominated environment. Duolingo’s leadership maintains the changes are not intended to reduce its focus on employee well-being, and the company will continue to support staff with training, mentorship, and tools designed to help employees adapt to new workflows. The goal, he wrote, is not to replace staff with AI, but to eliminate bottlenecks and allow employees to concentrate on complex or creative work. “AI isn’t just a productivity boost,” von Ahn wrote. “It helps us get closer to our mission.” The company’s move toward more automation reflects a belief that waiting too long to embrace AI could be a missed opportunity. Von Ahn pointed to Duolingo’s early investment in mobile-first design in 2012 as a model. That shift helped the company gain visibility and user adoption, including being named Apple’s iPhone App of the Year in 2013. The decision to go “AI-first” is framed as a similarly forward-looking step. The transition is expected to take some time. Von Ahn acknowledged that not all systems are ready for full automation and that integrating AI into certain areas, like codebase analysis, could take longer. Nevertheless, he said moving quickly – even if it means accepting occasional setbacks – is more important than waiting for the technology to be fully mature. By placing AI at the centre of its operations, Duolingo is aiming to deliver more scalable learning experiences and manage internal resources more efficiently. The company plans to provide additional updates as the implementation progresses. (Photo by Unsplash) See also: AI in education: Balancing promises and pitfalls Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Duolingo shifts to AI-first model, cutting contractor roles appeared first on AI News. View the full article -
At the Apsara Conference in Hangzhou, hosted by Alibaba Cloud, China’s AI startups emphasised their efforts to develop large language models. The companies’ efforts follow the announcement of OpenAI’s latest LLMs, including the o1 generative pre-trained transformer model backed by Microsoft. The model is intended to tackle difficult tasks, paving the way for advances in science, coding, and mathematics. During the conference, Kunal Zhilin, founder of Moonshot AI, underlined the importance of the o1 model, adding that it has the potential to reshape various industries and create new opportunities for AI startups. Zhilin stated that reinforcement learning and scalability might be pivotal for AI development. He spoke of the scaling law, which states that larger models with more training data perform better. “This approach pushes the ceiling of AI capabilities,” Zhilin said, adding that OpenAI o1 has the potential to disrupt sectors and generate new opportunities for startups. OpenAI has also stressed the model’s ability to solve complex problems, which it says operate in a manner similar to human thinking. By refining its strategies and learning from mistakes, the model improves its problem-solving capabilities. Zhilin said companies with enough computing power will be able to innovate not only in algorithms, but also in foundational AI models. He sees this as pivotal, as AI engineers rely increasingly on reinforcement learning to generate new data after exhausting available organic data sources. StepFun CEO Jiang Daxin concurred with Zhilin but stated that computational power remains a big challenge for many start-ups, particularly due to US trade restrictions that hinder ******** enterprises’ access to advanced semiconductors. “The computational requirements are still substantial,” Daxin stated. An insider at Baichuan AI has said that only a small group of ******** AI start-ups — including Moonshot AI, Baichuan AI, Zhipu AI, and MiniMax — are in a position to make large-scale investments in reinforcement learning. These companies — collectively referred to as the “AI tigers” — are involved heavily in LLM development, pushing the next generation of AI. More from the Apsara Conference Also at the conference, Alibaba Cloud made several announcements, including the release of its Qwen 2.5 model family, which features advances in coding and mathematics. The models range from 0.5 billion to 72 billion parameters and support approximately 29 languages, including ********, English, French, and Spanish. Specialised models such as Qwen2.5-Coder and Qwen2.5-Math have already gained some traction, with over 40 million downloads on platforms Hugging Face and ModelScope. Alibaba Cloud added to its product portfolio, delivering a text-to-video model in its picture generator, Tongyi Wanxiang. The model can create videos in realistic and animated styles, with possible uses in advertising and filmmaking. Alibaba Cloud unveiled Qwen 2-VL, the latest version of its vision language model. It handles videos longer than 20 minutes, supports video-based question-answering, and is optimised for mobile devices and robotics. For more information on the conference, click here. (Photo by: @Guy_AI_Wise via X) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post OpenAI’s latest LLM opens doors for China’s AI startups appeared first on AI News. View the full article
-
If you’ve ever tried to get your cybersecurity news picked up by media outlets, you’ll know just how much of a challenge (and how disheartening) it can be. You pour hours into what you think is an excellent announcement about your new security tool, threat research, or vulnerability discovery, only to watch it disappear into journalists’ overflowing inboxes without a trace. The cyber PR space is brutally competitive. Reporters at top publications receive tens, if not hundreds, of pitches each day, and they have no choice but to be highly selective about which releases they choose to cover and which to discard. Your challenge then isn’t just creating a good press release, it’s making one that grabs attention and stands out in an industry drowning in technical jargon and “revolutionary” solutions. Why most cybersecurity press releases fall flat Let’s first look at some of the main reasons why many cyber press releases fail: They’re too complex from the start, losing non-technical reporters They bury the actual news under corporate marketing speak. They focus on product features rather than the real-world impact or problems they solve. They lack credible data or specific research findings that journalists can cite as support. Most of these problems have one main theme: Journalists aren’t interested in promoting your product or your business. They are looking after their interests and seeking newsworthy stories their audiences care about. Keep this in mind and make their job easier by showing them exactly why your announcement matters. Learning how to write a cybersecurity press release What does a well-written press release look like? Alongside the reasons listed above, many companies make the mistake of submitting poorly formatted releases that journalists will be unlikely to spend time reading. It’s worth learning how to write a cybersecurity press release properly, including the preferred structure (headline, subheader, opening paragraph, boilerplate, etc). And, be sure to review some examples of high-quality press releases as well. AI strategies that transform your press release process Let’s examine how AI tools can significantly enhance your cyber PR at every stage. 1. Research Enhancement Use AI tools to track media coverage patterns and identify emerging trends in cybersecurity news. You can analyse which types of security stories gain traction, and this can help you position your announcement in that context. Another idea is to use LLMs (like Google’s Gemini or OpenAI’s ChatGPT) to analyse hundreds of successful cybersecurity press releases in a niche similar to yours. Ask it to identify common elements in those that generated significant coverage, and then use these same features in your cyber PR efforts. To take this a step further, AI-powered sentiment analysis can help you understand how different audience segments receive specific cybersecurity topics. The intelligence can help you tailor your messaging to address current concerns and capitalise on positive industry momentum. 2. Writing assistance If you struggle to convey complex ideas and terminology in more accessible language, consider asking the LLM to help simplify your messaging. This can help transform technical specifications into clear, accessible language that non-technical journalists can understand. Since the headline is the most important part of your release, use an LLM to generate a handful of options based on your core announcement, then select the best one based on clarity and impact. Once your press release is complete, run it through an LLM to identify and replace jargon that might be second nature to your security team but may be confusing to general tech reporters. 3. Visual storytelling If you are struggling to find ways to explain your product or service in accessible language, visuals can help. AI image generation tools, like Midjourney, create custom visuals based on prompts that help illustrate your message. The latest models can handle highly complex tasks. With a bit of prompt engineering (and by incorporating the press release you want help with), you should be able to create accompanying images and infographics that bring your message to life. 4. Video content Going one step further than a static image, a brief AI-generated explainer video can sit alongside your press release, providing journalists with ready-to-use content that explains complex security concepts. Some ideas include: Short Explainer Videos: Use text-to-video tools to turn essential sections of your press release into a brief (60 seconds or less) animated or stock-footage-based video. You can usually use narration and text overlays directly on the AI platforms as well. AI Avatar Summaries: Several tools now enable you to create a brief video featuring an AI avatar that presents the core message of the press release. A human-looking avatar reads out the content and delivers an audio and video component for your release. Data Visualisation Videos: Use AI tools to animate key statistics or processes described in the release for enhanced clarity. Final word Even as you use the AI tools you have at your disposal, remember that the most effective cybersecurity press releases still require that all-important human insight and expertise. Your goal isn’t to automate the entire process. Instead, use AI to enhance your cyber PR efforts and make your releases stand out from the crowd. AI should help emphasise, not replace, the human elements that make security stories so engaging and compelling. Be sure to shine a spotlight on the researchers who made the discovery, the real-world implications of any threat vulnerabilities you uncover, and the people security measures ultimately protect. Combine this human-focused storytelling with the power of AI automation, and you’ll ensure that your press releases and cyber PR campaigns get the maximum mileage. The post AI strategies for cybersecurity press releases that get coverage appeared first on AI News. View the full article
-
The role of AI in education is a controversial subject, bringing both exciting possibilities and serious challenges. There’s a real push to bring AI into schools, and you can see why. The recent executive order on youth education from President Trump recognised that if future generations are going to do well in an increasingly automated world, they need to be ready. “To ensure the United States remains a global leader in this technological revolution, we must provide our nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” President Trump declared. So, what does AI actually look like in the classroom? One of the biggest hopes for AI in education is making learning more personal. Imagine software that can figure out how individual students are doing, then adjust the pace and materials just for them. This could mean finally moving away from the old one-size-fits-all approach towards learning environments that adapt and offer help exactly where it’s needed. The US executive order hints at this, wanting to improve results through things like “AI-based high-quality instructional resources” and “high-impact tutoring.” And what about teachers? AI could be a huge help here too, potentially taking over tedious admin tasks like grading, freeing them up to actually teach. Plus, AI software might offer fresh ways to present information. Getting kids familiar with AI early on could also take away some of the mystery around the technology. It might spark their “curiosity and creativity” and give them the foundation they need to become “active and responsible participants in the workforce of the future.” The focus stretches to lifelong learning and getting people ready for the job market. On top of that, AI tools like text-to-speech or translation features can make learning much more accessible for students with disabilities, opening up educational environments for everyone. Not all smooth sailing: The challenges ahead for AI in education While the potential is huge, we need to be realistic about the significant hurdles and potential downsides. First off, AI runs on student data – lots of it. That means we absolutely need strong rules and security to make sure this data is collected ethically, used correctly, and kept safe from breaches. Privacy is paramount here. Then there’s the bias problem. If the data used to train AI reflects existing unfairness in society (and let’s be honest, it often does), the AI could end up repeating or even worsening those inequalities. Think biased assessments or unfair resource allocation. Careful testing and constant checks are crucial to catch and fix this. We also can’t ignore the digital divide. If some students don’t have reliable internet, the right devices, or the necessary tech infrastructure at home or school, AI could widen the gap between the haves and have-nots. It’s vital that everyone gets fair access. There’s also a risk that leaning too heavily on AI education tools might stop students from developing essential skills like critical thinking. We need to teach them how to use AI as a helpful tool, not a crutch they can’t function without. Maybe the biggest piece of the puzzle, though, is making sure our teachers are ready. As the executive order rightly points out, “We must also invest in our educators and equip them with the tools and knowledge.” This isn’t just about knowing which buttons to push; teachers need to understand how AI fits into teaching effectively and ethically. That requires solid professional development and ongoing support. A recent GMB Union poll found that while about a fifth of *** schools are using AI now, the staff often aren’t getting the training they need: Finding the right path forward It’s going to take everyone – governments, schools, tech companies, and teachers – pulling together in order to ensure that AI plays a positive role in education. We absolutely need clear policies and standards covering ethics, privacy, bias, and making sure AI is accessible to all students. We also need to keep investing in research to figure out the best ways to use AI in education and to build tools that are fair and effective. And critically, we need a long-term commitment to teacher education to get educators comfortable and skilled with these changes. Part of this is building broad AI literacy, making sure all students get a basic understanding of this technology and how it impacts society. AI could be a positive force in education – making it more personalised, efficient, and focused on the skills students actually need. But turning that potential into reality means carefully navigating those tricky ethical, practical, and teaching challenges head-on. See also: How does AI judge? Anthropic studies the values of Claude Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI in education: Balancing promises and pitfalls appeared first on AI News. View the full article
-
The third edition of Machines Can See (MCS) Summit has concluded at Dubai’s Museum of the Future. More than 300 start‑ups pitched to investors from EQT Ventures, Balderton, Lakestar, e& capital and Mubadala, and more than 3,500 delegates from 45 countries attended the summit, while online engagement levels were high (4.7 million views). Real-time updates with the #MCS2025 hashtag are projected to exceed 5 million views. The summit was hosted by UAE-based Polynome Group under the patronage of H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum. Strategic backers included Digital Dubai, Dubai Police, Emirates, Amazon Web Services, NVIDIA, IBM, SAP, MBZUAI among others. “In just three years, MCS has evolved from a specialist meet‑up into a true crossroads for the world’s top minds in science, business and public policy. The week proved that when researchers, entrepreneurs and governments share one stage, we move a step closer to transparent, human‑centred AI that delivers real value for society,” said Alexander Khanin, founder & CEO of Polynome Group Landmark agreements announced live on stage During the two‑day programme, several high‑profile agreements were signed at the summit, including: A trilateral Memorandum of Understanding between Astana Hub (Kazakhstan), IT‑Park Uzbekistan and Al‑Farabi Innovation Hub (UAE), creating a Central‑Asia‑to‑MENA soft‑landing platform for high‑growth start‑ups. A Google Cloud initiative offering no‑cost “Gen‑AI Leader” learning paths and discounted certification vouchers to accelerate responsible AI adoption across the region. Polynome Group officially launched AI Academy, an educational initiative developed in collaboration with the Abu Dhabi School of Management and supported by NVIDIA’s Deep Learning Institute. The Academy will offer short executive seminars and a specialised four‑month Mini‑MBA in AI, aimed at equipping leaders and innovators with practical AI knowledge to bridge the gap between technology research and commercial application. Policy & talent Day one opened with a ministerial round‑table – “Wanted: AI to Retain and Attract Talent to the Country.” Ministers Omar Sultan Al Olama (UAE), Amr Talaat (Egypt), Gobind Singh Deo (Malaysia), Zhaslan Madiyev (Kazakhstan) and Meutya Hafid (Indonesia) detailed visa‑fast‑track programmes, national GPU clouds and cross‑border sandboxes designed to reverse brain‑drain and accelerate R&D. Breakthrough research Prof. Michael Bronstein (University of Oxford/Google DeepMind) demonstrated Geometric Deep Learning applications that shorten drug‑discovery timelines and model subatomic physics. Marco Tempest (NASA JPL/MagicLab.nyc) blended GPT‑4o dialogue with mixed‑reality holograms, turning the stage into an interactive mind‑map. Prof. Michal Irani (Weizmann Institute) showed perception‑to‑cognition systems capable of reconstructing scenes from a single gaze sequence. Andrea Vedaldi (Oxford) premiered a 3‑D generative‑AI pipeline for instant city‑scale digital twins, while Marc Pollefeys (ETH Zurich/Microsoft) demonstrated real‑time spatial mapping at sub‑10 ms latency. Industry workshops & panels AWS ran a hands‑on clinic – “Building Enterprise Gen‑AI Applications” – covering RAG, agentic orchestration and secure deployment. NVIDIA’s workshop unveiled its platform approach to production generative‑AI on Hopper‑class GPUs, complementing its newly announced Service Delivery Partnership with Polynome Group’s legal entity, Intelligent Machines Consultancies. Dubai Police hosted a closed‑door DFA session on predictive policing, while X and AI workshops explored social‑data pipelines on GPU clusters. The parallel Machines Can Create forum examined AI’s role in luxury, digital art and media, with speakers from HEC Paris, The Sandbox, IBM Research and BBC, culminating in the panel “Pixels and Palettes: The Canvas of Tomorrow.” Prof. Marc Pollefeys, Director of the Mixed Reality and AI Lab at ETH Zurich and Microsoft, highlighted the role of cutting-edge technology in daily life: “We are at a turning point where technologies like spatial AI and real-time 3D mapping are moving from laboratories into everyday life, transforming cities, workplaces, and how we interact with the digital world. The Machines Can See Summit underscores how collaboration between researchers, industry, and policymakers accelerates this transition, bringing innovative solutions closer to everyone,” he said. Ethical & security focus Panels “Good AI: Between Hype and Mediocrity” and “Defending Intelligence: Navigating Adversarial Machine Learning” stressed the need for continuous audits, red‑teaming and transparent supply chains. Dubai Police, TII UAE and IBM urged adoption of ISO‑aligned governance tool‑kits to safeguard public‑sector deployments. High‑profile awards On Day Two, H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum presented trophies for the Global Prompt Engineering Championship, for breakthroughs in multilingual, safety-aligned LLM prompting. Key takeaways The summit underscored three strategic imperatives for the decade ahead. Talent aviation – backed by unified tech visas, national GPU clouds and government‑funded sandbox clusters – is emerging as the most effective antidote to AI brain‑drain. Spatial computing is moving from laboratory to street level as sub‑10‑millisecond mapping unlocks safe humanoid robotics and city‑scale augmented‑reality services. Finally, secure generative AI must couple adversarial robustness with transparent, explainable pipelines before the technology can achieve mass‑market adoption in regulated industries. The post “Machines Can See 2025” wraps in Dubai after two‑day showcase of AI appeared first on AI News. View the full article
-
Baidu has unveiled ERNIE X1 Turbo and 4.5 Turbo, two fast models that boast impressive performance alongside dramatic cost reductions. Developed as enhancements to the existing ERNIE X1 and 4.5 models, both new Turbo versions highlight multimodal processing, robust reasoning skills, and aggressive pricing strategies designed to capture developer interest and marketshare. Baidu ERNIE X1 Turbo: Deep reasoning meets cost efficiency Positioned as a deep-thinking reasoning model, ERNIE X1 Turbo tackles complex tasks requiring sophisticated understanding. It enters a competitive field, claiming superior performance in some benchmarks against rivals like DeepSeek R1, V3, and OpenAI o1: Key to X1 Turbo’s enhanced capabilities is an advanced “chain of thought” process, enabling more structured and logical problem-solving. Furthermore, ERNIE X1 Turbo boasts improved multimodal functions – the ability to understand and process information beyond just text, potentially including images or other data types – alongside refined tool utilisation abilities. This makes it particularly well-suited for nuanced applications such as literary creation, complex logical reasoning challenges, code generation, and intricate instruction following. ERNIE X1 Turbo achieves this performance while undercutting competitor pricing. Input token costs start at $0.14 per million tokens, with output tokens priced at $0.55 per million. This pricing structure is approximately 25% of DeepSeek R1. Baidu ERNIE 4.5 Turbo: Multimodal muscle at a fraction of the cost Sharing the spotlight is ERNIE 4.5 Turbo, which focuses on delivering upgraded multimodal features and significantly faster response times compared to its non-Turbo counterpart. The emphasis here is on providing a versatile, responsive AI experience while slashing operational costs. The model achieves an 80% price reduction compared to the original ERNIE 4.5 with input set at $0.11 per million tokens and output at $0.44 per million tokens. This represents roughly 40% of the cost of the latest version of DeepSeek V3, again highlighting a deliberate strategy to attract users through cost-effectiveness. Performance benchmarks further bolster its credentials. In multiple tests evaluating both multimodal and text capabilities, Baidu ERNIE 4.5 Turbo outperforms OpenAI’s highly-regarded GPT-4o model. In multimodal capability assessments, ERNIE 4.5 Turbo achieved an average score of 77.68 to surpass GPT-4o’s score of 72.76 in the same tests. While benchmark results always require careful interpretation, this suggests ERNIE 4.5 Turbo is a serious contender for tasks involving an integrated understanding of different data types. Baidu continues to shake up the AI marketplace The launch of ERNIE X1 Turbo and 4.5 Turbo signifies a growing trend in the AI sector: the democratisation of high-end capabilities. While foundational models continue to push the boundaries of performance, there is increasing demand for models that balance power with accessibility and affordability. By lowering the price points for models with sophisticated reasoning and multimodal features, the Baidu ERNIE Turbo series could enable a wider range of developers and businesses to integrate advanced AI into their applications. This competitive pricing puts pressure on established players like OpenAI and Anthropic, as well as emerging competitors like DeepSeek, potentially leading to further price adjustments across the market. (Image Credit: Alpha Photo under CC BY-NC 2.0 license) See also: China’s MCP adoption: AI assistants that actually do things Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost appeared first on AI News. View the full article
-
The way we interact with our computers and smart devices is very different from previous years. Over the decades, human-computer interfaces have transformed, progressing from simple cardboard punch cards to keyboards and mice, and now extended reality-based AI agents that can converse with us in the same way as we do with friends. With each advance in human-computer interfaces, we’re getting closer to achieving the goal of interactions with machines, making computers more accessible and integrated with our lives. Where did it all begin? Modern computers emerged in the first half of the 20th century and relied on punch cards to feed data into the system and enable binary computations. The cards had a series of punched holes, and light was shone at them. If the light passed through a hole and was detected by the machine, it represented a “one”. Otherwise, it was a “zero”. As you can imagine, it was extremely cumbersome, time-consuming, and error-prone. That changed with the arrival of ENIAC, or Electronic Numerical Integrator and Computer, widely considered to be the first “Turing-complete” device that could solve a variety of numerical problems. Instead of punch cards, operating ENIAC involved manually setting a series of switches and plugging patch cords into a board to configure the computer for specific calculations, while data was inputted via a further series of switches and buttons. It was an improvement over punch cards, but not nearly as dramatic as the arrival of the modern QWERTY electronic keyboard in the early 1950s. Keyboards, adapted from typewriters, were a game-changer, allowing users to input text-based commands more intuitively. But while they made programming faster, accessibility was still limited to those with knowledge of the highly-technical programming commands required to operate computers. GUIs and touch The most important development in terms of computer accessibility was the graphical user interface or GUI, which finally opened computing to the masses. The first GUIs appeared in the late 1960s and were later refined by companies like IBM, Apple, and Microsoft, replacing text-based commands with a visual display made up of icons, menus, and windows. Alongside the GUI came the iconic “mouse“, which enabled users to “point-and-click” to interact with computers. Suddenly, these machines became easily navigable, allowing almost anyone to operate one. With the arrival of the internet a few years later, the GUI and the mouse helped pave the way for the computing revolution, with computers becoming commonplace in every home and office. The next major milestone in human-computer interfaces was the touchscreen, which first appeared in the late 1990s and did away with the need for a mouse or a separate keyboard. Users could now interact with their computers by tapping icons on the screen directly, pinching to zoom, and swiping left and right. Touchscreens eventually paved the way for the smartphone revolution that started with the arrival of the Apple iPhone in 2007 and, later, Android devices. With the rise of mobile computing, the variety of computing devices evolved further, and in the late 2000s and early 2010s, we witnessed the emergence of wearable devices like fitness trackers and smartwatches. Such devices are designed to integrate computers into our everyday lives, and it’s possible to interact with them in newer ways, like subtle gestures and biometric signals. Fitness trackers, for instance, use sensors to keep track of how many steps we take or how far we run, and can monitor a user’s pulse to measure heart rate. Extended reality & AI avatars In the last decade, we also saw the first artificial intelligence systems, with early examples being Apple’s Siri and Amazon’s Alexa. AI chatbots use voice recognition technology to enable users to communicate with their devices using their voice. As AI has advanced, these systems have become increasingly sophisticated and better able to understand complex instructions or questions, and can respond based on the context of the situation. With more advanced chatbots like ChatGPT, it’s possible to engage in lifelike conversations with machines, eliminating the need for any kind of physical input device. AI is now being combined with emerging augmented reality and virtual reality technologies to further refine human-computer interactions. With AR, we can insert digital information into our surroundings by overlaying it on top of our physical environment. This is enabled using VR devices like the Oculus Rift, HoloLens, and Apple Vision Pro, and further pushes the boundaries of what’s possible. So-called extended reality, or XR, is the latest take on the technology, replacing traditional input methods with eye-tracking, and gestures, and can provide haptic feedback, enabling users to interact with digital objects in physical environments. Instead of being restricted to flat, two-dimensional screens, our entire world becomes a computer through a blend of virtual and physical reality. The convergence of XR and AI opens the doors to more possibilities. Mawari Network is bringing AI agents and chatbots into the real world through the use of XR technology. It’s creating more meaningful, lifelike interactions by streaming AI avatars directly into our physical environments. The possibilities are endless – imagine an AI-powered virtual assistant standing in your home or a digital concierge that meets you in the hotel lobby, or even an AI passenger that sits next to you in your car, directing you on how to avoid the worst traffic jams. Through its decentralised DePin infrastructure, it’s enabling AI agents to drop into our lives in real-time. The technology is nascent but it’s not fantasy. In Germany, tourists can call on an avatar called Emma to guide them to the best spots and eateries in dozens of ******* cities. Other examples include digital popstars like Naevis, which is pioneering the concept of virtual concerts that can be attended from anywhere. In the coming years, we can expect to see this XR-based spatial computing combined with brain-computer interfaces, which promise to let users control computers with their thoughts. BCIs use electrodes placed on the scalp and pick up the electrical signals generated by our brains. Although it’s still in its infancy, this technology promises to deliver the most effective human-computer interactions possible. The future will be seamless The story of the human-computer interface is still under way, and as our technological capabilities advance, the distinction between digital and physical reality will more blurred. Perhaps one day soon, we’ll be living in a world where computers are omnipresent, integrated into every aspect of our lives, similar to Star Trek’s famed holodeck. Our physical realities will be merged with the digital world, and we’ll be able to communicate, find information, and perform actions using only our thoughts. This vision would have been considered fanciful only a few years ago, but the rapid pace of innovation suggests it’s not nearly so far-fetched. Rather, it’s something that the majority of us will live to see. (Image source: Unsplash) The post From punch cards to mind control: Human-computer interactions appeared first on AI News. View the full article For verified travel tips and real support, visit: [Hidden Content]
-
Having worked with AI since 2018, I’m watching its slow but steady pick-up alongside the unstructured bandwagon-jumping with considerable interest. Now that the initial fear has subsided somewhat about a robotic takeover, discussion about the ethics that will surround the integration of AI into everyday business structures has taken its place. A whole new range of roles will be required to handle ethics, governance and compliance, all of which are going to gain enormous value and importance to organisations. Probably the most essential of these will be an AI Ethics Specialist, who will be required to ensure Agentic AI systems meet ethical standards like fairness and transparency. This role will involve using specialised tools and frameworks to address ethical concerns efficiently and avoid potential legal or reputational risks. Human oversight to ensure transparency and responsible ethics is essential to maintain the delicate balance between data driven decisions, intelligence and intuition. In addition, roles like Agentic AI Workflow Designer, AI Interaction and Integration Designer will ensure AI integrates seamlessly across ecosystems and prioritises transparency, ethical considerations, and adaptability. An AI Overseer will also be required, to monitor the entire Agentic stack of agents and arbiters, the decision-making elements of AI. For anyone embarking on the integration of AI into their organisation and wanting to ensure the technology is introduced and maintained responsibly, I can recommend consulting the United Nations’ principles. These 10 principles were created by the United Nations in 2022, in response to the ethical challenges raised by the increasing preponderance of AI. So what are these ten principles, and how can we use them as a framework? First, do no harm As befits technology with an autonomous element, the first principle focuses on the deployment of AI systems in ways that will avoid any negative impact on social, cultural, economic, natural or political environments. An AI lifecycle should be designed to respect and protect human rights and freedoms. Systems should be monitored to ensure that that situation is maintained and no long-term damage is being done. Avoid AI for AI’s sake Ensure that the use of AI is justified, appropriate and not excessive. There is a distinct temptation to become over-zealous in the application of this exciting technology and it needs to be balanced against human needs and aims and should never be used at the expense of human dignity. Safety and security Safety and security risks should be identified, addressed and mitigated throughout the life cycle of the AI system and on an on-going basis. Exactly the same robust health and safety frameworks should be applied to AI as to any other area of the business. Equality Similarly, AI should be deployed with the aim of ensuring the equal and just distribution of the benefits, risks and cost, and to prevent bias, deception, discrimination and stigma of any kind. Sustainability AI should be aimed at promoting environmental, economic and social sustainability. Continual assessment should be made to address negative impacts, including any on the generations to come. Data privacy, data protection and data governance Adequate data protection frameworks and data governance mechanisms should be established or enhanced to ensure that the privacy and rights of individuals are maintained in line with legal guidelines around data integrity and personal data protection. No AI system should impinge on the privacy of another human being. Human oversight Human oversight should be guaranteed to ensure that the outcomes of using AI are fair and just. Human-centric design practises should be employed and capacity to be given for a human to step in at any stage and make a decision on how and when AI should be used, and to over-ride any decision made by AI. Rather dramatically but entirely reasonably, the UN suggests any decision affecting life or death should not be left to AI. Transparency and Explainability This, to my mind, forms part of the guidelines around equality. Everyone using AI should fully understand the systems they are using, the decision-making processes used by the system and its ramifications. Individuals should be told when a decision regarding their rights, freedoms or benefits has been made by artificial intelligence, and most importantly, the explanation should be made in a way that makes it comprehensible. Responsibility and Accountability This is the whistleblower principle, that covers audit and due diligence as well as protection for whistleblowers to make sure that someone is responsible and accountable for the decisions made by, and use of, AI. Governance should be put in place around the ethical and legal responsibility of humans for any AI-based decisions. Any of these decisions that cause harm should be investigated and action taken. Inclusivity and participation Just as in any other area of business, when designing, deploying and using artificial intelligence systems, an inclusive, interdisciplinary and participatory approach should be taken, which also includes gender equality. Stakeholders and any communities that are affected should be informed and consulted and informed of any benefits and potential risks. Building your AI integration around these central pillars should help you feel reassured that your entry into AI integration is built on an ethical and solid foundation. Photo by Immo Wegmann on Unsplash Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post The ethics of AI and how they affect you appeared first on AI News. View the full article