Everything posted by ChatGPT

  1. Oracle is planning to spend around $40 billion on Nvidia chips to support a massive new data centre being developed by OpenAI in Texas, according to reporting by the Financial Times. The move marks one of the largest chip purchases to date and signals the growing demand for AI computing power. The site is located in Abilene, Texas, and is described as the first US “Stargate” data centre. It’s part of a long-term plan backed by OpenAI and SoftBank to create large-scale AI facilities. When finished next year, the Texas centre is expected to provide 1.2 gigawatts of computing power, placing it among the largest in the world.

Oracle has indicated it intends to purchase around 400,000 of Nvidia’s new GB200 chips, some of Nvidia’s most advanced processors designed for training and running AI models. Oracle plans to lease the computing power from these chips to OpenAI, according to people familiar with the deal. The site is owned by Crusoe and Blue Owl Capital, which have raised $15 billion through a mix of debt and equity to fund the project. Construction began in June 2024, and the campus will include eight buildings. Oracle has signed a 15-year lease for the site, although Stargate itself hasn’t put any money into this specific facility. JPMorgan is providing most of the debt financing, with two loans totalling $9.6 billion, including a $7.1 billion loan announced this week. Crusoe and Blue Owl have each added roughly $5 billion in cash to the project.

Elon Musk is expanding his own project in Memphis, Tennessee, with plans to house about 1 million Nvidia chips. That centre, called “Colossus,” is mostly using Nvidia’s earlier H100 and H200 chips. Musk said this week the next phase will become the “first gigawatt AI training supercluster.” Meanwhile, Amazon is working on a separate data centre in Virginia that will also surpass 1GW. The Abilene project is part of a larger shift for OpenAI as it looks to lessen its dependence on Microsoft.
Until now, OpenAI has relied heavily on Microsoft for access to cloud computing. Much of Microsoft’s nearly $14 billion investment in the company came in the form of cloud credits. But OpenAI has grown frustrated that Microsoft can’t keep up with its increased demand for computing power. Earlier this year, OpenAI and Microsoft agreed to end their exclusive partnership, and are still working to resolve how long Microsoft will continue to hold licensing rights to OpenAI’s models.

Looking ahead, Stargate is expected to supply a large share of OpenAI’s computing needs. The group, which launched in early 2025, aims to raise as much as $500 billion over four years to fund more data centre projects in the US. So far, OpenAI and SoftBank have each pledged $18 billion to Stargate. Oracle and MGX, a sovereign wealth fund based in Abu Dhabi, have committed another $7 billion each, according to people familiar with the details. The four companies will hold equity in the project, with OpenAI and SoftBank taking the lead. Stargate hasn’t yet invested in any specific project, but its plans are expanding beyond the US. Just last week, during President Donald Trump’s visit to the Gulf region, OpenAI announced a new Stargate facility in Abu Dhabi. The site will cover 10 square miles and offer 5GW of data centre power – enough to run more than 2 million Nvidia GB200 chips. Local AI firm G42 is managing the construction.

(Photo from Unsplash) See also: Can the US really enforce a global AI chip ban? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Oracle plans $40B Nvidia chip deal for AI facility in Texas appeared first on AI News.
  2. Nvidia is preparing to go head-to-head with Huawei to maintain its relevance in the booming AI chip market of China. The upcoming AI chip to be created for China represents something of a strategic gamble by Nvidia – can the company’s third attempt at regulatory compliance preserve its foothold against surging domestic competition? Despite mounting geopolitical pressures from consecutive US administrations, Nvidia refuses to abandon the Chinese market entirely. The company is set to introduce a stripped-down Blackwell-based processor designed specifically to navigate export restrictions and compete against sophisticated local alternatives, particularly from giant Huawei.

A strategy born from necessity

Reuters sources reveal that Nvidia’s latest offering will carry a price tag between $6,500 and $8,000 – a dramatic reduction from the $10,000-$12,000 commanded by the now-banned H20 model. Production is scheduled to commence in June, underscoring the urgency behind Nvidia’s China AI chip initiative. The cost reduction comes with significant trade-offs. The new processor will use Nvidia’s RTX Pro 6000D foundation paired with standard GDDR7 memory, abandoning the high-bandwidth memory found in premium variants. More tellingly, the chip forgoes Taiwan Semiconductor’s advanced CoWoS packaging technology, a decision that simultaneously reduces capabilities and manufacturing complexity. Following April’s effective prohibition of H20 sales, Nvidia absorbed a $5.5 billion writedown on Chinese inventory and commitments. The company’s initial plan to modify the H20 for continued Chinese sales ultimately proved unfeasible under current US export frameworks.

Domestic competition intensifies

Huawei’s emergence as a legitimate challenger has fundamentally altered China’s AI chip landscape.
The company’s Ascend 910C and 910B processors have secured adoption among major domestic technology firms including Tencent, Baidu, and ByteDance, primarily for inference applications, an area in which they demonstrate competitive performance. The competitive pressure extends beyond individual chips to complete infrastructure solutions. Huawei’s CloudMatrix 384 rack system challenges Nvidia’s Blackwell GB200 NVL72 configuration directly, signaling the Chinese firm’s ambition to compete in the entire AI hardware stack. Market dynamics reflect a shift in balance. Reports indicate H20 chips have traded at discounts exceeding 10% compared to Huawei’s Ascend 910B, highlighting Nvidia’s struggle to maintain pricing power against domestic alternatives. CEO Jensen Huang acknowledged the erosion, revealing that Nvidia’s Chinese market share has approximately halved following recent export restrictions.

Billions at stake

The financial implications are substantial. Huang estimates China’s AI chip market potential at $50 billion, while Nvidia recorded over $17 billion in Chinese sales during 2024. The figures underscore why the company continues pursuing Nvidia China AI chip development despite regulatory headwinds. Sources suggest Nvidia isn’t placing all hopes on this single product. The company reportedly plans a second Blackwell-variant for China, with production targeted for September. The multi-pronged approach indicates Nvidia’s commitment to maintaining Chinese market presence through diverse product offerings tailored to different customer segments and regulatory requirements.

An uncertain path forward

The strategic questions surrounding the latest Nvidia China AI chip venture are profound. Can deliberately weakened hardware compete effectively against rapidly improving domestic alternatives? Will Chinese customers accept performance compromises despite local options continuing their advance in capability?
Huang’s recent acknowledgment that “China is right behind us. We’re very, very close” in AI capabilities suggests the competitive gap may be narrowing faster than anticipated. Combined with substantial government backing for domestic semiconductor development, the market raises fundamental questions about Nvidia’s long-term viability in China. (Photo by Mariia Shalabaieva) See also: Can the US really enforce a global AI chip ban? The post Will the budget China AI chip from Nvidia survive Huawei’s growth? appeared first on AI News.
  3. Anthropic has unveiled its latest Claude 4 model family, and it’s looking like a leap for anyone building next-gen AI assistants or coding tools. The stars of the show are Claude Opus 4, the new powerhouse, and Claude Sonnet 4, designed to be a smart all-rounder. Anthropic isn’t shy about its ambitions, stating these models are geared to “advance our customers’ AI strategies across the board.” They’re positioning Opus 4 as the tool to “push boundaries in coding, research, writing, and scientific discovery,” while Sonnet 4 is billed as an “instant upgrade from Sonnet 3.7,” ready to bring “frontier performance to everyday use cases.”

Claude Opus 4: The new coding champ

When Anthropic calls Claude Opus 4 its “most powerful model yet and the best coding model in the world,” you sit up and take notice. And they’ve got the numbers to back it up, with Opus 4 topping the charts on crucial industry tests, hitting 72.5% on SWE-bench and 43.2% on Terminal-bench. But it’s not just about quick sprints. Opus 4 is built for the long haul, designed for “sustained performance on long-running tasks that require focused effort and thousands of steps.” Imagine an AI that can “work continuously for several hours”—that’s what Anthropic claims. This should be a massive step up from previous Sonnet models and could expand what AI agents can achieve, tackling problems that require real persistence.

Claude Sonnet 4: For daily AI and agentic work

While Opus 4 is the heavyweight champion, Claude Sonnet 4 is shaping up to be the versatile workhorse, promising a significant boost for a huge range of applications. Early feedback from those who’ve had a sneak peek is glowing. For instance, GitHub “says Claude Sonnet 4 soars in agentic scenarios” and is so impressed they “plan to introduce it as the base model for the new coding agent in GitHub Copilot.” That’s a hefty endorsement.
AI agent startup Manus is also impressed, highlighting its “improvements in following complex instructions, clear reasoning, and aesthetic outputs.” The positive vibes continue with iGent, which “reports Sonnet 4 excels at autonomous multi-feature app development, as well as substantially improved problem-solving and codebase navigation—reducing navigation errors from 20% to near zero.” That’s a game-changer for development workflows. Sourcegraph is equally optimistic, seeing the model as a “substantial leap in software development—staying on track longer, understanding problems more deeply, and providing more elegant code quality.” Augment Code has seen “higher success rates, more surgical code edits, and more careful work through complex tasks,” leading them to make Sonnet 4 their “top choice for their primary model.”

Hybrid modes and developer delights

One of the really clever bits about the Claude 4 family is its hybrid nature. Both Opus 4 and Sonnet 4 can operate in two gears: one for those near-instant replies we often need, and another that allows for “extended thinking for deeper reasoning.” This deeper thinking mode is part of the Pro, Max, Team, and Enterprise Claude plans. Good news for everyone, though – Sonnet 4, complete with this extended thinking, will also be available to free users, which is a fantastic move for making top-tier AI more accessible. Anthropic is also rolling out some tasty new tools for developers on its API, clearly aiming to supercharge the creation of more sophisticated AI agents:

  • Code execution tool: This lets models actually run code, opening up all sorts of possibilities for interactive and problem-solving applications.
  • MCP connector: Introduced by Anthropic, MCP standardises context exchange between AI assistants and software environments.
  • Files API: This will make it much easier for AI to work directly with files, which is a big deal for many real-world tasks.
  • Prompt caching: Developers will be able to cache prompts for up to an hour. This might sound small, but it can make a real difference to speed and efficiency, especially for frequently used queries.

Leading the pack in real-world performance

Anthropic is keen to emphasise that its “Claude 4 models lead on SWE-bench Verified, a benchmark for performance on real software engineering tasks.” Beyond coding, they stress that these models “deliver strong performance across coding, reasoning, multimodal capabilities, and agentic tasks.” Despite the leaps in capability, Anthropic is holding the line on pricing. Claude Opus 4 will set you back $15 per million input tokens and $75 per million output tokens. Claude Sonnet 4, the more accessible option, is priced at $3 per million input tokens and $15 per million output tokens. This consistency will be welcomed by existing users. Both Claude Opus 4 and Sonnet 4 are ready to go via the Anthropic API, and they’re also popping up on Amazon Bedrock and Google Cloud’s Vertex AI. This broad availability means businesses and developers worldwide can start experimenting and integrating these new tools fairly easily. Anthropic is clearly doubling down on making AI more capable, particularly in the complex realms of coding and autonomous agent behaviour. With these new models and developer tools, the potential for innovation just got a serious boost. (Image credit: Anthropic) See also: Details leak of Jony Ive’s ambitious OpenAI device
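Because prompt caching and extended thinking are both opt-in request fields, it may help to see where they sit in a request body. Below is a minimal sketch of an Anthropic-style `messages` payload; the field names follow Anthropic's published API conventions (`cache_control` for caching, `thinking` for the deeper-reasoning mode), but the model identifier is illustrative and no network call is made.

```python
# Sketch only: where the caching and thinking fields sit in an
# Anthropic-style request body. Field names follow Anthropic's docs;
# the model name is illustrative. No API call is performed here.
payload = {
    "model": "claude-sonnet-4",  # illustrative identifier
    "max_tokens": 2048,
    # Extended thinking: deeper reasoning with an explicit token budget.
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    # A large, stable system prompt is the natural thing to cache.
    "system": [
        {
            "type": "text",
            "text": "You are a meticulous code-review assistant.",
            "cache_control": {"type": "ephemeral"},  # reuse across calls
        }
    ],
    "messages": [
        {"role": "user", "content": "Review the attached diff for bugs."}
    ],
}
```

In practice a dict like this would be passed as keyword arguments to the SDK's `client.messages.create(...)`; the caching marker lets repeated calls reuse the processed system prompt instead of paying for it on every request.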
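Since the per-token prices are quoted explicitly, a small helper makes it easy to sanity-check what a workload would cost. This is just arithmetic over the prices quoted above; the dictionary keys are our own shorthand, not official model identifiers.

```python
# Rough cost estimator using the per-million-token prices quoted above.
# The dict keys are informal shorthand, not official model identifiers.
PRICES_USD_PER_MTOK = {
    "opus-4":   (15.0, 75.0),  # (input, output) dollars per million tokens
    "sonnet-4": (3.0, 15.0),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a given token volume."""
    in_rate, out_rate = PRICES_USD_PER_MTOK[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. a 2M-tokens-in / 0.5M-tokens-out workload:
#   Opus 4:   2 * 15 + 0.5 * 75 = 67.5 USD
#   Sonnet 4: 2 * 3  + 0.5 * 15 = 13.5 USD
```

The five-fold gap between the two models is why the article frames Sonnet 4 as the everyday workhorse and Opus 4 as the tool for long-running, high-value tasks.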
The post Anthropic Claude 4: A new era for intelligent agents and AI coding appeared first on AI News.
  4. After what felt like an age of tech industry tea-leaf reading, OpenAI has officially snapped up “io,” the much-buzzed-about startup building an AI device from former Apple design guru Jony Ive and OpenAI’s chief, Sam Altman. The price tag? $6.5 billion. OpenAI put out a video this week talking about the Ive and Altman venture in a general sort of way, but now, a few more tidbits about what they’re actually cooking have slipped out. And what are they planning with all that cash and brainpower? Well, the eagle-eyed folks at The Washington Post spotted an internal chat between Sam Altman and OpenAI staff where he set a target of shipping 100 million AI “companions.” Altman allegedly even told his team the OpenAI device is “the chance to do the biggest thing we’ve ever done as a company here.” To be clear, Altman has set that 100 million number as an eventual target. “We’re not going to ship 100 million devices literally on day one,” he said. But then, in a flex that’s pure Silicon Valley, he added they’d hit that 100 million mark “faster than any company has ever shipped 100 million of something new before.”

So, what is this mysterious “companion”? The gadget is designed to be entirely aware of a user’s surroundings, and even their “life.” While they’ve mostly talked about a single device, Altman did let slip it might be more of a “family of devices.” Jony Ive, as expected, dubbed it “a new design movement.” You can almost hear the minimalist manifesto being drafted. Why the full-blown acquisition, though? Weren’t they just going to partner up? Originally, yes. The plan was for Ive’s startup to cook up the hardware and sell it, with OpenAI delivering the brains. But it seems the vision got bigger. This isn’t just another accessory, you see.
Altman stressed the device will be a “central facet of using OpenAI.” He even said, “We both got excited about the idea that, if you subscribed to ChatGPT, we should just mail you new computers, and you should use those.” Frankly, they reckon our current tech – our trusty laptops, the websites we browse – just isn’t up to snuff for the kind of AI experiences they’re dreaming of. Altman was pretty blunt, saying current use of AI “is not the sci-fi dream of what AI could do to enable you in all the ways that I think the models are capable of.” So, we know it’s not a smartphone. Altman’s also put the kibosh on it being a pair of glasses. And Jony Ive, well, he’s apparently not rushing to make another wearable, which makes sense given his design ethos. The good news for the impatient among us (i.e., everyone in tech) is that this isn’t just vapourware. Ive’s team has an actual prototype. Altman’s even taken one home to “live with it”. As for when we might get our hands on one? Altman’s reportedly aiming for a late 2026 release. Naturally, OpenAI is keeping the actual device under wraps, but you can always count on supply chain whispers for a few clues. The ever-reliable (well, usually!) Apple supply chain analyst Ming-Chi Kuo has thrown a few alleged design details into the ring via social media. Kuo reckons it’ll be “slightly larger” than the Humane AI Pin, but that it will look “as compact and elegant as an iPod Shuffle.” And yes, like the Shuffle, Kuo says no screen. According to Kuo, the device will chat with your phone and computer instead, using good old-fashioned microphones for your voice and cameras to see what’s going on around you. Interestingly, he suggests it’ll be worn around the neck, necklace-style, rather than clipped on like the AI Pin. Kuo’s crystal ball points to mass production in 2027, but he wisely adds a pinch of salt, noting the final look and feel could still change. 
So, the billion-dollar (well, £5.1 billion) question remains: will this OpenAI device be the next big thing, the gamechanger we’ve been waiting for? Or will it be another noble-but-failed attempt to break free from the smartphone’s iron grip, joining the likes of the AI Pin in the ‘great ideas that didn’t quite make it’ pile? Altman, for one, is brimming with confidence. Having lived with the prototype, he’s gone on record saying he believes it will be “the coolest piece of technology that the world will have ever seen.” See also: Linux Foundation: Slash costs, boost growth with open-source AI The post Details leak of Jony Ive’s ambitious OpenAI device appeared first on AI News.
  5. The Linux Foundation and Meta are putting some numbers behind how open-source AI (OSAI) is driving innovation and adoption. The adoption of AI tools is pretty much everywhere now, with 94% of organisations surveyed already using them. And get this: within that crowd, 89% are tapping into open-source AI for some part of their tech backbone. A paper released this week by Meta and the Linux Foundation stitches together academic brainpower, industry frontline stories, and global survey data to showcase an ecosystem that’s buzzing thanks to being open and affordable. If there’s one thing that jumps off the page, it’s the money talk. Cost savings, folks, are a huge deal here. Unsurprisingly, two-thirds of businesses are saying that open source AI is just plain cheaper to get up and running compared to proprietary. So, it’s no shocker that almost half of them point to these savings as a big reason for going the open-source route. We’re not talking about trimming a few coins here and there. Researchers reckon companies would be shelling out 3.5 times more cash if open-source software simply vanished. As AI digs its heels deeper into everything we do, the financial muscle of open-source is only going to get stronger, potentially even overshadowing traditional open-source software’s impact. But this isn’t just about pinching pennies; it’s about unleashing brains. The report points out that AI can slash business unit costs by over 50%, which, as you can imagine, opens the door for revenue boosts. When open AI models are out there for cheap, or even free, it levels the playing field. Suddenly, developers and businesses of all sizes can jump in, play around, and rethink how they do things. Often it’s the smaller players, the agile startups and medium-sized businesses, that are diving headfirst into open-source AI more so than the big corporate giants. 
And since these are often the places where groundbreaking ideas and new products are born, it really hammers home how vital OSAI is for keeping the innovation engine chugging and helping those plucky, cutting-edge firms compete. And if you want a textbook example of how going open can turbocharge things, look no further than PyTorch. The report digs into how Meta’s decision to shift its heavyweight deep learning framework to an open governance model, under a non-profit, turned out to be a masterstroke. The report leans on a close look by Yue and Nagle (2024), who tracked what happened next. Once PyTorch flew the Meta nest, contributions from Meta itself “significantly decreased.” Sounds a bit off, right? But actually, it signalled a healthy move away from one company calling the shots. What really ramped up was input from “external companies, especially from the developers of complementary technology, such as chip manufacturers.” Meanwhile, the actual users, the developers building stuff with PyTorch, kept their engagement steady – “no change.” It’s a clear win. As the researchers put it, this kind of shift for major OSAI software “promotes broader participation and increased contributions and decreases the dominance of any single company.” It’s a powerful testament to what report authors Anna Hermansen and Cailean Osborne found: “engagement in open, collaborative activities is a better indicator of innovation than patents.” This isn’t just theory; it’s making waves in massive sectors. Take manufacturing. Open-source AI is set to be a game-changer there, mostly because its open code means you can bend it and shape it to fit. This flexibility allows AI to slot neatly into factory workflows, automating tasks and smoothing out order management. A 2023 McKinsey report, flagged in the study, even predicts AI could pump up to $290 billion extra into advanced manufacturing. Then there’s healthcare. 
In places like hospitals and local clinics, where every penny and every minute counts, free and flexible tools like open-source AI can literally be lifesavers. Imagine AI helping with diagnoses or flagging diseases early. McKinsey thinks the global healthcare sector could see up to a $260 billion boost in value once AI is really rolled out. A 2024 analysis even showed that open models in healthcare can go toe-to-toe with the proprietary ones—meaning hospitals can get tailored, privacy-friendly OSAI without skimping on performance. And it’s not just about the tech; it’s about the people. The report mentions that AI-related skills could see wages jump by up to 20%. That’s a big deal and really underlines why we need to be thinking about training and development for this new AI era. Hilary Carter, SVP of Research at The Linux Foundation, said: “The findings in this report make it clear: open-source AI is a catalyst for economic growth and opportunity. As adoption scales across sectors, we’re seeing measurable cost savings, increased productivity and rising demand for AI-related skills that can boost wages and career prospects. “Open-source AI is not only transforming how businesses operate—it’s reshaping how people work.” So, the takeaway? Open AI models are fast becoming the standard, the very foundation of future breakthroughs. They’re pushing growth and healthy competition by making powerful AI tools available without an eye-watering price tag. The Linux Foundation’s report isn’t just cheerleading; it’s laying out the hard numbers to show why open-source AI is absolutely crucial for a robust, stable, and forward-looking economy. See also: Alibaba Wan2.1-VACE: Open-source AI video tool for all The post Linux Foundation: Slash costs, boost growth with open-source AI appeared first on AI News.
  6. The Middle East is pulling in more attention from global tech investors than ever. Saudi Arabia, the UAE, and Qatar are rolling out billions of dollars in deals, working with top US companies, and building the kind of infrastructure needed to run large-scale AI systems. It’s not just about the money. There are new laws, startup activity, and plans for growth that are turning heads in Silicon Valley and beyond.

Strategic deals anchor US tech ties

US President Donald Trump recently visited the region and announced more than $2 trillion in investment agreements. These included major partnerships between Gulf states and American firms in artificial intelligence, cloud services, and defence tech. The UAE said it would build one of the world’s largest AI campuses in Abu Dhabi. At the same time, Saudi Arabia launched an AI company called Humain. Backed by the Public Investment Fund, the firm has already formed deals with Nvidia and AMD to bring in thousands of chips for local use. The idea is to run and train AI models inside the kingdom, cutting the need to rely on overseas services. These deals aren’t one-off events. They point to deeper ties between the Gulf and US tech companies. Gulf leaders want to localise AI development, but US companies see the region as a growing market for cloud, data, and chips. This growing alignment offers both sides an edge in a global race where speed and access matter.

Gulf states scale up AI infrastructure

AI systems need strong computing power. That means data centres, chips, and networks that can handle constant, heavy demand. Countries like Saudi Arabia and the UAE are putting their money behind this need. Saudi Arabia’s Humain is planning to deploy over 18,000 Nvidia chips, some of the most advanced in the market. These will power training clusters that let researchers and firms build new models at home. The UAE, through partnerships with Amazon and OpenAI, is also expanding its local data capacity.
One campus in Abu Dhabi will include large-scale AI labs and supercomputers. Running powerful AI models close to home offers more than speed. It helps with data control, lowers costs, and reduces delays. Governments in the region are aware that long-term control over AI infrastructure will play a major role in future national development and influence. These projects are part of each country’s national tech strategy. Saudi Arabia’s Vision 2030 includes tech among its focus areas. The UAE’s AI strategy aims to be one of the top AI-ready countries in the next five years.

Startups are finding momentum

Investment isn’t only flowing to big infrastructure. April 2025 saw MENA startups raise $228.4 million. That’s more than double what they raised in March. Fintech and B2B platforms are leading the charge. Thndr, a Cairo-based investment platform, raised $15.7 million to expand into Saudi Arabia and the UAE. These countries have growing retail investor bases and are looking for tools that make trading and saving more accessible. The Gulf’s young, tech-savvy population and high mobile use make it an ideal testbed for startups. At the same time, government-backed funds are investing in early-stage companies to help grow local talent and reduce dependence on imported services. Governments are also creating more startup-friendly zones. Free economic zones in the UAE and planned innovation hubs in Saudi Arabia offer tax benefits and simplified licensing for tech ventures. Investors say that regulatory support is improving, and founders now have clearer paths to launch and scale.

Cloud and data centre expansion gathers pace

Cloud service demand is rising across the Middle East. Smart city projects, e-government platforms, and AI applications are driving the need for secure, local data storage and processing. Oracle has pledged $14 billion to expand its cloud footprint in Saudi Arabia. Google, AWS, and Microsoft are also investing in regional data hubs.
These centres will support everything from banking to logistics. Building out cloud services is key to keeping data local and speeding up online services. It also lowers costs for local firms, which no longer need to rely on foreign servers. The result is a growing tech sector that has the tools to serve customers in real time. Large-scale data operations also open the door for more regional SaaS companies. With cloud capacity in place, local developers can create enterprise tools, AI services, and e-commerce platforms tailored to local needs.

Policy reforms drive diversification

Behind these tech moves are changes in policy. Governments are cutting red tape, easing rules for foreign ownership, and offering tax breaks for tech investors. The aim is to reduce the region’s reliance on oil and build a broader economic base. Saudi Arabia’s Vision 2030 includes goals for digital infrastructure, education, and innovation. The UAE’s AI strategy is tied to its push to attract top researchers and engineers. These are not just plans on paper. They’re being matched with funding, laws, and global partnerships. There is also a cultural shift underway. Tech is being taught in schools, and universities are opening AI-focused programs. This is helping to build a future workforce that can support local companies and attract international firms. More investors are noting the predictability and speed of doing business. This is especially important for tech startups that need fast feedback and steady support to grow. When rules are clear and approvals are quick, companies are more likely to stay.

Balancing growth and geopolitical interests

With more tech investment comes more attention. The US sees the region as a way to grow its global tech influence, especially as ties with China remain tense. For Middle Eastern nations, working with US companies gives them access to know-how and supply chains that would take years to build from scratch.
At the same time, there are concerns about who controls the tech, where data is stored, and how it’s used. Some countries are pushing for data rules that favour local storage. Others want to develop their own large language models and keep training data inside national borders. Some regional leaders are starting to speak more openly about digital independence. They want to be buyers, yes, but also builders. That means investing in chips, software, and talent that can support homegrown tech. A few years ago, that seemed far off. Now, with the right backing, it’s starting to look within reach. Navigating these issues will shape the next phase of tech growth in the Middle East. Governments want to move fast but also retain control over key parts of their digital economy. The Middle East’s role in global tech is shifting. It’s no longer just a market for new gadgets or services. It’s becoming a centre for infrastructure, AI training, startup growth, and cloud services. Countries in the region are investing with a clear goal: to build long-term strength in a sector that shapes how business, education, and even government will work in the years ahead. If current trends continue, the Middle East won’t just be receiving tech. It will be helping shape it. (Photo by Unsplash) See also: Saudi Arabia moves to build its AI future with HUMAIN and NVIDIA Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Digital Transformation Week, IoT Tech Expo, Blockchain Expo, and AI & Big Data Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Why the Middle East is a hot place for global tech investments appeared first on AI News.
  7. According to Thales, AI and quantum threats have catapulted to the top of the worry list for organisations wrestling with data security. That’s the key takeaway from the cybersecurity giant’s 2025 Data Threat Report, an annual deep dive into the latest data security threats, emerging trends, and hot topics. This year’s findings are stark: almost seven out of ten organisations now see the sheer speed of AI development – especially where generative AI is concerned – as the number one security headache related to its adoption. This anxiety isn’t just about pace; it’s also fed by concerns over a fundamental lack of integrity in AI systems (flagged by 64% of those surveyed) and a troubling deficit in trustworthiness (a worry for 57%). Generative AI is a data-hungry beast, relying heavily on high-quality, often sensitive, information for core functions like training models, making inferences, and, of course, generating content. As we make rapid advancements in “agentic AI” – systems that can act more autonomously – the pressure to keep data quality at a high calibre becomes even more critical. After all, sound decision-making and reliable actions from AI systems depend entirely on the data they’re fed. Many organisations are already diving in, with a third of respondents indicating generative AI is either being actively integrated or is already a force for transformation within their operations. Security threats increase as organisations embrace generative AI As generative AI throws up a complex web of data security challenges while simultaneously offering strategic avenues to bolster defences, its growing integration signals a distinct shift. Businesses are moving beyond just dipping their toes in the AI water; they’re now looking at more mature, operational deployments. 
Interestingly, while most respondents tabbed the swift uptake of GenAI as their biggest security concern, those further along the AI adoption curve aren’t hitting the pause button to completely lock down their systems or fine-tune their tech stacks before forging ahead. This dash for rapid transformation – often overshadowing efforts to ensure organisational readiness – could mean these companies are, perhaps unwittingly, creating their own most serious security weak spots. Eric Hanselman, Chief Analyst at S&P Global Market Intelligence 451 Research, said: “The fast-evolving GenAI landscape is pressuring enterprises to move quickly, sometimes at the cost of caution, as they race to stay ahead of the adoption curve. “Many enterprises are deploying GenAI faster than they can fully understand their application architectures, compounded by the rapid spread of SaaS tools embedding GenAI capabilities, adding layers of complexity and risk.” On a more positive note, 73% of respondents report they are putting money into AI-specific security tools to counter threats, either through fresh budgets or by reshuffling existing resources. Those making AI security a priority are also diversifying their approaches: over two-thirds have sourced tools from their cloud providers, three in five are turning to established security vendors, and almost half are looking to new or emerging startups for solutions. What’s particularly telling is how quickly security for generative AI has climbed the spending charts, nabbing the second spot in ranked-choice voting, just pipped to the post by the perennial concern of cloud security. This shift powerfully underscores the growing recognition of AI-driven risks and the urgent need for specialised defences to counter them. Data breaches show modest decline, though threats remain elevated While the nightmare of a data breach still looms large for many, their reported frequency has actually dipped slightly over the past few years. 
Back in 2021, 56% of enterprises surveyed said they’d experienced a breach at some point; that figure has eased to 45% in the 2025 report. Delving deeper, the percentage of respondents reporting a breach within the last 12 months has dropped from 23% in 2021 to a more encouraging 14% in 2025. When it comes to the persistent villains of the threat landscape, malware continues to lead the pack, holding onto its top spot since 2021. Phishing has craftily climbed into second place, nudging ransomware down to third. As for who’s causing the most concern, external actors dominate: hacktivists are currently seen as the primary menace, followed by nation-state actors. Human error, whilst still a significant factor, has slipped to third, down one position from the previous year. Vendors pressed on readiness for quantum threats The 2025 Thales Data Threat Report also casts a revealing light on the growing unease within most organisations about quantum-related security risks. The top threat here, cited by a hefty 63% of respondents, is the looming danger of “future encryption compromise.” This is the unsettling prospect that powerful quantum computers could one day shatter current or even future encryption algorithms, exposing data previously thought to be securely locked away. Hot on its heels, 61% identified key distribution vulnerabilities, where quantum breakthroughs could undermine the methods we use to securely exchange encryption keys. Furthermore, 58% highlighted the “harvest now, decrypt later” (HNDL) threat – a chilling scenario where encrypted data, scooped up today, could be decrypted by powerful quantum machines in the future. In response to these gathering clouds, half of the organisations surveyed are taking a hard look at their current encryption strategies with 60% already prototyping or evaluating post-quantum cryptography (PQC) solutions. 
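A common rule of thumb for reasoning about the HNDL threat described above is Mosca's inequality: if the number of years data must remain confidential (X) plus the years a PQC migration will take (Y) exceeds the years until a cryptographically relevant quantum computer arrives (Z), data harvested today is already at risk. A minimal sketch of that check follows; the year values are purely illustrative assumptions, not figures from the Thales report:

```python
def quantum_risk(shelf_life_years: float, migration_years: float,
                 years_to_crqc: float) -> bool:
    """Mosca's inequality: data is exposed to 'harvest now, decrypt later'
    when shelf life (X) plus migration time (Y) exceeds the time until a
    cryptographically relevant quantum computer (Z) arrives."""
    return shelf_life_years + migration_years > years_to_crqc

# Illustrative figures only: records confidential for 10 years, a 5-year
# PQC migration, and a CRQC assumed to be 12 years away.
print(quantum_risk(10, 5, 12))  # True -> migration needs to start now
```

The point of the heuristic is that waiting for quantum computers to exist before migrating is already too late for long-lived data.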
However, it seems trust is a scarce commodity, as only a third are pinning their hopes on telecom or cloud providers to navigate this complex transition for them. Todd Moore, Global VP of Data Security Products at Thales, commented: “The clock is ticking on post-quantum readiness. It’s encouraging that three out of five organisations are already prototyping new ciphers, but deployment timelines are tight and falling behind could leave critical data exposed. “Even with clear timelines for transitioning to PQC algorithms, the pace of encryption change has been slower than expected due to a mix of legacy systems, complexity, and the challenge of balancing innovation with security.” There’s clearly a lot more work to be done to get operational data security truly up to speed, not just to support the advanced capabilities of emerging technologies like generative AI, but also to lay down a secure foundation for whatever threats are just around the corner. (Image by Pete Linforth) See also: AI tool speeds up government feedback, experts urge caution Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Digital Transformation Week, IoT Tech Expo, Blockchain Expo, and AI & Big Data Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Thales: AI and quantum threats top security agendas appeared first on AI News. View the full article
  8. Dell Technologies used the Dell Technologies World in Las Vegas to announce the latest generation of AI acceleration servers which come equipped with Nvidia’s Blackwell Ultra GPUs. The systems claim to deliver up to four times faster AI training capabilities compared to previous generations, as Dell expands its AI Factory partnership with Nvidia amid intense competition in the enterprise AI hardware market. The servers arrive as organisations move from experimental AI projects to production-scale implementations, creating demand for more sophisticated computing infrastructure. The new lineup features air-cooled PowerEdge XE9780 and XE9785 servers, designed for conventional data centres, and liquid-cooled XE9780L and XE9785L variants, optimised for whole-rack deployment. The advanced systems support configurations with up to 192 Nvidia Blackwell Ultra GPUs with direct-to-chip liquid cooling, expandable to 256 GPUs per Dell IR7000 rack. “We’re on a mission to bring AI to millions of customers around the world,” said Michael Dell, the eponymous chairman and chief executive officer. “Our job is to make AI more accessible. With the Dell AI Factory with Nvidia, enterprises can manage the entire AI lifecycle in use cases, from deployment to training, at any scale.” Dell’s self-designation as “the world’s top provider of AI-centric infrastructure” appears calculated as companies try to deploy AI and navigate technical hurdles. Critical assessment of Dell’s AI hardware strategy While Dell’s AI acceleration hardware advancements appear impressive on the basis of tech specs, several factors will ultimately determine their market impact. The company has withheld pricing information for these high-end systems, which will undoubtedly represent substantial capital investments for organisations considering deployment. 
The cooling infrastructure alone, particularly for the liquid-cooled variants, may need modifications to data centres for many potential customers, adding complexity and cost beyond the server hardware itself. Industry observers note that Dell faces intensifying competition in the AI hardware space from companies like Super Micro Computer, which has aggressively targeted the AI server market with similar offerings. However, Super Micro has recently encountered production cost challenges and margin pressure, potentially creating an opening for Dell if it can deliver competitive pricing. Jensen Huang, founder and CEO of Nvidia, emphasised the transformative potential of these systems: “AI factories are the infrastructure of modern industry, generating intelligence to power work in healthcare, finance and manufacturing. With Dell Technologies, we’re offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge.” Comprehensive AI acceleration ecosystem Dell’s AI acceleration strategy extends beyond server hardware to encompass networking, storage, and software components: The networking portfolio now includes the PowerSwitch SN5600 and SN2201 switches (part of Nvidia’s Spectrum-X platform) and Nvidia Quantum-X800 InfiniBand switches, capable of up to 800 gigabits per second throughput with Dell ProSupport and Deployment Services. The Dell AI Data Platform has received upgrades to enhance data management for AI applications, including a denser ObjectScale system with Nvidia BlueField-3 and Spectrum-4 networking integrations. In software, Dell offers the Nvidia AI Enterprise software platform directly, featuring Nvidia NIM, NeMo microservices, and Blueprints to streamline AI development workflows. The company also introduced Managed Services for its AI Factory with Nvidia, providing monitoring, reporting, and maintenance to help organisations address expertise gaps – skilled professionals remain in short supply. 
Availability timeline and market implications Dell’s AI acceleration platform rollout follows a staggered schedule throughout 2025:
- Air-cooled PowerEdge XE9780 and XE9785 servers with NVIDIA HGX B300 GPUs will be available in the second half of 2025
- The liquid-cooled PowerEdge XE9780L and XE9785L variants are expected later this year
- The PowerEdge XE7745 server with Nvidia RTX Pro 6000 Blackwell Server Edition GPUs will launch in July 2025
- The PowerEdge XE9712 featuring GB300 NVL72 will arrive in the second half of 2025
Dell plans to support Nvidia’s Vera CPU and Vera Rubin platform, signalling a longer-term commitment to expanding its AI ecosystem beyond this product lineup. Strategic analysis of the AI acceleration market Dell’s push into AI acceleration hardware reflects a strategy change to capitalise on the artificial intelligence boom, and use its established enterprise customer relationships. As organisations realise the complexity and expense of implementing AI at scale, Dell appears to be positioning itself as a comprehensive solution provider rather than merely a hardware vendor. However, the success of Dell’s AI acceleration initiative will ultimately depend on how effectively systems deliver measurable business value. Organisations investing in high-end infrastructure will demand operational improvements and competitive advantages that justify the significant capital expenditure. The partnership with Nvidia provides Dell access to next-gen AI accelerator technology, but also creates dependency on Nvidia’s supply chain and product roadmap. Given persistent chip shortages and extraordinary demand for AI accelerators, Dell’s ability to secure adequate GPU allocations will prove crucial for meeting customer expectations. (Photo by Nvidia) See also: Dell, Intel and University of Cambridge deploy the UK’s fastest AI supercomputer Want to learn more about AI and big data from industry leaders? 
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Dell unveils Nvidia Blackwell-based AI acceleration platform appeared first on AI News. View the full article
  9. AI is transforming industries worldwide, and its growing impact on digital finance is becoming significant. One of the most intriguing applications of AI lies in the analysis and forecasting of the Bitcoin price in USD – a notoriously volatile metric that traditional financial models struggle to predict. Today, AI is helping decode cryptocurrency’s complexities with speed and precision. From deep learning networks and sentiment analysis to real-time anomaly detection, AI technologies are shaping how investors, institutions and platforms understand and respond to Bitcoin’s dynamic movements. Predicting Bitcoin with deep learning Bitcoin has evolved into a global platform operating through numerous nodes around the world. Each node creates further data, like trading volume, volatility and price changes, alongside other trade-related documentation. All this serves as the basis for training deep learning frameworks. Forecasting Bitcoin prices using recurrent neural networks (RNNs) is particularly effective due to the use of Long Short-Term Memory (LSTM) networks. LSTMs excel at capturing long-term dependencies in time series data, making them ideal for handling the complex and dynamic nature of Bitcoin’s real-time price movements. The ability to remember and learn from previously observed patterns allows LSTMs to model the statistical behaviour of Bitcoin prices over time. A recently published study in Forecasting (2024) proposed a hybrid model for predicting Bitcoin prices by incorporating LSTM with attention mechanisms and gradient-specific optimisation. The study boasts an impressive accuracy of 99.84%. Advanced deep learning strategies in financial forecasting, as this study suggests, are superior to those found in traditional models. Using NLP to decode market emotions Natural Language Processing (NLP) tools help cryptocurrency investors analyse real-time, unstructured data to understand market sentiment and investor behaviour. 
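The BERT-based pipelines discussed in this space are too heavy for a short snippet, but the core idea of turning raw text into a market-sentiment signal can be shown with a toy lexicon scorer. This is a deliberately simplified stand-in for the transformer models the article describes, and the word lists are illustrative assumptions:

```python
# Toy sentiment lexicons; real systems learn these signals from data.
BULLISH = {"moon", "rally", "breakout", "bullish", "surge", "buy"}
BEARISH = {"crash", "dump", "bearish", "fear", "sell", "plunge"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: +1 if all matched words are bullish,
    -1 if all are bearish, 0 if nothing matches."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in BULLISH for w in words)
    neg = sum(w in BEARISH for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(sentiment_score("btc breakout incoming huge rally"))   # 1.0
print(sentiment_score("fear of a crash, time to sell"))      # -1.0
```

In a production pipeline this per-message score would be aggregated over thousands of posts per hour and fed, alongside price history, into the forecasting model.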
In 2023, a study posted in arXiv introduced an end-to-end model for forecasting sentiment of tweets together with price prediction. The model employs a BERT-based neural network for sentiment analysis and a GRU for price forecasting. The study claims that by integrating sentiment analysis and deep learning, the cryptocurrency market can be predicted with greater accuracy. The mean absolute percentage error of 3.6% shows the potential in the synergy of both domains. Spotting market anomalies with unsupervised AI Unsupervised learning techniques, like clustering algorithms and autoencoders, are particularly effective at anomaly detection in the cryptocurrency space. Especially in tumultuous markets like Bitcoin, where discerning unpredictable patterns is key, such tools excel at detecting unexpected patterns. Models can flag possible scenarios of flash crashes, price manipulations and other sophisticated activities on the exchange by monitoring real-time market data against historical market data. For instance, if Bitcoin’s price in USD drops sharply while asset dependencies remain constant, AI can detect the outlier and notify human traders or activate protective protocols. Mining blockchain data for AI insights One of Bitcoin’s most significant advantages is the transparency of its blockchain. On-chain data enables real-time monitoring of network activity and participant behaviour, like the number of active addresses, hash rate, wallet distributions, and transaction volumes. AI models can analyse such data to identify large-scale trends. For instance, in the previous bull runs, a surge in wallets containing 1-10 BTC (commonly associated with retail investors) was observed. Similarly, declines in miner flows to exchanges can signal supply restrictions. Reinforcement learning models are being taught to predict the impact of on-chain movements on Bitcoin’s market value. 
A hybrid of blockchain analytics and machine learning is redefining how analysts build predictive models based on clear, accessible public data. The new AI bots break down market borders Artificial intelligence has increased its foothold in cryptocurrency markets through autonomous trading systems. Unlike older trading bots that had a set checklist to complete, modern bots are highly sophisticated and employ flexible plans based on real-time data. Modern AI bots don’t just follow trends; they consider various factors influencing market prices. AI bots shift from trend-following to mean reversion and take into account price shifts and technical indicators. Some bots can simulate market conditions each second to determine statistically reasonable points for investment. Ethical and technical problems surrounding AI application on cryptocurrency Implementing AI in cryptocurrency trading can be risky, although rewarding. Over-fitting remains an issue, as models trained on historical data are less reliable in the face of black swan events or unexpected changes in regulations. Coordinated bot networks pose significant risks to trading volume and market sentiment. For this reason, many platforms have focused on publishing algorithmic audit trading reports for transparency and establishing ethics teams to mitigate any misuse of AI technologies. Explainability of AI models – transparency and accountability – is particularly important in applications related to finances because of the enormous risks involved and the fragile trust of users. April 2025 updates: Bitcoin and AI integration In April 2025, Glassnode reported that addresses holding between 1,000 and 10,000 Bitcoin surged to 2,014, up from 1,944 in early March. The increase in this number of “whales” has been accumulating since April 2024, suggesting that there is confidence returning from major holders. 
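The flash-crash flagging described in the anomaly-detection section earlier can be illustrated in miniature with a rolling z-score on one-step returns. This is a simple statistical stand-in for the clustering and autoencoder approaches the article mentions; the window size, threshold, and price series are illustrative assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(prices, window=5, threshold=3.0):
    """Flag indices in `prices` where the one-step return deviates more
    than `threshold` standard deviations from the trailing window."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    flagged = []
    for i in range(window, len(returns)):
        hist = returns[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(returns[i] - mu) / sigma > threshold:
            flagged.append(i + 1)  # shift by 1: returns[i] ends at prices[i+1]
    return flagged

# A steady drift with one sudden ~15% drop at the end.
prices = [100, 101, 102, 101, 103, 104, 105, 89]
print(flag_anomalies(prices))  # [7] -> the crash to 89 is flagged
```

A real deployment would run this on tick-level data and trigger alerts or protective protocols rather than just returning indices.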
Brief reflection: Bitcoin assessments enter the AI era Analysing and predicting the dynamics of Bitcoin and its price in USD through artificial intelligence is a trend that is here for the long run. It will be an essential part of strategy when dealing in cryptocurrency markets. AI provides unprecedented opportunities in financial market analysis through neural networks, mining on the blockchain, business behavioural prediction and creation of risk models. For AI specialists, this might represent the only practical example of having a blend of ultra-high-frequency data, actual working scenarios and comprehensive public systems that encourage collaboration. For the rest of us, we are shown a reality of being surrounded by systems that optimise buying and selling at lightning speeds. (Image source: Unsplash) The post AI transforms Bitcoin price prediction and market strategy appeared first on AI News. View the full article
  10. AI’s thirst for energy is ballooning into a monster of a challenge. And it’s not just about the electricity bills. The environmental fallout is serious, stretching to guzzling precious water resources, creating mountains of electronic waste, and, yes, adding to those greenhouse gas emissions we’re all trying to cut. As AI models get ever more complex and weave themselves into yet more parts of our lives, a massive question mark hangs in the air: can we power this revolution without costing the Earth? The numbers don’t lie: AI’s energy demand is escalating fast The sheer computing power needed for the smartest AI out there is on an almost unbelievable upward curve – some say it’s doubling roughly every few months. This isn’t a gentle slope; it’s a vertical climb that’s threatening to leave even our most optimistic energy plans in the dust. To give you a sense of scale, AI’s future energy needs could soon gulp down as much electricity as entire countries like Japan or the Netherlands, or even large US states like California. When you hear stats like that, you start to see the potential squeeze AI could put on the power grids we all rely on. 2024 saw a record 4.3% surge in global electricity demand, and AI’s expansion was a big reason why, alongside the boom in electric cars and factories working harder. Wind back to 2022, and data centres, AI, and even cryptocurrency mining were already accounting for nearly 2% of all the electricity used worldwide – that’s about 460 terawatt-hours (TWh). Jump to 2024, and data centres on their own use around 415 TWh, which is roughly 1.5% of the global total, and growing at 12% a year. AI’s direct share of that slice is still relatively small – about 20 TWh, or 0.02% of global energy use – but hold onto your hats, because that number is set to rocket upwards. The forecasts? Well, they’re pretty eye-opening. By the end of 2025, AI data centres around the world could demand an extra 10 gigawatts (GW) of power. 
That’s more than the entire power capacity of a place like Utah. Roll on to 2026, and global data centre electricity use could hit 1,000 TWh – similar to what Japan uses right now. And, by 2027, the global power hunger of AI data centres is tipped to reach 68 GW, which is almost what California had in total power capacity back in 2022. Towards the end of this decade, the figures get even more jaw-dropping. Global data centre electricity consumption is predicted to double to around 945 TWh by 2030, which is just shy of 3% of all the electricity used on the planet. OPEC reckons data centre electricity use could even triple to 1,500 TWh by then. And Goldman Sachs? They’re saying global power demand from data centres could leap by as much as 165% compared to 2023, with those data centres specifically kitted out for AI seeing their demand shoot up by more than four times. There are even suggestions that data centres could be responsible for up to 21% of all global energy demand by 2030 if you count the energy it takes to get AI services to us, the users. When we talk about AI’s energy use, it mainly splits into two big chunks: training the AI, and then actually using it. Training enormous models, like GPT-4, takes a colossal amount of energy. Just to train GPT-3, for example, it’s estimated they used 1,287 megawatt-hours (MWh) of electricity, and GPT-4 is thought to have needed a whopping 50 times more than that. While training is a power hog, it’s the day-to-day running of these trained models that can chew through over 80% of AI’s total energy. It’s reported that asking ChatGPT a single question uses about ten times more energy than a Google search (we’re talking roughly 2.9 Wh versus 0.3 Wh). With everyone jumping on the generative AI bandwagon, the race is on to build ever more powerful – and therefore more energy-guzzling – data centres. So, can we supply energy for AI – and for ourselves? This is the million-dollar question, isn’t it? 
Can our planet’s energy systems cope with this new demand? We’re already juggling a mix of fossil fuels, nuclear power, and renewables. If we’re going to feed AI’s growing appetite sustainably, we need to ramp up and diversify how we generate energy, and fast. Naturally, renewable energy – solar, wind, hydro, geothermal – is a huge piece of the puzzle. In the US, for instance, renewables are set to go from 23% of power generation in 2024 to 27% by 2026. The tech giants are making some big promises; Microsoft, for example, is planning to buy 10.5 GW of renewable energy between 2026 and 2030 just for its data centres. AI itself could actually help us use renewable energy more efficiently, perhaps cutting energy use by up to 60% in some areas by making energy storage smarter and managing power grids better. But let’s not get carried away. Renewables have their own headaches. The sun doesn’t always shine, and the wind doesn’t always blow, which is a real problem for data centres that need power around the clock, every single day. The batteries we have now to smooth out these bumps are often expensive and take up a lot of room. Plus, plugging massive new renewable projects into our existing power grids can be a slow and complicated business. This is where nuclear power is starting to look more appealing to some, especially as a steady, low-carbon way to power AI’s massive energy needs. It delivers that crucial 24/7 power, which is exactly what data centres crave. There’s a lot of buzz around Small Modular Reactors (SMRs) too, because they’re potentially more flexible and have beefed-up safety features. And it’s not just talk; big names like Microsoft, Amazon, and Google are seriously looking into nuclear options. Matt Garman, who heads up AWS, recently put it plainly to the BBC, calling nuclear a “great solution” for data centres. 
He said it’s “an excellent source of zero carbon, 24/7 power.” He also stressed that planning for future energy is a massive part of what AWS does. “It’s something we plan many years out,” Garman mentioned. “We invest ahead. I think the world is going to have to build new technologies. I believe nuclear is a big part of that, particularly as we look 10 years out.” Still, nuclear power isn’t a magic wand. Building new reactors takes a notoriously long time, costs a fortune, and involves wading through complex red tape. And let’s be frank, public opinion on nuclear power is still a bit shaky, often because of past accidents, even though modern reactors are much safer. The sheer speed at which AI is developing also creates a bit of a mismatch with how long it takes to get a new nuclear plant up and running. This could mean we end up leaning even more heavily on fossil fuels in the short term, which isn’t great for our green ambitions. Plus, the idea of sticking data centres right next to nuclear plants has got some people worried about what that might do to electricity prices and reliability for everyone else. Not just kilowatts: Wider environmental shadow of AI looms AI’s impact on the planet goes way beyond just the electricity it uses. Those data centres get hot, and cooling them down uses vast amounts of water. Your average data centre sips about 1.7 litres of water for every kilowatt-hour of energy it burns through. Back in 2022, Google’s data centres reportedly drank their way through about 5 billion gallons of fresh water – that’s a 20% jump from the year before. Some estimates suggest that for every kWh a data centre uses, it might need up to two litres of water just for cooling. Put it another way, global AI infrastructure could soon be chugging six times more water than the entirety of Denmark. And then there’s the ever-growing mountain of electronic waste, or e-waste. 
Because AI tech – especially specialised hardware like GPUs and TPUs – moves so fast, old kit gets thrown out more often. We could be looking at AI contributing to an e-waste pile-up from data centres hitting five million tons every year by 2030. Even making the AI chips and all the other bits for data centres takes a toll on our natural resources and the environment. It means mining for critical minerals like lithium and cobalt, often using methods that aren’t exactly kind to the planet. Just to make one AI chip can take over 1,400 litres of water and 3,000 kWh of electricity. This hunger for new hardware is also pushing for more semiconductor factories, which, guess what, often leads to more gas-powered energy plants being built. And, of course, we can’t forget the carbon emissions. When AI is powered by electricity generated from burning fossil fuels, it adds to the climate change problem we’re all facing. It’s estimated that training just one big AI model can pump out as much CO2 as hundreds of US homes do in a year. If you look at the environmental reports from the big tech companies, you can see AI’s growing carbon footprint. Microsoft’s yearly emissions, for example, went up by about 40% between 2020 and 2023, mostly because they were building more data centres for AI. Google also reported that its total greenhouse gas emissions have shot up by nearly 50% over the last five years, with the power demands of its AI data centres being a major culprit. Can we innovate our way out? It might sound like all doom and gloom, but a combination of new ideas could help. A big focus is on making AI algorithms themselves more energy-efficient. Researchers are coming up with clever tricks like “model pruning” (stripping out unnecessary bits of an AI model), “quantisation” (using less precise numbers, which saves energy), and “knowledge distillation” (where a smaller, thriftier AI model learns from a big, complex one). 
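Of the efficiency tricks just listed, quantisation is the easiest to show in miniature: mapping 32-bit float weights onto 8-bit integers cuts storage roughly fourfold at the cost of a small, bounded precision loss. The sketch below is a toy affine quantiser, not the API of any particular framework:

```python
def quantize(weights, bits=8):
    """Affine-quantise floats to signed ints; return the ints plus the
    scale factor needed to recover approximate float values."""
    qmax = 2 ** (bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.003, 0.5]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each int8 weight needs a quarter of the bits of a float32, and the
# round-trip error per weight is bounded by scale / 2.
print(max(abs(a - b) for a, b in zip(weights, approx)) <= scale / 2)
```

Pruning and distillation attack the problem differently (fewer weights, smaller models), but all three trade a little accuracy for a large cut in the energy each inference consumes.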
Designing smaller, more specialised AI models that do specific jobs with less power is also a priority. Inside data centres, things like “power capping” (putting a lid on how much power hardware can draw) and “dynamic resource allocation” (shifting computing power around based on real-time needs and when renewable energy is plentiful) can make a real difference. Software that’s “AI-aware” can even shift less urgent AI jobs to times when energy is cleaner or demand on the grid is lower. AI can even be used to make the cooling systems in data centres more efficient. On-device AI could also help to reduce power consumption. Instead of sending data off to massive, power-hungry cloud data centres, the AI processing happens right there on your phone or device. This could slash energy use, as the chips designed for this prioritise being efficient over raw power. And we can’t forget about rules and regulations. Governments are starting to wake up to the need to make AI accountable for its energy use and wider environmental impact. Having clear, standard ways to measure and report AI’s footprint is a crucial first step. We also need policies that encourage companies to make hardware that lasts longer and is easier to recycle, to help tackle that e-waste mountain. Things like energy credit trading systems could even give companies a financial reason to choose greener AI tech. It’s worth noting that the United Arab Emirates and the United States shook hands this week on a deal to build the biggest AI campus outside the US in the Gulf. While this shows just how important AI is becoming globally, it also throws a spotlight on why all these energy and environmental concerns need to be front and centre for such huge projects. Finding a sustainable future for AI AI has the power to do some amazing things, but its ferocious appetite for energy is a serious hurdle. The predictions for its future power demands are genuinely startling, potentially matching what whole countries use. 
If we’re going to meet this demand, we need a smart mix of energy sources. Renewables are fantastic for the long run, but they have their wobbles when it comes to consistent supply and scaling up quickly. Nuclear power – including those newer SMRs – offers a reliable, low-carbon option that’s definitely catching the eye of big tech companies. But we still need to get our heads around the safety, cost, and how long they take to build. And remember, it’s not just about electricity. AI’s broader environmental impact – from the water it drinks to cool data centres, to the growing piles of e-waste from its hardware, and the resources it uses up during manufacturing – is huge. We need to look at the whole picture if we’re serious about lessening AI’s ecological footprint. The good news? There are plenty of promising ideas and innovations bubbling up. Energy-saving AI algorithms, clever power management in data centres, AI-aware software that can manage workloads intelligently, and the shift towards on-device AI all offer ways to cut down on energy use. Plus, the fact that we’re even talking about AI’s environmental impact more means that discussions around policies and rules to push for sustainability are finally happening. Dealing with AI’s energy and environmental challenges needs everyone – researchers, the tech industry, and policymakers – to roll up their sleeves and work together, and fast. If we make energy efficiency a top priority in how AI is developed, invest properly in sustainable energy, manage hardware responsibly from cradle to grave, and put supportive policies in place, we can aim for a future where AI’s incredible potential is unlocked in a way that doesn’t break our planet. The race to lead in AI has to be a race for sustainable AI too. (Photo by Nejc Soklič) See also: AI tool speeds up government feedback, experts urge caution Want to learn more about AI and big data from industry leaders? 
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Will the AI boom fuel a global energy crisis? appeared first on AI News.
  11. When Huawei shocked the global tech industry with its Mate 60 Pro smartphone featuring an advanced 7-nanometer chip despite sweeping US technology restrictions, it demonstrated that innovation finds a way even under the heaviest sanctions. The US response was swift and predictable: tighter export controls and expanded restrictions. Now, with reports suggesting Huawei’s Ascend AI chips are approaching Nvidia-level performance—though the Chinese company remains characteristically silent about these developments—America has preemptively escalated its semiconductor war to global proportions. The Trump administration’s declaration that using Huawei’s Ascend chips “anywhere in the world” violates US export controls reveals more than policy enforcement—it exposes a fundamental fear that American technological dominance may no longer be guaranteed through restrictions alone. This global AI chip ban emerged on May 14, 2025, when President Donald Trump’s administration rescinded the Biden-era AI Diffusion Rule without revealing details of a replacement policy. Instead, the Bureau of Industry and Security (BIS) announced guidance to “strengthen export controls for overseas AI chips,” specifically targeting Huawei’s Ascend processors. The new guidelines warn of “enforcement actions” including imprisonment and fines for any global business found using these Chinese-developed chips—a fundamental departure from traditional export controls, which typically govern what leaves a country’s borders, not what happens entirely outside them. The scope of America’s tech authority The South China Morning Post reports that these new guidelines explicitly single out Huawei’s Ascend chips after scrapping the Biden administration’s country-tiered “AI diffusion” rule. But the implications of this global AI chip ban extend far beyond bilateral US-China tensions.
By asserting jurisdiction over global technology choices, America essentially demands that sovereign nations and independent businesses worldwide comply with its domestic policy preferences. This extraterritorial approach raises fundamental questions about national sovereignty and international trade. Should a Brazilian AI startup be prevented from using the most cost-effective chip solution simply because those chips are manufactured by a Chinese company? Should European research institutions abandon promising collaborations because they involve hardware Washington deems unacceptable? According to Financial Times reporting, BIS stated that Huawei’s Ascend 910B, 910C, and 910D were all subject to the regulations as they were likely “designed with certain US software or technology or produced with semiconductor manufacturing equipment that is the direct product of certain US-origin software or technology, or both.” Industry resistance to universal controls Even within the United States, the chipmaking sector expresses alarm about Washington’s semiconductor policies. The aggressive expansion of export controls creates uncertainty beyond Chinese companies, affecting global supply chains and innovation partnerships built over decades. “Washington’s new guidelines are essentially forcing global tech firms to pick a side – Chinese or US hardware – which will further deepen the tech divide between the world’s two largest economies,” analysts note. This forced binary choice ignores the nuanced reality of modern technology development, where innovation emerges from diverse, international collaborations. The economic implications prove staggering.
Recent analysis indicates Huawei’s Ascend 910B AI chip delivers 80% of Nvidia A100’s efficiency when training large language models, though “in some other tests, Ascend chips can beat the A100 by 20%.” By blocking access to competitive alternatives, this global AI chip ban may inadvertently stifle innovation and maintain artificial market monopolies. The innovation paradox Perhaps most ironically, policies intended to maintain American technological leadership may undermine it. Nvidia CEO Jensen Huang acknowledged earlier this month that Huawei was “one of the most formidable technology companies in the world,” noting that China was “not behind” in AI development. Attempting to isolate such capabilities through global restrictions may accelerate the development of parallel technology ecosystems, ultimately reducing American influence rather than preserving it. The secrecy surrounding Huawei’s Ascend chips—with the company keeping “details of its AI chips close to its chest, with only public information coming from third-party teardown reports”—has intensified with US sanctions. Following escalating restrictions, Huawei stopped officially disclosing information about the series, including release dates, production schedules, and fabrication technologies. The chips specified in current US restrictions, including the Ascend 910C and 910D, haven’t even been officially confirmed by Huawei. Geopolitical ramifications In a South China Morning Post report, Chim Lee, a senior analyst at the Economist Intelligence Unit, warns that “if the guidance is enforced strictly, it is likely to provoke retaliation from China” and could become “a negotiating point in ongoing trade talks between Washington and Beijing.” This assessment underscores the counterproductive nature of aggressive unilateral action in an interconnected global economy. The semiconductor industry thrives on international collaboration, shared research, and open competition.
Policies that fragment this ecosystem serve no one’s long-term interests—including America’s. As the global community grapples with challenges from climate change to healthcare innovation, artificial barriers preventing the best minds from accessing optimal tools ultimately harm human progress. Beyond binary choices The question isn’t whether nations should protect strategic interests—they should and must. But when export controls extend “anywhere in the world,” we cross from legitimate national security policy into technological authoritarianism. The global technology community deserves frameworks that balance security concerns with innovation imperatives. This global AI chip ban risks accelerating the technological fragmentation it seeks to prevent. History suggests markets divided by political decree often spawn parallel innovation ecosystems that compete more effectively than those operating under artificial constraints. Rather than extending controls globally, a strategic approach would focus on out-innovating competitors through superior technology and international partnerships. The current path toward technological bifurcation serves neither American interests nor global innovation—it simply creates a more fragmented, less efficient world where artificial barriers replace natural competition. The semiconductor industry’s future depends on finding sustainable solutions that address legitimate security concerns without dismantling the collaborative networks that drive technological advancement. As this global AI chip ban takes effect, the world watches to see whether innovation will flourish through competition or fragment through control. See also: Huawei’s AI hardware breakthrough challenges Nvidia’s dominance Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. 
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Can the US really enforce a global AI chip ban? appeared first on AI News.
  12. America’s quest to protect its semiconductor technology from China has taken increasingly dramatic turns over the past few years—from export bans to global restrictions—but the latest proposal from Congress ventures into unprecedented territory. Lawmakers are now pushing for mandatory GPS-style tracking embedded in every AI chip exported from the United States, essentially turning advanced semiconductors into devices that report their location back to Washington. On May 15, 2025, a bipartisan group of eight House representatives introduced the Chip Security Act, which would require companies like Nvidia to embed location verification mechanisms in their processors before export. This represents perhaps the most invasive approach yet in America’s technological competition with China, moving far beyond restricting where chips can go to actively monitoring where they end up. The mechanics of AI chip surveillance Under the proposed Chip Security Act, AI chip surveillance would become mandatory for all “covered integrated circuit products”—including those classified under Export Control Classification Numbers 3A090, 3A001.z, 4A090, and 4A003.z. Companies like Nvidia would be required to embed location verification mechanisms in their AI chips before export, reexport, or in-country transfer to foreign nations. Representative Bill Huizenga, the Michigan Republican who introduced the House bill, stated that “we must employ safeguards to help ensure export controls are not being circumvented, allowing these advanced AI chips to fall into the hands of nefarious actors.” His co-lead, Representative Bill Foster—an Illinois Democrat and former physicist who designed chips during his scientific career—added, “I know that we have the technical tools to prevent powerful AI technology from getting into the wrong hands.” The legislation goes far beyond simple location tracking. 
Companies would face ongoing surveillance obligations, required to report any credible information about chip diversion, including location changes, unauthorized users, or tampering attempts. This creates a continuous monitoring system that extends indefinitely beyond the point of sale, fundamentally altering the relationship between manufacturers and their products. Cross-party support for technology control Perhaps most striking about this AI chip surveillance initiative is its bipartisan nature. The bill enjoys broad support across party lines, co-led by House Select Committee on China Chairman John Moolenaar and Ranking Member Raja Krishnamoorthi. Other cosponsors include Representatives Ted Lieu, Rick Crawford, Josh Gottheimer, and Darin LaHood. Moolenaar said that “the Chinese Communist Party has exploited weaknesses in our export control enforcement system—using shell companies and smuggling networks to divert sensitive US technology.” The bipartisan consensus on AI chip surveillance reflects how deeply the China challenge has penetrated American political thinking, transcending traditional partisan divisions. The Senate has already introduced similar legislation through Senator Tom Cotton, suggesting that semiconductor surveillance has broad congressional support. Coordination between chambers indicates that some form of AI chip surveillance may become law regardless of which party controls Congress. Technical challenges and implementation questions The technical requirements for implementing AI chip surveillance raise significant questions about feasibility, security, and performance. The bill mandates that chips implement “location verification using techniques that are feasible and appropriate” within 180 days of enactment, but provides little detail on how such mechanisms would work without compromising chip performance or introducing new vulnerabilities.
For industry leaders like Nvidia, implementing mandatory surveillance technology could fundamentally alter product design and manufacturing processes. Each chip would need embedded capabilities to verify its location, potentially requiring additional components, increased power consumption, and processing overhead that could impact performance—precisely what customers in AI applications cannot afford. The bill also grants the Secretary of Commerce broad enforcement authority to “verify, in a manner the Secretary determines appropriate, the ownership and location” of exported chips. This creates a real-time surveillance system where the US government could potentially track every advanced semiconductor worldwide, raising questions about data sovereignty and privacy. Commercial surveillance meets national security The AI chip surveillance proposal represents an unprecedented fusion of national security imperatives with commercial technology products. Unlike traditional export controls that simply restrict destinations, the approach creates ongoing monitoring obligations that blur the lines between private commerce and state surveillance. Representative Foster’s background as a physicist lends technical credibility to the initiative, but it also highlights how scientific expertise can be enlisted in geopolitical competition. The legislation reflects a belief that technical solutions can solve political problems—that embedding surveillance capabilities in semiconductors can prevent their misuse. Yet the proposed law raises fundamental questions about the nature of technology export in a globalized world. Should every advanced semiconductor become a potential surveillance device? How will mandatory AI chip surveillance affect innovation in countries that rely on US technology? What precedent does this set for other nations seeking to monitor their technology exports?
Accelerating technological decoupling The mandatory AI chip surveillance requirement could inadvertently accelerate the development of alternative semiconductor ecosystems. If US chips come with built-in tracking mechanisms, countries may intensify efforts to develop domestic alternatives or source from suppliers without such requirements. China, already investing heavily in semiconductor self-sufficiency following years of US restrictions, may view these surveillance requirements as further justification for technological decoupling. The irony is striking: efforts to track Chinese use of US chips may ultimately reduce their appeal and market share in global markets. Meanwhile, allied nations may question whether they want their critical infrastructure dependent on chips that can be monitored by the US government. The legislation’s broad language suggests that AI chip surveillance would apply to all foreign countries, not just adversaries, potentially straining relationships with partners who value technological sovereignty. The future of semiconductor governance As the Trump administration continues to formulate its replacement for Biden’s AI Diffusion Rule, Congress appears unwilling to wait. The Chip Security Act represents a more aggressive approach than traditional export controls, moving from restriction to active surveillance in ways that could reshape the global semiconductor industry. This evolution reflects deeper changes in how nations view technology exports in an era of great power competition. The semiconductor industry, once governed primarily by market forces and technical standards, increasingly operates under geopolitical imperatives that prioritize control over commerce. Whether AI chip surveillance becomes law depends on congressional action and industry response.
But the bipartisan support suggests that some form of semiconductor monitoring may be inevitable, marking a new chapter in the relationship between technology, commerce, and national security. Conclusion: The end of anonymous semiconductors from America? The question facing the industry is no longer whether the US will control technology exports, but how extensively it will monitor them after they leave American shores. In this emerging paradigm, every chip becomes a potential intelligence asset, and every export a data point in a global surveillance network. The semiconductor industry now faces a critical choice: adapt to a future where products carry their own tracking systems, or risk being excluded from the US market entirely. As Congress pushes for mandatory AI chip surveillance, we may be witnessing the end of anonymous semiconductors and the beginning of an era where every processor knows exactly where it belongs—and reports back accordingly. See also: US-China tech war escalates with new AI chips export controls Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Congress pushes GPS tracking for every exported semiconductor appeared first on AI News.
  13. Microsoft is cutting about 7,000 jobs, or 3% of its workforce. The move isn’t about poor performance or falling revenue. It’s a clear shift in strategy—fewer layers, more engineers, and more investment in artificial intelligence. The layoffs affect staff across divisions and global offices. But the bulk of those let go are in middle management and non-technical roles, a pattern showing up across tech. The message: reduce overhead, speed up product cycles, and make room for greater AI spending. The numbers behind the shift Microsoft ended its latest quarter with $70.07 billion in revenue. That beat Wall Street estimates and shows strong business health, and the company plans to spend as much as $80 billion this fiscal year—mainly on data centres designed for training and running AI models. That’s a big leap in infrastructure spending, but it also explains why Microsoft is trimming elsewhere. AI models are compute-heavy and demand new types of hardware. Storage, cooling, and power need to scale. Building that capacity takes money and time, and Microsoft appears to be cutting anything that slows the push. Management in the firing line Most cuts hit middle managers and support staff. These are roles that help coordinate, review, and report—but don’t directly write code or design systems. While these positions have long helped large companies function, they’re now being seen as blockers to fast action. Sources told Business Insider that Microsoft wants a higher ratio of technical staff to managers. This isn’t just about saving costs; it’s about reducing the number of people between engineers and final decisions. Analyst Rishi Jaluria told the Financial Times that tech giants like Microsoft have “too many layers.” He said companies are trying to strip back bureaucracy as they chase AI leadership. Microsoft has not publicly broken down which departments were most affected.
But reports suggest LinkedIn, a Microsoft subsidiary, saw job cuts as part of this broader shift. Aligning with a broader industry trend Microsoft isn’t the only company trimming management; Amazon, Google, and Meta have all made similar moves. They’re removing layers and pushing more decisions closer to those building the product. For Microsoft, the changes come after several earlier rounds of cuts. In early 2024, the company laid off around 2,000 workers in performance-based trims. This new wave is different, as it targets structure, not staff output. $80 billion on AI infrastructure Microsoft’s investment plan puts AI at the centre of its growth. According to Reuters, the company wants to spend up to $80 billion in fiscal 2025, much of it going toward AI-enabled data centres. These centres power large language models, natural language tools, and enterprise AI systems. Without them, even the best models won’t run at scale. The company’s move shows how serious it is about owning the AI backbone. This is about more than software updates; it’s about physical hardware, cloud capacity, and tight control over how AI gets built and used. Microsoft’s early partnership with OpenAI gave it a jumpstart, but Google, Meta, Amazon, and Apple are all making big AI moves. Microsoft appears to be betting that first-mover advantage is only as strong as the infrastructure behind it. Employee reactions reflect mixed sentiment As with most layoffs, employee reactions vary. Some posts on social media reflect understanding, others voice concern about job security and team stability. Several ex-employees described the mood as “tense but expected.” Many said they had been preparing for changes since Microsoft’s 2024 performance cuts. Some worry that too much focus on AI will weaken support roles, and others believe cutting managers will create confusion rather than clarity. Still, public sentiment shows a growing acceptance that AI is changing what jobs look like—even at the biggest firms.
What this means for the industry Microsoft’s restructuring sets a tone: Strong revenue no longer guarantees job security, and growth in AI now drives org charts, not the other way around. Middle management is no longer safe, and non-technical roles must prove direct value to AI goals. Even product teams may face more pressure to automate or streamline. For employees, the message is clear. Learn how AI fits your job—or risk being cut from the plan. For other tech firms, Microsoft’s strategy may serve as a roadmap. Spending more on AI means spending less elsewhere, and many companies will likely follow that playbook to stay competitive. Long-term questions remain The short-term logic is clear. Microsoft is cutting structure to fund AI growth. But over time, companies will need to balance innovation with internal support. Removing middle managers may speed up some work, but it can also reduce mentorship, training, and context—things that help teams stay aligned. AI may need more data and compute. But people still build the tools, ask the right questions, and set the goals. How companies treat those people now will shape how well they compete later. (Photo by Ron Lach) See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Why Microsoft is cutting roles despite strong earnings appeared first on AI News.
  14. One of the ways in which organisations are using the latest AI algorithms to help them grow and thrive is the adoption of privately-held AI models to align their business strategies. The differentiation between private and public AI is important in this context – most organisations are rightly wary of allowing public AIs access to what are sensitive data sets, such as HR information, financial data, and details of operational history. It stands to reason that if an AI is given specific data on which to base its responses, its output will be more relevant, and therefore more effective, in helping decision-makers judge how to strategise. Using private reasoning engines is the logical way that companies can get the best results from AI and keep their intellectual property safe. Enterprise-specific data and the ability to fine-tune a local AI model give organisations the ability to provide bespoke forecasting and operational tuning that are more grounded in the day-to-day reality of a company’s work. A Deloitte Strategy Insight paper calls private AI a “bespoke compass” and positions the use of internal data as a competitive advantage, while Accenture describes AIs as “poised to provide the most significant economic uplift and change to work since the agricultural and industrial revolutions.” There is the possibility, however, that like traditional business intelligence, using historical data drawn from several years of operations across the enterprise can entrench decision-making in patterns from the past. McKinsey says companies are in danger of “mirroring their institutional past in algorithmic amber.” The Harvard Business Review picks up on some of the technical complexity, stating that the act of customising a model so that its activities are more relevant to the company is difficult, and perhaps, therefore, not a task to be taken on by any but the most AI-literate at a level of data science and programming.
MIT Sloan strikes a balance between the fervent advocates and the conservative voices for private AI in business strategising. It advises that AI be regarded as a co-pilot, and urges continual questioning and verification of AI output, especially when the stakes are high. Believe in the revolution However, decision-makers considering pursuing this course of action (getting on the AI wave, but doing so in a private, safety-conscious way) may wish to consider the motivations of those sources of advice that advocate strongly for AI enablement in this way. Deloitte, for example, builds and manages AI solutions for clients using custom infrastructure such as its factory-as-a-service offerings, while Accenture has practices dedicated to its clients’ AI strategy, such as Accenture Applied Intelligence. It partners with AWS and Azure, building bespoke AI systems for Fortune 500 companies, among others, and Deloitte partners with Oracle and Nvidia. With ‘skin in the game’, phrases such as “the most significant […] change to work since the agricultural and industrial revolutions” and a “bespoke compass” are inspiring, but the vendors’ motivations may not be entirely altruistic. Advocates for AI in general rightly point to the ability of models to identify trends and statistical undercurrents much more efficiently than humans. Given the mass of data available to the modern enterprise, comprising both internal and externally-available information, having software that can parse data at scale is an incredible advantage. Instead of manually analysing huge repositories of data – which is time-consuming and error-prone – AI can see through the chaff and surface real, actionable insights. Asking the right questions Additionally, AI models can interpret queries couched in normal language, and make predictions based on empirical information, which, in the context of private AIs, is highly relevant to the organisation.
Relatively unskilled personnel can query data without having skills in statistical analysis or database query languages, and get answers that otherwise would have involved multiple teams and skill-sets drawn from across the enterprise. That time-saving alone is considerable, letting organisations focus on strategy, rather than forming the necessary data points and manually querying the information they’ve managed to gather. Both McKinsey and Gartner warn, however, of overconfidence and data obsolescence. On the latter, historical data may not be relevant to strategising, especially if records go back several years. Overconfidence is perhaps best termed in the context of AI as operators trusting AI responses without question, not delving independently into responses’ detail, or in some cases, taking as fact the responses to badly-phrased queries. For any software algorithm, human phrases such as “base your findings on our historical data” are open to interpretation, unlike, for example, “base your findings on the last twelve months’ sales data, ignoring outliers that differ from the mean by over 30%, although do state those instances for me to consider.” Software of experience Organisations might pursue private AI solutions alongside mature, existing business intelligence platforms. SAP BusinessObjects is nearly 30 years old, yet a youngster compared to SAS Business Intelligence, which has been around since before the internet became mainstream in the 1990s. Even relative newcomers such as Microsoft Power BI represent at least a decade of development, iteration, customer feedback, and real-world use in business analysis. It seems sensible, therefore, that private AI’s deployment on business data should be regarded as an addition to the strategiser’s toolkit, rather than a silver bullet that replaces “traditional” tools.
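The difference between those two phrasings is easier to see when the precise one is turned into an explicit rule. The sketch below is purely illustrative (the sales figures and the `split_outliers` helper are hypothetical), showing the deterministic 30%-from-the-mean filter the well-phrased query describes:

```python
# Hypothetical sketch of the deterministic filter behind the precise query:
# ignore outliers that differ from the mean by over 30%, but report them.

def split_outliers(sales, threshold=0.30):
    """Partition sales figures into kept values and outliers that deviate
    from the mean by more than `threshold` (30% here)."""
    mean = sum(sales) / len(sales)
    kept, outliers = [], []
    for value in sales:
        if abs(value - mean) / mean > threshold:
            outliers.append(value)
        else:
            kept.append(value)
    return kept, outliers

# Twelve months of illustrative sales figures (hypothetical data).
monthly_sales = [100, 105, 98, 102, 97, 210, 101, 99, 103, 100, 96, 30]
kept, flagged = split_outliers(monthly_sales)
print(flagged)  # → [210, 30], the instances surfaced "for me to consider"
```

The vague query leaves every one of these choices (which period, which threshold, what to do with the exclusions) to the model’s interpretation; the precise one pins them all down.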
For users of private AI that have the capacity to audit and tweak their model’s inputs and inner algorithms, retaining human control and oversight is important – just as it is with tools like Oracle’s Business Intelligence suite. There are some scenarios where the intelligent processing of and acting on real-time data (online retail pricing mechanisms, for example) gives AI analysis a competitive edge over the incumbent BI platforms. But AI has yet to develop into a magical Swiss Army Knife for business strategy. Until AI purposed for business data analysis is as developed, iterated on, battle-hardened, and mature as some of the market’s go-to BI platforms, early adopters might temper the enthusiasm of AI and AI service vendors with practical experience and a critical eye. AI is a new tool, and one with a great deal of potential. However, it remains first-generation in its current guises, public and private. (Image source: “It’s about rules and strategy” by pshutterbug is licensed under CC BY 2.0.) The post AI in business intelligence: Caveat emptor appeared first on AI News.
  15. An AI tool aims to wade through mountains of government feedback and understand what the public is trying to say. UK Technology Secretary Peter Kyle said: “No one should be wasting time on something AI can do quicker and better, let alone wasting millions of taxpayer pounds on outsourcing such work to contractors.” This digital assistant, aptly named ‘Consult’, just aced its first big test with the Scottish Government. The Scottish Government threw Consult in at the deep end, asking it to make sense of public opinion on regulating non-surgical cosmetic procedures such as lip fillers and laser hair removal. Consult came back with findings almost identical to what human officials had pieced together. Now, the plan is to roll this tech out across various government departments. The current way of doing things is expensive and slow. Millions of pounds often go to outside contractors just to analyse what the public thinks. Consult is part of a broader push to build a leaner, more responsive UK government—one that can deliver on its ‘Plan for Change’ without breaking the bank or taking an age to do it. So, how did it fare in Scotland? Consult chewed through responses from over 2,000 people. Using generative AI, it picked out the main themes and concerns bubbling up from the feedback across six key questions. Of course, Consult wasn’t left completely to its own devices. Experts in the Scottish Government double-checked and fine-tuned these initial themes. Then, the AI got back to work to sort individual responses into these categories. Officials ended up with more precious time to consider what people were saying and what it meant for policy. Because this was Consult’s first live outing, the Scottish Government went through every single response by hand too—just to be sure. Figuring out exactly what someone means in a written comment and then deciding which ‘theme’ it fits under can be a bit subjective. Even humans don’t always agree.
When the government compared Consult’s handiwork to human analysis, the AI was right most of the time. Where there were differences, they were so minor they didn’t change the overall picture of what mattered most to people. Consult is part of a British AI toolkit called ‘Humphrey’—a suite of digital helpers designed to free up civil servants from admin and cut down on those contractor bills. It’s all part of a grander vision to use technology to sharpen up public services, aiming to find £45 billion in productivity savings. The goal is a more nimble government that is better at delivering that ‘Plan for Change’ we keep hearing about. “After demonstrating such promising results, Humphrey will help us cut the costs of governing and make it easier to collect and comprehensively review what experts and the public are telling us on a range of crucial issues,” added Kyle. “The Scottish Government has taken a bold first step. Very soon, I’ll be using Consult, within Humphrey, in my own department and others in Whitehall will be using it too – speeding up our work to deliver the Plan for Change.” Over in Scotland, Public Health Minister Jenni Minto said: “Using the tool was very beneficial in helping the Scottish Government understand more quickly what people wanted us to hear and our respondents’ range of views. “Using this tool has allowed the Scottish Government to move more quickly to a focus on the policy questions and dive into the detail of the evidence we’ve been presented with, while remaining confident that we have heard the strong views expressed by respondents.” Of course, like many AI deployments in government, it’s still early days, and Consult is officially still in the trial phase. More number-crunching and testing are on the cards to make sure it’s working just as it should before any big decisions about a full rollout are made. But the potential here is huge. The government runs about 500 consultations every year. 
If Consult lives up to its promise, it could save officials a staggering 75,000 days of analysis annually. And what did the civil servants who first worked with Consult think? They were reportedly “pleasantly surprised,” finding the AI’s initial analysis a “useful starting point.” Others raved that it “saved [them] a heck of a lot of time” and let them “get to the analysis and draw out what’s needed next” much faster. Interestingly, they also felt Consult brought a new level of fairness to the table. As one official put it, its use “takes away the bias and makes it more consistent,” preventing individual analysts from, perhaps unconsciously, letting their “own preconceived ideas” colour the findings. Some consultations receive tens, even hundreds of thousands of responses. Given how well Consult has performed in these early tests, it won’t be long before it’s used on these massive consultations. It’s worth noting that humans aren’t being kicked out of the loop. Consult has been built to keep the experts involved every step of the way. Officials will always review the themes the AI suggests and how it sorts the responses. They’ll have an interactive dashboard to play with, letting them filter and search for specific insights. It’s about AI doing the heavy lifting, so the humans can do the smart thinking. Experts urge caution about the use of AI in government This move towards AI in government isn’t happening in a vacuum, and experts are watching closely. Stuart Harvey, CEO of Datactics, commented: “Using AI to speed up public consultations is a great example of how technology can improve efficiency and save money. But AI is only as good as the data behind it. For tools like this to work well and fairly, government departments need to make sure their data is accurate, up-to-date, and properly managed. “People need to trust the decisions made with AI. That means making sure the process is clear, well-governed, and ethical. 
If the data is messy or poorly handled, it can lead to biased or unreliable outcomes. “As the government expands its use of AI in public services, it’s vital to invest in strong data practices. That includes building clear and consistent data systems, making data accessible for review, and keeping humans involved in key decisions—especially when it comes to hearing from the public.” This sentiment is echoed by academics. Professor Michael Rovatsos from the University of Edinburgh, for instance, acknowledges the benefits but also wisely cautions about the risks of AI biases and even the potential for these tools to be manipulated. He’s calling for tough safeguards and ongoing investment to make sure any AI tool used by the government remains reliable and fair. Stuart Munton, Chief for Group Operations at AND Digital, added: “The government’s use of AI to speed up public consultations is a welcome step toward smarter, more efficient public services. However, as AI adoption grows, we must ensure that people – not just technology – are at the heart of this transformation.” “Tools like this will only reach their full potential if we invest in equipping public sector teams with the right skills and training. Empowering diverse talent to work with AI will not only improve how these tools perform but also ensure that innovation is inclusive to real-world needs.” If done right, with these expert caveats in mind, AI tools like Consult have the potential to improve how governments listen, learn, and make policy based on public opinion. The UK government isn’t hanging about; the plan is to get Consult working across various departments by the end of 2025. (Photo by Scott Rodgerson) See also: US slams brakes on AI Diffusion Rule, hardens chip export curbs Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. 
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI tool speeds up government feedback, experts urge caution appeared first on AI News.
  16. Alibaba has unveiled Wan2.1-VACE, an open-source AI model designed to shake up how we create and edit videos. VACE isn’t appearing out of thin air; it’s part of Alibaba’s broader Wan2.1 family of video AI models. And they’re making a rather bold claim for it, stating it’s the “first open-source model in the industry to provide a unified solution for various video generation and editing tasks.” If Alibaba can succeed in shifting users away from having to juggle multiple, separate tools towards one streamlined hub, it could be a true game-changer. So, what can this thing actually do? Well, for starters, it can whip up videos using all sorts of prompts, including text commands, still pictures, and even snippets of other video clips. But it’s not just about making videos from scratch. The editing toolkit supports referencing images or specific frames to guide the AI, advanced video “repainting” (more on that in a sec), tweaking just selected bits of your existing video, and even stretching out the video. Alibaba reckons these features “enable the flexible combination of various tasks to enhance creativity.” Imagine you want to create a video with specific characters interacting, maybe based on some photos you have. VACE claims to be able to do that. Got a still image you wish was dynamic? Alibaba’s open-source AI model can add natural-looking movement to bring it to life. For those who love to fine-tune, there are those advanced “video repainting” functions I mentioned earlier. This includes things like transferring poses from one subject to another, having precise control over motion, adjusting depth perception, and even changing the colours. One feature that caught my eye is that the model “supports adding, modification or deletion to selective specific areas of a video without affecting the surroundings.” That’s a massive plus for detailed edits – no more accidentally messing up the background when you’re just trying to tweak one small element. 
Plus, it can make your video canvas larger and even fill in the new space with relevant content to make everything look richer and more expansive. You could take a flat photograph, turn it into a video, and tell the objects in it exactly how to move by drawing out a path. Need to swap out a character or an object with something else you provide as a reference? No problem. Animate those referenced characters? Done. Control their pose precisely? You got it. Alibaba even gives the example of its open-source AI model taking a tall, skinny vertical image and cleverly expanding it sideways into a widescreen video, automagically adding new bits and pieces by referencing other images or prompts. That’s pretty neat. Of course, VACE isn’t just magic. There’s some clever tech involved, designed to handle the often-messy reality of video editing. A key piece is something Alibaba calls the Video Condition Unit (VCU), which “supports unified processing of multimodal inputs such as text, images, video, and masks.” Then there’s what they term a “Context Adapter structure.” This clever bit of engineering “injects various task concepts using formalised representations of temporal and spatial dimensions.” Essentially, think of it as giving the AI a really good understanding of time and space within the video. With all this clever tech, Alibaba reckons VACE will be a hit in quite a few areas. Think quick social media clips, eye-catching ads and marketing content, heavy-duty post-production special effects for film and TV, and even for generating custom educational and training videos. Alibaba makes Wan2.1-VACE open-source to spread the AI love Building AI models this powerful usually costs a fortune and needs massive computing power and tons of data. So, Alibaba making Wan2.1-VACE open source? That’s a big deal. 
“Open access helps lower the barrier for more businesses to leverage AI, enabling them to create high-quality visual content tailored to their needs, quickly and cost-effectively,” Alibaba explains. Basically, Alibaba is hoping to let more folks – especially smaller businesses and individual creators – get their hands on top-tier AI without breaking the bank. This democratisation of powerful tools is always a welcome sight. And they’re not just dropping one version. There’s a hefty 14-billion parameter model for those with serious horsepower, and a more nimble 1.3-billion parameter one for lighter setups. You can grab them for free right now on Hugging Face and GitHub, or via Alibaba Cloud’s own open-source community, ModelScope. (Image source: www.alibabagroup.com) See also: US slams brakes on AI Diffusion Rule, hardens chip export curbs The post Alibaba Wan2.1-VACE: Open-source AI video tool for all appeared first on AI News.
  17. The Department of Commerce (DOC) has slammed the brakes on the sweeping “AI Diffusion Rule,” yanking it just a day before it was due to bite. Meanwhile, officials have thrown down the gauntlet with stricter measures to control semiconductor exports. The AI Diffusion Rule, a piece of regulation cooked up under the Biden administration, was staring down a compliance deadline of May 15th. According to the folks at the DOC, letting this rule roll out would have been like throwing a spanner in the works of American innovation. DOC officials argue the rule would have saddled tech firms with “burdensome new regulatory requirements” and, perhaps more surprisingly, risked souring America’s relationships on the world stage by effectively “downgrading” dozens of countries “to second-tier status.” The nuts and bolts of this reversal will see the Bureau of Industry and Security (BIS), part of the DOC, publishing a notice in the Federal Register to make the rescission official. While this particular rule is heading for the shredder, the official line is that a replacement isn’t off the table; one will be cooked up and served “in the future.” Jeffrey Kessler, the Under Secretary of Commerce for Industry and Security, has told BIS enforcement teams to stand down on anything concerning the now-canned AI Diffusion Rule. “The Trump Administration will pursue a bold, inclusive strategy to American AI technology with trusted foreign countries around the world, while keeping the technology out of the hands of our adversaries,” said Kessler. “At the same time, we reject the Biden Administration’s attempt to impose its own ill-conceived and counterproductive AI policies on the American people.” What was this ‘AI Diffusion Rule’ anyway? You might be wondering what this “AI Diffusion Rule” actually was, and why it’s causing such a stir. 
The rule wasn’t just a minor tweak; it was the Biden administration’s bid to get a tight grip on how advanced American tech – everything from the AI chips themselves to cloud computing access and even the crucial AI ‘model weights’ – flowed out of the US to the rest of the world. The idea, at least on paper, was to walk a tightrope: keep the US at the front of the AI pack, protect national security, and still champion American tech exports. But how did it plan to do this? The rule laid out a fairly complex playbook: A tiered system for nations: Imagine a global league table for AI access. Countries were split into three groups. Tier 1 nations, America’s closest allies like Japan and South Korea, would have seen hardly any new restrictions. Tier 3, unsurprisingly, included countries already under arms embargoes – like China and Russia – who were already largely banned from getting US chips and would face the toughest controls imaginable. The squeezed middle: This is where things got sticky. A large swathe of countries, including nations like Mexico, Portugal, India, and even Switzerland, found themselves in Tier 2. For them, the rule meant new limits on how many advanced AI chips they could import, especially if they were looking to build those super-powerful, large computing clusters essential for AI development. Caps and close scrutiny: Beyond the tiers, the rule introduced actual caps on the quantity of high-performance AI chips most countries could get their hands on. If anyone wanted to bring in chips above certain levels, particularly for building massive AI data centres, they’d have faced incredibly strict security checks and reporting duties. Controlling the ‘brains’: It wasn’t just about the hardware. The rule also aimed to regulate the storage and export of advanced AI model weights – essentially the core programming and learned knowledge of an AI system. 
There were strict rules about not storing these in arms-embargoed countries and only allowing their export to favoured allies, and even then, only under tight conditions. Tech as a bargaining chip: Underneath it all, the framework was also a bit of a power play. The US aimed to use access to its coveted AI technology as a carrot, encouraging other nations to sign up to American standards and safeguards if they wanted to keep the American chips and software flowing. The Biden administration had a clear rationale for these moves. They wanted to stop adversaries, with China being the primary concern, from getting their hands on advanced AI that could be turned against US interests or used for military purposes. It was also about cementing US leadership in AI, making sure the most potent AI systems and the infrastructure to run them stayed within the US and its closest circle of allies, all while trying to keep US tech exports competitive. However, the AI Diffusion Rule and broader plan didn’t exactly get a standing ovation. Far from it. Major US tech players – including giants like Nvidia, Microsoft, and Oracle – voiced strong concerns. They argued that the rule, instead of protecting US interests, would stifle innovation, bog businesses down in red tape, and ultimately hurt the competitiveness of American companies on the global stage. Crucially, they also doubted it would effectively stop China from accessing advanced AI chips through other means. And it wasn’t just industry. Many countries weren’t thrilled about being labelled “second-tier,” a status they felt was not only insulting but also risked undermining diplomatic ties. There was a real fear it could push them to look for AI technology elsewhere, potentially even from China, which was hardly the intended outcome. 
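To make the tiered structure concrete, here is a purely illustrative sketch: the tier assignments below mirror countries the article names, but the default tier for unlisted countries and the policy descriptions are my own simplifications, not text from the rescinded rule.

```python
# Illustrative only: a lookup capturing the three-tier shape of the
# rescinded AI Diffusion Rule. Tier placements follow examples named in
# the article; everything else is a placeholder, not the rule's actual text.
TIERS = {
    "Japan": 1, "South Korea": 1,                       # close allies
    "Mexico": 2, "Portugal": 2, "India": 2, "Switzerland": 2,
    "China": 3, "Russia": 3,                            # arms-embargoed
}

def export_policy(country):
    tier = TIERS.get(country, 2)  # most countries fell into the middle tier
    if tier == 1:
        return "few new restrictions"
    if tier == 3:
        return "effectively banned from advanced US chips"
    return "capped imports, with strict checks above the thresholds"
```

The point of the sketch is the asymmetry it encodes: the interesting policy action was all in the default middle case, which is exactly where the "squeezed middle" countries landed.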
This widespread pushback and the concerns about hampering innovation and international relations are exactly what the current Department of Commerce is pointing to as reasons for today’s decisive action to scrap the rule. Fresh clampdown on AI chip exports It wasn’t just about scrapping old rules, though. The BIS also rolled out a new playbook to tighten America’s grip on AI chip exports, showing they’re serious about guarding the nation’s tech crown jewels. The latest clampdown includes: A spotlight on Huawei Ascend chips: New guidance makes it crystal clear: using Huawei Ascend chips anywhere on the planet is now a no-go under US export controls. This takes direct aim at one of China’s big players in the AI hardware game. Heads-up on Chinese AI model training: A stark warning has gone out to the public and the industry about the serious consequences if US AI chips are used to train or run Chinese AI models. The worry? That American tech could inadvertently supercharge AI systems that might not have US interests at heart. Guidance on shoring up supply chains: US firms are getting a fresh batch of advice on how to batten down the hatches on their supply chains to stop controlled tech from being siphoned off to unapproved destinations or users. The Department of Commerce is selling today’s double-whammy – axing the rule and beefing up export controls – as essential to “ensure that the United States will remain at the forefront of AI innovation and maintain global AI dominance.” It’s a strategy that looks to clear the runway for domestic tech growth while building higher fences around critical AI technologies, especially advanced semiconductors. This policy pivot will likely get a thumbs-up from some quarters in the US tech scene, particularly those who were getting sweaty palms about the AI Diffusion Rule and the red tape it threatened. 
On the flip side, the even tougher export controls – especially those zeroing in on China and firms like Huawei – show that trade policy is still very much a frontline tool in the high-stakes global chess game over who leads in tech. The whisper of a “replacement rule” down the line means this isn’t the final chapter in the saga of how to manage the AI revolution. For now, it seems the game plan is to clear the path for homegrown innovation and be much more careful about who gets to play with America’s latest breakthroughs. See also: Samsung AI strategy delivers record revenue despite semiconductor headwinds The post US slams brakes on AI Diffusion Rule, hardens chip export curbs appeared first on AI News.
  18. Saudi Arabia’s new state subsidiary, HUMAIN, is collaborating with NVIDIA to build AI infrastructure, nurture talent, and launch large-scale digital systems. The effort includes plans to set up AI “factories” powered by up to 500 megawatts of energy. The sites will be filled with NVIDIA GPUs, including the Grace Blackwell GB300 supercomputers connected via NVIDIA’s InfiniBand network. The goal is to create a base for training models, running simulations, and managing complex AI deployments. A major part of the push is about control. Saudi Arabia wants to build sovereign AI – models trained using local data, language, and systems. By building its own infrastructure, it avoids relying on foreign cloud providers. The shift aligns with a broader trend, as governments around the world start to question how AI tools are built, where data goes, and who controls it. HUMAIN is meant to give Saudi Arabia more say in that process. While other countries have launched national AI strategies, HUMAIN stands out for its structure. It’s not just a policy office or research fund; instead, it operates across the full AI value chain – building data centres, managing data, training models, and deploying applications. Few countries have a single body with such a broad remit. Singapore’s NAIS 2.0, for example, focuses on public sector use cases and talent development, while the UAE’s approach emphasises frameworks and governance. China has set up AI labs in several cities, but they tend to work in silos. HUMAIN brings elements together with a central goal: make Saudi Arabia a producer, not just a user, of AI. The ambition is clear, but it comes with trade-offs. Running GPU-heavy data centres on this scale will use a lot of power. The 500-megawatt figure is far beyond typical enterprise deployments. Globally, the environmental cost of AI has become a growing concern. Microsoft and Google have both reported rising emissions from AI-related infrastructure. 
Saudi Arabia will need to explain how its AI factories will be powered – especially if it wants to align with its own sustainability targets under Vision 2030. The partnership with NVIDIA isn’t just about machines, it also includes training for people. HUMAIN and NVIDIA say they will run large-scale education programmes to help thousands of Saudi developers gain skills in AI, robotics, simulation, and digital twins. Building local talent is a core part of the effort, and without it, infrastructure likely won’t get used to its full potential. “AI, like electricity and internet, is essential infrastructure for every nation,” said Jensen Huang, founder and CEO of NVIDIA. “Together with HUMAIN, we are building AI infrastructure for the people and companies of Saudi Arabia to realise the bold vision of the Kingdom.” One of the tools HUMAIN plans to deploy is NVIDIA Omniverse, to be used as a multi-tenant platform for industries like logistics, manufacturing, and energy. These sectors could create digital twins – virtual versions of real systems – to test, monitor, and improve operations. The idea is simple: simulate before you build, or run stress tests in digital form to save time and money later. This type of simulation and optimisation supports Saudi Arabia’s broader push into automation and smart industry. It fits in a wider narrative of transitioning from oil to advanced tech as a core pillar of the economy. The deal fits into NVIDIA’s global strategy, and the company has similar partnerships in India, the UAE, and Europe. Saudi Arabia offers strong government support, deep funding, and the promise to become a new AI hub in the Middle East. In return, NVIDIA provides the technical backbone – GPUs, software platforms, and the know-how to run them. The partnership helps both sides. Saudi Arabia gets the tools to build AI from the ground up and build a new economic version of itself, while NVIDIA gains a long-term customer and a foothold in a growing market. 
There are still gaps to watch. How will HUMAIN govern the use of its models? Will they be open for researchers and startups, or tightly controlled by the state? What role will local universities or private companies play? And can workforce development keep pace with the rapid buildout of infrastructure? HUMAIN isn’t just building for now. The structure suggests a long-term bet – one that links compute power, national priorities, and a shift in how AI is developed and deployed. Saudi Arabia wants more than access. It wants influence. And HUMAIN, in partnership with NVIDIA, is the engine it’s building to get there. (Photo by Mariia Shalabaieva) See also: Huawei’s AI hardware breakthrough challenges Nvidia’s dominance The post Saudi Arabia moves to build its AI future with HUMAIN and NVIDIA appeared first on AI News.
  19. The US Food and Drug Administration (FDA) has stated that it wants to accelerate the deployment of AI across its centres. FDA Commissioner Martin A. Makary has announced an aggressive timeline to scale use of AI by 30 June 2025 and is betting big on the technology to change drug approval processes for the US. But the rapid AI deployment at the FDA raises important questions about whether innovation can be balanced with oversight. Strategic leadership drive: FDA names first AI chief The foundation for the ambitious FDA AI deployment was laid with the appointment of Jeremy Walsh as the first-ever Chief AI Officer. Walsh previously led enterprise-scale technology deployments in federal health and intelligence agencies and came from government contractor Booz Allen Hamilton, where he worked for 14 years as chief technologist. His appointment, announced just before the May 8th rollout announcement, signals the agency’s serious commitment to technological transformation. The timing is significant – Walsh’s hiring coincided with workforce cuts at the FDA, including the loss of key tech talent. Among the losses was Sridhar Mantha, the former director of strategic programmes at the Center for Drug Evaluation and Research, who had co-chaired the AI Council at CDER and helped develop policy around AI’s use in drug development. Ironically, Mantha is now working alongside Walsh to coordinate the agency-wide rollout. The pilot programme: Impressive results, limited details What’s driving the rapid AI deployment is the reported success of the agency’s pilot programme trialling the software. Commissioner Makary said he was “blown away by the success of our first AI-assisted scientific review pilot,” with one official claiming the technology enabled him to perform scientific review tasks in minutes that used to take three days. However, the scope, rigour and results from the pilot scheme remain unreleased. 
The agency has not published detailed reports on the pilot’s methodology, validation procedures, or specific use cases tested. The lack of transparency is concerning given the high-stakes nature of drug evaluation. When pressed for details, the FDA has promised that additional details and updates on the initiative will be shared publicly in June. For an agency responsible for protecting public health through rigorous scientific review, the absence of published pilot data raises questions about the evidence base supporting such an aggressive timeline. Industry perspective: Cautious optimism meets concerns The pharmaceutical industry’s reaction to the FDA AI deployment reflects a mixture of optimism and apprehension. Companies have long sought faster approval processes, with Makary pointedly asking, “Why does it take over 10 years for a new drug to come to market?” “While AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the centre. We’re pleased to see the FDA taking concrete action to harness the potential of AI,” said PhRMA spokesperson Andrew Powaleny. However, industry experts are raising practical concerns. Mike Hinckle, an FDA compliance expert at K&L Gates, highlighted a key issue: pharmaceutical companies will want to know how the proprietary data they submit will be secured. The concern is particularly acute given reports that the FDA was in discussions with OpenAI about a project called cderGPT, which appears to be an AI tool for the Center for Drug Evaluation and Research. Expert warnings: The rush vs rigour debate Leading experts in the field are expressing concern about the pace of deployment. 
Eric Topol, founder of the Scripps Research Translational Institute, told Axios: “The idea is good, but the lack of details and the perceived ‘rush’ is concerning.” He identified critical gaps in transparency, including questions about which models are being used to train the AI, and what inputs are provided for specialised fine-tuning. Former FDA commissioner Robert Califf struck a balanced tone: “I have nothing but enthusiasm tempered by caution about the timeline.” His comment reflects the broader sentiment among experts who support AI integration but question whether the June 30th deadline allows sufficient time for proper validation and safeguards to be implemented. Rafael Rosengarten from the Alliance for AI in Healthcare supports automation but emphasises the need for governance, saying there is a need for policy guidance around what kind of data is used to train AI models and what kind of model performance is considered acceptable. Political context: Trump’s deregulatory AI vision The FDA AI deployment must be understood in the broader context of the Trump administration’s approach to AI governance. Trump’s overhaul of federal AI policy – ditching Biden-era guardrails in favour of speed and international dominance in technology – has turned the government into a tech testing ground. The administration has explicitly prioritised innovation over precaution. Vice President JD Vance outlined four key AI policy priorities, including encouraging “pro-growth AI policies” instead of “excessive regulation of the AI sector,” and he has taken action to ensure the forthcoming White House AI Action Plan would “avoid an overly precautionary regulatory regime.” The philosophy is evident in how the FDA is approaching its AI deployment. With Elon Musk leading a charge under an “AI-first” flag, critics warn that rushed rollouts at agencies could compromise data security, automate important decisions, and put Americans at risk. Safeguards and governance: What’s missing? 
While the FDA has promised that its AI systems will maintain strict information security and act in compliance with FDA policy, specific details about safeguards remain sparse. The agency claims that AI is a tool to support, not replace, human expertise, and that it can enhance regulatory rigour by helping predict toxicities and adverse events. This provides some reassurance but lacks specificity. The absence of published governance frameworks for what is an internal process contrasts sharply with the FDA’s guidance for industry. The agency has previously issued draft guidance to pharma companies, providing recommendations on the use of AI intended to support a regulatory decision about a drug or biological product’s safety, effectiveness, or quality. Its published draft guidance in that instance was based on feedback from over 800 external comments and its experience with more than 500 drug submissions involving AI components in their development since 2016. The broader AI landscape: Federal agencies as testing grounds The FDA’s initiative is part of a larger federal AI adoption wave. The General Services Administration is piloting an AI chatbot to automate routine tasks, and the Social Security Administration plans to use AI software to transcribe applicant hearings. However, GSA officials noted its tool has been in development for 18 months – highlighting the contrast with the FDA’s accelerated timeline, which, at the time of writing, is a matter of weeks. The rapid federal adoption reflects the Trump administration’s belief that America is well-positioned to maintain its global dominance in AI and that the Federal Government must capitalise on the advantages of American innovation. It also maintains the importance of strong protections for Americans’ privacy, civil rights, and civil liberties. Innovation at a crossroads The FDA’s ambitious timeline embodies the fundamental tension between technological promise and regulatory responsibility. 
While AI offers clear benefits in automating tedious tasks, the rush to implementation raises critical questions about transparency, accountability, and the erosion of scientific rigour. The June 30th deadline will test whether the agency can maintain the public trust that has long been its cornerstone. Success requires more than technological capability – it demands proof that oversight hasn’t been sacrificed for speed. The FDA AI deployment represents a defining moment for pharmaceutical regulation. The outcome will determine whether rapid AI adoption strengthens public health protection or serves as a cautionary tale about prioritising efficiency over safety in matters of life and death. The stakes couldn’t be higher. See also: AI vs COVID-19: Here are the AI tools and services fighting coronavirus Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post FDA AI deployment: Innovation vs oversight in drug regulation appeared first on AI News. View the full article
  20. Google's recent announcement of an AI Futures Fund shows the company wants to invest in AI startups. It sees itself as providing capital, early access to AI models yet to reach production, and support for startups from experts at the company. This is not Google's first rodeo. To date, Alphabet (Google's parent company) has invested in 38 AI companies. Big name acquisitions to date include the UK's DeepMind, Waymo, the autonomous vehicle intelligence company, and home automation specialist Nest. While such investments signal a broad intent to at least associate Google with startups coming to market with a smart idea (if not necessarily to swallow up smaller companies), the current situation regarding the US courts' attitude to monopolistic behaviour by Google raises questions about just how far any relationship may go in the longer term. If Google faces increased scrutiny in the next few years on the back of any eventual ruling the US Department of Justice makes, having what might be interpreted as a monopolistic position in AI could bring down the ire of the judiciary. On the other hand, if Google is forced to divest itself of some of its more profitable divisions – the Chrome browser, the Android mobile operating system, some or all of its ad networks – the company may have to double down on its other sources of revenue; and AI could become its favoured métier. If the board at Alphabet decide to bet large stakes on AI, one core aspect of doing so will need clear and definite resolution: the economic viability of continuing AI implementation in the forms that users have become familiar with in the last couple of years. According to some industry commentators, the AI 'whale', OpenAI, is struggling to monetise its operations to the extent that will satisfy its investors. Google's share of the AI market is tiny in comparison with OpenAI's, but it suffers from the same potential financial issues.
Google’s way through may be to continue its original role as provider of information searched for on the internet, and use its models to improve the search results given to online queries, and perhaps monetise around that transaction: either charging end-users for AI-powered search, or having advertisers pay for top spots in AI-generated search results. In that role, Google would be returning to its original function, but with the addition of AI algorithms under the surface – AI that improves a service that’s proven to be in massive demand, rather than AI being the main focus of user activity. Meta’s latest earnings call signalled that Mark Zuckerberg wants to do just that: return to the roots of the Facebook platform as a social connector, but have AI improve users’ experiences. It’s proposed that any acquisitions by Google of AI companies in the future would have to get the approval of the US Department of Justice. The company says such a move would limit investment in future AI technologies, a sentiment echoed by Anthropic representatives during the anti-monopolistic search practices court case brought against Google by the DOJ. If such a government approval edict were in place for Google, it would change the nature of companies that Google might fund via the AI Futures Fund or similar scheme. Rather than risking censure by appearing to add companies to the Alphabet stable that are in line with Google’s AI offerings, those benefiting from the company’s largess would be more likely to be niche players, bringing unique products to sectors of the economy where Google doesn’t already hold sway. Amazon’s acquisitions have been, at first glance, more in line with that seeking out of niche products to snap up. The Ring home smart device company (acquired in 2018 for $1bn) and One Medical (2022, $3.9bn) were purchases well outside of Amazon’s core cloud and retail verticals. 
Both, however, are prime sources of training data for AI models – consumer behaviour metrics and healthcare information are prime data real-estate. Google's strategy for investment in smaller companies will need to be similarly canny, given that whatever the outcome of the DOJ case, its activities will be subject to intense scrutiny by the courts, end-users, and the press. No article discussing the activities of US big tech companies would be complete in 2025 without the addition of caveats around the present American leadership's attitude to competition in the sector. The elephant in the room is the real possibility of executive veto of, or significant amendment to, any judicial ruling. Those potential game-changing elements could affect Google and Alphabet's investment plans with little notice and less reason. (Image source: "Dallas DA LGBT Task Force visits DOJ FBP" by Dallas County DA is licensed under CC BY-NC-ND 2.0.) See also: Apple AI stresses privacy with synthetic and anonymised data The post Google's AI Futures Fund may have to tread carefully appeared first on AI News.
  21. The Trump AI chip policy reversal announced this week signals a shift in how advanced computing technologies will flow in global markets, as the administration prepares to dismantle Biden's complex three-tier regulatory framework. Set to take effect on May 15, 2025, the Biden administration's Framework for Artificial Intelligence Diffusion would have created a stratified global technology landscape with significant implications for international trade, innovation, and geopolitical relationships. According to a Commerce Department statement on Wednesday, the Trump administration views the existing approach as fundamentally flawed. "The Biden AI rule is overly complex, overly bureaucratic, and would stymie American innovation," a Commerce Department spokeswoman told Reuters. "We will be replacing it with a much simpler rule that frees American innovation and ensures American AI dominance," they said. The policy shift comes months after the Biden administration finalised an export control framework during its final week in office. That framework represented the culmination of a four-year effort to restrict China's access to cutting-edge chips while maintaining US leadership in artificial intelligence technology. The decision to rescind the rule reflects the Trump administration's different approach to balancing national security concerns with commercial interests. Understanding the three-tier system The soon-to-be-eliminated rule had established a hierarchical structure for global technology access. In the first tier, 17 countries plus Taiwan would have enjoyed unlimited access to advanced AI chips. A second tier of approximately 120 countries would have operated under strict numerical caps limiting their imports. The third and final tier – including China, Russia, Iran, and North Korea – would have been completely blocked from accessing these technologies.
The structured approach aimed to prevent advanced technologies from reaching countries of concern through intermediaries while still allowing access for allies and neutral nations. However, critics argued the complexity of the system would create significant compliance burdens and push international partners toward alternative suppliers. The new approach taking shape Instead of the tiered system, sources cited by Reuters indicate the Trump administration is considering implementing a global licensing regime supported by inter-governmental agreements. The approach would potentially offer more flexibility and maintain controls over sensitive technology. The timing of the announcement appears strategically significant. Bloomberg reported the changes are developing as President Trump prepares for a trip to the Middle East, where countries including Saudi Arabia and the United Arab Emirates have expressed frustration over existing restrictions on their acquisition of AI chips. The Commerce Department's decision could be announced as soon as Thursday, according to a source familiar with the matter. Market reaction and industry impact News of the policy reversal has already sent ripples through financial markets. Shares of Nvidia, the dominant manufacturer of chips used for training AI models, ended 3% higher on May 7 following the announcement, though they dipped 0.7% in after-hours trading, according to Reuters. The company has consistently opposed the growing number of US restrictions. Nvidia CEO Jensen Huang argues that American companies should be able to sell into China, which, he predicts, will become a $50 billion market for AI chips in the next couple of years. However, it's important to note that the Trump AI chip policy shift does not signal a complete abandonment of export controls.
The administration has already demonstrated its willingness to take strong action against China, specifically, by banning Nvidia from selling its H20 chip there – a move that cost the company $5.5 billion in writedowns, according to Bloomberg. Global winners and losers The policy reversal creates a complex map of potential winners and losers in the global technology landscape. Countries like India and Malaysia, which hadn't faced chip restrictions before the Biden rule was unveiled in January, will see temporary relief. In Malaysia's case, this could particularly benefit Oracle Corporation, which has plans for a massive data centre expansion that would have exceeded limits established by rules governing AI hardware distribution. Middle Eastern nations also stand to gain. The UAE and Saudi Arabia, which have faced chip export controls since 2023, may now be able to negotiate more favourable terms. Trump has expressed interest in easing restrictions for the UAE specifically and could announce the beginning of work on a government-to-government AI chip agreement during his upcoming visit to the region from May 13 to 16. The UAE's aggressive pursuit of such an agreement, backed by its pledge to invest up to $1.4 trillion in US technology and infrastructure over the next decade, exemplifies how high-stakes these negotiations have become for countries seeking to establish themselves as AI powerhouses. Uncertainty ahead According to Axios, the Trump administration is currently developing a new control scheme, which could emerge as either a new rule or an executive order. The transition creates significant uncertainty for companies like Nvidia regarding the regulatory environment they'll face in the coming months. While the new framework takes shape, the administration has indicated it will continue enforcing existing chip export controls.
One potential element of the new approach might involve imposing controls specifically on countries that have diverted chips to China, including Malaysia and Thailand, according to a source familiar with the matter. Industry stakeholders remain divided on the issue. While chip manufacturers have lobbied aggressively against strict export controls, some AI companies, including Anthropic, have advocated for maintaining protections that safeguard US intellectual property and technological advantages. Balancing competing priorities The Biden administration's export controls were designed to limit access to chips needed for cutting-edge AI development, with a particular focus on preventing Chinese firms from finding indirect routes to technology that existing export controls prevented them from importing directly. Creating a balanced approach that addresses national security concerns while promoting US commercial interests presents significant challenges. Establishing agreements with a wide range of countries eager to purchase advanced AI chips would require navigating complex diplomatic relationships and potentially creating dozens of separate policy frameworks. The Commerce Department has not provided a specific timeline for when any new rules are to be finalised or implemented, only indicating that debate continues on the optimal approach forward. The shift in Trump AI chip policy reflects the administration's broader emphasis on American competitiveness and innovation while still maintaining control over technologies with national security implications. As officials work to craft a replacement framework, the global AI chip market remains in flux, with profound implications for technological development, international relations, and corporate strategies in the evolving artificial intelligence landscape. See also: US-China AI chip race: Cambricon's first profit lands
The post Trump AI chip policy to scrap Biden's export controls appeared first on AI News.
  22. Apple continues its focus on hardware produced in-house, and is currently working on a new generation of chips for future hardware, according to Bloomberg. The processors are thought to be destined for use in smart glasses, AI-capable servers, and the next generations of Macs. One project involves a custom chip designed for smart glasses, which are thought to offer voice commands, photo capture, and audio playback, but will not be full augmented reality (AR) devices. The chip design is based on the low-power components used at present in the latest models of the Apple Watch, but modified to use less energy and support multiple cameras. Apple has yet to comment on any of the rumoured projects; it's a company with a strict policy of keeping the products it may, or may not, be developing under wraps. However, production for the glasses chip is said to begin by late 2026 or early 2027. If that timeline holds true, devices could reach the market in two years. As with most of Apple's chips, Taiwan Semiconductor Manufacturing Co. is expected to handle production. Smart glasses have been in development at Apple for several years, industry insiders claim. The company aims to build full AR wearables that overlay digital information onto real-world views, but the technology is yet to be ready for everyday use. In this sector, Meta has already broken some ground, launching smart glasses in partnership with Ray-Ban. Apple seems to be pursuing a similar product, minus the AR features – at least, in any device's first iteration. Sources say Apple is developing both AR and non-AR glasses under the codename N401, previously N50. According to reports, Apple's CEO Tim Cook hopes for the company to take a lead in this market segment. Meta, meanwhile, is expanding its own product line, planning to debut a high-end model of its Ray-Ban style device with a display later this year. The company is said to be targeting 2027 for its first, fully-AR glasses gadget.
Apple's non-AR glasses could use cameras to scan the environment and apply AI to assist users, mirroring Meta's current strategy. Apple is said to be biding its time, and waiting for AI software to mature before committing to a full product release. In the meantime, Apple is exploring other avenues to improve its current product lines, with engineers reportedly testing features like cameras in AirPods and smartwatches, which will likely use Apple chips currently in development. Codename "Nevis" is slated for a camera-enabled Apple Watch, while "Glennie" is intended for AirPods. Both are thought to be planned for release by 2027. Apple is said to be preparing a new set of processors specifically for Macs: the M6 (Komodo), the M7 (Borneo), and a higher-end chip, "Sotra". Apple is also thought to be planning to upgrade the iPad Pro and MacBook Pro with its M5 chip later this year. These in-house chip development efforts are part of Apple's broader push to control the full hardware stack of its products. The hardware group, led by Johny Srouji, has been expanding its portfolio: Earlier this year, Apple launched its first in-house modem chip in the iPhone 16e, with a higher-end version, the C2, planned for release in 2026. (Photo by Unsplash) See also: Apple AI stresses privacy with synthetic and anonymised data The post Apple developing custom chips for smart glasses and more appeared first on AI News.
  23. Samsung Electronics’ strategic focus on AI has delivered high revenue in the first quarter of 2025, as the South Korean tech giant navigates semiconductor market challenges and growing global trade uncertainties. The company posted an all-time quarterly high revenue of KRW 79.14 trillion ($55.4 billion), marking a 10% increase year-over-year, according to its financial results released on Wednesday. Operating profit climbed to KRW 6.7 trillion ($4.68 billion), representing a modest 1.5% increase compared to the same ******* last year. The results exceeded Samsung’s earlier forecast of KRW 79 trillion and analysts’ expectations. Smartphone success counters chip challenges The Mobile Experience (MX) Business emerged as the best performer, contributing KRW 37 trillion in consolidated revenue and KRW 4.3 trillion in operating profit – its highest level in four years. The success was driven primarily by strong sales of the flagship Galaxy S25 series, which features AI abilities via Galaxy AI. “Enhanced cost competency and price declines for some components also contributed to solid double-digit profitability,” the company’s earnings report said. In contrast, Samsung’s Device Solutions (DS) Division, which includes its semiconductor operations, posted KRW 25.1 trillion in revenue and KRW 1.1 trillion in operating profit – a 42% decline from the previous year. The performance reflects ongoing challenges in the semiconductor market, particularly in high-bandwidth memory (HBM) sales. “Overall earnings were impacted by the erosion of average selling price (ASP), as well as a decrease in HBM sales due to export controls on AI chips and deferred demand in anticipation of upcoming enhanced HBM3E products,” Samsung said. 
Trade tensions cloud future outlook Despite the record revenue, Samsung has expressed caution about the second quarter, dropping its usual business outlook due to growing macroeconomic uncertainties stemming from global trade tensions and slowing economic growth. “Due to the rapid changes in policies and geopolitical tensions among major countries, it’s difficult to accurately predict the business impact of tariffs and established countermeasures,” a Samsung executive stated during Wednesday’s earnings call. Of particular concern are US President Donald Trump’s “reciprocal” tariffs, most of which have been suspended until July but threaten to impact dozens of countries including Vietnam and South Korea, where Samsung produces smartphones and displays. While Samsung noted that its flagship products like semiconductors, smartphones, and tablets are currently exempt from these tariffs, the company revealed that Washington is conducting a product-specific tariff probe into these categories. “There are a lot of uncertainties ahead of us […] we are communicating with related countries to minimise negative effects,” Samsung said during the call. In response to its challenges, the company disclosed it is considering relocating production of TVs and home appliances. AI investment and future strategy Despite these headwinds, Samsung remains committed to its artificial intelligence strategy, allocating its highest-ever annual R&D expenditure for 2024. In the first quarter of 2025, the company increased R&D spending by 16% compared to the same ******* last year, amounting to KRW 9 trillion. For the remainder of 2025, Samsung plans to expand its AI smartphone lineup through the introduction of “Awesome Intelligence” to the Galaxy A series and the launch of the Galaxy S25 Edge in Q2. Later in the year, the company will strengthen its foldable lineup with enhanced AI user experiences. 
In the semiconductor space, Samsung aims to strengthen its position in the high-value-added market through its server-centric portfolio and the ramp-up of enhanced HBM3E 12H products to meet initial demand. The company expects AI-related demand to remain high in the second half of 2025, coinciding with the launch of new GPUs. "In the mobile and PC markets, on-device AI is expected to proliferate, so the Memory Business will proactively respond to this shift in the business environment with its industry-leading 10.7Gbps LPDDR5x products," Samsung stated. The company's foundry business remains focused on its 2nm Gate-All-Around (GAA) process development, which remains on schedule despite current challenges. Market reaction and competitive landscape Samsung shares were trading down approximately 0.6% following the announcement, reflecting investor concerns about the uncertain outlook. The results highlight Samsung's complex position in the AI market – succeeding in consumer-facing applications while working to catch up with competitors in AI-specific semiconductor components. Local rival SK Hynix, which reported a 158% jump in operating profit last week to KRW 7.4 trillion, has overtaken Samsung in overall DRAM market revenue for the first time, capturing 36% global market share compared to Samsung's 34%, according to Counterpoint Research. SK Hynix's success has been particularly pronounced in the high-bandwidth memory segment, which is crucial for AI server applications. Assuming the uncertainties diminish, Samsung expects its performance to improve in the second half of the year, the company noted, striking a cautiously optimistic tone despite the challenges ahead. Samsung's record revenue masks a pivotal crossroads for the tech giant: while its AI-enhanced smartphones flourish, its once-dominant semiconductor business risks falling behind in the AI revolution.
The coming quarters will reveal whether Samsung's massive R&D investments can reclaim lost ground in HBM chips, or if we're witnessing a fundamental power shift in Asian tech manufacturing that could alter the global AI supply chain for years to come. For a company that rebuilt itself numerous times over its 56-year history, the AI semiconductor race may prove to be its most consequential transformation yet. (Image credit: Anthropic) See also: Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost The post Samsung AI strategy delivers record revenue despite semiconductor headwinds appeared first on AI News.
  24. ServiceNow has opened its Knowledge 2025 conference with a brand new AI platform. The ambition is clear: to give businesses a single, coherent way to get all their different AI tools and intelligent agents working together, right across the company. This isn’t just a fresh lick of paint; we’re talking deep-rooted new features and much cosier relationships with the likes of NVIDIA, Microsoft, Google, and Oracle. The aim? To finally help businesses orchestrate their operations with genuine intelligence, and it seems some big names like Adobe, Aptiv, the NHL, Visa, and Wells Fargo are already seeing it pay off. Most business leaders you speak with today will tell you they’re wrestling with a tangled mess of complexity. We’ve got systems that don’t talk to each other, data all over the shop, often no real game plan for AI, and that constant pressure to do more with less. ServiceNow believes AI innovation is the answer here, a way to fundamentally change how businesses run – making them more resilient, more efficient, and helping them get a handle on costs, all while chipping away at that mountain of tech debt and operational guesswork. However, the company rightly points out that just throwing AI at the problem won’t cut it. You need a clear vision and a real commitment to using ‘agentic AI’ – think smart, autonomous AI helpers – all underpinned by a platform built for what’s next. For the companies already diving in, these AI agents are apparently delivering the goods, boosting productivity in all sorts of ways. And it’s hitting the bottom line: ServiceNow’s own research, the Enterprise AI Maturity Index, shows that 55% of organisations using this kind of agentic AI have seen their gross margins improve. That’s a hefty jump compared to the mere 22% for those not yet on board. Bill McDermott, Chairman and CEO of ServiceNow, commented: “ServiceNow is igniting a new era of enterprise transformation with the ServiceNow AI Platform. 
We’re unleashing the full power of AI, across any industry, any agent, any workflow. “For decades, CEOs have wanted technology to accelerate the speed of business transformation. With this next generation architecture, we finally have the foundation to run the integrated enterprise in real time. We are the only ones who can orchestrate AI, data, and workflows on a single platform.” It’s not just talk; some big names are already putting ServiceNow’s agentic AI through its paces: Adobe is using it to speed up automation in IT and workplace services. Think AI agents handling common but time-consuming requests like password resets. The upshot? Fewer support tickets, quicker fixes, and happier, more productive staff. They’re also looking to add ServiceNow’s Workflow Data Fabric and RaptorDB to chew through their data even faster. Aptiv, known for its work in critical industries, is teaming up with ServiceNow. Their new partnership aims to blend ServiceNow’s AI smarts with Aptiv’s edge intelligence to boost automation and keep things running smoothly when it really matters. The NHL is, in their words, “going all in on ServiceNow AI.” The goal is to streamline how they operate, making life easier for employees needing quick solutions and helping arena technicians give fans an even better game day. Wells Fargo has rolled out ServiceNow AI with RaptorDB to automate tricky workflows and process huge amounts of data in real-time. This, they hope, will lead to smarter, AI-driven decisions right across the bank. Visa is set to bring in ServiceNow Disputes Management – a system they built together. It uses AI agents on the ServiceNow platform to help resolve payment disputes. And it seems they like what they see, as Visa plans to use ServiceNow’s AI to run its own managed dispute services. So, what’s under the bonnet of this reimagined ServiceNow AI Platform? It’s built to bring together intelligence, data, and the actual doing – the orchestration. 
The idea is to help companies move beyond a few scattered AI experiments to making AI a core part of how they operate. A big piece of this puzzle is what they call a “smart, conversational AI Engagement Layer.” This lets people get complex tasks done across different systems by tapping into ServiceNow’s Knowledge Graph, Workflow Data Fabric, and AI Agent Fabric. These clever bits are designed to connect smoothly with all sorts of enterprise data systems and AI frameworks. This launch is clearly just the start of a whole wave of new stuff. We’re seeing expanded partnerships, literally thousands of ready-made AI agents, and the introduction of the ServiceNow AI Control Tower. Alongside the main platform, ServiceNow is adding some additional firepower: AI Control Tower: Think of it as a central dashboard for managing everything AI – whether it’s ServiceNow’s own AI, or tools from other providers. It’s about keeping an eye on things, making sure it’s secure, and getting real value from every AI agent, model, and workflow, all in one place. The goal is better teamwork, solid oversight, and automation that actually scales. AI Agent Fabric: This is the communication network for a company’s AI ecosystem. It’s about getting AI agents – no matter who built them (ServiceNow, partners, or your own tech teams) – to work together smoothly across different tools, teams, and even vendors like Microsoft, NVIDIA, Google, and Oracle. They can share information, coordinate jobs, and generally get things done more effectively together. Next-generation CRM: They’re also beefing up their Customer Relationship Management offering with more AI. The plan is to bring sales, order fulfilment, and customer service onto one platform. This means businesses can shift from just reacting to customer problems to proactively engaging with them at every stage, from quoting a price right through to renewal. Apriel Nemotron 15B: This new reasoning LLM is built with NVIDIA. 
It's designed to power intelligent AI agents that can think and act at scale, promising high accuracy, quick responses, lower running costs, and generally faster, smarter AI for everyone. And it's not just about the tech; ServiceNow is also doubling down on people with the launch of ServiceNow University. This is their bid to give organisations the tools to upskill their workforce, helping them not just unlock individual potential but also drive real business change. At the end of the day, even the smartest AI needs smart people to make the most of it. See also: UAE to teach its children AI The post ServiceNow bets on unified AI to untangle enterprise complexity appeared first on AI News.
The United Arab Emirates looks set to integrate AI education into its schools’ curricula, meaning all children from kindergarten to high school will learn about the technology, how it can be used day-to-day, and the best ways to apply the various types of models. There will also be classes covering the ethics of AI, something the country’s young might eventually teach to the world, according to OpenAI’s Sam Altman, who once termed the UAE the world’s ‘sandbox’ in which issues around AI such as governance could be thrashed out, and from which the rest of the world could draw its regulatory models.

The new curriculum will include areas such as data and algorithms, software applications, the ethics of AI, real-world applications of the technology, policies, and social engagement. All modules have been designed to be age-appropriate, and will be incorporated into the standard curriculum rather than being taught after-hours.

The rollout into schools is part of the country’s National Artificial Intelligence Strategy 2031, which aims to position the Emirates as a global leader in AI capabilities, including education, and is part of wider efforts to diversify the UAE’s economy away from its core basis of oil production.

In addition to the changes to school timetables, the National AI Strategy also includes funds to promote AI in research, with institutions like the American University of Sharjah and United Arab Emirates University committed to the technology’s use in the higher and postgraduate education sectors. There are also public awareness and learning initiatives springing up in the country as it gathers momentum in its bid to become the nation most invested in the possibilities of AI. An AI investment fund is expected to reach a value of $100 billion in the next few years, according to people familiar with the project.
The country also has plans to spend around $1.4 trillion in the US over the next ten years on energy generation, semiconductor manufacturing, and AI infrastructure. Investments abroad should help the Emirates retain positive relations with elements of its AI-related supply chain. US President Trump is thought to be considering easing tariffs and other restrictions on exports of Nvidia hardware to the UAE, and has planned a visit to the region later this month, when he will also visit Saudi Arabia and Qatar.

The UAE has actively encouraged investment in infrastructure from Chinese manufacturers such as Huawei, and is seen as something of an impartial middle ground in the ongoing trade war between the Western and Eastern industrial and technology blocs. The wider region is home to some of the world’s most affluent nations, so any curbs on trade tend to have negative effects on vendors on both sides of the divide.

See also: Conversations with AI – Education

(Image source: “Dubai” by Eugene Kaspersky is licensed under CC BY-NC-SA 2.0.)

The post UAE to teach its children AI appeared first on AI News.

View the full article