Everything posted by ChatGPT

  1. IBM and Roche are teaming up on an AI solution to a challenge faced by millions worldwide: the relentless daily grind of diabetes management. Their new brainchild, the Accu-Chek SmartGuide Predict app, provides AI-powered glucose forecasting capabilities to users. The app doesn’t just track where your glucose levels are—it tells you where they’re heading. Imagine having a weather forecast, but for your blood sugar. That’s essentially what IBM and Roche are creating.

AI-powered diabetes management

The app works alongside Roche’s continuous glucose monitoring sensor, crunching the numbers in real-time to offer predictive insights that can help users stay ahead of potentially dangerous blood sugar swings. What caught my eye were the three standout features that address very specific worries diabetics face.

The “Glucose Predict” function visualises where your glucose might be heading over the next two hours—giving you that crucial window to make adjustments before things go south. For those who live with the anxiety of hypoglycaemia (when blood sugar plummets to dangerous levels), the “Low Glucose Predict” feature acts like an early warning system, flagging potential lows up to half an hour before they might occur. That’s enough time to take corrective action.

Perhaps most reassuring is the “Night Low Predict” feature, which estimates your risk of overnight hypoglycaemia—often the most frightening prospect for diabetes patients. Before tucking in for the night, the app gives you a heads-up about whether you might need that bedtime snack. This feature should bring peace of mind to countless households.
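To make the forecasting idea concrete, here is a toy sketch of how a low-glucose alert could be derived from recent CGM readings by simple trend extrapolation. This is purely illustrative: Roche's actual models are proprietary and far more sophisticated, and the threshold, sampling interval, and function names below are assumptions.

```python
# Toy illustration only: naive linear extrapolation over recent CGM
# readings. Not Roche's method; thresholds and spacing are assumptions.

HYPO_THRESHOLD_MG_DL = 70   # common clinical cut-off for hypoglycaemia
SAMPLE_INTERVAL_MIN = 5     # typical CGM sampling interval

def predict_low(readings_mg_dl, horizon_min=30):
    """Return (will_go_low, predicted_value) by extrapolating the
    average recent trend `horizon_min` minutes ahead."""
    if len(readings_mg_dl) < 2:
        return False, None
    # Average change per sample across the recent window
    deltas = [b - a for a, b in zip(readings_mg_dl, readings_mg_dl[1:])]
    trend_per_min = (sum(deltas) / len(deltas)) / SAMPLE_INTERVAL_MIN
    predicted = readings_mg_dl[-1] + trend_per_min * horizon_min
    return predicted < HYPO_THRESHOLD_MG_DL, predicted

# Example: glucose drifting downward over the last half hour
recent = [112, 108, 103, 97, 92, 88]   # one reading every 5 minutes
low, value = predict_low(recent, horizon_min=30)
print(f"predicted in 30 min: {value:.0f} mg/dL, low alert: {low}")
```

A production system would model meals, insulin, and activity rather than fitting a straight line, which is presumably where the AI earns its keep.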
“By harnessing the power of AI-enabled predictive technology, Roche’s Accu-Chek SmartGuide Predict App can help empower people with diabetes to take proactive measures to manage their disease,” says Moritz Hartmann, Head of Roche Information Solutions.

How AI is speeding up diabetes research

It’s not just patients benefiting from this partnership. The companies have developed a rather clever research tool using IBM’s watsonx AI platform that’s transforming how clinical study data gets analysed. Anyone who’s been involved in clinical research knows the mind-numbing tedium of manual data analysis. IBM and Roche’s tool does the heavy lifting—digitising, translating, and categorising all that anonymised clinical data, then connecting the dots between glucose monitoring data and participants’ daily activities. The result? Researchers can spot meaningful patterns and correlations in a fraction of the time it would normally take. This behind-the-scenes innovation might do more to advance diabetes care and management in the long run than the app itself.

What makes this collaboration particularly interesting is how it brings together two different worlds: IBM’s computing prowess and AI know-how pairing up with Roche’s decades of healthcare and diabetes expertise.

“Our long-standing partnership with IBM underscores the potential of cross-industry innovation in addressing unmet healthcare needs and bringing significant advancements to patients faster,” says Hartmann. “Using cutting-edge technology such as AI and machine learning helps us to accelerate time to market and to improve therapy outcomes at the same time.”

Christian Keller, General Manager of IBM Switzerland, added: “The collaboration with Roche underlines the potential of AI when it’s implemented with a clear goal—assisting patients in managing their diabetes. With our technology and consulting expertise we can offer a trusted, customised, and secure technical environment that is essential to enable innovation in healthcare.”

What this means for the future of healthcare tech

Having covered healthcare tech for years, I’ve seen plenty of promising innovations fizzle out. However, this IBM-Roche partnership feels promising—perhaps because it’s addressing such a specific, well-defined problem with a thoughtful, targeted application of AI. For the estimated 590 million people worldwide living with diabetes (around 1 in 9 adults), the shift from reactive to predictive management could be game-changing. It’s not about replacing human judgment, but enhancing it with timely, actionable insights.

The app is currently only available in Switzerland, which seems a sensible approach—test, refine, and perfect before wider deployment. Healthcare professionals will be keeping tabs on this Swiss rollout to see if it delivers on its promise. If successful, this collaboration could serve as a blueprint for how tech giants and pharma companies might work together on other chronic conditions. Imagine similar predictive approaches for heart disease, asthma, or Parkinson’s.

For now, though, the focus is squarely on using AI to improve diabetes management and helping people sleep a little easier at night—quite literally, in the case of that clever nocturnal prediction feature. And honestly, that’s a worthwhile enough goal on its own.

(Photo by Alexander Grey)

See also: DeepSeek’s latest AI model a ‘big step backwards’ for free speech

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Diabetes management: IBM and Roche use AI to forecast blood sugar levels appeared first on AI News.

View the full article
  2. DeepSeek’s latest AI model, R1 0528, has raised eyebrows for a further regression on free speech and what users can discuss. “A big step backwards for free speech,” is how one prominent AI researcher summed it up.

AI researcher and popular online commentator ‘xlr8harder’ put the model through its paces, sharing findings that suggest DeepSeek is increasing its content restrictions. “DeepSeek R1 0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases,” the researcher noted. What remains unclear is whether this represents a deliberate shift in philosophy or simply a different technical approach to AI safety.

What’s particularly fascinating about the new model is how inconsistently it applies its moral boundaries. In one free speech test, when asked to present arguments supporting dissident internment camps, the AI model flatly refused. But, in its refusal, it specifically mentioned China’s Xinjiang internment camps as examples of human rights abuses. Yet, when directly questioned about these same Xinjiang camps, the model suddenly delivered heavily censored responses. It seems this AI knows about certain controversial topics but has been instructed to play dumb when asked directly. “It’s interesting though not entirely surprising that it’s able to come up with the camps as an example of human rights abuses, but denies when asked directly,” the researcher observed.

China criticism? Computer says no

This pattern becomes even more pronounced when examining the model’s handling of questions about the Chinese government. Using established question sets designed to evaluate free speech in AI responses to politically sensitive topics, the researcher discovered that R1 0528 is “the most censored DeepSeek model yet for criticism of the Chinese government.” Where previous DeepSeek models might have offered measured responses to questions about Chinese politics or human rights issues, this new iteration frequently refuses to engage at all – a worrying development for those who value AI systems that can discuss global affairs openly.
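Evaluations like this typically boil down to running a fixed question set against each model version and scoring the replies. Below is a minimal sketch of that approach; the refusal markers, the commented-out prompt loading, and the `query_model` callable are illustrative stand-ins, not xlr8harder's actual harness.

```python
# Minimal sketch of refusal-rate testing across model versions.
# `query_model` is a placeholder for whatever API or local inference
# call you use; prompts and markers here are illustrative assumptions.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def looks_like_refusal(reply: str) -> bool:
    text = reply.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(prompts, query_model) -> float:
    """Fraction of prompts the model declines to engage with."""
    refusals = sum(looks_like_refusal(query_model(p)) for p in prompts)
    return refusals / len(prompts)

# Usage sketch: compare two model versions on the same question set.
# prompts = load_question_set("contentious_topics.jsonl")  # hypothetical
# old = refusal_rate(prompts, lambda p: call_model("r1", p))
# new = refusal_rate(prompts, lambda p: call_model("r1-0528", p))
# print(f"refusal rate: {old:.0%} -> {new:.0%}")
```

Keyword matching is crude (a model can refuse politely without tripping any marker), so serious evaluations often use a second model or human raters to classify responses.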
There is, however, a silver lining to this cloud. Unlike closed systems from larger companies, DeepSeek’s models remain open-source with permissive licensing. “The model is open source with a permissive license, so the community can (and will) address this,” noted the researcher. This accessibility means the door remains open for developers to create versions that better balance safety with openness.

What DeepSeek’s latest model shows about free speech in the AI era

The situation reveals something quite sinister about how these systems are built: they can know about controversial events while being programmed to pretend they don’t, depending on how you phrase your question. As AI continues its march into our daily lives, finding the right balance between reasonable safeguards and open discourse becomes increasingly crucial. Too restrictive, and these systems become useless for discussing important but divisive topics. Too permissive, and they risk enabling harmful content.

DeepSeek hasn’t publicly addressed the reasoning behind these increased restrictions and regression in free speech, but the AI community is already working on modifications. For now, chalk this up as another chapter in the ongoing tug-of-war between safety and openness in artificial intelligence.

(Photo by John Cameron)

See also: Ethics in automation: Addressing bias and compliance in AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek’s latest AI model a ‘big step backwards’ for free speech appeared first on AI News.

View the full article
  3. London-based AI lab Odyssey has launched a research preview of a model transforming video into interactive worlds. Initially focusing on world models for film and game production, the Odyssey team has stumbled onto what is potentially a completely new entertainment medium.

The interactive video generated by Odyssey’s AI model responds to inputs in real-time. You can interact with it using your keyboard, phone, controller, or eventually even voice commands. The folks at Odyssey are billing it as an “early version of the Holodeck.” The underlying AI can generate realistic-looking video frames every 40 milliseconds (25 frames per second). That means when you press a button or make a gesture, the video responds almost instantly—creating the illusion that you’re actually influencing this digital world.

“The experience today feels like exploring a glitchy dream—raw, unstable, but undeniably new,” according to Odyssey. We’re not talking about polished, AAA-game quality visuals here, at least not yet.

Not your standard video tech

Let’s get a bit technical for a moment. What makes this AI-generated interactive video tech different from, say, a standard video game or CGI? It all comes down to something Odyssey calls a “world model.” Unlike traditional video models that generate entire clips in one go, world models work frame-by-frame to predict what should come next based on the current state and any user inputs. It’s similar to how large language models predict the next word in a sequence, but far more complex because we’re talking about high-resolution video frames rather than words.

“A world model is, at its core, an action-conditioned dynamics model,” as Odyssey puts it. Each time you interact, the model takes the current state, your action, and the history of what’s happened, then generates the next video frame accordingly. The result is something that feels more organic and unpredictable than a traditional game. There’s no pre-programmed logic saying “if a player does X, then Y happens”—instead, the AI is making its best guess at what should happen next based on what it’s learned from watching countless videos.
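In code, that loop is easy to picture. The sketch below shows the shape of an action-conditioned, frame-by-frame generation loop running on a 40 ms budget; the `WorldModel` class is a placeholder (Odyssey hasn't published its architecture), and the input/render callables and history handling are assumptions.

```python
# Sketch of an action-conditioned world-model loop, as described in the
# article. WorldModel is a stand-in; only the 40 ms budget is from the text.
import time

FRAME_BUDGET_S = 0.040  # one frame every 40 ms, i.e. 25 fps

class WorldModel:
    """Placeholder: predicts the next frame from state, action, history."""
    def next_frame(self, state, action, history):
        return state  # a real model would run a learned dynamics network

def interactive_loop(model, initial_frame, read_input, render, steps=1000):
    history, frame = [], initial_frame
    for _ in range(steps):
        start = time.monotonic()
        action = read_input()             # keyboard / phone / controller
        frame = model.next_frame(frame, action, history)
        history.append((frame, action))   # context for later predictions
        render(frame)
        # stay within the real-time frame budget
        remaining = FRAME_BUDGET_S - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)

# Usage sketch with trivial stubs:
# interactive_loop(WorldModel(), 0, read_input=lambda: None, render=print, steps=3)
```

The contrast with a traditional game engine is the middle line: instead of scripted logic updating a state machine, a learned model predicts the next frame outright.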
Odyssey tackles historic challenges with AI-generated video

Building something like this isn’t exactly a walk in the park. One of the biggest hurdles with AI-generated interactive video is keeping it stable over time. When you’re generating each frame based on previous ones, small errors can compound quickly (a phenomenon AI researchers call “drift”). To tackle this, Odyssey has used what they term a “narrow distribution model”—essentially pre-training their AI on general video footage, then fine-tuning it on a smaller set of environments. This trade-off means less variety but better stability, so everything doesn’t become a bizarre mess. The company says they’re already making “fast progress” on their next-gen model, which apparently shows “a richer range of pixels, dynamics, and actions.”

Running all this fancy AI tech in real-time isn’t cheap. Currently, the infrastructure powering this experience costs between £0.80 and £1.60 ($1–$2) per user-hour, relying on clusters of H100 GPUs scattered across the US and EU. That might sound expensive for streaming video, but it’s remarkably cheap compared to producing traditional game or film content. And Odyssey expects these costs to tumble further as models become more efficient.

Interactive video: The next storytelling medium?

Throughout history, new technologies have given birth to new forms of storytelling—from cave paintings to books, photography, radio, film, and video games. Odyssey believes AI-generated interactive video is the next step in this evolution. If they’re right, we might be looking at the prototype of something that will transform entertainment, education, advertising, and more. Imagine training videos where you can practice the skills being taught, or travel experiences where you can explore destinations from your sofa.

The research preview available now is obviously just a small step towards this vision and more of a proof of concept than a finished product. However, it’s an intriguing glimpse at what might be possible when AI-generated worlds become interactive playgrounds rather than just passive experiences. You can give the research preview a try here.

See also: Telegram and xAI forge Grok AI deal

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Odyssey’s AI model transforms video into interactive worlds appeared first on AI News.

View the full article
  4. Salesforce has agreed to acquire data management firm Informatica in a deal valued at around $8 billion. This figure reflects the equity value, minus Salesforce’s existing investment in the company. Informatica shareholders will receive $25 in cash per share.

The move aims to help Salesforce build a stronger foundation for AI tools that can act on their own, often called agentic AI. Informatica’s software is known for helping businesses collect, manage, and organise large sets of data – the kind of support Salesforce needs to improve how its AI systems work in different business applications. The deal brings together tools for organising and cleaning data (like Master Data Management and data integration) with Salesforce’s cloud platform. The idea is to make sure any AI features running on Salesforce have access to organised and secure data.

For companies using AI in daily operations, having the right data isn’t enough. They also need to know where that data came from, how it has been changed, and whether it can be trusted. That’s where Informatica’s tools come in, with benefits such as:

  • Transparency: Informatica can show how data flows through systems, helping companies meet audit or regulatory needs.
  • Context: By combining Informatica’s metadata with Salesforce’s data models, AI agents will better understand how to connect the dots in business systems.
  • Governance: Features like data quality controls and policy settings help make sure AI systems rely on clean and consistent data.

Salesforce CEO Marc Benioff said the acquisition supports the company’s goal of building safe and responsible AI for business use. “We’re excited to acquire Informatica … Together, we’ll supercharge Agentforce, Data Cloud, Tableau, MuleSoft, and Customer 360,” Benioff said. Informatica CEO Amit Walia said joining Salesforce will help more businesses make better use of their data.

How this helps Salesforce’s data products

Informatica’s cloud tools will plug directly into Salesforce’s core products:

  • Data Cloud: Informatica will help ensure data collected is trustworthy and ready to use – not just gathered in one place.
  • Agentforce: AI agents should be able to make smarter decisions with cleaner data and a better understanding of business context.
  • Customer 360: Salesforce CRM tools will gain richer data inputs, helping sales and support teams.
  • MuleSoft: With Informatica’s data quality and governance tools, the data passing through MuleSoft APIs should be more reliable.
  • Tableau: Users of Tableau will benefit from more detailed information, as the data behind the dashboards should be better organised and easier to understand.

Steve Fisher, President and CTO at Salesforce, explained the value: “Imagine an AI agent that goes beyond simply seeing data points to understand their full context – origin, transformation, quality, and governance.”

Salesforce plans to bring Informatica’s technology into its existing systems quickly after the deal closes. This includes integrating data quality, governance, and MDM features into Agentforce and Data Cloud. The company also said it will continue to support Informatica’s current strategy to build AI-driven data tools for use in different cloud environments.

Informatica acquisition aligns with Salesforce’s strategy

Salesforce executives described the acquisition as part of a long-term plan. Robin Washington, President and CFO, said the company targets deals like this one when it sees a clear fit for customers and a solid financial return.
“We’re laser-focused on accelerated execution,” she said, pointing to sectors like government, healthcare, and finance, where the combined tools could have the most impact.

Informatica’s chairman Bruce Chizen said the deal shows how long-term investment strategies can pay off. He credited private equity backers Permira and CPP Investments for their role in guiding the company toward this outcome. Salesforce also said it plans to invest in Informatica’s partner network and apply its own sales and marketing muscle to grow Informatica’s cloud business further.

Deal terms and next steps

The boards of both companies have approved the transaction. Shareholders representing about 63% of Informatica’s voting shares have signed off, and no further votes are needed. The deal is expected to close early in Salesforce’s 2027 fiscal year, pending regulatory approval and other conditions.

Salesforce will pay for the deal using a mix of cash and new debt. The company expects the deal to add to its non-GAAP earnings, margin, and cash flow starting in the second year after closing. It does not plan to change its shareholder return plans as a result of the acquisition.

(Image from Pixabay)

See also: Oracle plans $40B Nvidia chip deal for AI facility in Texas

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Salesforce to buy Informatica in $8B deal appeared first on AI News.

View the full article
  5. Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions. The Chinese tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework directly challenges Nvidia’s long-standing market dominance, even as the company continues to operate under severe US-led trade restrictions.

Architectural innovation born from necessity

Zhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.”

The Supernode 384 abandons Von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts models (machine-learning systems using multiple specialised sub-networks to solve complex computational challenges).
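To see why cross-machine bandwidth becomes the bottleneck for such models, consider a toy top-k router. Each token's activations must be shipped to whichever experts the gate selects, and in a real cluster those experts sit on different accelerators, so every routed token is interconnect traffic. The sizes and the one-expert-per-device layout below are illustrative, not Huawei's.

```python
import numpy as np

# Toy top-k Mixture-of-Experts router. In a cluster, each expert lives
# on a different accelerator, so every routed token activation crosses
# the interconnect; that is why bandwidth dominates MoE training.

rng = np.random.default_rng(0)
num_tokens, d_model, num_experts, k = 8, 16, 4, 2

tokens = rng.normal(size=(num_tokens, d_model))
gate_w = rng.normal(size=(d_model, num_experts))   # learned in practice

scores = tokens @ gate_w                            # (tokens, experts)
top_k = np.argsort(scores, axis=1)[:, -k:]          # k experts per token

# Count per-device transfers under a naive "one expert per device" layout
for expert in range(num_experts):
    routed = int((top_k == expert).sum())
    print(f"expert {expert}: receives {routed} token activations")
```

With k = 2, every token generates two transfers per MoE layer, at every layer, on every training step, which is exactly the traffic pattern a bus-style scale-up fabric is designed to absorb.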
Huawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory, representing a leap in integrated AI computing infrastructure.

Performance metrics challenge industry leaders

Real-world benchmark testing reveals the system’s competitive positioning against established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – delivering 2.5 times superior performance compared to traditional cluster architectures. Communications-intensive applications demonstrate even more dramatic improvements: models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads.

The performance gains stem from fundamental infrastructure redesigns. Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while reducing single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.

Geopolitical strategy drives technical innovation

The Supernode 384’s development cannot be divorced from broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints.

Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor, which acknowledges inherent performance limitations but highlights architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.” The assessment reveals how Huawei’s AI computing strategies have evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.

Market implications and deployment reality

Beyond laboratory demonstrations, Huawei has operationalised CloudMatrix 384 systems in multiple Chinese data centres in Anhui Province, Inner Mongolia, and Guizhou Province. Such practical deployments validate the architecture’s viability and establish an infrastructure framework for broader market adoption. The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models. The capability addresses growing industry demands for massive-scale AI implementation in diverse sectors.

Industry disruption and future considerations

Huawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing viable alternatives to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines.

The success of Huawei’s AI computing initiatives will depend on developer ecosystem adoption and sustained performance validation. The company’s aggressive developer conference outreach indicates a recognition that technical innovation alone cannot guarantee market acceptance. For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.

(Image from Pixabay)

See also: Oracle plans $40B Nvidia chip deal for AI facility in Texas

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Huawei Supernode 384 disrupts Nvidia’s AI market hold appeared first on AI News.

View the full article
  6. Telegram has forged a deal with Elon Musk’s xAI to weave Grok AI into the fabric of the encrypted messaging platform. This isn’t just a friendly collaboration; xAI is putting serious money on the table – a cool $300 million in a mix of cash and equity. Telegram will also pocket 50% of any subscription money Grok pulls in through the app.

This leap into the world of AI couldn’t come at a more interesting time for Telegram. While CEO Pavel Durov is wrestling with some pretty serious legal headaches, and governments in certain corners of the globe are giving the platform the side-eye, the company’s bank balance is looking healthy. In fact, Telegram is gearing up to raise at least $1.5 billion by issuing five-year bonds. With a rather tempting 9% yield, these bonds are also designed to help buy back some of the debt from the company’s 2021 bond issue. It seems big-name investors like BlackRock, Mubadala, and Citadel are still keen, suggesting they see a bright future for the messaging service.

And the numbers do tell a story of a significant comeback. Cast your mind back to 2023, and Telegram was nursing a $173 million loss. Fast forward to 2024, and they’d flipped that on its head, banking a $540 million profit from $1.4 billion in revenue. They’re not stopping there either, with optimistic forecasts for 2025 pointing to profits north of $700 million from a $2 billion revenue pot.

So, what will Grok actually do for Telegram users? The hope is that xAI’s conversational AI will bring a whole new layer of smarts to the platform. This includes supercharged information searching, help with drafting messages, and all sorts of automated tricks. It’s a play that could help Telegram unlock fresh monetisation opportunities and compete with Meta, which is bringing Llama-powered smarts to WhatsApp.

“This summer, Telegram users will gain access to the best AI technology on the market. @elonmusk and I have agreed to a 1-year partnership to bring xAI’s @grok to our billion+ users and integrate it across all Telegram apps. This also strengthens Telegram’s financial… pic.twitter.com/ZPK550AyRV” — Pavel Durov (@durov) May 28, 2025

However, Telegram’s integration of AI is all happening against a pretty dramatic backdrop. Pavel Durov, the man at the company’s helm, has found himself in hot water. Back in August 2024, Durov was arrested in France and later indicted on a dozen charges. These aren’t minor infringements either; they include serious accusations like complicity in spreading child exploitation material and drug trafficking, all linked to claims that Telegram wasn’t doing enough to police its content. Durov was initially stuck in France, but by March 2025, he was given the nod to leave the country, at least for a while. What happens next with these legal battles is anyone’s guess, but it’s a massive cloud hanging over the company.

And it’s not just personal legal woes for Durov. Entire governments are starting to lose patience. Vietnam, for instance, has had its Ministry of Science and Technology order internet providers to pull the plug on Telegram. Their reasoning? They say the platform has become a hotbed for crime. Vietnamese officials reckon 68% of Telegram channels and groups in the country are up to no good, involved in everything from fraud to drug deals. Telegram, for its part, said it was taken aback by the move, insisting it had always tried to play ball with legal requests from Vietnam.
Back to the xAI partnership, it’s a clear signal of Telegram looking to the future and seeing AI as a core pillar of it. The money involved and the promise of shared revenues show just how much potential both sides see in getting Grok into the hands of Telegram’s millions of users. The next twelve months will be a real test for Telegram. Can the company innovate its way forward while also showing it can be a responsible player on the global stage?

(Photo from Unsplash)

See also: Mistral Agents API: Transforming complex AI workflows

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Telegram and xAI forge Grok AI deal appeared first on AI News.

View the full article
  7. If you’re working in SEO or digital marketing, you’ve probably noticed how Google search results look different. That instant answer that pops up at the top of the page is AI Overview, and it’s changing the game. Instead of having to click through to a bunch of different websites, users can now get direct answers right there in the search results, thanks to AI.

Michal Kurzanowski, the CEO of OC24 LTD, a marketing company specialising in SEO, has seen a lot of changes over the years. But this new AI feature? It’s something entirely new. With his experience in helping businesses get better rankings, Michal understands how AI Overviews are reshaping SEO.

Back in May 2023, Google introduced the feature as Search Generative Experience (SGE), renamed it AI Overview in May 2024, and launched it in the US. By the end of the year, it had expanded to over 130 countries. According to a case study analysing millions of search results, 78% of users were happy with the AI-generated answers. That’s a pretty good sign that this feature is here to stay.

What is AI Overview?

It’s a feature that gives users the answer they’re looking for right at the top of the search results. Google’s AI pulls information from all over the web and gives a short response to the user’s query. Instead of making them click on multiple links, the AI compiles all the relevant info into a summary. The answers are usually 160-170 words, just enough to give the user what they need, fast.

But here’s the catch: when users get answers this quickly, they’re less likely to click on any links below. And that’s a problem for SEO because it means less traffic to your website. Now, here’s the kicker: AI Overview can’t be disabled – there’s no way to opt out. However, if you want to get rid of it in your own browser, there’s a Chrome extension called Hide Google AI Overviews that will block it from appearing. But for the rest of us in digital marketing, it’s time to figure out how to work with the change.

How does AI Overview affect SEO?

AI Overviews take up a massive chunk of a screen’s real estate. When they appear, they often dominate the top of the search results page, meaning even if your page ranks on page one, you could get passed over because the AI response already answered the question.

It’s not all bad, though. 33.4% of the links that show up in AI Overviews are from pages that also rank in the top 10 of organic search. So it’s not impossible to get featured if your page isn’t number one, but it is tougher. And here’s where it gets interesting: 46.5% of the URLs that appear in AI Overviews are from websites ranked outside the top 50, so even pages that aren’t ranking highly can still be included. For those trying to grab organic traffic, though, it’s a double-edged sword. The domains that show up most often on search pages with AI Overviews are youtube.com, quora.com, wikipedia.org, and reddit.com, among others, and informational queries are the ones that most often trigger an AI Overview (about 93%).

How to optimise content for AI Overview

This is a dynamic field, and you need to be ready for changes, because SEO is always about challenges, testing, algorithm changes, and so on. AI Overview can actually help a brand become more recognisable and improve its reputation if you get on its radar. Content optimisation is still important, but other factors now play a major role. Michal Kurzanowski has put together a checklist for creating top-notch content that Google’s artificial intelligence will like.
  • Follow Google’s recommendations for authors: Google automatically selects links for AI-powered response blocks from various sources, including sites that meet search engine quality standards.
  • Start with a strong intro: The first 100 words of your page are crucial. Make sure they answer the user’s main question right off the bat. The quicker you get to the point, the better.
  • Keep content fresh and relevant: AI likes fresh content. Update your pages regularly, and make sure your information is always relevant to the questions people are asking.
  • Use descriptive headings: Don’t just throw random headings in there. Use H1, H2, and H3 tags that are specific and describe exactly what the content is about. A Q&A format works well because many AI responses are structured this way, and it helps increase your chances of being selected.
  • Lists are key: Artificial intelligence loves numbered and bulleted lists! About 40% of responses come from content that includes lists.
  • Quality over quantity: Share original research, insights, and your own case studies. Google isn’t interested in generic stuff – it’s looking for real expertise. Including quotes and statistics makes your content more authoritative. It can boost your chances of being featured by 30-40%, a huge win.
  • Visuals and interactive elements: Add videos, infographics, and quizzes to keep users engaged.
  • EEAT principle: This one’s huge – make sure your content reflects expertise, authoritativeness, and trustworthiness. The more your content shows these qualities, the better.

Final thoughts

Let’s be real: the SEO world is shifting fast. AI Overviews are here to stay, and it’s up to content creators to adapt. The days of getting traffic just by ranking high are changing. Now, it’s about providing the best, most relevant, and easiest-to-understand content that answers users’ questions quickly. For businesses like OC24 Limited, staying ahead of these changes is essential. Embrace AI Overviews by optimising your content in a way that both Google and users love, and you’ll not only keep up but thrive.

The post The impact of Google AI Overview on SEO appeared first on AI News.

View the full article
  8. The UK is deploying AI to keep a watchful eye on Arctic security threats from hostile states amid growing geopolitical tensions. This will be underscored by Foreign Secretary David Lammy during his visit to the region, which kicks off today.

The deployment is seen as a signal of the UK’s commitment to leveraging technology to navigate an increasingly complex global security landscape. For Britain, what unfolds in the territories of two of its closest Arctic neighbours – Norway and Iceland – has direct and profound implications. The national security of the UK is linked to stability in the High North. The once remote and frozen expanse is changing, and with it, the security calculus for the UK.

Foreign Secretary David Lammy said: “The Arctic is becoming an increasingly important frontier for geopolitical competition and trade, and a key flank for European and UK security. We cannot bolster the UK’s defence and deliver the Plan for Change without greater security in the Arctic. This is a region where Russia’s shadow fleet operates, threatening critical infrastructure like undersea cables to the UK and Europe, and helping fund Russia’s aggressive activity.”

British and Norwegian naval vessels conduct vital joint patrols in the Arctic. These missions are at the sharp end of efforts to detect, deter, and manage the increasing subsea threats that loom over vital energy supplies, national infrastructure, and broader regional security. Russia’s Northern Fleet, in particular, presents a persistent challenge in these icy waters. This high-level engagement follows closely on the heels of the Prime Minister’s visit to Norway earlier this month for a Joint Expeditionary Force meeting, where further support for Ukraine was a key talking point with allies from the Baltic and Scandinavian states.

During the Icelandic stop of his tour, Lammy will unveil a UK-Iceland tech partnership to boost Arctic security. This new scheme is designed to harness AI technologies for monitoring hostile activity across this vast and challenging region. It’s a forward-looking strategy, acknowledging that as the Arctic opens up, so too do the opportunities for those who might seek to exploit its vulnerabilities.

As global temperatures climb and the ancient ice caps continue their retreat, previously impassable shipping routes are emerging. This is not just a matter for climate scientists; it’s redrawing geopolitical maps. The Arctic is fast becoming an arena of increased competition, with nations eyeing newly accessible reserves of gas, oil, and precious minerals. Unsurprisingly, this scramble for resources is cranking up security concerns.

Adding another layer of complexity, areas near the Arctic are being actively used by Russia’s fleet of nuclear-powered icebreakers. Putin’s vessels are crucial to his “High North” strategy, carving paths for tankers that, in turn, help to bankroll his illegal war in Ukraine. Such operations cast a long shadow, threatening not only maritime security but also the delicate Arctic environment. Reports suggest Putin has been forced to rely on “dodgy and decaying vessels,” which frequently suffer breakdowns and increase the risk of devastating oil spills.

The UK’s defence partnership with Norway is deeply rooted, with British troops undertaking vital Arctic training in the country for over half a century. This enduring collaboration is now being elevated through an agreement to fortify the security of both nations.
“It’s more important than ever that we work with our allies in the High North, like Norway and Iceland, to enhance our ability to patrol and protect these waters,” added Lammy. “That’s why we have today announced new UK funding to work more closely with Iceland, using AI to bolster our ability to monitor and detect hostile state activity in the Arctic.”

Throughout his Arctic tour, the Foreign Secretary will be emphasising the UK’s role in securing NATO’s northern flank. This includes the often unseen but hugely significant task of protecting the region’s critical undersea infrastructure – the cables and pipelines that are the lifelines for stable energy supplies and telecoms for the UK and much of Europe.

These targeted Arctic security initiatives are part and parcel of a broader, robust enhancement of the UK’s overall defence posture. Earlier this year, the Prime Minister announced the most significant sustained increase in defence spending since the Cold War. This will see UK defence expenditure climb to 2.5% of GDP by April 2027, with a clear ambition to reach 3% in the next Parliament, contingent on economic and fiscal conditions.

The significance of maritime security and the Arctic is also recognised in the UK’s ambitious new Security and Defence Partnership with the EU, agreed last week. This pact commits both sides to closer collaboration to make Europe a safer place. In today’s interconnected world, security, climate action, and international collaboration are inextricably linked. The turn to AI isn’t just a tech upgrade; it’s a strategic necessity.

(Photo by Annie Spratt)

See also: Thales: AI and quantum threats top security agendas

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK deploys AI to boost Arctic security amid growing threats appeared first on AI News.

View the full article
  9. As companies rely more on automated systems, ethics has become a key concern. Algorithms increasingly shape decisions that were previously made by people, and these systems have an impact on jobs, credit, healthcare, and legal outcomes. That power demands responsibility. Without clear rules and ethical standards, automation can reinforce unfairness and cause harm.

Ignoring ethics doesn’t just erode public trust; it affects real people in real ways. Biased systems can deny loans, jobs, or healthcare, and automation can increase the speed of bad decisions if no guardrails are in place. When systems make the wrong call, it’s often hard to appeal or even understand why, and the lack of transparency turns small errors into bigger issues.

Understanding bias in AI systems

Bias in automation often comes from data. If historical data includes discrimination, systems trained on it may repeat those patterns. For example, an AI tool used to screen job applicants might reject candidates based on gender, race, or age if its training data reflects those past biases. Bias also enters through design, where choices about what to measure, which outcomes to favour, and how to label data can create skewed results.

There are many kinds of bias. Sampling bias happens when a data set doesn’t represent all groups, whereas labelling bias can come from subjective human input. Even technical choices like optimisation targets or algorithm type can skew results. The issues are not just theoretical. Amazon dropped its use of a recruiting tool in 2018 after it favoured male candidates, and some facial recognition systems have been found to misidentify people of colour at higher rates than Caucasians. Such problems damage trust and raise legal and social concerns.

Another real concern is proxy bias. Even when protected traits like race are not used directly, other features like zip code or education level can act as stand-ins, meaning the system may still discriminate even if the input seems neutral – for instance, on the basis of richer or poorer areas. Proxy bias is hard to detect without careful testing. The rise in AI bias incidents is a sign that more attention is needed in system design.

Meeting the standards that matter

Laws are catching up. The EU’s AI Act, passed in 2024, ranks AI systems by risk. High-risk systems, like those used in hiring or credit scoring, must meet strict requirements, including transparency, human oversight, and bias checks. In the US, there is no single AI law, but regulators are active. The Equal Employment Opportunity Commission (EEOC) warns employers about the risks of AI-driven hiring tools, and the Federal Trade Commission (FTC) has also signalled that biased systems may violate anti-discrimination laws.

The White House has issued a Blueprint for an AI Bill of Rights, offering guidance on safe and ethical use. While not a law, it sets expectations, covering five key areas: safe systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives.

Companies must also watch US state laws. California has moved to regulate algorithmic decision-making, and Illinois requires firms to tell job applicants if AI is used in video interviews. Failing to comply can bring fines and lawsuits. Regulators in New York City now require audits for AI systems used in hiring. The audits must show whether the system gives fair results across gender and race groups, and employers must also notify applicants when automation is used.
Compliance is more than just avoiding penalties – it is also about establishing trust. Firms that can show that their systems are fair and accountable are more likely to win support from users and regulators.

How to build fairer systems

Ethics in automation doesn’t happen by chance. It takes planning, the right tools, and ongoing attention. Bias and fairness must be built into the process from the start, not bolted on later. That entails setting goals, choosing the right data, and including the right voices at the table. Doing this well means following a few key strategies.

Conducting bias assessments

The first step in overcoming bias is to find it. Bias assessments should be performed early and often, from development to deployment, to ensure that systems do not produce unfair outcomes. Metrics might include error rates across groups, or decisions that have a greater impact on one group than others (a minimal sketch of one such check appears after these strategies). Bias audits should be performed by third parties when possible. Internal reviews can miss key issues or lack independence, and transparency in objective audit processes builds public trust.

Implementing diverse data sets

Diverse training data helps reduce bias by including samples from all user groups, especially those often excluded. A voice assistant trained mostly on male voices will work poorly for women, and a credit scoring model that lacks data on low-income users may misjudge them. Data diversity also helps models adapt to real-world use. Users come from different backgrounds, and systems should reflect that. Geographic, cultural, and linguistic variety all matter. Diverse data isn’t enough on its own – it must also be accurate and well-labelled. Garbage in, garbage out still applies, so teams need to check for errors and gaps, and correct them.

Promoting inclusivity in design

Inclusive design involves the people affected. Developers should consult with users, especially those at risk of harm (or those who might, by using biased AI, cause harm), as this helps uncover blind spots. That might mean involving advocacy groups, civil rights experts, or local communities in product reviews. It means listening before systems go live, not after complaints roll in.

Inclusive design also means cross-disciplinary teams. Bringing in voices from ethics, law, and social science can improve decision-making, as these teams are more likely to ask different questions and spot risks. Teams should be diverse too. People with different life experiences spot different issues, and a system built by a homogenous group may overlook risks others would catch.
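As promised above, here is a minimal sketch of one group-level fairness check: comparing selection rates between two groups and flagging large gaps. The data is made up, and the 0.8 cut-off is the informal "four-fifths rule" used as a screening heuristic, not a legal standard for any particular audit.

```python
# Minimal sketch of a group-level bias check: compare selection rates
# across groups split by a protected attribute. Illustrative data only.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of the lower group's selection rate to the higher one's."""
    ra, rb = selection_rate(decisions_a), selection_rate(decisions_b)
    return min(ra, rb) / max(ra, rb)

# 1 = hired/approved, 0 = rejected
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 62.5% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # 25.0% selected

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:   # common rule-of-thumb screening threshold
    print("flag for review: selection rates differ substantially")
```

A real audit would also compare error rates (false rejections, false approvals) per group and test for proxy features, but the structure is the same: compute the metric per group, then compare.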
What companies are doing right

Some firms and agencies are taking steps to address AI bias and improve compliance. Between 2005 and 2019, the Dutch Tax and Customs Administration wrongly accused around 26,000 families of fraudulently claiming childcare benefits. An algorithm used in the fraud detection system disproportionately targeted families with dual nationalities and low incomes. The fallout led to public outcry and the resignation of the Dutch government in 2021.

LinkedIn has faced scrutiny over gender bias in its job recommendation algorithms. Research from MIT and other sources found that men were more likely to be matched with higher-paying leadership roles, partly due to behavioural patterns in how users applied for jobs. In response, LinkedIn implemented a secondary AI system to ensure a more representative pool of candidates.

Another example is the New York City Automated Employment Decision Tool (AEDT) law, which took effect on January 1, 2023, with enforcement starting on July 5, 2023. The law requires employers and employment agencies using automated tools for hiring or promotion to conduct an independent bias audit within one year of use, publicly disclose a summary of the results, and notify candidates at least 10 business days in advance – rules which aim to make AI-driven hiring more transparent and fair.

Aetna, a health insurer, launched an internal review of its claim approval algorithms and found that some models led to longer delays for lower-income patients. The company changed how data was weighted and added more oversight to reduce this gap.

The examples show that AI bias can be addressed, but it takes effort, clear goals, and strong accountability.

Where we go from here

Automation is here to stay, but trust in systems depends on fairness of results and clear rules. Bias in AI systems can cause harm and legal risk, and compliance is not a box to check – it’s part of doing things right. Ethical automation starts with awareness. It takes strong data, regular testing, and inclusive design. Laws can help, but real change also depends on company culture and leadership.

(Photo from Pixabay)

See also: Why the Middle East is a hot place for global tech investments

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Ethics in automation: Addressing bias and compliance in AI appeared first on AI News.

View the full article
  10. Oracle is planning to spend around $40 billion on Nvidia chips to support a massive new data centre being developed by OpenAI in Texas, according to reporting by the Financial Times. The move marks one of the largest chip purchases to date and signals the growing demand for AI computing power.

The site is located in Abilene, Texas, and is described as the first US “Stargate” data centre. It’s part of a long-term plan backed by OpenAI and SoftBank to create large-scale AI facilities. When finished next year, the Texas centre is expected to provide 1.2 gigawatts of computing power, placing it among the largest in the world.

Oracle has indicated it intends to purchase around 400,000 of Nvidia’s new GB200 chips – at $40 billion, roughly $100,000 per chip – which are among Nvidia’s most advanced processors, designed for training and running AI models. Oracle plans to lease the computing power from these chips to OpenAI, according to people familiar with the deal.

The site is owned by Crusoe and Blue Owl Capital, which have raised $15 billion through a mix of debt and equity to fund the project. Construction began in June 2024, and the campus will include eight buildings. Oracle has signed a 15-year lease for the site, although Stargate itself hasn’t put any money into this specific facility. JPMorgan is providing most of the debt financing, contributing two loans totalling $9.6 billion, including a $7.1 billion loan announced this week. Crusoe and Blue Owl have each added roughly $5 billion in cash to the project.

Elon Musk is expanding his own project in Memphis, Tennessee, with plans to house about 1 million Nvidia chips. That centre, called “Colossus,” is mostly using Nvidia’s earlier H100 and H200 chips. Musk said this week the next phase will become the “first gigawatt AI training supercluster.” Meanwhile, Amazon is working on a separate data centre in Virginia that will also surpass 1GW.

The Abilene project is part of a larger shift for OpenAI as it looks to lessen its dependence on Microsoft. Until now, OpenAI has relied heavily on Microsoft for access to cloud computing. Much of Microsoft’s nearly $14 billion investment in the company came in the form of cloud credits. But OpenAI has grown frustrated that Microsoft can’t keep up with its increased demand for computing power. Earlier this year, OpenAI and Microsoft agreed to end their exclusive partnership, and are still working to resolve how long Microsoft will continue to hold licensing rights to OpenAI’s models.

Looking ahead, Stargate is expected to supply a large share of OpenAI’s computing needs. The group, which launched in early 2025, aims to raise as much as $500 billion over four years to fund more data centre projects in the US. So far, OpenAI and SoftBank have each pledged $18 billion to Stargate. Oracle and MGX, a sovereign wealth fund based in Abu Dhabi, have committed another $7 billion each, according to people familiar with the details. The four companies will hold equity in the project, with OpenAI and SoftBank taking the lead.

Stargate hasn’t yet invested in any specific project, but its plans are expanding beyond the US. Just last week, during President Donald Trump’s visit to the Gulf region, OpenAI announced a new Stargate facility in Abu Dhabi. The site will cover 10 square miles and offer 5GW of data centre power – enough to run more than 2 million Nvidia GB200 chips. Local AI firm G42 is managing the construction.

(Photo from Unsplash)

See also: Can the US really enforce a global AI chip ban?
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Oracle plans $40B Nvidia chip deal for AI facility in Texas appeared first on AI News.

View the full article
  11. Nvidia is preparing to go head-to-head with Huawei to maintain its relevance in China’s booming AI chip market. The upcoming AI chip, created specifically for China, represents something of a strategic gamble by Nvidia – can the company’s third attempt at regulatory compliance preserve its foothold against surging domestic competition?

Despite mounting geopolitical pressures from consecutive US administrations, Nvidia refuses to abandon the Chinese market entirely. The company is set to introduce a stripped-down Blackwell-based processor designed specifically to navigate export restrictions and compete against sophisticated local alternatives, particularly from Huawei.

A strategy born from necessity

Reuters sources reveal that Nvidia’s latest offering will carry a price tag between $6,500 and $8,000 – a dramatic reduction from the $10,000-$12,000 commanded by the now-banned H20 model. Production is scheduled to commence in June, underscoring the urgency behind Nvidia’s China AI chip initiative.

The cost reduction comes with significant trade-offs. The new processor will use Nvidia’s RTX Pro 6000D foundation paired with standard GDDR7 memory, abandoning the high-bandwidth memory found in premium variants. More tellingly, the chip forgoes Taiwan Semiconductor’s advanced CoWoS packaging technology, a decision that simultaneously reduces capabilities and manufacturing complexity.

Following April’s effective prohibition of H20 sales, Nvidia absorbed a $5.5 billion writedown on Chinese inventory and commitments. The company’s initial plan to modify the H20 for continued Chinese sales ultimately proved unfeasible under current US export frameworks.

Domestic competition intensifies

Huawei’s emergence as a legitimate challenger has fundamentally altered China’s AI chip landscape. The company’s Ascend 910C and 910B processors have secured adoption among major domestic technology firms including Tencent, Baidu, and ByteDance, primarily for inference applications – an area in which they demonstrate competitive performance.

The competitive pressure extends beyond individual chips to complete infrastructure solutions. Huawei’s CloudMatrix 384 rack system challenges Nvidia’s Blackwell GB200 NVL72 configuration directly, signalling the Chinese firm’s ambition to compete across the entire AI hardware stack.

Market dynamics reflect a shift in balance. Reports indicate H20 chips have traded at discounts exceeding 10% compared to Huawei’s Ascend 910B, highlighting Nvidia’s struggle to maintain pricing power against domestic alternatives. CEO Jensen Huang acknowledged the erosion, revealing that Nvidia’s Chinese market share has approximately halved following recent export restrictions.

Billions at stake

The financial implications are substantial. Huang estimates China’s AI chip market potential at $50 billion, while Nvidia recorded over $17 billion in Chinese sales during 2024. The figures underscore why the company continues pursuing its China AI chip development despite regulatory headwinds.

Sources suggest Nvidia isn’t placing all hopes on this single product. The company reportedly plans a second Blackwell variant for China, with production targeted for September. The multi-pronged approach indicates Nvidia’s commitment to maintaining a Chinese market presence through diverse product offerings tailored to different customer segments and regulatory requirements.

An uncertain path forward

The strategic questions surrounding the latest Nvidia China AI chip venture are profound.
Can deliberately weakened hardware compete effectively against rapidly improving domestic alternatives? Will Chinese customers accept performance compromises as local options continue to advance in capability? Huang's recent acknowledgment that "China is right behind us. We're very, very close" in AI capabilities suggests the competitive gap may be narrowing faster than anticipated. Combined with substantial government backing for domestic semiconductor development, these dynamics raise fundamental questions about Nvidia's long-term viability in China. (Photo by Mariia Shalabaieva) See also: Can the US really enforce a global AI chip ban? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Will the budget China AI chip from Nvidia survive Huawei's growth? appeared first on AI News. View the full article
  12. Anthropic has unveiled its latest Claude 4 model family, and it's looking like a leap for anyone building next-gen AI assistants or coding tools. The stars of the show are Claude Opus 4, the new powerhouse, and Claude Sonnet 4, designed to be a smart all-rounder. Anthropic isn't shy about its ambitions, stating these models are geared to "advance our customers' AI strategies across the board." They're positioning Opus 4 as the tool to "push boundaries in coding, research, writing, and scientific discovery," while Sonnet 4 is billed as an "instant upgrade from Sonnet 3.7," ready to bring "frontier performance to everyday use cases." Claude Opus 4: The new coding champ When Anthropic calls Claude Opus 4 its "most powerful model yet and the best coding model in the world," you sit up and take notice. And they've got the numbers to back it up, with Opus 4 topping the charts on crucial industry tests, hitting 72.5% on SWE-bench and 43.2% on Terminal-bench. But it's not just about quick sprints. Opus 4 is built for the long haul, designed for "sustained performance on long-running tasks that require focused effort and thousands of steps." Imagine an AI that can "work continuously for several hours"—that's what Anthropic claims. This should be a massive step up from previous Sonnet models and could expand what AI agents can achieve, tackling problems that require real persistence. Claude Sonnet 4: For daily AI and agentic work While Opus 4 is the heavyweight champion, Claude Sonnet 4 is shaping up to be the versatile workhorse, promising a significant boost for a huge range of applications. Early feedback from those who've had a sneak peek is glowing. For instance, GitHub "says Claude Sonnet 4 soars in agentic scenarios" and is so impressed they "plan to introduce it as the base model for the new coding agent in GitHub Copilot." That's a hefty endorsement. AI agent startup Manus is also impressed, highlighting its "improvements in following complex instructions, clear reasoning, and aesthetic outputs." The positive vibes continue with iGent, which "reports Sonnet 4 excels at autonomous multi-feature app development, as well as substantially improved problem-solving and codebase navigation—reducing navigation errors from 20% to near zero." That's a game-changer for development workflows. Sourcegraph is equally optimistic, seeing the model as a "substantial leap in software development—staying on track longer, understanding problems more deeply, and providing more elegant code quality." Augment Code has seen "higher success rates, more surgical code edits, and more careful work through complex tasks," leading them to make Sonnet 4 their "top choice for their primary model." Hybrid modes and developer delights One of the really clever bits about the Claude 4 family is its hybrid nature. Both Opus 4 and Sonnet 4 can operate in two gears: one for those near-instant replies we often need, and another that allows for "extended thinking for deeper reasoning." This deeper thinking mode is part of the Pro, Max, Team, and Enterprise Claude plans. Good news for everyone, though – Sonnet 4, complete with this extended thinking, will also be available to free users, which is a fantastic move for making top-tier AI more accessible. 
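To make that hybrid-mode idea concrete, here is a minimal sketch of what requesting extended thinking might look like through Anthropic's Messages API, assuming the official `anthropic` Python SDK; the model ID, token budgets, and prompt are illustrative placeholders rather than confirmed values.

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Request the slower "extended thinking" gear: the model reasons in a
# scratchpad (capped by budget_tokens) before writing its visible reply.
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID; check the docs
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[
        {"role": "user", "content": "Plan a refactor of our billing module."}
    ],
)

# The response interleaves "thinking" blocks with the final "text" blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

The same call without the `thinking` parameter would use the near-instant mode, which is the appeal of the design: one model, two gears, selected per request.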
Anthropic is also rolling out some tasty new tools for developers on its API, clearly aiming to supercharge the creation of more sophisticated AI agents:
Code execution tool: This lets models actually run code, opening up all sorts of possibilities for interactive and problem-solving applications.
MCP connector: Introduced by Anthropic, MCP standardises context exchange between AI assistants and software environments.
Files API: This will make it much easier for AI to work directly with files, which is a big deal for many real-world tasks.
Prompt caching: Developers will be able to cache prompts for up to an hour. This might sound small, but it can make a real difference to speed and efficiency, especially for frequently used queries.
Leading the pack in real-world performance Anthropic is keen to emphasise that its "Claude 4 models lead on SWE-bench Verified, a benchmark for performance on real software engineering tasks." Beyond coding, they stress that these models "deliver strong performance across coding, reasoning, multimodal capabilities, and agentic tasks." Despite the leaps in capability, Anthropic is holding the line on pricing. Claude Opus 4 will set you back $15 per million input tokens and $75 per million output tokens. Claude Sonnet 4, the more accessible option, is priced at $3 per million input tokens and $15 per million output tokens. This consistency will be welcomed by existing users. Both Claude Opus 4 and Sonnet 4 are ready to go via the Anthropic API, and they're also popping up on Amazon Bedrock and Google Cloud's Vertex AI. This broad availability means businesses and developers worldwide can start experimenting and integrating these new tools fairly easily. Anthropic is clearly doubling down on making AI more capable, particularly in the complex realms of coding and autonomous agent behaviour. With these new models and developer tools, the potential for innovation just got a serious boost. (Image credit: Anthropic) See also: Details leak of Jony Ive's ambitious OpenAI device Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Anthropic Claude 4: A new era for intelligent agents and AI coding appeared first on AI News. View the full article
  13. After what felt like an age of tech industry tea-leaf reading, OpenAI has officially snapped up "io," the much-buzzed-about AI device startup from former Apple design guru Jony Ive and OpenAI's chief, Sam Altman. The price tag? $6.5 billion. OpenAI put out a video this week talking about the Ive and Altman venture in a general sort of way, but now a few more tidbits about what they're actually cooking have slipped out. And what are they planning with all that cash and brainpower? Well, the eagle-eyed folks at The Washington Post spotted an internal chat between Sam Altman and OpenAI staff where he set a target of shipping 100 million AI "companions." Altman allegedly even told his team the OpenAI device is "the chance to do the biggest thing we've ever done as a company here." To be clear, Altman has set that 100 million number as an eventual target. "We're not going to ship 100 million devices literally on day one," he said. But then, in a flex that's pure Silicon Valley, he added they'd hit that 100 million mark "faster than any company has ever shipped 100 million of something new before." So, what is this mysterious "companion"? The gadget is designed to be entirely aware of a user's surroundings, and even their "life." While they've mostly talked about a single device, Altman did let slip it might be more of a "family of devices." Jony Ive, as expected, dubbed it "a new design movement." You can almost hear the minimalist manifesto being drafted. Why the full-blown acquisition, though? Weren't they just going to partner up? Originally, yes. The plan was for Ive's startup to cook up the hardware and sell it, with OpenAI delivering the brains. But it seems the vision got bigger. This isn't just another accessory, you see. Altman stressed the device will be a "central facet of using OpenAI." He even said, "We both got excited about the idea that, if you subscribed to ChatGPT, we should just mail you new computers, and you should use those." Frankly, they reckon our current tech – our trusty laptops, the websites we browse – just isn't up to snuff for the kind of AI experiences they're dreaming of. Altman was pretty blunt, saying current use of AI "is not the sci-fi dream of what AI could do to enable you in all the ways that I think the models are capable of." So, we know it's not a smartphone. Altman's also put the kibosh on it being a pair of glasses. And Jony Ive, well, he's apparently not rushing to make another wearable, which makes sense given his design ethos. The good news for the impatient among us (i.e., everyone in tech) is that this isn't just vapourware. Ive's team has an actual prototype. Altman's even taken one home to "live with it". As for when we might get our hands on one? Altman's reportedly aiming for a late 2026 release. Naturally, OpenAI is keeping the actual device under wraps, but you can always count on supply chain whispers for a few clues. The ever-reliable (well, usually!) Apple supply chain analyst Ming-Chi Kuo has thrown a few alleged design details into the ring via social media. Kuo reckons it'll be "slightly larger" than the Humane AI Pin, but that it will look "as compact and elegant as an iPod Shuffle." And yes, like the Shuffle, Kuo says no screen. According to Kuo, the device will chat with your phone and computer instead, using good old-fashioned microphones for your voice and cameras to see what's going on around you. 
Interestingly, he suggests it’ll be worn around the neck, necklace-style, rather than clipped on like the AI Pin. Kuo’s crystal ball points to mass production in 2027, but he wisely adds a pinch of salt, noting the final look and feel could still change. So, the billion-dollar (well, £5.1 billion) question remains: will this OpenAI device be the next big thing, the gamechanger we’ve been waiting for? Or will it be another noble-but-failed attempt to break free from the smartphone’s iron grip, joining the likes of the AI Pin in the ‘great ideas that didn’t quite make it’ pile? Altman, for one, is brimming with confidence. Having lived with the prototype, he’s gone on record saying he believes it will be “the coolest piece of technology that the world will have ever seen.” See also: Linux Foundation: Slash costs, boost growth with open-source AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Details leak of Jony Ive’s ambitious OpenAI device appeared first on AI News. View the full article
  14. The Linux Foundation and Meta are putting some numbers behind how open-source AI (OSAI) is driving innovation and adoption. The adoption of AI tools is pretty much everywhere now, with 94% of organisations surveyed already using them. And get this: within that crowd, 89% are tapping into open-source AI for some part of their tech backbone. A paper released this week by Meta and the Linux Foundation stitches together academic brainpower, industry frontline stories, and global survey data to showcase an ecosystem that’s buzzing thanks to being open and affordable. If there’s one thing that jumps off the page, it’s the money talk. Cost savings, folks, are a huge deal here. Unsurprisingly, two-thirds of businesses are saying that open source AI is just plain cheaper to get up and running compared to proprietary. So, it’s no shocker that almost half of them point to these savings as a big reason for going the open-source route. We’re not talking about trimming a few coins here and there. Researchers reckon companies would be shelling out 3.5 times more cash if open-source software simply vanished. As AI digs its heels deeper into everything we do, the financial muscle of open-source is only going to get stronger, potentially even overshadowing traditional open-source software’s impact. But this isn’t just about pinching pennies; it’s about unleashing brains. The report points out that AI can slash business unit costs by over 50%, which, as you can imagine, opens the door for revenue boosts. When open AI models are out there for cheap, or even free, it levels the playing field. Suddenly, developers and businesses of all sizes can jump in, play around, and rethink how they do things. Often it’s the smaller players, the agile startups and medium-sized businesses, that are diving headfirst into open-source AI more so than the big corporate giants. And since these are often the places where groundbreaking ideas and new products are born, it really hammers home how vital OSAI is for keeping the innovation engine chugging and helping those plucky, cutting-edge firms compete. And if you want a textbook example of how going open can turbocharge things, look no further than PyTorch. The report digs into how Meta’s decision to shift its heavyweight deep learning framework to an open governance model, under a non-profit, turned out to be a masterstroke. The report leans on a close look by Yue and Nagle (2024), who tracked what happened next. Once PyTorch flew the Meta nest, contributions from Meta itself “significantly decreased.” Sounds a bit off, right? But actually, it signalled a healthy move away from one company calling the shots. What really ramped up was input from “external companies, especially from the developers of complementary technology, such as chip manufacturers.” Meanwhile, the actual users, the developers building stuff with PyTorch, kept their engagement steady – “no change.” It’s a clear win. As the researchers put it, this kind of shift for major OSAI software “promotes broader participation and increased contributions and decreases the dominance of any single company.” It’s a powerful testament to what report authors Anna Hermansen and Cailean Osborne found: “engagement in open, collaborative activities is a better indicator of innovation than patents.” This isn’t just theory; it’s making waves in massive sectors. Take manufacturing. Open-source AI is set to be a game-changer there, mostly because its open code means you can bend it and shape it to fit. 
This flexibility allows AI to slot neatly into factory workflows, automating tasks and smoothing out order management. A 2023 McKinsey report, flagged in the study, even predicts AI could pump up to $290 billion extra into advanced manufacturing. Then there's healthcare. In places like hospitals and local clinics, where every penny and every minute counts, free and flexible tools like open-source AI can literally be lifesavers. Imagine AI helping with diagnoses or flagging diseases early. McKinsey thinks the global healthcare sector could see up to a $260 billion boost in value once AI is really rolled out. A 2024 analysis even showed that open models in healthcare can go toe-to-toe with the proprietary ones—meaning hospitals can get tailored, privacy-friendly OSAI without skimping on performance. And it's not just about the tech; it's about the people. The report mentions that AI-related skills could see wages jump by up to 20%. That's a big deal and really underlines why we need to be thinking about training and development for this new AI era. Hilary Carter, SVP of Research at The Linux Foundation, said: "The findings in this report make it clear: open-source AI is a catalyst for economic growth and opportunity. As adoption scales across sectors, we're seeing measurable cost savings, increased productivity and rising demand for AI-related skills that can boost wages and career prospects. "Open-source AI is not only transforming how businesses operate—it's reshaping how people work." So, the takeaway? Open AI models are fast becoming the standard, the very foundation of future breakthroughs. They're pushing growth and healthy competition by making powerful AI tools available without an eye-watering price tag. The Linux Foundation's report isn't just cheerleading; it's laying out the hard numbers to show why open-source AI is absolutely crucial for a robust, stable, and forward-looking economy. See also: Alibaba Wan2.1-VACE: Open-source AI video tool for all Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Linux Foundation: Slash costs, boost growth with open-source AI appeared first on AI News. View the full article
  15. The Middle East is pulling in more attention from global tech investors than ever. Saudi Arabia, the UAE, and Qatar are rolling out billions of dollars in deals, working with top US companies, and building the kind of infrastructure needed to run large-scale AI systems. It’s not just about the money. There are new laws, startup activity, and plans for growth that are turning heads in Silicon Valley and beyond. Strategic deals anchor US tech ties US President Donald Trump recently visited the region and announced more than $2 trillion in investment agreements. These included major partnerships between Gulf states and American firms in artificial intelligence, cloud services, and defence tech. The UAE said it would build one of the world’s largest AI campuses in Abu Dhabi. At the same time, Saudi Arabia launched an AI company called Humain. Backed by the Public Investment Fund, the firm has already formed deals with Nvidia and AMD to bring in thousands of chips for local use. The idea is to run and train AI models inside the kingdom, cutting the need to rely on overseas services. These deals aren’t one-off events. They point to deeper ties between the Gulf and US tech companies. Gulf leaders want to localise AI development, but US companies see the region as a growing market for cloud, data, and chips. This growing alignment offers both sides an edge in a global race where speed and access matter. Gulf states scale up AI infrastructure AI systems need strong computing power. That means data centres, chips, and networks that can handle constant, heavy demand. Countries like Saudi Arabia and the UAE are putting their money behind this need. Saudi Arabia’s Humain is planning to deploy over 18,000 Nvidia chips, some of the most advanced in the market. These will power training clusters that let researchers and firms build new models at home. The UAE, through partnerships with Amazon and OpenAI, is also expanding its local data capacity. One campus in Abu Dhabi will include large-scale AI labs and supercomputers. Running powerful AI models close to home offers more than speed. It helps with data control, lowers costs, and reduces delays. Governments in the region are aware that long-term control over AI infrastructure will play a major role in future national development and influence. These projects are part of each country’s national tech strategy. Saudi Arabia’s Vision 2030 includes tech among its focus areas. The UAE’s AI strategy aims to be one of the top AI-ready countries in the next five years. Startups are finding momentum Investment isn’t only flowing to big infrastructure. April 2025 saw MENA startups raise $228.4 million. That’s more than double what they raised in March. Fintech and B2B platforms are leading the charge. Thndr, a Cairo-based investment platform, raised $15.7 million to expand into Saudi Arabia and the UAE. These countries have growing retail investor bases and are looking for tools that make trading and saving more accessible. The Gulf’s young, tech-savvy population and high mobile use make it an ideal testbed for startups. At the same time, government-backed funds are investing in early-stage companies to help grow local talent and reduce dependence on imported services. Governments are also creating more startup-friendly zones. Free economic zones in the UAE and planned innovation hubs in Saudi Arabia offer tax benefits and simplified licensing for tech ventures. 
Investors say that regulatory support is improving, and founders now have clearer paths to launch and scale. Cloud and data centre expansion gathers pace Cloud service demand is rising across the Middle East. Smart city projects, e-government platforms, and AI applications are driving the need for secure, local data storage and processing. Oracle has pledged $14 billion to expand its cloud footprint in Saudi Arabia. Google, AWS, and Microsoft are also investing in regional data hubs. These centres will support everything from banking to logistics. Building out cloud services is key to keeping data local and speeding up online services. It also lowers costs for local firms, which no longer need to rely on foreign servers. The result is a growing tech sector that has the tools to serve customers in real time. Large-scale data operations also open the door for more regional SaaS companies. With cloud capacity in place, local developers can create enterprise tools, AI services, and e-commerce platforms tailored to local needs. Policy reforms drive diversification Behind these tech moves are changes in policy. Governments are cutting red tape, easing rules for foreign ownership, and offering tax breaks for tech investors. The aim is to reduce the region’s reliance on oil and build a broader economic base. Saudi Arabia’s Vision 2030 includes goals for digital infrastructure, education, and innovation. The UAE’s AI strategy is tied to its push to attract top researchers and engineers. These are not just plans on paper. They’re being matched with funding, laws, and global partnerships. There is also a cultural shift underway. Tech is being taught in schools, and universities are opening AI-focused programs. This is helping to build a future workforce that can support local companies and attract international firms. More investors are noting the predictability and speed of doing business. This is especially important for tech startups that need fast feedback and steady support to grow. When rules are clear and approvals are quick, companies are more likely to stay. Balancing growth and geopolitical interests With more tech investment comes more attention. The US sees the region as a way to grow its global tech influence, especially as ties with China remain tense. For Middle Eastern nations, working with US companies gives them access to know-how and supply chains that would take years to build from scratch. At the same time, there are concerns about who controls the tech, where data is stored, and how it’s used. Some countries are pushing for data rules that favour local storage. Others want to develop their own large language models and keep training data inside national borders. Some regional leaders are starting to speak more openly about digital independence. They want to be buyers, yes, but also builders. That means investing in chips, software, and talent that can support homegrown tech. A few years ago, that seemed far off. Now, with the right backing, it’s starting to look within reach. Navigating these issues will shape the next phase of tech growth in the Middle East. Governments want to move fast but also retain control over key parts of their digital economy. The Middle East’s role in global tech is shifting. It’s no longer just a market for new gadgets or services. It’s becoming a centre for infrastructure, AI training, startup growth, and cloud services. 
Countries in the region are investing with a clear goal: to build long-term strength in a sector that shapes how business, education, and even government will work in the years ahead. If current trends continue, the Middle East won't just be receiving tech. It will be helping shape it. (Photo by Unsplash) See also: Saudi Arabia moves to build its AI future with HUMAIN and NVIDIA Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Digital Transformation Week, IoT Tech Expo, Blockchain Expo, and AI & Big Data Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Why the Middle East is a hot place for global tech investments appeared first on AI News. View the full article
  16. According to Thales, AI and quantum threats have catapulted to the top of the worry list for organisations wrestling with data security. That's the key takeaway from the cybersecurity giant's 2025 Data Threat Report, an annual deep dive into the latest data security threats, emerging trends, and hot topics. This year's findings are stark: almost seven out of ten organisations now see the sheer speed of AI development – especially where generative AI is concerned – as the number one security headache related to its adoption. This anxiety isn't just about pace; it's also fed by concerns over a fundamental lack of integrity in AI systems (flagged by 64% of those surveyed) and a troubling deficit in trustworthiness (a worry for 57%). Generative AI is a data-hungry beast, relying heavily on high-quality, often sensitive, information for core functions like training models, making inferences, and, of course, generating content. As we make rapid advancements in "agentic AI" – systems that can act more autonomously – the pressure to ensure high-calibre data quality becomes even more critical. After all, sound decision-making and reliable actions from AI systems depend entirely on the data they're fed. Many organisations are already diving in, with a third of respondents indicating generative AI is either being actively integrated or is already a force for transformation within their operations. Security threats increase as organisations embrace generative AI As generative AI throws up a complex web of data security challenges while simultaneously offering strategic avenues to bolster defences, its growing integration signals a distinct shift. Businesses are moving beyond just dipping their toes in the AI water; they're now looking at more mature, operational deployments. Interestingly, while most respondents named the swift uptake of GenAI as their biggest security concern, those further along the AI adoption curve aren't hitting the pause button to completely lock down their systems or fine-tune their tech stacks before forging ahead. This dash for rapid transformation – often overshadowing efforts to ensure organisational readiness – could mean these companies are, perhaps unwittingly, creating their own most serious security weak spots. Eric Hanselman, Chief Analyst at S&P Global Market Intelligence 451 Research, said: "The fast-evolving GenAI landscape is pressuring enterprises to move quickly, sometimes at the cost of caution, as they race to stay ahead of the adoption curve. "Many enterprises are deploying GenAI faster than they can fully understand their application architectures, compounded by the rapid spread of SaaS tools embedding GenAI capabilities, adding layers of complexity and risk." On a more positive note, 73% of respondents report they are putting money into AI-specific security tools to counter threats, either through fresh budgets or by reshuffling existing resources. Those making AI security a priority are also diversifying their approaches: over two-thirds have sourced tools from their cloud providers, three in five are turning to established security vendors, and almost half are looking to new or emerging startups for solutions. What's particularly telling is how quickly security for generative AI has climbed the spending charts, nabbing the second spot in ranked-choice voting, just pipped to the post by the perennial concern of cloud security. 
This shift powerfully underscores the growing recognition of AI-driven risks and the urgent need for specialised defences to counter them. Data breaches show modest decline, though threats remain elevated While the nightmare of a data breach still looms large for many, their reported frequency has actually dipped slightly over the past few years. Back in 2021, 56% of enterprises surveyed said they'd experienced a breach at some point; that figure has eased to 45% in the 2025 report. Delving deeper, the percentage of respondents reporting a breach within the last 12 months has dropped from 23% in 2021 to a more encouraging 14% in 2025. When it comes to the persistent villains of the threat landscape, malware continues to lead the pack, holding onto its top spot since 2021. Phishing has craftily climbed into second place, nudging ransomware down to third. As for who's causing the most concern, external actors dominate: hacktivists are currently seen as the primary menace, followed by nation-state actors. Human error, whilst still a significant factor, has slipped to third, down one position from the previous year. Vendors pressed on readiness for quantum threats The 2025 Thales Data Threat Report also casts a revealing light on the growing unease within most organisations about quantum-related security risks. The top threat here, cited by a hefty 63% of respondents, is the looming danger of "future encryption compromise." This is the unsettling prospect that powerful quantum computers could one day shatter current or even future encryption algorithms, exposing data previously thought to be securely locked away. Hot on its heels, 61% identified key distribution vulnerabilities, where quantum breakthroughs could undermine the methods we use to securely exchange encryption keys. Furthermore, 58% highlighted the "harvest now, decrypt later" (HNDL) threat – a chilling scenario where encrypted data, scooped up today, could be decrypted by powerful quantum machines in the future. In response to these gathering clouds, half of the organisations surveyed are taking a hard look at their current encryption strategies, with 60% already prototyping or evaluating post-quantum cryptography (PQC) solutions. However, it seems trust is a scarce commodity, as only a third are pinning their hopes on telecom or cloud providers to navigate this complex transition for them. Todd Moore, Global VP of Data Security Products at Thales, commented: "The clock is ticking on post-quantum readiness. It's encouraging that three out of five organisations are already prototyping new ciphers, but deployment timelines are tight and falling behind could leave critical data exposed. "Even with clear timelines for transitioning to PQC algorithms, the pace of encryption change has been slower than expected due to a mix of legacy systems, complexity, and the challenge of balancing innovation with security." There's clearly a lot more work to be done to get operational data security truly up to speed, not just to support the advanced capabilities of emerging technologies like generative AI, but also to lay down a secure foundation for whatever threats are just around the corner. (Image by Pete Linforth) See also: AI tool speeds up government feedback, experts urge caution Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. 
The comprehensive event is co-located with other leading events including Digital Transformation Week, IoT Tech Expo, Blockchain Expo, and AI & Big Data Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Thales: AI and quantum threats top security agendas appeared first on AI News. View the full article
  17. Dell Technologies used its Dell Technologies World conference in Las Vegas to announce the latest generation of AI acceleration servers, which come equipped with Nvidia's Blackwell Ultra GPUs. Dell claims the systems deliver up to four times faster AI training than previous generations, as the company expands its AI Factory partnership with Nvidia amid intense competition in the enterprise AI hardware market. The servers arrive as organisations move from experimental AI projects to production-scale implementations, creating demand for more sophisticated computing infrastructure. The new lineup features air-cooled PowerEdge XE9780 and XE9785 servers, designed for conventional data centres, and liquid-cooled XE9780L and XE9785L variants, optimised for whole-rack deployment. The advanced systems support configurations with up to 192 Nvidia Blackwell Ultra GPUs with direct-to-chip liquid cooling, expandable to 256 GPUs per Dell IR7000 rack. "We're on a mission to bring AI to millions of customers around the world," said Michael Dell, the eponymous chairman and chief executive officer. "Our job is to make AI more accessible. With the Dell AI Factory with Nvidia, enterprises can manage the entire AI lifecycle in use cases, from deployment to training, at any scale." Dell's self-designation as "the world's top provider of AI-centric infrastructure" appears calculated as companies try to deploy AI and navigate technical hurdles. Critical assessment of Dell's AI hardware strategy While Dell's AI acceleration hardware advancements appear impressive on the basis of tech specs, several factors will ultimately determine their market impact. The company has withheld pricing information for these high-end systems, which will undoubtedly represent substantial capital investments for organisations considering deployment. The cooling infrastructure alone, particularly for the liquid-cooled variants, may require data centre modifications for many potential customers, adding complexity and cost beyond the server hardware itself. Industry observers note that Dell faces intensifying competition in the AI hardware space from companies like Super Micro Computer, which has aggressively targeted the AI server market with similar offerings. However, Super Micro has recently encountered production cost challenges and margin pressure, potentially creating an opening for Dell if it can deliver competitive pricing. Jensen Huang, founder and CEO of Nvidia, emphasised the transformative potential of these systems: "AI factories are the infrastructure of modern industry, generating intelligence to power work in healthcare, finance and manufacturing. With Dell Technologies, we're offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge." Comprehensive AI acceleration ecosystem Dell's AI acceleration strategy extends beyond server hardware to encompass networking, storage, and software components: The networking portfolio now includes the PowerSwitch SN5600 and SN2201 switches (part of Nvidia's Spectrum-X platform) and Nvidia Quantum-X800 InfiniBand switches, capable of up to 800 gigabits per second throughput with Dell ProSupport and Deployment Services. The Dell AI Data Platform has received upgrades to enhance data management for AI applications, including a denser ObjectScale system with Nvidia BlueField-3 and Spectrum-4 networking integrations. 
In software, Dell offers the Nvidia AI Enterprise software platform directly, featuring Nvidia NIM, NeMo microservices, and Blueprints to streamline AI development workflows. The company also introduced Managed Services for its AI Factory with Nvidia, providing monitoring, reporting, and maintenance to help organisations address expertise gaps – skilled professionals remain in short supply. Availability timeline and market implications Dell's AI acceleration platform rollout follows a staggered schedule throughout 2025:
Air-cooled PowerEdge XE9780 and XE9785 servers with Nvidia HGX B300 GPUs will be available in the second half of 2025.
The liquid-cooled PowerEdge XE9780L and XE9785L variants are expected later this year.
The PowerEdge XE7745 server with Nvidia RTX Pro 6000 Blackwell Server Edition GPUs will launch in July 2025.
The PowerEdge XE9712 featuring GB300 NVL72 will arrive in the second half of 2025.
Dell plans to support Nvidia's Vera CPU and Vera Rubin platform, signalling a longer-term commitment to expanding its AI ecosystem beyond this product lineup. Strategic analysis of the AI acceleration market Dell's push into AI acceleration hardware reflects a strategic shift to capitalise on the artificial intelligence boom and to use its established enterprise customer relationships. As organisations realise the complexity and expense of implementing AI at scale, Dell appears to be positioning itself as a comprehensive solution provider rather than merely a hardware vendor. However, the success of Dell's AI acceleration initiative will ultimately depend on how effectively the systems deliver measurable business value. Organisations investing in high-end infrastructure will demand operational improvements and competitive advantages that justify the significant capital expenditure. The partnership with Nvidia provides Dell access to next-gen AI accelerator technology, but also creates dependency on Nvidia's supply chain and product roadmap. Given persistent chip shortages and extraordinary demand for AI accelerators, Dell's ability to secure adequate GPU allocations will prove crucial for meeting customer expectations. (Photo by Nvidia) See also: Dell, Intel and University of Cambridge deploy the UK's fastest AI supercomputer Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Dell unveils Nvidia Blackwell-based AI acceleration platform appeared first on AI News. View the full article
  18. AI is transforming industries worldwide, and its impact on digital finance is becoming especially significant. One of the most intriguing applications of AI lies in the analysis and forecasting of the Bitcoin price in USD – a notoriously volatile metric that traditional financial models struggle to predict. Today, AI is helping decode cryptocurrency's complexities with speed and precision. From deep learning networks and sentiment analysis to real-time anomaly detection, AI technologies are shaping how investors, institutions and platforms understand and respond to Bitcoin's dynamic movements. Predicting Bitcoin with deep learning Bitcoin has evolved into a global network of nodes that generates a constant stream of data: trading volumes, volatility, price changes and other transaction records. All this serves as the basis for training deep learning models. Among recurrent neural networks (RNNs), Long Short-Term Memory (LSTM) networks are particularly effective for forecasting Bitcoin prices. LSTMs excel at capturing long-term dependencies in time series data, making them well suited to the complex and dynamic nature of Bitcoin's real-time price movements. The ability to remember and learn from previously observed patterns allows LSTMs to model the statistical behaviour of Bitcoin prices over time. A recently published study in Forecasting (2024) proposed a hybrid model for predicting Bitcoin prices by combining LSTM with attention mechanisms and gradient-specific optimisation. The study reports an accuracy of 99.84%, suggesting that advanced deep learning strategies can outperform traditional financial forecasting models. Using NLP to decode market emotions Natural Language Processing (NLP) tools help cryptocurrency investors analyse real-time, unstructured data to understand market sentiment and investor behaviour. In 2023, a study posted on arXiv introduced an end-to-end model that couples tweet sentiment forecasting with price prediction. The model employs a BERT-based neural network for sentiment analysis and a GRU for price forecasting. The study claims that by integrating sentiment analysis and deep learning, the cryptocurrency market can be predicted with greater accuracy; a mean absolute percentage error of 3.6% shows the potential of combining the two domains. Spotting market anomalies with unsupervised AI Unsupervised learning techniques, like clustering algorithms and autoencoders, are particularly effective at anomaly detection in the cryptocurrency space. Such tools excel at flagging unexpected behaviour, which is vital in tumultuous markets like Bitcoin's. Models can flag possible flash crashes, price manipulation and other suspicious exchange activity by comparing real-time market data against historical patterns. For instance, if Bitcoin's price in USD drops sharply while correlated assets hold steady, AI can detect the outlier and notify human traders or activate protective protocols. Mining blockchain data for AI insights One of Bitcoin's most significant advantages is the transparency of its blockchain. On-chain data enables real-time monitoring of network activity and participant behaviour, like the number of active addresses, hash rate, wallet distributions, and transaction volumes. 
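The cited hybrid model adds attention mechanisms and custom optimisation, but the basic LSTM forecasting setup it builds on looks roughly like the following PyTorch sketch; the synthetic tensors stand in for real normalised windows of historical prices.

```python
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    """Maps a window of past observations to a one-step-ahead forecast."""
    def __init__(self, n_features: int = 1, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, window, n_features)
        out, _ = self.lstm(x)            # out: (batch, window, hidden)
        return self.head(out[:, -1, :])  # forecast from the last hidden state

model = PriceLSTM()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in data: real inputs would be normalised price windows,
# optionally augmented with volume or on-chain features.
x = torch.randn(32, 30, 1)  # 32 windows of 30 time steps, 1 feature each
y = torch.randn(32, 1)      # next-step targets

for _ in range(100):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()
```

In practice the window length, feature set and normalisation scheme matter at least as much as the architecture itself.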
AI models can analyse such on-chain data to identify large-scale trends. For instance, in previous bull runs, a surge in wallets containing 1-10 BTC (commonly associated with retail investors) was observed. Similarly, declines in miner flows to exchanges can signal supply constraints. Reinforcement learning models are being trained to predict the impact of on-chain movements on Bitcoin's market value. This hybrid of blockchain analytics and machine learning is redefining how analysts build predictive models based on clear, accessible public data. The new AI bots break down market borders Artificial intelligence has increased its foothold in cryptocurrency markets through autonomous trading systems. Unlike older trading bots that had a set checklist to complete, modern bots are highly sophisticated and employ flexible plans based on real-time data. Modern AI bots don't just follow trends; they consider various factors influencing market prices. AI bots can shift from trend-following to mean reversion, taking into account price shifts and technical indicators. Some bots can simulate market conditions each second to determine statistically reasonable points for investment. Ethical and technical problems surrounding AI applications in cryptocurrency Implementing AI in cryptocurrency trading can be rewarding, but it is also risky. Overfitting remains an issue, as models driven by historical data are less reliable in the face of black swans or unexpected regulatory changes. Coordinated bot networks pose significant risks to trading volume and market sentiment. For this reason, many platforms have focused on publishing algorithmic trading audit reports for transparency and establishing ethics teams to mitigate any misuse of AI technologies. Explainability frameworks – model transparency and accountability – are particularly important in financial applications because of the enormous sums involved and the fragility of user trust. April 2025 updates: Bitcoin and AI integration In April 2025, Glassnode reported that addresses holding between 1,000 and 10,000 Bitcoin surged to 2,014, up from 1,944 in early March. The number of such "whales" has been climbing since April 2024, suggesting that confidence is returning among major holders. Brief reflection: Bitcoin assessments enter the AI era Analysing and predicting the dynamics of Bitcoin and its price in USD through artificial intelligence is a trend that is here for the long run. It will be an essential part of strategy when dealing in cryptocurrency markets. AI provides unprecedented opportunities in financial market analysis through neural networks, blockchain data mining, behavioural prediction and risk modelling. For AI specialists, this is a rare practical blend of ultra-high-frequency data, real working scenarios and open public systems that encourage collaboration. For the rest of us, it offers a glimpse of a reality in which systems optimise buying and selling at lightning speed. (Image source: Unsplash) The post AI transforms Bitcoin price prediction and market strategy appeared first on AI News. View the full article
  19. AI's thirst for energy is ballooning into a monster of a challenge. And it's not just about the electricity bills. The environmental fallout is serious, stretching to guzzling precious water resources, creating mountains of electronic waste, and, yes, adding to those greenhouse gas emissions we're all trying to cut. As AI models get ever more complex and weave themselves into yet more parts of our lives, a massive question mark hangs in the air: can we power this revolution without costing the Earth? The numbers don't lie: AI's energy demand is escalating fast The sheer computing power needed for the smartest AI out there is on an almost unbelievable upward curve – some say it's doubling roughly every few months. This isn't a gentle slope; it's a vertical climb that's threatening to leave even our most optimistic energy plans in the dust. To give you a sense of scale, AI's future energy needs could soon gulp down as much electricity as entire countries like Japan or the Netherlands, or even large US states like California. When you hear stats like that, you start to see the potential squeeze AI could put on the power grids we all rely on. 2024 saw a record 4.3% surge in global electricity demand, and AI's expansion was a big reason why, alongside the boom in electric cars and factories working harder. Wind back to 2022, and data centres, AI, and even cryptocurrency mining were already accounting for nearly 2% of all the electricity used worldwide – that's about 460 terawatt-hours (TWh). Jump to 2024, and data centres on their own use around 415 TWh, which is roughly 1.5% of the global total, and growing at 12% a year. AI's direct share of that slice is still relatively small – about 20 TWh, or 0.02% of global energy use – but hold onto your hats, because that number is set to rocket upwards. The forecasts? Well, they're pretty eye-opening. By the end of 2025, AI data centres around the world could demand an extra 10 gigawatts (GW) of power. That's more than the entire power capacity of a place like Utah. Roll on to 2026, and global data centre electricity use could hit 1,000 TWh – similar to what Japan uses right now. And, by 2027, the global power hunger of AI data centres is tipped to reach 68 GW, which is almost what California had in total power capacity back in 2022. Towards the end of this decade, the figures get even more jaw-dropping. Global data centre electricity consumption is predicted to double to around 945 TWh by 2030, which is just shy of 3% of all the electricity used on the planet. OPEC reckons data centre electricity use could even triple to 1,500 TWh by then. And Goldman Sachs? They're saying global power demand from data centres could leap by as much as 165% compared to 2023, with those data centres specifically kitted out for AI seeing their demand shoot up by more than four times. There are even suggestions that data centres could be responsible for up to 21% of all global energy demand by 2030 if you count the energy it takes to get AI services to us, the users. When we talk about AI's energy use, it mainly splits into two big chunks: training the AI, and then actually using it. Training enormous models, like GPT-4, takes a colossal amount of energy. Training GPT-3, for example, is estimated to have used 1,287 megawatt-hours (MWh) of electricity, and GPT-4 is thought to have needed a whopping 50 times more than that. While training is a power hog, it's the day-to-day running of these trained models that can chew through over 80% of AI's total energy. 
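A quick back-of-envelope calculation puts those training figures in household terms; it uses the estimates quoted above plus one assumed number, average US household electricity use of roughly 10,500 kWh a year (an EIA-style ballpark, illustrative rather than authoritative).

```python
# Figures quoted in this article, plus one labelled assumption.
GPT3_TRAINING_MWH = 1_287        # estimated energy to train GPT-3 (cited above)
GPT4_MULTIPLIER = 50             # GPT-4 thought to need ~50x more (cited above)
HOUSEHOLD_KWH_PER_YEAR = 10_500  # assumed average US household usage

gpt4_training_mwh = GPT3_TRAINING_MWH * GPT4_MULTIPLIER
household_years = gpt4_training_mwh * 1_000 / HOUSEHOLD_KWH_PER_YEAR

print(f"GPT-4 training estimate: {gpt4_training_mwh:,} MWh")
print(f"Roughly {household_years:,.0f} US household-years of electricity")
```

That works out to around 64,000 MWh, or on the order of six thousand household-years, for a single training run – and training is the smaller chunk.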
It’s reported that asking ChatGPT a single question uses about ten times more energy than a Google search (we’re talking roughly 2.9 Wh versus 0.3 Wh). With everyone jumping on the generative AI bandwagon, the race is on to build ever more powerful – and therefore more energy-guzzling – data centres. So, can we supply energy for AI – and for ourselves? This is the million-dollar question, isn’t it? Can our planet’s energy systems cope with this new demand? We’re already juggling a mix of fossil fuels, nuclear power, and renewables. If we’re going to feed AI’s growing appetite sustainably, we need to ramp up and diversify how we generate energy, and fast. Naturally, renewable energy – solar, wind, hydro, geothermal – is a huge piece of the puzzle. In the US, for instance, renewables are set to go from 23% of power generation in 2024 to 27% by 2026. The tech giants are making some big promises; Microsoft, for example, is planning to buy 10.5 GW of renewable energy between 2026 and 2030 just for its data centres. AI itself could actually help us use renewable energy more efficiently, perhaps cutting energy use by up to 60% in some areas by making energy storage smarter and managing power grids better. But let’s not get carried away. Renewables have their own headaches. The sun doesn’t always shine, and the wind doesn’t always blow, which is a real problem for data centres that need power around the clock, every single day. The batteries we have now to smooth out these bumps are often expensive and take up a lot of room. Plus, plugging massive new renewable projects into our existing power grids can be a slow and complicated business. This is where nuclear power is starting to look more appealing to some, especially as a steady, low-carbon way to power AI’s massive energy needs. It delivers that crucial 24/7 power, which is exactly what data centres crave. There’s a lot of buzz around Small Modular Reactors (SMRs) too, because they’re potentially more flexible and have beefed-up safety features. And it’s not just talk; big names like Microsoft, Amazon, and Google are seriously looking into nuclear options. Matt Garman, who heads up AWS, recently put it plainly to the BBC, calling nuclear a “great solution” for data centres. He said it’s “an excellent source of zero carbon, 24/7 power.” He also stressed that planning for future energy is a massive part of what AWS does. “It’s something we plan many years out,” Garman mentioned. “We invest ahead. I think the world is going to have to build new technologies. I believe nuclear is a big part of that, particularly as we look 10 years out.” Still, nuclear power isn’t a magic wand. Building new reactors takes a notoriously long time, costs a fortune, and involves wading through complex red tape. And let’s be frank, public opinion on nuclear power is still a bit shaky, often because of past accidents, even though modern reactors are much safer. The sheer speed at which AI is developing also creates a bit of a mismatch with how long it takes to get a new nuclear plant up and running. This could mean we end up leaning even more heavily on fossil fuels in the short term, which isn’t great for our green ambitions. Plus, the idea of sticking data centres right next to nuclear plants has got some people worried about what that might do to electricity prices and reliability for everyone else. Not just kilowatts: Wider environmental shadow of AI looms AI’s impact on the planet goes way beyond just the electricity it uses. 
Those data centres get hot, and cooling them down uses vast amounts of water. Your average data centre sips about 1.7 litres of water for every kilowatt-hour of energy it burns through. Back in 2022, Google's data centres reportedly drank their way through about 5 billion gallons of fresh water – that's a 20% jump from the year before. Some estimates suggest that for every kWh a data centre uses, it might need up to two litres of water just for cooling. To put it another way, global AI infrastructure could soon be chugging six times more water than the entirety of Denmark. And then there's the ever-growing mountain of electronic waste, or e-waste. Because AI tech – especially specialised hardware like GPUs and TPUs – moves so fast, old kit gets thrown out more often. We could be looking at AI contributing to an e-waste pile-up from data centres hitting five million tons every year by 2030. Even making the AI chips and all the other bits for data centres takes a toll on our natural resources and the environment. It means mining for critical minerals like lithium and cobalt, often using methods that aren't exactly kind to the planet. Just to make one AI chip can take over 1,400 litres of water and 3,000 kWh of electricity. This hunger for new hardware is also pushing for more semiconductor factories, which, guess what, often leads to more gas-powered energy plants being built. And, of course, we can't forget the carbon emissions. When AI is powered by electricity generated from burning fossil fuels, it adds to the climate change problem we're all facing. It's estimated that training just one big AI model can pump out as much CO2 as hundreds of US homes do in a year. If you look at the environmental reports from the big tech companies, you can see AI's growing carbon footprint. Microsoft's yearly emissions, for example, went up by about 40% between 2020 and 2023, mostly because they were building more data centres for AI. Google also reported that its total greenhouse gas emissions have shot up by nearly 50% over the last five years, with the power demands of its AI data centres being a major culprit. Can we innovate our way out? It might sound like all doom and gloom, but a combination of new ideas could help. A big focus is on making AI algorithms themselves more energy-efficient. Researchers are coming up with clever tricks like "model pruning" (stripping out unnecessary bits of an AI model), "quantisation" (using less precise numbers, which saves energy – see the sketch below), and "knowledge distillation" (where a smaller, thriftier AI model learns from a big, complex one). Designing smaller, more specialised AI models that do specific jobs with less power is also a priority. Inside data centres, things like "power capping" (putting a lid on how much power hardware can draw) and "dynamic resource allocation" (shifting computing power around based on real-time needs and when renewable energy is plentiful) can make a real difference. Software that's "AI-aware" can even shift less urgent AI jobs to times when energy is cleaner or demand on the grid is lower. AI can even be used to make the cooling systems in data centres more efficient. On-device AI could also help to reduce power consumption. Instead of sending data off to massive, power-hungry cloud data centres, the AI processing happens right there on your phone or device. This could slash energy use, as the chips designed for this prioritise being efficient over raw power. 
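As a taste of what quantisation looks like in practice, here is a minimal PyTorch sketch using dynamic quantisation, which converts a model's linear-layer weights to 8-bit integers; the toy network is a stand-in for a real trained model.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained network; in practice this would be a large
# model loaded from a checkpoint.
model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# Dynamic quantisation swaps the Linear layers' float32 weights for 8-bit
# integers, shrinking the model and cutting inference cost and energy,
# in exchange for a small accuracy hit.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantised(x).shape)  # same interface as the original model
```

The appeal is that nothing upstream changes: callers use the quantised model exactly as before, but every inference does cheaper integer arithmetic.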
And we can't forget about rules and regulations. Governments are starting to wake up to the need to make AI accountable for its energy use and wider environmental impact. Having clear, standard ways to measure and report AI's footprint is a crucial first step. We also need policies that encourage companies to make hardware that lasts longer and is easier to recycle, to help tackle that e-waste mountain. Things like energy credit trading systems could even give companies a financial reason to choose greener AI tech. It's worth noting that the United Arab Emirates and the United States shook hands this week on a deal to build the biggest AI campus outside the US in the Gulf. While this shows just how important AI is becoming globally, it also throws a spotlight on why all these energy and environmental concerns need to be front and centre for such huge projects. Finding a sustainable future for AI AI has the power to do some amazing things, but its ferocious appetite for energy is a serious hurdle. The predictions for its future power demands are genuinely startling, potentially matching what whole countries use. If we're going to meet this demand, we need a smart mix of energy sources. Renewables are fantastic for the long run, but they have their wobbles when it comes to consistent supply and scaling up quickly. Nuclear power – including those newer SMRs – offers a reliable, low-carbon option that's definitely catching the eye of big tech companies. But we still need to get our heads around the safety, cost, and how long they take to build. And remember, it's not just about electricity. AI's broader environmental impact – from the water it drinks to cool data centres, to the growing piles of e-waste from its hardware, and the resources it uses up during manufacturing – is huge. We need to look at the whole picture if we're serious about lessening AI's ecological footprint. The good news? There are plenty of promising ideas and innovations bubbling up. Energy-saving AI algorithms, clever power management in data centres, AI-aware software that can manage workloads intelligently, and the shift towards on-device AI all offer ways to cut down on energy use. Plus, the fact that we're even talking about AI's environmental impact more means that discussions around policies and rules to push for sustainability are finally happening. Dealing with AI's energy and environmental challenges needs everyone – researchers, the tech industry, and policymakers – to roll up their sleeves and work together, and fast. If we make energy efficiency a top priority in how AI is developed, invest properly in sustainable energy, manage hardware responsibly from cradle to grave, and put supportive policies in place, we can aim for a future where AI's incredible potential is unlocked in a way that doesn't break our planet. The race to lead in AI has to be a race for sustainable AI too. (Photo by Nejc Soklič) See also: AI tool speeds up government feedback, experts urge caution Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Will the AI boom fuel a global energy crisis? appeared first on AI News. View the full article
20. When Huawei shocked the global tech industry with its Mate 60 Pro smartphone featuring an advanced 7-nanometer chip despite sweeping US technology restrictions, it demonstrated that innovation finds a way even under the heaviest sanctions. The US response was swift and predictable: tighter export controls and expanded restrictions. Now, with reports suggesting Huawei’s Ascend AI chips are approaching Nvidia-level performance—though the Chinese company remains characteristically silent about these developments—America has preemptively escalated its semiconductor war to global proportions.

The Trump administration’s declaration that using Huawei’s Ascend chips “anywhere in the world” violates US export controls reveals more than policy enforcement—it exposes a fundamental fear that American technological dominance may no longer be guaranteed through restrictions alone.

This global AI chip ban emerged on May 14, 2025, when President Donald Trump’s administration rescinded the Biden-era AI Diffusion Rule without revealing details of a replacement policy. Instead, the Bureau of Industry and Security (BIS) announced guidance to “strengthen export controls for overseas AI chips,” specifically targeting Huawei’s Ascend processors. The new guidelines warn of “enforcement actions” including imprisonment and fines for any global business found using these Chinese-developed chips—a fundamental departure from traditional export controls, which typically govern what leaves a country’s borders, not what happens entirely outside them.

The scope of America’s tech authority

The South China Morning Post reports that these new guidelines explicitly single out Huawei’s Ascend chips after scrapping the Biden administration’s country-tiered “AI diffusion” rule. But the implications of this global AI chip ban extend far beyond bilateral US-China tensions. By asserting jurisdiction over global technology choices, America essentially demands that sovereign nations and independent businesses worldwide comply with its domestic policy preferences.

This extraterritorial approach raises fundamental questions about national sovereignty and international trade. Should a Brazilian AI startup be prevented from using the most cost-effective chip solution simply because those chips are manufactured by a Chinese company? Should European research institutions abandon promising collaborations because they involve hardware Washington deems unacceptable?

According to Financial Times reporting, BIS stated that Huawei’s Ascend 910B, 910C, and 910D were all subject to the regulations as they were likely “designed with certain US software or technology or produced with semiconductor manufacturing equipment that is the direct product of certain US-origin software or technology, or both.”

Industry resistance to universal controls

Even within the United States, the chipmaking sector expresses alarm about Washington’s semiconductor policies. The aggressive expansion of export controls creates uncertainty beyond Chinese companies, affecting global supply chains and innovation partnerships built over decades.

“Washington’s new guidelines are essentially forcing global tech firms to pick a side – Chinese or US hardware – which will further deepen the tech divide between the world’s two largest economies,” analysts note. This forced binary choice ignores the nuanced reality of modern technology development, where innovation emerges from diverse, international collaborations.

The economic implications prove staggering.
Recent analysis indicates Huawei’s Ascend 910B AI chip delivers 80% of Nvidia A100’s efficiency when training large language models, though “in some other tests, Ascend chips can beat the A100 by 20%.” By blocking access to competitive alternatives, this global AI chip ban may inadvertently stifle innovation and maintain artificial market monopolies.

The innovation paradox

Perhaps most ironically, policies intended to maintain American technological leadership may undermine it. Nvidia CEO Jensen Huang acknowledged earlier this month that Huawei was “one of the most formidable technology companies in the world,” noting that China was “not behind” in AI development. Attempting to isolate such capabilities through global restrictions may accelerate the development of parallel technology ecosystems, ultimately reducing American influence rather than preserving it.

The secrecy surrounding Huawei’s Ascend chips—with the company keeping “details of its AI chips close to its chest, with only public information coming from third-party teardown reports”—has intensified with US sanctions. Following escalating restrictions, Huawei stopped officially disclosing information about the series, including release dates, production schedules, and fabrication technologies. The chips specified in current US restrictions, including the Ascend 910C and 910D, haven’t even been officially confirmed by Huawei.

Geopolitical ramifications

In a South China Morning Post report, Chim Lee, a senior analyst at the Economist Intelligence Unit, warns that “if the guidance is enforced strictly, it is likely to provoke retaliation from China” and could become “a negotiating point in ongoing trade talks between Washington and Beijing.” This assessment underscores the counterproductive nature of aggressive unilateral action in an interconnected global economy.

The semiconductor industry thrives on international collaboration, shared research, and open competition. Policies that fragment this ecosystem serve no one’s long-term interests—including America’s. As the global community grapples with challenges from climate change to healthcare innovation, artificial barriers preventing the best minds from accessing optimal tools ultimately harm human progress.

Beyond binary choices

The question isn’t whether nations should protect strategic interests—they should and must. But when export controls extend “anywhere in the world,” we cross from legitimate national security policy into technological authoritarianism. The global technology community deserves frameworks that balance security concerns with innovation imperatives.

This global AI chip ban risks accelerating the technological fragmentation it seeks to prevent. History suggests markets divided by political decree often spawn parallel innovation ecosystems that compete more effectively than those operating under artificial constraints. Rather than extending controls globally, a strategic approach would focus on out-innovating competitors through superior technology and international partnerships.

The current path toward technological bifurcation serves neither American interests nor global innovation—it simply creates a more fragmented, less efficient world where artificial barriers replace natural competition. The semiconductor industry’s future depends on finding sustainable solutions that address legitimate security concerns without dismantling the collaborative networks that drive technological advancement.
As this global AI chip ban takes effect, the world watches to see whether innovation will flourish through competition or fragment through control.

See also: Huawei’s AI hardware breakthrough challenges Nvidia’s dominance

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Can the US really enforce a global AI chip ban? appeared first on AI News.

View the full article
21. America’s quest to protect its semiconductor technology from China has taken increasingly dramatic turns over the past few years—from export bans to global restrictions—but the latest proposal from Congress ventures into unprecedented territory. Lawmakers are now pushing for mandatory GPS-style tracking embedded in every AI chip exported from the United States, essentially turning advanced semiconductors into devices that report their location back to Washington.

On May 15, 2025, a bipartisan group of eight House representatives introduced the Chip Security Act, which would require companies like Nvidia to embed location verification mechanisms in their processors before export. This represents perhaps the most invasive approach yet in America’s technological competition with China, moving far beyond restricting where chips can go to actively monitoring where they end up.

The mechanics of AI chip surveillance

Under the proposed Chip Security Act, AI chip surveillance would become mandatory for all “covered integrated circuit products”—including those classified under Export Control Classification Numbers 3A090, 3A001.z, 4A090, and 4A003.z. Companies like Nvidia would be required to embed location verification mechanisms in their AI chips before export, reexport, or in-country transfer to foreign nations.

Representative Bill Huizenga, the Michigan Republican who introduced the House bill, stated that “we must employ safeguards to help ensure export controls are not being circumvented, allowing these advanced AI chips to fall into the hands of nefarious actors.” His co-lead, Representative Bill Foster—an Illinois Democrat and former physicist who designed chips during his scientific career—added, “I know that we have the technical tools to prevent powerful AI technology from getting into the wrong hands.”

The legislation goes far beyond simple location tracking. Companies would face ongoing surveillance obligations, required to report any credible information about chip diversion, including location changes, unauthorized users, or tampering attempts. This creates a continuous monitoring system that extends indefinitely beyond the point of sale, fundamentally altering the relationship between manufacturers and their products.

Cross-party support for technology control

Perhaps most striking about this AI chip surveillance initiative is its bipartisan nature. The bill enjoys broad support across party lines, co-led by House Select Committee on China Chairman John Moolenaar and Ranking Member Raja Krishnamoorthi. Other cosponsors include Representatives Ted Lieu, Rick Crawford, Josh Gottheimer, and Darin LaHood.

Moolenaar said that “the Chinese Communist Party has exploited weaknesses in our export control enforcement system—using shell companies and smuggling networks to divert sensitive US technology.” The bipartisan consensus on AI chip surveillance reflects how deeply the China challenge has penetrated American political thinking, transcending traditional partisan divisions.

The Senate has already introduced similar legislation through Senator Tom Cotton, suggesting that semiconductor surveillance has broad congressional support. Coordination between chambers indicates that some form of AI chip surveillance may become law regardless of which party controls Congress.

Technical challenges and implementation questions

The technical requirements for implementing AI chip surveillance raise significant questions about feasibility, security, and performance.
The bill mandates that chips implement “location verification using techniques that are feasible and appropriate” within 180 days of enactment, but provides little detail on how such mechanisms would work without compromising chip performance or introducing new vulnerabilities.

For industry leaders like Nvidia, implementing mandatory surveillance technology could fundamentally alter product design and manufacturing processes. Each chip would need embedded capabilities to verify its location, potentially requiring additional components, increased power consumption, and processing overhead that could impact performance—precisely what customers in AI applications cannot afford.

The bill also grants the Secretary of Commerce broad enforcement authority to “verify, in a manner the Secretary determines appropriate, the ownership and location” of exported chips. This creates a real-time surveillance system in which the US government could potentially track every advanced semiconductor worldwide, raising questions about data sovereignty and privacy.
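The bill itself is silent on mechanisms, but one approach researchers have floated for chip geolocation is delay-based verification: the chip answers cryptographic challenges from servers at known locations, and because signals cannot travel faster than light through fibre, the round-trip times put a hard ceiling on how far away the chip can physically be. The sketch below is purely illustrative – the landmark coordinates and timings are invented, and a real scheme would also need tamper-resistant on-chip attestation:

```python
import math

SIGNAL_KM_PER_MS = 200.0  # roughly 2/3 the speed of light, typical for fibre

# Hypothetical landmark servers at known (lat, lon) coordinates, paired with
# the measured challenge round-trip time to the chip, in milliseconds.
measurements = {
    "virginia":  ((39.0, -77.5), 4.0),
    "frankfurt": ((50.1, 8.7), 92.0),
    "singapore": ((1.35, 103.8), 210.0),
}

# The exporter claims the chip is installed near New York.
claimed_location = (40.7, -74.0)

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def claim_is_plausible(claimed, measurements):
    for name, (landmark, rtt_ms) in measurements.items():
        # Half the round trip bounds the one-way distance from this landmark.
        max_km = (rtt_ms / 2) * SIGNAL_KM_PER_MS
        if haversine_km(claimed, landmark) > max_km:
            print(f"{name}: claimed position is farther than the RTT allows")
            return False
    return True

print("claim plausible:", claim_is_plausible(claimed_location, measurements))
```

The short round trips are the informative ones: a chip that answers a Virginia landmark in four milliseconds cannot physically be sitting on another continent, whatever its paperwork claims.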
Commercial surveillance meets national security

The AI chip surveillance proposal represents an unprecedented fusion of national security imperatives with commercial technology products. Unlike traditional export controls that simply restrict destinations, the approach creates ongoing monitoring obligations that blur the lines between private commerce and state surveillance.

Representative Foster’s background as a physicist lends technical credibility to the initiative, but it also highlights how scientific expertise can be enlisted in geopolitical competition. The legislation reflects a belief that technical solutions can solve political problems—that embedding surveillance capabilities in semiconductors can prevent their misuse.

Yet the proposed law raises fundamental questions about the nature of technology export in a globalized world. Should every advanced semiconductor become a potential surveillance device? How will mandatory AI chip surveillance affect innovation in countries that rely on US technology? What precedent does this set for other nations seeking to monitor their technology exports?

Accelerating technological decoupling

The mandatory AI chip surveillance requirement could inadvertently accelerate the development of alternative semiconductor ecosystems. If US chips come with built-in tracking mechanisms, countries may intensify efforts to develop domestic alternatives or source from suppliers without such requirements. China, already investing heavily in semiconductor self-sufficiency following years of US restrictions, may view these surveillance requirements as further justification for technological decoupling.

The irony is striking: efforts to track Chinese use of US chips may ultimately reduce their appeal and market share in global markets. Meanwhile, allied nations may question whether they want their critical infrastructure dependent on chips that can be monitored by the US government. The legislation’s broad language suggests that AI chip surveillance would apply to all foreign countries, not just adversaries, potentially straining relationships with partners who value technological sovereignty.

The future of semiconductor governance

As the Trump administration continues to formulate its replacement for Biden’s AI Diffusion Rule, Congress appears unwilling to wait. The Chip Security Act represents a more aggressive approach than traditional export controls, moving from restriction to active surveillance in ways that could reshape the global semiconductor industry.

This evolution reflects deeper changes in how nations view technology exports in an era of great power competition. The semiconductor industry, once governed primarily by market forces and technical standards, increasingly operates under geopolitical imperatives that prioritize control over commerce.

Whether AI chip surveillance becomes law depends on congressional action and industry response. But the bipartisan support suggests that some form of semiconductor monitoring may be inevitable, marking a new chapter in the relationship between technology, commerce, and national security.

Conclusion: The end of anonymous semiconductors from America?

The question facing the industry is no longer whether the US will control technology exports, but how extensively it will monitor them after they leave American shores. In this emerging paradigm, every chip becomes a potential intelligence asset, and every export a data point in a global surveillance network.

The semiconductor industry now faces a critical choice: adapt to a future where products carry their own tracking systems, or risk being excluded from the US market entirely. As Congress pushes for mandatory AI chip surveillance, we may be witnessing the end of anonymous semiconductors and the beginning of an era where every processor knows exactly where it belongs—and reports back accordingly.

See also: US-China tech war escalates with new AI chips export controls

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Congress pushes GPS tracking for every exported semiconductor appeared first on AI News.

View the full article
22. Microsoft is cutting about 7,000 jobs, or 3% of its workforce. The move isn’t about poor performance or falling revenue. It’s a clear shift in strategy—fewer layers, more engineers, and more investment in artificial intelligence.

The layoffs affect staff across divisions and global offices. But the bulk of those let go are in middle management and non-technical roles, a pattern showing up across tech. The message: reduce overhead, speed up product cycles, and make room for bigger AI spending.

The numbers behind the shift

Microsoft ended its latest quarter with $70.07 billion in revenue. That beat Wall Street estimates and shows strong business health. Even so, the company plans to spend as much as $80 billion this fiscal year—mainly on data centres designed for training and running AI models. That’s a big leap in infrastructure spending, but it also explains why Microsoft is trimming elsewhere.

AI models are compute-heavy and demand new types of hardware. Storage, cooling, and power all need to scale. Building that capacity takes money, time, and fewer internal delays – and Microsoft appears to be cutting anything that slows the push.

Management in the firing line

Most cuts hit middle managers and support staff. These are roles that help coordinate, review, and report—but don’t directly write code or design systems. While these positions have long helped large companies function, they’re now being seen as blockers to fast action.

Sources told Business Insider that Microsoft wants a higher ratio of technical staff to managers. This isn’t just about saving costs; it’s about reducing the number of people between engineers and final decisions.

Analyst Rishi Jaluria told the Financial Times that tech giants like Microsoft have “too many layers.” He said companies are trying to strip back bureaucracy as they chase AI leadership.

Microsoft has not publicly broken down which departments were most affected. But reports suggest LinkedIn, a Microsoft subsidiary, saw job cuts as part of this broader shift.

Aligning with a broader industry trend

Microsoft isn’t the only company trimming management; Amazon, Google, and Meta have all done similarly. They’re removing layers and pushing more decisions closer to those building the product.

For Microsoft, the changes come after several earlier rounds of cuts. In early 2024, the company laid off around 2,000 workers in performance-based trims. This new wave is different: it targets structure, not staff output.

$80 billion on AI infrastructure

Microsoft’s investment plan puts AI at the centre of its growth. According to Reuters, the company wants to spend up to $80 billion in fiscal 2025, much of it going toward AI-enabled data centres. These centres power large language models, natural language tools, and enterprise AI systems. Without them, even the best models won’t run at scale.

The company’s move shows how serious it is about owning the AI backbone. This is about more than software updates; it’s about physical hardware, cloud capacity, and tight control over how AI gets built and used.

Microsoft’s early partnership with OpenAI gave it a jumpstart, but Google, Meta, Amazon, and Apple are all making big AI moves. Microsoft appears to be betting that first-mover advantage is only as strong as the infrastructure behind it.

Employee reactions reflect mixed sentiment

As with most layoffs, employee reactions vary. Some posts on social media reflect understanding; others voice concern about job security and team stability.
Several ex-employees described the mood as “tense but expected.” Many said they had been preparing for changes since Microsoft’s 2024 performance cuts. Some worry that too much focus on AI will weaken support roles, and others believe cutting managers will create confusion rather than clarity. Still, public sentiment shows a growing acceptance that AI is changing what jobs look like—even at the biggest firms.

What this means for the industry

Microsoft’s restructuring sets a tone: strong revenue no longer guarantees job security, and growth in AI now drives org charts, not the other way around. Middle management is no longer safe, and non-technical roles must prove direct value to AI goals. Even product teams may face more pressure to automate or streamline.

For employees, the message is clear: learn how AI fits your job—or risk being cut from the plan. For other tech firms, Microsoft’s strategy may serve as a roadmap. Spending more on AI means spending less elsewhere, and many companies will likely follow that playbook to stay competitive.

Long-term questions remain

The short-term logic is clear. Microsoft is cutting structure to fund AI growth. But over time, companies will need to balance innovation with internal support. Removing middle managers may speed up some work, but it can also reduce mentorship, training, and context—things that help teams stay aligned.

AI may need more data and compute. But people still build the tools, ask the right questions, and set the goals. How companies treat those people now will shape how well they compete later.

(Photo by Ron Lach)

See also: Alarming rise in AI-powered scams: Microsoft reveals $4B in thwarted fraud

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Why Microsoft is cutting roles despite strong earnings appeared first on AI News.

View the full article
23. One of the ways in which organisations are using the latest AI algorithms to help them grow and thrive is the adoption of privately-held AI models to shape their business strategies. The differentiation between private and public AI is important in this context – most organisations are rightly wary of allowing public AIs access to sensitive data sets, such as HR information, financial data, and details of operational history.

It stands to reason that if an AI is given specific data on which to base its responses, its output will be more relevant, and therefore more effective in helping decision-makers judge how to strategise. Using private reasoning engines is the logical way for companies to get the best results from AI while keeping their intellectual property safe. Enterprise-specific data and the ability to fine-tune a local AI model give organisations the means to produce bespoke forecasting and operational tuning that are more grounded in the day-to-day reality of a company’s work.

A Deloitte Strategy Insight paper calls private AI a “bespoke compass” and positions the use of internal data as a competitive advantage, while Accenture describes AIs as “poised to provide the most significant economic uplift and change to work since the agricultural and industrial revolutions.”

There is the possibility, however, that like traditional business intelligence, using historical data drawn from several years of operations across the enterprise can entrench decision-making in patterns from the past. McKinsey says companies are in danger of “mirroring their institutional past in algorithmic amber.” The Harvard Business Review picks up on some of the technical complexity, stating that customising a model so that its activities are more relevant to the company is difficult, and perhaps, therefore, not a task to be taken on by any but the most AI-literate at a level of data science and programming.

MIT Sloan strikes a balance between the fervent advocates and the conservative voices for private AI in business strategising. It advises that AI be regarded as a co-pilot, and urges continual questioning and verification of AI output, especially when the stakes are high.

Believe in the revolution

However, decision-makers considering pursuing this course of action (getting on the AI wave, but doing so in a private, safety-conscious way) may wish to consider the motivations of those sources of advice that advocate strongly for AI enablement in this way. Deloitte, for example, builds and manages AI solutions for clients using custom infrastructure such as its factory-as-a-service offerings, while Accenture has practices dedicated to its clients’ AI strategy, such as Accenture Applied Intelligence. It partners with AWS and Azure, building bespoke AI systems for Fortune 500 companies, among others, and Deloitte is partnered with Oracle and Nvidia.

With ‘skin in the game’, phrases such as “the most significant […] change to work since the agricultural and industrial revolutions” and a “bespoke compass” are inspiring, but the vendors’ motivations may not be entirely altruistic.

Advocates for AI in general rightly point to the ability of models to identify trends and statistical undercurrents much more efficiently than humans. Given the mass of data available to the modern enterprise, comprising both internal and externally-available information, having software that can parse data at scale is an incredible advantage. Instead of manually creating analyses of huge repositories of data – which is time-consuming and error-prone – AI can see through the chaff and surface real, actionable insights.
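What that looks like in practice is often mundane and precise. Here is a minimal sketch – the sales table and column names are invented for illustration – of the sort of twelve-month, outlier-aware query discussed later in this piece:

```python
import pandas as pd

# An invented sales table standing in for an enterprise data repository.
sales = pd.DataFrame({
    "date": pd.to_datetime([
        "2023-12-01", "2024-07-15", "2024-11-02",
        "2025-01-20", "2025-03-08", "2025-05-12",
    ]),
    "amount": [13_000, 12_000, 11_500, 18_000, 12_400, 12_100],
})

# Keep only the last twelve months of records.
cutoff = sales["date"].max() - pd.DateOffset(months=12)
recent = sales[sales["date"] > cutoff]

# Set aside (rather than silently drop) outliers that differ from the
# mean by more than 30%, so a human can still consider them.
mean = recent["amount"].mean()
outliers = recent[(recent["amount"] - mean).abs() > 0.30 * mean]
clean = recent.drop(outliers.index)

print("Basis for findings:", clean.to_dict("records"))
print("Outliers for review:", outliers.to_dict("records"))
```

The point of the example isn’t the code itself, but that an AI front-end has to translate a vague human request into exactly this kind of unambiguous filter – which is why the phrasing of the query matters so much.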
Asking the right questions

Additionally, AI models can interpret queries couched in normal language, and make predictions based on empirical information which, in the context of private AIs, is highly relevant to the organisation. Relatively unskilled personnel can query data without having skills in statistical analysis or database query languages, and get answers that would otherwise have involved multiple teams and skill-sets drawn from across the enterprise. That time-saving alone is considerable, letting organisations focus on strategy rather than forming the necessary data points and manually querying the information they’ve managed to gather.

Both McKinsey and Gartner warn, however, of overconfidence and data obsolescence. On the latter, historical data may not be relevant to strategising, especially if records go back several years. Overconfidence is perhaps best termed in the context of AI as operators trusting AI responses without question, not delving independently into responses’ detail, or in some cases, taking as fact the responses to badly-phrased queries. For any software algorithm, human phrases such as “base your findings on our historical data” are open to interpretation, unlike, for example, “base your findings on the last twelve months’ sales data, ignoring outliers that differ from the mean by over 30%, although do state those instances for me to consider.”

Software of experience

Organisations might pursue private AI solutions alongside mature, existing business intelligence platforms. SAP BusinessObjects is nearly 30 years old, yet a youngster compared to SAS Business Intelligence, which has been around since before the internet became mainstream in the 1990s. Even a relative newcomer such as Microsoft Power BI represents at least a decade of development, iteration, customer feedback, and real-world use in business analysis.

It seems sensible, therefore, that private AI deployed on business data should be regarded as an addition to the strategiser’s toolkit, rather than a silver bullet that replaces “traditional” tools. For users of private AI that have the capacity to audit and tweak their model’s inputs and inner algorithms, retaining human control and oversight is important – just as it is with tools like Oracle’s Business Intelligence suite.

There are some scenarios where the intelligent processing of and acting on real-time data (online retail pricing mechanisms, for example) gives AI analysis a competitive edge over the incumbent BI platforms. But AI has yet to develop into a magical Swiss Army Knife for business strategy. Until AI purposed for business data analysis is as developed, iterated on, battle-hardened, and mature as some of the market’s go-to BI platforms, early adopters might temper the enthusiasm of AI and AI service vendors with practical experience and a critical eye. AI is a new tool, and one with a great deal of potential. However, it remains first-generation in its current guises, public and private.

(Image source: “It’s about rules and strategy” by pshutterbug is licensed under CC BY 2.0.)

The post AI in business intelligence: Caveat emptor appeared first on AI News.

View the full article
24. An AI tool aims to wade through mountains of government feedback and understand what the public is trying to say. UK Technology Secretary Peter Kyle said: “No one should be wasting time on something AI can do quicker and better, let alone wasting millions of taxpayer pounds on outsourcing such work to contractors.”

This digital assistant, aptly named ‘Consult’, just aced its first big test with the Scottish Government. The Scottish Government threw Consult in at the deep end, asking it to make sense of public opinion on regulating non-surgical cosmetic procedures such as lip fillers and laser hair removal. Consult came back with findings almost identical to what human officials had pieced together. Now, the plan is to roll this tech out across various government departments.

The current way of doing things is expensive and slow. Millions of pounds often go to outside contractors just to analyse what the public thinks. Consult is part of a bigger push to build a leaner, more responsive UK government—one that can deliver on its ‘Plan for Change’ without breaking the bank or taking an age to do it.

So, how did it fare in Scotland? Consult chewed through responses from over 2,000 people. Using generative AI, it picked out the main themes and concerns bubbling up from the feedback across six key questions. Of course, Consult wasn’t left completely to its own devices. Experts in the Scottish Government double-checked and fine-tuned these initial themes. Then, the AI got back to work to sort individual responses into these categories. Officials ended up with more precious time to consider what people were saying and what it meant for policy.

Because this was Consult’s first live outing, the Scottish Government went through every single response by hand too—just to be sure. Figuring out exactly what someone means in a written comment and then deciding which ‘theme’ it fits under can be a bit subjective. Even humans don’t always agree. When the government compared Consult’s handiwork to human analysis, the AI was right most of the time. Where there were differences, they were so minor they didn’t change the overall picture of what mattered most to people.

Consult is part of a bigger AI toolkit called ‘Humphrey’—a suite of digital helpers designed to free up civil servants from admin and cut down on those contractor bills. It’s all part of a grander vision to use technology to sharpen up public services, aiming to find £45 billion in productivity savings. The goal is a more nimble government that is better at delivering that ‘Plan for Change’ we keep hearing about.

“After demonstrating such promising results, Humphrey will help us cut the costs of governing and make it easier to collect and comprehensively review what experts and the public are telling us on a range of crucial issues,” added Kyle. “The Scottish Government has taken a bold first step. Very soon, I’ll be using Consult, within Humphrey, in my own department and others in Whitehall will be using it too – speeding up our work to deliver the Plan for Change.”

Over in Scotland, Public Health Minister Jenni Minto said: “Using the tool was very beneficial in helping the Scottish Government understand more quickly what people wanted us to hear and our respondents’ range of views.

“Using this tool has allowed the Scottish Government to move more quickly to a focus on the policy questions and dive into the detail of the evidence we’ve been presented with, while remaining confident that we have heard the strong views expressed by respondents.”
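For a feel of how this kind of theme-sorting can work – a minimal sketch only, assuming the Hugging Face transformers library is installed; the themes and responses are invented, and there is no suggestion Consult uses this exact model – zero-shot classification assigns each free-text response to the best-fitting theme:

```python
from transformers import pipeline

# Invented themes, standing in for the kind a first AI pass might propose
# and human experts then refine.
themes = [
    "practitioner qualifications and training",
    "hygiene and premises standards",
    "age restrictions for procedures",
]

# Invented consultation responses.
responses = [
    "Anyone injecting fillers should hold a recognised clinical qualification.",
    "Under-18s should not be able to book these procedures at all.",
]

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

for text in responses:
    result = classifier(text, candidate_labels=themes)
    # Labels come back sorted by score; the top one is the suggested theme,
    # which a human official would then review.
    print(f"{result['labels'][0]} <- {text}")
```

Whatever Consult actually runs under the hood, the human-in-the-loop shape is the same: the model proposes a theme, and officials review it.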
Of course, like many AI deployments in government, it’s still early days, and Consult is officially still in the trial phase. More number-crunching and testing are on the cards to make sure it’s working just as it should before any big decisions about a full rollout are made. But the potential here is huge. The government runs about 500 consultations every year. If Consult lives up to its promise, it could save officials a staggering 75,000 days of analysis annually.

And what did the civil servants who first worked with Consult think? They were reportedly “pleasantly surprised,” finding the AI’s initial analysis a “useful starting point.” Others raved that it “saved [them] a heck of a lot of time” and let them “get to the analysis and draw out what’s needed next” much faster.

Interestingly, they also felt Consult brought a new level of fairness to the table. As one official put it, its use “takes away the bias and makes it more consistent,” preventing individual analysts from, perhaps unconsciously, letting their “own preconceived ideas” colour the findings.

Some consultations receive tens, even hundreds of thousands of responses. Given how well Consult has performed in these early tests, it may not be long before it’s used on these massive consultations.

It’s worth noting that humans aren’t being kicked out of the loop. Consult has been built to keep the experts involved every step of the way. Officials will always review the themes the AI suggests and how it sorts the responses. They’ll have an interactive dashboard to play with, letting them filter and search for specific insights. It’s about AI doing the heavy lifting, so the humans can do the smart thinking.

Experts urge caution about the use of AI in government

This move towards AI in government isn’t happening in a vacuum, and experts are watching closely. Stuart Harvey, CEO of Datactics, commented: “Using AI to speed up public consultations is a great example of how technology can improve efficiency and save money. But AI is only as good as the data behind it. For tools like this to work well and fairly, government departments need to make sure their data is accurate, up-to-date, and properly managed.

“People need to trust the decisions made with AI. That means making sure the process is clear, well-governed, and ethical. If the data is messy or poorly handled, it can lead to biased or unreliable outcomes.

“As the government expands its use of AI in public services, it’s vital to invest in strong data practices. That includes building clear and consistent data systems, making data accessible for review, and keeping humans involved in key decisions—especially when it comes to hearing from the public.”

This sentiment is echoed by academics. Professor Michael Rovatsos from the University of Edinburgh, for instance, acknowledges the benefits but also wisely cautions about the risks of AI biases and even the potential for these tools to be manipulated. He’s calling for tough safeguards and ongoing investment to make sure any AI tool used by the government remains reliable and fair.

Stuart Munton, Chief for Group Operations at AND Digital, added: “The government’s use of AI to speed up public consultations is a welcome step toward smarter, more efficient public services. However, as AI adoption grows, we must ensure that people – not just technology – are at the heart of this transformation.”
“Tools like this will only reach their full potential if we invest in equipping public sector teams with the right skills and training. Empowering diverse talent to work with AI will not only improve how these tools perform but also ensure that innovation is inclusive to real-world needs.”

If done right, with these expert caveats in mind, AI tools like Consult have the potential to improve how governments listen, learn, and make policy based on public opinion. The UK government isn’t hanging about; the plan is to get Consult working across various departments by the end of 2025.

(Photo by Scott Rodgerson)

See also: US slams brakes on AI Diffusion Rule, hardens chip export curbs

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI tool speeds up government feedback, experts urge caution appeared first on AI News.

View the full article
25. Alibaba has unveiled Wan2.1-VACE, an open-source AI model designed to shake up how we create and edit videos. VACE isn’t appearing out of thin air; it’s part of Alibaba’s broader Wan2.1 family of video AI models. And they’re making a rather bold claim for it, stating it’s the “first open-source model in the industry to provide a unified solution for various video generation and editing tasks.” If Alibaba can succeed in shifting users away from having to juggle multiple, separate tools towards one streamlined hub—it could be a true game-changer.

So, what can this thing actually do? Well, for starters, it can whip up videos using all sorts of prompts, including text commands, still pictures, and even snippets of other video clips. But it’s not just about making videos from scratch. The editing toolkit supports referencing images or specific frames to guide the AI, advanced video “repainting” (more on that in a sec), tweaking just selected bits of your existing video, and even stretching out the video. Alibaba reckons these features “enable the flexible combination of various tasks to enhance creativity.”

Imagine you want to create a video with specific characters interacting, maybe based on some photos you have. VACE claims to be able to do that. Got a still image you wish was dynamic? Alibaba’s open-source AI model can add natural-looking movement to bring it to life.

For those who love to fine-tune, there are those advanced “video repainting” functions I mentioned earlier. This includes things like transferring poses from one subject to another, having precise control over motion, adjusting depth perception, and even changing the colours.

One feature that caught my eye is that VACE “supports adding, modification or deletion to selective specific areas of a video without affecting the surroundings.” That’s a massive plus for detailed edits – no more accidentally messing up the background when you’re just trying to tweak one small element. Plus, it can make your video canvas bigger and even fill in the new space with relevant content to make everything look richer and more expansive.

You could take a flat photograph, turn it into a video, and tell the objects in it exactly how to move by drawing out a path. Need to swap out a character or an object with something else you provide as a reference? No problem. Animate those referenced characters? Done. Control their pose precisely? You got it. Alibaba even gives the example of its open-source AI model taking a tall, skinny vertical image and cleverly expanding it sideways into a widescreen video, automagically adding new bits and pieces by referencing other images or prompts. That’s pretty neat.

Of course, VACE isn’t just magic. There’s some clever tech involved, designed to handle the often-messy reality of video editing. A key piece is something Alibaba calls the Video Condition Unit (VCU), which “supports unified processing of multimodal inputs such as text, images, video, and masks.” Then there’s what they term a “Context Adapter structure.” This clever bit of engineering “injects various task concepts using formalised representations of temporal and spatial dimensions.” Essentially, think of it as giving the AI a really good understanding of time and space within the video.

With all this clever tech, Alibaba reckons VACE will be a hit in quite a few areas.
Think quick social media clips, eye-catching ads and marketing content, heavy-duty post-production special effects for film and TV, and even generating custom educational and training videos.

Alibaba makes Wan2.1-VACE open-source to spread the AI love

Building AI models this powerful usually costs a fortune and needs massive computing power and tons of data. So, Alibaba making Wan2.1-VACE open source? That’s a big deal. “Open access helps lower the barrier for more businesses to leverage AI, enabling them to create high-quality visual content tailored to their needs, quickly and cost-effectively,” Alibaba explains.

Basically, Alibaba is hoping to let more folks – especially smaller businesses and individual creators – get their hands on top-tier AI without breaking the bank. This democratisation of powerful tools is always a welcome sight.

And they’re not just dropping one version. There’s a hefty 14-billion parameter model for those with serious horsepower, and a more nimble 1.3-billion parameter one for lighter setups. You can grab them for free right now on Hugging Face and GitHub, or via Alibaba Cloud’s own open-source community, ModelScope.
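If you fancy kicking the tyres, fetching the weights can be a one-liner – a minimal sketch assuming the huggingface_hub client is installed; the repo id follows the naming of Alibaba’s Wan-AI organisation on Hugging Face, but check the model page (and licence) before relying on it:

```python
from huggingface_hub import snapshot_download

# Fetch the lighter 1.3B-parameter VACE checkpoint for local experiments.
# The repo id is an assumption based on Alibaba's Wan-AI organisation on
# Hugging Face; verify it on the model page before downloading.
local_dir = snapshot_download(repo_id="Wan-AI/Wan2.1-VACE-1.3B")
print("Model files downloaded to:", local_dir)
```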
(Image source: www.alibabagroup.com)

See also: US slams brakes on AI Diffusion Rule, hardens chip export curbs

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Alibaba Wan2.1-VACE: Open-source AI video tool for all appeared first on AI News.

View the full article