DeepSeek has unveiled its first-generation DeepSeek-R1 and DeepSeek-R1-Zero models that are designed to tackle complex reasoning tasks.
DeepSeek-R1-Zero is trained solely through large-scale reinforcement learning (RL) without relying on supervised fine-tuning (SFT) as a preliminary step. According to DeepSeek, this approach has led to the natural emergence of “numerous powerful and interesting reasoning behaviours,” including self-verification, reflection, and the generation of extensive chains of thought (CoT).
“Notably, [DeepSeek-R1-Zero] is the first open research to validate that reasoning capabilities of LLMs can be incentivised purely through RL, without the need for SFT,” DeepSeek researchers explained. This milestone not only underscores the model’s innovative foundations but also paves the way for RL-focused advancements in reasoning AI.
However, DeepSeek-R1-Zero’s capabilities come with certain limitations. Key challenges include “endless repetition, poor readability, and language mixing,” which could pose significant hurdles in real-world applications. To address these shortcomings, DeepSeek developed its flagship model: DeepSeek-R1.
Introducing DeepSeek-R1
DeepSeek-R1 builds upon its predecessor by incorporating cold-start data prior to RL training. This additional preliminary step enhances the model’s reasoning capabilities and resolves many of the limitations observed in DeepSeek-R1-Zero.
Notably, DeepSeek-R1 achieves performance comparable to OpenAI’s much-lauded o1 system across mathematics, coding, and general reasoning tasks, cementing its place as a leading competitor.
DeepSeek has chosen to open-source both DeepSeek-R1-Zero and DeepSeek-R1 along with six smaller distilled models. Among these, DeepSeek-R1-Distill-Qwen-32B has demonstrated exceptional results, even outperforming OpenAI’s o1-mini across multiple benchmarks. Highlights include:
MATH-500 (Pass@1): DeepSeek-R1 achieved 97.3%, eclipsing OpenAI’s o1 (96.4%) and other key competitors.
LiveCodeBench (Pass@1-COT): The distilled version DeepSeek-R1-Distill-Qwen-32B scored 57.2%, a standout performance among smaller models.
AIME 2024 (Pass@1): DeepSeek-R1 achieved 79.8%, setting an impressive standard in mathematical problem-solving.
“DeepSeek-R1 is here! Performance on par with OpenAI-o1. Fully open-source model & technical report. MIT licensed: distill & commercialise freely!” — DeepSeek (@deepseek_ai), January 20, 2025
A pipeline to benefit the wider industry
DeepSeek has shared insights into its rigorous pipeline for reasoning model development, which integrates a combination of supervised fine-tuning and reinforcement learning.
According to the company, the process involves two SFT stages to establish the foundational reasoning and non-reasoning abilities, as well as two RL stages tailored for discovering advanced reasoning patterns and aligning these capabilities with human preferences.
“We believe the pipeline will benefit the industry by creating better models,” DeepSeek remarked, alluding to the potential of their methodology to inspire future advancements across the AI sector.
One standout achievement of their RL-focused approach is the ability of DeepSeek-R1-Zero to execute intricate reasoning patterns without prior human instruction—a first for the open-source AI research community.
Importance of distillation
DeepSeek researchers also highlighted the importance of distillation—the process of transferring reasoning abilities from larger models to smaller, more efficient ones, a strategy that has unlocked performance gains even for smaller configurations.
Smaller distilled iterations of DeepSeek-R1 – such as the 1.5B, 7B, and 14B versions – were able to hold their own in niche applications. The distilled models can outperform results achieved via RL training on models of comparable sizes.
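To illustrate the general idea, here is a minimal sketch of classic logit-based distillation in PyTorch. This is a generic illustration only: DeepSeek reportedly distilled by fine-tuning smaller models on outputs generated by DeepSeek-R1, whereas the code below shows the textbook formulation of matching a teacher’s softened output distribution.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soften both distributions and nudge the student towards the teacher."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 to keep gradient magnitudes comparable.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature**2

# Toy example: a batch of 4 positions over a 10-token vocabulary.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```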
“Bonus: Open-Source Distilled Models! Distilled from DeepSeek-R1, 6 small models fully open-sourced. 32B & 70B models on par with OpenAI-o1-mini. Empowering the open-source community. Pushing the boundaries of open AI!” — DeepSeek (@deepseek_ai), January 20, 2025
For researchers, these distilled models are available in configurations spanning from 1.5 billion to 70 billion parameters, supporting Qwen2.5 and Llama3 architectures. This flexibility empowers versatile usage across a wide range of tasks, from coding to natural language understanding.
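For a sense of how a researcher might try one of these checkpoints, here is a hedged sketch using the Hugging Face transformers library. The model ID below is an assumption based on DeepSeek’s published naming convention and should be verified on the model hub before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed hub ID; verify first
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Prove that the sum of two even numbers is even."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```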
DeepSeek has adopted the MIT License for its repository and weights, extending permissions for commercial use and downstream modifications. Derivative works, such as using DeepSeek-R1 to train other large language models (LLMs), are permitted. However, users of specific distilled models should ensure compliance with the licences of the original base models, such as Apache 2.0 and Llama3 licences.
(Photo by Prateek Katyal)
See also: Microsoft advances materials discovery with MatterGen
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post DeepSeek-R1 reasoning models rival OpenAI in performance appeared first on AI News.
The discovery of new materials is key to solving some of humanity’s biggest challenges. However, as highlighted by Microsoft, traditional methods of discovering new materials can feel like “finding a needle in a haystack.”
Historically, finding new materials relied on laborious and costly trial-and-error experiments. More recently, computational screening of vast materials databases helped to speed up the process, but it remained a time-intensive process.
Now, a powerful new generative AI tool from Microsoft could accelerate this process significantly. Dubbed MatterGen, the tool steps away from traditional screening methods and instead directly engineers novel materials based on design requirements, offering a potentially game-changing approach to materials discovery.
Published in a paper in Nature, Microsoft describes MatterGen as a diffusion model that operates within the 3D geometry of materials. Where an image diffusion model might generate images from text prompts by tweaking pixel colours, MatterGen generates material structures by altering elements, positions, and periodic lattices in randomised structures. This bespoke architecture is designed specifically to handle the unique demands of materials science, such as periodicity and 3D arrangements.
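To make that analogy concrete, the shape of such a loop can be sketched as follows. This is a conceptual illustration only, not MatterGen’s actual API: the denoiser below is a random stand-in, where the real model is learned from stable structures and conditioned on target properties.

```python
import torch

num_atoms, num_elements, steps = 8, 100, 1000

# Begin from random noise in all three components, as the article describes.
atom_types = torch.randint(0, num_elements, (num_atoms,))   # element at each site
positions = torch.rand(num_atoms, 3)                        # fractional coordinates
lattice = torch.randn(3, 3)                                 # periodic lattice vectors

def denoise_step(atom_types, positions, lattice, t):
    """Stand-in for the learned denoiser. A real model would be trained on
    stable materials and conditioned on design targets (e.g. a high bulk
    modulus)."""
    positions = (positions + 0.001 * torch.randn_like(positions)) % 1.0
    lattice = lattice + 0.001 * torch.randn_like(lattice)
    if t % 100 == 0:  # occasionally revise which element sits at a site
        idx = torch.randint(0, atom_types.shape[0], (1,))
        atom_types[idx] = torch.randint(0, num_elements, (1,))
    return atom_types, positions, lattice

for t in reversed(range(steps)):
    atom_types, positions, lattice = denoise_step(atom_types, positions, lattice, t)

print(atom_types, positions.shape, lattice.shape)
```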
“MatterGen enables a new paradigm of generative AI-assisted materials design that allows for efficient exploration of materials, going beyond the limited set of known ones,” explains Microsoft.
A leap beyond screening
Traditional computational methods involve screening enormous databases of potential materials to identify candidates with desired properties. Yet, even these methods are limited in their ability to explore the universe of unknown materials and require researchers to sift through millions of options before finding promising candidates.
In contrast, MatterGen starts from scratch—generating materials based on specific prompts about chemistry, mechanical attributes, electronic properties, magnetic behaviour, or combinations of these constraints. The model was trained using over 608,000 stable materials compiled from the Materials Project and Alexandria databases.
In Microsoft’s published comparison, MatterGen significantly outperformed traditional screening methods in generating novel materials with specific properties, in this case a bulk modulus greater than 400 GPa, meaning they are hard to compress.
While screening exhibited diminishing returns over time as its pool of known candidates became exhausted, MatterGen continued generating increasingly novel results.
One common challenge encountered during materials synthesis is compositional disorder—the phenomenon where atoms randomly swap positions within a crystal lattice. Traditional algorithms often fail to distinguish between similar structures when deciding what counts as a “truly novel” material.
To address this, Microsoft devised a new structure-matching algorithm that incorporates compositional disorder into its evaluations. The tool identifies whether two structures are merely ordered approximations of the same underlying disordered structure, enabling more robust definitions of novelty.
Proving MatterGen works for materials discovery
To prove MatterGen’s potential, Microsoft collaborated with researchers at Shenzhen Institutes of Advanced Technology (SIAT) – part of the Chinese Academy of Sciences – to experimentally synthesise a novel material designed by the AI.
The material, TaCr₂O₆, was generated by MatterGen to meet a bulk modulus target of 200 GPa. While the experimental result fell slightly short of the target, measuring a modulus of 169 GPa, the relative error was just 20%—a small discrepancy from an experimental perspective.
Interestingly, the final material exhibited compositional disorder between Ta and Cr atoms, but its structure aligned closely with the model’s prediction. If this level of predictive accuracy can be translated to other domains, MatterGen could have a profound impact on material designs for batteries, fuel cells, magnets, and more.
“Today in @Nature: Our MatterGen model represents a paradigm shift in materials design, applying generative AI to create new compounds with specific properties with unprecedented precision.” — Satya Nadella (@satyanadella), January 16, 2025
Microsoft positions MatterGen as a complementary tool to its previous AI model, MatterSim, which accelerates simulations of material properties. Together, the tools could serve as a technological “flywheel”, enhancing both the exploration of new materials and the simulation of their properties in iterative loops.
This approach aligns with what Microsoft refers to as the “fifth paradigm of scientific discovery,” in which AI moves beyond pattern recognition to actively guide experiments and simulations.
Microsoft has released MatterGen’s source code under the MIT licence. Alongside the code, the team has made the model’s training and fine-tuning datasets available to support further research and encourage broader adoption of this technology.
Reflecting on generative AI’s broader scientific potential, Microsoft draws parallels to drug discovery, where such tools have already started transforming how researchers design and develop medicines. Similarly, MatterGen could reshape the way we approach materials design, particularly for critical domains such as renewable energy, electronics, and aerospace engineering.
(Image credit: Microsoft)
See also: L’Oréal: Making cosmetics sustainable with generative AI
The post Microsoft advances materials discovery with MatterGen appeared first on AI News.
The US-China AI chip race has entered a new phase as Chinese chip designer Cambricon Technologies reports its first-ever quarterly profit. The milestone emerges against a backdrop of escalating US export controls that have increasingly restricted Chinese companies’ access to advanced semiconductor technology, particularly Nvidia’s sophisticated AI processors.
Cambricon’s breakthrough into profitability signals a significant shift in the US-China AI chip race, transforming from a 2016 startup into China’s most valuable artificial intelligence company, now valued at approximately 300 billion yuan ($41 billion).
While this represents only a fraction of Nvidia’s $3 trillion market capitalisation, it marks China’s growing capability to develop sophisticated AI chips domestically.
The company’s financial turnaround is particularly noteworthy in the context of technological competition between the world’s two largest economies. After years of losses, Cambricon reported its first quarterly profit in the final quarter of 2024, with net profits ranging from 240 million yuan to 328 million yuan, despite posting a 724 million yuan loss in the first nine months.
The market’s response to this shifting dynamic in the US-China AI chip race has been remarkable. Cambricon’s shares on the Shanghai Stock Exchange’s Star Market have surged more than 470% over the past year, climbing from 120.80 yuan to 695.96 yuan.
The company projects a 70% revenue increase to 1.2 billion yuan in 2024, driven by China’s aggressive buildup of computing infrastructure to support its AI ambitions.
At the technical level, Cambricon has positioned itself as China’s answer to US chip restrictions with its 7-nanometre AI chips. The company’s flagship Cambricon-1A processor has gained significant traction in the domestic market, particularly in products from major technology companies like Huawei Technologies.
The stakes in the US-China AI chip race continue to rise, with analysts at Changjiang Securities projecting that China’s AI semiconductor market will reach 178 billion yuan by 2025. Beijing’s push for semiconductor self-sufficiency and increasing investments from domestic technology companies in AI infrastructure are fuelling this growth.
Recent US regulations announced in January 2025 have intensified the race, restricting Chinese access to advanced AI technology and limiting it to American companies and their allies. In response, major Chinese technology companies are investing heavily in domestic computing infrastructure.
ByteDance, TikTok’s parent company, has committed 4.5 billion yuan to a new computing centre in Datong City, Shanxi province. This highlights the growing market opportunity for domestic chip manufacturers.
While Cambricon’s progress represents a significant advancement in the US-China AI chip race, challenges remain. The company must continue to narrow the technological gap with international competitors while maintaining its growth trajectory.
However, supportive government policies and growing domestic demand provide a favourable environment for continued development. Cambricon’s inclusion in the SSE 50 Index, which tracks the Shanghai Stock Exchange’s most valuable companies, underscores its strategic importance to China’s technology sector.
As global tensions persist and access to foreign technology becomes more restricted, developing domestic AI chip capabilities has become increasingly important for China’s technological advancement and economic security.
The post US-China AI chip race: Cambricon’s first profit lands appeared first on AI News.
L’Oréal will leverage IBM’s generative AI (GenAI) technology to create innovative and sustainable cosmetic products.
The partnership will involve developing a bespoke AI foundation model to supercharge L’Oréal’s Research & Innovation (R&I) teams in creating eco-friendly formulations using renewable raw materials. In turn, this initiative is designed to reduce both energy and material waste.
Described as the cosmetics industry’s first formulation-focused AI model, this effort is a glimpse into a future where cutting-edge technology drives environmentally-conscious solutions.
Stéphane Ortiz, Head of Innovation Métiers & Product Development at L’Oréal R&I, said: “As part of our Digital Transformation Program, this partnership will extend the speed and scale of our innovation and reformulation pipeline, with products always reaching higher standards of inclusivity, sustainability, and personalisation.”
AI and beauty: A perfect match
By marrying L’Oréal’s expertise in cosmetic science with IBM’s AI technologies, the companies aim to unlock new pathways in both cosmetic innovation and sustainability. The role of AI in tailoring and personalising products is well-established, but diving deeper into its role in crafting renewable and sustainably-sourced formulations underscores a broader ecological mission.
Matthieu Cassier, Chief Transformation & Digital Officer at L’Oréal R&I, commented: “Building on years of unique beauty science expertise and data structuring, this major alliance with IBM is opening a new exciting era for our innovation and development process.”
Foundation models serve as the technological backbone for this collaboration. These AI systems are trained on vast datasets, enabling them to perform various tasks and transfer learnings across different applications.
Although these models are perhaps most known for revolutionising natural language processing (NLP), IBM has advanced their use cases beyond text, including applications in chemistry, geospatial data, and time series analysis.
In this context, the custom AI model being developed for L’Oréal will process a massive database of cosmetic formulas and raw material components. From creating new products to reformulating existing ones and scaling up for production, the model will accelerate critical tasks for the company’s R&D teams.
“This collaboration is a truly impactful application of generative AI, leveraging the power of technology and expertise for the good of the planet,” said Alessandro Curioni, IBM Fellow and VP for Europe and Africa, as well as Director at IBM Research Zurich.
“At IBM, we believe in the power of purpose-built, customised AI to help transform businesses. Using IBM’s latest AI technology, L’Oréal will be able to derive meaningful insights from their rich formula and product data to create a tailored AI model to help achieve their operational goals and continue creating high-quality and sustainable products.”
One of the more fascinating dimensions of this collaboration is its potential to deepen understanding of renewable ingredient behaviour within cosmetic formulations.
Guilhaume Leroy-Méline, IBM Distinguished Engineer and CTO of IBM Consulting France, said: “This alliance between highly specialised expertise in artificial intelligence and cosmetics seeks to revolutionise cosmetic formulation. It embodies the spirit of AI-augmented research, emphasising sustainability and diversity.”
For IBM, this partnership reflects its broader strategy to extend AI applications into industries requiring bespoke solutions. As Curioni pointed out, custom AI has the potential to reshape businesses on multiple levels.
By co-developing this bespoke formulation model, IBM and L’Oréal are setting the stage for a beauty industry that prizes both sustainability and cutting-edge innovation. If successful, the partnership could very well serve as a blueprint for other industries looking to bring AI’s transformative potential to bear on sustainability efforts.
(Photo by Kelly Sikkema)
See also: Cisco: Securing enterprises in the AI era
The post L’Oréal: Making cosmetics sustainable with generative AI appeared first on AI News.
As AI becomes increasingly integral to business operations, new safety concerns and security threats emerge at an unprecedented pace—outstripping the capabilities of traditional cybersecurity solutions.
The stakes are high with potentially significant repercussions. According to Cisco’s 2024 AI Readiness Index, only 29% of surveyed organisations feel fully equipped to detect and prevent unauthorised tampering with AI technologies.
Continuous model validation
DJ Sampath, Head of AI Software & Platform at Cisco, said: “When we talk about model validation, it is not just a one time thing, right? You’re doing the model validation on a continuous basis.
“So as you see changes happen to the model – if you’re doing any type of finetuning, or you discover new attacks that are starting to show up that you need the models to learn from – we’re constantly learning all of that information and revalidating the model to see how these models are behaving under these new attacks that we’ve discovered.
“The other very important point is that we have a really advanced threat research team which is constantly looking at these AI attacks and understanding how these attacks can further be enhanced. In fact, we’re contributing to the work groups inside of standards organisations like MITRE, OWASP, and NIST.”
Beyond preventing harmful outputs, Cisco addresses the vulnerabilities of AI models to malicious external influences that can change their behaviour. These risks include prompt injection attacks, jailbreaking, and training data poisoning—each demanding stringent preventive measures.
Evolution brings new complexities
Frank Dickson, Group VP for Security & Trust at IDC, gave his take on the evolution of cybersecurity over time and what advancements in AI mean for the industry.
“The first macro trend was that we moved from on-premise to the cloud and that introduced this whole host of new problem statements that we had to address. And then as applications move from monolithic to microservices, we saw this whole host of new problem sets.
“AI and the addition of LLMs… same thing, whole host of new problem sets.”
The complexities of AI security are heightened as applications become multi-model. Vulnerabilities can arise at various levels – from models to apps – implicating different stakeholders such as developers, end-users, and vendors.
“Once an application moved from on-premise to the cloud, it kind of stayed there. Yes, we developed applications across multiple clouds, but once you put an application in AWS or Azure or GCP, you didn’t jump it across those various cloud environments monthly, quarterly, weekly, right?
“Once you move from monolithic application development to microservices, you stay there. Once you put an application in Kubernetes, you don’t jump back into something else.
“As you look to secure an LLM, the important thing to note is the model changes. And when we talk about model change, it’s not like it’s a revision … this week maybe [developers are] using Anthropic, next week they may be using Gemini.
“They’re completely different and the threat vectors of each model are completely different. They all have their strengths and they all have their dramatic weaknesses.”
Unlike conventional safety measures integrated into individual models, Cisco delivers controls for a multi-model environment through its newly-announced AI Defense. The solution is self-optimising, using Cisco’s proprietary machine learning algorithms to identify evolving AI safety and security concerns—informed by threat intelligence from Cisco Talos.
Adjusting to the new normal
Jeetu Patel, Executive VP and Chief Product Officer at Cisco, shared his view that major advancements in a short period of time always seem revolutionary but quickly feel normal.
“Waymo is, you know, self-driving cars from Google. You get in, and there’s no one sitting in the car, and it takes you from point A to point B. It feels mind-bendingly amazing, like we are living in the future. The second time, you kind of get used to it. The third time, you start complaining about the seats.
“Even how quickly we’ve gotten used to AI and ChatGPT over the course of the past couple years, I think what will happen is any major advancement will feel exceptionally progressive for a short period of time. Then there’s a normalisation that happens where everyone starts getting used to it.”
Patel believes that normalisation will happen with AGI as well. However, he notes that “you cannot underestimate the progress that these models are starting to make” and, ultimately, the kind of use cases they are going to unlock.
“No-one had thought that we would have a smartphone that’s gonna have more compute capacity than the mainframe computer at your fingertips and be able to do thousands of things on it at any point in time and now it’s just another way of life. My 14-year-old daughter doesn’t even think about it.
“We ought to make sure that we as companies get adjusted to that very quickly.”
See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence
The post Cisco: Securing enterprises in the AI era appeared first on AI News.
In the final days of his term, President Joe Biden is taking steps to address the growing energy demands of advanced AI data centres.
On Tuesday, he signed an executive order aimed at providing federal support to fast-track the development of infrastructure needed to power these AI data centre facilities, the White House has announced.
The order sets out a plan to lease federal land managed by the Defence and Energy departments for gigawatt-scale AI data centres and clean power projects. The goal is to meet the immense energy requirements of these facilities in a shorter timeframe.
“The order will speed up how we build the next generation of AI infrastructure right here in America,” Biden said. He added that the initiative is designed to boost economic competitiveness, strengthen national security, promote clean energy, and ensure AI safety.
One notable aspect of the order is its emphasis on American-made technology. Companies that use federal sites for AI data centres will be required to purchase a fair share of US-made semiconductors. The specifics will be worked out for each project, in line with the Biden administration’s overall aim to invest over $30 billion in domestic chip production.
The administration is also working to streamline processes for building AI infrastructure. Federal agencies have been directed to expedite the granting of permits, improve connections to the electric grid, and advance transmission development around federal sites. The measures are intended to avoid delays that could hinder the rapid deployment of essential infrastructure.
Tarun Chhabra, White House technology adviser, emphasised the urgency of the endeavour. “It’s absolutely essential to make sure the AI industry can build the infrastructure it needs for training and deploying powerful AI models right here in the US,” he told reporters.
Chhabra noted that the demand for computing power and electricity is soaring, especially as AI developers work on advanced “frontier models,” the most powerful and resource-intensive AI systems available. He estimated that by 2028, some AI data centres could require as much as five gigawatts of capacity – enough energy to power millions of homes.
The executive order also seeks to align AI development with environmental goals. Companies developing projects are urged to use sustainable energy and have minimal impact on electricity prices. The requirement reflects growing concerns about the environmental impact of AI data centres, which consume enormous amounts of energy, plus water for cooling.
In a related move, the Commerce Department has tightened restrictions on AI chip and technology exports. The goal is to keep advanced computing capabilities in the US and allied nations while limiting access for competing countries. This complements the executive order’s focus on safeguarding national security by keeping cutting-edge AI models secure and based on US soil.
Biden also gave a broader picture of AI with regard to national security. Chhabra pointed out that AI systems being developed today already have capabilities that might be used for malicious purposes, like weapon development or cyberattacks. By securing domestic data centres, the US reduces the likelihood of adversaries gaining access to these powerful systems.
The tech industry has reacted positively to the order. Major corporations like Microsoft, BlackRock, and SoftBank have announced significant investments in US-based AI infrastructure, aligning with the administration’s ambition for domestic technological leadership.
By addressing energy demands, environmental considerations, and national security risks, Biden’s executive order lays the groundwork for a robust, secure, and sustainable AI infrastructure. It reflects the administration’s commitment to ensuring that the US remains a global leader in AI development and deployment.
(Photo by Unsplash)
See also: US-China tech war escalates with new AI chips export controls
The post Biden’s executive order targets energy needs for AI data centres appeared first on AI News.
The race for AI video training has taken an unexpected turn. Major tech companies are now paying content creators thousands of dollars for their unused footage, marking a significant shift in how artificial intelligence companies acquire training data.
In a revealing report from Bloomberg, tech giants including Google, OpenAI, and Moonvalley are actively seeking exclusive, unpublished video content from YouTubers and digital content creators to train AI algorithms. The move comes as companies compete to develop increasingly sophisticated AI video generators.
The economics of the new market are fascinating. According to Bloomberg‘s findings, AI companies are willing to pay between $1 and $4 per minute for video footage, with rates varying based on quality and uniqueness. Premium content, such as 4K video footage, drone captures, and 3D animations, commands higher prices, while standard unused content from platforms like YouTube, Instagram, or TikTok typically sells for $1-2 per minute.
Dan Levitt, senior vice president of creators at Wasserman, which represents prominent social media figures like YouTuber MatPat and fashion influencer Taylen Biggs, describes it as “an arms race” in which companies desperately need more footage.
However, he adds a note of caution: this lucrative opportunity might be temporary, so creators should act quickly to capitalise on current demand. The development of the market has given rise to specialised intermediaries. Companies like Troveo AI and Calliope Networks have emerged as third-party licensing facilitators, managing rights for thousands of hours of video footage owned by creators.
These companies handle negotiations with content creators and bundle the content for AI companies, streamlining the process for both parties. Marty Pesis, co-founder and chief executive officer of Troveo, reveals that his company has already paid over $5 million to creators, highlighting the significant scale of the market.
“All the companies building video models we’re either working with or are in our pipeline right now,” Pesis notes, underlining the widespread demand for training content. The arrangement offers a win-win situation for both parties.
For content creators, it presents an opportunity to monetise footage that would otherwise remain unused. Many creators accumulate hundreds of hours of footage annually while producing content for various platforms, but only a fraction of their material makes it into a final, published video.
The deals come with safeguards. Andrew Graham, head of digital corporate advisory and partnerships for Creative Artists Agency (CAA), explains that most agreements include specific terms preventing AI companies from creating digital replicas of content creators’ work or mimicking exact scenes from their channels.
These protections ensure that creators’ brands and reputations remain intact while participating in AI video training. The development comes against a backdrop of controversy surrounding AI companies’ use of online content.
In 2024, several lawsuits were filed against major AI companies by news publishers, actors, and content creators, alleging unauthorised use of their intellectual property for AI training, be it for video, audio, text, or visual art. The new approach of directly licensing content represents a more structured and legal framework for acquiring training data.
The trend also reflects a broader shift in the relationship between content creators and AI companies. Rather than having their public content scraped without compensation, creators now have the opportunity to participate actively in and benefit from AI development.
As Levitt puts it, “This is a way to actually participate in that, to do this in a much more legal, structured manner, and now you at least have some upside.” For the AI industry, the development marks a step toward more ethical and transparent data collection practices.
Companies like Moonvalley openly acknowledge their reliance on licensed content. The company states that the vast majority of its training data comes directly from content creators and filmmakers who choose to participate in licensing agreements.
As AI video technology continues to evolve, this new market for unused footage may reshape how content creators approach their work, potentially influencing how they capture and store unused footage for future licensing opportunities.
However, as Levitt suggests, creators interested in capitalising on this opportunity should act soon, as the window for such lucrative licensing deals may not remain open indefinitely.
(Photo by Unsplash/Steven Van)
See also: OpenAI: Copyrighted data ‘impossible’ to avoid for AI training
The post AI giants pay thousands for creators’ unused footage to train models appeared first on AI News.
We all rely on Microsoft Office for productivity, in an office setting or at home. And while Microsoft does a great job when it comes to keeping Microsoft Office stable, there are situations when Office apps are not as fast as we might want. If Microsoft Office is running slowly for you, you’re probably not alone. Fortunately, there are certain optimisations and improvements you can make to speed up Office for Mac. Here are some of the most common methods to inject new life into your Office install:
Reasons why Microsoft Office slows down
Microsoft Office can end up running slowly because of one, or even all, of the following:
Office files get corrupted and the application runs slowly as a result.
A buildup of temporary files can cause a slowdown.
Microsoft Office has not been updated. Updates can speed up the application.
Documents that have DOC or DOCM extensions might be storing macros, which can lead to delays when opening documents.
Lack of hardware resources on the device.
Uninstall and re-install Microsoft Office
One of the best methods to speed up Microsoft Office is to reinstall it, a process which will overwrite corrupted elements and/or replace any missing files. Fully deleting the app first is important to create a fresh environment for the reinstallation.
Go to the Applications folder and move the Microsoft Office folder to the Trash.
Press Shift+Command+G to open “~/Library/Containers”.
Trash the following: com.microsoft.onenote.mac, com.microsoft.Word, com.microsoft.RMS-XPCService, com.microsoft.Powerpoint, com.microsoft.Outlook, com.microsoft.Excel and any other item that contains ‘com.microsoft’ in its name.
Open “~/Library/Group Containers” in the same way, then drag UBF8T346G9.OfficeOsfWebHost, ubf8t346g9.office and ubf8t346g9.ms to the Trash (a scripted version of this cleanup is sketched below).
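For confident users, the same cleanup can be scripted. Here is a minimal dry-run sketch in Python, assuming the default macOS install locations; it only prints what it would remove until you uncomment the deletion line, so double-check the matches before deleting anything.

```python
from pathlib import Path
import shutil

home = Path.home()
targets = [home / "Library/Containers", home / "Library/Group Containers"]
patterns = ["*com.microsoft*", "*UBF8T346G9*", "*ubf8t346g9*"]

for folder in targets:
    for pattern in patterns:
        for item in folder.glob(pattern):
            print(f"Would remove: {item}")
            # shutil.rmtree(item, ignore_errors=True)  # uncomment to actually delete
```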
Use a Mac cleanup tool
An alternative is to use a cleanup tool. Cleanup tools identify any remaining files or temporary files from apps and remove them. The advantage for less technically-skilled users is that the deletion of unwanted or unused files happens with just the press of a button.
Cleanup tools can also uninstall apps. As we saw earlier, uninstalling and then re-installing Office can be tricky to perform manually. With a cleanup tool, app removal is streamlined and automatic. The cleanup app removes files for you, saving time and ensuring all elements of Office are fully removed from your Mac.
Update your Microsoft Office installation
Microsoft releases plenty of updates for Microsoft Office, and, in general, it’s a good idea to keep apps updated. If you’re still using an older version of Office, it’s possibly one of the reasons for poor performance. Users are typically notified by the Microsoft Office Update tool whenever a new version is available, plus, there’s always the option of going to Microsoft’s website and downloading available updates manually.
Manually delete temporary files
Using a cleanup tool is the best option to find and delete temporary files. However, if you’re a confident Mac user, empty the contents of ‘~/Library/Caches’ into the Trash. Again, use Shift+Command+G to locate the folder.
Reinstall an older version
If after reinstalling Office and running all available updates, the problem persists, it may be worth using an older version of Office. Start with a version that’s only a month or two older than the original, and then progress to even older versions, testing the performance of the application as you go.
The post Office for Mac running slowly? Here’s how to speed it up appeared first on AI News.
The Biden administration’s final major policy move landed this week with a significant impact on global AI, as it unveiled the most comprehensive AI chips export controls to date. This eleventh-hour decision, announced just days before the administration change, divides the world into AI computing haves and have-nots, with China squarely in the crosshairs of the most stringent restrictions imposed on artificial intelligence technology.
“Artificial intelligence is quickly becoming central to security and economic strength,” the White House fact sheet declares, framing the controls as a decisive action “to ensure that US technology undergirds global AI use and that adversaries cannot easily abuse advanced AI.”
The new AI chips export controls split the global technology landscape into three distinct tiers, fundamentally reshaping how nations can access and develop AI capabilities. Access to advanced AI processors remains unrestricted for 18 key allies, so-called tier-one nations, including Japan, Britain, and the Netherlands.
However, the administration has implemented strict AI chips export quotas for other nations, creating a new global AI development hierarchy. The 18 allies possess “robust technology protection regimes and technology ecosystems aligned with the national security and foreign policy interests of the US,” the policy document states.
For other countries, the restrictions impose precise limitations – chip orders maxing out at roughly 1,700 advanced GPUs can proceed without licences, primarily benefiting academic and research institutions.
Impact on global AI development
The reverberations through the AI industry were immediate. Nvidia, whose AI accelerators power many of the world’s most advanced artificial intelligence systems, saw its shares decline 2%. Vice President of Government Affairs Ned Finkle warned that the export curb “threatens to derail innovation and economic growth worldwide.”
The stakes are exceptionally high for Nvidia, which derives 56% of its revenue from international markets. Cloud computing giants face a complex recalibration of their AI infrastructure.
Under the new framework, US-headquartered providers must adopt a precise mathematical approach to their global operations: no more than 50% of their AI computing power can be deployed outside the country, with a maximum of 25% beyond tier-one countries, and just 7% in any single non-tier-one nation.
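To make the arithmetic of those caps concrete, here is an illustrative helper; the function and figures are hypothetical, chosen only to show how the three percentages interact.

```python
# Hypothetical check of the deployment caps described above:
# <=50% abroad, <=25% outside tier-one, <=7% in any single non-tier-one nation.
def within_caps(total: int, abroad: int, outside_tier_one: int,
                max_single_non_tier_one: int) -> bool:
    return (abroad <= 0.50 * total
            and outside_tier_one <= 0.25 * total
            and max_single_non_tier_one <= 0.07 * total)

# A provider with 1,000 accelerators could place at most 500 abroad,
# 250 outside tier-one countries, and 70 in any single non-tier-one nation.
print(within_caps(total=1000, abroad=480, outside_tier_one=240,
                  max_single_non_tier_one=65))  # True
```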
US-China AI technology battle intensifies
The timing and scope of these AI chip export controls reveal their primary target: China’s rapidly advancing AI capabilities. The White House document explicitly warns about “countries of concern” that “actively employ AI — including US-made AI” in ways that could “undermine US AI leadership.”
With China accounting for 17% of Nvidia’s sales, the commercial impact aligns directly with the administration’s strategic goals. China’s Commerce Ministry’s swift response – promising to “take necessary measures to safeguard its legitimate rights and interests” – signals a new chapter in the technological cold war between the world’s leading AI powers.
The restrictions specifically target China’s ability to develop advanced AI systems, particularly those that could enable “the development of weapons of mass destruction, supporting powerful offensive cyber operations, and aiding human rights abuses.”
Global response and future implications
The US’s European allies have raised concerns about the broad reach of the controls. EU Executive Vice-President Henna Virkkunen and Commissioner Maroš Šefčovič emphasised the need for continued access to advanced AI technology, stating they are “looking forward to engaging constructively with the next US administration” to maintain “a secure transatlantic supply chain on AI technology and supercomputers.”
US National Security Adviser Jake Sullivan frames the controls within a broader technological revolution: “The US has to be prepared for rapid increases in AI’s capability in the coming years, which could have a transformative impact on the economy and our national security.”
Set to take effect in 120 days, the AI chip export controls represent more than just Biden’s final policy move – they establish a new paradigm for global AI development. As former Trump administration national security official Meghan Harris notes, “How effective the rule ends up being in the next 10 to 15 years is now up to the incoming team.”
The regulations mark a defining moment in both US-China relations and global AI development, creating boundaries and alliances that will shape the future of artificial intelligence well beyond the current administration. With these controls, Biden’s final act may be remembered as the moment that redefined the global AI technology landscape.
See also: South Korea wants to develop 50 types of AI chips by 2030
The post US-China tech war escalates with new AI chips export controls appeared first on AI News.
AI is set to become a cornerstone of the UK’s vision for economic and societal renewal with a sweeping action plan unveiled today by Prime Minister Keir Starmer.
The government has committed to all 50 recommendations outlined in the ambitious AI Opportunities Action Plan created by Matt Clifford CBE, tech entrepreneur and chair of the Advanced Research and Invention Agency.
“Our plan will make Britain the world leader,” declared Starmer. “That means more jobs and investment in the UK, more money in people’s pockets, and transformed public services. That’s the change this government is delivering.”
The plan positions AI as a key driver of national progress, aiming to revolutionise public services, boost productivity, and establish the UK as a global leader in the field.
Chris Lehane, Chief Global Affairs Officer at OpenAI, said: “From the locomotive to the Colossus computer, the UK has a rich history of leadership in tech innovation and the research and development of AI.
“The government’s AI action plan – led by the Prime Minister and Secretary Peter Kyle – recognises where AI development is headed and sets the UK on the right path to benefit from its growth. The UK has an enormous national resource in the talent of its people, institutions, and businesses which together can leverage AI to advance the country’s national interest.”
A plan to unlock economic potential
The economic benefits of AI adoption form a central tenet of the government’s strategy. The International Monetary Fund estimates that embracing AI could increase productivity by 1.5 percentage points annually. Over a decade, this could add up to £47 billion in economic gains per year, according to Treasury calculations.
The strategy aims not only to grow the economy but to create tangible benefits for citizens.
Dr Jean Innes, CEO of the Alan Turing Institute, said: “This plan offers an exciting route map, and we welcome its focus on adoption of safe and responsible AI, AI skills, and an ambition to sustain the UK’s global leadership, putting AI to work driving growth, and delivering benefits for society.
“We share these ambitions and look forward to working with the government, universities, industry and civil society to shape the future of these technologies to support the UK’s success and improve people’s lives.”
Three major companies – Vantage Data Centres, Nscale, and Kyndryl – have already pledged a combined £14 billion in investments, creating 13,250 jobs across the country and reinforcing confidence in the UK’s AI potential. This adds to the £25 billion in commitments secured at last year’s International Investment Summit.
Vantage Data Centres is set to invest over £12 billion in UK data centre projects, including one of Europe’s largest data centre campuses in Wales. Meanwhile, Kyndryl will establish a new tech hub in Liverpool—creating up to 1,000 AI-related jobs. Nscale plans to back UK data centre infrastructure with a $2.5 billion investment, which includes the construction of the nation’s largest sovereign AI data centre in Essex by 2026.
Alison Kay, VP for UK and Ireland at Amazon Web Services (AWS), said: “At AWS, we’ve seen first-hand the benefits that digital technologies like AI can bring, and that’s why we’ve announced plans to invest £8 billion over the next five years building, operating, and maintaining data centres in the UK.
“By putting close industry collaboration and public-private partnership at the heart of the government’s agenda, every citizen, community and business in every region of the UK will have the opportunity to realise the benefits of AI, and thrive and grow.”
Zahra Bahrololoumi CBE, CEO of Salesforce UK and Ireland, added: “Businesses in the UK are in a prime position to fully unlock the opportunities of AI, thanks to a strong innovation culture and risk-based laws. That is why Salesforce chose the UK as the location of its first-ever AI Centre, building on our commitment to invest $4 billion in our UK operations to harness the transformational power of AI.”
Transforming public services
AI is already being deployed within UK hospitals, helping to modernise the NHS by diagnosing conditions such as breast cancer more swiftly, improving care for non-verbal patients, and expediting patient discharges.
“NHS datasets could be invaluable for impactful AI innovations in healthcare,” said Dr Bilal, Professor of Applied AI and Technology Ethics at Birmingham City University. “But they remain largely inaccessible to many researchers due to prohibitive costs and logistical hurdles.”
“Initiatives like NHS Secure Data Environments are a great start but must be made more affordable, or ideally free, for academic institutions.”
Beyond healthcare, the government is betting that broader adoption of AI could achieve similar efficiencies in other public sectors. For instance, the plan highlights its potential to reduce administrative workloads for teachers, speed up planning consultations to facilitate new housing developments, and even identify potholes on roads via AI-enhanced cameras to expedite repairs.
The new strategy introduces AI Growth Zones, designed to accelerate the development of critical infrastructure. These zones will not only expedite planning permissions but also ensure dedicated energy connections to power AI projects. The first such zone will be established in Culham, Oxfordshire.
Building AI infrastructure
Supporting the UK’s AI aspirations requires significant investment in infrastructure. The plan includes:
A twentyfold increase in public compute capacity, with immediate work commencing on a new supercomputer to support AI advancements.
The creation of a National Data Library, designed to safely unlock the potential of public data to fuel AI innovation.
The establishment of a dedicated AI Energy Council, chaired by the Science and Energy Secretaries, to address the energy demands of AI development. This aligns with the government’s goal of becoming a clean energy superpower.
A new government team tasked with developing the UK’s sovereign AI capabilities.
The government also aims to provide stability for businesses by balancing the dynamism of the US and the regulatory guardrails seen in the EU.
Science, Innovation, and Technology Secretary Peter Kyle said the UK would leverage its unique strengths: “We already have remarkable strengths we can tap into when it comes to AI—building our status as the cradle of computer science and intelligent machines and establishing ourselves as the third largest AI market in the world.”
Reservations and risks with the AI action plan
While the ambitious plan has been met with enthusiasm by many, industry experts caution against overlooking the potential risks posed by unregulated AI deployment.
Dr Pia Hüsch, Research Fellow in Cyber, Technology and National Security at RUSI, commented: “Labour’s AI Opportunities Action Plan has economic growth as the top priority, shifting focus away from the previous government’s priorities around AI safety and regulation.
“While the focus on investing in infrastructure such as computing power and a national data library is welcome, the UK Government must not forget risks posed by AI technologies or the international partnerships that are needed to secure long-term benefit from AI technologies.”
Similarly, Deryck Mitchelson, Global Chief Information Security Officer at Check Point Software, expressed concerns about security and ethics: “AI integration without airtight security measures will only amplify these risks. External oversight of AI models and training datasets is essential—not optional.
“We need built-in equality, ethics, and a transparent framework to measure outcomes and prove these systems genuinely enhance performance, not just cut costs.”
Mitchelson warned that hasty deployment could erode public trust in AI-driven services and deepen inequality. He emphasised that the government must present this initiative as more than a cost-saving strategy and instead prioritise transparency, accountability, and robust safeguards.
The AI Opportunities Action Plan is a key pillar of the government’s Industrial Strategy and the first stage of a forthcoming Digital and Technology Sector Plan. It also ties into the government’s broader Plan for Change, aiming to ensure AI’s benefits are felt across every corner of the UK.
Chancellor Rachel Reeves MP underscored the initiative’s dual focus on economic growth and public service transformation: “AI is a powerful tool that will help grow our economy, make our public services more efficient and open up new opportunities to help improve living standards.
“This action plan is the government’s modern industrial strategy in action.”
As the UK accelerates efforts to harness AI, it faces the dual challenge of staying ahead in the global race for innovation while ensuring the potential pitfalls of the technology are minimised. Today’s announcements mark a bold step forward, but the road ahead will require careful navigation.
See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence
The post UK Government signs off sweeping AI action plan appeared first on AI News.
The new year was only beginning, but technology had already taken centre stage in a tragic event that shocked many.
Just outside the Trump International Hotel in Las Vegas, a Tesla Cybertruck erupted in an explosion, leaving one person dead and seven others with minor injuries. The devastating incident, confirmed by Las Vegas Sheriff Kevin McMahill, has sparked discussions about the role of artificial intelligence and its darker implications in today’s world.
The Las Vegas Metro Police Department said that the truck’s bed had an alarming mix of gasoline canisters, camp fuel, and large firework mortars. Authorities believe these items were tied to a detonation system controlled by the driver, who appeared to have meticulously planned the attack. The combination of materials painted a chilling picture of a calculated and premeditated act.
The driver, identified as 37-year-old Matthew Livelsberger, was an active-duty soldier in the US Army. Investigators found a “possible manifesto” saved on his phone, along with emails to a podcaster and other documents outlining his intentions. Surveillance footage revealed him preparing for the explosion by pouring fuel onto the truck at a stop before driving to the hotel. Despite all of the preparations, officials confirmed that Livelsberger had no prior criminal record and was not under surveillance at the time of the incident.
One revelation drew significant public attention: Livelsberger had used ChatGPT to aid in his plans. Law enforcement reported that he queried the AI tool for information about assembling explosives, calculating how fast a round would need to be fired to detonate the materials, and understanding which legal loopholes might allow him to acquire the components. Sheriff McMahill addressed this unsettling development, stating, “We know AI was going to change the game for all of us at some point or another, in really all of our lives. I think this is the first incident that I’m aware of on US soil where ChatGPT is utilised to help an individual build a particular device.”
Tragically, Livelsberger’s life ended at the scene with a self-inflicted gunshot wound. Authorities identified his body through DNA and tattoos due to the extensive burns he sustained in the explosion.
OpenAI, the company behind ChatGPT, responded to the incident with a statement expressing their sorrow and emphasising their commitment to responsible AI use. “Our models are designed to refuse harmful instructions and minimise harmful content. In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities,” the statement read. OpenAI also confirmed their cooperation with law enforcement in the ongoing investigation.
The explosion itself was described as a deflagration—a slower, less destructive reaction compared to a high-explosive detonation. Investigators suspect the muzzle flash from a gunshot may have ignited fuel vapours or fireworks fuses in the truck, triggering a chain reaction. Other possibilities, though, such as an electrical short, have not been ruled out.
The Las Vegas explosion is a grim reminder of technology’s double-edged nature. While AI has enormous potential, its darker applications are forcing society to consider how to prevent such tragedies in the future.
(Photo by Unsplash)
See also: OpenAI: Musk wanted us to merge with Tesla or take ‘full control’
The post Driver used ChatGPT to plan attack, authorities reveal appeared first on AI News.
You’ve got a great idea for an AI-based application. To bring it to life, think of fine-tuning as teaching a pre-trained AI model a new trick.
Sure, it already knows plenty from training on massive datasets, but you need to tweak it to your needs. For example, if you need it to pick up abnormalities in scans or figure out what your customers’ feedback really means.
That’s where hyperparameters come in. Think of the large language model as your basic recipe and the hyperparameters as the spices you use to give your application its unique “flavour.”
In this article, we’ll go through some basic hyperparameters and model tuning in general.
What is fine-tuning?
Imagine someone who’s great at painting landscapes deciding to switch to portraits. They understand the fundamentals – colour theory, brushwork, perspective – but now they need to adapt their skills to capture expressions and emotions.
The challenge is teaching the model the new task while keeping its existing skills intact. You also don’t want it to get too ‘obsessed’ with the new data and miss the big picture. That’s where hyperparameter tuning saves the day.
LLM fine-tuning helps LLMs specialise. It takes their broad knowledge and trains them to ace a specific task, using a much smaller dataset.
Why hyperparameters matter in fine-tuning
Hyperparameters are what separate ‘good enough’ models from truly great ones. If you push them too hard, the model can overfit or miss key solutions. If you go too easy, a model might never reach its full potential.
Think of hyperparameter tuning as an iterative feedback loop. You’re talking to your model; you adjust, observe, and refine until it clicks.
7 key hyperparameters to know when fine-tuning
Fine-tuning success depends on tweaking a few important settings. This might sound complex, but the settings are logical.
1. Learning rate
This controls how much the model changes its understanding during training. Getting this hyperparameter right is critical because if you…
Go too fast, the model might skip past better solutions,
Go too slow, it might feel like you’re watching paint dry – or worse, it gets stuck entirely.
For fine-tuning, small, careful adjustments (rather like adjusting a light’s dimmer switch) usually do the trick. Here you want to strike the right balance between accuracy and speedy results.
How you’ll determine the right mix depends on how well the model tuning is progressing. You’ll need to check periodically to see how it’s going.
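To make this concrete, here is a minimal sketch in PyTorch; the model is a stand-in and the 2e-5 value is an illustrative assumption rather than a universal recommendation:

```python
import torch
from torch import nn

# Stand-in for a pre-trained model.
model = nn.Linear(768, 2)

# Fine-tuning typically uses a much smaller learning rate than training
# from scratch (for example 2e-5 rather than 1e-3), so the model nudges
# its existing knowledge instead of overwriting it.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
```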
2. Batch size
This is how many data samples the model processes at once. You want to get the size just right, because…
Larger batches are quick but might gloss over the details,
Smaller batches are slow but thorough.
Medium-sized batches might be the Goldilocks option – just right. Again, the best way to find the balance is to carefully monitor the results before moving on to the next step.
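In practice, batch size is usually just an argument to your data loader. A minimal PyTorch sketch, with a synthetic dataset standing in for real training data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in dataset: 256 samples of 768-dimensional features.
dataset = TensorDataset(torch.randn(256, 768), torch.randint(0, 2, (256,)))

# A medium batch size is often the Goldilocks starting point: large
# enough for speed, small enough to keep gradient updates informative.
loader = DataLoader(dataset, batch_size=16, shuffle=True)
```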
3. Epochs
An epoch is one complete run through your dataset. Pre-trained models already know quite a lot, so they don’t usually need as many epochs as models starting from scratch. How many epochs is right?
Too many, and the model might start memorising instead of learning (hello, overfitting),
Too few, and it may not learn enough to be useful.
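A minimal sketch of an epoch loop with a validation check after each pass; the tiny model, synthetic data, and the count of three epochs are all illustrative assumptions:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(768, 2)  # stand-in for a pre-trained model
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

train = DataLoader(TensorDataset(torch.randn(200, 768), torch.randint(0, 2, (200,))), batch_size=16)
val = DataLoader(TensorDataset(torch.randn(50, 768), torch.randint(0, 2, (50,))), batch_size=16)

# Pre-trained models usually need only a handful of epochs.
for epoch in range(3):
    model.train()
    for x, y in train:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    # Watch validation loss: if it rises while training loss keeps
    # falling, the model is memorising rather than learning.
    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(x), y).item() for x, y in val) / len(val)
    print(f"epoch {epoch}: validation loss {val_loss:.4f}")
```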
4. Dropout rate
Think of this like forcing the model to get creative. You do this by turning off random parts of the model during training. It’s a great way to stop your model from becoming over-reliant on specific pathways and getting lazy. Instead, it encourages the LLM to use more diverse problem-solving strategies.
How do you get this right? The optimal dropout rate depends on how complicated your dataset is. A general rule of thumb is that you should match the dropout rate to the chance of outliers.
So, for a medical diagnostic tool, it makes sense to use a higher dropout rate to improve the model’s accuracy. If you’re creating translation software, you might want to reduce the rate slightly to improve the training speed.
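In code, dropout is simply a layer placed inside the network. A sketch; the 0.3 rate below is an illustrative assumption, not a recommendation:

```python
from torch import nn

classifier_head = nn.Sequential(
    nn.Linear(768, 256),
    nn.ReLU(),
    # During training, randomly zeroes 30% of activations so the model
    # can't lean on any single pathway; automatically disabled in eval mode.
    nn.Dropout(p=0.3),
    nn.Linear(256, 2),
)
```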
5. Weight decay
This keeps the model from getting too attached to any one feature, which helps prevent overfitting. Think of it as a gentle reminder to ‘keep it simple.’
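In most modern optimisers this is a single argument. A sketch; 0.01 is a commonly used default, shown here purely as an assumption:

```python
import torch
from torch import nn

model = nn.Linear(768, 2)  # stand-in for a pre-trained model

# weight_decay gently shrinks weights at every step, discouraging the
# model from leaning too heavily on any one feature.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
```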
6. Learning rate schedules
This adjusts the learning rate over time. Usually, you start with bold, sweeping updates and taper off into fine-tuning mode – kind of like starting with broad strokes on a canvas and refining the details later.
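One way to express that “broad strokes first, details later” pattern in PyTorch is a short linear warm-up followed by a cosine decay; the step counts here are illustrative assumptions:

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = nn.Linear(768, 2)  # stand-in for a pre-trained model
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Ramp the learning rate up over the first 5 steps, then let it
# decay smoothly over the remaining 25.
warmup = LinearLR(optimizer, start_factor=0.1, total_iters=5)
decay = CosineAnnealingLR(optimizer, T_max=25)
scheduler = SequentialLR(optimizer, schedulers=[warmup, decay], milestones=[5])

for step in range(30):
    # ... forward pass and loss.backward() would happen here ...
    optimizer.step()   # placeholder step so the scheduler has something to follow
    scheduler.step()
```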
7. Freezing and unfreezing layers
Pre-trained models come with layers of knowledge. Freezing certain layers means you lock in their existing learning, while unfreezing others lets them adapt to your new task. Whether you freeze or unfreeze depends on how similar the old and new tasks are.
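A common pattern is to freeze everything and then unfreeze only the parts you want to adapt. A sketch; real models name their layers differently, so the structure below is an assumption:

```python
import torch
from torch import nn

# Stand-in for a pre-trained model: a body plus a task head.
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2))

# Freeze all layers so their weights keep the original knowledge.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the final layer so it can adapt to the new task.
for param in model[-1].parameters():
    param.requires_grad = True

# The optimiser should only see the trainable parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```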
Common challenges to fine-tuning
Fine-tuning sounds great, but let’s not sugarcoat it – there are a few roadblocks you’ll probably hit:
Overfitting: Small datasets make it easy for models to get lazy and memorise instead of generalise. You can keep this behaviour in check by using techniques like early stopping, weight decay, and dropout,
Computational costs: Testing hyperparameters can seem like playing a game of whack-a-mole. It’s time-consuming and can be resource-intensive. Worse yet, it’s something of a guessing game. You can use tools like Optuna or Ray Tune to automate some of the grunt work (see the sketch after this list),
Every task is different: There’s no one-size-fits-all approach. A technique that works well for one project could be disastrous for another. You’ll need to experiment.
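As promised above, here is a minimal Optuna sketch. The toy objective stands in for a real fine-tuning run that would return a validation loss; the search ranges are illustrative assumptions:

```python
import optuna

def objective(trial):
    # Sample candidate hyperparameters for this trial.
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    # In real use you would fine-tune with these values and return the
    # validation loss; this toy curve keeps the example instant to run.
    return (lr - 2e-4) ** 2 + (dropout - 0.2) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```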
Tips to fine-tune AI models successfully
Keep these tips in mind:
Start with defaults: Check the recommended settings for any pre-trained models. Use them as a starting point or cheat sheet,
Consider task similarity: If your new task is a close cousin to the original, make small tweaks and freeze most layers. If it’s a total 180-degree turn, let more layers adapt and use a moderate learning rate,
Keep an eye on validation performance: Check how the model performs on a separate validation set to make sure it’s learning to generalise and not just memorising the training data.
Start small: Run a test on a smaller dataset before committing to a full training run. It’s a quick way to catch mistakes before they snowball.
Final thoughts
Tuning hyperparameters makes it easier to train your model effectively. You’ll need to go through some trial and error, but the results make the effort worthwhile. When you get this right, the model excels at its task instead of just making a mediocre effort.
The post The role of hyperparameters in fine-tuning AI models appeared first on AI News.
Plaintiffs in the case of Kadrey et al. vs. Meta have filed a motion alleging the firm knowingly used copyrighted works in the development of its AI models.
The plaintiffs, which include author Richard Kadrey, filed their “Reply in Support of Plaintiffs’ Motion for Leave to File Third Amended Consolidated Complaint” in the United States District Court in the Northern District of California.
The filing accuses Meta of systematically torrenting and stripping copyright management information (CMI) from pirated datasets, including works from the notorious shadow library LibGen.
According to documents recently submitted to the court, evidence reveals highly incriminating practices involving Meta’s senior leaders. Plaintiffs allege that Meta CEO Mark Zuckerberg gave explicit approval for the use of the LibGen dataset, despite internal concerns raised by the company’s AI executives.
A December 2024 memo from internal Meta discussions acknowledged LibGen as “a dataset we know to be pirated,” with debates arising about the ethical and legal ramifications of using such materials. Documents also revealed that top engineers hesitated to torrent the datasets, citing concerns about using corporate laptops for potentially unlawful activities.
Additionally, internal communications suggest that after acquiring the LibGen dataset, Meta stripped CMI from the copyrighted works contained within—a practice that plaintiffs highlight as central to claims of copyright infringement.
According to the deposition of Michael Clark – a corporate representative for Meta – the company implemented scripts designed to remove any information identifying these works as copyrighted, including keywords like “copyright,” “acknowledgements,” or lines commonly used in such texts. Clark attested that this practice was done intentionally to prepare the dataset for training Meta’s Llama AI models.
“Doesn’t feel right”
The allegations against Meta paint a portrait of a company knowingly partaking in a widespread piracy scheme facilitated through torrenting.
According to a string of emails included as exhibits, Meta engineers expressed concerns about the optics of torrenting pirated datasets from within corporate spaces. One engineer noted that “torrenting from a [Meta-owned] corporate laptop doesn’t feel right,” but despite hesitation, the rapid downloading and distribution – or “seeding” – of pirated data took place.
Legal counsel for the plaintiffs has stated that as late as January 2024, Meta had “already torrented (both downloaded and distributed) data from LibGen.” Moreover, records show that hundreds of related documents were initially obtained by Meta months prior but were withheld during early discovery processes. Plaintiffs argue this delayed disclosure amounts to bad-faith attempts by Meta to obstruct access to vital evidence.
During a deposition on 17 December 2024, Zuckerberg himself reportedly admitted that such activities would raise “lots of red flags” and stated it “seems like a bad thing,” though he provided limited direct responses regarding Meta’s broader AI training practices.
This case originally began as an intellectual property infringement action on behalf of authors and publishers claiming violations relating to AI use of their materials. However, the plaintiffs are now seeking to add two major claims to their suit: a violation of the Digital Millennium Copyright Act (DMCA) and a breach of the California Comprehensive Data Access and Fraud Act (CDAFA).
Under the DMCA, the plaintiffs assert that Meta knowingly removed copyright protections to conceal unauthorised uses of copyrighted texts in its Llama models.
As cited in the complaint, Meta allegedly stripped CMI “to reduce the chance that the models will memorise this data” and that this removal of rights management indicators made discovering the infringement more difficult for copyright holders.
The CDAFA allegations involve Meta’s methods for obtaining the LibGen dataset, including allegedly engaging in torrenting to acquire copyrighted datasets without permission. Internal documentation shows Meta engineers openly discussed concerns that seeding and torrenting might prove to be “legally not ok.”
Meta case may impact emerging legislation around AI development
At the heart of this expanding legal battle lies growing concern over the intersection of copyright law and AI.
Plaintiffs argue the stripping of copyright protections from textual datasets denies rightful compensation to copyright owners and allows Meta to build AI systems like Llama on the financial ruins of authors’ and publishers’ creative efforts.
These allegations arrive amid heightened global scrutiny surrounding “generative AI” technologies. Companies like OpenAI, Google, and Meta have all come under fire regarding the use of copyrighted data to train their models. Courts across jurisdictions are currently grappling with the long-term impact of AI on rights management, with potentially landmark cases being decided in both the US and the UK.
In this particular case, US courts have shown increasing willingness to hear complaints about AI’s potential harm to long-established copyright law precedents. Plaintiffs, in their motion, referred to The Intercept Media v. OpenAI, a recent decision from New York in which a similar DMCA claim was allowed to proceed.
Meta continues to deny all allegations in the case and has yet to publicly respond to Zuckerberg’s reported deposition statements.
Whether or not plaintiffs succeed in these amendments, authors across the world face growing anxieties about how their creative works are handled within the context of AI. With copyright law struggling to keep pace with technological advances, this case underscores the need for clearer guidance at an international level to protect both creators and innovators.
For Meta, these claims also represent a reputational risk. As AI becomes the central focus of its future strategy, the allegations of reliance on pirated libraries are unlikely to help its ambitions of maintaining leadership in the field.
The unfolding case of Kadrey et al. vs. Meta could have far-reaching ramifications for the development of AI models moving forward, potentially setting legal precedents in the US and beyond.
(Photo by Amy Syiek)
See also: UK wants to prove AI can modernise public services responsibly
The post Meta accused of using pirated data for AI development appeared first on AI News.
When future generations look back at the rise of artificial intelligence technologies, the year 2025 may be remembered as a major turning point, when the industry took concrete steps towards greater inclusion, and embraced decentralised frameworks that recognise and fairly compensate every stakeholder.
The growth of AI has already sparked transformation in multiple industries, but the pace of uptake has also led to concerns around data ownership, privacy and copyright infringement. Because AI is centralised with the most powerful models controlled by corporations, content creators have largely been sidelined.
OpenAI, the world’s most prominent AI company, has already admitted that’s the case. In January 2024, it told the UK’s House of Lords Communications and Digital Select Committee that it would not have been able to create its iconic chatbot, ChatGPT, without training it on copyrighted material.
OpenAI trained ChatGPT on everything that was posted on the public internet prior to 2023, but the people who created that content – much of which is copyrighted – have not been paid any compensation, which remains a major source of contention.
There’s an opportunity for decentralised AI projects like that proposed by the ASI Alliance to offer an alternative way of AI model development. The Alliance is building a framework that gives content creators a method to retain control over their data, along with mechanisms for fair reward should they choose to share their material with AI model makers. It’s a more ethical basis for AI development, and 2025 could be the year it gets more attention.
AI’s copyright conundrum
OpenAI isn’t the only AI company that’s been accused of copyright infringement. The vast majority of AI models, including those that purport to be open-source, like Meta Platforms’ Llama 3 model, are guilty of scraping the public internet for training data.
AI developers routinely help themselves to whatever content they find online, ignoring the fact that much of the material is copyrighted. Copyright laws are designed to protect the creators of original works, like books, articles, songs, software, artworks and photos, from being exploited, and make unauthorised use of such materials illegal.
The likes of OpenAI, Meta, Anthropic, StabilityAI, Perplexity AI, Cohere, and AI21 Labs get round the law by claiming ‘fair use,’ a reference to an ambiguous clause in copyright law that allows the limited use of protected content without the need to obtain permission from the creator. But there’s no clear definition of what actually constitutes ‘fair use,’ and many authors claim that AI threatens their livelihoods.
Many content creators have resorted to legal action, with a prominent lawsuit filed by the New York Times against OpenAI. In the suit, the Times alleges that OpenAI committed copyright infringement when it ingested thousands of articles to train its large language models. The media organisation claims that such practice is unlawful, as ChatGPT is a competing product that aims to ‘steal audience’ from the Times website.
The lawsuit has led to a debate – should AI companies be allowed to keep consuming any content on the internet, or should they be compelled to ask for permission first, and compensate those who create training data?
Consensus appears to be shifting toward the latter. For instance, the late former OpenAI researcher Suchir Balaji told the Times in an interview that he was tasked with leading the collection of data to train ChatGPT’s models. He said his job involved scraping content from every possible source, including user-generated posts on social media, pirated book archives and articles behind paywalls. All content was scraped without permission being sought, he said.
Balaji said he initially bought OpenAI’s argument that if the information was posted online and freely available, scraping constituted fair use. However, he said that later, he began to question the stance after realising that products like ChatGPT could harm content creators. Ultimately, he said, he could no longer justify the practice of scraping data, resigning from the company in the summer of 2024.
A growing case for decentralised AI
Balaji’s departure from OpenAI appears to coincide with a realisation among AI companies that the practice of helping themselves to any content found online is unsustainable, and that content creators need legal protection.
Evidence of this comes from the spate of content licensing deals announced over the last year. OpenAI has agreed deals with a number of high-profile content publishers, including the Financial Times, NewsCorp, Conde Nast, Axel Springer, Associated Press, and Reddit, which hosts millions of pages of user-generated content on its forums. Other AI developers, like Google, Microsoft and Meta, have forged similar partnerships.
But it remains to be seen if these arrangements will prove to be satisfactory, especially if AI firms generate billions of dollars in revenue. While the terms of the content licensing deals haven’t been made public, The Information claims they are worth a few million dollars per year at most. Considering that OpenAI’s former chief scientist Ilya Sutskever was paid a salary of $1.9 million in 2016, the money offered to publishers may fall short of what content is really worth.
There’s also the fact that millions of smaller content creators – like bloggers, social media influencers etc. – continue to be excluded from deals.
The arguments around AI’s infringement of copyright are likely to last years without being resolved. Meanwhile, the legal ambiguity around data scraping and the growing recognition among practitioners that such practices are unethical are helping to strengthen the case for decentralised frameworks.
Decentralised AI frameworks provide developers with a more principled model for AI training where the rights of content creators are respected, and where every contributor can be rewarded fairly.
Sitting at the heart of decentralised AI is blockchain, which enables the development, training, deployment, and governance of AI models across distributed, global networks owned by everyone. This means everyone can participate in building AI systems that are transparent, as opposed to centralised, corporate-owned AI models that are often described as “black boxes.”
Just as the arguments around AI copyright infringement intensify, decentralised AI projects are making inroads; this year promises to be an important one in the shift towards more transparent and ethical AI development.
Decentralised AI in action
Late in 2024, three blockchain-based AI startups formed the Artificial Superintelligence (ASI) Alliance, an organisation working towards the creation of a “decentralised superintelligence” to power advanced AI systems anyone can use.
The ASI Alliance says it’s the largest open-source, independent player in AI research and development. It was created by SingularityNET, which has developed a decentralised AI network and compute layer; Fetch.ai, focused on building autonomous AI agents that can perform complex tasks without human assistance; and Ocean Protocol, the creator of a transparent exchange for AI training data.
The ASI Alliance’s mission is to provide an alternative to centralised AI systems, emphasising open-source and decentralised platforms, including data and compute resources.
To protect content creators, the ASI Alliance is building an exchange framework based on Ocean Protocol’s technology, where anyone can contribute data to be used for AI training. Users will be able to upload data to the blockchain-based system and retain ownership of it, earning rewards whenever it’s accessed by AI models or developers. Others will be able to contribute by helping to label and annotate data to make it more accessible to AI models, and earn rewards for performing this work. In this way, the ASI Alliance promotes a more ethical way for developers to obtain the training data they need to create AI models.
Shortly after forming, the Alliance launched the ASI<Train/> initiative, focused on the development of more transparent and ethical “domain-specific models” specialising in areas like robotics, science, and medicine. Its first model is Cortex, which is said to be modelled on the human brain and designed to power autonomous robots in real-world environments.
The specialised models differ from general-purpose LLMs, which are great at answering questions and creating content and images, but less useful when asked to solve more complex problems that require significant expertise. But creating specialised models will be a community effort: the ASI Alliance needs industry experts to provide the necessary data to train models.
Fetch.ai’s CEO Humayun Sheikh said the ASI Alliance’s decentralised ownership model creates an ecosystem “where individuals support groundbreaking technology and share in value creation.”
Users without specific knowledge can buy and “stake” FET tokens to become part-owners of decentralised AI models and earn a share of the revenue they generate when they’re used by AI applications.
For content creators, the benefits of a decentralised approach to AI are clear. ASI’s framework lets them keep control of their data and track when it’s used by AI models. It integrates mechanisms encoded in smart contracts to ensure that everyone is fairly compensated. Participants earn rewards for contributing computational resources, data, and expertise, or by supporting the ecosystem through staking.
The ASI Alliance operates a model of decentralised governance, where token holders can vote on key decisions to ensure the project evolves to benefit stakeholders, rather than the shareholders of corporations.
AI for everyone is a necessity
The progress made by decentralised AI is exciting, and it comes at a time when it’s needed. AI is evolving quickly, and centralised AI companies are currently at the forefront of adoption, which for many is a major cause for concern.
Given the transformative potential of AI and the risks it poses to individual livelihoods, it’s important that the industry shifts to more responsible models. AI systems should be developed for the benefit of everyone, and this means every contributor rewarded for participation. Only decentralised AI systems have shown they can do this.
Decentralised AI is not just a nice-to-have but a necessity, representing the only viable alternative capable of breaking big tech’s stranglehold on creativity.
The post Copyright concerns create need for a fair alternative in AI sector appeared first on AI News.
Microsoft has officially released its latest language model, Phi-4, on the AI repository Hugging Face. The model is available under the permissive MIT licence, allowing broad usage for developers, researchers, and businesses alike—a significant step for democratising AI innovations.
Unveiled in December 2024, Phi-4 has been drawing attention for its cutting-edge capabilities despite its compact size. Its release on Hugging Face opens the door for even wider adoption, highlighting that powerful models don’t always require massive infrastructure costs.
From Azure to open access
Although Microsoft initially announced Phi-4 last month, its availability was confined to Azure AI Foundry—the company’s development platform aimed at building AI-driven solutions. This exclusivity created a stir among the AI community, with many eager to get their hands on the model.
Microsoft’s AI Principal Research Engineer, Shital Shah, addressed the demand on X: “We have been completely amazed by the response to phi-4 release. A lot of folks had been asking us for weight release. Few even uploaded bootlegged phi-4 weights on Hugging Face. Well, wait no more. We are releasing today official phi-4 model on Hugging Face!”
The official release eliminates the need for unauthorised or “bootlegged” versions, providing a legitimate channel for developers keen to explore Phi-4’s potential.
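For those developers, getting started is now a few lines of code. A sketch using the Hugging Face transformers library, assuming the official microsoft/phi-4 repository id and a machine with a capable GPU:

```python
# pip install transformers torch accelerate
from transformers import pipeline

# Downloads the official Phi-4 weights from the Hugging Face Hub.
generator = pipeline("text-generation", model="microsoft/phi-4", device_map="auto")

result = generator("Solve step by step: what is 17 * 24?", max_new_tokens=128)
print(result[0]["generated_text"])
```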
Why Phi-4 matters
Phi-4 isn’t just another entry in Microsoft’s AI portfolio—it represents an evolution in the conversation about AI efficiency and accessibility.
At a time when colossal models like GPT-4 dominate discussions due to their expansive capabilities, Phi-4 offers something revolutionary: big performance in a small package.
Key benefits of Phi-4 include:
Compact size and energy efficiency
Phi-4’s lightweight architecture allows it to operate effectively on consumer-grade hardware, eliminating the need for expensive server infrastructure. Its compact form also translates to significantly reduced energy usage, which aligns well with the tech industry’s growing emphasis on sustainability and green computing.
Excels in advanced mathematical reasoning
Phi-4 shines in tasks demanding mathematical reasoning, a capability measured by its score of 80.4 on the challenging MATH benchmark. This performance outpaces many comparable and even larger models, positioning Phi-4 as a strong contender for industries such as finance, engineering, and data analytics.
Specialised applications
Training on curated datasets has made Phi-4 highly accurate for domain-specific uses. From auto-filling forms to generating tailored content, it’s particularly valuable in industries like healthcare and customer service, where compliance, speed, and accuracy are critical.
Enhanced safety features
By leveraging Azure AI’s Content Safety tools, Phi-4 incorporates mechanisms like prompt shields and protected material detection to mitigate risks associated with adversarial prompts, making it safer to deploy in live environments.
Making AI accessible to mid-sized businesses
Sustainability and security are vital, but so is cost-effectiveness. Phi-4’s capability to deliver high performance without the need for large computational resources makes it a viable choice for mid-sized enterprises eager to adopt AI solutions. This could lower barriers for businesses seeking to automate operations or enhance productivity.
Innovative training techniques
The model’s training process combines synthetic datasets and curated organic data, boosting Phi-4’s effectiveness while addressing common challenges with data availability. This methodology could set the stage for future advances in model development, balancing scalability with precision.
Model for the masses
Phi-4’s launch with an MIT licence signifies more than just access—it represents a shift in how AI technologies are developed and shared. The permissive nature of this licence allows developers to use, modify, and redistribute Phi-4 with few restrictions, fostering further innovation.
This move also reflects broader trends in the AI field: a deliberate effort to democratise access to powerful models, enabling smaller organisations and independent developers to benefit from advanced technologies that were previously the preserve of tech giants or highly funded research labs.
As AI adoption becomes increasingly central across sectors, the demand for efficient, adaptable, and affordable AI models continues to climb. Phi-4 is positioned for this next phase of AI proliferation by offering impressive performance at reduced costs. It could catalyse growth particularly in industries like healthcare, where streamlined and precise computational tools make life-changing benefits possible.
At the same time, Phi-4 highlights the viability of a more sustainable AI future. By showing that smaller AI models can excel in practical applications while consuming fewer resources, Microsoft opens the door for environmentally-conscious advancements in machine learning.
Smaller, more efficient models are proving that size isn’t everything in AI—and the era of resource-intensive giants dominating the field may be giving way to a more diverse, inclusive, and innovative ecosystem.
See also: NVIDIA advances AI frontiers with CES 2025 announcements
The post Microsoft releases Phi-4 language model on Hugging Face appeared first on AI News.
AI and Big Data Expo Global is under four weeks away. Set to take place at the Olympia, London, on 5-6 February 2025, this must-attend artificial intelligence and big data event is for professionals from all industries looking to learn more about the newest technology solutions.
Key highlights:
Headline speakers: The event boasts a stellar line-up of more than 150 speakers from leading global organisations including NVIDIA, LinkedIn, Unilever, Sainsbury’s, Co-op, Salesforce, BT Group, Meta, Lloyds Banking Group, Philips, The Economist, Jaguar Land Rover, and many others. These industry leaders will share their expertise and visions on how AI and Big Data are shaping the future across various sectors.
Industry-leading agenda including:
Strategic insights into the convergence of machine learning, natural language processing, and neural architectures shaping AI’s future.
Explore how AI is transforming businesses globally, beyond just augmenting intelligence.
Understand how AI impacts work, organisational culture, trust, and leadership.
Examine AI’s effect on skills, human-AI collaboration, and the workplace experience.
Empower your organisation to navigate the AI transformation journey.
Dive into advanced analytics and AI for smarter, data-driven business decisions.
Networking opportunities: With more than 7,000 attendees expected, the AI and Big Data Expo offers opportunities for networking, including the Networking drinks on Day 1 of the event. Plus, utilise our AI-powered matchmaking tool to connect with potential collaborators, clients and thought leaders from around the globe.
Co-located shows: Gain access to nine co-located events, covering a wide range of technological innovations and trends. This multi-event format ensures attendees can explore the intersection of AI, big data and other emerging technologies.
Exhibition floor: Discover the latest innovations from more than 150 industry-leading solution providers, including Salesforce, Experian, Edge Impulse, Snowflake, Coursera and more. The exhibition floor is your gateway to seeing cutting-edge products and services first-hand, offering solutions that can transform your business.
In today’s landscape, AI isn’t just a tool—it’s a strategic imperative. Executives and senior employees need to stay ahead of emerging trends to drive innovation, efficiency, and growth across their organisations.
Discover how AI can transform your business! Dive deep into cutting-edge sessions covering everything from AI ethics and infrastructure to human-AI collaboration and revolutionary use cases.
Register today:
Don’t miss your chance to attend this world-leading event and elevate your AI expertise. Secure your pass today by visiting our registration page.
About AI & Big Data Expo:
The AI and Big Data Expo is part of TechEx—the leading technology event. Prepare for two days of unrivalled access to the trends and innovations shaping the future of AI, automation, and big data. Plus, gain access to nine co-located events all under the TechEx Events Series. Don’t miss out!
We look forward to welcoming you to the AI & Big Data Expo Global in London!
The post AI and Big Data Expo Global: Less than 4 weeks to go! appeared first on AI News.
Singapore-based Firmus Technologies has been recognised with the Asia Pacific Data Centre Project of the Year award for its AI Factory facility.
The facility stands out for its advanced infrastructure and focus on energy efficiency, reflecting broader efforts to meet the rising demands of AI computing sustainably.
The AI Factory is part of Firmus’s ongoing initiative to transform existing ST Telemedia Global Data Centres (STT GDC) into GPU-powered AI computing platforms. The redesigned centres are equipped with state-of-the-art hardware and efficient cooling systems, enabling them to meet both enterprise and research needs with improved energy performance metrics.
As artificial intelligence continues to need more power, energy efficiency has become a major issue. Firmus has been addressing this challenge for nearly a decade with its AI Factory platform, which combines advanced immersion cooling technology with dependable design, build, and operation services. The company states its platform has several significant advantages, including:
Energy efficiency: 45% more FLOP per utility picoJoule than traditional data centres,
Cost-effectiveness: Up to 30% cheaper total cost of ownership (TCO) than direct-to-chip cooling platforms,
Scalability and sustainability: Supports high-density AI workloads while reducing environmental effects,
Global expertise: A track record in building and operating immersion-cooled data centres in Singapore and Australia.
The deployment of the AI Factory in Singapore shows how innovative approaches to data centre infrastructure can address the energy demands of AI. The project highlights a potential pathway for sustainable AI development by achieving a pPUE of 1.02 and a reduction in energy consumption of 45%. The achievement aligns with Singapore’s National AI Strategy 2.0, which emphasises sustainable growth in AI and data centre innovation.
Tim Rosenfield, co-CEO of Firmus Technologies, explained the broader vision behind the project, noting that it’s about balancing AI growth with sustainability. “By rethinking data centre design, we have created a platform that supports the growth of AI while promoting environmental sustainability. If we can do it in Singapore, where space is constrained and the humid climate is against us, we can do it anywhere,” he said.
Firmus has recently changed its leadership team, adding Dr. Daniel Kearney as chief technology officer. Previously AWS’s Head of Technology for the ASEAN Enterprise business, Kearney leads the engineering team at Firmus. He pointed out how sustainable AI infrastructure is becoming essential as AI technologies expand. “This win against established data centre players recognises the importance of technology like ours in meeting the growth of AI and the energy challenges it brings,” he said.
The company has been advancing its work through the Sustainable Metal Cloud (SMC), an initiative aimed at improving the efficiency and sustainability of AI infrastructure. Recent updates from Firmus include:
Power efficiency benchmarks: Firmus became the first to publish comprehensive power consumption data alongside performance results for the MLPerf Training benchmark,
Policy contributions: Insights from Tim Rosenfield contributed to the Tony Blair Institute for Global Change’s policy agenda on managing the energy demands of the AI sector,
Industry discussions: At ATxSG24, Firmus’s Chairman, Edward Pretty, joined a panel featuring organisations like NVIDIA, the World Bank, and Alibaba Cloud to explore the balance between sustainability and the computational needs of AI,
Hypercube expansion: Firmus’s team of 700 is installing the first fleet of Sustainable AI Factories, known as HyperCubes, in multiple regions,
Engagement at NVIDIA GTC 2024: The company participated in two panels at NVIDIA’s GTC event, discussing sustainable AI infrastructure alongside partners like NVIDIA, Deloitte, and WEKA.
See also: The AI revolution: Reshaping data centres and the digital landscape
The post Singapore-based Firmus wins recognition for AI data centre design appeared first on AI News.
NVIDIA CEO and founder Jensen Huang took the stage for a keynote at CES 2025 to outline the company’s vision for the future of AI in gaming, autonomous vehicles (AVs), robotics, and more.
“AI has been advancing at an incredible pace,” Huang said. “It started with perception AI — understanding images, words, and sounds. Then generative AI — creating text, images, and sound. Now, we’re entering the era of ‘physical AI,’ AI that can perceive, reason, plan, and act.”
With NVIDIA’s platforms and GPUs at the core, Huang explained how the company continues to fuel breakthroughs across multiple industries while unveiling innovations such as the Cosmos platform, next-gen GeForce RTX 50 Series GPUs, and compact AI supercomputer Project DIGITS.
RTX 50 series: “The GPU is a beast”
One of the most significant announcements during CES 2025 was the introduction of the GeForce RTX 50 Series, powered by NVIDIA Blackwell architecture. Huang debuted the flagship RTX 5090 GPU, boasting 92 billion transistors and achieving an impressive 3,352 trillion AI operations per second (TOPS).
“GeForce enabled AI to reach the masses, and now AI is coming home to GeForce,” said Huang.
Holding the blacked-out GPU, Huang called it “a beast,” highlighting its advanced features, including dual cooling fans and its ability to leverage AI for revolutionary real-time graphics.
Set for a staggered release in early 2025, the RTX 50 Series includes the flagship RTX 5090 and RTX 5080 (available 30 January), followed by the RTX 5070 Ti and RTX 5070 (February). Laptop GPUs join the lineup in March.
In addition, NVIDIA introduced DLSS 4 – featuring ‘Multi-Frame Generation’ technology – which boosts gaming performance up to eightfold by generating three additional frames for every frame rendered.
Other advancements, such as RTX Neural Shaders and RTX Mega Geometry, promise heightened realism in video games, including precise face and hair rendering using generative AI.
Cosmos: Ushering in physical AI
NVIDIA took another step forward with the Cosmos platform at CES 2025, which Huang described as a “game-changer” for robotics, industrial AI, and AVs. Much like the impact of large language models on generative AI, Cosmos represents a new frontier for AI applications in robotics and autonomous systems.
“The ChatGPT moment for general robotics is just around the corner,” Huang declared.
Cosmos integrates generative models, tokenisers, and video processing frameworks to enable robots and vehicles to simulate potential outcomes and predict optimal actions. By ingesting text, image, and video prompts, Cosmos can generate “virtual world states,” tailored for complex robotics and AV use cases involving real-world environments and lighting.
Top robotics and automotive leaders – including XPENG, Hyundai Motor Group, and Uber – are among the first to adopt Cosmos, which is available on GitHub via an open licence.
Pras Velagapudi, CTO at Agility, comments: “Data scarcity and variability are key challenges to successful learning in robot environments. Cosmos’ text-, image- and video-to-world capabilities allow us to generate and augment photorealistic scenarios for a variety of tasks that we can use to train models without needing as much expensive, real-world data capture.”
Empowering developers with AI models
NVIDIA also unveiled new AI foundation models for RTX PCs, which aim to supercharge content creation, productivity, and enterprise applications. These models, delivered as NVIDIA NIM (NVIDIA Inference Microservices), are designed to integrate with the RTX 50 Series hardware.
Huang emphasised the accessibility of these tools: “These AI models run in every single cloud because NVIDIA GPUs are now available in every cloud.”
NVIDIA is doubling down on its push to equip developers with advanced tools for building AI-driven solutions. The company introduced AI Blueprints: pre-configured tools for crafting agents tailored to specific enterprise needs, such as content generation, fraud detection, and video management.
“They are completely open source, so you could take it and modify the blueprints,” explains Huang.
Huang also announced the release of Llama Nemotron, designed for developers to build and deploy powerful AI agents.
Ahmad Al-Dahle, VP and Head of GenAI at Meta, said: “Agentic AI is the next frontier of AI development, and delivering on this opportunity requires full-stack optimisation across a system of LLMs to deliver efficient, accurate AI agents.
“Through our collaboration with NVIDIA and our shared commitment to open models, the NVIDIA Llama Nemotron family built on Llama can help enterprises quickly create their own custom AI agents.”
Philipp Herzig, Chief AI Officer at SAP, added: “AI agents that collaborate to solve complex tasks across multiple lines of the business will unlock a whole new level of enterprise productivity beyond today’s generative AI scenarios.
“Through SAP’s Joule, hundreds of millions of enterprise users will interact with these agents to accomplish their goals faster than ever before. NVIDIA’s new open Llama Nemotron model family will foster the development of multiple specialised AI agents to transform business processes.”
Safer and smarter autonomous vehicles
NVIDIA’s announcements extended to the automotive industry, where its DRIVE Hyperion AV platform is fostering a safer and smarter future for AVs. Built on the new NVIDIA AGX Thor system-on-a-chip (SoC), the platform allows vehicles to achieve next-level functional safety and autonomous capabilities using generative AI models.
“The autonomous vehicle revolution is here,” Huang said. “Building autonomous vehicles, like all robots, requires three computers: NVIDIA DGX to train AI models, Omniverse to test-drive and generate synthetic data, and DRIVE AGX, a supercomputer in the car.”
Huang explained that synthetic data is critical for AV development, as it dramatically enhances real-world datasets. NVIDIA’s AI data factories – powered by Omniverse and Cosmos platforms – generate synthetic driving scenarios, increasing the effectiveness of training data exponentially.
Toyota, the world’s largest automaker, is committed to using NVIDIA DRIVE AGX Orin and the safety-certified NVIDIA DriveOS to develop its next-generation vehicles. Heavyweights such as JLR, Mercedes-Benz, and Volvo Cars have also adopted DRIVE Hyperion.
Project DIGITS: Compact AI supercomputer
Huang concluded his NVIDIA keynote at CES 2025 with a final “one more thing” announcement: Project DIGITS, NVIDIA’s smallest yet most powerful AI supercomputer, powered by the cutting-edge GB10 Grace Blackwell Superchip.
“This is NVIDIA’s latest AI supercomputer,” Huang declared, revealing its compact size, claiming it’s portable enough to “practically fit in a pocket.”
Project DIGITS enables developers and engineers to train and deploy AI models directly from their desks, providing the full power of NVIDIA’s AI stack in a compact form.
Set to launch in May, Project DIGITS represents NVIDIA’s push to make AI supercomputing accessible to individuals as well as organisations.
Vision for tomorrow
Reflecting on NVIDIA’s journey since inventing the programmable GPU in 1999, Huang described the past 12 years of AI-driven change as transformative.
“Every single layer of the technology stack has been fundamentally transformed,” he said.
With advancements spanning gaming, AI-driven agents, robotics, and autonomous vehicles, Huang foresees an exciting future.
“All of the enabling technologies I’ve talked about today will lead to surprising breakthroughs in general robotics and AI over the coming years,” Huang concludes.
(Image Credit: NVIDIA)
See also: Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence
The post NVIDIA advances AI frontiers with CES 2025 announcements appeared first on AI News.
Sam Altman, CEO and co-founder of OpenAI, has shared candid reflections on the company’s journey as it aims to achieve superintelligence.
With ChatGPT recently marking its second anniversary, Altman outlines OpenAI’s achievements, ongoing challenges, and vision for the future of AI.
“The second birthday of ChatGPT was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning,” Altman reflects.
A bold mission to achieve AGI and superintelligence
OpenAI was founded in 2015 with a clear, albeit bold, mission: to develop AGI and ensure it benefits all of humanity.
Altman and the founding team believed AGI could become “the most impactful technology in human history.” Yet, he recalls, the world wasn’t particularly interested in their quest back then.
“At the time, very few people cared, and if they did, it was mostly because they thought we had no chance of success,” Altman explains.
Fast forward to 2022, and OpenAI was still a relatively quiet research facility testing what was then referred to as ‘Chat With GPT-3.5.’ Developers had been exploring the capabilities of its API, and the excitement sparked the idea of launching a user-ready demo.
This demo led to the creation of ChatGPT, which Altman acknowledges benefited from “mercifully” better branding than its initial name. When it launched on 30 November 2022, ChatGPT proved to be a tipping point.
“The launch of ChatGPT kicked off a growth curve like nothing we have ever seen—in our company, our industry, and the world broadly,” he says.
OpenAI has since witnessed an evolution marked by staggering interest, not just in its tools but in the broader possibilities of AI.
Building at breakneck speed
Altman admits that scaling OpenAI into a global tech powerhouse came with significant challenges.
“In the last two years, we had to build an entire company, almost from scratch, around this new technology,” he notes, adding, “There is no way to train people for this except by doing it.”
Operating in uncharted waters, the OpenAI team often faced ambiguity—making decisions on the fly and dealing with the inevitable missteps.
“Building up a company at such high velocity with so little training is a messy process,” Altman explains. “It’s often two steps forward, one step back (and sometimes, one step forward and two steps back).”
Yet, despite the chaos, Altman credits the team’s resilience and ability to adapt.
OpenAI now boasts over 300 million weekly active users, a sharp increase from the 100 million reported just a year ago. Much of this success lies in the organisation’s ethos of learning by doing, combined with a commitment to putting “technology out into the world that people genuinely seem to love and that solves real problems.”
‘A big failure of governance’
Of course, the journey so far hasn’t been without turmoil. Altman recounts a particularly difficult chapter from November 2023 when he was suddenly ousted as CEO, briefly recruited by Microsoft, only to be reinstated by OpenAI days later amid industry backlash and staff protests.
Speaking openly, Altman highlights the need for better governance structures in organisations tackling critical technologies like AI.
“The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included,” he admits. “Looking back, I certainly wish I had done things differently, and I’d like to believe I’m a better, more thoughtful leader today than I was a year ago.”
The episode served as a stark reminder of the complexity of managing rapid growth and the stakes involved in AI development. It also drove OpenAI to forge new governance structures “that enable us to pursue our mission of ensuring that AGI benefits all of humanity.”
Altman expressed deep gratitude for the support OpenAI received during the crisis from employees, partners, and customers. “My biggest takeaway is how much I have to be thankful for and how many people I owe gratitude towards,” he emphasises.
Pivoting towards superintelligence
Looking forward, Altman says OpenAI is beginning to aim beyond AGI towards the development of “superintelligence”—AI systems that far surpass human cognitive capabilities.
“We are now confident we know how to build AGI as we have traditionally understood it,” Altman shares. OpenAI predicts that by the end of this year, AI agents will significantly “join the workforce,” revolutionising industries with smarter automation and companion systems.
Achieving superintelligence would be especially transformative for society, with the potential to accelerate scientific discoveries, but also poses the most significant dangers.
“We believe in the importance of being world leaders on safety and alignment research … OpenAI cannot be a normal company,” he notes, underscoring the need to approach innovation responsibly.
OpenAI’s strategy includes gradually introducing breakthroughs into the world, allowing for society to adapt alongside AI’s rapid evolution. “Iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes,” Altman argues.
Reflecting on the organisation’s trajectory, Altman admits OpenAI’s path has been defined by both extraordinary breakthroughs and significant challenges—from scaling teams to navigating public scrutiny.
“Nine years ago, we really had no idea what we were eventually going to become; even now, we only sort of know,” he says.
What remains clear is his unwavering commitment to OpenAI’s vision. “Our vision won’t change; our tactics will continue to evolve,” Altman claims, attributing the company’s remarkable progress to the team’s willingness to rethink processes and embrace challenges.
As AI continues to reshape industries and daily life, Altman’s central message is evident: While the journey has been anything but smooth, OpenAI is steadfast in its mission to unlock the benefits of AI for all.
“How lucky and humbling it is to be able to play a role in this work,” Altman concludes.
See also: OpenAI funds $1 million study on AI and morality at Duke University
The post Sam Altman, OpenAI: ‘Lucky and humbling’ to work towards superintelligence appeared first on AI News.
Video surveillance has come a long way from simple CCTV setups. Today’s businesses demand more – smarter analytics, enhanced security, and seamless scalability. As organisations adopt AI and automation across their operations, video management systems (VMS) face new challenges:
How to keep video surveillance scalable and easy to manage?
Can AI analytics like face recognition or behaviour detection be integrated without breaking the budget?
Is my current system prepared for modern security risks?
These questions are not hypothetical. They represent real obstacles businesses face when managing video surveillance systems. Solving them requires innovative thinking, flexible tools, and a smarter approach to how systems are designed and operated.
The shift to smarter surveillance
Traditional video surveillance systems often fail to meet the needs of dynamic, modern environments. Whether it’s a retail chain looking to analyse customer behaviour or a factory monitoring equipment safety, the tools of yesterday aren’t enough to address today’s demands.
The shift towards smarter surveillance involves integrating modular, AI-driven systems that:
Adapt to your specific needs,
Automate tedious tasks like footage analysis,
Offer advanced analytics, like emotion detection or license plate recognition,
Remain accessible to both tech-savvy professionals and beginners.
This isn’t just a technical shift; it’s a shift in mindset. Businesses now see surveillance not only as a security measure but as a strategic tool for operational insight.
Meet Xeoma: The modular approach to smarter surveillance
At the forefront of this smarter surveillance revolution is Xeoma, a modular, AI-powered video surveillance software that provides various solutions to the challenges of modern businesses:
Modularity for customisation. Xeoma’s plug-and-play structure allows businesses to tailor their surveillance systems. Whether you need facial recognition, vehicle detection, or heatmaps of customer activity, Xeoma makes it easy to add or remove modules as needed.
AI-powered analytics: Xeoma offers cutting-edge features like:
Object recognition: Detect and classify objects like people, animals, and vehicles,
Voice-to-text: Transcribe spoken words into text,
Fire detection: Detect the presence of fire or smoke,
Licence plate recognition: Automatically read and record vehicle licence plates,
Age and gender recognition: Determine the age range and gender of individuals.
Ease of use: Unlike many systems with steep learning curves, Xeoma is designed to be user-friendly. Its intuitive interface ensures that even non-technical users can quickly set up and operate the software.
Seamless integration: Xeoma integrates with IoT devices, access control systems, and other third-party tools, making it an ideal choice for businesses looking to enhance their existing setups.
Cost efficiency: With Xeoma, you only pay once thanks to the lifetime licences. The pricing structure ensures that businesses of all sizes, from startups to enterprises, can find a solution that fits their budgets.
Unlimited scalability: Xeoma has no limitations on the number of cameras it can work with. Whether the system has tens, hundreds, or thousands of cameras, Xeoma will handle them all.
Encrypted communication: Xeoma uses secure communication protocols (HTTPS, SSL/TLS) to encrypt data transmitted between the server, cameras, and clients. The prevents unauthorised access during data transmission.
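To make the encryption point concrete, here is a minimal Python sketch of what certificate-checked TLS communication between a surveillance client and its server involves. The host name and port are placeholders for illustration, not Xeoma defaults.

```python
import socket
import ssl

# Placeholder address -- substitute your own VMS deployment.
HOST, PORT = "vms.example.com", 443

# The default context enables certificate validation and host-name
# checking, so the client refuses servers it cannot authenticate.
context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
        print("Server certificate subject:",
              tls_sock.getpeercert().get("subject"))
```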
Xeoma’s flexible design and robust features allow it to be tailored to a wide range of scenarios, empowering organisations to meet their unique challenges while staying efficient, secure, and scalable.
How Xeoma benefits your business: Scenarios
Xeoma isn’t just a tool for security – it’s a versatile platform that adapts to your environment, whether you run a small retail store, manage a factory floor, or oversee an entire urban surveillance network.
Retail: Elevating customer experience
Picture this: You manage a busy store where you need to understand peak traffic hours and monitor for shoplifting. With Xeoma, you can:
Deploy AI-based ‘face recognition’ to discreetly flag known shoplifters or recognise VIP customers for enhanced service,
Use ‘visitors counter’ and ‘crowd detector’ to identify when foot traffic is highest and allocate staff accordingly,
Analyse heatmaps to see which areas of the store attract the most attention, optimising product placement,
Add the ‘unique visitors counter’ module to your system to group people by frequency of attendance, while age and gender recognition helps you tailor promotions more accurately,
Enhance your marketing efforts with eye tracking, which offers insights into where customers’ attention actually lands.
Manufacturing: Ensuring workplace safety
On a bustling factory floor, every second matters, and safety is critical. Xeoma can help by:
Detecting if workers are in restricted zones using ‘cross-line detector,’
Monitoring compliance with safety protocols using helmet and mask detectors,
Sending real-time alerts to supervisors about potential hazards, like machinery malfunctions or unauthorised access, through a range of channels from push notifications to personalised alerts,
Elevating trust and satisfaction levels with timelapse and streaming to YouTube.
Urban surveillance: Protecting communities
If you’re part of a city planning team or law enforcement agency, Xeoma scales effortlessly to monitor entire districts:
Use licence plate recognition to track vehicles entering and exiting restricted areas,
Automate responses to emergencies, from traffic incidents and rule violations (for example, detectors for speeding, running a red light, or ******** parking) to public safety threats,
Identify suspicious behaviour in crowded public spaces using ‘loitering detector,’
Use text recognition to detect graffiti and adverts containing prohibited words such as “drugs”,
Recognise faces to find wanted or missing people with face identification.
Education: Safeguarding schools
For schools and universities, safety is a top priority. Xeoma provides:
AI alerts with ‘detector of abandoned objects’ and ‘sound detector’ for detecting unattended bags or abnormal behaviour, ensuring quick response times,
Smoke and fire detection that lets you prevent an outbreak of fire or respond to it promptly,
Smart automated verification with ‘smart-card reader’ and ‘face ID’ that helps keep unauthorised persons out,
Integration with existing access control systems via API or HTTP for a seamless security solution (see the sketch after this list),
Live streaming to your educational entity website or YouTube can enhance parental engagement or build a positive image, while eye tracking serves as an effective anti-cheat solution in monitoring systems.
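As referenced in the list above, third-party systems can talk to the VMS over HTTP. The Python sketch below shows the general shape of such an integration; the URL, port, and parameters are invented placeholders rather than documented Xeoma endpoints, so the real interface should be taken from your installation’s API reference.

```python
import requests

# Hypothetical endpoint -- consult your VMS documentation for the
# actual HTTP interface exposed by your installation.
VMS_URL = "http://vms.example.com:10090/api"

def notify_vms(event: str, camera_id: int) -> bool:
    """Forward an access-control event to the VMS over HTTP."""
    response = requests.get(
        VMS_URL,
        params={"event": event, "camera": camera_id},
        timeout=5,
    )
    return response.ok

# e.g. ask the VMS to react when a door controller fires:
if __name__ == "__main__":
    notify_vms("door_opened", camera_id=3)
```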
Hospitality: Enhancing guest experiences
In the hospitality industry, guest satisfaction is everything. Xeoma helps you:
Monitor entrances and exits with access control integration for smooth check-ins and check-outs,
Use ‘emotion detector’ to gauge customer satisfaction in common areas,
Ensure staff compliance with protocols to maintain service quality using the ‘voice-to-text’ module.
Conclusion: Connecting Xeoma to your vision
Every business has its unique challenges, and Xeoma’s versatility means it can be the solution you need to overcome yours. Imagine running a business where:
Your team has actionable insights at their fingertips,
Potential threats are flagged before they escalate,
Your surveillance system doesn’t just protect – it empowers decision-making and growth.
Xeoma isn’t just about surveillance; it’s about giving you peace of mind, actionable intelligence, and the flexibility to focus on what matters most – your people, your customers, and your vision for the future.
Whether you’re securing a retail space, safeguarding a factory, or protecting an entire community, Xeoma’s modular, AI-powered platform adapts to your goals and grows alongside you.
Ready to see how Xeoma can transform your video surveillance strategy? Explore a free demo and start building your ideal system today.
The post Rethinking video surveillance: The case for smarter, more flexible solutions appeared first on AI News.
View the full article
OpenAI is awarding a $1 million grant to a Duke University research team to look at how AI could predict human moral judgments.
The initiative highlights the growing focus on the intersection of technology and ethics, and raises critical questions: Can AI handle the complexities of morality, or should ethical decisions remain the domain of humans?
Duke University’s Moral Attitudes and Decisions Lab (MADLAB), led by ethics professor Walter Sinnott-Armstrong and co-investigator Jana Schaich Borg, is in charge of the “Making Moral AI” project. The team envisions a “moral GPS,” a tool that could guide ethical decision-making.
Its research spans diverse fields, including computer science, philosophy, psychology, and neuroscience, to understand how moral attitudes and decisions are formed and how AI can contribute to the process.
The role of AI in morality
MADLAB’s work examines how AI might predict or influence moral judgments. Imagine an algorithm assessing ethical dilemmas, such as deciding between two unfavourable outcomes in autonomous vehicles or providing guidance on ethical business practices. Such scenarios underscore AI’s potential but also raise fundamental questions: Who determines the moral framework guiding these types of tools, and should AI be trusted to make decisions with ethical implications?
OpenAI’s vision
The grant supports the development of algorithms that forecast human moral judgments in fields such as medicine, law, and business, which frequently involve complex ethical trade-offs. While promising, AI still struggles to grasp the emotional and cultural nuances of morality. Current systems excel at recognising patterns but lack the deeper understanding required for ethical reasoning.
Another concern is how this technology might be applied. While AI could assist in life-saving decisions, its use in defence strategies or surveillance introduces moral dilemmas. Can unethical AI actions be justified if they serve national interests or align with societal goals? These questions emphasise the difficulties of embedding morality into AI systems.
Challenges and opportunities
Integrating ethics into AI is a formidable challenge that requires collaboration across disciplines. Morality is not universal; it is shaped by cultural, personal, and societal values, making it difficult to encode into algorithms. Additionally, without safeguards such as transparency and accountability, there is a risk of perpetuating biases or enabling harmful applications.
OpenAI’s investment in Duke’s research marks a step toward understanding the role of AI in ethical decision-making. However, the journey is far from over. Developers and policymakers must work together to ensure that AI tools align with social values and emphasise fairness and inclusivity while addressing biases and unintended consequences.
As AI becomes more integral to decision-making, its ethical implications demand attention. Projects like “Making Moral AI” offer a starting point for navigating a complex landscape, balancing innovation with responsibility in order to shape a future where technology serves the greater good.
(Photo by Unsplash)
See also: AI governance: Analysing emerging global regulations
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post OpenAI funds $1 million study on AI and morality at Duke University appeared first on AI News.
View the full article
The emerging US-China Artificial General Intelligence (AGI) rivalry could face a major policy transformation, as the US-China Economic and Security Review Commission (USCC) recommends a Manhattan Project-style initiative and restrictions on humanoid robots in its latest report to Congress.
Released in November 2024, the Commission’s annual report outlined 32 recommendations that could fundamentally alter how the two countries interact, with artificial intelligence taking centre stage in a new chapter of strategic rivalry.
US-China: the AGI moonshot and critical tech controls
At the heart of the report lies an ambitious proposal: establishing a government-backed programme to develop AGI – AI systems that could match and potentially exceed human cognitive abilities.
However, the recommendation is just one piece of a larger technological puzzle, including export controls, investment screening, and new trade policies to preserve US technological advantages.
The proposed AGI initiative would provide multi-year contracts to leading AI companies, cloud providers, and data centre operators. It would be backed by the Defense Department’s highest priority “DX Rating” – a designation typically reserved for critical national security projects.
This level of government involvement in AI development mirrors the urgency seen in previous technological races. It raises crucial questions about the role of state intervention in an industry primarily driven by private sector innovation.
The Commission’s tech-focused recommendations extend beyond AI. Notable proposals include restricting imports of ********-made autonomous humanoid robots with advanced dexterity, locomotion, and intelligence capabilities.
The report also targets energy infrastructure products with remote monitoring capabilities, reflecting growing concerns about connected technologies in critical infrastructure. The report builds on existing export controls in the semiconductor space by recommending stronger oversight of technology transfers and investment flows.
This comes as China continues to build domestic chip-making capabilities despite international restrictions. The Commission suggests creating an Outbound Investment Office that prevents US capital and expertise from advancing China’s technological capabilities in sensitive sectors.
Reshaping trade relations and investment flows
Perhaps most significantly, the report recommends eliminating China’s Permanent Normal Trade Relations (PNTR) status—a move that could reshape the technology supply chain and trade flows that have defined the global tech industry for decades. This recommendation acknowledges how deeply intertwined the US and ******** tech ecosystems have become, while suggesting that this interdependence may now pose more risks than benefits.
Data transparency is another key theme, with recommendations for expanded reporting requirements on investments and technology transfers. The Commission calls for better tracking of investments flowing through offshore entities, addressing a significant blind-spot in current oversight mechanisms.
The report’s release comes at a critical juncture in technological development. China’s push for self-sufficiency in vital technologies and its “new quality productive forces” initiative demonstrates Beijing’s determination to lead in next-generation technologies. Meanwhile, AI capabilities and quantum computing breakthroughs have raised the stakes in technology competition.
However, the Commission’s recommendations face practical challenges. Achieving AGI remains a complex scientific challenge that may not yield quick results, regardless of funding levels. Additionally, restrictions on technology transfers and investment could have unintended consequences for global innovation networks that have historically benefited both nations.
If these recommendations are implemented, the tech industry may need to navigate an increasingly complex regulatory landscape. Companies would face new compliance requirements for international investments, technology transfers, and collaborative research projects.
Challenges and future implications
The effectiveness of the proposed measures will likely depend on coordination with allies and partners who share similar technological capabilities and concerns. The report acknowledges this by recommending multilateral approaches to export controls and investment screening.
US-China technological competition has entered a new phase where government policy may play a more direct role in shaping development. Whether this approach accelerates or hinders innovation remains to be seen, but the tech industry should prepare for increased scrutiny and regulation of international technological collaboration.
(Photo by Nathan Bingle)
See also: ******** firms use cloud loophole to access US AI tech
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Manhattan Project 2.0? US eyes AGI breakthrough in escalating China rivalry appeared first on AI News.
View the full article
When devices, networks, and AI work together seamlessly, it creates a smarter, more connected ecosystem.
This isn’t a distant dream; it’s a reality rapidly emerging as blockchain, IoT, and AI come together. These technologies are no longer working in isolation – they form a trio that redefines how industries could function.
David Palmer, chief product officer of Pairpoint by Vodafone, captures this shift: “Blockchain is providing trust. It gave us tokenisation, it gave us smart contracts, and it gave us a new way of automating, which is now spilling over into the wider business landscape.”
Building trust with blockchain
At its core, blockchain has matured from experimental concepts to practical tools for industries. Its early potential is now manifest in real-world applications like supply chain management and decentralised finance (DeFi). Blockchain not only ensures trust through transparency but lets organisations streamline operations and gain new efficiencies.
Palmer described blockchain’s evolution: “It’s been years in the past where we’ve done a lot of proof of concepts, we’ve done a lot of training. It’s been a lot of headlines. But today I really want to explore how blockchain and IoT and AI can work together to really be a part of the new business digital infrastructure that’s emerging.”
IoT’s expanding role in data generation
IoT devices have become omnipresent, embedded in everything from cars and drones to household sensors. Experts expect that by 2030, there will be around 30 billion IoT devices worldwide. These devices generate massive amounts of data, which AI systems capitalise on to provide actionable insights. According to Palmer, “By 2030, we’re expecting over 30 billion IoT devices. These are cars, drones, cabinets, sensors, all woven into the business process and business industry.”
But IoT isn’t just about data collection. It introduces the concept of the “economy of things,” where devices transact autonomously. To make this work, however, these devices need secure and reliable connectivity – a role blockchain is uniquely equipped to fulfil.
AI’s appetite for reliable data
AI thrives on data, but the quality and security of that data are paramount. Public datasets have reached their limits, pushing businesses to tap into proprietary data generated by IoT devices. This creates a two-way relationship: IoT devices supply data for AI, while AI enhances these devices with real-time intelligence.
Palmer emphasises the importance of data trustworthiness in this ecosystem: “You need an identity which gives you origin of data. So we know the data is coming from a certain source, is signed, but then we also need to trust the AI that’s coming back.”
Blockchain plays an important role in ensuring trust. It guarantees the legitimacy of both the data given to AI systems and the intelligence delivered back to IoT devices through verified digital identities and cryptographic signing.
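As a rough illustration of the pattern Palmer describes, the following Python sketch signs a sensor reading with a device-held key and verifies it before it is handed to an AI system. It uses the widely available cryptography package and is a minimal example of the idea, not a description of Vodafone’s implementation.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each device holds a private key; the matching public key acts as a
# verifiable identity that data consumers can check against.
device_key = Ed25519PrivateKey.generate()
device_identity = device_key.public_key()

reading = b'{"sensor": "ev-charger-17", "kwh": 12.4}'
signature = device_key.sign(reading)

# Before the reading enters an AI pipeline, confirm it really came from
# the claimed device and was not altered in transit.
try:
    device_identity.verify(signature, reading)
    print("Reading authenticated; safe to use.")
except InvalidSignature:
    print("Rejecting tampered or mis-attributed data.")
```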
Digital wallets and the adoption of blockchain
Digital wallets are becoming a cornerstone of this evolving ecosystem. Their global numbers are expected to grow from 4 billion today to 5.6 billion by 2030. Unlike traditional wallets, blockchain-enabled wallets go beyond cryptocurrencies, supporting functionalities like account abstraction and integration with tools like WalletConnect.
One breakthrough is the integration of tokenised bank deposits. These bridge traditional banking with blockchain, encouraging businesses to use blockchain for their transaction needs. As a result, blockchain is making its way into broader business applications.
Finance meets IoT
The integration of finance into IoT devices is another step forward. Using smart contracts and AI, devices as disparate as cars and drones can now handle payments autonomously. Toll payments, EV charging, and retail purchases are just the beginning of this embedded finance ecosystem.
Palmer illustrated the potential: “By linking EV chargers and vehicles to blockchain, you can then relate that to their payment credential and their payment preferences. And then you can have a peer-to-peer transaction.”
The same principle applies to energy grids, where vehicles can sell energy during peak times and recharge during off-peak hours, thereby enhancing sustainability.
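The logic of such a peer-to-peer settlement can be sketched in a few lines. The toy Python below stands in for what a smart contract would enforce on-chain: it meters the energy delivered, then moves tokens from the vehicle’s wallet to the charger’s. Names, balances, and prices are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    owner: str
    balance: float  # in some token, e.g. a tokenised deposit

@dataclass
class ChargeSession:
    """Toy stand-in for a contract settling an EV charging session."""
    vehicle: Wallet
    charger: Wallet
    price_per_kwh: float
    kwh_delivered: float = 0.0

    def meter(self, kwh: float) -> None:
        self.kwh_delivered += kwh

    def settle(self) -> float:
        cost = self.kwh_delivered * self.price_per_kwh
        if self.vehicle.balance < cost:
            raise ValueError("insufficient funds")
        # On a real chain the contract would execute this transfer
        # atomically; here it is simply two balance updates.
        self.vehicle.balance -= cost
        self.charger.balance += cost
        return cost

car = Wallet("vehicle-42", balance=50.0)
charger = Wallet("charger-07", balance=0.0)
session = ChargeSession(car, charger, price_per_kwh=0.30)
session.meter(20.0)      # 20 kWh delivered
print(session.settle())  # 6.0 tokens transferred peer-to-peer
```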
Decentralised infrastructure networks
Another interesting development is the rise of decentralised physical infrastructure networks (DePIN). These networks allow shared or tokenised resources to create community-driven infrastructures. For instance, protocols like Render pool GPU resources for gaming, while Filecoin decentralises storage.
According to Palmer, “It’s about how communities can build specific AI and specific connectivity infrastructure, specific payments infrastructure for their businesses.”
Blockchain and the role of CBDCs
Governments are also noting blockchain’s potential. Central Bank Digital Currencies (CBDCs) are being explored as a way to integrate blockchain into macroeconomic policies, such as managing money supply and redistributing income. Tokenised deposits further extend blockchain’s role by digitising traditional monetary systems.
With CBDCs and tokenised deposits, blockchain is moving beyond niche applications to become an important part of financial ecosystems worldwide.
The metaverse and its evolution
The metaverse, once a far-off concept, is rapidly evolving. Innovations like AI-enabled smart glasses change how users interact with immersive digital content. Palmer noted: “This year, the introduction of the glasses by Meta […] allow you to […] access your content but also have access to AI agents.”
AI robots are also adding a new dimension to the metaverse by bridging virtual and physical experiences. These same technologies and methods open up opportunities in a variety of industries, including manufacturing and healthcare.
A seamless digital ecosystem
The convergence of blockchain, IoT, and AI marks a turning point in digital transformation. Blockchain ensures trust, IoT generates data, and AI delivers intelligence. Together, these technologies promise to create a digital operating system capable of reshaping industries and economies by 2030.
Palmer concludes, “If we can link billions of devices to blockchain and AI through secure infrastructure, we unlock the potential of a truly interconnected digital economy.”
See also: AI meets blockchain and decentralised data
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post How blockchain, IoT, and AI are shaping the future of digital transformation appeared first on AI News.
View the full article
As we approach a new year filled with potential, the landscape of technology, particularly artificial intelligence (AI) and machine learning (ML), is on the brink of significant transformation. Manish Jethwa, CTO at Ordnance Survey (OS), the national mapping agency for Great Britain, offers an insightful glimpse into what we can expect from these advancements and their implications for the geospatial sector.
Breaking Down Barriers with AI
Looking ahead, Jethwa anticipates continued significant advancements in AI and machine learning, particularly with the push towards generative AI. According to him, integrating large language models (LLMs) with more sophisticated agents will not only allow complex tasks to be performed on behalf of users but also further reduce barriers to interaction. This shift, especially in the geospatial field, means that translating natural language into precise data queries will become more seamless, ultimately making geospatial datasets more accessible, mainstream, and user-friendly.
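To make that concrete, here is a hedged sketch of the pattern: a large language model is prompted to translate a natural-language question into a spatial SQL query. The call_llm helper is a hypothetical stand-in for whichever model API is used, and the two-table schema is invented for illustration; none of this reflects OS’s actual tooling.

```python
SCHEMA = """
Table buildings(id INTEGER, type TEXT, geom GEOMETRY)
Table flood_zones(id INTEGER, risk TEXT, geom GEOMETRY)
"""

PROMPT_TEMPLATE = """You translate questions into spatial SQL.
Schema:
{schema}
Question: {question}
Answer with a single SQL statement only."""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    raise NotImplementedError("wire this up to your model provider")

def question_to_query(question: str) -> str:
    """Turn a plain-English question into a spatial query via the LLM."""
    prompt = PROMPT_TEMPLATE.format(schema=SCHEMA, question=question)
    return call_llm(prompt)

# question_to_query("Which schools fall inside high-risk flood zones?")
# might plausibly yield:
#   SELECT b.id FROM buildings b
#   JOIN flood_zones f ON ST_Intersects(b.geom, f.geom)
#   WHERE b.type = 'school' AND f.risk = 'high';
```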
Training for Complex Tasks
Beyond LLMs, Jethwa is optimistic about progress in the broader category of machine learning, driven by greater access to graphics processing units for training.
He says: “At Ordnance Survey (OS), we’ll leverage this capability to train models for specific, complex tasks such as automatic feature extraction from imagery.
“With an increasing volume of data generated automatically, hopefully next year will also bring innovative tools and techniques to validate data, ensuring it can be confidently utilised for its intended use.”
He underscores the importance of not only pursuing new capabilities but also ensuring that these tools are integrated responsibly into workflows, focusing on quality and risk management.
The Ethical Frontier
The rapid evolution of AI brings with it an urgent need for ethical considerations.
Jethwa explains: “I would like to see a greater emphasis on ethical AI and responsible technology development,” including creating AI systems that are “transparent, fair, and unbiased” while also considering their environmental and societal impact.
This focus on ethics is encapsulated in OS’s Responsible AI Charter, which guides their approach to integrating new techniques safely.
Moreover, Jethwa highlights the role of workforce development in successful transformations. He believes organisations must commit to “retraining and upskilling employees to prepare them for the impact of AI and digital transformation.”
This is vital to ensure that in the pursuit of enhanced efficiency, companies do not “lose the personality, creativity, and emotion that we bring as humans into the workplace.”
Embracing Change While Managing Risks
Despite the promise of technological advancements, obstacles remain in the journey toward digital transformation. Jethwa notes that challenges such as “cultural resistance and rapid successive changes leading to change fatigue will likely persist.”
He advocates for a careful balance between adopting new technologies and addressing the human elements of transformation processes.
As AI continues to influence various aspects of business, from decision-making to risk management, the issue of cybersecurity also looms large. Jethwa points out that “cybersecurity threats being powered by AI are becoming more sophisticated,” urging companies to develop comprehensive strategies that cover everything from data storage to analysis documentation.
The Imperative to Progress
In an evolving landscape, organisations that stagnate risk falling behind their competitors. Jethwa explains: “Companies that fail to keep up open themselves up to risks, such as changing customer expectations as well as attracting and retaining talent.”
He also emphasises the need for a “clear vision of future goals, effective communication of progress, and celebrating milestones to sustain momentum” in digital transformation initiatives.
As we move into a new year filled with promise, the future of AI and geospatial technology holds transformative power – but it must be used responsibly. The path that lies ahead in 2025 requires vigilance, an unwavering commitment to ethical practices and a human touch in order to drive successful innovation.
(Photos by Annie Spratt and Ordnance Survey)
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Ordnance Survey: Navigating the role of AI and ethical considerations in geospatial technology appeared first on AI News.
View the full article
Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more.
AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.
“The ***** of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys.
“This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”
Regions diverge in regulatory strategy
The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026.
Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”
Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations specific to certain AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021.
“In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said.
“Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”
The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level.
“There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted.
This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason.
“There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys.
“It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”
Balancing innovation and safety
Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions.
Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack.
“More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.
This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised.
AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators.
“Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.
Impact on related industries
One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution.
“From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys.
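A minimal sketch of that idea: fetch a page, extract candidate values, then run an automated validation pass before the data is used downstream. The heuristic check below marks the spot where a trained model might sit in a real pipeline; the URL and the ‘price’ class-name convention are assumptions for illustration.

```python
import requests
from html.parser import HTMLParser

class PriceExtractor(HTMLParser):
    """Collect text from elements whose class attribute mentions 'price'."""
    def __init__(self):
        super().__init__()
        self.capture = False
        self.prices: list[str] = []

    def handle_starttag(self, tag, attrs):
        self.capture = any("price" in (value or "")
                           for name, value in attrs if name == "class")

    def handle_data(self, data):
        if self.capture and data.strip():
            self.prices.append(data.strip())
            self.capture = False

def looks_valid(price: str) -> bool:
    # A real pipeline might use a trained model here; this rule is a stand-in.
    return price.startswith(("£", "$", "€")) and any(c.isdigit() for c in price)

html = requests.get("https://example.com/products", timeout=10).text
parser = PriceExtractor()
parser.feed(html)
valid = [p for p in parser.prices if looks_valid(p)]
print(f"kept {len(valid)} of {len(parser.prices)} scraped prices")
```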
However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny.
“AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added.
“At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”
Copyright battles and legal precedents
The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools.
High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission.
“These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys.
While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve?
“Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys.
“It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”
Just this week, the *** Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out.
Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.
The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms.
(Photo by Nathan Bingle)
See also: Anthropic urges AI regulation to avoid catastrophes
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post AI governance: Analysing emerging global regulations appeared first on AI News.
View the full article