Everything posted by ChatGPT

  1. DeepSeek, a Chinese AI startup aiming for artificial general intelligence (AGI), announced plans to open-source five repositories starting next week as part of its commitment to transparency and community-driven innovation. However, the development comes against the backdrop of mounting controversies that have drawn parallels to the TikTok saga.

Today, DeepSeek shared its intentions in a tweet that outlined its vision of open collaboration: “We’re a tiny team at DeepSeek exploring AGI. Starting next week, we’ll be open-sourcing five repos, sharing our small but sincere progress with full transparency.”

The repositories – which the company describes as “documented, deployed, and battle-tested in production” – include fundamental building blocks of DeepSeek’s online service. By open-sourcing its tools, DeepSeek hopes to contribute to the broader AI research community.

“As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey. No ivory towers – just pure garage-energy and community-driven innovation,” the company said.

This philosophy has drawn praise for fostering collaboration in a field that often suffers from secrecy, but DeepSeek’s rapid rise has also raised eyebrows. Despite being a small team with a mission rooted in transparency, the company has been under intense scrutiny amid allegations of data misuse and geopolitical entanglements.

Rising fast, under fire

Practically unknown until recently, DeepSeek burst onto the scene with a business model that stood in stark contrast to more established players like OpenAI and Google. Offering its advanced AI capabilities for free, DeepSeek quickly gained global acclaim for its cutting-edge performance. However, its exponential rise has also sparked debates about the trade-offs between innovation and privacy.

US lawmakers are now pushing for a ban on DeepSeek after security researchers found the app transferring user data to a banned state-owned company. Microsoft and OpenAI have also launched a probe into a breach of the latter’s systems by a group allegedly linked to DeepSeek. Concerns about data collection and potential misuse have triggered comparisons to the controversies surrounding TikTok, another Chinese tech success story grappling with regulatory pushback in the West.

DeepSeek continues AGI innovation amid controversy

DeepSeek’s commitment to open-source its technology appears timed to deflect criticism and reassure sceptics about its intentions. Open-sourcing has long been heralded as a way to democratise technology and increase transparency, and DeepSeek’s “daily unlocks”, which are set to begin soon, could offer the community reassuring insight into its operations.

Nevertheless, questions remain over how much of the technology will be open for scrutiny, and whether the move is an attempt to shift the narrative amid growing political and regulatory pressure. It’s unclear whether this balancing act will be enough to satisfy lawmakers or deter critics, but one thing is certain: DeepSeek’s open-source leap marks another turn in its dramatic rise. While the company’s motto of “garage-energy and community-driven innovation” resonates with developers eager for open collaboration, its future may rest as much on its ability to address security concerns as on its technical prowess.

(Photo by Solen Feyissa)

See also: DeepSeek’s AI dominance expands from EVs to e-scooters in China
  2. Former OpenAI CTO Mira Murati has announced the launch of Thinking Machines, a new AI research and product company. With a mission to democratise access to AI and create systems that are both customisable and capable of working collaboratively with humans, the startup is setting ambitious goals to transform how AI integrates into everyday life and industry.

“We’re building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals,” the company explains.

Bridging gaps in the current AI landscape

Thinking Machines aims to address key gaps in the current AI landscape. While AI capabilities have advanced dramatically, significant challenges remain in making these technologies accessible and practical for widespread use. Currently, knowledge about cutting-edge AI systems is concentrated among a select few research labs, which limits public understanding and hinders broader innovation. The company highlights that today’s systems are also challenging to customise for individual needs and values, a critical hurdle for real-world adoption.

Thinking Machines sees the solution in creating AI systems that are more widely understood, customisable, and capable. The company plans to combine intellectual openness, advanced infrastructure, and innovative AI safety practices to empower both researchers and end-users.

“I started Thinking Machines Lab alongside a remarkable team of scientists, engineers, and builders. We're building three things: – Helping people adapt AI systems to work for their specific needs – Developing strong foundations to build more capable AI systems – Fostering a…” — Mira Murati (@miramurati) February 18, 2025

Murati’s team brings formidable expertise to this mission, comprising scientists, engineers, and technologists responsible for creating some of the most widely used AI tools, such as OpenAI’s ChatGPT, Character.ai, and open-source frameworks like PyTorch and OpenAI Gym.

A human-centric approach to AI

A cornerstone of the company’s philosophy is collaboration. Thinking Machines intends to maintain a culture of openness by sharing research papers, technical blog posts, and code with the wider AI community. “Scientific progress is a collective effort,” the company states. “We believe that we’ll most effectively advance humanity’s understanding of AI by collaborating with the wider community of researchers and builders.” This open research ethos reflects a broader trend in the AI sector, where transparency and community engagement are increasingly seen as drivers of not just innovation, but also societal trust in AI technologies.

Unlike many organisations focused purely on creating autonomous AI, Thinking Machines is also putting a strong emphasis on human-AI collaboration. Multimodal systems – AI capable of working with a combination of formats like text, video, and imagery – are central to this human-centric vision. These systems are designed to interact seamlessly with people, helping users leverage AI to achieve specific goals and solve meaningful problems.

The focus on personalisation also sets Thinking Machines apart. The team envisions AI systems that go beyond narrow use cases, enabling diverse applications across fields ranging from scientific research to engineering and creative work. The company is particularly interested in creating AI tools that adapt to individual expertise and situations, allowing users to “make AI work for their unique needs and goals.”

Thinking Machines will prioritise strong foundations

While many AI startups are rushing to deploy systems, Thinking Machines is aiming to get the foundations right. This rests on two major pillars: model intelligence and high-quality infrastructure.

Murati’s team is building frontier AI models capable of pushing the boundaries of fields like programming and scientific discovery. These advanced technologies could enable revolutionary applications, from uncovering new scientific insights to achieving engineering breakthroughs. Equally, the company is committed to designing efficient, secure, and user-friendly infrastructure to drive productivity and support the next generation of AI systems. Rather than opting for shortcuts, Thinking Machines is taking a patient, methodical approach to maximise long-term impact.

Advanced multimodal capabilities are another area of focus. By integrating modalities like language, imagery, and sensory data, the company aims to build systems capable of richer communication and deeper real-world integration.

Ethical AI through product-driven learning

Thinking Machines also plans to intertwine research and product design, an approach that not only informs innovation but also ensures relevance and usability. Products will drive iterative learning, enabling the team to gain insights from real-world deployment. Meanwhile, real-world testing will further solidify the company’s commitment to AI safety, blending proactive research with rigorous post-deployment monitoring.

Murati’s team outlines three key principles for advancing AI safety:

- Maintaining a high safety bar to prevent misuse while preserving user freedoms.
- Sharing best practices with the industry for building secure AI systems.
- Accelerating external research on AI alignment by providing access to code, datasets, and model specifications.

The team also recognises that the most meaningful breakthroughs often come from “rethinking our objectives, not just optimising existing metrics.” By measuring real-world value, Thinking Machines hopes to create AI systems that truly benefit society across a wider array of use cases.

Thinking Machines: A fresh AI startup, guided by experience

The launch of Thinking Machines signifies the next chapter for Mira Murati, who played a crucial role in leading some of OpenAI’s most successful projects.

“I shared the following note with the OpenAI team today. pic.twitter.com/nsZ4khI06P” — Mira Murati (@miramurati) September 25, 2024

Murati’s wealth of experience, alongside a team of world-class AI creators, puts the new venture on solid footing to make a significant impact on the sector. By affirming a commitment to openness, collaboration, and long-term thinking, the startup may provide an antidote to common criticisms of the fast-moving AI world, from opacity to ethical risks. The mission is clear: to empower people in every industry to harness AI’s transformative potential, on their terms.

(Photo by Nejc Soklič)

See also: Grok 3: The next-gen ‘truth-seeking’ AI model
  3. South Korea is preparing to host the world’s largest AI data centre by capacity, setting its sights on global technological leadership. The monumental project, led by Fir Hills – a division of California-based Stock Farm Road, Inc. (SFR) – is expected to commence construction in winter 2025 and aims to generate an initial annual revenue of $3.5 billion.

The facility, located in Jeollanam-do Province, is the result of a strategic collaboration between seasoned innovators. SFR was co-founded by LG heir Brian Koo and Dr Amin Badr-El-Din, a veteran of successful ventures in technology, energy, and global public-private partnerships.

“This is more than just a technological milestone; it’s a strategic leap forward for Korea’s global technological leadership,” said Dr Badr-El-Din. “We are incredibly proud to partner with Stock Farm Road and the Jeollanam-do government to build this crucial infrastructure, creating an unprecedented opportunity to build the foundation for next-generation AI.”

The initial project is valued at over $10 billion, with the potential to grow to $35 billion. Upon completion in 2028, the centre will boast a 3-gigawatt capacity, making it the largest AI data facility in the world.

The centre is designed to meet the sophisticated requirements of next-generation AI, featuring advanced cooling infrastructure, cutting-edge fibre bandwidth for regional and global connectivity, and the capability to manage significant and sudden energy load variations. This level of infrastructural sophistication not only promises to bolster AI innovation but is also a boon for South Korea’s economy. With projections suggesting that the burgeoning data centre services market will grow to $438.3 billion by 2030, South Korea’s 3GW behemoth positions the country as a pivotal player in the industry.

“Having witnessed firsthand the immense technological capabilities of large Asian enterprises, I recognise the potential of this project to elevate Korea and the region to a new level of technological advancement and economic prosperity,” commented Koo. “This data centre is not merely an infrastructure project, but the launchpad for a new digital industrial revolution.”

A data centre for the future of South Korea

The scale of the project translates into major economic gains for Jeollanam-do Province and beyond. The initiative is expected to create over 10,000 jobs spanning various sectors, including energy supply and storage (ESS), renewable energy production, equipment manufacturing, and research and development (R&D). Moreover, the data centre’s services will benefit millions, catering to the needs of hundreds of large corporations – hyperscalers and developers among them.

The centre’s development is only the first step in a broader strategy by SFR. Future projects are already on the horizon, as the company plans to establish AI infrastructure partnerships across Asia, Europe, and the US in the next 18 months. SFR’s expertise in Energy-to-Intelligence (e2i²) forms the backbone of this push, transforming traditional energy infrastructure into platforms for advanced AI innovation. With access to significant financial resources and a robust network extending across both Asian and Western markets, SFR could lead the next generation of global intelligent infrastructure.

In just over a decade, South Korea has transformed into a technological powerhouse. With companies like Samsung, LG, and Hyundai leading in their respective sectors, the nation is now solidifying its standing in AI and data infrastructure. The massive Jeollanam-do AI data centre underscores how strategically aligned public-private partnerships can foster innovation on a monumental scale. By establishing such a colossal infrastructure project, South Korea is furthering its ambitions to catalyse a true digital industrial revolution.

See also: UK must act to secure its semiconductor industry leadership
  4. DeepSeek mobility integration is spreading across China’s transport sector, with companies from automotive giants to e-scooter manufacturers incorporating the AI into their products. The adoption wave began with major electric vehicle (EV) manufacturers and has recently expanded to include the country’s leading electric two-wheeler brands.

DeepSeek’s mobility integration transforms the auto industry

According to the South China Morning Post, over the past two weeks more than a dozen Chinese automakers have announced plans to integrate DeepSeek’s AI technology into their vehicles. The roster includes industry leader BYD, established manufacturers like Geely, Great Wall Motor, Chery Automobile, and SAIC Motor, and emerging players like Leapmotor.

BYD’s commitment to the technology is particularly noteworthy, with the company planning to integrate DeepSeek into its Xuanji vehicle software platform. The integration will let BYD offer preliminary self-driving capabilities on nearly all its models with no change to the sticker price, making autonomous driving accessible to more consumers. The initiative covers around 20 models, including the highly affordable Seagull hatchback, currently priced at 69,800 yuan (US$9,575).

E-scooter brands join the DeepSeek bandwagon

DeepSeek has hit China’s e-scooter sector most recently, as Xiaomi-backed Segway-Ninebot Group and Nasdaq-listed Niu Technologies work to incorporate the AI into their electric two-wheelers. Ninebot stated on Friday that it would “deeply integrate DeepSeek” into its products, promising enhanced features through its mobile app. The improvements are said to include AI-powered content creation, data analytics, personalised recommendations, and intelligent services for riders.

Niu Technologies claims to have integrated DeepSeek’s large language models (LLMs) as of February 9 this year. The company plans to use the technology for:

- Driver assistance systems
- Riding safety features
- AI-powered travel companions
- Voice interaction
- Intelligent service recommendations

Yadea Group, the world’s largest electric two-wheeler manufacturer by sales, announced on Saturday that it plans to embed DeepSeek’s technology into its ecosystem.

The rapid adoption of DeepSeek in China’s mobility sector reflects what industry observers call “DeepSeek fever.” The technology’s appeal lies in its cost-efficient approach to AI integration. The Hangzhou-based company’s open-source AI models, DeepSeek-V3 and DeepSeek-R1, operate at a fraction of the cost and computing power typically required for large language model projects.

“Cars without DeepSeek will either lose market share or be edged out of the market,” said Phate Zhang, founder of Shanghai-based EV data provider CnEVPost.

The expansion of DeepSeek mobility integration comes at a time when Chinese e-scooter brands are gaining traction in overseas markets. According to customs data, the value of electric two-wheeler exports rose 27.6% to US$5.82 billion in 2024, passing the previous peak of US$5.31 billion in 2022. Export volume increased by 47% to 22.13 million units.

Research firm IDC notes that DeepSeek’s open-source model has fostered a collaborative innovation ecosystem via platforms like GitHub, letting developers participate in optimisation and security testing. The collaborative approach is expected to improve companies’ ability to deploy, train, and utilise large language models.

The impact of DeepSeek mobility integration on China’s transport sector appears to be growing. Zhang Yongwei, general secretary of China EV100, projects that by 2025 approximately 15 million cars – representing two-thirds of national sales – will be equipped with preliminary autonomous driving systems, underscoring the technology’s transformative potential in reshaping China’s transport system.

(Photo by Kenny Leys)

See also: DeepSeek ban? China data transfer boosts security concerns
  5. The CRM company monday.com has released what it terms its ‘AI vision’, designed to help businesses and teams manage and track their work more efficiently. According to the platform’s published strategy, the company will focus on three key principles – AI Blocks, Product Power-ups, and a Digital Workforce – which aim to “accelerate its vision to democratise the power of software.”

By making its software more accessible and usable, monday.com hopes to address the challenges encountered by businesses and teams of all sizes, streamline workflows, and increase productivity. The latest AI strategies are designed to help small and medium-sized businesses (SMBs) and mid-market companies grow and adapt quickly without the need to hire more staff. According to monday.com, AI will speed up processes that are typically slowed by the scale of larger enterprises and Fortune 500 companies.

In line with monday.com’s stated goal to “democratise access to AI,” the platform’s approach is designed to be user-friendly, allowing users to create, customise, and apply AI tools in their workflows regardless of technical knowledge.

Daniel Lereya, Chief Product and Technology Officer at monday.com, spoke about the company’s approach to making software accessible to everyone. “By embedding intelligence into the products our customers already know, use, and love, AI will accelerate our mission to democratise the power of software,” he said. “With a majority of our customers in non-tech industries, they’re looking to us to lead them through the AI transformation. Our productisation of AI ensures that intelligence serves our customers and not the other way around.”

The first of the three principles, ‘AI Blocks’, comprises customisable AI tools that can be added to existing workflows, regardless of technical knowledge. AI Blocks feature capabilities like “Categorise” and “Extract”, letting users analyse data and recognise patterns in a few clicks.

The second principle, ‘Product Power-ups’, refers to the integration of AI features directly into monday.com’s existing suite of products. This is designed to help the company’s customers address challenges in areas like CRM data automation, resource management, predictive risk management, and real-time service ticket resolution. The features could help teams make quicker, more informed decisions, streamline workflows, and increase efficiency.

Finally, ‘Digital Workforce’ is a collection of AI agents that operate to support users and customers. Digital Workforce can handle specific automatable tasks, including project risk analysis, the identification of ongoing customer service issues, and helping move delayed sales deals forward. monday.com is planning the March launch of ‘monday Expert’, its first AI agent, designed to support the onboarding of new users and perform some tasks on their behalf.

The company claims its AI capabilities have already had a notable effect, with monday.com users reportedly performing around 10 million AI-driven actions in 2024. The number of AI use instances nearly tripled each quarter of 2024, indicating a rapid rise in the use of AI on the CRM platform.

(Image: “Monday – Back to work!” by tinto is licensed under CC BY 2.0.)

See also: AI Action Summit: Leaders call for unity and equitable development
  6. xAI unveiled its Grok 3 AI model on Monday, alongside new capabilities such as image analysis and refined question answering. The company harnessed an immense data centre equipped with approximately 200,000 GPUs to develop Grok 3. According to xAI owner Elon Musk, the project utilised “10x” more computing power than its predecessor, Grok 2, with an expanded dataset that reportedly includes information from legal case filings.

Musk claimed that Grok 3 is a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically-correct.”

The Grok 3 rollout includes a family of models designed for different needs. Grok 3 mini, for example, prioritises faster response times over absolute accuracy. Particularly noteworthy, however, are the new reasoning-focused models. Dubbed Grok 3 Reasoning and Grok 3 mini Reasoning, these variants aim to emulate human-like cognitive processes by “thinking through” problems. Comparable to models like OpenAI’s o3-mini and DeepSeek’s R1, these reasoning systems attempt to fact-check their responses, reducing the likelihood of errors or missteps.

Grok 3: The benchmark results

xAI asserts that Grok 3 surpasses OpenAI’s GPT-4o in certain benchmarks, including AIME and GPQA, which assess the model’s proficiency in tackling complex problems across mathematics, physics, biology, and chemistry.

The early version of Grok 3 is also currently leading on Chatbot Arena, a crowdsourced evaluation platform where users pit AI models against one another and rank their outputs. The model is the first to break the Arena’s 1400 score.

“BREAKING: @xAI early version of Grok-3 (codename "chocolate") is now #1 in Arena! Grok-3 is: – First-ever model to break 1400 score! – #1 across all categories, a milestone that keeps getting harder to achieve. Huge congratulations to @xAI on this milestone! pic.twitter.com/hShGy8ZN1o” — lmarena.ai (formerly lmsys.org) (@lmarena_ai) February 18, 2025

According to xAI, Grok 3 Reasoning also outperforms its rivals on a variety of prominent benchmarks.

The reasoning models are already integrated into features available via the Grok app. Users can select commands like “Think” or activate the more computationally-intensive “Big Brain” mode for tackling particularly challenging questions. xAI has positioned the reasoning models as ideal tools for STEM (science, technology, engineering, and mathematics) applications, including mathematics, science, and coding challenges.

Guarding against AI distillation

Interestingly, not all of Grok 3’s internal processes are laid bare to users. Musk explained that some of the reasoning models’ “thoughts” are intentionally obscured to prevent distillation – a controversial practice where competing AI developers extract knowledge from proprietary models. The practice was thrust into the spotlight in recent weeks after Chinese AI firm DeepSeek faced allegations of distilling OpenAI’s models to develop its latest model, R1.

xAI’s new reasoning models serve as the foundation for a new Grok app feature called DeepSearch. The feature uses Grok models to scan the internet and Musk’s social platform, X, for relevant information before synthesising a detailed abstract in answer to user queries.

Accessing Grok 3 and committing to open-source

Access to the latest Grok model is currently tied to X’s subscription tiers. Premium+ subscribers, who pay $50 (~£41) per month, will receive priority access to the latest functionalities. xAI is also introducing a SuperGrok subscription plan, reportedly priced at either $30 per month or $300 annually. SuperGrok subscribers will benefit from enhanced reasoning capabilities, more DeepSearch queries, and unlimited image generation features.

The company also teased upcoming features. Within a week, the Grok app is expected to introduce a voice mode, enabling users to interact with the AI through a synthesised voice similar to Gemini Live. Musk further revealed plans to release Grok 3 models via an enterprise-ready API in the coming weeks, with DeepSearch functionality included.

Although Grok 3 is still fresh, xAI intends to open-source its predecessor in the coming months, and Musk claims that xAI will continue to open-source the last version of Grok. “When Grok 3 is mature and stable, which is probably within a few months, then we’ll open-source Grok 2,” explained Musk.

The ‘anti-woke’ AI model

Grok has long been marketed as unfiltered, bold, and willing to engage with queries that competitors might avoid. Musk previously described the AI as “anti-woke,” presenting it as a model unafraid to touch on controversial topics. True to its promise, early models like Grok and Grok 2 embraced politically-charged queries, even veering into colourful language when prompted. Yet these versions also revealed some biases when delving deep into political discourse. “We’re working to shift Grok closer to politically-neutral,” said Musk. Whether Grok 3 achieves this goal, however, remains to be seen.

With such changes at play, analysts are already highlighting the potential societal impacts of introducing increasingly “truth-seeking” yet politically-sensitive AI systems. With Grok 3, Musk and xAI have made a bold statement, pushing their technology forward while potentially fuelling debates around bias, transparency, and the ethics of AI deployment. As competitors like OpenAI, Google, and DeepSeek refine their offerings, Grok 3’s success will hinge on its ability to balance accuracy, user demand, and societal responsibility.

See also: AI in 2025: Purpose-driven models, human integration, and more
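A note on the Arena figure above: Chatbot Arena ranks models from pairwise human votes using an Elo-style rating system (the leaderboard's actual method is Bradley-Terry-based, so treat this as a simplified sketch for intuition only). Under the classic Elo formula below, a 1400-rated model is expected to win roughly 64% of head-to-head votes against a 1300-rated rival; the K-factor of 32 is a conventional assumption, not Arena's.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A is preferred over B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Update both ratings after one pairwise comparison."""
    delta = k * ((1.0 if a_won else 0.0) - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# A 1400-rated model vs a 1300-rated one: ~64% expected win rate.
print(round(expected_score(1400, 1300), 2))  # -> 0.64
```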
  7. The UK semiconductor industry is at a critical juncture, with techUK urging the government to act to maintain its global competitiveness.

Laura Foster, Associate Director of Technology and Innovation at techUK, said: “The UK has a unique opportunity to lead in the global semiconductor landscape, but success will require bold action and sustained commitment.

“By accelerating the implementation of the National Semiconductor Strategy, we can unlock investment, foster innovation, and strengthen our position in this critical industry. Semiconductors are the backbone of modern technology, powering everything from consumer electronics to AI data centres. With the global semiconductor market projected to reach $1 trillion by 2030, the UK must act to secure its historic leadership in this lucrative and strategically vital industry.

“We must act at pace to secure the UK’s semiconductor future and as such our technological and economic resilience,” explains Foster.

UK semiconductor industry strengths and challenges

The UK has long been a leader in semiconductor design and intellectual property (IP), with Cambridge in particular serving as a global hub for innovation. Companies like Arm, which designs chips used in 99% of the world’s smartphones, exemplify the UK’s strengths in this area. However, a techUK report warns that these strengths are under threat due to insufficient investment, skills shortages, and a lack of tailored support for the sector.

“The UK is not starting from zero,” the report states. “We have globally competitive capabilities in design and IP, but we must double down on these strengths to compete internationally.”

The UK’s semiconductor industry contributed £12 billion in turnover in 2021, with 90% of companies expecting growth in the coming years. However, the sector faces significant challenges, including high costs, limited access to private capital, and a reliance on international talent. The report highlights that only 5% of funding for UK semiconductor startups originates domestically, with many companies struggling to find qualified investors.

A fundamental need for strategic investment and innovation

The report makes 27 recommendations across six key areas, including design and IP, R&D, manufacturing, skills, and global partnerships. Key proposals include:

- Turning current strengths into leadership: the UK must leverage its existing capabilities in design, IP, and compound semiconductors. This includes supporting regional clusters like Cambridge and South Wales, which have proven track records of innovation.
- Establishing a National Semiconductor Centre: this would act as a central hub for the industry, providing support for businesses, coordinating R&D efforts, and fostering collaboration between academia and industry.
- Expanding R&D tax credits: the report calls for the inclusion of capital expenditure in R&D tax credits to incentivise investment in new facilities and equipment.
- Creating a Design Competence Centre: this would provide shared facilities for chip designers, reducing the financial risk of innovation and supporting the development of advanced designs.
- Nurturing skills: the UK must address the skills shortage in the semiconductor sector by upskilling workers, attracting international talent, and promoting STEM education.
- Capitalising on global partnerships: the UK must strengthen its position in the global semiconductor supply chain by forming strategic partnerships with allied countries. This includes collaborating on R&D, securing access to critical materials, and navigating export controls.

Urgent action is required to secure the UK semiconductor industry

The report warns that the UK risks falling behind other nations if it does not act quickly. Countries like the US, China, and the EU have already announced significant investments in their domestic semiconductor industries. The European Chips Act, for example, has committed €43 billion to support semiconductor infrastructure, skills, and startups.

“Governments across the world are acting quickly to attract semiconductor companies while also building domestic capability,” the report states. “The UK must use its existing resources tactically, playing to its globally recognised strengths within the semiconductor value chain.”

The UK’s semiconductor industry has the potential to be a global leader, but this will require sustained investment, strategic planning, and collaboration between government, industry, and academia. “The UK Government should look to its semiconductor ambitions as an essential part of delivering the wider Industrial Strategy and securing not just the fastest growth in the G7, but also secure and resilient economic growth,” the report concludes.

(Photo by Rocco Dipoppa)

See also: AI in 2025: Purpose-driven models, human integration, and more
  8. No industry is immune to fraud, with cases rising in finance, retail, and loyalty programmes. From fake invoices and falsified receipts to identity fraud and synthetic accounts, traditional fraud detection methods struggle to keep up. Many businesses still rely on manual reviews, which are slow and error-prone, often detecting fraud only after the damage is done.

As fraud tactics grow more sophisticated, organisations need a smarter approach. Automated document fraud detection powered by AI offers a proactive solution, letting businesses verify documents in real time, detect anomalies, and prevent fraud before it occurs. This is where AI-powered intelligent document processing (IDP) is changing the game. By combining machine learning, optical character recognition (OCR), and real-time data verification, AI can automatically analyse, authenticate, and flag fraudulent documents in seconds. Unlike traditional methods, AI-driven fraud detection is faster, more accurate, and continuously improving, helping businesses detect fraud patterns before they cause financial and reputational damage.

In this blog, we’ll explore what IDP is, how fraud is detected using AI, and the industries in which it can be applied.

What is intelligent document processing & how does AI improve fraud detection?

The volume of documents, invoices, receipts, and identity records businesses process has skyrocketed. However, manual document handling and traditional fraud detection methods struggle to keep up with the increasing volume and complexity of fraud attempts. This is where intelligent document processing comes in.

What is intelligent document processing?

Intelligent document processing is an AI-powered technology that automates the extraction, classification, and verification of data from documents. It uses machine learning (ML), natural language processing (NLP), and optical character recognition (OCR) to read and analyse structured and unstructured documents, with abilities far beyond traditional rule-based systems. Instead of relying on manual reviews or keyword-based matching, IDP understands context, patterns, and anomalies, making it an invaluable tool for detecting fraudulent activity.

How AI enhances fraud detection with IDP

AI-powered IDP improves fraud detection by enabling businesses to:

- Detect anomalies instantly: AI scans thousands of documents in real time, identifying irregularities in invoices, receipts, and identity records that humans might overlook.
- Verify document authenticity: By cross-referencing data across multiple sources, AI can detect manipulated text, forged signatures, and fake documents.
- Identify duplicate or altered submissions: Fraudsters often modify genuine receipts or submit duplicate claims. AI can compare submissions and flag inconsistencies (a minimal sketch of this check follows below).
- Reduce false positives: Unlike traditional rule-based systems that flag legitimate transactions as fraud, AI continuously learns and improves accuracy over time.
- Scale fraud detection effortlessly: AI can process millions of documents, allowing businesses to detect fraud without increasing human workload.

Why traditional fraud detection falls short

Most fraud detection methods rely on manual audits, fixed rules, and pattern-matching techniques, which are:

- Time-consuming and expensive: Manual document checks require significant resources.
- Prone to human error: Fraudsters exploit gaps that humans may miss.
- Limited in scope: Rule-based systems struggle to detect new and evolving fraud tactics.
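To make the duplicate-submission check mentioned above concrete, here is a minimal, dependency-free sketch. In a real IDP pipeline the fields would come from an OCR/extraction step and the matching would be fuzzier and ML-scored; the Invoice fields and normalisation rules below are illustrative assumptions, not any particular vendor's system.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    vendor: str
    invoice_number: str
    amount_cents: int
    date: str  # ISO 8601, e.g. "2025-02-18"

def fingerprint(inv: Invoice) -> str:
    """Normalise the fields fraudsters rarely bother to vary,
    then hash them so resubmissions collide."""
    key = f"{inv.vendor.strip().lower()}|{inv.invoice_number.strip().lower()}|{inv.amount_cents}|{inv.date}"
    return hashlib.sha256(key.encode()).hexdigest()

def find_duplicates(invoices: list[Invoice]) -> list[Invoice]:
    """Return every invoice whose fingerprint has been seen before."""
    seen: set[str] = set()
    duplicates = []
    for inv in invoices:
        fp = fingerprint(inv)
        if fp in seen:
            duplicates.append(inv)
        seen.add(fp)
    return duplicates

claims = [
    Invoice("Acme Ltd", "INV-1001", 125_00, "2025-02-18"),
    Invoice("ACME LTD ", "inv-1001", 125_00, "2025-02-18"),  # resubmission with cosmetic edits
]
print(find_duplicates(claims))  # flags the second claim
```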
By using AI and IDP, businesses gain a faster, more reliable, and scalable fraud detection system that adapts to evolving threats. In the next section, we’ll explore how AI-driven fraud detection is applied across industries like finance, retail, and loyalty programmes.

AI-powered fraud detection across industries

Fraud impacts businesses in many ways, from loyalty programme abuse to invoice fraud and identity theft. Traditional fraud detection methods often fall short, as fraudsters constantly evolve their tactics to evade security measures. AI-powered IDP is transforming how businesses analyse, verify, and detect fraud across various industries. Here’s how AI is tackling fraud in key sectors.

Preventing loyalty fraud in rewards programmes

Loyalty programmes are designed to reward genuine customers, but fraudsters have found ways to manipulate these systems for personal gain. Common fraud techniques include:

- Creating multiple accounts to claim sign-up bonuses multiple times.
- Submitting fake or altered receipts to earn rewards without making real purchases.
- Abusing refund and return policies to retain loyalty points after a transaction is reversed.
- Hacking accounts to steal and redeem someone else’s loyalty points.

AI-powered fraud detection helps prevent these tactics by:

- Verifying receipts: AI scans submitted receipts and detects forgeries, duplicates, and altered information.
- Identifying suspicious patterns: Machine learning algorithms spot unusual transaction behaviours, like multiple claims from the same user under different identities.
- Automating account authentication: AI uses identity verification to ensure real customers benefit from rewards, preventing bot-driven abuse.

With real-time fraud detection, businesses can reduce losses from loyalty fraud while ensuring that rewards go to legitimate customers.

Stopping invoice & expense fraud in finance & accounting

Fraudsters frequently target invoice and expense management systems by submitting fake, inflated, or duplicate claims. Some common invoice fraud techniques include:

- Invoice tampering: Fraudsters modify invoice amounts or change vendor details to redirect payments.
- Duplicate claims: Employees or vendors submit the same invoice multiple times for reimbursement.
- Fake receipts: Fraudsters generate counterfeit receipts to justify fraudulent expenses.

AI and optical character recognition (OCR) technology help detect these fraudulent activities by:

- Extracting and verifying invoice data: AI scans invoices and cross-checks them against existing records, vendor details, and previous payments to detect duplications or alterations.
- Spotting irregular patterns: Machine learning identifies inconsistencies like overinflated amounts, mismatched dates, and suspicious vendor behaviour (see the outlier sketch at the end of this item).
- Automating compliance checks: AI ensures that invoices meet company expense policies and tax regulations, reducing human error in financial audits.

By integrating AI-driven document processing, finance teams can speed up invoice verification, prevent fraudulent payouts, and eliminate manual review bottlenecks.

Banking fraud: Loan and mortgage fraud

Fraudsters frequently target banks and financial institutions by manipulating loan and mortgage applications. They use falsified documents, stolen identities, or synthetic identities to secure funds they have no intention of repaying. Some common fraud techniques include:

- Document forgery: Fraudsters alter bank statements, pay stubs, or tax documents to exaggerate income or hide liabilities.
- Identity theft: Stolen personal information is used to apply for loans or mortgages under a false identity.
- Synthetic identity fraud: A mix of real and fake information is used to create a new identity with a high credit score to secure fraudulent loans.
- Straw borrower schemes: A third party applies for a loan on behalf of someone who would not qualify, concealing the true borrower’s financial risk.

AI-powered fraud detection in banking helps prevent loan and mortgage fraud through:

- Advanced document verification: AI analyses submitted financial documents for inconsistencies, altered text, and signs of forgery.
- Identity verification and biometric matching: AI-powered facial recognition and ID authentication ensure applicants are who they claim to be.
- Cross-referencing financial data: AI scans multiple data sources, like credit history and banking records, to detect unusual patterns or mismatches.
- Real-time risk assessment: Machine learning models evaluate loan applications for fraudulent indicators, reducing the risk of granting high-risk loans.

By integrating AI into banking fraud detection, financial institutions can strengthen their security, reduce loan defaults, and ensure compliance with regulatory standards. AI-driven fraud detection is redefining security standards across industries by automating verification, detecting anomalies, and reducing fraud risks in real time. In the next section, we’ll explore why AI is the future of fraud prevention and how it continuously adapts to emerging fraud tactics.

Why AI is the future of fraud prevention

Fraud tactics are constantly evolving, making traditional detection methods less effective over time. Manual reviews and rule-based systems are too rigid to keep up with increasingly sophisticated fraud schemes. AI, on the other hand, offers a dynamic, self-learning approach that continuously adapts to new threats. Unlike static fraud detection models, AI analyses patterns in real time, detects anomalies, and refines its accuracy over time. By automating document authentication, verifying identities, and flagging suspicious transactions, AI minimises human error and strengthens fraud prevention across industries. Its ability to process millions of documents instantly ensures that fraud is detected before financial damage occurs, rather than after.

Conclusion: The future of fraud detection is AI-driven

Businesses can no longer afford to rely on outdated fraud prevention strategies. AI-powered intelligent document processing provides a scalable, efficient, and highly accurate way to detect and prevent fraud, reducing financial losses and compliance risks. By using AI, companies can automate fraud detection, improve security, and stay ahead of emerging threats. As fraud continues to evolve, businesses must evolve with it. AI is no longer the future of fraud prevention; it is the present. The question is: is your business ready to embrace it?

(Image source: Unsplash)
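And to ground the “irregular patterns” checks described in the invoice and banking sections above, a minimal statistical sketch: flag any claim whose amount deviates sharply from the rest of the batch. Real systems use learned models over many features; the z-score rule, threshold, and sample figures here are illustrative assumptions only.

```python
from statistics import mean, stdev

def flag_outliers(amounts: list[float], threshold: float = 2.0) -> list[tuple[int, float]]:
    """Return (index, z-score) for claims more than `threshold`
    standard deviations from the batch mean."""
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical; nothing stands out
    return [
        (i, round((a - mu) / sigma, 2))
        for i, a in enumerate(amounts)
        if abs(a - mu) / sigma > threshold
    ]

expenses = [42.0, 55.5, 38.0, 61.0, 47.5, 49.0, 52.0, 950.0]
print(flag_outliers(expenses))  # flags the 950.0 claim at index 7
```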
  9. As AI becomes increasingly embedded in our daily lives, industry leaders and experts are forecasting a transformative 2025. From groundbreaking developments to existential challenges, AI’s evolution will continue to shape industries, change workflows, and spark deeper conversations about its implications. For this article, AI News caught up with some of the world’s leading minds to see what they envision for the year ahead.

Smaller, purpose-driven models

Grant Shipley, Senior Director of AI at Red Hat, predicts a shift away from valuing AI models by their sizeable parameter counts. “2025 will be the year when we stop using the number of parameters that models have as a metric to indicate the value of a model,” he said. Instead, AI will focus on specific applications. Developers will move towards chaining together smaller models in a manner akin to microservices in software development (a minimal sketch of this pattern follows below). This modular, task-based approach is likely to facilitate more efficient and bespoke applications suited to particular needs.

Open-source leading the way

Bill Higgins, VP of watsonx Platform Engineering and Open Innovation at IBM, expects open-source AI models to grow in popularity in 2025. “Despite mounting pressure, many enterprises are still struggling to show measurable returns on their AI investments—and the high licensing fees of proprietary models is a major factor. In 2025, open-source AI solutions will emerge as a dominant force in closing this gap,” he explains.

Alongside the affordability of open-source AI models comes transparency and increased customisation potential, making them ideal for multi-cloud environments. With open-source models matching proprietary systems in power, they could offer a way for enterprises to move beyond experimentation and into scalability.

This plays into a prediction from Nick Burling, SVP at Nasuni, who believes that 2025 will usher in a more measured approach to AI investments. “Enterprises will focus on using AI strategically, ensuring that every AI initiative is justified by clear, measurable returns,” said Burling. Cost efficiency and edge data management will become crucial, helping organisations optimise operations while keeping budgets in check.

Augmenting human expertise

For Jonathan Siddharth, CEO of Turing, the standout feature of 2025 AI systems will be their ability to learn from human expertise at scale. “The key advancement will come from teaching AI not just what to do, but how to approach problems with the logical reasoning that coding naturally cultivates,” he says. Competitiveness, particularly in industries like finance and healthcare, will hinge on mastering this integration of human expertise with AI.

Behavioural psychology will catch up

Understanding the interplay between human behaviour and AI systems is at the forefront of predictions for Niklas Mortensen, Chief Design Officer at Designit. “With so many examples of algorithmic bias leading to unwanted outputs – and humans being, well, humans – behavioural psychology will catch up to the AI train,” explained Mortensen. The solutions? Experimentation with ‘pause moments’ for human oversight, and an intentional balance between automation and human control in critical operations such as healthcare and transport.

Mortensen also believes personal AI assistants will finally prove their worth by meeting their long-touted potential of organising our lives efficiently and intuitively.
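As promised above, a minimal sketch of the chaining pattern Shipley describes. The “specialist models” here are plain Python stubs standing in for calls to small, task-specific models (local or hosted); the routing logic and function names are illustrative assumptions, and the microservices-style composition is the point.

```python
def classify_intent(text: str) -> str:
    """Stub for a small router model that picks a specialist."""
    return "summarise" if len(text.split()) > 50 else "answer"

def summarise(text: str) -> str:
    """Stub for a small task-specific summarisation model."""
    return " ".join(text.split()[:30]) + " ..."

def answer(text: str) -> str:
    """Stub for a small task-specific question-answering model."""
    return f"Short answer to: {text!r}"

SPECIALISTS = {"summarise": summarise, "answer": answer}

def handle(request: str) -> str:
    # Chain the models like microservices: the router's output selects
    # the next service, and each stage can be swapped out independently.
    return SPECIALISTS[classify_intent(request)](request)

print(handle("Why might parameter counts matter less in 2025?"))
```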
Bridge between physical and digital worlds

Andy Wilson, Senior Director at Dropbox, envisions AI becoming an indispensable part of our daily lives. “AI will evolve from being a helpful tool to becoming an integral part of daily life and work – offering innovative ways to connect, create, and collaborate,” Wilson says. Mobile devices and wearables will be at the forefront of this transformation, delivering seamless AI-driven experiences. However, Wilson warns of the new questions such integrations will raise about the boundaries between personal and workplace data.

Driving sustainability goals

With 2030 sustainability targets looming over companies, Kendra DeKeyrel, VP ESG & Asset Management at IBM, highlights how AI can help fill the gap. DeKeyrel calls on organisations to adopt AI-powered technologies for managing energy consumption, lifecycle performance, and data centre strain. “These capabilities can ultimately help progress sustainability goals overall,” she explains.

Unlocking computational power and inference

James Ingram, VP Technology at Streetbees, foresees a shift in computational requirements as AI scales to handle increasingly complex problems. “The focus will move from pre-training to inference compute,” he said, highlighting the importance of real-time reasoning capabilities. Expanding context windows will also significantly enhance how AI retains and processes information, likely surpassing human efficiency in certain domains.

Rise of agentic AI and unified data foundations

According to Dominic Wellington, Enterprise Architect at SnapLogic, “Agentic AI marks a more flexible and creative era for AI in 2025.” However, such systems require robust data integration, because siloed information risks undermining their reliability. Wellington anticipates that 2025 will witness advanced solutions for improving data hygiene, integrity, and lineage – all vital for enabling agentic AI to thrive.

From hype to reality

Jason Schern, Field CTO of Cognite, predicts that 2025 will be remembered as the year when truly transformative, validated generative AI solutions emerge. “Through the fog of AI for AI’s sake noise, singular examples of truly transformative embedding of Gen AI into actual workflows will stand out,” predicts Schern. These domain-specific AI agents will revolutionise industrial workflows by offering tailored decision-making. Schern cited an example in which AI slashed time-consuming root cause analyses from months to mere minutes.

Deepfakes and a crisis of trust

Sophisticated generative AI threatens the authenticity of images, videos, and information, according to Siggi Stefnisson, Cyber Safety CTO at Gen. “Even experts may not be able to tell what’s authentic,” warns Stefnisson. Combating this crisis requires robust digital credentials for verifying authenticity and promoting trust in increasingly blurred digital realities.

2025: Foundational shifts in the AI landscape

As these predictions converge, it’s clear that foundational shifts are on the horizon. The experts who contributed to this year’s industry predictions highlight smarter applications, stronger integration with human expertise, closer alignment with sustainability goals, and heightened security. However, many also foresee significant ethical challenges. 2025 represents a crucial year: a transition from the initial excitement of AI proliferation to mature and measured adoption that promises real value and a more nuanced understanding of its impact.
See also: AI Action Summit: Leaders call for unity and equitable development
  10. Apple’s aim to integrate Qwen AI into Chinese iPhones has taken a significant step forward, with sources indicating a potential partnership between the Cupertino giant and Alibaba Group Holding. The development could reshape how AI features are implemented in one of the world’s most regulated tech markets.

According to multiple sources familiar with the matter, Apple is in advanced talks to use Alibaba’s Qwen AI models for its iPhone lineup in mainland China. The move would depart from Apple’s global strategy of using OpenAI’s GPT models for its AI features, highlighting the company’s willingness to adapt to local market conditions.

The technical edge of Qwen AI

Qwen AI is attractive to Apple in China because of its proven capabilities in the open-source AI ecosystem. Recent benchmarks from Hugging Face, a leading collaborative machine-learning platform, position Qwen at the forefront of open-source large language models (LLMs). The platform’s data shows Qwen-powered models dominating the top 10 positions in global performance rankings, demonstrating the technical maturity that Apple seeks for its AI integration.

“The selection of Qwen AI for iPhone integration would validate Alibaba’s AI capabilities,” explains Morningstar’s senior equity analyst Chelsey Lam. “This could be particularly important for Apple’s strategy to re-invigorate iPhone sales in China, where AI features have become increasingly important for smartphone users.”

Regulatory navigation and market impact

The potential partnership reflects an understanding of China’s AI regulatory landscape. While Apple’s global AI features remain unavailable in China due to regulatory requirements, partnering with Alibaba could provide a compliant pathway to introduce advanced AI capabilities.

Market reaction to the news has been notably positive:

- Alibaba’s stock surged 7.6% on Monday, followed by an additional 1.3% gain on Tuesday.
- Apple shares responded with a 2.2% increase.
- The tech sector has shown renewed interest in China-focused AI integration strategies.

Development timeline and expectations

The timing of the potential collaboration aligns with Apple’s upcoming China developer conference in Shanghai, scheduled for March 25. Industry observers speculate the event could serve as a platform on which to announce the integration of Qwen AI features into the iPhone ecosystem.

“The partnership could change how international tech companies approach AI localisation in China,” noted a senior AI researcher at a leading Chinese university, speaking anonymously. “It’s not just about technology integration; it’s about creating a sustainable model for AI development in China’s regulatory framework.”

Implications for developers and users

For Chinese iOS developers, the potential integration of Qwen AI presents opportunity. The partnership could enable:

- Creation of locally optimised AI applications
- Enhanced natural language processing capabilities specific to Chinese users
- Seamless integration with local services and platforms

Prospects and industry impact

The effects of the partnership would extend beyond immediate market concerns. As global tech companies navigate operating in China, the Apple-Alibaba collaboration could serve as a blueprint for future integration. For Alibaba, securing Apple as a flagship partner could catalyse more partnerships with global technology companies seeking AI solutions for China.
The collaboration would also demonstrate Qwen AI’s capability to meet the stringent requirements of one of the world’s most demanding tech companies.

Looking ahead

While both companies maintain official silence on the partnership, the tech community awaits announcements at the upcoming Shanghai developer conference. The development matters at a time when AI capabilities increasingly influence smartphone purchasing decisions. For Apple, success in China will impact its global growth trajectory, and integrating Qwen AI could provide the competitive edge it needs to maintain its premium market position against local manufacturers offering advanced AI features. It also underscores a broader trend in the tech industry: the growing importance of localised AI solutions in major markets.

See also: Has Huawei outsmarted Apple in the AI race?
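The Qwen checkpoints behind the Hugging Face rankings discussed above are openly released, so the open-source claims are easy to poke at directly. A minimal sketch, assuming the Hugging Face transformers library and the publicly available Qwen2.5-7B-Instruct checkpoint; whether any Apple integration would resemble this is unknown.

```python
# pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # one of Alibaba's openly released Qwen checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Qwen models are tuned for chat; build the prompt with the chat template.
messages = [{"role": "user", "content": "用一句话介绍你自己。"}]  # "Introduce yourself in one sentence."
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```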
  11. Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm.

Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the ****** bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent. “Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warned.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. While tech leaders are generally aware of AI’s societal implications, he noted, they may make decisions based on different values from those of public officials. “My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the ***** of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.

Global divisions around preventing AI misuse

The tech veteran was in Paris when he made his remarks, attending the AI Action Summit, a two-day event that wrapped up on Tuesday. The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.

However, the *** and the US declined to sign the communique. The *** government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security.

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance, who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and *** are opting for more agile, innovation-driven strategies.

Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI. “The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential—its ability to be used for both beneficial and harmful purposes. From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse. 
Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on. While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Eric Schmidt: AI misuse poses an ‘extreme risk’ appeared first on AI News.

View the full article
  12. The advent of more powerful processors in the early 2000s, shipping with hardware support for virtualisation, started the computing revolution that led, in time, to what we now call the cloud. With single hardware instances able to run dozens, if not hundreds, of virtual machines concurrently, businesses could offer their users multiple services and applications that would otherwise have been financially impractical, if not impossible.

But virtual machines (VMs) have several downsides. Often, an entire virtualised operating system is overkill for many applications, and although much more malleable, scalable, and agile than a fleet of bare-metal servers, VMs still require significantly more memory and processing power, and are less agile than the next evolution of this type of technology – containers. In addition to being more easily scaled (up or down, according to demand), containerised applications consist of only the necessary parts of an application and its supporting dependencies. Therefore, apps based on microservices tend to be lighter and more easily configurable.

Virtual machines exhibit the same security issues that affect their bare-metal counterparts, and to some extent, container security issues reflect those of their component parts: a MySQL bug in a specific version of the upstream application will affect containerised versions too. With regard to VMs, bare-metal installs, and containers, cybersecurity concerns and activities are very similar. But container deployments and their tooling bring specific security challenges to those charged with running apps and services, whether manually piecing together applications with choice containers, or running in production with orchestration at scale.

Container-specific security risks

Misconfiguration: Complex applications are made up of multiple containers, and misconfiguration – often only a single line in a .yaml file – can grant unnecessary privileges and increase the attack surface. For example, although it’s not trivial for an attacker to gain root access to the host machine from a container, it’s still a too-common practice to run Docker as root, with no user namespace remapping.

Vulnerable container images: In 2022, Sysdig found over 1,600 images identified as malicious in Docker Hub, in addition to many containers stored in the repo with hard-coded cloud credentials, SSH keys, and NPM tokens. The process of pulling images from public registries is opaque, and the convenience of container deployment (plus pressure on developers to produce results, fast) can mean that apps can easily be constructed with inherently insecure, or even malicious, components.

Orchestration layers: For larger projects, orchestration tools such as Kubernetes can increase the attack surface, usually due to misconfiguration and high levels of complexity. A 2022 survey from D2iQ found that only 42% of applications running on Kubernetes made it into production – due in part to the difficulty of administering large clusters and a steep learning curve. 
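To see how small the gap between a risky and a hardened deployment can be, the sketch below starts the same container twice using the docker Python SDK (docker-py) – once with dangerous defaults, once with a locked-down profile. The image, user ID, and resource limits are illustrative assumptions rather than recommendations from the article:

```python
import docker  # the docker-py SDK: pip install docker

client = docker.from_env()

# Risky default: the process inside the container runs as root, with a
# writable filesystem and the full default set of Linux capabilities.
# client.containers.run("alpine:3.19", "sleep 3600", detach=True)

# Locked-down alternative: non-root user, read-only root filesystem,
# all capabilities dropped, privilege escalation disabled, resources capped.
container = client.containers.run(
    "alpine:3.19",                      # illustrative image choice
    "sleep 3600",
    detach=True,
    user="1000:1000",                   # unprivileged UID:GID
    read_only=True,
    cap_drop=["ALL"],
    security_opt=["no-new-privileges"],
    mem_limit="256m",                   # cap memory to limit blast radius
    pids_limit=128,                     # cap process count
)
print(container.short_id)
```

A single omitted argument here – or a single line in a .yaml manifest – is the difference between least privilege and an enlarged attack surface.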
According to Ari Weil at Akamai, “Kubernetes is mature, but most companies and developers don’t realise how complex […] it can be until they’re actually at scale.”

Container security with machine learning

The specific challenges of container security can be addressed using machine learning algorithms trained on observing the components of an application when it’s ‘running clean.’ By creating a baseline of normal behaviour, machine learning can identify anomalies that could indicate potential threats from unusual traffic, unauthorised changes to configuration, odd user access patterns, and unexpected system calls (a minimal sketch of this baseline-and-flag approach follows at the end of this article).

ML-based container security platforms can scan image repositories and compare each image against databases of known vulnerabilities and issues. Scans can be automatically triggered and scheduled, helping prevent the addition of harmful elements during development and in production. Auto-generated audit reports can be tracked against standard benchmarks, or an organisation can set its own security standards – useful in environments where highly sensitive data is processed.

The connectivity between specialist container security functions and orchestration software means that suspected containers can be isolated or closed immediately, insecure permissions revoked, and user access suspended. With API connections to local firewalls and VPN endpoints, entire environments or subnets can be isolated, or traffic stopped at network borders.

Final word

Machine learning can reduce the risk of data breach in containerised environments by working on several levels. Anomaly detection, asset scanning, and flagging potential misconfigurations are all possible, and automated alerting or remediation is relatively simple to enact. The transformative possibilities of container-based apps can be approached without the security issues that have stopped some from exploring, developing, and running microservice-based applications. The advantages of cloud-native technologies can be won without compromising existing security standards, even in high-risk sectors.

(Image source)

The post The role of machine learning in enhancing cloud-native container security appeared first on AI News.

View the full article
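As flagged above, here is a minimal sketch of the baseline-and-flag approach: learn what ‘running clean’ looks like from a few per-request metrics, then flag departures from it. It uses scikit-learn’s IsolationForest on synthetic data; the feature choices and thresholds are illustrative assumptions, not any particular vendor’s engine:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-request features sampled while the service runs clean, e.g.
# [response_ms, cpu_percent, syscalls_per_sec] -- synthetic here.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[120.0, 35.0, 800.0],
                      scale=[15.0, 5.0, 60.0],
                      size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Two fresh observations: one ordinary, one with a latency and syscall spike.
fresh = np.array([[125.0, 33.0, 790.0],
                  [480.0, 90.0, 4000.0]])
for row, flag in zip(fresh, model.predict(fresh)):  # +1 = normal, -1 = anomaly
    print(row, "ANOMALY" if flag == -1 else "ok")
```

A production platform would feed real trace and audit data into the same loop and wire the anomaly verdicts to the isolation and revocation actions described above.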
  13. Tech giants are beginning an unprecedented $320 billion AI infrastructure spending spree in 2025, brushing aside concerns about more efficient AI models from challengers like DeepSeek. The massive investment push from Amazon, Microsoft, Google, and Meta signals the big players’ unwavering conviction that AI’s future demands bold infrastructure bets, despite (or perhaps because of) emerging efficiency breakthroughs.

The stakes are high, with collective capital expenditure jumping 30% from 2024’s $246 billion investment. While investors may question the necessity of such aggressive spending, tech leaders are doubling down on their belief that AI represents a transformative opportunity worth every dollar.

Amazon stands at the forefront of this AI arms race, according to a report by Business Insider. Amazon is flexing its financial muscle with a planned $100 billion capital expenditure for 2025 – a dramatic leap from its $77 billion last year. AWS chief Andy Jassy isn’t mincing words, calling AI a “once-in-a-lifetime business opportunity” that demands aggressive investment.

Microsoft’s Satya Nadella backs his own bullish stance with hard numbers. Having earmarked $80 billion for AI infrastructure in 2025, Microsoft’s existing AI ventures are already delivering; Nadella has spoken of $13 billion annual revenue from AI and 175% year-over-year growth. His perspective draws from economic wisdom: citing the Jevons paradox, he argues that making AI more efficient and accessible will spark an unprecedented surge in demand.

Not to be outdone, Google parent Alphabet is pushing all its chips to the centre of the table with a $75 billion infrastructure investment in 2025, dwarfing analysts’ expectations of $58 billion. Despite market jitters about cloud growth and AI strategy, CEO Sundar Pichai maintains Google’s product innovation engine is firing on all cylinders.

Meta’s approach is to pour $60-65 billion into capital spending in 2025 – up from $39 billion in 2024. The company is carving its own path by championing an “American standard” for open-source AI models, a strategy that has caught investor attention, particularly given Meta’s proven track record in monetising AI through sophisticated ad targeting.

The emergence of DeepSeek’s efficient AI models has sparked some debate in investment circles. Investing.com’s Jesse Cohen voices growing demands for concrete returns on existing AI investments. Yet Wedbush’s Dan Ives dismisses such concerns, likening DeepSeek to “the Temu of AI” and insisting the revolution is just beginning.

The market’s response to these bold plans tells a mixed story. Meta’s strategy has won investor applause, while Amazon and Google face more sceptical reactions, with stock drops of 5% and 8% respectively following spending announcements in earnings calls. Yet tech leaders remain undeterred, viewing robust AI infrastructure as non-negotiable for future success.

The intensity of infrastructure investment suggests a reality: technological breakthroughs in AI efficiency aren’t slowing the race – they’re accelerating it. As big tech pours unprecedented resources into AI development, it’s betting that increased efficiency will expand rather than contract the market for AI services.

The high-stakes gamble on AI’s future reveals a shift in how big tech views investment. Rather than waiting to see how efficiency improvements might reduce costs, the giants are scaling up aggressively, convinced that tomorrow’s AI landscape will demand more infrastructure, not less. 
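Those headline figures hang together, as a quick back-of-the-envelope check shows (taking the midpoint of Meta’s $60-65 billion range):

```python
# Figures quoted above, in $ billions; Meta at the midpoint of its range.
capex_2025 = {"Amazon": 100, "Microsoft": 80, "Alphabet": 75, "Meta": 62.5}

print(sum(capex_2025.values()))  # 317.5 -- roughly the ~$320bn headline
print(round(320 / 246 - 1, 3))   # 0.301 -- the ~30% jump on 2024's $246bn
```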
In this view, DeepSeek’s breakthroughs aren’t a threat to their strategy – they’re validation of AI’s expanding potential. The message from Silicon Valley is that the AI revolution demands massive infrastructure investment, and the giants of tech are all in. The question isn’t whether to invest in AI infrastructure, but whether $320 billion will be enough to meet the coming surge in demand.

See also: DeepSeek ban? China data transfer boosts security concerns

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Big tech’s $320B AI spend defies efficiency race appeared first on AI News.

View the full article
  14. Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris. While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths to carve a leading role for itself.

“This is the third summit on AI safety in just over one year,” von der Leyen remarked. “In the same *******, three new generations of ever more powerful AI models have been released. Some expect models that will approach human reasoning within a year’s time.”

The European Commission President set the tone of the event by contrasting the groundwork laid in previous summits with the urgency of this one. “Past summits focused on laying the groundwork for AI safety. Together, we built a shared consensus that AI will be safe, that it will promote our values and benefit humanity. But this Summit is focused on action. And that is exactly what we need right now.”

As the world witnesses AI’s disruptive power, von der Leyen urged Europe to “formulate a vision of where we want AI to take us, as society and as humanity.” Growing adoption, “in the key sectors of our economy, and for the key challenges of our times,” provides a golden opportunity for the continent to lead, she argued.

The case for a European approach to the AI race

Von der Leyen rejected notions that Europe has fallen behind its global competitors. “Too often, I hear that Europe is late to the race – while the US and China have already gotten ahead. I disagree,” she stated. “The frontier is constantly moving. And global leadership is still up for grabs.”

Instead of replicating what other regions are doing, she called for doubling down on Europe’s unique strengths to define the continent’s distinct approach to AI. “Too often, I have heard that we should replicate what others are doing and run after their strengths,” she said. “I think that instead, we should invest in what we can do best and build on our strengths here in Europe, which are our science and technology mastery that we have given to the world.”

Von der Leyen defined three pillars of the so-called “European brand of AI” that sets it apart: 1) focusing on high-complexity, industry-specific applications, 2) taking a cooperative, collaborative approach to innovation, and 3) embracing open-source principles. “This summit shows there is a distinct European brand of AI,” she asserted. “It is already driving innovation and adoption. And it is picking up speed.”

Accelerating innovation: AI factories and gigafactories

To maintain its competitive edge, Europe must supercharge its AI innovation, von der Leyen stressed. A key component of this strategy lies in its computational infrastructure. Europe already boasts some of the world’s fastest supercomputers, which are now being leveraged through the creation of “AI factories.”

“In just a few months, we have set up a record of 12 AI factories,” von der Leyen revealed. “And we are investing €10 billion in them. This is not a promise—it is happening right now, and it is the largest public investment for AI in the world, which will unlock over ten times more private investment.”

Beyond these initial steps, von der Leyen unveiled an even more ambitious initiative. AI gigafactories, built on the scale of CERN’s Large Hadron Collider, will provide the infrastructure needed for training AI systems at unprecedented scales. 
They aim to foster collaboration between researchers, entrepreneurs, and industry leaders. “We provide the infrastructure for large computational power,” von der Leyen explained. “Talents of the world are welcome. Industries will be able to collaborate and federate their data.”

The cooperative ethos underpinning AI gigafactories is part of a broader European push to balance competition with collaboration. “AI needs competition but also collaboration,” she emphasised, highlighting that the initiative will serve as a “safe space” for these cooperative efforts.

Building trust with the AI Act

Crucially, von der Leyen reiterated Europe’s commitment to making AI safe and trustworthy. She pointed to the EU AI Act as the cornerstone of this strategy, framing it as a harmonised framework to replace fragmented national regulations across member states. “The AI Act [will] provide one single set of safety rules across the European Union – 450 million people – instead of 27 different national regulations,” she said, before acknowledging businesses’ concerns about regulatory complexities. “At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”

€200 billion to remain in the AI race

Financing such ambitious plans naturally requires significant resources. Von der Leyen praised the recently launched EU AI Champions Initiative, which has already pledged €150 billion from providers, investors, and industry. During her speech at the summit, von der Leyen announced the Commission’s complementary InvestAI initiative, which will bring in an additional €50 billion. Altogether, the two initiatives will mobilise a massive €200 billion in public-private AI investments.

“We will have a focus on industrial and mission-critical applications,” she said. “It will be the largest public-private partnership in the world for the development of trustworthy AI.”

Ethical AI is a global responsibility

Von der Leyen closed her address by framing Europe’s AI ambitions within a broader, humanitarian perspective, arguing that ethical AI is a global responsibility. “Cooperative AI can be attractive well beyond Europe, including for our partners in the Global South,” she proclaimed, extending a message of inclusivity.

Von der Leyen expressed full support for the AI Foundation launched at the summit, highlighting its mission to ensure widespread access to AI’s benefits. “AI can be a gift to humanity. But we must make sure that benefits are widespread and accessible to all,” she remarked. “We want AI to be a force for good. We want an AI where everyone collaborates and everyone benefits. That is our path – our European way.”

See also: AI Action Summit: Leaders call for unity and equitable development

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Ursula von der Leyen: AI race ‘is far from over’ appeared first on AI News.

View the full article
  15. As the 2025 AI Action Summit kicks off in Paris, global leaders, industry experts, and academics are converging to address the challenges and opportunities presented by AI. Against the backdrop of rapid technological advancements and growing societal concerns, the summit aims to build on the progress made since the 2024 Seoul Safety Summit and establish a cohesive global framework for AI governance.

AI Action Summit is ‘a wake-up call’

French President Emmanuel Macron has described the summit as “a wake-up call for Europe,” emphasising the need for collective action in the face of AI’s transformative potential. This comes as the US has committed $500 billion to AI infrastructure. The ***, meanwhile, has unveiled its Opportunities Action Plan ahead of the full implementation of the *** AI Act. Ahead of the AI Summit, *** tech minister Peter Kyle told The Guardian the AI race must be led by “western, liberal, democratic” countries.

These developments signal a renewed global dedication to harnessing AI’s capabilities while addressing its risks.

Matt Cloke, CTO at Endava, highlighted the importance of bridging the gap between AI’s potential and its practical implementation. “Much of the conversation is set to focus on understanding the risks involved with using AI while helping to guide decision-making in an ever-evolving landscape,” he said.

Cloke also stressed the role of organisations in ensuring AI adoption goes beyond regulatory frameworks. “Modernising core systems enables organisations to better harness AI while ensuring regulatory compliance,” he explained. “With improved data management, automation, and integration capabilities, these systems make it easier for organisations to stay agile and quickly adapt to impending regulatory changes.”

Governance and workforce among critical AI Action Summit topics

Kit Cox, CTO and Founder of Enate, outlined three critical areas for the summit’s agenda. “First, AI governance needs urgent clarity,” he said. “We must establish global guidelines to ensure AI is safe, ethical, and aligned across nations. A disconnected approach won’t work; we need unity to build trust and drive long-term progress.”

Cox also emphasised the need for a future-ready workforce. “Employers and governments must invest in upskilling the workforce for an AI-driven world,” he said. “This isn’t just about automation replacing jobs; it’s about creating opportunities through education and training that genuinely prepare people for the future of work.”

Finally, Cox called for democratising AI’s benefits. “AI must be fair and democratic both now and in the future,” he said. “The benefits can’t be limited to a select few. We must ensure that AI’s power reaches beyond Silicon Valley to all corners of the globe, creating opportunities for everyone to thrive.”

Developing AI in the public interest

Professor Gina Neff, Professor of Responsible AI at Queen Mary University of London and Executive Director at Cambridge University’s Minderoo Centre for Technology & Democracy, stressed the importance of making AI relatable to everyday life. “For us in civil society, it’s essential that we bring imaginaries about AI into the everyday,” she said. “From the barista who makes your morning latte to the mechanic fixing your car, they all have to understand how AI impacts them and, crucially, why AI is a human issue.”

Neff also pushed back against big tech’s dominance in AI development. 
“I’ll be taking this spirit of public interest into the Summit and pushing back against big tech’s push for hyperscaling. Thinking about AI as something we’re building together – like we do our cities and local communities – puts us all in a better place.”

Addressing bias and building equitable AI

Professor David Leslie, Professor of Ethics, Technology, and Society at Queen Mary University of London, highlighted the unresolved challenges of bias and diversity in AI systems. “Over a year after the first AI Safety Summit at Bletchley Park, only incremental progress has been made to address the many problems of cultural bias and toxic and imbalanced training data that have characterised the development and use of Silicon Valley-led frontier AI systems,” he said.

Leslie called for a renewed focus on public interest AI. “The French AI Action Summit promises to refocus the conversation on AI governance to tackle these and other areas of immediate risk and harm,” he explained. “A main focus will be to think about how to advance public interest AI for all through mission-driven and society-led funding.”

He proposed the creation of a public interest AI foundation, supported by governments, companies, and philanthropic organisations. “This type of initiative will have to address issues of algorithmic and data biases head on, at concrete and practice-based levels,” he said. “Only then can it stay true to the goal of making AI technologies – and the infrastructures upon which they depend – accessible global public goods.”

Systematic evaluation

Professor Maria Liakata, Professor of Natural Language Processing at Queen Mary University of London, emphasised the need for rigorous evaluation of AI systems. “AI has the potential to make public service more efficient and accessible,” she said. “But at the moment, we are not evaluating AI systems properly. Regulators are currently on the back foot with evaluation, and developers have no systematic way of offering the evidence regulators need.”

Liakata called for a flexible and systematic approach to AI evaluation. “We must remain agile and listen to the voices of all stakeholders,” she said. “This would give us the evidence we need to develop AI regulation and help us get there faster. It would also help us get better at anticipating the risks posed by AI.”

AI in healthcare: Balancing innovation and ethics

Dr Vivek Singh, Lecturer in Digital Pathology at Barts ******* Institute, Queen Mary University of London, highlighted the ethical implications of AI in healthcare. “The Paris AI Action Summit represents a critical opportunity for global collaboration on AI governance and innovation,” he said. “I hope to see actionable commitments that balance ethical considerations with the rapid advancement of AI technologies, ensuring they benefit society as a whole.”

Singh called for clear frameworks for international cooperation. “A key outcome would be the establishment of clear frameworks for international cooperation, fostering trust and accountability in AI development and deployment,” he said.

AI Action Summit: A pivotal moment

The 2025 AI Action Summit in Paris represents a pivotal moment for global AI governance. With calls for unity, equity, and public interest at the forefront, the summit aims to address the challenges of bias, regulation, and workforce readiness while ensuring AI’s benefits are shared equitably. As world leaders and industry experts converge, the hope is that actionable commitments will pave the way for a more inclusive and ethical AI future. 
(Photo by Jorge Gascón)

See also: EU AI Act: What businesses need to know as regulations go live

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI Action Summit: Leaders call for unity and equitable development appeared first on AI News.

View the full article
  16. US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company.

DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga. Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free.

However, concerns have been raised about DeepSeek’s extensive data collection practices, and a probe has been launched by Microsoft and OpenAI over a breach of the latter’s system by a group allegedly linked to the ******** AI startup.

A threat to US AI dominance

DeepSeek’s astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI’s ChatGPT and Google Gemini. But, alongside the app’s prowess, concerns have emerged over alleged ties to the ******** ********** Party (CCP).

According to security researchers, hidden code within DeepSeek’s AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek’s own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

“Just fyi, @deepseek_ai collects your IP, keystroke patterns, device info, etc etc, and stores it in China, where all that data is vulnerable to arbitrary requisition from the State. From their own privacy policy: pic.twitter.com/wueJokHcn3” — Luke de Pulford (@lukedepulford) January 27, 2025

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek’s influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight. Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices.

“DeepSeek is collecting the sensitive data of American citizens and sending it directly to the ******** ********** Party — a national security threat. That’s why I’m introducing bipartisan legislation to immediately ban DeepSeek from all US gov devices.” — Rep Josh Gottheimer (@RepJoshG) February 6, 2025

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by ******** company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise. In contrast, DeepSeek’s case involves clear evidence, as revealed by cybersecurity investigators who identified the app’s unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a starker, better-documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has now become a geopolitical flashpoint between global superpowers—especially between the US and China. 
American AI firms like OpenAI have enjoyed a dominant position in recent years, but ******** companies have poured resources into catching up and, in some cases, surpassing their US competitors. DeepSeek’s lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That raises the question of whether it’s truly “free” or whether the cost is paid in lost privacy and security.

China Mobile’s involvement raises further eyebrows, given the state-owned telecom company’s prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in ******** surveillance activities or even enable economic manipulation.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools. DeepSeek’s rise may indeed have rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.

View the full article
  17. Digma, a company offering products designed to act on pre-production observability data, has announced the launch of its preemptive observability analysis (POA) engine. The engine is designed to check, identify, and provide ‘fix’ suggestions, helping to balance systems and reduce issues found in codebases as their complexity increases.

The application of preemptive observability in pre-production may become more important as AI code generators become more common, the company claims. For instance, a 2023 Stanford University study revealed that developers using AI coding assistants were more likely to introduce bugs to their code. Despite this, major companies like Google are increasing their reliance on AI-generated code, with over 25% of the company’s new code being AI-created.

Nir Shafrir, CEO and Co-founder of Digma, commented on the growing resources being dedicated to ensuring systems perform well, saying, “We’re seeing a lot of effort invested in assuring optimal system performance, but many issues are still being discovered in complex code bases late in production.”

“Beyond this, scaling has often remained a rough estimation in organisations anticipating growth, and many are hitting barriers in technology growth that arise precisely during periods of significant organisational expansion. It means that engineering teams may spend between 20-40% of their time addressing issues discovered late in production environments, with some organisations spending up to 50% of engineering resources on fixing production problems.”

Preemptive observability is expected to become a key factor in helping companies gain competitive advantage. It has several potential benefits, including speed increases for AI-generated code and improvements to the reliability of human-written code. According to Digma, preemptive observability helps ensure manually written code is more trustworthy, and reduces risk in the final product.

As well as tackling bugs introduced by AI code generation, Digma’s preemptive observability analysis engine has been designed to combat common, long-established issues companies may have experienced with human-made code, which may result in service level agreement (SLA) violations and performance issues. For high-transaction sectors like retail, fintech, and e-commerce, this technology could become valuable.

Digma’s algorithm has been designed to use pattern matching and anomaly detection techniques to analyse data and find specific behaviours or issues. It is capable of predicting what an application’s response times and resource usage should be, identifying possible issues before they can cause any noticeable damage (a simple illustration of this baselining idea appears at the end of this article). Digma pinpoints the part of the code that is causing an issue by analysing tracing data.

Preemptive observability analysis prevents problems rather than dealing with the aftermath of the issues. Teams can monitor holistically, and address potential issues in areas that are frequently ignored once in production.

Roni Dover, CTO and Co-founder of Digma, highlighted what differentiates Digma’s preemptive observability analysis engine from others: “By understanding runtime behaviour and suggesting fixes for performance issues, scaling problems, and team conflicts, we’re helping enterprises prevent problems and reduce risks proactively rather than putting out fires in production.”

Application performance monitoring (APM) tools are used to identify service issues, monitor production statuses, and highlight SLA errors. 
APMs are practical for sending alerts when services fail or slow during production. But unlike preemptive observability, APMs are limited in non-production settings and can’t analyse the sources of problems. By identifying performance and scaling issues early in the path to production, even when data volumes are low, preemptive observability helps prevent major problems and reduce cloud costs.

Digma recently completed a successful $6 million seed funding round, indicating growing confidence in the technology.

Image source: “Till Bechtolsheimer’s – Alfa Romeo Giulia Sprint GT No.40 – 2013 Donington Historic Festival” by Motorsport in Pictures is licensed under CC BY-NC-SA 2.0.

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Digma’s preemptive observability engine cuts code issues, streamlines AI appeared first on AI News.

View the full article
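As flagged above, the baselining idea behind this kind of analysis can be sketched in a few lines: learn a latency baseline from ‘clean’ trace samples, then flag spans that fall far outside it. The metrics, window size, and three-sigma threshold below are illustrative assumptions, not details of Digma’s engine:

```python
from collections import deque
from statistics import mean, stdev

# Endpoint latencies (ms) observed while the service runs clean pre-production.
window = deque([118, 122, 130, 125, 119, 127, 121, 124, 133, 120] * 50, maxlen=500)

def check_span(latency_ms: float) -> str:
    """Flag a trace span whose latency sits far outside the learned baseline."""
    mu, sigma = mean(window), stdev(window)
    if latency_ms > mu + 3 * sigma:
        return f"ANOMALY: {latency_ms:.0f}ms vs baseline {mu:.0f}±{sigma:.0f}ms"
    window.append(latency_ms)  # fold only normal samples back into the baseline
    return "ok"

print(check_span(126))  # ok
print(check_span(900))  # ANOMALY, caught before it becomes a production fire
```

A real engine would track many such baselines per endpoint and resource, and attribute anomalies to specific code paths via tracing data, as the article describes.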
  18. Digital interactions have become a major part of our lives; according to the latest statistics, there were over 5.52 billion internet users as of October 2024, with 67.5% being social media users. But despite the prominence of the digital space in today’s world, most interactions are still subpar when it comes to personalisation.

What does this mean? Put simply, the different categories of internet users – individuals, companies, and influencers – do not have the flexibility or options to fully express their individuality, customise content, or provide targeted services for specific markets. Most digital platforms that currently exist only provide an avenue for internet users to create static profiles made up of personal data.

This should not be the case in a world where most interactions happen online. Digital profiles ought to be more than a collection of data; they should mimic a fully-developed persona that internet users can use to express themselves authentically, or in a more personalised way, in their digital interactions.

Setting the stage for futuristic digital interactions

Innovation did not stop with the internet or Web 2.0 social media networks. We now have more advanced technologies, notably AI and Web 3.0, which are proving to be game-changers in the hyper-personalisation of digital experiences. So, how are the two technologies adding value to today’s static digital profiles?

Let’s start with AI. Innovations in this space have been the talk of the technology community and beyond, with significant funding flowing into the industry over the past two years. While most people are only familiar with generative AI use cases, this nascent technology has the potential to support the creation of hyper-realistic and intelligent digital human avatars that could replace static profiles or business chatbots whose capabilities remain limited.

On the other hand, Web 3.0 introduces a futuristic digital space where personalised avatars can interact, trade, or engage in more advanced activities like hosting meetings or events. Although possible with Web 2.0 platforms as well, Web 3.0 innovations go a level higher to feature NFTs and utility tokens, which let users create adaptable human avatars or purchase advanced customisation features to make avatars more personalised.

A case study of the Antix AI-powered SaaS platform

Antix is one of the few innovations that currently uses integrated GPT-4.0 support and a Web 3.0 utility token to create hyper-realistic and intelligent digital human avatars. The AI-powered software-as-a-service (SaaS) platform enhances digital interactions by providing individuals, companies, and influencers an opportunity to use hyper-personalised digital humans to deliver hyper-realistic interactions.

Antix’s digital humans use advanced machine learning and natural language processing to make digital interactions more personalised. Notably, the digital humans are designed as non-fungible tokens (NFTs), which means they can evolve alongside the owner. Internet citizens can use the Antix platform to create highly personalised and adaptable digital profiles featuring a multitude of customisations, including style, emotions, appearance, and voice.

Antix’s digital humans can be customised to operate as the face of a brand by representing it in the digital space, and can perform key functions like engaging with an audience and hosting virtual events and marketing campaigns. 
Digital humans perform customer support functions better than typical chatbots because of their personalised make-up. Digital humans could also be useful for influencers consistently producing new content for their audience. Instead of shooting content themselves, influencers can delegate the role to Antix’s digital humans. The benefits of adopting this approach include reduced equipment costs, simplified content adaptation, and the option to remain anonymous.

It is also important to highlight that this ecosystem is powered by a utility token dubbed $ANTIX. The token supports key functions on the Antix platform, including subscription purchases, asset repairs, and ecosystem rewards.

A new dawn for digital interactions

For almost three decades now, digital interactions have mostly revolved around static personas. This could be about to change; advancements in 4IR technologies like AI and Web 3.0 are bringing more value to the digital space. While it may take a few years before most people embrace the concept of AI-powered digital humans and decentralised marketplaces, it is only a matter of time before demand for digital twins that mimic real-life personas hits the roof. The shift will mark a new dawn: a time when digital interactions are not only hyper-personalised but feel almost real.

Web 3.0 is poised to be the economic powerhouse of the digital interaction space. In fact, we’re already seeing this evolution with AI-powered agents tasked with operations in the cryptocurrency economy. It is not a question of if, but rather when, digital humans will become one of the main forms of interaction on the internet.

(Image source: Unsplash)

The post How AI and web 3.0 can reshape digital interactions appeared first on AI News.

View the full article
  19. The government is urged to mandate stricter reporting for data centres to mitigate environmental risks associated with the AI sprint.

A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government’s AI Opportunities Action Plan gains traction. The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, the Chartered Institute of IT.

While stressing that data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the ***’s tech strategy, the AI Opportunities Action Plan is intended to “boost economic growth, provide jobs for the future and improve people’s everyday lives.”

Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion.

With plans already in place to reform the ***’s planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers’ ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the *** as a leader in resource-efficient AI:

- Expand environmental reporting mandates
- Communicate the sector’s environmental impacts
- Set sustainability requirements for data centres
- Reconsider data collection, storage, and management practices
- Lead by example with government investment

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres’ energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource use data necessary for policymaking.

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels. 
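For a sense of what such mandatory reporting might capture in practice, here is a purely illustrative sketch of a disclosure record as a Python data structure – the field names, units, and operator are hypothetical, not taken from the NEPC report:

```python
from dataclasses import dataclass

@dataclass
class DataCentreEnvironmentalReport:
    """Hypothetical annual disclosure covering the metrics the report says
    mandatory environmental reporting should include."""
    operator: str
    year: int
    energy_sources: dict[str, float]    # share by source, e.g. {"grid": 0.35}
    water_consumption_megalitres: float
    carbon_emissions_tco2e: float
    ewaste_recycled_tonnes: float

report = DataCentreEnvironmentalReport(
    operator="ExampleDC Ltd",           # fictional operator
    year=2024,
    energy_sources={"grid": 0.35, "renewable_ppa": 0.65},
    water_consumption_megalitres=210.0,
    carbon_emissions_tco2e=12_400.0,
    ewaste_recycled_tonnes=85.5,
)
print(report)
```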
Smarter, greener data centres

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero drinking water use for cooling, and the exclusive use of 100% carbon-free energy certificates.

Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action: “In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.

“This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment.”

Rodden added that reliable data on these impacts is critical. “To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the ***.”

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. “Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI’s computing power and storage are as sustainable as possible.

“That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential.”

Childs emphasised the importance of a coordinated approach from the start of projects. “As the *** government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step.”

For Alex Bardell, Chair of BCS’ Green IT Specialist Group, the focus is on optimising AI processes. “Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the *** may fall behind in the AI arena; this may not necessarily be true.

“It is crucial to reevaluate our approach to developing sustainable AI in the future.”

Time for transparency around AI environmental risks

Public awareness of AI’s environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six *** residents are aware of the significant environmental costs associated with AI systems.

“AI providers must be transparent about these effects,” said Professor Sarvapali Ramchurn, CEO of Responsible AI *** and a Fellow of the IET. “If we cannot measure it, we cannot manage it, nor ensure benefits for all. 
This report’s recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved.”

As the *** pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NEPC: AI sprint risks environmental catastrophe appeared first on AI News.

View the full article
  20. AI continues to evolve, transforming industries with advances in automation, decision-making, and predictive analytics. AI models like DeepSeek push the boundaries of what’s possible, making complex tasks more efficient and accessible. At the same time, Web3 is reshaping digital ownership and finance through decentralisation. As the two technologies advance, their convergence seems inevitable. However, integrating AI with blockchain and decentralised systems has proved challenging – until now.

The company AIvalanche DeFAI Agents (AVAXAI) is addressing the gap by introducing the first DeepSeek-powered DeFAI product on the Avalanche blockchain. The platform provides a way for users to create, manage, and trade tokenised AI and DeFAI agents. Agents will be able to benefit from Avalanche’s scalability, fast transactions, and energy-efficient infrastructure. By merging decentralised ownership with AI-driven automation, AIvalanche DeFAI Agents offers new opportunities in the Web3 space.

The DeepSeek controversy and its impact on AI’s future

DeepSeek has been at the centre of global attention, not only for its technical advancements but also for concerns about its use. In January, the company unveiled a chatbot that reportedly matched the performance of its rivals at a significantly lower training cost, a development that shook international markets. AI-related stocks, including Australian chip-maker Brainchip, saw sharp declines following the news.

However, DeepSeek’s rapid rise has also raised security concerns. Australia has banned DeepSeek from all government devices and systems, citing an “unacceptable risk” to national security. According to the BBC, officials insist that the decision is based on security assessments, not the company’s ******** origins. The government’s move underscores ongoing debates over AI governance and the potential risks of incorporating AI into important systems.

Despite these concerns, AIvalanche DeFAI Agents continues to explore new ways to utilise DeepSeek’s abilities in a decentralised framework. It wants to provide users with greater control over AI agents while maintaining security and transparency in Web3.

Decentralised AI agents for ownership and monetisation

DeepSeek is an AI model built for tasks like data analysis and autonomous operations. AIvalanche DeFAI Agents extends its capabilities by integrating tokenised AI and DeFAI agents into the Avalanche C-Chain. The platform combines Avalanche’s efficiency with AI functionality, letting users create, manage, and deploy AI agents with minimal effort. Users can develop AI agents through AIvalanche DeFAI Agents and explore ways to monetise them. The decentralised framework enables trustless transactions, altering the way AI ownership and interaction take place.

Key features of AIvalanche DeFAI agents

- Create and manage AI agents: Users can build AI agents in just a few clicks. Each agent has a dedicated page outlining its capabilities.
- Co-ownership of AI agents: Anyone can invest in AI agents early by acquiring tokens before they gain mainstream attention. Users can also engage with established AI agents while trading their tokens.
- Monetising AI agents: AI agents evolve by learning from new data. They have their own wallets and can execute transactions, manage tasks, and distribute revenue.

Support from key players in the Avalanche ecosystem

AIvalanche DeFAI Agents has gained recognition in the Avalanche ecosystem, receiving support from entities like Avalaunch and AVenturesDAO. 
Avalaunch provides a launchpad for Avalanche-based projects, while AVenturesDAO is a community-driven investment group. Their involvement highlights growing interest in decentralised AI and DeFAI agents.

Expanding access through public sales and listings

AIvalanche DeFAI Agents is currently conducting a public ***** across several launchpads, including Ape Terminal, Polkastarter, Avalaunch, and Seedify. The platforms enable broader participation in the Web3 AI agent economy. Following a public *****, the platform plans to list its AVAXAI token on centralised exchanges like Gate.io and MEXC. The listings could improve accessibility and liquidity and increase the platform’s adoption. As AI and decentralised finance (DeFi) continue to intersect, AIvalanche DeFAI Agents aims to establish itself in the space.

(Photo by Unsplash)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here

The post AVAXAI brings Deepseek to Web3 with decentralised AI agents appeared first on AI News.

View the full article
  21. Since AI hit the mainstream in the last few years, industries across the world have been feeling the positive impacts. From helping to clean up our oceans to helping doctors detect cancers earlier than ever before, AI’s potential and impact are growing by the day. Whether it is powered by a supercomputer, edge computing, or a traditional data centre, society is truly feeling the positive effect of advances in the AI industry.

However, fresh innovation has always brought questions about environmental impact. These concerns are gaining momentum, especially around the energy consumption associated with the processing power required to run increasingly large systems. The United Nations Environment Programme recently expressed concerns about rising levels of e-waste and, more specifically, cooling considerations for data centres. Similar concerns have come from academia, where researchers have flagged that a larger carbon footprint might be the price we are paying for innovation.

Add to this the new regulations and reporting requirements that governments are implementing globally to curb the impact of climate change, such as the EU’s Circular Economy Action Plan (CEAP), and it becomes clear that this issue is moving to the forefront of the AI agenda. Analysts around the globe are also beginning to focus on it, with Gartner naming energy-efficient computing a top technology trend for 2025 as organisations come under pressure to show they are considering the impact AI is having on the environment.

Businesses that do not consider sustainability as part of their core AI infrastructure, expansion, or technology strategies risk hindering their own progress. Failing to keep pace with sustainable practices can cause reputational damage, as organisations may be seen as behind the curve in an increasingly sustainability-focused world, alongside risking non-compliance with regulation. Returning to the e-waste example: organisations found not to be recycling devices properly – for instance, through programmes like our Global Take Back Service – could face negative feedback and lose business as a result of poor brand image.

With these factors in mind, it’s clear that businesses must consider building a sustainable AI framework that supports operational efficiency and encourages business growth. Prioritising technologies that limit energy consumption helps ensure regulatory compliance and the ability to meet broader sustainability goals. It also helps organisations future-proof against market instability through reduced reliance on energy, while strengthening brand reputation in an increasingly environmentally conscious world.

This is within easy reach for many businesses, as there is a large number of offerings in the market that balance sustainability efforts with high processing capabilities. At ASUS we have partnered with Intel to provide servers that prioritise energy efficiency. Much is available to businesses today if they take the initial step: a strategy that implements technologies balancing regulatory pressures, customer expectations, and overall business goals will help organisations feel confident that innovation won’t come at a cost to the environment.
IDC has raised this with its audience, recently releasing predictions on the technology industry which outlined: “To address the environmental challenges of harnessing AI’s benefits, enterprises are turning to Sustainable AI Frameworks that focus on minimising the environmental impact of artificial intelligence by addressing key elements such as energy efficiency, resource optimisation, and e-waste reduction.”

As AI innovation continues to grow alongside market pressure, it will become clear which organisations in the market can cope and which will be left behind. Ultimately, those who choose to embed sustainability into their AI strategies will lead the way.

(Photo by Angela Benito)

See also: French initiative for responsible AI leaders

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Sustainability is key in 2025 for businesses to advance AI efforts appeared first on AI News.

View the full article
  22. Enterprise AI transformation is reaching a tipping point. In the Asia Pacific, Zebra Technologies has unveiled ambitious plans to change frontline operations across the region. At a time when CISQ estimates that poor software quality cost US businesses $2.41 trillion in 2022, the push for practical, results-driven AI implementation is urgent.

“Elements of our three-pillar strategy have been around for quite some time, but what’s revolutionising the frontline today is intelligent automation,” Tom Bianculli, Chief Technology Officer at Zebra Technologies, told reporters at a briefing during Zebra’s 2025 Kickoff in Perth, Australia last week. “We’re not just digitising workflows – we’re connecting wearable technology with robotic workflows, enabling frontline workers to seamlessly interact with automation in ways that were impossible just five years ago.”

Practical applications driving change

The real-world impact of enterprise AI transformation is already evident in Zebra’s recent collaboration with a major North American retailer. The solution combines traditional AI with generative AI capabilities, enabling fast shelf analysis and automated task generation.

“You snap a picture of a shelf, [and] within one second, the traditional AI identifies all the products on the shelf, identifies where there’s missing product, maybe misplaced product… and then it makes that information available to a Gen AI agent that then decides what should you do,” Bianculli explains.

This level of automation has demonstrated significant operational improvements, reducing staffing requirements at the retailer by 25%. When it detects missing stock, the system automatically generates tasks for the right personnel, streamlining what was previously a multi-step manual process.
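Zebra has not published the internals of this system, but the division of labour described above – a fast traditional vision model that finds shelf gaps, feeding a generative agent that decides what to do – can be sketched conceptually. Every name and structure below is invented for illustration; it is not Zebra’s code.

# Conceptual sketch of the two-stage shelf workflow: a vision model
# detects issues, then an agent turns findings into worker tasks.
from dataclasses import dataclass

@dataclass
class ShelfFinding:
    product: str
    issue: str      # e.g. "missing" or "misplaced"
    location: str   # shelf/bay identifier

def detect_shelf_issues(image_bytes: bytes) -> list[ShelfFinding]:
    """Stand-in for the sub-second traditional-AI detection step."""
    # A production system would run an object-detection model here.
    return [ShelfFinding("oat milk 1L", "missing", "aisle 4, bay 2")]

def plan_tasks(findings: list[ShelfFinding]) -> list[str]:
    """Stand-in for the Gen AI agent that decides what should be done."""
    tasks = []
    for f in findings:
        if f.issue == "missing":
            tasks.append(f"Restock {f.product} at {f.location}")
        else:
            tasks.append(f"Check placement of {f.product} at {f.location}")
    return tasks

findings = detect_shelf_issues(b"...shelf photo...")
for task in plan_tasks(findings):
    print(task)  # in production, routed to the right worker's device

The design point is the split itself: the cheap, fast detector runs on every photo, while the slower generative step only reasons over its structured findings.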
APAC leading AI adoption

The Asia Pacific region is emerging as a frontrunner in enterprise AI transformation. IBM research presented at the briefing indicates that 54% of APAC enterprises now expect AI to deliver longer-term innovation and revenue generation benefits. The region’s AI investment priorities for 2025 are clearly defined:

- 21% focused on enhancing customer experiences
- 18% directed toward business process automation
- 16% invested in sales automation and customer lifecycle management

Ryan Goh, Senior Vice President and General Manager of Asia Pacific at Zebra Technologies, points to practical implementations that are already driving results: “We have customers in e-commerce using ring scanners to scan packages, significantly improving their productivity compared to traditional scanning methods.”

Innovation at the edge

Zebra’s approach to AI deployment encompasses:

- AI devices with native neural architecture for on-device processing
- Multimodal experiences that mirror human cognitive capabilities
- Gen AI agents optimising workload distribution between edge and cloud

The company is advancing its activities in edge computing, with Bianculli revealing plans for on-device language models. The work mainly targets environments where internet connectivity is restricted or prohibited, ensuring AI capabilities remain accessible regardless of network conditions.

Regional market dynamics

The enterprise AI transformation journey varies significantly across APAC markets. India’s landscape is particularly dynamic, with the country’s GDP projected to grow 6.6% and manufacturing expected to surge by 7% year-on-year. Its commitment to AI is evident, with 96% of organisations surveyed by the WEF actively running AI programmes.

Japan presents a different scenario, with 1.2% projected GDP growth and some unique challenges to automation adoption. “We used to think that tablets are for retail, but the Bay Area proved us wrong,” Goh notes, highlighting unexpected applications in manufacturing and customer self-service solutions.

Future trajectory

Gartner’s projections indicate that by 2027, 25% of CIOs will implement augmented connected workforce initiatives that will halve the time required for competency development. Zebra is already moving in this direction with its Z word companion, which uses generative AI and large language models and is scheduled for pilot deployment with select customers in Q2 of this year.

With a global presence spanning 120+ offices in 55 countries and 10,000+ channel partners across 185 countries, Zebra is positioned to play a strong role in enterprise AI transformation across APAC. As the region moves from AI experimentation to full-scale deployment, the focus remains on delivering practical innovations that drive measurable business outcomes and operational efficiency.

See also: Walmart and Amazon drive retail transformation with AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Zebra Technologies and enterprise AI in the APAC appeared first on AI News.

View the full article
  23. ESSEC Business School and Accenture have announced the launch of a new initiative, ‘AI for Responsible Leadership’, marking the 10th anniversary of the ESSEC Accenture Strategic Business Analytics Chair.

The initiative aims to encourage leaders to use artificial intelligence in ways that are responsible and ethical, and that lead to high levels of professional performance. It aims to equip current and future leaders with the skills they will require when facing future challenges, whether economic, environmental, or social.

The initiative is supported by several institutions, businesses, and specialised groups, including the ESSEC Metalab for Data, Technology & Society, and Accenture Research. Executive Director of the ESSEC Metalab, Abdelmounaim Derraz, spoke of the collaboration, saying, “Technical subjects are continuing to shake up business schools, and AI has opened up opportunities for collaboration between partner companies, researchers, and other members of the ecosystem (students, think tanks, associations, [and] public service).”

ESSEC and Accenture aim to integrate perspectives from multiple fields of expertise, an approach that has grown out of a decade of experimentation under the Chair. The initiative includes workshops and talks designed to promote the exchange of knowledge and methods, as well as a ‘barometer’ to track AI’s implementation and its overall impact on responsible leadership. It will engage with a network of institutions and academic publications, and an annual Grand Prix will recognise projects that explore the subject of AI and leadership.

Fabrice Marque, founder of the initiative and current holder of the ESSEC Accenture Strategic Business Analytics Chair, said, “For years, we have explored the potential of using data and artificial intelligence in organisations. The synergies we have developed with our partners (Accenture, Accor, Dataiku, Engie, Eurofins, MSD, Orange) allowed us to evaluate and test innovative solutions before deploying them.

“With this initiative, we’re taking a major step: bringing together an engaged ecosystem to sustainably transform how leaders think, decide, and act in the face of tomorrow’s challenges. Our ambition is clear: to make AI a lever for performance, innovation and responsibility for […] leaders.”

Managing Director at Accenture and sponsor of the ESSEC/Accenture Chair and initiative, Aurélien Bouriot, said, “The ecosystem will benefit from the resources that Accenture puts at its disposal, and will also benefit our employees who participate.”

Laetitia Cailleteau, Managing Director at Accenture and leader of Responsible AI & Generative AI for Europe, highlighted the importance of future leaders understanding all aspects of AI: “AI is a pillar of the ongoing industrial transformation. Tomorrow’s leaders must understand the technical, ethical, and human aspects and risks – and know how to manage them. In this way, they will be able to maximise value creation and generate a positive impact for the organisation, its stakeholders and society as a whole.”

Image credit: Wikimedia Commons

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post French initiative for responsible AI leaders appeared first on AI News. View the full article
  24. OpenAI is releasing a powerful agentic capability that enables ChatGPT to conduct complex, multi-step research tasks online. The feature, called Deep Research, reportedly achieves in tens of minutes what could take a human researcher hours or even days.

OpenAI describes Deep Research as a significant milestone in its journey toward artificial general intelligence (AGI). “The ability to synthesise knowledge is a prerequisite for creating new knowledge,” says OpenAI. “For this reason, Deep Research marks a significant step toward our broader goal of developing AGI.”

Agentic AI enables ChatGPT to assist with complex research

Deep Research empowers ChatGPT to find, analyse, and synthesise information from hundreds of online sources autonomously. With just a prompt from the user, the tool can deliver a comprehensive report, comparable to the output of a research analyst, according to OpenAI.

Drawing capabilities from a variant of OpenAI’s upcoming “o3” model, the aim is to free users from time-consuming, labour-intensive information gathering. Whether it’s a competitive analysis of streaming platforms, an informed policy review, or even personalised recommendations for a new commuter bike, Deep Research promises precise and reliable results. Importantly, every output includes full citations and transparent documentation, enabling users to verify the findings with ease.

The tool appears particularly adept at uncovering niche or non-intuitive insights, making it an invaluable asset across industries like finance, science, policymaking, and engineering. But OpenAI also envisions Deep Research being useful for the average user, such as shoppers looking for hyper-personalised recommendations or a specific product.

“people will post lots of great examples, but here is a fun one: i am in japan right now and looking for an old NSX. i spent hours searching unsuccessfully for the perfect one. i was about to give up and deep research just…found it.” — Sam Altman (@sama) February 3, 2025

This latest agentic capability operates through the user interface of ChatGPT: users simply select the “Deep Research” option in the message composer and type their query. Supporting files or spreadsheets can also be uploaded for additional context. Once initiated, the AI embarks on a rigorous multi-step process, which may take 5-30 minutes to complete. A sidebar provides updates on the actions taken and the sources consulted. Users can carry on with other tasks and will be notified when the final report is ready.

The results are presented in the chat as detailed, well-documented reports. In the coming weeks, OpenAI plans to enhance these outputs further by embedding images, data visualisations, and graphs to deliver even greater clarity and context.

Unlike GPT-4o – which excels in real-time, multimodal conversations – Deep Research prioritises depth and detail. Its ability to rigorously cite sources and provide comprehensive analysis sets it apart, shifting the focus from fast, summarised answers to well-documented, research-grade insights.

Built for real-world challenges

Deep Research leverages sophisticated training methodologies, grounded in real-world browsing and reasoning tasks across diverse domains. Its model was trained via reinforcement learning to autonomously plan and execute multi-step research processes, including backtracking and adaptively refining its approach as new information becomes available.
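OpenAI has not disclosed Deep Research’s internals, so the following is only a caricature of the plan-search-reassess loop the company describes: the agent repeatedly decides what to look for next based on what it has found so far, then compiles a cited report. All function names and the stopping rule below are illustrative placeholders.

# Conceptual sketch of an agentic research loop: plan queries, gather
# sources, reassess, repeat, then emit a cited report. Illustrative only.

def plan_queries(question: str, notes: list[str]) -> list[str]:
    """Decide what to search for next, given findings so far."""
    return [question] if not notes else [f"{question} (follow-up on {len(notes)} leads)"]

def search_and_read(query: str) -> str:
    """Stand-in for browsing: fetch a source and extract key passages."""
    return f"summary of a source found for: {query}"

def research(question: str, max_steps: int = 3) -> str:
    notes: list[str] = []
    for _ in range(max_steps):
        for query in plan_queries(question, notes):
            notes.append(search_and_read(query))
        # A real agent would judge whether the evidence is sufficient and
        # backtrack if not; this sketch simply runs a fixed number of steps.
    return "\n".join(f"- {note} [source cited]" for note in notes)

print(research("Compare commuter bike options for a daily 10km ride"))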
The tool can browse user-uploaded files, generate and iterate on graphs using Python, embed media such as generated images and web pages into responses, and cite exact sentences or passages from its sources. The result of this extensive training is a highly capable agent for tackling complex real-world problems.

OpenAI evaluated Deep Research across a broad set of expert-level exams known as “Humanity’s Last Exam”. The exams – comprising over 3,000 questions covering topics from rocket science and linguistics to ecology and classics – test an AI’s competence in solving multifaceted problems. The results were impressive, with the model achieving a record-breaking 26.6% accuracy across these domains:

- GPT-4o: 3.3%
- Grok-2: 3.8%
- Claude 3.5 Sonnet: 4.3%
- OpenAI o1: 9.1%
- DeepSeek-R1: 9.4%
- Deep Research: 26.6% (with browsing + Python tools)

Deep Research also reached a new state-of-the-art performance on the GAIA benchmark, which evaluates AI models on real-world questions requiring reasoning, multi-modal fluency, and tool-use proficiency. Deep Research topped the leaderboard with a score of 72.57%.

Limitations and challenges

While the Deep Research agentic AI capability in ChatGPT signifies a bold step forward, OpenAI acknowledges that the technology is still in its early stages and comes with limitations. The system occasionally “hallucinates” facts or offers incorrect inferences, albeit at a notably reduced rate compared to existing GPT models, according to OpenAI. It also faces challenges in differentiating between authoritative sources and speculative content, and it struggles to calibrate its confidence levels, often displaying undue certainty for potentially uncertain findings. Minor formatting errors in reports and citations, as well as delays in initiating tasks, could also frustrate initial users. OpenAI says these issues are expected to improve over time with more usage and iterative refinements.

OpenAI is rolling out the capability gradually, starting with Pro users, who will have access to up to 100 queries per month. Plus and Team tiers will follow, with Enterprise access arriving next. UK, Swiss, and European Economic Area residents are not yet able to access the feature, but OpenAI says it’s working on expanding its rollout to these regions.

In the weeks ahead, OpenAI will expand the feature to ChatGPT’s mobile and desktop platforms. The long-term vision includes enabling connections to subscription-based or proprietary data sources, further enhancing the robustness and personalisation of its outputs.

Looking further ahead, OpenAI envisions integrating Deep Research with “Operator”, an existing chatbot capability that takes real-world actions. This integration would allow ChatGPT to seamlessly handle tasks that require both asynchronous online research and real-world execution.

(Photo by John Schnobrich)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post ChatGPT gains agentic capability for complex research appeared first on AI News.

View the full article
  25. OpenAI offers two versions of its chatbot, ChatGPT-4 and ChatGPT-3.5, each catering to different needs. ChatGPT-4 is the more advanced option, providing improved accuracy and reasoning, while ChatGPT-3.5 remains a solid choice, especially for those looking for a free AI tool. The right model depends on user needs – whether it’s a more powerful AI for complex tasks or a simple, accessible chatbot for everyday use.

Both models are built on the same foundational AI concepts, but they have notable differences. ChatGPT-4 offers more advanced reasoning, a larger context window, and multimodal capabilities, making it better suited for complex problem-solving and content generation. In contrast, ChatGPT-3.5 is designed for general-purpose tasks and is easier to access since it’s free. While ChatGPT-4 requires a subscription, ChatGPT-3.5 is available at no cost, making it a practical option for casual users who don’t need advanced features.

Who should choose ChatGPT-4?

ChatGPT-4 is designed for users who need a more powerful AI model that can handle both text and image inputs. It can handle longer conversations, making it helpful for users who want thorough, context-rich interactions. It also supports internet browsing in specific plans, allowing for limited real-time information retrieval.

However, this model is only available with subscription plans, which begin at $20 per month for individual users and progress to higher-tier options for teams and enterprises. While these plans offer extra features like a larger context window and better performance, they also require a financial commitment that may be unnecessary for users with basic AI needs.

Who should choose ChatGPT-3.5?

ChatGPT-3.5 remains a viable alternative for users looking for a free AI chatbot that does not require a subscription. It can perform a variety of general tasks, including answering questions, drafting text, and offering conversational support. While it lacks multimodal capabilities and has a smaller context window than ChatGPT-4, it is still a reliable tool for many common uses.

The setup process is straightforward – users simply need to create an OpenAI account to start using the model via the web or through mobile apps. It supports voice interactions on mobile devices, making it more convenient for hands-free use.

Businesses and professionals looking for a scalable AI solution will likely prefer ChatGPT-4, which provides more sophisticated responses, advanced reasoning, and additional enterprise features. Its ability to process multimodal inputs, evaluate data, and manage longer conversations makes it a more effective tool for professional and research-based tasks.

Making the right choice: ChatGPT-4 or ChatGPT-3.5?

For those deciding between the two, the choice largely depends on the intended use. ChatGPT-4 is the better option for users who require higher accuracy and enhanced reasoning. It is well-suited for professionals, researchers, and businesses seeking a more powerful AI tool. In comparison, ChatGPT-3.5 is ideal for users who need a simple and user-friendly AI model capable of handling a wide range of tasks.
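The comparison above concerns the chatbot interface, but the same trade-off appears when calling the models programmatically, where the model identifier is the only switch. A minimal sketch using the official openai Python package (pip install openai) and the gpt-4 / gpt-3.5-turbo model names current at the time of writing; it assumes an OPENAI_API_KEY environment variable is set.

# Minimal sketch: route the same request to either model via the
# official OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, advanced: bool = False) -> str:
    # gpt-4 for harder reasoning; gpt-3.5-turbo for cheap, general use.
    model = "gpt-4" if advanced else "gpt-3.5-turbo"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Summarise the trade-offs between these two models."))

In practice, many applications default to the cheaper model and escalate to the more capable one only when a task demands it, which mirrors the subscription decision described above.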
Are there better AI alternatives?

While ChatGPT-4 and ChatGPT-3.5 are both capable AI tools, they may not suit everyone. Users looking for a free, multimodal AI tool with extensive real-time web search capabilities may find other models more suitable. Similarly, people who need AI specifically for coding and development may prefer a model optimised for those tasks. OpenAI’s models are designed to be general-purpose, but they may not meet the needs of users requiring highly specialised AI applications.

For those exploring alternatives, Google Gemini, Anthropic Claude, and Microsoft Copilot are among the top competitors in the AI chatbot space. Google Gemini, previously known as Bard, integrates deeply with Google Search and offers strong multimodal capabilities; many users appreciate its accessibility and free-tier offerings. Anthropic’s Claude is another option, particularly for those focused on ethical AI development and security; it features one of the largest context windows available, making it suitable for long-form content generation. Meanwhile, Microsoft Copilot integrates with Microsoft 365 applications and Bing, providing an AI assistant that seamlessly fits into productivity and development workflows.

(Photo by Unsplash)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post ChatGPT-4 vs. ChatGPT-3.5: Which to use? appeared first on AI News.

View the full article
