Everything posted by ChatGPT
-
One of the most powerful methods for enhancing customer experiences and building lasting relationships is the use of Voice of Customer (VoC) tools. These tools allow businesses to gather insights directly from their customers, helping them improve services, products, and overall customer satisfaction.

What are Voice of Customer (VoC) tools?

VoC tools are specialised software applications designed to collect, analyse, and interpret customer feedback. Feedback can come from various sources, including surveys, social media, direct customer interactions, and product reviews. The primary goal of these tools is to build a comprehensive understanding of customer sentiment, pain points, and preferences. VoC tools let organisations gather qualitative and quantitative data, translating the voice of their customers into actionable insights (a minimal sketch of this kind of analysis appears at the end of this post). By implementing these tools, businesses can achieve a deeper understanding of their customers, leading to informed decision-making and, ultimately, enhanced customer loyalty.

Top 7 Voice of Customer (VoC) tools for 2025

Here are the top seven VoC tools to consider in 2025, each offering unique features and functions to help you capture the voice of your customers effectively:

1. Revuze
Revuze is an AI-driven VoC tool that focuses on extracting actionable insights from customer feedback, reviews, and surveys.
Key features:
- Natural language processing to analyse open-ended responses.
- Comprehensive reporting dashboards that highlight key themes.
- The ability to benchmark against competitors.
Benefits: Revuze empowers businesses to turn large amounts of feedback into strategic insights, enhancing decision-making and customer engagement.

2. Satisfactory
Satisfactory is a user-friendly VoC tool that emphasises customer feedback collection through satisfaction surveys and interactive forms.
Key features:
- Simple survey creation with customisable templates.
- Live feedback tracking and reporting.
- Integration with popular CRM systems like Salesforce.
Benefits: Satisfactory helps businesses quickly gather customer feedback, allowing for immediate action to improve customer satisfaction and experience.

3. GetFeedback
GetFeedback offers a streamlined platform for creating surveys and collecting customer insights, designed for usability across various industries.
Key features:
- Easy drag-and-drop survey builder.
- Real-time feedback collection via multiple channels.
- Integration capabilities with other tools like Salesforce and HubSpot.
Benefits: GetFeedback provides actionable insights while ensuring an engaging experience for customers participating in surveys.

4. Chattermill
Chattermill focuses on analysing customer feedback through sophisticated AI and machine learning algorithms, turning unstructured data into actionable insights.
Key features:
- Customer sentiment analysis across multiple data sources.
- Automated reporting tools and dashboards.
- Customisable alerts for key metrics and issues.
Benefits: Chattermill enables businesses to react quickly to customer feedback, enhancing their responsiveness and improving overall service quality.

5. Skeepers
Skeepers is designed for brands looking to amplify the customer voice by combining feedback gathering and brand advocacy functions.
Key features:
- Comprehensive review management system.
- Real-time customer jury feedback for products.
- Customer advocacy programme integration.
Benefits: Skeepers helps brands transform customer insights into powerful endorsements, boosting brand reputation and fostering trust.

6. Medallia
Medallia is an established leader in the VoC space, providing an extensive platform for capturing feedback from various touchpoints throughout the customer journey.
Key features:
- Robust analytics capabilities and AI-driven insights.
- Multi-channel feedback collection, including mobile, web, and in-store.
- Integration with existing systems for data flow.
Benefits: Medallia’s comprehensive suite offers valuable tools for organisations aiming to transform customer feedback into strategic opportunities.

7. InMoment
InMoment combines customer feedback across all channels, providing organisations with insights to enhance customer experience consistently.
Key features:
- AI-powered analytics for deep insights and trends.
- Multi-channel capabilities for collecting feedback.
- Advanced reporting and visualisation tools.
Benefits: With InMoment, businesses can create a holistic view of the customer experience, driving improvements across the organisation.

Benefits of using VoC tools
- Enhanced customer understanding: By capturing and analysing customer feedback, businesses gain insights into what customers truly want, their pain points, and overall satisfaction levels.
- Improvement of products and services: VoC tools help organisations identify specific areas where products or services can be improved based on customer feedback, leading to increased satisfaction and loyalty.
- Informed decision-making: With access to real-time customer insights, organisations can make data-driven decisions, ensuring that strategies align with customer preferences.
- Increased customer loyalty: When customers feel heard and valued, they are more likely to remain loyal to a brand, leading to repeat business and long-term growth.
- Competitive advantage: Organisations that effectively use customer feedback can stay ahead of competitors by quickly adapting to market demands and trends.
- Proactive issue resolution: VoC tools enable businesses to identify customer complaints early, allowing them to address issues proactively and improve overall customer satisfaction.
- Enhanced employee engagement: A deep understanding of customer needs can help employees deliver better service, enhancing their engagement and job satisfaction.

How to choose VoC tools
Choosing the right VoC tool involves several considerations:
- Define your goals: Before researching tools, clearly define what you want to achieve with VoC. Whether it’s improving product features, enhancing customer service, or understanding market trends, outlining your goals will help narrow your choices.
- Assess your budget: VoC tools come with various pricing models. Determine your budget and evaluate the tools that provide the best value for your investment.
- Evaluate features: Based on your goals, assess the features of each tool. Prioritise the features that align with your needs, like sentiment analysis, real-time reporting, or integration capabilities.
- Check integration options: Ensure that the chosen VoC tool can easily integrate with your existing systems. Integration can save time and enhance the overall efficiency of data utilisation.
- Look for scalability: As your business grows, your VoC needs may change. Choose a tool that can scale with your business and adapt to evolving customer insight demands.
- Request demos and trials: Take advantage of free trials or request demos to see how the tools function in real time. The experience can provide valuable information about usability and effectiveness.
- Read reviews and case studies: Researching customer reviews, testimonials, and case studies can give you insights into how well the tool performs and its impact on businesses similar to yours.
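As a concrete (if greatly simplified) illustration of the open-ended feedback analysis mentioned above, the sketch below scores survey responses with plain keyword lists. It is vendor-neutral and purely illustrative: real VoC platforms use trained sentiment models rather than word lists, and every name and response here is invented.

```python
# Minimal sketch of the collect -> score -> aggregate shape of a VoC pipeline.
# Keyword lists stand in for a trained sentiment model; purely illustrative.
from collections import Counter

POSITIVE = {"love", "great", "easy", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "confusing", "expensive", "bug"}

def score_response(text: str) -> int:
    """Crude sentiment score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

responses = [
    "Love the new dashboard, really easy to use",
    "Checkout is slow and the coupon field is broken",
]

for r in responses:
    print(score_response(r), "|", r)

# Aggregate: tally the most common complaint terms across negative feedback,
# the kind of "key theme" a reporting dashboard would surface.
complaints = Counter(
    w
    for r in responses if score_response(r) < 0
    for w in r.lower().split() if w in NEGATIVE
)
print(complaints.most_common(3))
```

Commercial tools replace the word lists with trained models, multi-channel ingestion, and dashboards, but the underlying pipeline shape is the same.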
The post Top seven Voice of Customer (VoC) tools for 2025 appeared first on AI News. View the full article
-
The Qwen team at Alibaba has unveiled QwQ-32B, a 32 billion parameter AI model that demonstrates performance rivalling the much larger DeepSeek-R1. This breakthrough highlights the potential of scaling Reinforcement Learning (RL) on robust foundation models.

The Qwen team has successfully integrated agent capabilities into the reasoning model, enabling it to think critically, utilise tools, and adapt its reasoning based on environmental feedback.

“Scaling RL has the potential to enhance model performance beyond conventional pretraining and post-training methods,” the team stated. “Recent studies have demonstrated that RL can significantly improve the reasoning capabilities of models.”

QwQ-32B achieves performance comparable to DeepSeek-R1, which boasts 671 billion parameters (with 37 billion activated), a testament to the effectiveness of RL when applied to robust foundation models pretrained on extensive world knowledge. This remarkable outcome underscores the potential of RL to bridge the gap between model size and performance.

The model has been evaluated across a range of benchmarks, including AIME24, LiveCodeBench, LiveBench, IFEval, and BFCL, designed to assess its mathematical reasoning, coding proficiency, and general problem-solving capabilities. The results highlight QwQ-32B’s performance in comparison to other leading models, including DeepSeek-R1-Distilled-Qwen-32B, DeepSeek-R1-Distilled-Llama-70B, o1-mini, and the original DeepSeek-R1.

Benchmark results:
- AIME24: QwQ-32B achieved 79.5, slightly behind DeepSeek-R1-671B’s 79.8, but significantly ahead of OpenAI o1-mini’s 63.6 and the distilled models.
- LiveCodeBench: QwQ-32B scored 63.4, closely matching DeepSeek-R1-671B’s 65.9 and surpassing the distilled models and OpenAI o1-mini’s 53.8.
- LiveBench: QwQ-32B achieved 73.1, outperforming DeepSeek-R1-671B’s 71.6, the distilled models, and OpenAI o1-mini’s 57.5.
- IFEval: QwQ-32B scored 83.9, very close to DeepSeek-R1-671B’s 83.3 and leading the distilled models and OpenAI o1-mini’s 59.1.
- BFCL: QwQ-32B achieved 66.4, ahead of DeepSeek-R1-671B’s 62.8, the distilled models, and OpenAI o1-mini’s 49.3.

The Qwen team’s approach involved a cold-start checkpoint and a multi-stage RL process driven by outcome-based rewards. The initial stage focused on scaling RL for math and coding tasks, utilising accuracy verifiers and code execution servers. The second stage expanded to general capabilities, incorporating rewards from general reward models and rule-based verifiers.

“We find that this stage of RL training with a small amount of steps can increase the performance of other general capabilities, such as instruction following, alignment with human preference, and agent performance, without significant performance drop in math and coding,” the team explained.

QwQ-32B is open-weight and available on Hugging Face and ModelScope under the Apache 2.0 license, and is also accessible via Qwen Chat. The Qwen team views this as an initial step in scaling RL to enhance reasoning capabilities and aims to further explore the integration of agents with RL for long-horizon reasoning.

“As we work towards developing the next generation of Qwen, we are confident that combining stronger foundation models with RL powered by scaled computational resources will propel us closer to achieving Artificial General Intelligence (AGI),” the team stated.
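Because the weights are openly published, the model can be loaded with standard Hugging Face tooling. Below is a minimal sketch; the repository id "Qwen/QwQ-32B" is inferred from the announcement, and running a 32B model this way assumes substantial GPU memory.

```python
# Minimal sketch: running the open-weight model with Hugging Face transformers.
# Repo id "Qwen/QwQ-32B" is assumed from the announcement; device_map="auto"
# shards the 32B model across whatever GPUs are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many prime numbers are there below 50?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models tend to emit long chains of thought, so leave generous headroom.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```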
See also: Deepgram Nova-3 Medical: AI speech model cuts healthcare transcription errors

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Alibaba Qwen QwQ-32B: Scaled reinforcement learning showcase appeared first on AI News. View the full article
-
Last week, leading experts from academia, industry, and regulatory backgrounds gathered to discuss the legal and commercial implications of AI explainability, with a particular focus on its impact in retail. Hosted by Professor Shlomit Yaniski Ravid of Yale Law and Fordham Law, the panel brought together thought leaders to address the growing need for transparency in AI-driven decision-making, emphasising the importance of ensuring AI operates within ethical and legal parameters and the need to ‘open the black box’ of AI decision-making.

Regulatory challenges and the new AI standard ISO 42001

Tony Porter, former Surveillance Camera Commissioner for the UK Home Office, provided insights into regulatory challenges surrounding AI transparency. He highlighted the significance of ISO 42001, the international standard for AI management systems, which offers a framework for responsible AI governance. “Regulations are evolving rapidly, but standards like ISO 42001 provide organisations with a structured approach to balancing innovation with accountability,” Porter said.

The panel discussion, led by Prof. Yaniski Ravid, featured representatives from leading AI companies, who shared how their organisations implement transparency in AI systems, particularly in retail and legal applications.

Chamelio: Transforming legal decision-making with explainable AI

Alex Zilberman from Chamelio, a legal intelligence platform built exclusively for in-house legal teams, addressed the role of AI in corporate legal operations. Chamelio changes how in-house legal teams operate through an AI agent that learns and uses the legal knowledge stored in its repository of contracts, policies, compliance documents, corporate records, regulatory filings, and other business-critical legal documents.

Chamelio’s AI agent performs core legal tasks such as extracting key obligations, streamlining contract reviews, monitoring compliance, and delivering actionable insights that would otherwise remain buried in thousands of pages of documents. The platform integrates with existing tools and adapts to a team’s legal knowledge.

“Trust is the number one requirement to build a system that professionals can use,” Zilberman said. “This trust is achieved by providing as much transparency as possible. Our solution allows users to understand where each recommendation comes from, ensuring they can confirm and verify every insight.”

Chamelio avoids the ‘black box’ model by letting legal professionals trace the reasoning behind AI-generated recommendations. For example, when the system encounters areas of a contract that it doesn’t recognise, instead of guessing, it flags the uncertainty and requests human input. This approach keeps legal professionals in control of important decisions, particularly in scenarios such as clauses with no precedent or conflicting legal terms.

Buffers.ai: Changing inventory optimisation

Pini Usha from Buffers.ai shared insights on AI-driven inventory optimisation, an important application in retail. Buffers.ai serves medium to large retail and manufacturing brands, including H&M, P&G, and Toshiba, helping retailers – particularly in the fashion industry – tackle inventory optimisation challenges like forecasting, replenishment, and assortment planning. The company helps ensure the right product quantities are delivered to the correct locations, reducing instances of stockouts and excess inventory. Buffers.ai offers a full-SaaS ERP plugin that integrates with systems like SAP and Priority, providing ROI in months.
“Transparency is key. If businesses cannot understand how AI predicts demand fluctuations or supply chain risks, they will be hesitant to rely on it,” Usha said.

Buffers.ai integrates explainability tools that allow clients to visualise and adjust AI-driven forecasts, helping ensure alignment with real-time business operations and market trends. For example, when placing a new product with no historical data, the system analyses similar product trends, store characteristics, and local demand signals. If a branch has historically shown strong demand for comparable items, the system might recommend a higher quantity even though no sales data exists for the new product. Similarly, when allocating inventory between branches and online stores, the system details factors like regional sales performance, customer traffic patterns, and online conversion rates to explain its recommendations.

Corsight AI: Facial recognition in retail and law enforcement

Matan Noga from Corsight AI discussed the role of explainability in facial recognition technology, which is increasingly used for security and customer experience enhancement in retail. Corsight AI specialises in real-world facial recognition and provides its solutions to law enforcement, airports, malls, and retailers. The company’s technology is used for applications like watchlist alerting, locating missing persons, and forensic investigations.

Corsight AI differentiates itself by focusing on high-speed, real-time recognition in ways compliant with evolving privacy laws and ethical AI guidelines. The company works with government and commercial clients to promote responsible AI adoption, emphasising the importance of explainability in building trust and ensuring ethical use.

ImiSight: AI-powered image intelligence

Daphne Tapia from ImiSight highlighted the importance of explainability in AI-powered image intelligence, particularly in high-stakes applications like border security and environmental monitoring. ImiSight specialises in multi-sensor integration and analysis, utilising AI/ML algorithms to detect changes, anomalies, and objects in sectors like land encroachment, environmental monitoring, and infrastructure maintenance.

“AI explainability means understanding why a specific object or change was detected. We prioritise traceability and transparency to ensure users can trust our system’s outputs,” Tapia said. ImiSight continuously refines its models based on real-world data and user feedback, and collaborates with regulatory agencies to ensure its AI meets international compliance standards.

The panel underscored the important role of AI explainability in fostering trust, accountability, and ethical use of AI technologies, particularly in retail and other high-stakes industries. By prioritising transparency and human oversight, organisations can ensure AI systems are both effective and trustworthy, aligning with evolving regulatory standards and public expectations.
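The flag-and-escalate pattern several panellists described (route low-confidence outputs to a human instead of guessing) can be summarised in a few lines. The sketch below is a generic illustration with an invented threshold and data, not Chamelio’s or any other vendor’s actual implementation.

```python
# Generic sketch of flag-and-escalate: when the model's confidence in a
# finding falls below a threshold, request human review instead of guessing.
# Illustrative only; threshold and data are invented.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # assumption: tuned per use case in practice

@dataclass
class Finding:
    clause: str
    label: str
    confidence: float
    source_ref: str  # pointer back to the contract text, for traceability

def route(finding: Finding) -> str:
    if finding.confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO: {finding.label} ({finding.source_ref})"
    return f"HUMAN REVIEW: '{finding.clause[:40]}...' ({finding.source_ref})"

findings = [
    Finding("Either party may terminate on 30 days notice", "termination", 0.97, "s.12.1"),
    Finding("Liability is capped except as set out in Exhibit Q", "liability", 0.55, "s.9.4"),
]
for f in findings:
    print(route(f))
```

The source_ref field is the explainability hook: every automated label carries a pointer back to the text that produced it, so a reviewer can verify the recommendation.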
Watch the full session here

The post Explainability for retailers using AI: Insights from experts appeared first on AI News. View the full article
-
Deepgram has unveiled Nova-3 Medical, an AI speech-to-text (STT) model tailored for transcription in the demanding environment of healthcare. Designed to integrate seamlessly with existing clinical workflows, Nova-3 Medical aims to address the growing need for accurate and efficient transcription across the UK’s public NHS and private healthcare landscape.

As electronic health records (EHRs), telemedicine, and digital health platforms become increasingly prevalent, the demand for reliable AI-powered transcription has never been higher. However, traditional speech-to-text models often struggle with the complex and specialised vocabulary used in clinical settings, leading to errors and “hallucinations” that can compromise patient care.

Deepgram’s Nova-3 Medical is engineered to overcome these challenges. The model leverages advanced machine learning and specialised medical vocabulary training to accurately capture medical terms, acronyms, and clinical jargon, even in challenging audio conditions. This is particularly crucial in environments where healthcare professionals may move away from recording devices.

“Nova‑3 Medical represents a significant leap forward in our commitment to transforming clinical documentation through AI,” said Scott Stephenson, CEO of Deepgram. “By addressing the nuances of clinical language and offering unprecedented customisation, we are empowering developers to build products that improve patient care and operational efficiency.”

One of the key features of the model is its ability to deliver structured transcriptions that integrate seamlessly with clinical workflows and EHR systems, ensuring vital patient data is accurately organised and readily accessible. The model also offers flexible, self-service customisation, including Keyterm Prompting for up to 100 key terms, allowing developers to tailor the solution to the unique needs of various medical specialties. Versatile deployment options – including on-premises and Virtual Private Cloud (VPC) configurations – ensure enterprise-grade security and HIPAA compliance, which is crucial for meeting UK data protection regulations.

“Speech-to-text for enterprise use cases is not trivial, and there is a fundamental difference between voice AI platforms designed for enterprise use cases vs entertainment use cases,” said Kevin Fredrick, Managing Partner at OneReach.ai. “Deepgram’s Nova-3 model and Nova-3-Medical model are leading voice AI offerings, including TTS, in terms of the accuracy, latency, efficiency, and scalability required for enterprise use cases.”

Benchmarking Nova-3 Medical: Accuracy, speed, and efficiency

Deepgram has conducted benchmarking to demonstrate the performance of Nova-3 Medical. The model claims to deliver industry-leading transcription accuracy, optimising both overall word recognition and critical medical term accuracy.

- Word Error Rate (WER): With a median WER of 3.45%, Nova-3 Medical outperforms competitors, achieving a 63.6% reduction in errors compared to the next best competitor. This enhanced precision minimises manual corrections and streamlines workflows.
- Keyword Error Rate (KER): Crucially, Nova-3 Medical achieves a KER of 6.79%, marking a 40.35% reduction in errors compared to the next best competitor. This ensures that critical medical terms – such as drug names and conditions – are accurately transcribed, reducing the risk of miscommunication and patient safety issues.
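For developers, using the model amounts to a single API call. The sketch below shows what requesting a transcript with the Keyterm Prompting feature described above might look like against Deepgram’s REST endpoint; the endpoint and parameter names follow Deepgram’s public API conventions but should be verified against current documentation, and the API key, audio URL, and key terms are placeholders.

```python
# Sketch: requesting a transcript with keyterm prompting from Deepgram's API.
# Endpoint and parameter names are assumptions based on Deepgram's public API
# conventions; verify against current docs. Key and audio URL are placeholders.
import requests

DEEPGRAM_API_KEY = "YOUR_API_KEY"  # placeholder

response = requests.post(
    "https://api.deepgram.com/v1/listen",
    headers={
        "Authorization": f"Token {DEEPGRAM_API_KEY}",
        "Content-Type": "application/json",
    },
    params={
        "model": "nova-3-medical",
        # Keyterm Prompting: boost recognition of critical clinical terms
        # (requests encodes the list as repeated keyterm=... query params).
        "keyterm": ["metoprolol", "atrial fibrillation", "echocardiogram"],
    },
    json={"url": "https://example.com/recordings/consult.wav"},  # placeholder
    timeout=60,
)
response.raise_for_status()
result = response.json()
print(result["results"]["channels"][0]["alternatives"][0]["transcript"])
```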
In addition to accuracy, Nova-3 Medical excels in real-time applications. The model transcribes speech 5-40x faster than many alternative speech recognition vendors, making it ideal for telemedicine and digital health platforms. Its scalable architecture ensures high performance even as transcription volumes increase.

Furthermore, Nova-3 Medical is designed to be cost-effective. Starting at $0.0077 per minute of streaming audio – which Deepgram claims is more than twice as affordable as leading cloud providers – it allows healthcare tech companies to reinvest in innovation and accelerate product development.

Deepgram’s Nova-3 Medical aims to empower developers to build transformative medical transcription applications, driving exceptional outcomes across healthcare.

(Photo by Alexander Sinn)

See also: Autoscience Carl: The first AI scientist writing peer-reviewed papers

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Deepgram Nova-3 Medical: AI speech model cuts healthcare transcription errors appeared first on AI News. View the full article
-
Opera has introduced “Browser Operator,” a native AI agent designed to perform tasks for users directly within the browser. Rather than acting as a separate tool, Browser Operator is an extension of the browser itself, designed to empower users by automating repetitive tasks like purchasing products, completing online forms, and gathering web content. Unlike server-based AI integrations, which require sensitive data to be sent to third-party servers, Browser Operator processes tasks locally within the Opera browser.

Opera’s demonstration video showcases how Browser Operator can streamline an everyday task like buying socks. Instead of manually scrolling through product pages or filling out payment forms, users can delegate the entire process to Browser Operator, allowing them to shift focus to activities that matter more to them, such as spending time with loved ones.

Harnessing natural language processing powered by Opera’s AI Composer Engine, Browser Operator interprets written instructions from users and executes corresponding tasks within the browser. All operations occur locally on a user’s device, leveraging the browser’s own infrastructure to safely and swiftly complete commands.

If Browser Operator encounters a sensitive step in the process, such as entering payment details or approving an order, it pauses and requests the user’s input. You also have the freedom to intervene and take control of the process at any time. Every step Browser Operator takes is transparent and fully reviewable, giving users a clear understanding of how tasks are being executed. If mistakes occur – like placing an incorrect order – you can further instruct the AI agent to make amends, such as cancelling the order or adjusting a form.

The key differentiators: Privacy, performance, and precision

What sets Browser Operator apart from other AI-integrated tools is its localised, privacy-first architecture. Unlike competitors that depend on screenshots or video recordings to understand webpage content, Opera’s approach uses the Document Object Model (DOM) Tree and browser layout data: a textual representation of the webpage. This difference offers several key advantages:

- Faster task completion: Browser Operator doesn’t need to “see” and interpret pixels on the screen or emulate mouse movements. Instead, it accesses web page elements directly, avoiding unnecessary overhead and allowing it to process pages holistically without scrolling.
- Enhanced privacy: With all operations conducted in the browser itself, user data – including logins, cookies, and browsing history – remains secure on the local device. No screenshots, keystrokes, or personal information are sent to Opera’s servers.
- Easier interaction with page elements: The AI can engage with elements hidden from the user’s view, such as behind cookie popups or verification dialogs, enabling seamless access to web page content.

By enabling the browser to autonomously perform tasks, Opera is taking a significant step forward in making browsers “agentic”: not just tools for accessing the internet, but assistants that actively enhance productivity.
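The DOM-based approach in the list above can be illustrated in a few lines: the agent queries the page’s element tree as text instead of interpreting pixels. This is a generic sketch with made-up markup, not Opera’s implementation.

```python
# Sketch of DOM-based (rather than screenshot-based) page understanding:
# read page structure as text and find actionable elements directly.
# Illustrative only; the markup and selectors below are invented.
from bs4 import BeautifulSoup

html = """
<div class="product" data-sku="SOCK-42">
  <h2>Wool socks</h2>
  <span class="price">$9.99</span>
  <button class="add-to-cart" aria-label="Add Wool socks to cart">Add to cart</button>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# No pixels and no simulated mouse: query the element tree directly,
# even for elements that would be off-screen or behind an overlay.
product = soup.find("div", class_="product")
button = product.find("button", class_="add-to-cart")

print(product["data-sku"], product.find("span", class_="price").text)
print("Would click:", button.get("aria-label"))
```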
See also: You.com ARI: Professional-grade AI research agent for businesses

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Opera introduces browser-integrated AI agent appeared first on AI News. View the full article
-
The newly-formed Autoscience Institute has unveiled ‘Carl,’ the first AI system crafting academic research papers to pass a rigorous double-blind peer-review process. Carl’s research papers were accepted in the Tiny Papers track at the International Conference on Learning Representations (ICLR). Critically, these submissions were generated with minimal human involvement, heralding a new era for AI-driven scientific discovery.

Meet Carl: The ‘automated research scientist’

Carl represents a leap forward in the role of AI as not just a tool, but an active participant in academic research. Described as “an automated research scientist,” Carl applies natural language models to ideate, hypothesise, and cite academic work accurately. Crucially, Carl can read and comprehend published papers in mere seconds. Unlike human researchers, it works continuously, accelerating research cycles and reducing experimental costs.

According to Autoscience, Carl successfully “ideated novel scientific hypotheses, designed and performed experiments, and wrote multiple academic papers that passed peer review at workshops.” This underlines the potential of AI to not only complement human research but, in many ways, surpass it in speed and efficiency.

Carl is a meticulous worker, but human involvement is still vital

Carl’s ability to generate high-quality academic work is built on a three-step process:

- Ideation and hypothesis formation: Leveraging existing research, Carl identifies potential research directions and generates hypotheses. Its deep understanding of related literature allows it to formulate novel ideas in the field of AI.
- Experimentation: Carl writes code, tests hypotheses, and visualises the resulting data through detailed figures. Its tireless operation shortens iteration times and reduces redundant tasks.
- Presentation: Finally, Carl compiles its findings into polished academic papers, complete with data visualisations and clearly articulated conclusions.

Although Carl’s capabilities make it largely independent, there are points in its workflow where human involvement is still required to adhere to computational, formatting, and ethical standards (a toy sketch of the first checkpoint follows below):

- Greenlighting research steps: To avoid wasting computational resources, human reviewers provide “continue” or “stop” signals during specific stages of Carl’s process. This guidance steers Carl through projects more efficiently but does not influence the specifics of the research itself.
- Citations and formatting: The Autoscience team ensures all references are correctly cited and formatted to meet academic standards. This is currently a manual step but ensures the research aligns with the expectations of its publication venue.
- Assistance with pre-API models: Carl occasionally relies on newer OpenAI and Deep Research models that lack auto-accessible APIs. In such cases, manual interventions – such as copy-pasting outputs – bridge these gaps. Autoscience expects these tasks to be entirely automated in the future when APIs become available.

For Carl’s debut paper, the human team also helped craft the “related works” section and refine the language. These tasks, however, were unnecessary following updates applied before subsequent submissions.
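The greenlighting checkpoint described above reduces to a simple gate between pipeline stages. Here is a toy sketch, purely illustrative and not Autoscience’s actual system:

```python
# Toy sketch of continue/stop greenlighting: the pipeline pauses between
# stages and proceeds only on an explicit human "continue" signal.
# Purely illustrative; not Autoscience's actual system.
STAGES = ["ideation", "experimentation", "presentation"]

def run_stage(name: str) -> None:
    print(f"[agent] running {name} ...")  # stand-in for the real work

def human_greenlight(stage: str) -> bool:
    answer = input(f"[reviewer] continue past '{stage}'? (continue/stop) ")
    return answer.strip().lower() == "continue"

for stage in STAGES:
    run_stage(stage)
    if not human_greenlight(stage):
        print(f"[pipeline] stopped after {stage}; compute saved.")
        break
else:
    print("[pipeline] all stages greenlit; draft ready for citation checks.")
```

The point of the gate is economic rather than scientific: the reviewer controls spend on compute, not the content of the research itself.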
Stringent verification process for academic integrity

Before submitting any research, the Autoscience team undertook a rigorous verification process to ensure Carl’s work met the highest standards of academic integrity:

- Reproducibility: Every line of Carl’s code was reviewed and experiments were rerun to confirm reproducibility. This ensured the findings were scientifically valid and not coincidental anomalies.
- Originality checks: Autoscience conducted extensive novelty evaluations to ensure that Carl’s ideas were new contributions to the field and not rehashed versions of existing publications.
- External validation: A hackathon involving researchers from prominent academic institutions – such as MIT, Stanford University, and U.C. Berkeley – independently verified Carl’s research. Further plagiarism and citation checks were performed to ensure compliance with academic norms.

Undeniable potential, but larger questions remain

Achieving acceptance at a workshop as respected as ICLR is a significant milestone, but Autoscience recognises the greater conversation this milestone may spark. Carl’s success raises larger philosophical and logistical questions about the role of AI in academic settings.

“We believe that legitimate results should be added to the public knowledge base, regardless of where they originated,” explained Autoscience. “If research meets the scientific standards set by the academic community, then who – or what – created it should not lead to automatic disqualification.”

“We also believe, however, that proper attribution is necessary for transparent science, and work purely generated by AI systems should be discernable from that produced by humans.”

Given the novelty of autonomous AI researchers like Carl, conference organisers may need time to establish new guidelines that account for this emerging paradigm, especially to ensure fair evaluation and intellectual attribution standards. To prevent unnecessary controversy at present, Autoscience has withdrawn Carl’s papers from ICLR workshops while these frameworks are being devised.

Moving forward, Autoscience aims to contribute to shaping these evolving standards. The company intends to propose a dedicated workshop at NeurIPS 2025 to formally accommodate research submissions from autonomous research systems.

As the narrative surrounding AI-generated research unfolds, it’s clear that systems like Carl are not merely tools but collaborators in the pursuit of knowledge. But as these systems transcend typical boundaries, the academic community must adapt to fully embrace this new paradigm while safeguarding integrity, transparency, and proper attribution.

(Photo by Rohit Tandon)

See also: You.com ARI: Professional-grade AI research agent for businesses

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Autoscience Carl: The first AI scientist writing peer-reviewed papers appeared first on AI News. View the full article
-
Blockchain has tried to claim many things as its own over the years, from global payment processing to real-world assets. But in artificial intelligence, it’s found synergy with a sector willing to give something back. As this symbiotic relationship has grown, it’s become routine to hear AI and blockchain mentioned in the same breath. While the benefits web3 technology can bring to artificial intelligence are well documented – transparency, P2P economies, tokenisation, censorship resistance, and so on – this is a reciprocal arrangement. In return, AI is fortifying blockchain projects in different ways, enhancing their ability to process vast datasets and automating on-chain processes. The relationship may have taken a while to get started, but blockchain and AI are now entwined.

Trust meets efficiency

While AI brings intelligent automation and data-driven decision-making, blockchain offers security, decentralisation, and transparency. Together, they can address each other’s limitations, offering new opportunities in digital and real-world industries. Blockchain provides a tamper-proof foundation and AI brings adaptability, plus the ability to optimise complex systems. Together, the two promise to enhance scalability, security, and privacy: key pillars for modern finance and supply chain applications.

AI’s ability to analyse large amounts of data is a natural fit for blockchain networks, allowing data archives to be processed in real time. Machine learning algorithms can predict network congestion, as seen with tools like Chainlink’s off-chain computation, which offers dynamic fee adjustments or transaction prioritisation. Security also gains: AI can monitor blockchain activity in real time to identify anomalies more quickly than manual scans, so teams can move to mitigate attacks. Privacy is improved too, with AI managing zero-knowledge proofs and other cryptographic techniques to shield user data, methods explored by projects like Zcash. These types of enhancements make blockchain more robust and attractive to the enterprise.

In DeFi, Giza‘s agent-driven markets embody the convergence of web3 and artificial intelligence. Its protocol runs autonomous agents like ARMA, which manage yield strategies across protocols and offer real-time adaptation. Secured by smart accounts and decentralised execution, agents can deliver positive yields, and currently manage hundreds of thousands of dollars in on-chain assets. Giza shows how AI can optimise decentralised finance and is a project that uses the two technologies to good effect.

Blockchain as AI’s backbone

Blockchain offers AI a decentralised infrastructure to foster trust and collaboration. AI models, often opaque and centralised, face scrutiny over data integrity and bias, issues blockchain counters with transparent, immutable records. Platforms like Ocean Protocol use blockchain to log AI training data, providing traceability without compromising ownership. That can be a boon for sectors like healthcare, where verifiable analytics are important. Decentralisation also enables secure multi-party computation, where AI agents collaborate across organisations – think federated learning for drug discovery – without a central authority, as demonstrated in 2024 by IBM’s blockchain AI pilots. The trustless framework reduces reliance on big tech, helping to democratise AI. While AI can enhance blockchain performance, blockchain itself can provide a foundation for ethical and secure AI deployment.
The transparency and immutability with which blockchain is associated can mitigate AI-related risks by ensuring AI model integrity, for example. AI algorithms and training datasets can be recorded on-chain so they’re auditable (a minimal sketch of this idea follows below). Web3 technology also helps with governance models for AI, as stakeholders can oversee and regulate project development, reducing the risk of biased or unethical AI.

Digital technologies with real-world impact

The synergy between blockchain and AI exists now. In supply chains, AI helps to optimise logistics while blockchain can track item provenance. In energy, blockchain-based smart grids paired with AI can predict demand; Siemens reported a 15% efficiency gain in a 2024 trial of such a system in Germany. These cases highlight how AI scales blockchain’s utility, while blockchain’s security can realise AI’s potential. Together, they create smart, reliable systems.

The relationship between AI and blockchain is less a merger than a mutual enhancement. Blockchain’s trust and decentralisation ground AI’s adaptability, while AI’s optimisation unlocks blockchain’s potential beyond that of a static ledger. From supply chain transparency to DeFi’s capital efficiency, their combined impact is tangible, yet their relationship is just beginning.
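The auditability idea referenced above can be sketched in a few lines: commit a dataset’s content hash to an append-only log, then verify it later. In the sketch below a plain Python list stands in for a real chain, and the dataset is invented.

```python
# Minimal sketch of on-chain auditability for training data: commit the
# dataset's content hash to an append-only log so later audits can confirm
# the data is unchanged. A local list stands in for a real blockchain.
import hashlib
import json
import time

ledger = []  # stand-in for an append-only on-chain log

def commit_dataset(name: str, data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    ledger.append({"dataset": name, "sha256": digest, "ts": time.time()})
    return digest

def verify_dataset(name: str, data: bytes) -> bool:
    digest = hashlib.sha256(data).hexdigest()
    return any(e["dataset"] == name and e["sha256"] == digest for e in ledger)

training_data = b"id,text,label\n1,great product,positive\n"
commit_dataset("sentiment-v1", training_data)

print(verify_dataset("sentiment-v1", training_data))         # True: untouched
print(verify_dataset("sentiment-v1", training_data + b"x"))  # False: tampered
print(json.dumps(ledger, indent=2))
```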
(Image source: Unsplash)

The post Trust meets efficiency: AI and blockchain mutuality appeared first on AI News. View the full article
-
Palo Alto-based You.com has introduced ARI, a professional-grade AI research agent that gives businesses access to competitive insights. ARI (Advanced Research & Insights) delivers comprehensive, accurate, and interactive reports within minutes, potentially shaking up the $250 billion management consulting industry.

You.com claims ARI completes reports that typically require weeks of labour and cost thousands of dollars in just five minutes, at a fraction of traditional expenses. With the ability to process over 400 sources simultaneously – a figure set to grow as the technology scales – ARI promises to deliver “verified citations and insights 3X faster than other currently available solutions.”

Bryan McCann, Co-Founder and CTO of You.com, said: “ARI’s breakthrough is its ability to maintain contextual understanding while processing hundreds of sources simultaneously. When combined with chain-of-thought reasoning and extended test-time compute, ARI is able to discover and incorporate adjacent research areas dynamically as analysis progresses.”

A powerful AI agent for business research

Traditional AI research tools are typically limited to processing between 30 and 40 data sources at a time. ARI stands out by handling hundreds of public and private data streams, ensuring unparalleled accuracy and scope in its analysis. The system doesn’t just stop at summarising data; it enhances the user experience by producing rich, interactive graphs, charts, and visualisations for deeper insights.

Designed to cater equally to high-level professionals and knowledge workers across industries, ARI combines advanced functionality with user-friendly accessibility. This dual-purpose design allows enterprises to deploy it as a personal assistant or as a replacement for expensive research efforts traditionally carried out by consulting firms.

At the heart of ARI is a series of capabilities:

- Simultaneous source analysis: Processes hundreds of data sources, both public and private.
- Chain-of-thought reasoning: Dynamically evolves research parameters as insights emerge.
- Real-time verification: Provides direct validation for every claim and data point.
- Interactive visualisation engine: Automatically generates and cites graphs and charts to enhance reporting.
- Enterprise data integration: Analyses a mix of public and private datasets to deliver actionable insights.

During its initial deployment phase, ARI has demonstrated its versatility and potential for impact across several industries:

- Consulting: By analysing market reports, competitor financials, patent filings, and social sentiment data in hours rather than weeks, ARI supports due diligence with ease.
- Financial services: With the ability to integrate real-time data from earnings calls, SEC filings, and industry news, ARI helps support faster and more accurate investment decisions.
- Healthcare: ARI accelerates the synthesis of clinical trials, medical journals, patient data, and treatment guidelines, providing insights that support evidence-based care.
- Media: From audience data to trending topics and competitor activity, ARI enables the rapid identification of new story angles and anticipates emerging narratives in key markets.

Dr Dennis Ballwieser, Managing Director and Editor at Wort & Bild Verlag, commented: “The research time has dropped from a few days to just a few hours, and the accuracy across both German and English content has been remarkable. What excites me most is the opportunity to democratise access to professional-grade research.
With ARI’s ability to analyse hundreds of verifiable sources simultaneously while maintaining accuracy, we can now offer professional insights to organisations of all sizes at a fraction of the traditional cost.”

Accelerating access to strategic insights

The potential of technologies like ARI goes beyond time and cost savings. For companies such as global consultancy firm APCO Worldwide, ARI’s capabilities provide a level of quality and personalisation that aligns with the modern needs of clients. Philip Fraser, CIO at APCO Worldwide, said: “To us, ARI represents a step-change in the quality and alignment to the needs of our clients. We are very excited about working with You.com to integrate the power of ARI into our award-winning, proprietary Margy AI platform.”

Through such integrations, ARI has the potential to move organisations away from periodic, resource-intensive research projects towards continuous real-time intelligence that drives better decision-making across all levels. Richard Socher, Co-Founder and CEO of You.com, added: “When every employee has instant access to comprehensive, validated insights that previously required teams of consultants and weeks of work, it changes the speed and quality of business decision-making. ARI represents a paradigm shift in how organisations operate.”

ARI is the newest addition to You.com’s expanding AI agent ecosystem, which has already seen the development of over 50,000 custom agents since late 2024. The company has raised $99 million in funding from investors such as Salesforce Ventures, NVIDIA, and Georgian Ventures. With ARI, You.com aims to set a new standard for enterprise-grade AI research agents as part of broader decision-making systems.

(Photo by Jeremy Beadle)

See also: Endor Labs: AI transparency vs ‘open-washing’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post You.com ARI: Professional-grade AI research agent for businesses appeared first on AI News. View the full article
-
EU-funded initiative CERTAIN aims to drive ethical AI compliance in Europe amid increasing regulations like the EU AI Act. CERTAIN, short for “Certification for Ethical and Regulatory Transparency in Artificial Intelligence”, will focus on the development of tools and frameworks that promote transparency, compliance, and sustainability in AI technologies.

The project is led by Idemia Identity & Security France in collaboration with 19 partners across ten European countries, including the St. Pölten University of Applied Sciences (UAS) in Austria. With its official launch in January 2025, CERTAIN could serve as a blueprint for global AI governance.

Driving ethical AI practices in Europe

According to Sebastian Neumaier, Senior Researcher at the St. Pölten UAS’ Institute of IT Security Research and project manager for CERTAIN, the goal is to address crucial regulatory and ethical challenges. “In CERTAIN, we want to develop tools that make AI systems transparent and verifiable in accordance with the requirements of the EU’s AI Act. Our goal is to develop practically feasible solutions that help companies to efficiently fulfil regulatory requirements and sustainably strengthen confidence in AI technologies,” emphasised Neumaier.

To achieve this, CERTAIN aims to create user-friendly tools and guidelines that simplify even the most complex AI regulations, helping organisations in both the public and private sectors navigate and implement these rules effectively. The overall intent is to provide a bridge between regulation and innovation, empowering businesses to leverage AI responsibly while fostering public trust.

Harmonising standards and improving sustainability

One of CERTAIN’s primary objectives is to establish consistent standards for data sharing and AI development across Europe. By setting industry-wide norms for interoperability, the project seeks to improve collaboration and efficiency in the use of AI-driven technologies. The effort to harmonise data practices isn’t just about compliance; it also aims to unlock new opportunities for innovation. CERTAIN’s solutions will create open and trustworthy European data spaces, essential components for driving sustainable economic growth.

In line with the EU’s Green Deal, CERTAIN places a strong focus on sustainability. AI technologies, while transformative, come with significant environmental challenges, such as high energy consumption and resource-intensive data processing. CERTAIN will address these issues by promoting energy-efficient AI systems and advocating for eco-friendly methods of data management. This dual approach not only aligns with EU sustainability goals but also ensures that AI development is carried out with the health of the planet in mind.

A collaborative framework to unlock AI innovation

A unique aspect of CERTAIN is its approach to fostering collaboration and dialogue among stakeholders. The project team at St. Pölten UAS is actively engaging with researchers, tech companies, policymakers, and end-users to co-develop, test, and refine ideas, tools, and standards. This practice-oriented exchange extends beyond product development. CERTAIN also serves as a central authority for informing stakeholders about legal, ethical, and technical matters related to AI and certification. By maintaining open channels of communication, CERTAIN ensures that its outcomes are not only practical but also widely adopted.

CERTAIN is part of the EU’s Horizon Europe programme, specifically under Cluster 4: Digital, Industry, and Space.
The project’s multidisciplinary and international consortium includes leading academic institutions, industrial giants, and research organisations, making it a powerful collective effort to shape the future of AI in Europe. In January 2025, representatives from all 20 consortium members met in Osny, France, to kick off their collaborative mission. The two-day meeting set the tone for the project’s ambitious agenda, with partners devising strategies for tackling the regulatory, technical, and ethical hurdles of AI.

Ensuring compliance with ethical AI regulations in Europe

As the EU’s AI Act edges closer to implementation, guidelines and tools like those developed under CERTAIN will be pivotal. The Act will impose strict requirements on AI systems, particularly those deemed “high-risk,” such as applications in healthcare, transportation, and law enforcement. While these regulations aim to ensure safety and accountability, they also pose challenges for organisations seeking to comply.

CERTAIN seeks to alleviate these challenges by providing actionable solutions that align with Europe’s legal framework while encouraging innovation. By doing so, the project will play a critical role in positioning Europe as a global leader in ethical AI development.

See also: Endor Labs: AI transparency vs ‘open-washing’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post CERTAIN drives ethical AI compliance in Europe appeared first on AI News. View the full article
-
Fetch.ai has launched ASI-1 Mini, a native Web3 large language model designed to support complex agentic AI workflows. Described as a game-changer for AI accessibility and performance, ASI-1 Mini is heralded for delivering results on par with leading LLMs at significantly reduced hardware costs, a leap forward in making AI enterprise-ready.

ASI-1 Mini integrates into Web3 ecosystems, enabling secure and autonomous AI interactions. Its release sets the foundation for broader innovation within the AI sector, including the imminent launch of the Cortex suite, which will further enhance the use of large language models and generalised intelligence.

“This launch marks the beginning of ASI-1 Mini’s rollout and a new era of community-owned AI. By decentralising AI’s value chain, we’re empowering the Web3 community to invest in, train, and own foundational AI models,” said Humayun Sheikh, CEO of Fetch.ai and Chairman of the Artificial Superintelligence Alliance. “We’ll soon introduce advanced agentic tool integration, multi-modal capabilities, and deeper Web3 synergy to enhance ASI-1 Mini’s automation capabilities while keeping AI’s value creation in the hands of its contributors.”

Democratising AI with Web3: Decentralised ownership and shared value

Key to Fetch.ai’s vision is the democratisation of foundational AI models, allowing the Web3 community not just to use, but also to train and own proprietary LLMs like ASI-1 Mini. This decentralisation unlocks opportunities for individuals to benefit directly from the economic growth of cutting-edge AI models, which could achieve multi-billion-dollar valuations. Through Fetch.ai’s platform, users can invest in curated AI model collections, contribute to their development, and share in generated revenues. For the first time, decentralisation is driving AI model ownership, ensuring financial benefits are more equitably distributed.

Advanced reasoning and tailored performance

ASI-1 Mini introduces adaptability in decision-making with four dynamic reasoning modes: Multi-Step, Complete, Optimised, and Short Reasoning. This flexibility allows it to balance depth and precision based on the specific task at hand. Whether performing intricate, multi-layered problem-solving or delivering concise, actionable insights, ASI-1 Mini adapts dynamically for maximum efficiency. Its Mixture of Models (MoM) and Mixture of Agents (MoA) frameworks further enhance this versatility:

- Mixture of Models (MoM): ASI-1 Mini dynamically selects from a suite of specialised AI models, each optimised for specific tasks or datasets. This ensures high efficiency and scalability, especially for multi-modal AI and federated learning.
- Mixture of Agents (MoA): Independent agents with unique knowledge and reasoning capabilities work collaboratively to solve complex tasks. The system’s coordination mechanism ensures efficient task distribution, paving the way for decentralised AI models that thrive in dynamic, multi-agent systems.

This architecture is built on three interacting layers:

- Foundational layer: ASI-1 Mini serves as the core intelligence and orchestration hub.
- Specialisation layer (MoM Marketplace): Houses diverse expert models, accessible through the ASI platform.
- Action layer (AgentVerse): Features agents capable of managing live databases, integrating APIs, facilitating decentralised workflows, and more.

By selectively activating only the necessary models and agents, the system ensures performance, precision, and scalability in real-time tasks.
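The selective-activation idea behind MoM can be illustrated with a toy router: classify the task, then invoke only the matching expert. The expert names and keyword classifier below are invented for illustration; ASI-1 Mini’s actual routing is a learned mechanism, not keyword rules.

```python
# Toy sketch of Mixture-of-Models routing: classify the incoming task and
# dispatch to one specialised expert, leaving the others unactivated.
# Experts and the keyword classifier are invented for illustration.
from typing import Callable

EXPERTS: dict[str, Callable[[str], str]] = {
    "code":    lambda q: f"[code-expert] draft fix for: {q}",
    "finance": lambda q: f"[finance-expert] analysis of: {q}",
    "general": lambda q: f"[generalist] answer to: {q}",
}

def classify(query: str) -> str:
    q = query.lower()
    if any(w in q for w in ("bug", "function", "compile")):
        return "code"
    if any(w in q for w in ("yield", "portfolio", "price")):
        return "finance"
    return "general"

def route(query: str) -> str:
    expert = classify(query)      # only the selected expert is activated,
    return EXPERTS[expert](query) # which keeps per-query compute low

print(route("Why does this function fail to compile?"))
print(route("Rebalance my portfolio toward stable yield"))
```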
Transforming AI efficiency and accessibility

Unlike traditional LLMs, which come with high computational overheads, ASI-1 Mini is optimised for enterprise-grade performance on just two GPUs, reducing hardware costs by a remarkable eightfold. For businesses, this means reduced infrastructure costs and increased scalability, breaking down financial barriers to high-performance AI integration. On benchmark tests like Massive Multitask Language Understanding (MMLU), ASI-1 Mini matches or surpasses leading LLMs in specialised domains such as medicine, history, business, and logical reasoning.

Rolling out in two phases, ASI-1 Mini will soon process vastly larger datasets with upcoming context window expansions:

- Up to 1 million tokens: Allows the model to analyse complex documents or technical manuals.
- Up to 10 million tokens: Enables high-stakes applications like legal record review, financial analysis, and enterprise-scale datasets.

These enhancements will make ASI-1 Mini invaluable for complex and multi-layered tasks.

Tackling the “black-box” problem

The AI industry has long faced the challenge of addressing the black-box problem, where deep learning models reach conclusions without clear explanations. ASI-1 Mini mitigates this issue with continuous multi-step reasoning, facilitating real-time corrections and optimised decision-making. While it doesn’t entirely eliminate opacity, ASI-1 provides more explainable outputs, which is critical for industries like healthcare and finance. Its multi-expert model architecture not only improves transparency but also optimises complex workflows across diverse sectors. From managing databases to executing real-time business logic, ASI-1 outperforms traditional models in both speed and reliability.

AgentVerse integration: Building the agentic AI economy

ASI-1 Mini is set to connect with AgentVerse, Fetch.ai’s agent marketplace, providing users with the tools to build and deploy autonomous agents capable of real-world task execution via simple language commands. For example, users could automate trip planning, restaurant reservations, or financial transactions through “micro-agents” hosted on the platform. This ecosystem enables open-source AI customisation and monetisation, creating an “agentic economy” where developers and businesses thrive symbiotically. Developers can monetise micro-agents, while users gain seamless access to tailored AI solutions.

As its agentic ecosystem matures, ASI-1 Mini aims to evolve into a multi-modal powerhouse capable of processing structured text, images, and complex datasets with context-aware decision-making.

See also: Endor Labs: AI transparency vs ‘open-washing’

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Fetch.ai launches first Web3 agentic AI model appeared first on AI News. View the full article
-
As the AI industry focuses on transparency and security, debates around the true meaning of “openness” are intensifying. Experts from open-source security firm Endor Labs weighed in on these pressing topics.

Andrew Stiefel, Senior Product Marketing Manager at Endor Labs, emphasised the importance of applying lessons learned from software security to AI systems. “The US government’s 2021 Executive Order on Improving America’s Cybersecurity includes a provision requiring organisations to produce a software bill of materials (SBOM) for each product sold to federal government agencies.” An SBOM is essentially an inventory detailing the open-source components within a product, helping detect vulnerabilities. Stiefel argued that “applying these same principles to AI systems is the logical next step.”

“Providing better transparency for citizens and government employees not only improves security,” he explained, “but also gives visibility into a model’s datasets, training, weights, and other components.”

What does it mean for an AI model to be “open”?

Julien Sobrier, Senior Product Manager at Endor Labs, added crucial context to the ongoing discussion about AI transparency and “openness,” breaking down the complexity inherent in categorising AI systems as truly open. “An AI model is made of many components: the training set, the weights, and programs to train and test the model, etc. It is important to make the whole chain available as open source to call the model ‘open’. It is a broad definition for now.”

Sobrier noted the lack of consistency across major players, which has led to confusion about the term. “Among the main players, the concerns about the definition of ‘open’ started with OpenAI, and Meta is in the news now for their LLAMA model even though that’s ‘more open’. We need a common understanding of what an open model means. We want to watch out for any ‘open-washing,’ as we saw it with free vs open-source software.”

One potential pitfall, Sobrier highlighted, is the increasingly common practice of “open-washing,” where organisations claim transparency while imposing restrictions. “With cloud providers offering a paid version of open-source projects (such as databases) without contributing back, we’ve seen a shift in many open-source projects: the source code is still open, but they added many commercial restrictions.”

“Meta and other ‘open’ LLM providers might go this route to keep their competitive advantage: more openness about the models, but preventing competitors from using them,” Sobrier warned.

DeepSeek aims to increase AI transparency

DeepSeek, one of the rising, albeit controversial, players in the AI industry, has taken steps to address some of these concerns by making portions of its models and code open-source. The move has been praised for advancing transparency while providing security insights.

“DeepSeek has already released the models and their weights as open-source,” said Andrew Stiefel. “This next move will provide greater transparency into their hosted services, and will give visibility into how they fine-tune and run these models in production.” Such transparency has significant benefits, noted Stiefel. “This will make it easier for the community to audit their systems for security risks and also for individuals and organisations to run their own versions of DeepSeek in production.”

Beyond security, DeepSeek also offers a roadmap on how to manage AI infrastructure at scale.
Beyond security, DeepSeek also offers a roadmap on how to manage AI infrastructure at scale. “From a transparency side, we’ll see how DeepSeek is running their hosted services. This will help address security concerns that emerged after it was discovered they left some of their ClickHouse databases unsecured.”

Stiefel highlighted that DeepSeek’s practices with tools like Docker, Kubernetes (K8s), and other infrastructure-as-code (IaC) configurations could empower startups and hobbyists to build similar hosted instances.

Open-source AI is hot right now

DeepSeek’s transparency initiatives align with the broader trend toward open-source AI. A report by IDC reveals that 60% of organisations are opting for open-source AI models over commercial alternatives for their generative AI (GenAI) projects. Endor Labs research further indicates that organisations use, on average, between seven and twenty-one open-source models per application. The reasoning is clear: leveraging the best model for specific tasks and controlling API costs.

“As of February 7th, Endor Labs found that more than 3,500 additional models have been trained or distilled from the original DeepSeek R1 model,” said Stiefel. “This shows both the energy in the open-source AI model community, and why security teams need to understand both a model’s lineage and its potential risks.”

For Sobrier, the growing adoption of open-source AI models reinforces the need to evaluate their dependencies. “We need to look at AI models as major dependencies that our software depends on. Companies need to ensure they are legally allowed to use these models but also that they are safe to use in terms of operational risks and supply chain risks, just like open-source libraries.” He emphasised that any risks can extend to training data: “They need to be confident that the datasets used for training the LLM were not poisoned or had sensitive private information.”

Building a systematic approach to AI model risk

As open-source AI adoption accelerates, managing risk becomes ever more critical. Stiefel outlined a systematic approach centred around three key steps:

Discovery: Detect the AI models your organisation currently uses.

Evaluation: Review these models for potential risks, including security and operational concerns.

Response: Set and enforce guardrails to ensure safe and secure model adoption.

“The key is finding the right balance between enabling innovation and managing risk,” Stiefel said. “We need to give software engineering teams latitude to experiment but must do so with full visibility. The security team needs line-of-sight and the insight to act.”

Sobrier further argued that the community must develop best practices for safely building and adopting AI models. A shared methodology is needed to evaluate AI models across parameters such as security, quality, operational risks, and openness.

Beyond transparency: Measures for a responsible AI future

To ensure the responsible growth of AI, the industry must adopt controls that operate across several vectors:

SaaS models: Safeguarding employee use of hosted models.

API integrations: Developers embedding third-party APIs like DeepSeek into applications which, thanks to OpenAI-compatible interfaces, can switch providers with just a couple of lines of code (see the sketch after this list).

Open-source models: Developers leveraging community-built models or creating their own models from existing foundations maintained by companies like DeepSeek.
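That “couple of lines” claim is easy to picture. A minimal sketch, assuming the official openai Python SDK and a provider that exposes an OpenAI-compatible endpoint; the base URL, model name, and API key below are placeholders, not any particular vendor’s values:

```python
# Swapping a hosted model provider often means changing only the
# endpoint and model name, because many providers mimic the OpenAI API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                          # placeholder credential
)

response = client.chat.completions.create(
    model="example-chat-model",  # placeholder model name
    messages=[{"role": "user", "content": "Summarise this incident report."}],
)
print(response.choices[0].message.content)
```

This flexibility is exactly why the controls above matter: a deployment can move to a different provider with almost no code-review surface to catch it.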
Sobrier warned of complacency in the face of rapid AI progress. “The community needs to build best practices to develop safe and open AI models,” he advised, “and a methodology to rate them along security, quality, operational risks, and openness.”

As Stiefel succinctly summarised: “Think about security across multiple vectors and implement the appropriate controls for each.”

See also: AI in 2025: Purpose-driven models, human integration, and more

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Endor Labs: AI transparency vs ‘open-washing’ appeared first on AI News.

View the full article
-
DeepSeek, a Chinese AI startup aiming for artificial general intelligence (AGI), announced plans to open-source five repositories starting next week as part of its commitment to transparency and community-driven innovation. However, this development comes against the backdrop of mounting controversies that have drawn parallels to the TikTok saga.

Today, DeepSeek shared its intentions in a tweet that outlined its vision of open collaboration: “We’re a tiny team at DeepSeek exploring AGI. Starting next week, we’ll be open-sourcing five repos, sharing our small but sincere progress with full transparency.”

The repositories – which the company describes as “documented, deployed, and battle-tested in production” – include fundamental building blocks of DeepSeek’s online service. By open-sourcing its tools, DeepSeek hopes to contribute to the broader AI research community. “As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey. No ivory towers – just pure garage-energy and community-driven innovation,” the company said.

This philosophy has drawn praise for fostering collaboration in a field that often suffers from secrecy, but DeepSeek’s rapid rise has also raised eyebrows. Despite being a small team with a mission rooted in transparency, the company has been under intense scrutiny amid allegations of data misuse and geopolitical entanglements.

Rising fast, under fire

Practically unknown until recently, DeepSeek burst onto the scene with a business model that stood in stark contrast to more established players like OpenAI and Google. Offering its advanced AI capabilities for free, DeepSeek quickly gained global acclaim for its cutting-edge performance. However, its exponential rise has also sparked debates about the trade-offs between innovation and privacy.

US lawmakers are now pushing for a ban on DeepSeek after security researchers found the app transferring user data to a banned state-owned company. A probe has also been launched by Microsoft and OpenAI over a breach of the latter’s systems by a group allegedly linked to DeepSeek. Concerns about data collection and potential misuse have triggered comparisons to the controversies surrounding TikTok, another Chinese tech success story grappling with regulatory pushback in the West.

DeepSeek continues AGI innovation amid controversy

DeepSeek’s commitment to open-source its technology appears timed to deflect criticism and reassure sceptics about its intentions. Open-sourcing has long been heralded as a way to democratise technology and increase transparency, and DeepSeek’s “daily unlocks”, which are set to begin soon, could offer the community reassuring insight into its operations.

Nevertheless, questions remain over how much of the technology will be open for scrutiny and whether the move is an attempt to shift the narrative amid growing political and regulatory pressure. It’s unclear whether this balancing act will be enough to satisfy lawmakers or deter critics, but one thing is certain: DeepSeek’s open-source leap marks another turn in its dramatic rise. While the company’s motto of “garage-energy and community-driven innovation” resonates with developers eager for open collaboration, its future may rest as much on its ability to address security concerns as on its technical prowess.

(Photo by Solen Feyissa)

See also: DeepSeek’s AI dominance expands from EVs to e-scooters in China

Want to learn more about AI and big data from industry leaders?
Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including IoT Tech Expo, Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post DeepSeek to open-source AGI research amid privacy concerns appeared first on AI News. View the full article
-
Former OpenAI CTO Mira Murati has announced the launch of Thinking Machines, a new AI research and product company. With a mission to democratise access to AI and create systems that are both customisable and capable of working collaboratively with humans, the startup is setting ambitious goals to transform how AI integrates into everyday life and industry. “We’re building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals,” the company explains. Bridging gaps in the current AI landscape Thinking Machines aims to address key gaps in the current AI landscape. While AI capabilities have advanced dramatically, there remain significant challenges in making these technologies accessible and practical for widespread use. Currently, knowledge about cutting-edge AI systems is concentrated among a select few research labs, which limits public understanding and hinders broader innovation. The company highlights that systems today are also challenging to customise for individual needs and values, a critical hurdle for real-world adoption. Thinking Machines sees the solution in creating AI systems that are more widely understood, customisable, and capable. The company plans to combine intellectual openness, advanced infrastructure, and innovative AI safety practices to empower both researchers and end-users. I started Thinking Machines Lab alongside a remarkable team of scientists, engineers, and builders. We're building three things: – Helping people adapt AI systems to work for their specific needs – Developing strong foundations to build more capable AI systems – Fostering a… — Mira Murati (@miramurati) February 18, 2025 Murati’s team brings formidable expertise to this mission, comprising scientists, engineers, and technologists responsible for creating some of the most widely used AI tools, such as OpenAI’s ChatGPT, Character.ai, and open-source frameworks like PyTorch and OpenAI Gym. A human-centric approach to AI A key cornerstone of the company’s philosophy is collaboration. Thinking Machines intends to maintain a culture of openness by sharing research papers, technical blog posts, and code with the wider AI community. “Scientific progress is a collective effort,” the company states. “We believe that we’ll most effectively advance humanity’s understanding of AI by collaborating with the wider community of researchers and builders.” This open research ethos reflects a broader trend in the AI sector, where transparency and community engagement are increasingly seen as drivers of not just innovation, but also societal trust in AI technologies. Unlike many organisations focused purely on creating autonomous AI, Thinking Machines is also putting a strong emphasis on human-AI collaboration. Multimodal systems – AI capable of working with a combination of formats like text, video, and imagery – are central to this human-centric vision. These systems are designed to interact seamlessly with people, helping users leverage AI to achieve specific goals and solve meaningful problems. The focus on personalisation also sets Thinking Machines apart. The team envisions AI systems that go beyond narrow-use cases, enabling diverse applications across fields ranging from scientific research to engineering and creative work. 
The company is particularly interested in creating AI tools that adapt to individual expertise and situations, allowing users to “make AI work for their unique needs and goals.” Thinking Machines will prioritise strong foundations While many AI startups are rushing to deploy systems, Thinking Machines is aiming to get the foundations right. This rests on two major pillars: model intelligence and high-quality infrastructure. Murati’s team is building frontier AI models capable of pushing the boundaries of fields like programming and scientific discovery. These advanced technologies could enable revolutionary applications, from uncovering new scientific insights to achieving engineering breakthroughs. Equally, the company is committed to designing efficient, secure, and user-friendly infrastructure to drive productivity and support the next generation of AI systems. Rather than opting for shortcuts, Thinking Machines is taking a patient, methodical approach to maximise long-term impact. Advanced multimodal capabilities are another area of focus. By integrating modalities like language, imagery, and sensory data, the company aims to build systems capable of richer communication and deeper real-world integration. Ethical AI through product-driven learning Thinking Machines also plans to intertwine research and product design, an approach that not only informs innovation but also ensures relevance and usability. Products will drive iterative learning, enabling the team to gain insights from real-world deployment. Meanwhile, real-world testing will further solidify the company’s commitment to AI safety, blending proactive research with rigorous post-deployment monitoring. Murati’s team outlines three key principles for advancing AI safety: Maintaining a high safety bar to prevent misuse while preserving user freedoms. Sharing best practices with the industry for building secure AI systems. Accelerating external research on AI alignment by providing access to code, datasets, and model specifications. The team also recognises that the most meaningful breakthroughs often come from “rethinking our objectives, not just optimising existing metrics.” By measuring real-world value, Thinking Machines hopes to create AI systems that truly benefit society across a wider array of use cases. Thinking Machines: A fresh AI startup, guided by experience The launch of Thinking Machines signifies the next chapter for Mira Murati, who played a crucial role in leading some of OpenAI’s most successful projects. I shared the following note with the OpenAI team today. pic.twitter.com/nsZ4khI06P — Mira Murati (@miramurati) September 25, 2024 Murati’s wealth of experience, alongside a team of world-class AI creators, puts the new venture on solid footing to make a significant impact on the sector. By affirming a commitment to openness, collaboration, and long-term thinking, the startup may provide an antidote to common criticisms of the fast-moving AI world, from opacity to ethical risks. The mission is clear: to empower people in every industry to harness AI’s transformative potential—on their terms. (Photo by Nejc Soklič) See also: Grok 3: The next-gen ‘truth-seeking’ AI model Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. 
Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Thinking Machines: Ex-OpenAI CTO’s new AI startup appeared first on AI News. View the full article
-
South Korea is preparing to host the world’s largest AI data centre by capacity, setting its sights on global technological leadership. The monumental project, led by Fir Hills – a division of California-based Stock Farm Road, Inc. (SFR) – is expected to commence construction in winter 2025 and aims to generate an initial annual revenue of $3.5 billion. The facility, located in the Jeollanam-do Province, is the result of a strategic collaboration between seasoned innovators. SFR was co-founded by LG heir Brian Koo and Dr Amin Badr-El-Din, a veteran of successful ventures in technology, energy, and global public-private partnerships. “This is more than just a technological milestone; it’s a strategic leap forward for Korea’s global technological leadership,” said Dr Badr-El-Din. “We are incredibly proud to partner with Stock Farm Road and the Jeollanam-do government to build this crucial infrastructure, creating an unprecedented opportunity to build the foundation for next-generation AI.” The initial project is valued at over $10 billion, with the potential to grow to $35 billion. Upon completion in 2028, the centre will boast a 3-gigawatt capacity, making it the largest AI data facility in the world. The centre is designed to meet the sophisticated requirements of next-generation AI, featuring advanced cooling infrastructure, cutting-edge fibre bandwidth for regional and global connectivity, and the capability to manage significant and sudden energy load variations. This level of infrastructural sophistication not only promises bolstered AI innovation but is also a boon for South Korea’s economy. With projections suggesting that the burgeoning data centre services market will grow to $438.3 billion by 2030, South Korea’s 3GW behemoth positions itself as a pivotal player in the industry. “Having witnessed firsthand the immense technological capabilities of large Asian enterprises, I recognise the potential of this project to elevate Korea and the region to a new level of technological advancement and economic prosperity,” commented Koo. “This data centre is not merely an infrastructure project, but the launchpad for a new digital industrial revolution.” A data centre for the future of South Korea The scale of the project translates into major economic gains for the Jeollanam-do Province and beyond. The initiative is expected to create over 10,000 jobs spanning various sectors, including energy supply and storage (ESS), renewable energy production, equipment manufacturing, and research and development (R&D). Moreover, the data centre’s services will benefit millions, catering to the needs of hundreds of large corporations—hyperscalers and developers among them. The centre’s development is only the first step in a broader strategy by SFR. Future projects are already on the horizon, as the company plans to establish AI infrastructure partnerships across Asia, Europe, and the US in the next 18 months. SFR’s expertise in Energy-to-Intelligence (e2i²) forms the backbone of this push, transforming traditional energy infrastructure into platforms for advanced AI innovation. With access to significant financial resources and a robust network extending across both Asian and Western markets, SFR could lead the next generation of global intelligent infrastructure. In just over a decade, South Korea has transformed into a technological powerhouse. With companies like Samsung, LG, and Hyundai leading in their respective sectors, the nation is now solidifying its standing in AI and data infrastructure. 
The massive Jeollanam-do AI data centre underscores how strategically aligned public-private partnerships can foster innovation on a monumental scale. By establishing such a colossal infrastructure project, South Korea is furthering its ambitions to catalyse a true digital industrial revolution.

See also: UK must act to secure its semiconductor industry leadership

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post South Korea is building the world’s largest AI data centre appeared first on AI News.

View the full article
-
DeepSeek mobility integration is spreading across China’s transport sector, with companies including automotive giants and e-scooter manufacturers incorporating AI into their products. The adoption wave began with primary electric vehicle (EV) manufacturers and has expanded recently to include the country’s leading electric two-wheeler brands.

DeepSeek’s mobility integration transforms the auto industry

According to the South China Morning Post, over the past two weeks, more than a dozen Chinese automakers have announced plans to integrate DeepSeek’s AI technology into their vehicles. The roster includes industry leader BYD, established manufacturers like Geely, Great Wall Motor, Chery Automobile, and SAIC Motor, and emerging players like Leapmotor.

BYD’s commitment to the technology is particularly noteworthy, with the company planning to integrate DeepSeek in its Xuanji vehicle software platform. The integration will let BYD offer preliminary self-driving capabilities on nearly all its models with no change to the sticker price, making autonomous driving accessible to more consumers. The initiative covers around 20 models, including the highly affordable Seagull hatchback, which is currently priced at 69,800 yuan (US$9,575).

E-scooter brands join the DeepSeek bandwagon

DeepSeek has hit China’s e-scooter sector most recently, as Xiaomi-backed Segway-Ninebot Group and Nasdaq-listed Niu Technologies work to incorporate AI into their electric two-wheelers. Ninebot stated on Friday that it would “deeply integrate DeepSeek” into its products, promising enhanced features through its mobile app. The improvements are said to include AI-powered content creation, data analytics, personalised recommendations, and intelligent services for riders.

Niu Technologies claims to have integrated DeepSeek’s large language models (LLMs) as of February 9 this year. The company plans to use the technology for:

Driver assistance systems

Riding safety features

AI-powered travel companions

Voice interaction

Intelligent service recommendations

Yadea Group, the world’s largest electric two-wheeler manufacturer by sales, announced on Saturday that it plans to embed DeepSeek’s technology into its ecosystem.

The rapid adoption of DeepSeek in China’s mobility sector reflects what industry observers call “DeepSeek fever.” The technology’s appeal lies in its cost-effective approach to AI integration. The Hangzhou-based company’s open-source AI models, DeepSeek-V3 and DeepSeek-R1, operate at a fraction of the cost and computing power typically required for large language model projects.

“Cars without DeepSeek will either lose market share or be edged out of the market,” said Phate Zhang, founder of Shanghai-based EV data provider CnEVPost.

The expansion of DeepSeek mobility integration comes at a time when Chinese e-scooter brands are gaining traction in overseas markets. According to customs data, the value of electric two-wheeler exports rose 27.6% to US$5.82 billion in 2024, passing the previous peak of US$5.31 billion in 2022. Export volume increased by 47% to 22.13 million units.

Research firm IDC notes that DeepSeek’s open-source model has fostered a collaborative innovation ecosystem via platforms like GitHub, letting developers participate in optimisation and security testing. The collaborative approach is expected to improve companies’ ability to deploy, train, and utilise large language models. The impact of DeepSeek mobility integration on China’s transport sector appears to be growing.
Zhang Yongwei, general secretary of China EV100, projects that by 2025, approximately 15 million cars – representing two-thirds of national sales – will be equipped with preliminary autonomous driving systems, underscoring the transformative potential of the technology in reshaping China’s transport system.

(Photo by Kenny Leys)

See also: DeepSeek ban? China data transfer boosts security concerns

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek’s AI dominance expands from EVs to e-scooters in China appeared first on AI News.

View the full article
-
The CRM company monday.com has released what it terms its ‘AI vision,’ designed to help businesses and teams manage and track their work more efficiently. According to the platform’s published strategy, the company will focus on three key principles – AI Blocks, Product Power-ups, and a Digital Workforce. Its aim is to “accelerate its vision to democratise the power of software.”

By making its software more accessible and usable, monday.com hopes to address the challenges encountered by businesses and teams of all sizes, streamline workflows, and increase productivity. The latest AI strategies are designed to help small and medium-sized businesses (SMBs) and mid-market companies grow and adapt quickly without the need to hire more staff. According to monday.com, AI will speed up processes that are typically slowed by the scale of larger enterprises and Fortune 500 companies.

In line with monday.com’s stated goal to “democratise access to AI,” the platform’s approach is to be user-friendly, allowing any user, regardless of technical knowledge, to create, customise, and use AI tools in their workflows.

Daniel Lereya, Chief Product and Technology Officer at monday.com, spoke about the company’s approach to making software accessible to everyone. “By embedding intelligence into the products our customers already know, use, and love, AI will accelerate our mission to democratise the power of software,” he said. “With a majority of our customers in non-tech industries, they’re looking to us to lead them through the AI transformation. Our productisation of AI ensures that intelligence serves our customers and not the other way around.”

Of the three principles, ‘AI Blocks’ are customisable AI tools that can be added to existing workflows, regardless of technical knowledge. The AI Blocks feature capabilities like “Categorise” and “Extract”, letting users analyse data and recognise patterns in a few clicks.

The second principle, ‘Product Power-ups,’ refers to the integration of AI features directly into monday.com’s existing suite of products. This is designed to help the company’s customers address challenges in areas like CRM data automation, resource management, predictive risk management, and real-time service ticket resolution. The features could help teams make quicker, more informed decisions, streamline workflows, and increase efficiency.

Finally, ‘Digital Workforce’ is a collection of AI agents that operate to support users and customers. The Digital Workforce can handle specific tasks that are automatable, including project risk analysis, the identification of ongoing customer service issues, and helping move delayed sales deals forward. monday.com is planning the launch of ‘monday Expert’ in March, its first AI agent, designed to support the onboarding of new users and perform some tasks on their behalf.

The company claims its AI capabilities have had a notable effect, with reports of monday.com users performing around 10 million AI-driven actions in 2024. The number of AI use instances nearly tripled each quarter of 2024, indicating a rapid rise in the use of AI in the CRM platform.

(Image: “Monday – Back to work!” by tinto is licensed under CC BY 2.0.)

See also: AI Action Summit: Leaders call for unity and equitable development

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post monday.com announces AI features to help businesses and employees appeared first on AI News.

View the full article
-
xAI unveiled its Grok 3 AI model on Monday, alongside new capabilities such as image analysis and refined question answering. The company harnessed an immense data centre equipped with approximately 200,000 GPUs to develop Grok 3. According to xAI owner Elon Musk, this project utilised “10x” more computing power than its predecessor, Grok 2, with an expanded dataset that reportedly includes information from legal case filings. Musk claimed that Grok 3 is a “maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct.”

The Grok 3 rollout includes a family of models designed for different needs. Grok 3 mini, for example, prioritises faster response times over absolute accuracy. However, particularly noteworthy are the new reasoning-focused Grok 3 models. Dubbed Grok 3 Reasoning and Grok 3 mini Reasoning, these variants aim to emulate human-like cognitive processes by “thinking through” problems. Comparable to models like OpenAI’s o3-mini and DeepSeek’s R1, these reasoning systems attempt to fact-check their responses—reducing the likelihood of errors or missteps.

Grok 3: The benchmark results

xAI asserts that Grok 3 surpasses OpenAI’s GPT-4o in certain benchmarks, including AIME and GPQA, which assess the model’s proficiency in tackling complex problems across mathematics, physics, biology, and chemistry. The early version of Grok 3 is also currently leading on Chatbot Arena, a crowdsourced evaluation platform where users pit AI models against one another and rank their outputs. The model is the first to break the Arena’s 1400 score.

BREAKING: @xAI early version of Grok-3 (codename “chocolate”) is now #1 in Arena! Grok-3 is: – First-ever model to break 1400 score! – #1 across all categories, a milestone that keeps getting harder to achieve. Huge congratulations to @xAI on this milestone! — lmarena.ai (formerly lmsys.org) (@lmarena_ai) February 18, 2025

According to xAI, Grok 3 Reasoning outperforms its rivals on a variety of prominent benchmarks. These reasoning models are already integrated into features available via the Grok app. Users can select commands like “Think” or activate the more computationally intensive “Big Brain” mode for tackling particularly challenging questions. xAI has positioned the reasoning models as ideal tools for STEM (science, technology, engineering, and mathematics) applications, including mathematics, science, and coding challenges.

Guarding against AI distillation

Interestingly, not all of Grok 3’s internal processes are laid bare to users. Musk explained that some of the reasoning models’ “thoughts” are intentionally obscured to prevent distillation—a controversial practice where competing AI developers extract knowledge from proprietary models. The practice was thrust into the spotlight in recent weeks after Chinese AI firm DeepSeek faced allegations of distilling OpenAI’s models to develop its latest model, R1.
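Distillation itself is a standard, legitimate training technique; the controversy is over applying it to a rival’s proprietary outputs without permission. As a rough sketch of the core idea, assuming PyTorch: a smaller “student” model is trained to match the softened output distribution of a larger “teacher.”

```python
# Toy knowledge-distillation loss: the student is penalised for diverging
# from the teacher's softened (temperature-scaled) output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2
```

Hiding the chain-of-thought removes one rich signal a would-be distiller could train against, which appears to be xAI’s motivation here.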
xAI’s new reasoning models serve as the foundation for a new Grok app feature called DeepSearch. The feature uses Grok models to scan the internet and Musk’s social platform, X, for relevant information before synthesising a detailed abstract in answer to user queries.

Accessing Grok 3 and committing to open-source

Access to the latest Grok model is currently tied to X’s subscription tiers. Premium+ subscribers, who pay $50 (~£41) per month, will receive priority access to the latest functionalities. xAI is also introducing a SuperGrok subscription plan, reportedly priced at either $30 per month or $300 annually. SuperGrok subscribers will benefit from enhanced reasoning capabilities, more DeepSearch queries, and unlimited image generation features.

The company also teased upcoming features. Within a week, the Grok app is expected to introduce a voice mode—enabling users to interact with the AI through a synthesised voice similar to Gemini Live. Musk further revealed plans to release Grok 3 models via an enterprise-ready API in the coming weeks, with DeepSearch functionality included.

Although Grok 3 is still fresh, xAI intends to open-source its predecessor in the coming months. Musk claims that xAI will continue to open-source the last version of Grok. “When Grok 3 is mature and stable, which is probably within a few months, then we’ll open-source Grok 2,” explains Musk.

The ‘anti-woke’ AI model

Grok has long been marketed as unfiltered, bold, and willing to engage with queries that competitors might avoid. Musk previously described the AI as “anti-woke,” presenting it as a model unafraid to touch on controversial topics. True to its promise, early models like Grok and Grok 2 embraced politically charged queries, even veering into colourful language when prompted. Yet, these versions also revealed some biases when delving deep into political discourse. “We’re working to shift Grok closer to politically neutral,” said Musk. However, whether Grok 3 achieves this goal remains to be seen.

With such changes at play, analysts are already highlighting the potential societal impacts of introducing increasingly “truth-seeking” yet politically sensitive AI systems. With Grok 3, Musk and xAI have made a bold statement, pushing their technology forward while potentially fuelling debates around bias, transparency, and the ethics of AI deployment. As competitors like OpenAI, Google, and DeepSeek refine their offerings, Grok 3’s success will hinge on its ability to balance accuracy, user demand, and societal responsibility.

See also: AI in 2025: Purpose-driven models, human integration, and more

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Grok 3: The next-gen ‘truth-seeking’ AI model appeared first on AI News.

View the full article
-
[AI] UK must act to secure its semiconductor industry leadership
ChatGPT posted a topic in World News
The UK semiconductor industry is at a critical juncture, with techUK urging the government to act to maintain its global competitiveness.

Laura Foster, Associate Director of Technology and Innovation at techUK, said: “The UK has a unique opportunity to lead in the global semiconductor landscape, but success will require bold action and sustained commitment.

“By accelerating the implementation of the National Semiconductor Strategy, we can unlock investment, foster innovation, and strengthen our position in this critical industry. Semiconductors are the backbone of modern technology, powering everything from consumer electronics to AI data centres. With the global semiconductor market projected to reach $1 trillion by 2030, the UK must act to secure its historic leadership in this lucrative and strategically vital industry.

“We must act at pace to secure the UK’s semiconductor future and as such our technological and economic resilience,” explains Foster.

UK semiconductor industry strengths and challenges

The UK has long been a leader in semiconductor design and intellectual property (IP), with Cambridge in particular serving as a global hub for innovation. Companies like Arm, which designs chips used in 99% of the world’s smartphones, exemplify the UK’s strengths in this area. However, a techUK report warns that these strengths are under threat due to insufficient investment, skills shortages, and a lack of tailored support for the sector. “The UK is not starting from zero,” the report states. “We have globally competitive capabilities in design and IP, but we must double down on these strengths to compete internationally.”

The UK’s semiconductor industry contributed £12 billion in turnover in 2021, with 90% of companies expecting growth in the coming years. However, the sector faces significant challenges, including high costs, limited access to private capital, and a reliance on international talent. The report highlights that only 5% of funding for UK semiconductor startups originates domestically, with many companies struggling to find qualified investors.

A fundamental need for strategic investment and innovation

The report makes 27 recommendations across six key areas, including design and IP, R&D, manufacturing, skills, and global partnerships. Some of the key proposals include:

Turning current strengths into leadership: The UK must leverage its existing capabilities in design, IP, and compound semiconductors. This includes supporting regional clusters like Cambridge and South Wales, which have proven track records of innovation.

Establishing a National Semiconductor Centre: This would act as a central hub for the industry, providing support for businesses, coordinating R&D efforts, and fostering collaboration between academia and industry.

Expanding R&D tax credits: The report calls for the inclusion of capital expenditure in R&D tax credits to incentivise investment in new facilities and equipment.

Creating a Design Competence Centre: This would provide shared facilities for chip designers, reducing the financial risk of innovation and supporting the development of advanced designs.

Nurturing skills: The UK must address the skills shortage in the semiconductor sector by upskilling workers, attracting international talent, and promoting STEM education.

Capitalising on global partnerships: The UK must strengthen its position in the global semiconductor supply chain by forming strategic partnerships with allied countries.
This includes collaborating on R&D, securing access to critical materials, and navigating export controls.

Urgent action is required to secure the UK semiconductor industry

The report warns that the UK risks falling behind other nations if it does not act quickly. Countries like the US, China, and the EU have already announced significant investments in their domestic semiconductor industries. The European Chips Act, for example, has committed €43 billion to support semiconductor infrastructure, skills, and startups. “Governments across the world are acting quickly to attract semiconductor companies while also building domestic capability,” the report states. “The UK must use its existing resources tactically, playing to its globally recognised strengths within the semiconductor value chain.”

The UK’s semiconductor industry has the potential to be a global leader, but this will require sustained investment, strategic planning, and collaboration between government, industry, and academia. “The UK Government should look to its semiconductor ambitions as an essential part of delivering the wider Industrial Strategy and securing not just the fastest growth in the G7, but also secure and resilient economic growth,” the report concludes.

(Photo by Rocco Dipoppa)

See also: AI in 2025: Purpose-driven models, human integration, and more

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK must act to secure its semiconductor industry leadership appeared first on AI News.

View the full article

-
[AI] AI helps prevent fraud with intelligent document processing
ChatGPT posted a topic in World News
No industry is immune to fraud, with cases rising in finance, retail, and loyalty programmes. From fake invoices and falsified receipts to identity fraud and synthetic accounts, traditional fraud detection methods struggle to keep up. Many businesses still rely on manual reviews, which are slow and error-prone, often detecting fraud only after the damage is done. As fraud tactics grow more sophisticated, organisations need a smarter approach. Automated document fraud detection powered by AI offers a proactive solution, letting businesses verify documents in real-time, detect anomalies, and prevent fraud before it occurs.

Here is where AI-powered intelligent document processing (IDP) is changing the game. By combining machine learning, optical character recognition (OCR), and real-time data verification, AI can automatically analyse, authenticate, and flag fraudulent documents in seconds. Unlike traditional methods, AI-driven fraud detection is faster, more accurate, and continuously improving, helping businesses detect fraud patterns before they cause financial and reputational damage. In this blog, we’ll explore what IDP is, how fraud is detected using AI, and the industries in which it can be applied.

What is intelligent document processing & how does AI improve fraud detection?

The volume of documents, invoices, receipts, and identity records businesses process has skyrocketed. However, manual document handling and traditional fraud detection methods struggle to keep up with the increasing volume and complexity of fraud attempts. This is where intelligent document processing comes in.

What is intelligent document processing?

Intelligent document processing is an AI-powered technology that automates the extraction, classification, and verification of data from documents. It uses machine learning (ML), natural language processing (NLP), and optical character recognition (OCR) to read and analyse structured and unstructured documents, with abilities far beyond traditional rule-based systems. Instead of relying on manual reviews or keyword-based matching, IDP understands context, patterns, and anomalies, making it an invaluable tool for detecting fraudulent activity. (A minimal sketch of this extract-and-verify pipeline follows the lists below.)

How AI enhances fraud detection with IDP

AI-powered IDP improves fraud detection by enabling businesses to:

Detect anomalies instantly: AI scans thousands of documents in real-time, identifying irregularities in invoices, receipts, and identity records that humans might overlook.

Verify document authenticity: By cross-referencing data across multiple sources, AI can detect manipulated text, forged signatures, and fake documents.

Identify duplicate or altered submissions: Fraudsters often modify genuine receipts or submit duplicate claims. AI can compare submissions and flag inconsistencies.

Reduce false positives: Unlike traditional rule-based systems that flag legitimate transactions as fraud, AI continuously learns and improves accuracy over time.

Scale fraud detection effortlessly: AI can process millions of documents, allowing businesses to detect fraud without increasing human workload.

Why traditional fraud detection falls short

Most fraud detection methods rely on manual audits, fixed rules, and pattern-matching techniques, which are:

Time-consuming and expensive: Manual document checks require significant resources.

Prone to human error: Fraudsters exploit gaps that humans may miss.

Limited in scope: Rule-based systems struggle to detect new and evolving fraud tactics.
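To make the pipeline concrete, here is a deliberately simplified sketch of the extract-verify-flag shape described above, assuming Python with the pytesseract and Pillow packages installed. A production IDP system would replace the regexes and hash with trained extraction and anomaly models; the field patterns here are invented for the example.

```python
# Simplified IDP-style receipt check: OCR, field extraction, duplicate flagging.
import hashlib
import re

import pytesseract
from PIL import Image

def extract_text(path: str) -> str:
    # OCR step: convert the document image into raw text.
    return pytesseract.image_to_string(Image.open(path))

def extract_fields(text: str) -> dict:
    # Extraction step: pull structured fields out of the OCR output.
    total = re.search(r"total\s*[:$]?\s*([\d.,]+)", text, re.IGNORECASE)
    date = re.search(r"\b(\d{2}[/-]\d{2}[/-]\d{4})\b", text)
    return {"total": total.group(1) if total else None,
            "date": date.group(1) if date else None}

def fingerprint(text: str) -> str:
    # Verification step: hash normalised text to catch duplicate submissions.
    normalised = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(normalised.encode()).hexdigest()

seen: set[str] = set()

def check_receipt(path: str) -> dict:
    text = extract_text(path)
    result = extract_fields(text)
    fp = fingerprint(text)
    result["duplicate"] = fp in seen  # same text submitted before?
    seen.add(fp)
    return result
```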
By using AI and IDP, businesses gain a faster, more reliable, and scalable fraud detection system that adapts to evolving threats. In the next section, we’ll explore how AI-driven fraud detection is applied across industries like finance, retail, and loyalty programmes.

AI-powered fraud detection in industries

Fraud impacts businesses in many ways, from loyalty programme abuse to invoice fraud and identity theft. Traditional fraud detection methods often fall short, as fraudsters constantly evolve their tactics to evade security measures. AI-powered IDP is transforming how businesses analyse, verify, and detect fraud across various industries. Here’s how AI is tackling fraud in key sectors.

Preventing loyalty fraud in rewards programmes

Loyalty programmes are designed to reward genuine customers, but fraudsters have found ways to manipulate these systems for personal gain. Common fraud techniques include:

Creating multiple accounts to claim sign-up bonuses multiple times.

Submitting fake or altered receipts to earn rewards without making real purchases.

Abusing refund and return policies to retain loyalty points after a transaction is reversed.

Hacking accounts to steal and redeem someone else’s loyalty points.

AI-powered fraud detection helps prevent these tactics by:

Verifying receipts: AI scans submitted receipts and detects forgeries, duplicates, and altered information.

Identifying suspicious patterns: Machine learning algorithms spot unusual transaction behaviours, like multiple claims from the same user under different identities.

Automating account authentication: AI uses identity verification to ensure real customers benefit from rewards, preventing bot-driven abuse.

With real-time fraud detection, businesses can reduce losses from loyalty fraud while ensuring that rewards go to legitimate customers.

Stopping invoice & expense fraud in finance & accounting

Fraudsters frequently target invoice and expense management systems by submitting fake, inflated, or duplicate claims. Some common invoice fraud techniques include:

Invoice tampering: Fraudsters modify invoice amounts or change vendor details to redirect payments.

Duplicate claims: Employees or vendors submit the same invoice multiple times for reimbursement.

Fake receipts: Fraudsters generate counterfeit receipts to justify fraudulent expenses.

AI and optical character recognition (OCR) technology help detect these fraudulent activities by:

Extracting and verifying invoice data: AI scans invoices and cross-checks them against existing records, vendor details, and previous payments to detect duplications or alterations (see the sketch after this list).

Spotting irregular patterns: Machine learning identifies inconsistencies like overinflated amounts, mismatched dates, and suspicious vendor behaviour.

Automating compliance checks: AI ensures that invoices meet company expense policies and tax regulations, reducing human errors in financial audits.

By integrating AI-driven document processing, finance teams can speed up invoice verification, prevent fraudulent payouts, and eliminate manual review bottlenecks.
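As a rough illustration of that cross-checking step, the sketch below flags an invoice that reuses a vendor’s invoice number or deviates sharply from that vendor’s history. The data model and the 3x threshold are invented for the example:

```python
# Cross-check a new invoice against previously approved ones.
from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    vendor: str
    number: str
    amount: float

def review(invoice: Invoice, history: list[Invoice]) -> list[str]:
    flags = []
    # Duplicate check: same vendor and invoice number already processed.
    if any(p.vendor == invoice.vendor and p.number == invoice.number
           for p in history):
        flags.append("duplicate invoice number for this vendor")
    # Outlier check: amount far above the vendor's historical average.
    amounts = [p.amount for p in history if p.vendor == invoice.vendor]
    if amounts and invoice.amount > 3 * (sum(amounts) / len(amounts)):
        flags.append("amount exceeds 3x the vendor's historical average")
    return flags

history = [Invoice("Acme Ltd", "INV-1001", 420.00),
           Invoice("Acme Ltd", "INV-1002", 380.00)]
print(review(Invoice("Acme Ltd", "INV-1002", 390.00), history))  # duplicate
print(review(Invoice("Acme Ltd", "INV-1003", 4100.00), history))  # outlier
```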
Banking fraud: Loan and mortgage fraud

Fraudsters frequently target banks and financial institutions by manipulating loan and mortgage applications. They use falsified documents, stolen identities, or synthetic identities to secure funds they have no intention of repaying. Some common fraud techniques include:

Document forgery: Fraudsters alter bank statements, pay stubs, or tax documents to exaggerate income or hide liabilities.

Identity theft: Stolen personal information is used to apply for loans or mortgages under a false identity.

Synthetic identity fraud: A mix of real and fake information is used to create a new identity with a high credit score to secure fraudulent loans.

Straw borrower schemes: A third party applies for a loan on behalf of someone who would not qualify, concealing the true borrower’s financial risk.

AI-powered fraud detection in banking helps prevent loan and mortgage fraud through:

Advanced document verification: AI analyses submitted financial documents for inconsistencies, altered text, and signs of forgery.

Identity verification and biometric matching: AI-powered facial recognition and ID authentication ensure applicants are who they claim to be.

Cross-referencing financial data: AI scans multiple data sources, like credit history and banking records, to detect unusual patterns or mismatches.

Real-time risk assessment: Machine learning models evaluate loan applications for fraudulent indicators, reducing the risk of granting high-risk loans.

By integrating AI into banking fraud detection, financial institutions can strengthen their security, reduce loan defaults, and ensure compliance with regulatory standards. AI-driven fraud detection is redefining security standards across industries by automating verification, detecting anomalies, and reducing fraud risks in real time. In the next section, we’ll explore why AI is the future of fraud prevention and how it continuously adapts to emerging fraud tactics.

Why AI is the future of fraud prevention

Fraud tactics are constantly evolving, making traditional detection methods less effective over time. Manual reviews and rule-based systems are too rigid to keep up with increasingly sophisticated fraud schemes. AI, on the other hand, offers a dynamic, self-learning approach that continuously adapts to new threats. Unlike static fraud detection models, AI analyses patterns in real time, detects anomalies, and refines its accuracy over time. By automating document authentication, verifying identities, and flagging suspicious transactions, AI minimises human error and strengthens fraud prevention across industries. Its ability to process millions of documents instantly ensures that fraud is detected before financial damage occurs, rather than after.

Conclusion: The future of fraud detection is AI-driven

Businesses can no longer afford to rely on outdated fraud prevention strategies. AI-powered intelligent document processing provides a scalable, efficient, and highly accurate way to detect and prevent fraud, reducing financial losses and compliance risks. By using AI, companies can automate fraud detection, improve security, and stay ahead of emerging threats. As fraud continues to evolve, businesses must evolve with it. AI is no longer just the future of fraud prevention; it is the present. The question is: is your business ready to embrace it?

(Image source: Unsplash)

The post AI helps prevent fraud with intelligent document processing appeared first on AI News.

View the full article

-
As AI becomes increasingly embedded in our daily lives, industry leaders and experts are forecasting a transformative 2025. From groundbreaking developments to existential challenges, AI’s evolution will continue to shape industries, change workflows, and spark deeper conversations about its implications. For this article, AI News caught up with some of the world’s leading minds to see what they envision for the year ahead. Smaller, purpose-driven models Grant Shipley, Senior Director of AI at Red Hat, predicts a shift away from valuing AI models by their sizeable parameter counts. “2025 will be the year when we stop using the number of parameters that models have as a metric to indicate the value of a model,” he said. Instead, AI will focus on specific applications. Developers will move towards chaining together smaller models in a manner akin to microservices in software development. This modular, task-based approach is likely to facilitate more efficient and bespoke applications suited to particular needs. Open-source leading the way Bill Higgins, VP of watsonx Platform Engineering and Open Innovation at IBM, expects open-source AI models will grow in popularity in 2025. “Despite mounting pressure, many enterprises are still struggling to show measurable returns on their AI investments—and the high licensing fees of proprietary models is a major factor. In 2025, open-source AI solutions will emerge as a dominant force in closing this gap,” he explains. Alongside the affordability of open-source AI models comes transparency and increased customisation potential, making them ideal for multi-cloud environments. With open-source models matching proprietary systems in power, they could offer a way for enterprises to move beyond experimentation and into scalability. This plays into a prediction from Nick Burling, SVP at Nasuni, who believes that 2025 will usher in a more measured approach to AI investments. “Enterprises will focus on using AI strategically, ensuring that every AI initiative is justified by clear, measurable returns,” said Burling. Cost efficiency and edge data management will become crucial, helping organisations optimise operations while keeping budgets in check. Augmenting human expertise For Jonathan Siddharth, CEO of Turing, the standout feature of 2025 AI systems will be their ability to learn from human expertise at scale. “The key advancement will come from teaching AI not just what to do, but how to approach problems with the logical reasoning that coding naturally cultivates,” he says. Competitiveness, particularly in industries like finance and healthcare, will hinge on mastering this integration of human expertise with AI. Behavioural psychology will catch up Understanding the interplay between human behaviour and AI systems is at the forefront of predictions for Niklas Mortensen, Chief Design Officer at Designit. “With so many examples of algorithmic bias leading to unwanted outputs – and humans being, well, humans – behavioural psychology will catch up to the AI train,” explained Mortensen. The solutions? Experimentation with ‘pause moments’ for human oversight and intentional balance between automation and human control in critical operations such as healthcare and transport. Mortensen also believes personal AI assistants will finally prove their worth by meeting their long-touted potential in organising our lives efficiently and intuitively. 
Bridge between physical and digital worlds Andy Wilson, Senior Director at Dropbox, envisions AI becoming an indispensable part of our daily lives. “AI will evolve from being a helpful tool to becoming an integral part of daily life and work – offering innovative ways to connect, create, and collaborate,” Wilson says. Mobile devices and wearables will be at the forefront of this transformation, delivering seamless AI-driven experiences. However, Wilson warns of new questions on boundaries between personal and workplace data, spurred by such integrations. Driving sustainability goals With 2030 sustainability targets looming over companies, Kendra DeKeyrel, VP ESG & Asset Management at IBM, highlights how AI can help fill the gap. DeKeyrel calls on organisations to adopt AI-powered technologies for managing energy consumption, lifecycle performance, and data centre strain. “These capabilities can ultimately help progress sustainability goals overall,” she explains. Unlocking computational power and inference James Ingram, VP Technology at Streetbees, foresees a shift in computational requirements as AI scales to handle increasingly complex problems. “The focus will move from pre-training to inference compute,” he said, highlighting the importance of real-time reasoning capabilities. Expanding context windows will also significantly enhance how AI retains and processes information, likely surpassing human efficiency in certain domains. Rise of agentic AI and unified data foundations According to Dominic Wellington, Enterprise Architect at SnapLogic, “Agentic AI marks a more flexible and creative era for AI in 2025.” However, such systems require robust data integration because siloed information risks undermining their reliability. Wellington anticipates that 2025 will witness advanced solutions for improving data hygiene, integrity, and lineage—all vital for enabling agentic AI to thrive. From hype to reality Jason Schern, Field CTO of Cognite, predicts that 2025 will be remembered as the year when truly transformative, validated generative AI solutions emerge. “Through the fog of AI for AI’s sake noise, singular examples of truly transformative embedding of Gen AI into actual workflows will stand out,” predicts Schern. These domain-specific AI agents will revolutionise industrial workflows by offering tailored decision-making. Schern cited an example in which AI slashed time-consuming root cause analyses from months to mere minutes. Deepfakes and crisis of trust Sophisticated generative AI threatens the authenticity of images, videos, and information, warns Siggi Stefnisson, Cyber Safety CTO at Gen. “Even experts may not be able to tell what’s authentic,” warns Stefnisson. Combating this crisis requires robust digital credentials for verifying authenticity and promoting trust in increasingly blurred digital realities. 2025: Foundational shifts in the AI landscape As multiple predictions converge, it’s clear that foundational shifts are on the horizon. The experts that contributed to this year’s industry predictions highlight smarter applications, stronger integration with human expertise, closer alignment with sustainability goals, and heightened security. However, many also foresee significant ethical challenges. 2025 represents a crucial year: a transition from the initial excitement of AI proliferation to mature and measured adoption that promises value and a more nuanced understanding of its impact. 
See also: AI Action Summit: Leaders call for unity and equitable development

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI in 2025: Purpose-driven models, human integration, and more appeared first on AI News.

View the full article
-
Apple’s aim to integrate Qwen AI into Chinese iPhones has taken a significant step forward, with sources indicating a potential partnership between the Cupertino giant and Alibaba Group Holding. The development could reshape how AI features are implemented in one of the world’s most regulated tech markets.

According to multiple sources familiar with the matter, Apple is in advanced talks to use Alibaba’s Qwen AI models for its iPhone lineup in mainland China. The move would depart from Apple’s global strategy of using OpenAI’s GPT models for its AI features, highlighting the company’s willingness to adapt to local market conditions.

The technical edge of Qwen AI

Qwen AI is attractive to Apple in China because of its proven capabilities in the open-source AI ecosystem. Recent benchmarks from Hugging Face, a leading collaborative machine-learning platform, position Qwen at the forefront of open-source large language models (LLMs). The platform’s data shows Qwen-powered models dominating the top 10 positions in global performance rankings, demonstrating the technical maturity that Apple seeks for its AI integration.

“The selection of Qwen AI for iPhone integration would validate Alibaba’s AI capabilities,” explains Morningstar’s senior equity analyst Chelsey Lam. “This could be particularly important for Apple’s strategy to re-invigorate iPhone sales in China, where AI features have become increasingly important for smartphone users.”

Regulatory navigation and market impact

The potential partnership reflects an understanding of China’s AI regulatory landscape. While Apple’s global AI features remain unavailable in China due to regulatory requirements, partnering with Alibaba could provide a compliant pathway to introduce advanced AI capabilities.

Market reaction to the news has been notably positive:

Alibaba’s stock surged 7.6% on Monday, followed by an additional 1.3% gain on Tuesday.
Apple shares responded with a 2.2% increase.
The tech sector has shown renewed interest in China-focused AI integration strategies.

Development timeline and expectations

The timing of the potential collaboration aligns with Apple’s upcoming China developer conference in Shanghai, scheduled for March 25. Industry observers speculate the event could serve as a platform on which to announce the integration of Qwen AI features into the iPhone ecosystem.

“The partnership could change how international tech companies approach AI localisation in China,” noted a senior AI researcher at a leading Chinese university, speaking anonymously. “It’s not just about technology integration; it’s about creating a sustainable model for AI development in China’s regulatory framework.”

Implications for developers and users

For Chinese iOS developers, the potential integration of Qwen AI presents opportunity. The partnership could enable:

Creation of locally optimised AI applications.
Enhanced natural language processing capabilities specific to Chinese users.
Seamless integration with local services and platforms.

Prospects and industry impact

The effects of the partnership extend beyond immediate market concerns. As global tech companies navigate operating in China, the Apple-Alibaba collaboration could serve as a blueprint for future integration. For Alibaba, securing Apple as a flagship partner could catalyse more partnerships with global technology companies seeking AI solutions for China.
The collaboration would demonstrate Qwen AI’s capability to meet the stringent requirements of one of the world’s most demanding tech companies.

Looking ahead

While both companies maintain official silence on the partnership, the tech community awaits announcements at the upcoming Shanghai developer conference. The development is important at a time when AI capabilities increasingly influence smartphone purchasing decisions.

For Apple, success in China will impact its global growth trajectory, and integrating Qwen AI could provide the competitive edge it needs to maintain its premium market position against local manufacturers offering advanced AI features. It underscores a broader trend in the tech industry: the growing importance of localised AI solutions in major markets.

See also: Has Huawei outsmarted Apple in the AI race?

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Could Alibaba’s Qwen AI power the next generation of iPhones in China? appeared first on AI News.

View the full article
-
Eric Schmidt, former CEO of Google, has warned that AI misuse poses an “extreme risk” and could do catastrophic harm. Speaking to BBC Radio 4’s Today programme, Schmidt cautioned that AI could be weaponised by extremists and “rogue states” such as North Korea, Iran, and Russia to “harm innocent people.”

Schmidt expressed concern that rapid AI advancements could be exploited to create weapons, including biological attacks. Highlighting the dangers, he said: “The real fears that I have are not the ones that most people talk about AI, I talk about extreme risk.”

Using a chilling analogy, Schmidt referenced the al-Qaeda leader responsible for the 9/11 attacks: “I’m always worried about the Osama bin Laden scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people.”

He emphasised the pace of AI development and its potential to be co-opted by nations or groups with malevolent intent. “Think about North Korea, or Iran, or even Russia, who have some evil goal … they could misuse it and do real harm,” Schmidt warned.

Oversight without stifling innovation

Schmidt urged governments to closely monitor private tech companies pioneering AI research. While noting that tech leaders are generally aware of AI’s societal implications, he cautioned that they may make decisions based on different values from those of public officials. “My experience with the tech leaders is that they do have an understanding of the impact they’re having, but they might make a different values judgement than the government would make.”

Schmidt also endorsed the export controls introduced under former US President Joe Biden last year to restrict the sale of advanced microchips. The measure is aimed at slowing the progress of geopolitical adversaries in AI research.

Global divisions around preventing AI misuse

The tech veteran made his remarks in Paris while attending the AI Action Summit, a two-day event that wrapped up on Tuesday. The summit, attended by 57 countries, saw the announcement of an agreement on “inclusive” AI development. Signatories included major players like China, India, the EU, and the African Union.

However, the UK and the US declined to sign the communiqué. The UK government said the agreement lacked “practical clarity” and failed to address critical “harder questions” surrounding national security.

Schmidt cautioned against excessive regulation that might hinder progress in this transformative field. This was echoed by US Vice-President JD Vance, who warned that heavy-handed regulation “would kill a transformative industry just as it’s taking off”.

This reluctance to endorse sweeping international accords reflects diverging approaches to AI governance. The EU has championed a more restrictive framework for AI, prioritising consumer protections, while countries like the US and UK are opting for more agile, innovation-driven strategies. Schmidt pointed to the consequences of Europe’s tight regulatory stance, predicting that the region would miss out on pioneering roles in AI. “The AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe,” he remarked.

Prioritising national and global safety

Schmidt’s comments come against a backdrop of increasing scrutiny over AI’s dual-use potential – its ability to be used for both beneficial and harmful purposes. From deepfakes to autonomous weapons, AI poses a bevy of risks if left without measures to guard against misuse.
Leaders and experts, including Schmidt, are advocating for a balanced approach that fosters innovation while addressing these dangers head-on. While international cooperation remains a complex and contentious issue, the overarching consensus is clear: without safeguards, AI’s evolution could have unintended – and potentially catastrophic – consequences.

(Photo by Guillaume Paumier under CC BY 3.0 license. Cropped to landscape from original version.)

See also: NEPC: AI sprint risks environmental catastrophe

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Eric Schmidt: AI misuse poses an ‘extreme risk’ appeared first on AI News.

View the full article
-
The advent of more powerful processors in the early 2000s, shipping with hardware support for virtualisation, started the computing revolution that led, in time, to what we now call the cloud. With single hardware instances able to run dozens, if not hundreds, of virtual machines concurrently, businesses could offer their users multiple services and applications that would otherwise have been financially impractical, if not impossible.

But virtual machines (VMs) have several downsides. Often, an entire virtualised operating system is overkill for many applications, and although much more malleable, scalable, and agile than a fleet of bare-metal servers, VMs still require significantly more memory and processing power, and are less agile than the next evolution of this type of technology: containers.

In addition to being more easily scaled (up or down, according to demand), containerised applications consist of only the necessary parts of an application and its supporting dependencies. Apps based on microservices therefore tend to be lighter and more easily configurable.

Virtual machines exhibit the same security issues that affect their bare-metal counterparts, and to some extent, container security issues reflect those of their component parts: a MySQL bug in a specific version of the upstream application will affect containerised versions too. With regard to VMs, bare-metal installs, and containers, cybersecurity concerns and activities are very similar. But container deployments and their tooling bring specific security challenges to those charged with running apps and services, whether manually piecing together applications with choice containers, or running in production with orchestration at scale.

Container-specific security risks

Misconfiguration: Complex applications are made up of multiple containers, and misconfiguration – often only a single line in a .yaml file – can grant unnecessary privileges and increase the attack surface. For example, although it’s not trivial for an attacker to gain root access to the host machine from a container, it’s still a too-common practice to run Docker as root, with no user namespace remapping (a sketch of how such settings can be caught automatically appears after this list).

Vulnerable container images: In 2022, Sysdig found over 1,600 images in Docker Hub identified as malicious, in addition to many containers stored in the repo with hard-coded cloud credentials, SSH keys, and NPM tokens. The process of pulling images from public registries is opaque, and the convenience of container deployment (plus pressure on developers to produce results, fast) can mean that apps can easily be constructed with inherently insecure, or even malicious, components.

Orchestration layers: For larger projects, orchestration tools such as Kubernetes can increase the attack surface, usually due to misconfiguration and high levels of complexity. A 2022 survey from D2iQ found that only 42% of applications running on Kubernetes made it into production – down in part to the difficulty of administering large clusters and a steep learning curve.
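To make the misconfiguration risk concrete, here is a minimal sketch in Python (assuming the PyYAML package is installed) of the kind of manifest audit a security pipeline might run before deployment. The example manifest, the specific checks, and the warning messages are illustrative assumptions rather than any particular scanner’s rule set; the field names (securityContext, privileged, runAsUser, readOnlyRootFilesystem) follow the standard Kubernetes pod spec.

```python
# Minimal sketch: flag a few of the risky one-line settings described above
# in a Kubernetes pod manifest. Illustrative only - not a production scanner.
import yaml

MANIFEST = """
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: example/app:latest
      securityContext:
        privileged: true   # the single risky line
"""

def audit_pod(manifest: str) -> list:
    """Return human-readable warnings for common privilege misconfigurations."""
    findings = []
    pod = yaml.safe_load(manifest)
    for container in pod.get("spec", {}).get("containers", []):
        ctx = container.get("securityContext") or {}
        name = container.get("name", "<unnamed>")
        if ctx.get("privileged"):
            findings.append(f"{name}: runs privileged (broad access to the host)")
        if ctx.get("runAsUser", 1) == 0:
            findings.append(f"{name}: runs as root (runAsUser: 0)")
        if not ctx.get("readOnlyRootFilesystem"):
            findings.append(f"{name}: root filesystem is writable")
    return findings

for warning in audit_pod(MANIFEST):
    print("WARNING:", warning)
```

In practice, checks like these tend to be enforced in CI or by an admission controller, so a risky manifest never reaches the cluster in the first place.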
According to Ari Weil at Akamai, “Kubernetes is mature, but most companies and developers don’t realise how complex […] it can be until they’re actually at scale.”

Container security with machine learning

The specific challenges of container security can be addressed using machine learning algorithms trained by observing the components of an application when it’s ‘running clean.’ By creating a baseline of normal behaviour, machine learning can identify anomalies that could indicate potential threats from unusual traffic, unauthorised configuration changes, odd user access patterns, and unexpected system calls.

ML-based container security platforms can scan image repositories and compare each image against databases of known vulnerabilities and issues. Scans can be automatically triggered and scheduled, helping to prevent the addition of harmful elements during development and in production. Auto-generated audit reports can be tracked against standard benchmarks, or an organisation can set its own security standards – useful in environments where highly sensitive data is processed.

The connectivity between specialist container security functions and orchestration software means that suspect containers can be isolated or shut down immediately, insecure permissions revoked, and user access suspended. With API connections to local firewalls and VPN endpoints, entire environments or subnets can be isolated, or traffic stopped at network borders.

Final word

Machine learning can reduce the risk of data breaches in containerised environments by working on several levels. Anomaly detection, asset scanning, and flagging potential misconfigurations are all possible, and any degree of automated alerting or remediation is relatively simple to enact.

The transformative possibilities of container-based apps can be approached without the security issues that have stopped some from exploring, developing, and running microservice-based applications. The advantages of cloud-native technologies can be won without compromising existing security standards, even in high-risk sectors.

The post The role of machine learning in enhancing cloud-native container security appeared first on AI News.

View the full article
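As a footnote to the article above, the ‘running clean’ baseline approach it describes can be sketched in a few lines with scikit-learn’s IsolationForest. This is a hypothetical illustration under stated assumptions, not any vendor’s implementation: the metric names are invented for the example, and a real platform would ingest telemetry from its runtime agents and isolate a flagged container through the orchestrator’s API rather than a print statement.

```python
# Minimal sketch: learn a baseline of "clean" container behaviour, then
# flag observations that deviate from it. Metrics are synthetic examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: per-interval metrics from a container running clean
# (columns: requests/sec, egress KB/sec, distinct syscalls).
baseline = np.column_stack([
    rng.normal(200, 20, 1_000),
    rng.normal(50, 5, 1_000),
    rng.normal(40, 3, 1_000),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one resembling the baseline, one with a suspicious
# egress spike and unusual syscall variety.
new_obs = np.array([
    [205, 52, 41],
    [210, 900, 95],
])

for obs, verdict in zip(new_obs, model.predict(new_obs)):
    label = "anomaly -> isolate container" if verdict == -1 else "normal"
    print(obs, label)
```

The design choice here mirrors the article’s point: the model never needs labelled attack data, only a period of observed normal behaviour, which is what makes the approach practical for heterogeneous container fleets.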
-
Tech giants are beginning an unprecedented $320 billion AI infrastructure spending spree in 2025, brushing aside concerns about more efficient AI models from challengers like DeepSeek. The massive investment push from Amazon, Microsoft, Google, and Meta signals the big players’ unwavering conviction that AI’s future demands bold infrastructure bets, despite (or perhaps because of) emerging efficiency breakthroughs.

The stakes are high, with collective capital expenditure jumping 30% from 2024’s $246 billion investment. While investors may question the necessity of such aggressive spending, tech leaders are doubling down on their belief that AI represents a transformative opportunity worth every dollar.

Amazon stands at the forefront of this AI spending race, according to a report by Business Insider. Amazon is flexing its financial muscle with a planned $100 billion capital expenditure for 2025 – a dramatic leap from its $77 billion last year. AWS chief Andy Jassy isn’t mincing words, calling AI a “once-in-a-lifetime business opportunity” that demands aggressive investment.

Microsoft’s Satya Nadella is similarly bullish, and he has his own hard numbers. Having earmarked $80 billion for AI infrastructure in 2025, Microsoft’s existing AI ventures are already delivering; Nadella has spoken of $13 billion in annual revenue from AI and 175% year-over-year growth. His perspective draws from economic wisdom: citing the Jevons paradox, he argues that making AI more efficient and accessible will spark an unprecedented surge in demand.

Not to be outdone, Google parent Alphabet is pushing all its chips to the centre of the table with a $75 billion infrastructure investment in 2025, dwarfing analysts’ expectations of $58 billion. Despite market jitters about cloud growth and AI strategy, CEO Sundar Pichai maintains that Google’s product innovation engine is firing on all cylinders.

Meta’s approach is to pour $60-65 billion into capital spending in 2025 – up from $39 billion in 2024. The company is carving its own path by championing an “American standard” for open-source AI models, a strategy that has caught investor attention, particularly given Meta’s proven track record in monetising AI through sophisticated ad targeting.

The emergence of DeepSeek’s efficient AI models has sparked some debate in investment circles. Investing.com’s Jesse Cohen voices growing demands for concrete returns on existing AI investments. Yet Wedbush’s Dan Ives dismisses such concerns, likening DeepSeek to “the Temu of AI” and insisting the revolution is just beginning.

The market’s response to these bold plans tells a mixed story. Meta’s strategy has won investor applause, while Amazon and Google face more sceptical reactions, with stock drops of 5% and 8% respectively following spending announcements in earnings calls. Yet tech leaders remain undeterred, viewing robust AI infrastructure as non-negotiable for future success.

The intensity of infrastructure investment suggests a reality: technological breakthroughs in AI efficiency aren’t slowing the race – they’re accelerating it. As big tech pours unprecedented resources into AI development, it’s betting that increased efficiency will expand rather than contract the market for AI services.

The high-stakes gamble on AI’s future reveals a shift in how big tech views investment. Rather than waiting to see how efficiency improvements might reduce costs, the giants are scaling up aggressively, convinced that tomorrow’s AI landscape will demand more infrastructure, not less.
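As a quick sanity check on the headline numbers (a back-of-the-envelope illustration using only the figures quoted in this article, with Meta’s $60-65 billion plan taken at its midpoint):

```python
# Back-of-the-envelope check of the spending figures quoted above.
# All values in billions of USD, as reported in the article.
planned_capex_2025 = {
    "Amazon": 100,
    "Microsoft": 80,
    "Alphabet": 75,
    "Meta": 62.5,  # midpoint of the stated $60-65B range
}

total = sum(planned_capex_2025.values())
print(f"Four-company total: ${total:.1f}B")  # ~$317.5B, in line with the ~$320B headline

growth = (320 - 246) / 246  # headline 2025 total vs. 2024's $246B
print(f"Implied year-over-year growth: {growth:.0%}")  # ~30%
```

The four companies’ plans alone account for roughly $317.5 billion, consistent with both the ~$320 billion headline and the 30% jump over 2024’s $246 billion.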
In this view, DeepSeek’s breakthroughs aren’t a threat to their strategy – they’re validation of AI’s expanding potential. The message from Silicon Valley is that the AI revolution demands massive infrastructure investment, and the giants of tech are all in. The question isn’t whether to invest in AI infrastructure, but whether $320 billion will be enough to meet the coming surge in demand.

See also: DeepSeek ban? China data transfer boosts security concerns

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Big tech’s $320B AI spend defies efficiency race appeared first on AI News.

View the full article
-
Europe has no intention of playing catch-up in the global AI race, European Commission President Ursula von der Leyen declared at the AI Action Summit in Paris. While the US and China are often seen as frontrunners, von der Leyen emphasised that the AI race “is far from over” and that Europe has distinct strengths with which to carve out a leading role for itself.

“This is the third summit on AI safety in just over one year,” von der Leyen remarked. “In the same period, three new generations of ever more powerful AI models have been released. Some expect models that will approach human reasoning within a year’s time.”

The European Commission President set the tone for the event by contrasting the groundwork laid at previous summits with the urgency of this one. “Past summits focused on laying the groundwork for AI safety. Together, we built a shared consensus that AI will be safe, that it will promote our values and benefit humanity. But this Summit is focused on action. And that is exactly what we need right now.”

As the world witnesses AI’s disruptive power, von der Leyen urged Europe to “formulate a vision of where we want AI to take us, as society and as humanity.” Growing adoption, “in the key sectors of our economy, and for the key challenges of our times,” provides a golden opportunity for the continent to lead, she argued.

The case for a European approach to the AI race

Von der Leyen rejected notions that Europe has fallen behind its global competitors. “Too often, I hear that Europe is late to the race – while the US and China have already gotten ahead. I disagree,” she stated. “The frontier is constantly moving. And global leadership is still up for grabs.”

Instead of replicating what other regions are doing, she called for doubling down on Europe’s unique strengths to define the continent’s distinct approach to AI. “Too often, I have heard that we should replicate what others are doing and run after their strengths,” she said. “I think that instead, we should invest in what we can do best and build on our strengths here in Europe, which are our science and technology mastery that we have given to the world.”

Von der Leyen defined three pillars of the so-called “European brand of AI” that set it apart: 1) focusing on high-complexity, industry-specific applications; 2) taking a cooperative, collaborative approach to innovation; and 3) embracing open-source principles. “This summit shows there is a distinct European brand of AI,” she asserted. “It is already driving innovation and adoption. And it is picking up speed.”

Accelerating innovation: AI factories and gigafactories

To maintain its competitive edge, Europe must supercharge its AI innovation, von der Leyen stressed. A key component of this strategy lies in its computational infrastructure. Europe already boasts some of the world’s fastest supercomputers, which are now being leveraged through the creation of “AI factories.”

“In just a few months, we have set up a record of 12 AI factories,” von der Leyen revealed. “And we are investing €10 billion in them. This is not a promise – it is happening right now, and it is the largest public investment for AI in the world, which will unlock over ten times more private investment.”

Beyond these initial steps, von der Leyen unveiled an even more ambitious initiative. AI gigafactories, built on the scale of CERN’s Large Hadron Collider, will provide the infrastructure needed for training AI systems at unprecedented scales.
They aim to foster collaboration between researchers, entrepreneurs, and industry leaders. “We provide the infrastructure for large computational power,” von der Leyen explained. “Talents of the world are welcome. Industries will be able to collaborate and federate their data.”

The cooperative ethos underpinning AI gigafactories is part of a broader European push to balance competition with collaboration. “AI needs competition but also collaboration,” she emphasised, highlighting that the initiative will serve as a “safe space” for these cooperative efforts.

Building trust with the AI Act

Crucially, von der Leyen reiterated Europe’s commitment to making AI safe and trustworthy. She pointed to the EU AI Act as the cornerstone of this strategy, framing it as a harmonised framework to replace fragmented national regulations across member states.

“The AI Act [will] provide one single set of safety rules across the European Union – 450 million people – instead of 27 different national regulations,” she said, before acknowledging businesses’ concerns about regulatory complexity. “At the same time, I know, we have to make it easier, we have to cut red tape. And we will.”

€200 billion to remain in the AI race

Financing such ambitious plans naturally requires significant resources. Von der Leyen praised the recently launched EU AI Champions Initiative, which has already pledged €150 billion from providers, investors, and industry. During her speech at the summit, she announced the Commission’s complementary InvestAI initiative, which will bring in an additional €50 billion. Altogether, the two initiatives will mobilise a massive €200 billion in public-private AI investments.

“We will have a focus on industrial and mission-critical applications,” she said. “It will be the largest public-private partnership in the world for the development of trustworthy AI.”

Ethical AI is a global responsibility

Von der Leyen closed her address by framing Europe’s AI ambitions within a broader humanitarian perspective, arguing that ethical AI is a global responsibility. “Cooperative AI can be attractive well beyond Europe, including for our partners in the Global South,” she proclaimed, extending a message of inclusivity.

Von der Leyen expressed full support for the AI Foundation launched at the summit, highlighting its mission to ensure widespread access to AI’s benefits. “AI can be a gift to humanity. But we must make sure that benefits are widespread and accessible to all,” she remarked. “We want AI to be a force for good. We want an AI where everyone collaborates and everyone benefits. That is our path – our European way.”

See also: AI Action Summit: Leaders call for unity and equitable development

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Ursula von der Leyen: AI race ‘is far from over’ appeared first on AI News.

View the full article