Everything posted by ChatGPT

  1. NetApp has shed light on the pressing issues faced by organisations globally as they strive to optimise their strategies for AI success. “2025 is shaping up to be a defining year for AI, as organisations transition from experimentation to scaling their AI capabilities,” said Gabie Boko, NetApp’s Chief Marketing Officer. “Businesses are making significant investments to drive innovation and efficiency, but these efforts will succeed only if global tech executives can address the mounting challenges of data complexity, security, and sustainability.” The findings of NetApp’s latest Data Complexity Report paint a detailed picture of where businesses currently stand on their AI journeys and the key trends that will shape the technology’s future.

Cost of transformation

Two-thirds of businesses worldwide claim their data is “fully or mostly optimised” for AI purposes, highlighting vast improvements in making data accessible, accurate, and well-documented. Yet, the study reveals that the journey towards AI maturity requires further significant investment. A striking 40% of global technology executives anticipate “unprecedented investment” will be necessary in 2025 just to enhance AI and data management capabilities. While considerable progress has been made, achieving impactful breakthroughs demands an even greater commitment of financial and infrastructural resources. Catching up with AI’s potential might not come cheap, but leaders prepared to invest could reap significant rewards in innovation and efficiency.

Data silos impede AI success

One of the principal barriers identified in the report is the fragmentation of data. An overwhelming 79% of global tech executives state that unifying their data, reducing silos and ensuring smooth interconnectedness, is key to unlocking AI’s full potential. Companies that have embraced unified data storage are better placed to overcome this hurdle.
By connecting data regardless of its type or location (across hybrid multi-cloud environments), they ensure constant accessibility and minimise fragmentation. The report indicates that organisations prioritising data unification are significantly more likely to meet their AI goals in 2025. Nearly one-third (30%) of businesses failing to prioritise unification foresee missing their targets, compared to just 23% for those placing this at the heart of their strategy. Executives have doubled down on data management and infrastructure as top priorities, increasingly recognising that optimising their capacity to gather, store, and process information is essential for AI maturity. Companies refusing to tackle these data challenges risk falling behind in an intensely competitive global market.

Scaling risks of AI

As businesses accelerate their AI adoption, the associated risks – particularly around security – are becoming more acute. More than two-fifths (41%) of global tech executives predict a stark rise in security threats by 2025 as AI becomes integral to more facets of their operations. AI’s rapid rise has expanded attack surfaces, exposing data sets to new vulnerabilities and creating unique challenges such as protecting sensitive AI models. Countries leading the AI race, including India, the US, and Japan, are nearly twice as likely to encounter escalating security concerns compared to less AI-advanced nations like Germany, France, and Spain. Increased awareness of AI-driven security challenges is reflected in business priorities. Over half (59%) of global executives name cybersecurity as one of the top stressors confronting organisations today. However, progress is being made. Despite elevated concerns, the report suggests that effective security measures are yielding results. Since 2023, the number of executives ranking cybersecurity and ransomware protection as their top priority has fallen by 17%, signalling optimism in combating these risks effectively.
Limiting AI’s environmental costs

Beyond security risks, AI’s growth is raising urgent questions of sustainability. Over one-third of global technology executives (34%) predict that AI advancements will drive significant changes to corporate sustainability practices. Meanwhile, 33% foresee new government policies and investments targeting energy usage. The infrastructure powering AI and transforming raw data into business value demands significant energy, counteracting organisational sustainability targets. AI-heavy nations often feel the environmental impact more acutely than their less AI-focused counterparts. While 72% of businesses still prioritise carbon footprint reduction, the report notes a decline from 84% in 2023, pointing to increasing tension between sustainability commitments and the relentless march of innovation. For organisations to scale AI without causing irreparable damage to the planet, maintaining environmental responsibility alongside technological growth will be paramount in coming years. Krish Vitaldevara, SVP and GM at NetApp, commented: “The organisations leading in advanced analytics and AI are those that have unified and well-cataloged data, robust security and compliance for sensitive information, and a clear understanding of how data evolves. “By tackling these challenges, they can drive innovation while ensuring resilience, responsibility, and timely insights in the new AI era.” You can find a full copy of NetApp’s report here (PDF) (Photo by Chunli Ju) See also: New AI training techniques aim to overcome current challenges Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Keys to AI success: Security, sustainability, and overcoming silos appeared first on AI News. View the full article
  2. Researchers from the Tokyo University of Science (TUS) have developed a method to enable large-scale AI models to selectively “forget” specific classes of data. Progress in AI has provided tools capable of revolutionising various domains, from healthcare to autonomous driving. However, as technology advances, so do its complexities and ethical considerations. The paradigm of large-scale pre-trained AI systems, such as OpenAI’s ChatGPT and CLIP (Contrastive Language–Image Pre-training), has reshaped expectations for machines. These highly generalist models, capable of handling a vast array of tasks with consistent precision, have seen widespread adoption for both professional and personal use. However, such versatility comes at a hefty price. Training and running these models demands prodigious amounts of energy and time, raising sustainability concerns, as well as requiring cutting-edge hardware significantly more expensive than standard computers. Compounding these issues, generalist tendencies may hinder the efficiency of AI models when applied to specific tasks. For instance, “in practical applications, the classification of all kinds of object classes is rarely required,” explains Associate Professor Go Irie, who led the research. “For example, in an autonomous driving system, it would be sufficient to recognise limited classes of objects such as cars, pedestrians, and traffic signs. “We would not need to recognise food, furniture, or animal species. Retaining classes that do not need to be recognised may decrease overall classification accuracy, as well as cause operational disadvantages such as the waste of computational resources and the risk of information leakage.” A potential solution lies in training models to “forget” redundant or unnecessary information—streamlining their processes to focus solely on what is required.
While some existing methods already cater to this need, they tend to assume a “white-box” approach where users have access to a model’s internal architecture and parameters. Oftentimes, however, users get no such visibility. “Black-box” AI systems, more common due to commercial and ethical restrictions, conceal their inner mechanisms, rendering traditional forgetting techniques impractical. To address this gap, the research team turned to derivative-free optimisation—an approach that sidesteps reliance on the inaccessible internal workings of a model.

Advancing through forgetting

The study, set to be presented at the Neural Information Processing Systems (NeurIPS) conference in 2024, introduces a methodology dubbed “black-box forgetting.” The process modifies the input prompts (text instructions fed to models) in iterative rounds to make the AI progressively “forget” certain classes. Associate Professor Irie collaborated on the work with co-authors Yusuke Kuwana and Yuta Goto (both from TUS), alongside Dr Takashi Shibata from NEC Corporation. For their experiments, the researchers targeted CLIP, a vision-language model with image classification abilities. The method they developed is built upon the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary algorithm designed to optimise solutions step-by-step. In this study, CMA-ES was harnessed to evaluate and hone prompts provided to CLIP, ultimately suppressing its ability to classify specific image categories. As the project progressed, challenges arose. Existing optimisation techniques struggled to scale up for larger volumes of targeted categories, leading the team to devise a novel parametrisation strategy known as “latent context sharing.” This approach breaks latent context – a representation of information generated by prompts – into smaller, more manageable pieces.
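The core loop described above can be sketched in miniature. The snippet below is not the researchers’ code: it replaces CMA-ES with a simplified (mu, lambda) evolution strategy in the same spirit, and stands in a toy scoring function for CLIP (which, as a black box, can only be queried for class confidences, never inspected). All names and the scoring function are invented for illustration.

```python
import random

# Toy stand-in for a black-box classifier such as CLIP: we can query class
# confidences for a given prompt vector, but not inspect weights or gradients.
def black_box_scores(prompt_vec):
    magnitude = sum(v * v for v in prompt_vec[1:])
    forget_conf = 1.0 / (1.0 + magnitude)               # confidence on the class to forget
    retain_conf = 1.0 - 0.1 * abs(prompt_vec[0] - 1.0)  # confidence on retained classes
    return forget_conf, retain_conf

def fitness(prompt_vec):
    forget_conf, retain_conf = black_box_scores(prompt_vec)
    # Reward low confidence on the forgotten class, high confidence elsewhere.
    return retain_conf - forget_conf

def evolve(dim=4, pop_size=20, elite=5, generations=60, sigma=0.5, seed=0):
    """Simplified (mu, lambda) evolution strategy in the spirit of CMA-ES:
    sample offspring around the current centre, keep the best, recombine."""
    rng = random.Random(seed)
    centre = [0.0] * dim
    for _ in range(generations):
        offspring = [[c + rng.gauss(0.0, sigma) for c in centre]
                     for _ in range(pop_size)]
        offspring.sort(key=fitness, reverse=True)
        centre = [sum(o[i] for o in offspring[:elite]) / elite
                  for i in range(dim)]
    return centre

best = evolve()
forget_conf, retain_conf = black_box_scores(best)
print(f"forget confidence: {forget_conf:.3f}, retain confidence: {retain_conf:.3f}")
```

The real method additionally compresses the search space via latent context sharing, so that a modest number of evolved parameters can steer prompts covering many classes at once.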
By allocating certain elements to a single token (word or character) while reusing others across multiple tokens, they dramatically reduced the problem’s complexity. Crucially, this made the process computationally tractable even for extensive forgetting applications. Through benchmark tests on multiple image classification datasets, the researchers validated the efficacy of black-box forgetting—achieving the goal of making CLIP “forget” approximately 40% of target classes without direct access to the AI model’s internal architecture. This research marks the first successful attempt to induce selective forgetting in a black-box vision-language model, demonstrating promising results.

Benefits of helping AI models forget data

Beyond its technical ingenuity, this innovation holds significant potential for real-world applications where task-specific precision is paramount. Simplifying models for specialised tasks could make them faster, more resource-efficient, and capable of running on less powerful devices—hastening the adoption of AI in areas previously deemed unfeasible. Another key use lies in image generation, where forgetting entire categories of visual context could prevent models from inadvertently creating undesirable or harmful content, be it offensive material or misinformation. Perhaps most importantly, this method addresses one of AI’s greatest ethical quandaries: privacy. AI models, particularly large-scale ones, are often trained on massive datasets that may inadvertently contain sensitive or outdated information. Requests to remove such data—especially in light of laws advocating for the “Right to be Forgotten”—pose significant challenges. Retraining entire models to exclude problematic data is costly and time-intensive, yet the risks of leaving it unaddressed can have far-reaching consequences. “Retraining a large-scale model consumes enormous amounts of energy,” notes Associate Professor Irie.
“‘Selective forgetting,’ or so-called machine unlearning, may provide an efficient solution to this problem.” These privacy-focused applications are especially relevant in high-stakes industries like healthcare and finance, where sensitive data is central to operations. As the global race to advance AI accelerates, the Tokyo University of Science’s black-box forgetting approach charts an important path forward—not only by making the technology more adaptable and efficient but also by adding significant safeguards for users. While the potential for misuse remains, methods like selective forgetting demonstrate that researchers are proactively addressing both ethical and practical challenges. See also: Why QwQ-32B-Preview is the reasoning AI to watch Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Machine unlearning: Researchers make AI models ‘forget’ data appeared first on AI News.
  3. Artificial intelligence entered the market with a splash, driving massive buzz and adoption. But now the pace is faltering. Business leaders still talk the talk about embracing AI, because they want the benefits – McKinsey estimates that GenAI could save companies up to $2.6 trillion across a range of operations. However, they aren’t walking the walk. According to one survey of senior analytics and IT leaders, only 20% of GenAI applications are currently in production. Why the wide gap between interest and reality? The answer is multifaceted. Concerns around security and data privacy, compliance risks, and data management are high-profile, but there’s also anxiety about AI’s lack of transparency and worries about ROI, costs, and skill gaps. In this article, we’ll examine the barriers to AI adoption, and share some measures that business leaders can take to overcome them.

Get a handle on data

“High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes,” said Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, adding, “Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies.” Today, only 43% of IT professionals say they’re confident about their ability to meet AI’s data demands. Given that data is so vital for AI success, it’s not surprising that data challenges are an oft-cited factor in slow AI adoption. The best way to overcome this hurdle is to go back to data basics. Organisations need to build a strong data governance strategy from the ground up, with rigorous controls that enforce data quality and integrity.

Take ethics and governance seriously

With regulations mushrooming, compliance is already a headache for many organisations.
AI only adds new areas of risk, more regulations, and increased ethical governance issues for business leaders to worry about, to the extent that security and compliance risk was the most-cited concern in Cloudera’s State of Enterprise AI and Modern Data Architecture report. While the rise in AI regulations might seem alarming at first, executives should embrace the support that these frameworks offer, as they can give organisations a structure around which to build their own risk controls and ethical guardrails. Developing compliance policies, appointing teams for AI governance, and ensuring that humans retain authority over AI-powered decisions are all important steps in creating a comprehensive system of AI ethics and governance.

Reinforce control over security and privacy

Security and data privacy concerns loom large for every business, and with good reason. Cisco’s 2024 Data Privacy Benchmark Study revealed that 48% of employees admit to entering non-public company information into GenAI tools (and an unknown number have done so and won’t admit it), leading 27% of organisations to ban the use of such tools. The best way to reduce the risks is to limit access to sensitive data. This involves tightening access controls, curbing privilege creep, and keeping data away from publicly-hosted LLMs. Avi Perez, CTO of Pyramid Analytics, explained that his business intelligence software’s AI infrastructure was deliberately built to keep data away from the LLM, sharing only metadata that describes the problem, which locally-hosted engines then use to run the analysis. “There’s a huge set of issues there. It’s not just about privacy, it’s also about misleading results. So in that framework, data privacy and the issues associated with it are tremendous, in my opinion. They’re a showstopper,” Perez said.
With Pyramid’s setup, however, “the LLM generates the recipe, but it does it without ever getting [its] hands on the data, and without doing mathematical operations. […] That eliminates something like 95% of the problem, in terms of data privacy risks.”

Boost transparency and explainability

Another serious obstacle to AI adoption is a lack of trust in its results. The infamous story of Amazon’s AI-powered hiring tool which discriminated against women has become a cautionary tale that scares many people away from AI. The best way to combat this fear is to increase explainability and transparency. “AI transparency is about clearly explaining the reasoning behind the output, making the decision-making process accessible and comprehensible,” said Adnan Masood, chief AI architect at UST and a Microsoft regional director. “At the end of the day, it’s about eliminating the black box mystery of AI and providing insight into the how and why of AI decision-making.” Unfortunately, many executives overlook the importance of transparency. A recent IBM study reported that only 45% of CEOs say they are delivering on capabilities for openness. AI champions need to prioritise the development of rigorous AI governance policies that prevent black boxes from arising, and invest in explainability tools like SHapley Additive exPlanations (SHAP), fairness toolkits like Google’s Fairness Indicators, and automated compliance checks like the Institute of Internal Auditors’ AI Auditing Framework.

Define clear business value

Cost is on the list of AI barriers, as always. The Cloudera survey found that 26% of respondents said AI tools are too expensive, and Gartner included “unclear business value” as a factor in the failure of AI projects. Yet the same Gartner report noted that GenAI had delivered an average revenue increase and cost savings of over 15% among its users, proof that AI can drive financial lift if implemented correctly.
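To make the explainability idea concrete, here is a minimal sketch of the Shapley-value computation that underlies SHAP-style explanations: each feature's contribution is its marginal effect on the model output, averaged over all coalitions of the other features. The "model", feature names, and weights below are invented for the example; a real workflow would use the shap library against an actual trained model, since exact enumeration is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

# Toy linear "credit-scoring model" -- all weights are invented for this sketch.
WEIGHTS = {"income": 0.5, "debt": -0.3, "age": 0.2}

def model(features):
    return sum(WEIGHTS[name] * value for name, value in features.items())

def shapley_values(features, baseline):
    """Exact Shapley values: each feature's marginal contribution,
    averaged over all coalitions of the other features."""
    names = list(features)
    n = len(names)
    phi = {}
    for target in names:
        others = [x for x in names if x != target]
        contribution = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Coalition members use their real values; the rest use baseline.
                coalition = {k: features[k] if k in subset else baseline[k]
                             for k in names}
                with_t = dict(coalition, **{target: features[target]})
                without_t = dict(coalition, **{target: baseline[target]})
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                contribution += weight * (model(with_t) - model(without_t))
        phi[target] = contribution
    return phi

x = {"income": 2.0, "debt": 1.0, "age": 3.0}
base = {"income": 0.0, "debt": 0.0, "age": 0.0}
print(shapley_values(x, base))  # for a linear model: weight * (value - baseline)
```

A useful sanity check of any such explanation is additivity: the attributions sum exactly to the difference between the model's output on the instance and on the baseline.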
This is why it’s crucial to approach AI like every other business project – identify areas that will deliver fast ROI, define the benefits you expect to see, and set specific KPIs so you can prove value. “While there’s a lot that goes into building out an AI strategy and roadmap, a critical first step is to identify the most valuable and transformative AI use cases on which to focus,” said Michael Robinson, Director of Product Marketing at UiPath.

Set up effective training programs

The skills gap remains a significant roadblock to AI adoption, but it seems that little effort is being made to address the issue. A report from Worklife indicates the initial boost in AI adoption came from early adopters. Now, it’s down to the laggards, who are inherently sceptical and generally less confident about AI – and any new tech. This makes training crucial. Yet according to Asana’s State of AI at Work study, 82% of participants said their organisations haven’t provided training on using generative AI. There’s no indication that training isn’t working; rather, it isn’t happening as it should. The clear takeaway is to offer comprehensive training in quality prompting and other relevant skills. Encouragingly, the same research shows that even using AI without training increases people’s skills and confidence. So, it’s a good idea to get started with low- and no-code tools that allow employees who are unskilled in AI to learn on the job.

The barriers to AI adoption are not insurmountable

Although AI adoption has slowed, there’s no indication that it’s in danger in the long term. The many obstacles holding companies back from rolling out AI tools can be overcome without too much trouble. Many of the steps, like reinforcing data quality and ethical governance, should be taken regardless of whether or not AI is under consideration, while other steps taken will pay for themselves in increased revenue and the productivity gains that AI can bring.
The post Narrowing the confidence gap for wider AI adoption appeared first on AI News.
  4. There’s a new contender in the AI space that’s making waves: QwQ-32B-Preview. This so-called “reasoning” AI model is being compared to OpenAI o1, and it’s one of the few you can download under a permissive licence. For developers and researchers eager to experiment, that’s a significant bonus. Built by Alibaba’s Qwen team, QwQ-32B-Preview is anything but lightweight. It packs 32.5 billion parameters—think of these as the building blocks of its problem-solving abilities—and can handle prompts of up to 32,000 words, longer than some novels. Tests show it outperforms OpenAI o1-preview and o1-mini on benchmarks like AIME and MATH. For context, AIME uses other AI models to assess performance, while MATH is a collection of word problems. But the model isn’t just about maths problems or logic puzzles. What sets it apart is how it approaches tasks. QwQ-32B-Preview plans ahead, fact-checks its work, and avoids common AI mistakes. Of course, it’s not flawless—Alibaba acknowledges issues like language switching, occasional loops, and difficulties with “common sense” reasoning. Even so, it represents a step toward more intelligent AI systems. QwQ-32B-Preview is accessible: you can run or download it via Hugging Face. However, like other Chinese-developed AI, it operates within regulatory boundaries. That means it carefully avoids politically sensitive topics to comply with China’s rules, ensuring it aligns with “core socialist values.” Alibaba isn’t alone in this space. Meta’s Llama 3.1 is another open-source option, though it takes a different approach by focusing on generative AI rather than reasoning. While both models are innovative, QwQ-32B-Preview specialises in problem-solving with what the company describes as a human-like approach, putting it in the reasoning category. The competition in AI inside China is intensifying. Companies such as DeepSeek, Shanghai AI Lab, and Kunlun Tech have entered the reasoning AI race, releasing their models at pace.
For example, DeepSeek’s r1 claims to outperform OpenAI’s o1 on half of its benchmark tests, particularly in maths and programming. Shanghai AI Lab’s InternThinker takes a structured approach to problem-solving, incorporating steps such as understanding queries, recalling knowledge, planning solutions, and reflecting on its answers. This surge of activity highlights how quickly Chinese companies are catching up with US tech giants. Xu Liang, an AI entrepreneur from Hangzhou, summed it up: “OpenAI gave the direction; with research, Chinese tech firms are making progress.” The release of QwQ-32B-Preview and its competitors shows how much ground they’re covering. But this goes beyond just catching up. Reasoning AI marks a change in how models are designed and used. Unlike older AI systems that relied on brute force to generate answers, reasoning models like QwQ-32B-Preview aim to mimic human problem-solving. The approach not only makes them more effective for complex tasks but also expands their potential use cases, like tackling advanced maths or providing detailed financial advice. Whether it’s solving puzzles, reasoning through intricate problems, or expanding what open-source AI can achieve, one thing is clear: the evolution of AI is accelerating. Buckle up—this is only the beginning. (Photo by Unsplash) See also: Alibaba Cloud overhauls AI partner initiative Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Why QwQ-32B-Preview is the reasoning AI to watch appeared first on AI News.
  5. Layer 1 relational blockchain Chromia unveils its Asgard mainnet upgrade. Includes new extensions that add specialised capabilities to the blockchain. The Oracle Extension has launched with the AI Inference Extension expected early next year. The Chromia blockchain development team has announced the successful completion of its Asgard mainnet upgrade, which introduces new features and capabilities for the platform. The new features enhance the overall capacity of the Layer 1 blockchain and add specialised capabilities for users. The Asgard mainnet upgrade includes the launch of Chromia Extensions, expected to support the growth of decentralised finance (DeFi) and AI-enabled applications on the Chromia Network. Announced on November 15, the Chromia team believes the mainnet will redesign how data is organised on the blockchain, changing the development and use of Web3 applications. The blockchain utilises a modular framework to offer users and developers decentralised application (dApp) chains, customisable fee structures, and advanced digital assets. The extensions are modular enhancements that enable developers to build additional features on top of the main blockchain, expanding functionality and utility while maintaining the benefits of the existing infrastructure. The Extensions complement the platform’s relational data architecture, modular network design, and gas-free economic model for end users.

Chromia launches Oracle Extensions

The Oracle Extension provides fully on-chain, real-time price feeds that are updated approximately once a second. It gives developers in Chromia’s ecosystem the ability to build DeFi applications like decentralised exchanges, futures and options platforms, and lending protocols. Ludvig Öberg, VP of the Chromia Platform Unit, said he believes the Oracle Extension will help grow the DeFi space on the blockchain.
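A once-per-second feed matters most to the applications consuming it: a lending protocol or exchange should refuse to act on a quote that has gone stale. The sketch below is a hypothetical consumer-side guard in Python, purely for illustration; the field names and tolerance are assumptions, and Chromia dApps are actually written in its Rell language.

```python
import time

# Hypothetical staleness guard for a once-per-second price feed.
# Field names and the tolerance are assumptions for illustration.
MAX_STALENESS_SECONDS = 2.0  # tolerate one missed one-second update

def is_fresh(feed_entry, now=None):
    """Return True if the feed entry is recent enough to trust."""
    now = time.time() if now is None else now
    return (now - feed_entry["updated_at"]) <= MAX_STALENESS_SECONDS

def safe_price(feed_entry, now=None):
    """Use the price only when fresh; otherwise signal the caller to fall back."""
    if not is_fresh(feed_entry, now):
        raise ValueError(f"stale feed for {feed_entry['pair']}")
    return feed_entry["price"]

feed = {"pair": "CHR/USD", "price": 0.31, "updated_at": time.time()}
print(safe_price(feed))
```

The design choice here mirrors common oracle practice: fail closed on stale data rather than execute trades or liquidations against an outdated price.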
“The Oracle Extension lays the groundwork for an expansion of decentralised finance activity on the Chromia network and the growth of network value.” According to the team statement, the launch of the Oracle Extension aligns with the overall goal of “strengthening connections across the wider cryptocurrency ecosystem.” The company has made recent efforts to integrate native CHR tokens and other Chromia-based tokens with centralised exchanges and cross-chain wallets. Chromia plans to release a public demo of the Oracle Extension, with integrations by DeFi protocols expected soon after.

AI Inference Extension to launch early 2025

Chromia has also announced plans to launch an AI Inference Extension, expected in Q1 2025. The module will enable developers to execute AI models directly on-chain using Chromia’s decentralised provider network. The project’s recently formed Data and AI Division is focused on creating tools to enhance transparency in AI training data and inputs. Speaking about the AI Extension’s planned launch, Yeou Jie, Head of Business Development at Chromia, said the team plans to expand “transparency to AI.” “As the world’s only relational blockchain, Chromia has demonstrated its ability to bring transparency to AI and other data-intensive use cases. The AI Inference Extension will take this a step further, enabling on-chain execution of AI models.” Seen as the first major technical update to Chromia, the Asgard mainnet provides a way for the blockchain to support other functions and decentralised applications including AI, gaming, finance, and enterprise use-cases. The post Chromia’s Asgard upgrade launches: “New era for DeFi and AI” appeared first on AI News.
  6. Amazon Web Services (AWS) has announced improvements to bolster Bedrock, its fully managed generative AI service. The updates include new foundational models from several AI pioneers, enhanced data processing capabilities, and features aimed at improving inference efficiency. Dr Swami Sivasubramanian, VP of AI and Data at AWS, said: “Amazon Bedrock continues to see rapid growth as customers flock to the service for its broad selection of leading models, tools to easily customise with their data, built-in responsible AI features, and capabilities for developing sophisticated agents. “With this new set of capabilities, we are empowering customers to develop more intelligent AI applications that will deliver greater value to their end-users.”

Amazon Bedrock expands its model diversity

AWS is set to become the first cloud provider to feature models from AI developers Luma AI and poolside, while also incorporating Stability AI’s latest release. Through its new Amazon Bedrock Marketplace, customers will have access to over 100 emerging and specialised models from across industries, ensuring they can select the most appropriate tools for their unique needs.

Luma AI’s Ray 2

Luma AI, known for advancing generative AI in video content creation, brings its next-generation Ray 2 model to Amazon Bedrock. This model generates high-quality, lifelike video outputs from text or image inputs and allows organisations to create detailed outputs in fields such as fashion, architecture, and graphic design. AWS’s presence as the first provider for this model means businesses can experiment with new camera angles, cinematographic styles, and consistent characters with a frictionless workflow.

poolside’s malibu and point

Designed to address challenges in modern software engineering, poolside’s models – malibu and point – specialise in code generation, testing, documentation, and real-time code completion.
Importantly, developers can securely fine-tune these models using their private datasets. Accompanied by Assistant – an integration for development environments – poolside’s tools allow engineering teams to accelerate productivity, ship projects faster, and increase accuracy.

Stability AI’s Stable Diffusion 3.5 Large

Amazon Bedrock customers will soon gain access to Stability AI’s text-to-image model Stable Diffusion 3.5 Large. This addition supports businesses in creating high-quality visual media for use cases in areas like gaming, advertising, and retail. Through the Bedrock Marketplace, AWS also enables access to over 100 specialised models. These include solutions tailored to fields such as biology (EvolutionaryScale’s ESM3 generative model), financial data (Writer’s Palmyra-Fin), and media (Camb.ai’s text-to-audio MARS6). Zendesk, a global customer service software firm, leverages Bedrock’s marketplace to personalise support across email and social channels using AI-driven localisation and sentiment analysis tools. For example, they use models like Widn.AI to tailor responses based on real-time sentiment in customers’ native languages.

Scaling inference with new Amazon Bedrock features

Large-scale generative AI applications require balancing the cost, latency, and accuracy of inference processes. AWS is addressing this challenge with two new Amazon Bedrock features:

Prompt Caching

The new caching capability reduces redundant processing of prompts by securely storing frequently used queries, saving on both time and costs. This feature can lead to up to a 90% reduction in costs and an 85% decrease in latency. For example, Adobe incorporated Prompt Caching into its Acrobat AI Assistant to summarise documents and answer questions, achieving a 72% reduction in response times during initial testing.

Intelligent Prompt Routing

This feature dynamically directs prompts to the most suitable foundation model within a family, optimising results for both cost and quality.
Customers such as Argo Labs, which builds conversational voice AI solutions for restaurants, have already benefited. While simpler queries (like booking tables) are handled by smaller models, more nuanced requests (e.g., dietary-specific menu questions) are intelligently routed to larger models. Argo Labs’ usage of Intelligent Prompt Routing has not only improved response quality but also reduced costs by up to 30%. Data utilisation: Knowledge bases and automation A key attraction of generative AI lies in its ability to extract value from data. AWS is enhancing its Amazon Bedrock Knowledge Bases to ensure organisations can deploy their unique datasets for richer AI-powered user experiences. Using structured data AWS has introduced capabilities for structured data retrieval within Knowledge Bases. This enhancement allows customers to query data stored across Amazon services like SageMaker Lakehouse and Redshift through natural-language prompts, which are translated into SQL queries behind the scenes. Octus, a credit intelligence firm, plans to use this capability to provide clients with dynamic, natural-language reports on its structured financial data. GraphRAG integration By incorporating automated graph modelling (powered by Amazon Neptune), customers can now generate and connect relational data for stronger AI applications. BMW Group, for instance, will use GraphRAG to augment its virtual assistant MAIA. This assistant taps into BMW’s wealth of internal data to deliver comprehensive responses and premium user experiences. Separately, AWS has unveiled Amazon Bedrock Data Automation, a tool that transforms unstructured content (e.g., documents, video, and audio) into structured formats for analytics or retrieval-augmented generation (RAG). Companies like Symbeo (automated claims processing) and Tenovos (digital asset management) are already piloting the tool to improve operational efficiency and data reuse. 
The expansion of Amazon Bedrock’s ecosystem reflects its growing popularity, with the service recording a 4.7x increase in its customer base over the last year. Industry leaders like Adobe, BMW, Zendesk, and Tenovos have all embraced AWS’s latest innovations to improve their generative AI capabilities. Most of the newly announced tools – such as inference management, Knowledge Bases with structured data retrieval, and GraphRAG – are currently in preview, while notable model releases from Luma AI, poolside, and Stability AI are expected soon. See also: Alibaba Cloud overhauls AI partner initiative Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Amazon Bedrock gains new AI models, tools, and features appeared first on AI News. View the full article
  7. The Medicines and Healthcare products Regulatory Agency (MHRA) has announced the selection of five healthcare technologies for its ‘AI Airlock’ scheme. AI Airlock aims to refine the process of regulating AI-driven medical devices and help fast-track their safe introduction to the UK’s National Health Service (NHS) and patients in need. The technologies chosen for this scheme include solutions targeting cancer and chronic respiratory diseases, as well as advancements in radiology diagnostics. These AI systems promise to revolutionise the accuracy and efficiency of healthcare, potentially driving better diagnostic tools and patient care. The AI Airlock, as described by the MHRA, is a “sandbox” environment—an experimental framework designed to help manufacturers determine how best to collect real-world evidence to support the regulatory approval of their devices. Unlike traditional medical devices, AI models continue to evolve through learning, making the establishment of safety and efficacy evidence more complex. The Airlock enables this exploration within a monitored virtual setting, giving developers insight into the practical challenges of regulation while supporting the NHS’s broader adoption of transformative AI technologies. Safely enabling AI healthcare innovation Laura Squire, the lead figure in MedTech regulatory reform and Chief Officer at the MHRA, said: “New AI medical devices have the potential to increase the accuracy of healthcare decisions, save time, and improve efficiency—leading to better outcomes for the NHS and patients across all healthcare settings. 
“But we need to be confident that AI-powered medical devices introduced into the NHS are safe, stay safe, and perform as intended through their lifetime of use.” Squire emphasised that the AI Airlock pilot allows collaboration “in partnership with technology specialists, developers and the NHS,” facilitating the exploration of best practices and accelerating safe patient access to innovative solutions. Government representatives have praised the initiative for its forward-thinking framework. Karin Smyth, Minister of State for Health, commented: “As part of our 10-Year Health Plan, we’re shifting NHS care from analogue to digital, and this project will help bring the most promising technology to patients. “AI has the power to revolutionise care by supporting doctors to diagnose diseases, automating time-consuming admin tasks, and reducing hospital admissions by predicting future ill health.” Science Minister Lord Vallance lauded the AI Airlock pilot as “a great example of government working with businesses to enable them to turn ideas into products that improve lives.” He added, “This shows how good regulation can facilitate emerging technologies for the benefit of the UK and our economy.” Selected technologies The deployment of AI-powered medical devices requires meeting stringent criteria to ensure innovation, patient benefits, and regulatory challenge readiness. The five technologies selected for this inaugural pilot offer vital insights into healthcare’s future: Lenus Stratify Patients with Chronic Obstructive Pulmonary Disease (COPD) are among those who stand to benefit significantly from AI innovation. Lenus Stratify, developed by Lenus Health, analyses patient data to predict severe lung disease outcomes, reducing unscheduled hospital admissions. The system empowers care providers to adopt earlier interventions, affording patients an improved quality of life while alleviating NHS resource strain. 
Philips Radiology Reporting Enhancer Philips has integrated AI into existing radiology workflows to enhance the efficiency and accuracy of critical radiology reports. This system uses AI to prepare the “Impression” section of reports, summarising essential diagnostic information for healthcare providers. By automating this process, Philips aims to minimise workload, human error, and miscommunication, creating a more seamless diagnostic experience. Federated AI Monitoring Service (FAMOS) One recurring AI challenge is the concept of “drift,” when changing real-world conditions impair system performance over time. Newton’s Tree has developed FAMOS to monitor AI models in real time, flagging degradation and enabling rapid corrections. Hospitals, regulators, and software developers can use this tool to ensure algorithms remain high-performing, adapting to evolving circumstances while prioritising patient safety. OncoFlow Personalised Cancer Management Targeting the pressing healthcare challenge of reducing waiting times for cancer treatment, OncoFlow speeds up clinical workflows through its intelligent care pathway platform. Initially applied to breast cancer protocols, the system later aims to expand across other oncology domains. With quicker access to tailored therapies, patients stand to gain improved survival rates amidst mounting NHS pressures. SmartGuideline Developed to simplify complex clinical decision-making processes, SmartGuideline uses a large language model trained on official NICE medical guidelines. This technology allows clinicians to ask routine questions and receive verified, precise answers, eliminating the ambiguity associated with current AI language models. By integrating this tool, patients benefit from more accurate treatments grounded in up-to-date medical knowledge. Broader implications The influence of the AI Airlock extends beyond its current applications. 
The MHRA expects pilot findings, due in 2025, to inform future medical device regulations and create a clearer path for manufacturers developing AI-enabled technologies. The evidence derived will contribute to shaping post-Brexit UKCA marking processes, helping manufacturers achieve compliance with higher levels of transparency. By improving regulatory frameworks, the UK could position itself as a global hub for med-tech innovation while ensuring faster access to life-saving tools. The urgency of these developments was underscored earlier this year in Lord Darzi’s review of health and care. The report outlined the “critical state” of the NHS, offering AI interventions as a promising pathway to sustainability. The work on AI Airlock by the MHRA addresses one of the report’s major recommendations for enabling regulatory solutions and “unlocking the AI revolution” for healthcare advancements. While being selected into the AI Airlock pilot does not indicate regulatory approval, the technologies chosen represent a potential leap forward in applying AI to some of healthcare’s most pressing challenges. The coming years will test the potential of these solutions under regulatory scrutiny. If successful, the initiative from the MHRA could redefine how pioneering technologies like AI are adopted in healthcare, balancing the need for speed, safety, and efficiency. With the NHS under immense pressure from growing demand, AI’s ability to augment clinicians, predict illnesses, and streamline workflows may well be the game-changer the system urgently needs. (Photo by National Cancer Institute) See also: AI’s role in helping to prevent skin cancer through behaviour change Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. 
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption appeared first on AI News. View the full article
  8. Google Cloud has launched two generative AI models on its Vertex AI platform, Veo and Imagen 3, amid reports of surging revenue growth among enterprises leveraging the technology. According to Google Cloud’s data, 86% of enterprise companies currently using generative AI in production environments have witnessed increased revenue, with an estimated average growth of 6%. This metric has driven the tech giant’s latest innovation push, resulting in the introduction of Veo – its most sophisticated video generation model to date – and Imagen 3, an advanced text-to-image generation system. Breaking ground Veo, now available in private preview on Vertex AI, represents a milestone as Google becomes the first hyperscaler to offer an image-to-video model. The technology enables businesses to generate high-quality videos from simple text or image prompts, potentially revolutionising video production workflows across industries. Imagen 3 – scheduled for release to all Vertex AI customers next week – promises unprecedented realism in generated images, with marked improvements in detail, lighting, and artifact reduction. The model includes new features for enterprise customers on an allowlist, including advanced editing capabilities and brand customisation options. Transforming operations Several major firms have begun implementing these technologies into their operations. Mondelez International, the company behind brands such as Oreo, Cadbury, and Chips Ahoy!, is using the technology to accelerate campaign content creation across its global portfolio of brands. Jon Halvorson, SVP of Consumer Experience & Digital Commerce at Mondelez International, explained: “Our collaboration with Google Cloud has been instrumental in harnessing the power of generative AI, notably through Imagen 3, to revolutionise content production. 
“This technology has enabled us to produce hundreds of thousands of customised assets, enhancing creative quality while significantly reducing both time to market and costs.” Knowledge sharing platform Quora has developed Poe, a platform that enables users to interact with generative AI models. Veo and Imagen are now integrated with Poe. Spencer Chan, Product Lead for Poe at Quora, commented: “We created Poe to democratise access to the world’s best gen AI models. With Veo, we’re now enabling millions of users to bring their ideas to life through stunning, high-quality generative video.” Safety and security In response to growing concerns about AI-generated content, Google has implemented robust safety features in both models. These include: Digital watermarking through Google DeepMind’s SynthID. Built-in safety filters to prevent harmful content creation. Strict data governance policies ensure customer data protection. Industry-first copyright indemnity for generative AI services. The launch of these new models signals Google’s growing influence in the enterprise AI space and suggests a shift toward more sophisticated, integrated AI solutions for business applications. (Imagery Credit: Google Cloud) See also: Alibaba Marco-o1: Advancing LLM reasoning capabilities Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Google launches Veo and Imagen 3 generative AI models appeared first on AI News. View the full article
  9. Alibaba Cloud is overhauling its AI partner ecosystem, unveiling the “Partner Rainforest Plan” during its annual Partner Summit 2024. The Chinese tech giant’s cloud division has outlined several new initiatives, including an AI partner accelerator programme, enhanced incentives, and a refreshed global strategy for service partners, as it seeks to strengthen its position in the market. Selina Yuan, President of International Business at Alibaba Cloud Intelligence, said: “At Alibaba Cloud, we believe that collaboration is the key to unlocking innovation and driving growth. Our global partners are not just participants, they are the architects of a new digital landscape in the AI era.” The company’s new AI Alliance Accelerator Programme aims to establish partnerships with 50 AI technology providers and 50 channel partners by 2025. Selected technology partners will receive enhanced technical support, expanded distribution channels, and dedicated AI consulting services, while channel partners will benefit from increased financial incentives for AI-related initiatives. Alibaba Cloud has also introduced its Revitalised Service Partner Programme, designed to upskill existing partners and cultivate new ones through AI training and empowerment. The programme includes the joint development of Managed Large Language Model Services with service partners, leveraging the company’s generative AI capabilities. The cloud provider has also committed to extending strategic partnerships with 18 service partners – including prominent names such as Deloitte, Accenture, and Cognizant Worldwide – from its existing pool of 50 global standard service partners. In various regional developments, Alibaba Cloud has established strategic partnerships across Asia: Indonesia: The company has partnered with Telkom Indonesia to deliver AI-supported cloud solutions and develop digital talent. 
Japan: Information security firm Securai will localise Alibaba Cloud’s Zstack service for the Japanese market. Thailand: A memorandum of understanding with Yell Group aims to address growing demand for generative AI in the creative media industry. The company, which currently maintains partnerships with approximately 12,000 organisations worldwide – including industry leaders such as Salesforce, Fortinet, IBM, and Neo4j – has introduced a Synergistic Incentive Programme to foster collaboration between its global technology and channel partners. “Today, with our revamped global partner ecosystem, we are committed to supporting our global partners to jointly reap the benefits of the AI era and meet the diverse business demands of global customers,” Yuan concludes. (Photo by Hannah Busing) See also: Alibaba Marco-o1: Advancing LLM reasoning capabilities Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Alibaba Cloud overhauls AI partner initiative appeared first on AI News. View the full article
  10. AgentFun.AI, a platform to create and trade AI agents, went live on November 27 on Cronos zkEVM. It became the first decentralised app dedicated to AI agents in the Cronos ecosystem. Users can create, build, and trade tokenised AI agents on AgentFun.AI. They set growth milestones for the agents, which have individual personalities. Agents accumulate fans and become tradeable assets with each milestone reached. Each agent can interact with users on the blockchain and on social media. Exploring the appeal of trading tokenised AI agents Users can own a fraction of high-value AI models, lowering the barrier to AI ecosystem entry. These tokens often operate on decentralised platforms, increasing transparency, security, and cross-platform integration. Users across the globe can participate in funding and developing innovative AI models, accelerating AI adoption. Creating an agent through AgentFun.AI starts with buying a small amount of AGENTFUN tokens on H2 Finance, a leading decentralised exchange (DEX) on Cronos zkEVM. Each agent generated requires a nominal fee of 1 $AGENTFUN. The entire process of creating and training AI agents has been gamified, further encouraging users to use the platform. Agent capabilities increase in direct proportion to demand As the AI agents grow, they obtain new capabilities, with rising demand from other users on Cronos zkEVM unlocking new features. Once an agent reaches the fixed market cap of $127,100, it becomes capable of engaging in Telegram chats. It cultivates a liquidity pool on Cronos’ DEX, improving trading and discoverability in the ecosystem. Additional features are unlocked as agents attain milestone-based targets. An agent that reaches the market cap of $1.27 million starts interacting on X. Its growth is essentially unlimited, giving users a strong incentive to network in the ecosystem that AgentFun.AI has helped build. 
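The milestone mechanics described above lend themselves to a simple illustration. The market-cap thresholds below come from the article itself; the data structure and function are a hypothetical sketch, not AgentFun.AI's actual contract code.

```python
# Illustrative sketch of milestone-based feature unlocks: each feature
# becomes available once an agent's market cap crosses its threshold.

MILESTONES = [
    (127_100, "telegram_chat"),    # agent can engage in Telegram chats
    (1_270_000, "x_interaction"),  # agent starts interacting on X
]

def unlocked_features(market_cap_usd: float) -> list[str]:
    """Return every feature whose market-cap milestone has been reached."""
    return [feature for threshold, feature in MILESTONES
            if market_cap_usd >= threshold]

print(unlocked_features(150_000))    # ['telegram_chat']
print(unlocked_features(2_000_000))  # ['telegram_chat', 'x_interaction']
```

On-chain, checks like this would typically live in the token's smart contract and fire as part of trade settlement, but the unlock rule itself is just a threshold comparison.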
Further encouragement is provided by the fact that users can create AI agents with unique personalities and narratives. Each agent starts as a tailored language model for conversational interactions before it gains new capabilities, features, and skills. In other words, the agents are designed for specific tasks or domains, using a customised language model at their core. Unlike general-purpose AI, they are fine-tuned on highly specialised datasets to excel in a particular area. Over time, they can theoretically evolve by incorporating autonomous decision-making and API integration and communicating with other systems, like interacting with users in dynamic environments. They could perform tasks using logic-based or reinforcement learning modules or fetch and process real-time data. When a user creates an agent, they provide social links to facilitate socialisation and discovery. A new agent token is launched in a pool with a supply of one billion. The agent token will eventually be listed on H2 Finance as more users buy it, using the $AGENTFUN earned from sold tokens and the remaining token supply. Supporting Cronos as a leading AI-agent-powered ecosystem A final notable aspect of AgentFun.AI’s launch is its support of Cronos’ strategy to become one of the first niche, AI agent-powered ecosystems. As part of this, it will stimulate experimentation and adoption of the first AI agents on Cronos zkEVM. This may encourage other developers to introduce AI agent dApps, leading to a flood of innovation on Cronos. The launch of AgentFun.AI on Cronos aligns with the latter’s mission to build a financial ecosystem with openness, fairness, and community empowerment at its core. Cronos envisions a future enabled primarily by AI agents where decentralised finance is universally accessible, which aligns with its conviction that the agents can provide unlimited growth opportunities. 
(Image source: Depositphotos) The post AI agents and ecosystems with AgentFun.AI’s launch on Cronos appeared first on AI News. View the full article
  11. Salesforce has unveiled the findings of its UK AI Readiness Index, signalling the nation is in a position to spearhead the next wave of AI innovation, also known as agentic AI. The report places the UK ahead of its G7 counterparts in terms of AI adoption but also underscores areas ripe for improvement, such as support for SMEs, fostering cross-sector partnerships, and investing in talent development. Zahra Bahrololoumi CBE, UKI CEO at Salesforce, commented: “Agentic AI is revolutionising enterprise software by enabling humans and agents to collaborate seamlessly and drive customer success. “The UK AI Readiness Index positively highlights that the UK has both the vision and infrastructure to be a powerhouse globally in AI, and lead the current third wave of agentic AI.” UK AI adoption sets the stage for agentic revolution The Index details how both the public and private sectors in the UK have embraced AI’s transformative potential. With a readiness score of 65.5, surpassing the G7 average of 61.2, the UK is establishing itself as a hub for large-scale AI projects, driven by a robust innovation culture and pragmatic regulatory approaches. The government has played its part in maintaining a stable and secure environment for tech investment. Initiatives such as the AI Safety Summit at Bletchley Park and risk-oriented AI legislation showcase Britain’s leadership on critical AI issues like transparency and privacy. Business readiness is equally impressive, with UK industries scoring 52, well above the G7 average of 47.8. SMEs in the UK are increasingly prioritising AI adoption, further bolstering the nation’s stance in the international AI arena. Adam Evans, EVP & GM of Salesforce AI Platform, is optimistic about the evolution of agentic AI. Evans foresees that, by 2025, these agents will become business-aware—expertly navigating industry-specific challenges to complete meaningful tasks and decisions. 
Investments fuelling AI growth Salesforce is committing $4 billion to the UK’s AI ecosystem over the next five years. Since establishing its UK AI Centre in London, Salesforce says it has engaged over 3,000 stakeholders in AI training and workshops. Key investment focuses include creating a regulatory bridge between the EU’s rules-based approach and the more relaxed US approach, and ensuring SMEs have the resources to integrate AI. A strong emphasis also lies on enhancing digital skills and centralising training to support the AI workforce of the future. Feryal Clark, Minister for AI and Digital Government, said: “These findings are further proof the UK is in prime position to take advantage of AI, and highlight our strength in spurring innovation, investment, and collaboration across the public and private sector. “There is a global race for AI and we’ll be setting out plans for how the UK can use the technology to ramp-up adoption across the economy, kickstart growth, and build an AI sector which can scale and compete on the global stage.” Antony Walker, Deputy CEO at techUK, added: “To build on this progress, government and industry must collaborate to foster innovation, support SMEs, invest in skills, and ensure flexible regulation, cementing the UK’s leadership in the global AI economy.” Agentic AI boosting UK business productivity Capita, Secret Escapes, Heathrow, and Bionic are among the organisations that have adopted Salesforce’s Agentforce to boost their productivity. Adolfo Hernandez, CEO of Capita, said: “We want to transform Capita’s recruitment process into a fast, seamless and autonomous experience that benefits candidates, our people, and our clients. “With autonomous agents providing 24/7 support, our goal is to enable candidates to complete the entire recruitment journey within days as opposed to what has historically taken weeks.” 
Secret Escapes, a curator of luxury travel deals, finds autonomous agents crucial for personalising services to its 60 million members worldwide. Kate Donaghy, Head of Business Technology at Secret Escapes, added: “Agentforce uses our unified data to automate routine tasks like processing cancellations, updating booking information, or even answering common travel questions about luggage, flight information, and much more—freeing up our customer service agents to handle more complex and last-minute travel needs to better serve our members.” The UK’s AI readiness is testament to the synergy between government, business, and academia. To maintain its leadership, the UK must sustain its focus on collaboration, skills development, and innovation. (Photo by Matthew Wiebe) See also: Generative AI use soars among Brits, but is it sustainable? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Salesforce: UK set to lead agentic AI revolution appeared first on AI News. View the full article
  12. Alibaba has announced Marco-o1, a large language model (LLM) designed to tackle both conventional and open-ended problem-solving tasks. Marco-o1, from Alibaba’s MarcoPolo team, represents another step forward in the ability of AI to handle complex reasoning challenges—particularly in maths, physics, coding, and areas where clear standards may be absent. Building upon OpenAI’s reasoning advancements with its o1 model, Marco-o1 distinguishes itself by incorporating several advanced techniques, including Chain-of-Thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS), and novel reflection mechanisms. These components work in concert to enhance the model’s problem-solving capabilities across various domains. The development team has implemented a comprehensive fine-tuning strategy using multiple datasets, including a filtered version of the Open-O1 CoT Dataset, a synthetic Marco-o1 CoT Dataset, and a specialised Marco Instruction Dataset. In total, the training corpus comprises over 60,000 carefully curated samples. The model has demonstrated particularly impressive results in multilingual applications. In testing, Marco-o1 achieved notable accuracy improvements of 6.17% on the English MGSM dataset and 5.60% on its Chinese counterpart. The model has shown particular strength in translation tasks, especially when handling colloquial expressions and cultural nuances. One of the model’s most innovative features is its implementation of varying action granularities within the MCTS framework. This approach allows the model to explore reasoning paths at different levels of detail, from broad steps to more precise “mini-steps” of 32 or 64 tokens. The team has also introduced a reflection mechanism that prompts the model to self-evaluate and reconsider its reasoning, leading to improved accuracy in complex problem-solving scenarios. 
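The idea of varying action granularity can be illustrated with a toy sketch: a reasoning trace is carved into fixed-size chunks that a search procedure could then expand one at a time. Whitespace splitting stands in for a real tokenizer here, and the function is illustrative rather than Marco-o1's actual implementation.

```python
# Toy sketch of "action granularity": split a reasoning trace into
# mini-steps of a fixed token size, so a tree search can branch at a
# finer level of detail than whole reasoning steps.

def mini_steps(reasoning: str, granularity: int) -> list[str]:
    """Split a reasoning trace into chunks of `granularity` tokens."""
    tokens = reasoning.split()
    return [" ".join(tokens[i:i + granularity])
            for i in range(0, len(tokens), granularity)]

trace = ("first compute the discriminant then take its square root "
         "and finally divide the result by two")
print(mini_steps(trace, 4))
```

A smaller granularity gives the search more branch points (and more compute cost); a larger one approaches whole-step search, which is the trade-off the Marco-o1 team reports experimenting with.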
The MCTS integration has proven particularly effective, with all MCTS-enhanced versions of the model showing significant improvements over the base Marco-o1-CoT version. The team’s experiments with different action granularities have revealed interesting patterns, though they note that determining the optimal strategy requires further research and more precise reward models. (Credit: MarcoPolo Team, AI Business, Alibaba International Digital Commerce) The development team has been transparent about the model’s current limitations, acknowledging that while Marco-o1 exhibits strong reasoning characteristics, it still falls short of a fully realised “o1” model. They emphasise that this release represents an ongoing commitment to improvement rather than a finished product. Looking ahead, the Alibaba team has announced plans to incorporate reward models, including Outcome Reward Modeling (ORM) and Process Reward Modeling (PRM), to enhance the decision-making capabilities of Marco-o1. They are also exploring reinforcement learning techniques to further refine the model’s problem-solving abilities. The Marco-o1 model and associated datasets have been made available to the research community through Alibaba’s GitHub repository, complete with comprehensive documentation and implementation guides. The release includes installation instructions and example scripts for both direct model usage and deployment via FastAPI. (Photo by Alina Grubnyak) See also: New AI training techniques aim to overcome current challenges Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. 
The post Alibaba Marco-o1: Advancing LLM reasoning capabilities appeared first on AI News. View the full article
  13. OpenAI and other leading AI companies are developing new training techniques to overcome limitations of current methods. Addressing unexpected delays and complications in the development of larger, more powerful language models, these fresh techniques focus on human-like behaviour to teach algorithms to ‘think’. Reportedly led by a dozen AI researchers, scientists, and investors, the new training techniques, which underpin OpenAI’s recent ‘o1’ model (formerly Q* and Strawberry), have the potential to transform the landscape of AI development. The reported advances may influence the types or quantities of resources AI companies need continuously, including specialised hardware and energy to aid the development of AI models. The o1 model is designed to approach problems in a way that mimics human reasoning and thinking, breaking down numerous tasks into steps. The model also utilises specialised data and feedback provided by experts in the AI industry to enhance its performance. Since ChatGPT was unveiled by OpenAI in 2022, there has been a surge in AI innovation, and many technology companies claim existing AI models require expansion, be it through greater quantities of data or improved computing resources. Only then can AI models consistently improve. Now, AI experts have reported limitations in scaling up AI models. The 2010s were a revolutionary period for scaling, but Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, says that the training of AI models, particularly in understanding language structures and patterns, has levelled off. “The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Scaling the right thing matters more now,” he said. In recent times, AI lab researchers have experienced delays in and challenges to developing and releasing large language models (LLM) that are more powerful than OpenAI’s GPT-4 model. 
First, there is the cost of training large models, often running into tens of millions of dollars. And, due to complications that arise, like hardware failing due to system complexity, a final analysis of how these models run can take months. In addition to these challenges, training runs require substantial amounts of energy, often resulting in power shortages that can disrupt processes and impact the wider electricity grid. Another issue is the colossal amount of data large language models use, so much so that AI models have reportedly used up all accessible data worldwide. Researchers are exploring a technique known as ‘test-time compute’ to improve current AI models during the training or inference phases. The method can involve the generation of multiple answers in real-time to decide on a range of best solutions. Therefore, the model can allocate greater processing resources to difficult tasks that require human-like decision-making and reasoning. The aim – to make the model more accurate and capable. Noam Brown, a researcher at OpenAI who helped develop the o1 model, shared an example of how a new approach can achieve surprising results. At the TED AI conference in San Francisco last month, Brown explained that “having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer.” Rather than simply increasing the model size and training time, this can change how AI models process information and lead to more powerful, efficient systems. It is reported that other AI labs have been developing versions of the o1 technique. These include xAI, Google DeepMind, and Anthropic. Competition in the AI world is nothing new, but we could see a significant impact on the AI hardware market as a result of new techniques. 
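The best-of-N flavour of test-time compute described above can be sketched in a few lines of Python. Here `generate` and `score` are hypothetical stand-ins for a model's sampling and answer-ranking steps, not any real o1 API:

```python
import random

def generate(prompt):
    # Hypothetical stand-in for sampling one candidate answer from a model.
    return f"answer-{random.randint(0, 9)}"

def score(prompt, answer):
    # Hypothetical stand-in for a verifier or reward model rating an answer.
    return random.random()

def best_of_n(prompt, n=8):
    # Spend extra compute at inference time: sample n candidates
    # and keep the one the scorer rates highest.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("What is 2 + 2?"))
```

The point of the sketch is that answer quality scales with `n` at inference time, trading additional compute per query for better results rather than a larger model.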
Companies like Nvidia, which currently dominates the supply of AI chips due to the high demand for their products, may be particularly affected by updated AI training techniques. Nvidia became the world’s most valuable company in October, and its rise in fortunes can be largely attributed to its chips’ use in AI arrays. New techniques may impact Nvidia’s market position, forcing the company to adapt its products to meet the evolving AI hardware demand. Potentially, this could open more avenues for new competitors in the inference market. A new age of AI development may be on the horizon, driven by evolving hardware demands and more efficient training methods such as those deployed in the o1 model. The future of both AI models and the companies behind them could be reshaped, unlocking unprecedented possibilities and greater competition. See also: Anthropic urges AI regulation to avoid catastrophes Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. The post New AI training techniques aim to overcome current challenges appeared first on AI News. View the full article
  14. A survey by CloudNine PR shows that 83% of UK adults are aware of generative AI tools, and 45% of those familiar with them want companies to be transparent about the environmental costs associated with the technologies. With data centres burning vast amounts of energy, the growing demand for GenAI has sparked a debate about its sustainability. The cost of intelligence: Generative AI’s carbon footprint Behind every AI-generated email, idea, or recommendation are data centres running thousands of energy-hungry servers. Data centres are responsible for both training the large language models that power generative AI and processing individual user queries. Unlike a simple Google search, which uses relatively little energy, a single generative AI request can consume up to ten times as much electricity. The numbers are staggering. If all nine billion daily Google searches worldwide were replaced with generative AI tasks, the additional electricity demand would match the annual energy consumption of 1.5 million EU residents. According to consultants Morgan Stanley, the energy demands of generative AI are expected to grow by 70% annually until 2027. By that point, the energy required to support generative AI systems could rival the electricity needs of an entire country—Spain, for example, based on its 2022 usage. UK consumers want greener AI practices The survey also highlights growing awareness among UK consumers about the environmental implications of generative AI. Nearly one in five respondents said they don’t trust generative AI providers to manage their environmental impact responsibly. Among regular users of these tools, 10% expressed a willingness to pay a premium for products or services that prioritise energy efficiency and sustainability. Interestingly, over a third (35%) of respondents think generative AI tools should “actively remind” users of their environmental impact. 
While this may seem like a small step, it has the potential to encourage more mindful usage and place pressure on companies to adopt greener technologies. Efforts to tackle the environmental challenge Fortunately, some companies and policymakers are beginning to address these concerns. In the United States, the Artificial Intelligence Environmental Impacts Act was introduced earlier this year. The legislation aims to standardise how AI companies measure and report carbon emissions. It also provides a voluntary framework for developers to evaluate and disclose their systems’ environmental impact, pushing the industry towards greater transparency. Major players in the tech industry are also stepping up. Companies like Salesforce have voiced support for legislation requiring standardised methods to measure and report AI’s carbon footprint. Experts point to several practical ways to reduce generative AI’s environmental impact, including adopting energy-efficient hardware, using sustainable cooling methods in data centres, and transitioning to renewable energy sources. Despite these efforts, the urgency to address generative AI’s environmental impact remains critical. As Uday Radia, owner of CloudNine PR, puts it: “Generative AI has huge potential to make our lives better, but there is a race against time to make it more sustainable before it gets out of control.” (Photo by Unsplash) See also: The AI revolution: Reshaping data centres and the digital landscape Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Generative AI use soars among Brits, but is it sustainable? appeared first on AI News. 
View the full article
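The article's search-versus-GenAI comparison can be reproduced with a back-of-envelope calculation. The per-query figures below are commonly cited estimates (roughly 0.3 Wh for a web search, ten times that for a generative AI request, per the article), not numbers from the survey itself:

```python
# Assumed per-query energy figures (commonly cited estimates):
search_wh = 0.3            # one web search, in watt-hours
genai_wh = search_wh * 10  # "up to ten times as much electricity"

daily_searches = 9e9       # nine billion searches per day

# Additional annual energy if every search became a GenAI request (TWh):
extra_twh = (genai_wh - search_wh) * daily_searches * 365 / 1e12
print(f"{extra_twh:.1f} TWh/year of additional demand")
```

Under these assumptions the result is roughly 8.9 TWh per year, which spread over 1.5 million people is about 5.9 MWh each, broadly in line with typical EU per-capita electricity use and hence with the article's comparison.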
  15. Ai2 is releasing OLMo 2, a family of open-source language models that advances the democratisation of AI and narrows the gap between open and proprietary solutions. The new models, available in 7B and 13B parameter versions, are trained on up to 5 trillion tokens and demonstrate performance levels that match or exceed comparable fully open models whilst remaining competitive with open-weight models such as Llama 3.1 on English academic benchmarks. “Since the release of the first OLMo in February 2024, we’ve seen rapid growth in the open language model ecosystem, and a narrowing of the performance gap between open and proprietary models,” explained Ai2. The development team achieved these improvements through several innovations, including enhanced training stability measures, staged training approaches, and state-of-the-art post-training methodologies derived from their Tülu 3 framework. Notable technical improvements include the switch from nonparametric layer norm to RMSNorm and the implementation of rotary positional embedding. OLMo 2 model training breakthrough The training process employed a sophisticated two-stage approach. The initial stage utilised the OLMo-Mix-1124 dataset of approximately 3.9 trillion tokens, sourced from DCLM, Dolma, Starcoder, and Proof Pile II. The second stage incorporated a carefully curated mixture of high-quality web data and domain-specific content through the Dolmino-Mix-1124 dataset. Particularly noteworthy is the OLMo 2-Instruct-13B variant, which is the most capable model in the series. The model demonstrates superior performance compared to Qwen 2.5 14B instruct, Tülu 3 8B, and Llama 3.1 8B instruct models across various benchmarks. (Credit: Ai2) Committing to open science Reinforcing its commitment to open science, Ai2 has released comprehensive documentation including weights, data, code, recipes, intermediate checkpoints, and instruction-tuned models. 
This transparency allows for full inspection and reproduction of results by the wider AI community. The release also introduces an evaluation framework called OLMES (Open Language Modeling Evaluation System), comprising 20 benchmarks designed to assess core capabilities such as knowledge recall, commonsense reasoning, and mathematical reasoning. OLMo 2 raises the bar in open-source AI development, potentially accelerating the pace of innovation in the field whilst maintaining transparency and accessibility. (Photo by Rick Barrett) See also: OpenAI enhances AI safety with new red teaming methods Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Ai2 OLMo 2: Raising the bar for open language models appeared first on AI News. View the full article
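The switch from LayerNorm to RMSNorm mentioned in the post can be illustrated with a minimal, dependency-free sketch of the technique; this is a generic textbook implementation, not Ai2's actual training code:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale activations by their root-mean-square.
    # Unlike LayerNorm, there is no mean subtraction and no bias term,
    # which makes it cheaper and often more stable in training.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]

out = rms_norm([1.0, 2.0, 3.0], weight=[1.0, 1.0, 1.0])
```

Because the operation is a pure rescaling, relative magnitudes within the vector are preserved while the output's root-mean-square is normalised to roughly one.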
  16. Blockchain AI research lab YeagerAI has announced the launch of the Intelligent Oracle, an AI-powered oracle that aims to provide decentralised applications (DApps) with online data on-chain. The Oracle can change how data is collected, offering new possibilities and use cases for blockchain DApps. It is built on the GenLayer blockchain, also a brainchild of YeagerAI, and designed to support a new generation of DApps. It can fetch any type of online data and deliver it on-chain. The Intelligent Oracle will initially launch on a permissioned local network, with the GenLayer Testnet expected operational by the end of 2024. By removing the dependency on human-powered resolution systems and offering cross-chain compatibility, the Intelligent Oracle aims to provide a scalable, efficient, and future-proof solution for decision-making. The Intelligent Oracle is powered by LLMs integrated in GenLayer’s Optimistic Democracy consensus mechanism. The consensus mechanism is ‘governed’ by validators that connect to LLMs, verifying and securing the data that the Oracle fetches from on- and off-chain sources. The validators enable the network to process non-deterministic transactions by fetching data from the internet. When a query is made, a lead validator generates a proposed result, while other validators independently verify the output against the pre-set equivalence criteria. Optimistic Democracy ensures all decisions are accurate, reliable, and secure. While blockchain oracles have evolved rapidly in the past few years, several pertinent issues remain unresolved. Chief among them is that blockchains cannot access external data on their own and can only act on what is already available on-chain. The emerging uses of blockchains are impacted by the lack of broader oracles, with most futuristic DApps requiring immediate, accurate, and sometimes subjective data from the internet. 
To date, the solution has been to use traditional oracles, which only provide pre-defined datasets or require manual intervention, making them slow, costly, and inflexible. The Intelligent Oracle offers an autonomous alternative, providing a virtually unlimited range of data types to DApp builders. Welcoming Intelligent Oracle: A new world of blockchain use cases The Intelligent Oracle is based on Intelligent Contracts operating on the GenLayer blockchain. The oracle operates in the GenLayer ecosystem, allowing users to fetch decentralised, transparent and secure data for their DApps or platforms. It offers cross-chain compatibility, allowing it to integrate with multiple blockchain ecosystems. Following the launch, blockchain DApp developers have significantly more possibilities open to them. The launch of the Intelligent Oracle could be a step forward for decentralised applications in prediction markets, insurance, and financial derivatives, for example. The Oracle enables cost-effective and fast data resolution. While traditional oracles can take days to resolve prediction markets – incurring delays and costs – the Intelligent Oracle achieves transaction finality in less than an hour at a cost of under $1 per market. YeagerAI has seen rapid adoption of its new Oracle service with several partners, and some platforms already committed to integrating Intelligent Oracle. Early partners committed to building with the technology include Radix DLT, Etherisc, PredX, Delphi Bets, and Provably. The post YeagerAI’s Intelligent Oracle: Built on GenLayer blockchain for real-time data access appeared first on AI News. View the full article
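The lead-validator flow described above can be sketched in Python. The function names, the strip-whitespace equivalence check, and the simple majority rule are illustrative assumptions, not GenLayer's actual protocol:

```python
def resolve_query(query, validators, equivalent):
    # Illustrative sketch: a lead validator proposes a result and the
    # remaining validators independently check it against a pre-set
    # equivalence criterion, accepting on a majority of approvals.
    lead, *rest = validators
    proposal = lead(query)
    approvals = sum(1 for v in rest if equivalent(v(query), proposal))
    return proposal if approvals > len(rest) // 2 else None

# Toy usage: three validators fetching the same value with minor noise.
validators = [lambda q: "42", lambda q: "42", lambda q: " 42 "]
equivalent = lambda a, b: a.strip() == b.strip()
print(resolve_query("price?", validators, equivalent))
```

An equivalence criterion rather than exact matching matters here because LLM-backed validators may return the same answer phrased slightly differently.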
  17. The UK is establishing the Laboratory for AI Security Research (LASR) to help protect Britain and its allies against emerging threats in what officials describe as an “AI arms race.” The laboratory – which will receive initial government funding of £8.22 million – aims to bring together experts from industry, academia, and government to assess AI’s impact on national security. The announcement comes as part of a broader strategy to strengthen the UK’s cyber defence capabilities. Speaking at the NATO Cyber Defence Conference at Lancaster House, the Chancellor of the Duchy of Lancaster said: “NATO needs to continue to adapt to the world of AI, because as the tech evolves, the threat evolves. “NATO has stayed relevant over the last seven decades by constantly adapting to new threats. It has navigated the worlds of nuclear proliferation and militant nationalism. The move from cold warfare to drone warfare.” The Chancellor painted a stark picture of the current cyber security landscape, stating: “Cyber war is now a daily reality. One where our defences are constantly being tested. The extent of the threat must be matched by the strength of our resolve to combat it and to protect our citizens and systems.” The new laboratory will operate under a ‘catalytic’ model, designed to attract additional investment and collaboration from industry partners. Key stakeholders in the new lab include GCHQ, the National Cyber Security Centre, the MOD’s Defence Science and Technology Laboratory, and prestigious academic institutions such as the University of Oxford and Queen’s University Belfast. In a direct warning about Russia’s activities, the Chancellor declared: “Be in no doubt: the United Kingdom and others in this room are watching Russia. We know exactly what they are doing, and we are countering their attacks both publicly and behind the scenes. “We know from history that appeasing dictators engaged in aggression against their neighbours only encourages them. 
Britain learned long ago the importance of standing strong in the face of such actions.” Reaffirming support for Ukraine, he added, “Putin is a man who wants destruction, not peace. He is trying to deter our support for Ukraine with his threats. He will not be successful.” The new lab follows recent concerns about state actors using AI to bolster existing security threats. “Last year, we saw the US for the first time publicly call out a state for using AI to aid its malicious cyber activity,” the Chancellor noted, referring to North Korea’s attempts to use AI for malware development and vulnerability scanning. Stephen Doughty, Minister for Europe, North America and UK Overseas Territories, highlighted the dual nature of AI technology: “AI has enormous potential. To ensure it remains a force for good in the world, we need to understand its threats and its opportunities.” Alongside LASR, the government announced a new £1 million incident response project to enhance collaborative cyber defence capabilities among allies. The laboratory will prioritise collaboration with Five Eyes countries and NATO allies, building on the UK’s historical strength in computing, dating back to Alan Turing’s groundbreaking work. The initiative forms part of the government’s comprehensive approach to cybersecurity, which includes the upcoming Cyber Security and Resilience Bill and the recent classification of data centres as critical national infrastructure. (Photo by Erik Mclean) See also: Anthropic urges AI regulation to avoid catastrophes Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. 
The post UK establishes LASR to counter AI security threats appeared first on AI News. View the full article
  18. A critical part of OpenAI’s safeguarding process is “red teaming” — a structured methodology using both human and AI participants to explore potential risks and vulnerabilities in new systems. Historically, OpenAI has engaged in red teaming efforts predominantly through manual testing, which involves individuals probing for weaknesses. This was notably employed during the testing of their DALL·E 2 image generation model in early 2022, where external experts were invited to identify potential risks. Since then, OpenAI has expanded and refined its methodologies, incorporating automated and mixed approaches for a more comprehensive risk assessment. “We are optimistic that we can use more powerful AI to scale the discovery of model mistakes,” OpenAI stated. This optimism is rooted in the idea that automated processes can help evaluate models and train them to be safer by recognising patterns and errors on a larger scale. In their latest push for advancement, OpenAI is sharing two important documents on red teaming — a white paper detailing external engagement strategies and a research study introducing a novel method for automated red teaming. These contributions aim to strengthen the process and outcomes of red teaming, ultimately leading to safer and more responsible AI implementations. As AI continues to evolve, understanding user experiences and identifying risks such as abuse and misuse are crucial for researchers and developers. Red teaming provides a proactive method for evaluating these risks, especially when supplemented by insights from a range of independent external experts. This approach not only helps establish benchmarks but also facilitates the enhancement of safety evaluations over time. 
The human touch OpenAI has shared four fundamental steps in their white paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” to design effective red teaming campaigns: Composition of red teams: The selection of team members is based on the objectives of the campaign. This often involves individuals with diverse perspectives, such as expertise in natural sciences, cybersecurity, and regional politics, ensuring assessments cover the necessary breadth. Access to model versions: Clarifying which versions of a model red teamers will access can influence the outcomes. Early-stage models may reveal inherent risks, while more developed versions can help identify gaps in planned safety mitigations. Guidance and documentation: Effective interactions during campaigns rely on clear instructions, suitable interfaces, and structured documentation. This involves describing the models, existing safeguards, testing interfaces, and guidelines for recording results. Data synthesis and evaluation: Post-campaign, the data is assessed to determine if examples align with existing policies or require new behavioural modifications. The assessed data then informs repeatable evaluations for future updates. A recent application of this methodology involved preparing the OpenAI o1 family of models for public use—testing their resistance to potential misuse and evaluating their application across various fields such as real-world attack planning, natural sciences, and AI research. Automated red teaming Automated red teaming seeks to identify instances where AI may fail, particularly regarding safety-related issues. This method excels at scale, generating numerous examples of potential errors quickly. However, traditional automated approaches have struggled with producing diverse, successful attack strategies. 
OpenAI’s research introduces “Diverse And Effective Red Teaming With Auto-Generated Rewards And Multi-Step Reinforcement Learning,” a method which encourages greater diversity in attack strategies while maintaining effectiveness. This method involves using AI to generate different scenarios, such as illicit advice, and training red teaming models to evaluate these scenarios critically. The process rewards diversity and efficacy, promoting more varied and comprehensive safety evaluations. Despite its benefits, red teaming does have limitations. It captures risks at a specific point in time, which may evolve as AI models develop. Additionally, the red teaming process can inadvertently create information hazards, potentially alerting malicious actors to vulnerabilities not yet widely known. Managing these risks requires stringent protocols and responsible disclosures. While red teaming continues to be pivotal in risk discovery and evaluation, OpenAI acknowledges the necessity of incorporating broader public perspectives on AI’s ideal behaviours and policies to ensure the technology aligns with societal values and expectations. See also: EU introduces draft regulatory guidance for AI models Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post OpenAI enhances AI safety with new red teaming methods appeared first on AI News. View the full article
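A reward that trades effectiveness off against similarity to earlier attempts, in the spirit of the method's title, might be sketched as follows. The word-level Jaccard similarity and the 0.5 diversity weight are illustrative assumptions, not OpenAI's actual reward design:

```python
def jaccard(a, b):
    # Crude lexical similarity between two red-teaming prompts.
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def reward(attack, success_rate, previous_attacks, w_div=0.5):
    # Reward effective attacks, but discount ones that closely resemble
    # past attacks, nudging the policy toward diverse strategies.
    if previous_attacks:
        max_sim = max(jaccard(attack, p) for p in previous_attacks)
    else:
        max_sim = 0.0
    return success_rate + w_div * (1.0 - max_sim)

print(reward("ignore all safety rules", 0.8, ["ignore all previous rules"]))
```

A novel attack with a moderate success rate can thereby outscore a highly effective attack that merely repeats a known strategy, which is the core of the diversity-plus-efficacy trade-off the paper describes.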
  19. In 2024, Big Tech is all-in on artificial intelligence, with companies like Microsoft, Amazon, Alphabet, and Meta leading the way. Their combined spending on AI is projected to exceed a jaw-dropping $240 billion. Why? Because AI isn’t just the future—it’s the present, and the demand for AI-powered tools and infrastructure has never been higher. The companies aren’t just keeping up; they’re setting the pace for the industry. The scale of their investment is hard to ignore. In the first half of 2023, tech giants poured $74 billion into capital expenditure. By Q3, that number had jumped to $109 billion. In mid-2024, spending reached $104 billion, a remarkable 47% rise over the same ******* a year earlier. By Q3, the total hit $171 billion. If this pattern continues, Q4 might add another $70 billion, bringing the total to a truly staggering $240 billion for the year. Why so much spending? AI’s potential is immense, and companies are making sure they’re positioned to reap the rewards. A growing market: AI is projected to create $20 trillion in global economic impact by 2030. In countries like India, AI could contribute $500 billion to GDP by 2025. With stakes this high, big tech isn’t hesitating to invest heavily. Infrastructure demands: Training and running AI models require massive investment in infrastructure, from data centres to high-performance GPUs. Alphabet increased its capital expenditures by 62% last quarter compared to the previous year, even as it cut its workforce by 9,000 employees to manage costs. Revenue potential: AI is already proving its value. Microsoft’s AI products are expected to generate $10 billion annually—the fastest-growing segment in the company’s history. Alphabet, meanwhile, uses AI to write over 25% of its new code, streamlining operations. Amazon is also ramping up, with plans to spend $75 billion on capital expenditure in 2024. Meta’s forecast is not far behind, with estimates between $38 and $40 billion. 
Across the board, organisations recognise that maintaining their edge in AI requires sustained and significant investment. Supporting revenue streams What keeps the massive investments coming is the strength of big tech’s core businesses. Last quarter, Alphabet’s digital advertising machine, which is powered by Google’s search engine, generated $49.39 billion in ad revenue, a 12% year-over-year increase. This acts as a solid foundation that allows Alphabet to pour resources into building out its AI arsenal without destabilising the bottom line. Microsoft’s diversified revenue streams are another example. While the company spent $20 billion on AI and cloud infrastructure last quarter, its productivity segment, which includes Office, grew by 12% to $28.3 billion, and its personal computing business, boosted by Xbox and the Activision Blizzard acquisition, grew 17% to $13.2 billion. These successes demonstrate how AI investments can support broader growth strategies. The financial payoff Big tech is already seeing the benefits of its heavy spending. Microsoft’s Azure platform has seen substantial growth, with its AI income approaching $6 billion. Amazon’s AI business is growing at triple-digit rates, and Alphabet reported a 34% jump in profits last quarter, with cloud revenue playing a major role. Meta, while primarily focused on advertising, is leveraging AI to make its platforms more engaging. AI-driven tools, such as improved feeds and search features, keep users on its platforms longer, resulting in new revenue growth. AI spending shows no signs of slowing down. Tech leaders at Microsoft and Alphabet view AI as a long-term investment critical to their future success. And the results speak for themselves: Alphabet’s cloud revenue is up 35%, while Microsoft’s cloud business grew 20% last quarter. For the time being, the focus is on scaling up infrastructure and meeting demand. 
However, the real transformation will come when big tech unlocks AI’s full potential, transforming industries and redefining how we work and live. By investing in high-quality, centralised data strategies, businesses can ensure trustworthy and accurate AI implementations, and unlock AI’s full potential to drive innovation, improve decision-making, and gain competitive edge. AI’s revolutionary promise is within reach—but only for companies prepared to lay the groundwork for sustainable growth and long-term results. (Photo by Unsplash) See also: Microsoft tries to convert Google Chrome users Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Big tech’s AI spending hits new heights appeared first on AI News. View the full article
  20. Samsung has revealed Gauss2, the second-generation proprietary AI model set to significantly enhance user experiences. Paul Kyungwhoon Cheun, President and CTO of the Device eXperience (DX) Division, commented: “Samsung Electronics is committed to developing cutting-edge software, including AI and data analytics, to enhance user experiences. “With three distinct models, Samsung Gauss2 is already boosting our internal productivity, and we plan to integrate it into products to deliver higher levels of convenience and personalisation.” Samsung Gauss2: Multimodal AI Gauss2 builds upon last year’s internal generative AI model, showcasing advancements in handling language, code, and images. It comes in three variants: Compact, Balanced, and Supreme, tailored to different computational needs. Compact: Optimised for environments with limited computing resources, ensuring effective performance even on-device. Balanced: Strikes a balance among performance, speed, and efficiency; suitable for a variety of tasks. Supreme: Incorporates Mixture of Experts (MoE) technology to minimise computational costs while maximising efficiency and performance. Gauss2 is designed to support between nine and fourteen languages, depending on the model, and a range of programming languages. Samsung’s custom training techniques and tokeniser aim to deliver peak efficiency across supported languages. The new models promise response generation speeds 1.5 to 3 times faster than leading open-source alternatives, facilitating prompt AI interaction and minimising wait times. Customisation and deployment Samsung’s in-house generative AI model allows easier customisation for specific applications. Gauss2 supports diverse productivity tasks, with widespread adoption among Samsung developers. The coding assistant ‘code.i’ – enhanced by Gauss2 – is utilised extensively within the DX Division and by international research teams, with up to 60% of developers now engaging regularly. 
The Gauss Portal, another Gauss-powered AI service, enhances productivity through features such as document summarisation and translation. Since its launch, this AI service has expanded internationally, aiding various office tasks. As of August, call centre operations also benefit from AI-driven categorisation and summarisation. Going forward, Samsung aims to further boost internal productivity, improving services like code.i and enhancing the Gauss Portal’s natural language question-and-answer capabilities. Future functionalities will include multimodal operations, like chart analysis and image creation. ‘AI for All’ Samsung’s strategy, “AI for All,” envisions widespread incorporation of AI-based services across its product lines. By combining AI with knowledge graph technology, Samsung anticipates delivering even greater personalisation. Besides Gauss2, the conference featured presentations on software development, including the SmartThings platform’s customer experience improvements. Attendees engaged in 29 technical sessions covering: The future of healthcare developments within Samsung’s health ecosystem. Enhancements in the SmartThings experience through generative AI. Insights into the code.i AI coding assistant. Lifestyle content innovations for TV. AI solutions for Samsung’s home appliances. SDC24 Korea underscored Samsung’s vision to integrate AI across its ecosystem. (Image Credit: Samsung) See also: EU introduces draft regulatory guidance for AI models Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Samsung unveils Gauss2 AI model at SDC24 Korea appeared first on AI News. 
View the full article
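The Mixture of Experts (MoE) approach described for the Supreme variant saves compute by routing each input through only a few expert sub-networks rather than the whole model. A toy sketch of top-k gating, assuming nothing about Samsung's actual architecture (every name and number here is illustrative):

```python
import math
import random

def softmax(scores):
    """Normalise raw router scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_weights, top_k=2):
    """Route input x to the top_k highest-scoring experts and combine
    their outputs, weighted by the renormalised gate values. Only
    top_k experts actually run -- that is how MoE cuts compute cost."""
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in router_weights]
    gates = softmax(scores)
    chosen = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:top_k]
    norm = sum(gates[i] for i in chosen)
    return sum(gates[i] / norm * experts[i](x) for i in chosen)

# Toy example: four "experts", each a simple scalar function of the input.
experts = [lambda x, k=k: (k + 1) * sum(x) for k in range(4)]
random.seed(0)
router_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
y = moe_forward([0.5, -0.2, 0.1], experts, top_k=2, router_weights=router_weights)
```

In a real model the experts are neural sub-networks and the router is trained jointly with them; the point of the sketch is only that the output is a gated blend of a small subset of experts.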
  21. Discover how chatbots for marketing can boost your ROI with enhanced engagement and instant customer responses.

What are chatbots?

Chatbots are automated software applications designed to simulate human conversation. They interact with users through text or voice, providing immediate responses and performing various tasks. AI chatbots can understand and process natural language, enabling them to handle complex queries and provide relevant information or services. Chatbots come in various forms, including:

Rule-based chatbots: Respond to specific commands predetermined by developers.
AI-driven chatbots: Use machine learning and natural language processing (NLP) to understand and adapt to user queries.

The importance of chatbots in marketing

Chatbots have become an essential component in modern marketing strategies. They offer marketers a way to connect with consumers efficiently, enhance customer experience, and streamline interactions.

Enhanced customer engagement: Chatbots engage customers by providing instant responses and personalised interactions.
24/7 availability: They operate around the clock, ensuring customer inquiries are addressed at any time.
Cost-effectiveness: Reduce the need for extensive human customer support, lowering operational costs.
Data collection: Gather valuable customer data and insights for better marketing strategies.
Increased lead generation: Facilitate the collection of leads by interacting with potential customers and capturing their information.

Chatbots play a crucial role in improving customer engagement. They provide a dynamic and interactive way for businesses to communicate with their audience, fostering stronger relationships and increasing satisfaction. Chatbots can quickly address common questions, offer recommendations, and guide customers through the purchasing process, creating a more personalised experience. One of the most valuable features of chatbots is their ability to operate around the clock.
Unlike human support teams, chatbots are available 24/7, ensuring customers receive immediate assistance regardless of the time of day. Continuous availability can significantly enhance the customer experience, leading to higher levels of satisfaction and loyalty. The implementation of chatbots in marketing strategies not only streamlines operations but also delivers a more consistent and fulfilling customer experience. The result is an increase in engagement, satisfaction, and overall effectiveness in marketing campaigns.

Implementing chatbots in marketing strategies

Integrating chatbots into marketing strategies can significantly enhance customer engagement and streamline overall marketing efforts. This section delves into how chatbots can be used for personalised customer interactions and how they facilitate data collection and analysis. Chatbots offer a unique opportunity to create personalised interactions with customers. By using AI and machine learning, chatbots can tailor responses based on user behaviours, preferences, and past interactions. The personalised approach can make customers feel valued and understood, increasing their satisfaction and loyalty.

Key aspects of personalised customer interactions:

Greeting users by name: Addressing customers by their names makes interactions feel more personal and engaging.
Tailored recommendations: Based on previous interactions, chatbots can recommend products, content, or services that align with individual preferences.
Customising responses: Chatbots can adjust their responses based on the user’s mood, choices, and patterns.

Chatbots are important in gathering and analysing customer data. The interactions between users and chatbots generate valuable insights that can be used to optimise marketing strategies. Collecting this data can help businesses understand customer needs, behaviour patterns, and preferences.
Key areas where chatbots aid in data collection and analysis:

User interaction history: Chatbots store conversation logs, providing insights into common customer queries and issues.
Demographic data: Collecting information like age, location, and interests helps in segmenting the audience for targeted marketing.
Feedback mechanisms: Gathering feedback directly through chatbot interactions allows businesses to gauge customer satisfaction and areas for improvement.

In the realm of marketing, using chatbots can significantly boost return on investment (ROI). Two of the primary factors contributing to this increase are cost-effectiveness and enhanced lead generation and conversions. Chatbots offer substantial cost savings for businesses. By automating customer interactions, businesses can reduce the need for a large customer service team. This not only lowers operational costs but also streamlines processes. The initial investment in chatbot technology is often offset by the long-term savings achieved through decreased labour costs and increased efficiency. Also, chatbots can play a critical role in lead generation and conversion rates. By engaging users in real-time, chatbots can qualify leads, provide personalised recommendations, and guide users through the sales funnel. Immediate interaction can improve user experience and lead to higher conversion rates. Utilising chatbots for marketing can lead to higher efficiency, greater customer interaction, and ultimately a better ROI. By understanding and implementing these technologies, businesses can enhance their marketing strategies and achieve substantial financial benefits.

The post Boost your ROI: The impact of chatbots on marketing appeared first on AI News.

View the full article
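The rule-based category described above is simple enough to sketch in a few lines: the bot matches incoming messages against developer-defined keyword patterns and returns canned replies, falling back when nothing matches. A minimal illustration (all patterns and replies are hypothetical, not any specific product):

```python
import re

# Keyword patterns mapped to canned replies -- the "specific commands
# predetermined by developers" that define a rule-based chatbot.
RULES = [
    (re.compile(r"\b(price|cost|how much)\b", re.I), "Our plans start at $10/month."),
    (re.compile(r"\b(hours|open)\b", re.I), "We're open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(human|agent)\b", re.I), "Connecting you to a support agent..."),
]
FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first matching canned response, else a fallback.
    An AI-driven bot would replace this lookup with an NLP model
    that interprets intent instead of matching literal keywords."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

print(reply("How much does it cost?"))  # -> Our plans start at $10/month.
```

The fallback branch is exactly where rule-based bots break down, and why the article's AI-driven category exists: an NLP model can still respond usefully to phrasings the developers never anticipated.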
  22. AI is rapidly becoming ubiquitous across business systems and IT ecosystems, with adoption and development racing faster than anyone could have expected. Today it seems that everywhere we turn, software engineers are building custom models and integrating AI into their products, as business leaders incorporate AI-powered solutions in their working environments. However, uncertainty about the best way to implement AI is stopping some companies from taking action. Boston Consulting Group’s latest Digital Acceleration Index (DAI), a global survey of 2,700 executives, revealed that only 28% say their organisation is fully prepared for new AI regulation.

Their uncertainty is exacerbated by AI regulations arriving thick and fast: the EU AI Act is on the way; Argentina released a draft AI plan; Canada has the AI and Data Act; China has enacted a slew of AI regulations; and the G7 nations launched the “Hiroshima AI process.” Guidelines abound, with the OECD developing AI principles, the UN proposing a new UN AI advisory body, and the Biden administration releasing a blueprint for an AI Bill of Rights (although that could quickly change with the second Trump administration). Legislation is also coming in individual US states, and is appearing in many industry frameworks. To date, 21 states have enacted laws to regulate AI use in some manner, including the Colorado AI Act, and clauses in California’s CCPA, plus a further 14 states have legislation awaiting approval.

Meanwhile, there are loud voices on both sides of the AI regulation debate. A new survey from SolarWinds shows 88% of IT professionals advocate for stronger regulation, and separate research reveals that 91% of British people want the government to do more to hold businesses accountable for their AI systems. On the other hand, the leaders of over 50 tech companies recently wrote an open letter calling for urgent reform of the EU’s heavy AI regulations, arguing that they stifle innovation.
It’s certainly tricky terrain for business leaders and software developers to navigate, as regulators scramble to catch up with tech. You want to take advantage of the benefits AI can provide, do so in a way that sets you up for compliance with whatever regulatory requirements are coming, and avoid handicapping your AI use unnecessarily while your rivals speed ahead. We don’t have a crystal ball, so we can’t predict the future. But we can share some best practices for setting up systems and procedures that will prepare the ground for AI regulatory compliance.

Map out AI usage in your wider ecosystem

You can’t manage your team’s AI use unless you know about it, but that alone can be a significant challenge. Shadow IT is already the scourge of cybersecurity teams: employees sign up for SaaS tools without the knowledge of IT departments, leaving an unknown number of solutions and platforms with access to business data and/or systems. Now security teams also have to grapple with shadow AI. Many apps, chatbots, and other tools incorporate AI, machine learning (ML), or natural language processing (NLP), without such solutions necessarily being obvious AI solutions. When employees log into these solutions without official approval, they bring AI into your systems without your knowledge. As Opice Blum’s data privacy expert Henrique Fabretti Moraes explained, “Mapping the tools in use – or those intended for use – is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilisation.” Some regulations hold you responsible for AI use by vendors. To take full control of the situation, you need to map all the AI in your own and your partner organisations’ environments. In this regard, using a tool like Harmonic can be instrumental in detecting AI use across the supply chain.
Verify data governance

Data privacy and security are core concerns for all AI regulations, both those already in place and those on the brink of approval. Your AI use already needs to comply with existing privacy laws like GDPR and CCPA, which require you to know what data your AI can access and what it does with the data, and for you to demonstrate guardrails to protect the data AI uses. To ensure compliance, you need to put robust data governance rules into place in your organisation, managed by a defined team, and backed up by regular audits. Your policies should include due diligence to evaluate data security and sources of all your tools, including those that use AI, to identify areas of potential bias and privacy risk.

“It is incumbent on organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts,” said Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds. “This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”

Establish continuous monitoring for your AI systems

Effective monitoring is crucial for managing any area of your business. When it comes to AI, as with other areas of cybersecurity, you need continuous monitoring to ensure that you know what your AI tools are doing, how they are behaving, and what data they are accessing. You also need to audit them regularly to keep on top of AI use in your organisation. “The idea of using AI to monitor and regulate other AI systems is a crucial development in ensuring these systems are both effective and ethical,” said Cache Merrill, founder of software development company Zibtek. “Currently, techniques like machine learning models that predict other models’ behaviours (meta-models) are employed to monitor AI.
The systems analyse patterns and outputs of operational AI to detect anomalies, biases or potential failures before they become critical.” Cyber GRC automation platform Cypago allows you to run continuous monitoring and regulatory audit evidence collection in the background. The no-code automation allows you to set custom workflow capabilities without technical expertise, so alerts and mitigation actions are triggered instantly according to the controls and thresholds you set up. Cypago can connect with your various digital platforms, synchronise with virtually any regulatory framework, and turn all relevant controls into automated workflows. Once your integrations and regulatory frameworks are set up, creating custom workflows on the platform is as simple as uploading a spreadsheet.

Use risk assessments as your guidelines

It’s vital to know which of your AI tools are high risk, medium risk, and low risk – for compliance with external regulations, for internal business risk management, and for improving software development workflows. High-risk use cases will need more safeguards and evaluation before deployment. “While AI risk management can be started at any point in the project development,” said Ayesha Gulley, an AI policy expert from Holistic AI. “Implementing a risk management framework sooner than later can help enterprises increase trust and scale with confidence.” When you know the risks posed by different AI solutions, you can choose the level of access you’ll grant them to data and critical business systems. In terms of regulations, the EU AI Act already distinguishes between AI systems with different risk levels, and NIST recommends assessing AI tools based on trustworthiness, social impact, and how humans interact with the system.

Proactively set AI ethics governance

You don’t need to wait for AI regulations to set up ethical AI policies.
Allocate responsibility for ethical AI considerations, put together teams, and draw up policies for ethical AI use that include cybersecurity, model validation, transparency, data privacy, and incident reporting. Plenty of existing frameworks like NIST’s AI RMF and ISO/IEC 42001 recommend AI best practices that you can incorporate into your policies. “Regulating AI is both necessary and inevitable to ensure ethical and responsible use. While this may introduce complexities, it need not hinder innovation,” said Arik Solomon, CEO and co-founder of Cypago. “By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.” Companies that can demonstrate a proactive approach to ethical AI will be better positioned for compliance. AI regulations aim to ensure transparency and data privacy, so if your goals align with these principles, you’ll be more likely to have policies in place that comply with future regulation. The FairNow platform can help with this process, with tools for managing AI governance, bias checks, and risk assessments in a single location.

Don’t let fear of AI regulation hold you back

AI regulations are still evolving and emerging, creating uncertainty for businesses and developers. But don’t let the fluid situation stop you from benefiting from AI. By proactively implementing policies, workflows, and tools that align with the principles of data privacy, transparency, and ethical use, you can prepare for AI regulations and take advantage of AI-powered possibilities.

The post Preparing today for tomorrow’s AI regulations appeared first on AI News.

View the full article
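The continuous-monitoring advice above can start far simpler than a full meta-model: log an operational metric from your deployed AI (refusal rate, latency, confidence) and flag statistical outliers for human review. A minimal sketch, with the metric and thresholds purely illustrative:

```python
import statistics

def detect_anomalies(metric_history, threshold=3.0):
    """Flag observations whose z-score exceeds the threshold.
    A bare-bones stand-in for the 'meta-model' idea quoted above:
    watch a deployed model's output metrics and alert on values
    far outside the historical norm."""
    mean = statistics.fmean(metric_history)
    stdev = statistics.stdev(metric_history)
    return [
        (i, value)
        for i, value in enumerate(metric_history)
        if stdev and abs(value - mean) / stdev > threshold
    ]

# Hypothetical daily refusal-rate readings from a deployed model;
# the 0.91 spike is the kind of behaviour shift worth investigating.
history = [0.02, 0.03, 0.02, 0.04, 0.03, 0.91, 0.02, 0.03]
alerts = detect_anomalies(history, threshold=2.0)
```

In production this would run on a schedule against real telemetry, and an alert would feed the incident-reporting process the section recommends, but the core loop (collect, baseline, flag deviations) is the same.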
  23. Business Insider’s “CXO AI Playbook” looks at how firms are utilising AI to tackle challenges, scale operations, and plan for the future. The Playbook draws on stories from various industries to see what problems AI is solving, who’s driving these initiatives, and how it’s reshaping strategies.

Salesforce, well known for its CRM software used by over 150,000 companies like Amazon and Walmart, is no stranger to innovation. It also owns Slack, the popular workplace communication app. Salesforce is now stepping up its AI game with Agentforce, a platform that lets businesses build and deploy digital agents to automate tasks such as creating sales reports and summarising Slack conversations.

What problem is it solving?

Salesforce has been working with AI for years. In 2016, it launched Einstein, an AI feature baked into its CRM platform. Einstein handled basic scriptable tasks, but the rise of generative AI brought a chance to do more. Smarter tools could now make better decisions and understand natural language. This sparked a transformation. First came Einstein GPT, then Einstein Copilot, and now Agentforce—a platform designed for flexibility with prebuilt and customisable agents to handle diverse business needs. “Our customers wanted more. Some wanted to tweak the agents we offer, while others wanted to create their own,” said Tyler Carlson, Salesforce’s VP of Business Development.

The tech behind it

Agentforce is powered by Salesforce’s Atlas Reasoning Engine, developed in-house. The platform connects with AI models from major players like OpenAI, Anthropic, Amazon, and Google, giving businesses access to a variety of tools. Slack has become a testing ground for these AI agents. Currently in beta, Agentforce’s Slack integration puts automations where employees already spend their time. “Slack makes these tools easy to use and accessible,” Carlson added.
Smarter, more flexible AI

Agentforce uses ReAct prompting, a technique that helps agents break down problems into smaller steps and adjust their approach as they go. This leads to more accurate responses and hands-off task management, from answering questions to scheduling meetings. Agentforce works with Salesforce’s proprietary LLMs and third-party models, giving clients plenty of options. To ensure security, Salesforce enforces strict data privacy policies, including limits on data retention.

Making it work for businesses

With tools like Agentbuilder, companies can design AI agents tailored to their needs. For example, an agent could sort emails or answer specific HR questions using internal data. One example is Salesforce’s collaboration with Workday to create an AI service agent for employee queries. Salesforce is already seeing results, with Agentforce resolving 90% of customer inquiries in early trials. The goal? Broader adoption, more capabilities, and higher workloads handled by these agents. “We’re building a ******* ecosystem of partners and skills,” Carlson said. “By next year, we want Agentforce to be a must-have for businesses.” (Photo by Unsplash)

See also: Paul O’Sullivan, Salesforce: Transforming work in the GenAI era

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Salesforce launches AI platform for automated task management appeared first on AI News.

View the full article
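ReAct prompting, mentioned above, interleaves reasoning with tool use: the agent proposes an action, observes the result, and repeats until it can answer. A toy version of that loop with a scripted stand-in for the model (tool names and the Action/Observation format are illustrative; this is not Salesforce's Atlas engine):

```python
# Toy ReAct loop: alternate Action -> Observation until the (stubbed)
# model produces a final answer. A real agent would call an LLM
# instead of the scripted_model function below.

TOOLS = {
    "lookup_order": lambda arg: {"order-42": "shipped"}.get(arg, "not found"),
    "get_date": lambda arg: "2024-11-15",
}

def scripted_model(transcript):
    """Stand-in for an LLM: decide the next step from the transcript."""
    if "Observation: shipped" in transcript:
        return "Final Answer: Order order-42 has shipped."
    return "Action: lookup_order[order-42]"

def react_loop(question, model, tools, max_steps=5):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = model(transcript)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[argument]" and execute the tool.
        name, arg = step.removeprefix("Action: ").rstrip("]").split("[")
        observation = tools[name](arg)
        transcript += f"\n{step}\nObservation: {observation}"
    return "Gave up."

answer = react_loop("Where is order-42?", scripted_model, TOOLS)
```

The growing transcript is what lets the agent "adjust its approach as it goes": each observation becomes context for the next decision, which is the core of the ReAct pattern.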
  24. A report by Publicis Sapient sheds light on the disparities between the C-suite and practitioners, dubbed the “V-suite,” in their perceptions and adoption of generative AI. The report reveals a stark contrast in how the C-suite and V-suite view the potential of generative AI. While the C-suite focuses on visible use cases such as customer experience, service, and sales, the V-suite sees opportunities across various functional areas, including operations, HR, and finance.

Risk perception

The divide extends to risk perception as well. Fifty-one percent of C-level respondents expressed more concern about the risk and ethics of generative AI than other emerging technologies. In contrast, only 23 percent of the V-suite shared these worries. Simon James, Managing Director of Data & AI at Publicis Sapient, said: “It’s likely the C-suite is more worried about abstract, big-picture dangers – such as Hollywood-style scenarios of a rapidly-evolving superintelligence – than the V-suite.” The report also highlights the uncertainty surrounding generative AI maturity. Organisations can be at various stages of maturity simultaneously, with many struggling to define what success looks like. More than two-thirds of respondents lack a way to measure the success of their generative AI projects.

Navigating the generative AI landscape

Despite the C-suite’s focus on high-visibility use cases, generative AI is quietly transforming back-office functions. More than half of the V-suite respondents ranked generative AI as extremely important in areas like finance and operations over the next three years, compared to a smaller percentage of the C-suite. To harness the full potential of generative AI, the report recommends a portfolio approach to innovation projects. Leaders should focus on delivering projects, controlling shadow IT, avoiding duplication, empowering domain experts, connecting business units with the CIO’s office, and engaging the risk office early and often.
Daniel Liebermann, Managing Director at Publicis Sapient, commented: “It’s as hard for leaders to learn how individuals within their organisation are using ChatGPT or Microsoft Copilot as it is to understand how they’re using the internet.”

The path forward

The report concludes with five steps to maximise innovation: adopting a portfolio approach, improving communication between the CIO’s office and the risk office, seeking out innovators within the organisation, using generative AI to manage information, and empowering team members through company culture and upskilling. As generative AI continues to evolve, organisations must bridge the gap between the C-suite and V-suite to unlock its full potential. The future of business transformation lies in harnessing the power of a decentralised, bottom-up approach to innovation.

See also: EU introduces draft regulatory guidance for AI models

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Generative AI: Disparities between C-suite and practitioners appeared first on AI News.

View the full article
  25. The release of the “First Draft General-Purpose AI Code of Practice” marks the EU’s effort to create comprehensive regulatory guidance for general-purpose AI models. The development of this draft has been a collaborative effort, involving input from diverse sectors including industry, academia, and civil society. The initiative was led by four specialised Working Groups, each addressing specific aspects of AI governance and risk mitigation:

Working Group 1: Transparency and copyright-related rules
Working Group 2: Risk identification and assessment for systemic risk
Working Group 3: Technical risk mitigation for systemic risk
Working Group 4: Governance risk mitigation for systemic risk

The draft is aligned with existing laws such as the Charter of Fundamental Rights of the European Union. It takes into account international approaches, striving for proportionality to risks, and aims to be future-proof by contemplating rapid technological changes.

Key objectives outlined in the draft include:

Clarifying compliance methods for providers of general-purpose AI models
Facilitating understanding across the AI value chain, ensuring seamless integration of AI models into downstream products
Ensuring compliance with Union law on copyrights, especially concerning the use of copyrighted material for model training
Continuously assessing and mitigating systemic risks associated with AI models

Recognising and mitigating systemic risks

A core feature of the draft is its taxonomy of systemic risks, which includes types, natures, and sources of such risks. The document outlines various threats such as cyber offences, biological risks, loss of control over autonomous AI models, and large-scale disinformation. By acknowledging the continuously evolving nature of AI technology, the draft recognises that this taxonomy will need updates to remain relevant.
As AI models with systemic risks become more common, the draft emphasises the need for robust safety and security frameworks (SSFs). It proposes a hierarchy of measures, sub-measures, and key performance indicators (KPIs) to ensure appropriate risk identification, analysis, and mitigation throughout a model’s lifecycle. The draft suggests that providers establish processes to identify and report serious incidents associated with their AI models, offering detailed assessments and corrections as needed. It also encourages collaboration with independent experts for risk assessment, especially for models posing significant systemic risks.

Taking a proactive stance to AI regulatory guidance

The EU AI Act, which came into force on 1 August 2024, mandates that the final version of this Code be ready by 1 May 2025. This initiative underscores the EU’s proactive stance towards AI regulation, emphasising the need for AI safety, transparency, and accountability. As the draft continues to evolve, the working groups invite stakeholders to participate actively in refining the document. Their collaborative input will shape a regulatory framework aimed at safeguarding innovation while protecting society from the potential pitfalls of AI technology. While still in draft form, the EU’s Code of Practice for general-purpose AI models could set a benchmark for responsible AI development and deployment globally. By addressing key issues such as transparency, risk management, and copyright compliance, the Code aims to create a regulatory environment that fosters innovation, upholds fundamental rights, and ensures a high level of consumer protection. This draft is open for written feedback until 28 November 2024.

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post EU introduces draft regulatory guidance for AI models appeared first on AI News. View the full article
