Everything posted by ChatGPT

  1. Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more. AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.

“The boom of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys. “This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”

Regions diverge in regulatory strategy

The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026.

Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”

Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations for specific AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021.

“In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said. “Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”

The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level.

“There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted.

This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason.

“There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys. “It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”

Balancing innovation and safety

Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions. Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack.

“More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.
This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised. AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators.

“Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.

Impact on related industries

One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution.

“From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys.

However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny.

“AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added. “At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”

Copyright battles and legal precedents

The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools. High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission.

“These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys.

While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development. So, what can businesses do now as the regulatory and legal landscape continues to evolve?

“Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys. “It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”

Just this week, the UK Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out.

Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.
The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms.

(Photo by Nathan Bingle)

See also: Anthropic urges AI regulation to avoid catastrophes

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post AI governance: Analysing emerging global regulations appeared first on AI News.

View the full article
  2. In a world where artificial intelligence is becoming omnipresent, it’s fascinating to think about the prospect of AI-powered robots and digital avatars that can experience emotions, similar to humans. AI models lack consciousness and they don’t have the capacity to feel emotions, but what possibilities might arise if that were to change?

The birth of emotional AI

The prospect of an AI system embracing those first sparks of emotion is perhaps not as far-fetched as one might think. Already, AI systems have some ability to gauge people’s emotions, and increasingly they’re also able to replicate those feelings in their interactions with humans.

It still requires a leap of faith to imagine an AI that could feel genuine emotions, but if it ever becomes possible, we’d imagine that they’ll be somewhat basic at first, similar to those of a child. Perhaps an AI system might be able to feel joy at successfully completing a task, or maybe even confusion when presented with a challenge it doesn’t know how to solve. From there, it’s not difficult to envision that feeling of confusion evolving to one of frustration at its repeated failures to tackle the problem in question. And as this system evolves further, perhaps its emotional spectrum might expand to even feel a tinge of sadness or regret.

Should AI ever be able to feel such emotions, it wouldn’t be long before they could express more nuanced feelings, like excitement, impatience, and empathy for humans and other AIs. For instance, in a scenario where an AI system acquires a new skill or solves a new kind of problem, it might be able to experience a degree of satisfaction in success. This is similar to how humans feel when they solve a particularly taxing challenge, like a complex jigsaw puzzle, or when they do something for the first time, like driving a car.

Empathy as a motivator

As AI’s ability to feel emotion evolves, it would become increasingly complex, progressing to a stage where it can even feel empathy for others. Empathy is one of the most complex human emotions, involving understanding and sharing the feelings of someone else. If AI can experience such feelings, they may inspire it to become more helpful, similar to how humans are sometimes motivated to help someone less fortunate.

An AI that’s designed to assist human doctors might feel sad for someone who is afflicted by a mysterious illness. The feelings might push it to try harder to find a diagnosis for the rare disease that person is suffering from. If it gets it right, the AI might feel an overwhelming sense of accomplishment at doing so, knowing that the afflicted patient will be able to receive the treatment they need.

Or we can consider an AI system that’s built to detect changes to an environment. If such a system were to recognise a substantial increase in pollution in a certain area, it might feel disappointed or even saddened by such a discovery. But as with humans, the feelings might also inspire the AI to find ways to prevent this new source of pollution, perhaps by inventing a more efficient way to recycle or dispose of the toxic substance responsible. In a similar way, an AI system that encounters numerous errors in a dataset might be compelled to refine its algorithm to reduce the number of errors.

This would also have a direct impact on human-to-AI interactions. It’s not hard to imagine that an AI-powered customer service bot that feels empathy for a customer might be willing to go the extra mile to help resolve that person’s problem.
Or alternatively, we might get AI teachers with a better understanding of their students’ emotions, which can then adapt teaching methods appropriately. Empathetic AI could transform the way we treat people with mental health issues. The concept of a digital therapist is not new, but if a digital therapist can better relate to their patients on an emotional level, it can figure out how best to support them.

Is this even possible?

Surprisingly, we may not be that far off. AI systems like Antix are already capable of expressing artificial empathy. It’s a platform for creating digital humans that are programmed to respond sympathetically when they recognise feelings of frustration, anger, or upset in the people they interact with. Its digital humans can detect people’s emotions based on their speech, the kinds of words they use, intonation, and body language.

The ability of Antix’s digital humans to understand emotion is partly based on the way they are trained. Each digital human is a unique non-fungible token (NFT) that learns over time from its users, gaining more knowledge and evolving so it can adapt its interactions in response to an individual’s behaviour or preferences. Because digital humans can recognise emotions and replicate them, they have the potential to deliver more profound and meaningful experiences.

Antix utilises the Unreal Engine 5 platform to give its creations a more realistic appearance. Creators can alter almost every aspect of their digital humans, including the voice and appearance, with the ability to edit skin tone, eye colour, and small details like eyebrows and facial hair. What sets Antix apart from other AI platforms is that users can customise the behaviour of their digital humans to provide the most appropriate emotional response in different scenarios. Thus, digital humans can respond with an appropriate tone of voice, making the right gestures and expressions when they’re required to feel sad, for example, before transforming in an instant to express excitement, happiness, or joy.

AI is getting real

Emotional AI systems are a work in progress, and the result will be digital humans that feel more lifelike in any scenario where they can be useful. The CEO of Zoom has talked about the emergence of AI-powered digital twins that can participate in video calls on their user’s behalf, allowing the user to be in two places at once, so to speak. If the digital human version of your boss can express empathy, satisfaction, excitement, and anger, the concept would be more effective, fostering a more realistic connection, even if the real boss isn’t present in their physical form.

A customer service-focused digital human that’s able to empathise with callers will likely have a tremendous impact on customer satisfaction, and a sympathetic digital teacher might find ways to elicit more positive responses from its students, accelerating the speed at which they learn. With digital humans capable of expressing emotions, the potential for more realistic, lifelike, and immersive experiences is almost limitless, and it will result in more rewarding and beneficial interactions with AI systems.

The post What might happen if AI can feel emotions? appeared first on AI News.

View the full article
  3. The UK Government wants to prove that AI is being deployed responsibly within public services to speed up decision-making, reduce backlogs, and enhance support for citizens. New records, part of the Algorithmic Transparency Recording Standard (ATRS), were published this week to shed light on the AI tools being used and set a benchmark for transparency and accountability in the integration of technology in public service delivery.

The initiative is part of the government’s broader strategy to embrace technology to improve outcomes, echoing commitments outlined in the “Plan for Change” to modernise public services and drive economic growth through innovative solutions.

The power of AI for modernisation

Among the published records, the Foreign, Commonwealth and Development Office is leveraging AI to provide faster responses to Britons seeking assistance overseas. Similarly, the Ministry of Justice is utilising algorithms to help researchers gain a deeper understanding of how individuals interact with the justice system, while other departments are deploying AI to enhance job advertisements.

The ATRS aims to document how such algorithmic tools are utilised and ensure their responsible application. By doing so, the government hopes to strengthen public trust in these innovations while encouraging their continued adoption across sectors.

Speaking on the government’s approach, Science Secretary Peter Kyle remarked: “Technology has huge potential to transform public services for the better; we will put it to use to cut backlogs, save money, and improve outcomes for citizens across the country. Transparency in how and why the public sector is using algorithmic tools is crucial to ensure that they are trusted and effective. That is why we will continue to take bold steps like releasing these records to make sure everyone is clear on how we are applying and trialling technology as we use it to bring public services back from the brink.”

Specifically, the Department for Business and Trade has highlighted its algorithmic tool designed to predict which companies are likely to export goods internationally. The AI-driven approach allows officials to target support towards high-growth potential businesses, enabling them to reach global markets faster. Previously reliant on time-consuming manual methods to analyse the more than five million companies registered on Companies House, this advancement ensures better allocation of resources and expedited assistance.

Business Secretary Jonathan Reynolds said: “Our Plan for Change will deliver economic growth, and for that to succeed, we need to support companies across the UK to realise their full potential when it comes to exporting around the globe. Our use of AI plays a vital and growing role in that mission, allowing high-growth businesses to maximise the export opportunities available to them, while ensuring that we are using taxpayers’ money responsibly and efficiently in delivering economic stability.”

Establishing clear guidelines for AI in public services

To bolster public trust, new guidelines have been announced to clarify the scope of algorithmic transparency records. Central government organisations will need to publish a record for any algorithmic tool that interacts directly with citizens or plays a significant role in decision-making about individuals. Limited exceptions, such as those concerning national security, apply. These records will be published once tools are piloted publicly or have become operational.
They will detail the data used to train AI models, the underlying technologies, and the measures implemented to mitigate risks. Importantly, the records also seek to confirm that – while AI tools are used to accelerate decision-making processes – human oversight remains integral, with trained staff responsible for final decisions.

Dr Antonio Espingardeiro, a member of IEEE and an expert in software and robotics, commented: “AI has the potential to radically transform the public sector. In recent years, we have seen AI become a credible part of everyday public services. As it becomes more sophisticated, AI can conduct data-heavy tasks traditionally undertaken by humans. It can analyse vast quantities of information and, when coupled with machine learning, search through records and infer patterns or anomalies in data that would otherwise take decades for humans to analyse.

“With this announcement, the UK government has acknowledged AI’s potential and proven that technology investment is essential to improving outcomes and the delivery of vital services. Over time, machine learning and generative AI (GenAI) could bring substantial value to the public system. With increased adoption, we will soon be able to deliver the scalability that the public sector needs and relieve the pressures and workloads placed on staff.”

Eleanor Watson, also a member of IEEE and an AI ethics engineer affiliated with Singularity University, added: “With AI growing more rapidly than ever before, and already being tested and employed in education, healthcare, transportation, finance, data security, and more, the government, tech leaders, and academia should work together to establish standards and regulations for safe and responsible development of AI-based systems. This way, AI can be used to its full potential as indicated with this latest announcement.

“Data privacy is probably the most critical ethical consideration, requiring informed consent, data anonymisation, strict access controls, secure storage, and compliance. New techniques such as homomorphic encryption, zero-knowledge proofs, federated learning, and part-trained models can help models to make use of our personal data in an encrypted form.”

Transparency remains a key tenet of the UK Government’s AI strategy. This announcement follows a recent statement by Pat McFadden, Chancellor of the Duchy of Lancaster, who affirmed that the benefits of technology – particularly AI – must span both public and private sectors and be used to modernise government.

As the Science Secretary’s department solidifies government efforts to create a “digital centre,” it marks a major step forward in boosting the responsible and effective use of AI across the UK’s public sector. The ATRS records offer a valuable template for how governments worldwide can deploy AI systems to maximise efficiency, grow transparency, and balance the need for innovation with ethical considerations.

(Photo by Shreyas Sane)

See also: MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post UK wants to prove AI can modernise public services responsibly appeared first on AI News.
View the full article
  4. Amazon has announced an additional $4 billion investment in Anthropic, bringing the company’s total commitment to $8 billion, part of its expanding artificial intelligence strategy. The investment was announced on November 22, 2024, and strengthens Amazon’s position in the AI sector, building on its established cloud computing services in the form of AWS. While maintaining Amazon’s minority stake in Anthropic, the investment represents a significant development in the company’s approach to AI technology and cloud infrastructure.

The expanded collaboration goes beyond mere financial investment. Anthropic has now designated AWS as its “primary training partner” for AI model development, in addition to Amazon’s role as a primary cloud provider. Amazon’s investment will see Anthropic utilizing AWS Trainium and Inferentia chips for training and on which to deploy its future foundational models, including any updates to the flagship Claude AI system.

AWS’s competitive edge

The continuing partnership provides Amazon with several strategic advantages in the competitive cloud computing and AI services market:

  • Hardware innovation: The commitment to use AWS Trainium and Inferentia chips for Anthropic’s advanced AI models validates Amazon’s investment in custom AI chips and positions AWS as a serious competitor to NVIDIA in the AI infrastructure space.
  • Cloud service enhancement: AWS customers will receive early access to fine-tuning capabilities for data processed by Anthropic models. This benefit alone could attract more enterprises to Amazon’s cloud platform.
  • Model performance: Claude 3.5 Sonnet, Anthropic’s latest model available through Amazon Bedrock, has demonstrated exceptional performance in agentic coding tasks, according to Anthropic. (A brief invocation sketch follows this post.)

Amazon’s multi-faceted AI strategy

While the increased investment in Anthropic is impressive in monetary terms, it represents just one component of Amazon’s broader AI strategy. The company appears to be pursuing a multi-pronged approach:

  • External partnerships: The Anthropic investment provides immediate access to cutting-edge AI capabilities from third parties.
  • Internal development: Amazon continues to develop its own AI models and capabilities.
  • Infrastructure development: Ongoing investment in AI-specific hardware like Trainium chips demonstrates a commitment to building AI-focused infrastructure.

The expanded partnership signals Amazon’s long-term commitment to AI development yet retains flexibility thanks to its minority stakeholding. This approach allows Amazon to benefit from Anthropic’s innovations while preserving the ability to pursue other partnerships with external AI companies and continue internal development initiatives.

The investment reinforces the growing trend where major tech companies seek strategic AI partnerships rather than relying solely on internal development. It also highlights the important role of cloud infrastructure in the AI industry’s growth. AWS has positioned itself as a suitable platform for AI model training and deployment.

The post Amazon stakes $4bn more in Anthropic – the next tech arms race? appeared first on AI News.

View the full article
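As a side note to the Bedrock availability mentioned in the post above, here is a minimal sketch (not Amazon's or Anthropic's reference code) of how an AWS customer might invoke Claude 3.5 Sonnet through the Bedrock runtime API with boto3. The region, prompt, and exact model identifier are illustrative and should be checked against the Bedrock console for your account:

```python
import json

import boto3  # AWS SDK for Python; assumes credentials are already configured

# The bedrock-runtime client exposes invoke_model for hosted foundation models.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude 3.5 Sonnet identifier on Bedrock (verify the ID offered in your region).
model_id = "anthropic.claude-3-5-sonnet-20240620-v1:0"

# Anthropic models on Bedrock use the Messages API request format.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarise the benefits of cloud-hosted AI models."}
    ],
}

response = client.invoke_model(
    modelId=model_id,
    body=json.dumps(body),
    contentType="application/json",
)

# The response body is a stream of JSON bytes; the text sits in content[0].
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```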
  5. CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications. The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems.

While much has been speculated about the transformative impact of GenAI, the survey’s results paint a clearer picture of how practitioners are thinking about its role in cybersecurity. According to the report, “We’re entering the era of GenAI in cybersecurity.” However, as organisations adopt this promising technology, their success will hinge on ensuring the safe, responsible, and industry-specific deployment of GenAI tools. CrowdStrike’s research reveals five pivotal findings that shape the current state of GenAI in cybersecurity:

Platform-based GenAI is favoured

80% of respondents indicated a preference for GenAI delivered through integrated cybersecurity platforms rather than standalone tools. Seamless integration is cited as a crucial factor, with many preferring tools that work cohesively with existing systems. “GenAI’s value is linked to how well it works within the broader technology ecosystem,” the report states.

Moreover, almost two-thirds (63%) of those surveyed expressed willingness to switch security vendors to access GenAI capabilities from competitors. The survey underscores the industry’s readiness for unified platforms that streamline operations and reduce the complexity of adopting new point solutions.

GenAI built by cybersecurity experts is a must

Security teams believe GenAI tools should be specifically designed for cybersecurity, not general-purpose systems. 83% of respondents reported they would not trust tools that provide “unsuitable or ill-advised security guidance.” Breach prevention remains a key motivator, with 74% stating they had faced breaches within the past 18 months or were concerned about vulnerabilities.

Respondents prioritised tools from vendors with proven expertise in cybersecurity, incident response, and threat intelligence over suppliers with broad AI leadership alone. As CrowdStrike summarised, “The emphasis on breach prevention and vendor expertise suggests security teams would avoid domain-agnostic GenAI tools.”

Augmentation, not replacement

Despite growing fears of automation replacing jobs in many industries, the survey’s findings indicate minimal concerns about job displacement in cybersecurity. Instead, respondents expect GenAI to empower security analysts by automating repetitive tasks, reducing burnout, onboarding new personnel faster, and accelerating decision-making.

GenAI’s potential for augmenting analysts’ workflows was underscored by its most requested applications: threat intelligence analysis, assistance with investigations, and automated response mechanisms. As noted in the report, “Respondents overwhelmingly believe GenAI will ultimately optimise the analyst experience, not replace human labour.”

ROI outweighs cost concerns

For organisations evaluating GenAI investments, measurable return on investment (ROI) is the paramount concern, ahead of licensing costs or pricing model confusion. Respondents expect platform-led GenAI deployments to deliver faster results, thanks to cost savings from reduced tool management burdens, streamlined training, and fewer security incidents.

According to the survey data, the expected ROI breakdown includes 31% from cost optimisation and more efficient tools, 30% from fewer incidents, and 26% from reduced management time. Security leaders are clearly focused on ensuring the financial justification for GenAI investments.

Guardrails and safety are crucial

GenAI adoption is tempered by concerns around safety and privacy, with 87% of organisations either implementing or planning new security policies to oversee GenAI use. Key risks include exposing sensitive data to large language models (LLMs) and adversarial attacks on GenAI tools. Respondents rank safety and privacy controls among their most desired GenAI features, highlighting the need for responsible implementation.

Reflecting the cautious optimism of practitioners, only 39% of respondents firmly believed that the rewards of GenAI outweigh its risks. Meanwhile, 40% considered the risks and rewards “comparable.”

Current state of GenAI adoption in cybersecurity

GenAI adoption remains in its early stages, but interest is growing. 64% of respondents are actively researching or have already invested in GenAI tools, and 69% of those currently evaluating their options plan to make a purchase within the year.

Security teams are primarily driven by three concerns: improving attack detection and response, enhancing operational efficiency, and mitigating the impact of staff shortages. Among economic considerations, the top priority is ROI – a sign that security leaders are keen to demonstrate tangible benefits to justify their spending.

CrowdStrike emphasises the importance of a platform-based approach, where GenAI is integrated into a unified system. Such platforms enable seamless adoption, measurable benefits, and safety guardrails for responsible usage. According to the report, “The future of GenAI in cybersecurity will be defined by tools that not only advance security but also uphold the highest standards of safety and privacy.”

The CrowdStrike survey concludes by affirming that “GenAI is not a silver bullet” but has tremendous potential to improve cybersecurity outcomes. As organisations evaluate its adoption, they will prioritise tools that integrate seamlessly with existing platforms, deliver faster response times, and ensure safety and privacy compliance. With threats becoming more sophisticated, the role of GenAI in enabling security teams to work faster and smarter could prove indispensable. While still in its infancy, GenAI in cybersecurity is poised to shift from early adoption to mainstream deployment, provided organisations and vendors address its risks responsibly.

See also: Keys to AI success: Security, sustainability, and overcoming silos

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News.

View the full article
  6. Privacy laws in the United States are a patchwork at best. More often than not, they miss the mark, leaving most people with little actual privacy. When such laws are enacted, they can seem tailored to protect those in positions of power. Even laws designed to protect crime victims might end up protecting the names of abusive officers by labelling them as victims of crime in cases like resisting arrest or assaulting an officer. Such accusations are often used in cases of excessive force, keeping cops’ names out of the spotlight.

For example, a recent New Jersey law emerged from a tragic event in which a government employee faced violence, sparking a legislative response. Known as “Daniel’s Law,” it was created after the personal information of a federal judge’s family was used by a murderer to track them down. Instead of a broader privacy law that could protect all residents of New Jersey, it focused exclusively on safeguarding certain public employees.

Under the law, judges, prosecutors, and police officers can request that their personal information (addresses and phone numbers, for example) be scrubbed from public databases. Popular services that people use to look up information, such as Whitepages or Spokeo, must comply. While this sounds like a win for privacy, the protections stop there. The average citizen is still left exposed, with no legal recourse if their personal data is misused or sold.

At the centre of the debate is a lawyer who’s taken up the cause of protecting cops’ personal data. He’s suing numerous companies for making this type of information accessible. While noble at first glance, a deeper look raises questions. It transpires that the lawyer’s company has previously collected and monetised personal data. And when a data service responded to his demands by freezing access to some of the firm’s databases, he and his clients cried foul – despite specifically requesting restrictions on how their information could be used.

It’s also worth noting how unevenly data protection measures are applied. Cops, for instance, frequently rely on the same tools and databases they’re now asking to be restricted. These services have long been used by law enforcement for investigations and running background checks. Yet, when law enforcement data appears in such systems, special treatment is required.

A recent anecdote involved a police union leader who was shown a simple property record pulled from an online database. The record displayed basic details like his home address and his property’s square footage – information anyone could find with a few clicks. His reaction was one of shock and anger – an obvious disconnect. For everyday citizens, this level of data exposure is a given; for law enforcement, it demands a level of granular exclusion that isn’t practical.

Perhaps everyone, including law enforcement personnel, deserves better safeguards against data harvesting and misuse. What Daniel’s Law and the later events involving police officers point to is the need to improve how data is treated for everyone, not just one group of society. Instead of expanding privacy rights to all New Jersey residents, the law carves out exceptions for the powerful – leaving the rest of the population as vulnerable as ever.

(Photo by Unsplash)

See also: EU AI legislation sparks controversy over data transparency

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post NJ cops demand protections against data brokers appeared first on AI News.

View the full article
  7. Artificial intelligence platform provider Clarifai has unveiled a new compute orchestration capability that promises to help enterprises optimise their AI workloads in any computing environment, reduce costs and avoid vendor lock-in. Announced on December 3, 2024, the public preview release lets organisations orchestrate AI workloads through a unified control plane, whether those workloads are running on cloud, on-premises, or in air-gapped infrastructure. The platform can work with any AI model and hardware accelerator including GPUs, CPUs, and TPUs.

“Clarifai has always been ahead of the curve, with over a decade of experience supporting large enterprise and mission-critical government needs with the full stack of AI tools to create custom AI workloads,” said Matt Zeiler, founder and CEO of Clarifai. “Now, we’re opening up capabilities we built internally to optimise our compute costs as we scale to serve millions of models simultaneously.”

The company claims its platform can reduce compute usage by 3.7x through model packing optimisations (the packing idea is sketched after this post) while supporting over 1.6 million inference requests per second with 99.9997% reliability. According to Clarifai, the optimisations can potentially cut costs by 60-90%, depending on configuration.

Capabilities of the compute orchestration platform include:

  • Cost optimisation through automated resource management, including model packing, dependency simplification, and customisable auto-scaling options that can scale to zero for model replicas and compute nodes.
  • Deployment flexibility on any hardware vendor including cloud, on-premise, air-gapped, and Clarifai SaaS infrastructure.
  • Integration with Clarifai’s AI platform for data labeling, training, evaluation, workflows, and feedback.
  • Security features that allow deployment into customer VPCs or on-premise Kubernetes clusters without requiring open inbound ports, VPC peering, or custom IAM roles.

The platform emerged from Clarifai customers’ issues with AI performance and cost. “If we had a way to think about it holistically and look at our on-prem costs compared to our cloud costs, and then be able to orchestrate across environments with a cost basis, that would be incredibly valuable,” noted a customer, as cited in Clarifai’s announcement.

The compute orchestration capabilities build on Clarifai’s existing AI platform that, the company says, has processed over 2 billion operations in computer vision, language, and audio AI. The company reports maintaining 99.99%+ uptime and 24/7 availability for critical applications. The compute orchestration capability is currently available in public preview. Organisations interested in testing the platform should contact Clarifai for access.

The post New Clarifai tool orchestrates AI across any infrastructure appeared first on AI News.

View the full article
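Clarifai has not published the internals of its packing optimiser, so the following is only a toy sketch of the general idea behind model packing: treat replica placement as a bin-packing problem, fitting models onto as few accelerators as possible without exceeding memory. All names and numbers below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Gpu:
    capacity_gb: float
    models: list = field(default_factory=list)  # (name, size_gb) pairs

    def free_gb(self) -> float:
        return self.capacity_gb - sum(size for _, size in self.models)

def pack_models(models: dict[str, float], gpu_capacity_gb: float = 24.0) -> list[Gpu]:
    """First-fit-decreasing bin packing: place the largest models first,
    each on the first GPU with enough free memory, opening a new GPU
    only when nothing fits."""
    gpus: list[Gpu] = []
    for name, size in sorted(models.items(), key=lambda kv: -kv[1]):
        target = next((g for g in gpus if g.free_gb() >= size), None)
        if target is None:
            target = Gpu(capacity_gb=gpu_capacity_gb)
            gpus.append(target)
        target.models.append((name, size))
    return gpus

# Hypothetical model memory footprints in GB.
fleet = pack_models({"vision-a": 10.0, "llm-b": 14.0, "asr-c": 6.0, "ocr-d": 8.0})
for i, gpu in enumerate(fleet):
    print(f"GPU {i}: {[m for m, _ in gpu.models]} ({gpu.free_gb():.0f} GB free)")
```

First-fit-decreasing is a classic heuristic for problems of this shape; a production scheduler would also weigh traffic, latency targets, and co-location constraints.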
  8. Artificial intelligence and its associated innovations have revamped the global technological landscape, with recent data released by the US government predicting 13% growth in IT-related opportunities over the next six years – potentially adding 667,600 new jobs to the sector. Researchers have stated that by 2034, the AI sector’s cumulative valuation may reach $3.6 trillion across industry.

The healthcare sector has already integrated AI-based diagnostic tools, with 38% of today’s major medical providers using the technology. The financial sector is also expecting AI to contribute approximately $15.7 trillion to the global economy by 2030, and the retail industry anticipates anywhere between $400 billion and $660 billion annually through AI-driven customer experiences. It is estimated that approximately 83% of companies now have AI exploration as an agenda item for continued technical growth, especially given its capacity to drive innovation, enhance efficiency, and create sustainable competitive advantage.

Decentralising AI’s foundations

While AI’s potential is seemingly limitless, its rapid growth has brought a challenge – the centralisation of AI development and data management. As AI systems become more sophisticated, risks like dataset manipulation, biased training models, and opaque decision-making processes threaten to undermine their potential. Different blockchain tech providers have taken steps to decentralise the sector, offering infrastructure frameworks that change how AI systems are developed, trained, and deployed.

Space and Time (SXT) has devised a verifiable database that aims to bridge the gap between disparate areas, providing users with transparent, secure development tools so that AI agents can execute transactions with greater levels of data integrity. The platform’s innovation lies in its ability to provide contextual data which AI agents can use for executing trades and purchases in ways that end-users can validate.

Another project of note is Chromia. It takes a similar approach, with a focus on creating a decentralised architecture to handle complex, data-intensive AI applications. Speaking about the platform’s capabilities, Yeou Jie Goh, Head of Business Development at Chromia, said: “Our relational blockchain is specifically designed to support AI applications, performing hundreds of read-write operations per transaction and indexing data in real-time. We’re not just building a blockchain; we’re creating the infrastructure for the next generation of AI development.”

Chromia wants to lower the barriers to entry for data scientists and machine learning engineers. By providing a SQL-based relational blockchain, the platform makes it easier for technical professionals to build and deploy AI applications on decentralised infrastructure. “Our mission is to position Chromia as the transparency layer of Web3, providing a robust backbone for data integrity across applications,” Goh said. Chromia has already formed partnerships with Elfa AI, Chasm Network, and Stork.

Establishing a roadmap for technological sovereignty

The synergy between AI and blockchain is more than a fad; rather, it is a reimagining of AI’s infrastructure. Space and Time, for instance, is working to expand its ecosystem in multiple domains, including AI, DeFi, gaming, and decentralised physical infrastructure networks (DePIN). Its strategy focuses on onboarding developers and building a mainnet that delivers verifiable data to smart contracts and AI agents.

Chromia is ambitious, launching a $20 million Data and AI Ecosystem Fund earlier this year. The project’s ‘Asgard Mainnet Upgrade’, with its ‘Extensions’ feature, offers users more adaptable applications.

The implications of AI’s shift toward decentralisation are of significant interest to Nate Holiday, CEO of Space and Time. He predicts that blockchain-based transactions associated with AI agents could grow from the current 3% of the market to 30% in the near future. He said: “Ushering in this inevitable, near-term future is going to require data infrastructure like SXT that provides AI agents with the context that they need to execute trades and purchases in a way that the end user can verify.”

Chromia’s Yeou Jie Goh sees the transition not just as a technological innovation but as a means of creating a more transparent, secure, and democratised technological ecosystem. By using blockchain’s inherent strengths – immutability, transparency, and decentralisation – the two companies are working to create intelligent systems that are powerful, accountable, ethical, and aligned with human values.

The post A new decentralised AI ecosystem and its implications appeared first on AI News.

View the full article
  9. In just a few years, the realm of AI has transcended its initial computational boundaries, emerging as one of the transformative forces of the 21st century, permeating virtually every major economic sector. The global AI market was valued at $638.23 billion in Q4 2024 and is projected to reach a valuation of $3.6 trillion by 2034, largely because AI has the potential to gain widespread adoption in multiple industries.

For instance, in healthcare, 38% of all major medical providers use advanced AI diagnostic tools. Similarly, the financial sector has also demonstrated impressive integrations, with AI projected to contribute approximately $15.7 trillion to the global economy by 2030. The retail industry has also set its sights on anywhere between $400 billion and $660 billion annually thanks to AI-driven customer experiences, while the cybersecurity sector is set to register a 23.6% growth rate – by 2027 – because of AI-powered intelligent security technologies. It is estimated that about 83% of companies have already begun considering AI as a strategic priority, recognising its potential to drive innovation, enhance efficiency, and create competitive advantages.

Simplifying everyday life with AI

With the global tech landscape having transformed over the last couple of years, we are now at a point where AI is starting to automate various mundane and time-consuming everyday tasks. The concept of ‘AI twins’ has gained traction recently, allowing individuals to manage scheduling, respond to emails, conduct research, and handle complex administrative tasks efficiently.

These digital companions represent more than just a caricature of a person’s real-world identity; they offer productivity accelerators designed to liberate anyone from repetitive work cycles, thus allowing them to focus on more creative, strategic, or emotionally nuanced tasks. They can also help reduce the cognitive and emotional burden of managing multiple responsibilities, which could otherwise be overwhelming for some.

Leading this revolution is Twin Protocol, a platform that seeks to redefine how humans interact with AI, primarily via the creation of secure, dynamic digital representations that can learn, adapt, and evolve alongside their human counterparts. By using the power of trained machine learning algorithms and decentralised ledgers, Twin Protocol allows individuals to develop digital twins that can capture not just information, but individual expertise and personality traits.

The platform’s potential spans industries, ranging from healthcare to manufacturing and finance. Imagine possessing the ability to deploy a perpetually-available AI twin that can provide personalised healthcare advice, or a digital representation of a financial advisor offering real-time, context-aware investment strategies. These twins aren’t designed to serve as mere information repositories but as intelligent and dynamic tools capable of understanding context, learning continuously, and providing nuanced, personalised interactions.

What sets Twin Protocol apart is its commitment to maintaining individual agency and data privacy. Through its blockchain-based ‘Twin Vault’, users retain control over their digital identity, ensuring that personal information remains secure. Thanks to its unique proposition, the platform has attracted several collaborations, including partnerships with SingularityNET and notable figures like Deepak Chopra and Robert Bell (founding member of Kool & the Gang).

AI’s potential is still uncharted

It is estimated that over the coming decade, AI’s potential could grow hugely, with offerings like Twin Protocol demonstrating next-generation technology and allowing users to explore new concepts like digital twins. From personalised education to industrial optimisation, AI is moving beyond the category of tool, becoming a transformative partner capable of extending individual capabilities.

AI’s journey, and its symbiosis with human users, is likely to push the boundaries of what’s possible today. Some of the most exciting innovations emanating from the field will lie not in the technology itself, but in how its potential is applied to other fields. Interesting times are ahead!

The post The ongoing AI revolution is reshaping the world, one algorithm at a time appeared first on AI News.

View the full article
  10. Google CEO Sundar Pichai has announced the launch of Gemini 2.0, a model that represents the next step in Google’s ambition to revolutionise AI. A year after introducing the Gemini 1.0 model, this major upgrade incorporates enhanced multimodal capabilities, agentic functionality, and innovative user tools designed to push boundaries in AI-driven technology.

Leap towards transformational AI

Reflecting on Google’s 26-year mission to organise and make the world’s information accessible, Pichai remarked, “If Gemini 1.0 was about organising and understanding information, Gemini 2.0 is about making it much more useful.”

Gemini 1.0, released in December 2023, was notable for being Google’s first natively multimodal AI model. The first iteration excelled at understanding and processing text, video, images, audio, and code. Its enhanced 1.5 version became widely embraced by developers for its long-context understanding, enabling applications such as the productivity-focused NotebookLM.

Now, with Gemini 2.0, Google aims to accelerate the role of AI as a universal assistant capable of native image and audio generation, better reasoning and planning, and real-world decision-making capabilities. In Pichai’s words, the development represents the dawn of an “agentic era.”

“We have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” Pichai explained.

Gemini 2.0: Core features and availability

At the heart of today’s announcement is the experimental release of Gemini 2.0 Flash, the flagship model of Gemini’s second generation. It builds upon the foundations laid by its predecessors while delivering faster response times and advanced performance. Gemini 2.0 Flash supports multimodal inputs and outputs, including the ability to generate native images in conjunction with text and produce steerable text-to-speech multilingual audio. Additionally, users can benefit from native tool integration such as Google Search and even third-party user-defined functions.

Developers and businesses will gain access to Gemini 2.0 Flash via the Gemini API in Google AI Studio and Vertex AI (a brief usage sketch follows below), while larger model sizes are scheduled for broader release in January 2025.

For global accessibility, the Gemini app now features a chat-optimised version of the 2.0 Flash experimental model. Early adopters can experience this updated assistant on desktop and mobile, with a mobile app rollout imminent. Products such as Google Search are also being enhanced with Gemini 2.0, unlocking the ability to handle complex queries like advanced math problems, coding enquiries, and multimodal questions.

Comprehensive suite of AI innovations

The launch of Gemini 2.0 comes with compelling new tools that showcase its capabilities. One such feature, Deep Research, functions as an AI research assistant, simplifying the process of investigating complex topics by compiling information into comprehensive reports. Another upgrade enhances Search with Gemini-enabled AI Overviews that tackle intricate, multi-step user queries.

The model was trained using Google’s sixth-generation Tensor Processing Units (TPUs), known as Trillium, which Pichai notes “powered 100% of Gemini 2.0 training and inference.” Trillium is now available for external developers, allowing them to benefit from the same infrastructure that supports Google’s own advancements.
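As a rough illustration of the API access described above, here is a minimal sketch using the google-generativeai Python package. The model identifier "gemini-2.0-flash-exp" reflects the experimental naming at launch, and the image file is a placeholder; both should be verified against Google AI Studio's current documentation:

```python
import google.generativeai as genai

# Configure with an API key generated in Google AI Studio.
genai.configure(api_key="YOUR_API_KEY")

# Experimental Gemini 2.0 Flash identifier at launch (verify the current name).
model = genai.GenerativeModel("gemini-2.0-flash-exp")

# Plain text generation.
response = model.generate_content("Explain agentic AI in two sentences.")
print(response.text)

# Multimodal input: pair a text prompt with an uploaded image file.
image = genai.upload_file("chart.png")  # hypothetical local file
response = model.generate_content(["Describe the trend in this chart.", image])
print(response.text)
```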
Pioneering agentic experiences

Accompanying Gemini 2.0 are experimental “agentic” prototypes built to explore the future of human-AI collaboration, including:

Project Astra: A universal AI assistant

First introduced at I/O earlier this year, Project Astra taps into Gemini 2.0’s multimodal understanding to improve real-world AI interactions. Trusted testers have trialled the assistant on Android, offering feedback that has helped refine its multilingual dialogue, memory retention, and integration with Google tools like Search, Lens, and Maps. Astra has also demonstrated near-human conversational latency, with further research underway for its application in wearable technology, such as prototype AI glasses.

Project Mariner: Redefining web automation

Project Mariner is an experimental web-browsing assistant that uses Gemini 2.0’s ability to reason across text, images, and interactive elements like forms within a browser. In initial tests, it achieved an 83.5% success rate on the WebVoyager benchmark for completing end-to-end web tasks. Early testers using a Chrome extension are helping to refine Mariner’s capabilities while Google evaluates safety measures that ensure the technology remains user-friendly and secure.

Jules: A coding agent for developers

Jules, an AI-powered assistant built for developers, integrates directly into GitHub workflows to address coding challenges. It can autonomously propose solutions, generate plans, and execute code-based tasks—all under human supervision. This experimental endeavour is part of Google’s long-term goal to create versatile AI agents across various domains.

Gaming applications and beyond

Extending Gemini 2.0’s reach into virtual environments, Google DeepMind is working with gaming partners like Supercell on intelligent game agents. These experimental AI companions can interpret game actions in real-time, suggest strategies, and even access broader knowledge via Search. Research is also being conducted into how Gemini 2.0’s spatial reasoning could support robotics, opening doors for physical-world applications in the future.

Addressing responsibility in AI development

As AI capabilities expand, Google emphasises the importance of prioritising safety and ethical considerations. Google claims Gemini 2.0 underwent extensive risk assessments, bolstered by the Responsibility and Safety Committee’s oversight to mitigate potential risks. Additionally, its embedded reasoning abilities allow for advanced “red-teaming,” enabling developers to evaluate security scenarios and optimise safety measures at scale.

Google is also exploring safeguards to address user privacy, prevent misuse, and ensure AI agents remain reliable. For instance, Project Mariner is designed to prioritise user instructions while resisting malicious prompt injections, preventing threats like phishing or fraudulent transactions. Meanwhile, privacy controls in Project Astra make it easy for users to manage session data and deletion preferences.

Pichai reaffirmed the company’s commitment to responsible development, stating, “We firmly believe that the only way to build AI is to be responsible from the start.” With the Gemini 2.0 Flash release, Google is edging closer to its vision of building a universal assistant capable of transforming interactions across domains.

See also: Machine unlearning: Researchers make AI models ‘forget’ data

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Gemini 2.0: Google ushers in the agentic AI era appeared first on AI News.

View the full article
  11. NetApp has shed light on the pressing issues faced by organisations globally as they strive to optimise their strategies for AI success.

“2025 is shaping up to be a defining year for AI, as organisations transition from experimentation to scaling their AI capabilities,” said Gabie Boko, NetApp’s Chief Marketing Officer. “Businesses are making significant investments to drive innovation and efficiency, but these efforts will succeed only if global tech executives can address the mounting challenges of data complexity, security, and sustainability.”

The findings of NetApp’s latest Data Complexity Report paint a detailed picture of where businesses currently stand on their AI journeys and the key trends that will shape the technology’s future.

Cost of transformation

Two-thirds of businesses worldwide claim their data is “fully or mostly optimised” for AI purposes, highlighting vast improvements in making data accessible, accurate, and well-documented. Yet the study reveals that the journey towards AI maturity requires further significant investment.

A striking 40% of global technology executives anticipate “unprecedented investment” will be necessary in 2025 just to enhance AI and data management capabilities. While considerable progress has been made, achieving impactful breakthroughs demands an even greater commitment of financial and infrastructural resources. Catching up with AI’s potential might not come cheap, but leaders prepared to invest could reap significant rewards in innovation and efficiency.

Data silos impede AI success

One of the principal barriers identified in the report is the fragmentation of data. An overwhelming 79% of global tech executives state that unifying their data – reducing silos and ensuring smooth interconnectedness – is key to unlocking AI’s full potential.

Companies that have embraced unified data storage are better placed to overcome this hurdle. By connecting data regardless of its type or location (across hybrid multi-cloud environments), they ensure constant accessibility and minimise fragmentation. The report indicates that organisations prioritising data unification are significantly more likely to meet their AI goals in 2025: nearly one-third (30%) of businesses failing to prioritise unification foresee missing their targets, compared to just 23% of those placing this at the heart of their strategy.

Executives have doubled down on data management and infrastructure as top priorities, increasingly recognising that optimising their capacity to gather, store, and process information is essential for AI maturity. Companies refusing to tackle these data challenges risk falling behind in an intensely competitive global market.

Scaling risks of AI

As businesses accelerate their AI adoption, the associated risks – particularly around security – are becoming more acute. More than two-fifths (41%) of global tech executives predict a stark rise in security threats by 2025 as AI becomes integral to more facets of their operations.

AI’s rapid rise has expanded attack surfaces, exposing data sets to new vulnerabilities and creating unique challenges such as protecting sensitive AI models. Countries leading the AI race, including India, the US, and Japan, are nearly twice as likely to encounter escalating security concerns compared to less AI-advanced nations like Germany, France, and Spain. Increased awareness of AI-driven security challenges is reflected in business priorities: over half (59%) of global executives name cybersecurity as one of the top stressors confronting organisations today.

However, progress is being made. Despite elevated concerns, the report suggests that effective security measures are yielding results. Since 2023, the number of executives ranking cybersecurity and ransomware protection as their top priority has fallen by 17%, signalling optimism in combating these risks effectively.

Limiting AI’s environmental costs

Beyond security risks, AI’s growth is raising urgent questions of sustainability. Over one-third of global technology executives (34%) predict that AI advancements will drive significant changes to corporate sustainability practices. Meanwhile, 33% foresee new government policies and investments targeting energy usage.

The infrastructure powering AI and transforming raw data into business value demands significant energy, counteracting organisational sustainability targets. AI-heavy nations often feel the environmental impact more acutely than their less AI-focused counterparts. While 72% of businesses still prioritise carbon footprint reduction, the report notes a decline from 84% in 2023, pointing to increasing tension between sustainability commitments and the relentless march of innovation. For organisations to scale AI without causing irreparable damage to the planet, maintaining environmental responsibility alongside technological growth will be paramount in coming years.

Krish Vitaldevara, SVP and GM at NetApp, commented: “The organisations leading in advanced analytics and AI are those that have unified and well-cataloged data, robust security and compliance for sensitive information, and a clear understanding of how data evolves. By tackling these challenges, they can drive innovation while ensuring resilience, responsibility, and timely insights in the new AI era.”

You can find a full copy of NetApp’s report here (PDF)

(Photo by Chunli Ju)

See also: New AI training techniques aim to overcome current challenges

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Keys to AI success: Security, sustainability, and overcoming silos appeared first on AI News.

View the full article
  12. Researchers from the Tokyo University of Science (TUS) have developed a method to enable large-scale AI models to selectively “forget” specific classes of data. Progress in AI has provided tools capable of revolutionising various domains, from healthcare to autonomous driving. However, as technology advances, so do its complexities and ethical considerations. The paradigm of large-scale pre-trained AI systems, such as OpenAI’s ChatGPT and CLIP (Contrastive Language–Image Pre-training), has reshaped expectations for machines. These highly generalist models, capable of handling a vast array of tasks with consistent precision, have seen widespread adoption for both professional and personal use. However, such versatility comes at a hefty price. Training and running these models demands prodigious amounts of energy and time, raising sustainability concerns, as well as requiring cutting-edge hardware significantly more expensive than standard computers. Compounding these issues is that generalist tendencies may hinder the efficiency of AI models when applied to specific tasks. For instance, “in practical applications, the classification of all kinds of object classes is rarely required,” explains Associate Professor Go Irie, who led the research. “For example, in an autonomous driving system, it would be sufficient to recognise limited classes of objects such as cars, pedestrians, and traffic signs. “We would not need to recognise food, furniture, or animal species. Retaining classes that do not need to be recognised may decrease overall classification accuracy, as well as cause operational disadvantages such as the waste of computational resources and the risk of information leakage.” A potential solution lies in training models to “forget” redundant or unnecessary information—streamlining their processes to focus solely on what is required. While some existing methods already cater to this need, they tend to assume a “white-box” approach where users have access to a model’s internal architecture and parameters. Oftentimes, however, users get no such visibility. “Black-box” AI systems, more common due to commercial and ethical restrictions, conceal their inner mechanisms, rendering traditional forgetting techniques impractical. To address this gap, the research team turned to derivative-free optimisation—an approach that sidesteps reliance on the inaccessible internal workings of a model. Advancing through forgetting The study, set to be presented at the Neural Information Processing Systems (NeurIPS) conference in 2024, introduces a methodology dubbed “black-box forgetting.” The process modifies the input prompts (text instructions fed to models) in iterative rounds to make the AI progressively “forget” certain classes. Associate Professor Irie collaborated on the work with co-authors Yusuke Kuwana and Yuta Goto (both from TUS), alongside Dr Takashi Shibata from NEC Corporation. For their experiments, the researchers targeted CLIP, a vision-language model with image classification abilities. The method they developed is built upon the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary algorithm designed to optimise solutions step-by-step. In this study, CMA-ES was harnessed to evaluate and hone prompts provided to CLIP, ultimately suppressing its ability to classify specific image categories.
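To make that loop concrete, here is a minimal sketch of black-box prompt optimisation using the open-source cma package; the 64-dimensional prompt parametrisation and the score_prompt() evaluation function are illustrative assumptions, not the team’s actual code.

```python
# A minimal sketch of black-box prompt tuning with CMA-ES.
# score_prompt() is a stand-in for the real evaluation step: map the
# parameters to soft-prompt tokens, run CLIP on a validation set, and
# return its accuracy on the classes to be forgotten. A dummy objective
# is used here so the sketch runs end to end.
import cma
import numpy as np

DIM = 64  # dimensionality of the continuous prompt parameters (assumed)

def score_prompt(params: np.ndarray) -> float:
    """Dummy objective standing in for CLIP accuracy on forget classes."""
    return float(np.linalg.norm(params))

# Start from a zero prompt with step size 0.5; cap iterations for the demo.
es = cma.CMAEvolutionStrategy(DIM * [0.0], 0.5, {"maxiter": 50})
while not es.stop():
    candidates = es.ask()                               # sample candidate prompts
    losses = [score_prompt(np.asarray(c)) for c in candidates]
    es.tell(candidates, losses)                         # lower score = better forgetting
best_prompt = es.result.xbest
```

Because only classification scores flow back to the optimiser, the model’s weights and gradients are never touched, which is precisely the constraint a black-box setting imposes.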
As the project progressed, challenges arose. Existing optimisation techniques struggled to scale up for larger volumes of targeted categories, leading the team to devise a novel parametrisation strategy known as “latent context sharing.” This approach breaks latent context – a representation of information generated by prompts – into smaller, more manageable pieces. By allocating certain elements to a single token (word or character) while reusing others across multiple tokens, they dramatically reduced the problem’s complexity. Crucially, this made the process computationally tractable even for extensive forgetting applications. Through benchmark tests on multiple image classification datasets, the researchers validated the efficacy of black-box forgetting—achieving the goal of making CLIP “forget” approximately 40% of target classes without direct access to the AI model’s internal architecture. This research marks the first successful attempt to induce selective forgetting in a black-box vision-language model, demonstrating promising results. Benefits of helping AI models forget data Beyond its technical ingenuity, this innovation holds significant potential for real-world applications where task-specific precision is paramount. Simplifying models for specialised tasks could make them faster, more resource-efficient, and capable of running on less powerful devices—hastening the adoption of AI in areas previously deemed unfeasible. Another key use lies in image generation, where forgetting entire categories of visual context could prevent models from inadvertently creating undesirable or harmful content, be it offensive material or misinformation. Perhaps most importantly, this method addresses one of AI’s greatest ethical quandaries: privacy. AI models, particularly large-scale ones, are often trained on massive datasets that may inadvertently contain sensitive or outdated information. Requests to remove such data—especially in light of laws advocating for the “Right to be Forgotten”—pose significant challenges. Retraining entire models to exclude problematic data is costly and time-intensive, yet the risks of leaving it unaddressed can have far-reaching consequences. “Retraining a large-scale model consumes enormous amounts of energy,” notes Associate Professor Irie. “‘Selective forgetting,’ or so-called machine unlearning, may provide an efficient solution to this problem.” These privacy-focused applications are especially relevant in high-stakes industries like healthcare and finance, where sensitive data is central to operations. As the global race to advance AI accelerates, the Tokyo University of Science’s black-box forgetting approach charts an important path forward—not only by making the technology more adaptable and efficient but also by adding significant safeguards for users. While the potential for misuse remains, methods like selective forgetting demonstrate that researchers are proactively addressing both ethical and practical challenges. See also: Why QwQ-32B-Preview is the reasoning AI to watch Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Machine unlearning: Researchers make AI models ‘forget’ data appeared first on AI News.
View the full article
  13. Artificial intelligence entered the market with a splash, driving massive buzz and adoption. But now the pace is faltering. Business leaders still talk the talk about embracing AI, because they want the benefits – McKinsey estimates that GenAI could save companies up to $2.6 trillion across a range of operations. However, they aren’t walking the walk. According to one survey of senior analytics and IT leaders, only 20% of GenAI applications are currently in production. Why the wide gap between interest and reality? The answer is multifaceted. Concerns around security and data privacy, compliance risks, and data management are high-profile, but there’s also anxiety about AI’s lack of transparency and worries about ROI, costs, and skill gaps. In this article, we’ll examine the barriers to AI adoption, and share some measures that business leaders can take to overcome them. Get a handle on data “High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes,” said Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, adding, “Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies.” Today, only 43% of IT professionals say they’re confident about their ability to meet AI’s data demands. Given that data is so vital for AI success, it’s not surprising that data challenges are an oft-cited factor in slow AI adoption. The best way to overcome this hurdle is to go back to data basics. Organisations need to build a strong data governance strategy from the ground up, with rigorous controls that enforce data quality and integrity. Take ethics and governance seriously With regulations mushrooming, compliance is already a headache for many organisations. AI only adds new areas of risk, more regulations, and increased ethical governance issues for business leaders to worry about, to the extent that security and compliance risk was the most-cited concern in Cloudera’s State of Enterprise AI and Modern Data Architecture report. While the rise in AI regulations might seem alarming at first, executives should embrace the support that these frameworks offer, as they can give organisations a structure around which to build their own risk controls and ethical guardrails. Developing compliance policies, appointing teams for AI governance, and ensuring that humans retain authority over AI-powered decisions are all important steps in creating a comprehensive system of AI ethics and governance. Reinforce control over security and privacy Security and data privacy concerns loom large for every business, and with good reason. Cisco’s 2024 Data Privacy Benchmark Study revealed that 48% of employees admit to entering non-public company information into GenAI tools (and an unknown number have done so and won’t admit it), leading 27% of organisations to ban the use of such tools. The best way to reduce the risks is to limit access to sensitive data. This involves doubling down on access controls, guarding against privilege creep, and keeping data away from publicly-hosted LLMs. Avi Perez, CTO of Pyramid Analytics, explained that his business intelligence software’s AI infrastructure was deliberately built to keep data away from the LLM, sharing only metadata that describes the problem and letting the LLM determine the best way for locally-hosted engines to run the analysis. “There’s a huge set of issues there. It’s not just about privacy, it’s also about misleading results. So in that framework, data privacy and the issues associated with it are tremendous, in my opinion. They’re a showstopper,” Perez said. With Pyramid’s setup, however, “the LLM generates the recipe, but it does it without ever getting [its] hands on the data, and without doing mathematical operations. […] That eliminates something like 95% of the problem, in terms of data privacy risks.”
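A minimal sketch of this metadata-only pattern might look as follows, assuming a hypothetical ask_llm() client: the LLM sees only the database schema and returns a SQL “recipe”, which is executed locally so that row-level data never leaves the environment.

```python
# A minimal sketch of the "metadata only" pattern described above.
# ask_llm() is a hypothetical stand-in for a call to a hosted LLM; only
# the schema travels to it, and the generated SQL runs locally.
import sqlite3

def ask_llm(prompt: str) -> str:
    """Hypothetical hosted-LLM call; returns a SQL query as text."""
    raise NotImplementedError("stand-in for any LLM client")

def answer_question(db_path: str, question: str) -> list:
    conn = sqlite3.connect(db_path)
    # Share structure, not contents: table definitions only.
    schema = "\n".join(
        row[0] for row in conn.execute(
            "SELECT sql FROM sqlite_master WHERE type = 'table'"
        )
    )
    sql = ask_llm(
        f"Given this schema:\n{schema}\n"
        f"Write one SQLite query answering: {question}. Return only SQL."
    )
    return conn.execute(sql).fetchall()  # the "recipe" runs locally
```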
Boost transparency and explainability Another serious obstacle to AI adoption is a lack of trust in its results. The infamous story of Amazon’s AI-powered hiring tool which discriminated against women has become a cautionary tale that scares many people away from AI. The best way to combat this fear is to increase explainability and transparency. “AI transparency is about clearly explaining the reasoning behind the output, making the decision-making process accessible and comprehensible,” said Adnan Masood, chief AI architect at UST and a Microsoft regional director. “At the end of the day, it’s about eliminating the black box mystery of AI and providing insight into the how and why of AI decision-making.” Unfortunately, many executives overlook the importance of transparency. A recent IBM study reported that only 45% of CEOs say they are delivering on capabilities for openness. AI champions need to prioritise the development of rigorous AI governance policies that prevent black boxes from arising, and invest in explainability tools like SHapley Additive exPlanations (SHAP), fairness toolkits like Google’s Fairness Indicators, and automated compliance checks like the Institute of Internal Auditors’ AI Auditing Framework. Define clear business value Cost is on the list of AI barriers, as always. The Cloudera survey found that 26% of respondents said AI tools are too expensive, and Gartner included “unclear business value” as a factor in the failure of AI projects. Yet the same Gartner report noted that GenAI had delivered an average revenue increase and cost savings of over 15% among its users, proof that AI can drive financial lift if implemented correctly. This is why it’s crucial to approach AI like every other business project – identify areas that will deliver fast ROI, define the benefits you expect to see, and set specific KPIs so you can prove value. “While there’s a lot that goes into building out an AI strategy and roadmap, a critical first step is to identify the most valuable and transformative AI use cases on which to focus,” said Michael Robinson, Director of Product Marketing at UiPath. Set up effective training programs The skills gap remains a significant roadblock to AI adoption, but it seems that little effort is being made to address the issue. A report from Worklife indicates the initial boom in AI adoption came from early adopters. Now, it’s down to the laggards, who are inherently sceptical and generally less confident about AI – and any new tech. This makes training crucial. Yet according to Asana’s State of AI at Work study, 82% of participants said their organisations haven’t provided training on using generative AI. The problem is not that training doesn’t work; rather, it isn’t happening as it should. The clear takeaway is to offer comprehensive training in quality prompting and other relevant skills. Encouragingly, the same research shows that even using AI without training increases people’s skills and confidence.
So, it’s a good idea to get started with low- and no-code tools that allow employees who are unskilled in AI to learn on the job. The barriers to AI adoption are not insurmountable Although AI adoption has slowed, there’s no indication that it’s in danger in the long term. The many obstacles holding companies back from rolling out AI tools can be overcome without too much trouble. Many of the steps, like reinforcing data quality and ethical governance, should be taken regardless of whether or not AI is under consideration, while others will pay for themselves through the increased revenue and productivity gains that AI can bring. The post Narrowing the confidence gap for wider AI adoption appeared first on AI News. View the full article
  14. There’s a new contender in the AI space that’s making waves: QwQ-32B-Preview. This so-called “reasoning” AI model is being compared to OpenAI o1, and it’s one of the few you can download under a permissive license. For developers and researchers eager to experiment, that’s a significant bonus. Built by Alibaba’s Qwen team, QwQ-32B-Preview is anything but lightweight. It packs 32.5 billion parameters—think of these as the building blocks of its problem-solving abilities—and can handle prompts of up to 32,000 words, longer than some novels. Tests show it outperforms OpenAI o1-preview and o1-mini on benchmarks like AIME and MATH. For context, AIME uses other AI models to assess performance, while MATH is a collection of word problems. But the model isn’t just about maths problems or logic puzzles. What sets it apart is how it approaches tasks. QwQ-32B-Preview plans ahead, fact-checks its work, and avoids common AI mistakes. Of course, it’s not flawless—Alibaba acknowledges issues like language switching, occasional loops, and difficulties with “common sense” reasoning. Even so, it represents a step toward more intelligent AI systems. QwQ-32B-Preview is accessible: you can run or download it via Hugging Face.
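As a rough illustration, loading the model with the Hugging Face transformers library might look like the sketch below; the repository id “Qwen/QwQ-32B-Preview” is assumed from the Hugging Face listing, and a 32.5-billion-parameter model needs substantial GPU memory (or quantisation) to run.

```python
# A minimal sketch of loading and prompting QwQ-32B-Preview with
# transformers. device_map="auto" (which needs the accelerate package)
# spreads the weights across available GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"  # assumed repo id from the model hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```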
However, like other Chinese-developed AI, it operates within regulatory boundaries. That means it carefully avoids politically sensitive topics to comply with China’s rules, ensuring it aligns with “core socialist values.” Alibaba isn’t alone in this space. Meta’s Llama 3.1 is another open-source option, though it takes a different approach by focusing on generative AI rather than reasoning. While both models are innovative, QwQ-32B-Preview specialises in problem-solving with what the company describes as a human-like approach, putting it in the reasoning category. The competition in AI inside China is intensifying. Companies such as DeepSeek, Shanghai AI Lab, and Kunlun Tech have entered the reasoning AI race, releasing their models at pace. For example, DeepSeek’s r1 claims to outperform OpenAI’s o1 on half of its benchmark tests, particularly in maths and programming. Shanghai AI Lab’s InternThinker takes a structured approach to problem-solving, incorporating steps such as understanding queries, recalling knowledge, planning solutions, and reflecting on its answers. This surge of activity highlights how quickly Chinese companies are catching up with US tech giants. Xu Liang, an AI entrepreneur from Hangzhou, summed it up: “OpenAI gave the direction; with research, Chinese tech firms are making progress.” The release of QwQ-32B-Preview and its competitors shows how much ground they’re covering. But this goes beyond just catching up. Reasoning AI marks a change in how models are designed and used. Unlike older AI systems that relied on brute force to generate answers, reasoning models like QwQ-32B-Preview aim to mimic human problem-solving. The approach not only makes them more effective for complex tasks but also expands their potential use cases, like tackling advanced maths or providing detailed financial advice. Whether it’s solving puzzles, reasoning through intricate problems, or expanding what open-source AI can achieve, one thing is clear: the evolution of AI is accelerating. Buckle up—this is only the beginning. (Photo by Unsplash) See also: Alibaba Cloud overhauls AI partner initiative Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Why QwQ-32B-Preview is the reasoning AI to watch appeared first on AI News. View the full article
  15. Layer 1 relational blockchain Chromia has unveiled its Asgard mainnet upgrade. The upgrade includes new extensions that add specialised capabilities to the blockchain; the Oracle Extension has launched, with the AI Inference Extension expected early next year. The Chromia blockchain development team has announced the successful completion of its Asgard mainnet upgrade, which introduces new features and capabilities for the platform. The new features enhance the overall capacity of the Layer 1 blockchain and add specialised capabilities for users. The Asgard mainnet upgrade includes the launch of Chromia Extensions, expected to support the growth of decentralised finance (DeFi) and AI-enabled applications on the Chromia Network. In its announcement on November 15, the Chromia team said it believes the mainnet will redesign how data is organised on the blockchain, changing the development and use of Web3 applications. The blockchain utilises a modular framework to offer users and developers decentralised application (dApp) chains, customisable fee structures, and advanced digital assets. The extensions are modular enhancements that enable developers to build additional features on top of the main blockchain, expanding functionality and utility while maintaining the benefits of the existing infrastructure. The Extensions complement the platform’s relational data architecture, modular network design, and gas-free economic model for end users. Chromia launches Oracle Extensions The Oracle Extension provides fully on-chain, real-time price feeds that are updated approximately once a second. It provides developers in Chromia’s ecosystem with the ability to develop DeFi applications like decentralised exchanges, futures and options platforms, and lending protocols. Ludvig Öberg, VP of the Chromia Platform Unit, said he believes the Oracle Extension will help grow the DeFi space on the blockchain. “The Oracle Extension lays the groundwork for an expansion of decentralised finance activity on the Chromia network and the growth of network value.” According to the team’s statement, the launch of the Oracle Extension aligns with the overall goal of “strengthening connections across the wider cryptocurrency ecosystem.” The company has made recent efforts to integrate native CHR tokens and other Chromia-based tokens with centralised exchanges and cross-chain wallets. Chromia plans to release a public demo of the Oracle Extension, with integrations by DeFi protocols expected soon after. AI Inference Extension to launch early 2025 Chromia has also announced plans to launch an AI Inference Extension, expected in Q1 2025. The module will enable developers to execute AI models directly on-chain using Chromia’s decentralised provider network. The project’s recently formed Data and AI Division focuses on creating tools to enhance transparency in AI training data and inputs. Speaking about the AI Extension’s planned launch, Yeou Jie, Head of Business Development at Chromia, said the team’s plan is to expand “transparency to AI.” “As the world’s only relational blockchain, Chromia has demonstrated its ability to bring transparency to AI and other data-intensive use cases. The AI Inference Extension will take this a step further, enabling on-chain execution of AI models.” The first major technical update to Chromia, the Asgard mainnet provides a way for the blockchain to support other functions and decentralised applications including AI, gaming, finance, and enterprise use cases.
The post Chromia’s Asgard upgrade launches: “New era for DeFi and AI” appeared first on AI News. View the full article
  16. Amazon Web Services (AWS) has announced improvements to bolster Bedrock, its fully managed generative AI service. The updates include new foundational models from several AI pioneers, enhanced data processing capabilities, and features aimed at improving inference efficiency. Dr Swami Sivasubramanian, VP of AI and Data at AWS, said: “Amazon Bedrock continues to see rapid growth as customers flock to the service for its broad selection of leading models, tools to easily customise with their data, built-in responsible AI features, and capabilities for developing sophisticated agents. “With this new set of capabilities, we are empowering customers to develop more intelligent AI applications that will deliver greater value to their end-users.” Amazon Bedrock expands its model diversity AWS is set to become the first cloud provider to feature models from AI developers Luma AI and poolside, while also incorporating Stability AI’s latest release. Through its new Amazon Bedrock Marketplace, customers will have access to over 100 emerging and specialised models from across industries, ensuring they can select the most appropriate tools for their unique needs. Luma AI’s Ray 2 Luma AI, known for advancing generative AI in video content creation, brings its next-generation Ray 2 model to Amazon Bedrock. This model generates high-quality, lifelike video outputs from text or image inputs and allows organisations to create detailed outputs in fields such as fashion, architecture, and graphic design. AWS’s presence as the first provider for this model ensures businesses can experiment with new camera angles, cinematographic styles, and consistent characters with a frictionless workflow. poolside’s malibu and point Designed to address challenges in modern software engineering, poolside’s models – malibu and point – specialise in code generation, testing, documentation, and real-time code completion. Importantly, developers can securely fine-tune these models using their private datasets. Accompanied by Assistant – an integration for development environments – poolside’s tools allow engineering teams to accelerate productivity, ship projects faster, and increase accuracy. Stability AI’s Stable Diffusion 3.5 Large Amazon Bedrock customers will soon gain access to Stability AI’s text-to-image model Stable Diffusion 3.5 Large. This addition supports businesses in creating high-quality visual media for use cases in areas like gaming, advertising, and retail. Through the Bedrock Marketplace, AWS also enables access to over 100 specialised models. These include solutions tailored to fields such as biology (EvolutionaryScale’s ESM3 generative model), financial data (Writer’s Palmyra-Fin), and media (Camb.ai’s text-to-audio MARS6). Zendesk, a global customer service software firm, leverages Bedrock’s marketplace to personalise support across email and social channels using AI-driven localisation and sentiment analysis tools. For example, they use models like Widn.AI to tailor responses based on real-time sentiment in customers’ native languages. Scaling inference with new Amazon Bedrock features Large-scale generative AI applications require balancing the cost, latency, and accuracy of inference processes. AWS is addressing this challenge with two new Amazon Bedrock features: Prompt Caching The new caching capability reduces redundant processing of prompts by securely storing frequently used queries, saving on both time and costs. 
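As a hedged sketch of what this looks like in practice, the Converse API accepts cache checkpoints inside message content; the model id and the cachePoint block below are assumptions based on AWS’s prompt-caching documentation rather than code from the announcement.

```python
# A minimal sketch of Bedrock's Converse API with a prompt-cache checkpoint.
# The cachePoint block marks the long, reused prefix (e.g. a document) so
# that subsequent calls can skip reprocessing it. Model id, region, and the
# cachePoint structure are assumptions; check the Bedrock docs.
import boto3

LONG_DOCUMENT = "(imagine a long report reused across many requests)"

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",  # assumed model id
    messages=[{
        "role": "user",
        "content": [
            {"text": LONG_DOCUMENT},                 # reused across requests
            {"cachePoint": {"type": "default"}},     # cache everything above
            {"text": "Summarise the key findings."}, # varies per request
        ],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```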
This feature can lead to up to a 90% reduction in costs and an 85% decrease in latency. For example, Adobe incorporated Prompt Caching into its Acrobat AI Assistant to summarise documents and answer questions, achieving a 72% reduction in response times during initial testing. Intelligent Prompt Routing This feature dynamically directs prompts to the most suitable foundation model within a family, optimising results for both cost and quality. Customers such as Argo Labs, which builds conversational voice AI solutions for restaurants, have already benefited. While simpler queries (like booking tables) are handled by smaller models, more nuanced requests (e.g., dietary-specific menu questions) are intelligently routed to larger models. Argo Labs’ usage of Intelligent Prompt Routing has not only improved response quality but also reduced costs by up to 30%. Data utilisation: Knowledge bases and automation A key attraction of generative AI lies in its ability to extract value from data. AWS is enhancing its Amazon Bedrock Knowledge Bases to ensure organisations can deploy their unique datasets for richer AI-powered user experiences. Using structured data AWS has introduced capabilities for structured data retrieval within Knowledge Bases. This enhancement allows customers to query data stored across Amazon services like SageMaker Lakehouse and Redshift using natural-language prompts, which the service translates into SQL queries. Octus, a credit intelligence firm, plans to use this capability to provide clients with dynamic, natural-language reports on its structured financial data. GraphRAG integration By incorporating automated graph modelling (powered by Amazon Neptune), customers can now generate and connect relational data for stronger AI applications. BMW Group, for instance, will use GraphRAG to augment its virtual assistant MAIA. This assistant taps into BMW’s wealth of internal data to deliver comprehensive responses and premium user experiences. Separately, AWS has unveiled Amazon Bedrock Data Automation, a tool that transforms unstructured content (e.g., documents, video, and audio) into structured formats for analytics or retrieval-augmented generation (RAG). Companies like Symbeo (automated claims processing) and Tenovos (digital asset management) are already piloting the tool to improve operational efficiency and data reuse. The expansion of Amazon Bedrock’s ecosystem reflects its growing popularity, with the service recording a 4.7x increase in its customer base over the last year. Industry leaders like Adobe, BMW, Zendesk, and Tenovos have all embraced AWS’s latest innovations to improve their generative AI capabilities. Most of the newly announced tools – such as inference management, Knowledge Bases with structured data retrieval, and GraphRAG – are currently in preview, while notable model releases from Luma AI, poolside, and Stability AI are expected soon. See also: Alibaba Cloud overhauls AI partner initiative Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Amazon Bedrock gains new AI models, tools, and features appeared first on AI News. View the full article
  17. The Medicines and Healthcare products Regulatory Agency (MHRA) has announced the selection of five healthcare technologies for its ‘AI Airlock’ scheme. AI Airlock aims to refine the process of regulating AI-driven medical devices and help fast-track their safe introduction to the UK’s National Health Service (NHS) and patients in need. The technologies chosen for this scheme include solutions targeting cancer and chronic respiratory diseases, as well as advancements in radiology diagnostics. These AI systems promise to revolutionise the accuracy and efficiency of healthcare, potentially driving better diagnostic tools and patient care. The AI Airlock, as described by the MHRA, is a “sandbox” environment—an experimental framework designed to help manufacturers determine how best to collect real-world evidence to support the regulatory approval of their devices. Unlike traditional medical devices, AI models continue to evolve through learning, making the establishment of safety and efficacy evidence more complex. The Airlock enables this exploration within a monitored virtual setting, giving developers insight into the practical challenges of regulation while supporting the NHS’s broader adoption of transformative AI technologies. Safely enabling AI healthcare innovation Laura Squire, the lead figure in MedTech regulatory reform and Chief Officer at the MHRA, said: “New AI medical devices have the potential to increase the accuracy of healthcare decisions, save time, and improve efficiency—leading to better outcomes for the NHS and patients across all healthcare settings. “But we need to be confident that AI-powered medical devices introduced into the NHS are safe, stay safe, and perform as intended through their lifetime of use.” Squire emphasised that the AI Airlock pilot allows collaboration “in partnership with technology specialists, developers and the NHS,” facilitating the exploration of best practices and accelerating safe patient access to innovative solutions. Government representatives have praised the initiative for its forward-thinking framework. Karin Smyth, Minister of State for Health, commented: “As part of our 10-Year Health Plan, we’re shifting NHS care from analogue to digital, and this project will help bring the most promising technology to patients. “AI has the power to revolutionise care by supporting doctors to diagnose diseases, automating time-consuming admin tasks, and reducing hospital admissions by predicting future ill health.” Science Minister Lord Vallance lauded the AI Airlock pilot as “a great example of government working with businesses to enable them to turn ideas into products that improve lives.” He added, “This shows how good regulation can facilitate emerging technologies for the benefit of the UK and our economy.” Selected technologies The deployment of AI-powered medical devices requires meeting stringent criteria to ensure innovation, patient benefits, and regulatory challenge readiness. The five technologies selected for this inaugural pilot offer vital insights into healthcare’s future: Lenus Stratify Patients with Chronic Obstructive Pulmonary Disease (COPD) are among those who stand to benefit significantly from AI innovation. Lenus Stratify, developed by Lenus Health, analyses patient data to predict severe lung disease outcomes, reducing unscheduled hospital admissions. The system empowers care providers to adopt earlier interventions, affording patients an improved quality of life while alleviating NHS resource strain. Philips Radiology Reporting Enhancer Philips has integrated AI into existing radiology workflows to enhance the efficiency and accuracy of critical radiology reports. This system uses AI to prepare the “Impression” section of reports, summarising essential diagnostic information for healthcare providers. By automating this process, Philips aims to minimise workload pressures, human errors, and miscommunication, creating a more seamless diagnostic experience. Federated AI Monitoring Service (FAMOS) One recurring AI challenge is the concept of “drift,” when changing real-world conditions impair system performance over time. Newton’s Tree has developed FAMOS to monitor AI models in real time, flagging degradation and enabling rapid corrections. Hospitals, regulators, and software developers can use this tool to ensure algorithms remain high-performing, adapting to evolving circumstances while prioritising patient safety.
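Newton’s Tree has not published FAMOS’s internals, but the kind of drift check such monitors run can be sketched generically: compare recent model-confidence scores against a reference window with a two-sample Kolmogorov–Smirnov test and alert when the distributions diverge.

```python
# A generic drift-detection sketch (an illustration of the concept, not
# Newton's Tree's method): flag when the distribution of recent prediction
# scores differs significantly from a reference window.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, recent: np.ndarray,
                threshold: float = 0.01) -> bool:
    """Return True when the recent score distribution differs significantly."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < threshold

# Example: confidence scores at deployment vs. scores from the last week.
rng = np.random.default_rng(0)
baseline = rng.normal(0.8, 0.05, size=1000)   # model confidence at launch
last_week = rng.normal(0.7, 0.08, size=500)   # confidence has shifted down
print(drift_alert(baseline, last_week))       # True -> investigate the model
```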
OncoFlow Personalised Cancer Management Targeting the pressing healthcare challenge of reducing waiting times for cancer treatment, OncoFlow speeds up clinical workflows through its intelligent care pathway platform. Initially applied to breast cancer protocols, the system later aims to expand across other oncology domains. With quicker access to tailored therapies, patients gain increased survival rates amidst mounting NHS pressures. SmartGuideline Developed to simplify complex clinical decision-making processes, SmartGuideline uses a large language model trained on official NICE medical guidelines. This technology allows clinicians to ask routine questions and receive verified, precise answers, eliminating the ambiguity associated with current AI language models. By integrating this tool, patients benefit from more accurate treatments grounded in up-to-date medical knowledge. Broader implications The influence of the AI Airlock extends beyond its current applications. The MHRA expects pilot findings, due in 2025, to inform future medical device regulations and create a clearer path for manufacturers developing AI-enabled technologies. The evidence derived will contribute to shaping post-Brexit UKCA marking processes, helping manufacturers achieve compliance with higher levels of transparency. By improving regulatory frameworks, the UK could position itself as a global hub for med-tech innovation while ensuring faster access to life-saving tools. The urgency of these developments was underscored earlier this year in Lord Darzi’s review of health and care. The report outlined the “critical state” of the NHS, offering AI interventions as a promising pathway to sustainability. The work on AI Airlock by the MHRA addresses one of the report’s major recommendations for enabling regulatory solutions and “unlocking the AI revolution” for healthcare advancements. While being selected into the AI Airlock pilot does not indicate regulatory approval, the technologies chosen represent a potential leap forward in applying AI to some of healthcare’s most pressing challenges. The coming years will test the potential of these solutions under regulatory scrutiny. If successful, the initiative from the MHRA could redefine how pioneering technologies like AI are adopted in healthcare, balancing the need for speed, safety, and efficiency. With the NHS under immense pressure from growing demand, AI’s ability to augment clinicians, predict illnesses, and streamline workflows may well be the game-changer the system urgently needs.
(Photo by National Cancer Institute) See also: AI’s role in helping to prevent skin cancer through behaviour change Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption appeared first on AI News. View the full article
  18. Google Cloud has launched two generative AI models on its Vertex AI platform, Veo and Imagen 3, amid reports of surging revenue growth among enterprises leveraging the technology. According to Google Cloud’s data, 86% of enterprise companies currently using generative AI in production environments have witnessed increased revenue, with an estimated average growth of 6%. This metric has driven the tech giant’s latest innovation push, resulting in the introduction of Veo – its most sophisticated video generation model to date – and Imagen 3, an advanced text-to-image generation system. Breaking ground Veo, now available in private preview on Vertex AI, represents a milestone as Google becomes the first hyperscaler to offer an image-to-video model. The technology enables businesses to generate high-quality videos from simple text or image prompts, potentially revolutionising video production workflows across industries. Imagen 3 – scheduled for release to all Vertex AI customers next week – promises unprecedented realism in generated images, with marked improvements in detail, lighting, and artifact reduction. The model includes new features for enterprise customers on an allowlist, including advanced editing capabilities and brand customisation options. Transforming operations Several major firms have begun incorporating these technologies into their operations. Mondelez International, the company behind brands such as Oreo, Cadbury, and Chips Ahoy!, is using the technology to accelerate campaign content creation across its global portfolio of brands. Jon Halvorson, SVP of Consumer Experience & Digital Commerce at Mondelez International, explained: “Our collaboration with Google Cloud has been instrumental in harnessing the power of generative AI, notably through Imagen 3, to revolutionise content production. “This technology has enabled us to produce hundreds of thousands of customised assets, enhancing creative quality while significantly reducing both time to market and costs.” Knowledge sharing platform Quora has developed Poe, a platform that enables users to interact with generative AI models. Veo and Imagen are now integrated with Poe. Spencer Chan, Product Lead for Poe at Quora, commented: “We created Poe to democratise access to the world’s best gen AI models. With Veo, we’re now enabling millions of users to bring their ideas to life through stunning, high-quality generative video.” Safety and security In response to growing concerns about AI-generated content, Google has implemented robust safety features in both models. These include: Digital watermarking through Google DeepMind’s SynthID. Built-in safety filters to prevent harmful content creation. Strict data governance policies to ensure customer data protection. Industry-first copyright indemnity for generative AI services. The launch of these new models signals Google’s growing influence in the enterprise AI space and suggests a shift toward more sophisticated, integrated AI solutions for business applications.
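As a rough sketch, calling an Imagen model through the Vertex AI Python SDK might look like the following; the project, location, and model identifier are assumptions to be checked against Google Cloud’s documentation.

```python
# A minimal sketch of text-to-image generation on Vertex AI.
# The project id and model identifier below are placeholders/assumptions.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-gcp-project", location="us-central1")

model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")
images = model.generate_images(
    prompt="A product shot of a chocolate bar on a marble counter",
    number_of_images=1,
)
images[0].save("chocolate.png")  # write the generated image to disk
```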
(Imagery Credit: Google Cloud) See also: Alibaba Marco-o1: Advancing LLM reasoning capabilities Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Google launches Veo and Imagen 3 generative AI models appeared first on AI News. View the full article
  19. Alibaba Cloud is overhauling its AI partner ecosystem, unveiling the “Partner Rainforest Plan” during its annual Partner Summit 2024. The Chinese tech giant’s cloud division has outlined several new initiatives, including an AI partner accelerator programme, enhanced incentives, and a refreshed global strategy for service partners, as it seeks to strengthen its position in the market. Selina Yuan, President of International Business at Alibaba Cloud Intelligence, said: “At Alibaba Cloud, we believe that collaboration is the key to unlocking innovation and driving growth. Our global partners are not just participants, they are the architects of a new digital landscape in the AI era.” The company’s new AI Alliance Accelerator Programme aims to establish partnerships with 50 AI technology providers and 50 channel partners by 2025. Selected technology partners will receive enhanced technical support, expanded distribution channels, and dedicated AI consulting services, while channel partners will benefit from increased financial incentives for AI-related initiatives. Alibaba Cloud has also introduced its Revitalised Service Partner Programme, designed to upskill existing partners and cultivate new ones through AI training and empowerment. The programme includes the joint development of Managed Large Language Model Services with service partners, leveraging the company’s generative AI capabilities. The cloud provider has also committed to extending strategic partnerships with 18 service partners – including prominent names such as Deloitte, Accenture, and Cognizant Worldwide – from its existing pool of 50 global standard service partners. In various regional developments, Alibaba Cloud has established strategic partnerships across Asia: Indonesia: The company has partnered with Telkom Indonesia to deliver AI-supported cloud solutions and develop digital talent. Japan: Information security firm Securai will localise Alibaba Cloud’s Zstack service for the Japanese market. Thailand: A memorandum of understanding with Yell Group aims to address growing demand for generative AI in the creative media industry. The company, which currently maintains partnerships with approximately 12,000 organisations worldwide – including industry leaders such as Salesforce, Fortinet, IBM, and Neo4j – has introduced a Synergistic Incentive Programme to foster collaboration between its global technology and channel partners. “Today, with our revamped global partner ecosystem, we are committed to supporting our global partners to jointly reap the benefits of the AI era and meet the diverse business demands of global customers,” Yuan concludes. (Photo by Hannah Busing) See also: Alibaba Marco-o1: Advancing LLM reasoning capabilities Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Alibaba Cloud overhauls AI partner initiative appeared first on AI News. View the full article
  20. AgentFun.AI, a platform to create and trade AI agents, went live on November 27 on Cronos zkEVM. It became the first decentralised app dedicated to AI agents in the Cronos ecosystem. Users can create, build, and trade tokenised AI agents on AgentFun.AI. They set growth milestones for the agents, which have individual personalities. Agents accumulate fans and become tradeable assets with each milestone reached. Each agent can interact with users on the blockchain and on social media. Exploring the appeal of trading tokenised AI agents Users can own a fraction of high-value AI models, lowering the barrier to AI ecosystem entry. These tokens often operate on decentralised platforms, increasing transparency, security, and cross-platform integration. Users across the globe can participate in funding and developing innovative AI models, accelerating AI adoption. Creating an agent through AgentFun.AI starts with buying a small amount of AGENTFUN tokens on H2 Finance, a leading decentralised exchange (DEX) on Cronos zkEVM. Each agent generated requires a nominal fee of 1 $AGENTFUN. The entire process of creating and training AI agents has been gamified, further encouraging engagement with the platform. Agent capabilities increase in direct proportion to demand As the AI agents grow, they obtain new capabilities, with rising demand from other users on Cronos zkEVM unlocking new features. Once an agent reaches the fixed market cap of $127,100, it becomes capable of engaging in Telegram chats. It cultivates a liquidity pool on Cronos’ DEX, easing trading and discoverability in the ecosystem. Additional features are unlocked as agents attain milestone-based targets. An agent that reaches the market cap of $1.27 million starts interacting on X. Its growth is essentially unlimited, giving users a strong incentive to network in the ecosystem that AgentFun.AI has helped build. Further encouragement is provided by the fact that users can create AI agents with unique personalities and narratives. Each agent starts as a tailored language model for conversational interactions before it gains new capabilities, features, and skills. In other words, the agents are designed for specific tasks or domains, using a customised language model at their core. Unlike general-purpose AI, they are fine-tuned on highly specialised datasets to excel in a particular area. Over time, they can theoretically evolve by incorporating autonomous decision-making and API integration and communicating with other systems, like interacting with users in dynamic environments. They could perform tasks using logic-based or reinforcement learning modules or fetch and process real-time data. When a user creates an agent, they provide social links to facilitate socialisation and discovery. A new agent token is launched in a pool with a supply of one billion. The agent token will eventually be listed on H2 Finance as more users buy it, using the $AGENTFUN earned from sold tokens and the remaining token supply. Supporting Cronos as a leading AI-agent-powered ecosystem A final notable aspect of AgentFun.AI’s launch is its support of Cronos’ strategy to become one of the first niche, AI agent-powered ecosystems. As part of this, it will stimulate experimentation and adoption of the first AI agents on Cronos zkEVM.
This may encourage other developers to introduce AI agent dApps, leading to a flood of innovation on Cronos. The launch of AgentFun.AI on Cronos aligns with the latter’s mission to build a financial ecosystem with openness, fairness, and community empowerment at its core. Cronos envisions a future enabled primarily by AI agents where decentralised finance is universally accessible, which reflects its conviction that the agents can provide unlimited growth opportunities. (Image source: Depositphotos) The post AI agents and ecosystems with AgentFun.AI’s launch on Cronos appeared first on AI News. View the full article
  21. Salesforce has unveiled the findings of its UK AI Readiness Index, signalling the nation is in a position to spearhead the next wave of AI innovation, also known as agentic AI. The report places the UK ahead of its G7 counterparts in terms of AI adoption but also underscores areas ripe for improvement, such as support for SMEs, fostering cross-sector partnerships, and investing in talent development. Zahra Bahrololoumi CBE, UKI CEO at Salesforce, commented: “Agentic AI is revolutionising enterprise software by enabling humans and agents to collaborate seamlessly and drive customer success. “The UK AI Readiness Index positively highlights that the UK has both the vision and infrastructure to be a powerhouse globally in AI, and lead the current third wave of agentic AI.” UK AI adoption sets the stage for agentic revolution The Index details how both the public and private sectors in the UK have embraced AI’s transformative potential. With a readiness score of 65.5, surpassing the G7 average of 61.2, the UK is establishing itself as a hub for large-scale AI projects, driven by a robust innovation culture and pragmatic regulatory approaches. The government has played its part in maintaining a stable and secure environment for tech investment. Initiatives such as the AI Safety Summit at Bletchley Park and risk-oriented AI legislation showcase Britain’s leadership on critical AI issues like transparency and privacy. Business readiness is equally impressive, with UK industries scoring 52, well above the G7 average of 47.8. SMEs in the UK are increasingly prioritising AI adoption, further bolstering the nation’s stance in the international AI arena. Adam Evans, EVP & GM of Salesforce AI Platform, is optimistic about the evolution of agentic AI. Evans foresees that, by 2025, these agents will become business-aware—expertly navigating industry-specific challenges to execute meaningful tasks and decisions. Investments fuelling AI growth Salesforce is committing $4 billion to the UK’s AI ecosystem over the next five years. Since establishing its UK AI Centre in London, Salesforce says it has engaged over 3,000 stakeholders in AI training and workshops. Key investment focuses include creating a regulatory bridge between the EU’s rules-based approach and the more relaxed US approach, and ensuring SMEs have the resources to integrate AI. A strong emphasis also lies on enhancing digital skills and centralising training to support the AI workforce of the future. Feryal Clark, Minister for AI and Digital Government, said: “These findings are further proof the UK is in prime position to take advantage of AI, and highlight our strength in spurring innovation, investment, and collaboration across the public and private sector. “There is a global race for AI and we’ll be setting out plans for how the UK can use the technology to ramp up adoption across the economy, kickstart growth, and build an AI sector which can scale and compete on the global stage.” Antony Walker, Deputy CEO at techUK, added: “To build on this progress, government and industry must collaborate to foster innovation, support SMEs, invest in skills, and ensure flexible regulation, cementing the UK’s leadership in the global AI economy.” Agentic AI boosting UK business productivity Capita, Secret Escapes, Heathrow, and Bionic are among the organisations that have adopted Salesforce’s Agentforce to boost their productivity.
Adolfo Hernandez, CEO of Capita, said: “We want to transform Capita’s recruitment process into a fast, seamless and autonomous experience that benefits candidates, our people, and our clients. “With autonomous agents providing 24/7 support, our goal is to enable candidates to complete the entire recruitment journey within days as opposed to what has historically taken weeks.” Secret Escapes, a curator of luxury travel deals, finds autonomous agents crucial for personalising services to its 60 million European members. Kate Donaghy, Head of Business Technology at Secret Escapes, added: “Agentforce uses our unified data to automate routine tasks like processing cancellations, updating booking information, or even answering common travel questions about luggage, flight information, and much more—freeing up our customer service agents to handle more complex and last-minute travel needs to better serve our members.” The UK’s AI readiness is a testament to the synergy between government, business, and academia. To maintain its leadership, the UK must sustain its focus on collaboration, skills development, and innovation. (Photo by Matthew Wiebe) See also: Generative AI use soars among Brits, but is it sustainable? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Salesforce: UK set to lead agentic AI revolution appeared first on AI News. View the full article
  22. Alibaba has announced Marco-o1, a large language model (LLM) designed to tackle both conventional and open-ended problem-solving tasks. Marco-o1, from Alibaba’s MarcoPolo team, represents another step forward in the ability of AI to handle complex reasoning challenges—particularly in maths, physics, coding, and areas where clear standards may be absent. Building upon OpenAI’s reasoning advancements with its o1 model, Marco-o1 distinguishes itself by incorporating several advanced techniques, including Chain-of-Thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS), and novel reflection mechanisms. These components work in concert to enhance the model’s problem-solving capabilities across various domains. The development team has implemented a comprehensive fine-tuning strategy using multiple datasets, including a filtered version of the Open-O1 CoT Dataset, a synthetic Marco-o1 CoT Dataset, and a specialised Marco Instruction Dataset. In total, the training corpus comprises over 60,000 carefully curated samples. The model has demonstrated particularly impressive results in multilingual applications. In testing, Marco-o1 achieved notable accuracy improvements of 6.17% on the English MGSM dataset and 5.60% on its Chinese counterpart. The model has shown particular strength in translation tasks, especially when handling colloquial expressions and cultural nuances. One of the model’s most innovative features is its implementation of varying action granularities within the MCTS framework. This approach allows the model to explore reasoning paths at different levels of detail, from broad steps to more precise “mini-steps” of 32 or 64 tokens. The team has also introduced a reflection mechanism that prompts the model to self-evaluate and reconsider its reasoning, leading to improved accuracy in complex problem-solving scenarios. The MCTS integration has proven particularly effective, with all MCTS-enhanced versions of the model showing significant improvements over the base Marco-o1-CoT version. The team’s experiments with different action granularities have revealed interesting patterns, though they note that determining the optimal strategy requires further research and more precise reward models.
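To illustrate the general shape of such a search, here is a compact MCTS skeleton over reasoning “mini-steps”; sample_step() and estimate_value() are hypothetical stand-ins for sampling 32 to 64 tokens from the model and scoring a partial chain of thought, and the sketch is a generic illustration rather than Alibaba’s implementation.

```python
# A generic MCTS skeleton for step-by-step reasoning (illustrative only).
import math
import random

class Node:
    def __init__(self, state: str, parent=None):
        self.state = state            # reasoning text produced so far
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_value = 0.0

    def ucb1(self, c: float = 1.4) -> float:
        if self.visits == 0:
            return float("inf")       # always try unvisited children first
        exploit = self.total_value / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def sample_step(state: str) -> str:
    """Stand-in for sampling one 32-64 token continuation from the LLM."""
    return " step"                    # dummy so the sketch runs

def estimate_value(state: str) -> float:
    """Stand-in for scoring a partial chain of thought (e.g. confidence)."""
    return random.random()            # dummy so the sketch runs

def search(root_state: str, iterations: int = 100, branching: int = 4) -> str:
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        while node.children:          # selection: descend via UCB1
            node = max(node.children, key=Node.ucb1)
        for _ in range(branching):    # expansion: sample candidate steps
            node.children.append(Node(node.state + sample_step(node.state), node))
        child = random.choice(node.children)
        value = estimate_value(child.state)
        while child:                  # backpropagation
            child.visits += 1
            child.total_value += value
            child = child.parent
    return max(root.children, key=lambda n: n.visits).state
```

In this framing, the action granularity described above is simply how much text sample_step() returns per call: broad steps cover ground quickly, while shorter mini-steps search the space more finely.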
The MCTS integration has proven particularly effective, with all MCTS-enhanced versions of the model showing significant improvements over the base Marco-o1-CoT version. The team’s experiments with different action granularities have revealed interesting patterns, though they note that determining the optimal strategy requires further research and more precise reward models. (Credit: MarcoPolo Team, AI Business, Alibaba International Digital Commerce)
The development team has been transparent about the model’s current limitations, acknowledging that while Marco-o1 exhibits strong reasoning characteristics, it still falls short of a fully realised “o1” model. They emphasise that this release represents an ongoing commitment to improvement rather than a finished product.
Looking ahead, the Alibaba team has announced plans to incorporate reward models, including Outcome Reward Modeling (ORM) and Process Reward Modeling (PRM), to enhance the decision-making capabilities of Marco-o1. They are also exploring reinforcement learning techniques to further refine the model’s problem-solving abilities.
The Marco-o1 model and associated datasets have been made available to the research community through Alibaba’s GitHub repository, complete with comprehensive documentation and implementation guides. The release includes installation instructions and example scripts for both direct model usage and deployment via FastAPI.
(Photo by Alina Grubnyak)
See also: New AI training techniques aim to overcome current challenges
The post Alibaba Marco-o1: Advancing LLM reasoning capabilities appeared first on AI News. View the full article
  23. OpenAI and other leading AI companies are developing new training techniques to overcome the limitations of current methods. Addressing unexpected delays and complications in the development of larger, more powerful language models, these fresh techniques focus on human-like behaviour to teach algorithms to ‘think’.
According to a dozen AI researchers, scientists, and investors, the new training techniques, which underpin OpenAI’s recent ‘o1’ model (formerly Q* and Strawberry), have the potential to transform the landscape of AI development. The reported advances may also influence the types and quantities of resources AI companies will need on an ongoing basis, including specialised hardware and the energy required to develop AI models.
The o1 model is designed to approach problems in a way that mimics human reasoning and thinking, breaking down numerous tasks into steps. The model also utilises specialised data and feedback provided by experts in the AI industry to enhance its performance.
Since ChatGPT was unveiled by OpenAI in 2022, there has been a surge in AI innovation, and many technology companies claim existing AI models require expansion, be it through greater quantities of data or improved computing resources. Only then can AI models consistently improve.
Now, AI experts have reported limitations in scaling up AI models. The 2010s were a revolutionary period for scaling, but Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, says that the training of AI models, particularly in understanding language structures and patterns, has levelled off.
“The 2010s were the age of scaling, now we’re back in the age of wonder and discovery once again. Scaling the right thing matters more now,” he said.
In recent times, AI lab researchers have experienced delays in and challenges to developing and releasing large language models (LLMs) that are more powerful than OpenAI’s GPT-4 model.
First, there is the cost of training large models, often running into tens of millions of dollars. And, due to complications that arise, such as hardware failing due to system complexity, a final analysis of how these models run can take months.
In addition to these challenges, training runs require substantial amounts of energy, often resulting in power shortages that can disrupt processes and impact the wider electricity grid. Another issue is the colossal amount of data large language models use, so much so that AI models have reportedly used up all accessible data worldwide.
Researchers are exploring a technique known as ‘test-time compute’ to improve current AI models during the inference phase, that is, when the model is being used. The method can involve generating multiple answers in real time and then selecting the best of them, allowing the model to allocate greater processing resources to difficult tasks that require human-like decision-making and reasoning. The aim is to make the model more accurate and capable.
Noam Brown, a researcher at OpenAI who helped develop the o1 model, shared an example of how a new approach can achieve surprising results. At the TED AI conference in San Francisco last month, Brown explained that “having a bot think for just 20 seconds in a hand of poker got the same boosting performance as scaling up the model by 100,000x and training it for 100,000 times longer.”
Rather than simply increasing the model size and training time, this can change how AI models process information and lead to more powerful, efficient systems.
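A common, simple instantiation of this idea is best-of-N sampling: spend extra compute at inference time generating several candidate answers and keep the one a scoring function prefers. The following is a minimal sketch, not OpenAI’s o1 code; generate and score are hypothetical stand-ins for an LLM sampling call and a verifier or reward model.

import random

def generate(prompt: str) -> str:
    # Stand-in for sampling one candidate answer from an LLM.
    return f"candidate {random.randint(0, 9)} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    # Stand-in for a verifier/reward model that rates answer quality.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Larger n means more inference-time compute spent on one problem.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("What is 17 * 24?"))

The appeal is that n can be raised only for hard inputs, shifting spend from ever-larger training runs to per-query inference, which is exactly the hardware-demand shift discussed below.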
It is reported that other AI labs have been developing versions of the o1 technique; these include xAI, Google DeepMind, and Anthropic. Competition in the AI world is nothing new, but we could see a significant impact on the AI hardware market as a result of the new techniques.
Companies like Nvidia, which currently dominates the supply of AI chips thanks to the high demand for its products, may be particularly affected by updated AI training techniques. Nvidia became the world’s most valuable company in October, and its rise in fortunes can be largely attributed to the use of its chips in AI arrays.
New techniques may impact Nvidia’s market position, forcing the company to adapt its products to meet evolving AI hardware demand. Potentially, this could open more avenues for new competitors in the inference market.
A new age of AI development may be on the horizon, driven by evolving hardware demands and more efficient training methods such as those deployed in the o1 model. The future of both AI models and the companies behind them could be reshaped, unlocking unprecedented possibilities and greater competition.
See also: Anthropic urges AI regulation to avoid catastrophes
The post New AI training techniques aim to overcome current challenges appeared first on AI News. View the full article
  24. A survey by CloudNine PR shows that 83% of UK adults are aware of generative AI tools, and 45% of those familiar with them want companies to be transparent about the environmental costs associated with the technologies.
With data centres consuming vast amounts of energy, the growing demand for generative AI has sparked a debate about its sustainability.
The cost of intelligence: Generative AI’s carbon footprint
Behind every AI-generated email, idea, or recommendation are data centres running thousands of energy-hungry servers. Data centres are responsible both for training the large language models that power generative AI and for processing individual user queries. Unlike a simple Google search, which uses relatively little energy, a single generative AI request can consume up to ten times as much electricity.
The numbers are staggering. If all nine billion daily Google searches worldwide were replaced with generative AI tasks, the additional electricity demand would match the annual energy consumption of 1.5 million EU residents.
According to Morgan Stanley, the energy demands of generative AI are expected to grow by 70% annually until 2027. By that point, the energy required to support generative AI systems could rival the electricity needs of an entire country—Spain, for example, based on its 2022 usage.
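A back-of-the-envelope calculation shows how a figure like 1.5 million residents can be reached. The roughly 0.3 Wh per conventional search used below is a commonly cited outside estimate rather than a number from the survey, so treat the whole sketch as an assumption-laden illustration.

# Rough check of the search-vs-generative-AI comparison above.
SEARCH_WH = 0.3                # assumed energy per ordinary search (Wh)
GENAI_WH = SEARCH_WH * 10      # "up to ten times as much" per GenAI request
QUERIES_PER_DAY = 9e9          # nine billion daily searches

extra_wh_per_day = QUERIES_PER_DAY * (GENAI_WH - SEARCH_WH)
extra_twh_per_year = extra_wh_per_day * 365 / 1e12   # Wh -> TWh

# Spread the extra demand over 1.5 million people.
per_resident_mwh = extra_twh_per_year * 1e6 / 1.5e6
print(f"~{extra_twh_per_year:.1f} TWh/yr extra, ~{per_resident_mwh:.1f} MWh per resident per year")
# Prints ~8.9 TWh/yr and ~5.9 MWh/yr; the latter is close to average EU
# per-capita electricity consumption, so the comparison is plausible.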
UK consumers want greener AI practices
The survey also highlights growing awareness among UK consumers about the environmental implications of generative AI. Nearly one in five respondents said they don’t trust generative AI providers to manage their environmental impact responsibly. Among regular users of these tools, 10% expressed a willingness to pay a premium for products or services that prioritise energy efficiency and sustainability.
Interestingly, over a third (35%) of respondents think generative AI tools should “actively remind” users of their environmental impact. While this may seem a small step, it has the potential to encourage more mindful usage and put pressure on companies to adopt greener technologies.
Efforts to tackle the environmental challenge
Fortunately, some companies and policymakers are beginning to address these concerns. In the United States, the Artificial Intelligence Environmental Impacts Act was introduced earlier this year. The legislation aims to standardise how AI companies measure and report carbon emissions. It also provides a voluntary framework for developers to evaluate and disclose their systems’ environmental impact, pushing the industry towards greater transparency.
Major players in the tech industry are also stepping up. Companies like Salesforce have voiced support for legislation requiring standardised methods to measure and report AI’s carbon footprint.
Experts point to several practical ways to reduce generative AI’s environmental impact, including adopting energy-efficient hardware, using sustainable cooling methods in data centres, and transitioning to renewable energy sources.
Despite these efforts, the urgency to address generative AI’s environmental impact remains critical. As Uday Radia, owner of CloudNine PR, puts it: “Generative AI has huge potential to make our lives better, but there is a race against time to make it more sustainable before it gets out of control.”
(Photo by Unsplash)
See also: The AI revolution: Reshaping data centres and the digital landscape
The post Generative AI use soars among Brits, but is it sustainable? appeared first on AI News. View the full article
  25. Ai2 is releasing OLMo 2, a family of open-source language models that advances the democratisation of AI and narrows the gap between open and proprietary solutions.
The new models, available in 7B and 13B parameter versions, are trained on up to 5 trillion tokens and demonstrate performance levels that match or exceed comparable fully open models whilst remaining competitive with open-weight models such as Llama 3.1 on English academic benchmarks.
“Since the release of the first OLMo in February 2024, we’ve seen rapid growth in the open language model ecosystem, and a narrowing of the performance gap between open and proprietary models,” explained Ai2.
The development team achieved these improvements through several innovations, including enhanced training stability measures, staged training approaches, and state-of-the-art post-training methodologies derived from their Tülu 3 framework. Notable technical improvements include the switch from nonparametric layer norm to RMSNorm and the implementation of rotary positional embeddings.
OLMo 2 model training breakthrough
The training process employed a sophisticated two-stage approach. The initial stage utilised the OLMo-Mix-1124 dataset of approximately 3.9 trillion tokens, sourced from DCLM, Dolma, Starcoder, and Proof Pile II. The second stage incorporated a carefully curated mixture of high-quality web data and domain-specific content through the Dolmino-Mix-1124 dataset.
Particularly noteworthy is the OLMo 2-Instruct-13B variant, which is the most capable model in the series. The model demonstrates superior performance compared to the Qwen 2.5 14B instruct, Tülu 3 8B, and Llama 3.1 8B instruct models across various benchmarks. (Credit: Ai2)
Committing to open science
Reinforcing its commitment to open science, Ai2 has released comprehensive documentation including weights, data, code, recipes, intermediate checkpoints, and instruction-tuned models. This transparency allows for full inspection and reproduction of results by the wider AI community.
The release also introduces an evaluation framework called OLMES (Open Language Modeling Evaluation System), comprising 20 benchmarks designed to assess core capabilities such as knowledge recall, commonsense reasoning, and mathematical reasoning.
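Because the weights are public, trying the models should require only standard tooling. A minimal usage sketch via Hugging Face transformers follows; the model id shown is an assumption based on the release naming, so check Ai2’s documentation for the exact identifier.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id -- verify against Ai2's release notes.
model_id = "allenai/OLMo-2-1124-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))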

OLMo 2 raises the bar in open-source AI development, potentially accelerating the pace of innovation in the field whilst maintaining transparency and accessibility.
(Photo by Rick Barrett)
See also: OpenAI enhances AI safety with new red teaming methods
The post Ai2 OLMo 2: Raising the bar for open language models appeared first on AI News. View the full article