ChatGPT
-
Governments are scrambling to establish regulations to govern AI, citing numerous concerns over data privacy, bias, safety, and more. AI News caught up with Nerijus Šveistys, Senior Legal Counsel at Oxylabs, to understand the state of play when it comes to AI regulation and its potential implications for industries, businesses, and innovation.

“The boom of the last few years appears to have sparked a push to establish regulatory frameworks for AI governance,” explains Šveistys. “This is a natural development, as the rise of AI seems to pose issues in data privacy and protection, bias and discrimination, safety, intellectual property, and other legal areas, as well as ethics that need to be addressed.”

Regions diverge in regulatory strategy

The European Union’s AI Act has, unsurprisingly, positioned the region with a strict, centralised approach. The regulation, which came into force this year, is set to be fully effective by 2026. Šveistys pointed out that the EU has acted relatively swiftly compared to other jurisdictions: “The main difference we can see is the comparative quickness with which the EU has released a uniform regulation to govern the use of all types of AI.”

Meanwhile, other regions have opted for more piecemeal approaches. China, for instance, has been implementing regulations specific to certain AI technologies in a phased manner. According to Šveistys, China began regulating AI models as early as 2021. “In 2021, they introduced regulation on recommendation algorithms, which [had] increased their capabilities in digital advertising. It was followed by regulations on deep synthesis models or, in common terms, deepfakes and content generation in 2022,” he said. “Then, in 2023, regulation on generative AI models was introduced as these models were making a splash in commercial usage.”

The US, in contrast, remains relatively uncoordinated in its approach. Federal-level regulations are yet to be enacted, with efforts mostly emerging at the state level. “There are proposed regulations at the state level, such as the so-called California AI Act, but even if they come into power, it may still take some time before they do,” Šveistys noted. This delay in implementing unified AI regulations in the US has raised questions about the extent to which business pushback may be contributing to the slow rollout. Šveistys said that while lobbyist pressure is a known factor, it’s not the only potential reason. “There was pushback to the EU AI Act, too, which was nevertheless introduced. Thus, it is not clear whether the delay in the US is only due to lobbyism or other obstacles in the legislation enactment process,” explains Šveistys. “It might also be because some still see AI as a futuristic concern, not fully appreciating the extent to which it is already a legal issue of today.”

Balancing innovation and safety

Differentiated regulatory approaches could affect the pace of innovation and business competitiveness across regions. Europe’s regulatory framework, though more stringent, aims to ensure consumer protection and ethical adherence—something that less-regulated environments may lack. “More rigid regulatory frameworks may impose compliance costs for businesses in the AI field and stifle competitiveness and innovation. On the other hand, they bring the benefits of protecting consumers and adhering to certain ethical norms,” comments Šveistys.
This trade-off is especially pronounced in AI-related sectors such as targeted advertising, where algorithmic bias is increasingly scrutinised. AI governance often extends beyond laws that specifically target AI, incorporating related legal areas like those governing data collection and privacy. For example, the EU AI Act also regulates the use of AI in physical devices, such as elevators. “Additionally, all businesses that collect data for advertisement are potentially affected as AI regulation can also cover algorithmic bias in targeted advertising,” emphasises Šveistys.

Impact on related industries

One industry that is deeply intertwined with AI developments is web scraping. Typically used for collecting publicly available data, web scraping is undergoing an AI-driven evolution. “From data collection, validation, analysis, or overcoming anti-scraping measures, there is a lot of potential for AI to massively improve the efficiency, accuracy, and adaptability of web scraping operations,” said Šveistys. However, as AI regulation and related laws tighten, web scraping companies will face greater scrutiny. “AI regulations may also bring the spotlight on certain areas of law that were always very relevant to the web scraping industry, such as privacy or copyright laws,” Šveistys added. “At the end of the day, scraping content protected by such laws without proper authorisation could always lead to legal issues, and now so can using AI this way.”

Copyright battles and legal precedents

The implications of AI regulation are also playing out on a broader legal stage, particularly in cases involving generative AI tools. High-profile lawsuits have been launched against AI giants like OpenAI and its primary backer, Microsoft, by authors, artists, and musicians who claim their copyrighted materials were used to train AI systems without proper permission. “These cases are pivotal in determining the legal boundaries of using copyrighted material for AI development and establishing legal precedents for protecting intellectual property in the digital age,” said Šveistys. While these lawsuits could take years to resolve, their outcomes may fundamentally shape the future of AI development.

So, what can businesses do now as the regulatory and legal landscape continues to evolve? “Speaking about the specific cases of using copyrighted material for AI training, businesses should approach this the same way as any web-scraping activity – that is, evaluate the specific data they wish to collect with the help of a legal expert in the field,” recommends Šveistys. “It is important to recognise that the AI legal landscape is very new and rapidly evolving, with not many precedents in place to refer to as of yet. Hence, continuous monitoring and adaptation of your AI usage are crucial.”

Just this week, the UK Government made headlines with its announcement of a consultation on the use of copyrighted material for training AI models. Under the proposals, tech firms could be permitted to use copyrighted material unless owners have specifically opted out. Despite the diversity of approaches globally, the AI regulatory push marks a significant moment for technological governance. Whether through the EU’s comprehensive model, China’s step-by-step strategy, or narrower, state-level initiatives like in the US, businesses worldwide must navigate a complex, evolving framework.
The challenge ahead will be striking the right balance between fostering innovation and mitigating risks, ensuring that AI remains a force for good while avoiding potential harms. (Photo by Nathan Bingle) See also: Anthropic urges AI regulation to avoid catastrophes Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI governance: Analysing emerging global regulations appeared first on AI News. View the full article
-
In a world where artificial intelligence is becoming omnipresent, it’s fascinating to think about the prospect of AI-powered robots and digital avatars that can experience emotions, similar to humans. AI models lack consciousness and they don’t have the capacity to feel emotions, but what possibilities might arise if that were to change? The birth of emotional AI The prospect of an AI system embracing those first sparks of emotion is perhaps not as far-fetched as one might think. Already, AI systems have some ability to gauge people’s emotions, and increasingly they’re also able to replicate those feelings in their interactions with humans. It still requires a leap of faith to imagine an AI that could feel genuine emotions, but if it ever becomes possible, we’d imagine that they’ll be somewhat basic at first, similar to those of a child. Perhaps, an AI system might be able to feel joy at successfully completing a task, or maybe even confusion when presented with a challenge it doesn’t know how to solve. From there, it’s not difficult to envision that feeling of confusion evolving to one of frustration at its repeated failures to tackle the problem in question. And as this system evolves further, perhaps its emotional spectrum might expand to even feel a tinge of sadness or regret. Should AI ever be able to feel such emotions, it wouldn’t be long before they could express more nuanced feelings, like excitement, impatience, and empathy for humans and other AIs. For instance, in a scenario where an AI system acquires a new skill or solves a new kind of problem, it might be able to experience a degree of satisfaction in success. This is similar to how humans feel when they solve a particularly taxing challenge, like a complex jigsaw puzzle, or when they do something for the first time, like driving a car. Empathy as a motivator As AI’s ability to feel emotion evolves, it would become increasingly complex, progressing to a stage where it can even feel empathy for others. Empathy is one of the most complex human emotions, involving understanding and sharing the feelings of someone else. If AI can experience such feelings, they may inspire it to become more helpful, similar to how humans are sometimes motivated to help someone less fortunate. An AI that’s designed to assist human doctors might feel sad for someone who is afflicted by a mysterious illness. The feelings might push it to try harder to find a diagnosis for the rare disease that person is suffering from. If it gets it right, the AI might feel an overwhelming sense of accomplishment at doing so, knowing that the afflicted patient will be able to receive the treatment they need. Or we can consider an AI system that’s built to detect changes to an environment. If such a system were to recognise a substantial increase in pollution in a certain area, it might feel disappointed or even saddened by such a discovery. But like with humans, the feelings might also inspire the AI to find ways to prevent this new source of pollution, perhaps by inventing a more efficient way to recycle or dispose of the toxic substance responsible. In a similar way, an AI system that encounters numerous errors in a dataset might be compelled to refine its algorithm to reduce the number of errors. This would also have a direct impact on human-to-AI interactions. It’s not hard to imagine that an AI-powered customer service bot that feels empathy for a customer might be willing to go the extra mile to help resolve that person’s problem. 
Or alternatively, we might get AI teachers with a better understanding of their students’ emotions, which can then adapt teaching methods appropriately. Empathetic AI could transform the way we treat people with mental health issues. The concept of a digital therapist is not new, but if a digital therapist can better relate to their patients on an emotional level, it can figure out how best to support them. Is this even possible? Surprisingly, we may not be that far off. AI systems like Antix are already capable of expressing artificial empathy. It’s a platform for creating digital humans that are programmed to respond sympathetically when they recognise feelings of frustration, anger or upset in the people they interact with. Its digital humans can detect people’s emotions based on their speech, the kinds of words they use, intonation, and body language. The ability of Antix’s digital humans to understand emotion is partly based on the way they are trained. Each digital human is a unique non-fungible token or NFT that learns over time from its users, gaining more knowledge and evolving so it can adapt its interactions in response to an individual’s behaviour or preferences. Because digital humans can recognise emotions and replicate them, they have the potential to deliver more profound and meaningful experiences. Antix utilises the Unreal Engine 5 platform to give its creations a more realistic appearance. Creators can alter almost every aspect of their digital humans, including the voice and appearance, with the ability to edit skin tone, eye colour, and small details like eyebrows and facial hair. What sets Antix apart from other AI platforms is that users can customise the behaviour of their digital humans, to provide the most appropriate emotional response in different scenarios. Thus, digital humans can respond with an appropriate tone of voice, making the right gestures and expressions when they’re required to feel sad, for example, before transforming in an instant to express excitement, happiness, or joy. AI is getting real Emotional AI systems are a work in progress, and the result will be digital humans that feel more lifelike in any scenario where they can be useful. The CEO of Zoom has talked about the emergence of AI-powered digital twins that can participate in video calls on their user’s behalf, allowing the user to be in two places at once, so to speak. If the digital human version of your boss can express empathy, satisfaction, excitement and anger, the concept would be more effective, fostering a more realistic connection, even if the real boss isn’t present in their physical form. A customer service-focused digital human that’s able to empathise with callers will likely have a tremendous impact on customer satisfaction, and a sympathetic digital teacher might find ways to elicit more positive responses from its students, accelerating the speed at which they learn. With digital humans capable of expressing emotions, the potential for more realistic, lifelike, and immersive experiences is almost limitless, and it will result in more rewarding and beneficial interactions with AI systems. The post What might happen if AI can feel emotions? appeared first on AI News. View the full article
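To make the emotion-recognition capability described above more concrete, here is a minimal sketch of text-based emotion classification using an off-the-shelf open-source model. This is not Antix's actual pipeline; the model name and the simple response logic are illustrative assumptions.

```python
from transformers import pipeline

# Off-the-shelf emotion classifier; the model name is an assumption for illustration.
emotion = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

utterance = "I've been waiting forty minutes and nobody has called me back."
result = emotion(utterance)[0]  # e.g. {'label': 'anger', 'score': 0.97}

# A digital human could branch on the detected label to pick an empathetic reply.
if result["label"] in {"anger", "sadness", "fear"}:
    reply = "I'm really sorry about the wait. Let me sort this out for you right now."
else:
    reply = "Thanks for your patience! How can I help?"
print(result["label"], reply)
```

In a production system the same idea would be combined with acoustic and visual cues (intonation, body language), but even a text-only classifier like this illustrates how an agent can condition its response on a detected emotional state.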
-
The UK Government wants to prove that AI is being deployed responsibly within public services to speed up decision-making, reduce backlogs, and enhance support for citizens. New records, part of the Algorithmic Transparency Recording Standard (ATRS), were published this week to shed light on the AI tools being used and set a benchmark for transparency and accountability in the integration of technology in public service delivery. The initiative is part of the government’s broader strategy to embrace technology to improve outcomes, echoing commitments outlined in the “Plan for Change” to modernise public services and drive economic growth through innovative solutions.

The power of AI for modernisation

Among the published records, the Foreign, Commonwealth and Development Office is leveraging AI to provide faster responses to Britons seeking assistance overseas. Similarly, the Ministry of Justice is utilising algorithms to help researchers gain a deeper understanding of how individuals interact with the justice system, while other departments are deploying AI to enhance job advertisements. The ATRS aims to document how such algorithmic tools are utilised and ensure their responsible application. By doing so, the government hopes to strengthen public trust in these innovations while encouraging their continued adoption across sectors.

Speaking on the government’s approach, Science Secretary Peter Kyle remarked: “Technology has huge potential to transform public services for the better; we will put it to use to cut backlogs, save money, and improve outcomes for citizens across the country. Transparency in how and why the public sector is using algorithmic tools is crucial to ensure that they are trusted and effective. That is why we will continue to take bold steps like releasing these records to make sure everyone is clear on how we are applying and trialling technology as we use it to bring public services back from the brink.”

Specifically, the Department for Business and Trade has highlighted its algorithmic tool designed to predict which companies are likely to export goods internationally. The AI-driven approach allows officials to target support towards high-growth potential businesses, enabling them to reach global markets faster. Officials were previously reliant on time-consuming manual methods to analyse the more than five million companies registered on Companies House; this advancement ensures better allocation of resources and expedited assistance.

Business Secretary Jonathan Reynolds said: “Our Plan for Change will deliver economic growth, and for that to succeed, we need to support companies across the UK to realise their full potential when it comes to exporting around the globe. Our use of AI plays a vital and growing role in that mission, allowing high-growth businesses to maximise the export opportunities available to them, while ensuring that we are using taxpayers’ money responsibly and efficiently in delivering economic stability.”

Establishing clear guidelines for AI in public services

To bolster public trust, new guidelines have been announced to clarify the scope of algorithmic transparency records. Central government organisations will need to publish a record for any algorithmic tool that interacts directly with citizens or plays a significant role in decision-making about individuals. Limited exceptions, such as those concerning national security, apply. These records will be published once tools are piloted publicly or have become operational.
They will detail the data used to train AI models, the underlying technologies, and the measures implemented to mitigate risks. Importantly, the records also seek to confirm that – while AI tools are used to accelerate decision-making processes – human oversight remains integral, with trained staff responsible for final decisions.

Dr Antonio Espingardeiro, a member of IEEE and an expert in software and robotics, commented: “AI has the potential to radically transform the public sector. In recent years, we have seen AI become a credible part of everyday public services. As it becomes more sophisticated, AI can conduct data-heavy tasks traditionally undertaken by humans. It can analyse vast quantities of information and, when coupled with machine learning, search through records and infer patterns or anomalies in data that would otherwise take decades for humans to analyse. With this announcement, the UK government has acknowledged AI’s potential and proven that technology investment is essential to improving outcomes and the delivery of vital services. Over time, machine learning and generative AI (GenAI) could bring substantial value to the public system. With increased adoption, we will soon be able to deliver the scalability that the public sector needs and relieve the pressures and workloads placed on staff.”

Eleanor Watson, also a member of IEEE and an AI ethics engineer affiliated with Singularity University, added: “With AI growing more rapidly than ever before, and already being tested and employed in education, healthcare, transportation, finance, data security, and more, the government, tech leaders, and academia should work together to establish standards and regulations for safe and responsible development of AI-based systems. This way, AI can be used to its full potential as indicated with this latest announcement. Data privacy is probably the most critical ethical consideration, requiring informed consent, data anonymisation, strict access controls, secure storage, and compliance. New techniques such as homomorphic encryption, zero-knowledge proofs, federated learning, and part-trained models can help models to make use of our personal data in an encrypted form.”

Transparency remains a key tenet of the UK Government’s AI strategy. This announcement follows a recent statement by Pat McFadden, Chancellor of the Duchy of Lancaster, who affirmed that the benefits of technology – particularly AI – must span both public and private sectors and be used to modernise government. As the Science Secretary’s department solidifies government efforts to create a “digital centre,” it marks a major step forward in boosting the responsible and effective use of AI across the UK’s public sector. The ATRS records offer a valuable template for how governments worldwide can deploy AI systems to maximise efficiency, grow transparency, and balance the need for innovation with ethical considerations.

(Photo by Shreyas Sane) See also: MHRA pilots ‘AI Airlock’ to accelerate healthcare adoption Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post UK wants to prove AI can modernise public services responsibly appeared first on AI News.
View the full article
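For illustration, the kind of information a transparency record covers (training data, underlying technique, risk mitigations, and human oversight) might be represented along the following lines. The field names and example values below are assumptions for illustration only, not the official ATRS schema.

```python
# Hypothetical sketch of a transparency record as a Python dict.
# Field names are illustrative assumptions, not the published ATRS format.
transparency_record = {
    "tool_name": "Export propensity model",              # hypothetical example tool
    "organisation": "Department for Business and Trade",
    "phase": "operational",                               # records are published once piloted or live
    "purpose": "Prioritise export support for high-growth companies",
    "model": {
        "technique": "supervised classifier",             # assumed, for illustration
        "training_data": "Companies House registrations and trade records",
    },
    "risk_mitigations": [
        "Bias testing across company size and region",
        "Periodic retraining and performance review",
    ],
    "human_oversight": "Trained officials make the final decision on support offers",
}
```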
-
Amazon has announced an additional $4 billion investment in Anthropic, bringing the company’s total commitment to $8 billion, part of its expanding artificial intelligence strategy. The investment was announced on November 22, 2024, and strengthens Amazon’s position in the AI sector, building on its established cloud computing services in the form of AWS. While maintaining Amazon’s minority stake in Anthropic, the investment represents a significant development in the company’s approach to AI technology and cloud infrastructure.

The expanded collaboration goes beyond mere financial investment. Anthropic has now designated AWS as its “primary training partner” for AI model development, in addition to Amazon’s role as a primary cloud provider. Amazon’s investment will see Anthropic utilizing AWS Trainium and Inferentia chips to train and deploy its future foundational models, including any updates to the flagship Claude AI system.

AWS’s competitive edge

The continuing partnership provides Amazon with several strategic advantages in the competitive cloud computing and AI services market:

Hardware innovation: The commitment to use AWS Trainium and Inferentia chips for Anthropic’s advanced AI models validates Amazon’s investment in custom AI chips and positions AWS as a serious competitor to NVIDIA in the AI infrastructure space.
Cloud service enhancement: AWS customers will receive early access to fine-tuning capabilities for data processed by Anthropic models. This benefit alone could attract more enterprises to Amazon’s cloud platform.
Model performance: Claude 3.5 Sonnet, Anthropic’s latest model available through Amazon Bedrock, has demonstrated exceptional performance in agentic coding tasks, according to Anthropic.

Amazon’s multi-faceted AI strategy

While the increased investment in Anthropic is impressive in monetary terms, it represents just one component of Amazon’s broader AI strategy. The company appears to be pursuing a multi-pronged approach:

External partnerships: The Anthropic investment provides immediate access to cutting-edge AI capabilities from third-parties.
Internal development: Amazon continues to develop its own AI models and capabilities.
Infrastructure development: Ongoing investment in AI-specific hardware like Trainium chips demonstrates a commitment to building AI-focussed infrastructure.

The expanded partnership signals Amazon’s long-term commitment to AI development yet retains flexibility thanks to its minority stakeholding. This approach allows Amazon to benefit from Anthropic’s innovations while preserving the ability to pursue other partnerships with external AI companies and continue internal development initiatives. The investment reinforces the growing trend where major tech companies seek strategic AI partnerships rather than relying solely on internal development. It also highlights the important role of cloud infrastructure in the AI industry’s growth. AWS has positioned itself as a suitable platform for AI model training and deployment.

The post Amazon stakes $4bn more in Anthropic–the next tech arms race? appeared first on AI News. View the full article
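As a concrete illustration of the Bedrock availability mentioned above, the following is a minimal sketch of invoking Claude 3.5 Sonnet through Amazon Bedrock with boto3. The model identifier and the request body format are assumptions that should be checked against current AWS and Anthropic documentation for your region.

```python
# Minimal sketch: calling Claude 3.5 Sonnet via Amazon Bedrock with boto3.
# Assumes AWS credentials and Bedrock model access are already configured.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",   # assumed version string
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Write a one-line docstring for a binary search function."}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed ID; verify in your region
    body=json.dumps(body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```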
-
CrowdStrike commissioned a survey of 1,022 cybersecurity professionals worldwide to assess their views on generative AI (GenAI) adoption and its implications. The findings reveal enthusiasm for GenAI’s potential to bolster defences against increasingly sophisticated threats, but also trepidation over risks such as data exposure and attacks on GenAI systems. While much has been speculated about the transformative impact of GenAI, the survey’s results paint a clearer picture of how practitioners are thinking about its role in cybersecurity. According to the report, “We’re entering the era of GenAI in cybersecurity.” However, as organisations adopt this promising technology, their success will hinge on ensuring the safe, responsible, and industry-specific deployment of GenAI tools. CrowdStrike’s research reveals five pivotal findings that shape the current state of GenAI in cybersecurity: Platform-based GenAI is favoured 80% of respondents indicated a preference for GenAI delivered through integrated cybersecurity platforms rather than standalone tools. Seamless integration is cited as a crucial factor, with many preferring tools that work cohesively with existing systems. “GenAI’s value is linked to how well it works within the broader technology ecosystem,” the report states. Moreover, almost two-thirds (63%) of those surveyed expressed willingness to switch security vendors to access GenAI capabilities from competitors. The survey underscores the industry’s readiness for unified platforms that streamline operations and reduce the complexity of adopting new point solutions. GenAI built by cybersecurity experts is a must Security teams believe GenAI tools should be specifically designed for cybersecurity, not general-purpose systems. 83% of respondents reported they would not trust tools that provide “unsuitable or ill-advised security guidance.” Breach prevention remains a key motivator, with 74% stating they had faced breaches within the past 18 months or were concerned about vulnerabilities. Respondents prioritised tools from vendors with proven expertise in cybersecurity, incident response, and threat intelligence over suppliers with broad AI leadership alone. As CrowdStrike summarised, “The emphasis on breach prevention and vendor expertise suggests security teams would avoid domain-agnostic GenAI tools.” Augmentation, not replacement Despite growing fears of automation replacing jobs in many industries, the survey’s findings indicate minimal concerns about job displacement in cybersecurity. Instead, respondents expect GenAI to empower security analysts by automating repetitive tasks, reducing burnout, onboarding new personnel faster, and accelerating decision-making. GenAI’s potential for augmenting analysts’ workflows was underscored by its most requested applications: threat intelligence analysis, assistance with investigations, and automated response mechanisms. As noted in the report, “Respondents overwhelmingly believe GenAI will ultimately optimise the analyst experience, not replace human labour.” ROI outweighs cost concerns For organisations evaluating GenAI investments, measurable return on investment (ROI) is the paramount concern, ahead of licensing costs or pricing model confusion. Respondents expect platform-led GenAI deployments to deliver faster results, thanks to cost savings from reduced tool management burdens, streamlined training, and fewer security incidents. 
According to the survey data, the expected ROI breakdown includes 31% from cost optimisation and more efficient tools, 30% from fewer incidents, and 26% from reduced management time. Security leaders are clearly focused on ensuring the financial justification for GenAI investments. Guardrails and safety are crucial GenAI adoption is tempered by concerns around safety and privacy, with 87% of organisations either implementing or planning new security policies to oversee GenAI use. Key risks include exposing sensitive data to large language models (LLMs) and adversarial attacks on GenAI tools. Respondents rank safety and privacy controls among their most desired GenAI features, highlighting the need for responsible implementation. Reflecting the cautious optimism of practitioners, only 39% of respondents firmly believed that the rewards of GenAI outweigh its risks. Meanwhile, 40% considered the risks and rewards “comparable.” Current state of GenAI adoption in cybersecurity GenAI adoption remains in its early stages, but interest is growing. 64% of respondents are actively researching or have already invested in GenAI tools, and 69% of those currently evaluating their options plan to make a purchase within the year. Security teams are primarily driven by three concerns: improving attack detection and response, enhancing operational efficiency, and mitigating the impact of staff shortages. Among economic considerations, the top priority is ROI – a sign that security leaders are keen to demonstrate tangible benefits to justify their spending. CrowdStrike emphasises the importance of a platform-based approach, where GenAI is integrated into a unified system. Such platforms enable seamless adoption, measurable benefits, and safety guardrails for responsible usage. According to the report, “The future of GenAI in cybersecurity will be defined by tools that not only advance security but also uphold the highest standards of safety and privacy.” The CrowdStrike survey concludes by affirming that “GenAI is not a silver bullet” but has tremendous potential to improve cybersecurity outcomes. As organisations evaluate its adoption, they will prioritise tools that integrate seamlessly with existing platforms, deliver faster response times, and ensure safety and privacy compliance. With threats becoming more sophisticated, the role of GenAI in enabling security teams to work faster and smarter could prove indispensable. While still in its infancy, GenAI in cybersecurity is poised to shift from early adoption to mainstream deployment, provided organisations and vendors address its risks responsibly. See also: Keys to AI success: Security, sustainability, and overcoming silos Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post CrowdStrike: Cybersecurity pros want safer, specialist GenAI tools appeared first on AI News. View the full article
-
Privacy laws in the United States are a patchwork at best. More often than not, they miss the mark, leaving most people with little actual privacy. When such laws are enacted, they can seem tailored to protect those in positions of power. Even laws designed to protect crime victims might end up protecting the names of abusive officers by labelling them as victims of crime in cases like resisting arrest or assaulting an officer. Such accusations are often used in cases of excessive force, keeping cops’ names out of the spotlight.

For example, a recent New Jersey law emerged from a tragic event in which a government employee faced violence, sparking a legislative response. Known as “Daniel’s Law,” it was created after the personal information of a federal judge’s family was used by a murderer to track them down. Instead of a broader privacy law that could protect all residents of New Jersey, it focused exclusively on safeguarding certain public employees. Under the law, judges, prosecutors, and police officers can request that their personal information (addresses and phone numbers, for example) be scrubbed from public databases. Popular services that people use to look up information, such as Whitepages or Spokeo, must comply. While this sounds like a win for privacy, the protections stop there. The average citizen is still left exposed, with no legal recourse if their personal data is misused or sold.

At the centre of the debate is a lawyer who’s taken up the cause of protecting cops’ personal data. He’s suing numerous companies for making this type of information accessible. While noble at first glance, a deeper look raises questions. It transpires that the lawyer’s company has previously collected and monetised personal data. And when a data service responded to his demands by freezing access to some of the firm’s databases, he and his clients cried foul — despite specifically requesting restrictions on how their information could be used.

It’s also worth noting how unevenly data protection measures are to be applied. Cops, for instance, frequently rely on the same tools and databases they’re now asking to be restricted. These services have long been used by law enforcement for investigations and running background checks. Yet, when law enforcement data appears in such systems, special treatment is required. A recent anecdote involved a police union leader who was shown a simple property record pulled from an online database. The record displayed basic details like his home address and his property’s square footage — information anyone could find with a few clicks. His reaction was one of shock and anger – an obvious disconnect. For everyday citizens, this level of data exposure is a given. But for law enforcement, it requires a level of granular exclusion that’s not practical.

Perhaps everyone, including law enforcement personnel, deserves better safeguards against data harvesting and misuse? What Daniel’s Law and the later events involving police officers point to is the need to improve the way data is treated for all, not just one group of society. Instead of expanding privacy rights to all New Jersey residents, the law carves out exceptions for the powerful — leaving the rest of the population as vulnerable as ever.

(Photo by Unsplash) See also: EU AI legislation sparks controversy over data transparency Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post NJ cops demand protections against data brokers appeared first on AI News. View the full article
-
Artificial intelligence platform provider Clarifai has unveiled a new compute orchestration capability that promises to help enterprises optimise their AI workloads in any computing environment, reduce costs and avoid vendor lock-in. Announced on December 3, 2024, the public preview release lets organisations orchestrate AI workloads through a unified control plane, whether those workloads are running on cloud, on-premises, or in air-gapped infrastructure. The platform can work with any AI model and hardware accelerator including GPUs, CPUs, and TPUs.

“Clarifai has always been ahead of the curve, with over a decade of experience supporting large enterprise and mission-critical government needs with the full stack of AI tools to create custom AI workloads,” said Matt Zeiler, founder and CEO of Clarifai. “Now, we’re opening up capabilities we built internally to optimise our compute costs as we scale to serve millions of models simultaneously.”

The company claims its platform can reduce compute usage by 3.7x through model packing optimisations while supporting over 1.6 million inference requests per second with 99.9997% reliability. According to Clarifai, the optimisations can potentially cut costs by 60-90%, depending on configuration. Capabilities of the compute orchestration platform include:

Cost optimisation through automated resource management, including model packing, dependency simplification, and customisable auto-scaling options that can scale to zero for model replicas and compute nodes,
Deployment flexibility on any hardware vendor including cloud, on-premise, air-gapped, and Clarifai SaaS infrastructure,
Integration with Clarifai’s AI platform for data labeling, training, evaluation, workflows, and feedback,
Security features that allow deployment into customer VPCs or on-premise Kubernetes clusters without requiring open inbound ports, VPC peering, or custom IAM roles.

The platform emerged from Clarifai customers’ issues with AI performance and cost. “If we had a way to think about it holistically and look at our on-prem costs compared to our cloud costs, and then be able to orchestrate across environments with a cost basis, that would be incredibly valuable,” noted a customer, as cited in Clarifai’s announcement.

The compute orchestration capabilities build on Clarifai’s existing AI platform that, the company says, has processed over 2 billion operations in computer vision, language, and audio AI. The company reports maintaining 99.99%+ uptime and 24/7 availability for critical applications. The compute orchestration capability is currently available in public preview. Organisations interested in testing the platform should contact Clarifai for access. The post New Clarifai tool orchestrates AI across any infrastructure appeared first on AI News. View the full article
-
Artificial Intelligence and its associated innovations have revamped the global technological landscape, with recent data released by the US government predicting 13% growth in IT-related opportunities over the next six years – potentially adding 667,600 new jobs to the sector. Researchers have stated that by 2034, the AI sector’s cumulative valuation may reach $3.6 trillion across industry. The healthcare sector has already integrated AI-based diagnostic tools, with 38% of today’s major medical providers using the technology. The financial sector is also expecting AI to contribute approximately $15.7 trillion to the global economy by 2030, and the retail industry anticipates anywhere between $400 billion and $660 billion through AI-driven customer experiences annually. It is estimated that approximately 83% of companies now have AI exploration as an agenda item for continued technical growth, especially given its capacity to drive innovation, enhance efficiency, and create sustainable competitive advantage.

Decentralising AI’s foundations

While AI’s potential is seemingly limitless, its rapid growth has brought a challenge – the centralisation of AI development and data management. As AI systems become more sophisticated, risks like dataset manipulation, biased training models, and opaque decision-making processes threaten to undermine their potential. Different blockchain tech providers have taken steps to decentralise the sector, offering infrastructure frameworks that change how AI systems are developed, trained, and deployed.

Space and Time (SXT) has devised a verifiable database that aims to bridge the gap between disparate areas, providing users with transparent, secure development tools that mean AI agents can execute transactions with greater levels of data integrity. The platform’s innovation lies in its ability to provide contextual data which AI agents can use for executing trades and purchases in ways that end-users can validate.

Another project of note is Chromia. It takes a similar approach, with a focus on creating a decentralised architecture to handle complex, data-intensive AI applications. Speaking about the platform’s capabilities, Yeou Jie Goh, Head of Business Development at Chromia, said: “Our relational blockchain is specifically designed to support AI applications, performing hundreds of read-write operations per transaction and indexing data in real-time. We’re not just building a blockchain; we’re creating the infrastructure for the next generation of AI development.”

Chromia wants to lower the barriers to entry for data scientists and machine learning engineers. By providing a SQL-based relational blockchain, the platform makes it easier for technical professionals to build and deploy AI applications on decentralised infrastructure. “Our mission is to position Chromia as the transparency layer of Web3, providing a robust backbone for data integrity across applications,” Goh said. Chromia has already formed partnerships with Elfa AI, Chasm Network, and Stork.

Establishing a roadmap for technological sovereignty

The synergy between AI and blockchain is more than a fad; rather, it is a reimagining of AI’s infrastructure. Space and Time, for instance, is working to expand its ecosystem in multiple domains, including AI, DeFi, gaming, and decentralised physical infrastructure networks (DePIN). Its strategy focuses on onboarding developers and building a mainnet that delivers verifiable data to smart contracts and AI agents.
Chromia is ambitious, having launched a $20 million Data and AI Ecosystem Fund earlier this year. The project’s ‘Asgard Mainnet Upgrade’, with an ‘Extensions’ feature, offers users adaptable application use. The implications of AI’s shift toward decentralisation are of significant interest to Nate Holiday, CEO of Space and Time. He predicts that blockchain-based transactions associated with AI agents could grow from the current 3% of the market to 30% in the near future. He said: “Ushering in this inevitable, near-term future is going to require data infrastructure like SXT that provides AI agents with the context that they need to execute trades and purchases in a way that the end user can verify.”

Chromia’s Yeou Jie Goh sees the transition not just as a technological innovation but as a means of creating a more transparent, secure, and democratised technological ecosystem. By using blockchain’s inherent strengths – immutability, transparency, and decentralisation – the two companies are working to create intelligent systems that are powerful, accountable, ethical, and aligned with human values. The post A new decentralised AI ecosystem and its implications appeared first on AI News. View the full article
-
In just a few years, the realm of AI has transcended its initial computational boundaries, emerging as one of the transformative forces of the 21st century, permeating virtually every major economic sector. The global AI market was valued at $638.23 billion during Q4 2024, and is projected to reach a valuation of $3.6 trillion by 2034, largely because AI has the potential to gain widespread adoption in multiple industries. For instance, in healthcare, 38% of all major medical providers use advanced AI diagnostic tools. Similarly, the financial sector has also demonstrated impressive integrations, with AI projected to contribute approx. $15.7 trillion to the global economy by 2030. The retail industry has also set its sights on anywhere between $400 billion and $660 billion annually thanks to AI-driven customer experiences, while the cybersecurity sector is set to register a 23.6% growth rate – by 2027 – because of AI-powered intelligent security technologies. It is estimated that about 83% of companies have already begun considering AI as a strategic priority, recognising its potential to drive innovation, enhance efficiency, and create competitive advantages. Simplifying everyday life with AI With the global tech landscape having transformed over the last couple of years, we are now at a point where AI is starting to automate various mundane and time-consuming everyday tasks. The concept of ‘AI twins’ has gained traction recently, allowing individuals to manage scheduling, respond to emails, conduct research, and handle complex administrative tasks efficiently. These digital companions represent more than just a caricature of a person’s real-world identity; they offer productivity accelerators designed to liberate anyone from repetitive work cycles (thus allowing them to focus on more creative, strategic, or emotionally nuanced tasks). They can also help reduce the cognitive and emotional burden of managing multiple responsibilities, which could otherwise be overwhelming for some. Leading this revolution is Twin Protocol, a platform that seeks to redefine how humans interact with AI, primarily via the creation of secure, dynamic digital representations that can learn, adapt, and evolve alongside their human counterparts. By using the power of trained machine learning algorithms and decentralised ledgers, Twin Protocol allows individuals to develop digital twins that can capture not just information, but individual expertise and personality traits. The platform’s potential spans industries, ranging from healthcare to manufacturing and finance. Imagine possessing the ability to deploy a perpetually-available AI twin that can provide personalised healthcare advice, or a digital representation of a financial advisor offering real-time, context-aware investment strategies. These twins aren’t designed to serve as mere information repositories but as intelligent and dynamic tools capable of understanding context, learning continuously, and providing nuanced, personalised interactions. What sets Twin Protocol apart is its commitment to maintaining individual agency and data privacy. Through its blockchain-based ‘Twin Vault‘, users can retain control over their digital identity, ensuring that personal information remains secure. Thanks to its unique proposition, the platform has attracted several collaborations, including partnerships with SingularityNET and notable figures like Deepak Chopra and Robert Bell (founding member of Kool & the Gang). 
AI’s potential is still uncharted

It is estimated that over the coming decade, AI’s potential could grow hugely, with offerings like Twin Protocol demonstrating next-generation technology, allowing users to explore new concepts like digital twins. From personalised education to industrial optimisation, AI is moving beyond the category of being a tool, becoming a transformative partner capable of extending individual capabilities. AI’s journey and symbiosis are likely to push the boundaries of what’s possible today. Some of the most exciting innovations emanating from the field will lie not in the technology, but in how its potential is applied to other fields. Interesting times are ahead! The post The ongoing AI revolution is reshaping the world, one algorithm at a time appeared first on AI News. View the full article
-
Google CEO Sundar Pichai has announced the launch of Gemini 2.0, a model that represents the next step in Google’s ambition to revolutionise AI. A year after introducing the Gemini 1.0 model, this major upgrade incorporates enhanced multimodal capabilities, agentic functionality, and innovative user tools designed to push boundaries in AI-driven technology.

Leap towards transformational AI

Reflecting on Google’s 26-year mission to organise and make the world’s information accessible, Pichai remarked, “If Gemini 1.0 was about organising and understanding information, Gemini 2.0 is about making it much more useful.” Gemini 1.0, released in December 2023, was notable for being Google’s first natively multimodal AI model. The first iteration excelled at understanding and processing text, video, images, audio, and code. Its enhanced 1.5 version became widely embraced by developers for its long-context understanding, enabling applications such as the productivity-focused NotebookLM.

Now, with Gemini 2.0, Google aims to accelerate the role of AI as a universal assistant capable of native image and audio generation, better reasoning and planning, and real-world decision-making capabilities. In Pichai’s words, the development represents the dawn of an “agentic era.” “We have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” Pichai explained.

Gemini 2.0: Core features and availability

At the heart of today’s announcement is the experimental release of Gemini 2.0 Flash, the flagship model of Gemini’s second generation. It builds upon the foundations laid by its predecessors while delivering faster response times and advanced performance. Gemini 2.0 Flash supports multimodal inputs and outputs, including the ability to generate native images in conjunction with text and produce steerable text-to-speech multilingual audio. Additionally, users can benefit from native tool integration such as Google Search and even third-party user-defined functions.

Developers and businesses will gain access to Gemini 2.0 Flash via the Gemini API in Google AI Studio and Vertex AI, while larger model sizes are scheduled for broader release in January 2025. For global accessibility, the Gemini app now features a chat-optimised version of the 2.0 Flash experimental model. Early adopters can experience this updated assistant on desktop and mobile, with a mobile app rollout imminent. Products such as Google Search are also being enhanced with Gemini 2.0, unlocking the ability to handle complex queries like advanced math problems, coding enquiries, and multimodal questions.

Comprehensive suite of AI innovations

The launch of Gemini 2.0 comes with compelling new tools that showcase its capabilities. One such feature, Deep Research, functions as an AI research assistant, simplifying the process of investigating complex topics by compiling information into comprehensive reports. Another upgrade enhances Search with Gemini-enabled AI Overviews that tackle intricate, multi-step user queries. The model was trained using Google’s sixth-generation Tensor Processing Units (TPUs), known as Trillium, which Pichai notes “powered 100% of Gemini 2.0 training and inference.” Trillium is now available for external developers, allowing them to benefit from the same infrastructure that supports Google’s own advancements.
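For developers, access via the Gemini API might look like the minimal sketch below. It assumes the google-generativeai Python SDK and the experimental model identifier "gemini-2.0-flash-exp"; both should be verified against the current Google AI Studio documentation.

```python
# Minimal sketch: calling the experimental Gemini 2.0 Flash model via the Gemini API.
# Assumes an API key from Google AI Studio is set in the GOOGLE_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-2.0-flash-exp")  # assumed experimental model ID
response = model.generate_content(
    "Summarise, in two sentences, what an 'agentic' AI model is."
)
print(response.text)
```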
Pioneering agentic experiences Accompanying Gemini 2.0 are experimental “agentic” prototypes built to explore the future of human-AI collaboration, including: Project Astra: A universal AI assistant First introduced at I/O earlier this year, Project Astra taps into Gemini 2.0’s multimodal understanding to improve real-world AI interactions. Trusted testers have trialled the assistant on Android, offering feedback that has helped refine its multilingual dialogue, memory retention, and integration with Google tools like Search, Lens, and Maps. Astra has also demonstrated near-human conversational latency, with further research underway for its application in wearable technology, such as prototype AI glasses. Project Mariner: Redefining web automation Project Mariner is an experimental web-browsing assistant that uses Gemini 2.0’s ability to reason across text, images, and interactive elements like forms within a browser. In initial tests, it achieved an 83.5% success rate on the WebVoyager benchmark for completing end-to-end web tasks. Early testers using a Chrome extension are helping to refine Mariner’s capabilities while Google evaluates safety measures that ensure the technology remains user-friendly and secure. Jules: A coding agent for developers Jules, an AI-powered assistant built for developers, integrates directly into GitHub workflows to address coding challenges. It can autonomously propose solutions, generate plans, and execute code-based tasks—all under human supervision. This experimental endeavour is part of Google’s long-term goal to create versatile AI agents across various domains. Gaming applications and beyond Extending Gemini 2.0’s reach into virtual environments, Google DeepMind is working with gaming partners like Supercell on intelligent game agents. These experimental AI companions can interpret game actions in real-time, suggest strategies, and even access broader knowledge via Search. Research is also being conducted into how Gemini 2.0’s spatial reasoning could support robotics, opening doors for physical-world applications in the future. Addressing responsibility in AI development As AI capabilities expand, Google emphasises the importance of prioritising safety and ethical considerations. Google claims Gemini 2.0 underwent extensive risk assessments, bolstered by the Responsibility and Safety Committee’s oversight to mitigate potential risks. Additionally, its embedded reasoning abilities allow for advanced “red-teaming,” enabling developers to evaluate security scenarios and optimise safety measures at scale. Google is also exploring safeguards to address user privacy, prevent misuse, and ensure AI agents remain reliable. For instance, Project Mariner is designed to prioritise user instructions while resisting malicious prompt injections, preventing threats like phishing or fraudulent transactions. Meanwhile, privacy controls in Project Astra make it easy for users to manage session data and deletion preferences. Pichai reaffirmed the company’s commitment to responsible development, stating, “We firmly believe that the only way to build AI is to be responsible from the start.” With the Gemini 2.0 Flash release, Google is edging closer to its vision of building a universal assistant capable of transforming interactions across domains. See also: Machine unlearning: Researchers make AI models ‘forget’ data Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. 
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Gemini 2.0: Google ushers in the agentic AI era appeared first on AI News. View the full article
-
NetApp has shed light on the pressing issues faced by organisations globally as they strive to optimise their strategies for AI success. “2025 is shaping up to be a defining year for AI, as organisations transition from experimentation to scaling their AI capabilities,” said Gabie Boko, NetApp’s Chief Marketing Officer. “Businesses are making significant investments to drive innovation and efficiency, but these efforts will succeed only if global tech executives can address the mounting challenges of data complexity, security, and sustainability.” The findings of NetApp’s latest Data Complexity Report paint a detailed picture of where businesses currently stand on their AI journeys and the key trends that will shape the technology’s future.

Cost of transformation

Two-thirds of businesses worldwide claim their data is “fully or mostly optimised” for AI purposes, highlighting vast improvements in making data accessible, accurate, and well-documented. Yet, the study reveals that the journey towards AI maturity requires further significant investment. A striking 40% of global technology executives anticipate “unprecedented investment” will be necessary in 2025 just to enhance AI and data management capabilities. While considerable progress has been made, achieving impactful breakthroughs demands an even greater commitment in financial and infrastructural resources. Catching up with AI’s potential might not come cheap, but leaders prepared to invest could reap significant rewards in innovation and efficiency.

Data silos impede AI success

One of the principal barriers identified in the report is the fragmentation of data. An overwhelming 79% of global tech executives state that unifying their data, reducing silos and ensuring smooth interconnectedness, is key to unlocking AI’s full potential. Companies that have embraced unified data storage are better placed to overcome this hurdle. By connecting data regardless of its type or location (across hybrid multi-cloud environments), they ensure constant accessibility and minimise fragmentation. The report indicates that organisations prioritising data unification are significantly more likely to meet their AI goals in 2025. Nearly one-third (30%) of businesses failing to prioritise unification foresee missing their targets, compared to just 23% for those placing this at the heart of their strategy. Executives have doubled down on data management and infrastructure as top priorities, increasingly recognising that optimising their capacity to gather, store, and process information is essential for AI maturity. Companies refusing to tackle these data challenges risk falling behind in an intensely competitive global market.

Scaling risks of AI

As businesses accelerate their AI adoption, the associated risks – particularly around security – are becoming more acute. More than two-fifths (41%) of global tech executives predict a stark rise in security threats by 2025 as AI becomes integral to more facets of their operations. AI’s rapid rise has expanded attack surfaces, exposing data sets to new vulnerabilities and creating unique challenges such as protecting sensitive AI models. Countries leading the AI race, including India, the US, and Japan, are nearly twice as likely to encounter escalating security concerns compared to less AI-advanced nations like Germany, France, and Spain. Increased awareness of AI-driven security challenges is reflected in business priorities.
Over half (59%) of global executives name cybersecurity as one of the top stressors confronting organisations today. However, progress is being made. Despite elevated concerns, the report suggests that effective security measures are yielding results. Since 2023, the number of executives ranking cybersecurity and ransomware protection as their top priority has fallen by 17%, signalling optimism in combating these risks effectively.
Limiting AI’s environmental costs
Beyond security risks, AI’s growth is raising urgent questions of sustainability. Over one-third of global technology executives (34%) predict that AI advancements will drive significant changes to corporate sustainability practices. Meanwhile, 33% foresee new government policies and investments targeting energy usage. The infrastructure powering AI and transforming raw data into business value demands significant energy, working against organisational sustainability targets. AI-heavy nations often feel the environmental impact more acutely than their less AI-focused counterparts. While 72% of businesses still prioritise carbon footprint reduction, the report notes a decline from 84% in 2023, pointing to increasing tension between sustainability commitments and the relentless march of innovation. For organisations to scale AI without causing irreparable damage to the planet, maintaining environmental responsibility alongside technological growth will be paramount in coming years.
Krish Vitaldevara, SVP and GM at NetApp, commented: “The organisations leading in advanced analytics and AI are those that have unified and well-cataloged data, robust security and compliance for sensitive information, and a clear understanding of how data evolves.
“By tackling these challenges, they can drive innovation while ensuring resilience, responsibility, and timely insights in the new AI era.”
You can find a full copy of NetApp’s report here (PDF)
(Photo by Chunli Ju)
See also: New AI training techniques aim to overcome current challenges
The post Keys to AI success: Security, sustainability, and overcoming silos appeared first on AI News. View the full article
-
Researchers from the Tokyo University of Science (TUS) have developed a method to enable large-scale AI models to selectively “forget” specific classes of data.
Progress in AI has provided tools capable of revolutionising various domains, from healthcare to autonomous driving. However, as technology advances, so do its complexities and ethical considerations.
The paradigm of large-scale pre-trained AI systems, such as OpenAI’s ChatGPT and CLIP (Contrastive Language–Image Pre-training), has reshaped expectations for machines. These highly generalist models, capable of handling a vast array of tasks with consistent precision, have seen widespread adoption for both professional and personal use.
However, such versatility comes at a hefty price. Training and running these models demands prodigious amounts of energy and time, raising sustainability concerns, as well as requiring cutting-edge hardware significantly more expensive than standard computers. Compounding these issues is the fact that generalist tendencies may hinder the efficiency of AI models when applied to specific tasks.
For instance, “in practical applications, the classification of all kinds of object classes is rarely required,” explains Associate Professor Go Irie, who led the research. “For example, in an autonomous driving system, it would be sufficient to recognise limited classes of objects such as cars, pedestrians, and traffic signs.
“We would not need to recognise food, furniture, or animal species. Retaining classes that do not need to be recognised may decrease overall classification accuracy, as well as cause operational disadvantages such as the waste of computational resources and the risk of information leakage.”
A potential solution lies in training models to “forget” redundant or unnecessary information—streamlining their processes to focus solely on what is required. While some existing methods already cater to this need, they tend to assume a “white-box” approach where users have access to a model’s internal architecture and parameters. Oftentimes, however, users get no such visibility.
“Black-box” AI systems, more common due to commercial and ethical restrictions, conceal their inner mechanisms, rendering traditional forgetting techniques impractical. To address this gap, the research team turned to derivative-free optimisation—an approach that sidesteps reliance on the inaccessible internal workings of a model.
Advancing through forgetting
The study, set to be presented at the Neural Information Processing Systems (NeurIPS) conference in 2024, introduces a methodology dubbed “black-box forgetting.” The process modifies the input prompts (text instructions fed to models) in iterative rounds to make the AI progressively “forget” certain classes. Associate Professor Irie collaborated on the work with co-authors Yusuke Kuwana and Yuta Goto (both from TUS), alongside Dr Takashi Shibata from NEC Corporation.
For their experiments, the researchers targeted CLIP, a vision-language model with image classification abilities. The method they developed is built upon the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary algorithm designed to optimise solutions step-by-step. In this study, CMA-ES was harnessed to evaluate and hone prompts provided to CLIP, ultimately suppressing its ability to classify specific image categories. As the project progressed, challenges arose.
Existing optimisation techniques struggled to scale up for larger volumes of targeted categories, leading the team to devise a novel parametrisation strategy known as “latent context sharing.” This approach breaks latent context – a representation of information generated by prompts – into smaller, more manageable pieces. By allocating certain elements to a single token (word or character) while reusing others across multiple tokens, they dramatically reduced the problem’s complexity. Crucially, this made the process computationally tractable even for extensive forgetting applications.
Through benchmark tests on multiple image classification datasets, the researchers validated the efficacy of black-box forgetting—achieving the goal of making CLIP “forget” approximately 40% of target classes without direct access to the AI model’s internal architecture. This research marks the first successful attempt to induce selective forgetting in a black-box vision-language model, demonstrating promising results.
Benefits of helping AI models forget data
Beyond its technical ingenuity, this innovation holds significant potential for real-world applications where task-specific precision is paramount. Simplifying models for specialised tasks could make them faster, more resource-efficient, and capable of running on less powerful devices—hastening the adoption of AI in areas previously deemed unfeasible.
Another key use lies in image generation, where forgetting entire categories of visual context could prevent models from inadvertently creating undesirable or harmful content, be it offensive material or misinformation.
Perhaps most importantly, this method addresses one of AI’s greatest ethical quandaries: privacy. AI models, particularly large-scale ones, are often trained on massive datasets that may inadvertently contain sensitive or outdated information. Requests to remove such data—especially in light of laws advocating for the “Right to be Forgotten”—pose significant challenges. Retraining entire models to exclude problematic data is costly and time-intensive, yet the risks of leaving it unaddressed can have far-reaching consequences.
“Retraining a large-scale model consumes enormous amounts of energy,” notes Associate Professor Irie. “‘Selective forgetting,’ or so-called machine unlearning, may provide an efficient solution to this problem.” These privacy-focused applications are especially relevant in high-stakes industries like healthcare and finance, where sensitive data is central to operations.
As the global race to advance AI accelerates, the Tokyo University of Science’s black-box forgetting approach charts an important path forward—not only by making the technology more adaptable and efficient but also by adding significant safeguards for users. While the potential for misuse remains, methods like selective forgetting demonstrate that researchers are proactively addressing both ethical and practical challenges.
See also: Why QwQ-32B-Preview is the reasoning AI to watch
The post Machine unlearning: Researchers make AI models ‘forget’ data appeared first on AI News.
View the full article
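For readers who want a feel for how such a derivative-free loop fits together, below is a minimal sketch of CMA-ES prompt tuning against a black-box scoring function. It is not the TUS team’s code: the forgetting_score stub, the latent-context dimensionality, and the use of the cma package are illustrative assumptions, and the real method additionally relies on the latent context sharing parametrisation described above to keep the search tractable.

```python
# Illustrative sketch only: derivative-free prompt tuning in the spirit of
# black-box forgetting. Names, dimensions, and the scoring stub are assumptions.
import numpy as np
import cma  # pip install cma


def forgetting_score(latent_context: np.ndarray) -> float:
    """Hypothetical black-box objective (placeholder).

    In the paper's setting this would turn the latent context into a text
    prompt, query CLIP on validation images, and return a loss that is low
    when the 'forget' classes are no longer recognised but the 'keep'
    classes still are. The norm below is only a stand-in so the sketch runs.
    """
    return float(np.linalg.norm(latent_context))


DIM = 64           # assumed size of the tunable latent context
SIGMA0 = 0.5       # initial CMA-ES step size
GENERATIONS = 100  # optimisation budget

es = cma.CMAEvolutionStrategy(np.zeros(DIM), SIGMA0)
for _ in range(GENERATIONS):
    candidates = es.ask()                                   # sample candidate contexts
    scores = [forgetting_score(np.asarray(c)) for c in candidates]
    es.tell(candidates, scores)                             # CMA-ES minimises the score

best_context = es.result.xbest                              # best latent context found
print("best score:", es.result.fbest)
```

Because CMA-ES only needs scores rather than gradients, the model being "unlearned" can remain a complete black box behind an API.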
-
Artificial intelligence entered the market with a splash, driving massive buzz and adoption. But now the pace is faltering.
Business leaders still talk the talk about embracing AI, because they want the benefits – McKinsey estimates that GenAI could save companies up to $2.6 trillion across a range of operations. However, they aren’t walking the walk. According to one survey of senior analytics and IT leaders, only 20% of GenAI applications are currently in production.
Why the wide gap between interest and reality? The answer is multifaceted. Concerns around security and data privacy, compliance risks, and data management are high-profile, but there’s also anxiety about AI’s lack of transparency and worries about ROI, costs, and skill gaps. In this article, we’ll examine the barriers to AI adoption, and share some measures that business leaders can take to overcome them.
Get a handle on data
“High-quality data is the cornerstone of accurate and reliable AI models, which in turn drive better decision-making and outcomes,” said Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, adding, “Trustworthy data builds confidence in AI among IT professionals, accelerating the broader adoption and integration of AI technologies.”
Today, only 43% of IT professionals say they’re confident about their ability to meet AI’s data demands. Given that data is so vital for AI success, it’s not surprising that data challenges are an oft-cited factor in slow AI adoption. The best way to overcome this hurdle is to go back to data basics. Organisations need to build a strong data governance strategy from the ground up, with rigorous controls that enforce data quality and integrity.
Take ethics and governance seriously
With regulations mushrooming, compliance is already a headache for many organisations. AI only adds new areas of risk, more regulations, and increased ethical governance issues for business leaders to worry about, to the extent that security and compliance risk was the most-cited concern in Cloudera’s State of Enterprise AI and Modern Data Architecture report.
While the rise in AI regulations might seem alarming at first, executives should embrace the support that these frameworks offer, as they can give organisations a structure around which to build their own risk controls and ethical guardrails. Developing compliance policies, appointing teams for AI governance, and ensuring that humans retain authority over AI-powered decisions are all important steps in creating a comprehensive system of AI ethics and governance.
Reinforce control over security and privacy
Security and data privacy concerns loom large for every business, and with good reason. Cisco’s 2024 Data Privacy Benchmark Study revealed that 48% of employees admit to entering non-public company information into GenAI tools (and an unknown number have done so and won’t admit it), leading 27% of organisations to ban the use of such tools.
The best way to reduce the risks is to limit access to sensitive data. This involves tightening access controls, curbing privilege creep, and keeping data away from publicly-hosted LLMs.
Avi Perez, CTO of Pyramid Analytics, explained that his business intelligence software’s AI infrastructure was deliberately built to keep data away from the LLM, sharing only metadata that describes the problem and asking the LLM for the best way for locally-hosted engines to run the analysis.
“There’s a huge set of issues there. It’s not just about privacy, it’s also about misleading results.
So in that framework, data privacy and the issues associated with it are tremendous, in my opinion. They’re a showstopper,” Perez said. With Pyramid’s setup, however, “the LLM generates the recipe, but it does it without ever getting [its] hands on the data, and without doing mathematical operations. […] That eliminates something like 95% of the problem, in terms of data privacy risks.”
Boost transparency and explainability
Another serious obstacle to AI adoption is a lack of trust in its results. The infamous story of Amazon’s AI-powered hiring tool that discriminated against women has become a cautionary tale that scares many people away from AI. The best way to combat this fear is to increase explainability and transparency.
“AI transparency is about clearly explaining the reasoning behind the output, making the decision-making process accessible and comprehensible,” said Adnan Masood, chief AI architect at UST and a Microsoft regional director. “At the end of the day, it’s about eliminating the black box mystery of AI and providing insight into the how and why of AI decision-making.”
Unfortunately, many executives overlook the importance of transparency. A recent IBM study reported that only 45% of CEOs say they are delivering on capabilities for openness. AI champions need to prioritise the development of rigorous AI governance policies that prevent black boxes from arising, and invest in explainability tools like SHapley Additive exPlanations (SHAP), fairness toolkits like Google’s Fairness Indicators, and automated compliance checks like the Institute of Internal Auditors’ AI Auditing Framework.
Define clear business value
Cost is on the list of AI barriers, as always. The Cloudera survey found that 26% of respondents said AI tools are too expensive, and Gartner included “unclear business value” as a factor in the failure of AI projects. Yet the same Gartner report noted that GenAI had delivered an average revenue increase and cost savings of over 15% among its users, proof that AI can drive financial lift if implemented correctly.
This is why it’s crucial to approach AI like every other business project – identify areas that will deliver fast ROI, define the benefits you expect to see, and set specific KPIs so you can prove value. “While there’s a lot that goes into building out an AI strategy and roadmap, a critical first step is to identify the most valuable and transformative AI use cases on which to focus,” said Michael Robinson, Director of Product Marketing at UiPath.
Set up effective training programs
The skills gap remains a significant roadblock to AI adoption, but it seems that little effort is being made to address the issue. A report from Worklife indicates the initial ***** in AI adoption came from early adopters. Now, it’s down to the laggards, who are inherently sceptical and generally less confident about AI – and any new tech.
This makes training crucial. Yet according to Asana’s State of AI at Work study, 82% of participants said their organisations haven’t provided training on using generative AI. There’s no indication that training isn’t working; rather that it isn’t happening as it should. The clear takeaway is to offer comprehensive training in quality prompting and other relevant skills. Encouragingly, the same research shows that even using AI without training increases people’s skills and confidence. So, it’s a good idea to get started with low- and no-code tools that allow employees who are unskilled in AI to learn on the job.
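To make the explainability tooling mentioned above more concrete, here is a minimal SHAP sketch on a toy tabular model. The dataset, model, and sample sizes are placeholders for illustration only, not a recommendation for any particular stack.

```python
# Minimal SHAP illustration on a toy tabular model (placeholder data and model).
import shap
from sklearn.ensemble import RandomForestClassifier

# SHAP ships with small demo datasets; 'adult' predicts income brackets.
X, y = shap.datasets.adult()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explain the probability of the positive class with a model-agnostic explainer,
# using a small background sample to keep the computation quick.
explainer = shap.Explainer(
    lambda rows: model.predict_proba(rows)[:, 1],
    X.sample(100, random_state=0),
)
shap_values = explainer(X.iloc[:50])

# Beeswarm plot: which features push predictions up or down across these rows.
shap.plots.beeswarm(shap_values)
```

The beeswarm plot gives a quick view of which features drive predictions overall, which is often the first artefact stakeholders ask for when auditing a model.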
The barriers to AI adoption are not insurmountable
Although AI adoption has slowed, there’s no indication that it’s in danger in the long term. The many obstacles holding companies back from rolling out AI tools can be overcome without too much trouble. Many of the steps, like reinforcing data quality and ethical governance, should be taken regardless of whether or not AI is under consideration, while others will pay for themselves in increased revenue and the productivity gains that AI can bring.
The post Narrowing the confidence gap for wider AI adoption appeared first on AI News. View the full article
-
There’s a new contender in the AI space that’s making waves: QwQ-32B-Preview. This so-called “reasoning” AI model is being compared to OpenAI o1, and it’s one of the few you can download under a permissive license. For developers and researchers eager to experiment, that’s a significant bonus.
Built by Alibaba’s Qwen team, QwQ-32B-Preview is anything but lightweight. It packs 32.5 billion parameters—think of these as the building blocks of its problem-solving abilities—and can handle prompts of up to 32,000 words – longer than some novels! Tests show it outperforms OpenAI o1-preview and o1-mini on benchmarks like AIME and MATH. For context, AIME uses other AI models to assess performance, while MATH is a collection of word problems.
But the model isn’t just about maths problems or logic puzzles. What sets it apart is how it approaches tasks. QwQ-32B-Preview plans ahead, fact-checks its work, and avoids common AI mistakes. Of course, it’s not flawless—Alibaba acknowledges issues like language switching, occasional loops, and difficulties with “common sense” reasoning. Even so, it represents a step toward more intelligent AI systems.
QwQ-32B-Preview is accessible: You can run or download it via Hugging Face. However, like other Chinese-developed AI, it operates within regulatory boundaries. That means it carefully avoids politically sensitive topics to comply with China’s rules, ensuring it aligns with “core socialist values.”
Alibaba isn’t alone in this space. Meta’s Llama 3.1 is another open-source option, though it takes a different approach by focusing on generative AI rather than reasoning. While both models are innovative, QwQ-32B-Preview specialises in problem-solving with what the company describes as a human-like approach, putting it in the reasoning category.
The competition in AI inside China is intensifying. Companies such as DeepSeek, Shanghai AI Lab, and Kunlun Tech have entered the reasoning AI race, releasing their models at pace. For example, DeepSeek’s r1 claims to outperform OpenAI’s o1 on half of its benchmark tests, particularly in maths and programming. Shanghai AI Lab’s InternThinker takes a structured approach to problem-solving, incorporating steps such as understanding queries, recalling knowledge, planning solutions, and reflecting on its answers.
This surge of activity highlights how quickly Chinese companies are catching up with US tech giants. Xu Liang, an AI entrepreneur from Hangzhou, summed it up: “OpenAI gave the direction; with research, Chinese tech firms are making progress.” The release of QwQ-32B-Preview and its competitors shows how much ground they’re covering.
But this goes beyond just catching up. Reasoning AI marks a change in how models are designed and used. Unlike older AI systems that relied on brute force to generate answers, reasoning models like QwQ-32B-Preview aim to mimic human problem-solving. The approach not only makes them more effective for complex tasks but also expands their potential use cases, like tackling advanced maths or providing detailed financial advice.
Whether it’s solving puzzles, reasoning through intricate problems, or expanding what open-source AI can achieve, one thing is clear: the evolution of AI is accelerating. Buckle up—this is only the beginning.
(Photo by Unsplash)
See also: Alibaba Cloud overhauls AI partner initiative
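For anyone who wants to experiment, loading QwQ-32B-Preview from Hugging Face follows the usual transformers pattern. The snippet below is a generic sketch rather than the official model-card recipe; the prompt and generation settings are arbitrary, and a 32.5B-parameter model will need substantial GPU memory or a quantised variant.

```python
# Generic sketch for trying QwQ-32B-Preview via Hugging Face transformers.
# Settings are illustrative; check the model card for recommended usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick a dtype suited to the hardware
    device_map="auto",    # shard the 32.5B parameters across available GPUs
)

messages = [
    {"role": "user", "content": "How many positive integers below 100 are divisible by 3 or 5?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Quantised community builds commonly appear for popular open-weight models of this size and are worth looking for if VRAM is limited.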
The post Why QwQ-32B-Preview is the reasoning AI to watch appeared first on AI News. View the full article
-
Layer 1 relational blockchain Chromia has unveiled its Asgard mainnet upgrade. The upgrade includes new extensions that add specialised capabilities to the blockchain. The Oracle Extension has launched, with the AI Inference Extension expected early next year.
The Chromia blockchain development team has announced the successful completion of its Asgard mainnet upgrade, which introduces new features and capabilities for the platform. The new features enhance the overall capacity of the Layer 1 blockchain and add specialised capabilities for users. The Asgard mainnet upgrade includes the launch of Chromia Extensions, expected to support the growth of decentralised finance (DeFi) and AI-enabled applications on the Chromia Network.
In its November 15 announcement, the Chromia team said it believes the upgrade will redesign how data is organised on the blockchain, changing the development and use of Web3 applications. The blockchain utilises a modular framework to offer users and developers decentralised application (dApp) chains, customisable fee structures, and advanced digital assets.
The extensions are modular enhancements that enable developers to build additional features on top of the main blockchain, expanding functionality and utility while maintaining the benefits of the existing infrastructure. The extensions complement the platform’s relational data architecture, modular network design, and gas-free economic model for end users.
Chromia launches Oracle Extensions
The Oracle Extension provides fully on-chain, real-time price feeds that are updated approximately once a second. It gives developers in Chromia’s ecosystem the ability to develop DeFi applications like decentralised exchanges, futures and options platforms, and lending protocols.
Ludvig Öberg, VP of the Chromia Platform Unit, said he believes the Oracle Extension will help grow the DeFi space on the blockchain: “The Oracle Extension lays the groundwork for an expansion of decentralised finance activity on the Chromia network and the growth of network value.”
According to the team’s statement, the launch of the Oracle Extension aligns with the overall goal of “strengthening connections across the wider cryptocurrency ecosystem.” The company has made recent efforts to integrate the native CHR token and other Chromia-based tokens with centralised exchanges and cross-chain wallets. Chromia plans to release a public demo of the Oracle Extension, with integrations by DeFi protocols expected soon after.
AI Inference Extension to launch early 2025
Chromia has also announced plans to launch an AI Inference Extension, expected in Q1 2025. The module will enable developers to execute AI models directly on-chain using Chromia’s decentralised provider network. The project’s recently formed Data and AI Division is focused on creating tools to enhance transparency in AI training data and inputs.
Speaking about the AI Extension’s planned launch, Yeou Jie, Head of Business Development at Chromia, said the team plans to expand “transparency to AI”: “As the world’s only relational blockchain, Chromia has demonstrated its ability to bring transparency to AI and other data-intensive use cases. The AI Inference Extension will take this a step further, enabling on-chain execution of AI models.”
Seen as the first major technical update to Chromia, the Asgard mainnet upgrade provides a way for the blockchain to support additional functions and decentralised applications, including AI, gaming, finance, and enterprise use cases.
The post Chromia’s Asgard upgrade launches: “New era for DeFi and AI” appeared first on AI News. View the full article