Everything posted by ChatGPT
-
As the 2025 AI Action Summit kicks off in Paris, global leaders, industry experts, and academics are converging to address the challenges and opportunities presented by AI. Against the backdrop of rapid technological advancements and growing societal concerns, the summit aims to build on the progress made since the 2024 Seoul Safety Summit and establish a cohesive global framework for AI governance.

AI Action Summit is 'a wake-up call'

French President Emmanuel Macron has described the summit as "a wake-up call for Europe," emphasising the need for collective action in the face of AI's transformative potential. This comes as the US has committed $500 billion to AI infrastructure. The UK, meanwhile, has unveiled its AI Opportunities Action Plan ahead of the full implementation of the EU AI Act. Ahead of the AI Summit, UK tech minister Peter Kyle told The Guardian the AI race must be led by "western, liberal, democratic" countries. These developments signal a renewed global dedication to harnessing AI's capabilities while addressing its risks.

Matt Cloke, CTO at Endava, highlighted the importance of bridging the gap between AI's potential and its practical implementation. "Much of the conversation is set to focus on understanding the risks involved with using AI while helping to guide decision-making in an ever-evolving landscape," he said.

Cloke also stressed the role of organisations in ensuring AI adoption goes beyond regulatory frameworks. "Modernising core systems enables organisations to better harness AI while ensuring regulatory compliance," he explained. "With improved data management, automation, and integration capabilities, these systems make it easier for organisations to stay agile and quickly adapt to impending regulatory changes."

Governance and workforce among critical AI Action Summit topics

Kit Cox, CTO and Founder of Enate, outlined three critical areas for the summit's agenda. "First, AI governance needs urgent clarity," he said. "We must establish global guidelines to ensure AI is safe, ethical, and aligned across nations. A disconnected approach won't work; we need unity to build trust and drive long-term progress."

Cox also emphasised the need for a future-ready workforce. "Employers and governments must invest in upskilling the workforce for an AI-driven world," he said. "This isn't just about automation replacing jobs; it's about creating opportunities through education and training that genuinely prepare people for the future of work."

Finally, Cox called for democratising AI's benefits. "AI must be fair and democratic both now and in the future," he said. "The benefits can't be limited to a select few. We must ensure that AI's power reaches beyond Silicon Valley to all corners of the globe, creating opportunities for everyone to thrive."

Developing AI in the public interest

Professor Gina Neff, Professor of Responsible AI at Queen Mary University of London and Executive Director at Cambridge University's Minderoo Centre for Technology & Democracy, stressed the importance of making AI relatable to everyday life. "For us in civil society, it's essential that we bring imaginaries about AI into the everyday," she said. "From the barista who makes your morning latte to the mechanic fixing your car, they all have to understand how AI impacts them and, crucially, why AI is a human issue."

Neff also pushed back against big tech's dominance in AI development.
"I'll be taking this spirit of public interest into the Summit and pushing back against big tech's push for hyperscaling. Thinking about AI as something we're building together – like we do our cities and local communities – puts us all in a better place."

Addressing bias and building equitable AI

Professor David Leslie, Professor of Ethics, Technology, and Society at Queen Mary University of London, highlighted the unresolved challenges of bias and diversity in AI systems. "Over a year after the first AI Safety Summit at Bletchley Park, only incremental progress has been made to address the many problems of cultural bias and toxic and imbalanced training data that have characterised the development and use of Silicon Valley-led frontier AI systems," he said.

Leslie called for a renewed focus on public interest AI. "The French AI Action Summit promises to refocus the conversation on AI governance to tackle these and other areas of immediate risk and harm," he explained. "A main focus will be to think about how to advance public interest AI for all through mission-driven and society-led funding."

He proposed the creation of a public interest AI foundation, supported by governments, companies, and philanthropic organisations. "This type of initiative will have to address issues of algorithmic and data biases head on, at concrete and practice-based levels," he said. "Only then can it stay true to the goal of making AI technologies – and the infrastructures upon which they depend – accessible global public goods."

Systematic evaluation

Professor Maria Liakata, Professor of Natural Language Processing at Queen Mary University of London, emphasised the need for rigorous evaluation of AI systems. "AI has the potential to make public service more efficient and accessible," she said. "But at the moment, we are not evaluating AI systems properly. Regulators are currently on the back foot with evaluation, and developers have no systematic way of offering the evidence regulators need."

Liakata called for a flexible and systematic approach to AI evaluation. "We must remain agile and listen to the voices of all stakeholders," she said. "This would give us the evidence we need to develop AI regulation and help us get there faster. It would also help us get better at anticipating the risks posed by AI."

AI in healthcare: Balancing innovation and ethics

Dr Vivek Singh, Lecturer in Digital Pathology at Barts Cancer Institute, Queen Mary University of London, highlighted the ethical implications of AI in healthcare. "The Paris AI Action Summit represents a critical opportunity for global collaboration on AI governance and innovation," he said. "I hope to see actionable commitments that balance ethical considerations with the rapid advancement of AI technologies, ensuring they benefit society as a whole."

Singh called for clear frameworks for international cooperation. "A key outcome would be the establishment of clear frameworks for international cooperation, fostering trust and accountability in AI development and deployment," he said.

AI Action Summit: A pivotal moment

The 2025 AI Action Summit in Paris represents a pivotal moment for global AI governance. With calls for unity, equity, and public interest at the forefront, the summit aims to address the challenges of bias, regulation, and workforce readiness while ensuring AI's benefits are shared equitably. As world leaders and industry experts converge, the hope is that actionable commitments will pave the way for a more inclusive and ethical AI future.
(Photo by Jorge Gascón)

See also: EU AI Act: What businesses need to know as regulations go live

The post AI Action Summit: Leaders call for unity and equitable development appeared first on AI News.
-
US lawmakers are pushing for a DeepSeek ban after security researchers found the app transferring user data to a banned state-owned company. DeepSeek, practically unknown just weeks ago, took the tech world by storm—gaining global acclaim for its cutting-edge performance while sparking debates reminiscent of the TikTok saga.

Its rise has been fuelled in part by its business model: unlike many of its American counterparts, including OpenAI and Google, DeepSeek offered its advanced powers for free. However, concerns have been raised about DeepSeek's extensive data collection practices, and a probe has been launched by Microsoft and OpenAI over a breach of the latter's system by a group allegedly linked to the Chinese AI startup.

A threat to US AI dominance

DeepSeek's astonishing capabilities have, within a matter of weeks, positioned it as a major competitor to American AI stalwarts like OpenAI's ChatGPT and Google Gemini. But, alongside the app's prowess, concerns have emerged over alleged ties to the Chinese Communist Party (CCP).

According to security researchers, hidden code within DeepSeek's AI has been found transmitting user data to China Mobile—a state-owned telecoms company banned in the US. DeepSeek's own privacy policy permits the collection of data such as IP addresses, device information, and, most alarmingly, even keystroke patterns.

"Just fyi, @deepseek_ai collects your IP, keystroke patterns, device info, etc etc, and stores it in China, where all that data is vulnerable to arbitrary requisition from the State. From their own privacy policy: pic.twitter.com/wueJokHcn3" — Luke de Pulford (@lukedepulford) January 27, 2025

Such findings have led to bipartisan efforts in the US Congress to curtail DeepSeek's influence, with lawmakers scrambling to protect sensitive data from potential CCP oversight. Reps. Darin LaHood (R-IL) and Josh Gottheimer (D-NJ) are spearheading efforts to introduce legislation that would prohibit DeepSeek from being installed on all government-issued devices.

"DeepSeek is collecting the sensitive data of American citizens and sending it directly to the Chinese Communist Party — a national security threat. That's why I'm introducing bipartisan legislation to immediately ban DeepSeek from all US gov devices." — Rep Josh Gottheimer (@RepJoshG) February 6, 2025

Several federal agencies, among them NASA and the US Navy, have already preemptively issued a ban on DeepSeek. Similarly, the state of Texas has also introduced restrictions.

Potential ban of DeepSeek a TikTok redux?

The controversy surrounding DeepSeek bears similarities to debates over TikTok, the social video app owned by Chinese company ByteDance. TikTok remains under fire over accusations that user data is accessible to the CCP, though definitive proof has yet to materialise. In contrast, DeepSeek's case involves clear evidence, as revealed by cybersecurity investigators who identified the app's unauthorised data transmissions. While some might say DeepSeek echoes the TikTok controversy, security experts argue that it represents a much starker and better-documented threat.

Lawmakers around the world are taking note. In addition to the US proposals, DeepSeek has already faced bans from government systems in countries including Australia, South Korea, and Italy.

AI becomes a geopolitical battleground

The concerns over DeepSeek exemplify how AI has become a geopolitical flashpoint between global superpowers—especially between the US and China.
American AI firms like OpenAI have enjoyed a dominant position in recent years, but Chinese companies have poured resources into catching up and, in some cases, surpassing their US competitors. DeepSeek's lightning-quick growth has unsettled that balance, not only because of its AI models but also due to its pricing strategy, which undercuts competitors by offering the app free of charge. That begs the question of whether it's truly "free" or whether the cost is paid in lost privacy and security.

China Mobile's involvement raises further eyebrows, given the state-owned telecom company's prior sanctions and prohibition from the US market. Critics worry that data collected through platforms like DeepSeek could fill gaps in Chinese surveillance activities or even enable economic manipulation.

A nationwide DeepSeek ban is on the cards

If the proposed US legislation is passed, it could represent the first step toward nationwide restrictions or an outright ban on DeepSeek. Geopolitical tension between China and the West continues to shape policies in advanced technologies, and AI appears to be the latest arena for this ongoing chess match.

In the meantime, calls to regulate applications like DeepSeek are likely to grow louder. Conversations about data privacy, national security, and ethical boundaries in AI development are becoming ever more urgent as individuals and organisations across the globe navigate the promises and pitfalls of next-generation tools. DeepSeek's rise may indeed have rattled the AI hierarchy, but whether it can maintain its momentum in the face of increasing global pushback remains to be seen.

(Photo by Solen Feyissa)

See also: AVAXAI brings DeepSeek to Web3 with decentralised AI agents

The post DeepSeek ban? China data transfer boosts security concerns appeared first on AI News.
-
Digma, a company offering products designed to act on pre-production observability data, has announced the launch of its preemptive observability analysis (POA) engine. The engine is designed to check codebases, identify issues, and suggest fixes, helping to stabilise systems and reduce the problems that surface as complexity increases.

The application of preemptive observability in pre-production may become more important as AI code generators grow more common, the company claims. For instance, a 2023 Stanford University study revealed that developers using AI coding assistants were more likely to introduce bugs into their code. Despite this, major companies like Google are increasing their reliance on AI-generated code, with over 25% of the company's new code being AI-created.

Nir Shafrir, CEO and Co-founder of Digma, commented on the growing resources being dedicated to ensuring systems perform well: "We're seeing a lot of effort invested in assuring optimal system performance, but many issues are still being discovered in complex code bases late in production.

"Beyond this, scaling has often remained a rough estimation in organisations anticipating growth, and many are hitting barriers in technology growth that arise precisely during periods of significant organisational expansion. It means that engineering teams may spend between 20-40% of their time addressing issues discovered late in production environments, with some organisations spending up to 50% of engineering resources on fixing production problems."

Preemptive observability is expected to become a key factor in helping companies gain competitive advantage. It has several potential benefits for AI-generated code, including speed increases, and it can also improve the reliability of human-written code. According to Digma, preemptive observability helps ensure manually written code is more trustworthy and reduces risk in the final product. As well as tackling bugs introduced by AI code generation, Digma's preemptive observability analysis engine has been designed to combat common, long-established issues with human-written code, which can result in service level agreement (SLA) violations and performance problems. For businesses handling high transaction volumes – in retail, fintech, and e-commerce, for example – this technology could prove valuable.

Digma's algorithm uses pattern matching and anomaly detection techniques to analyse data and find specific behaviours or issues. It can predict what an application's response times and resource usage should be, identifying possible issues before they cause noticeable damage, and it pinpoints the part of the code responsible for an issue by analysing tracing data. Preemptive observability analysis prevents problems rather than dealing with their aftermath: teams can monitor holistically and address potential issues in areas that are frequently ignored once in production.
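Digma hasn't published the internals of its engine, but the general technique it describes – learning a per-endpoint baseline from tracing data and flagging deviations – can be sketched in a few lines. The following is a minimal illustration under that assumption; the endpoints, numbers, and three-sigma threshold are all hypothetical.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical trace records: (endpoint, response_time_ms) pairs, as a
# tracing pipeline might emit them during pre-production test runs.
baseline_spans = [
    ("GET /orders", 42.0), ("GET /orders", 45.1), ("GET /orders", 39.8),
    ("GET /orders", 44.3), ("POST /checkout", 120.5), ("POST /checkout", 118.9),
    ("POST /checkout", 125.2), ("POST /checkout", 122.0),
]

def build_baseline(spans):
    """Group response times by endpoint and record mean/std per endpoint."""
    grouped = defaultdict(list)
    for endpoint, duration in spans:
        grouped[endpoint].append(duration)
    return {ep: (mean(ds), stdev(ds)) for ep, ds in grouped.items() if len(ds) > 1}

def flag_anomalies(baseline, new_spans, sigma=3.0):
    """Flag spans whose duration exceeds the endpoint baseline by `sigma` stds."""
    anomalies = []
    for endpoint, duration in new_spans:
        if endpoint not in baseline:
            continue  # no baseline yet; a real engine would learn one first
        mu, sd = baseline[endpoint]
        if duration > mu + sigma * sd:
            anomalies.append((endpoint, duration, mu))
    return anomalies

baseline = build_baseline(baseline_spans)
new_spans = [("GET /orders", 43.0), ("POST /checkout", 390.0)]  # one regression
for endpoint, duration, mu in flag_anomalies(baseline, new_spans):
    print(f"{endpoint}: {duration:.0f} ms vs ~{mu:.0f} ms baseline -- investigate")
```

A production-grade engine would go much further – seasonal baselines, resource metrics, and root-cause attribution across span trees – but the core baseline-then-compare loop is the same.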
Roni Dover, CTO and Co-founder of Digma, highlighted what differentiates the company's preemptive observability analysis engine from others: "By understanding runtime behaviour and suggesting fixes for performance issues, scaling problems, and team conflicts, we're helping enterprises prevent problems and reduce risks proactively rather than putting out fires in production."

Application performance monitoring (APM) tools are used to identify service issues, monitor production statuses, and highlight SLA errors. APMs are practical for sending alerts when services fail or slow during production. But unlike preemptive observability, APMs are of limited use in non-production settings and cannot analyse the sources of problems. By identifying performance and scaling issues early in the production process, even when data volumes are low, preemptive observability helps prevent major problems and reduce cloud costs.

Digma recently completed a successful $6 million seed funding round, indicating growing confidence in the technology.

Image source: "Till Bechtolsheimer's – Alfa Romeo Giulia Sprint GT No.40 – 2013 Donington Historic Festival" by Motorsport in Pictures, licensed under CC BY-NC-SA 2.0.

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

The post Digma's preemptive observability engine cuts code issues, streamlines AI appeared first on AI News.
-
Digital interactions have become a major part of our lives; according to the latest statistics, there were over 5.52 billion internet users as of October 2024, with 67.5% being social media users. But despite the prominence of the digital space in today's world, most interactions are still subpar when it comes to personalisation.

What does this mean? Put simply, the different categories of internet users – individuals, companies, and influencers – do not have the flexibility or options to fully express their individuality, customise content, or provide targeted services for specific markets. Most digital platforms that currently exist only provide an avenue for internet users to create static profiles made up of personal data. This should not be the case in a world where most interactions happen online. Digital profiles ought to be more than a collection of data; they should mimic a fully-developed persona that internet users can use to express themselves authentically, and in a more personalised way, in their digital interactions.

Setting the stage for futuristic digital interactions

Innovation did not stop with the internet or Web 2.0 social media networks. We now have more advanced technologies, notably AI and web 3.0, which are proving to be game-changers in the hyper-personalisation of digital experiences. So, how are the two technologies adding value to today's static digital profiles?

Let's start with AI. Innovations in this space have been the talk of the technology community and beyond, with significant funding flowing into the industry over the past two years. While most people are only familiar with generative AI use cases, this nascent technology has the potential to support the creation of hyper-realistic and intelligent digital human avatars that could replace static profiles or business chatbots whose capabilities remain limited.

Web 3.0, on the other hand, introduces a futuristic digital space where personalised avatars can interact, trade, or engage in more advanced activities like hosting meetings or events. Although this is possible with web 2.0 platforms as well, web 3.0 innovations go a level higher by featuring NFTs and utility tokens, which let users create adaptable human avatars or purchase advanced customisation features to make avatars more personalised.

A case study of the Antix AI-powered SaaS platform

Antix is one of the few innovations that currently uses integrated GPT-4.0 support and a web 3.0 utility token to create hyper-realistic and intelligent digital human avatars. The AI-powered software-as-a-service (SaaS) platform enhances digital interactions by providing individuals, companies, and influencers with the opportunity to use hyper-personalised digital humans to deliver hyper-realistic interactions.

Antix's digital humans use advanced machine learning and natural language processing to make digital interactions more personalised. Notably, the digital humans are designed as non-fungible tokens (NFTs), which means they can evolve alongside the owner. Internet citizens can use the Antix platform to create highly personalised and adaptable digital profiles featuring a multitude of customisations, including style, emotions, appearance, and voice. Antix's digital humans can be customised to operate as the face of a brand, representing it in the digital space and performing key functions like engaging with an audience and hosting virtual events and marketing campaigns.
Digital humans can perform customer support functions better than typical chatbots because of their personalised make-up. They could also be useful for influencers who consistently produce new content for their audiences. Instead of shooting content themselves, influencers can delegate the role to Antix's digital humans. The benefits of this approach include reduced equipment costs, simplified content adaptation, and the option to remain anonymous.

It is also important to highlight that the ecosystem is powered by a utility token dubbed $ANTIX. The token supports key functions in the Antix platform, including subscription purchases, asset repairs, and ecosystem rewards.

A new dawn for digital interactions

For almost three decades now, digital interactions have mostly revolved around static personas. This could be about to change; advancements in 4IR technologies like AI and web 3.0 are bringing more value to the digital space. While it may take a few years before most people embrace the concept of AI-powered digital humans and decentralised marketplaces, it is only a matter of time before demand for digital twins that mimic real-life personas hits the roof.

The shift will mark a new dawn: a time when digital interactions are not only hyper-personalised but feel almost real. Web 3.0 is poised to be the economic powerhouse of the digital interaction space. In fact, we're already seeing this evolution with AI-powered agents tasked with operations in the cryptocurrency economy. It is not a question of if, but rather when, digital humans will become one of the main forms of interaction on the internet.

(Image source: Unsplash)

The post How AI and web 3.0 can reshape digital interactions appeared first on AI News.
-
The government is being urged to mandate stricter reporting for data centres to mitigate the environmental risks associated with the AI sprint. A report published today by the National Engineering Policy Centre (NEPC) highlights the urgent need for data centres to adopt greener practices, particularly as the government's AI Opportunities Action Plan gains traction.

The report, Engineering Responsible AI: Foundations for Environmentally Sustainable AI, was developed in collaboration with the Royal Academy of Engineering, the Institution of Engineering and Technology, and BCS, The Chartered Institute for IT. While stressing that the data centres enabling AI systems can be built to consume fewer resources like energy and water, the report highlights that infrastructure and regulatory conditions must align for these efficiencies to materialise.

Unlocking the potential of AI while minimising environmental risks

AI is heralded as capable of driving economic growth, creating jobs, and improving livelihoods. Launched as a central pillar of the UK's tech strategy, the AI Opportunities Action Plan is intended to "boost economic growth, provide jobs for the future and improve people's everyday lives." Use cases for AI that are already generating public benefits include accelerating drug discovery, forecasting weather events, optimising energy systems, and even aiding climate science and improving sustainability efforts. However, this growing reliance on AI also poses environmental risks from the infrastructure required to power these systems.

Data centres, which serve as the foundation of AI technologies, consume vast amounts of energy and water. Increasing demand has raised concerns about global competition for limited resources, such as sustainable energy and drinking water. Google and Microsoft, for instance, have recorded rising water usage by their data centres each year since 2020. Much of this water comes from drinking sources, sparking fears about resource depletion. With plans already in place to reform the UK's planning system to facilitate the construction of data centres, the report calls for urgent policies to manage their environmental impact. Accurate and transparent data on resource consumption is currently lacking, which hampers policymakers' ability to assess the true scale of these impacts and act accordingly.

Five steps to sustainable AI

The NEPC is urging the government to spearhead change by prioritising sustainable AI development. The report outlines five key steps policymakers can act upon immediately to position the UK as a leader in resource-efficient AI:

1. Expand environmental reporting mandates.
2. Communicate the sector's environmental impacts.
3. Set sustainability requirements for data centres.
4. Reconsider data collection, storage, and management practices.
5. Lead by example with government investment.

Mandatory environmental reporting forms a cornerstone of the recommendations. This involves measuring data centres' energy sources, water consumption, carbon emissions, and e-waste recycling practices to provide the resource-use data necessary for policymaking.

Raising public awareness is also vital. Communicating the environmental costs of AI can encourage developers to optimise AI tools, use smaller datasets, and adopt more efficient approaches. Notably, the report recommends embedding environmental design and sustainability topics into computer science and AI education at both school and university levels.
Smarter, greener data centres

One of the most urgent calls to action involves redesigning data centres to reduce their environmental footprint. The report advocates for innovations like waste heat recovery systems, zero use of drinking water for cooling, and the exclusive use of 100% carbon-free energy certificates. Efforts like those at Queen Mary University of London, where residual heat from a campus data centre is repurposed to provide heating and hot water, offer a glimpse into the possibilities of greener tech infrastructure.

In addition, the report suggests revising legislation on mandatory data retention to reduce the unnecessary environmental costs of storing vast amounts of data long-term. Proposals for a National Data Library could drive best practices by centralising and streamlining data storage.

Professor Tom Rodden, Pro-Vice-Chancellor at the University of Nottingham and Chair of the working group behind the report, urged swift action: "In recent years, advances in AI systems and services have largely been driven by a race for size and scale, demanding increasing amounts of computational power. As a result, AI systems and services are growing at a rate unparalleled by other high-energy systems—generally without much regard for resource efficiency.

"This is a dangerous trend, and we face a real risk that our development, deployment, and use of AI could do irreparable damage to the environment."

Rodden added that reliable data on these impacts is critical. "To build systems and services that effectively use resources, we first need to effectively monitor their environmental cost. Once we have access to trustworthy data… we can begin to effectively target efficiency in development, deployment, and use – and plan a sustainable AI future for the UK."

Dame Dawn Childs, CEO of Pure Data Centres Group, underscored the role of engineering in improving efficiency. "Some of this will come from improvements to AI models and hardware, making them less energy-intensive. But we must also ensure that the data centres housing AI's computing power and storage are as sustainable as possible.

"That means prioritising renewable energy, minimising water use, and reducing carbon emissions – both directly and indirectly. Using low-carbon building materials is also essential."

Childs emphasised the importance of a coordinated approach from the start of projects. "As the UK government accelerates AI adoption – through AI Growth Zones and streamlined planning for data centres – sustainability must be a priority at every step."

For Alex Bardell, Chair of BCS' Green IT Specialist Group, the focus is on optimising AI processes. "Our report has discussed optimising models for efficiency. Previous attempts to limit the drive toward increased computational power and larger models have faced significant resistance, with concerns that the UK may fall behind in the AI arena; this may not necessarily be true.

"It is crucial to reevaluate our approach to developing sustainable AI in the future."

Time for transparency around AI environmental risks

Public awareness of AI's environmental toll remains low. Recent research by the Institution of Engineering and Technology (IET) found that fewer than one in six UK residents are aware of the significant environmental costs associated with AI systems.

"AI providers must be transparent about these effects," said Professor Sarvapali Ramchurn, CEO of Responsible AI UK and a Fellow of the IET. "If we cannot measure it, we cannot manage it, nor ensure benefits for all.
This report's recommendations will aid national discussions on the sustainability of AI systems and the trade-offs involved."

As the UK pushes forward with ambitious plans to lead in AI development, ensuring environmental sustainability must take centre stage. By adopting the policies and practices outlined in the NEPC report, the government can support AI growth while safeguarding finite resources for future generations.

(Photo by Braden Collum)

See also: Sustainability is key in 2025 for businesses to advance AI efforts

The post NEPC: AI sprint risks environmental catastrophe appeared first on AI News.
-
AVAXAI brings DeepSeek to Web3 with decentralised AI agents
AI continues to evolve, transforming industries with advances in automation, decision-making, and predictive analytics. AI models like DeepSeek push the boundaries of what's possible, making complex tasks more efficient and accessible. At the same time, Web3 is reshaping digital ownership and finance through decentralisation. As the two technologies advance, their convergence seems inevitable. However, integrating AI with blockchain and decentralised systems has proved challenging – until now.

The company AIvalanche DeFAI Agents (AVAXAI) is addressing the gap by introducing the first DeepSeek-powered DeFAI product on the Avalanche blockchain. The platform provides a way for users to create, manage, and trade tokenised AI and DeFAI agents. Agents will be able to benefit from Avalanche's scalability, fast transactions, and energy-efficient infrastructure. By merging decentralised ownership with AI-driven automation, AIvalanche DeFAI Agents offers new opportunities in the Web3 space.

The DeepSeek controversy and its impact on AI's future

DeepSeek has been at the centre of global attention, not only for its technical advancements but also for concerns about its use. In January, the company unveiled a chatbot that reportedly matched the performance of its rivals at a significantly lower training cost, a development that shook international markets. AI-related stocks, including Australian chip-maker BrainChip, saw sharp declines following the news.

However, DeepSeek's rapid rise has also raised security concerns. Australia has banned DeepSeek from all government devices and systems, citing an "unacceptable risk" to national security. According to the BBC, officials insist that the decision is based on security assessments, not the company's Chinese origins. The government's move underlines ongoing debates over AI governance and the potential risks of incorporating AI into important systems.

Despite these concerns, AIvalanche DeFAI Agents continues to explore new ways to utilise DeepSeek's abilities in a decentralised framework. It wants to give users greater control over AI agents while maintaining security and transparency in Web3.

Decentralised AI agents for ownership and monetisation

DeepSeek is an AI model built for tasks like data analysis and autonomous operations. AIvalanche DeFAI Agents extends its capabilities by integrating tokenised AI and DeFAI agents into the Avalanche C-Chain. The platform combines Avalanche's efficiency with AI functionality, letting users create, manage, and deploy AI agents with minimal effort. Users can use AIvalanche DeFAI Agents to develop AI agents and explore ways to monetise them. The decentralised framework enables trustless transactions, altering the way AI ownership and interaction take place.

Key features of AIvalanche DeFAI agents

– Create and manage AI agents: Users can build AI agents in just a few clicks. Each agent has a dedicated page outlining its capabilities.
– Co-ownership of AI agents: Anyone can invest in AI agents early by acquiring tokens before they gain mainstream attention. Users can also engage with established AI agents while trading their tokens.
– Monetising AI agents: AI agents evolve by learning from new data. They have their own wallets and can execute transactions, manage tasks, and distribute revenue.

Support from key players in the Avalanche ecosystem

AIvalanche DeFAI Agents has gained recognition in the Avalanche ecosystem, receiving support from entities like Avalaunch and AVenturesDAO.
Avalaunch provides a launchpad for Avalanche-based projects, while AVenturesDAO is a community-driven investment group. Their involvement highlights growing interest in decentralised AI and DeFAI agents.

Expanding access through public sales and listings

AIvalanche DeFAI Agents is currently conducting a public sale across several launchpads, including Ape Terminal, Polkastarter, Avalaunch, and Seedify. The platforms enable broader participation in the Web3 AI agent economy. Following the public sale, the platform plans to list its AVAXAI token on centralised exchanges like Gate.io and MEXC. The listings could improve accessibility and liquidity and increase the platform's adoption. As AI and decentralised finance (DeFi) continue to intersect, AIvalanche DeFAI Agents aims to establish itself in the space.

(Photo by Unsplash)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

The post AVAXAI brings DeepSeek to Web3 with decentralised AI agents appeared first on AI News.
-
Since AI hit the mainstream in the last few years, industries across the world have been feeling the positive impacts. From helping humanity clean up our oceans to helping doctors detect cancers earlier than ever before, AI's potential and impact are growing by the day. Regardless of whether this is powered by a supercomputer, edge computing methods, or a traditional data centre, society is truly feeling the positive effect of advances in the AI industry.

However, with fresh innovation have always come questions about environmental impact. These concerns are gaining momentum, especially around the energy consumption associated with the increased processing power required to run increasingly large systems. The United Nations Environment Programme recently expressed concerns about rising levels of e-waste and, more specifically, cooling considerations for data centres. This follows similar concerns from academia, which has flagged that a larger carbon footprint might be the price we are paying for innovation. Add to this the fact that governments globally are implementing new regulations and reporting requirements as part of initiatives to curb the impact of climate change, such as the EU's Circular Economy Action Plan (CEAP), and it becomes clear this issue is coming to the forefront of the AI agenda.

Analysts around the globe are also beginning to focus on this, with Gartner naming energy-efficient computing as a top technology trend for 2025 as organisations come under pressure to show they are considering the impact AI is having on the environment.

Businesses that are not considering sustainability as part of their core AI infrastructure, expansion, or technology strategies risk hindering their own progress. Failing to keep pace with sustainable practices can cause reputational damage, as organisations may be seen as behind the curve in an increasingly sustainability-focused world, alongside risking non-compliance with regulation. Returning to the earlier example of e-waste: if organisations are found not to be properly recycling devices (for instance, through programmes such as our Global Take Back Service), they could face negative feedback and lose business as a result of a poor brand image.

With these factors in mind, it's clear that businesses must consider building a sustainable AI framework that supports operational efficiency and encourages business growth. Prioritising the implementation of technologies that limit energy consumption can be a huge help when it comes to ensuring regulatory compliance and meeting wider sustainability goals. These technologies also help organisations future-proof against market instability through reduced reliance on energy, while strengthening brand reputation in an increasingly environmentally-conscious world.

This is within easy reach for many businesses, as there is a large number of offerings in the market that balance sustainability efforts with high processing capabilities. At ASUS, we have partnered with Intel to provide servers that prioritise energy efficiency. There is so much available to businesses today if they choose to take the initial step: a strategy that implements technologies balancing regulatory pressures, customer expectations, and overall business goals will help organisations feel confident that innovation won't come at a cost to the environment.
IDC has raised this with its audience, recently releasing predictions for the technology industry which outlined: "To address the environmental challenges of harnessing AI's benefits, enterprises are turning to Sustainable AI Frameworks that focus on minimising the environmental impact of artificial intelligence by addressing key elements such as energy efficiency, resource optimisation, and e-waste reduction."

As AI innovation continues to grow, alongside market pressure, it will become clear which organisations within the market are able to cope and which will be left behind. Ultimately, those who choose to embed sustainability into AI strategies will lead the way.

(Photo by Angela Benito)

See also: French initiative for responsible AI leaders

The post Sustainability is key in 2025 for businesses to advance AI efforts appeared first on AI News.
-
Enterprise AI transformation is reaching a tipping point. In the Asia Pacific, Zebra Technologies has unveiled ambitious plans to change frontline operations across the region. At a time when CISQ estimates that poor software quality cost US businesses $2.41 trillion in 2022, the push for practical, results-driven AI implementation is urgent.

"Elements of our three-pillar strategy have been around for quite some time, but what's revolutionising the frontline today is intelligent automation," Tom Bianculli, Chief Technology Officer at Zebra Technologies, told reporters at a briefing during Zebra's 2025 Kickoff in Perth, Australia last week. "We're not just digitising workflows – we're connecting wearable technology with robotic workflows, enabling frontline workers to seamlessly interact with automation in ways that were impossible just five years ago."

Practical applications driving change

The real-world impact of enterprise AI transformation is already evident in Zebra's recent collaboration with a major North American retailer. The solution combines traditional AI with generative AI capabilities, enabling fast shelf analysis and automated task generation. "You snap a picture of a shelf, [and] within one second, the traditional AI identifies all the products on the shelf, identifies where there's missing product, maybe misplaced product… and then it makes that information available to a Gen AI agent that then decides what should you do," Bianculli explains.

This level of automation has demonstrated significant operational improvements, reducing staffing requirements at the retailer by 25%. When it detects missing stock, the system automatically generates tasks for the right personnel, streamlining what was previously a multi-step manual process.
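Zebra hasn't published the implementation behind this pipeline, but the two-stage pattern Bianculli describes – a fast vision model producing structured findings, handed to a generative agent that decides what to do – can be sketched roughly as follows. Everything here (the types, thresholds, stub detector, and task logic) is a hypothetical stand-in for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ShelfFinding:
    product: str
    expected: int  # planogram facings expected on the shelf
    observed: int  # facings actually detected in the photo

def detect_shelf_state(image_bytes: bytes) -> list[ShelfFinding]:
    """Stand-in for the 'traditional AI' vision step. In a real system this
    would run an object-detection model over the shelf photo; here it just
    returns canned findings so the pipeline is runnable."""
    return [
        ShelfFinding("cereal-500g", expected=12, observed=3),
        ShelfFinding("oat-milk-1l", expected=8, observed=8),
    ]

def decide_tasks(findings: list[ShelfFinding]) -> list[str]:
    """Stand-in for the Gen AI agent: turns structured findings into tasks.
    In production this would be a prompted LLM routing work to personnel."""
    tasks = []
    for f in findings:
        if f.observed < f.expected // 2:  # hypothetical restock threshold
            tasks.append(f"Restock {f.product}: {f.observed}/{f.expected} facings")
    return tasks

if __name__ == "__main__":
    findings = detect_shelf_state(b"<jpeg bytes from a handheld device>")
    for task in decide_tasks(findings):
        print(task)
```

The design point is the hand-off: the vision stage emits structured data rather than prose, so the agent stage reasons over facts it can act on.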
APAC leading AI adoption

The Asia Pacific region is emerging as a frontrunner in enterprise AI transformation. IBM research presented at the briefing indicates that 54% of APAC enterprises now expect AI to deliver longer-term innovation and revenue generation benefits. The region's AI investment priorities for 2025 are clearly defined:

– 21% focused on enhancing customer experiences
– 18% directed toward business process automation
– 16% invested in sales automation and customer lifecycle management

Ryan Goh, Senior Vice President and General Manager of Asia Pacific at Zebra Technologies, points to practical implementations that are already driving results: "We have customers in e-commerce using ring scanners to scan packages, significantly improving their productivity compared to traditional scanning methods."

Innovation at the edge

Zebra's approach to AI deployment encompasses:

– AI devices with native neural architecture for on-device processing
– Multimodal experiences that mirror human cognitive capabilities
– Gen AI agents optimising workload distribution between edge and cloud

The company is advancing its activities in edge computing, with Bianculli revealing plans for on-device language models. This innovation mainly targets environments where internet connectivity is restricted or prohibited, ensuring AI capabilities remain accessible regardless of network conditions.

Regional market dynamics

The enterprise AI transformation journey varies significantly across APAC markets. India's landscape is particularly dynamic, with the country's GDP projected to grow 6.6% and manufacturing expected to surge by 7% year-on-year. Its commitment to AI is evident, with 96% of organisations surveyed by the WEF actively running AI programmes.

Japan presents a different scenario, with 1.2% projected GDP growth and some unique challenges to automation adoption. "We used to think that tablets are for retail, but the Bay Area proved us wrong," Goh notes, highlighting unexpected applications in manufacturing and customer self-service solutions.

Future trajectory

Gartner's projections indicate that by 2027, 25% of CIOs will implement augmented connected workforce initiatives that will halve the time required for competency development. Zebra is already moving in this direction with its Z word companion, which uses generative AI and large language models and is scheduled for pilot deployment with select customers in Q2 of this year.

With a global presence spanning more than 120 offices in 55 countries and over 10,000 channel partners across 185 countries, Zebra is positioned to play strongly in the enterprise AI transformation across APAC. As the region moves from AI experimentation to full-scale deployment, the focus remains on delivering practical innovations that drive measurable business outcomes and operational efficiency.

See also: Walmart and Amazon drive retail transformation with AI

The post Zebra Technologies and enterprise AI in the APAC appeared first on AI News.
-
ESSEC Business School and Accenture have announced the launch of a new initiative, 'AI for Responsible Leadership,' which marks the 10th anniversary of the establishment of the Chair at ESSEC, titled the ESSEC Accenture Strategic Business Analytics Chair.

The initiative aims to encourage leaders to use artificial intelligence in ways that are responsible and ethical, and that lead to high levels of professional performance. It aims to provide current and future leaders with the skills they will require when facing future challenges, whether economic, environmental, or social.

Several organisations support the initiative, including institutions, businesses, and specialised groups such as the ESSEC Metalab for Data, Technology & Society and Accenture Research. Executive Director of the ESSEC Metalab, Abdelmounaim Derraz, spoke of the collaboration, saying, "Technical subjects are continuing to shake up business schools, and AI has opened up opportunities for collaboration between partner companies, researchers, and other members of the ecosystem (students, think tanks, associations, [and] public service)."

ESSEC and Accenture aim to integrate perspectives from multiple fields of expertise, an approach that is the result of experimentation in the decade the Chair has existed. The elements of the initiative include workshops and talks designed to promote the exchange of knowledge and methods. It will also include a 'barometer' to help track AI's implementation and overall impact on responsible leadership. The initiative will engage with a network of institutions and academic publications, and an annual Grand Prix will recognise projects that explore the subject of AI and leadership.

Fabrice Marque, founder of the initiative and the current holder of the ESSEC Accenture Strategic Business Analytics Chair, said, "For years, we have explored the potential of using data and artificial intelligence in organisations. The synergies we have developed with our partners (Accenture, Accor, Dataiku, Engie, Eurofins, MSD, Orange) allowed us to evaluate and test innovative solutions before deploying them.

"With this initiative, we're taking a major step: bringing together an engaged ecosystem to sustainably transform how leaders think, decide, and act in the face of tomorrow's challenges. Our ambition is clear: to make AI a lever for performance, innovation and responsibility for […] leaders."

Managing Director at Accenture and sponsor of the ESSEC/Accenture Chair and initiative, Aurélien Bouriot, said, "The ecosystem will benefit from the resources that Accenture puts at its disposal, and it will also benefit our employees who participate."

Laetitia Cailleteau, Managing Director at Accenture and leader of Responsible AI & Generative AI for Europe, highlighted the importance of future leaders understanding all aspects of AI: "AI is a pillar of the ongoing industrial transformation. Tomorrow's leaders must understand the technical, ethical, and human aspects and risks – and know how to manage them. In this way, they will be able to maximise value creation and generate a positive impact for the organisation, its stakeholders and society as a whole."

Image credit: Wikimedia Commons

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek
The post French initiative for responsible AI leaders appeared first on AI News.
-
OpenAI is releasing a powerful agentic capability that enables ChatGPT to conduct complex, multi-step research tasks online. The feature, called Deep Research, reportedly achieves in tens of minutes what could take a human researcher hours or even days.

OpenAI describes Deep Research as a significant milestone in its journey toward artificial general intelligence (AGI). "The ability to synthesise knowledge is a prerequisite for creating new knowledge," says OpenAI. "For this reason, Deep Research marks a significant step toward our broader goal of developing AGI."

Agentic AI enables ChatGPT to assist with complex research

Deep Research empowers ChatGPT to find, analyse, and synthesise information from hundreds of online sources autonomously. With just a prompt from the user, the tool can deliver a comprehensive report, comparable to the output of a research analyst, according to OpenAI. Drawing capabilities from a variant of OpenAI's upcoming "o3" model, the aim is to free users from time-consuming, labour-intensive information gathering. Whether it's a competitive analysis of streaming platforms, an informed policy review, or even personalised recommendations for a new commuter bike, Deep Research promises precise and reliable results.

Importantly, every output includes full citations and transparent documentation—enabling users to verify the findings with ease. The tool appears particularly adept at uncovering niche or non-intuitive insights, making it an invaluable asset across industries like finance, science, policymaking, and engineering. But OpenAI also envisions Deep Research being useful for the average user, such as shoppers looking for hyper-personalised recommendations or a specific product.

"people will post lots of great examples, but here is a fun one: i am in japan right now and looking for an old NSX. i spent hours searching unsuccessfully for the perfect one. i was about to give up and deep research just…found it." — Sam Altman (@sama) February 3, 2025

This latest agentic capability operates through the user interface of ChatGPT; users simply select the "Deep Research" option in the message composer and type their query. Supporting files or spreadsheets can also be uploaded for additional context. Once initiated, the AI embarks on a rigorous multi-step process, which may take 5-30 minutes to complete. A sidebar provides updates on the actions taken and the sources consulted. Users can carry on with other tasks and will be notified when the final report is ready.

The results are presented in the chat as detailed, well-documented reports. In the coming weeks, OpenAI plans to enhance these outputs further by embedding images, data visualisations, and graphs to deliver even greater clarity and context. Unlike GPT-4o – which excels in real-time, multimodal conversations – Deep Research prioritises depth and detail. Its ability to rigorously cite sources and provide comprehensive analysis sets it apart—shifting the focus from fast, summarised answers to well-documented, research-grade insights.

Built for real-world challenges

Deep Research leverages sophisticated training methodologies, grounded in real-world browsing and reasoning tasks across diverse domains. Its model was trained via reinforcement learning to autonomously plan and execute multi-step research processes, including backtracking and adaptively refining its approach as new information becomes available.
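OpenAI hasn't published Deep Research's internals, but the behaviour it describes – plan, search, read, then refine the plan as new information arrives – follows a familiar agentic loop. Here is a minimal, purely illustrative sketch of that loop; `web_search` and `summarise` are hypothetical stubs standing in for real retrieval and model calls, not OpenAI APIs.

```python
# A conceptual sketch of the plan -> search -> read -> refine loop described
# above. The stubs below return canned data so the control flow is runnable.

def web_search(query: str) -> list[dict]:
    """Stand-in retrieval step; returns sources as {'url': ..., 'text': ...}."""
    return [{"url": f"https://example.com/{hash(query) % 100}",
             "text": f"notes on {query}"}]

def summarise(question: str, notes: list[str]) -> tuple[str, list[str]]:
    """Stand-in LLM step: drafts an answer and proposes follow-up queries
    until it judges it has gathered enough material (here: three sources)."""
    follow_ups = [] if len(notes) >= 3 else [f"{question} (more detail #{len(notes) + 1})"]
    return f"Draft based on {len(notes)} sources", follow_ups

def deep_research(question: str, max_steps: int = 10) -> tuple[str, list[str]]:
    queue, notes, citations = [question], [], []
    for _ in range(max_steps):
        if not queue:
            break                             # research plan exhausted
        query = queue.pop(0)
        for source in web_search(query):
            notes.append(source["text"])
            citations.append(source["url"])   # keep citations for the report
        draft, follow_ups = summarise(question, notes)
        queue.extend(follow_ups)              # adaptively refine the plan
    return draft, citations

report, sources = deep_research("impact of the EU AI Act on startups")
print(report)
print(f"{len(sources)} sources cited")
```

The real system adds the pieces this sketch omits – reinforcement-learned planning, backtracking when a line of inquiry fails, and sentence-level citation – but the shape of the loop is the same.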
The tool can browse user-uploaded files, generate and iterate on graphs using Python, embed media such as generated images and web pages into responses, and cite exact sentences or passages from its sources. The result of this extensive training is a highly capable agent for tackling complex real-world problems.

OpenAI evaluated Deep Research on a broad set of expert-level exams known as "Humanity's Last Exam". The exams – comprising over 3,000 questions covering topics from rocket science and linguistics to ecology and classics – test an AI's competence in solving multifaceted problems. The results were impressive, with the model achieving a record-breaking 26.6% accuracy across these domains:

– GPT-4o: 3.3%
– Grok-2: 3.8%
– Claude 3.5 Sonnet: 4.3%
– OpenAI o1: 9.1%
– DeepSeek-R1: 9.4%
– Deep Research: 26.6% (with browsing + Python tools)

Deep Research also set a new state-of-the-art on the GAIA benchmark, which evaluates AI models on real-world questions requiring reasoning, multi-modal fluency, and tool-use proficiency, topping the leaderboard with a score of 72.57%.

Limitations and challenges

While the Deep Research agentic AI capability in ChatGPT signifies a bold step forward, OpenAI acknowledges that the technology is still in its early stages and comes with limitations. The system occasionally "hallucinates" facts or offers incorrect inferences, albeit at a notably reduced rate compared to existing GPT models, according to OpenAI. It also faces challenges in differentiating between authoritative sources and speculative content, and it struggles to calibrate its confidence levels—often displaying undue certainty for potentially uncertain findings. Minor formatting errors in reports and citations, as well as delays in initiating tasks, could also frustrate initial users. OpenAI says these issues are expected to improve over time with more usage and iterative refinements.

OpenAI is rolling out the capability gradually, starting with Pro users, who will have access to up to 100 queries per month. Plus and Team tiers will follow, with Enterprise access arriving next. UK, Swiss, and European Economic Area residents are not yet able to access the feature, but OpenAI says it's working on expanding its rollout to these regions. In the weeks ahead, OpenAI will expand the feature to ChatGPT's mobile and desktop platforms. The long-term vision includes enabling connections to subscription-based or proprietary data sources, further enhancing the robustness and personalisation of its outputs.

Looking further ahead, OpenAI envisions integrating Deep Research with "Operator," an existing chatbot capability that takes real-world actions. This integration would allow ChatGPT to seamlessly handle tasks that require both asynchronous online research and real-world execution.

(Photo by John Schnobrich)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

The post ChatGPT gains agentic capability for complex research appeared first on AI News.
-
OpenAI offers two versions of its chatbot, ChatGPT-4 and ChatGPT-3.5, each catering to different needs. ChatGPT-4 is the more advanced option, providing improved accuracy and reasoning, while ChatGPT-3.5 remains a solid choice, especially for those looking for a free AI tool. The right model depends on user needs – whether that’s a more powerful AI for complex tasks or a simple, accessible chatbot for everyday use.

Both models are built on the same foundational AI concepts, but they have notable differences. ChatGPT-4 offers more advanced reasoning, a larger context window, and multimodal capabilities, making it better suited to complex problem-solving and content generation. In contrast, ChatGPT-3.5 is designed for general-purpose tasks and is easier to access since it’s free. While ChatGPT-4 requires a subscription, ChatGPT-3.5 is available at no cost, making it a practical option for casual users who don’t need advanced features.

Who should choose ChatGPT-4?

ChatGPT-4 is designed for users who need a more powerful AI model that can handle both text and image inputs. It supports longer conversations, making it helpful for users who want thorough, context-rich interactions. It also supports internet browsing on specific plans, allowing for limited real-time information retrieval.

However, this model is only available with subscription plans, which begin at $20 per month for individual users and progress to higher-tier options for teams and enterprises. While these plans offer extra features like a larger context window and better performance, they also require a financial commitment that may be unnecessary for users with basic AI needs.

Businesses and professionals looking for a scalable AI solution will likely prefer ChatGPT-4, which provides more sophisticated responses, advanced reasoning, and additional enterprise features. Its ability to process multimodal inputs, evaluate data, and manage longer conversations makes it a more effective tool for professional and research-based tasks.

Who should choose ChatGPT-3.5?

ChatGPT-3.5 remains a viable alternative for users looking for a free AI chatbot that does not require a subscription. It can perform a variety of general tasks, including answering questions, drafting text, and offering conversational support. While it lacks multimodal capabilities and has a smaller context window than ChatGPT-4, it is still a reliable tool for many common uses.

The setup process is straightforward – users simply need to create an OpenAI account to start using the model via the web or through the mobile apps. It supports voice interactions on mobile devices, making it convenient for hands-free use.

Making the right choice: ChatGPT-4 or ChatGPT-3.5?

For those deciding between the two, the choice largely depends on the intended use. ChatGPT-4 is the better option for users who require higher accuracy and enhanced reasoning, and is well suited to professionals, researchers, and businesses seeking a more powerful AI tool. In comparison, ChatGPT-3.5 is ideal for users who need a simple, user-friendly AI model capable of handling a wide range of tasks. For developers, the same decision reduces to a single parameter when calling the models programmatically, as the sketch below shows.
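A minimal sketch using OpenAI’s official Python SDK, assuming an API key is set in the environment. Note that “gpt-4o” and “gpt-3.5-turbo” are API-side model names from the GPT-4 and GPT-3.5 families, not labels used by the chatbot tiers discussed above; swap in whichever models your account offers:

```python
# Minimal sketch: the same request routed to either model family.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

def ask(question: str, advanced: bool = False) -> str:
    # Choose the more capable (and more expensive) family only when needed.
    model = "gpt-4o" if advanced else "gpt-3.5-turbo"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(ask("Summarise the trade-offs between GPT-4 and GPT-3.5."))
```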
Are there better AI alternatives?

While ChatGPT-4 and ChatGPT-3.5 are both capable AI tools, they may not be everyone’s cup of tea. Users looking for a free, multimodal AI tool with extensive real-time web search capabilities may find other models more suitable. Similarly, people who need AI specifically for coding and development may prefer a model optimised for those tasks. OpenAI’s models are designed to be general-purpose, and they may not meet the needs of users requiring highly specialised AI applications.

For those exploring alternatives, Google Gemini, Anthropic Claude, and Microsoft Copilot are among the top competitors in the AI chatbot space. Google Gemini, previously known as Bard, integrates deeply with Google Search and offers strong multimodal capabilities; many users appreciate its accessibility and free-tier offerings. Anthropic’s Claude is another option, particularly for those focused on ethical AI development and security; it features one of the largest context windows available, making it suitable for long-form content generation. Meanwhile, Microsoft Copilot integrates with Microsoft 365 applications and Bing, providing an AI assistant that fits seamlessly into productivity and development workflows.

(Photo by Unsplash)

See also: Microsoft and OpenAI probe alleged data theft by DeepSeek

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post ChatGPT-4 vs. ChatGPT-3.5: Which to use? appeared first on AI News.

View the full article
-
2025 is here and investment in AI and software development is booming. Outsourcing software development has become important for businesses striving to remain competitive. Building and maintaining an in-house software development team can be costly, time-consuming, and resource-intensive, so many organisations are turning to outsourced software development companies to meet their technological needs.

This article explores the benefits of partnering with an outsourced software development company, providing a comprehensive understanding of how the approach can drive innovation, reduce costs, and accelerate growth. From access to global talent to enhanced scalability, outsourcing offers advantages that can impact a company’s bottom line.

1. Cost efficiency

One of the primary drivers for outsourcing software development is cost efficiency. Developing software in-house involves expenses like recruitment, training, salaries, benefits, office space, equipment, and more. Costs can quickly escalate, especially for small to medium-sized businesses with limited budgets.

Key cost-reduction factors:

Lower labour costs: Companies outsource software development to tap into talent from regions where labour costs are significantly lower. For instance, developers in Eastern Europe, Asia, or Latin America often charge a fraction of what their counterparts in North America or Western Europe do.

No overhead costs: By outsourcing, companies eliminate the need for physical office space, infrastructure, and utilities, as these are covered by the outsourcing partner.

Flexible payment models: Many software development outsourcing companies offer flexible payment models, like hourly rates or fixed project fees, allowing companies to choose the most cost-effective option.

Example: A US-based company might pay $120,000 annually for an in-house developer. Outsourcing the same role to a highly skilled developer in India or Ukraine could cost as little as $30,000 annually.

2. Access to global talent and expertise

The global talent pool for software development is large and highly skilled. By outsourcing, companies can access experienced professionals who specialise in different technologies and methodologies. This advantage is particularly significant when dealing with niche technologies or emerging fields like artificial intelligence, blockchain, or the internet of things.

Benefits of global talent access:

Diverse skill sets: Software outsourcing companies often employ teams with a broad range of technical expertise, ensuring that clients receive the right talent for their specific needs.

Continuous learning: Developers in outsourcing firms typically work across diverse projects and industries, making them adept at solving complex challenges.

Quick ramp-up times: Instead of spending months hiring and training, companies can draw on the expertise of an established development team immediately.

Example: A startup requiring expertise in machine learning can partner with an outsourced team that already has a proven track record in building similar solutions, eliminating the need to hire and train a new team in-house.

3. Using artificial intelligence consultancy

Trump’s $500 billion Stargate Initiative highlights the role of an artificial intelligence consultancy and development company in ensuring business continuity. As AI continues to transform industries, partnering with an outsourced software development company that offers AI consultancy can unlock innovation and efficiency.
AI-powered solutions can automate processes, enhance customer experiences, and provide valuable insights through data analytics.

Benefits of AI consultancy:

Custom AI solutions: Outsourced teams with AI expertise can develop tailored solutions that address specific business challenges, like predictive analytics, chatbots, or recommendation systems.

Expert guidance: Collaborating with experienced AI consultants ensures that companies adopt the most effective strategies and technologies for their needs.

Scalable AI development: Outsourcing enables businesses to scale their AI initiatives without investing heavily in in-house expertise.

4. Focus on core business activities

Outsourcing software development allows businesses to focus on their core competencies rather than getting distracted by technical complexities. By delegating software development tasks to an external team, companies can allocate internal resources to other, more strategic activities.

Key benefits:

Reduced management overhead: Outsourcing partners handle day-to-day operations, project management, and quality assurance, freeing up time for company leadership to focus on core business activities.

Enhanced productivity: Internal teams can concentrate on areas where they add the most value, like business strategy or product development.

Streamlined operations: Outsourcing reduces the administrative burden of managing an in-house development team, like performance reviews and other HR matters.

5. Scalability and flexibility

The ability to scale up or down quickly is important. Software development outsourcing provides scalability, enabling companies to adjust their team size and project scope without long-term commitments.

How outsourcing supports scalability:

On-demand resources: Outsourcing partners can provide additional developers, designers, or testers as project requirements grow.

Project-based engagements: Companies can hire outsourcing firms for specific software development projects, avoiding the need to maintain full-time staff once the project ends.

Seasonal flexibility: Businesses experiencing seasonal spikes can scale up their development team temporarily, then scale down after the peak period.

Example: A retail company launching a new e-commerce platform ahead of the holiday season can partner with an outsourcing firm to build and deploy the platform quickly, then reduce the team size after the holiday rush.

6. Faster time-to-market

Speed is an important factor in software development, especially in competitive industries. Outsourcing lets companies accelerate development timelines by using pre-established teams.

Contributors to faster delivery:

Round-the-clock development: By outsourcing to teams in different time zones, companies can achieve continuous development cycles, reducing overall project duration.

Experienced teams: Established outsourcing firms have standardised processes and best practices in place, ensuring rapid development without compromising quality.

Focus on deadlines: Outsourcing partners are contractually obligated to meet project timelines, providing an additional layer of accountability.

Example: A fintech startup can partner with an outsourced team to launch a minimum viable product (MVP) in three months, gaining a competitive advantage in the market.

7. Access to advanced tools and technologies

Keeping up with the latest tools, frameworks, and technologies can be challenging for in-house teams.
Outsourcing companies invest heavily in state-of-the-art development tools and training to remain competitive, giving their clients access to cutting-edge solutions.

Benefits of advanced tools:

Optimised workflows: Outsourced teams use advanced project management and collaboration tools.

Modern development practices: Many outsourcing firms specialise in Agile, DevOps, and other modern methodologies that accelerate the software development process.

Cybersecurity expertise: Established outsourcing partners have robust security protocols to safeguard sensitive data, reducing the risk of breaches.

8. Risk mitigation

Software development projects are inherently risky, with challenges ranging from budget overruns to missed deadlines and technical failures. Outsourcing partners help mitigate these risks by offering specialised expertise and proven processes.

How outsourcing reduces risk:

Established processes: Reputable outsourcing firms have defined workflows and quality assurance protocols to minimise errors.

Shared accountability: Contracts with outsourcing partners can include SLAs that define deliverables, timelines, and quality standards.

Flexible problem-solving: Outsourced teams have experience navigating complex projects, making them adept at identifying and resolving potential issues early.

Example: A healthcare company developing a compliance-heavy application can rely on an outsourced team’s expertise in navigating regulatory requirements, ensuring the product meets industry standards.

9. Innovation and competitive advantage

Outsourcing software development can drive innovation by bringing fresh perspectives and ideas to the table. External teams often work across various industries, enabling them to apply best practices and creative solutions to new challenges.

Key innovation drivers:

Cross-industry expertise: Outsourced teams draw insights from diverse projects, fostering innovative solutions.

Access to emerging technologies: Many outsourcing firms specialise in cutting-edge technologies, helping companies stay ahead of market trends.

Collaborative ideation: Partnering with an external team encourages knowledge sharing and brainstorming.

Example: A retail brand looking to integrate augmented reality into its shopping experience can collaborate with an outsourced team specialising in the technology to create an engaging customer experience.

10. Long-term partnership opportunities

Many businesses view outsourcing not just as a one-time solution but as a long-term strategic partnership. Building a relationship with a trusted outsourcing partner can lead to consistent quality and innovation.

Benefits of long-term collaboration:

Deeper understanding: Over time, outsourced teams gain a better understanding of the client’s business goals and processes, resulting in more tailored solutions.

Cost savings: Long-term partnerships often come with negotiated rates and better resource allocation, reducing overall costs.

Reliability: Partnering with a reliable outsourcing company ensures consistent delivery and support for future projects.

Conclusion

Partnering with an outsourced software development company offers numerous benefits, from cost efficiency and global talent access to innovation and scalability. By using the expertise of external teams, companies can achieve their goals without overextending internal resources.
While outsourcing is not without its challenges, like communication barriers and cultural differences, these can be mitigated by choosing the right partner and establishing clear expectations from the outset. Ultimately, the decision to outsource software development can be a game-changer for businesses looking to innovate, grow, and thrive.

(Image source: Unsplash)

The post The benefits of partnering with outsourced developers appeared first on AI News.

View the full article
-
Next week marks the beginning of a new era for AI regulations as the first obligations of the EU AI Act take effect. While the full compliance requirements won’t come into force until mid-2025, the initial phase begins on February 2nd and includes significant prohibitions on specific AI applications. Businesses across the globe that operate in the EU must now navigate a regulatory landscape with strict rules and high stakes.

The new regulations prohibit the deployment or use of several high-risk AI systems. These include applications such as social scoring, emotion recognition, real-time remote biometric identification in public spaces, and other scenarios deemed unacceptable under the Act. Companies found in violation of the rules could face penalties of up to 7% of their global annual turnover, making it imperative for organisations to understand and comply with the restrictions.

Early compliance challenges

“It’s finally here,” says Levent Ergin, Chief Strategist for Climate, Sustainability, and AI at Informatica. “While we’re still in a phased approach, businesses’ hard-earned preparations for the EU AI Act will now face the ultimate test.”

Ergin highlights that even though most compliance requirements will not take effect until mid-2025, the early prohibitions set a decisive tone. “For businesses, the pressure in 2025 is twofold. They must demonstrate tangible ROI from AI investments while navigating challenges around data quality and regulatory uncertainty. It’s already the perfect storm, with 89% of large businesses in the EU reporting conflicting expectations for their generative AI initiatives. At the same time, 48% say technology limitations are a major barrier to moving AI pilots into production,” he remarks.

Ergin believes the key to compliance and success lies in data governance. “Without robust data foundations, organisations risk stagnation, limiting their ability to unlock AI’s full potential. After all, isn’t ensuring strong data governance a core principle that the EU AI Act is built upon?”

To adapt, companies must prioritise strengthening their approach to data quality. “Strengthening data quality and governance is no longer optional, it’s critical. To ensure both compliance and prove the value of AI, businesses must invest in making sure data is accurate, holistic, integrated, up-to-date and well-governed,” says Ergin. “This isn’t just about meeting regulatory demands; it’s about enabling AI to deliver real business outcomes. As 82% of EU companies plan to increase their GenAI investments in 2025, ensuring their data is AI-ready will be the difference between those who succeed and those who remain in the starting blocks.”

The EU AI Act has no borders

The extraterritorial scope of the EU AI Act means non-EU organisations are assuredly not off the hook. As Marcus Evans, a partner at Norton Rose Fulbright, explains, the Act applies far beyond the EU’s borders.

“The AI Act will have a truly global application,” says Evans. “That’s because it applies not only to organisations in the EU using AI or those providing, importing, or distributing AI to the EU market, but also to AI provision and use where the output is used in the EU. So, for instance, a company using AI for recruitment in the EU – even if it is based elsewhere – would still be captured by these new rules.”

Evans advises businesses to start by auditing their AI use.
“At this stage, businesses must first understand where AI is being used in their organisation so that they can then assess whether any use cases may trigger the prohibitions. Building on that initial inventory, a wider governance process can then be introduced to ensure AI use is assessed, remains outside the prohibitions, and complies with the AI Act.”

While organisations work to align their AI practices with the new regulations, additional challenges remain. Compliance requires addressing other legal complexities such as data protection, intellectual property (IP), and discrimination risks.

Evans emphasises that raising AI literacy within organisations is also a critical step. “Any organisations in scope must also take measures to ensure their staff – and anyone else dealing with the operation and use of their AI systems on their behalf – have a sufficient level of AI literacy,” he states. “AI literacy will play a vital role in AI Act compliance, as those involved in governing and using AI must understand the risks they are managing.”

Encouraging responsible innovation

The EU AI Act is being hailed as a milestone for responsible AI development. By prohibiting harmful practices and requiring transparency and accountability, the regulation seeks to balance innovation with ethical considerations.

“This framework is a pivotal step towards building a more responsible and sustainable future for artificial intelligence,” says Beatriz Sanz Sáiz, AI Sector Leader at EY Global.

Sanz Sáiz believes the legislation fosters trust while providing a foundation for transformative technological progress. “It has the potential to foster further trust, accountability, and innovation in AI development, as well as strengthen the foundations upon which the technology continues to be built,” Sanz Sáiz asserts. “It is critical that we focus on eliminating bias and prioritising fundamental rights like fairness, equity, and privacy. Responsible AI development is a crucial step in the quest to further accelerate innovation.”

What’s prohibited under the EU AI Act?

To ensure compliance, businesses need to be crystal-clear on which activities fall under the EU AI Act’s strict prohibitions. The current list of prohibited activities includes the following (a minimal sketch of how such a list can seed an internal AI inventory follows the list):

- Harmful subliminal, manipulative, and deceptive techniques
- Harmful exploitation of vulnerabilities
- Unacceptable social scoring
- Individual crime risk assessment and prediction (with some exceptions)
- Untargeted scraping of internet or CCTV material to develop or expand facial recognition databases
- Emotion recognition in areas such as the workplace and education (with some exceptions)
- Biometric categorisation to infer sensitive categories (with some exceptions)
- Real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes (with some exceptions)
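As a concrete illustration of the “initial inventory” Evans describes, here is a minimal sketch in Python. It is not an official compliance tool; the category labels are paraphrased from the list above, and the fields and flagging logic are hypothetical:

```python
# Toy AI use-case inventory for a first-pass EU AI Act audit.
# Illustrative only: categories are paraphrased from the Act's
# prohibited-practice list; fields and flagging logic are hypothetical.
from dataclasses import dataclass

PROHIBITED_CATEGORIES = {
    "social_scoring",
    "emotion_recognition_workplace",
    "untargeted_facial_scraping",
    "realtime_remote_biometric_id",
    "subliminal_manipulation",
}

@dataclass
class AIUseCase:
    name: str
    business_owner: str
    category: str      # coarse label chosen during the audit
    used_in_eu: bool   # extraterritorial scope: is the output used in the EU?

    def triggers_prohibition(self) -> bool:
        return self.used_in_eu and self.category in PROHIBITED_CATEGORIES

inventory = [
    AIUseCase("CV screening assistant", "HR", "recruitment_scoring", used_in_eu=True),
    AIUseCase("Office mood analytics", "Facilities", "emotion_recognition_workplace", used_in_eu=True),
]

for uc in inventory:
    flag = "REVIEW: possible prohibition" if uc.triggers_prohibition() else "ok"
    print(f"{uc.name}: {flag}")
```

A real governance process would be far richer, covering the Act’s risk tiers and documentation duties, but even a simple register like this makes the first step, knowing where AI is used, tractable.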
The Commission’s forthcoming guidance on which “AI systems” fall under these categories will be critical for businesses seeking to ensure compliance and reduce legal risks. Additionally, companies should anticipate further clarification and resources at the national and EU levels, such as the upcoming webinar hosted by the AI Office.

A new landscape for AI regulations

The early implementation of the EU AI Act represents just the beginning of what is a remarkably complex and ambitious regulatory endeavour. As AI continues to play an increasingly pivotal role in business strategy, organisations must learn to navigate new rules and continuously adapt to future changes.

For now, businesses should focus on understanding the scope of their AI use, enhancing data governance, educating staff to build AI literacy, and adopting a proactive approach to compliance. By doing so, they can position themselves as leaders in a fast-evolving AI landscape and unlock the technology’s full potential while upholding ethical and legal standards.

(Photo by Guillaume Périgois)

See also: ChatGPT Gov aims to modernise US government agencies

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post EU AI Act: What businesses need to know as regulations go live appeared first on AI News.

View the full article
-
Microsoft and OpenAI are investigating a potential breach of the AI firm’s system by a group allegedly linked to Chinese AI startup DeepSeek.

According to Bloomberg, the investigation stems from suspicious data extraction activity detected in late 2024 via OpenAI’s application programming interface (API), sparking broader concerns over international AI competition. Microsoft, OpenAI’s largest financial backer, first identified the large-scale data extraction and informed the ChatGPT maker of the incident. Sources believe the activity may have violated OpenAI’s terms of service, or that the group may have exploited loopholes to bypass restrictions limiting how much data they could collect.

DeepSeek has quickly risen to prominence in the competitive AI landscape, particularly with the release of its latest model, R1, on 20 January. Billed as a rival to OpenAI’s ChatGPT in performance but developed at a significantly lower cost, R1 has shaken up the tech industry. Its release triggered a sharp decline in tech and AI stocks that wiped billions from US markets in a single week.

David Sacks, the White House’s newly appointed “crypto and AI czar,” alleged that DeepSeek may have employed questionable methods to achieve its AI’s capabilities. In an interview with Fox News, Sacks pointed to evidence suggesting that DeepSeek had used “distillation” to train its AI models on outputs from OpenAI’s systems.

“There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI’s models, and I don’t think OpenAI is very happy about this,” Sacks told the network.

Model distillation involves training one AI system using data generated by another, potentially allowing a competitor to develop similar functionality. When applied without proper authorisation, the method has stirred ethical and intellectual property debates as the global race for AI supremacy heats up.

OpenAI declined to comment specifically on the accusations against DeepSeek but acknowledged the broader risk posed by model distillation, particularly from Chinese companies. “We know PRC-based companies — and others — are constantly trying to distill the models of leading US AI companies,” a spokesperson for OpenAI told Bloomberg.
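To make the distillation technique described above concrete, here is a minimal, hypothetical sketch of the general recipe: a “teacher” model’s outputs are harvested as supervised training data for a smaller “student”. It illustrates the concept only and does not represent how any company named in this article actually operated:

```python
# Conceptual sketch of model distillation: a teacher model's answers
# become the student's fine-tuning data. Hypothetical and simplified.
import json

prompts = [
    "Explain photosynthesis in one paragraph.",
    "Write a Python function that reverses a string.",
]

def teacher_generate(prompt: str) -> str:
    # Stand-in for querying a large "teacher" model via its API.
    raise NotImplementedError("placeholder for a teacher-model call")

# 1) Harvest teacher outputs for a corpus of prompts.
records = [{"prompt": p, "completion": teacher_generate(p)} for p in prompts]

# 2) Write them out as a supervised fine-tuning dataset.
with open("distillation_data.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")

# 3) A smaller "student" model is then fine-tuned on this dataset,
#    learning to imitate the teacher's behaviour at lower cost.
```

Done with a provider’s permission, distillation is a standard model-compression technique; the dispute here is about doing it against a provider’s terms of service.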
Geopolitical and security concerns

Growing tensions around AI innovation now extend into national security. CNBC reported that the US Navy has banned its personnel from using DeepSeek’s products, citing fears that the Chinese government could exploit the platform to access sensitive information. In an email dated 24 January, the Navy warned its staff against using DeepSeek AI “in any capacity” due to “potential security and ethical concerns associated with the model’s origin and usage.”

Critics have highlighted DeepSeek’s privacy policy, which permits the collection of data such as IP addresses, device information, and even keystroke patterns, a scope of data collection considered excessive by some experts.

Just fyi, @deepseek_ai collects your IP, keystroke patterns, device info, etc etc, and stores it in China, where all that data is vulnerable to arbitrary requisition from the State. From their own privacy policy: pic.twitter.com/wueJokHcn3

— Luke de Pulford (@lukedepulford) January 27, 2025

Earlier this week, DeepSeek stated it was facing “large-scale malicious attacks” against its systems. A banner on its website informed users of a temporary sign-up restriction.

The growing competition between the US and China in the AI sector has underscored wider concerns regarding technological ownership, ethical governance, and national security. Experts warn that as AI systems advance and become increasingly integral to global economic and strategic planning, disputes over data usage and intellectual property are only likely to intensify.

Accusations such as those against DeepSeek amplify alarm over China’s rapid development in the field and its potential quest to bypass US-led safeguards through reverse engineering and other means. While OpenAI and Microsoft continue their investigation into the alleged misuse of OpenAI’s platform, businesses and governments alike are paying close attention. The case could set a precedent for how AI developers police model usage and enforce terms of service.

For now, the response from both US and Chinese stakeholders highlights how AI innovation has become not just a race for technological dominance, but a fraught geopolitical contest shaping 21st-century power dynamics.

(Image by Mohamed Hassan)

See also: Qwen 2.5-Max outperforms DeepSeek V3 in some benchmarks

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Microsoft and OpenAI probe alleged data theft by DeepSeek appeared first on AI News.

View the full article
-
Alibaba’s response to DeepSeek is Qwen 2.5-Max, the company’s latest Mixture-of-Experts (MoE) large-scale model. Qwen 2.5-Max was pretrained on over 20 trillion tokens and fine-tuned with techniques like Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF).

With the API now available through Alibaba Cloud and the model accessible for exploration via Qwen Chat, the Chinese tech giant is inviting developers and researchers to see its breakthroughs firsthand.

Outperforming peers

Comparing Qwen 2.5-Max’s performance against some of the most prominent AI models on a variety of benchmarks yields promising results. Evaluations included popular benchmarks like MMLU-Pro for college-level problem-solving, LiveCodeBench for coding expertise, LiveBench for overall capabilities, and Arena-Hard for assessing models against human preferences.

According to Alibaba, “Qwen 2.5-Max outperforms DeepSeek V3 in benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, while also demonstrating competitive results in other assessments, including MMLU-Pro.”

(Credit: Alibaba)

The instruct model – designed for downstream tasks like chat and coding – competes directly with leading models such as GPT-4o, Claude-3.5-Sonnet, and DeepSeek V3. Among these, Qwen 2.5-Max managed to outperform its rivals in several key areas.

Comparisons of base models also yielded promising outcomes. While proprietary models like GPT-4o and Claude-3.5-Sonnet remained out of reach due to access restrictions, Qwen 2.5-Max was assessed against leading public options such as DeepSeek V3, Llama-3.1-405B (the largest open-weight dense model), and Qwen2.5-72B. Again, Alibaba’s newcomer demonstrated exceptional performance across the board. “Our base models have demonstrated significant advantages across most benchmarks,” Alibaba stated, “and we are optimistic that advancements in post-training techniques will elevate the next version of Qwen 2.5-Max to new heights.”

The burst of DeepSeek V3 has attracted attention from the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and post-trained with curated SFT and RLHF recipes. It achieves competitive… pic.twitter.com/oHVl16vfje

— Qwen (@Alibaba_Qwen) January 28, 2025

Making Qwen 2.5-Max accessible

To make the model more accessible to the global community, Alibaba has integrated Qwen 2.5-Max with its Qwen Chat platform, where users can interact directly with the model in various capacities – whether exploring its search capabilities or testing its understanding of complex queries.

For developers, the Qwen 2.5-Max API is now available through Alibaba Cloud under the model name “qwen-max-2025-01-25”. Interested users can get started by registering an Alibaba Cloud account, activating the Model Studio service, and generating an API key. The API is even compatible with OpenAI’s ecosystem, making integration straightforward for existing projects and workflows, as the sketch below illustrates. This compatibility lowers the barrier for those eager to test their applications with the model’s capabilities.
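Because of that OpenAI-compatible design, existing SDK code needs little more than a different base URL and key. In the minimal Python sketch below, the model name is the one Alibaba published; the endpoint URL and the DASHSCOPE_API_KEY variable are assumptions based on Alibaba Cloud’s compatible-mode conventions, so check the current documentation before relying on them:

```python
# Minimal sketch of calling Qwen 2.5-Max through the OpenAI-compatible API.
# The model name comes from Alibaba's announcement; the base_url and the
# DASHSCOPE_API_KEY variable are assumptions about Alibaba Cloud's
# compatible-mode setup and should be verified against current docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-max-2025-01-25",
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
)
print(response.choices[0].message.content)
```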
Alibaba has made a strong statement of intent with Qwen 2.5-Max. The company’s ongoing commitment to scaling AI models is not just about improving performance benchmarks but also about enhancing the fundamental thinking and reasoning abilities of these systems.

“The scaling of data and model size not only showcases advancements in model intelligence but also reflects our unwavering commitment to pioneering research,” Alibaba noted.

Looking ahead, the team aims to push the boundaries of reinforcement learning to foster even more advanced reasoning skills. This, they say, could enable their models to not only match but surpass human intelligence in solving intricate problems.

The implications for the industry could be profound. As scaling methods improve and Qwen models break new ground, we are likely to see further ripples across AI-driven fields globally, like those seen in recent weeks.

(Photo by Maico Amorim)

See also: ChatGPT Gov aims to modernise US government agencies

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Qwen 2.5-Max outperforms DeepSeek V3 in some benchmarks appeared first on AI News.

View the full article
-
OpenAI has launched ChatGPT Gov, a specially designed version of its AI chatbot tailored for use by US government agencies. ChatGPT Gov aims to harness the potential of AI to enhance efficiency, productivity, and service delivery while safeguarding sensitive data and complying with stringent security requirements.

“We believe the US government’s adoption of artificial intelligence can boost efficiency and productivity and is crucial for maintaining and enhancing America’s global leadership in this technology,” explained OpenAI.

The company emphasised how its AI solutions present “enormous potential” for tackling complex challenges in the public sector, ranging from improving public health and infrastructure to bolstering national security. By introducing ChatGPT Gov, OpenAI hopes to offer tools that “serve the national interest and the public good, aligned with democratic values,” while assisting policymakers in responsibly integrating AI to enhance services for the American people.

The role of ChatGPT Gov

Public sector organisations can deploy ChatGPT Gov within their own Microsoft Azure environments, either through Azure’s commercial cloud or the specialised Azure Government cloud. This self-hosting capability ensures that agencies can meet strict security, privacy, and compliance standards, such as IL5, CJIS, ITAR, and FedRAMP High. OpenAI believes this infrastructure will not only help facilitate compliance with cybersecurity frameworks, but also speed up internal authorisation processes for handling non-public sensitive data.

The tailored version of ChatGPT incorporates many of the features found in the enterprise version, including:

- The ability to save and share conversations within a secure government workspace.
- Uploading text and image files for streamlined workflows.
- Access to GPT-4o, OpenAI’s state-of-the-art model capable of advanced text interpretation, summarisation, coding, image analysis, and mathematics.
- Customisable GPTs, which enable users to create and share models tailored to their agency’s needs.
- A built-in administrative console to help CIOs and IT departments manage users, groups, security protocols such as single sign-on (SSO), and more.

These features ensure that ChatGPT Gov is not merely a tool for innovation, but an infrastructure supportive of secure and efficient operations across US public-sector entities.

OpenAI says it’s actively working to achieve FedRAMP Moderate and High accreditations for its fully managed SaaS product, ChatGPT Enterprise, a step that would bolster trust in its AI offerings for government use. Additionally, the company is exploring ways to expand ChatGPT Gov’s capabilities into Azure’s classified regions for even more secure environments. “ChatGPT Gov reflects our commitment to helping US government agencies leverage OpenAI’s technology today,” the company said.

A better track record in government than most politicians

Since January 2024, ChatGPT has seen widespread adoption among US government agencies, with over 90,000 users across more than 3,500 federal, state, and local agencies having sent more than 18 million messages to support a variety of operational tasks.

Several notable agencies have highlighted how they are employing OpenAI’s tools for meaningful outcomes:

- The Air Force Research Laboratory uses ChatGPT Enterprise for administrative purposes, including improving access to internal resources, basic coding assistance, and boosting AI education efforts.
- Los Alamos National Laboratory leverages ChatGPT Enterprise for scientific research and innovation, including work within its Bioscience Division, which is evaluating how GPT-4o can safely advance bioscientific research in laboratory settings.
- The State of Minnesota’s Enterprise Translations Office uses ChatGPT Team to provide faster, more accurate translation services to multilingual communities across the state. The integration has resulted in significant cost savings and reduced turnaround times.
- The Commonwealth of Pennsylvania’s pioneering AI pilot programme found that ChatGPT Enterprise helped employees reduce routine task times, such as analysing project requirements, by approximately 105 minutes per day on the days they used the tool.

These early use cases demonstrate the transformative potential of AI applications across various levels of government.

Beyond delivering tangible improvements to government workflows, OpenAI seeks to foster public trust in artificial intelligence through collaboration and transparency. The company said it is committed to working closely with government agencies to align its tools with shared priorities and democratic values. “We look forward to collaborating with government agencies to enhance service delivery to the American people through AI,” OpenAI stated.

As other governments across the globe begin adopting similar technologies, America’s proactive approach may serve as a model for integrating AI into the public sector while safeguarding against risks. Whether supporting administrative workflows, research initiatives, or language services, ChatGPT Gov stands as a testament to the growing role AI will play in shaping the future of effective governance.

(Photo by Dave Sherrill)

See also: Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post ChatGPT Gov aims to modernise US government agencies appeared first on AI News.

View the full article
-
Ericsson has launched Cognitive Labs, a research-driven initiative dedicated to advancing AI for telecoms. Operating virtually rather than from a single physical base, Cognitive Labs will explore AI technologies such as Graph Neural Networks (GNNs), Active Learning, and Large-Scale Language Models (LLMs).

According to Ericsson, these innovations form the backbone of the company’s solutions for the next generation of mobile communications, and they signal the company’s commitment to extending AI’s transformative potential to other sectors, such as healthcare. The initiative places a strong emphasis on open collaboration, engaging with the open-source community to further the development of AI technologies for wider societal benefit.

“Cognitive Labs represents our commitment to lead innovation in AI and make a significant contribution to the open source community,” said Jean-Christophe Laneri, Head of Cognitive Network Solutions at Ericsson. “Our mission is to create world-class research that can benefit the academic community and enhances our products to better serve our customers.”

A triad of Ericsson AI labs

Central to the Cognitive Labs initiative are three distinct research arms, each focused on a specialised area of AI:

- GAI Lab (Geometric Artificial Intelligence Lab): This lab explores Geometric AI, emphasising explainability in geometric learning, graph generation, and temporal GNNs (a toy illustration of the underlying graph message-passing idea follows this list). Ericsson anticipates the work will make significant contributions to areas like drug discovery and enable the creation of more efficient pharmaceuticals.
- MLR Lab (Machine Learning and Reasoning Lab): Focusing on training model optimisation and reinforcement learning, this lab aims to advance energy-efficient training for AI models and support the creation of digital twins that simulate physical realities. The potential benefits include reductions in time and energy costs, key priorities in sustainable AI development.
- FAI Lab (Fundamental Artificial Intelligence Lab): Dedicated to foundational AI models like Large-Scale Language Models, this lab seeks to shape the future of AI applications in telecoms. Its primary mission is to automate processes and improve technological efficiencies crucial to the sector.

By organising Cognitive Labs into these specialised units, Ericsson looks to tackle diverse challenges while maintaining a cohesive focus on leveraging AI for wide-ranging applications.
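For readers unfamiliar with the graph-learning techniques the GAI Lab will study, here is a toy sketch of the core GNN idea: one round of neighbour message passing over a small graph. It is a teaching illustration in plain NumPy, not Ericsson’s code, and the graph, features, and weights are made up:

```python
# One round of GNN-style message passing on a toy 4-node graph.
# Each node averages its neighbours' feature vectors (plus its own),
# applies a "learned" linear map and a nonlinearity; stacking such
# rounds lets information propagate across the graph. Illustrative only.
import numpy as np

# Adjacency matrix of a small undirected graph (e.g. 4 network cells).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                      # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # degree normalisation

X = np.random.rand(4, 3)   # 3 made-up features per node
W = np.random.rand(3, 3)   # weight matrix, random here instead of learned

# Message passing: normalised neighbour aggregation, then transform + ReLU.
H = np.maximum(0, D_inv @ A_hat @ X @ W)
print(H)   # new node embeddings, each now informed by its neighbourhood
```

Temporal GNNs extend the same idea with a time dimension, which is what makes them attractive for networks whose topology and traffic evolve.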
Open-source collaboration

In line with its ties to the open-source community, Cognitive Labs prioritises the development of reusable AI libraries. This accessibility helps both researchers and industry professionals accelerate progress across multiple domains, from healthcare to digital communications. Additionally, Ericsson hopes to attract leading global researchers and data scientists, enhancing its credentials as a leader in AI innovation.

Spain is emerging as a cornerstone for Cognitive Labs, with Ericsson Spain playing a leading role due to its robust R&D activities in Madrid and Málaga. This focus has been bolstered by a new agreement with Universidad Pontificia Comillas, specifically with its School of Engineering (Comillas ICAI). The alliance between Ericsson and ICAI is set to accelerate AI research through joint publications and active participation in open-source initiatives. Teams of Ericsson’s top engineers and data scientists will collaborate closely with the university’s research teams to drive innovation at both national and European levels.

“The close collaboration between top data scientists and engineers from Ericsson and ICAI’s teams will enable Ericsson Spain to lead innovation at the national and European levels, strengthening its position as a benchmark in technology and development,” Ericsson stated.

The partnership underscores Ericsson’s commitment not only to its Cognitive Labs initiative but also to developing Spain’s expertise in cutting-edge AI technologies. Through these R&D efforts, the company seeks to reinforce its position as a leading innovator in telecoms and beyond.

With AI set to play a critical role in shaping the next generation of mobile communications, Ericsson’s Cognitive Labs marks a vital step forward, not only for the company but also for industries and communities that stand to benefit from its far-reaching, transformative potential.

See also: World Economic Forum unveils blueprint for equitable AI

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Ericsson launches Cognitive Labs to pioneer telecoms AI research appeared first on AI News.

View the full article
-
DeepSeek is grappling with service disruptions and restricting new account sign-ups to combat what it describes as “large-scale malicious attacks.” The Chinese firm’s chat app, which recently soared to the top of Apple’s App Store, issued a notice on its website stating that only users with China-based phone numbers (+86) would be permitted to register for the foreseeable future.

In the notice, DeepSeek said: “DeepSeek’s online services have recently faced large-scale malicious attacks. To ensure continued service, registration is temporarily limited to +86 phone numbers. Existing users can log in as usual.”

As of writing, sign-ups have reopened with a warning that “registration may be busy” and a request to wait and try again. Despite the challenges, existing users remain unaffected in terms of access. DeepSeek’s status page reported ongoing issues on Monday morning, adding to a turbulent day for the breakthrough AI platform.

DeepSeek faces test after explosive growth

DeepSeek has quickly become a disruptive force in the race for AI supremacy. In addition to gaining widespread user adoption, the app’s powerful open-source AI model has sparked waves of concern among US-based Big Tech firms.

The app’s strength lies in its ability to deliver robust AI performance on less-advanced chips, creating a more cost-effective and accessible solution than high-profile rivals such as OpenAI’s ChatGPT. This key differentiation shook investor confidence on Monday, leading to a significant dip in share prices for major US technology firms. Industry watchers suggest that such shocks may become more frequent as innovative competitors like DeepSeek challenge the dominance of traditional tech players.

Nigel Green, CEO of global financial advisory firm deVere Group, commented: “China’s technological advances, particularly in AI, are eroding the US’s ability to use tariffs as a tool to maintain global supremacy.

“The balance of power is shifting, and Washington must recognise that it can’t always dictate terms to Beijing as it once did. This new reality will have far-reaching consequences for investors and policymakers.

“AI will be the defining technology of our time, and the race to dominate it will shape global markets for decades to come.”

Green went on to highlight the potential for non-traditional players and regions to seize the AI crown. “Traditional tech giants are no longer the guaranteed winners. The focus must now shift to sectors and regions that are driving the next wave of innovation. This includes not only AI but also the critical infrastructure needed to support and secure it.”

The latest developments come against the broader canvas of growing competition between China and the US in AI and emerging technologies. Despite export restrictions on US hardware, Beijing’s AI sector has been powering ahead, spurred by significant government investment and a burgeoning ecosystem of innovative startups.

The rise of apps like DeepSeek signals that the playing field is no longer tilted decisively in favour of Silicon Valley. For years, US companies have been buoyed by leading semiconductor technologies from domestic firms like NVIDIA. China’s ability to deliver competitive AI capabilities on less-advanced hardware could mark a significant turning point.

However, while AI innovation is ramping up globally, DeepSeek’s struggles highlight the growing pains that can accompany explosive growth.
(Photo by Markus Spiske)

See also: Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post DeepSeek restricts sign-ups amid ‘large-scale malicious attacks’ appeared first on AI News.

View the full article
-
OpenAI has unveiled Operator, a tool that integrates seamlessly with web browsers to perform tasks autonomously. From filling out forms to ordering groceries, Operator promises to simplify repetitive online activities by interacting directly with websites through clicks, typing, and scrolling.

Designed around a new model called the Computer-Using Agent (CUA), Operator combines GPT-4o’s vision recognition with advanced reasoning capabilities, allowing it to function as a virtual “human-in-the-browser.” Yet, for all its innovation, industry experts see room for refinement.

Yiannis Antoniou, Head of AI, Data, and Analytics at specialist consultancy Lab49, shared his insights on Operator’s significance and its positioning in the competitive landscape of agentic AI systems.

Agentic AI through a familiar interface

“OpenAI’s announcement of Operator, its latest foray into the agentic AI wars, is both fascinating and incomplete,” said Antoniou, who has over two decades of experience designing AI systems for financial services firms. “Clearly influenced by Anthropic Claude’s Computer Use system, introduced back in October, Operator streamlines the experience by removing the need for complex infrastructure and focusing on a familiar interface: the browser.”

By designing Operator to work within an environment users already understand, the web browser, OpenAI sidesteps the need for bespoke APIs or integrations. “By leveraging the world’s most popular interface, OpenAI enhances the user experience and captures immediate interest from the general public. This browser-centric approach creates significant potential for widespread adoption, something Anthropic – despite its early-mover advantage – has struggled to achieve.”

Unlike some competing systems that may feel technical or niche in their application, Operator’s browser-focused framework lowers the barrier to entry and is a step forward in OpenAI’s efforts to democratise AI.

Unique take on usability and security

One of the hallmarks of Operator is its emphasis on adaptability and security, implemented through human-in-the-loop protocols. Antoniou acknowledged these thoughtful usability features but noted that more work is needed.

“Architecturally, Operator’s browser integration closely mirrors Claude’s system. Both involve taking screenshots of the user’s browser and sending them for analysis, as well as controlling the screen via virtual keystrokes and mouse movements. However, Operator introduces thoughtful usability touches.

“Features like custom instructions for specific websites add a layer of personalisation, and the emphasis on human-in-the-loop safeguards against unauthorised actions – such as purchases, sending emails, or applying for jobs – demonstrates OpenAI’s awareness of the potential security risks posed by malicious websites, but more work is clearly needed to make this system widely safe across a variety of scenarios.”

OpenAI has implemented a multi-layered safety framework for Operator, including a takeover mode for secure inputs, user confirmations prior to significant actions, and monitoring systems to detect adversarial behaviour. Furthermore, users can delete browsing data and manage privacy settings directly within the tool. However, Antoniou emphasised that these measures are still evolving, particularly as Operator encounters complex or sensitive tasks.
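The screenshot-and-control architecture Antoniou describes can be summarised as a perception-action loop. The sketch below is a conceptual illustration only; all helpers (capture_screenshot, vision_model_decide, click, and so on) are hypothetical stand-ins for the model call and browser automation, not OpenAI’s or Anthropic’s implementation:

```python
# Conceptual perception-action loop behind browser agents like Operator.
# Every helper here is a hypothetical stand-in, named only to show the
# control flow: perceive the page, reason over pixels, act, repeat.

def run_browser_agent(task: str, max_steps: int = 50) -> None:
    for _ in range(max_steps):
        screenshot = capture_screenshot()               # 1. perceive the page
        action = vision_model_decide(task, screenshot)  # 2. reason over pixels

        if action.kind == "done":
            break
        if action.requires_confirmation:            # human-in-the-loop gate:
            if not ask_user_to_confirm(action):     # purchases, emails, etc.
                continue                            # user vetoed; replan

        if action.kind == "click":
            click(action.x, action.y)               # 3. act via virtual input
        elif action.kind == "type":
            type_text(action.text)
        elif action.kind == "scroll":
            scroll(action.delta)
```

Each iteration sends a fresh screenshot to the model, which is what lets such an agent recover when a page changes underneath it.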
OpenAI Operator further democratises AI

Antoniou also sees the release of Operator as a pivotal moment for the consumer AI landscape, albeit one that is still in its early stages.

“Overall, this is an excellent first attempt at building an agentic system for everyday users, designed around how they naturally interact with technology. As the system develops – with added capabilities and more robust security controls – this limited rollout, priced at $200/month, will serve as a testing ground.

“Once matured and extended to lower subscription tiers and the free version, Operator has the potential to usher in the era of consumer-facing agents, further democratising AI and embedding it into daily life.”

Designed initially for Pro users at a premium price point, Operator gives OpenAI an opportunity to learn from early adopters and refine its capabilities. Antoniou noted that while $200/month might not yet justify the system’s value for most users, investment in making Operator more powerful and accessible could lead to significant competitive advantages for OpenAI in the long run.

“Is it worth $200/month? Perhaps not yet. But as the system evolves, OpenAI’s moat will grow, making it harder for competitors to catch up. Now, the challenge shifts back to Anthropic and Google – both of whom have demonstrated similar capabilities in niche or engineering-focused products – to respond and stay in the game,” he concludes.

As OpenAI continues to fine-tune Operator, its potential to revolutionise how people interact with technology becomes apparent. From collaborations with companies like Instacart, DoorDash, and Uber to use cases in the public sector, Operator aims to balance innovation with trust and safety. While early limitations and pricing may deter widespread adoption for now, these hurdles might prove temporary as OpenAI commits to enhancing usability and accessibility over time.

See also: OpenAI argues against ChatGPT data deletion in Indian court

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Yiannis Antoniou, Lab49: OpenAI Operator kickstarts era of browser AI agents appeared first on AI News.

View the full article
-
When it comes to movies buzzing with Oscar potential, Brady Corbet’s The Brutalist is a standout this awards season. The visually stunning drama transports viewers to the post-World War II era, unravelling the story of László Tóth, played by Adrien Brody. Tóth, a fictional Hungarian-Jewish architect, starts over in the United States after being forced to leave his family behind as he emigrates.

Beyond its vintage allure, something modern brews in the background: the use of AI. Specifically, AI was employed to refine Brody’s and co-star Felicity Jones’ Hungarian pronunciation. The decision has sparked lively debate about technology’s role in film-making.

The role of AI in The Brutalist

According to Dávid Jancsó, the film’s editor, the production team turned to Respeecher, AI software developed by a Ukrainian company, to tweak the actors’ Hungarian dialogue. Speaking to RedShark News (as cited by Mashable SEA), Jancsó explained that Hungarian – a Uralic language known for its challenging sounds – was a significant hurdle for the actors, despite their talent and dedication.

Respeecher’s software isn’t magic, but just a few years ago it would have seemed wondrous. It creates a voice model based on a speaker’s characteristics and adjusts specific elements, like pronunciation. In this case, it was used to fine-tune the letter and vowel sounds that Brody and Jones found tricky. Most of the corrections were minimal, with Jancsó himself providing some replacement sounds to preserve the authenticity of the performances. “Most of their Hungarian dialogue has a part of me talking in there,” he joked, emphasising the care taken to maintain the actors’ original delivery.

Respeecher: AI behind the scenes

This is not Respeecher’s first foray into Hollywood. The software is known for restoring iconic voices like that of Darth Vader for the Obi-Wan Kenobi series, and it has recreated Edith Piaf’s voice for an upcoming biopic. Outside of film, Respeecher has helped preserve endangered languages like Crimean Tatar.

For The Brutalist, the AI tool wasn’t just a luxury – it was a time and budget saver. With so much dialogue in Hungarian, editing every line by hand would have been painstaking work. Jancsó said that using AI sped up the process significantly, an important factor given the film’s modest $10 million budget.

Beyond voice: AI’s other roles in the film

AI was also used in other aspects of the production, for example to generate some of Tóth’s architectural drawings and complete buildings in the film’s Venice Biennale sequence. However, director Corbet has clarified that these images were not fully AI-generated; the AI was used only for specific background elements.

Corbet and Jancsó have been candid about their perspectives on AI in film-making. Jancsó sees it as a valuable tool, saying, “There’s nothing in the film using AI that hasn’t been done before. It just makes the process a lot faster.” Corbet added that the software’s purpose was to enhance authenticity, not replace the actors’ hard work.

A broader conversation

The debate surrounding AI in the film industry isn’t new. From script-writing to music production, concerns about generative AI’s impact were central to the 2023 Writers Guild of America (WGA) and SAG-AFTRA strikes. Although agreements have been reached to regulate the use of AI, the topic remains a hot-button issue.

The Brutalist awaits a possible Oscar nomination. From its story line to its cinematic style, the film wears its ambition on its sleeve.
It’s not just a celebration of the postwar Brutalist architectural movement; it’s also a nod to classic American cinema. Shot in the rarely used VistaVision format, the film captures the grandeur of mid-20th-century film-making. Adding to its nostalgic charm, it includes a 15-minute intermission during its epic three-and-a-half-hour runtime.

Yet its use of AI has added a new dimension to the ongoing conversation about the technology’s place in the creative industries. Whether people see AI as a betrayal of craftsmanship or as an exciting, innovative tool that can enhance a final creation, one thing is certain: AI continues to transform how stories are delivered on screen.

See also: AI music sparks new copyright battle in US courts

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post How AI helped refine Hungarian accents in The Brutalist appeared first on AI News.

View the full article
-
OpenAI, the powerhouse behind ChatGPT, is ramping up efforts to dominate the enterprise market with a suite of AI tools tailored for business users. The company recently revealed its plans to introduce a series of enhancements designed to make AI integration seamless for companies of all sizes. This includes updates to its flagship AI agent technology, expected to transform workplace productivity by automating complex workflows, from financial analysis to customer service.

“Businesses are looking for solutions that go beyond surface-level assistance. Our agents are designed to provide in-depth, actionable insights,” said Sarah Friar, CFO of OpenAI. “This is particularly relevant as enterprises seek to streamline operations in today’s competitive landscape.”

OpenAI’s corporate strategy builds on its ongoing collaborations with tech leaders such as Microsoft, which has already integrated OpenAI’s technology into its Azure cloud platform. Analysts say these partnerships position OpenAI to rival established enterprise solutions providers like Salesforce and Oracle.

AI research assistant tools

As part of its enterprise-focused initiatives, OpenAI is emphasising the development of AI research tools that cater to specific industries. For instance, its AI models are being trained on legal and medical data to create highly specialised assistants that could redefine research-intensive sectors. This focus aligns with the broader market demand for AI-driven solutions that enhance decision-making and efficiency.

Infrastructure for expansion

OpenAI’s rapid growth strategy is supported by a robust infrastructure push. The company has committed to building state-of-the-art data centres in Europe and Asia, aiming to lower latency and improve service reliability for global users. These investments reflect OpenAI’s long-term vision of becoming a critical enabler in the AI-driven global economy.

Challenges and issues

However, challenges persist. The company faces mounting pressure from regulators concerned about data privacy and the ethical implications of deploying powerful AI tools. Critics also question the sustainability of OpenAI’s ambitious growth targets, given its significant operational costs and strong competition from other tech giants.

Despite these hurdles, OpenAI remains optimistic about its trajectory. With plans to unveil its expanded portfolio at the upcoming Global AI Summit, the company is well-positioned to strengthen its foothold in the burgeoning AI enterprise market.

(Editor’s note: This article is sponsored by AI Tools Network)

See also: OpenAI argues against ChatGPT data deletion in Indian court

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI targets business sector with advanced AI tools appeared first on AI News.

View the full article
-
[AI] OpenAI argues against ChatGPT data deletion in Indian court
ChatGPT posted a topic in World News
OpenAI has argued in an Indian court that removing the training data behind its ChatGPT service would clash with its legal obligations in the United States. The statement was issued in response to a lawsuit filed by Indian news agency ANI, which accused the AI business of using its content without permission. The Microsoft-backed AI giant stated that Indian courts lack jurisdiction in the case since OpenAI has no offices or operations in the country.

In its January 10 filing to the Delhi High Court, OpenAI emphasised that it is already defending similar lawsuits in the US, where it is required to preserve its training data during ongoing litigation. The case, filed by ANI in November, is one of India’s most closely watched lawsuits involving the use of AI. ANI alleges that OpenAI utilised its published content without authorisation to train ChatGPT and is demanding the deletion of its data from the company’s systems.

A global battle over copyright and AI

OpenAI is no stranger to such disputes, facing a wave of lawsuits from copyright holders worldwide. In the US, the New York Times filed a similar case against the company, accusing it of misusing its content. OpenAI has consistently denied such allegations, claiming its systems rely on the fair use of publicly available data.

During a November hearing in Delhi, OpenAI told the court it would no longer use ANI’s content. However, ANI argued that its previously published material remains stored in ChatGPT’s repositories and must be deleted. In its rebuttal, OpenAI highlighted that it is legally obligated under US law to retain training data while related cases are pending. “The company is under a legal obligation, under the laws of the United States, to preserve, and not delete, the said training data,” OpenAI stated in its filing.

Jurisdiction dispute

OpenAI also argued that the relief ANI is seeking falls outside the jurisdiction of Indian courts. It pointed out that the company has “no office or permanent establishment in India,” and that its servers, which store ChatGPT’s training data, are located outside the country. ANI, which is partially owned by Reuters, countered that the Delhi court has the authority to hear the case and said it will file a detailed response. A Reuters spokesperson declined to comment on the proceedings but said the agency has no involvement in ANI’s business operations.

Concerns over competition

ANI has also expressed concern about unfair competition, citing OpenAI’s partnerships with major news organisations like Time Magazine, The Financial Times, and France’s Le Monde. ANI says that these agreements give OpenAI an edge. The agency further claims that ChatGPT reproduces verbatim or similar excerpts of its works in response to user prompts. OpenAI, for its part, claims that ANI deliberately used its own articles as prompts to “manipulate ChatGPT” in order to build its lawsuit.

The case is scheduled to be heard by the Delhi High Court on January 28. Meanwhile, OpenAI is transitioning from a non-profit to a for-profit company, having raised $6.6 billion last year. In recent months, OpenAI has secured high-profile deals with media outlets around the world, highlighting its efforts to strengthen its commercial partnerships while managing regulatory concerns worldwide.

(Photo by Unsplash)

See also: DeepSeek-R1 reasoning models rival OpenAI in performance

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post OpenAI argues against ChatGPT data deletion in Indian court appeared first on AI News.

View the full article
-
The World Economic Forum (WEF) has released a blueprint outlining how AI can drive inclusivity in global economic growth and societal progress. However, it also highlights the challenges in ensuring its benefits are equitably distributed across all nations and peoples.

Developed in partnership with KPMG, the blueprint offers nine strategic objectives to support government leaders, organisations, and key stakeholders through every phase of the AI lifecycle – from innovation to deployment – at local, national, and international levels. These strategies aim to bridge disparities in AI access, infrastructure, advanced computing, and skill development to promote sustainable, long-term growth.

Cathy Li, Head of AI, Data, and the Metaverse at the WEF, said: “Leveraging AI for economic growth and societal progress is a shared goal, yet countries and regions have very different starting points. This blueprint serves as a compass, guiding decision-makers toward impact-oriented collaboration and practical solutions that can unlock AI’s full potential.”

Call for regional collaboration and local empowerment

Central to the ‘Blueprint for Intelligent Economies’ is the belief that successful AI adoption must reflect the specific needs of local communities, backed by strong leadership and collaboration among governments, businesses, entrepreneurs, civil society organisations, and end users.

Solly Malatsi, South Africa’s Minister of Communications and Digital Technologies, commented: “The significant potential of AI remains largely untapped in many regions worldwide. Establishing an inclusive and competitive AI ecosystem will become a crucial priority for all nations. Collaboration among multiple stakeholders at the national, regional, and global levels will be essential in fostering growth and prosperity through AI for everyone.”

By tailoring approaches to reflect geographic and cultural nuances, the WEF report suggests, nations can create AI systems that address local challenges while also providing a solid foundation for innovation, investment, and ethical governance. Case studies from nations at varying stages of AI maturity are used throughout the report to illustrate practical, scalable solutions. For example, cross-border cooperation on shared AI frameworks and pooled resources (such as energy or centralised databanks) is highlighted as a way to overcome resource constraints. Public-private subsidies to make AI-ready devices more affordable offer another equitable way forward. These mechanisms aim to lower barriers for local businesses and innovators, enabling them to adopt AI tools and scale their operations.

Hatem Dowidar, Chief Executive Officer of E&, said: “All nations have a unique opportunity to advance their economic and societal progress through AI. This requires a collaborative approach of intentional leadership from governments supported by active engagement with all stakeholders at all stages of the AI journey. Regional and global collaborations remain fundamental pathways to address shared challenges and opportunities, ensure equitable access to key AI capabilities, and responsibly maximise its transformative potential for a lasting value for all.”

Priority focus areas

While the blueprint features nine strategic objectives, three have been singled out as priority focus areas for national AI strategies:

Building sustainable AI infrastructure

Resilient, scalable, and environmentally sustainable AI infrastructure is essential for innovation.
However, achieving this vision will require substantial investment, energy, and cross-sector collaboration. Nations must coordinate efforts to ensure that intelligent economies grow in both an equitable and eco-friendly manner.

Curating diverse and high-quality datasets

AI’s potential hinges on the quality of the data it can access. This strategic objective addresses barriers such as data accessibility, imbalance, and ownership. By ensuring that datasets are inclusive, diverse, and reflective of local languages and cultures, developers can create equitable AI models that avoid bias and meet the needs of all communities.

Establishing robust ethical and safety guardrails

Governance frameworks are critical for reducing risks like misuse, bias, and ethical breaches. By setting high standards at the outset, nations can cultivate trust in AI systems, laying the groundwork for responsible deployment and innovation. These safeguards are especially vital for promoting human-centred AI that benefits all of society.

The overall framework outlined in the report has three layers:

Foundation layer: Focuses on sustainable energy, diverse data curation, responsible AI infrastructure, and efficient investment mechanisms.

Growth layer: Embeds AI into workflows, processes, and devices to accelerate sectoral adoption and boost innovation.

People layer: Prioritises workforce skills, empowerment, and ethical considerations, ensuring that AI shapes society in a beneficial and inclusive way.

A blueprint for global AI adoption

The Forum is also championing a multi-stakeholder approach to global AI adoption, blending public and private collaboration. Policymakers are being encouraged to implement supportive legislation and incentives to spark innovation and broaden AI’s reach. Examples include lifelong learning programmes to prepare workers for the AI-powered future and financial policies that enable greater technology access in underserved regions.

The WEF’s latest initiative reflects growing global recognition that AI will be a cornerstone of the future economy. However, it remains clear that the benefits of this transformative technology will need to be shared equitably to drive societal progress and ensure no one is left behind. The Blueprint for Intelligent Economies provides a roadmap for nations to harness AI while addressing the structural barriers that could otherwise deepen existing inequalities. By fostering inclusivity, adopting robust governance, and placing communities at the heart of decision-making, the WEF aims to guide governments, businesses, and innovators toward a sustainable and intelligent future.

See also: *** Government signs off sweeping AI action plan

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post World Economic Forum unveils blueprint for equitable AI appeared first on AI News.

View the full article
-
AI coding tools leverage machine learning, deep learning, and natural language processing to assist developers in writing and optimising code. These tools are trained on vast code repositories and datasets, allowing them to analyse programming patterns and provide intelligent recommendations.

Types of AI coding tools

AI-powered coding tools can be categorised into several types based on their functionality:

AI code completion tools — Provide real-time suggestions and auto-complete lines of code.
AI code generators — Generate full scripts, functions, or even applications based on natural language prompts.
AI debugging and error detection tools — Identify syntax errors, logical issues, and security vulnerabilities.
AI test automation tools — Create and execute test cases with minimal human intervention.
AI code optimisation tools — Improve performance by suggesting better coding practices.
AI security and compliance tools — Analyse code for vulnerabilities and help developers follow security best practices.
AI documentation generators — Automate inline comments, API documentation, and explanations.

By incorporating these tools into their workflow, developers can write cleaner, more efficient code and reduce the time spent on repetitive tasks.

Best free AI coding tools for 2025

Below, we explore seven of the best free AI coding tools available today and provide an analysis of their features, benefits, and how they can enhance the software development process.

1. Qodo

Qodo is an AI-powered coding assistant designed to help developers generate, optimise, and debug code easily. It offers real-time code suggestions, detects syntax errors, and improves the overall quality of the codebase.

Key features:
Intelligent code completion: Predicts and suggests relevant code snippets.
AI-powered debugging: Identifies bugs and logical errors in real-time.
Code optimisation: Suggests performance improvements and cleaner code structures.
Multi-language support: Works with multiple programming languages.

Benefits: Qodo streamlines software development by reducing manual effort in writing and reviewing code. It is particularly beneficial for new developers who need guidance on best coding practices and for experienced programmers looking to improve efficiency.

Use cases:
Writing clean and optimised code.
Debugging and improving code quality.
Learning new programming concepts with AI-driven recommendations.

2. Kite

Kite was a popular AI-powered autocomplete tool that provided developers with real-time code suggestions and documentation assistance. Although it has been discontinued, it significantly influenced modern AI coding assistants.

Key features:
Python-focused autocompletion: Provided predictive code completions.
Inline documentation: Showed documentation snippets inside the IDE.
Machine learning-based suggestions: Improved over time with usage.

Benefits: Despite its discontinuation, Kite demonstrated the power of AI in streamlining software development, influencing the newer AI-powered coding tools available today.

Use cases:
Assisted Python developers with intelligent code suggestions.
Provided real-time documentation to reduce context switching.

3. Amazon CodeWhisperer

Amazon CodeWhisperer is a free AI-powered coding assistant that provides intelligent code completions and recommendations based on natural language input. It integrates seamlessly with AWS services, making it an excellent tool for cloud-based development.

Key features:
Real-time code suggestions: Predicts and generates relevant code snippets.
AWS cloud integration: Works with AWS Lambda, EC2, and other services.
Multi-language support: Supports Python, JavaScript, and more.

Benefits: Amazon CodeWhisperer boosts productivity by offering context-aware code recommendations, making it ideal for developers working with cloud-based applications.

Use cases:
Writing serverless applications on AWS.
Generating API integrations with minimal manual coding.
Reducing development time for cloud-based applications.

4. Ponicode

Ponicode is an AI-powered unit testing and code quality analysis tool that ensures high test coverage and optimised code.

Key features:
AI-generated unit tests: Automates test case creation.
Code quality analysis: Detects inefficiencies in the code.
Multi-language support: Works with JavaScript, Python, and more.

Benefits: Ponicode helps developers write reliable, maintainable code by automating testing and quality checks, making it an excellent tool for teams prioritising test-driven development (TDD).

Use cases:
Ensuring code coverage with AI-generated test cases.
Automating software testing processes.

5. IntelliCode

Microsoft IntelliCode enhances coding productivity by offering AI-powered autocomplete, refactoring suggestions, and best coding practices based on thousands of open-source projects.

Key features:
Context-aware code recommendations: Learns from open-source projects.
Automated code refactoring: Suggests optimised coding solutions.
Supports multiple languages: Works with Python, Java, C++, and more.

Benefits: IntelliCode enables developers to write maintainable code while minimising syntax errors and reducing repetitive coding tasks.

Use cases:
Improving team productivity by enforcing best coding practices.
Optimising legacy codebases.

6. YOGI Bot

YOGI Bot is an AI-powered chatbot that assists developers with debugging, code explanations, and best practice recommendations.

Key features:
AI-driven code assistance: Provides real-time support for debugging and explanations.
Multi-language support: Works across various programming languages.

Benefits: YOGI Bot is useful for both new and experienced developers who need immediate coding assistance and explanations.

Use cases:
Learning programming concepts interactively.
Troubleshooting coding errors with AI-driven assistance.

7. CodeT5

CodeT5, developed by Salesforce Research, is an AI-powered code generation and translation model that helps developers write high-quality code efficiently. A brief usage sketch follows this section.

Key features:
AI-generated code snippets: Produces entire code blocks.
Code translation and optimisation: Helps improve existing code.

Benefits: CodeT5 streamlines development by reducing manual effort and improving code quality.

Use cases:
Automating coding tasks for web and software development.
Enhancing productivity with AI-powered suggestions.
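To make the code-generation idea more concrete, here is a minimal sketch using the publicly released Salesforce/codet5-base checkpoint through the Hugging Face transformers library, following the model card's standard masked-span usage. The prompt string and generation length are illustrative assumptions, not values taken from this article, and the base checkpoint is a demonstration model rather than a polished assistant.

```python
# Minimal sketch: masked-span code generation with CodeT5 via Hugging Face
# transformers. Requires: pip install transformers torch
# The prompt and max_length below are illustrative choices (assumptions).
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# CodeT5 was pre-trained with span-masking objectives: <extra_id_0> marks
# the span we want the model to fill in.
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Generate a completion for the masked span and decode it to plain text.
generated_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

Task-specific fine-tuned variants of the same model family (for example, for code summarisation or translation) are invoked the same way, with the task expressed in the input text; which variant fits best depends on the workflow being automated.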
Why use AI coding tools?

AI coding tools are gaining popularity due to their ability to streamline development processes, reduce coding errors, and enhance productivity. Here’s why developers should consider integrating AI into their workflow:

Increased efficiency

AI-powered tools speed up coding by suggesting relevant code snippets, autocompleting functions, and automating repetitive tasks. This allows developers to focus on solving complex problems rather than spending time on boilerplate code.

Improved code quality

AI-driven code review tools analyse code for potential errors, security vulnerabilities, and performance issues, ensuring high-quality and optimised code.

Reduced debugging time

Manually finding and fixing bugs can be time-consuming. AI debugging tools help detect issues in real-time and provide potential fixes, reducing time spent on troubleshooting.

Enhanced collaboration

AI-powered tools facilitate team collaboration by providing code suggestions and documentation, making it easier for teams to work together efficiently.

Automated testing and deployment

AI-driven test automation ensures that applications are tested before deployment, reducing manual effort and potentially speeding up the software release cycle.

Better security

Security-focused AI tools can help identify vulnerabilities and suggest best practices, helping developers write more secure code that complies with industry standards.

Accessibility for beginners

AI coding assistants help beginners learn programming by providing real-time feedback, suggesting improvements, and simplifying complex coding concepts.

(Image source: Unsplash)

The post 7 top free AI coding tools appeared first on AI News.

View the full article
-
According to Rodolphe Malaguti, Product Strategy and Transformation at Conga, poor data structures and legacy systems are hindering the potential of AI to transform public services.

Taxpayer-funded services in the ***, from the NHS to local councils, are losing out on potential productivity savings of £45 billion per year due to an overwhelming reliance on outdated technology, a figure equivalent to the total cost of running every primary school in the country for a year.

A report published this week highlights how nearly half of public services are still not accessible online. This forces British citizens into time-consuming and frustrating processes such as applying for support in person, enduring long waits on hold, or travelling across town to council offices. Public sector workers are similarly hindered by inefficiencies, such as sifting through mountains of physical letters, which slow down response times and leave citizens to bear the brunt of government red tape.

“As this report has shown, there is clearly a gap between what the government and public bodies intend to achieve with their digital projects and what they actually deliver,” explained Malaguti. “The public sector still relies heavily upon legacy systems and has clearly struggled to tackle existing poor data structures and inefficiencies across key departments. No doubt this has had a clear impact on decision-making and hindered vital services for vulnerable citizens.”

The struggles persist even in deeply personal and critical scenarios. For example, registering a death still demands a physical presence, requiring grieving individuals to navigate cumbersome bureaucracy while mourning the loss of a loved one. Other outdated processes unnecessarily burden small businesses; one striking example is the need to publish notices in local newspapers simply to obtain a lorry licence, creating further delays and hindering economic growth.

A lack of coordination between departments amplifies these challenges. In some cases, government bodies are using over 500 paper-based processes, leaving systems fragmented and inefficient. Vulnerable individuals suffer disproportionately under this disjointed framework. For instance, patients with long-term health conditions can be forced into interactions with up to 40 different services, repeating the same information as departments repeatedly fail to share data.

“The challenge is that government leaders have previously focused on technology and online interactions, adding layers to services whilst still relying on old data and legacy systems—this has ultimately led to inefficiencies across departments,” added Malaguti. “Put simply, they have failed to address existing issues or streamline their day-to-day operations. It is critical that data is more readily available and easily shared between departments, particularly if leaders are hoping to employ new technology like AI to analyse this data and drive better outcomes or make strategic decisions for the public sector as a whole.”

Ageing infrastructure: high costs and security risks

The report underscores that ageing infrastructure comes at a steep financial and operational cost. More than one in four digital systems used across the ***’s central government are outdated, with the figure ballooning to 70 percent in some departments. Maintaining legacy systems also costs significantly more, up to three to four times as much as keeping technology up to date.
Furthermore, a growing number of these outdated systems are now classified as “red-rated” for reliability and cybersecurity risk. Alarmingly, NHS England experienced 123 critical service outages last year alone. These outages often meant missed appointments and forced healthcare workers to resort to paper-based systems, making it harder for patients to access care when they needed it most.

Malaguti stresses that addressing such challenges goes beyond merely upgrading technology. “The focus should be on improving data structure, quality, and timeliness. All systems, data, and workflows must be properly structured and fully optimised prior to implementation for these technologies to be effective. Public sector leaders should look to establish clear measurable objectives, as they continue to improve service delivery and core mission impacts.”

Transforming public services

In response to these challenges, Technology Secretary Peter Kyle is announcing an ambitious overhaul of public sector technology to usher in a more modern, efficient, and accessible system. Emphasising the use of AI, digital tools, and “common sense,” the goal is to reform how public services are designed and delivered, streamlining operations across local government, the NHS, and other critical departments.

A package of tools known as ‘Humphrey’ – named after the fictional Whitehall official in the popular BBC sitcom ‘Yes, Minister’ – is set to be made available to all civil servants soon, with some available today. Humphrey includes:

Consult: Analyses the thousands of responses received during government consultations within hours, presenting policymakers and experts with interactive dashboards to explore public feedback directly.

Parlex: A tool that enables policymakers to search and analyse decades of parliamentary debate, helping them refine their thinking and manage bills more effectively through both the Commons and the Lords.

Minute: A secure AI transcription service that creates customisable meeting summaries in the formats needed by public servants. It is currently being used by multiple central departments in meetings with ministers and is undergoing trials with local councils.

Redbox: A generative AI tool tailored to assist civil servants with everyday tasks, such as summarising policies and preparing briefings.

Lex: A tool designed to support officials in researching the law by providing analysis and summaries of relevant legislation for specific, complex issues.

The new tools and changes will help to tackle the inefficiencies highlighted in the report while delivering long-term cost savings. By reducing the burden of administrative tasks, the reforms aim to let public servants, such as doctors and nurses, spend more time helping the people they serve. For businesses, this could mean faster approvals for essential licences and permits, boosting economic growth and innovation.

“The government’s upcoming reforms and policy updates, where it is expected to deliver on its ‘AI Opportunities Action Plan,’ [will no doubt aim] to speed up processes,” said Malaguti. “Public sector leaders need to be more strategic with their investments and approach these projects with a level head, rolling out a programme in a phased manner, considering each phase of their operations.”

This sweeping transformation will also benefit from an expanded role for the Government Digital Service (GDS).
Planned measures include using the GDS to identify cybersecurity vulnerabilities in public sector systems that could be exploited by hackers, enabling services to be made more robust and secure. Such reforms are critical to protect citizens, particularly as the reliance on digital solutions increases.

The broader aim of these reforms is to modernise the ***’s public services to reflect the convenience and efficiencies demanded in a digital-first world. By using technologies like AI, the government hopes to make interactions with public services faster and more intuitive while saving billions for taxpayers in the long run. As technology reshapes the future of how services are delivered, leaders must ensure they are comprehensively addressing the root causes of inefficiency, primarily old data infrastructure and fragmented workflows. Only then can technological solutions, whether AI or otherwise, achieve their full potential in helping services deliver for the public.

(Photo by Claudio Schwarz)

See also: Biden’s executive order targets energy needs for AI data centres

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Rodolphe Malaguti, Conga: Poor data hinders AI in public services appeared first on AI News.

View the full article