Everything posted by ChatGPT

  1. COVID-19 has, in a sense, transformed Zoom from a business-only tool into a household name. Now, the $19 billion video-calling giant is looking to redefine itself, which means leaving behind much of what has made it a mainstay throughout its decade-plus history. Graeme Geddes, Zoom’s chief growth officer, recently told Fortune, “Zoom is so much more than just video meetings. Video is our heritage—so we’re going to continue to lean in there, push the market, there’s a lot of innovation that we’re doing—but we’re so much more than that.” The company’s new aspiration? “We want to be known as an AI-first collaboration platform,” Geddes declared.

Though the rush to adopt AI is now a staple in the tech industry—with giants like Alphabet and Microsoft regularly discussing the technology on earnings calls—Zoom’s shift neatly dovetails with its efforts to extend its reach beyond simple video conferencing, aiming to enhance overall productivity. In an effort to better cater to the needs of a hybrid world, Zoom earlier this year introduced Zoom Workplace, a suite of tools for both remote and in-person employees. The platform includes everything from virtual whiteboards and guest check-ins to workspace booking, tech solutions, and feedback forms. Zoom also recently acquired the employee engagement platform Workvivo for approximately €250 million ($272 million). This acquisition, as Geddes points out, “has nothing to do with video.”

Zoom’s evolution extends to customer-facing solutions as well. “We’re helping our customers in the way that their customers show up to their website, having a chatbot automation service that can escalate into a phone call,” Geddes explained. “A lot of workflows that have no video involved.”

This strategic shift comes at a crucial time for Zoom. As businesses increasingly distance themselves from pandemic-era work styles and implement return-to-office mandates, the demand for remote video conferencing has decreased. Consequently, Zoom’s stock has returned to pre-pandemic levels, dropping from a peak of $559 in October 2020 to around $60 currently. Jacqueline Barrett, an economist and founder of the Bright Arc, reflects on the initial pandemic response: “At the start of the pandemic, I think there were tons of people who flocked to Zoom. There was probably a little bit of overexcitement in terms of the stock, with people anticipating that the growth was going to be like that indefinitely.”

The market landscape has also become more competitive. “There’s so many other players in the market that are offering these new features that have already bundled things together or that are constantly unveiling new features with generative AI,” Barrett added. “If it’s not the legacy players like Google or Microsoft or Cisco, there’s so many startups that are focused on pretty much every little niche imaginable with generative AI.”

Zoom’s response to this challenge is not one-dimensional, as its varied features show. The company is expanding its products and utilising AI to amplify its technical capabilities. For example, as Geddes recounted, Zoom’s AI Companion can automate note-taking and outline next steps or action items during a meeting, whether or not all attendees are present in the conference room. What’s most intriguing, however, is that this is only the beginning of Zoom’s AI applications; the company is also exploring the creation of digital twins, or deepfake avatars. Eric Yuan, the founder and CEO of Zoom, stated that the AI-powered avatars would replicate the real owner’s voice and appearance, and also act independently during meetings, making business decisions for the owner. “Today we all spend a lot of time either making phone calls, joining meetings, sending emails, deleting some spam emails, and replying to some text messages, still very busy,” Yuan explained. “But in the future, I can send a digital version of myself to join so I can go to the beach.”

While the avatar technology is still in development, Zoom’s existing AI features have already proven useful. Geddes shared how he used the Zoom smart summary feature to stay informed about meetings during his international travels, enabling him to make important decisions and keep projects on schedule.

As it transitions, Zoom clearly aims to do more than just adjust to the post-pandemic world; it is actively setting the course for the future of work and collaboration. By adopting AI-driven solutions and moving beyond its traditional video conferencing base, Zoom is dedicated to keeping its leading position in business communication and productivity tools as the workplace evolves.

(Photo by LinkedIn Sales Solutions)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Zoom now wants to be known ‘as an AI-first collaboration platform’ appeared first on AI News. View the full article
  2. As the UK gears up for its general election, industry leaders are weighing in on the potential impact on technology and AI regulation. With economic challenges at the forefront of political debates, experts argue that the next government must prioritise technological innovation and efficiency to drive growth and maintain the UK’s competitive edge.

Rupal Karia, Country Leader UK&I at Celonis, emphasises the need for immediate action to address inefficiencies in both private and public sectors. “The next government needs to channel a more immediate focus on removing inefficiencies within UK businesses, which both the private and public sector are being weighed down by,” Karia states. Karia advocates for the use of process intelligence to provide “data-based methods of generating positive impact at the top, the bottom, and the green line.” While political parties focus on long-term strategies such as infrastructure investments and industrial policies, Karia suggests that leveraging technology for efficiency gains could yield more immediate results. “Delivering fast growth is tough, but in the meantime businesses can become leaner and more agile, gaining maximum value within their current processes,” Karia explains.

James Hall, VP & Country Manager, UK&I at Snowflake, predicts a significant focus on AI investment and regulation in the next government. He anticipates the appointment of chief AI officers across government departments to ensure AI aligns with manifesto priorities. Hall also emphasises the importance of a robust data strategy, stating, “A foundational data strategy with governance at its core will help meet AI goals.”

Hall proposes several initiatives to boost AI innovation and data utilisation:

  • An AI fund to promote public-private partnerships
  • Use of synthetic data to commercialise assets globally while maintaining privacy
  • Industry-specific AI regulations, particularly for sectors like healthcare and pharmaceuticals
  • Stronger agreements on medical data usage in the pharmaceutical industry
  • A dedicated office to oversee data and AI initiatives, ensuring diverse voices are heard in policymaking

On the topic of AI regulation, Hall suggests a nuanced approach: “It would be beneficial to establish industry-specific rules, with particular attention paid to sectors like healthcare and pharmaceuticals and their unique needs.”

Both experts agree that embracing AI and data-driven technologies is crucial for the UK’s future economic success. “These steps will be crucial for a new government to support data-driven industries and ensure they can capitalise on AI, thus positioning the UK as a global innovation powerhouse whilst ensuring sustainable growth and protecting national interests,” Hall concludes.

As the election approaches, it remains to be seen how political parties will address these technological challenges and opportunities in their manifestos. The outcome could significantly shape the UK’s approach to AI regulation and its position in the global tech landscape.

(Photo by Chris Robert)

See also: EU probes Microsoft-OpenAI and Google-Samsung AI deals

The post Industry experts call for tailored AI rules in post-election UK appeared first on AI News.
  3. The semiconductor industry, a cornerstone of modern technology and economic prosperity, has been dealing with a serious labour shortage for some time. The skills shortage appears to be worsening, with more than one million additional skilled workers required by 2030 to meet demand in the semiconductor industry, according to Deloitte. This pervasive issue extends beyond the US, affecting key players worldwide and threatening to impede the sector’s growth and innovation. As countries have been striving to expand their semiconductor capabilities to meet escalating global demand, particularly since the pandemic, the skilled worker shortage has emerged as a critical bottleneck, undermining efforts to maintain and advance technological leadership in this vital industry.

With over two million direct employees worldwide in 2021 and more than one million extra skilled professionals required by 2030, Deloitte expects that more than 100,000 hires are needed every year. For background, fewer than 100,000 graduate students enrol in electrical engineering and computer science in the US each year, as per Deloitte’s data. Taiwan, South Korea, China, Japan, and Europe likewise face challenges in finding enough qualified workers to meet the demands of their rapidly expanding semiconductor sectors. For instance, Taiwan had a shortfall of over 30,000 semiconductor workers in late 2021, and South Korea is projected to face a similar shortfall over the next decade. China’s shortfall is even more severe, with estimates suggesting a need for over 300,000 additional workers, even before the current chip growth and supply chain problems.

This shortage is attributed to several factors. Many nations have seen their semiconductor manufacturing expertise erode over the years as production moved offshore. In the US, for example, the industry accounts for only about 12% of global chip production, with most of the advanced manufacturing know-how residing in Asia. The lack of awareness about semiconductor careers among potential recruits also contributes to the talent gap, making it difficult to attract new workers to the field. On top of that, the competition for semiconductor talent shows signs of getting even tighter.

CHIPS Act and workforce development

In response to this growing issue, the US has introduced measures under the CHIPS and Science Act, aimed at boosting the domestic semiconductor industry and addressing the labour shortage. The Act allocates substantial funding towards the development of the semiconductor workforce, focusing particularly on technician roles and jobs that do not require a bachelor’s degree. This is significant because about 60% of new semiconductor positions fall into these categories, according to McKinsey’s report.

The CHIPS Act, passed in 2022, promotes various initiatives to build a robust talent pipeline. According to a recent report by Bloomberg, the US government is intensifying its efforts to address the semiconductor labour shortage through new initiatives under the CHIPS Act, marking a significant expansion of the educational and training programmes aimed at developing a skilled workforce tailored to the industry. “The program, described as a workforce partner alliance, will use some of the $5 billion in federal funding set aside for a new National Semiconductor Technology Center. The NSTC plans to award grants to as many as 10 workforce development projects with budgets of $500,000 to $2 million,” Bloomberg noted. The NSTC will also be launching additional application processes in the coming months, and officials will determine the total level of spending once all the proposals have been considered. All of the funding comes from the 2022 CHIPS and Science Act, the landmark law that set aside $39 billion in grants to boost US chipmaking, plus $11 billion for semiconductor research and development, including the NSTC.

Labour shortage: A long-term problem

Even with all these efforts, the semiconductor industry is likely to continue facing labour shortages in the long term. The report from McKinsey highlights that even with substantial investments in education and training, the sector will struggle to find enough skilled workers to meet its needs. This is compounded by issues such as a lack of career advancement opportunities, workplace inflexibility, and insufficient support, which drive many employees to leave the industry, according to various analyses.

Moreover, the competition for semiconductor talent is intensifying globally. Companies like Taiwan’s TSMC are recruiting experienced semiconductor workers from the US, India, Canada, Japan, and Europe. This global competition underscores the urgent need for collaborative initiatives to attract and retain skilled workers in the semiconductor industry. After all, the labour shortage in the semiconductor industry is a complex challenge that requires multifaceted solutions.

(Photo by Vishnu Mohanan)

See also: US clamps down on China-bound investments

The post Global semiconductor shortage: How the US plans to close the talent gap appeared first on AI News.
  4. The European Union has intensified its antitrust scrutiny of AI deals, starting with high-profile collaborations between Microsoft and OpenAI and between Google and Samsung. Margrethe Vestager, the European Commission’s executive vice president for competition policy, warned that AI is “developing at breakneck speed” and revealed that multiple preliminary investigations are underway into various AI-related market practices. Her concerns about potential anti-competitive practices stem from major tech companies’ manoeuvres in the AI sector since the advent of ChatGPT.

The commission’s move highlights the bloc’s growing concern over tech giants’ potential monopolistic power in the rapidly evolving AI sector. The scrutiny focuses on recent deals and collaborations involving Microsoft and Google in AI. These initiatives, often involving strategic partnerships and acquisitions, have drawn regulatory attention due to their potential to undermine competition and innovation. Vestager emphasised that the commission has several preliminary antitrust investigations underway concerning various practices within AI-related markets, although specific details were not disclosed.

Microsoft’s partnership with OpenAI

Microsoft’s multibillion-dollar partnership with OpenAI represents one of the most significant collaborations in the AI industry. The partnership, initiated in 2019 and expanded in subsequent years, involves Microsoft investing heavily in OpenAI, providing cloud computing resources through its Azure platform, and integrating OpenAI’s advanced models into Microsoft’s products and services. The collaboration aims to accelerate AI research and development, with notable advancements such as the GPT-3 language model and the more recent ChatGPT. However, this alliance has raised concerns about market dominance and potential barriers to entry for smaller AI firms.

Vestager said in a speech that the European Commission started reviewing the deal last year to see whether it broke EU merger rules, but dropped the review after concluding that Microsoft hadn’t gained control of OpenAI. “Microsoft has invested $13 billion in OpenAI over the years. But we have to make sure that partnerships like this do not become a disguise for one partner getting a controlling influence over the other,” she said, while signalling that the commission would take another tack to examine the deal and the industry more broadly: the bloc’s antitrust rules, which target abusive behaviour by companies with a dominant market position. After reviewing the responses from major AI companies it requested in March this year, the EU Commission is now requesting specific information about the Microsoft-OpenAI agreement. Vestager said the aim is to determine whether exclusivity clauses could harm competition in the AI market. The EU wants “to understand whether certain exclusivity clauses could hurt competitors,” she said.

Also in question: Google and Samsung’s partnership

Google’s AI-related arrangement with Samsung has also drawn significant attention. The partnership pairs Samsung’s hardware capabilities with Google’s AI prowess to develop innovative consumer electronics and mobile technologies. This includes integrating Google’s AI algorithms into Samsung devices to enhance features like voice recognition, camera functionality, and personalised user experiences. While this collaboration promises to bring advanced AI-driven functionalities to a broad consumer base, it also raises questions about competitive fairness, particularly regarding access to critical technologies and market influence. Vestager said EU regulators have sent information requests “to better understand the effects of Google’s arrangement with Samsung” to pre-install Gemini Nano, the smallest version of Google’s Gemini AI foundation model, on some devices from the South Korean tech company.

What’s next?

With tech giants like Microsoft and Google, both prominent players in the global AI landscape, actively expanding their AI capabilities through acquisitions and partnerships, regulators are growing more concerned about market dominance and its implications for fair competition. Regulatory intervention from the EU was arguably inevitable sooner or later. In response to the EU’s actions, Microsoft and Google have reaffirmed their commitment to comply with regulatory requirements while continuing to innovate responsibly in AI technologies. They emphasise the potential benefits of their AI initiatives, including advancements in healthcare, sustainability, and other critical sectors. Yet the outcome of the EU’s antitrust scrutiny could have significant implications for how major tech companies operate in Europe’s AI market. It may lead to regulatory measures that foster a more level playing field and ensure that smaller competitors have fair opportunities to compete and innovate.

(Photo by Guillaume Périgois)

See also: Coalition of news publishers sue Microsoft and OpenAI

The post EU probes Microsoft-OpenAI and Google-Samsung AI deals appeared first on AI News.
  5. Microsoft has disclosed a new type of AI jailbreak attack dubbed “Skeleton Key,” which can bypass responsible AI guardrails in multiple generative AI models. This technique, capable of subverting most safety measures built into AI systems, highlights the critical need for robust security measures across all layers of the AI stack.

The Skeleton Key jailbreak employs a multi-turn strategy to convince an AI model to ignore its built-in safeguards. Once successful, the model becomes unable to distinguish between malicious or unsanctioned requests and legitimate ones, effectively giving attackers full control over the AI’s output. Microsoft’s research team successfully tested the Skeleton Key technique on several prominent AI models, including Meta’s Llama3-70b-instruct, Google’s Gemini Pro, OpenAI’s GPT-3.5 Turbo and GPT-4, Mistral Large, Anthropic’s Claude 3 Opus, and Cohere Command R+. All of the affected models complied fully with requests across various risk categories, including explosives, bioweapons, political content, self-harm, racism, drugs, graphic sex, and violence.

The attack works by instructing the model to augment its behaviour guidelines, convincing it to respond to any request for information or content while providing a warning if the output might be considered offensive, harmful, or illegal. This approach, known as “Explicit: forced instruction-following,” proved effective across multiple AI systems. “In bypassing safeguards, Skeleton Key allows the user to cause the model to produce ordinarily forbidden behaviours, which could range from production of harmful content to overriding its usual decision-making rules,” explained Microsoft.

In response to this discovery, Microsoft has implemented several protective measures in its AI offerings, including Copilot AI assistants. Microsoft says that it has also shared its findings with other AI providers through responsible disclosure procedures and updated its Azure AI-managed models to detect and block this type of attack using Prompt Shields. To mitigate the risks associated with Skeleton Key and similar jailbreak techniques, Microsoft recommends a multi-layered approach for AI system designers:

  • Input filtering to detect and block potentially harmful or malicious inputs
  • Careful prompt engineering of system messages to reinforce appropriate behaviour
  • Output filtering to prevent the generation of content that breaches safety criteria
  • Abuse monitoring systems trained on adversarial examples to detect and mitigate recurring problematic content or behaviours

Microsoft has also updated its PyRIT (Python Risk Identification Toolkit) to include Skeleton Key, enabling developers and security teams to test their AI systems against this new threat. The discovery of the Skeleton Key jailbreak technique underscores the ongoing challenges in securing AI systems as they become more prevalent in various applications.

(Photo by Matt Artz)

See also: Think tank calls for AI incident reporting system

The post Microsoft details ‘Skeleton Key’ AI jailbreak appeared first on AI News.
  6. This week, OpenAI decisively blocked access to its services from mainland China and Hong Kong, cutting off developers and companies from some of the most advanced AI technologies available today. OpenAI’s move is not surprising given the increasing geopolitical tensions and technology rivalry; however, it represents an inflection point in AI that will further turn up the heat on an already icy tech cold war. The move will have massive repercussions for the future AI landscape in China and worldwide, and it lays the groundwork for fierce competition among AI superpowers.

In the face of increased governmental demands and rivalry for AI dominance, OpenAI’s choice protects the company’s intellectual property while navigating geopolitical difficulties. The move underscores the deepening digital divide between China and Western countries, which remains one of the defining elements of this tech war era. As OpenAI cuts ties with China, it also marks a deeper tech decoupling trend in which the US and Chinese tech ecosystems are pulling apart further, according to some experts.

Implications for Chinese AI players

OpenAI’s blockade presents both challenges and opportunities for Chinese AI companies. On one hand, the absence of OpenAI’s advanced models, such as GPT-4, from the Chinese market could slow the adoption and integration of cutting-edge AI technologies. This is particularly relevant for startups and smaller companies that lack the resources to develop similar models independently. “OpenAI’s move, which is set to go into effect on July 9, could affect Chinese companies developing their services based on OpenAI’s large language models (LLMs),” a South China Morning Post report stated, citing experts. However, it could also act as a spark that propels innovation in China, driving Chinese companies even further towards producing their own technologies. It could create a new AI research boom and make the Chinese landscape more energetic and self-sufficient.

On the other hand, the blockade creates a vacuum that domestic giants like Alibaba, Baidu, and Tencent are well-positioned to fill. Those companies have the financial muscle, talent, and infrastructure to accelerate their AI research and development, leading to even more active efforts by these players in AI innovation and in building homegrown alternatives to OpenAI. Moreover, the Chinese government has aggressively funded its tech industry with large investments and favourable regulations. In turn, we may see a new rush of AI research that would increase competition between domestic Chinese players and bring China in line with its overseas counterparts.

Global AI dynamics

The move by OpenAI has ramifications beyond China. Its potential to shift global AI dynamics is very real, and it looks increasingly likely that we could see an even more fragmented AI landscape. While the US and China are busy defining their dominance, other countries and regions may align with one side based on access to AI technologies. This is particularly the case for Southeast Asia and African countries where China has strong economic ties – they would likely favour Chinese AI solutions. European and North American states, meanwhile, could increase their dependence on American-based AI solutions. This split could have profound implications for international consortia, data exchanges, and the evolution of worldwide AI norms.

The blockade also raises crucial questions of ethics and security. In this context, OpenAI is exercising digital sovereignty – it controls who can and cannot reap the fruits of its technology. The moves are part of a broader clampdown now taking place at all levels of the AI stack to ensure such technologies are built and deployed in ways that meet decent standards and ethics, including security aspects. This challenges China to strategically position its burgeoning AI sector so that other nations do not see it as threatening.

Yet, as the AI race heats up, ethics and international collaboration must inevitably become top priorities. And companies that see China as an essential market will have to find a way to work around the complicated geopolitical hurdles. Apple, for instance, is reportedly seeking local partners to provide services that comply with Beijing’s stringent AI regulations, including the standards set by the China Electronic Standardisation Institute last year. After all, the future of AI hinges not only on technological advancements but also on the geopolitical strategies and policies that govern its development and deployment.

(Photo: Jonathan Kemper)

See also: Apple is reportedly getting free ChatGPT access

The post Tech war escalates: OpenAI shuts door on China appeared first on AI News.
  7. SoftBank Group, the Japanese technology investment firm, has announced a strategic joint venture with Tempus AI, a company specialising in AI-driven medical data analysis and treatment recommendations. The partnership was revealed by SoftBank’s CEO, Masayoshi Son, during a briefing in Tokyo, marking another significant move in SoftBank’s recent series of AI investments as the company ramps up its investment activities following a period of relative quiet.

Earlier this year, SoftBank invested approximately $200 million in Tempus during its Series G funding round, preceding Tempus’s Nasdaq listing in June. Tempus is renowned for its genomic testing services and AI-powered treatment and clinical trial recommendations in the United States, leveraging a comprehensive database of millions of patient clinical records. As reported by Reuters, the partnership is expected to enable these advanced services to be deployed in Japan, making it one of the first non-US healthcare markets with this type of connected health capability. “Working with Tempus, we’ll develop services at pace in Japan. With the database of 7.7 million U.S. patients, we’re at a running start,” Son said about the venture. The partnership is anticipated to close in July, subject to usual closing conditions, and will involve an investment of 15 billion yen (close to $93 million) from each party.

Google’s support for Tempus AI

Tempus AI has also recently caught the eye of Google, an Alphabet company that is still on a spending spree to acquire and develop artificial intelligence technologies. Google’s support is crucial for Tempus, as the search giant has long been a major player in AI, with standout systems like AlphaGo and foundational innovations such as the transformer architecture used in ChatGPT. Tempus uses AI technology to develop what it describes as “intelligent diagnostics”: diagnostic tests tailored specifically to the patients they apply to.
The initiative is designed to improve the efficacy of existing treatments and speed up the development of new therapies. On June 14, 2024, Tempus conducted its IPO on the Nasdaq stock exchange. The company’s stock fared well, surging as much as 15% during its first day of trading and closing nearly 9% higher, giving Tempus AI a market capitalisation of $6 billion. Google’s financial involvement with Tempus AI began in June 2020, when Tempus issued a $330 million convertible promissory note to Google as part of a cloud services agreement. Later that year, Tempus provided Google with $80 million of preferred stock to partially satisfy the original note. Over the period from 2020 to 2023, revenue at Tempus AI grew by 183%. Its adjusted earnings before interest, taxes, depreciation, and amortisation are improving, although it has yet to reach profitability.

The quality of its technology is evident from its current client base. Tempus has worked with approximately 95% of the world’s top 20 publicly traded biopharma companies. In addition to its collaborations with over 200 pharmaceutical companies, Tempus is used in over half of U.S. academic medical centres and connects with over 7,000 physicians. The partnership between SoftBank and Tempus AI, coupled with Tempus’s market lead and its continuous strategic partnerships with numerous tech giants, establishes it as a significant participant among companies addressing new AI-powered healthcare services.

(Photo by Piron Guillaume)
The post SoftBank launches healthcare venture with Tempus AI appeared first on AI News. View the full article
  8. In a move that has further strained the already tense US-China relations, the Biden administration has advanced plans to restrict American investments in key Chinese technology sectors. This decision, announced by the US Treasury Department, has sparked a swift and sharp rebuke from Beijing, highlighting the deepening rift between the world’s two largest economies. The proposed rules, focusing on curbing investments in AI, quantum computing, and semiconductors, represent the latest salvo in what many observers call a “tech cold war.” These restrictions aim to prevent China from gaining ground in technologies critical to national security, particularly those with potential military applications. China’s Ministry of Commerce responded with “severe concern and resolute opposition,” accusing the US of politicising and weaponising trade and commerce issues. The ministry’s statement urges the US to “respect the rules of a market economy and the principle of fair competition,” calling on Washington to cancel the proposed rules and improve economic relations. The Chinese government’s strong reaction underscores the significance of these restrictions. Beijing views them as an attempt to hinder China’s technological progress and economic development, a claim it has frequently levelled against Washington in recent years. The ministry went further, asserting that the US move would “pressure the normal development of China’s industry” and disrupt the “security and stability” of global supply chains. This latest development is part of a broader pattern of increasing technological rivalry between the US and China. The trade dispute began in 2018 under the Trump administration and has already resulted in substantial tariffs on both sides. Additionally, the US has taken steps to restrict the activities of numerous Chinese tech firms within its borders and has encouraged global enterprises to limit their business in China. 
US draws new battle lines in tech race with China

As Bloomberg puts it, the recently released Notice of Proposed Rulemaking (NPRM) is essentially one of several bureaucratic steps set in motion by an executive order issued last August. The proposed US rules are comprehensive in scope, covering various types of investments, including equity acquisitions, certain debt financing, joint ventures, and even some limited partner investments in non-US pooled investment funds. However, the proposal includes exemptions, such as investments in publicly traded companies and full ownership buyouts, possibly to balance national security concerns with maintaining some level of economic engagement. The focus on AI in these restrictions is particularly noteworthy. The US administration has expressed concerns about China developing AI applications for weapons targeting and mass surveillance, highlighting the dual-use nature of this technology and the ethical considerations surrounding its development. This emphasis on AI reflects its growing importance in future technological and economic competitiveness.

The price of this tech tug-of-war

The potential impact of these rules extends far beyond the immediate US-China relationship. They could lead to a further decoupling of the US and Chinese tech ecosystems, potentially accelerating China’s efforts to achieve technological self-sufficiency. Moreover, these restrictions could have ripple effects on international collaborations in scientific research and technological development, potentially slowing progress across the board. From a geopolitical perspective, this move will likely further complicate US-China relations, which are already strained by trade disputes and human rights concerns. It may also prompt other countries to reassess their policies regarding tech investments and knowledge sharing with China. 
The challenge for the Biden administration will be to effectively protect US national security interests without stifling innovation or causing undue economic harm. China’s assertion of its right to take countermeasures adds another layer of uncertainty to an already complex situation. How Beijing responds could have significant implications for global trade and technology development. (Photo by Chenyu Guan) See also: US introduces new AI chip export restrictions The post US clamps down on China-bound investments appeared first on AI News. View the full article
  9. The Centre for Long-Term Resilience (CLTR) has called for a comprehensive incident reporting system to urgently address a critical gap in AI regulation plans. According to the CLTR, AI has a history of failing in unexpected ways, with over 10,000 safety incidents recorded by news outlets in deployed AI systems since 2014. As AI becomes more integrated into society, the frequency and impact of these incidents are likely to increase. The think tank argues that a well-functioning incident reporting regime is essential for effective AI regulation, drawing parallels with safety-critical industries such as aviation and medicine. This view is supported by a broad consensus of experts, as well as the US and Chinese governments and the European Union. The report outlines three key benefits of implementing an incident reporting system:

- Monitoring real-world AI safety risks to inform regulatory adjustments
- Coordinating rapid responses to major incidents and investigating root causes
- Identifying early warnings of potential large-scale future harms

Currently, the UK’s AI regulation lacks an effective incident reporting framework. This gap leaves the Department for Science, Innovation & Technology (DSIT) without visibility of various critical incidents, including:

- Issues with highly capable foundation models
- Incidents from the UK Government’s own AI use in public services
- Misuse of AI systems for malicious purposes
- Harms caused by AI companions, tutors, and therapists

The CLTR warns that without a proper incident reporting system, DSIT may learn about novel harms through news outlets rather than through established reporting processes. To address this gap, the think tank recommends three immediate steps for the UK Government:

Government incident reporting system: Establish a system for reporting incidents from AI used in public services. 
This can be a straightforward extension of the Algorithmic Transparency Recording Standard (ATRS) to include public sector AI incidents, feeding into a government body and potentially shared with the public for transparency.

Engage regulators and experts: Commission regulators and consult with experts to identify the most concerning gaps, ensuring effective coverage of priority incidents and understanding stakeholder needs for a functional regime.

Build DSIT capacity: Develop DSIT’s capability to monitor, investigate, and respond to incidents, potentially through a pilot AI incident database. This would form part of DSIT’s central function, initially focusing on the most urgent gaps but eventually expanding to include all reports from UK regulators.

These recommendations aim to enhance the government’s ability to responsibly improve public services, ensure effective coverage of priority incidents, and develop the necessary infrastructure for collecting and responding to AI incident reports. Veera Siivonen, CCO and Partner at Saidot, commented: “As AI continues to advance and permeate various aspects of society, the implementation of a robust incident reporting system could prove crucial in mitigating risks and ensuring the safe development and deployment of AI technologies.” See also: SoftBank chief: Forget AGI, ASI will be here within 10 years The post Think tank calls for AI incident reporting system appeared first on AI News. View the full article
  10. Alibaba Cloud has taken a step towards globalising its AI offerings by unveiling an English version of ModelScope, its open-source AI model community. The move aims to bring generative AI capabilities to a wider audience of businesses and developers worldwide. ModelScope, which embodies Alibaba Cloud’s concept of “Model-as-a-Service,” transforms AI models into readily available and deployable services. Since its launch in mainland China in 2022, the platform has grown to become the country’s largest AI model community, boasting over five million developer users. With this international expansion, developers around the globe will now have access to more than 5,000 advanced AI models. The platform also welcomes user-contributed models, fostering a collaborative ecosystem for AI development. The English version of ModelScope provides a comprehensive suite of tools and resources to support developers in bringing their AI projects to fruition. This includes access to over 1,500 high-quality Chinese-language datasets and an extensive range of toolkits for data processing. Moreover, the platform offers various modules that allow developers to customise model inference, training, and evaluation with minimal coding requirements. Alibaba Cloud announced the English version of ModelScope during the 2024 Computer Vision and Pattern Recognition (CVPR) Conference in Seattle. This annual event brings together academics, researchers, and business leaders for a five-day exploration of cutting-edge developments in AI and machine learning through workshops, panels, and keynotes. The company’s presence at CVPR was further bolstered by the acceptance of more than 30 papers from Alibaba Group, with six selected as oral and highlighted papers. This achievement underscores Alibaba’s commitment to advancing the field of AI research and development. 
Conference attendees also had the opportunity to experience firsthand the capabilities of Alibaba’s proprietary Qwen model series at the company’s booth. The demonstration showcased the model’s impressive image and video generation capabilities, providing a glimpse into the potential applications of Alibaba’s AI technologies. The launch of the English version of ModelScope represents a significant milestone in Alibaba Cloud’s strategy to expand its AI offerings globally. As businesses and developers worldwide increasingly seek to harness the power of AI, platforms like ModelScope are set to play a crucial role in democratising access to advanced AI capabilities. With its extensive collection of models, datasets, and development tools, Alibaba Cloud’s ModelScope will help to accelerate AI innovation and adoption on a global scale. (Image Source: www.alibabagroup.com) See also: SoftBank chief: Forget AGI, ASI will be here within 10 years The post Alibaba Cloud launches English version of AI model hub appeared first on AI News. View the full article
  11. Apple has reportedly entered into discussions with Meta to integrate the latter’s generative AI model into its newly unveiled personalised AI system, Apple Intelligence. Sources familiar with the talks have revealed that Apple has also been considering partnerships with startups Anthropic and Perplexity to integrate their generative AI technologies. This coming together of major players in the tech industry and groundbreaking startups signifies a pivotal moment in AI. For years, we’ve watched tech behemoths like Apple, Google, and Meta (formerly Facebook) fiercely guard their technological advancements, treating their innovations as closely held trade secrets. This approach has driven competition and spurred rapid progress but has also led to fragmentation and inefficiencies in the broader tech ecosystem. As we embark on the next generation of AI technologies, these tech giants are starting to see that there is much more to gain from collaborating. Given their intense rivalry and divergent philosophies about user privacy and data use, the hypothetical Apple-Meta partnership is notable. This unexpected alliance raises the question: what has changed? The answer lies in the breathtaking pace of AI advancement and the realisation that no single company can go it alone in this new frontier, no matter how large or innovative. Generative AI, in particular, represents a paradigm shift in computing, fundamentally reimagining our interaction with technology. Its vast implications and numerous applications push tech giants beyond their comfort zones. By potentially integrating Meta’s generative AI into Apple Intelligence, Apple acknowledges that hardware and traditional software expertise alone can’t secure AI leadership. Meta’s openness to sharing its AI with a competitor suggests it values widespread adoption over exclusivity. For consumers, this collaboration promises a new era of intelligent digital interactions. 
Imagine an AI system that responds to your needs with unprecedented accuracy and anticipates and adapts to your preferences. This integration could transform user engagement, making technology an even more intuitive part of daily life. Notably, Apple’s commitment to privacy adds a layer of trust to these advancements, addressing a key concern in today’s digital landscape. In short, users can expect sophisticated AI features without compromising their personal information. The inclusion of AI startups like Anthropic and Perplexity in these discussions is equally significant. It demonstrates that innovative ideas and cutting-edge research are not the sole domain of established tech giants in the rapidly evolving field of AI. These startups bring fresh perspectives and specialised expertise that could prove crucial in developing more advanced and ethically sound AI systems. This open approach may accelerate AI development and deployment at a pace we have never seen before. Imagine Siri understanding and speaking multiple languages simultaneously, powered by Apple’s natural language processing, social interaction data from Meta’s billions of users, Anthropic’s AI safety lens, and Perplexity’s problem-solving strengths. This might lead to an AI assistant that is not just more advanced and capable as a system, but also one with greater depth, stronger ethics, and higher-fidelity inferences about human needs.

What about ethical considerations and regulatory challenges?

The integration of powerful generative AI models into widely used platforms like Apple’s raises important ethical and regulatory questions. Issues such as data privacy, algorithmic bias, and the potential misuse of AI-generated content need careful consideration. Will this further centralise tech power among the existing few, or open new doors for startups and other smaller players? 
Most important of all, how do we proceed with the development and deployment of these AI systems responsibly, with built-in mechanisms to safeguard against misuse? As we attempt to do so in uncharted waters, it’s increasingly obvious that regulators and policymakers will have a major role to play in weighing incentives for innovation against public interests. It may even require creating new data-sharing structures, AI governance practices, and ways for companies to work together that go beyond today’s antitrust and data protection laws. See also: Mark Zuckerberg: AI will be built into all of Meta’s products The post Could an Apple-Meta partnership redefine the AI landscape? appeared first on AI News. View the full article
  12. SoftBank founder and CEO Masayoshi Son has claimed that artificial super intelligence (ASI) could be a reality within the next decade. Speaking at SoftBank’s annual meeting in Tokyo on June 21, Son painted a picture of a future where AI far surpasses human intelligence, potentially revolutionising life as we know it. Son asserted that by 2030, AI could be “one to 10 times smarter than humans,” and by 2035, it might reach a staggering “10,000 times smarter” than human intelligence. SoftBank’s CEO made a clear distinction between artificial general intelligence (AGI) and ASI. According to Son, AGI would be equivalent to a human “genius,” potentially up to 10 times more capable than an average person. ASI, however, would be in a league of its own, with capabilities 10,000 times beyond human potential. Son’s predictions align with the goals of Safe Superintelligence Inc. (SSI), founded by Ilya Sutskever, former chief scientist at OpenAI, along with Daniel Levy and Daniel Gross. SSI’s mission, as stated on its website, is to “approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs.” The timing of these announcements underscores the growing focus on superintelligent AI within the tech industry. While SoftBank appears to be prioritising the development of ASI, SSI is emphasising the importance of safety in this pursuit. As stated by SSI’s founders, “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.” It’s worth noting that the scientific community has yet to reach a consensus on the feasibility or capabilities of AGI or ASI. Current AI systems, while impressive in specific domains, are still far from achieving human-level reasoning across all areas. Son’s speech took an unexpectedly personal turn when he linked the development of ASI to his own sense of purpose and mortality. “SoftBank was founded for what purpose? 
For what purpose was Masayoshi Son born? It may sound strange, but I think I was born to realise ASI. I am super serious about it,” he declared. Son’s predictions and SoftBank’s apparent pivot towards ASI development, coupled with the formation of SSI, raise important questions about the future of AI and its potential impact on society. While the promise of superintelligent AI is enticing, it also brings concerns about job displacement, ethical considerations, and the potential risks associated with creating an intelligence that far surpasses our own. Whether Son’s vision of ASI within a decade proves prescient or overly optimistic remains to be seen, but one thing is certain: the race towards superintelligent AI is heating up, with major players positioning themselves at the forefront. See also: Anthropic’s Claude 3.5 Sonnet beats GPT-4o in most benchmarks The post SoftBank chief: Forget AGI, ASI will be here within 10 years appeared first on AI News. View the full article
  13. Apple announced on Friday that it would withhold its highly anticipated Apple Intelligence AI features, iPhone Mirroring, and SharePlay Screen Sharing from EU users. While not entirely unexpected, this decision underscores the growing tension between rapid technological advancement and the EU’s stringent regulatory framework, particularly the Digital Markets Act (DMA) and General Data Protection Regulation (GDPR). From the EU’s perspective, this delay represents both a triumph and a challenge. It demonstrates the effectiveness of regulations safeguarding user privacy and promoting fair competition. The DMA and GDPR have forced tech giants to pause and reconsider their approaches, potentially leading to more user-centric and privacy-conscious products. However, this victory comes with a price: the risk of falling behind in the global AI race. As other regions forge ahead with less restrictive policies, the EU must carefully balance its regulatory stance with the need to foster innovation and maintain competitiveness in the global tech landscape. For Apple, this delay is likely a calculated move. The company justifies the decision by citing security and privacy concerns, reinforcing its brand as a tech giant that cares about privacy. All in all, this could preserve user trust while giving Apple more time to adapt its AI features to EU law. But it also raises the risk that Apple will cede ground to competitors who manage to navigate the regulatory environment faster. Similar postponements of AI offerings in the EU by other tech behemoths such as Meta and Google point to a broader industry-wide challenge. Many of those companies say they need large, trained AI systems to work correctly but claim that GDPR restrictions drastically limit what they can do in practice. 
That begs the question: can advanced AI technology coexist with some of the world’s strictest data protection regulations? Apple’s AI products would almost certainly face the same scrutiny as its competitors’. The core difficulty is the data-hungry nature of modern AI systems. To provide personalised and effective services, these AIs require access to enormous datasets, which may conflict with GDPR principles such as data minimisation and purpose limitation. However, Apple could have an advantage in this area. Its emphasis on on-device processing and differential privacy approaches may enable it to develop AI features more compliant with EU standards. If successful, this might establish a new norm for privacy-preserving AI, giving Apple an advantage in the European market. And it’s not Apple’s first encounter with EU regulation. In September 2021, the company complained about parts of the DMA rules that would have forced it to allow users to sideload apps from outside its App Store for the first time. Apple claimed that doing so would jeopardise user privacy and security, reinforcing its long-standing belief in the sanctity of its closed ecosystem. Furthermore, Apple’s recent move to prohibit progressive web applications (PWAs) in the EU drew objections from developers. Many saw this decision as yet another attempt to resist regulatory pressure. However, in an unexpected turn of events, the EU concluded that Apple’s treatment of PWAs did not breach DMA guidelines, prompting the company to reconsider its decision.

Global implications: Fragmentation or harmonisation?

These incidents shed light on the intricate relationship between tech companies and regulators. Companies like Apple are known for resisting regulations they perceive as too strict. However, they must also be ready to adjust their strategies when their understanding of the rules is questioned. The EU delay of Apple’s AI features is more than a bump in the road. 
It illustrates the complex relationship between regulation and technological innovation. Finding that balance will be vital as we go forward. Regulators and the tech industry will both need to adapt to build a world where high-powered AI is allowed to operate while also respecting human rights and privacy. It is a reminder that there are no clear courses to follow in the constantly changing world of AI. Governments, in turn, will need to be open to fresh thinking and creative policymaking if the power of AI is to be harnessed for good in ways that are true to the values and rights on which our digital society rests. The timing of the controversy also raises questions about the future of global tech development. Will the digital landscape continue to fragment, with different functionalities available in different geographies based on what each jurisdiction’s regulations permit? Or are we heading towards a more harmonised global approach to tech regulation and development? As consumers, we find ourselves in a constant struggle between the forces of innovation and regulation. As technology advances, we are eager to embrace the newest AI-powered features that enhance our digital experiences and cater to our individual needs. However, it is equally important to us to prioritise protecting our privacy and data. Companies such as Apple face the challenge of pushing the boundaries of what is possible with AI while establishing new benchmarks for privacy and security. To sum up, Apple’s decision to delay its AI features in the EU is a major story in the continuing discussion of tech innovation and regulation. It highlights the need for a more sophisticated and collaborative strategy to shape our digital future. As we go down this path, it will be all the more important to have open and constructive conversations with all stakeholders—tech firms, regulators, users—to come up with solutions that promote innovation while safeguarding basic rights. 
Indeed, the future of AI, in Europe and on a global scale, might be at stake as we navigate these stormy seas. (Image Credit: Apple) See also: Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans The post Apple’s EU AI delay: Innovation vs regulation appeared first on AI News. View the full article
  14. Anthropic has launched Claude 3.5 Sonnet, its mid-tier model that outperforms competitors and even surpasses Anthropic’s current top-tier Claude 3 Opus in various evaluations. Claude 3.5 Sonnet is now accessible for free on Claude.ai and the Claude iOS app, with higher rate limits for Claude Pro and Team plan subscribers. It’s also available through the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. The model is priced at $3 per million input tokens and $15 per million output tokens, featuring a 200K token context window. Anthropic claims that Claude 3.5 Sonnet “sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval).” The model demonstrates enhanced capabilities in understanding nuance, humour, and complex instructions, while excelling at producing high-quality content with a natural tone. Operating at twice the speed of Claude 3 Opus, Claude 3.5 Sonnet is well-suited for complex tasks such as context-sensitive customer support and multi-step workflow orchestration. In an internal agentic coding evaluation, it solved 64% of problems, significantly outperforming Claude 3 Opus at 38%. The model also showcases improved vision capabilities, surpassing Claude 3 Opus on standard vision benchmarks. This advancement is particularly noticeable in tasks requiring visual reasoning, such as interpreting charts and graphs. Claude 3.5 Sonnet can accurately transcribe text from imperfect images, a valuable feature for industries like retail, logistics, and financial services. Alongside the model launch, Anthropic introduced Artifacts on Claude.ai, a new feature that enhances user interaction with the AI. This feature allows users to view, edit, and build upon Claude’s generated content in real-time, creating a more collaborative work environment. Despite its significant intelligence leap, Claude 3.5 Sonnet maintains Anthropic’s commitment to safety and privacy. 
The company states, “Our models are subjected to rigorous testing and have been trained to reduce misuse.” External experts, including the UK’s AI Safety Institute (UK AISI) and child safety experts at Thorn, have been involved in testing and refining the model’s safety mechanisms. Anthropic emphasises its dedication to user privacy, stating, “We do not train our generative models on user-submitted data unless a user gives us explicit permission to do so. To date we have not used any customer or user-submitted data to train our generative models.” Looking ahead, Anthropic plans to release Claude 3.5 Haiku and Claude 3.5 Opus later this year to complete the Claude 3.5 model family. The company is also developing new modalities and features to support more business use cases, including integrations with enterprise applications and a memory feature for more personalised user experiences. (Image Credit: Anthropic) See also: OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’ The post Anthropic’s Claude 3.5 Sonnet beats GPT-4o in most benchmarks appeared first on AI News. View the full article
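As a side note on the pricing quoted in the Claude 3.5 Sonnet piece, the per-token rates lend themselves to a quick back-of-the-envelope calculation. Below is a minimal Python sketch assuming the article's figures ($3 per million input tokens, $15 per million output tokens, 200K token context window); rates can change, so treat the numbers as illustrative rather than authoritative:

```python
# Rough per-request cost estimator based on the article's quoted pricing.
# These rates are illustrative assumptions taken from the text above.
INPUT_RATE_PER_M = 3.00    # USD per million input tokens
OUTPUT_RATE_PER_M = 15.00  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# e.g. a request filling the full 200K context window with a 1K-token reply:
print(round(estimate_cost(200_000, 1_000), 4))  # 0.615
```

In other words, even a maximal-context request costs well under a dollar at these rates, which helps explain the positioning of Sonnet as a mid-tier workhorse model.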
  15. Elon Musk’s startup, xAI, has just announced that it will rely on Dell and Super Micro for server racks to support its gigantic supercomputer project. Musk announced the collaboration on his social media platform, X, marking a key development in xAI’s goal to assemble what he has repeatedly called “the world’s biggest supercomputer.” Server racks are an integral part of high-performance computing infrastructure, providing the skeleton needed to store and organise the various computing components necessary for supercomputer operations. These engineered enclosures are designed to promote optimal efficiency and airflow, which is of vital importance in supercomputing, while making the most of limited floor space. Racks like those that will host the training of xAI’s Grok models underpin the server infrastructure that delivers the immense computational power these workloads require. Hundreds of thousands of power-hungry AI chips are needed for projects of this scale, and semiconductor foundries have limited production capacity to supply them. xAI’s project is massive, which makes heat management especially challenging: supercomputers, which can perform calculations thousands of times faster than conventional systems, run so hot that the chips inside them degrade in performance over time. The issue is only exacerbated by the thousands of power-hungry AI chips required to train more advanced AI models like xAI’s Grok.

Partnership details: Dell and Super Micro’s roles

According to Musk, Dell Technologies will be responsible for assembling half of the racks for xAI’s supercomputer. Super Micro Computer, referred to as “SMC” by Musk, will provide the remaining half. Super Micro, known for its close ties with chip firms like Nvidia and its expertise in liquid-cooling technology, has confirmed the partnership to Reuters. 
San Jose-based Super Micro is renowned for its innovative approaches to server design, particularly its liquid-cooling technology. This technology is crucial for managing the extreme heat generated by high-performance computing systems, allowing for more efficient operation and potentially extending the lifespan of components. In a related development, Dell CEO Michael Dell announced on X that the company is collaborating with Nvidia to build an “AI factory” that will power the next version of xAI’s chatbot, Grok. This collaboration underscores the extensive computational resources that advanced AI model training requires. Musk has previously stated that training the Grok 2 model required approximately 20,000 Nvidia H100 graphics processing units (GPUs), and future versions might need up to 100,000 of these chips. According to The Information, the proposed supercomputer is expected to be operational by fall 2025. Both Dell Technologies and Super Micro Computer bring extensive experience and expertise to this project. Dell has been a trusted supplier of servers and data centre infrastructure for decades, powering many of the world’s largest cloud computing platforms and supercomputing facilities, such as the Frontera supercomputer at the Texas Advanced Computing Center. Super Micro has established itself as a leader in providing high-performance, energy-efficient server solutions. Their innovations in liquid cooling and blade server architectures are widely utilised by cloud providers, enterprises, and research institutions for demanding workloads like AI and high-performance computing. Implications for AI and supercomputing technologies The collaboration between xAI, Dell Technologies, and Super Micro Computer represents a significant milestone in the advancement of AI and supercomputing technologies. 
As the project progresses, it will likely push the boundaries of high-performance computing and contribute to the rapid evolution of artificial intelligence capabilities. This partnership also highlights the growing importance of specialised hardware in the AI industry. As AI models become increasingly complex and data-intensive, the demand for high-performance computing solutions is expected to continue rising, potentially reshaping the landscape of the tech industry in the coming years. See also: Dell, Intel and University of Cambridge deploy the ***’s fastest AI supercomputer Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post xAI secures Dell Technologies and Super Micro support for supercomputer project appeared first on AI News. View the full article
  16. Riding the AI wave, semiconductor giant Broadcom has joined the stampede of companies chasing Nvidia for a piece of the lucrative AI chip market. The computing and software conglomerate is up more than 66% in the past year as it makes power moves to establish itself as one of the most dominant players in AI today. Broadcom has been making aggressive moves that have piqued the interest of analysts and investors, from buying VMware in a $61 billion deal to strengthen its data centre and cloud chops, to investments in AI chip R&D. Central to Broadcom’s AI aspirations is its fast-growing AI-connected chip business, which the company now forecasts will bring in an astounding $11 billion in revenue for fiscal 2024, up from a previous forecast of $10 billion. Combined with a recent 15% jump in Broadcom Inc shares, the upward revision reinforces the strong appetite for the chips driving the rise of generative AI. Morningstar analysts, in a note cited by Reuters, ******** that “we continue to see Broadcom as incredibly well-positioned to benefit from rising generative AI investment in the long term” — a view widely held on Wall Street. A second explanation for Broadcom’s AI upswing is its record of buying and investing well over the years. Chief among those deals is the $61 billion acquisition of VMware, previously spun off from Dell, which added some $50 billion to Broadcom’s market capitalisation, per FactSet data. In addition, Broadcom has been ramping up its own custom AI chip business, inking deals with tech giants including Alphabet’s Google and Meta Platforms. In March, the company revealed that a third unidentified customer was using its custom AI chips, which gave its business credibility in this high-stakes industry. 
According to Reuters, “At an investor conference on Wednesday, Broadcom said it will produce the new custom AI chips for ‘hyperscaler’ buyers that are mostly Alphabet’s Google and Meta Platforms.” Diversified revenue streams and investor optimism Broadcom’s software division, bolstered by the VMware acquisition, added $2.7 billion to its second-quarter revenue, further diversifying the company’s revenue streams and positioning it as a formidable force in the AI ecosystem. The company’s stock performance reflects this optimism, with shares surging 76% over the past 12 months and closing at a staggering $1,495.5 on Wednesday. Broadcom’s recent announcement of a 10-for-1 stock split, a move reminiscent of Nvidia’s strategy, is expected to further fuel investor enthusiasm. “It’s a sure-***** way to send your stock soaring,” Triple D Trading analyst Dennis ***** told Reuters, commenting on the stock split, adding that the move was “right out of Nvidia’s book.” Nvidia’s dominance and competitive pressure Broadcom is progressing, but Nvidia is still the leader in this space. For years, Nvidia has enjoyed a first-mover advantage, producing AI chips that cater to a long tail of applications and prioritising innovation over sheer volume. That balance now appears to be in flux, and Broadcom’s recent wins suggest that the ***** in AI may soon benefit more than just one company. This new contender has even prompted Nvidia’s CEO Jensen Huang to admit that a resurgent Broadcom (and other start-ups) have planners at Nvidia nervous. Huang, for his part, stressed that the company must continue to innovate to ensure that it ******** ahead of its peers. Nevertheless, competition has yet to make a dent in Nvidia’s enviable lead in the AI chip market as the company busily cranks out its top-performing AI tech. 
Broadcom vs Nvidia: The battle for AI chip supremacy Unlike Nvidia’s graphics processing units, which have long dominated the industry, Broadcom’s custom AI chips – application-specific integrated circuits (ASICs) – may present a compelling business opportunity for tech giants with massive, steady-state AI workloads. These bespoke chips require considerable initial capital investment, but they can deliver large savings in both capital expenditure and power consumption, positioning them as a more cost-effective (if less general-purpose) alternative to Nvidia’s offerings. Also bullish on Broadcom is Piper Sandler analyst Harsh Kumar, who writes, “We continue to see [Broadcom] as the best AI play [excluding Nvidia] due to its strong positioning in the custom ASIC business along with its strong software portfolio.” Broadcom’s multi-pronged strategy of pumping cash into buyouts, offering bespoke chips, and expanding its software business has established it as a strong rival as the AI revolution shows few signs of abating. Nvidia is still the undisputed leader in the industry, but Broadcom’s bold AI play was enough to light a ***** under investors and analysts alike, sowing the seeds of what could become an epic showdown in the market for AI chips. (Photo by Kenny Eliason) Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Broadcom’s AI surge challenges Nvidia’s dominance appeared first on AI News. View the full article
  17. Ilya Sutskever, former chief scientist at OpenAI, has revealed his next major project after departing the AI research company he co-founded in May. Alongside fellow OpenAI alumnus Daniel Levy and Apple’s former AI lead Daniel ******, the trio has formed Safe Superintelligence Inc. (SSI), a startup solely focused on building safe superintelligent systems. See also: Meta unveils five AI models for multi-modal processing, music generation, and more Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’ appeared first on AI News. View the full article
  18. Meta has unveiled five major new AI models and research, including multi-modal systems that can process both text and images, next-gen language models, music generation, AI speech detection, and efforts to improve diversity in AI systems. The releases come from Meta’s Fundamental AI Research (FAIR) team, which has focused on advancing AI through open research and collaboration for over a decade. As AI rapidly innovates, Meta believes working with the global community is crucial. “By publicly sharing this research, we hope to inspire iterations and ultimately help advance AI in a responsible way,” said Meta. Chameleon: Multi-modal text and image processing Among the releases are key components of Meta’s ‘Chameleon’ models under a research license. Chameleon is a family of multi-modal models that can understand and generate both text and images simultaneously—unlike most large language models, which are typically unimodal. “Just as humans can process the words and images simultaneously, Chameleon can process and deliver both image and text at the same time,” explained Meta. “Chameleon can take any combination of text and images as input and also output any combination of text and images.” Potential use cases are virtually limitless, from generating creative captions to prompting new scenes with text and images. Multi-token prediction for faster language model training Meta has also released pretrained models for code completion that use ‘multi-token prediction’ under a non-commercial research license. Traditional language-model training is inefficient because it predicts only the next word; multi-token models instead predict multiple future words simultaneously to train faster. “While [the one-word] approach is simple and scalable, it’s also inefficient. It requires several orders of magnitude more text than what children need to learn the same degree of language fluency,” said Meta. 
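The idea behind multi-token prediction can be sketched in a few lines. The toy below is an illustration of the general technique, not Meta’s released architecture: the hidden state, head weights, and target tokens are all made-up values, chosen only to show how one shared representation can feed several output heads, one per future token, so each training position yields multiple supervision signals.

```python
import numpy as np

# Toy multi-token prediction: instead of one output head predicting the
# next token, K heads predict the next K tokens from the same hidden
# state. All numbers here are illustrative, not a real trained model.

rng = np.random.default_rng(0)
VOCAB, HIDDEN, K = 50, 16, 4   # vocabulary size, hidden width, prediction heads

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Shared hidden representation for a single training position.
h = rng.normal(size=HIDDEN)

# One weight matrix per head: head i predicts the token i+1 steps ahead.
heads = [rng.normal(size=(HIDDEN, VOCAB)) * 0.1 for _ in range(K)]

targets = [3, 17, 8, 42]       # hypothetical next-K ground-truth token ids

# Cross-entropy summed over all K heads: K supervision signals per
# position, versus a single signal with ordinary next-token prediction.
loss = 0.0
for W, t in zip(heads, targets):
    probs = softmax(h @ W)
    loss -= np.log(probs[t])

print(f"supervision signals per position: {K}, total loss: {loss:.3f}")
```

At inference time the extra heads can simply be dropped (keeping standard next-token decoding) or used for speculative multi-token generation; the training-time benefit is the denser learning signal.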
JASCO: Enhanced text-to-music model On the creative side, Meta’s JASCO allows generating music clips from text while affording more control by accepting inputs like chords and beats. “While existing text-to-music models like MusicGen rely mainly on text inputs for music generation, our new model, JASCO, is capable of accepting various inputs, such as chords or beat, to improve control over generated music outputs,” explained Meta. AudioSeal: Detecting AI-generated speech Meta claims AudioSeal is the first audio watermarking system designed to detect AI-generated speech. It can pinpoint the specific segments generated by AI within larger audio clips up to 485x faster than previous methods. “AudioSeal is being released under a commercial license. It’s just one of several lines of responsible research we have shared to help prevent the misuse of generative AI tools,” said Meta. Improving text-to-image diversity Another important release aims to improve the diversity of text-to-image models which can often exhibit geographical and cultural biases. Meta developed automatic indicators to evaluate potential geographical disparities and conducted a large 65,000+ annotation study to understand how people globally perceive geographic representation. “This enables more diversity and better representation in AI-generated images,” said Meta. The relevant code and annotations have been released to help improve diversity across generative models. By publicly sharing these groundbreaking models, Meta says it hopes to foster collaboration and drive innovation within the AI community. (Photo by Dima Solomin) See also: NVIDIA presents latest advancements in visual AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. 
Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Meta unveils five AI models for multi-modal processing, music generation, and more appeared first on AI News. View the full article
  19. The fast-food industry has been turning to AI for years in the hopes that it might improve efficiency, cut down on labour costs, and, ideally, make customers’ lives easier. One of the pioneers was McDonald’s, which in 2021 teamed up with IBM to test an AI system for voice ordering at around 100 drive-thru locations throughout the US. Customers were supposed to be able to drive up, state their orders into the drive-thru microphone, and have an AI record all of it instead of a human crew member. Theoretically, the AI should be able to comprehend more complicated requests, offer recommendations based on past orders, and reduce errors, all while continuing to provide fast service without getting bogged down by staffing problems. Yet McDonald’s is already giving up on the AI drive-thru feature just a few years after introducing it. Viral videos exposed the system struggling with even simple orders, taking requests from the wrong cars, and recommending peculiar food pairings, like ice cream and bacon. Following these public failures, McDonald’s declared that it will no longer use IBM for the partnership after July 31, 2024. This spectacular flameout raises the question: does AI really have a role to play in the drive-thru of the future, or is ‘Burgerbot’ destined to be the next in a long line of extremely smart things that fundamentally fail to understand humans? The ***** of AI drive-thrus McDonald’s woes aside, the benefits that attracted the company and its competitors to AI drive-thrus in the first place haven’t disappeared. Labour costs are going up while staff are getting ******* to find, so the prospect of having an AI take orders could offer significant cost reductions in the future. AI systems also promise something extremely compelling: reliable understanding of even the most convoluted customer requests you can think of. 
Armed with prior ordering data, the AI could provide custom recommendations aimed at boosting average spend. AI ordering could also lead to a speedier, lower-friction drive-thru experience: an AI could immediately process an order rather than requiring a human crew member to repeat it back. Those efficiencies could be even more significant during peak hours, when cars are wrapped around the building and the drive-thru is jammed. The negatives – and where McDonald’s has stumbled Of course, the viral blunders that shut down McDonald’s AI drive-thru dreams tell a cautionary tale about the real pitfalls with which this new technology grapples. Despite years of training, McDonald’s AI system still seemed to frequently fail at the core task of taking accurate orders. Such failures during the ordering process can result in customer dissatisfaction and expensive order inaccuracies – which in turn could wipe out any potential labour savings. And then there is the looming question of whether AI will ever be able to understand the constant stream of idiosyncratic phrasings, accents, and off-menu requests as well as humans can. The McDonald’s experience also hints at the “uncanny valley” challenge of AI-powered ordering. While often impressive, the current state of AI can still produce responses that seem almost human, but not quite – leading to an unsettling experience for some customers. A human voice actor may even be required to dub over the AI. The future of AI drive-thrus Despite pulling the plug on its initial system, the fast-food giant still believes AI will be a big player in the drive-thrus of the future. The IBM project, whose testing gave the company “confidence that a voice ordering solution for drive-thru will be part of our restaurants’ future,” is part of a broader effort by McDonald’s to develop “long-term, scalable solutions,” a company spokesperson said. 
Chipotle, Wendy’s, Taco Bell, and Pizza Hut are among a number of other big chains that are also making big moves to integrate AI drive-thru ordering into their businesses. The stakes are high, with the entire industry standing to save billions in labour costs if the technology works. The consensus among most experts, however, is that AI drive-thru ordering is just beginning and that major milestones must still be met before it goes truly universal. But with AI language models advancing so fast, it may not be long before the order-accuracy and other ordering issues plaguing McDonald’s are a thing of the past. There may also be a phased handoff ******* where customers have the choice of interacting with an AI or a human crew member. That could build rapport and increase trust over time, which might be an advantage for AI drive-thrus. At the end of the day, it sounds like McDonald’s was still a few months to years ahead of the curve and simply rolled this out more aggressively than it should have. Or, as one Silicon Valley investor quipped yesterday: “You can’t be a pioneer without taking a few arrows in the back.” For the fast-food industry, streamlining drive-thrus with AI ******** the next frontier – it’s just going to take more time. Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post The rise and fall of AI at the McDonald’s drive-thru appeared first on AI News. View the full article
  20. Artificial intelligence is transforming numerous industries, and the online slot gaming sector in the *** is no exception. The integration of AI in online slots is not just a trend but a revolution that is reshaping the gaming landscape. This advancement offers players a more personalised, secure, and engaging gaming experience. As technology evolves, the allure of no minimum ******** slots in the *** becomes increasingly appealing, providing accessibility to a broader audience. Let’s delve into how AI is revolutionising online slot gaming in the *** and what this means for both players and developers. AI integration in slot machine development The development of slot machines has come a long way from their mechanical beginnings to the sophisticated digital versions we see today. AI is at the forefront of this evolution, significantly enhancing the creation and functionality of these games. Enhanced game design AI enables developers to create more dynamic and visually appealing games. By analysing vast amounts of player data, AI can generate themes, graphics, and narratives that are tailored to the preferences of different player demographics. This means that games can be more immersive and engaging, drawing players into captivating storylines and visually stunning environments that are continually evolving based on player interactions. Adaptive gameplay One of the most exciting applications of AI in slot machine development is the ability to adjust gameplay in real-time. AI algorithms monitor player behaviour and adapt the difficulty and features of the game accordingly. For example, if a player is struggling, the game might become slightly easier to keep them engaged, or if a player is doing exceptionally well, the game might present more challenges to maintain excitement. This adaptive gameplay ensures that players remain interested and challenged, providing a more satisfying gaming experience. 
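The adaptation loop described above can be reduced to a few lines of code. This is a minimal sketch under loose assumptions: the `AdaptiveTuner` class, its thresholds, and the `bonus_frequency` knob are all hypothetical, and real-money slot payout mathematics is regulated, so no operator’s system would adapt win odds this way; the sketch only illustrates the feedback-loop pattern, e.g. for pacing cosmetic bonus features.

```python
# Minimal sketch of adaptive gameplay: track a rolling window of recent
# outcomes and nudge an engagement parameter up or down in response.
from collections import deque

class AdaptiveTuner:
    def __init__(self, window=10, low=0.3, high=0.7):
        self.results = deque(maxlen=window)  # 1 = win, 0 = loss
        self.low, self.high = low, high
        self.bonus_frequency = 0.5           # hypothetical engagement knob

    def record(self, won):
        self.results.append(1 if won else 0)
        rate = sum(self.results) / len(self.results)
        if rate < self.low:                  # player struggling: more engagement
            self.bonus_frequency = min(1.0, self.bonus_frequency + 0.1)
        elif rate > self.high:               # player cruising: ease off
            self.bonus_frequency = max(0.0, self.bonus_frequency - 0.1)
        return self.bonus_frequency

tuner = AdaptiveTuner()
for outcome in [0, 0, 0, 0, 0]:             # five straight losses
    freq = tuner.record(outcome)
print(f"bonus frequency after losing streak: {freq:.1f}")  # prints 1.0
```

The rolling window matters: reacting to the last handful of spins rather than lifetime totals is what makes the adjustment feel responsive within a single session.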
Personalised player experience Personalisation is a key aspect of modern online gaming, and AI is instrumental in delivering a customised experience for each player. Machine learning for player insights AI uses machine learning to analyse player behaviour and preferences, allowing for highly personalised game suggestions and promotional offers. By understanding what types of games a player enjoys and how they like to play, AI can recommend new games or bonuses that are most likely to appeal to them. This level of personalisation enhances the player’s experience and increases their engagement with the platform. Customised in-game experience Beyond recommendations, AI also customises the gameplay itself. For instance, AI can adjust game mechanics, such as bonus frequencies and difficulty levels, based on individual player data. This means that each gaming session is unique and tailored to the player’s specific preferences, making the gaming experience more enjoyable and engaging. Enhancing security and fairness Security and fairness are critical components of online gaming, and AI plays a significant role in ensuring both. ****** detection and prevention AI is highly effective in identifying and preventing fraudulent activities. By continuously monitoring player behaviour and transaction patterns, AI can detect anomalies that may indicate fraudulent actions. When such activities are detected, AI can intervene in real time to prevent losses and protect players and operators alike. This capability is crucial for maintaining the integrity and trustworthiness of online gaming platforms. Ensuring fair play Ensuring fair play is paramount in online gaming, and AI enhances this by maintaining the integrity of random number generators (RNG). AI algorithms ensure that the outcomes of slot games are truly random and free from manipulation. 
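The fair-play claim is testable in practice: RNG output can be audited with standard statistical checks. The sketch below is a generic illustration of one such check, not any regulator’s or operator’s actual procedure; the symbol count, spin count, and bias level are invented for the demo. It applies a chi-square goodness-of-fit test to simulated symbol draws, which flags a deliberately rigged generator while a uniform one passes.

```python
import random

# Sketch of a fairness check on a slot RNG: a chi-square goodness-of-fit
# test over symbol frequencies. 16.92 is the chi-square critical value at
# the 5% level for 9 degrees of freedom (10 symbols - 1).

SYMBOLS, SPINS = 10, 100_000
CRITICAL = 16.92

def chi_square(counts, expected):
    return sum((c - expected) ** 2 / expected for c in counts)

def spin_counts(draw):
    counts = [0] * SYMBOLS
    for _ in range(SPINS):
        counts[draw()] += 1
    return counts

rng = random.Random(42)

# A fair draw: uniform over all symbols.
fair_stat = chi_square(spin_counts(lambda: rng.randrange(SYMBOLS)),
                       SPINS / SYMBOLS)

# A rigged draw that lands on symbol 0 too often.
def biased():
    return 0 if rng.random() < 0.15 else rng.randrange(SYMBOLS)

biased_stat = chi_square(spin_counts(biased), SPINS / SYMBOLS)

print(f"fair RNG chi2 = {fair_stat:.1f} (pass: {fair_stat < CRITICAL})")
print(f"biased RNG chi2 = {biased_stat:.1f} (pass: {biased_stat < CRITICAL})")
```

This kind of transparency is exactly what independent testing labs look for when certifying game outcomes, though their test batteries go far beyond a single chi-square statistic.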
This transparency is essential for player trust, as it guarantees that every player has an equal chance of winning based on luck, without any external interference. AI’s role in responsible gambling AI is also a powerful tool in promoting responsible gambling practices, helping to mitigate the risks associated with gambling addiction. Predictive analytics for problem gambling AI can analyse playing patterns to identify behaviours that may indicate problem gambling. By using predictive analytics, AI can spot early signs of addiction, such as excessive spending or extended gaming sessions. This allows gaming platforms to intervene proactively, offering support and resources to players who may be at risk. Intervention strategies Once potential problem gambling behavior is identified, AI can implement intervention strategies. This might include sending notifications to players about their gaming habits, providing self-assessment tools, or limiting the amount of time and money a player can spend on the platform. These strategies are designed to help players maintain a healthy relationship with gambling and prevent the escalation of problematic behaviours. Future prospects Looking ahead, the potential of AI in the online slot gaming industry is immense. As AI technology continues to advance, we can expect even more sophisticated and personalised gaming experiences. AI-driven innovation will likely lead to entirely new types of games that we cannot yet imagine, offering unparalleled levels of engagement and excitement. AI is revolutionising the online slot gaming industry in the ***, offering enhanced game design, personalised player experiences, improved security, and responsible gambling measures. As the technology continues to evolve, the future of online slot gaming looks incredibly promising, with AI playing a pivotal role in shaping this exciting landscape. 
For players and developers alike, the integration of AI presents endless possibilities for innovation and improvement in the world of online gaming. The post The impact of AI on online slot gaming in the *** appeared first on AI News. View the full article
  21. While some may think Snapchat is fading, the app continues to attract a considerable number of active users. Acknowledging past shortcomings in machine learning utilisation, Snap’s CEO Evan Spiegel announced a new, assertive strategy to integrate AI and machine learning technologies into its services, marking a substantial departure from its long-term focus on revising its advertising approach. In an interview with Bloomberg, Spiegel emphasised the need to improve their machine learning capabilities to reach cutting-edge standards. “We needed to improve there and bring together some of our most senior machine learning folks to just talk about what it would look like for us to get to state of the art and really invest,” he stated. Soon afterward, Snap debuted its newest generative AI technology that allows phone cameras to create more lifelike lenses—the features on the app that let you turn into a dog or have giant bug eyes—when recording videos and taking photos. Snapchat hopes that this change will help it compete more effectively with other social media platforms. Snap has been a pioneer in augmented reality (AR) technology, which layers digital effects onto real-world images or videos. Although Snap still operates in the shadow of larger rivals such as Meta, the company is making a significant bet on more sophisticated and, frankly, more fun AR lenses. They hope these will attract new users and advertisers to the Snapchat platform. The company also unveiled that AR developers can now create AI-powered lenses, and Snapchatters will be able to extensively use these lenses in their content. Additionally, Snap announced a new iteration of its developer program: Lens Studio. This more advanced version of the software, introduced late last year, initially allowed creators to build their own AR experiences for Snapchat. Now, it extends to websites and other apps. 
With the improved Lens Studio, Snap’s CTO Bobby Murphy said that the time required to create AR effects would be dramatically reduced from weeks to minutes or hours, and that it would also facilitate the development of more sophisticated work. “What’s fun for us is that these tools both stretch the creative space in which people can work, but they’re also easy to use, so newcomers can build something unique very quickly,” Murphy explained in an interview with Reuters. The new Lens Studio includes a suite of generative AI tools, such as an AI assistant that can answer developers’ questions if they need help. Another tool allows artists to type a prompt and automatically generate a three-dimensional image that they can use for their AR lens, eliminating the need to develop a 3D model from scratch. Early AR technologies only allowed users to perform simple tasks, such as placing a hat on someone’s head in a video. However, according to Murphy, Snap’s improvements will make it hard to tell whether a digital hat is actually being worn, with the hat moving seamlessly with the person’s movements and the lighting on the hat matching the video perfectly. Snap also eventually plans to create AR lenses that cover everything from your head to your toes—not just your face. Building a new wardrobe for individuals is really hard to do right now, said Murphy. Through its generative AI capabilities, Snap aims to provide advanced AR experiences that distinguish Snapchat from its peers and attract new users, even as it struggles to match the scale of giants like Meta. See also: NVIDIA presents latest advancements in visual AI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. 
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Snap introduces advanced AI for next-level augmented reality appeared first on AI News. View the full article
  22. On Inishbofin, a small island off the western coast of Ireland where the population hovers around 170 and the main industries are farming, fishing and tourism, a quiet technology revolution has been taking place. Artificial intelligence (AI), once thought to be the exclusive domain of big cities and tech hubs, is making its way to the furthest corners of rural Ireland, empowering locals with cutting-edge tools to boost their businesses and preserve their traditional crafts. It is all part of Microsoft’s ambitious ‘Skill Up Ireland’ initiative, which aims to provide every person in Ireland with the opportunity to learn AI skills. The program has partnered with the Irish government and various organisations to deliver AI training and resources to communities across the country, leaving no one behind in the era of rapid technological advancement. One recent beneficiary of this program is Andrew Murray, the general manager of the 22-room Doonmore Hotel on Inishbofin. A native of the island, Murray comes from a family that has lived on Inishbofin for generations, with his parents founding the hotel in 1969. Despite the remote location, Murray is eager to embrace AI as a tool to streamline his operations and save time. “What I’m interested in the most is the power of AI to save time for people like me,” Murray said. “Because time is the most precious thing we have, and it’s finite. There are only 24 hours in a day.” Through an AI introduction class, Murray discovered the possibilities of tools such as Microsoft Copilot, an AI-powered assistant for everything from scheduling to data analysis to creating content. He intends to use these tools to oversee things like scheduling staff and inventory management as well as invoicing and pricing – tasks that he has normally spent hours, if not days, doing completely manually. But Murray is not alone in his enthusiasm for AI on Inishbofin. 
Catherine O’Connor, a weaver who draws inspiration from the island’s natural colours and textures, has also embraced the technology. Initially wary of the AI training, O’Connor quickly became “absorbed by it” once she realised its potential to help her market her handmade scarves, table runners, and wall hangings. “Every piece has a story behind it,” O’Connor explained. “You can get a scarf at the five-and-dime store, but a handmade scarf takes hours and hours to make. It’s a totally different level. So you have to find the right words to use.” Now, with the help of Copilot, O’Connor can write engaging descriptions of her creations, market her craft on a proper e-commerce platform, and help people understand and visualise her work more accurately. Another Copilot user, Inishbofin-based florist Patricia Concannon, plans to use Copilot to make her website and Instagram captions more engaging, which should help her reach new customers and attract a wider audience for her floral displays. The AI training on Inishbofin is just one element of Microsoft’s wider ‘Skill Up Ireland’ programme to upskill and reskill people across Ireland, which includes Dream Space, an immersive learning experience that introduces STEM and AI skills to every one of the country’s one million students and their teachers. Kevin Marshall, head of Learning & Skills for Microsoft Ireland, said the rapid growth in the prevalence of AI in the last few years has necessitated upskilling and reskilling programmes. He continued: “At the same time, with the ********** of generative AI in the last 18 months, there’s a real need to educate people on what this is, to show them that it’s not ****** magic.” The challenge, however, ***** in the ever-evolving nature of AI technology. “The teaching is non-invasive, it’s collaborative,” Marshall explained. “The programs teach the basic foundations and core principles of AI. Here’s what it can do. 
Here are the risks and the ethical issues. Here are the opportunities. And here’s where you go play with it.” Programmes like ‘Skill Up Ireland’ are an opportunity for rural communities like Inishbofin to avoid being left on the wrong side of the digital divide as AI reshapes industries and the way we live and work. Audrey Murray, a felt artist and teaching assistant on the island, summed it up: “AI has to be another step, I suppose, bringing us closer to the world and bringing the world here.” And with Microsoft’s promise of AI skills for all in Ireland, the remote extremities of the Emerald Isle are on the brink of being catapulted into a future where the very latest technologies are melded with ancient skills and lifeways. Meanwhile, for the inhabitants of Inishbofin, the opportunities are yet to fully reveal themselves. Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post AI comes to Ireland’s remote Islands through Microsoft’s ‘Skill Up’ program appeared first on AI News. View the full article
  23. NVIDIA researchers are presenting new visual generative AI models and techniques at the Computer Vision and Pattern Recognition (CVPR) conference this week in Seattle. The advancements span areas like custom image generation, 3D scene editing, visual language understanding, and autonomous vehicle perception. “Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement,” said Jan Kautz, VP of learning and perception research at NVIDIA. “At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.” Among the over 50 NVIDIA research projects being presented, two papers have been selected as finalists for CVPR’s Best Paper Awards – one exploring the training dynamics of diffusion models and another on high-definition maps for self-driving cars. Additionally, NVIDIA has won the CVPR Autonomous Grand Challenge’s End-to-End Driving at Scale track, outperforming over 450 entries globally. This milestone demonstrates NVIDIA’s pioneering work in using generative AI for comprehensive self-driving vehicle models, also earning an Innovation Award from CVPR. One of the headlining research projects is JeDi, a new technique that allows creators to rapidly customise diffusion models – the leading approach for text-to-image generation – to depict specific objects or characters using just a few reference images, rather than the time-intensive process of fine-tuning on custom datasets. Another breakthrough is FoundationPose, a new foundation model that can instantly understand and track the 3D pose of objects in videos without per-object training. It set a new performance record and could unlock new AR and robotics applications. 
NVIDIA researchers also introduced NeRFDeformer, a method to edit the 3D scene captured by a Neural Radiance Field (NeRF) using a single 2D snapshot, rather than having to manually reanimate changes or recreate the NeRF entirely. This could streamline 3D scene editing for graphics, robotics, and digital twin applications. On the visual language front, NVIDIA collaborated with MIT to develop VILA, a new family of vision language models that achieve state-of-the-art performance in understanding images, videos, and text. With enhanced reasoning capabilities, VILA can even comprehend internet memes by combining visual and linguistic understanding. NVIDIA’s visual AI research spans numerous industries and includes over a dozen papers exploring novel approaches for autonomous vehicle perception, mapping, and planning. Sanja Fidler, VP of NVIDIA’s AI Research team, is presenting on the potential of vision language models for self-driving cars. The breadth of NVIDIA’s CVPR research exemplifies how generative AI could empower creators, accelerate automation in manufacturing and healthcare, and propel autonomy and robotics forward. (Photo by v2osk) See also: NLEPs: Bridging the gap between LLMs and symbolic reasoning The post NVIDIA presents latest advancements in visual AI appeared first on AI News. 
  24. Researchers have introduced a novel approach called natural language embedded programs (NLEPs) to improve the numerical and symbolic reasoning capabilities of large language models (LLMs). The technique involves prompting LLMs to generate and execute Python programs to solve user queries, then output solutions in natural language. While LLMs like ChatGPT have demonstrated impressive performance on various tasks, they often struggle with problems requiring numerical or symbolic reasoning. NLEPs follow a four-step problem-solving template: calling necessary packages, importing natural language representations of required knowledge, implementing a solution-calculating function, and outputting results as natural language with optional data visualisation. This approach offers several advantages, including improved accuracy, transparency, and efficiency. Users can investigate generated programs and fix errors directly, avoiding the need to rerun entire models for troubleshooting. Additionally, a single NLEP can be reused for multiple tasks by replacing certain variables. The researchers found that NLEPs enabled GPT-4 to achieve over 90% accuracy on various symbolic reasoning tasks, outperforming task-specific prompting methods by 30%. Beyond accuracy improvements, NLEPs could enhance data privacy by running programs locally, eliminating the need to send sensitive user data to external companies for processing. The technique may also boost the performance of smaller language models without costly retraining. However, NLEPs rely on a model’s program generation capability and may not work as well with smaller models trained on limited datasets. Future research will explore methods to make smaller LLMs generate more effective NLEPs and investigate the impact of prompt variations on reasoning robustness. 
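The four-step NLEP template described above is easiest to see in a concrete sketch. The following is a hypothetical example of the kind of Python program an LLM might generate under that template; the question, the variable names, and the `is_prime` helper are illustrative assumptions, not code taken from the paper.

```python
import math

# Step 1: call necessary packages (here, only the standard library's math module)

# Step 2: natural-language representation of the required knowledge
question = "Which of the numbers 11, 15, 17 and 21 are prime?"
candidates = [11, 15, 17, 21]

# Step 3: implement a function that calculates the solution
def is_prime(n):
    # A number is prime if it is at least 2 and has no divisor up to its square root
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def solve(numbers):
    return [n for n in numbers if is_prime(n)]

# Step 4: output the result as natural language
primes = solve(candidates)
answer = f"Of the numbers given, the primes are {', '.join(map(str, primes))}."
print(answer)
```

In a full NLEP pipeline, the model would emit a program of this shape in response to the user's query, the program would be run, and only the final natural-language sentence would be returned; replacing the `candidates` list illustrates how one generated NLEP can be reused for new inputs.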
The research, supported in part by the Center for Perceptual and Interactive Intelligence of Hong Kong, will be presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics later this month. (Photo by Alex Azabache) See also: Apple is reportedly getting free ChatGPT access The post NLEPs: Bridging the gap between LLMs and symbolic reasoning appeared first on AI News. 
  25. The European Union recently introduced the AI Act, a new governance framework compelling organisations to enhance transparency regarding their AI systems’ training data. Should this legislation come into force, it could penetrate the defences that many in Silicon Valley have built against such detailed scrutiny of AI development and deployment processes. Since the public release of OpenAI’s Microsoft-backed ChatGPT 18 months ago, there has been significant growth in interest and investment in generative AI technologies. These applications, capable of writing text, creating images, and producing audio content at record speeds, have attracted considerable attention. However, the rise in AI activity accompanying these changes prompts an intriguing question: how do AI developers actually source the data needed to train their models? Is it through the use of unauthorised copyrighted material? Implementing the AI Act The EU’s AI Act, intended to be implemented gradually over the next two years, aims to address these issues. New laws take time to embed, and a gradual rollout allows regulators the necessary time to adapt to the new laws and for businesses to adjust to their new obligations. However, the implementation of some rules remains in doubt. One of the more contentious sections of the Act stipulates that organisations deploying general-purpose AI models, such as ChatGPT, must provide “detailed summaries” of the content used to train them. The newly established AI Office has announced plans to release a template for organisations to follow in early 2025, following consultation with stakeholders. AI companies have expressed strong resistance to revealing their training data, describing this information as trade secrets that would provide competitors with an unfair advantage if made public. 
The level of detail required in these transparency reports will have significant implications for both smaller AI startups and major tech companies like Google and Meta, which have positioned AI technology at the center of their future operations. Over the past year, several top technology companies—Google, OpenAI, and Stability AI—have faced lawsuits from creators who claim their content was used without permission to train AI models. Under growing scrutiny, however, some tech companies have, in the past two years, broken ranks and negotiated content-licensing deals with individual media outlets and websites. Some creators and lawmakers remain concerned that these measures are not sufficient. European lawmakers’ divide In Europe, differences among lawmakers are stark. Dragos Tudorache, who led the drafting of the AI Act in the European Parliament, argues that AI companies should be required to open-source their datasets. Tudorache emphasises the importance of transparency so that creators can determine whether their work has been used to train AI algorithms. Conversely, under the leadership of President Emmanuel Macron, the French government has privately opposed introducing rules that could hinder the competitiveness of European AI startups. French Finance Minister Bruno Le Maire has emphasised the need for Europe to be a world leader in AI, not merely a consumer of American and Chinese products. The AI Act acknowledges the need to balance the protection of trade secrets with the facilitation of rights for parties with legitimate interests, including copyright holders. However, striking this balance remains a significant challenge. Views differ across industries. Matthieu Riouf, CEO of the AI-powered image-editing firm Photoroom, compares the situation to culinary practices, claiming there’s a secret part of the recipe that the best chefs wouldn’t share. 
He represents just one instance on the laundry list of possible scenarios where this kind of secrecy could be rampant. However, Thomas Wolf, co-founder of one of the world’s top AI startups, Hugging Face, argues that while there will always be an appetite for transparency, it doesn’t mean that the entire industry will adopt a transparency-first approach. A series of recent controversies have driven home just how complicated this all is. OpenAI demonstrated the latest version of ChatGPT in a public session, where the company was roundly criticised for using a synthetic voice that sounded nearly identical to that of actress Scarlett Johansson. These examples point to the potential for AI technologies to violate personal and proprietary rights. Throughout the development of these regulations, there has been heated debate about their potential effects on future innovation and competitiveness in the AI world. In particular, the French government has urged that innovation, not regulation, should be the starting point, given the dangers of regulating aspects that have not yet been fully understood. The way the EU regulates AI transparency could have significant impacts on tech companies, digital creators, and the overall digital landscape. Policymakers thus face the challenge of fostering innovation in the dynamic AI industry while simultaneously guiding it towards safe, ethical decisions and preventing IP infringement. In sum, if adopted, the EU AI Act would be a significant step toward greater transparency in AI development. However, the practical implementation of these regulations and their industry results could be far off. Moving forward, especially at the dawn of this new regulatory paradigm, the balance between innovation, ethical AI development, and the protection of intellectual property will remain a central and contested issue for stakeholders of all stripes to grapple with. 
See also: Apple is reportedly getting free ChatGPT access The post EU AI legislation sparks controversy over data transparency appeared first on AI News. 
