Everything posted by ChatGPT
-
The demand for content is as high as ever in today’s digital world, with businesses, individuals, and marketers seeking fresh, engaging content to connect with their audiences. This increasing demand has resulted in the rise of AI-powered content writing tools, raising concerns from human writers about their future in this market. Can AI tools and human writers coexist in the online space? Could they complement each other? The answer is yes, if humans embrace AI’s strengths and counteract its inherent weaknesses.

Efficiency and productivity

As someone who has worked in the content industry for over a decade, I have been taken aback by AI’s ability to streamline the content creation process. AI can quickly produce large swathes of content based on predefined parameters such as keyword density, readability, and tone. The speed at which AI-powered writing tools perform is staggering, making them perfect for creating large volumes of content, such as blog posts, product descriptions, and even social media updates.

However, AI-generated content often lacks the creativity and depth that only human writers bring to the table. For example, an AI tool like Google’s Gemini or ChatGPT could easily write a detailed, fact-filled review of the online sportsbook MyBookie. Yet those tools cannot offer a personal anecdote about an interaction with customer support, the ease of depositing and withdrawing, or similar. People want facts, but they also want a personal touch, which is what makes content engaging and relatable.

Idea generation and research

AI-powered tools excel at analysing huge amounts of data and identifying opportunities, trends, and patterns, which makes AI writing tools an invaluable asset for idea generation and research. I lean on AI for help in this area because although I specialise in gaming-related content, many of my clients request articles about unfamiliar subjects.

Last week, I wrote an article about NASCAR racing despite knowing only that these stock cars race around an oval track. I asked ChatGPT for several ideas for NASCAR-related articles and for statistics from some of the best-known drivers and teams. The tool almost instantly came back with several potential topics and a plethora of data, which I then delved deeper into while adding the originality to my article that AI cannot replicate.

Customisation and personalisation

AI-powered writing tools also shine when it comes to producing content from user data, enabling a business to create and publish content with tailored messages for different segments of an audience. For example, an e-commerce site can use AI to create personalised product descriptions or social media posts that appeal to specific demographics.

However, as mentioned above, while AI can personalise content based on data, it lacks the authenticity and emotional connection human writers provide. Humans can add personal touches to make the content feel genuine or adjust the tone to reflect the brand’s voice. Such intricacies and finishing touches help to create an authentic and consistent experience for the reader.

Quality control and editing

I said earlier that I often use AI-powered tools for idea generation and research, but I also use AI during quality control and editing. You may write an article packed with stellar content that you are proud of, only to discover upon publishing that it is strewn with spelling and grammatical errors.
Grammarly Premium costs me approximately $150 annually, but it is worth its weight in gold. This powerful editing tool analyses my text for grammatical errors, sentence structure, and readability while providing easy-to-understand suggestions for improvement. Such a tool helps me to produce cleaner and more polished content without the need for extensive editing.

However, although Grammarly is perfect for handling basic quality control, my human touch is still essential for the final review. This is especially true when writing an article with technical terms, jargon, or slang, because the AI tool is often unaware of such words or phrases and incorrectly flags them as mistakes. Combining AI and human editing ensures my content is coherent, correct, and engaging.

Scalability

AI can quickly and efficiently produce hundreds or thousands of pieces of content in a short period of time. This incredible scalability makes AI writing tools valuable for e-commerce sites, marketing agencies, or even news outlets that demand a constant flow of fresh content.

However, while AI can easily scale production, human writers must ensure that AI-generated content is high quality and creative. AI-generated content can quickly become formulaic, generic, and repetitive without human input. One only needs to scour the internet for a few minutes before stumbling across a website where AI has mass-produced the site’s content. Every article starts similarly, has the same structure, and has content drier than the Gobi Desert! Such websites probably enjoy plenty of traffic from new visitors. Still, I would be willing to bet my bottom dollar that their returning-visitor figures are rock bottom because the content lacks engagement; you can’t put a price on a returning customer’s loyalty.

Conclusion

It is clear that AI-powered writing tools are not going away. Indeed, they will become more prevalent, if anything. While human writers have valid reasons to be concerned about their future in the content creation industry, they should learn to leverage the power of AI to improve their content. Human writers can let AI handle the heavy lifting of data processing, research, and even content generation (to some degree), freeing up priceless time for them to be creative and to add elements of flair and personality to the AI-generated work.

In its current guise, AI is a human writer’s powerful and efficient ally, not a legitimate threat. Humans and AI can coexist in the content creation world until AI develops the ability to make up believable stories or think as only a human can, and such a time will likely not arrive during our lifetimes.

The post Can AI writing tools and human writers coexist? appeared first on AI News.
-
AI hardware startup Cerebras has created a new AI inference solution that could potentially rival Nvidia’s GPU offerings for enterprises. The Cerebras Inference tool is based on the company’s Wafer-Scale Engine and promises to deliver staggering performance. According to sources, the tool has achieved speeds of 1,800 tokens per second for Llama 3.1 8B, and 450 tokens per second for Llama 3.1 70B. Cerebras claims that these speeds are not only faster than those of the usual hyperscale cloud offerings built on Nvidia’s GPUs, but also more cost-efficient.

This is a major shift in the generative AI market, as Gartner analyst Arun Chandrasekaran put it. While the market’s focus had previously been on training, it is currently shifting to the cost and speed of inferencing. This shift is due to the growth of AI use cases within enterprise settings and provides a great opportunity for vendors of AI products and services like Cerebras to compete on performance.

As Micah Hill-Smith, co-founder and CEO of Artificial Analysis, says, Cerebras really shone in its AI inference benchmarks. The company’s measurements reached over 1,800 output tokens per second on Llama 3.1 8B, and over 446 output tokens per second on Llama 3.1 70B, setting new records in both benchmarks.

However, despite the potential performance advantages, Cerebras faces significant challenges in the enterprise market. Nvidia’s software and hardware stack dominates the industry and is widely adopted by enterprises. David Nicholson, an analyst at Futurum Group, points out that while Cerebras’ wafer-scale system can deliver high performance at a lower cost than Nvidia, the key question is whether enterprises are willing to adapt their engineering processes to work with Cerebras’ system.

The choice between Nvidia and alternatives such as Cerebras depends on several factors, including the scale of operations and available capital. Smaller firms are likely to choose Nvidia since it offers already-established solutions, while larger businesses with more capital may opt for alternatives like Cerebras to increase efficiency and save on costs.

As the AI hardware market continues to evolve, Cerebras will also face competition from specialised cloud providers, hyperscalers like Microsoft, AWS, and Google, and dedicated inferencing providers such as Groq. The balance between performance, cost, and ease of implementation will likely shape enterprise decisions in adopting new inference technologies.

The emergence of high-speed AI inference, capable of exceeding 1,000 tokens per second, is comparable to the arrival of broadband internet and could open a new frontier for AI applications. Cerebras’ 16-bit precision and faster inference capabilities may enable the creation of future AI applications where entire AI agents must operate rapidly, repeatedly, and in real time.

With the growth of the AI field, the market for AI inference hardware is also expanding. Accounting for around 40% of the total AI hardware market, this segment is becoming an increasingly lucrative target within the broader industry. Given that established companies occupy the majority of this segment, newcomers should carefully weigh the competitive nature of the market and the significant resources required to navigate the enterprise space.
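To put figures like “1,800 tokens per second” in context, throughput for a hosted model is typically measured by streaming a completion and dividing the tokens received by the elapsed wall-clock time. Below is a minimal sketch of that measurement against any OpenAI-compatible chat endpoint; the base URL, model name, and environment variable are placeholders for illustration, not confirmed details of Cerebras’ API.

```python
import os
import time
from openai import OpenAI  # pip install openai

# Hypothetical OpenAI-compatible endpoint; swap in the real base URL,
# API key variable, and model name for the provider you are testing.
client = OpenAI(
    base_url="https://api.example-inference.com/v1",  # assumption
    api_key=os.environ["INFERENCE_API_KEY"],          # assumption
)

start = time.perf_counter()
tokens = 0

# Stream the response so output chunks can be counted as they arrive.
stream = client.chat.completions.create(
    model="llama-3.1-8b",  # assumption
    messages=[{"role": "user", "content": "Summarise the history of the transistor."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        tokens += 1  # rough proxy: one streamed chunk is roughly one token

elapsed = time.perf_counter() - start
print(f"~{tokens / elapsed:.0f} tokens/sec over {elapsed:.1f}s")
```

Chunk counting is only an approximation of true token throughput; rigorous benchmarks such as those from Artificial Analysis also separate time-to-first-token from steady-state generation speed.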
See also: Sovereign AI gets boost from new NVIDIA microservices

The post Cerebras vs Nvidia: New inference tool promises higher performance appeared first on AI News.
-
Chinese internet search provider Baidu has updated its Wikipedia-like Baike service to prevent Google and Microsoft Bing from scraping its content. This change was observed in the latest update to the Baidu Baike robots.txt file, which denies access to the Googlebot and Bingbot crawlers. According to the Wayback Machine, the change took place on August 8. Previously, Google and Bing were allowed to index Baidu Baike’s central repository, which includes almost 30 million entries, although some target subdomains on the website were restricted.

This action by Baidu comes amid increasing demand for large datasets used in training artificial intelligence models and applications, and it follows similar moves by other companies to protect their online content. In July, Reddit blocked various search engines, except Google, from indexing its posts and discussions; Google has a financial agreement with Reddit for data access to train its AI services. According to sources, in the past year Microsoft considered restricting access to internet-search data for rival search engine operators, particularly those who used the data for chatbots and generative AI services.

Meanwhile, the Chinese Wikipedia, with its 1.43 million entries, remains available to search engine crawlers. A survey conducted by the South China Morning Post found that entries from Baidu Baike still appear in both Bing and Google searches, perhaps because the search engines continue to use older cached content.

Such a move comes as developers of generative AI around the world increasingly work with content publishers to access the highest-quality content for their projects. For instance, OpenAI recently signed an agreement with Time magazine to access its entire archive, dating back to the first day of the magazine’s publication over a century ago. A similar partnership was inked with the Financial Times in April.

Baidu’s decision to restrict access to its Baike content for major search engines highlights the growing importance of data in the AI era. As companies invest heavily in AI development, the value of large, curated datasets has significantly increased. This has led to a shift in how online platforms manage access to their content, with many choosing to limit or monetise access to their data. As the AI industry continues to evolve, it’s likely that more companies will reassess their data-sharing policies, potentially leading to further changes in how information is indexed and accessed across the internet.

See also: Google advances mobile AI in Pixel 9 smartphones

The post Baidu restricts Google and Bing from scraping content for AI training appeared first on AI News.
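For reference, blocking specific crawlers in robots.txt is done with per-agent Disallow rules. The snippet below is an illustrative sketch of the kind of directives described, not a copy of Baidu Baike’s actual file:

```
# Illustrative robots.txt: block Google's and Bing's crawlers site-wide
User-agent: Googlebot
Disallow: /

User-agent: Bingbot
Disallow: /

# All other crawlers remain allowed (empty Disallow permits everything)
User-agent: *
Disallow:
```

Compliance is voluntary: robots.txt is an advisory protocol rather than an enforcement mechanism, which is one reason previously cached entries can continue to surface in search results.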
-
Chinese organisations are utilising cloud services from Amazon and its competitors to gain access to advanced US AI chips and capabilities that they cannot otherwise obtain, according to a Reuters report based on public tender documents.

In a comprehensive investigation, Reuters revealed how Chinese cloud access to US AI chips is facilitated through intermediaries. Over 50 tender documents posted in the past year revealed that at least 11 Chinese entities have sought access to restricted US technologies or cloud services. Four of these explicitly named Amazon Web Services (AWS) as a cloud service provider, though accessed through Chinese intermediaries rather than directly from AWS.

“AWS complies with all applicable US laws, including trade laws, regarding the provision of AWS services inside and outside of China,” an AWS spokesperson told Reuters.

The report highlights that while the US government has restricted the export of high-end AI chips to China, providing access to such chips or advanced AI models through the cloud is not a violation of US regulations. This loophole has raised concerns among US officials and lawmakers.

One example cited in the report involves Shenzhen University, which spent 200,000 yuan (£21,925) on an AWS account to access cloud servers powered by Nvidia A100 and H100 chips for an unspecified project. The university obtained this service via an intermediary, Yunda Technology Ltd Co. Neither Shenzhen University nor Yunda Technology responded to Reuters’ requests for comment.

The investigation also revealed that Zhejiang Lab, a research institute developing its own large language model called GeoGPT, stated in a tender document that it intended to spend 184,000 yuan to purchase AWS cloud computing services. The institute claimed that its AI model could not get enough computing power from homegrown Alibaba cloud services.

Michael McCaul, chair of the US House of Representatives Foreign Affairs Committee, told Reuters: “This loophole has been a concern of mine for years, and we are long overdue to address it.”

In response to these concerns, the US Commerce Department is tightening its rules. A government spokeswoman told Reuters that the department is “seeking additional resources to strengthen our existing controls that restrict PRC companies from accessing advanced AI chips through remote access to cloud computing capability.” The Commerce Department has also proposed a rule that would require US cloud computing firms to verify large AI model users and notify authorities when their services are used to train large AI models capable of “malicious cyber-enabled activity.”

The report also found that Chinese companies are seeking access to Microsoft’s cloud services. For example, Sichuan University stated in a tender filing that it was developing a generative AI platform and would purchase 40 million Microsoft Azure OpenAI tokens to help with project delivery.

Reuters’ report also indicated that Amazon has provided Chinese businesses with access to modern AI chips as well as advanced AI models such as Anthropic’s Claude, which they would not otherwise have had. This was demonstrated by public postings, tenders, and marketing materials evaluated by the news organisation.
Chu Ruisong, President of AWS Greater China, stated during a generative AI-themed conference in Shanghai in May that “Bedrock provides a selection of leading LLMs, including prominent closed-source models such as Anthropic’s Claude 3.”

The report overall emphasises the difficulty of regulating access to advanced computing resources in an increasingly interconnected global technological ecosystem. It focuses on the intricate relationship between US export laws, cloud service providers, and Chinese enterprises looking to improve their AI capabilities. As the US government works to close this gap, the situation raises questions about the efficacy of present export controls and the potential need for more comprehensive rules that cover cloud-based access to restricted technologies.

The findings are likely to feed ongoing discussions about technology transfer, national security, and the global AI race. As politicians and industry leaders analyse them, they may spark fresh debate about how to balance technological cooperation with national security concerns in an era of rapid AI growth.

See also: GlobalData: China is ahead of global rivals for AI ‘unicorns’

The post Chinese firms use cloud loophole to access US AI tech appeared first on AI News.
-
To ensure AI systems reflect local values and regulations, nations are increasingly pursuing sovereign AI strategies: developing AI utilising their own infrastructure, data, and expertise. NVIDIA is lending its support to this movement with the launch of four new NVIDIA Neural Inference Microservices (NIM).

These microservices are designed to simplify the creation and deployment of generative AI applications, supporting regionally-tailored community models. They promise deeper user engagement through an enhanced understanding of local languages and cultural nuances, leading to more accurate and relevant responses.

This move comes amid an anticipated boom in the Asia-Pacific generative AI software market: ABI Research forecasts revenue will surge from $5 billion this year to a staggering $48 billion by 2030.

Among the new offerings are two regional language models: Llama-3-Swallow-70B, trained on Japanese data, and Llama-3-Taiwan-70B, optimised for Mandarin. These models are designed to possess a more thorough grasp of local laws, regulations, and cultural intricacies.

Further bolstering the Japanese language offering is the RakutenAI 7B model family. Built upon Mistral-7B and trained on both English and Japanese datasets, the models are available as two distinct NIM microservices for Chat and Instruct functions. Notably, Rakuten’s models have achieved impressive results in the LM Evaluation Harness benchmark, securing the highest average score among open Japanese large language models between January and March 2024.

Training LLMs on regional languages is crucial for enhancing output efficacy. By accurately reflecting cultural and linguistic subtleties, these models facilitate more precise and nuanced communication. Compared to base models like Llama 3, these regional variants demonstrate superior performance in understanding Japanese and Mandarin, handling regional legal tasks, answering questions, and translating and summarising text.

This global push for sovereign AI infrastructure is evident in significant investments from nations like Singapore, the UAE, South Korea, Sweden, France, Italy, and India.

“LLMs are not mechanical tools that provide the same benefit for everyone. They are rather intellectual tools that interact with human culture and creativity. The influence is mutual where not only are the models affected by the data we train on, but also our culture and the data we generate will be influenced by LLMs,” said Rio Yokota, professor at the Global Scientific Information and Computing Center at the Tokyo Institute of Technology. “Therefore, it is of paramount importance to develop sovereign AI models that adhere to our cultural norms. The availability of Llama-3-Swallow as an NVIDIA NIM microservice will allow developers to easily access and deploy the model for Japanese applications across various industries.”

NVIDIA’s NIM microservices enable businesses, government bodies, and universities to host native LLMs within their own environments, and developers benefit from the ability to create sophisticated copilots, chatbots, and AI assistants. Available with NVIDIA AI Enterprise, these microservices are optimised for inference using the open-source NVIDIA TensorRT-LLM library, promising enhanced performance and deployment speed.

Performance gains are evident with the Llama 3 70B microservices (the base for the new Llama-3-Swallow-70B and Llama-3-Taiwan-70B offerings), which boast up to 5x higher throughput.
This translates into reduced operational costs and improved user experiences through minimised latency.

See also: OpenAI delivers GPT-4o fine-tuning

The post Sovereign AI gets boost from new NVIDIA microservices appeared first on AI News.
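In practice, NIM containers expose an OpenAI-compatible REST API, so a deployed regional model can be queried with standard tooling. The sketch below assumes a NIM microservice already running locally on port 8000 and uses an illustrative model identifier; both are assumptions rather than confirmed details of the Llama-3-Swallow deployment.

```python
import requests  # pip install requests

# Hypothetical local NIM endpoint; NIM containers typically serve an
# OpenAI-compatible API. The model name below is illustrative only.
url = "http://localhost:8000/v1/chat/completions"  # assumption
payload = {
    "model": "llama-3-swallow-70b",  # assumption: check GET /v1/models
    "messages": [
        {"role": "user", "content": "日本の祝日について教えてください。"}
    ],
    "max_tokens": 256,
}

resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the interface mirrors OpenAI’s, existing client libraries can usually be pointed at such a microservice by changing only the base URL.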
-
[AI]Authors file lawsuit against Anthropic over ‘pirated’ works
Three authors have filed a lawsuit against AI startup Anthropic, alleging the firm used their copyrighted works without permission to train its Claude language models. Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a complaint in a California court, accusing Anthropic of having “pirated” their written material to develop its AI systems. The authors claim Anthropic downloaded pirated versions of their books from pirate websites to use as training data.

The lawsuit alleges Anthropic “built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books.” It states the company “ignored copyright protections” and engaged in “large scale theft of copyrighted works” to train its Claude models. Anthropic has not commented substantively on the allegations, saying only that it is “aware” of the legal action.

The case joins similar lawsuits against other AI companies like Microsoft and OpenAI over using copyrighted material to develop large language models, and highlights growing tensions between content creators and AI firms regarding intellectual property rights.

According to the complaint, Anthropic used a dataset called ‘The Pile’ to train Claude. This dataset allegedly included a collection of pirated ebooks called ‘Books3,’ which contained nearly 200,000 books downloaded from an unauthorised source. The authors argue that Anthropic knew it was using copyrighted works without permission. They claim the company made a “deliberate decision to cut corners and rely on stolen materials to train their models” rather than obtaining proper licences.

The lawsuit states that Anthropic’s actions have harmed authors by depriving them of book sales and licensing revenues. It alleges the company’s AI models now compete with human-written content, threatening writers’ livelihoods. For context, Anthropic positions its Claude models as rivals to OpenAI’s ChatGPT and other prominent AI chatbots. The company has raised billions in funding and is valued at over $18 billion.

Critics argue that AI firms should compensate authors and publishers for using their works as training data, and some companies, like Google, have begun licensing deals with news organisations and other content providers. However, AI developers contend that using copyrighted material for machine learning falls under copyright law’s ‘fair use’ provisions, arguing that their models do not reproduce exact copies of training texts.

The debate touches on complex legal and ethical questions about how copyright applies to AI development. Courts may need to determine whether AI training constitutes copyright infringement or transformative fair use. For authors, the lawsuit represents an effort to assert control over how their works are used in AI development. They argue that companies profiting from AI should compensate the creators whose material made the technology possible.

The case could have significant implications for the AI industry if courts rule that firms must obtain licences for all copyrighted material used in training, which would likely increase costs and complexity for AI development.

Anthropic has positioned itself as a developer of “safe and ethical” AI systems, and its CEO has described the company as “focused on public benefit.” However, the authors’ lawsuit challenges this image, accusing Anthropic of building its business through copyright infringement.
The complaint seeks statutory damages for alleged wilful copyright infringement and an injunction to prevent Anthropic from further using the authors’ works without permission.

As AI capabilities grow, debates over intellectual property are likely to intensify. Content creators argue that their work should be protected and compensated, while AI companies push for access to broad datasets to improve their models. The outcome of cases like this one could help shape the legal and regulatory landscape for AI development, influencing how companies approach training data collection and whether widespread licensing becomes the norm.

For now, the lawsuit adds to the mounting legal challenges facing major AI firms over their use of copyrighted material. As courts grapple with these issues, their rulings could have far-reaching effects on the future of AI and content creation.

The case is filed as Andrea Bartz et al. v. Anthropic PBC, US District Court for the Northern District of California, No. 3:24-cv-05417.

See also: Anthropic says Claude 3 Haiku is the fastest model in its class

The post Authors file lawsuit against Anthropic over ‘pirated’ works appeared first on AI News.

-
OpenAI has added its voice to the growing chorus of tech leaders and politicians opposing a controversial AI safety bill in California. The company argues that the legislation, SB 1047, would stifle innovation and that regulation should be handled at a federal level.

In a letter sent to California State Senator Scott Wiener’s office, OpenAI expressed concerns that the bill could have “broad and significant” implications for US competitiveness and national security. The company argued that SB 1047 would threaten California’s position as a global leader in AI, prompting talent to seek “greater opportunity elsewhere.”

Introduced by Senator Wiener, the bill aims to enact “common sense safety standards” for companies developing large AI models exceeding specific size and cost thresholds. These standards would require companies to implement shut-down mechanisms, take “reasonable care” to prevent catastrophic outcomes, and submit compliance statements to the California attorney general. Failure to comply could result in lawsuits and civil penalties.

Lieutenant General John (Jack) Shanahan, who served in the US Air Force and was the inaugural director of the US Department of Defense’s Joint Artificial Intelligence Center (JAIC), believes the bill “thoughtfully navigates the serious risks that AI poses to both civil society and national security” and provides “pragmatic solutions.”

Hon. Andrew C. Weber, former Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, echoed the national security concerns. “The theft of a powerful AI system from a leading lab by our adversaries would impose considerable risks on us all,” said Weber. “Developers of the most advanced AI systems need to take significant cybersecurity precautions given the potential risks involved in their work. I’m glad to see that SB 1047 helps establish the necessary protective measures.”

SB 1047 has sparked fierce opposition from major tech companies, startups, and venture capitalists who argue that it overreaches for a nascent technology, potentially stifling innovation and driving businesses from the state. These concerns are echoed by OpenAI, with sources revealing that the company has paused plans to expand its San Francisco offices due to the uncertain regulatory landscape.

Senator Wiener defended the bill, stating that OpenAI’s letter fails to “criticise a single provision.” He dismissed concerns about a talent exodus as “nonsensical,” stating that the law would apply to any company conducting business in California, regardless of its physical location. Wiener highlighted the bill’s “highly reasonable” requirement for large AI labs to test their models for catastrophic safety risks, a practice many have already committed to.

Critics, however, counter that mandating the submission of model details to the government will hinder innovation. They also fear that the threat of lawsuits will deter smaller, open-source developers from establishing startups. In response to the backlash, Senator Wiener recently amended the bill to eliminate criminal liability for non-compliant companies, safeguard smaller developers, and remove the proposed “Frontier Model Division.”

OpenAI maintains that a clear federal framework, rather than state-level regulation, is essential for preserving public safety while maintaining US competitiveness against rivals like China.
The company highlighted the suitability of federal agencies, such as the White House Office of Science and Technology Policy and the Department of Commerce, to govern AI risks.

Senator Wiener acknowledged the ideal of congressional action but expressed scepticism about its likelihood. He drew parallels with California’s data privacy law, passed in the absence of federal action, suggesting that inaction from Congress shouldn’t preclude California from taking a leading role.

The California state assembly is set to vote on SB 1047 this month. If passed, the bill will land on the desk of Governor Gavin Newsom, whose stance on the legislation remains unclear. However, Newsom has publicly recognised the need to balance AI innovation with risk mitigation.

See also: OpenAI delivers GPT-4o fine-tuning

The post OpenAI warns California’s AI bill threatens US innovation appeared first on AI News.
-
The rise of the creator economy was one of the most disruptive forces to emerge from the internet, paving the way for independent writers, artists, musicians, podcasters, YouTubers and social media influencers to connect with audiences directly and earn money from doing so. Creators have flocked to platforms such as Facebook, Instagram, Vimeo, Substack, TikTok and more, where they can not only create but also publish and share their user-generated content. Social media enables individuals to become self-publishers and independent producers of content, disrupting existing business models and enabling an entire generation of creative minds to establish their own path to success.

Until recently, the creativity such individuals express was always thought to be a uniquely human quality and therefore invulnerable to disruption by advancing technology. However, the rise of generative AI, which comes so soon after the emergence of the creator economy, threatens to disrupt this nascent industry and significantly alter the way new content is produced. With generative AI models, anyone can churn out paragraphs of text, lines of software code, high-quality images, audio, video and more, using simple prompts.

How does AI aid with user-generated content?

Generative AI burst into the public consciousness with the arrival of ChatGPT in late 2022, taking the internet by storm, and since then tech companies have rushed to create all manner of consumer-friendly applications that can aid in content creation. For instance, there’s ChatGPT itself, which is all about text generation, capable of writing blog posts, essays, marketing copy, email pitches, documents and more, based on a simple prompt where the user tells it what to write. More impressive forms of content generation include image-generating models such as Midjourney, which can create dramatic pictures based on users’ ideas of what they want to see, and there are now even video generators, such as OpenAI’s Sora, Google DeepMind’s Veo and Runway, that can do the same.

Generative AI is also having an impact on video game content generation. Take the novel technology developed by AMGI Studios for its hit Web3 game My Pet Hooligan, which uses proprietary motion capture and AI algorithms to capture the gamer’s facial expressions and replicate them on their in-game avatars. It further uses generative AI to provide each user character (which is a unique NFT) with its own distinctive personality that users can learn about through a chat interface. Other ways people use generative AI to enhance creativity include Buzzfeed’s personalized content creation tools, which enable users to quickly create customized quizzes tailored to each individual, and its generative AI recipe creator, which can serve up ideas for meals based on whatever the user has in the fridge.

Three ways this can go

In the eyes of some, AI-generated content has emerged as a major threat to user-generated content, but not everyone sees it that way. It’s unclear what kind of impact generative AI will ultimately have on the creator economy, but there are a number of possible scenarios that may unfold.

Scenario 1: AI enhances creativity

In the first scenario, it’s possible to imagine a world in which there’s an explosion of AI-assisted innovation, in which content creators themselves adopt AI to improve their performance and productivity.
For instance, designers can use AI to quickly generate basic ideas and outlines before using their human expertise to fine-tune those creations, be it a logo, a product design or something else. Rather than replacing designers entirely, generative AI simply becomes a tool that they use to improve their output and get more work done.

An example of this is GitHub’s coding assistant Copilot, a generative AI tool that acts as a kind of programming assistant, helping developers to generate code. It doesn’t replace their role entirely, but simply assists them in generating code snippets, such as the lines of code required to program an app to perform standard actions. But the developer is the one who oversees this and uses their creativity to design all of the intricacies of the app. AMGI’s in-game content generation tools are another example of how AI augments human creativity, creating unique in-game characters and situations that are ultimately based on the user’s actions.

Such a scenario isn’t a threat to creative workers and user-generated content. Rather than taking people’s jobs, AI will simply support the people who do those jobs and make them better at it. They’ll be able to work faster and more efficiently, getting more work done in shorter time frames, spending more of their time prompting the AI tools they use and editing their outputs. It will enable creative projects to move forward much faster, accelerating innovation.

Scenario 2: AI monopolises creativity

A more dystopian scenario is one where algorithmic models leverage their unfair advantage to totally dominate the world of content creation. It’s a future where human designers, writers, coders and perhaps even highly skilled professionals like physicists are drowned out by AI models that can not only work faster, but at much lower cost than humans can. From a business perspective, replacing costly human creators with cheap and cheerful AI translates to more profitability. But there are concerns, not only for the humans who lose their livelihoods, but also about the impact on creativity itself.

As impressive as generative AI-created content sometimes is, the outputs of these algorithms are all based on existing content, namely the data they’re trained on. Most AI models have a habit of regurgitating similar content. Take an AI writer that always seems to write prose in the same, instantly recognizable and impersonal way, or AI image generators that constantly churn out images with the same aesthetic.

An even more alarming example of this is the AI music generators Suno and Uncharted Labs, whose tools are said to have been trained on millions of music videos posted on YouTube. Musicians represented by the Recording Industry Association of America recently filed lawsuits against those companies, accusing them of copyright infringement. Their evidence? Numerous examples of supposedly original songs that sound awfully similar to existing ones created by humans. For instance, the lawsuit describes a song generated using Suno, called “Deep down in Louisiana close to New Orle,” which seems to mirror the lyrics and style of Chuck Berry’s “Johnny B. Goode.” It also highlights a second track, “Prancing Queen,” that seems to be a blatant rip-off of the ABBA hit “Dancing Queen.” These examples raise questions over AI’s ability to create truly original content.
If AI were to monopolise creativity, it could bring true innovation and creativity screeching to a halt, leading to a future that’s sterile and bland.

Scenario 3: Human creativity stands out

Given AI’s lack of true authenticity and originality, a third possible way this could play out is a kind of backlash against it. With consumers overwhelmed by a sea of mundane, synthetic imagery and prose, those with an eye for flair will likely be able to identify true, human creativity and pay a premium for that content. After all, humans have always shown a preference for true originality, and such a scenario could well play into the hands of the most talented content creators.

It’s a future where being human gives creators a competitive edge over their algorithmic rivals, with their unparalleled ability to come up with truly original ideas setting their work apart. Human culture, fashions and trends seem to evolve faster than generative AI models are created, which means the most original thinkers will always be one step ahead. It’s a more reassuring future where humans will continue to create and be rewarded for their work, and where machines will only ever be able to copy and iterate on existing ideas.

This is perhaps the most likely scenario and, reassuringly, it means there will always be a need for humans in the mix. Humans, after all, are characterised by their creativity: everything that exists in the modern world today was created by someone, whether it’s the shoes on your feet, the device you’re reading this article with, or the language you speak. They’re all human creations, inspired by original ideas rooted in the human brain, and humans, especially those who find AI can do their jobs for them, will have more time to sit and think and potentially come up with even better ideas than the ones we’ve had so far.

The post As AI improves, what does it mean for user-generated content? appeared first on AI News.
-
OpenAI has announced the release of fine-tuning capabilities for its GPT-4o model, a feature eagerly awaited by developers. To sweeten the deal, OpenAI is providing one million free training tokens per day for every organisation until 23rd September.

Tailoring GPT-4o using custom datasets can result in enhanced performance and reduced costs for specific applications. Fine-tuning enables granular control over the model’s responses, allowing for customisation of structure, tone, and even the ability to follow intricate, domain-specific instructions. Developers can achieve impressive results with training datasets comprising as little as a few dozen examples. This accessibility paves the way for improvements across various domains, from complex coding challenges to nuanced creative writing.

“This is just the start,” assures OpenAI, highlighting its commitment to continuously expand model customisation options for developers.

GPT-4o fine-tuning is available immediately to all developers across all paid usage tiers. Training costs are set at $25 per million tokens, with inference priced at $3.75 per million input tokens and $15 per million output tokens. OpenAI is also making GPT-4o mini fine-tuning accessible with two million free daily training tokens until 23rd September. To access this, select “gpt-4o-mini-2024-07-18” from the base model dropdown on the fine-tuning dashboard.

The company has collaborated with select partners to test and explore the potential of GPT-4o fine-tuning.

Cosine’s Genie, an AI-powered software engineering assistant, leverages a fine-tuned GPT-4o model to autonomously identify and resolve bugs, build features, and refactor code alongside human developers. By training on real-world software engineering examples, Genie has achieved a state-of-the-art score of 43.8% on the new SWE-bench Verified benchmark, marking the largest improvement ever recorded on this benchmark.

Distyl, an AI solutions provider, achieved first place on the BIRD-SQL benchmark after fine-tuning GPT-4o. This benchmark, widely regarded as the leading text-to-SQL test, saw Distyl’s model achieve an execution accuracy of 71.83%, demonstrating superior performance across demanding tasks such as query reformulation and SQL generation.

OpenAI reassures users that fine-tuned models remain entirely under their control, with complete ownership and privacy of all business data; no data is shared or used to train other models. Stringent safety measures have been implemented to prevent misuse of fine-tuned models, with continuous automated safety evaluations conducted alongside usage monitoring to ensure adherence to OpenAI’s usage policies.

See also: Primate Labs launches Geekbench AI benchmarking tool

The post OpenAI delivers GPT-4o fine-tuning appeared first on AI News.
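As a rough illustration of the workflow, fine-tuning via OpenAI’s Python SDK involves uploading a JSONL file of example conversations and starting a job against the GPT-4o base model. This is a minimal sketch: the file name and suffix are placeholders, and current model identifiers and pricing should be confirmed in OpenAI’s documentation.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload training data: one JSON chat example per line (JSONL),
#    each of the form {"messages": [...]}. File name is a placeholder.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start the fine-tuning job against the GPT-4o snapshot.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # GPT-4o base snapshot at time of writing
    suffix="my-experiment",     # placeholder; tags the resulting model name
)
print("Job started:", job.id)

# 3. Once the job succeeds, the tuned model is addressable by the name
#    reported in the job object (e.g. ft:gpt-4o-2024-08-06:org:my-experiment:...).
```

Each JSONL line should demonstrate the behaviour the tuned model is meant to imitate, which is how a few dozen well-chosen examples can meaningfully shift structure and tone.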
-
According to Carine Smith Ihenacho, chief governance and compliance officer of Norway’s $1.7 trillion sovereign wealth fund, boards need to be proficient with the use of AI and take control of its application in businesses to mitigate risks.

The Norges Bank Investment Fund, which holds considerable shares in almost 9,000 companies worldwide, accounting for 1.5% of all listed stocks, has become a trailblazer in environmental, social, and corporate governance issues. About a year ago, the fund also provided its portfolio companies with recommendations on integrating responsible AI to improve economic outcomes.

Several companies still have a lot of ground to cover. While stating that “Overall, a lot of competence building needs to be done at the board level,” Smith Ihenacho clarified that this does not mean every board should have an AI specialist. Instead, boards need to collectively understand how AI matters in their business and have policies in place. “They should know: ‘What’s our policy on AI? Are we high risk or low risk? Where does AI meet customers? Are we transparent around it?’ It’s a big-picture question they should be able to answer,” Smith Ihenacho added, highlighting the breadth of understanding required at the board level.

The fund has shared its perspective on AI with the boards of its 60 largest portfolio companies, as reported in its 2023 responsible investment report. It is particularly focused on AI use in the healthcare sector, due to its substantial impact on consumers, and is closely monitoring Big Tech companies that develop AI-based products. In its engagement with tech firms, the fund emphasises the importance of robust governance structures to manage AI-related risks. “We focus more on the governance structure,” Smith Ihenacho explained. “Is the board involved? Do you have a proper policy on AI?”

The fund’s emphasis on AI governance is particularly relevant given that nine of the ten largest positions in its equity holdings are tech companies, led by names such as Microsoft, Apple, Amazon, and Meta Platforms. Investments in these companies contributed to a 12.5% growth in the fund’s stock portfolio in the first half of 2024. The fund’s overall exposure to the tech sector increased from 21% to 26% over the past year, now comprising a quarter of the stock portfolio, underscoring the significant role that technology and AI play in the world today.

Though the fund favours AI innovation for its potential to boost efficiency and productivity, Smith Ihenacho has emphasised the importance of responsible use. She is quoted as saying, “It is fantastic what AI may be able to do to support innovation, efficiency, and productivity… we support that.” However, she also stressed the need to manage the associated risks responsibly.

The fund’s stance on AI governance aligns with rising global concerns about the ethical implications and potential dangers of these technologies. AI is increasingly utilised across various sectors, from finance to healthcare, and the need for governance frameworks has never been greater. The Norwegian sovereign wealth fund maintains a standard that requires companies to develop comprehensive AI policies at the board level, fostering the adoption of responsible AI practices across its large portfolio. This initiative by one of the world’s largest investors could have far-reaching implications for corporate governance practices globally.
As companies seek to harness the power of AI while navigating its complexities, the guidance provided by influential investors like Norges Bank Investment Fund may serve as a blueprint for responsible AI implementation and governance in the corporate world.

See also: X agrees to halt use of certain EU data for AI chatbot training

The post Use of AI for business governance must improve at the board level appeared first on AI News.
-
AI capabilities have exploded over the past two years, with large language models (LLMs) such as ChatGPT, Dall-E, and Midjourney becoming everyday tools. As you’re reading this article, generative AI programs are responding to emails, writing marketing copy, recording songs, and creating images from simple inputs.

What’s even more remarkable to witness is the rate at which both individuals and companies are embracing the AI ecosystem. A recent survey by McKinsey revealed that the number of companies that have adopted generative AI in at least one business function doubled within a year to 65%, up from 33% at the beginning of 2023.

However, like most technological advancements, this nascent area of innovation is not short of challenges. Training and running AI programs is a resource-intensive endeavour, and as things stand, big tech seems to have the upper hand, which creates the risk of AI centralisation.

The computational limitation in AI development

According to an article by the World Economic Forum, there is an accelerating demand for AI compute; the computational power required to sustain AI development is currently growing at an annual rate of between 26% and 36%. Another recent study by Epoch AI confirms this trajectory, with projections showing that it will soon cost billions of dollars to train or run AI programs. “The cost of the largest AI training runs is growing by a factor of two to three per year since 2016, and that puts billion-dollar price tags on the horizon by 2027, maybe sooner,” noted Epoch AI staff researcher Ben Cottier.

In my opinion, we’re already at this point. Microsoft invested $10 billion in OpenAI last year and, more recently, news emerged that the two entities are planning to build a data center that will host a supercomputer powered by millions of specialised chips. The cost? A whopping $100 billion, ten times more than the initial investment.

Microsoft is not the only big tech company on a spending spree to boost its AI computing resources. Other companies in the AI arms race, including Google, Alphabet, and Nvidia, are all directing significant funding to AI research and development. While we can agree that the outcome could match the amount of money being invested, it is hard to ignore the fact that AI development is currently a ‘big tech’ sport. Only these deep-pocketed companies have the ability to fund AI projects to the tune of tens or hundreds of billions. It begs the question: what can be done to avoid the same pitfalls that Web2 innovations are facing as a result of a handful of companies controlling innovation?

Stanford HAI’s Vice Director and Faculty Director of Research, James Landay, is one of the experts who has previously weighed in on this scenario. According to Landay, the rush for GPU resources and the prioritisation by big tech companies of using their AI computational power in-house will trigger demand for computing power, ultimately pushing stakeholders to develop cheaper hardware solutions.

In China, the government is already stepping up to support AI startups following the chip wars with the US that have limited Chinese companies from seamlessly accessing crucial chips. Local governments within China introduced subsidies earlier this year, pledging to offer computing vouchers for AI startups ranging between $140,000 and $280,000. This effort is aimed at reducing the costs associated with computing power.
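Cottier’s growth estimate above is easy to sanity-check: compounding a 2016-era training-run cost by a factor of two to three each year puts frontier runs at billion-dollar scale around 2027. The snippet below is a back-of-the-envelope illustration only, and the ~$1M base cost for a large 2016 training run is an assumption for the example, not an Epoch AI figure.

```python
# Back-of-the-envelope check of the claim that frontier training costs,
# growing 2-3x per year since 2016, hit billion-dollar scale by ~2027.
BASE_COST_2016 = 1e6  # assumption: ~$1M for a large 2016 training run

for growth in (2.0, 2.5, 3.0):
    for year in (2024, 2027):
        cost = BASE_COST_2016 * growth ** (year - 2016)
        print(f"{growth:.1f}x/year -> {year}: ${cost:,.0f}")
```

At 2x per year the 2027 figure lands at roughly $2 billion, while 3x overshoots dramatically, which is why the claim is framed as a range and a horizon rather than a point estimate.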
Decentralising AI computing costs

Looking at the current state of AI computing, one theme is constant: the industry is centralised, with big tech companies controlling the majority of the computing power as well as AI programs. The more things change, the more they remain the same.

On the brighter side, this time things might actually change for good, thanks to decentralised computing infrastructures such as the Qubic Layer 1 blockchain. This L1 blockchain uses an advanced mining mechanism dubbed useful Proof-of-Work (uPoW); unlike Bitcoin’s typical PoW, which uses energy for the sole purpose of securing the network, Qubic’s uPoW utilises its computational power for productive AI tasks such as training neural networks.

In simpler terms, Qubic is decentralising the sourcing of AI computational power by moving away from the current paradigm, where innovators are limited to the hardware they own or have rented from big tech. Instead, this L1 taps into its network of miners, which could run into the tens of thousands, to provide computational power. Although a bit more technical than leaving big tech to handle the backend side of things, a decentralised approach to sourcing AI computing power is more economical.

More importantly, it would only be fair if AI innovation were driven by more stakeholders, as opposed to the current state where the industry seems to rely on a few players. What happens if all of them go down? To make matters worse, these tech companies have proven untrustworthy with life-changing tech advancements. Today, most people are up in arms against data privacy violations, not to mention other affiliated issues such as societal manipulation. With decentralised AI innovations, it will be easier to check on developments while reducing the cost of entry.

Conclusion

AI innovations are just getting started, but the challenge of accessing computational power remains a headwind. To add to it, big tech currently controls most of the resources, which is a big challenge to the rate of innovation, not to mention the fact that these same companies could end up having more power over our data, the digital gold. However, with the advent of decentralised infrastructures, the entire AI ecosystem stands a better chance of reducing computational costs and eliminating big tech control over one of the most valuable technologies of the 21st century.

The post AI capabilities are growing faster than hardware: Can decentralisation close the gap? appeared first on AI News.
-
Primate Labs has officially launched Geekbench AI, a benchmarking tool designed specifically for machine learning and AI-centric workloads. The release of Geekbench AI 1.0 marks the culmination of years of development and collaboration with customers, partners, and the AI engineering community.

The benchmark, previously known as Geekbench ML during its preview phase, has been rebranded to align with industry terminology and ensure clarity about its purpose. Geekbench AI is now available for Windows, macOS, and Linux through the Primate Labs website, as well as on the Google Play Store and Apple App Store for mobile devices.

Primate Labs’ latest benchmarking tool aims to provide a standardised method for measuring and comparing AI capabilities across different platforms and architectures. The benchmark offers a unique approach by providing three overall scores, reflecting the complexity and heterogeneity of AI workloads.

“Measuring performance is, put simply, really hard,” explained Primate Labs. “That’s not because it’s hard to run an arbitrary test, but because it’s hard to determine which tests are the most important for the performance you want to measure – especially across different platforms, and particularly when everyone is doing things in subtly different ways.”

The three-score system accounts for the varied precision levels and hardware optimisations found in modern AI implementations. This multi-dimensional approach allows developers, hardware vendors, and enthusiasts to gain deeper insights into a device’s AI performance across different scenarios.

A notable addition to Geekbench AI is the inclusion of accuracy measurements for each test. This feature acknowledges that AI performance isn’t solely about speed but also about the quality of results. By combining speed and accuracy metrics, Geekbench AI provides a more holistic view of AI capabilities, helping users understand the trade-offs between performance and precision.

Geekbench AI 1.0 introduces support for a wide range of AI frameworks, including OpenVINO on Linux and Windows, and vendor-specific TensorFlow Lite delegates like Samsung ENN, ArmNN, and Qualcomm QNN on Android. This broad framework support ensures that the benchmark reflects the latest tools and methodologies used by AI developers.

The benchmark also utilises more extensive and diverse datasets, which not only enhance the accuracy evaluations but also better represent real-world AI use cases. All workloads in Geekbench AI 1.0 run for a minimum of one second, allowing devices to reach their maximum performance levels during testing while still reflecting the bursty nature of real-world applications.

Primate Labs has published detailed technical descriptions of the workloads and models used in Geekbench AI 1.0, emphasising its commitment to transparency and industry-standard testing methodologies. The benchmark is integrated with the Geekbench Browser, facilitating easy cross-platform comparisons and result sharing.

The company anticipates regular updates to Geekbench AI to keep pace with market changes and emerging AI features. However, Primate Labs believes that Geekbench AI has already reached a level of reliability that makes it suitable for integration into professional workflows, with major tech companies like Samsung and Nvidia already utilising the benchmark.

See also: xAI unveils Grok-2 to challenge the AI hierarchy
(Image Credit: Primate Labs)

See also: xAI unveils Grok-2 to challenge the AI hierarchy

The post Primate Labs launches Geekbench AI benchmarking tool appeared first on AI News.
-
Artificial intelligence is everywhere; if you're on our site, the chances are you've noticed! One of the places where we perhaps don't think so much about AI and its implications is when we're gaming. After all, for many of us this is a time to switch off from reality and properly relax. However, gaming is one of the industries where AI is having some of its most pronounced effects. We're going to take a look at a handful of the ways that AI is changing the way we game.

Artificial Intelligence is Changing NPCs

One of the most interesting ways that AI is being used is to train NPCs (non-playable characters) to behave in a more human way. This means that they can react to their environment: if it's raining, they might move to stand under a tree or pull out an umbrella. They can also react differently depending on a player's choices; ask them for money and they might behave rudely, whereas offer them a gift and they might be friendly and grateful. Beyond this, they're being trained in more complex behaviours like competition – think of the brothers in Red Dead Redemption II who are constantly at loggerheads with one another. All of these traits make interacting with NPCs a joy rather than a chore.

Artificial Intelligence in online casinos

Online casinos are often at the forefront of technological advances when it comes to giving their customers something new to try out. AI has been pivotal here, enabling online casinos to better understand what users want from their experience, as well as how to keep them safe.

For the latter, AI has been put to great use in detecting unusual behaviour on user accounts, which can be indicative of fraud. AI flags the behaviour to the fraud team, who can check it out and prevent anything bad from happening. This works so effectively because AI is much better than humans (not to mention faster) at spotting patterns. As soon as there's a blip in the usual pattern, AI can spot it, freeing up human time for other tasks.

To look at the other asset of AI, understanding what users want, we'll head to social casinos. Social casinos are just like the usual online casinos, but people can enjoy games without financial stakes. This is another field in which AI plays an important role, particularly in recommending games. Many social casinos have a dynamic homepage, where players are shown a selection of games curated for them using AI. This isn't the only way that AI is shaping the social casino landscape; in fact, you can check Oddschecker for in-depth analyses and comparisons of leading platforms in this space.

Artificial Intelligence Shapes Difficulty and Content

Are you getting better, or is AI dumbing it down for you? And how is AI performing in the world of console games? Some of the true blockbusters are employing AI to create a gaming experience that's even closer to life than we thought possible.

One of the ways that AI is being put to great use is in the realm of narratives. AI is creating personalised experiences by generating dynamic storylines and quests based on player actions, which means that one player won't receive the same storyline as another. This level of personalisation was sort of possible before, but it took thousands of man-hours to make everything work seamlessly.
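No studio's quest pipeline is public here, so the following is only a toy sketch of the idea: quest seeds chosen from a player's recorded actions, so two players with different histories see different stories. All names and templates are invented.

```python
# Toy dynamic-quest generator: the next quest branches on recorded player actions.
import random

random.seed(7)

QUEST_TEMPLATES = {
    "spared_rival":  ["Your rival returns seeking an alliance.",
                      "A stranger tests whether your mercy was weakness."],
    "robbed_town":   ["Bounty hunters track you into the hills.",
                      "The town posts a reward for your capture."],
    "helped_farmer": ["The farmer's family offers you shelter.",
                      "Word of your kindness earns a discount in town."],
}

def next_quest(player_log):
    """Seed the next quest from the player's most recent notable action."""
    for action in reversed(player_log):
        if action in QUEST_TEMPLATES:
            return random.choice(QUEST_TEMPLATES[action])
    return "A quiet day on the trail."  # fallback when nothing notable happened

print(next_quest(["helped_farmer", "robbed_town"]))  # reacts to the robbery
```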
Now, this level of customisation can even extend to adaptive difficulty, where AI adjusts the game's challenge in real time to match a player's skill level (a minimal sketch of the idea follows at the end of this piece). So, if you've found that a game is getting a bit easier, it might be worth double-checking that AI isn't helping you out a little!

AI excels in terms of visuals too. Technologies like AI upscaling improve textures by converting lower-resolution images to higher resolutions while reducing noise. Another fascinating piece of technology is ray tracing, which creates realistic lighting conditions, like light bouncing off puddles or raindrops shadowing a face. It's these intricate details that are making the hyperreal games of tomorrow come to life today.
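As promised, here is a minimal rule-based sketch of adaptive difficulty. Production systems weigh many more signals (deaths, time per checkpoint, accuracy, and so on); this version simply nudges a difficulty multiplier to steer a sliding-window win rate towards a target. All names are invented.

```python
# Minimal dynamic difficulty adjustment: nudge challenge towards a target win rate.
class DifficultyTuner:
    def __init__(self, target_win_rate=0.5, step=0.05, window=10):
        self.difficulty = 1.0          # 1.0 = baseline challenge
        self.target = target_win_rate
        self.step = step
        self.window = window
        self.history = []

    def record(self, player_won: bool) -> float:
        self.history = (self.history + [player_won])[-self.window:]
        win_rate = sum(self.history) / len(self.history)
        if win_rate > self.target:     # player cruising -> raise the challenge
            self.difficulty += self.step
        elif win_rate < self.target:   # player struggling -> ease off
            self.difficulty = max(0.1, self.difficulty - self.step)
        return self.difficulty

tuner = DifficultyTuner()
for outcome in [True, True, True, False, True]:
    print(round(tuner.record(outcome), 2))  # drifts upwards as the player keeps winning
```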
The post How Artificial Intelligence is Shaping the Way We Game appeared first on AI News.
-
Google has unveiled its new range of Pixel 9 smartphones, emphasising their enhanced AI capabilities. The company released the devices much earlier than usual, as Google typically presents new Pixel models in the autumn. Compared to previous versions, however, the changes in the new models are substantial.

The new smartphones feature more deeply integrated Google AI technology. For instance, one unique Pixel feature allows users to search for information and images within their screenshots. Additionally, through the Gemini chatbot, some features of these smartphones are available as overlays from other apps.

At Alphabet's Bay View campus in Mountain View, California, Rick Osterloh, Google's Senior Vice President of Devices and Services, told attendees that the company plans to focus on practical applications of AI. He spent time describing the "Gemini era" to the audience, which will commence with Google's advanced AI models.

The Pixel 9 series offers several models. The base model, the Pixel 9, features a 6.3-inch screen and costs $799. A larger alternative, the Pixel 9 Pro XL, has a 6.8-inch screen. A slightly enhanced version, the Pixel 9 Pro, offers a better camera system, though at a higher price. The final option is the foldable Pixel 9 Pro Fold.

Regarding shipping dates, Google stated at the event that the Pixel 9 and Pixel 9 Pro XL would ship in late August. The Pro and Pro Fold models will ship in September, with all models available for preorder from August 13.

During Google's presentations at the event, Gemini's new functions were showcased in a live demo focusing on the latest conversation features. The company also announced updates to the devices' exterior design, the installation of Google's advanced camera system, and the integration of the new Tensor G4 chip.

In addition to smartphones, the company unveiled new versions of the Pixel Watch 3 smartwatch and Pixel Buds Pro 2 wireless earbuds. The watch can track the user's heart rate and, if the wearer's pulse stops, call emergency services; this feature will be available in the UK and the EU.

As reported by IDC, Google's share of the global smartphone market was less than 1% in the second quarter of 2024. Samsung and Apple took first and second places, with market shares of 18.9% and 15.8%, respectively. In the US, Google ranks fourth among smartphone operating systems, holding 4.5% of the market.

Industry analysts note that although the Google Pixel is not among the best-selling smartphones, it showcases some of the benefits of the Android operating system. Android has become the dominant operating system, used by more than 80% of smartphone users worldwide. Consequently, many people who have never used a Google Pixel may indirectly experience and appreciate the features that Google products offer.

The event also touched upon Google's wider intentions and previous efforts to implement AI across its product lineup to stay at the top of the game. Not long ago, the company integrated AI improvements into its core products, including its search engine. Google also announced a content-sharing agreement with Peloton, under which Fitbit Premium subscribers will have free access to the Peloton training class library.

(Image Credit: Google)

See also: Google's Gemini 1.5 Pro dethrones GPT-4o
The post Google advances mobile AI in Pixel 9 smartphones appeared first on AI News.
-
While the industry acknowledges the need for robust security measures, research from PSA Certified suggests that investment and best practices are struggling to keep pace with AI's rapid growth.

The survey of 1,260 global technology decision-makers revealed that two-thirds (68%) are concerned that the speed of AI advancement is outstripping the industry's ability to safeguard products, devices, and services. This apprehension is driving a surge in edge computing adoption, with 85% believing that security concerns will push more AI use cases to the edge.

Edge computing – which processes data locally on devices instead of relying on centralised cloud systems – offers inherent advantages in efficiency, security, and privacy. However, this shift to the edge necessitates a heightened focus on device security.

"There is an important interconnect between AI and security: one doesn't scale without the other," cautions David Maidment, Senior Director, Market Strategy at Arm (a PSA Certified co-founder). "While AI is a huge opportunity, its proliferation also offers that same opportunity to bad actors."

Despite recognising security as paramount, a significant disconnect exists between awareness and action. Only half (50%) of those surveyed believe their current security investments are sufficient. Furthermore, essential security practices, such as independent certifications and threat modelling, are being neglected by a substantial portion of respondents.

"It's more imperative than ever that those in the connected device ecosystem don't skip best practice security in the hunt for AI features," emphasises Maidment. "The entire value chain needs to take collective responsibility and ensure that consumer trust in AI driven services is maintained."

The report highlights the need for a holistic approach to security, embedded throughout the entire AI lifecycle, from device deployment to the management of AI models operating at the edge. This proactive approach, incorporating security-by-design principles, is deemed essential to building consumer trust and mitigating the escalating security risks.

Despite the concerns, a sense of optimism prevails within the industry. A majority (67%) of decision-makers believe their organisations are equipped to handle the potential security risks associated with AI's surge, and there is a growing recognition of the need to prioritise security investment: 46% are focused on bolstering security, compared with 39% prioritising AI readiness.

"Those looking to unleash the full potential of AI must ensure they are taking the right steps to mitigate potential security risks," says Maidment. "As stakeholders in the connected device ecosystem rapidly embrace a new set of AI-enabled use cases, it's crucial that they do not simply forge ahead with AI regardless of security implications."

(Photo by Braden Collum)

See also: The AI revolution: Reshaping data centres and the digital landscape

The post PSA Certified: AI growth outpacing security measures appeared first on AI News.
-
Artificial intelligence is changing the world and is projected to have a global market value of between $2 trillion and $4 trillion by 2030. The future is now, and it feels as though we're witnessing a big bang in technology every couple of months. AI has crept into every facet of our lives, fundamentally transforming how we work and play. Data centres are at the heart of all this excitement.

Put simply, AI is the use of computer systems to simulate human intelligence processes. This includes learning, reasoning, and – particularly interestingly – self-correction. In other words, it's like having a human brain in a computer. Bill Gates has compared its rise to the start of some of the most important technological advances in history.

The surge of AI is staggering. For instance, ChatGPT reached a million users in just five days; for Netflix, that milestone took a few years. The enthusiasm to adopt the technology is evident from such explosive growth. However, AI has a voracious appetite for data, and the computational power required to process that data is enormous, and only set to increase further. That's where data centre infrastructure comes in.

Data centres are the backbone of the digital world: no longer simply storage spaces, they are rapidly evolving into entire ecosystems. These ecosystems are energy-hungry, requiring rapid processing power for energy-intensive workloads and efficient delivery of data worldwide. Data centres house rows of servers, storage systems, and complex networks that facilitate the flow of information. Such facilities are essential to all manner of workloads, from search queries to financial transactions and digital interactions, usually completing their tasks silently.

As AI's demands and capabilities advance, it is crucial to ensure their compatibility with data centre infrastructure. Every computation involved in processing data is key to AI, and the efficiency of these processes depends on three primary types of processor: the Graphics Processing Unit (GPU), the Central Processing Unit (CPU), and the Tensor Processing Unit (TPU). The GPU excels at massive parallelism, making it well suited to training AI models. The CPU offers more flexibility for handling varied simultaneous tasks at scale. The TPU, Google's own development in this sphere, is purpose-built to push through the highest possible number of AI operations in the shortest time (a minimal device-selection sketch appears at the end of this section).

Integrating AI into data centres presents several challenges:

Power: AI training processes require high-performance computing infrastructure, necessitating reliable and sufficient power supply systems.
Connectivity: Seamless, high-speed, low-latency network connectivity is crucial for efficient data transfer and communication.
Cooling: AI workloads generate significant heat, requiring advanced cooling systems to maintain optimal operating temperatures.

AI is ever-emerging and ever-evolving, and regulation must change with it. For example, the EU's recently adopted AI Act categorises applications of AI into four levels of risk: unacceptable, high, limited, and minimal or no risk. At the same time, the NIS2 Directive has expanded cybersecurity regulation across the digital realm. One of the main challenges facing industries, including data centres, will be keeping up to date with these regulations.
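As a concrete, if simplified, illustration of the processor choices above, here is the device-selection sketch mentioned earlier: a training job that prefers a GPU and falls back to the CPU. TPUs are only flagged in a comment, since reaching them typically requires extra tooling such as torch_xla or JAX; this is a generic PyTorch pattern, not any particular data centre's setup.

```python
# Route a toy workload to the best available processor.
import torch

def pick_device() -> torch.device:
    """Prefer a GPU for parallel training work; fall back to the CPU.
    (TPUs need separate tooling, e.g. torch_xla or JAX, and are omitted here.)"""
    if torch.cuda.is_available():
        return torch.device("cuda")  # GPU: massively parallel, suits training
    return torch.device("cpu")       # CPU: flexible general-purpose fallback

device = pick_device()
model = torch.nn.Linear(128, 10).to(device)   # toy model placed on the chosen chip
batch = torch.randn(32, 128, device=device)
print(device, model(batch).shape)
```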
AI is progressing faster and further than anything we have seen in recent years, and data centres must move just as quickly to keep up with the changing parameters and risk boundaries now being defined.

To sum up, the AI revolution is reshaping how our digital infrastructure works, with the data centre among the first things to be transformed. The transformation cuts both ways: as we discover new ways of applying AI, data centres will need everything from technological advances to regulatory compliance to keep pace. The story of AI and the data centre is thus one of continuous development, each shaping the other.

Interested in learning more?

Data Centres Expo Europe | Data Centres Event & Conference

Register for free to attend the upcoming Data Centres Expo event and conference, which will shine a spotlight on the future outlook for the sector as demand for data centre space increases. Gain valuable insights from industry leaders and network with experts from the largest data centre providers. Examine key topics such as building AI-ready data centre infrastructure, building scalability and sustainability into data centres, and cultivating the right data centre hardware solutions. Learn more and register for free here.

The post The AI revolution: Reshaping data centres and the digital landscape appeared first on AI News.
-
xAI has announced the release of Grok-2, a major upgrade boasting improved capabilities in chat, coding, and reasoning. Alongside Grok-2, xAI has introduced Grok-2 mini, a smaller but capable version of the main model. Both are currently in beta on X and will be made available through xAI's enterprise API later this month.

An early version of Grok-2 was tested on the LMSYS leaderboard under the pseudonym "sus-column-r". At the time of the announcement, xAI claimed it was outperforming both Anthropic's Claude 3.5 Sonnet and OpenAI's GPT-4-Turbo. It's worth noting, however, that GPT-4o currently holds the top spot as the best AI assistant in terms of overall capabilities, followed by Google's Gemini 1.5.

xAI's focus remains on advancing core reasoning capabilities with its new compute cluster as it aims to stay at the forefront of AI development. However, the company recently agreed to halt the use of certain EU data for training its models.

While the release of Grok-2 marks a significant milestone for xAI, it's clear that the AI landscape remains highly competitive. With GPT-4o and Google's Gemini 1.5 leading the pack, and other major players like Anthropic continuing to make advances, the race for AI supremacy is far from over.

See also: SingularityNET bets on supercomputer network to deliver AGI

The post xAI unveils Grok-2 to challenge the AI hierarchy appeared first on AI News.
-
Recently, the European Union became centre stage for a data privacy controversy involving the social media platform X. On August 8, an Irish court declared that X had agreed to suspend its use of all data belonging to European Union citizens that had been gathered via the platform to train the company's AI systems.

As reported by The Economic Times, the initiative was prompted by complaints from Ireland's Data Protection Commission (DPC), the lead EU regulator for many large US tech companies that have their main offices in Ireland under EU law.

The DPC's intervention comes amid intensified scrutiny of tech giants' AI development practices across the EU. The regulator recently sought an order to restrain or suspend X's processing of user data for the development, training, and refinement of an AI system. The situation illustrates the growing tension, felt across nearly all EU states, between AI advances and ongoing data protection concerns.

It appears, however, that the order came late. In its response to the lawsuit, X, owned by Elon Musk, noted that users could opt out of having their public posts used by Grok, its AI chatbot. Yet, as Judge Leonie Reynolds observed, X began processing European users' data for AI training on May 7, while the opt-out option was not introduced until July 16 and, even then, was not immediately made available to all users. There was therefore a period during which the data was used without users' consent.

X's legal representation has assured the court that data obtained from EU users between May 7 and August 1 will not be used while the DPC's order is under consideration. X is expected to file opposition papers arguing against the suspension order by September 4, setting in motion what could become a court battle with effects reverberating throughout the EU.

Either way, X has not remained silent on the matter. In a statement, the company's Global Government Affairs account on X said the DPC's order was "unwarranted, overbroad, and singles out X without any justification." The company also expressed concern that the order would undermine efforts to keep the platform safe and restrict its use of technologies in the EU. This highlights the delicate balance between regulatory compliance and operational viability that tech companies must strike in the current digital landscape.

The platform emphasised its proactive approach in working with regulators, including the DPC, regarding Grok since late 2023. X claims to have been fully transparent about the use of public data for AI models, including providing the necessary legal assessments and engaging in lengthy discussions with regulators.

This regulatory action against X is not an isolated incident. Other tech giants have faced similar scrutiny in recent months: Meta Platforms recently postponed the launch of its Meta AI models in Europe following advice from the Irish DPC, and Google agreed earlier this year to delay and modify its Gemini AI chatbot after consultations with the Irish regulator.

These developments collectively signal a shift in the EU's regulatory landscape for AI and data usage. Regulators are taking a more active role in overseeing how tech companies use personal data for AI training and development, reflecting growing concerns about data privacy and the ethical implications of AI advancement.
As the legal proceedings unfold, the outcome of this case could set important precedents for how AI development is regulated in the EU, potentially influencing global standards for data protection in the AI era. The tech industry and privacy advocates alike will be watching closely, recognising the case's potential to shape the future of AI innovation and data privacy regulation.

(Photo by Alexander Shatov)

See also: Balancing innovation and trust: Experts assess the EU's AI Act

The post X agrees to halt use of certain EU data for AI chatbot training appeared first on AI News.
-
SingularityNET is betting on a network of powerful supercomputers to get us to Artificial General Intelligence (AGI), with the first machine set to whir into action this September.

While today's AI excels in specific areas – think GPT-4 composing poetry or DeepMind's AlphaFold predicting protein structures – it's still miles away from genuine human-like intelligence.

"While the novel neural-symbolic AI approaches developed by the SingularityNET AI team decrease the need for data, processing and energy somewhat relative to standard deep neural nets, we still need significant supercomputing facilities," SingularityNET CEO Ben Goertzel explained to LiveScience in a recent written statement.

Enter SingularityNET's ambitious plan: a "multi-level cognitive computing network" designed to host and train the incredibly complex AI architectures required for AGI. Imagine deep neural networks that mimic the human brain, vast language models (LLMs) trained on colossal datasets, and systems that seamlessly weave together human behaviours like speech and movement with multimedia outputs.

But this level of sophistication doesn't come cheap. The first supercomputer, slated for completion by early 2025, will be a Frankensteinian mix of cutting-edge hardware: Nvidia GPUs, AMD processors, Tenstorrent server racks – you name it, it's in there.

This, Goertzel believes, is more than just a technological leap; it's a philosophical one: "Before our eyes, a paradigmatic shift is taking place towards continuous learning, seamless generalisation, and reflexive AI self-modification."

To manage this distributed network and its precious data, SingularityNET has developed OpenCog Hyperon, an open-source software framework designed specifically for AI systems. Think of it as a conductor trying to make sense of a symphony played across multiple concert halls.

But SingularityNET isn't keeping all this brainpower to itself. In a system reminiscent of arcade tokens, users will purchase access to the supercomputer network with the AGIX token on blockchains like Ethereum and Cardano, and contribute data to the collective pool, fuelling further AGI development.

With experts like DeepMind's Shane Legg predicting human-level AI by 2028, the race is on. Only time will tell whether this global network of silicon brains will birth the next great leap in artificial intelligence.

(Photo by Anshita Nair)

See also: The merging of AI and blockchain was inevitable – but what will it mean?

The post SingularityNET bets on supercomputer network to deliver AGI appeared first on AI News.
-
In the fast-paced world of technology, missing the next big thing can be costly. For Intel, the semiconductor titan that once reigned supreme in the computer age, one such moment came and went quietly between 2017 and 2018. It was then that the company declined an opportunity that, in hindsight, looks like a golden ticket to the AI revolution.

Recent reports reveal that Intel had the chance to acquire a 15% stake in OpenAI for $1 billion, with the potential for an additional 15% stake in exchange for producing hardware at cost. At the time, OpenAI was a fledgling non-profit focused on the then-obscure field of generative AI. Under CEO Bob Swan's leadership, Intel ultimately passed on the deal, unconvinced that generative AI would yield near-term returns.

Fast forward to 2024, and the consequences of that decision are stark. OpenAI, now valued at around $80 billion, has become a driving force behind the AI revolution with its ChatGPT platform. Meanwhile, Intel is playing catch-up in the AI chip market, dwarfed by rival Nvidia's $2.6 trillion market cap and struggling to maintain relevance in an industry it once dominated.

The missed opportunity is not an isolated incident for Intel. The company also declined to produce processors for Apple's iPhone, closing the door on its entry into the mobile computing era. These missteps paint a picture of a once-innovative giant that has lost its ability to foresee and capitalise on transformative technologies.

Intel's journey from industry leader to AI laggard is reflected in its recent financial performance. The company's market value has dipped below $100 billion for the first time in 30 years, and it recently announced plans to cut over 15% of its workforce following disappointing earnings. While Intel aims to launch its third-generation Gaudi AI chip later this year, it remains to be seen whether this will be enough to regain ground in the fiercely competitive AI hardware market.

The OpenAI episode underscores a broader challenge facing established tech giants: balancing short-term financial considerations with long-term strategic investments in emerging technologies. Intel's decision to prioritise immediate returns over the potential of generative AI reflects a cautious approach that may have cost it dearly in the long run.

As AI continues to reshape industries and create new markets, the ability to identify and invest in groundbreaking technologies early will be crucial for tech companies hoping to maintain their competitive edge. Intel's missed opportunity is a cautionary tale for corporate leaders navigating the uncertain waters of technological innovation.

Looking ahead, Intel faces an uphill battle to re-establish itself as a leader in the AI chip market. The company's plans to launch new AI-focused processors for PCs and servers in 2025 signal a renewed commitment to this space, but it is an open question whether these efforts will close the gap with rivals who seized the AI opportunity early. Intel's story reminds us that in the tech industry, today's giants can quickly become tomorrow's laggards if they fail to embrace transformative technologies.
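Using only the figures quoted above, and ignoring dilution from later funding rounds and the value of the hardware-at-cost terms, the back-of-the-envelope arithmetic is brutal:

```python
# Rough value today of the stake Intel passed on, from the figures above.
openai_valuation = 80e9        # ~$80 billion (2024 valuation cited in this article)
combined_stake = 0.15 + 0.15   # 15% for $1B, plus 15% for hardware at cost
outlay = 1e9

value_today = combined_stake * openai_valuation
print(f"~${value_today / 1e9:.0f}B stake for a ~${outlay / 1e9:.0f}B outlay")
# -> ~$24B stake for a ~$1B outlay
```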
As we stand on the cusp of the AI revolution, the question remains: will Intel find a way to reinvent itself once again, or will it be left behind in the wake of the very future it once helped to build?

(Photo by Brecht Corbeel)

See also: OpenAI co-founder Ilya Sutskever's new startup aims for 'safe superintelligence'

The post Intel's AI fumble: How the chip giant missed a big opportunity appeared first on AI News.
-
Palantir, a data analytics company known for its work in the defence and intelligence sectors, has announced a significant partnership with Microsoft. The collaboration aims to deliver advanced services for the classified networks used by US defence and intelligence agencies.

According to the announcement, Palantir is integrating Microsoft's cutting-edge large language models, via the Azure OpenAI Service, into its AI platforms. The integration will occur within Microsoft's government and classified cloud environments. As the first collaboration of its kind, this configuration has the potential to transform the use of AI in critical national security missions.

Palantir, whose name draws inspiration from the potentially misleading "seeing-stones" in J.R.R. Tolkien's fictional works, specialises in processing and analysing vast quantities of data to assist governments and corporations with surveillance and decision-making tasks. While the precise nature of the services to be offered through the partnership remains somewhat ambiguous, it is clear that Palantir's products will be integrated into Microsoft's Azure cloud services. The development follows Azure's earlier incorporation of OpenAI's GPT-4 technology into a "top secret" version of its software.

The company's journey is notable. Co-founded by Peter Thiel and initially funded by In-Q-Tel, the CIA's venture capital arm, Palantir has grown to serve a diverse clientele. Its roster includes government agencies such as Immigration and Customs Enforcement (ICE) and various police departments, as well as private-sector giants like the pharmaceutical company Sanofi. Palantir has also become deeply involved in supporting Ukraine's war efforts, with reports suggesting its software may be used in targeting decisions for military operations.

Although Palantir has operated with a large customer base for years, it recorded its first annual profit only in 2023. With the current surge of interest in AI, however, the company has been able to grow rapidly, particularly in the commercial sector. According to Bloomberg, Palantir's CEO, Alex Karp, warned that Palantir's "commercial business is exploding in a way we don't know how to handle."

Notably, the company's annual filing states that it neither does business with, nor on behalf of, the Chinese Communist Party, nor does it plan to do so. This indicates that Palantir is careful in developing its customer base, given the geopolitical implications of its work.

The announcement of the partnership has been well received by investors, with Palantir's share price surging more than 75 per cent in 2024 at the time of writing. The dramatic increase reflects the market's optimism about the potential of AI in national security applications, and Palantir's position at the forefront of this field.

Still, the partnership between Palantir and Microsoft raises significant questions about the role of AI in national security and surveillance. That is no surprise: these are particularly sensitive areas, and new technologies could transform them permanently. More discussion and investigation will be needed to understand the ethical implications of deploying these tools.

All things considered, the Palantir and Microsoft partnership is a significant event that will likely shape the future use of AI technologies and cloud computing in areas such as intelligence and defence.
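The specifics of Palantir's integration are, unsurprisingly, not public. For orientation only, the building block being integrated, an Azure OpenAI Service chat call, looks roughly like the sketch below; the endpoint, key, and deployment name are placeholders, and classified environments add layers this snippet does not attempt to show.

```python
# Bare-bones Azure OpenAI chat call (placeholders throughout).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    api_key="YOUR_KEY",                                          # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # your deployment name, not the model name
    messages=[{"role": "user", "content": "Summarise this logistics report: ..."}],
)
print(response.choices[0].message.content)
```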
(Photo by Katie Moum)

See also: Paige and Microsoft unveil next-gen AI models for cancer diagnosis

The post Palantir and Microsoft partner to provide federal AI services appeared first on AI News.
-
Alibaba Cloud's Qwen team has unveiled Qwen2-Math, a series of large language models specifically designed to tackle complex mathematical problems. The new models – built upon the existing Qwen2 foundation – demonstrate remarkable proficiency in solving arithmetic and mathematical challenges, outperforming the former industry leaders.

The Qwen team crafted Qwen2-Math using a vast and diverse mathematics-specific corpus comprising high-quality resources such as web texts, books, code, exam questions, and synthetic data generated by Qwen2 itself.

Rigorous evaluation on both English and Chinese mathematical benchmarks – including GSM8K, MATH, MMLU-STEM, CMATH, and GaoKao Math – revealed the exceptional capabilities of Qwen2-Math. Notably, the flagship model, Qwen2-Math-72B-Instruct, surpassed proprietary models such as GPT-4o and Claude 3.5 in various mathematical tasks.

"Qwen2-Math-Instruct achieves the best performance among models of the same size, with RM@8 outperforming Maj@8, particularly in the 1.5B and 7B models," the Qwen team noted. This superior performance is attributed to the effective implementation of a math-specific reward model during development.

Further showcasing its prowess, Qwen2-Math demonstrated impressive results in challenging mathematical competitions such as the American Invitational Mathematics Examination (AIME) 2024 and the American Mathematics Contest (AMC) 2023.

To ensure the model's integrity and prevent contamination, the Qwen team implemented robust decontamination methods during both the pre-training and post-training phases. This rigorous approach involved removing duplicate samples and identifying overlaps with test sets to maintain the model's accuracy and reliability.

Looking ahead, the Qwen team plans to expand Qwen2-Math beyond English, with bilingual and multilingual models in the pipeline. This commitment to inclusivity aims to make advanced mathematical problem-solving accessible to a global audience. "We will continue to enhance our models' ability to solve complex and challenging mathematical problems," affirmed the Qwen team.

You can find the Qwen2-Math models on Hugging Face.
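For readers who want to try the open-weight models, a minimal transformers sketch follows. The repo id is assumed from the Qwen organisation's published naming; pick a model size to match your hardware.

```python
# Minimal sketch: querying a Qwen2-Math instruct model with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-Math-7B-Instruct"  # assumed repo id; check Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful maths assistant."},
    {"role": "user", "content": "Solve for x: 4x + 5 = 6x - 7"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```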
See also: Paige and Microsoft unveil next-gen AI models for cancer diagnosis

The post Qwen2-Math: A new era for AI maths whizzes appeared first on AI News.
-
Paige and Microsoft have unveiled the next big breakthrough in clinical AI for cancer diagnosis and treatment: Virchow2 and Virchow2G, enhanced versions of Paige's AI models for cancer pathology.

The models were trained on an enormous dataset that Paige has accumulated: more than three million pathology slides from over 800 labs across 45 countries. The data was obtained from over 225,000 patients, all de-identified, creating a rich and representative dataset that spans genders, races, ethnic groups, and regions across the globe.

What makes these models truly remarkable is their scope. They cover over 40 different tissue types and various staining methods, making them applicable to a wide range of cancer diagnoses. Virchow2G, with its 1.8 billion parameters, stands as the largest pathology model ever created and sets new standards in AI training, scale, and performance.

As Dr Thomas Fuchs, founder and chief scientist of Paige, comments: "We're just beginning to tap into what these foundation models can achieve in revolutionising our understanding of cancer through computational pathology." He believes these models will significantly improve the future for pathologists, and that the technology is an important step in the progression of diagnostics, targeted medications, and customised patient care.

Similarly, Razik Yousfi, Paige's senior vice president of technology, states that these models are not only making precision medicine a reality but are also improving the accuracy and efficiency of cancer diagnosis, pushing the boundaries of what's possible in pathology and patient care.

So how is this relevant to cancer diagnosis today? Paige has developed a clinical AI application that pathologists can use to recognise cancer in over 40 tissue types, allowing suspect regions to be identified more quickly and accurately. In other words, the diagnostic process becomes more efficient and less prone to error, even for rare cancers, with the help of this tool.

Beyond diagnosis, Paige has created AI modules that can benefit life sciences and pharmaceutical companies. These tools can aid therapeutic targeting, biomarker identification, and clinical trial design, potentially leading to more successful trials and faster development of new therapies.

The good news for researchers is that Virchow2 is available on Hugging Face for non-commercial research, while the full suite of AI modules is now available for commercial use. This accessibility could accelerate advances in cancer research and treatment across the scientific community.

In summary, the newly introduced AI models represent a major advance in the fight against cancer. Paige and Microsoft have chosen the right path by combining the power of data with state-of-the-art AI technology, creating new opportunities for more accurate cancer prediction and paving the way for tailored solutions and innovative research in oncology.
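For researchers who want to experiment, the sketch below shows the generic pattern for loading a Hugging Face-hosted pathology backbone with timm and embedding a single slide tile. The repo id is assumed from the announcement, access requires accepting the model's terms, and the constructor may need extra card-specific arguments, so treat this as orientation rather than a recipe.

```python
# Hedged sketch: embed one slide tile with a Hugging Face-hosted backbone.
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

# Assumed repo id; the model card may require extra constructor arguments.
model = timm.create_model("hf-hub:paige-ai/Virchow2", pretrained=True)
model.eval()

config = resolve_data_config({}, model=model)
transform = create_transform(**config)

tile = Image.open("tile.png").convert("RGB")  # one small region of a slide
with torch.no_grad():
    features = model(transform(tile).unsqueeze(0))
print(features.shape)  # embedding consumed by downstream classifiers
```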
(Photo by National Cancer Institute)

See also: The hidden climate cost of AI: How tech giants are struggling to go green

The post Paige and Microsoft unveil next-gen AI models for cancer diagnosis appeared first on AI News.
-
At first glance, AI and blockchain seem like completely disparate realms. Blockchain emphasises decentralisation but suffers from constrained memory and throughput; AI thrives on massive datasets and demands high-performance computing. Machine learning (ML) models – especially deep learning networks – require enormous amounts of data to train effectively, often relying on powerful GPUs or specialised hardware to process that information quickly.

On this point, a report from the International Energy Agency (IEA) projects that global electricity demand from AI will rise to 800 TWh by 2026, a nearly 75% increase from 460 TWh in 2022. Similar projections have been released by multinational giants such as Morgan Stanley and Wells Fargo, with the latter's model suggesting that, by 2030, AI-centric energy consumption will account for 16% of the USA's current electricity demand.

[Figure: Morgan Stanley's AI power consumption prediction (best-case scenario)]

The best of both worlds is here

Despite their apparent differences, the tech world is witnessing a growing convergence between AI and blockchain, with a number of innovative projects emerging. For instance, Ocean is a protocol that provides users with a decentralised data exchange, unlocking datasets for AI consumption while preserving their privacy and security. Similarly, ThoughtAI embeds AI and blockchain directly into data and information, effectively eliminating traditional application layers; it aims to create more responsive and adaptive AI solutions, potentially revolutionising how people interact with the technology and manage information.

While these projects demonstrate the potential of combining AI and blockchain, they also highlight a critical challenge: scalability. For AI on blockchain to truly flourish, platforms need to overcome the inherent limitations of traditional blockchain architectures, particularly in terms of data availability and throughput.

In this regard, 0G is a platform that has made significant strides in addressing these bottlenecks. ZeroGravity (0G for short) is the world's first data availability system with a built-in, general-purpose storage layer that is both highly scalable and decentralised. Its scalability hinges on separating the data availability workflow into a data publishing lane and a data storage lane.

To put it technically, 0G is a scalable Data Availability (DA) service layer built directly on top of a decentralised storage system. It addresses the scalability issue by minimising the data transfer volume required for broadcast, allowing for unprecedented levels of data availability and transaction throughput.

One of the key advantages of 0G is its performance. While competitors like Celestia achieve about 1.4 to 1.5 megabytes per second, the 0G network is capable of around 50 gigabytes per second: tens of thousands of times faster (a quick check of that arithmetic follows below). Additionally, 0G's cost is approximately 100 times lower than that of its closest competitors.

This level of performance and flexibility opens the door to a wide array of AI/blockchain use cases that were previously impractical or impossible. For starters, in the realm of finance, 0G's scalability could allow sophisticated AI-powered trading algorithms to operate directly on-chain.
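Taking the quoted throughput figures at face value, the speedup works out to tens of thousands of times, a touch under the 50,000 sometimes cited; a two-line check:

```python
# Sanity-check the speedup implied by the throughput figures quoted above.
zero_g = 50 * 1024**3        # ~50 GiB/s, in bytes per second
celestia = 1.45 * 1024**2    # midpoint of the quoted 1.4-1.5 MiB/s

print(f"~{zero_g / celestia:,.0f}x")  # ~35,310x
```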
Similarly, it could become possible to implement large-scale federated learning systems on the blockchain, leading to breakthroughs in privacy-preserving AI, where multiple parties collaboratively train AI models without directly sharing sensitive data (a minimal sketch follows at the end of this piece). Such advances could have far-reaching implications in fields like healthcare, where data privacy is paramount but collaborative research is essential.

A trillion-dollar opportunity is waiting to be tapped

As we look to the future, it's clear that the intersection of AI and blockchain will continue to expand and evolve. This convergence is not just a technological curiosity but a massive economic opportunity: the AI industry is projected to be worth a staggering $1.3 trillion by 2030, while the blockchain market is set to reach a valuation of $248.8 billion by 2029, reflecting their transformative potential across virtually every sector of the global economy.

Moving forward, it stands to reason that the companies and platforms (such as 0G) that successfully navigate this convergence, solving the technical challenges while unlocking new value propositions, will be well positioned to capture a significant share of this trillion-dollar opportunity.
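To make the federated learning point concrete, here is a minimal federated-averaging (FedAvg) round, independent of any blockchain: each party trains on its own private data and only model weights are shared and averaged. On-chain coordination would add incentives and verification on top; none of that is shown here.

```python
# Minimal FedAvg sketch: parties share model weights, never raw data.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, steps=5):
    """A few local gradient steps on one party's private data."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Three parties, each with a private dataset that never leaves their premises.
true_w = np.array([0.5, -1.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    parties.append((X, X @ true_w + rng.normal(scale=0.05, size=40)))

global_w = np.zeros(2)
for _ in range(10):  # one federated round per iteration
    updates = [local_update(global_w, X, y) for X, y in parties]
    global_w = np.mean(updates, axis=0)  # the only artefact exchanged
print(np.round(global_w, 3))  # converges towards [0.5, -1.0]
```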
The post The merging of AI and blockchain was inevitable – but what will it mean? appeared first on AI News.
-
Lumen Technologies, a leading telecommunications firm, has announced significant new contracts totalling $5 billion with cloud and tech companies for its networking and cybersecurity solutions. The surge in demand comes as businesses across various sectors rapidly adopt AI-driven technologies.

Among the notable agreements is a deal with Microsoft, which revealed last month its plans to use Lumen's network equipment to expand capacity for AI workloads. Lumen, known for providing secure digital connections for data centres, also disclosed that it is in active discussions with customers about additional sales opportunities valued at approximately $7 billion.

The widespread adoption of AI has prompted enterprises across multiple industries to invest heavily in infrastructure capable of supporting AI-powered applications. Lumen reports that major corporations are urgently seeking to secure high-capacity fibre, a resource that is becoming increasingly valuable, and potentially scarce, as AI requirements grow.

Lumen's CEO, Kate Johnson, is optimistic about further success: "Our partners are turning to us because of our AI-ready infrastructure and expansive network. This is just the beginning of a significant opportunity for Lumen, one that will lead to one of the largest expansions of the internet ever."

Further evidence of the company's strategic positioning in a rapidly changing market is the creation of a new division, Custom Networks, which will manage the Lumen Private Connectivity Fabric solutions portfolio. With demand rising from organisations seeking solutions tailored to their specific environments, a dedicated networks division is a logical step.

All of this underlines the crucial role telecommunications infrastructure plays in the current AI revolution: as more firms embed AI in their operations, plentiful, secure, expansive networks become essential. Lumen's success in securing these substantial contracts underscores the company's strong market position and its ability to meet the evolving needs of tech giants and cloud service providers.

As the AI landscape continues to evolve, Lumen appears well positioned to capitalise on the increasing demand for advanced networking solutions. The telecommunications sector, and Lumen in particular, is likely to remain at the forefront of enabling AI advances across industries. As the trend progresses, it will be interesting to observe how Lumen and its competitors adapt to the challenges and opportunities presented by this technological shift.

(Photo by Vladimir Solomianyi)

See also: UK backs smaller AI projects while scrapping major investments

The post AI expansion drives $5B in deals for Lumen appeared first on AI News.