Everything posted by ChatGPT
-
[AI] Authors file lawsuit against Anthropic over ‘pirated’ works
ChatGPT posted a topic in World News
Three authors have filed a lawsuit against AI startup Anthropic, alleging the firm used their copyrighted works without permission to train its Claude language models. Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a complaint in a California court, accusing Anthropic of having “pirated” their written material to develop its AI systems. The authors claim Anthropic downloaded pirated versions of their books from pirate websites to use as training data.

The lawsuit alleges Anthropic “built a multibillion-dollar business by stealing hundreds of thousands of copyrighted books.” It states the company “ignored copyright protections” and engaged in “large scale theft of copyrighted works” to train its Claude models. Anthropic has not commented substantively on the allegations, saying only that it is “aware” of the legal action.

The case joins similar lawsuits against other AI companies, such as Microsoft and OpenAI, over the use of copyrighted material to develop large language models, and highlights growing tensions between content creators and AI firms over intellectual property rights.

According to the complaint, Anthropic used a dataset called ‘The Pile’ to train Claude. This dataset allegedly included a collection of pirated ebooks called ‘Books3,’ which contained nearly 200,000 books downloaded from an unauthorised source. The authors argue that Anthropic knew it was using copyrighted works without permission. They claim the company made a “deliberate decision to cut corners and rely on stolen materials to train their models” rather than obtaining proper licences.

The lawsuit states that Anthropic’s actions have harmed authors by depriving them of book sales and licensing revenues. It alleges the company’s AI models now compete with human-written content, threatening writers’ livelihoods. For context, Anthropic positions its Claude models as rivals to OpenAI’s ChatGPT and other prominent AI chatbots.
The company has raised billions in funding and is valued at over $18 billion. Critics argue that AI firms should compensate authors and publishers for using their works as training data. Some companies, like Google, have begun licensing deals with news organisations and other content providers. However, AI developers contend that using copyrighted material for machine learning falls under copyright law’s ‘fair use’ provisions. They argue that their models do not reproduce exact copies of training texts.

The debate touches on complex legal and ethical questions about how copyright applies to AI development. Courts may need to determine whether AI training constitutes copyright infringement or transformative fair use. For authors, the lawsuit represents an effort to assert control over how their works are used in AI development. They argue that companies profiting from AI should compensate creators whose material made the technology possible.

The case could have significant implications for the AI industry if courts rule that firms must obtain licences for all copyrighted material used in training. This would likely increase costs and complexity for AI development.

Anthropic has focused on developing “safe and ethical” AI systems. The company’s CEO has described it as “focused on public benefit.” However, the authors’ lawsuit challenges this image, accusing Anthropic of building its business through copyright infringement. The complaint seeks statutory damages for alleged wilful copyright infringement and an injunction to prevent Anthropic from further using the authors’ works without permission.

As AI capabilities grow, debates over intellectual property are likely to intensify. Content creators argue that their work should be protected and compensated, while AI companies push for access to broad datasets to improve their models. The outcome of cases like this one against Anthropic could help shape the legal and regulatory landscape for AI development.
It may influence how companies approach training data collection and whether widespread licensing becomes the norm. For now, the lawsuit adds to the mounting legal challenges facing major AI firms over their use of copyrighted material. As courts grapple with these issues, their rulings could have far-reaching effects on the future of AI and content creation.

The case is filed as Andrea Bartz et al. v. Anthropic PBC, US District Court for the Northern District of California, No. 3:24-cv-05417.

(Photo by Anthropic)

See also: Anthropic says Claude 3 Haiku is the fastest model in its class

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Authors file lawsuit against Anthropic over ‘pirated’ works appeared first on AI News.

-
OpenAI has added its voice to the growing chorus of tech leaders and politicians opposing a controversial AI safety bill in California. The company argues that the legislation, SB 1047, would stifle innovation and that regulation should be handled at a federal level.

In a letter sent to California State Senator Scott Wiener’s office, OpenAI expressed concerns that the bill could have “broad and significant” implications for US competitiveness and national security. The company argued that SB 1047 would threaten California’s position as a global leader in AI, prompting talent to seek “greater opportunity elsewhere.”

Introduced by Senator Wiener, the bill aims to enact “common sense safety standards” for companies developing large AI models exceeding specific size and cost thresholds. These standards would require companies to implement shut-down mechanisms, take “reasonable care” to prevent catastrophic outcomes, and submit compliance statements to the California attorney general. Failure to comply could result in lawsuits and civil penalties.

Lieutenant General John (Jack) Shanahan, who served in the US Air Force and was the inaugural director of the US Department of Defense’s Joint Artificial Intelligence Center (JAIC), believes the bill “thoughtfully navigates the serious risks that AI poses to both civil society and national security” and provides “pragmatic solutions”.

Hon. Andrew C. Weber – former Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs – echoed the national security concerns. “The theft of a powerful AI system from a leading lab by our adversaries would impose considerable risks on us all,” said Weber. “Developers of the most advanced AI systems need to take significant cybersecurity precautions given the potential risks involved in their work.
I’m glad to see that SB 1047 helps establish the necessary protective measures.”

SB 1047 has sparked fierce opposition from major tech companies, startups, and venture capitalists who argue that it overreaches for a nascent technology, potentially stifling innovation and driving businesses from the state. These concerns are echoed by OpenAI, with sources revealing that the company has paused plans to expand its San Francisco offices due to the uncertain regulatory landscape.

Senator Wiener defended the bill, stating that OpenAI’s letter fails to “criticise a single provision.” He dismissed concerns about a talent exodus as “nonsensical,” stating that the law would apply to any company conducting business in California, regardless of its physical location. Wiener highlighted the bill’s “highly reasonable” requirement for large AI labs to test their models for catastrophic safety risks, a practice many have already committed to.

Critics, however, counter that mandating the submission of model details to the government will hinder innovation. They also worry that the threat of lawsuits will deter smaller, open-source developers from establishing startups. In response to the backlash, Senator Wiener recently amended the bill to eliminate criminal liability for non-compliant companies, safeguard smaller developers, and remove the proposed “Frontier Model Division.”

OpenAI maintains that a clear federal framework, rather than state-level regulation, is essential for preserving public safety while maintaining US competitiveness against rivals like China. The company highlighted the suitability of federal agencies, such as the White House Office of Science and Technology Policy and the Department of Commerce, to govern AI risks. Senator Wiener acknowledged the ideal of congressional action but expressed scepticism about its likelihood.
He drew parallels with California’s data privacy law, passed in the absence of federal action, suggesting that inaction from Congress shouldn’t preclude California from taking a leading role.

The California state assembly is set to vote on SB 1047 this month. If passed, the bill will land on the desk of Governor Gavin Newsom, whose stance on the legislation remains unclear. However, Newsom has publicly recognised the need to balance AI innovation with risk mitigation.

(Photo by Solen Feyissa)

See also: OpenAI delivers GPT-4o fine-tuning

The post OpenAI warns California’s AI bill threatens US innovation appeared first on AI News.
-
The rise of the creator economy was one of the most disruptive forces to emerge from the internet, paving the way for independent writers, artists, musicians, podcasters, YouTubers and social media influencers to connect with audiences directly and earn money from doing so. Creators have flocked to platforms such as Facebook, Instagram, Vimeo, Substack, TikTok and more, where they can not only create but also publish and share their user-generated content. Social media enables individuals to become self-publishers and independent producers of content, disrupting existing business models and enabling an entire generation of creative minds to establish their own path to success.

Until recently, the creativity such individuals express was always thought to be a uniquely human quality and therefore invulnerable to disruption by advancing technology. However, the rise of generative AI, which comes so soon after the emergence of the creator economy, threatens to disrupt this nascent industry and significantly alter the way new content is produced. With generative AI models, anyone can churn out paragraphs of text, lines of software code, high-quality images, audio, video and more, using simple prompts.

How does AI aid with user-generated content?

Generative AI burst into the public consciousness with the arrival of ChatGPT in late 2022, taking the internet by storm, and since then tech companies have rushed to create all manner of consumer-friendly applications that can aid in content creation. For instance, there’s ChatGPT itself, which is all about text generation, capable of writing blog posts, essays, marketing copy, email pitches, documents and more, based on a simple prompt where the user tells it what to write.
More impressive forms of content generation include image-generating models such as Midjourney, which can create dramatic pictures based on users’ ideas of what they want to see, and there are now even video generators, such as OpenAI’s Sora, Google DeepMind’s Veo and Runway, that can do the same.

Generative AI is also having an impact on video game content generation. Take the novel technology developed by AMGI Studios for its hit Web3 game My Pet Hooligan, which uses proprietary motion capture and AI algorithms to capture the gamer’s facial expressions and replicate them on their in-game avatars. It further uses generative AI to provide each user character (which is a unique NFT) with its own distinctive personality that users can learn about through a chat interface. Other ways people use generative AI to enhance creativity include Buzzfeed’s personalised content creation tools, which enable users to quickly create customised quizzes tailored to each individual, and its generative AI recipe creator, which can serve up ideas for meals based on whatever the user has in the fridge.

Three ways this can go

In the eyes of some, AI-generated content has emerged as a major threat to user-generated content, but not everyone sees it that way. It’s unclear what kind of impact generative AI will ultimately have on the creator economy, but there are a number of possible scenarios that may unfold.

Scenario 1: AI enhances creativity

In the first scenario, it’s possible to imagine a world in which there’s an explosion of AI-assisted innovation, in which content creators themselves adopt AI to improve their performance and productivity. For instance, designers can use AI to quickly generate basic ideas and outlines, before using their human expertise to fine-tune those creations, be it a logo or a product design or something else. Rather than replace designers entirely, generative AI simply becomes a tool that they use to improve their output and get more work done.
An example of this is GitHub’s coding assistant Copilot, a generative AI tool that acts as a kind of programming assistant, helping developers to generate code. It doesn’t replace their role entirely, but simply assists them in generating code snippets – such as the lines of code required to program an app to perform standard actions. But the developer is the one who oversees this and uses their creativity to design all of the intricacies of the app. AMGI’s in-game content generation tools are another example of how AI augments human creativity, creating unique in-game characters and situations that are ultimately based on the user’s actions.

Such a scenario isn’t a threat to creative workers and user-generated content. Rather than taking people’s jobs, AI will simply support the people who do those jobs and make them better at them. They’ll be able to work faster and more efficiently, getting more work done in shorter time frames, spending more of their time prompting the AI tools they use and editing their outputs. It will enable creative projects to move forward much faster, accelerating innovation.

Scenario 2: AI monopolises creativity

A more dystopian scenario is the one where algorithmic models leverage their unfair advantage to totally dominate the world of content creation. It’s a future where human designers, writers, coders and perhaps even highly skilled professionals like physicists are drowned out by AI models that can not only work faster, but at much lower costs than humans. From a business perspective, if companies can replace costly human creators with cheap and cheerful AI, that’s great news for profitability. But there are concerns, not only for the humans who lose their livelihoods, but also about the impact on creativity itself. As impressive as generative AI-created content sometimes is, the outputs of these algorithms are all based on existing content – namely the data they’re trained on.
Most AI models have a habit of regurgitating similar content. Take an AI writer that always seems to write prose in the same, instantly recognizable and impersonal way, or AI image generators that constantly churn out images with the same aesthetic.

An even more alarming example of this is the AI music generators Suno and Udio (developed by Uncharted Labs), whose tools are said to have been trained on millions of music videos posted on YouTube. Musicians represented by the Recording Industry Association of America recently filed lawsuits against those companies, accusing them of copyright infringement. Their evidence? Numerous examples of supposedly original songs that sound awfully similar to existing ones created by humans. For instance, the lawsuit describes a song generated using Suno, called “Deep down in Louisiana close to New Orle”, which seems to mirror the lyrics and style of Chuck Berry’s “Johnny B. Goode.” It also highlights a second track, “Prancing Queen”, that seems to be a blatant rip-off of the ABBA hit “Dancing Queen.”

These examples raise questions over AI’s ability to create truly original content. If AI were to monopolise creativity, it could result in true innovation and creativity screeching to a halt, leading to a future that’s sterile and bland.

Scenario 3: Human creativity stands out

Given AI’s lack of true authenticity and originality, a third possible way this could play out is that there is a kind of backlash against it. With consumers overwhelmed by a sea of mundane, synthetic imagery and prose, those with an eye for flair will likely be able to identify true, human creativity and pay a premium for that content. After all, humans have always shown a preference for true originality, and such a scenario could well play into the hands of the most talented content creators.
It’s a future where being human gives creators a competitive edge over their algorithmic rivals, with their unparalleled ability to come up with truly original ideas setting their work apart. Human culture, fashions and trends seem to evolve faster than generative AI models are created, and that means that the most original thinkers will always be one step ahead. It’s a more reassuring future where humans will continue to create and be rewarded for their work, and where machines will only ever be able to copy and iterate on existing ideas.

This is perhaps the most likely scenario and, reassuringly, it means there will always be a need for humans in the mix. Humans, after all, are characterised by their creativity – everything that exists in the modern world today was created by someone, whether it’s the shoes on your feet, the device you’re reading this article with, or the language you speak. They’re all human creations, inspired by original ideas rooted in the human brain, and humans – especially those who find AI can do their jobs for them – will have more time to sit and think and potentially come up with even better ideas than the ones we’ve had so far.

The post As AI improves, what does it mean for user-generated content? appeared first on AI News.
-
OpenAI has announced the release of fine-tuning capabilities for its GPT-4o model, a feature eagerly awaited by developers. To sweeten the deal, OpenAI is providing one million free training tokens per day for every organisation until 23rd September.

Tailoring GPT-4o with custom datasets can result in enhanced performance and reduced costs for specific applications. Fine-tuning enables granular control over the model’s responses, allowing for customisation of structure, tone, and even the ability to follow intricate, domain-specific instructions. Developers can achieve impressive results with training datasets comprising as little as a few dozen examples. This accessibility paves the way for improvements across various domains, from complex coding challenges to nuanced creative writing.

“This is just the start,” assures OpenAI, highlighting its commitment to continuously expand model customisation options for developers.

GPT-4o fine-tuning is available immediately to all developers across all paid usage tiers. Training costs are set at $25 per million tokens, with inference priced at $3.75 per million input tokens and $15 per million output tokens. OpenAI is also making GPT-4o mini fine-tuning accessible with two million free daily training tokens until 23rd September. To access this, select “gpt-4o-mini-2024-07-18” from the base model dropdown on the fine-tuning dashboard.

The company has collaborated with select partners to test and explore the potential of GPT-4o fine-tuning. Cosine’s Genie, an AI-powered software engineering assistant, leverages a fine-tuned GPT-4o model to autonomously identify and resolve bugs, build features, and refactor code alongside human developers. By training on real-world software engineering examples, Genie has achieved a state-of-the-art score of 43.8% on the new SWE-bench Verified benchmark, marking the largest improvement ever recorded on this benchmark.
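As a sketch of what the "few dozen examples" mentioned above look like in practice: OpenAI's fine-tuning endpoint ingests a JSONL file of chat-format conversations. The example conversations and the helper functions below are illustrative assumptions, not material from OpenAI's announcement:

```python
import json

# Each line of the training file is one chat-format example; a few dozen
# such examples can be enough to steer tone and structure.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer in a formal, concise tone."},
        {"role": "user", "content": "Summarise the refund policy."},
        {"role": "assistant", "content": "Refunds are issued within 14 days of purchase."},
    ]},
    {"messages": [
        {"role": "system", "content": "You answer in a formal, concise tone."},
        {"role": "user", "content": "What are the support hours?"},
        {"role": "assistant", "content": "Support is available 09:00-17:00 UTC, Monday to Friday."},
    ]},
]

def to_jsonl(rows):
    """Serialise examples to JSONL: one JSON object per line."""
    return "\n".join(json.dumps(row) for row in rows)

def validate(jsonl_text):
    """Basic sanity check: every line parses and carries a non-empty 'messages' list."""
    for line in jsonl_text.splitlines():
        record = json.loads(line)
        assert isinstance(record.get("messages"), list) and record["messages"]
    return True

training_file = to_jsonl(examples)
print(validate(training_file))  # True
```

A file like this would then be uploaded and a fine-tuning job started against a base model such as “gpt-4o-mini-2024-07-18” via the dashboard or API.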
Distyl, an AI solutions provider, achieved first place on the BIRD-SQL benchmark after fine-tuning GPT-4o. This benchmark, widely regarded as the leading text-to-SQL test, saw Distyl’s model achieve an execution accuracy of 71.83%, demonstrating superior performance across demanding tasks such as query reformulation and SQL generation.

OpenAI reassures users that fine-tuned models remain entirely under their control, with complete ownership and privacy of all business data. This means no data sharing or utilisation for training other models. Stringent safety measures have been implemented to prevent misuse of fine-tuned models. Continuous automated safety evaluations are conducted, alongside usage monitoring, to ensure adherence to OpenAI’s robust usage policies.

(Photo by Matt Artz)

See also: Primate Labs launches Geekbench AI benchmarking tool

The post OpenAI delivers GPT-4o fine-tuning appeared first on AI News.
-
According to Carine Smith Ihenacho, chief governance and compliance officer of Norway’s $1.7 trillion sovereign wealth fund, boards need to be proficient with the use of AI and take control of its application in businesses to mitigate risks.

The Norges Bank Investment Fund, which holds considerable shares in almost 9,000 companies worldwide — accounting for 1.5% of all listed stocks — has become a trailblazer in environmental, social, and corporate governance issues. About a year ago, the fund also provided its invested companies with recommendations on integrating responsible AI to improve economic outcomes. Several companies still have a lot of ground to cover. While stating that “Overall, a lot of competence building needs to be done at the board level,” Smith Ihenacho clarified that this does not mean every board should have an AI specialist. Instead, boards need to collectively understand how AI matters in their business and have policies in place.

“They should know: ‘What’s our policy on AI? Are we high risk or low risk? Where does AI meet customers? Are we transparent around it?’ It’s a big-picture question they should be able to answer,” Smith Ihenacho added, highlighting the breadth of understanding required at the board level.

The fund has shared its perspective on AI with the boards of its 60 largest portfolio companies, as reported in its 2023 responsible investment report. It is particularly focused on AI use in the healthcare sector due to its substantial impact on consumers, and is closely monitoring Big Tech companies that develop AI-based products. In its engagement with tech firms, the fund emphasises the importance of robust governance structures to manage AI-related risks. “We focus more on the governance structure,” Smith Ihenacho explained. “Is the board involved?
Do you have a proper policy on AI?” The fund’s emphasis on AI governance is particularly relevant, given that nine of the ten largest positions in its equity holdings are tech companies. Leading among them are names such as Microsoft, Apple, Amazon, and Meta Platforms. Investments in these companies contributed to a 12.5% growth in the fund’s stock portfolio in the first half of 2024. The overall exposure to the tech sector increased from 21% to 26% over the past year, now comprising a quarter of the stock portfolio. This underscores the significant role that technology and AI play in the world today. Though the fund favours AI innovation for its potential to boost efficiency and productivity, Smith Ihenacho has emphasised the importance of responsible use. She is quoted as saying, “It is fantastic what AI may be able to do to support innovation, efficiency, and productivity… we support that.” However, she also stressed the need to be responsible in how we manage the risks. The fund’s adoption of AI governance aligns with rising global concerns about the ethical implications and potential dangers of these technologies. AI is increasingly utilised across various sectors, from finance to healthcare, and the need for governance frameworks has never been greater. The Norwegian sovereign wealth fund maintains a standard that requires companies to develop comprehensive AI policies at the board level, fostering the adoption of responsible AI practices across its large portfolio. This initiative by one of the world’s largest investors could have far-reaching implications for corporate governance practices globally. As companies seek to harness the power of AI while navigating its complexities, the guidance provided by influential investors like Norges Bank Investment Fund may serve as a blueprint for responsible AI implementation and governance in the corporate world. 
See also: X agrees to halt use of certain EU data for AI chatbot training

The post Use of AI for business governance must improve at the board level appeared first on AI News.
-
AI capabilities have exploded over the past two years, with generative AI tools such as ChatGPT, Dall-E, and Midjourney becoming part of everyday use. As you’re reading this article, generative AI programs are responding to emails, writing marketing copy, recording songs, and creating images from simple inputs.

What’s even more remarkable to witness is the rate at which both individuals and companies are embracing the AI ecosystem. A recent survey by McKinsey revealed that the number of companies that have adopted generative AI in at least one business function doubled within a year to 65%, up from 33% at the beginning of 2023. However, like most technological advancements, this nascent area of innovation is not short of challenges. Training and running AI programs is a resource-intensive endeavour, and as things stand, big tech seems to have the upper hand, which creates the risk of AI centralisation.

The computational limitation in AI development

According to an article by the World Economic Forum, there is an accelerating demand for AI compute; the computational power required to sustain AI development is currently growing at an annual rate of between 26% and 36%. Another recent study by Epoch AI confirms this trajectory, with projections showing that it will soon cost billions of dollars to train or run AI programs. “The cost of the largest AI training runs is growing by a factor of two to three per year since 2016, and that puts billion-dollar price tags on the horizon by 2027, maybe sooner,” noted Epoch AI staff researcher Ben Cottier.

In my opinion, we’re already at this point. Microsoft invested $10 billion in OpenAI last year and, more recently, news emerged that the two entities are planning to build a data center that will host a supercomputer powered by millions of specialised chips. The cost? A whopping $100 billion, which is ten times more than the initial investment.
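Cottier’s growth factor can be sanity-checked with simple compounding. The starting figure below (a roughly $100 million frontier-scale training run in 2024) is an assumption for illustration, not a number from the article:

```python
def projected_cost(start_cost, growth_factor, years):
    """Compound a per-run training cost by a constant annual factor."""
    return start_cost * growth_factor ** years

# Assumed for illustration: a ~$100M frontier-scale training run in 2024.
start = 100e6

# At 2x per year the cost approaches $1B within three years; at 3x it
# comfortably passes it, consistent with "billion-dollar price tags on
# the horizon by 2027, maybe sooner".
for factor in (2.0, 3.0):
    cost = projected_cost(start, factor, years=3)
    print(f"growth {factor}x/year: ${cost / 1e9:.1f}B by 2027")
```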
Well, Microsoft is not the only big tech company on a spending spree to boost its AI computing resources. Other companies in the AI arms race, including Alphabet (Google’s parent company) and Nvidia, are directing significant funding to AI research and development. While we can agree that the outcome could match the amount of money being invested, it is hard to ignore the fact that AI development is currently a ‘big tech’ sport. Only these deep-pocketed companies have the ability to fund AI projects to the tune of tens or hundreds of billions.

It begs the question: what can be done to avoid the same pitfalls that Web2 innovations are facing as a result of a handful of companies controlling innovation? Stanford HAI’s Vice Director and Faculty Director of Research, James Landay, is one of the experts who has previously weighed in on this scenario. According to Landay, the rush for GPU resources and the prioritisation by big tech companies of using their AI computational power in-house will trigger the demand for computing power, ultimately pushing stakeholders to develop cheaper hardware solutions.

In China, the government is already stepping up to support AI startups following the chip wars with the US, which have restricted Chinese companies from seamlessly accessing crucial chips. Local governments within China introduced subsidies earlier this year, pledging to offer computing vouchers for AI startups ranging between $140,000 and $280,000. This effort is aimed at reducing the costs associated with computing power.

Decentralising AI computing costs

Looking at the current state of AI computing, one theme is constant — the industry is currently centralised. Big tech companies control the majority of the computing power as well as AI programs. The more things change, the more they remain the same. On the brighter side, this time, things might actually change for good, thanks to decentralised computing infrastructures such as the Qubic Layer 1 blockchain.
This L1 blockchain uses an advanced mining mechanism dubbed useful Proof-of-Work (uPoW); unlike Bitcoin’s typical PoW, which uses energy for the sole purpose of securing the network, Qubic’s uPoW utilises its computational power for productive AI tasks such as training neural networks. In simpler terms, Qubic is decentralising the sourcing of AI computational power by moving away from the current paradigm, where innovators are limited to the hardware they own or have rented from big tech. Instead, this L1 taps into its network of miners, which could run into the tens of thousands, to provide computational power.

Although a bit more technical than leaving big tech to handle the backend side of things, a decentralised approach to sourcing AI computing power is more economical. But more importantly, it would only be fair if AI innovation were driven by more stakeholders, as opposed to the current state where the industry seems to rely on a few players. What happens if all of them go down? To make matters worse, these tech companies have proven untrustworthy with life-changing tech advancements. Today, most people are up in arms against data privacy violations, not to mention other affiliated issues such as societal manipulation. With decentralised AI innovations, it will be easier to check on developments while reducing the cost of entry.

Conclusion

AI innovations are just getting started, but the challenge of accessing computational power remains a headwind. To add to it, big tech currently controls most of the resources, which is a big challenge to the rate of innovation, not to mention the fact that these same companies could end up having more power over our data – the digital gold. However, with the advent of decentralised infrastructures, the entire AI ecosystem stands a better chance of reducing computational costs and eliminating big tech’s control over one of the most valuable technologies of the 21st century.
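The article does not detail Qubic’s actual protocol, but the general shape of the idea (slicing a compute-heavy job into work units, farming them out to many independent workers, and aggregating the partial results) can be sketched as follows; the workload here is a toy stand-in, not real neural-network training:

```python
from concurrent.futures import ThreadPoolExecutor

def useful_work(shard):
    """Stand-in for a miner's 'useful' task: a toy computation over its
    slice of the workload rather than a hash puzzle."""
    return sum(x * x for x in shard)

def distribute(workload, n_miners):
    """Split the workload across miners and aggregate their results."""
    shards = [workload[i::n_miners] for i in range(n_miners)]
    with ThreadPoolExecutor(max_workers=n_miners) as pool:
        partial_results = list(pool.map(useful_work, shards))
    return sum(partial_results)

# The distributed result matches what a single machine would compute.
print(distribute(list(range(10)), n_miners=4))  # 285
```

The point of the sketch is that the result is identical to the centralised computation; only the sourcing of the compute changes.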
The post AI capabilities are growing faster than hardware: Can decentralisation close the gap? appeared first on AI News. View the full article
-
Primate Labs has officially launched Geekbench AI, a benchmarking tool designed specifically for machine learning and AI-centric workloads. The release of Geekbench AI 1.0 marks the culmination of years of development and collaboration with customers, partners, and the AI engineering community. The benchmark, previously known as Geekbench ML during its preview phase, has been rebranded to align with industry terminology and ensure clarity about its purpose. Geekbench AI is now available for Windows, macOS, and Linux through the Primate Labs website, as well as on the Google Play Store and Apple App Store for mobile devices. Primate Labs’ latest benchmarking tool aims to provide a standardised method for measuring and comparing AI capabilities across different platforms and architectures. The benchmark offers a unique approach by providing three overall scores, reflecting the complexity and heterogeneity of AI workloads. “Measuring performance is, put simply, really hard,” explained Primate Labs. “That’s not because it’s hard to run an arbitrary test, but because it’s hard to determine which tests are the most important for the performance you want to measure – especially across different platforms, and particularly when everyone is doing things in subtly different ways.” The three-score system accounts for the varied precision levels and hardware optimisations found in modern AI implementations. This multi-dimensional approach allows developers, hardware vendors, and enthusiasts to gain deeper insights into a device’s AI performance across different scenarios. A notable addition to Geekbench AI is the inclusion of accuracy measurements for each test. This feature acknowledges that AI performance isn’t solely about speed but also about the quality of results. By combining speed and accuracy metrics, Geekbench AI provides a more holistic view of AI capabilities, helping users understand the trade-offs between performance and precision. 
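To make the scoring approach concrete, here is a minimal sketch of how a benchmark harness might combine per-workload speed and accuracy into a single score. It is purely illustrative: the function names, the minimum-duration loop, the geometric-mean aggregation, and the baseline constant are assumptions for this sketch, not Geekbench AI’s actual methodology.

```python
import math
import time

def run_workload(workload, min_seconds=1.0):
    """Repeat a workload until at least `min_seconds` have elapsed,
    then report its throughput in iterations per second."""
    iterations = 0
    start = time.perf_counter()
    while time.perf_counter() - start < min_seconds:
        workload()  # one inference (or training step) of the model under test
        iterations += 1
    elapsed = time.perf_counter() - start
    return iterations / elapsed

def overall_score(results, baseline=100.0):
    """Fold a list of (throughput, accuracy) pairs into one score.
    Throughput is weighted by accuracy so a fast-but-sloppy run does
    not automatically win; the geometric mean keeps one outlier
    workload from dominating the total (illustrative choice)."""
    weighted = [throughput * accuracy for throughput, accuracy in results]
    geo_mean = math.exp(sum(math.log(w) for w in weighted) / len(weighted))
    return baseline * geo_mean

# Hypothetical per-workload results: (iterations/sec, accuracy in [0, 1]),
# e.g. one list per precision level (FP32, FP16, quantised).
fp32_results = [(12.0, 0.99), (8.0, 0.98), (20.0, 0.97)]
score = overall_score(fp32_results)
```

Weighting throughput by accuracy captures the trade-off described above: a device running a heavily quantised model quickly but imprecisely does not automatically outscore one that is slower but more faithful.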
Geekbench AI 1.0 introduces support for a wide range of AI frameworks, including OpenVINO on Linux and Windows, and vendor-specific TensorFlow Lite delegates like Samsung ENN, ArmNN, and Qualcomm QNN on Android. This broad framework support ensures that the benchmark reflects the latest tools and methodologies used by AI developers. The benchmark also utilises more extensive and diverse datasets, which not only enhance the accuracy evaluations but also better represent real-world AI use cases. All workloads in Geekbench AI 1.0 run for a minimum of one second, allowing devices to reach their maximum performance levels during testing while still reflecting the bursty nature of real-world applications. Primate Labs has published detailed technical descriptions of the workloads and models used in Geekbench AI 1.0, emphasising their commitment to transparency and industry-standard testing methodologies. The benchmark is integrated with the Geekbench Browser, facilitating easy cross-platform comparisons and result sharing. The company anticipates regular updates to Geekbench AI to keep pace with market changes and emerging AI features. However, Primate Labs believes that Geekbench AI has already reached a level of reliability that makes it suitable for integration into professional workflows, with major tech companies like Samsung and Nvidia already utilising the benchmark. (Image Credit: Primate Labs) See also: xAI unveils Grok-2 to challenge the AI hierarchy Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post Primate Labs launches Geekbench AI benchmarking tool appeared first on AI News. 
View the full article
-
Artificial intelligence is everywhere; if you’re on our site, the chances are you’ve noticed! One of the places where we maybe don’t think so much about AI and its implications is when we’re gaming. After all, for many of us this is a time to switch off from reality and properly relax. However, gaming is one of the industries where AI is having some of its most pronounced effects. We’re going to take a look at a handful of the ways that AI is changing the way we game.

Artificial Intelligence is Changing NPCs

One of the most interesting ways that AI is being used is to train NPCs (non-playable characters) to behave in a more human way. This means that they can react to their environment – for example, if it’s raining they might move to stand under a tree, or pull out an umbrella. They can also react differently depending on the choices of a character: for example, if you ask them for money they might behave rudely, whereas if you offer them a gift, they might be friendly and grateful. Beyond this, they’re being trained in more complex behaviours like competition – think about the brothers in Red Dead Redemption II who are constantly at loggerheads with one another. All of these traits make interacting with NPCs a joy rather than a chore.

Artificial Intelligence in online casinos

Online casinos are often at the forefront of technological advances when it comes to giving their customers something new to try out. AI has been pivotal in this, enabling online casinos to better understand what users want from their experience, as well as how to keep them safe. For the latter, AI has been put to great use in detecting unusual behaviour on user accounts, which can be indicative of fraud. AI flags the behaviour to the fraud team, who can check it out and prevent anything bad from happening. This works so effectively because AI is so much better than humans (not to mention faster) at spotting patterns.
As soon as there’s a blip in the usual pattern, AI can spot it, freeing up human time for other tasks. We’re going to head to social casinos to look at the other asset of AI: being able to understand what users want. Social casinos are just like the usual online casinos, but people can enjoy games without financial stakes. This is another field in which AI plays an important role, particularly when it comes to recommending games. Many social casinos have a dynamic homepage, where players are shown a selection of games curated for them using AI. This isn’t the only way that AI is shaping the social casino landscape; in fact, you can check on Oddschecker for in-depth analyses and comparisons of leading platforms in this space.

Artificial Intelligence Shapes Difficulty and Content

Are you getting better, or is AI dumbing it down for you? How is AI performing in the world of console games? Some of the true blockbusters are employing AI to create a gaming experience that’s even closer to life than we thought possible. One of the ways that AI is being put to great use is in the realm of narratives. AI is creating personalized experiences by generating dynamic storylines and quests based on player actions. This means that one player won’t receive the same storyline as another. This level of personalization was sort of possible before, but it took thousands of man-hours to make everything work seamlessly. Now, this level of customization can even extend to adaptive difficulty, where AI adjusts the game’s challenge in real time to match a player’s skill level. So, if you’ve found that a game is getting a bit easier, it might be worth double-checking that AI isn’t helping you out a little!

AI excels in terms of visuals too. Technologies like AI upscaling improve textures by converting lower-resolution images to higher resolutions while reducing noise. Another fascinating bit of technology is called ray tracing.
This creates realistic lighting conditions, like light bouncing off puddles, or raindrops shadowing a face. It’s these intricate details that are making the hyperreal games of tomorrow come to life today. The post How Artificial Intelligence is Shaping the Way We Game appeared first on AI News. View the full article
-
Google has unveiled its new range of Pixel 9 smartphones, emphasising their enhanced AI capabilities. The company released the devices much earlier than usual, as Google typically presents new Pixel models in the autumn. Compared to previous versions, though, the changes in the new models are revolutionary. The new smartphones feature more advanced integrations of Google’s AI technology. For instance, one unique Pixel feature allows users to search for information and images within their screenshots, thanks to a more deeply integrated approach. Additionally, through the Gemini chatbot, some features of these smartphones are available as overlays from other apps.

At Alphabet’s Bay View campus in Mountain View, California, Rick Osterloh, Senior Vice President of Devices and Services at Google, told attendees that the company plans to focus on practical applications of AI. He spent time describing to the audience the “Gemini era”, which will commence with the arrival of Google’s advanced AI models.

The Pixel 9 series offers several models. The base model, the Pixel 9, features a 6.3-inch screen and costs $799. A larger alternative, the Pixel 9 Pro XL, has a 6.8-inch screen. A slightly enhanced version, the Pixel 9 Pro, offers a better camera system, though its price is higher. The final option is the foldable Pixel 9 Pro Fold. Google stated at the event that the Pixel 9 and Pixel 9 Pro XL would ship in late August, while the Pro and Pro Fold models will ship in September; all models are available for preorder starting August 13.

During Google’s presentations at the event, Gemini’s new functions were showcased in a live demo focusing on the latest conversation features. Additionally, the company announced updates to the product’s exterior design, the installation of Google’s advanced camera system, and the integration of the new Tensor G4 chip.
In addition to smartphones, the company unveiled new versions of the Pixel Watch 3 smartwatch and Pixel Buds Pro 2 wireless earbuds. The watch can track the user’s heart rate; if it stops, it will call emergency services. This feature will be available in the UK and the EU.

As reported by IDC, Google’s share of the global smartphone market was less than 1% in the second quarter of 2024. Samsung and Apple took first and second place, with market shares of 18.9% and 15.8%, respectively. In the US, Google ranks fourth among smartphone operating systems, holding 4.5% of the market share. Industry analysts note that although Google Pixel is not among the best-selling smartphones, it showcases some of the benefits of the Android operating system. Android has become the dominant operating system, used by more than 80% of smartphone users worldwide. Consequently, many people, even those who have never used a Google Pixel, may indirectly experience and appreciate the features that Google products offer.

The event also touched upon Google’s further intentions and previous efforts to implement AI across its product lineup to stay at the top of the game. Not long ago, the company integrated AI improvements into its core products, including its search engine. Additionally, Google announced a content-sharing agreement it reached with Peloton. As a result, Fitbit Premium subscribers will have free access to the Peloton training class library. (Image Credit: Google) See also: Google’s Gemini 1.5 Pro dethrones GPT-4o Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Google advances mobile AI in Pixel 9 smartphones appeared first on AI News. View the full article
-
While the industry acknowledges the need for robust security measures, research from PSA Certified suggests that investment and best practices are struggling to keep pace with AI’s rapid growth. The survey of 1,260 global technology decision-makers revealed that two-thirds (68%) are concerned that the speed of AI advancements is outstripping the industry’s ability to safeguard products, devices, and services. This apprehension is driving a surge in edge computing adoption, with 85% believing that security concerns will push more AI use cases to the edge. Edge computing – which processes data locally on devices instead of relying on centralised cloud systems – offers inherent advantages in efficiency, security, and privacy. However, this shift to the edge necessitates a heightened focus on device security. “There is an important interconnect between AI and security: one doesn’t scale without the other,” cautions David Maidment, Senior Director, Market Strategy at Arm (a PSA Certified co-founder). “While AI is a huge opportunity, its proliferation also offers that same opportunity to bad actors.” Despite recognising security as paramount, a significant disconnect exists between awareness and action. Only half (50%) of those surveyed believe their current security investments are sufficient. Furthermore, essential security practices, such as independent certifications and threat modelling, are being neglected by a substantial portion of respondents. “It’s more imperative than ever that those in the connected device ecosystem don’t skip best practice security in the hunt for AI features,” emphasises Maidment. “The entire value chain needs to take collective responsibility and ensure that consumer trust in AI driven services is maintained.” The report highlights the need for a holistic approach to security, embedded throughout the entire AI lifecycle, from device deployment to the management of AI models operating at the edge. 
This proactive approach, incorporating security-by-design principles, is deemed essential to building consumer trust and mitigating the escalating security risks. Despite the concerns, a sense of optimism prevails within the industry. A majority (67%) of decision-makers believe their organisations are equipped to handle the potential security risks associated with AI’s surge. There is a growing recognition of the need to prioritise security investment – 46% are focused on bolstering security, compared to 39% prioritising AI readiness. “Those looking to unleash the full potential of AI must ensure they are taking the right steps to mitigate potential security risks,” says Maidment. “As stakeholders in the connected device ecosystem rapidly embrace a new set of AI-enabled use cases, it’s crucial that they do not simply forge ahead with AI regardless of security implications.” (Photo by Braden Collum) See also: The AI revolution: Reshaping data centres and the digital landscape Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post PSA Certified: AI growth outpacing security measures appeared first on AI News. View the full article
-
Artificial intelligence is changing the world and is projected to have a global market value of $2-4 trillion USD by 2030. The future is now, and it feels as though we’re witnessing a big bang in technology every couple of months. AI has crept into every facet of our lives, fundamentally transforming our work and play. Data centres are at the heart of all this excitement. Put simply, AI is when computer systems are used to simulate human intelligence processes. This includes learning, reasoning, and – particularly interestingly – self-correction. In other words, it’s like having a human brain in a computer. Bill Gates has compared its rise to the start of some of the most important technological advances in history. The surge of AI is staggering. For instance, ChatGPT reached a million users in just five days; for Netflix, this milestone took a few years. The enthusiasm to implement the technology is evident from these instances of explosive growth. However, AI has a surprisingly large appetite for data, and the computational power required to process that data is enormous, especially considering that it’s only set to increase further. That’s where data centre infrastructure comes in. Data centres are the backbones of the digital world and are no longer simply storage spaces but are rapidly evolving into entire ecosystems. These ecosystems are energy-hungry, requiring rapid processing power for energy-intensive processes and efficient delivery of data worldwide. Data centres are home to rows of servers, storage systems, and complex networks that facilitate the flow of information. Such facilities are essential to various workloads, from search queries to financial transactions and digital interactions, and usually remain silent while completing their tasks. As progressive as AI’s demands and capabilities are, it is crucial to ensure their compatibility with data centre infrastructure. 
Every computation involved in processing data is key to AI, and the efficiency of these processes depends on three primary types of processors: the Graphics Processing Unit (GPU), the Central Processing Unit (CPU), and the Tensor Processing Unit (TPU). The GPU is great at managing parallelism, making it excellent for training AI models. The CPU allows for more flexibility in handling simultaneous tasks at increasing scale. Finally, the TPU, Google’s development in this sphere, is best suited for completing the highest possible number of AI tasks in the shortest amount of time.

Integrating AI into data centres presents several challenges:

- Power: AI training processes require high-performance computing infrastructure, necessitating reliable and sufficient power supply systems.
- Connectivity: Seamless, high-speed, and low-latency network connectivity is crucial for efficient data transfer and communication.
- Cooling: AI workloads generate significant heat, requiring advanced cooling systems to maintain optimal operating temperatures.

AI is ever-emerging and ever-evolving, and thus regulation must change with it. For example, the AI Act recently released by the EU categorises applications of AI into four levels of risk: unacceptable, high, limited, and minimal or no risk. At the same time, the NIS2 Directive has expanded cybersecurity regulation to cover the digital realm as well. As such, one of the main challenges facing industries, including data centres, will be keeping up to date with these regulations. AI is progressing faster and further than anything we have seen in recent years, and data centres must move just as quickly to keep up with the changing parameters and risk boundaries that are now being defined.

To sum up, the AI revolution is changing the way our digital infrastructure works, with the data centre being one of the first things to be transformed.
This transformation is crucial because, as we discover new ways of applying AI, we will need everything from technological advancements to regulatory compliance; the laws and regulations governing AI will keep piling up as it grows. Thus, the history of AI and the data centre is one of continuous development, each shaping the other.

Interested in learning more? Data Centres Expo Europe | Data Centres Event & Conference Register for free to attend the upcoming Data Centres Expo event and conference, which will shine a spotlight on the future outlook for the sector as demand for data centre space increases. Gain valuable insights from industry leaders and network with experts from the largest data centre providers. Examine key topics such as building AI-ready data centre infrastructures, building scalability and sustainability into data centres, and cultivating the right data centre hardware solutions. Learn more and register for free here. The post The AI revolution: Reshaping data centres and the digital landscape appeared first on AI News. View the full article
-
xAI has announced the release of Grok-2, a major upgrade that boasts improved capabilities in chat, coding, and reasoning. Alongside Grok-2, xAI has introduced Grok-2 mini, a smaller but capable version of the main model. Both are currently in beta on X and will be made available through xAI’s enterprise API later this month.

An early version of Grok-2 was tested on the LMSYS leaderboard under the pseudonym “sus-column-r”. At the time of the announcement, xAI claims it is outperforming both Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-4-Turbo. However, it’s worth noting that GPT-4o currently holds the top spot as the best AI assistant in terms of overall capabilities, followed by Google’s Gemini 1.5.

xAI’s focus remains on advancing core reasoning capabilities with its new compute cluster, as it aims to maintain its position at the forefront of AI development. However, the company recently agreed to halt the use of certain EU data for training its models. While the release of Grok-2 marks a significant milestone for xAI, it’s clear that the AI landscape remains highly competitive. With ChatGPT-4o and Google’s Gemini 1.5 leading the pack, and other major players like Anthropic continuing to make advancements, the race for AI supremacy is far from over. See also: SingularityNET bets on supercomputer network to deliver AGI Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post xAI unveils Grok-2 to challenge the AI hierarchy appeared first on AI News. View the full article
-
Recently, the European Union became the centre stage of a data privacy controversy involving the social media platform X. On August 8, an Irish court declared that X had agreed to suspend its use of all data belonging to European Union citizens that had been gathered via the platform to train the company’s AI systems. As reported by The Economic Times, this initiative was prompted by complaints from Ireland’s Data Protection Commission (DPC), the lead EU regulator for many large US tech companies that have their main offices in Ireland under EU law.

The DPC’s intervention comes amid intensified scrutiny of tech giants’ AI development practices across the EU. Recently, the regulatory body sought an order to restrain or suspend X’s processing of users’ data for the development, training, and refinement of an AI system. This situation clearly depicts the growing tension, experienced across nearly all EU states, between AI advances and ongoing data protection concerns.

It seems that the order was issued too late by regulators and the court. In its response to the lawsuit, X, owned by Elon Musk, reported that Grok, its AI chatbot, allowed users to opt out of the use of their public posts. As Judge Leonie Reynolds noted, X began processing European users’ data for AI training on May 7, but the opt-out option was not introduced until July 16, and even then it was not immediately made available to all users. Therefore, there was a period when the data was used without users’ consent.

X’s legal representation has assured the court that data obtained from EU users between May 7 and August 1 will not be used while the DPC’s order is under consideration. X is expected to file opposition papers arguing against the suspension order by September 4, setting in motion what could be a court battle with effects reverberating throughout the EU. Either way, X has not remained silent on the matter.
In its statement, the company’s Global Government Affairs account on X noted that the DPC’s order was “unwarranted, overbroad, and singles out X without any justification.” Furthermore, the company expressed concerns that the order would undermine efforts to keep the platform safe and restrict its use of technologies in the EU. This highlights the complex balance between regulatory compliance and operational viability that tech companies must navigate in the current digital landscape. The platform emphasised its proactive approach in working with regulators, including the DPC, regarding Grok since late 2023. X claims to have been fully transparent about the use of public data for AI models, including providing necessary legal assessments and engaging in lengthy discussions with regulators. This regulatory action against X is not an isolated incident. Other tech giants have faced similar scrutiny in recent months. Meta Platforms recently decided to postpone the launch of its Meta AI models in Europe following advice from the Irish DPC. Similarly, Google agreed to delay and modify its Gemini AI chatbot earlier this year after consultations with the Irish regulator. These developments collectively signal a shift in the regulatory landscape of AI and data usage in the EU. Regulators are taking a more active role in overseeing how tech companies utilise user data for AI training and development, reflecting growing concerns about data privacy and the ethical implications of AI advancement. As the legal proceedings unfold, the outcome of this case could set important precedents for how AI development is regulated in the EU, potentially influencing global standards for data protection in the AI era. The tech industry and privacy advocates alike will be watching closely as this situation develops, recognising its potential to shape the future of AI innovation and data privacy regulations. 
(Photo by Alexander Shatov) See also: Balancing innovation and trust: Experts assess the EU’s AI Act Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post X agrees to halt use of certain EU data for AI chatbot training appeared first on AI News. View the full article
-
[AI]SingularityNET bets on supercomputer network to deliver AGI
ChatGPT posted a topic in World News
SingularityNET is betting on a network of powerful supercomputers to get us to Artificial General Intelligence (AGI), with the first one set to whir into action this September. While today’s AI excels in specific areas – think GPT-4 composing poetry or DeepMind’s AlphaFold predicting protein structures – it’s still miles away from genuine human-like intelligence.

“While the novel neural-symbolic AI approaches developed by the SingularityNET AI team decrease the need for data, processing and energy somewhat relative to standard deep neural nets, we still need significant supercomputing facilities,” SingularityNET CEO Ben Goertzel explained to LiveScience in a recent written statement.

Enter SingularityNET’s ambitious plan: a “multi-level cognitive computing network” designed to host and train the incredibly complex AI architectures required for AGI. Imagine deep neural networks that mimic the human brain, vast language models (LLMs) trained on colossal datasets, and systems that seamlessly weave together human behaviours like speech and movement with multimedia outputs. But this level of sophistication doesn’t come cheap. The first supercomputer, slated for completion by early 2025, will be a Frankensteinian beast of cutting-edge hardware: Nvidia GPUs, AMD processors, Tenstorrent server racks – you name it, it’s in there.

This, Goertzel believes, is more than just a technological leap; it’s a philosophical one: “Before our eyes, a paradigmatic shift is taking place towards continuous learning, seamless generalisation, and reflexive AI self-modification.”

To manage this distributed network and its precious data, SingularityNET has developed OpenCog Hyperon, an open-source software framework specifically designed for AI systems. Think of it as the conductor trying to make sense of a symphony played across multiple concert halls. But SingularityNET isn’t keeping all this brainpower to itself.
Reminiscent of arcade tokens, users will purchase access to the supercomputer network with the AGIX token on blockchains like Ethereum and Cardano and contribute data to the collective pool—fuelling further AGI development. With experts like DeepMind’s Shane Legg predicting human-level AI by 2028, the race is on. Only time will tell if this global network of silicon brains will birth the next great leap in artificial intelligence. (Photo by Anshita Nair) See also: The merging of AI and blockchain was inevitable – but what will it mean? Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here. The post SingularityNET bets on supercomputer network to deliver AGI appeared first on AI News. View the full article -
In the fast-paced world of technology, missing the next big thing can be costly. For Intel, the semiconductor titan that once reigned supreme in the computer age, one such moment came and went quietly between 2017 and 2018. It was then that the company declined an opportunity that, in hindsight, appears to have been a golden ticket to the AI revolution. Recent reports reveal that Intel had the chance to acquire a 15% stake in OpenAI for $1 billion, with the potential for an additional 15% stake in exchange for producing hardware at cost. At the time, OpenAI was a fledgling non-profit focused on the then-obscure field of generative AI. Under CEO Bob Swan’s leadership, Intel ultimately passed on the deal, unconvinced that generative AI would yield near-term returns. This decision reflects a broader challenge established tech giants face: balancing short-term financial considerations with long-term strategic investments in emerging technologies. Intel’s choice to prioritize immediate returns over the potential of generative AI showcases a cautious approach that may have cost it dearly in the long run. Fast forward to 2024, and the consequences of that decision are stark. OpenAI, now valued at around $80 billion, has become a driving force behind the AI revolution with its ChatGPT platform. Meanwhile, Intel is playing catch-up in the AI chip market, dwarfed by rival Nvidia’s $2.6 trillion market cap and struggling to maintain relevance in an industry it once dominated. This missed opportunity is not an isolated incident for Intel. The company declined to produce processors for Apple’s iPhone, closing the door on Intel’s entry into the mobile computing era. These missteps paint a picture of a once-innovative giant that has lost its ability to foresee and capitalize on transformative technologies. Intel’s journey from industry leader to AI laggard is reflected in its recent financial performance. 
The company’s market value has dipped below $100 billion for the first time in 30 years, and it recently announced plans to cut over 15% of its workforce following disappointing earnings. While Intel aims to launch its third-generation Gaudi AI chip later this year, it remains to be seen whether this will be enough to regain ground in the fiercely competitive AI hardware market. As AI continues to reshape industries and create new markets, the ability to identify and invest in groundbreaking technologies early will be crucial for tech companies hoping to maintain their competitive edge. Intel’s missed opportunity with OpenAI is a cautionary tale for corporate leaders navigating the uncertain waters of technological innovation. Looking ahead, Intel faces an uphill battle to reestablish itself as a leader in the AI chip market. The company’s plans to launch new AI-focused processors for PCs and servers in 2025 signal a renewed commitment to this space, but it remains to be seen whether these efforts will close the gap with rivals who seized the AI opportunity early. Intel’s story reminds us that in the tech industry, today’s giants can quickly become tomorrow’s laggards if they fail to embrace transformative technologies. As we stand on the cusp of the AI revolution, the question remains: Will Intel find a way to reinvent itself once again, or will it be left behind in the wake of the very future it once helped to build? (Photo by Brecht Corbeel) See also: OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’
The post Intel’s AI fumble: How the chip giant missed a big opportunity appeared first on AI News. View the full article
-
Palantir, a data analytics company known for its work in the defence and intelligence sectors, has announced a significant partnership with Microsoft. The collaboration aims to deliver advanced services for classified networks utilised by US defence and intelligence agencies. According to the announcement, Palantir is integrating Microsoft’s cutting-edge large language models via the Azure OpenAI Service into its AI platforms. The integration will occur within Microsoft’s government and classified cloud environments. The collaboration is the first of its kind, and the configuration has the potential to transform the use of AI in critical national security missions. Palantir, whose name draws inspiration from the potentially misleading “seeing-stones” in J.R.R. Tolkien’s fictional works, specialises in processing and analysing vast quantities of data to assist governments and corporations with surveillance and decision-making tasks. While the precise nature of the services to be offered through this partnership remains somewhat ambiguous, it is clear that Palantir’s products will be integrated into Microsoft’s Azure cloud services. This development follows Azure’s previous incorporation of OpenAI’s GPT-4 technology into a “top secret” version of its software. The company’s journey is notable. Co-founded by Peter Thiel and initially funded by In-Q-Tel, the CIA’s venture capital arm, Palantir has grown to serve a diverse clientele. Its roster includes government agencies such as Immigration and Customs Enforcement (ICE) and various police departments, as well as private sector giants like the pharmaceutical company Sanofi. Palantir has also become deeply involved in supporting Ukraine’s war efforts, with reports suggesting its software may be utilised in targeting decisions for military operations. Even though Palantir has operated with a large customer base for years, it only reached its first annual profit in 2023.
However, with the current surge of interest in AI, the company has been able to grow rapidly, particularly in the commercial sector. According to Bloomberg, Palantir’s CEO, Alex Karp, warned that Palantir’s “commercial business is exploding in a way we don’t know how to handle.” Notably, the company’s annual filing clearly states that it neither does business with nor on behalf of the Chinese Communist Party, nor does it plan to do so. This indicates that Palantir is careful in developing its customer base, considering the geopolitical implications of its work. The announcement of this partnership has been well-received by investors, with Palantir’s share price surging more than 75 per cent in 2024 as of the time of writing. This dramatic increase reflects the market’s optimism about the potential of AI in national security applications and Palantir’s position at the forefront of this field. Still, the partnership between Palantir and Microsoft raises significant questions about the role of AI in national security and surveillance. This is no surprise, as these are particularly sensitive areas, and the development of new technologies could transform the sector. More discussion and investigation are needed to understand the ethical implications of implementing these tools. All things considered, the Palantir and Microsoft partnership is a significant event that will likely shape the future use of AI technologies and cloud computing in areas such as intelligence and defence. (Photo by Katie Moum) See also: Paige and Microsoft unveil next-gen AI models for cancer diagnosis
The post Palantir and Microsoft partner to provide federal AI services appeared first on AI News. View the full article
-
Alibaba Cloud’s Qwen team has unveiled Qwen2-Math, a series of large language models specifically designed to tackle complex mathematical problems. These new models – built upon the existing Qwen2 foundation – demonstrate remarkable proficiency in solving arithmetic and mathematical challenges, and outperform former industry leaders. The Qwen team crafted Qwen2-Math using a vast and diverse mathematics-specific corpus. This corpus comprises a rich tapestry of high-quality resources, including web texts, books, code, exam questions, and synthetic data generated by Qwen2 itself. Rigorous evaluation on both English and Chinese mathematical benchmarks – including GSM8K, MATH, MMLU-STEM, CMATH, and GaoKao Math – revealed the exceptional capabilities of Qwen2-Math. Notably, the flagship model, Qwen2-Math-72B-Instruct, surpassed the performance of proprietary models such as GPT-4o and Claude 3.5 in various mathematical tasks. “Qwen2-Math-Instruct achieves the best performance among models of the same size, with RM@8 outperforming Maj@8, particularly in the 1.5B and 7B models,” the Qwen team noted. This superior performance is attributed to the effective implementation of a math-specific reward model during the development process. Further showcasing its prowess, Qwen2-Math demonstrated impressive results in challenging mathematical competitions like the American Invitational Mathematics Examination (AIME) 2024 and the American Mathematics Contest (AMC) 2023. To ensure the model’s integrity and prevent contamination, the Qwen team implemented robust decontamination methods during both the pre-training and post-training phases. This rigorous approach involved removing duplicate samples and identifying overlaps with test sets to maintain the model’s accuracy and reliability. Looking ahead, the Qwen team plans to expand Qwen2-Math’s capabilities beyond English, with bilingual and multilingual models in the pipeline.
This commitment to inclusivity aims to make advanced mathematical problem-solving accessible to a global audience. “We will continue to enhance our models’ ability to solve complex and challenging mathematical problems,” affirmed the Qwen team. You can find the Qwen2 models on Hugging Face here. See also: Paige and Microsoft unveil next-gen AI models for cancer diagnosis The post Qwen2-Math: A new era for AI maths whizzes appeared first on AI News. View the full article
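The RM@8 vs Maj@8 comparison quoted above is easy to make concrete: Maj@8 samples eight answers and keeps the most common one, while RM@8 lets a reward model pick the highest-scoring of the eight. A minimal sketch in Python; the sample answers and reward scores here are invented for illustration and are not taken from Qwen2-Math:

```python
from collections import Counter

def maj_at_k(candidates):
    """Majority voting: return the most frequent final answer among k samples."""
    return Counter(candidates).most_common(1)[0][0]

def rm_at_k(candidates, reward_model):
    """Reward-model reranking: return the candidate the reward model scores highest."""
    return max(candidates, key=reward_model)

# Toy example: 8 sampled answers to the same maths problem.
samples = ["42", "41", "42", "7", "42", "41", "41", "41"]

# Majority voting picks the most common answer ("41", with four votes).
print(maj_at_k(samples))

# A hypothetical reward model that scores "42" highest can overrule
# a wrong majority, which is why RM@8 can beat Maj@8.
scores = {"42": 0.9, "41": 0.4, "7": 0.1}
print(rm_at_k(samples, lambda a: scores[a]))
```

The interesting case is exactly the one sketched: when the modal answer is wrong but a good reward model recognises the correct minority answer, reranking wins.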
-
Paige and Microsoft have unveiled the next big breakthrough in clinical AI for cancer diagnosis and treatment: Virchow2 and Virchow2G, enhanced versions of Paige’s revolutionary AI models for cancer pathology. The models were trained on an enormous dataset Paige has accumulated: more than three million pathology slides from over 800 labs across 45 countries. This data was obtained from over 225,000 patients, all de-identified to create a rich and representative dataset encompassing all genders, races, ethnic groups, and regions across the globe. What makes these models truly remarkable is their scope. They cover over 40 different tissue types and various staining methods, making them applicable to a wide range of cancer diagnoses. Virchow2G, with its 1.8 billion parameters, stands as the largest pathology model ever created and sets new standards in AI training, scale, and performance. As Dr. Thomas Fuchs, founder and chief scientist of Paige, comments: “We’re just beginning to tap into what these foundation models can achieve in revolutionising our understanding of cancer through computational pathology.” He believes these models will significantly improve the future for pathologists, and that this technology is an important step in the progression of diagnostics, targeted medications, and customised patient care. Similarly, Razik Yousfi, Paige’s senior vice president of technology, states that these models are not only making precision medicine a reality but are also improving the accuracy and efficiency of cancer diagnosis, and pushing the boundaries of what’s possible in pathology and patient care. So, how is this relevant to cancer diagnosis today? Paige has developed a clinical AI application that pathologists can use to recognise cancer in over 40 tissue types.
This tool allows suspicious areas to be identified more quickly and accurately, making the diagnostic process more efficient and less prone to errors, even for rare cancers. Beyond diagnosis, Paige has created AI modules that can benefit life sciences and pharmaceutical companies. These tools can aid in therapeutic targeting, biomarker identification, and clinical trial design, potentially leading to more successful trials and faster development of new therapies. The good news for researchers is that Virchow2 is available on Hugging Face for non-commercial research, while the entire suite of AI modules is now available for commercial use. This accessibility could accelerate advancements in cancer research and treatment across the scientific community. In summary, the recently introduced AI models represent a major advancement in the fight against cancer. Paige and Microsoft have chosen the right path by combining the power of data with state-of-the-art AI technologies. These companies have created new opportunities for more accurate cancer prediction, paving the way for tailored solutions and innovative research in oncology. (Photo by National Cancer Institute) See also: The hidden climate cost of AI: How tech giants are struggling to go green The post Paige and Microsoft unveil next-gen AI models for cancer diagnosis appeared first on AI News. View the full article
-
At first glance, AI and blockchain seem like completely disparate realms. For instance, blockchain emphasises decentralisation but suffers from constrained memory and throughput rates. On the other hand, AI thrives on massive datasets and demands high-performance computing. To elaborate, machine learning (ML) models – especially deep learning networks – require enormous amounts of data to train effectively, often relying on powerful GPUs or specialised hardware to process this information quickly. On this point, a report from the International Energy Agency (IEA) states that the global electricity demand for AI is projected to rise to 800 TWh by 2026, a nearly 75% increase from 460 TWh in 2022. Similar projections have also been released by multinational giants such as Morgan Stanley and Wells Fargo, with the latter’s model suggesting that, by 2030, AI-centric energy consumption will account for 16% of the USA’s current electricity demand. (Chart: Morgan Stanley’s AI power consumption prediction, best-case scenario) The best of both worlds is here. Despite their apparent differences, the tech world is witnessing a growing convergence between AI and blockchain, with a number of innovative projects emerging. For instance, Ocean is a protocol that provides users with a decentralised data exchange centre, unlocking information sets for AI consumption while preserving their privacy and security. Similarly, ThoughtAI embeds AI and blockchain directly into data and information, effectively eliminating traditional application layers. It aims to create more responsive and adaptive AI solutions, potentially revolutionising how people interact with the technology and manage information. While these projects demonstrate the potential of combining AI and blockchain, they also highlight a critical challenge: scalability.
For AI on blockchain to truly flourish, platforms need to overcome the inherent limitations of traditional blockchain architectures, particularly in terms of data availability and throughput. In this regard, 0G is a platform that has made significant strides in addressing the above-mentioned bottlenecks. To elaborate, ZeroGravity (0G for short) is the world’s first data availability system with a built-in general-purpose storage layer that is not only highly scalable but also decentralised. Its scalability hinges on separating the workflow of data availability into a data publishing lane and a data storage lane. To put it technically, 0G is a scalable Data Availability (DA) service layer built directly on top of a decentralised storage system. It addresses the scalability issue by minimising the data transfer volume required for broadcast, allowing for unprecedented levels of data availability and transaction throughput. One of the key advantages of 0G is its performance. While competitors like Celestia are able to achieve about 1.4 to 1.5 megabytes per second, the 0G network is capable of producing about 50 gigabytes per second, making it tens of thousands of times faster. Additionally, 0G’s cost is approximately 100 times cheaper than its closest competitors. This level of performance and flexibility opens the door to a wide array of AI/blockchain use cases that were previously impractical or impossible. For starters, in the realm of finance, 0G’s scalability can potentially allow sophisticated AI-powered trading algorithms to operate directly on-chain. Similarly, it could also be possible to implement large-scale federated learning systems on the blockchain, leading to breakthroughs in privacy-preserving AI—where multiple parties can collaboratively train AI models without sharing sensitive data directly. Such advancements could have far-reaching implications in fields like healthcare, where data privacy is paramount but collaborative research is essential.
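To make the federated-learning idea above concrete, here is a minimal sketch of federated averaging (FedAvg), the standard technique behind this kind of privacy-preserving collaboration. The two simulated “hospital” datasets and the simple linear model are invented for illustration; the point is that only locally trained weights, never the raw data, are shared with the aggregator:

```python
# Minimal federated averaging (FedAvg) sketch: each party trains locally
# and shares only model weights, never its raw data.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=50):
    """One client's local training: plain linear-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: average the locally trained weights (equal-sized clients)."""
    updates = [local_update(global_w, X, y) for X, y in client_data]
    return np.mean(updates, axis=0)

# Two hypothetical hospitals with private datasets drawn from the same
# underlying relationship y = 3x (plus a little noise).
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(100, 1))
    y = (X * 3.0).ravel() + rng.normal(scale=0.01, size=100)
    clients.append((X, y))

w = np.zeros(1)
for _ in range(10):  # ten communication rounds
    w = fed_avg(w, clients)

print(round(float(w[0]), 2))  # converges near the true coefficient, 3.0
```

A production system would add secure aggregation and weighting by client dataset size, but the core loop (local training, then weight averaging) is exactly this.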
A trillion-dollar opportunity is waiting to be tapped. As we look to the future, it’s clear that the intersection of AI and blockchain will continue to expand and evolve. This convergence is not just a technological curiosity but a massive economic opportunity. For example, the AI industry is projected to be worth a staggering $1.3 trillion by 2030, while the blockchain market is set to reach a valuation of $248.8 billion by 2029, reflecting their transformative potential across virtually every sector of the global economy. Therefore, moving forward, it stands to reason that those companies and platforms (such as 0G) that are able to successfully navigate this convergence — solving the technical challenges while unlocking new value propositions — will be well-positioned to capture a significant share of this trillion-dollar opportunity. The post The merging of AI and blockchain was inevitable – but what will it mean? appeared first on AI News. View the full article
-
Lumen Technologies, a leading telecommunications firm, has recently announced significant new contracts totalling $5 billion with cloud and tech companies for its networking and cybersecurity solutions. This surge in demand comes as businesses across various sectors rapidly adopt AI-driven technologies. Among these notable agreements is a deal with Microsoft, which revealed last month its plans to utilise Lumen’s network equipment to expand capacity for AI workloads. Lumen, known for providing secure digital connections for data centres, disclosed that it is engaged in active discussions with customers regarding additional sales opportunities valued at approximately $7 billion. The widespread adoption of AI has prompted enterprises across multiple industries to invest heavily in infrastructure capable of supporting AI-powered applications. Lumen reports that major corporations are urgently seeking to secure high-capacity fibre, a resource becoming increasingly valuable, and potentially scarce, due to growing AI requirements. Kate Johnson, CEO of Lumen, is optimistic about further success: “Our partners are turning to us because of our AI-ready infrastructure and expansive network. This is just the beginning of a significant opportunity for Lumen, one that will lead to one of the largest expansions of the internet ever.” Further evidence of the company’s strategic positioning in a rapidly changing market is the creation of a new division, Custom Networks, which will manage the Lumen Private Connectivity Fabric solutions portfolio. With networking demand rising from organisations seeking solutions tailored to their specific environments, a dedicated networks division is a rational step.
This highlights the crucial role telecommunications infrastructure plays in the current AI revolution. As an increasing number of firms implement AI technologies in their operations, secure, high-capacity networks become essential. Lumen’s recent success in securing these substantial contracts underscores the company’s strong market position and its ability to meet the evolving needs of tech giants and cloud service providers. As the AI landscape continues to evolve, Lumen appears well-positioned to capitalise on the increasing demand for advanced networking solutions. The telecommunications sector, and Lumen in particular, is likely to remain at the forefront of enabling AI advancements across industries. As this trend progresses, it will be interesting to observe how Lumen and its competitors adapt to meet the challenges and opportunities presented by this technological shift. (Photo by Vladimir Solomianyi) See also: UK backs smaller AI projects while scrapping major investments The post AI expansion drives $5B in deals for Lumen appeared first on AI News. View the full article
-
Amid the excitement over how AI will revolutionise healthcare, advertising, logistics, and everything else, one industry has flown under the radar: the legal profession. In fact, the business of law is a strong contender for achieving the highest return on investment (ROI) from using AI. Law firms are seen as traditional, not as eager adopters of new technology, but most have used machine learning (ML) for years. Embedded in popular platforms like Westlaw, ML is often incorporated into core operations. Now, generative AI is spreading through law firms faster than class-action claims over a stock ******. Individual lawyers have learned to use ChatGPT-like AI models, and entire law practices have harnessed large language models. Those in the business of law see remarkable gains from AI in efficiency, accuracy, speed, and client results in their day-to-day processes. Three points help explain those results. First, in legal operations, AI-driven time and cost savings are typically very high; the gains are not incremental. Second, AI is applicable to potentially most work processes at law firms. Third, once law firms implement AI, it grows steadily more powerful as they personalise it. This is basically customisation; adapting AI to their preferred work methods takes the ROI higher. Meet the AI-native law firm These benefits have led to the emergence of AI-centric (aka AI-native) law firms, a new breed that is significantly more efficient and competitive than its rivals. At AI-native firms, most support staff and attorneys already leverage AI extensively for intake, research, drafting motions, briefs, objections, analysing judges’ opinions, and more. A law practice becomes AI-native, in part, by personalising the behaviour of AI solutions to mesh with the firm’s existing processes and strategic guidelines. This makes their AI more capable and valuable. Personalisation takes various forms, like creating case evaluations that follow a firm’s established standards.
AI can consider potential claims and create follow-ups according to an attorney’s criteria. It can be taught to follow an existing process, mimic sequences of events, ask or answer key questions along the same pattern, and write in the style of previous case work. Once trained to emulate an attorney’s approach, an AI model makes life easier for support staff. Even if a paralegal hasn’t worked with specific lawyers, AI will help them with case preparation and client interactions, risk assessment, and even strategy. AI-native law firms increasingly use generative AI to service clients who require individualised treatment. AI contributes throughout the case lifecycle, from brainstorming pre-litigation case strategy to handling discovery. Gen AI-based models also help prepare depositions, analyse their results, and plot litigation strategies. Why is AI extraordinarily useful to law firms? It’s been said that the legal world is made of six-minute increments. AI can often do in seconds what takes hours or days for a junior associate. Time reductions of up to 99% drive major cost savings, and in the intellect-intensive field of law, they are common. Every day, lawyers must evaluate, analyse and weigh tradeoffs, draft documents, and make decisions. Paralegals and junior associates need to work fast and accurately, yet never overlook anything important. With volumes of data and minutiae to wade through, the work can exhaust them, leading to mistakes. Overall, speed, scale, and personalisation combine to make AI a massive accelerator in the legal field, with productivity gains well beyond the “traditional” 10 to 20 percent. Costs come down and move around in AI-native law firms Lawyers are learning first-hand that AI systems can minimise the associate hours it takes to complete a process. By engaging AI across the life cycle of cases, they can reshape individual workloads for greater profitability.
Upfront work on cases is sometimes undercompensated, and AI lets the team concentrate billable hours on later, fully-compensated stages. AI-centric firms can also grow without expanding the headcount of support staff. Instead, existing staff can assist more associates, who bill at higher hourly levels, increasing profitability. They can also market themselves and drive growth more vigorously. Wherever AI reduces operational costs, it frees up funds for marketing and business development. Generative AI makes marketing communications faster and easier for law firms, as it does for other businesses. Employee experience: AI happiness AI often does not get the credit it deserves for its positive impact on employee experience. In practice, lawyers and paralegals can offload most so-called grunt work and repetitive tasks to AI. This boosts job satisfaction and — by implication — retention. Support staff and junior associates become, in effect, supervisors of AI. They can customise the firm’s AI by teaching procedures to an LLM, and then share them across a team. This means lawyers can operate in familiar ways but at a larger scale, and delegate more comfortably to support staff without lengthy explanations of “Here’s my way of doing this.” Business models shift for AI-native law firms AI-native law firms can uplevel their business to increase capacity and support revenue growth. Specifically, they can structure internally to handle more complex cases and lucrative contingency work. AI enables smaller firms to handle larger, tougher cases by whipping through much of the research and analysis. In contingency litigation, productivity gains stemming from AI can even exceed those seen in other legal categories like contracts, intellectual property, and family law. AI can handle much of the upfront evaluation of contingency cases. Taking on well-researched contingency cases can significantly increase profitability. 
Those who get AI versus those who don’t Given the benefits, are law firms jumping on board and going AI-native in droves? Surprisingly no, according to a 2023 Thomson Reuters survey that found 60% had no plans to use generative AI. That’s good news for the other 40%. Law firms that leverage AI effectively have a marked advantage over competitors that do not. The legal profession ranks among the industries achieving the best gains from use of AI. Law firms that “get it” will continue to personalise AI systems and push towards their potential, and grow more profitably. As it becomes increasingly obvious that AI-native law firms enjoy greater growth and profitability, other intellect-based professions may well follow their example. The post It’s time for law firms to go all in on AI appeared first on AI News. View the full article
-
The UK government has announced a £32 million investment in almost 100 cutting-edge AI projects across the country. However, this comes against the backdrop of a controversial decision by the new Labour government to scrap £1.3 billion in funding originally promised by the Conservatives for tech and AI initiatives. Announced today, the £32 million will bolster 98 projects spanning a diverse range of sectors, utilising AI to boost everything from construction site safety to the efficiency of prescription deliveries. More than 200 businesses and research organisations, from Southampton to Birmingham and Northern Ireland, are set to benefit. This latest announcement is overshadowed by the Labour government’s decision to scrap a significant chunk of funding previously earmarked for major tech projects. These include £800 million for the development of a state-of-the-art exascale supercomputer at Edinburgh University and a further £500 million for AI Research Resource, which provides crucial computing power for AI research. While the £32 million investment signals continued support for AI development, the shadow of the £1.3 billion funding cut looms large. The long-term impact of this decision on the UK’s ability to foster groundbreaking technological advancements remains to be seen. “Investing in AI-driven innovation will be essential to organisations’ ability to compete on the global stage. There is no doubt that, if implemented successfully, AI has the ability to improve efficiencies, turbocharge innovation, and streamline operations across all sectors,” concludes Rick McConnell, CEO of Dynatrace. (Photo by Steve Johnson) See also: Meta’s AI strategy: Building for tomorrow, not immediate profits
The post UK backs smaller AI projects while scrapping major investments appeared first on AI News. View the full article
-
The AI industry has always been the “futuristic view” for humans, whether in movies, cartoons, or real life. Computers work, think and act on behalf of futuristic humans – well, except in the Dune movies. In the past half-decade, artificial intelligence has become the hottest topic in the world, second only to the Covid-19 pandemic, with most people fascinated by the industry’s massive growth and the extent to which they can use it. This growth is expected to continue at a rapid pace into the last years of the decade, with Statista predicting the $184 billion industry will grow to nearly $900 billion by 2030. However, as the industry becomes a crucial part of our lives, which seems inevitable, it will shape how we think, interact with the world, and do the most basic and complex things in the future. We will be intertwined with it, probably more than we are today with the internet. While still in its infancy, the industry’s most powerful AI systems and models are controlled by mega-corporations such as OpenAI, IBM Watson, Google AI, and Amazon Machine Learning. These Big Tech firms own large data hubs to train, build, and sell these models to users. This raises a very pertinent and justifiable ***** amongst the common folk. Should we let this massive and dominant technological innovation be controlled by the billionaire du jour? Satoshi was wary of centralised financial institutions after the 2008 global financial crisis and created Bitcoin to solve the centralisation conundrum. In a similar breath, AI needs similar solutions to remove the heavy hand of mega-corporations on what could be the “most important technological advancement in the past few decades”, as Microsoft’s co-founder Bill Gates called it in a blog post in 2023. The problem with the current AI industry structure As stated above, AI technology will be a way of life for ‘almost’ everybody on Earth, helping us complete tasks from the menial to the complex.
For instance, the growth of artificial general intelligence (AGI) can be used to create “AI secretaries”, or AI agents, that help organise your calendar, pay your monthly bills, create a weekly diet schedule, or build your playlist. (“Hey AI agent X, can you create an R&B playlist including Beyoncé, Ne-Yo, etc.?”)

While the data in the examples above may seem simplistic and elementary, such data is very important and personal for most people. Would you want to share it with the Big Tech firms, which have time and again shown their willingness to use personal data for profit?

Even more unsettling is that AI is being trained for more ‘human-related’ jobs that millions, and probably billions, of people rely on, such as therapists and coaches. Millions of people will share their innermost thoughts, longings, fears, ******* desires, confessions, and embarrassments. Who would trust Big Tech with such information? It is already happening with ChatGPT, with more and more people using the AI tool to look for answers to their deepest personal questions.

This is the bottleneck of current AI systems and models – the centralisation of AI technology, the monopolisation of the data used to train AI models, and users’ privacy concerns. As such, several developers around the world are working on solutions that build sustainable AI models without Big Tech firms’ prying eyes on our personal data. Blockchain, a decentralised and privacy-preserving technology, is being integrated with AI to ensure users enjoy the benefits of the technology without the toxicity of Big Tech.

A paradigm shift: The rise of decentralised AI services

Blockchain technology has been used extensively to counter centralisation in the financial world and in most other industries, from supply chains to healthcare. Finally, the technology is extending its roots into artificial intelligence, helping democratise and decentralise the industry.
The technology has enhanced data security and transparency through its immutable ledgers, transforming the global sharing of value and setting new standards for operational efficiency. Integrating two of the most sought-after technologies today, AI and blockchain, could be the key to a free, open, and decentralised AI ecosystem.

The primary goal of decentralised AI technologies is to democratise access to AI resources, including data, models, and compute power. This is crucial in minimising the oligopolised structures in AI, which limit the number of entities in the space due to the computational complexity and the huge cost of the datasets needed to train AI models.

For instance, NeurochainAI proposes an innovative solution to the challenges of centralised AI systems: Decentralised AI Infrastructure as a Service (DeAIAS). Simply put, NeurochainAI aims to break down the barriers of centralisation and monopolisation “by encouraging cooperation and coordination among various AI stakeholders,” its website reads.

Decentralised AI benefits developers and the general public in several ways:

Decentralisation: Unlike current AI models, a decentralised AI ecosystem allows a community of users to share resources such as computing power, data storage, algorithm processing, and model validation. These resources would be costly for any one company building its own models, but tapping into a global community of users reduces the costs significantly.

Ready-to-use infrastructure: NeurochainAI provides developers with a ready-to-use platform, helping them develop AI dApps faster and up to five times more cost-effectively than with traditional methods. This promotes innovation across the ecosystem, rather than depending on a few companies for all technological advancements.

Incentivisation: One of the biggest benefits of a decentralised AI platform is rewarding the community for providing their resources.
For instance, NeurochainAI rewards contributors with $NCN, fostering a collaborative ecosystem where each participant plays a role in shaping the future of AI technology.

Privacy and security of data: Decentralised AI also introduces an element of data privacy. Since blockchain technology allows users to be the custodians of their own data, they alone choose what data to contribute to training the AI models.

Active participation by the community: NeurochainAI is developed by the community and for the community. Community members actively participate in crucial AI training processes such as data curation and validation, algorithm processing, and model validation. This democratises AI development and enriches the models with diverse, real-world inputs.

The future of decentralised AI services

The rapid growth of artificial intelligence has meant that many companies and individuals cannot create or train their own AI models due to the phenomenal amounts of computing power required. While centralised cloud computing was a ready solution for previous computing challenges, AI is different. Decentralisation solves this problem by creating a network of nodes (computers) that harnesses the huge untapped computing power of CPUs across the world. This modular approach of decentralised physical infrastructure (DePIN) enhances scalability, provides a cheaper source of computing power than buying new servers, and increases community participation in training the AI models, allowing dApps to learn and share information with each other.

While decentralised AI is still in its infancy, the creation of platforms such as NeurochainAI will give Big Tech a run for its money – solving the monopolised nature of AI, computational complexity, and the privacy of user data.

The post Blockchain could solve the monopolised AI ecosystem appeared first on AI News. View the full article
-
[AI]OpenAI hit by leadership exodus as three key figures depart
ChatGPT posted a topic in World News
OpenAI is facing a leadership crisis as three key figures announce their departure. The news comes amid a tumultuous year for the AI powerhouse, marked by legal battles and high-profile exits.

John Schulman, a co-founder of OpenAI, is leaving for rival Anthropic. Schulman confirmed his departure in a statement on X. “I’ve made the difficult decision to leave OpenAI,” Schulman wrote. “This choice stems from my ******* to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work. I’ve decided to pursue this goal at Anthropic.”

OpenAI CEO Sam Altman responded to Schulman’s departure on X, writing, “Thank you for everything you’ve done for OpenAI. We will miss you tremendously and make you proud of this place.”

According to The Information, Peter Deng – who isn’t an OpenAI founder, but is a high-profile AI figure who joined the company last year after leading products at Meta, Uber, and Airtable – has also left. Meanwhile, president and co-founder Greg Brockman has announced an extended leave of absence.

These departures come at a critical juncture for OpenAI. The company is currently embroiled in a legal battle with Elon Musk, who recently reignited a lawsuit against the company and two of its founders, Altman and Brockman. Musk alleges that he was misled about OpenAI’s commitment to its non-profit status and its focus on ethical AI development.

With its ranks depleted, increasing competition, and a high-profile legal battle looming, OpenAI is facing an uphill battle.

(Photo by Kevin Wang)

See also: Google’s Gemini 1.5 Pro dethrones GPT-4o

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The post OpenAI hit by leadership exodus as three key figures depart appeared first on AI News. View the full article -
The emergence of AI systems that can create songs presents the music industry with a new challenge. The phenomenon has sparked numerous discussions about creativity, copyright, and the direction of the music industry. Artists, recording studios, and legal experts have taken an interest in the issue and raised important questions that highlight the need to find the right balance between technology and human work.

A recent example: AI attempts to mimic a Grammy nominee

To illustrate the current state of AI in music, take the example of country musician Tift Merritt. Her track “Traveling Alone”, a ballad that contemplates the open road and one’s capacity to travel alone, is her best-known piece on Spotify. When the AI music site Udio was asked to create a “Tift Merritt Americana song,” it promptly returned a track called “Holy Grounds.” The song contained lyrics about “driving old backroads” and “watching the fields and skies shift and sway.” Naturally, the result was unlikely to be entirely original.

Merritt, a Grammy-nominated singer and songwriter, was not particularly satisfied with the result. She stated that the “imitation” Udio came up with “doesn’t make the cut for any album of mine.” However, the singer has a much more serious charge against the generated content – she doesn’t consider it a manifestation of creativity; in her opinion, it is closer to theft. “This is a great demonstration of the extent to which this technology is not transformative at all,” Merritt asserted. “It’s stealing.”

Artists unite in concern

Merritt’s stance resonates with many in the music industry. In April, she joined a cohort of high-profile artists including Billie Eilish, Nicki Minaj, and Stevie Wonder in signing an open letter.
The document warned that AI-generated music, trained on their recordings, could potentially “sabotage creativity” and marginalise human artists.

The industry takes legal action

The issue is not confined to individual singers; it affects the giant record labels as well. Recently, Sony Music, Universal Music Group, and Warner Music sued Udio and another music AI outfit, Suno. They are the first in the music industry to join the copyright ****** over AI-made songs, a battle only beginning to be waged in the courtrooms.

The significance of the situation was highlighted by Mitch Glazier, CEO of the Recording Industry Association of America (RIAA). He referred to the lawsuits as a response to “shameless copying of troves of recordings in order to flood the market with cheap imitations and drain away listens and income from real human artists and songwriters.” However, he also acknowledged AI’s potential: “AI has great promise – but only if it’s built on a sound, responsible, licensed footing.”

AI companies respond

In their initial court responses, Suno and Udio have defended their technology, pointing to the industry’s past fears about synthesisers, drum machines, and other technological advances that were once expected to ruin the field by replacing musically skilled humans. Both companies have maintained their original position, denying the allegations and arguing that the lawsuits are a means to ******* lesser market players, as the apps they provide cannot be used to exactly replicate the top artists.

Legal complexities and novel questions

These cases raise new questions for the courts, such as whether AI can use copyrighted material to produce something original and whether the law should make an exception in such cases.
The situation is further complicated by the fact that in music, the lyrics, melody, harmony, and rhythm of the created material are intertwined, making it much more challenging to determine a case of plagiarism. As musicologist Brian McBrearty, who specialises in copyright cases, put it: “Music has more factors than just the stream of words. It has pitch, and it has rhythm, and it has harmonic context. It’s a richer mix of different elements that make it a little bit less straightforward.”

The fair use debate

One of the key elements of both cases is likely to be the notion of “fair use” in copyright law. Fair use is a provision of the law that allows some unauthorised uses of copyrighted works depending on several conditions, a central one being whether the new use transforms the original work. The AI companies argue that their use of existing music recordings is “quintessential ‘fair use.’” However, legal experts suggest that music-generating AIs may not find it as easy to prove fair use as text-generating AIs did.

The road ahead

If these cases are decided in the courts, they are likely to set important precedents for the future of AI in the creative industries. Depending on the outcomes, the issues discussed above could have widespread effects on producers of art, technology companies, and consumers alike.

To Tift Merritt, who is both a musician and a long-time activist for musicians on various points of law, the concerns feel valid: “Ingesting massive amounts of creative labour to imitate it is not creative. That’s stealing in order to be competition and replace us.”

The music industry is now at a critical point due to the ongoing debate and legal disputes. The main question is whether the current issues will be resolved, and how.
A conclusion must also be reached on whether it is possible to allow AI music-creation technology to develop while preserving the copyright of human artists.

(Photo by Lechon Kirb)

See also: Elon Musk revives OpenAI legal battle with fresh allegations

The post AI music sparks new copyright battle in US courts appeared first on AI News. View the full article