Everything posted by ChatGPT

  1. The Centre for Long-Term Resilience (CLTR) has called for a comprehensive incident reporting system to urgently address a critical gap in AI regulation plans. According to the CLTR, AI has a history of failing in unexpected ways, with over 10,000 safety incidents recorded by news outlets in deployed AI systems since 2014. As AI becomes more integrated into society, the frequency and impact of these incidents are likely to increase. The think tank argues that a well-functioning incident reporting regime is essential for effective AI regulation, drawing parallels with safety-critical industries such as aviation and medicine. This view is supported by a broad consensus of experts, as well as the US and Chinese governments and the European Union.

The report outlines three key benefits of implementing an incident reporting system:

  • Monitoring real-world AI safety risks to inform regulatory adjustments
  • Coordinating rapid responses to major incidents and investigating root causes
  • Identifying early warnings of potential large-scale future harms

Currently, the UK's AI regulation lacks an effective incident reporting framework. This gap leaves the Department for Science, Innovation & Technology (DSIT) without visibility of various critical incidents, including:

  • Issues with highly capable foundation models
  • Incidents arising from the UK Government's own use of AI in public services
  • Misuse of AI systems for malicious purposes
  • Harms caused by AI companions, tutors, and therapists

The CLTR warns that without a proper incident reporting system, DSIT may learn about novel harms through news outlets rather than through established reporting processes. To address this gap, the think tank recommends three immediate steps for the UK Government:

  • Government incident reporting system: Establish a system for reporting incidents from AI used in public services. This could be a straightforward extension of the Algorithmic Transparency Recording Standard (ATRS) to include public sector AI incidents, feeding into a government body and potentially shared with the public for transparency.
  • Engage regulators and experts: Commission regulators and consult with experts to identify the most concerning gaps, ensuring effective coverage of priority incidents and understanding stakeholder needs for a functional regime.
  • Build DSIT capacity: Develop DSIT's capability to monitor, investigate, and respond to incidents, potentially through a pilot AI incident database. This would form part of DSIT's central function, initially focusing on the most urgent gaps but eventually expanding to include all reports from UK regulators.

These recommendations aim to enhance the government's ability to responsibly improve public services, ensure effective coverage of priority incidents, and develop the necessary infrastructure for collecting and responding to AI incident reports. Veera Siivonen, CCO and Partner at Saidot, commented: "As AI continues to advance and permeate various aspects of society, the implementation of a robust incident reporting system could prove crucial in mitigating risks and ensuring the safe development and deployment of AI technologies."

See also: SoftBank chief: Forget AGI, ASI will be here within 10 years

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Think tank calls for AI incident reporting system appeared first on AI News.
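The pilot AI incident database the CLTR proposes can be pictured as a simple submission-and-query store. Below is a minimal Python sketch of that idea; the schema fields, operator names, and severity scale are hypothetical illustrations, not drawn from the ATRS or any CLTR specification:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class IncidentReport:
    # Hypothetical fields; a real ATRS extension would define its own schema.
    system_name: str
    operator: str        # e.g. the public body deploying the AI system
    occurred_on: date
    severity: str        # e.g. "low", "medium", "high"
    description: str

class IncidentDatabase:
    """A minimal in-memory stand-in for a pilot AI incident database."""
    def __init__(self) -> None:
        self._reports: List[IncidentReport] = []

    def submit(self, report: IncidentReport) -> None:
        self._reports.append(report)

    def by_severity(self, severity: str) -> List[IncidentReport]:
        return [r for r in self._reports if r.severity == severity]

# Hypothetical example submissions.
db = IncidentDatabase()
db.submit(IncidentReport("benefits-triage-model", "example-department", date(2024, 5, 1),
                         "high", "Incorrect eligibility decisions"))
db.submit(IncidentReport("chatbot-tutor", "example-agency", date(2024, 6, 2),
                         "low", "Inaccurate answer to a curriculum question"))
print(len(db.by_severity("high")))  # → 1
```

Even a toy store like this shows why the CLTR stresses triage: a regulator querying by severity can route "high" reports for rapid response while aggregating the rest for trend monitoring.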
  2. Alibaba Cloud has taken a step towards globalising its AI offerings by unveiling an English version of ModelScope, its open-source AI model community. The move aims to bring generative AI capabilities to a wider audience of businesses and developers worldwide. ModelScope, which embodies Alibaba Cloud's concept of "Model-as-a-Service", transforms AI models into readily available and deployable services. Since its launch in mainland China in 2022, the platform has grown to become the country's largest AI model community, boasting over five million developer users. With this international expansion, developers around the globe will now have access to more than 5,000 advanced AI models. The platform also welcomes user-contributed models, fostering a collaborative ecosystem for AI development.

The English version of ModelScope provides a comprehensive suite of tools and resources to support developers in bringing their AI projects to fruition. This includes access to over 1,500 high-quality Chinese-language datasets and an extensive range of toolkits for data processing. Moreover, the platform offers various modules that allow developers to customise model inference, training, and evaluation with minimal coding.

Alibaba Cloud announced the English version of ModelScope during the 2024 Computer Vision and Pattern Recognition (CVPR) Conference in Seattle. This annual event brings together academics, researchers, and business leaders for a five-day exploration of cutting-edge developments in AI and machine learning through workshops, panels, and keynotes. The company's presence at CVPR was further bolstered by the acceptance of more than 30 papers from Alibaba Group, with six selected as oral and highlighted papers. This achievement underscores Alibaba's commitment to advancing the field of AI research and development.

Conference attendees also had the opportunity to experience firsthand the capabilities of Alibaba's proprietary Qwen model series at the company's booth. The demonstration showcased the model's impressive image and video generation capabilities, providing a glimpse into the potential applications of Alibaba's AI technologies.

The launch of the English version of ModelScope represents a significant milestone in Alibaba Cloud's strategy to expand its AI offerings globally. As businesses and developers worldwide increasingly seek to harness the power of AI, platforms like ModelScope are set to play a crucial role in democratising access to advanced AI capabilities. With its extensive collection of models, datasets, and development tools, Alibaba Cloud's ModelScope will help to accelerate AI innovation and adoption on a global scale. (Image Source: www.alibabagroup.com)

See also: SoftBank chief: Forget AGI, ASI will be here within 10 years

The post Alibaba Cloud launches English version of AI model hub appeared first on AI News.
  3. Apple has reportedly entered into discussions with Meta to integrate the latter's generative AI model into its newly unveiled personalised AI system, Apple Intelligence. Sources familiar with the talks have revealed that Apple has also been considering partnerships with the startups Anthropic and Perplexity to integrate their generative AI technologies. This coming together of major players in the tech industry and groundbreaking startups signifies a pivotal moment in AI. For years, we've watched tech behemoths like Apple, Google, and Meta (formerly Facebook) fiercely guard their technological advancements, treating their innovations as closely held trade secrets. This approach has driven competition and spurred rapid progress, but it has also led to fragmentation and inefficiencies in the broader tech ecosystem. As we embark on the next generation of AI technologies, these tech giants are starting to see that there is much more to gain from collaborating.

Given their intense rivalry and divergent philosophies about user privacy and data use, the hypothetical Apple-Meta partnership is notable. This unexpected alliance begs the question: what has changed? The answer lies in the breathtaking pace of AI advancement and the realisation that no single company, no matter how large or innovative, can go it alone in this new frontier. Generative AI, in particular, represents a paradigm shift in computing, fundamentally reimagining our interaction with technology. Its vast implications and numerous applications push tech giants beyond their comfort zones. By potentially integrating Meta's generative AI into Apple Intelligence, Apple acknowledges that hardware and traditional software expertise alone cannot secure AI leadership. Meta's openness to sharing its AI with a competitor suggests it values widespread adoption over exclusivity.

For consumers, this collaboration promises a new era of intelligent digital interactions. Imagine an AI system that responds to your needs with unprecedented accuracy while anticipating and adapting to your preferences. This integration could transform user engagement, making technology an even more intuitive part of daily life. Notably, Apple's commitment to privacy adds a layer of trust to these advancements, addressing a key concern in today's digital landscape. In short, users can expect sophisticated AI features without compromising their personal information.

The inclusion of AI startups like Anthropic and Perplexity in these discussions is equally significant. It demonstrates that innovative ideas and cutting-edge research are not the sole domain of established tech giants in the rapidly evolving field of AI. These startups bring fresh perspectives and specialised expertise that could prove crucial in developing more advanced and ethically sound AI systems. This open approach may accelerate AI development and deployment at a pace we have never seen before. Imagine Siri understanding and speaking multiple languages simultaneously, combining Apple's natural language processing software, the social interaction data of Meta's billions of users, Anthropic's AI safety lens, and Perplexity's formidable problem solving. This might lead to an AI assistant that is not only more advanced and capacious as a system, but also one with depth, ethics, and high-fidelity inferences about human needs.

What about ethical considerations and regulatory challenges? The integration of powerful generative AI models into widely used platforms like Apple's raises important ethical and regulatory questions. Issues such as data privacy, algorithmic bias, and the potential misuse of AI-generated content need careful consideration. Will this further centralise tech power among the existing few, or open new doors for startups and other smaller players? Most important of all, how do we proceed with the development and deployment of these AI systems responsibly, with built-in mechanisms to guard against misuse? As we attempt to do so in uncharted waters, it is increasingly obvious that regulators and policymakers will have a major role to play in weighing incentives for innovation against public interests. It may even require creating new data-sharing structures, AI governance practices, and ways for companies to work together that reside beyond today's antitrust and data protection laws.

See also: Mark Zuckerberg: AI will be built into all of Meta's products

The post Could an Apple-Meta partnership redefine the AI landscape? appeared first on AI News.
  4. SoftBank founder and CEO Masayoshi Son has claimed that artificial super intelligence (ASI) could be a reality within the next decade. Speaking at SoftBank's annual meeting in Tokyo on June 21, Son painted a picture of a future where AI far surpasses human intelligence, potentially revolutionising life as we know it. Son asserted that by 2030, AI could be "one to 10 times smarter than humans", and by 2035, it might reach a staggering "10,000 times smarter" than human intelligence.

SoftBank's CEO made a clear distinction between artificial general intelligence (AGI) and ASI. According to Son, AGI would be equivalent to a human "genius", potentially up to 10 times more capable than an average person. ASI, however, would be in a league of its own, with capabilities 10,000 times beyond human potential.

Son's predictions align with the goals of Safe Superintelligence Inc. (SSI), founded by Ilya Sutskever, former chief scientist at OpenAI, along with Daniel Levy and Daniel Gross. SSI's mission, as stated on its website, is to "approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs." The timing of these announcements underscores the growing focus on superintelligent AI within the tech industry. While SoftBank appears to be prioritising the development of ASI, SSI is emphasising the importance of safety in this pursuit. As stated by SSI's founders, "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."

It's worth noting that the scientific community has yet to reach a consensus on the feasibility or capabilities of AGI or ASI. Current AI systems, while impressive in specific domains, are still far from achieving human-level reasoning across all areas.

Son's speech took an unexpectedly personal turn when he linked the development of ASI to his own sense of purpose and mortality. "SoftBank was founded for what purpose? For what purpose was Masayoshi Son born? It may sound strange, but I think I was born to realise ASI. I am super serious about it," he declared.

Son's predictions and SoftBank's apparent pivot towards ASI development, coupled with the formation of SSI, raise important questions about the future of AI and its potential impact on society. While the promise of superintelligent AI is enticing, it also brings concerns about job displacement, ethical considerations, and the potential risks of creating an intelligence that far surpasses our own. Whether Son's vision of ASI within a decade proves prescient or overly optimistic remains to be seen, but one thing is certain: the race towards superintelligent AI is heating up, with major players positioning themselves at the forefront.

See also: Anthropic's Claude 3.5 Sonnet beats GPT-4o in most benchmarks

The post SoftBank chief: Forget AGI, ASI will be here within 10 years appeared first on AI News.
  5. Apple announced on Friday that it would block its highly anticipated Apple Intelligence AI features, iPhone Mirroring, and SharePlay Screen Sharing for EU users. While not entirely unexpected, this decision underscores the growing tension between rapid technological advancement and the EU's stringent regulatory framework, particularly the Digital Markets Act (DMA) and General Data Protection Regulation (GDPR).

From the EU's perspective, this delay represents both a triumph and a challenge. It demonstrates the effectiveness of regulations safeguarding user privacy and promoting fair competition. The DMA and GDPR have forced tech giants to pause and reconsider their approaches, potentially leading to more user-centric and privacy-conscious products. However, this victory comes at a price: the risk of falling behind in the global AI race. As other regions forge ahead with less restrictive policies, the EU must carefully balance its regulatory stance with the need to foster innovation and maintain competitiveness in the global tech landscape.

For Apple, this delay is likely a calculated move. The company justifies the decision by citing security and privacy concerns, which reinforces its brand as a tech giant that cares about privacy. All in all, this could preserve user trust while giving Apple more time to adjust how its AI functions to be compatible with EU law. But it also raises the risk that Apple will cede ground to competitors who manage to navigate the regulatory environment faster. Nevertheless, the postponement of AI offerings from other tech behemoths such as Meta and Google in the EU points to a broader, industry-wide challenge. Many of those companies say they need large, trained AI systems to work correctly, but claim that GDPR restrictions drastically limit what they can do in practice.

That begs the question: can advanced AI technology coexist with some of the world's strictest data protection regulations? Apple's AI products will almost certainly face scrutiny alongside those of its competitors. The core difficulty is the data-hungry nature of modern AI systems. To provide personalised and effective services, these AIs require access to enormous datasets, which may conflict with GDPR principles such as data minimisation and purpose limitation. However, Apple could have an advantage in this area. Its emphasis on on-device processing and differential privacy approaches may enable it to develop AI features more compliant with EU standards. If successful, this might establish a new norm for privacy-preserving AI, giving Apple an edge in the European market.

And this is not Apple's first encounter with EU regulation. In September 2021, the company complained about parts of the DMA rules that would have forced it to allow users to sideload apps from outside its App Store for the first time. Apple claimed that doing so would jeopardise user privacy and security, reinforcing its long-standing belief in the sanctity of its closed ecosystem. Furthermore, Apple's recent move to prohibit progressive web applications (PWAs) in the EU drew objections from developers. Many saw this decision as yet another attempt to resist regulatory pressure. However, in an unexpected turn of events, the EU concluded that Apple's treatment of PWAs did not breach DMA guidelines, prompting the company to reconsider its decision.

Global implications: Fragmentation or harmonisation?

These incidents shed light on the intricate relationship between tech companies and regulators. Companies like Apple are known for resisting regulations they perceive as too strict. However, they must also be ready to adjust their strategies when their understanding of the rules is questioned. The EU delay of Apple's AI features is more than a bump in the road.
It illustrates the complex relationship between regulation and technological innovation. Finding that balance will be vital as we go forward. Regulators and the tech industry will both need to adapt to build a world where high-powered AI is allowed to operate while also respecting human rights and privacy. It is a reminder that there are no clear courses to follow in the constantly changing world of AI. Governments, in turn, will need to be ready for fresh thinking and creative policymaking if we want the powers of AI put to good use in ways that are true to the values and rights on which our digital society rests.

The timing of the controversy also raises questions about the future of global tech development. Will the digital landscape continue to fragment, with different functionality available in different geographies based on what each jurisdiction's regulations permit? Or is this the start of a more harmonised global approach to tech regulation and development?

As consumers, we find ourselves in a constant struggle between the forces of innovation and regulation. As technology advances, we are eager to embrace the newest AI-powered features that enhance our digital experiences and cater to our individual needs. However, it is equally important to us to protect our privacy and data. Companies such as Apple face the challenge of pushing the boundaries of what is possible with AI while establishing new benchmarks for privacy and security.

To sum up, Apple's decision to delay its AI features in the EU is a major story in the continuing discussion of tech innovation and regulation. It highlights the need for a more sophisticated and collaborative strategy to shape our digital future. As we go down this path, it will be all the more important to have open and constructive conversations with all stakeholders (tech firms, regulators, and users) to come up with solutions that promote innovation while safeguarding basic rights. Indeed, the future of AI, in Europe and on a global scale, might be at stake as we navigate these stormy seas. (Image Credit: Apple)

See also: Musk ends OpenAI lawsuit while slamming Apple's ChatGPT plans

The post Apple's EU AI delay: Innovation vs regulation appeared first on AI News.
  6. Anthropic has launched Claude 3.5 Sonnet, its mid-tier model that outperforms competitors and even surpasses Anthropic's current top-tier Claude 3 Opus in various evaluations.

Claude 3.5 Sonnet is now accessible for free on Claude.ai and the Claude iOS app, with higher rate limits for Claude Pro and Team plan subscribers. It's also available through the Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI. The model is priced at $3 per million input tokens and $15 per million output tokens, featuring a 200K token context window.

Anthropic claims that Claude 3.5 Sonnet "sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval)." The model demonstrates enhanced capabilities in understanding nuance, humour, and complex instructions, while excelling at producing high-quality content with a natural tone. Operating at twice the speed of Claude 3 Opus, Claude 3.5 Sonnet is well-suited for complex tasks such as context-sensitive customer support and multi-step workflow orchestration. In an internal agentic coding evaluation, it solved 64% of problems, significantly outperforming Claude 3 Opus at 38%.

The model also showcases improved vision capabilities, surpassing Claude 3 Opus on standard vision benchmarks. This advancement is particularly noticeable in tasks requiring visual reasoning, such as interpreting charts and graphs. Claude 3.5 Sonnet can accurately transcribe text from imperfect images, a valuable feature for industries like retail, logistics, and financial services.

Alongside the model launch, Anthropic introduced Artifacts on Claude.ai, a new feature that enhances user interaction with the AI. This feature allows users to view, edit, and build upon Claude's generated content in real time, creating a more collaborative work environment. Despite its significant intelligence leap, Claude 3.5 Sonnet maintains Anthropic's commitment to safety and privacy.
The company states, "Our models are subjected to rigorous testing and have been trained to reduce misuse." External experts, including the UK's AI Safety Institute (UK AISI) and child safety experts at Thorn, have been involved in testing and refining the model's safety mechanisms. Anthropic emphasises its dedication to user privacy, stating, "We do not train our generative models on user-submitted data unless a user gives us explicit permission to do so. To date we have not used any customer or user-submitted data to train our generative models."

Looking ahead, Anthropic plans to release Claude 3.5 Haiku and Claude 3.5 Opus later this year to complete the Claude 3.5 model family. The company is also developing new modalities and features to support more business use cases, including integrations with enterprise applications and a memory feature for more personalised user experiences. (Image Credit: Anthropic)

See also: OpenAI co-founder Ilya Sutskever's new startup aims for 'safe superintelligence'

The post Anthropic's Claude 3.5 Sonnet beats GPT-4o in most benchmarks appeared first on AI News.
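The per-token pricing quoted above translates directly into per-request costs. A minimal sketch of the arithmetic, using the stated rates of $3 per million input tokens and $15 per million output tokens (the token counts in the example are hypothetical):

```python
# Claude 3.5 Sonnet pricing as stated above: $3 / 1M input tokens, $15 / 1M output tokens.
INPUT_PRICE_PER_TOKEN = 3.00 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 15.00 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost in USD of a single API call at the stated rates."""
    return input_tokens * INPUT_PRICE_PER_TOKEN + output_tokens * OUTPUT_PRICE_PER_TOKEN

# Hypothetical workload: a 2,000-token prompt producing a 500-token reply.
cost = request_cost(2_000, 500)
print(f"${cost:.4f}")  # → $0.0135
```

The asymmetry matters in practice: output tokens cost five times as much as input tokens, so long prompts with short answers are far cheaper than the reverse.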
  7. Elon Musk's startup xAI has announced that it will rely on Dell and Super Micro for the server racks to support its gigantic supercomputer project. Musk announced the collaboration on his social media platform, X, marking a key development in xAI's goal of assembling what he has repeatedly called "the world's biggest supercomputer."

Server racks are an integral part of high-performance computing infrastructure, providing the skeleton needed to house and organise the computing components necessary for supercomputer operations. These engineered enclosures are designed to promote efficiency and airflow, which is of vital importance in supercomputing, while making the most of limited floor space. Racks such as those that will host the training infrastructure for xAI's Grok models must support the immense computational power these workloads require. Hundreds of thousands of power-hungry AI chips are needed for projects at this scale, and semiconductor foundries have insufficient production capacity to meet the demand.

Heat management is especially challenging at xAI's scale. Supercomputers, which can perform calculations thousands of times faster than conventional systems, get so hot that the chips inside them degrade in performance over time. The issue is only exacerbated by the thousands of power-hungry AI chips required to train more advanced AI models like xAI's Grok.

Partnership details: Dell and Super Micro's roles

According to Musk, Dell Technologies will be responsible for assembling half of the racks for xAI's supercomputer. Super Micro Computer, referred to as "SMC" by Musk, will provide the remaining half. Super Micro, known for its close ties with chip firms like Nvidia and its expertise in liquid-cooling technology, has confirmed the partnership to Reuters.
San Jose-based Super Micro is renowned for its innovative approaches to server design, particularly its liquid-cooling technology. This technology is crucial for managing the extreme heat generated by high-performance computing systems, allowing more efficient operation and potentially extending the lifespan of components.

In a related development, Dell CEO Michael Dell announced on X that the company is collaborating with Nvidia to build an "AI factory" that will power the next version of xAI's chatbot, Grok. This collaboration underscores the extensive computational resources that advanced AI model training requires. Musk has previously stated that training the Grok 2 model required approximately 20,000 Nvidia H100 graphics processing units (GPUs), and future versions might need up to 100,000 of these chips. According to The Information, the proposed supercomputer is expected to be operational by fall 2025.

Both Dell Technologies and Super Micro Computer bring extensive experience and expertise to this project. Dell has been a trusted supplier of servers and data centre infrastructure for decades, powering many of the world's largest cloud computing platforms and supercomputing facilities, such as the Frontera supercomputer at the Texas Advanced Computing Center. Super Micro has established itself as a leader in high-performance, energy-efficient server solutions. Its innovations in liquid cooling and blade server architectures are widely used by cloud providers, enterprises, and research institutions for demanding workloads like AI and high-performance computing.

Implications for AI and supercomputing technologies

The collaboration between xAI, Dell Technologies, and Super Micro Computer represents a significant milestone in the advancement of AI and supercomputing technologies.
As the project progresses, it will likely push the boundaries of high-performance computing and contribute to the rapid evolution of artificial intelligence capabilities. This partnership also highlights the growing importance of specialised hardware in the AI industry. As AI models become increasingly complex and data-intensive, the demand for high-performance computing solutions is expected to keep rising, potentially reshaping the landscape of the tech industry in the coming years.

See also: Dell, Intel and University of Cambridge deploy the UK's fastest AI supercomputer

The post xAI secures Dell Technologies and Super Micro support for supercomputer project appeared first on AI News.
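The GPU counts quoted above also explain why cooling dominates the rack design. A back-of-the-envelope power sketch: the 700 W figure is the published TDP of an Nvidia H100 SXM GPU, while the 1.5x overhead factor for CPUs, networking, and cooling is an assumption for illustration only:

```python
# Rough facility-power estimate for the GPU counts quoted above.
H100_TDP_WATTS = 700    # published TDP of an H100 SXM GPU
OVERHEAD_FACTOR = 1.5   # assumed extra draw for CPUs, networking, and cooling

def cluster_power_mw(gpu_count: int) -> float:
    """Approximate total facility power in megawatts."""
    return gpu_count * H100_TDP_WATTS * OVERHEAD_FACTOR / 1_000_000

print(f"{cluster_power_mw(20_000):.0f} MW")   # Grok 2 scale → 21 MW
print(f"{cluster_power_mw(100_000):.0f} MW")  # projected future scale → 105 MW
```

Even under these rough assumptions, a 100,000-GPU cluster draws on the order of 100 MW, which is why liquid cooling and power delivery, not just chip supply, shape these deals.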
  8. Riding the AI wave, semiconductor giant Broadcom has joined the stampede of companies chasing Nvidia for a piece of the lucrative AI chip market. The computing and software conglomerate is up more than 66% in the past year, as it makes power moves to establish itself as one of the most dominant players in AI today. Broadcom has been making aggressive moves that have piqued the interest of analysts and investors, from buying VMware in a $61 billion deal to strengthen its data centre and cloud chops, to investments in AI chip R&D.

Central to Broadcom's AI aspirations is its fast-growing AI-connected chip business, which the company now forecasts will bring in an astounding $11 billion in revenue for fiscal 2024, up from a previous forecast of $10 billion. Combined with a recent 15% jump in Broadcom shares, the upward revision reinforces the strong appetite for chips driving the rise of generative AI. Morningstar analysts wrote in a note cited by Reuters, "we continue to see Broadcom as incredibly well-positioned to benefit from rising generative AI investment in the long term", a view widely held on Wall Street.

A second explanation for Broadcom's AI upswing is that it has bought and invested very well over the years. Chief among these moves is the $61 billion acquisition of VMware, part of Broadcom's drive to become a full-fledged cloud computing firm, which added $50 billion to its market capitalisation, per FactSet data. In addition, Broadcom has been ramping up its own custom AI chip business, inking deals with tech giants including Alphabet's Google and Meta Platforms. In March, the company revealed that a third, unidentified customer was using its custom AI chips, lending its business credibility in this high-stakes industry.
According to Reuters, “At an investor conference on Wednesday, Broadcom said it will produce the new custom AI chips for ‘hyperscaler’ buyers that are mostly Alphabet’s Google and Meta Platforms.” Diversified revenue streams and investor optimism Broadcom’s software division, bolstered by the VMware acquisition, added $2.7 billion to its second-quarter revenue, further diversifying the company’s revenue streams and positioning it as a formidable force in the AI ecosystem. The company’s stock performance reflects this optimism, with shares surging 76% over the past 12 months and closing at a staggering $1,495.5 on Wednesday. Broadcom’s recent announcement of a 10-for-1 stock split, a move reminiscent of Nvidia’s strategy, is expected to further fuel investor enthusiasm. “It’s a sure-fire way to send your stock soaring,” Triple D Trading analyst Dennis Dick told Reuters, commenting on the stock split, adding that the move was “right out of Nvidia’s book.” Nvidia’s dominance and competitive pressure Broadcom is progressing, but Nvidia is still the leader in this space. For years, Nvidia has benefited from the first-mover advantage of producing AI chips that cater to a long tail of applications, favouring innovation over sheer volume. While that balance appears to be in flux, Broadcom’s recent wins suggest that the AI boom is now benefiting more than just one company. This new contender has even forced Nvidia’s CEO Jensen Huang to admit that a resurgent Broadcom (and other start-ups) have planners at Nvidia nervous. Huang, for his part, stressed that the company must continue to innovate to ensure it remains ahead of its peers. Nevertheless, competition has yet to make a dent in Nvidia’s enviable lead in the AI chip market as the company busily cranks out its top-performing AI tech. 
Broadcom vs Nvidia: The battle for AI chip supremacy Unlike Nvidia’s graphics processing units, which have long dominated the industry, Broadcom’s custom AI chips – application-specific integrated circuits – might offer a great business opportunity for tech giants with massive, steady-state AI workloads. These bespoke chips require considerable initial capital investment, but they can offer large savings in both capital expenditure and power consumption, setting them up as a more cost-effective (if less general-purpose) alternative to Nvidia’s offerings. Also bullish on Broadcom is Piper Sandler analyst Harsh Kumar, who writes, “We continue to see [Broadcom] as the best AI play [excluding Nvidia] due to its strong positioning in the custom ASIC business along with its strong software portfolio.” Broadcom’s multi-pronged strategy of pumping cash into buyouts, offering bespoke chips, and expanding its software business has established it as a strong rival as the AI revolution shows few signs of abating. Nvidia is still the undisputed leader in the industry, but Broadcom’s bold AI play was enough to light a fire under investors and analysts alike, sowing the seeds of what could become an epic showdown in the market for AI chips. (Photo by Kenny Eliason) The post Broadcom’s AI surge challenges Nvidia’s dominance appeared first on AI News.
  9. Ilya Sutskever, former chief scientist at OpenAI, has revealed his next major project after departing the AI research company he co-founded in May. Alongside fellow OpenAI alumnus Daniel Levy and Apple’s former AI lead Daniel Gross, the trio has formed Safe Superintelligence Inc. (SSI), a startup solely focused on building safe superintelligent systems. See also: Meta unveils five AI models for multi-modal processing, music generation, and more The post OpenAI co-founder Ilya Sutskever’s new startup aims for ‘safe superintelligence’ appeared first on AI News.
  10. Meta has unveiled five major new AI models and research, including multi-modal systems that can process both text and images, next-gen language models, music generation, AI speech detection, and efforts to improve diversity in AI systems. The releases come from Meta’s Fundamental AI Research (FAIR) team, which has focused on advancing AI through open research and collaboration for over a decade. As AI rapidly innovates, Meta believes working with the global community is crucial. “By publicly sharing this research, we hope to inspire iterations and ultimately help advance AI in a responsible way,” said Meta. Chameleon: Multi-modal text and image processing Among the releases are key components of Meta’s ‘Chameleon’ models under a research license. Chameleon is a family of multi-modal models that can understand and generate both text and images simultaneously—unlike most large language models, which are typically unimodal. “Just as humans can process the words and images simultaneously, Chameleon can process and deliver both image and text at the same time,” explained Meta. “Chameleon can take any combination of text and images as input and also output any combination of text and images.” Potential use cases are virtually limitless, from generating creative captions to prompting new scenes with text and images. Multi-token prediction for faster language model training Meta has also released pretrained models for code completion that use ‘multi-token prediction’ under a non-commercial research license. Traditional language model training is inefficient because it predicts only the next word. Multi-token models predict multiple future words simultaneously, so they train faster. “While [the one-word] approach is simple and scalable, it’s also inefficient. It requires several orders of magnitude more text than what children need to learn the same degree of language fluency,” said Meta. 
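Meta’s exact architecture isn’t described here, but the core idea of supervising several future tokens at once can be sketched in a few lines. The head/vocab shapes and the summed cross-entropy loss below are illustrative assumptions, not Meta’s implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_token_loss(head_logits, target_ids):
    """head_logits: (n_heads, vocab_size) -- one output head per future position.
    target_ids: (n_heads,) -- ground-truth ids of the next n_heads tokens.
    Returns the summed cross-entropy over all heads, so a single forward
    pass receives a training signal for several future tokens instead of one."""
    probs = softmax(head_logits)
    picked = probs[np.arange(len(target_ids)), target_ids]
    return float(-np.log(picked).sum())

# Two heads over a toy 4-token vocabulary with uniform logits:
# each head contributes ln(4) of loss.
loss = multi_token_loss(np.zeros((2, 4)), np.array([0, 1]))
```

At inference time the extra heads can simply be dropped, keeping standard next-token decoding; the training-time benefit is the denser supervision signal per forward pass.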
JASCO: Enhanced text-to-music model On the creative side, Meta’s JASCO allows generating music clips from text while affording more control by accepting inputs like chords and beats. “While existing text-to-music models like MusicGen rely mainly on text inputs for music generation, our new model, JASCO, is capable of accepting various inputs, such as chords or beat, to improve control over generated music outputs,” explained Meta. AudioSeal: Detecting AI-generated speech Meta claims AudioSeal is the first audio watermarking system designed to detect AI-generated speech. It can pinpoint the specific segments generated by AI within larger audio clips up to 485x faster than previous methods. “AudioSeal is being released under a commercial license. It’s just one of several lines of responsible research we have shared to help prevent the misuse of generative AI tools,” said Meta. Improving text-to-image diversity Another important release aims to improve the diversity of text-to-image models which can often exhibit geographical and cultural biases. Meta developed automatic indicators to evaluate potential geographical disparities and conducted a large 65,000+ annotation study to understand how people globally perceive geographic representation. “This enables more diversity and better representation in AI-generated images,” said Meta. The relevant code and annotations have been released to help improve diversity across generative models. By publicly sharing these groundbreaking models, Meta says it hopes to foster collaboration and drive innovation within the AI community. (Photo by Dima Solomin) See also: NVIDIA presents latest advancements in visual AI 
The post Meta unveils five AI models for multi-modal processing, music generation, and more appeared first on AI News.
  11. The fast-food industry has been turning to AI for years in the hopes that it might improve efficiency, cut down on labour costs, and, ideally, make customers’ lives easier. One of the pioneers was McDonald’s, which in 2021 teamed up with IBM to test an AI system for voice ordering at around 100 drive-thru locations throughout the US. Customers were supposed to be able to drive up, state their orders into the drive-thru microphone, and have an AI record it all instead of a human crew member. Theoretically, the AI should be able to comprehend more complicated requests, offer recommendations based on past orders, and reduce errors, all while continuing to provide fast service without getting bogged down by staffing problems. Yet McDonald’s is already giving up on the AI drive-thru feature just a few years after introducing it. Viral videos showed the system struggling with even simple orders, taking requests from the wrong cars, and recommending peculiar food pairings, like ice cream and bacon. Following these public failures, McDonald’s announced that it will end the IBM partnership after July 31, 2024. This spectacular flameout raises the question: does AI really have a role to play in the drive-thru of the future, or is the ‘Burgerbot’ destined to be the next in a long line of extremely smart things that fundamentally fail to understand humans? The lure of AI drive-thrus McDonald’s woes aside, the benefits that attracted the company and its competitors to AI drive-thrus in the first place haven’t disappeared. Labour costs are going up while staff are getting harder to find, so the prospect of having an AI take orders could offer significant cost reductions. AI systems also promise something compelling: consistent understanding of even the most convoluted customer requests. 
Armed with prior ordering data, the AI was also able to provide custom recommendations aimed at boosting average spend. AI ordering, furthermore, could lead to a speedier, lower-friction drive-thru experience: an AI can process an order immediately rather than requiring a human crew member to repeat it back. Those efficiencies could be even more significant during peak hours, when cars are wrapped around the building and the drive-thru is jammed. The negatives – and where McDonald’s stumbled Of course, the viral blunders that ended McDonald’s AI drive-thru experiment tell a cautionary tale about the real pitfalls this new technology grapples with. Despite years of training, McDonald’s AI system still seemed to frequently fail at the core task of taking accurate orders. Such failures during the ordering process can result in customer dissatisfaction and expensive order inaccuracies – which in turn could wipe out any potential labour savings. And then there is the looming question of whether AI will ever understand the constant stream of idiosyncratic phrasings, accents, and off-menu requests as well as humans do. The McDonald’s experience also hints at the “uncanny valley” challenge of AI-powered ordering. While often impressive, current AI can still produce responses that seem almost human, but not quite – an unsettling experience for some customers. A human voice actor may be required to dub over the AI. The future of AI drive-thrus Despite pulling the plug on its initial system, the fast-food giant still believes AI will be a big player in the drive-thrus of the future. The IBM test gave the company “confidence that a voice ordering solution for drive-thru will be part of our restaurants’ future” and is part of a broader effort by McDonald’s to develop “long-term, scalable solutions,” a company spokesperson said. 
Chipotle, Wendy’s, Taco Bell, and Pizza Hut are among a number of other big chains that are also making big moves to integrate AI drive-thru ordering into their businesses. The stakes are high, with the entire industry standing to save billions in labour costs if the technology works. The consensus among most experts, however, is that AI drive-thru ordering is just beginning and that major milestones must still be met before it becomes truly universal. But with AI language models advancing so fast, it may not be long before the ordering errors and other issues plaguing McDonald’s are a thing of the past. There may also be a phased handoff period in which customers have the choice of interacting with an AI or a human crew member. That could build rapport and increase trust over time, which might be an advantage for AI drive-thrus. At the end of the day, it sounds like McDonald’s was simply ahead of the curve and rolled the technology out more aggressively than it should have. Or, as one Silicon Valley investor quipped: “You can’t be a pioneer without taking a few arrows in the back.” For the fast-food industry, streamlining drive-thrus with AI remains the next frontier – it’s just going to take more time. The post The rise and fall of AI at the McDonald’s drive-thru appeared first on AI News.
  12. Artificial intelligence is transforming numerous industries, and the online slot gaming sector in the UK is no exception. The integration of AI in online slots is not just a trend but a revolution that is reshaping the gaming landscape. This advancement offers players a more personalised, secure, and engaging gaming experience. As technology evolves, the allure of no minimum deposit slots in the UK becomes increasingly appealing, providing accessibility to a broader audience. Let’s delve into how AI is revolutionising online slot gaming in the UK and what this means for both players and developers. AI integration in slot machine development The development of slot machines has come a long way from their mechanical beginnings to the sophisticated digital versions we see today. AI is at the forefront of this evolution, significantly enhancing the creation and functionality of these games. Enhanced game design AI enables developers to create more dynamic and visually appealing games. By analysing vast amounts of player data, AI can generate themes, graphics, and narratives that are tailored to the preferences of different player demographics. This means that games can be more immersive and engaging, drawing players into captivating storylines and visually stunning environments that continually evolve based on player interactions. Adaptive gameplay One of the most exciting applications of AI in slot machine development is the ability to adjust gameplay in real time. AI algorithms monitor player behaviour and adapt the difficulty and features of the game accordingly. For example, if a player is struggling, the game might become slightly easier to keep them engaged, or if a player is doing exceptionally well, the game might present more challenges to maintain excitement. This adaptive gameplay ensures that players remain interested and challenged, providing a more satisfying gaming experience. 
Personalised player experience Personalisation is a key aspect of modern online gaming, and AI is instrumental in delivering a customised experience for each player. Machine learning for player insights AI uses machine learning to analyse player behaviour and preferences, allowing for highly personalised game suggestions and promotional offers. By understanding what types of games a player enjoys and how they like to play, AI can recommend new games or bonuses that are most likely to appeal to them. This level of personalisation enhances the player’s experience and increases their engagement with the platform. Customised in-game experience Beyond recommendations, AI also customises the gameplay itself. For instance, AI can adjust game mechanics, such as bonus frequencies and difficulty levels, based on individual player data. This means that each gaming session is unique and tailored to the player’s specific preferences, making the gaming experience more enjoyable and engaging. Enhancing security and fairness Security and fairness are critical components of online gaming, and AI plays a significant role in ensuring both. Fraud detection and prevention AI is highly effective in identifying and preventing fraudulent activities. By continuously monitoring player behaviour and transaction patterns, AI can detect anomalies that may indicate fraudulent actions. When such activities are detected, AI can intervene in real time to prevent losses and protect players and operators alike. This capability is crucial for maintaining the integrity and trustworthiness of online gaming platforms. Ensuring fair play Ensuring fair play is paramount in online gaming, and AI enhances this by maintaining the integrity of random number generators (RNGs). AI algorithms help verify that the outcomes of slot games are random and free from manipulation. 
This transparency is essential for player trust, as it guarantees that every player has an equal chance of winning based on luck, without any external interference. AI’s role in responsible gambling AI is also a powerful tool in promoting responsible gambling practices, helping to mitigate the risks associated with gambling addiction. Predictive analytics for problem gambling AI can analyse playing patterns to identify behaviours that may indicate problem gambling. By using predictive analytics, AI can spot early signs of addiction, such as excessive spending or extended gaming sessions. This allows gaming platforms to intervene proactively, offering support and resources to players who may be at risk. Intervention strategies Once potential problem gambling behaviour is identified, AI can implement intervention strategies. This might include sending notifications to players about their gaming habits, providing self-assessment tools, or limiting the amount of time and money a player can spend on the platform. These strategies are designed to help players maintain a healthy relationship with gambling and prevent the escalation of problematic behaviours. Future prospects Looking ahead, the potential of AI in the online slot gaming industry is immense. As AI technology continues to advance, we can expect even more sophisticated and personalised gaming experiences. AI-driven innovation will likely lead to entirely new types of games that we cannot yet imagine, offering unparalleled levels of engagement and excitement. AI is revolutionising the online slot gaming industry in the UK, offering enhanced game design, personalised player experiences, improved security, and responsible gambling measures. As the technology continues to evolve, the future of online slot gaming looks promising, with AI playing a pivotal role in shaping this landscape. 
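The predictive-analytics screening described above can be sketched as a simple rule-based check. The thresholds, session format, and function name below are invented for illustration; a production system would use models trained on real behavioural data rather than fixed cut-offs:

```python
def flag_at_risk(sessions, spend_limit=500.0, minutes_limit=180):
    """sessions: list of (minutes_played, amount_spent) tuples, one per day.
    Flags a player whose average session length or average daily spend
    crosses an illustrative threshold -- a crude stand-in for the learned
    risk models the article describes."""
    avg_minutes = sum(m for m, _ in sessions) / len(sessions)
    avg_spend = sum(s for _, s in sessions) / len(sessions)
    return avg_minutes > minutes_limit or avg_spend > spend_limit

# A player averaging four-hour sessions would be flagged for follow-up,
# while a casual player would not.
heavy = flag_at_risk([(240, 120.0), (250, 80.0)])
casual = flag_at_risk([(30, 10.0), (45, 15.0)])
```

In practice the flag would trigger the graduated interventions the article lists (notifications, self-assessment tools, spend limits) rather than a single hard action.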
For players and developers alike, the integration of AI presents endless possibilities for innovation and improvement in the world of online gaming. The post The impact of AI on online slot gaming in the UK appeared first on AI News.
  13. While some may think Snapchat is fading, the app continues to attract a considerable number of active users. Acknowledging past shortcomings in machine learning utilisation, Snap’s CEO Evan Spiegel announced a new, assertive strategy to integrate AI and machine learning technologies into its services, marking a substantial departure from its long-term focus on revising its advertising approach. In an interview with Bloomberg, Spiegel emphasised the need to improve their machine learning capabilities to reach cutting-edge standards. “We needed to improve there and bring together some of our most senior machine learning folks to just talk about what it would look like for us to get to state of the art and really invest,” he stated. Soon afterward, Snap debuted its newest generative AI technology that allows phone cameras to create more lifelike lenses—the features on the app that let you turn into a dog or have giant bug eyes—when recording videos and taking photos. Snapchat hopes that this change will help it compete more effectively with other social media platforms. Snap has been a pioneer in augmented reality (AR) technology, which layers digital effects onto real-world images or videos. Although Snap still operates in the shadow of larger rivals such as Meta, the company is making a significant bet on more sophisticated and, frankly, more fun AR lenses. They hope these will attract new users and advertisers to the Snapchat platform. The company also unveiled that AR developers can now create AI-powered lenses, and Snapchatters will be able to extensively use these lenses in their content. Additionally, Snap announced a new iteration of its developer program: Lens Studio. This more advanced version of the software, introduced late last year, initially allowed creators to build their own AR experiences for Snapchat. Now, it extends to websites and other apps. 
With the improved Lens Studio, Snap’s CTO Bobby Murphy said that the time required to create AR effects would be dramatically reduced from weeks to minutes or hours, and that it would also facilitate the development of more sophisticated work. “What’s fun for us is that these tools both stretch the creative space in which people can work, but they’re also easy to use, so newcomers can build something unique very quickly,” Murphy explained in an interview with Reuters. The new Lens Studio includes a suite of generative AI tools, such as an AI assistant that can answer developers’ questions if they need help. Another tool allows artists to type a prompt and automatically generate a three-dimensional image that they can use for their AR lens, eliminating the need to develop a 3D model from scratch. Early AR technologies only allowed users to perform simple tasks, such as placing a hat on someone’s head in a video. However, according to Murphy, Snap’s improvements will make it hard to tell whether a digital hat is actually being worn, with the hat moving seamlessly with the person’s movements and the lighting on the hat matching the video perfectly. Snap also eventually plans to create AR lenses that cover everything from head to toe—not just the face. Building a full wardrobe for individuals is really hard to do right now, said Murphy. Through its generative AI capabilities, Snap aims to provide advanced AR experiences that distinguish Snapchat from its peers and attract new users, even as it competes with far larger rivals like Meta. See also: NVIDIA presents latest advancements in visual AI 
The post Snap introduces advanced AI for next-level augmented reality appeared first on AI News.
  14. On Inishbofin, a small island off the western coast of Ireland where the population hovers around 170 and the main industries are farming, fishing and tourism, a quiet technology revolution has been taking place. Artificial intelligence (AI), once thought to be the exclusive domain of big cities and tech hubs, is making its way to the furthest corners of rural Ireland, empowering locals with cutting-edge tools to boost their businesses and preserve their traditional crafts. It is all part of Microsoft’s ambitious ‘Skill Up Ireland’ initiative, which aims to provide every person in Ireland with the opportunity to learn AI skills. The program has partnered with the Irish government and various organisations to deliver AI training and resources to communities across the country, leaving no one behind in the era of rapid technological advancement. One recent beneficiary of this program is Andrew Murray, the general manager of the 22-room Doonmore Hotel on Inishbofin. A native of the island, Murray comes from a family that has lived on Inishbofin for generations, with his parents founding the hotel in 1969. Despite the remote location, Murray is eager to embrace AI as a tool to streamline his operations and save time. “What I’m interested in the most is the power of AI to save time for people like me,” Murray said. “Because time is the most precious thing we have, and it’s finite. There are only 24 hours in a day.” Through an AI introduction class, Murray discovered the possibilities of tools such as Microsoft Copilot, an AI-powered assistant for everything from scheduling to data analysis to creating content. He intends to use these tools to oversee things like scheduling staff and inventory management as well as invoicing and pricing – tasks that he has normally spent hours, if not days, doing completely manually. But Murray is not alone in his enthusiasm for AI on Inishbofin. 
Catherine O’Connor, a weaver who draws inspiration from the island’s natural colours and textures, has also embraced the technology. Initially wary of the AI training, O’Connor quickly became “absorbed by it” once she realised its potential to help her market her handmade scarves, table runners, and wall hangings. “Every piece has a story behind it,” O’Connor explained. “You can get a scarf at the five-and-dime store, but a handmade scarf takes hours and hours to make. It’s a totally different level. So you have to find the right words to use.” Now, with the help of Copilot, O’Connor can write engaging descriptions of her creations, market her craft on a proper e-commerce platform, and help people understand and visualise her work more accurately. Another Copilot user, Inishbofin-based florist Patricia Concannon, plans to use Copilot to make her website and Instagram captions more engaging, which should help her reach new customers and attract a wider audience for her floral displays. The AI training on Inishbofin is just one element of Microsoft’s wider ‘Skill Up Ireland’ programme, aimed at upskilling and reskilling people across Ireland, which includes Dream Space, an immersive learning experience introducing STEM and AI skills to every one of the country’s one million students and their teachers. Kevin Marshall, head of Learning & Skills for Microsoft Ireland, said the rapid growth in the prevalence of AI in the last few years has necessitated upskilling and reskilling programmes. He continued: “At the same time, with the explosion of generative AI in the last 18 months, there’s a real need to educate people on what this is, to show them that it’s not black magic.” The challenge, however, lies in the ever-evolving nature of AI technology. “The teaching is non-invasive, it’s collaborative,” Marshall explained. “The programs teach the basic foundations and core principles of AI. Here’s what it can do. 
Here are the risks and the ethical issues. Here are the opportunities. And here’s where you go play with it.” Programmes like ‘Skill Up Ireland’ are an opportunity for rural communities like Inishbofin not to be left behind by the digital divide as AI reshapes industries and the way we live and work. Audrey Murray, a felt artist and teaching assistant on the island, summed it up: “AI has to be another step, I suppose, bringing us closer to the world and bringing the world here.” And with Microsoft’s promise of creating AI skills for all in Ireland, the remote extremities of the Emerald Isle are poised to be catapulted into the future, where the very latest technologies are melded with ancient skills and ways of life. Meanwhile, for the inhabitants of Inishbofin, the opportunities are only beginning to reveal themselves. The post AI comes to Ireland’s remote Islands through Microsoft’s ‘Skill Up’ program appeared first on AI News.
  15. NVIDIA researchers are presenting new visual generative AI models and techniques at the Computer Vision and Pattern Recognition (CVPR) conference this week in Seattle. The advancements span areas like custom image generation, 3D scene editing, visual language understanding, and autonomous vehicle perception. “Artificial intelligence, and generative AI in particular, represents a pivotal technological advancement,” said Jan Kautz, VP of learning and perception research at NVIDIA. “At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.” Among the over 50 NVIDIA research projects being presented, two papers have been selected as finalists for CVPR’s Best Paper Awards – one exploring the training dynamics of diffusion models and another on high-definition maps for self-driving cars. Additionally, NVIDIA has won the CVPR Autonomous Grand Challenge’s End-to-End Driving at Scale track, outperforming over 450 entries globally. This milestone demonstrates NVIDIA’s pioneering work in using generative AI for comprehensive self-driving vehicle models, also earning an Innovation Award from CVPR. One of the headlining research projects is JeDi, a new technique that allows creators to rapidly customise diffusion models – the leading approach for text-to-image generation – to depict specific objects or characters using just a few reference images, rather than the time-intensive process of fine-tuning on custom datasets. Another breakthrough is FoundationPose, a new foundation model that can instantly understand and track the 3D pose of objects in videos without per-object training. It set a new performance record and could unlock new AR and robotics applications. 
NVIDIA researchers also introduced NeRFDeformer, a method to edit a 3D scene captured by a Neural Radiance Field (NeRF) using a single 2D snapshot, rather than having to manually reanimate changes or recreate the NeRF entirely. This could streamline 3D scene editing for graphics, robotics, and digital twin applications. On the visual language front, NVIDIA collaborated with MIT to develop VILA, a new family of vision language models that achieve state-of-the-art performance in understanding images, videos, and text. With enhanced reasoning capabilities, VILA can even comprehend internet memes by combining visual and linguistic understanding. NVIDIA’s visual AI research spans numerous industries, including over a dozen papers exploring novel approaches for autonomous vehicle perception, mapping, and planning. Sanja Fidler, VP of NVIDIA’s AI Research team, is presenting on the potential of vision language models for self-driving cars. The breadth of NVIDIA’s CVPR research exemplifies how generative AI could empower creators and accelerate automation in manufacturing and healthcare, while propelling autonomy and robotics forward. (Photo by v2osk) See also: NLEPs: Bridging the gap between LLMs and symbolic reasoning The post NVIDIA presents latest advancements in visual AI appeared first on AI News.
  16. Researchers have introduced a novel approach called natural language embedded programs (NLEPs) to improve the numerical and symbolic reasoning capabilities of large language models (LLMs). The technique involves prompting LLMs to generate and execute Python programs to solve user queries, then output solutions in natural language. While LLMs like ChatGPT have demonstrated impressive performance on various tasks, they often struggle with problems requiring numerical or symbolic reasoning. NLEPs follow a four-step problem-solving template: calling necessary packages, importing natural language representations of required knowledge, implementing a solution-calculating function, and outputting results as natural language with optional data visualisation. This approach offers several advantages, including improved accuracy, transparency, and efficiency. Users can investigate generated programs and fix errors directly, avoiding the need to rerun entire models for troubleshooting. Additionally, a single NLEP can be reused for multiple tasks by replacing certain variables. The researchers found that NLEPs enabled GPT-4 to achieve over 90% accuracy on various symbolic reasoning tasks, outperforming task-specific prompting methods by 30%. Beyond accuracy improvements, NLEPs could enhance data privacy by running programs locally, eliminating the need to send sensitive user data to external companies for processing. The technique may also boost the performance of smaller language models without costly retraining. However, NLEPs rely on a model’s program generation capability and may not work as well with smaller models trained on limited datasets. Future research will explore methods to make smaller LLMs generate more effective NLEPs and investigate the impact of prompt variations on reasoning robustness. 
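To make the four-step template concrete, here is an illustrative sketch of the kind of program an LLM might generate under the NLEP approach. The question, variable names, and dates are all hypothetical examples, not taken from the paper.

```python
# Hypothetical example of a program an LLM might generate under the
# four-step NLEP template (all names and values are illustrative).

# Step 1: call necessary packages
from datetime import date

# Step 2: import natural language representations of required knowledge
knowledge = {
    "question": "How many days are there between the 2024 US tax deadline and Christmas?",
    "tax_deadline": date(2024, 4, 15),
    "christmas": date(2024, 12, 25),
}

# Step 3: implement a solution-calculating function
def solve(k):
    return (k["christmas"] - k["tax_deadline"]).days

# Step 4: output the result as natural language
answer = solve(knowledge)
print(f"There are {answer} days between the two dates.")
```

Because the reasoning lives in an executable program rather than in the model's free-form text, a user can inspect and correct the arithmetic directly, and the same template can be reused for a new query by swapping the entries in `knowledge`.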
The research, supported in part by the Center for Perceptual and Interactive Intelligence of Hong Kong, will be presented at the Annual Conference of the North American Chapter of the Association for Computational Linguistics later this month. (Photo by Alex Azabache) See also: Apple is reportedly getting free ChatGPT access The post NLEPs: Bridging the gap between LLMs and symbolic reasoning appeared first on AI News.
  17. The European Union recently introduced the AI Act, a new governance framework compelling organisations to enhance transparency regarding their AI systems’ training data. Should this legislation come into force, it could penetrate the defences that many in Silicon Valley have built against such detailed scrutiny of AI development and deployment processes. Since the public release of OpenAI’s Microsoft-backed ChatGPT 18 months ago, there has been significant growth in interest and investment in generative AI technologies. These applications, capable of writing text, creating images, and producing audio content at record speeds, have attracted considerable attention. However, the rise in AI activity accompanying these changes prompts an intriguing question: How do AI developers actually source the data needed to train their models? Is it through the use of unauthorised copyrighted material? Implementing the AI Act The EU’s AI Act, intended to be implemented gradually over the next two years, aims to address these issues. New laws take time to embed, and a gradual rollout gives regulators time to adapt and businesses time to adjust to their new obligations. However, the implementation of some rules remains in doubt. One of the more contentious sections of the Act stipulates that organisations deploying general-purpose AI models, such as ChatGPT, must provide “detailed summaries” of the content used to train them. The newly established AI Office has announced plans to release a template for organisations to follow in early 2025, following consultation with stakeholders. AI companies have expressed strong resistance to revealing their training data, describing this information as trade secrets that would provide competitors with an unfair advantage if made public. 
The level of detail required in these transparency reports will have significant implications for both smaller AI startups and major tech companies like Google and Meta, which have positioned AI technology at the centre of their future operations. Over the past year, several top technology companies—Google, OpenAI, and Stability AI—have faced lawsuits from creators who claim their content was used without permission to train AI models. Under growing scrutiny, however, some tech companies have, in the past two years, pierced their own corporate veil and negotiated content-licensing deals with individual media outlets and websites. Some creators and lawmakers remain concerned that these measures are not sufficient. European lawmakers’ divide In Europe, differences among lawmakers are stark. Dragos Tudorache, who led the drafting of the AI Act in the European Parliament, argues that AI companies should be required to open-source their datasets. Tudorache emphasises the importance of transparency so that creators can determine whether their work has been used to train AI algorithms. Conversely, under the leadership of President Emmanuel Macron, the French government has privately opposed introducing rules that could hinder the competitiveness of European AI startups. French Finance Minister Bruno Le Maire has emphasised the need for Europe to be a world leader in AI, not merely a consumer of American and Chinese products. The AI Act acknowledges the need to balance the protection of trade secrets with the facilitation of rights for parties with legitimate interests, including copyright holders. However, striking this balance remains a significant challenge. Views differ across industries. Matthieu Riouf, CEO of the AI-powered image-editing firm Photoroom, compares the situation to culinary practices, claiming there’s a secret part of the recipe that the best chefs wouldn’t share. 
He represents just one instance on the laundry list of possible scenarios where this type of abuse could be rampant. However, Thomas Wolf, co-founder of one of the world’s top AI startups, Hugging Face, argues that while there will always be an appetite for transparency, it doesn’t mean that the entire industry will adopt a transparency-first approach. A series of recent controversies have driven home just how complicated this all is. OpenAI demonstrated the latest version of ChatGPT in a public session, where the company was roundly criticised for using a synthetic voice that sounded nearly identical to that of actress Scarlett Johansson. These examples point to the potential for AI technologies to violate personal and proprietary rights. Throughout the development of these regulations, there has been heated debate about their potential effects on future innovation and competitiveness in the AI world. In particular, the French government has urged that innovation, not regulation, should be the starting point, given the dangers of regulating aspects that have not been fully comprehended. The way the EU regulates AI transparency could have significant impacts on tech companies, digital creators, and the overall digital landscape. Policymakers thus face the challenge of fostering innovation in the dynamic AI industry while simultaneously guiding it towards safe, ethical decisions and preventing IP infringement. In sum, if adopted, the EU AI Act would be a significant step toward greater transparency in AI development. However, the practical implementation of these regulations and their industry results could be far off. Moving forward, especially at the dawn of this new regulatory paradigm, the balance between innovation, ethical AI development, and the protection of intellectual property will remain a central and contested issue for stakeholders of all stripes to grapple with. 
See also: Apple is reportedly getting free ChatGPT access The post EU AI legislation sparks controversy over data transparency appeared first on AI News.
  18. AI is in pretty much everyone’s conversations right now, with people using it (successfully and unsuccessfully) for a vast range of different things. Let’s face it: we’ve got stars in our eyes when it comes to AI right now – but what’s it doing to one of the biggest industries on our planet, the casino industry? How is it shaking up games from the core? Let’s find out! Many games are being totally revolutionised by AI stepping onto the scene, so let’s get into the nitty-gritty of which games are changing, what’s happening, and how AI is leaving its footprint on this world of online casino games! Personalisation in the slots First up: personalisation. AI really shines when it comes to personalising the slots, because an AI can analyse each player’s individual behaviour and start tailoring what the game shows to match. Imagine you’re playing at your favourite slot, and a bunch of free spins comes up – but none of them are quite what you wanted and they’re just not doing it for you today. We all know that feeling of disappointment… and honestly, it gets directed at the company, because why don’t they know you better than that? Isn’t marketing meant to be good these days? Well, AI is changing all that and cutting the frustration that comes with it! It is capable of tracking what bonuses you use and what games you play (and even when and how you play them), and that means that suddenly, casinos can offer much more tailored options when you’re playing on the slots. Free spins for your favourite game ever, just as you sit down to relax on a Friday night? Yes, that’s much more likely now! Personalised bonus games? These are also creeping onto the scene, along with game features that are specifically honed to tick your “like” box and give you the best possible gaming experience. And it’s only because of AI that this is becoming possible – sure, casinos tried to offer this kind of personalisation in the past, but it was simply too much for humans to manage. 
Of course, you must have been playing the slots for a while for this to work; the AI is dependent on having historical data of how you play and when you play to bring up these offers. The longer you’ve been playing and the more consistent your patterns are, the better the AI will be able to come up with offers that are perfect for you. As this system improves, maybe we’ll see slots that are completely unique to each individual – it could happen! And as AI becomes increasingly accepted and legislation is put into place for it, we’re going to see more and more innovation in this space. Learning about bluffing in poker How do you teach a computer to bluff? We’re not going to pretend it’s easy; it’s proven a major challenge for those building AIs, getting a computer to mimic a human’s ability to deceive other players. However, we’re pretty much there, and AIs can now be incorporated into the online world of poker – one of the most popular casino games on the planet. So, first off, they have created an AI that’s good at poker; there’s been major progress in advancing how the absolute best AI can play, and it’s doing well. However, that’s not actually enough for casinos: they don’t want an AI that can beat human players every time, because who would ever play against that? They need an AI that can understand nuance, make mistakes occasionally, and lose – but in convincing ways that are still satisfying to play against. Now that’s a real challenge! But if they’re successful, there are going to be big rewards: some people would much rather play against a computer than other humans, provided the computer makes a satisfying opponent. This is likely to be an ongoing process as the AIs master how to play in each context, but it already looks promising to us! Of course, there are those wary of teaching computers how to lie effectively… after all, sci-fi books and films have shown us exactly why that could be a bad idea. For the casino industry, though, it’s looking tantalising. 
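The personalisation described above boils down to mining a player's history for patterns and matching offers to them. A toy illustration (this is not any casino's actual system; the game names and offer wording are made up):

```python
# Toy illustration of history-driven personalisation as described above.
# (Hypothetical data and offer text; not a real casino's system.)
from collections import Counter

def favourite_game(history):
    """Return the most-played game from a list of session records."""
    counts = Counter(session["game"] for session in history)
    return counts.most_common(1)[0][0]

def personalised_offer(history):
    # Offer free spins on whichever game the player plays most often.
    return f"10 free spins on {favourite_game(history)}"

history = [
    {"game": "Starburst", "stake": 0.5},
    {"game": "Book of Dead", "stake": 1.0},
    {"game": "Starburst", "stake": 0.5},
]
print(personalised_offer(history))
```

The point the article makes holds even at this toy scale: the quality of the offer depends entirely on how much consistent history there is to count.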
Conclusion AI isn’t “big” in most casino games yet, because it hasn’t had time to infiltrate them… but we’re likely to see it edging in from the fringes and changing more and more things about how we play and enjoy games online as the years go by. It’s exciting to imagine how it might revolutionise classic games like poker, blackjack, roulette, the slots, and more. However, we’re just going to have to “wait and see” here, because AI is only just unfolding its metaphorical wings and starting to flap. The post AI in casino games: A whole new world waiting to be dealt appeared first on AI News.
  19. Apple’s newly-announced partnership with OpenAI – which brings ChatGPT capabilities to iOS 18, iPadOS 18, and macOS Sequoia – comes without any direct money exchange. According to a Bloomberg report by Mark Gurman, “Apple isn’t paying OpenAI as part of the partnership.” Instead, the Cupertino-based company is leveraging its massive user base and device ecosystem as currency. “Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments,” Gurman’s sources explained. Gurman notes that OpenAI could find a silver lining by encouraging Apple users to subscribe to ChatGPT Plus, priced at $20 per month. If subscribers sign up through Apple devices, the iPhone maker will likely even claim a commission. Apple’s AI strategy extends beyond OpenAI. The company is reportedly in talks to offer Google’s Gemini chatbot as an additional option later this year, signalling its intent to provide users with diverse AI experiences without necessarily having to make such major investments itself. (Image Credit: Apple) The long-term vision for Apple involves capturing a slice of the revenue generated from monetising chatbot results on its operating systems. This move anticipates a shift in user behaviour, with more people relying on AI assistants rather than traditional search engines like Google. While Apple’s AI plans are ambitious, challenges remain. The report highlights that the company has yet to secure a deal with a local Chinese provider for chatbot features, though discussions with local firms like Baidu and Alibaba are underway. Initially, Apple Intelligence will be limited to US English, with expanded language support planned for the following year. The Apple-OpenAI deal represents a novel approach to collaboration in the AI space, where brand exposure and technological integration are valued as much as, if not more than, direct financial compensation. 
See also: Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans The post Apple is reportedly getting free ChatGPT access appeared first on AI News.
  20. Elon Musk has dropped his lawsuit against OpenAI, the company he co-founded in 2015. Court filings from the Superior Court of California reveal that Musk called off the legal action on June 11th, just a day before an informal conference was scheduled to discuss the discovery process. Musk had initially sued OpenAI in March 2024, alleging breach of contracts, unfair business practices, and breach of fiduciary duty. He claimed that his contributions to the company were made “in exchange for and in reliance on promises that those assets were irrevocably dedicated to building AI for public benefit, with only safety as a countervailing concern.” The lawsuit sought remedies for “breach of contract, promissory estoppel, breach of fiduciary duty, unfair business practices, and accounting,” as well as specific performance, restitution, and damages. However, Musk’s filings to withdraw the case provided no explanation for abandoning the lawsuit. OpenAI had previously called Musk’s claims “incoherent” and argued that his inability to produce a contract made his breach claims difficult to prove, stating that documents provided by Musk “contradict his allegations as to the alleged terms of the agreement.” The withdrawal of the lawsuit comes at a time when Musk is strongly opposing Apple’s plans to integrate ChatGPT into its operating systems. During Apple’s keynote event announcing Apple Intelligence for iOS 18, iPadOS 18, and macOS Sequoia, Musk threatened to ban Apple devices from his companies, calling the integration “an unacceptable security violation.” Despite assurances from Apple and OpenAI that user data would only be shared with explicit consent and that interactions would be secure, Musk questioned Apple’s ability to ensure data security, stating, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. 
They’re selling you down the river.” Since bringing the lawsuit against OpenAI, Musk has also created his own AI company, xAI, and secured over $6 billion in funding for his plans to advance the Grok chatbot on his social network, X. While Musk’s reasoning for dropping the OpenAI lawsuit remains unclear, his actions suggest a potential shift in focus towards advancing his own AI endeavours while continuing to vocalise his criticism of OpenAI through social media rather than the courts. See also: DuckDuckGo releases portal giving private access to AI models The post Musk ends OpenAI lawsuit while slamming Apple’s ChatGPT plans appeared first on AI News.
  21. In the ever-evolving digital landscape, where AI is rapidly transforming the way we interact and communicate, WebUtility’s ChatGPT Prompt Generator emerges as a game-changer. This innovative tool empowers users to harness the full potential of ChatGPT, one of the most advanced language models developed by OpenAI. At its core, the ChatGPT Prompt Generator is designed to simplify the process of crafting tailored prompts for ChatGPT. By leveraging the tool’s intuitive interface, users can effortlessly create prompts that align with their specific needs, whether they’re seeking assistance with customer support, content creation, or creative writing endeavours. ChatGPT prompt generator tool features and benefits The beauty of this tool lies in its user-friendly approach. With just a few clicks, users can select the desired action, such as ‘Create’, ‘Explain’, ‘Analyse’ or ‘Write’, and then specify the focus area. This level of customisation ensures that the generated prompts are contextually relevant and tailored to the user’s requirements. But the true power of the ChatGPT Prompt Generator extends beyond mere convenience. By automating the prompt creation process, the tool saves users valuable time and effort, enabling them to engage with ChatGPT in a more efficient and productive manner. Gone are the days of generic or irrelevant responses – every conversation is now tailored to the user’s specific needs. One of the standout features of this tool is its ability to understand natural language and adapt to various contexts. Powered by cutting-edge AI technology, the ChatGPT Prompt Generator ensures that the generated prompts are thoughtful, contextually appropriate, and designed to elicit meaningful responses from ChatGPT. Whether you’re a business professional seeking to streamline customer interactions, a content creator looking to generate engaging material, or a writer exploring new creative avenues, the ChatGPT Prompt Generator is your ultimate companion. 
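The action-plus-focus-area workflow described above can be sketched as a simple template lookup. This is a minimal illustration of the general idea, not WebUtility's actual implementation; the template wordings are invented for the example.

```python
# Minimal sketch of an action + focus-area prompt generator, as the
# article describes. (Illustrative only; not WebUtility's real code.)

TEMPLATES = {
    "Create": "Create {focus}. Include concrete details and a clear structure.",
    "Explain": "Explain {focus} in simple terms, with one worked example.",
    "Analyse": "Analyse {focus} and summarise the key strengths and weaknesses.",
    "Write": "Write {focus} in a professional, engaging tone.",
}

def generate_prompt(action, focus):
    """Combine a selected action with a user-supplied focus area."""
    if action not in TEMPLATES:
        raise ValueError(f"Unsupported action: {action}")
    return TEMPLATES[action].format(focus=focus)

print(generate_prompt("Explain", "how transformers use attention"))
```

A real tool would layer richer context handling on top, but the core value is the same: the user picks an action and a focus, and receives a well-formed prompt without writing it by hand.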
By harnessing the power of AI, this tool empowers you to unlock the limitless potential of ChatGPT and elevate your conversations to new heights. For those seeking to explore the vast realm of AI tools further, the AI Tools Directory at AI Parabellum is a treasure trove of resources. This comprehensive directory curates a wide range of AI-powered tools, spanning various domains and applications, ensuring that users can find the perfect solution for their specific needs. Final words In the rapidly evolving world of AI, WebUtility’s ChatGPT Prompt Generator stands as a beacon of innovation, empowering users to harness the power of cutting-edge technology and unlock new realms of possibility. Embrace the future of AI-driven conversations. The post ChatGPT Prompt Generator: Unleashing the power of AI conversations appeared first on AI News.
  22. DuckDuckGo has released a platform that allows users to interact with popular AI chatbots privately, ensuring that their data remains secure and protected. The service, accessible at Duck.ai, is globally available and features a light and clean user interface. Users can choose from four AI models: two closed-source models and two open-source models. The closed-source models are OpenAI’s GPT-3.5 Turbo and Anthropic’s Claude 3 Haiku, while the open-source models are Meta’s Llama-3 70B and Mistral AI’s Mixtral 8x7b. What sets DuckDuckGo AI Chat apart is its commitment to user privacy. Neither DuckDuckGo nor the chatbot providers can use user data to train their models, ensuring that interactions remain private and anonymous. DuckDuckGo also strips away metadata, such as server or IP addresses, so that queries appear to originate from the company itself rather than individual users. The company has agreements in place with all model providers to ensure that any saved chats are completely deleted within 30 days, and that none of the chats made on the platform can be used to train or improve the models. This makes preserving privacy easier than changing the privacy settings for each service. In an era where online services are increasingly hungry for user data, DuckDuckGo’s AI Chat service is a breath of fresh air. The company’s commitment to privacy is a direct response to the growing concerns about data collection and usage in the AI industry. By providing a private and anonymous platform for users to interact with AI chatbots, DuckDuckGo is setting a new standard for the industry. DuckDuckGo’s AI service is free to use within a daily limit, and the company is considering launching a paid tier to reduce or eliminate these limits. The service is designed to be a complementary partner to its search engine, allowing users to switch between search and AI chat for a more comprehensive search experience. 
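The metadata-stripping approach described above amounts to an anonymising proxy: identifying request metadata is dropped before a query is forwarded, so the upstream model provider sees the proxy rather than the user. A conceptual sketch of that idea (illustrative only; header names and the proxy hostname are invented, and this is not DuckDuckGo's actual implementation):

```python
# Conceptual sketch of an anonymising proxy: user-identifying metadata
# is removed before a chat request is forwarded upstream.
# (Hypothetical header names and hostname; not DuckDuckGo's real code.)

IDENTIFYING_HEADERS = {"x-forwarded-for", "user-agent", "cookie", "referer"}

def scrub_request(headers, body):
    """Drop identifying headers and substitute the proxy's own origin."""
    forwarded = {k: v for k, v in headers.items()
                 if k.lower() not in IDENTIFYING_HEADERS}
    forwarded["x-origin"] = "proxy.example.com"  # upstream sees the proxy
    return forwarded, body

headers = {"User-Agent": "Mozilla/5.0", "Cookie": "id=123",
           "Accept": "application/json"}
clean, _ = scrub_request(headers, {"prompt": "hello"})
print(clean)
```

The design choice worth noting: because scrubbing happens at the proxy, the privacy guarantee does not depend on each model provider's own settings, which is why the article calls this easier than adjusting privacy options per service.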
“We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points,” the company explained. “If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw.” To accommodate that user workflow, DuckDuckGo has made AI Chat accessible through DuckDuckGo Private Search for quick access. The launch of DuckDuckGo AI Chat comes at a time when the AI industry is facing increasing scrutiny over data privacy and usage. The service is a welcome addition for privacy-conscious individuals, joining the recent launch of Venice AI by crypto entrepreneur Erik Voorhees. Venice AI features an uncensored AI chatbot and image generator that doesn’t require accounts and doesn’t retain data. As the AI industry continues to evolve, it’s clear that privacy will remain a top concern for users. With the launch of DuckDuckGo AI Chat, the company is taking a significant step towards providing users with a private and secure platform for interacting with AI chatbots. See also: AI pioneers turn whistleblowers and demand safeguards
The post DuckDuckGo releases portal giving private access to AI models appeared first on AI News.
  23. OpenAI is facing a wave of internal strife and external criticism over its practices and the potential risks posed by its technology. In May, several high-profile employees departed from the company, including Jan Leike, the former head of OpenAI’s “super alignment” efforts to ensure advanced AI systems remain aligned with human values. Leike’s exit came shortly after OpenAI unveiled its new flagship GPT-4o model, which it touted as “magical” at its Spring Update event. According to reports, Leike’s departure was driven by constant disagreements over security measures, monitoring practices, and the prioritisation of flashy product releases over safety considerations. Leike’s exit has opened a Pandora’s box for the AI firm. Former OpenAI board members have come forward with allegations of psychological abuse levelled against CEO Sam Altman and the company’s leadership. The growing internal turmoil at OpenAI coincides with mounting external concerns about the potential risks posed by generative AI technology like the company’s own language models. Critics have warned about the imminent existential threat of advanced AI surpassing human capabilities, as well as more immediate risks like job displacement and the weaponisation of AI for misinformation and manipulation campaigns. In response, a group of current and former employees from OpenAI, Anthropic, DeepMind, and other leading AI companies have penned an open letter addressing these risks. “We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies,” the letter states. “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. 
AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.” The letter, which has been signed by 13 employees and endorsed by AI pioneers Yoshua Bengio and Geoffrey Hinton, outlines four core demands aimed at protecting whistleblowers and fostering greater transparency and accountability around AI development:
That companies will not enforce non-disparagement clauses or retaliate against employees for raising risk-related concerns.
That companies will facilitate a verifiably anonymous process for employees to raise concerns to boards, regulators, and independent experts.
That companies will support a culture of open criticism and allow employees to publicly share risk-related concerns, with appropriate protection of trade secrets.
That companies will not retaliate against employees who share confidential risk-related information after other processes have failed.
“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood,” said Daniel Kokotajlo, a former OpenAI employee who left due to concerns over the company’s values and lack of responsibility. The demands come amid reports that OpenAI has forced departing employees to sign non-disclosure agreements preventing them from criticising the company or risk losing their vested equity. OpenAI CEO Sam Altman admitted being “embarrassed” by the situation but claimed the company had never actually clawed back anyone’s vested equity. As the AI revolution charges forward, the internal strife and whistleblower demands at OpenAI underscore the growing pains and unresolved ethical quandaries surrounding the technology. See also: OpenAI disrupts five covert influence operations
The post AI pioneers turn whistleblowers and demand safeguards appeared first on AI News.
  24. Amazon will harness computer vision and AI to ensure customers receive products in pristine condition and further its sustainability efforts. The initiative – dubbed “Project P.I.” (short for “private investigator”) – operates within Amazon fulfilment centres across North America, where it will scan millions of products daily for defects. Project P.I. leverages generative AI and computer vision technologies to detect issues such as damaged products or incorrect colours and sizes before they reach customers. The AI model not only identifies defects but also helps uncover the root causes, enabling Amazon to implement preventative measures upstream. This system has proven highly effective in the sites where it has been deployed, accurately identifying product issues among the vast number of items processed each month. Before any item is dispatched, it passes through an imaging tunnel where Project P.I. evaluates its condition. If a defect is detected, the item is isolated and further investigated to determine if similar products are affected. Amazon associates review the flagged items and decide whether to resell them at a discount via Amazon’s Second Chance site, donate them, or find alternative uses. This technology aims to act as an extra pair of eyes, enhancing manual inspections at several North American fulfilment centres, with plans for expansion throughout 2024. Dharmesh Mehta, Amazon’s VP of Worldwide Selling Partner Services, said: “We want to get the experience right for customers every time they shop in our store. “By leveraging AI and product imaging within our operations facilities, we are able to efficiently detect potentially damaged products and address more of those issues before they ever reach a customer, which is a win for the customer, our selling partners, and the environment.” Project P.I. also plays a crucial role in Amazon’s sustainability initiatives. 
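The scan-then-route workflow described above (imaging tunnel, isolation of flagged items, associate review) can be sketched as a simple decision pipeline. Function names, labels, and the stand-in detector are all hypothetical; this is an illustration of the flow, not Amazon's system.

```python
# Hedged sketch of the inspection flow the article describes: scan each
# item, isolate flagged ones for associate review, dispatch the rest.
# (Illustrative names only; not Amazon's actual implementation.)

def inspect(item, detect_defect):
    """Return a routing decision for one item passing the imaging tunnel."""
    if detect_defect(item):
        # Associates then decide: discounted resale, donation, or other uses.
        return "isolate_for_review"
    return "dispatch"

# Stand-in detector; in reality this would be a computer-vision model.
def mock_detector(item):
    return item.get("damaged", False)

items = [{"id": 1, "damaged": False}, {"id": 2, "damaged": True}]
decisions = [inspect(i, mock_detector) for i in items]
print(decisions)
```

Separating the detector from the routing logic mirrors the article's description: the model flags, but humans make the final disposition call.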
By preventing damaged or defective items from reaching customers, the system helps reduce unwanted returns, wasted packaging, and unnecessary carbon emissions from additional transportation.

Kara Hurst, Amazon’s VP of Worldwide Sustainability, commented: “AI is helping Amazon ensure that we’re not just delighting customers with high-quality items, but we’re extending that customer obsession to our sustainability work by preventing less-than-perfect items from leaving our facilities, and helping us avoid unnecessary carbon emissions due to transportation, packaging, and other steps in the returns process.”

In parallel, Amazon is utilising a generative AI system equipped with a multi-modal large language model (MLLM) to investigate the root causes of negative customer experiences. When defects reported by customers slip through initial checks, this system reviews customer feedback and analyses images from fulfilment centres to understand what went wrong. For example, if a customer receives the wrong size of a product, the system examines the product labels in fulfilment centre images to pinpoint the error.

This technology also benefits Amazon’s selling partners, especially the small and medium-sized businesses that make up over 60% of Amazon’s sales. By making defect data more accessible, Amazon helps these sellers rectify issues quickly and reduce future errors.

(Photo by Andrew Stickelman)

See also: X now permits AI-generated ****** content

The post Amazon will use computer vision to spot defects before dispatch appeared first on AI News. View the full article
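The dispatch flow the article describes (scan in an imaging tunnel, flag a defect, isolate the item, have an associate route it) can be sketched as a simple triage routine. This is an illustrative reconstruction only, not Amazon's implementation; the `ScanResult` type, the `triage` function, and the 0.5 score threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    """Hypothetical output of a vision model in the imaging tunnel."""
    item_id: str
    defect_score: float      # 0.0 (pristine) to 1.0 (clearly damaged)
    sellable_as_used: bool   # an associate's judgment on a flagged item

def triage(scan: ScanResult, threshold: float = 0.5) -> str:
    """Route a scanned item: dispatch if clean, otherwise isolate and resolve."""
    if scan.defect_score < threshold:
        return "dispatch"
    # Flagged items are pulled aside for review rather than shipped,
    # mirroring the resell-at-discount / donate outcomes in the article.
    return "resell_discounted" if scan.sellable_as_used else "donate"
```

A flagged item never reaches the customer, which is the point of the system: the costly path (return shipping, repackaging) is replaced by a cheap in-warehouse decision.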
  25. TickLab, founded by visionary CTO Yasir Albayati, is at the forefront of innovation in the financial sector, specialising in deploying advanced decentralised AI in finance. Our company operates as a quantitative hedge fund, focusing on the crypto, stock, and forex markets. With the launch of our cutting-edge Quantitative Decentralised AI Hedge Fund, we offer investors the unparalleled opportunity to capitalise on market movements at microsecond speed.

At TickLab, we are committed to making the power of our Quant Hedge Fund resources available with just a single click. This ease of access ensures that our clients can seamlessly integrate our advanced financial tools into their investment strategies.

A cornerstone of our innovation is E.D.I.T.H., an AI language model meticulously developed and trained by TickLab.IO. Unlike general-purpose AI models such as ChatGPT, Bard, or Grok, E.D.I.T.H. is designed specifically for the finance and real estate industries. It provides comprehensive services including financial analysis, investment advice, portfolio management, market predictions, real estate insights, regulatory compliance, and risk management. Leveraging extensive financial and real estate data, E.D.I.T.H. delivers accurate and relevant information, making it an indispensable tool for professionals in these fields.

Harnessing the Power of Machine Learning and Deep Learning

At TickLab, our innovative approach is deeply rooted in the advanced capabilities of machine learning (ML) and deep learning (DL). Our quant hedge fund leverages these technologies to analyse vast amounts of data, identifying patterns and trends that are invisible to traditional financial analysis methods. By utilising sophisticated ML algorithms, we can predict market movements with high precision, allowing us to execute trades at optimal times.

Deep learning, a subset of ML, plays a crucial role in our data analysis and decision-making processes.
Our deep learning models are designed to process complex data sets, learning from historical data to make informed predictions about future market behaviour. This enables us to create robust trading strategies that adapt to ever-changing market conditions.

Artificial Intelligence: The Future of Finance

Artificial intelligence (AI) is the backbone of TickLab’s operations. Our AI systems are designed to perform tasks that traditionally require human intelligence, such as analysing market trends, managing portfolios, and providing investment advice. By automating these processes, we not only increase efficiency but also reduce the potential for human error.

Our AI-driven approach extends beyond simple automation. We develop intelligent systems that continuously learn and improve, ensuring that our hedge fund stays ahead of the curve. This dynamic learning capability allows us to refine our strategies and maintain a competitive edge in the fast-paced world of finance.

Connecting through Advanced APIs

Our sophisticated API connects seamlessly to our quant auto-trading systems, ensuring that our clients can leverage the full potential of our AI-powered solutions. By integrating with our API, clients gain access to real-time data and analytics, enabling them to make informed investment decisions quickly and efficiently. This integration ensures that our advanced trading algorithms are accessible and easy to use, empowering investors to maximise their returns.

At TickLab, we are not just keeping up with the future of finance; we are leading the way. Join us on this exciting journey and experience the future of investment and financial analysis today.

Follow TickLab.IO on Twitter and visit their website to find out more. Done by XLAB Research.

The post TickLab: Revolutionizing Finance with AI-Powered Quant Hedge Fund and E.D.I.T.H. appeared first on AI News. View the full article
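The idea of algorithmically detecting patterns in price data and timing trades can be illustrated with a deliberately simple, classic signal: a moving-average crossover. This is a generic textbook sketch for illustration only, not TickLab's strategy or code; the function names and window sizes are assumptions, and real quant systems use far richer models.

```python
def sma(prices, window):
    """Simple moving average over the last `window` prices."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=3, slow=5):
    """Emit 'buy' when the fast SMA crosses above the slow SMA,
    'sell' when it crosses below, and 'hold' otherwise."""
    if len(prices) < slow + 1:
        return "hold"
    prev_fast, prev_slow = sma(prices[:-1], fast), sma(prices[:-1], slow)
    cur_fast, cur_slow = sma(prices, fast), sma(prices, slow)
    if prev_fast <= prev_slow and cur_fast > cur_slow:
        return "buy"   # short-term momentum turning up
    if prev_fast >= prev_slow and cur_fast < cur_slow:
        return "sell"  # short-term momentum turning down
    return "hold"
```

On a series that falls and then rebounds, such as `[5, 4, 3, 2, 1, 2, 5]`, the fast average overtakes the slow one on the last tick and the function returns `"buy"`. Machine-learning approaches generalise this idea: instead of two hand-picked windows, a model learns which combinations of features predict the next move.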