Posted April 10, 2024 by Pelican Press

Google's AI just got ears

AI chatbots are already capable of "seeing" the world through images and video. Now, Google has announced audio understanding as part of its latest update to Gemini Pro. In Gemini 1.5 Pro, the chatbot can "hear" audio files uploaded into its system and extract text information from them. The company has made this version of the model available as a public preview on its Vertex AI development platform. This opens the feature up to more enterprise-focused users and expands its base after a more limited rollout in February, when the model was first announced and offered only to a small group of developers and enterprise customers.

1. Breaking down + understanding a long video

I uploaded the entire NBA dunk contest from last night and asked which dunk had the highest score. Gemini 1.5 was incredibly able to find the specific perfect 50 dunk and details from just its long context video understanding!

— Rowan Cheung (@rowancheung)

Google shared the details about the update at its Cloud Next conference, which is currently taking place in Las Vegas. After calling the Gemini Ultra LLM that powers its Gemini Advanced chatbot the most powerful model in the Gemini family, Google is now calling Gemini 1.5 Pro its most capable generative model. The company added that this version is better at learning new tasks without additional fine-tuning.

Gemini 1.5 Pro is multimodal in that it can interpret different types of audio into text, including TV shows, movies, radio broadcasts, and conference call recordings. It is also multilingual, able to process audio in several different languages. The LLM may be able to create transcripts from videos as well, though the quality of those transcripts can be unreliable.

When the model was first announced, Google explained that Gemini 1.5 Pro uses a token system to process raw data. A million tokens equate to approximately 700,000 words or 30,000 lines of code; in media terms, that is about an hour of video or around 11 hours of audio.

Some private preview demos of Gemini 1.5 Pro have shown how the LLM can find specific moments in a video. For example, Rowan Cheung got early access and detailed how his demo found an exact action shot in a sports contest and summarized the event, as seen in the tweet embedded above. Google noted that other early adopters, including United Wholesale Mortgage, TBS, and Replit, are opting for more enterprise-focused use cases, such as mortgage underwriting, automating metadata tagging, and generating, explaining, and updating code.
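For developers who want to experiment with the new audio capability in the public preview, a minimal sketch using the Vertex AI Python SDK might look like the following. The project ID, region, Cloud Storage path, and the exact preview model name are placeholder assumptions for illustration, not details from Google's announcement.

```python
# Minimal sketch: asking Gemini 1.5 Pro on Vertex AI to transcribe an audio file.
# Project, region, bucket path, and model name below are hypothetical placeholders.
import vertexai
from vertexai.generative_models import GenerativeModel, Part

# Assumed project and region -- replace with your own Google Cloud settings.
vertexai.init(project="my-project", location="us-central1")

# Model identifier is an assumption; check Vertex AI for the current
# Gemini 1.5 Pro preview name.
model = GenerativeModel("gemini-1.5-pro-preview-0409")

# The audio file is referenced from a (hypothetical) Cloud Storage bucket.
audio = Part.from_uri("gs://my-bucket/conference-call.mp3", mime_type="audio/mpeg")

# Send the audio along with a text prompt and print the model's reply.
response = model.generate_content(
    [audio, "Transcribe this recording and summarize the key points."]
)
print(response.text)
```

The same pattern should apply to the other media types the article mentions, such as radio broadcasts or long video uploads, by swapping the file reference and MIME type.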