Pelican Press · Diamond Member · Posted March 23

Nvidia Releases Chat With RTX, an AI Chatbot That Runs Locally on Windows PC

Nvidia has released an artificial intelligence (AI)-powered chatbot called Chat with RTX that runs locally on a PC and does not need an Internet connection. The GPU maker has been at the forefront of the AI industry since the generative AI boom, with its advanced AI chips powering AI products and services, and it also offers an AI platform that provides end-to-end solutions for enterprises. The company is now building its own chatbots, and Chat with RTX is its first offering, currently available as a free demo app.

Calling it a personalised AI chatbot, Nvidia announced the tool on Tuesday (February 13). To download the software, users need a Windows PC or workstation with an RTX 30- or 40-series GPU and a minimum of 8GB of VRAM. Once downloaded, the app installs in a few clicks and can be used right away.

Since it is a local chatbot, Chat with RTX has no knowledge of the outside world. However, users can feed it their own personal data, such as documents and files, and customise it to run queries on them. One such use case is feeding it large volumes of work-related documents and then asking it to summarise, analyse, or answer a specific question that would otherwise take hours to find manually. Similarly, it can be an effective research tool for skimming multiple studies and papers. It supports text, PDF, DOC/DOCX, and XML file formats. Additionally, the tool accepts YouTube video and playlist URLs and, using the transcriptions of the videos, can answer queries about them or summarise them. This functionality requires Internet access.
As per the demo video, Chat with RTX is essentially a local Web server along with a Python instance, and the large language model (LLM) itself is not included in the fresh download. Users can select either the Mistral or Llama 2 model and then run queries against their own data. The company states that the chatbot leverages open-source projects, using retrieval-augmented generation (RAG), TensorRT-LLM, and RTX acceleration for its functionality. According to a report by The Verge, the app is approximately 40GB in size and the Python instance can occupy up to 3GB of RAM. One particular issue pointed out by the publication is that the chatbot creates JSON files inside the folders you ask it to index, so feeding it your entire documents folder or a large parent folder might be troublesome.

Affiliate links may be automatically generated – see our ethics statement for details.
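The retrieval-augmented generation (RAG) pattern mentioned above works in two stages: first retrieve the locally indexed document chunks most relevant to a query, then hand only those chunks to the language model as context. The sketch below illustrates just the retrieval stage with a toy bag-of-words cosine similarity; it is not Nvidia's implementation, and in Chat with RTX the generation step would be handled by the local TensorRT-LLM model (Mistral or Llama 2) rather than the `print` placeholder here.

```python
# Minimal sketch of the retrieval step in a RAG pipeline (illustration only,
# not Nvidia's code): rank local documents against a query and return the
# best matches, which a real pipeline would prepend to the LLM prompt.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorize(d)),
                    reverse=True)
    return ranked[:k]

docs = [
    "Quarterly report: revenue grew 12 percent on data-center demand.",
    "Meeting notes: the office move is scheduled for June.",
]
context = retrieve("how much did revenue grow", docs)
# A real pipeline would now send `context` plus the question to the local LLM;
# here we simply show which chunk the retriever picked.
print(context[0])
```

Production systems replace the bag-of-words vectors with learned embeddings, but the shape of the pipeline, index locally, retrieve by similarity, then generate, is the same one Chat with RTX runs on-device.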