LangChain Multi-Query Retriever for RAG
James Briggs

Published Oct 28, 2023

In this video, we'll learn about an advanced technique for RAG in LangChain called "Multi-Query". Multi-query allows us to broaden our search scope by using an LLM to turn one query into multiple queries, letting us cover a broader area of the vector space and return a greater variety of results. In this example, we use OpenAI's text-embedding-ada-002, gpt-3.5-turbo, the Pinecone vector database, and of course the LangChain library.
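For reference, here's a rough sketch of that setup in Python. It's not the exact notebook code (see the GitHub link below); the index name, environment variables, and example question are placeholders.

```python
import os

import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.vectorstores import Pinecone

# Assumes OPENAI_API_KEY is set and a populated Pinecone index already exists;
# "langchain-multi-query-demo" is a placeholder index name.
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)

embed = OpenAIEmbeddings(model="text-embedding-ada-002")
vectorstore = Pinecone.from_existing_index("langchain-multi-query-demo", embed)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# The LLM rewrites the single input question into several variants; each variant
# is embedded and searched separately, and the combined set of retrieved
# documents is returned.
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=llm,
)

docs = retriever.get_relevant_documents("What is multi-query retrieval?")
```

From there, the retrieved documents can be passed into a generation step, which the video builds out into a full RAG pipeline.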

📌 Code:
https://github.com/pinecone-io/exampl...

🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-si...

👋🏼 AI Consulting:
https://aurelio.ai

👾 Discord:
  / discord  

Twitter:   / jamescalam  
LinkedIn:   / jamescalam  

👉 Thumbnail credit @LaCarnevali

00:00 LangChain Multi-Query
00:31 What is Multi-Query in RAG?
01:50 RAG Index Code
02:56 Creating a LangChain MultiQueryRetriever
07:16 Adding Generation to Multi-Query
08:51 RAG in LangChain using Sequential Chain
11:18 Customizing LangChain Multi Query
13:41 Reducing Multi Query Hallucination
16:56 Multi Query in a Larger RAG Pipeline
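For the "Customizing LangChain Multi Query" and "Reducing Multi Query Hallucination" chapters, the general pattern is to swap the retriever's default query-generation prompt for your own LLMChain and output parser. Below is a rough sketch of that pattern; the prompt wording is illustrative, not the one used in the video, and it reuses the `vectorstore` from the sketch above.

```python
from typing import List

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain.retrievers.multi_query import MultiQueryRetriever
from pydantic import BaseModel, Field


class LineList(BaseModel):
    # Container for the generated queries, one per line.
    lines: List[str] = Field(description="Lines of text")


class LineListOutputParser(PydanticOutputParser):
    # Splits the raw LLM output into individual queries.
    def __init__(self) -> None:
        super().__init__(pydantic_object=LineList)

    def parse(self, text: str) -> LineList:
        return LineList(lines=[l for l in text.strip().split("\n") if l.strip()])


# Illustrative prompt: constraining the model to stay on the topic of the
# original question is one way to reduce hallucinated query variants.
QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template=(
        "Generate 3 different versions of the user question below to retrieve "
        "relevant documents from a vector database. Stay strictly on the topic "
        "of the original question.\n"
        "Original question: {question}"
    ),
)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=LineListOutputParser())

# `vectorstore` is the Pinecone vector store created in the earlier sketch.
custom_retriever = MultiQueryRetriever(
    retriever=vectorstore.as_retriever(),
    llm_chain=llm_chain,
    parser_key="lines",
)
```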

#artificialintelligence #nlp #ai #openai #chatbot #langchain #vectordb
