
Building RAG Applications with LangChain

Overview

RAG (Retrieval-Augmented Generation) combines document retrieval with LLM generation. Instead of relying solely on the model's training data, RAG fetches relevant context from your documents.

Architecture

Query → Embed → Vector Search → Retrieve Docs → LLM + Context → Response

Installation

```bash
pip install langchain langchain-community chromadb sentence-transformers
```

Loading Documents

```python
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader

# Single PDF
loader = PyPDFLoader("document.pdf")
docs = loader.load()

# Directory of files (recursive glob)
loader = DirectoryLoader("./docs", glob="**/*.pdf")
docs = loader.load()
```

Splitting Documents

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
    separators=["\n\n", "\n", " ", ""]
)
chunks = splitter.split_documents(docs)
```

Creating Embeddings

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
```

Vector Store

```python
from langchain_community.vectorstores import Chroma

vectorstore = Chroma.from_documents(
    documents=chunks,
    embedding=embeddings,
    persist_directory="./chroma_db"
)

# Search
results = vectorstore.similarity_search("What is machine learning?", k=3)
```

RAG Chain

```python
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA

llm = Ollama(model="mistral")
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True
)

response = qa_chain.invoke({"query": "Summarize the main findings"})
print(response["result"])
```

Production Tips

- Chunk size: 500-1000 tokens works well for most use cases
- Overlap: 10-20% overlap prevents context loss at chunk boundaries
- Reranking: use a cross-encoder to rerank retrieved documents (see the first sketch after Key Resources)
- Hybrid search: combine vector search with keyword search such as BM25 (see the second sketch after Key Resources)

Key Resources

- LangChain Documentation
- ChromaDB
- RAG Paper
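To make the reranking tip concrete, here is a minimal sketch, not from the original tutorial: it assumes the `vectorstore` built above and the publicly available `cross-encoder/ms-marco-MiniLM-L-6-v2` model from sentence-transformers, and the candidate count of 20 and final cut of 3 are illustrative choices.

```python
from sentence_transformers import CrossEncoder

# Over-fetch candidates with cheap vector search, then rescore with a cross-encoder
query = "What is machine learning?"
candidates = vectorstore.similarity_search(query, k=20)

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, doc.page_content) for doc in candidates])

# Keep the 3 highest-scoring documents
top_docs = [doc for _, doc in sorted(
    zip(scores, candidates), key=lambda pair: pair[0], reverse=True
)][:3]
```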
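For the hybrid-search tip, a hedged sketch using LangChain's `BM25Retriever` and `EnsembleRetriever`: it assumes the `chunks` and `vectorstore` from the examples above and the `rank_bm25` package installed, and the 0.4/0.6 weights are a starting point to tune, not a recommendation from the post.

```python
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever

# Keyword retriever built over the same chunks as the vector store
bm25_retriever = BM25Retriever.from_documents(chunks)
bm25_retriever.k = 3

vector_retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

# Blend keyword and vector results with illustrative weights
hybrid_retriever = EnsembleRetriever(
    retrievers=[bm25_retriever, vector_retriever],
    weights=[0.4, 0.6]
)

docs = hybrid_retriever.invoke("What is machine learning?")
```

The resulting `hybrid_retriever` can be passed to `RetrievalQA.from_chain_type` in place of the plain vector retriever.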

December 1, 2025 · 1 min · 206 words · BlogIA Team

Vector Databases Explained

Overview

Vector databases store and search high-dimensional embeddings. They're essential for semantic search, recommendation systems, and RAG applications.

How They Work

Text → Embedding Model → Vector [0.1, -0.3, 0.8, ...] → Store → ANN Search

Database Comparison

| Database | Hosting | Open Source | Best For |
| --- | --- | --- | --- |
| Pinecone | Managed | No | Production, scale |
| Weaviate | Both | Yes | Hybrid search |
| Chroma | Self-host | Yes | Local dev, RAG |
| Milvus | Both | Yes | Large scale |
| Qdrant | Both | Yes | Filtering |
| pgvector | Self-host | Yes | Postgres users |

Chroma (Local Development)

```python
import chromadb
from chromadb.utils import embedding_functions

client = chromadb.Client()
ef = embedding_functions.SentenceTransformerEmbeddingFunction()
collection = client.create_collection("docs", embedding_function=ef)

collection.add(
    documents=["Machine learning is...", "Deep learning uses..."],
    ids=["doc1", "doc2"]
)

results = collection.query(query_texts=["What is ML?"], n_results=2)
```

Pinecone (Production)

```python
from pinecone import Pinecone

pc = Pinecone(api_key="your-key")
index = pc.Index("my-index")

index.upsert(vectors=[
    {"id": "doc1", "values": [0.1, 0.2, ...], "metadata": {"source": "wiki"}}
])

results = index.query(vector=[0.1, 0.2, ...], top_k=5, include_metadata=True)
```

Qdrant (Filtering)

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Filter, FieldCondition, MatchValue

client = QdrantClient("localhost", port=6333)

# Search with filters
results = client.search(
    collection_name="docs",
    query_vector=[0.1, 0.2, ...],
    query_filter=Filter(
        must=[FieldCondition(key="category", match=MatchValue(value="tech"))]
    ),
    limit=10
)
```

Choosing a Database

- Prototyping: Chroma (no setup)
- Production SaaS: Pinecone (managed)
- Self-hosted scale: Milvus or Qdrant
- Existing Postgres: pgvector (see the sketch after Key Resources)
- Hybrid search: Weaviate

Key Resources

- Chroma
- Pinecone
- Qdrant
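For the "Existing Postgres" route, here is a minimal pgvector sketch in the same spirit as the snippets above. It is an assumption-laden illustration, not an excerpt from the post: a local database named `mydb`, psycopg2 as the driver, and 384-dimensional vectors to match all-MiniLM-L6-v2 embeddings.

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
cur = conn.cursor()

# Enable the extension and create a table with a 384-dim vector column
# (384 matches all-MiniLM-L6-v2; adjust for your embedding model)
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS docs (
        id serial PRIMARY KEY,
        content text,
        embedding vector(384)
    )
""")

# pgvector accepts vectors as '[x1,x2,...]' text literals
embedding = [0.0] * 384  # placeholder; use a real embedding model here
literal = "[" + ",".join(str(x) for x in embedding) + "]"
cur.execute(
    "INSERT INTO docs (content, embedding) VALUES (%s, %s::vector)",
    ("Machine learning is...", literal),
)
conn.commit()

# Nearest-neighbour search with the L2 distance operator <->
cur.execute(
    "SELECT content FROM docs ORDER BY embedding <-> %s::vector LIMIT 5",
    (literal,),
)
print(cur.fetchall())
```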

December 1, 2025 · 1 min · 199 words · BlogIA Team

Automate CVE Analysis with LLMs and RAG 🚀

Practical tutorial: Automate CVE analysis with LLMs and RAG

January 8, 2026 · 4 min · 672 words · BlogIA Academy