
# Building RAG Applications with LangChain

## Overview

RAG (Retrieval-Augmented Generation) combines document retrieval with LLM generation. Instead of relying solely on the model's training data, RAG fetches relevant context from your documents.

## Architecture

Query → Embed → Vector Search → Retrieve Docs → LLM + Context → Response

## Installation

```bash
pip install langchain langchain-community chromadb sentence-transformers
```

## Loading Documents

```python
from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader

# Single PDF
loader = PyPDFLoader("document.pdf")
docs = loader.load()

# Directory of files (recursive glob)
loader = DirectoryLoader("./docs", glob="**/*.pdf")
docs = loader.load()
```

## Splitting Documents

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
    separators=["\n\n", "\n", " ", ""]
)
chunks = splitter.split_documents(docs)
```

## Creating Embeddings

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
```

## Vector Store

```python
from langchain_community.vectorstores import Chroma

vectorstore = Chroma.from_documents(
    documents=chunks,
    embedding=embeddings,
    persist_directory="./chroma_db"
)

# Search
results = vectorstore.similarity_search("What is machine learning?", k=3)
```

## RAG Chain

```python
from langchain_community.llms import Ollama
from langchain.chains import RetrievalQA

llm = Ollama(model="mistral")
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True
)

response = qa_chain.invoke({"query": "Summarize the main findings"})
print(response["result"])
```

## Production Tips

- Chunk size: 500-1000 tokens works well for most use cases
- Overlap: 10-20% overlap prevents context loss at boundaries
- Reranking: use a cross-encoder to rerank retrieved documents (sketched after the resources below)
- Hybrid search: combine vector search with keyword search (BM25), also sketched below

## Key Resources

- LangChain Documentation
- ChromaDB
- RAG Paper
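
## Hybrid Search and Reranking (Sketch)

The last two production tips are easy to bolt onto the chain above. The snippets below are minimal sketches rather than the only way to do it: they assume the `chunks`, `vectorstore`, and `llm` objects from the earlier examples, the extra `rank_bm25` package for the keyword side, and a commonly used cross-encoder checkpoint that you can swap for any other.

```python
# Sketch: hybrid retrieval (BM25 keywords + vectors) for the same RetrievalQA chain.
# Assumes `chunks` and `vectorstore` from above, plus `pip install rank_bm25`.
from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever

bm25 = BM25Retriever.from_documents(chunks)  # keyword (BM25) retriever
bm25.k = 3
vector_retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

# Blend keyword and vector results; the weights are a starting point to tune
hybrid = EnsembleRetriever(retrievers=[bm25, vector_retriever], weights=[0.4, 0.6])
```

The `hybrid` retriever drops into `RetrievalQA.from_chain_type(..., retriever=hybrid)` exactly like the plain vector retriever. Reranking wraps the retrieval step in a similar way:

```python
# Sketch: over-fetch candidates, then rerank them with a cross-encoder before
# they reach the LLM. The model name is one common choice, not a requirement.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "Summarize the main findings"
candidates = vectorstore.similarity_search(query, k=10)
scores = reranker.predict([(query, d.page_content) for d in candidates])
top_docs = [d for _, d in sorted(zip(scores, candidates),
                                 key=lambda pair: pair[0], reverse=True)][:3]
```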

December 1, 2025 · 1 min · 206 words · BlogIA Team

# Vector Databases Explained

## Overview

Vector databases store and search high-dimensional embeddings. They're essential for semantic search, recommendation systems, and RAG applications.

## How They Work

Text → Embedding Model → Vector [0.1, -0.3, 0.8, ...] → Store → ANN Search

## Database Comparison

| Database | Hosting   | Open Source | Best For          |
|----------|-----------|-------------|-------------------|
| Pinecone | Managed   | No          | Production, scale |
| Weaviate | Both      | Yes         | Hybrid search     |
| Chroma   | Self-host | Yes         | Local dev, RAG    |
| Milvus   | Both      | Yes         | Large scale       |
| Qdrant   | Both      | Yes         | Filtering         |
| pgvector | Self-host | Yes         | Postgres users    |

## Chroma (Local Development)

```python
import chromadb
from chromadb.utils import embedding_functions

client = chromadb.Client()
ef = embedding_functions.SentenceTransformerEmbeddingFunction()
collection = client.create_collection("docs", embedding_function=ef)

collection.add(
    documents=["Machine learning is...", "Deep learning uses..."],
    ids=["doc1", "doc2"]
)

results = collection.query(query_texts=["What is ML?"], n_results=2)
```

## Pinecone (Production)

```python
from pinecone import Pinecone

pc = Pinecone(api_key="your-key")
index = pc.Index("my-index")

index.upsert(vectors=[
    {"id": "doc1", "values": [0.1, 0.2, ...], "metadata": {"source": "wiki"}}
])

results = index.query(vector=[0.1, 0.2, ...], top_k=5, include_metadata=True)
```

## Qdrant (Filtering)

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Filter, FieldCondition, MatchValue

client = QdrantClient("localhost", port=6333)

# Search with filters
results = client.search(
    collection_name="docs",
    query_vector=[0.1, 0.2, ...],
    query_filter=Filter(
        must=[FieldCondition(key="category", match=MatchValue(value="tech"))]
    ),
    limit=10
)
```

## Choosing a Database

- Prototyping: Chroma (no setup)
- Production SaaS: Pinecone (managed)
- Self-hosted scale: Milvus or Qdrant
- Existing Postgres: pgvector (a minimal sketch follows the resources below)
- Hybrid search: Weaviate

## Key Resources

- Chroma
- Pinecone
- Qdrant
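
## pgvector (Sketch)

pgvector appears in the comparison table and the checklist above but has no snippet, so here is a minimal sketch of the usual pattern. It assumes a local Postgres where the `vector` extension can be created, the `psycopg` and `pgvector` Python packages, and a 384-dimension Sentence Transformers model; the DSN and table name are placeholders.

```python
# Sketch: store and query 384-dim embeddings in Postgres via pgvector.
# Assumes `pip install psycopg pgvector sentence-transformers`; DSN is a placeholder.
import psycopg
from pgvector.psycopg import register_vector
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384 dimensions
conn = psycopg.connect("dbname=vectors user=postgres", autocommit=True)
conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
register_vector(conn)  # lets psycopg pass numpy arrays as vector values

conn.execute(
    "CREATE TABLE IF NOT EXISTS docs "
    "(id serial PRIMARY KEY, content text, embedding vector(384))"
)
docs = ["Machine learning is...", "Deep learning uses..."]
for text, emb in zip(docs, model.encode(docs)):
    conn.execute("INSERT INTO docs (content, embedding) VALUES (%s, %s)", (text, emb))

# <=> is cosine distance (use <-> for L2); closest rows come first
query_emb = model.encode("What is ML?")
rows = conn.execute(
    "SELECT content FROM docs ORDER BY embedding <=> %s LIMIT 2", (query_emb,)
).fetchall()
```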

December 1, 2025 · 1 min · 199 words · BlogIA Team

# Text Embeddings Guide

## Overview

Text embeddings convert text into numerical vectors that capture semantic meaning. Similar texts have similar vectors, enabling semantic search and clustering.

## Embedding Models Comparison

| Model                  | Dimensions | Speed  | Quality   |
|------------------------|------------|--------|-----------|
| all-MiniLM-L6-v2       | 384        | Fast   | Good      |
| all-mpnet-base-v2      | 768        | Medium | Better    |
| e5-large-v2            | 1024       | Slow   | Excellent |
| text-embedding-3-small | 1536       | API    | Excellent |
| nomic-embed-text       | 768        | Fast   | Very good |

## Sentence Transformers

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')

sentences = [
    "Machine learning is a subset of AI",
    "Deep learning uses neural networks",
    "The weather is nice today"
]

embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 384)
```

## Semantic Similarity

```python
from sklearn.metrics.pairwise import cosine_similarity

query = "What is artificial intelligence?"
query_embedding = model.encode([query])

similarities = cosine_similarity(query_embedding, embeddings)[0]
# [0.82, 0.75, 0.12] - first two are similar, third is not
```

## Hugging Face Transformers

```python
from transformers import AutoTokenizer, AutoModel
import torch

hf_tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
hf_model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

def get_embedding(text):
    inputs = hf_tokenizer(text, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        outputs = hf_model(**inputs)
    # Mean pooling (simple average; a mask-weighted mean is more accurate for padded batches)
    return outputs.last_hidden_state.mean(dim=1)
```

## OpenAI Embeddings

```python
from openai import OpenAI

client = OpenAI()
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Machine learning is fascinating"
)
embedding = response.data[0].embedding  # 1536 dimensions
```

## Local with Ollama

```python
import requests

response = requests.post('http://localhost:11434/api/embeddings', json={
    'model': 'nomic-embed-text',
    'prompt': 'Machine learning is fascinating'
})
embedding = response.json()['embedding']
```

## Use Cases

### Semantic Search

```python
# Index documents
doc_embeddings = model.encode(documents)

# Search
query_embedding = model.encode([query])
similarities = cosine_similarity(query_embedding, doc_embeddings)[0]
top_indices = similarities.argsort()[-5:][::-1]
```

### Clustering

```python
from sklearn.cluster import KMeans

embeddings = model.encode(texts)
kmeans = KMeans(n_clusters=5)
clusters = kmeans.fit_predict(embeddings)
```

### Classification

```python
from sklearn.linear_model import LogisticRegression

embeddings = model.encode(texts)
classifier = LogisticRegression()
classifier.fit(embeddings, labels)
```

## Best Practices

- Normalize embeddings for cosine similarity (a short sketch follows the resources below)
- Batch processing: encode in batches for speed
- Cache embeddings: don't recompute for the same text
- Match training domain: use domain-specific models when available

## Key Resources

- Sentence Transformers
- MTEB Leaderboard
- OpenAI Embeddings
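
## Normalizing Embeddings (Sketch)

The first best-practice bullet deserves a concrete example. A minimal sketch, assuming only `sentence-transformers` and NumPy: with unit-length vectors, cosine similarity reduces to a plain dot product, so no `cosine_similarity` call is needed.

```python
# Sketch: unit-normalized embeddings turn cosine similarity into a dot product.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["Machine learning is a subset of AI", "The weather is nice today"]

doc_emb = model.encode(docs, normalize_embeddings=True)  # each row has norm 1
query_emb = model.encode(["What is artificial intelligence?"],
                         normalize_embeddings=True)

scores = query_emb @ doc_emb.T  # cosine similarities, shape (1, 2)
best = int(np.argmax(scores))
print(docs[best], float(scores[0, best]))
```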

December 1, 2025 · 2 min · 282 words · BlogIA Team