
Lesson 5: LangChain Model Component Deep Dive

Lesson 5 of 5

Why this matters

The Model Component is the core interface between LangChain and AI models. Understanding its types, implementation methods, and practical applications is crucial for building chatbots, semantic search, and RAG applications. This lesson provides comprehensive coverage of both Language Models and Embedding Models.


Overview of the LangChain Model Component

Purpose and Importance

The Model Component is a core part of the LangChain framework. It provides a unified interface to connect, manage, and interact with different AI models, abstracting away their provider-specific complexities.

Functionality

Its core function is to simplify communication with different models—both Language Models (LMs) and Embedding Models (EMs)—allowing developers to seamlessly switch between providers like OpenAI, Anthropic, Google, and Hugging Face with minimal code modifications.


Types of Models in LangChain

LangChain primarily supports two categories of models: (A) Language Models and (B) Embedding Models.

Language Models (LMs)

These models handle text-to-text operations, meaning they take text as input and return text as output. They are primarily used in chatbots, summarization systems, and text-generation applications.

Feature | Description
Input/Output | Text input (prompt) → Text output (response)
Use Case | Conversational AI, content generation, summarization, translation

Classification: LLMs vs. Chat Models

The industry is transitioning from traditional Large Language Models (LLMs) to more conversational Chat Models, which support memory and role awareness.

Feature | LLMs (Large Language Models) | Chat Models
Purpose | Free-form text generation (e.g., summarization, code generation) | Designed for conversational AI (e.g., chatbots, assistants)
Input/Output | Single string input → String output | Sequence of chat messages → Chat response
Training | Trained on large text corpora (books, Wikipedia) | Fine-tuned on chat datasets for dialogue handling
Memory | Stateless (no context retention) | Supports conversation memory
Role Awareness | No role specification | Allows role assignment (e.g., "You are a financial advisor")
LangChain Integration | BaseLLM class (deprecated) | BaseChatModel class (recommended)

Summary: While LLMs are general-purpose, Chat Models are context-aware, role-sensitive, and better suited for conversational systems. LangChain's future direction focuses primarily on Chat Models.
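
As a minimal sketch of role awareness (assuming an OpenAI API key is configured; setup is covered below), a Chat Model takes a sequence of role-tagged messages rather than a single string:

from langchain_core.messages import SystemMessage, HumanMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo")

# The system message assigns the role; the human message is the user's turn
messages = [
    SystemMessage(content="You are a financial advisor."),
    HumanMessage(content="Should I keep an emergency fund?"),
]

response = model.invoke(messages)
print(response.content)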

Embedding Models (EMs)

Embedding Models convert text into numerical vector representations, allowing semantic comparison and similarity searches.

Feature | Description
Input/Output | Text input → Numerical vector (embedding)
Purpose | Represent text meaning mathematically for comparison
Primary Use Case | Semantic search, document retrieval, RAG applications
Cost Efficiency | Low cost (e.g., ~$0.20 per 1M tokens with OpenAI)
Dimension Control | Developers can choose embedding dimensions (larger vectors capture more semantic detail but cost more to store and compare)

These embeddings form the foundation for similarity search, knowledge retrieval, and context injection in RAG systems.
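
As a concrete example of dimension control, OpenAI's text-embedding-3 models accept a dimensions parameter (the values below are illustrative):

from langchain_openai import OpenAIEmbeddings

# Smaller vectors are cheaper to store and compare; larger ones keep more detail
small = OpenAIEmbeddings(model="text-embedding-3-large", dimensions=256)
large = OpenAIEmbeddings(model="text-embedding-3-large", dimensions=3072)

print(len(small.embed_query("hello")))  # 256
print(len(large.embed_query("hello")))  # 3072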


Implementation: Language Models

Closed-Source (Proprietary) Models

Closed-source models require API keys and paid access. LangChain provides native integrations for multiple major providers:

OpenAI (GPT Models)

from langchain_openai import ChatOpenAI
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize the model
model = ChatOpenAI(
    model="gpt-3.5-turbo",
    temperature=0.7,
    max_tokens=150
)

# Use the model
response = model.invoke("What is LangChain?")
print(response.content)
  • Class Used: ChatOpenAI (for chat models) or OpenAI (for LLMs)
  • Setup: Requires an OpenAI API key stored in a .env file
  • Response Format: Returns an AIMessage whose content attribute holds the generated text, plus metadata such as token usage (inspected in the sketch below)
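
To inspect that metadata, look at the returned message object (a sketch; the exact fields vary by provider and library version):

response = model.invoke("What is LangChain?")

print(response.content)            # the generated text
print(response.response_metadata)  # e.g., model name, token usage, finish reason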

Anthropic (Claude Models)

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-sonnet-20240229",
    temperature=0.5
)

response = model.invoke("Explain machine learning in simple terms")
print(response.content)

Google (Gemini Models)

from langchain_google_genai import ChatGoogleGenerativeAI

model = ChatGoogleGenerativeAI(
    model="gemini-pro",
    temperature=0.3
)

response = model.invoke("What are the benefits of vector databases?")
print(response.content)

Provider Flexibility

Because LangChain standardizes the model interface, developers can swap between these providers without rewriting application logic; switching typically requires changing only one or two lines of code.
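
As a sketch, swapping OpenAI for Anthropic changes only the import and the constructor; the rest of the application is untouched:

from langchain_openai import ChatOpenAI
# from langchain_anthropic import ChatAnthropic

model = ChatOpenAI(model="gpt-3.5-turbo")
# model = ChatAnthropic(model="claude-3-sonnet-20240229")  # drop-in replacement

# Application logic stays identical regardless of provider
response = model.invoke("What is LangChain?")
print(response.content)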

Open-Source Models

These models are freely available and can be downloaded or fine-tuned locally.

Advantages

  • No recurring API cost
  • Full control and data privacy
  • Suitable for offline/local environments

Disadvantages

  • Require powerful GPUs
  • Slower inference compared to hosted APIs
  • May generate less polished responses

Integration Methods

Method | Class Used | Description
Hugging Face Inference API | ChatHuggingFace + HuggingFaceEndpoint | Accesses Hugging Face-hosted models using an API key (HUGGINGFACEHUB_API_TOKEN)
Local Deployment | ChatHuggingFace + HuggingFacePipeline | Downloads and runs models (e.g., TinyLlama) directly on local hardware
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

# Using Hugging Face Inference API
endpoint = HuggingFaceEndpoint(
    repo_id="microsoft/DialoGPT-medium",
    huggingfacehub_api_token="your-token-here"
)

model = ChatHuggingFace(llm=endpoint)
response = model.invoke("Tell me about neural networks")
print(response.content)
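
For the local-deployment row in the table above, a minimal sketch (the model choice is illustrative; TinyLlama weights download on first run, and inference requires capable local hardware):

from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

# Download and run the model locally -- no API key required
llm = HuggingFacePipeline.from_model_id(
    model_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 100},
)

model = ChatHuggingFace(llm=llm)
response = model.invoke("Tell me about neural networks")
print(response.content)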

Common Model Parameters

LangChain allows developers to fine-tune model behavior using adjustable parameters.

Parameter | Function | Recommended Use
Temperature | Controls creativity/randomness (0–2) | 0–0.3 for factual or code tasks; 0.9–1.5+ for creative writing or brainstorming
Max Completion Tokens | Sets output length limit | Useful for cost control and managing verbosity
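
A short sketch contrasting the two ends of the temperature range (the prompt and values are illustrative):

from langchain_openai import ChatOpenAI

# Low temperature: focused, repeatable answers for factual tasks
factual = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.1, max_tokens=100)

# High temperature: varied, creative answers for brainstorming
creative = ChatOpenAI(model="gpt-3.5-turbo", temperature=1.2, max_tokens=100)

prompt = "Suggest a name for a coffee shop"
print(factual.invoke(prompt).content)
print(creative.invoke(prompt).content)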

Implementation: Embedding Models

Embedding models are vital for applications requiring semantic understanding of text, like document search and RAG pipelines.

Closed-Source: OpenAI Embeddings

from langchain_openai import OpenAIEmbeddings

# Initialize embeddings model
embeddings = OpenAIEmbeddings(
    model="text-embedding-ada-002"
)

# Embed a single query
query_embedding = embeddings.embed_query("What is machine learning?")
print(f"Query embedding dimension: {len(query_embedding)}")

# Embed multiple documents
documents = [
    "Machine learning is a subset of artificial intelligence.",
    "Deep learning uses neural networks with multiple layers.",
    "Natural language processing deals with text and speech."
]

doc_embeddings = embeddings.embed_documents(documents)
print(f"Number of document embeddings: {len(doc_embeddings)}")
  • Functions:
    • embed_query(text) → Embeds a single query into a vector.
    • embed_documents(list) → Embeds multiple texts, returning a list of vectors (a 2D array).
  • Use Case: Ideal for scalable, high-accuracy semantic search in production-grade RAG systems.

Open-Source: Hugging Face Embeddings

from langchain_huggingface import HuggingFaceEmbeddings

# Initialize with a sentence transformer model
embeddings = HuggingFaceEmbeddings(
    model_name="all-MiniLM-L6-v2"
)

# Generate embeddings locally
query_embedding = embeddings.embed_query("What is artificial intelligence?")
doc_embeddings = embeddings.embed_documents([
    "AI is the simulation of human intelligence in machines.",
    "Machine learning is a branch of AI."
])

Open-Source Embeddings

Open-source embeddings are ideal for experimentation, research, or local RAG setups with limited budgets. They provide local generation of embeddings without internet dependency, though they may slightly lag in semantic precision compared to paid APIs.


Document Similarity Search

This example demonstrates how embeddings enable document similarity search, a key step in RAG-based systems. It reuses the embeddings object initialized in the previous snippet.

from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# Sample documents
documents = [
    "LangChain is a framework for developing applications with language models.",
    "Vector databases store high-dimensional embeddings for similarity search.",
    "Machine learning algorithms learn patterns from data automatically.",
    "Natural language processing enables computers to understand human language."
]

# User query
query = "Tell me about frameworks for language model applications"

# Generate embeddings
doc_embeddings = embeddings.embed_documents(documents)
query_embedding = embeddings.embed_query(query)

# Calculate similarity scores
similarities = cosine_similarity(
    [query_embedding],
    doc_embeddings
)[0]

# Find most similar document
best_match_idx = np.argmax(similarities)
best_score = similarities[best_match_idx]

print(f"Most relevant document: {documents[best_match_idx]}")
print(f"Similarity score: {best_score:.4f}")

Process Overview

  1. Embedding Generation: Convert each document and the user query into embeddings (vectors).
  2. Similarity Calculation: Use Cosine Similarity to measure closeness between the query vector and document vectors (see the NumPy sketch after this list).
  3. Retrieval: Identify the document with the highest similarity score—it's deemed most relevant to the user query.
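
For reference, cosine similarity is the dot product of two vectors divided by the product of their norms; a minimal NumPy equivalent of the scikit-learn call above (reusing query_embedding and doc_embeddings from the example):

import numpy as np

def cosine_sim(a, b):
    # cos(theta) = (a · b) / (|a| * |b|); values near 1 mean similar direction
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_sim(query_embedding, doc_embeddings[0]))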

Future Optimization

To improve performance and cost efficiency:

  • Store embeddings once in a vector database (e.g., Pinecone, ChromaDB, FAISS).
  • Avoid recomputing embeddings for the same documents on every query.
  • Together, these steps enable faster, more scalable RAG query responses (see the FAISS sketch after this list).
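
A minimal sketch of this pattern with FAISS (assuming the faiss-cpu package is installed; it reuses the embeddings object and documents list from the similarity example above):

from langchain_community.vectorstores import FAISS

# Embed and index the documents once
vector_store = FAISS.from_texts(documents, embedding=embeddings)

# Later queries embed only the query text, not the documents
results = vector_store.similarity_search(
    "Tell me about frameworks for language model applications", k=1
)
print(results[0].page_content)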

Key Takeaways

  1. The Model Component acts as the core interface between LangChain and AI models.
  2. LangChain supports both closed and open-source model ecosystems seamlessly.
  3. Chat Models are the future—offering role-based, context-aware conversations.
  4. Embedding Models are the backbone of semantic search and knowledge retrieval.
  5. Proper parameter tuning and vector management enable efficient, cost-effective, and scalable AI pipelines.

Hands-On Practice

Practice Exercise: Try implementing both a Chat Model and an Embedding Model from different providers. Compare their outputs and performance characteristics. Experiment with different temperature settings and observe how they affect the model's responses.