LangChain Decoded: Part 1 - Models

An exploration of the LangChain framework and modules in multiple parts; this post covers Models.

LangChain is an open-source framework created by Harrison Chase to aid the development of applications leveraging the power of large language models (LLMs). It can be used for chatbots, text summarisation, data generation, code understanding, question answering, evaluation, and more. The LangChain framework was designed with two main principles in mind, namely that LLM-powered applications will:

  • Be data-aware, i.e. connect a language model to other data sources
  • Be agentic, i.e. allow a language model to interact with its environment

Informed by these principles, LangChain offers several modules for a variety of use cases, and is building out a robust ecosystem too. I dipped my toes into LangChain in the past (here and here), but came away with only a superficial understanding of its capabilities. In this multi-part series, I explore various LangChain modules and use cases, and document my journey via Python notebooks on GitHub. Feel free to follow along and fork the repository, or use individual notebooks on Google Colab. A shoutout to the official LangChain documentation, too - much of the code here is borrowed from or influenced by it, and I'm thankful for the clarity it offers.

Over the course of this series, I'll dive into the following topics:

  1. Models (this post)
  2. Embeddings
  3. Prompts
  4. Indexes
  5. Memory
  6. Chains
  7. Agents
  8. Callbacks

Getting Started

LangChain is available on PyPI, so it can be easily installed with pip. By default, the dependencies (e.g. model providers, data stores) are not installed; install them separately based on your specific needs. LangChain also offers a JavaScript implementation, but we'll only use the Python libraries here.

LangChain supports several model providers, but this tutorial will only focus on OpenAI (unless explicitly stated otherwise). Set the OpenAI API key via the OPENAI_API_KEY environment variable, or directly inside the notebook (or your Python code); if you don't have a key, you can get one here. The first option is preferred in general, and especially in production - take care not to accidentally commit your API key to GitHub!

Follow along in your own Jupyter Python notebook, or click the link below to open the notebook directly in Google Colab.

Open In Colab

# Install the LangChain package
!pip install langchain

# Install the OpenAI package
!pip install openai

# Configure the API key
import os

openai_api_key = os.environ.get('OPENAI_API_KEY', 'sk-XXX')
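
If you'd rather set the variable from within the notebook itself, a minimal sketch (with a placeholder key) looks like this:

# Alternatively, set the environment variable in the notebook itself
# (replace the placeholder with your real key, and never commit it)
os.environ['OPENAI_API_KEY'] = 'sk-XXX'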

LangChain: Large Language Models (LLMs)

The LLM class is designed as a standard interface to LLM providers like OpenAI, Cohere, HuggingFace etc. In this notebook, we'll interface with the OpenAI LLM wrapper, and carry out a few basic operations. The example below uses the text-davinci-003 model; feel free to use a different one. Note that you cannot use a chat model like gpt-3.5-turbo here; see the next section for that instead.

# Use the OpenAI LLM wrapper and text-davinci-003 model
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", openai_api_key=openai_api_key)

The simplest thing to do is call the API with a string (e.g. a question) and get a string in return (e.g. a response).

# Generate a simple text response
llm("Why is the sky blue?")

*** Response ***
The sky is blue because of a process called Rayleigh scattering. When sunlight passes through the atmosphere, the molecules of air scatter the light in all directions, with blue light being scattered more than other colors because of its shorter wavelengths.

You can also call the API with a list of inputs; the example below uses a single prompt, but you can request multiple text generations simultaneously (see the sketch after the response below). Let's also print the provider-specific output information.

# Show the generation output instead
llm_result = llm.generate(["Why is the sky blue?"])
llm_result.llm_output

*** Response ***
{'token_usage': {'total_tokens': 47,
  'prompt_tokens': 6,
  'completion_tokens': 41},
 'model_name': 'text-davinci-003'}
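
As a quick illustration, here's a hedged sketch of a batched call - generate accepts a list of prompts, and the generations attribute holds one list of results per prompt:

# Generate responses for multiple prompts in a single call
llm_result = llm.generate(["Why is the sky blue?", "Why is grass green?"])

# Each prompt gets its own list of generations; print the first text of each
for generation in llm_result.generations:
    print(generation[0].text)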

You can track OpenAI token usage, including the charges incurred, across one or more API calls - just wrap them all inside the same callback context (a multi-call sketch follows the example below).

# Track OpenAI token usage for a single API call
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = llm("Why is the sky blue?")

    print(f"Total Tokens: {cb.total_tokens}")
    print(f"\tPrompt Tokens: {cb.prompt_tokens}")
    print(f"\tCompletion Tokens: {cb.completion_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")

*** Response ***
Total Tokens: 79
	Prompt Tokens: 6
	Completion Tokens: 73
Total Cost (USD): $0.00158
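
And here's a minimal sketch with two calls wrapped in the same callback context - the counters aggregate across everything inside the with block:

# Track cumulative token usage across multiple API calls
with get_openai_callback() as cb:
    llm("Why is the sky blue?")
    llm("Why is grass green?")

print(f"Total Tokens: {cb.total_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")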

The LangChain documentation covers several more (and advanced) use cases, including some with prompt templates and chains, which I'll cover in subsequent posts. I'll skip those for now but feel free to play around in your notebook. The documentation also covers integrations with various other LLM providers.

LangChain: Chat Models

Chat models are a variation on language models - they use language models under the hood, but interface with applications via chat messages instead of a text in / text out approach. In this notebook, we'll interface with the OpenAI Chat wrapper, define a system message for the chatbot, and pass along the human message. You can also pass multiple messages (Human and AI) as context in each API call; a sketch of this follows the example below.

  • SystemMessage: Helpful context for the chatbot
  • HumanMessage: Actual message from the user
  • AIMessage: Response from the chatbot

# Define system message for the chatbot, and pass human message
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0, openai_api_key=openai_api_key)

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to Spanish."),
    HumanMessage(content="Translate this sentence from English to Spanish. I'm hungry, give me food.")
]

chat(messages)

*** Response ***
AIMessage(content='Tengo hambre, dame comida.', additional_kwargs={})
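
To illustrate the multi-message context mentioned above, here's a hedged sketch of a follow-up turn - the earlier exchange is replayed as AIMessage / HumanMessage history before a new question (the second sentence is my own illustrative example):

# Pass prior Human and AI messages as context for a follow-up turn
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to Spanish."),
    HumanMessage(content="Translate this sentence from English to Spanish. I'm hungry, give me food."),
    AIMessage(content='Tengo hambre, dame comida.'),
    HumanMessage(content="Now translate this one: I'm thirsty, give me water.")
]

chat(messages)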

By default, the ChatOpenAI class uses the gpt-3.5-turbo model; you can validate this using print(chat.model_name). Chat models are an evolving concept, both for the LLM providers and for LangChain, and the API specification isn't fully settled yet - keep an eye on this space as these models gain popularity.
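
For example, a quick sketch to check the default and opt into a different model (assuming your account has access to the one you request):

# Check the default chat model, and override it if desired
print(chat.model_name)  # gpt-3.5-turbo

chat_gpt4 = ChatOpenAI(model_name="gpt-4", temperature=0, openai_api_key=openai_api_key)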

The next post in this series covers LangChain Embeddings - do follow along if you liked this post. Finally, check out this handy compendium of all LangChain posts.
