Installation Guide

This guide covers everything you need to install and configure LangStruct for your projects.

Requirements:

  • Python: 3.8 or higher (3.10+ recommended)
  • Operating System: Linux, macOS, or Windows
  • Memory: 2GB RAM minimum (4GB+ recommended)
  • API Access: Google Gemini (recommended), OpenAI, Anthropic, or a local LLM server

We recommend installing with uv, which is faster and more reliable than pip:

# Install uv first
curl -LsSf https://astral.sh/uv/install.sh | sh

# Add LangStruct to your project
uv add langstruct

LangStruct supports several optional features through additional packages:

# pandas integration
pip install "langstruct[pandas]"

# visualization support
pip install "langstruct[viz]"

# all optional features
pip install "langstruct[all]"

LangStruct supports multiple LLM providers. Set up at least one:

Google Gemini (recommended):

export GOOGLE_API_KEY="your-google-api-key"

Supported models:

  • gemini-2.5-flash (recommended - fastest & cheapest)
  • gemini-2.5-pro (for complex tasks)
  • gemini-2.0-flash (previous generation)
  • gemini-2.0-pro (previous generation)
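
Once a key is set, you choose the model by name when constructing an extractor, using the same constructor shown in the verification step below. A minimal sketch; the Note schema is just an illustration:

from langstruct import LangStruct, Schema, Field

# Hypothetical schema for illustration only
class Note(Schema):
    summary: str = Field(description="One-line summary")

# The model is chosen per extractor; swap in any supported model name
extractor = LangStruct(schema=Note, model="gemini-2.5-flash")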

OpenAI:

export OPENAI_API_KEY="your-openai-api-key"

Supported models:

  • gpt-4o (recommended for production)
  • gpt-4o-mini (recommended for development)
  • gpt-4-turbo
  • gpt-3.5-turbo

Anthropic:

export ANTHROPIC_API_KEY="your-anthropic-api-key"

Supported models:

  • claude-3-5-sonnet-20241022 (recommended)
  • claude-3-5-haiku-20241022
  • claude-3-opus-20240229

Azure OpenAI:
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_API_VERSION="2024-02-01"

LangStruct also works with local models via Ollama or vLLM.

Ollama:

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama2

# Use in LangStruct
export OLLAMA_BASE_URL="http://localhost:11434"
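
With the server running, a Python sketch of pointing an extractor at it. The base URL matches the shell export above; the Note schema and the "ollama/llama2" identifier format are assumptions, so check the provider documentation for the exact model string your LangStruct version expects:

import os
from langstruct import LangStruct, Schema, Field

# Same variable as the shell export above
os.environ["OLLAMA_BASE_URL"] = "http://localhost:11434"

# Hypothetical schema for illustration only
class Note(Schema):
    summary: str = Field(description="One-line summary")

# "ollama/llama2" is an assumed identifier format; confirm the exact
# model string against the provider docs for your version
extractor = LangStruct(schema=Note, model="ollama/llama2")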

vLLM:

# Install vLLM
pip install vllm

# Start vLLM server
python -m vllm.entrypoints.openai.api_server \
  --model microsoft/DialoGPT-medium \
  --port 8000

Environment variable reference:
# Google Gemini (recommended)
export GOOGLE_API_KEY="your-google-api-key"
# OpenAI
export OPENAI_API_KEY="sk-..."
export OPENAI_ORG_ID="org-..." # Optional
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# LangStruct specific
export LANGSTRUCT_DEFAULT_MODEL="gemini-2.5-flash"
export LANGSTRUCT_CACHE_DIR="~/.langstruct"
export LANGSTRUCT_LOG_LEVEL="INFO"
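
If you prefer configuring from Python (for example in a notebook or a managed environment where you cannot edit the shell profile), the same variables can be set via os.environ before LangStruct is used. A minimal sketch:

import os

# Equivalent to the shell exports above
os.environ.setdefault("GOOGLE_API_KEY", "your-google-api-key")
os.environ.setdefault("LANGSTRUCT_DEFAULT_MODEL", "gemini-2.5-flash")
os.environ.setdefault("LANGSTRUCT_CACHE_DIR", os.path.expanduser("~/.langstruct"))
os.environ.setdefault("LANGSTRUCT_LOG_LEVEL", "INFO")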

Test your installation:

import langstruct

# Check version
print(f"LangStruct version: {langstruct.__version__}")

# Test basic functionality
from langstruct import LangStruct, Schema, Field

class TestSchema(Schema):
    message: str = Field(description="A simple message")

# This will test your API connection
extractor = LangStruct(schema=TestSchema, model="gemini-2.5-flash")
result = extractor.extract("Hello, LangStruct!")
print(f"Success! Extracted: {result.entities}")

If you get import errors, reinstall with dependencies:

pip uninstall langstruct
pip install "langstruct[all]"

If extractions fail, test your API key directly in Python:

# Test your Google API key
from google import genai

client = genai.Client()  # Uses GOOGLE_API_KEY
response = client.models.list()
print("Google Gemini connection successful")

# Or test OpenAI
import openai

client = openai.OpenAI()  # Uses OPENAI_API_KEY
response = client.models.list()
print("OpenAI connection successful")

If you hit permission errors during installation:

# Use user installation if needed
pip install --user langstruct

# Or use a virtual environment (recommended)
python -m venv langstruct-env
source langstruct-env/bin/activate  # On Windows: langstruct-env\Scripts\activate
pip install langstruct

Enable result caching (recommended):

import langstruct

langstruct.configure(
    cache_enabled=True,
    cache_ttl=3600  # 1 hour
)

For async usage, install the async dependencies:

pip install "langstruct[async]"

Then use AsyncLangStruct:

import asyncio
from langstruct import AsyncLangStruct

async def extract_data():
    extractor = AsyncLangStruct(schema=YourSchema)
    result = await extractor.extract("Your text here")
    return result
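
Continuing the snippet above, the coroutine can be driven from synchronous code with the standard asyncio.run; for example:

# Drive the coroutine from synchronous code
result = asyncio.run(extract_data())
print(result.entities)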

For contributing or development:

# Clone repository
git clone https://github.com/langstruct/langstruct.git
cd langstruct
# Install in development mode
uv sync --dev
# Or with pip
pip install -e ".[dev,test]"
# Run tests
pytest
# Run linting
ruff check .
mypy src/

Run LangStruct in Docker using a slim Python base image:

FROM python:3.11-slim
WORKDIR /app
# Install LangStruct
RUN pip install langstruct[all]
# Copy your application
COPY . .
# Set API keys via environment variables
ENV GOOGLE_API_KEY=""
ENV OPENAI_API_KEY=""
ENV ANTHROPIC_API_KEY=""
CMD ["python", "your_app.py"]

Or use docker-compose:

version: '3.8'
services:
  langstruct-app:
    image: python:3.11-slim
    volumes:
      - .:/app
    working_dir: /app
    environment:
      - GOOGLE_API_KEY=${GOOGLE_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    command: >
      sh -c "pip install langstruct[all] && python your_app.py"

Now that LangStruct is installed, you're ready to start extracting structured data.

Need help? Check our troubleshooting guide or ask in GitHub Discussions.