This guide covers everything you need to install and configure LangStruct for your projects.
- Python: 3.8 or higher (3.10+ recommended)
- Operating System: Linux, macOS, or Windows
- Memory: 2GB RAM minimum (4GB+ recommended)
- API Access: Google Gemini (recommended), OpenAI, Anthropic, or local LLM server
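A quick way to confirm your interpreter meets the minimum is a small version check. The helper below is our own sketch, not part of LangStruct:

```python
import sys

def check_python(min_version=(3, 8)):
    """Return True if the running interpreter meets the documented minimum."""
    return sys.version_info[:2] >= min_version

if not check_python():
    raise SystemExit("LangStruct requires Python 3.8 or higher")
```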
uv is faster and more reliable than pip:
# Install uv first
curl -LsSf https://astral.sh/uv/install.sh | sh

# Add LangStruct to your project
uv add langstruct
pip install langstruct
conda install -c conda-forge langstruct
LangStruct supports several optional features through additional packages:
pip install "langstruct[pandas]"
pip install "langstruct[excel]"
pip install "langstruct[parquet]"
pip install "langstruct[viz]"
pip install "langstruct[jupyter]"
pip install "langstruct[all]"
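If you are unsure whether an extra's dependencies are present, you can probe for the underlying packages with `importlib` before importing them. The mapping of extras to package names below is our assumption about what each extra likely pulls in:

```python
import importlib.util

# Assumed mapping from extras to the packages they likely install.
EXTRA_PACKAGES = {
    "pandas": "pandas",
    "excel": "openpyxl",
    "parquet": "pyarrow",
}

def extra_installed(module_name):
    """Check whether a package is importable without actually importing it."""
    return importlib.util.find_spec(module_name) is not None
```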
LangStruct supports multiple LLM providers. Set up at least one:
export GOOGLE_API_KEY="your-google-api-key"
Supported models:
gemini-2.5-flash (recommended: fastest and cheapest)
gemini-2.5-pro (for complex tasks)
gemini-2.0-flash (previous generation)
gemini-2.0-pro (previous generation)

export OPENAI_API_KEY="your-openai-api-key"
Supported models:
gpt-4o (recommended for production)
gpt-4o-mini (recommended for development)
gpt-4-turbo
gpt-3.5-turbo
export ANTHROPIC_API_KEY="your-anthropic-api-key"
Supported models:
claude-3-5-sonnet-20241022 (recommended)
claude-3-5-haiku-20241022
claude-3-opus-20240229
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_API_VERSION="2024-02-01"
LangStruct also works with local models via Ollama or vLLM:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama2

# Use in LangStruct
export OLLAMA_BASE_URL="http://localhost:11434"
# Install vLLM
pip install vllm

# Start vLLM server
python -m vllm.entrypoints.openai.api_server \
  --model microsoft/DialoGPT-medium \
  --port 8000
# Google Gemini (recommended)
export GOOGLE_API_KEY="your-google-api-key"

# OpenAI
export OPENAI_API_KEY="sk-..."
export OPENAI_ORG_ID="org-..."  # Optional

# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# LangStruct specific
export LANGSTRUCT_DEFAULT_MODEL="gemini-2.5-flash"
export LANGSTRUCT_CACHE_DIR="~/.langstruct"
export LANGSTRUCT_LOG_LEVEL="INFO"
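Before running anything, it can help to confirm which provider keys are actually visible to Python. This small helper is our own, not a LangStruct API:

```python
import os

# Provider -> environment variable, matching the exports above.
PROVIDER_KEYS = {
    "Google Gemini": "GOOGLE_API_KEY",
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
}

def configured_providers(env=None):
    """Return the providers whose API key variable is set and non-empty."""
    env = os.environ if env is None else env
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]
```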
Create a .env file in your project root:
GOOGLE_API_KEY=your-google-api-key
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
LANGSTRUCT_DEFAULT_MODEL=gemini-2.5-flash
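If your tooling does not load .env files automatically (python-dotenv is the usual choice), a minimal loader looks like the sketch below. This is a generic illustration, not LangStruct's own loading logic:

```python
import os

def load_dotenv(path=".env"):
    """Minimal .env loader: KEY=VALUE lines, '#' comments; existing vars win."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```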
Create ~/.langstruct/config.yaml:
default_model: "gemini-2.5-flash"
cache_enabled: true
cache_dir: "~/.langstruct/cache"
log_level: "INFO"

models:
  google:
    api_key: "${GOOGLE_API_KEY}"
  openai:
    api_key: "${OPENAI_API_KEY}"
    org_id: "${OPENAI_ORG_ID}"
  anthropic:
    api_key: "${ANTHROPIC_API_KEY}"
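The `${VAR}` placeholders imply environment-variable expansion after the YAML is parsed. A sketch of how such expansion can work (not necessarily how LangStruct implements it):

```python
import os

def expand_env(value):
    """Recursively expand ${VAR} placeholders in a parsed config structure."""
    if isinstance(value, dict):
        return {k: expand_env(v) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_env(v) for v in value]
    if isinstance(value, str):
        return os.path.expandvars(value)
    return value
```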
Test your installation:
import langstruct
# Check version
print(f"LangStruct version: {langstruct.__version__}")

# Test basic functionality
from langstruct import LangStruct, Schema, Field

class TestSchema(Schema):
    message: str = Field(description="A simple message")

# This will test your API connection
extractor = LangStruct(schema=TestSchema, model="gemini-2.5-flash")
result = extractor.extract("Hello, LangStruct!")

print(f"Success! Extracted: {result.entities}")
# If you get import errors, reinstall with dependencies
pip uninstall langstruct
pip install "langstruct[all]"
# Test your Google API key
from google import genai

client = genai.Client()  # Uses GOOGLE_API_KEY
response = client.models.list()
print("Google Gemini connection successful")
# Or test OpenAI
import openai

client = openai.OpenAI()  # Uses OPENAI_API_KEY
response = client.models.list()
print("OpenAI connection successful")
# Use user installation if needed
pip install --user langstruct

# Or use a virtual environment (recommended)
python -m venv langstruct-env
source langstruct-env/bin/activate  # On Windows: langstruct-env\Scripts\activate
pip install langstruct
import langstruct
# Enable result caching (recommended)
langstruct.configure(
    cache_enabled=True,
    cache_ttl=3600,  # 1 hour
)
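To get a feel for what `cache_ttl` implies, here is a toy TTL memoization decorator. It is our own illustration of the concept, not LangStruct's internal cache:

```python
import functools
import time

def ttl_cache(ttl_seconds):
    """Decorator: memoize results, discarding entries older than ttl_seconds."""
    def decorator(fn):
        cache = {}

        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]  # Fresh cached value: skip the expensive call
            result = fn(*args)
            cache[args] = (result, now)
            return result

        return wrapper
    return decorator
```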
# Install async dependencies
pip install "langstruct[async]"
import asyncio
from langstruct import AsyncLangStruct

async def extract_data():
    extractor = AsyncLangStruct(schema=YourSchema)
    result = await extractor.extract("Your text here")
    return result
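The main payoff of the async client is concurrency: several documents can be extracted at once with `asyncio.gather`. The batch helper below is a hypothetical sketch built on the awaitable `extract()` shown above:

```python
import asyncio

# Hypothetical batch helper: run several extractions concurrently.
async def extract_many(extractor, texts):
    """Await all extract() calls at once instead of one at a time."""
    return await asyncio.gather(*(extractor.extract(t) for t in texts))
```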
For contributing or development:
# Clone repository
git clone https://github.com/langstruct/langstruct.git
cd langstruct

# Install in development mode
uv sync --dev

# Or with pip
pip install -e ".[dev,test]"

# Run tests
pytest

# Run linting
ruff check .
mypy src/
Use the official Docker image:
FROM python:3.11-slim
WORKDIR /app
# Install LangStruct
RUN pip install langstruct[all]

# Copy your application
COPY . .

# Set API keys via environment variables
ENV GOOGLE_API_KEY=""
ENV OPENAI_API_KEY=""
ENV ANTHROPIC_API_KEY=""
CMD ["python", "your_app.py"]
Or use docker-compose:
version: '3.8'
services:
  langstruct-app:
    image: python:3.11-slim
    volumes:
      - .:/app
    working_dir: /app
    environment:
      - GOOGLE_API_KEY=${GOOGLE_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    command: >
      sh -c "pip install langstruct[all] && python your_app.py"
Now that LangStruct is installed:
Quick Start
Basic Usage
Examples
API Reference
Need help? Check our troubleshooting guide or ask in GitHub Discussions.