
uniquejava/spring-boot-langchain4j-example


LangChain4j Spring Boot Example

A comprehensive Spring Boot application demonstrating LangChain4j integration with OpenAI GPT models, showcasing both high-level and low-level API usage patterns.

This README is also available in Simplified Chinese (简体中文).

Overview

This project demonstrates how to integrate LangChain4j with Spring Boot to build AI-powered applications. It includes examples of:

  • High-level AI Services using @AiService interfaces
  • Low-level ChatModel API for direct model interaction
  • Streaming responses with Server-Sent Events (SSE)
  • Tool integration for AI function calling
  • Observability with tracing and monitoring

Prerequisites

  • Java 17+
  • Maven 3.6+
  • OpenAI API key
  • Docker & Docker Compose (for observability stack)

Quick Start

Option 1: Using Docker Compose (Recommended)

1. Install Langfuse (Optional - for LLM Observability)

For comprehensive LLM observability and tracing, you can install Langfuse separately:

# Clone Langfuse repository
git clone https://github.com/langfuse/langfuse.git
cd langfuse
# Start Langfuse with Docker Compose
docker compose up
# Langfuse will be available at http://localhost:3000
# Open http://localhost:3000 in your browser to access the Langfuse UI

Note: This step is optional. The main application's docker-compose.yml includes Langfuse, so you can skip this if you want to run all services together.

2. Set Environment Variables

# Copy the environment template and customize it
cp .env.example .env
# Edit .env file with your actual values:
# OPENAI_API_KEY=your_openai_api_key_here
# LANGFUSE_SECRET_KEY=your_langfuse_secret_key_here

3. Start All Services

docker-compose up -d

This will start:

  • Spring Boot Application (port 8082)
  • Langfuse (port 3000) - LLM observability platform
  • Zipkin (port 9411) - Distributed tracing
  • OpenTelemetry Collector (ports 4317/4318) - Telemetry data collection
  • PostgreSQL (port 5432) - Langfuse database
  • Redis (port 6379) - Caching

4. Test the Endpoints

Once all containers are up, test the same endpoints described under Option 2, Step 3 below.

Option 2: Manual Installation

1. Set Environment Variables

# Copy the environment template and customize it
cp .env.example .env
# Edit .env file with your actual values:
# OPENAI_API_KEY=your_openai_api_key_here
# OPENAI_BASE_URL=https://api.openai.com/v1 # Optional: for custom OpenAI-compatible endpoints
# LANGFUSE_SECRET_KEY=your_langfuse_secret_key_here

2. Run the Application

./mvnw spring-boot:run

The application will start on port 8082.

3. Test the Endpoints

Use the provided test.http file in src/test/resources/ or make curl requests:

# High-level AI service with time tool
curl -G "http://localhost:8082/assistant" --data-urlencode "message=What is the current time?"
# High-level AI service with math tool (--data-urlencode keeps the "+" from being decoded as a space)
curl -G "http://localhost:8082/assistant" --data-urlencode "message=What is 15 + 27?"
# Streaming AI service
curl -G "http://localhost:8082/streamingAssistant" --data-urlencode "message=Tell me a joke"
# Low-level ChatModel API
curl -G "http://localhost:8082/model" --data-urlencode "message=What is the capital of Germany?"

Architecture

Docker Compose Architecture

┌─────────────────┐      ┌──────────────────┐      ┌─────────────────────┐
│   Spring Boot   │      │  OTel Collector  │      │      Langfuse       │
│   Application   │─────▶│ (otel-collector) │─────▶│ (LLM Observability) │
│      :8082      │      │    :4317/4318    │      │        :3000        │
└─────────────────┘      └──────────────────┘      └─────────────────────┘
         │                        │                           │
         │                        ▼                           │
         │               ┌───────────────┐                    │
         │               │     Zipkin    │                    │
         └──────────────▶│  (Distributed │◀───────────────────┘
                         │   Tracing)    │
                         │     :9411     │
                         └───────────────┘
Data Flow:
- Spring Boot App → OpenTelemetry Collector → Langfuse & Zipkin
- All traces and metrics are automatically collected and exported
- Tool executions are tracked with detailed metadata

High-Level AI Services (Recommended)

Located in src/main/java/dev/langchain4j/example/aiservice/:

  • Assistant: Interface-based AI service with automatic method mapping
  • StreamingAssistant: Reactive streaming variant using WebFlux
  • AssistantController: REST endpoints for consuming AI services
  • AssistantTools: Custom tools that AI can invoke (e.g., currentTime, add)
  • MyChatModelListener: Request/response logging and monitoring
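For orientation, a hedged sketch of what such an interface can look like (the actual Assistant in this repo may declare more methods and annotations):

```java
// Sketch only: @AiService comes from the LangChain4j Spring Boot starter
// and turns the interface into a Spring bean backed by the configured model.
import dev.langchain4j.service.spring.AiService;

@AiService
interface Assistant {
    String chat(String message);
}
```

The starter generates the implementation at runtime; controllers simply inject and call `chat`.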

Low-Level API

Located in src/main/java/dev/langchain4j/example/lowlevel/:

  • ChatModelController: Direct usage of ChatModel for fine-grained control
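A minimal sketch of the low-level style, assuming the starter has auto-configured a ChatModel bean (the real ChatModelController may differ):

```java
import dev.langchain4j.model.chat.ChatModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Low-level usage: inject the auto-configured ChatModel and call it
// directly, with no @AiService interface in between.
@RestController
class ChatModelControllerSketch {

    private final ChatModel chatModel;

    ChatModelControllerSketch(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/model")
    String model(@RequestParam String message) {
        return chatModel.chat(message); // blocking, single full response
    }
}
```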

API Endpoints

| Endpoint            | Method | Description           | Features                      |
|---------------------|--------|-----------------------|-------------------------------|
| /assistant          | GET    | High-level AI service | Tool integration, chat memory |
| /streamingAssistant | GET    | Streaming AI service  | SSE, real-time responses      |
| /model              | GET    | Low-level ChatModel   | Direct model access           |

Request Parameters

  • message (required): The user message to send to the AI

Configuration

Application Configuration

Configuration is managed through application.yml:

langchain4j:
  open-ai:
    chat-model:
      api-key: ${OPENAI_API_KEY}
      base-url: ${OPENAI_BASE_URL}
      model-name: gpt-4o-mini
    streaming-chat-model:
      api-key: ${OPENAI_API_KEY}
      base-url: ${OPENAI_BASE_URL}
      model-name: gpt-4o-mini

Customization

  • Model Selection: Change model-name to use different OpenAI models
  • Logging: Uncomment log-requests and log-responses for debugging
  • Base URL: Configure custom OpenAI-compatible endpoints

Features

1. Tool Integration

AI services can invoke custom tools:

@Tool("Returns the current time")
String currentTime() { return java.time.LocalTime.now().toString(); }

@Tool("Adds two numbers together")
double add(double a, double b) { return a + b; }

2. Langfuse Integration

The application integrates with Langfuse for observability through OpenTelemetry:

  • Automatic Trace Export: All traces are automatically exported to Langfuse
  • Tool Execution Tracking: Every tool execution is traced with detailed attributes
  • Error Monitoring: Tool execution errors are captured and reported
  • Performance Metrics: Tool execution timing and success rates

Configuration:

management:
  otlp:
    tracing:
      endpoint: ${LANGFUSE_ENDPOINT:http://localhost:3000/public/api/otel}
      headers:
        Authorization: Basic ${LANGFUSE_SECRET_KEY:}
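Langfuse's OTLP endpoint uses HTTP Basic auth, where the credential is typically the base64 encoding of `public-key:secret-key`. A stdlib-only sketch of building that header value (the key strings below are placeholders, not real keys):

```java
import java.util.Base64;

public class LangfuseAuth {

    // Builds the value for the Authorization header:
    // "Basic " + base64("<public key>:<secret key>")
    static String basicAuthValue(String publicKey, String secretKey) {
        String raw = publicKey + ":" + secretKey;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes());
    }

    public static void main(String[] args) {
        // Placeholder keys; real values come from the Langfuse project settings.
        System.out.println(basicAuthValue("pk-lf-placeholder", "sk-lf-placeholder"));
    }
}
```

The resulting string is what ${LANGFUSE_SECRET_KEY} (minus the "Basic " prefix) should contain in the configuration above.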

Each tool execution includes:

  • Tool name and type
  • Input parameters
  • Execution result
  • Success/error status
  • Execution timing

3. Streaming Responses

Non-blocking streaming using Server-Sent Events:

Flux<String> stream(String message);
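A hedged sketch of how such a stream can be exposed over SSE (assuming the streaming AI service returns a Reactor Flux as above; the real controller may differ):

```java
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

// Sketch: TEXT_EVENT_STREAM_VALUE makes Spring WebFlux emit each Flux
// element as a Server-Sent Event instead of buffering the whole response.
@RestController
class StreamingControllerSketch {

    private final StreamingAssistant assistant; // the @AiService interface

    StreamingControllerSketch(StreamingAssistant assistant) {
        this.assistant = assistant;
    }

    @GetMapping(value = "/streamingAssistant",
                produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    Flux<String> stream(@RequestParam String message) {
        return assistant.stream(message);
    }
}
```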

4. Chat Memory

Prototype-scoped chat memory ensures conversation isolation:

String chat(@MemoryId String conversationId, @UserMessage String message);
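Isolation per conversation is typically wired through a ChatMemoryProvider bean; a minimal sketch (assuming LangChain4j's MessageWindowChatMemory, window size chosen arbitrarily):

```java
import dev.langchain4j.memory.chat.ChatMemoryProvider;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class ChatMemoryConfig {

    // Each distinct @MemoryId value gets its own message window,
    // so conversations cannot see each other's history.
    @Bean
    ChatMemoryProvider chatMemoryProvider() {
        return memoryId -> MessageWindowChatMemory.withMaxMessages(10);
    }
}
```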

5. Observability

  • Micrometer metrics integration
  • OpenTelemetry tracing support
  • Langfuse integration for comprehensive observability
  • Custom chat model listeners
  • Spring Boot Actuator endpoints

Development

Building

./mvnw clean compile
./mvnw clean package

Testing

./mvnw test

Monitoring

Docker Compose Services

Access all monitoring services:

  • Langfuse Dashboard: http://localhost:3000 - LLM observability and tracing
  • Zipkin: http://localhost:9411 - Distributed tracing visualization
  • OpenTelemetry Collector: http://localhost:13133 - Collector health check
  • Spring Boot Actuator: http://localhost:8082/actuator - Application metrics

Standalone Langfuse Installation

If you prefer to run Langfuse separately (useful for development or when not using the full Docker Compose stack):

# Clone and start Langfuse
git clone https://github.com/langfuse/langfuse.git
cd langfuse
docker compose up
# Access Langfuse at http://localhost:3000

Benefits of standalone Langfuse:

  • Dedicated Environment: Isolate Langfuse from your main application
  • Development Convenience: Start/stop Langfuse independently
  • Resource Management: Better control over resource allocation
  • Multi-Project Support: Single Langfuse instance can monitor multiple applications

Spring Boot Actuator Endpoints

  • /actuator/health - Application health
  • /actuator/metrics - Application metrics
  • /actuator/info - Application information

Viewing Traces

  1. Langfuse: Navigate to http://localhost:3000 to see:

    • Complete trace visualizations
    • Tool execution details
    • Performance metrics
    • Error analysis
  2. Zipkin: Navigate to http://localhost:9411 to see:

    • Service dependency maps
    • Trace timelines
    • Span details

Adding New Features

New AI Services

  1. Create an interface with @AiService annotation
  2. Define methods using LangChain4j annotations (@UserMessage, @V, etc.)
  3. Register as a bean in configuration
  4. Add REST endpoints in controller
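The steps above can be sketched as follows; TranslatorService and its method are hypothetical names for illustration, not part of this repo:

```java
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;
import dev.langchain4j.service.spring.AiService;

// Hypothetical new AI service: @UserMessage defines the prompt template,
// @V binds method parameters to the {{...}} template variables.
@AiService
interface TranslatorService {

    @UserMessage("Translate the following text into {{language}}: {{text}}")
    String translate(@V("text") String text, @V("language") String language);
}
```

With the Spring Boot starter, the @AiService annotation registers the bean automatically (step 3), so only the REST endpoint remains to be added.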

New Tools

  1. Add methods to AssistantTools class with @Tool annotation
  2. Spring automatically registers tools for AI use
  3. Ensure proper method signatures for AI integration

Custom Listeners

Implement ChatModelListener for:

  • Request/response logging
  • Performance monitoring
  • Custom metrics collection
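A hedged sketch of such a listener, in the spirit of MyChatModelListener (method and context names assume the LangChain4j 1.x listener API):

```java
import dev.langchain4j.model.chat.listener.ChatModelListener;
import dev.langchain4j.model.chat.listener.ChatModelRequestContext;
import dev.langchain4j.model.chat.listener.ChatModelResponseContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

// Logs every chat request and response; registered automatically
// because it is a Spring bean.
@Component
class LoggingChatModelListener implements ChatModelListener {

    private static final Logger log =
            LoggerFactory.getLogger(LoggingChatModelListener.class);

    @Override
    public void onRequest(ChatModelRequestContext ctx) {
        log.info("Chat request: {}", ctx.chatRequest());
    }

    @Override
    public void onResponse(ChatModelResponseContext ctx) {
        log.info("Chat response: {}", ctx.chatResponse());
    }
}
```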

Best Practices

  1. Use High-Level APIs: Prefer @AiService interfaces for better maintainability
  2. Implement Tool Validation: Validate tool inputs and handle errors gracefully
  3. Monitor Usage: Use observability features to track AI service performance
  4. Secure API Keys: Never commit API keys to version control
  5. Handle Streaming: Properly manage SSE connections and error handling

Troubleshooting

Docker Compose Issues

  • Services Not Starting: Check if all required environment variables are set in .env
  • Port Conflicts: Ensure ports 3000, 5432, 6379, 9411, 4317, 4318, 8082 are available
  • Build Failures: Run docker-compose build --no-cache to rebuild images
  • Memory Issues: Increase Docker memory allocation to at least 4GB

Common Application Issues

  • Missing API Key: Ensure OPENAI_API_KEY environment variable is set
  • Langfuse Connection: Verify Langfuse is accessible at http://localhost:3000
  • Streaming Issues: Verify SSE support in HTTP clients
  • Tool Errors: Check tool method signatures and exception handling

Langfuse Issues

Standalone Langfuse Troubleshooting

If you're running Langfuse separately:

# Check if Langfuse is running
docker ps | grep langfuse
# Check Langfuse logs
docker logs -f langfuse_server_1
# Restart Langfuse
docker compose restart
# Clean restart (removes volumes)
docker compose down -v && docker compose up -d

Common Langfuse Issues

  • Port Conflicts: Ensure port 3000 is available
  • Database Issues: PostgreSQL container may need more time to start
  • Memory Issues: Langfuse requires at least 2GB RAM
  • Connection Issues: Verify network connectivity between app and Langfuse

Connecting App to Standalone Langfuse

If running Langfuse separately, update your .env file:

# Edit .env file to point to the standalone Langfuse
LANGFUSE_SECRET_KEY=your_langfuse_secret_key_here

Then restart your application to pick up the new environment variables.

Docker Commands

# View logs for all services
docker-compose logs -f
# View logs for specific service
docker-compose logs -f langchain4j-app
# Restart specific service
docker-compose restart langchain4j-app
# Stop all services
docker-compose down
# Stop services and remove volumes
docker-compose down -v
# Rebuild and restart
docker-compose up --build -d

Health Checks

Check service health:

# Check Spring Boot app health
curl http://localhost:8082/actuator/health
# Check Langfuse health
curl http://localhost:3000/api/health
# Check OpenTelemetry Collector health
curl http://localhost:13133

Debug Mode

Enable debug logging in application.yml or via environment variable:

# For Docker Compose, add to docker-compose.yml:
environment:
  - LOGGING_LEVEL_DEV_LANGCHAIN4J=DEBUG

# Or in application.yml:
logging:
  level:
    dev.langchain4j: DEBUG
    io.opentelemetry: DEBUG

Dependencies

  • Spring Boot 3.4.2: Web framework and dependency injection
  • LangChain4j 1.7.1: AI/LLM integration framework
  • WebFlux: Reactive programming support
  • Micrometer: Metrics collection
  • OpenTelemetry: Distributed tracing
  • OpenTelemetry OTLP Exporter: For Langfuse integration

License

This project is part of the LangChain4j examples collection.
