
localGPT Backend

Simple Python backend that connects your frontend to Ollama for local LLM chat.

Prerequisites

  1. Install Ollama (if not already installed):

    # Visit https://ollama.ai or run:
    curl -fsSL https://ollama.ai/install.sh | sh
    
  2. Start Ollama:

    ollama serve
    
  3. Pull a model (optional; the server will suggest one if none is installed):

    ollama pull llama3.2
    

Setup

  1. Install Python dependencies:

    pip install -r requirements.txt
    
  2. Test Ollama connection:

    python ollama_client.py
    
  3. Start the backend server:

    python server.py
    

The server will run on http://localhost:8000

API Endpoints

Health Check

GET /health

Returns server status and available models.
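As a sketch of how a client might call this endpoint from Python (the exact fields in the reply are assumptions based on the description above, not a guaranteed schema):

```python
import json
import urllib.request

BACKEND = "http://localhost:8000"  # backend address from this README

def health_url(base: str) -> str:
    """Build the health-check URL for the backend."""
    return base.rstrip("/") + "/health"

def check_health(base: str = BACKEND) -> dict:
    """GET /health and return the parsed JSON body."""
    with urllib.request.urlopen(health_url(base)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Expected to include server status and available models
    print(check_health())
```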

Chat

POST /chat
Content-Type: application/json

{
  "message": "Hello!",
  "model": "llama3.2:latest",
  "conversation_history": []
}

Returns:

{
  "response": "Hello! How can I help you?",
  "model": "llama3.2:latest",
  "message_count": 1
}
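Because `conversation_history` starts empty, a multi-turn chat has to carry prior messages forward on each request. The sketch below assumes a common role/content shape for history entries; adjust it to whatever structure server.py actually expects.

```python
import json
import urllib.request

CHAT_URL = "http://localhost:8000/chat"

def build_payload(message, model="llama3.2:latest", history=None):
    """Build the request body documented above."""
    return {
        "message": message,
        "model": model,
        "conversation_history": history or [],
    }

def send_chat(message, history, model="llama3.2:latest"):
    """POST one turn to /chat and return the parsed JSON response."""
    req = urllib.request.Request(
        CHAT_URL,
        data=json.dumps(build_payload(message, model, history)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    history = []
    reply = send_chat("Hello!", history)
    # Carry both sides of the turn forward (assumed role/content shape)
    history.append({"role": "user", "content": "Hello!"})
    history.append({"role": "assistant", "content": reply["response"]})
    print(reply["response"])
```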

Testing

Test the chat endpoint:

curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!", "model": "llama3.2:latest"}'

Frontend Integration

Your React frontend should connect to:

  • Backend: http://localhost:8000
  • Chat endpoint: http://localhost:8000/chat

What's Next

This simple backend is ready for:

  • Real-time chat with local LLMs
  • 🔜 Document upload for RAG
  • 🔜 Vector database integration
  • 🔜 Streaming responses
  • 🔜 Chat history persistence