Configuration Examples

Ollama (Local LLM)

Free · Local · Recommended

Run AI models locally for complete privacy and no API costs.

Setup Instructions

  1. Install Ollama and pull a model:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
  2. Create a .env file with the configuration below
.env Configuration
LLM_BASE_URL=http://localhost:11434/v1
LLM_API_KEY=not-needed
LLM_MODEL=llama3.2
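
All of the providers on this page expose the same OpenAI-compatible chat API, so one client works against any of them once the three variables above are set. A minimal sketch using only the standard library — the endpoint path and response shape follow the OpenAI chat-completions convention, which Ollama's /v1 endpoint mirrors; the function names here are illustrative, not part of the application:

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, prompt):
    """Build a chat-completions request for an OpenAI-compatible endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

def ask(base_url, api_key, model, prompt):
    """Send the request and return the assistant's reply text."""
    req = build_chat_request(base_url, api_key, model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Example (requires a running Ollama server on the URL above):
# print(ask("http://localhost:11434/v1", "not-needed", "llama3.2", "Hello"))
```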

OpenAI

Paid · Cloud

Use OpenAI's GPT models for high-quality responses.

Setup Instructions

  1. Get your API key from OpenAI Platform
  2. Create a .env file with the configuration below
.env Configuration
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-your-openai-api-key-here
LLM_MODEL=gpt-4

Groq

Free Tier · Cloud · Fast

Ultra-fast inference with a generous free tier.

Setup Instructions

  1. Get your API key from Groq Console
  2. Create a .env file with the configuration below
.env Configuration
LLM_BASE_URL=https://api.groq.com/openai/v1
LLM_API_KEY=your-groq-api-key
LLM_MODEL=llama3-70b-8192

Together AI

Paid · Cloud

Access to a wide range of open-source models.

.env Configuration
LLM_BASE_URL=https://api.together.xyz/v1
LLM_API_KEY=your-together-api-key
LLM_MODEL=meta-llama/Llama-3-70b-chat-hf

vLLM (Local)

Free · Local · Advanced

High-performance inference engine for local deployment.

.env Configuration
LLM_BASE_URL=http://localhost:8000/v1
LLM_API_KEY=not-needed
LLM_MODEL=meta-llama/Llama-3-8b-instruct
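
For the local backends (Ollama and vLLM), the model name in LLM_MODEL must match one the server is actually serving. OpenAI-compatible servers expose a /models endpoint that lists the ids they accept, so querying it is a quick sanity check. A stdlib-only sketch — the endpoint path follows the OpenAI API convention, and the helper names are illustrative:

```python
import json
import urllib.request

def models_request(base_url, api_key="not-needed"):
    """Build a GET request for {base_url}/models."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_models(base_url, api_key="not-needed"):
    """Return the model ids served by an OpenAI-compatible endpoint."""
    with urllib.request.urlopen(models_request(base_url, api_key)) as resp:
        return [m["id"] for m in json.loads(resp.read())["data"]]

# Example (requires a running vLLM server on the URL above):
# print(list_models("http://localhost:8000/v1"))
```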

How to Apply Configuration

  1. Create .env file: create a file named .env in the project root directory
  2. Add configuration: copy the configuration from your chosen provider above
  3. Restart server: restart the Flask application to apply the new configuration
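
The loading step can be sketched in code. Below is a deliberately minimal .env parser, assuming one KEY=VALUE pair per line with # comments ignored; a real deployment would more likely use the python-dotenv package, and the function name is illustrative:

```python
def load_env(path=".env"):
    """Parse simple KEY=VALUE lines from a .env file into a dict.

    Minimal sketch: skips blank lines, comments, and malformed lines;
    no quoting or variable-expansion rules.
    """
    env = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # blank line, comment, or not KEY=VALUE
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# config = load_env()
# base_url = config.get("LLM_BASE_URL", "http://localhost:11434/v1")
```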
