Archon/.env.example
Commit e98f52aa57 (Rasmus Widing, 2025-08-15 16:02 +03:00): Address code review feedback: improve error handling and documentation
- Implement fail-fast error handling for configuration errors
- Distinguish between critical config errors (fail) and network issues (use defaults)
- Add detailed error logging with stack traces for debugging
- Document new crawler settings in .env.example
- Add inline comments explaining safe defaults

Critical configuration errors (ValueError, KeyError, TypeError) now fail fast
as per alpha principles, while transient errors still fall back to safe defaults
with prominent error logging.

# Minimal startup configuration - only Supabase connection required
# All other settings (API keys, model choices, RAG flags) are managed via the Settings page
# Get your SUPABASE_URL from the Data API section of your Supabase project settings -
# https://supabase.com/dashboard/project/<your project ID>/settings/api
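# For reference, the value usually looks like https://<your-project-ref>.supabase.co
# (illustrative format only - copy the exact URL shown in your dashboard)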
SUPABASE_URL=
# Get your SUPABASE_SERVICE_KEY from the API Keys section of your Supabase project settings -
# https://supabase.com/dashboard/project/<your project ID>/settings/api-keys
# On this page it is called the service_role secret.
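# Treat this value as a secret: the service_role key bypasses Row Level Security,
# so it should only live in server-side configuration, never in client code.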
SUPABASE_SERVICE_KEY=
# Optional: Logfire token for observability
LOGFIRE_TOKEN=
# Optional: log level for debugging (e.g. DEBUG, INFO, WARNING, ERROR)
LOG_LEVEL=INFO
# Service Ports Configuration
# These ports are used for external access to the services
HOST=localhost
ARCHON_SERVER_PORT=8181
ARCHON_MCP_PORT=8051
ARCHON_AGENTS_PORT=8052
ARCHON_UI_PORT=3737
ARCHON_DOCS_PORT=3838
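# With the defaults above, the services would typically be reachable at, for example,
# http://localhost:3737 (web UI) and http://localhost:8181 (API server).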
# Embedding Configuration
# Dimensions for embedding vectors (1536 for OpenAI text-embedding-3-small)
EMBEDDING_DIMENSIONS=1536
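# If you use a different embedding model, this value generally needs to match that
# model's output size (for example, 3072 for OpenAI text-embedding-3-large) as well as
# the dimension of any vector columns in your database schema.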
# NOTE: All other configuration has been moved to database management!
# Run the credentials_setup.sql file in your Supabase SQL editor to set up the credentials table.
# Then use the Settings page in the web UI to manage:
# - OPENAI_API_KEY (encrypted)
# - MODEL_CHOICE
# - TRANSPORT settings
# - RAG strategy flags (USE_CONTEXTUAL_EMBEDDINGS, USE_HYBRID_SEARCH, etc.)
# - Crawler settings:
# * CRAWL_MAX_CONCURRENT (default: 10) - Max concurrent pages per crawl operation
# * CRAWL_BATCH_SIZE (default: 50) - URLs processed per batch
# * MEMORY_THRESHOLD_PERCENT (default: 80) - Memory % before throttling
# * DISPATCHER_CHECK_INTERVAL (default: 0.5) - Memory check interval in seconds