# Minimal startup configuration - only the Supabase connection is required
# All other settings (API keys, model choices, RAG flags) are managed via the Settings page

# Get your SUPABASE_URL from the Data API section of your Supabase project settings -
# https://supabase.com/dashboard/project//settings/api
SUPABASE_URL=

# ⚠️ CRITICAL: You MUST use the SERVICE ROLE key, NOT the Anon key! ⚠️
#
# COMMON MISTAKE: Using the anon (public) key will cause ALL saves to fail with "permission denied"!
#
# How to get the CORRECT key:
# 1. Go to: https://supabase.com/dashboard/project//settings/api
# 2. In the Settings menu, click on "API keys"
# 3. Find the "Project API keys" section
# 4. You will see TWO keys - choose carefully:
#    ❌ anon (public): WRONG - shorter, starts with "eyJhbGc..." and contains "anon" in the JWT payload
#    ✅ service_role (secret): CORRECT - longer and contains "service_role" in the JWT payload
#
# The service_role key is typically much longer than the anon key.
# If you see errors like "Failed to save" or "Permission denied", you're using the wrong key!
#
# On the Supabase dashboard, it's labeled as "service_role" under "Project API keys"
SUPABASE_SERVICE_KEY=

# Optional: Set log level for debugging
LOGFIRE_TOKEN=
LOG_LEVEL=INFO

# Service Ports Configuration
# These ports are used for external access to the services
HOST=localhost
ARCHON_SERVER_PORT=8181
ARCHON_MCP_PORT=8051
ARCHON_AGENTS_PORT=8052
ARCHON_UI_PORT=3737
ARCHON_DOCS_PORT=3838

# Embedding Configuration
# Dimensions for embedding vectors (1536 for OpenAI text-embedding-3-small)
EMBEDDING_DIMENSIONS=1536

# NOTE: All other configuration has been moved to database management!
# Run the credentials_setup.sql file in your Supabase SQL editor to set up the credentials table.
# Then use the Settings page in the web UI to manage:
# - OPENAI_API_KEY (encrypted)
# - MODEL_CHOICE
# - TRANSPORT settings
# - RAG strategy flags (USE_CONTEXTUAL_EMBEDDINGS, USE_HYBRID_SEARCH, etc.)
# - Crawler settings:
#   * CRAWL_MAX_CONCURRENT (default: 10) - Max concurrent pages per crawl operation
#   * CRAWL_BATCH_SIZE (default: 50) - URLs processed per batch
#   * MEMORY_THRESHOLD_PERCENT (default: 80) - Memory % before throttling
#   * DISPATCHER_CHECK_INTERVAL (default: 0.5) - Memory check interval in seconds
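# The anon/service_role distinction above can be checked locally: a Supabase key
# is a JWT, and its second dot-separated segment is a base64url-encoded payload
# containing a "role" claim. The sketch below decodes that claim in a POSIX
# shell (assumes `cut` and `base64` are available; the JWT shown is a made-up
# placeholder, not a real Supabase key).

```shell
# Placeholder JWT for illustration only - paste your own key here.
SUPABASE_SERVICE_KEY="eyJhbGc.eyJyb2xlIjoic2VydmljZV9yb2xlIn0.c2ln"

# A JWT is header.payload.signature; grab the payload (second segment).
payload=$(printf '%s' "$SUPABASE_SERVICE_KEY" | cut -d. -f2)

# Real keys are base64url-encoded; map the URL-safe alphabet back to base64.
payload=$(printf '%s' "$payload" | tr '_-' '/+')

# base64 -d needs input padded to a multiple of 4 characters.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done

# A correct key decodes to a payload containing "role":"service_role";
# the anon key decodes to "role":"anon".
printf '%s' "$payload" | base64 -d
echo
```

If the decoded payload shows `"role":"anon"`, you have pasted the wrong key and saves will fail with "permission denied".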