Docker support for Archon V4

Author: Cole Medin, 2025-02-28 08:35:18 -06:00
Parent: 4e72bc77ce
Commit: ee5d70c4c8
26 changed files with 1057 additions and 281 deletions

`.dockerignore` (new file)

@@ -0,0 +1,38 @@
# Ignore specified folders
iterations/
venv/
.langgraph_api/
.github/
__pycache__/
.env
# Git related
.git/
.gitignore
.gitattributes
# Python cache
*.pyc
*.pyo
*.pyd
.Python
*.so
.pytest_cache/
# Environment files
.env.local
.env.development.local
.env.test.local
.env.production.local
# Logs
*.log
# IDE specific files
.idea/
.vscode/
*.swp
*.swo
# Keep the example env file for reference
!.env.example

`Dockerfile` (new file)

@@ -0,0 +1,28 @@
FROM python:3.12-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements first for better caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application
COPY . .
# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONPATH=/app
# Expose port for Streamlit
EXPOSE 8501
# Expose port for the Archon Service (started within Streamlit)
EXPOSE 8100
# Set the entrypoint to run Streamlit directly
CMD ["streamlit", "run", "streamlit_ui.py", "--server.port=8501", "--server.address=0.0.0.0"]

`README.md`

@@ -39,12 +39,35 @@ Archon demonstrates three key principles in modern AI development:
Since V4 is the current version of Archon, all the code for V4 is in both the main directory and `archon/iterations/v4-streamlit-ui-overhaul` directory.
### Prerequisites
- Docker (optional but preferred)
- Python 3.11+
- Supabase account (for vector database)
- OpenAI/OpenRouter API key or Ollama for local LLMs
### Installation
#### Option 1: Docker (Recommended)
1. Clone the repository:
```bash
git clone https://github.com/coleam00/archon.git
cd archon
```
2. Run the Docker setup script:
```bash
# This will build both containers and start Archon
python run_docker.py
```
3. Access the Streamlit UI at http://localhost:8501.
> **Note:** `run_docker.py` will automatically:
> - Build the MCP server container
> - Build the main Archon container
> - Run Archon with the appropriate port mappings
> - Use environment variables from `.env` file if it exists
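The container invocation that `run_docker.py` assembles can be sketched in Python (a sketch mirroring the script included later in this commit; the names `archon-container` and `archon:latest` come from that script):

```python
# Sketch of the `docker run` command that run_docker.py issues:
# both service ports are published, and host.docker.internal is mapped
# so the container can reach services running on the host.
def build_run_command(use_env_file: bool) -> list[str]:
    cmd = [
        "docker", "run", "-d",
        "--name", "archon-container",
        "-p", "8501:8501",   # Streamlit UI
        "-p", "8100:8100",   # Graph Service
        "--add-host", "host.docker.internal:host-gateway",
    ]
    if use_env_file:
        cmd += ["--env-file", ".env"]  # pass API keys etc. from .env
    cmd.append("archon:latest")       # image name goes last
    return cmd

print(" ".join(build_run_command(use_env_file=True)))
```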
#### Option 2: Local Python Installation
1. Clone the repository:
```bash
git clone https://github.com/coleam00/archon.git
@@ -58,20 +81,22 @@ source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
-### Quick Start
-1. Start the Streamlit UI:
+3. Start the Streamlit UI:
```bash
streamlit run streamlit_ui.py
```
-2. Follow the guided setup process in the Intro section of the Streamlit UI:
+4. Access the Streamlit UI at http://localhost:8501.
+
+### Setup Process
+
+After installation, follow the guided setup process in the Intro section of the Streamlit UI:
- **Environment**: Configure your API keys and model settings
- **Database**: Set up your Supabase vector database
- **Documentation**: Crawl and index the Pydantic AI documentation
- **Agent Service**: Start the agent service for generating agents
- **Chat**: Interact with Archon to create AI agents
- **MCP** (optional): Configure integration with AI IDEs
The Streamlit interface will guide you through each step with clear instructions and interactive elements.
There are a good amount of steps for the setup but it goes quick!
@@ -99,6 +124,7 @@ There are a good amount of steps for the setup but it goes quick!
- [Learn more about V3](iterations/v3-mcp-support/README.md)
### V4: Current - Streamlit UI Overhaul
- Docker support
- Comprehensive Streamlit interface for managing all aspects of Archon
- Guided setup process with interactive tabs
- Environment variable management through the UI
@@ -114,8 +140,8 @@ There are a good amount of steps for the setup but it goes quick!
- V8: Autonomous Framework Learning - Self-updating framework adapters
### Future Integrations
-- Docker
- LangSmith
- MCP marketplace
- Other frameworks besides Pydantic AI
- Other vector databases besides Supabase
@@ -124,8 +150,13 @@ There are a good amount of steps for the setup but it goes quick!
### Core Files
- `streamlit_ui.py`: Comprehensive web interface for managing all aspects of Archon
- `graph_service.py`: FastAPI service that handles the agentic workflow
-- `mcp_server.py`: MCP server script for AI IDE integration
+- `run_docker.py`: Script to build and run Archon Docker containers
-- `requirements.txt`: Project dependencies
+- `Dockerfile`: Container definition for the main Archon application
### MCP Integration
- `mcp/`: Model Context Protocol server implementation
- `mcp_server.py`: MCP server script for AI IDE integration
- `Dockerfile`: Container definition for the MCP server
### Archon Package
- `archon/`: Core agent and workflow implementation
@@ -139,7 +170,29 @@ There are a good amount of steps for the setup but it goes quick!
- `site_pages.sql`: Database setup commands
- `env_vars.json`: Environment variables defined in the UI are stored here (included in .gitignore, file is created automatically)
-### Database Setup
+## Deployment Options
- **Docker Containers**: Run Archon in isolated containers with all dependencies included
- Main container: Runs the Streamlit UI and graph service
- MCP container: Provides MCP server functionality for AI IDEs
- **Local Python**: Run directly on your system with a Python virtual environment
### Docker Architecture
The Docker implementation consists of two containers:
1. **Main Archon Container**:
- Runs the Streamlit UI on port 8501
- Hosts the Graph Service on port 8100
- Built from the root Dockerfile
- Handles all agent functionality and user interactions
2. **MCP Container**:
- Implements the Model Context Protocol for AI IDE integration
- Built from the mcp/Dockerfile
- Communicates with the main container's Graph Service
- Provides a standardized interface for AI IDEs like Windsurf, Cursor, and Cline
When running with Docker, the `run_docker.py` script automates building and starting both containers with the proper configuration.
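The two-container layout above could equivalently be described with a Compose file. The sketch below is illustrative only and not part of this commit (in this commit the MCP container is launched on demand by the IDE via `docker run`, not kept running); the service names and `GRAPH_SERVICE_URL` wiring are assumptions:

```yaml
# Hypothetical docker-compose.yml mirroring run_docker.py (not in this commit)
services:
  archon:
    build: .
    ports:
      - "8501:8501"   # Streamlit UI
      - "8100:8100"   # Graph Service
    env_file:
      - .env
    extra_hosts:
      - "host.docker.internal:host-gateway"
  mcp:
    build: ./mcp
    environment:
      # the MCP container reaches the Graph Service in the archon container
      GRAPH_SERVICE_URL: "http://archon:8100"
```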
## Database Setup
The Supabase database uses the following schema:


@@ -53,8 +53,10 @@ openai_client=None
if is_ollama:
    openai_client = AsyncOpenAI(base_url=base_url,api_key=api_key)
-else:
+elif get_env_var("OPENAI_API_KEY"):
    openai_client = AsyncOpenAI(api_key=get_env_var("OPENAI_API_KEY"))
+else:
+    openai_client = None
if get_env_var("SUPABASE_URL"):
    supabase: Client = Client(

`graph_service.py`

@@ -66,4 +66,4 @@ async def invoke_agent(request: InvokeRequest):
if __name__ == "__main__":
    import uvicorn
-    uvicorn.run(app, host="127.0.0.1", port=8100)
+    uvicorn.run(app, host="0.0.0.0", port=8100)

`iterations/v4-streamlit-ui-overhaul/.dockerignore` (new file)

@@ -0,0 +1,38 @@
# Ignore specified folders
iterations/
venv/
.langgraph_api/
.github/
__pycache__/
.env
# Git related
.git/
.gitignore
.gitattributes
# Python cache
*.pyc
*.pyo
*.pyd
.Python
*.so
.pytest_cache/
# Environment files
.env.local
.env.development.local
.env.test.local
.env.production.local
# Logs
*.log
# IDE specific files
.idea/
.vscode/
*.swp
*.swo
# Keep the example env file for reference
!.env.example

`iterations/v4-streamlit-ui-overhaul/.gitattributes` (deleted)

@@ -1,2 +0,0 @@
# Auto detect text files and perform LF normalization
* text=auto

`iterations/v4-streamlit-ui-overhaul/Dockerfile` (new file)

@@ -0,0 +1,28 @@
FROM python:3.12-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements first for better caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application
COPY . .
# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV PYTHONPATH=/app
# Expose port for Streamlit
EXPOSE 8501
# Expose port for the Archon Service (started within Streamlit)
EXPOSE 8100
# Set the entrypoint to run Streamlit directly
CMD ["streamlit", "run", "streamlit_ui.py", "--server.port=8501", "--server.address=0.0.0.0"]

`iterations/v4-streamlit-ui-overhaul/LICENSE` (deleted)

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2025 oTTomator and Archon contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

`iterations/v4-streamlit-ui-overhaul/README.md`

@@ -1,6 +1,6 @@
-# Archon V4 - Streamlit UI Overhaul
+# Archon V4 - Streamlit UI Overhaul (and Docker Support)
-This is the fourth iteration of the Archon project, building upon V3 by adding a comprehensive Streamlit UI for managing all aspects of Archon. The system retains the core LangGraph workflow and MCP support from V3, but now provides a unified interface for environment configuration, database setup, documentation crawling, agent service management, and MCP integration.
+This is the fourth iteration of the Archon project, building upon V3 by adding a comprehensive Streamlit UI for managing all aspects of Archon and Docker support. The system retains the core LangGraph workflow and MCP support from V3, but now provides a unified interface for environment configuration, database setup, documentation crawling, agent service management, and MCP integration.
What makes V4 special is its guided setup process that walks users through each step of configuring and running Archon. The Streamlit UI eliminates the need for manual configuration of environment variables, database setup, and service management, making Archon much more accessible to users without extensive technical knowledge.
@@ -8,36 +8,53 @@ The core remains an intelligent documentation crawler and RAG (Retrieval-Augmented Generation) system
This version continues to support both local LLMs with Ollama and cloud-based LLMs through OpenAI/OpenRouter.
-## Features
-- Comprehensive Streamlit UI with multiple tabs for different functions
-- Guided setup process with interactive instructions
-- Environment variable management through the UI
-- Database setup and configuration simplified
-- Documentation crawling with progress tracking
-- Agent service control and monitoring
-- MCP configuration through the UI
-- Multi-agent workflow using LangGraph
-- Specialized agents for reasoning, routing, and coding
-- Pydantic AI documentation crawling and chunking
-- Vector database storage with Supabase
-- Semantic search using OpenAI embeddings
-- RAG-based question answering
-- Support for code block preservation
-- MCP server support for AI IDE integration
+## Key Features
+- **Comprehensive Streamlit UI**: Unified interface for all Archon functionality
+- **Docker Support**: Containerized deployment with automated build and run scripts
+- **Guided Setup Process**: Step-by-step instructions for configuration
+- **Environment Variable Management**: Configure all settings through the UI
+- **Database Setup**: Automated creation of Supabase tables and indexes
+- **Documentation Crawler**: Fetch and process documentation for RAG
+- **Agent Service Management**: Start/stop the agent service from the UI
+- **MCP Integration**: Configure and manage MCP for AI IDE integration
+- **Multiple LLM Support**: OpenAI, OpenRouter, and local Ollama models
+- **Multi-agent workflow using LangGraph**: Manage multiple agents simultaneously
## Prerequisites
- Docker (optional but preferred)
- Python 3.11+
- Supabase account (for vector database)
- OpenAI/OpenRouter API key or Ollama for local LLMs
## Installation
### Option 1: Docker (Recommended)
1. Clone the repository:
```bash
git clone https://github.com/coleam00/archon.git
-cd archon
+cd archon/iterations/v4-streamlit-ui-overhaul
```
2. Run the Docker setup script:
```bash
# This will build both containers and start Archon
python run_docker.py
```
3. Access the Streamlit UI at http://localhost:8501.
> **Note:** `run_docker.py` will automatically:
> - Build the MCP server container
> - Build the main Archon container
> - Run Archon with the appropriate port mappings
> - Use environment variables from `.env` file if it exists
### Option 2: Local Python Installation
1. Clone the repository:
```bash
git clone https://github.com/coleam00/archon.git
cd archon/iterations/v4-streamlit-ui-overhaul
```
2. Install dependencies:
@@ -47,18 +64,18 @@ source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
-## Usage
-Start the Streamlit UI:
+3. Start the Streamlit UI:
```bash
streamlit run streamlit_ui.py
```
-The interface will be available at `http://localhost:8501`
+4. Access the Streamlit UI at http://localhost:8501.
### Streamlit UI Tabs
The Streamlit interface will guide you through each step with clear instructions and interactive elements.
There are a good amount of steps for the setup but it goes quick!
The Streamlit UI provides the following tabs:
1. **Intro**: Overview and guided setup process
@@ -117,13 +134,18 @@ The MCP tab simplifies the process of configuring MCP for AI IDEs:
- Copy configuration to clipboard
- Get step-by-step instructions for your specific IDE
-## Project Structure
+## Architecture
### Core Files
- `streamlit_ui.py`: Comprehensive web interface for managing all aspects of Archon
- `graph_service.py`: FastAPI service that handles the agentic workflow
-- `mcp_server.py`: MCP server script for AI IDE integration
+- `run_docker.py`: Script to build and run Archon Docker containers
-- `requirements.txt`: Project dependencies
+- `Dockerfile`: Container definition for the main Archon application
### MCP Integration
- `mcp/`: Model Context Protocol server implementation
  - `mcp_server.py`: MCP server script for AI IDE integration
  - `Dockerfile`: Container definition for the MCP server
### Archon Package
- `archon/`: Core agent and workflow implementation
@@ -135,7 +157,29 @@ The MCP tab simplifies the process of configuring MCP for AI IDEs:
- `utils/`: Utility functions and database setup
  - `utils.py`: Shared utility functions
- `site_pages.sql`: Database setup commands
-- `env_vars.json`: Environment variables defined in the UI
+- `env_vars.json`: Environment variables defined in the UI are stored here (included in .gitignore, file is created automatically)
## Deployment Options
- **Docker Containers**: Run Archon in isolated containers with all dependencies included
- Main container: Runs the Streamlit UI and graph service
- MCP container: Provides MCP server functionality for AI IDEs
- **Local Python**: Run directly on your system with a Python virtual environment
### Docker Architecture
The Docker implementation consists of two containers:
1. **Main Archon Container**:
- Runs the Streamlit UI on port 8501
- Hosts the Graph Service on port 8100
- Built from the root Dockerfile
- Handles all agent functionality and user interactions
2. **MCP Container**:
- Implements the Model Context Protocol for AI IDE integration
- Built from the mcp/Dockerfile
- Communicates with the main container's Graph Service
- Provides a standardized interface for AI IDEs like Windsurf, Cursor, and Cline
When running with Docker, the `run_docker.py` script automates building and starting both containers with the proper configuration.
## Contributing


@@ -53,8 +53,10 @@ openai_client=None
if is_ollama:
    openai_client = AsyncOpenAI(base_url=base_url,api_key=api_key)
-else:
+elif get_env_var("OPENAI_API_KEY"):
    openai_client = AsyncOpenAI(api_key=get_env_var("OPENAI_API_KEY"))
+else:
+    openai_client = None
if get_env_var("SUPABASE_URL"):
    supabase: Client = Client(

`iterations/v4-streamlit-ui-overhaul/graph_service.py`

@@ -66,4 +66,4 @@ async def invoke_agent(request: InvokeRequest):
if __name__ == "__main__":
    import uvicorn
-    uvicorn.run(app, host="127.0.0.1", port=8100)
+    uvicorn.run(app, host="0.0.0.0", port=8100)


@@ -1,10 +0,0 @@
{
"mcpServers": {
"archon": {
"command": "C:\\Users\\colem\\oTTomator\\archon\\venv\\Scripts\\python.exe",
"args": [
"C:\\Users\\colem\\oTTomator\\archon\\mcp_server.py"
]
}
}
}

`mcp/.dockerignore` (new file)

@@ -0,0 +1,38 @@
# Ignore specified folders
iterations/
venv/
.langgraph_api/
.github/
__pycache__/
.env
# Git related
.git/
.gitignore
.gitattributes
# Python cache
*.pyc
*.pyo
*.pyd
.Python
*.so
.pytest_cache/
# Environment files
.env.local
.env.development.local
.env.test.local
.env.production.local
# Logs
*.log
# IDE specific files
.idea/
.vscode/
*.swp
*.swo
# Keep the example env file for reference
!.env.example

`mcp/Dockerfile` (new file)

@@ -0,0 +1,19 @@
FROM python:3.12-slim
WORKDIR /app
# Copy requirements file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the MCP server files
COPY . .
# Expose port for MCP server
EXPOSE 8100
# Set environment variables
ENV PYTHONUNBUFFERED=1
# Command to run the MCP server
CMD ["python", "mcp_server.py"]
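This image runs `mcp_server.py`, which (as the `mcp_server.py` changes in this commit show) resolves the Graph Service endpoint from the `GRAPH_SERVICE_URL` environment variable with a localhost fallback. A small sketch of that lookup:

```python
import os

# Sketch of the endpoint resolution used by mcp_server.py: inside Docker the
# variable is set to http://host.docker.internal:8100 so the container can
# reach the Graph Service on the host; a plain local run falls back to localhost.
def resolve_graph_service_url(env=None) -> str:
    env = os.environ if env is None else env
    return env.get("GRAPH_SERVICE_URL", "http://localhost:8100")

print(resolve_graph_service_url({"GRAPH_SERVICE_URL": "http://host.docker.internal:8100"}))
print(resolve_graph_service_url({}))
```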

`mcp/mcp_server.py`

@@ -1,27 +1,44 @@
from mcp.server.fastmcp import FastMCP
from datetime import datetime
from dotenv import load_dotenv
from typing import Dict, List
import threading
import requests
import asyncio
import uuid
import sys
import os
-from utils.utils import write_to_log
-from graph_service import app
-import uvicorn

# Load environment variables from .env file
load_dotenv()

# Initialize FastMCP server
mcp = FastMCP("archon")

# Store active threads
active_threads: Dict[str, List[str]] = {}

# FastAPI service URL
-GRAPH_SERVICE_URL = "http://127.0.0.1:8100"
+GRAPH_SERVICE_URL = os.getenv("GRAPH_SERVICE_URL", "http://localhost:8100")
def write_to_log(message: str):
    """Write a message to the logs.txt file in the workbench directory.

    Args:
        message: The message to log
    """
    # Get the directory one level up from the current file
    current_dir = os.path.dirname(os.path.abspath(__file__))
    parent_dir = os.path.dirname(current_dir)
    workbench_dir = os.path.join(parent_dir, "workbench")
    log_path = os.path.join(workbench_dir, "logs.txt")

    os.makedirs(workbench_dir, exist_ok=True)

    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    log_entry = f"[{timestamp}] {message}\n"

    with open(log_path, "a", encoding="utf-8") as f:
        f.write(log_entry)
@mcp.tool()
async def create_thread() -> str:

`mcp/requirements.txt` (new file)

@@ -0,0 +1,3 @@
mcp==1.2.1
python-dotenv==1.0.1
requests==2.32.3

`run_docker.py` (new file)

@@ -0,0 +1,126 @@
#!/usr/bin/env python
"""
Simple script to build and run Archon Docker containers.
"""
import os
import subprocess
import platform
import time
from pathlib import Path
def run_command(command, cwd=None):
    """Run a command and print output in real-time."""
    print(f"Running: {' '.join(command)}")
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=False,
        cwd=cwd
    )
    for line in process.stdout:
        try:
            decoded_line = line.decode('utf-8', errors='replace')
            print(decoded_line.strip())
        except Exception as e:
            print(f"Error processing output: {e}")
    process.wait()
    return process.returncode

def check_docker():
    """Check if Docker is installed and running."""
    try:
        subprocess.run(
            ["docker", "--version"],
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )
        return True
    except (subprocess.SubprocessError, FileNotFoundError):
        print("Error: Docker is not installed or not in PATH")
        return False

def main():
    """Main function to build and run Archon containers."""
    # Check if Docker is available
    if not check_docker():
        return 1

    # Get the base directory
    base_dir = Path(__file__).parent.absolute()

    # Check for .env file
    env_file = base_dir / ".env"
    env_args = []
    if env_file.exists():
        print(f"Using environment file: {env_file}")
        env_args = ["--env-file", str(env_file)]
    else:
        print("No .env file found. Continuing without environment variables.")

    # Build the MCP container
    print("\n=== Building Archon MCP container ===")
    mcp_dir = base_dir / "mcp"
    if run_command(["docker", "build", "-t", "archon-mcp:latest", "."], cwd=mcp_dir) != 0:
        print("Error building MCP container")
        return 1

    # Build the main Archon container
    print("\n=== Building main Archon container ===")
    if run_command(["docker", "build", "-t", "archon:latest", "."], cwd=base_dir) != 0:
        print("Error building main Archon container")
        return 1

    # Check if the container is already running
    try:
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", "name=archon-container"],
            check=True,
            capture_output=True,
            text=True
        )
        if result.stdout.strip():
            print("\n=== Stopping existing Archon container ===")
            run_command(["docker", "stop", "archon-container"])
            run_command(["docker", "rm", "archon-container"])
    except subprocess.SubprocessError:
        pass

    # Run the Archon container
    print("\n=== Starting Archon container ===")
    cmd = [
        "docker", "run", "-d",
        "--name", "archon-container",
        "-p", "8501:8501",
        "-p", "8100:8100",
        "--add-host", "host.docker.internal:host-gateway"
    ]

    # Add environment variables if .env exists
    if env_args:
        cmd.extend(env_args)

    # Add image name
    cmd.append("archon:latest")

    if run_command(cmd) != 0:
        print("Error starting Archon container")
        return 1

    # Wait a moment for the container to start
    time.sleep(2)

    # Print success message
    print("\n=== Archon is now running! ===")
    print("-> Access the Streamlit UI at: http://localhost:8501")
    print("-> MCP container is ready to use - see the MCP tab in the UI.")
    print("\nTo stop Archon, run: docker stop archon-container && docker rm archon-container")

    return 0

if __name__ == "__main__":
    exit(main())

`streamlit_ui.py`

@@ -43,6 +43,27 @@ from archon.archon_graph import agentic_flow
# Load environment variables from .env file
load_dotenv()
# Initialize clients
openai_client = None
base_url = get_env_var('BASE_URL') or 'https://api.openai.com/v1'
api_key = get_env_var('LLM_API_KEY') or 'no-llm-api-key-provided'
is_ollama = "localhost" in base_url.lower()

if is_ollama:
    openai_client = AsyncOpenAI(base_url=base_url, api_key=api_key)
elif get_env_var("OPENAI_API_KEY"):
    openai_client = AsyncOpenAI(api_key=get_env_var("OPENAI_API_KEY"))
else:
    openai_client = None

if get_env_var("SUPABASE_URL"):
    supabase: Client = Client(
        get_env_var("SUPABASE_URL"),
        get_env_var("SUPABASE_SERVICE_KEY")
    )
else:
    supabase = None

# Set page config - must be the first Streamlit command
st.set_page_config(
    page_title="Archon - Agent Builder",
@@ -175,25 +196,6 @@ def reload_archon_graph():
    except Exception as e:
        st.error(f"Error reloading Archon modules: {str(e)}")
        return False
-# Initialize clients
-openai_client = None
-base_url = get_env_var('BASE_URL') or 'https://api.openai.com/v1'
-api_key = get_env_var('LLM_API_KEY') or 'no-llm-api-key-provided'
-is_ollama = "localhost" in base_url.lower()
-if is_ollama:
-    openai_client = AsyncOpenAI(base_url=base_url,api_key=api_key)
-else:
-    openai_client = AsyncOpenAI(api_key=get_env_var("OPENAI_API_KEY"))
-if get_env_var("SUPABASE_URL"):
-    supabase: Client = Client(
-        get_env_var("SUPABASE_URL"),
-        get_env_var("SUPABASE_SERVICE_KEY")
-    )
-else:
-    supabase = None
# Configure logfire to suppress warnings (optional)
logfire.configure(send_to_logfire='never')
@@ -241,10 +243,10 @@ def generate_mcp_config(ide_type):
    else:  # macOS or Linux
        python_path = os.path.join(base_path, 'venv', 'bin', 'python')
-    server_script_path = os.path.join(base_path, 'mcp_server.py')
+    server_script_path = os.path.join(base_path, 'mcp', 'mcp_server.py')
-    # Create the config dictionary
-    config = {
+    # Create the config dictionary for Python
+    python_config = {
        "mcpServers": {
            "archon": {
                "command": python_path,
@@ -253,15 +255,121 @@
        }
    }
    # Create the config dictionary for Docker
    docker_config = {
        "mcpServers": {
            "archon": {
                "command": "docker",
                "args": [
                    "run",
                    "-i",
                    "--rm",
                    "-e",
                    "GRAPH_SERVICE_URL",
                    "archon-mcp:latest"
                ],
                "env": {
                    "GRAPH_SERVICE_URL": "http://host.docker.internal:8100"
                }
            }
        }
    }
    # Return appropriate configuration based on IDE type
    if ide_type == "Windsurf":
-        return json.dumps(config, indent=2)
+        return json.dumps(python_config, indent=2), json.dumps(docker_config, indent=2)
    elif ide_type == "Cursor":
-        return f"{python_path} {server_script_path}"
+        return f"{python_path} {server_script_path}", f"docker run --rm -p 8100:8100 archon:latest python mcp_server.py"
    elif ide_type == "Cline":
-        return json.dumps(config, indent=2)
+        return json.dumps(python_config, indent=2), json.dumps(docker_config, indent=2)  # Assuming Cline uses the same format as Windsurf
    else:
-        return "Unknown IDE type selected"
+        return "Unknown IDE type selected", "Unknown IDE type selected"
def mcp_tab():
    """Display the MCP configuration interface"""
    st.header("MCP Configuration")
    st.write("Select your AI IDE to get the appropriate MCP configuration:")

    # IDE selection with side-by-side buttons
    col1, col2, col3 = st.columns(3)
    with col1:
        windsurf_button = st.button("Windsurf", use_container_width=True, key="windsurf_button")
    with col2:
        cursor_button = st.button("Cursor", use_container_width=True, key="cursor_button")
    with col3:
        cline_button = st.button("Cline", use_container_width=True, key="cline_button")

    # Initialize session state for selected IDE if not present
    if "selected_ide" not in st.session_state:
        st.session_state.selected_ide = None

    # Update selected IDE based on button clicks
    if windsurf_button:
        st.session_state.selected_ide = "Windsurf"
    elif cursor_button:
        st.session_state.selected_ide = "Cursor"
    elif cline_button:
        st.session_state.selected_ide = "Cline"

    # Display configuration if an IDE is selected
    if st.session_state.selected_ide:
        selected_ide = st.session_state.selected_ide
        st.subheader(f"MCP Configuration for {selected_ide}")
        python_config, docker_config = generate_mcp_config(selected_ide)

        # Configuration type tabs
        config_tab1, config_tab2 = st.tabs(["Docker Configuration", "Python Configuration"])
        with config_tab1:
            st.markdown("### Docker Configuration")
            st.code(docker_config, language="json" if selected_ide != "Cursor" else None)
            st.markdown("#### Requirements:")
            st.markdown("- Docker installed")
            st.markdown("- Run the setup script to build and start both containers:")
            st.code("python run_docker.py", language="bash")
        with config_tab2:
            st.markdown("### Python Configuration")
            st.code(python_config, language="json" if selected_ide != "Cursor" else None)
            st.markdown("#### Requirements:")
            st.markdown("- Python 3.11+ installed")
            st.markdown("- Virtual environment created and activated")
            st.markdown("- All dependencies installed via `pip install -r requirements.txt`")
            st.markdown("- Must be running Archon not within a container")

        # Instructions based on IDE type
        st.markdown("---")
        st.markdown("### Setup Instructions")
        if selected_ide == "Windsurf":
            st.markdown("""
            #### How to use in Windsurf:
            1. Click on the hammer icon above the chat input
            2. Click on "Configure"
            3. Paste the JSON from your preferred configuration tab above
            4. Click "Refresh" next to "Configure"
            """)
        elif selected_ide == "Cursor":
            st.markdown("""
            #### How to use in Cursor:
            1. Go to Cursor Settings > Features > MCP
            2. Click on "+ Add New MCP Server"
            3. Name: Archon
            4. Type: command (equivalent to stdio)
            5. Command: Paste the command from your preferred configuration tab above
            """)
        elif selected_ide == "Cline":
            st.markdown("""
            #### How to use in Cline:
            1. From the Cline extension, click the "MCP Server" tab
            2. Click the "Edit MCP Settings" button
            3. The MCP settings file should be displayed in a tab in VS Code
            4. Paste the JSON from your preferred configuration tab above
            5. Cline will automatically detect and start the MCP server
            """)
async def chat_tab():
"""Display the chat interface for talking to Archon"""
@ -302,70 +410,6 @@ async def chat_tab():
st.session_state.messages.append({"type": "ai", "content": response_content})
def mcp_tab():
"""Display the MCP configuration interface"""
st.header("MCP Configuration")
st.write("Select your AI IDE to get the appropriate MCP configuration:")
# IDE selection with side-by-side buttons
col1, col2, col3 = st.columns(3)
with col1:
windsurf_button = st.button("Windsurf", use_container_width=True, key="windsurf_button")
with col2:
cursor_button = st.button("Cursor", use_container_width=True, key="cursor_button")
with col3:
cline_button = st.button("Cline", use_container_width=True, key="cline_button")
# Initialize session state for selected IDE if not present
if "selected_ide" not in st.session_state:
st.session_state.selected_ide = None
# Update selected IDE based on button clicks
if windsurf_button:
st.session_state.selected_ide = "Windsurf"
elif cursor_button:
st.session_state.selected_ide = "Cursor"
elif cline_button:
st.session_state.selected_ide = "Cline"
# Display configuration if an IDE is selected
if st.session_state.selected_ide:
selected_ide = st.session_state.selected_ide
st.subheader(f"MCP Configuration for {selected_ide}")
config = generate_mcp_config(selected_ide)
# Display the configuration
st.code(config, language="json" if selected_ide != "Cursor" else None)
# Instructions based on IDE type
if selected_ide == "Windsurf":
st.markdown("""
### How to use in Windsurf:
1. Click on the hammer icon above the chat input
2. Click on "Configure"
3. Paste the JSON above as the MCP config
4. Click "Refresh" next to "Configure"
""")
elif selected_ide == "Cursor":
st.markdown("""
### How to use in Cursor:
1. Go to Cursor Settings > Features > MCP
2. Click on "+ Add New MCP Server"
3. Name: Archon
4. Type: command (equivalent to stdio)
5. Command: Paste the command above
""")
elif selected_ide == "Cline":
st.markdown("""
### How to use in Cline:
1. From the Cline extension, click the "MCP Server" tab
2. Click the "Edit MCP Settings" button
3. The MCP settings file should be displayed in a tab in VS Code
4. Paste the JSON above as the MCP config
5. Cline will automatically detect and start the MCP server
""")
def intro_tab():
"""Display the introduction and setup guide for Archon"""
# Display the banner image

View File

@ -1,10 +0,0 @@
{
"mcpServers": {
"archon": {
"command": "[path to Archon]\\archon\\venv\\Scripts\\python.exe",
"args": [
"[path to Archon]\\archon\\mcp_server.py"
]
}
}
}

38
mcp/.dockerignore Normal file
View File

@ -0,0 +1,38 @@
# Ignore specified folders
iterations/
venv/
.langgraph_api/
.github/
__pycache__/
.env
# Git related
.git/
.gitignore
.gitattributes
# Python cache
*.pyc
*.pyo
*.pyd
.Python
*.so
.pytest_cache/
# Environment files
.env.local
.env.development.local
.env.test.local
.env.production.local
# Logs
*.log
# IDE specific files
.idea/
.vscode/
*.swp
*.swo
# Keep the example env file for reference
!.env.example

16
mcp/Dockerfile Normal file
View File

@ -0,0 +1,16 @@
FROM python:3.12-slim
WORKDIR /app
# Copy requirements file and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the MCP server files
COPY . .
# Expose port for MCP server
EXPOSE 8100
# Command to run the MCP server
CMD ["python", "mcp_server.py"]

112
mcp/mcp_server.py Normal file
View File

@ -0,0 +1,112 @@
from mcp.server.fastmcp import FastMCP
from datetime import datetime
from dotenv import load_dotenv
from typing import Dict, List
import threading
import requests
import asyncio
import uuid
import sys
import os
# Load environment variables from .env file
load_dotenv()
# Initialize FastMCP server
mcp = FastMCP("archon")
# Store active threads
active_threads: Dict[str, List[str]] = {}
# FastAPI service URL
GRAPH_SERVICE_URL = os.getenv("GRAPH_SERVICE_URL", "http://localhost:8100")
def write_to_log(message: str):
    """Write a message to the logs.txt file in the workbench directory.

    Args:
        message: The message to log
    """
    # Get the directory one level up from the current file
    current_dir = os.path.dirname(os.path.abspath(__file__))
    parent_dir = os.path.dirname(current_dir)
    workbench_dir = os.path.join(parent_dir, "workbench")
    log_path = os.path.join(workbench_dir, "logs.txt")
    os.makedirs(workbench_dir, exist_ok=True)

    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    log_entry = f"[{timestamp}] {message}\n"
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(log_entry)
@mcp.tool()
async def create_thread() -> str:
    """Create a new conversation thread for Archon.
    Always call this tool before invoking Archon for the first time in a conversation.
    (if you don't already have a thread ID)

    Returns:
        str: A unique thread ID for the conversation
    """
    thread_id = str(uuid.uuid4())
    active_threads[thread_id] = []
    write_to_log(f"Created new thread: {thread_id}")
    return thread_id
def _make_request(thread_id: str, user_input: str, config: dict) -> dict:
    """Make a synchronous request to the graph service."""
    response = requests.post(
        f"{GRAPH_SERVICE_URL}/invoke",
        json={
            "message": user_input,
            "thread_id": thread_id,
            "is_first_message": not active_threads[thread_id],
            "config": config
        }
    )
    response.raise_for_status()
    return response.json()
@mcp.tool()
async def run_agent(thread_id: str, user_input: str) -> str:
    """Run the Archon agent with user input.
    Only use this tool after you have called create_thread in this conversation to get a unique thread ID.
    If you already created a thread ID in this conversation, do not create another one. Reuse the same ID.
    After you receive the code from Archon, always implement it into the codebase unless asked not to.

    Args:
        thread_id: The conversation thread ID
        user_input: The user's message to process

    Returns:
        str: The agent's response which generally includes the code for the agent
    """
    if thread_id not in active_threads:
        write_to_log(f"Error: Thread not found - {thread_id}")
        raise ValueError("Thread not found")

    write_to_log(f"Processing message for thread {thread_id}: {user_input}")
    config = {
        "configurable": {
            "thread_id": thread_id
        }
    }
    try:
        result = await asyncio.to_thread(_make_request, thread_id, user_input, config)
        active_threads[thread_id].append(user_input)
        return result['response']
    except Exception as e:
        # Log the failure before re-raising so the MCP client still sees the original error
        write_to_log(f"Error processing message for thread {thread_id}: {str(e)}")
        raise
if __name__ == "__main__":
    write_to_log("Starting MCP server")

    # Run MCP server
    mcp.run(transport='stdio')
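The `is_first_message` field that `_make_request` sends to the graph service is derived from the per-thread history kept in `active_threads`. A minimal, self-contained sketch of that bookkeeping (payload shape copied from the file above; `build_invoke_payload` is a hypothetical helper name, not part of the commit):

```python
import uuid

# Per-thread message history, mirroring active_threads in mcp_server.py
active_threads: dict[str, list[str]] = {}

def create_thread() -> str:
    """Register a new thread with an empty history and return its ID."""
    thread_id = str(uuid.uuid4())
    active_threads[thread_id] = []
    return thread_id

def build_invoke_payload(thread_id: str, user_input: str) -> dict:
    """Build the JSON body POSTed to the graph service's /invoke endpoint."""
    return {
        "message": user_input,
        "thread_id": thread_id,
        # An empty history means this is the thread's first message
        "is_first_message": not active_threads[thread_id],
        "config": {"configurable": {"thread_id": thread_id}},
    }
```

Note that in `run_agent` the history is only appended to after a successful request, so a failed `/invoke` call leaves `is_first_message` unchanged on retry.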

3
mcp/requirements.txt Normal file
View File

@ -0,0 +1,3 @@
mcp==1.2.1
python-dotenv==1.0.1
requests==2.32.3

126
run_docker.py Normal file
View File

@ -0,0 +1,126 @@
#!/usr/bin/env python
"""
Simple script to build and run Archon Docker containers.
"""
import os
import subprocess
import platform
import time
from pathlib import Path
def run_command(command, cwd=None):
    """Run a command and print output in real-time."""
    print(f"Running: {' '.join(command)}")
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=False,
        cwd=cwd
    )
    for line in process.stdout:
        try:
            decoded_line = line.decode('utf-8', errors='replace')
            print(decoded_line.strip())
        except Exception as e:
            print(f"Error processing output: {e}")
    process.wait()
    return process.returncode
def check_docker():
    """Check if Docker is installed and running."""
    try:
        subprocess.run(
            ["docker", "--version"],
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE
        )
        return True
    except (subprocess.SubprocessError, FileNotFoundError):
        print("Error: Docker is not installed or not in PATH")
        return False
def main():
    """Main function to build and run Archon containers."""
    # Check if Docker is available
    if not check_docker():
        return 1

    # Get the base directory
    base_dir = Path(__file__).parent.absolute()

    # Check for .env file
    env_file = base_dir / ".env"
    env_args = []
    if env_file.exists():
        print(f"Using environment file: {env_file}")
        env_args = ["--env-file", str(env_file)]
    else:
        print("No .env file found. Continuing without environment variables.")

    # Build the MCP container
    print("\n=== Building Archon MCP container ===")
    mcp_dir = base_dir / "mcp"
    if run_command(["docker", "build", "-t", "archon-mcp:latest", "."], cwd=mcp_dir) != 0:
        print("Error building MCP container")
        return 1

    # Build the main Archon container
    print("\n=== Building main Archon container ===")
    if run_command(["docker", "build", "-t", "archon:latest", "."], cwd=base_dir) != 0:
        print("Error building main Archon container")
        return 1

    # Check if the container is already running
    try:
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", "name=archon-container"],
            check=True,
            capture_output=True,
            text=True
        )
        if result.stdout.strip():
            print("\n=== Stopping existing Archon container ===")
            run_command(["docker", "stop", "archon-container"])
            run_command(["docker", "rm", "archon-container"])
    except subprocess.SubprocessError:
        pass

    # Run the Archon container
    print("\n=== Starting Archon container ===")
    cmd = [
        "docker", "run", "-d",
        "--name", "archon-container",
        "-p", "8501:8501",
        "-p", "8100:8100",
        "--add-host", "host.docker.internal:host-gateway"
    ]

    # Add environment variables if .env exists
    if env_args:
        cmd.extend(env_args)

    # Add image name
    cmd.append("archon:latest")

    if run_command(cmd) != 0:
        print("Error starting Archon container")
        return 1

    # Wait a moment for the container to start
    time.sleep(2)

    # Print success message
    print("\n=== Archon is now running! ===")
    print("-> Access the Streamlit UI at: http://localhost:8501")
    print("-> MCP container is ready to use - see the MCP tab in the UI.")
    print("\nTo stop Archon, run: docker stop archon-container && docker rm archon-container")
    return 0

if __name__ == "__main__":
    exit(main())
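The script above assembles the `docker run` invocation as a flat argument list before handing it to `run_command`. A small sketch of that assembly (flags copied from `main()`; `build_run_command` is a hypothetical helper name introduced here for illustration):

```python
from pathlib import Path
from typing import Optional

def build_run_command(env_file: Optional[Path]) -> list[str]:
    """Assemble the docker run argument list used to start the Archon container."""
    cmd = [
        "docker", "run", "-d",
        "--name", "archon-container",
        "-p", "8501:8501",          # Streamlit UI
        "-p", "8100:8100",          # Archon graph service
        "--add-host", "host.docker.internal:host-gateway",
    ]
    # Only pass --env-file when a .env file actually exists
    if env_file is not None and env_file.exists():
        cmd.extend(["--env-file", str(env_file)])
    cmd.append("archon:latest")     # image name must come last
    return cmd
```

Keeping the command as a list (rather than a shell string) avoids quoting issues and lets `subprocess` pass each argument through unmodified.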

View File

@ -43,6 +43,27 @@ from archon.archon_graph import agentic_flow
# Load environment variables from .env file
load_dotenv()
# Initialize clients
openai_client = None
base_url = get_env_var('BASE_URL') or 'https://api.openai.com/v1'
api_key = get_env_var('LLM_API_KEY') or 'no-llm-api-key-provided'
is_ollama = "localhost" in base_url.lower()
if is_ollama:
openai_client = AsyncOpenAI(base_url=base_url,api_key=api_key)
elif get_env_var("OPENAI_API_KEY"):
openai_client = AsyncOpenAI(api_key=get_env_var("OPENAI_API_KEY"))
else:
openai_client = None
if get_env_var("SUPABASE_URL"):
supabase: Client = Client(
get_env_var("SUPABASE_URL"),
get_env_var("SUPABASE_SERVICE_KEY")
)
else:
supabase = None
# Set page config - must be the first Streamlit command
st.set_page_config(
page_title="Archon - Agent Builder",
@ -175,25 +196,6 @@ def reload_archon_graph():
except Exception as e:
st.error(f"Error reloading Archon modules: {str(e)}")
return False
# Initialize clients
openai_client = None
base_url = get_env_var('BASE_URL') or 'https://api.openai.com/v1'
api_key = get_env_var('LLM_API_KEY') or 'no-llm-api-key-provided'
is_ollama = "localhost" in base_url.lower()
if is_ollama:
openai_client = AsyncOpenAI(base_url=base_url,api_key=api_key)
else:
openai_client = AsyncOpenAI(api_key=get_env_var("OPENAI_API_KEY"))
if get_env_var("SUPABASE_URL"):
supabase: Client = Client(
get_env_var("SUPABASE_URL"),
get_env_var("SUPABASE_SERVICE_KEY")
)
else:
supabase = None
# Configure logfire to suppress warnings (optional)
logfire.configure(send_to_logfire='never')
@ -241,10 +243,10 @@ def generate_mcp_config(ide_type):
else: # macOS or Linux
python_path = os.path.join(base_path, 'venv', 'bin', 'python')
- server_script_path = os.path.join(base_path, 'mcp_server.py')
+ server_script_path = os.path.join(base_path, 'mcp', 'mcp_server.py')
- # Create the config dictionary
+ # Create the config dictionary for Python
- config = {
+ python_config = {
"mcpServers": {
"archon": {
"command": python_path,
@ -253,15 +255,121 @@ def generate_mcp_config(ide_type):
} }
} }
# Create the config dictionary for Docker
docker_config = {
"mcpServers": {
"archon": {
"command": "docker",
"args": [
"run",
"-i",
"--rm",
"-e",
"GRAPH_SERVICE_URL",
"archon-mcp:latest"
],
"env": {
"GRAPH_SERVICE_URL": "http://host.docker.internal:8100"
}
}
}
}
# Return appropriate configuration based on IDE type
if ide_type == "Windsurf":
- return json.dumps(config, indent=2)
+ return json.dumps(python_config, indent=2), json.dumps(docker_config, indent=2)
elif ide_type == "Cursor":
- return f"{python_path} {server_script_path}"
+ return f"{python_path} {server_script_path}", f"docker run --rm -p 8100:8100 archon:latest python mcp_server.py"
elif ide_type == "Cline":
- return json.dumps(config, indent=2)
+ return json.dumps(python_config, indent=2), json.dumps(docker_config, indent=2) # Assuming Cline uses the same format as Windsurf
else:
- return "Unknown IDE type selected"
+ return "Unknown IDE type selected", "Unknown IDE type selected"
def mcp_tab():
"""Display the MCP configuration interface"""
st.header("MCP Configuration")
st.write("Select your AI IDE to get the appropriate MCP configuration:")
# IDE selection with side-by-side buttons
col1, col2, col3 = st.columns(3)
with col1:
windsurf_button = st.button("Windsurf", use_container_width=True, key="windsurf_button")
with col2:
cursor_button = st.button("Cursor", use_container_width=True, key="cursor_button")
with col3:
cline_button = st.button("Cline", use_container_width=True, key="cline_button")
# Initialize session state for selected IDE if not present
if "selected_ide" not in st.session_state:
st.session_state.selected_ide = None
# Update selected IDE based on button clicks
if windsurf_button:
st.session_state.selected_ide = "Windsurf"
elif cursor_button:
st.session_state.selected_ide = "Cursor"
elif cline_button:
st.session_state.selected_ide = "Cline"
# Display configuration if an IDE is selected
if st.session_state.selected_ide:
selected_ide = st.session_state.selected_ide
st.subheader(f"MCP Configuration for {selected_ide}")
python_config, docker_config = generate_mcp_config(selected_ide)
# Configuration type tabs
config_tab1, config_tab2 = st.tabs(["Docker Configuration", "Python Configuration"])
with config_tab1:
st.markdown("### Docker Configuration")
st.code(docker_config, language="json" if selected_ide != "Cursor" else None)
st.markdown("#### Requirements:")
st.markdown("- Docker installed")
st.markdown("- Run the setup script to build and start both containers:")
st.code("python run_docker.py", language="bash")
with config_tab2:
st.markdown("### Python Configuration")
st.code(python_config, language="json" if selected_ide != "Cursor" else None)
st.markdown("#### Requirements:")
st.markdown("- Python 3.11+ installed")
st.markdown("- Virtual environment created and activated")
st.markdown("- All dependencies installed via `pip install -r requirements.txt`")
st.markdown("- Archon must be running outside of a container")
# Instructions based on IDE type
st.markdown("---")
st.markdown("### Setup Instructions")
if selected_ide == "Windsurf":
st.markdown("""
#### How to use in Windsurf:
1. Click on the hammer icon above the chat input
2. Click on "Configure"
3. Paste the JSON from your preferred configuration tab above
4. Click "Refresh" next to "Configure"
""")
elif selected_ide == "Cursor":
st.markdown("""
#### How to use in Cursor:
1. Go to Cursor Settings > Features > MCP
2. Click on "+ Add New MCP Server"
3. Name: Archon
4. Type: command (equivalent to stdio)
5. Command: Paste the command from your preferred configuration tab above
""")
elif selected_ide == "Cline":
st.markdown("""
#### How to use in Cline:
1. From the Cline extension, click the "MCP Server" tab
2. Click the "Edit MCP Settings" button
3. The MCP settings file should be displayed in a tab in VS Code
4. Paste the JSON from your preferred configuration tab above
5. Cline will automatically detect and start the MCP server
""")
async def chat_tab():
"""Display the chat interface for talking to Archon"""
@ -302,70 +410,6 @@ async def chat_tab():
st.session_state.messages.append({"type": "ai", "content": response_content})
def mcp_tab():
"""Display the MCP configuration interface"""
st.header("MCP Configuration")
st.write("Select your AI IDE to get the appropriate MCP configuration:")
# IDE selection with side-by-side buttons
col1, col2, col3 = st.columns(3)
with col1:
windsurf_button = st.button("Windsurf", use_container_width=True, key="windsurf_button")
with col2:
cursor_button = st.button("Cursor", use_container_width=True, key="cursor_button")
with col3:
cline_button = st.button("Cline", use_container_width=True, key="cline_button")
# Initialize session state for selected IDE if not present
if "selected_ide" not in st.session_state:
st.session_state.selected_ide = None
# Update selected IDE based on button clicks
if windsurf_button:
st.session_state.selected_ide = "Windsurf"
elif cursor_button:
st.session_state.selected_ide = "Cursor"
elif cline_button:
st.session_state.selected_ide = "Cline"
# Display configuration if an IDE is selected
if st.session_state.selected_ide:
selected_ide = st.session_state.selected_ide
st.subheader(f"MCP Configuration for {selected_ide}")
config = generate_mcp_config(selected_ide)
# Display the configuration
st.code(config, language="json" if selected_ide != "Cursor" else None)
# Instructions based on IDE type
if selected_ide == "Windsurf":
st.markdown("""
### How to use in Windsurf:
1. Click on the hammer icon above the chat input
2. Click on "Configure"
3. Paste the JSON above as the MCP config
4. Click "Refresh" next to "Configure"
""")
elif selected_ide == "Cursor":
st.markdown("""
### How to use in Cursor:
1. Go to Cursor Settings > Features > MCP
2. Click on "+ Add New MCP Server"
3. Name: Archon
4. Type: command (equivalent to stdio)
5. Command: Paste the command above
""")
elif selected_ide == "Cline":
st.markdown("""
### How to use in Cline:
1. From the Cline extension, click the "MCP Server" tab
2. Click the "Edit MCP Settings" button
3. The MCP settings file should be displayed in a tab in VS Code
4. Paste the JSON above as the MCP config
5. Cline will automatically detect and start the MCP server
""")
def intro_tab():
"""Display the introduction and setup guide for Archon"""
# Display the banner image