Remove deprecated PRP testing scripts and dead code

- Removed python/src/server/testing/ folder containing deprecated test utilities
- These PRP viewer testing tools were used during initial development
- No longer needed as functionality has been integrated into main codebase
- No dependencies or references found in production code
Rasmus Widing 2025-08-25 09:47:59 +03:00 committed by Wirasm
parent 468463997d
commit 85f5f2ac93
6 changed files with 0 additions and 1152 deletions

@@ -1,149 +0,0 @@
# PRP Viewer Testing Tools

This directory contains testing tools for the Archon PRP Viewer to ensure consistent rendering between the Milkdown editor view and the PRPViewer (beautiful view).

## PRP Viewer Test Tool

### Purpose

The `prp_viewer_test.py` script identifies rendering inconsistencies between different document views in the Archon UI. It helps diagnose issues like:

- Missing sections in one view but not the other
- Image placeholder rendering problems
- JSON artifacts appearing as raw text
- Content length mismatches
- Format handling differences between markdown strings and structured PRP objects
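
The last bullet is the crux: the same document can be stored in either of two shapes. A minimal illustration (the field names here are hypothetical, not the actual PRP schema):

```python
# A document may arrive as one raw markdown string...
doc_as_string = "# Goal\n\nShip the PRP viewer\n\n[Image #1]"

# ...or as a structured PRP object whose sections are separate fields.
doc_as_object = {
    "title": "PRP: Viewer",         # metadata
    "goal": "Ship the PRP viewer",  # renderable section
    "context": {"docs": ["..."]},   # nested structure
}
```
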
### Prerequisites

1. **Environment Setup**

   ```bash
   # Ensure you have the required environment variables
   cp .env.example .env
   # Edit .env to include:
   # VITE_SUPABASE_URL=your_supabase_url
   # VITE_SUPABASE_ANON_KEY=your_supabase_anon_key
   ```

2. **Start Archon UI Server**

   ```bash
   cd archon-ui-main
   npm run dev
   # Server should be running on http://localhost:3737
   ```

3. **Python Dependencies**

   The script uses Playwright (already installed via crawl4ai) and other dependencies from the server requirements.

### Usage

There are two ways to run the test:

#### Option 1: From Host Machine (Recommended)

```bash
# From the project root directory
python run_prp_viewer_test.py <PROJECT_UUID>

# Example with the template showcase project
python run_prp_viewer_test.py b4cebbce-6a2c-48c8-9583-050ddf3fb9e3
```

#### Option 2: From Inside Docker Container

```bash
# Run from inside the Archon-Server container
source .env && docker exec -e SUPABASE_URL="$SUPABASE_URL" -e SUPABASE_SERVICE_KEY="$SUPABASE_SERVICE_KEY" -e ARCHON_UI_PORT="$ARCHON_UI_PORT" Archon-Server python /app/src/server/testing/prp_viewer_test.py --project-id <PROJECT_UUID>

# Copy results back to host
docker cp Archon-Server:/app/test_results ./test_results_docker
```

**Note:** Running from the host machine is recommended, as it has better access to the UI server and can take screenshots in non-headless mode for debugging.

### Output

The tool generates several output files:

1. **`ViewInconsistencies_{project_id}_{timestamp}.json`**
   - Detailed JSON report of all issues found
   - Includes document metadata, specific issues, and screenshot paths

2. **`Summary_{project_id}_{timestamp}.txt`**
   - Human-readable summary of test results
   - Lists common issues and a breakdown by document

3. **Screenshots**
   - `markdown_{timestamp}.png` - Captures of the Milkdown editor view
   - `beautiful_{timestamp}.png` - Captures of the PRPViewer view

### Understanding the Results

The JSON output includes:

```json
{
  "project_id": "uuid",
  "test_date": "ISO timestamp",
  "documents": [
    {
      "doc_id": "doc_id",
      "title": "Document Title",
      "type": "prp|technical|business",
      "issues": [
        {
          "type": "missing_section|image_placeholder|json_artifact|etc",
          "description": "Details about the issue"
          // Additional issue-specific fields
        }
      ]
    }
  ],
  "summary": {
    "total_documents": 5,
    "documents_with_issues": 3,
    "common_issues": ["image_placeholders", "missing_sections"],
    "issue_breakdown": {
      "missing_section": 4,
      "image_placeholder": 3,
      "json_artifact": 2
    }
  }
}
```
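
To triage a run quickly, you can read the summary back out of the newest report. A small convenience sketch (not part of the tools; it assumes reports were written to `./test_results` with the naming shown above):

```python
import json
from pathlib import Path

# Pick the most recent report; the timestamp suffix sorts chronologically
reports = sorted(Path("test_results").glob("ViewInconsistencies_*.json"))
report = json.loads(reports[-1].read_text())

# Print the per-type counts from the "summary" block
for issue_type, count in report["summary"]["issue_breakdown"].items():
    print(f"{issue_type}: {count}")
```
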
### Common Issues and Fixes

1. **Image Placeholders**
   - Issue: `[Image #1]` not rendering properly
   - Fix: Ensure proper markdown conversion in the `processContent` function

2. **Missing Sections**
   - Issue: Sections visible in markdown but not in the beautiful view
   - Fix: Add section handlers in the PRPViewer component

3. **JSON Artifacts**
   - Issue: Raw JSON displayed instead of formatted content
   - Fix: Improve content type detection and formatting

4. **Content Structure Mismatch**
   - Issue: Documents stored as both strings and objects
   - Fix: Normalize document structure before rendering (see the sketch below)
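
For issue 4, the normalization could look roughly like this. This is a Python sketch for illustration only (the real fix would live in the React rendering code, and the `markdown` key is an assumed convention):

```python
import json
from typing import Any


def normalize_document(content: Any) -> dict[str, Any]:
    """Hand the renderer one consistent shape regardless of how the doc was stored."""
    if isinstance(content, dict):
        return content  # already a structured PRP object
    if isinstance(content, str):
        stripped = content.strip()
        # A JSON object serialized into a string should be parsed, not shown raw
        if stripped.startswith("{"):
            try:
                parsed = json.loads(stripped)
                if isinstance(parsed, dict):
                    return parsed
            except ValueError:
                pass
        # Plain markdown gets wrapped in a single renderable section
        return {"markdown": stripped}
    raise TypeError(f"Cannot render content of type {type(content).__name__}")
```
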
### Next Steps

After running the test tool:

1. Review the generated report to identify patterns
2. Fix the most common issues first
3. Re-run the tests to verify the fixes
4. Consider adding automated tests to the CI/CD pipeline (a possible starting point is sketched below)
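
As a possible starting point for step 4, a minimal pytest sketch. It assumes `prp_data_validator` is importable and that an `ARCHON_TEST_PROJECT_ID` variable points at a seeded project; neither is set up by these tools:

```python
import os

import pytest

from prp_data_validator import PRPDataValidator  # assumed import path


@pytest.mark.skipif(
    "ARCHON_TEST_PROJECT_ID" not in os.environ,
    reason="needs a seeded test project and Supabase credentials",
)
def test_prp_documents_have_no_structural_issues(tmp_path):
    validator = PRPDataValidator(os.environ["ARCHON_TEST_PROJECT_ID"], str(tmp_path))
    validator.run_validation()
    # Fail the pipeline if any document has structural or Milkdown issues
    assert validator.results["summary"]["documents_with_issues"] == 0
```
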
### Troubleshooting

- **"Cannot connect to Archon UI server"**: Ensure the UI dev server is running on port 3737
- **"Missing Supabase credentials"**: Check that your `.env` file has the required variables
- **"No documents found"**: Verify that the project ID exists and has documents
- **Browser not launching**: Try setting `headless=True` in the script for server environments

@@ -1 +0,0 @@
# Testing module for Archon server components

@@ -1,54 +0,0 @@
#!/usr/bin/env python3
"""Debug connectivity to UI server"""
import asyncio
import os

import aiohttp


async def test_connectivity():
    # Determine if we're in Docker
    in_docker = os.path.exists("/.dockerenv")
    # Test different URLs
    urls_to_test = []
    if in_docker:
        print("Running inside Docker container")
        urls_to_test = [
            "http://host.docker.internal:3738",
            "http://host.docker.internal:3737",
            "http://frontend:5173",
            "http://Archon-UI:5173",
        ]
    else:
        print("Running on host machine")
        urls_to_test = [
            "http://localhost:3738",
            "http://localhost:3737",
        ]
    print("\nTesting connectivity to UI server...")
    async with aiohttp.ClientSession() as session:
        for url in urls_to_test:
            try:
                print(f"\nTrying {url}...")
                async with session.get(url, timeout=aiohttp.ClientTimeout(total=5)) as response:
                    print(f" Status: {response.status}")
                    if response.status == 200:
                        content = await response.text()
                        print(f" Success! Response length: {len(content)} chars")
                        has_root = 'id="root"' in content
                        print(f" Contains 'root' element: {has_root}")
                    else:
                        print(" Non-200 status code")
            except Exception as e:
                print(f" Failed: {type(e).__name__}: {e}")
    print("\nDone testing connectivity")


if __name__ == "__main__":
    asyncio.run(test_connectivity())

@@ -1,388 +0,0 @@
#!/usr/bin/env python3
"""
PRP Data Validator

This script validates PRP document structure and content directly from the database
without needing to render the UI. It identifies potential rendering issues by analyzing
the document data structure.

Usage:
    docker exec Archon-Server python /app/src/server/testing/prp_data_validator.py --project-id <PROJECT_UUID>
"""
import argparse
import json
import os
from datetime import datetime
from pathlib import Path
from typing import Any

from dotenv import load_dotenv
from supabase import Client, create_client

# Load environment variables
if os.path.exists("/.dockerenv") and os.path.exists("/app/.env"):
    load_dotenv("/app/.env")
else:
    load_dotenv()


class PRPDataValidator:
    """Validates PRP document data structure"""

    def __init__(self, project_id: str, output_dir: str = "./test_results"):
        self.project_id = project_id
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(exist_ok=True)
        # Initialize Supabase client
        supabase_url = os.getenv("SUPABASE_URL") or os.getenv("VITE_SUPABASE_URL")
        supabase_key = os.getenv("SUPABASE_SERVICE_KEY") or os.getenv("VITE_SUPABASE_ANON_KEY")
        if not supabase_url or not supabase_key:
            raise ValueError("Missing Supabase credentials in environment")
        self.supabase: Client = create_client(supabase_url, supabase_key)
        # Results storage
        self.results = {
            "project_id": project_id,
            "validation_date": datetime.now().isoformat(),
            "documents": [],
            "summary": {"total_documents": 0, "documents_with_issues": 0, "common_issues": []},
        }

    def fetch_project_data(self) -> dict[str, Any]:
        """Fetch project and its documents from database"""
        try:
            # Fetch project
            project_response = (
                self.supabase.table("archon_projects")
                .select("*")
                .eq("id", self.project_id)
                .execute()
            )
            if not project_response.data:
                raise ValueError(f"Project {self.project_id} not found")
            project = project_response.data[0]
            # Fetch all document types from project
            documents = []
            # Check if project has docs array
            if project.get("docs"):
                for doc in project["docs"]:
                    documents.append({
                        "id": doc.get("id", f"doc_{len(documents)}"),
                        "title": doc.get("title", "Untitled"),
                        "type": doc.get("document_type", doc.get("type", "unknown")),
                        "content": doc.get("content", doc),
                        "source": "project.docs",
                        "raw_data": doc,
                    })
            # Check if project has prd field
            if project.get("prd"):
                documents.append({
                    "id": "prd_main",
                    "title": project.get("prd", {}).get("title", "Main PRD"),
                    "type": "prd",
                    "content": project["prd"],
                    "source": "project.prd",
                    "raw_data": project["prd"],
                })
            return {"project": project, "documents": documents}
        except Exception as e:
            print(f"Error fetching project data: {e}")
            raise

    def validate_document_structure(self, doc: dict[str, Any]) -> list[dict[str, Any]]:
        """Validate a single document's structure and identify issues"""
        issues = []
        content = doc.get("content", doc.get("raw_data", {}))
        # Check if content is a string or object
        if isinstance(content, str):
            # Raw markdown string
            issues.append({
                "type": "raw_markdown_string",
                "description": "Document stored as raw markdown string instead of structured object",
                "impact": "May not render properly in PRPViewer",
                "recommendation": "Convert to structured PRP object format",
            })
            # Check for image placeholders
            if "[Image #" in content:
                import re

                placeholders = re.findall(r"\[Image #(\d+)\]", content)
                issues.append({
                    "type": "image_placeholders",
                    "count": len(placeholders),
                    "placeholders": placeholders,
                    "description": f"Found {len(placeholders)} image placeholder(s)",
                    "impact": "Images will show as text placeholders",
                })
        elif isinstance(content, dict):
            # Structured object
            # Check for nested content field
            if "content" in content and isinstance(content["content"], (str, dict)):
                issues.append({
                    "type": "nested_content_field",
                    "description": "Document has nested 'content' field",
                    "impact": "May cause double-wrapping in rendering",
                })
            # Check for mixed content types
            string_fields = []
            object_fields = []
            array_fields = []
            for key, value in content.items():
                if isinstance(value, str):
                    string_fields.append(key)
                    # Check for JSON strings
                    if value.strip().startswith("{") or value.strip().startswith("["):
                        try:
                            json.loads(value)
                            issues.append({
                                "type": "json_string_field",
                                "field": key,
                                "description": f"Field '{key}' contains JSON as string",
                                "impact": "Will render as raw JSON text instead of formatted content",
                            })
                        except:
                            pass
                    # Check for image placeholders in strings
                    if "[Image #" in value:
                        import re

                        placeholders = re.findall(r"\[Image #(\d+)\]", value)
                        if placeholders:
                            issues.append({
                                "type": "image_placeholders_in_field",
                                "field": key,
                                "count": len(placeholders),
                                "description": f"Field '{key}' contains {len(placeholders)} image placeholder(s)",
                            })
                elif isinstance(value, dict):
                    object_fields.append(key)
                elif isinstance(value, list):
                    array_fields.append(key)
            # Check for missing expected PRP sections
            expected_sections = [
                "goal",
                "why",
                "what",
                "context",
                "user_personas",
                "user_flows",
                "success_metrics",
                "implementation_plan",
                "technical_implementation",
                "validation_gates",
            ]
            missing_sections = [s for s in expected_sections if s not in content]
            if missing_sections:
                issues.append({
                    "type": "missing_sections",
                    "sections": missing_sections,
                    "description": f"Missing {len(missing_sections)} expected PRP sections",
                    "impact": "Incomplete PRP structure",
                })
            # Check for sections that might not render
            metadata_fields = ["title", "version", "author", "date", "status", "document_type"]
            renderable_sections = [k for k in content.keys() if k not in metadata_fields]
            if len(renderable_sections) == 0:
                issues.append({
                    "type": "no_renderable_content",
                    "description": "Document has no renderable sections (only metadata)",
                    "impact": "Nothing will display in the viewer",
                })
        else:
            issues.append({
                "type": "invalid_content_type",
                "content_type": type(content).__name__,
                "description": f"Content is of type {type(content).__name__}, expected string or dict",
                "impact": "Cannot render this content type",
            })
        return issues

    def analyze_milkdown_compatibility(self, doc: dict[str, Any]) -> list[dict[str, Any]]:
        """Analyze if document will convert properly to markdown for Milkdown editor"""
        issues = []
        content = doc.get("content", doc.get("raw_data", {}))
        if isinstance(content, dict):
            # Check convertPRPToMarkdown compatibility
            # Based on the function in MilkdownEditor.tsx
            # Check for complex nested structures
            for key, value in content.items():
                if isinstance(value, dict) and any(
                    isinstance(v, (dict, list)) for v in value.values()
                ):
                    issues.append({
                        "type": "complex_nesting",
                        "field": key,
                        "description": f"Field '{key}' has complex nested structure",
                        "impact": "May not convert properly to markdown",
                    })
                # Check for non-standard field names
                if not key.replace("_", "").isalnum():
                    issues.append({
                        "type": "non_standard_field_name",
                        "field": key,
                        "description": f"Field '{key}' has non-standard characters",
                        "impact": "May not display properly as section title",
                    })
        return issues

    def run_validation(self):
        """Run all validations"""
        print(f"Starting PRP Data Validation for project {self.project_id}")
        # Fetch project data
        print("Fetching project data...")
        project_data = self.fetch_project_data()
        documents = project_data["documents"]
        if not documents:
            print("No documents found in project")
            return
        print(f"Found {len(documents)} documents to validate")
        self.results["summary"]["total_documents"] = len(documents)
        # Validate each document
        for i, doc in enumerate(documents):
            print(f"\nValidating document {i + 1}/{len(documents)}: {doc['title']} ({doc['type']})")
            # Structure validation
            structure_issues = self.validate_document_structure(doc)
            # Milkdown compatibility
            milkdown_issues = self.analyze_milkdown_compatibility(doc)
            all_issues = structure_issues + milkdown_issues
            result = {
                "doc_id": doc["id"],
                "title": doc["title"],
                "type": doc["type"],
                "source": doc["source"],
                "issues": all_issues,
                "issue_count": len(all_issues),
            }
            self.results["documents"].append(result)
            if all_issues:
                self.results["summary"]["documents_with_issues"] += 1
                print(f" Found {len(all_issues)} issues")
            else:
                print(" ✓ No issues found")
        # Analyze common issues
        self.analyze_common_issues()
        # Save results
        self.save_results()
        print(f"\nValidation completed. Results saved to {self.output_dir}")
        print(
            f"Summary: {self.results['summary']['documents_with_issues']} out of {self.results['summary']['total_documents']} documents have issues"
        )

    def analyze_common_issues(self):
        """Analyze and summarize common issues across all documents"""
        issue_counts = {}
        for doc in self.results["documents"]:
            for issue in doc["issues"]:
                issue_type = issue["type"]
                issue_counts[issue_type] = issue_counts.get(issue_type, 0) + 1
        # Sort by frequency
        common_issues = sorted(issue_counts.items(), key=lambda x: x[1], reverse=True)
        self.results["summary"]["common_issues"] = [issue[0] for issue in common_issues[:5]]
        self.results["summary"]["issue_breakdown"] = dict(common_issues)

    def save_results(self):
        """Save validation results to file"""
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        # Save JSON report
        json_filename = f"DataValidation_{self.project_id}_{timestamp}.json"
        json_filepath = self.output_dir / json_filename
        with open(json_filepath, "w") as f:
            json.dump(self.results, f, indent=2)
        # Save human-readable report
        txt_filename = f"DataValidationSummary_{self.project_id}_{timestamp}.txt"
        txt_filepath = self.output_dir / txt_filename
        with open(txt_filepath, "w") as f:
            f.write("PRP Data Validation Summary\n")
            f.write("===========================\n\n")
            f.write(f"Project ID: {self.project_id}\n")
            f.write(f"Validation Date: {self.results['validation_date']}\n")
            f.write(f"Total Documents: {self.results['summary']['total_documents']}\n")
            f.write(
                f"Documents with Issues: {self.results['summary']['documents_with_issues']}\n\n"
            )
            f.write("Common Issues:\n")
            for issue_type, count in self.results["summary"].get("issue_breakdown", {}).items():
                f.write(f" - {issue_type}: {count} occurrences\n")
            f.write("\nDetailed Issues by Document:\n")
            f.write("----------------------------\n")
            for doc in self.results["documents"]:
                if doc["issues"]:
                    f.write(f"\n{doc['title']} ({doc['type']}):\n")
                    for issue in doc["issues"]:
                        f.write(f" - [{issue['type']}] {issue['description']}\n")
                        if "impact" in issue:
                            f.write(f" Impact: {issue['impact']}\n")
                        if "recommendation" in issue:
                            f.write(f" Fix: {issue['recommendation']}\n")
        print("\nResults saved:")
        print(f" - JSON report: {json_filepath}")
        print(f" - Summary: {txt_filepath}")


def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(description="Validate PRP document data structure")
    parser.add_argument("--project-id", required=True, help="UUID of the project to validate")
    parser.add_argument(
        "--output-dir", default="./test_results", help="Directory to save validation results"
    )
    args = parser.parse_args()
    # Run validation
    validator = PRPDataValidator(args.project_id, args.output_dir)
    validator.run_validation()


if __name__ == "__main__":
    main()

@@ -1,536 +0,0 @@
#!/usr/bin/env python3
"""
PRP Viewer Test Tool

This script tests the rendering consistency between the Milkdown editor view
and the PRPViewer (beautiful view) in the Archon UI.

Usage:
    python prp_viewer_test.py --project-id <PROJECT_UUID> [--output-dir <DIR>]

Requirements:
    - Archon UI server running on port 3737
    - Database connection configured via environment variables
    - Playwright installed (via crawl4ai dependency)
"""
import argparse
import asyncio
import json
import os
from datetime import datetime
from pathlib import Path
from typing import Any

from dotenv import load_dotenv
from playwright.async_api import Browser, Page, async_playwright
from supabase import Client, create_client

# Load environment variables
# When in Docker, load from the mounted .env file
if os.path.exists("/.dockerenv") and os.path.exists("/app/.env"):
    load_dotenv("/app/.env")
else:
    load_dotenv()


class PRPViewerTester:
    """Tests PRP Viewer rendering consistency"""

    def __init__(self, project_id: str, output_dir: str = "./test_results"):
        self.project_id = project_id
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(exist_ok=True)
        # Initialize Supabase client
        supabase_url = os.getenv("SUPABASE_URL") or os.getenv("VITE_SUPABASE_URL")
        supabase_key = os.getenv("SUPABASE_SERVICE_KEY") or os.getenv("VITE_SUPABASE_ANON_KEY")
        if not supabase_url or not supabase_key:
            raise ValueError("Missing Supabase credentials in environment")
        self.supabase: Client = create_client(supabase_url, supabase_key)
        # When running inside Docker, use host.docker.internal
        if os.path.exists("/.dockerenv"):
            ui_port = os.getenv("ARCHON_UI_PORT", "3737")
            self.base_url = f"http://host.docker.internal:{ui_port}"
        else:
            # When running on host, use localhost
            ui_port = os.getenv("ARCHON_UI_PORT", "3737")
            self.base_url = f"http://localhost:{ui_port}"
        # Results storage
        self.results = {
            "project_id": project_id,
            "test_date": datetime.now().isoformat(),
            "documents": [],
            "summary": {"total_documents": 0, "documents_with_issues": 0, "common_issues": []},
        }

    async def fetch_project_data(self) -> dict[str, Any]:
        """Fetch project and its documents from database"""
        try:
            # Fetch project
            project_response = (
                self.supabase.table("archon_projects")
                .select("*")
                .eq("id", self.project_id)
                .execute()
            )
            if not project_response.data:
                raise ValueError(f"Project {self.project_id} not found")
            project = project_response.data[0]
            # Fetch all document types from project
            documents = []
            # Check if project has docs array
            if project.get("docs"):
                for doc in project["docs"]:
                    documents.append({
                        "id": doc.get("id", f"doc_{len(documents)}"),
                        "title": doc.get("title", "Untitled"),
                        "type": doc.get("document_type", doc.get("type", "unknown")),
                        "content": doc.get("content", doc),
                        "source": "project.docs",
                    })
            # Check if project has prd field
            if project.get("prd"):
                documents.append({
                    "id": "prd_main",
                    "title": project.get("prd", {}).get("title", "Main PRD"),
                    "type": "prd",
                    "content": project["prd"],
                    "source": "project.prd",
                })
            return {"project": project, "documents": documents}
        except Exception as e:
            print(f"Error fetching project data: {e}")
            raise

    async def capture_view_content(self, page: Page, view_type: str) -> dict[str, Any]:
        """Capture content from a specific view"""
        try:
            # Wait for view to load
            await page.wait_for_load_state("networkidle")
            await asyncio.sleep(2)  # Additional wait for React rendering
            if view_type == "markdown":
                # Capture Milkdown editor content
                selector = ".milkdown-editor"
                await page.wait_for_selector(selector, timeout=10000)
                # Get raw markdown content
                markdown_content = await page.evaluate("""
                    () => {
                        const editor = document.querySelector('.milkdown-editor');
                        if (!editor) return null;
                        // Try to get content from various possible sources
                        const prosemirror = editor.querySelector('.ProseMirror');
                        if (prosemirror) {
                            return {
                                text: prosemirror.innerText,
                                html: prosemirror.innerHTML,
                                sections: Array.from(prosemirror.querySelectorAll('h1, h2, h3, h4, h5, h6')).map(h => ({
                                    level: h.tagName,
                                    text: h.innerText
                                }))
                            };
                        }
                        return {
                            text: editor.innerText,
                            html: editor.innerHTML,
                            sections: []
                        };
                    }
                """)
                # Take screenshot
                screenshot_path = self.output_dir / f"{view_type}_{datetime.now().timestamp()}.png"
                await page.screenshot(path=str(screenshot_path), full_page=True)
                return {
                    "type": view_type,
                    "content": markdown_content,
                    "screenshot": str(screenshot_path),
                }
            elif view_type == "beautiful":
                # Capture PRPViewer content
                selector = ".prp-viewer"
                await page.wait_for_selector(selector, timeout=10000)
                # Get rendered content
                viewer_content = await page.evaluate("""
                    () => {
                        const viewer = document.querySelector('.prp-viewer');
                        if (!viewer) return null;
                        return {
                            text: viewer.innerText,
                            html: viewer.innerHTML,
                            sections: Array.from(viewer.querySelectorAll('h1, h2, h3, h4, h5, h6')).map(h => ({
                                level: h.tagName,
                                text: h.innerText,
                                parent: h.parentElement?.className || ''
                            })),
                            images: Array.from(viewer.querySelectorAll('img')).map(img => ({
                                src: img.src,
                                alt: img.alt,
                                displayed: img.naturalWidth > 0
                            })),
                            jsonArtifacts: Array.from(viewer.querySelectorAll('pre')).map(pre => ({
                                content: pre.innerText,
                                isJson: (() => {
                                    try {
                                        JSON.parse(pre.innerText);
                                        return true;
                                    } catch {
                                        return false;
                                    }
                                })()
                            }))
                        };
                    }
                """)
                # Take screenshot
                screenshot_path = self.output_dir / f"{view_type}_{datetime.now().timestamp()}.png"
                await page.screenshot(path=str(screenshot_path), full_page=True)
                return {
                    "type": view_type,
                    "content": viewer_content,
                    "screenshot": str(screenshot_path),
                }
        except Exception as e:
            print(f"Error capturing {view_type} view: {e}")
            return {"type": view_type, "error": str(e), "content": None}

    async def compare_views(
        self, doc: dict[str, Any], markdown_view: dict[str, Any], beautiful_view: dict[str, Any]
    ) -> list[dict[str, Any]]:
        """Compare the two views and identify issues"""
        issues = []
        # Check if both views loaded successfully
        if not markdown_view.get("content") or not beautiful_view.get("content"):
            issues.append({
                "type": "render_failure",
                "description": "One or both views failed to render",
                "markdown_loaded": bool(markdown_view.get("content")),
                "beautiful_loaded": bool(beautiful_view.get("content")),
            })
            return issues
        markdown_content = markdown_view["content"]
        beautiful_content = beautiful_view["content"]
        # Compare sections
        markdown_sections = {s["text"].lower() for s in markdown_content.get("sections", [])}
        beautiful_sections = {s["text"].lower() for s in beautiful_content.get("sections", [])}
        # Find missing sections
        missing_in_beautiful = markdown_sections - beautiful_sections
        missing_in_markdown = beautiful_sections - markdown_sections
        for section in missing_in_beautiful:
            issues.append({
                "type": "missing_section",
                "section": section,
                "visible_in": ["markdown"],
                "missing_from": ["beautiful_view"],
            })
        for section in missing_in_markdown:
            issues.append({
                "type": "missing_section",
                "section": section,
                "visible_in": ["beautiful_view"],
                "missing_from": ["markdown"],
            })
        # Check for image placeholder issues
        if beautiful_content.get("images"):
            for img in beautiful_content["images"]:
                if "placeholder-image-" in img["src"] or not img["displayed"]:
                    issues.append({
                        "type": "image_placeholder",
                        "src": img["src"],
                        "alt": img["alt"],
                        "displayed": img["displayed"],
                    })
        # Check for JSON artifacts (raw JSON visible in the view)
        if beautiful_content.get("jsonArtifacts"):
            for artifact in beautiful_content["jsonArtifacts"]:
                if artifact["isJson"]:
                    issues.append({
                        "type": "json_artifact",
                        "description": "Raw JSON visible instead of formatted content",
                        "preview": artifact["content"][:100] + "..."
                        if len(artifact["content"]) > 100
                        else artifact["content"],
                    })
        # Check for significant content length differences
        markdown_length = len(markdown_content.get("text", ""))
        beautiful_length = len(beautiful_content.get("text", ""))
        if markdown_length > 0 and beautiful_length > 0:
            length_ratio = beautiful_length / markdown_length
            if length_ratio < 0.5 or length_ratio > 2.0:
                issues.append({
                    "type": "content_length_mismatch",
                    "markdown_length": markdown_length,
                    "beautiful_length": beautiful_length,
                    "ratio": length_ratio,
                })
        return issues

    async def test_document(self, browser: Browser, doc: dict[str, Any]) -> dict[str, Any]:
        """Test a single document's rendering"""
        result = {
            "doc_id": doc["id"],
            "title": doc["title"],
            "type": doc["type"],
            "source": doc["source"],
            "issues": [],
        }
        try:
            # Create a new page for testing
            page = await browser.new_page()
            # Navigate to the project's docs tab
            url = f"{self.base_url}/projects/{self.project_id}"
            print(f"Navigating to: {url}")
            # Vite dev server might block direct navigation, so use browser context
            try:
                await page.goto(url, wait_until="domcontentloaded", timeout=30000)
            except Exception as e:
                print(f"Initial navigation failed: {e}")
                # Try without waiting for full load
                await page.goto(url, wait_until="commit", timeout=30000)
            # Click on the Docs tab first
            try:
                # Look for the Docs tab button and click it
                await page.wait_for_selector('button:has-text("Docs")', timeout=5000)
                await page.click('button:has-text("Docs")')
                await asyncio.sleep(1)  # Wait for tab to switch
            except:
                print("Could not find Docs tab button, it might already be selected")
            # Wait for any sign of the page being loaded
            try:
                # First wait for React app to be ready
                await page.wait_for_selector("#root", timeout=10000)
                # Then wait for either docs content or project content
                await page.wait_for_selector(
                    'h2:has-text("Project Docs"), .prp-viewer, .milkdown-editor', timeout=15000
                )
                print("Page loaded successfully")
            except Exception as e:
                print(f"Warning: Page might not have loaded fully: {e}")
                # Take a screenshot for debugging
                await page.screenshot(path=f"/app/test_results/debug_{doc['id']}.png")
            # Select the document (if multiple docs exist)
            # Look for document cards in the horizontal scroll area
            try:
                # Wait for document cards to be visible
                await page.wait_for_selector(".flex.gap-4 .cursor-pointer", timeout=5000)
                # Try to find and click on the document by title
                doc_cards = await page.query_selector_all(".flex.gap-4 .cursor-pointer")
                for card in doc_cards:
                    card_text = await card.inner_text()
                    if doc["title"] in card_text:
                        await card.click()
                        print(f"Selected document: {doc['title']}")
                        await asyncio.sleep(1)  # Wait for selection
                        break
            except Exception as e:
                print(f"Could not select document: {e}")
                # Document might already be selected or is the only one
            # Test markdown view
            await page.click('button:has-text("Markdown"), [data-view="markdown"]')
            markdown_view = await self.capture_view_content(page, "markdown")
            # Test beautiful view
            await page.click(
                'button:has-text("Beautiful"), button:has-text("View"), [data-view="beautiful"]'
            )
            beautiful_view = await self.capture_view_content(page, "beautiful")
            # Compare views
            issues = await self.compare_views(doc, markdown_view, beautiful_view)
            result["issues"] = issues
            # Store view data for debugging
            result["views"] = {
                "markdown": {
                    "screenshot": markdown_view.get("screenshot"),
                    "sections_found": len(markdown_view.get("content", {}).get("sections", []))
                    if markdown_view.get("content")
                    else 0,
                },
                "beautiful": {
                    "screenshot": beautiful_view.get("screenshot"),
                    "sections_found": len(beautiful_view.get("content", {}).get("sections", []))
                    if beautiful_view.get("content")
                    else 0,
                },
            }
            await page.close()
        except Exception as e:
            result["issues"].append({"type": "test_error", "error": str(e)})
        return result

    async def run_tests(self):
        """Run all tests"""
        print(f"Starting PRP Viewer tests for project {self.project_id}")
        # Fetch project data
        print("Fetching project data...")
        project_data = await self.fetch_project_data()
        documents = project_data["documents"]
        if not documents:
            print("No documents found in project")
            return
        print(f"Found {len(documents)} documents to test")
        self.results["summary"]["total_documents"] = len(documents)
        # Launch browser
        async with async_playwright() as p:
            print("Launching browser...")
            # Always use headless mode in Docker
            headless = os.path.exists("/.dockerenv")
            browser = await p.chromium.launch(headless=headless)
            try:
                # Test each document
                for i, doc in enumerate(documents):
                    print(
                        f"\nTesting document {i + 1}/{len(documents)}: {doc['title']} ({doc['type']})"
                    )
                    result = await self.test_document(browser, doc)
                    self.results["documents"].append(result)
                    if result["issues"]:
                        self.results["summary"]["documents_with_issues"] += 1
                    # Small delay between documents
                    await asyncio.sleep(2)
            finally:
                await browser.close()
        # Analyze common issues
        self.analyze_common_issues()
        # Save results
        self.save_results()
        print(f"\nTest completed. Results saved to {self.output_dir}")
        print(
            f"Summary: {self.results['summary']['documents_with_issues']} out of {self.results['summary']['total_documents']} documents have issues"
        )

    def analyze_common_issues(self):
        """Analyze and summarize common issues across all documents"""
        issue_counts = {}
        for doc in self.results["documents"]:
            for issue in doc["issues"]:
                issue_type = issue["type"]
                issue_counts[issue_type] = issue_counts.get(issue_type, 0) + 1
        # Sort by frequency
        common_issues = sorted(issue_counts.items(), key=lambda x: x[1], reverse=True)
        self.results["summary"]["common_issues"] = [issue[0] for issue in common_issues[:5]]
        self.results["summary"]["issue_breakdown"] = dict(common_issues)

    def save_results(self):
        """Save test results to file"""
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"ViewInconsistencies_{self.project_id}_{timestamp}.json"
        filepath = self.output_dir / filename
        with open(filepath, "w") as f:
            json.dump(self.results, f, indent=2)
        # Also save a summary report
        summary_file = self.output_dir / f"Summary_{self.project_id}_{timestamp}.txt"
        with open(summary_file, "w") as f:
            f.write("PRP Viewer Test Summary\n")
            f.write("======================\n\n")
            f.write(f"Project ID: {self.project_id}\n")
            f.write(f"Test Date: {self.results['test_date']}\n")
            f.write(f"Total Documents: {self.results['summary']['total_documents']}\n")
            f.write(
                f"Documents with Issues: {self.results['summary']['documents_with_issues']}\n\n"
            )
            f.write("Common Issues:\n")
            for issue_type, count in self.results["summary"].get("issue_breakdown", {}).items():
                f.write(f" - {issue_type}: {count} occurrences\n")
            f.write("\nDetailed Issues by Document:\n")
            f.write("---------------------------\n")
            for doc in self.results["documents"]:
                if doc["issues"]:
                    f.write(f"\n{doc['title']} ({doc['type']}):\n")
                    for issue in doc["issues"]:
                        f.write(f" - {issue['type']}: {issue.get('description', issue)}\n")


async def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(description="Test PRP Viewer rendering consistency")
    parser.add_argument("--project-id", required=True, help="UUID of the project to test")
    parser.add_argument(
        "--output-dir", default="./test_results", help="Directory to save test results"
    )
    args = parser.parse_args()
    # Check if UI server is running
    # Determine UI URL based on environment
    if os.path.exists("/.dockerenv"):
        ui_port = os.getenv("ARCHON_UI_PORT", "3737")
        ui_url = f"http://host.docker.internal:{ui_port}"
    else:
        ui_port = os.getenv("ARCHON_UI_PORT", "3737")
        ui_url = f"http://localhost:{ui_port}"
    # Skip UI connectivity check for now - Vite dev server may block direct requests
    print(f"Using UI server at {ui_url}")
    print("Note: Skipping connectivity check as Vite dev server may block direct HTTP requests")
    # Run tests
    tester = PRPViewerTester(args.project_id, args.output_dir)
    await tester.run_tests()


if __name__ == "__main__":
    asyncio.run(main())

@@ -1,24 +0,0 @@
#!/bin/bash
# Run PRP Viewer test from within Docker container

PROJECT_ID=$1
if [ -z "$PROJECT_ID" ]; then
    echo "Usage: ./run_prp_test.sh <PROJECT_ID>"
    exit 1
fi

echo "Running PRP Viewer test for project: $PROJECT_ID"

# The UI runs on the host at port 3738, but inside Docker we need to use the container name
docker exec -e ARCHON_UI_URL="http://host.docker.internal:3738" \
    -e VITE_SUPABASE_URL="$VITE_SUPABASE_URL" \
    -e VITE_SUPABASE_ANON_KEY="$VITE_SUPABASE_ANON_KEY" \
    Archon-Server \
    python /app/src/server/testing/prp_viewer_test.py --project-id "$PROJECT_ID" --output-dir /app/test_results

# Copy results back to host (compute the timestamp once so the message matches the directory)
RESULTS_DIR="./test_results_$(date +%Y%m%d_%H%M%S)"
echo "Copying test results to host..."
docker cp Archon-Server:/app/test_results "$RESULTS_DIR"
echo "Test complete. Results copied to $RESULTS_DIR"