Features

Production-ready SDLC orchestration with multi-provider support (Anthropic, OpenAI, Google, DeepSeek), quality-gated iterations, and comprehensive cost tracking across five feature-parity implementations.

SDLC Orchestration Mode

Complete 7-phase pipeline: Analyst, Project Manager, Developers, Integration Architect, QA Review, Feedback Coordinator, and Summary Generator.

Multi-Provider Support

Mix and match Anthropic Claude, OpenAI GPT, Google Gemini, and DeepSeek models per role. Extensible architecture for adding new providers.

Quality-Gated Iterations

Automatic re-iteration until completion threshold is met. Configurable scoring weights: critical (50%), major (20%), minor (10%), acceptance criteria (20%). Score capping prevents inflated scores.
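The documented weights can be combined into a single gate function. The sketch below is hypothetical: the 50/20/10/20 weights and the 80% threshold come from the docs, but the per-issue deduction curve and the exact cap value are assumptions for illustration.

```python
# Hypothetical sketch of the quality gate. Weights (50/20/10/20) match the docs;
# the deduction rates per issue and the cap value are illustrative assumptions.
def completion_score(critical, major, minor, criteria_met, criteria_total):
    crit_part = 0.50 * max(0.0, 1 - 0.25 * critical)   # critical issues: 50% weight
    major_part = 0.20 * max(0.0, 1 - 0.15 * major)     # major issues: 20% weight
    minor_part = 0.10 * max(0.0, 1 - 0.10 * minor)     # minor issues: 10% weight
    ac_part = 0.20 * (criteria_met / criteria_total if criteria_total else 1.0)
    score = crit_part + major_part + minor_part + ac_part
    if critical > 0:
        score = min(score, 0.79)  # cap: critical issues can never clear an 80% gate
    return round(score, 4)
```

The cap is what "prevents inflated scores": even a run that satisfies every acceptance criterion cannot pass the default threshold while a critical issue remains open.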

Per-Provider Cost Tracking

Real-time token usage and cost breakdown by provider. Track API calls, input/output tokens, and costs separately for each LLM.
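Per-provider accounting reduces to a small ledger keyed by provider. A minimal sketch, with illustrative (not real) prices per 1K tokens:

```python
from collections import defaultdict

class UsageTracker:
    """Sketch of per-provider accounting; prices here are illustrative."""
    def __init__(self, prices):
        self.prices = prices  # {provider: (input $/1K tokens, output $/1K tokens)}
        self.stats = defaultdict(lambda: {"calls": 0, "tokens": 0, "cost": 0.0})

    def record(self, provider, input_tokens, output_tokens):
        p_in, p_out = self.prices[provider]
        s = self.stats[provider]
        s["calls"] += 1
        s["tokens"] += input_tokens + output_tokens
        s["cost"] += input_tokens / 1000 * p_in + output_tokens / 1000 * p_out

    def report(self):
        total = sum(s["cost"] for s in self.stats.values())
        rows = []
        for provider, s in sorted(self.stats.items(), key=lambda kv: -kv[1]["cost"]):
            pct = 100 * s["cost"] / total if total else 0
            rows.append(f"{provider:<12}{s['calls']:>6}{s['tokens']:>12,}"
                        f"   ${s['cost']:.4f} ({pct:.0f}%)")
        return "\n".join(rows)
```

Each LLM call records its own tokens under its provider, so the final breakdown (like the cost table in the sample output below) falls out of one pass over the ledger.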

Parallel Development

Up to 5 developers work simultaneously with strict file-assignment enforcement. Each developer can create only the files it is assigned.
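The fan-out plus enforcement pattern can be sketched as follows (names are illustrative, not the project's actual API): the PM's assignments decide which files each developer may emit, and anything outside that set is rejected.

```python
import asyncio

# Illustrative sketch: developers run in parallel, each gated by its assignment.
async def run_developer(dev_id, assigned_files, generate):
    files = await generate(dev_id)                 # stands in for an LLM call
    for path in files:
        if path not in assigned_files:
            raise PermissionError(f"developer {dev_id} is not assigned {path}")
    return files

async def run_all(assignments, generate, max_parallel=5):
    sem = asyncio.Semaphore(max_parallel)          # at most 5 developers in flight
    async def guarded(dev_id, assigned):
        async with sem:
            return await run_developer(dev_id, assigned, generate)
    results = await asyncio.gather(
        *(guarded(d, a) for d, a in assignments.items()))
    merged = {}
    for r in results:
        merged.update(r)                           # assignments are disjoint
    return merged
```

Because assignments are disjoint, the merged output never has two developers writing the same file.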

Independent QA Review

Up to 3 QA agents independently review ALL files. Each identifies critical, major, and minor issues with deduplication.

Integration Architect Phase

Static analysis verifies cross-file consistency: CSS classes used in HTML exist, HTML IDs referenced in JS exist, JS element selectors match HTML structure, and module imports are valid.

Cross-Language Parity

Five feature-complete implementations in Node.js, Python, Go, .NET, and Next.js Web with identical pipelines and equivalent concurrency models.

Web Dashboard

Next.js 16 web frontend with SSE-driven live dashboard, event-sourced run history with full replay, conversational requirements builder, real-time phase progress, and an integrated output file browser with ZIP downloads.

OIDC Authentication

Web frontend secured behind direct OAuth/OIDC via Rdn.Identity, with single-user lockdown through the ALLOWED_USER setting, protecting LLM API keys from unauthorized use.

User Check-In Points

Interactive prompts between iterations showing completion score, critical issues, and costs. Accept results early or continue iterating.

Graceful Ctrl+C

Immediately cancels in-flight API requests and displays a final usage report. Uses AbortController (Node.js), CancellationToken (.NET), and context cancellation (Go/Python).
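A Python analogue of the pattern looks like the sketch below (the real implementations use AbortController, CancellationToken, or context.Context as noted; the names here are illustrative). The key idea is tracking in-flight tasks so a single SIGINT handler can cancel them all, then still reach the usage report.

```python
import asyncio
import signal

# Illustrative sketch of SIGINT-driven cancellation of in-flight requests.
class GracefulShutdown:
    def __init__(self):
        self.tasks = set()

    def track(self, task):
        self.tasks.add(task)
        task.add_done_callback(self.tasks.discard)
        return task

    def on_sigint(self, *_):
        for task in self.tasks:
            task.cancel()          # aborts the awaited call immediately

async def fake_api_call():
    try:
        await asyncio.sleep(60)    # stands in for a long LLM request
        return "done"
    except asyncio.CancelledError:
        return "cancelled"         # absorb cancellation so the report still runs

async def demo():
    shutdown = GracefulShutdown()
    signal.signal(signal.SIGINT, shutdown.on_sigint)
    task = shutdown.track(asyncio.ensure_future(fake_api_call()))
    await asyncio.sleep(0)         # let the request start
    shutdown.on_sigint()           # simulate Ctrl+C
    return await task

result = asyncio.run(demo())
```

After cancellation resolves, the orchestrator prints the final token-usage summary from whatever was recorded up to that point.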

Visual & Interactive QA

Three QA modes — code review, visual screenshot capture, and full browser automation — each progressively deeper. See detailed breakdown below.

QA Modes In Depth

Three configurable testing modes, each building on the previous. Select per-project based on complexity and coverage requirements.

Code QA

Default Mode

  • Multi-agent independent review of ALL generated files
  • Cross-file integration checks: CSS/HTML selectors, JS/HTML element IDs, import validation
  • Structured issue output with severity levels (critical, major, minor)
  • Acceptance criteria evaluation against original requirements
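The structured issue output and the deduplication step (several independent agents often report the same finding) can be sketched like this; the field names are illustrative, not the project's actual schema.

```python
from dataclasses import dataclass

# Illustrative issue shape plus dedup across independent QA agents.
@dataclass(frozen=True)
class Issue:
    severity: str   # "critical" | "major" | "minor"
    file: str
    summary: str

def merge_reviews(reviews):
    """Union the issues from every agent, dropping case-insensitive duplicates."""
    seen, merged = set(), []
    for review in reviews:
        for issue in review:
            key = (issue.severity, issue.file, issue.summary.lower())
            if key not in seen:
                seen.add(key)
                merged.append(issue)
    return merged
```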

Visual QA

Code + Screenshots

  • Launches headless browser via Playwright (Node.js/Python) or chromedp (Go)
  • Captures screenshots at configurable viewports (desktop 1280x720, mobile 375x667)
  • Configurable wait strategies: networkidle, load, domcontentloaded
  • Multi-modal QA prompts include screenshots for layout, alignment, and responsiveness analysis
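Feeding screenshots into the QA prompt amounts to building a multi-modal message body. The sketch below assumes Anthropic-style base64 image content blocks; other providers use different shapes, and the function name is illustrative.

```python
import base64

# Sketch of a multi-modal QA message body (Anthropic-style blocks assumed).
def visual_qa_content(prompt_text, screenshots):
    content = [{"type": "text", "text": prompt_text}]
    for png_bytes in screenshots:                  # one block per captured viewport
        content.append({
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": base64.b64encode(png_bytes).decode("ascii"),
            },
        })
    return content
```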

Interactive QA

Full Automation

  • Turn-based tool use with 16 browser tools: navigate, screenshot, get_page_info, click, type, select, hover, get_text, get_value, get_attribute, is_visible, count_elements, get_console_logs, evaluate, wait_for, report_test
  • Mandatory ACTION, VERIFY, COMPARE, REPORT testing protocol
  • Auto-failure detection for failed element interactions and suspicious patterns (no test reports = CRITICAL)
  • Max 20 turns per agent with tool use tracking
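The turn loop behind those bullets can be sketched as follows. The tool names come from the docs; the control flow and the shape of the agent callback are assumptions for illustration.

```python
# Illustrative sketch of the turn-based tool-use loop with auto-failure detection.
def run_interactive_qa(agent_step, tools, max_turns=20):
    tool_uses, reports, turns = 0, [], 0
    for turn in range(1, max_turns + 1):
        turns = turn
        calls = agent_step(turn)           # agent decides which tools to invoke
        if not calls:                      # agent signals it is done testing
            break
        for name, args in calls:
            tool_uses += 1
            result = tools[name](**args)
            if name == "report_test":
                reports.append(result)
    if not reports:
        # Suspicious pattern: an agent that never filed a test report.
        reports.append({"status": "CRITICAL", "reason": "no test reports"})
    return {"turns": turns, "tool_uses": tool_uses, "reports": reports}
```

The trailing check is the "no test reports = CRITICAL" rule: an agent that burns its turns without ever calling `report_test` fails the run rather than silently passing.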

SDLC Mode CLI

# Run SDLC orchestration with quality threshold

# Node.js

$ node swarm.js --mode sdlc --config ./swarm-config.json --threshold 0.8

# Python

$ python swarm.py --mode sdlc --config ./swarm-config.json --threshold 0.8

# Go

$ ./swarm -mode sdlc -config ./swarm-config.json -threshold 0.8

# .NET

$ dotnet run -- --mode sdlc --config ./swarm-config.json --threshold 0.8

# Web (Docker)

$ docker compose up --build     # http://localhost:3000

# Key SDLC options:

--mode sdlc            Enable SDLC orchestration pipeline

--config <path>        Configuration file with role settings

--threshold <n>        Completion threshold 0.0-1.0 (default: 0.8)

--max-iterations <n>   Max refinement iterations (default: 10)

--max-cost <n>         Stop if total cost exceeds $n
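A Python-flavoured sketch of that flag set (the flag names and defaults come from the docs; the argparse wiring is illustrative, as each implementation has its own parser):

```python
import argparse

# Illustrative parser mirroring the documented SDLC flags and defaults.
def build_parser():
    p = argparse.ArgumentParser(prog="swarm")
    p.add_argument("--mode", choices=["sdlc"], required=True)
    p.add_argument("--config", metavar="<path>", required=True)
    p.add_argument("--threshold", type=float, default=0.8)
    p.add_argument("--max-iterations", type=int, default=10)
    p.add_argument("--max-cost", type=float, default=None)  # None = no cost ceiling
    return p
```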

Web Dashboard

A Next.js 16 web frontend that replaces the CLI with an interactive browser experience. Secured behind OIDC authentication, with real-time SSE streaming, event-sourced run history, and full replay.

Chat Builder

Requirements

  • Conversational LLM chat to flesh out requirements
  • Uses the analyst role's configured model
  • Generates structured requirements.md draft
  • Split editor view with config panel

Live Dashboard

SSE Streaming

  • Animated spinners and live timers per phase
  • Per-developer and per-QA progress rows with telemetry
  • Score bar, check-in modal, and token usage table
  • Event replay on reconnect (leave and return mid-run)
  • Run history with full event-sourced replay
  • Home dashboard with stats, success rate, and total cost
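The replay-on-reconnect behaviour falls out of standard SSE mechanics: tagging each frame with an `id:` line lets a reconnecting browser send `Last-Event-ID`, and the server re-streams everything after it. A minimal sketch (function names illustrative):

```python
# Sketch of SSE framing plus Last-Event-ID replay for reconnecting clients.
def sse_frame(event_id, event_type, data):
    return f"id: {event_id}\nevent: {event_type}\ndata: {data}\n\n"

def replay_since(event_log, last_event_id):
    """Re-send every logged event after the client's Last-Event-ID."""
    start = 0
    for i, (eid, _, _) in enumerate(event_log):
        if eid == last_event_id:
            start = i + 1
    return "".join(sse_frame(*evt) for evt in event_log[start:])
```

Persisting the same event log per run is what makes full event-sourced replay of historical runs possible: replaying from the start reproduces the whole dashboard state.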

Output Browser

File Viewer

  • File tree of generated code and docs
  • Source code viewer with language labels
  • Live HTML preview via sandboxed iframe
  • Preview/Source toggle for HTML files
  • ZIP download for entire output or archived runs
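The ZIP export reduces to bundling the generated files in memory for download; a minimal sketch:

```python
import io
import zipfile

# Sketch: pack {path: content} into an in-memory ZIP suitable for a download response.
def zip_outputs(files):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, content in files.items():
            zf.writestr(path, content)   # preserves the output tree's paths
    return buf.getvalue()
```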

Sample SDLC Output

🚀 SDLC ORCHESTRATOR STARTING

============================================================

Requirements loaded from: ./inputs/requirements.md

Max developers: 5 | Max QA agents: 3 | Max iterations: 10

Completion threshold: 80%

############################################################

# ITERATION 1

############################################################

============================================================

📋 PHASE 1: ANALYST

🤖 MODEL: openai/gpt-4o-mini

============================================================

  * Analyzing requirements... [3.2s]

  ✅ Analysis complete: 4 components, 12 acceptance criteria

============================================================

📊 PHASE 2: PROJECT MANAGER

🤖 MODEL: deepseek/deepseek-reasoner

============================================================

  * Creating work assignments... [2.1s]

  ✅ Created 4 work assignments

============================================================

🔨 PHASE 3: DEVELOPERS (4 parallel)

🤖 MODEL: anthropic/claude-3-haiku-20240307

============================================================

  ✅ Developer 0: 2 files [8.5s] | 3,456 tokens | $0.0123

  ✅ Developer 1: 1 file [6.2s] | 2,100 tokens | $0.0087

  ✅ Developer 2: 2 files [7.8s] | 2,890 tokens | $0.0098

  ✅ Developer 3: 1 file [5.1s] | 1,567 tokens | $0.0065

============================================================

🔗 PHASE 4: INTEGRATION ARCHITECT

🤖 MODEL: deepseek/deepseek-chat

============================================================

  * Checking cross-file consistency... [4.3s]

  ✅ 2 integration issues detected and fixed

============================================================

🔍 PHASE 5: QA REVIEW (interactive mode)

🤖 MODEL: anthropic/claude-haiku-4-5-20251001

============================================================

  🧪 Running interactive browser testing...

  ✅ Browser ready at http://localhost:54321/index.html

  🧪 QA 0: Starting interactive testing...

    [████████████████░░░░] turn 16/20 (24 tool uses)

  ✅ QA 0 interactive: 16/20 turns, 24 tool uses | 3 issue(s) | 8/10 tests | $0.0068

  📸 Capturing screenshots for code review...

  ✅ 2 screenshot(s) captured

  📝 Now running code review...

  ✅ QA 0: 2 issues [4.1s] | 12,340 tokens | $0.0045

  ✅ QA 1: 1 issue [3.8s] | 11,200 tokens | $0.0041

============================================================

📈 PHASE 6: FEEDBACK COORDINATOR

🤖 MODEL: openai/gpt-4o-mini

============================================================

  * Analyzing feedback... [2.5s]

  Score: [████████████████░░░░|] 82% (threshold: 80%)

  ✅ Threshold met!

============================================================

📝 PHASE 7: SUMMARY GENERATOR

🤖 MODEL: google/gemini-2.5-flash

============================================================

  * Generating summary... [1.8s]

  ✅ Summary saved to outputs/docs/summary.md

============================================================

🎉 SDLC ORCHESTRATION COMPLETE

============================================================

📊 Final Score: 82%

   Iterations: 1 | Files Created: 6 | Total Time: 2m 34s

🧮 TOKEN USAGE SUMMARY

============================================================

💰 COST BY PROVIDER:

Provider         Calls        Tokens          Cost

anthropic          12       45,230     $0.0678 (54%)

openai              6       28,100     $0.0342 (27%)

deepseek            4       18,200     $0.0126 (10%)

google              4       15,670     $0.0099 (8%)

TOTAL              26      107,200     $0.1245

SDLC Pipeline Flow

1. Analyst: Decomposes requirements

2. PM: Assigns work to devs

3. Devs: Parallel code gen

4. Architect: Static analysis

5. QA: Issue detection

6. Feedback: Score & iterate

7. Summary: Final docs

If score < threshold after Phase 6, loop back to Phase 3 (Developers) with QA feedback
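The loop described above can be sketched as follows; the phase wiring is illustrative (Phases 3-5 collapsed into one callback), but the gating matches the documented flow.

```python
# Illustrative sketch of the outer iteration loop around Phases 3-6.
def run_pipeline(develop, score_fn, threshold=0.8, max_iterations=10):
    feedback, score, iteration = None, 0.0, 0
    for iteration in range(1, max_iterations + 1):
        artifacts = develop(feedback)          # Phases 3-5: devs, architect, QA
        score, feedback = score_fn(artifacts)  # Phase 6: feedback coordinator
        if score >= threshold:
            break                              # gate cleared; Phase 7 runs next
    return iteration, score
```

On each pass after the first, `develop` receives the coordinator's feedback, which is how QA findings steer the next round of development.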