System Design

SDLC Orchestrator Architecture

A 7-phase software development pipeline with multi-provider support (Anthropic, OpenAI, Google, DeepSeek), quality-gated iterations, and per-provider cost tracking.

SDLC Pipeline Flow

Requirements
  → 1. Analyst
  → 2. Project Manager
  → 3. Developers (Dev 0 … Dev N, parallel)
  → 4. Integration
  → 5. QA Review (QA 0, QA 1, QA 2, parallel)
  → 6. Feedback
  → Output

If the completion score < threshold, loop back to the Developers (Phase 3) with QA feedback.

Phase Details

1. Analyst

Decomposes requirements into components with file lists and dependencies, testable and measurable acceptance criteria, technical notes, and a complexity estimate.

2. Project Manager

Creates non-overlapping work assignments. Validates that no file is assigned to multiple developers. Groups related files to minimize integration issues.
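The no-overlap validation can be sketched as a simple ownership check (the function name and assignment shape are assumptions for illustration, not the orchestrator's actual API):

```javascript
// Sketch: verify no file appears in more than one developer's assignment.
// The { developer, files } shape is an assumption for illustration.
function findOverlappingFiles(assignments) {
  const owners = new Map(); // file -> first developer that claimed it
  const conflicts = [];
  for (const { developer, files } of assignments) {
    for (const file of files) {
      if (owners.has(file)) {
        conflicts.push({ file, developers: [owners.get(file), developer] });
      } else {
        owners.set(file, developer);
      }
    }
  }
  return conflicts; // an empty array means the plan is valid
}
```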

3. Developers (Parallel)

Each developer receives only its assigned files and may create only the files it is assigned (enforced). Each also receives team coordination context showing all assignments.

4. Integration Architect

Static analysis verifies cross-file consistency: CSS classes used in HTML exist in CSS files, HTML IDs referenced in JS exist in HTML, JS element selectors match HTML structure, module imports are valid.
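One of these checks can be sketched in a few lines (a regex-based sketch for illustration; a real implementation would use proper HTML/CSS parsers, and the function name is an assumption):

```javascript
// Sketch of one cross-file check: every class used in the HTML must be
// defined in the CSS. Regex-based for illustration only.
function findMissingCssClasses(html, css) {
  const used = new Set();
  for (const m of html.matchAll(/class\s*=\s*"([^"]*)"/g)) {
    for (const cls of m[1].split(/\s+/).filter(Boolean)) used.add(cls);
  }
  const defined = new Set();
  for (const m of css.matchAll(/\.([A-Za-z_][\w-]*)/g)) defined.add(m[1]);
  return [...used].filter((cls) => !defined.has(cls));
}
```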

5. QA Review (Parallel)

Each QA agent independently reviews ALL files (not just a subset). Checks critical integration points. Marks each issue's severity: critical, major, or minor.

6. Feedback Coordinator

Deduplicates similar issues across QA agents. Calculates the completion score using a weighted algorithm. Decides whether the threshold is met or another iteration is needed.

Completion Scoring Algorithm

// Weighted completion score calculation
finalScore = (criticalScore × 0.50) +
             (majorScore    × 0.20) +
             (minorScore    × 0.10) +
             (acScore       × 0.20)

// Issue penalties per occurrence:
criticalScore = max(0, 1.0 - (criticalIssues × 1.00))
majorScore    = max(0, 1.0 - (majorIssues × 0.25))
minorScore    = max(0, 1.0 - (minorIssues × 0.10))
acScore       = passedCriteria / totalCriteria

// Score capping (prevents artificially high scores):
if criticalIssues >= 1: maxScore = max(0.3, 0.6 - criticalIssues × 0.15)
if majorIssues >= 3:    maxScore = max(0.4, 0.75 - (majorIssues - 2) × 0.1)
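The algorithm can be sketched as a single JavaScript function (a sketch; the function and parameter names are illustrative, not the orchestrator's actual API — only the weights, penalties, and caps come from the formulas above):

```javascript
// Sketch of the weighted completion score. Function and parameter names
// are illustrative; weights/penalties match the formulas above.
function computeCompletionScore({
  criticalIssues, majorIssues, minorIssues, passedCriteria, totalCriteria,
}) {
  const criticalScore = Math.max(0, 1.0 - criticalIssues * 1.0);
  const majorScore = Math.max(0, 1.0 - majorIssues * 0.25);
  const minorScore = Math.max(0, 1.0 - minorIssues * 0.1);
  const acScore = totalCriteria > 0 ? passedCriteria / totalCriteria : 0;

  let score =
    criticalScore * 0.5 + majorScore * 0.2 + minorScore * 0.1 + acScore * 0.2;

  // Cap the score so issue counts can't be offset by the other components.
  if (criticalIssues >= 1) {
    score = Math.min(score, Math.max(0.3, 0.6 - criticalIssues * 0.15));
  }
  if (majorIssues >= 3) {
    score = Math.min(score, Math.max(0.4, 0.75 - (majorIssues - 2) * 0.1));
  }
  return score;
}
```

With no issues and all criteria passing, the score is 1.0; a single critical issue zeroes its component and caps the total at 0.45, well below the 0.8 threshold.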

Project Structure

# Multi-language SDLC orchestrator implementations
rdn-swarm/
  src/
    nodejs/
      swarm.js                  # Main entry point
      sdlc-orchestrator.js      # 2000+ lines - SDLC pipeline
      providers/                # Anthropic, OpenAI, Google
      swarm-config.json         # Role-specific configuration
    python/
      swarm.py                  # Main entry point
      sdlc/                     # Orchestrator, config, types
      sdlc/providers/           # Multi-provider support
    go/
      swarm.go                  # Main entry point
      sdlc/orchestrator.go      # 2000+ lines - goroutines
      providers/                # LLM provider clients
    dotnet/
      Swarm.cs                  # Main CLI interface
      SdlcOrchestrator.cs       # Full 7-phase pipeline
      Providers/                # LLM provider interfaces
    web/
      lib/web-orchestrator.ts   # SDLCOrchestrator subclass + SSE events
      lib/orchestrator/         # Copied from nodejs/ (JS, not rewritten)
      src/app/                  # Next.js pages + API routes
      src/hooks/useRunEvents.ts # SSE consumer → useReducer state machine

Extensible Provider Architecture

Anthropic

Claude Models

  • claude-opus-4-6
  • claude-sonnet-4-5-20250929
  • claude-haiku-4-5-20251001
  • Native SDK integration

OpenAI

GPT Models

  • gpt-5.2-codex / gpt-5.1-codex
  • gpt-5 / gpt-5-mini / gpt-5-nano
  • gpt-4.1 / gpt-4o / gpt-4o-mini
  • OpenAI SDK integration

Google

Gemini Models

  • gemini-2.5-flash / gemini-2.5-pro
  • gemini-3-pro / gemini-3-flash
  • gemini-2.5-flash-lite
  • REST API integration

DeepSeek

Reasoning Models

  • deepseek-reasoner
  • deepseek-chat
  • OpenAI-compatible API
  • Lowest cost per token

New providers can be added by implementing the base provider interface with a createMessage() method.
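A minimal provider sketch, assuming a request/response shape for illustration (only the createMessage() method name comes from the interface above; everything else here is hypothetical):

```javascript
// Sketch of a custom provider implementing createMessage(). The
// constructor options and the { content, usage } return shape are
// assumptions for illustration, not the project's actual contract.
class EchoProvider {
  constructor({ model }) {
    this.model = model;
  }

  async createMessage({ system, messages, maxTokens }) {
    // A real provider would call its LLM API here; this one just echoes.
    const last = messages[messages.length - 1];
    return {
      content: `[${this.model}] ${last.content}`.slice(0, maxTokens),
      usage: { inputTokens: 0, outputTokens: 0 },
    };
  }
}
```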

Concurrency Models

Node.js

Event Loop

  • Promise.all for parallel execution
  • Promise.race for semaphore slots
  • An executing array tracks active tasks

Python

asyncio

  • asyncio.Semaphore(n) limits
  • asyncio.gather for parallelism
  • async with for clean acquire

Go

Goroutines

  • Channel-based semaphore
  • sync.WaitGroup for join
  • chromedp for browser automation

.NET

async/await

  • SemaphoreSlim for limits
  • Task.WhenAll for parallelism
  • Native HttpClient API

Next.js

SSE + EventEmitter

  • Wraps Node.js orchestrator
  • SSE stream to browser
  • useReducer state machine
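The Node.js pattern above (Promise.all for joins, Promise.race over an array of in-flight promises for semaphore slots) can be sketched as a bounded parallel map (a sketch; the function name is illustrative):

```javascript
// Sketch of the Node.js concurrency pattern: run tasks in parallel but
// never more than `limit` at once, using Promise.race over a set of
// in-flight promises as a semaphore.
async function mapWithLimit(items, limit, worker) {
  const results = [];
  const executing = new Set(); // tracks active task promises
  for (const [i, item] of items.entries()) {
    const p = Promise.resolve()
      .then(() => worker(item, i))
      .then((r) => {
        results[i] = r;
        executing.delete(p); // free the slot when the task settles
      });
    executing.add(p);
    if (executing.size >= limit) {
      await Promise.race(executing); // wait for a slot to open
    }
  }
  await Promise.all(executing); // join the remaining tasks
  return results;
}
```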

Web Frontend Architecture

The 5th implementation is a Next.js web frontend that replaces the CLI with a browser-based UI. The orchestrator runs server-side in the same Node.js process — no separate API server.

Browser ←—SSE—→ Next.js Server (port 3000)
                ├── Auth (next-auth → OIDC provider)
                ├── RunManager (singleton)
                │   ├── WebOrchestrator (extends SDLCOrchestrator)
                │   │   ├── EventEmitter → SSE stream
                │   │   ├── Promise-based userCheckIn (replaces readline)
                │   │   └── Console capture (CLI output → SSE events)
                │   ├── RunArchive (event sourcing → filesystem)
                │   └── Event broadcast → SSE subscribers
                ├── Providers (anthropic, openai, google, deepseek)
                └── Static output serving (/output/*)

SSE Event Streaming

WebOrchestrator subclasses SDLCOrchestrator and overrides each phase method to emit typed events (phase:start, phase:complete, developer:complete, qa:complete, score:update, etc.) before/after calling super. A useReducer hook on the client accumulates events into a full RunState.

Event Sourcing & Run History

Every run event is stored. Mid-run disconnects replay seamlessly on reconnect. Completed runs are archived to disk with full event history, enabling exact replay through the same dashboard UI. A home dashboard shows aggregate stats (total runs, success rate, average score, total cost).
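The replay property falls out of the reducer design: client state is a pure fold over the event list, so live SSE events and archived events go through the same code path. A sketch (the event and state shapes are assumptions for illustration):

```javascript
// Sketch: RunState as a pure fold over the event stream. Because the
// reducer is pure, replaying an archived event list reproduces exactly
// the state a live run would have built. Shapes are assumptions.
function runReducer(state, event) {
  switch (event.type) {
    case "phase:start":
      return { ...state, phase: event.phase };
    case "score:update":
      return { ...state, score: event.score };
    case "run:complete":
      return { ...state, done: true };
    default:
      return state; // unknown events leave state untouched
  }
}

const initialState = { phase: null, score: null, done: false };

// Live streaming and archived replay share this one code path:
const replay = (events) => events.reduce(runReducer, initialState);
```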

OIDC + User Lockdown

next-auth v5 with a generic OIDC provider (Rdn.Identity, Auth0, Keycloak, etc.). Middleware protects all routes. ALLOWED_USER env var restricts login to a single username, preventing unauthorized LLM API key usage.

Single Process, No Rewrite

The Node.js orchestrator JS files are copied directly into the Next.js project and imported as-is. WebOrchestrator wraps them with SSE event emission. No separate API server, no IPC — the orchestrator runs inside the same Next.js Node.js process.

Visual & Interactive QA

The AppCap module enables three QA testing modes, configurable per project. Each mode builds on the previous.

Code Mode

Default

  • Traditional code review only
  • No browser automation
  • Fastest execution

Visual Mode

Screenshots

  • Code review + screenshots
  • Desktop (1280×720) capture
  • Mobile (375×667) capture

Interactive Mode

Full Testing

  • Click, type, navigate actions
  • Element inspection tools
  • Console log checking

// Interactive QA tools available to agents (16 tools):

// Navigation
navigate(url)                   // Navigate to URL
screenshot(name?)               // Capture current page
get_page_info()                 // Get URL, title, viewport

// Element Interaction
click(selector)                 // Click element
type(selector, text)            // Type into input
select(selector, value)         // Select dropdown option
hover(selector)                 // Hover over element

// Element Inspection
get_text(selector)              // Get element text
get_value(selector)             // Get input value
get_attribute(selector, attr)   // Get element attribute
is_visible(selector)            // Check element visibility
count_elements(selector)        // Count matching elements

// Page Analysis
get_console_logs()              // Get browser console logs
evaluate(script)                // Run JavaScript in page
wait_for(selector, timeout?)    // Wait for element

// Test Reporting
report_test(name, passed, details?) // Record test result

Configuration (swarm-config.json)

// Per-role provider, model, and orchestration configuration
{
  "roles": {
    "analyst": {
      "provider": "openai",
      "model": "gpt-4o-mini",
      "maxTokens": 4000,
      "description": "Analyzes requirements and creates structured plans with acceptance criteria"
    },
    "projectManager": {
      "provider": "deepseek",
      "model": "deepseek-reasoner",
      "maxTokens": 8000,
      "description": "Coordinates work distribution and assigns tasks to developers and QA"
    },
    "developer": {
      "provider": "anthropic",
      "model": "claude-3-haiku-20240307",
      "maxTokens": 4000,
      "description": "Implements code based on assigned tasks"
    },
    "integrationArchitect": {
      "provider": "deepseek",
      "model": "deepseek-chat",
      "maxTokens": 8000,
      "description": "Architects cross-file consistency (CSS/HTML selectors, JS/HTML element IDs, imports/exports)"
    },
    "qa": {
      "provider": "anthropic",
      "model": "claude-haiku-4-5-20251001",
      "maxTokens": 16000,
      "description": "Independently reviews ALL files for quality, bugs, and adherence to requirements"
    },
    "feedbackCoordinator": {
      "provider": "openai",
      "model": "gpt-4o-mini",
      "maxTokens": 4000,
      "description": "Aggregates and deduplicates QA feedback, calculates completion scores"
    },
    "summaryGenerator": {
      "provider": "google",
      "model": "gemini-2.5-flash",
      "maxTokens": 4000,
      "description": "Generates final user-facing summaries of completed work"
    }
  },
  "orchestration": {
    "maxDevelopers": 5,
    "maxQAAgents": 3,
    "maxIterations": 10,
    "maxCost": 2.0,
    "completionThreshold": 0.8,
    "checkInFrequency": 1,
    "autoApproveUnderCost": null,
    "outputDir": "./outputs/code",
    "docsDir": "./outputs/docs",
    "logsDir": "./outputs/logs",
    "stateFile": "./outputs/shared_state.json",
    "requirementsPath": "./inputs/requirements.md"
  },
  "scoring": {
    "weights": {
      "criticalIssues": 0.5,
      "majorIssues": 0.2,
      "minorIssues": 0.1,
      "acceptanceCriteria": 0.2
    },
    "penalties": {
      "criticalIssue": 1.0,
      "majorIssue": 0.25,
      "minorIssue": 0.1
    }
  },
  "appCap": {
    "enabled": true,
    "mode": "interactive",
    "viewports": [
      { "name": "desktop", "width": 1280, "height": 720 },
      { "name": "mobile", "width": 375, "height": 667 }
    ],
    "waitStrategy": "networkidle",
    "waitTimeoutMs": 5000,
    "interactiveTimeoutMs": 30000,
    "persistToDisk": true
  },
  "logging": {
    "level": "info",
    "verbose": true
  }
}

Quick Start

1. Clone Repo
   git clone https://github.com/jreidell/rdn-swarm

2. Set API Keys
   export ANTHROPIC_API_KEY=...

3. Edit Requirements
   vim inputs/requirements.md

4. Run SDLC
   node swarm.js --mode sdlc