SDLC Orchestration

AI-Driven Software Factory

A multi-agent SDLC orchestrator that coordinates Analyst, Project Manager, Developer, and QA agents across multiple LLM providers to iteratively build and refine complete applications. Five feature-parity implementations, plus a Next.js web frontend with a real-time SSE dashboard, run history with replay, and OIDC auth via Rdn.Identity.

swarm --mode sdlc

PHASE 1: ANALYST
  Analysis complete | Components: 4

PHASE 2: PROJECT MANAGER
  Work plan created | Developers: 4

PHASE 3: DEVELOPERS
  Dev 0: index.html
  Dev 1: style.css
  Dev 2: calculator.js
  Dev 3: script.js

PHASE 4: INTEGRATION ARCHITECT
  2 integration issues fixed

PHASE 5: QA REVIEW
  13 issues | Critical: 4 | Major: 2

Completion Score: 64.8%
Cost: $0.14 (anthropic | openai | deepseek | google)

7 Phases: Full SDLC
4 Providers: Claude + GPT + Gemini + DeepSeek

7-Phase SDLC Pipeline

A complete software development lifecycle with specialized agents for analysis, planning, development, integration, QA, and documentation, iterating until quality thresholds are met.

Multi-Provider Support

Mix Anthropic Claude, OpenAI GPT, Google Gemini, and DeepSeek models per role. Extensible architecture makes adding new providers straightforward.
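One way to picture the per-role provider mix is a role-to-provider registry; the mapping below is a minimal sketch, and the role names, model identifiers, and `ProviderConfig` type are all illustrative assumptions rather than the project's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderConfig:
    provider: str   # e.g. "anthropic", "openai", "google", "deepseek"
    model: str

# Hypothetical role-to-provider mapping: each SDLC role can use a
# different vendor/model. Adding a new provider is a matter of adding
# entries here plus a client adapter for its API.
ROLE_PROVIDERS = {
    "analyst":   ProviderConfig("anthropic", "claude-sonnet"),
    "pm":        ProviderConfig("openai",    "gpt-4o"),
    "developer": ProviderConfig("deepseek",  "deepseek-chat"),
    "qa":        ProviderConfig("google",    "gemini-pro"),
}

def provider_for(role: str) -> ProviderConfig:
    return ROLE_PROVIDERS[role]
```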

Quality-Gated Iterations

Automatic re-iteration until completion threshold is met. Configurable scoring weights for critical, major, and minor issues.
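The re-iteration behavior can be sketched as a loop that re-runs the pipeline until the score clears the threshold or the iteration budget runs out; `run_iteration` here is a stand-in for one full SDLC pass, not the orchestrator's real entry point:

```python
def run_until_quality(run_iteration, threshold=0.8, max_iterations=5):
    """Hypothetical refinement loop: re-run the pipeline until the
    completion score clears the threshold or iterations run out."""
    score, history = 0.0, []
    for i in range(max_iterations):
        score = run_iteration(i)   # one full SDLC pass -> score in [0, 1]
        history.append(score)
        if score >= threshold:
            break                  # quality gate met, stop early
    return score, history
```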

Per-Provider Cost Tracking

Real-time token usage and cost breakdown by provider and role. Set max cost limits to control spending across iterations.

5 Implementations, One Pipeline

Node.js

v24+ with ES Modules

  • Promise.all/race for concurrency
  • @anthropic-ai/sdk integration
  • Native async/await patterns

Python

3.12+ with asyncio

  • asyncio.Semaphore for limits
  • Anthropic async client
  • asyncio.gather for parallelism
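The Python card's concurrency pattern, a `Semaphore` capping in-flight agents while `gather` runs them in parallel, looks roughly like this; `generate_file` is a placeholder for a real async LLM call, not the project's actual function:

```python
import asyncio

async def run_developers(tasks, max_concurrent=4):
    """Sketch of the pattern above: a Semaphore limits how many
    developer agents talk to the LLM at once; gather runs them all."""
    sem = asyncio.Semaphore(max_concurrent)

    async def generate_file(name):
        async with sem:                 # at most max_concurrent inside
            await asyncio.sleep(0)      # placeholder for an async LLM request
            return f"{name}: done"

    return await asyncio.gather(*(generate_file(t) for t in tasks))

results = asyncio.run(run_developers(["index.html", "style.css"]))
```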

Go

1.25+ with chromedp

  • Goroutines + WaitGroup
  • chromedp for browser automation
  • Channel-based semaphore

.NET

.NET 10 with C#

  • Task.WhenAll for parallelism
  • SemaphoreSlim for limits
  • Native HttpClient API

Next.js

Next.js 16 + React 19

  • SSE real-time dashboard
  • OIDC auth via Rdn.Identity
  • Docker-ready deployment

Per-Provider Cost Breakdown

Track token usage and costs across all providers. See exactly where your budget is going with detailed breakdowns by role and provider.

💰 COST BY PROVIDER:
--------------------------------------------------
Provider        Calls       Tokens          Cost
--------------------------------------------------
anthropic          12       44,405   $ 0.0678  (54%)
openai              6       28,100   $ 0.0342  (27%)
deepseek            4       18,200   $ 0.0126  (10%)
google              4       15,670   $ 0.0099   (8%)
--------------------------------------------------
TOTAL              26      106,375   $ 0.1245
--------------------------------------------------

ANTHROPIC BREAKDOWN:
  API calls:     12
  Input tokens:  37,315
  Output tokens: 7,090
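An accumulator behind a breakdown like the one above could be as simple as the sketch below; the class name and field layout are assumptions, not the orchestrator's real tracker:

```python
from collections import defaultdict

class CostTracker:
    """Hypothetical per-provider accumulator: record each call's
    tokens and cost, then summarize across providers."""
    def __init__(self):
        self.stats = defaultdict(lambda: {"calls": 0, "tokens": 0, "cost": 0.0})

    def record(self, provider, input_tokens, output_tokens, cost):
        s = self.stats[provider]
        s["calls"] += 1
        s["tokens"] += input_tokens + output_tokens
        s["cost"] += cost

    def total_cost(self):
        return sum(s["cost"] for s in self.stats.values())
```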

Completion Score Calculation

Weighted scoring algorithm: critical issues (50%), major (20%), minor (10%), and acceptance criteria (20%). Score capping prevents artificially high scores when critical issues remain.
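A sketch of that weighted formula is below. The 50/20/10/20 weights match the split stated above, but the per-issue penalty curve and the 70% cap value are illustrative assumptions:

```python
def completion_score(critical, major, minor, acceptance_met, acceptance_total):
    """Weighted completion score: fewer issues and more acceptance
    criteria met push the score toward 100. Penalty tolerances and the
    cap value are assumptions, not the project's exact constants."""
    def issue_score(count, tolerance):
        return max(0.0, 1.0 - count / tolerance)   # 0 issues -> 1.0

    score = (0.5 * issue_score(critical, 4)        # critical: 50%
             + 0.2 * issue_score(major, 6)         # major:    20%
             + 0.1 * issue_score(minor, 10)        # minor:    10%
             + 0.2 * (acceptance_met / acceptance_total))  # criteria: 20%
    if critical > 0:
        score = min(score, 0.7)   # cap: critical issues block high scores
    return round(score * 100, 1)
```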

Cost Limiting

Set maximum cost thresholds with --max-cost. Orchestrator stops automatically when budget is reached.
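A `--max-cost` guard (the flag name is from the docs; the exception type and function are assumptions) might be checked before every LLM call, roughly like this:

```python
class BudgetExceeded(Exception):
    """Hypothetical signal that the --max-cost budget has been hit."""

def check_budget(spent, max_cost):
    """Abort the run once accumulated spend reaches the budget.
    A max_cost of None means no limit was set."""
    if max_cost is not None and spent >= max_cost:
        raise BudgetExceeded(f"spent ${spent:.4f} of ${max_cost:.2f} budget")
```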

User Check-In Points

Interactive prompts between iterations showing score, issues, and cost. Accept results early or continue iterating.

7 SDLC Phases

Each phase uses specialized agents with role-specific prompts and configurable LLM providers. The pipeline iterates until quality thresholds are met.

Analyst: Requirements decomposition
PM: Work assignment
Developers: Parallel code generation
Architect: Static analysis
QA Review: Issue detection
Feedback: Score calculation
Summary: Documentation

Explore the Architecture

Dive into the technical details of multi-agent orchestration, concurrency patterns, and production-ready coordination mechanisms.