22 specialised AI agents, each with a defined role, working in parallel — coordinated by a single orchestrator, governed by 8 safety rules.
Every agent follows the same lifecycle: spawn, read context, check prior knowledge, execute, submit to a quality gate, and store what it learned.
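The six-step lifecycle above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual implementation: the `Agent` and `MemoryStore` classes, their method names, and the artifact shape are all assumptions introduced for the example.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical shared memory: lessons keyed by task."""
    entries: dict = field(default_factory=dict)

    def lookup(self, task: str):
        return self.entries.get(task)

    def store(self, task: str, lesson: str):
        self.entries[task] = lesson

class Agent:
    """Sketch of the lifecycle: spawn, read context, check prior
    knowledge, execute, pass a quality gate, store what was learned."""

    def __init__(self, role: str, memory: MemoryStore):
        self.role = role            # spawn with a defined role
        self.memory = memory        # access to shared memory

    def run(self, context: dict, task: str) -> dict:
        prior = self.memory.lookup(task)              # check prior knowledge
        artifact = self.execute(context, task, prior)  # execute the scoped task
        if not self.quality_gate(artifact):            # submit to quality gate
            raise ValueError(f"{self.role}: artifact rejected by quality gate")
        self.memory.store(task, artifact["lesson"])    # store what it learned
        return artifact

    def execute(self, context: dict, task: str, prior) -> dict:
        # placeholder for real agent work
        return {"task": task, "result": "done", "lesson": f"completed {task}"}

    def quality_gate(self, artifact: dict) -> bool:
        return artifact.get("result") == "done"

memory = MemoryStore()
artifact = Agent("browser-scraper", memory).run({}, "scan-site")
```

After `run()` returns, the lesson is visible to every other agent that shares the same `MemoryStore`, which is how later phases build on earlier ones in this sketch.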
Each agent is spawned with a precise context package, a scoped task brief, and access to shared memory. Here is the full team that worked on the Algorithmix project.
Central coordinator that spawns all other agents, manages phase transitions, routes tasks, and enforces the 12-phase BACON methodology.
Chrome browser agents crawling algorithmix.com in parallel. Each instance targets a different section of the site, extracting content, metadata, and structure.
Conducts competitive benchmarking, market research, and reference analysis. Multiple instances research different competitor domains simultaneously.
Performs holistic analysis of the entire system, identifies interconnections, builds system dynamics models, and maps feedback loops between components.
Applies de Bono's Six Thinking Hats methodology to examine every recommendation from six perspectives: facts, emotions, risks, benefits, creativity, and process.
Applies TRIZ contradiction resolution to identify inventive solutions. Resolves trade-offs like "professional appearance vs. personal warmth" using 40 inventive principles.
Creates detailed buyer personas with demographic data, pain points, goals, and complete customer journey maps from awareness through advocacy.
Writes German and English copy including headlines, body text, CTAs, meta descriptions, and microcopy. Ensures consistent tone of voice across all 23 pages.
Defines brand positioning using the AIDA framework (Attention, Interest, Desire, Action). Creates messaging hierarchy, value propositions, and differentiation strategy.
Implements the redesigned site using Astro and Tailwind CSS. Builds responsive layouts, component library, and page templates with performance-first architecture.
Executes the full V-model testing pyramid: Technical Unit Tests, Functional Unit Tests, System Integration Tests, and Regression Tests. Validates every agent's output.
Writes the final report, creates Mermaid diagrams, generates PDF exports, and maintains version history. Iterated from v1 through v13 based on gate feedback.
Monitors the entire process for methodology adherence. Runs SSC (Start-Stop-Continue) retrospectives and ensures NPSL governance rules are followed throughout.
Elisabeth (British) and Finn (Norwegian) provide real-time audio status updates via text-to-speech during execution. The orchestrator literally talks to the operator.
Traditional agency: 1-2 analysts working sequentially. BACON-AI: up to 14 agents working in parallel, each producing artifacts that feed into the next phase.
Key Insight
The SE-Agent Observer and Voice Announcer run continuously across all phases — they are "always-on" agents that monitor and narrate the entire lifecycle. Meanwhile, discovery agents (scrapers, researchers) run in parallel batches, and their outputs feed directly into the analysis phase without any handoff meetings.
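The phase pattern described above, an always-on observer running alongside a parallel batch of discovery agents, maps naturally onto `asyncio`. The sketch below is an assumption-laden illustration: the `scrape` and `observer` coroutines and the section names are invented for the example, not taken from the system.

```python
import asyncio

async def scrape(section: str) -> dict:
    """Stand-in for one browser-scraper instance crawling one site section."""
    await asyncio.sleep(0)  # placeholder for real crawling work
    return {"section": section, "pages": [f"{section}/index"]}

async def observer(log: list, stop: asyncio.Event):
    """Always-on agent: keeps monitoring until the phase signals completion."""
    while not stop.is_set():
        log.append("observer: monitoring")
        await asyncio.sleep(0)

async def discovery_phase(sections: list) -> list:
    stop = asyncio.Event()
    log: list = []
    watch = asyncio.create_task(observer(log, stop))
    # discovery agents run as one parallel batch
    results = await asyncio.gather(*(scrape(s) for s in sections))
    stop.set()
    await watch
    return results  # outputs feed directly into the analysis phase

results = asyncio.run(discovery_phase(["home", "products", "blog"]))
```

The key design point is that `gather` returns the whole batch at once, so the analysis phase receives every scraper's output together, with no per-agent handoff.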
Agents don't just produce output — they review each other's work. Every artifact passes through at least one peer review before reaching the quality gate.
Research Analysts cross-check Browser Scraper outputs against live site data, catching extraction errors, missing pages, and stale content before analysis begins.
Quality Assurance agent runs the full V-model test pyramid on every Frontend Developer output. Failed tests trigger automatic correction loops.
The SE-Agent Observer monitors the Documentation Manager for methodology adherence, ensuring the report accurately reflects what was actually done.
Systems Architect reviews Innovation Engineer proposals to ensure TRIZ-derived solutions are technically feasible within the existing system constraints.
The v5 report claimed "all tests passed" — governance rule SA-001 (Anti-Optimism Bias) automatically flagged this as a premature success declaration. The subsequent audit found 14 factual issues that were corrected across versions v6 through v13.
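A check in the spirit of SA-001 can be sketched as a simple lint over report drafts. Everything below is hypothetical: the claim patterns, the `evidence` dictionary, and the function name are illustrative stand-ins for the real governance rule engine.

```python
import re

# Illustrative absolute-success phrases that should require evidence.
SUCCESS_CLAIMS = [
    r"\ball tests passed\b",
    r"\bno issues found\b",
    r"\b100% complete\b",
]

def flag_premature_success(report_text: str, evidence: dict) -> list:
    """Flag unbacked success claims, in the spirit of rule SA-001."""
    findings = []
    for pattern in SUCCESS_CLAIMS:
        if re.search(pattern, report_text, re.IGNORECASE):
            if not evidence.get("full_test_log_attached"):
                findings.append(f"SA-001: unverified claim matching {pattern!r}")
    return findings

# A draft claiming success with no attached test log gets flagged.
flags = flag_premature_success("Summary: all tests passed.", evidence={})
```

A non-empty `flags` list would block the draft at the quality gate and trigger the kind of audit that produced the v6-v13 corrections.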
Not a gimmick — the orchestrator literally talks to the operator while working. Status announcements via text-to-speech keep the human in the loop without requiring screen-watching.
en-GB-SoniaNeural · British English
Pre-action announcements. Before every significant operation, Elisabeth tells the operator what is about to happen and why.
"Hello Colin, I'm about to deploy the redesigned site to staging. This will overwrite the current preview with the v13 build including all 14 corrected findings."
nb-NO-FinnNeural · Norwegian
Task completion confirmations. After each significant milestone, Finn confirms success in Norwegian — unmistakable audio feedback.
"Hei Colin! Alle 121 sider er skannet og analysert. Rapport v13 er klar for gjennomgang!" ("Hi Colin! All 121 pages have been scanned and analysed. Report v13 is ready for review!")
{
"voice_team": {
"elisabeth": {
"engine": "edge-tts",
"voice_id": "en-GB-SoniaNeural",
"role": "pre-action-announcements",
"trigger": "BEFORE every tool use",
"template": "Hello Colin, I'm about to {ACTION}. This will {DESCRIPTION}."
},
"finn": {
"engine": "edge-tts",
"voice_id": "nb-NO-FinnNeural",
"role": "task-completion-confirmations",
"trigger": "AFTER significant completions",
"template": "Hei Colin! {TASK} completed successfully!"
}
},
"enforcement": "mandatory",
"fallback": "never-skip — log if TTS unavailable"
}
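One way to turn the config above into spoken-ready text is to fill the `{ACTION}`-style placeholders before handing the string to the TTS engine. The `render_announcement` helper below is an assumption, a minimal sketch of how those templates might be rendered, not the system's actual code.

```python
import json

# A trimmed copy of the voice-team config shown above.
CONFIG = """
{
  "voice_team": {
    "elisabeth": {
      "template": "Hello Colin, I'm about to {ACTION}. This will {DESCRIPTION}."
    },
    "finn": {
      "template": "Hei Colin! {TASK} completed successfully!"
    }
  }
}
"""

def render_announcement(template: str, **slots: str) -> str:
    """Fill {UPPERCASE} placeholders from keyword arguments (hypothetical helper)."""
    text = template
    for key, value in slots.items():
        text = text.replace("{" + key.upper() + "}", value)
    return text

cfg = json.loads(CONFIG)
msg = render_announcement(
    cfg["voice_team"]["elisabeth"]["template"],
    action="deploy the redesigned site to staging",
    description="overwrite the current preview with the v13 build",
)
```

The rendered `msg` would then be passed to the TTS engine (edge-tts, per the config) with the matching `voice_id` to produce the audio announcement.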
Total agent activity across the Algorithmix engagement