AI Implementation Portfolio

Rachel Laurie
AI Implementation Specialist

Real-world builder, not just a trainer. I have architected and deployed production AI systems — from multi-agent pipelines and knowledge graphs to automated data extraction and intelligent triage workflows.

Available for contract roles · Melbourne (remote-first)

1. Executive Summary

What I actually do

I bridge the gap between AI capability and real business implementation. My work is hands-on — I design, build, and operate multi-agent systems, automated data pipelines, and intelligent workflows, then turn those into reusable modules that can be deployed for clients through my consulting practice, Leoma.

I work across the full stack: from prompt engineering and agent orchestration to database design, serverless Python, and Cloudflare infrastructure. I document every significant system decision as an Architecture Decision Record (ADR) — 87 decisions made, tracked, and reasoned through.

87 Architecture Decisions · 24 Active Projects · 6 Automated Pipelines · 5 AI Agents · 478 Scored Innovation Modules · 84 Product Offerings

What makes this different

Production, not prototype

Every pipeline listed here runs live. Jobs are scraped daily. Newsletters are auto-extracted. Agents dispatch real work.

Documented decisions

87 ADRs covering tool selection, architecture tradeoffs, and rejected alternatives — with explicit reasoning on each.

Multi-model orchestration

Claude (Sonnet/Opus/Haiku), Gemini, Ollama local, and Workers AI — routed by task complexity and cost.

Values-led

Composable Ethics Stack underpins all client work. AI as augmentation, not replacement — especially for nonprofits and marginalised communities.
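The multi-model routing described above can be sketched as a simple policy function. This is an illustrative sketch only: the model names, task types, and token threshold are assumptions standing in for the real routing table.

```python
# Illustrative routing policy: privacy first, then complexity, then cost.
# Names and thresholds are assumptions, not the production configuration.

def route_model(task_type: str, est_tokens: int, sensitive: bool = False) -> str:
    """Pick a model by privacy requirement, task complexity, and cost."""
    if sensitive:
        return "ollama/llama3.2"            # local inference; data stays on the machine
    if task_type in {"entity-extraction", "classification"}:
        return "workers-ai"                  # cheap serverless extraction at the edge
    if task_type in {"architecture", "debugging"} or est_tokens > 50_000:
        return "claude-opus"                 # hardest problems justify the cost
    if task_type in {"report", "query"}:
        return "claude-haiku"                # fast and cheap for data work
    return "claude-sonnet"                   # default implementation model
```

The point of a policy function like this is that cost and privacy rules live in one auditable place rather than being scattered across pipelines.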

2. AI Tools & Stack

Tool / Service | How I use it | Status

Core AI / LLM
Claude Code (Sonnet 4.6 / Haiku 4.5 / Opus 4.6) | Primary development environment. Multi-agent crew (Dotti/Finn/Sage). Custom skills via slash commands. | Production
Claude Desktop | Interactive business analyst (Nova). Strategy, specs, MCP tool orchestration. | Daily use
Gemini 2.5 Flash | Gmail bill extraction pipeline. Android daily ops (voice, calendar, email). | Production
LiteLLM | OpenAI-compatible proxy in Docker stack. Routes model calls, tracks spend, load-balances providers. | Running
Ollama (llama3.2) | Local inference for privacy-sensitive tasks, offline use, Zotero reference management. | Running
Cloudflare Workers AI | Serverless entity extraction in Pavilion Factory newsletter pipeline. | Production
OpenRouter | Multi-model access and evaluation harness. | Available

Agent Frameworks & Orchestration
Pydantic AI 1.25 | All structured LLM output in Modal/Python pipelines. Enforced by ADR-068 — no raw json.loads(). | Standard
n8n | Workflow automation. Free Agent Dispatcher, pipeline error handling, job alert emails, newsletter ingestion. | Production
Modal | Serverless Python. ETL pipelines, context API (SSE), job scraper, Chrome extension backend. | Production
Windmill | Scheduled cron scripts (ADR-076). Complements n8n for scheduled/batch work. | Active

Data & Storage
Neo4j | Knowledge graph. Contacts, decisions, content units, Pavilion Factory output. Agent memory backend. | Core
PostgreSQL (local Docker) | Primary operational data. Jobs (598), tools registry, resume modules (233), events, organisations. | Core
Neon | Cloud Postgres. Innovation Portfolio (478 modules), tools registry cloud sync. | Active
DuckDB / MotherDuck | Analytics and ad-hoc queries over large datasets. Cloud analytics layer. | Active
Cloudflare R2 | Object storage for archives, assets, and backups. liflode-archives, liflode-backups, liflode-assets buckets. | Production

Automation & Infrastructure
Cloudflare | DNS, Tunnel (liflode-dispatch), Workers AI, R2, Email Routing, Pages deployments. | Core
Traefik | Local reverse proxy. Routes *.localhost to Docker services without port numbers. | Core
Docker Compose | Core local stack: Neo4j, Postgres, n8n, LiteLLM, Traefik, Ollama, NeoDash. | Core
Infisical (EU) | Secrets management for all services and scripts. Profile-based secret injection. | Core
GitHub | Source control. gh CLI for automation. Repos: liflode-scripts, liflode-docs, chrome-extensions. | Active

Business / PM
Fibery | Business PM layer. Initiative → Project → Task. Strategy Lab scoring. Module/Product pipeline. 5 spaces. | Daily use
Linear | Tech/crew issue tracking. Crew work routing. API via curl + GraphQL. | Daily use
Xero | Accounting (Grow plan). OAuth integration. Bill extraction pipeline target. | Integrating
Timely | Passive time tracking via desktop + mobile apps. | Active

Capture & Chrome Extensions
Event Saver (orange) | Captures events from any page → Modal → events_registry. Deduplication + contact linking. | Production
Job Saver (green) | Captures job ads from any page → Modal → jobs table in Postgres. | Production
Contact Saver (blue) | Captures organisation profiles → Postgres organisations_registry + optional Fibery elevation. | Production
Bookmark Classifier | 80,030 raw bookmark URLs in pipeline for classification → registry tables. | In progress

Data Visualisation
Manim | Mathematical animation for educational AI content. Preferred over Motion Canvas (ADR-010). | Active
Flourish | Interactive data stories. Client-facing visualisations. | Active
Kumu | Systems thinking maps. Stakeholder networks, ecosystem mapping. | Active
NeoDash | Neo4j graph dashboards. Live queries, no-code charting over knowledge graph data. | Active
Grafana | Operational metrics dashboards over Postgres data. | Available

Document Processing
Docling | PDF parsing with layout understanding. Document ingestion pipelines. (ADR-053) | Active
Unstructured | Word/PPT/HTML extraction for broad document types. (ADR-054) | Active
Zotero | Reference management. SQLite-based. Ollama-powered metadata enrichment. (ADR-021) | Active
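Because LiteLLM exposes an OpenAI-compatible endpoint, any OpenAI-style client can reach the whole model stack through one URL. A minimal stdlib sketch of what such a request looks like; the proxy URL and virtual key are placeholders:

```python
import json
import urllib.request

LITELLM_BASE = "http://litellm.localhost/v1"   # assumed proxy URL behind Traefik

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request for the LiteLLM proxy."""
    body = json.dumps({
        "model": model,   # LiteLLM maps this alias to the real provider
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{LITELLM_BASE}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer sk-litellm-key",   # placeholder virtual key
        },
    )

if __name__ == "__main__":
    req = chat_request("claude-haiku", "Summarise today's job alerts.")
    # urllib.request.urlopen(req) would send it; the proxy logs spend per call
```

Routing every call through one proxy is what makes per-model spend tracking and provider load-balancing possible without touching pipeline code.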
3. Automated Pipelines

1. Pavilion Factory — Newsletter & Bookmark Ingestion

Newsletters arrive via email/RSS and are automatically ingested by n8n. A Cloudflare Workers AI call extracts named entities, topics, and key insights from each article. The structured output is written into Neo4j as a knowledge graph — creating connections between people, organisations, tools, and concepts — while the raw content is archived to Cloudflare R2. This means every newsletter I've ever read is now queryable by topic, entity, or relationship.
flowchart LR
  A([Email / RSS]) --> B[n8n Trigger]
  B --> C{Workers AI\nEntity Extraction}
  C --> D[(Neo4j\nKnowledge Graph)]
  C --> E[(R2 Archive\nliflode-archives)]
  D --> F([Queryable\nby topic/entity])
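The graph-write step can be sketched as a function that turns extracted entities into Cypher MERGE statements. The Article/MENTIONS schema and property names here are assumptions for illustration, not the production Neo4j model:

```python
# Illustrative graph-write step: extracted entities become Cypher MERGEs,
# so re-ingesting the same newsletter never creates duplicate nodes.

def entities_to_cypher(article_id: str, entities: list[dict]) -> list[str]:
    """One MERGE for the article, then one per entity plus its MENTIONS edge."""
    stmts = [f"MERGE (a:Article {{id: '{article_id}'}})"]
    for e in entities:
        label = e["type"].capitalize()   # person -> Person, tool -> Tool, etc.
        stmts.append(
            f"MATCH (a:Article {{id: '{article_id}'}}) "
            f"MERGE (e:{label} {{name: '{e['name']}'}}) "
            f"MERGE (a)-[:MENTIONS]->(e)"
        )
    return stmts
```

In practice the statements would be run parameterised through a Neo4j driver session; string interpolation is shown only to keep the shape visible.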
2. Job Search Pipeline — Scrape to Application

Job opportunities are captured two ways: a Modal-hosted scraper polls SEEK and LinkedIn daily, while a Chrome extension lets me save jobs from any board with one click. Both routes land in the same Postgres jobs table (598 records). I triage in NocoDB, flag interesting roles, and the elevation script pushes apply=true jobs to Fibery for managed application tracking. Claude then generates tailored resumes and cover letters by pulling from 233 resume modules stored in Postgres.
flowchart TB
  A([Job Boards\nSEEK / LinkedIn]) --> B[Modal Scraper]
  A2([Any Job Page]) --> B2[Chrome Extension\nJob Saver]
  B --> C[(Postgres\njobs table)]
  B2 --> C
  C --> D[NocoDB Triage]
  D -->|apply=true| E[elevate_to_fibery.py]
  E --> F[(Fibery Job Ad\nManaged Application)]
  F --> G{Claude Code}
  G --> H([Tailored Resume])
  G --> I([Cover Letter])
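The elevation step is essentially a filter over triaged rows plus a payload shape. A minimal sketch, where the Fibery type and field names are assumptions rather than the real schema:

```python
# Sketch of the elevation step: rows flagged apply=true in triage become
# Fibery entity payloads. Type and field names are illustrative.

def elevate_payloads(rows: list[dict]) -> list[dict]:
    """Filter triaged job rows and shape them for the Fibery API."""
    return [
        {
            "type": "Jobs/Job Ad",          # assumed Fibery entity type
            "fields": {
                "Name": r["title"],
                "Company": r["company"],
                "Source URL": r["url"],
            },
        }
        for r in rows
        if r.get("apply")                    # only roles flagged during triage
    ]
```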
3. Chrome Extension Capture Suite — 5-Extension Data Layer

A suite of five Chrome extensions provides a one-click capture layer for different data types. All extensions share a single Modal serverless backend (the event-saver endpoint, routed by type field), so there's no per-extension server to maintain. Events, jobs, and organisations land in local Postgres; select records are optionally elevated to Fibery Contacts for active relationship management. The shared access key (evt-8d1982eb) authenticates all extensions without per-extension credentials.
flowchart LR
  A([Event Saver\n🟠]) --> M
  B([Job Saver\n🟢]) --> M
  C([Contact Saver\n🔵]) --> M
  D([Bookmark Classifier]) --> M
  E([Future Extensions]) --> M
  M[Modal event-saver\nServerless Router] --> P[(Postgres / Neon\nLocal + Cloud)]
  M -->|apply=true\nor VIP contact| F[(Fibery\nElevation)]
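The shared backend's routing logic, shown here as a plain Python handler so the control flow is visible; in production it sits behind a Modal web endpoint, and the table names are assumptions:

```python
# Plain-function sketch of the shared capture router. One endpoint, one key,
# routed by the payload's type field. Table names are assumptions.

CAPTURE_TABLES = {
    "event": "events_registry",
    "job": "jobs",
    "organisation": "organisations_registry",
    "bookmark": "bookmarks_inbox",   # assumed staging table for the classifier
}

def route_capture(payload: dict, access_key: str) -> dict:
    """Authenticate with the shared key, then route by the payload's type."""
    if access_key != "evt-8d1982eb":
        return {"ok": False, "error": "unauthorised"}
    table = CAPTURE_TABLES.get(payload.get("type", ""))
    if table is None:
        return {"ok": False, "error": f"unknown type: {payload.get('type')!r}"}
    # insert_row(table, payload)   # <- the Postgres/Neon write would happen here
    return {"ok": True, "table": table}
```

Keeping all five extensions behind one router means adding a sixth capture type is a dictionary entry, not a new deployment.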
4. Innovation Portfolio — Idea to Product Build

Raw ideas are captured using the /idea slash command in Claude Code. Each idea is scored against a multi-dimensional framework (market fit, effort, strategic alignment) and stored as a Module in Fibery, Neo4j, and Postgres simultaneously. When modules are mature enough, they're grouped into a Product using the /product command, which generates a value statement and one-pager. The product then spawns a Linear project with individual module-level build issues, which are routed to crew agents for implementation.
flowchart LR
  A([Raw Idea]) --> B{/idea skill\nClaude Code}
  B --> C[Scoring\nMulti-dimensional]
  C --> D[(Module\nFibery + Neo4j\n+ Postgres)]
  D -->|grouped| E{/product skill}
  E --> F[(Strategy Lab\nProduct)]
  F --> G[Linear Project\n+ Module Issues]
  G --> H([Crew Build\nDotti / Finn / Sage])
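The scoring step can be sketched as a weighted sum. The dimensions come from the description above, but the weights and the inversion rule are illustrative, not the production framework:

```python
# Illustrative module scoring. Weights are assumptions; the key idea is that
# effort is inverted, so low-effort ideas score higher, all else being equal.

WEIGHTS = {"market_fit": 0.4, "strategic_alignment": 0.4, "effort": 0.2}

def score_module(scores: dict[str, float]) -> float:
    """Weighted 0-10 score over the framework's dimensions."""
    adjusted = dict(scores)
    adjusted["effort"] = 10 - adjusted["effort"]   # cheap ideas rank higher
    return round(sum(WEIGHTS[k] * adjusted[k] for k in WEIGHTS), 2)
```

With 478 modules in the portfolio, a deterministic score like this is what makes ranking and grouping into Products tractable.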
5. Free Agent Dispatcher — Multi-Agent Crew Orchestration

I work with Nova (Claude Desktop, interactive) to translate ideas into specifications. Nova creates Linear issues with a crew label (dotti, finn, or sage). An n8n workflow polls Linear and dispatches each issue to the appropriate headless Claude Code agent — Dotti (Haiku) for data queries and reports, Finn (Sonnet) for implementation, Sage (Opus) for architecture and complex debugging. Outputs are committed to GitHub, written to Postgres/Neo4j, or returned via the task system. Every run is logged to the claude_task_agents table.
flowchart TD
  A([Rachel]) --> B[Nova\nClaude Desktop\nBusiness Analyst]
  B --> C[(Linear Issue\nw/ crew label)]
  C --> D{n8n Dispatcher\npHXCG9xSSX43CWUM}
  D -->|label: dotti| E[Dotti\nHaiku · Reports]
  D -->|label: finn| F[Finn\nSonnet · Build]
  D -->|label: sage| G[Sage\nOpus · Architecture]
  E --> H[(Postgres\nNeo4j\nGitHub)]
  F --> H
  G --> H
  H --> I([claude_task_agents\nAudit Log])
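The dispatch mapping itself is small. A sketch of the routing step, where the label-to-model table mirrors the crew description above and the headless invocation uses Claude Code's print mode (the exact flags in production may differ):

```python
# Sketch of the dispatcher's routing step: a labelled Linear issue becomes a
# headless Claude Code invocation. Exact production flags may differ.

CREW = {
    "dotti": "claude-haiku",   # reports, queries, data analysis
    "finn":  "claude-sonnet",  # coding and implementation
    "sage":  "claude-opus",    # architecture and complex debugging
}

def dispatch_command(label: str, issue_title: str, issue_body: str) -> list[str]:
    """Build the headless `claude -p` invocation for a labelled issue."""
    prompt = f"{issue_title}\n\n{issue_body}"
    return ["claude", "-p", prompt, "--model", CREW[label]]

# The n8n workflow would then run this via subprocess, capture the output,
# and log the run to claude_task_agents.
```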
6. Gmail Cleanup Pipeline — Bill Extraction & Newsletter Unsub

This pipeline runs two Gmail automation tracks in parallel. Track 1: the Gmail API scans for bills and invoices; each email body is passed to Gemini 2.5 Flash with a structured extraction prompt; the output JSON is validated with Pydantic and queued for Xero import via the Bills API. Track 2: a newsletter scanner identifies subscription emails, builds an unsubscribe list, and a bulk-unsubscribe script processes them — reducing inbox volume and removing inactive subscriptions at scale.
flowchart LR
  G([Gmail API]) --> B[extract_bills.py]
  G --> S[gmail_newsletter_scan.py]
  B --> GF{Gemini 2.5 Flash\nStructured Extraction}
  GF --> J[Validated JSON\nPydantic]
  J --> X[(Xero Bills API)]
  S --> U[Unsub List]
  U --> US[gmail_newsletter_unsub.py]
  US --> D([Inbox Cleaned])
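A stdlib sketch of the validation gate between Gemini's output and the Xero queue. Production uses a pydantic-ai result type; the field names here are illustrative:

```python
import json

REQUIRED_FIELDS = ("vendor", "total", "due_date", "currency")   # illustrative schema

def validate_bill(raw: str) -> dict:
    """Reject incomplete or malformed extractions before they reach Xero."""
    data = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in data]
    if missing:
        raise ValueError(f"extraction incomplete, missing: {missing}")
    if not isinstance(data["total"], (int, float)):
        raise ValueError("total must be numeric")
    return data
```

The gate matters because an LLM extraction that silently drops a due date is worse than a loud failure: only validated bills get queued for import.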
4. Multi-Agent System

The agent system splits interactive and headless work. Nova (Claude Desktop) handles real-time business analysis — translating conversations into specs, creating Linear issues, and orchestrating crew routing. The headless crew operates as autonomous Claude Code CLI agents, each with a specific model, role, and label that determines how the dispatcher routes work.

Every agent run is logged to claude_task_agents in Postgres via a PostToolUse hook (ADR-084), creating a full audit trail of who did what and when. Incomplete tasks surface in the inbox for review.
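The hook's write can be sketched as a parameterised INSERT. Column names here are assumptions based on the description above, not the real table definition:

```python
import datetime

def audit_row(agent: str, task_id: str, status: str) -> tuple[str, tuple]:
    """Build the parameterised INSERT the PostToolUse hook would execute
    (e.g. via psycopg). Column names are illustrative."""
    sql = (
        "INSERT INTO claude_task_agents (agent, task_id, status, finished_at) "
        "VALUES (%s, %s, %s, %s)"
    )
    finished = datetime.datetime.now(datetime.timezone.utc)
    return sql, (agent, task_id, status, finished)
```

Parameterised SQL keeps agent-generated text out of the query string, which matters when the values being logged come from LLM output.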

Nova
Claude Desktop · Interactive
Business Analyst & DevOps Bridge
  • Real-time strategy & spec work
  • Translates ideas into crew issues
  • Fibery + Linear MCP access
  • Context API (SSE remote MCP)
Dotti
Haiku 4.5 · Headless CLI
Data Analyst
  • Reports, queries, data analysis
  • Registry management
  • Admin tasks
  • Label: dotti
Finn
Sonnet 4.6 · Headless CLI
Developer / DevOps
  • Coding & implementation
  • Deployments, scripts
  • System builds
  • Label: finn
Sage
Opus 4.6 · Headless CLI
Architect
  • Complex debugging
  • Architecture decisions
  • ADR authoring
  • Label: sage

Dispatch Flow

flowchart LR
  R([Rachel]) <-->|conversation| N[Nova\nClaude Desktop]
  N -->|creates issue| L[(Linear\nwith label)]
  L --> D{n8n Dispatcher}
  D -->|dotti| DO[Dotti · Haiku\nfinn-run.ps1 variant]
  D -->|finn| FI[Finn · Sonnet\nfinn-run.ps1]
  D -->|sage| SA[Sage · Opus\ncrew-run.ps1]
  DO --> LOG[(claude_task_agents\nPostgres)]
  FI --> LOG
  SA --> LOG
  LOG --> IN([Inbox\nfor Rachel review])
5. Systems Architecture

The architecture follows a clear three-layer model: a local Docker stack handles persistence and real-time orchestration; a cloud layer provides scale, scheduled jobs, and external integrations; the AI layer sits across both, with Claude Code operating locally and Modal/Gemini operating in the cloud. All traffic between local and cloud passes through a single Cloudflare Tunnel (no open inbound ports).

flowchart TB
  subgraph AI["AI Layer"]
    CC[Claude Code\nSonnet/Haiku/Opus]
    CD[Claude Desktop\nNova]
    GEM[Gemini 2.5 Flash]
  end
  subgraph LOCAL["Local Docker Stack (Windows 11)"]
    NEO[(Neo4j\nbolt:7687)]
    PG[(PostgreSQL\n:5432)]
    N8N[n8n\nn8n.localhost]
    LIT[LiteLLM\nOpenAI proxy]
    OLL[Ollama\nllama3.2]
    TRF[Traefik\nReverse Proxy]
    NDash[NeoDash]
    GR[Grafana]
  end
  subgraph CLOUD["Cloud Layer"]
    MOD[Modal\nServerless Python]
    NEO_N[Neon\nCloud Postgres]
    MD[MotherDuck\nDuckDB Analytics]
    CF[Cloudflare\nDNS · Tunnel · R2 · Workers AI]
    GH[GitHub\ncurious-owl11]
    INF[Infisical EU\nSecrets]
    FIB[Fibery\nBusiness PM]
    LIN[Linear\nTech Issues]
    WIN[Windmill\nScheduled Scripts]
  end
  subgraph CAPTURE["Capture Layer"]
    EXT[Chrome Extensions\n×5]
    SCRP[Job Scraper\nModal]
    GMAIL[Gmail API]
  end
  CC <--> LOCAL
  CC <--> CLOUD
  CD <--> FIB
  CD <--> LIN
  GEM --> GMAIL
  EXT --> MOD
  SCRP --> MOD
  GMAIL --> GEM
  MOD --> NEO_N
  MOD --> PG
  CF -->|Tunnel| LOCAL
  N8N --> NEO
  N8N --> PG
  N8N --> LIN
  INF -.->|secrets| CC
  INF -.->|secrets| MOD

Key Design Decisions

Bash-first, MCP where needed

Claude Code uses curl/psql/cypher-shell directly. Only Fibery (no usable API for graph queries) justifies an MCP server. ADR-045.

No open inbound ports

Cloudflare Tunnel (liflode-dispatch, 07bf7694) provides all external access. Local services never exposed directly.

Pydantic AI for all structured LLM output

Enforced by ADR-068. Every Modal/Python LLM call with structured output uses pydantic-ai, not raw json.loads().
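A minimal sketch of the pattern this decision enforces, assuming pydantic-ai 1.x; the model string and field set are illustrative, and the imports are deferred so the sketch stands alone:

```python
def extract_bill(email_body: str):
    """Sketch of the ADR-068 pattern: a typed, validated result instead of
    json.loads() on raw model output. Requires pydantic-ai; model string and
    fields are illustrative."""
    from pydantic import BaseModel
    from pydantic_ai import Agent

    class Bill(BaseModel):
        vendor: str
        total: float
        currency: str

    agent = Agent("google-gla:gemini-2.5-flash", output_type=Bill)
    result = agent.run_sync(f"Extract the bill details from this email:\n{email_body}")
    return result.output   # a validated Bill, or a raised validation error
```

The payoff: malformed model output fails at the boundary with a typed error, instead of propagating half-parsed dictionaries into the pipeline.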

Infisical EU only

All secrets via eu.infisical.com. Profile auto-loads GEMINI_API_KEY, LINEAR_API_KEY, WINDMILL_TOKEN. ANTHROPIC_API_KEY intentionally excluded from shell env.
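A fail-fast sketch of this pattern: scripts verify that the Infisical profile injected what they need before doing any work, rather than failing mid-pipeline:

```python
import os

REQUIRED_KEYS = ("GEMINI_API_KEY", "LINEAR_API_KEY", "WINDMILL_TOKEN")

def check_env() -> list[str]:
    """Return the names the Infisical profile failed to inject.
    ANTHROPIC_API_KEY is deliberately absent: it is excluded from the shell env."""
    return [k for k in REQUIRED_KEYS if not os.environ.get(k)]

if __name__ == "__main__":
    if check_env():
        raise SystemExit(f"Infisical profile not loaded; missing: {check_env()}")
```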

6. Appendix

Architecture Decision Records (ADR-001 — ADR-087)
ADR | Title | Category
ADR-001 | MCP Token Optimization | MCP
ADR-002 | Secrets Management — Infisical | Security
ADR-003 | AI Council Architecture | Agents
ADR-004 | Nova / Finn Manager-Worker Workflow | Agents
ADR-005 | Neo4j Knowledge Graph | Data
ADR-006 | Iris Orchestration Architecture | Agents
ADR-007 | GitHub Repository Structure | DevOps
ADR-008 | Windmill MCP Integration | Automation
ADR-009 | Modal ETL Platform | Infra
ADR-010 | Manim Mathematical Animation Tool | Viz
ADR-011 | Chezmoi Dotfiles Management | DevOps
ADR-012 | DroidMind Android Automation via MCP | MCP
ADR-013 | Fibery API Integration | Integration
ADR-014 | D3.js Data Visualization Library | Viz
ADR-015 | Neon PostgreSQL Cloud | Data
ADR-016 | MotherDuck Analytics | Data
ADR-017 | Cloudflare R2 Storage | Infra
ADR-018 | Astro Web Framework | Web
ADR-019 | Zoho Mail & CRM | Business
ADR-020 | Gamma Presentations | Content
ADR-021 | Zotero Reference Management | Research
ADR-022 | Flourish Data Visualization | Viz
ADR-023 | Kumu Systems Thinking | Viz
ADR-024 | Python Runtime Environment (3.13) | DevOps
ADR-025 | n8n Workflow Automation | Automation
ADR-026 | Automation Platform Selection (n8n vs Windmill) | Automation
ADR-027 | Infisical → Modal Auto-Deploy Pipeline | DevOps
ADR-027b | MCP vs API Decision Framework | MCP
ADR-028 | SurrealDB Evaluation — Deferred | Data
ADR-029 | Job Search Automation System | Automation
ADR-030 | Neo4j Agent Memory Architecture | Data
ADR-031 | Unified MCP Architecture | MCP
ADR-032 | Context Assembly Layer | Agents
ADR-033 | Remove n8n MCP Server from Claude Code | MCP
ADR-034 | NeoDash Neo4j Dashboard | Viz
ADR-035 | Fibery Workspace Architecture | Business
ADR-036 | Chat Import Auto-Ingest Drop Folder | Pipeline
ADR-037 | n8n Dedicated Postgres Database | Data
ADR-037b | NotebookLM Rejection for Research Workflow | Research
ADR-038 | Granola Rejection for Meeting Notes | Business
ADR-038b | Migrate Tools Registry to Neon Cloud Postgres | Data
ADR-039 | Business Operations Stack — Salesmate, Xero, Timely | Business
ADR-040 | ADHD Productivity Intelligence — Product Concept | Product
ADR-041 | Email Platform — Cloudflare Routing + MS Outlook | Infra
ADR-042 | Notion CSV Extraction — Type-Based Registry Architecture | Data
ADR-043 | Notion Markdown Cleanup Pipeline | Pipeline
ADR-044 | Innovation Portfolio System — Headless Component Model | Product
ADR-045 | Claude Code Tool Architecture — 1 Business MCP + Bash | MCP
ADR-046 | Context API SSE Remote MCP Endpoint | MCP
ADR-047 | Separate Postgres DB for Bookmarks Consolidation | Data
ADR-047b | liflode-local — Desktop Extension for Local Data Access | MCP
ADR-048 | Motion Canvas — Deferred (Manim Preferred) | Viz
ADR-049 | PersonaPlex Voice AI — Deferred (Hardware Blocker) | AI
ADR-050 | Ghost vs WriteFreely for Long-Form Content Federation | Content
ADR-051 | Fediverse — Self-Hosted vs Public Instance Strategy | Content
ADR-052 | D2 Diagram-as-Code Tool | Viz
ADR-053 | Docling for Document Parsing | Pipeline
ADR-054 | Unstructured for Broad Document Extraction | Pipeline
ADR-055 | Job Application Pipeline — Scrape to Postgres Triage | Pipeline
ADR-056 | Letta Stateful Agents Trial | Agents
ADR-057 | Product Hunt Feb 2026 Tool Assessment | Research
ADR-058 | Free Agent Dispatcher — n8n | Agents
ADR-059 | Cloudflare Named Tunnel | Infra
ADR-060 | Innovation Portfolio Data Architecture | Data
ADR-061 | Claude Code Fibery MCP | MCP
ADR-062 | Job Alert Email Pipeline | Pipeline
ADR-063 | Pavilion Factory Innovation Pipeline | Pipeline
ADR-064 | Business Decision Records — Fibery | Business
ADR-065 | Promptfoo Scenario Testing | Agents
ADR-066 | Cloudflare Workers AI — Factory 1 Ingestion | AI
ADR-067 | LiteLLM LLM Proxy | AI
ADR-068 | Pydantic AI for Structured Outputs | Agents
ADR-069 | Zotero SQLite + Ollama Management | Research
ADR-070 | Google Drive + Gmail Cleanup | Operations
ADR-071 | Data Storytelling & Visualisation Stack | Viz
ADR-072 | Pipeline Monitoring & Alerting | DevOps
ADR-073 | Dynamic Context Assembly & Compression | Agents
ADR-074 | Newsletter Pipeline — Quinn / Factory 1 | Pipeline
ADR-075 | Animation Tools Comparative Evaluation | Viz
ADR-076 | Fibery Module Entity Type | Business
ADR-077 | Tool Watchlist Monitor | Research
ADR-078 | n8n vs Windmill — Role Boundaries | Automation
ADR-079 | Naming System | Business
ADR-080 | Content Knowledge System | Content
ADR-081 | Gmail Cleanup Approach | Operations
ADR-082 | Innovation Portfolio Module Scoring Pipeline | Pipeline
ADR-083 | Historic Data Consolidation & Registry Architecture | Data
ADR-084 | Task Agent Registry & Incomplete Task Inbox | Agents
ADR-085 | Fibery / Linear Two-Sided Org Structure | Business
ADR-086 | Knowledge-First Research System | Research
ADR-087 | Chrome Extension Capture Suite | Pipeline

About This Portfolio

This document was generated from live system data — ADR count, module counts, and pipeline descriptions reflect the actual state of the system as of February 2026. The site is deployed to Cloudflare Pages and print-optimised for PDF export.

Contact: rachel@liflode.com  ·  Practice: Leoma (leoma.ai)  ·  Location: Geelong / Melbourne, Victoria, Australia