A studio, not an agency.

We build production AI software. Mostly for ourselves — 21 products and counting — and occasionally for clients we want to work with.

What we believe

Shipping is the only proof. AI is the most over-demoed technology in software. The thing that matters is whether someone pays for the result on Tuesday morning. Every project on our portfolio page is something a real user can sign up for today.

Boring infrastructure is the moat. The flashy bit is the prompt. The reason it works in production is everything around the prompt: routing, evals, queues, billing, observability. We’ve already built that layer once, so each new product gets it for free.

Small surface, deep stack. Two-person teams ship more than ten-person teams that need permission. We keep the studio small on purpose.

How the portfolio came about

Each product started as something we wanted to use ourselves, or a problem a friend kept describing. Email triage came from drowning in inboxes across three businesses. Ghost Writer came from wanting to publish without spending Sunday writing. The trading bot came from getting tired of staring at charts. We turn the itch into a service, charge a sensible amount, and let it run.

The studio underneath is the thing we get paid to build for other people: the same patterns, the same shared infrastructure, the same speed.

The toolkit

  • LLMs: Anthropic Claude (Opus, Sonnet, Haiku) as the default; Google Gemini and OpenAI behind LiteLLM/OpenRouter for cost and latency variants.
  • Backends: Python (FastAPI, Django) and Node.js (Fastify, Express). Postgres for state, Redis for queues, n8n for workflows.
  • Frontends: Next.js, React, plain HTML when that’s the right answer. React Native + Expo for mobile.
  • Plumbing: Stripe, Brevo, Google OAuth, GlitchTip, Umami, Ghost CMS for marketing sites.
  • Deployment: Docker Compose on a small fleet of self-hosted boxes; not because it’s fashionable, but because it’s cheaper and we know exactly what’s running.
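As a sketch of what that deployment shape can look like, here is a minimal Compose file in the same spirit: one app container, Postgres for state, Redis for queues. Service names, images, and versions are illustrative, not the studio’s actual config.

```yaml
# Illustrative docker-compose.yml for a small self-hosted product box.
# All names and versions are examples, not the studio's real stack.
services:
  app:
    build: .                  # the product itself
    env_file: .env
    depends_on: [db, redis]
    restart: unless-stopped
  db:
    image: postgres:16        # Postgres for state
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes: [pgdata:/var/lib/postgresql/data]
    restart: unless-stopped
  redis:
    image: redis:7            # Redis for queues
    restart: unless-stopped
volumes:
  pgdata:
```

The appeal is exactly what the bullet says: everything running on the box is listed in one file you can read in a minute.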

Why we run open source

A studio is only as honest as the boring infrastructure underneath it. Behind every product on the portfolio page sits an open-source stack we run ourselves — Umami for analytics, Cal.com for scheduling, LiteLLM for routing across model providers, Paperless-NGX for documents, Open WebUI for direct model access, Speedtest Tracker for the studio’s own internet, Portainer for container ops, Syncthing for files, plus n8n and Ghost for workflow and content. Each one replaces a SaaS most teams pay for.

This isn’t ideology. It’s knowing what we recommend. When we suggest a stack to a client, we already know the failure modes, the upgrade pain, and the realistic running cost — because we run it ourselves. The same tools we trust for the studio are the tools we can stand up for you on day one, on your hardware, under your auth.
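To make the routing piece concrete: a LiteLLM proxy config along these lines lets every product call a stable alias while the backend model can be swapped per cost or latency tier without code changes. The model names, aliases, and environment variables below are placeholders, not our live configuration.

```yaml
# Illustrative LiteLLM proxy config — aliases and model ids are placeholders.
model_list:
  - model_name: default          # products call "default"; the proxy picks the backend
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: cheap            # cost/latency-sensitive paths
    litellm_params:
      model: gemini/gemini-1.5-flash
      api_key: os.environ/GEMINI_API_KEY
```

Swapping a provider then means editing this file, not redeploying twenty-one products.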

If you want to work with us

Read the services page to see how engagements work, then drop a note. We reply within one business day.