The problem
The gap between “I have an idea” and “I have a prompt that works” is wider than power-users remember. Most people type the rough idea, get a generic answer, and decide AI “doesn’t get it” — when the actual problem is that the prompt was missing context, role, format, and constraints.
What we’re building
Prompt Builder is a three-step web wizard. Step one captures the goal. Step two asks Claude to generate the right clarifying questions for that specific goal — not a fixed checklist. Step three assembles a polished prompt with role, context, constraints, format, and examples baked in. Users can copy it to clipboard or send straight into Claude / ChatGPT / Gemini.
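Step three is mostly deterministic templating. A minimal sketch of the assembly step, assuming a Python backend (the function and field names here are illustrative, not the product's actual API):

```python
def assemble_prompt(goal, role, context, constraints, output_format, examples=None):
    """Combine the wizard's collected answers into one structured prompt.

    Each section maps to one ingredient named in the flow:
    role, context, constraints, format, and optional examples.
    """
    sections = [
        f"You are {role}.",
        f"Task: {goal}",
        f"Context: {context}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Output format: {output_format}",
    ]
    if examples:
        sections.append("Examples:\n" + "\n\n".join(examples))
    return "\n\n".join(sections)


prompt = assemble_prompt(
    goal="Summarize this quarterly report for the exec team",
    role="a senior financial analyst",
    context="Audience is non-technical executives; report attached below",
    constraints=["Under 300 words", "No jargon"],
    output_format="Three bullet points plus a one-line takeaway",
)
```

Keeping this step as plain templating (rather than another LLM call) makes the output predictable and cheap; the model's judgment is spent in step two instead.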
The AI angle
The interesting bit is step two. Generic prompt scaffolds (“You are an expert in X”) get average results. Adaptive question generation — where the LLM decides what it needs to know based on the goal — gets dramatically better results. We’re tuning the question-asker more than the prompt-writer.
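One way to implement the question-asker is a meta-prompt that asks the model to return its clarifying questions as a JSON array, plus a tolerant parser for the reply. This is a sketch under assumptions (the wording of the meta-prompt and both function names are hypothetical, and the actual LLM call is omitted):

```python
import json


def question_meta_prompt(goal: str, max_questions: int = 5) -> str:
    """Build the meta-prompt sent to the LLM in step two.

    The model, not a fixed checklist, decides what it needs to know
    about this specific goal.
    """
    return (
        "A user wants help with the following goal:\n"
        f"{goal}\n\n"
        f"List up to {max_questions} clarifying questions whose answers would "
        "most improve a prompt for this goal. Ask only about missing context, "
        "audience, constraints, or desired format.\n"
        'Respond with a JSON array of strings and nothing else.'
    )


def parse_questions(llm_reply: str) -> list[str]:
    """Extract the JSON array even if the model wraps it in extra prose."""
    start = llm_reply.find("[")
    end = llm_reply.rfind("]") + 1
    return json.loads(llm_reply[start:end])
```

Constraining the reply to a JSON array keeps the wizard's step-two UI simple: each string becomes one form field.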
How it’ll be used
- Knowledge workers who use AI daily but don’t want to study prompt engineering.
- Teams that want a guardrail while onboarding new AI users.
- Content teams producing structured outputs at volume.
Where we are
The three-step flow works. We’re iterating on the question-asker prompt against a corpus of “before / after” pairs. We’ll launch publicly once the polished prompts beat the rough inputs by a measurable margin across the test set.
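The launch gate can be expressed as a simple metric over the before/after corpus. A minimal sketch, assuming each pair has been blind-graded to a numeric score (the function name and scoring scheme are illustrative):

```python
def win_rate_and_margin(pairs):
    """pairs: list of (rough_score, polished_score) from blind grading.

    Returns the fraction of pairs where the polished prompt scored
    strictly higher, and the mean score improvement.
    """
    if not pairs:
        raise ValueError("empty test set")
    wins = sum(1 for rough, polished in pairs if polished > rough)
    mean_margin = sum(polished - rough for rough, polished in pairs) / len(pairs)
    return wins / len(pairs), mean_margin


# Example: three graded pairs on a 1-10 scale.
rate, margin = win_rate_and_margin([(3, 7), (5, 5), (4, 8)])
```

A launch threshold might then be something like "win rate above 0.8 and mean margin above 1 point", tuned to whatever grading rubric the corpus uses.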