The problem
The first wave of AI recipe apps generated impressive-looking dishes that nobody could actually cook. Either the ingredients didn’t exist in your supermarket, the steps assumed equipment you didn’t have, or the macros didn’t add up to the diet you said you were on. We wanted a system that took constraints seriously and let the AI freelance only inside them.
What we built
ChefForge is a FastAPI backend with a clean recipe-and-plan schema. Users describe their household once — how many people, dietary restrictions, equipment, weekly budget, ingredients to avoid, ingredients to use up — and ChefForge plans a week of dinners that hit all the constraints. Each recipe stands on its own: rendered with portion-scaled quantities, total cost, total time, and macros that sum back to the user’s daily budget.
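A minimal sketch of what that schema could look like. The field and function names here are illustrative assumptions, not ChefForge's actual models (in the real backend these would presumably be Pydantic models behind FastAPI); the portion-scaling shows the idea of rendering one recipe at the household's serving count:

```python
from dataclasses import dataclass, field

# Hypothetical household profile — field names are assumptions for illustration.
@dataclass
class HouseholdProfile:
    servings: int                                    # how many people
    dietary_restrictions: list[str]                  # e.g. ["gluten-free"]
    equipment: list[str]                             # e.g. ["oven", "skillet"]
    weekly_budget: float                             # currency units per week
    avoid: list[str] = field(default_factory=list)   # never include these
    use_up: list[str] = field(default_factory=list)  # prioritize these

# Hypothetical recipe record, already priced and timed for a base serving count.
@dataclass
class Recipe:
    name: str
    cost: float                # total cost at base_servings
    minutes: int               # total time
    macros: dict[str, float]   # e.g. {"protein_g": 20.0}

def scale_portions(recipe: Recipe, servings: int, base_servings: int = 2) -> Recipe:
    """Scale cost and macros linearly with serving count; time is left alone
    because cooking time does not scale linearly with portions."""
    factor = servings / base_servings
    return Recipe(
        name=recipe.name,
        cost=recipe.cost * factor,
        minutes=recipe.minutes,
        macros={k: v * factor for k, v in recipe.macros.items()},
    )
```

Keeping quantities stored at a fixed base serving count and scaling at render time means one stored recipe serves every household size.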
The AI angle
Generation happens in two passes. The first asks for a balanced weekly outline. The second expands each slot into a full recipe and verifies that the resulting plan still sums to the user’s constraints — if it doesn’t, the LLM is told exactly what’s wrong and asked to revise. The verifier is deterministic code, not another LLM call, which is why ChefForge stays tight even when the underlying model gets creative.
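A deterministic verifier of this shape can be sketched in a few lines. The types and checks below are assumptions standing in for ChefForge's real schema; the point is that each check is plain code that returns a precise, human-readable violation, which is exactly the feedback handed back to the LLM for revision:

```python
from dataclasses import dataclass, field

# Minimal stand-ins for the plan objects — names are illustrative,
# not ChefForge's actual schema.
@dataclass
class Recipe:
    name: str
    cost: float
    ingredients: list[str]

@dataclass
class Constraints:
    weekly_budget: float
    avoid: list[str] = field(default_factory=list)

def verify_plan(recipes: list[Recipe], c: Constraints) -> list[str]:
    """Deterministic verifier: no LLM call, just arithmetic and membership
    checks. Returns violation messages the LLM can act on directly."""
    errors = []
    total = sum(r.cost for r in recipes)
    if total > c.weekly_budget:
        errors.append(f"plan costs {total:.2f}; weekly budget is {c.weekly_budget:.2f}")
    for r in recipes:
        for banned in c.avoid:
            if banned in r.ingredients:
                errors.append(f"'{r.name}' contains avoided ingredient '{banned}'")
    return errors  # an empty list means the plan passes
```

An empty list ends the loop; a non-empty one goes back to the model verbatim as "here is exactly what's wrong."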
How it’s used
- Busy households who want one less weekly decision.
- People on specific eating regimes — high-protein, gluten-free, low-FODMAP — who can’t trust generic recipe sites.
- Cooking-curious beginners who need pacing, not inspiration.
What it taught us
That “the AI got it wrong” usually means the verifier was missing. Ninety percent of complaints traced back to a constraint we hadn’t encoded as a hard check. The fix isn’t a smarter prompt — it’s another check.
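As a sketch of what "another check" means in practice: a complaint like "it suggested a blender recipe and I don't own a blender" becomes one more deterministic function in the verifier. The function and its signature are hypothetical, for illustration only:

```python
def check_equipment(required: set[str], owned: set[str]) -> list[str]:
    """Hard check: every piece of equipment a recipe needs must be
    in the household's equipment list. Returns [] when satisfied."""
    missing = required - owned
    if missing:
        return [f"recipe requires equipment the household lacks: {sorted(missing)}"]
    return []
```

Once the check exists, the complaint can never recur silently: the plan either passes it or the model is told to revise.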