Blueprint Lang: A Declarative DSL That Compiles to TypeScript You'd Actually Ship
Blueprint is a declarative language for writing web services that compiles to production-ready TypeScript. No runtime dependency, no vendor lock-in -- just clean Hono, Drizzle, and Zod code that happens to be generated from a language you can read in five minutes.
thelacanians
The Problem with Generated Code
Most code generation tools produce code that looks like it was written by a committee of interns during a hackathon. Sprawling utility functions nobody asked for, framework abstractions three layers deep, configuration files that outnumber your actual source files. You end up maintaining the generator’s opinions instead of your own application.
Blueprint takes a different position: what if the generated code was just… normal? Standard libraries, standard patterns, standard TypeScript. The kind of code a senior engineer would write if they had infinite patience and zero tolerance for boilerplate.
What Blueprint Actually Is
Blueprint is a declarative DSL for defining web services. You describe your API endpoints, data models, middleware, and background jobs in a flat, readable syntax. The compiler turns it into production-ready TypeScript using Hono for routing, Drizzle ORM for database access, Zod for validation, BullMQ for job queues, and Vitest for tests.
The generated code has zero dependency on Blueprint. Delete the .bp files, uninstall the compiler, and your application still runs. There is no runtime, no framework, no SDK to vendor-lock you into the ecosystem.
The Arrow System
Blueprint’s syntax is built around a visual data flow that reads like a diagram:
@ "Create a new todo"
POST /api/todos {
  <- title string required
  |> todo = save todo { title: title }
  -> 201 { id: todo.id, title: todo.title, done: todo.done }
}
Four symbols tell you everything:
@ declares intent — what this endpoint does, in plain language
<- marks input — data flowing into the handler
|> marks a processing step — data being transformed
-> marks output — data flowing out to the client
Read it top to bottom and you have a complete picture of the request lifecycle. No scrolling through middleware chains, no tracing through dependency injection containers, no grepping for where the response actually gets sent.
What the Compiler Produces
That todo endpoint compiles to this TypeScript:
todosRoutes.post('/api/todos',
  zValidator('json', postTodosSchema),
  async (c) => {
    const { title } = c.req.valid('json');
    const todo = (await db.insert(schema.todos)
      .values({ title })
      .returning())[0];
    return c.json({
      id: todo.id,
      title: todo.title,
      done: todo.done
    }, 201);
  }
);
Hono route. Zod validation. Drizzle insert. Typed response. This is code you would write by hand. The compiler just writes it faster and never forgets the validation middleware.
Flat by Force
Blueprint has no if/else. No for loops. No nesting deeper than one level. This is not a limitation — it is a design decision.
Instead, Blueprint provides constrained alternatives:
@ "Complete a todo if it exists"
PATCH /api/todos/:id/complete {
  <- id uuid required
  |> todo = find first todo { id: id }
  guard todo else { -> 404 { error: "Todo not found" } }
  |> updated = update todo { id: id } set { done: true }
  -> 200 { id: updated.id, done: updated.done }
}
guard handles validation and early returns. when handles conditional logic. map handles iteration. Every handler reads as a linear sequence of steps, because that is what HTTP handlers actually are: receive input, validate it, do something, return output.
The flatness constraint is not about taste. It is about making the code readable for both humans and LLMs. Deeply nested control flow is where bugs hide. Blueprint forces them into the open.
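The guard example above shows early returns; a handler that also needs conditional logic and iteration stays just as flat. The sketch below is hypothetical — the doc only names `when` and `map`, so their exact grammar here is inferred from the `guard` example and may differ from the real language:

@ "List a user's todos, shaped for the client (illustrative syntax)"
GET /api/users/:id/todos {
  <- id uuid required
  |> todos = find many todo { user_id: id }
  |> shaped = map todos { id: item.id, title: item.title, done: item.done }
  when shaped.length == 0 { -> 200 { todos: [], empty: true } }
  -> 200 { todos: shaped }
}

Even with three different control constructs in play, the handler is still a straight top-to-bottom read.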
Models and Enums
Data modeling follows the same flat philosophy:
model todo {
  id uuid primary default:uuid
  title string required
  done boolean default:false
  created_at timestamp default:now
}

enum priority {
  low
  medium
  high
  critical
}
The compiler generates Drizzle schema definitions, Zod validators, and TypeScript types from these declarations. One source of truth, three outputs. Change the model, and the validation, types, and database schema all update together.
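To make "one source of truth, three outputs" concrete, here is a hand-written TypeScript sketch of the kind of artifacts a compiler could derive from the todo model. The real output uses Drizzle and Zod; this dependency-free version only illustrates the shape, and every name in it is illustrative rather than actual compiler output:

```typescript
import { randomUUID } from "node:crypto";

// Illustrative only: artifacts a compiler could derive from `model todo`.

// 1. A TypeScript type mirroring the model declaration.
interface Todo {
  id: string;        // uuid primary default:uuid
  title: string;     // string required
  done: boolean;     // boolean default:false
  created_at: Date;  // timestamp default:now
}

// 2. A runtime validator for the fields a client may supply
//    (stands in for the generated Zod schema).
function parseNewTodo(input: unknown): { title: string } {
  if (typeof input !== "object" || input === null) {
    throw new Error("expected an object");
  }
  const { title } = input as { title?: unknown };
  if (typeof title !== "string" || title.length === 0) {
    throw new Error("title is required");
  }
  return { title };
}

// 3. Defaults applied at insert time, as the schema declares them
//    (stands in for the generated Drizzle column defaults).
function withDefaults(input: { title: string }): Todo {
  return {
    id: randomUUID(),        // default:uuid
    title: input.title,
    done: false,             // default:false
    created_at: new Date(),  // default:now
  };
}
```

The point is not these particular helpers but the dependency direction: the model declaration is upstream of all three artifacts, so a schema change cannot drift out of sync with its validator or its types.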
LLM-Native Design
Here is where Blueprint gets interesting. The language was designed from the ground up to work with AI code generation — not as an afterthought, but as a core design principle.
Intent annotations (@) are not comments. They are semantic metadata that tells the compiler (and any LLM reading the code) what a block of code is supposed to do. When an LLM generates Blueprint, the intent annotation provides a built-in correctness check: does the implementation match the stated intent?
Generation slots take this further:
@ "Calculate shipping cost based on weight and destination"
|> cost = @> calculate shipping cost from weight, destination
The @> marker is an explicit invitation for an LLM to fill in the implementation. Running bp generate calls the Anthropic API to resolve these slots into concrete code. The result is committed to your codebase as plain Blueprint — the @> marker disappears, replaced by actual logic you can read, review, and modify.
This is a fundamentally different approach from “AI writes all your code.” Blueprint defines the structure, the contracts, the data flow. The LLM fills in the business logic where you explicitly ask it to. You maintain control over the architecture while delegating the implementation details that are tedious but well-defined.
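A before/after sketch of slot resolution, using the shipping example above. The resolved body below is purely illustrative — the actual output depends on your models and context, and the expression syntax shown is an assumption, not documented grammar:

Before bp generate:
@ "Calculate shipping cost based on weight and destination"
|> cost = @> calculate shipping cost from weight, destination

After bp generate (hypothetical output):
@ "Calculate shipping cost based on weight and destination"
|> rate = find first shipping_rate { region: destination }
guard rate else { -> 422 { error: "No rate for destination" } }
|> cost = rate.base + rate.per_kg * weight

Because the resolved slot is ordinary Blueprint, it goes through the same review, diff, and compile pipeline as everything you wrote by hand.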
Beyond CRUD
Blueprint handles more than REST endpoints. The same flat, declarative syntax extends to:
Middleware:
middleware auth {
  <- authorization header required
  |> token = extract bearer token from authorization
  guard token else { -> 401 { error: "Missing token" } }
  |> user = verify jwt token
  guard user else { -> 401 { error: "Invalid token" } }
  set user = user
}
Background workers:
worker send_welcome_email {
  <- user_id uuid required
  <- email string required
  |> user = find first user { id: user_id }
  |> send email to email subject "Welcome" body "Hello, ${user.name}"
}
WebSocket and SSE streams, scheduled jobs, test suites — all using the same arrow syntax, the same flat structure, the same principle of making data flow visible at a glance.
Why This Matters
We build production systems for companies that cannot afford to ship vibe code. The irony of most code generation tools is that they produce exactly the kind of code that creates tech debt: functional but fragile, clever but unmaintainable, fast to write but slow to debug.
Blueprint inverts this. The generated TypeScript is boring in the best possible way. It uses the libraries your team already knows. It follows patterns your linter already enforces. It produces code that a new hire can read on day one without needing to understand a custom framework.
The DSL itself is learnable in an afternoon. The arrow system is intuitive enough that non-engineers can read a .bp file and understand what an endpoint does. That matters when you are reviewing AI-generated code, because the first question is always: does this do what I think it does?
Blueprint makes that question easy to answer. And that is the entire point.
Getting Started
Blueprint is open source. Install the CLI, point it at a new directory, and define your first model and endpoint. The compiler will generate a complete, runnable Hono application with typed routes, validated inputs, and a test scaffold.
# Install Blueprint
npm install -g @aspect/blueprint
# Initialize a new project
bp init my-api
# Generate TypeScript from your .bp files
bp compile
# Resolve any @> generation slots
bp generate
The generated code is yours. No runtime dependency, no license gotchas, no upgrade treadmill. If Blueprint disappeared tomorrow, your application would keep running without changes. That is the kind of tool we think the ecosystem needs more of.