From Schema to Deployment: A Full-Stack Workflow with CrowVault

The Full-Stack Bootstrapping Problem

Experienced engineers often joke that the hardest part of a new service is the first day. That observation points at something real: the mechanical work required to establish a production-grade baseline is substantial, repetitive, and nearly identical across projects. A relational schema needs normalization decisions, index selection, and migration scaffolding. A domain model needs aggregates, value objects, repository interfaces, and domain events. An API spec needs endpoint definitions, request/response schemas, authentication wiring, and error models. Infrastructure needs build reproducibility, resource limits, health checks, and rollout strategy.

None of this work is intellectually interesting. It is load-bearing boilerplate — necessary, time-consuming, and easy to get subtly wrong (a missing index, a Dockerfile that runs as root, a retry without jitter). CrowVault's 133 tools are organized around eliminating exactly this category of work. The following walkthrough demonstrates what that looks like end-to-end for a Task Manager service with tasks, projects, and users entities.

Step 1 — Design the Database Schema

The first tool call produces a normalized PostgreSQL schema with foreign keys, check constraints, and index recommendations for the target access patterns:

bash
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "database-mcp",
    "tool": "design_schema",
    "args": {
      "domain": "task-manager",
      "entities": ["task", "project", "user"],
      "features": ["soft-delete", "audit-fields", "optimistic-locking"],
      "database": "postgresql"
    }
  }'
sql
-- Generated schema (excerpt)
CREATE TABLE users (
  id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email       TEXT NOT NULL UNIQUE,
  display_name TEXT NOT NULL CHECK (char_length(display_name) BETWEEN 2 AND 80),
  created_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
  deleted_at  TIMESTAMPTZ
);

CREATE TABLE projects (
  id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  owner_id    UUID NOT NULL REFERENCES users(id) ON DELETE RESTRICT,
  name        TEXT NOT NULL CHECK (char_length(name) BETWEEN 1 AND 120),
  status      TEXT NOT NULL DEFAULT 'active' CHECK (status IN ('active','archived')),
  version     INTEGER NOT NULL DEFAULT 1,  -- optimistic locking
  created_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
  deleted_at  TIMESTAMPTZ
);

CREATE TABLE tasks (
  id          UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  project_id  UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  assignee_id UUID REFERENCES users(id) ON DELETE SET NULL,
  title       TEXT NOT NULL CHECK (char_length(title) BETWEEN 1 AND 255),
  status      TEXT NOT NULL DEFAULT 'todo'
                CHECK (status IN ('todo','in_progress','review','done')),
  priority    INTEGER NOT NULL DEFAULT 2 CHECK (priority BETWEEN 1 AND 4),
  due_at      TIMESTAMPTZ,
  version     INTEGER NOT NULL DEFAULT 1,
  created_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
  deleted_at  TIMESTAMPTZ
);

-- Access-pattern indexes
CREATE INDEX idx_tasks_project_status ON tasks(project_id, status) WHERE deleted_at IS NULL;
CREATE INDEX idx_tasks_assignee       ON tasks(assignee_id) WHERE deleted_at IS NULL;

The schema includes soft-delete columns on all entities, a version column for optimistic locking, and partial indexes that exclude deleted rows — the kind of detail that is easy to skip when writing a schema by hand but matters significantly once query volume grows. For a deeper look at what this tool produces across more complex domains, see the PostgreSQL schema design post.
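To see how the version column earns its keep, here is a minimal sketch of the optimistic-locking UPDATE it enables. The query-builder helper is hypothetical (the generated repository code will differ); the point is the `WHERE ... AND version = $n` guard plus the `version = version + 1` bump, which makes a stale concurrent writer match zero rows:

```typescript
// Hypothetical helper: builds an optimistic-locking UPDATE for the tasks table.
// Any parameterized SQL client (e.g. node-postgres) executes the result directly.
interface UpdateTaskFields {
  title?: string;
  status?: string;
}

function buildOptimisticUpdate(
  id: string,
  expectedVersion: number,
  fields: UpdateTaskFields
): { sql: string; params: unknown[] } {
  const sets: string[] = [];
  const params: unknown[] = [];
  for (const [col, value] of Object.entries(fields)) {
    params.push(value);
    sets.push(`${col} = $${params.length}`);
  }
  params.push(id, expectedVersion);
  // The version guard: a writer holding a stale version updates zero rows,
  // and the caller surfaces that as a 409 Conflict instead of silently losing data.
  const sql =
    `UPDATE tasks SET ${sets.join(", ")}, version = version + 1, updated_at = now() ` +
    `WHERE id = $${params.length - 1} AND version = $${params.length} AND deleted_at IS NULL`;
  return { sql, params };
}
```

A caller that gets `rowCount === 0` back knows another writer got there first and can re-read and retry.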

Step 2 — Generate the Domain Layer

With the schema in place, the next call generates the Task aggregate — the root entity that enforces business invariants and owns all mutations to the task and its child objects:

bash
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "fullstack-enhanced",
    "tool": "generate_aggregate",
    "args": {
      "name": "Task",
      "properties": ["title", "status", "priority", "assigneeId", "dueAt"],
      "invariants": ["status-transition", "priority-range"],
      "withDomainEvents": true,
      "language": "typescript"
    }
  }'
typescript
export class Task extends AggregateRoot {
  private constructor(
    id: TaskId,
    private _title: TaskTitle,
    private _status: TaskStatus,
    private _priority: Priority,
    private _assigneeId: UserId | null,
    private _dueAt: Date | null,
    private readonly _version: number
  ) {
    super(id);
  }

  static create(props: CreateTaskProps): Task {
    const task = new Task(
      TaskId.generate(),
      TaskTitle.of(props.title),
      TaskStatus.TODO,
      Priority.of(props.priority ?? 2),
      props.assigneeId ? UserId.of(props.assigneeId) : null,
      props.dueAt ?? null,
      1
    );
    task.addDomainEvent(new TaskCreatedEvent(task.id, task._title.value));
    return task;
  }

  transition(next: TaskStatus): void {
    if (!ALLOWED_TRANSITIONS[this._status.value].includes(next.value)) {
      throw new InvalidStatusTransitionError(this._status.value, next.value);
    }
    this._status = next;
    this.addDomainEvent(new TaskStatusChangedEvent(this.id, next));
  }

  assign(userId: UserId): void {
    this._assigneeId = userId;
    this.addDomainEvent(new TaskAssignedEvent(this.id, userId));
  }

  get version(): number { return this._version; }
}

The aggregate enforces the status transition invariant (you cannot move a task from todo directly to done without going through in_progress) and emits typed domain events on every state change. Those events are the integration points for downstream consumers — notification services, analytics pipelines, audit logs. For the full picture on DDD patterns, see the domain model post.
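The excerpt references `ALLOWED_TRANSITIONS` without showing it. One plausible shape — an assumption, since the generated code may define it differently — is a map from each status to its legal successors, which keeps the transition rule declarative and trivially testable:

```typescript
type Status = "todo" | "in_progress" | "review" | "done";

// Assumed shape of the transition table the generated excerpt references.
// Each key lists the statuses a task in that state may move to next.
const ALLOWED_TRANSITIONS: Record<Status, Status[]> = {
  todo:        ["in_progress"],
  in_progress: ["review", "todo"],
  review:      ["done", "in_progress"],
  done:        [],
};

function canTransition(from: Status, to: Status): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```

With the table as data, adding a new state (say, `blocked`) is a one-line change plus a type update, and the invariant check in `transition()` never needs to grow.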

Step 3 — Create the API Specification

With the domain model in hand, the OpenAPI spec defines the contract between this service and its consumers. Generating it from the same resource list used in the schema step keeps all three artefacts consistent:

bash
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "api-mcp",
    "tool": "generate_openapi_spec",
    "args": {
      "service": "task-manager",
      "resources": ["task", "project", "user"],
      "withAuth": true,
      "authSchemes": ["bearerJwt", "apiKey"],
      "pagination": "cursor",
      "errorFormat": "rfc7807"
    }
  }'
yaml
openapi: "3.1.0"
info:
  title: Task Manager API
  version: "1.0.0"
  description: Task and project management service

security:
  - bearerAuth: []
  - apiKeyAuth: []

paths:
  /v1/tasks:
    get:
      summary: List tasks
      parameters:
        - { name: project_id, in: query, required: true, schema: { type: string, format: uuid } }
        - { name: status,     in: query, schema: { type: string, enum: [todo, in_progress, review, done] } }
        - { name: cursor,     in: query, schema: { type: string } }
        - { name: limit,      in: query, schema: { type: integer, default: 20, maximum: 100 } }
      responses:
        "200":
          description: Paginated task list
          content:
            application/json:
              schema: { $ref: "#/components/schemas/TaskPage" }
        "401": { $ref: "#/components/responses/Unauthorized" }
        "422": { $ref: "#/components/responses/ValidationError" }

    post:
      summary: Create task
      requestBody:
        required: true
        content:
          application/json:
            schema: { $ref: "#/components/schemas/CreateTaskRequest" }
      responses:
        "201": { description: Task created, content: { application/json: { schema: { $ref: "#/components/schemas/Task" } } } }
        "422": { $ref: "#/components/responses/ValidationError" }

  /v1/tasks/{id}/status:
    patch:
      summary: Transition task status
      parameters:
        - { name: id, in: path, required: true, schema: { type: string, format: uuid } }
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [status, version]
              properties:
                status:  { type: string, enum: [in_progress, review, done] }
                version: { type: integer, description: Optimistic lock version }
      responses:
        "200": { description: Status updated }
        "409": { $ref: "#/components/responses/ConflictError" }

The spec includes cursor-based pagination (more efficient than offset for large result sets), RFC 7807 problem detail error responses, and optimistic locking on the status transition endpoint — the version field in the PATCH body matches the version column in the schema from step 1, keeping the contract and the data model consistent.
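The `cursor` query parameter is an opaque token. A common implementation — sketched here as an assumption, since the generated service may encode it differently — packs the sort key of the last row seen (`created_at` plus `id` as a tiebreaker) into a base64url string:

```typescript
// Sketch of an opaque pagination cursor over a (created_at, id) sort order.
// The encoding is an assumption; only the round-trip property matters to clients.
interface Cursor {
  createdAt: string; // ISO-8601 timestamp of the last row on the previous page
  id: string;        // tiebreaker for rows sharing the same timestamp
}

function encodeCursor(c: Cursor): string {
  return Buffer.from(JSON.stringify(c)).toString("base64url");
}

function decodeCursor(token: string): Cursor {
  return JSON.parse(Buffer.from(token, "base64url").toString("utf8"));
}
```

The server then queries `WHERE (created_at, id) > (cursor.createdAt, cursor.id)` — a range scan that stays fast regardless of how deep the client has paged, which is exactly why cursor pagination beats `OFFSET` on large tables.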

Step 4 — Generate Infrastructure

Two more tool calls produce the container and orchestration layer. The Dockerfile call specifies the Node.js runtime and the distroless base image for the production stage, which eliminates the shell and package manager from the final image — reducing attack surface and image size simultaneously:

bash
# Dockerfile
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "devops-mcp",
    "tool": "generate_dockerfile",
    "args": {
      "runtime": "node",
      "nodeVersion": "22",
      "baseImage": "distroless",
      "withHealthcheck": true,
      "nonRootUser": true
    }
  }'

# Kubernetes deployment
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "devops-mcp",
    "tool": "generate_k8s_deployment",
    "args": {
      "name": "task-manager",
      "replicas": 3,
      "strategy": "RollingUpdate",
      "resourceLimits": { "cpu": "500m", "memory": "256Mi" },
      "withHPA": true,
      "withPDB": true
    }
  }'

The Kubernetes manifest includes a HorizontalPodAutoscaler targeting 70% CPU utilization and a PodDisruptionBudget requiring at least two pods available during voluntary disruptions — standard production defaults that take time to write from memory and are easy to forget. See the DevOps automation post for a detailed look at the infrastructure generation tools.
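The 70% CPU target drives Kubernetes' documented scaling rule, `desired = ceil(current × currentMetric / targetMetric)`, which a one-line function makes concrete:

```typescript
// The HPA scaling formula as documented by Kubernetes:
// desiredReplicas = ceil(currentReplicas * (currentMetricValue / desiredMetricValue))
function desiredReplicas(
  current: number,
  currentCpuPercent: number,
  targetCpuPercent: number
): number {
  return Math.ceil(current * (currentCpuPercent / targetCpuPercent));
}
```

At the configured 70% target, three pods averaging 90% CPU scale to four; at exactly 70% the deployment holds steady, since the ratio is 1.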

Step 5 — Add Resilience

The final tool calls add resilience to the service's outbound calls — database queries, calls to other internal services, third-party API integrations. A circuit breaker wraps each downstream dependency; a retry policy with exponential backoff and jitter handles transient failures:

bash
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "api-mcp",
    "tool": "generate_circuit_breaker",
    "args": { "runtime": "node", "failureThreshold": 5, "recoveryTimeMs": 30000, "withMetrics": true }
  }'

The generated CircuitBreaker class and withRetry function are covered in full in the resilience patterns post. They compose cleanly: wrap the circuit breaker's call() method with withRetry(), and retries stop automatically as soon as the breaker opens — preventing retry storms against a failing dependency.
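The "jitter" half of the retry policy is worth making concrete. One common scheme is full jitter — delay drawn uniformly from zero up to an exponentially growing, capped ceiling — sketched below as an assumption about what the generated `withRetry` helper does (it may use a different variant such as decorrelated jitter). The random source is injected so the function is testable:

```typescript
// Full-jitter exponential backoff: ceiling doubles per attempt up to capMs,
// and the actual delay is uniform in [0, ceiling), so retrying clients
// spread out instead of hammering a recovering dependency in lockstep.
function backoffDelayMs(
  attempt: number,              // 0-based retry attempt
  baseMs = 100,
  capMs = 30_000,
  rand: () => number = Math.random
): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(rand() * ceiling);
}
```

Without the jitter, every client that observed the same failure retries at the same instant — the retry-storm problem the composition with the circuit breaker is guarding against.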

The Result: Schema to Deployment in Minutes

Six API calls. Here is what was produced:

  • Database schema — normalized PostgreSQL with soft-delete, audit fields, optimistic locking, and access-pattern indexes.
  • Domain aggregate — typed TypeScript class with invariant enforcement and domain event emission.
  • OpenAPI 3.1 spec — full CRUD endpoints with cursor pagination, JWT + API key auth, and RFC 7807 error responses.
  • Dockerfile — multi-stage build, distroless production image, non-root user, health check.
  • Kubernetes manifests — Deployment, HPA, PDB, resource limits, rolling update strategy.
  • Resilience layer — circuit breaker with state metrics, retry with exponential backoff and jitter.

This is a complete production scaffold. There are no placeholders to fill in, no TODOs to chase down, no half-finished implementations waiting to bite you in a code review. Each generated artefact is idiomatic, typed, and immediately usable. The code is yours — no runtime SDK, no vendor lock-in, no required dependency on CrowVault once the generation is complete.

What would take an experienced engineer four to eight hours to write correctly — accounting for the research time on cursor pagination semantics, the right PDB configuration, the circuit breaker's state machine — is done in the time it takes to run six curl commands.

The remaining work, the part that actually differentiates your product, is the business logic: the task prioritization algorithm, the notification routing rules, the project permissions model. That is where your engineering time should go.

CrowVault's Developer plan includes 2,000 tool calls per month — enough to bootstrap several services and run a full development workflow. Create an account and run your first tool call in under two minutes. The API documentation covers authentication, tool discovery, batch execution, and the full catalogue of 133 tools across database, API, DevOps, and full-stack domains.