Automate Your DevOps: Dockerfiles, K8s Manifests, and Helm Charts in Seconds

The DevOps Boilerplate Problem

Every microservice you deploy needs the same set of files: a Dockerfile to build the image, Kubernetes manifests to run it, a Helm chart to make it configurable across environments, and a CI/CD pipeline to tie it all together. The content changes from service to service, but the structure is almost identical.

The standard solution is copy-paste from the last service you deployed. This works until it doesn't. The payment service has a multi-stage Dockerfile from six months ago when you were more careful. The notification service has a single-stage build that runs as root because someone was in a hurry. The Kubernetes manifests disagree on resource limits because different engineers wrote them. The Helm values files have diverged enough that templating them consistently across environments requires reading every file manually before every deployment.

Configuration drift is how "it works on my machine" becomes "it worked in staging but not production." The problem isn't that engineers don't know how to write correct DevOps configuration — it's that writing it from scratch each time is slow enough that people reach for the nearest template instead of the correct one.

CrowVault's devops MCP server generates consistent, production-grade DevOps configuration from a structured description of your service. Same conventions, every time.

Generating Production Dockerfiles

The generate_dockerfile tool takes your service description and produces a multi-stage Dockerfile with the security and performance defaults you'd want in production:

bash
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "devops-mcp",
    "tool": "generate_dockerfile",
    "args": {
      "service": "payment-api",
      "runtime": "node",
      "port": 3000,
      "nodeVersion": "20",
      "packageManager": "npm"
    }
  }'

The output is a multi-stage build that ships a minimal runtime image:

dockerfile
# ── Stage 1: build ────────────────────────────────────────────
FROM node:20-alpine AS builder
WORKDIR /app

# Install all dependencies first (separate layer for caching);
# dev dependencies are needed for the TypeScript build below
COPY package*.json ./
RUN npm ci

# Copy source and build, then drop dev dependencies
COPY tsconfig.json ./
COPY src/ ./src/
RUN npm run build && npm prune --omit=dev && npm cache clean --force

# ── Stage 2: runtime ──────────────────────────────────────────
FROM node:20-alpine AS runtime
WORKDIR /app

# Non-root user for security (UID pinned to match the Kubernetes securityContext)
RUN addgroup -S appgroup && adduser -S -u 1000 -G appgroup appuser

# Only production artifacts
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist        ./dist

# Drop privileges
USER appuser

EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

CMD ["node", "dist/index.js"]

Four things here that are easy to get wrong when copying from a template:

Layer caching order — package*.json is copied before the source files. Docker invalidates a layer's cache, and every layer after it, whenever the files that layer copies change. If you copy source before running npm ci, every code change triggers a full dependency reinstall. The correct order keeps the node_modules layer stable across code-only changes, cutting build times from minutes to seconds.

Multi-stage build — the final image contains only the compiled output and production node_modules. No TypeScript compiler, no build toolchain, no source files. A typical Node service built this way is 60–80% smaller than a single-stage build, which means faster pulls in CI and a meaningfully smaller attack surface.

Non-root user — running as root inside a container widens the blast radius of a compromise. If the application process is exploited, root inside the container can interact with the kernel and mounted filesystems in ways a non-root user cannot. The generated Dockerfile creates a dedicated user and switches to it before the final CMD.

Healthcheck — the HEALTHCHECK instruction lets Docker (and orchestrators that honor it, such as Docker Compose and Swarm) distinguish "process started" from "application ready to serve traffic." Note that Kubernetes ignores Docker's HEALTHCHECK entirely; it relies on the liveness and readiness probes defined in the pod spec, which the generated manifests configure explicitly (liveness against /health, readiness against /ready).
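For contrast with the first point, here is the ordering the generator avoids — a sketch of the anti-pattern, not generated output:

dockerfile
# Anti-pattern: copying all source before installing dependencies.
# Any code change invalidates the COPY layer, so npm ci reruns on
# every build instead of being served from cache.
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci
RUN npm run build
CMD ["node", "dist/index.js"]

Functionally this builds the same application; it just pays the full dependency install on every code change and ships the build toolchain in the final image.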

Kubernetes Manifests in One Call

A production Kubernetes deployment for a single service needs at least four objects: a Deployment with a pod security context and resource limits, a Service to expose it to other pods, a ConfigMap for environment configuration, and a HorizontalPodAutoscaler to handle load spikes. Writing all four correctly — with the right label selectors, correct resource units, and sane autoscaling thresholds — takes time.

bash
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "devops-mcp",
    "tool": "generate_k8s_deployment",
    "args": {
      "service": "payment-api",
      "image": "gcr.io/my-project/payment-api",
      "port": 3000,
      "replicas": 2,
      "resources": {
        "requests": { "cpu": "100m", "memory": "128Mi" },
        "limits":   { "cpu": "500m", "memory": "512Mi" }
      },
      "autoscaling": { "minReplicas": 2, "maxReplicas": 10, "targetCPU": 70 }
    }
  }'

The output is a complete, multi-document YAML file; the payment-api-config ConfigMap referenced below is part of the same output, elided here for length:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-api
  labels:
    app: payment-api
    version: "1.0.0"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payment-api
  template:
    metadata:
      labels:
        app: payment-api
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 2000
      containers:
        - name: payment-api
          image: gcr.io/my-project/payment-api:latest
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: payment-api-config
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: payment-api
spec:
  selector:
    app: payment-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payment-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payment-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

The output uses the current autoscaling/v2 API (not the deprecated v2beta2), sets both liveness and readiness probes (different paths — readiness gates traffic, liveness triggers restarts), and enforces the same non-root user in the pod security context that the Dockerfile sets. The consistency between Dockerfile and Kubernetes manifest is something you'd have to maintain manually when working from separate templates.

Helm Charts with Sensible Defaults

Kubernetes manifests work fine for a single environment. As soon as you have staging and production with different image tags, replica counts, and resource limits, you want templating. Helm is the standard answer, but writing a Helm chart from scratch means understanding Go templating syntax well enough to avoid the pitfalls that make Helm charts frustrating to maintain.

bash
curl -s -X POST https://api.crowvault.ai/v1/tools/call \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "server": "devops-mcp",
    "tool": "generate_helm_chart",
    "args": {
      "service": "payment-api",
      "port": 3000,
      "environments": ["staging", "production"],
      "withIngress": true,
      "withSecrets": true
    }
  }'

The generated values.yaml establishes the full configuration surface of the chart — every field that varies between environments is a value, not a hardcoded string in a template:

yaml
# values.yaml — defaults (override per environment)
replicaCount: 1

image:
  repository: gcr.io/my-project/payment-api
  tag: "latest"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: payment.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: payment-api-tls
      hosts:
        - payment.example.com

resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

secrets:
  existingSecret: ""   # set to use an external secret manager ref

Per-environment overrides are clean: values-production.yaml sets replicaCount: 3 and the production image tag; values-staging.yaml sets replicaCount: 1 and disables autoscaling. The chart structure is generated with _helpers.tpl following Helm's naming conventions, so it integrates cleanly with ArgoCD or Flux without modification.
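A production override might look like the following — a sketch, not generated output; the only fields a real override should set are the ones exposed in the values.yaml above:

yaml
# values-production.yaml — only the deltas from values.yaml
replicaCount: 3

image:
  tag: "1.4.2"        # a pinned release tag instead of latest

autoscaling:
  minReplicas: 3
  maxReplicas: 20

Deploying then becomes helm upgrade --install payment-api ./payment-api -f values-production.yaml, and the staging file works the same way with its own deltas.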

The Complete DevOps Pipeline

A Dockerfile, Kubernetes manifests, and a Helm chart cover the deployment target. You still need the pipeline that builds, tests, and deploys to it. The generate_github_actions tool produces a complete workflow file: build the Docker image, push to your registry, update the Helm values with the new image tag, and deploy to your cluster via helm upgrade --install. The generate_gitlab_ci tool does the same for GitLab pipelines with stages, caches, and environment-specific deployment gates.
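The call shape matches the other generators. As a sketch — the argument names beyond service are assumptions, since the workflow tools' schemas aren't shown here — a request might look like:

```shell
# Hypothetical generate_github_actions call; "registry" is an assumed
# argument name, not a confirmed field of the API.
PAYLOAD='{
  "server": "devops-mcp",
  "tool": "generate_github_actions",
  "args": {
    "service": "payment-api",
    "registry": "gcr.io/my-project"
  }
}'

# Only hit the API when a token is configured
if [ -n "${TOKEN:-}" ]; then
  curl -s -X POST https://api.crowvault.ai/v1/tools/call \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
fi
```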

For teams running service meshes, generate_istio_config produces VirtualService and DestinationRule manifests for traffic splitting and canary deployments. For security-conscious teams, generate_network_policy produces Kubernetes NetworkPolicy objects that restrict ingress and egress to only what the service explicitly needs — useful for compliance in regulated environments.

All of these tools are available on the same API. You can script a complete service scaffold — Dockerfile, K8s manifests, Helm chart, GitHub Actions workflow — in a single shell script that runs in under a minute. Every new service starts from the same baseline, which means code review can focus on what's different about this service rather than whether someone remembered to set resource limits.
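A minimal version of that scaffold script might look like this — a sketch that trims each call to the common fields for brevity (real calls would pass the tool-specific args shown earlier), and prints the request payloads as a dry run when no token is set:

```shell
#!/bin/sh
# Sketch: run all four generators for one service in a loop.
SERVICE="payment-api"
mkdir -p "scaffold/$SERVICE"

for TOOL in generate_dockerfile generate_k8s_deployment \
            generate_helm_chart generate_github_actions; do
  payload=$(printf '{"server":"devops-mcp","tool":"%s","args":{"service":"%s","port":3000}}' \
            "$TOOL" "$SERVICE")
  if [ -n "${TOKEN:-}" ]; then
    # Save each tool's output next to the service it belongs to
    curl -s -X POST https://api.crowvault.ai/v1/tools/call \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      -d "$payload" > "scaffold/$SERVICE/$TOOL.json"
  else
    echo "$payload"   # dry run: print the request instead of sending it
  fi
done
```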

The Developer plan gives you 500 tool calls per month — enough to cover the full DevOps configuration for several services. The API documentation covers all 26 devops tools including generate_argocd_app, generate_terraform_module, generate_prometheus_config, and optimize_docker_image for analyzing and reducing image size. Create an account and generate your first production Dockerfile in two minutes.