Worker Configuration

Workers are defined in YAML files. Each file describes one worker.

Worker Types

  • Managed: has a docker: section. Mecha controls the Docker container lifecycle.
  • Adapter: has an adapter: section. Mecha runs an in-process adapter translating a native LLM API.
  • Unmanaged: has an endpoint: field. Mecha just calls it.

Managed Worker (Docker)

```yaml
name: claude-reviewer
docker:
  image: mecha-worker:latest            # required
  cwd: /path/to/project                 # host dir → /workspace in container
  resources:
    cpu: 4                              # CPU cores
    memory: 8G                          # memory limit (M or G)
    pids: 256                           # process limit
  lifecycle: persistent                 # "persistent" (default) or "disposable"
  env:                                  # environment variables
    CLAUDE_MODEL: claude-sonnet-4-6
    CLAUDE_SYSTEM_PROMPT: "You review code."
    CLAUDE_ALLOWED_TOOLS: "Read,Grep,Glob,Bash"
    CLAUDE_EFFORT: high
  credentials: [claude]                 # subscription credential mounts (read-only)
  # token: claude.xiaolaidev            # from ~/.mecha/secrets.yml (mutually exclusive with credentials)
  plugins:                              # Claude Code plugins installed at start
    - pr-review-toolkit
  labels:                               # custom Docker labels
    team: security
timeout: 30m                            # task timeout
```

Fields

| Field | Required | Default | Description |
|---|---|---|---|
| `name` | Yes |  | Unique worker name. Must match `[a-zA-Z0-9][a-zA-Z0-9_.-]*` |
| `docker.image` | Yes |  | Docker image to run |
| `docker.cwd` | No |  | Host directory mounted read-write to `/workspace` |
| `docker.resources.cpu` | No | unlimited | CPU cores |
| `docker.resources.memory` | No | unlimited | Memory limit (`512M`, `4G`) |
| `docker.resources.pids` | No | unlimited | Max processes |
| `docker.lifecycle` | No | `persistent` | `persistent` (reuse container) or `disposable` (new container per task) |
| `docker.host` | No | local socket | Docker daemon URL (e.g. `unix:///var/run/docker.sock`) |
| `docker.env` | No | `{}` | Environment variables passed to the container |
| `docker.token` | No |  | Token reference from `~/.mecha/secrets.yml` |
| `docker.expose` | No | `false` | Bind to `0.0.0.0` instead of `127.0.0.1` (network-accessible) |
| `docker.api_key` | No |  | Bearer auth key for the `/task` endpoint. Required when `expose: true` |
| `docker.credentials` | No | `[]` | CLI credential mounts, read-only (`[claude]`, `[codex]`, or `[claude, codex]`) |
| `docker.plugins` | No | `[]` | Claude Code plugins installed at container start |
| `docker.plugin_marketplaces` | No | `[]` | Plugin marketplace URLs added before plugin install |
| `docker.labels` | No | `{}` | Custom Docker container labels |
| `timeout` | No | `10m` | Max task execution time |

Workspace Mount

When `docker.cwd` is set, the directory is bind-mounted read-write into the container at `/workspace`. The container runs as your host user (matching UID/GID) to avoid permission issues.

```yaml
docker:
  cwd: /Users/me/projects/my-repo    # must be an existing directory
```

The path is validated:

  • Must exist and be a directory (not a file)
  • Symlinks are resolved before checking (no traversal)
  • Sensitive host paths are blocked:
| Blocked Paths | Reason |
|---|---|
| `/etc`, `/proc`, `/sys`, `/dev`, `/boot` | System directories |
| `$HOME` (home directory itself) | Contains sensitive subdirs |
| `~/.ssh`, `~/.gnupg` | Credential stores |
| `~/.aws`, `~/.config/gcloud` | Cloud credentials |
| `~/.mecha` | Mecha's own config and secrets |
| `~/.claude`, `~/.codex`, `~/.gemini` | CLI credentials (allowed via `docker.credentials`) |

`mecha doctor` re-checks these paths for workers already in the registry.

Disposable (One-Shot) Containers

Set `lifecycle: disposable` to create a fresh container per task. The container is destroyed after the task completes.

```yaml
name: sandbox-runner
docker:
  image: mecha-worker:latest
  lifecycle: disposable
  token: claude.xiaolaidev
timeout: 10m
```

  • persistent (default): container stays running, reused across tasks
  • disposable: new container per task, destroyed after completion

Disposable workers don't need `worker start` — the dispatch loop creates containers on demand.

Adapter Worker

Adapters translate native LLM APIs (Ollama, vLLM, OpenAI-compatible) into the mecha worker contract. They run in-process — no Docker required.

```yaml
name: local-llm
adapter:
  type: ollama                       # "ollama" or "openai"
  upstream: http://localhost:11434   # base URL of the LLM API
  model: gemma2:9b                   # model name
timeout: 10m
```

Adapter Types

| Type | Upstream API | Health Check | Task Endpoint |
|---|---|---|---|
| `ollama` | Ollama `/api/chat` | `GET /` | Chat completions |
| `openai` | OpenAI-compatible `/v1/chat/completions` | `GET /v1/models` | Chat completions |

OpenAI-Compatible Example

Works with vLLM, LiteLLM, llama.cpp server, or any OpenAI-compatible API:

```yaml
name: vllm-worker
adapter:
  type: openai
  upstream: http://gpu-server:8000
  model: meta-llama/Llama-3-70b
  api_key: ${VLLM_API_KEY}          # optional
timeout: 15m
```

Fields

| Field | Required | Description |
|---|---|---|
| `adapter.type` | Yes | `ollama` or `openai` |
| `adapter.upstream` | Yes | Base URL of the LLM API |
| `adapter.model` | Yes | Model name passed to the API |
| `adapter.api_key` | No | API key for authenticated endpoints |

Mecha starts an in-process HTTP server when you run `worker start`. The adapter translates the worker contract (`GET /health`, `POST /task`) into native API calls.
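For the `openai` adapter type, the translation amounts to something like the sketch below. This is not Mecha's code; the helper names are invented, and only the `/task` fields documented later in this page are assumed:

```python
import json

def task_to_chat_request(task: dict, model: str) -> dict:
    """Translate a POST /task body into an OpenAI-style chat completion request."""
    content = task["prompt"]
    if task.get("context"):
        # Fold the task context into the user message (one plausible strategy).
        content += "\n\nContext:\n" + json.dumps(task["context"], indent=2)
    return {"model": model, "messages": [{"role": "user", "content": content}]}

def chat_to_task_response(chat: dict, model: str) -> dict:
    """Translate a chat completion back into the worker result contract."""
    return {
        "output": chat["choices"][0]["message"]["content"],
        "metadata": {"model": model},
    }
```

The adapter's job is then just plumbing: receive the task JSON, POST the translated payload to `upstream`, and wrap the completion text in the result contract.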

Unmanaged Worker

```yaml
name: my-ollama
endpoint: http://100.64.0.3:11434
timeout: 5m
```

Mecha doesn't manage the process. It just marks the worker online on start, probes `GET /health`, and calls `POST /task` on the endpoint.

Worker Image Contract

Every managed worker image must:

| Requirement | Details |
|---|---|
| Port | Expose `8080` |
| Health | `GET /health` → `200` (ready) or `503` (busy) |
| Task | `POST /task` → result contract JSON |
| Healthcheck | Include a `HEALTHCHECK` directive in the Dockerfile |
| Config | Read all config from environment variables |
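A minimal entrypoint satisfying this contract can be sketched with the Python standard library. This is a sketch under assumptions: the echo "agent" in `run_task` is a placeholder for whatever backend the real image invokes, and the ephemeral port binding exists only so the sketch can be exercised in-process (a real image binds port 8080 per the contract):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

busy = threading.Event()

def run_task(task: dict) -> dict:
    # Placeholder agent: echo the prompt. A real image would call its backend here.
    return {"output": f"echo: {task['prompt']}", "metadata": {"exit_code": 0}}

class WorkerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(503 if busy.is_set() else 200)  # 200 ready, 503 busy
        else:
            self.send_response(404)
        self.end_headers()

    def do_POST(self):
        if self.path != "/task":
            self.send_response(404)
            self.end_headers()
            return
        body = self.rfile.read(int(self.headers["Content-Length"]))
        busy.set()
        try:
            result = run_task(json.loads(body))
        finally:
            busy.clear()
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# Ephemeral port for this demo; the contract requires 0.0.0.0:8080 in the image.
server = HTTPServer(("127.0.0.1", 0), WorkerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```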

POST /task Request

```json
{
  "id": "task-abc123",
  "prompt": "Review this PR for security issues",
  "context": {
    "repo": "owner/repo",
    "diff": "..."
  }
}
```

POST /task Response

```json
{
  "output": "The PR has a SQL injection vulnerability...",
  "metadata": {
    "model": "claude-sonnet-4-6",
    "duration_ms": 45000,
    "exit_code": 0
  }
}
```

Backend-Specific Env Vars

Claude

Claude uses the Agent SDK `query()` directly. Env vars map to SDK options:

| Env Var | SDK Option | Values |
|---|---|---|
| `CLAUDE_MODEL` | `model` | `claude-sonnet-4-6`, `claude-opus-4-6`, etc. |
| `CLAUDE_SYSTEM_PROMPT` | `systemPrompt` | Any string |
| `CLAUDE_ALLOWED_TOOLS` | `allowedTools` | Comma-separated: `Read,Grep,Glob,Bash` |
| `CLAUDE_DISALLOWED_TOOLS` | `disallowedTools` | Comma-separated |
| `CLAUDE_PERMISSION_MODE` | `permissionMode` | Defaults to `bypassPermissions` (SDK default for non-interactive use) |
| `CLAUDE_EFFORT` | `effort` | `low`, `medium`, `high`, `max` |
| `CLAUDE_MAX_BUDGET_USD` | `maxBudgetUsd` | e.g. `5.00` |
| `CLAUDE_MAX_TURNS` | `maxTurns` | e.g. `50` |

Codex (MCP Tool)

Codex runs as an MCP child process inside the Claude worker. It is auto-enabled when `~/.codex/auth.json` is mounted via `credentials: [codex]` or when `CODEX_API_KEY` is set. See Dual-Agent Workers for details.

Gemini

Gemini is not supported as a managed Docker worker — its credential files are encrypted to the host machine. Use Gemini API endpoints as unmanaged workers instead.

State Machine

  • offline: definition exists, container stopped or absent
  • online: container running, health check passing, accepting tasks
  • busy: executing a task (returns 429 to new requests)
  • error: health check failed or container exited
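The transitions between these states are not spelled out here, but one illustrative reading of the list above is the following transition table. The event names are assumptions, not part of Mecha's documented contract:

```python
# Illustrative transition table inferred from the state descriptions above;
# Mecha's actual internal events may differ.
TRANSITIONS = {
    ("offline", "start"):          "online",   # container started, health passes
    ("online",  "dispatch"):       "busy",     # task accepted
    ("busy",    "complete"):       "online",   # task finished, ready again
    ("online",  "health_fail"):    "error",    # health check failed
    ("busy",    "container_exit"): "error",    # container died mid-task
    ("error",   "start"):          "online",   # restart recovers
    ("online",  "stop"):           "offline",
}

def step(state: str, event: str) -> str:
    """Apply an event; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Note that a dispatch to a busy worker leaves it busy: the worker answers 429 and the state does not change.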

Released under the ISC License.