
Secure by Default

Docker Sandboxes & Docker Compose Meets AI

Marco Franzon · Trento · 2026

A True Story

"I gave Claude Code full access to my laptop…
it deleted my node_modules in 3 seconds."

Now Watch This


docker sandbox run claude ~/my-project
  

Same agent. Same power. Zero risk to your host.

Quick Poll

Who has run an AI coding agent locally
without any isolation?

🙋 Raise your hand, no judgment (yet)

The Thesis

In the age of AI agents that run arbitrary code,
Docker gives us secure-by-default isolation,
no manual hardening required.

Docker Sandboxes + Compose = the new developer superpower

Why "Secure by Default" Matters for AI

  • AI agents are powerful but dangerous
  • Hallucinations → rm -rf /
  • Prompt injection → data exfiltration
  • Agents install packages, run scripts, modify files
  • They need autonomy, but we need boundaries

The Agent Landscape (2026)

  • Claude Code (Anthropic)
  • Codex CLI (OpenAI)
  • Gemini CLI (Google)
  • GitHub Copilot CLI
  • OpenCode, Kiro, cagent…

All of them want to: read files, run commands, install packages, build & test.

Containers: The Original Sandbox

Isolation Layers

  • Namespaces (PID, NET, MNT, USER)
  • cgroups (resource limits)
  • Dropped capabilities
  • seccomp / AppArmor / SELinux

Hardening Options

  • Rootless mode
  • Read-only filesystems
  • --no-new-privileges
  • User remapping

The Problem with Containers

Containers share the host kernel

A kernel exploit inside a container = host compromise

For untrusted AI-generated code, containers alone are not enough.

You don't want to type --cap-drop=ALL --security-opt=no-new-privileges --read-only by hand every single time.
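Written out in full, that manual hardening looks something like this sketch (the image name is a placeholder, and the resource limits are illustrative):

```shell
# Illustrative only: a locked-down one-off container for untrusted code.
# "untrusted-agent:latest" is a placeholder image name.
docker run --rm \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  --memory 4g --cpus 2 \
  --network none \
  untrusted-agent:latest
```

Six flags to remember, every time, for every container. Forget one and the default is open.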

Compose: Security Built-in


services:
  agent-executor:
    image: my-agent:latest
    read_only: true
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 4G
    networks:
      - isolated
    secrets:
      - api_key

networks:
  isolated:
    internal: true

secrets:
  api_key:
    environment: "AGENT_API_KEY"
  

Docker Sandboxes

The MicroVM Revolution

Docker Desktop 4.58+ · macOS & Windows

Each agent runs in a lightweight microVM
with its own private Docker daemon

Architecture

HOST MACHINE

MicroVM (Sandbox)

Private Docker Daemon

Isolated from host

AI Agent

Claude Code, Codex…

📁 Workspace sync

Bidirectional file sync

🌐 Network proxy

HTTP/HTTPS filtered

macOS: virtualization.framework · Windows: Hyper-V

Security Properties

  • Separate kernel: no shared kernel exploits
  • Private Docker daemon: no socket escape
  • Bidirectional file sync: only your workspace, not volume mounts
  • Network filtering proxy: HTTP/HTTPS only, raw TCP/UDP blocked
  • No access to host localhost, private networks, cloud metadata
  • Instant cleanup: docker sandbox rm

Network Policies

Blocked by Default

  • 10.0.0.0/8
  • 172.16.0.0/12
  • 192.168.0.0/16
  • 127.0.0.0/8 (localhost)
  • 169.254.0.0/16 (cloud metadata)

Two Modes

  • Allow mode: all except blocked CIDRs
  • Deny mode: only allowed hosts

docker sandbox network proxy \
  my-sandbox \
  --policy deny \
  --allow-host "*.npmjs.org"

"YOLO Mode", But Safe

Agents run unsupervised, no permission pop-ups

They can:

  • Edit code, install packages
  • docker build inside the sandbox
  • docker compose up full stacks
  • Run tests, linters, formatters

All safely isolated. You sleep at night.

🎬 Live Demo


# Create a sandbox for your project
docker sandbox run claude ~/my-project

# The agent edits code, installs packages, runs tests
# All inside the microVM, zero host impact

# Check running sandboxes
docker sandbox ls

# Peek inside
docker sandbox exec -it claude-my-project bash

# Clean up
docker sandbox rm claude-my-project
  

Isolation Approaches Compared

Approach          Isolation     Agent Can Docker?   DX     Secure by Default?
Plain Docker      Low           Yes (risky)         Great  No
gVisor / Kata     Medium/High   Limited             OK     Almost
Full VM           High          Yes                 Poor   Yes
Docker Sandboxes  High          Yes (safe)          Best   Yes

Docker Compose Meets AI

Compose isn't just for web apps anymore.

New AI-native patterns:

  • Models + MCP gateways + tools in one compose.yaml
  • Sandboxed agent executors alongside your services
  • Agents spinning up full Compose stacks for integration testing
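Put together, one compose.yaml might wire all three patterns at once. A sketch (service names and images are placeholders, not official artifacts; the short model syntax injects LLM_URL / LLM_MODEL as shown later):

```yaml
# Sketch only: images and service names are illustrative.
services:
  mcp-gateway:
    image: my-mcp-gateway          # placeholder MCP gateway image
    networks: [isolated]

  agent-executor:
    image: my-agent:latest
    read_only: true
    cap_drop: [ALL]
    models: [llm]                  # short syntax: injects LLM_URL / LLM_MODEL
    depends_on: [mcp-gateway]
    networks: [isolated]

models:
  llm:
    model: ai/smollm2

networks:
  isolated:
    internal: true
```

One file: model, tools, and a hardened executor, brought up with a single docker compose up.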

Docker Model Runner

Run AI models as first-class Compose services

Docker Desktop 4.41+ · Compose v2.35+

  • Pull & run LLMs with docker compose up
  • OpenAI-compatible API, zero config
  • Auto-injected env vars for endpoints
  • Embeddings, chat, deterministic modes

Models in Compose

The models Top-Level Element


models:
  llm:
    model: ai/smollm2
    context_size: 4096
  embeddings:
    model: ai/all-minilm

services:
  app:
    image: my-ai-app
    models: [llm, embeddings]
      

Short syntax: auto-generated env vars

  • LLM_URL
  • LLM_MODEL
  • EMBEDDINGS_URL
  • EMBEDDINGS_MODEL

Your app just reads the env vars and talks to the model. No setup needed.
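For instance, a service could talk to the model with nothing but curl (a sketch, assuming LLM_URL is the OpenAI-compatible API base URL exposing /chat/completions):

```shell
# Sketch: call the model via its OpenAI-compatible API using only the
# env vars Compose injected (LLM_URL, LLM_MODEL).
: "${LLM_URL:?injected by Compose}" "${LLM_MODEL:?injected by Compose}"

curl -s "${LLM_URL%/}/chat/completions" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"${LLM_MODEL}\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"
```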

Model Runner: Fine-Tuning

Custom Env Vars (Long Syntax)


services:
  app:
    image: my-app
    models:
      llm:
        endpoint_var: AI_MODEL_URL
        model_var: AI_MODEL_NAME
      

Runtime Presets


models:
  # Deterministic (evals, CI)
  eval-model:
    model: ai/smollm2
    context_size: 4096
    runtime_flags:
      - "--temp"
      - "0"
      - "--top-k"
      - "1"
  # Creative (content gen)
  creative-model:
    model: ai/smollm2
    runtime_flags:
      - "--temp"
      - "1"
      - "--top-p"
      - "0.9"
      

Local Models + Sandboxes

Run coding agents inside sandboxes backed by a local model: no cloud, no API keys.

docker-compose.yml


services:
  agent:
    image: my-coding-agent
    models:
      llm:
        endpoint_var: OPENAI_BASE_URL
        model_var: OPENAI_MODEL

models:
  llm:
    model: ai/qwen2.5-coder
      

opencode config


{
  "model": "qwen2.5-coder",
  "providers": {
    "docker-model-runner": {
      "name": "Docker Model Runner",
      "baseURL": "${OPENAI_BASE_URL}",
      "models": {
        "qwen2.5-coder": {
          "name": "qwen2.5-coder"
        }
      }
    }
  }
}
      

Try It Today


# 1. Update Docker Desktop to 4.58+
# 2. Run your first sandbox
docker sandbox run claude ~/your-project
# 3. Ship safer agents tomorrow
  

Docker Model Runner:


# Pull and run a local model
docker model pull ai/llama3.2
# Chat with it directly
docker model run ai/llama3.2 "Explain Docker sandboxes in one sentence"
  

Resources:

Thank You!

Questions?

🐳 Secure by default. Ship with confidence.