Overview

Runners are responsible for executing your containerized application in a target environment. Buildr supports multiple execution backends, allowing you to run the same code locally for development or deploy to Kubernetes for production.

Runner Selection

Runners are selected using a factory pattern based on the executor parameter in runtime options:
runner/__init__.py:5-11
def select(spec):
    if spec == 'docker':
        return DockerRunner()
    if spec == 'metaparticle':
        return MetaparticleRunner()

    raise Exception('Unknown spec {}'.format(spec))
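The dispatch can be exercised in isolation; a minimal self-contained sketch using stub runner classes (the real classes live in docker_runner.py and metaparticle.py):

```python
# Stub runners standing in for the real DockerRunner / MetaparticleRunner.
class DockerRunner:
    pass

class MetaparticleRunner:
    pass

def select(spec):
    if spec == 'docker':
        return DockerRunner()
    if spec == 'metaparticle':
        return MetaparticleRunner()
    raise Exception('Unknown spec {}'.format(spec))

runner = select('docker')
print(type(runner).__name__)  # DockerRunner
```

Any unrecognized executor string falls through to the exception, so a typo in runtime options fails fast rather than silently defaulting.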

  • DockerRunner: runs containers locally using the Docker Engine (default)
  • MetaparticleRunner: deploys to Kubernetes using the Metaparticle compiler

DockerRunner

The DockerRunner executes containers on your local Docker daemon, ideal for development and testing.

Implementation

docker_runner.py:8-61
class DockerRunner:
    def __init__(self):
        self.docker_client = None

    def run(self, img, name, options):
        if self.docker_client is None:
            self.docker_client = APIClient(version='auto')

        ports = []
        host_config = None

        # Prepare port configuration
        if options.ports is not None and len(options.ports) > 0:
            for port_number in options.ports:
                ports.append(port_number)

            host_config = self.docker_client.create_host_config(
                port_bindings={p: p for p in ports}
            )

        # Launch docker container
        container = self.docker_client.create_container(
            img,
            name=name,
            ports=ports,
            host_config=host_config,
        )
        self.docker_client.start(container=container.get('Id'))

        self.container = container

        logger.info('Starting container {}'.format(container))

    def logs(self, *args, **kwargs):
        if self.docker_client is None:
            self.docker_client = APIClient(version='auto')

        # seems like we are hitting bug
        # https://github.com/docker/docker-py/issues/300
        log_stream = self.docker_client.logs(
            self.container.get('Id'),
            stream=True,
            follow=True
        )

        for line in log_stream:
            logger.info(line)

    def cancel(self, name):
        if self.docker_client is None:
            self.docker_client = APIClient(version='auto')
        self.docker_client.kill(self.container.get('Id'))
        self.docker_client.remove_container(self.container.get('Id'))

Running Containers

The run() method creates and starts a Docker container:
docker_runner.py:12-39
def run(self, img, name, options):
    if self.docker_client is None:
        self.docker_client = APIClient(version='auto')

    ports = []
    host_config = None

    # Prepare port configuration
    if options.ports is not None and len(options.ports) > 0:
        for port_number in options.ports:
            ports.append(port_number)

        host_config = self.docker_client.create_host_config(
            port_bindings={p: p for p in ports}
        )

    # Launch docker container
    container = self.docker_client.create_container(
        img,
        name=name,
        ports=ports,
        host_config=host_config,
    )
    self.docker_client.start(container=container.get('Id'))

    self.container = container

    logger.info('Starting container {}'.format(container))
1. Initialize Docker Client

Lazily initializes the Docker API client with automatic version negotiation
2. Configure Port Mappings

If ports are specified in runtime options, creates port bindings that map container ports to host ports:
# For ports=[8080, 9000]:
port_bindings = {8080: 8080, 9000: 9000}
3. Create Container

Creates the container with:
  • img: Image to run
  • name: Container name
  • ports: Exposed ports
  • host_config: Port bindings configuration
4. Start Container

Starts the created container using its ID
5. Store Reference

Saves the container reference for log streaming and cancellation

Port Mapping

Port configuration publishes each listed container port on the same host port:
@Containerize(
    package={'name': 'web', 'repository': 'myrepo'},
    runtime={'ports': [8080]}
)
def main():
    # Container port 8080 maps to host port 8080
    # Access at http://localhost:8080
    ...
Currently, DockerRunner maps each container port to the same port number on the host. Custom port mapping (e.g., container:8080 → host:3000) is not yet supported.
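docker-py's port_bindings dict already supports asymmetric mappings ({container_port: host_port}), so such a feature would mostly be a parsing question. A hypothetical helper (not part of Buildr) that would accept "container:host" strings alongside plain numbers:

```python
# Hypothetical helper (not in Buildr): build a docker-py port_bindings dict
# from plain port numbers and "container:host" strings.
def parse_port_bindings(ports):
    bindings = {}
    for spec in ports:
        if isinstance(spec, str) and ':' in spec:
            container_port, host_port = spec.split(':', 1)
            bindings[int(container_port)] = int(host_port)
        else:
            # Plain number: publish on the same port, as DockerRunner does today.
            bindings[int(spec)] = int(spec)
    return bindings

print(parse_port_bindings([8080, '9000:3000']))  # {8080: 8080, 9000: 3000}
```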

Log Streaming

The logs() method streams container output to the console:
docker_runner.py:41-54
def logs(self, *args, **kwargs):
    if self.docker_client is None:
        self.docker_client = APIClient(version='auto')

    # seems like we are hitting bug
    # https://github.com/docker/docker-py/issues/300
    log_stream = self.docker_client.logs(
        self.container.get('Id'),
        stream=True,
        follow=True
    )

    for line in log_stream:
        logger.info(line)
stream=True returns a generator that yields log lines as they’re produced, rather than waiting for the container to exit.
follow=True keeps the stream open, continuously tailing logs like docker logs -f.
The code comments reference docker-py issue #300, related to log streaming behavior. Despite the bug, the current implementation works correctly for most use cases.
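Note that a streaming logs() call in docker-py yields raw bytes, so callers typically decode each chunk before printing; a small sketch with a simulated stream (stream_lines is an illustrative name, not a Buildr API):

```python
# docker-py's logs(stream=True) yields bytes; decode before logging.
def stream_lines(raw_stream, encoding='utf-8'):
    for chunk in raw_stream:
        yield chunk.decode(encoding, errors='replace').rstrip('\n')

# Simulated stream standing in for docker_client.logs(..., stream=True)
fake_stream = [b'starting server\n', b'listening on :8080\n']
print(list(stream_lines(fake_stream)))  # ['starting server', 'listening on :8080']
```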

Cancellation

The cancel() method stops and removes the container:
docker_runner.py:56-61
def cancel(self, name):
    if self.docker_client is None:
        self.docker_client = APIClient(version='auto')
    self.docker_client.kill(self.container.get('Id'))
    self.docker_client.remove_container(self.container.get('Id'))
1. Kill Container

Sends SIGKILL to the container process (immediate termination)
2. Remove Container

Deletes the stopped container and its filesystem
kill() forcefully terminates the container without graceful shutdown. Consider using stop() for graceful termination if your application needs cleanup time.
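A hedged sketch of such a variant: docker-py's stop() sends SIGTERM and escalates to SIGKILL only after a timeout (in seconds). The names cancel_gracefully and FakeClient are illustrative, not Buildr APIs; the fake client just records the call sequence so the sketch runs without a Docker daemon.

```python
# Illustrative alternative to cancel(): SIGTERM with a grace period, then remove.
def cancel_gracefully(client, container_id, grace_seconds=10):
    # docker-py's stop() sends SIGTERM, then SIGKILL after `timeout` seconds.
    client.stop(container_id, timeout=grace_seconds)
    client.remove_container(container_id)

# Fake client to show the call sequence without a Docker daemon.
class FakeClient:
    def __init__(self):
        self.calls = []
    def stop(self, cid, timeout):
        self.calls.append(('stop', cid, timeout))
    def remove_container(self, cid):
        self.calls.append(('remove', cid))

client = FakeClient()
cancel_gracefully(client, 'abc123', grace_seconds=5)
print(client.calls)  # [('stop', 'abc123', 5), ('remove', 'abc123')]
```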

MetaparticleRunner

The MetaparticleRunner deploys your application to Kubernetes using the Metaparticle compiler.

Implementation

metaparticle.py:6-63
class MetaparticleRunner:
    def cancel(self, name):
        subprocess.check_call(['mp-compiler', '-f', '.metaparticle/spec.json', '--delete'])

    def logs(self, name):
        subprocess.check_call(['mp-compiler', '-f', '.metaparticle/spec.json', '--deploy=false', '--attach=true'])

    def ports(self, portArray):
        result = []
        for port in portArray:
            result.append({
                'number': port,
                'protocol': 'TCP'
            })
        return result

    def run(self, img, name, options):
        svc = {
            "name": name,
            "guid": 1234567,
        }

        if options.replicas > 0 or options.shardSpec is not None:
            svc["services"] = [
                {
                    "name": name,
                    "replicas": options.replicas,
                    "shardSpec": options.shardSpec,
                    "containers": [
                        {"image": img}
                    ],
                    "ports": self.ports(options.ports)
                }
            ]
            svc["serve"] = {
                "name": name,
                "public": options.public
            }

        if options.jobSpec is not None:
            svc["jobs"] = [
                {
                    "name": name,
                    "replicas": options.jobSpec['iterations'],
                    "containers": [
                        {"image": img}
                    ]
                }
            ]

        if not os.path.exists('.metaparticle'):
            os.makedirs('.metaparticle')

        with open('.metaparticle/spec.json', 'w') as out:
            json.dump(svc, out)

        subprocess.check_call(['mp-compiler', '-f', '.metaparticle/spec.json'])

Kubernetes Deployment

The run() method generates a Metaparticle specification and deploys to Kubernetes:
1. Create Base Spec

Initializes the Metaparticle spec with the service name and a (currently hard-coded) GUID:
metaparticle.py:23-26
svc = {
    "name": name,
    "guid": 1234567,
}
2. Add Service Configuration

If replicas or sharding are specified, creates a service deployment:
metaparticle.py:28-43
if options.replicas > 0 or options.shardSpec is not None:
    svc["services"] = [
        {
            "name": name,
            "replicas": options.replicas,
            "shardSpec": options.shardSpec,
            "containers": [
                {"image": img}
            ],
            "ports": self.ports(options.ports)
        }
    ]
    svc["serve"] = {
        "name": name,
        "public": options.public
    }
3. Add Job Configuration

If jobSpec is provided, creates a Kubernetes Job:
metaparticle.py:45-54
if options.jobSpec is not None:
    svc["jobs"] = [
        {
            "name": name,
            "replicas": options.jobSpec['iterations'],
            "containers": [
                {"image": img}
            ]
        }
    ]
4. Write Spec File

Saves the spec to .metaparticle/spec.json:
metaparticle.py:56-60
if not os.path.exists('.metaparticle'):
    os.makedirs('.metaparticle')

with open('.metaparticle/spec.json', 'w') as out:
    json.dump(svc, out)
5. Deploy to Kubernetes

Invokes the Metaparticle compiler to deploy:
metaparticle.py:62
subprocess.check_call(['mp-compiler', '-f', '.metaparticle/spec.json'])

Deployment Modes

Long-running service with replicas:
@Containerize(
    package={'name': 'web', 'repository': 'myrepo'},
    runtime={
        'executor': 'metaparticle',
        'replicas': 3,
        'ports': [8080],
        'public': True
    }
)
def main():
    # Deployed as Kubernetes Deployment with 3 replicas
    # Service exposed publicly
    ...
Generates:
{
  "name": "web",
  "guid": 1234567,
  "services": [{
    "name": "web",
    "replicas": 3,
    "shardSpec": null,
    "containers": [{"image": "myrepo/web:latest"}],
    "ports": [{"number": 8080, "protocol": "TCP"}]
  }],
  "serve": {"name": "web", "public": true}
}

Port Configuration

Metaparticle converts port numbers to Kubernetes port specs:
metaparticle.py:13-20
def ports(self, portArray):
    result = []
    for port in portArray:
        result.append({
            'number': port,
            'protocol': 'TCP'
        })
    return result
# Input:
runtime={'ports': [8080, 9000]}

# Output:
[
    {'number': 8080, 'protocol': 'TCP'},
    {'number': 9000, 'protocol': 'TCP'}
]
Currently only TCP protocol is supported. UDP ports require manual Kubernetes manifest editing.
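Note also that ports() iterates its argument directly, so a runtime spec with replicas but no ports would pass None and raise a TypeError. A defensive variant (an illustrative sketch, not the shipped code):

```python
# Defensive variant of ports(): tolerate a missing port list.
def ports(port_array):
    return [{'number': p, 'protocol': 'TCP'} for p in (port_array or [])]

print(ports([8080, 9000]))
# [{'number': 8080, 'protocol': 'TCP'}, {'number': 9000, 'protocol': 'TCP'}]
print(ports(None))  # []
```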

Log Attachment

The logs() method attaches to running pods without redeploying:
metaparticle.py:10-11
def logs(self, name):
    subprocess.check_call(['mp-compiler', '-f', '.metaparticle/spec.json', '--deploy=false', '--attach=true'])
  • --deploy=false: Skip deployment (assume already deployed)
  • --attach=true: Attach to logs of existing pods

Cleanup

The cancel() method deletes Kubernetes resources:
metaparticle.py:7-8
def cancel(self, name):
    subprocess.check_call(['mp-compiler', '-f', '.metaparticle/spec.json', '--delete'])
Deletes all resources created from the spec (Deployments, Services, Jobs, etc.).

Complete Examples

Local Docker Development

from metaparticle_pkg import Containerize
from six.moves import SimpleHTTPServer, socketserver

@Containerize(
    package={'name': 'dev-server', 'repository': 'myrepo'},
    runtime={
        'executor': 'docker',  # Run locally
        'ports': [8080]
    }
)
def main():
    handler = SimpleHTTPServer.SimpleHTTPRequestHandler
    httpd = socketserver.TCPServer(("", 8080), handler)
    print('Server running on http://localhost:8080')
    httpd.serve_forever()

if __name__ == '__main__':
    main()
1. First Run

Builds image and runs container locally
$ python server.py
# Builds myrepo/dev-server:latest
# Runs container with port 8080 mapped
# Access at http://localhost:8080
2. Subsequent Runs

Uses cached image (faster startup)
3. Stop

Press Ctrl+C to kill and remove the container
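Ctrl+C raises KeyboardInterrupt in the foreground process, which the decorator can translate into a cancel() call. A stub-based sketch of that flow (StubRunner and the event names are illustrative, not Buildr internals):

```python
# Stub runner that records lifecycle events; logs() simulates Ctrl+C
# arriving while the log stream is being tailed.
class StubRunner:
    def __init__(self):
        self.events = []
    def run(self):
        self.events.append('run')
    def logs(self):
        raise KeyboardInterrupt  # user presses Ctrl+C mid-stream
    def cancel(self):
        self.events.append('cancel')

runner = StubRunner()
try:
    runner.run()
    runner.logs()
except KeyboardInterrupt:
    runner.cancel()

print(runner.events)  # ['run', 'cancel']
```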

Kubernetes Production Deployment

from metaparticle_pkg import Containerize
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Kubernetes!'

@Containerize(
    package={
        'name': 'prod-api',
        'repository': 'gcr.io/myproject',
        'publish': True  # Push to GCR
    },
    runtime={
        'executor': 'metaparticle',  # Deploy to K8s
        'replicas': 5,
        'ports': [5000],
        'public': True  # Create LoadBalancer service
    }
)
def main():
    app.run(host='0.0.0.0', port=5000)

if __name__ == '__main__':
    main()
1. Build &amp; Push

Builds and pushes image to Google Container Registry
2. Generate Spec

Creates .metaparticle/spec.json with 5 replicas
3. Deploy

Metaparticle compiler creates:
  • Kubernetes Deployment (5 pods)
  • LoadBalancer Service (public IP)
4. Monitor

Streams logs from all replicas

Choosing a Runner

Use DockerRunner When

  • Developing locally
  • Testing changes quickly
  • Running on a single machine
  • No scaling requirements
  • Debugging containerization issues

Use MetaparticleRunner When

  • Deploying to production
  • Need horizontal scaling
  • Require high availability
  • Want zero-downtime updates
  • Running distributed systems

Best Practices

Use the same container image in both environments:
import os

# Switch executor based on environment
executor = 'metaparticle' if os.getenv('ENV') == 'production' else 'docker'

@Containerize(
    package={'name': 'app', 'repository': 'myrepo'},
    runtime={'executor': executor}
)
def main():
    ...
Handle SIGTERM in your application for clean shutdowns:
import signal
import sys

def shutdown_handler(signum, frame):
    print('Shutting down gracefully...')
    # Close connections, save state, etc.
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown_handler)
For Kubernetes deployments, set resource limits in your Metaparticle spec:
{
  "containers": [{
    "image": "myrepo/app:latest",
    "resources": {
      "limits": {"cpu": "500m", "memory": "512Mi"},
      "requests": {"cpu": "100m", "memory": "128Mi"}
    }
  }]
}
Implement health check endpoints for Kubernetes:
@app.route('/health')
def health():
    return {'status': 'healthy'}, 200

Next Steps

Architecture

Understand the complete lifecycle

Containerization

Learn about image building
