Runners are responsible for executing your containerized application in a target environment. Buildr supports multiple execution backends, allowing you to run the same code locally for development or deploy to Kubernetes for production.
```python
import logging

from docker import APIClient

logger = logging.getLogger(__name__)


class DockerRunner:
    def __init__(self):
        self.docker_client = None

    def run(self, img, name, options):
        if self.docker_client is None:
            self.docker_client = APIClient(version='auto')

        ports = []
        host_config = None

        # Prepare port configuration
        if options.ports is not None and len(options.ports) > 0:
            for port_number in options.ports:
                ports.append(port_number)
            host_config = self.docker_client.create_host_config(
                port_bindings={p: p for p in ports}
            )

        # Launch docker container
        container = self.docker_client.create_container(
            img,
            name=name,
            ports=ports,
            host_config=host_config,
        )
        self.docker_client.start(container=container.get('Id'))
        self.container = container
        logger.info('Starting container {}'.format(container))

    def logs(self, *args, **kwargs):
        if self.docker_client is None:
            self.docker_client = APIClient(version='auto')
        # seems like we are hitting bug
        # https://github.com/docker/docker-py/issues/300
        log_stream = self.docker_client.logs(
            self.container.get('Id'),
            stream=True,
            follow=True
        )
        for line in log_stream:
            logger.info(line)

    def cancel(self, name):
        if self.docker_client is None:
            self.docker_client = APIClient(version='auto')
        self.docker_client.kill(self.container.get('Id'))
        self.docker_client.remove_container(self.container.get('Id'))
```
The run() method creates and starts a Docker container:
docker_runner.py:12-39
```python
def run(self, img, name, options):
    if self.docker_client is None:
        self.docker_client = APIClient(version='auto')

    ports = []
    host_config = None

    # Prepare port configuration
    if options.ports is not None and len(options.ports) > 0:
        for port_number in options.ports:
            ports.append(port_number)
        host_config = self.docker_client.create_host_config(
            port_bindings={p: p for p in ports}
        )

    # Launch docker container
    container = self.docker_client.create_container(
        img,
        name=name,
        ports=ports,
        host_config=host_config,
    )
    self.docker_client.start(container=container.get('Id'))
    self.container = container
    logger.info('Starting container {}'.format(container))
```
1. **Initialize Docker Client**: lazily initializes the Docker API client with automatic version negotiation.
2. **Configure Port Mappings**: if ports are specified in the runtime options, creates port bindings that map each container port to the same host port:

   ```python
   # For ports=[8080, 9000]:
   port_bindings = {8080: 8080, 9000: 9000}
   ```

3. **Create Container**: creates the container with:
   - `img`: the image to run
   - `name`: the container name
   - `ports`: the exposed ports
   - `host_config`: the port bindings configuration
4. **Start Container**: starts the created container using its ID.
5. **Store Reference**: saves the container reference for later log streaming and cancellation.
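The port-binding construction used by `run()` can be sketched in isolation (a minimal sketch; `build_port_bindings` is a hypothetical helper name, not part of DockerRunner):

```python
def build_port_bindings(ports):
    # Identity mapping: each container port binds to the same host port,
    # mirroring the dict run() passes to create_host_config(port_bindings=...)
    return {p: p for p in ports}

print(build_port_bindings([8080, 9000]))  # {8080: 8080, 9000: 9000}
```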
Port configuration creates identity bindings: each container port is published on the same port number on the host.

**Single Port**

```python
@Containerize(
    package={'name': 'web', 'repository': 'myrepo'},
    runtime={'ports': [8080]})
def main():
    # Container port 8080 maps to host port 8080
    # Access at http://localhost:8080
    ...
```

**Multiple Ports**

```python
@Containerize(
    package={'name': 'app', 'repository': 'myrepo'},
    runtime={'ports': [8080, 9000, 3000]})
def main():
    # All three ports mapped to the same port numbers on the host
    # http://localhost:8080
    # http://localhost:9000
    # http://localhost:3000
    ...
```

**No Ports**

```python
@Containerize(
    package={'name': 'worker', 'repository': 'myrepo'}
    # No runtime.ports specified
)
def main():
    # Container runs isolated, no external access
    # Good for background workers, batch jobs
    ...
```
Currently, DockerRunner maps each container port to the same port number on the host. Custom port mapping (e.g., container:8080 → host:3000) is not yet supported.
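For reference, docker-py's `port_bindings` dict is keyed by container port with the host port as the value, so a non-identity mapping is only a different dict away (an illustrative sketch of the data shape; such support is not wired into DockerRunner):

```python
# Current DockerRunner behavior: identity bindings
identity_bindings = {p: p for p in [8080, 9000]}

# What custom mapping support could pass instead:
# container port 8080 exposed on host port 3000
custom_bindings = {8080: 3000}

print(identity_bindings)  # {8080: 8080, 9000: 9000}
print(custom_bindings)    # {8080: 3000}
```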
The logs() method streams container output to the console:
docker_runner.py:41-54
```python
def logs(self, *args, **kwargs):
    if self.docker_client is None:
        self.docker_client = APIClient(version='auto')
    # seems like we are hitting bug
    # https://github.com/docker/docker-py/issues/300
    log_stream = self.docker_client.logs(
        self.container.get('Id'),
        stream=True,
        follow=True
    )
    for line in log_stream:
        logger.info(line)
```
Stream Mode
stream=True returns a generator that yields log lines as they’re produced, rather than waiting for the container to exit.
Follow Mode
follow=True keeps the stream open, continuously tailing logs like docker logs -f.
Known Issue
The code comments reference docker-py issue #300, related to log streaming behavior. Despite the bug, the current implementation works correctly for most use cases.
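Because `logs()` simply iterates the generator the client returns, its consumption loop can be exercised without a Docker daemon (a sketch; `fake_log_stream` is a stand-in for `docker_client.logs(..., stream=True, follow=True)`, which yields raw bytes):

```python
def fake_log_stream():
    # Stand-in for the docker-py log generator: yields each
    # log line as bytes, as the container produces output
    yield b'starting server\n'
    yield b'listening on :8080\n'

# Same consumption pattern as DockerRunner.logs(),
# printing instead of calling logger.info()
for line in fake_log_stream():
    print(line.decode().rstrip())
```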
The cancel() method stops and removes the container:
docker_runner.py:56-61
```python
def cancel(self, name):
    if self.docker_client is None:
        self.docker_client = APIClient(version='auto')
    self.docker_client.kill(self.container.get('Id'))
    self.docker_client.remove_container(self.container.get('Id'))
```
1
Kill Container
Sends SIGKILL to the container process (immediate termination)
2
Remove Container
Deletes the stopped container and its filesystem
kill() forcefully terminates the container without graceful shutdown. Consider using stop() for graceful termination if your application needs cleanup time.
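A gentler variant could call docker-py's `stop()`, which sends SIGTERM and only falls back to SIGKILL after a timeout. A sketch, verified against a fake client rather than a live daemon (`graceful_cancel` and `_FakeClient` are illustrative names, not part of Buildr):

```python
class _FakeClient:
    """Minimal stand-in for docker.APIClient that records calls."""
    def __init__(self):
        self.calls = []

    def stop(self, container, timeout=None):
        # Real APIClient.stop() sends SIGTERM, then SIGKILL after `timeout`
        self.calls.append(('stop', container, timeout))

    def remove_container(self, container):
        self.calls.append(('remove', container))


def graceful_cancel(client, container_id, timeout=10):
    # Like DockerRunner.cancel(), but with a grace period for cleanup
    client.stop(container_id, timeout=timeout)
    client.remove_container(container_id)


client = _FakeClient()
graceful_cancel(client, 'abc123')
print(client.calls)  # [('stop', 'abc123', 10), ('remove', 'abc123')]
```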
Use the same container image in both environments by selecting the executor at deploy time:
```python
import os

# Switch executor based on environment
executor = 'metaparticle' if os.getenv('ENV') == 'production' else 'docker'

@Containerize(
    package={'name': 'app', 'repository': 'myrepo'},
    runtime={'executor': executor})
def main():
    ...
```
Graceful Shutdown
Handle SIGTERM in your application for clean shutdowns:
```python
import signal
import sys

def shutdown_handler(signum, frame):
    print('Shutting down gracefully...')
    # Close connections, save state, etc.
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown_handler)
```
Resource Limits
For Kubernetes deployments, set resource limits in your Metaparticle spec: