Before writing a more detailed blog entry with a real, production-ready example of dockerized micro-services (using free/open tools; most of the work and design would carry over to external cloud providers like GCP or AWS ECS), I'll introduce another key technology in the ecosystem: the container management engine.
From a previous post: Docker: production usefulness
Running Docker in a safe, robust way in a typical multi-host production environment requires very careful management of many variables:
- secured private image repository (index)
- orchestrating container deploys with zero downtime
- orchestrating container deploy roll-backs
- networking between containers on multiple hosts
- managing container logs
- managing container data (db, etc)
- creating images that properly handle init, logs, etc
- much much more…
Time to eat my words (or my quotes): let me present Kubernetes.
A brief summary of what it is: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/DESIGN.md
Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes uses Docker to package, instantiate, and run containerized applications.
Kubernetes enables users to ask a cluster to run a set of containers. The system automatically chooses hosts to run those containers on.
The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on.
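As a minimal sketch of what "resource requirements" means in practice, each container in a pod can declare CPU and memory requests and limits, and the scheduler uses the requests when choosing a host. The manifest below uses today's v1 API schema (the API has evolved since this post was written), and the names and values are illustrative:

```yaml
# Illustrative pod manifest (v1 API); names and values are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:          # what the scheduler considers when picking a host
        cpu: "250m"      # a quarter of a CPU core
        memory: "128Mi"
      limits:            # hard caps enforced on the chosen host
        cpu: "500m"
        memory: "256Mi"
```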
The atomic element for Kubernetes is the pod.
Pods simplify application deployment and management by providing a higher-level abstraction than the raw, low-level container interface. Pods serve as units of deployment and horizontal scaling/replication. Co-location, fate sharing, coordinated replication, resource sharing, and dependency management are handled automatically.
A pod corresponds to a co-located group of Docker containers with shared volumes.
Pods facilitate data sharing and communication among their constituents.
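To make the shared-volume idea concrete, here is a hedged sketch of a pod whose two containers exchange data through an `emptyDir` volume mounted in both. It uses the current v1 API schema; container names, images, and paths are illustrative:

```yaml
# Two co-located containers sharing an emptyDir volume (illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod
spec:
  volumes:
  - name: shared          # pod-scoped scratch volume, lives as long as the pod
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "tail -F /data/out.log"]
    volumeMounts:
    - name: shared
      mountPath: /data
```

Because both containers are scheduled onto the same host as a unit, the volume gives them a common filesystem without any cross-host coordination.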
Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located, co-managed helper programs, such as:
- content management systems, file and data loaders, local cache managers, etc.
- log and checkpoint backup, compression, rotation, snapshotting, etc.
- data change watchers, log tailers, logging and monitoring adapters, event publishers, etc.
- proxies, bridges, and adapters
- controllers, managers, configurators, and updaters
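As one sketch of the proxy/adapter pattern above: containers in a pod share a single network namespace, so the application can reach a co-located helper over localhost. The image names below (`my-app`, a pgbouncer-style connection pooler) are hypothetical placeholders:

```yaml
# App plus a co-located proxy helper (illustrative).
# Containers in a pod share one network namespace,
# so the app reaches the proxy via 127.0.0.1.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
  - name: app
    image: my-app            # hypothetical application image
    env:
    - name: DB_HOST
      value: "127.0.0.1"     # talks to the sidecar, not the database directly
    - name: DB_PORT
      value: "6432"
  - name: db-proxy
    image: pgbouncer         # hypothetical connection-pooling proxy image
    ports:
    - containerPort: 6432
```

The application stays unaware of the real database topology; swapping or reconfiguring the proxy is a change to the pod spec, not to the app.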