Category Archives: Cloud Computing

PaaS as change enabler

In a previous post, Some dead ends to be acknowledged the next year, I set “No SSH in user space” as a goal for 2015.

For a sysadmin it’s a mind-blowing motto (it means abandoning decades of practice), but few people would understand the change it involves, not only technologically but also conceptually. Actually, it’s not even a goal in itself; it’s a byproduct of a change of paradigm.

Alas, pursuing it directly would be the worst way to achieve it; there’s no way to sell it as such (except maybe if you’re in Oracle sales).
What needs to be sold is the idea: the paradise of a continuous delivery utopia, invention-driven and with low-profile bureaucracy (wait, it can be done, sign me up!).

First, we should sell the idea that other cultures are possible and that they are great: the future, the way to go. “Engineering culture at Spotify, part 1 and part 2” explains this new age really well; check the videos, they are fun and the ideas in them are great.

Second, this paradigm, this new way of working, involves a change of mentality. People must move with the times.

If people are sold on it (I already am, btw), then finally I’d be able to sell a technology change that would enable the paradigm change.

A brief mind map of the technology solution involved (maybe it’s not the only way to achieve it, but at the moment it’s the most fun):

[mind map image]

The technology follows a micro-services paradigm. This way we get high performance, isolation (which allows for a better agile process, more parallelization and quicker releases… check the videos), resilience and fault tolerance, and the ability to meet elastic demand. With those features I’m able to support that paradigm change.

How do I get those features? Well, not with the traditional (standalone/virtualized) server approach; an IaaS for provisioning and a PaaS for delivery are needed.

  • A (private/public) cloud (for provisioning and economies of scale) provides the elasticity needed.
  • The container model/engine (Docker, for example) provides isolation, performance and development friendliness, while allowing fault tolerance through quick and easy availability and fast fleet deployment (see the sketch after this list).
  • The PaaS must provide all the facilities needed to run those containers (which is really complex: they are ephemeral, there are a lot of them, and there are a lot of fleets).
  • Data handling is another issue, for better or worse. It must be acknowledged.
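
As an illustration of the isolation point above, here is a minimal, hypothetical sketch in the docker-compose (v1) file format: two micro-services packaged as separate containers, each one replaceable and releasable on its own (the service and image names are made up):

  # two independently built and released micro-services (hypothetical images)
  api:
    image: example/api:1.0      # owns its own release cycle
    expose:
      - "8000"                  # only reachable from linked containers
  web:
    image: example/web:1.0
    ports:
      - "8080:8080"             # the only port published to the outside
    links:
      - api                     # talks to the api service over its API

Each service can be scaled, restarted or replaced without touching the other, which is exactly the isolation and parallelization the new process needs.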

 

So after this, I’d have met my goal of no more SSH (in user space), because each container runs a reduced set of tasks (usually just one). But the mental process involved in getting there isn’t straightforward.

Anyway, there are important points left that need to be considered.

  • The applications need to be highly decoupled to be split into micro-services. It isn’t an easy task, even less so with legacy software. The apps need to be designed and programmed to run in cloud environments.
  • The complexity increases brutally; it is shifted from the app to the ecosystem. It’s reduced or controlled in two ways:
    • With an organization where each team is responsible for all the areas of a micro-service.
    • Infrastructure as Code, so the IaaS and PaaS become just another program to manage (with the same checks and agile procedures as the rest of the code); see the sketch after this list.
  • Cooperation and isolation barriers: one micro-service, one island. Team play, even though each team has a dedicated role, is the way to resolve it, along with an intelligent use of APIs.
  • And of course the difficulties of distributed computing and Conway’s Law.
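
To make the Infrastructure as Code bullet a bit more tangible, here is a deliberately simplistic, hypothetical Ansible-style sketch (the host group, image and tasks are made up); the only point is that the infrastructure definition is plain text, lives in version control and goes through the same reviews and checks as the application code:

  # hypothetical playbook: the PaaS nodes are described as code
  - hosts: paas_nodes
    tasks:
      - name: ensure Docker is installed
        apt: name=docker.io state=present
      - name: run the api micro-service container (crude, non-idempotent example)
        command: docker run -d --name api example/api:1.0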

Kubernetes pre-101 (brief introduction)

Before a more detailed blog entry with a real, production-ready example of dockerized micro-services (using free/open tools; most of the work and design would be reused if we moved to external cloud providers like GCP or AWS ECS), I’ll introduce another key technology in the ecosystem: the container management engine.

From a previous post: Docker: production usefulness

To run Docker in a safe, robust way for a typical multi-host production environment requires very careful management of many variables:

  • secured private image repository (index)
  • orchestrating container deploys with zero downtime
  • orchestrating container deploy roll-backs
  • networking between containers on multiple hosts
  • managing container logs
  • managing container data (db, etc)
  • creating images that properly handle init, logs, etc
  • much much more…

Time to eat my words (or my quotes); let’s present Kubernetes:

A brief summary of what it is: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/DESIGN.md

Kubernetes is a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications

Kubernetes uses Docker to package, instantiate, and run containerized applications.

Kubernetes enables users to ask a cluster to run a set of containers. The system automatically chooses hosts to run those containers on.

The scheduler needs to take into account individual and collective resource requirements, quality of service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, deadlines, and so on
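
As a hedged sketch of what “asking the cluster to run a set of containers” looks like (written against the current v1 API, which is newer than the one available when Kubernetes was announced; the names, labels and image are hypothetical), here is a pod definition declaring the resource and placement constraints the scheduler takes into account:

  apiVersion: v1
  kind: Pod
  metadata:
    name: api
    labels:
      app: api
  spec:
    nodeSelector:
      disktype: ssd             # example of a hardware/policy constraint
    containers:
      - name: api
        image: example/api:1.0  # hypothetical image
        resources:
          requests:             # the scheduler uses these to pick a host
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi

Submitted with kubectl create -f, the scheduler picks a node with enough free CPU and memory that also matches the node selector, and runs the container there.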

The atomic element for Kubernetes is the pod.

Pods simplify application deployment and management by providing a higher-level abstraction than the raw, low-level container interface. Pods serve as units of deployment and horizontal scaling/replication. Co-location, fate sharing, coordinated replication, resource sharing, and dependency management are handled automatically.

A pod corresponds to a co-located group of Docker containers with shared volumes.

Pods facilitate data sharing and communication among their constituents.

Their use:

Pods can be used to host vertically integrated application stacks, but their primary motivation is to support co-located, co-managed helper programs, such as:

  • content management systems, file and data loaders, local cache managers, etc.
  • log and checkpoint backup, compression, rotation, snapshotting, etc.
  • data change watchers, log tailers, logging and monitoring adapters, event publishers, etc.
  • proxies, bridges, and adapters
  • controllers, managers, configurators, and updaters
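
As a hedged sketch of that helper-program pattern (again against the current v1 API; names and images are hypothetical), here is a pod co-locating an application container with a log-tailing sidecar, sharing the log directory through a pod volume:

  apiVersion: v1
  kind: Pod
  metadata:
    name: web-with-log-tailer
  spec:
    volumes:
      - name: logs
        emptyDir: {}            # pod-scoped scratch space shared by both containers
    containers:
      - name: web
        image: example/web:1.0  # hypothetical application image
        volumeMounts:
          - name: logs
            mountPath: /var/log/web
      - name: log-tailer        # co-located helper that tails the app’s log
        image: busybox
        command: ["sh", "-c", "tail -F /var/log/web/access.log"]
        volumeMounts:
          - name: logs
            mountPath: /var/log/web

Both containers land on the same host, share the pod’s network namespace and the logs volume, and start and stop together: that is the co-location and fate sharing described above.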