Cloud Computing & Conway’s Law

In a previous post, PaaS as change enabler, I mentioned Conway's law as a difficulty to be addressed when implementing a cloud solution.

organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations

Recently, Gartner analyst Thomas J. Bittman, in Problems Encountered by 95% of Private Clouds, researched which problems his clients were suffering with private clouds and found that 95% of them have problems with their solutions.

Amazon AWS (Private Clouds are things of the past) and other cloud providers/evangelists will parrot that “private clouds are inherently broken”, but I can’t follow the logic behind those claims.

The problems encountered are related to the use of the technology, not the technology itself, and most of them will occur when implementing public, private or hybrid clouds.

If I had to extract a title, it’d be “Companies don’t fully understand Cloud Computing“: they map their expertise, their knowledge and their organizational model onto the cloud paradigm (hence Conway’s law) without fully understanding or committing to all the consequences of a cloud model.

Anyway, those detected problems are critical. Addressing them marks the difference between a successful project and a failure for any company, and it usually involves a change of mindset, which is what cloud computing is really about. Without that paradigm shift, there may be no difference between a cloud and an advanced virtualization setup.

Docker: production usefulness

Goal: answer whether Docker is viable and ready for production service.

Short Answer: NO

Not by a long shot. It’s explained perfectly here: Docker Misconceptions.

This excerpt alone carries a lot of weight:

To run Docker in a safe robust way for a typical multi-host production environment requires very careful management of many variables:

  • secured private image repository (index)
  • orchestrating container deploys with zero downtime
  • orchestrating container deploy roll-backs
  • networking between containers on multiple hosts
  • managing container logs
  • managing container data (db, etc)
  • creating images that properly handle init, logs, etc
  • much much more…

Long Answer: Not alone.

The short answer still applies; I just wanted to confirm it myself to get a broad picture of where Docker is and what its capabilities are: what it offers and what it lacks.

How:

After migrating our servers to the Poland RDC, we are decommissioning most of the servers in the local country. Some of the remaining services can be consolidated onto fewer servers, so I’ve dockerized as microservices the few I’m still maintaining here: a Zabbix proxy server, its PostgreSQL RDBMS, and an R development environment for daily statistics of our mainframe and backend app.
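Consolidating those services onto one host boils down to a handful of docker run invocations. A minimal sketch of that setup; the image names and host paths here are hypothetical placeholders, not the actual ones used:

```shell
# PostgreSQL backing store for the Zabbix proxy, data kept on the host
docker run -d --name zabbix-db \
  -v /srv/zabbix-db:/var/lib/postgresql/data postgres:9.3

# Zabbix proxy, linked to its database container
docker run -d --name zabbix-proxy --link zabbix-db:db \
  -p 10051:10051 zabbix-proxy-image

# R development environment for the daily statistics jobs
docker run -d --name r-stats -v /srv/stats:/data r-stats-image
```

Each service becomes an independent container that can be stopped, upgraded or moved without touching the others, which is precisely the consolidation goal.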

Why:

Well, LI/LM will eventually remove our root access to our servers, red line… whatever… and I also dislike our Linux and Solaris patching process. A Docker image provisioning service looks like a solution: why shouldn’t applications and country business services be delivered that way?

Also, someone mentioning Docker as a replacement for configuration management software (the blog entry ‘Why docker? Why not Chef’ in particular) piqued my interest; a solution able to replace Puppet or Chef needs to be considered.

Never forget:

Our goal is to deploy applications in production flawlessly and with accountability: the same application that was developed, tested and approved, and later to maintain them in an HA environment. Continuous delivery aims at perfecting the change management process, and every new tool in the chain works toward that goal.

Where is Docker:

  1. It’s a clever idea to separate the application template (a Docker image), the application instance, configured and running (a container), and its data (a data container or volume fs).
  2. A Dockerfile is an easy and quick way to bootstrap an image and a container (the final product).
  3. The Docker registry and GitHub are big repositories of images and Dockerfiles, and private registries are easy to set up. A nice idea for accessing, sharing, and improving images.
  4. Self-contained, minimalist, good performance, easy to launch even on Windows or Mac OS X with boot2docker.
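Points 1 and 2 can be illustrated in a few lines. A minimal sketch, with a hypothetical throwaway image (not one of the services mentioned above):

```shell
# Dockerfile — the application template (the image)
cat > Dockerfile <<'EOF'
FROM debian:wheezy
RUN apt-get update && apt-get install -y nginx
VOLUME /var/www
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF

docker build -t mysite .             # build the image (the template)
docker run -d --name web mysite      # a container (the configured, running instance)
docker run -d --name web2 \
  -v /srv/www:/var/www mysite        # same template, data kept outside the container
```

The template/instance/data split is what makes the bootstrap so quick: one Dockerfile, and every container stamped from it starts identically.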

Where isn’t Docker:

  1. It doesn’t scale beyond a developer box and is not designed for automated processing. It needs orchestration, configuration and discovery services to be able to roll containers, configure them dynamically, and know how and where to call their linked services. None of that information lives in Docker, and it needs to be provided before trying to implement anything minimally serious.
  2. There is no change management accountability. A Dockerfile builds an image; modifying a container and committing it upgrades an image. There are some tools and commands to review the commits and differences (docker history and docker diff, mainly), but they are very limited. Two pulled images are identical, but there is no way to reproduce two identical images from scratch from the same Dockerfile. For me this is a no-no: the environments, their containers and the runtime must be reproducible from step 1 to N. Images could be compromised, images can be out of sync with their containers, and a container looks like a black box when you don’t know what happened after several iterations or when multiple people upgrade it. There is no documentation of the configuration inside them. It looks backwards, but it makes sense to use images that launch a Puppet/Chef/Ansible process for their configuration (which also tackles the first point, in a way).
  3. The image/Docker registries are (almost) a joke. Those images are the install, next, next, next of old, and looking for an image that fits your requirements isn’t viable. Dockerfiles look easier to set up than cookbooks/playbooks, but they are also chaotic and lack a framework behind them (there is a reason for Spring’s existence, isn’t there?). Containers/images don’t match the “one size fits all” idea: internationalization is a must for us, and support for locales and timezones needs to be addressed. That means forking and/or a private hub. So the idea that I can pull an image from the official Docker Hub and be ready, and that I will stay updated and patched just by syncing, doesn’t ring true (except in a trivial/demo scenario). UPDATE: Attack on Wildfly shows how to extend an image.
  4. The persistence data model comes from an evil, twisted, tortured mind. The volume syntax, the mount mapping, and the recursive idea of data containers in pursuit of the portability holy grail look like a hack. Actually, it’s a nice, well-thought-out concept, provided there already exists a provisioning service that supplies the storage, IOPS, redundancy, backups, etc., and a configuration service for inventorying and handling the storage needs; the problem is that Docker neither provides nor expects one. It may be enough (even perfect) from a developer’s POV, but if our POV is focused on HA and performance, then Docker alone isn’t enough.
  5. Monitoring and logging these microservices is another issue. It’s a question of how the images are built, but the minimalist approach of some official repos, without sysvinit/systemd, syslog, supervisors or monitoring agent processes (aren’t they micro?), needs a rethink in a production environment. That functionality must be provided somehow, balancing portability, performance, and integration/management of all of them in a central hub, while preserving the microservices concept and its performance.
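The limited audit trail mentioned in point 2 is easy to demonstrate. A sketch, assuming the hypothetical mysite image and web container from before (the commands exist; their output depends on the image’s history):

```shell
docker history mysite        # the layers that built the image, one line per step
docker diff web              # files added (A), changed (C) or deleted (D) in the container
docker commit web mysite:v2  # snapshots the drift into a new image — with no record of why
```

After a few commit cycles, history shows opaque layer IDs rather than reproducible build steps, which is exactly the black-box problem described above.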
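The data-container pattern criticized in point 4 looks roughly like this. A sketch with hypothetical names, assuming nothing beyond stock images:

```shell
# a container whose only purpose is to own a volume
docker run --name dbdata -v /var/lib/postgresql/data busybox true

# the actual database reuses that volume
docker run -d --name db --volumes-from dbdata postgres:9.3

# upgrading: replace the db container, the data container survives
docker rm -f db
docker run -d --name db --volumes-from dbdata postgres:9.3
```

It works, but notice that nothing here says where the volume physically lives, how it is backed up, or what IOPS it gets; that is the provisioning/configuration service Docker assumes someone else provides.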

Where to go:

At least to another blog post; there’s enough material for another entry. Docker is a cog in all the projects involving ?AAS (which already gives a sense of the Docker project’s dimension; it’s a key technology). The ecosystem is boiling with projects and potential solutions, more or less polished, though all of them needing heavy integration/tailoring: OpenStack, Heat, Mesos, ZooKeeper, Marathon, Puppet/Chef/Ansible, Jenkins, Kubernetes, OpenShift, Fig, Orchard, and so on…

My next blog entry, Attack on Wildfly, shows some answers to these problems and many of Docker’s virtues, and there is a third one in the works that tackles many of these problems using Kubernetes.