Mindreframer

Docker.io - Build your own PaaS

by Roman Heinrich

Docker.io

https://github.com/dotcloud/docker

Articles:

IRC log:

https://botbot.me/freenode/docker/2013-03-18/?tz=Europe/Helsinki

ASCIICASTs:

- http://ascii.io/~kencochrane (by kencochrane) (http://kencochrane.net/)

https://news.ycombinator.com/item?id=5408002

Interesting quotes from the discussion:

shykes:

Docker is the direct result of dotCloud's experience running hundreds of thousands of containers in production over the last 2 years. We tried very hard to put it in a form factor which makes it useful beyond the traditional PaaS.
We think Docker's API is a fundamental building block for running any process on the server.
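
A minimal sketch of what that looks like with the early Docker CLI (`base` was the default public image at the time; exact commands vary by version):

    # Fetch a small base image from the public registry
    docker pull base

    # Run an arbitrary command in a fresh, isolated container
    docker run base /bin/echo "hello from a container"

    # List containers
    docker ps -a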

shykes:

Think of a Docker container as a universal build artifact. Your build step can be an arbitrary sequence of unix commands (download dependencies, build libssl, run shell scripts etc.). Docker can freeze the result of that build and guarantee that it will run in a repeatable and self-contained way, no matter where you run it.
So you get clean separation of build and run, which is a hugely important part of reliable deployment.
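
A hedged sketch of that build/run split with the early CLI; the `myapp/frozen` repository name and the build step here are hypothetical:

    # Build phase: run an arbitrary sequence of unix commands inside a container
    docker run base sh -c "apt-get update && apt-get -y install libssl-dev"

    # Freeze the result as an immutable image (container ID from `docker ps -a`)
    docker commit <container_id> myapp/frozen

    # Run phase: the frozen image runs identically wherever Docker runs
    docker run myapp/frozen sh -c "echo built once, run anywhere"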

shykes:

Other benefits: Docker images are basically tarballs, which makes them much smaller than full virtual machine images.
And, importantly, Docker maintains a filesystem-level diff between versions of an image, and only needs to transmit each diff once. So you get tremendous bandwidth savings when transmitting multiple images created from the same base.
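
To make the diff/base reuse concrete (image names are hypothetical, and the exact commands depend on the Docker version):

    # An image is a stack of filesystem diffs; list the layers it is built from
    docker history myapp/frozen

    # Images that share a base only transfer their common layers once
    docker pull base          # fetches the base layers
    docker pull myapp/frozen  # reuses cached base layers, fetches only the diff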

shykes:

Docker solves the problem of running things in a repeatable and infrastructure-agnostic way. Mesos solves the problem of telling many nodes what to run and when.
In other words, Docker + Mesos is a killer combo. There is already experimentation underway to use Docker as an execution engine for Mesos.

derefr:

It's a white-labelling of dotCloud's implementation of Heroku's "slug" concept (https://devcenter.heroku.com/articles/slug-compiler): basically, a SquashFS image with a known SHA, storing a precompiled runtime+libraries+code artifact that will never change, able to be union-mounted atop a "base image" (a chroot filesystem, possibly also a known-SHA SquashFS image), then spun up as an ephemeral LXC container. I actually use the idea in my own ad-hoc deployment process; these images are very convenient for ensuring repeatability.
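
A minimal sketch of that recipe done by hand, assuming hypothetical image paths and aufs as the union filesystem (branch syntax differs for other union filesystems):

    # Freeze a built artifact as an immutable image with a known SHA
    mksquashfs ./app-rootfs app.sqsh
    sha1sum app.sqsh

    # Mount the base image and the app image read-only
    mkdir -p /mnt/base /mnt/app /mnt/union /tmp/rw
    mount -t squashfs -o loop base.sqsh /mnt/base
    mount -t squashfs -o loop app.sqsh /mnt/app

    # Union-mount a writable tmpfs over the read-only layers
    mount -t tmpfs tmpfs /tmp/rw
    mount -t aufs -o br=/tmp/rw=rw:/mnt/app=ro:/mnt/base=ro none /mnt/union

    # Spin up an ephemeral environment rooted in the union
    chroot /mnt/union /bin/sh
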
(As a side-note, this is an example of an interesting bit of game theory: in a niche, the Majority Player will tend to keep their tech proprietary to stay ahead, while the Second String will tend to release everything OSS in order to remove the Majority Player's advantages. This one is dotCloud taking a stab at Heroku, but you can also think of, for example, Atlassian--who runs Github-competitor Bitbucket--poking at Github by releasing a generic Git GUI client, whereas Github released a Github client.)

trotsky:

Container-based virtualization can provide an impressive amount of isolation while dramatically improving density over full virtualization for light-duty loads. Solaris Zones are very well regarded and are used for multi-tenancy by Joyent, and many Linux hosts provide multi-tenant solutions based on Virtuozzo, which predates Linux containers by a good number of years.
The main theoretical difference between hypervisor isolation and container isolation is where the boundary sits: a hypervisor sits below the guest kernel, so a kernel-level exploit only compromises a single virtual machine. With containers you're relying on the shared kernel to provide the isolation, so you are still subject to (some) kernel-level exploits.
In practice, Linux containers (the mainline implementation) have only provided full isolation in recent patches and probably shouldn't be considered fully shaken out for something like in-the-wild, root-level multi-tenant access.
They are great for application isolation when delivering multiple single-tenant workloads on one machine, though - something people use hypervisors for quite a bit. The resources used can be a small fraction of what you're committing to with a hypervisor.

laumars:

A very bad description would be: "containers are essentially what you get when you cross virtualisation with chroot".
I give that description because containers solve a lot of the problems that often push people toward the virtualisation route. To the guest "OS", applications appear to be running on a machine distinct from the host. Containers differ from virtualisation, however, in that there is just one OS (one shared kernel). This means you can only run one kind of OS (though if you know what you're doing, you can run multiple different Linux distros in different containers - but you couldn't run FreeBSD or Windows inside a Linux container). Containers can also have their own resources and network interfaces (both virtual devices and dedicated hardware passed through).
Because you're not virtualising hardware and you're only running one OS, containers have performance advantages over virtualisation while still being just as secure. So I personally think they're a massively underrated and underutilised solution.
If you're interested in investigating containers a little more, Linux also has OpenVZ, and FreeBSD and Solaris have Jails and Zones (respectively). The Wikipedia articles on each of them offer some good details (despite the stigma attached to Wikipedia entries).

lukeschlather:

Personally, I think the biggest value these days of para-virtualization like this is in development. I can run twenty or so different applications on the same physical machine and, for the most part (as long as they're idle, since I'm only working with one at a time), I don't even notice they're running.

bradrydzewski:

This is still technically a virtualization technique, known as "operating system-level virtualization". http://en.wikipedia.org/wiki/Operating_system-level_virtuali...
Here are some of the technologies explained:
- cgroups: a Linux kernel feature that allows resource limiting and metering, as well as process isolation. The process isolation (provided by the closely related kernel feature, namespaces) is important because it prevents a process from seeing or terminating other running processes.
- lxc: a utility that glues together cgroups and chroots to provide virtualization (see the sketch below). It helps you easily set up a guest OS by downloading your favorite distro and unpacking it (kind of like debootstrap). It can then "boot" the guest OS by starting its "init" process. The init process runs in its own namespace, inside a chroot. This is why they call LXC a chroot on steroids: it does everything that chroot does, with full process isolation and metering.
- aufs: sometimes called a "stacked" file system; it allows you to mount one file system on top of another. Why is this important? Because if you are managing a large number of virtual machines, each with a 1 GB+ OS image, they use a lot of disk space. Also, the slowest part of creating a new container is copying the distro (it can take up to 30 seconds). Using something like AUFS gives you much better performance.
So what about security? Well, like every (relatively) new technology, LXC has its issues. If you use Ubuntu 12.04, it ships a set of AppArmor profiles to mitigate known security risks (for example, disabling reboot/shutdown commands inside containers and write access to the /sys filesystem).
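
To make the cgroups and LXC pieces above concrete, a minimal sketch assuming a 2013-era cgroup v1 layout and the stock LXC templates (paths and template names vary by distro):

    # cgroups: create a group, cap its memory, then move this shell into it
    mkdir /sys/fs/cgroup/memory/demo
    echo 256M > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
    echo $$ > /sys/fs/cgroup/memory/demo/tasks

    # lxc: download and unpack a guest distro (like debootstrap), then
    # "boot" it by starting its init process in its own namespaces + chroot
    lxc-create -n guest -t ubuntu
    lxc-start -n guest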

laumars:

Containers have existed about as long as virtual machines have. FreeBSD implemented "Jails" back around 2000; Linux also has OpenVZ and Solaris has Zones.
If you couple a container with a CoW file system that supports snapshotting (e.g. ZFS or Btrfs), then you get most of the features you'd expect from virtualisation, but without as heavy a footprint.
Containers are an underrated and often forgotten solution in my opinion.
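
For illustration, with hypothetical subvolume/dataset names, cloning a container root from a snapshot is nearly instant on either filesystem:

    # Btrfs: copy-on-write snapshot of a container's root subvolume
    btrfs subvolume snapshot /containers/base /containers/web01

    # ZFS: snapshot the base dataset, then clone it into a writable one
    zfs snapshot tank/containers/base@golden
    zfs clone tank/containers/base@golden tank/containers/web01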

Companies using Docker (besides dotCloud)

More about LXC: