Docker at
La Ruche qui dit Oui !
Summary
- play with environments on demand, code picked from a developer's github branch
- developing with a fat production database
Play with environments on demand
Sandbox and preproduction were not enough
- we love features
- many people would like to see them before production:
  - product owners
  - other developers
  - quality assurance
  - during demo sessions
- we do not have enough environments for everybody
Play with environments on demand
APIs
- playing with an environment is playing with an API
- this API generates docker-compose.yml files from templates
- this API launches docker-compose on these files
- it runs on a docker swarm cluster
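The template step could be sketched like this; the placeholder style, file names, and registry host are assumptions for illustration, not the actual templates used by the API:

```shell
# Minimal sketch: generate a docker-compose.yml from a template.
# __BRANCH__/__NAME__ placeholders and the registry host are illustrative.
cat > compose.yml.tmpl <<'EOF'
version: "2"
services:
  back:
    image: registry.dev.internal/back-web:__BRANCH__
    environment:
      - ENV_NAME=__NAME__
EOF

NAME=demo
BRANCH=master
# Substitute placeholders (the real API does this server-side)
sed -e "s/__BRANCH__/${BRANCH}/" -e "s/__NAME__/${NAME}/" \
    compose.yml.tmpl > docker-compose.yml
```

The generated file pins the image to the requested branch, so each environment can run a different developer's code.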
Play with environments on demand
Every day usage
- we have a cli to play with this API
$ env-on-demand --help
Usage:
/usr/local/bin/env-on-demand [create|delete|details|list|logs]
Options:
--help: prints this help
create: create a new environment
list: list environments
delete: delete an environment
logs: get logs of your environment
details: get info of your environment
Play with environments on demand
Every day usage
- the developer can choose the name of their environment
- they can use fixtures or an anonymized production dump
- an environment has a limited lifetime (a cron script cleans it up)
- every developer has access to this service
- they just need a github token
$ env-on-demand create --help
Usage:
/usr/local/bin/env-on-demand create OPTIONS
Options:
--help: prints this help
--github-token: some github token (please see https://github.com/settings/tokens)
--with-fixtures: use fixtures instead of dump data
--keep-days: number of days the environment should be up for
--name: personalize the name of your environment
--github-back: github back repo you want to deploy
--github-back-branch: github branch you want to deploy
--github-front: github front repo you want to deploy
--github-front-branch: github branch you want to deploy
Example:
- common:
/usr/local/bin/env-on-demand create
--github-token xxxxx (required)
--with-fixtures
--keep-days 5
--name myenv
- back only:
--github-back nclavaud/back-web
--github-back-branch master
- front:
--github-back nclavaud/back-web
--github-back-branch master
--github-front sinewyk/front-web
--github-front-branch master
Play with environments on demand
Every day usage
- to route the traffic to the environment, we have a proxy-as-a-service API
- the env-on-demand API assigns a dedicated port on the docker swarm cluster and calls this API
- the url is based on the name: ${NAME}.env.dev.internal (a DNS wildcard points to the proxy server)
$ env-on-demand create --name demo --github-token **** --keep-days 1
Sending POST {"name"=>"demo", "github-token"=>"****"} to http://envondemand.dev.internal:8080/envs
{
"status": "done",
"urls": [
"demoapi.env.dev.internal",
"demoadmin.env.dev.internal"
]
}
$ env-on-demand list
Sending GET to http://envondemand.dev.internal:8080/envs
[
"demo"
]
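The name-based routing described above could be done with a wildcard server block; a hypothetical nginx sketch (the actual proxy-as-a-service configuration is not shown in the talk, and the port map entries here are invented):

```nginx
# Hypothetical sketch only: map each environment name to the port
# the env-on-demand API registered for it on the swarm cluster.
map $env_name $env_port {
    # entries like these would be written by the proxy-as-a-service API
    demoapi    8081;
    demoadmin  8082;
}

server {
    listen 80;
    # wildcard DNS sends *.env.dev.internal here; capture ${NAME}
    server_name ~^(?<env_name>.+)\.env\.dev\.internal$;

    location / {
        proxy_pass http://swarm.dev.internal:$env_port;
    }
}
```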
Play with environments on demand
Git and docker
- when a container starts on the docker swarm cluster:
  - it clones our github repositories
  - builds the code
  - starts the services
- every configuration value comes from environment variables
- we run our own docker registry using aws s3 as backend
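A hedged sketch of what such a container entrypoint could look like; variable names, repo layout, and make targets are all assumptions, not the actual scripts:

```shell
#!/bin/sh
# Illustrative entrypoint only: GITHUB_REPO, GITHUB_BRANCH and GITHUB_TOKEN
# are assumed to be injected by docker-compose as environment variables.
set -e

# clone the requested branch of the repository
git clone --depth 1 --branch "${GITHUB_BRANCH}" \
    "https://${GITHUB_TOKEN}@github.com/${GITHUB_REPO}.git" /app
cd /app

make build      # build the code
exec make start # start the services as PID 1
```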
How do we develop with a dump when the database is fat
How do we develop with a dump when the database is fat
Too fat, too fast
- we like to develop using a dump
- until the dump is too fat (the metric: whether or not you can go drink a coffee before you get to play with your dump)
How do we develop with a dump when the database is fat
A centralized server
- client side:
  - we access an API to create a database
  - we connect to the created database
  - this is done from the developer machine automatically
$ curl -XPOST -d'{}' http://pg.dev.internal/servers
{"Id":"30c8e143ee0c24aa322d1918ee5dd37f00590ea63bed9c032046ccfaf111f332"}
$ curl -XGET http://pg.dev.internal/servers
[
{"Id":"30c8e143ee0c24aa322d1918ee5dd37f00590ea63bed9c032046ccfaf111f332"},
{"Id":"086d9ca45381db04c0eb75b0ab1f0f5644519b66bc27824b10c3be9133725901"}
]
$ curl -XGET http://pg.dev.internal/servers/30c8e143ee0c24aa322d1918ee5dd37f00590ea63bed9c032046ccfaf111f332
{
"Id":"30c8e143ee0c24aa322d1918ee5dd37f00590ea63bed9c032046ccfaf111f332",
"Name":"/pgondemand_5433","Expires_at":"1464266476",
"Host":"pg.dev.internal","Port":"5433","Status":"Running"
}
$ psql -h pg.dev.internal -p 5433 -U postgres -W
postgres=#
$ curl -XDELETE http://pg.dev.internal/servers/30c8e143ee0c24aa322d1918ee5dd37f00590ea63bed9c032046ccfaf111f332
{"Id":"30c8e143ee0c24aa322d1918ee5dd37f00590ea63bed9c032046ccfaf111f332"}
How do we develop with a dump when the database is fat
We run a database on snapshots
- server side:
  - a BTRFS subvolume contains the database file system dump
  - each time we want a fresh dump:
    - a subvolume snapshot is taken
    - it takes less than 1s thanks to copy-on-write snapshots
    - the postgresql docker image is run with this snapshot as a volume
  - many postgresql containers run in parallel, each exposing postgresql on a different port with a different snapshot
  - a snapshot's size does not increase as long as there are no writes (cool for exploration and for $)
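The server-side flow above could be scripted roughly as follows; paths, container names, and port numbers are illustrative, and the real automation behind pg.dev.internal is not shown in the talk:

```shell
# Snapshot the base subvolume holding the PostgreSQL data directory.
# Copy-on-write makes this near-instant regardless of the dump size.
btrfs subvolume snapshot /data/pg-base /data/pg-snap-5433

# Run a postgres container on the snapshot, exposed on a dedicated port.
docker run -d --name pgondemand_5433 \
    -p 5433:5432 \
    -v /data/pg-snap-5433:/var/lib/postgresql/data \
    postgres

# When the environment expires, drop the container and the snapshot:
# docker rm -f pgondemand_5433 && btrfs subvolume delete /data/pg-snap-5433
```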
Marc MILLIEN <devops@lrqdo.fr>