Passing secrets to a Docker container

I have a base Docker image which is used to run image analysis software. For each container created from the image, there is a set of configuration settings, some of which are secrets (encryption keys, customer information, etc.) that the software uses to analyze and distribute the processed images. How can I safely pass these secrets to a container?

PrestonM

Posted 2018-04-18T13:32:56.087

Reputation: 2 188

HashiCorp Vault – 030 2018-04-18T14:38:33.493

Answers

There are three ways to get secrets to an app inside a Docker container. The first two involve Docker configuration; the last is to have your app fetch secrets directly from a secret store.

1 - Environment variables

According to "The 12 Factor App" guide, secrets are merely config, and config should always be set in the environment. You can set your secrets as environment variables on docker run, and your app reads them from there.
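
For example, a minimal sketch (the variable name and image are hypothetical):

docker run -d -e DB_PASSWORD='s3cret' myorg/image-analyzer

Passing -e DB_PASSWORD with no value instead forwards the variable from the host shell's environment, which keeps the value itself out of the command line and your shell history.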

2 - Mounted volumes

You could keep all your secrets in a dedicated configuration/secrets file, then mount that file into the container as a volume.
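
For example, a sketch (the host path, mount point, and image are hypothetical):

docker run -d -v /opt/analyzer/secrets.json:/run/secrets/secrets.json:ro myorg/image-analyzer

The :ro suffix mounts the file read-only, and the app reads /run/secrets/secrets.json at startup like any other config file.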

3 - Fetch from secret store

As @030 mentioned, you can use HashiCorp Vault (or AWS Secrets Manager, or any similar service).
Your app, or a sidecar app, can fetch the secrets it needs directly, without having to deal with any configuration on the Docker container. This method lets you use dynamically created secrets (a very appealing feature of such systems) without having to worry about the secrets being viewable on the file system or by inspecting the container's environment variables.
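
For example, with Vault's CLI, a sketch (the secret path and field are hypothetical, and VAULT_ADDR and VAULT_TOKEN are assumed to be set):

vault kv get -field=api_key secret/image-analyzer

The same secret can be fetched over Vault's HTTP API, which is what an app or sidecar would typically do:

curl -H "X-Vault-Token: $VAULT_TOKEN" "$VAULT_ADDR/v1/secret/data/image-analyzer"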

Personal Opinion

I believe environment variables are the way to go. They're easier to manage, and you can still pull from a secret store like HashiCorp Vault if you have your CI build system pull the secrets during the build and set them when you deploy. You get the best of both worlds, and the added benefit of your developers not needing to write application code to fetch secrets. Devs should be focused on their code's functionality, not dealing with admin tasks like fetching passwords.
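
A sketch of such a deploy step (the secret path, field, variable name, and image are hypothetical):

# CI deploy step: pull the secret from Vault, inject it at docker run
export DB_PASSWORD="$(vault kv get -field=db_password secret/image-analyzer)"
docker run -d -e DB_PASSWORD myorg/image-analyzer

Here -e DB_PASSWORD without a value forwards it from the CI shell's environment, so the value never appears in the docker command line itself.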

Your application's code should be focused on its own functionality, not on backend tasks like fetching passwords, just as the 12 Factor App states.

Edit: changed the last sentence to remove the implication of Developer-vs-SysAdmin silo-ing. The tasks themselves should be separate from a code perspective, but DevOps is about the same people keeping both in mind, not being limited to one side.

Personal Opinion (Update)

Per @Dirk's excellent comment (Passing secrets to a Docker container), there is a very strong argument for prioritizing a secret store over env vars, to avoid leaking them.

BoomShadow

Posted 2018-04-18T13:32:56.087

Reputation: 1 024

What are admin tasks? Devs are responsible for secure apps as well. Apart from that, +1. – 030 2018-04-18T16:12:03.487

Definitely. I tend towards step 1, and in general the 12 factor app. Thanks for putting this together! – PrestonM 2018-04-19T01:15:22.837

@030 Admin tasks would be infrastructure and config shipping. Fetching passwords for the config is not what I would consider 'code functionality'; those are tasks that enable the app to run, and they can be handed to an automation system separate from the app's code base. The app should just assume it has what it needs, and not worry about gathering the info as well. – BoomShadow 2018-04-19T15:05:11.407

This promotes silos. DevOps is doing things together instead of throwing things over the wall. – 030 2018-04-19T15:24:46.537

The code should be silo'd off from the infrastructure components. The same people could write both the infrastructure automation and the app code base, but the tasks themselves should be separate. I see the last sentence of my original answer was silo-ing off the devs, the people. That is a mistake; I'll edit that to be clearer. – BoomShadow 2018-04-20T15:03:32.883

Putting secrets into environment variables offers various possibilities for them to be leaked. A few examples: everybody with access to the Docker daemon on the machine running the container can see them using the inspect or exec commands; environment variables often get dumped to stdout or into logfiles when running in some debug mode; and all spawned child processes can read and expose them, which might be out of your control. More information e.g. here: https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/
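
To illustrate the first of those leaks (the container name is hypothetical):

docker inspect --format '{{.Config.Env}}' image-analyzer

Any user with access to the daemon (e.g. membership in the docker group) can run this and read every environment variable the container was started with.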

Dirk 2019-02-01T17:03:44.413

Thanks for the link @Dirk. I've never seen that so well put. I've known that something like Vault or Keywhiz is the ideal place to put secrets in the long run, but that's the most succinct argument I've ever read. Def swayed my opinion towards prioritizing a secret store sooner rather than later. – BoomShadow 2019-02-04T01:29:56.937

Very nice and precise answer. Thanks for putting it together. A good, consolidated writeup. – Karthik Venkatesan 2019-02-04T04:23:45.287

I am also grappling with this question. The thing I don't understand is: even if you use a credentials vault to secure your secrets, you still must authenticate to gain access to that vault, and that presumably requires some secret. The same concern applies to using a password-protected KeyStore file. Are we always stuck with passing at least the "meta credential" in the environment? – Wheezil 2019-03-08T17:43:10.590

@Wheezil A meta-credential is easier to secure than many specific credentials. You can frequently and automatically rotate the meta-credential. It can go to a vault that is on a secured host and can have things like IP whitelisting, so that it only accepts connections from your production subnets. You can also ensure that the vault uses encryption at rest, encryption in flight, mutual TLS, certificate pinning, and all the other best practices that make things more secure. – simbo1905 2019-03-09T06:15:37.037

@simbo1905 Thanks for the clarification! What would you say is a good mechanism to secure the meta-credential? My current thinking is just a file on disk with restrictive permissions, but is there a better way? – Wheezil 2019-03-09T14:11:01.490

K8s has the concept of secrets that you create and then mount as volumes into the containers. The link above by Dirk shows the same thing with Swarm. So yes, a simple file with restricted permissions, visible to the container, seems to be standard practice. With k8s I can public-key-encrypt the file holding the secret and use an init container (a startup container that runs to completion before my webapp starts) that has the private key to decrypt the secret file, so that my webapp can see the decrypted secret. – simbo1905 2019-03-09T18:32:01.013
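
A minimal sketch of that k8s pattern (the secret name and key are hypothetical):

kubectl create secret generic analyzer-secrets --from-literal=api_key=s3cret

The secret can then be mounted into the pod as a read-only file via a volume of type "secret", and the container reads it like any other file.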

@Dirk (1) Anyone with authenticated network access to the Docker daemon has full control over the containers, including RCE; there's no defense against that privilege level anyway. (2) You can selectively restrict child processes from having access to env vars. (3) Spawned child processes by default execute under the same user and have similar file access anyway. In any modern Linux distro, env vars are just as secret as the file system or remote file systems. – Paul Draper 2019-04-05T07:01:23.340

There is another option, using only a pipe:

# start the container with -i so stdin stays open; `read A` blocks until a line arrives
docker run -d -i --name "$n" alpine sh -c 'read A; echo "[$A]"; exec some-server'
# write the secret into the waiting process's stdin via /proc/1/fd/0
docker exec -i "$n" sh -c 'cat > /proc/1/fd/0' <<< _a_secret_

First, create the container with -d -i; the read A command hangs, waiting for input from /proc/1/fd/0 (the container's stdin, which -i keeps open). Then run the second docker command, which reads the secret from its stdin and redirects it to the waiting process.
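
To check the handoff, the echoed value should appear in the container log:

docker logs "$n"
# expected output: [_a_secret_]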

James Z.M. Gao

Posted 2018-04-18T13:32:56.087

Reputation: 11