I'm new to Docker and feeling my way around. My plan is to build a typical web app using Nginx + Rails + Postgres, all running in a single container. I'm not (currently) doing anything complex like linking containers.

I'm a lone developer, and my build process thus far is (a sketch of the sort of Dockerfile I mean follows the list):

  1. Edit the Dockerfile
  2. docker build -t my/image .
  3. Fix bugs, and if I like the outcome, commit the Dockerfile to a git repo.
  4. Iterate over steps 1-3 as the build evolves.
  5. docker push my/image periodically, as useful versions emerge.
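
For concreteness, the sort of Dockerfile I'm iterating on looks roughly like this; the base image tag, package list, and start.sh wrapper are illustrative placeholders rather than my exact setup:

    # Base image for the whole stack
    FROM ubuntu:14.04

    # Install Nginx, Postgres, and the Ruby toolchain in the one container
    RUN apt-get update && apt-get install -y \
        nginx postgresql ruby ruby-dev build-essential

    # Copy the app in and install its gem dependencies
    COPY . /app
    WORKDIR /app
    RUN gem install bundler && bundle install

    # start.sh is a placeholder wrapper that starts all three services
    EXPOSE 80
    CMD ["/app/start.sh"]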

Why would I not instead do the following (spelled out as shell commands after the list):

  1. docker pull a basic base image, e.g. Ubuntu
  2. docker run -i -t ubuntu /bin/bash
  3. wget -O - http://git.host.com/installation-script.sh | bash
  4. If there are bugs, scrap the container and edit installation-script.sh to fix them.
  5. Iterate over steps 1-4.
  6. docker commit the container as my/image, then docker push it periodically, as above.
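
Spelled out as shell commands, and with the docker commit step that turns the container into a pushable image made explicit, that workflow would look something like this (the container ID and script URL are placeholders):

    # On the host: fetch a base image and start an interactive shell in it
    docker pull ubuntu
    docker run -i -t ubuntu /bin/bash

    # Inside the container: the base image may not ship wget
    apt-get update && apt-get install -y wget
    wget -O - http://git.host.com/installation-script.sh | bash
    exit

    # Back on the host: snapshot the container's filesystem as an image
    docker commit <container-id> my/image
    docker push my/image

(The container ID can be found with docker ps -a.)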

I'm aware of the issues with 'wget shell-script | bash'; however, it would be more familiar to me.

Instinctively I feel that using a Dockerfile is the best way to go, but I'm not sure why. I think it would be useful for Docker beginners to understand why a Dockerfile is (or isn't) best practice. If I were deploying linked containers, would I realise the awesome power of the Dockerfile? Does a Dockerfile affect the "quality" (size, number of layers, whatever?) of the final image?
