In this section we are going to write a number of different Dockerfiles, then build them into images and run them as containers.
FROM
The FROM command brings in the base image for your image to run on top of. All Dockerfiles must start with a FROM command.
Docker Hub has all these images for you. You can add a tag after a ":". The tag lets you pick the version and possibly other variants, eg 2.1-aspnetcore-runtime vs 2.1-sdk vs 2.0-sdk.
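For example, pinning a specific variant of an image via its tag (the exact tags available depend on the image; this one uses a tag mentioned above):

```dockerfile
# Base the image on the 2.1 SDK variant rather than the default "latest"
FROM microsoft/dotnet:2.1-sdk
```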
If you do not want to inherit from another image, then you can build an image from scratch. There is an explanation on Docker Hub - scratch.
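A minimal sketch of a scratch-based image, assuming you have a statically compiled binary named "hello" next to the Dockerfile:

```dockerfile
# Start from an empty image - no OS, no shell
FROM scratch
# Copy in a statically linked executable
COPY hello /
# Run it directly; there is no shell to fall back on
CMD ["/hello"]
```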
The WORKDIR sets the working directory for future commands, ie commands will execute from that folder path. If the directory doesn't exist, it is created.
WORKDIR /app
WORKDIR relative/myfolder
For example, if you wanted to do an "npm install" you would have to switch to the folder containing the package.json file. Imagine you have /app1/package.json and /app2/package.json: you would have to navigate to the correct folder first to install each set of packages.
WORKDIR /app1
RUN npm install
WORKDIR /app2
RUN npm install
The COPY command copies one or more files from the build context to a specific location in the image.
COPY app.js /renamedApp.js
COPY . /appFolder
The RUN command runs terminal commands at build time. Use this to run commands your application needs, such as restoring packages.
RUN mkdir /abc && touch /abc/text.txt
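Note the "&&": a single "&" would background the first command, so the two would race. "&&" chains them so the second only runs once the first has succeeded. A quick sketch of the same chain outside Docker (using /tmp instead of /abc so it runs without root):

```shell
# Create a directory, then create a file inside it only if mkdir succeeded
mkdir -p /tmp/abc && touch /tmp/abc/text.txt
# Confirm the file was created
test -f /tmp/abc/text.txt && echo "created"
```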
The ENTRYPOINT allows you to specify the command to execute when the container runs, eg we can run "> echo hello". The entrypoint can be overridden with --entrypoint on docker run.
ENTRYPOINT ["echo", "hello"]
Any CMD will be appended to the entrypoint, see CMD.
If you would like to run "> echo hello world", CMD can run it as the container's default command. You can overwrite it by adding a command to the end of the run, eg "> docker run busybox sleep 100" (the container will sleep instead of echo).
CMD ["echo", "hello world"]
CMDs get appended to ENTRYPOINTs, therefore where an entrypoint has already been specified, such as ["echo","hello"], you can simply add the CMD, which must come afterwards.
In this example the entrypoint prints out "hello" and then the command adds "world", which will also print out.
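Putting the two together, a minimal sketch:

```dockerfile
FROM alpine
# The fixed part of the command
ENTRYPOINT ["echo", "hello"]
# Appended to the entrypoint; can be overridden at "docker run"
CMD ["world"]
```

Running this container prints "hello world"; running "> docker run myimage there" would print "hello there" instead, because the run argument replaces the CMD.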
The EXPOSE command does not actually publish anything. To expose ports you have to use -p when you run a container, regardless of this EXPOSE command. Its purpose is mainly documentation: if you run "docker image inspect myimage" on your image it will show the ports you set here, so you know what to set -p to when you run the container.
EXPOSE 80 5050
Each command in a Dockerfile will create a new layer with its own ID. Each of these layers is cached so that the next build can reuse the cached layer; this is useful if there was a failure midway through the build.
If you do a docker pull on an image you will see multiple downloads. Each of those lines is a separate layer.
> docker pull node
Then try a docker history on an image and again you will see the layers. Take note of the creation dates: some layers are older than others, because only layers which are modified need to be rebuilt.
> docker history node
Because of the layers there are a few optimizations you can make to your Dockerfile.
- Commands which change the least should be at the top of the file, so their layers do not have to be rebuilt. Things which change often, like your code, should be as low down in the Dockerfile as possible, so that as few layers as possible have to be rebuilt on top of them.
- When you would otherwise run multiple commands of the same kind, eg multiple RUN commands, rather combine them into one command so that Docker builds one layer for it and not multiple.
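For example, rather than three separate RUN instructions (three layers), chain them into one. A sketch assuming a Debian-based image with apt-get:

```dockerfile
# One RUN instruction, one layer; the apt cache is also
# cleaned up in the same layer so it never bloats the image
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
```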
Considerations when containerizing your app
- Containers should be location agnostic - A container should be able to run in any environment. Cloud platforms and orchestrators will schedule containers onto any of their nodes, meaning there is no guarantee which node a container will be on.
- Containers should be stateless - Containers are transient, meaning they have short lifespans. Containers are regularly rebuilt, started and stopped, therefore information should not be stored on the container, nor should the application handle its own state.
- Containers should be scalable - You should be able to run any number of instances of an application and be able to dynamically add and remove them without any data duplication/loss.
Demos of Dockerfiles
Create a folder and add a file to it named "Dockerfile" (case sensitive). You can then write your commands in that Dockerfile.
Once your Dockerfile is ready, open the command prompt at its location, build it and run it using the image ID printed by the build (alternatively tag the build with "docker build -t myimage ." and run "docker run myimage").
> docker build .
> docker run dfa45e069b37
Basic command (Alpine Linux Container)
A very simple demo is to set up a Linux environment and write to the console.
FROM alpine
CMD ["echo", "hello world"]
Docker will run "$ echo hello world" in the Alpine Linux container.
If you do not have node installed on your local machine, don't worry... you don't need it. Create another file called "app.js" and put the following JavaScript code in it.
console.log("Docker is running this Node js file");
FROM node
COPY app.js /docApp.js
CMD ["node", "docApp.js"]
This Dockerfile brings in node and copies your app.js file to the container, renaming it for demo purposes to docApp.js. It then runs that file with node using the command "$ node docApp.js" (running JS server side is that easy).
To demonstrate the ease of handling dependencies with containers, let's run a Python file. Create another file called "app.py" and put the following Python code in it.
print("Docker is running this Python file")
FROM python
COPY . /app/dockerRocks
CMD ["python", "/app/dockerRocks/app.py"]
In this demo the COPY line copies all the files in the current directory (only app.py in this demo app) into a specific folder, which we then have to reference when we run our Python file with CMD.
In this demo we will look at a few new commands, but let's first create our .NET Core console app with the following command:
> dotnet new console -n app
FROM microsoft/dotnet
WORKDIR /app
COPY ./app/*.csproj ./
RUN dotnet restore
COPY ./app/ ./
RUN dotnet publish -c Release -o dist
ENTRYPOINT ["dotnet", "dist/app.dll"]
Ok, so what's going on here... We first copy the .csproj file to the container and restore its dependencies. Copying just the .csproj first means the restore layer is cached and only rebuilds when the dependencies change, not every time the code changes. We then copy the rest of the files to the container and publish the app with a Release configuration to a folder named "dist". The ENTRYPOINT then runs the published dll as an executable.