Container

How to fix ‘docker: invalid reference format.’

Problem:

You want to start a Docker container, but you see an error message like this:

docker: invalid reference format.

Solution:

Docker is telling you that the syntax of the docker image name (& version) is wrong. Note that this is not the same as docker not being able to find the image in the registry. Docker will not even be able to look up the image in the registry if you see an invalid reference format error!

Common causes include:

  • You used a colon at the end of the image name, e.g. ubuntu: – omit the colon; using just ubuntu will refer to ubuntu:latest
  • You used a dash at the end of the image name, e.g. ubuntu- – omit the dash; using just ubuntu will refer to ubuntu:latest
  • You used variables like ubuntu:$VERSION but $VERSION is not set. Ensure you have set $VERSION to an appropriate (non-empty) value like latest or 18.04 – see the sketch after this list!
  • You used multiple colons like ubuntu:18.04:2 or ubuntu:ubuntu:18.04. Use only a single colon in the image name!
  • You mixed up the order of command line arguments, so another argument is being interpreted as the image name. Check the syntax of your docker command, e.g. docker run and compare some examples with your command.
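To illustrate the variable pitfall, here is a minimal shell sketch (the VERSION variable and the ubuntu image are just examples); bash's ${VAR:-default} expansion guards against an unset or empty variable:

# Fails with "docker: invalid reference format." if VERSION is unset or empty,
# because the image reference expands to just "ubuntu:"
docker run --rm "ubuntu:$VERSION" echo test
# Guard: fall back to "latest" if VERSION is unset or empty
docker run --rm "ubuntu:${VERSION:-latest}" echo test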
Posted by Uli Köhler in Container, Docker

How to install & use nano in a running Docker container

If you want to interactively edit a file in a Docker container (for example to debug your config files), you can install an editor like GNU nano that gives you direct access to the container’s file system.

docker exec -it [container name or ID] bash -c 'apt-get -y update && apt-get -y install nano'

This will work for most Debian/Ubuntu-based containers; for other containers you might need to use a package manager other than apt (see the Alpine example below).
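For Alpine-based containers, a sketch using apk instead (assuming the container runs as root, which is the default for most images):

docker exec -it [container name or ID] sh -c 'apk add --no-cache nano'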

Now you can use docker exec -it to interactively edit a config file, e.g.:

docker exec -it [container name or ID] nano /etc/host.conf


Posted by Uli Köhler in Container, Docker

What’s the difference between ‘docker exec’ and ‘docker run’?

docker exec runs a program in a container that’s already running. For example, you can use it to create a backup in a container that currently runs your database server.

docker run starts a new container and runs a command in it. For example, you can use it to run a specific script in a container.
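A minimal sketch contrasting the two (mydb is a hypothetical name of an already-running container):

# docker run creates a NEW container from the ubuntu image and runs a command in it
docker run --rm ubuntu echo 'Hello from a fresh container'
# docker exec runs a command inside the EXISTING, running container "mydb"
docker exec mydb ls /var/lib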

Posted by Uli Köhler in Container, Docker

How to automatically clean up (prune) docker images daily

docker image prune provides an easy way to remove “unused” docker images from a system and hence fixes, or at least significantly delays, docker eating up all your disk space, e.g. on automated build servers.

I created a systemd-timer-based daily prune routine using TechOverflow’s Simple systemd timer generator.

Quick install using

wget -qO- https://techoverflow.net/scripts/install-cleanup-docker.sh | sudo bash

This is the script which automatically creates & installs both systemd config files.

#!/bin/sh
# This script installs automated docker cleanup via "docker image prune"
# onto systemd-based systems.
# It requires that docker is installed properly

cat >/etc/systemd/system/PruneDocker.service <<EOF
[Unit]
Description=PruneDocker

[Service]
Type=oneshot
ExecStart=/usr/bin/docker image prune -f
WorkingDirectory=/tmp
EOF

cat >/etc/systemd/system/PruneDocker.timer <<EOF
[Unit]
Description=PruneDocker

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

# Enable and start service
systemctl enable PruneDocker.timer && systemctl start PruneDocker.timer

To view logs, use

journalctl -xfu PruneDocker.service

To view the status, use

sudo systemctl status PruneDocker.timer
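To check when the timer last fired and when it will fire next, you can use

systemctl list-timers PruneDocker.timer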

To immediately clean up your docker images, use

sudo systemctl start PruneDocker.service
Posted by Uli Köhler in Container, Docker

How to remove all unused docker images

Use

docker image prune -f

to remove all unused/old (“dangling”) docker images. -f will skip the confirmation prompt.

This command will remove all images that a) are not tagged themselves and b) are not referenced by any tagged image.
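If you also want to remove images that are tagged but not used by any container, you can additionally pass -a (use with care, since such images will have to be downloaded again if you need them later):

docker image prune -a -f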

Example output:

Deleted Images:
deleted: sha256:b6e0e69c6ba1e811063921d18b52627a9fb905e0bdfd80e226e86958831df636
[...]
deleted: sha256:6a40d04ddf00ad0f1806df7b2d4a2d44a6d8031cab5c369a4bf3d1694d5c48b4

Total reclaimed space: 985.3MB


Posted by Uli Köhler in Container, Docker

How to fix docker-compose start ERROR: No containers to start

Problem:

When running docker-compose start, you see an error message like this:

Starting mongodb     ... error
Starting myapp       ... error

ERROR: No containers to start

Solution:

In order to start your containers, use docker-compose up instead of docker-compose start!
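The difference is that docker-compose start only starts containers that already exist, while docker-compose up also creates them first if necessary. A minimal sketch (-d runs the services in the background):

docker-compose up -d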

Posted by Uli Köhler in Container, Docker

The super-simple docker-compose cheatsheet

Run these commands in the directory (or git repo) where docker-compose.yml is located!

Start all services

docker-compose up -d

-d means run in background (= daemonize).

Stop all services

docker-compose down

Restart all services

docker-compose restart

Update containers

docker-compose pull
docker-compose up -d

Note that docker-compose restart will not pick up newly pulled images; docker-compose up -d re-creates the containers whose image has changed.

View logs

docker-compose logs

To view and follow the logs, use

docker-compose logs -f


Start a specific service (and all the services it depends on)

docker-compose up -d myservice

Show info about which container images are being used

docker-compose images
Posted by Uli Köhler in Container, Docker

Minimal local nginx setup using Docker

If you have not installed Docker, see our guide at How to install docker and docker-compose on Ubuntu in 30 seconds

1. Create your nginx config file (my-nginx.conf). This is a template that reverse-proxies TechOverflow:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        proxy_pass https://techoverflow.net;
        proxy_http_version 1.1;
    }
}

2. Start nginx using Docker:

docker run -it -p 80:80 --rm -v $(pwd)/my-nginx.conf:/etc/nginx/conf.d/default.conf nginx:latest

3. Go to http://localhost and see the result!

Explanation of the docker command:

  • docker run -it: Create a new docker container and run it in interactive mode (i.e. it will not run in the background; once you kill the command, nginx will exit)
  • -p 80:80: Makes port 80 of the nginx server (the standard HTTP port) available on the host’s port 80. The first 80 is the host port whereas the second port 80 is the container’s port.
  • --rm: Once the container is stopped, delete it!
  • -v $(pwd)/my-nginx.conf:/etc/nginx/conf.d/default.conf: Map my-nginx.conf in the current directory ($(pwd)) to /etc/nginx/conf.d/default.conf on the container.
  • nginx:latest: Use the official nginx image from DockerHub in its latest version for the container.

Explanation of the nginx config file:

  • server { ... }: Everything inside this block belongs to one server definition.
  • listen 80 default_server; Listen on port 80 (the standard HTTP port) and make this the default server, i.e. respond to any domain name that does not have any other server configured.
  • listen [::]:80 default_server; Same as the previous line, but for IPv6. [::] means: Listen on all IPv6 addresses.
  • location / { ... }: Everything inside this block is valid for any URL starting with / i.e. any URL at all. In clauses like location /app { ... } the content of the clause would be valid for URLs starting with /app only, e.g. http://localhost/app/ or http://localhost/app/dashboard.
  • proxy_pass https://techoverflow.net; Forward requests for the current location (/) to the server https://techoverflow.net using a reverse proxy.
  • proxy_http_version 1.1; This sets the HTTP version that nginx uses to make the requests to https://techoverflow.net. This is not always necessary but might increase compatibility.
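Once the nginx container from step 2 is running, you can also verify the reverse proxy from the command line instead of the browser, e.g. using curl:

curl -I http://localhost/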
Posted by Uli Köhler in Docker, nginx

Towards a docker-based build of C/C++ applications

Note: Based on this post I have published buildock on GitHub.

Many C/C++ programmers and project managers know the pain of creating a reproducible build environment for all developers: “works for me” is a common meme, and not without reason.

My approach is to dockerize not necessarily the application itself but the build system, encapsulating the specific compiler version, the system around it and all required system-level libraries in a Docker image.

Take the obligatory Hello World in C++:

// main.cpp
#include <iostream>
using namespace std;

int main() {
    cout << "Hello, World!" << endl;
    return 0;
}

and the corresponding Makefile:

all:
        g++ -o helloworld main.cpp

How can we compile this simple project without installing a compiler and GNU make on the local computer (no cheating by using build servers allowed)?

Actually it’s rather simple:

docker run --user $(id -u):$(id -g) -v $PWD:/app -it ulikoehler/ubuntu-gcc-make make

Breaking it down:

  • docker run: Create a new docker container and run a command in it
  • --user $(id -u):$(id -g): This makes the docker container run with the current user’s ID and the current user’s group – both to prevent the compiler from creating output files as root and to safeguard against some IT security risks. Also see How to run docker container as current user & group
  • -v $PWD:/app: Mount the current directory ($PWD) on /app in the container. Since the Dockerfile used to build the container contains the WORKDIR /app directive, whatever command we run in the container will by default be executed in the /app directory – and therefore in the current local directory on the host.
  • -it runs the container in interactive mode, i.e. keypresses are passed through to the command you are running. Also, this means that our command will only finish when the container has finished executing.
  • ulikoehler/ubuntu-gcc-make: This is the image we’re using for this example. It’s nothing more than an ubuntu:18.04 base image with build-essential and make installed and WORKDIR set to /app
  • make: This is the command we’ll run in the container. You can use any command here, even no command is possible (in which case the container’s default command will be used – in case of ulikoehler/ubuntu-gcc-make that is CMD [ "/usr/bin/make" ])

Here’s the complete Dockerfile used to generate ulikoehler/ubuntu-gcc-make:

FROM ubuntu:18.04
RUN apt update && apt -y install build-essential make && rm -rf /var/lib/apt/lists/*
WORKDIR /app
CMD [ "/usr/bin/make" ]
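If you prefer to build the image yourself instead of pulling it from DockerHub, a sketch (run in the directory containing the Dockerfile above; the local tag name is just an example):

docker build -t my-ubuntu-gcc-make .
docker run --user $(id -u):$(id -g) -v $PWD:/app -it my-ubuntu-gcc-make make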
Posted by Uli Köhler in Build systems, C/C++, Container, Docker

How to have multiple Dockerfiles in one directory

Usually you build a docker image like

docker build -t myimagename .

but using this method you can only have one Dockerfile in each directory.

In case you don’t want to have separate directories for your Dockerfiles you can use the -f argument to docker build:

docker build -f FirstDockerfile .

Note that you still need to add . at the end so docker build knows the build context, i.e. where to COPY files from if you have COPY or similar statements in your Dockerfile.
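For example, to build two differently tagged images from the same directory (SecondDockerfile and the image names are just placeholders):

docker build -f FirstDockerfile -t myfirstimage .
docker build -f SecondDockerfile -t mysecondimage .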

Posted by Uli Köhler in Docker

How to run docker container as current user & group

If you want to prevent your docker container from creating files as root, use

--user $(id -u):$(id -g)

as an argument to docker run. Example:

docker run --user $(id -u):$(id -g) -it -v $(pwd):/app myimage


Posted by Uli Köhler in Container, Docker, Linux

How to fix ‘Configuring tzdata’ interactive input when building Docker images

Often, when installing deb packages in your Dockerfile, some packages will install tzdata as a dependency.

The tzdata installer will try to interactively prompt you for your location using

Configuring tzdata
------------------

Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing
the time zones in which they are located.

 1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
 2. America     5. Arctic     8. Europe    11. SystemV
 3. Antarctica  6. Asia       9. Indian    12. US
Geographic area:

This will stall your image build.

In order to fix that, we’ll need to make the tzdata prompt non-interactive.

The preferred method is to add

ENV DEBIAN_FRONTEND=noninteractive

before the first RUN statements in your Dockerfile.
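A minimal Dockerfile sketch showing the placement (the ubuntu:18.04 base image and the tzdata package are just examples):

FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install tzdata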

Alternatively you can run just the apt install or apt-get install command using DEBIAN_FRONTEND=noninteractive:

RUN DEBIAN_FRONTEND=noninteractive apt install -y tzdata

This will automatically select a default configuration for tzdata.

Posted by Uli Köhler in Container, Docker

Fixing docker ‘unable to delete …- image is being used by running container’

Problem:

You want to delete a docker image using a command like

docker image rm c91b419ac445

but you see an error message like

Error response from daemon: conflict: unable to delete c91b419ac445 (cannot be forced) - image is being used by running container 3477a4dcdce2

Solution:

There is currently a container running that uses the image you are trying to delete. We will solve this issue by first stopping the container and then deleting the image.

Warning: Deleting the image is dangerous since you cannot undo deleting it! Also note that force-stopping a running container might result in data loss if that container is doing something important!

Run these commands to stop the container and delete the image:

# Force-stop the container
docker container rm --force <container ID>
# Delete the image
docker image rm <image ID>

Copy <container ID> from the end of your original error message (3477a4dcdce2 in my example).

Copy <image ID> from the beginning of your error message. This is the same image ID you originally intended to delete (c91b419ac445 in my example).

In my example, the command would be

# Force-stop the container
docker container rm --force 3477a4dcdce2
# Delete the image
docker image rm c91b419ac445

Note that there might be multiple containers running that use this image, so if you keep getting a similar error message, you might need to repeat these commands.
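To list all containers (including stopped ones) that were created from the image, you can use the ancestor filter (shown with the image ID from the example):

docker ps -a --filter ancestor=c91b419ac445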

Background information:

Docker will not allow you to force-delete the image using

docker image rm c91b419ac445 --force

as you can also see from the (cannot be forced) clause of your original error message. This behaviour makes sense since the container would crash in an unpredictable manner if the underlying image were deleted.

Note that we could use docker image rm --force after stopping the container, but this is typically not required and might result in additional risks for other containers, e.g. if other images depend on said image, since docker uses layered images. Read the background information section of our post Docker: Remove all images and containers to learn more about how docker images work from an image management perspective.

Posted by Uli Köhler in Container, Docker

How to fix Google Cloud Build ignoring .dockerignore

Problem:

You want to run a docker image build on Google Cloud Build, but the client is trying to upload a huge build context to Google Cloud even though you have added all your large directories to your .dockerignore and the build works fine locally.

Solution:

Google Cloud Build ignores .dockerignore by design – the equivalent is called .gcloudignore.

You can copy the .dockerignore behaviour for gcloud by running

cp .dockerignore .gcloudignore


Posted by Uli Köhler in Cloud, Container, Docker

How to expand a Kubernetes Persistent Volume Claim (PVC)

Important note: By default, volumes will not be resized immediately but instead require a restart of the associated pod.

First, ensure that you have set allowVolumeExpansion: true for the storage class of your PVC. See our previous post on How to allow Persistent Volume Claim (PVC) resize for Kubernetes storage class for more details.

We can expand the volume (named myapp-myapp-pvc-myapp-myapp-1 in this example) by running

kubectl patch pvc/"myapp-myapp-pvc-myapp-myapp-1" \
  --namespace "default" \
  --patch '{"spec": {"resources": {"requests": {"storage": "40Gi"}}}}'

Ensure that you have replaced the name of the PVC (myapp-myapp-pvc-myapp-myapp-1 in this example) and the storage size. Note that you can only increase (expand) the size of a volume, not decrease (shrink) it. If your new size is less than the previous value, you’ll see this error message:

The PersistentVolumeClaim "myapp-myapp-pvc-myapp-myapp-1" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value

After running this command, the PVC will be in the FileSystemResizePending state.
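You can check the status and current capacity of the PVC using (again with the example PVC name):

kubectl get pvc/"myapp-myapp-pvc-myapp-myapp-1" --namespace "default"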

In order for the update to have effect, you’ll need to force Kubernetes to re-create all the pods for your deployment. To find out how to do this, read our post on How to force restarting all Pods in a Kubernetes Deployment.

For reference, see the official documentation on expanding persistent volumes.

Posted by Uli Köhler in Cloud, Kubernetes

How to force restarting all Pods in a Kubernetes Deployment

In contrast to classical deployment managers like systemd or pm2, Kubernetes does not provide a simple “restart my application” command.

However, there’s an easy workaround: if you change anything in your configuration, even innocuous things that don’t have any effect, Kubernetes will restart your pods.

Consider configuring a rolling update strategy before doing this if you are updating a production application that should have minimal downtime.

In this example we’ll assume you have a StatefulSet you want to update and that it’s named elasticsearch-elasticsearch. Be sure to fill in the actual name of your deployment here.

kubectl patch statefulset/elasticsearch-elasticsearch -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"dummy-date\":\"`date +'%s'`\"}}}}}"

This will just set a dummy-date annotation which does not have any effect.

You can monitor the update using

kubectl rollout status statefulset/elasticsearch-elasticsearch

Credits for the original solution idea to pstadler on GitHub.

Posted by Uli Köhler in Cloud, Kubernetes

How to allow Persistent Volume Claim (PVC) resize for Kubernetes storage class

The prerequisite for resizing a Kubernetes Persistent Volume Claim is that you allow volume expansion in the storage class the PVC belongs to (the standard storage class in this example).

We can allow this by setting allowVolumeExpansion: true for that storage class.

Patching the configuration on-the-go can easily be done using

kubectl patch storageclass/"standard" \
  --patch '{"allowVolumeExpansion": true}'

Remember that you might need to adjust the storage class name depending on which one you use; note that storage classes are cluster-wide, so no namespace is needed. For any standard configuration, the storage class standard will be the one you need.
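To verify that the change has been applied, you can print the flag (this should output true):

kubectl get storageclass/"standard" -o jsonpath='{.allowVolumeExpansion}'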

Posted by Uli Köhler in Kubernetes