Docker

How to fix docker-compose start ERROR: No containers to start

Problem:

When running docker-compose start, you see an error message like this:

Starting mongodb     ... error
Starting myapp       ... error

ERROR: No containers to start

Solution:

To start your containers, use docker-compose up instead of docker-compose start!
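
For example, to create and start all services defined in your docker-compose.yml in the background:

docker-compose up -d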

Posted by Uli Köhler in Container, Docker

The super-simple docker-compose cheatsheet

Run these commands in the directory (or git repo) where docker-compose.yml is located!

Start all services

docker-compose up -d

-d means run in background (= daemonize).

Stop all services

docker-compose down

Note that down also removes the stopped containers and the default network; if you only want to stop the services without removing their containers, use docker-compose stop.

Restart all services

docker-compose restart

Update containers

docker-compose pull
docker-compose up -d

Note that docker-compose restart alone would not pick up newly pulled images, since it restarts the existing containers without recreating them.

View logs

docker-compose logs

To view and follow the logs, use

docker-compose logs -f


Start a specific service (and all the services it depends on)

docker-compose up -d myservice

Show info about which container images are being used

docker-compose images
Posted by Uli Köhler in Container, Docker

Minimal local nginx setup using Docker

If you have not installed Docker, see our guide at How to install docker and docker-compose on Ubuntu in 30 seconds

1. Create your nginx config file (my-nginx.conf). This is a template that reverse proxies TechOverflow:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        proxy_pass https://techoverflow.net;
        proxy_http_version 1.1;
    }
}

2. Start nginx using docker:

docker run -it -p 80:80 --rm -v $(pwd)/my-nginx.conf:/etc/nginx/conf.d/default.conf nginx:latest

3. Go to http://localhost and see the result!
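
If you prefer the command line, you can also check the reverse proxy with curl (assuming curl is installed):

# Should print the HTTP status line and headers returned via the proxy
curl -I http://localhost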

Explanation of the docker command:

  • docker run -it: Create a new docker container and run it in interactive mode (i.e. it will not run in the background; once you kill the command, nginx will exit)
  • -p 80:80: Makes port 80 of the nginx server (the standard HTTP port) available on the host’s port 80. The first 80 is the host port whereas the second port 80 is the container’s port.
  • --rm: Once the container is stopped, delete it!
  • -v $(pwd)/my-nginx.conf:/etc/nginx/conf.d/default.conf: Map my-nginx.conf in the current directory ($(pwd)) to /etc/nginx/conf.d/default.conf on the container.
  • nginx:latest: In the container run the official nginx image from DockerHub in the latest version.

Explanation of the nginx config file:

  • server { ... }: Everything inside this block belongs together.
  • listen 80 default_server; Listen on port 80 (the standard HTTP port) and make this the default server, i.e. respond to any domain name that does not have any other server configured.
  • listen [::]:80 default_server; Same as the previous line, but for IPv6. [::] means: Listen on all IPv6 addresses.
  • location / { ... }: Everything inside this block is valid for any URL starting with / i.e. any URL at all. In clauses like location /app { ... } the content of the clause would be valid for URLs starting with /app only, e.g. http://localhost/app/ or http://localhost/app/dashboard.
  • proxy_pass https://techoverflow.net; Forward requests for the current location (/) to the server https://techoverflow.net using a reverse proxy.
  • proxy_http_version 1.1; This sets the HTTP version that nginx uses to make the requests to https://techoverflow.net. This is not always necessary but might increase compatibility.
Posted by Uli Köhler in Docker, nginx

Towards a docker-based build of C/C++ applications

Note: Based on this post I have published buildock on GitHub.

Many C/C++ programmers and project managers know the pain of creating a reproducible build environment for all developers: “Works for me” is a common meme, and not without reason.

My approach is to dockerize not necessarily the application itself but the build system, encapsulating both the specific compiler version and the system around it, plus all required system-level libraries, in a Docker image.

Take the obligatory Hello World in C++:

// main.cpp
#include <iostream>
using namespace std;

int main() {
    cout << "Hello, World!" << endl;
    return 0;
}

and the corresponding Makefile:

all:
        g++ -o helloworld main.cpp

How can we compile this simple project without installing a compiler and GNU make on the local computer (no cheating by using build servers allowed)?

Actually it’s rather simple:

docker run --user $(id -u):$(id -g) -v $PWD:/app -it ulikoehler/ubuntu-gcc-make make

Breaking it down:

  • docker run: Create a new docker container and run a command in it
  • --user $(id -u):$(id -g): This makes the docker container run with the current user’s ID and the current user’s group – both to prevent the compiler from creating output files as root and to safeguard against some IT security risks. Also see How to run docker container as current user & group
  • -v $PWD:/app: Mount the current directory ($PWD) on /app in the container. Since the Dockerfile used to build the container contains the WORKDIR /app directive, whatever command we run in the container will by default be executed in the /app directory – and therefore in the current local directory on the host.
  • -it runs the container in interactive mode, i.e. keypresses are passed down to the command you are running. Also, this means that our command will only finish when the container has finished executing.
  • ulikoehler/ubuntu-gcc-make: This is the image we’re using for this example. It’s nothing more than an ubuntu:18.04 base image with build-essential and make installed and WORKDIR set to /app
  • make: This is the command we’ll run in the container. You can use any command here, even no command is possible (in which case the container’s default command will be used – in case of ulikoehler/ubuntu-gcc-make that is CMD [ "/usr/bin/make" ])
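
If you run such builds frequently, a tiny wrapper script saves typing. Here is a minimal sketch – the script name make-in-docker.sh is just an example, not something shipped with the image:

#!/bin/bash
# make-in-docker.sh (example name): run make inside the ulikoehler/ubuntu-gcc-make
# container as the current user, forwarding any arguments (e.g. make targets) to make
docker run --user $(id -u):$(id -g) -v $PWD:/app -it ulikoehler/ubuntu-gcc-make make "$@"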

Here’s the complete Dockerfile used to generate ulikoehler/ubuntu-gcc-make:

FROM ubuntu:18.04
RUN apt update && apt -y install build-essential make && rm -rf /var/lib/apt/lists/*
WORKDIR /app
CMD [ "/usr/bin/make" ]
Posted by Uli Köhler in Build systems, C/C++, Container, Docker

How to have multiple Dockerfiles in one directory

Usually you build a docker image like

docker build -t myimagename .

but using this method you can only have one Dockerfile in each directory.

In case you don’t want to have separate directories for your Dockerfiles you can use the -f argument to docker build:

docker build -f FirstDockerfile .

Note that you still need the . at the end: it tells docker build which directory to use as the build context, i.e. where to COPY files from if you have COPY or similar statements in your Dockerfile.
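
For example, two images could be built from the same directory like this – SecondDockerfile and the image names are just placeholders; -t tags each image so you can tell them apart:

docker build -f FirstDockerfile -t first-image .
docker build -f SecondDockerfile -t second-image .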

Posted by Uli Köhler in Docker

How to run docker container as current user & group

If you want to prevent your docker container from creating files as root, use

--user $(id -u):$(id -g)

as an argument to docker run. Example:

docker run --user $(id -u):$(id -g) -it -v $(pwd):/app myimage
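
To verify the effect, you can create a file from inside a container and check its owner on the host – a quick sketch using the ubuntu:18.04 image (testfile is just an example name):

# Create a file in the mounted directory from inside the container
docker run --user $(id -u):$(id -g) -v $(pwd):/app ubuntu:18.04 touch /app/testfile
# On the host, the file should now be owned by your user instead of root
ls -l testfile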


Posted by Uli Köhler in Container, Docker, Linux

How to fix ‘Configuring tzdata’ interactive input when building Docker images

Often, when installing deb packages in your Dockerfile, some packages will install tzdata as a dependency.

The tzdata installer will try to interactively prompt you for your location using

Configuring tzdata
------------------

Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing
the time zones in which they are located.

 1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
 2. America     5. Arctic     8. Europe    11. SystemV
 3. Antarctica  6. Asia       9. Indian    12. US
Geographic area:

This will stall your image build.

In order to fix that, we’ll need to make the tzdata prompt non-interactive.

The preferred method is to add

ENV DEBIAN_FRONTEND=noninteractive

before the first RUN statement in your Dockerfile.

Alternatively, you can run just the apt install or apt-get install command with DEBIAN_FRONTEND=noninteractive set:

RUN DEBIAN_FRONTEND=noninteractive apt install -y tzdata

This will automatically select a default configuration for tzdata.

Posted by Uli Köhler in Container, Docker

Fixing docker ‘unable to delete …- image is being used by running container’

Problem:

You want to delete a docker image using a command like

docker image rm c91b419ac445

but you see an error message like

Error response from daemon: conflict: unable to delete c91b419ac445 (cannot be forced) - image is being used by running container 3477a4dcdce2

Solution:

There is currently a container running that uses the image you are trying to delete. We will solve this issue by first stopping the container and then deleting the image.

Warning: Deleting the image is dangerous since you cannot undo it! Also note that force-stopping a running container might result in data loss if that container is doing something important!

Run these commands to stop the container and delete the image:

# Force-stop the container
docker container rm --force <container ID>
# Delete the image
docker image rm <image ID>

Copy <container ID> from the end of your original error message (3477a4dcdce2 in my example).

Copy <image ID> from the beginning of your error message. This is the same image ID you originally intended to delete (c91b419ac445 in my example).

In my example, the command would be

# Force-stop the container
docker container rm --force 3477a4dcdce2
# Delete the image
docker image rm c91b419ac445

Note that there might be multiple containers running that use this image, so if you keep getting a similar error message, you might need to repeat these commands.
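
Tip: To see which containers (running or stopped) were created from that image, you can filter docker ps by ancestor, e.g.:

# List all containers created from image c91b419ac445
docker ps -a --filter ancestor=c91b419ac445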

Background information:

Docker will not allow you to force-delete the image using

docker image rm c91b419ac445 --force

as you can also see from the (cannot be forced) clause of your original error message. This behaviour makes sense since the container would fail in an undefined manner if the underlying image were deleted.

Note that we could use docker image rm --force after stopping the container, but this is typically not required and might result in additional risks for other containers, e.g. if other images depend on said image, since docker uses layered images. Read the background information section of our post Docker: Remove all images and containers to learn more about how docker images work from an image management perspective.

Posted by Uli Köhler in Container, Docker

How to fix Google Cloud Build ignoring .dockerignore

Problem:

You want to run a docker image build on Google Cloud Build, but the client is trying to upload a huge build context to Google Cloud even though you have added all your large directories to your .dockerignore and the build works fine locally.

Solution:

Google Cloud Build ignores .dockerignore by design – the equivalent is called .gcloudignore.

You can copy the .dockerignore behaviour for gcloud by running

cp .dockerignore .gcloudignore
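
To double-check which files gcloud would actually upload after copying the file, you can use its helper command in your project directory:

gcloud meta list-files-for-upload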


Posted by Uli Köhler in Cloud, Container, Docker

How to fix docker push ‘denied: requested access to the resource is denied’

Problem:

You want to push an image to Dockerhub using a command like

docker push myusernameondocker/my-image

but you see an error message like

denied: requested access to the resource is denied

Solution:

First ensure that your local docker client is logged in to Docker by using

docker login

This will ask you for your username and password. In case you have not registered on Docker Hub yet, create an account at hub.docker.com first!

Now you can retry your command. Note that you do not need to create the repository on Docker Hub explicitly – it will be created automatically!
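
For reference, a minimal sketch of the whole sequence – my-image is a placeholder for your local image name, and myusernameondocker stands for your Docker Hub username, as above:

# Log in to Docker Hub
docker login
# Tag the local image so it is namespaced with your username
docker tag my-image myusernameondocker/my-image
# Push it
docker push myusernameondocker/my-image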

Posted by Uli Köhler in Container, Docker

How to build & upload a Dockerized application to Google Container Registry in 5 minutes

This post provides an easy example on how to build & upload your application to the private Google Container registry. We assume you have already setup your project and installed Docker. In this example, we’ll build & upload pseudo-perseus v1.0. Since this is a NodeJS-based application, we also assume that you installed a recent version of NodeJS and NPM (see our previous article on how to do that using Ubuntu)

First we configure docker to be able to authenticate to Google:

gcloud auth configure-docker

Now we can checkout the repository and install the NPM packages:

git clone https://github.com/ulikoehler/pseudo-perseus.git
cd pseudo-perseus
git checkout v1.0
npm install

Now we can build the local docker image. We name it directly so that it can be uploaded to the Google Container Registry – be sure to use the correct Google Cloud project ID:

docker build -t eu.gcr.io/myproject-123456/pseudo-perseus:v1.0 .

The next step is to upload the image:

docker push eu.gcr.io/myproject-123456/pseudo-perseus:v1.0
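
To verify that the upload worked, you can list the tags the registry now stores for the image (assuming the same project ID and image name as above):

gcloud container images list-tags eu.gcr.io/myproject-123456/pseudo-perseus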

For reference see the official Container Registry documentation.

Posted by Uli Köhler in Cloud, Container, Docker

Fixing gcloud WARNING: `docker-credential-gcloud` not in system PATH

Problem:

You want to configure docker to be able to access Google Container Registry using

gcloud auth configure-docker

but you see this warning message:

WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
gcloud credential helpers already registered correctly.

Solution:

Install docker-credential-gcloud using

sudo gcloud components install docker-credential-gcr

In case you see this error message:

ERROR: (gcloud.components.install) You cannot perform this action because this Cloud SDK installation is managed by an external package manager.
Please consider using a separate installation of the Cloud SDK created through the default mechanism described at: https://cloud.google.com/sdk/

use this alternate installation command instead (this command is for Linux, see the official documentation for other operating systems):

VERSION=1.5.0
OS=linux
ARCH=amd64

curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" \
  | tar xz --to-stdout ./docker-credential-gcr \
  | sudo tee /usr/bin/docker-credential-gcr > /dev/null && sudo chmod +x /usr/bin/docker-credential-gcr

After that, configure docker using

docker-credential-gcr configure-docker
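
To verify the configuration, have a look at your docker client config – it should now contain a credHelpers section referencing the gcr.io registries:

cat ~/.docker/config.json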

Now you can retry running your original command.

For reference, see the official documentation.

Posted by Uli Köhler in Cloud, Container, Docker, Linux

How to fix ERROR: Couldn’t connect to Docker daemon at http+docker://localhost – is it running?

Problem:

You want to run a docker container or docker-compose application, but once you try to start it, you see this error message:

ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.

Solution:

There are two possible reasons for this error message.

The most common reason is that the user you are running the command as does not have permission to access the docker daemon.

You can fix this either by running the command as root using sudo (since root has permission to access docker) or by adding your user to the docker group:

sudo usermod -a -G docker $USER

and then logging out and logging back in completely (or restarting the system/server).

The other reason is that you have not started docker. On Ubuntu, you can start it using

sudo systemctl enable docker # Auto-start on boot
sudo systemctl start docker # Start right now
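
To check whether the docker daemon is actually running, you can use either of the following:

# Show the status of the docker service
sudo systemctl status docker
# Or query the daemon directly (this fails if the daemon is not reachable)
docker info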


TechOverflow’s Docker install instructions automatically take care of starting & enabling the service.

Posted by Uli Köhler in Container, Docker, Linux

How to fix ‘elasticsearch exited with code 78’

Problem:

You want to run ElasticSearch using docker, but the container immediately stops again with this error message:

elasticsearch exited with code 78

or

elasticsearch2 exited with code 78

Solution:

If you look through the entire log message, you’ll find lines like

elasticsearch     | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Therefore we need to increase the vm.max_map_count limit:

sudo sysctl -w vm.max_map_count=524288

Now we need to edit /etc/sysctl.conf so the setting will also be in effect after a reboot.

Look for any vm.max_map_count line in /etc/sysctl.conf. If you find one, set its value to 524288. If there is no such line present, add the line

vm.max_map_count=524288

to the end of /etc/sysctl.conf
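
You can verify the currently active value, and re-apply /etc/sysctl.conf without rebooting, using:

# Show the currently active value
sysctl vm.max_map_count
# Re-load /etc/sysctl.conf
sudo sysctl -p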

Original source: GitHub


Posted by Uli Köhler in Container, Databases, Docker, Linux

How to install docker and docker-compose on Ubuntu in 30 seconds

Use our script:

wget -qO- https://techoverflow.net/scripts/install-docker.sh | bash

After that, log out and log back in (or close your SSH session and re-connect); otherwise you will only be able to run the docker client as root – see Solving Docker permission denied while trying to connect to the Docker daemon socket.

Or do it manually:

Copy and paste these command blocks into your Linux shell. You need to copy & paste one block at a time – you can paste the next block once the previous block is finished!

# Install prerequisites
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Add docker's package signing key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add repository
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install latest stable docker version
sudo apt-get update
sudo apt-get -y install docker-ce
# Install docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod a+x /usr/local/bin/docker-compose
# Enable & start docker
sudo systemctl enable docker
sudo systemctl start docker

Note that this will install Docker as a deb package whereas docker-compose will be downloaded to /usr/local/bin.

In case you intend to use docker under your normal user account (i.e. without sudo), you might want to add that user to the docker group (we recommend you do this):

sudo usermod -a -G docker $USER

This setting requires that you log out and log back in (or completely terminate your SSH session and open a new SSH session) in order to take effect.

In case that does not work and you still get permission denied error messages, try rebooting your computer.

In order to check if your user is currently a member of the docker group, run

groups

Example output:

uli adm tty lp uucp dialout cdrom sudo dip plugdev lpadmin sambashare vboxusers lxd docker

These are the groups your user currently belongs to (as said before, changes only take effect after logging out and logging back in or terminating and re-opening your SSH session). If docker is listed in the output of groups (tip: it’s typically near the end in case you have just added it!), you should be able to access the docker socket. See the Background information section of Solving Docker permission denied while trying to connect to the Docker daemon socket for more details on docker sockets and their permissions.

Posted by Uli Köhler in Container, Docker, Linux

How to backup Redmine using the Bitnami Docker image

In a previous post I detailed how to install Redmine on Linux using the excellent Bitnami docker image.

This post will teach you how to easily make an online backup of your Redmine installation. Note that automating the backup is not within the scope of this post.

We assume that Redmine is installed as shown in my previous post in /var/lib/redmine and that you want to back up to my.backup.server:~/redmine-backup/ using rsync.

Backing up the Redmine data

This is pretty easy, as the data is all in just one directory. You can sync it using

rsync --checksum -Pavz /var/lib/redmine/redmine_data my.backup.server:~/redmine-backup/

Note that old versions of files in redmine_data will be overwritten, however files that are deleted locally will not be deleted on the backup server. To me, this seems like a good compromise between the ability to recover deleted files and the used storage space.

Backing up the Redmine database

This part is slightly more complicated, since we need to access the MariaDB server running in a different container. Important note: The container ID can change, so it is not sufficient to find the container ID once and then reuse it. You need to determine the appropriate ID each time you do a backup. See below for instructions on how to do that.

Full command:

docker exec -it $(docker container ls | grep redmine_mariadb_1 | cut -d' ' -f1) mysqldump -uroot bitnami_redmine | xz -e9 -zc - > redmine-database-dump-$(date -I).sql.xz

Let’s break it down:

  • docker exec -it (container ID) (command): Run a command on a running docker container.
  • docker container ls | grep redmine_mariadb_1 | cut -d' ' -f1: Get the ID (the first field of the output, hence cut -d' ' -f1) of the running docker container named redmine_mariadb_1
  • mysqldump -uroot bitnami_redmine: This is run on the docker container and dumps the Redmine database as SQL to stdout. No password is necessary since the Bitnami MariaDB image allows access without any password.
  • xz -e9 -zc -: Takes the data from mysqldump from stdin (-), compresses it using maximum compression settings (-e9 -z) and writes the compressed data to stdout.
  • > redmine-database-dump-$(date -I).sql.xz: Writes the compressed data from xz into a file called redmine-database-dump-(current date).sql.xz in the current directory.

The resulting file is called e.g. redmine-database-dump-2019-02-01.sql.xz and it’s placed in the current directory. Ensure that you run the command in a suitable directory. Run it in /tmp if you don’t know which directory might be suitable.

Now we can rsync it to the server:

rsync --checksum -Pavz redmine-database-dump-*.sql.xz my.backup.server:~/redmine-backup/

Since the filename contains the current date, this approach will not overwrite old daily backups of the database, so you can restore your database very flexibly.
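
For completeness, restoring such a dump would look roughly like this – an untested sketch, assuming the same container naming as above; adjust the filename to the backup you want to restore:

# Decompress the dump and pipe it into the MariaDB container
xz -dc redmine-database-dump-2019-02-01.sql.xz | docker exec -i $(docker container ls | grep redmine_mariadb_1 | cut -d' ' -f1) mysql -uroot bitnami_redmine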

Posted by Uli Köhler in Container, Docker, Linux, Project management