Container

A modern Kimai setup using docker-compose and nginx

This is the setup I use to run multiple production Kimai instances. In my example, I create the files in /opt/kimai-mydomain. The folder name is not critical, but it is helpful for distinguishing multiple independent Kimai instances.

First, let's create /opt/kimai-mydomain/docker-compose.yml. You don't need to modify anything in this file, as every relevant setting is loaded from .env via environment variables.

version: '3.5'
services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_DATABASE=kimai
      - MYSQL_USER=kimai
      - MYSQL_PASSWORD=${MARIADB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql
    command: --default-storage-engine innodb
    restart: unless-stopped
    healthcheck:
      test: mysqladmin -p${MARIADB_ROOT_PASSWORD} ping -h localhost
      interval: 20s
      start_period: 10s
      timeout: 10s
      retries: 3

  kimai:
    image: kimai/kimai2:apache-debian-master-prod
    environment:
      - APP_ENV=prod
      - TRUSTED_HOSTS=localhost,${HOSTNAME}
      - ADMINMAIL=${KIMAI_ADMIN_EMAIL}
      - ADMINPASS=${KIMAI_ADMIN_PASSWORD}
      - DATABASE_URL=mysql://kimai:${MARIADB_PASSWORD}@mariadb/kimai
    volumes:
      - ./kimai_var:/opt/kimai/var
    ports:
      - '17919:8001'
    depends_on:
      - mariadb
    restart: unless-stopped

Now we’ll create the configuration in /opt/kimai-mydomain/.env:

MARIADB_ROOT_PASSWORD=eishi5Pae3chai1Aeth2wiuCh7Ahhi
MARIADB_PASSWORD=su1aesheereithubo0iedootaeRooT
KIMAI_ADMIN_PASSWORD=toiWaeShaiz5Yeifohngu6chunuo6C
KIMAI_ADMIN_EMAIL=admin@kimai.mydomain.com
HOSTNAME=kimai.mydomain.com

Generate random passwords for .env! Do NOT leave the default passwords in .env!
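
You can generate suitable random passwords using, for example, the pwgen utility (install it via your package manager if needed); this prints three 30-character passwords:

pwgen 30 3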

You also need to set KIMAI_ADMIN_EMAIL and HOSTNAME correctly.

We can now create the Kimai data directory and set the correct permissions:

mkdir -p kimai_var
chown -R 33:33 kimai_var

(33 is the user ID and group ID of the www-data user inside the container)

Now we will initialize the Kimai database and the admin user:

docker-compose run kimai console kimai:install -n

Once you see a line like

[Sun Mar 07 23:53:35.986477 2021] [core:notice] [pid 50] AH00094: Command line: '/usr/sbin/apache2 -D FOREGROUND'

stop the process using Ctrl+C as this means that Kimai has finished installing.

Now we can create a systemd service that automatically starts Kimai using TechOverflow’s method from Create a systemd service for your docker-compose project in 10 seconds:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
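
Assuming the script derives the service name from the directory name (as the backup script shown later in this collection does), running it in /opt/kimai-mydomain should give you a kimai-mydomain service (a hypothetical name, shown here for illustration) that you can manage as usual:

sudo systemctl status kimai-mydomain.service
sudo systemctl restart kimai-mydomain.service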

Now we only need to create an nginx config to reverse proxy your Kimai domain. There is nothing special to consider in this config, so I'll show mine simply as an example that you can copy and paste.

server {
    server_name  kimai.mydomain.com;

    location / {
        proxy_pass http://localhost:17919/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen [::]:443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/kimai.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/kimai.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

}
server {
    if ($host = kimai.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name  kimai.mydomain.com;

    listen [::]:80; # managed by Certbot
    return 404; # managed by Certbot
}

After setting up your config (I always recommend setting up TLS using Let's Encrypt, even for test setups), open your browser and go to your Kimai domain, e.g. https://kimai.mydomain.com. You can log in to Kimai directly using the KIMAI_ADMIN_EMAIL and KIMAI_ADMIN_PASSWORD specified in .env.
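
You can also quickly verify from the command line that the reverse proxy responds:

curl -I https://kimai.mydomain.com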

Posted by Uli Köhler in Container, Docker

How to install python3 pip / pip3 in Alpine Linux

Problem:

You want to install pip3 (also called python3-pip) in Alpine Linux, but running apk add python3-pip shows you that the package doesn't exist:

/ # apk add python3-pip
ERROR: unable to select packages:
  python3-pip (no such package):
    required by: world[python3-pip]

Solution:

You need to install py3-pip instead using

apk add py3-pip

Example output:

/ # apk add py3-pip
(1/35) Installing libbz2 (1.0.8-r1)
(2/35) Installing expat (2.2.10-r1)
(3/35) Installing libffi (3.3-r2)
[...]


Posted by Uli Köhler in Alpine Linux, Container, Docker, Linux

A modern Docker-Compose config for Etherpad using nginx as reverse proxy

This is the configuration I use to run my Etherpad installations behind nginx:

docker-compose.yml

version: "3.5"
services:
  etherpad:
    image: etherpad/etherpad:latest
    environment:
      - TITLE=My Etherpad
      - DEFAULT_PAD_TEXT=My Etherpad
      - ADMIN_PASSWORD=${ETHERPAD_ADMIN_PASSWORD}
      - ADMIN_USER=admin
      - DB_TYPE=mysql
      - DB_HOST=mariadb
      - DB_PORT=3306
      - DB_USER=etherpad
      - DB_PASS=${MARIADB_PASSWORD}
      - DB_NAME=etherpad
      - DB_CHARSET=utf8mb4
      - API_KEY=${ETHERPAD_API_KEY}
      - SESSION_REQUIRED=false
    ports:
      - "17201:9001"
    depends_on:
      - mariadb

  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MYSQL_DATABASE=etherpad
      - MYSQL_USER=etherpad
      - MYSQL_PASSWORD=${MARIADB_PASSWORD}
    volumes:
      - './mariadb_data:/var/lib/mysql'
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
    healthcheck:
      test: mysqladmin -p${MARIADB_ROOT_PASSWORD} ping -h localhost
      interval: 20s
      start_period: 10s
      timeout: 10s
      retries: 3

.env

Remember to replace the passwords with your own random passwords!

MARIADB_ROOT_PASSWORD=ue9zahb8Poh1oeMieyaeFaicheecaz
MARIADB_PASSWORD=dieQuoghiu6sao9aiphei7eiquael5
ETHERPAD_API_KEY=een4Chohdiedohzaich0ega5thung6
ETHERPAD_ADMIN_PASSWORD=ahNee3OhR6aiCootaiy7uBui3rooco

nginx config

server {
    server_name  etherpad.mydomain.de;
    access_log off;
    error_log /var/log/nginx/etherpad.mydomain.de-error.log;

    location / {
        proxy_pass http://localhost:17201/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen [::]:443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/etherpad.mydomain.de/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/etherpad.mydomain.de/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

}

server {
    if ($host = etherpad.mydomain.de) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen [::]:80; # managed by Certbot
    server_name  etherpad.mydomain.de;
    return 404; # managed by Certbot
}
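
After creating the config, verify the syntax and reload nginx:

sudo nginx -t
sudo systemctl reload nginx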

How to autostart

Our post Create a systemd service for your docker-compose project in 10 seconds provides an extremely quick and easy-to-use one-liner to create and autostart a systemd service running docker-compose for your etherpad instance.

How to backup

This will be detailed in a future blogpost.

Posted by Uli Köhler in Container, Docker, nginx

How to use FCCT Transpiler online without installation

You can use fcct, the Fedora CoreOS Configuration Transpiler, in order to create Ignition JSON files for installing CoreOS from YAML.

Instead of installing fcct, you can use our hosted version:

TechOverflow FCCT Online

Currently our service runs FCCT 0.9.0 using the fcct-online container.
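
If you prefer to run fcct locally without installing it, the project also provides a container image; a typical invocation (assuming docker or podman is available) looks like this:

docker run -i --rm quay.io/coreos/fcct:release --pretty --strict < config.fcc > config.ign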

Posted by Uli Köhler in CoreOS, Linux

How to use apt install correctly in your Dockerfile

This is the correct way to use apt install in your Dockerfile:

ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y PACKAGE && rm -rf /var/lib/apt/lists/*

Key takeaways:

  • Set DEBIAN_FRONTEND=noninteractive to prevent some packages (tzdata, for example) from prompting for interactive input, which would otherwise cause the build to wait indefinitely for user input
  • Run apt update before the install command to fetch the current package lists
  • apt install with -y to prevent apt from asking you if you really want to install the packages
  • rm -rf /var/lib/apt/lists/* after the install command in order to prevent the cached apt lists (which are fetched by apt update) from ending up in the container image
  • All of that in one command joined by && in order to prevent docker from building separate layers for each part of the command (and to prevent it from first storing /var/lib/apt/lists in one layer and then deleting it in another layer)

Also see the official guide on Dockerfile best practices
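
Putting it all together, a minimal complete Dockerfile following these rules could look like this (curl is just a stand-in for whatever package you actually need):

FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
# Update package lists, install and clean up in a single layer
RUN apt update && apt install -y curl && rm -rf /var/lib/apt/lists/*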

Posted by Uli Köhler in Container, Docker

How to fix Gitlab CI error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on … no such host

Problem:

In your GitLab CI jobs, you see this error message while trying to run some docker command:

error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on 192.168.178.1:53: no such host

Solution:

This error occurs with docker-based GitLab runners, i.e. runners like ours that are configured using a docker executor. The error message means that the docker client inside the CI job container can't connect to the host's docker daemon.

In order to fix this, set these options in [runners.docker] in your gitlab-runner config.toml:

privileged = true
volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]

A valid config.toml looks like this:

concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "CoreOS-Haar-Runner"
  url = "https://gitlab.techoverflow.net/"
  token = "bemaiBie8usahMoo9ish"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    shm_size = 0
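
After editing config.toml, restart the runner to make sure the new settings are picked up (assuming gitlab-runner is installed as a system service):

sudo gitlab-runner restart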

Why this works

Adding "/var/run/docker.sock:/var/run/docker.sock" allows the docker client inside the gitlab-runner container to access the host's docker daemon. Without this, it tries to connect to the docker default URL http://docker:2375. That hostname can't be resolved via DNS (in my case the DNS server is 192.168.178.1, queried on the standard DNS port 53), hence docker prints the error message shown above.

Posted by Uli Köhler in Docker

How to enable SSH password authentication on Fedora CoreOS

Add this config line to your Fedora CoreOS Ignition config in order to enable SSH password authentication on your install.

storage:
  files:
    - path: /etc/ssh/sshd_config.d/20-enable-passwords.conf
      mode: 0644
      contents:
        inline: |
          PasswordAuthentication yes

By default, Fedora CoreOS will only allow pubkey authentication and disable password authentication. This Ignition config will set PasswordAuthentication yes as a config option for the SSH daemon.
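
Once the system has booted with this config, you can verify that the option is active by dumping the effective sshd configuration:

sudo sshd -T | grep -i passwordauthentication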

Original source: The Fedora CoreOS authentication guide.

Posted by Uli Köhler in CoreOS

How to install fcct on Ubuntu Linux

Note: This currently doesn't work due to libc issues on most Ubuntu versions. I am working on a solution!

I use fcct, the Fedora CoreOS Configuration Transpiler, in order to create Ignition JSON files for installing CoreOS from YAML.

Use these commands in your shell to download fcct to /usr/local/bin/fcct, thereby installing it. This works on most Linux distributions, but I mainly use it on Ubuntu and Debian.

sudo wget -O /usr/local/bin/fcct https://github.com/coreos/fcct/releases/download/v0.9.0/fcct-x86_64-unknown-linux-gnu
sudo chmod a+x /usr/local/bin/fcct
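
You can then check whether the binary runs on your system (given the libc issues mentioned above, this is also a quick way to see whether the install works for you):

fcct --version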

See Fedora CoreOS minimal ignition config for XCP-NG for my Fedora CoreOS ignition config

Posted by Uli Köhler in Container

How to install Kompose on Ubuntu in 10 seconds

Kompose is a tool that converts docker-compose.yml into Kubernetes YAML config files.

You can easily install it globally on most Linux distributions using these commands:

sudo curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o /usr/local/bin/kompose
sudo chmod a+x /usr/local/bin/kompose

After that, run

kompose convert

in a directory containing a docker-compose.yml. See the Kompose website for further information.
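
You can verify the installed version using:

kompose version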

Posted by Uli Köhler in Container, Docker, Kubernetes, Podman

How to install Podman on Ubuntu 20.04 in 25 seconds

Run this one-liner to install podman on your Ubuntu system:

wget -qO- https://techoverflow.net/scripts/install-podman-ubuntu.sh | sudo bash /dev/stdin

This is the code that is being run, which is exactly the code taken from the Podman installation page.

. /etc/os-release
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add -
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y install podman
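
Afterwards, you can verify that Podman works:

podman --version
podman info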


Posted by Uli Köhler in Container, Linux, Podman

Minimal Dockerfile example

This is a minimal Dockerfile that you can use to build a test container:

FROM alpine
CMD ["/bin/sh", "-c", "echo 'It works!'"]

Build example:

docker build -t techoverflow-minimal-docker .

Run example:

docker run techoverflow-minimal-docker

Example output:

$ docker run techoverflow-minimal-docker       
It works!


Posted by Uli Köhler in Docker

Best-practice configuration for MongoDB with docker-compose

Create /var/lib/mongodb/docker-compose.yml:

version: '3.1'
services:
  mongo:
    image: mongo
    volumes:
        - ./mongodb_data:/data/db
    ports:
        - 27017:27017

This will store the MongoDB data in /var/lib/mongodb/mongodb_data. I prefer this variant over docker volumes since it keeps all MongoDB-related data in the same directory.

Then create a systemd service using

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

See our post on how to Create a systemd service for your docker-compose project in 10 seconds for more details on this method.

You can now access MongoDB at localhost:27017! It will autostart after boot.
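
To quickly check that the instance responds, you can issue a ping command via the mongo shell inside the container (assuming the image ships the mongo shell):

docker-compose exec mongo mongo --eval 'db.runCommand({ping: 1})'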

Restart by

sudo systemctl restart mongodb

Stop by

sudo systemctl stop mongodb

View logs:

sudo journalctl -xfu mongodb

View logs in less:

sudo journalctl -xu mongodb


Posted by Uli Köhler in Docker, MongoDB

How to disable SELinux in Fedora CoreOS

Warning: Depending on your application, disabling the SELinux security layer might be a bad idea, since it might introduce new security risks, especially if the container system has security issues.

In order to disable SELinux on Fedora CoreOS, run this:

sudo sed -i -e 's/SELINUX=/SELINUX=disabled #/g' /etc/selinux/config
sudo systemctl reboot

Note that this will reboot your system in order for the changes to take effect.

The sed command will replace the default

SELINUX=enforcing

in /etc/selinux/config with

SELINUX=disabled #enforcing

i.e. SELinux is disabled and the original value is kept as a comment.
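
After the reboot, you can verify the SELinux status; it should report Disabled:

getenforce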


Posted by Uli Köhler in Container, Docker, Linux

How to export gitlab backup in Gitlab Docker container

GitLab provides an integrated feature to export a TAR file containing all data from the current GitLab instance.

How to make a GitLab backup?

In order to run this for a GitLab instance running on docker-compose, use

docker-compose exec gitlab gitlab-backup create STRATEGY=copy

where gitlab is the container running the gitlab image, e.g. gitlab/gitlab-ce:latest.

In case you are running a docker-based setup without docker-compose, run

docker exec my-gitlab-container gitlab-backup create STRATEGY=copy

where my-gitlab-container is the ID or name of the container.

Where to find the backups?

By default, GitLab stores the backups in /var/opt/gitlab/backups. In case you need to change this, look for the following line in /etc/gitlab/gitlab.rb:

gitlab_rails['backup_path'] = "/var/opt/gitlab/backups"

In my docker-compose configuration, I map out the entire /var/opt/gitlab directory:

volumes:
   - './data:/var/opt/gitlab'

hence I can find the backups in ./data/backups:

$ ls data/backups/
1607642274_2020_12_10_13.6.3_gitlab_backup.tar
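
To restore such a backup later, you can use the matching gitlab-backup restore command. Note that the BACKUP parameter is the backup filename without the _gitlab_backup.tar suffix (shown here with the example file from above); see the official GitLab documentation for the full restore procedure:

docker-compose exec gitlab gitlab-backup restore BACKUP=1607642274_2020_12_10_13.6.3
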
Posted by Uli Köhler in Docker

How to create a systemd backup timer & service in 10 seconds

In our previous post Create a systemd service for your docker-compose project in 10 seconds we introduced a script that automatically creates a systemd service to start a docker-compose-based project. In this post, we'll show how to use a similar script to create a systemd timer & service that runs a daily backup of your project.

First, you need to create a file named backup.sh in the directory where docker-compose.yml is located. This file will be run by the systemd service every day. What that file contains is entirely up to you, and we will provide examples in future blogposts; a minimal sketch follows below.
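
As a minimal sketch (assuming a docker-compose setup with a MariaDB service whose root password is stored in .env, as in our other posts), backup.sh could look like this:

#!/bin/bash
# Dump all databases of the mariadb service into a dated SQL file.
# -T is required since there is no TTY when run from a systemd service.
source .env
docker-compose exec -T mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql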

Secondly, run

wget -qO- https://techoverflow.net/scripts/create-backup-service.sh | sudo bash /dev/stdin

from the directory where docker-compose.yml is located. Note that the script will use the directory name as the name for the service and timer that are created. For example, running the script in /var/lib/redmine-mydomain will cause redmine-mydomain-backup to be used as the service name.

Example output from the script:

Creating systemd service... /etc/systemd/system/redmine-mydomain-backup.service
Creating systemd timer... /etc/systemd/system/redmine-mydomain-backup.timer
Enabling & starting redmine-mydomain-backup.timer
Created symlink /etc/systemd/system/timers.target.wants/redmine-mydomain-backup.timer → /etc/systemd/system/redmine-mydomain-backup.timer.

The script will create /etc/systemd/system/redmine-mydomain-backup.service containing the specification of what exactly to run:

[Unit]
Description=redmine-mydomain-backup

[Service]
Type=oneshot
ExecStart=/bin/bash backup.sh
WorkingDirectory=/var/lib/redmine-mydomain

and /etc/systemd/system/redmine-mydomain-backup.timer containing the logic for when the .service is started:

[Unit]
Description=redmine-mydomain-backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

and will automatically start and enable the timer. This means: no further steps are needed after running this script!

In order to show the current status of the service, use e.g.

sudo systemctl status redmine-mydomain-backup.timer

Example output:

● redmine-mydomain-backup.timer - redmine-mydomain-backup
     Loaded: loaded (/etc/systemd/system/redmine-mydomain-backup.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Thu 2020-12-10 02:50:31 CET; 19min ago
    Trigger: Fri 2020-12-11 00:00:00 CET; 20h left
   Triggers: ● redmine-mydomain-backup.service

Dec 10 02:50:31 myserverhostname systemd[1]: Started redmine-mydomain-backup.

In the

Trigger: Fri 2020-12-11 00:00:00 CET; 20h left

line you can see when the service will be run next. By default, the script generates timers that run OnCalendar=daily, which means the service will be run at 00:00:00 every day. Check out the systemd.time manpage for further information on the syntax you can use to specify other schedules.

In order to run the backup immediately (it will still run daily after doing this), do

sudo systemctl start redmine-mydomain-backup.service

(note that you need to run systemctl start on the .service! Running systemctl start on the .timer will only enable the timer and not run the service immediately).

In order to view the logs, use

sudo journalctl -xfu redmine-mydomain-backup.service

(just like above, you need to run journalctl -xfu on the .service, not on the .timer).

In order to disable automatic backups, use e.g.

sudo systemctl disable redmine-mydomain-backup.timer

Source code:

#!/bin/bash
# Create a systemd service & timer that runs the given backup daily
# by Uli Köhler - https://techoverflow.net
# Licensed as CC0 1.0 Universal
export SERVICENAME=$(basename $(pwd))-backup

export SERVICEFILE=/etc/systemd/system/${SERVICENAME}.service
export TIMERFILE=/etc/systemd/system/${SERVICENAME}.timer

echo "Creating systemd service... $SERVICEFILE"
cat <<EOF | sudo tee $SERVICEFILE >/dev/null
[Unit]
Description=$SERVICENAME

[Service]
Type=oneshot
ExecStart=/bin/bash backup.sh
WorkingDirectory=$(pwd)
EOF

echo "Creating systemd timer... $TIMERFILE"
cat <<EOF | sudo tee $TIMERFILE >/dev/null
[Unit]
Description=$SERVICENAME

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

echo "Enabling & starting $SERVICENAME.timer"
sudo systemctl enable $SERVICENAME.timer
sudo systemctl start $SERVICENAME.timer


Posted by Uli Köhler in Docker, Linux

How to use pg_dump in Gitlab Docker container

When using the official GitLab Docker container, you can use this command to run pg_dump:

docker exec -t -u gitlab-psql [container name] pg_dump -h /var/opt/gitlab/postgresql/ -d gitlabhq_production > gitlab-dump.sql

This will save the SQL dump of the database into gitlab-dump.sql.

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab pg_dump -h /var/opt/gitlab/postgresql/ -d gitlabhq_production > gitlab-dump.sql

Note that gitlab in this command is the container name.

Posted by Uli Köhler in Docker, Linux

How to run psql in Gitlab Docker image

When using the official GitLab Docker container, you can use this command to run psql:

docker exec -t -u gitlab-psql [container name] psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

Note that gitlab in this command is the container name.

Posted by Uli Köhler in Databases, Docker, Linux

How to fix XCP-NG XENAPI_MISSING_PLUGIN(xscontainer) or Error on getting the default coreOS cloud template

Problem:

When creating a CoreOS container on your XCP-NG host, XCP-NG center or XenOrchestra tells you

Cloud config: Error on getting the default coreOS cloud template

with the error message

XENAPI_MISSING_PLUGIN(xscontainer)
This is a XenServer/XCP-ng error

Solution:

Log into the host’s console as root using SSH or the console in XCP-NG center or XenOrchestra and run

yum install xscontainer

After that, reload the page (F5) you use to create your container. No host restart is required.

Note that if you have multiple hosts, you need to yum install xscontainer for each host individually.

Posted by Uli Köhler in Docker, Virtualization

How to backup data from docker-compose MariaDB container using mysqldump

For containers with a MYSQL_ROOT_PASSWORD stored in .env

This is the recommended best practice. For this example, we will assume that .env looks like this:

MARIADB_ROOT_PASSWORD=mophur3roh6eegiL8Eeto7goneeFei

To create a dump:

source .env && docker-compose exec -T mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql, ensure the container is running, then use:

source .env && docker-compose exec -T mariadb mysql -uroot -p${MARIADB_ROOT_PASSWORD} < mariadb-dump.sql

Note that you have to replace mariadb with the name of your database service in docker-compose.yml.

For containers with a MYSQL_ROOT_PASSWORD set to some value not stored in .env

This is secure, but you typically have to copy the password multiple times: once for the mariadb container, once for whatever container or application uses the database, and once for any backup script that exports an SQL dump of the entire database.

To create a dump:

docker-compose exec -T mariadb mysqldump -uroot -pYOUR_MARIADB_ROOT_PASSWORD --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql:

docker-compose exec -T mariadb mysql -uroot -pYOUR_MARIADB_ROOT_PASSWORD < mariadb-dump.sql

Replace YOUR_MARIADB_ROOT_PASSWORD with the root password of your installation.

Furthermore, you have to replace mariadb with the name of your database service in docker-compose.yml.

For containers with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This configuration is a security risk – see The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes.

To create a dump:

docker-compose exec -T mariadb mysqldump -uroot --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql:

docker-compose exec -T mariadb mysql -uroot < mariadb-dump.sql

More posts on this topic

TechOverflow is currently planning a post on how to use bup in order to provide quick & efficient backups of docker-based MariaDB/MySQL installations.

Posted by Uli Köhler in Docker

The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This is part of a common docker-compose.yml which is frequently seen on the internet:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
 [...]

Simple and secure, right? A no-root-password MariaDB instance that’s running in a separate container and does not have its port 3306 exposed – so only services from the same docker-compose.yml can reach it since docker-compose puts all those services in a separate network.

Wrong.

While the MariaDB instance is not reachable from the internet since no port is exposed to the outside, it can be reached by any process on the host via its internal IP address.

In order to comprehend what’s happening, we shall take a look at docker’s networks. In this case, my docker-compose config is called redmine.

$ docker network ls | grep redmine
ea7ed38f469b        redmine_default           bridge              local

This is the network that docker-compose creates without any explicit network configuration. Let's inspect the network using docker network inspect redmine_default to show the attached containers:

[
    // [...]
        "Containers": {
            "2578fc65b4dab9f204d0a252e421dd4ddd9f41c35642d48350f4e59370581757": {
                "Name": "redmine_mariadb_1",
                "EndpointID": "1e6d81acc096a12fc740173f4e107090333c42e8a86680ac5c9886c148d578e7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "7867f71d2a36265c34c133b70aea487b90ea68fcf30ecb42d6e7e9a376cf8e07": {
                "Name": "redmine_redmine_1",
                "EndpointID": "f5ac7b3325aa9bde12f0c625c4881f9a6fc9957da4965767563ec9a3b76c19c3",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
    // [...]
]

We can see that the IP address of the redmine_mariadb_1 container is 172.18.0.2.

Using the internal IP 172.18.0.2, you can access the MySQL server.

Any process on the host (even from unprivileged users) can connect to the container without any password, e.g.

$ mysqldump -uroot -h172.18.0.2 --all-databases
# This will print the dump of the entire MariaDB database

How to mitigate this security risk?

Mitigation is quite easy since we only need to set a root password for the MariaDB instance.

My recommended best practice is to avoid duplicate passwords. In order to do this, create a .env file in the directory where docker-compose.yml is located.

MARIADB_ROOT_PASSWORD=aiPaipei6ookaemue4voo0NooC0AeH

Remember to replace the password with a random password, or use this shell one-liner to create it automatically:

echo MARIADB_ROOT_PASSWORD=$(pwgen 30) > .env

Now we can use ${MARIADB_ROOT_PASSWORD} in docker-compose.yml wherever the MariaDB root password is required, for example:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
  redmine:
    image: 'redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - REDMINE_EMAIL=admin@mydomain.com
      - REDMINE_DB_MYSQL=mariadb
      - REDMINE_DB_USERNAME=root
      - REDMINE_DB_PASSWORD=${MARIADB_ROOT_PASSWORD}
    ports:
      - '3718:3000'
    volumes:
      - './redmine_data/conf:/usr/src/redmine/conf'
      - './redmine_data/files:/usr/src/redmine/files'
      - './redmine_themes:/usr/src/redmine/public/themes'
    depends_on:
      - mariadb

Note that the mariadb docker image will not change the root password if the database directory already exists (mariadb_data in this example).

My recommended best practice for changing the root password is to use mysqldump --all-databases to export the entire database to a SQL file, then back up and delete the data directory, then re-start the container so that the new root password will be set. After that, re-import the dump from the SQL file. A sketch of this procedure is shown below.
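
A sketch of that procedure, assuming the empty-password setup from above and the new MARIADB_ROOT_PASSWORD already present in .env:

# Export everything while the old (password-less) instance is still running
docker-compose exec -T mariadb mysqldump -uroot --all-databases > full-dump.sql
docker-compose down
# Back up & remove the old data directory so the container re-initializes
mv mariadb_data mariadb_data.bak
# Start a fresh instance; it is initialized with the new MYSQL_ROOT_PASSWORD
docker-compose up -d
# Once MariaDB accepts connections again, re-import the dump
source .env && docker-compose exec -T mariadb mysql -uroot -p${MARIADB_ROOT_PASSWORD} < full-dump.sql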

Posted by Uli Köhler in Databases, Docker, Linux