Docker

How to optimize MySQL/MariaDB tables in docker-compose

If your MariaDB/MySQL root password is stored in .env, use this command:

source .env && docker-compose exec mariadb mysqlcheck -uroot -p$MARIADB_ROOT_PASSWORD --auto-repair --optimize --all-databases

You can also use the root password directly in the command:

docker-compose exec mariadb mysqlcheck -uroot -phoox8AiFahuniPaivatoh2iexighee --auto-repair --optimize --all-databases
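If you only want to optimize a single database, mysqlcheck also accepts explicit database names instead of --all-databases. A sketch, with redmine as a placeholder for your database name:

source .env && docker-compose exec mariadb mysqlcheck -uroot -p$MARIADB_ROOT_PASSWORD --auto-repair --optimize --databases redmine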


Posted by Uli Köhler in Container, Databases, Docker

How to enable Collabora for multiple domains using docker-compose

In our previous post How to run Collabora office for Nextcloud using docker-compose we investigated how to configure your Collabora office server using docker-compose.yml.

If you want to use multiple domains, you need to change this line in .env:

COLLABORA_DOMAIN=collabora.mydomain.com

By reading the source code, I found out that COLLABORA_DOMAIN is interpreted as a regular expression, so you can use the (...|...|...) alternation syntax:

COLLABORA_DOMAIN=(nextcloud.mydomain.com|nextcloud.myseconddomain.com)

After that, restart Collabora.
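Note that docker-compose restart alone does not pick up changes to .env; recreate the container instead by running this in the directory containing your Collabora docker-compose.yml:

docker-compose up -d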

Posted by Uli Köhler in Docker, Nextcloud

How to run Collabora office for Nextcloud using docker-compose

Create this docker-compose.yml, e.g. in /opt/collabora-mydomain:

version: '3'
services:
  code:
    image: collabora/code:latest
    restart: always
    environment:
      - password=${COLLABORA_PASSWORD}
      - username=${COLLABORA_USERNAME}
      - domain=${COLLABORA_DOMAIN}
      - extra_params=--o:ssl.enable=true
    ports:
      - 9980:9980

Now create this .env with the configuration. You need to change the password and the domain!

COLLABORA_USERNAME=admin
COLLABORA_PASSWORD=veecheit0Phophiesh1fahPah0Wue3
COLLABORA_DOMAIN=collabora.mydomain.com

Now you can create a systemd service for autostart using our script from Create a systemd service for your docker-compose project in 10 seconds.

Run this from inside your directory (e.g. /opt/collabora-mydomain):

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Now you need to configure your reverse proxy to point to port 9980. Here’s an example nginx config:

server {
    server_name collabora.mydomain.com;

    access_log /var/log/nginx/collabora.mydomain.com.access_log;
    error_log /var/log/nginx/collabora.mydomain.com.error_log info;

    location / {
        proxy_pass https://127.0.0.1:9980;
        proxy_http_version 1.1;
        proxy_read_timeout 3600s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host            $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header X-Frontend-Host $host;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    }

    listen [::]:80; # managed by Certbot
}

Now open your browser and navigate to collabora.mydomain.com. If Collabora is running correctly, you should see:

OK
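You can run the same check from the shell; a quick sketch (-k skips certificate verification, which is useful when querying the container directly):

curl -k https://127.0.0.1:9980/

This should also print OK.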

In Nextcloud, go to https://nextcloud.mydomain.com/settings/admin/richdocuments and set the Collabora Online server URL to:

https://admin:veecheit0Phophiesh1fahPah0Wue3@collabora.mydomain.com

Be sure to use your custom password from .env and your custom domain!

Click Save and you should see Collabora Online server is reachable.

Posted by Uli Köhler in Container, Docker, Nextcloud

How to fix Docker-Nextcloud ‘Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it.’

Problem:

When using the official nextcloud docker image, you will see a message like

Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it.

on the system overview page

Solution:

This is a bug in the Docker image and will likely be resolved soon. In the meantime, we can just manually install the required library in the container:

docker-compose exec nextcloud apt -y update
docker-compose exec nextcloud apt -y install libmagickcore-6.q16-6-extra

If you re-create the container, this change will be lost. In my opinion, it’s still best to opt for this simple solution and possibly repeat it once or twice, as opposed to a permanent but much more labour-intensive procedure like maintaining a custom Docker image and later migrating back to the official one.

Posted by Uli Köhler in Container, Docker

How to fix build ‘lz4 library not found, compiling without it’

Problem:

When compiling a piece of software – for example in your Dockerfile or on your PC – you see a warning message like

lz4 library not found, compiling without it

Solution:

Install liblz4, the library for the LZ4 compression algorithm. On Ubuntu/Debian-based systems you can install the development package using

sudo apt -y install liblz4-dev

In your Dockerfile, install it using

RUN apt update && apt install -y liblz4-dev && rm -rf /var/lib/apt/lists/*

Otherwise, refer to the liblz4 GitHub page.

Posted by Uli Köhler in C/C++, Docker

Simple Elasticsearch setup with docker-compose

The following docker-compose.yml is a simple starting point for using ElasticSearch within a docker-based setup:

version: '2.2'
services:
    elasticsearch1:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
        container_name: elasticsearch1
        environment:
            - cluster.name=docker-cluster
            - node.name=elasticsearch1
            - cluster.initial_master_nodes=elasticsearch1
            - bootstrap.memory_lock=true
            - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
            - http.cors.enabled=true
            - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
            - http.cors.allow-credentials=true
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
        ulimits:
            memlock:
                soft: -1
                hard: -1
        volumes:
            - ./esdata1:/usr/share/elasticsearch/data
        ports:
            - 9200:9200
    dejavu:
        image: appbaseio/dejavu
        container_name: dejavu
        ports:
            - 1358:1358

Now create the esdata1 directory with the correct permissions:

sudo mkdir esdata1
sudo chown -R 1000:1000 esdata1

We also need to increase the vm.max_map_count sysctl parameter (Elasticsearch requires at least 262144):

echo -e "\nvm.max_map_count=524288\n" | sudo tee -a /etc/sysctl.conf && sudo sysctl -w vm.max_map_count=524288


I recommend placing it in /opt/elasticsearch, but you can place it wherever you like.

If you want to autostart it on boot, see Create a systemd service for your docker-compose project in 10 seconds or just use this snippet from said post:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will create a systemd service named elasticsearch (if your directory is named elasticsearch, like /opt/elasticsearch) and enable and start it immediately. Hence you can restart it using

sudo systemctl restart elasticsearch

and view the logs using

sudo journalctl -xfu elasticsearch
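To verify that Elasticsearch is up, you can query the standard cluster health endpoint on port 9200:

curl http://localhost:9200/_cluster/health?pretty

Note that for a single-node setup like this one, a yellow cluster status is normal once indices with replicas exist, since replica shards cannot be allocated on the same node.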

For more complex setups involving more than one node, see our previous post ElasticSearch docker-compose.yml and systemd service generator.

Posted by Uli Köhler in Container, Databases, Docker, ElasticSearch

How I connected a network_mode: host container to its database container

I had set up my FreePBX to use network_mode: 'host', but faced issues because it couldn’t connect to the MariaDB container, which was not using network_mode: 'host'.

I fixed this by:

  • Setting the MariaDB container to network_mode: 'host'
  • Setting the FreePBX container to connect to 127.0.0.1 (DB_HOST=127.0.0.1), as sketched below. Setting it to localhost did NOT allow FreePBX to connect to MariaDB!
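A minimal sketch of the resulting docker-compose.yml (the FreePBX image name and the remaining settings are placeholders for whatever you already use):

version: '3'
services:
  mariadb:
    image: mariadb:latest
    network_mode: 'host' # MariaDB now listens on the host's port 3306
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql
  freepbx:
    image: my-freepbx-image:latest # placeholder for the FreePBX image you use
    network_mode: 'host'
    environment:
      - DB_HOST=127.0.0.1 # NOT localhost!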
Posted by Uli Köhler in Docker, FreePBX, Networking

Recommended docker-compose mariadb service

I recommend this service:

mariadb:
  image: mariadb:latest
  environment:
    - MYSQL_DATABASE=servicename
    - MYSQL_USER=servicename
    - MYSQL_PASSWORD=${MARIADB_PASSWORD}
    - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
  volumes:
    - ./mariadb_data:/var/lib/mysql
  command: --default-storage-engine innodb
  restart: unless-stopped
  healthcheck:
    test: mysqladmin -p${MARIADB_ROOT_PASSWORD} ping -h localhost
    interval: 20s
    start_period: 10s
    timeout: 10s
    retries: 3

(replace servicename with the name of your service, e.g. kimai, redmine, …) and use this .env:

MARIADB_ROOT_PASSWORD=eiNgam3woh4ahTee4chi9vohvauk6a
MARIADB_PASSWORD=shahb4alubei5Vie8arahhok2morae
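A companion service in the same docker-compose.yml can then reach the database under the hostname mariadb. A sketch (the image and the DB_… variable names are placeholders for whatever your application expects):

servicename:
  image: servicename:latest
  depends_on:
    - mariadb
  environment:
    - DB_HOST=mariadb
    - DB_NAME=servicename
    - DB_USER=servicename
    - DB_PASSWORD=${MARIADB_PASSWORD}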


Posted by Uli Köhler in Container, Docker

Local redmine backup using bup (docker-compose compatible)

This script uses bup to back up your docker-compose-based redmine installation to a local bup repository, e.g. in /var/lib/bup/my-redmine.bup:

#!/bin/bash
# Auto-determine the name from the directory name
# /opt/my-redmine => $NAME=my-redmine => /var/lib/bup/my-redmine.bup
export NAME=$(basename $(pwd))
export BUP_DIR=/var/lib/bup/$NAME.bup
bup_directory() {
        echo "BUPing $1"
        bup -d $BUP_DIR index $1 && bup save -9 --strip-path $(pwd) -n $1 $1
}
# Init
bup -d $BUP_DIR init
# Save MariaDB
source .env && docker-compose exec mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases | bup -d $BUP_DIR split -n $NAME-mariadb.sql
# Save directories
bup_directory redmine_data
bup_directory redmine_themes
# Backup self
bup_directory backup.sh
bup_directory docker-compose.yml
# OPTIONAL: Add par2 information
#   This is only recommended for backup on unreliable storage or for extremely critical backups
#   If you already have bitrot protection (like BTRFS with regular scrubbing), this might be overkill.
# Uncomment this line to enable:
# bup fsck -g

# OPTIONAL: Cleanup old backups
bup -d $BUP_DIR prune-older --keep-all-for 1m --keep-dailies-for 6m --keep-monthlies-for forever -9 --unsafe

It will backup:

  • MySQL data from inside redmine using mysqldump
  • The redmine_data folder
  • The redmine_themes folder
  • The backup script backup.sh itself
  • docker-compose.yml

Place it in the same folder where docker-compose.yml is located.

The script is compatible with our previous post How to create a systemd backup timer & service in 10 seconds.
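To restore from such a backup, use bup restore for the directories and bup join for the split SQL dump. A sketch, assuming the repository is /var/lib/bup/my-redmine.bup as above:

export BUP_DIR=/var/lib/bup/my-redmine.bup
# Restore the latest snapshot of the redmine_data branch into ./restore
bup restore -C ./restore /redmine_data/latest/.
# Reassemble the MariaDB dump from the split branch
bup join my-redmine-mariadb.sql > my-redmine-mariadb-dump.sql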

Posted by Uli Köhler in bup, Docker

Simple 5-minute Vaultwarden (SQLite) setup using docker-compose

In order to set up Vaultwarden in a docker-compose & SQLite based configuration (e.g. on CoreOS), we first need to create a directory. I recommend using /opt/vaultwarden.

Run all the following commands and place all the following files in the /opt/vaultwarden directory!

First, we’ll create a .env file containing a random ADMIN_TOKEN (I recommend using pwgen 30). Not using a unique, random token here is a huge security risk since the ADMIN_TOKEN grants full admin access to Vaultwarden!

ADMIN_TOKEN=iqueingufo3LohshoohoG3tha2zou6
SIGNUPS_ALLOWED=true

Now place your docker-compose.yml:

version: '3.4'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    environment:
      - ADMIN_TOKEN=${ADMIN_TOKEN}
      - SIGNUPS_ALLOWED=${SIGNUPS_ALLOWED}
    volumes:
      - ./vw_data:/data
    ports:
      - 17881:80

Next, we’ll create a systemd service to autostart docker-compose:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will automatically start vaultwarden.

Now you need to configure your reverse proxy to point https://vaultwarden.mydomain.com to local port 17881. You need to use HTTPS; plain HTTP won’t work because browsers only expose the required crypto APIs in secure contexts.
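Here is an example nginx config, modeled on the reverse proxy configs from our other posts; a sketch, so adjust the domain and the Let’s Encrypt paths to your setup:

server {
    server_name vaultwarden.mydomain.com;

    location / {
        proxy_pass http://localhost:17881/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen [::]:443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/vaultwarden.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/vaultwarden.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
}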

Now we need to configure vaultwarden using the admin interface.

Go to https://vaultwarden.mydomain.com/admin and enter the ADMIN_TOKEN from .env.

There are two things that you need to configure here:

  • The Domain Name under General settings
  • The email server settings under SMTP email settings

With these settings configured, Vaultwarden should be up and running and you can access it using https://vaultwarden.mydomain.com .

After the first user has been set up and tested, you can uncheck Allow new signups under General settings in the admin interface. This is recommended, since otherwise anyone who guesses your domain name can create a Vaultwarden account.

Posted by Uli Köhler in Container, Docker

Simple 15-minute passbolt setup using docker-compose

This is how I run my local passbolt instance.

First, create the directory. I use /opt/passbolt. Run all the following commands and place all the following files in that directory!

Then, initialize the passbolt_gpg folder with the correct permissions (33 is the user ID and group ID of the www-data user inside the container):

mkdir -p passbolt_gpg
chown -R 33:33 passbolt_gpg

Now create a .env file with random passwords (I recommend using pwgen 30):

MARIADB_ROOT_PASSWORD=meiJieseingi4dutiareimoh2Aiv5j
MARIADB_USER_PASSWORD=ohre3ye1oNexeShiuChaengahzuemo

Now place your docker-compose.yml:

version: '3.4'
services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_DATABASE=passbolt
      - MYSQL_USER=passbolt
      - MYSQL_PASSWORD=${MARIADB_USER_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql

  passbolt:
    image: passbolt/passbolt:latest-ce
    tty: true
    depends_on:
      - mariadb
    environment:
      - DATASOURCES_DEFAULT_HOST=mariadb
      - DATASOURCES_DEFAULT_USERNAME=passbolt
      - DATASOURCES_DEFAULT_PASSWORD=${MARIADB_USER_PASSWORD}
      - DATASOURCES_DEFAULT_DATABASE=passbolt
      - DATASOURCES_DEFAULT_PORT=3306
      - DATASOURCES_QUOTE_IDENTIFIER=true
      - APP_FULL_BASE_URL=https://passbolt.mydomain.com
      - EMAIL_DEFAULT_FROM=passbolt@mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_HOST=smtp.mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_PORT=587
      - EMAIL_TRANSPORT_DEFAULT_USERNAME=passbolt@mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_PASSWORD=yei5QueiNa5ahF0Aice8Na0aphoyoh
      - EMAIL_TRANSPORT_DEFAULT_TLS=true
      - PASSBOLT_KEY_EMAIL=passbolt@mydomain.com
    volumes:
      - ./passbolt_gpg:/etc/passbolt/gpg
      - ./passbolt_web:/usr/share/php/passbolt/webroot/img/public
    command: ["/usr/bin/wait-for.sh", "-t", "0", "mariadb:3306", "--", "/docker-entrypoint.sh"]
    ports:
      - 17880:80

Be sure to replace all the email addresses, domain names and SMTP credentials by the values appropriate for your setup.

Now start up passbolt for the first time; it will initialize the database:

docker-compose up

You need to keep passbolt running during the following steps.

First, we’ll send a test email:

docker-compose exec passbolt su -m -c "bin/cake passbolt send_test_email"

If you see

The message has been successfully sent!

then your SMTP config is correct. Otherwise, debug the error message and, if necessary, modify the EMAIL_… environment variables in docker-compose.yml and restart passbolt afterwards.

Now we’ll create an admin user:

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u john.doe@mydomain.com -f John -l Doe -r admin" -s /bin/sh www-data

If you want to create a normal (non-admin) user, use user instead of admin:

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u jane.doe@mydomain.com -f Jane -l Doe -r user" -s /bin/sh www-data

After that, the only thing left to do is to create a systemd service to autostart your passbolt service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Passbolt is now running on port 17880 (you can configure this using docker-compose.yml). Just configure your reverse proxy appropriately to point to this port.

Posted by Uli Köhler in Container, Docker

How to install ruby & rubygems in Alpine Linux

Problem:

You want to install Ruby and the gem package manager in Alpine Linux, but running apk add ruby rubygems shows you that the rubygems package doesn’t exist:

/ # apk add ruby rubygems
ERROR: unable to select packages:
  rubygems (no such package):
    required by: world[rubygems]

Solution:

gem is included in the ruby package, so the only commands you need to run are

apk update
apk add ruby

Example output:

/ # apk add ruby
(1/7) Installing ca-certificates (20191127-r5)
(2/7) Installing gdbm (1.19-r0)
(3/7) Installing gmp (6.2.1-r0)
(4/7) Installing readline (8.1.0-r0)
(5/7) Installing yaml (0.2.5-r0)
(6/7) Installing ruby-libs (2.7.3-r0)
(7/7) Installing ruby (2.7.3-r0)
Executing busybox-1.32.1-r6.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 928 MiB in 154 packages

After doing that, you can immediately use both ruby and gem.
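If you need Ruby inside a Dockerfile, the equivalent is (a sketch; --no-cache avoids having to clean up the apk index in a separate step):

FROM alpine:3.13
RUN apk add --no-cache ruby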

Posted by Uli Köhler in Alpine Linux, Container, Docker, Linux, Ruby

How to run mkpasswd with yescrypt on Ubuntu/Debian

Currently the Ubuntu/Debian mkpasswd command does not support yescrypt.

In order to use it anyway, we can use the ulikoehler/mkpasswd docker image to run the proper version of mkpasswd:

docker run --rm -it ulikoehler/mkpasswd

This will prompt you for a password and then print the salted yescrypt hash of that password:

$ docker run --rm -it ulikoehler/mkpasswd
Password:
$y$j9T$YzrfO5lQkDWahpz5pwYzg/$HzQoMYt.7E1jj.sd6OyYCGI/Qk6oGehNgz5uvY1qp59
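You can then use this hash, for example, to set the password of an existing user. A sketch, assuming a user named jdoe; note the single quotes, since the hash contains $ characters:

sudo usermod -p '$y$j9T$YzrfO5lQkDWahpz5pwYzg/$HzQoMYt.7E1jj.sd6OyYCGI/Qk6oGehNgz5uvY1qp59' jdoe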


Posted by Uli Köhler in Docker, Linux

How to use yum in Dockerfile correctly

Example of how to install the mkpasswd package using yum in your Dockerfile:

RUN yum -y install mkpasswd && yum -y clean all && rm -rf /var/cache

There are two basic aspects to remember here:

  1. Use yum -y in order to avoid interactive Y/N questions during the automated build
  2. Use yum -y clean all && rm -rf /var/cache to clean up after the call to yum -y install

Complete Dockerfile example:

FROM fedora:34
RUN yum -y install mkpasswd && yum -y clean all && rm -rf /var/cache
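To verify, build and run the image; a sketch (the tag yum-example is arbitrary):

docker build -t yum-example .
docker run --rm -it yum-example mkpasswd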


Posted by Uli Köhler in Container, Docker

How to fix docker.errors.DockerException: Error while fetching server API version: (‘Connection aborted.’, FileNotFoundError(2, ‘No such file or directory’))

Problem:

While running a docker command like docker-compose pull, you see an error message like

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 33, in <module>
    sys.exit(load_entry_point('docker-compose==1.27.4', 'console_scripts', 'docker-compose')())
  File "/usr/lib/python3.8/site-packages/compose/cli/main.py", line 67, in main
    command()
  File "/usr/lib/python3.8/site-packages/compose/cli/main.py", line 123, in perform_command
    project = project_from_options('.', options)
  File "/usr/lib/python3.8/site-packages/compose/cli/command.py", line 60, in project_from_options
    return get_project(
  File "/usr/lib/python3.8/site-packages/compose/cli/command.py", line 131, in get_project
    client = get_client(
  File "/usr/lib/python3.8/site-packages/compose/cli/docker_client.py", line 41, in get_client
    client = docker_client(
  File "/usr/lib/python3.8/site-packages/compose/cli/docker_client.py", line 170, in docker_client
    client = APIClient(**kwargs)
  File "/usr/lib/python3.8/site-packages/docker/api/client.py", line 197, in __init__
    self._version = self._retrieve_server_version()
  File "/usr/lib/python3.8/site-packages/docker/api/client.py", line 221, in _retrieve_server_version
    raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

Solution:

This means you haven’t started your Docker service!

First, try to start it using

sudo systemctl start docker

or

sudo service docker start

or

sudo /etc/init.d/docker restart

(whatever works with your distribution).

After that, retry the command that originally caused the error message to appear.

In case it still shows the same error message, try the following steps:

  • First, check /var/log/docker.log using
    cat /var/log/docker.log

    Check that file for errors during docker startup.

  • Also check whether the user you’re running the command as is a member of the docker group. Insufficient permissions cause a Permission denied error rather than a FileNotFoundError(2, 'No such file or directory'), but the error messages might look similar in some cases. If the user is missing from the group, add it as shown below.
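To add the current user to the docker group (a standard fix; the group change only takes effect after logging out and back in, or after running newgrp docker):

sudo usermod -aG docker $USER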
Posted by Uli Köhler in Container, Docker, Linux

How to fix Synology Docker: failed to initialize logging driver: database is locked

Problem:

When you try to start a specific Docker container using the Synology NAS GUI, the container stops unexpectedly and you see error messages like these in the logs:

Start container mycontainer failed: {"message":"failed to initialize logging driver: database is locked"}.
Signal container mycontainer failed: {"message":"Cannot kill container: mycontainer: Container 5136ddceeb46004c5b18f04eb9ec10cac3808938515874fc31185b0964232201 is not running"}.

Solution:

I fixed this problem by stopping the container and then duplicating it: right-click on the container -> Settings -> Duplicate Settings.

That will create a new container with the given settings. Note that local ports will be set to Auto and will not be copied over, so if you use fixed local ports, you need to set them to a different value in the original container and then set the local ports on the new container to the desired fixed value. Also note that files inside the container are not copied over. In my configuration, all relevant files are stored in mapped volumes on the NAS.

The root cause of this issue seems to be that the logging database for this specific container has been locked by some process. The issue is always limited to a certain container and will not affect other containers (though it could in principle occur for more than one container). At least in my specific case, the issue was not caused by a reboot, and it will also not be fixed by rebooting the Synology NAS: just before I encountered the issue, my NAS had not been rebooted for months. It might, however, be related to Synology package updates, since I had updated some packages using the Package Manager just before encountering the issue, including a Synology Mail Plus update which failed on the first attempt but succeeded when I clicked Update again.

Posted by Uli Köhler in Docker, Networking

A modern Kimai setup using docker-compose and nginx

This is the setup I use to run multiple production Kimai instances. In my example, I create the files in /opt/kimai-mydomain. The folder name is not critical, but it is helpful for distinguishing multiple independent Kimai instances.

First, let’s create /opt/kimai-mydomain/docker-compose.yml. You don’t need to modify anything in this file, as all relevant configuration is loaded from .env using environment variables.

version: '3.5'
services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_DATABASE=kimai
      - MYSQL_USER=kimai
      - MYSQL_PASSWORD=${MARIADB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql
    command: --default-storage-engine innodb
    restart: unless-stopped
    healthcheck:
      test: mysqladmin -p${MARIADB_ROOT_PASSWORD} ping -h localhost
      interval: 20s
      start_period: 10s
      timeout: 10s
      retries: 3

  kimai:
    image: kimai/kimai2:apache-debian-master-prod
    environment:
      - APP_ENV=prod
      - TRUSTED_HOSTS=localhost,${HOSTNAME}
      - ADMINMAIL=${KIMAI_ADMIN_EMAIL}
      - ADMINPASS=${KIMAI_ADMIN_PASSWORD}
      - DATABASE_URL=mysql://kimai:${MARIADB_PASSWORD}@mariadb/kimai
    volumes:
      - ./kimai_var:/opt/kimai/var
    ports:
      - '17919:8001'
    depends_on:
      - mariadb
    restart: unless-stopped

Now we’ll create the configuration in /opt/kimai-mydomain/.env:

MARIADB_ROOT_PASSWORD=eishi5Pae3chai1Aeth2wiuCh7Ahhi
MARIADB_PASSWORD=su1aesheereithubo0iedootaeRooT
KIMAI_ADMIN_PASSWORD=toiWaeShaiz5Yeifohngu6chunuo6C
KIMAI_ADMIN_EMAIL=admin@mydomain.com
HOSTNAME=kimai.mydomain.com

Generate random passwords for .env! Do NOT leave the default passwords in .env!

You also need to set KIMAI_ADMIN_EMAIL and HOSTNAME correctly.

We can now create the kimai data directory and set the correct permissions:

mkdir -p kimai_var
chown -R 33:33 kimai_var

(33 is the user ID and group ID of the www-data user inside the container)

Now we will initialize the Kimai database and the admin user:

docker-compose run kimai console kimai:install -n

Once you see a line like

[Sun Mar 07 23:53:35.986477 2021] [core:notice] [pid 50] AH00094: Command line: '/usr/sbin/apache2 -D FOREGROUND'

stop the process using Ctrl+C, as this means that Kimai has finished installing.

Now we can create a systemd service that automatically starts Kimai using TechOverflow’s method from Create a systemd service for your docker-compose project in 10 seconds:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Now we only need to create an nginx config to reverse proxy your Kimai domain. There is nothing special to consider for the config, hence I’ll show my config just as an example that you can copy and paste.

server {
    server_name  kimai.mydomain.com;

    location / {
        proxy_pass http://localhost:17919/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen [::]:443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/kimai.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/kimai.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

}
server {
    if ($host = kimai.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name  kimai.mydomain.com;

    listen [::]:80; # managed by Certbot
    return 404; # managed by Certbot
}

After setting up your config (I always recommend setting up TLS using Let’s Encrypt, even for test setups), open your browser and go to your Kimai domain, e.g. https://kimai.mydomain.com. You can log in to Kimai directly using the KIMAI_ADMIN_EMAIL and KIMAI_ADMIN_PASSWORD specified in .env.

Posted by Uli Köhler in Container, Docker

How to install python3 pip / pip3 in Alpine Linux

Problem:

You want to install pip3 (also called python3-pip) in Alpine Linux, but running apk add python3-pip shows you that the package doesn’t exist:

/ # apk add python3-pip
ERROR: unable to select packages:
  python3-pip (no such package):
    required by: world[python3-pip]

Solution:

You need to install py3-pip instead using

apk add py3-pip

Example output:

/ # apk add py3-pip
(1/35) Installing libbz2 (1.0.8-r1)
(2/35) Installing expat (2.2.10-r1)
(3/35) Installing libffi (3.3-r2)
[...]
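After that, pip is available both as pip3 and as python3 -m pip:

pip3 --version
python3 -m pip --version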


Posted by Uli Köhler in Alpine Linux, Container, Docker, Linux

A modern Docker-Compose config for Etherpad using nginx as reverse proxy

This is the configuration I use to run my etherpad installations behind nginx:

docker-compose.yml

version: "3.5"
services:
  etherpad:
    image: etherpad/etherpad:latest
    environment:
      - TITLE=My Etherpad
      - DEFAULT_PAD_TEXT=My Etherpad
      - ADMIN_PASSWORD=${ETHERPAD_ADMIN_PASSWORD}
      - ADMIN_USER=admin
      - DB_TYPE=mysql
      - DB_HOST=mariadb
      - DB_PORT=3306
      - DB_USER=etherpad
      - DB_PASS=${MARIADB_PASSWORD}
      - DB_NAME=etherpad
      - DB_CHARSET=utf8mb4
      - API_KEY=${ETHERPAD_API_KEY}
      - SESSION_REQUIRED=false
    ports:
      - "17201:9001"
    depends_on:
      - mariadb

  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MYSQL_DATABASE=etherpad
      - MYSQL_USER=etherpad
      - MYSQL_PASSWORD=${MARIADB_PASSWORD}
    volumes:
      - './mariadb_data:/var/lib/mysql'
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
    healthcheck:
      test: mysqladmin -p${MARIADB_ROOT_PASSWORD} ping -h localhost
      interval: 20s
      start_period: 10s
      timeout: 10s
      retries: 3

.env

Remember to replace the passwords with your own random passwords!

MARIADB_ROOT_PASSWORD=ue9zahb8Poh1oeMieyaeFaicheecaz
MARIADB_PASSWORD=dieQuoghiu6sao9aiphei7eiquael5
ETHERPAD_API_KEY=een4Chohdiedohzaich0ega5thung6
ETHERPAD_ADMIN_PASSWORD=ahNee3OhR6aiCootaiy7uBui3rooco

nginx config

server {
    server_name  etherpad.mydomain.de;
    access_log off;
    error_log /var/log/nginx/etherpad.mydomain.de-error.log;

    location / {
        proxy_pass http://localhost:17201/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen [::]:443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/etherpad.mydomain.de/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/etherpad.mydomain.de/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

}

server {
    if ($host = etherpad.mydomain.de) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen [::]:80; # managed by Certbot
    server_name  etherpad.mydomain.de;
    return 404; # managed by Certbot
}
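To quickly check that the Etherpad container answers before (or after) putting nginx in front of it, query it directly on port 17201 from docker-compose.yml:

curl -sI http://localhost:17201/ | head -n 1

This should print an HTTP status line such as HTTP/1.1 200 OK.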

How to autostart

Our post Create a systemd service for your docker-compose project in 10 seconds provides an extremely quick and easy-to-use one-liner to create and autostart a systemd service running docker-compose for your etherpad instance.

How to backup

This will be detailed in a future blogpost.

Posted by Uli Köhler in Container, Docker, nginx

How to use apt install correctly in your Dockerfile

This is the correct way to use apt install in your Dockerfile:

ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y PACKAGE && rm -rf /var/lib/apt/lists/*

Key takeaways:

  • Set DEBIAN_FRONTEND=noninteractive to prevent some packages from prompting for interactive input (tzdata, for example), which would otherwise make the build wait indefinitely for user input
  • Run apt update before the install command to fetch the current package lists
  • Run apt install with -y to prevent apt from asking whether you really want to install the packages
  • Run rm -rf /var/lib/apt/lists/* after the install command to prevent the cached apt lists (which are fetched by apt update) from ending up in the container image
  • Join all of that into one command using && to prevent Docker from building separate layers for each part of the command (and to prevent it from first storing /var/lib/apt/lists in one layer and then deleting it in another layer); a complete example is shown below

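A complete Dockerfile example following these rules (curl is just a stand-in for whatever package you need):

FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && apt install -y curl && rm -rf /var/lib/apt/lists/*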
Also see the official guide on Dockerfile best practices.

Posted by Uli Köhler in Container, Docker