Container

Simple Elasticsearch setup with docker-compose

The following docker-compose.yml is a simple starting point for using Elasticsearch within a Docker-based setup:

version: '2.2'
services:
    elasticsearch1:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
        container_name: elasticsearch1
        environment:
            - cluster.name=docker-cluster
            - node.name=elasticsearch1
            - cluster.initial_master_nodes=elasticsearch1
            - bootstrap.memory_lock=true
            - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
            - http.cors.enabled=true
            - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
            - http.cors.allow-credentials=true
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
        ulimits:
            memlock:
                soft: -1
                hard: -1
        volumes:
            - ./esdata1:/usr/share/elasticsearch/data
        ports:
            - 9200:9200
    dejavu:
        image: appbaseio/dejavu
        container_name: dejavu
        ports:
            - 1358:1358

Now create the esdata1 directory with the correct permissions:

sudo mkdir esdata1
sudo chown -R 1000:1000 esdata1

We also need to configure the vm.max_map_count sysctl parameter:

echo -e "\nvm.max_map_count=524288\n" | sudo tee -a /etc/sysctl.conf && sudo sysctl -w vm.max_map_count=524288

 

I recommend placing it in /opt/elasticsearch, but you can place it wherever you like.
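With the directory and sysctl settings in place, you can bring up the stack manually and check that Elasticsearch responds (a quick sanity check; the ports match the docker-compose.yml above):

cd /opt/elasticsearch
sudo docker-compose up -d
# Give Elasticsearch a few seconds to start, then query the root endpoint:
curl http://localhost:9200
# The dejavu UI is then reachable at http://localhost:1358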

If you want to autostart it on boot, see Create a systemd service for your docker-compose project in 10 seconds or just use this snippet from said post:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will create a systemd service named elasticsearch (if your directory is named elasticsearch, e.g. /opt/elasticsearch) and enable and start it immediately. You can then restart it using

sudo systemctl restart elasticsearch

and view the logs using

sudo journalctl -xfu elasticsearch

For more complex setups involving more than one node, see our previous post ElasticSearch docker-compose.yml and systemd service generator.

Posted by Uli Köhler in Container, Databases, Docker, ElasticSearch

How I connected a network_mode: host container to its database container

I have set up my FreePBX container to use network_mode: 'host', but it could not connect to the MariaDB container, which was not using network_mode: 'host'.

I fixed this by:

  • Setting the MariaDB container to network_mode: 'host'
  • Setting the FreePBX container to connect to 127.0.0.1 (DB_HOST=127.0.0.1). Setting it to localhost did NOT allow FreePBX to connect to MariaDB!
Posted by Uli Köhler in Docker, FreePBX, Networking

Recommended docker-compose mariadb service

I recommend this service:

mariadb:
  image: mariadb:latest
  environment:
    - MYSQL_DATABASE=servicename
    - MYSQL_USER=servicename
    - MYSQL_PASSWORD=${MARIADB_PASSWORD}
    - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
  volumes:
    - ./mariadb_data:/var/lib/mysql
  command: --default-storage-engine innodb
  restart: unless-stopped
  healthcheck:
    test: mysqladmin -p${MARIADB_ROOT_PASSWORD} ping -h localhost
    interval: 20s
    start_period: 10s
    timeout: 10s
    retries: 3

(replace servicename with the name of your service, e.g. kimai, redmine, …) and use this .env:

MARIADB_ROOT_PASSWORD=eiNgam3woh4ahTee4chi9vohvauk6a
MARIADB_PASSWORD=shahb4alubei5Vie8arahhok2morae

You can also easily generate these passwords by using:

echo -e "MARIADB_ROOT_PASSWORD=$(pwgen 30 1)\nMARIADB_PASSWORD=$(pwgen 30 1)" > .env

 

Posted by Uli Köhler in Container, Docker

Local redmine backup using bup (docker-compose compatible)

This script uses bup to back up your docker-compose-based Redmine installation to a local bup repository, e.g. /var/lib/bup/my-redmine.bup:

#!/bin/bash
# Auto-determine the name from the directory name
# /opt/my-redmine => $NAME=my-redmine => /var/lib/bup/my-redmine.bup
export NAME=$(basename $(pwd))
export BUP_DIR=/var/lib/bup/$NAME.bup
bup_directory() {
        echo "BUPing $1"
        bup -d $BUP_DIR index $1 && bup save -9 --strip-path $(pwd) -n $1 $1
}
# Init
bup -d $BUP_DIR init
# Save MariaDB
source .env && docker-compose exec -T mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases | bup -d $BUP_DIR split -n $NAME-mariadb.sql
# Save directories
bup_directory redmine_data
bup_directory redmine_themes
# Backup self
bup_directory backup.sh
bup_directory docker-compose.yml
# OPTIONAL: Add par2 information
#   This is only recommended for backup on unreliable storage or for extremely critical backups
#   If you already have bitrot protection (like BTRFS with regular scrubbing), this might be overkill.
# Uncomment this line to enable:
# bup fsck -g

# OPTIONAL: Cleanup old backups
bup -d $BUP_DIR prune-older --keep-all-for 1m --keep-dailies-for 6m --keep-monthlies-for forever -9 --unsafe

It will backup:

  • The MariaDB databases used by Redmine, via mysqldump
  • The redmine_data folder
  • The redmine_themes folder
  • The backup script backup.sh itself
  • docker-compose.yml

Place it in the same folder where docker-compose.yml is located.
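To verify a backup or recover data, you can restore from the bup repository. This is a sketch assuming the repository path from the script above (/var/lib/bup/my-redmine.bup) and the branch names the script creates:

# Restore the redmine_data folder from the latest backup into ./restored:
bup -d /var/lib/bup/my-redmine.bup restore -C ./restored /redmine_data/latest/.
# Re-assemble the MariaDB dump that was stored using bup split:
bup -d /var/lib/bup/my-redmine.bup join my-redmine-mariadb.sql > mariadb-dump.sql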

The script is compatible with our previous post How to create a systemd backup timer & service in 10 seconds.

Posted by Uli Köhler in bup, Docker

How to fix Fedora CoreOS rpm-ostree error: Transaction in progress: deploy --lock-finalization revision=… --disallow-downgrade

Problem:

When trying to install a package using rpm-ostree, you see an error message like

error: Transaction in progress: deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade 

Solution:

The error message means that there is currently an rpm-ostree operation in progress and you need to wait for it to finish.

In order to see which process is running, use

ps aux | grep rpm

Example output:

[root@CoreOS uli]# ps aux | grep rpm
root         730 41.2  1.7 1218036 34568 ?       Ssl  18:41   0:30 /usr/bin/rpm-ostree start-daemon
zincati     1896  0.0  0.8 481172 17324 ?        Sl   18:41   0:00 rpm-ostree deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade
root        3223  0.0  0.0 221452   832 pts/0    S+   18:42   0:00 grep --color=auto rpm

As you can see in the second line:

zincati 1896 0.0 0.8 481172 17324 ? Sl 18:41 0:00 rpm-ostree deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade

the user zincati is currently running rpm-ostree on my system. zincati is the Fedora CoreOS auto-updater – in other words, an automatic system update is currently running on CoreOS.
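If you just want to wait for the transaction to finish, you can watch the rpm-ostree daemon state; once it reports State: idle, the deploy has completed:

watch rpm-ostree status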

In case the process got stuck and waiting doesn't help, reboot the system. Killing the process won't work.

Posted by Uli Köhler in CoreOS

How to fix CoreOS “WARNING: This system is using cgroups v1”

Problem:

When logging into your CoreOS instance, you see this warning message:

############################################################################
WARNING: This system is using cgroups v1. For increased reliability
it is strongly recommended to migrate this system and your workloads
to use cgroups v2. For instructions on how to adjust kernel arguments
to use cgroups v2, see:
https://docs.fedoraproject.org/en-US/fedora-coreos/kernel-args/

To disable this warning, use:
sudo systemctl disable coreos-check-cgroups.service
############################################################################

but when you look at https://docs.fedoraproject.org/en-US/fedora-coreos/kernel-args/ you only see an example of how to initialize a new CoreOS instance with Ignition files with cgroups v2.

Solution:

In order to migrate your system to cgroups v2, run

sudo rpm-ostree kargs --delete=systemd.unified_cgroup_hierarchy

After that, you need to reboot your system in order for the changes to take effect:

sudo systemctl reboot

After the system has rebooted, the error should disappear.
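You can verify that the system is now using cgroups v2 by checking the filesystem type mounted at /sys/fs/cgroup (cgroup2fs indicates cgroups v2):

stat -fc %T /sys/fs/cgroup/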

Posted by Uli Köhler in CoreOS

A simple CoreOS config for beginners with password login

In contrast to other Linux-based systems, CoreOS has quite a steep learning curve to get installed properly – for example, you have to create the right Ignition file for the installation. This is a huge obstacle to overcome, especially for first-time users.

This post attempts to alleviate the steep learning curve by providing a basic config that is suitable for most practical (and especially small-scale) use cases and provides a good starting point for custom configs.

Simple install

First, boot up the VM from the CoreOS Live CD. We assume that you have a DHCP network connected to eth0. You will see a shell immediately.

The VM will automatically acquire an IP address over DHCP.

You can use TechOverflow’s hosted ignition file for the installation. You need to use the correct disk instead of /dev/xvda depending on your hardware/hypervisor. If in doubt, use lsblk to find the correct disk name.

Now run the installation command:

sudo coreos-installer install /dev/xvda --copy-network --ignition-url https://techoverflow.net/coreos.ign

After the installation is finished, reboot using

reboot

Once the machine has rebooted, you can use the default login credentials:

Username: admin
Password: coreos

The hostname is CoreOS.

You absolutely need to change the password after the installation! If you create another user, remember that you still need to change the password of the admin user using

sudo passwd admin

Build your own config file

This is the Ignition YAML we used to create the correct config file. Use our online transpiler at https://fcct.techoverflow.net to compile the YAML to the JSON file. In order to create a new password hash, use TechOverflow’s docker-based mkpasswd approach.

variant: fcos
version: 1.0.0
passwd:
  users:
    - name: admin
      groups:
        - "sudo"
        - "docker"
      password_hash: $y$j9T$n6h8P2ik8tfoNUFBBoly00$7bnrMF8oFrB25Fc3NqigqEH/MI5YXIJwtCG/iEsns.2

systemd:
  units:
    - name: docker.service
      enabled: true

    - name: containerd.service
      enabled: true
    - name: getty@tty1.service
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure
          ExecStart=-/usr/sbin/agetty --autologin admin --noclear %I $TERM
          TTYVTDisallocate=no
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          CoreOS
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          # Tell systemd to not use a pager when printing information
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          # Raise console message logging level from DEBUG (7) to WARNING (4)
          # to hide audit messages from the interactive console
          kernel.printk=4
    - path: /etc/ssh/sshd_config.d/20-enable-passwords.conf
      mode: 0644
      contents:
        inline: |
          # Enable SSH password login
          PasswordAuthentication yes

which results in the following transpiled JSON:

{
  "ignition": {
    "version": "3.0.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "sudo",
          "docker"
        ],
        "name": "admin",
        "passwordHash": "$y$j9T$n6h8P2ik8tfoNUFBBoly00$7bnrMF8oFrB25Fc3NqigqEH/MI5YXIJwtCG/iEsns.2"
      }
    ]
  },
  "storage": {
    "files": [
      {
        "contents": {
          "source": "data:,CoreOS%0A"
        },
        "mode": 420,
        "path": "/etc/hostname"
      },
      {
        "contents": {
          "source": "data:,%23%20Tell%20systemd%20to%20not%20use%20a%20pager%20when%20printing%20information%0Aexport%20SYSTEMD_PAGER%3Dcat%0A"
        },
        "mode": 420,
        "path": "/etc/profile.d/systemd-pager.sh"
      },
      {
        "contents": {
          "source": "data:,%23%20Raise%20console%20message%20logging%20level%20from%20DEBUG%20(7)%20to%20WARNING%20(4)%0A%23%20to%20hide%20audit%20messages%20from%20the%20interactive%20console%0Akernel.printk%3D4%0A"
        },
        "mode": 420,
        "path": "/etc/sysctl.d/20-silence-audit.conf"
      },
      {
        "contents": {
          "source": "data:,%23%20Enable%20SSH%20password%20login%0APasswordAuthentication%20yes%0A"
        },
        "mode": 420,
        "path": "/etc/ssh/sshd_config.d/20-enable-passwords.conf"
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "enabled": true,
        "name": "docker.service"
      },
      {
        "enabled": true,
        "name": "containerd.service"
      },
      {
        "dropins": [
          {
            "contents": "[Service]\n# Override Execstart in main unit\nExecStart=\n# Add new Execstart with `-` prefix to ignore failure\nExecStart=-/usr/sbin/agetty --autologin admin --noclear %I $TERM\nTTYVTDisallocate=no\n",
            "name": "autologin-core.conf"
          }
        ],
        "name": "[email protected]"
      }
    ]
  }
}
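If you prefer to transpile the YAML locally instead of using the online transpiler, you can use the official Butane container (the successor of fcct); this is a sketch assuming Docker is available on your workstation and config.yaml contains the YAML above:

docker run --rm -i quay.io/coreos/butane:release --pretty --strict < config.yaml > config.ign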

 

Posted by Uli Köhler in CoreOS

Simple 5-minute Vaultwarden (SQLite) setup using docker-compose

In order to set up Vaultwarden in a docker-compose & SQLite based configuration (e.g. on CoreOS), we first need to create a directory. I recommend using /opt/vaultwarden.

Run all the following commands and place all the following files in the /opt/vaultwarden directory!

First, we’ll create a .env file with a random ADMIN_TOKEN (I recommend using pwgen 30). Not using a unique, random token here is a huge security risk since it would allow anyone to gain full admin access to Vaultwarden!

ADMIN_TOKEN=iqueingufo3LohshoohoG3tha2zou6
SIGNUPS_ALLOWED=true

Now place your docker-compose.yml:

version: '3.4'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    environment:
      - ADMIN_TOKEN=${ADMIN_TOKEN}
      - SIGNUPS_ALLOWED=${SIGNUPS_ALLOWED}
    volumes:
      - ./vw_data:/data
    ports:
      - 17881:80

Next, we’ll create a systemd service to autostart docker-compose:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will automatically start vaultwarden.
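Before configuring the reverse proxy, you can quickly check that the web vault answers on the mapped port (a simple sanity check; the port matches the docker-compose.yml above):

curl -I http://localhost:17881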

Now you need to configure your reverse proxy server to point https://vaultwarden.mydomain.com to port 17881. You need to use HTTPS; plain HTTP won't work since the browser-based web vault requires a secure context.

Now we need to configure vaultwarden using the admin interface.

Go to https://vaultwarden.mydomain.com/admin and enter the ADMIN_TOKEN from .env.

There are two things that you need to configure here:

  • The Domain Name under General settings
  • The email server settings under SMTP email settings

With these settings configured, Vaultwarden should be up and running and you can access it using https://vaultwarden.mydomain.com .

After the first user has been set up and tested, you can uncheck Allow new signups under General settings in the admin interface. This is recommended since otherwise anyone who can guess your domain name would be able to create a Vaultwarden account.

Posted by Uli Köhler in Container, Docker

Simple 15-minute passbolt setup using docker-compose

This is how I run my local passbolt instance.

First, create the directory. I use /opt/passbolt. Run all the following commands and place all the following files in that directory!

Then initialize the GPG directory with the correct permissions (UID/GID 33 is the www-data user inside the passbolt container):

mkdir -p passbolt_gpg
chown -R 33:33 passbolt_gpg

Now create a .env file with random passwords (I recommend using pwgen 30):

MARIADB_ROOT_PASSWORD=meiJieseingi4dutiareimoh2Aiv5j
MARIADB_USER_PASSWORD=ohre3ye1oNexeShiuChaengahzuemo

Now place your docker-compose.yml:

version: '3.4'
services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_DATABASE=passbolt
      - MYSQL_USER=passbolt
      - MYSQL_PASSWORD=${MARIADB_USER_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql

  passbolt:
    image: passbolt/passbolt:latest-ce
    tty: true
    depends_on:
      - mariadb
    environment:
      - DATASOURCES_DEFAULT_HOST=mariadb
      - DATASOURCES_DEFAULT_USERNAME=passbolt
      - DATASOURCES_DEFAULT_PASSWORD=${MARIADB_USER_PASSWORD}
      - DATASOURCES_DEFAULT_DATABASE=passbolt
      - DATASOURCES_DEFAULT_PORT=3306
      - DATASOURCES_QUOTE_IDENTIFIER=true
      - APP_FULL_BASE_URL=https://passbolt.mydomain.com
      - [email protected]
      - EMAIL_TRANSPORT_DEFAULT_HOST=smtp.mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_PORT=587
      - [email protected]
      - EMAIL_TRANSPORT_DEFAULT_PASSWORD=yei5QueiNa5ahF0Aice8Na0aphoyoh
      - EMAIL_TRANSPORT_DEFAULT_TLS=true
      - [email protected]
    volumes:
      - ./passbolt_gpg:/etc/passbolt/gpg
      - ./passbolt_web:/usr/share/php/passbolt/webroot/img/public
    command: ["/usr/bin/wait-for.sh", "-t", "0", "mariadb:3306", "--", "/docker-entrypoint.sh"]
    ports:
      - 17880:80

Be sure to replace all the email addresses, domain names and SMTP credentials with the values appropriate for your setup.

Now start up passbolt for the first time; it will initialize the database:

docker-compose up

You need to keep passbolt running during the following steps.

First, we’ll send a test email:

docker-compose exec passbolt su -m -c "bin/cake passbolt send_test_email"

If you see

The message has been successfully sent!

then your SMTP config is correct. Otherwise, debug the error message and, if necessary, modify the EMAIL_… environment variables in docker-compose.yml and restart passbolt afterwards.

Now we’ll create an admin user:

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u [email protected] -f John -l Doe -r admin" -s /bin/sh www-data

If you want to create a normal (non-admin) user, use user instead of admin:

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u [email protected] -f Jane -l Doe -r user" -s /bin/sh www-data

After that, the only thing left to do is to create a systemd service to autostart your passbolt service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Passbolt is now running on port 17880 (you can configure this using docker-compose.yml). Just configure your reverse proxy appropriately to point to this port.

Posted by Uli Köhler in Container, Docker

How to install ruby & rubygems in Alpine Linux

Problem:

You want to install Ruby and the gem package manager in Alpine Linux, but running apk add ruby rubygems shows you that the rubygems package doesn't exist:

/ # apk add ruby rubygems
ERROR: unable to select packages:
  rubygems (no such package):
    required by: world[rubygems]

Solution:

gem is included in the ruby package, so the only commands you need to run are

apk update
apk add ruby

Example output:

/ # apk add ruby
(1/7) Installing ca-certificates (20191127-r5)
(2/7) Installing gdbm (1.19-r0)
(3/7) Installing gmp (6.2.1-r0)
(4/7) Installing readline (8.1.0-r0)
(5/7) Installing yaml (0.2.5-r0)
(6/7) Installing ruby-libs (2.7.3-r0)
(7/7) Installing ruby (2.7.3-r0)
Executing busybox-1.32.1-r6.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 928 MiB in 154 packages

After doing that, you can immediately use both ruby and gem.
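You can quickly verify that both tools are available (version numbers will vary):

ruby --version
gem --version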

Posted by Uli Köhler in Alpine Linux, Container, Docker, Linux, Ruby

How to apply Fedora CoreOS changes without a reboot

Do you want to install Fedora CoreOS packages without having to reboot your entire system in order for the packages to be available? Just run

sudo rpm-ostree ex apply-live

after running your rpm-ostree install commands.

For example:

sudo rpm-ostree install nano
sudo rpm-ostree ex apply-live
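After apply-live has finished, the files from the package are available in the running system, which you can verify without rebooting:

nano --version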

 

Note that this is not completely safe for multiple reasons, not even for seemingly innocuous utility packages like nano:

  • As indicated by the ex in the command, the apply-live command is experimental
  • It might apply other changes from the new OSTree, like automatically installed updates, and hence might have unintended side effects
  • When changing files on a system with production services running, the services might crash or experience other issues. This might not happen immediately and might be hard to debug, especially in a complex environment. If you want to safely update your services, it's almost always best to just reboot into the new OSTree.

Also read our previous post on Why do you have to reboot after rpm-ostree install on Fedora CoreOS? where we explain the technical reasoning behind the reboots.

Posted by Uli Köhler in CoreOS

Why do you have to reboot after rpm-ostree install on Fedora CoreOS?

If you have worked with Fedora CoreOS, you might have noticed that every time you install a package you need to reboot in order for the files from said package to be available to you. This is quite different from other Linux distributions where you can immediately use whatever package you installed without having to reboot every time.

What is the technical reasoning for having to reboot?

rpm-ostree is quite a special tool: it does not just install a package into the running system. Instead, running rpm-ostree install builds a separate OS tree – imagine it as an image containing all the files constituting your system – which has the advantage that the currently running system is not modified at all.

While rebooting after every install might seem like a stupid idea since it takes down the entire server, remember that it can save you a lot of headache since there are no partially updated services and you don’t need to manually fix or restart anything since everything is restarted on reboot. This means that your system is always in a consistent state, since every service is cleanly shut down before the system reboot – and after the reboot, every service is cleanly started with the system changes.

Can you install multiple packages before having to reboot?

Yes, you can run multiple rpm-ostree install commands before rebooting. When rebooting, all the changes will be applied at once.
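For example, the following installs two packages and applies both with a single reboot (nano and htop are just example packages here):

sudo rpm-ostree install nano
sudo rpm-ostree install htop
sudo systemctl reboot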

Can you delay the reboot after rpm-ostree install?

Yes, there is no need to reboot immediately after the rpm-ostree command. You can delay the reboot as long as you like. Note, however, that when the machine is rebooted for reasons other than a manual reboot (like a power outage or a restart of the VM host), the updates will be applied as well, but you might not be there to check whether all services are running correctly. Hence, I recommend rebooting as soon as possible.

Can you avoid to reboot after installing packages?

Yes, Fedora CoreOS provides an experimental live update feature using rpm-ostree ex apply-live. See our post How to apply Fedora CoreOS changes without a reboot. Note that applying updates or new packages on a system with production services running might be a bad idea, but it's not inherently more unsafe than installing packages on a typical Linux distribution like Debian, Fedora or Ubuntu, where every install or update of a package immediately affects the files on the file system.

 

Posted by Uli Köhler in CoreOS

How to install docker-compose on Fedora CoreOS

Just install it using rpm-ostree:

sudo rpm-ostree install docker-compose

and then reboot in order for the changes to the OSTree to take effect:

sudo systemctl reboot
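After the reboot, verify that docker-compose is available:

docker-compose --version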

 

Posted by Uli Köhler in CoreOS

Fedora CoreOS: How to install Xen/XCP-NG guest utilities using rpm-ostree

In Fedora CoreOS, you can install the Xen guest utilities using

sudo rpm-ostree install xe-guest-utilities-latest

After installing the package, reboot in order for the changes to take effect:

sudo systemctl reboot

Now we need to enable and start the Xen service:

sudo systemctl enable --now xe-linux-distribution

It will now automatically start on boot.
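You can check that the service started correctly using:

sudo systemctl status xe-linux-distribution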

Example output from the install command:

# rpm-ostree install xe-guest-utilities-latest
Checking out tree 49ec34c... done
Enabled rpm-md repositories: fedora-cisco-openh264 updates fedora
rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2020-08-25T19:10:34Z
rpm-md repo 'updates' (cached); generated: 2021-05-13T01:04:01Z
rpm-md repo 'fedora' (cached); generated: 2020-10-19T23:27:19Z
Importing rpm-md... done
Resolving dependencies... done
Will download: 1 package (1.0 MB)
Downloading from 'updates'... done
Importing packages... done
Checking out packages... done
Running pre scripts... done
Running post scripts... done
Running posttrans scripts... done
Writing rpmdb... done
Writing OSTree commit... done
Staging deployment... done
Added:
  xe-guest-utilities-latest-7.21.0-1.fc33.x86_64
Run "systemctl reboot" to start a reboot

 

Posted by Uli Köhler in CoreOS

Fedora CoreOS: How to use German keyboard layout in installer

If you want to use the German keyboard layout in the Fedora CoreOS installer, set the de keymap using:

sudo localectl set-keymap de

The new keymap will be effective immediately.

Note that the keyboard layout will not automatically be transferred to the installed system.

Posted by Uli Köhler in CoreOS

How to set keymap in Fedora CoreOS installer or terminal

In order to set the keymap in the Fedora CoreOS installation shell, use

sudo localectl set-keymap [keymap]

For example, in order to set the de keymap:

sudo localectl set-keymap de
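If you are unsure which keymap name to use, you can list all available keymaps first:

localectl list-keymaps | less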

 

Posted by Uli Köhler in CoreOS

How to run mkpasswd with yescrypt on Ubuntu/Debian

Currently the Ubuntu/Debian mkpasswd command does not support yescrypt.

In order to use it anyway, we can use the ulikoehler/mkpasswd docker image to run the proper version of mkpasswd:

docker run --rm -it ulikoehler/mkpasswd

This will prompt you for a password and then print the yescrypt-hashed and salted password:

$ docker run --rm -it ulikoehler/mkpasswd
Password:
$y$j9T$YzrfO5lQkDWahpz5pwYzg/$HzQoMYt.7E1jj.sd6OyYCGI/Qk6oGehNgz5uvY1qp59

 

Posted by Uli Köhler in Docker, Linux

How to use yum in Dockerfile correctly

Example of how to install the mkpasswd package using yum in your Dockerfile:

RUN yum -y install mkpasswd && yum -y clean all  && rm -rf /var/cache

There are two basic aspects to remember here:

  1. Use yum -y in order to avoid interactive Y/N questions during the automated build
  2. Use yum -y clean all && rm -rf /var/cache to clean up after the call to yum -y install

Complete Dockerfile example:

FROM fedora:34
RUN yum -y install mkpasswd && yum -y clean all  && rm -rf /var/cache
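To build the image and run mkpasswd inside it (the tag mkpasswd-test is just an example name):

docker build -t mkpasswd-test .
docker run --rm -it mkpasswd-test mkpasswd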

 

Posted by Uli Köhler in Container, Docker

How to fix docker.errors.DockerException: Error while fetching server API version: (‘Connection aborted.’, FileNotFoundError(2, ‘No such file or directory’))

Problem:

While running a docker command like docker-compose pull, you see an error message like

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 33, in <module>
    sys.exit(load_entry_point('docker-compose==1.27.4', 'console_scripts', 'docker-compose')())
  File "/usr/lib/python3.8/site-packages/compose/cli/main.py", line 67, in main
    command()
  File "/usr/lib/python3.8/site-packages/compose/cli/main.py", line 123, in perform_command
    project = project_from_options('.', options)
  File "/usr/lib/python3.8/site-packages/compose/cli/command.py", line 60, in project_from_options
    return get_project(
  File "/usr/lib/python3.8/site-packages/compose/cli/command.py", line 131, in get_project
    client = get_client(
  File "/usr/lib/python3.8/site-packages/compose/cli/docker_client.py", line 41, in get_client
    client = docker_client(
  File "/usr/lib/python3.8/site-packages/compose/cli/docker_client.py", line 170, in docker_client
    client = APIClient(**kwargs)
  File "/usr/lib/python3.8/site-packages/docker/api/client.py", line 197, in __init__
    self._version = self._retrieve_server_version()
  File "/usr/lib/python3.8/site-packages/docker/api/client.py", line 221, in _retrieve_server_version
    raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

Solution:

This means you haven’t started your docker service!

First, try to start it using

sudo systemctl start docker

or

sudo service docker start

or

sudo /etc/init.d/docker restart

(whatever works with your distribution).
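You can also check directly whether the Docker daemon is running and its socket exists (the service name may differ slightly between distributions):

sudo systemctl status docker
ls -l /var/run/docker.sock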

After that, retry the command that originally caused the error message to appear.

In case it still shows the same error message, try the following steps:

  • First, check /var/log/docker.log using
    cat /var/log/docker.log

    Check that file for errors during docker startup.

  • Also check whether the user you're running the command as is a member of the docker group. Insufficient permissions typically cause a Permission denied error rather than FileNotFoundError(2, 'No such file or directory'), but the error messages might look similar in some cases.
Posted by Uli Köhler in Container, Docker, Linux

How to fix Synology Docker: failed to initialize logging driver: database is locked

Problem:

When you try to start a specific Docker container using the Synology NAS GUI, the container stops unexpectedly and you see an error message like this in the logs:

Start container mycontainer failed: {"message":"failed to initialize logging driver: database is locked"}.
Signal container mycontainer failed: {"message":"Cannot kill container: mycontainer: Container 5136ddceeb46004c5b18f04eb9ec10cac3808938515874fc31185b0964232201 is not running"}.

Solution:

I fixed this problem by stopping the container and then duplicating the container settings: right-click on the container -> Settings -> Duplicate Settings

That will create a new container with the given settings. Note that local ports will be set to Auto and will not be copied over, so if you use fixed local ports, you need to set them to a different value in the original container and then set the local ports on the new container to the desired fixed value. Also note that files inside the container are not copied over. In my configuration, all relevant files are stored in mapped volumes on the NAS.

The root cause of this issue seems to be that the logging database for this specific container has been locked by some process. The issue is always limited to a certain container and will not affect other containers (though it could in principle occur for more than one container). At least in my specific case, the issue was not caused by a reboot and was also not fixed by rebooting the Synology NAS: just before I encountered the issue, my NAS had not been rebooted for months. It might, however, be related to Synology package updates, since I updated some packages using the Package manager just before encountering the issue, including a Synology Mail Plus update which failed on the first attempt but succeeded when I clicked Update again.

Posted by Uli Köhler in Docker, Networking