Container

How to fix Caddy container generating docker volumes with autosave.json

My docker-compose-based Caddy setup re-created the container on every restart, and each re-creation generated a new anonymous docker volume containing only the autosave.json. Since the container was auto-restarted once a minute, this led to over 70000 volumes piling up in /var/lib/docker/volumes.

The Caddy log shows that Caddy is creating /config/caddy/autosave.json:

mycaddy_1 | {"level":"info","ts":1637877640.7375677,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}

I fixed this by mapping /config/caddy to a local directory:

- ./caddy_data:/config/caddy/

Complete docker-compose.yml example:

version: '3.5'
services:
  mycaddy:
    image: 'caddy:2.4.6-alpine'
    volumes:
      - ./caddy_data:/config/caddy/
      - ./static:/usr/share/caddy
      - ./Caddyfile:/etc/caddy/Caddyfile
    ports:
      - 19815:80
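
In order to clean up the thousands of orphaned volumes that have already accumulated, you can remove all volumes not referenced by any container. Verify beforehand that no other stopped container on the host keeps its data in a currently-unused volume:

docker volume prune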


Posted by Uli Köhler in Docker, Networking

How to fix Mosquitto ‘exited with code 13’

When mosquitto exits with code 13, such as in a docker-based setup, you will often see no error message:

Attaching to mosquitto_mosquitto_1
mosquitto_mosquitto_1 exited with code 13

However, there will be an error message in mosquitto.log. So, ensure that you have configured a log_dest file in your mosquitto.conf, such as:

log_dest file /mosquitto/log/mosquitto.log

and check that file. In my case it showed these error messages:

1637860284: mosquitto version 2.0.14 starting
1637860284: Config loaded from /mosquitto/config/mosquitto.conf.
1637860284: Error: Unable to open pwfile "/mosquitto/conf/mosquitto.passwd".
1637860284: Error opening password file "/mosquitto/conf/mosquitto.passwd".

In my case, the path of the password file was misspelled (conf instead of config).

Note that you need to create the password file in order for mosquitto to start up!

See How to setup standalone mosquitto MQTT broker using docker-compose for example commands on how to create the user and the password file.
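
As a concrete sketch (the service name mosquitto and the username myuser are placeholders), you can create the password file inside the running container using mosquitto_passwd:

# -c creates a new password file; you will be prompted for the password
docker-compose exec mosquitto mosquitto_passwd -c /mosquitto/config/mosquitto.passwd myuser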


Posted by Uli Köhler in Docker, MQTT

How to fix Docker Home-Assistant [finish] process exit code 256

Problem:

Your Docker container running home-assistant always exits immediately after starting up, with a log similar to this:

$ docker-compose up
Recreating homeassistant ... done
Attaching to homeassistant
homeassistant    | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
homeassistant    | [s6-init] ensuring user provided files have correct perms...exited 0.
homeassistant    | [fix-attrs.d] applying ownership & permissions fixes...
homeassistant    | [fix-attrs.d] done.
homeassistant    | [cont-init.d] executing container initialization scripts...
homeassistant    | [cont-init.d] done.
homeassistant    | [services.d] starting services
homeassistant    | [services.d] done.
homeassistant    | [finish] process exit code 256
homeassistant    | [finish] process received signal 15
homeassistant    | [cont-finish.d] executing container finish scripts...
homeassistant    | [cont-finish.d] done.
homeassistant    | [s6-finish] waiting for services.
homeassistant    | [s6-finish] sending all processes the TERM signal.
homeassistant    | [s6-finish] sending all processes the KILL signal and exiting.
homeassistant exited with code 0

Solution:

You need to start the container with --privileged=true if using docker directly to start up the service, or use privileged: true if using docker-compose.

Here’s an example of a working docker-compose.yml file:

version: '3.5'
services:
  homeassistant:
    container_name: homeassistant
    restart: unless-stopped
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host
    privileged: true
    environment:
      - TZ=Europe/Berlin
    volumes:
      - ./config:/config
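
If you start the container using plain docker instead of docker-compose, a roughly equivalent command would be the following (a sketch derived from the compose file above; adjust the config path and timezone to your setup):

docker run -d --name homeassistant \
    --restart unless-stopped \
    --privileged=true \
    --network host \
    -e TZ=Europe/Berlin \
    -v $(pwd)/config:/config \
    ghcr.io/home-assistant/home-assistant:stable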


Posted by Uli Köhler in Container, Docker, Home-Assistant

How to optimize MySQL/MariaDB tables in docker-compose

If your MariaDB / MySQL root password is stored in .env, use this command:

source .env && docker-compose exec mariadb mysqlcheck -uroot -p$MARIADB_ROOT_PASSWORD --auto-repair --optimize --all-databases
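
This assumes that your .env contains the root password in a line like this (example value, matching the direct command below):

MARIADB_ROOT_PASSWORD=hoox8AiFahuniPaivatoh2iexighee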

You can also directly use the root password in the command:

docker-compose exec mariadb mysqlcheck -uroot -phoox8AiFahuniPaivatoh2iexighee --auto-repair --optimize --all-databases


Posted by Uli Köhler in Container, Databases, Docker

How to enable Collabora for multiple domains using docker-compose

In our previous post How to run Collabora office for Nextcloud using docker-compose we investigated how to configure your Collabora office server using docker-compose.yml.

If you want to use multiple domains, you need to change this line in .env:

COLLABORA_DOMAIN=collabora.mydomain.com

By reading the source code I found out that COLLABORA_DOMAIN is interpreted as a regular expression. Therefore, you can use the (...|...|...) alternation syntax:

COLLABORA_DOMAIN=(nextcloud.mydomain.com|nextcloud.myseconddomain.com)

After that, restart Collabora.
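
Note that docker-compose restart will not pick up the changed .env. Instead, re-create the container so that the new environment variable is applied, e.g. using

docker-compose up -d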

Posted by Uli Köhler in Docker, Nextcloud

How to run Collabora office for Nextcloud using docker-compose

Create this docker-compose.yml, e.g. in /opt/collabora-mydomain:

version: '3'
services:
  code:
    image: collabora/code:latest
    restart: always
    environment:
      - password=${COLLABORA_PASSWORD}
      - username=${COLLABORA_USERNAME}
      - domain=${COLLABORA_DOMAIN}
      - extra_params=--o:ssl.enable=true
    ports:
      - 9980:9980

Now create this .env with the configuration. You need to change the password and the domain!

COLLABORA_USERNAME=admin
COLLABORA_PASSWORD=veecheit0Phophiesh1fahPah0Wue3
COLLABORA_DOMAIN=collabora.mydomain.com

Now you can create a systemd service to autostart it on boot using our script from Create a systemd service for your docker-compose project in 10 seconds.

Run this from inside your directory (e.g. /opt/collabora-mydomain):

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Now you need to configure your reverse proxy to point to port 9980. Here’s an example nginx config:

server {
    server_name collabora.mydomain.com;

    access_log /var/log/nginx/collabora.mydomain.com.access_log;
    error_log /var/log/nginx/collabora.mydomain.com.error_log info;

    location / {
        proxy_pass https://127.0.0.1:9980;
        proxy_http_version 1.1;
        proxy_read_timeout 3600s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host            $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header X-Frontend-Host $host;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    }

    listen [::]:80; # managed by Certbot
}

Now open collabora.mydomain.com in your browser. If Collabora is running correctly, you should see:

OK

In Nextcloud, go to https://nextcloud.mydomain.com/settings/admin/richdocuments and set the Collabora server URL:

https://admin:veecheit0Phophiesh1fahPah0Wue3@collabora.mydomain.com

Ensure that you use your custom password from .env and your custom domain!

Click Save and you should see: Collabora Online server is reachable.

Posted by Uli Köhler in Container, Docker, Nextcloud

How to fix Docker-Nextcloud ‘Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it.’

Problem:

When using the official nextcloud docker image, you will see a message like

Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it.

on the system overview page.

Solution:

This is a bug in the docker image and will likely be resolved soon – in the meantime, we can just manually install the required library in the container:

docker-compose exec nextcloud apt -y update
docker-compose exec nextcloud apt -y install libmagickcore-6.q16-6-extra

If you re-create the container, this change will be lost. In my opinion, however, it's best to opt for the simple solution here and possibly repeat it once or twice, as opposed to a permanent but much more labour-intensive procedure like updating the docker image and later migrating back to the official image.

Posted by Uli Köhler in Container, Docker

How to fix build ‘lz4 library not found, compiling without it’

Problem:

When compiling a piece of software – for example in your Dockerfile or on your PC – you see a warning message like

lz4 library not found, compiling without it

Solution:

Install liblz4, the library for the LZ4 compression algorithm. On Ubuntu/Debian-based systems you can install it using

sudo apt -y install liblz4-dev

In your Dockerfile, install using

RUN apt update && apt install -y liblz4-dev && rm -rf /var/lib/apt/lists/*

Otherwise, refer to the liblz4 GitHub page.

Posted by Uli Köhler in C/C++, Docker

Simple Elasticsearch setup with docker-compose

The following docker-compose.yml is a simple starting point for using ElasticSearch within a docker-based setup:

version: '2.2'
services:
    elasticsearch1:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
        container_name: elasticsearch1
        environment:
            - cluster.name=docker-cluster
            - node.name=elasticsearch1
            - cluster.initial_master_nodes=elasticsearch1
            - bootstrap.memory_lock=true
            - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
            - http.cors.enabled=true
            - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
            - http.cors.allow-credentials=true
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
        ulimits:
            memlock:
                soft: -1
                hard: -1
        volumes:
            - ./esdata1:/usr/share/elasticsearch/data
        ports:
            - 9200:9200
    dejavu:
        image: appbaseio/dejavu
        container_name: dejavu
        ports:
            - 1358:1358

Now create the esdata1 directory with the correct permissions:

sudo mkdir esdata1
sudo chown -R 1000:1000 esdata1

We also need to configure the vm.max_map_count sysctl parameter (ElasticSearch requires at least 262144):

echo -e "\nvm.max_map_count=524288\n" | sudo tee -a /etc/sysctl.conf && sudo sysctl -w vm.max_map_count=524288


I recommend placing it in /opt/elasticsearch, but you can place it wherever you like.

If you want to autostart it on boot, see Create a systemd service for your docker-compose project in 10 seconds or just use this snippet from said post:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will create a systemd service named elasticsearch (if your directory is named elasticsearch, e.g. /opt/elasticsearch) and enable and start it immediately. Hence you can restart it using

sudo systemctl restart elasticsearch

and view the logs using

sudo journalctl -xfu elasticsearch
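
To quickly verify that ElasticSearch is up and healthy, you can query the cluster health endpoint (assuming the default port mapping from the docker-compose.yml above):

curl http://localhost:9200/_cluster/health?pretty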

For a more complex setup involving more than one node, see our previous post ElasticSearch docker-compose.yml and systemd service generator.

Posted by Uli Köhler in Container, Databases, Docker, ElasticSearch

How I connected a network_mode: host container to its database container

I have set up my FreePBX container to use network_mode: 'host', but faced issues when it couldn't connect to the MariaDB container, which was not using network_mode: 'host'.

I fixed this by:

  • Setting the MariaDB container to network_mode: 'host' as well
  • Setting the FreePBX container to connect to 127.0.0.1 (DB_HOST=127.0.0.1). Setting it to localhost did NOT allow FreePBX to connect to MariaDB! See the sketch below.
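
Here's a minimal docker-compose sketch of this layout, showing only the relevant settings (the FreePBX image name is a placeholder, use whatever image your setup is based on). Since both services share the host network stack, FreePBX reaches MariaDB via 127.0.0.1:3306:

version: '3.5'
services:
  mariadb:
    image: mariadb:latest
    network_mode: 'host'
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql
  freepbx:
    image: tiredofit/freepbx:latest # placeholder: use your actual FreePBX image
    network_mode: 'host'
    environment:
      # localhost did NOT work here, use the IP
      - DB_HOST=127.0.0.1
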
Posted by Uli Köhler in Docker, FreePBX, Networking

Recommended docker-compose mariadb service

I recommend this service:

mariadb:
  image: mariadb:latest
  environment:
    - MYSQL_DATABASE=servicename
    - MYSQL_USER=servicename
    - MYSQL_PASSWORD=${MARIADB_PASSWORD}
    - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
  volumes:
    - ./mariadb_data:/var/lib/mysql
  command: --default-storage-engine innodb
  restart: unless-stopped
  healthcheck:
    test: mysqladmin -p${MARIADB_ROOT_PASSWORD} ping -h localhost
    interval: 20s
    start_period: 10s
    timeout: 10s
    retries: 3

(replace servicename with the name of your service, e.g. kimai, redmine, …) and use this .env:

MARIADB_ROOT_PASSWORD=eiNgam3woh4ahTee4chi9vohvauk6a
MARIADB_PASSWORD=shahb4alubei5Vie8arahhok2morae

You can also easily generate these passwords by using:

echo -e "MARIADB_ROOT_PASSWORD=$(pwgen 30 1)\nMARIADB_PASSWORD=$(pwgen 30 1)" > .env


Posted by Uli Köhler in Container, Docker

Local redmine backup using bup (docker-compose compatible)

This script uses bup to back up your docker-compose based redmine installation to a local bup repository, e.g. in /var/lib/bup/my-redmine.bup:

#!/bin/bash
# Auto-determine the name from the directory name
# /opt/my-redmine => $NAME=my-redmine => /var/lib/bup/my-redmine.bup
export NAME=$(basename $(pwd))
export BUP_DIR=/var/lib/bup/$NAME.bup
bup_directory() {
        echo "BUPing $1"
        bup -d $BUP_DIR index $1 && bup save -9 --strip-path $(pwd) -n $1 $1
}
# Init
bup -d $BUP_DIR init
# Save MariaDB
source .env && docker-compose exec -T mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases | bup -d $BUP_DIR split -n $NAME-mariadb.sql
# Save directories
bup_directory redmine_data
bup_directory redmine_themes
# Backup self
bup_directory backup.sh
bup_directory docker-compose.yml
# OPTIONAL: Add par2 information
#   This is only recommended for backup on unreliable storage or for extremely critical backups
#   If you already have bitrot protection (like BTRFS with regular scrubbing), this might be overkill.
# Uncomment this line to enable:
# bup fsck -g

# OPTIONAL: Cleanup old backups
bup -d $BUP_DIR prune-older --keep-all-for 1m --keep-dailies-for 6m --keep-monthlies-for forever -9 --unsafe

It will backup:

  • The MariaDB database, dumped using mysqldump
  • The redmine_data folder
  • The redmine_themes folder
  • The backup script backup.sh itself
  • docker-compose.yml

Place it in the same folder where docker-compose.yml is located.

The script is compatible with our previous post How to create a systemd backup timer & service in 10 seconds.
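
To restore from such a backup, here's a sketch assuming my-redmine as the auto-determined $NAME, i.e. the repository at /var/lib/bup/my-redmine.bup:

# Restore a directory snapshot (e.g. redmine_data) into ./restore
bup -d /var/lib/bup/my-redmine.bup restore -C ./restore /redmine_data/latest/.
# Re-assemble the SQL dump that was stored using 'bup split'
bup -d /var/lib/bup/my-redmine.bup join my-redmine-mariadb.sql > mariadb-dump.sql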

Posted by Uli Köhler in bup, Docker

How to fix Fedora CoreOS rpm-ostree error: Transaction in progress: deploy --lock-finalization revision=… --disallow-downgrade

Problem:

When trying to install a package using rpm-ostree, you see an error message like

error: Transaction in progress: deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade 

Solution:

The error message means that an rpm-ostree operation is currently in progress and you need to wait for it to finish.

In order to see which process is running, use

ps aux | grep rpm

Example output:

[root@CoreOS uli]# ps aux | grep rpm
root         730 41.2  1.7 1218036 34568 ?       Ssl  18:41   0:30 /usr/bin/rpm-ostree start-daemon
zincati     1896  0.0  0.8 481172 17324 ?        Sl   18:41   0:00 rpm-ostree deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade
root        3223  0.0  0.0 221452   832 pts/0    S+   18:42   0:00 grep --color=auto rpm

As you can see in the second line:

zincati 1896 0.0 0.8 481172 17324 ? Sl 18:41 0:00 rpm-ostree deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade

the user zincati is currently running rpm-ostree on my system. zincati is the Fedora CoreOS auto-updater – in other words, an automatic system update is currently running on CoreOS.
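
Alternatively, rpm-ostree itself can tell you whether a transaction is active; while one is running, it reports State: busy:

rpm-ostree status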

In case the process got stuck and waiting doesn't help, reboot the system. Killing the process won't work.

Posted by Uli Köhler in CoreOS

How to fix CoreOS “WARNING: This system is using cgroups v1”

Problem:

When logging into your CoreOS instance, you see this warning message:

############################################################################
WARNING: This system is using cgroups v1. For increased reliability
it is strongly recommended to migrate this system and your workloads
to use cgroups v2. For instructions on how to adjust kernel arguments
to use cgroups v2, see:
https://docs.fedoraproject.org/en-US/fedora-coreos/kernel-args/

To disable this warning, use:
sudo systemctl disable coreos-check-cgroups.service
############################################################################

but when you look at https://docs.fedoraproject.org/en-US/fedora-coreos/kernel-args/ you only see an example of how to initialize a new CoreOS instance with cgroups v2 using Ignition files.

Solution:

In order to migrate your system to cgroups v2, run

sudo rpm-ostree kargs --delete=systemd.unified_cgroup_hierarchy

After that, you need to reboot your system in order for the changes to take effect:

sudo systemctl reboot

After the system has rebooted, the warning should disappear.
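
To verify which cgroups version is active, check the filesystem type mounted at /sys/fs/cgroup (cgroup2fs indicates cgroups v2, tmpfs indicates cgroups v1):

stat -fc %T /sys/fs/cgroup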

Posted by Uli Köhler in CoreOS

A simple CoreOS config for beginners with password login

In contrast to other Linux-based systems, CoreOS has quite a steep learning curve to get installed properly – for example, you have to create the right Ignition file for the installation. This is a huge obstacle to overcome, especially for first-time users.

This post attempts to alleviate the steep learning curve by providing a basic config that is suitable for most practical (and especially small-scale) use cases and provides a good starting point for custom configs.

Simple install

First, boot up the VM from the CoreOS Live CD. We assume that you have a DHCP network connected to eth0. You will see a shell immediately.

The VM will automatically acquire an IP address over DHCP.

You can use TechOverflow’s hosted ignition file for the installation. You need to use the correct disk instead of /dev/xvda depending on your hardware/hypervisor. If in doubt, use lsblk to find the correct disk name.

Now run the installation command:

sudo coreos-installer install /dev/xvda --copy-network --ignition-url https://techoverflow.net/coreos.ign

After the installation is finished, reboot using

reboot

Once the machine has rebooted, you can use the default login credentials:

Username: admin
Password: coreos

The hostname is CoreOS.

You absolutely need to change the password after the installation! If you create another user, remember that you still need to change the password of the admin user using

sudo passwd admin

Build your own config file

This is the Ignition YAML we used to create the config file; use our online transpiler at https://fcct.techoverflow.net to compile the YAML into the JSON file shown below. In order to create a new password hash, use TechOverflow's docker-based mkpasswd approach.
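
As a quick alternative sketch without docker, openssl can generate a SHA-512 crypt hash (a different hash type than the yescrypt hash shown below, but one that Fedora CoreOS should also accept):

# Prompts for the password and prints the hash
openssl passwd -6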

variant: fcos
version: 1.0.0
passwd:
  users:
    - name: admin
      groups:
        - "sudo"
        - "docker"
      password_hash: $y$j9T$n6h8P2ik8tfoNUFBBoly00$7bnrMF8oFrB25Fc3NqigqEH/MI5YXIJwtCG/iEsns.2

systemd:
  units:
    - name: docker.service
      enabled: true

    - name: containerd.service
      enabled: true
    - name: getty@tty1.service
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure
          ExecStart=-/usr/sbin/agetty --autologin admin --noclear %I $TERM
          TTYVTDisallocate=no
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          CoreOS
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          # Tell systemd to not use a pager when printing information
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          # Raise console message logging level from DEBUG (7) to WARNING (4)
          # to hide audit messages from the interactive console
          kernel.printk=4
    - path: /etc/ssh/sshd_config.d/20-enable-passwords.conf
      mode: 0644
      contents:
        inline: |
          # Enable SSH password login
          PasswordAuthentication yes

which results in the following transpiled JSON:

{
  "ignition": {
    "version": "3.0.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "sudo",
          "docker"
        ],
        "name": "admin",
        "passwordHash": "$y$j9T$n6h8P2ik8tfoNUFBBoly00$7bnrMF8oFrB25Fc3NqigqEH/MI5YXIJwtCG/iEsns.2"
      }
    ]
  },
  "storage": {
    "files": [
      {
        "contents": {
          "source": "data:,CoreOS%0A"
        },
        "mode": 420,
        "path": "/etc/hostname"
      },
      {
        "contents": {
          "source": "data:,%23%20Tell%20systemd%20to%20not%20use%20a%20pager%20when%20printing%20information%0Aexport%20SYSTEMD_PAGER%3Dcat%0A"
        },
        "mode": 420,
        "path": "/etc/profile.d/systemd-pager.sh"
      },
      {
        "contents": {
          "source": "data:,%23%20Raise%20console%20message%20logging%20level%20from%20DEBUG%20(7)%20to%20WARNING%20(4)%0A%23%20to%20hide%20audit%20messages%20from%20the%20interactive%20console%0Akernel.printk%3D4%0A"
        },
        "mode": 420,
        "path": "/etc/sysctl.d/20-silence-audit.conf"
      },
      {
        "contents": {
          "source": "data:,%23%20Enable%20SSH%20password%20login%0APasswordAuthentication%20yes%0A"
        },
        "mode": 420,
        "path": "/etc/ssh/sshd_config.d/20-enable-passwords.conf"
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "enabled": true,
        "name": "docker.service"
      },
      {
        "enabled": true,
        "name": "containerd.service"
      },
      {
        "dropins": [
          {
            "contents": "[Service]\n# Override Execstart in main unit\nExecStart=\n# Add new Execstart with `-` prefix to ignore failure\nExecStart=-/usr/sbin/agetty --autologin admin --noclear %I $TERM\nTTYVTDisallocate=no\n",
            "name": "autologin-core.conf"
          }
        ],
        "name": "[email protected]"
      }
    ]
  }
}


Posted by Uli Köhler in CoreOS

Simple 5-minute Vaultwarden (SQLite) setup using docker-compose

In order to setup Vaultwarden in a docker-compose & SQLite based configuration (e.g. on CoreOS), first we need to create a directory. I recommend using /opt/vaultwarden.

Run all the following commands and place all the following files in the /opt/vaultwarden directory!

First, we’ll create a .env file with random passwords (I recommend using pwgen 30). Not using a unique, random password here is a huge security risk since it will allow full admin access to Vaultwarden!

ADMIN_TOKEN=iqueingufo3LohshoohoG3tha2zou6
SIGNUPS_ALLOWED=true

Now place your docker-compose.yml:

version: '3.4'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    environment:
      - ADMIN_TOKEN=${ADMIN_TOKEN}
      - SIGNUPS_ALLOWED=${SIGNUPS_ALLOWED}
    volumes:
      - ./vw_data:/data
    ports:
      - 17881:80

Next, we’ll create a systemd service to autostart docker-compose:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will automatically start vaultwarden.

Now you need to configure your reverse proxy server to forward https://vaultwarden.mydomain.com to port 17881. You need to use HTTPS; plain HTTP won't work because the web crypto APIs used by the web vault are only available in secure contexts.
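
Here's a minimal nginx sketch for this, assuming TLS termination is handled the same way as in our Collabora example (e.g. by Certbot), with the port matching the docker-compose.yml above:

server {
    server_name vaultwarden.mydomain.com;

    location / {
        proxy_pass http://127.0.0.1:17881;
        proxy_http_version 1.1;
        # Pass through websocket upgrade headers
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen [::]:80; # managed by Certbot
}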

Now we need to configure vaultwarden using the admin interface.

Go to https://vaultwarden.mydomain.com/admin and enter the ADMIN_TOKEN from .env.

There are two things that you need to configure here:

  • The Domain Name under General settings
  • The email server settings under SMTP email settings

With these settings configured, Vaultwarden should be up and running and you can access it using https://vaultwarden.mydomain.com .

After the first user has been set up and tested, you can uncheck Allow new signups under General settings in the admin interface. This is recommended since otherwise anyone who is able to guess your domain name would be able to create a Vaultwarden account.

Posted by Uli Köhler in Container, Docker

Simple 15-minute passbolt setup using docker-compose

This is how I run my local passbolt instance.

First, create the directory. I use /opt/passbolt. Run all the following commands and place all the following files in that directory!

First, initialize the folders with the correct permissions (UID 33 is www-data, which passbolt runs as inside the container):

mkdir -p passbolt_gpg
chown -R 33:33 passbolt_gpg

Now create a .env file with random passwords (I recommend using pwgen 30):

MARIADB_ROOT_PASSWORD=meiJieseingi4dutiareimoh2Aiv5j
MARIADB_USER_PASSWORD=ohre3ye1oNexeShiuChaengahzuemo

Now place your docker-compose.yml:

version: '3.4'
services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_DATABASE=passbolt
      - MYSQL_USER=passbolt
      - MYSQL_PASSWORD=${MARIADB_USER_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql

  passbolt:
    image: passbolt/passbolt:latest-ce
    tty: true
    depends_on:
      - mariadb
    environment:
      - DATASOURCES_DEFAULT_HOST=mariadb
      - DATASOURCES_DEFAULT_USERNAME=passbolt
      - DATASOURCES_DEFAULT_PASSWORD=${MARIADB_USER_PASSWORD}
      - DATASOURCES_DEFAULT_DATABASE=passbolt
      - DATASOURCES_DEFAULT_PORT=3306
      - DATASOURCES_QUOTE_IDENTIFIER=true
      - APP_FULL_BASE_URL=https://passbolt.mydomain.com
      - EMAIL_DEFAULT_FROM=noreply@mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_HOST=smtp.mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_PORT=587
      - EMAIL_TRANSPORT_DEFAULT_USERNAME=noreply@mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_PASSWORD=yei5QueiNa5ahF0Aice8Na0aphoyoh
      - EMAIL_TRANSPORT_DEFAULT_TLS=true
      - PASSBOLT_KEY_EMAIL=noreply@mydomain.com
    volumes:
      - ./passbolt_gpg:/etc/passbolt/gpg
      - ./passbolt_web:/usr/share/php/passbolt/webroot/img/public
    command: ["/usr/bin/wait-for.sh", "-t", "0", "mariadb:3306", "--", "/docker-entrypoint.sh"]
    ports:
      - 17880:80

Be sure to replace all the email addresses, domain names and SMTP credentials with the values appropriate for your setup.

Now start up passbolt for the first time; it will initialize the database:

docker-compose up

You need to keep passbolt running during the following steps.

First, we’ll send a test email:

docker-compose exec passbolt su -m -c "bin/cake passbolt send_test_email"

If you see

The message has been successfully sent!

then your SMTP config is correct. Otherwise, debug the error message and, if necessary, modify the EMAIL_… environment variables in docker-compose.yml and restart passbolt afterwards.

Now we’ll create an admin user:

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u john.doe@mydomain.com -f John -l Doe -r admin" -s /bin/sh www-data

If you want to create a normal (non-admin) user, use user instead of admin:

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u jane.doe@mydomain.com -f Jane -l Doe -r user" -s /bin/sh www-data

After that, the only thing left to do is to create a systemd service to autostart your passbolt service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Passbolt is now running on port 17880 (you can configure this using docker-compose.yml). Just configure your reverse proxy appropriately to point to this port.

Posted by Uli Köhler in Container, Docker

How to install ruby & rubygems in Alpine Linux

Problem:

You want to install ruby and the gem package manager in Alpine Linux, but running apk add ruby rubygems shows you that the rubygems package doesn't exist:

/ # apk add ruby rubygems
ERROR: unable to select packages:
  rubygems (no such package):
    required by: world[rubygems]

Solution:

gem is included in the ruby package, so the only commands you need to run are

apk update
apk add ruby

Example output:

/ # apk add ruby
(1/7) Installing ca-certificates (20191127-r5)
(2/7) Installing gdbm (1.19-r0)
(3/7) Installing gmp (6.2.1-r0)
(4/7) Installing readline (8.1.0-r0)
(5/7) Installing yaml (0.2.5-r0)
(6/7) Installing ruby-libs (2.7.3-r0)
(7/7) Installing ruby (2.7.3-r0)
Executing busybox-1.32.1-r6.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 928 MiB in 154 packages

After doing that, you can immediately use both ruby and gem.
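
For example, you can verify the installation by printing the versions (the exact output will vary with your Alpine version):

ruby --version
gem --version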

Posted by Uli Köhler in Alpine Linux, Container, Docker, Linux, Ruby

How to apply Fedora CoreOS changes without a reboot

Do you want to install Fedora CoreOS packages without having to reboot your entire system in order for the packages to be available? Just run

sudo rpm-ostree ex apply-live

after running your rpm-ostree install commands.

For example:

sudo rpm-ostree install nano
sudo rpm-ostree ex apply-live


Note that this is not completely safe for multiple reasons, not even for seemingly innocuous utility packages like nano:

  • As indicated by the ex in the command, the apply-live command is experimental
  • It might apply other changes from the new OSTree, such as automatically installed updates, and hence might have unintended side effects
  • When changing files on a system with productive services running, the services might crash or experience other issues. This might not happen immediately and it might be hard to debug, especially in a complex environment. If you want to safely update your services, it's almost always best to just reboot into the new OSTree.

Also read our previous post on Why do you have to reboot after rpm-ostree install on Fedora CoreOS? where we explain the technical reasoning behind the reboots.

Posted by Uli Köhler in CoreOS