
How to optimize MySQL/MariaDB tables in docker-compose

If your MariaDB / MySQL root password is stored in .env, use this command:

source .env && docker-compose exec mariadb mysqlcheck -uroot -p$MARIADB_ROOT_PASSWORD --auto-repair --optimize --all-databases

You can also directly use the root password in the command:

docker-compose exec mariadb mysqlcheck -uroot -phoox8AiFahuniPaivatoh2iexighee --auto-repair --optimize --all-databases


Posted by Uli Köhler in Container, Databases, Docker

How to enable Collabora for multiple domains using docker-compose

In our previous post How to run Collabora office for Nextcloud using docker-compose, we investigated how to configure your Collabora office server using docker-compose.yml.

If you want to use multiple domains, you need to change this line in .env:

COLLABORA_DOMAIN=collabora.mydomain.com

By reading the source code, I found out that COLLABORA_DOMAIN is interpreted as a regular expression. Therefore you can use the (...|...|...) alternation syntax:

COLLABORA_DOMAIN=(nextcloud.mydomain.com|nextcloud.myseconddomain.com)

After that, restart Collabora.
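Note that docker-compose restart will not pick up the changed .env value; the container needs to be recreated instead. Running the following from the Collabora directory makes docker-compose detect the changed environment and recreate the container:

docker-compose up -d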

Posted by Uli Köhler in Docker, Nextcloud

How to run Collabora office for Nextcloud using docker-compose

Create this docker-compose.yml, e.g. in /opt/collabora-mydomain:

version: '3'
services:
  code:
    image: collabora/code:latest
    restart: always
    environment:
      - password=${COLLABORA_PASSWORD}
      - username=${COLLABORA_USERNAME}
      - domain=${COLLABORA_DOMAIN}
      - extra_params=--o:ssl.enable=true
    ports:
      - 9980:9980

Now create this .env with the configuration. You need to change the password and the domain!

COLLABORA_USERNAME=admin
COLLABORA_PASSWORD=veecheit0Phophiesh1fahPah0Wue3
COLLABORA_DOMAIN=collabora.mydomain.com

Now you can create a systemd service to autostart Collabora using our script from Create a systemd service for your docker-compose project in 10 seconds.

Run it from inside your directory (e.g. /opt/collabora-mydomain):

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Now you need to configure your reverse proxy to point to port 9980. Here’s an example nginx config:

server {
    server_name collabora.mydomain.com;

    access_log /var/log/nginx/collabora.mydomain.com.access_log;
    error_log /var/log/nginx/collabora.mydomain.com.error_log info;

    location / {
        proxy_pass https://127.0.0.1:9980;
        proxy_http_version 1.1;
        proxy_read_timeout 3600s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host            $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header X-Frontend-Host $host;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    }

    listen [::]:80; # managed by Certbot
}

Now open collabora.mydomain.com in your browser. If Collabora is running correctly, you should see:

OK

In Nextcloud, go to https://nextcloud.mydomain.com/settings/admin/richdocuments and set the URL of your Collabora Online server:

https://admin:veecheit0Phophiesh1fahPah0Wue3@collabora.mydomain.com

Ensure to use your custom password from .env and your custom domain!
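Alternatively, the same setting can likely be set from the command line via occ. This is a sketch assuming your Nextcloud service is named nextcloud and that the richdocuments app stores the server URL in its wopi_url app setting:

# Sketch: set the Collabora URL via occ (assumes service name "nextcloud" and the "wopi_url" setting)
docker-compose exec -u www-data nextcloud php occ config:app:set richdocuments wopi_url --value="https://admin:veecheit0Phophiesh1fahPah0Wue3@collabora.mydomain.com"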

Click Save and you should see the message Collabora Online server is reachable.

Posted by Uli Köhler in Container, Docker, Nextcloud

How to fix Docker-Nextcloud “Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it.”

Problem:

When using the official nextcloud docker image, you will see a message like

Module php-imagick in this instance has no SVG support. For better compatibility it is recommended to install it.

on the system overview page

Solution

This is a bug in the docker image and will likely be resolved soon – in the meantime, we can just manually install the required library inside the container:

docker-compose exec nextcloud apt -y update
docker-compose exec nextcloud apt -y install libmagickcore-6.q16-6-extra
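You can verify that SVG support is now available by querying Imagick’s supported formats inside the container:

docker-compose exec nextcloud php -r 'var_dump(\Imagick::queryFormats("SVG"));'

If the dumped array contains SVG, the warning should disappear after reloading the overview page.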

If you re-create the container, this change will be lost. In my opinion it’s still best to opt for the simple solution here and possibly repeat it once or twice, as opposed to a permanent but much more labour-intensive procedure like maintaining a custom docker image and later migrating back to the official image.

Posted by Uli Köhler in Container, Docker

How to fix build ‘lz4 library not found, compiling without it’

Problem:

When compiling a piece of software – for example in your Dockerfile or on your PC – you see a warning message like

lz4 library not found, compiling without it

Solution:

Install liblz4, a library implementing the LZ4 compression algorithm. On Ubuntu/Debian based systems you can install the development package using

sudo apt -y install liblz4-dev

In your Dockerfile, install using

RUN apt update && apt install -y liblz4-dev && rm -rf /var/lib/apt/lists/*

Otherwise, refer to the liblz4 GitHub page.

Posted by Uli Köhler in C/C++, Docker

Simple Elasticsearch setup with docker-compose

The following docker-compose.yml is a simple starting point for using ElasticSearch within a docker-based setup:

version: '2.2'
services:
    elasticsearch1:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
        container_name: elasticsearch1
        environment:
            - cluster.name=docker-cluster
            - node.name=elasticsearch1
            - cluster.initial_master_nodes=elasticsearch1
            - bootstrap.memory_lock=true
            - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
            - http.cors.enabled=true
            - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
            - http.cors.allow-credentials=true
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
        ulimits:
            memlock:
                soft: -1
                hard: -1
        volumes:
            - ./esdata1:/usr/share/elasticsearch/data
        ports:
            - 9200:9200
    dejavu:
        image: appbaseio/dejavu
        container_name: dejavu
        ports:
            - 1358:1358

Now create the esdata1 directory with the correct permissions:

sudo mkdir esdata1
sudo chown -R 1000:1000 esdata1 # the Elasticsearch container runs as UID/GID 1000

We also need to configure the vm.max_map_count sysctl parameter:

echo -e "\nvm.max_map_count=524288\n" | sudo tee -a /etc/sysctl.conf && sudo sysctl -w vm.max_map_count=524288


I recommend placing it in /opt/elasticsearch, but you can place it wherever you like.

If you want to autostart it on boot, see Create a systemd service for your docker-compose project in 10 seconds or just use this snippet from said post:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will create a systemd service named elasticsearch (if your directory is named elasticsearch like /opt/elasticsearch) and enable and start it immediately. Hence you can restart using

sudo systemctl restart elasticsearch

and view the logs using

sudo journalctl -xfu elasticsearch
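Once the containers are up, you can check the cluster state using the REST API on the mapped port 9200:

curl -s 'http://localhost:9200/_cluster/health?pretty'

A single-node setup should report "status" : "green" or "yellow". The dejavu UI is reachable at http://localhost:1358.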

For more complex setups involving more than one node, see our previous post ElasticSearch docker-compose.yml and systemd service generator.

Posted by Uli Köhler in Container, Databases, Docker, ElasticSearch

How I connected a network_mode: host container to its database container

I have set up my FreePBX to use network_mode: 'host', but faced issues because it couldn’t connect to the MariaDB container, which was not using network_mode: 'host'.

I fixed this by:

  • Setting the MariaDB container to network_mode: 'host'
  • Setting the FreePBX container to connect to 127.0.0.1 (DB_HOST=127.0.0.1). Setting it to localhost did NOT allow FreePBX to connect to MariaDB! See the sketch below.
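This is a minimal sketch of the resulting docker-compose.yml; the FreePBX image name and the DB_HOST variable are illustrative and need to be adapted to the image you actually use:

version: '3'
services:
  mariadb:
    image: mariadb:latest
    # With host networking, MariaDB listens directly on the host's 127.0.0.1:3306;
    # any ports: section would be ignored.
    network_mode: 'host'
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql
  freepbx:
    image: tiredofit/freepbx:latest # illustrative image name
    network_mode: 'host'
    environment:
      # Use 127.0.0.1, NOT localhost: MySQL clients interpret "localhost" as
      # "connect via the Unix socket", which is not shared between the containers.
      - DB_HOST=127.0.0.1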
Posted by Uli Köhler in Docker, FreePBX, Networking

Recommended docker-compose mariadb service

I recommend this service:

mariadb:
  image: mariadb:latest
  environment:
    - MYSQL_DATABASE=servicename
    - MYSQL_USER=servicename
    - MYSQL_PASSWORD=${MARIADB_PASSWORD}
    - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
  volumes:
    - ./mariadb_data:/var/lib/mysql
  command: --default-storage-engine innodb
  restart: unless-stopped
  healthcheck:
    test: mysqladmin -p${MARIADB_ROOT_PASSWORD} ping -h localhost
    interval: 20s
    start_period: 10s
    timeout: 10s
    retries: 3

Replace servicename with the name of your service (e.g. kimai, redmine, …) and use this .env:

MARIADB_ROOT_PASSWORD=eiNgam3woh4ahTee4chi9vohvauk6a
MARIADB_PASSWORD=shahb4alubei5Vie8arahhok2morae
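The healthcheck also lets dependent services in the same docker-compose.yml wait until MariaDB actually accepts connections. This is a sketch assuming your docker-compose version supports the long-form depends_on syntax (compose file format 2.1, or a compose-spec-capable docker-compose; the plain 3.x format dropped it):

servicename:
  image: servicename:latest # illustrative
  depends_on:
    mariadb:
      condition: service_healthy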


Posted by Uli Köhler in Container, Docker

Local redmine backup using bup (docker-compose compatible)

This script uses bup to back up your docker-compose based Redmine installation to a local bup repository, e.g. /var/lib/bup/my-redmine.bup:

#!/bin/bash
# Auto-determine the name from the directory name
# /opt/my-redmine => $NAME=my-redmine => /var/lib/bup/my-redmine.bup
export NAME=$(basename $(pwd))
export BUP_DIR=/var/lib/bup/$NAME.bup
bup_directory() {
        echo "BUPing $1"
        bup -d $BUP_DIR index $1 && bup save -9 --strip-path $(pwd) -n $1 $1
}
# Init
bup -d $BUP_DIR init
# Save MariaDB
source .env && docker-compose exec mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases | bup -d $BUP_DIR split -n $NAME-mariadb.sql
# Save directories
bup_directory redmine_data
bup_directory redmine_themes
# Backup self
bup_directory backup.sh
bup_directory docker-compose.yml
# OPTIONAL: Add par2 information
#   This is only recommended for backup on unreliable storage or for extremely critical backups
#   If you already have bitrot protection (like BTRFS with regular scrubbing), this might be overkill.
# Uncomment this line to enable:
# bup fsck -g

# OPTIONAL: Cleanup old backups
bup -d $BUP_DIR prune-older --keep-all-for 1m --keep-dailies-for 6m --keep-monthlies-for forever -9 --unsafe

It will backup:

  • The MariaDB database from inside Redmine, dumped using mysqldump
  • The redmine_data folder
  • The redmine_themes folder
  • The backup script backup.sh itself
  • docker-compose.yml

Place it in the same folder where docker-compose.yml is located.
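To restore such a backup, bup join reassembles the database dump stored via bup split, while bup restore extracts directory snapshots. A sketch using the repository and branch names from the script above:

# Reassemble the MariaDB dump (written by bup split -n $NAME-mariadb.sql)
bup -d /var/lib/bup/my-redmine.bup join my-redmine-mariadb.sql > mariadb-dump.sql
# Restore the latest redmine_data snapshot into ./restored
bup -d /var/lib/bup/my-redmine.bup restore -C ./restored /redmine_data/latest/.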

The script is compatible with our previous post How to create a systemd backup timer & service in 10 seconds.

Posted by Uli Köhler in bup, Docker

How to fix Fedora CoreOS rpm-ostree error: Transaction in progress: deploy --lock-finalization revision=… --disallow-downgrade

Problem:

When trying to install a package using rpm-ostree, you see an error message like

error: Transaction in progress: deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade 

Solution:

The error message means that there’s currently an rpm-ostree operation in progress and you need to wait for it to finish.

In order to see which process is running, use

ps aux | grep rpm

Example output:

# ps aux | grep rpm
root         730 41.2  1.7 1218036 34568 ?       Ssl  18:41   0:30 /usr/bin/rpm-ostree start-daemon
zincati     1896  0.0  0.8 481172 17324 ?        Sl   18:41   0:00 rpm-ostree deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade
root        3223  0.0  0.0 221452   832 pts/0    S+   18:42   0:00 grep --color=auto rpm

As you can see in the second line:

zincati 1896 0.0 0.8 481172 17324 ? Sl 18:41 0:00 rpm-ostree deploy --lock-finalization revision=5040eaabed46962a07b1e918ba5afa1502e1f898bf958673519cd83e986c228f --disallow-downgrade

the user zincati is currently running rpm-ostree on my system. zincati is the Fedora CoreOS auto-updater – in other words, an automatic system update is currently running on CoreOS.

Posted by Uli Köhler in CoreOS

How to fix CoreOS “WARNING: This system is using cgroups v1”

Problem:

When logging into your CoreOS instance, you see this warning message:

############################################################################
WARNING: This system is using cgroups v1. For increased reliability
it is strongly recommended to migrate this system and your workloads
to use cgroups v2. For instructions on how to adjust kernel arguments
to use cgroups v2, see:
https://docs.fedoraproject.org/en-US/fedora-coreos/kernel-args/

To disable this warning, use:
sudo systemctl disable coreos-check-cgroups.service
############################################################################

but when you look at https://docs.fedoraproject.org/en-US/fedora-coreos/kernel-args/ you only see an example of how to initialize a new CoreOS instance with cgroups v2 using Ignition files.

Solution:

In order to migrate your system to cgroups v2, run

sudo rpm-ostree kargs --delete=systemd.unified_cgroup_hierarchy

After that, you need to reboot your system in order for the changes to take effect:

sudo systemctl reboot

After the system has rebooted, the warning should disappear.
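You can verify which cgroup version is active by checking the filesystem type mounted at /sys/fs/cgroup:

stat -fc %T /sys/fs/cgroup/

cgroup2fs means cgroups v2 is active; tmpfs means the system is still on cgroups v1.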

Posted by Uli Köhler in CoreOS

A simple CoreOS config for beginners with password login

In contrast to other Linux-based systems, CoreOS has quite a steep learning curve to get installed properly – for example, you have to create the right Ignition file for the installation. This is a huge obstacle to overcome, especially for first-time users.

This post attempts to alleviate the steep learning curve by providing a basic config that is suitable for most practical (and especially small-scale) use cases and provides a good starting point for custom configs.

Simple install

First, boot up the VM from the CoreOS Live CD. We assume that you have a DHCP network connected to eth0. You will see a shell immediately.

The VM will automatically acquire an IP address over DHCP.

You can use TechOverflow’s hosted ignition file for the installation. You need to use the correct disk instead of /dev/xvda depending on your hardware/hypervisor. If in doubt, use lsblk to find the correct disk name.

Now run the installation command:

sudo coreos-installer install /dev/xvda --copy-network --ignition-url https://techoverflow.net/coreos.ign

After the installation is finished, reboot using

reboot

Once the machine has rebooted, you can use the default login credentials:

Username: admin
Password: coreos

The hostname is CoreOS.

You absolutely need to change the password after the installation! If you create another user, remember that you still need to change the password of the admin user using

sudo passwd admin

Build your own config file

This is the Ignition YAML we used to create the correct config file. Use our online transpiler at https://fcct.techoverflow.net to compile the YAML to the JSON file. In order to create a new password hash, use TechOverflow’s docker-based mkpasswd approach.
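If you have OpenSSL available, openssl passwd is an alternative way to generate a compatible hash; it produces a SHA-512 crypt ($6$…) hash instead of the yescrypt ($y$…) hash shown below, both of which are accepted:

openssl passwd -6 'your-password-here'

Paste the resulting hash into the password_hash field.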

variant: fcos
version: 1.0.0
passwd:
  users:
    - name: admin
      groups:
        - "sudo"
        - "docker"
      password_hash: $y$j9T$n6h8P2ik8tfoNUFBBoly00$7bnrMF8oFrB25Fc3NqigqEH/MI5YXIJwtCG/iEsns.2

systemd:
  units:
    - name: docker.service
      enabled: true

    - name: containerd.service
      enabled: true
    - name: getty@tty1.service
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure
          ExecStart=-/usr/sbin/agetty --autologin admin --noclear %I $TERM
          TTYVTDisallocate=no
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          CoreOS
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          # Tell systemd to not use a pager when printing information
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          # Raise console message logging level from DEBUG (7) to WARNING (4)
          # to hide audit messages from the interactive console
          kernel.printk=4
    - path: /etc/ssh/sshd_config.d/20-enable-passwords.conf
      mode: 0644
      contents:
        inline: |
          # Enable SSH password login
          PasswordAuthentication yes

which results in the following transpiled JSON:

{
  "ignition": {
    "version": "3.0.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "sudo",
          "docker"
        ],
        "name": "admin",
        "passwordHash": "$y$j9T$n6h8P2ik8tfoNUFBBoly00$7bnrMF8oFrB25Fc3NqigqEH/MI5YXIJwtCG/iEsns.2"
      }
    ]
  },
  "storage": {
    "files": [
      {
        "contents": {
          "source": "data:,CoreOS%0A"
        },
        "mode": 420,
        "path": "/etc/hostname"
      },
      {
        "contents": {
          "source": "data:,%23%20Tell%20systemd%20to%20not%20use%20a%20pager%20when%20printing%20information%0Aexport%20SYSTEMD_PAGER%3Dcat%0A"
        },
        "mode": 420,
        "path": "/etc/profile.d/systemd-pager.sh"
      },
      {
        "contents": {
          "source": "data:,%23%20Raise%20console%20message%20logging%20level%20from%20DEBUG%20(7)%20to%20WARNING%20(4)%0A%23%20to%20hide%20audit%20messages%20from%20the%20interactive%20console%0Akernel.printk%3D4%0A"
        },
        "mode": 420,
        "path": "/etc/sysctl.d/20-silence-audit.conf"
      },
      {
        "contents": {
          "source": "data:,%23%20Enable%20SSH%20password%20login%0APasswordAuthentication%20yes%0A"
        },
        "mode": 420,
        "path": "/etc/ssh/sshd_config.d/20-enable-passwords.conf"
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "enabled": true,
        "name": "docker.service"
      },
      {
        "enabled": true,
        "name": "containerd.service"
      },
      {
        "dropins": [
          {
            "contents": "[Service]\n# Override Execstart in main unit\nExecStart=\n# Add new Execstart with `-` prefix to ignore failure\nExecStart=-/usr/sbin/agetty --autologin admin --noclear %I $TERM\nTTYVTDisallocate=no\n",
            "name": "autologin-core.conf"
          }
        ],
        "name": "[email protected]"
      }
    ]
  }
}


Posted by Uli Köhler in CoreOS

Simple 5-minute Vaultwarden (SQLite) setup using docker-compose

In order to setup Vaultwarden in a docker-compose & SQLite based configuration (e.g. on CoreOS), first we need to create a directory. I recommend using /opt/vaultwarden.

Run all the following commands and place all the following files in the /opt/vaultwarden directory!

First, we’ll create a .env file with random passwords (I recommend using pwgen 30). Not using a unique, random password here is a huge security risk since the ADMIN_TOKEN allows full admin access to Vaultwarden!

ADMIN_TOKEN=iqueingufo3LohshoohoG3tha2zou6
SIGNUPS_ALLOWED=true

Now place your docker-compose.yml:

version: '3.4'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    environment:
      - ADMIN_TOKEN=${ADMIN_TOKEN}
      - SIGNUPS_ALLOWED=${SIGNUPS_ALLOWED}
    volumes:
      - ./vw_data:/data
    ports:
      - 17881:80

Next, we’ll create a systemd service to autostart docker-compose:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will automatically start vaultwarden.

Now you need to configure your reverse proxy server to point https://vaultwarden.mydomain.com to port 17881. You need to use HTTPS; plain HTTP won’t work due to browser limitations.
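Here is a minimal nginx sketch in the style of the Collabora example above, assuming the 17881 port mapping from the docker-compose.yml; the TLS certificate directives (e.g. managed by Certbot) are omitted:

server {
    server_name vaultwarden.mydomain.com;

    location / {
        proxy_pass http://127.0.0.1:17881;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen 443 ssl; # certificate configuration omitted
}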

Now we need to configure vaultwarden using the admin interface.

Go to https://vaultwarden.mydomain.com/admin and enter the ADMIN_TOKEN from .env.

There are two things that you need to configure here:

  • The Domain Name under General settings
  • The email server settings under SMTP email settings

With these settings configured, Vaultwarden should be up and running and you can access it using https://vaultwarden.mydomain.com .

After the first user has been set up and tested, you can uncheck Allow new signups under General settings in the admin interface. This is recommended since otherwise everyone who is able to guess your domain name would be able to create a Vaultwarden account.

Posted by Uli Köhler in Container, Docker

Simple 15-minute passbolt setup using docker-compose

This is how I run my local passbolt instance.

First, create the directory. I use /opt/passbolt. Run all the following commands and place all the following files in that directory!

Then, initialize the GPG folder with the correct permissions:

mkdir -p passbolt_gpg
chown -R 33:33 passbolt_gpg # 33 is the UID/GID of www-data inside the passbolt container

Now create a .env file with random passwords (I recommend using pwgen 30):

MARIADB_ROOT_PASSWORD=meiJieseingi4dutiareimoh2Aiv5j
MARIADB_USER_PASSWORD=ohre3ye1oNexeShiuChaengahzuemo

Now place your docker-compose.yml:

version: '3.4'
services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_DATABASE=passbolt
      - MYSQL_USER=passbolt
      - MYSQL_PASSWORD=${MARIADB_USER_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    volumes:
      - ./mariadb_data:/var/lib/mysql

  passbolt:
    image: passbolt/passbolt:latest-ce
    tty: true
    depends_on:
      - mariadb
    environment:
      - DATASOURCES_DEFAULT_HOST=mariadb
      - DATASOURCES_DEFAULT_USERNAME=passbolt
      - DATASOURCES_DEFAULT_PASSWORD=${MARIADB_USER_PASSWORD}
      - DATASOURCES_DEFAULT_DATABASE=passbolt
      - DATASOURCES_DEFAULT_PORT=3306
      - DATASOURCES_QUOTE_IDENTIFIER=true
      - APP_FULL_BASE_URL=https://passbolt.mydomain.com
      - EMAIL_DEFAULT_FROM=passbolt@mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_HOST=smtp.mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_PORT=587
      - EMAIL_TRANSPORT_DEFAULT_USERNAME=passbolt@mydomain.com
      - EMAIL_TRANSPORT_DEFAULT_PASSWORD=yei5QueiNa5ahF0Aice8Na0aphoyoh
      - EMAIL_TRANSPORT_DEFAULT_TLS=true
    volumes:
      - ./passbolt_gpg:/etc/passbolt/gpg
      - ./passbolt_web:/usr/share/php/passbolt/webroot/img/public
    command: ["/usr/bin/wait-for.sh", "-t", "0", "mariadb:3306", "--", "/docker-entrypoint.sh"]
    ports:
      - 17880:80

Be sure to replace all the email addresses, domain names and SMTP credentials by the values appropriate for your setup.

Now startup passbolt for the first time, it will initialize the database:

docker-compose up

You need to keep passbolt running during the following steps.

First, we’ll send a test email:

docker-compose exec passbolt su -m -c "bin/cake passbolt send_test_email"

If you see

The message has been successfully sent!

then your SMTP config is correct. Otherwise, debug the error message and, if necessary, modify the EMAIL_… environment variables in docker-compose.yml and restart passbolt afterwards.

Now we’ll create an admin user:

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u john.doe@mydomain.com -f John -l Doe -r admin" -s /bin/sh www-data

If you want to create a normal (non-admin) user, use user instead of admin:

docker-compose exec passbolt su -m -c "bin/cake passbolt register_user -u jane.doe@mydomain.com -f Jane -l Doe -r user" -s /bin/sh www-data

After that, the only thing left to do is to create a systemd service to autostart your passbolt service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Passbolt is now running on port 17880 (you can configure this using docker-compose.yml). Just configure your reverse proxy appropriately to point to this port.

Posted by Uli Köhler in Container, Docker

How to install ruby & rubygems in Alpine Linux

Problem:

You want to install ruby and the gem package manager in Alpine Linux, but running apk add ruby rubygems shows you that the rubygems package doesn’t exist:

/ # apk add ruby rubygems
ERROR: unable to select packages:
  rubygems (no such package):
    required by: world[rubygems]

Solution:

gem is included in the ruby package, so the only commands you need to run are

apk update
apk add ruby

Example output:

/ # apk add ruby
(1/7) Installing ca-certificates (20191127-r5)
(2/7) Installing gdbm (1.19-r0)
(3/7) Installing gmp (6.2.1-r0)
(4/7) Installing readline (8.1.0-r0)
(5/7) Installing yaml (0.2.5-r0)
(6/7) Installing ruby-libs (2.7.3-r0)
(7/7) Installing ruby (2.7.3-r0)
Executing busybox-1.32.1-r6.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 928 MiB in 154 packages

After doing that, you can immediately use both ruby and gem.
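For example, you can now install a pure-Ruby gem like rake (gems with native extensions additionally require apk add build-base ruby-dev):

gem install rake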

Posted by Uli Köhler in Alpine Linux, Container, Docker, Linux, Ruby

How to apply Fedora CoreOS changes without a reboot

Do you want to install Fedora CoreOS packages without having to reboot your entire system in order for the packages to be available? Just run

sudo rpm-ostree ex apply-live

after running your rpm-ostree install commands.

For example:

sudo rpm-ostree install nano
sudo rpm-ostree ex apply-live


Note that this is not completely safe for multiple reasons, not even for seemingly innocuous utility packages like nano:

  • As indicated by the ex in the command, the apply-live command is experimental
  • It might apply other changes from the new OSTree, like automatically installed updates, and hence might have unintended side effects
  • When changing files on a system with productive services running, the services might crash or experience other issues. This might not happen immediately and can be hard to debug, especially in a complex environment. If you want to safely update your services, it’s almost always best to just reboot into the new OSTree.

Also read our previous post on Why do you have to reboot after rpm-ostree install on Fedora CoreOS? where we explain the technical reasoning behind the reboots.

Posted by Uli Köhler in CoreOS

Why do you have to reboot after rpm-ostree install on Fedora CoreOS?

If you have worked with Fedora CoreOS, you might have noticed that every time you install a package you need to reboot in order for the files from said package to be available to you. This is quite different from other Linux distributions where you can immediately use whatever package you installed without having to reboot every time.

What is the technical reasoning for having to reboot?

rpm-ostree is quite a special tool: it does not just install a package into the running system. Instead, running rpm-ostree install builds a separate OS tree – think of it as an image containing all the files constituting your system – while the currently running system is not modified at all.

While rebooting after every install might seem like a stupid idea since it takes down the entire server, remember that it can save you a lot of headache since there are no partially updated services and you don’t need to manually fix or restart anything since everything is restarted on reboot. This means that your system is always in a consistent state, since every service is cleanly shut down before the system reboot – and after the reboot, every service is cleanly started with the system changes.

Can you install multiple packages before having to reboot?

Yes, you can run multiple rpm-ostree install commands before rebooting. When rebooting, all the changes will be applied at once.
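For example:

sudo rpm-ostree install nano
sudo rpm-ostree install htop
sudo systemctl reboot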

Can you delay the reboot after rpm-ostree install?

Yes, there is no need to reboot immediately after the rpm-ostree command. You can delay the reboot as long as you like. Note, however, that when the machine is rebooted for reasons other than a manual reboot (like a power outage or a restart of the VM host), the updates will be applied as well, but you might not be there to check whether all services are running correctly. Hence, I recommend rebooting as soon as possible.

Can you avoid rebooting after installing packages?

Yes, Fedora CoreOS provides an experimental live update feature using rpm-ostree ex apply-live. See our post How to apply Fedora CoreOS changes without a reboot. Note that applying updates or new packages on a system with productively running services might be a bad idea, but it’s not inherently more unsafe than installing packages on a typical Linux distribution like Debian, Fedora or Ubuntu, where every install or update of a package immediately affects the files on the file system.


Posted by Uli Köhler in CoreOS

How to install docker-compose on Fedora CoreOS

Just install it using rpm-ostree:

sudo rpm-ostree install docker-compose

and then reboot in order for the changes to the OSTree to take effect:

sudo systemctl reboot
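After the reboot, verify that docker-compose is available:

docker-compose --version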


Posted by Uli Köhler in CoreOS

Fedora CoreOS: How to install Xen/XCP-NG guest utilities using rpm-ostree

In Fedora CoreOS, you can install the Xen guest utilities using

sudo rpm-ostree install xe-guest-utilities-latest

After installing the package, reboot in order for the changes to take effect:

sudo systemctl reboot

Now we need to enable and start the Xen service:

sudo systemctl enable --now xe-linux-distribution

It will now automatically start on boot.
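You can check that the service is running using:

systemctl status xe-linux-distribution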

Example output from the install command:

# rpm-ostree install xe-guest-utilities-latest
Checking out tree 49ec34c... done
Enabled rpm-md repositories: fedora-cisco-openh264 updates fedora
rpm-md repo 'fedora-cisco-openh264' (cached); generated: 2020-08-25T19:10:34Z
rpm-md repo 'updates' (cached); generated: 2021-05-13T01:04:01Z
rpm-md repo 'fedora' (cached); generated: 2020-10-19T23:27:19Z
Importing rpm-md... done
Resolving dependencies... done
Will download: 1 package (1.0 MB)
Downloading from 'updates'... done
Importing packages... done
Checking out packages... done
Running pre scripts... done
Running post scripts... done
Running posttrans scripts... done
Writing rpmdb... done
Writing OSTree commit... done
Staging deployment... done
Added:
  xe-guest-utilities-latest-7.21.0-1.fc33.x86_64
Run "systemctl reboot" to start a reboot


Posted by Uli Köhler in CoreOS

Fedora CoreOS: How to use German keyboard layout in installer

If you want to use the German keyboard layout in the Fedora CoreOS installer, set the de keymap using:

sudo localectl set-keymap de

The new keymap will be effective immediately.
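You can verify the current settings by running localectl without arguments, which prints the active virtual console keymap:

localectl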

Note that the keyboard layout will not automatically be transferred to the installed system.

Posted by Uli Köhler in CoreOS