Docker

How to list just the container IDs of all Docker containers

Use

docker ps -q

Example output:

$ docker ps -q
29f721fa3124
39a240769ae8
cae90fe55b9a
90580dc4a6d2
348c24768864
5e64779be4f0
78874ae92a8e
92650c527106
948a8718050f
7aad5a210e3c
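
This is especially handy for feeding the IDs into other commands. For example, you can stop all running containers like this (use docker ps -aq instead if you also want to include stopped containers):

docker stop $(docker ps -q)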

Posted by Uli Köhler in Docker

How to correctly use apk in Dockerfile

In Dockerfiles, you should always run apk with --no-cache to prevent the package index cache from being stored in the image layer, for example:

FROM alpine:3.17
RUN apk add --no-cache python3-dev
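
Without --no-cache, apk would store its package index under /var/cache/apk/ inside the image layer. The older workaround, which you may still encounter in existing Dockerfiles, was to clean up manually within the same RUN layer:

FROM alpine:3.17
RUN apk add python3-dev && rm -rf /var/cache/apk/*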

Posted by Uli Köhler in Alpine Linux, Docker

How to specify which docker image to use in .gitlab-ci.yml

The following .gitlab-ci.yml will build a native executable project using cmake with a custom docker image:

stages:
  - build

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4

In this example, we have only one stage – if you have multiple stages, you can specify different images for each of them.
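
For example, here is a sketch of a two-stage pipeline where each stage uses its own image (the test image and test script are placeholders):

stages:
  - build
  - test

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4

testmyexe:
  stage: test
  image: 'ubuntu:22.04'
  script:
    - ./run-tests.sh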

Posted by Uli Köhler in Docker, git, GitLab

How to configure SMTP E-Mail for InvenTree (docker/docker-compose)

You can easily configure SMTP email for InvenTree by adding the following config to your .env file (I’m using the docker production config):

INVENTREE_EMAIL_HOST=smtp.mydomain.com
INVENTREE_EMAIL_USERNAME=inventree@mydomain.com
INVENTREE_EMAIL_PASSWORD=cheen1zaiCh4yaithaecieng2jazey
INVENTREE_EMAIL_TLS=true
INVENTREE_EMAIL_SENDER=inventree@mydomain.com

Even on an existing InvenTree installation, it is sufficient to add this config to the .env file and restart the server.
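
For a docker-compose based setup, restarting means re-creating the containers so that the new environment variables are picked up, e.g. from the directory containing docker-compose.yml:

docker-compose down && docker-compose up -d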

Posted by Uli Köhler in Docker, InvenTree

How to remove container_name: … statements from docker-compose.yml automatically using sed

This command will remove all container_name statements from a docker-compose.yml config file:

sed -zi -e 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml

Example input:

services:
    inventree-proxy:
        container_name: inventree-proxy
        image: nginx:stable
        depends_on:
            - inventree-server
[...]

Example output:

services:
    inventree-proxy:
        image: nginx:stable
        depends_on:
            - inventree-server
[...]
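
If you want to preview the result without modifying the file, run the same expression without the in-place flag:

sed -z 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml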

Posted by Uli Köhler in Docker, Linux

How to fix docker emqx_ctl Node ’emqx@node1.emqx.mydomain.com’ not responding to pings.

Problem:

When trying to run emqx_ctl in a dockerized emqx setup using a command like

docker-compose exec emqx ./bin/emqx status

you see an error message like

Node 'emqx@node1.emqx.mydomain.com' not responding to pings.
/opt/emqx/bin/emqx: line 41: die: command not found

Solution:

The problem here is that emqx_ctl is trying to connect to the IP address for node1.emqx.mydomain.com, but that hostname does not resolve to the IP address of the docker container (it may instead resolve to the public IP address of your server).

The solution here is to create a network alias within docker/docker-compose so that the Docker DNS system resolves node1.emqx.mydomain.com to the internal IP address of the container.

For example, in docker-compose, you can create your network using

networks:
  emqx:
    driver: bridge

and then configure the alias for the container using

services:
  emqx:
    image: emqx:4.4.4
    environment:
      - "EMQX_NAME=emqx"
      - "EMQX_HOST=node1.emqx.mydomain.com"
      - "EMQX_LOADED_PLUGINS=emqx_recon,emqx_retainer,emqx_management,emqx_dashboard"
    ports:
      - 18083:18083
      - 1883:1883
    volumes:
      - ./emqx_data:/opt/emqx/data
      - ./emqx_log:/opt/emqx/log
    networks:
      emqx:
        aliases:
          - "node1.emqx.mydomain.com"

Posted by Uli Köhler in Container, Docker, EMQX, MQTT

How to setup Netmaker using docker-compose in under 15 minutes

In this post, we’ll build a simple setup for running Netmaker with a PostgreSQL backend using docker-compose, with an external Traefik instance acting as the reverse proxy.

First, create a directory for the Netmaker files to reside in, e.g.:

mkdir /opt/netmaker

cd to that directory:

cd /opt/netmaker

At this point, we’ll download the Mosquitto config from GitHub and open the required ports in the ufw firewall. I reserved 1000 ports (51821:52821) in order to facilitate having a lot of networks (more than the default 9). My traefik config is the one from Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges, which allows having a single *.netmaker.mydomain.com Let’s Encrypt wildcard certificate.
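
Here is a sketch of those two steps; note that the mosquitto.conf URL assumes the file still lives at docker/mosquitto.conf in the gravitl/netmaker repository, so verify the path for the version you are deploying:

wget -O mosquitto.conf https://raw.githubusercontent.com/gravitl/netmaker/master/docker/mosquitto.conf
sudo ufw allow 51821:52821/udp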

Now, create docker-compose.yml in that directory:

version: "3.4"

services:
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=netmaker
      - POSTGRES_USER=netmaker
  netmaker: # The Primary Server for running Netmaker
    image: gravitl/netmaker:v0.14.5
    depends_on:
      - postgres
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv6.conf.all.forwarding=1
    restart: always
    volumes: # Volume mounts necessary for sql, coredns, and mqtt
      - ./netmaker_dnsconfig:/root/config/dnsconfig
      - ./netmaker_sqldata:/root/data
      - ./netmaker_shared_certs:/etc/netmaker
    environment: # Necessary capabilities to set iptables when running in container
      SERVER_NAME: "broker.${NETMAKER_BASE_DOMAIN}" # The domain/host IP indicating the mq broker address
      SERVER_HOST: "${NETMAKER_PUBLIC_IP}" # Set to public IP of machine.
      SERVER_HTTP_HOST: "api.${NETMAKER_BASE_DOMAIN}" # Overrides SERVER_HOST if set. Useful for making HTTP available via different interfaces/networks.
      SERVER_API_CONN_STRING: "api.${NETMAKER_BASE_DOMAIN}:443"
      # COREDNS_ADDR: "${NETMAKER_PUBLIC_IP}" # Address of the CoreDNS server. Defaults to SERVER_HOST
      DNS_MODE: "off" # Enables DNS Mode, meaning all nodes will set hosts file for private dns settings.
      API_PORT: "8081" # The HTTP API port for Netmaker. Used for API calls / communication from front end. If changed, need to change port of BACKEND_URL for netmaker-ui.
      CLIENT_MODE: "on" # Deprecated. CLIENT_MODE should always be ON
      REST_BACKEND: "on" # Enables the REST backend (API running on API_PORT at SERVER_HTTP_HOST). Change to "off" to turn off.
      DISABLE_REMOTE_IP_CHECK: "off" # If turned "on", Server will not set Host based on remote IP check. This is already overridden if SERVER_HOST is set. Turned "off" by default.
      TELEMETRY: "on" # Whether or not to send telemetry data to help improve Netmaker. Switch to "off" to opt out of sending telemetry.
      RCE: "off" # Enables setting PostUp and PostDown (arbitrary commands) on nodes from the server. Off by default.
      MASTER_KEY: "${NETMAKER_MASTER_KEY}" # The admin master key for accessing the API. Change this in any production installation.
      CORS_ALLOWED_ORIGIN: "*" # The "allowed origin" for API requests. Change to restrict where API requests can come from.
      DISPLAY_KEYS: "on" # Show keys permanently in UI (until deleted) as opposed to 1-time display.
      DATABASE: "postgres" # Database to use - sqlite, postgres, or rqlite
      SQL_HOST: "postgres"
      SQL_DB: "netmaker"
      SQL_USER: "netmaker"
      SQL_PASS: "${POSTGRES_PASSWORD}"
      NODE_ID: "${SERVER_NAME}" # used for HA - identifies this server vs other servers
      MQ_HOST: "mq"  # the address of the mq server. If running from docker compose it will be "mq". Otherwise, need to input address. If using "host networking", it will find and detect the IP of the mq container.
      MQ_SERVER_PORT: "1883" # the reachable port of MQ by the server - change if internal MQ port changes (or use external port if MQ is not on the same machine)
      MQ_PORT: "443" # the reachable port of MQ - change if external MQ port changes (port on proxy, not necessarily the one exposed in docker-compose)
      HOST_NETWORK: "off" # whether or not host networking is turned on. Only turn on if configured for host networking (see docker-compose.hostnetwork.yml). Will set host-level settings like iptables.
      VERBOSITY: "1" # logging verbosity level - 1, 2, or 3
      MANAGE_IPTABLES: "on" # deprecated
      # PORT_FORWARD_SERVICES: "ssh,mq" # decide which services to port forward ("dns","ssh", or "mq")
      # this section is for OAuth
      AUTH_PROVIDER: "" # "<azure-ad|github|google|oidc>"
      CLIENT_ID: "" # "<client id of your oauth provider>"
      CLIENT_SECRET: "" # "<client secret of your oauth provider>"
      FRONTEND_URL: "" # "https://dashboard.<netmaker base domain>"
      AZURE_TENANT: "" # "<only for azure, you may optionally specify the tenant for the OAuth>"
      OIDC_ISSUER: "" # https://oidc.yourprovider.com - URL of oidc provider
    ports:
      - "51821-52821:51821-52821/udp" # wireguard ports
    expose:
      - "8081" # api port
    labels: # only for use with traefik proxy (default)
      - traefik.enable=true
      - traefik.http.routers.netmaker-api.entrypoints=websecure
      - traefik.http.routers.netmaker-api.rule=Host(`api.${NETMAKER_BASE_DOMAIN}`)
      - traefik.http.routers.netmaker-api.service=netmaker-api
      - traefik.http.services.netmaker-api.loadbalancer.server.port=8081
      - "traefik.http.routers.netmaker-api.tls.certresolver=cloudflare"
      - "traefik.http.routers.netmaker-api.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.http.routers.netmaker-api.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"
  netmaker-ui:  # The Netmaker UI Component
    container_name: netmaker-ui
    image: gravitl/netmaker-ui:v0.14.5
    depends_on:
      - netmaker
    links:
      - "netmaker:api"
    restart: always
    environment:
      BACKEND_URL: "https://api.${NETMAKER_BASE_DOMAIN}" # URL where UI will send API requests. Change based on SERVER_HOST, SERVER_HTTP_HOST, and API_PORT
    expose:
      - "80"
    labels:
      - traefik.enable=true
      - traefik.http.middlewares.nmui-security.headers.accessControlAllowOriginList=*.${NETMAKER_BASE_DOMAIN}
      - traefik.http.middlewares.nmui-security.headers.stsSeconds=31536000
      - traefik.http.middlewares.nmui-security.headers.browserXssFilter=true
      - traefik.http.middlewares.nmui-security.headers.customFrameOptionsValue=SAMEORIGIN
      - traefik.http.middlewares.nmui-security.headers.customResponseHeaders.X-Robots-Tag=none
      - traefik.http.middlewares.nmui-security.headers.customResponseHeaders.Server= # Remove the server name
      - traefik.http.routers.netmaker-ui.entrypoints=websecure
      - traefik.http.routers.netmaker-ui.middlewares=nmui-security@docker
      - traefik.http.routers.netmaker-ui.rule=Host(`dashboard.${NETMAKER_BASE_DOMAIN}`)
      - traefik.http.routers.netmaker-ui.service=netmaker-ui
      - traefik.http.services.netmaker-ui.loadbalancer.server.port=80
      - "traefik.http.routers.netmaker-ui.tls.certresolver=cloudflare"
      - "traefik.http.routers.netmaker-ui.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.http.routers.netmaker-ui.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"
  mq: # the MQTT broker for netmaker
    container_name: mq
    image: eclipse-mosquitto:2.0.11-openssl
    depends_on:
      - netmaker
    restart: unless-stopped
    volumes:
      - ./mosquitto.conf:/mosquitto/config/mosquitto.conf # need to pull conf file from github before running (under docker/mosquitto.conf)
      - ./mosquitto_data:/mosquitto/data
      - ./mosquitto_logs:/mosquitto/log
      - ./netmaker_shared_certs:/mosquitto/certs
    expose:
      - "8883"
    labels:
      - traefik.enable=true
      - traefik.tcp.routers.mqtts.rule=HostSNI(`broker.${NETMAKER_BASE_DOMAIN}`)
      - traefik.tcp.routers.mqtts.tls.passthrough=true
      - traefik.tcp.services.mqtts-svc.loadbalancer.server.port=8883
      - traefik.tcp.routers.mqtts.service=mqtts-svc
      - traefik.tcp.routers.mqtts.entrypoints=websecure
      - "traefik.tcp.routers.mqtts.tls.certresolver=cloudflare"
      - "traefik.tcp.routers.mqtts.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.tcp.routers.mqtts.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"

After that, create .env in said directory containing some info about your node:

SERVER_NAME=netmaker-01
NETMAKER_BASE_DOMAIN=netmaker.mydomain.com
NETMAKER_MASTER_KEY=geeveeBaeQuie1cie6aaz6eepahleo
NETMAKER_PUBLIC_IP=101.252.11.54
POSTGRES_PASSWORD=ahph8Jih4aesheel7id1uyo0gietai

Adjust the NETMAKER_BASE_DOMAIN and the NETMAKER_PUBLIC_IP accordingly. Also, you need to choose a random NETMAKER_MASTER_KEY and POSTGRES_PASSWORD.
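
One way to generate suitable random values for the key and the password is openssl:

openssl rand -hex 20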

Now we’ll use the script from Create a systemd service for your docker-compose project in 10 seconds in order to create a systemd service to automatically run the service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This script will also automatically start the service (i.e. docker-compose up).

Posted by Uli Köhler in Container, Docker

How to setup Wekan using docker-compose in just 3 minutes

First, create a directory for the Wekan files to reside in, e.g.:

mkdir /opt/wekan

Change to that directory:

cd /opt/wekan

Now we need to create the data directory, which needs to be owned by UID 999 in order for Wekan to store uploads:

mkdir -p wekan_data && chown -R 999:999 wekan_data

Now, create docker-compose.yml in that directory:

version: '3.4'

services:
  wekandb:
    restart: always
    image: mongo:5
    command: mongod --logpath /dev/null --oplogSize 128 --quiet
    healthcheck:
      test: ["CMD", "mongo", "--quiet", "--eval", "'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'"]
      interval: 15s
      timeout: 10s
      retries: 3
      start_period: 10s
    volumes:
      - ./wekan-db:/data/db
      - ./wekan-db-dump:/dump
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro


  wekan:
    image: quay.io/wekan/wekan
    restart: always
    ports:
      - 7972:8080
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=${URL}
      - MAIL_URL=${EMAIL_URL}
      - MAIL_FROM=${EMAIL_FROM}
      - WITH_API=true
      - WRITABLE_PATH=/data
    volumes:
      - ./wekan_data:/data
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - wekandb

After that, create .env in said directory containing some info about your node (note that the @ in the SMTP username must be URL-encoded as %40):

URL=https://wekan.mydomain.com
EMAIL_URL=smtp://noreply%40mydomain.com:yourpassword@smtp.mydomain.com:25/
EMAIL_FROM='My Wekan <noreply@mydomain.com>'

Now we’ll use the script from Create a systemd service for your docker-compose project in 10 seconds in order to create a systemd service to automatically run the service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This script will also automatically start the service (i.e. docker-compose up).

Now you can set up your reverse proxy to point your domain – e.g. wekan.mydomain.com – to port 7972 (or change that port in docker-compose.yml). Which reverse proxy you use doesn’t matter much; I use both nginx and traefik. I will cover the configuration for these reverse proxies in future posts.

Once you can access Wekan, you can register as a new user. The first user will be an admin and can also disable registration using the Web UI.

Posted by Uli Köhler in Container, Docker

How to fix wekan Path “/data/attachments” is not writable!

Problem:

While starting your dockerized wekan setup, you see an error message like this:

wekan_1    | errorClass [Error]: [FilesCollection.attachments] Path "/data/attachments" is not writable! [401]
wekan_1    |     at new FilesCollection (packages/ostrio:files/server.js:354:15)
wekan_1    |     at module (models/attachments.js:52:15)
wekan_1    |     at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |     at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |     at Module.moduleLink [as link] (/build/programs/server/npm/node_modules/meteor/modules/node_modules/@meteorjs/reify/lib/runtime/index.js:52:22)
wekan_1    |     at module (server/publications/attachments.js:1:24)
wekan_1    |     at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |     at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |     at require (packages/modules-runtime.js:258:21)
wekan_1    |     at /build/programs/server/app/app.js:162362:1
wekan_1    |     at /build/programs/server/boot.js:401:38
wekan_1    |     at Array.forEach (<anonymous>)
wekan_1    |     at /build/programs/server/boot.js:226:21
wekan_1    |     at /build/programs/server/boot.js:464:7
wekan_1    |     at Function.run (/build/programs/server/profile.js:280:14)
wekan_1    |     at /build/programs/server/boot.js:463:13 {
wekan_1    |   isClientSafe: true,
wekan_1    |   error: 401,
wekan_1    |   reason: '[FilesCollection.attachments] Path "/data/attachments" is not writable!',
wekan_1    |   details: Error: EACCES: permission denied, mkdir '/data/attachments'
wekan_1    |       at Object.mkdirSync (fs.js:1014:3)
wekan_1    |       at new FilesCollection (packages/ostrio:files/server.js:348:10)
wekan_1    |       at module (models/attachments.js:52:15)
wekan_1    |       at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |       at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |       at Module.moduleLink [as link] (/build/programs/server/npm/node_modules/meteor/modules/node_modules/@meteorjs/reify/lib/runtime/index.js:52:22)
wekan_1    |       at module (server/publications/attachments.js:1:24)
wekan_1    |       at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |       at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |       at require (packages/modules-runtime.js:258:21)
wekan_1    |       at /build/programs/server/app/app.js:162362:1
wekan_1    |       at /build/programs/server/boot.js:401:38
wekan_1    |       at Array.forEach (<anonymous>)
wekan_1    |       at /build/programs/server/boot.js:226:21
wekan_1    |       at /build/programs/server/boot.js:464:7
wekan_1    |       at Function.run (/build/programs/server/profile.js:280:14)
wekan_1    |       at /build/programs/server/boot.js:463:13 {
wekan_1    |     errno: -13,
wekan_1    |     syscall: 'mkdir',
wekan_1    |     code: 'EACCES',
wekan_1    |     path: '/data/attachments'
wekan_1    |   },
wekan_1    |   errorType: 'Meteor.Error'
wekan_1    | }

Solution:

You have mounted a local directory as the wekan data directory, for example like this in docker-compose.yml:

wekan:
  image: quay.io/wekan/wekan
  # ...
  environment:
    # ...
    - WRITABLE_PATH=/data
  volumes:
    - ./wekan_data:/data

But this directory does not have the correct permissions set. You can fix it using this command on the directory (wekan_data in this example):

sudo chown -R 999:999 wekan_data
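
To verify the ownership is correct, run the following; it should print 999 999:

stat -c '%u %g' wekan_data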

After that, restart wekan and the issue should be fixed.

Posted by Uli Köhler in Container, Docker, Networking

How to fix docker MariaDB Incorrect definition of table mysql.column_stats: expected column ‘hist_type’ at position 9…

Problem:

In the log of your MariaDB docker server, you see errors like

mariadb_1    | 2022-06-07 20:24:00 283 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'hist_type' at position 9 to have type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB','JSON_HB'), found type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB').
mariadb_1    | 2022-06-07 20:24:00 283 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'histogram' at position 10 to have type longblob, found type varbinary(255).

Solution:

This happens after upgrading the MariaDB version.

In order to fix it, run mysql_upgrade in the container:

docker-compose exec mariadb mysql_upgrade -uroot -pchopahl0aib4eiphuk5bao3shiVoow

where chopahl0aib4eiphuk5bao3shiVoow is your MySQL root password.

If you have your password in .env as we recommend, you can use this command:

source .env && docker-compose exec mariadb mysql_upgrade -uroot -p${MARIADB_ROOT_PASSWORD}
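
Afterwards, you can check that the errors are gone by looking at the most recent log lines:

docker-compose logs --tail=50 mariadb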

Posted by Uli Köhler in Docker

How to fix WordPress docker image upload size 2M limit

Problem:

You are running your WordPress instance using the official WordPress Apache image.

However, the WordPress Media page has a maximum upload size of 2 Megabytes.

Solution:

This setting is configured in the php.ini used internally by the WordPress docker image. While it is possible to use a custom php.ini, it’s much easier to edit .htaccess. Just edit .htaccess in the WordPress directory where wp-config.php is located and append this after # END WordPress to set the upload limit to 256 Megabytes:

php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 300
php_value max_input_time 300

The change should be effective immediately after reloading the page. Note that you still might need to configure your reverse proxy (if any) to allow larger uploads. My recommendation is to just try it out as is and if large uploads fail, it’s likely that your reverse proxy is at fault.
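
For nginx, for example, the relevant directive is client_max_body_size, set in the server or location block that proxies to WordPress:

client_max_body_size 256M;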

Full .htaccess example:

# BEGIN WordPress
# The directives (lines) between "BEGIN WordPress" and "END WordPress" are
# dynamically generated and should only be modified via WordPress filters.
# Any changes to the directives between these markers will be overwritten.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteRule ^en/wp-login.php /wp-login.php [QSA,L]
RewriteRule ^de/wp-login.php /wp-login.php [QSA,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

# END WordPress

php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 300
php_value max_input_time 300

Posted by Uli Köhler in Container, Docker, Wordpress

How to create systemd service timer that runs Nextcloud cron.php in 10 seconds

This post shows you a really quick method to create a systemd timer that runs cron.php on dockerized Nextcloud (using docker-compose). We created a script that automatically creates a systemd timer and the related service to run cron.php hourly, using the command from our previous post How to run Nextcloud cron in a docker-compose based setup.

In order to run our autoinstall script, run:

wget -qO- https://techoverflow.net/scripts/install-nextcloud-cron.sh | sudo bash /dev/stdin

from the directory where docker-compose.yml is located. Note that the script will use the directory name as the name for the service and timer that is created. For example, running the script in /var/lib/nextcloud-mydomain will cause nextcloud-mydomain-cron to be used as the service name.

Example output from the script:

Creating systemd service... /etc/systemd/system/nextcloud-mydomain-cron.service
Creating systemd timer... /etc/systemd/system/nextcloud-mydomain-cron.timer
Enabling & starting nextcloud-mydomain-cron.timer
Created symlink /etc/systemd/system/timers.target.wants/nextcloud-mydomain-cron.timer → /etc/systemd/system/nextcloud-mydomain-cron.timer.

The script will create /etc/systemd/system/nextcloud-mydomain-cron.service containing the specification of what exactly to run:

[Unit]
Description=nextcloud-mydomain-cron

[Service]
Type=oneshot
ExecStart=/usr/bin/docker-compose exec -T -u www-data nextcloud php cron.php
WorkingDirectory=/var/opt/nextcloud-mydomain

and /etc/systemd/system/nextcloud-mydomain-cron.timer containing the logic for when the .service is started:

[Unit]
Description=nextcloud-mydomain-cron

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

and will automatically start and enable the timer. This means: no further steps are needed after running this script!

In order to show the current status of the service, use e.g.

sudo systemctl status nextcloud-mydomain-cron.timer

Example output:

● nextcloud-mydomain-cron.timer - nextcloud-mydomain-cron
     Loaded: loaded (/etc/systemd/system/nextcloud-mydomain-cron.timer; enabled; vendor preset: disabled)
     Active: active (waiting) since Fri 2022-04-01 00:33:48 UTC; 6min ago
    Trigger: Fri 2022-04-01 01:00:00 UTC; 19min left
   Triggers: ● nextcloud-mydomain-cron.service

Apr 01 00:33:48 CoreOS systemd[1]: Started nextcloud-mydomain-cron.

In the

Trigger: Fri 2022-04-01 01:00:00 UTC; 19min left

line you can see when the service will be run next. The script generates a timer that runs OnCalendar=hourly, which means the service will be run at the start of every hour. Check out the systemd.time manpage for further information on the syntax you can use to specify other timeframes.
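
For example, to run the cron job every 15 minutes instead, change the [Timer] section of the .timer file accordingly:

[Timer]
OnCalendar=*:0/15
Persistent=true

and reload systemd afterwards using sudo systemctl daemon-reload.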

In order to run the cron job immediately (it will still run hourly after doing this), do

sudo systemctl start nextcloud-mydomain-cron.service

(note that you need to run systemctl start on the .service! Running systemctl start on the .timer will only enable the timer and not run the service immediately).

In order to view the logs, use

sudo journalctl -xfu nextcloud-mydomain-cron.service

(just like above, you need to run journalctl -xfu on the .service, not on the .timer).

In order to disable the automatic cron runs, use e.g.

sudo systemctl disable nextcloud-mydomain-cron.timer

Source code:

#!/bin/bash
# Create a systemd service & timer that runs cron.php on dockerized nextcloud
# by Uli Köhler - https://techoverflow.net
# Licensed as CC0 1.0 Universal
export SERVICENAME=$(basename $(pwd))-cron

export SERVICEFILE=/etc/systemd/system/${SERVICENAME}.service
export TIMERFILE=/etc/systemd/system/${SERVICENAME}.timer

echo "Creating systemd service... $SERVICEFILE"
sudo cat >$SERVICEFILE <<EOF
[Unit]
Description=$SERVICENAME

[Service]
Type=oneshot
ExecStart=$(which docker-compose) exec -T -u www-data nextcloud php cron.php
WorkingDirectory=$(pwd)
EOF

echo "Creating systemd timer... $TIMERFILE"
sudo cat >$TIMERFILE <<EOF
[Unit]
Description=$SERVICENAME

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
EOF

echo "Enabling & starting $SERVICENAME.timer"
sudo systemctl enable $SERVICENAME.timer
sudo systemctl start $SERVICENAME.timer

Posted by Uli Köhler in Docker, Linux, Nextcloud

How to run Nextcloud cron in a docker-compose based setup

Run this command in the directory where docker-compose.yml is located in order to run the Nextcloud cron job:

docker-compose exec -u www-data nextcloud php cron.php
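
If you run this command from a script, a cron job or a systemd service (i.e. without a TTY), add the -T flag so docker-compose does not try to allocate a pseudo-TTY:

docker-compose exec -T -u www-data nextcloud php cron.php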

Posted by Uli Köhler in Docker

A working Traefik & docker-compose minio setup with console

Note: I have not updated this config to use the xl or xl-single storage backends, hence the version is locked at RELEASE.2022-10-24T18-35-07Z

The following config works by using two domains: minio.mydomain.com and console.minio.mydomain.com.

For the basic Traefik setup this is based on, see Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges. Regarding this setup, the important part is to enable the docker autodiscovery and to define the certificate resolver (we’re using the ALPN resolver).

Be sure to choose a random MINIO_ROOT_PASSWORD!

version: '3.5'
services:
   minio:
       image: quay.io/minio/minio:RELEASE.2022-10-24T18-35-07Z
       command: server --console-address ":9001" /data
       volumes:
          - ./data:/data
          - ./config:/root/.minio
       environment:
          - MINIO_ROOT_USER=minioadmin
          - MINIO_ROOT_PASSWORD=uikui5choRith0ZieV2zohN5aish5r
          - MINIO_DOMAIN=minio.mydomain.com
          - MINIO_SERVER_URL=https://minio.mydomain.com
          - MINIO_BROWSER_REDIRECT_URL=https://console.minio.mydomain.com
       labels:
          - "traefik.enable=true"
          # Console
          - "traefik.http.routers.minio-console.rule=Host(`console.minio.mydomain.com`)"
          - "traefik.http.routers.minio-console.entrypoints=websecure"
          - "traefik.http.routers.minio-console.tls.certresolver=alpn"
          - "traefik.http.routers.minio-console.service=minio-console"
          - "traefik.http.services.minio-console.loadbalancer.server.port=9001"
           # API
          - "traefik.http.routers.minio.rule=Host(`minio.mydomain.com`)"
          - "traefik.http.routers.minio.entrypoints=websecure"
          - "traefik.http.routers.minio.tls.certresolver=alpn"
          - "traefik.http.routers.minio.service=minio"
          - "traefik.http.services.minio.loadbalancer.server.port=9000"

Posted by Uli Köhler in Container, Docker, S3, Traefik

How to fix Traefik Could not define the service name for the router: too many services

Problem:

Traefik does not load some of your services and you see an error message like the following one:

traefik_1  | time="2022-03-27T15:22:05Z" level=error msg="Could not define the service name for the router: too many services" routerName=myapp providerName=docker

with a docker label config with multiple routers like this:

labels:
    - "traefik.enable=true"
    - "traefik.http.routers.myapp-console.rule=Host(`console.myapp.mydomain.com`)"
    - "traefik.http.routers.myapp-console.entrypoints=websecure"
    - "traefik.http.routers.myapp-console.tls.certresolver=alpn"
    - "traefik.http.services.myapp-console.loadbalancer.server.port=9001"
    #
    - "traefik.http.routers.myapp.rule=Host(`myapp.mydomain.com`)"
    - "traefik.http.routers.myapp.entrypoints=websecure"
    - "traefik.http.routers.myapp.tls.certresolver=cloudflare-techoverflow"
    - "traefik.http.routers.myapp.tls.domains[0].main=mydomain.com"
    - "traefik.http.routers.myapp.tls.domains[0].sans=*.mydomain.com"
    - "traefik.http.services.myapp.loadbalancer.server.port=9000"

Solution:

The basic issue here is that you have multiple routers defined for a single docker container, and Traefik does not know which http.services entry belongs to which http.routers entry!

In order to fix this, explicitly tell Traefik for each router which service it should use, like this:

- "traefik.http.routers.myapp-console.service=myapp-console"

Full example:

labels:
    - "traefik.enable=true"
    - "traefik.http.routers.myapp-console.rule=Host(`console.myapp.mydomain.com`)"
    - "traefik.http.routers.myapp-console.entrypoints=websecure"
    - "traefik.http.routers.myapp-console.tls.certresolver=alpn"
    - "traefik.http.routers.myapp-console.service=myapp-console"
    - "traefik.http.services.myapp-console.loadbalancer.server.port=9001"
    #
    - "traefik.http.routers.myapp.rule=Host(`myapp.mydomain.com`)"
    - "traefik.http.routers.myapp.entrypoints=websecure"
    - "traefik.http.routers.myapp.tls.certresolver=cloudflare-techoverflow"
    - "traefik.http.routers.myapp.tls.domains[0].main=mydomain.com"
    - "traefik.http.routers.myapp.tls.domains[0].sans=*.mydomain.com"
    - "traefik.http.routers.myapp.service=myapp"
    - "traefik.http.services.myapp.loadbalancer.server.port=9000"

Posted by Uli Köhler in Container, Docker, Networking, Traefik

Recommended PostgreSQL docker-compose service configuration

I use the following docker-compose.yml service:

version: '3.5'
services:
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}

With the following .env:

POSTGRES_DB=headscale
POSTGRES_USER=headscale
POSTGRES_PASSWORD=vah2phuen3shesahc6Jeenaechecee

Using .env has the huge advantage that other services like my backup script can access the configuration in a standardized manner using environment variables.

Posted by Uli Köhler in Container, Databases, Docker

How I run pg_dump in my docker-compose setup

I have the following docker-compose.yml service:

version: '3.5'
services:
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}

With the following .env:

POSTGRES_DB=headscale
POSTGRES_USER=headscale
POSTGRES_PASSWORD=vah2phuen3shesahc6Jeenaechecee

Given that setup, I run pg_dump like this:

source .env && docker-compose exec -T postgres pg_dump -U${POSTGRES_USER} > pgdump-$(date +%F_%H-%M-%S).sql
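
The -T flag prevents docker-compose from allocating a pseudo-TTY, which would otherwise corrupt the redirected dump. To restore such a dump into the running container, feed it back through psql (replace the filename with your actual dump):

source .env && docker-compose exec -T postgres psql -U${POSTGRES_USER} ${POSTGRES_DB} < pgdump-2022-01-01_00-00-00.sql
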
Posted by Uli Köhler in Container, Databases, Docker

A simple systemd service autoinstall script

I use this in git repositories to ease the deployment process

#!/bin/bash
# This script installs and enables/starts a systemd service
# It also installs the service file
export NAME=MyService

cat >/etc/systemd/system/${NAME}.service <<EOF
# TODO Copy & paste the systemd .service file here
EOF

# Enable and start service
systemctl enable --now ${NAME}.service

The following example automatically installs a docker-compose systemd .service:

#!/bin/bash
# This script installs and enables/starts a systemd service
# It also installs the service file
export NAME=MyService

cat >/etc/systemd/system/${NAME}.service <<EOF
[Unit]
Description=${NAME}
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=$(pwd)
# Shutdown container (if running) before unit is being started
ExecStartPre=$(which docker-compose) -f docker-compose.yml down
# Start container when unit is started
ExecStart=$(which docker-compose) -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=$(which docker-compose) -f docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

# Enable and start service
systemctl enable --now ${NAME}.service
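
After running the script as root, you can check that the service came up using:

sudo systemctl status MyService.service
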
Posted by Uli Köhler in Docker, systemd

How I fixed project transfer error Gitlab URI::InvalidURIError (query conflicts with opaque)

Problem:

Any time I tried to transfer a project in my docker-hosted GitLab instance, the transfer failed with error 500 and I was presented with the following error log:

==> /var/log/gitlab/gitlab-rails/production.log <==
  
URI::InvalidURIError (query conflicts with opaque):
  
lib/container_registry/client.rb:84:in `repository_tags'
app/models/container_repository.rb:94:in `manifest'
app/models/container_repository.rb:98:in `tags'
app/models/container_repository.rb:118:in `has_tags?'
app/models/project.rb:2890:in `has_root_container_repository_tags?'
app/models/project.rb:1037:in `has_container_registry_tags?'
app/services/projects/transfer_service.rb:61:in `transfer'
app/services/projects/transfer_service.rb:35:in `execute'
app/controllers/projects_controller.rb:120:in `transfer'
app/controllers/application_controller.rb:490:in `set_current_admin'
lib/gitlab/session.rb:11:in `with_session'
app/controllers/application_controller.rb:481:in `set_session_storage'
lib/gitlab/i18n.rb:105:in `with_locale'
lib/gitlab/i18n.rb:111:in `with_user_locale'
app/controllers/application_controller.rb:475:in `set_locale'
app/controllers/application_controller.rb:469:in `set_current_context'
lib/gitlab/metrics/elasticsearch_rack_middleware.rb:16:in `call'
lib/gitlab/middleware/rails_queue_duration.rb:33:in `call'
lib/gitlab/middleware/speedscope.rb:13:in `call'
lib/gitlab/request_profiler/middleware.rb:17:in `call'
lib/gitlab/database/load_balancing/rack_middleware.rb:23:in `call'
lib/gitlab/metrics/rack_middleware.rb:16:in `block in call'
lib/gitlab/metrics/web_transaction.rb:46:in `run'
lib/gitlab/metrics/rack_middleware.rb:16:in `call'
lib/gitlab/jira/middleware.rb:19:in `call'
lib/gitlab/middleware/go.rb:20:in `call'
lib/gitlab/etag_caching/middleware.rb:21:in `call'
lib/gitlab/middleware/multipart.rb:173:in `call'
lib/gitlab/middleware/read_only/controller.rb:50:in `call'
lib/gitlab/middleware/read_only.rb:18:in `call'
lib/gitlab/middleware/same_site_cookies.rb:27:in `call'
lib/gitlab/middleware/handle_malformed_strings.rb:21:in `call'
lib/gitlab/middleware/basic_health_check.rb:25:in `call'
lib/gitlab/middleware/handle_ip_spoof_attack_error.rb:25:in `call'
lib/gitlab/middleware/request_context.rb:21:in `call'
lib/gitlab/middleware/webhook_recursion_detection.rb:15:in `call'
config/initializers/fix_local_cache_middleware.rb:11:in `call'
lib/gitlab/middleware/compressed_json.rb:26:in `call'
lib/gitlab/middleware/rack_multipart_tempfile_factory.rb:19:in `call'
lib/gitlab/middleware/sidekiq_web_static.rb:20:in `call'
lib/gitlab/metrics/requests_rack_middleware.rb:75:in `call'
lib/gitlab/middleware/release_env.rb:13:in `call'

Solution:

This error seems to occur if you had a docker registry configured in previous GitLab versions (a legacy docker repository); the error doesn’t disappear even after deconfiguring the registry.

In order to fix it, I logged into the container using

docker-compose exec gitlab /bin/bash

and edited /opt/gitlab/embedded/service/gitlab-rails/app/models/project.rb using

vi /opt/gitlab/embedded/service/gitlab-rails/app/models/project.rb

where on line 2890 you can find the following function:

##
# This method is here because of support for legacy container repository
# which has exactly the same path like project does, but which might not be
# persisted in `container_repositories` table.
#
def has_root_container_repository_tags?
  return false unless Gitlab.config.registry.enabled

  ContainerRepository.build_root_repository(self).has_tags?
end

We just want Gitlab to ignore the repository stuff, so we insert return false after the return false unless Gitlab.config.registry.enabled line:

##
# This method is here because of support for legacy container repository
# which has exactly the same path like project does, but which might not be
# persisted in `container_repositories` table.
#
def has_root_container_repository_tags?
  return false unless Gitlab.config.registry.enabled
  return false

  ContainerRepository.build_root_repository(self).has_tags?
end

to trick GitLab into thinking that the repository does not have any legacy registries. After that, save the file and run

sudo gitlab-ctl restart

after which you should be able to transfer your projects just fine. Note that this patch is applied inside the container, so it will be lost whenever the container is recreated or the image is upgraded.

Posted by Uli Köhler in Docker, git

How I fixed docker panic: assertion failed: write: circular dependency occurred

Problem:

When starting docker on a VM that had suddenly been turned off during a power outage, the docker daemon failed to start up with the following error log:

Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.946815577Z" level=info msg="Starting up" 
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.947842629Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf" 
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.949500623Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.949617127Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.949705114Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.949776371Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.950747679Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.950788173Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.950806216Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.950815090Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:15.012683899Z" level=info msg="[graphdriver] using prior storage driver: overlay2" 
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:15.027806434Z" level=warning msg="Docker could not enable SELinux on the host system" 
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:15.176098505Z" level=info msg="Loading containers: start." 
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:15.728503609Z" level=info msg="Removing stale sandbox 22e03a9f65217fa0ce1603fa1d6326b7bf412777be94e930b02dbe6554549084 (154aa4bd403045e229b39cc4dda1d16a72b45d18671cab7c993bff4eaee9c2c9)" 
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: panic: assertion failed: write: circular dependency occurred
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: goroutine 1 [running]:
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt._assert(...)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/db.go:1172
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*node).write(0xc0001e16c0, 0xc001381000)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/node.go:233 +0x3c5
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*node).spill(0xc0001e16c0, 0xc0012da078, 0x1)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/node.go:374 +0x225
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*node).spill(0xc0001e1650, 0xc000fefaa0, 0xc000d9dc50)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/node.go:346 +0xbc
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*Bucket).spill(0xc000d6f180, 0xc000fefa00, 0xc000d9de40)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/bucket.go:570 +0x49a
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*Bucket).spill(0xc000fca2b8, 0x310b759f, 0x55dfe7bf17e0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/bucket.go:537 +0x3f6
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*Tx).Commit(0xc000fca2a0, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/tx.go:160 +0xe8
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*DB).Update(0xc000d6bc00, 0xc000d9e078, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/db.go:701 +0x105
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libkv/store/boltdb.(*BoltDB).AtomicPut(0xc000a862d0, 0xc000fd32d0, 0x65, 0xc000d46300, 0x179, 0x180, 0xc000fef890, 0x0, 0x55dfe5407900, 0x0, ...)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libkv/store/boltdb/boltdb.go:371 +0x225
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork/datastore.(*datastore).PutObjectAtomic(0xc000c3bbc0, 0x55dfe6bd1398, 0xc000efad20, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/datastore/datastore.go:415 +0x3ca
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*driver).storeUpdate(0xc0000c3c00, 0x55dfe6bd1398, 0xc000efad20, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge_store.go:106 +0x6e
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*driver).RevokeExternalConnectivity(0xc0000c3c00, 0xc000f19d80, 0x40, 0xc00111c300, 0x40, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge.go:1405 +0x1f1
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*endpoint).sbLeave(0xc000d41e40, 0xc000f63200, 0x1, 0x0, 0x0, 0x0, 0x0, 0xc001b91ce0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/endpoint.go:751 +0x1326
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*endpoint).Delete(0xc000d41b80, 0xc000f19d01, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/endpoint.go:842 +0x374
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*sandbox).delete(0xc000f63200, 0x1, 0x55dfe61ee216, 0x1e)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/sandbox.go:229 +0x191
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*controller).sandboxCleanup(0xc000432a00, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/sandbox_store.go:278 +0xdae
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.New(0xc0000c3b80, 0x9, 0x10, 0xc000878090, 0xc001e69e60, 0xc0000c3b80, 0x9)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/controller.go:248 +0x726
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/daemon.(*Daemon).initNetworkController(0xc00000c1e0, 0xc00021a000, 0xc001e69e60, 0xc000151270, 0xc00000c1e0, 0xc000686a10, 0xc001e69e60)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/daemon/daemon_unix.go:855 +0xac
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/daemon.(*Daemon).restore(0xc00000c1e0, 0xc00045c4c0, 0xc00023e2a0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/daemon/daemon.go:490 +0x52c
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/daemon.NewDaemon(0x55dfe6bad710, 0xc00045c4c0, 0xc00021a000, 0xc000878090, 0x0, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/daemon/daemon.go:1150 +0x2c1d
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: main.(*DaemonCli).start(0xc000b62720, 0xc00009a8a0, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/cmd/dockerd/daemon.go:195 +0x785
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: main.runDaemon(...)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/cmd/dockerd/docker_unix.go:13
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: main.newDaemonCommand.func1(0xc0000bc2c0, 0xc0000b2000, 0x0, 0xc, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/cmd/dockerd/docker.go:34 +0x7d
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000bc2c0, 0xc00011caa0, 0xc, 0xc, 0xc0000bc2c0, 0xc00011caa0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:850 +0x472
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000bc2c0, 0x0, 0x0, 0x10)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:958 +0x375
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).Execute(...)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:895
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: main.main()
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]:         /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/cmd/dockerd/docker.go:97 +0x185
Jan 05 15:15:15 CoreOS-Haar systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

Solution:

First, try moving /var/lib/docker/network/files/local-kv.db:

sudo mv /var/lib/docker/network/files/local-kv.db /opt/old-docker-local-kv.db

and then restarting docker using e.g. sudo systemctl restart docker, which worked for many users on GitHub.

Only if that does not work, try the following brute-force method!

Move the /var/lib/docker directory to a new location as a backup, after which docker should be able to start up again:

sudo mv /var/lib/docker /opt/old-var-lib-docker

This is a pretty brute-force method, but for me it worked totally fine since my setup did not use named volumes, only bind-mounted local directories. In case you are actively using volumes, you might need to restore the volumes from /opt/old-var-lib-docker manually!

Posted by Uli Köhler in Container, Docker