Docker

How to automatically remove duplicate docker networks

docker-compose based setups with locally mounted volumes have few common failure modes in practice. The most important ones are Docker system upgrades stopping all services, and duplicate networks with the same name preventing a service from starting. The latter happens when docker-compose does not delete the old network properly, possibly due to an unclean or unfinished shutdown.

This will result in log messages such as

May 22 21:52:15 myserver docker-compose[2384642]: Removing network etherpad-mydomain_default
May 22 21:52:15 myserver docker-compose[2384642]: network etherpad-mydomain_default is ambiguous (2 matches found based on name)
May 22 21:52:16 myserver systemd[1]: etherpad-mydomain.service: Control process exited, code=exited, status=1/FAILURE

This simple script finds all duplicate network names and deletes the redundant networks, keeping only the first network for each name.

#!/usr/bin/env python3
import subprocess
import json

already_seen_networks = set()

# List all networks, one JSON object per line
output = subprocess.check_output(["docker", "network", "ls", "--format", "{{json .}}"])
for line in output.decode("utf-8").split("\n"):
    line = line.strip()
    if not line:
        continue
    obj = json.loads(line)
    network_id = obj["ID"]
    name = obj["Name"]
    if name in already_seen_networks:
        # Keep the first network with a given name, remove any further ones
        print(f"Detected duplicate network {name}. Removing duplicate network {network_id}...")
        subprocess.check_output(["docker", "network", "rm", network_id])

    already_seen_networks.add(name)

Just call this script without any arguments:

python3 docker-remove-duplicate-networks.py
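
To run it automatically, you could for example install a cron job. This is just a sketch; the script path below is an assumption, adjust it to wherever you saved the script:

# /etc/cron.d/docker-remove-duplicate-networks: run once per hour
0 * * * * root /usr/bin/python3 /opt/scripts/docker-remove-duplicate-networks.py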


Posted by Uli Köhler in Docker, Python

How to list docker networks as JSON

docker network ls --format '{{json .}}'

Example output:

{"CreatedAt":"2023-05-12 01:21:58.769840402 +0200 CEST","Driver":"bridge","ID":"649e42effc83","IPv6":"false","Internal":"false","Labels":"com.docker.compose.version=1.29.2,com.docker.compose.network=default,com.docker.compose.project=vaultwarden-mydomain","Name":"vaultwarden-mydomain_default","Scope":"local"}
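
Since every line of the output is a separate JSON object, you can post-process it with jq, for example to list just the network names (a small sketch, assuming jq is installed):

docker network ls --format '{{json .}}' | jq -r '.Name'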


Posted by Uli Köhler in Docker

How to install InvenTree using docker in just 5 minutes

The following script is an automated installation script for InvenTree that fetches the current docker-compose.yml and other configs from GitHub, modifies them so that only local directories are used for storage, and then sets up InvenTree.

First, create a directory such as /opt/inventree-mydomain. I recommend choosing a unique directory name rather than just inventree, so that multiple instances can be kept apart.
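
For example:

mkdir -p /opt/inventree-mydomain
cd /opt/inventree-mydomain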


#!/bin/sh
wget -O nginx.prod.conf https://github.com/inventree/InvenTree/raw/master/docker/production/nginx.prod.conf
wget -O docker-compose.yml https://github.com/inventree/InvenTree/raw/master/docker/production/docker-compose.yml
wget -O .env https://github.com/inventree/InvenTree/raw/master/docker/production/.env

sed -i -e 's/#INVENTREE_DB_USER=pguser/INVENTREE_DB_USER=inventree/g' .env
sed -i -e "s/#INVENTREE_DB_PASSWORD=pgpassword/INVENTREE_DB_PASSWORD=$(pwgen 30 1)/g" .env
sed -i -e "s/INVENTREE_WEB_PORT=1337/INVENTREE_WEB_PORT=$(shuf -i 1024-65535 -n 1)/g" .env
sed -i -e "s/#INVENTREE_ADMIN_USER=/INVENTREE_ADMIN_USER=admin/g" .env
sed -i -e "s/#INVENTREE_ADMIN_PASSWORD=/INVENTREE_ADMIN_PASSWORD=$(pwgen 30 1)/g" .env
sed -i -e "s/#INVENTREE_ADMIN_EMAIL=/INVENTREE_ADMIN_EMAIL=admin@mydomain.com/g" .env
sed -i -e 's/COMPOSE_PROJECT_NAME=inventree-production//g' .env
# Enable cache
sed -i -e "s/#INVENTREE_CACHE_HOST=inventree-cache/INVENTREE_CACHE_HOST=inventree-cache/g" .env
sed -i -e "s/#INVENTREE_CACHE_PORT=6379/INVENTREE_CACHE_PORT=6379/g" .env
# Use direct directory mapping to avoid mounting issues
sed -i -e "s%- inventree_data:%- $(pwd)/inventree_data:%g" docker-compose.yml
# ... now we can remove the volume declarations from docker-compose.yml
sed -i -e '/^volumes:/,$d' docker-compose.yml

sed -z -i -e 's#profiles:\s*- redis\s*##g' docker-compose.yml # Make redis start always, even without docker-compose --profile redis
# Use standard docker-compose directory naming to facilitate multiple parallel installations
sed -z -i -e 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml # Remove container_name: ... statements
# Create data directory which is bound to the docker volume
mkdir -p inventree_data
# Initialize database
docker-compose up -d inventree-cache inventree-db # database initialization needs cache
docker-compose run inventree-server invoke update

After that, you can check .env for the randomly generated INVENTREE_ADMIN_PASSWORD and INVENTREE_WEB_PORT.
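
For example, you can display both values like this:

grep -E 'INVENTREE_ADMIN_PASSWORD|INVENTREE_WEB_PORT' .env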

Now you can enable autostart & start the service using systemd, for more details see our post Create a systemd service for your docker-compose project in 10 seconds:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Don’t forget to configure your reverse proxy to point to InvenTree.

Posted by Uli Köhler in Docker, InvenTree

How to get IP address of a running docker-compose container

This will get the IP address of a running docker-compose container for the mongo service.

docker inspect $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].ID') --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

When you use this in shell scripts, it’s often convenient to store the IP address in a variable:

export MONGO_IP=$(docker inspect $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].ID') --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')

which you can then use as $MONGO_IP.
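
For example, you could then check that the container is reachable (a sketch; mongosh is only needed if you want to open a MongoDB shell and is assumed to be installed on the host):

ping -c 1 "$MONGO_IP"
# Optionally, open a MongoDB shell directly (assumes mongosh is installed on the host):
mongosh "mongodb://$MONGO_IP:27017"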

For more details on how this works, see our posts How to get container name of docker-compose container and How to list just container names & IP address(es) of all Docker containers.

Posted by Uli Köhler in Docker

How to get container name of docker-compose container

Let's assume the directory where your docker-compose.yml is located is called myservice.

If you have, for example, a docker-compose.yml that declares a service mongo running MongoDB, docker-compose will call the container mongo or mongo-1.

However, docker itself will call that container myservice-mongo-1.

In order to find out the actual docker name of your container – assuming the container is running – use the following code:

docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name'

This uses docker-compose ps to list running containers, exporting some information as JSON, for example:

[{
  "ID": "2d68b1c1625dbfb41e05f55af0a333b5700332112c6c7551f78afe27b1dfc7ad",
  "Name": "production-mongo-1",
  "Command": "docker-entrypoint.sh mongod",
  "Project": "production",
  "Service": "mongo",
  "State": "running",
  "Health": "",
  "ExitCode": 0,
  "Publishers": [
    {
      "URL": "",
      "TargetPort": 27017,
      "PublishedPort": 0,
      "Protocol": "tcp"
    }
  ]
}]

Then we use jq (a command line JSON processor) to a) select only the entries in the list of running containers where the Service attribute equals mongo, b) take the first one using [0], and c) read its Name attribute, which stores the name of the container.

Example output:

$ docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name'
myservice-mongo-1
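
You can substitute this command inline wherever plain docker expects a container name, for example to follow the logs of the mongo container (a sketch):

docker logs -f $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name')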


Posted by Uli Köhler in Container, Docker

bup remote server docker-compose config with CIFS-mounted backup store

In our previous post How to setup a “bup remote” server in 5 minutes using docker-compose we outlined how to setup your own bup remote server using docker-compose. Read that post before this one!

This post provides an alternate docker-compose.yml config file that mounts a remote CIFS directory as /bup backup directory instead of using a local directory. This is most useful when using a NAS and a separate bup server.

For this example, we’ll mount the CIFS share //10.1.2.3/bup-backups with user cifsuser and password pheT8Eigho.

Note: For performance reasons, the CIFS server (NAS) and the bup server should be locally connected, not via the internet.

# Mount the backup volume using CIFS
# NOTE: We recommend to not use a storage mounted over the internet
# for performance reasons. Instead, deploy a bup remote server locally.
volumes:
  bup-backups:
    driver_opts:
      type: cifs
      o: "username=cifsuser,password=pheT8Eigho,uid=1111,gid=1111"
      device: "//10.1.2.3/bup-backups"

version: "3.8"
services:
  bup-server:
    image: ulikoehler/bup-server:latest
    environment:
      - SSH_PORT=2022
    volumes:
      - ./dotssh:/home/bup/.ssh
      - ./dropbear:/etc/dropbear
      # BUP backup storage: CIFS mounted
      - bup-backups:/bup
    ports:
      - 2022:2022
    restart: unless-stopped


Posted by Uli Köhler in bup, Docker, Networking

How to setup a “bup remote” server in 5 minutes using docker-compose

The bup backup system implements remote backup on a server by connecting via SSH to said server, starting a bup process there and then communicating via the SSH tunnel.

In this post, we’ll setup a server for bup remote backup based on our ulikoehler/bup-server image (which contains both bup and dropbear as an SSH server).

1. Initialize the directory structure & create SSH keyset to access the server

I recommend doing this in /opt/bup, but in principle, any directory will do.

mkdir -p dotssh bup
# Generate new elliptic curve public key
ssh-keygen -t ed25519 -f id_bup -N ""
# Add SSH key to list of authorized keys
cat id_bup.pub | sudo tee -a dotssh/authorized_keys
# Fix permissions so that dropbear does not complain
sudo chown -R 1111:1111 bup
sudo chmod 0600 dotssh/authorized_keys
sudo chmod 0700 dotssh

1111 is the user ID of the bup user inside the container.

2. Create docker-compose.yml

Note: This docker-compose.yml uses a local backup directory – you can also mount a CIFS directory from e.g. a NAS device. See bup remote server docker-compose config with CIFS-mounted backup store for more details.

version: "3.8"
services:
  bup-server:
    image: ulikoehler/bup-server:latest
    environment:
      - SSH_PORT=2022
    volumes:
      - ./dotssh:/home/bup/.ssh
      - ./dropbear:/etc/dropbear
      # BUP backup storage:
      - ./bup:/bup
    ports:
      - 2022:2022
    restart: unless-stopped

3. Startup the container

At this point, you can use docker-compose up to start the service. However, it's typically easier to just use TechOverflow's script to generate a systemd service that autostarts the service on boot (and starts it right now):

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

When you run docker-compose logs -f, you should see a greeting message from dropbear such as

bupremotedocker-bup-remote-1  | [1] Dec 25 14:58:20 Not backgrounding

4. Create a .ssh/config entry on the client

You need to do this for each client.

Copy id_bup (which we generated earlier) to each client into a folder such as ~/.ssh. Where you copy it does not matter, but the user who will be running the backups later will need access to this file. Also, for that user you need to create a .ssh/config entry telling SSH how to access the bup server:

Host BupServer
    HostName 10.1.2.3
    User bup
    Port 2022
    IdentityFile ~/.ssh/id_bup

Set HostName to the IP or domain name of the host running the docker container.
Set User to bup. This is hard-coded in the container.
Set Port to whatever port you mapped out in docker-compose.yml. If the ports: line in docker-compose.yml is - 1234:2022, the correct value for Port in .ssh/config is 1234.
Set IdentityFile to wherever id_bup is located (see above).
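
If you haven't copied the key to the client yet, this could for example be done with scp (a sketch; myclient is a placeholder for your client's hostname):

scp id_bup myclient:~/.ssh/id_bup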

Now you need to connect to the bup server container once for each client. This is both to spot issues with your SSH configuration (such as wrong permissions on the id_bup file) and to save the SSH host key of the container as known key:

ssh BupServer

If this prompts you for a password, something is wrong in your configuration – possibly you are connecting to the wrong SSH host, since the bup server container has password authentication disabled.

5. Connect using bup

Every client will need bup to be installed. See How to install bup on Ubuntu 22.04 and similar posts.

You have to understand that bup will need both a local directory (called the index) and a directory on the bup server, called the destination directory. You have to use one index directory and one destination directory per backup project. What you define as a backup project is up to you, but I strongly recommend using one backup project per application you back up, in order to have data locality: backups from one application belong together.

By convention, the /bup directory on the server (i.e. container) is dedicated for this purpose (and mapped to a directory or volume outside of the container).

On the local host, I recommend using either /var/lib/bup/project.index.bup or ~/bup/project.index.bup and letting bup auto-create project-specific directories from there. If you use a dedicated user on the client to run backups, you can also place the indexes in that user's home directory. If the index is lost, this is not an issue as long as the backup works (it will just take a few minutes to check all files again). You should not backup the index directory.

There is no requirement for the .bup or .index.bup suffix, but if you use it, it will allow you to quickly discern what a directory is and whether it is important or not.

In order to use bup, you first need to initialize the directories. You can do this multiple times without any issue, so I do it at the start of each of my backup scripts.

bup -d ~/buptest.index.bup init -r BupServer:/bup/buptest.bup

After that, you can start backing up. Generally this is done by first running bup index (this operation is local-only) and then running bup save (which saves the backup on the bup remote server).

bup -d ~/buptest.index.bup index . && bup save -r BupServer:/bup/buptest.bup -9 --strip-path $(pwd) -n mybackup .

Some parameters demand further explanation:

  • -9: Maximum compression. bup is so fast that it hardly makes a difference but it saves a ton of disk space especially for text-like data.
  • --strip-path $(pwd): If you back up a directory /home/uli/Documents/ containing a file /home/uli/Documents/Letters/Myletter.txt, this makes bup save the file under the name Letters/Myletter.txt instead of /home/uli/Documents/Letters/Myletter.txt.
  • -n mybackup: The name of this backup. This allows you to separate different backups in a single repository.

6. Let’s restore!

You might want to say: hopefully I'll never need to restore. WRONG. You need to restore right now, and you need to restore regularly, to verify that restoring will actually work if you ever need to recover data.

In order to do this, we'll first need to get access to the folder where the backups are stored. This is typically some kind of Linux server anyway, so just install bup there. In our example above, the directory we'll work with is called buptest.bup.

There are two convenient ways to view bup backups:

  1. Use bup web and open your browser at http://localhost:8080 to view the backup data (including history):
    bup -d buptest.bup web
  2. Use bup fuse to mount the entire tree including history to a directory such as /tmp/buptest:
    mkdir -p  /tmp/buptest && bup -d buptest.bup fuse /tmp/buptest
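
If you want to restore files back to disk instead of just browsing them, you can also use bup restore; a minimal sketch (mybackup is the backup name passed to -n above, /tmp/restored is an arbitrary target directory):

bup -d buptest.bup restore -C /tmp/restored /mybackup/latest/.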


Posted by Uli Köhler in bup, Container, Docker

Minimal SSH server on Docker container using dropbear

This example Dockerfile runs a dropbear SSH daemon on Alpine Linux. It creates a system user called myuser and only allows login for that specific user.

FROM alpine:3.17
WORKDIR /app
ENV SSHUSER=myuser
# The SSH user to create
RUN apk --no-cache add dropbear &&\
    mkdir -p /home/$SSHUSER/.ssh &&\
    adduser -s /bin/sh -D $SSHUSER --home /home/$SSHUSER &&\
    chown -R $SSHUSER:$SSHUSER /home/$SSHUSER

CMD ["/bin/sh", "-c", "/usr/sbin/dropbear -RFEwgsjk -G ${SSHUSER} -p 22"]

Change the username to your liking.

Build like this:

docker build -t sshtest .

Starting the container

You can run it like this – remember to mount /etc/dropbear to a volume or local directory (for persisting the host keys) and the user's .ssh directory (for storing the authorized keys):

docker run -v $(pwd)/dropbear:/etc/dropbear -v $(pwd)/dotssh:/home/myuser/.ssh -it sshtest

Dropbear options

The dropbear options -RFEwgsjk are:

  • -R: Create hostkeys as required
  • -F: Don’t fork into background
  • -E: Log to stderr rather than syslog
  • -w: Disallow root logins
  • -g: Disable password logins for root
  • -s: Disable password logins
  • -j: Disable local port forwarding
  • -k: Disable remote port forwarding

Setting up public key authentication

First, generate a key pair using

ssh-keygen -t ed25519 -f id_dropbear -N ""

We assume that you have mounted the user’s home .ssh directory in ./dotssh (as in our example, see Starting the container above). You can then copy the pubkey that is generated by ssh-keygen – which is saved in id_dropbear.pub – to the authorized_keys file in the Dropbear SSH directory:

cat id_dropbear.pub | sudo tee -a ./dotssh/authorized_keys

The sudo (in sudo tee) is only required because the dotssh directory is owned by another user.

Connecting to the container

First, you need to find the container's IP address using the method outlined in How to list just container names & IP address(es) of all Docker containers. In our example, this IP address is 10.254.1.4. You can then connect to the container using the public key:

ssh -i id_dropbear myuser@10.254.1.4


Posted by Uli Köhler in Docker

How to list just container names & IP address(es) of all Docker containers

Use

docker ps -q | xargs -n1 docker inspect --format '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

Example output:

$ docker ps -q | xargs -n1 docker inspect --format '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
/flamboyant_cohen 10.254.1.6
/flamboyant_cohen 10.254.1.6
/lucid_shtern 10.254.1.4

This solution was inspired by GitHub user zeroows on GitHub Gists.


Posted by Uli Köhler in Docker

How to list just container names of all Docker containers

Use

docker ps --format "{{.Names}}"

Example output:

$ docker ps --format "{{.Names}}"
flamboyant_cohen
lucid_shtern
gitlab-techoverflow_gitlab_1


Posted by Uli Köhler in Docker

How to list just container ID of all Docker containers

Use

docker ps -q

Example output:

$ docker ps -q
29f721fa3124
39a240769ae8
cae90fe55b9a
90580dc4a6d2
348c24768864
5e64779be4f0
78874ae92a8e
92650c527106
948a8718050f
7aad5a210e3c


Posted by Uli Köhler in Docker

How to correctly use apk in Dockerfile

In Dockerfiles you should always use apk with --no-cache to prevent extra files (the package index cache) from being deposited in the image, for example:

FROM alpine:3.17
RUN apk add --no-cache python3-dev


Posted by Uli Köhler in Alpine Linux, Docker

How to specify which docker image to use in .gitlab-ci.yml

The following .gitlab-ci.yml will build a native executable project using cmake with a custom docker image:

stages:
  - build

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4

In this example, we have only one stage – if you have multiple stages, you can specify different images for each of them.
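
For example, a hypothetical two-stage pipeline could use a different image for each stage (the test image and test script below are just placeholders):

stages:
  - build
  - test

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4

testmyexe:
  stage: test
  image: 'ubuntu:22.04'
  script:
    - ./run-tests.sh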

Posted by Uli Köhler in Docker, git, GitLab

How to configure SMTP E-Mail for InvenTree (docker/docker-compose)

You can easily configure SMTP email for InvenTree by adding the following config to your .env file (I’m using the docker production config):

INVENTREE_EMAIL_HOST=smtp.mydomain.com
INVENTREE_EMAIL_USERNAME=noreply@mydomain.com
INVENTREE_EMAIL_PASSWORD=cheen1zaiCh4yaithaecieng2jazey
INVENTREE_EMAIL_TLS=true
INVENTREE_EMAIL_SENDER=noreply@mydomain.com

Even after setting up InvenTree, it is sufficient to just add this config to the .env file and restart the server.
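
A sketch of how to apply the change when using docker-compose (run in the directory containing docker-compose.yml):

docker-compose down && docker-compose up -d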

Posted by Uli Köhler in Docker, InvenTree

How to remove container_name: … statements from docker-compose.yml automatically using sed

This command will remove all container_name: statements from a docker-compose.yml config file:

sed -z -i -e 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml

Example input:

services:
    inventree-proxy:
        container_name: inventree-proxy
        image: nginx:stable
        depends_on:
            - inventree-server
[...]


Example output:

services:
    inventree-proxy:
        image: nginx:stable
        depends_on:
            - inventree-server
[...]


Posted by Uli Köhler in Docker, Linux

How to fix docker emqx_ctl Node ’[email protected]’ not responding to pings.

Problem:

When trying to run emqx_ctl in a dockerized emqx setup using a command like

docker-compose exec emqx ./bin/emqx status

you see an error message like

Node '[email protected]' not responding to pings.
/opt/emqx/bin/emqx: line 41: die: command not found

Solution:

The problem here is that emqx_ctl is trying to connect to the IP address that node1.emqx.mydomain.com resolves to, but that address does not point to the docker container (it is probably the public IP address of your server).

The solution here is to create a network alias within docker/docker-compose so that the Docker DNS system resolves node1.emqx.mydomain.com to the internal IP address of the container.

For example, in docker-compose, you can create your network using

networks:
  emqx:
    driver: bridge

and then configure the alias for the container using

services:
  emqx:
    image: emqx:4.4.4
    environment:
      - "EMQX_NAME=emqx"
      - "EMQX_HOST=node1.emqx.mydomain.com"
      - "EMQX_LOADED_PLUGINS=emqx_recon,emqx_retainer,emqx_management,emqx_dashboard"
    ports:
      - 18083:18083
      - 1883:1883
    volumes:
      - ./emqx_data:/opt/emqx/data
      - ./emqx_log:/opt/emqx/log
    networks:
      emqx:
        aliases:
          - "node1.emqx.mydomain.com"


Posted by Uli Köhler in Container, Docker, EMQX, MQTT

How to setup Netmaker using docker-compose in under 15 minutes

In this post, we'll build a simple setup for running Netmaker with a PostgreSQL backend using docker-compose, with an external Traefik reverse proxy.

First, create a directory for the Netmaker files to reside in, e.g.:

mkdir /opt/netmaker

cd to that directory:

cd /opt/netmaker

At this point, we'll download the Mosquitto config from GitHub and open the required ports in the ufw firewall. I reserved the port range 51821:52821 in order to facilitate having a lot of networks (more than the default 9). My traefik config is the one from Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges, which allows having a single *.netmaker.mydomain.com Let's Encrypt wildcard certificate.
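
A sketch of how this could look; the mosquitto.conf URL points to the file in the Netmaker repository (verify the current path there), and ufw is assumed as the firewall:

wget -O mosquitto.conf https://raw.githubusercontent.com/gravitl/netmaker/master/docker/mosquitto.conf
sudo ufw allow 51821:52821/udp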

Now, create docker-compose.yml in that directory

version: "3.4"

services:
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=netmaker
      - POSTGRES_USER=netmaker
  netmaker: # The Primary Server for running Netmaker
    image: gravitl/netmaker:v0.14.5
    depends_on:
      - postgres
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv6.conf.all.forwarding=1
    restart: always
    volumes: # Volume mounts necessary for sql, coredns, and mqtt
      - ./netmaker_dnsconfig:/root/config/dnsconfig
      - ./netmaker_sqldata:/root/data
      - ./netmaker_shared_certs:/etc/netmaker
    environment: # Necessary capabilities to set iptables when running in container
      SERVER_NAME: "broker.${NETMAKER_BASE_DOMAIN}" # The domain/host IP indicating the mq broker address
      SERVER_HOST: "${NETMAKER_PUBLIC_IP}" # Set to public IP of machine.
      SERVER_HTTP_HOST: "api.${NETMAKER_BASE_DOMAIN}" # Overrides SERVER_HOST if set. Useful for making HTTP available via different interfaces/networks.
      SERVER_API_CONN_STRING: "api.${NETMAKER_BASE_DOMAIN}:443"
      # COREDNS_ADDR: "${NETMAKER_PUBLIC_IP}" # Address of the CoreDNS server. Defaults to SERVER_HOST
      DNS_MODE: "off" # Enables DNS Mode, meaning all nodes will set hosts file for private dns settings.
      API_PORT: "8081" # The HTTP API port for Netmaker. Used for API calls / communication from front end. If changed, need to change port of BACKEND_URL for netmaker-ui.
      CLIENT_MODE: "on" # Deprecated. CLIENT_MODE should always be ON
      REST_BACKEND: "on" # Enables the REST backend (API running on API_PORT at SERVER_HTTP_HOST). Change to "off" to turn off.
      DISABLE_REMOTE_IP_CHECK: "off" # If turned "on", Server will not set Host based on remote IP check. This is already overridden if SERVER_HOST is set. Turned "off" by default.
      TELEMETRY: "on" # Whether or not to send telemetry data to help improve Netmaker. Switch to "off" to opt out of sending telemetry.
      RCE: "off" # Enables setting PostUp and PostDown (arbitrary commands) on nodes from the server. Off by default.
      MASTER_KEY: "${NETMAKER_MASTER_KEY}" # The admin master key for accessing the API. Change this in any production installation.
      CORS_ALLOWED_ORIGIN: "*" # The "allowed origin" for API requests. Change to restrict where API requests can come from.
      DISPLAY_KEYS: "on" # Show keys permanently in UI (until deleted) as opposed to 1-time display.
      DATABASE: "postgres" # Database to use - sqlite, postgres, or rqlite
      SQL_HOST: "postgres"
      SQL_DB: "netmaker"
      SQL_USER: "netmaker"
      SQL_PASS: "${POSTGRES_PASSWORD}"
      NODE_ID: "${SERVER_NAME}" # used for HA - identifies this server vs other servers
      MQ_HOST: "mq"  # the address of the mq server. If running from docker compose it will be "mq". Otherwise, need to input address. If using "host networking", it will find and detect the IP of the mq container.
      MQ_SERVER_PORT: "1883" # the reachable port of MQ by the server - change if internal MQ port changes (or use external port if MQ is not on the same machine)
      MQ_PORT: "443" # the reachable port of MQ - change if external MQ port changes (port on proxy, not necessarily the one exposed in docker-compose)
      HOST_NETWORK: "off" # whether or not host networking is turned on. Only turn on if configured for host networking (see docker-compose.hostnetwork.yml). Will set host-level settings like iptables.
      VERBOSITY: "1" # logging verbosity level - 1, 2, or 3
      MANAGE_IPTABLES: "on" # deprecated
      # PORT_FORWARD_SERVICES: "ssh,mq" # decide which services to port forward ("dns","ssh", or "mq")
      # this section is for OAuth
      AUTH_PROVIDER: "" # "<azure-ad|github|google|oidc>"
      CLIENT_ID: "" # "<client id of your oauth provider>"
      CLIENT_SECRET: "" # "<client secret of your oauth provider>"
      FRONTEND_URL: "" # "https://dashboard.<netmaker base domain>"
      AZURE_TENANT: "" # "<only for azure, you may optionally specify the tenant for the OAuth>"
      OIDC_ISSUER: "" # https://oidc.yourprovider.com - URL of oidc provider
    ports:
      - "51821-52821:51821-52821/udp" # wireguard ports
    expose:
      - "8081" # api port
    labels: # only for use with traefik proxy (default)
      - traefik.enable=true
      - traefik.http.routers.netmaker-api.entrypoints=websecure
      - traefik.http.routers.netmaker-api.rule=Host(`api.${NETMAKER_BASE_DOMAIN}`)
      - traefik.http.routers.netmaker-api.service=netmaker-api
      - traefik.http.services.netmaker-api.loadbalancer.server.port=8081
      - "traefik.http.routers.netmaker-api.tls.certresolver=cloudflare"
      - "traefik.http.routers.netmaker-api.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.http.routers.netmaker-api.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"
  netmaker-ui:  # The Netmaker UI Component
    container_name: netmaker-ui
    image: gravitl/netmaker-ui:v0.14.5
    depends_on:
      - netmaker
    links:
      - "netmaker:api"
    restart: always
    environment:
      BACKEND_URL: "https://api.${NETMAKER_BASE_DOMAIN}" # URL where UI will send API requests. Change based on SERVER_HOST, SERVER_HTTP_HOST, and API_PORT
    expose:
      - "80"
    labels:
      - traefik.enable=true
      - traefik.http.middlewares.nmui-security.headers.accessControlAllowOriginList=*.${NETMAKER_BASE_DOMAIN}
      - traefik.http.middlewares.nmui-security.headers.stsSeconds=31536000
      - traefik.http.middlewares.nmui-security.headers.browserXssFilter=true
      - traefik.http.middlewares.nmui-security.headers.customFrameOptionsValue=SAMEORIGIN
      - traefik.http.middlewares.nmui-security.headers.customResponseHeaders.X-Robots-Tag=none
      - traefik.http.middlewares.nmui-security.headers.customResponseHeaders.Server= # Remove the server name
      - traefik.http.routers.netmaker-ui.entrypoints=websecure
      - traefik.http.routers.netmaker-ui.middlewares=nmui-security@docker
      - traefik.http.routers.netmaker-ui.rule=Host(`dashboard.${NETMAKER_BASE_DOMAIN}`)
      - traefik.http.routers.netmaker-ui.service=netmaker-ui
      - traefik.http.services.netmaker-ui.loadbalancer.server.port=80
      - "traefik.http.routers.netmaker-ui.tls.certresolver=cloudflare"
      - "traefik.http.routers.netmaker-ui.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.http.routers.netmaker-ui.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"
  mq: # the MQTT broker for netmaker
    container_name: mq
    image: eclipse-mosquitto:2.0.11-openssl
    depends_on:
      - netmaker
    restart: unless-stopped
    volumes:
      - ./mosquitto.conf:/mosquitto/config/mosquitto.conf # need to pull conf file from github before running (under docker/mosquitto.conf)
      - ./mosquitto_data:/mosquitto/data
      - ./mosquitto_logs:/mosquitto/log
      - ./netmaker_shared_certs:/mosquitto/certs
    expose:
      - "8883"
    labels:
      - traefik.enable=true
      - traefik.tcp.routers.mqtts.rule=HostSNI(`broker.${NETMAKER_BASE_DOMAIN}`)
      - traefik.tcp.routers.mqtts.tls.passthrough=true
      - traefik.tcp.services.mqtts-svc.loadbalancer.server.port=8883
      - traefik.tcp.routers.mqtts.service=mqtts-svc
      - traefik.tcp.routers.mqtts.entrypoints=websecure
      - "traefik.tcp.routers.mqtts.tls.certresolver=cloudflare"
      - "traefik.tcp.routers.mqtts.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.tcp.routers.mqtts.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"

After that, create .env in said directory containing some info about your node:

SERVER_NAME=netmaker-01
NETMAKER_BASE_DOMAIN=netmaker.mydomain.com
NETMAKER_MASTER_KEY=geeveeBaeQuie1cie6aaz6eepahleo
NETMAKER_PUBLIC_IP=101.252.11.54
POSTGRES_PASSWORD=ahph8Jih4aesheel7id1uyo0gietai

Adjust the NETMAKER_BASE_DOMAIN and the NETMAKER_PUBLIC_IP accordingly. Also, you need to choose a random NETMAKER_MASTER_KEY and POSTGRES_PASSWORD.
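
If pwgen is installed, you can generate suitable random values like this (a sketch; any other random-string generator works as well):

pwgen 30 1 # NETMAKER_MASTER_KEY
pwgen 30 1 # POSTGRES_PASSWORD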

Now we’ll use the script from Create a systemd service for your docker-compose project in 10 seconds in order to create a systemd service to automatically run the service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This script will also automatically start the service (i.e. docker-compose up).


Posted by Uli Köhler in Container, Docker

How to setup Wekan using docker-compose in just 3 minutes

First, create a directory for the Wekan files to reside in, e.g.:

mkdir /opt/wekan

Change to that directory:

cd /opt/wekan

Now, we need to create the data directory, which needs to be owned by UID 999 in order for Wekan to store uploads:

mkdir -p wekan_data && chown -R 999:999 wekan_data

Now, create docker-compose.yml in that directory

version: '3.4'

services:
  wekandb:
    restart: always
    image: mongo:5
    command: mongod --logpath /dev/null --oplogSize 128 --quiet
    healthcheck:
      test: ["CMD", "mongo", "--quiet", "--eval", "'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'"]
      interval: 15s
      timeout: 10s
      retries: 3
      start_period: 10s
    volumes:
      - ./wekan-db:/data/db
      - ./wekan-db-dump:/dump
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro


  wekan:
    image: quay.io/wekan/wekan
    restart: always
    ports:
      - 7972:8080
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=${URL}
      - MAIL_URL=${EMAIL_URL}
      - MAIL_FROM=${EMAIL_FROM}
      - WITH_API=true
      - WRITABLE_PATH=/data
    volumes:
      - ./wekan_data:/data
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - wekandb

After that, create .env in said directory containing some info about your node:

URL=https://wekan.mydomain.com
EMAIL_URL=smtp://noreply%40mydomain.com:[email protected]:25/
EMAIL_FROM='My Wekan <noreply@mydomain.com>'

Now we’ll use the script from Create a systemd service for your docker-compose project in 10 seconds in order to create a systemd service to automatically run the service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This script will also automatically start the service (i.e. docker-compose up).

Now you can set up your reverse proxy to point your domain – e.g. wekan.techoverflow.net – to port 7972 (or change that port in docker-compose.yml). Which reverse proxy you use doesn't matter all too much; I use both nginx and traefik. I will cover the configuration for these reverse proxies in future posts.

Once you can access Wekan, you can register as a new user. The first user will be an admin and can also disable registration using the Web UI.

Posted by Uli Köhler in Container, Docker

How to fix wekan Path “/data/attachments” is not writable!

Problem:

While starting your dockerized wekan setup, you see an error message like this:

wekan_1    | errorClass [Error]: [FilesCollection.attachments] Path "/data/attachments" is not writable! [401]
wekan_1    |     at new FilesCollection (packages/ostrio:files/server.js:354:15)
wekan_1    |     at module (models/attachments.js:52:15)
wekan_1    |     at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |     at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |     at Module.moduleLink [as link] (/build/programs/server/npm/node_modules/meteor/modules/node_modules/@meteorjs/reify/lib/runtime/index.js:52:22)
wekan_1    |     at module (server/publications/attachments.js:1:24)
wekan_1    |     at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |     at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |     at require (packages/modules-runtime.js:258:21)
wekan_1    |     at /build/programs/server/app/app.js:162362:1
wekan_1    |     at /build/programs/server/boot.js:401:38
wekan_1    |     at Array.forEach (<anonymous>)
wekan_1    |     at /build/programs/server/boot.js:226:21
wekan_1    |     at /build/programs/server/boot.js:464:7
wekan_1    |     at Function.run (/build/programs/server/profile.js:280:14)
wekan_1    |     at /build/programs/server/boot.js:463:13 {
wekan_1    |   isClientSafe: true,
wekan_1    |   error: 401,
wekan_1    |   reason: '[FilesCollection.attachments] Path "/data/attachments" is not writable!',
wekan_1    |   details: Error: EACCES: permission denied, mkdir '/data/attachments'
wekan_1    |       at Object.mkdirSync (fs.js:1014:3)
wekan_1    |       at new FilesCollection (packages/ostrio:files/server.js:348:10)
wekan_1    |       at module (models/attachments.js:52:15)
wekan_1    |       at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |       at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |       at Module.moduleLink [as link] (/build/programs/server/npm/node_modules/meteor/modules/node_modules/@meteorjs/reify/lib/runtime/index.js:52:22)
wekan_1    |       at module (server/publications/attachments.js:1:24)
wekan_1    |       at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |       at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |       at require (packages/modules-runtime.js:258:21)
wekan_1    |       at /build/programs/server/app/app.js:162362:1
wekan_1    |       at /build/programs/server/boot.js:401:38
wekan_1    |       at Array.forEach (<anonymous>)
wekan_1    |       at /build/programs/server/boot.js:226:21
wekan_1    |       at /build/programs/server/boot.js:464:7
wekan_1    |       at Function.run (/build/programs/server/profile.js:280:14)
wekan_1    |       at /build/programs/server/boot.js:463:13 {
wekan_1    |     errno: -13,
wekan_1    |     syscall: 'mkdir',
wekan_1    |     code: 'EACCES',
wekan_1    |     path: '/data/attachments'
wekan_1    |   },
wekan_1    |   errorType: 'Meteor.Error'
wekan_1    | }

Solution:

You have mounted a local directory as the Wekan data directory, for example like this in docker-compose.yml:

wekan:
  image: quay.io/wekan/wekan
  # ...
  environment:
    # ...
    - WRITABLE_PATH=/data
  volumes:
    - ./wekan_data:/data

But this directory does not have the correct permissions set. You can fix it using this command on the directory (wekan_data in this example):

sudo chown -R 999:999 wekan_data

After that, restart wekan and the issue should be fixed.

Posted by Uli Köhler in Container, Docker, Networking

How to fix docker MariaDB Incorrect definition of table mysql.column_stats: expected column ‘hist_type’ at position 9…

Problem:

In the log of your MariaDB docker container, you see messages like

mariadb_1    | 2022-06-07 20:24:00 283 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'hist_type' at position 9 to have type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB','JSON_HB'), found type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB').
mariadb_1    | 2022-06-07 20:24:00 283 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'histogram' at position 10 to have type longblob, found type varbinary(255).

Solution:

This happens after a version upgrade of MariaDB.

In order to fix it, run the upgrade by running mysql_upgrade in the container:

docker-compose exec mariadb mysql_upgrade -uroot -pchopahl0aib4eiphuk5bao3shiVoow

where chopahl0aib4eiphuk5bao3shiVoow is your MariaDB root password.

If you have your password in .env as we recommend, you can use this command:

source .env && docker-compose exec mariadb mysql_upgrade -uroot -p${MARIADB_ROOT_PASSWORD}


Posted by Uli Köhler in Docker