python3 -c "import urllib.request; urllib.request.urlretrieve('https://bootstrap.pypa.io/get-pip.py', 'get-pip.py')" python3 get-pip.py --break-system-packages
python3 -c "import urllib.request; urllib.request.urlretrieve('https://bootstrap.pypa.io/get-pip.py', 'get-pip.py')" python3 get-pip.py --break-system-packages
Run the following command in the directory where docker-compose.yml resides:
docker-compose -f $(find . -name 'docker-compose.yml' -type f) ps -q | xargs -I {} docker inspect --format '{{.Name}}: {{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' {}
/myservice-server-1: 3b533d2074e7230426adf0d269b399d356066130719ffd3ef68d6d492f3757f6
/myservice-ui-1: 3b533d2074e7230426adf0d269b399d356066130719ffd3ef68d6d492f3757f6
docker-compose -f $(find . -name 'docker-compose.yml' -type f) ps -q: This part locates the docker-compose.yml file in the current directory and its subdirectories and runs docker-compose ps -q on it. This command lists the IDs of the running containers for each service defined in the Docker Compose file.
xargs -I {} docker inspect --format '{{.Name}}: {{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' {}: This part takes the container IDs output by the previous step and runs docker inspect on each of them. Using a custom format, it extracts each container's name and the IDs of the networks it is attached to, producing output of the form "ServiceName: NetworkID".
When trying to update a plugin or perform a similar action using the wordpress:cli docker image (service wpcli in our example), for example using a command such as
docker-compose exec wpcli wp plugin update google-sitemap-generator
you see an error message such as
Warning: Failed to create directory. "/var/www/html/wp-content/upgrade/google-sitemap-generator.4.1.16"
+--------------------------+-------------+-------------+--------+
| name                     | old_version | new_version | status |
+--------------------------+-------------+-------------+--------+
| google-sitemap-generator | 4.1.13      | 4.1.16      | Error  |
+--------------------------+-------------+-------------+--------+
Error: No plugins updated (1 failed).
This error occurs because the wordpress image (without :cli!) is based on Debian whereas the wordpress:cli image is based on Alpine Linux. Debian uses the UID 33 for the www-data user whereas Alpine Linux uses 83. To fix the permission problem, you need to force the cli image to use UID 33 (this is documented on the wordpress docker page):
docker-compose exec -e HOME=/tmp --user 33:33 wpcli wp plugin update google-sitemap-generator
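If you run wp-cli commands frequently, a shell alias saves you from typing the UID override every time. A minimal sketch, assuming your compose service is called wpcli as above (the alias name is arbitrary):
# Define the alias once in your interactive shell or ~/.bashrc
alias wp-docker='docker-compose exec -e HOME=/tmp --user 33:33 wpcli wp'
# Usage example:
wp-docker plugin update google-sitemap-generator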
This setup runs syncthing via docker-compose with Traefik as a reverse proxy. HTTP basic auth is used to prevent unauthorized access to the syncthing web UI. Alternatively, you can use syncthing's built-in password protection.
See Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges for my HTTPS setup for Traefik.
version: "3" services: syncthing: image: syncthing/syncthing hostname: Syncthing-Myserver environment: - PUID=1000 - PGID=1000 volumes: - ./syncthing_data:/var/syncthing ports: # NOTE: 8384 not forwarded, this is handled by traefik - "22000:22000" - "21027:21027/udp" restart: unless-stopped labels: - "traefik.enable=true" - "traefik.http.routers.syncthing.rule=Host(`syncthing.myserver.net`)" - "traefik.http.routers.syncthing.entrypoints=websecure" - "traefik.http.routers.syncthing.tls.certresolver=cloudflare" - "traefik.http.routers.syncthing.tls.domains[0].main=myserver.net" - "traefik.http.routers.syncthing.tls.domains[0].sans=*.myserver.net" - "traefik.http.services.syncthing.loadbalancer.server.port=8384" - "traefik.http.routers.syncthing.middlewares=syncthing-auth" # Auth (this is shared with the server). NOTE: generate with "htpasswd -n admin" and REPLACE EVERY "$" by "$$" IN THE OUTPUT! - "traefik.http.middlewares.syncthing-auth.basicauth.users=admin:$$apr1$$ehr8oqEZ$$tHoOVLG19oHdUe81IeePo1 "
When you use an Ubuntu-based docker image such as
FROM ubuntu:22.04
/etc/services is not installed by default.
However, you can easily install it by installing the netbase apt package:
# netbase provides /etc/services
RUN apt update && apt install -y netbase && rm -rf /var/lib/apt/lists/*
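As a quick check, you can verify whether /etc/services is present in a given image, for example:
docker run --rm ubuntu:22.04 ls -l /etc/services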
I tested with node:20 and node:20-alpine and can confirm that:
node:20 (i.e. Debian bookworm based) has a working ssh-keygen
node:20-alpine does not have ssh-keygen
On Alpine, you can install ssh-keygen using apk. Typically, this means you have to build your own docker image based on node:20-alpine. In your Dockerfile, add
RUN apk update && \
    apk add --no-cache \
        openssh-keygen
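To verify that ssh-keygen is now available, you can build and run the image – a sketch, assuming you tag the image my-node:
docker build -t my-node .
# Generate a throwaway key inside the container to confirm ssh-keygen works
docker run --rm my-node ssh-keygen -t ed25519 -f /tmp/testkey -N ""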
docker-compose based setups with locally mounted volumes have very few common failure modes in practice. The most important ones are docker system upgrades stopping all the services, and duplicate networks with the same name preventing a service from starting. Sometimes, docker-compose does not delete the old network properly, possibly due to unclean or unfinished shutdown procedures.
This will result in log messages such as
May 22 21:52:15 myserver docker-compose[2384642]: Removing network etherpad-mydomain_default
May 22 21:52:15 myserver docker-compose[2384642]: network etherpad-mydomain_default is ambiguous (2 matches found based on name)
May 22 21:52:16 myserver systemd[1]: etherpad-mydomain.service: Control process exited, code=exited, status=1/FAILURE
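If only a single project is affected, you can also fix this manually – a sketch using the network name from the log above: list the networks matching that name, then remove the stale duplicate by its ID:
docker network ls --filter name=etherpad-mydomain_default
docker network rm <ID-of-the-duplicate-network>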
This simple script will find all duplicate network names and delete all but the first network for each name.
#!/usr/bin/env python3
import subprocess
import json

already_seen_networks = set()

output = subprocess.check_output(["docker", "network", "ls", "--format", "{{json .}}"])
for line in output.decode("utf-8").split("\n"):
    line = line.strip()
    if not line:
        continue
    obj = json.loads(line)
    id = obj["ID"]
    name = obj["Name"]
    if name in already_seen_networks:
        print(f"Detected duplicate network {name}. Removing duplicate network {id}...")
        subprocess.check_output(["docker", "network", "rm", id])
    already_seen_networks.add(name)
Just call this script without any arguments
python docker-remove-duplicate-networks.py
Internally, the script parses the output of
docker network ls --format '{{json .}}'
which prints one JSON object per network, for example:
{"CreatedAt":"2023-05-12 01:21:58.769840402 +0200 CEST","Driver":"bridge","ID":"649e42effc83","IPv6":"false","Internal":"false","Labels":"com.docker.compose.version=1.29.2,com.docker.compose.network=default,com.docker.compose.project=vaultwarden-mydomain","Name":"vaultwarden-mydomain_default","Scope":"local"}
The following script is an automated installation script for InvenTree that fetches the current docker-compose.yml and other configs from GitHub, modifies them so that only local directories are used for storage and then sets up InvenTree.
First, create a directory such as /opt/inventree-mydomain and run the script inside it. I recommend choosing a unique directory name rather than just inventree so that multiple instances can be kept apart.
#!/bin/sh
wget -O nginx.prod.conf https://github.com/inventree/InvenTree/raw/master/docker/production/nginx.prod.conf
wget -O docker-compose.yml https://github.com/inventree/InvenTree/raw/master/docker/production/docker-compose.yml
wget -O .env https://github.com/inventree/InvenTree/raw/master/docker/production/.env
sed -i -e 's/#INVENTREE_DB_USER=pguser/INVENTREE_DB_USER=inventree/g' .env
sed -i -e "s/#INVENTREE_DB_PASSWORD=pgpassword/INVENTREE_DB_PASSWORD=$(pwgen 30 1)/g" .env
sed -i -e "s/INVENTREE_WEB_PORT=1337/INVENTREE_WEB_PORT=$(shuf -i 1024-65535 -n 1)/g" .env
sed -i -e "s/#INVENTREE_ADMIN_USER=/INVENTREE_ADMIN_USER=admin/g" .env
sed -i -e "s/#INVENTREE_ADMIN_PASSWORD=/INVENTREE_ADMIN_PASSWORD=$(pwgen 30 1)/g" .env
sed -i -e "s/#INVENTREE_ADMIN_EMAIL=/[email protected]/g" .env
sed -i -e 's/COMPOSE_PROJECT_NAME=inventree-production//g' .env
# Enable cache
sed -i -e "s/#INVENTREE_CACHE_HOST=inventree-cache/INVENTREE_CACHE_HOST=inventree-cache/g" .env
sed -i -e "s/#INVENTREE_CACHE_PORT=6379/INVENTREE_CACHE_PORT=6379/g" .env
# Use direct directory mapping to avoid mounting issues
sed -i -e "s%- inventree_data:%- $(pwd)/inventree_data:%g" docker-compose.yml
# ... now we can remove the volume declarations from docker-compose.yml
sed -i -e '/^volumes:/,$d' docker-compose.yml
sed -z -i -e 's#profiles:\s*- redis\s*##g' docker-compose.yml # Make redis start always, even without docker-compose --profile redis
# Use standard docker-compose directory naming to facilitate multiple parallel installations
sed -z -i -e 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml # Remove container_name: ... statements
# Create data directory which is bound to the docker volume
mkdir -p inventree_data
# Initialize database
docker-compose up -d inventree-cache inventree-db # database initialization needs cache
docker-compose run inventree-server invoke update
After that, you can check .env for the randomly generated INVENTREE_ADMIN_PASSWORD and INVENTREE_WEB_PORT.
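For example, you can display both values like this:
grep -E 'INVENTREE_ADMIN_PASSWORD|INVENTREE_WEB_PORT' .env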
Now you can enable autostart & start the service using systemd. For more details, see our post Create a systemd service for your docker-compose project in 10 seconds:
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
Don’t forget to configure your reverse proxy to point to InvenTree.
This will get the IP address of a running docker-compose container for the mongo service.
docker inspect $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].ID') --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
When you use this in shell scripts, it’s often convenient to store the IP address in a variable:
export MONGO_IP=$(docker inspect $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].ID') --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')
which you can then use as $MONGO_IP.
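For example, assuming mongosh is installed on the host and MongoDB listens on its default port, you could connect like this:
mongosh "mongodb://$MONGO_IP:27017"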
Let's assume the directory where your docker-compose.yml is located is called myservice.
If you have, for example, a docker-compose.yml that declares a service mongo running MongoDB, docker-compose will call the container mongo or mongo-1. However, docker itself will call that container myservice-mongo-1.
In order to find out the actual docker name of your container – assuming the container is running – use the following code:
docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name'
This uses docker-compose ps to list the running containers, exporting some information as JSON, for example:
[{ "ID": "2d68b1c1625dbfb41e05f55af0a333b5700332112c6c7551f78afe27b1dfc7ad", "Name": "production-mongo-1", "Command": "docker-entrypoint.sh mongod", "Project": "production", "Service": "mongo", "State": "running", "Health": "", "ExitCode": 0, "Publishers": [ { "URL": "", "TargetPort": 27017, "PublishedPort": 0, "Protocol": "tcp" } ] }]
Then we use jq (a command line JSON processor) to a) select only the entry in the list of running containers whose Service attribute equals mongo, and b) take the first one using [0] and get the Name attribute, which stores the name of the container.
$ docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name'
myservice-mongo-1
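In shell scripts it is often handy to store the container name in a variable, for example to view its logs:
export MONGO_CONTAINER=$(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name')
docker logs --tail 50 "$MONGO_CONTAINER"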
In our previous post How to setup a "bup remote" server in 5 minutes using docker-compose we outlined how to set up your own bup remote server using docker-compose. Read that post before this one!
This post provides an alternate docker-compose.yml config file that mounts a remote CIFS directory as the /bup backup directory instead of using a local directory. This is most useful when using a NAS and a separate bup server.
For this example, we'll mount the CIFS share //10.1.2.3/bup-backups with user cifsuser and password pheT8Eigho.
Note: For performance reasons, the CIFS server (NAS) and the bup server should be locally connected, not via the internet.
# Mount the backup volume using CIFS
# NOTE: We recommend to not use a storage mounted over the internet
# for performance reasons. Instead, deploy a bup remote server locally.
volumes:
  bup-backups:
    driver_opts:
      type: cifs
      o: "username=cifsuser,password=pheT8Eigho,uid=1111,gid=1111"
      device: "//10.1.2.3/bup-backups"

version: "3.8"
services:
  bup-server:
    image: ulikoehler/bup-server:latest
    environment:
      - SSH_PORT=2022
    volumes:
      - ./dotssh:/home/bup/.ssh
      - ./dropbear:/etc/dropbear
      # BUP backup storage: CIFS mounted
      - bup-backups:/bup
    ports:
      - 2022:2022
    restart: unless-stopped
The bup backup system implements remote backup on a server by connecting via SSH to said server, starting a bup process there and then communicating via the SSH tunnel.
In this post, we'll set up a server for bup remote backup based on our ulikoehler/bup-server image (which contains both bup and dropbear as an SSH server).
I recommend doing this in /opt/bup, but in principle, any directory will do.
mkdir -p dotssh bup
# Generate new elliptic curve public key
ssh-keygen -t ed25519 -f id_bup -N ""
# Add SSH key to list of authorized keys
cat id_bup.pub | sudo tee -a dotssh/authorized_keys
# Fix permissions so that dropbear does not complain
sudo chown -R 1111:1111 bup
sudo chmod 0600 dotssh/authorized_keys
sudo chmod 0700 dotssh
1111 is the user ID of the bup user in the container.
docker-compose.yml
Note: This docker-compose.yml uses a local backup directory – you can also mount a CIFS directory from e.g. a NAS device. See bup remote server docker-compose config with CIFS-mounted backup store for more details.
version: "3.8" services: bup-server: image: ulikoehler/bup-server:latest environment: - SSH_PORT=2022 volumes: - ./dotssh:/home/bup/.ssh - ./dropbear:/etc/dropbear # BUP backup storage: - ./bup:/bup ports: - 2022:2022 restart: unless-stopped
At this point, you can use docker-compose up to start up the service. However, it's typically easier to just use TechOverflow's script to generate a systemd service that autostarts the service on boot (and starts it right now):
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
When you run docker-compose logs -f, you should see a greeting message from dropbear such as
bupremotedocker-bup-remote-1 | [1] Dec 25 14:58:20 Not backgrounding
.ssh/config entry on the client
You need to do this for each client.
Copy id_bup (which we generated earlier) to each client into a folder such as ~/.ssh. Where you copy it does not matter, but the user who will be running the backups later will need access to this file. Also, for that user you need to create a .ssh/config entry telling SSH how to access the bup server:
Host BupServer
    HostName 10.1.2.3
    User bup
    Port 2022
    IdentityFile ~/.ssh/id_bup
Set HostName to the IP or domain name of the host running the docker container.
Set User to bup. This is hard-coded in the container.
Set Port to whatever port you mapped in docker-compose.yml. If the ports: line in docker-compose.yml is - 1234:2022, the correct value for Port in .ssh/config is 1234.
Set IdentityFile to wherever id_bup is located (see above).
Now you need to connect to the bup server container once from each client. This is both to spot issues with your SSH configuration (such as wrong permissions on the id_bup file) and to save the SSH host key of the container as a known key:
ssh BupServer
If this prompts you for a password, something is wrong in your configuration – possibly you are connecting to the wrong SSH host, since the bup server container has password authentication disabled.
Every client will need bup to be installed. See How to install bup on Ubuntu 22.04 and similar posts.
You have to understand that bup needs both a local directory (called the index) and a directory on the bup server, called the destination directory. You have to use one index directory and one destination directory per backup project. What you define as a backup project is up to you, but I strongly recommend using one backup project per application you back up, in order to have data locality: backups from one application belong together.
By convention, the /bup directory on the server (i.e. in the container) is dedicated for this purpose (and mapped to a directory or volume outside of the container).
On the local host, I recommend using either /var/lib/bup/project.index.bup or ~/bup/project.index.bup and letting bup auto-create project-specific directories from there. If you use a special user on the client to do backups, you can also place the indexes in that user's home directory. If the index is lost, this is not an issue as long as the backup works (it will just take a few minutes to check all files again). You should not back up the index directory.
There is no requirement for the .bup or .index.bup suffix, but if you use it, it will allow you to quickly discern what a directory is and whether it is important or not.
In order to use bup, you first need to initialize the directories. You can do this multiple times without any issue, so I do it at the start of each of my backup scripts.
bup -d ~/buptest.index.bup init -r BupServer:/bup/buptest.bup
After that, you can start backing up. Generally this is done by first running bup index (this operation is local-only) and then running bup save (which saves the backup on the bup remote server).
bup -d ~/buptest.index.bup index . && bup save -r BupServer:/bup/buptest.bup -9 --strip-path $(pwd) -n mybackup .
Some parameters demand further explanation:
-9 : Maximum compression. bup is so fast that it hardly makes a difference, but it saves a ton of disk space, especially for text-like data.
--strip-path $(pwd) : If you back up a directory /home/uli/Documents/ containing a file /home/uli/Documents/Letters/Myletter.txt, this makes bup save the backup of said file under the name Letters/Myletter.txt instead of /home/uli/Documents/Letters/Myletter.txt.
-n mybackup : The name of this backup. This allows you to separate different backups in a single repository.
You might want to say hopefully I'll never need to restore. WRONG. You need to restore right now, and you need to restore regularly, as a test that if you actually need to recover data by restoring, it will actually work.
In order to do this, we'll first need to get access to the folder where the destination directory is stored. This is typically stored on some kind of Linux server anyway, so just install bup there. In our example above, the directory we'll work with is called buptest.bup.
There are two convenient ways to view bup backups:
bup web : Run the following command and open your browser at http://localhost:8080 to view the backup data (including history):
bup -d buptest.bup web
bup fuse : Use this to mount the entire tree including history to a directory such as /tmp/buptest:
mkdir -p /tmp/buptest && bup -d buptest.bup fuse /tmp/buptest
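Alternatively – a sketch, assuming the repository and backup name from the examples above – you can copy files back out of the backup using bup restore:
# Restore the latest state of the "mybackup" backup into /tmp/restored
bup -d buptest.bup restore -C /tmp/restored /mybackup/latest/.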
This example Dockerfile runs a dropbear SSH daemon on Alpine Linux. It creates a system user called myuser and only allows login for that specific user.
FROM alpine:3.17
WORKDIR /app
# The SSH user to create
ENV SSHUSER=myuser
RUN apk --no-cache add dropbear &&\
    mkdir -p /home/$SSHUSER/.ssh &&\
    adduser -s /bin/sh -D $SSHUSER --home /home/$SSHUSER &&\
    chown -R $SSHUSER:$SSHUSER /home/$SSHUSER
CMD ["/bin/sh", "-c", "/usr/sbin/dropbear -RFEwgsjk -G ${SSHUSER} -p 22"]
Change the username to your liking.
Build like this:
docker build -t sshtest .
You can run it like this – remember to mount /etc/dropbear to a volume or local directory, both for persisting host key files and for storing authorized key files:
docker run -v $(pwd)/dropbear:/etc/dropbear -v $(pwd)/dotssh:/home/myuser/.ssh -it sshtest
The dropbear options -RFEwgsjk are:
-R : Create hostkeys as required
-F : Don't fork into background
-E : Log to stderr rather than syslog
-w : Disallow root logins
-g : Disable password logins for root
-s : Disable password logins
-j : Disable local port forwarding
-k : Disable remote port forwarding
First, generate a key pair using
ssh-keygen -t ed25519 -f id_dropbear -N ""
We assume that you have mounted the user's home .ssh directory in ./dotssh (as in our example, see Starting the container above). You can then copy the pubkey that is generated by ssh-keygen – which is saved in id_dropbear.pub – to the authorized_keys file in the Dropbear SSH directory:
cat id_dropbear.pub | sudo tee -a ./dotssh/authorized_keys
The sudo (in sudo tee) is only required because the dotssh directory is owned by another user.
First, you need to find the container's IP address using the method outlined in How to list just container names & IP address(es) of all Docker containers. In our example, this IP address is 10.254.1.4. You can then connect to the container using the public key:
ssh -i id_dropbear myuser@10.254.1.4
Use
docker ps -q | xargs -n1 docker inspect --format '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
$ docker ps -q | xargs -n1 docker inspect --format '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
/flamboyant_cohen 10.254.1.6
/flamboyant_cohen 10.254.1.6
/lucid_shtern 10.254.1.4
This solution was inspired by GitHub user zeroows on GitHub Gists.
Use
docker ps --format "{{.Names}}"
$ docker ps --format "{{.Names}}"
flamboyant_cohen
lucid_shtern
gitlab-techoverflow_gitlab_1
Use
docker ps -q
$ docker ps -q
29f721fa3124
39a240769ae8
cae90fe55b9a
90580dc4a6d2
348c24768864
5e64779be4f0
78874ae92a8e
92650c527106
948a8718050f
7aad5a210e3c
In Dockerfiles you should always use apk with --no-cache to prevent extra files being deposited on the containers, for example:
FROM alpine:3.17
RUN apk add --no-cache python3-dev
The following .gitlab-ci.yml will build a native executable project using cmake with a custom docker image:
stages:
  - build

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4
In this example, we have only one stage – if you have multiple stages, you can specify different images for each of them.
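As a sketch of what that could look like (the test stage, its image and the test command are placeholders, not part of the original example), a pipeline with separate build and test stages might use:
stages:
  - build
  - test

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4

testmyexe:
  stage: test
  image: 'ubuntu:22.04'
  script:
    - ./mytest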
You can easily configure SMTP email for InvenTree by adding the following config to your .env file (I'm using the docker production config):
INVENTREE_EMAIL_HOST=smtp.mydomain.com
[email protected]
INVENTREE_EMAIL_PASSWORD=cheen1zaiCh4yaithaecieng2jazey
INVENTREE_EMAIL_TLS=true
[email protected]
Even after setting up InvenTree, it is sufficient to just add this config to the .env file and restart the server.
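For example, with the docker-compose based production setup you can apply the new .env by recreating the containers – a sketch; run this in the directory containing your docker-compose.yml:
docker-compose down && docker-compose up -d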