Container

How to install InvenTree using docker in just 5 minutes

The following script automates the installation of InvenTree: it fetches the current docker-compose.yml and the other config files from GitHub, modifies them so that only local directories are used for storage, and then sets up InvenTree.

First, create a directory such as /opt/inventree-mydomain. I recommend choosing a unique directory name rather than just inventree so that multiple instances can be told apart.

 

#!/bin/sh
wget -O nginx.prod.conf https://github.com/inventree/InvenTree/raw/master/docker/production/nginx.prod.conf
wget -O docker-compose.yml https://github.com/inventree/InvenTree/raw/master/docker/production/docker-compose.yml
wget -O .env https://github.com/inventree/InvenTree/raw/master/docker/production/.env

sed -i -e 's/#INVENTREE_DB_USER=pguser/INVENTREE_DB_USER=inventree/g' .env
sed -i -e "s/#INVENTREE_DB_PASSWORD=pgpassword/INVENTREE_DB_PASSWORD=$(pwgen 30 1)/g" .env
sed -i -e "s/INVENTREE_WEB_PORT=1337/INVENTREE_WEB_PORT=$(shuf -i 1024-65535 -n 1)/g" .env
sed -i -e "s/#INVENTREE_ADMIN_USER=/INVENTREE_ADMIN_USER=admin/g" .env
sed -i -e "s/#INVENTREE_ADMIN_PASSWORD=/INVENTREE_ADMIN_PASSWORD=$(pwgen 30 1)/g" .env
sed -i -e "s/#INVENTREE_ADMIN_EMAIL=/[email protected]/g" .env
sed -i -e 's/COMPOSE_PROJECT_NAME=inventree-production//g' .env
# Enable cache
sed -i -e "s/#INVENTREE_CACHE_HOST=inventree-cache/INVENTREE_CACHE_HOST=inventree-cache/g" .env
sed -i -e "s/#INVENTREE_CACHE_PORT=6379/INVENTREE_CACHE_PORT=6379/g" .env
# Use direct directory mapping to avoid mounting issues
sed -i -e "s%- inventree_data:%- $(pwd)/inventree_data:%g" docker-compose.yml
# ... now we can remove the volume declarations from docker-compose.yml
sed -i -e '/^volumes:/,$d' docker-compose.yml

sed -z -i -e 's#profiles:\s*- redis\s*##g' docker-compose.yml # Make redis start always, even without docker-compose --profile redis
# Use standard docker-compose directory naming to facilitate multiple parallel installations
sed -z -i -e 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml # Remove container_name: ... statements
# Create data directory which is bound to the docker volume
mkdir -p inventree_data
# Initialize database
docker-compose up -d inventree-cache inventree-db # database initialization needs cache
docker-compose run inventree-server invoke update

After that, you can check .env for the randomly generated INVENTREE_ADMIN_PASSWORD and INVENTREE_WEB_PORT.
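For example, you can print both generated values directly:

# Show the generated admin password and the randomly chosen web port
grep -E 'INVENTREE_ADMIN_PASSWORD|INVENTREE_WEB_PORT' .env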

Now you can enable autostart & start the service using systemd; for more details, see our post Create a systemd service for your docker-compose project in 10 seconds:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

Don’t forget to configure your reverse proxy to point to InvenTree.
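If you use nginx on the host, here is a minimal reverse proxy sketch (assuming the Debian/Ubuntu sites-available layout, a hypothetical domain inventree.mydomain.com and that INVENTREE_WEB_PORT came out as 14080; substitute your own values):

sudo tee /etc/nginx/sites-available/inventree.conf <<'EOF'
server {
    listen 80;
    server_name inventree.mydomain.com;
    # Allow larger uploads, e.g. attachments
    client_max_body_size 100M;
    location / {
        proxy_pass http://127.0.0.1:14080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/inventree.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx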

Posted by Uli Köhler in Docker, InvenTree

How to get IP address of a running docker-compose container

This will get the IP address of a running docker-compose container for the mongo service.

docker inspect $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].ID') --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

When you use this in shell scripts, it’s often convenient to store the IP address in a variable:

export MONGO_IP=$(docker inspect $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].ID') --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')

which you can then use as $MONGO_IP.
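For example, to verify connectivity (assuming mongosh is installed on the host and MongoDB listens on its default port):

# Ping the MongoDB instance running inside the container
mongosh "mongodb://${MONGO_IP}:27017" --eval 'db.runCommand({ ping: 1 })'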

For more details on how this works, see the following posts:

 

Posted by Uli Köhler in Docker

How to get container name of docker-compose container

Let’s assume the directory where your docker-compose.yml is located is called myservice.

If you have, for example, a docker-compose.yml that declares a service mongo running MongoDB, docker-compose will call the container mongo or mongo-1.

However, docker itself will call that container myservice-mongo-1.

In order to find out the actual docker name of your container – assuming the container is running – use the following code:

docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name'

This uses docker-compose ps to list running containers, exporting some information as JSON, for example:

[{
  "ID": "2d68b1c1625dbfb41e05f55af0a333b5700332112c6c7551f78afe27b1dfc7ad",
  "Name": "production-mongo-1",
  "Command": "docker-entrypoint.sh mongod",
  "Project": "production",
  "Service": "mongo",
  "State": "running",
  "Health": "",
  "ExitCode": 0,
  "Publishers": [
    {
      "URL": "",
      "TargetPort": 27017,
      "PublishedPort": 0,
      "Protocol": "tcp"
    }
  ]
}]

Then we use jq (a command-line JSON processor) to a) select only the entry in the list of running containers whose Service attribute equals mongo, b) take the first one using [0], and c) read the Name attribute, which stores the name of the container.

Example output

$ docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name'
myservice-mongo-1

 

Posted by Uli Köhler in Container, Docker

bup remote server docker-compose config with CIFS-mounted backup store

In our previous post How to setup a “bup remote” server in 5 minutes using docker-compose we outlined how to setup your own bup remote server using docker-compose. Read that post before this one!

This post provides an alternate docker-compose.yml config file that mounts a remote CIFS directory as /bup backup directory instead of using a local directory. This is most useful when using a NAS and a separate bup server.

For this example, we’ll mount the CIFS share //10.1.2.3/bup-backups with user cifsuser and password pheT8Eigho.

Note: For performance reasons, the CIFS server (NAS) and the bup server should be locally connected, not via the internet.

# Mount the backup volume using CIFS
# NOTE: We recommend to not use a storage mounted over the internet
# for performance reasons. Instead, deploy a bup remote server locally.
volumes:
  bup-backups:
    driver_opts:
      type: cifs
      o: "username=cifsuser,password=pheT8Eigho,uid=1111,gid=1111"
      device: "//10.1.2.3/bup-backups"

version: "3.8"
services:
  bup-server:
    image: ulikoehler/bup-server:latest
    environment:
      - SSH_PORT=2022
    volumes:
      - ./dotssh:/home/bup/.ssh
      - ./dropbear:/etc/dropbear
      # BUP backup storage: CIFS mounted
      - bup-backups:/bup
    ports:
      - 2022:2022
    restart: unless-stopped

 

Posted by Uli Köhler in bup, Docker, Networking

How to setup a “bup remote” server in 5 minutes using docker-compose

The bup backup system implements remote backup on a server by connecting via SSH to said server, starting a bup process there and then communicating via the SSH tunnel.

In this post, we’ll setup a server for bup remote backup based on our ulikoehler/bup-server image (which contains both bup and dropbear as an SSH server).

1. Initialize the directory structure & create SSH keyset to access the server

I recommend doing this in /opt/bup, but in principle, any directory will do.

mkdir -p dotssh bup
# Generate a new elliptic curve (ed25519) keypair
ssh-keygen -t ed25519 -f id_bup -N ""
# Add SSH key to list of authorized keys
cat id_bup.pub | sudo tee -a dotssh/authorized_keys
# Fix permissions so that dropbear does not complain
sudo chown -R 1111:1111 bup
sudo chmod 0600 dotssh/authorized_keys
sudo chmod 0700 dotssh

1111 is the user ID of the bup user inside the container.
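A quick sanity check of the resulting ownership and permissions (numeric IDs, so you can see the 1111 directly):

# The bup data directory should be owned by uid/gid 1111, dotssh should be mode 0700
ls -land bup dotssh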

2. Create docker-compose.yml

Note: This docker-compose.yml uses a local backup directory – you can also mount a CIFS directory from e.g. a NAS device. See bup remote server docker-compose config with CIFS-mounted backup store for more details.

version: "3.8"
services:
  bup-server:
    image: ulikoehler/bup-server:latest
    environment:
      - SSH_PORT=2022
    volumes:
      - ./dotssh:/home/bup/.ssh
      - ./dropbear:/etc/dropbear
      # BUP backup storage:
      - ./bup:/bup
    ports:
      - 2022:2022
    restart: unless-stopped

3. Startup the container

At this point, you can use docker-compose up to start the service. However, it’s typically easier to just use TechOverflow’s script to generate a systemd service that autostarts the service on boot (and starts it right now):

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

When you run docker-compose logs -f, you should see a greeting message from dropbear such as

bupremotedocker-bup-remote-1  | [1] Dec 25 14:58:20 Not backgrounding

4. Create a .ssh/config entry on the client

You need to do this for each client.

Copy id_bup (which we generated earlier) to each client into a folder such as ~/.ssh. Where you copy it does not matter, but the user who will be running the backups later will need access to this file. Also, for that user you need to create a .ssh/config entry telling SSH how to access the bup server:

Host BupServer
    HostName 10.1.2.3
    User bup
    Port 2022
    IdentityFile ~/.ssh/id_bup

Set HostName to the IP or domain name of the host running the docker container.
Set User to bup. This is hard-coded in the container.
Set Port to whatever port you mapped in docker-compose.yml. If the ports: line in docker-compose.yml is - 1234:2022, the correct value for Port in .ssh/config is 1234.
Set IdentityFile to wherever id_bup is located (see above).

Now you need to connect to the bup server container once from each client. This is both to spot issues with your SSH configuration (such as wrong permissions on the id_bup file) and to save the SSH host key of the container as a known key:

ssh BupServer

If this prompts you for a password, something is wrong in your configuration – possibly you are connecting to the wrong SSH host, since the bup server container has password authentication disabled.
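If you need to debug this, verbose SSH output shows which key is offered and which authentication methods the server accepts:

ssh -v BupServer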

5. Connect using bup

Every client will need bup to be installed. See How to install bup on Ubuntu 22.04 and similar posts.

You have to understand that bup needs both a local directory (called the index) and a directory on the bup server (called the destination directory). You have to use one index directory and one destination directory per backup project. What you define as a backup project is up to you, but I strongly recommend using one backup project per application you back up, in order to have data locality: backups from one application belong together.

By convention, the /bup directory on the server (i.e. container) is dedicated for this purpose (and mapped to a directory or volume outside of the container).

On the local host, I recommend using either /var/lib/bup/project.index.bup or ~/bup/project.index.bup and letting bup auto-create project-specific directories from there. If you use a special user on the client to do backups, you can also place the indexes in that user’s home directory. If the index is lost, this is not an issue as long as the backup works (it will just take a few minutes to check all files again). You should not back up the index directory.

There is no requirement for the .bup or .index.bup suffix, but if you use it, it allows you to quickly discern what a directory is and whether it is important or not.

In order to use bup, you first need to initialize the directories. You can do this multiple times without any issue, so I do it at the start of each of my backup scripts.

bup -d ~/buptest.index.bup init -r BupServer:/bup/buptest.bup

After that, you can start backing up. Generally this is done by first running bup index (this operation is local-only) and then running bup save (which saves the backup on the bup remote server).

bup -d ~/buptest.index.bup index . && bup -d ~/buptest.index.bup save -r BupServer:/bup/buptest.bup -9 --strip-path $(pwd) -n mybackup .

Some parameters demand further explanation:

  • -9: Maximum compression. bup is so fast that it hardly makes a difference, but it saves a ton of disk space, especially for text-like data.
  • --strip-path $(pwd): If you back up a directory /home/uli/Documents/ with a file /home/uli/Documents/Letters/Myletter.txt, this makes bup save the backup of said file under the name Letters/Myletter.txt instead of /home/uli/Documents/Letters/Myletter.txt.
  • -n mybackup: The name of this backup. This allows you to separate different backups in a single repository.

6. Let’s restore!

You might want to say hopefully I’ll never need to restore. WRONG. You need to restore right now, and you need to restore regularly, to verify that when you actually do need to recover data, the restore will work.

In order to do this, we’ll first need access to the folder where the backup is stored. This is typically located on some kind of Linux server anyway, so just install bup there. In our example above, the directory we’ll work with is called buptest.bup.

There are two convenient ways to view bup backups:

  1. Use bup web and open your browser at http://localhost:8080 to view the backup data (including history):
    bup -d buptest.bup web
  2. Use bup fuse to mount the entire tree including history to a directory such as /tmp/buptest:
    mkdir -p  /tmp/buptest && bup -d buptest.bup fuse /tmp/buptest
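If you want to restore files back to disk rather than just browse them, you can use bup restore; here is a minimal sketch using the repository and backup name from the example above:

# Restore the latest state of the "mybackup" branch into /tmp/restored
mkdir -p /tmp/restored
bup -d buptest.bup restore -C /tmp/restored /mybackup/latest/.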

     

Posted by Uli Köhler in bup, Container, Docker

Minimal SSH server on Docker container using dropbear

This example Dockerfile runs a dropbear SSH daemon on Alpine Linux. It creates a system user called myuser and only allows login for that specific user.

FROM alpine:3.17
WORKDIR /app
# The SSH user to create
ENV SSHUSER=myuser
RUN apk --no-cache add dropbear &&\
    mkdir -p /home/$SSHUSER/.ssh &&\
    adduser -s /bin/sh -D -h /home/$SSHUSER $SSHUSER &&\
    chown -R $SSHUSER:$SSHUSER /home/$SSHUSER

CMD ["/bin/sh", "-c", "/usr/sbin/dropbear -RFEwgsjk -G ${SSHUSER} -p 22"]

Change the username to your liking.

Build like this:

docker build -t sshtest .

Starting the container

You can run it like this – remember to mount /etc/dropbear to a volume or local directory both for persisting host key files and for storing authorized key files:

docker run -v $(pwd)/dropbear:/etc/dropbear -v $(pwd)/dotssh:/home/myuser/.ssh -it sshtest

Dropbear options

The dropbear options -RFEwgsjk are:

  • -R: Create hostkeys as required
  • -F: Don’t fork into background
  • -E: Log to stderr rather than syslog
  • -w: Disallow root logins
  • -g: Disable password logins for root
  • -s: Disable password logins
  • -j: Disable local port forwarding
  • -k: Disable remote port forwarding

Setting up public key authentication

First, generate a key pair using

ssh-keygen -t ed25519 -f id_dropbear -N ""

We assume that you have mounted the user’s home .ssh directory in ./dotssh (as in our example, see Starting the container above). You can then copy the pubkey that is generated by ssh-keygen – which is saved in id_dropbear.pub – to the authorized_keys file in the Dropbear SSH directory:

cat id_dropbear.pub | sudo tee -a ./dotssh/authorized_keys

The sudo (in sudo tee) is only required because the dotssh directory is owned by another user.

Connecting to the container

First, you need to find the container’s IP address using the method outlined in How to list just container names & IP address(es) of all Docker containers. In our example, this IP address is 10.254.1.4. You can then connect to the container using the public key:

ssh -i id_dropbear myuser@10.254.1.4

 

Posted by Uli Köhler in Docker

How to list just container names & IP address(es) of all Docker containers

Use

docker ps -q | xargs -n1 docker inspect --format '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

Example output:

$ docker ps -q | xargs -n1 docker inspect --format '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
/flamboyant_cohen 10.254.1.6
/lucid_shtern 10.254.1.4

This solution was inspired by GitHub user zeroows on GitHub Gists.

 

Posted by Uli Köhler in Docker

How to list just container names of all Docker containers

Use

docker ps --format "{{.Names}}"

Example output:

$ docker ps --format "{{.Names}}"
flamboyant_cohen
lucid_shtern
gitlab-techoverflow_gitlab_1

 

Posted by Uli Köhler in Docker

How to list just container ID of all Docker containers

Use

docker ps -q

Example output:

$ docker ps -q
29f721fa3124
39a240769ae8
cae90fe55b9a
90580dc4a6d2
348c24768864
5e64779be4f0
78874ae92a8e
92650c527106
948a8718050f
7aad5a210e3c

 

Posted by Uli Köhler in Docker

How to correctly use apk in Dockerfile

In Dockerfiles you should always use apk with --no-cache so that the package index cache does not end up in the image layers, for example:

FROM alpine:3.17
RUN apk add --no-cache python3-dev

 

Posted by Uli Köhler in Alpine Linux, Docker

How to install magic-wormhole on CoreOS

Step 1: Install pip

sudo rpm-ostree install python3-pip

then reboot for the changes to take effect:

sudo systemctl reboot

Step 2: Install magic-wormhole

sudo pip install magic-wormhole
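To quickly verify that the installation works, you can send a test file (myfile.txt here is just a placeholder):

# The receiving side runs "wormhole receive" and enters the code printed here
wormhole send myfile.txt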

 

Posted by Uli Köhler in CoreOS

How to specify which docker image to use in .gitlab-ci.yml

The following .gitlab-ci.yml will build a native executable project using cmake with a custom docker image:

stages:
  - build

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4

In this example, we have only one stage – if you have multiple stages, you can specify different images for each of them.

Posted by Uli Köhler in Docker, git, GitLab

How to show current CoreOS system version using rpm-ostree

In CoreOS, run

sudo rpm-ostree status

and look for the entry with the dot (●) in front of it to see which deployment – i.e. which CoreOS version – is currently active. Then, look for Version: in the line below it. This serves as the alternative to lsb_release -a, which is not available on CoreOS.

Example output:

State: idle
AutomaticUpdatesDriver: Zincati
  DriverState: active; periodically polling for updates (last checked Thu 2022-12-08 03:49:05 UTC)
Deployments:
● fedora:fedora/x86_64/coreos/stable
                  Version: 37.20221106.3.0 (2022-11-28T20:05:48Z)
               BaseCommit: 6278bd1e5f311880a6975307e7ce734076a0b1a37f8a97c875c07037c748ddcc
             GPGSignature: Valid signature by ACB5EE4E831C74BB7C168D27F55AD3FB5323552A
          LayeredPackages: bmon docker-compose htop iotop make tailscale tree wget xe-guest-utilities-latest

  fedora:fedora/x86_64/coreos/stable
                  Version: 36.20221030.3.0 (2022-11-11T15:51:02Z)
               BaseCommit: eab21e5b533407b67b1751ba64d83c809d076edffa1ff002334603bf13655a14
             GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
          LayeredPackages: bmon docker-compose htop iotop make tailscale tree wget xe-guest-utilities-latest

In this example, CoreOS 37.20221106.3.0 is active.
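If you need the active version in a script, here is a sketch using rpm-ostree’s machine-readable output (assuming jq is installed):

# Print the version of the currently booted deployment
rpm-ostree status --json | jq -r '.deployments[] | select(.booted) | .version'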

Posted by Uli Köhler in CoreOS

How to fix zincati not updating CoreOS: rpm-ostree deploy failed: error: Packages not found: …

Problem:

My zincati service – the service that automatically updates CoreOS – could not update CoreOS due to the following log messages (view with journalctl -xfu zincati.service):

[ERROR zincati::update_agent::actor] failed to stage deployment: rpm-ostree deploy failed:
    error: Packages not found: magic-wormhole

Solution:

The solution typically involves uninstalling the package – in this case magic-wormhole – using

sudo rpm-ostree uninstall magic-wormhole

Note that this might uninstall a service that is required for your infrastructure, and it will delete files associated with the package in the process of uninstalling it. You should make a backup of valuable data in any case.
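To see which packages are currently layered before removing one, you can filter the rpm-ostree status output shown in the previous post:

# Lists the LayeredPackages line for each deployment
rpm-ostree status | grep LayeredPackages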

Posted by Uli Köhler in Allgemein, CoreOS

How to configure SMTP E-Mail for InvenTree (docker/docker-compose)

You can easily configure SMTP email for InvenTree by adding the following config to your .env file (I’m using the docker production config):

INVENTREE_EMAIL_HOST=smtp.mydomain.com
INVENTREE_EMAIL_USERNAME=[email protected]
INVENTREE_EMAIL_PASSWORD=cheen1zaiCh4yaithaecieng2jazey
INVENTREE_EMAIL_TLS=true
INVENTREE_EMAIL_SENDER=[email protected]

Even after setting up InvenTree, it is sufficient to just add this config to the .env file and restart the server.
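For example, with the docker production config (run from the directory containing docker-compose.yml):

# Recreate the containers so the new .env values are picked up
# (a plain restart does not re-read .env)
docker-compose down
docker-compose up -d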

Posted by Uli Köhler in Docker, InvenTree

How to remove container_name: … statements from docker-compose.yml automatically using sed

This script will remove all container_name statements from a docker-compose.yml config file:

sed -z -i -e 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml

Example input:

services:
    inventree-proxy:
        container_name: inventree-proxy
        image: nginx:stable
        depends_on:
            - inventree-server
[...]

 

Example output:

services:
    inventree-proxy:
        image: nginx:stable
        depends_on:
            - inventree-server
[...]

 

Posted by Uli Köhler in Docker, Linux

How to fix lxc launch Failed getting root disk: No root device could be found

Problem:

While trying to launch a lxc container using a command like

lxc launch ubuntu:22.04 mycontainer

you see the following error message:

Creating mycontainer
Error: Failed instance creation: Failed creating instance record: Failed initialising instance: Failed getting root disk: No root device could be found

Solution:

You didn’t initialize your LXD storage properly. Run

lxd init

in order to configure the storage for LXD. For most setups except performance-critical production setups, I recommend using the dir storage backend because it does not require any further configuration. You can leave all other options at their default values.

Name of the storage backend to use (zfs, btrfs, ceph, cephobject, dir, lvm) [default=zfs]: dir
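If you prefer a non-interactive setup, lxd init can also be run with defaults from the command line; a sketch using the dir backend:

# Accept all defaults, but use the dir storage backend
lxd init --auto --storage-backend dir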
Posted by Uli Köhler in Container, LXC

Move LXC container to new VM

Create snapshot on your current VM

lxc snapshot container_name snapshot_name

Validate created snapshot by checking the snapshots list displayed with:

lxc info container_name 

In case you have not named your snapshot, look for the most recent creation date. It might have a default name like snap1.

Create an image from the snapshot

lxc publish container_name/snapshot_name --alias="image_alias" description="image_description"

Verify your created image by checking the image list displayed with:

lxc image info image_alias

Export the created image to an archive in your current path

lxc image export image_alias image_archive_name

Send the file to your new VM

Make sure that you can establish an SSH connection to your new VM from your old VM, e.g. via a VPN or WireGuard connection. Use scp to copy the image like so:

scp ./image_archive_name.tar.gz [email protected]:/home/user

Import image and launch new container on your new VM

Make sure lxc and lxd are installed on your new VM and then import the image like so:

lxc image import image_archive_name.tar.gz --alias image_alias_on_new_vm

Make sure the imported image appears in the list on your new VM.

lxc image list

Then launch a new container from the image with:

lxc launch image_alias_on_new_vm container_name
Posted by Joshua Simon in Container, LXC

How to fix docker emqx_ctl Node ’emqx@node1.emqx.mydomain.com’ not responding to pings.

Problem:

When trying to run emqx_ctl in a dockerized emqx setup using a command like

docker-compose exec emqx ./bin/emqx status

you see an error message like

Node 'emqx@node1.emqx.mydomain.com' not responding to pings.
/opt/emqx/bin/emqx: line 41: die: command not found

Solution:

The problem here is that emqx_ctl is trying to connect to the IP address for node1.emqx.mydomain.com, but that IP address does not point to the docker container (maybe it’s the public IP address of your server?).

The solution here is to create a network alias within docker/docker-compose so that the Docker DNS system resolves node1.emqx.mydomain.com to the internal IP address of the container.

For example, in docker-compose, you can create your network using

networks:
  emqx:
    driver: bridge

and then configure the alias for the container using

services:
  emqx:
    image: emqx:4.4.4
    environment:
      - "EMQX_NAME=emqx"
      - "EMQX_HOST=node1.emqx.mydomain.com"
      - "EMQX_LOADED_PLUGINS=emqx_recon,emqx_retainer,emqx_management,emqx_dashboard"
    ports:
      - 18083:18083
      - 1883:1883
    volumes:
      - ./emqx_data:/opt/emqx/data
      - ./emqx_log:/opt/emqx/log
    networks:
      emqx:
        aliases:
          - "node1.emqx.mydomain.com"

 

 

Posted by Uli Köhler in Container, Docker, EMQX, MQTT

How to setup Netmaker using docker-compose in under 15 minutes

In this post, we’ll build a simple setup for running Netmaker with a PostgreSQL backend using docker-compose, behind an external Traefik reverse proxy.

First, create a directory for the Netmaker files to reside in, e.g.:

mkdir /opt/netmaker

cd to that directory:

cd /opt/netmaker

At this point we’ll download the Mosquitto config from GitHub and open the required ports in the ufw firewall. I reserved the port range 51821-52821 in order to facilitate having a lot of networks (more than the default 9). My Traefik config is the one from Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges, which allows having a single *.netmaker.mydomain.com Let’s Encrypt wildcard certificate.
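A sketch of those two steps (the exact GitHub path of mosquitto.conf may differ between Netmaker versions):

# Download the Mosquitto config shipped with Netmaker
wget -O mosquitto.conf https://raw.githubusercontent.com/gravitl/netmaker/master/docker/mosquitto.conf
# Open the reserved WireGuard port range in ufw
sudo ufw allow 51821:52821/udp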

Now, create docker-compose.yml in that directory

version: "3.4"

services:
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=netmaker
      - POSTGRES_USER=netmaker
  netmaker: # The Primary Server for running Netmaker
    image: gravitl/netmaker:v0.14.5
    depends_on:
      - postgres
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv6.conf.all.forwarding=1
    restart: always
    volumes: # Volume mounts necessary for sql, coredns, and mqtt
      - ./netmaker_dnsconfig:/root/config/dnsconfig
      - ./netmaker_sqldata:/root/data
      - ./netmaker_shared_certs:/etc/netmaker
    environment: # Necessary capabilities to set iptables when running in container
      SERVER_NAME: "broker.${NETMAKER_BASE_DOMAIN}" # The domain/host IP indicating the mq broker address
      SERVER_HOST: "${NETMAKER_PUBLIC_IP}" # Set to public IP of machine.
      SERVER_HTTP_HOST: "api.${NETMAKER_BASE_DOMAIN}" # Overrides SERVER_HOST if set. Useful for making HTTP available via different interfaces/networks.
      SERVER_API_CONN_STRING: "api.${NETMAKER_BASE_DOMAIN}:443"
      # COREDNS_ADDR: "${NETMAKER_PUBLIC_IP}" # Address of the CoreDNS server. Defaults to SERVER_HOST
      DNS_MODE: "off" # Enables DNS Mode, meaning all nodes will set hosts file for private dns settings.
      API_PORT: "8081" # The HTTP API port for Netmaker. Used for API calls / communication from front end. If changed, need to change port of BACKEND_URL for netmaker-ui.
      CLIENT_MODE: "on" # Depricated. CLIENT_MODE should always be ON
      REST_BACKEND: "on" # Enables the REST backend (API running on API_PORT at SERVER_HTTP_HOST). Change to "off" to turn off.
      DISABLE_REMOTE_IP_CHECK: "off" # If turned "on", Server will not set Host based on remote IP check. This is already overridden if SERVER_HOST is set. Turned "off" by default.
      TELEMETRY: "on" # Whether or not to send telemetry data to help improve Netmaker. Switch to "off" to opt out of sending telemetry.
      RCE: "off" # Enables setting PostUp and PostDown (arbitrary commands) on nodes from the server. Off by default.
      MASTER_KEY: "${NETMAKER_MASTER_KEY}" # The admin master key for accessing the API. Change this in any production installation.
      CORS_ALLOWED_ORIGIN: "*" # The "allowed origin" for API requests. Change to restrict where API requests can come from.
      DISPLAY_KEYS: "on" # Show keys permanently in UI (until deleted) as opposed to 1-time display.
      DATABASE: "postgres" # Database to use - sqlite, postgres, or rqlite
      SQL_HOST: "postgres"
      SQL_DB: "netmaker"
      SQL_USER: "netmaker"
      SQL_PASS: "${POSTGRES_PASSWORD}"
      NODE_ID: "${SERVER_NAME}" # used for HA - identifies this server vs other servers
      MQ_HOST: "mq"  # the address of the mq server. If running from docker compose it will be "mq". Otherwise, need to input address. If using "host networking", it will find and detect the IP of the mq container.
      MQ_SERVER_PORT: "1883" # the reachable port of MQ by the server - change if internal MQ port changes (or use external port if MQ is not on the same machine)
      MQ_PORT: "443" # the reachable port of MQ - change if external MQ port changes (port on proxy, not necessarily the one exposed in docker-compose)
      HOST_NETWORK: "off" # whether or not host networking is turned on. Only turn on if configured for host networking (see docker-compose.hostnetwork.yml). Will set host-level settings like iptables.
      VERBOSITY: "1" # logging verbosity level - 1, 2, or 3
      MANAGE_IPTABLES: "on" # deprecated
      # PORT_FORWARD_SERVICES: "ssh,mq" # decide which services to port forward ("dns","ssh", or "mq")
      # this section is for OAuth
      AUTH_PROVIDER: "" # "<azure-ad|github|google|oidc>"
      CLIENT_ID: "" # "<client id of your oauth provider>"
      CLIENT_SECRET: "" # "<client secret of your oauth provider>"
      FRONTEND_URL: "" # "https://dashboard.<netmaker base domain>"
      AZURE_TENANT: "" # "<only for azure, you may optionally specify the tenant for the OAuth>"
      OIDC_ISSUER: "" # https://oidc.yourprovider.com - URL of oidc provider
    ports:
      - "51821-52821:51821-52821/udp" # wireguard ports
    expose:
      - "8081" # api port
    labels: # only for use with traefik proxy (default)
      - traefik.enable=true
      - traefik.http.routers.netmaker-api.entrypoints=websecure
      - traefik.http.routers.netmaker-api.rule=Host(`api.${NETMAKER_BASE_DOMAIN}`)
      - traefik.http.routers.netmaker-api.service=netmaker-api
      - traefik.http.services.netmaker-api.loadbalancer.server.port=8081
      - "traefik.http.routers.netmaker-api.tls.certresolver=cloudflare"
      - "traefik.http.routers.netmaker-api.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.http.routers.netmaker-api.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"
  netmaker-ui:  # The Netmaker UI Component
    container_name: netmaker-ui
    image: gravitl/netmaker-ui:v0.14.5
    depends_on:
      - netmaker
    links:
      - "netmaker:api"
    restart: always
    environment:
      BACKEND_URL: "https://api.${NETMAKER_BASE_DOMAIN}" # URL where UI will send API requests. Change based on SERVER_HOST, SERVER_HTTP_HOST, and API_PORT
    expose:
      - "80"
    labels:
      - traefik.enable=true
      - traefik.http.middlewares.nmui-security.headers.accessControlAllowOriginList=*.${NETMAKER_BASE_DOMAIN}
      - traefik.http.middlewares.nmui-security.headers.stsSeconds=31536000
      - traefik.http.middlewares.nmui-security.headers.browserXssFilter=true
      - traefik.http.middlewares.nmui-security.headers.customFrameOptionsValue=SAMEORIGIN
      - traefik.http.middlewares.nmui-security.headers.customResponseHeaders.X-Robots-Tag=none
      - traefik.http.middlewares.nmui-security.headers.customResponseHeaders.Server= # Remove the server name
      - traefik.http.routers.netmaker-ui.entrypoints=websecure
      - traefik.http.routers.netmaker-ui.middlewares=nmui-security@docker
      - traefik.http.routers.netmaker-ui.rule=Host(`dashboard.${NETMAKER_BASE_DOMAIN}`)
      - traefik.http.routers.netmaker-ui.service=netmaker-ui
      - traefik.http.services.netmaker-ui.loadbalancer.server.port=80
      - "traefik.http.routers.netmaker-ui.tls.certresolver=cloudflare"
      - "traefik.http.routers.netmaker-ui.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.http.routers.netmaker-ui.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"
  mq: # the MQTT broker for netmaker
    container_name: mq
    image: eclipse-mosquitto:2.0.11-openssl
    depends_on:
      - netmaker
    restart: unless-stopped
    volumes:
      - ./mosquitto.conf:/mosquitto/config/mosquitto.conf # need to pull conf file from github before running (under docker/mosquitto.conf)
      - ./mosquitto_data:/mosquitto/data
      - ./mosquitto_logs:/mosquitto/log
      - ./netmaker_shared_certs:/mosquitto/certs
    expose:
      - "8883"
    labels:
      - traefik.enable=true
      - traefik.tcp.routers.mqtts.rule=HostSNI(`broker.${NETMAKER_BASE_DOMAIN}`)
      - traefik.tcp.routers.mqtts.tls.passthrough=true
      - traefik.tcp.services.mqtts-svc.loadbalancer.server.port=8883
      - traefik.tcp.routers.mqtts.service=mqtts-svc
      - traefik.tcp.routers.mqtts.entrypoints=websecure
      - "traefik.tcp.routers.mqtts.tls.certresolver=cloudflare"
      - "traefik.tcp.routers.mqtts.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.tcp.routers.mqtts.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"

After that, create .env in said directory containing some info about your node:

SERVER_NAME=netmaker-01
NETMAKER_BASE_DOMAIN=netmaker.mydomain.com
NETMAKER_MASTER_KEY=geeveeBaeQuie1cie6aaz6eepahleo
NETMAKER_PUBLIC_IP=101.252.11.54
POSTGRES_PASSWORD=ahph8Jih4aesheel7id1uyo0gietai

Adjust the NETMAKER_BASE_DOMAIN and the NETMAKER_PUBLIC_IP accordingly. Also, you need to choose a random NETMAKER_MASTER_KEY and POSTGRES_PASSWORD.
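For example, you can generate suitable random values with pwgen (the same tool used in the InvenTree script above):

# Generate a 30-character random string, once for NETMAKER_MASTER_KEY
# and once for POSTGRES_PASSWORD
pwgen 30 1
pwgen 30 1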

Now we’ll use the script from Create a systemd service for your docker-compose project in 10 seconds in order to create a systemd service to automatically run the service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This script will also automatically start the service (i.e. docker-compose up).

 

Posted by Uli Köhler in Container, Docker