How to run Nextcloud cron job manually using docker-compose

For docker-compose based Nextcloud installations, this is the command to run the cron job manually:

docker-compose exec -u www-data nextcloud php cron.php

You need to run this from the directory where docker-compose.yml is located.
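If you want to run the cron job periodically from the host instead, you can call it from the host's crontab. A minimal sketch, assuming the compose project lives in /var/lib/nextcloud (adjust the path and schedule to your setup):

*/5 * * * * cd /var/lib/nextcloud && /usr/local/bin/docker-compose exec -T -u www-data nextcloud php cron.php

Note the -T flag: cron provides no TTY, so docker-compose must not try to allocate one.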

Posted by Uli Köhler in Linux, Nextcloud

How to fix Nextcloud nextcloudcmd CLI “skipped due to earlier error, trying again in …”

Problem:

Your Nextcloud CLI client fails for some files (upload or download) with an error message like this:

Server replied "413 Request Entity Too Large" to "PUT https://example.com/remote.php/dav/uploads/username/XXXXXXXX/YYYYYY" (skipped due to earlier error, trying again in 6 hour(s))
PATH/TO/FILE.bmp

Solution:

The nextcloud CLI client nextcloudcmd stores the sync SQLite database in ~/.local/share/nextcloudcmd/._sync_############.db where ############ is a hex code. If you have multiple such files in ~/.local/share/nextcloudcmd, try out this procedure for each of them:

While nextcloudcmd is not running, use the SQLite3 command line tool to open the database, for example:

sqlite3 ~/.local/share/nextcloudcmd/._sync_bf15278da518.db

Then run this SQL command:

DELETE FROM 'blacklist';

and exit using Ctrl-D. Now try re-running nextcloudcmd; it should immediately retry syncing the file.
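If you have many sync databases, you can clear the blacklist in all of them at once. A short sketch assuming the default database location (again, make sure nextcloudcmd is not running):

for db in ~/.local/share/nextcloudcmd/._sync_*.db; do
    sqlite3 "$db" "DELETE FROM blacklist;"
done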

Posted by Uli Köhler in Nextcloud

How to install bup on Ubuntu 20.04

I’ve built a PPA that currently publishes bup 0.31 for Ubuntu 20.04 for x64 computers.

This one-liner installs the PPA, updates the APT package cache and installs bup:

sudo add-apt-repository -y ppa:ulikoehler/bup && sudo apt update && sudo apt -y install bup

The bup package has been built using my deb-buildscripts toolchain. In order to build it yourself:

git clone https://github.com/ulikoehler/deb-buildscripts
cd deb-buildscripts
./deb-bup.py

You might need to install some build dependencies for the build process to work, but the script will tell you what is missing.
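The authoritative list is whatever the script reports as missing, but a typical starting set of Debian packaging tools is:

sudo apt -y install build-essential devscripts debhelper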

Posted by Uli Köhler in Linux

How to backup data from docker-compose MariaDB container using mysqldump

For containers with the MariaDB root password stored in .env

This is the recommended best practice. For this example, we will assume that .env looks like this:

MARIADB_ROOT_PASSWORD=mophur3roh6eegiL8Eeto7goneeFei

To create a dump:

source .env && docker-compose exec -T mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql:

source .env && docker-compose exec -T mariadb mysql -uroot -p${MARIADB_ROOT_PASSWORD} < mariadb-dump.sql

Note that you have to replace mariadb with the name of your database service in docker-compose.yml.
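To automate backups, you can wrap the dump command in a small script and call it from cron. A sketch, assuming the compose project lives in the hypothetical directory /var/lib/myservice (adjust the path and service name):

#!/bin/bash
# Dump all databases into a compressed, timestamped file
cd /var/lib/myservice
source .env
docker-compose exec -T mariadb mysqldump -uroot -p"${MARIADB_ROOT_PASSWORD}" --all-databases | gzip > "mariadb-dump-$(date +%F_%H-%M-%S).sql.gz"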

For containers with a MYSQL_ROOT_PASSWORD set to some value not stored in .env

This is secure, but you typically have to copy the password multiple times: once for the mariadb container, once for whatever container or application uses the database, and once for any backup script that exports a SQL dump of the entire database.

To create a dump:

docker-compose exec -T mariadb mysqldump -uroot -pYOUR_MARIADB_ROOT_PASSWORD --all-databases > dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql:

docker-compose exec -T mariadb mysql -uroot -pYOUR_MARIADB_ROOT_PASSWORD < mariadb-dump.sql

Replace YOUR_MARIADB_ROOT_PASSWORD with the password of your installation.

Furthermore, you have to replace mariadb with the name of your database service in docker-compose.yml.

For containers with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This configuration is a security risk – see The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes.

To create a dump:

docker-compose exec -T mariadb mysqldump -uroot --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql:

docker-compose exec -T mariadb mysql -uroot < mariadb-dump.sql

Posted by Uli Köhler in Docker

The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This is part of a common docker-compose.yml which is frequently seen on the internet:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
 [...]

Simple and secure, right? A no-root-password MariaDB instance that’s running in a separate container and does not have its port 3306 exposed – so only services from the same docker-compose.yml can reach it since docker-compose puts all those services in a separate network.

Wrong.

While the MariaDB instance is not reachable from the internet since no port is exposed, it can be reached by any process on the host via its internal IP address.

In order to comprehend what’s happening, we shall take a look at docker’s networks. In this case, my docker-compose config is called redmine.

$ docker network ls | grep redmine
ea7ed38f469b        redmine_default           bridge              local

This is the network that docker-compose creates without any explicit network configuration. Let’s inspect the network using docker network inspect redmine_default to show the attached containers:

[
    // [...]
        "Containers": {
            "2578fc65b4dab9f204d0a252e421dd4ddd9f41c35642d48350f4e59370581757": {
                "Name": "redmine_mariadb_1",
                "EndpointID": "1e6d81acc096a12fc740173f4e107090333c42e8a86680ac5c9886c148d578e7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "7867f71d2a36265c34c133b70aea487b90ea68fcf30ecb42d6e7e9a376cf8e07": {
                "Name": "redmine_redmine_1",
                "EndpointID": "f5ac7b3325aa9bde12f0c625c4881f9a6fc9957da4965767563ec9a3b76c19c3",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
    // [...]
]

We can see that the IP address of the redmine_mariadb_1 container is 172.18.0.2.

Using the internal IP 172.18.0.2, you can access the MySQL server.

Any process on the host (even from unprivileged users) can connect to the container without any password, e.g.

$ mysqldump -uroot -h172.18.0.2 --all-databases
// This will show the dump of the entire MariaDB database

How to mitigate this security risk?

Mitigation is quite easy since we only need to set a root password for the MariaDB instance.

My recommended best practice is to avoid duplicate passwords. In order to do this, create a .env file in the directory where docker-compose.yml is located, with the following content:

MARIADB_ROOT_PASSWORD=aiPaipei6ookaemue4voo0NooC0AeH

Remember to replace the password with a random one, or use this shell one-liner to generate it automatically:

echo MARIADB_ROOT_PASSWORD=$(pwgen 30) > .env

Now we can use ${MARIADB_ROOT_PASSWORD} in docker-compose.yml wherever the MariaDB root password is required, for example:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
  redmine:
    image: 'redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - REDMINE_EMAIL=ukoehler@techoverflow.net
      - REDMINE_DB_MYSQL=mariadb
      - REDMINE_DB_USERNAME=root
      - REDMINE_DB_PASSWORD=${MARIADB_ROOT_PASSWORD}
    ports:
      - '3718:3000'
    volumes:
      - './redmine_data/conf:/usr/src/redmine/conf'
      - './redmine_data/files:/usr/src/redmine/files'
      - './redmine_themes:/usr/src/redmine/public/themes'
    depends_on:
      - mariadb

Note that the mariadb docker image will not change the root password if the database directory already exists (mariadb_data in this example).

My recommended best practice for changing the root password is to use mysqldump --all-databases to export the entire database to a SQL file, then backup and delete the data directory, then re-start the container so the new root password will be set. After that, re-import the dump from the SQL file.
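A sketch of that procedure, assuming the .env-based setup from above (adjust the service name and the data directory to your installation):

# Export all databases (no password needed while MYSQL_ALLOW_EMPTY_PASSWORD is still active)
docker-compose exec -T mariadb mysqldump -uroot --all-databases > full-dump.sql
# Stop the containers and move the old data directory away
docker-compose down
mv mariadb_data mariadb_data.bak
# Re-create the container; it initializes a fresh database with the new root password
docker-compose up -d
# Once MariaDB has finished initializing, re-import the dump
source .env && docker-compose exec -T mariadb mysql -uroot -p"${MARIADB_ROOT_PASSWORD}" < full-dump.sql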

Posted by Uli Köhler in Databases, Docker, Linux

Best practice for installing & autostarting OpenVPN client/server configurations on Ubuntu

This post details my systemd-based setup for installing and activating OpenVPN client or server configs on Ubuntu. It might also work for other Linux distributions that are based on systemd.

First, place the OpenVPN config (usually a .ovpn file, but it can also be a .conf file) in /etc/openvpn. Note that you need to change the filename extension to .conf; a .ovpn extension won’t work. Furthermore, ensure that there are no spaces in the filename.

In this example, our original OpenVPN config will be called techoverflow.ovpn, hence it needs to be copied to /etc/openvpn/techoverflow.conf!
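For example, assuming the config is in the current directory:

sudo cp techoverflow.ovpn /etc/openvpn/techoverflow.conf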

Now we can enable (i.e. autostart on boot – but not start immediately) the config using

sudo systemctl enable openvpn@techoverflow

For techoverflow.conf you need to systemctl enable openvpn@techoverflow, whereas for a hypothetical foo.conf you would need to systemctl enable openvpn@foo.

Now we can start the VPN config – i.e. run it immediately using

sudo systemctl start openvpn@techoverflow

Now your VPN client or server is running – or is it? We shall check the logs using

journalctl -xfu openvpn@techoverflow

In order to manually restart the VPN client or server use

sudo systemctl restart openvpn@techoverflow

and similarly run this to stop the VPN client or server:

sudo systemctl stop openvpn@techoverflow

In order to show if the instance is running – i.e. show its status, use

sudo systemctl status openvpn@techoverflow

Example output for an OpenVPN client:

● openvpn@techoverflow.service - OpenVPN connection to techoverflow
     Loaded: loaded (/lib/systemd/system/openvpn@.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2020-11-29 03:37:52 CET; 953ms ago
       Docs: man:openvpn(8)
             https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
             https://community.openvpn.net/openvpn/wiki/HOWTO
   Main PID: 4123809 (openvpn)
     Status: "Pre-connection initialization successful"
      Tasks: 1 (limit: 18689)
     Memory: 1.3M
     CGroup: /system.slice/system-openvpn.slice/openvpn@techoverflow.service
             └─4123809 /usr/sbin/openvpn --daemon ovpn-techoverflow --status /run/openvpn/techoverflow.status 10 --cd /etc/openvpn --script-security 2 --config /etc/ope>

Nov 29 03:37:52 localgrid systemd[1]: Starting OpenVPN connection to techoverflow...
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Sep >
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: library versions: OpenSSL 1.1.1f  31 Mar 2020, LZO 2.10
Nov 29 03:37:52 localgrid systemd[1]: Started OpenVPN connection to techoverflow.
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: TCP/UDP: Preserving recently used remote address: [AF_INET]83.135.163.227:19011
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: UDPv4 link local (bound): [AF_INET][undef]:1194
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: UDPv4 link remote: [AF_INET]83.135.163.22:19011
Nov 29 03:37:53 localgrid ovpn-techoverflow[4123809]: [nas-vpn.haar.techoverflow.net] Peer Connection Initiated with [AF_INET]83.135.163.227:19011


Posted by Uli Köhler in Linux, VPN

Simple self-hosted WebWormhole.io using docker-compose

WebWormhole.io is a new service similar to and inspired by magic-wormhole that allows easily sharing files between browsers without the need to install any software. Internally, it uses WebRTC, allowing direct transfer of files between computers even through firewalls.

While there is no official Docker image published on Docker Hub, the WebWormhole GitHub project provides an official Dockerfile. Based on this, I have published ulikoehler/webwormhole which has been built using

git clone https://github.com/saljam/webwormhole.git
cd webwormhole
docker build -t ulikoehler/webwormhole:latest .
docker push ulikoehler/webwormhole:latest

This is the docker-compose.yml that you can use to run WebWormhole behind a reverse proxy:

version: '3'
services:
  webwormhole:
    image: 'ulikoehler/webwormhole:latest'
    entrypoint: ["/bin/ww", "server", "-http=localhost:52618", "-https="]
    network_mode: host

and this is my nginx config:

server {
    server_name  webwormhole.mydomain.com;

    access_log off;
    error_log /var/log/nginx/webwormhole.mydomain.com.error.log;

    location / {
        proxy_pass http://localhost:52618/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/webwormhole.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/webwormhole.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = webwormhole.mydomain.com) {
        return 301 https://$host$request_uri;
    }

    server_name webwormhole.mydomain.com;

    listen 80;
    return 404; # managed by Certbot
}

I store docker-compose.yml in /var/lib/webwormhole.mydomain.com and I used the script from our previous post Create a systemd service for your docker-compose project in 10 seconds in order to create this systemd config file in /etc/systemd/system/webwormhole.mydomain.com.service:

[Unit]
Description=webwormhole.mydomain.com
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/webwormhole.mydomain.com
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down

[Install]
WantedBy=multi-user.target

which you can start and enable using

sudo systemctl enable webwormhole.mydomain.com
sudo systemctl start webwormhole.mydomain.com


Posted by Uli Köhler in Docker, Linux

How to make PowerShell output error messages in English

If you want to see PowerShell output (e.g. an error message) in English instead of your local language, prefix your command with

[Threading.Thread]::CurrentThread.CurrentUICulture = 'en-US';

For example, in order to run My-Cmdlet -Arg 1 with output in English instead of your local language, use

[Threading.Thread]::CurrentThread.CurrentUICulture = 'en-US'; My-Cmdlet -Arg 1

[Threading.Thread]::CurrentThread.CurrentUICulture only affects the current command and does not have any effect on other commands. Hence you need to prepend it to each and every command for which you want to see the output in English.

Possibly you also need to install the English help files in order to see more messages in English. In order to do that, run this command in PowerShell as an administrator:

Update-Help -UICulture en-US
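If you need English output frequently, you can wrap the prefix in a small helper function. A sketch; Invoke-English is a hypothetical name, not a built-in cmdlet:

# Run a script block with English UI culture on the current thread
function Invoke-English {
    param([scriptblock]$Command)
    [Threading.Thread]::CurrentThread.CurrentUICulture = 'en-US'
    & $Command
}

# Usage example:
Invoke-English { My-Cmdlet -Arg 1 }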


Posted by Uli Köhler in PowerShell, Windows

How to restore MySQL database dump in docker-compose mariadb container

Use this snippet to restore a SQL file in your MariaDB container:

docker-compose exec -T [container name] mysql -uroot < mydump.sql

This assumes you have not set a root password. In order to use a root password, use

docker-compose exec -T mariadb mysql -uroot -pmysecretrootpassword < mydump.sql

-T means don’t use a TTY, in other words, don’t expect interactive input. This avoids the

the input device is not a TTY

error message.

Posted by Uli Köhler in Container, Docker

How to use child_process.exec in Koa (async/await)

First install the child-process-promise library

npm i --save child-process-promise

Then you can use it like this:

const router = require('koa-router')();
const {exec} = require('child-process-promise');

router.get('/test', async ctx => {
  // exec() resolves to an object with stdout & stderr properties
  const {stdout} = await exec('python myscript.py');
  ctx.body = stdout.toString();
});
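If you prefer to avoid the extra dependency, Node's built-in util.promisify provides the same async/await interface for child_process.exec (Node 8+); a sketch:

const util = require('util');
const exec = util.promisify(require('child_process').exec);

router.get('/test2', async ctx => {
  // The promisified exec resolves to an object with stdout & stderr strings
  const {stdout} = await exec('python myscript.py');
  ctx.body = stdout;
});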


Posted by Uli Köhler in Javascript

How to format IPv6 address in groups of 32 bits in Python

def format_ipv6addr_group2(addr):
    """
    Format IPv6 addresses in 4 \n-separated groups of 32 bits
    Returns a string
    """
    addr_s = str(addr.exploded)
    return f"{addr_s[:10]}\n{addr_s[10:20]}\n{addr_s[20:30]}\n{addr_s[30:40]}"

Usage example:

import ipaddress

addr = ipaddress.IPv6Address("fd66:6cbb:8c10:1234:4567:89ab:cdef:0123")
print(format_ipv6addr_group2(addr))

Output:

fd66:6cbb:
8c10:1234:
4567:89ab:
cdef:0123


Posted by Uli Köhler in Python

How to generate random IPv6 addresses in a given network using Python

This code generates random IPv6 addresses in a given network using Python’s ipaddress module:

import ipaddress
import random

def random_ipv6_addr(network):
    """
    Generate a random IPv6 address in the given network
    Example: random_ipv6_addr("fd66:6cbb:8c10::/48")
    Returns an IPv6Address object.
    """
    net = ipaddress.IPv6Network(network)
    # Select a random host number within the network
    # (random.randint is inclusive on both ends, hence num_addresses - 1)
    addr_no = random.randint(0, net.num_addresses - 1)
    # Create the random address by converting to a 128-bit integer, adding addr_no and converting back
    network_int = int.from_bytes(net.network_address.packed, byteorder="big")
    addr_int = network_int + addr_no
    addr = ipaddress.IPv6Address(addr_int.to_bytes(16, byteorder="big"))
    return addr

# Usage example
print(random_ipv6_addr("fdce:4879:a1e9::/48"))
# Prints e.g. fdce:4879:a1e9:e351:1a01:be9:4d9a:157d

It works by first converting the IPv6 network address to a 128-bit integer and then adding a random host number. After that, the result is converted back to an IPv6Address object.

Posted by Uli Köhler in Networking, Python

How to modify file inside a ZIP file using Python

Python provides the zipfile module to read and write ZIP files. Our previous posts Python example: List files in ZIP archive and Downloading & reading a ZIP file in memory using Python show how to list and read files inside a ZIP file.

In this example, we will show how to copy the files from one ZIP file to another and modify one of the files in the process. This is often the case if you want to use ZIP file formats like ODT or LBX as templates, replacing parts of the text content of a file.

import zipfile

# Example filenames, hypothetical for this sketch
srcfile = "input.zip"
dstfile = "output.zip"

with zipfile.ZipFile(srcfile) as inzip, zipfile.ZipFile(dstfile, "w") as outzip:
    # Iterate the input files
    for inzipinfo in inzip.infolist():
        # Read input file
        with inzip.open(inzipinfo) as infile:
            if inzipinfo.filename == "test.txt":
                content = infile.read()
                # Modify the content of the file by replacing a string.
                # read() returns bytes, hence the bytes literals.
                content = content.replace(b"abc", b"123")
                # Write content
                outzip.writestr(inzipinfo.filename, content)
            else: # Other file, don't want to modify => just copy it
                outzip.writestr(inzipinfo.filename, infile.read())

After opening both the input file and the output ZIP using

with zipfile.ZipFile(srcfile) as inzip, zipfile.ZipFile(dstfile, "w") as outzip:

we iterate through all the files in the input ZIP file:

for inzipinfo in inzip.infolist():

In case we’ve encountered the file we want to modify, which is identified by its filename test.txt:

if inzipinfo.filename == "test.txt":

we read and modify the content …

with inzip.open(inzipinfo) as infile:
    content = infile.read().replace(b"abc", b"123")

… and write the modified content to the output ZIP:

outzip.writestr("test.txt", content)

Otherwise, if the current file is not the file we want to modify, we just copy the file to the output ZIP using

outzip.writestr(inzipinfo.filename, infile.read())

Note that the algorithm will always .read() the file from the input ZIP, hence its entire content will be temporarily stored in memory. Therefore, it doesn’t work well for files which are large when uncompressed.
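If you need to handle large members, you can stream-copy the unmodified files instead of reading them into memory. A sketch using zipfile's writable member streams (available since Python 3.6):

import shutil

# Inside the loop, for files that should be copied unmodified:
with inzip.open(inzipinfo) as infile, outzip.open(inzipinfo.filename, mode="w") as outfile:
    shutil.copyfileobj(infile, outfile)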

Posted by Uli Köhler in Python

How to find out if your WSL Ubuntu is running on WSL1 or WSL2

In order to find out if your Ubuntu or other WSL Linux installation is running on WSL1 or WSL2, open a PowerShell and run

wsl --list -v

This will show you all WSL installations and the associated WSL versions:

PS C:\WINDOWS\system32> wsl --list -v
  NAME      STATE           VERSION
* Ubuntu    Running         1

As you can see in the VERSION column, the Ubuntu installation is running WSL version 1.

Go to the WSL2 installation page for instructions on how to upgrade to WSL2.
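Once WSL2 is available on your system, you can convert an existing installation using, for example:

wsl --set-version Ubuntu 2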

Posted by Uli Köhler in Windows

Create a systemd service for your docker-compose project in 10 seconds

Run this in the directory where docker-compose.yml is located:

wget -qO- https://techoverflow.net/scripts/create-docker-compose-service.sh | bash /dev/stdin

This script will automatically create a systemd service that starts your project using docker-compose up and shuts it down using docker-compose down. Our script will also systemctl enable the service (i.e. start it automatically on boot) and systemctl start it (start it immediately).

How it works

The command above will download the script from TechOverflow and run it in bash:

#!/bin/bash
# Create a systemd service that autostarts & manages a docker-compose instance in the current directory
# by Uli Köhler - https://techoverflow.net
# Licensed as CC0 1.0 Universal
SERVICENAME=$(basename $(pwd))

echo "Creating systemd service... /etc/systemd/system/${SERVICENAME}.service"
# Create systemd service file
sudo tee /etc/systemd/system/$SERVICENAME.service >/dev/null <<EOF
[Unit]
Description=$SERVICENAME
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=$(pwd)
# Shutdown container (if running) when unit is started
ExecStartPre=$(which docker-compose) -f docker-compose.yml down
# Start container when unit is started
ExecStart=$(which docker-compose) -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=$(which docker-compose) -f docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

echo "Enabling & starting $SERVICENAME"
# Autostart systemd service
sudo systemctl enable $SERVICENAME.service
# Start systemd service now
sudo systemctl start $SERVICENAME.service

The service name is the directory name:

SERVICENAME=$(basename $(pwd))

Now we will create the service file in /etc/systemd/system/${SERVICENAME}.service using the template embedded in the script.

The script will automatically determine the location of docker-compose using $(which docker-compose) and finally enable and start the systemd service:

# Autostart systemd service
sudo systemctl enable $SERVICENAME.service
# Start systemd service now
sudo systemctl start $SERVICENAME.service


Posted by Uli Köhler in Docker, Linux

Running Portainer using docker-compose and systemd

In this post we’ll show how to run Portainer Community Edition on a computer using docker-compose and systemd. In case you haven’t installed docker or docker-compose, see How to install docker and docker-compose on Ubuntu in 30 seconds.

If you already have a Portainer instance and want to run a Portainer Edge Agent on a remote computer, see Running Portainer Edge Agent using docker-compose and systemd!

First, create the directory where the docker-compose.yml will live and edit it:

sudo mkdir -p /var/lib/portainer
sudo nano /var/lib/portainer/docker-compose.yml

Now paste this config file:

version: '2'

services:
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    ports:
      - 9192:9000
      - 8000:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:

In this case, we’re exposing the web UI on port 9192 since we access it through a reverse proxy setup. Using Portainer over HTTP without an HTTPS frontend is a security risk!

This is my nginx config that is used to reverse proxy my Portainer instance. Note that I generate the HTTPS config using certbot --nginx, hence it’s not shown here:

server {
    server_name  portainer.mydomain.com;

    location / {
        proxy_pass http://localhost:9192/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen 80;
}

Now we can create the systemd service that will automatically start Portainer:

sudo nano /etc/systemd/system/portainer.service

Now paste this config file:

[Unit]
Description=Portainer
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/portainer
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down

[Install]
WantedBy=multi-user.target

Now we can enable autostart on boot and start Portainer:

sudo systemctl enable portainer.service
sudo systemctl start portainer.service


Posted by Uli Köhler in Container, Docker, Linux, Portainer

How to fix Portainer Edge Agent [message: an error occured during short poll] [error: short poll request failed]

Problem:

You are trying to run a Portainer Edge Agent, but can’t connect to the endpoint in the Portainer UI, but you see an error message like this in the logs:

2020/10/24 13:58:23 [ERROR] [internal,edge,poll] [message: an error occured during short poll] [error: short poll request failed]

Solution:

First, check your EDGE_ID and your EDGE_KEY. In most cases, these are incorrectly set and prevent proper communication between the Edge Agent and the Portainer instance.

If that doesn’t help, check your firewall: both port 8000 and the HTTPS port of the Portainer instance must be reachable from the host running the Edge Agent. When creating a new Endpoint, Portainer will show you a message like

The agent will communicate with Portainer via https://portainer.mydomain.com and tcp://portainer.mydomain.com:8000

Depending on your system configuration, you need to enable port 8000 on your firewall, e.g. using

sudo ufw allow 8000/tcp

In order to test the connectivity, you can use nc:

echo -e "\n" | nc portainer.mydomain.com 8000

This is how it looks on a working Portainer instance:

$ echo -e "\n" |  nc portainer.mydomain.com 8000
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close

400 Bad Request

In case you don’t see any response, check your firewall and check if you’ve exposed port 8000 on the Portainer container.

Also, you can decode your EDGE_KEY (use the one that is actually used in the Portainer Edge Agent instance) in any base64 decoder, for example an online decoder like base64code.com. Decoding

aHR0cHM6Ly9wb3J0YWluZXIubXlkb21haW4uY29tfHBvcnRhaW5lci5teWRvbWFpbi5jb206ODAwMHw3MTphNTpiYTpkMjo4MToxOToxMTo4NzplYTowZjo0NDo0YTpmYTo0Mjo4YTphNnwz

will result in this string:

https://portainer.mydomain.com|portainer.mydomain.com:8000|71:a5:ba:d2:81:19:11:87:ea:0f:44:4a:fa:42:8a:a6|3

in which you can check the URLs. For example, check if the protocol (http or https) mismatches what you used to configure your main Portainer instance.
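You can also decode the key locally using the standard base64 tool:

echo "aHR0cHM6Ly9wb3J0YWluZXIubXlkb21haW4uY29tfHBvcnRhaW5lci5teWRvbWFpbi5jb206ODAwMHw3MTphNTpiYTpkMjo4MToxOToxMTo4NzplYTowZjo0NDo0YTpmYTo0Mjo4YTphNnwz" | base64 -d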

Finally, on the host that is running the Portainer Edge Agent, check if the hostname resolves correctly:

host portainer.mydomain.com

This should show you at least the IPv4 address of the Portainer instance. If that is not correct, these are the most likely culprits:

  • Your configured DNS server doesn’t work correctly. Use another DNS server, like 1.1.1.1 (echo nameserver 1.1.1.1 > /etc/resolv.conf will typically fix that temporarily).
  • Your DNS records are not set correctly for the domain name you use
  • If you use Dynamic DNS, your DDNS client might not have updated the record correctly

Always check if you get the same results from your local computer as you get from the host that is running the Portainer Edge Agent.

Posted by Uli Köhler in Container, Docker, Portainer

Running Portainer Edge Agent using docker-compose and systemd

In this post we’ll show how to run the Portainer Edge Agent on a computer using docker-compose and systemd. In case you haven’t installed docker or docker-compose, see How to install docker and docker-compose on Ubuntu in 30 seconds.

If you don’t have a Portainer instance running to which the Edge Agent can connect, see Running Portainer using docker-compose and systemd!

First, create the directory where the docker-compose.yml will live and edit it:

sudo mkdir -p /var/lib/portainer-edge-agent
sudo nano /var/lib/portainer-edge-agent/docker-compose.yml

Now paste this config file:

version: "3"

services:
  portainer_edge_agent:
    image: portainer/agent
    command: -H unix:///var/run/docker.sock
    restart: always
    volumes:
      - /:/host
      - /var/lib/docker/volumes:/var/lib/docker/volumes
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_agent_data:/data
    environment:
      - CAP_HOST_MANAGEMENT=1
      - EDGE=1
      - EDGE_ID=[YOUR EDGE ID]
      - EDGE_KEY=[YOUR EDGE KEY]

volumes:
  portainer_agent_data:

Don’t forget to fill in [YOUR EDGE ID] and [YOUR EDGE KEY]. You can find those by creating a new endpoint in your Portainer instance.

Now we can create the systemd service that will automatically start the Edge Agent:

sudo nano /etc/systemd/system/PortainerEdgeAgent.service

Now paste this config file:

[Unit]
Description=PortainerEdgeAgent
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/portainer-edge-agent
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down

[Install]
WantedBy=multi-user.target

Now we can enable and start the agent:

sudo systemctl enable PortainerEdgeAgent.service
sudo systemctl start PortainerEdgeAgent.service


Posted by Uli Köhler in Container, Docker, Linux, Portainer