Container

How to use pg_dump in Gitlab Docker container

When using the official GitLab Docker container, you can use this command to run pg_dump:

docker exec -t -u gitlab-psql [container name] pg_dump -h /var/opt/gitlab/postgresql/ -d gitlabhq_production > gitlab-dump.sql

This will save the SQL dump of the database into gitlab-dump.sql.
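
If you don't know your container name, you can list the names of all running containers first (standard docker command):

docker ps --format '{{.Names}}'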

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab pg_dump -h /var/opt/gitlab/postgresql/ -d gitlabhq_production > gitlab-dump.sql

Note that gitlab in this command is the container name.

Posted by Uli Köhler in Docker, Linux

How to run psql in Gitlab Docker image

When using the official GitLab Docker container, you can use this command to run psql:

docker exec -t -u gitlab-psql [container name] psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

Note that gitlab in this command is the container name.
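
You can also run a single SQL statement non-interactively by appending -c, for example with a harmless query just to verify connectivity:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -c 'SELECT version();'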

Posted by Uli Köhler in Databases, Docker, Linux

How to fix XCP-NG XENAPI_MISSING_PLUGIN(xscontainer) or Error on getting the default coreOS cloud template

Problem:

When creating a CoreOS container on your XCP-NG host, XCP-NG center or XenOrchestra tells you

Cloud config: Error on getting the default coreOS cloud template

with the error message

XENAPI_MISSING_PLUGIN(xscontainer)
This is a XenServer/XCP-ng error

Solution:

Log into the host’s console as root using SSH or the console in XCP-NG center or XenOrchestra and run

yum install xscontainer

After that, reload the page (F5) you use to create your container. No host restart is required.

Note that if you have multiple hosts, you need to yum install xscontainer for each host individually.
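
If you manage several hosts, a small SSH loop saves time; a sketch, assuming root SSH access and hypothetical hostnames xcp1 and xcp2:

for host in xcp1 xcp2; do ssh root@$host yum install -y xscontainer; done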

Posted by Uli Köhler in Docker, Virtualization

How to backup data from docker-compose MariaDB container using mysqldump

For containers with a MYSQL_ROOT_PASSWORD stored in .env

This is the recommended best practice. For this example, we will assume that .env looks like this:

MARIADB_ROOT_PASSWORD=mophur3roh6eegiL8Eeto7goneeFei

To create a dump:

source .env && docker-compose exec mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql, ensure the container IS running (the client needs a running server to connect to), then use:

source .env && docker-compose exec -T mariadb mysql -uroot -p${MARIADB_ROOT_PASSWORD} < mariadb-dump.sql

Note that you have to replace mariadb with the name of your database service in docker-compose.yml.
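
For unattended backups (e.g. from cron), the same dump command works inside a small script. A minimal sketch, assuming docker-compose.yml and .env live in the hypothetical directory /opt/myapp; note the added -T flag, since cron provides no TTY:

#!/bin/bash
# Dump all databases into a timestamped SQL file (sketch)
cd /opt/myapp || exit 1
source .env
docker-compose exec -T mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql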

For containers with a MYSQL_ROOT_PASSWORD set to some value not stored in .env

This is secure, but you typically have to copy the password multiple times: once for the mariadb container, once for whatever container or application uses the database, and once for any backup script that exports a SQL dump of the entire database.

To create a dump:

docker-compose exec mariadb mysqldump -uroot -pYOUR_MARIADB_ROOT_PASSWORD --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql:

docker-compose exec -T mariadb mysql -uroot -pYOUR_MARIADB_ROOT_PASSWORD < mariadb-dump.sql

Replace YOUR_MARIADB_ROOT_PASSWORD with the root password of your installation. Furthermore, you have to replace mariadb with the name of your database service in docker-compose.yml.

For containers with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This configuration is a security risk – see The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes.

To create a dump:

docker-compose exec mariadb mysqldump -uroot --all-databases > mariadb-dump-$(date +%F_%H-%M-%S).sql

To restore a dump from mariadb-dump.sql:

docker-compose exec -T mariadb mysql -uroot < mariadb-dump.sql

More posts on this topic

TechOverflow is currently planning a post on how to use bup in order to provide quick & efficient backups of docker-based MariaDB/MySQL installations.

Posted by Uli Köhler in Docker

The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This is part of a common docker-compose.yml of the kind frequently seen on the internet:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
 [...]

Simple and secure, right? A no-root-password MariaDB instance that’s running in a separate container and does not have its port 3306 exposed – so only services from the same docker-compose.yml can reach it since docker-compose puts all those services in a separate network.

Wrong.

While the MariaDB instance is not reachable from the internet since no port is exposed to the outside, it can be reached by any process on the host via its internal IP address.

In order to comprehend what’s happening, we shall take a look at docker’s networks. In this case, my docker-compose config is called redmine.

$ docker network ls | grep redmine
ea7ed38f469b        redmine_default           bridge              local

This is the network that docker-compose creates without any explicit network configuration. Let's inspect the network (using docker network inspect redmine_default) to show the containers:

[
    // [...]
        "Containers": {
            "2578fc65b4dab9f204d0a252e421dd4ddd9f41c35642d48350f4e59370581757": {
                "Name": "redmine_mariadb_1",
                "EndpointID": "1e6d81acc096a12fc740173f4e107090333c42e8a86680ac5c9886c148d578e7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "7867f71d2a36265c34c133b70aea487b90ea68fcf30ecb42d6e7e9a376cf8e07": {
                "Name": "redmine_redmine_1",
                "EndpointID": "f5ac7b3325aa9bde12f0c625c4881f9a6fc9957da4965767563ec9a3b76c19c3",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
    // [...]
]

We can see that the IP address of the redmine_mariadb_1 container is 172.18.0.2.

Any process on the host (even one run by an unprivileged user) can connect to the MySQL server via the internal IP 172.18.0.2 without any password, e.g.

$ mysqldump -uroot -h172.18.0.2 --all-databases
# This will print the dump of the entire MariaDB database

How to mitigate this security risk?

Mitigation is quite easy since we only need to set a root password for the MariaDB instance.

My recommended best practice is to avoid duplicate passwords. In order to do this, create a .env file in the directory where docker-compose.yml is located:

MARIADB_ROOT_PASSWORD=aiPaipei6ookaemue4voo0NooC0AeH

Remember to replace the password with a random password, or use this shell command to generate one automatically:

echo MARIADB_ROOT_PASSWORD=$(pwgen 30) > .env

Now we can use ${MARIADB_ROOT_PASSWORD} in docker-compose.yml wherever the MariaDB root password is required, for example:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
  redmine:
    image: 'redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - REDMINE_EMAIL=admin@example.com
      - REDMINE_DB_MYSQL=mariadb
      - REDMINE_DB_USERNAME=root
      - REDMINE_DB_PASSWORD=${MARIADB_ROOT_PASSWORD}
    ports:
      - '3718:3000'
    volumes:
      - './redmine_data/conf:/usr/src/redmine/conf'
      - './redmine_data/files:/usr/src/redmine/files'
      - './redmine_themes:/usr/src/redmine/public/themes'
    depends_on:
      - mariadb

Note that the mariadb docker image will not change the root password if the database directory already exists (mariadb_data in this example).

My recommended best practice for changing the root password is to use mysqldump --all-databases to export the entire database to a SQL file, then back up and delete the data directory, and then restart the container so the new root password will be set. After that, re-import the dump from the SQL file.
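
As a sketch, the procedure from the previous paragraph could look like this (the service name mariadb and the data directory ./mariadb_data are assumptions; adapt both to your setup):

# 1. Export everything while the old password still works
source .env && docker-compose exec -T mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases > full-dump.sql
# 2. Stop the stack, then back up and remove the data directory
docker-compose down
mv mariadb_data mariadb_data.bak
# 3. Set the new MARIADB_ROOT_PASSWORD in .env, then re-create the database
docker-compose up -d
# 4. Re-import the dump using the new password
source .env && docker-compose exec -T mariadb mysql -uroot -p${MARIADB_ROOT_PASSWORD} < full-dump.sql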

Posted by Uli Köhler in Databases, Docker, Linux

Simple self-hosted WebWormhole.io using docker-compose

Note: This config is currently missing a TURN server, so it won't work if the clients can't reach each other directly! I am working on this.

WebWormhole.io is a new service, similar to and inspired by magic-wormhole, that allows easily sharing files between browsers without the need to install any software. Internally, it uses WebRTC, allowing direct transfer of files between computers even through firewalls.

While there is no official Docker image published on Docker Hub, the WebWormhole GitHub project provides an official Dockerfile. Based on this, I have published ulikoehler/webwormhole, which has been built using

git clone https://github.com/saljam/webwormhole.git
cd webwormhole
docker build -t ulikoehler/webwormhole:latest .
docker push ulikoehler/webwormhole:latest

This is the docker-compose.yml that you can use to run WebWormhole behind a reverse proxy:

version: '3'
services:
  webwormhole:
    image: 'ulikoehler/webwormhole:latest'
    entrypoint: ["/bin/ww", "server", "-http=localhost:52618", "-https="]
    network_mode: host

and this is my nginx config:

server {
    server_name  webwormhole.mydomain.com;

    access_log off;
    error_log /var/log/nginx/webwormhole.mydomain.com.error.log;

    location / {
        proxy_pass http://localhost:52618/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/webwormhole.mydomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/webwormhole.mydomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = webwormhole.mydomain.com) {
        return 301 https://$host$request_uri;
    }

    server_name webwormhole.mydomain.com;

    listen 80;
    return 404; # managed by Certbot
}

I store docker-compose.yml in /var/lib/webwormhole.mydomain.com and used the script from our previous post Create a systemd service for your docker-compose project in 10 seconds to create this systemd config file in /etc/systemd/system/webwormhole.mydomain.com.service:

[Unit]
Description=webwormhole.mydomain.com
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/webwormhole.mydomain.com
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down

[Install]
WantedBy=multi-user.target

which you can enable and start using

sudo systemctl enable webwormhole.mydomain.com
sudo systemctl start webwormhole.mydomain.com
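
To verify that the service came up, the usual systemd commands apply:

sudo systemctl status webwormhole.mydomain.com
sudo journalctl -fu webwormhole.mydomain.com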


Posted by Uli Köhler in Docker, Linux

How to restore MySQL database dump in docker-compose mariadb container

Use this snippet to restore a SQL file in your MariaDB container:

docker-compose exec -T [container name] mysql -uroot < mydump.sql

This assumes you have not set a root password. In order to use a root password, use

docker-compose exec -T mariadb mysql -uroot -pmysecretrootpassword < mydump.sql

-T means don't allocate a pseudo-TTY, i.e. don't expect interactive input. This avoids the

the input device is not a TTY

error message.
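
If your dump contains only a single database without CREATE DATABASE statements, you can restore it into a specific database by naming it on the command line (mydatabase is a placeholder):

docker-compose exec -T mariadb mysql -uroot -pmysecretrootpassword mydatabase < mydump.sql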

Posted by Uli Köhler in Container, Docker

Create a systemd service for your docker-compose project in 10 seconds

Run this in the directory where docker-compose.yml is located:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This script will automatically create a systemd service that starts your stack using docker-compose up and shuts it down using docker-compose down. Our script will also systemctl enable the service (i.e. start it automatically on boot) and systemctl start it (start it immediately).

How it works

The command above will download the script from TechOverflow and run it in bash:

#!/bin/bash
# Create a systemd service that autostarts & manages a docker-compose instance in the current directory
# by Uli Köhler - https://techoverflow.net
# Licensed as CC0 1.0 Universal
SERVICENAME=$(basename $(pwd))

echo "Creating systemd service... /etc/systemd/system/${SERVICENAME}.service"
# Create systemd service file
sudo cat >/etc/systemd/system/$SERVICENAME.service <<EOF
[Unit]
Description=$SERVICENAME
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
TimeoutStopSec=15
WorkingDirectory=$(pwd)
# Shutdown container (if running) when unit is started
ExecStartPre=$(which docker-compose) -f docker-compose.yml down
# Start container when unit is started
ExecStart=$(which docker-compose) -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=$(which docker-compose) -f docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

echo "Enabling & starting $SERVICENAME"
# Autostart systemd service
sudo systemctl enable $SERVICENAME.service
# Start systemd service now
sudo systemctl start $SERVICENAME.service

The service name is the directory name:

SERVICENAME=$(basename $(pwd))
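
For example, if the script is run in the hypothetical directory /var/lib/myapp, the resulting service will be called myapp:

$ cd /var/lib/myapp && basename $(pwd)
myapp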

Now we will create the service file in /etc/systemd/system/${SERVICENAME}.service using the template embedded in the script.

The script will automatically determine the location of docker-compose using $(which docker-compose) and finally enable and start the systemd service:

# Autostart systemd service
sudo systemctl enable $SERVICENAME.service
# Start systemd service now
sudo systemctl start $SERVICENAME.service


Posted by Uli Köhler in Docker, Linux

Running Portainer using docker-compose and systemd

In this post we’ll show how to run Portainer Community Edition on a computer using docker-compose and systemd. In case you haven’t installed docker or docker-compose, see How to install docker and docker-compose on Ubuntu in 30 seconds.

If you already have a Portainer instance and want to run a Portainer Edge Agent on a remote computer, see Running Portainer Edge Agent using docker-compose and systemd!

First, create the directory where the docker-compose.yml will live and edit it:

sudo mkdir -p /var/lib/portainer
sudo nano /var/lib/portainer/docker-compose.yml

Now paste this config file:

version: '2'

services:
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    ports:
      - 9192:9000
      - 8000:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:

In this case, we're exposing the Web UI on port 9192 since we're using a reverse proxy setup to access it. Using Portainer over HTTP without an HTTPS frontend is a security risk!

This is my nginx config that is used to reverse proxy my Portainer instance. Note that I generate the HTTPS config using certbot --nginx, hence it’s not shown here:

server {
    server_name  portainer.mydomain.com;

    location / {
        proxy_pass http://localhost:9192/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen 80;
}

Now we can create the systemd service that will automatically start Portainer:

sudo nano /etc/systemd/system/portainer.service
[Unit]
Description=Portainer
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/portainer
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down

[Install]
WantedBy=multi-user.target

Now we can enable autostart on boot and start Portainer:

sudo systemctl enable portainer.service
sudo systemctl start portainer.service
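
A quick way to check that the web UI answers on the mapped port (plain HTTP here, since TLS is handled by the nginx frontend):

curl -I http://localhost:9192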


Posted by Uli Köhler in Container, Docker, Linux, Portainer

How to fix Portainer Edge Agent [message: an error occured during short poll] [error: short poll request failed]

Problem:

You are trying to run a Portainer Edge Agent, but it can't connect to the endpoint configured in the Portainer UI, and you see an error message like this in the logs:

2020/10/24 13:58:23 [ERROR] [internal,edge,poll] [message: an error occured during short poll] [error: short poll request failed]

Solution:

First, check your EDGE_ID and your EDGE_KEY. In most cases, these are incorrectly set and prevent proper communication between the Edge Agent and the Portainer instance.

If that doesn't help, check your firewall: both the HTTPS port and port 8000 of the Portainer instance need to be reachable from the host running the Edge Agent. When creating a new endpoint, Portainer will show you a message like

The agent will communicate with Portainer via https://portainer.mydomain.com and tcp://portainer.mydomain.com:8000

Depending on your system configuration, you need to open port 8000 on your firewall, e.g. using

sudo ufw allow 8000/tcp

In order to test the connectivity, you can use nc:

echo -e "\n" | nc portainer.mydomain.com 8000

This is how it looks on a working Portainer instance:

$ echo -e "\n" |  nc portainer.mydomain.com 8000
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close

400 Bad Request

In case you don’t see any response, check your firewall and check if you’ve exposed port 8000 on the Portainer container.

Also, you can decode your EDGE_KEY (use the one that is actually used by the Portainer Edge Agent instance) in any online base64 decoder like base64code.com. Decoding

aHR0cHM6Ly9wb3J0YWluZXIubXlkb21haW4uY29tfHBvcnRhaW5lci5teWRvbWFpbi5jb206ODAwMHw3MTphNTpiYTpkMjo4MToxOToxMTo4NzplYTowZjo0NDo0YTpmYTo0Mjo4YTphNnwz

will result in this string:

https://portainer.mydomain.com|portainer.mydomain.com:8000|71:a5:ba:d2:81:19:11:87:ea:0f:44:4a:fa:42:8a:a6|3

in which you can check the URLs. For example, check whether the protocol (http or https) matches the one you used to configure your main Portainer instance.
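
Instead of pasting the key into a website, you can also decode it locally (replace the string with your own EDGE_KEY):

echo 'YOUR_EDGE_KEY' | base64 -d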

Finally, on the host that is running the Portainer Edge Agent, check if the hostname resolves correctly:

host portainer.mydomain.com

This should show you at least the IPv4 address of the Portainer instance. If that is not correct, these are the most likely culprits:

  • Your configured DNS server doesn’t work correctly. Use another DNS server, like 1.1.1.1 (echo nameserver 1.1.1.1 > /etc/resolv.conf will typically fix that temporarily).
  • Your DNS records are not set correctly for the domain name you use
  • If you use Dynamic DNS, your DDNS client might not have updated the record correctly

Always check if you get the same results from your local computer as you get from the host that is running the Portainer Edge Agent.

Posted by Uli Köhler in Container, Docker, Portainer

Running Portainer Edge Agent using docker-compose and systemd

In this post we’ll show how to run the Portainer Edge Agent on a computer using docker-compose and systemd. In case you haven’t installed docker or docker-compose, see How to install docker and docker-compose on Ubuntu in 30 seconds.

If you don’t have a Portainer instance running to which the Edge Agent can connect, see Running Portainer using docker-compose and systemd!

First, create the directory where the docker-compose.yml will live and edit it:

sudo mkdir -p /var/lib/portainer-edge-agent
sudo nano /var/lib/portainer-edge-agent/docker-compose.yml

Now paste this config file:

version: "3"

services:
  portainer_edge_agent:
    image: portainer/agent
    command: -H unix:///var/run/docker.sock
    restart: always
    volumes:
      - /:/host
      - /var/lib/docker/volumes:/var/lib/docker/volumes
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_agent_data:/data
    environment:
      - CAP_HOST_MANAGEMENT=1
      - EDGE=1
      - EDGE_ID=[YOUR EDGE ID]
      - EDGE_KEY=[YOUR EDGE KEY]

volumes:
  portainer_agent_data:

Don’t forget to fill in [YOUR EDGE ID] and [YOUR EDGE KEY]. You can find those by creating a new endpoint in your Portainer instance.

Now we can create the systemd service that will automatically start the Edge Agent:

sudo nano /etc/systemd/system/PortainerEdgeAgent.service
[Unit]
Description=PortainerEdgeAgent
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/portainer-edge-agent
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down

[Install]
WantedBy=multi-user.target

Now we can enable and start the agent:

sudo systemctl enable PortainerEdgeAgent.service
sudo systemctl start PortainerEdgeAgent.service
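
To check whether the agent starts up and connects, follow its logs:

sudo journalctl -fu PortainerEdgeAgent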


Posted by Uli Köhler in Container, Docker, Linux, Portainer

How to setup OnlyOffice using docker-compose & nginx

Prerequisite: Install docker and docker-compose

For example, follow our guide How to install docker and docker-compose on Ubuntu in 30 seconds

Step 1: Create docker-compose.yml

Create the directory where we’ll install OnlyOffice using

sudo mkdir /var/lib/onlyoffice

and then edit the docker-compose configuration using e.g.

sudo nano /var/lib/onlyoffice/docker-compose.yml

and copy and paste this content

version: '3'
services:
  onlyoffice-documentserver:
    image: onlyoffice/documentserver:latest
    restart: always
    environment:
      - JWT_ENABLED=true
      - JWT_SECRET=ahSaTh4waeKe4zoocohngaihaub5pu
    ports:
      - 2291:80
    volumes:
      - ./onlyoffice/data:/var/www/onlyoffice/Data
      - ./onlyoffice/lib:/var/lib/onlyoffice
      - ./onlyoffice/logs:/var/log/onlyoffice
      - ./onlyoffice/db:/var/lib/postgresql

Now set your custom password in JWT_SECRET=...! Don't forget this step, or anyone can use your OnlyOffice server! I'm using pwgen 30 to generate a new random password (install it using sudo apt -y install pwgen).
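
For example, to generate a single 30-character secret on the command line:

pwgen 30 1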

Step 2: Setup systemd service

Create the service using sudo nano /etc/systemd/system/onlyoffice.service:

[Unit]
Description=OnlyOffice server
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose -f /var/lib/onlyoffice/docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f /var/lib/onlyoffice/docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f /var/lib/onlyoffice/docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

Now enable & start the service using

sudo systemctl enable onlyoffice
sudo systemctl start onlyoffice

Step 3: Create nginx reverse proxy configuration

Note that we mapped OnlyOffice's port 80 to port 2291. In case you're not using nginx as a reverse proxy, you need to manually configure your reverse proxy to pass requests to port 2291.

server {
    server_name onlyoffice.mydomain.org;

    access_log /var/log/nginx/onlyoffice.access_log;
    error_log /var/log/nginx/onlyoffice.error_log info;

    location / {
        proxy_pass http://127.0.0.1:2291;
        proxy_http_version 1.1;
        proxy_read_timeout 3600s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host            $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header X-Frontend-Host $host;
        # Uncomment this line and reload once you have setup TLS for that domain !
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    }

    listen 80;
}

Now test if your nginx config works using nginx -t and reload using service nginx reload.

Now I recommend setting up Let's Encrypt for your domain so that your OnlyOffice instance will only be accessed over an encrypted connection (sudo certbot --nginx; see other guides if you don't know how to do that).

Once certbot asks you whether to redirect, choose option 2 – Redirect to HTTPS.

Step 4: Test OnlyOffice

If your installation worked, opening your OnlyOffice domain in a browser should show the OnlyOffice welcome screen.

If not, try checking the logs using

sudo journalctl -xu onlyoffice
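
You can also query the document server directly: recent onlyoffice/documentserver images provide a simple healthcheck endpoint (verify that your version has it) which should return true:

curl http://localhost:2291/healthcheck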

(Optional) Step 5: Configure NextCloud to use OnlyOffice

If you are running NextCloud, go to Settings => ONLYOFFICE and enter your domain and the JWT_SECRET you created before.

Ensure that Connect to demo ONLYOFFICE Document Server is unchecked and click Save.

Nextcloud will tell you at the top right if it has been able to connect to your OnlyOffice instance successfully:

  • Settings successfully updated means that NextCloud is now connected to OnlyOffice
  • Invalid token means that your password / secret key does not match
  • Other messages typically mean that your OnlyOffice instance is not running or that you haven't entered the correct domain or protocol. I recommend only using https:// – use http:// for testing only and don't forget to revert back to https:// once you have found the issue.
Posted by Uli Köhler in Container, Docker, Linux, nginx

How to open a shell in an LXC container

You can run a shell in your LXC container using

lxc exec [name of container] /bin/bash

for example

lxc exec mycontainer /bin/bash
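
You can also run a single command instead of an interactive shell; arguments after -- are passed to the command inside the container:

lxc exec mycontainer -- ls -la /root
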
Posted by Uli Köhler in LXC

How to fix LXC ‘Error: The remote isn’t a private LXD server’

Problem:

You are trying to launch an LXC container using a command like

lxc launch mycontainer ubuntu:18.04

but you see this error message:

Error: The remote isn't a private LXD server

Solution:

Your command line arguments are in the wrong order. You need to run lxc launch [image] [name of container], not lxc launch [name of container] [image]! The correct command looks like this:

lxc launch ubuntu:18.04 mycontainer
 

Posted by Uli Köhler in LXC

How to fix LXC file push ‘Error: Path already exists as a directory: File too large’

Problem:

You are trying to copy a file to an LXC container using lxc file push, but you see this error message:

Error: Path already exists as a directory: File too large

Solution:

Add a slash (/) at the end of your path, for example:

mycontainer/root => mycontainer/root/

Working example:

lxc file push myfile.zip mycontainer/root/

Also see our previous post How to copy files to an LXC container.

Posted by Uli Köhler in LXC

How to copy files to an LXC container

Once you've created an LXC container using a command like

lxc launch ubuntu:18.04 mycontainer

you can push files to the container using

lxc file push myfile.zip mycontainer/root/

This will copy the local file myfile.zip to /root/myfile.zip in the container. Ensure that your path ends with /, since lxc file push myfile.zip mycontainer/root will show this error message:

Error: Path already exists as a directory: File too large

In that case, add a slash (/) to the end of your destination path (e.g. mycontainer/root => mycontainer/root/).
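
The reverse direction works analogously with lxc file pull, e.g. to fetch /root/myfile.zip from the container into the current directory:

lxc file pull mycontainer/root/myfile.zip .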

Posted by Uli Köhler in Container, LXC

How I reduced gitlab memory consumption in my docker-based setup

I'm currently running 4 separate dockerized GitLab instances on my server. These tend to consume quite a lot of memory even when they have not been used for some time.

Reduce the number of unicorn worker processes

The gitlab default is to use 6 unicorn worker processes. By reducing the number of workers to 2, my gitlab memory consumption decreased by approximately 60%:

unicorn['worker_processes'] = 2

In my dockerized setup, I just updated the GITLAB_OMNIBUS_CONFIG in docker-compose.yml and restarted the instance. If you didn't install GitLab using Docker, you might need to run sudo gitlab-ctl reconfigure.

Note that you need at least 2 unicorn workers for gitlab to work properly. See this issue for details.

Also note that reducing the number of workers to the minimum will likely impact your GitLab performance in a negative way. Increase the number of workers if you notice a lack of performance.

Disable Prometheus monitoring

Most small installations do not need Prometheus, the monitoring tool integrated into GitLab:

prometheus_monitoring['enable'] = false

Reduce sidekiq concurrency

Sidekiq is the background job processor integrated into GitLab. The default concurrency is 25; I recommend reducing it:

sidekiq['concurrency'] = 2

This might cause background jobs to take longer since they have to wait in the queue, but for small installations it does not matter in my experience.

Reduce the PostgreSQL shared memory

This was recommended on StackOverflow.

postgresql['shared_buffers'] = "256MB"

Setting this too low might cause a heavier IO load, and all operations (including website page loads) might be slower.

The complete config

This is the combined configuration from all strategies listed above to reduce memory consumption:

# Unicorn config
unicorn['worker_processes'] = 2
# PostgreSQL config
postgresql['shared_buffers'] = "256MB"
# Sidekiq config
sidekiq['concurrency'] = 2
# Prometheus config
prometheus_monitoring['enable'] = false
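
In a dockerized setup, these lines go into the GITLAB_OMNIBUS_CONFIG environment variable. A minimal sketch of the relevant part of docker-compose.yml (image tag and service name are assumptions; adapt them to your setup):

services:
  gitlab:
    image: 'gitlab/gitlab-ce:latest'
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        unicorn['worker_processes'] = 2
        postgresql['shared_buffers'] = "256MB"
        sidekiq['concurrency'] = 2
        prometheus_monitoring['enable'] = false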


Posted by Uli Köhler in Docker, git

How to setup OnlyOffice using docker-compose, systemd and nginx

In this post we show how to set up OnlyOffice using nginx as a reverse proxy, docker-compose to run and configure the OnlyOffice image, and systemd to automatically start and restart the OnlyOffice instance. Running it behind a reverse proxy allows you to have other domains listening on the same IP address and to centrally manage Let's Encrypt SSL certificates.

We will setup the instance in /opt/onlyoffice on port 2291.

Save this file as /opt/onlyoffice/docker-compose.yml and don’t forget to change JWT_SECRET to a random password!

version: '3'
services:
  onlyoffice-documentserver:
    image: onlyoffice/documentserver:latest
    restart: always
    environment:
      - JWT_ENABLED=true
      - JWT_SECRET=Shei9AifuZ4ze7udahG2seb3aa6ool
    ports:
      - 2291:80
    volumes:
      - ./onlyoffice/data:/var/www/onlyoffice/Data
      - ./onlyoffice/lib:/var/lib/onlyoffice
      - ./onlyoffice/logs:/var/log/onlyoffice
      - ./onlyoffice/db:/var/lib/postgresql

Now we can create the systemd service. I created it using TechOverflow’s docker-compose systemd .service generator. Save it in /etc/systemd/system/OnlyOffice.service:

[Unit]
Description=OnlyOffice
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
# Shutdown container (if running) when unit is started
ExecStartPre=/usr/local/bin/docker-compose -f /opt/onlyoffice/docker-compose.yml down
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f /opt/onlyoffice/docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f /opt/onlyoffice/docker-compose.yml down

[Install]
WantedBy=multi-user.target

Now we can enable & start the service using

sudo systemctl enable OnlyOffice.service
sudo systemctl start OnlyOffice.service

Now let's create the nginx config in /etc/nginx/sites-enabled/OnlyOffice.conf. Obviously, you'll have to modify at least the server_name to match your domain:

server {
    server_name onlyoffice.mydomain.com;

    access_log /var/log/nginx/onlyoffice.access_log;
    error_log /var/log/nginx/onlyoffice.error_log info;

    location / {
        proxy_pass http://127.0.0.1:2291;
        proxy_http_version 1.1;
        proxy_read_timeout 3600s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header X-Frontend-Host $host;
    }

    listen 80;
}

Check the validity of the nginx config using

sudo nginx -t

and unless it fails, reload nginx using

sudo service nginx reload
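
A quick test that the reverse proxy answers (the domain is a placeholder; adapt it to your server_name):

curl -I http://onlyoffice.mydomain.com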

Now I recommend using certbot to enable TLS encryption on your domain. You should be familiar with these steps already; my approach is to sudo apt -y install python-certbot-nginx, then run certbot --nginx --staging to first obtain a staging certificate (to avoid being rate-limited if there are any issues), and after you have obtained the staging certificate, run certbot --nginx and choose Renew & replace cert. After that, run sudo service nginx reload and check if your domain works with HTTPS. You should always choose redirection to HTTPS if certbot asks you.

Posted by Uli Köhler in Docker, nginx

How to repair docker-compose MariaDB instances (aria_chk -r)

Problem:

You are trying to run a MariaDB container using docker-compose. However, the database container doesn’t start up and you see error messages like these in the logs:

[ERROR] mysqld: Aria recovery failed. Please run aria_chk -r on all Aria tables and delete all aria_log.######## files
[ERROR] Plugin 'Aria' init function returned error.
[ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
....
[ERROR] Could not open mysql.plugin table. Some plugins may be not loaded
[ERROR] Failed to initialize plugins.
[ERROR] Aborting

Solution:

The log messages already tell you what to do – but they don’t tell you how to do it:

Aria recovery failed. Please run aria_chk -r on all Aria tables and delete all aria_log.######## files

First, back up the entire MariaDB data directory: check which host directory the container's data directory (/var/lib/mysql) is mapped to and copy that entire directory to a backup location. This is important in case the repair process fails.

Now let's run aria_chk -r to check and repair the Aria table files:

docker-compose run my-db bash -c 'aria_chk -r /var/lib/mysql/**/*'

Replace my-db with the name of your database container. This will attempt to repair a lot of non-table files as well, but aria_chk will happily ignore those.

Now we can delete the log files:

docker-compose run my-db bash -c 'rm /var/lib/mysql/aria_log.*'

Again, replace my-db with the name of your database container.
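
To verify the result, you can run aria_chk once more without -r, which only checks the Aria index files (.MAI); again, my-db is the example container name:

docker-compose run my-db bash -c 'aria_chk /var/lib/mysql/**/*.MAI'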

Posted by Uli Köhler in Databases, Docker

How to automatically cleanup your docker registry instance

Quick install

This quick-install script works if you are running the docker registry image using docker-compose and the service in docker-compose.yml is called registry. I recommend using our example on how to install the docker registry for Gitlab (not yet available).

Run this in the directory where docker-compose.yml is located!

wget -qO- https://techoverflow.net/scripts/install-registry-autocleanup.sh | sudo bash

Need an explanation (or not using docker-compose)?

Docker registry instances store every version of every image you push to them, so especially in a continuous integration environment you might want to perform periodic cleanups that delete all images without a tag.

The command to do that is

registry garbage-collect /etc/docker/registry/config.yml -m

You can use a systemd service like

[Unit]
Description=registry-gc

[Service]
Type=oneshot
ExecStart=/usr/local/bin/docker-compose exec -T registry bin/registry garbage-collect /etc/docker/registry/config.yml -m
WorkingDirectory=/opt/my-registry

and a timer like

[Unit]
Description=registry-gc

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

to run the command daily. You need to adjust both the WorkingDirectory and the exact docker-compose exec command to suit your needs.

Copy both files to /etc/systemd/system (e.g. as registry-gc.service and registry-gc.timer) and enable the timer using

sudo systemctl enable registry-gc.timer

and you can run it manually at any time using

sudo systemctl start registry-gc.service
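
To check when the timer will fire next, list it:

systemctl list-timers registry-gc.timer
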
Posted by Uli Köhler in Docker