
How to fix Traefik “Gateway Timeout” for Docker services

Problem

If you have set up your Traefik instance with the Docker provider, you will often encounter an issue where every service running on Docker or docker-compose returns

Gateway Timeout

(HTTP response 504) after a couple of seconds.

Why does the gateway timeout occur?

This issue is caused by the Traefik instance not being on the same Docker network(s) as the containers running the services. Therefore, the Traefik container is firewalled off from the service containers and cannot reach their IP addresses.

There are two ways to fix this issue.

Preferred solution: Use host networking

The host is able to access all Docker container IP addresses. Therefore, we can run the Traefik container with network_mode: "host" so that it does not receive a separate IP address in a separate network but uses the host's IP address and ports directly.

In order to enable host networking in a docker-compose-based setup, use

network_mode: "host" 

For example:

version: "3.3"
services:
  traefik:
    image: "traefik:v2.4.8"
    network_mode: "host"
# [...]

Using host networking also has the added advantage of increasing Traefik throughput, since no Docker port forwarding is needed: the host ports (such as port 80 for HTTP and port 443 for HTTPS) are connected directly to Traefik.
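
For reference, here is a slightly more complete sketch of such a compose file. The entrypoint names and the Docker socket mount are illustrative assumptions and not taken from any particular setup:

version: "3.3"
services:
  traefik:
    image: "traefik:v2.4.8"
    network_mode: "host"
    command:
      # Enable the Docker provider so Traefik picks up container labels
      - "--providers.docker=true"
      # Bind the entrypoints directly to the host ports
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    volumes:
      # The Docker provider needs access to the Docker socket
      - "/var/run/docker.sock:/var/run/docker.sock:ro"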

Alternate solution: Add traefik to every docker network

You can also add the Traefik instance to each and every Docker network where a service container is located. This will work, but you need to remember to add the Traefik instance to every one of these networks. Since this is not only a lot of work (especially if you have many services with separate networks running in your setup) but also easy to forget when adding new services, I recommend the host networking approach above instead.
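
As a rough sketch (service and network names are made up for illustration), the Traefik compose file would have to reference each service network as an external network, for example the default network that docker-compose creates for a project called myservice:

version: "3.3"
services:
  traefik:
    image: "traefik:v2.4.8"
    networks:
      - myservice_default
    # [...] repeat for every service network Traefik needs to reach
networks:
  myservice_default:
    external: true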

Posted by Uli Köhler in Traefik

How to download Wasabi/S3 object to string/bytes using boto3 in Python

You can use io.BytesIO to store the content of an S3 object in memory, obtain its content as bytes and then decode those to a str. The following example downloads myfile.txt into memory:

# Download the object into an in-memory buffer
buf = io.BytesIO()
my_bucket.download_fileobj("myfile.txt", buf)
# Get file content as bytes
filecontent_bytes = buf.getvalue()
# ... or convert to string
filecontent_str = buf.getvalue().decode("utf-8")

Full example

import boto3
import io

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
my_bucket = s3.Bucket('boto-test')

# Download the object into an in-memory buffer
buf = io.BytesIO()
my_bucket.download_fileobj("myfile.txt", buf)
# Get file content as bytes
filecontent_bytes = buf.getvalue()
# ... or convert to string
filecontent_str = buf.getvalue().decode("utf-8")

print(filecontent_str)

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.
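
Alternatively, the boto3 resource API also lets you read the object body directly, without an intermediate buffer. This is just a sketch reusing the s3 resource, bucket name and object name from the example above:

# Read the object content directly into bytes
obj = s3.Object('boto-test', 'myfile.txt')
filecontent_bytes = obj.get()['Body'].read()
filecontent_str = filecontent_bytes.decode("utf-8")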

Posted by Uli Köhler in Python, S3

How to upload string as Wasabi/S3 object using boto3 in Python

In order to upload a Python string like

my_string = "This shall be the content for a file I want to create on an S3-compatible storage"

to an S3-compatible storage like Wasabi or Amazon S3, you need to encode it using .encode("utf-8") and then wrap it in an io.BytesIO object:

my_bucket.upload_fileobj(io.BytesIO(my_string.encode("utf-8")), "myfile.txt")

Full example:

import boto3
import io

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
my_bucket = s3.Bucket('boto-test')

# Upload string to file
my_string = "This shall be the content for a file I want to create on an S3-compatible storage"

my_bucket.upload_fileobj(io.BytesIO(my_string.encode("utf-8")), "myfile.txt")

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.
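
As an alternative sketch, you can also skip the io.BytesIO wrapper and upload the encoded string in a single call using put_object (same bucket and object name as above):

# Upload the encoded string directly as the object body
my_bucket.put_object(Key="myfile.txt", Body=my_string.encode("utf-8"))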

Posted by Uli Köhler in Python, S3

How to filter for objects in a given S3 directory using boto3

Using boto3, you can filter for objects in a given directory of a bucket by applying a prefix filter.

Instead of iterating all objects using

for obj in my_bucket.objects.all():
    pass # ...

(see How to use boto3 to iterate ALL objects in a Wasabi / S3 bucket in Python for a full example)

you can apply a prefix filter using

for obj in my_bucket.objects.filter(Prefix="MyDirectory/"):
    print(obj)

Don’t forget the trailing / for the Prefix argument! Just using filter(Prefix="MyDirectory") without a trailing slash will also match e.g. MyDirectoryFileList.txt.

This complete example prints the object description for every object in the 10k-Test-Objects directory (from our post on How to use boto3 to create a lot of test files in Wasabi / S3 in Python).

import boto3

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
my_bucket = s3.Bucket('boto-test')

# Iterate over objects in bucket
for obj in my_bucket.objects.filter(Prefix="10k-Test-Objects/"):
    print(obj)

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Example output:

s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/10.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/100.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1000.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/10000.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1001.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1002.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1003.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1004.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1005.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1006.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1007.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1008.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1009.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/101.txt')
[...]

Posted by Uli Köhler in Python, S3

How to use boto3 to iterate ALL objects in a Wasabi / S3 bucket in Python

This snippet shows you how to iterate over all objects in a bucket:

import boto3

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
my_bucket = s3.Bucket('boto-test')

# Iterate over objects in bucket
for obj in my_bucket.objects.all():
    print(obj)

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Example output:

s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/10.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/100.txt')
[...]

Posted by Uli Köhler in Python, S3

How to use boto3 to create a lot of test files in Wasabi / S3 in Python

The following example code creates 10000 test files on Wasabi / S3. It is based on How to use concurrent.futures map with a tqdm progress bar:

import boto3
import io
import concurrent.futures
executor = concurrent.futures.ThreadPoolExecutor(64)

from tqdm import tqdm
import concurrent.futures
def tqdm_parallel_map(executor, fn, *iterables, **kwargs):
    """
    Equivalent to executor.map(fn, *iterables),
    but displays a tqdm-based progress bar.
    
    Does not support timeout or chunksize as executor.submit is used internally
    
    **kwargs is passed to tqdm.
    """
    futures_list = []
    for iterable in iterables:
        futures_list += [executor.submit(fn, i) for i in iterable]
    for f in tqdm(concurrent.futures.as_completed(futures_list), total=len(futures_list), **kwargs):
        yield f.result()

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
boto_test_bucket = s3.Bucket('boto-test')

def create_s3_object(i, directory):
    # Create test data
    buf = io.BytesIO()
    buf.write(f"{i}".encode())
    # Reset read pointer. DO NOT FORGET THIS, else all uploaded files will be empty!
    buf.seek(0)

    # Upload the file
    boto_test_bucket.upload_fileobj(buf, f"{directory}/{i}.txt")

for _ in tqdm_parallel_map(executor, lambda i: create_s3_object(i, directory="10k-Test-Objects"), range(1, 10001)):
    pass

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Note that running this script, especially when creating lots of test files, will send a lot of requests to your S3 provider and, depending on what plan you are using, these requests might be expensive. Wasabi, for example, does not charge for requests but charges for storage (with a minimum of 1TB storage per month being charged, at the time of writing this).
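
Once you are done testing, you can remove the test objects again. A minimal cleanup sketch, assuming the boto_test_bucket object from the example above; objects.filter(...).delete() deletes the matching objects in batches:

# Delete every object under the 10k-Test-Objects/ prefix
boto_test_bucket.objects.filter(Prefix="10k-Test-Objects/").delete()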

Posted by Uli Köhler in Python, S3

How to use boto3 to upload BytesIO to Wasabi / S3 in Python

This snippet provides a concise example on how to upload an io.BytesIO() object to Wasabi / S3 using boto3:

import boto3

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
boto_test_bucket = s3.Bucket('boto-test')

# Create a test BytesIO we want to upload
import io
buf = io.BytesIO()
buf.write(b"Hello S3 world!")

# Reset read pointer. DO NOT FORGET THIS, else all uploaded files will be empty!
buf.seek(0)
    
# Upload the file. "MyDirectory/test.txt" is the name of the object to create
boto_test_bucket.upload_fileobj(buf, "MyDirectory/test.txt")

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Also don’t forget

buf.seek(0)

or your uploaded files will be empty.

Posted by Uli Köhler in Python, S3

How to use boto3 to upload file to Wasabi / S3 in Python

Using boto3 to upload data to Wasabi is pretty simple, but not well documented.

import boto3

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
boto_test_bucket = s3.Bucket('boto-test')

# Create a test file we want to upload
with open("upload-test.txt", "w") as outfile:
    outfile.write("Hello S3!")
    
# Upload the file. "MyDirectory/test.txt" is the name of the object to create
boto_test_bucket.upload_file("upload-test.txt", "MyDirectory/test.txt")

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Posted by Uli Köhler in Python, S3

How to install gitlab-runner using docker-compose

First, choose a directory where the service will reside; I recommend /opt/gitlab-runner. Then create docker-compose.yml in that directory with this content:

version: '3'
services:
  gitlab-runner:
    image: 'gitlab/gitlab-runner:latest'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config:/etc/gitlab-runner
    restart: unless-stopped

then run these commands to start the container and register the runner:

docker-compose up -d
docker-compose exec -T gitlab-runner gitlab-runner register

It will ask you for details about the GitLab instance you want to attach to. You will find this information at https://<your-gitlab-domain>/admin/runners. This example is for my GitLab instance:

Runtime platform                                    arch=amd64 os=linux pid=38 revision=943fc252 version=13.7.0
Running in system-mode.

Enter the GitLab instance URL (for example, https://gitlab.com/):
https://gitlab.techoverflow.net/
Enter the registration token:
Loo2lahf9Shoogheiyae
Enter a description for the runner:
[148a53203df8]: My-Runner
Enter tags for the runner (comma-separated):

Registering runner... succeeded                     runner=oc-oKWMH
Enter an executor: custom, docker-ssh, shell, virtualbox, docker-ssh+machine, docker, parallels, ssh, docker+machine, kubernetes:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
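
If you prefer to script the registration instead of answering the prompts interactively, gitlab-runner register also accepts the answers as command line flags. A sketch using the example values from the transcript above:

docker-compose exec -T gitlab-runner gitlab-runner register \
    --non-interactive \
    --url "https://gitlab.techoverflow.net/" \
    --registration-token "Loo2lahf9Shoogheiyae" \
    --description "My-Runner" \
    --executor "shell"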

Now stop the runner that is still running with the old config (i.e. with no GitLab instance attached); it will be started again by the systemd service in the next step:

docker-compose down

After that’s finished, you can run the script from our previous post Create a systemd service for your docker-compose project in 10 seconds in the directory where docker-compose.yml is located.

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will automatically generate a systemd service and start the runner (also on boot). For more details, see the corresponding blogpost. If your directory is named gitlab-runner, the service file will be stored in /etc/systemd/system/gitlab-runner.service, hence these are the commands you can use to control the service:

Note that the script that creates the systemd service will automatically start the runner, so you don’t need to start it manually.

Start by

sudo systemctl start gitlab-runner

Restart by

sudo systemctl restart gitlab-runner

Stop by

sudo systemctl stop gitlab-runner

View status:

sudo systemctl status gitlab-runner

View & follow logs:

sudo journalctl -xfu gitlab-runner

View logs in less:

sudo journalctl -xu gitlab-runner

Also see Mini systemd cheat-sheet

Also see How to register gitlab runner for multiple GitLab instances.

Note that you can also use

docker-compose logs -f

to view the logs (run this from the directory where docker-compose.yml is located).

In case you see an error message like

error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on 192.168.178.1:53: no such host

in your jobs, see How to fix Gitlab CI error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on … no such host

Posted by Uli Köhler in GitLab

How to install Emscripten SDK on Ubuntu in 1 minute

This script installs the Emscripten SDK on Ubuntu into ~/.emsdk and automatically adds source ~/.emsdk/emsdk_env.sh to .bashrc and .zshrc if they exist. It will also automatically update Emscripten in case ~/.emsdk already exists.

Run this one-liner to install:

curl -fsSL https://techoverflow.net/scripts/install-emscripten.sh | bash

Script content:

#!/bin/bash
# This script installs emscripten to ~/.emsdk
if [[ -d "$HOME/.emsdk" ]]
then # Update
  echo "Updating emscripten SDK..."
  cd ~/.emsdk && git pull
else # Install
  echo "Installing emscripten SDK..."
  git clone https://github.com/emscripten-core/emsdk.git ~/.emsdk
fi
# Install & activate latest SDK
# See https://emscripten.org/docs/getting_started/downloads.html for more details
cd ~/.emsdk
./emsdk install latest
./emsdk activate latest
# Add to .bashrc and .zshrc
if [[ -f "$HOME/.bashrc" ]]; then echo -e "\nsource ~/.emsdk/emsdk_env.sh" >> ~/.bashrc; fi
if [[ -f "$HOME/.zshrc" ]]; then echo -e "\nsource ~/.emsdk/emsdk_env.sh" >> ~/.zshrc; fi

Posted by Uli Köhler in WASM

How to install xenutils on Linux (XCP-NG)

Using CoreOS? See this post instead!

First, insert the guest-tools.iso supplied with XCP-NG into the DVD drive of the virtual machine.

Then run these commands. Note that this will reboot the machine after it finishes:

sudo mount -o ro /dev/sr0 /mnt/
cd /mnt/Linux
sudo ./install.sh -n
sudo reboot

After the VM reboots, XCP-NG should detect the management agent.

Please eject the guest tools medium from the machine after the reboot! Sometimes unnecessarily mounted media can cause issues.

Posted by Uli Köhler in Virtualization

Best-practice configuration for MongoDB with docker-compose

Create /var/lib/mongodb/docker-compose.yml:

version: '3.1'
services:
  mongo:
    image: mongo
    volumes:
        - ./data:/data/db
    ports:
        - 27017:27017
    command: --serviceExecutor adaptive

This will store the MongoDB data in /var/lib/mongodb/data. I prefer this variant to using docker volumes since this method keeps all MongoDB-related data in the same directory.

Then create a systemd service using

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

See our post on how to Create a systemd service for your docker-compose project in 10 seconds for more details on this method.

You can now access MongoDB at localhost:27017. The service will automatically start on boot.
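
To quickly check that the instance is reachable, you can open a MongoDB shell. A sketch (run from /var/lib/mongodb; on newer images the shell inside the container is called mongosh instead of mongo):

# Open a shell inside the running container
docker-compose exec mongo mongo
# ... or connect from the host if the mongo shell is installed locally
mongo mongodb://localhost:27017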

Restart by

sudo systemctl restart mongodb

Stop by

sudo systemctl stop mongodb

View & follow logs:

sudo journalctl -xfu mongodb

View logs in less:

sudo journalctl -xu mongodb

Posted by Uli Köhler in Docker, MongoDB

Fedora CoreOS minimal ignition config for XCP-NG

This is the Ignition config that I use to bring up my Fedora CoreOS instance on a VM on my XCP-NG server:

{
  "ignition": {
    "version": "3.2.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "sudo",
          "docker"
        ],
        "name": "uli",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDpvDSxIwnyMCFtIPRQmPUV6hh9lBJUR0Yo7ki+0Vxs+kcCHGjtcgDzcaHginj1zvy7nGwmcuGi5w83eKoANjK5CzpFT4vJeiXqtGllh0w+B5s6tbSsD0Wv3SC9Xc4NihjVjLU5gEyYmfs/sTpiow225Al9UVYeg1SzFr1I3oSSuw== [email protected]"
        ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "contents": {
          "source": "data:,coreos-test%0A"
        },
        "mode": 420
      },
      {
        "path": "/etc/profile.d/systemd-pager.sh",
        "contents": {
          "source": "data:,%23%20Tell%20systemd%20to%20not%20use%20a%20pager%20when%20printing%20information%0Aexport%20SYSTEMD_PAGER%3Dcat%0A"
        },
        "mode": 420
      },
      {
        "path": "/etc/sysctl.d/20-silence-audit.conf",
        "contents": {
          "source": "data:,%23%20Raise%20console%20message%20logging%20level%20from%20DEBUG%20(7)%20to%20WARNING%20(4)%0A%23%20to%20hide%20audit%20messages%20from%20the%20interactive%20console%0Akernel.printk%3D4"
        },
        "mode": 420
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "enabled": true,
        "name": "docker.service"
      },
      {
        "enabled": true,
        "name": "containerd.service"
      },
      {
        "dropins": [
          {
            "contents": "[Service]\n# Override Execstart in main unit\nExecStart=\n# Add new Execstart with `-` prefix to ignore failure\nExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM\nTTYVTDisallocate=no\n",
            "name": "autologin-core.conf"
          }
        ],
        "name": "[email protected]"
      }
    ]
  }
}

It is built from this YAML:

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: uli
      groups:
        - "sudo"
        - "docker"
      ssh_authorized_keys:
        - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDpvDSxIwnyMCFtIPRQmPUV6hh9lBJUR0Yo7ki+0Vxs+kcCHGjtcgDzcaHginj1zvy7nGwmcuGi5w83eKoANjK5CzpFT4vJeiXqtGllh0w+B5s6tbSsD0Wv3SC9Xc4NihjVjLU5gEyYmfs/sTpiow225Al9UVYeg1SzFr1I3oSSuw== [email protected]"

systemd:
  units:
    - name: docker.service
      enabled: true

    - name: containerd.service
      enabled: true
    - name: getty@tty1.service
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure
          ExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM
          TTYVTDisallocate=no
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          coreos-test
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          # Tell systemd to not use a pager when printing information
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          # Raise console message logging level from DEBUG (7) to WARNING (4)
          # to hide audit messages from the interactive console
          kernel.printk=4

using

fcct --pretty --strict ignition.yml --output ignition.ign

or TechOverflow’s online transpiler tool.

Install using:

sudo coreos-installer install /dev/xvda --copy-network --ignition-url https://mydomain.com/ignition.ign

Features:

  • DHCP on all network interfaces
  • TTY on the screen
  • No password – remember to replace the SSH key with your own key!
Posted by Uli Köhler in Virtualization

How to install XCP-NG xe-guest-utilities on Fedora CoreOS using guest-tools.iso

Important note: While installing the Xen utilities using the CD ISO still works, it is outdated and you should prefer installing it using the rpm package. See our post Fedora CoreOS: How to install Xen/XCP-NG guest utilities using rpm-ostree

First, insert the guest-tools.iso supplied with XCP-NG into the DVD drive of the virtual machine.

Then run this sequence of commands to install. Note that this will reboot the CoreOS instance!

curl -fsSL https://techoverflow.net/scripts/install-xenutils-coreos.sh | sudo bash /dev/stdin

This will run the following script:

sudo mount -o ro /dev/sr0 /mnt
sudo rpm-ostree install /mnt/Linux/*.x86_64.rpm
sudo cp -f /mnt/Linux/xen-vcpu-hotplug.rules /etc/udev/rules.d/
sudo cp -f /mnt/Linux/xe-linux-distribution.service /etc/systemd/system/
sudo sed 's/share\/oem\/xs/sbin/g' -i /etc/systemd/system/xe-linux-distribution.service
sudo systemctl daemon-reload
sudo systemctl enable /etc/systemd/system/xe-linux-distribution.service
sudo umount /mnt
sudo systemctl reboot

After rebooting the VM, XCP-NG should detect the management agent.

Based on work by steniofilho on the Fedora Forum.

Please eject the guest tools medium from the machine after the reboot! Sometimes unnecessarily mounted media can cause issues.

Posted by Uli Köhler in Virtualization

How to list VMs in XCP-NG on the command line

In order to list VMs on the command line, login to XCP-NG using SSH and run this command:

xe vm-list

Example output:

[16:51 virt01-xcpng ~]# xe vm-list
uuid ( RO)           : 56dc99f2-c617-f7a9-5712-a4c9df54229a
     name-label ( RW): VM 1
    power-state ( RO): running


uuid ( RO)           : 268d56ab-9672-0f45-69ae-efbc88380b21
     name-label ( RW): VM2
    power-state ( RO): running


uuid ( RO)           : 9b1a771f-fb84-8108-8e01-6dac0f957b19
     name-label ( RW): My VM 3
    power-state ( RO): running
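
If you only need certain fields or want to restrict the list to running VMs, xe can filter on parameter values and limit the printed parameters, for example:

xe vm-list power-state=running params=uuid,name-label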

Posted by Uli Köhler in Virtualization

How to fix ElasticSearch [1]: initial heap size […] not equal to maximum heap size […];

Problem:

Your ElasticSearch server fails to start with an error message like

ERROR: [1] bootstrap checks failed
[1]: initial heap size [536870912] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents memory locking from locking the entire heap
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log

Solution:

Set the initial heap size equal to the maximum heap size: The -Xms argument and the -Xmx argument must be equal, for example:

-Xms2048m -Xmx2048m

Typically (such as in a docker-based setup) you can set this in ES_JAVA_OPTS:

ES_JAVA_OPTS=-Xms2048m -Xmx2048m

For docker-compose based environments, this is an example configuration that works:

environment:
    - cluster.name=docker-cluster
    - node.name=elasticsearch1
    - cluster.initial_master_nodes=elasticsearch1
    - bootstrap.memory_lock=true
    - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
    - http.cors.enabled=true
    - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
    - http.cors.allow-credentials=true
    - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"

After that, restart your ElasticSearch instance.

Posted by Uli Köhler in ElasticSearch

How to run psql in Gitlab Docker image

When using the official GitLab Docker container, you can use this command to run psql:

docker exec -t -u gitlab-psql [container name] psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

Note that gitlab in this command is the container name.
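
You can also run a single SQL statement non-interactively by appending psql’s -c option. A sketch for the docker-compose variant; the query itself is just an example:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -c 'SELECT COUNT(*) FROM users;'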

Posted by Uli Köhler in Databases, Docker, Linux

How to fix XCP-NG XENAPI_MISSING_PLUGIN(xscontainer) or Error on getting the default coreOS cloud template

Problem:

When creating a CoreOS container on your XCP-NG host, XCP-NG center or XenOrchestra tells you

Cloud config: Error on getting the default coreOS cloud template

with the error message

XENAPI_MISSING_PLUGIN(xscontainer)
This is a XenServer/XCP-ng error

Solution:

Log into the host’s console as root using SSH or the console in XCP-NG center or XenOrchestra and run

yum install xscontainer

After that, reload the page (F5) you use to create your container. No host restart is required.

Note that if you have multiple hosts, you need to yum install xscontainer for each host individually.

Posted by Uli Köhler in Docker, Virtualization

The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This is part of a common docker-compose.yml that is frequently seen on the internet:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
 [...]

Simple and secure, right? A no-root-password MariaDB instance that’s running in a separate container and does not have its port 3306 exposed – so only services from the same docker-compose.yml can reach it since docker-compose puts all those services in a separate network.

Wrong.

While the MariaDB instance is not reachable from the internet since no port is exposed, it can be reached by any process on the host via its internal IP address.

In order to comprehend what’s happening, we shall take a look at docker’s networks. In this case, my docker-compose config is called redmine.

$ docker network ls | grep redmine
ea7ed38f469b        redmine_default           bridge              local

This is the network that docker-compose creates without any explicit network configuration. Let’s inspect the network (docker network inspect redmine_default) to list the attached containers:

[
    // [...]
        "Containers": {
            "2578fc65b4dab9f204d0a252e421dd4ddd9f41c35642d48350f4e59370581757": {
                "Name": "redmine_mariadb_1",
                "EndpointID": "1e6d81acc096a12fc740173f4e107090333c42e8a86680ac5c9886c148d578e7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "7867f71d2a36265c34c133b70aea487b90ea68fcf30ecb42d6e7e9a376cf8e07": {
                "Name": "redmine_redmine_1",
                "EndpointID": "f5ac7b3325aa9bde12f0c625c4881f9a6fc9957da4965767563ec9a3b76c19c3",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
    // [...]
]

We can see that the IP address of the redmine_mariadb_1 container is 172.18.0.2.

Using the internal IP 172.18.0.2, you can access the MySQL server.

Any process on the host (even from unprivileged users) can connect to the container without any password, e.g.

$ mysqldump -uroot -h172.18.0.2 --all-databases
// This will show the dump of the entire MariaDB database

How to mitigate this security risk?

Mitigation is quite easy since we only need to set a root password for the MariaDB instance.

My recommended best practice is to avoid repeating the password in multiple places. In order to do this, create a .env file in the directory where docker-compose.yml is located:

MARIADB_ROOT_PASSWORD=aiPaipei6ookaemue4voo0NooC0AeH

Remember to replace the password with a random password or use this shell script to automatically create it:

echo MARIADB_ROOT_PASSWORD=$(pwgen 30) > .env

Now we can use ${MARIADB_ROOT_PASSWORD} in docker-compose.yml wherever the MariaDB root password is required, for example:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
  redmine:
    image: 'redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - [email protected]
      - REDMINE_DB_MYSQL=mariadb
      - REDMINE_DB_USERNAME=root
      - REDMINE_DB_PASSWORD=${MARIADB_ROOT_PASSWORD}
    ports:
      - '3718:3000'
    volumes:
      - './redmine_data/conf:/usr/src/redmine/conf'
      - './redmine_data/files:/usr/src/redmine/files'
      - './redmine_themes:/usr/src/redmine/public/themes'
    depends_on:
      - mariadb

Note that the mariadb docker image will not change the root password if the database directory already exists (mariadb_data in this example).

My recommended best practice for changing the root password is to use mysqldump --all-databases to export the entire database to a SQL file, then backup and delete the data directory, then re-start the container so the new root password will be set. After that, re-import the dump from the SQL file.
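
A rough sketch of that procedure for the example above (the container IP and directory names are taken from this example and may differ in your setup; the new password is read from the .env file):

# 1. Export all databases while the old (password-less) instance is still running
mysqldump -uroot -h172.18.0.2 --all-databases > all-databases.sql
# 2. Stop the containers and move the old data directory away (this is your backup)
docker-compose down
mv mariadb_data mariadb_data.bak
# 3. Start again; MariaDB initializes a fresh data directory with the root password from .env
docker-compose up -d
# 4. Once MariaDB is up, re-import the dump (requires the mysql client on the host)
source .env
mysql -uroot -p"$MARIADB_ROOT_PASSWORD" -h172.18.0.2 < all-databases.sql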

Posted by Uli Köhler in Databases, Docker, Linux