How to export certificates from Traefik certificate store

Traefik stores certificates as base64-encoded X.509 certificates and keys inside its certificate store, e.g. acme.json.

This is a Python script to export a certificate from a Traefik certificate store .json file:

import json
import base64

# Read Traefik ACME JSON
with open("acme.json") as acme_file:
    acme = json.load(acme_file)

# Select certificates from a specific resolver
resolver_name = "my-resolver"
certificates = acme[resolver_name]["Certificates"]

# Find the specific certificate we are looking for
certificate = [certificate for certificate in certificates if "mydomain.com" in certificate["domain"].get("sans", [])][0]

# Extract X.509 certificate data
certificate_data = base64.b64decode(certificate["certificate"])
key_data = base64.b64decode(certificate["key"])

# Export certificate and key to file
with open("certificate.pem", "wb") as certfile:
    certfile.write(certificate_data)

with open("key.pem", "wb") as keyfile:
    keyfile.write(key_data)

Note that depending on whether the domain is the certificate's primary name (main) or one of its alternative names (sans), you might need to use

if "myddomain.com" == certificate["domain"]["main"]

instead of

if "myddomain.com" in certificate["domain"].get("sans", [])


Posted by Uli Köhler in Python, Traefik

How to find XCP-NG host-uuid

For many xe commands on XCP-NG you have to use a host-uuid=... parameter.

In order to find out your Host UUID, use

xe host-list

Example:

[03:19 virt01-xcpng ~]# xe host-list
uuid ( RO)                : 71739b18-2999-4794-a024-87d5d26215d1
          name-label ( RW): virt01-xcpng
    name-description ( RW): Default install

In this example, the Host UUID of the only host is 71739b18-2999-4794-a024-87d5d26215d1.
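
If you only need the UUID itself, e.g. in a script, xe also supports the --minimal flag, which prints just the UUID(s), comma-separated (note that with multiple hosts you still need to select the correct one):

HOST_UUID=$(xe host-list --minimal)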

Posted by Uli Köhler in Virtualization

BUP MySQL backup using mysqldump without intermediary file

bup is a nice git-based backup tool that is both free and easy to use, and it saves space through differential backups.

You can also use bup to back up a MySQL database using mysqldump directly, instead of first dumping to a .sql file and then backing up the file.

This is possible by piping the mysqldump output directly into bup split:

mysqldump [...] | bup -d $BUP_DIR split -n mysqldump.sql

By using -n mysqldump.sql you tell bup that the data produced by the dump should be stored under the name mysqldump.sql in the backup.

Full example:

export BUP_DIR=/var/lib/bup/mysql.bup
export MARIADB_ROOT_PASSWORD=piahaen9ehilei0Ieneirohthue4Iu

bup -d $BUP_DIR init
mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases | bup -d $BUP_DIR split -n mysqldump.sql
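
To restore the dump later, you can use bup join, the counterpart of bup split (a minimal sketch; mysqldump.sql must match the name given via -n above):

bup -d $BUP_DIR join mysqldump.sql > restored-mysqldump.sql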


Posted by Uli Köhler in bup

Local redmine backup using bup (docker-compose compatible)

This script uses bup to back up your docker-compose-based redmine installation to a local bup folder, e.g. /var/lib/bup/my-redmine.bup:

#!/bin/bash
# Auto-determine the name from the directory name
# /opt/my-redmine => $NAME=my-redmine => /var/lib/bup/my-redmine.bup
export NAME=$(basename $(pwd))
export BUP_DIR=/var/lib/bup/$NAME.bup
bup_directory() {
        echo "BUPing $1"
        bup -d $BUP_DIR index "$1" && bup -d $BUP_DIR save -9 --strip-path "$(pwd)" -n "$1" "$1"
}
# Init
bup -d $BUP_DIR init
# Save MariaDB
source .env && docker-compose exec -T mariadb mysqldump -uroot -p${MARIADB_ROOT_PASSWORD} --all-databases | bup -d $BUP_DIR split -n $NAME-mariadb.sql
# Save directories
bup_directory redmine_data
bup_directory redmine_themes
# Backup self
bup_directory backup.sh
bup_directory docker-compose.yml
# OPTIONAL: Add par2 information
#   This is only recommended for backup on unreliable storage or for extremely critical backups
#   If you already have bitrot protection (like BTRFS with regular scrubbing), this might be overkill.
# Uncomment this line to enable:
# bup fsck -g

# OPTIONAL: Cleanup old backups
bup -d $BUP_DIR prune-older --keep-all-for 1m --keep-dailies-for 6m --keep-monthlies-for forever -9 --unsafe

It will backup:

  • MariaDB data from inside the container using mysqldump
  • The redmine_data folder
  • The redmine_themes folder
  • The backup script backup.sh itself
  • docker-compose.yml

Place it in the same folder where docker-compose.yml is located.
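
To get data back out of the backup, here is a minimal sketch using bup restore (the ./restored target directory is an example; the branch name redmine_data matches the -n name used by bup_directory above):

bup -d /var/lib/bup/my-redmine.bup restore -C ./restored /redmine_data/latest/.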

The script is compatible with our previous post How to create a systemd backup timer & service in 10 seconds.

Posted by Uli Köhler in bup, Docker

How to fix Traefik “Gateway Timeout” for Docker services

Problem

If you have set up your Traefik instance with the Docker provider, you will often encounter an issue where every service running on Docker or docker-compose returns

Gateway Timeout

(HTTP response 504) after a couple of seconds.

Why does the gateway timeout occur?

This issue is caused by the Traefik instance not being on the same Docker network(s) as the containers running the services. Therefore, the Traefik container's IP address is firewalled from accessing the IP addresses of the service containers.

There are two ways to fix this issue.

Preferred solution: Use host networking

The host is able to access all Docker container IP addresses. Therefore, we can operate the traefik container with network_mode: "host" so it doesn't receive a separate IP address in a separate network but uses the host's IP address and ports directly.

In order to enable host networking in a docker-compose-based setup, use

network_mode: "host" 

For example:

version: "3.3"
services:
  traefik:
    image: "traefik:v2.4.8"
    network_mode: "host"
# [...]

The host networking approach also has the added advantage of increasing Traefik throughput: no Docker port forwarding is needed since the host ports (like port 80 for HTTP and port 443 for HTTPS) are connected directly to Traefik.

Alternate solution: Add traefik to every docker network

You can also add the traefik instance to each and every Docker network where a service container is located, as sketched below. This will work, but you need to remember to attach the traefik instance to every such network. Since this is not only a lot of work (especially if you have many services with separate networks running in your setup) but also easy to forget, we recommend the host networking approach instead.
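
For reference, this is a minimal sketch of the alternate approach (the network name myservice_default is an example only): traefik joins an externally defined network that the service container is also attached to:

version: "3.3"
services:
  traefik:
    image: "traefik:v2.4.8"
    networks:
      - myservice_default
# [...]
networks:
  myservice_default:
    external: true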

Posted by Uli Köhler in Traefik

How to download Wasabi/S3 object to string/bytes using boto3 in Python

You can use io.BytesIO to store the content of an S3 object in memory and then convert it to bytes which you can then decode to a str. The following example downloads myfile.txt into memory:

# Download object into in-memory buffer
buf = io.BytesIO()
my_bucket.download_fileobj("myfile.txt", buf)
# Get file content as bytes
filecontent_bytes = buf.getvalue()
# ... or convert to string
filecontent_str = buf.getvalue().decode("utf-8")

Full example

import boto3
import io

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
my_bucket = s3.Bucket('boto-test')

# Download object into in-memory buffer
buf = io.BytesIO()
my_bucket.download_fileobj("myfile.txt", buf)
# Get file content as bytes
filecontent_bytes = buf.getvalue()
# ... or convert to string
filecontent_str = buf.getvalue().decode("utf-8")

print(filecontent_str)

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Posted by Uli Köhler in Python, S3

How to upload string as Wasabi/S3 object using boto3 in Python

In order to upload a Python string like

my_string = "This shall be the content for a file I want to create on an S3-compatible storage"

to an S3-compatible storage like Wasabi or Amazon S3, you need to encode it using .encode("utf-8") and then wrap it in an io.BytesIO object:

my_bucket.upload_fileobj(io.BytesIO(my_string.encode("utf-8")), "myfile.txt")

Full example:

import boto3
import io

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
my_bucket = s3.Bucket('boto-test')

# Upload string to file
my_string = "This shall be the content for a file I want to create on an S3-compatible storage"

my_bucket.upload_fileobj(io.BytesIO(my_string.encode("utf-8")), "myfile.txt")

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Posted by Uli Köhler in Python, S3

How to filter for objects in a given S3 directory using boto3

Using boto3, you can filter for objects in a given bucket by directory by applying a prefix filter.

Instead of iterating all objects using

for obj in my_bucket.objects.all():
    pass # ...

(see How to use boto3 to iterate ALL objects in a Wasabi / S3 bucket in Python for a full example)

you can apply a prefix filter using

for obj in my_bucket.objects.filter(Prefix="MyDirectory/"):
    print(obj)

Don’t forget the trailing / for the Prefix argument! Just using filter(Prefix="MyDirectory") without a trailing slash will also match e.g. MyDirectoryFileList.txt.

This complete example prints the object description for every object in the 10k-Test-Objects directory (from our post on How to use boto3 to create a lot of test files in Wasabi / S3 in Python).

import boto3

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
my_bucket = s3.Bucket('boto-test')

# Iterate over objects in the 10k-Test-Objects directory
for obj in my_bucket.objects.filter(Prefix="10k-Test-Objects/"):
    print(obj)

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Example output:

s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/10.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/100.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1000.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/10000.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1001.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1002.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1003.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1004.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1005.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1006.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1007.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1008.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1009.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/101.txt')
[...]


Posted by Uli Köhler in Python, S3

How to use boto3 to iterate ALL objects in a Wasabi / S3 bucket in Python

This snippet shows you how to iterate over all objects in a bucket:

import boto3

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
my_bucket = s3.Bucket('boto-test')

# Iterate over objects in bucket
for obj in my_bucket.objects.all():
    print(obj)

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Example output:

s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/1.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/10.txt')
s3.ObjectSummary(bucket_name='boto-test', key='10k-Test-Objects/100.txt')
[...]


Posted by Uli Köhler in Python, S3

How to use boto3 to create a lot of test files in Wasabi / S3 in Python

The following example code creates 10000 test files on Wasabi / S3. It is based on How to use concurrent.futures map with a tqdm progress bar:

import io
import boto3
import concurrent.futures
from tqdm import tqdm

executor = concurrent.futures.ThreadPoolExecutor(64)

def tqdm_parallel_map(executor, fn, *iterables, **kwargs):
    """
    Equivalent to executor.map(fn, *iterables),
    but displays a tqdm-based progress bar.
    
    Does not support timeout or chunksize as executor.submit is used internally
    
    **kwargs is passed to tqdm.
    """
    futures_list = []
    for iterable in iterables:
        futures_list += [executor.submit(fn, i) for i in iterable]
    for f in tqdm(concurrent.futures.as_completed(futures_list), total=len(futures_list), **kwargs):
        yield f.result()

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
boto_test_bucket = s3.Bucket('boto-test')

def create_s3_object(i, directory):
    # Create test data
    buf = io.BytesIO()
    buf.write(f"{i}".encode())
    # Reset read pointer. DO NOT FORGET THIS, else all uploaded files will be empty!
    buf.seek(0)

    # Upload the file
    boto_test_bucket.upload_fileobj(buf, f"{directory}/{i}.txt")

for _ in tqdm_parallel_map(executor, lambda i: create_s3_object(i, directory="10k-Test-Objects"), range(1, 10001)):
    pass

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Note that running this script, especially when creating lots of test files, will send a lot of requests to your S3 provider and, depending on what plan you are using, these requests might be expensive. Wasabi, for example, does not charge for requests but charges for storage (with a minimum of 1TB storage per month being charged, at the time of writing this).

Posted by Uli Köhler in Python, S3

How to use boto3 to upload BytesIO to Wasabi / S3 in Python

This snippet provides a concise example of how to upload an io.BytesIO() object to Wasabi or another S3-compatible storage using boto3:

import boto3
import io

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
boto_test_bucket = s3.Bucket('boto-test')

# Create a test BytesIO we want to upload
buf = io.BytesIO()
buf.write(b"Hello S3 world!")

# Reset read pointer. DO NOT FORGET THIS, else all uploaded files will be empty!
buf.seek(0)
    
# Upload the file. "MyDirectory/test.txt" is the name of the object to create
boto_test_bucket.upload_fileobj(buf, "MyDirectory/test.txt")

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Also don’t forget

buf.seek(0)

or your uploaded files will be empty.


Posted by Uli Köhler in Python, S3

How to use boto3 to upload file to Wasabi / S3 in Python

Using boto3 to upload data to Wasabi is pretty simple, but not well-documented:

import boto3

# Create connection to Wasabi / S3
s3 = boto3.resource('s3',
    endpoint_url = 'https://s3.eu-central-1.wasabisys.com',
    aws_access_key_id = 'MY_ACCESS_KEY',
    aws_secret_access_key = 'MY_SECRET_KEY'
)

# Get bucket object
boto_test_bucket = s3.Bucket('boto-test')

# Create a test file we want to upload
with open("upload-test.txt", "w") as outfile:
    outfile.write("Hello S3!")
    
# Upload the file. "MyDirectory/test.txt" is the name of the object to create
boto_test_bucket.upload_file("upload-test.txt", "MyDirectory/test.txt")

Don’t forget to fill in MY_ACCESS_KEY and MY_SECRET_KEY. Depending on what region and what S3-compatible service you use, you might need to use another endpoint URL instead of https://s3.eu-central-1.wasabisys.com.

Posted by Uli Köhler in Python, S3

How to install gitlab-runner using docker-compose

First, choose a directory where the service will reside. I recommend /opt/gitlab-runner. Then create docker-compose.yml in said directory with this content:

version: '3'
services:
  gitlab-runner:
    image: 'gitlab/gitlab-runner:latest'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config:/etc/gitlab-runner
    restart: unless-stopped

Then run these commands to configure the runner:

docker-compose up -d
docker-compose exec -T gitlab-runner gitlab-runner register

It will ask you for details about the GitLab instance you want to attach to. You will find this information at https://<your-gitlab-domain>/admin/runners. This example is for my GitLab instance:

Runtime platform                                    arch=amd64 os=linux pid=38 revision=943fc252 version=13.7.0
Running in system-mode.

Enter the GitLab instance URL (for example, https://gitlab.com/):
https://gitlab.techoverflow.net/
Enter the registration token:
Loo2lahf9Shoogheiyae
Enter a description for the runner:
[148a53203df8]: My-Runner
Enter tags for the runner (comma-separated):

Registering runner... succeeded                     runner=oc-oKWMH
Enter an executor: custom, docker-ssh, shell, virtualbox, docker-ssh+machine, docker, parallels, ssh, docker+machine, kubernetes:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
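
The registration is persisted on the host in ./config/config.toml, since ./config is mounted to /etc/gitlab-runner in the docker-compose.yml above. You can inspect it like this (run from the service directory):

cat config/config.toml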

Now stop the runner that is still running with the old config (i.e. with no GitLab instance attached):

docker-compose down

After that’s finished, you can run the script from our previous post Create a systemd service for your docker-compose project in 10 seconds in the directory where docker-compose.yml is located.

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will automatically generate a systemd service and start the runner (also on boot). For more details, see the corresponding blogpost. If your directory is named gitlab-runner, the service file will be stored in /etc/systemd/system/gitlab-runner.service, hence these are commands you can use to control the service:

Note that the script that creates the systemd service will automatically start the runner, so you don’t need to start it manually!

Start by

sudo systemctl start gitlab-runner

Restart by

sudo systemctl restart gitlab-runner

Stop by

sudo systemctl stop gitlab-runner

View status:

sudo systemctl status gitlab-runner

View & follow logs:

sudo journalctl -xfu gitlab-runner

View logs in less:

sudo journalctl -xu gitlab-runner

Also see Mini systemd cheat-sheet

Also see How to register gitlab runner for multiple GitLab instances.

Note that you can also use

docker-compose logs -f

to view the logs (run this from the directory where docker-compose.yml is located).

In case you see an error message like

error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on 192.168.178.1:53: no such host

in your jobs, see How to fix Gitlab CI error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on … no such host

Posted by Uli Köhler in GitLab

How to install Emscripten SDK on Ubuntu in 1 minute

This script installs the emscripten SDK on Ubuntu in ~/.emsdk and automatically adds source ~/.emsdk/emsdk_env.sh to .bashrc and .zshrc if they exist. It will also automatically update emscripten in case ~/.emsdk already exists.

Run this one-liner to install:

curl -fsSL https://techoverflow.net/scripts/install-emscripten.sh | bash

Script content:

#!/bin/bash
# This script installs emscripten to ~/.emsdk
if [[ -d "~/.emsdk" ]]
then # Update
  echo "Updating emscripten SDK..."
  cd ~/.emsdk && git pull
else # Install
  echo "Installing emscripten SDK..."
  git clone https://github.com/emscripten-core/emsdk.git ~/.emsdk
fi
# Install & activate latest SDK
# See https://emscripten.org/docs/getting_started/downloads.html for more details
cd ~/.emsdk
./emsdk install latest 
./emsdk activate latest    
# Add to .bashrc and .zshrc
if [[ -f "~/.bashrc" ]]; then echo -e "\nsource ~/.emsdk/emsdk_env.sh" >> ~/.bashrc; fi
if [[ -f "~/.zshrc" ]]; then echo -e "\nsource ~/.emsdk/emsdk_env.sh" >> ~/.zshrc; fi


Posted by Uli Köhler in WASM

How to install xenutils on Linux (XCP-NG)

Using CoreOS? See this post instead!

First, insert the guest-tools.iso supplied with XCP-NG into the DVD drive of the virtual machine.

Then run these commands. Note that this will reboot the machine after it finishes:

sudo mount -o ro /dev/sr0 /mnt/
cd /mnt/Linux
sudo ./install.sh -n
sudo reboot

After the VM reboots, XCP-NG should detect the management agent.

Please eject the guest tools medium from the machine after the reboot! Sometimes unnecessarily mounted media cause issues.

Posted by Uli Köhler in Virtualization

Best-practice configuration for MongoDB with docker-compose

Create /var/lib/mongodb/docker-compose.yml:

version: '3.1'
services:
  mongo:
    image: mongo
    volumes:
        - ./data:/data/db
    ports:
        - 27017:27017
    command: --serviceExecutor adaptive

This will store the MongoDB data in /var/lib/mongodb/data. I prefer this variant to using docker volumes since this method keeps all MongoDB-related data in the same directory.

Then create a systemd service using

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

See our post on how to Create a systemd service for your docker-compose project in 10 seconds for more details on this method.

You can access MongoDB at localhost:27017! It will also autostart after boot.
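
To quickly check that the instance is reachable, you can ping it using the mongo shell (a minimal sketch, assuming the mongo shell is installed on the host):

mongo --host localhost --port 27017 --eval 'db.runCommand({ping: 1})'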

Restart by

sudo systemctl restart mongodb

Stop by

sudo systemctl stop mongodb

View logs:

sudo journalctl -xfu mongodb

View logs in less:

sudo journalctl -xu mongodb


Posted by Uli Köhler in Docker, MongoDB

Fedora CoreOS minimal ignition config for XCP-NG

This is the Ignition config that I use to bring up my Fedora CoreOS instance on a VM on my XCP-NG server:

{
  "ignition": {
    "version": "3.2.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "sudo",
          "docker"
        ],
        "name": "uli",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDpvDSxIwnyMCFtIPRQmPUV6hh9lBJUR0Yo7ki+0Vxs+kcCHGjtcgDzcaHginj1zvy7nGwmcuGi5w83eKoANjK5CzpFT4vJeiXqtGllh0w+B5s6tbSsD0Wv3SC9Xc4NihjVjLU5gEyYmfs/sTpiow225Al9UVYeg1SzFr1I3oSSuw== [email protected]"
        ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "contents": {
          "source": "data:,coreos-test%0A"
        },
        "mode": 420
      },
      {
        "path": "/etc/profile.d/systemd-pager.sh",
        "contents": {
          "source": "data:,%23%20Tell%20systemd%20to%20not%20use%20a%20pager%20when%20printing%20information%0Aexport%20SYSTEMD_PAGER%3Dcat%0A"
        },
        "mode": 420
      },
      {
        "path": "/etc/sysctl.d/20-silence-audit.conf",
        "contents": {
          "source": "data:,%23%20Raise%20console%20message%20logging%20level%20from%20DEBUG%20(7)%20to%20WARNING%20(4)%0A%23%20to%20hide%20audit%20messages%20from%20the%20interactive%20console%0Akernel.printk%3D4"
        },
        "mode": 420
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "enabled": true,
        "name": "docker.service"
      },
      {
        "enabled": true,
        "name": "containerd.service"
      },
      {
        "dropins": [
          {
            "contents": "[Service]\n# Override Execstart in main unit\nExecStart=\n# Add new Execstart with `-` prefix to ignore failure\nExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM\nTTYVTDisallocate=no\n",
            "name": "autologin-core.conf"
          }
        ],
        "name": "[email protected]"
      }
    ]
  }
}

It is built from this YAML:

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: uli
      groups:
        - "sudo"
        - "docker"
      ssh_authorized_keys:
        - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDpvDSxIwnyMCFtIPRQmPUV6hh9lBJUR0Yo7ki+0Vxs+kcCHGjtcgDzcaHginj1zvy7nGwmcuGi5w83eKoANjK5CzpFT4vJeiXqtGllh0w+B5s6tbSsD0Wv3SC9Xc4NihjVjLU5gEyYmfs/sTpiow225Al9UVYeg1SzFr1I3oSSuw== [email protected]"

systemd:
  units:
    - name: docker.service
      enabled: true

    - name: containerd.service
      enabled: true
    - name: [email protected]
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure
          ExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM
          TTYVTDisallocate=no
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          coreos-test
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          # Tell systemd to not use a pager when printing information
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          # Raise console message logging level from DEBUG (7) to WARNING (4)
          # to hide audit messages from the interactive console
          kernel.printk=4

using

fcct --pretty --strict ignition.yml --output ignition.ign

or TechOverflow’s online transpiler tool.

Install using:

sudo coreos-installer install /dev/xvda --copy-network --ignition-url https://mydomain.com/ignition.ign

Features:

  • DHCP on all network interfaces
  • TTY on the screen
  • No password – remember to replace the SSH key with your own key!

Posted by Uli Köhler in Virtualization

How to install XCP-NG xe-guest-utilities on Fedora CoreOS using guest-tools.iso

Important note: While installing the Xen utilities using the CD ISO still works, it is outdated and you should prefer installing it using the rpm package. See our post Fedora CoreOS: How to install Xen/XCP-NG guest utilities using rpm-ostree

First, insert the guest-tools.iso supplied with XCP-NG into the DVD drive of the virtual machine.

Then run this command to install. Note that this will reboot the CoreOS instance!

curl -fsSL https://techoverflow.net/scripts/install-xenutils-coreos.sh | sudo bash /dev/stdin

This will run the following script:

sudo mount -o ro /dev/sr0 /mnt
sudo rpm-ostree install /mnt/Linux/*.x86_64.rpm
sudo cp -f /mnt/Linux/xen-vcpu-hotplug.rules /etc/udev/rules.d/
sudo cp -f /mnt/Linux/xe-linux-distribution.service /etc/systemd/system/
sudo sed 's/share\/oem\/xs/sbin/g' -i /etc/systemd/system/xe-linux-distribution.service
sudo systemctl daemon-reload
sudo systemctl enable /etc/systemd/system/xe-linux-distribution.service
sudo umount /mnt
sudo systemctl reboot

After rebooting the VM, XCP-NG should detect the management agent.
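
To verify that the agent is running after the reboot, you can check the service that the script above installed (a quick check; the unit name comes from the script):

sudo systemctl status xe-linux-distribution.service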

Based on work by steniofilho on the Fedora Forum.

Please eject the guest tools medium from the machine after the reboot! Sometimes unnecessarily mounted media cause issues.

Posted by Uli Köhler in Virtualization

How to list VMs in XCP-NG on the command line

In order to list VMs on the command line, login to XCP-NG using SSH and run this command:

xe vm-list

Example output:

[16:51 virt01-xcpng ~]# xe vm-list
uuid ( RO)           : 56dc99f2-c617-f7a9-5712-a4c9df54229a
     name-label ( RW): VM 1
    power-state ( RO): running


uuid ( RO)           : 268d56ab-9672-0f45-69ae-efbc88380b21
     name-label ( RW): VM2
    power-state ( RO): running


uuid ( RO)           : 9b1a771f-fb84-8108-8e01-6dac0f957b19
     name-label ( RW): My VM 3
    power-state ( RO): running
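
If you only need the UUIDs, e.g. for scripting, xe supports the --minimal flag here as well, printing the UUIDs comma-separated:

xe vm-list --minimal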


Posted by Uli Köhler in Virtualization