
How to check / enable DHCP in Alpine Linux installer

Once you have booted from the Alpine Linux installer CD and logged in as root with no password (as described in What is the Alpine linux default login & password?), you often want to test whether DHCP works before going through the installer and potentially having to repeat the process.

First, bring up the Ethernet interface using

ifconfig eth0 up

Then run the DHCP client using

udhcpc -i eth0

This will show you the IP address of the lease that your Alpine Live CD acquired.

Posted by Uli Köhler in Alpine Linux, Linux, Networking

How to install python3 pip / pip3 in Alpine Linux

Problem:

You want to install pip3 (also called python3-pip) in Alpine Linux, but running apk add python3-pip shows you that the package doesn’t exist:

/ # apk add python3-pip
ERROR: unable to select packages:
  python3-pip (no such package):
    required by: world[python3-pip]

Solution:

You need to install py3-pip instead using

apk add py3-pip

Example output:

/ # apk add py3-pip
(1/35) Installing libbz2 (1.0.8-r1)
(2/35) Installing expat (2.2.10-r1)
(3/35) Installing libffi (3.3-r2)
[...]


Posted by Uli Köhler in Alpine Linux, Container, Docker, Linux

How to fix Alpine Linux fatal error: stdio.h: No such file or directory

Problem:

When trying to compile a C/C++ program or library on Alpine Linux, you see an error message like

/home/user/test.cpp:5:10: fatal error: stdio.h: No such file or directory
  123 | #include <stdio.h>
      |          ^~~~~~~~~

Solution:

Install the libc headers using

apk add musl-dev


Posted by Uli Köhler in Alpine Linux, C/C++, GCC errors, Linux

How to use FCCT Transpiler online without installation

You can use fcct, the Fedora CoreOS Configuration Transpiler, in order to create Ignition JSON files for installing CoreOS from YAML.

Instead of installing fcct locally, you can use our hosted service:

Click here to go to TechOverflow FCCT Online

Currently our service runs FCCT 0.9.0 using the fcct-online container.

Posted by Uli Köhler in CoreOS, Linux

Mini systemd command cheat-sheet

These are the most common commands I use if my systemd service file is placed in /etc/systemd/system/myservice.service.

Enable (i.e. start at boot) and also start the service right now (--now):

sudo systemctl enable --now myservice

Start by

sudo systemctl start myservice

Restart by

sudo systemctl restart myservice

Stop by

sudo systemctl stop myservice

View status:

sudo systemctl status myservice

View & follow logs:

sudo journalctl -xfu myservice

View logs in less:

sudo journalctl -xu myservice


Posted by Uli Köhler in Linux

What is the Alpine linux default login & password?

The Alpine Linux installation ISO uses root as the default user and an empty password. In order to login, just enter the username root and press return.

Posted by Uli Köhler in Alpine Linux, Linux

How to install Podman on Ubuntu 20.04 in 25 seconds

Run this one-liner to install podman on your Ubuntu system:

wget -qO- https://techoverflow.net/scripts/install-podman-ubuntu.sh | sudo bash /dev/stdin

This is the code that is being run, which is exactly the code taken from the Podman installation page.

. /etc/os-release
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add -
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y install podman


Posted by Uli Köhler in Container, Linux, Podman

How to fix Jigasi config file getting overwritten

Problem:

My .jitsi-meet-cfg/jigasi/sip-communicator.properties is getting overwritten every time I start Jigasi, but I need to set

net.java.sip.communicator.impl.protocol.sip.acc1.AUTHORIZATION_NAME=abc123abc

in order for my SIP communication to work.

Solution:

Run this script after starting the jigasi container. It will fix the overwritten config and then restart the Jigasi Java process without restarting the container:

#!/bin/sh

sed -i -e "s/# SIP account/net.java.sip.communicator.impl.protocol.sip.acc1.AUTHORIZATION_NAME=abc123abc/g" .jitsi-meet-cfg/jigasi/sip-communicator.properties

# Reload config hack
docker-compose -f docker-compose.yml -f jigasi.yml exec jigasi /bin/bash -c 'kill $(pidof java)'


Original source: this GitHub ticket, which provides a similar solution for a similar problem.

Posted by Uli Köhler in Linux

How to use xargs in parallel

A good start is to use -P 4 -n 1: this runs up to 4 processes in parallel (-P 4), but gives each invocation of the command just one argument (-n 1).

These are the xargs options for parallel use from the xargs manpage:

-P, --max-procs=MAX-PROCS    run at most MAX-PROCS processes at a time

-n, --max-args=MAX-ARGS      use at most MAX-ARGS arguments per command line

Example:

cat urls.txt | xargs -P 4 -n 1 wget

This command will run up to 4 wget processes in parallel until every URL in urls.txt has been downloaded. The following processes would be run in parallel:

wget [URL #1]
wget [URL #2]
wget [URL #3]
wget [URL #4]

If you used -P 4 -n 2 instead, these processes would be run in parallel:

wget [URL #1] [URL #2]
wget [URL #3] [URL #4]
wget [URL #5] [URL #6]
wget [URL #7] [URL #8]

Using a higher value for -n might slightly increase efficiency since fewer processes need to be started, but it won’t work with commands that only accept a single argument.
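
You can observe the batching behaviour without downloading anything by substituting echo for wget (seq 8 stands in for the lines of urls.txt):

```shell
# Run up to 4 echo processes in parallel, 2 arguments per invocation
seq 8 | xargs -P 4 -n 2 echo
```

With -P 4 the order of the output lines is not deterministic, but each line will contain exactly two numbers, mirroring the wget [URL #1] [URL #2] batching shown above.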

Posted by Uli Köhler in Linux

How to extract href attributes from HTML page using grep & regex

You can use a regular expression to grep for href="..." attributes in an HTML page like this:

grep -oP "(HREF|href)=\"\K.+?(?=\")"

grep is invoked with -o (print only the matching part of each line) and -P (use the Perl-compatible regular expression engine, which is required for features like lookahead assertions and \K). The regular expression is basically

href=".+"

where the .+ is used in non-greedy mode (.+?):

href=".+?"

This will give us hits like

href="/files/image.png"

Since we only want the content of the quotes (") and not the href="..." part, we can use \K, which discards everything matched up to that point from the reported match, to remove the href part:

href=\"\K.+?\"

but we also want to get rid of the closing double quote. In order to do this, we can use positive lookahead assertions ((?=\")):

href=\"\K.+?(?=\")

Now we want to match both href and HREF to get some case insensitivity:

(href|HREF)=\"\K.+?(?=\")

Often we want to specifically match one file type. For example, we could match only .png:

(href|HREF)=\"\K.+?\.png(?=\")

In order to reduce falsely too long matches on some pages, we want to use [^\"]+? instead of .+?:

(href|HREF)=\"\K[^\"]+?\.png(?=\")

This disallows matches containing " characters, hence preventing more than the attribute value from being matched.
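
A quick offline check of the final regex, using a hypothetical HTML snippet instead of a live page:

```shell
# Pipe a sample snippet through the final regex; both href and HREF match
echo '<a HREF="/img/logo.png">Logo</a> <a href="/data/map.png">Map</a>' \
  | grep -oP "(href|HREF)=\"\K[^\"]+?\.png(?=\")"
# Prints:
# /img/logo.png
# /data/map.png
```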

Usage example:

wget -qO- https://nasagrace.unl.edu/data/NASApublication/maps/ | grep -oP "(href|HREF)=\"\K[^\"]+?\.png(?=\")"

Output:

/data/NASApublication/maps/GRACE_SFSM_20201026.png
[...]
Posted by Uli Köhler in Linux

Systemd Unit for autostarting NodeJS application (npm start)

This systemd unit file autostarts your NodeJS service using npm start. Place it in /etc/systemd/system/myapplication.service (replace myapplication by the name of your application!)

[Unit]
Description=My application

[Service]
Type=simple
Restart=always
User=nobody
Group=nobody
WorkingDirectory=/opt/myapplication
ExecStart=/usr/bin/npm start

[Install]
WantedBy=multi-user.target

Replace:

  • /opt/myapplication by the directory of your application (where package.json is located)
  • User=nobody and Group=nobody by the user and group you want to run the service as
  • Optionally you can add a custom description instead of Description=My application

Then enable start at boot & start right now: (Remember to replace myapplication by the name of the service file you chose!)

sudo systemctl enable --now myapplication

Start by

sudo systemctl start myapplication

Restart by

sudo systemctl restart myapplication

Stop by

sudo systemctl stop myapplication

View & follow logs:

sudo journalctl -xfu myapplication

View logs in less:

sudo journalctl -xu myapplication


Posted by Uli Köhler in Linux

How to disable SELinux in Fedora CoreOS

Warning: Depending on your application, disabling the SELinux security layer might be a bad idea, since it can introduce new security risks, especially if the container system has security issues.

In order to disable SELinux on Fedora CoreOS, run this:

sudo sed -i -e 's/SELINUX=/SELINUX=disabled #/g' /etc/selinux/config
sudo systemctl reboot

Note that this will reboot your system in order for the changes to take effect.

The sed command will replace the default

SELINUX=enforcing

in /etc/selinux/config with

SELINUX=disabled
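
Note that the trailing # in the sed replacement comments out the old value rather than deleting it. You can see the effect on a temporary copy (assuming the stock SELINUX=enforcing line):

```shell
# Dry-run of the replacement on a temporary file instead of /etc/selinux/config
tmpfile=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmpfile"
sed -i -e 's/SELINUX=/SELINUX=disabled #/g' "$tmpfile"
cat "$tmpfile"
rm "$tmpfile"
# Prints:
# SELINUX=disabled #enforcing
# SELINUXTYPE=targeted
```

SELINUXTYPE= is left untouched because the pattern requires SELINUX to be followed directly by =.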


Posted by Uli Köhler in Container, Docker, Linux

How to enable SSH access in already-running GRML

You can enable SSH in GRML using the ssh boot option. But if you have already booted GRML, you can enable SSH using

Start ssh

Also remember to set a root password:

passwd
Posted by Uli Köhler in Linux

Can’t see md RAID devices in GRML? How to fix

Run

mdadm --assemble --scan

and you’ll see all your MD devices.

Posted by Uli Köhler in Linux

How to fix EC2 [error] Veeam Cannot find a compatible file system for storing snapshot data

Problem:

You are trying to run a Veeam backup on your EC2 machine (e.g. to a remote SMB or NFS service), but the backup fails immediately after the backup job is being started. The log looks like this:

21:52:29   Job BackupJob1 started at 2020-12-23 21:52:29 GMT
21:52:30   Preparing to backup
21:52:40   [error] Failed to create volume snapshot
21:52:41   [error] Failed to take volume snapshot
21:52:41   [error] Failed to perform backup
21:52:41   [error] Cannot find a compatible file system for storing snapshot data
21:52:41   [error] Processing finished with errors at 2020-12-23 21:52:41 GMT

The detailed log in /var/log/veeam/veeamsvc.log shows these errors:

[23.12.2020 21:52:41.069] <140589764957952> lpbcore|       Searching for the best candidate to store snapshot data.
[23.12.2020 21:52:41.069] <140589764957952> lpbcore|         Logical block size limit 512 bytes
[23.12.2020 21:52:41.071] <140589764957952> lpbcore|       Searching for the best candidate to store snapshot data. Failed.
[23.12.2020 21:52:41.071] <140589764957952> lpbcore| ERR |No suitable devices for snapshot data storage were found.
[23.12.2020 21:52:41.071] <140589764957952> lpbcore| >>  |An exception was thrown from thread [140589764957952].
[23.12.2020 21:52:41.071] <140589764957952> lpbcore|     Detecting snapshot storage parameters. Failed.
[23.12.2020 21:52:41.071] <140589764957952> lpbcore|   Creating snapshot storage. Storage type: stretch file Failed.
[23.12.2020 21:52:41.071] <140589764957952> lpbcore| Creating machine snapshot using VeeamSnap kernel module Failed.
[23.12.2020 21:52:41.071] <140589764957952> lpbcore| ERR |Snapshot creation operation has failed.
[23.12.2020 21:52:41.071] <140589764957952> lpbcore| >>  |Cannot find a compatible file system for storing snapshot data
[23.12.2020 21:52:41.071] <140589764957952> lpbcore| >>  |--tr:Failed to create machine snapshot
[23.12.2020 21:52:41.071] <140589764957952> lpbcore| >>  |An exception was thrown from thread [140589764957952].
[23.12.2020 21:52:41.071] <140589764957952>        | Thread finished. Role: 'snapshot operation'.
[23.12.2020 21:52:41.102] <140590156179200> lpbcore| ERR |Cannot find a compatible file system for storing snapshot data
[23.12.2020 21:52:41.102] <140590156179200> lpbcore| >>  |--tr:Failed to create machine snapshot
[23.12.2020 21:52:41.102] <140590156179200> lpbcore| >>  |--tr:Failed to finish snapshot creation process.

Solution:

Veeam currently fails to detect the EC2 EBS boot device /dev/xvda as a proper device to store snapshot data on.

  1. You need to create a separate EBS block volume. The recommended size is 15% of the total size of the disks that will be backed up. In case you’re backing up only files or directories, use 25% of the maximum anticipated size of the files or directories to be backed up.
  2. Attach that new EBS block device to the VM (e.g. as /dev/xvdb – use lsblk to find the correct drive !)
  3. Create a new partition table and a new partition on the EBS device using something like
    sudo fdisk /dev/xvdb

    then enter these commands into fdisk: g to create a new GUID partition table, then n to create a new partition (you can accept the default parameters everywhere). Finally, run w to write the changes to disk and exit fdisk.

  4. Create a new filesystem on the new partition e.g. using
    sudo mkfs.ext4 /dev/xvdb1
  5. Mount the partition somewhere (e.g. on /mnt/) using a command like mount /dev/xvdb1 /mnt. In order to auto-mount it on boot, add a line like
    /dev/xvdb1 /mnt ext4 defaults,auto 0 0

    to /etc/fstab. lsblk should now tell you that the partition is mounted, e.g.

    xvdb    202:16   0    2G  0 disk
    └─xvdb1 202:17   0    2G  0 part /mnt
    
  6. Re-run veeam. The backup should now work properly.

I don’t know exactly why this issue occurs, but EC2’s /dev/xvda doesn’t seem to be a normal block device from Veeam’s viewpoint.

Note that Veeam computes the minimum space for a snapshot store for entire-machine backups as 10% of free space + 5% of used space. I don’t know whether these factors are constant or determined dynamically, hence my recommendation of 15% of total space is much more conservative.
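
As a sketch of the sizing math (hypothetical numbers; all values in GiB):

```shell
# Hypothetical machine: 100 GiB total disk, 40 GiB used, 60 GiB free
TOTAL=100; USED=40; FREE=60
# Veeam's computed minimum: 10% of free space + 5% of used space
echo "Minimum snapshot store: $(( FREE * 10 / 100 + USED * 5 / 100 )) GiB"
# The conservative rule of thumb from this post: 15% of total size
echo "Recommended EBS volume: $(( TOTAL * 15 / 100 )) GiB"
# Prints:
# Minimum snapshot store: 8 GiB
# Recommended EBS volume: 15 GiB
```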

Posted by Uli Köhler in Backup, Linux

How to activate auto-reboot on kernel hung on Linux in 20 seconds

Use this script (run as root) to automatically configure Linux to reboot after a kernel hung event:

# Comment out previous kernel hung configuration
sed -i -e 's/kernel.hung_/#kernel.hung_/g' /etc/sysctl.conf
sed -i -e 's/kernel.panic/#kernel.panic/g' /etc/sysctl.conf
# Add configuration (which becomes effective after reboot)
cat <<EOF >>/etc/sysctl.conf

# Reboot after kernel hang
kernel.hung_task_timeout_secs=600
kernel.hung_task_panic=1
kernel.panic=300
EOF
# Activate config NOW
sysctl -w kernel.hung_task_timeout_secs=600
sysctl -w kernel.hung_task_panic=1
sysctl -w kernel.panic=300

The configuration works by converting a kernel hung event into a kernel panic after kernel.hung_task_timeout_secs (600 seconds, i.e. 10 minutes, in the configuration above); kernel.panic=300 then makes the system reboot 300 seconds after the panic.

The configuration takes effect immediately, without a reboot, and will also stay active after future reboots.
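
The two sed commands at the top make the script safe to re-run: they comment out any previously present kernel.hung_* and kernel.panic settings so that only the freshly appended values are active. You can see the effect on a temporary file (a stand-in for /etc/sysctl.conf):

```shell
# Demonstrate the comment-out step on a temporary file
tmpfile=$(mktemp)
printf 'kernel.hung_task_timeout_secs=120\nkernel.panic=0\n' > "$tmpfile"
sed -i -e 's/kernel.hung_/#kernel.hung_/g' -e 's/kernel.panic/#kernel.panic/g' "$tmpfile"
cat "$tmpfile"
rm "$tmpfile"
# Prints:
# #kernel.hung_task_timeout_secs=120
# #kernel.panic=0
```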

Posted by Uli Köhler in Linux

How to create a systemd backup timer & service in 10 seconds

In our previous post Create a systemd service for your docker-compose project in 10 seconds we introduced a script that automatically creates a systemd service to start a docker-compose-based project. In this post, we’ll show how to automatically create a systemd service & timer that runs a backup script every day.

First, you need to create a file named backup.sh in the directory where docker-compose.yml is located. This file will be run by the systemd service every day. What that file contains is entirely up to you and we will provide examples in future blogposts.

Secondly, run

wget -qO- https://techoverflow.net/scripts/create-backup-service.sh | sudo bash /dev/stdin

from the directory where docker-compose.yml is located. Note that the script will use the directory name as a name for the service and timer that is created. For example, running the script in /var/lib/redmine-mydomain will cause redmine-mydomain-backup to be used as the service name.
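
The name derivation is simply the directory’s basename plus -backup; you can preview it like this (using a hypothetical project directory for illustration):

```shell
# Hypothetical project directory, used only for illustration
mkdir -p /tmp/redmine-mydomain
cd /tmp/redmine-mydomain
# This is the service/timer name the script will derive:
echo "$(basename $(pwd))-backup"
# Prints: redmine-mydomain-backup
```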

Example output from the script:

Creating systemd service... /etc/systemd/system/redmine-mydomain-backup.service
Creating systemd timer... /etc/systemd/system/redmine-mydomain-backup.timer
Enabling & starting redmine-mydomain-backup.timer
Created symlink /etc/systemd/system/timers.target.wants/redmine-mydomain-backup.timer → /etc/systemd/system/redmine-mydomain-backup.timer.

The script will create /etc/systemd/system/redmine-mydomain-backup.service containing the specification on what exactly to run:

[Unit]
Description=redmine-mydomain-backup

[Service]
Type=oneshot
ExecStart=/bin/bash backup.sh
WorkingDirectory=/var/lib/redmine-mydomain

and /etc/systemd/system/redmine-mydomain-backup.timer containing the logic for when the .service is started:

[Unit]
Description=redmine-mydomain-backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

and will automatically start and enable the timer. This means: no further steps are needed after running this script!

In order to show the current status of the service, use e.g.

sudo systemctl status redmine-mydomain-backup.timer

Example output:

● redmine-mydomain-backup.timer - redmine-mydomain-backup
     Loaded: loaded (/etc/systemd/system/redmine-mydomain-backup.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Thu 2020-12-10 02:50:31 CET; 19min ago
    Trigger: Fri 2020-12-11 00:00:00 CET; 20h left
   Triggers: ● redmine-mydomain-backup.service

Dec 10 02:50:31 myserverhostname systemd[1]: Started redmine-mydomain-backup.

In the

Trigger: Fri 2020-12-11 00:00:00 CET; 20h left

line you can see when the service will be run next. By default, the script generates timers that run OnCalendar=daily, which means the service will be run at 00:00:00 every day. Check out the systemd.time manpage for further information on the syntax you can use to specify other schedules.

In order to run the backup immediately (it will still run daily after doing this), do

sudo systemctl start redmine-mydomain-backup.service

(note that you need to run systemctl start on the .service! Running systemctl start on the .timer will only enable the timer and not run the service immediately).

In order to view the logs, use

sudo journalctl -xfu redmine-mydomain-backup.service

(just like above, you need to run journalctl -xfu on the .service, not on the .timer).

In order to disable automatic backups, use e.g.

sudo systemctl disable redmine-mydomain-backup.timer

Source code:

#!/bin/bash
# Create a systemd service & timer that runs the given backup daily
# by Uli Köhler - https://techoverflow.net
# Licensed as CC0 1.0 Universal
export SERVICENAME=$(basename $(pwd))-backup

export SERVICEFILE=/etc/systemd/system/${SERVICENAME}.service
export TIMERFILE=/etc/systemd/system/${SERVICENAME}.timer

echo "Creating systemd service... $SERVICEFILE"
sudo cat >$SERVICEFILE <<EOF
[Unit]
Description=$SERVICENAME

[Service]
Type=oneshot
ExecStart=/bin/bash backup.sh
WorkingDirectory=$(pwd)
EOF

echo "Creating systemd timer... $TIMERFILE"
sudo cat >$TIMERFILE <<EOF
[Unit]
Description=$SERVICENAME

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

echo "Enabling & starting $SERVICENAME.timer"
sudo systemctl enable $SERVICENAME.timer
sudo systemctl start $SERVICENAME.timer


Posted by Uli Köhler in Docker, Linux

How to use pg_dump in Gitlab Docker container

When using the official gitlab Docker container, you can use this command to create an SQL dump of the database using pg_dump:

docker exec -t -u gitlab-psql [container name] pg_dump -h /var/opt/gitlab/postgresql/ -d gitlabhq_production > gitlab-dump.sql

This will save the SQL dump of the database into gitlab-dump.sql.

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab pg_dump -h /var/opt/gitlab/postgresql/ -d gitlabhq_production > gitlab-dump.sql

Note that gitlab in this command is the container name.

Posted by Uli Köhler in Docker, Linux

How to run psql in Gitlab Docker image

When using the official gitlab Docker container, you can use this command to run psql:

docker exec -t -u gitlab-psql [container name] psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

Note that gitlab in this command is the container name.

Posted by Uli Köhler in Databases, Docker, Linux

How to run Nextcloud cron job manually using docker-compose

For docker-compose based Nextcloud installations, this is the command to run the cron job manually:

docker-compose exec -u www-data nextcloud php cron.php

You need to run this from the directory where docker-compose.yml is located.

Posted by Uli Köhler in Linux, Nextcloud