Linux

How to fix wifi blocked on boot on Raspberry Pi 4

After migrating a fresh Raspbian install (using the official 2020-04 Raspbian Lite image) from my Raspberry Pi Model 2 to my new Raspberry Pi 4, WiFi was disabled at boot.

I tried configuring WiFi using raspi-config, but that didn’t change anything.

First, try rfkill unblock all and then reboot to check whether the WiFi adapter stays unblocked after the reboot. In my case, this fixed the issue permanently and WiFi worked immediately.
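
For reference, this is the full command sequence (rfkill list only inspects the current block state first):

sudo rfkill list
sudo rfkill unblock all
sudo reboot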

If that doesn’t help, check if country=... is set in /etc/wpa_supplicant/wpa_supplicant.conf. You need to set it to your correct country code to comply with regulatory limits. For example, use country=DE to set the regulatory domain to Germany.
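
For reference, the header of a typical Raspbian /etc/wpa_supplicant/wpa_supplicant.conf looks like this (the first two lines are the Raspbian defaults; only the country line is relevant here):

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=DE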

Posted by Uli Köhler in Linux, Raspberry Pi

How to setup OnlyOffice using docker-compose & nginx

Prerequisite: Install docker and docker-compose

For example, follow our guide How to install docker and docker-compose on Ubuntu in 30 seconds

Step 1: Create docker-compose.yml

Create the directory where we’ll install OnlyOffice using

sudo mkdir /var/lib/onlyoffice

and then edit the docker-compose configuration using e.g.

sudo nano /var/lib/onlyoffice/docker-compose.yml

and copy and paste this content:

version: '3'
services:
  onlyoffice-documentserver:
    image: onlyoffice/documentserver:latest
    restart: always
    environment:
      - JWT_ENABLED=true
      - JWT_SECRET=ahSaTh4waeKe4zoocohngaihaub5pu
    ports:
      - 2291:80
    volumes:
      - ./onlyoffice/data:/var/www/onlyoffice/Data
      - ./onlyoffice/lib:/var/lib/onlyoffice
      - ./onlyoffice/logs:/var/log/onlyoffice
      - ./onlyoffice/db:/var/lib/postgresql

Now set your own random password in JWT_SECRET=...! Don’t forget this step, or anyone will be able to use your OnlyOffice server! I’m using pwgen 30 to generate a new random password (install it using sudo apt -y install pwgen).
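
For example, this generates a single random 30-character password instead of a whole screenful:

pwgen 30 1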

Step 2: Setup systemd service

Create the service using sudo nano /etc/systemd/system/onlyoffice.service:

[Unit]
Description=OnlyOffice server
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
# Shutdown container (if running) when unit is stopped
ExecStartPre=/usr/local/bin/docker-compose -f /var/lib/onlyoffice/docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f /var/lib/onlyoffice/docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f /var/lib/onlyoffice/docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

Now enable & start the service using

sudo systemctl enable onlyoffice
sudo systemctl start onlyoffice
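
You can verify that the service (and hence the OnlyOffice container) is running using:

sudo systemctl status onlyoffice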

Step 3: Create nginx reverse proxy configuration

Note that we mapped OnlyOffice’s port 80 to port 2291. If you’re not using nginx as your reverse proxy, you need to manually configure your reverse proxy to forward requests to port 2291.

server {
    server_name onlyoffice.mydomain.org;

    access_log /var/log/nginx/onlyoffice.access_log;
    error_log /var/log/nginx/onlyoffice.error_log info;

    location / {
        proxy_pass http://127.0.0.1:2291;
        proxy_http_version 1.1;
        proxy_read_timeout 3600s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host            $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header X-Frontend-Host $host;
        # Uncomment this line and reload once you have set up TLS for that domain!
        # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    }

    listen 80;
}

Now test if your nginx config works using sudo nginx -t and reload nginx using sudo service nginx reload.

I now recommend setting up Let’s Encrypt for your domain so that your OnlyOffice instance can only be accessed over an encrypted connection (sudo certbot --nginx; see other guides if you don’t know how to do that).

Once certbot asks you whether to redirect, choose option 2 – Redirect to HTTPS.

Step 4: Test OnlyOffice

If your installation worked, opening your domain in the browser should show the OnlyOffice Document Server welcome screen.

If not, try checking the logs using

sudo journalctl -xu onlyoffice
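
You can also check from the shell whether the Document Server responds locally. Recent onlyoffice/documentserver images provide a healthcheck endpoint that should print true; if yours doesn’t, just open the domain in a browser instead:

curl http://localhost:2291/healthcheck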

(Optional) Step 5: Configure NextCloud to use OnlyOffice

If you are running NextCloud, go to Settings => ONLYOFFICE and enter your domain and the JWT_SECRET you created before.

Ensure that Connect to demo ONLYOFFICE Document Server is unchecked and click Save.

Nextcloud will tell you at the top right if it has been able to connect to your OnlyOffice instance successfully:

  • Settings successfully updated means that NextCloud is now connected to OnlyOffice
  • Invalid token means that your password / secret key does not match
  • Other messages typically mean that your OnlyOffice instance is not running or that you haven’t entered the correct domain or protocol. I recommend using only https:// – use http:// for testing only, and don’t forget to revert to https:// once you have found the issue.
Posted by Uli Köhler in Container, Docker, Linux, nginx

How to fix Terraria does not start / immediately exits on Linux

If Terraria exits immediately and the Terraria window never appears, try

cd ~/.local/share/Steam/steamapps/common/Terraria
./Terraria.bin.x86_64 > terraria.log

As it turns out, Terraria only starts properly on my machine if stdout is redirected to a file (or piped into another program), hence > terraria.log is necessary to get it running.

Posted by Uli Köhler in Linux

How to fix landscape-package-reporter: UnicodeDecodeError: ‘utf-8’ codec can’t decode byte

On some servers attached to a landscape instance, I encountered this stacktrace when trying to run sudo landscape-package-reporter:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/lib/python3/dist-packages/landscape/client/package/reporter.py", line 92, in <lambda>
    result.addCallback(lambda x: self.request_unknown_hashes())
  File "/usr/lib/python3/dist-packages/landscape/client/package/reporter.py", line 485, in request_unknown_hashes
    self._facade.ensure_channels_reloaded()
  File "/usr/lib/python3/dist-packages/landscape/lib/apt/package/facade.py", line 265, in ensure_channels_reloaded
    self.reload_channels()
  File "/usr/lib/python3/dist-packages/landscape/lib/apt/package/facade.py", line 253, in reload_channels
    version, with_info=False).get_hash()
  File "/usr/lib/python3/dist-packages/landscape/lib/apt/package/facade.py", line 402, in get_package_skeleton
    return build_skeleton_apt(pkg, with_info=with_info, with_unicode=True)
  File "/usr/lib/python3/dist-packages/landscape/lib/apt/package/skeleton.py", line 131, in build_skeleton_apt
    version.record, "Provides", DEB_PROVIDES))
  File "/usr/lib/python3/dist-packages/apt/package.py", line 690, in record
    return Record(self._records.record)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 724: invalid start byte

Tracing down the issue, it was related to a misplaced byte sequence (displayed as EF BF BD, the UTF-8 encoding of the replacement character) in version 1.0.0.944 of the veeamsnap package in /var/lib/apt/lists/repository.veeam.com_backup_linux_agent_dpkg_debian_public_dists_stable_veeam_binary-amd64_Packages: the Description field contains this text:

[...] Linux � simple [...]

The strange character is the U+FFFD � REPLACEMENT CHARACTER.

You can fix it by deleting this character; it’s right at the end of /var/lib/apt/lists/repository.veeam.com_backup_linux_agent_dpkg_debian_public_dists_stable_veeam_binary-amd64_Packages. However, whenever that repository is updated, your change will be overwritten.
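
If you want to patch the list file anyway (until the next repository update overwrites it), a sed one-liner can strip the offending byte. This assumes the raw byte in the file is the 0x96 from the traceback; -i.bak keeps a backup copy:

sudo sed -i.bak 's/\x96//g' /var/lib/apt/lists/repository.veeam.com_backup_linux_agent_dpkg_debian_public_dists_stable_veeam_binary-amd64_Packages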

In order to fix it (my fix is for landscape-client version 18.01-0ubuntu3.5), I added a try: ... except: ... clause to skeleton.py which ignores some properties of packages for which the issue occurs:

try:
    relations.update(parse_record_field(
        version.record, "Provides", DEB_PROVIDES))
    relations.add((
        DEB_NAME_PROVIDES,
        "%s = %s" % (version.package.name, version.version)))
    relations.update(parse_record_field(
        version.record, "Pre-Depends", DEB_REQUIRES, DEB_OR_REQUIRES))
    relations.update(parse_record_field(
        version.record, "Depends", DEB_REQUIRES, DEB_OR_REQUIRES))

    relations.add((
        DEB_UPGRADES, "%s < %s" % (version.package.name, version.version)))

    relations.update(parse_record_field(
        version.record, "Conflicts", DEB_CONFLICTS))
    relations.update(parse_record_field(
        version.record, "Breaks", DEB_CONFLICTS))
    skeleton.relations = sorted(relations)

    if with_info:
        skeleton.section = version.section
        skeleton.summary = version.summary
        skeleton.description = version.description
        skeleton.size = version.size
        if version.installed_size > 0:
            skeleton.installed_size = version.installed_size
        if with_unicode and not _PY3:
            skeleton.section = skeleton.section.decode("utf-8")
            skeleton.summary = skeleton.summary.decode("utf-8")
            # Avoid double-decoding package descriptions in build_skeleton_apt,
            # which causes an error with newer python-apt (Xenial onwards)
            if not isinstance(skeleton.description, unicode):
                skeleton.description = skeleton.description.decode("utf-8")
    return skeleton
except UnicodeError:
    return skeleton

Replace /usr/lib/python3/dist-packages/landscape/lib/apt/package/skeleton.py with this:

from landscape.lib.hashlib import sha1

import apt_pkg

from twisted.python.compat import unicode, _PY3


PACKAGE   = 1 << 0
PROVIDES  = 1 << 1
REQUIRES  = 1 << 2
UPGRADES  = 1 << 3
CONFLICTS = 1 << 4

DEB_PACKAGE       = 1 << 16 | PACKAGE
DEB_PROVIDES      = 2 << 16 | PROVIDES
DEB_NAME_PROVIDES = 3 << 16 | PROVIDES
DEB_REQUIRES      = 4 << 16 | REQUIRES
DEB_OR_REQUIRES   = 5 << 16 | REQUIRES
DEB_UPGRADES      = 6 << 16 | UPGRADES
DEB_CONFLICTS     = 7 << 16 | CONFLICTS


class PackageTypeError(Exception):
    """Raised when an unsupported package type is passed to build_skeleton."""


class PackageSkeleton(object):

    section = None
    summary = None
    description = None
    size = None
    installed_size = None
    _hash = None

    def __init__(self, type, name, version):
        self.type = type
        self.name = name
        self.version = version
        self.relations = []

    def add_relation(self, type, info):
        self.relations.append((type, info))

    def get_hash(self):
        """Calculate the package hash.

        If C{set_hash} has been used, that hash will be returned and the
        hash won't be the calculated value.
        """
        if self._hash is not None:
            return self._hash
        # We use ascii here as encoding  for backwards compatibility as it was
        # default encoding for conversion from unicode to bytes in Python 2.7.
        package_info = ("[%d %s %s]" % (self.type, self.name, self.version)
                        ).encode("ascii")
        digest = sha1(package_info)
        self.relations.sort()
        for pair in self.relations:
            digest.update(("[%d %s]" % (pair[0], pair[1])
                           ).encode("ascii"))
        return digest.digest()

    def set_hash(self, package_hash):
        """Set the hash to an explicit value.

        This should be used when the hash is previously known and can't
        be calculated from the relations anymore.

        The only use case for this is package resurrection. We're
        planning on getting rid of package resurrection, and this code
        can be removed when that is done.
        """
        self._hash = package_hash


def relation_to_string(relation_tuple):
    """Convert an apt relation to a string representation.

    @param relation_tuple: A tuple, (name, version, relation). version
        and relation can be the empty string, if the relation is on a
        name only.

    Returns something like "name > 1.0"
    """
    name, version, relation_type = relation_tuple
    relation_string = name
    if relation_type:
        relation_string += " %s %s" % (relation_type, version)
    return relation_string


def parse_record_field(record, record_field, relation_type,
                       or_relation_type=None):
    """Parse an apt C{Record} field and return skeleton relations

    @param record: An C{apt.package.Record} instance with package information.
    @param record_field: The name of the record field to parse.
    @param relation_type: The deb relation that can be passed to
        C{skeleton.add_relation()}
    @param or_relation_type: The deb relation that should be used if
        there is more than one value in a relation.
    """
    relations = set()
    values = apt_pkg.parse_depends(record.get(record_field, ""))
    for value in values:
        value_strings = [relation_to_string(relation) for relation in value]
        value_relation_type = relation_type
        if len(value_strings) > 1:
            value_relation_type = or_relation_type
        relation_string = " | ".join(value_strings)
        relations.add((value_relation_type, relation_string))
    return relations


def build_skeleton_apt(version, with_info=False, with_unicode=False):
    """Build a package skeleton from an apt package.

    @param version: An instance of C{apt.package.Version}
    @param with_info: Whether to extract extra information about the
        package, like description, summary, size.
    @param with_unicode: Whether the C{name} and C{version} of the
        skeleton should be unicode strings.
    """
    name, version_string = version.package.name, version.version
    if with_unicode:
        name, version_string = unicode(name), unicode(version_string)
    skeleton = PackageSkeleton(DEB_PACKAGE, name, version_string)
    relations = set()
    try:
        relations.update(parse_record_field(
            version.record, "Provides", DEB_PROVIDES))
        relations.add((
            DEB_NAME_PROVIDES,
            "%s = %s" % (version.package.name, version.version)))
        relations.update(parse_record_field(
            version.record, "Pre-Depends", DEB_REQUIRES, DEB_OR_REQUIRES))
        relations.update(parse_record_field(
            version.record, "Depends", DEB_REQUIRES, DEB_OR_REQUIRES))

        relations.add((
            DEB_UPGRADES, "%s < %s" % (version.package.name, version.version)))

        relations.update(parse_record_field(
            version.record, "Conflicts", DEB_CONFLICTS))
        relations.update(parse_record_field(
            version.record, "Breaks", DEB_CONFLICTS))
        skeleton.relations = sorted(relations)

        if with_info:
            skeleton.section = version.section
            skeleton.summary = version.summary
            skeleton.description = version.description
            skeleton.size = version.size
            if version.installed_size > 0:
                skeleton.installed_size = version.installed_size
            if with_unicode and not _PY3:
                skeleton.section = skeleton.section.decode("utf-8")
                skeleton.summary = skeleton.summary.decode("utf-8")
                # Avoid double-decoding package descriptions in build_skeleton_apt,
                # which causes an error with newer python-apt (Xenial onwards)
                if not isinstance(skeleton.description, unicode):
                    skeleton.description = skeleton.description.decode("utf-8")
        return skeleton
    except UnicodeError:
        return skeleton

After that, you can run sudo landscape-package-reporter again.

Posted by Uli Köhler in Linux, Python

How to fix Python ‘ValueError: Namespace GnomeDesktop not available’ on Ubuntu

Problem:

On Ubuntu, you are trying to run a Python script using the gi package and GnomeDesktop but you are seeing this stacktrace:

Traceback (most recent call last):
  File "myscript.py", line 48, in <module>
    gi.require_version('GnomeDesktop', '3.0')
  File "/usr/lib/python3/dist-packages/gi/__init__.py", line 130, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace GnomeDesktop not available

Solution

Install gir1.2-gnomedesktop-3.0:

sudo apt -y install gir1.2-gnomedesktop-3.0

and retry running your script.
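
To verify the fix, you can reproduce just the failing call from the stacktrace in a one-liner:

python3 -c "import gi; gi.require_version('GnomeDesktop', '3.0'); from gi.repository import GnomeDesktop; print('OK')"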

Posted by Uli Köhler in Linux, Python

How to install RocksDB on Ubuntu

deb-buildscripts provides a convenient build script for building RocksDB as a deb package. Since RocksDB optimizes for the current computer’s CPU instruction set extensions (-march=native), you need to build RocksDB on the computer where you will run it, or at least on one with the same CPU generation.

First install the prerequisites:

sudo apt-get -y install devscripts debhelper build-essential fakeroot zlib1g-dev libbz2-dev libsnappy-dev libgflags-dev libzstd-dev

then build RocksDB:

git clone https://github.com/ulikoehler/deb-buildscripts.git
cd deb-buildscripts
./deb-rocksdb.py

This will build the librocksdb and librocksdb-dev packages in the deb-buildscripts directory.
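
Once the build has finished, you can install the resulting packages using dpkg, e.g. (the exact filenames depend on the RocksDB version being built):

sudo dpkg -i librocksdb*.deb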

Posted by Uli Köhler in C/C++, Linux

How to install x11vnc on DISPLAY=:0 as a systemd service

First, install x11vnc using e.g.

sudo apt -y install x11vnc

Now run this script while logged in as the user that is running the X11 session – the script needs to know the correct user to start x11vnc as ($USER is passed as an argument).

wget -qO- https://techoverflow.net/scripts/install-x11vnc.sh | sudo bash -s $USER

This will install a systemd service like

[Unit]
Description=VNC Server for X11

[Service]
Type=simple
User=uli
Group=uli
ExecStart=/usr/bin/x11vnc -display :0 -norc -forever -shared -autoport 5900
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

and automatically enable it on boot and start it.
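
Assuming the installer names the unit x11vnc (check /etc/systemd/system/ if in doubt – the installer script, not this guide, decides the name), you can check the service status using:

sudo systemctl status x11vnc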

You can now connect to the computer using VNC, e.g. using:

vncviewer [hostname]
Posted by Uli Köhler in Linux

How to connect to your 3D printer using picocom

Use this command to connect to your Marlin-based 3D printer:

picocom -b 115200 /dev/ttyUSB0 --imap lfcrlf --echo

This command might also work for firmwares other than Marlin.

On some boards the USB port is called /dev/ttyACM0 instead of /dev/ttyUSB0. In this case, use

picocom -b 115200 /dev/ttyACM0 --imap lfcrlf --echo

By default, picocom uses character mappings that cause newlines not to be displayed correctly. --imap lfcrlf maps line feeds sent by the printer to CR + LF on the terminal. --echo enables local echo, so you can see what you are typing.
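
Once connected, you can type G-code commands directly. For example, M115 asks a Marlin firmware to report its version and capabilities:

M115

To exit picocom, press Ctrl+A followed by Ctrl+X.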


Posted by Uli Köhler in Hardware, Linux

How to install PyPy3 + virtual environment in 30 seconds

TL;DR:

Run this

wget -qO- https://techoverflow.net/scripts/pypy3-installer.sh | bash

then run vpypy every time you want to activate the environment (you might need to restart your shell first). The script currently assumes you are running Linux x86_64 and have virtualenv installed (sudo apt install virtualenv or similar if you don’t have it installed).

Full description:

PyPy is an alternate Python implementation that can be used to speed up many workloads. However, installing it is a somewhat cumbersome process, especially if you don’t have too much experience with virtual environments and related concepts.

We provide a script that automatically downloads PyPy3, installs it to ~/.pypy3 and creates a virtual environment in ~/.pypy3-virtualenv. After that, it creates a shell alias vpypy for source ~/.pypy3-virtualenv/bin/activate, providing an easily memorizable way of activating the environment without having to remember the directory.

Also, since both PyPy3 itself and the virtual environment are installed in the user’s home directory, running this script does not require admin permissions.

After running the script using

wget -qO- https://techoverflow.net/scripts/pypy3-installer.sh | bash

you can activate the virtual environment using the vpypy alias that is automatically added to ~/.bashrc and ~/.zshrc. Restart your shell for the alias definition to load, then run vpypy:

uli@uli-laptop ~ % vpypy
(.pypy3-virtualenv) uli@uli-laptop ~ % 

You can see that the prompt has changed. Now you can use pip (which will install packages locally to the PyPy3 virtualenv), python (which maps to pypy3) and other related executables. In order to run a script using PyPy, just run python myscript.py.
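
For example, you can verify that the environment really uses PyPy – the version output should mention PyPy:

python --version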

Full source code:

#!/bin/bash
# TechOverflow's 30-second Pypy3 virtual environment generator
# This script is released under CC0 1.0 Universal
DIRECTORY=~/.pypy3
VENV_DIRECTORY=~/.pypy3-virtualenv
VERSION=pypy3.6-v7.3.0-linux64

# Download (or use existing) pypy3
if [ -d "$DIRECTORY" ]; then
    echo "Skipping PyPy download, already exists"
else
    echo "Downloading PyPy to $DIRECTORY"
    # Download & extract to DIRECTORY
    wget https://techoverflow.net/downloads/${VERSION}.tar.bz2 -O /tmp/${VERSION}.tar.bz2
    bash -c "cd /tmp && tar xjvf ${VERSION}.tar.bz2"
    mv /tmp/${VERSION} $DIRECTORY
    rm /tmp/${VERSION}.tar.bz2
fi

# Create virtualenv
if [ -d "$VENV_DIRECTORY" ]; then
    echo "Skipping to create pypy3 virtualenv, already exists"
else
    echo "Creating PyPy virtual environment in $VENV_DIRECTORY"
    virtualenv -p ${DIRECTORY}/bin/pypy3 ${VENV_DIRECTORY}
fi

# Create "vpypy" shortcut
set -x
vpypy
result="$?"
set +x
if [ "$result" -ne 127 ]; then
    echo "Skipping to create vpypy shortcut, already exists in current shell"
else
    echo "Creating bash/zsh shortcut 'vpypy'"
    if [ -f ~/.bashrc ]; then
        echo -e "\n# TechOverflow PyPy installer\nalias vpypy='source ${VENV_DIRECTORY}/bin/activate'\n" >> ~/.bashrc
    fi
    if [ -f ~/.zshrc ]; then
        echo -e "\n# TechOverflow PyPy installer\nalias vpypy='source ${VENV_DIRECTORY}/bin/activate'\n" >> ~/.zshrc
    fi
    # Activate shortcut in current shell (but do not automatically activate virtual environment)
    alias vpypy='source ${VENV_DIRECTORY}/bin/activate'
fi

echo -e "\n\nPyPy installation finished. Restart your shell, then run 'vpypy' to activate the virtual environment"


Posted by Uli Köhler in Linux, Python

How to install OpenSSL development headers on Ubuntu

In order to install the OpenSSL headers on Ubuntu, use

sudo apt -y install libssl-dev
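
To verify that the headers are installed where the compiler will find them, you can query pkg-config (assuming pkg-config is installed):

pkg-config --cflags --libs openssl
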
Posted by Uli Köhler in C/C++, Linux

How to fix ‘apt: command not found’ on Fedora

If you want to install a package on a Fedora Linux, you might have tried a command like

sudo apt install [package name]

However, the Fedora distribution does not use the apt package manager. Fedora uses yum (which, on current Fedora versions, is an alias for dnf) instead.

Use

sudo yum install [package name]

to install a package, for example

sudo yum install sqlite

Note: yum update does not do the same as apt update (i.e. update the list of available packages) but is the equivalent of apt upgrade or apt dist-upgrade, i.e. it updates the packages on the system!
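
The closest yum equivalent of apt update – i.e. refreshing the package metadata and listing available updates – is:

sudo yum check-update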

Posted by Uli Köhler in Linux

How to identify large directories for ‘No space left on device’ on Linux

TL;DR

cd / and run

sudo du -sh * --exclude proc --exclude sys --exclude dev

and then repeat for the largest directory shown (by cd-ing to that directory and running the command above).

Long answer

If you get No space left on device errors on Linux, this means that one of your mounted disks has (virtually) no space left to write on.

First, check which device is the one that has no space left:

$ sudo df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.4G     0  2.4G   0% /dev
tmpfs           494M   51M  444M  11% /run
/dev/xvda1       46G   17G   11M 100% /
tmpfs           2.5G     0  2.5G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.5G     0  2.5G   0% /sys/fs/cgroup
tmpfs           494M     0  494M   0% /run/user/1003

Check the Use % column – you can see that the device mounted on / (your root filesystem, i.e. the one your system is installed on) is full. In 95% of all cases it’s the root filesystem that’s full.

This post only covers the case where your root filesystem is full. In most other cases, it’s either /dev/shm (in which case rebooting your system typically works) or an external drive (in which case you have to figure out for yourself which directories can be deleted).

Probabilistic method:

This quick check tries to identify the most likely candidates first (based on admin experience). Run this in your shell:

sudo du -sh /var/lib/docker /var/lib/mysql /var/lib/postgresql /home/

This might take some time to complete.

Check if one of those directories is so large that it eats up a significant fraction of your drive space. If not, I recommend going forward with the Simple method:

Simple method:

Copy & paste this into your shell:

function findLargestSubdir { cd "$1" && sudo du -sb * --exclude proc --exclude sys --exclude dev | sort -n ; }

then run

findLargestSubdir /

This will tell you which directory is the largest. It might take a long time to compute the size of all directories – this command will only print anything once it’s finished! The numbers are in bytes.

The last element in the list is the largest one!

Now, run the same command inside the largest directory to find the largest sub-directory. For example, if /var happens to be the largest directory, run

findLargestSubdir /var

which will show you the largest directory in /var. Continue checking the largest subdirectories using findLargestSubdir until you find out what ate up all the space on your disk.
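
Since findLargestSubdir prints its output sorted by size in ascending order, you can also show just the five largest entries directly:

findLargestSubdir / | tail -n 5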

Advanced method: Much quicker but more complex

If it takes more than 2 minutes to compute the size of all directories, I recommend following this interactive procedure as root (sudo):

  1. ls -1 to show all files and directories in /. Example output:
    # ls -1 /
    bin
    boot
    dev
    etc
    home
    initrd.img
    initrd.img.old
    lib
    lib64
    lost+found
    media
    mnt
    opt
    proc
    root
    run
    sbin
    snap
    srv
    sys
    tmp
    usr
    var
    vmlinuz
    vmlinuz.old
  2. Now run du -sh *. This will try to compute the size of each of those files and directories. Typically this command will stall for a long time when trying to compute the size of one directory (if that directory has a huge number of files in it). For example, when you see the output
    # du -sh *
    17M     bin
    157M    boot
    0       dev
    7.7M    etc

    and then nothing happens for more than 30 seconds, look up the next entry after the last entry in the list (etc in this example) in the output of ls -1 above. In this example, this would be /home. Since du -sh took so long computing the size of /home, it’s very likely (though not guaranteed) that /home is the directory that takes up so much space. Also compare the sizes that have already been printed to your total disk size.

  3. In whatever directory you found to be a candidate for being the largest directory, run ls -1 <directory> again and repeat the procedure from step 2 there.

Note that this method sometimes has a tendency to identify directories that recursively contain many files as opposed to directories whose total size is large. Therefore, you might need to go back in case you can’t identify any directories that eat up a large fraction of your hard drive space.

Posted by Uli Köhler in Linux

How to fix ALL USB permission issues on Linux once and for all

On Linux, users often have the issue that normal users can’t access some USB devices while root can. Most pages on the internet try to address this issue individually for each device, but most users don’t need that granularity – they just want it to work.

This post provides a method that fixes the permissions for all USB devices at once.

Installation

Run this in your favourite shell:

wget -qO- https://techoverflow.net/scripts/udev-install-usbusers.sh | sudo bash -s $USER

This will print:

SUBSYSTEM=="usb", MODE="0666", GROUP="usbusers"
USB device configuration has been installed. Please log out and log back in or reboot

then log out and log back in (or close your SSH session and log back in).

In case this doesn’t work, reboot!

How it works

  1. It creates a group called usbusers
  2. It adds your user ($USER) to the usbusers group. You might need to run sudo usermod -a -G usbusers <username> for additional users that should have access to USB devices!
  3. Then it creates an udev config file /etc/udev/rules.d/99-usbusers.rules with the following content:
    SUBSYSTEM=="usb", MODE="0666", GROUP="usbusers"
  4. It then tries to reload & trigger udev using udevadm. This usually means you don’t have to reboot

In effect, it sets the group of every USB device node to usbusers, no matter what type of device it is, and mode 0666 ensures every user can read and write to it. This is why this solution is so generic – it’s not limited to a specific type of USB device.
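
You can verify that the rule is active by listing the USB device nodes – after re-plugging a device, its node should show group usbusers and mode rw-rw-rw-:

ls -l /dev/bus/usb/001/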

Posted by Uli Köhler in Linux

How to convert a PDF file to SVG on the command line

If you want to convert my.pdf to my.svg, use eps2svg like this:

eps2svg my.pdf

Even though the name eps2svg may suggest it can only read EPS files, the program will handle PDFs just fine!

This command produces my.svg – note that if my.svg already exists, eps2svg will create my_1.svg, my_2.svg and so on and will not overwrite my.svg!

You can also use this shell function:

function pdf2svg { eps2svg "$1" "${1%.*}.svg" ; }

This will always produce my.svg, overwriting it if it already exists!

Usage example:

pdf2svg my.pdf # Produces my.svg
 

Posted by Uli Köhler in Linux

How to convert a DVI file to SVG on the command line

If you want to convert my.dvi to my.svg, use this command

dvi2ps my.dvi | ps2eps - > my.eps && eps2svg my.eps

This produces my.svg – note that if my.svg already exists, eps2svg will create my_1.svg, my_2.svg and so on and will not overwrite my.svg!

You can also use this shell function:

function dviToSVG { dvi2ps "$1" | ps2eps - > "${1%.*}.eps" && eps2svg "${1%.*}.eps" "${1%.*}.svg" ; }

Usage example:

dviToSVG my.dvi # Produces my.svg


Posted by Uli Köhler in LaTeX, Linux, Shell

How to check if your filesystem is mounted in noatime, relatime or strictatime mode

If you need to use a software that depends on your filesystem storing the last access time of a file (atime), you can use this script to check if your filesystem is mounted in noatime, strictatime or relatime mode.

This script works on both Linux and Windows.

On Linux, you can simply run this

wget -qO- https://techoverflow.net/scripts/check-atime.py | python3

Python 2 version (note that Python 2 is end-of-life – see pythonclock.org!):

wget -qO- https://techoverflow.net/scripts/check-atime.py | python

Note that the script will check for the atime mode in whichever directory you run the script in.

On Windows, download the script and directly open it using Python. In case you don’t have Python installed, install it from the Microsoft store or download it here before downloading the script.

In case you need to check the atime mode of a specific drive (C:, D:, …), download the script, place it in a directory on that drive and run it from there.

This script will print one of three messages:

  • Your filesystem is mounted in NOATIME mode – access times will NEVER be updated automatically
  • Your filesystem is mounted in RELATIME mode – access times will only be updated if they are too old
  • Your filesystem is mounted in STRICTATIME mode – access times will be updated on EVERY file access

On Linux, the default is relatime whereas on Windows the default is strictatime.
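
On Linux, you can also inspect the mount options of the root filesystem directly, e.g. using findmnt – the output should contain noatime, relatime or neither (strictatime is the kernel default and is usually not listed explicitly):

findmnt -no OPTIONS /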

Sourcecode of the script:

#!/usr/bin/env python3
"""
This utility script checks which atime mode (strictatime, relatime or noatime)
is in use for the current filesystem
"""
import os
import time
from datetime import datetime

def datetime_to_timestamp(dt):
    return time.mktime(dt.timetuple()) + dt.microsecond/1e6

def set_file_access_time(filename, atime):
    """
    Set the access time of a given filename to the given atime.
    atime must be a datetime object.
    """
    stat = os.stat(filename)
    mtime = stat.st_mtime
    os.utime(filename, (datetime_to_timestamp(atime), mtime))


def last_file_access_time(filename):
    """
    Get a datetime() representing the last access time of the given file.
    The returned datetime object is in local time
    """
    return datetime.fromtimestamp(os.stat(filename).st_atime)

try:
    # Create test file
    with open("test.txt", "w") as outfile:
        outfile.write("test!")
    time.sleep(0.1)
    # Read & get first atime
    with open("test.txt") as infile:
        infile.read()
    atime1 = last_file_access_time("test.txt")
    # Now read file
    time.sleep(0.1)
    with open("test.txt") as infile:
        infile.read()
    # Different atime after read?
    atime2 = last_file_access_time("test.txt")
    # Set OLD atime for relatime check!
    set_file_access_time("test.txt", datetime(2000, 1, 1, 0, 0, 0))
    # Access again
    with open("test.txt") as infile:
        infile.read()
    # Different atime now
    atime3 = last_file_access_time("test.txt")
    # Check atime
    changed_after_simple_access = atime2 > atime1
    changed_after_old_atime = atime3 > atime1
    # Convert mode to text and print
    if (not changed_after_simple_access) and (not changed_after_old_atime):
        print("Your filesystem is mounted in NOATIME mode - access times will NEVER be updated automatically")
    elif (not changed_after_simple_access) and changed_after_old_atime:
        print("Your filesystem is mounted in RELATIME mode - access times will only be updated if they are too old")
    elif changed_after_simple_access and (not changed_after_old_atime):
        print("Unable to determine your access time mode")
    else: # Both updated
        print("Your filesystem is mounted in STRICTATIME mode - access times will be updated on EVERY file access")
finally:
    # Delete our test file
    try:
        os.remove("test.txt")
    except:
        pass

Also available on GitHub.

Posted by Uli Köhler in Linux, Python, Windows

How to re-encode your Audiobooks as Opus

Opus is a modern high-efficiency audio codec that is especially suited to encode speech with very low bitrates.

Therefore, it’s a good fit to compress your Audiobook library so it consumes much less space.

First, choose a bitrate for Opus. I recommend using 24 kbit/s (24k) for general use, or 32 kbit/s (32k) if you want higher audio quality, e.g. if you listen with good-quality headphones.

You can use ffmpeg directly by using this syntax:

ffmpeg -i <input file> -c:a libopus -b:a <bitrate> <output file>

but I recommend using this shell function instead:

function audioToOpus { ffmpeg -i "$2" -c:a libopus -b:a "$1" "${2%.*}.opus" ; }

Copy & paste it into your shell, then call it like this:

audioToOpus <bitrate> <input file>

Example:

audioToOpus 24k myaudiobook.mp3

This command will create myaudiobook.opus. myaudiobook.mp3 will not be deleted automatically.
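
To re-encode a whole directory of MP3 audiobooks, you can combine the function with a shell loop:

for f in *.mp3; do audioToOpus 24k "$f"; done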

In case you want to have this function available permanently, add the function definition to your ~/.bashrc or ~/.zshrc, depending on which shell you use.

Posted by Uli Köhler in Audio, Linux

How to disable syntax highlighting in nano

To temporarily disable syntax highlighting in GNU nano, use the -Ynone option:

Instead of

nano myfile.php

use

nano -Ynone myfile.php

In order to permanently disable nano syntax highlighting, run this command:

echo "alias nano='nano -Ynone'" >> ~/.$(echo $SHELL | rev | cut -d/ -f1 | rev)rc
source ~/.$(echo $SHELL | rev | cut -d/ -f1 | rev)rc # Reload immediately

This will add nano -Ynone as an alias for nano to your .bashrc or .zshrc, depending on your shell.

Posted by Uli Köhler in Linux

How to get current shell name (e.g. bash/zsh) on Linux

To get just the name of the shell, e.g. bash or zsh, use

echo $SHELL | rev | cut -d/ -f1 | rev

Example:

$ echo $SHELL | rev | cut -d/ -f1 | rev
bash

To get the full path of the current shell executable, use

echo $SHELL

Example:

$ echo $SHELL
/bin/zsh
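
Alternatively, basename yields the same result without the rev/cut pipeline:

basename "$SHELL"

Note that $SHELL contains your login shell – if you manually started a different shell inside your session, it will not be reflected here.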


Posted by Uli Köhler in Linux

How to install automated certbot/LetsEncrypt renewal in 30 seconds

Let’s Encrypt currently issues certificates that are valid for only 3 months at a time. For many users, this mandates automated renewal of Let’s Encrypt certificates; however, many guides on how to set up automated renewal on ordinary Linux servers are needlessly complicated.

I created a systemd-timer based daily renewal routine using TechOverflow’s Simple systemd timer generator.

Quick install using

wget -qO- https://techoverflow.net/scripts/install-renew-certbot.sh | sudo bash

This is the script which automatically creates & installs both systemd config files.

#!/bin/sh
# This script installs automated certbot renewal onto systemd-based systems.
# It requires that certbot is installed in /usr/bin/certbot!
# This needs to be run using sudo!

cat >/etc/systemd/system/RenewCertbot.service <<EOF
[Unit]
Description=RenewCertbot

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew
WorkingDirectory=/tmp
EOF

cat >/etc/systemd/system/RenewCertbot.timer <<EOF
[Unit]
Description=RenewCertbot

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

# Enable and start service
systemctl enable RenewCertbot.timer && sudo systemctl start RenewCertbot.timer

To view logs, use

journalctl -xfu RenewCertbot.service

To view the status, use

sudo systemctl status RenewCertbot.timer
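
To check when the timer will fire next, use

sudo systemctl list-timers RenewCertbot.timer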

To immediately run a renewal, use

sudo systemctl start RenewCertbot.service
Posted by Uli Köhler in Linux, nginx