How to fix Python ‘ValueError: Namespace GnomeDesktop not available’ on Ubuntu


On Ubuntu, you are trying to run a Python script that uses the gi package and GnomeDesktop, but you are seeing this stack trace:

Traceback (most recent call last):
  File "", line 48, in <module>
    gi.require_version('GnomeDesktop', '3.0')
  File "/usr/lib/python3/dist-packages/gi/__init__.py", line 130, in require_version
    raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace GnomeDesktop not available


To fix this, install gir1.2-gnomedesktop-3.0:

sudo apt -y install gir1.2-gnomedesktop-3.0

and retry running your script.

Posted by Uli Köhler in Linux, Python

How to install RocksDB on Ubuntu

deb-buildscripts provides a convenient build script for building RocksDB as a deb package. Since RocksDB optimizes for the current computer's CPU instruction set extensions (-march=native), you need to build RocksDB on the computer where you will run it, or at least on one with the same CPU type (generation).
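Because of -march=native, the resulting binaries will only run on CPUs that support the same instruction-set extensions as the build machine. On Linux you can list the extensions your CPU advertises (a quick check reading /proc/cpuinfo, so Linux/x86 only):

```shell
# List common SIMD instruction-set extensions advertised by this CPU
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse4_2|avx|avx2|avx512f)$' || true
```

Comparing this output between the build machine and the target machine tells you whether a natively-built package is safe to deploy there.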

First install the prerequisites:

sudo apt-get -y install devscripts debhelper build-essential fakeroot zlib1g-dev libbz2-dev libsnappy-dev libgflags-dev libzstd-dev

then build RocksDB:

git clone
cd deb-buildscripts

This will build the librocksdb and librocksdb-dev packages in the deb-buildscripts directory.

Posted by Uli Köhler in C/C++, Linux

How to install x11vnc on DISPLAY=:0 as a systemd service

First, install x11vnc using e.g.

sudo apt -y install x11vnc

Now run this script as the user that is running the X11 session. The script needs to know the correct user to start x11vnc as.

wget -qO- | sudo bash -s $USER

This will install a systemd service like

[Unit]
Description=VNC Server for X11

[Service]
ExecStart=/usr/bin/x11vnc -display :0 -norc -forever -shared -autoport 5900 -o /var/log/x11vnc.log


and automatically enable it on boot and start it.

You can connect to the computer using VNC now e.g. using:

vncviewer [hostname]
Posted by Uli Köhler in Linux

How to connect to your 3D printer using picocom

Use this command to connect to your Marlin-based 3D printer:

picocom -b 115200 /dev/ttyUSB0 --imap lfcrlf --echo

This command might also work for firmwares other than Marlin.

By default, picocom uses character maps that cause the newlines not to be shown correctly. --imap lfcrlf maps line feeds sent by the printer to CR + LF on the terminal. --echo enables local echo, enabling you to see what you are typing.


Posted by Uli Köhler in Hardware, Linux

How to install PyPy3 + virtual environment in 30 seconds


Run this

wget -qO- | bash

then run vpypy every time you want to activate the virtual environment (you might need to restart your shell first). The script currently assumes you are running Linux x86_64 and have virtualenv installed (sudo apt install virtualenv or similar if you don't have it).

Full description:

PyPy is an alternate Python implementation that can be used to speed up many workloads. However, installing it is a somewhat cumbersome process, especially if you don’t have too much experience with virtual environments and related concepts.

We provide a script that automatically downloads PyPy3, installs it to ~/.pypy3 and creates a virtual environment in ~/.pypy3-virtualenv. After that, it creates a shell alias vpypy that aliases to source ~/.pypy3-virtualenv/bin/activate and hence provides an easily memorable way of activating the environment without requiring the user to remember the directory.

Also, since both pypy3 itself and the virtual environment are installed in the user's home directory, running this script does not require admin permissions.

After running the script using

wget -qO- | bash

you can activate the virtual environment using the vpypy alias that is automatically added to ~/.bashrc and ~/.zshrc. Restart your shell for the alias definition to load, then run vpypy:

uli@uli-laptop ~ % vpypy
(.pypy3-virtualenv) uli@uli-laptop ~ % 

You can see that the prompt has changed. Now you can use pip (which will install packages locally to the PyPy3 virtualenv), python (which maps to pypy3) and other related executables. In order to run a script using PyPy, just run python myscript.py as usual.
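A detail worth knowing about the script: it checks whether a vpypy command already exists by looking at the exit code 127, which shells return for unknown commands. A minimal, self-contained demonstration of that convention (the command name is a deliberately nonexistent placeholder):

```shell
# Running a command that does not exist yields exit code 127
rc=$(bash -c 'some_nonexistent_command_xyz' 2>/dev/null; echo $?)
echo "$rc"   # 127
```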

Full source code:

# TechOverflow's 30-second Pypy3 virtual environment generator
# This script is released under CC0 1.0 Universal

# Configuration (paths as described above)
DIRECTORY=~/.pypy3
VENV_DIRECTORY=~/.pypy3-virtualenv
# VERSION selects the PyPy release archive, e.g. VERSION=pypy3.6-v7.3.0-linux64

# Download (or use existing) pypy3
if [ -d "$DIRECTORY" ]; then
    echo "Skipping PyPy download, already exists"
else
    echo "Downloading PyPy to $DIRECTORY"
    # Download & extract to DIRECTORY
    # DOWNLOAD_URL: the PyPy download server (the URL was elided in this post)
    wget "${DOWNLOAD_URL}/${VERSION}.tar.bz2" -O /tmp/${VERSION}.tar.bz2
    bash -c "cd /tmp && tar xjvf ${VERSION}.tar.bz2"
    mv /tmp/${VERSION} $DIRECTORY
    rm /tmp/${VERSION}.tar.bz2
fi

# Create virtualenv
if [ -d "$VENV_DIRECTORY" ]; then
    echo "Skipping to create pypy3 virtualenv, already exists"
else
    echo "Creating PyPy virtual environment in $VENV_DIRECTORY"
    virtualenv -p ${DIRECTORY}/bin/pypy3 ${VENV_DIRECTORY}
fi

# Create "vpypy" shortcut
# Check whether 'vpypy' already exists in the current shell
# (exit code 127 means "command not found")
set -x
bash -i -c 'vpypy' >/dev/null 2>&1
result=$?
set +x
if [ "$result" -ne 127 ]; then
    echo "Skipping to create vpypy shortcut, already exists in current shell"
else
    echo "Creating bash/zsh shortcut 'vpypy'"
    if [ -f ~/.bashrc ]; then
        echo -e "\n# TechOverflow PyPy installer\nalias vpypy='source ${VENV_DIRECTORY}/bin/activate'\n" >> ~/.bashrc
    fi
    if [ -f ~/.zshrc ]; then
        echo -e "\n# TechOverflow PyPy installer\nalias vpypy='source ${VENV_DIRECTORY}/bin/activate'\n" >> ~/.zshrc
    fi
    # Activate shortcut in current shell (but do not automatically activate virtual environment)
    alias vpypy='source ${VENV_DIRECTORY}/bin/activate'
fi

echo -e "\n\nPyPy installation finished. Restart your shell, then run 'vpypy' to activate the virtual environment"



Posted by Uli Köhler in Linux, Python

How to fix ‘apt: command not found’ on Fedora

If you want to install a package on a Fedora Linux, you might have tried a command like

sudo apt install [package name]

However, the Fedora distribution does not use the apt package manager. Fedora uses yum (on current Fedora versions dnf, for which yum is an alias) instead. Use

sudo yum install [package name]

to install a package, for example

sudo yum install sqlite

Note: yum update does not do the same as apt update (i.e. update the list of available packages; the closest yum equivalent is yum check-update), but is the equivalent of apt upgrade or apt dist-upgrade, i.e. it updates the packages installed on the system!

Posted by Uli Köhler in Linux

How to identify large directories for ‘No space left on device’ on Linux


Short answer: cd / and run

sudo du -sh * --exclude proc --exclude sys --exclude dev

and then repeat for the largest directory shown (by cd-ing into that directory and running the command above).

Long answer

If you get No space left on device errors on Linux, this means that one of your mounted disks has (virtually) no space left to write on.

First, check which device is the one that has no space left:

$ sudo df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.4G     0  2.4G   0% /dev
tmpfs           494M   51M  444M  11% /run
/dev/xvda1       46G   17G   11M 100% /
tmpfs           2.5G     0  2.5G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.5G     0  2.5G   0% /sys/fs/cgroup
tmpfs           494M     0  494M   0% /run/user/1003

Check the Use% column – you can see that the device mounted on / (your root filesystem, i.e. the one your system is installed on) is full. In 95% of all cases it's the root filesystem that's full.

This post only covers the case where your root filesystem is full. In most other cases, it’s either /dev/shm (in which case rebooting your system typically works) or an external drive (in which case you have to figure out for yourself which directories can be deleted).

Probabilistic method:

This quick check tries to identify the most likely candidates first (based on admin experience). Run this in your shell:

sudo du -sh /var/lib/docker /var/lib/mysql /var/lib/postgresql /home/

This might take some time to complete.

Check if one of those directories is so large that it eats up a significant fraction of your drive space. If not, I recommend going forward with the Simple method:

Simple method:

Copy & paste this into your shell:

function findLargestSubdir { cd "$1" && sudo du -sb * --exclude proc --exclude sys --exclude dev | sort -n ; }

then run

findLargestSubdir /

This will tell you which directory is the largest. It might take a long time to compute the size of all directories. Note that this command will only print anything once it's finished! The numbers are in bytes.

The last element in the list is the largest one!

Now, run the same command inside the largest directory to find the largest sub-directory. For example, if /var happens to be the largest directory, run

findLargestSubdir /var

which will show you the largest directory in /var. Continue checking the largest subdir(s) using findLargestSubdir until you found out what ate up all the space on your disk.
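To convince yourself how the sorting works, you can run the same pipeline on a small, hypothetical test tree (the temporary paths below are just for the demo):

```shell
# Create two directories of known size, then verify that
# `du -sb * | sort -n` lists the largest one last
demo=$(mktemp -d)
mkdir "$demo/small" "$demo/big"
head -c 1000   /dev/zero > "$demo/small/file.bin"
head -c 100000 /dev/zero > "$demo/big/file.bin"
cd "$demo"
du -sb * | sort -n | tail -n1 | cut -f2   # prints: big
```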

Advanced method: Much quicker but more complex

If it takes more than 2 minutes to compute the size of all directories, I recommend following this interactive procedure as root (sudo):

  1. Run ls -1 / to show all files and directories in /. Example:
    # ls -1 /
  2. Now run du -sh *. This will try to compute the size of each of those files and directories. Typically this command will stall for a long time when trying to compute the size of one directory (if that directory has a huge number of files in it). For example, when you see the output
    # du -sh *
    17M     bin
    157M    boot
    0       dev
    7.7M    etc

    and then nothing happens for more than 30 seconds, look up the next entry after the last entry in the list ( etc in this example) in the output of ls -1 above. In this example, this would be /home. Since du -sh took so long computing the size of /home it's very likely (though not guaranteed) that /home is the directory that takes up so much space. Also check the sizes that have already been printed for the other directories.

  3. In whatever directory you found to be a candidate for being the largest one, run ls -1 and du -sh * inside it again and repeat this procedure until you have found the files or directories that take up most of the space.

Note that this method sometimes has a tendency to identify directories that recursively contain many files as opposed to directories whose total size is large. Therefore, you might need to go back in case you can’t identify any directories that eat up a large fraction of your hard drive space.

Posted by Uli Köhler in Linux

How to fix ALL USB permission issues on Linux once and for all

On Linux, users often have the issue that normal users can't access some USB devices while root can. Most pages on the internet try to address this issue individually for each device, but most users don't need that granularity; they just want it to work.

This post provides a method to fix USB permission issues for all devices at once.


Run this in your favourite shell:

wget | sudo bash -s $USER

This will print:

SUBSYSTEM=="usb", MODE="0666", GROUP="usbusers"
USB device configuration has been installed. Please log out and log back in or reboot

then log out and log back in (or close your SSH session and log back in).

In case this doesn’t work, reboot!

How it works

  1. It creates a group called usbusers
  2. It adds your user ($USER) to the usbusers group. You might need to sudo usermod -a -G usbusers $USER for additional users that should have access to USB devices!
  3. Then it creates an udev config file /etc/udev/rules.d/99-usbusers.rules with the following content:
    SUBSYSTEM=="usb", MODE="0666", GROUP="usbusers"
  4. It then tries to reload & trigger udev using udevadm. This usually means you don't have to reboot.

In effect, it sets the group to usbusers for every USB device, no matter what type, and ensures the group has write access. This is why this solution is so generic – it's not limited to a specific type of USB device.
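If you later do need per-device granularity after all, the same mechanism works with an additional match on the device's vendor ID. This is a sketch only; the idVendor value below is a placeholder you would replace with the vendor ID shown by lsusb for your device:

```
SUBSYSTEM=="usb", ATTRS{idVendor}=="04d8", MODE="0666", GROUP="usbusers"
```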

Posted by Uli Köhler in Linux

How to convert a PDF file to SVG on the command line

If you want to convert my.pdf to my.svg, use eps2svg like this:

eps2svg my.pdf

Even though the name eps2svg may suggest it can only read EPS files, the program will handle PDFs just fine!

This command produces my.svg – note that if my.svg already exists, eps2svg will create my_1.svg, my_2.svg and so on and will not overwrite my.svg!

You can also use this shell function:

function pdf2svg { eps2svg "$1" "${1%.*}.svg" ; }

This will always produce my.svg, overwriting it if it already exists!

Usage example:

pdf2svg my.pdf # Produces my.svg


Posted by Uli Köhler in Linux

How to convert a DVI file to SVG on the command line

If you want to convert my.dvi to my.svg, use this command

dvi2ps my.dvi | ps2eps - > my.eps && eps2svg my.eps

This produces my.svg – note that if my.svg already exists, eps2svg will create my_1.svg, my_2.svg and so on and will not overwrite my.svg!

You can also use this shell function:

function dviToSVG { dvi2ps "$1" | ps2eps - > "${1%.*}.eps" && eps2svg "${1%.*}.eps" "${1%.*}.svg" ; }

Usage example:

dviToSVG my.dvi # Produces my.svg
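The ${1%.*} constructs inside the function are plain bash suffix stripping: they cut off the file extension so the output names can be derived from the input name. A quick illustration:

```shell
# ${f%.*} removes the shortest trailing ".extension"
f="my.dvi"
echo "${f%.*}.eps"   # my.eps
echo "${f%.*}.svg"   # my.svg
```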


Posted by Uli Köhler in LaTeX, Linux, Shell

How to check if your filesystem is mounted in noatime, relatime or strictatime mode

If you need to use a software that depends on your filesystem storing the last access time of a file (atime), you can use this script to check if your filesystem is mounted in noatime, strictatime or relatime mode.

This script works on both Linux and Windows.

On Linux, you can simply run this

wget -qO- | python3

Python 2 version:

wget -qO- | python

Note that the script will check for the atime mode in whichever directory you run the script in.

On Windows, download the script and directly open it using Python. In case you don’t have Python installed, install it from the Microsoft store or download it here before downloading the script.

In case you need to check the atime mode of a specific drive (C:, D:, …), download the script, place it in a directory on that drive and run it from there.

This script will print one of three messages:

  • Your filesystem is mounted in NOATIME mode – access times will NEVER be updated automatically
  • Your filesystem is mounted in RELATIME mode – access times will only be updated if they are too old
  • Your filesystem is mounted in STRICTATIME mode – access times will be updated on EVERY file access

On Linux, the default is relatime whereas on Windows the default is strictatime.

Source code of the script:

#!/usr/bin/env python3
"""
This utility script checks which atime mode (strictatime, relatime or noatime)
is in use for the current filesystem
"""
import os
import time
from datetime import datetime

def datetime_to_timestamp(dt):
    return time.mktime(dt.timetuple()) + dt.microsecond/1e6

def set_file_access_time(filename, atime):
    """
    Set the access time of a given filename to the given atime.
    atime must be a datetime object.
    """
    stat = os.stat(filename)
    mtime = stat.st_mtime
    os.utime(filename, (datetime_to_timestamp(atime), mtime))

def last_file_access_time(filename):
    """
    Get a datetime() representing the last access time of the given file.
    The returned datetime object is in local time
    """
    return datetime.fromtimestamp(os.stat(filename).st_atime)

if __name__ == "__main__":
    # Create test file
    with open("test.txt", "w") as outfile:
        outfile.write("This is a test file")
    # Read & get first atime
    with open("test.txt") as infile:
        infile.read()
    atime1 = last_file_access_time("test.txt")
    # Ensure a measurable amount of time passes between accesses
    time.sleep(0.01)
    # Now read file
    with open("test.txt") as infile:
        infile.read()
    # Different atime after read?
    atime2 = last_file_access_time("test.txt")
    # Set OLD atime for relatime check!
    set_file_access_time("test.txt", datetime(2000, 1, 1, 0, 0, 0))
    # Access again
    with open("test.txt") as infile:
        infile.read()
    # Different atime now
    atime3 = last_file_access_time("test.txt")
    # Check atime
    changed_after_simple_access = atime2 > atime1
    changed_after_old_atime = atime3 > atime1
    # Convert mode to text and print
    if (not changed_after_simple_access) and (not changed_after_old_atime):
        print("Your filesystem is mounted in NOATIME mode - access times will NEVER be updated automatically")
    elif (not changed_after_simple_access) and changed_after_old_atime:
        print("Your filesystem is mounted in RELATIME mode - access times will only be updated if they are too old")
    elif changed_after_simple_access and (not changed_after_old_atime):
        print("Unable to determine your access time mode")
    else: # Both updated
        print("Your filesystem is mounted in STRICTATIME mode - access times will be updated on EVERY file access")
    # Delete our test file
    os.remove("test.txt")

Also available on GitHub.

Posted by Uli Köhler in Linux, Python, Windows

How to re-encode your Audiobooks as Opus

Opus is a modern high-efficiency audio codec that is especially suited to encode speech with very low bitrates.

Therefore, it's a good fit for compressing your audiobook library so that it consumes much less space.

First, choose a bitrate for Opus. I recommend using 24 kbit/s (24k) for general use, or 32 kbit/s (32k) if you want higher quality audio, e.g. if you are listening with good-quality headphones.

You can use ffmpeg directly by using this syntax:

ffmpeg -i <input file> -c:a libopus -b:a <bitrate> <output file>

but I recommend using this shell function instead:

function audioToOpus { ffmpeg -i "$2" -c:a libopus -b:a "$1" "${2%.*}.opus" ; }

Copy & paste it into your shell, then call it like this:

audioToOpus <bitrate> <input file>

For example:

audioToOpus 24k myaudiobook.mp3

This command will create myaudiobook.opus. myaudiobook.mp3 will not be deleted automatically.

In case you want to have this function available permanently, add the function definition to your ~/.bashrc or ~/.zshrc, depending on which shell you use.
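To re-encode a whole directory, you can wrap the same ffmpeg invocation in a loop. The sketch below is a dry run that only prints the commands it would execute, so you can review them before removing the echo:

```shell
# Dry run: print the conversion command for every .mp3 in the current directory
for f in *.mp3; do
    echo ffmpeg -i "$f" -c:a libopus -b:a 24k "${f%.*}.opus"
done
```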

Posted by Uli Köhler in Audio, Linux

How to disable syntax highlighting in nano

To temporarily disable syntax highlighting in GNU nano, use the -Ynone option:

Instead of

nano myfile.php

use

nano -Ynone myfile.php

In order to permanently disable nano syntax highlighting, run this command:

echo "alias nano='nano -Ynone'" >> ~/.$(echo $SHELL | rev | cut -d/ -f1 | rev)rc
source ~/.$(echo $SHELL | rev | cut -d/ -f1 | rev)rc # Reload immediately

This will add nano -Ynone as an alias for nano to your .bashrc or .zshrc.
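The command substitution just derives the rc filename from your login shell. To preview which file would be modified, without changing anything, you can run the same pipeline on its own (falling back to /bin/bash if $SHELL is unset):

```shell
# Print the rc file the alias line would be appended to (no modification)
rcfile=~/.$(echo "${SHELL:-/bin/bash}" | rev | cut -d/ -f1 | rev)rc
echo "$rcfile"
```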

Posted by Uli Köhler in Linux

How to get current shell name (e.g. bash/zsh) on Linux

To get just the name of the shell, e.g. bash or zsh, use

echo $SHELL | rev | cut -d/ -f1 | rev


Example (here using zsh):

$ echo $SHELL | rev | cut -d/ -f1 | rev
zsh

To get the full path of the current shell executable, use

echo $SHELL


Example (here using zsh):

$ echo $SHELL
/usr/bin/zsh
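The rev | cut -d/ -f1 | rev pipeline simply extracts everything after the last slash. Equivalent alternatives are basename and pure-bash parameter expansion, shown here on a fixed example path:

```shell
path="/usr/bin/zsh"                       # stand-in for $SHELL
echo "$path" | rev | cut -d/ -f1 | rev    # zsh
basename "$path"                          # zsh
echo "${path##*/}"                        # zsh
```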


Posted by Uli Köhler in Linux

How to install automated certbot/LetsEncrypt renewal in 30 seconds

Let's Encrypt currently issues certificates for only 3 months at a time. For many users, this mandates automated renewal of Let's Encrypt certificates; however, many guides on how to install automated renewal on an ordinary Linux server are needlessly complicated.

I created a systemd-timer based daily renewal routine using TechOverflow’s Simple systemd timer generator.

Quick install using

wget -qO- | sudo bash

This is the script which automatically creates & installs both systemd config files.

# This script installs automated certbot renewal onto systemd-based systems.
# It requires that certbot is installed in /usr/bin/certbot!
# This needs to be run using sudo!

cat >/etc/systemd/system/RenewCertbot.service <<EOF
[Unit]
Description=Renew Let's Encrypt certificates using certbot

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew
EOF

cat >/etc/systemd/system/RenewCertbot.timer <<EOF
[Unit]
Description=Run RenewCertbot.service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

# Enable and start timer
systemctl enable RenewCertbot.timer && systemctl start RenewCertbot.timer

To view logs, use

journalctl -xfu RenewCertbot.service

To view the status, use

sudo systemctl status RenewCertbot.timer

To immediately run a renewal, use

sudo systemctl start RenewCertbot.service
Posted by Uli Köhler in Linux, nginx

How to run docker container as current user & group

If you want to prevent your Docker container from creating files as root, use

--user $(id -u):$(id -g)

as an argument to docker run. Example:

docker run --user $(id -u):$(id -g) -it -v $(pwd):/app myimage
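The two command substitutions expand to your numeric user and group ID before docker even runs; you can inspect the exact value Docker will receive:

```shell
# Show the UID:GID pair that --user $(id -u):$(id -g) expands to
echo "$(id -u):$(id -g)"
```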


Posted by Uli Köhler in Container, Docker, Linux