How to SSH to an IPv6 address

If your IPv6 address begins with fe80::

This type of IPv6 address is called link-local and is only valid on a specific network interface of your computer, so you have to tell SSH which interface to use. You can use ifconfig to show information about the network interfaces. You are looking for an identifier like eth0, wlan0, enp3s0, wlp4s0 or tap1. For this example we’re using eth0.
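If ifconfig is not available on your system, the ip tool from iproute2 shows the same information; the following command lists all link-local addresses together with the interface they belong to:

ip -6 addr show scope link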

Now you can connect to the IPv6 using:

ssh <username>@<ipv6 address>%<interface>

for example

ssh user@fe80::21b:21ff:fe22:e865%eth0

Replace <interface> with the correct interface (if you don’t know which one, just try every interface), replace <ipv6 address> with the correct IP address and replace <username> with the correct username.

If your IPv6 address does NOT begin with fe80::

You can just use

ssh <username>@<ipv6 address>

for example

ssh uli@2a01:4f9:c010:278::1

Replace <ipv6 address> with the correct IP address and replace <username> with the correct username.

How to find the size of an lxc container

In order to determine the size of an LXC container, first run lxc storage list to list your storage pools:

uli@myserver:~$ lxc storage list
+---------+-------------+--------+------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |               SOURCE               | USED BY |
+---------+-------------+--------+------------------------------------+---------+
| default |             | dir    | /var/lib/lxd/storage-pools/default | 2       |
+---------+-------------+--------+------------------------------------+---------+

If the driver is not dir, you are using a copy-on-write (COW) storage backend. With these backends it is not easily possible to determine the storage size of a container. The following instructions therefore apply only to the dir driver.

Now open a root shell, cd to the directory listed in the SOURCE column and then cd into its containers subdirectory:

root@myserver ~ # cd /var/lib/lxd/storage-pools/default
root@myserver /var/lib/lxd/storage-pools/default # cd containers/
root@myserver /var/lib/lxd/storage-pools/default/containers # 

This directory contains the storage directories of all containers. Run du -sh * in order to find the size of each container:

root@myserver /var/lib/lxd/storage-pools/default/containers # du -sh *
2.0G    my-container

In this example, the container my-container occupies 2.0 GiB (gibibytes) of disk space.
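If you only need the size of a single container, you can point du directly at its directory (this assumes the default dir pool path shown above):

sudo du -sh /var/lib/lxd/storage-pools/default/containers/my-container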

Routing public IPv6 addresses to your lxc/lxd containers

The enormous number of IPv6 addresses available to most commercially hosted VPS / root servers with a public IPv6 prefix allows you to route a public IPv6 address to every container that is running on your server. This tutorial shows you how to do that, even if you have no prior experience with routing.

Step 0: Create your LXC container

We assume you have already done this – just for reference, here’s how you can create a container:

lxc launch ubuntu:18.04 my-container

Step 1: Which IP address do you want to assign to your container?

First you need to find out which prefix is routed to your host. Usually you can do that by checking your provider’s control panel. You’re looking for something like 2a01:4f9:c010:278::1/64. Another option is to run sudo ifconfig and look for an inet6 line in the section of your primary network interface (this only works if you have already configured your server to have an IPv6 address). Note that addresses starting with fe80:: and addresses starting with fd, among others, are not public IPv6 addresses.
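If you prefer the ip tool over ifconfig, the following command lists only the global (public) IPv6 addresses of an interface – eth0 is just an example name, substitute your own interface:

ip -6 addr show dev eth0 scope global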

Then you can pick a new IPv6 address for your container. Which one you choose – as long as it’s within the prefix – is entirely your decision.

Often, <prefix>::1 is used for the host itself, so you could, for example, choose <prefix>::2. Note that some providers reserve certain addresses for other purposes; check your provider’s documentation for details.

If you don’t want to make it easy to find your container’s public IPv6, don’t choose <prefix>::1, <prefix>::2, <prefix>::3 etc. but something more random like <prefix>:af15:99b1:0b05:1, for example 2a01:4f9:c010:278:af15:99b1:0b05:0001. Written out in full, an IPv6 address has eight groups of four hex digits each!

For this example, we choose the IPv6 address 2a01:4f9:c010:278::8.

Step 2: Find out the ULA of your container

We need to find the ULA (unique local address – similar to a private IPv4 address which is not routed on the internet) of the container. Using lxc, this is quite easy:

uli@myserver:~$ lxc list
+--------------+---------+-----------------------+-----------------------------------------------+
|     NAME     |  STATE  |         IPV4          |                     IPV6                      |
+--------------+---------+-----------------------+-----------------------------------------------+
| my-container | RUNNING | 10.144.118.232 (eth0) | fd42:830b:36dc:3691:216:3eff:fed1:9058 (eth0) |
+--------------+---------+-----------------------+-----------------------------------------------+

You need to look in the IPv6 column and copy the address listed there. In this example, the address is fd42:830b:36dc:3691:216:3eff:fed1:9058.
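If lxc list prints many containers, you can limit the output to the name and IPv6 columns; the column codes (n for name, 6 for IPv6) may differ between lxc versions, so check lxc list --help if the command complains:

lxc list my-container -c n6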

Step 3: Set up IPv6 routing

Now we can tell the Linux host to route your chosen public IPv6 address to the container’s private IPv6 address. This is quite easy:

sudo ip6tables -t nat -A PREROUTING -d <public IPv6> -j DNAT --to-destination <container private IPv6>

In our example, this would be

sudo ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058

First, test the command by running it in a shell. If it works (i.e. if it doesn’t print any error message), you can store it permanently, e.g. by adding it to /etc/rc.local (after #!/bin/bash, before exit 0). Advanced users may prefer to add it to /etc/network/interfaces.
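For reference, a minimal /etc/rc.local could look like this – the rule is just the example from this tutorial, so substitute your own addresses, and make sure the file is executable (chmod +x /etc/rc.local):

#!/bin/bash
# Route the public IPv6 address to the container's ULA at boot
ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058
exit 0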

Step 4: Connect to your container using SSH on your public IPv6 (optional)

Note: This step requires that you have working IPv6 connectivity at your local computer. If you are unsure, check at ipv6-test.com

First, open a shell on your container:

lxc exec my-container bash

After running this, you should see a root shell prompt inside your container:

root@my-container:~#

The following commands should be entered in the container shell, not on the host!

Now we can create a user to log in as (in this example, we create the uli user):

root@my-container:~# adduser uli
Adding user `uli' ...
Adding new group `uli' (1001) ...
Adding new user `uli' (1001) with group `uli' ...
Creating home directory `/home/uli' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for uli
Enter the new value, or press ENTER for the default
        Full Name []: 
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 
Is the information correct? [Y/n]

You only need to enter a password twice (you won’t see anything on screen while typing it); for all other prompts you can just press Enter.

The ubuntu:18.04 lxc image used in this example does not allow SSH password authentication in its default configuration. In order to fix this, change PasswordAuthentication no to PasswordAuthentication yes in /etc/ssh/sshd_config and restart the SSH server by running service ssh restart. Be sure you understand the security implications before you do that!

Now, log out of your container shell by pressing Ctrl+D. The following commands can be entered on your desktop or any other machine with IPv6 connectivity.

Now log in to your container using its public IPv6 address:

ssh <username>@<public IPv6 address>

in this example:

ssh uli@2a01:4f9:c010:278::8

If you configured everything correctly, you’ll see the shell prompt for your container:

uli@my-container:~$

Note: Don’t forget to configure a firewall for your container, e.g. ufw! Your container’s IPv6 address is exposed to the internet, and just assuming no one will guess it is not good security practice.
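As a minimal starting point, a ufw setup inside the container could look like this (run these in the container’s root shell; allowing SSH before enabling the firewall prevents locking yourself out):

apt install ufw
ufw allow ssh
ufw enable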

How to fix puppeteer error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory

Problem:

You are trying to run puppeteer on Ubuntu, but when it starts to launch Chrome, you see the following error:

/home/user/erp/node_modules/puppeteer/.local-chromium/linux-555668/chrome-linux/chrome: error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory

Solution:

Install the missing packages using

sudo apt-get install gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget

Credits to @coldner on the puppeteer issue tracker for assembling the list of required packages.

If you encounter E: Unable to locate package errors, run sudo apt-get update.

Fixing npm/node-gyp Error: not found: make on Ubuntu

When you run npm install and it tries to build a native package like bcrypt, you might see an error message like this:

gyp ERR! build error 
gyp ERR! stack Error: not found: make
gyp ERR! stack     at getNotFoundError (/usr/lib/node_modules/npm/node_modules/which/which.js:13:12)
gyp ERR! stack     at F (/usr/lib/node_modules/npm/node_modules/which/which.js:68:19)
gyp ERR! stack     at E (/usr/lib/node_modules/npm/node_modules/which/which.js:80:29)
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/which/which.js:89:16
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/isexe/index.js:42:5
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/isexe/mode.js:8:5
gyp ERR! stack     at FSReqWrap.oncomplete (fs.js:182:21)

you simply need to install GNU Make. On Ubuntu, the easiest way to do this is to run

sudo apt install build-essential

This will not only install make but also related tools like gcc and some standard header files.
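Afterwards you can verify that make is available before re-running npm install:

make --version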

How to fix FreeCAD ‘No module named WebGui’ on Ubuntu 18.04

On Ubuntu 18.04 there’s currently a known bug where FreeCAD starts but does not show any widgets at startup; instead, it shows this error message:

No module named WebGui

One way I’ve found of fixing this issue is to install FreeCAD not from the Ubuntu repositories but from the freecad-stable PPA:

sudo add-apt-repository ppa:freecad-maintainers/freecad-stable
sudo apt-get update

Then you can install freecad again:

sudo apt install freecad

If you’ve installed previous versions of OpenCASCADE from the freecad PPAs, you might get an error message similar to this one:

The following packages have unmet dependencies:
 freecad : Depends: libocct-data-exchange-7.2 but it is not going to be installed
           Depends: libocct-foundation-7.2 but it is not going to be installed
           Depends: libocct-modeling-algorithms-7.2 but it is not going to be installed
           Depends: libocct-modeling-data-7.2 but it is not going to be installed
           Depends: libocct-ocaf-7.2 but it is not going to be installed
           Depends: libocct-visualization-7.2 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

In that case, you need to force apt to install OpenCASCADE 7.2 along with freecad and uninstall OpenCASCADE 7.1:

sudo apt install freecad libocct-data-exchange-7.2 libocct-foundation-7.2 libocct-modeling-algorithms-7.2 libocct-modeling-data-7.2 libocct-ocaf-7.2 libocct-visualization-7.2

How to fix apt-get source You must put some ‘source’ URIs in your sources.list

Problem:

You want to download an apt source package using

apt-get source <package name>

but instead you see this error message:

E: You must put some 'source' URIs in your sources.list

Solution:

In most cases, you can fix this easily using

sudo apt-get update

If this does not fix the issue, edit /etc/apt/sources.list, e.g. using

sudo nano /etc/apt/sources.list

and ensure that the deb-src lines are not commented out.

Example: You need to change

deb http://archive.ubuntu.com/ubuntu artful main restricted
# deb-src http://archive.ubuntu.com/ubuntu artful main restricted

to

deb http://archive.ubuntu.com/ubuntu artful main restricted
deb-src http://archive.ubuntu.com/ubuntu artful main restricted
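If you want to uncomment all deb-src lines in one go instead of editing the file by hand, a sed one-liner like this should work (it keeps a backup as sources.list.bak and assumes the commented lines look exactly like the example above):

sudo sed -i.bak 's/^# deb-src/deb-src/' /etc/apt/sources.list
sudo apt-get update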

How to fix lxd ‘Failed container creation: No storage pool found. Please create a new storage pool.’

Problem:

You want to launch some lxd container using lxc launch […] but instead you get the following error message:

Failed container creation: No storage pool found. Please create a new storage pool.

Solution:

You need to initialize lxd before using it:

lxd init

When it asks you about the backend

Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:

choosing the default option (btrfs) means that you’ll have to use a dedicated block device (or a dedicated preallocated image file) for storage. While this is more efficient if you run many containers at a time, I recommend choosing the dir backend for the default storage pool, because that option is the easiest to configure and will not occupy as much space on your hard drive.
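If you prefer a non-interactive setup, something like the following should work on recent lxd versions – the exact flag names vary between versions, so check lxd init --help first:

lxd init --auto --storage-backend=dir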

See Storage management in lxd for more details, including different options for storage pools in case you need a more advanced setup.

How to create a partition table using fdisk

Warning: If you run fdisk on the wrong drive, or if there is still important data on the drive, you might lose all your data and it will be very hard to restore. Before running these commands, triple-check that you’ve used the correct device (e.g. /dev/sdh)!

In order to create a partition table on a device (e.g. /dev/sdh – note that /dev/sdh1 is not a device but a partition, so using that does not make any sense!), run

sudo fdisk <device file>

If the device doesn’t have a valid partition table, fdisk will automatically create a new one (but not write it to the disk yet). In that case it will show this output (the disk identifier is random and different every time):

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x81ee11ff.

Command (m for help):

If you are sure that you want to partition this device, enter w and press return to write the new partition table to disk & exit.

The new partition table is effective immediately, but does not contain any partitions yet. In order to create a partition (for this example we will create a single partition spanning the entire device), run

sudo fdisk <device file>

again.

This time, enter the n command (new partition). When it asks you about the partition type and its size, just press return every time to select the defaults. It should look like this (<return> added to show you where you should press return).

Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): <return>
Partition number (1-4, default 1): <return>
First sector (2048-31143935, default 2048): <return>
Last sector, +sectors or +size{K,M,G,T,P} (2048-31143935, default 31143935): <return>

After that, when fdisk prompts for a command again (i.e. when it says Command (m for help): ), enter w in order to write the changes (i.e. the new partition) to the disk & exit. After fdisk exits, you can see the partition in /dev, e.g. /dev/sdh1

After that, you’ll likely need to create a filesystem on that partition, e.g. sudo mkfs.ext4 /dev/sdh1 or sudo mkfs.vfat /dev/sdh1. Make sure to create the correct filesystem for the operating system and use case the device will be used for.
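To double-check the result, you can print the new partition table (again, /dev/sdh is only the example device used here):

sudo fdisk -l /dev/sdh
lsblk /dev/sdh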

How to easily find errors in nginx config files

If you edited some nginx config file and nginx doesn’t want to reload or restart, e.g. with an error message like this:

# service nginx reload
Job for nginx.service failed because the control process exited with error code.
See "systemctl  status nginx.service" and "journalctl  -xe" for details.

you likely have some error in one of your config files.

There’s a simple command to check for errors (you need to run it as root): nginx -t

Example output:

nginx: [emerg] unknown directive "autoindex$" in /etc/nginx/sites-enabled/mysite:31
nginx: configuration file /etc/nginx/nginx.conf test failed

Firstly, the last line tells you that there actually is some error in the config files.
The first line tells you exactly where it is: /etc/nginx/sites-enabled/mysite:31 means: Look in the file /etc/nginx/sites-enabled/mysite, line 31.

In this particular case, the actual error message is unknown directive "autoindex$". By checking the aforementioned file I was able to find out that I had accidentally entered autoindex $; instead of autoindex on;

After fixing this issue, nginx -t shows that the configuration file seems correct now:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
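Once the test passes, you can reload safely; chaining both commands ensures that the reload only happens if the configuration check succeeds:

sudo nginx -t && sudo service nginx reload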

Note that while most cases of nginx failing to (re)start are caused by issues in the config files, there are some cases in which the config file seems correct but nginx will still not start up. In that case, have a look at the log file, which is commonly located at /var/log/nginx/error.log. You need to be root in order to view it. I recommend this command:

sudo tail -n 1000 /var/log/nginx/error.log

How to exit the GNU nano editor?

Just press Ctrl+X. If you don’t have unsaved changes, this will exit nano immediately.

In case you have unsaved changes, it will ask you whether to save those changes after pressing Ctrl+X.

  • Press Y to tell it to save the changes you’ve made. It will then ask you to check or enter the filename to save to. Once you are finished with the filename, press Enter.
  • Press N to discard all changes (you won’t be able to restore your changes later) and exit nano immediately.

How to delete the baloo index database file

baloo is a KDE desktop search component that indexes files in order to speed up the search.

The index can get quite large, e.g. my index consumes more than 2 GB of HDD space:

$ balooctl indexSize
Actual Size: 2,04 GiB
Expected Size: 1,33 GiB

           PostingDB:     313,32 MiB    22.924 %
         PosistionDB:     521,05 MiB    38.122 %
            DocTerms:     167,93 MiB    12.287 %
    DocFilenameTerms:      53,46 MiB     3.912 %
       DocXattrTerms:            0 B     0.000 %
              IdTree:       9,79 MiB     0.716 %
          IdFileName:      41,71 MiB     3.052 %
             DocTime:      25,80 MiB     1.887 %
             DocData:       2,02 MiB     0.148 %
   ContentIndexingDB:      14,86 MiB     1.087 %
         FailedIdsDB:            0 B     0.000 %
             MTimeDB:       9,12 MiB     0.667 %

If you don’t want to use baloo anyway or if you just want to re-index all files, you might want to delete the entire index:

rm -rf ~/.local/share/baloo

Note that if you haven’t disabled baloo using balooctl stop ; balooctl disable, it might silently re-create the index in the background.
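Putting it all together, the following sequence stops and disables the indexer before removing the database (these are the balooctl subcommands mentioned above; they may vary slightly between KDE versions):

balooctl stop
balooctl disable
rm -rf ~/.local/share/baloo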

Fixing TensorFlow libcublas.so.8.0: cannot open shared object file on Ubuntu

Problem:

When you run import tensorflow in Python, you get one of the following errors:

ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory
ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory
ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory
ImportError: libcufft.so.8.0: cannot open shared object file: No such file or directory
ImportError: libcurand.so.8.0: cannot open shared object file: No such file or directory

Solution:

Install the required packages using:

sudo apt-get install libcublas8.0 libcusolver8.0 libcudart8.0 libcufft8.0 libcurand8.0

Note that you also need to install cuDNN – see this followup post

Which version of CuDNN should you install for TensorFlow GPU on Ubuntu?

for details on how to do that.

If this method does not work, you can (as a quick workaround) uninstall tensorflow-gpu and install tensorflow – the version without GPU support:

pip3 uninstall tensorflow-gpu
pip3 install tensorflow

However, this will likely make your applications much slower.
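Either way, you can check afterwards whether TensorFlow imports without the shared library error:

python3 -c "import tensorflow as tf; print(tf.__version__)"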

For other solutions see the TensorFlow bugtracker on GitHub.

Fixing LaTeX Error: File … not found on Debian/Ubuntu

Problem:

You’re using latex or pdflatex to compile a .tex file, but you get an error message similar to this one (the solution will work for any missing file, not just utf8x.def):

! LaTeX Error: File `utf8x.def' not found.

Now you’re wondering which package you need to install.

Solution 1: Install everything

This problem can often be fixed once and for all by just installing all packages:

sudo apt-get install texlive-full

However, this pulls in a huge amount of packages and is therefore not recommended for most situations.

Solution 2: Install only required package

You can use apt-file to find the package containing the missing file and install it.

First, update the list of files in all known packages (run sudo apt-get install apt-file first if apt-file is not installed yet):

sudo apt-file update

You only need to do this once every few months or so, before you use apt-file.

Then, look for the missing file (replace utf8x.def by your missing file):

$ apt-file search utf8x.def
texlive-lang-japanese: /usr/share/texlive/texmf-dist/tex/latex/bxbase/bxutf8x.def
texlive-latex-extra: /usr/share/texlive/texmf-dist/tex/latex/ucs/utf8x.def
texlive-luatex: /usr/share/texlive/texmf-dist/tex/lualatex/luainputenc/lutf8x.def

Now it takes some educated guessing to figure out which of the three listed packages (texlive-lang-japanese, texlive-latex-extra, texlive-luatex) needs to be installed. In this case, texlive-latex-extra is the correct choice, as the other packages only ship files whose names merely contain utf8x.def (bxutf8x.def, lutf8x.def) in some subdirectory (like luainputenc). If in doubt, you can just install all of the listed packages.
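In this example, you would therefore run:

sudo apt-get install texlive-latex-extra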

Fixing PPA Unable to identify ‘package’: user@mycomputer in Launchpad

Problem:

You’ve uploaded a DEB package to a Launchpad PPA (e.g. using dput), but the upload is rejected with an error message similar to the one in the title: Unable to identify ‘package’: user@mycomputer.

Solution:

You need to use a proper email address (which must be registered in Launchpad) in debian/changelog.

In order to do this, set the $DEBEMAIL environment variable before running dch.

Example:

export DEBEMAIL=myemail@example.com
dch [...]

If $DEBEMAIL is not set, [username]@[hostname] will be used instead – which is exactly what causes the error shown above.
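To make this setting permanent, you can add it to your shell startup file – adjust the email address to the one registered in Launchpad:

echo 'export DEBEMAIL=myemail@example.com' >> ~/.bashrc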

Solving Docker permission denied while trying to connect to the Docker daemon socket

Problem:

You are trying to run a docker container or follow the docker tutorial, but you only get an error message like this:

docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.26/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.

Solution:

The error message tells you that your current user can’t access the docker engine, because you’re lacking permissions to access the unix socket to communicate with the engine.

As a temporary solution, you can use sudo to run the failed command as root.
However, it is recommended to fix the issue permanently by adding the current user to the docker group:

Run this command in your favourite shell and then completely log out of your account and log back in (if in doubt, reboot!):

sudo usermod -a -G docker $USER

After doing that, you should be able to run the command without any issues. Run docker run hello-world as a normal user in order to check whether it works. Reboot if the issue persists.

Logging out and logging back in is required because the group change will not have an effect unless your session is closed.
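If you want to try out the new group membership in your current terminal without logging out, newgrp starts a subshell with the docker group active (this only affects that single shell session):

newgrp docker
docker run hello-world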

How to interpret smartctl messages like ‘Error: UNC at LBA’?

When running smartctl on your hard drive, you often get a plethora of information that can be hard to interpret for inexperienced users. This post attempts to explain the technical reasons behind the error messages. If you’re looking for advice on whether to replace your hard drive, the only guidance I can give you is: it might fail any time, so better back up your data – but it might also run for many years to come. Furthermore, this article does not describe basic SMART WHEN_FAILED checking but rather the interpretation of more subtle signs of possibly impending HDD failures.
