Linux

How to run ‘docker-compose up’ in the background

In order to run docker-compose up in the background, use

docker-compose up -d

The -d option means --detach, i.e. the containers run in the background, detached from the foreground shell you are running.
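
Once the stack is running in the background, you can check on it and follow its output. These commands assume they are run from the directory containing your docker-compose.yml:

docker-compose ps       # List the containers of this compose project
docker-compose logs -f  # Follow the log output (press Ctrl+C to stop following)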

Posted by Uli Köhler in Container, Docker, Linux

Solving Bitnami Docker Redmine ‘cannot create directory ‘/bitnami/mariadb’: Permission denied’

Problem:

You are setting up a docker-based redmine installation using the bitnami image, but you’re getting this error message when you use a host directory mounted as volume:

cannot create directory '/bitnami/mariadb': Permission denied

Solution:

Run

sudo chown -R 1001:1001 <directory>

on the host directories used by both the MariaDB container and the Redmine container.

In order to find the directories, look for lines like these in the docker-compose YML file:

# Example: This can be found in the mariadb section:
    volumes:
      - '/var/lib/myredmine/mariadb_data:/bitnami'
# Example: This can be found in the redmine section
    volumes:
      - '/var/lib/myredmine/redmine_data:/bitnami'

In this example, you would have to run

sudo chown -R 1001:1001 /var/lib/myredmine/mariadb_data /var/lib/myredmine/redmine_data

and then restart the container:

docker-compose down
docker-compose up # Use 'docker-compose up -d' to run in the background
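
To double-check that the ownership change worked, you can list the directories with numeric IDs (paths taken from the example above):

ls -ldn /var/lib/myredmine/mariadb_data /var/lib/myredmine/redmine_data # owner and group should both be 1001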


Posted by Uli Köhler in Container, Docker, Linux

A systemd service template for docker-compose

Here’s my template for running a docker-compose service as a systemd service:

# Save as e.g. /etc/systemd/system/my-service.service
[Unit]
Description=MyService
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=uli
Group=docker
# Shutdown container (if running) when unit is stopped
ExecStartPre=/usr/local/bin/docker-compose -f /home/uli/mydockerservice/docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f /home/uli/mydockerservice/docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f /home/uli/mydockerservice/docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

In order to get it up and running for your application, you need to modify a couple of things:

  1. Check whether docker-compose is located at /usr/local/bin/docker-compose (as it is for me, since I use the docker-ce installation from the official Docker repositories for Ubuntu 18.04) or at /usr/bin/docker-compose; in the latter case, set the correct docker-compose path in all 3 Exec…= lines of the service file
  2. Ensure that the user you want to run docker-compose as (uli in this example) is a member of the docker group (sudo usermod -a -G docker <user>), and set the correct user in the User=... line
  3. Define a name for your service that should be reflected in both the service filename and the Description=... line
  4. Set the correct path for your docker-compose YML config file in all the Exec…=… lines (i.e. replace /home/uli/mydockerservice/docker-compose.yml by your YML path).
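
If you have just created or edited the service file, let systemd pick up the new unit before starting it:

sudo systemctl daemon-reload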

After that, you can start your service using

sudo systemctl start my-service # --> my-service.service, use whatever you named your file as

and optionally enable it at bootup:

systemctl enable docker # Docker is required for your service so you need to enable it as well!
systemctl enable my-service # --> my-service.service, use whatever you named your file as
Posted by Uli Köhler in Container, Docker, Linux

How to fix docker ‘Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?’ on Ubuntu

Problem:

You’re running a docker command like docker ps, but you only see this error message:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Solution:

As the error message already tells you, the docker daemon is currently not running.

On Ubuntu (16.04 upwards) and many other systemd-based distributions, you can fix this by running

sudo systemctl start docker

In most cases, you want to automatically start the docker daemon at boot. In order to do this, run

sudo systemctl enable docker

After that, run your command (e.g. docker ps) again.
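
If the command still fails, inspecting the daemon status usually tells you why it did not start:

sudo systemctl status docker    # Show the current status of the docker daemon
sudo journalctl -u docker       # Show the docker daemon's log messages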

Posted by Uli Köhler in Container, Docker, Linux

How to fix docker ‘Got permission denied while trying to connect to the Docker daemon socket’

Problem:

You are running a command like docker ps but you get this error message:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json: dial unix /var/run/docker.sock: connect: permission denied

Solution:

As a quick fix, running the command as root using sudo (e.g. sudo docker ps) will solve the issue temporarily.

The issue here is that the user you’re running the command as is not a member of the docker group. In order to add it to the docker group, run

sudo usermod -a -G docker $USER

After running that command, you need to logout and log back in to your computer (or terminate your SSH session and re-connect in case you are logged in using SSH) – else, the group change does not take effect.
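
If you can't log out and back in right away, you can alternatively open a subshell in which the new group membership is already active (this only affects that one shell):

newgrp docker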

Running groups should show you that you now belong to the docker group:

$ groups
uli sudo www-data lxd docker # Check if docker appears here!

After that, retry running the command (e.g. docker ps) – the error should now have disappeared.

See What does sudo usermod -a -G group $USER do on Linux? for details on what this command changes on your system and what the parameters mean.

Background information

When you run any docker command on Linux, the docker binary will try to connect to /var/run/docker.sock. Being a member of the docker group is what allows you to run docker commands as a non-root user without using sudo all the time.

When you start the docker daemon, it will create /var/run/docker.sock as a unix socket for client applications to connect to.

You can have a look at the owner, group and permissions of the docker socket by using stat /var/run/docker.sock on the command line:

$ stat /var/run/docker.sock
  File: /var/run/docker.sock
  Size: 0               Blocks: 0          IO Block: 4096   socket
Device: 16h/22d Inode: 677         Links: 1
Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
Access: 2019-04-30 01:32:21.718150679 +0200
Modify: 2019-04-24 18:37:39.236357175 +0200
Change: 2019-04-24 18:37:39.240357175 +0200
 Birth: -

For our purposes, the interesting information is Uid: ( 0/ root) Gid: ( 999/ docker) which tells you that the docker socket is owned by the user root and the group docker. The group ID might be different on your computer, but only the name of the group is relevant.

Given the permissions Access: (0660/srw-rw----), both the owner (root) and the group (docker) can read & write (rw) to the docker socket. This means that if you are either the user root (which you can become temporarily using sudo) or you are a member of the docker group, you will be able to connect to that socket and communicate with the docker daemon.

Note that the docker daemon itself (dockerd) is running as root, which you can check using

$ ps aux | grep dockerd
root      2680  0.1  0.3 1247872 19828 ?       Ssl  Apr24   7:44 /usr/bin/dockerd -H fd://

For more information on the docker daemon, see the official Docker daemon guide.

Posted by Uli Köhler in Container, Docker, Linux

How to list all currently running docker containers?

To list all currently running docker containers run

docker ps

If no containers are running, you will only see the header line:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

In case some containers are running, there will be additional lines listing the containers, like

CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                    NAMES
1bd0a1461b38        bitnami/mariadb:latest   "/entrypoint.sh /run…"   6 minutes ago       Up 6 minutes        3306/tcp                 mydb
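
If you also want to list containers that are currently not running (e.g. stopped or exited ones), add the -a flag:

docker ps -a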


Posted by Uli Köhler in Container, Docker, Linux

Fixing ‘netplan apply’ Failed to start NetworkManager.service: Unit NetworkManager.service not found.

Problem:

You’ve configured a wifi or similar (non-ethernet) network in netplan. Your netplan configuration (e.g. in /etc/netplan/50-cloud-init.yaml) looks similar to this:

network:
    ethernets:
        enp0s25:
            addresses: []
            dhcp4: true
    wifis:
        wlxc04a0013c4ca:
            renderer: NetworkManager
            match: {}
            dhcp4: true
            access-points:
                MyWifi:
                    password: "mywifipassword"
    version: 2


But when you run

sudo netplan apply

you see an error message like this:

Failed to start NetworkManager.service: Unit NetworkManager.service not found.
Traceback (most recent call last):
  File "/usr/sbin/netplan", line 23, in <module>
    netplan.main()
  File "/usr/share/netplan/netplan/cli/core.py", line 50, in main
    self.run_command()
  File "/usr/share/netplan/netplan/cli/utils.py", line 130, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/apply.py", line 41, in run
    self.run_command()
  File "/usr/share/netplan/netplan/cli/utils.py", line 130, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/apply.py", line 101, in command_apply
    utils.systemctl_network_manager('start', sync=sync)
  File "/usr/share/netplan/netplan/cli/utils.py", line 68, in systemctl_network_manager
    subprocess.check_call(command)
  File "/usr/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['systemctl', 'start', '--no-block', 'NetworkManager.service']' returned non-zero exit status 5.

Solution:

The renderer: NetworkManager line tells netplan to use NetworkManager to connect to this network.

The error message tells you that NetworkManager is not installed on your system.

On Ubuntu and Debian, use

sudo apt install network-manager

to install it. On other distributions, try to install network-manager or a similarly named package using your distribution’s package manager.
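
After the installation, you can verify that the NetworkManager service is present and running:

systemctl status NetworkManager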

After that, run

sudo netplan apply

again.

Posted by Uli Köhler in Linux, Networking

How to fix apt-key gpg: keyserver receive failed: No dirmngr

Problem:

You want to add a repository signing key using apt-key using a command like

sudo apt-key adv --keyserver hkp://keys.gnupg.net --recv-key E0FF663E

but you get an error message like this:

Executing: /tmp/apt-key-gpghome.qn3065We9J/gpg.1.sh --keyserver hkp://keys.gnupg.net --recv-key E0FF663E
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/tmp/apt-key-gpghome.qn3065We9J/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr

Solution:

sudo apt install dirmngr

then retry the apt-key command from above.

Posted by Uli Köhler in Linux

Launching Debian containers using LXC on Ubuntu

Problem:

You know you can launch an Ubuntu LXC container using

lxc launch ubuntu:18.04 myvm

Now you want to launch a Debian container using

lxc launch debian:jessie myvm

but you only get this error message:

Error: The remote "debian" doesn't exist

Solution:

The Debian images are (by default) available from the images remote, not from a debian remote, so you need to use this:

lxc launch images:debian/jessie myvm
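
In order to see which Debian images are available from the images remote, you can list them (the debian filter is optional):

lxc image list images: debian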


Posted by Uli Köhler in Container, Linux, LXC, Virtualization

How to disable SSL certification verification in LFTP

Problem:

You want to use lftp to access an FTPS server, but you get an error message like this:

mirror: Fatal error: Certificate verification: certificate common name doesn't match requested host name ‘mydomain.de’ (C8:98:BC:01:1E:FF:08:CB:62:08:6B:F1:E8:4C:1F:13:0A:3B:D8:06)

Solution:

You can use the following command in lftp to disable certificate verification:

set ssl:verify-certificate false

Inside the lftp command line, you can run the command and then retry the command that caused the error message. As lftp keeps a history of the commands you entered before, just press the Up arrow key until you see the original command.

Example:

lftp [email protected]:~> mirror . MyBackup
mirror: Fatal error: Certificate verification: certificate common name doesn't match requested host name ‘mydomain.de’ (C8:98:BC:01:1E:FF:08:CB:62:08:6B:F1:E8:4C:1F:13:0A:3B:D8:06)
lftp [email protected]:~> set ssl:verify-certificate false
lftp [email protected]:~> mirror . MyBackup
[...]
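
If you need this setting every time you use lftp, you can make it permanent by adding it to your lftp configuration file (lftp reads ~/.lftprc on startup):

echo 'set ssl:verify-certificate false' >> ~/.lftprc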

In case your server doesn’t actually support FTPS, you might need to use the set ftp:ssl-allow no command to disable FTPS entirely.

Posted by Uli Köhler in Linux

How to SSH to an IPv6 address

If your IPv6 address begins with fe80::

This type of IPv6 address is called link-local and is therefore specific to a network interface on your computer. You can use ifconfig to show information about the network interfaces. You are looking for an identifier like eth0, wlan0, enp3s0, wlp4s0 or tap1. For this example we’re using eth0.
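
If ifconfig is not installed on your system, the ip tool (iproute2) can list the interfaces and their link-local addresses as well:

ip -6 addr show scope link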

Now you can connect to the IPv6 using:

ssh <username>@<ipv6 address>%<interface>

for example

ssh user@fe80::21b:21ff:fe22:e865%eth0

Replace <interface> with the correct interface (if you don’t know which one, try every interface), replace <ipv6 address> with the correct IP address and replace <username> with the correct username.

If your IPv6 address does NOT begin with fe80::

You can just use

ssh <username>@<ipv6 address>

for example

ssh uli@2a01:4f9:c010:278::1

Replace <ipv6 address> with the correct IP address and replace <username> with the correct username.

Posted by Uli Köhler in Linux, Networking

How to find the size of an LXC container

In order to determine the size of an LXC container, first run lxc storage list to list your storage pools:

uli@myserver:~$ lxc storage list
+---------+-------------+--------+------------------------------------+---------+
|  NAME   | DESCRIPTION | DRIVER |               SOURCE               | USED BY |
+---------+-------------+--------+------------------------------------+---------+
| default |             | dir    | /var/lib/lxd/storage-pools/default | 2       |
+---------+-------------+--------+------------------------------------+---------+

If the driver is not dir, you are using a copy-on-write (COW) type storage backend. With such a backend it is not easily possible to determine the storage size of an individual container. The following instructions apply only to the dir driver.

Now open a root shell and cd to the directory listed in the SOURCE column and cd to its containers subdirectory:

root@myserver ~ # cd /var/lib/lxd/storage-pools/default
root@myserver /var/lib/lxd/storage-pools/default # cd containers/
root@myserver /var/lib/lxd/storage-pools/default/containers # 

This directory contains the storage directory for all containers. Run du -sh * in order to find the size of each container:

root@myserver /var/lib/lxd/storage-pools/default/containers # du -sh *
2.0G    my-container

In this example, the container my-container occupies about 2.0 GiB of disk space.
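
If you only need a quick one-off check and you are using the default dir pool from this example, you can combine these steps into a single command:

sudo du -sh /var/lib/lxd/storage-pools/default/containers/*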

Posted by Uli Köhler in Container, Linux, LXC

Routing public IPv6 addresses to your lxc/lxd containers

The enormous number of IPv6 addresses available to most commercially hosted VPS / root servers with a public IPv6 prefix allows you to route a public IPv6 address to every container running on your server. This tutorial shows you how to do that, even if you have no prior experience with routing.

Step 0: Create your LXC container

We assume you have already done this – just for reference, here’s how you can create a container:

lxc launch ubuntu:18.04 my-container

Step 1: Which IP address do you want to assign to your container?

First you need to find out what prefix is routed to your host. Usually you can do that by checking in your provider’s control panel. You’re looking for something like 2a01:4f9:c010:278::1/64. Another option is to run sudo ifconfig and look for an inet6 line in the section of your primary network interface (this only works if you have configured your server to have an IPv6 address). Note that addresses starting with fe80:: and addresses starting with fd, among others, are not public IPv6 addresses.

Then you can choose a new IPv6 address for your container. Which one you choose – as long as it’s within the prefix – is entirely your decision.

Often, <prefix>::1 is used for the host itself, therefore you could, for example, choose <prefix>::2. Note that some providers use some IP addresses for other purposes. Check your provider’s documentation for details.

If you don’t want to make it easy to find your container’s public IPv6, don’t choose <prefix>::1, <prefix>::2, <prefix>::3 etc. but something more random like <prefix>:af15:99b1:0b05:1, for example 2a01:4f9:c010:278:af15:99b1:0b05:0001. Ensure your IPv6 address has 8 groups of 4 hex digits each!

For this example, we choose the IPv6 address 2a01:4f9:c010:278::8.

Step 2: Find out the ULA of your container

We need to find the ULA (unique local address – similar to a private IPv4 address which is not routed on the internet) of the container. Using lxc, this is quite easy:

uli@myserver:~$ lxc list
+--------------+---------+-----------------------+-----------------------------------------------+
|     NAME     |  STATE  |         IPV4          |                     IPV6                      |
+--------------+---------+-----------------------+-----------------------------------------------+
| my-container | RUNNING | 10.144.118.232 (eth0) | fd42:830b:36dc:3691:216:3eff:fed1:9058 (eth0) |
+--------------+---------+-----------------------+-----------------------------------------------+

You need to look in the IPv6 column and copy the address listed there. In this example, the address is fd42:830b:36dc:3691:216:3eff:fed1:9058.

Step 3: Setup IPv6 routing

Now we can tell the host Linux to route your chosen public IPv6 to the container’s private IPv6. This is quite easy:

sudo ip6tables -t nat -A PREROUTING -d <public IPv6> -j DNAT --to-destination <container private IPv6>

In our example, this would be

sudo ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058

First, test the command by running it in a shell. If it works (i.e. it doesn’t print any error message), you can store it permanently, e.g. by adding it to /etc/rc.local (after #!/bin/bash, before exit 0). Advanced users may prefer to add it to /etc/network/interfaces instead.
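
As a minimal sketch (using the addresses from this example), the resulting /etc/rc.local could look like this:

#!/bin/bash
# Route the public IPv6 address to the container's ULA
ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058
exit 0

Don’t forget to make it executable (sudo chmod +x /etc/rc.local), otherwise it won’t be run at boot.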

Step 4: Connect to your container using SSH on your public IPv6 (optional)

Note: This step requires that you have working IPv6 connectivity at your local computer. If you are unsure, check at ipv6-test.com

First, open a shell on your container:

lxc exec my-container bash

After running this, you should see a root shell prompt inside your container:

root@my-container:~#

The following commands should be entered in the container shell, not on the host!

Now we can create a user to login to (in this example, we create the uli user):

root@my-container:~# adduser uli
Adding user `uli' ...
Adding new group `uli' (1001) ...
Adding new user `uli' (1001) with group `uli' ...
Creating home directory `/home/uli' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for uli
Enter the new value, or press ENTER for the default
        Full Name []: 
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 
Is the information correct? [Y/n]

You only need to enter a password twice (you won’t see anything on screen while typing it); for all other prompts you can just press Enter.

The ubuntu:18.04 lxc image used in this example does not allow SSH password authentication in its default configuration. In order to fix this, change PasswordAuthentication no to PasswordAuthentication yes in /etc/ssh/sshd_config and restart the SSH server by running service sshd restart. Be sure you understand the security implications before you do that!
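
Assuming a stock sshd_config where the option appears on a single (possibly commented-out) line, a quick way to make that change from the container shell is:

sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
service sshd restart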

Now, log out of your container shell by pressing Ctrl+D. The following commands can be entered on your desktop or on any other computer with IPv6 connectivity.

Now log in to your container via its public IPv6 address:

ssh <username>@<public IPv6 address>

in this example:

ssh uli@2a01:4f9:c010:278::8

If you configured everything correctly, you’ll see the shell prompt for your container:

uli@my-container:~$

Note: Don’t forget to configure a firewall for your container, e.g. ufw! Your container’s IPv6 is exposed to the internet, and just assuming no one will guess it is not good security practice.

Posted by Uli Köhler in Cloud, Container, Linux, LXC, Networking

How to fix lxc ‘Error: The remote isn’t a private LXD server’

Problem:

You want to launch an lxc container using lxc launch, but you get this error message instead:

Error: The remote isn't a private LXD server

Solution:

You are using a command like this:

lxc launch mycontainer ubuntu:18.04

You’ve swapped the container name and image arguments! The correct command looks like this:

lxc launch ubuntu:18.04 mycontainer
Posted by Uli Köhler in Container, Linux, LXC

How to fix Puppeteer error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory

Problem:

You are trying to run Puppeteer on Ubuntu, but when it starts to launch Chromium, you are facing the following issue:

/home/user/erp/node_modules/puppeteer/.local-chromium/linux-555668/chrome-linux/chrome: error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory

Solution:

Install the missing packages using

sudo apt install -y gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget

Credits to @coldner on the Puppeteer issue tracker for assembling the list of required packages.

If you encounter E: Unable to locate package errors, run sudo apt-get update.

Background information

If you want to know more on why this issue occurs, continue reading here.

Puppeteer is essentially a minimal headless (see What is a headless program or application?) Chromium instance with an additional API for controlling and monitoring it from NodeJS.

Even though Puppeteer does not actually display a GUI, the Chromium instance it uses still requires some of the libraries needed to draw a GUI and to connect to the X11 server, even though these features aren’t used by Puppeteer. One of those libraries is libX11-xcb, which provides the shared library libX11-xcb.so.1; on most Debian-based systems, installing the libx11-xcb1 package fixes this particular error.

However, as is so often the case with missing shared libraries, once you install the one that is missing, there will be at least one other library missing after that. That’s why we need to install the large number of libraries listed above.

Posted by Uli Köhler in Linux, Puppeteer

Fixing npm/node-gyp Error: not found: make on Ubuntu

When you run npm install and it tries to build a native package like bcrypt, you may see an error message like this:

gyp ERR! build error 
gyp ERR! stack Error: not found: make
gyp ERR! stack     at getNotFoundError (/usr/lib/node_modules/npm/node_modules/which/which.js:13:12)
gyp ERR! stack     at F (/usr/lib/node_modules/npm/node_modules/which/which.js:68:19)
gyp ERR! stack     at E (/usr/lib/node_modules/npm/node_modules/which/which.js:80:29)
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/which/which.js:89:16
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/isexe/index.js:42:5
gyp ERR! stack     at /usr/lib/node_modules/npm/node_modules/isexe/mode.js:8:5
gyp ERR! stack     at FSReqWrap.oncomplete (fs.js:182:21)

you simply need to install GNU Make. On Ubuntu, the easiest way of doing this is to run

sudo apt install build-essential

This will not only install make but also related build tools like gcc and the standard header files.

Posted by Uli Köhler in Linux, NodeJS

Fixing node/npm ImportError: No module named compiler.ast on Ubuntu 18.04

If you run npm install and encounter this error message:

ImportError: No module named compiler.ast

you need to install the python development files using

sudo apt install python-dev

Note: In my case, using apt install python3-dev did not solve the issue.

Posted by Uli Köhler in Linux, NodeJS

How to fix FreeCAD ‘No module named WebGui’ on Ubuntu 18.04

On Ubuntu 18.04 there’s currently a known bug where FreeCAD starts, but instead of showing its widgets at startup it only shows this error message:

No module named WebGui

One way I’ve found of fixing this issue is to install FreeCAD not from the Ubuntu repositories but from the freecad-stable PPA:

sudo add-apt-repository ppa:freecad-maintainers/freecad-stable
sudo apt-get update

Then you can install freecad again:

sudo apt install freecad

If you’ve installed previous versions of OpenCASCADE from the freecad PPAs, you might get an error message similar to this one:

The following packages have unmet dependencies:
 freecad : Depends: libocct-data-exchange-7.2 but it is not going to be installed
           Depends: libocct-foundation-7.2 but it is not going to be installed
           Depends: libocct-modeling-algorithms-7.2 but it is not going to be installed
           Depends: libocct-modeling-data-7.2 but it is not going to be installed
           Depends: libocct-ocaf-7.2 but it is not going to be installed
           Depends: libocct-visualization-7.2 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

In that case, you need to force apt to install OpenCASCADE 7.2 along with freecad, which will uninstall OpenCASCADE 7.1:

sudo apt install freecad libocct-data-exchange-7.2 libocct-foundation-7.2 libocct-modeling-algorithms-7.2 libocct-modeling-data-7.2 libocct-ocaf-7.2 libocct-visualization-7.2
Posted by Uli Köhler in CAD, Linux

How to fix apt-get source You must put some ‘source’ URIs in your sources.list

Problem:

You want to download an apt source package using

apt-get source <package name>

but instead you see this error message:

E: You must put some 'source' URIs in your sources.list

Solution:

In most cases, you can fix this easily using

sudo apt-get update

If this does not fix the issue, edit /etc/apt/sources.list, e.g. using

sudo nano /etc/apt/sources.list

and ensure that the deb-src lines are not commented out.

Example: You need to change

deb http://archive.ubuntu.com/ubuntu artful main restricted
# deb-src http://archive.ubuntu.com/ubuntu artful main restricted

to

deb http://archive.ubuntu.com/ubuntu artful main restricted
deb-src http://archive.ubuntu.com/ubuntu artful main restricted

and run sudo apt update after changing the file.

If there are repositories without a deb-src line, you can often try to copy the deb line – for example, from

deb http://myserver.com/deb focal main

you can create an additional line

deb-src http://myserver.com/deb focal main

by changing deb to deb-src and running sudo apt update afterwards. This often works, but it depends on the repository.

Posted by Uli Köhler in Linux

How to fix lxd ‘Failed container creation: No storage pool found. Please create a new storage pool.’

Problem:

You want to launch an lxd container using lxc launch […] but instead you get the following error message:

Failed container creation: No storage pool found. Please create a new storage pool.

Solution:

You need to initialize lxd before using it:

lxd init

When it asks you about the backend

Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:

choosing the default option (btrfs) means that you’ll have to use a dedicated block device (or a dedicated preallocated image file) for storage. While this is more efficient if you run many containers at a time, I recommend choosing the dir backend for the default storage pool, because that option is the easiest to configure and will not occupy as much space on your hard drive.
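
If you prefer a non-interactive setup with the dir backend, the following should work on most lxd versions (the exact flag names may differ between releases):

lxd init --auto --storage-backend dir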

See Storage management in lxd for more details, including different options for storage pools in case you need a more advanced setup.

Posted by Uli Köhler in Linux, LXC, Virtualization