Virtualization

How to install xenutils on Linux (XCP-NG)

Using CoreOS? See the post How to install XCP-NG xe-guest-utilities on Fedora CoreOS below instead!

First, insert the guest-tools.iso supplied with XCP-NG into the DVD drive of the virtual machine.

Then run these commands. Note that this will reboot the machine once the installation has finished:

# Mount the guest tools ISO read-only (it usually appears as /dev/sr0 inside the VM)
sudo mount -o ro /dev/sr0 /mnt/
cd /mnt/Linux
# -n runs the installer non-interactively
sudo ./install.sh -n
sudo reboot

After the VM reboots, XCP-NG should detect the management agent.
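
To check from the XCP-NG host whether the guest tools are reporting, you can query the PV driver version of the VM (replace <vm-uuid> with the UUID shown by xe vm-list):

xe vm-param-get uuid=<vm-uuid> param-name=PV-drivers-version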

Please eject the guest tools medium from the machine after the reboot! Sometimes unnecessarily mounted media cause issues.

Posted by Uli Köhler in Virtualization

Fedora CoreOS minimal ignition config for XCP-NG

This is the Ignition config that I use to bring up my Fedora CoreOS instance on a VM on my XCP-NG server:

{
  "ignition": {
    "version": "3.2.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "sudo",
          "docker"
        ],
        "name": "uli",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDpvDSxIwnyMCFtIPRQmPUV6hh9lBJUR0Yo7ki+0Vxs+kcCHGjtcgDzcaHginj1zvy7nGwmcuGi5w83eKoANjK5CzpFT4vJeiXqtGllh0w+B5s6tbSsD0Wv3SC9Xc4NihjVjLU5gEyYmfs/sTpiow225Al9UVYeg1SzFr1I3oSSuw== [email protected]"
        ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "contents": {
          "source": "data:,coreos-test%0A"
        },
        "mode": 420
      },
      {
        "path": "/etc/profile.d/systemd-pager.sh",
        "contents": {
          "source": "data:,%23%20Tell%20systemd%20to%20not%20use%20a%20pager%20when%20printing%20information%0Aexport%20SYSTEMD_PAGER%3Dcat%0A"
        },
        "mode": 420
      },
      {
        "path": "/etc/sysctl.d/20-silence-audit.conf",
        "contents": {
          "source": "data:,%23%20Raise%20console%20message%20logging%20level%20from%20DEBUG%20(7)%20to%20WARNING%20(4)%0A%23%20to%20hide%20audit%20messages%20from%20the%20interactive%20console%0Akernel.printk%3D4"
        },
        "mode": 420
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "enabled": true,
        "name": "docker.service"
      },
      {
        "enabled": true,
        "name": "containerd.service"
      },
      {
        "dropins": [
          {
            "contents": "[Service]\n# Override Execstart in main unit\nExecStart=\n# Add new Execstart with `-` prefix to ignore failure\nExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM\nTTYVTDisallocate=no\n",
            "name": "autologin-core.conf"
          }
        ],
        "name": "getty@tty1.service"
      }
    ]
  }
}

It is built from this YAML:

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: uli
      groups:
        - "sudo"
        - "docker"
      ssh_authorized_keys:
        - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDpvDSxIwnyMCFtIPRQmPUV6hh9lBJUR0Yo7ki+0Vxs+kcCHGjtcgDzcaHginj1zvy7nGwmcuGi5w83eKoANjK5CzpFT4vJeiXqtGllh0w+B5s6tbSsD0Wv3SC9Xc4NihjVjLU5gEyYmfs/sTpiow225Al9UVYeg1SzFr1I3oSSuw== [email protected]"

systemd:
  units:
    - name: docker.service
      enabled: true

    - name: containerd.service
      enabled: true
    - name: getty@tty1.service
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure
          ExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM
          TTYVTDisallocate=no
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          coreos-test
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          # Tell systemd to not use a pager when printing information
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          # Raise console message logging level from DEBUG (7) to WARNING (4)
          # to hide audit messages from the interactive console
          kernel.printk=4

using fcct (the Fedora CoreOS Config Transpiler):

fcct --pretty --strict ignition.yml --output ignition.ign
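
Note that fcct has since been renamed to butane; with newer Fedora CoreOS tooling the equivalent invocation is:

butane --pretty --strict ignition.yml --output ignition.ign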

Install using:

sudo coreos-installer install /dev/xvda --copy-network --ignition-url https://mydomain.com/ignition.ign
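
The Ignition config just needs to be reachable over HTTP(S) from the VM during installation. If you don't have a webserver at hand, one quick option is to serve it from your workstation (assuming <your-ip> is reachable from the VM):

python3 -m http.server 8000
sudo coreos-installer install /dev/xvda --copy-network --insecure-ignition --ignition-url http://<your-ip>:8000/ignition.ign

Note that --insecure-ignition is required when the Ignition URL uses plain HTTP instead of HTTPS.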

Features:

  • DHCP on all network interfaces
  • TTY on the screen
  • No password – remember to replace the SSH key with your own key (see the example below)!
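
If you don't have an SSH key pair yet, you can generate one and paste the printed public key into the config in place of mine:

ssh-keygen -t ed25519
cat ~/.ssh/id_ed25519.pub
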
Posted by Uli Köhler in Virtualization

How to install XCP-NG xe-guest-utilities on Fedora CoreOS

First, insert the guest-tools.iso supplied with XCP-NG into the DVD drive of the virtual machine.

Then run this command to install. Note that this will reboot the CoreOS instance!

curl -fsSL https://techoverflow.net/scripts/install-xenutils-coreos.sh | sudo bash /dev/stdin

This will run the following script:

# Mount the guest tools ISO
sudo mount -o ro /dev/sr0 /mnt
# Layer the xe-guest-utilities RPM onto the OSTree deployment
sudo rpm-ostree install /mnt/Linux/*.x86_64.rpm
# Install the udev rules and the systemd service from the ISO
sudo cp -f /mnt/Linux/xen-vcpu-hotplug.rules /etc/udev/rules.d/
sudo cp -f /mnt/Linux/xe-linux-distribution.service /etc/systemd/system/
# Rewrite the binary path in the service file from /usr/share/oem/xs to /usr/sbin,
# which is where the RPM installs the tools
sudo sed 's/share\/oem\/xs/sbin/g' -i /etc/systemd/system/xe-linux-distribution.service
sudo systemctl daemon-reload
sudo systemctl enable /etc/systemd/system/xe-linux-distribution.service
sudo umount /mnt
# Reboot so that the layered package becomes active
sudo systemctl reboot

After rebooting the VM, XCP-NG should detect the management agent.
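
You can also check on the CoreOS instance itself that the management agent service came up:

systemctl status xe-linux-distribution.service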

Based on work by steniofilho on the Fedora Forum.

Please eject the guest tools medium from the machine after the reboot! Sometimes unnecessarily mounted media cause issues.

Posted by Uli Köhler in Virtualization

How to list VMs in XCP-NG on the command line

In order to list VMs on the command line, login to XCP-NG using SSH and run this command:

xe vm-list

Example output:

[16:51 virt01-xcpng ~]# xe vm-list
uuid ( RO)           : 56dc99f2-c617-f7a9-5712-a4c9df54229a
     name-label ( RW): VM 1
    power-state ( RO): running


uuid ( RO)           : 268d56ab-9672-0f45-69ae-efbc88380b21
     name-label ( RW): VM2
    power-state ( RO): running


uuid ( RO)           : 9b1a771f-fb84-8108-8e01-6dac0f957b19
     name-label ( RW): My VM 3
    power-state ( RO): running
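
xe vm-list also supports filtering and selecting output parameters, for example to show only the names of all running VMs:

xe vm-list power-state=running params=name-label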

Posted by Uli Köhler in Virtualization

How to fix XCP-NG XENAPI_MISSING_PLUGIN(xscontainer) or Error on getting the default coreOS cloud template

Problem:

When creating a CoreOS container on your XCP-NG host, XCP-NG center or XenOrchestra tells you

Cloud config: Error on getting the default coreOS cloud template

with the error message

XENAPI_MISSING_PLUGIN(xscontainer)
This is a XenServer/XCP-ng error

Solution:

Log into the host’s console as root, using SSH or the console in XCP-NG Center or XenOrchestra, and run

yum install xscontainer

After that, reload the page (F5) you use to create your container. No host restart is required.

Note that if you have multiple hosts, you need to yum install xscontainer for each host individually.
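
If your hosts are reachable via SSH as root, you can install it on all of them in one go (the hostnames here are placeholders, replace them with your own):

for host in xcpng-host1 xcpng-host2; do ssh "root@${host}" "yum install -y xscontainer"; done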

Posted by Uli Köhler in Docker, Virtualization

How to download a file or directory from a LXC container

To download files, use

lxc file pull <container name>/<path>/<filename> <target directory>

To download directories, use

lxc file pull --recursive <container name>/<path>/<directory> <target directory>

Examples:

Download /root/myfile.txt from mycontainer to the current directory (.):

lxc file pull mycontainer/root/myfile.txt .

Download /root/mydirectory from mycontainer to the current directory (.):

lxc file pull -r mycontainer/root/mydirectory .
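
The target may also be a full file path in case you want to rename the file while downloading it, e.g.:

lxc file pull mycontainer/root/myfile.txt ./myfile-backup.txt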


Posted by Uli Köhler in Container, Linux, LXC, Virtualization

Launching Debian containers using LXC on Ubuntu

Problem:

You know you can launch an Ubuntu LXC container using

lxc launch ubuntu:18.04 myvm

Now you want to launch a Debian container using

lxc launch debian:jessie myvm

but you only get this error message:

Error: The remote "debian" doesn't exist

Solution:

The debian images are (by default) available from the images remote, not the debian remote, so you need to use this:

lxc launch images:debian/jessie myvm
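
In order to see which Debian images are available on the images remote, list them using:

lxc image list images:debian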


Posted by Uli Köhler in Container, Linux, LXC, Virtualization

How to fix lxd ‘Failed container creation: No storage pool found. Please create a new storage pool.’

Problem:

You want to launch an lxd container using lxc launch […] but instead you get the following error message:

Failed container creation: No storage pool found. Please create a new storage pool.

Solution:

You need to initialize lxd before using it:

lxd init

When it asks you about the backend

Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:

choosing the default option (btrfs) means that you’ll have to use a dedicated block device (or a dedicated preallocated file image) for storage. While this is more efficient if you run many containers at a time, I recommend choosing the dir backend for the default storage pool, because that option is the easiest to configure and does not occupy as much space on your hard drive.
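
If you want to skip the interactive questions altogether, lxd (depending on your version) also supports a non-interactive setup, e.g. with a dir storage pool:

lxd init --auto --storage-backend dir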

See Storage management in lxd for more details, including different options for storage pools in case you need a more advanced setup.

Posted by Uli Köhler in Linux, LXC, Virtualization