Use the following command to install a VirtualBox extension pack on a server or other headless machine:
sudo vboxmanage extpack install [filename]
for example:
sudo vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-7.0.8.vbox-extpack
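To verify that the installation succeeded, you can list the installed extension packs:
vboxmanage list extpacks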
You can often save multiple gigabytes of space by deleting old logs from GitLab instances. Be aware that the logs will be lost forever once you delete them, so make sure you don’t actually need them before deleting anything.
First, enter the logs directory, i.e. the directory mapped to /var/log/gitlab. This should be mapped out from your container to a local directory or volume. In our GitLab reference config for docker-compose, we have mapped it to the logs directory.
In that directory, run the following command:
find . \( -name "*.gz" -o -name "*.log*" -o -name "*.s" -o -name "*.u" \) -exec rm -v {} \;
This will delete all files with the given extensions.
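If you want to check which files would be deleted before actually deleting anything, you can run the same find command without the -exec part:
find . \( -name "*.gz" -o -name "*.log*" -o -name "*.s" -o -name "*.u" \)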
This post is based on How to install InvenTree using docker in just 5 minutes and uses the auto-generated docker-compose.yml from there. However, it should be usable for almost any setup.
The setup is pretty standard since the InvenTree proxy container runs the webserver on port 80. Therefore, you don’t even have to explicitly specify a load balancer port.
In your docker-compose.yml, add the following labels to the inventree-proxy container:
For more details on the base Traefik setup, see Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges.
labels: - "traefik.enable=true" - "traefik.http.routers.inventree-mydomain.rule=Host(`inventree.mydomain.com`)" - "traefik.http.routers.inventree-mydomain.entrypoints=websecure" - "traefik.http.routers.inventree-mydomain.tls.certresolver=cloudflare" - "traefik.http.routers.inventree-mydomain.tls.domains[0].main=mydomain.com" - "traefik.http.routers.inventree-mydomain.tls.domains[0].sans=*.mydomain.com"
This command will generate a PostgreSQL dump using pg_dump and immediately feed it into bup split (without creating an intermediate file) for backup.
It assumes that .env contains a line
POSTGRES_USER=myuser
so that the script can determine the PostgreSQL username.
Local .bup variant: set BUP_DIR to the local bup repository and run:
export BUP_DIR=/var/bup/my-database.bup
source .env && docker-compose exec -u postgres -T postgres pg_dump -U${POSTGRES_USER} | bup -d $BUP_DIR split -n mydb-pgdump.sql
Remote variant, which additionally pushes the backup to a bup remote server:
export BUP_DIR=/var/bup/my-database.index.bup
export BUP_REMOTE=bup-server:/bup/my-database.bup
source .env && docker-compose exec -u postgres -T postgres pg_dump -U${POSTGRES_USER} | bup -d $BUP_DIR split -r $BUP_REMOTE -n mydb-pgdump.sql
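To restore such a backup, you can use bup join to reassemble the dump and pipe it into psql. A minimal sketch, assuming the local variant from above and that the target database already exists:
export BUP_DIR=/var/bup/my-database.bup
source .env && bup -d $BUP_DIR join mydb-pgdump.sql | docker-compose exec -u postgres -T postgres psql -U${POSTGRES_USER}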
When trying to connect to your Oracle Cloud instance via VNC from Linux, you’ll get a command like
ssh -o ProxyCommand='ssh -W %h:%p -p 443 ocid1.instanceconsoleconnection.oc1.eu-frankfurt-1.antheljtwxs32nycl7rgwekcj4t2pecwwcsm7mgzy5c3tt3iiovq564wubta@instance-console.eu-frankfurt-1.oci.oraclecloud.com' -N -L localhost:5900:ocid1.instance.oc1.eu-frankfurt-1.antheljtwxs32nycblplzbuamqsqbi4ipz377f3qhs6a4tdh74j673jfsjtq:5900 ocid1.instance.oc1.eu-frankfurt-1.antheljtwxs32nycblplzbuamqsqbi4ipz377f3qhs6a4tdh74j673jfsjtq
but you see error messages like
Unable to negotiate with 130.61.0.255 port 443: no matching host key type found. Their offer: ssh-rsa
Add the following text at the end of your ~/.ssh/config and retry:
Host *
    HostkeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
This will make SSH accept RSA host keys.
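If you don’t want to enable RSA keys for every host, you can restrict the directives to the Oracle hosts only instead of using Host *. A sketch, assuming the console host and OCID-style host aliases from the example command above:
Host *.oci.oraclecloud.com ocid1.*
    HostkeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa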
This guide shows you how to create a bup server. This is based on our previous post How to setup a “bup remote” server in 5 minutes using docker-compose but uses Synology’s built-in Docker GUI instead of docker-compose.
First, create two shared directories: bup-backups (which will store the backups themselves) and bup-config (which will store the dropbear SSH server configuration, that is, SSH host keys and authorized client keys).
Alternatively, you can also use sub-directories of existing shared directories, but I’d like to keep them separate.
Then create a new Docker container by opening Docker -> Container, clicking Create and following these steps:
Select the image ulikoehler/bup-server:latest.
Map local port 2022 to container port 2022 (the bup server SSH port). You can choose any other port in Local Port, but keep the Container Port the same.
As we said before, any directory will do. Create the sub-directories as needed.
On your local Linux computer, create an SSH key using
ssh-keygen -t ed25519 -f id_bup -N ""
Upload id_bup and id_bup.pub to the bup-config shared folder.
Furthermore, copy id_bup.pub to bup-config/dotssh/authorized_keys.
After that, you can start up the container.
Use
ssh -i id_bup -p 2022 bup@[NAS IP address]
to try to connect to your NAS.
In case connecting via SSH does not work, the issue is most likely with your public/private key and/or your authorized_keys file. Check that it is in the right directory (/home/bup/.ssh/authorized_keys inside the container). Also check the logs of the Docker container.
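If that doesn’t reveal the problem, running the SSH client in verbose mode will show which keys are offered and why the server rejects them:
ssh -v -i id_bup -p 2022 bup@[NAS IP address]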
The following sequence allows you to enter the UEFI setup and set the screen size. It does not work for VMs running BIOS!
Press F2 repeatedly until you see the UEFI setup screen.
Open Device Manager.
Open OVMF Platform Configuration and select the screen resolution.
Press ESC and select Y to save the changes. Continue pressing ESC until you are at the start screen.
Select Reset and wait for the OS to boot. You need to select Reset because the change will only be effective after the next complete reboot.
In our previous post How to setup a “bup remote” server in 5 minutes using docker-compose we outlined how to set up your own bup remote server using docker-compose. Read that post before this one!
This post provides an alternate docker-compose.yml config file that mounts a remote CIFS directory as the /bup backup directory instead of using a local directory. This is most useful when using a NAS and a separate bup server.
For this example, we’ll mount the CIFS share //10.1.2.3/bup-backups with user cifsuser and password pheT8Eigho.
Note: For performance reasons, the CIFS server (NAS) and the bup server should be locally connected, not via the internet.
# Mount the backup volume using CIFS
# NOTE: We recommend to not use a storage mounted over the internet
# for performance reasons. Instead, deploy a bup remote server locally.
volumes:
  bup-backups:
    driver_opts:
      type: cifs
      o: "username=cifsuser,password=pheT8Eigho,uid=1111,gid=1111"
      device: "//10.1.2.3/bup-backups"

version: "3.8"
services:
  bup-server:
    image: ulikoehler/bup-server:latest
    environment:
      - SSH_PORT=2022
    volumes:
      - ./dotssh:/home/bup/.ssh
      - ./dropbear:/etc/dropbear
      # BUP backup storage: CIFS mounted
      - bup-backups:/bup
    ports:
      - 2022:2022
    restart: unless-stopped
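After starting the container, you can verify on the Docker host that the CIFS share has actually been mounted (Docker mounts the volume when the container starts):
mount | grep cifs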
The bup backup system implements remote backup on a server by connecting via SSH to said server, starting a bup process there and then communicating via the SSH tunnel.
In this post, we’ll set up a server for bup remote backup based on our ulikoehler/bup-server image (which contains both bup and dropbear as an SSH server).
I recommend doing this in /opt/bup, but in principle, any directory will do.
mkdir -p dotssh bup
# Generate new elliptic curve public key
ssh-keygen -t ed25519 -f id_bup -N ""
# Add SSH key to list of authorized keys
cat id_bup.pub | sudo tee -a dotssh/authorized_keys
# Fix permissions so that dropbear does not complain
sudo chown -R 1111:1111 bup
sudo chmod 0600 dotssh/authorized_keys
sudo chmod 0700 dotssh
1111 is the user ID of the bup user in the container.
Create the following docker-compose.yml:
Note: This docker-compose.yml uses a local backup directory – you can also mount a CIFS directory from e.g. a NAS device. See bup remote server docker-compose config with CIFS-mounted backup store for more details.
version: "3.8" services: bup-server: image: ulikoehler/bup-server:latest environment: - SSH_PORT=2022 volumes: - ./dotssh:/home/bup/.ssh - ./dropbear:/etc/dropbear # BUP backup storage: - ./bup:/bup ports: - 2022:2022 restart: unless-stopped
At this point, you can use docker-compose up to start up the service. However, it’s typically easier to just use TechOverflow’s script to generate a systemd service that autostarts the service on boot (and starts it right now):
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
When you run docker-compose logs -f, you should see a greeting message from dropbear such as
bupremotedocker-bup-remote-1  | [1] Dec 25 14:58:20 Not backgrounding
Create a .ssh/config entry on the client. You need to do this for each client.
Copy id_bup (which we generated earlier) to each client into a folder such as ~/.ssh. Where you copy it does not matter, but the user who will be running the backups later will need access to this file. Also, for that user you need to create a .ssh/config entry telling SSH how to access the bup server:
Host BupServer
    HostName 10.1.2.3
    User bup
    Port 2022
    IdentityFile ~/.ssh/id_bup
Set HostName to the IP or domain name of the host running the docker container.
Set User to bup. This is hard-coded in the container.
Set Port to whatever port you mapped out in docker-compose.yml. If the ports: line in docker-compose.yml is - 1234:2022, the correct value for Port in .ssh/config is 1234.
Set IdentityFile to wherever id_bup is located (see above).
Now you need to connect to the bup server container once for each client. This is both to spot issues with your SSH configuration (such as wrong permissions on the id_bup file) and to save the SSH host key of the container as a known key:
ssh BupServer
If this prompts you for a password, something is wrong in your configuration – possibly you are connecting to the wrong SSH host, since the bup server container has password authentication disabled.
Every client will need bup to be installed. See How to install bup on Ubuntu 22.04 and similar posts.
You have to understand that bup will need both a local directory (called the index) and a directory on the bup server (called the destination directory). You have to use one index directory and one destination directory per backup project. What you define as a backup project is up to you, but I strongly recommend using one backup project per application you back up, in order to have data locality: backups from one application belong together.
By convention, the /bup directory on the server (i.e. container) is dedicated to this purpose (and mapped to a directory or volume outside of the container).
On the local host, I recommend using either /var/lib/bup/project.index.bup or ~/bup/project.index.bup and letting bup auto-create project-specific directories from there. If you use a special user on the client to do backups, you can also place the indexes in that user’s home directory. If the index is lost, this is not an issue as long as the backup works (it will just take a few minutes to check all files again). You should not back up the index directory.
There is no requirement for the .bup or .index.bup suffix, but if you use it, it will allow you to quickly discern what a directory is and whether it is important or not.
In order to use bup, you first need to initialize the directories. You can do this multiple times without any issue, so I do it at the start of each of my backup scripts.
bup -d ~/buptest.index.bup init -r BupServer:/bup/buptest.bup
After that, you can start backing up. Generally this is done by first running bup index (this operation is local-only) and then running bup save (which saves the backup on the bup remote server).
bup -d ~/buptest.index.bup index . && bup save -r BupServer:/bup/buptest.bup -9 --strip-path $(pwd) -n mybackup .
Some parameters demand further explanation:
-9: Maximum compression. bup is so fast that it hardly makes a difference, but it saves a ton of disk space, especially for text-like data.
--strip-path $(pwd): If you backup a directory /home/uli/Documents/ containing a file /home/uli/Documents/Letters/Myletter.txt, this makes bup save the backup of said file under the name Letters/Myletter.txt instead of /home/uli/Documents/Letters/Myletter.txt.
-n mybackup: The name of this backup. This allows you to separate different backups in a single repository.
You might want to say hopefully I’ll never need to restore. WRONG. You need to restore right now, and you need to restore regularly, as a test that if you actually need to recover data by restoring, it will actually work.
In order to do this, we’ll first need to get access to the folder where the backups are stored. This is typically on some kind of Linux server anyway, so just install bup there. In our example above, the directory we’ll work with is called buptest.bup.
There are two convenient ways to view bup backups:
Run bup web and open your browser at http://localhost:8080 to view the backup data (including history):
bup -d buptest.bup web
Or use bup fuse to mount the entire tree, including history, to a directory such as /tmp/buptest:
mkdir -p /tmp/buptest && bup -d buptest.bup fuse /tmp/buptest
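If you want to actually restore files instead of just viewing them, bup restore extracts a given backup into a local directory. A minimal sketch, assuming the backup name mybackup from the save example above:
# Restore the latest revision of "mybackup" into ./restored
bup -d buptest.bup restore -C ./restored /mybackup/latest/.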
This script backs up a WordPress installation (including database, files & directories, excluding cache) to a bup remote server running on 10.1.2.3. You need to ensure passwordless access to that server.
It is based on automated extraction of database host, username & password, see How to grep for WordPress DB_NAME, DB_USER, DB_PASSWORD and DB_HOST in wp-config.php for more details.
#!/bin/bash
export NAME=$(basename $(pwd))
export BUP_DIR=/var/bup/$NAME.bup
export REMOTE_BUP_DIR=/bup-backups/$NAME.bup
export REMOTE_SERVER=10.1.2.3
export BUP_REMOTE=$REMOTE_SERVER:$REMOTE_BUP_DIR
# Init
bup -d $BUP_DIR init -r $BUP_REMOTE
# Save MariaDB dump (extract MariaDB config from wp-config.php)
DB_NAME=$(grep -oP "define\(['\"]DB_NAME['\"],\s*['\"]\K[^'\"]+(?=[\'\"]\s*\)\s*;)" wp-config.php)
DB_USER=$(grep -oP "define\(['\"]DB_USER['\"],\s*['\"]\K[^'\"]+(?=[\'\"]\s*\)\s*;)" wp-config.php)
DB_PASSWORD=$(grep -oP "define\(['\"]DB_PASSWORD['\"],\s*['\"]\K[^'\"]+(?=[\'\"]\s*\)\s*;)" wp-config.php)
DB_HOST=$(grep -oP "define\(['\"]DB_HOST['\"],\s*['\"]\K[^'\"]+(?=[\'\"]\s*\)\s*;)" wp-config.php)
mysqldump -h$DB_HOST -u$DB_USER -p$DB_PASSWORD $DB_NAME | bup -d $BUP_DIR split -n $NAME-$DB_NAME.sql
# Save wordpress directory
bup -d $BUP_DIR index --exclude wp-content/cache --exclude wp-content/uploads/cache . && bup save -r $BUP_REMOTE -9 --strip-path $(pwd) -n $NAME .
# OPTIONAL: Add par2 information
# This is only recommended for backup on unreliable storage or for extremely critical backups
# If you already have bitrot protection (like BTRFS with regular scrubbing), this might be overkill.
# Uncomment this line to enable:
# bup on $REMOTE_SERVER -d $REMOTE_BUP_DIR fsck -g
# OPTIONAL: Cleanup old backups
bup on $REMOTE_SERVER -d $REMOTE_BUP_DIR prune-older --keep-all-for 1m --keep-dailies-for 6m --keep-monthlies-for forever -9 --unsafe
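To run this script automatically, you can add a cron entry for a user with access to the WordPress directory. A sketch, assuming the script is saved as backup.sh inside a (hypothetical) WordPress directory /var/www/mysite:
# Run the backup every night at 03:00 (edit with crontab -e)
0 3 * * * cd /var/www/mysite && ./backup.sh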
In order to install the bup backup software on Alpine Linux, you currently have to compile it yourself.
First, install the prerequisites:
apk add bash make g++ python3-dev git automake autoconf par2cmdline py3-pip && pip3 install wheel && pip3 install pyxattr
Now we can clone bup:
git clone -b 0.33 --depth 1 https://github.com/bup/bup
and build:
cd bup && ./configure && make -j4 install PREFIX=/usr
After this, bup should be installed in /usr/bin/bup. The bup clone directory we created in the first step is not needed any more.
While trying to build bup using make, you see the following error message:
set -e; bup_ver=$(./bup version); \
echo "s,%BUP_VERSION%,$bup_ver,g" > Documentation/substvars.tmp; \
echo "s,%BUP_DATE%,$bup_ver,g" >> Documentation/substvars.tmp
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/uli/dev/deb-buildscripts/bup/lib/bup/main.py", line 181, in <module>
    cmd_module = import_module('bup.cmd.'
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/uli/dev/deb-buildscripts/bup/lib/bup/cmd/version.py", line 5, in <module>
    from bup import options, version
  File "/home/uli/dev/deb-buildscripts/bup/lib/bup/version.py", line 20, in <module>
    assert not date.startswith(b'$Format')
AssertionError
make: *** [GNUmakefile:252: Documentation/substvars] Fehler 1
This error occurs because bup can’t determine its correct version & date. This happens because you deleted the .git directory, which bup needs in order to determine its version.
You can create a clone of bup with the .git directory intact by just cloning, for example, the specific version you want to build:
git clone -b 0.33 --depth 1 https://github.com/bup/bup
and then run
./configure
make -j4
as usual.
The following .gitlab-ci.yml will build a native executable project using cmake with a custom docker image:
stages:
  - build

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4
In this example, we have only one stage – if you have multiple stages, you can specify different images for each of them.
The following code uses the v4l2-ctl executable to get and set v4l2 parameters such as exposure_absolute. It also provides a means of writing a parameter and verifying that it has been set correctly.
import subprocess

def v4l2_set_parameters_once(params, device="/dev/video0"):
    """
    Given a dict of parameters:
    {
        "exposure_auto": 1,
        "exposure_absolute": 10,
    }
    this function sets those parameters using the v4l2-ctl command line executable
    """
    set_ctrl_str = ",".join([f"{k}={v}" for k, v in params.items()]) # e.g. exposure_absolute=400,exposure_auto=1
    subprocess.check_output(["v4l2-ctl", "-d", device, f"--set-ctrl={set_ctrl_str}"])

def v4l2_get_parameters(params, device="/dev/video0"):
    """
    Query a bunch of v4l2 parameters.
    params is a list like
    [
        "exposure_auto",
        "exposure_absolute"
    ]
    Returns a dict of values:
    {
        "exposure_auto": 1,
        "exposure_absolute": 10,
    }
    """
    get_ctrl_str = ",".join([f"{k}" for k in params])
    out = subprocess.check_output(["v4l2-ctl", "-d", device, f"--get-ctrl={get_ctrl_str}"])
    out = out.decode("utf-8")
    result = {}
    for line in out.split("\n"):
        # line should be like "exposure_auto: 1"
        if ":" not in line:
            continue
        k, _, v = line.partition(":")
        result[k.strip()] = v.strip()
    return result

def v4l2_set_params_until_effective(params, device="/dev/video0"):
    """
    Set V4L2 params and check if they have been set correctly.
    If V4L2 does not confirm the parameters correctly, they will be
    set again until they have an effect.
    params is a dict like
    {
        "exposure_auto": 1,
        "exposure_absolute": 10,
    }
    """
    while True:
        v4l2_set_parameters_once(params, device=device)
        result = v4l2_get_parameters(params.keys(), device=device)
        # Check if queried parameters match set parameters
        had_any_mismatch = False
        for k, v in params.items():
            if k not in result:
                raise ValueError(f"Could not query {k}")
            # Note: Values from v4l2 are always strings. So we need to compare as strings
            if str(result.get(k)) != str(v):
                print(f"Mismatch in {k} = {result.get(k)} but should be {v}")
                had_any_mismatch = True
        # Check if there has been any mismatch
        if not had_any_mismatch:
            return
v4l2_set_params_until_effective({
    "exposure_auto": 1,
    "exposure_absolute": 1000,
})
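You can cross-check the result from the shell – this queries the same controls that the script sets:
v4l2-ctl -d /dev/video0 --get-ctrl=exposure_auto,exposure_absolute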
Using OpenCV on Linux, if you have a video device that interfaces a V4L2 device such as a USB webcam:
camera = cv2.VideoCapture(0)
in order to set the manual white balance temperature, you first need to disable automatic white balancing using CAP_PROP_AUTO_WB. See our previous post How to enable/disable manual white balance in OpenCV (Python) for more details on how you can do this; here’s only the short version that works with most cameras.
After that, you can set the white balance temperature using CAP_PROP_WB_TEMPERATURE:
camera.set(cv2.CAP_PROP_AUTO_WB, 0.0) # Disable automatic white balance
camera.set(cv2.CAP_PROP_WB_TEMPERATURE, 4200) # Set manual white balance temperature to 4200K
For V4L2 cameras, as you can see in our previous post on mapping of OpenCV parameters to V4L2 parameters, CAP_PROP_WB_TEMPERATURE is mapped to V4L2_CID_WHITE_BALANCE_TEMPERATURE, which is shown in v4l2-ctl -d /dev/video0 --all as white_balance_temperature. Therefore, you can easily verify whether, for example, setting the white balance temperature worked for your V4L2 camera (such as any USB camera) by looking at the white_balance_temperature section of v4l2-ctl -d /dev/video0 --all:
white_balance_temperature 0x0098091a (int) : min=2800 max=6500 step=1 default=4600 value=4200
Using OpenCV on Linux, if you have a video device that interfaces a V4L2 device such as a USB webcam:
camera = cv2.VideoCapture(0)
you can typically enable automatic white balance (= disable manual white balance) for any camera by using
camera.set(cv2.CAP_PROP_AUTO_WB, 1.0) # Enable automatic white balance
or disable automatic white balance (= enable manual white balance) using
camera.set(cv2.CAP_PROP_AUTO_WB, 0.0) # Disable automatic white balance
When disabling automatic white balance, you should also set the manual white balance temperature – see our post How to set manual white balance temperature in OpenCV (Python) for more details.
For V4L2 cameras, as you can see in our previous post on mapping of OpenCV parameters to V4L2 parameters, CAP_PROP_AUTO_WB is mapped to V4L2_CID_AUTO_WHITE_BALANCE, which is shown in v4l2-ctl -d /dev/video0 --all as white_balance_temperature_auto. Therefore, you can easily verify if, for example, disabling the auto white balance worked for your V4L2 camera (such as any USB camera) by looking at the white_balance_temperature_auto section of v4l2-ctl -d /dev/video0 --all:
white_balance_temperature_auto 0x0098090c (bool) : default=1 value=0
Using OpenCV on Linux, if you have a video device that interfaces a V4L2 device such as a USB webcam:
camera = cv2.VideoCapture(0)
you can typically enable manual exposure mode by setting exposure_auto to 1 (the following output is from v4l2-ctl -d /dev/video0 --all):
exposure_auto 0x009a0901 (menu)   : min=0 max=3 default=3 value=1
				1: Manual Mode
				3: Aperture Priority Mode
As you can see in our previous blogpost, exposure_auto (which is named V4L2_CID_EXPOSURE_AUTO in V4L2 in C/C++) is mapped to CAP_PROP_AUTO_EXPOSURE.
Therefore, you can enable manual exposure using
camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1) # Set exposure to manual mode
You should, however, verify these settings using v4l2-ctl --all with your specific camera.
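For example, you can filter just the relevant control (plus its menu entries) from the full output:
v4l2-ctl -d /dev/video0 --all | grep -A2 exposure_auto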
From both the OpenCV documentation and the V4L2 documentation, it is unclear how all the CAP_PROP_... parameters are mapped to V4L2 controls such as exposure_absolute.
However, you can easily look in the source code (int capPropertyToV4L2(int prop) in cap_v4l.cpp) in order to see how the parameters are mapped internally: Github link to the source code.
This list can be easily obtained using the following Python code:
import cv2

for v in [k for k in cv2.__dict__.keys() if k.startswith("CAP_PROP")]:
    print(f"cv2.{v}")
When you right-click on a plot in Jupyter, you will see the Jupyter Menu popup instead of the normal right-click menu which would allow you to save the image to your computer or copy it to the clipboard.
However, there’s an easy workaround: you can just Shift+Right click to see the normal right-click menu and then save the image to a file.