1. Create a snapshot
lxc snapshot [mycontainer] [snapshot name]
2. Create local image from snapshot
lxc publish [mycontainer]/[snapshot name] --alias [image alias]
3. Create container from image
lxc launch [image alias] [mynewcontainer]
gpg --armor -o MyKey.gpg --export [Key ID or fingerprint]
For example, with the fingerprint:
gpg --armor -o MyKey.gpg --export AA15942077B73AE65E88FB4BCFC41606DD8C212E
or with the (short) key ID:
gpg --armor -o MyKey.gpg --export DD8C212E
Pandas can take care of converting a Counter to a DataFrame by itself, but you need to add a column label:
pd.DataFrame({"YourColumnLabelGoesHere": counterObject})
import pandas as pd
from collections import Counter

ctr = Counter()
ctr["a"] += 1
ctr["b"] += 1
ctr["a"] += 1
ctr["a"] += 1
ctr["b"] += 1
ctr["a"] += 1
ctr["c"] += 1

pd.DataFrame({"ctr": ctr})

This will result in the following DataFrame:

   ctr
a    4
b    2
c    1
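The reason this works: a Counter is a dict subclass mapping keys to counts, so pandas treats it like any other column dictionary. A minimal stdlib-only sketch of that equivalence:

```python
from collections import Counter

# Counting characters yields the same counts as incrementing manually above
ctr = Counter("abaabac")

# Counter is a dict subclass, which is why pd.DataFrame({"ctr": ctr}) works:
# pandas reads it as {index label -> cell value} for the "ctr" column
assert isinstance(ctr, dict)
assert dict(ctr) == {"a": 4, "b": 2, "c": 1}
```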
import structlog

logger = structlog.get_logger()
# Usage example
logger.info("Test log")
You can copy a running lxc container like this:
lxc copy [name of container to be copied] [new container]
for example:
lxc copy mycontainer mycontainerCopy
You are running your WordPress instance using the official WordPress Apache image.
However, the WordPress Media page has a maximum upload size of 2 Megabytes.
This setting is configured in the php.ini used by the WordPress Docker image internally. While it is possible to use a custom php.ini, it’s much easier to edit .htaccess. Just edit .htaccess in the wordpress directory where wp-config.php is located and append this after # END WordPress to set the upload limit to 256 Megabytes:
php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 300
php_value max_input_time 300
The change should be effective immediately after reloading the page. Note that you still might need to configure your reverse proxy (if any) to allow larger uploads. My recommendation is to just try it out as is and if large uploads fail, it’s likely that your reverse proxy is at fault.
The complete .htaccess will then look like this:

# BEGIN WordPress
# The directives (lines) between "BEGIN WordPress" and "END WordPress" are
# dynamically generated and should only be modified via WordPress filters.
# Any changes to the directives between these markers will be overwritten.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteRule ^en/wp-login.php /wp-login.php [QSA,L]
RewriteRule ^de/wp-login.php /wp-login.php [QSA,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 300
php_value max_input_time 300
This simple command will permanently enable IPv4 forwarding on Alpine Linux. Run this as root:
echo net.ipv4.ip_forward=1 | tee -a /etc/sysctl.conf && sysctl -p
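To verify that the setting took effect, you can read the current kernel value back (read-only check, no root required):

```shell
# Print the current forwarding state: 1 = enabled, 0 = disabled
cat /proc/sys/net/ipv4/ip_forward
```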
By installing Tailscale on XCP-NG hosts, you can provide easier access to your virtualization host via VPN.
Run the following commands via SSH as root on the XCP-NG host:
sudo yum-config-manager --add-repo https://pkgs.tailscale.com/stable/centos/7/tailscale.repo
sudo yum -y install tailscale
and enable & start the tailscale daemon tailscaled:
systemctl enable --now tailscaled
My recommendation is to just use the community repository:
echo http://dl-2.alpinelinux.org/alpine/edge/community/ >> /etc/apk/repositories
apk add -U tailscale
Now you need to add tailscaled to the autostart list and then start the service so you can use it right now:
rc-update add tailscale
/etc/init.d/tailscale start
When trying to use usermod in Alpine Linux, you see the following error message:
-ash: usermod: not found
Install usermod and related tools by adding the community repository and installing the shadow package:
echo http://dl-2.alpinelinux.org/alpine/edge/community/ >> /etc/apk/repositories
apk add -U shadow
While trying to enable the matplotlib xkcd style using
plt.style.use("xkcd")
you see the following error message:
OSError: 'xkcd' not found in the style library and input is not a valid URL or path; see `style.available` for list of available styles
You can’t enable xkcd-style plots by running plt.style.use("xkcd"). Instead, use the with plt.xkcd() context manager:
import matplotlib.pyplot as plt

with plt.xkcd():
    # TODO your plotting code goes here!
    # plt.plot(x, y) # Example
    pass
Directly after any proxy_pass line, add
proxy_set_header X-Forwarded-Proto $scheme;
Typically, X-Forwarded-Proto is used together with X-Forwarded-Host like this:
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
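For context, a complete location block with both headers in place might look like the following sketch. The upstream name app_backend is a placeholder for illustration, not something from your configuration:

```nginx
location / {
    proxy_pass http://app_backend;  # app_backend is a hypothetical upstream
    # Tell the backend the original host and scheme (http or https)
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```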
I prefer this method to the GUI docker method because it uses --net=host and therefore doesn’t involve additional routing, bridging or forwarding of packets, which might impact performance.
Login to the Synology NAS over SSH using a user with admin privileges, then run sudo su.
To use iperf3 as a server, run
docker run -it --rm --name=iperf3-server --net=host networkstatic/iperf3 -s
To use iperf3 as a client, run
docker run -it --rm --name=iperf3-client --net=host networkstatic/iperf3 -c 10.1.2.3
We tested iperf3
performance using our network based on the following devices:
The Tailscale version was:

1.24.1
  tailscale commit: 1a9302b1edba91d0f638e775faeaa0ce2a6a25f8
  other commit: 1331ed5836e1a0ab32b10e6ce8748e17ba2c7598
  go version: go1.18.1-ts710a0d8610
The network is completely switched, not routed, and we took care that Tailscale actually used the switched connection using tailscale ping.
Desktop running iperf -s, VM running iperf -c 10.9.2.10:
Connecting to host 10.9.2.10, port 5201
[ 5] local 10.9.2.103 port 52944 connected to 10.9.2.10 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 94.7 MBytes 794 Mbits/sec 338 109 KBytes
[ 5] 1.00-2.00 sec 98.0 MBytes 822 Mbits/sec 353 148 KBytes
[ 5] 2.00-3.00 sec 96.6 MBytes 811 Mbits/sec 382 117 KBytes
[ 5] 3.00-4.00 sec 103 MBytes 862 Mbits/sec 334 116 KBytes
[ 5] 4.00-5.00 sec 101 MBytes 851 Mbits/sec 483 102 KBytes
[ 5] 5.00-6.00 sec 104 MBytes 874 Mbits/sec 503 126 KBytes
[ 5] 6.00-7.00 sec 105 MBytes 883 Mbits/sec 527 119 KBytes
[ 5] 7.00-8.00 sec 108 MBytes 906 Mbits/sec 451 105 KBytes
[ 5] 8.00-9.00 sec 108 MBytes 903 Mbits/sec 442 117 KBytes
[ 5] 9.00-10.00 sec 107 MBytes 900 Mbits/sec 461 123 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.00 GBytes 861 Mbits/sec 4274 sender
[ 5] 0.00-10.00 sec 1.00 GBytes 860 Mbits/sec receiver
iperf Done.
VM running iperf -s, Desktop running iperf -c 10.9.2.103:
Connecting to host 10.9.2.103, port 5201
[ 5] local 10.9.2.10 port 42630 connected to 10.9.2.103 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 88.5 MBytes 742 Mbits/sec 0 966 KBytes
[ 5] 1.00-2.00 sec 90.0 MBytes 755 Mbits/sec 0 1.12 MBytes
[ 5] 2.00-3.00 sec 87.5 MBytes 734 Mbits/sec 33 833 KBytes
[ 5] 3.00-4.00 sec 90.0 MBytes 755 Mbits/sec 0 833 KBytes
[ 5] 4.00-5.00 sec 88.8 MBytes 745 Mbits/sec 0 1.00 MBytes
[ 5] 5.00-6.00 sec 88.8 MBytes 744 Mbits/sec 0 1.00 MBytes
[ 5] 6.00-7.00 sec 87.5 MBytes 734 Mbits/sec 0 1.09 MBytes
[ 5] 7.00-8.00 sec 90.0 MBytes 755 Mbits/sec 0 1.09 MBytes
[ 5] 8.00-9.00 sec 90.0 MBytes 755 Mbits/sec 0 1.09 MBytes
[ 5] 9.00-10.00 sec 90.0 MBytes 755 Mbits/sec 13 863 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 891 MBytes 747 Mbits/sec 46 sender
[ 5] 0.00-10.00 sec 888 MBytes 745 Mbits/sec receiver
iperf Done.
The direction where the VM hosts the iperf -s server shows a slight degradation in performance.
Via Tailscale: Desktop running iperf -s, VM running iperf -c:
Connecting to host 100.64.0.2, port 5201
[ 5] local 100.64.0.3 port 37466 connected to 100.64.0.2 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 39.4 MBytes 330 Mbits/sec 62 149 KBytes
[ 5] 1.00-2.00 sec 45.8 MBytes 385 Mbits/sec 44 150 KBytes
[ 5] 2.00-3.00 sec 38.9 MBytes 326 Mbits/sec 97 122 KBytes
[ 5] 3.00-4.00 sec 47.9 MBytes 401 Mbits/sec 7 242 KBytes
[ 5] 4.00-5.00 sec 39.5 MBytes 332 Mbits/sec 118 110 KBytes
[ 5] 5.00-6.00 sec 46.6 MBytes 391 Mbits/sec 32 136 KBytes
[ 5] 6.00-7.00 sec 41.8 MBytes 351 Mbits/sec 42 159 KBytes
[ 5] 7.00-8.00 sec 44.3 MBytes 372 Mbits/sec 91 104 KBytes
[ 5] 8.00-9.00 sec 36.1 MBytes 303 Mbits/sec 72 133 KBytes
[ 5] 9.00-10.00 sec 41.5 MBytes 348 Mbits/sec 39 139 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 422 MBytes 354 Mbits/sec 604 sender
[ 5] 0.00-10.00 sec 421 MBytes 353 Mbits/sec receiver
iperf Done.
Via Tailscale: VM running iperf -s, Desktop running iperf -c:
Connecting to host 100.64.0.3, port 5201
[ 5] local 100.64.0.2 port 36744 connected to 100.64.0.3 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 23.7 MBytes 199 Mbits/sec 104 89.9 KBytes
[ 5] 1.00-2.00 sec 23.6 MBytes 198 Mbits/sec 80 49.2 KBytes
[ 5] 2.00-3.00 sec 21.1 MBytes 177 Mbits/sec 59 54.0 KBytes
[ 5] 3.00-4.00 sec 23.6 MBytes 198 Mbits/sec 68 69.6 KBytes
[ 5] 4.00-5.00 sec 19.1 MBytes 160 Mbits/sec 77 48.0 KBytes
[ 5] 5.00-6.00 sec 25.3 MBytes 212 Mbits/sec 76 62.4 KBytes
[ 5] 6.00-7.00 sec 21.4 MBytes 179 Mbits/sec 50 107 KBytes
[ 5] 7.00-8.00 sec 25.6 MBytes 215 Mbits/sec 35 124 KBytes
[ 5] 8.00-9.00 sec 22.5 MBytes 188 Mbits/sec 71 48.0 KBytes
[ 5] 9.00-10.00 sec 25.0 MBytes 209 Mbits/sec 42 64.8 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 231 MBytes 194 Mbits/sec 662 sender
[ 5] 0.00-10.01 sec 230 MBytes 193 Mbits/sec receiver
UDP tests were mostly similar to TCP tests (albeit slightly higher throughput at up to 400 Mbit/s), including the sensitivity to the direction of the connection.
Tailscale has a significant impact on network speeds and will typically not achieve near-Gigabit iperf3 speeds in a typical setup with a desktop that is a couple of years old and virtual machines. However, a throughput of 200-400 Mbit/s is more than enough for most applications.
Interestingly, the speed is highly dependent on the direction of transfer between a less powerful VM and a more powerful Desktop, with a factor of x1.5 … x2 between the two directions. This might be attributed to the amount of computation required to encrypt or decrypt the data.
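As a quick sanity check of the x1.5 … x2 claim, the sender bitrates from the iperf3 summary lines above can be compared directly (stdlib-only sketch; the run labels assume the Tailscale tests ran in the same client/server order as the direct tests):

```python
# Sender bitrates (Mbit/s) taken from the iperf3 summaries above
direct = {"vm_client": 861, "desktop_client": 747}     # switched connection
tailscale = {"vm_client": 354, "desktop_client": 194}  # via Tailscale

# Direction asymmetry over Tailscale: falls into the x1.5 ... x2 range
asymmetry = tailscale["vm_client"] / tailscale["desktop_client"]
print(f"asymmetry: x{asymmetry:.2f}")  # → asymmetry: x1.82

# Slowdown introduced by Tailscale, per direction
for run, mbit in direct.items():
    print(f"{run}: x{mbit / tailscale[run]:.2f} slower via Tailscale")
```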
I recommend using invoke instead of the built-in subprocess to handle executing shell commands in Python.
Not only does it provide a more user-friendly syntax compared to e.g. subprocess.check_output():

from invoke import run
run('make')
but it also tends to behave more like you’d expect, especially regarding the output of the command, and has easy-to-use parameters such as hide=True to hide the output of shell commands.
Furthermore, it provides a bunch of really useful features, such as automatically responding to prompts from the shell command.
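For comparison, capturing a command’s output with the built-in subprocess module takes several explicit flags (stdlib-only sketch; echo stands in for a real command like make):

```python
import subprocess

# The subprocess equivalent needs explicit flags to capture and decode output
result = subprocess.run(
    ["echo", "hello"],    # stand-in for a real command like make
    capture_output=True,  # collect stdout/stderr instead of printing them
    text=True,            # decode bytes to str
    check=True,           # raise CalledProcessError on a nonzero exit code
)
print(result.stdout.strip())  # → hello
```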
If you want to connect a USB device such as a 3D printer mainboard to your Raspberry Pi 4 using the USB-C connector as opposed to the larger USB-A connectors, you first need to configure the Raspberry Pi kernel to use host mode for the USB-C connector.
sudo modprobe -r dwc2 && sudo dtoverlay dwc2 dr_mode=host && sudo modprobe dwc2
This method has the advantage of not requiring a reboot.
Edit /boot/config.txt and add

dtoverlay=dwc2,dr_mode=host

at the end of the file (in the [all] section). Then reboot.
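The edit can also be scripted idempotently, so running it twice doesn’t duplicate the line. In this sketch CONFIG points at a temporary copy; on the Raspberry Pi it would be /boot/config.txt (and the append would need root):

```shell
# On the Pi this would be CONFIG=/boot/config.txt; using a temp file here
CONFIG=$(mktemp)
LINE='dtoverlay=dwc2,dr_mode=host'

# Append only if the exact line is not already present (idempotent)
grep -qxF "$LINE" "$CONFIG" || echo "$LINE" >> "$CONFIG"
grep -qxF "$LINE" "$CONFIG" || echo "$LINE" >> "$CONFIG"  # second run: no duplicate

grep -c "$LINE" "$CONFIG"  # → 1
```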
To enable the WireGuard peer called MyPeer:
/interface/wireguard/peers/enable [find comment="MyPeer"]
To disable the WireGuard peer called MyPeer:
/interface/wireguard/peers/disable [find comment="MyPeer"]
Use this command to test if a given MongoDB database exists:
mongo --quiet --eval 'db.getMongo().getDBNames().indexOf("mydb")'
This will return an index such as 0 or 241 if the database is found. On the other hand, it will return -1 if the database does not exist.
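The check works because the MongoDB shell evaluates JavaScript, where Array.indexOf returns the element’s index or -1 if it is missing. A Python emulation of the same logic (the database names here are hypothetical, for illustration only):

```python
def index_of(names, db):
    """Emulate JavaScript Array.prototype.indexOf: index, or -1 if missing."""
    try:
        return names.index(db)
    except ValueError:
        return -1

# Hypothetical output of db.getMongo().getDBNames()
db_names = ["admin", "config", "local", "mydb"]

print(index_of(db_names, "mydb"))     # → 3  (database exists)
print(index_of(db_names, "otherdb"))  # → -1 (database does not exist)
```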
The docker-compose version:
docker-compose exec mongodb mongo --quiet --eval 'db.getMongo().getDBNames().indexOf("mydb")'
where mongodb is the name of your container.
Now we can put it together in a bash script to test if the database exists:
# Query if DB exists in MongoDB
mongo_indexof_db=$(mongo --quiet --eval 'db.getMongo().getDBNames().indexOf("mydb")')
if [ "$mongo_indexof_db" -ne "-1" ]; then
    echo "MongoDB database exists"
else
    echo "MongoDB database does not exist"
fi
The docker-compose variant:
# Query if DB exists in MongoDB
mongo_indexof_db=$(docker-compose -f inspect.yml exec -T mongodb mongo --quiet --eval 'db.getMongo().getDBNames().indexOf("mydb")')
if [ "$mongo_indexof_db" -ne "-1" ]; then
    echo "MongoDB database exists"
else
    echo "MongoDB database does not exist"
fi
When trying to install pyarrow, e.g. using
pip install pyarrow
you see an error log like
-- Found Python3Alt: /home/uli/.pypy3-virtualenv/bin/pypy3
CMake Warning (dev) at /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
  The package name passed to `find_package_handle_standard_args` (PkgConfig)
  does not match the name of the calling package (Arrow). This can lead to
  problems in calling code that expects `find_package` result variables
  (e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
  /usr/share/cmake-3.18/Modules/FindPkgConfig.cmake:59 (find_package_handle_standard_args)
  cmake_modules/FindArrow.cmake:39 (include)
  cmake_modules/FindArrowPython.cmake:46 (find_package)
  CMakeLists.txt:229 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.

-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.2")
-- Could NOT find Arrow (missing: Arrow_DIR)
-- Checking for module 'arrow'
--   No package 'arrow' found
CMake Error at /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:165 (message):
  Could NOT find Arrow (missing: ARROW_INCLUDE_DIR ARROW_LIB_DIR
  ARROW_FULL_SO_VERSION ARROW_SO_VERSION)
Call Stack (most recent call first):
  /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:458 (_FPHSA_FAILURE_MESSAGE)
  cmake_modules/FindArrow.cmake:450 (find_package_handle_standard_args)
  cmake_modules/FindArrowPython.cmake:46 (find_package)
  CMakeLists.txt:229 (find_package)
-- Configuring incomplete, errors occurred!
See also "/tmp/pip-install-409dctif/pyarrow_b70cde6894c3469483f7360493fc2e65/build/temp.linux-x86_64-pypy39/CMakeFiles/CMakeOutput.log".
error: command '/usr/bin/cmake' failed with exit code 1
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyarrow
Failed to build pyarrow
ERROR: Could not build wheels for pyarrow, which is required to install pyproject.toml-based projects
You need to install the arrow library in order to be able to compile pyarrow from source. On Ubuntu, this can be done using
sudo apt install -y -V ca-certificates lsb-release wget
wget https://apache.jfrog.io/artifactory/arrow/$(lsb_release --id --short | tr 'A-Z' 'a-z')/apache-arrow-apt-source-latest-$(lsb_release --codename --short).deb -O /tmp/apache-arrow.deb
sudo apt -y install /tmp/apache-arrow.deb
sudo apt -y update
sudo apt -y install libarrow-dev libarrow-python-dev
When trying to install Pillow, e.g. using
pip install Pillow
you see an error log like
running build_ext

The headers or library files could not be found for jpeg,
a required dependency when compiling Pillow from source.

Please see the install instructions at:
   https://pillow.readthedocs.io/en/latest/installation.html

Traceback (most recent call last):
  File "/tmp/pip-install-_g5fa7ox/pillow_7cb18c0d6bec468e8844184b98c8bf45/setup.py", line 989, in <module>
    setup(
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/__init__.py", line 87, in setup
    return distutils.core.setup(**attrs)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup
    return run_commands(dist)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
    dist.run_commands()
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
    self.run_command(cmd)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/dist.py", line 1214, in run_command
    super().run_command(command)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
    cmd_obj.run()
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/command/install.py", line 68, in run
    return orig.install.run(self)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/command/install.py", line 670, in run
    self.run_command('build')
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/dist.py", line 1214, in run_command
    super().run_command(command)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
    cmd_obj.run()
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/command/build.py", line 136, in run
    self.run_command(cmd_name)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/dist.py", line 1214, in run_command
    super().run_command(command)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
    cmd_obj.run()
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run
    self.build_extensions()
  File "/tmp/pip-install-_g5fa7ox/pillow_7cb18c0d6bec468e8844184b98c8bf45/setup.py", line 804, in build_extensions
    raise RequiredDependencyException(f)
RequiredDependencyException: jpeg

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "<pip-setuptools-caller>", line 34, in <module>
  File "/tmp/pip-install-_g5fa7ox/pillow_7cb18c0d6bec468e8844184b98c8bf45/setup.py", line 1009, in <module>
    raise RequiredDependencyException(msg)
RequiredDependencyException:

The headers or library files could not be found for jpeg,
a required dependency when compiling Pillow from source.

Please see the install instructions at:
   https://pillow.readthedocs.io/en/latest/installation.html
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> Pillow
Pillow needs a bunch of libraries to be installed in order to work properly. Use the following command from the official Pillow website on Ubuntu:
sudo apt-get install cmake libtiff5-dev libjpeg8-dev libopenjp2-7-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python3-tk libharfbuzz-dev libfribidi-dev libxcb1-dev
or check out the installation guide for commands for other operating systems.