PM usually means phase modulation. It can also refer to power management, but that meaning is much less common.
How to save systemd journalctl log to json file
Save your journalctl logs as pretty-printed JSON to a file:
sudo journalctl -xu yourservice.service -o json-pretty > ~/test_logs.json
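For programmatic processing, the line-delimited -o json output (one JSON object per line) is often easier to parse than json-pretty. Here is a minimal Python sketch of that idea; the log lines and the priority filter are invented for illustration, not output from a real service:

```python
import json

# Hypothetical lines as produced by: journalctl -u yourservice.service -o json
# (one JSON object per line; real entries contain many more fields)
lines = [
    '{"MESSAGE": "Started yourservice.", "PRIORITY": "6"}',
    '{"MESSAGE": "yourservice failed!", "PRIORITY": "3"}',
]

entries = [json.loads(line) for line in lines]

# syslog priorities: 3 = err, 6 = info; keep only errors and worse
errors = [e["MESSAGE"] for e in entries if int(e["PRIORITY"]) <= 3]
print(errors)  # → ['yourservice failed!']
```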
iperf3 TCP minimal example commands
On the host receiving the data (the server):
iperf3 -s
On the host sending the data (the client):
iperf3 -c [IP address of the host receiving the data]
For example:
iperf3 -c 192.168.178.22
TPLink WDR3600 OpenWRT wireguard throughput benchmark
TechOverflow tested WireGuard bandwidth / throughput on the TP-Link WDR3600 with OpenWRT 21.02, based on a standard iperf3 TCP benchmark. We did not use pre-shared keys in this setup.
So far we were able to verify that the WireGuard bandwidth is approximately 27 Mbit/s (unidirectional), measured using iperf3.
Connecting to host 192.168.239.254, port 5201
[  5] local 10.9.1.104 port 57502 connected to 192.168.239.254 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  3.48 MBytes  29.2 Mbits/sec    0    215 KBytes
[  5]   1.00-2.00   sec  3.86 MBytes  32.4 Mbits/sec    0    387 KBytes
[  5]   2.00-3.00   sec  3.13 MBytes  26.2 Mbits/sec    0    470 KBytes
[  5]   3.00-4.00   sec  3.37 MBytes  28.3 Mbits/sec    0    470 KBytes
[  5]   4.00-5.00   sec  3.31 MBytes  27.8 Mbits/sec    0    470 KBytes
[  5]   5.00-6.00   sec  3.31 MBytes  27.8 Mbits/sec    0    470 KBytes
[  5]   6.00-7.00   sec  3.31 MBytes  27.8 Mbits/sec    0    470 KBytes
[  5]   7.00-8.00   sec  2.76 MBytes  23.1 Mbits/sec    0    470 KBytes
[  5]   8.00-9.00   sec  3.31 MBytes  27.8 Mbits/sec    0    470 KBytes
[  5]   9.00-10.00  sec  2.76 MBytes  23.1 Mbits/sec    0    470 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  32.6 MBytes  27.4 Mbits/sec    0   sender
[  5]   0.00-10.14  sec  32.0 MBytes  26.4 Mbits/sec        receiver
During the test, top consistently showed 0% idle CPU, with the load split approximately 51% sys and 49% sirq.
The commands in use were
iperf3 -s
on the WDR3600 and
iperf3 -c [IP address of WDR3600]
on the client.
How to fix iperf connect failed: Operation in progress
Problem:
When running iperf -c [IP address] you see this error message:
connect failed: Operation in progress
Solution:
You are running different iperf versions on the server and the client. Typically this error occurs if the client is running iperf 2.x whereas the server is running iperf 3.x.
Check using iperf --version. In my case, on the client, it was
iperf version 2.0.13 (21 Jan 2019) pthreads
on OpenWRT.
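Since the version strings of iperf 2.x and 3.x look different, you can script a quick sanity check. A Python sketch; the server string below is a made-up example of typical iperf3 output, not taken from a real run:

```python
import re

def iperf_major_version(version_output: str) -> int:
    """Extract the major version from `iperf --version` / `iperf3 --version` output."""
    match = re.search(r'iperf\s+(?:version\s+)?(\d+)\.', version_output)
    if not match:
        raise ValueError(f"Unrecognized iperf version output: {version_output!r}")
    return int(match.group(1))

client = "iperf version 2.0.13 (21 Jan 2019) pthreads"  # the OpenWRT client above
server = "iperf 3.7 (cJSON 1.5.2)"                      # hypothetical server output
if iperf_major_version(client) != iperf_major_version(server):
    print("Major version mismatch: install matching iperf versions on both hosts")
```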
How to install wireguard_watchdog on OpenWRT
Run this on your OpenWRT router to automatically re-resolve DNS names for peers.
/usr/bin/wireguard_watchdog
is automatically installed with the standard wireguard package, so you only need to enable it to run every minute (run the following command only once, since each run appends another line to root's crontab):
echo '* * * * * /usr/bin/wireguard_watchdog' >> /etc/crontabs/root
Source: This commit message.
How to install Wireguard on OpenWRT
Install using
opkg update
opkg install luci-proto-wireguard
How to install KiCAD 6.0 on Ubuntu
Run this to install KiCAD 6.0:
sudo add-apt-repository ppa:kicad/kicad-6.0-releases
sudo apt update -y
sudo apt -y install kicad
How to search pubmed entrez API with Python and filter results by metadata
If you want to apply more filters to PubMed search results than its web interface offers, you can use the Entrez API.
The following example shows how to sort results alphabetically by the journal the articles originally appeared in.
I recommend processing the data in JSON format.
import requests
import json

db = 'pubmed'
domain = 'https://www.ncbi.nlm.nih.gov/entrez/eutils'
nresults = 10
query = "depression"
retmode = 'json'

# standard query
queryLinkSearch = f'{domain}/esearch.fcgi?db={db}&retmax={nresults}&retmode={retmode}&term={query}'
response = requests.get(queryLinkSearch)
pubmedJson = response.json()

results = []
for paperId in pubmedJson["esearchresult"]["idlist"]:
    # metadata query
    queryLinkSummary = f'{domain}/esummary.fcgi?db={db}&id={paperId}&retmode={retmode}'
    results.append({'paperId': paperId, 'metadata': requests.get(queryLinkSummary).json()})
    # check the journal names
    # print(results[-1]["metadata"]["result"][paperId]["fulljournalname"])

resultsSorted = sorted(results, key=lambda x: x["metadata"]["result"][x["paperId"]]["fulljournalname"])

with open('resultsSorted.json', 'w') as f:
    json.dump(resultsSorted, f)
How to re-encode videos as H.265 & opus using ffmpeg for archival
When you don’t want your video archive to eat up too much space, I recommend encoding videos as H.265 and Opus, as these codecs provide excellent quality at typically less than half the bitrate of older formats.
I also tried VP9 and AV1, which should result in even smaller files at the same quality. However, these are painfully slow, running at 0.0046x speed (libaom-av1) and 0.076x speed (libvpx-vp9) compared to 1.5x speed for libx265 on a test video, since these encoders do not seem to be highly optimized yet.
CRF means constant rate factor; lower values mean better quality but larger file size.
My recommendation is to use CRF 30 for lower-quality sources like grainy-ish analog videos, CRF 23 where you want to preserve the utmost quality, and CRF 26 for everything else.
For non-interlaced videos
ffmpeg -i input.mpg -c:v libx265 -crf 30 -c:a libopus -b:a 56k -frame_duration 60 output.mkv
For interlaced videos
Use -vf yadif=1
as interlace filter to prevent interlacing artifacts:
ffmpeg -i input.mpg -vf yadif=1 -c:v libx265 -crf 30 -c:a libopus -b:a 56k -frame_duration 60 output.mkv
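The recommendations above can be wrapped in a small helper that builds the ffmpeg argument list. A sketch; the category names ("grainy", "best") are made-up labels for the CRF mapping above, not ffmpeg options:

```python
def crf_for(category: str) -> int:
    """Map a rough source-quality category to the CRF values recommended above."""
    return {"grainy": 30, "best": 23}.get(category, 26)

def reencode_command(infile: str, outfile: str, category: str = "default",
                     interlaced: bool = False) -> list[str]:
    """Build the H.265/Opus re-encode command as an argument list."""
    cmd = ["ffmpeg", "-i", infile]
    if interlaced:
        cmd += ["-vf", "yadif=1"]  # deinterlace to prevent interlacing artifacts
    cmd += ["-c:v", "libx265", "-crf", str(crf_for(category)),
            "-c:a", "libopus", "-b:a", "56k", "-frame_duration", "60", outfile]
    return cmd

print(" ".join(reencode_command("input.mpg", "output.mkv", "grainy", interlaced=True)))
```

Pass the resulting list to subprocess.run() to avoid shell-quoting issues with file names.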
Also see our shell script to encode all videos in a directory using this method.
How to check if WireGuard client/peer is connected?
You can use wg show to check if a client is connected:
interface: Computer
  public key: X6NJW+IznvItD3B5TseUasRPjPzF0PkM5+GaLIjdBG4=
  private key: (hidden)
  listening port: 19628

peer: H3KaL/X94984cLDNWFsM4Hx6Rs/Ku0bW2ECkDUn7wFw=
  endpoint: 10.9.1.108:19628
  allowed ips: 10.217.59.2/32
  latest handshake: 27 seconds ago
  transfer: 13.19 MiB received, 12.70 MiB sent
  persistent keepalive: every 1 minute
Look for this line:
latest handshake: 27 seconds ago
If it’s less than two minutes old, the client is connected.
If the latest handshake line is missing entirely, the peer has never connected successfully!
If in doubt, you can often ping the client to verify. Whether it answers the ping depends on the client configuration and possibly firewall settings, but it never hurts to try.
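For scripting this check, `wg show <interface> latest-handshakes` prints one `<public key><TAB><unix timestamp>` line per peer (timestamp 0 if there was never a handshake), which is easier to parse than the human-readable output. A Python sketch; the sample output, keys and timestamps below are invented:

```python
import time

HANDSHAKE_TIMEOUT = 2 * 60  # the "less than two minutes old" rule of thumb from above

def connected_peers(wg_output: str, now: float = None) -> list[str]:
    """Parse `wg show wg0 latest-handshakes` output (pubkey<TAB>unix-timestamp
    per line) and return the public keys of peers with a recent handshake."""
    now = time.time() if now is None else now
    peers = []
    for line in wg_output.strip().splitlines():
        pubkey, timestamp = line.split("\t")
        if int(timestamp) != 0 and now - int(timestamp) < HANDSHAKE_TIMEOUT:
            peers.append(pubkey)
    return peers

# Hypothetical output for two peers: one with a recent handshake,
# one that never completed a handshake (timestamp 0)
sample = "H3KaL/X94984cLDNWFsM4Hx6Rs/Ku0bW2ECkDUn7wFw=\t1640913360\nabc123=\t0"
print(connected_peers(sample, now=1640913387))  # 27 seconds after the handshake
```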
ESP32 Wireguard example with HTTP access over Wireguard (PlatformIO)
In this example we will use Wireguard-ESP32-Arduino in order to make HTTP requests over Wireguard on the ESP32.
[env:esp32-gateway]
platform = espressif32
board = esp32-gateway
framework = arduino
monitor_speed = 115200
lib_deps =
    ciniml/WireGuard-ESP32@^0.1.5
#include <WiFi.h>
#include <WireGuard-ESP32.h>

// WiFi configuration --- UPDATE this configuration for your WiFi AP
char ssid[] = "MyWifiESSID";
char password[] = "my-wifi-password";

// WireGuard configuration --- UPDATE this configuration from JSON
char private_key[] = "gH2YqDa+St6x5eFhomVQDwtV1F0YMQd3HtOElPkZgVY=";
IPAddress local_ip(10, 217, 59, 2);
char public_key[] = "X6NJW+IznvItD3B5TseUasRPjPzF0PkM5+GaLIjdBG4=";
char endpoint_address[] = "192.168.178.133"; // IP of Wireguard endpoint to connect to.
int endpoint_port = 19628;

static WireGuard wg;

void setup() {
    Serial.begin(115200);

    Serial.println("Connecting to the AP...");
    WiFi.begin(ssid, password);
    while( !WiFi.isConnected() ) {
        delay(100);
    }
    Serial.println(WiFi.localIP());

    Serial.println("Adjusting system time...");
    configTime(9 * 60 * 60, 0, "ntp.jst.mfeed.ad.jp", "ntp.nict.jp", "time.google.com");

    Serial.println("Connected. Initializing WireGuard...");
    wg.begin(
        local_ip,
        private_key,
        endpoint_address,
        public_key,
        endpoint_port);
}

void loop() {
    WiFiClient client;
    /**
     * Connect to
     * python3 -m http.server
     */
    if( !client.connect("10.217.59.1", 8000) ) {
        Serial.println("Failed to connect...");
        delay(1000);
        return;
    } else {
        // Client connected successfully. Send dummy HTTP request.
        client.write("GET /wireguard-test HTTP/1.1\r\n");
        client.write("Host: wireguard.test.com\r\n");
        client.write("\r\n\r\n");
    }
}
Remember to replace 192.168.178.133 with the IP address of the computer your ESP32 should connect to (i.e. the computer running WireGuard). You also need to enter the correct WiFi credentials.
On the computer, deploy this WireGuard config:
[Interface]
# Name = Computer
PrivateKey = ONj6Iefel47uMKtWRCSMLan2UC5eW3Fj9Gsy9bqcyEc=
Address = 10.217.59.1/24
ListenPort = 19628

[Peer]
# Name = ESP32
PublicKey = H3KaL/X94984cLDNWFsM4Hx6Rs/Ku0bW2ECkDUn7wFw=
AllowedIPs = 10.217.59.2/32
PersistentKeepalive = 60
which is auto-generated by the following GuardMyWire config:
{
    "rules": {
        "Node": {
            "connect_to": ["*"],
            "keepalive": 60
        }
    },
    "peers": [
        {
            "name": "Computer",
            "endpoint": "192.168.178.233:19628",
            "addresses": ["10.217.59.1/24"],
            "type": "Node",
            "interface_name": "wg0"
        },
        {
            "name": "ESP32",
            "addresses": ["10.217.59.2/24"],
            "type": "Node",
            "interface_name": "wg0"
        }
    ]
}
Enable this config and start a Python HTTP server to receive the requests using
python3 -m http.server
Now flash the firmware on the ESP32.
Using wg show you should see the ESP connecting:
interface: Computer
  public key: X6NJW+IznvItD3B5TseUasRPjPzF0PkM5+GaLIjdBG4=
  private key: (hidden)
  listening port: 19628

peer: H3KaL/X94984cLDNWFsM4Hx6Rs/Ku0bW2ECkDUn7wFw=
  endpoint: 10.9.1.108:19628
  allowed ips: 10.217.59.2/32
  latest handshake: 5 seconds ago
  transfer: 11.71 MiB received, 10.43 MiB sent
  persistent keepalive: every 1 minute
Look for the
latest handshake: 5 seconds ago
line.
On the shell running python3 -m http.server you should see the dummy HTTP requests:
10.217.59.2 - - [31/Dec/2021 02:36:48] "GET /wireguard-test HTTP/1.1" 404 -
10.217.59.2 - - [31/Dec/2021 02:36:48] code 404, message File not found
10.217.59.2 - - [31/Dec/2021 02:36:48] "GET /wireguard-test HTTP/1.1" 404 -
10.217.59.2 - - [31/Dec/2021 02:36:48] code 404, message File not found
10.217.59.2 - - [31/Dec/2021 02:36:48] "GET /wireguard-test HTTP/1.1" 404 -
10.217.59.2 - - [31/Dec/2021 02:36:48] code 404, message File not found
How to install nextcloud GUI client on Ubuntu using the PPA
The following script will install the Nextcloud client and file manager integrations for all installed file managers. Run as root!
sudo add-apt-repository -y ppa:nextcloud-devs/client
sudo apt-get update
sudo apt -y install nextcloud-client
# Install Nautilus integration if nautilus is installed
dpkg --status nautilus > /dev/null 2>/dev/null
retVal=$?
if [ $retVal -eq 0 ]; then
    sudo apt -y install nautilus-nextcloud
fi
# Install Dolphin integration if dolphin is installed
dpkg --status dolphin > /dev/null 2>/dev/null
retVal=$?
if [ $retVal -eq 0 ]; then
    sudo apt -y install dolphin-nextcloud
fi
# Install Caja integration if caja is installed
dpkg --status caja > /dev/null 2>/dev/null
retVal=$?
if [ $retVal -eq 0 ]; then
    sudo apt -y install caja-nextcloud
fi
# Install Nemo integration if nemo is installed
dpkg --status nemo > /dev/null 2>/dev/null
retVal=$?
if [ $retVal -eq 0 ]; then
    sudo apt -y install nemo-nextcloud
fi
Simple uptime-kuma docker-compose setup: Self-hosted UptimeRobot alternative
In order to install Uptime-Kuma, first create a directory for the service to reside in. In this example, we’ll use /opt/uptimekuma.
Note that at the moment Uptime-Kuma only supports one user, so if you need multiple users, you need to create multiple instances of Uptime-Kuma. Fortunately, this is extremely easy using docker-compose.
Now we will create docker-compose.yml:
version: '3'
services:
  kuma:
    image: 'louislam/uptime-kuma:1'
    ports:
      - '17958:3001'
    volumes:
      - './uptimekuma_data:/app/data'
This will listen on port 17958. You can choose any port you want here; just make sure to choose different ports when running different instances of Uptime-Kuma.
Now you can use our script from Create a systemd service for your docker-compose project in 10 seconds to automatically start Uptime-Kuma on boot (and start it immediately):
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
Now access http://<IP of the host>:17958 (or your custom HTTP port) to get started with the Uptime-Kuma setup.
Simple XenOrchestra setup using docker-compose
Also see this variant with Traefik reverse proxy config: XenOrchestra docker-compose setup with Traefik labels
Create a directory such as /opt/xenorchestra and create docker-compose.yml:
version: '3'
services:
  xen-orchestra:
    restart: unless-stopped
    image: ronivay/xen-orchestra:latest
    container_name: xen-orchestra
    network_mode: host
    stop_grace_period: 1m
    environment:
      - HTTP_PORT=1780
    cap_add:
      - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
    volumes:
      - ./xo-data:/var/lib/xo-server
      - ./redis-data:/var/lib/redis
You can choose any HTTP port you want using HTTP_PORT=1780. In this case, we opted for network_mode: host to bypass Docker networking, since XenOrchestra seems to work better with full network access instead of the container having its own IP.
Now you can use our script from Create a systemd service for your docker-compose project in 10 seconds to automatically start XenOrchestra on boot (and start it immediately):
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
Now access http://<IP of the host>:1780 (or your custom HTTP port) to get started with the XO setup.
Simple Unifi controller setup using docker-compose
Updated 2022-12-24: Added --bind_ip 127.0.0.1 to prevent remote MongoDB access in combination with network_mode: host. Thanks Matt Johnson for the suggestion 🙂
This setup runs both MongoDB and Unifi using network_mode: host, which is why we are running MongoDB on a nonstandard port (so it will not interfere with other MongoDB instances). This has the huge benefit of giving the controller direct Layer 2 network access, which enables L2 access point adoption.
Create a directory such as /opt/unifi and create docker-compose.yml:
version: '2.3'
services:
  mongo:
    image: mongo:3.6
    network_mode: host
    restart: always
    volumes:
      - ./mongo_db:/data/db
      - ./mongo/dbcfg:/data/configdb
    command: mongod --bind_ip 127.0.0.1 --port 29718
  controller:
    image: "jacobalberty/unifi:latest"
    depends_on:
      - mongo
    init: true
    network_mode: host
    restart: always
    volumes:
      - ./unifi_dir:/unifi
      - ./unifi_data:/unifi/data
      - ./unifi_log:/unifi/log
      - ./unifi_cert:/unifi/cert
      - ./unifi_init:/unifi/init.d
      - ./unifi_run:/var/run/unifi
      - ./unifi_backup:/unifi/data/backup
    environment:
      - DB_URI=mongodb://localhost:29718/unifi
      - STATDB_URI=mongodb://localhost:29718/unifi_stat
      - DB_NAME=unifi
  logs:
    image: bash
    depends_on:
      - controller
    command: bash -c 'tail -F /unifi/log/*.log'
    restart: always
    volumes:
      - ./unifi_log:/unifi/log
Now create the directories with the correct permissions:
mkdir -p unifi_backup unifi_cert unifi_data unifi_dir unifi_init unifi_log unifi_run
chown -R 999:999 unifi_backup unifi_cert unifi_data unifi_dir unifi_init unifi_log unifi_run
Now you can use our script from Create a systemd service for your docker-compose project in 10 seconds to automatically start the controller on boot (and start it immediately):
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
Now access https://<IP of controller>:8443
to get started with the setup or import a backup.
Recommended tools for deduplicating files
I recommend these two tools for general deduplication of files:
Czkawka: GUI-based deduplication tool
Czkawka homepage – direct download link
This open source tool supports both hash-based deduplication (which finds byte-for-byte identical files) and similarity-based image deduplication with visual comparison. It is rather easy to use, so Czkawka is what I recommend everyone start with – and it still offers many features that make deduplication efficient.
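At its core, the hash-based mode mentioned above boils down to grouping files by a cryptographic content hash. A minimal Python sketch of that idea (real tools add optimizations such as comparing file sizes first, so most files never need to be fully hashed):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> list:
    """Return groups of byte-for-byte identical files under root,
    grouped by their SHA-256 content hash."""
    by_hash = defaultdict(list)
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    # Only groups with more than one member are duplicates
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

Each returned group contains paths whose contents are identical; a deduplication tool would then keep one "original" per group and delete or hardlink the rest.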
rmlint – command line deduplication
rmlint is a full-featured, extremely feature-rich command line deduplication tool. As far as I know it doesn’t have a GUI, so it’s only for users familiar with the command line. I use it for deduplication on servers, and regularly use features like tagged files so that duplicates from certain folders will not be deleted:
rmlint -k folder_where_to_remove_files // original_folder
How to fix Unifi controller on Docker error /unifi/data/system.properties: Permission denied
Problem:
Your Unifi controller running on docker or docker-compose shows log messages like
controller_1  | [2021-12-29 17:37:26,396] <docker-entrypoint> Starting unifi controller service.
controller_1  | /usr/local/bin/docker-entrypoint.sh: line 97: /unifi/data/system.properties: Permission denied
controller_1  | /usr/local/bin/docker-entrypoint.sh: line 97: /unifi/data/system.properties: Permission denied
controller_1  | /usr/local/bin/docker-entrypoint.sh: line 97: /unifi/data/system.properties: Permission denied
controller_1  | /usr/local/bin/docker-entrypoint.sh: line 97: /unifi/data/system.properties: Permission denied
on startup.
Solution:
Fix the permissions of the mounted directory. I have listed
volumes:
  - ./unifi_data:/unifi/data
in my docker-compose.yml.
Fix the permissions by running:
sudo chown -R 999:999 unifi_data
How to update WireGuard peer endpoint address using DNS on MikroTik RouterOS
Update 2022-12-30: Updated code, now uses variables
Assuming your peer comment is peer1 and the correct endpoint DNS record is peer1.mydomain.com, you can use this RouterOS script to update the endpoint based on the DNS record:
:local PEERCOMMENT
:local DOMAIN

:set PEERCOMMENT "peer1"
:set DOMAIN "peer1.mydomain.com"

:if ([interface wireguard peers get number=[find comment=$PEERCOMMENT] value-name=endpoint-address] != [/resolve $DOMAIN]) do={
    interface wireguard peers set number=[find comment=$PEERCOMMENT] endpoint-address=[/resolve $DOMAIN]
}
Modify the variables to suit your WireGuard config: set PEERCOMMENT to the comment of the peer that should be updated, and set DOMAIN to the DNS name that should be used to update the peer’s IP address.
After that, add it as a new script under System -> Scripts, then add an entry under System -> Scheduler to run the script, e.g. every 30 seconds.
Script settings
Scheduler settings
Related posts which might make the process easier to understand:
How to check if WireGuard Peer endpoint address equals DNS record using RouterOS scripting on MikroTik
Assuming your peer comment is peer1 and the correct endpoint DNS record is peer1.mydomain.com:
([interface wireguard peers get number=[find comment=peer1] value-name=endpoint-address] = [resolve peer1.mydomain.com])
This will return true
if the peer endpoint is the same as the DNS record.
Example
[admin@CoreSwitch01] > :put ([interface wireguard peers get number=[find comment=peer1] value-name=endpoint-address] = [resolve peer1.mydomain.com])
true