How to install ZeroTier on Teltonika RUTX…/TRB… routers

You can install ZeroTier using the LuCI web interface by opening Services -> Package Manager. On that page, install the ZeroTier package.

Then configure ZeroTier under Services -> VPN -> ZeroTier. First, add a new ZeroTier interface, then add one or more networks. Typically, you can leave the other settings at their defaults.

Posted by Uli Köhler in Networking

How to layer/composite multiple images using imagemagick (convert)

In order to layer two images a.png and b.png using convert, use this syntax:

convert a.png b.png -composite out.png

In order to layer three images a.png, b.png and c.png using convert, use this syntax:

convert a.png b.png -composite c.png -composite out.png

In order to layer four images a.png up to d.png using convert, use this syntax:

convert a.png b.png -composite c.png -composite d.png -composite out.png         

In general: list the first two input images plainly, follow them with -composite, then list each additional input image followed by -composite, and put the output filename last.
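If the number of layers isn't fixed in advance, you can generate this command programmatically. Here's a minimal Python sketch following the pattern above (assuming ImageMagick's convert is on your PATH; the function name is my own):

import subprocess

def composite_images(input_files, output_file):
    """Layer input_files on top of each other (first file = bottom layer)."""
    assert len(input_files) >= 2, "Need at least two input images"
    # The first two inputs are listed plainly, followed by -composite ...
    args = ["convert", input_files[0], input_files[1], "-composite"]
    # ... and every additional input is also followed by -composite
    for extra in input_files[2:]:
        args += [extra, "-composite"]
    # The output filename comes last
    subprocess.check_call(args + [output_file])

composite_images(["a.png", "b.png", "c.png", "d.png"], "out.png")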

Posted by Uli Köhler in ImageMagick

How to convert color to transparency using ImageMagick

You can use ImageMagick to convert a given color to transparency:

convert in.png -fuzz 10% -transparent '#ffffff' out.png

The -fuzz parameter tells ImageMagick to also convert colors within a 10% tolerance range to transparency. This is especially important for JPEG images containing compression artifacts, i.e. pixels that are not purely white. In practice, you'll often need to play around with the fuzz parameter to find the right value. Also note the quotes around '#ffffff': without them, the shell would treat everything starting at # as a comment.
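Since the right fuzz value is image-dependent, one quick way to find it is to render one output per candidate value and compare them visually. A minimal Python sketch (assuming convert is on your PATH; the filenames are placeholders):

import subprocess

# Try several fuzz percentages and write one output file per value
for fuzz in [5, 10, 15, 20, 25]:
    # The list form of subprocess involves no shell, so '#ffffff' needs no extra quoting here
    subprocess.check_call(["convert", "in.png", "-fuzz", f"{fuzz}%",
                           "-transparent", "#ffffff", f"out_fuzz{fuzz}.png"])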

Example:

Applied to an example input image with a white background, using

convert Transparency-example.png -fuzz 10% -transparent '#ffffff' out.png

you'll generate an out.png in which the (near-)white background has been turned transparent. (Input and output images omitted.)

Posted by Uli Köhler in ImageMagick

How to compute distance and bearing between two points represented by coordinate strings in Python

Problem:

You have two points, each represented by a coordinate string, in Python:

a = "N 48° 06.112' E 11° 44.113'"
b = "N 48° 06.525' E 11° 44.251'"

and you want to compute both the bearing and the distance between them.

Solution:

This can be done using a combination of two of our previous posts:

from geographiclib.geodesic import Geodesic
from geopy.geocoders import ArcGIS

geolocator = ArcGIS()

# Parse the coordinate strings using the ArcGIS online geocoder
# (geocode() returns None if the string can't be parsed)
a = geolocator.geocode("N 48° 06.112' E 11° 44.113'")
b = geolocator.geocode("N 48° 06.525' E 11° 44.251'")

# Solve the inverse geodesic problem on the WGS84 ellipsoid
result = Geodesic.WGS84.Inverse(a.latitude, a.longitude, b.latitude, b.longitude)
distance = result["s12"] # in [m] (meters)
bearing = result["azi1"] # in [°] (degrees)

Result for our example:

distance = 784.3069649126435 # m
bearing = 12.613924599757134 # °


Posted by Uli Köhler in Geography, Python

How to compute distance and bearing between two lat/lon points in Python

Problem:

Let’s assume we have the following points represented by their latitude & longitude in Python:

a = (48.11617185, 11.743858785932662)
b = (48.116026149999996, 11.743938922310974)

and we want to compute both distance and bearing between those points on the WGS84 ellipsoid or any other ellipsoid of your choosing.

Solution:

We can use geographiclib to do that:

from geographiclib.geodesic import Geodesic

result = Geodesic.WGS84.Inverse(*a, *b)
distance = result["s12"] # in [m] (meters)
bearing = result["azi1"] # in [°] (degrees)

Geodesic.WGS84.Inverse(*a, *b) is just a shorthand for Geodesic.WGS84.Inverse(a[0], a[1], b[0], b[1]), so don’t be too confused by the syntax.

Using our example coordinates from above, result is:

{'lat1': 48.11617185,
 'lon1': 11.743858785932662,
 'lat2': 48.116026149999996,
 'lon2': 11.743938922310974,
 'a12': 0.00015532346032069415,
 's12': 17.26461706032189,
 'azi1': 159.78110567187977,
 'azi2': 159.7811653333465}

Therefore,

distance = 17.26461706032189 # m
bearing = 159.78110567187977 # °
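
In case you want to use an ellipsoid other than WGS84, you can construct your own Geodesic instance from the equatorial radius (in meters) and the flattening. A minimal sketch using the standard GRS80 ellipsoid parameters:

from geographiclib.geodesic import Geodesic

grs80 = Geodesic(6378137, 1/298.257222101)
result = grs80.Inverse(*a, *b) # Same result keys as above: s12, azi1, ...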
Posted by Uli Köhler in Geography, Python

How to parse any lat/lon string in Python using GeoPy

When working with user-entered coordinates, you often have strings like N 48° 06.976 E 11° 44.638, 48°06'58.6"N 11°44'38.3"E or N48.116267, E11.743967. These lat/lon coordinates come in many, many different formats, which makes them rather hard to parse in an automated setting.

One simple solution for Python is to use geopy, which provides access to a number of online services such as ArcGIS. These services make it easy to parse pretty much any form of coordinate string. You can install geopy using

pip install geopy

Note that Nominatim does not work for the pure coordinates use case – it parses the coordinates just fine but will return the closest building / address.

from geopy.geocoders import ArcGIS

geolocator = ArcGIS()

result = geolocator.geocode("N 48° 06.976' E 11° 44.638'")

In case the coordinates can't be parsed, result will be None.
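
So for robust code, you'll want a guard like this minimal sketch:

result = geolocator.geocode("N 48° 06.976' E 11° 44.638'")
if result is None:
    raise ValueError("Could not parse coordinate string")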

After that, you can work with result, for example, in the following ways:

print(result) will just print the result:

>>> print(result)
Location(Y:48.116267 X:11.743967, (48.11626666666667, 11.743966666666667, 0.0))

You can extract the latitude and longitude using result.latitude and result.longitude.

>>> print(result.latitude, result.longitude)
48.11626666666667 11.743966666666667

For other ways to work with these coordinates, refer to the geopy documentation.

Posted by Uli Köhler in Geography, Python

Primitive sum of values in a 3D array vs Numba JIT Benchmark in Python

This function sums up all values from the given 3D array:

def primitive_pixel_sum(frame):
    result = 0.0
    for x in range(frame.shape[0]):
        for y in range(frame.shape[1]):
            for z in range(frame.shape[2]):
                result += frame[x,y,z]
    return result

Whereas the following function is exactly the same algorithm, but using numba.jit:

import numba

@numba.jit
def numba_pixel_sum(frame):
    result = 0.0
    for x in range(frame.shape[0]):
        for y in range(frame.shape[1]):
            for z in range(frame.shape[2]):
                result += frame[x,y,z]
    return result

We can benchmark them in Jupyter using

%%timeit
primitive_pixel_sum(frame)

and

%%timeit
numba_pixel_sum(frame)

respectively.

Results

We tested this with a camera image captured using OpenCV, with shape (480, 640, 3).

primitive_pixel_sum():

1.78 s ± 253 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

numba_pixel_sum():

4.06 ms ± 413 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

It should be clear from these results that the Numba version is about 438 times faster than the primitive version.

Note that compiling complex functions using numba.jit can take many milliseconds or even seconds – possibly longer than a single run of the plain Python function would take.
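
Since the first call of a numba.jit function is the one that triggers compilation, you can keep this one-time cost out of a benchmark by calling the function once before timing it:

numba_pixel_sum(frame) # First call triggers JIT compilation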

Since it's so simple to use Numba, my recommendation is to just try it out for every function you suspect will eat up a lot of CPU time. Over time, you will develop an intuition for which functions are worth accelerating with Numba, which won't work at all, and which will end up slower overall than plain Python.

Remember that often you can also use NumPy functions to achieve the same result. In our example, you could achieve the same thing using

np.sum(frame)

which is even faster than Numba:

%%timeit
np.sum(frame)

Result:

2.5 ms ± 7.17 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Posted by Uli Köhler in Python

How to set and verify v4l2-ctl parameters in Python using subprocess

The following code uses the v4l2-ctl executable to get and set V4L2 parameters such as exposure_absolute. It also provides a means of writing parameters and verifying that they have been set correctly.

import subprocess

def v4l2_set_parameters_once(params, device="/dev/video0"):
    """
    Given a dict of parameters:
    {
        "exposure_auto": 1,
        "exposure_absolute": 10,
    }
    this function sets those parameters using the v4l2-ctl command line executable
    """
    set_ctrl_str = ",".join([f"{k}={v}" for k,v in params.items()]) # e.g. exposure_absolute=400,exposure_auto=1
    subprocess.check_output(["v4l2-ctl", "-d", device, f"--set-ctrl={set_ctrl_str}"])

def v4l2_get_parameters(params, device="/dev/video0"):
    """
    Query a bunch of v4l2 parameters.
    params is a list like
    [
        "exposure_auto",
        "exposure_absolute"
    ]
    
    Returns a dict of values:
    {
        "exposure_auto": 1,
        "exposure_absolute": 10,
    }
    """
    get_ctrl_str = ",".join([f"{k}" for k in params])
    out = subprocess.check_output(["v4l2-ctl", "-d", device, f"--get-ctrl={get_ctrl_str}"])
    out = out.decode("utf-8")
    result = {}
    for line in out.split("\n"):
        # line should be like "exposure_auto: 1"
        if ":" not in line:
            continue
        k, _, v = line.partition(":")
        result[k.strip()] = v.strip()
    return result

def v4l2_set_params_until_effective(params, device="/dev/video0"):
    """
    Set V4L2 params and check if they have been set correctly.
    If V4L2 does not confirm the parameters correctly, they will be set again until they have an effect
    
    params is a dict like {
        "exposure_auto": 1,
        "exposure_absolute": 10,
    }
    """
    while True:
        v4l2_set_parameters_once(params, device=device)
        result = v4l2_get_parameters(params.keys(), device=device)
        # Check if queried parameters match set parameters
        had_any_mismatch = False
        for k, v in params.items():
            if k not in result:
                raise ValueError(f"Could not query {k}")
            # Note: Values from v4l2 are always strings. So we need to compare as strings
            if str(result.get(k)) != str(v):
                print(f"Mismatch in {k} = {result.get(k)} but should be {v}")
                had_any_mismatch = True
        # Check if there has been any mismatch
        if not had_any_mismatch:
            return

Usage example:

v4l2_set_params_until_effective({
    "exposure_auto": 1,
    "exposure_absolute": 1000,
})
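
You can also call v4l2_get_parameters() on its own to read back the current values, e.g. (the printed dict is an example):

print(v4l2_get_parameters(["exposure_auto", "exposure_absolute"]))
# Example output: {'exposure_auto': '1', 'exposure_absolute': '1000'}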


Posted by Uli Köhler in Audio/Video, Linux, OpenCV, Python

How to set manual white balance temperature in OpenCV (Python)

Using OpenCV on Linux, if you have a video device that interfaces with V4L2, such as a USB webcam:

camera = cv2.VideoCapture(0)

in order to set the manual white balance temperature, you first need to disable automatic white balancing using CAP_PROP_AUTO_WB. See our previous post How to enable/disable manual white balance in OpenCV (Python) for more details on how you can do this; below is only the short version that works with most cameras.

After that, you can set the white balance temperature using CAP_PROP_WB_TEMPERATURE:

camera.set(cv2.CAP_PROP_AUTO_WB, 0.0) # Disable automatic white balance
camera.set(cv2.CAP_PROP_WB_TEMPERATURE, 4200) # Set manual white balance temperature to 4200K
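
As a quick sanity check from within Python, you can read the values back using camera.get() – though be aware that not every driver reports values reliably:

print(camera.get(cv2.CAP_PROP_AUTO_WB)) # Expected: 0.0
print(camera.get(cv2.CAP_PROP_WB_TEMPERATURE)) # Expected: 4200.0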

For V4L2 cameras, as you can see in our previous post on mapping of OpenCV parameters to V4L2 parameters, CAP_PROP_WB_TEMPERATURE is mapped to V4L2_CID_WHITE_BALANCE_TEMPERATURE, which is shown in v4l2-ctl -d /dev/video0 --all as white_balance_temperature. Therefore, you can easily verify whether setting the white balance temperature worked for your V4L2 camera (such as any USB webcam) by looking at the white_balance_temperature line of v4l2-ctl -d /dev/video0 --all:

white_balance_temperature 0x0098091a (int)    : min=2800 max=6500 step=1 default=4600 value=4200
Posted by Uli Köhler in OpenCV, Python

How to enable/disable manual white balance in OpenCV (Python)

Using OpenCV on Linux, if you have a video device that interfaces with V4L2, such as a USB webcam:

camera = cv2.VideoCapture(0)

you can typically enable automatic white balance (= disable manual white balance) for any camera by using

camera.set(cv2.CAP_PROP_AUTO_WB, 1.0) # Enable automatic white balance

or disable automatic white balance (= enable manual white balance) using

camera.set(cv2.CAP_PROP_AUTO_WB, 0.0) # Disable automatic white balance

When disabling automatic white balance, you should also set the manual white balance temperature – see our post How to set manual white balance temperature in OpenCV (Python) for more details.

For V4L2 cameras, as you can see in our previous post on mapping of OpenCV parameters to V4L2 parameters, CAP_PROP_AUTO_WB is mapped to V4L2_CID_AUTO_WHITE_BALANCE, which is shown in v4l2-ctl -d /dev/video0 --all as white_balance_temperature_auto. Therefore, you can easily verify whether, for example, disabling the auto white balance worked for your V4L2 camera (such as any USB webcam) by looking at the white_balance_temperature_auto line of v4l2-ctl -d /dev/video0 --all:

white_balance_temperature_auto 0x0098090c (bool)   : default=1 value=0
Posted by Uli Köhler in OpenCV, Python

How to set V4L2 exposure to manual mode in OpenCV & Python

Using OpenCV on Linux, if you have a video device that interfaces with V4L2, such as a USB webcam:

camera = cv2.VideoCapture(0)

you can typically switch the camera to manual exposure mode by setting the exposure_auto control to 1 (the following output is from v4l2-ctl -d /dev/video0 --all):

exposure_auto 0x009a0901 (menu)   : min=0 max=3 default=3 value=1
              1: Manual Mode
              3: Aperture Priority Mode

As you can see in our previous blogpost, exposure_auto (named V4L2_CID_EXPOSURE_AUTO in the C/C++ V4L2 API) is mapped to CAP_PROP_AUTO_EXPOSURE.

Therefore, you can enable manual exposure using

camera.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1) # Set exposure to manual mode
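
After switching to manual mode, you typically also want to set the exposure time itself via CAP_PROP_EXPOSURE, which for most V4L2 cameras corresponds to exposure_absolute. The value below is just an example – the valid range and unit are driver-dependent:

camera.set(cv2.CAP_PROP_EXPOSURE, 500) # Example value; check v4l2-ctl -d /dev/video0 --all for the valid range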

You should, however, verify these settings with your specific camera using v4l2-ctl --all.

Posted by Uli Köhler in OpenCV, Python

How are OpenCV CAP_PROP_… mapped to V4L2 ctrls / parameters?

From both the OpenCV documentation and the V4L2 documentation, it is unclear how all the CAP_PROP_... parameters are mapped to v4l2 controls such as exposure_absolute.

However, you can easily look in the source code (int capPropertyToV4L2(int prop) in cap_v4l.cpp) in order to see how the parameters are mapped internally. Github link to the source code


Posted by Uli Köhler in Audio/Video, Linux, OpenCV

List of all cv2.CAP_PROP_… properties for OpenCV in Python

This list can be easily obtained using the following Python code:

import cv2

for v in [k for k in cv2.__dict__.keys() if k.startswith("CAP_PROP")]:
    print(f"cv2.{v}")


Posted by Uli Köhler in OpenCV, Python

Recommended local Google Font hosting plugin for WordPress

I tested different plugins for locally hosting Google Fonts (for GDPR compliance), and for some websites, especially those built with Elementor/WPBakery etc., out of all plugins tested only the OMGF (Optimize My Google Fonts) plugin really worked in removing all remotely loaded Google Fonts.

Therefore, I can recommend installing OMGF specifically, even though most other plugins like Self hosted Google Fonts will work for most websites.

You can install it by opening the WordPress admin panel, clicking Plugins, clicking Install, and then entering OMGF in the search field.

Posted by Uli Köhler in Wordpress

How to install veeam agent on Ubuntu

You need to create a Veeam account and log in to access the .deb package.

Download the “Veeam Agent for Linux FREE” from here.

Copy the download link from your browser and run

wget https://www.veeam.com/...

in your downloads folder on your Ubuntu machine.

Then install with:

sudo apt install /home/user/downloads/veeam-release-deb_*.*.*_amd64.deb

and

sudo apt update && sudo apt install veeam
Posted by Joshua Simon in veeam

How to get length/duration of video file in Python using ffprobe

In our previous post How to get video metadata as JSON using ffmpeg/ffprobe, we showed how to generate JSON-formatted output using ffprobe, which comes bundled with ffmpeg.

Assuming ffprobe is installed, you can easily use this to obtain the duration of a video clip (say, in input.mp4) using Python:

import subprocess
import json

input_filename = "input.mp4"

out = subprocess.check_output(["ffprobe", "-v", "quiet", "-show_format", "-print_format", "json", input_filename])

ffprobe_data = json.loads(out)
duration_seconds = float(ffprobe_data["format"]["duration"])
# Example: duration_seconds = 11.6685
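
If you need this in several places, it is convenient to wrap it in a small helper function (a sketch; the function name is my own):

import subprocess
import json

def get_video_duration_seconds(filename):
    """Return the duration of a video file in seconds, as reported by ffprobe."""
    out = subprocess.check_output(["ffprobe", "-v", "quiet", "-show_format",
                                   "-print_format", "json", filename])
    return float(json.loads(out)["format"]["duration"])

print(get_video_duration_seconds("input.mp4")) # e.g. 11.6685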

When writing such code, be aware of the risk of shell code injection if you don't use subprocess correctly! Passing the arguments as a list, as shown above, avoids invoking a shell altogether.

Posted by Uli Köhler in Audio/Video, Python

How to make Jupyter Lab open using Chrome instead of Firefox on Linux

Note: This specific post only covers Jupyter Lab – not Jupyter Notebook. I have no post for Jupyter Notebook so far, but you can use a similar method here, just with slightly different config names etc.

On Ubuntu, my Jupyter Lab always opens using Firefox, whereas I generally want to use Chrome.

In order to fix this, I first needed to generate a default config file (the echo -e "\n" part automatically answers no when prompted whether any existing config file should be overwritten):

echo -e "\n" | jupyter lab --generate-config

Now the config file in ~/.jupyter/jupyter_lab_config.py contains this line:

# c.ServerApp.browser = ''

which we can automatically un-comment and set to chrome using:

sed -i -e "s/# c.ServerApp.browser = ''/c.ServerApp.browser = 'google-chrome'/g" ~/.jupyter/jupyter_lab_config.py

The resulting line looks like this:

c.ServerApp.browser = 'google-chrome'

Full script to use google-chrome instead of firefox

This is a script which you can copy & paste directly into your command line:

echo -e "\n" | jupyter lab --generate-config
sed -i -e "s/# c.ServerApp.browser = ''/c.ServerApp.browser = 'google-chrome'/g" ~/.jupyter/jupyter_lab_config.py
Posted by Uli Köhler in Linux, Python

How to get video metadata as JSON using ffmpeg/ffprobe

You can easily use ffprobe to extract metadata from a given video file (input.mp4 in this example):

ffprobe -v quiet -show_format -show_streams -print_format json input.mp4

Depending on what info you need, you can also omit -show_streams which doesn’t print detailed codec info for the Audio/Video streams but just general data about the file:

ffprobe -v quiet -show_format -print_format json input.mp4

Example output (without -show_streams):

{
    "format": {
        "filename": "input.mp4",
        "nb_streams": 2,
        "nb_programs": 0,
        "format_name": "mov,mp4,m4a,3gp,3g2,mj2",
        "format_long_name": "QuickTime / MOV",
        "start_time": "0.000000",
        "duration": "11.668500",
        "size": "25045529",
        "bit_rate": "17171378",
        "probe_score": 100,
        "tags": {
            "major_brand": "mp42",
            "minor_version": "0",
            "compatible_brands": "isommp42",
            "creation_time": "2022-10-20T19:00:13.000000Z",
            "location": "+48.1072+011.7441/",
            "location-eng": "+48.1072+011.7441/",
            "com.android.version": "12",
            "com.android.capture.fps": "30.000000"
        }
    }
}


Example output (with -show_streams):

{
    "streams": [
        {
            "index": 0,
            "codec_name": "h264",
            "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
            "profile": "High",
            "codec_type": "video",
            "codec_tag_string": "avc1",
            "codec_tag": "0x31637661",
            "width": 1920,
            "height": 1080,
            "coded_width": 1920,
            "coded_height": 1080,
            "closed_captions": 0,
            "has_b_frames": 0,
            "pix_fmt": "yuv420p",
            "level": 40,
            "color_range": "tv",
            "color_space": "bt709",
            "color_transfer": "bt709",
            "color_primaries": "bt709",
            "chroma_location": "left",
            "refs": 1,
            "is_avc": "true",
            "nal_length_size": "4",
            "r_frame_rate": "30/1",
            "avg_frame_rate": "10170000/349991",
            "time_base": "1/90000",
            "start_pts": 0,
            "start_time": "0.000000",
            "duration_ts": 1049973,
            "duration": "11.666367",
            "bit_rate": "16914341",
            "bits_per_raw_sample": "8",
            "nb_frames": "339",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0
            },
            "tags": {
                "rotate": "90",
                "creation_time": "2022-10-20T19:00:13.000000Z",
                "language": "eng",
                "handler_name": "VideoHandle",
                "vendor_id": "[0][0][0][0]"
            },
            "side_data_list": [
                {
                    "side_data_type": "Display Matrix",
                    "displaymatrix": "\n00000000:            0       65536           0\n00000001:       -65536           0           0\n00000002:            0           0  1073741824\n",
                    "rotation": -90
                }
            ]
        },
        {
            "index": 1,
            "codec_name": "aac",
            "codec_long_name": "AAC (Advanced Audio Coding)",
            "profile": "LC",
            "codec_type": "audio",
            "codec_tag_string": "mp4a",
            "codec_tag": "0x6134706d",
            "sample_fmt": "fltp",
            "sample_rate": "48000",
            "channels": 2,
            "channel_layout": "stereo",
            "bits_per_sample": 0,
            "r_frame_rate": "0/0",
            "avg_frame_rate": "0/0",
            "time_base": "1/48000",
            "start_pts": 2016,
            "start_time": "0.042000",
            "duration_ts": 558071,
            "duration": "11.626479",
            "bit_rate": "256234",
            "nb_frames": "545",
            "disposition": {
                "default": 1,
                "dub": 0,
                "original": 0,
                "comment": 0,
                "lyrics": 0,
                "karaoke": 0,
                "forced": 0,
                "hearing_impaired": 0,
                "visual_impaired": 0,
                "clean_effects": 0,
                "attached_pic": 0,
                "timed_thumbnails": 0
            },
            "tags": {
                "creation_time": "2022-10-20T19:00:13.000000Z",
                "language": "eng",
                "handler_name": "SoundHandle",
                "vendor_id": "[0][0][0][0]"
            }
        }
    ],
    "format": {
        "filename": "input.mp4",
        "nb_streams": 2,
        "nb_programs": 0,
        "format_name": "mov,mp4,m4a,3gp,3g2,mj2",
        "format_long_name": "QuickTime / MOV",
        "start_time": "0.000000",
        "duration": "11.668500",
        "size": "25045529",
        "bit_rate": "17171378",
        "probe_score": 100,
        "tags": {
            "major_brand": "mp42",
            "minor_version": "0",
            "compatible_brands": "isommp42",
            "creation_time": "2022-10-20T19:00:13.000000Z",
            "location": "+48.1072+011.7441/",
            "location-eng": "+48.1072+011.7441/",
            "com.android.version": "12",
            "com.android.capture.fps": "30.000000"
        }
    }
}


Posted by Uli Köhler in Audio/Video

How to rotate video by 90°/180°/270° using ffmpeg

In order to rotate a video named input.mp4 using ffmpeg, use the following commands:

Rotate by 90° (clockwise)

ffmpeg -i input.mp4 -vf "transpose=1" rotated.mp4

Rotate by 180°

ffmpeg -i input.mp4 -vf "transpose=1,transpose=1" rotated.mp4

The transpose filter only rotates by 90° per pass, so it is applied twice here; -vf "hflip,vflip" achieves the same result.

Rotate by 270° (clockwise)

This is equivalent to rotating 90° counter-clockwise.

ffmpeg -i input.mp4 -vf "transpose=2" rotated.mp4


Posted by Uli Köhler in Audio/Video

Teardown: Saia-Burgess XGK2-88Z1 microswitch

What’s inside the Saia-Burgess XGK2-88Z1 microswitch?

The microswitch contains one beige and one black case part (injection-moulded plastic) that are riveted together. In order to open the microswitch, one needs to drill out the aluminium rivet.

The pusher button actuates the middle part of the leaf spring, which causes the contact to flip sides. Both outer sides of the leaf spring, but not the middle part of the spring, are affixed to the common terminal (both electrically and mechanically). The button is pushed down using the spring in the top right.

See my video on the actuation mechanism for a better understanding of how the leaf spring is actuated.

Posted by Uli Köhler in Electronics, Teardown