How to back up Redmine using the Bitnami Docker image

In a previous post I detailed how to install Redmine on Linux using the excellent Bitnami docker image.

This post will teach you how to easily make an online backup of your Redmine installation. Note that automating the backup is not within the scope of this post.

We assume that Redmine is installed as shown in my previous post in /var/lib/redmine and that you want to back up to my.backup.server:~/redmine-backup/ using rsync.

Backing up the Redmine data

This is pretty easy, as the data is all in just one directory. You can sync it using

rsync --checksum -Pavz /var/lib/redmine/redmine_data my.backup.server:~/redmine-backup/

Note that old versions of files in redmine_data will be overwritten; however, files that are deleted locally will not be deleted on the backup server. To me, this seems like a good compromise between the ability to recover deleted files and the storage space used.

Backing up the Redmine database

This part is slightly more complicated, since we need to access the MariaDB server running in a different container. Important note: The container ID can change, so it is not sufficient to look up the container ID once and then keep using it. You need to determine the appropriate ID each time you do a backup. See below for instructions on how to do that.

Full command:

docker exec -it $(docker container ls | grep redmine_mariadb_1 | cut -d' ' -f1) mysqldump -uroot bitnami_redmine | xz -e9 -zc - > redmine-database-dump-$(date -I).sql.xz

Let’s break it down:

  • docker exec -it (container ID) (command): Run a command on a running docker container.
  • docker container ls | grep redmine_mariadb_1 | cut -d' ' -f1: Get the ID (the first field of the output, extracted by cut -d' ' -f1) of the running docker container named redmine_mariadb_1
  • mysqldump -uroot bitnami_redmine: This is run on the docker container and dumps the Redmine database as SQL to stdout. No password is necessary since the Bitnami MariaDB image allows access without any password.
  • xz -e9 -zc -: Takes the data from mysqldump from stdin (-), compresses it using maximum compression settings (-e9 -z) and writes the compressed data to stdout.
  • > redmine-database-dump-$(date -I).sql.xz: Writes the compressed data from xz into a file called redmine-database-dump-(current date).sql.xz in the current directory.

The resulting file is called e.g. redmine-database-dump-2019-02-01.sql.xz and it’s placed in the current directory. Ensure that you run the command in a suitable directory. Run it in /tmp if you don’t know which directory might be suitable.

Now we can rsync it to the server:

rsync --checksum -Pavz redmine-database-dump-*.sql.xz my.backup.server:~/redmine-backup/

Since the filename contains the current date, this approach will not overwrite old daily backups of the database, so you can restore your database very flexibly.

Posted by Uli Köhler in Container, Docker, Linux, Project management

How to calculate STM32 bxCAN bit timings using Python

Recently I needed to configure some STM32 microcontrollers with nonstandard clock speeds to use the correct CAN bit timings. The Python script below helps you find appropriate register values.

While this script has been written for the STM32F413 series, it should be applicable to pretty much all STM32 MCUs with bxCAN – however, you need to check the details, especially from which clock the CAN is derived.

You need to know:

  • The frequency of the clock from which the CAN clock is derived (PCLK1 in my example: 48 MHz)
  • Your desired Baudrate: 1 MBaud in my example

Using an iterative trial-and-error approach, we will determine the appropriate values for BRP, TS1 and TS2 (these are bit fields in the CAN bit timing register):

#!/usr/bin/env python3
# Configuration. Change here
master_clock = 48e6 # PCLK1, or whatever CLK CAN is derived from
desired_baudrate = 1e6 # 1e6 = 1 MBaud, 500e3 = 500 kBaud
# Register values
brp = 5 # Baud rate prescaler
ts1 = 4 # (Length of time segment 1) - 1
ts2 = 1 # (Length of time segment 2) - 1

# Calculation. You usually don't need to change this.
bs1_segments = ts1 + 1
bs2_segments = ts2 + 1
total_segments = 1 + bs1_segments + bs2_segments
bittime = 1. / desired_baudrate
master_time = 1. / master_clock
tq = (brp + 1) * master_time
total_time = total_segments * tq
effective_rate = 1. / total_time
sample_point = 1 - (bs2_segments / total_segments)

# Print results
print(f"Effective Baudrate: {effective_rate:.2f} Baud = {100. * effective_rate / desired_baudrate:.2f} % of desired baudrate")
print(f"Sample point: {100. * sample_point:.2f}  %")

When you run this script, you will see an output like this:

Effective Baudrate: 1000000.00 Baud = 100.00 % of desired baudrate
Sample point: 75.00  %

Of course you need to fill in the master clock frequency and the desired baudrate at the top. You can start with the pre-configured values for BRP, TS1 and TS2.

In the end, you should achieve 100% of the desired baudrate and a sample point of about 87.5% (80% to 90% is usually OK; 75% might not work with long cables; there might also be a specific value dictated by your comms standard, e.g. CANopen or J1939).
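
For reference, these are the relationships the script evaluates (directly mirroring the code above, with BRP, TS1 and TS2 being the register values):

f_{\text{effective}} = \frac{f_{\text{master}}}{(BRP + 1) \cdot \left(1 + (TS1 + 1) + (TS2 + 1)\right)}

\text{sample point} = 1 - \frac{TS2 + 1}{1 + (TS1 + 1) + (TS2 + 1)}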

Change the values of BRP, TS1 and TS2, re-running the script and observing the output each time. I recommend first changing BRP until you get close and then adjusting using this algorithm (a sketch that automates this search follows the list below):

  • If your actual baudrate is too slow (i.e. < 100%), decrease BRP (larger step) and/or decrease TS1 + TS2
  • If your actual baudrate is too fast (i.e. > 100%), increase BRP (larger step) and/or increase TS1 + TS2
  • If your sample point percentage is too small, increase the ratio of TS1 to TS2
  • If your sample point percentage is too large, decrease the ratio of TS1 to TS2
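
If you prefer to let the computer do the trial and error, here is a minimal brute-force sketch based on the same formulas as the script above. The register field widths (BRP up to 10 bits, TS1 up to 4 bits, TS2 up to 3 bits) are assumptions based on typical bxCAN implementations – check the reference manual of your specific MCU:

#!/usr/bin/env python3
# Brute-force search for BRP/TS1/TS2 (sketch, see note above)
master_clock = 48e6          # PCLK1, or whatever CLK CAN is derived from
desired_baudrate = 1e6       # 1e6 = 1 MBaud
target_sample_point = 0.875  # aim for 87.5 %

best = None
for brp in range(0, 1024):      # assumed 10-bit BRP field
    for ts1 in range(0, 16):    # assumed 4-bit TS1 field
        for ts2 in range(0, 8): # assumed 3-bit TS2 field
            total_segments = 1 + (ts1 + 1) + (ts2 + 1)
            effective_rate = master_clock / ((brp + 1) * total_segments)
            if abs(effective_rate - desired_baudrate) > 1e-6:
                continue  # only accept (practically) exact baudrate matches
            sample_point = 1 - ((ts2 + 1) / total_segments)
            error = abs(sample_point - target_sample_point)
            if best is None or error < best[0]:
                best = (error, brp, ts1, ts2, sample_point)

if best is None:
    print("No exact match found - try a different master clock or baudrate")
else:
    _, brp, ts1, ts2, sample_point = best
    print(f"BRP={brp} TS1={ts1} TS2={ts2}, sample point = {100. * sample_point:.2f} %")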

Note that there might be master clock speeds and baudrates for which there is no ideal setting. Be sure to check whether a slightly different baudrate is still OK in your application (usually it’s not). If it is not, you need to find a way to use a different PCLK1 speed.

Posted by Uli Köhler in Electronics, Embedded

How to use custom themes with the Bitnami Redmine Docker image

In a previous post I detailed how to install Redmine on Linux using the excellent Bitnami docker image.

This post shows you how to install a custom theme like A1 (which I used successfully for more than 5 years) if you use the Bitnami Docker image. We will assume that you installed Redmine in /var/lib/redmine and that your systemd service is called redmine.

Note: If you get any permission denied errors, try running the same command using sudo.

First, we need to create the themes directory.

sudo mkdir /var/lib/redmine/themes

Next, we need to copy the current (default) themes to that directory, since Redmine won’t be able to start up if the default theme isn’t available in the correct version.

In order to do this, we must first ensure that your container is running:

sudo systemctl start redmine

Now we can find out the container ID of the running redmine container:

uli:/var/lib/redmine$ docker container ps | grep redmine
ae4de10d0b41        bitnami/redmine:latest    "/app-entrypoint.sh …"   30 minutes ago      Up 30 minutes   0.0.0.0:3718->3000/tcp   redmine_redmine_1
c231d11c48e9        bitnami/mariadb:latest    "/entrypoint.sh /run…"   30 minutes ago      Up 30 minutes   3306/tcp redmine_mariadb_1

From these lines, you need to select the line that says redmine_redmine_1 at the end. The one that lists redmine_mariadb_1 at the end is the database container and we don’t need that one for this task.

From that line, copy the first column – this is the container ID – e.g. ae4de10d0b41 in this example.

Now we can copy the default theme folder:

docker cp ae4de10d0b41:/opt/bitnami/redmine/public/themes /var/lib/redmine/themes

Now copy your custom theme (e.g. the a1 folder) to /var/lib/redmine/themes.

The next step is to fix the permissions. The Bitnami container uses the user with UID 1001, so we need to change the owner to that. Repeat this every time you change something in the themes directory:

sudo chown -R 1001:1001 /var/lib/redmine/themes

At this point we need to edit the docker-compose config (in /var/lib/redmine/docker-compose.yml) to mount /var/lib/redmine/themes in the correct directory. This is pretty easy: Just add - '/var/lib/redmine/themes:/opt/bitnami/redmine/public/themes' to the volumes section of the redmine container.

The finished config file will look like this:

version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - '/var/lib/redmine/mariadb_data:/bitnami'
  redmine:
    image: 'bitnami/redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - REDMINE_EMAIL=me@gmail.com
      - SMTP_HOST=smtp.gmail.com
      - SMTP_PORT=25
      - SMTP_USER=me@gmail.com
      - SMTP_PASSWORD=yourGmailPassword
    ports:
      - '3718:3000'
    volumes:
      - '/var/lib/redmine/redmine_data:/bitnami'
      - '/var/lib/redmine/themes:/opt/bitnami/redmine/public/themes'
    depends_on:
      - mariadb

Now you can restart Redmine:

sudo systemctl restart redmine

and set your new theme by selecting it in Administration -> Settings -> Display.

Posted by Uli Köhler in Container, Docker, Linux, Project management

Fixing ScanaStudio error while loading shared libraries: libftd2xx.so: cannot open shared object file on Ubuntu

Problem:

You want to start IkaLogic ScanaStudio, but you see the following error message:

./ScanaStudio: error while loading shared libraries: libftd2xx.so: cannot open shared object file: No such file or directory

Solution:

You need to download the D2XX drivers from the FTDI D2XX drivers page.

Look for your architecture in the columns (usually x64 (64-bit), which is the same as x86_64) and click the link in the Linux row.

This will download a file named something like libftd2xx-x86_64-1.4.8.gz.

Now you can extract the archive using

tar xvf libftd2xx-*

Now we can copy the shared object file to the IkaLogic ScanaStudio directory:

cp release/build/libftd2xx.so.* ~/Ikalogic/ScanaStudio/libftd2xx.so

Now you can start ScanaStudio:

cd ~/Ikalogic/ScanaStudio
./ScanaStudio
Posted by Uli Köhler in Electronics, Linux

Puppeteer: Get text content / inner HTML of an element

Problem:

You want to use puppeteer to automate testing a webpage. You need to get either the text or the inner HTML of some element, e.g. of

<div id="mydiv">
</div>

on the page.

Solution:

// Get inner text
const innerText = await page.evaluate(() => document.querySelector('#mydiv').innerText);

// Get inner HTML
const innerHTML = await page.evaluate(() => document.querySelector('#mydiv').innerHTML);

Note that .innerText includes the text of sub-elements. You can use the complete DOM API inside page.evaluate(...). You can use any CSS selector as an argument for document.querySelector(...).

Posted by Uli Köhler in Javascript, Puppeteer

Fixing requests session TypeError: __init__() got an unexpected keyword argument ‘headers’

Problem:

You are trying to initialize a Python requests session using a custom set of HTTP headers like this:

s = requests.Session(headers={
    "User-Agent": "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0"
})

but you only see this stacktrace:

File "RequestsSession.py", line 30, in translate
    "User-Agent": "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0"
TypeError: __init__() got an unexpected keyword argument 'headers'

Solution:

You can’t use the headers=... argument in the requests.Session(...) constructor.

Use s.headers.update({...}) instead:

s = requests.Session()
s.headers.update({
    "User-Agent": "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:34.0) Gecko/20100101 Firefox/34.0"
})
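
The headers configured this way are sent automatically with every request made through the session. A quick way to verify (example.com is just a placeholder URL):

# Uses the session s configured above
response = s.get("https://example.com")
print(response.request.headers["User-Agent"])  # prints the custom User-Agent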

 

Posted by Uli Köhler in Python

How to fix ModuleNotFoundError: No module named ‘google.cloud.iam’

Problem:

You want to run a Python script that uses one of the Google Cloud Python APIs but you get this error message:

ModuleNotFoundError: No module named 'google.cloud.iam'

Solution:

Reinstall any google cloud package using pip:

sudo pip install --upgrade google-cloud-storage

or

sudo pip3 install --upgrade google-cloud-storage

That will also reinstall the relevant google.cloud.iam module.

After that, re-run your script. If that didn’t work, try to install --upgrade some other google-cloud-* package, especially the packages you actually use in your script.

 

Posted by Uli Köhler in Cloud, Python

Fixing numpy.distutils.system_info.NotFoundError: No lapack/blas resources found on Ubuntu or Travis

Note: If you are on Windows, you cannot install scipy using pip! Follow this guide instead: https://www.scipy.org/install.html. This blog post is only for Linux-based systems!

When building some of my libraries on Travis, I encountered this error during

sudo pip3 install numpy scipy --upgrade
numpy.distutils.system_info.NotFoundError: No lapack/blas resources

Solution

Install lapack and blas:

sudo apt-get -y install liblapack-dev libblas-dev

In most cases you will then get this error message:

error: library dfftpack has Fortran sources but no Fortran compiler found

Fix that by installing gfortran:

sudo apt-get install -y gfortran

In Travis, you can do it like this in .travis.yml:

before_install:
    - sudo apt-get -y install liblapack-dev libblas-dev gfortran
Posted by Uli Köhler in Build systems, Linux, Python

Easily compute & visualize FFTs in Python using UliEngineering

UliEngineering is a mixed data analytics library in Python – one of the utilities it provides is an easy-to-use package to compute FFTs. In contrast to other packages, this library is oriented towards practical use cases and allows you to do the FFT in only one line of code! No deep knowledge of math is required.

How to install UliEngineering

UliEngineering is a Python 3 only library. Install using pip:

sudo pip3 install -U UliEngineering

Getting started

First, we’ll generate some test data. See this previous post for more details on how to generate sinusoid test data:

from UliEngineering.SignalProcessing.Simulation import *

# Generate test data: 100 Hz + 400 Hz tone
data = sine_wave(frequency=100.0, samplerate=1000, amplitude=1.) \
       + sine_wave(frequency=400.0, samplerate=1000, amplitude=0.5)

The test data consists of a 100 Hz sine wave plus a 400 Hz sine wave (with half the amplitude). That signal is sampled with a sample rate of 1000 Hz.

Now we can compute & visualize the FFT using matplotlib:

# Compute FFT. Use the same samplerate 
# NOTE: Window defaults to "blackman"!
from UliEngineering.SignalProcessing.FFT import compute_fft
fft = compute_fft(data, samplerate=1e3)

# Plot
from matplotlib import pyplot as plt
plt.style.use("ggplot")

plt.gcf().set_size_inches(10, 5) # Use (20, 10) to get a larger plot
plt.plot(fft.frequencies, fft.amplitudes)
plt.xlabel("Frequency")
plt.ylabel("Amplitude")


compute_fft(data, samplerate=1e3) returns an FFT result object. It performs an FFT the size of the input (i.e. since data is a length-1000 array, the FFT will be of size 1000).

fft.frequencies is an array of frequencies (in Hz), corresponding to the values in fft.amplitudes. You can also use fft.angles to get relative angles in degrees, but that is not being covered in this blogpost.

As you can see in the plot shown above, the maximum frequency that can be detected is always half the samplerate, i.e. for our samplerate of f_s = 1000 Hz it is 500 Hz. See this more math-centric FFT explanation if you want to know more details.

Internally, compute_fft() performs this computation:

2 \cdot \frac{\text{abs}\left(\text{FFT}(\text{data} \cdot \text{Window})\right)}{\text{len(data)}}
  • 2 is a correction factor that takes into account that we throw away the latter half of the raw FFT result (since we’re doing a real FFT)
  • \frac{1}{\text{len(data)}} normalizes the FFT results so they are independent of the data length (i.e. if you pass a longer sample of the same sine wave you will still get the same result)
  • \text{abs}\left(\cdots\right) Converts the complex phase-aware result of the FFT to a spectrum which is easier to read & visualize.
  • Window (which defaults to blackman) is the window function applied to the data to alleviate spectral leakage effects at the beginning and the end of the dataset. See wikipedia on window functions for more details. UliEngineering currently offers this list of window functions (see the selection sketch after this list):
    • blackman
    • bartlett
    • hamming
    • hanning
    • kaiser (Parameter is fixed to 2.0)
    • none
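
Assuming the window can be selected via a keyword argument of compute_fft() (the parameter name window= used below is an assumption – check the UliEngineering documentation for the exact spelling), selecting e.g. a Hamming window might look like this:

from UliEngineering.SignalProcessing.FFT import compute_fft
# NOTE: the window= parameter name is an assumption, see above
fft = compute_fft(data, samplerate=1e3, window="hamming")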

Selecting frequency ranges

Using the UliEngineering API, selecting a frequency range of the FFT is trivially easy: Just use fft[lowfreq:highfreq]. You can use fft[lowfreq:] to select everything starting from lowfreq or use fft[:highfreq] to select everything up to highfreq.

from UliEngineering.SignalProcessing.FFT import compute_fft
fft = compute_fft(data, samplerate=1e3)

# Select frequency range: 50 to 200 Hz
fft = fft[50.0:200.0]

# Plot
from matplotlib import pyplot as plt
plt.style.use("ggplot")

plt.gcf().set_size_inches(10, 5) # Use (20, 10) to get a larger plot
plt.plot(fft.frequencies, fft.amplitudes)
plt.xlabel("Frequency")
plt.ylabel("Amplitude")
plt.savefig("/ram/fft-frequency-range.svg")

Extract amplitude & angle at a certain frequency

By using [frequency], i.e. the getitem operator with a single value, you get an FFTPoint() object containing the frequency, amplitude and relative angle for a given frequency. The library automatically selects the closest FFT bucket, so even if your FFT does not have a bucket for that specific frequency, you will get sensible results.

from UliEngineering.SignalProcessing.FFT import compute_fft
fft = compute_fft(data, samplerate=1e3)

# Show value at a certain frequency
print(fft[30]) # FFTPoint(frequency=30.0, value=2.91e-08, angle=0.0)
print(fft[100]) # FFTPoint(frequency=100.0, value=0.419, angle=0.0)

Short FFTs for long data

Using compute_fft(), if we have an extremely long data array, this means we’ll compute an extremely long FFT. In many cases, this is not desirable and you want to compute a fixed-size FFT (most often a power-of-two FFT, e.g. 1024, 2048, 4096 etc).

Let’s assume we have some much longer test data (generated like above) and want to compute a size-1024 FFT on it:

from UliEngineering.SignalProcessing.FFT import *
fftx, ffty = simple_serial_fft_reduce(data, samplerate=1e3, fftsize=1024)

Plotting the data like above yields a result which looks almost exactly like our compute_fft() plot from before – just as we expected.

simple_serial_fft_reduce() takes care of all the magic and normalization for us, including partitioning the data into overlapping chunks, adding the FFTs and properly normalizing the result.

The naming convention is significant here:

  • simple_...._reduce means this is a variant with sensible defaults (simple) for using a reduction function (default: sum) on multiple FFTs.
  • serial means the individual FFTs are computed one after another; see the next section for the parallel variant.

Parallelizing the FFTs

If you have a huge dataset, you can use simple_parallel_fft_reduce() identically to simple_serial_fft_reduce():

from UliEngineering.SignalProcessing.FFT import *
fftx, ffty = simple_parallel_fft_reduce(data, samplerate=1e3, fftsize=1024)

However in most cases you want to initialize the executor manually so you can re-use it later:

from UliEngineering.SignalProcessing.FFT import *
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor() # No argument => use num_cpus threads
fftx, ffty = simple_parallel_fft_reduce(data, samplerate=1e3, fftsize=1024, executor=executor)

We can use a ThreadPoolExecutor() since scipy.fftpack (which UliEngineering uses to do the hard math) releases the Python GIL.

Note that due to the need to do a lot of housekeeping tasks, simple_parallel_fft_reduce() is much slower than simple_serial_fft_reduce() if you have a dataset so small that parallelization is not effective. My initial recommendation is to consider using the parallel variant only if the total execution time of the serial variant is larger than 0.5 s.
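
A minimal sketch of how you could apply that recommendation – time the serial variant once and only consider switching to the parallel variant if it takes longer than about 0.5 s:

import time
from UliEngineering.SignalProcessing.FFT import *

t0 = time.time()
fftx, ffty = simple_serial_fft_reduce(data, samplerate=1e3, fftsize=1024)
elapsed = time.time() - t0

if elapsed > 0.5:  # serial variant is slow for this dataset
    print(f"Serial FFT took {elapsed:.2f} s - consider simple_parallel_fft_reduce()")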

Posted by Uli Köhler in Data science, Mathematics, Python

Easily generate square/triangle/sawtooth/inverse sawtooth waveform data in Python using UliEngineering

In a previous post, I’ve detailed how to generate sine/cosine wave data with frequency, amplitude, offset, phase shift and time offset using only a single line of code.

This post extends this approach by showing how to generate square wave, triangle wave, sawtooth wave and inverse sawtooth wave data – still in only one line of code.

All of the parameters, including frequency, amplitude, offset, phase shift and time offset, apply to those functions as well – see the previous post for details and examples for those parameters.

We are using the UliEngineering library, more specifically the UliEngineering.SignalProcessing.Simulation package:

How to install UliEngineering

UliEngineering is a Python 3 only library. Install using pip:

sudo pip3 install -U UliEngineering

Square wave

from UliEngineering.SignalProcessing.Simulation import square_wave

data = square_wave(frequency=10.0, samplerate=10e3)

Triangle wave

from UliEngineering.SignalProcessing.Simulation import triangle_wave

data = triangle_wave(frequency=10.0, samplerate=10e3)

Sawtooth wave

from UliEngineering.SignalProcessing.Simulation import sawtooth

data = sawtooth(frequency=10.0, samplerate=10e3)

Inverse sawtooth wave

from UliEngineering.SignalProcessing.Simulation import inverse_sawtooth

data = inverse_sawtooth(frequency=10.0, samplerate=10e3)
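
Since the parameters from the previous post apply to these functions as well, you can combine them freely. A small sketch (assuming the same keyword arguments as sine_wave):

from UliEngineering.SignalProcessing.Simulation import sawtooth

# 10 Hz sawtooth centered around 2.0, swinging from 0.0 to 4.0
data = sawtooth(frequency=10.0, samplerate=10e3, amplitude=2.0, offset=2.0)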

Plotting code

This code was used to generate the plots for this post in Jupyter:

%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use("ggplot")

from UliEngineering.SignalProcessing.Simulation import square_wave

data = square_wave(frequency=10.0, samplerate=10e3)

# set_size_inches(20, 10) to make it even larger!
plt.gcf().set_size_inches(10, 5)
plt.plot(data, label="original")
plt.savefig("/dev/shm/square-wave.svg")
Posted by Uli Köhler in Data science, Mathematics, Python

Easily generate sine/cosine waveform data in Python using UliEngineering

In order to generate sinusoid test data in Python you can use the UliEngineering library, which provides easy-to-use functions in UliEngineering.SignalProcessing.Simulation:

How to install UliEngineering

UliEngineering is a Python 3 only library. Install using pip:

sudo pip3 install -U UliEngineering

Basic example

from UliEngineering.SignalProcessing.Simulation import sine_wave, cosine_wave
# Default: Generates 1 second of data with amplitude = 1.0 (swing from -1.0 ... 1.0)
sine = sine_wave(frequency=10.0, samplerate=10e3)
cosine = cosine_wave(frequency=10.0, samplerate=10e3)

Amplitude & offset

Use amplitude=0.5 to specify that the sine wave should swing between -0.5 and 0.5.

Use offset=2.0 to specify that the sine wave should be centered vertically around 2.0:

from UliEngineering.SignalProcessing.Simulation import sine_wave

data = sine_wave(frequency=10.0, samplerate=10e3, amplitude=0.5, offset=2.0)

Phaseshift example

You can specify the phase shift to use – for a 180° phase shift, just use phaseshift=180.0:

from UliEngineering.SignalProcessing.Simulation import sine_wave

original = sine_wave(frequency=10.0, samplerate=10e3)
shifted = sine_wave(frequency=10.0, samplerate=10e3, phaseshift=180.)

Time delay example

While this is functionally equivalent to phase offset, it is often convenient to specify the time delay in seconds instead of the phase shift in degrees:

from UliEngineering.SignalProcessing.Simulation import sine_wave

original = sine_wave(frequency=10.0, samplerate=10e3)
# Shifted signal is delayed 0.01s = 10 milliseconds compared to original
shifted = sine_wave(frequency=10.0, samplerate=10e3, timedelay=0.01)

Note that in the time domain, the signals appear to be shifted backwards when you use a positive timedelay value. This is in accordance with the delay naming, implying that the signal is delayed by that amount.

You can specify both a phase shift and a time delay, meaning that both will be applied (the resulting offsets are added).
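
For example, combining a 90° phase shift with an additional 5 ms time delay might look like this:

from UliEngineering.SignalProcessing.Simulation import sine_wave

original = sine_wave(frequency=10.0, samplerate=10e3)
# 90° phase shift plus an additional 0.005 s = 5 ms time delay
shifted = sine_wave(frequency=10.0, samplerate=10e3, phaseshift=90., timedelay=0.005)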

Plotting

If you want to debug your signals visually, this is the code that was used inside Jupyter to generate the plots shown above:

%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use("ggplot")

# Generate data
from UliEngineering.SignalProcessing.Simulation import sine_wave

original = sine_wave(frequency=10.0, samplerate=10e3)
shifted = sine_wave(frequency=10.0, samplerate=10e3, timedelay=0.01)

# set_size_inches(20, 10) to make it even larger!
plt.gcf().set_size_inches(10, 5)
plt.plot(original, label="original")
plt.plot(shifted, label="shifted")
plt.legend(loc=1) # Top right; add the legend before saving so it appears in the SVG
plt.savefig("/dev/shm/timedelay.svg")
Posted by Uli Köhler in Data science, Mathematics, Python

Easy zero crossing detection in Python using UliEngineering

In order to perform zero crossing detection in NumPy arrays you can use the UliEngineering library which provides an easy-to-use zero_crossings function:

How to install UliEngineering

UliEngineering is a Python 3 only library. Install using pip:

sudo pip3 install -U UliEngineering

Usage example:

from UliEngineering.SignalProcessing.Utils import zero_crossings
# To generate test data
from UliEngineering.SignalProcessing.Simulation import sine_wave

# Generate test data: 10 Hz 10ksps, 1 second (= 1D 10000 values NumPy array)
data = sine_wave(frequency=10.0, samplerate=10e3)

# Prints: [0, 500, 1000, 1500, ...]
print(zero_crossings(data))
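
The printed values look like sample indices (a 10 Hz sine wave at 10 ksps crosses zero every 500 samples). Assuming that, you can convert them to timestamps in seconds by dividing by the samplerate:

import numpy as np

samplerate = 10e3
crossing_times = np.asarray(zero_crossings(data)) / samplerate
print(crossing_times)  # e.g. [0.   0.05 0.1  0.15 ...]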

Thanks to Jim Brissom on StackOverflow for the original solution!

Posted by Uli Köhler in Data science, Mathematics, Python

How to set cv2.VideoCapture() image size in Python

Use cv2.CAP_PROP_FRAME_WIDTH and cv2.CAP_PROP_FRAME_HEIGHT in order to tell OpenCV which image size you would like.

import cv2

video_capture = cv2.VideoCapture(0)
# Check success
if not video_capture.isOpened():
    raise Exception("Could not open video device")
# Set properties. Each returns True on success (i.e. correct resolution)
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
# Read picture. ret is True on success
ret, frame = video_capture.read()
# Close device
video_capture.release()

Note that most video capture devices (like webcams) only support specific sets of widths & heights. Use uvcdynctrl -f to find out which resolutions are supported:

$ uvcdynctrl -f
Listing available frame formats for device video0:
Pixel format: YUYV (YUYV 4:2:2; MIME type: video/x-raw-yuv)
  Frame size: 640x480
    Frame rates: 30, 20, 10
  Frame size: 352x288
    Frame rates: 30, 20, 10
  Frame size: 320x240
    Frame rates: 30, 20, 10
  Frame size: 176x144
    Frame rates: 30, 20, 10
  Frame size: 160x120
    Frame rates: 30, 20, 10
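
To check which resolution was actually applied, you can read the properties back from the capture device before releasing it:

import cv2

video_capture = cv2.VideoCapture(0)
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
# Read back the properties to see which resolution the device actually uses
actual_width = video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)
actual_height = video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)
print(f"Capture resolution: {actual_width:.0f} x {actual_height:.0f}")
video_capture.release()
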
Posted by Uli Köhler in OpenCV, Python, Video

How to take a webcam picture using OpenCV in Python

This code opens /dev/video0 and takes a single picture, closing the device afterwards:

import cv2

video_capture = cv2.VideoCapture(0)
# Check success
if not video_capture.isOpened():
    raise Exception("Could not open video device")
# Read picture. ret is True on success
ret, frame = video_capture.read()
# Close device
video_capture.release()

You can also use cv2.VideoCapture("/dev/video0"), but this approach is platform-dependent. cv2.VideoCapture(0) will also open the first video device on non-Linux platforms.

In Jupyter you can display the picture using

from matplotlib import pyplot as plt

frameRGB = frame[:,:,::-1] # BGR => RGB
plt.imshow(frameRGB)
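
If you want to save the captured picture to a file instead, you can use cv2.imwrite. It expects the BGR frame exactly as returned by video_capture.read(), so no channel conversion is needed:

import cv2

# frame as captured by video_capture.read() above (BGR channel order)
cv2.imwrite("/tmp/webcam-picture.jpg", frame)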

 

Posted by Uli Köhler in OpenCV, Python, Video

How to build rav1e on Ubuntu

This will fetch and build the current git master of rav1e. The build process has been tested on Ubuntu 18.04 with rav1e git revision 49dcaada4.

sudo apt update
sudo apt -y install cargo git perl nasm cmake clang pkg-config
# Fetch
git clone https://github.com/xiph/rav1e.git
mv rav1e rav1e-git
cd rav1e-git
git submodule update --init
# Build libaom-av1
cmake aom_build/aom -DAOM_TARGET_CPU=x86_64 -DCONFIG_AV1_ENCODER=0 -DENABLE_TESTS=0 -DENABLE_DOCS=0 -DCONFIG_LOWBITDEPTH=1
make -j$(nproc)
# Build rav1e
cargo build --release
# Copy to parent directory
cp target/release/rav1e ..

After the build finishes, the rav1e executable is placed in the directory where you ran those commands. You can delete the rav1e-git folder afterwards.

Download the resulting binary for Ubuntu 18.04 x86_64 here

Posted by Uli Köhler in Linux, Video

GPG symmetric encryption: Passphrase on command line

Problem:

You want to use GnuPG’s --symmetric encryption, but instead of interactively entering the password you want to use a command line argument with the cleartext password.

Solution:

Use --batch --yes --passphrase <passphrase>:

gpg --symmetric --batch --yes --passphrase 12345 <input file>

Note that this is potentially insecure as it’s way easier to find out the command line parameters of running programs than to intercept the input of the interactive passphrase dialog. Therefore, use this strategy only if necessary.

Posted by Uli Köhler in Cryptography

Running Gitlab CE via docker behind a reverse proxy on Ubuntu

Similarly to my previous article about installing Redmine via docker behind a reverse proxy, this article details how to run GitLab CE via docker behind an nginx reverse proxy on Ubuntu. I am running an instance of Redmine and an instance of Gitlab on the same virtual server, plus tens of other services, so running GitLab standalone without a reverse proxy is not an option for me.

While the Gitlab CE docker container is nicely preconfigured for standalone use on a dedicated VPS, running it behind a reverse proxy is not supported and will lead to a multitude of error messages – in effect, requiring lots of extra work to get up and running.

Note that we will not set up GitLab for SSH access. This is possible using this setup, but usually causes more trouble than it is worth. See this article on how to store git https passwords so you don’t have to enter your password every time.

Installing Docker & Docker-Compose

# Install prerequisites
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Add docker's package signing key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add repository
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install latest stable docker stable version
sudo apt-get update
sudo apt-get -y install docker-ce
# Install docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod a+x /usr/local/bin/docker-compose

# Add current user to the docker group
sudo usermod -a -G docker $USER
# Enable & start docker service
sudo systemctl enable docker
sudo systemctl start docker

After running this shell script, log out of and back into the system in order for the docker group membership to take effect for your user.

Creating the directory & docker-compose configuration

We will install Gitlab in /var/lib/gitlab which will host the data directories and the docker-compose script. You can use any directory if you use it consistently in all the configs (most importantly, docker-compose.yml and the systemd service).

# Create directories
sudo mkdir /var/lib/gitlab

Next, we’ll create /var/lib/gitlab/docker-compose.yml.

There are a couple of things you need to change here:

  • Set gitlab_rails['gitlab_email_from'] and gitlab_rails['gitlab_email_display_name'] to whatever sender address & name you want emails to be sent from
  • Set the SMTP credentials (gitlab_rails['smtp_address'], gitlab_rails['smtp_port'], gitlab_rails['smtp_user_name'], gitlab_rails['smtp_password'] & gitlab_rails['smtp_domain']) to a valid SMTP server. In rare cases you also have to change the other gitlab_rails['smtp_...'] settings.
  • You need to change all 4 occurrences of gitlab.mydomain.de to your domain.
  • The ports configuration, in this case '9080:80', means that Gitlab will be mapped to port 9080 on the local PC. This port is chosen somewhat arbitrarily – as we will run Gitlab behind an nginx reverse proxy, it does not need to be any particular port (as long as you use the same port everywhere), but it must not be used by anything else. Leave 80 as-is and only change 9080 if required.
gitlab:
   image: 'gitlab/gitlab-ce:latest'
   restart: always
   hostname: 'gitlab.mydomain.de'
   environment:
     GITLAB_OMNIBUS_CONFIG: |
       external_url 'https://gitlab.mydomain.de'
       letsencrypt['enabled'] = false
       # Email
       gitlab_rails['gitlab_email_enabled'] = true
       gitlab_rails['gitlab_email_from'] = 'gitlab@mydomain.de'
       gitlab_rails['gitlab_email_display_name'] = 'My GitLab'
       # SMTP
       gitlab_rails['smtp_enable'] = true
       gitlab_rails['smtp_address'] = "mail.mydomain.de"
       gitlab_rails['smtp_port'] = 25
       gitlab_rails['smtp_user_name'] = "gitlab@mydomain.de"
       gitlab_rails['smtp_password'] = "yourSMTPPassword"
       gitlab_rails['smtp_domain'] = "mydomain.de"
       gitlab_rails['smtp_authentication'] = "login"
       gitlab_rails['smtp_enable_starttls_auto'] = true
       gitlab_rails['smtp_tls'] = true
       gitlab_rails['smtp_openssl_verify_mode'] = 'none'
       # Reverse proxy nginx config
       nginx['listen_port'] = 80
       nginx['listen_https'] = false
       nginx['proxy_set_headers'] = {
         "X-Forwarded-Proto" => "https",
         "X-Forwarded-Ssl" => "on",
         "Host" => "gitlab.mydomain.de",
         "X-Real-IP" => "$$remote_addr",
         "X-Forwarded-For" => "$$proxy_add_x_forwarded_for",
         "Upgrade" => "$$http_upgrade",
         "Connection" => "$$connection_upgrade"
       }
   ports:
     - '9080:80'
   volumes:
     - './config:/etc/gitlab'
     - './logs:/var/log/gitlab'
     - './data:/var/opt/gitlab'

Setting up the systemd service

Next, we’ll configure the systemd service in /etc/systemd/system/gitlab.service.

Set User=... to your preferred user in the [Service] section. That user needs to be a member of the docker group. Also check if the WorkingDirectory=... is correct.

[Unit]
Description=Gitlab
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/gitlab
# Shutdown container (if running) when unit is stopped
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

After creating the file, we can enable and start the gitlab service:

sudo systemctl enable gitlab
sudo systemctl start gitlab

The output of sudo systemctl start gitlab should be empty. In case you see an error like

Job for gitlab.service failed because the control process exited with error code.
See "systemctl status gitlab.service" and "journalctl -xe" for details.

you can debug the issue using journalctl -xe and journalctl -e.

The first startup usually takes about 10 minutes, so grab at least one cup of coffee. You can follow the progress using journalctl -xefu gitlab. Once you see lines like

Dec 17 17:28:04 instance-1 docker-compose[4087]: gitlab_1  | {"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"duration":28.82,"view":22.82,"db":0.97,"time":"2018-12-17T17:28:03.252Z","params":[],"remote_ip":null,"user_id":null,"username":null,"ua":null}

the startup is finished.

Now you can check if GitLab is running using

wget -O- http://localhost:9080/

(if you changed the port config before, you need to use your custom port in the URL).

If it worked, it will show some debug output. Since gitlab will automatically redirect you to your domain (gitlab.mydomain.de in this example) you should see something like

--2018-12-17 17:28:32--  http://localhost:9080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:9080... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://gitlab.mydomain.de/users/sign_in [following]
--2018-12-17 17:28:32--  https://gitlab.mydomain.de/users/sign_in
Resolving gitlab.mydomain.de (gitlab.mydomain.de)... 35.198.165.121
Connecting to gitlab.mydomain.de (gitlab.mydomain.de)|35.198.165.121|:443... failed: Connection refused.

Since we have not setup nginx as a reverse proxy yet, it’s totally fine that it’s saying connection refused. The redirection worked if you see the output listed above.

Setting up the nginx reverse proxy (optional but recommended)

We’ll use nginx to proxy the requests from a certain domain (Using Apache, if you use it already, is also possible but it is outside the scope of this tutorial to tell you how to do that). Install it using

sudo apt -y install nginx

First, you’ll need a domain name with DNS configured. For this example, we’ll assume that your domain name is gitlab.mydomain.de! You need to change it to your domain name!

Now we’ll create the config file in /etc/nginx/sites-enabled/gitlab.conf. Remember to replace gitlab.mydomain.de by your domain name! If you use a port different from 9080, replace that as well.

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    server_name gitlab.mydomain.de;

    access_log /var/log/nginx/gitlab.access_log;
    error_log /var/log/nginx/gitlab.error_log info;

    location / {
        proxy_pass http://127.0.0.1:9080; # docker container listens here
        proxy_read_timeout 3600s;
        proxy_http_version 1.1;
        # Websocket connection
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    listen 80;
}

Now run sudo nginx -t to test if there are any errors in the config file. If everything is alright, you’ll see

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Once you have fixed all errors, if any, run sudo service nginx reload to apply the configuration.

We need to set up a Let’s Encrypt SSL certificate before we can check if Gitlab is working:

Securing the nginx reverse proxy using Let’s Encrypt

First we need to install certbot and the certbot nginx plugin in order to create & install the certificate in nginx:

sudo apt -y install python3-certbot python3-certbot-nginx

Fortunately certbot automates most of the process of installing & configuring SSL and the certificate. Run

sudo certbot --nginx

It will ask you to enter your email address, agree to the terms of service, and decide whether you want to receive the EFF newsletter.

After that, certbot will ask you to select the correct domain name:

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: gitlab.mydomain.de
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):

In this case, there is only one domain name (there will be more if you have more domains active on nginx!).

Therefore, enter 1 and press enter. certbot will now generate the certificate. In case of success you will see an output including a line like

Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/gitlab.mydomain.de.conf

Now it will ask you whether to redirect all requests to HTTPS automatically:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 

Choose Redirect here: Type 2 and press enter. Now you can log in to GitLab and finish the installation.

You need to renew the certificate every 3 months for it to stay valid, and run sudo service nginx reload afterwards to use the new certificate. If you fail to do this, users will see certificate expired error messages and won’t be able to access Gitlab easily! See this post for details on how to mostly automate this process!

Setting up Gitlab

Now you can open a browser, navigate to your domain and have a first look at your new GitLab installation. It will ask you to set a new password for the root user.

Set the new password and then log in with the username root and your newly set password.

After that, open the admin area by clicking the wrench icon in the purple navigation bar at the top.

In the navigation bar on the left, click on Settings (it’s at the bottom – you need to scroll down) and then click on General.

Click the Expand button to the right of Visibility and access controls. Scroll down until you see Enabled Git access protocols and select Only HTTP(S) in the combo box.

Then click the green Save changes button.

Since we have disabled SSH access (which we didn’t set up in the first place), you can now use GitLab. A good place to start is to create a new project and try checking it out. See this article on how to store git https passwords so you don’t have to enter your git password every time.

Note: If GitLab doesn’t send emails, check config/gitlab.rb, search for smtp and, if necessary, fix the SMTP settings there. After that, run sudo systemctl stop gitlab && sudo systemctl start gitlab.

Posted by Uli Köhler in Container, Docker, git, nginx, Version management

How to disable Let’s Encrypt in the Gitlab CE docker image

Problem:

You want to run the Gitlab CE docker image, but since you want to run it together with other services behind a reverse proxy, you see an error message like this:

gitlab_1  | letsencrypt_certificate[gitlab.mydomain.com] (letsencrypt::http_authorization line 3) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 20) had an error: RuntimeError: [gitlab.mydomain.com] Validation failed for domain gitlab.mydomain.com

Solution:

Add

letsencrypt['enabled'] = false

to GITLAB_OMNIBUS_CONFIG. See this file on GitHub for more Let’s Encrypt-related configs you can add.

In docker-compose.yml it could look like this:

gitlab:
   image: 'gitlab/gitlab-ce:latest'
   restart: always
   hostname: 'gitlab.mydomain.com'
   environment:
     GITLAB_OMNIBUS_CONFIG: |
       external_url 'https://gitlab.mydomain.com'
       letsencrypt['enabled'] = false
   ports:
     - '7080:80'
     - '1022:22'
   volumes:
     - '/var/lib/gitlab/config:/etc/gitlab'
     - '/var/lib/gitlab/logs:/var/log/gitlab'
     - '/var/lib/gitlab/data:/var/opt/gitlab'

 

Posted by Uli Köhler in Container, Docker

Tips on how to use Redmine for your project

Having used Redmine for almost 10 years for a multitude of projects and with widely varying team sizes, here are my practical tips on how you should use it for your project:

Super-projects

Create one super-project of which all projects are subprojects. This allows you to use all project management features like Gantt charts, milestones etc. in the super-project.

Example project structure

  • My Company Inc (Super-project)
    • R&D
      • Migration to cloud computing
      • Machine learning research
    • Operations
      • Customer X
      • Customer Y
    • Management

By opening the ticket list for the super-project, you see all the tickets of all sub-projects – and therefore have more tools at your disposal for high-level management of the company. For example, you could create milestones for business development in the super-project that depend on tasks in some R&D sub-project.

No public projects!

People often make the mistake of making a Redmine project public – maybe because that’s the default setting for new projects.

However, anyone who knows the project URL, e.g. https://redmine.mydomain.com/projects/test/, can see ALL THE DATA in the project by default, even though the project isn’t listed in the search bar project list.

Solution: Make all projects non-public: As admin, go to the project, go to the Settings tab, uncheck the Public checkbox and click Save. Unless, of course, you actually want all the data to be publicly available via Redmine…

Avoid multiple issue trackers

Companies that grew organically to their current size often have separate systems that fulfill mostly the same purpose but are not interconnected – and are used by separate subgroups of the personnel.

Typical example: (Project) management uses Redmine while the developers use Gitlab for source-code-oriented bug management and only interface to Redmine to communicate with the project managers.

While this constellation will work reasonably well even over long time periods, it inevitably leads to an unhealthy spread of information and, as a consequence, inefficiency in looking for information. Some low-level-development discussion or documentation will end up in Redmine whereas some project-management-related discussions inevitably end up in Gitlab.

If at all possible, try to encourage the use of one system and discourage the use of the other for any purpose that overlaps. Try to actively work with people opposed to that policy to find solutions on how to adapt their known process in the new system.

Enable support for incoming emails

Most Redmine setups are notoriously hard to use on a smartphone – the UI is not really optimized for mobile use.

In order to facilitate the use on mobile or other devices, one easy way is to allow the user to answer to notification emails, with their response being posted as an update to the ticket. While this is somewhat hard to set up, a couple of hours by a skilled admin or expert will most certainly get you a couple of years of maintenance-free updates by incoming emails.

Note that this does not cover the use case of creating new tickets or otherwise interfacing with Redmine, but in practice, people who are currently mobile-only will either just update existing tickets, or whatever they intend to do is sufficiently important that they will deal with the suboptimal mobile UI that Redmine currently provides.

While there are a few Redmine mobile apps out there and it is certainly worth a try at some point, I have not had great success with them in past projects.

Use (parts of) the wiki to collect unstructured information

Many companies lack a structured and searchable way of collecting unstructured information. What about the SSH command the admin uses to connect to some server? Everyone who has worked in R&D knows that perfect-documentation-land is pure utopia. While actually writing top-down documentation for all the tasks at hand might work for some time, it is mostly only used to describe what to do, whereas details about how to do it are quite easily omitted.

Moreover, documentation is often not in sync with the real world, especially if the people who originally wrote it have moved on to other projects or even other jobs. Don’t make the mistake of assuming that you have any way of enforcing the discipline to write & keep in sync technical documentation down to the command- or component-level over a long term. Initial success regarding that matter is far worse in predicting the long-term success of documentation projects than most project managers would admit – and it’s debatable if good-but-horribly-outdated-information is of any use in a multi-year project.

If you don’t use the Redmine wiki for any other process, try to push people towards using it for documenting what they did, even if it’s only a prototype command line script or something that doesn’t work out later (you can delete it if it serves no purpose anymore).

If, however, you use the wiki for other documentation, just reserve a certain section, e.g. any page accessible from a special Documentation page, for structured information and create pages for different types of unstructured information.

Crying Wolf: Don’t overuse Urgent and Immediate priorities

Everyone who has worked in the technology industry knows that some project managers appear to have the uncanny ability to always have one more Urgent task up their sleeve, no matter how many you have already fixed. Moreover, there are so many Urgent and Immediate tasks that no-one really knows what these words mean any more – especially since you can’t quite remember the last time you fixed a Low-priority task.

When you introduce the use of an advanced issue tracking tool, you have one more chance of breaking out of that vicious circle. It’s a productive policy to enforce a rule that project managers may only have one Immediate and three Urgent tasks open at any one time. This enables developers to actually prioritize and avoids the risk of getting yelled at by some manager who didn’t use the tools at hand properly, in effect imposing wrong or unclear constraints on the developers.

Always have someone assigned & Order by priority

In practical projects, one phenomenon that often happens in conjunction with issue trackers is ticket orphanisation. Here’s how it happens.

  • Mgmt creates a ticket with high priority and assigns a developer
  • Developer fixes the issue and un-assigns himself, notifying management via the issue update text that someone should be assigned to test the fix properly.
  • Mgmt forgets to actually assign someone (e.g. due to holiday or more urgent tasks) and the ticket enters the endless list of unclosed-but-not-that-important tickets that have (almost) been fixed but some small part is still missing.

Unless you are one of the lucky few who manage to close and properly & periodically handle all tickets over a long period of time, the small lapse of not having anyone assigned for a short period of time greatly increases the likelihood of a ticket not being properly finished.

Here’s how it should work:

  • Mgmt creates a ticket with high priority and assigns a developer
  • Developer fixes the issue and assigns management, telling them in the update text that the next step is to find some tester capable of testing it. The developer sets the ticket to an appropriate priority level.
  • Mgmt will always see the tickets on the My tickets page.

In other words, there must be someone assigned at any one time. While this does not automatically solve issues of people leaving the project, or being on holiday or sick, this is how it should work:

  • If the next step to do on this ticket will be done by you, leave yourself assigned
  • If there are multiple next steps that should happen in parallel, create sub-tickets and assign accordingly
  • If the next task is unknown to you, assign to mgmt or head developer and note that the next task needs to be determined
  • If you know the next sub-task but don’t know who should do it, assign to mgmt and note that X needs to be done and they shall determine by whom (this represents our example listed above)
  • If the task has been assigned to you and you don’t know what to do, assign to the head developer or mgmt and note, as clearly as possible, why you are unsure about what to do. Frequent occurrence of this case typically indicates a failure of mgmt or the development head to communicate the tasks in a developer-compatible way.

Independently of the aforementioned decisions:

  • Use sub-tickets heavily: Any time you see that something can be split into multiple tasks, create sub-tickets accordingly, even if you’re going to do all of those tasks anyway
  • Don’t create sub-issues for sub-tasks that are projected to take less than 10 minutes
  • Always create sub-issues for sub-tasks that are projected to take more than 60 minutes
  • It is the responsibility of mgmt to look through tasks of temporarily unavailable project members and re-assign issues that can’t wait accordingly
  • It is the responsibility of mgmt to look through tasks of permanently unavailable project members and re-assign all issues as a matter of principle

A good way of conceptualizing sub-tickets is as local mini-milestones. While some tickets might have a scope larger than a typical milestone, sub-tickets should primarily focus on conveying relationship information to developers, while milestones should primarily focus on conveying information to mgmt.

Given that process, anyone should work primarily on the My tickets list, not so much on the project tickets list. The project tickets list is primarily for the mgmt to get an overview and developers should only use it to get some extra information or cross-reference some tasks.

The most efficient (& most obvious) way of using Redmine is to work through the My tickets list starting from the high priority tasks. Remember to regularly check if any higher priority tasks popped up.

One of the main tasks of the project manager is to ensure that the process I described automatically leads to people working on milestones, with some high-priority tasks outside the milestone interspersed in between. Proper communication to the developers ensures that while everyone is working based on the aforementioned scheme, the common goal of reaching the milestone is clearly defined in the minds of the developers.

Enforce regular updates by using Microtasks

A typical failure mode of software and hardware project management is that management thinks top-down and fails to enforce a process that thinks all the way to the bottom.

Regarding issue trackers, this systematic problem is often manifested in there being only coarse macro-tasks that, even if properly assigned to developers, don’t allow anyone to accurately assess the current state of the project (e.g. because there is no way of assessing a 0%…100% progress of an unclearly defined wide-scope task) and often lead to developers not giving regular updates.

Depending on the management style, irregular updates regarding specific parts of the development often lead to procrastination of tasks on the side of developers, greatly reducing their effectiveness.

All of those failure modes can be avoided by using smaller, more strictly-defined micro-tasks. While this solution is obvious from a conceptual standpoint, the practical implementation is most often somewhere between utterly disastrous and not-viable-in-the-long term. The most common issues are:

  • Management does not have insight into the “low level” of development and therefore can’t assess how to split up the tasks beyond a certain level
  • Developers are said to not have enough insight into the bigger picture of the project and therefore it’s not considered feasible to let them split up tasks themselves
  • Management considers the overhead of creating and micro-managing micro-tasks to be too large and therefore considers this process inefficient

However, most of these concerns are caused by a combination of inadequate processes and fundamental misconceptions about micro-tasks:

  • Management often tends to raise artificial barriers between them and the developers – such as not considering the possibility that one who is assigned to perform a task should know best how to split the task.
  • Many processes operate under a no-error paradigm regarding the issues (i.e. there shall be virtually no mistakes such as wrong splitting of tasks). Yet under real circumstances, it is almost always more effective in both short- and long-term to embrace the possibility of incorrectly-handled tickets on every level just like the industry has accepted the existence of software bugs. Embracing a certain level of errors being made also significantly reduces certain types of undesirable behavior, the most prominent one being the affinity to hide one’s own errors once one has realized they have been made, in order to avoid social repercussions (such as being yelled at), even if such repercussions are purely hypothetical in practice. This strategy also often eases the tendency to raise artificial barriers as outlined above, because the underlying reason for those barriers is often based in some conflict of competence and therefore a battle for authority.
  • The overhead issue can easily & effectively be taken care of by simply defining the desirable size of microtasks, such as listed in the last chapter.
    • Never create a micro-task if you assume the task will take less than 10 minutes
    • Always create a micro-task if you assume the task will take more than 60 minutes
    • In between those figures, use your own judgment as a developer
  • Moreover, when using micro-tasks it is effective to make creating a micro-task as quick & easy as possible. In the worst case (assuming accurate projections), a developer will create one micro-task for every 10 minutes of development work. If some requirement, policy or process makes creating tasks inefficient for the developers, this overhead can quickly accumulate. However, this is rare in practice, as most technology-oriented developers who do not operate under strict & direct management supervision employ some sort of self-regulation and will create fewer micro-tasks if the overhead is sufficiently large.
  • Yet, the same self-regulation mechanism can under some circumstances also lead to too few micro-tasks being created. This can only be solved by active encouragement and communication from the management, including some level of give-and-take on both sides.

Since the management of larger companies often operates under the assumption that every subordinate shall be exchangeable to a certain extent – a concept that often prevails throughout the hierarchy – some find it hard to swallow that one developer might split a task into multiple micro-tasks whereas another one will treat it monolithically, based on differences in their projected timespan. Yet even given a large deviation between the projected time a task will take and the actual time spent on it (plus the fact that it’s much easier to judge the duration of a short task than of a long one), this remains without any practical consequence, because the forced-split threshold of 60 minutes is so small compared to the duration of the entire project. All remaining issues need to be handled by communication between management and developers anyway, such as telling a developer that he should have split this or that task after he has repeatedly failed to do so.

In addition to facilitating effective & automatic information flow between management and the developers, micro-tasks in combination with effective supervision by development heads or management tend to avoid many cases of getting stuck: A typical behavior for developers is to continue working on a technically challenging task, even if they are totally stuck dealing with the current issue.

My recommendation is what I call the 15-minute rule: Once you can’t get measurable progress on a problem within 15 minutes, you should try another task, or (if not possible) approach the issue from a different angle.
While this leaves significant room for interpretation and should in any case be treated more as a heuristic than as a rule, self-applying it has helped many inexperienced developers.

Micro-tasks deal with this issue in a project setting by allowing the management or the supervisory developer (depending on the project size) to periodically check all the currently active tasks with very short cycle times of less than an hour: If there has been no progress on a micro-task within one or two hours, one can expect that either the developer hasn’t split up the task properly or he is stuck and might benefit from assistance by the usually more experienced supervisor.

Without micro-tasks, the timespan after which one can expect feedback – or otherwise assume that a task is stuck – not only varies a lot depending on the individual task, but is also extremely long, sometimes weeks or months. This increases the likelihood that a lot of time is lost due to long periods of a developer being stuck on a task without measurable progress. In effect, micro-tasks add to the supervisory developer’s toolbox by providing a means to intervene before much valuable time is lost, while avoiding the work environment being perceived as surveillance-focused by the developers.

Ease-of-use

The way Redmine is set up in many companies effectively makes people, especially non-tech oriented people, unlikely to use Redmine to document their issues.

Example 1: Redmine only over VPN

For security reasons, Redmine is often only available over a virtual private network. Regarding usability, this has the effect that a user often has to log in through multiple layers of security (login to the VPN, then login to Redmine).

Moreover, the VPN client often requires a special setup that might not be possible on every device.

One particular case that I recall is a company where all project management tools were accessible only over VPN – and my VPN account could only handle one session at a time. Therefore, if I switched from my desktop to my notebook, the VPN couldn’t connect and I was unable to use Redmine.

Security is usually handled sufficiently well by recurring reviews by admins and/or external specialists – these reviews should also cover your website. My main recommendation regarding Redmine is to copy the project URLs (e.g. https://redmine.gridbox.de/projects/test) when you’re logged in and open them in Incognito mode (where you’re not logged in). If you see the login form, everything is alright for this project. If you see the project overview page, you have set the project to Public mode – see above for how to fix it.
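
If you want to automate this spot check, here is a minimal sketch (my own addition, not part of the original setup) using curl. It assumes the example project URL from above and that your Redmine instance answers anonymous requests to a private project with a redirect to the login page or an error status – the exact behavior may differ depending on your configuration:

#!/bin/bash
# Minimal sketch: test whether a Redmine project is reachable without being logged in.
# Replace the example URL below with your own project URL.
URL="https://redmine.gridbox.de/projects/test"
# Fetch only the HTTP status code of an anonymous request (no login cookies)
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$URL")
if [ "$STATUS" = "200" ]; then
    # Public projects typically answer an anonymous request with the overview page (200)
    echo "WARNING: $URL is reachable without login - check the project's Public setting"
else
    # Private projects typically redirect to the login form (3xx) or return 401/403
    echo "OK: $URL returned HTTP $STATUS for an anonymous request"
fi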

In general, security should not be conceptualized as an optimization goal (as is often done by technology-affine people) but rather as a tradeoff between certain types of risk and usability. In that context, usability can be modeled as the likelihood that people will use a tool or process voluntarily – or, equivalently, as the amount of effort you have to put in so that a certain group of people uses a certain tool or process to a given extent.

This implies that any educated discussion about security needs to define some model of the risk that is being talked about, and of the people who are supposed to use the tools. In many cases, the mistake made especially by technologically affine people is to base the usability model on their own aptitude & capability of using extremely suboptimal tools effectively, while their risk model is sometimes influenced by a desire to do better than the current state of the market. While both these properties are fundamentally desirable in almost every respect, practical businesses will have to make the fundamental security-usability tradeoff at some point, explicitly or implicitly, whether they like it or not.

Example 2: Too few privileges in Redmine

Often, Redmine admins strip users of all but the most obviously required privileges for all the subprojects. This means that users often can’t edit descriptions and can’t close or reopen issues. Even if they can contact the admin to e.g. re-open an erroneously closed issue, users will often not do so, based on some level of fear of doing something they can’t revert themselves – and the social shame of having to contact the admin about it, even if the admin does not seem to judge them for their behavior.

The only action I recommend disallowing is deleting (as opposed to closing) an issue, as this is very hard to recover from. There are certain projects, of course, where certain (low-level) users factually shouldn’t be able to do anything (e.g. marketing might just be able to read R&D info but not contribute to it), but in many cases such restrictions are applied to all projects alike and inhibit effective use, especially by non-tech personnel, in projects with more than ~3 active participants.

Example 3: Too strict policies for Redmine

If one actively enforces a specific format, length or any other policy for creating or updating tickets or documentation, this will not only create a significant entry barrier for new users (who will often refrain from using project management tools altogether except to the level that is actively enforced), but also create a hard wall of separation between the veteran users (who know the rules, or, as some would say, define the interpretation of the rules) and everyone else – with everyone else being more or less afraid of not being able to follow the rules and therefore refraining from using the tools beyond the minimal enforced level.

Additionally, if you impose policies on the format of some documentation, don’t forget to consider the case that some relevant information might not fit into a predefined scheme. If you are too aggressive in enforcing the policy, important information might be lost as it is dropped from updates. Therefore, you should always include an additional-information field in any predefined format, even if it is often misused.

Example: A company enforces that development tickets may only be closed by git commits. Yet they fail to consider that many bugs are either unintentionally fixed via a secondary ticket, without the other developer being aware of the primary ticket – or are based on a misconception and are therefore not really a fixable bug.

The ostensibly good intention of being able to map all tickets to git commits might, therefore, result in a lot of tickets either not being closed at all, or being handled in some other far-from-ideal way as people look for a workaround for the policy. The corporate process of fixing the policies once such cases are known and documented is often both too slow and too complex for users compared to a quick workaround. However, in the long term, the unknown status of a large portion of your bug tracker will almost certainly come back to haunt you!

Think of it this way: Your bug tracker might suffer the same fate as many of the classic internet forums ubiquitous since the early 2000s: if you ask a question and only get an answer like

Use the search function! What you are looking for has been answered SO MANY TIMES on this forum!

by some veteran user, this will likely be the last question you ask voluntarily on said forum.

Why is that? Because not only is it not as easy to use the search function and actually get meaningful results out of it as said veteran implies (the new user usually doesn’t know the right question to ask, i.e. what terms to look for), but said veteran could just as easily have spent the 10 seconds of his time to give you one or two of the seemingly ubiquitous answer-to-your-question links to at least get you started.

Example 4: Enforced deadlines

One project I recall temporarily had the policy that every ticket shall have a due date set to at most 4 weeks in the future, in order to be reminded of it via Gantt charts etc. This didn’t work out, as constant needless update emails to extend the due date (e.g. for a postponed issue) annoyed all the users.

My recommendation is to use due dates only where due dates are functionally implied and, at the same time, functionally required by the project.
Example: If the software needs to be delivered in 6 months, maybe the core feature implementation tasks should be due in 3 months.

In general, it is advisable to use milestones instead of due dates to keep a lean development process and still be able to keep close track of the progress. If you e.g. set due dates for tickets belonging to the next sprint, you could just as well have created a milestone for said sprint, and thereby keep close track of the progress while removing the often unrealistic dangling sword of due dates from the minds of the developers.

Think of it this way: The most effective way to motivate people in said context is for everyone to work towards a common goal (the milestone) instead of everyone fearing to miss a due date and get yelled at. Don’t come up with reasoning like “We don’t yell at my company”. Even if there is no explicitly negative reaction whatsoever, subconscious reactions to the negative motivation attempt will not make your goal easier to reach – or your project easier to manage.

Corollary

If you make tools difficult to use for people, people will not use the tools – or use them only to the extent that you actively enforce. Since you should have better things to do than enforcing the use of tools, try your best not to make them hard to use – and stop coming up with reasons that justify making them harder to use.

In practice, it is often sufficient to avoid punishing behavior that is incompatible with some policy and instead try to encourage & reward improvement. You could, for example, treat badly written issues with a lower priority until they are improved, but never make the mistake of trying to educate the users by dismissing them.

Have a demo project for all users

Often when you introduce new users to Redmine or you want to test out new features, a demo project can come in very handy. It allows you to test out and show features to users without real-world consequences even if you screw up something. Give all users permanent access to the demo project – so they can try out things as well if required.

Posted by Uli Köhler in Project management

How to automatically renew Let’s Encrypt certbot certs on Ubuntu

New: Our new post How to install automated certbot/LetsEncrypt renewal in 30 seconds features an updated procedure using systemd and an automated installer.

On Ubuntu, you can easily set up a daily job that tries to renew almost-expired Let’s Encrypt certificates.

Create /etc/cron.daily/renewcerts:

#!/bin/bash
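# Try to renew all Let's Encrypt certificates that are close to expiry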
certbot renew
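# Reload nginx so it picks up the renewed certificates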
service nginx reload

After that, run sudo chmod a+x /etc/cron.daily/renewcerts to make the script executable.

Now you should verify that the script would actually run:

run-parts --test -v /etc/cron.daily

should print, among other lines, this line:

/etc/cron.daily/renewcerts

IMPORTANT: You still need to run certbot renew manually every 1-2 months to check if there are any errors that might prevent certs from being renewed.

NOTE: Since the script calls service nginx reload, you need to ensure that your nginx config files are not left in a broken state for too long when you edit them. Use sudo nginx -t to check for errors after editing. Also note that if you make nginx config changes, the script might unintentionally apply them to your production HTTP/HTTPS server!
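
If you want the daily job itself to guard against reloading a broken configuration, one possible variant (my own suggestion, not part of the original script) is to run the config test first and only reload on success:

#!/bin/bash
# Variant of /etc/cron.daily/renewcerts:
# renew certificates, then reload nginx only if the config test passes
certbot renew
if nginx -t; then
    service nginx reload
else
    echo "nginx config test failed - not reloading" >&2
fi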

Posted by Uli Köhler in Cryptography, Linux