Fixing numpy.distutils.system_info.NotFoundError: No lapack/blas resources found on Ubuntu or Travis

Note: If you are on Windows, you cannot install scipy using pip! Follow this guide instead: https://www.scipy.org/install.html. This blog post is only for Linux-based systems!

When building some of my libraries on Travis, I encountered this error during:

sudo pip3 install numpy scipy --upgrade
numpy.distutils.system_info.NotFoundError: No lapack/blas resources

Solution

Install lapack and blas:

sudo apt-get -y install liblapack-dev libblas-dev

In most cases you will then get this error message:

error: library dfftpack has Fortran sources but no Fortran compiler found

Fix that by installing a Fortran compiler:

sudo apt-get install -y gfortran

In Travis, you can do it like this in .travis.yml:

before_install:
    - sudo apt-get -y install liblapack-dev libblas-dev gfortran
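
Alternatively, you can declare the packages via Travis' apt addon instead of a before_install step (a sketch assuming the standard Ubuntu build environment):

addons:
  apt:
    packages:
      - liblapack-dev
      - libblas-dev
      - gfortran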

Easily compute & visualize FFTs in Python using UliEngineering

UliEngineering is a mixed data analytics library in Python – one of the utilities it provides is an easy-to-use package to compute FFTs. In contrast to other packages, this library is oriented towards practical use cases and allows you to do the FFT in only one line of code! No deep knowledge of the math involved is required.

Getting started

First, we’ll generate some test data. See this previous post for more details on how to generate sinusoid test data:

from UliEngineering.SignalProcessing.Simulation import *

# Generate test data: 100 Hz + 400 Hz tone
data = sine_wave(frequency=100.0, samplerate=1000, amplitude=1.) \
       + sine_wave(frequency=400.0, samplerate=1000, amplitude=0.5)

The test data consists of a 100 Hz sine wave plus a 400 Hz sine wave (with half the amplitude). That signal is sampled with a sample rate of 1000 Hz.

Now we can compute & visualize the FFT using matplotlib:

# Compute FFT. Use the same samplerate 
# NOTE: Window defaults to "blackman"!
from UliEngineering.SignalProcessing.FFT import compute_fft
fft = compute_fft(data, samplerate=1e3)

# Plot
from matplotlib import pyplot as plt
plt.style.use("ggplot")

plt.gcf().set_size_inches(10, 5) # Use (20, 10) to get a larger plot
plt.plot(fft.frequencies, fft.amplitudes)
plt.xlabel("Frequency")
plt.ylabel("Amplitude")


compute_fft(data, samplerate=1e3) returns an FFT object that provides the frequencies, amplitudes and angles attributes used below. It performs an FFT the size of the input (i.e. since data is a length-1000 array, the FFT will be of size 1000).

fft.frequencies is an array of frequencies (in Hz), corresponding to the values in fft.amplitudes. You can also use fft.angles to get relative angles in degrees, but that is not covered in this blog post.

As you can see in the plot shown above, the maximum frequency that can be detected is always half the samplerate, i.e. for our samplerate of f_s = 1000\,\text{Hz} it is 500\,\text{Hz}. See this more math-centric FFT explanation if you want to know more details.

Internally, compute_fft() performs this computation (see the NumPy sketch after the list below for an illustration):

2 \cdot \frac{\text{abs}\left(\text{FFT}(\text{data} \cdot \text{Window})\right)}{\text{len(data)}}
  • 2 is a correction factor that takes into account that we throw away the latter half of the raw FFT result (since we’re doing a real FFT)
  • \frac{1}{\text{len(data)}} normalizes the FFT results so they are independent of the data length (i.e. if you pass a longer sample of the same sine wave, you will still get the same result)
  • \text{abs}\left(\cdots\right) converts the complex phase-aware result of the FFT to a spectrum which is easier to read & visualize.
  • Window (which defaults to blackman) is the window which is applied to the data to alleviate some mathematical effects at the beginning and the end of the dataset. See wikipedia on window functions for more details. UliEngineering currently offers this list of window functions:
    • blackman
    • bartlett
    • hamming
    • hanning
    • kaiser (Parameter is fixed to 2.0)
    • none
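
For illustration, here is a minimal NumPy sketch of the computation described above. This is an approximation of what compute_fft() does internally, not UliEngineering's actual code:

import numpy as np

def compute_fft_sketch(data, samplerate, window_func=np.blackman):
    # Window the data, compute the raw FFT and keep only the first half
    # (for real-valued input, the second half mirrors the first)
    n = len(data)
    fft_raw = np.fft.fft(data * window_func(n))
    half = n // 2
    amplitudes = 2.0 * np.abs(fft_raw[:half]) / n  # factor 2 & length normalization
    frequencies = np.fft.fftfreq(n, d=1.0 / samplerate)[:half]
    return frequencies, amplitudes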

Selecting frequency ranges

Using the UliEngineering API, selecting a frequency range of the FFT is trivially easy: Just use fft[lowfreq:highfreq]. You can use fft[lowfreq:] to select everything starting from lowfreq or use fft[:highfreq] to select everything up to highfreq.

from UliEngineering.SignalProcessing.FFT import compute_fft
fft = compute_fft(data, samplerate=1e3)

# Select frequency range: 50 to 200 Hz
fft = fft[50.0:200.0]

# Plot
from matplotlib import pyplot as plt
plt.style.use("ggplot")

plt.gcf().set_size_inches(10, 5) # Use (20, 10) to get a larger plot
plt.plot(fft.frequencies, fft.amplitudes)
plt.xlabel("Frequency")
plt.ylabel("Amplitude")
plt.savefig("/ram/fft-frequency-range.svg")

Extract amplitude & angle at a certain frequency

By using [frequency], i.e. the getitem operator with a single value, you get an FFTPoint() object containing the frequency, amplitude and relative angle for a given frequency. The library automatically selects the closest FFT bucket, so even if your FFT does not have a bucket for that specific frequency, you will get sensible results.

from UliEngineering.SignalProcessing.FFT import compute_fft
fft = compute_fft(data, samplerate=1e3)

# Show value at a certain frequency
print(fft[30]) # FFTPoint(frequency=30.0, value=2.91e-08, angle=0.0)
print(fft[100]) # FFTPoint(frequency=100.0, value=0.419, angle=0.0)
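
Conceptually, the closest-bucket selection works like the following sketch (a hypothetical illustration using NumPy, not the library's actual code):

import numpy as np
# Find the index of the FFT bucket closest to the requested frequency
requested_frequency = 30.0
index = np.argmin(np.abs(np.asarray(fft.frequencies) - requested_frequency))
print(fft.frequencies[index], fft.amplitudes[index])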

Short FFTs for long data

Using compute_fft(), an extremely long data array means we'll compute an extremely long FFT. In many cases this is not desirable, and you want to compute a fixed-size FFT instead (most often a power-of-two FFT, e.g. 1024, 2048 or 4096).

Let’s assume data is a long array of test data (generated as shown above) and we want to compute a size-1024 FFT on it:

from UliEngineering.SignalProcessing.FFT import *

# data: a long 1D NumPy array, e.g. generated as in the first section
fftx, ffty = simple_serial_fft_reduce(data, samplerate=1e3, fftsize=1024)

Plotting the result like above yields a plot which looks almost exactly like our compute_fft() plot before – just as we expected.

simple_serial_fft_reduce() takes care of all the magic and normalization for us, including partitioning the data into overlapping chunks, adding the FFTs and properly normalizing the result.
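
For illustration, here is a rough NumPy sketch of that chunked-FFT idea (Welch-style averaging). UliEngineering's actual implementation differs in details such as overlap handling and the reduction function:

import numpy as np

def serial_fft_reduce_sketch(data, samplerate, fftsize, overlap=0.5):
    # Partition the data into overlapping chunks, FFT each chunk and average
    step = int(fftsize * (1.0 - overlap))
    window = np.blackman(fftsize)
    accumulator = np.zeros(fftsize // 2)
    nchunks = 0
    for start in range(0, len(data) - fftsize + 1, step):
        chunk = data[start:start + fftsize] * window
        accumulator += 2.0 * np.abs(np.fft.fft(chunk)[:fftsize // 2]) / fftsize
        nchunks += 1
    frequencies = np.fft.fftfreq(fftsize, d=1.0 / samplerate)[:fftsize // 2]
    return frequencies, accumulator / nchunks  # normalize by chunk count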

The naming convention is significant here:

  • simple_...._reduce means this is a variant with sensible defaults (simple) for applying a reduction function (default: sum) to multiple FFTs.
  • serial means the individual FFTs are computed one after another – see the next section for the parallel variant.

Parallelizing the FFTs

If you have a huge dataset, you can use simple_parallel_fft_reduce() identically to simple_serial_fft_reduce():

from UliEngineering.SignalProcessing.FFT import *
fftx, ffty = simple_parallel_fft_reduce(data, samplerate=1e3, fftsize=1024)

However in most cases you want to initialize the executor manually so you can re-use it later:

from UliEngineering.SignalProcessing.FFT import *
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor() # No argument => default number of worker threads based on the CPU count
fftx, ffty = simple_parallel_fft_reduce(data, samplerate=1e3, fftsize=1024, executor=executor)

We can use a ThreadPoolExecutor() since scipy.fftpack (which UliEngineering uses to do the hard math) releases the Python GIL.

Note that due to the housekeeping overhead involved, simple_parallel_fft_reduce() is much slower than simple_serial_fft_reduce() if your dataset is so small that parallelization is not effective. My initial recommendation is to consider the parallel variant once the total execution time of the serial variant exceeds 0.5\,s.

Install:

sudo pip3 install git+https://github.com/ulikoehler/UliEngineering.git

Easily generate square/triangle/sawtooth/inverse sawtooth waveform data in Python using UliEngineering

In a previous post, I’ve detailed how to generate sine/cosine wave data with frequency, amplitude, offset, phase shift and time offset using only a single line of code.

This post extends this approach by showing how to generate square wave, triangle wave, sawtooth wave and inverse sawtooth wave data – still in only one line of code.

All of the parameters, including frequency, amplitude, offset, phase shift and time offset apply for those functions as well – see the previous post for details and examples for those parameters.

We are using the UliEngineering library, more specifically the UliEngineering.SignalProcessing.Simulation package:

Install

sudo pip3 install git+https://github.com/ulikoehler/UliEngineering.git

Square wave

from UliEngineering.SignalProcessing.Simulation import square_wave

data = square_wave(frequency=10.0, samplerate=10e3)

Triangle wave

from UliEngineering.SignalProcessing.Simulation import triangle_wave

data = triangle_wave(frequency=10.0, samplerate=10e3)

Sawtooth wave

from UliEngineering.SignalProcessing.Simulation import sawtooth

data = sawtooth(frequency=10.0, samplerate=10e3)

Inverse sawtooth wave

from UliEngineering.SignalProcessing.Simulation import inverse_sawtooth

data = inverse_sawtooth(frequency=10.0, samplerate=10e3)

Plotting code

This code was used to generate the plots for this post in Jupyter:

%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use("ggplot")

from UliEngineering.SignalProcessing.Simulation import square_wave

data = square_wave(frequency=10.0, samplerate=10e3)

# set_size_inches(20, 10) to make it even larger!
plt.gcf().set_size_inches(10, 5)
plt.plot(data, label="original")
plt.savefig("/dev/shm/square-wave.svg")

Easily generate sine/cosine waveform data in Python using UliEngineering

In order to generate sinusoid test data in Python you can use the UliEngineering library, which provides easy-to-use functions in UliEngineering.SignalProcessing.Simulation:

Install

sudo pip3 install git+https://github.com/ulikoehler/UliEngineering.git

Basic example

from UliEngineering.SignalProcessing.Simulation import sine_wave, cosine_wave
# Default: Generates 1 second of data with amplitude = 1.0 (swing from -1.0 ... 1.0)
sine = sine_wave(frequency=10.0, samplerate=10e3)
cosine = cosine_wave(frequency=10.0, samplerate=10e3)

Amplitude & offset

Use amplitude=0.5 to specify that the sine wave should swing between -0.5 and 0.5.

Use offset=2.0 to specify that the sine wave should be centered vertically around 2.0:

from UliEngineering.SignalProcessing.Simulation import sine_wave

data = sine_wave(frequency=10.0, samplerate=10e3, amplitude=0.5, offset=2.0)

Phaseshift example

You can specify the phase shift to use – for a 180° phase shift, just use phaseshift=180.0:

from UliEngineering.SignalProcessing.Simulation import sine_wave

original = sine_wave(frequency=10.0, samplerate=10e3)
shifted = sine_wave(frequency=10.0, samplerate=10e3, phaseshift=180.)

Time delay example

While this is functionally equivalent to phase offset, it is often convenient to specify the time delay in seconds instead of the phase shift in degrees:

from UliEngineering.SignalProcessing.Simulation import sine_wave

original = sine_wave(frequency=10.0, samplerate=10e3)
# Shifted signal is delayed 0.01s = 10 milliseconds compared to original
shifted = sine_wave(frequency=10.0, samplerate=10e3, timedelay=0.01)

Note that in the time domain, the signals appear to be shifted backwards when you use a positive timedelay value. This is in accordance with the delay naming, implying that the signal is delayed by that amount.

You can specify both phase shift and time delay, meaning that both will be applied (the offsets are added).
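
Since time delay and phase shift are equivalent, converting between them is simple (basic math, independent of the library):

frequency = 10.0   # Hz
timedelay = 0.01   # seconds (= 10 ms)
# One period at 10 Hz lasts 0.1 s, so a 0.01 s delay corresponds to 36°
phaseshift = timedelay * frequency * 360.0  # => 36.0 degrees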

Plotting

If you want to debug your signals visually, this is the code that was used inside Jupyter to generate the plots shown above:

%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use("ggplot")

# Generate data
from UliEngineering.SignalProcessing.Simulation import sine_wave

original = sine_wave(frequency=10.0, samplerate=10e3)
shifted = sine_wave(frequency=10.0, samplerate=10e3, timedelay=0.01)

# set_size_inches(20, 10) to make it even larger!
plt.gcf().set_size_inches(10, 5)
plt.plot(original, label="original")
plt.plot(shifted, label="shifted")
plt.savefig("/dev/shm/timedelay.svg")
plt.legend(loc=1) # Top right

Easy zero crossing detection in Python using UliEngineering

In order to perform zero crossing detection in NumPy arrays you can use the UliEngineering library which provides an easy-to-use zero_crossings function:

Install:

sudo pip3 install git+https://github.com/ulikoehler/UliEngineering.git

Usage example:

from UliEngineering.SignalProcessing.Utils import zero_crossings
# To generate test data
from UliEngineering.SignalProcessing.Simulation import sine_wave

# Generate test data: 10 Hz 10ksps, 1 second (= 1D 10000 values NumPy array)
data = sine_wave(frequency=10.0, samplerate=10e3)

# Prints: [0, 500, 1000, 1500, ...]
print(zero_crossings(data))
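
The underlying idea (from the StackOverflow answer linked below) can be sketched in plain NumPy – assuming zero crossing detection based on sign changes between consecutive samples:

import numpy as np
# Indices where the sign changes between consecutive samples
crossings = np.where(np.diff(np.sign(data)))[0]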

Thanks to Jim Brissom on StackOverflow for the original solution!

How to set cv2.VideoCapture() image size in Python

Use cv2.CAP_PROP_FRAME_WIDTH and cv2.CAP_PROP_FRAME_HEIGHT in order to tell OpenCV which image size you would like.

import cv2

video_capture = cv2.VideoCapture(0)
# Check success
if not video_capture.isOpened():
    raise Exception("Could not open video device")
# Set properties. Each call returns True on success (i.e. the resolution was accepted)
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
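# Optional sketch: read back which resolution the driver actually applied,
# since many cameras silently ignore unsupported values
actual_width = video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)
actual_height = video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)
print("Actual resolution: %dx%d" % (actual_width, actual_height))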
# Read picture. ret is True on success
ret, frame = video_capture.read()
# Close device
video_capture.release()

Note that most video capture devices (like webcams) only support specific sets of widths & heights. Use uvcdynctrl -f to find out which resolutions are supported:

$ uvcdynctrl -f
Listing available frame formats for device video0:
Pixel format: YUYV (YUYV 4:2:2; MIME type: video/x-raw-yuv)
  Frame size: 640x480
    Frame rates: 30, 20, 10
  Frame size: 352x288
    Frame rates: 30, 20, 10
  Frame size: 320x240
    Frame rates: 30, 20, 10
  Frame size: 176x144
    Frame rates: 30, 20, 10
  Frame size: 160x120
    Frame rates: 30, 20, 10

How to take a webcam picture using OpenCV in Python

This code opens /dev/video0 and takes a single picture, closing the device afterwards:

import cv2

video_capture = cv2.VideoCapture(0)
# Check success
if not video_capture.isOpened():
    raise Exception("Could not open video device")
# Read picture. ret is True on success
ret, frame = video_capture.read()
# Close device
video_capture.release()

You can also use cv2.VideoCapture("/dev/video0"), but this approach is platform-dependent. cv2.VideoCapture(0) will also open the first video device on non-Linux platforms.

In Jupyter you can display the picture using:

from matplotlib import pyplot as plt

frameRGB = frame[:,:,::-1] # BGR => RGB
plt.imshow(frameRGB)


How to build rav1e on Ubuntu

This will fetch and build the current git master of rav1e. The build process has been tested on Ubuntu 18.04 with rav1e git revision 49dcaada4.

sudo apt update
sudo apt -y install cargo git perl nasm cmake clang pkg-config
# Fetch
git clone https://github.com/xiph/rav1e.git
mv rav1e rav1e-git
cd rav1e-git
git submodule update --init
# Build libaom-av1
cmake aom_build/aom -DAOM_TARGET_CPU=x86_64 -DCONFIG_AV1_ENCODER=0 -DENABLE_TESTS=0 -DENABLE_DOCS=0 -DCONFIG_LOWBITDEPTH=1
make -j$(nproc)
# Build rav1e
cargo build --release
# Copy the binary to the parent directory
cp target/release/rav1e ..

After the build finishes, the rav1e executable is placed in the directory where you ran those commands. You can delete the rav1e-git folder afterwards.
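
You can then encode a Y4M video file with the freshly built binary – basic usage shown here, see rav1e --help for all options:

./rav1e input.y4m -o output.ivf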


GPG symmetric encryption: Passphrase on command line

Problem:

You want to use GnuPG’s --symmetric encryption, but instead of interactively entering the password you want to use a command line argument with the cleartext password.

Solution:

Use --batch --yes --passphrase <passphrase>:

gpg --symmetric --batch --yes --passphrase 12345 <input file>

Note that this is potentially insecure: it is much easier to find out the command line parameters of running programs than to intercept the input of the interactive passphrase dialog. Therefore, use this strategy only if necessary.
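
To decrypt such a file later with the passphrase on the command line, an analogous invocation works (depending on your GnuPG version, you may additionally need --pinentry-mode loopback):

gpg --decrypt --batch --yes --passphrase 12345 <input file>.gpg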

Running Gitlab CE via docker behind a reverse proxy on Ubuntu

Similarly to my previous article about installing Redmine via docker behind a reverse proxy, this article details how to run Gitlab CE in docker behind an nginx reverse proxy. This is necessary since I am running an instance of Redmine and an instance of Gitlab on the same virtual server, plus tens of other services – so Gitlab cannot simply occupy ports 80 and 443 for itself.

While the Gitlab CE docker container is nicely preconfigured for standalone use on a dedicated VPS, running it behind a reverse proxy is not supported and will lead to a multitude of error messages – in effect, requiring lots of extra work to get up and running.

Note that we will not set up GitLab for SSH access. This is possible using this setup, but usually causes more trouble than it is worth. See this article on how to store git https passwords so you don’t have to enter your password every time.

Installing Docker & Docker-Compose

# Install prerequisites
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Add docker's package signing key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add repository
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install latest stable docker stable version
sudo apt-get update
sudo apt-get -y install docker-ce
# Install docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod a+x /usr/local/bin/docker-compose

# Add current user to the docker group
sudo usermod -a -G docker $USER
# Enable & start docker service
sudo systemctl enable docker
sudo systemctl start docker

After running this shell script, log out of & back into the system in order for the docker group membership to take effect for your user.

Creating the directory & docker-compose configuration

We will install Gitlab in /var/lib/gitlab which will host the data directories and the docker-compose script. You can use any directory if you use it consistently in all the configs (most importantly, docker-compose.yml and the systemd service).

# Create directories
sudo mkdir /var/lib/gitlab

Next, we’ll create /var/lib/gitlab/docker-compose.yml.

There’s a couple of things you need to change here:

  • Set gitlab_rails['gitlab_email_from'] and gitlab_rails['gitlab_email_display_name'] to whatever sender address & name you want emails to be sent from
  • Set the SMTP credentials (gitlab_rails['smtp_address'], gitlab_rails['smtp_port'], gitlab_rails['smtp_user_name'], gitlab_rails['smtp_password'] & gitlab_rails['smtp_domain']) to a valid SMTP server. In rare cases you also have to change the other gitlab_rails['smtp_...'] settings.
  • You need to change all 4 occurrences of gitlab.mydomain.de to your domain.
  • The ports configuration, in this case '9080:80', means that Gitlab will be mapped to port 9080 on the local PC. This port is chosen somewhat arbitrarily – as we will run Gitlab behind an nginx reverse proxy, it does not need to be any port in particular (as long as you use the same port everywhere), but it must not be used by anything else. Leave 80 as-is and only change 9080 if required.
gitlab:
   image: 'gitlab/gitlab-ce:latest'
   restart: always
   hostname: 'gitlab.mydomain.de'
   environment:
     GITLAB_OMNIBUS_CONFIG: |
       external_url 'https://gitlab.mydomain.de'
       letsencrypt['enabled'] = false
       # Email
       gitlab_rails['gitlab_email_enabled'] = true
       gitlab_rails['gitlab_email_from'] = 'gitlab@mydomain.de'
       gitlab_rails['gitlab_email_display_name'] = 'My GitLab'
       # SMTP
       gitlab_rails['smtp_enable'] = true
       gitlab_rails['smtp_address'] = "mail.mydomain.de"
       gitlab_rails['smtp_port'] = 25
       gitlab_rails['smtp_user_name'] = "gitlab@mydomain.de"
       gitlab_rails['smtp_password'] = "yourSMTPPassword"
       gitlab_rails['smtp_domain'] = "mydomain.de"
       gitlab_rails['smtp_authentication'] = "login"
       gitlab_rails['smtp_enable_starttls_auto'] = true
       gitlab_rails['smtp_tls'] = true
       gitlab_rails['smtp_openssl_verify_mode'] = 'none'
       # Reverse proxy nginx config
       nginx['listen_port'] = 80
       nginx['listen_https'] = false
       nginx['proxy_set_headers'] = {
         "X-Forwarded-Proto" => "https",
         "X-Forwarded-Ssl" => "on",
         "Host" => "gitlab.mydomain.de",
         "X-Real-IP" => "$$remote_addr",
         "X-Forwarded-For" => "$$proxy_add_x_forwarded_for",
         "Upgrade" => "$$http_upgrade",
         "Connection" => "$$connection_upgrade"
       }
   ports:
     - '9080:80'
   volumes:
     - '/var/lib/gitlab/config:/etc/gitlab'
     - '/var/lib/gitlab/logs:/var/log/gitlab'
     - '/var/lib/gitlab/data:/var/opt/gitlab'
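
You can verify the syntax of the file at any time using docker-compose's built-in config validation:

docker-compose -f /var/lib/gitlab/docker-compose.yml config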

Setting up the systemd service

Next, we’ll configure the systemd service in /etc/systemd/system/gitlab.service.

Set User=... to your current user in the [Service] section.

[Unit]
Description=Gitlab
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=uli
Group=docker
# Shutdown container (if running) when unit is stopped
ExecStartPre=/usr/local/bin/docker-compose -f /var/lib/gitlab/docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f /var/lib/gitlab/docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f /var/lib/gitlab/docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

After creating the file, we can enable and start the gitlab service:

sudo systemctl enable gitlab
sudo systemctl start gitlab

The output of sudo systemctl start gitlab should be empty. In case you see an error like

Job for gitlab.service failed because the control process exited with error code.
See "systemctl status gitlab.service" and "journalctl -xe" for details.

you can debug the issue using journalctl -xe and journalctl -e.

The first startup usually takes about 10 minutes, so grab at least one cup of coffee. You can follow the progress using journalctl -xefu gitlab. Once you see lines like

Dec 17 17:28:04 instance-1 docker-compose[4087]: gitlab_1  | {"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"duration":28.82,"view":22.82,"db":0.97,"time":"2018-12-17T17:28:03.252Z","params":[],"remote_ip":null,"user_id":null,"username":null,"ua":null}

the startup is finished.

Now you can check if GitLab is running using

wget -O- http://localhost:9080/

(if you changed the port config before, you need to use your custom port in the URL).

If it worked, wget will show the HTTP response. Since gitlab automatically redirects you to your domain (gitlab.mydomain.de in this example), you should see something like

--2018-12-17 17:28:32--  http://localhost:9080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:9080... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://gitlab.mydomain.de/users/sign_in [following]
--2018-12-17 17:28:32--  https://gitlab.mydomain.de/users/sign_in
Resolving gitlab.mydomain.de (gitlab.mydomain.de)... 35.198.165.121
Connecting to gitlab.mydomain.de (gitlab.mydomain.de)|35.198.165.121|:443... failed: Connection refused.

Since we have not setup nginx as a reverse proxy yet, it’s totally fine that it’s saying connection refused. The redirection worked if you see the output listed above.

Setting up the nginx reverse proxy (optional but recommended)

We’ll use nginx to proxy the requests for a certain domain (using Apache, if you already run it, is also possible, but outside the scope of this tutorial). Install it using

sudo apt -y install nginx

First, you’ll need a domain name with DNS being configured. For this example, we’ll assume that your domain name is gitlab.mydomain.de! You need to change it to your domain name!

Then we’ll create the config file in /etc/nginx/sites-enabled/gitlab.conf. Remember to replace gitlab.mydomain.de with your domain name! If you use a port different from 9080, replace that as well.

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    server_name gitlab.mydomain.de;

    access_log /var/log/nginx/gitlab.access_log;
    error_log /var/log/nginx/gitlab.error_log info;

    location / {
        proxy_pass http://127.0.0.1:9080; # docker container listens here
        proxy_read_timeout 3600s;
        proxy_http_version 1.1;
        # Websocket connection
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    listen 80;
}

Now run sudo nginx -t to test if there are any errors in the config file. If everything is alright, you’ll see

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Once you have fixed all errors, if any, run sudo service nginx reload to apply the configuration.

We need to set up a Let’s Encrypt SSL certificate before we can check if Gitlab is working:

Securing the nginx reverse proxy using Let’s Encrypt

First we need to install certbot and the certbot nginx plugin in order to create & install the certificate in nginx:

sudo apt -y install certbot python3-certbot-nginx

Fortunately certbot automates most of the process of installing & configuring SSL and the certificate. Run

sudo certbot --nginx

It will ask you to enter your email address, agree to the terms of service, and decide whether you want to receive the EFF newsletter.

After that, certbot will ask you to select the correct domain name:

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: gitlab.mydomain.de
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):

In this case, there is only one domain name (there will be more if you have more domains active on nginx!).

Therefore, enter 1 and press enter. certbot will now generate the certificate. In case of success you will see an output including a line like

Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/gitlab.mydomain.de.conf

Now it will ask you whether to redirect all requests to HTTPS automatically:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 

Choose Redirect here: Type 2 and press enter. Now you can login to GitLab and finish the installation.

You need to renew the certificate every 3 months for it to stay valid, and run sudo service nginx reload afterwards to use the new certificate. If you fail to do this, users will see certificate expired error messages and won’t be able to access GitLab easily! See this post for details on how to mostly automate this process!

Setting up GitLab

Now you can open a browser and have a first look at your new GitLab installation.

Set the new password and then login with the username root and your newly set password.

After that, open the admin area by clicking on the wrench icon in the purple navigation bar at the top.

In the navigation bar on the left, click on Settings (it’s at the bottom – you need to scroll down) and then click on General.

Click the Expand button to the right of Visibility and access controls. Scroll down until you see Enabled Git access protocols and select Only HTTP(S) in the combo box.

Then click the green Save changes button.

Since we have disabled SSH access (which we didn’t set up in the first place), you can now use GitLab. A good place to start is to create a new project and try checking it out. See this article on how to store git https passwords so you don’t have to enter your git password every time.

How to disable Let’s Encrypt in the Gitlab CE docker image

Problem:

You want to run the Gitlab CE docker image, but since you want to run it together with other services behind a reverse proxy, you see an error message like this:

gitlab_1  | letsencrypt_certificate[gitlab.mydomain.com] (letsencrypt::http_authorization line 3) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 20) had an error: RuntimeError: [gitlab.mydomain.com] Validation failed for domain gitlab.mydomain.com

Solution:

Add

letsencrypt['enabled'] = false

to GITLAB_OMNIBUS_CONFIG. See this file on GitHub for more Let’s Encrypt-related configs you can add.

In docker-compose.yml it could look like this:

gitlab:
   image: 'gitlab/gitlab-ce:latest'
   restart: always
   hostname: 'gitlab.mydomain.com'
   environment:
     GITLAB_OMNIBUS_CONFIG: |
       external_url 'https://gitlab.mydomain.com'
       letsencrypt['enabled'] = false
   ports:
     - '7080:80'
     - '1022:22'
   volumes:
     - '/var/lib/gitlab/config:/etc/gitlab'
     - '/var/lib/gitlab/logs:/var/log/gitlab'
     - '/var/lib/gitlab/data:/var/opt/gitlab'


Tips on how to use Redmine for your project

Having used Redmine for almost 10 years for a multitude of projects and with widely varying team sizes, here are my practical tips on how you should use it for your project:

Super-projects

Create one super-project of which all projects are subprojects. This allows you to use all project management features like Gantt, Milestones etc in the super-project.

Example project structure

  • My Company Inc (Super-project)
    • R&D
      • Migration to cloud computing
      • Machine learning research
    • Operations
      • Customer X
      • Customer Y
    • Management

By opening the ticket list for the super-project, you see all the tickets of all sub-projects – and therefore have more tools at your disposal for high-level management of the company. For example, you could create milestones for business development in the super-project that depend on tasks in some R&D sub-project.

No public projects!

People often make the mistake of making a Redmine project public – maybe because that’s the default setting for new projects.

However, anyone who knows the project URL, e.g. https://redmine.mydomain.com/projects/test/, can see ALL THE DATA in the project by default, even though the project isn’t listed in the search bar project list.

Solution: Make all projects non-public: As admin, go to the project, go to the Settings tab, uncheck the Public checkbox and click Save. Unless, of course, you actually want all the data to be publicly available via Redmine…

Avoid multiple issue trackers

Companies that grew organically to their current size often have separate systems that fulfill mostly the same purpose, are mostly not interconnected, and are used by separate subgroups of the personnel.

Typical example: (Project) management uses Redmine while the developers use Gitlab for source-code-oriented bug management and only interface to Redmine to communicate with the project managers.

While this constellation will work reasonably well even over long time periods, it inevitably leads to an unhealthy spread of information and, as a consequence, inefficiency in looking for information: some low-level development discussion or documentation will end up in Redmine, whereas some project-management-related discussions inevitably end up in Gitlab.

If at all possible, try to encourage the use of one system and discourage the use of the other for any purpose that overlaps. Try to actively work with people opposed to that policy to find solutions on how to adapt their known process to the new system.

Enable support for incoming emails

Most Redmine setups are notoriously hard to use on a smartphone – the UI is not really optimized for mobile use.

In order to facilitate the use on mobile or other devices, one easy way is to allow the user to answer to notification emails, with their response being posted as an update to the ticket. While this is somewhat hard to set up, a couple of hours by a skilled admin or expert will most certainly get you a couple of years of maintenance-free updates by incoming emails.

Note that this does not cover the use case of creating new tickets or otherwise interfacing with Redmine, but in practice, people that are currently on mobile-only will either just update existing tickets or whatever they intend to do is sufficiently important that they will deal with the suboptimal mobile UI that Redmine currently provides.

While there are a few Redmine mobile apps out there and it is certainly worth a try at some point, I have not had great success with them in past projects.

Use (parts of) the wiki to collect unstructured information

Many companies lack a structured and searchable way of collecting unstructured information. What about the SSH command the admin uses to connect to some server? Everyone who has worked in R&D knows that perfect-documentation-land is pure utopia and while actually writing top-down documentation for all the tasks at hand might work for some time, it is mostly only used to describe what to do whereas details about how to do it are quite easily omitted.

Moreover, documentation is often not in sync with the real world, especially if the people who originally wrote it have moved on to other projects or even other jobs. Don’t make the mistake of assuming that you have any way of enforcing the discipline to write & keep in sync technical documentation down to the command- or component-level over a long term. Initial success regarding that matter is far worse in predicting the long-term success of documentation projects than most project managers would admit – and it’s debatable if good-but-horribly-outdated-information is of any use in a multi-year project.

If you don’t use the Redmine wiki for any other process, try to push people towards using it for documenting what they did, even if it’s only a prototype command line script or something that doesn’t work out later (you can delete it if it serves no purpose anymore).

If, however, you use the wiki for other documentation, just reserve a certain section, e.g. any page accessible from a special Documentation page, for structured information and create pages for the different types of unstructured information.

Crying Wolf: Don’t overuse Urgent and Immediate priorities

Everyone who has worked in the technology industry knows that some project managers appear to have the uncanny ability to always have one more Urgent task up their sleeve, no matter how many you have already fixed. Moreover, there are so many Urgent and Immediate tasks that no-one really knows what these words mean any more – especially since you can’t quite remember the last time you fixed a Low-priority task.

When you introduce the use of an advanced issue tracking tool, you have one more chance of breaking out of that vicious circle. It’s a productive policy to enforce a rule that project managers may only have one Immediate and three Urgent tasks open at any one time. This enables developers to actually prioritize and avoids the risk of getting yelled at by some manager who didn’t use the tools at hand properly, in effect imposing wrong or unclear constraints on the developers.

Always have someone assigned & Order by priority

In practical projects, one phenomenon that often happens in conjunction with issue trackers is ticket orphanisation. Here’s how it happens.

  • Mgmt creates a ticket with high priority and assigns a developer
  • Developer fixes the issue and un-assigns himself, notifying the management to assign someone to test the fix properly via the issue update text.
  • Mgmt forgets to actually assign someone (e.g. due to holiday or more urgent tasks) and the ticket enters the endless list of unclosed-but-not-that-important tickets that have (almost) been fixed but some small part is still missing.

Unless you are one of the lucky few who manage to close and properly & periodically handle all tickets over a long period of time, the small lapse of not having anyone assigned for a short period of time greatly increases the likelihood of a ticket never being properly finished.

Here’s how it should work:

  • Mgmt creates a ticket with high priority and assigns a developer
  • Developer fixes the issue and assigns management, telling them in the update text that the next step is to find some tester capable of testing it. The developer sets the ticket to an appropriate priority level.
  • Mgmt will always see the tickets on the My tickets page.

In other words, there must be someone assigned at any one time. While this does not automatically solve issues of people leaving the project, or being on holiday or sick, this is how it should work:

  • If the next step to do on this ticket will be done by you, leave yourself assigned
  • If there are multiple next steps that should happen in parallel, create sub-tickets and assign accordingly
  • If the next task is unknown to you, assign to mgmt or head developer and note that the next task needs to be determined
  • If you know the next sub-task but don’t know who should do it, assign to mgmt and note that X needs to be done and they shall determine by whom (this represents our example listed above)
  • If the task has been assigned to you and you don’t know what to do, assign to head developer or mgmt and note, as clearly as possible, why you are unsure about what to do. Frequent occurrence of this case typically indicates a failure of mgmt or development head to communicate the tasks in a developer-compatible way.

Independently of the aforementioned decisions:

  • Use sub-tickets heavily: any time you see that something can be split into multiple tasks, create sub-tickets, even if you’re going to do all of those tasks yourself anyway
  • Don’t create sub-issues for sub-tasks that are projected to take less than 10 minutes
  • Always create sub-issues for sub-tasks that are projected to take more than 60 minutes
  • It is the responsibility of mgmt to look through tasks of temporarily unavailable project members and re-assign issues that can’t wait accordingly
  • It is the responsibility of mgmt to look through tasks of permanently unavailable project members and re-assign all issues as a matter of principle

A good way of conceptualizing sub-tickets is as local mini-milestones. While some tickets might have a scope larger than a typical milestone, sub-tickets should primarily focus on conveying relationship information to developers, while milestones should primarily focus on conveying information to mgmt.

Given that process, anyone should work primarily on the My tickets list, not so much on the project tickets list. The project tickets list is primarily for the mgmt to get an overview and developers should only use it to get some extra information or cross-reference some tasks.

The most efficient (& most obvious) way of using Redmine is to work through the My tickets list starting from the high priority tasks. Remember to regularly check if any higher priority tasks popped up.

One of the main tasks of the project manager is to ensure that the process I described automatically leads to people working on milestones, with some high-priority tasks outside the milestone interspersed in between. Proper communication to the developers ensures that while everyone is working based on the aforementioned scheme, the common goal of reaching the milestone is clearly defined in the minds of the developers.

Enforce regular updates by using Microtasks

A typical failure mode of software and hardware project management is that management thinks top-down and fails to enforce a process that thinks all the way to the bottom.

Regarding issue trackers, this systematic problem is often manifested in there being only coarse macro-tasks that, even if properly assigned to developers, don’t allow anyone to accurately assess the current state of the project (e.g. because there is no way of assessing a 0%…100% progress of an unclearly defined wide-scope task) and often lead to developers not giving regular updates.

Depending on the management style, irregular updates regarding specific parts of the development often lead to procrastination of tasks on the side of developers, greatly reducing their effectiveness.

All of those failure modes can be avoided by using smaller, more strictly-defined micro-tasks. While this solution is obvious from a conceptual standpoint, the practical implementation is most often somewhere between utterly disastrous and not-viable-in-the-long term. The most common issues are:

  • Management does not have insight into the “low level” of development and therefore can’t assess how to split up the tasks beyond a certain level
  • Developers are said to not have enough insight into the bigger picture of the project and therefore it’s not considered feasible to let them split up tasks themselves
  • Management considers the overhead of creating and micro-managing micro-tasks to be too large and therefore considers the process inefficient

However, most of these concerns are caused by a combination of inadequate processes and fundamental misconceptions about micro-tasks:

  • Management often tends to raise artificial barriers between itself and the developers – such as not considering the possibility that the one who is assigned to perform a task should know best how to split it up.
  • Many processes operate under a no-error paradigm regarding the issues (i.e. there shall be virtually no mistakes such as wrong splitting of tasks). Yet under real circumstances, it is almost always more effective in both the short and long term to embrace the possibility of incorrectly-handled tickets on every level, just like the industry has accepted the existence of software bugs. Embracing a certain level of errors being made also significantly reduces certain types of undesirable behavior, the most prominent one being the affinity to hide one’s own errors once one has realized they have been made, in order to avoid social repercussions (such as being yelled at), even if such repercussions are purely hypothetical in practice. This strategy also often eases the tendency to raise artificial barriers as outlined above, because the underlying reason for those barriers is often based in some conflict of competence and therefore a battle for authority.
  • The overhead issue can easily & effectively be taken care of by simply defining the desirable size of micro-tasks, as listed below:
    • Never create a micro-task if you assume the task will take less than 10 minutes
    • Always create a micro-task if you assume the task will take more than 60 minutes
    • In between those figures, use your own judgment as a developer
  • Moreover, when using micro-tasks it is important to make creating a micro-task as quick & easy as possible. In the worst case, assuming accurate projections, the developer will create one micro-task for every 10 minutes of development work. If some requirement, policy or process makes creating tasks inefficient for the developers, the overhead can quickly accumulate – however this is rare in practice, as most technology-oriented developers who do not operate under strict & direct management supervision employ some sort of self-regulation and will create fewer micro-tasks if the overhead is sufficiently large
  • Yet, the same self-regulation mechanism can under some circumstances also lead to too few micro-tasks being created. This can only be solved by active encouragement and communication from the management, including some level of give-and-take on both sides.

Since management of larger companies often operates under the assumption that every subordinate shall be exchangeable to a certain extent, and that concept often prevails throughout the hierarchy, some find it hard to swallow that some tasks might be split into multiple micro-tasks by one developer whereas another one will just treat them monolithically, based on differences in their projected timespan. Yet even given a large deviation of the actual time spent on a task from the projected time (plus the fact that it’s much easier to judge the duration of a short task than a long one), this remains without any practical consequence, given that the forced-split threshold of 60 minutes is so small compared to the duration of the entire project. All remaining issues need to be handled by communication between management and developers anyway, such as telling a developer that he should have split this or that task after repeatedly failing to do so.

In addition to facilitating effective & automatic information flow between management and the developers, micro-tasks in combination with effective supervision by development heads or management tend to avoid many cases of getting stuck: A typical behavior for developers is to continue working on a technically challenging task, even if they are totally stuck dealing with the current issue.

My recommendation is what I call the 15-minute rule: Once you can’t get measurable progress on a problem within 15 minutes, you should try another task, or (if not possible) approach the issue from a different angle.
While this leaves significant room for interpretation and even besides that should be treated more like a heuristic than a rule, self-applying this rule has helped many inexperienced developers.

Micro-tasks deal with this issue in a project setting by allowing the management or the supervisory developer (depending on the project size) to periodically check all the currently active tasks with very short cycle times of less than an hour: if there has been no progress on a micro-task within one or two hours, one can expect that either the developer hasn’t split up the task properly, or he is stuck and might profit from assistance by the usually more experienced supervisor.

Without micro-tasks, the timespan after which one can expect feedback or otherwise assume that a task is stuck not only varies a lot depending on the individual task, but is also extremely long – sometimes even weeks or months. This increases the likelihood that a lot of time is lost due to long periods of a developer being stuck on a task without measurable progress. In effect, micro-tasks add to the supervisory developer’s toolbox by providing means to intervene before much valuable time is lost, while avoiding the work environment being perceived as surveillance-focused by the developers.

Ease-of-use

The way Redmine is set up in many companies effectively makes people, especially non-tech oriented people, unlikely to use Redmine to document their issues.

Example 1: Redmine only over VPN

For security reasons, Redmine is often only available over a virtual private network. Regarding usability, this has the effect that a user often has to log in through multiple layers of security (login to VPN, login to Redmine).

Moreover, the VPN client often requires a special setup that might not work on every device.

One particular case that I recall is a company where all project mgmt tools were accessible only over VPN – and my VPN account could only handle one user at a time. Therefore, if I switched from my Desktop to my Notebook, the VPN couldn’t connect and therefore I was unable to use Redmine.

Security is usually handled sufficiently well by recurring reviews by admins and/or external specialists – these reviews should also cover your Redmine installation. My main recommendation regarding Redmine is to copy the project URLs (e.g. https://redmine.gridbox.de/projects/test) while you’re logged in and open them in Incognito mode (where you’re not logged in). If you see the login form, everything is alright for this project. If you see the project overview page, you have set the project to Public mode – see above for how to fix that.

In general, security should not be conceptualized as an optimization goal (as it is often done by technology-affine people) but more as a tradeoff between certain types of risk and usability. In that context, usability can be modeled as the likelihood that people will use a tool or process voluntarily – or, equivalently, the amount of effort you have to put in so that a certain group of people uses a certain tool or process to a given extent.

This implies that any educated discussion about security will need to define some model of the risk that is being talked about, and of the people using the system. In many cases, the mistake made especially by technologically affine people is to base the usability model on their own aptitude & capability of using extremely suboptimal tools effectively, while their risk model is sometimes influenced by a desire to be better than the current state-of-the-market. While both these properties are fundamentally desirable in almost every aspect, practical businesses will have to make the fundamental security-usability tradeoff at some point, explicitly or implicitly, whether they like it or not.

Example 2: Too few privileges in Redmine

Often, Redmine admins strip users of all but the most obviously required privileges for all the subprojects. This means that users often can’t edit descriptions and can’t close or reopen issues. Even if they can contact the admin to e.g. re-open an erroneously closed issue, users will not do so effectively, based on some level of fear of doing something they can’t revert themselves – and the social shame of having to contact the admin about it, even if the admin does not seem to judge them for their behavior.

The only action I recommend disallowing is deleting (as opposed to closing) an issue, as this is very hard to recover from. There are certain projects, of course, where certain (low-level) users factually shouldn’t be able to do anything (e.g. marketing might just be able to read R&D info but not contribute to it), but in many projects, restrictions apply to all projects and inhibit effective use, especially by non-tech personnel, in projects with more than ~3 active participants.

Example 3: Too strict policies for Redmine

If one actively enforces a specific format, length or any other policy for creating or updating tickets or documentation, this will not only create a significant entry barrier for new users (who will often refrain from using project management tools altogether except to the level that is actively enforced), but also create a hard wall of separation between the veteran users (who know the rules or, as some would say, define the interpretation of the rules) and everyone else – with everyone else being more or less afraid of not being able to follow the rules, and therefore refraining from using the tool beyond the minimal enforced level.

Additionally, if you impose policies on the format of some documentation, don’t forget to consider the case that some relevant information might not fit into a predefined scheme. If you are too aggressive in enforcing the policy, important information might be lost as it is dropped from updates. Therefore, you should always include an additional information field in any predefined format, even if it’s often misused.

Example: A company enforces that development tickets must only be closed by git commits. Yet they failed to consider that many bugs are either unintentionally fixed by a secondary ticket without the other developer being aware of the primary ticket – or are based on a misconception and are therefore not really a fixable bug.

The ostensibly good intention of being able to map all tickets to git commits might therefore result in either a lot of tickets not being closed at all, or tickets being handled in some other far-from-ideal way as people look for a workaround to the policy. The corporate process of fixing the policies once such cases are known and documented is often both too slow and too complex compared to a quick workaround. However, in the long term, the unknown status of a large portion of your bug tracker will almost certainly come back to haunt you!

Think of it this way: your bug tracker might suffer the same fate as many of the classical internet forums ubiquitous since the early 2000s. If you ask a question and only get an answer like

Use the search function ! What you are looking for has been answered SO MANY TIMES on this forum!

by some veteran user, this will likely be the last question you ask voluntarily on said forum.

Why is that? Because not only is using the search function and actually getting some meaningful results out of it not as easy as said veteran implies (the new user usually doesn’t know the right question to ask, i.e. what terms to look for), but said veteran could just as easily have used 10 seconds of his time to post one or two of the seemingly ubiquitous answer-to-your-question links to at least get you started.

Example 4: Enforced deadlines

One project I recall temporarily had the policy that every ticket shall have a due date set to at most 4 weeks in the future, in order to be reminded of it via Gantt etc. This didn’t work out, as constant needless update emails to extend the due date (e.g. for a postponed issue) annoyed all the users.

My recommendation is to use due dates only where due dates are functionally implied and, at the same time, functionally required by the project.
Example: If the software needs to be delivered in 6 months, maybe the core feature implementation tasks should be due in 3 months.

In general, it is advisable to use milestones instead of due dates to keep a lean development process and still be able to keep close track of the progress. If you e.g. set due dates for tickets belonging to the next sprint, you could just as well have created a milestone for said sprint, and thereby keep close track of the progress while removing the often unrealistic dangling sword of due dates from the minds of the developers.

Think of it this way: the most effective way to motivate people in said context is for everyone to work towards a common goal (milestone) instead of everyone fearing to miss a deadline and get yelled at. Don’t come up with reasoning like We don't yell at my company. Even if there is no explicitly negative reaction whatsoever, subconscious reactions to the negative motivation attempt will not make your goal easier to reach – or your project easier to manage.

Corollary

If you make tools difficult to use, people will not use them – or use them only to the extent that you actively enforce. Since you should have better things to do than enforcing the use of tools, try your best not to make them hard to use – and stop coming up with reasons that justify making them harder to use.

In practice, it is often sufficient to avoid punishing behavior that is incompatible with some policy but instead try to encourage & reward improvement. You could, for example, treat badly written issues with a lower priority until they are improved, but never make the mistake of trying to educate the users by dismissing them.

Have a demo project for all users

Often when you introduce new users to Redmine or you want to test out new features, a demo project can come in very handy. It allows you to test out and show features to users without real-world consequences even if you screw up something. Give all users permanent access to the demo project – so they can try out things as well if required.

How to automatically renew Let’s Encrypt certbot certs on Ubuntu

On Ubuntu, you can easily setup a daily job that tries to renew almost-expired Let’s Encrypt certificates.

Create /etc/cron.daily/renewcerts:

#!/bin/bash
certbot renew
service nginx reload

After that, sudo chmod a+x /etc/cron.daily/renewcerts.

Now you should verify that the script would actually run:

run-parts --test -v /etc/cron.daily

should print, among other lines, this line:

/etc/cron.daily/renewcerts

IMPORTANT: You still need to run certbot renew manually every 1-2 months to check if there are any errors that might prevent certs from being renewed.
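
You can also test the renewal process at any time without affecting your certificates using certbot’s --dry-run option:

sudo certbot renew --dry-run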

NOTE: Since the script is calling service nginx reload, you need to ensure that your nginx config files are not left in a broken state for too long if you edit them. Use sudo nginx -t to check for errors after you edit them. Also note that if you make nginx config changes, the script might unintentionally apply them to your productive HTTP/HTTPS server!

How to easily install Redmine using Docker Images

This tutorial shows you step by step the easiest method of setting up a fresh Redmine installation I have found so far. The commands have been tested on Ubuntu 18.04, but they should work with minimal modifications on other DEB-based distributions.

Installing Docker & Docker-Compose

# Install prerequisites
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Add docker's package signing key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add repository
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install latest stable docker stable version
sudo apt-get update
sudo apt-get -y install docker-ce
# Install docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod a+x /usr/local/bin/docker-compose

# Add current user to the docker group
sudo usermod -a -G docker $USER
# Enable & start docker service
sudo systemctl enable docker
sudo systemctl start docker

After running this shell script, log out and log back in so the docker group membership becomes effective for your user.
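
After logging back in, you can verify that docker and docker-compose are installed correctly and that your user can talk to the docker daemon, for example like this:

docker --version
docker-compose --version
docker run --rm hello-world # Downloads & runs a minimal test container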

Creating the directory & docker-compose configuration

We will install Redmine in /var/lib/redmine, which will host the data directories and the docker-compose configuration file.

# Create directories
sudo mkdir /var/lib/redmine
sudo mkdir -p /var/lib/redmine/redmine_data /var/lib/redmine/mariadb_data
# Set correct permissions for the directories
sudo chown -R 1001:1001 /var/lib/redmine/redmine_data /var/lib/redmine/mariadb_data
# Note: intentionally not recursive – the data directories above must stay owned by UID 1001
sudo chown $USER:docker /var/lib/redmine

Next, we’ll create /var/lib/redmine/docker-compose.yml.

There are a couple of things you need to change here:

  • Set REDMINE_EMAIL to the email of the admin user you want to use (usually that is your email!)
  • Set the SMTP credentials (SMTP_HOST, SMTP_PORT, SMTP_USER and SMTP_PASSWORD) to a valid SMTP server. SMTP_TLS defaults to true – in the rare case that your SMTP server does not support TLS, set it to false.
  • The ports configuration, in this case '3718:3000', means that Redmine will be mapped to port 3718 on the local PC. This port is chosen somewhat arbitrarily – since we will run Redmine behind an nginx reverse proxy, it can be any port, as long as it is not used by anything else and you use the same port everywhere. Leave 3000 as-is and only change 3718 if required.

Note that you do not need to change REDMINE_PASSWORD – when you login for the first time, redmine will force you to change the password anyway.

version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - '/var/lib/redmine/mariadb_data:/bitnami'
  redmine:
    image: 'bitnami/redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - REDMINE_EMAIL=me@gmail.com
      - SMTP_HOST=smtp.gmail.com
      - SMTP_PORT=25
      - SMTP_USER=me@gmail.com
      - SMTP_PASSWORD=yourGmailPassword
    ports:
      - '3718:3000'
    volumes:
      - '/var/lib/redmine/redmine_data:/bitnami'
    depends_on:
      - mariadb
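
Before wiring everything up as a systemd service, you can optionally start the stack manually once to verify that the configuration works (press Ctrl+C to stop it again):

cd /var/lib/redmine
docker-compose up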

Setting up the systemd service

Next, we’ll configure the systemd service in /etc/systemd/system/redmine.service.

Set User=... to your current user in the [Service] section.

[Unit]
Description=Redmine
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=uli
Group=docker
# Shutdown container (if running) when unit is stopped
ExecStartPre=/usr/local/bin/docker-compose -f /var/lib/redmine/docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f /var/lib/redmine/docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f /var/lib/redmine/docker-compose.yml down -v

[Install]
WantedBy=multi-user.target
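
Depending on your systemd version it may be necessary to make systemd re-read its unit files first (on recent versions, enabling a new unit does this implicitly, so treat this as an optional safety step):

sudo systemctl daemon-reload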

Now we can enable and start the redmine service:

sudo systemctl enable redmine
sudo systemctl start redmine

The output of sudo systemctl start redmine should be empty. In case you see

Job for redmine.service failed because the control process exited with error code.
See "systemctl status redmine.service" and "journalctl -xe" for details.

debug the issue using journalctl -xe and journalctl -e.

The first startup usually takes about 3 minutes, so grab a cup of coffee.

Now you can check if redmine is running using

wget -qO- http://localhost:3718/

(if you changed the port config before, you need to use your custom port in the URL).

If it worked, it will show a large HTML output, ending with

[...]
<div id="footer">
  <div class="bgl"><div class="bgr">
    Powered by <a href="https://www.redmine.org/">Redmine</a> &copy; 2006-2018 Jean-Philippe Lang
  </div></div>
</div>
</div>
</div>

</body>
</html>

If the output is empty, try wget -O- http://localhost:3718/ to see the error message.

Setting up the nginx reverse proxy (optional but recommended)

We’ll use nginx to proxy the requests for a certain domain (using Apache, if you already run it, is also possible, but describing that is outside the scope of this tutorial). Install it using

sudo apt -y install nginx

First, you’ll need a domain name with DNS configured. For this example, we’ll assume that your domain name is redmine.techoverflow.net – remember to replace it by your own domain name everywhere!

Next, we’ll create the config file in /etc/nginx/sites-enabled/redmine.conf. If you use a port other than 3718, replace it in the config as well.

server {
    listen 80;
    server_name redmine.techoverflow.net;

    access_log /var/log/nginx/redmine.access_log;
    error_log /var/log/nginx/redmine.error_log info;

    location / {
        proxy_pass http://127.0.0.1:3718; # docker-compose forwarded
        proxy_read_timeout 3600s;
        proxy_http_version 1.1;
    }

}
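
Depending on your setup, you may additionally want nginx to pass the original host and client address on to Redmine. This is an optional sketch using standard nginx proxy directives (not part of the original config) – add the lines inside the location / block:

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;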

Now run sudo nginx -t to test if there are any errors in the config file. If everything is alright, you’ll see

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Once you have fixed all errors, if any, run sudo service nginx reload to apply the configuration.

Test the setup by navigating to your domain name in the browser. You should see the Redmine interface.

Securing the nginx reverse proxy using Let’s Encrypt

First we need to install certbot and the certbot nginx plugin in order to create & install the certificate in nginx:

sudo apt -y install certbot python3-certbot-nginx

Fortunately certbot automates most of the process of installing & configuring SSL and the certificate. Run

sudo certbot --nginx

It will ask you to enter your email address, agree to the terms of service, and decide whether you want to receive the EFF newsletter.

After that, certbot will ask you to select the correct domain name:

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: redmine.techoverflow.net
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):

In this case, there is only one domain name (there will be more if you have more domains active on nginx!).

Therefore, enter 1 and press enter. certbot will now generate the certificate. In case of success you will see

Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/redmine.techoverflow.net.conf

Now it will ask you whether to redirect all requests to HTTPS automatically:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 

Choose Redirect here: Type 2 and press enter. Now you can login to redmine and finish the installation.

You need to renew the certificate every 3 months for it to stay valid, and run sudo service nginx reload afterwards to use the new certificate. If you fail to do this, users will see certificate expired error messages and won’t be able to access Redmine easily! See this post for details on how to mostly automate this process!

Setting up Redmine

Go to your domain name (if you have followed the instructions above, it should automatically redirect you to HTTPS). Click Login at the top right and log in with the username admin and the default password redmineadmin. Upon first login, it will require you to change the password to a new, more secure one.

I won’t describe in detail how to set up Redmine for your project. However, there are a few things you should take care of immediately after the first login:

  1. Configure the correct domain name: Go to Administration -> Settings and set Host name and path to your domain name, e.g. redmine.techoverflow.net. Set Protocol to HTTPS. You can also set a custom name for your Redmine installation under Application Title.
  2. Still under Administration -> Settings, go to the Email Notifications tab and set an appropriate sender email address under Emission email address (usually you would use redmine@yourdomainname.tld here, but you might want to use your SMTP username for some SMTP providers like GMail).
  3. Scroll down to the bottom of the Email Notifications page and click Send a test email, which will send a test email to the current redmine user’s email address. Unless you have changed it, the default is the address configured in REDMINE_EMAIL in /var/lib/redmine/docker-compose.yml.

In case the email does not work, change the SMTP_...=... settings in /var/lib/redmine/docker-compose.yml. Note that you also have to change them in /var/lib/redmine/redmine_data/redmine/conf/configuration.yml! After making the changes, restart redmine using

sudo systemctl restart redmine

which will use the new configuration from the config file.

Block access to the forwarded port using ufw (optional)

ufw is a simple firewall for Ubuntu. Install it using sudo apt -y install ufw. Note that once enabled, the default configuration blocks all incoming connections – including SSH and port 3718 or any other custom port you might have used – so be sure to allow the services you still need, especially if you are connected via SSH.

In order to set up and enable the firewall, use

sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable

Remember to add any ports you need to have open to the list as well. See the ufw docs for more information.
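
You can inspect the resulting ruleset at any time using

sudo ufw status verbose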

How to use the old ‘classic’ editor in WordPress 5.0

TL;DR: Install & activate the Classic Editor plugin.

WordPress 5.0 introduced the new Gutenberg editor as the new default editor.

In order to activate the old or classic editor by default again, go to Plugins -> Install and enter classic editor in the search bar at the top right.

The first result at the top left should be the Classic Editor plugin.

Click Install now in the top right of that plugin. Once the installation has finished, click Activate. The classic editor will then be active by default.

Solving Bitnami Docker Redmine ‘cannot create directory ‘/bitnami/mariadb’: Permission denied’

Problem:

You are setting up a docker-based redmine installation using the bitnami image, but you’re getting this error message when you use a host directory mounted as volume:

cannot create directory '/bitnami/mariadb': Permission denied

Solution:

Run

sudo chown -R 1001:1001 <directory>

on the host directories used by both the MariaDB container and the Redmine container.

In order to find the directories, look for these lines in the docker-compose YML file:

# Example: This can be found in the mariadb section:
    volumes:
      - '/var/lib/myredmine/mariadb_data:/bitnami'
# Example: This can be found in the redmine section
    volumes:
      - '/var/lib/myredmine/redmine_data:/bitnami'

In this example, you would have to run

sudo chown -R 1001:1001 /var/lib/myredmine/mariadb_data /var/lib/myredmine/redmine_data

and then restart the container:

docker-compose down
docker-compose up # Use 'docker-compose up -d' to run in the background
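
You can verify that the ownership change took effect by listing the directories with numeric IDs (UID 1001 typically has no named user on the host):

ls -ln /var/lib/myredmine
# Both mariadb_data and redmine_data should now be owned by 1001:1001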


A systemd service template for docker-compose

Here’s my template for running a docker-compose service as a systemd service:

# Save as e.g. /etc/systemd/system/my-service.service
[Unit]
Description=MyService
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=uli
Group=docker
# Shutdown container (if running) when unit is stopped
ExecStartPre=/usr/local/bin/docker-compose -f /home/uli/mydockerservice/docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f /home/uli/mydockerservice/docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f /home/uli/mydockerservice/docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

In order to get it up and running for your application, you need to modify a couple of things:

  1. Check if you have docker-compose in /usr/local/bin/docker-compose (as I do, because I use the docker-ce installation from the official docker repositories for Ubuntu 18.04) or in /usr/bin/docker-compose (in which case you need to set the correct docker-compose path in all 3 places in the service file)
  2. Ensure that the user you want to run docker-compose as (uli in this example) is a member of the docker group (sudo usermod -a -G docker <user>), and set the correct user in the User=... line
  3. Define a name for your service that should be reflected in both the service filename and the Description=... line
  4. Set the correct path for your docker-compose YML config file in all the Exec…=… lines (i.e. replace /home/uli/mydockerservice/docker-compose.yml by your YML path).

After that, you can start your service using

sudo systemctl start my-service # --> my-service.service, use whatever you named your file as

and optionally enable it at bootup:

sudo systemctl enable docker # Docker is required for your service so you need to enable it as well!
sudo systemctl enable my-service # --> my-service.service, use whatever you named your file as
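
To check whether your service came up correctly, you can for example inspect its status and recent log output:

sudo systemctl status my-service # --> use your service name here as well
sudo journalctl -u my-service -e # Jump to the most recent log messages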

How to fix docker ‘Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?’ on Ubuntu

Problem:

You’re running a docker command like docker ps, but you only see this error message:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Solution:

As the error message already tells you, the docker daemon is currently not running.

On Ubuntu (16.04 upwards) and many other systemd-based distributions, you can fix this by running

sudo systemctl start docker
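
You can verify that the daemon is now up by checking its status:

sudo systemctl status docker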

In most cases, you want to automatically start the docker daemon at boot. In order to do this, run

sudo systemctl enable docker

After that, run your command (e.g. docker ps) again.

How to fix docker ‘Got permission denied while trying to connect to the Docker daemon socket’

Problem:

You are running a command like docker ps but you get this error message:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.39/containers/json: dial unix /var/run/docker.sock: connect: permission denied

Solution:

As a quick fix, running the command as root (sudo docker ps) will solve the issue temporarily.

The issue here is that the user you’re running the command as is not a member of the docker group. In order to add it to the docker group, run

sudo usermod -a -G docker $USER

After that, you need to log out from the server/computer (e.g. end the SSH session) and log back in.

Running groups should show you that you now belong to the docker group:

$ groups
uli sudo www-data lxd docker # Check if docker appears here!

After that, retry running the command (e.g. docker ps) – the error should now have disappeared.