Docker

How to have multiple Dockerfiles in one directory

Usually you build a docker image like this:

docker build -t myimagename .

but using this method you can only have one Dockerfile in each directory.

In case you don’t want to have separate directories for your Dockerfiles you can use the -f argument to docker build:

docker build -f FirstDockerfile .

Note that you still need the . at the end: it tells docker build which directory to use as the build context, i.e. where to copy files from if your Dockerfile contains COPY or similar statements.
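
For example, you could keep two Dockerfiles in one directory and build each into its own tag (FirstDockerfile and SecondDockerfile are just example names):

docker build -f FirstDockerfile -t myimage-first .
docker build -f SecondDockerfile -t myimage-second .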

Posted by Uli Köhler in Docker

How to run docker container as current user & group

If you want to prevent your docker container creating files as root, use

--user $(id -u):$(id -g)

as an argument to docker run. Example:

docker run --user $(id -u):$(id -g) -it -v $(pwd):/app myimage
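
To quickly verify that files are no longer created as root, you can run a throwaway container (this sketch uses the public alpine image instead of myimage):

docker run --rm --user $(id -u):$(id -g) -v $(pwd):/app alpine touch /app/testfile
ls -l testfile # should be owned by your user, not root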


Posted by Uli Köhler in Container, Docker, Linux

How to fix ‘Configuring tzdata’ interactive input when building Docker images

Often, when installing deb packages in your Dockerfile, some packages will install tzdata as a dependency.

The tzdata installer will try to interactively prompt you for your location using

Configuring tzdata
------------------

Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing
the time zones in which they are located.

 1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
 2. America     5. Arctic     8. Europe    11. SystemV
 3. Antarctica  6. Asia       9. Indian    12. US
Geographic area:

This will stall your image build.

In order to fix that, we’ll need to make the tzdata prompt non-interactive.

The preferred method is to add

ENV DEBIAN_FRONTEND=noninteractive

before the first RUN statement in your Dockerfile.

Alternatively you can run just the apt install or apt-get install command using DEBIAN_FRONTEND=noninteractive:

RUN DEBIAN_FRONTEND=noninteractive apt install -y tzdata

This will automatically select a default configuration for tzdata.
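
For example, a minimal Dockerfile sketch (ubuntu:20.04 is just an assumed base image):

FROM ubuntu:20.04
# Prevent tzdata and other packages from prompting during the build
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y tzdata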

Posted by Uli Köhler in Container, Docker

Fixing docker ‘unable to delete … - image is being used by running container’

Problem:

You want to delete a docker image using a command like

docker image rm c91b419ac445

but you see an error message like

Error response from daemon: conflict: unable to delete c91b419ac445 (cannot be forced) - image is being used by running container 3477a4dcdce2

Solution:

There is currently a container running that uses the image you are trying to delete. We will solve this issue by first stopping the container and then deleting the image.

Warning: Deleting the image is dangerous since it cannot be undone! Also note that force-stopping a running container might result in data loss if that container is doing something important!

Run these commands to stop the container and delete the image:

# Force-stop the container
docker container rm --force <container ID>
# Delete the image
docker image rm <image ID>

Copy <container ID> from the end of your original error message (3477a4dcdce2 in my example).

Copy <image ID> from the beginning of your error message. This is the same image ID you originally intended to delete (c91b419ac445 in my example).

In my example, the command would be

# Force-stop the container
docker container rm --force 3477a4dcdce2
# Delete the image
docker image rm c91b419ac445

Note that there might be multiple containers running that use this image, so if you keep getting a similar error message, you might need to repeat these commands.
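
If you want to handle all such containers at once, you can list and force-stop every container based on the image using docker's ancestor filter (substitute your own image ID):

# List all containers (running or stopped) that use the image
docker ps -aq --filter ancestor=c91b419ac445
# Force-stop and remove all of them in one go
docker rm --force $(docker ps -aq --filter ancestor=c91b419ac445)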

Background information:

Docker will not allow you to force-delete the image using

docker image rm c91b419ac445 --force

as you can also see from the (cannot be forced) clause of your original error message. This behaviour makes sense since the container would crash in an undefinable manner if the underlying image were deleted.

Note that we could use docker image rm --force after stopping the container, but this is typically not required and might pose additional risks for other containers, e.g. if other images depend on said image (docker uses layered images, so images can share layers). Read the background information section of our post Docker: Remove all images and containers to learn more about how docker images work from an image management perspective.

Posted by Uli Köhler in Container, Docker

How to fix Google Cloud Build ignoring .dockerignore

Problem:

You want to run a docker image build on Google Cloud Build, but the client is trying to upload a huge build context to Google Cloud even though you have added all your large directories to your .dockerignore and the build works fine locally.

Solution:

Google Cloud Build ignores .dockerignore by design – the equivalent is called .gcloudignore.

You can copy the .dockerignore behaviour for gcloud by running

cp .dockerignore .gcloudignore
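
If you want to double-check which files will actually be uploaded, the gcloud CLI can list them for you (assuming your SDK version ships the gcloud meta command group):

gcloud meta list-files-for-upload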


Posted by Uli Köhler in Cloud, Container, Docker

How to fix docker push ‘denied: requested access to the resource is denied’

Problem:

You want to push an image to Dockerhub using a command like

docker push myusernameondocker/my-image

but you see an error message like

denied: requested access to the resource is denied

Solution:

First ensure that your local docker client is logged in to Docker by using

docker login

This will ask you for your username and password. In case you have not registered yet on Dockerhub, register here!

Now you can retry your command. Note that you do not need to create the repository on DockerHub explicitly, it will be created automatically!
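
Also double-check that the image is tagged with your Docker Hub username as prefix – pushes to a name without your username prefix will be denied even when you are logged in. A quick sketch (my-image and myusernameondocker are placeholders):

docker tag my-image myusernameondocker/my-image
docker push myusernameondocker/my-image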

Posted by Uli Köhler in Container, Docker

How to build & upload a Dockerized application to Google Container Registry in 5 minutes

This post provides an easy example of how to build & upload your application to the private Google Container Registry. We assume you have already set up your project and installed Docker. In this example, we’ll build & upload pseudo-perseus v1.0. Since this is a NodeJS-based application, we also assume that you installed a recent version of NodeJS and NPM (see our previous article on how to do that using Ubuntu).

First we configure docker to be able to authenticate to Google:

gcloud auth configure-docker

Now we can checkout the repository and install the NPM packages:

git clone https://github.com/ulikoehler/pseudo-perseus.git
cd pseudo-perseus
git checkout v1.0
npm install

Now we can build the local docker image (we directly name it so that it can be uploaded to the Google Container Registry. Be sure to use the correct google cloud project ID!):

docker build -t eu.gcr.io/myproject-123456/pseudo-perseus:v1.0 .

The next step is to upload the image:

docker push eu.gcr.io/myproject-123456/pseudo-perseus:v1.0
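
Once pushed, you can pull the image on any machine that is authenticated for your project:

docker pull eu.gcr.io/myproject-123456/pseudo-perseus:v1.0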

For reference see the official Container Registry documentation.

Posted by Uli Köhler in Cloud, Container, Docker

Fixing gcloud WARNING: `docker-credential-gcloud` not in system PATH

Problem:

You want to configure docker to be able to access Google Container Registry using

gcloud auth configure-docker

but you see this warning message:

WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
gcloud credential helpers already registered correctly.

Solution:

Install docker-credential-gcloud using

sudo gcloud components install docker-credential-gcr

In case you see this error message:

ERROR: (gcloud.components.install) You cannot perform this action because this Cloud SDK installation is managed by an external package manager.
Please consider using a separate installation of the Cloud SDK created through the default mechanism described at: https://cloud.google.com/sdk/

use this alternate installation command instead (this command is for Linux, see the official documentation for other operating systems):

VERSION=1.5.0
OS=linux
ARCH=amd64

curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" \
  | tar xz --to-stdout ./docker-credential-gcr \
  | sudo tee /usr/bin/docker-credential-gcr > /dev/null && sudo chmod +x /usr/bin/docker-credential-gcr
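
You can verify that the binary is now on your PATH:

which docker-credential-gcr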

After that, configure docker using

docker-credential-gcr configure-docker

Now you can retry running your original command.

For reference, see the official documentation.

Posted by Uli Köhler in Cloud, Container, Docker, Linux

How to fix ERROR: Couldn’t connect to Docker daemon at http+docker://localhost – is it running?

Problem:

You want to run a docker container or docker-compose application, but once you try to start it, you see this error message:

ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.

Solution:

There are two possible reasons for this error message.

The common reason is that the user you are running the command as does not have the permissions to access docker.

You can fix this either by running the command as root using sudo (since root has the permission to access docker) or by adding your user to the docker group:

sudo usermod -a -G docker $USER

and then logging out and logging back in completely (or restarting the system/server).

The other reason is that you have not started docker. On Ubuntu, you can start it using

sudo systemctl enable docker # Auto-start on boot
sudo systemctl start docker # Start right now
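
To check whether the daemon is actually running, query its status:

sudo systemctl status docker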


TechOverflow’s Docker install instructions automatically take care of starting & enabling the service.

Posted by Uli Köhler in Container, Docker, Linux

ElasticSearch docker-compose.yml and systemd service generator

Just looking for a simple solution with a single ElasticSearch node? See our new post Simple Elasticsearch setup with docker-compose

New: Now with ElasticSearch 7.13.4

This generator allows you to generate a systemd service file for a docker-compose setup that is automatically restarted if it fails.

Continue reading →

Posted by Uli Köhler in Container, Databases, Docker, ElasticSearch, Generators, Linux

docker-compose systemd .service generator

This generator allows you to generate a systemd service file for a docker-compose setup that is automatically restarted if it fails.

Continue reading →

Posted by Uli Köhler in Docker, Generators, Linux

How to fix ‘elasticsearch exited with code 78’

Problem:

You want to run ElasticSearch using docker, but the container immediately stops again with this error message:

elasticsearch exited with code 78

or

elasticsearch2 exited with code 78

Solution:

If you look through the entire log message, you’ll find lines like

elasticsearch     | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Therefore we need to increase the vm.max_map_count limit:

sudo sysctl -w vm.max_map_count=524288

Now we need to edit /etc/sysctl.conf so the setting will also be in effect after a reboot.

Look for any vm.max_map_count line in /etc/sysctl.conf. If you find one, set its value to 524288. If there is no such line present, add the line

vm.max_map_count=524288

to the end of /etc/sysctl.conf.
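
As a shortcut, you can append the setting and apply it in one go (this simple sketch blindly appends, so only use it if no vm.max_map_count line exists yet):

echo 'vm.max_map_count=524288' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p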

Original source: GitHub


Posted by Uli Köhler in Container, Databases, Docker, Linux

How to install docker and docker-compose on Ubuntu in 30 seconds

Use our script:

wget -qO- https://techoverflow.net/scripts/install-docker.sh | sudo bash /dev/stdin

After that, log out and log back in (or close your SSH session and re-connect); otherwise you will only be able to run the docker client as root – see Solving Docker permission denied while trying to connect to the Docker daemon socket.

Or do it manually:

Copy and paste these command blocks into your Linux shell, one block at a time – only paste the next block once the previous one has finished!

# Install prerequisites
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Add docker's package signing key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add repository
sudo add-apt-repository -y "deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install latest stable docker stable version
sudo apt-get update
sudo apt-get -y install docker-ce
# Install docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/v2.10.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod a+x /usr/local/bin/docker-compose
# Enable & start docker
sudo systemctl enable docker
sudo systemctl start docker

Note that this will install Docker as a deb package whereas docker-compose will be downloaded to /usr/local/bin.

In case you intend to use docker under your normal user account (i.e. without sudo), you might want to add that user to the docker group (we recommend you do this):

sudo usermod -a -G docker $USER

This setting requires that you log out and log back in (or completely terminate your SSH session and open a new one) in order to take effect.

In case that does not work and you still get permission denied error messages, try rebooting your computer.

In order to check whether your user is currently a member of the docker group, run

groups

Example output:

uli adm tty lp uucp dialout cdrom sudo dip plugdev lpadmin sambashare vboxusers lxd docker

These are the groups your user currently belongs to (as said before, changes only take effect after logging out and logging back in or terminating and re-opening your SSH session). If docker is listed in the output of groups (tip: it’s typically near the end in case you have just added it!), you should be able to access the docker socket. See the Background information section of Solving Docker permission denied while trying to connect to the Docker daemon socket for more details on docker sockets and their permissions.
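
Once the group membership is active, you can verify that docker works without sudo by running a test container:

docker run --rm hello-world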

Posted by Uli Köhler in Container, Docker, Linux

How to backup Redmine using the Bitnami Docker image

In a previous post I detailed how to install Redmine on Linux using the excellent Bitnami docker image.

This post will teach you how to easily make an online backup of your Redmine installation. Note that automating the backup is not within the scope of this post.

We assume that Redmine is installed as shown in my previous post in /var/lib/redmine and that you want to back up to my.backup.server:~/redmine-backup/ using rsync.

Backing up the Redmine data

This is pretty easy, as the data is all in just one directory. You can sync it using

rsync --checksum -Pavz /var/lib/redmine/redmine_data my.backup.server:~/redmine-backup/

Note that old versions of files in redmine_data will be overwritten, however files that are deleted locally will not be deleted on the backup server. To me, this seems like a good compromise between the ability to recover deleted files and the used storage space.

Backing up the Redmine database

This part is slightly more complicated, since we need to access the MariaDB server running in a different container. Important note: The container ID can change, so it is not sufficient to find the container ID once and keep using it. You need to determine the appropriate ID each time you do a backup. See below for instructions on how to do that.

Full command:

docker exec $(docker container ls | grep redmine_mariadb_1 | cut -d' ' -f1) mysqldump -uroot bitnami_redmine | xz -e9 -zc - > redmine-database-dump-$(date -I).sql.xz

Let’s break it down:

  • docker exec (container ID) (command): Run a command on a running docker container. Note that we don't pass -t here: allocating a pseudo-TTY would mangle the dump when piping its output.
  • docker container ls | grep redmine_mariadb_1 | cut -d' ' -f1: Get the ID (first field of the output cut -d' ' -f1) of the running docker container named redmine_mariadb_1
  • mysqldump -uroot bitnami_redmine: This is run inside the docker container and dumps the Redmine database as SQL to stdout. No password is necessary since the Bitnami MariaDB image allows access without any password.
  • xz -e9 -zc -: Takes the data from mysqldump from stdin (-), compresses it using maximum compression settings (-e9 -z) and writes the compressed data to stdout.
  • > redmine-database-dump-$(date -I).sql.xz: Writes the compressed data from xz into a file called redmine-database-dump-(current date).sql.xz in the current directory.

The resulting file is called e.g. redmine-database-dump-2019-02-01.sql.xz and it’s placed in the current directory. Ensure that you run the command in a suitable directory. Run it in /tmp if you don’t know which directory might be suitable.

Now we can rsync it to the server:

rsync --checksum -Pavz redmine-database-dump-*.sql.xz my.backup.server:~/redmine-backup/

Since the filename contains the current date, this approach will not overwrite old daily backups of the database, so you can restore your database very flexibly.
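
In case you ever need to restore such a backup, you can feed the decompressed dump back into the MariaDB container (a sketch assuming the same container naming as above – note that this overwrites the current database):

xz -dc redmine-database-dump-2019-02-01.sql.xz | docker exec -i $(docker container ls | grep redmine_mariadb_1 | cut -d' ' -f1) mysql -uroot bitnami_redmine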

Posted by Uli Köhler in Container, Docker, Linux, Project management

How to use custom themes with the Bitnami Redmine Docker image

In a previous post I detailed how to install Redmine on Linux using the excellent Bitnami docker image.

This post shows you how to install a custom theme like A1 (which I used successfully for more than 5 years) if you use the bitnami Docker image. We will assume that you installed redmine in /var/lib/redmine and your systemd service is called redmine.

Note: If you get any permission denied errors, try running the same command using sudo.

First, we need to create the themes directory.

sudo mkdir /var/lib/redmine/themes

Next, we need to copy the current (default) themes to that directory, since Redmine won’t be able to start up if the default theme isn’t available in the correct version.

In order to do this, we must first ensure that your container is running:

sudo systemctl start redmine

Now we can find out the container ID of the running redmine container:

uli:/var/lib/redmine$ docker container ps | grep redmine
ae4de10d0b41        bitnami/redmine:latest    "/app-entrypoint.sh …"   30 minutes ago      Up 30 minutes   0.0.0.0:3718->3000/tcp   redmine_redmine_1
c231d11c48e9        bitnami/mariadb:latest    "/entrypoint.sh /run…"   30 minutes ago      Up 30 minutes   3306/tcp                 redmine_mariadb_1

From these lines, you need to select the line that says redmine_redmine_1 at the end. The one that lists redmine_mariadb_1 at the end is the database container and we don’t need that one for this task.

From that line, copy the first column – this is the container ID – e.g. ae4de10d0b41 in this example.

Now we can copy the default theme folder:

docker cp ae4de10d0b41:/opt/bitnami/redmine/public/themes /var/lib/redmine/themes
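
If you prefer not to copy the container ID manually, you can substitute it in one step using the same grep approach as above:

docker cp $(docker container ls | grep redmine_redmine_1 | cut -d' ' -f1):/opt/bitnami/redmine/public/themes /var/lib/redmine/themes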

Now copy your custom theme (e.g. the a1 folder) to /var/lib/redmine/themes.

The next step is to fix the permissions. The bitnami container uses the user with UID 1001, so we need to change the owner to that. Repeat this every time you change something in the themes directory:

sudo chown -R 1001:1001 /var/lib/redmine/themes

At this point we need to edit the docker-compose config (in /var/lib/redmine/docker-compose.yml) to mount /var/lib/redmine/themes in the correct directory. This is pretty easy: Just add - '/var/lib/redmine/themes:/opt/bitnami/redmine/public/themes' to the volumes section of the redmine container.

The finished config file will look like this:

version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - '/var/lib/redmine/mariadb_data:/bitnami'
  redmine:
    image: 'bitnami/redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - [email protected]
      - SMTP_HOST=smtp.gmail.com
      - SMTP_PORT=25
      - [email protected]
      - SMTP_PASSWORD=yourGmailPassword
    ports:
      - '3718:3000'
    volumes:
      - '/var/lib/redmine/redmine_data:/bitnami'
      - '/var/lib/redmine/themes:/opt/bitnami/redmine/public/themes'
    depends_on:
      - mariadb

Now you can restart Redmine:

sudo systemctl restart redmine

and set your new theme by selecting it in Administration -> Settings -> Display.

Posted by Uli Köhler in Container, Docker, Linux, Project management

Running Gitlab CE via docker behind a reverse proxy on Ubuntu

Similarly to my previous article about installing Redmine via docker behind a reverse proxy, this article details how to run Gitlab CE via docker behind an nginx reverse proxy. This setup is useful if, like me, you run an instance of Redmine, an instance of Gitlab and tens of other services on the same virtual server.

While the Gitlab CE docker container is nicely preconfigured for standalone use on a dedicated VPS, running it behind a reverse proxy is not supported and will lead to a multitude of error messages – in effect, requiring lots of extra work to get up and running.

Note that we will not set up GitLab for SSH access. This is possible using this setup, but usually causes more trouble than it is worth. See this article on how to store git https passwords so you don’t have to enter your password every time.

Installing Docker & Docker-Compose

# Install prerequisites
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Add docker's package signing key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add repository
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install latest stable docker stable version
sudo apt-get update
sudo apt-get -y install docker-ce
# Install docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod a+x /usr/local/bin/docker-compose

# Add current user to the docker group
sudo usermod -a -G docker $USER
# Enable & start docker service
sudo systemctl enable docker
sudo systemctl start docker

After running this shell script, log out & login from the system in order for the docker group to be added to the current user.

Creating the directory & docker-compose configuration

We will install Gitlab in /var/lib/gitlab which will host the data directories and the docker-compose script. You can use any directory if you use it consistently in all the configs (most importantly, docker-compose.yml and the systemd service).

# Create directories
sudo mkdir /var/lib/gitlab

Next, we’ll create /var/lib/gitlab/docker-compose.yml.

There’s a couple of things you need to change here:

  • Set gitlab_rails['gitlab_email_from'] and gitlab_rails['gitlab_email_display_name'] to whatever sender address & name you want emails to be sent from
  • Set the SMTP credentials (gitlab_rails['smtp_address'], gitlab_rails['smtp_port'], gitlab_rails['smtp_user_name'], gitlab_rails['smtp_password'] & gitlab_rails['smtp_domain']) to a valid SMTP server. In rare cases you also have to change the other gitlab_rails['smtp_...'] settings.
  • You need to change all 4 occurrences of gitlab.mydomain.de to your domain.
  • The ports configuration, in this case '9080:80', means that Gitlab will be mapped to port 9080 on the local PC. This port is chosen somewhat arbitrarily: as we will run Gitlab behind an nginx reverse proxy, it does not need to be any port in particular, but it must not be used by anything else and you need to use the same port everywhere. Leave 80 as-is and only change 9080 if required.
gitlab:
   image: 'gitlab/gitlab-ce:latest'
   restart: always
   hostname: 'gitlab.mydomain.de'
   environment:
     GITLAB_OMNIBUS_CONFIG: |
       external_url 'https://gitlab.mydomain.de'
       letsencrypt['enable'] = false
       # Email
       gitlab_rails['gitlab_email_enabled'] = true
       gitlab_rails['gitlab_email_from'] = '[email protected]'
       gitlab_rails['gitlab_email_display_name'] = 'My GitLab'
       # SMTP
       gitlab_rails['smtp_enable'] = true
       gitlab_rails['smtp_address'] = "mail.mydomain.de"
       gitlab_rails['smtp_port'] = 25
       gitlab_rails['smtp_user_name'] = "[email protected]"
       gitlab_rails['smtp_password'] = "yourSMTPPassword"
       gitlab_rails['smtp_domain'] = "mydomain.de"
       gitlab_rails['smtp_authentication'] = "login"
       gitlab_rails['smtp_enable_starttls_auto'] = true
       gitlab_rails['smtp_tls'] = true
       gitlab_rails['smtp_openssl_verify_mode'] = 'none'
       # Reverse proxy nginx config
       nginx['listen_port'] = 80
       nginx['listen_https'] = false
       nginx['proxy_set_headers'] = {
         "X-Forwarded-Proto" => "https",
         "X-Forwarded-Ssl" => "on",
         "Host" => "gitlab.mydomain.de",
         "X-Real-IP" => "$$remote_addr",
         "X-Forwarded-For" => "$$proxy_add_x_forwarded_for",
         "Upgrade" => "$$http_upgrade",
         "Connection" => "$$connection_upgrade"
       }
   ports:
     - '9080:80'
   volumes:
     - './config:/etc/gitlab'
     - './logs:/var/log/gitlab'
     - './data:/var/opt/gitlab'

Setting up the systemd service

Next, we’ll configure the systemd service in /etc/systemd/system/gitlab.service.

Set User=... to your preferred user in the [Service] section. That user needs to be a member of the docker group. Also check if the WorkingDirectory=... is correct.

[Unit]
Description=Gitlab
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/gitlab
# Shutdown container (if running) before the unit is started
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down -v

[Install]
WantedBy=multi-user.target
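
Since we just created a new unit file, make systemd pick it up first:

sudo systemctl daemon-reload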

Now we can enable and start the gitlab service:

sudo systemctl enable gitlab
sudo systemctl start gitlab

The output of sudo systemctl start gitlab should be empty. In case you see

Job for gitlab.service failed because the control process exited with error code.
See "systemctl status gitlab.service" and "journalctl -xe" for details.

you can debug the issue using journalctl -xe and journalctl -e.

The first startup usually takes about 10 minutes, so grab at least one cup of coffee. You can follow the progress using journalctl -xefu gitlab. Once you see lines like

Dec 17 17:28:04 instance-1 docker-compose[4087]: gitlab_1  | {"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"duration":28.82,"view":22.82,"db":0.97,"time":"2018-12-17T17:28:03.252Z","params":[],"remote_ip":null,"user_id":null,"username":null,"ua":null}

the startup is finished.

Now you can check if GitLab is running using

wget -O- http://localhost:9080/

(if you changed the port config before, you need to use your custom port in the URL).

If it worked, wget will show debug output. Since gitlab will automatically redirect you to your domain (gitlab.mydomain.de in this example), you should see something like

--2018-12-17 17:28:32--  http://localhost:9080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:9080... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://gitlab.mydomain.de/users/sign_in [following]
--2018-12-17 17:28:32--  https://gitlab.mydomain.de/users/sign_in
Resolving gitlab.mydomain.de (gitlab.mydomain.de)... 35.198.165.121
Connecting to gitlab.mydomain.de (gitlab.mydomain.de)|35.198.165.121|:443... failed: Connection refused.

Since we have not set up nginx as a reverse proxy yet, it’s totally fine that it’s saying connection refused. The redirection worked if you see the output listed above.

Setting up the nginx reverse proxy (optional but recommended)

We’ll use nginx to proxy the requests from a certain domain (using Apache, if you already run it, is also possible, but that is outside the scope of this tutorial). Install nginx using

sudo apt -y install nginx

First, you’ll need a domain name with DNS configured. For this example, we’ll assume that your domain name is gitlab.mydomain.de! You need to change it to your domain name!

First, we’ll create the config file in /etc/nginx/sites-enabled/gitlab.conf. Remember to replace gitlab.mydomain.de by your domain name! If you use a port different from 9080, replace that as well.

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    server_name gitlab.mydomain.de;

    access_log /var/log/nginx/gitlab.access_log;
    error_log /var/log/nginx/gitlab.error_log info;

    location / {
        proxy_pass http://127.0.0.1:9080; # docker container listens here
        proxy_read_timeout 3600s;
        proxy_http_version 1.1;
        # Websocket connection
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    listen 80;
}

Now run sudo nginx -t to test if there are any errors in the config file. If everything is alright, you’ll see

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Once you have fixed all errors, if any, run sudo service nginx reload to apply the configuration.

We need to set up a Let’s Encrypt SSL certificate before we can check if Gitlab is working:

Securing the nginx reverse proxy using Let’s Encrypt

First we need to install certbot and the certbot nginx plugin in order to create & install the certificate in nginx:

sudo apt -y install certbot python3-certbot-nginx

Fortunately certbot automates most of the process of installing & configuring SSL and the certificate. Run

sudo certbot --nginx

It will ask you to enter your Email address, agree to the terms of service, and whether you want to receive the EFF newsletter.

After that, certbot will ask you to select the correct domain name:

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: gitlab.mydomain.de
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):

In this case, there is only one domain name (there will be more if you have more domains active on nginx!).

Therefore, enter 1 and press enter. certbot will now generate the certificate. In case of success you will see an output including a line like

Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/gitlab.mydomain.de.conf

Now it will ask you whether to redirect all requests to HTTPS automatically:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 

Choose Redirect here: Type 2 and press enter. Now you can login to GitLab and finish the installation.

You need to renew the certificate every 3 months for it to stay valid, and run sudo service nginx reload afterwards to use the new certificate. If you fail to do this, users will see certificate expired error messages and won’t be able to access Gitlab easily! See this post for details on how to mostly automate this process!
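
A minimal sketch of how the renewal can be automated, e.g. from a daily cron job (certbot only renews certificates that are close to expiry, and the post-hook reloads nginx after a successful renewal):

sudo certbot renew --post-hook "service nginx reload"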

Setting up Gitlab

Now you can open a browser and have a first look at your new GitLab installation.

Set the new password and then login with the username root and your newly set password.

After that, open the admin area by clicking the wrench icon in the purple navigation bar at the top.

In the navigation bar on the left, click on Settings (it’s at the bottom, so you need to scroll down) and then click on General.

Click the Expand button to the right of Visibility and access controls. Scroll down until you see Enabled Git access protocols and select Only HTTP(S) in the combo box.

Then click the green Save changes button.

Since we have disabled SSH access (which we didn’t set up in the first place), you can now use GitLab. A good place to start is to create a new project and try checking it out. See this article on how to store git https passwords so you don’t have to enter your git password every time.

Note: If GitLab doesn’t send emails, check config/gitlab.rb, search for smtp and if necessary fix the SMTP settings there. After that, run sudo systemctl stop gitlab && sudo systemctl start gitlab

Posted by Uli Köhler in Container, Docker, git, nginx, Version management

How to disable Let’s Encrypt in the Gitlab CE docker image

Note: Previous version of this post listed letsencrypt['enabled'] = false instead of letsencrypt['enable'] = false (the d in enabled is missing in the correct version) – see this GitLab issue for more details. Thanks to Jonas Hohmann for informing me about this.

Problem:

You want to run the Gitlab CE docker image, but since you want to run it together with other services behind a reverse proxy, you see an error message like this:

gitlab_1  | letsencrypt_certificate[gitlab.mydomain.com] (letsencrypt::http_authorization line 3) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 20) had an error: RuntimeError: [gitlab.mydomain.com] Validation failed for domain gitlab.mydomain.com

Solution:

Add

letsencrypt['enable'] = false

to GITLAB_OMNIBUS_CONFIG. See this file on GitHub for more Let’s Encrypt-related configs you can add.

In docker-compose.yml it could look like this:

gitlab:
   image: 'gitlab/gitlab-ce:latest'
   restart: always
   hostname: 'gitlab.mydomain.com'
   environment:
     GITLAB_OMNIBUS_CONFIG: |
       external_url 'https://gitlab.mydomain.com'
       letsencrypt['enable'] = false
   ports:
     - '7080:80'
     - '1022:22'
   volumes:
     - '/var/lib/gitlab/config:/etc/gitlab'
     - '/var/lib/gitlab/logs:/var/log/gitlab'
     - '/var/lib/gitlab/data:/var/opt/gitlab'


Posted by Uli Köhler in Container, Docker

How to easily install Redmine using Docker Images

Note: Also see this followup post on how to use custom themes in this setup and this followup post on how to backup Redmine using this setup.

This tutorial shows you step-by-step the easiest method of setting up a fresh redmine installation I have found so far. The commands have been tested on Ubuntu 18.04, but they should work with minimal modification on other DEB-based distributions.

Installing Docker & Docker-Compose

Please follow the instructions in How to install docker and docker-compose on Ubuntu in 30 seconds

Creating the directory & docker-compose configuration

We will install redmine in /var/lib/redmine which will host the data directories and the docker-compose script.

# Create directories
sudo mkdir /var/lib/redmine
sudo mkdir -p /var/lib/redmine/redmine_data /var/lib/redmine/mariadb_data
# Set correct permissions for the directories
sudo chown -R $USER:docker /var/lib/redmine
sudo chown -R 1001:1001 /var/lib/redmine/redmine_data /var/lib/redmine/mariadb_data

Next, we’ll create /var/lib/redmine/docker-compose.yml.

There’s a couple of things you need to change here:

  • Set REDMINE_EMAIL to the email of the admin user you want to use (usually that is your email!)
  • Set the SMTP credentials (SMTP_HOST, SMTP_PORT, SMTP_USER and SMTP_PASSWORD) to a valid SMTP server. SMTP_TLS defaults to true – in the rare case that your SMTP server does not support TLS, you need to set it to false.
  • The ports configuration, in this case '3718:3000', means that Redmine will be mapped to port 3718 on the local PC. This port is chosen somewhat arbitrarily: as we will run redmine behind an nginx reverse proxy, it does not need to be any port in particular, but it must not be used by anything else and you need to use the same port everywhere. Leave 3000 as-is and only change 3718 if required.

Note that you do not need to change REDMINE_PASSWORD – when you login for the first time, redmine will force you to change the password anyway.

version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - '/var/lib/redmine/mariadb_data:/bitnami'
  redmine:
    image: 'bitnami/redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - [email protected]
      - SMTP_HOST=smtp.gmail.com
      - SMTP_PORT=25
      - [email protected]
      - SMTP_PASSWORD=yourGmailPassword
    ports:
      - '3718:3000'
    volumes:
      - '/var/lib/redmine/redmine_data:/bitnami'
    depends_on:
      - mariadb

Setting up the systemd service

Next, we’ll configure the systemd service in /etc/systemd/system/redmine.service.

Set User=... to your current user in the [Service] section.

[Unit]
Description=Redmine
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=uli
Group=docker
# Shutdown container (if running) before the unit is started
ExecStartPre=/usr/local/bin/docker-compose -f /var/lib/redmine/docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f /var/lib/redmine/docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f /var/lib/redmine/docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

After creating the file, we can enable and start the redmine service:

sudo systemctl enable redmine
sudo systemctl start redmine

The output of sudo systemctl start redmine should be empty. In case you see

Job for redmine.service failed because the control process exited with error code.
See "systemctl status redmine.service" and "journalctl -xe" for details.

you can debug the issue using journalctl -xe and journalctl -e.

The first startup usually takes about 3 minutes, so grab a cup of coffee.

Now you can check if redmine is running using

wget -qO- http://localhost:3718/

(if you changed the port config before, you need to use your custom port in the URL).

If it worked, it will show a large HTML output, ending with

[...]
<div id="footer">
  <div class="bgl"><div class="bgr">
    Powered by <a href="https://www.redmine.org/">Redmine</a> &copy; 2006-2018 Jean-Philippe Lang
  </div></div>
</div>
</div>
</div>

</body>
</html>

If the output is empty, try wget -O- http://localhost:3718/ to see the error message.

Setting up the nginx reverse proxy (optional but recommended)

We’ll use nginx to proxy the requests from a certain domain (using Apache, if you already run it, is also possible, but that is outside the scope of this tutorial). Install nginx using

sudo apt -y install nginx

First, you’ll need a domain name with DNS configured. For this example, we’ll assume that your domain name is redmine.techoverflow.net! You need to change it to your domain name!

First, we’ll create the config file in /etc/nginx/sites-enabled/redmine.conf. Remember to replace redmine.techoverflow.net by your domain name! If you use a port different from 3718, replace that as well.

server {
    listen 80;
    server_name redmine.techoverflow.net;

    access_log /var/log/nginx/redmine.access_log;
    error_log /var/log/nginx/redmine.error_log info;

    location / {
        proxy_pass http://127.0.0.1:3718; # docker-compose forwarded
        proxy_read_timeout 3600s;
        proxy_http_version 1.1;
    }

}

Now run sudo nginx -t to test if there are any errors in the config file. If everything is alright, you’ll see

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Once you have fixed all errors, if any, run sudo service nginx reload to apply the configuration.

Test the setup by navigating to your domain name in the browser. You should see the redmine interface.

Securing the nginx reverse proxy using Let’s Encrypt

First we need to install certbot and the certbot nginx plugin in order to create & install the certificate in nginx:

sudo apt -y install certbot python3-certbot-nginx

Fortunately certbot automates most of the process of installing & configuring SSL and the certificate. Run

sudo certbot --nginx

It will ask you to enter your Email address, agree to the terms of service, and whether you want to receive the EFF newsletter.

After that, certbot will ask you to select the correct domain name:

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: redmine.techoverflow.net
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):

In this case, there is only one domain name (there will be more if you have more domains active on nginx!).

Therefore, enter 1 and press enter. certbot will now generate the certificate. In case of success you will see

Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/redmine.techoverflow.net.conf

Now it will ask you whether to redirect all requests to HTTPS automatically:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 

Choose Redirect here: Type 2 and press enter. Now you can login to redmine and finish the installation.

You need to renew the certificate every 3 months for it to stay valid, and run sudo service nginx reload afterwards to use the new certificate. If you fail to do this, users will see certificate expired error messages and won’t be able to access Redmine easily! See this post for details on how to mostly automate this process!

Setting up Redmine

Go to your domain name (if you have followed the instructions above, it should automatically redirect you to HTTPS). Click Login at the top right and log in with the username admin and the default password redmineadmin. Upon first login, it will require you to change the password to a new, more secure one.

I won’t describe in detail how to set up Redmine for your project. However there are three things you should take care of immediately after the first login:

  1. Configure the correct domain name: Go to Administration -> Settings and set Host name and path to your domain name, e.g. redmine.techoverflow.net. Set Protocol to HTTPS. You can also set a custom name for your Redmine installation under Application Title
  2. Still under Administration -> Settings, go to the Email Notifications tab, set an appropriate sender email address under Emission email address (usually you would use [email protected] here, but you might want to use your SMTP username for some SMTP providers like GMail)
  3. Scroll down to the bottom of the Email Notifications page and click Send a test email which will send a test email to the current redmine user’s email address. Unless you have changed it, the default is the address configured in REDMINE_EMAIL in /var/lib/redmine/docker-compose.yml.

In case the email does not work, change SMTP_...=... in /var/lib/redmine/docker-compose.yml, but note that you also have to change it in /var/lib/redmine/redmine_data/redmine/conf/configuration.yml! After doing the changes, restart redmine by

sudo systemctl restart redmine

which will use the new configuration from the config file.

Block access to the forwarded port using ufw (optional)

ufw is a simple firewall for Ubuntu. Use sudo apt install ufw to install it and sudo ufw enable to activate it. Once enabled, ufw will block incoming traffic on most ports, including port 3718 or any other custom port you might have used, so we explicitly allow SSH, HTTP and HTTPS.

In order to enable it and allow the required services, use

sudo ufw enable
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https

Remember to add any ports you need to have open to the list as well. See the ufw docs for more information.
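
You can inspect the resulting ruleset at any time using

sudo ufw status verbose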

Posted by Uli Köhler in Container, Docker, Linux, Project management

How to run ‘docker-compose up’ in the background

In order to run docker-compose up in the background, use

docker-compose up -d

The -d option means --detach, i.e. the process is detached from the foreground shell you are running.
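
To follow the log output of a detached setup, or to stop it again, you can use:

docker-compose logs -f
docker-compose down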

Posted by Uli Köhler in Container, Docker, Linux

Solving Bitnami Docker Redmine ‘cannot create directory ‘/bitnami/mariadb’: Permission denied’

Problem:

You are setting up a docker-based redmine installation using the bitnami image, but you’re getting this error message when you use a host directory mounted as a volume:

cannot create directory '/bitnami/mariadb': Permission denied

Solution:

Run

sudo chown -R 1001:1001 <directory>

on the host directories used by both the MariaDB container and the Redmine container.

In order to find the directories, look for these lines in the docker-compose YML file:

# Example: This can be found in the mariadb section:
    volumes:
      - '/var/lib/myredmine/mariadb_data:/bitnami'
# Example: This can be found in the redmine section
    volumes:
      - '/var/lib/myredmine/redmine_data:/bitnami'

In this example, you would have to run

sudo chown -R 1001:1001 /var/lib/myredmine/mariadb_data /var/lib/myredmine/redmine_data

and then restart the container:

docker-compose down
docker-compose up # Use 'docker-compose up -d' to run in the background
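
You can verify the new ownership afterwards: ls -ln shows numeric user and group IDs, which should now read 1001.

ls -ln /var/lib/myredmine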


Posted by Uli Köhler in Container, Docker, Linux