Networking

Minimal local nginx setup using Docker

If you have not installed Docker, see our guide at How to install docker and docker-compose on Ubuntu in 30 seconds

1. Create your nginx config file (my-nginx.conf). This is a template that reverse-proxies TechOverflow:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        proxy_pass https://techoverflow.net;
        proxy_http_version 1.1;
    }
}

2. Start nginx using docker:

docker run -it -p 80:80 --rm -v $(pwd)/my-nginx.conf:/etc/nginx/conf.d/default.conf nginx:latest

3. Go to http://localhost and see the result!

Explanation of the docker command:

  • docker run -it: Create a new docker container and run it in interactive mode, i.e. it will not run in the background; once you kill the command, nginx will exit. (A detached variant that keeps running in the background is sketched after this list.)
  • -p 80:80: Makes port 80 of the nginx server (the standard HTTP port) available on the host’s port 80. The first 80 is the host port whereas the second port 80 is the container’s port.
  • --rm: Once the container is stopped, delete it!
  • -v $(pwd)/my-nginx.conf:/etc/nginx/conf.d/default.conf: Map my-nginx.conf in the current directory ($(pwd)) to /etc/nginx/conf.d/default.conf on the container.
  • nginx:latest: In the container run the official nginx image from DockerHub in the latest version.
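
If you prefer nginx to keep running in the background instead of occupying your terminal, a detached variant could look like this (a sketch: it swaps -it/--rm for -d plus a container name; the name my-nginx is arbitrary):

# Run detached in the background
docker run -d --name my-nginx -p 80:80 -v $(pwd)/my-nginx.conf:/etc/nginx/conf.d/default.conf nginx:latest
# Stop and remove it again once you are done
docker stop my-nginx && docker rm my-nginx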

Explanation of the nginx config file:

  • server { ... }: Everything inside this block belongs together; you can have multiple server blocks in one config file, e.g. for different domains.
  • listen 80 default_server; Listen on port 80 (the standard HTTP port) and make this the default server, i.e. respond to any domain name that does not have any other server configured.
  • listen [::]:80 default_server; Same as the previous line, but for IPv6. [::] means: Listen on all IPv6 addresses.
  • location / { ... }: Everything inside this block is valid for any URL starting with / i.e. any URL at all. In clauses like location /app { ... } the content of the clause would be valid for URLs starting with /app only, e.g. http://localhost/app/ or http://localhost/app/dashboard.
  • proxy_pass https://techoverflow.net; Forward requests to the current location (/) to the server https://techoverflow.net, i.e. act as a reverse proxy for it.
  • proxy_http_version 1.1; This sets the HTTP version that nginx uses to make the requests to https://techoverflow.net. This is not always necessary but might increase compatibility.
Posted by Uli Köhler in Docker, nginx

How to fix mount: unknown filesystem type ‘smbfs’

Problem:

When you’re trying to mount a Windows network share using a command like

sudo mount -t smbfs //Asus/store_n_go /mnt/

you see this error message:

mount: unknown filesystem type 'smbfs'

Solution:

First ensure samba is installed

sudo apt install samba

then try again using cifs as filesystem type instead of smbfs:

sudo mount -t cifs //Asus/store_n_go /mnt/
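
If the share requires authentication, mount.cifs accepts the usual credential options. A minimal sketch (myuser and mypassword are placeholders for your Windows credentials):

# If mount complains about a missing cifs helper, it is provided by the cifs-utils package:
# sudo apt install cifs-utils
sudo mount -t cifs //Asus/store_n_go /mnt/ -o username=myuser,password=mypassword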

 

Posted by Uli Köhler in Linux, Networking

Running Gitlab CE via docker behind a reverse proxy on Ubuntu

Similarly to my previous article about installing Redmine via docker behind a reverse proxy, this article details how to run GitLab CE in a docker container behind an nginx reverse proxy. I am running an instance of Redmine and an instance of Gitlab on the same virtual server, plus tens of other services, so a dedicated standalone installation is not an option for me.

While the Gitlab CE docker container is nicely preconfigured for standalone use on a dedicated VPS, running it behind a reverse proxy is not supported and will lead to a multitude of error messages – in effect, requiring lots of extra work to get up and running.

Note that we will not setup GitLab for SSH access. This is possible using this setup, but usually makes more trouble than it is worth. See this article on how to store git https passwords so you don’t have to enter your password every time.

Installing Docker & Docker-Compose

# Install prerequisites
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Add docker's package signing key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add repository
sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install latest stable docker stable version
sudo apt-get update
sudo apt-get -y install docker-ce
# Install docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod a+x /usr/local/bin/docker-compose

# Add current user to the docker group
sudo usermod -a -G docker $USER
# Enable & start docker service
sudo systemctl enable docker
sudo systemctl start docker

After running this shell script, log out of & back into the system so that the new docker group membership takes effect for your user.
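
To verify that the installation worked, you can run a few quick checks after logging back in (output varies with the installed versions):

docker --version
docker-compose --version
# Optional: run a throwaway test container
docker run --rm hello-world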

Creating the directory & docker-compose configuration

We will install Gitlab in /var/lib/gitlab which will host the data directories and the docker-compose script. You can use any directory if you use it consistently in all the configs (most importantly, docker-compose.yml and the systemd service).

# Create directories
sudo mkdir /var/lib/gitlab

Next, we’ll create /var/lib/gitlab/docker-compose.yml.

There are a couple of things you need to change here:

  • Set gitlab_rails['gitlab_email_from'] and gitlab_rails['gitlab_email_display_name'] to whatever sender address & name you want emails to be sent from
  • Set the SMTP credentials (gitlab_rails['smtp_address'], gitlab_rails['smtp_port'], gitlab_rails['smtp_user_name'], gitlab_rails['smtp_password'] & gitlab_rails['smtp_domain']) to a valid SMTP server. In rare cases you also have to change the other gitlab_rails['smtp_...'] settings.
  • You need to change all 4 occurrences of gitlab.mydomain.de to your domain.
  • The ports configuration, in this case '9080:80', means that Gitlab will be mapped to port 9080 on the local PC. This port is chosen somewhat arbitrarily – since we will run Gitlab behind an nginx reverse proxy, it does not need to be any particular port (as long as you use the same port everywhere), but it must not be used by anything else. Leave 80 as-is and only change 9080 if required.
gitlab:
   image: 'gitlab/gitlab-ce:latest'
   restart: always
   hostname: 'gitlab.mydomain.de'
   environment:
     GITLAB_OMNIBUS_CONFIG: |
       external_url 'https://gitlab.mydomain.de'
       letsencrypt['enabled'] = false
       # Email
       gitlab_rails['gitlab_email_enabled'] = true
       gitlab_rails['gitlab_email_from'] = '[email protected]'
       gitlab_rails['gitlab_email_display_name'] = 'My GitLab'
       # SMTP
       gitlab_rails['smtp_enable'] = true
       gitlab_rails['smtp_address'] = "mail.mydomain.de"
       gitlab_rails['smtp_port'] = 25
       gitlab_rails['smtp_user_name'] = "[email protected]"
       gitlab_rails['smtp_password'] = "yourSMTPPassword"
       gitlab_rails['smtp_domain'] = "mydomain.de"
       gitlab_rails['smtp_authentication'] = "login"
       gitlab_rails['smtp_enable_starttls_auto'] = true
       gitlab_rails['smtp_tls'] = true
       gitlab_rails['smtp_openssl_verify_mode'] = 'none'
       # Reverse proxy nginx config
       nginx['listen_port'] = 80
       nginx['listen_https'] = false
       nginx['proxy_set_headers'] = {
         "X-Forwarded-Proto" => "https",
         "X-Forwarded-Ssl" => "on",
         "Host" => "gitlab.mydomain.de",
         "X-Real-IP" => "$$remote_addr",
         "X-Forwarded-For" => "$$proxy_add_x_forwarded_for",
         "Upgrade" => "$$http_upgrade",
         "Connection" => "$$connection_upgrade"
       }
   ports:
     - '9080:80'
   volumes:
     - './config:/etc/gitlab'
     - './logs:/var/log/gitlab'
     - './data:/var/opt/gitlab'
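
Before wiring this into systemd, you can optionally let docker-compose validate the file (docker-compose config parses the YAML and prints the merged configuration, or an error if something is off):

cd /var/lib/gitlab
docker-compose config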

Setting up the systemd service

Next, we’ll configure the systemd service in /etc/systemd/system/gitlab.service.

Set User=... to your preferred user in the [Service] section. That user needs to be a member of the docker group. Also check if the WorkingDirectory=... is correct.

[Unit]
Description=Gitlab
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/gitlab
# Shutdown container (if running) when unit is stopped
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down -v

[Install]
WantedBy=multi-user.target
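
Since gitlab.service is a newly created unit file, tell systemd to reload its unit definitions first:

sudo systemctl daemon-reload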

Now we can enable and start the gitlab service:

sudo systemctl enable gitlab
sudo systemctl start gitlab

The output of sudo systemctl start gitlab should be empty. In case you see an error message like

Job for gitlab.service failed because the control process exited with error code.
See "systemctl status gitlab.service" and "journalctl -xe" for details.

you can debug the issue using journalctl -xe and journalctl -e.

The first startup usually takes about 10 minutes, so grab at least one cup of coffee. You can follow the progress using journalctl -xefu gitlab. Once you see lines like

Dec 17 17:28:04 instance-1 docker-compose[4087]: gitlab_1  | {"method":"GET","path":"/-/metrics","format":"html","controller":"MetricsController","action":"index","status":200,"duration":28.82,"view":22.82,"db":0.97,"time":"2018-12-17T17:28:03.252Z","params":[],"remote_ip":null,"user_id":null,"username":null,"ua":null}

the startup is finished.

Now you can check if GitLab is running using

wget -O- http://localhost:9080/

(if you changed the port config before, you need to use your custom port in the URL).

If it worked, wget will print some debug output. Since gitlab automatically redirects you to your domain (gitlab.mydomain.de in this example), you should see something like

--2018-12-17 17:28:32--  http://localhost:9080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:9080... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://gitlab.mydomain.de/users/sign_in [following]
--2018-12-17 17:28:32--  https://gitlab.mydomain.de/users/sign_in
Resolving gitlab.mydomain.de (gitlab.mydomain.de)... 35.198.165.121
Connecting to gitlab.mydomain.de (gitlab.mydomain.de)|35.198.165.121|:443... failed: Connection refused.

Since we have not setup nginx as a reverse proxy yet, it’s totally fine that it’s saying connection refused. The redirection worked if you see the output listed above.

Setting up the nginx reverse proxy (optional but recommended)

We’ll use nginx to proxy the requests for a certain domain to the GitLab container (using Apache instead is also possible if you already run it, but configuring that is outside the scope of this tutorial). Install nginx using

sudo apt -y install nginx

First, you’ll need a domain name with DNS being configured. For this example, we’ll assume that your domain name is gitlab.mydomain.de! You need to change it to your actual domain name!

First, we’ll create the config file in /etc/nginx/sites-enabled/gitlab.conf. Remember to replace gitlab.mydomain.de by your domain name! If you use a port different from 9080, replace that as well.

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    server_name gitlab.mydomain.de;

    access_log /var/log/nginx/gitlab.access_log;
    error_log /var/log/nginx/gitlab.error_log info;

    location / {
        proxy_pass http://127.0.0.1:9080; # docker container listens here
        proxy_read_timeout 3600s;
        proxy_http_version 1.1;
        # Websocket connection
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    listen 80;
}

Now run sudo nginx -t to test if there are any errors in the config file. If everything is alright, you’ll see

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Once you have fixed all errors, if any, run sudo service nginx reload to apply the configuration.

We need to set up a Let’s Encrypt SSL certificate before we can check if Gitlab is working:

Securing the nginx reverse proxy using Let’s Encrypt

First we need to install certbot and the certbot nginx plugin in order to create & install the certificate in nginx:

sudo apt -y install certbot python3-certbot-nginx

Fortunately certbot automates most of the process of installing & configuring SSL and the certificate. Run

sudo certbot --nginx

It will ask you to enter your email address, agree to the terms of service, and decide whether you want to receive the EFF newsletter.

After that, certbot will ask you to select the correct domain name:

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: gitlab.mydomain.de
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):

In this case, there is only one domain name (there will be more if you have more domains active on nginx!).

Therefore, enter 1 and press enter. certbot will now generate the certificate. In case of success you will see an output including a line like

Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/gitlab.conf

Now it will ask you whether to redirect all requests to HTTPS automatically:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 

Choose Redirect here: Type 2 and press enter. Now you can login to GitLab and finish the installation.

You need to renew the certificate every 3 months for it to stay valid, and run sudo service nginx reload afterwards to use the new certificate. If you fail to do this, users will see certificate expired error messages and won’t be able to access Gitlab easily! See this post for details on how to mostly automate this process!
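
In its simplest form, the renewal could look like this (the linked post describes how to automate it, e.g. via a cron job; certbot renew only replaces certificates that are close to expiry):

sudo certbot renew
sudo service nginx reload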

Setting up Gitlab

Now you can open https://gitlab.mydomain.de (your domain) in a browser and have a first look at your new GitLab installation.

Set the new password and then login with the username root and your newly set password.

After that, open the admin area by clicking on the wrench icon in the purple navigation bar at the top.

In the navigation bar on the left, click on Settings (it’s at the bottom – you need to scroll down) and then click on General.

Click the Expand button to the right of Visibility and access controls. Scroll down until you see Enabled Git access protocols and select Only HTTP(S) in the combo box.

Then click the green Save changes button.

Since we have disabled SSH access (which we didn’t set up in the first place), you can now use GitLab. A good place to start is to create a new project and try checking it out. See this article on how to store git https passwords so you don’t have to enter your git password every time.

Note: If GitLab doesn’t send emails, check config/gitlab.rb (i.e. /var/lib/gitlab/config/gitlab.rb in this setup), search for smtp and if necessary fix the SMTP settings there. After that, run sudo systemctl stop gitlab && sudo systemctl start gitlab

Posted by Uli Köhler in Container, Docker, git, nginx, Version management

Fixing ‘netplan apply’ Failed to start NetworkManager.service: Unit NetworkManager.service not found.

Problem:

You’ve configured a wifi or similar (non-ethernet) network in netplan. Your netplan configuration (e.g. in /etc/netplan/50-cloud-init.yaml) looks similar to this:

network:
    ethernets:
        enp0s25:
            addresses: []
            dhcp4: true
    wifis:
        wlxc04a0013c4ca:
            renderer: NetworkManager
            match: {}
            dhcp4: true
            access-points:
                MyWifi:
                    password: "mywifipassword"
    version: 2

 

But when you run

sudo netplan apply

you see an error message like this:

Failed to start NetworkManager.service: Unit NetworkManager.service not found.
Traceback (most recent call last):
  File "/usr/sbin/netplan", line 23, in <module>
    netplan.main()
  File "/usr/share/netplan/netplan/cli/core.py", line 50, in main
    self.run_command()
  File "/usr/share/netplan/netplan/cli/utils.py", line 130, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/apply.py", line 41, in run
    self.run_command()
  File "/usr/share/netplan/netplan/cli/utils.py", line 130, in run_command
    self.func()
  File "/usr/share/netplan/netplan/cli/commands/apply.py", line 101, in command_apply
    utils.systemctl_network_manager('start', sync=sync)
  File "/usr/share/netplan/netplan/cli/utils.py", line 68, in systemctl_network_manager
    subprocess.check_call(command)
  File "/usr/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['systemctl', 'start', '--no-block', 'NetworkManager.service']' returned non-zero exit status 5.

Solution:

The renderer: NetworkManager line tells netplan to use NetworkManager to connect to this network.

The error message tells you that NetworkManager is not installed on your system.

On Ubuntu and Debian, use

sudo apt install network-manager

to install it. On other distributions, try to install network-manager or a similarly named package using your distribution’s package manager.
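
You can verify that the service is now known to systemd before re-applying your configuration (the exact status output will differ):

systemctl status NetworkManager.service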

After that, run

sudo netplan apply

again.

Posted by Uli Köhler in Linux, Networking

How to SSH to an IPv6 address

If your IPv6 address begins with fe80::

This type of IPv6 address is called link-local and is therefore specific to a network interface on your computer. You can use ifconfig to show information about the network interfaces. You are looking for an identifier like eth0, wlan0, enp3s0, wlp4s0 or tap1. For this example we’re using eth0.
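
If ifconfig is not installed on your system (newer distributions often omit it), the ip tool lists the interfaces and their IPv6 addresses as well:

ip -6 addr show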

Now you can connect to the IPv6 using:

ssh <username>@<ipv6 address>%<interface>

for example

ssh user@fe80::21b:21ff:fe22:e865%eth0

Replace <interface> by the correct interface (if you don’t know, try out every interface), replace <ipv6 address> by the correct IP address and replace <username> by the correct username.

If your IPv6 address does NOT begin with fe80::

You can just use

ssh <username>@<ipv6 address>

for example

ssh uli@2a01:4f9:c010:278::1

Replace <ipv6 address> by the correct IP address and replace <username> by the correct username.

Posted by Uli Köhler in Linux, Networking

Routing public IPv6 addresses to your lxc/lxd containers

The enormous amount of IPv6 addresses available to most commercially hosted VPS / root servers with a public IPv6 prefix allows you to route a public IPv6 address to every container that is running on your server. This tutorial shows you how to do that, even if you have no prior experience with routing.

Step 0: Create your LXC container

We assume you have already done this – just for reference, here’s how you can create a container:

lxc launch ubuntu:18.04 my-container

Step 1: Which IP address do you want to assign to your container?

First you need to find out what prefix is routed to your host. Usually you can do that by checking in your provider’s control panel. You’re looking for something like 2a01:4f9:c010:278::1/64. Another option would be to run sudo ifconfig and look for an inet6 line in the section of your primary network interface (this only works if you have configured your server to have an IPv6 address). Note that addresses that start with fe80:: and addresses starting with fd, among others, are not public IPv6 addresses.

Then you can pick a new IPv6 address for your container. Which one you choose – as long as it’s within the prefix – is entirely your decision.

Often, <prefix>::1 is used for the host itself, therefore you could, for example, choose <prefix>::2. Note that some providers use some IP addresses for other purposes. Check your provider’s documentation for details.

If you don’t want to make it easy to find your container’s public IPv6, don’t choose <prefix>::1, <prefix>::2, <prefix>::3 etc. but something more random like <prefix>:af15:99b1:0b05:1, for example 2a01:4f9:c010:278:af15:99b1:0b05:0001. Ensure your IPv6 address has 8 groups of 4 hex digits each!

For this example, we choose the IPv6 address 2a01:4f9:c010:278::8.

Step 2: Find out the ULA of your container

We need to find the ULA (unique local address – similar to a private IPv4 address which is not routed on the internet) of the container. Using lxc, this is quite easy:

uli@myserver:~$ lxc list
+--------------+---------+-----------------------+-----------------------------------------------+
|     NAME     |  STATE  |         IPV4          |                     IPV6                      |
+--------------+---------+-----------------------+-----------------------------------------------+
| my-container | RUNNING | 10.144.118.232 (eth0) | fd42:830b:36dc:3691:216:3eff:fed1:9058 (eth0) |
+--------------+---------+-----------------------+-----------------------------------------------+

You need to look in the IPv6 column and copy the address listed there. In this example, the address is fd42:830b:36dc:3691:216:3eff:fed1:9058.
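
If you want to grab that address in a script instead of reading the table, lxc list can print just the IPv6 column. A sketch (the column key 6 selects IPv6, csv strips the table borders):

lxc list my-container -c 6 --format csv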

Step 3: Setup IPv6 routing

Now we can tell the host Linux to route your chosen public IPv6 to the container’s private IPv6. This is quite easy:

sudo ip6tables -t nat -A PREROUTING -d <public IPv6> -j DNAT --to-destination <container private IPv6>

In our example, this would be

sudo ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058

First, test the command by running it in a shell. If it works (i.e. if it doesn’t print any error message), you can store it permanently, e.g. by adding it to /etc/rc.local (after #!/bin/bash, before exit 0). Advanced users might prefer to add it to /etc/network/interfaces instead.
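
For reference, a minimal /etc/rc.local could then look like this (a sketch using the example addresses from above; the file must be executable, and rc.local already runs as root, so no sudo is needed):

#!/bin/bash
# Route the public IPv6 to the container's private (ULA) address on every boot
ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058
exit 0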

Step 4: Connect to your container using SSH on your public IPv6 (optional)

Note: This step requires that you have working IPv6 connectivity at your local computer. If you are unsure, check at ipv6-test.com

First, open a shell on your container:

lxc exec my-container bash

After running this, you should see a root shell prompt inside your container:

root@my-container:~#

The following commands should be entered in the container shell, not the host!

Now we can create a user to login to (in this example, we create the uli user):

root@my-container:~# adduser uli
Adding user `uli' ...
Adding new group `uli' (1001) ...
Adding new user `uli' (1001) with group `uli' ...
Creating home directory `/home/uli' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for uli
Enter the new value, or press ENTER for the default
        Full Name []: 
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 
Is the information correct? [Y/n]

You only need to enter a password (you won’t see anything on screen when entering it) twice; for all other prompts you can just press enter.

The ubuntu:18.04 lxc image used in this example does not allow SSH password authentication in its default configuration. In order to fix this, change PasswordAuthentication no to PasswordAuthentication yes in /etc/ssh/sshd_config and restart the SSH server by running service sshd restart. Be sure you understand the security implications before you do that!

Now, log out of your container shell by pressing Ctrl+D. The following commands can be entered on your desktop or any other server with IPv6 connectivity.

Now login to your server:

ssh <username>@<public IPv6 address>

in this example:

ssh uli@2a01:4f9:c010:278::8

If you configured everything correctly, you’ll see the shell prompt for your container:

uli@my-container:~$

Note: Don’t forget to configure a firewall for your container, e.g. ufw! Your container’s IPv6 is exposed to the internet and just assuming no one will guess it is not good security practice.
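
A minimal sketch of how that could look from the host, using the my-container name from above (review and adapt the rules before relying on them; this only opens SSH):

lxc exec my-container -- bash -c "apt-get update && apt-get -y install ufw"
lxc exec my-container -- bash -c "ufw allow 22/tcp && ufw --force enable"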

Posted by Uli Köhler in Cloud, Container, Linux, LXC, Networking

How to easily find errors in nginx config files

If you edited some nginx config file and nginx doesn’t want to reload or restart, e.g. with an error message like this:

# service nginx reload
Job for nginx.service failed because the control process exited with error code.
See "systemctl  status nginx.service" and "journalctl  -xe" for details.

you likely have some error in one of your config files.

There’s a simple command to check for errors (you need to run it as root): nginx -t

Example output:

nginx: [emerg] unknown directive "autoindex$" in /etc/nginx/sites-enabled/mysite:31
nginx: configuration file /etc/nginx/nginx.conf test failed

Firstly, the last line tells you that there actually is some error in the config files.
The first line tells you exactly where it is: /etc/nginx/sites-enabled/mysite:31 means: Look in the file /etc/nginx/sites-enabled/mysite, line 31.

In this particular case, the actual error message is unknown directive "autoindex$". By checking the aforementioned file I was able to find out that I accidentally entered autoindex $; instead of autoindex on;

After fixing this issue, nginx -t shows that the configuration file seems correct now:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Note that while most cases of nginx failing to (re)start are caused by issues in the config files, there are some cases in which the config file seems correct and nginx will still not start up. In that case, have a look at the logfile, which is commonly located at /var/log/nginx/error.log. You need to be root in order to view it. I recommend this command:

sudo tail -n 1000 /var/log/nginx/error.log
Posted by Uli Köhler in Linux, nginx

nginx Let’s Encrypt authentication for reverse-proxy sites

Problem:

You have an nginx host that is configured as reverse-proxy-only like this:

server {
    server_name  my.domain;
    [...]
    location / {
        proxy_pass http://localhost:1234;
    }
}

For this host, you want to use Let’s Encrypt to automatically issue a certificate using the webroot method like this:

certbot certonly -a webroot --webroot-path ??? -d my.domain

The reverse-proxied webserver does not provide a webroot to use for the automated authentication process and you want to keep the flexibility of updating the cert at any time without manually modifying the nginx configuration.

Continue reading →

Posted by Uli Köhler in Linux, nginx

Salt: Increase nginx server_names_hash_bucket_size

Problem:

You use saltstack to automatically deploy configuration to your servers. After installing nginx with the default config, you need to increase the server_names_hash_bucket_size because it won’t start up otherwise.

Continue reading →

Posted by Uli Köhler in Linux, nginx

Simple C++ HTTP download using libcurl easy API

Problem

Using the libcurl easy API you want to download a file using HTTP GET. No extended features such as authentication shall be used.

The download result shall be stored in a std::string

Continue reading →

Posted by Uli Köhler in C/C++, Networking