VPN

How to leave netmaker network

Once you have used netclient join to join a netmaker network, you can use

netclient leave --network mynetwork

to permanently leave the network named mynetwork (until you re-join of course).
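
To confirm that the network is really gone, you can list the networks this machine is still joined to (a sketch; the exact output format depends on your netclient version):

```shell
# Print the networks this machine is currently joined to;
# mynetwork should no longer appear after leaving
netclient list
```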


Posted by Uli Köhler in Networking, VPN

Is tailscale available for OpenWRT 19.07?

No, tailscale can’t be installed using opkg on OpenWRT 19.xx. I have experimentally verified this using a MIPSBE router with OpenWRT 19.07.10.

However, tailscale is available on OpenWRT starting from version 21.02 – source: tailscale package page on OpenWRT.

Posted by Uli Köhler in Headscale, Networking, OpenWRT

How to install Netmaker client (netclient) on Ubuntu in 15 seconds

Just run these commands in your shell on Ubuntu to install netclient:

curl -sL 'https://apt.netmaker.org/gpg.key' | sudo tee /etc/apt/trusted.gpg.d/netclient.asc
curl -sL 'https://apt.netmaker.org/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/netclient.list
sudo apt update
sudo apt install netclient

Source: Netmaker installation page

Posted by Uli Köhler in Networking, VPN, Wireguard

iperf3 benchmark of ZeroTier vs Netmaker vs Tailscale vs direct switched connection

In our setup, a virtual machine (running on an XCP-NG host) was connected to my desktop (HP Z240, i7-6700 @ 3.4 GHz, running Ubuntu 22.04) in a purely switched network with 1 Gbit links. Both devices were connected via a MikroTik 10G switch (Marvell chip).

I ran iperf3 -s on the VM and iperf3 -c [IP address] on the desktop. Reverse tests were not performed.
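
If you want to benchmark the reverse direction as well, iperf3 provides flags for this; a sketch using the same placeholder IP address as above:

```shell
# Reverse test: the server (VM) sends, the client (desktop) receives
iperf3 -c [IP address] -R

# Both directions simultaneously (requires iperf3 >= 3.7)
iperf3 -c [IP address] --bidir
```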

Direct switched connection (no VPN)

Connecting to host 10.9.2.103, port 5201
[  5] local 10.9.2.10 port 56848 connected to 10.9.2.103 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  92.8 MBytes   779 Mbits/sec    0    444 KBytes       
[  5]   1.00-2.00   sec  90.7 MBytes   761 Mbits/sec    0    543 KBytes       
[  5]   2.00-3.00   sec  88.6 MBytes   743 Mbits/sec    0    816 KBytes       
[  5]   3.00-4.00   sec  90.0 MBytes   755 Mbits/sec    0    816 KBytes       
[  5]   4.00-5.00   sec  90.0 MBytes   755 Mbits/sec    0    856 KBytes       
[  5]   5.00-6.00   sec  88.8 MBytes   744 Mbits/sec    0    946 KBytes       
[  5]   6.00-7.00   sec  88.8 MBytes   745 Mbits/sec    0    946 KBytes       
[  5]   7.00-8.00   sec  90.0 MBytes   755 Mbits/sec    0    993 KBytes       
[  5]   8.00-9.00   sec  90.0 MBytes   755 Mbits/sec    0    993 KBytes       
[  5]   9.00-10.00  sec  88.8 MBytes   744 Mbits/sec    0    993 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   898 MBytes   754 Mbits/sec    0             sender
[  5]   0.00-10.01  sec   896 MBytes   751 Mbits/sec                  receiver

ZeroTier

Connecting to host 10.80.246.34, port 5201
[  5] local 10.80.246.38 port 35474 connected to 10.80.246.34 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  59.9 MBytes   503 Mbits/sec  338    102 KBytes       
[  5]   1.00-2.00   sec  60.2 MBytes   505 Mbits/sec  313    188 KBytes       
[  5]   2.00-3.00   sec  63.9 MBytes   536 Mbits/sec  176   99.3 KBytes       
[  5]   3.00-4.00   sec  74.3 MBytes   623 Mbits/sec  174    113 KBytes       
[  5]   4.00-5.00   sec  67.7 MBytes   568 Mbits/sec  197   83.2 KBytes       
[  5]   5.00-6.00   sec  72.5 MBytes   609 Mbits/sec  218    228 KBytes       
[  5]   6.00-7.00   sec  61.3 MBytes   514 Mbits/sec  281   77.8 KBytes       
[  5]   7.00-8.00   sec  72.0 MBytes   604 Mbits/sec  213   91.2 KBytes       
[  5]   8.00-9.00   sec  65.4 MBytes   549 Mbits/sec  309    156 KBytes       
[  5]   9.00-10.00  sec  53.9 MBytes   453 Mbits/sec  190    121 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   651 MBytes   546 Mbits/sec  2409             sender
[  5]   0.00-10.01  sec   650 MBytes   545 Mbits/sec                  receiver

Netmaker

Netmaker internally uses a normal (kernel-based) Wireguard connection, so to some extent this is also a test of kernel Wireguard performance.

Connecting to host 10.230.113.3, port 5201
[  5] local 10.230.113.1 port 35534 connected to 10.230.113.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   105 MBytes   881 Mbits/sec    0   1.01 MBytes       
[  5]   1.00-2.00   sec   104 MBytes   870 Mbits/sec   86    422 KBytes       
[  5]   2.00-3.00   sec   101 MBytes   849 Mbits/sec    0    488 KBytes       
[  5]   3.00-4.00   sec  98.8 MBytes   828 Mbits/sec    0    535 KBytes       
[  5]   4.00-5.00   sec  98.8 MBytes   828 Mbits/sec    0    584 KBytes       
[  5]   5.00-6.00   sec   104 MBytes   870 Mbits/sec    0    615 KBytes       
[  5]   6.00-7.00   sec  97.5 MBytes   818 Mbits/sec    7    472 KBytes       
[  5]   7.00-8.00   sec   104 MBytes   870 Mbits/sec    0    522 KBytes       
[  5]   8.00-9.00   sec   101 MBytes   849 Mbits/sec    0    580 KBytes       
[  5]   9.00-10.00  sec   102 MBytes   860 Mbits/sec    0    606 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1016 MBytes   852 Mbits/sec   93             sender
[  5]   0.00-10.00  sec  1014 MBytes   850 Mbits/sec                  receiver

Tailscale

Tailscale 1.28.0 has been used for this test.

During this test, I ensured that the tailscale connection was established using the switched network and was not going through a DERP server or the routed network.

$ tailscale ping 100.64.0.3
pong from vm (fd5d:7b60:4742::3) via 10.9.2.103:41641 in 1ms

Results:

Connecting to host 100.64.0.3, port 5201
[  5] local 100.64.0.2 port 40690 connected to 100.64.0.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  38.3 MBytes   321 Mbits/sec  389   60.0 KBytes       
[  5]   1.00-2.00   sec  37.6 MBytes   315 Mbits/sec  366   43.2 KBytes       
[  5]   2.00-3.00   sec  36.7 MBytes   308 Mbits/sec  431   52.8 KBytes       
[  5]   3.00-4.00   sec  38.5 MBytes   323 Mbits/sec  488   80.3 KBytes       
[  5]   4.00-5.00   sec  29.3 MBytes   246 Mbits/sec  356   38.4 KBytes       
[  5]   5.00-6.00   sec  31.0 MBytes   260 Mbits/sec  351   86.3 KBytes       
[  5]   6.00-7.00   sec  27.1 MBytes   227 Mbits/sec  287   50.4 KBytes       
[  5]   7.00-8.00   sec  26.1 MBytes   219 Mbits/sec  210   46.8 KBytes       
[  5]   8.00-9.00   sec  27.1 MBytes   227 Mbits/sec  261   39.6 KBytes       
[  5]   9.00-10.00  sec  27.5 MBytes   231 Mbits/sec  222   40.8 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   319 MBytes   268 Mbits/sec  3361             sender
[  5]   0.00-10.01  sec   318 MBytes   267 Mbits/sec                  receiver

Summary

The approximate performance expectation in this specific scenario is:

  • Tailscale: 300 Mbit/s
  • ZeroTier: 550 Mbit/s
  • Netmaker: 850 Mbit/s
  • Direct switched network: 750 Mbit/s

Curiously, Netmaker performed better than the direct connection. The reason for this is not known at this point, but a similar effect has been observed in this medium.com article.

Generally, one can see that Tailscale (which internally uses a userspace Wireguard implementation) is approximately half the speed of ZeroTier, which in turn is significantly outperformed by Netmaker.

In a follow-up post I will describe the advantages and disadvantages of these solutions and explore in which scenarios I would use each of them.

Posted by Uli Köhler in Headscale, Networking, Wireguard, ZeroTier

How to install tailscale on XCP-NG host

By installing tailscale on XCP-NG hosts, you can provide easier access to your virtualization host using VPN.

Run the following commands via SSH as root on the XCP-NG host:

sudo yum-config-manager --add-repo https://pkgs.tailscale.com/stable/centos/7/tailscale.repo
sudo yum -y install tailscale

and enable & start the tailscale daemon tailscaled:

systemctl enable --now tailscaled
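
With the daemon running, the host still needs to be logged into your network. Assuming a headscale server (the URL and key below are placeholders), this looks like:

```shell
# Placeholder login server and auth key -- replace with your own values
tailscale up --login-server https://headscale.mydomain.com --authkey ...
```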


Posted by Uli Köhler in Headscale, Networking, Virtualization, VPN

How to install tailscale on Fedora CoreOS

To install tailscale on Fedora CoreOS (this post has been tested on Fedora CoreOS 35), you can use this sequence of commands:

sudo curl -o /etc/yum.repos.d/tailscale.repo https://pkgs.tailscale.com/stable/fedora/tailscale.repo
sudo rpm-ostree install tailscale

Now reboot using

sudo systemctl reboot

Once rebooted, you can enable the service using

sudo systemctl enable --now tailscaled

and then configure tailscale as usual:

sudo tailscale up --login-server .... --authkey ...

Also see our post on How to connect tailscale to headscale server on Linux

Posted by Uli Köhler in CoreOS, Headscale, VPN

How to connect Synology NAS to Headscale

First, install the Tailscale App using the Synology Package manager. Don’t try to initialize using the UI since this will only work with the commercial tailscale service, not with headscale.

Then log in to the NAS using SSH (I’m using the admin account) and run sudo su to get a root shell.

You should see the following shell prompt:

ash-4.4#

Now you can initialize tailscale using the tailscale command similar to our previous post How to connect tailscale to headscale server on Linux. In my case, I needed to use the --reset flag in order for the command to work.

tailscale up --reset --login-server https://headscale.mydomain.com --authkey ... --accept-routes

This will log in to your server just like the normal (non-Synology) tailscale client does.

Posted by Uli Köhler in Headscale, VPN

ufw: How to allow traffic to all ports on specific interface

sudo ufw allow in on tailscale0 to any

This will allow any traffic (including routed traffic, if packet forwarding is enabled) coming from the tailscale0 interface.
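
To review the rule later, or to remove it again, ufw can list rules with index numbers; a sketch (the index depends on your existing rules):

```shell
# Show all rules with their index numbers
sudo ufw status numbered

# Delete the rule again by its number, e.g. rule [1]
sudo ufw delete 1
```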

Posted by Uli Köhler in Linux, Networking, VPN

How to install tailscale on Ubuntu

To install tailscale on any Ubuntu version, you can use the official tailscale install script:

sudo apt -y install curl apt-transport-https
curl -fsSL https://tailscale.com/install.sh | sh
Posted by Uli Köhler in Headscale, Raspberry Pi, VPN

How to install tailscale on Raspberry Pi

Just use the official install command from the tailscale website:

curl -fsSL https://tailscale.com/install.sh | sh


Posted by Uli Köhler in Headscale, Raspberry Pi, VPN

Headscale nginx reverse proxy config

Reverse proxying headscale using nginx is extremely simple. You don’t need any special config.

Here is my config with Let’s Encrypt enabled (part of the config is auto-generated using certbot --nginx).

Ensure that the port (27896) matches the one mapped to the Headscale container port.

server {
    server_name  headscale.mydomain.com;

    location / {
        proxy_pass http://localhost:27896/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_redirect default;
    }

    listen [::]:443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mydomain-wildcard/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain-wildcard/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot


}
server {
    if ($host = headscale.mydomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name  headscale.mydomain.com;

    listen [::]:80; # managed by Certbot
    return 404; # managed by Certbot
}

The proxy_set_header Upgrade/Connection lines are for proxying WebSockets. I haven’t verified whether headscale strictly requires them, but it doesn’t hurt to keep them in there.
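
After installing the config, validate it and reload nginx:

```shell
# Check the configuration for syntax errors, then reload without downtime
sudo nginx -t && sudo systemctl reload nginx
```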

Posted by Uli Köhler in Headscale, Networking, nginx

Headscale docker-compose config with PostgreSQL

This config is intended for larger installations than our sqlite-based standard config. It tends to be slightly easier to back up correctly and will be faster for larger workloads. However, it consumes more RAM, especially for low-workload installations, and you have two Docker containers to worry about during maintenance (though they are managed using a single docker-compose instance). I do not recommend using a shared postgres server, although this is certainly possible.

First, create a random password using

echo POSTGRES_PASSWORD=$(pwgen 30 1) > .env
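
If pwgen is not installed, openssl (present on virtually every system) works just as well; -hex 15 yields exactly 30 characters:

```shell
# Alternative without pwgen: 15 random bytes printed as 30 hex characters
echo POSTGRES_PASSWORD=$(openssl rand -hex 15) > .env
```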

The docker-compose.yml looks like this:

version: '3.5'
services:
  headscale:
    image: headscale/headscale:latest
    volumes:
      - ./config:/etc/headscale/
      - ./data:/var/lib/headscale
    ports:
      - 27896:8080
    command: headscale serve
    restart: unless-stopped
    depends_on:
      - postgres
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=headscale
      - POSTGRES_USER=headscale


Now we create the default headscale config:

mkdir -p ./config
curl https://raw.githubusercontent.com/juanfont/headscale/main/config-example.yaml -o ./config/config.yaml

In config/config.yaml we need to make these changes:

Set server URL:

server_url: https://headscale.mydomain.com

Comment out the sqlite database (add # to the front of every line):

# SQLite config
# db_type: sqlite3
# db_path: /var/lib/headscale/db.sqlite

And uncomment and configure postgres:

# Postgres config
db_type: postgres
db_host: postgres
db_port: 5432
db_name: headscale
db_user: headscale
db_pass: ohngooFaciice2hooGoo1Ahvif3ahl

Make sure all of these are uncommented and that you copy the password from .env. It is extremely important to use a unique password here to prevent unprivileged processes on the host from attacking the Docker containers.

My recommendation is to reverse proxy headscale using traefik or nginx instead of using the builtin Let’s Encrypt / ACME support. Not only does this allow sharing the port & IP address with other services; standard services like Traefik and/or nginx are also much better tested regarding exposure to the internet and hence provide a potential security benefit. Additionally, they make it easier to manage certificates in a service-independent manner and provide an additional layer for debugging.

You might also configure custom IP address ranges:

ip_prefixes:
  - fd5d:7b60:4742::/48
  - 100.64.0.0/10

but this is optional.

For more info regarding autostart etc, see How to setup headscale server in 5 minutes using docker-compose

Posted by Uli Köhler in Headscale, Networking, VPN

How to create namespace on headscale server

Currently you need to create a namespace using the command line.

If running without a container:

headscale namespaces create mynamespace

If running using docker-compose:

docker-compose exec headscale headscale namespaces create mynamespace

If successful, this will show

Namespace created
Posted by Uli Köhler in Headscale, Networking, VPN

How to list headscale namespaces

To list namespaces (which are comparable to accounts) in headscale, run

headscale namespaces list

If you are running headscale using docker-compose, use e.g.

docker-compose exec headscale headscale namespaces list

Example output

ID | Name | Created            
1  | uli  | 2022-01-16 19:54:55
Posted by Uli Köhler in Headscale, Networking, VPN

How to connect tailscale to headscale server on Linux

Also see our guide on How to setup headscale server in 5 minutes using docker-compose

Assuming you are running your headscale server at https://headscale.mydomain.com and you have already created a namespace named mynamespace, use one of the following methods:

Pre-Authkeys method (recommended)

First, create a pre-authkey token which is valid for 24h on the server:

headscale preauthkeys create -e 24h -n mynamespace

or (docker-compose version)

docker-compose exec headscale headscale preauthkeys create -e 24h -n mynamespace

This will generate a pre-auth key such as 3215a1ce7967c11e8ea844b3e199d3c46f9f5e7b660b48fb which you can send to the user.

Now login on the client using

tailscale up --login-server https://headscale.mydomain.com --authkey 3215a1ce7967c11e8ea844b3e199d3c46f9f5e7b660b48fb

Direct login method

tailscale up --login-server https://headscale.mydomain.com

On the client, this will show a URL to open in your browser. The page on the headscale server will in turn give you a command that you need to run on the host running the headscale container. If running headscale using docker-compose, prepend docker-compose exec headscale to the command and replace NAMESPACE with the name of your namespace.

The only reason I don’t recommend this method is that it requires back-and-forth interaction between the user and the administrator, which I don’t consider practical.

Posted by Uli Köhler in Headscale, Linux, Networking, VPN

How to install Tailscale on Ubuntu in less than 1 minute

Just run this sequence of commands to install tailscale. This will automatically determine the correct Ubuntu version:

curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/$(lsb_release -sc).gpg | sudo apt-key add -
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/$(lsb_release -sc).list | sudo tee /etc/apt/sources.list.d/tailscale.list
sudo apt-get update
sudo apt-get install tailscale


Posted by Uli Köhler in Networking, Wireguard

How to setup headscale server in 5 minutes using docker-compose

This headscale setup uses sqlite. With a much lighter memory & CPU footprint than PostgreSQL for simple usecases, I recommend it for almost any installation: headscale doesn’t have to handle that many requests, and sqlite3 is fine for all but the most demanding setups.

First, create the directory where headscale and all the data will reside in (we use /opt/headscale in this example).

sudo mkdir -p /opt/headscale

Now run the following script in /opt/headscale to initialize the files and directories headscale requires:

mkdir -p ./config ./data
touch ./data/db.sqlite
curl https://raw.githubusercontent.com/juanfont/headscale/main/config-example.yaml -o ./config/config.yaml

docker-compose config

Now it’s time to create /opt/headscale/docker-compose.yml:

version: '3.5'
services:
  headscale:
    image: headscale/headscale:latest
    volumes:
      - ./config:/etc/headscale/
      - ./data:/var/lib/headscale
    ports:
      - 27896:8080
    command: headscale serve
    restart: unless-stopped

This will configure headscale to run its HTTP server on port 27896. You can reverse proxy this port to the domain of your choice.

Configuration

Now we should edit the server name in config/config.yaml:

server_url: https://headscale.mydomain.com

Note that you need to restart headscale after each change to the configuration.

Next, see How to create namespace on headscale server for details on how you can create a namespace. Once you have created a namespace (comparable to an account on the commercial tailscale service), you can continue connecting clients (the client software is called tailscale), see e.g. How to connect tailscale to headscale server on Linux

Autostart

Using the method described in our previous post Create a systemd service for your docker-compose project in 10 seconds, we will now set up autostart on boot for headscale using systemd. This command will also start it immediately:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
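
The script derives the unit name from the directory name; assuming it created headscale.service for /opt/headscale (an assumption based on that naming scheme), you can verify the result with:

```shell
# Unit name assumed to match the directory: /opt/headscale -> headscale.service
sudo systemctl status headscale.service
```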

How to view the logs

Use this command to view & follow the logs:

docker-compose logs -f

Example output

headscale_1  | [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
headscale_1  | 
headscale_1  | [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
headscale_1  |  - using env:    export GIN_MODE=release
headscale_1  |  - using code:   gin.SetMode(gin.ReleaseMode)
headscale_1  | 
headscale_1  | [GIN-debug] GET    /metrics                  --> github.com/zsais/go-gin-prometheus.prometheusHandler.func1 (4 handlers)
headscale_1  | [GIN-debug] GET    /health                   --> github.com/juanfont/headscale.(*Headscale).Serve.func2 (4 handlers)
headscale_1  | [GIN-debug] GET    /key                      --> github.com/juanfont/headscale.(*Headscale).KeyHandler-fm (4 handlers)
headscale_1  | [GIN-debug] GET    /register                 --> github.com/juanfont/headscale.(*Headscale).RegisterWebAPI-fm (4 handlers)
headscale_1  | [GIN-debug] POST   /machine/:id/map          --> github.com/juanfont/headscale.(*Headscale).PollNetMapHandler-fm (4 handlers)
headscale_1  | [GIN-debug] POST   /machine/:id              --> github.com/juanfont/headscale.(*Headscale).RegistrationHandler-fm (4 handlers)
headscale_1  | [GIN-debug] GET    /oidc/register/:mkey      --> github.com/juanfont/headscale.(*Headscale).RegisterOIDC-fm (4 handlers)
headscale_1  | [GIN-debug] GET    /oidc/callback            --> github.com/juanfont/headscale.(*Headscale).OIDCCallback-fm (4 handlers)
headscale_1  | [GIN-debug] GET    /apple                    --> github.com/juanfont/headscale.(*Headscale).AppleMobileConfig-fm (4 handlers)
headscale_1  | [GIN-debug] GET    /apple/:platform          --> github.com/juanfont/headscale.(*Headscale).ApplePlatformConfig-fm (4 handlers)
headscale_1  | [GIN-debug] GET    /swagger                  --> github.com/juanfont/headscale.SwaggerUI (4 handlers)
headscale_1  | [GIN-debug] GET    /swagger/v1/openapiv2.json --> github.com/juanfont/headscale.SwaggerAPIv1 (4 handlers)
headscale_1  | [GIN-debug] GET    /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | [GIN-debug] POST   /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | [GIN-debug] PUT    /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | [GIN-debug] PATCH  /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | [GIN-debug] HEAD   /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | [GIN-debug] OPTIONS /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | [GIN-debug] DELETE /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | [GIN-debug] CONNECT /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | [GIN-debug] TRACE  /api/v1/*any              --> github.com/gin-gonic/gin.WrapF.func1 (5 handlers)
headscale_1  | 2022-01-16T19:04:04Z WRN Listening without TLS but ServerURL does not start with http://
headscale_1  | 2022-01-16T19:04:04Z INF listening and serving (multiplexed HTTP and gRPC) on: 0.0.0.0:8080
headscale_1  | 2022-01-16T19:04:04Z INF Setting up a DERPMap update worker frequency=86400000


Posted by Uli Köhler in Headscale, Networking, VPN, Wireguard

TPLink WDR3600 OpenWRT wireguard throughput benchmark

TechOverflow tested Wireguard bandwidth / throughput on the TPLink WDR3600 with OpenWRT 21.02, based on a standard iperf3 TCP benchmark. We did not use pre-shared keys in this setup.

So far we were able to verify that the Wireguard bandwidth is approximately 27 Mbit/s (unidirectional), measured using iperf3.

Connecting to host 192.168.239.254, port 5201
[  5] local 10.9.1.104 port 57502 connected to 192.168.239.254 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  3.48 MBytes  29.2 Mbits/sec    0    215 KBytes       
[  5]   1.00-2.00   sec  3.86 MBytes  32.4 Mbits/sec    0    387 KBytes       
[  5]   2.00-3.00   sec  3.13 MBytes  26.2 Mbits/sec    0    470 KBytes       
[  5]   3.00-4.00   sec  3.37 MBytes  28.3 Mbits/sec    0    470 KBytes       
[  5]   4.00-5.00   sec  3.31 MBytes  27.8 Mbits/sec    0    470 KBytes       
[  5]   5.00-6.00   sec  3.31 MBytes  27.8 Mbits/sec    0    470 KBytes       
[  5]   6.00-7.00   sec  3.31 MBytes  27.8 Mbits/sec    0    470 KBytes       
[  5]   7.00-8.00   sec  2.76 MBytes  23.1 Mbits/sec    0    470 KBytes       
[  5]   8.00-9.00   sec  3.31 MBytes  27.8 Mbits/sec    0    470 KBytes       
[  5]   9.00-10.00  sec  2.76 MBytes  23.1 Mbits/sec    0    470 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  32.6 MBytes  27.4 Mbits/sec    0             sender
[  5]   0.00-10.14  sec  32.0 MBytes  26.4 Mbits/sec                  receiver

During the test, top consistently showed 0% idle CPU load, with the load being approximately 51% sys and 49% sirq.

The commands in use were

iperf3 -s

on the WDR3600 and

iperf3 -c [IP address of WDR3600]

on the client.

Posted by Uli Köhler in Networking, Wireguard

How to install wireguard_watchdog on OpenWRT

Run this on your OpenWRT router to automatically re-resolve DNS names for peers.

/usr/bin/wireguard_watchdog is automatically installed with the standard wireguard package, so you only need to enable it to run every minute:

echo '* * * * * /usr/bin/wireguard_watchdog' >> /etc/crontabs/root
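
Note that crond on OpenWRT only re-reads the crontabs when it is restarted, so activate the new entry with:

```shell
# crond does not watch /etc/crontabs for changes; restart it once
/etc/init.d/cron restart
```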

Source: This commit message.

Posted by Uli Köhler in Wireguard

How to install Wireguard on OpenWRT

Install using

opkg update
opkg install luci-proto-wireguard
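
luci-proto-wireguard pulls in the kernel module and the wireguard-tools package as dependencies, so after installation you can verify that the wg utility is present:

```shell
# Prints the wireguard-tools version if the dependency chain was installed
wg --version
```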


Posted by Uli Köhler in Networking, Wireguard