How to set X-Forwarded-Proto header in nginx

Directly after any proxy_pass line, add:

proxy_set_header X-Forwarded-Proto $scheme;

Typically X-Forwarded-Proto is used together with X-Forwarded-Host like this:

proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
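
On the application side, the backend can inspect this header to learn whether the original client request used HTTPS. A minimal sketch as a plain WSGI app (the header shows up as HTTP_X_FORWARDED_PROTO in the WSGI environ; the app itself is illustrative, not part of the nginx setup):

```python
def app(environ, start_response):
    # nginx's "proxy_set_header X-Forwarded-Proto $scheme;" arrives here
    # as HTTP_X_FORWARDED_PROTO; fall back to the local scheme if absent
    scheme = environ.get("HTTP_X_FORWARDED_PROTO",
                         environ.get("wsgi.url_scheme", "http"))
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"Original request scheme: {scheme}".encode()]

# Simulate a request proxied from an HTTPS frontend to a plain-HTTP backend
environ = {"HTTP_X_FORWARDED_PROTO": "https", "wsgi.url_scheme": "http"}
body = b"".join(app(environ, lambda status, headers: None))
print(body.decode())  # Original request scheme: https
```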


Posted by Uli Köhler in Networking, nginx, Wordpress

How to run iperf3 on Synology NAS using SSH

I prefer this method to the GUI docker method because:

  • It’s much more reproducible in practice
  • It involves fewer steps
  • It uses --net=host and therefore doesn’t involve additional routing, bridging or forwarding of packets which might impact performance

Log in to the Synology NAS via SSH using a user with admin privileges, then run sudo su.

For using iperf3 as a server, use

docker run -it --rm --name=iperf3-server --net=host networkstatic/iperf3 -s

For using iperf3 as a client, use

docker run -it --rm --name=iperf3-client --net=host networkstatic/iperf3 -c 10.1.2.3


Posted by Uli Köhler in Networking

Real-world Tailscale iperf3 results between a VM and a bare metal Desktop on a switched network

We tested iperf3 performance on our network, which is based on the following devices:

  • Desktop: Ubuntu 21.10, i7-6700 CPU @ 3.40 GHz, connected using 1GBase-T to the
  • Desktop switch: Mikrotik CSS610-8G-2S+IN, connected using a 10G multimode fiber SFP+ module to the
  • Core switch: Mikrotik CRS309-1G-8S+IN, connected using a 10G SFP+ DAC cable to the
  • Virtualization server: i5-6500 CPU @ 3.20 GHz running XCP-NG 8.2.1
  • Virtual machine: Ubuntu 20.04, 4 cores, 8 GB RAM

Tailscale version was

1.24.1
  tailscale commit: 1a9302b1edba91d0f638e775faeaa0ce2a6a25f8
  other commit: 1331ed5836e1a0ab32b10e6ce8748e17ba2c7598
  go version: go1.18.1-ts710a0d8610


The network is completely switched, not routed, and we verified using tailscale ping that Tailscale actually used the direct switched connection.

Test 0: Direct connection over switched network

Desktop running iperf3 -s, VM running iperf3 -c 10.9.2.10:

Connecting to host 10.9.2.10, port 5201
[  5] local 10.9.2.103 port 52944 connected to 10.9.2.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  94.7 MBytes   794 Mbits/sec  338    109 KBytes       
[  5]   1.00-2.00   sec  98.0 MBytes   822 Mbits/sec  353    148 KBytes       
[  5]   2.00-3.00   sec  96.6 MBytes   811 Mbits/sec  382    117 KBytes       
[  5]   3.00-4.00   sec   103 MBytes   862 Mbits/sec  334    116 KBytes       
[  5]   4.00-5.00   sec   101 MBytes   851 Mbits/sec  483    102 KBytes       
[  5]   5.00-6.00   sec   104 MBytes   874 Mbits/sec  503    126 KBytes       
[  5]   6.00-7.00   sec   105 MBytes   883 Mbits/sec  527    119 KBytes       
[  5]   7.00-8.00   sec   108 MBytes   906 Mbits/sec  451    105 KBytes       
[  5]   8.00-9.00   sec   108 MBytes   903 Mbits/sec  442    117 KBytes       
[  5]   9.00-10.00  sec   107 MBytes   900 Mbits/sec  461    123 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.00 GBytes   861 Mbits/sec  4274             sender
[  5]   0.00-10.00  sec  1.00 GBytes   860 Mbits/sec                  receiver

iperf Done.

VM running iperf3 -s, Desktop running iperf3 -c 10.9.2.103:

Connecting to host 10.9.2.103, port 5201
[  5] local 10.9.2.10 port 42630 connected to 10.9.2.103 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  88.5 MBytes   742 Mbits/sec    0    966 KBytes       
[  5]   1.00-2.00   sec  90.0 MBytes   755 Mbits/sec    0   1.12 MBytes       
[  5]   2.00-3.00   sec  87.5 MBytes   734 Mbits/sec   33    833 KBytes       
[  5]   3.00-4.00   sec  90.0 MBytes   755 Mbits/sec    0    833 KBytes       
[  5]   4.00-5.00   sec  88.8 MBytes   745 Mbits/sec    0   1.00 MBytes       
[  5]   5.00-6.00   sec  88.8 MBytes   744 Mbits/sec    0   1.00 MBytes       
[  5]   6.00-7.00   sec  87.5 MBytes   734 Mbits/sec    0   1.09 MBytes       
[  5]   7.00-8.00   sec  90.0 MBytes   755 Mbits/sec    0   1.09 MBytes       
[  5]   8.00-9.00   sec  90.0 MBytes   755 Mbits/sec    0   1.09 MBytes       
[  5]   9.00-10.00  sec  90.0 MBytes   755 Mbits/sec   13    863 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   891 MBytes   747 Mbits/sec   46             sender
[  5]   0.00-10.00  sec   888 MBytes   745 Mbits/sec                  receiver

iperf Done.

The direction where the VM runs the iperf3 -s server, i.e. receives the data, shows a slight degradation in performance.

Test 1: Desktop running iperf3 -s, VM running iperf3 -c

Connecting to host 100.64.0.2, port 5201
[  5] local 100.64.0.3 port 37466 connected to 100.64.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  39.4 MBytes   330 Mbits/sec   62    149 KBytes       
[  5]   1.00-2.00   sec  45.8 MBytes   385 Mbits/sec   44    150 KBytes       
[  5]   2.00-3.00   sec  38.9 MBytes   326 Mbits/sec   97    122 KBytes       
[  5]   3.00-4.00   sec  47.9 MBytes   401 Mbits/sec    7    242 KBytes       
[  5]   4.00-5.00   sec  39.5 MBytes   332 Mbits/sec  118    110 KBytes       
[  5]   5.00-6.00   sec  46.6 MBytes   391 Mbits/sec   32    136 KBytes       
[  5]   6.00-7.00   sec  41.8 MBytes   351 Mbits/sec   42    159 KBytes       
[  5]   7.00-8.00   sec  44.3 MBytes   372 Mbits/sec   91    104 KBytes       
[  5]   8.00-9.00   sec  36.1 MBytes   303 Mbits/sec   72    133 KBytes       
[  5]   9.00-10.00  sec  41.5 MBytes   348 Mbits/sec   39    139 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   422 MBytes   354 Mbits/sec  604             sender
[  5]   0.00-10.00  sec   421 MBytes   353 Mbits/sec                  receiver

iperf Done.

Test 2: VM running iperf3 -s, Desktop running iperf3 -c

Connecting to host 100.64.0.3, port 5201
[  5] local 100.64.0.2 port 36744 connected to 100.64.0.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  23.7 MBytes   199 Mbits/sec  104   89.9 KBytes       
[  5]   1.00-2.00   sec  23.6 MBytes   198 Mbits/sec   80   49.2 KBytes       
[  5]   2.00-3.00   sec  21.1 MBytes   177 Mbits/sec   59   54.0 KBytes       
[  5]   3.00-4.00   sec  23.6 MBytes   198 Mbits/sec   68   69.6 KBytes       
[  5]   4.00-5.00   sec  19.1 MBytes   160 Mbits/sec   77   48.0 KBytes       
[  5]   5.00-6.00   sec  25.3 MBytes   212 Mbits/sec   76   62.4 KBytes       
[  5]   6.00-7.00   sec  21.4 MBytes   179 Mbits/sec   50    107 KBytes       
[  5]   7.00-8.00   sec  25.6 MBytes   215 Mbits/sec   35    124 KBytes       
[  5]   8.00-9.00   sec  22.5 MBytes   188 Mbits/sec   71   48.0 KBytes       
[  5]   9.00-10.00  sec  25.0 MBytes   209 Mbits/sec   42   64.8 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   231 MBytes   194 Mbits/sec  662             sender
[  5]   0.00-10.01  sec   230 MBytes   193 Mbits/sec                  receiver

UDP tests

UDP tests were mostly similar to the TCP tests (albeit with slightly higher throughput, up to 400 Mbit/s), including the sensitivity to the direction of the connection.

Interpretation of the results

Tailscale has a significant impact on network speeds and will generally not achieve near-Gigabit iperf3 speeds in a typical setup with desktops that are a couple of years old and virtual machines. However, a throughput of 200-400 Mbit/s is more than enough for most applications.

Interestingly, between a less powerful VM and a more powerful desktop, the speed is highly dependent on the direction of transfer, with a factor of 1.5x to 2x between the two directions. This might be attributed to the amount of computation required to encrypt or decrypt the data.
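
The slowdown factors can be reproduced directly from the measured average bitrates; the numbers below are taken from the iperf3 summary lines above:

```python
# Average bitrates in Mbit/s, from the iperf3 summaries above
direct = {"vm_to_desktop": 861, "desktop_to_vm": 747}
tailscale = {"vm_to_desktop": 354, "desktop_to_vm": 194}

for direction in direct:
    slowdown = direct[direction] / tailscale[direction]
    print(f"{direction}: {slowdown:.1f}x slower over Tailscale")

# Asymmetry between the two Tailscale directions
print(f"direction asymmetry: {tailscale['vm_to_desktop'] / tailscale['desktop_to_vm']:.2f}x")
# -> direction asymmetry: 1.82x
```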

Posted by Uli Köhler in Networking

Recommended library for executing shell commands in Python

I recommend using invoke instead of the built-in subprocess to handle executing any shell command in Python.

Not only does invoke's run() provide a more user-friendly syntax than e.g. subprocess.check_output():

run('make')

but it also tends to behave more like you'd expect, especially regarding the output of the command, and it has easy-to-use parameters such as hide=True to hide the output of shell commands.

Furthermore, it provides a bunch of really useful features, such as automatically responding to prompts from the shell command.
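
To illustrate the difference, here is a sketch; the invoke calls are shown as comments so that the snippet runs with the standard library alone (invoke itself is installed via pip install invoke):

```python
import subprocess

# With invoke, capturing output is a one-liner:
#     from invoke import run
#     result = run("echo hello", hide=True)  # hide=True suppresses terminal output
#     result.stdout  # captured stdout
#     result.ok      # True if the exit code was 0

# The rough subprocess equivalent requires more ceremony:
result = subprocess.run("echo hello", shell=True,
                        capture_output=True, text=True)
print(result.stdout.strip())   # hello
print(result.returncode == 0)  # True
```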

Posted by Uli Köhler in Python

How to enable USB-C host mode on Raspberry Pi

If you want to connect a USB device such as a 3D printer mainboard to your Raspberry Pi 4 using the USB-C connector (as opposed to the larger USB-A connectors), you first need to configure the Raspberry Pi kernel to use host mode for the USB-C connector.

To temporarily enable it:

sudo modprobe -r dwc2 && sudo dtoverlay dwc2 dr_mode=host && sudo modprobe dwc2

This method has the advantage of not requiring a reboot.

To permanently enable it:

Edit /boot/config.txt and add

dtoverlay=dwc2,dr_mode=host

at the end of the file (in the [all] section). Then

reboot

Posted by Uli Köhler in Raspberry Pi

How to enable/disable WireGuard peer by comment on MikroTik

To enable the WireGuard peer called MyPeer:

/interface/wireguard/peers/enable [find comment="MyPeer"]

To disable the WireGuard peer called MyPeer:

/interface/wireguard/peers/disable [find comment="MyPeer"]


Posted by Uli Köhler in MikroTik, Networking

How to test if MongoDB database exists on command line (bash)

Use this command to test if a given MongoDB database exists:

mongo --quiet --eval 'db.getMongo().getDBNames().indexOf("mydb")'

This will return an index such as 0 or 241 if the database is found. On the other hand, it will return -1 if the database does not exist.

docker-compose version:

docker-compose exec mongodb mongo --quiet --eval 'db.getMongo().getDBNames().indexOf("mydb")'

where mongodb is the name of your container.

Now we can put it together in a bash script to test if the database exists:

# Query if DB exists in MongoDB
mongo_indexof_db=$(mongo --quiet --eval 'db.getMongo().getDBNames().indexOf("mydb")')
if [ "$mongo_indexof_db" -ne -1 ]; then
    echo "MongoDB database exists"
else
    echo "MongoDB database does not exist"
fi


docker-compose variant:

# Query if DB exists in MongoDB
mongo_indexof_db=$(docker-compose -f inspect.yml exec -T mongodb mongo --quiet --eval 'db.getMongo().getDBNames().indexOf("mydb")')
if [ "$mongo_indexof_db" -ne -1 ]; then
    echo "MongoDB database exists"
else
    echo "MongoDB database does not exist"
fi


Posted by Uli Köhler in MongoDB, Shell

How to fix Python pyarrow pip install error: Could NOT find Arrow (missing: Arrow_DIR)

Problem:

When trying to install pyarrow such as using

pip install pyarrow

you see an error log like

      -- Found Python3Alt: /home/uli/.pypy3-virtualenv/bin/pypy3
      CMake Warning (dev) at /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:273 (message):
        The package name passed to `find_package_handle_standard_args` (PkgConfig)
        does not match the name of the calling package (Arrow).  This can lead to
        problems in calling code that expects `find_package` result variables
        (e.g., `_FOUND`) to follow a certain pattern.
      Call Stack (most recent call first):
        /usr/share/cmake-3.18/Modules/FindPkgConfig.cmake:59 (find_package_handle_standard_args)
        cmake_modules/FindArrow.cmake:39 (include)
        cmake_modules/FindArrowPython.cmake:46 (find_package)
        CMakeLists.txt:229 (find_package)
      This warning is for project developers.  Use -Wno-dev to suppress it.
      
      -- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.2")
      -- Could NOT find Arrow (missing: Arrow_DIR)
      -- Checking for module 'arrow'
      --   No package 'arrow' found
      CMake Error at /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:165 (message):
        Could NOT find Arrow (missing: ARROW_INCLUDE_DIR ARROW_LIB_DIR
        ARROW_FULL_SO_VERSION ARROW_SO_VERSION)
      Call Stack (most recent call first):
        /usr/share/cmake-3.18/Modules/FindPackageHandleStandardArgs.cmake:458 (_FPHSA_FAILURE_MESSAGE)
        cmake_modules/FindArrow.cmake:450 (find_package_handle_standard_args)
        cmake_modules/FindArrowPython.cmake:46 (find_package)
        CMakeLists.txt:229 (find_package)
      
      
      -- Configuring incomplete, errors occurred!
      See also "/tmp/pip-install-409dctif/pyarrow_b70cde6894c3469483f7360493fc2e65/build/temp.linux-x86_64-pypy39/CMakeFiles/CMakeOutput.log".
      error: command '/usr/bin/cmake' failed with exit code 1
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for pyarrow
Failed to build pyarrow
ERROR: Could not build wheels for pyarrow, which is required to install pyproject.toml-based projects

Solution:

You need to install the Apache Arrow C++ libraries in order to be able to compile pyarrow from source. On Ubuntu, this can be done using

sudo apt install -y -V ca-certificates lsb-release wget
wget https://apache.jfrog.io/artifactory/arrow/$(lsb_release --id --short | tr 'A-Z' 'a-z')/apache-arrow-apt-source-latest-$(lsb_release --codename --short).deb -O /tmp/apache-arrow.deb
sudo apt -y install /tmp/apache-arrow.deb
sudo apt -y update
sudo apt -y install libarrow-dev libarrow-python-dev


Posted by Uli Köhler in Python

How to fix Python Pillow pip install exception: RequiredDependencyException: jpeg

Problem:

When trying to install pillow such as using

pip install Pillow

you see an error log like

      running build_ext
      
      
      The headers or library files could not be found for jpeg,
      a required dependency when compiling Pillow from source.
      
      Please see the install instructions at:
         https://pillow.readthedocs.io/en/latest/installation.html
      
      Traceback (most recent call last):
        File "/tmp/pip-install-_g5fa7ox/pillow_7cb18c0d6bec468e8844184b98c8bf45/setup.py", line 989, in <module>
          setup(
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/__init__.py", line 87, in setup
          return distutils.core.setup(**attrs)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/core.py", line 148, in setup
          return run_commands(dist)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
          dist.run_commands()
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
          self.run_command(cmd)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/dist.py", line 1214, in run_command
          super().run_command(command)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
          cmd_obj.run()
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/command/install.py", line 68, in run
          return orig.install.run(self)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/command/install.py", line 670, in run
          self.run_command('build')
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
          self.distribution.run_command(command)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/dist.py", line 1214, in run_command
          super().run_command(command)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
          cmd_obj.run()
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/command/build.py", line 136, in run
          self.run_command(cmd_name)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
          self.distribution.run_command(command)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/dist.py", line 1214, in run_command
          super().run_command(command)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
          cmd_obj.run()
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/command/build_ext.py", line 79, in run
          _build_ext.run(self)
        File "/home/uli/.pypy3-virtualenv/lib/pypy3.9/site-packages/setuptools/_distutils/command/build_ext.py", line 339, in run
          self.build_extensions()
        File "/tmp/pip-install-_g5fa7ox/pillow_7cb18c0d6bec468e8844184b98c8bf45/setup.py", line 804, in build_extensions
          raise RequiredDependencyException(f)
      RequiredDependencyException: jpeg
      
      During handling of the above exception, another exception occurred:
      
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-_g5fa7ox/pillow_7cb18c0d6bec468e8844184b98c8bf45/setup.py", line 1009, in <module>
          raise RequiredDependencyException(msg)
      RequiredDependencyException:
      
      The headers or library files could not be found for jpeg,
      a required dependency when compiling Pillow from source.
      
      Please see the install instructions at:
         https://pillow.readthedocs.io/en/latest/installation.html
      
      
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> Pillow

Solution:

Pillow needs a bunch of libraries to be installed in order to work properly. On Ubuntu, use the following command from the official Pillow website:

sudo apt-get install cmake libtiff5-dev libjpeg8-dev libopenjp2-7-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python3-tk libharfbuzz-dev libfribidi-dev libxcb1-dev

or check out the installation guide for commands for other operating systems.

Posted by Uli Köhler in Python

How to install tailscale on Fedora CoreOS

In order to install tailscale on Fedora CoreOS (this post has been tested on Fedora CoreOS 35), you can use this sequence of commands:

sudo curl -o /etc/yum.repos.d/tailscale.repo https://pkgs.tailscale.com/stable/fedora/tailscale.repo
sudo rpm-ostree install tailscale

Now reboot using

sudo systemctl reboot

Once rebooted, you can enable the service using

sudo systemctl enable --now tailscaled

and then configure tailscale as usual:

sudo tailscale up --login-server .... --authkey ...

Also see our post on How to connect tailscale to headscale server on Linux

Posted by Uli Köhler in CoreOS, Headscale, VPN

How to fix Python MongoDB TypeError: Object of type ObjectId is not JSON serializable

Problem:

When trying to export data as JSON that has originally been queried from MongoDB using code like

with open("alle.json", "w") as outfile:
    json.dump(alle, outfile)

you see the following error message:

File /usr/lib/python3.9/json/__init__.py:179, in dump(obj, fp, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
    173     iterable = cls(skipkeys=skipkeys, ensure_ascii=ensure_ascii,
    174         check_circular=check_circular, allow_nan=allow_nan, indent=indent,
    175         separators=separators,
    176         default=default, sort_keys=sort_keys, **kw).iterencode(obj)
    177 # could accelerate with writelines in some versions of Python, at
    178 # a debuggability cost
--> 179 for chunk in iterable:
    180     fp.write(chunk)

File /usr/lib/python3.9/json/encoder.py:429, in _make_iterencode.<locals>._iterencode(o, _current_indent_level)
    427     yield _floatstr(o)
    428 elif isinstance(o, (list, tuple)):
--> 429     yield from _iterencode_list(o, _current_indent_level)
    430 elif isinstance(o, dict):
    431     yield from _iterencode_dict(o, _current_indent_level)

File /usr/lib/python3.9/json/encoder.py:325, in _make_iterencode.<locals>._iterencode_list(lst, _current_indent_level)
    323         else:
    324             chunks = _iterencode(value, _current_indent_level)
--> 325         yield from chunks
    326 if newline_indent is not None:
    327     _current_indent_level -= 1

File /usr/lib/python3.9/json/encoder.py:405, in _make_iterencode.<locals>._iterencode_dict(dct, _current_indent_level)
    403         else:
    404             chunks = _iterencode(value, _current_indent_level)
--> 405         yield from chunks
    406 if newline_indent is not None:
    407     _current_indent_level -= 1

File /usr/lib/python3.9/json/encoder.py:438, in _make_iterencode.<locals>._iterencode(o, _current_indent_level)
    436         raise ValueError("Circular reference detected")
    437     markers[markerid] = o
--> 438 o = _default(o)
    439 yield from _iterencode(o, _current_indent_level)
    440 if markers is not None:

File /usr/lib/python3.9/json/encoder.py:179, in JSONEncoder.default(self, o)
    160 def default(self, o):
    161     """Implement this method in a subclass such that it returns
    162     a serializable object for ``o``, or calls the base implementation
    163     (to raise a ``TypeError``).
   (...)
    177 
    178     """
--> 179     raise TypeError(f'Object of type {o.__class__.__name__} '
    180                     f'is not JSON serializable')

TypeError: Object of type ObjectId is not JSON serializable

Solution:

This error occurs because objects queried via PyMongo always contain _id, which is of type ObjectId, and the standard json library (as well as drop-in replacements like simplejson) does not know how to create a JSON representation of ObjectId objects.

In order to fix this, use pymongo's json_util instead of json. Note that the bson.json_util module provides dumps but not dump, so use the following snippet to write to a file:


import bson.json_util as json_util

with open("alle.json", "w") as outfile:
    outfile.write(json_util.dumps(alle))
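
Alternatively, if you want to keep using the standard json module, you can pass a default= hook that stringifies any type json does not know; str() of an ObjectId yields its hex representation. Sketched here with a stand-in class so the snippet runs even without bson installed:

```python
import json

class FakeObjectId:
    """Stand-in for bson.ObjectId, for illustration only."""
    def __init__(self, hex_str):
        self.hex_str = hex_str
    def __str__(self):
        return self.hex_str

doc = {"_id": FakeObjectId("6220a34f2d3e4f5a6b7c8d9e"), "name": "example"}

# default=str is invoked for every object json cannot encode natively
print(json.dumps(doc, default=str))
# -> {"_id": "6220a34f2d3e4f5a6b7c8d9e", "name": "example"}
```

With real PyMongo data this becomes json.dump(alle, outfile, default=str); note, however, that you lose the round-trippable {"$oid": ...} form that json_util produces.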


Posted by Uli Köhler in MongoDB, Python

How to fix RcppGSL installation error gsl-config: Command not found

Problem:

While trying to install RcppGSL using

BiocManager::install("RcppGSL")

you see the following error message:

checking for gcc option to accept ISO C89... none needed
checking for gsl-config... no
configure: error: gsl-config not found, is GSL installed?
ERROR: configuration failed for package ‘RcppGSL’
* removing ‘/usr/local/lib/R/site-library/RcppGSL’

The downloaded source packages are in
        ‘/tmp/RtmpqSzFab/downloaded_packages’
Warning message:
In .inet_warning(msg) :
  installation of package ‘RcppGSL’ had non-zero exit status

Solution:

You need to install the GSL development headers, which include the gsl-config executable.

On Ubuntu, you can install them using

sudo apt -y install libgsl-dev


Posted by Uli Köhler in R

How to iterate all databases in PyMongo

This short example shows how to iterate all databases or list all database names of a MongoDB server in Python using pymongo:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost")

for database in client.list_databases():
    print(database['name'])


Posted by Uli Köhler in Python

How to start Jupyter Lab for remote access

This will start Jupyter listening on all network interfaces (i.e. bound to all IP addresses), making direct browser access possible not only from localhost but from any remote host that has network access to the machine running Jupyter:

jupyter lab --ip=0.0.0.0


Posted by Uli Köhler in Networking, Python

How to remove ALL objects in Google Cloud Storage bucket using gsutil

You can remove all the files / objects in a Google Cloud Storage bucket using gsutil like this:

gsutil -m rm -r gs://my-bucket/\*

This will delete all data in the bucket, and there is usually no way to recover it!

Posted by Uli Köhler in Cloud

How to install gcsfuse on Ubuntu in 15 seconds

export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gcsfuse

If that doesn’t work (such as for Ubuntu 21.10 impish at the time of writing this post), use the following method:

curl -L -O https://github.com/GoogleCloudPlatform/gcsfuse/releases/download/v0.39.2/gcsfuse_0.39.2_amd64.deb
sudo dpkg --install gcsfuse_0.39.2_amd64.deb
rm gcsfuse_0.39.2_amd64.deb


This is a summary from the official docs.

Posted by Uli Köhler in Cloud, Linux

How to fix gsutil 401 Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket

Problem:

While running a command like

gsutil rsync my-folder gs://my-bucket

you see an error message like

Building synchronization state...
Caught non-retryable exception while listing gs://mfwh-backups/: ServiceException: 401 Anonymous caller does not have storage.objects.list access to the Google Cloud Storage bucket.
CommandException: Caught non-retryable exception - aborting rsync

Solution:

This error is basically telling you that you are not logged in!

First, create a service account for the project on Google Cloud: Direct link to the service account page. Depending on your setup, you need to figure out which roles to assign to the service account. If you are lost and don't know what to select, just assign it admin rights on the storage, but be aware that this might have security implications, as this account may also delete or create storage buckets etc.

Then open the page for that service account and create a new key!

This will give you a JSON file such as my-project-4d267a915c4e.json. Save it on the server or computer where you want to run gsutil. I recommend saving it in ~ (the user's home folder) under the original filename, for example ~/my-project-4d267a915c4e.json.

Then you need to activate that service account using

gcloud auth activate-service-account --key-file [path to JSON file]

such as

gcloud auth activate-service-account --key-file ~/my-project-4d267a915c4e.json


Posted by Uli Köhler in Cloud

How to install gcloud on Ubuntu in 10 seconds

sudo snap install google-cloud-cli --classic

This is the summary from the official docs. I recommend installing it as a snap package as opposed to a deb package, since it auto-updates, is much easier to use and just works better out of the box in my experience.

Posted by Uli Köhler in Cloud, Linux

git: How to list all files that ever existed in the current branch?

This is useful for cleaning up sensitive data even if you don’t know the specific filename:

git log --pretty=format: --name-only --diff-filter=A | sort -u

Original source: Dustin on StackOverflow

Posted by Uli Köhler in git

Where to find a good .gitignore file for LaTeX?

I analyzed a couple of different .gitignore files for LaTeX floating around the internet.

Clearly, the best one is TeX.gitignore by Brian Douglas, which you can check out on GitHub.

Posted by Uli Köhler in git, Version management