Technologies

How to install gitlab-runner using docker-compose

First, choose a directory where the service will reside. I recommend /opt/gitlab-runner. Then create docker-compose.yml in that directory with the following content:

version: '3'
services:
  gitlab-runner:
    image: 'gitlab/gitlab-runner:latest'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./config:/etc/gitlab-runner
    restart: unless-stopped

Then run these commands to configure the runner:

docker-compose up -d
docker-compose exec -T gitlab-runner gitlab-runner register

It will ask you for details about the GitLab instance you want to attach to. You will find this information at https://<your-gitlab-domain>/admin/runners. This example is for my GitLab instance:

Runtime platform                                    arch=amd64 os=linux pid=38 revision=943fc252 version=13.7.0
Running in system-mode.

Enter the GitLab instance URL (for example, https://gitlab.com/):
https://gitlab.techoverflow.net/
Enter the registration token:
Loo2lahf9Shoogheiyae
Enter a description for the runner:
[148a53203df8]: My-Runner
Enter tags for the runner (comma-separated):

Registering runner... succeeded                     runner=oc-oKWMH
Enter an executor: custom, docker-ssh, shell, virtualbox, docker-ssh+machine, docker, parallels, ssh, docker+machine, kubernetes:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
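
You can double-check that the registration has been written to the runner's config file, for example with this quick, optional check:

docker-compose exec -T gitlab-runner gitlab-runner list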

Now stop the runner that is still running with the old config (i.e. with no GitLab instance attached); it will be started again with the new config in the next step:

docker-compose down

After that’s finished, you can run the script from our previous post Create a systemd service for your docker-compose project in 10 seconds in the directory where docker-compose.yml is located.

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will automatically generate a systemd service and start the runner (also on boot). For more details, see the corresponding blogpost. If your directory is named gitlab-runner, the service file will be stored at /etc/systemd/system/gitlab-runner.service, hence these are the commands you can use to control the service:

Note that the script that creates the systemd service will automatically start the runner, so you don’t need to start it manually.

Start by

sudo systemctl start gitlab-runner

Restart by

sudo systemctl restart gitlab-runner

Stop by

sudo systemctl stop gitlab-runner

View status:

sudo systemctl status gitlab-runner

View & follow logs:

sudo journalctl -xfu gitlab-runner

View logs in less:

sudo journalctl -xu gitlab-runner

Also see Mini systemd cheat-sheet

Also see How to register gitlab runner for multiple GitLab instances.

Note that you can also use

docker-compose logs -f

to view the logs (run this from the directory where docker-compose.yml is located).

In case you see an error message like

error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on 192.168.178.1:53: no such host

in your jobs, see How to fix Gitlab CI error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on … no such host

Posted by Uli Köhler in GitLab

How to install Emscripten SDK on Ubuntu in 1 minute

This script installs the emscripten SDK on Ubuntu into ~/.emsdk and automatically adds source ~/.emsdk/emsdk_env.sh to .bashrc and .zshrc if they exist. It will also automatically update emscripten in case ~/.emsdk already exists.

Run this one-liner to install:

curl -fsSL https://techoverflow.net/scripts/install-emscripten.sh | bash

Script content:

#!/bin/bash
# This script installs emscripten to ~/.emsdk
if [[ -d "$HOME/.emsdk" ]]
then # Update
  echo "Updating emscripten SDK..."
  cd ~/.emsdk && git pull
else # Install
  echo "Installing emscripten SDK..."
  git clone https://github.com/emscripten-core/emsdk.git ~/.emsdk
fi
# Install & activate latest SDK
# See https://emscripten.org/docs/getting_started/downloads.html for more details
cd ~/.emsdk
./emsdk install latest 
./emsdk activate latest    
# Add to .bashrc and .zshrc
if [[ -f "$HOME/.bashrc" ]]; then echo -e "\nsource ~/.emsdk/emsdk_env.sh" >> ~/.bashrc; fi
if [[ -f "$HOME/.zshrc" ]]; then echo -e "\nsource ~/.emsdk/emsdk_env.sh" >> ~/.zshrc; fi
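
After opening a new shell (or sourcing the environment file manually), you can verify the installation with a quick sanity check, assuming the default install location used by the script:

source ~/.emsdk/emsdk_env.sh
emcc --version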


Posted by Uli Köhler in WASM

How to install xenutils on Linux (XCP-NG)

Using CoreOS? See this post instead!

First, insert the guest-tools.iso supplied with XCP-NG into the DVD drive of the virtual machine.

Then run these commands. Note that this will reboot the machine after it finishes:

sudo mount -o ro /dev/sr0 /mnt/
cd /mnt/Linux
sudo ./install.sh -n
sudo reboot

After the VM reboots, XCP-NG should detect the management agent.

Please eject the guest tools medium from the machine after the reboot! Sometimes unnecessarily mounted media cause issues.

Posted by Uli Köhler in Virtualization

Best-practice configuration for MongoDB with docker-compose

Create /var/lib/mongodb/docker-compose.yml:

version: '3.1'
services:
  mongo:
    image: mongo
    volumes:
        - ./data:/data/db
    ports:
        - 27017:27017
    command: --serviceExecutor adaptive

This will store the MongoDB data in /var/lib/mongodb/data. I prefer this variant to using docker volumes since this method keeps all MongoDB-related data in the same directory.

Then create a systemd service using

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

See our post on how to Create a systemd service for your docker-compose project in 10 seconds for more details on this method.

You can access MongoDB at localhost:27017! It will automatically be started on boot.
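
For a quick connectivity check you can run the MongoDB shell inside the container; a minimal example assuming the service name mongo from the docker-compose.yml above (newer images ship mongosh instead of mongo):

docker-compose exec mongo mongo --eval 'db.runCommand({ping: 1})'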

Restart by

sudo systemctl restart mongodb

Stop by

sudo systemctl stop mongodb

View logs:

sudo journalctl -xfu mongodb

View logs in less:

sudo journalctl -xu mongodb


Posted by Uli Köhler in Docker, MongoDB

Fedora CoreOS minimal ignition config for XCP-NG

This is the Ignition config that I use to bring up my Fedora CoreOS instance on a VM on my XCP-NG server:

{
  "ignition": {
    "version": "3.2.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "sudo",
          "docker"
        ],
        "name": "uli",
        "sshAuthorizedKeys": [
          "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDpvDSxIwnyMCFtIPRQmPUV6hh9lBJUR0Yo7ki+0Vxs+kcCHGjtcgDzcaHginj1zvy7nGwmcuGi5w83eKoANjK5CzpFT4vJeiXqtGllh0w+B5s6tbSsD0Wv3SC9Xc4NihjVjLU5gEyYmfs/sTpiow225Al9UVYeg1SzFr1I3oSSuw== [email protected]"
        ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "contents": {
          "source": "data:,coreos-test%0A"
        },
        "mode": 420
      },
      {
        "path": "/etc/profile.d/systemd-pager.sh",
        "contents": {
          "source": "data:,%23%20Tell%20systemd%20to%20not%20use%20a%20pager%20when%20printing%20information%0Aexport%20SYSTEMD_PAGER%3Dcat%0A"
        },
        "mode": 420
      },
      {
        "path": "/etc/sysctl.d/20-silence-audit.conf",
        "contents": {
          "source": "data:,%23%20Raise%20console%20message%20logging%20level%20from%20DEBUG%20(7)%20to%20WARNING%20(4)%0A%23%20to%20hide%20audit%20messages%20from%20the%20interactive%20console%0Akernel.printk%3D4"
        },
        "mode": 420
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "enabled": true,
        "name": "docker.service"
      },
      {
        "enabled": true,
        "name": "containerd.service"
      },
      {
        "dropins": [
          {
            "contents": "[Service]\n# Override Execstart in main unit\nExecStart=\n# Add new Execstart with `-` prefix to ignore failure\nExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM\nTTYVTDisallocate=no\n",
            "name": "autologin-core.conf"
          }
        ],
        "name": "getty@tty1.service"
      }
    ]
  }
}

This Ignition JSON is built from the following YAML:

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: uli
      groups:
        - "sudo"
        - "docker"
      ssh_authorized_keys:
        - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDpvDSxIwnyMCFtIPRQmPUV6hh9lBJUR0Yo7ki+0Vxs+kcCHGjtcgDzcaHginj1zvy7nGwmcuGi5w83eKoANjK5CzpFT4vJeiXqtGllh0w+B5s6tbSsD0Wv3SC9Xc4NihjVjLU5gEyYmfs/sTpiow225Al9UVYeg1SzFr1I3oSSuw== [email protected]"

systemd:
  units:
    - name: docker.service
      enabled: true

    - name: containerd.service
      enabled: true
    - name: getty@tty1.service
      dropins:
      - name: autologin-core.conf
        contents: |
          [Service]
          # Override Execstart in main unit
          ExecStart=
          # Add new Execstart with `-` prefix to ignore failure
          ExecStart=-/usr/sbin/agetty --autologin core --noclear %I $TERM
          TTYVTDisallocate=no
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: |
          coreos-test
    - path: /etc/profile.d/systemd-pager.sh
      mode: 0644
      contents:
        inline: |
          # Tell systemd to not use a pager when printing information
          export SYSTEMD_PAGER=cat
    - path: /etc/sysctl.d/20-silence-audit.conf
      mode: 0644
      contents:
        inline: |
          # Raise console message logging level from DEBUG (7) to WARNING (4)
          # to hide audit messages from the interactive console
          kernel.printk=4

using

fcct --pretty --strict ignition.yml --output ignition.ign

Install using:

sudo coreos-installer install /dev/xvda --copy-network --ignition-url https://mydomain.com/ignition.ign
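
The Ignition file just needs to be reachable via HTTP(S) from the VM during installation. For a quick test you can serve it from the directory containing ignition.ign, for example with Python’s built-in web server (adjust the path, port and --ignition-url to your setup):

cd /path/to/ignition/
python3 -m http.server 8000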

Features:

  • DHCP on all network interfaces
  • TTY on the screen
  • No password – remember to replace the SSH key with your own key!
Posted by Uli Köhler in Virtualization

How to install XCP-NG xe-guest-utilities on Fedora CoreOS

First, insert the guest-tools.iso supplied with XCP-NG into the DVD drive of the virtual machine.

Then run this command to install. Note that this will reboot the CoreOS instance!

curl -fsSL https://techoverflow.net/scripts/install-xenutils-coreos.sh | sudo bash /dev/stdin

This will run the following script:

sudo mount -o ro /dev/sr0 /mnt
sudo rpm-ostree install /mnt/Linux/*.x86_64.rpm
sudo cp -f /mnt/Linux/xen-vcpu-hotplug.rules /etc/udev/rules.d/
sudo cp -f /mnt/Linux/xe-linux-distribution.service /etc/systemd/system/
sudo sed 's/share\/oem\/xs/sbin/g' -i /etc/systemd/system/xe-linux-distribution.service
sudo systemctl daemon-reload
sudo systemctl enable /etc/systemd/system/xe-linux-distribution.service
sudo umount /mnt
sudo systemctl reboot

After rebooting the VM, XCP-NG should detect the management agent.

Based on work by steniofilho on the Fedora Forum.

Please eject the guest tools medium from the machine after the reboot! Sometimes unnecessarily mounted media cause issues.

Posted by Uli Köhler in Virtualization

How to list VMs in XCP-NG on the command line

In order to list VMs on the command line, login to XCP-NG using SSH and run this command:

xe vm-list

Example output:

[16:51 virt01-xcpng ~]# xe vm-list
uuid ( RO)           : 56dc99f2-c617-f7a9-5712-a4c9df54229a
     name-label ( RW): VM 1
    power-state ( RO): running


uuid ( RO)           : 268d56ab-9672-0f45-69ae-efbc88380b21
     name-label ( RW): VM2
    power-state ( RO): running


uuid ( RO)           : 9b1a771f-fb84-8108-8e01-6dac0f957b19
     name-label ( RW): My VM 3
    power-state ( RO): running
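
xe vm-list can also filter and limit the output, for example (these use standard xe parameter syntax):

# Only show running VMs
xe vm-list power-state=running
# Only print selected fields
xe vm-list params=uuid,name-label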


Posted by Uli Köhler in Virtualization

How to fix ElasticSearch [1]: initial heap size […] not equal to maximum heap size […];

Problem:

Your ElasticSearch server fails to start with an error message like

ERROR: [1] bootstrap checks failed
[1]: initial heap size [536870912] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents memory locking from locking the entire heap
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log

Solution:

Set the initial heap size equal to the maximum heap size: The -Xms argument and the -Xmx argument must be equal, for example:

-Xms2048m -Xmx2048m

Typically (such as in a docker-based setup) you can set this in ES_JAVA_OPTS:

ES_JAVA_OPTS=-Xms2048m -Xmx2048m

For docker-compose based environments, this is an example configuration that works:

environment:
    - cluster.name=docker-cluster
    - node.name=elasticsearch1
    - cluster.initial_master_nodes=elasticsearch1
    - bootstrap.memory_lock=true
    - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
    - http.cors.enabled=true
    - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
    - http.cors.allow-credentials=true
    - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"

After that, restart your ElasticSearch instance.
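
In a docker-compose based setup, recreating the container is sufficient to apply the changed environment; for example, run this in the directory containing the docker-compose.yml you just edited:

docker-compose up -d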

Posted by Uli Köhler in ElasticSearch

How to run psql in Gitlab Docker image

When using the official GitLab Docker container, you can use this command to run psql:

docker exec -t -u gitlab-psql [container name] psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

Note that gitlab in this command is the container name.
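
You can also run a single SQL statement non-interactively using -c; the query here is just an illustration:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production -c 'SELECT count(*) FROM projects;'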

Posted by Uli Köhler in Databases, Docker, Linux

How to fix XCP-NG XENAPI_MISSING_PLUGIN(xscontainer) or Error on getting the default coreOS cloud template

Problem:

When creating a CoreOS container on your XCP-NG host, XCP-NG center or XenOrchestra tells you

Cloud config: Error on getting the default coreOS cloud template

with the error message

XENAPI_MISSING_PLUGIN(xscontainer)
This is a XenServer/XCP-ng error

Solution:

Log into the host’s console as root using SSH or the console in XCP-NG center or XenOrchestra and run

yum install xscontainer

After that, reload the page (F5) you use to create your container. No host restart is required.

Note that if you have multiple hosts, you need to yum install xscontainer for each host individually.

Posted by Uli Köhler in Docker, Virtualization

The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This is part of a common docker-compose.yml that is frequently seen on the internet:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
 [...]

Simple and secure, right? A no-root-password MariaDB instance that’s running in a separate container and does not have its port 3306 exposed – so only services from the same docker-compose.yml can reach it since docker-compose puts all those services in a separate network.

Wrong.

While the MariaDB instance is not reachable from the internet since no port is exposed, it can be reached by any process on the host via its internal IP address.

In order to comprehend what’s happening, we shall take a look at docker’s networks. In this case, my docker-compose config is called redmine.

$ docker network ls | grep redmine
ea7ed38f469b        redmine_default           bridge              local

This is the network that docker-compose creates without any explicit network configuration. Let’s inspect the network (using the network name from the listing above) to show the attached containers:
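
docker network inspect redmine_default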

[
    // [...]
        "Containers": {
            "2578fc65b4dab9f204d0a252e421dd4ddd9f41c35642d48350f4e59370581757": {
                "Name": "redmine_mariadb_1",
                "EndpointID": "1e6d81acc096a12fc740173f4e107090333c42e8a86680ac5c9886c148d578e7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "7867f71d2a36265c34c133b70aea487b90ea68fcf30ecb42d6e7e9a376cf8e07": {
                "Name": "redmine_redmine_1",
                "EndpointID": "f5ac7b3325aa9bde12f0c625c4881f9a6fc9957da4965767563ec9a3b76c19c3",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
    // [...]
]

We can see that the IP address of the redmine_mariadb_1 container is 172.18.0.2.

Using the internal IP 172.18.0.2, you can access the MySQL server.

Any process on the host (even from unprivileged users) can connect to the container without any password, e.g.

$ mysqldump -uroot -h172.18.0.2 --all-databases
// This will show the dump of the entire MariaDB database

How to mitigate this security risk?

Mitigation is quite easy since we only need to set a root password for the MariaDB instance.

My recommended best practice is to avoid duplicate passwords. In order to do this, create a .env file in the directory where docker-compose.yml is located.

MARIADB_ROOT_PASSWORD=aiPaipei6ookaemue4voo0NooC0AeH

Remember to replace the password with a random password, or use this shell command to automatically create one:

echo MARIADB_ROOT_PASSWORD=$(pwgen 30) > .env

Now we can use ${MARIADB_ROOT_PASSWORD} in docker-compose.yml wherever the MariaDB root password is required, for example:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
  redmine:
    image: 'redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - [email protected]
      - REDMINE_DB_MYSQL=mariadb
      - REDMINE_DB_USERNAME=root
      - REDMINE_DB_PASSWORD=${MARIADB_ROOT_PASSWORD}
    ports:
      - '3718:3000'
    volumes:
      - './redmine_data/conf:/usr/src/redmine/conf'
      - './redmine_data/files:/usr/src/redmine/files'
      - './redmine_themes:/usr/src/redmine/public/themes'
    depends_on:
      - mariadb

Note that the mariadb docker image will not change the root password if the database directory already exists (mariadb_data in this example).

My recommended best practice for changing the root password is to use mysqldump --all-databases to export the entire database to a SQL file, then backup and delete the data directory, then re-start the container so the new root password will be set. After that, re-import the dump from the SQL file.
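
A rough sketch of that procedure, assuming the service and directory names from the docker-compose.yml above; adapt the paths and verify the dump before deleting anything:

# 1. Dump all databases while the old (passwordless) instance is still running
docker-compose exec -T mariadb mysqldump -uroot --all-databases > all-databases.sql
# 2. Stop the stack and move the old data directory out of the way as a backup
docker-compose down
mv mariadb_data mariadb_data.bak
# 3. Start again: a fresh data directory is initialized with the new root password from .env
docker-compose up -d
# 4. Wait for MariaDB to finish initializing, then re-import the dump using the new root password
source .env
docker-compose exec -T mariadb mysql -uroot -p"$MARIADB_ROOT_PASSWORD" < all-databases.sql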

Posted by Uli Köhler in Databases, Docker, Linux

Best practice for installing & autostarting OpenVPN client/server configurations on Ubuntu

This post details my systemd-based setup for installing and activating OpenVPN client or server configs on Ubuntu. It might also work for other Linux distributions that are based on systemd.

First, place the OpenVPN config (usually a .ovpn file, but it can also be a .conf file) in /etc/openvpn. You need to change the filename extension to .conf, since .ovpn won’t work. Furthermore, ensure that there are no spaces in the filename.

In this example, our original OpenVPN config will be called techoverflow.ovpn, hence it needs to be copied to /etc/openvpn/techoverflow.conf!

Now we can enable (i.e. autostart on boot – but not start immediately) the config using

sudo systemctl enable [email protected]

For techoverflow.conf you need to systemctl enable [email protected], whereas for a hypothetical foo.conf you would need to systemctl enable [email protected].

Now we can start the VPN config – i.e. run it immediately using

sudo systemctl start [email protected]

Now your VPN client or server is running – or is it? We shall check the logs using

journalctl -xfu [email protected]

In order to manually restart the VPN client or server use

sudo systemctl restart [email protected]

and similarly run this to stop the VPN client or server:

sudo systemctl stop [email protected]

In order to show if the instance is running – i.e. show its status, use

sudo systemctl status [email protected]

Example output for an OpenVPN client:

[email protected] - OpenVPN connection to techoverflow
     Loaded: loaded (/lib/systemd/system/[email protected]; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2020-11-29 03:37:52 CET; 953ms ago
       Docs: man:openvpn(8)
             https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
             https://community.openvpn.net/openvpn/wiki/HOWTO
   Main PID: 4123809 (openvpn)
     Status: "Pre-connection initialization successful"
      Tasks: 1 (limit: 18689)
     Memory: 1.3M
     CGroup: /system.slice/system-openvpn.slice/[email protected]
             └─4123809 /usr/sbin/openvpn --daemon ovpn-techoverflow --status /run/openvpn/techoverflow.status 10 --cd /etc/openvpn --script-security 2 --config /etc/ope>

Nov 29 03:37:52 localgrid systemd[1]: Starting OpenVPN connection to techoverflow...
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: OpenVPN 2.4.7 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Sep >
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: library versions: OpenSSL 1.1.1f  31 Mar 2020, LZO 2.10
Nov 29 03:37:52 localgrid systemd[1]: Started OpenVPN connection to techoverflow.
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: TCP/UDP: Preserving recently used remote address: [AF_INET]83.135.163.227:19011
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: UDPv4 link local (bound): [AF_INET][undef]:1194
Nov 29 03:37:52 localgrid ovpn-techoverflow[4123809]: UDPv4 link remote: [AF_INET]83.135.163.22:19011
Nov 29 03:37:53 localgrid ovpn-techoverflow[4123809]: [nas-vpn.haar.techoverflow.net] Peer Connection Initiated with [AF_INET]83.135.163.227:19011


Posted by Uli Köhler in Linux, VPN

How to fix pyppeteer pyppeteer.errors.BrowserError: Browser closed unexpectedly:

Problem:

You want to run your Pyppeteer application on Linux, but you see an error message like

Traceback (most recent call last):
  File "PyppeteerExample.py", line 15, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "/usr/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
    return future.result()
  File "PyppeteerExample.py", line 6, in main
    browser = await launch()
  File "/usr/local/lib/python3.6/dist-packages/pyppeteer/launcher.py", line 305, in launch
    return await Launcher(options, **kwargs).launch()
  File "/usr/local/lib/python3.6/dist-packages/pyppeteer/launcher.py", line 166, in launch
    self.browserWSEndpoint = get_ws_endpoint(self.url)
  File "/usr/local/lib/python3.6/dist-packages/pyppeteer/launcher.py", line 225, in get_ws_endpoint
    raise BrowserError('Browser closed unexpectedly:\n')
pyppeteer.errors.BrowserError: Browser closed unexpectedly:

Solution:

In most cases, the underlying error for this error message is Puppeteer’s libX11-xcb.so.1: cannot open shared object file: No such file or directory. In order to fix that, you need to install the dependency libraries for Chromium, which is used internally by Puppeteer / Pyppeteer:

sudo apt install -y gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget


Posted by Uli Köhler in Pyppeteer, Python

Pyppeteer minimal example

This script is a minimal example of how to use Pyppeteer to fetch a web page and extract the page title:

#!/usr/bin/env python3
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('https://www.techoverflow.net')
    # Get the page title and print it
    title = await page.evaluate("() => document.querySelector('.logo-default').textContent")
    print(f"Page title: {title}") # prints Page title: TechOverflow
    # Cleanup
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())

How to run:

sudo pip3 install pyppeteer
python3 PyppeteerExample.py


Posted by Uli Köhler in Pyppeteer

How to get current page URL in pyppeteer

In pyppeteer you can use

url = await page.evaluate("() => window.location.href")

in order to get the current URL. Note that page.evaluate() runs whatever Javascript you give it – hence you can use your Javascript skills in order to create the desired effect.

Full example

import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('https://www.techoverflow.net')

    # Get the URL and print it
    url = await page.evaluate("() => window.location.href")
    print(url) # prints https://www.techoverflow.net/

    # Cleanup
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())


Posted by Uli Köhler in Pyppeteer, Python

How to simulate a click using pyppeteer

In order to click a button or a link using the pyppeteer library, you can use page.evaluate().

If you have a <button> element or a link (<a>) like

<button id="mybutton">

you can use

# Now click the search button    
await page.evaluate(f"""() => {{
    document.getElementById('mybutton').dispatchEvent(new MouseEvent('click', {{
        bubbles: true,
        cancelable: true,
        view: window
    }}));
}}""")

in order to generate a MouseEvent that simulates a click. Note that page.evaluate() will run any Javascript code you pass to it, so you can use your Javascript skills in order to create the desired effect.

Also see https://gomakethings.com/how-to-simulate-a-click-event-with-javascript/ for more details on how to simulate mouse clicks in pure Javascript without relying on jQuery.

Full example

This example will open https://techoverflow.net, enter a search term into the search field, click the search button and then create a screenshot:

import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('https://techoverflow.net')

    # Fill content into the search field
    content = "pypetteer"
    await page.evaluate(f"""() => {{
        document.getElementById('s').value = '{content}';
    }}""")

    # Now click the search button    
    await page.evaluate(f"""() => {{
        document.getElementById('searchsubmit').dispatchEvent(new MouseEvent('click', {{
            bubbles: true,
            cancelable: true,
            view: window
        }}));
    }}""")

    # Wait until search results page has been loaded
    await page.waitForSelector(".archive-title")

    # Now take screenshot and exit
    await page.screenshot({'path': 'screenshot.png'})
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())

The resulting screenshot.png will show the search results page.

Posted by Uli Köhler in Pyppeteer, Python

How to fill <input> field using pyppeteer

In order to fill an input field using the pyppeteer library, you can use page.evaluate().

If you have an <input> element like

<input name="myinput" id="myinput" type="text">

you can use

content = "My content" # This will be filled into <input id="myinput"> !
await page.evaluate(f"""() => {{
    document.getElementById('myinput').value = '{content}';
}}""")

Note that page.evaluate() will just run any Javascript code you give it, so you can put your Javascript skills to use in order to manipulate the page.

Full example

This example will open https://techoverflow.net, enter a search term into the search field and then create a screenshot:

#!/usr/bin/env python3
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch()
    page = await browser.newPage()
    await page.goto('https://techoverflow.net')
    
    # This example fills content into the search field
    content = "My search term"
    await page.evaluate(f"""() => {{
        document.getElementById('s').value = '{content}';
    }}""")

    # Make screenshot
    await page.screenshot({'path': 'screenshot.png'})
    await browser.close()

asyncio.get_event_loop().run_until_complete(main())

The resulting screenshot.png will show the page with the search term filled into the search field.

Posted by Uli Köhler in Pyppeteer, Python

Traefik docker-compose configuration with secure dashboard and Let’s Encrypt

This configuration provides only the minimum to get the Traefik dashboard running with Let’s Encrypt-driven SSL encryption and user authentication. It also redirects all HTTP requests to HTTPS in order to avoid insecure access to the dashboard and other services.

Let’s Encrypt is used with the HTTP-01 challenge. This means that Traefik MUST be reachable on port 80 from the Internet.

In order to install docker & docker-compose, see How to install docker and docker-compose on Ubuntu in 30 seconds.

First prepare the directory (/var/lib/traefik):

sudo mkdir /var/lib/traefik
sudo chown -R $USER: /var/lib/traefik
cd /var/lib/traefik
mkdir acme conf

Now create docker-compose.yml:

version: "3.3"

services:
  traefik:
    image: "traefik:v2.3"
    container_name: "traefik"
    command:
      - "--api=true"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./acme:/etc/traefik/acme"
      - "./traefik.toml:/etc/traefik/traefik.toml"
      - "./conf:/etc/traefik/conf"

Now create the main traefik.toml configuration file:

defaultEntryPoints = ["http", "https"]

[api]
dashboard = true

# You can create config files in /var/lib/traefik/conf/ and Traefik will automatically reload them
[providers]
[providers.file]
directory = "/etc/traefik/conf/"
watch = true

# Change this to INFO if you don't want as much debug output
[log]
level = "DEBUG"

[entryPoints.web]
address = ":80"
[entryPoints.web.http]
[entryPoints.web.http.redirections]
[entryPoints.web.http.redirections.entryPoint]
to = "websecure"
scheme = "https"
[entryPoints.websecure]
address = ":443"

[certificatesResolvers.letsencrypttls.acme]
# TODO Add your email here
email = "[email protected]"
storage = "/etc/traefik/acme/acme.json"
[certificatesResolvers.letsencrypttls.acme.httpChallenge]
entryPoint = "web"

Now we need to create the API config file in conf/api.toml:

[http.routers.traefik-api]
# TODO: Set your domain here !!!
rule = "Host(`traefik.example.com`)"
service = "api@internal"
middlewares = ["auth"]
[http.routers.traefik-api.tls]
certresolver = "letsencrypttls"
[http.middlewares.auth.basicAuth]
# TODO Add your admin user & password here, generate e.g. using https://wtools.io/generate-htpasswd-online
users = [
  "admin:$1$ySFBr~_y$GsKgEasDQkpCX8sO8vNia0",
]

Don’t forget to change your email address and the domain name in the config files (marked by TODO). Ensure you have set up all DNS records correctly so that your domain points to the server running Traefik!

Now it’s time to startup Traefik for the first time:

docker-compose up

Traefik will take a few seconds to automatically generate the Let’s Encrypt certificate for your domain. Once you see a message like

traefik    | time="2020-09-20T23:48:30Z" level=debug msg="Certificates obtained for domains [traefik.mydomain.com]" providerName=letsencrypttls.acme routerName=traefik-api@file rule="Host(`traefik.mydomain.com`)"

the certificate is available and loaded automatically.

Now you can go to https://traefik.mydomain.com/, log in with the username and password you generated, and check out the dashboard.
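
You can also verify the setup from the command line; a quick check, assuming the domain from conf/api.toml (substitute your own domain and credentials):

# HTTP should redirect to HTTPS
curl -I http://traefik.example.com/
# HTTPS without credentials should return 401 Unauthorized
curl -I https://traefik.example.com/
# With valid credentials the dashboard should be reachable
curl -I -u admin:yourpassword https://traefik.example.com/dashboard/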


If desired, you can also set up a systemd service to automatically start Traefik on boot (generated using the docker-compose systemd .service generator). In order to do this, first stop the running docker-compose instance using Ctrl+C (if you still have the terminal open) and docker-compose down.

Now add this as /etc/systemd/system/traefik.service:

[Unit]
Description=traefik
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=/var/lib/traefik
# Shutdown container (if running) when unit is stopped
ExecStartPre=/usr/local/bin/docker-compose -f docker-compose.yml down -v
# Start container when unit is started
ExecStart=/usr/local/bin/docker-compose -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=/usr/local/bin/docker-compose -f docker-compose.yml down -v

[Install]
WantedBy=multi-user.target

and run

sudo systemctl enable traefik.service
sudo systemctl start traefik.service


Posted by Uli Köhler in Traefik