Container

How to list just the container names of all Docker containers

Use

docker ps --format "{{.Names}}"

Example output:

$ docker ps --format "{{.Names}}"
flamboyant_cohen
lucid_shtern
gitlab-techoverflow_gitlab_1
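
If you need the names in a shell script, you can simply loop over this output. A minimal sketch (not part of the original post) that just prints each name:

for name in $(docker ps --format "{{.Names}}"); do
    echo "Container: $name"
done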


Posted by Uli Köhler in Docker

How to list just the container IDs of all Docker containers

Use

docker ps -q

Example output:

$ docker ps -q
29f721fa3124
39a240769ae8
cae90fe55b9a
90580dc4a6d2
348c24768864
5e64779be4f0
78874ae92a8e
92650c527106
948a8718050f
7aad5a210e3c
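
The ID list is mainly useful for feeding other docker commands. A common example (just a sketch, and note that it affects all running containers) is stopping everything at once:

docker stop $(docker ps -q)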


Posted by Uli Köhler in Docker

How to correctly use apk in Dockerfile

In Dockerfiles you should always use apk with --no-cache to prevent the package index cache from being stored in the resulting image, for example:

FROM alpine:3.17
RUN apk add --no-cache python3-dev


Posted by Uli Köhler in Alpine Linux, Docker

How to install magic-wormhole on CoreOS

Step 1: Install pip

sudo rpm-ostree install python3-pip

then reboot for the changes to take effect:

sudo systemctl reboot

Step 2: Install magic-wormhole

sudo pip install magic-wormhole
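
After the installation, the wormhole command should be available. As a quick usage sketch (not part of the original post), you can transfer a file between two machines like this:

# On the sending machine (prints a one-time code)
wormhole send ./myfile.txt
# On the receiving machine (enter the code when prompted)
wormhole receive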


Posted by Uli Köhler in CoreOS

How to specify which docker image to use in .gitlab-ci.yml

The following .gitlab-ci.yml will build a native executable project using cmake with a custom docker image:

stages:
  - build

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4

In this example, we have only one stage – if you have multiple stages, you can specify different images for each of them.
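
As a sketch (the test job, its image and its script are placeholder assumptions, not from the original post), a two-stage pipeline with a different image per job could look like this:

stages:
  - build
  - test

buildmyexe:
  stage: build
  image: 'ulikoehler/ubuntu-gcc-cmake:latest'
  script:
    - cmake .
    - make -j4

testmyexe:
  stage: test
  image: 'ubuntu:22.04'
  script:
    - ./build/mytest # placeholder test executable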

Posted by Uli Köhler in Docker, git, GitLab

How to show current CoreOS system version using rpm-ostree

In CoreOS, run

sudo rpm-ostree status

and look for the entry with the dot (●) in front of it to see which deployment – i.e. which CoreOS version – is currently active. Then, look for Version: in the line below it. This serves as an alternative to lsb_release -a, which is not available on CoreOS.

Example output:

State: idle
AutomaticUpdatesDriver: Zincati
  DriverState: active; periodically polling for updates (last checked Thu 2022-12-08 03:49:05 UTC)
Deployments:
● fedora:fedora/x86_64/coreos/stable
                  Version: 37.20221106.3.0 (2022-11-28T20:05:48Z)
               BaseCommit: 6278bd1e5f311880a6975307e7ce734076a0b1a37f8a97c875c07037c748ddcc
             GPGSignature: Valid signature by ACB5EE4E831C74BB7C168D27F55AD3FB5323552A
          LayeredPackages: bmon docker-compose htop iotop make tailscale tree wget xe-guest-utilities-latest

  fedora:fedora/x86_64/coreos/stable
                  Version: 36.20221030.3.0 (2022-11-11T15:51:02Z)
               BaseCommit: eab21e5b533407b67b1751ba64d83c809d076edffa1ff002334603bf13655a14
             GPGSignature: Valid signature by 53DED2CB922D8B8D9E63FD18999F7CBF38AB71F4
          LayeredPackages: bmon docker-compose htop iotop make tailscale tree wget xe-guest-utilities-latest

In this example, CoreOS 37.20221106.3.0 is active.

Posted by Uli Köhler in CoreOS

How to fix zincati not updating CoreOS: rpm-ostree deploy failed: error: Packages not found: …

Problem:

My zincati service – the service that automatically updates CoreOS – could not update CoreOS, as shown by the following logs (view them with journalctl -xfu zincati.service):

[ERROR zincati::update_agent::actor] failed to stage deployment: rpm-ostree deploy failed:
    error: Packages not found: magic-wormhole

Solution:

The solution typically involves uninstalling the offending package – in this case magic-wormhole – using

sudo rpm-ostree uninstall magic-wormhole

Note that this might uninstall a service that is required for your infrastructure, and it will delete files associated with the package in the process of uninstalling it. You should make a backup of valuable data in any case.
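
Afterwards, you can restart zincati and watch its log to verify that the update can now be staged (just a suggestion, not strictly required):

sudo systemctl restart zincati.service
journalctl -xfu zincati.service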

Posted by Uli Köhler in Allgemein, CoreOS

How to configure SMTP E-Mail for InvenTree (docker/docker-compose)

You can easily configure SMTP email for InvenTree by adding the following config to your .env file (I’m using the docker production config):

INVENTREE_EMAIL_HOST=smtp.mydomain.com
[email protected]
INVENTREE_EMAIL_PASSWORD=cheen1zaiCh4yaithaecieng2jazey
INVENTREE_EMAIL_TLS=true
[email protected]

Even after InvenTree has already been set up, it is sufficient to just add this config to the .env file and restart the server.
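
With the docker-compose based production setup, restarting typically looks like this (a sketch, adjust to how you run InvenTree):

docker-compose down && docker-compose up -d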

Posted by Uli Köhler in Docker, InvenTree

How to remove container_name: … statements from docker-compose.yml automatically using sed

This command will remove all container_name statements from a docker-compose.yml config file:

sed -zie 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml

Example input:

services:
    inventree-proxy:
        container_name: inventree-proxy
        image: nginx:stable
        depends_on:
            - inventree-server
[...]


Example output:

services:
    inventree-proxy:
        image: nginx:stable
        depends_on:
            - inventree-server
[...]
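
If you want to preview the result without modifying the file, you can run the same command without the in-place flag (a minor variation of the command above):

sed -z 's#container_name:\s*[a-zA-Z0-9_-]*\s*##g' docker-compose.yml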


Posted by Uli Köhler in Docker, Linux

How to fix lxc launch Failed getting root disk: No root device could be found

Problem:

While trying to launch a lxc container using a command like

lxc launch ubuntu:22.04 mycontainer

you see the following error message:

Creating mycontainer
Error: Failed instance creation: Failed creating instance record: Failed initialising instance: Failed getting root disk: No root device could be found

Solution:

You didn’t initialize your LXD storage properly. Run

lxd init

in order to configure the storage for lxd. For most setups – except performance-critical production setups – I recommend using the dir storage backend because it does not require any further configuration. You can leave all other options at their default values.

Name of the storage backend to use (zfs, btrfs, ceph, cephobject, dir, lvm) [default=zfs]: dir
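
If you prefer a non-interactive setup, lxd can also be initialized with defaults in a single command (a sketch, assuming your lxd version supports these flags):

lxd init --auto --storage-backend dir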
Posted by Uli Köhler in Container, LXC

Move LXC container to new VM

Create snapshot on your current VM

lxc snapshot container_name snapshot_name

Validate the created snapshot by checking the snapshot list displayed with:

lxc info container_name 

In case you have not named your snapshot, look for the most recent creation date. It might have a default name like snap1.

Create an image from the snapshot

lxc publish container_name/snapshot_name --alias="image_alias" description="image_description"

Verify the created image by displaying its info with:

lxc image info image_alias

Export the created image to an archive in your current path

lxc image export image_alias image_archive_name

Send the file to your new VM

Make sure that you can establish an SSH connection to your new VM from your old VM, e.g. via a VPN or WireGuard connection. Use scp to copy the image like so:

scp ./image_archive_name.tar.gz user@host:/home/user

Import image and launch new container on your new VM

Make sure lxc and lxd are installed on your new VM and then import the image like so:

lxc image import image_archive_name.tar.gz --alias image_alias_on_new_vm

Make sure the imported image appears in the list on your new VM.

lxc image list

Then launch a new container from the image with:

lxc launch image_alias_on_new_vm container_name
Posted by Joshua Simon in Container, LXC

How to fix docker emqx_ctl Node ’emqx@node1.emqx.mydomain.com’ not responding to pings.

Problem:

When trying to run emqx_ctl in a dockerized emqx setup using a command like

docker-compose exec emqx ./bin/emqx status

you see an error message like

Node 'emqx@node1.emqx.mydomain.com' not responding to pings.
/opt/emqx/bin/emqx: line 41: die: command not found

Solution:

The problem here is that emqx_ctl is trying to connect to the IP address that node1.emqx.mydomain.com resolves to, but that address is not the internal IP address of the Docker container (it is likely the public IP address of your server instead).

The solution here is to create a network alias within docker/docker-compose so that the Docker DNS system resolves node1.emqx.mydomain.com to the internal IP address of the container.

For example, in docker-compose, you can create your network using

networks:
  emqx:
    driver: bridge

and then configure the alias for the container using

services:
  emqx:
    image: emqx:4.4.4
    environment:
      - "EMQX_NAME=emqx"
      - "EMQX_HOST=node1.emqx.mydomain.com"
      - "EMQX_LOADED_PLUGINS=emqx_recon,emqx_retainer,emqx_management,emqx_dashboard"
    ports:
      - 18083:18083
      - 1883:1883
    volumes:
      - ./emqx_data:/opt/emqx/data
      - ./emqx_log:/opt/emqx/log
    networks:
      emqx:
        aliases:
          - "node1.emqx.mydomain.com"
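
After recreating the container with this configuration, the status command from above should work again. A quick check (sketch):

docker-compose up -d emqx
docker-compose exec emqx ./bin/emqx status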


Posted by Uli Köhler in Container, Docker, EMQX, MQTT

How to setup Netmaker using docker-compose in under 15 minutes

In this post, we’ll build a simple setup for running Netmaker with a PostgreSQL backend using docker-compose behind an external Traefik reverse proxy.

First, create a directory for the Netmaker files to reside in, e.g.:

mkdir /opt/netmaker

cd to that directory:

cd /opt/netmaker

At this point we’ll download the Mosquitto config from GitHub and open the required ports in the ufw firewall, as shown in the sketch below. I reserved 1000 ports (51821:52821) in order to facilitate having a lot of networks (more than the default 9). My traefik config is the one from Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges, which allows having a single *.netmaker.mydomain.com Let’s Encrypt wildcard certificate.
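
For example (a sketch, assuming the config still lives at docker/mosquitto.conf in the gravitl/netmaker repository and that ufw is your firewall):

wget -O mosquitto.conf https://raw.githubusercontent.com/gravitl/netmaker/master/docker/mosquitto.conf
sudo ufw allow 51821:52821/udp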

Now, create docker-compose.yml in that directory

version: "3.4"

services:
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=netmaker
      - POSTGRES_USER=netmaker
  netmaker: # The Primary Server for running Netmaker
    image: gravitl/netmaker:v0.14.5
    depends_on:
      - postgres
    cap_add:
      - NET_ADMIN
      - NET_RAW
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv6.conf.all.forwarding=1
    restart: always
    volumes: # Volume mounts necessary for sql, coredns, and mqtt
      - ./netmaker_dnsconfig:/root/config/dnsconfig
      - ./netmaker_sqldata:/root/data
      - ./netmaker_shared_certs:/etc/netmaker
    environment: # Necessary capabilities to set iptables when running in container
      SERVER_NAME: "broker.${NETMAKER_BASE_DOMAIN}" # The domain/host IP indicating the mq broker address
      SERVER_HOST: "${NETMAKER_PUBLIC_IP}" # Set to public IP of machine.
      SERVER_HTTP_HOST: "api.${NETMAKER_BASE_DOMAIN}" # Overrides SERVER_HOST if set. Useful for making HTTP available via different interfaces/networks.
      SERVER_API_CONN_STRING: "api.${NETMAKER_BASE_DOMAIN}:443"
      # COREDNS_ADDR: "${NETMAKER_PUBLIC_IP}" # Address of the CoreDNS server. Defaults to SERVER_HOST
      DNS_MODE: "off" # Enables DNS Mode, meaning all nodes will set hosts file for private dns settings.
      API_PORT: "8081" # The HTTP API port for Netmaker. Used for API calls / communication from front end. If changed, need to change port of BACKEND_URL for netmaker-ui.
      CLIENT_MODE: "on" # Depricated. CLIENT_MODE should always be ON
      REST_BACKEND: "on" # Enables the REST backend (API running on API_PORT at SERVER_HTTP_HOST). Change to "off" to turn off.
      DISABLE_REMOTE_IP_CHECK: "off" # If turned "on", Server will not set Host based on remote IP check. This is already overridden if SERVER_HOST is set. Turned "off" by default.
      TELEMETRY: "on" # Whether or not to send telemetry data to help improve Netmaker. Switch to "off" to opt out of sending telemetry.
      RCE: "off" # Enables setting PostUp and PostDown (arbitrary commands) on nodes from the server. Off by default.
      MASTER_KEY: "${NETMAKER_MASTER_KEY}" # The admin master key for accessing the API. Change this in any production installation.
      CORS_ALLOWED_ORIGIN: "*" # The "allowed origin" for API requests. Change to restrict where API requests can come from.
      DISPLAY_KEYS: "on" # Show keys permanently in UI (until deleted) as opposed to 1-time display.
      DATABASE: "postgres" # Database to use - sqlite, postgres, or rqlite
      SQL_HOST: "postgres"
      SQL_DB: "netmaker"
      SQL_USER: "netmaker"
      SQL_PASS: "${POSTGRES_PASSWORD}"
      NODE_ID: "${SERVER_NAME}" # used for HA - identifies this server vs other servers
      MQ_HOST: "mq"  # the address of the mq server. If running from docker compose it will be "mq". Otherwise, need to input address. If using "host networking", it will find and detect the IP of the mq container.
      MQ_SERVER_PORT: "1883" # the reachable port of MQ by the server - change if internal MQ port changes (or use external port if MQ is not on the same machine)
      MQ_PORT: "443" # the reachable port of MQ - change if external MQ port changes (port on proxy, not necessarily the one exposed in docker-compose)
      HOST_NETWORK: "off" # whether or not host networking is turned on. Only turn on if configured for host networking (see docker-compose.hostnetwork.yml). Will set host-level settings like iptables.
      VERBOSITY: "1" # logging verbosity level - 1, 2, or 3
      MANAGE_IPTABLES: "on" # deprecated
      # PORT_FORWARD_SERVICES: "ssh,mq" # decide which services to port forward ("dns","ssh", or "mq")
      # this section is for OAuth
      AUTH_PROVIDER: "" # "<azure-ad|github|google|oidc>"
      CLIENT_ID: "" # "<client id of your oauth provider>"
      CLIENT_SECRET: "" # "<client secret of your oauth provider>"
      FRONTEND_URL: "" # "https://dashboard.<netmaker base domain>"
      AZURE_TENANT: "" # "<only for azure, you may optionally specify the tenant for the OAuth>"
      OIDC_ISSUER: "" # https://oidc.yourprovider.com - URL of oidc provider
    ports:
      - "51821-52821:51821-52821/udp" # wireguard ports
    expose:
      - "8081" # api port
    labels: # only for use with traefik proxy (default)
      - traefik.enable=true
      - traefik.http.routers.netmaker-api.entrypoints=websecure
      - traefik.http.routers.netmaker-api.rule=Host(`api.${NETMAKER_BASE_DOMAIN}`)
      - traefik.http.routers.netmaker-api.service=netmaker-api
      - traefik.http.services.netmaker-api.loadbalancer.server.port=8081
      - "traefik.http.routers.netmaker-api.tls.certresolver=cloudflare"
      - "traefik.http.routers.netmaker-api.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.http.routers.netmaker-api.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"
  netmaker-ui:  # The Netmaker UI Component
    container_name: netmaker-ui
    image: gravitl/netmaker-ui:v0.14.5
    depends_on:
      - netmaker
    links:
      - "netmaker:api"
    restart: always
    environment:
      BACKEND_URL: "https://api.${NETMAKER_BASE_DOMAIN}" # URL where UI will send API requests. Change based on SERVER_HOST, SERVER_HTTP_HOST, and API_PORT
    expose:
      - "80"
    labels:
      - traefik.enable=true
      - traefik.http.middlewares.nmui-security.headers.accessControlAllowOriginList=*.${NETMAKER_BASE_DOMAIN}
      - traefik.http.middlewares.nmui-security.headers.stsSeconds=31536000
      - traefik.http.middlewares.nmui-security.headers.browserXssFilter=true
      - traefik.http.middlewares.nmui-security.headers.customFrameOptionsValue=SAMEORIGIN
      - traefik.http.middlewares.nmui-security.headers.customResponseHeaders.X-Robots-Tag=none
      - traefik.http.middlewares.nmui-security.headers.customResponseHeaders.Server= # Remove the server name
      - traefik.http.routers.netmaker-ui.entrypoints=websecure
      - traefik.http.routers.netmaker-ui.middlewares=nmui-security@docker
      - traefik.http.routers.netmaker-ui.rule=Host(`dashboard.${NETMAKER_BASE_DOMAIN}`)
      - traefik.http.routers.netmaker-ui.service=netmaker-ui
      - traefik.http.services.netmaker-ui.loadbalancer.server.port=80
      - "traefik.http.routers.netmaker-ui.tls.certresolver=cloudflare"
      - "traefik.http.routers.netmaker-ui.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.http.routers.netmaker-ui.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"
  mq: # the MQTT broker for netmaker
    container_name: mq
    image: eclipse-mosquitto:2.0.11-openssl
    depends_on:
      - netmaker
    restart: unless-stopped
    volumes:
      - ./mosquitto.conf:/mosquitto/config/mosquitto.conf # need to pull conf file from github before running (under docker/mosquitto.conf)
      - ./mosquitto_data:/mosquitto/data
      - ./mosquitto_logs:/mosquitto/log
      - ./netmaker_shared_certs:/mosquitto/certs
    expose:
      - "8883"
    labels:
      - traefik.enable=true
      - traefik.tcp.routers.mqtts.rule=HostSNI(`broker.${NETMAKER_BASE_DOMAIN}`)
      - traefik.tcp.routers.mqtts.tls.passthrough=true
      - traefik.tcp.services.mqtts-svc.loadbalancer.server.port=8883
      - traefik.tcp.routers.mqtts.service=mqtts-svc
      - traefik.tcp.routers.mqtts.entrypoints=websecure
      - "traefik.tcp.routers.mqtts.tls.certresolver=cloudflare"
      - "traefik.tcp.routers.mqtts.tls.domains[0].main=${NETMAKER_BASE_DOMAIN}"
      - "traefik.tcp.routers.mqtts.tls.domains[0].sans=*.${NETMAKER_BASE_DOMAIN}"

After that, create .env in said directory containing some info about your node:

SERVER_NAME=netmaker-01
NETMAKER_BASE_DOMAIN=netmaker.mydomain.com
NETMAKER_MASTER_KEY=geeveeBaeQuie1cie6aaz6eepahleo
NETMAKER_PUBLIC_IP=101.252.11.54
POSTGRES_PASSWORD=ahph8Jih4aesheel7id1uyo0gietai

Adjust the NETMAKER_BASE_DOMAIN and the NETMAKER_PUBLIC_IP accordingly. Also, you need to choose a random NETMAKER_MASTER_KEY and POSTGRES_PASSWORD.
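
One way to generate such random values (just a suggestion, any sufficiently long random string works) is openssl:

openssl rand -hex 24 # e.g. for NETMAKER_MASTER_KEY
openssl rand -hex 24 # e.g. for POSTGRES_PASSWORD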

Now we’ll use the script from Create a systemd service for your docker-compose project in 10 seconds in order to create a systemd service to automatically run the service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This script will also automatically start the service (i.e. docker-compose up).


Posted by Uli Köhler in Container, Docker

How to setup Wekan using docker-compose in just 3 minutes

First, create a directory for the Wekan files to reside in, e.g.:

mkdir /opt/wekan

Change to that directory:

cd /opt/wekan

Now we need to create the data directory, which needs to be owned by UID 999 in order for Wekan to store uploads:

mkdir -p wekan_data && chown -R 999:999 wekan_data

Now, create docker-compose.yml in that directory

version: '3.4'

services:
  wekandb:
    restart: always
    image: mongo:5
    command: mongod --logpath /dev/null --oplogSize 128 --quiet
    healthcheck:
      test: ["CMD", "mongo", "--quiet", "--eval", "'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'"]
      interval: 15s
      timeout: 10s
      retries: 3
      start_period: 10s
    volumes:
      - ./wekan-db:/data/db
      - ./wekan-db-dump:/dump
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro


  wekan:
    image: quay.io/wekan/wekan
    restart: always
    ports:
      - 7972:8080
    environment:
      - MONGO_URL=mongodb://wekandb:27017/wekan
      - ROOT_URL=${URL}
      - MAIL_URL=${EMAIL_URL}
      - MAIL_FROM=${EMAIL_FROM}
      - WITH_API=true
      - WRITABLE_PATH=/data
    volumes:
      - ./wekan_data:/data
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - wekandb

After that, create .env in said directory containing some info about your node:

URL=https://wekan.mydomain.com
EMAIL_URL=smtp://noreply%40mydomain.com:[email protected]:25/
EMAIL_FROM='My Wekan <[email protected]>'

Now we’ll use the script from Create a systemd service for your docker-compose project in 10 seconds in order to create a systemd service to automatically run the service:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This script will also automatically start the service (i.e. docker-compose up).

Now you can set up your reverse proxy to point your domain – e.g. wekan.techoverflow.net – to port 7972 (or change that port in docker-compose.yml). Which reverse proxy you use doesn’t matter all that much; I use both nginx and traefik. I will cover the configuration for these reverse proxies in future posts, but see the sketch below for a rough orientation.
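
As a rough orientation only (a minimal nginx sketch, assuming TLS and the listen directives are configured elsewhere; not a complete configuration):

server {
    server_name wekan.mydomain.com;
    # ... listen / TLS configuration ...

    location / {
        proxy_pass http://localhost:7972;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}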

Once you can access Wekan, you can register as a new user. The first user will be an admin and can also disable registration using the Web UI.

Posted by Uli Köhler in Container, Docker

How to fix wekan Path “/data/attachments” is not writable!

Problem:

While starting your dockerized wekan setup, you see an error message like this:

wekan_1    | errorClass [Error]: [FilesCollection.attachments] Path "/data/attachments" is not writable! [401]
wekan_1    |     at new FilesCollection (packages/ostrio:files/server.js:354:15)
wekan_1    |     at module (models/attachments.js:52:15)
wekan_1    |     at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |     at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |     at Module.moduleLink [as link] (/build/programs/server/npm/node_modules/meteor/modules/node_modules/@meteorjs/reify/lib/runtime/index.js:52:22)
wekan_1    |     at module (server/publications/attachments.js:1:24)
wekan_1    |     at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |     at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |     at require (packages/modules-runtime.js:258:21)
wekan_1    |     at /build/programs/server/app/app.js:162362:1
wekan_1    |     at /build/programs/server/boot.js:401:38
wekan_1    |     at Array.forEach (<anonymous>)
wekan_1    |     at /build/programs/server/boot.js:226:21
wekan_1    |     at /build/programs/server/boot.js:464:7
wekan_1    |     at Function.run (/build/programs/server/profile.js:280:14)
wekan_1    |     at /build/programs/server/boot.js:463:13 {
wekan_1    |   isClientSafe: true,
wekan_1    |   error: 401,
wekan_1    |   reason: '[FilesCollection.attachments] Path "/data/attachments" is not writable!',
wekan_1    |   details: Error: EACCES: permission denied, mkdir '/data/attachments'
wekan_1    |       at Object.mkdirSync (fs.js:1014:3)
wekan_1    |       at new FilesCollection (packages/ostrio:files/server.js:348:10)
wekan_1    |       at module (models/attachments.js:52:15)
wekan_1    |       at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |       at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |       at Module.moduleLink [as link] (/build/programs/server/npm/node_modules/meteor/modules/node_modules/@meteorjs/reify/lib/runtime/index.js:52:22)
wekan_1    |       at module (server/publications/attachments.js:1:24)
wekan_1    |       at fileEvaluate (packages/modules-runtime.js:336:7)
wekan_1    |       at Module.require (packages/modules-runtime.js:238:14)
wekan_1    |       at require (packages/modules-runtime.js:258:21)
wekan_1    |       at /build/programs/server/app/app.js:162362:1
wekan_1    |       at /build/programs/server/boot.js:401:38
wekan_1    |       at Array.forEach (<anonymous>)
wekan_1    |       at /build/programs/server/boot.js:226:21
wekan_1    |       at /build/programs/server/boot.js:464:7
wekan_1    |       at Function.run (/build/programs/server/profile.js:280:14)
wekan_1    |       at /build/programs/server/boot.js:463:13 {
wekan_1    |     errno: -13,
wekan_1    |     syscall: 'mkdir',
wekan_1    |     code: 'EACCES',
wekan_1    |     path: '/data/attachments'
wekan_1    |   },
wekan_1    |   errorType: 'Meteor.Error'
wekan_1    | }

Solution:

You have mounted a local directory as the wekan data directory, for example like this in docker-compose.yml:

wekan:
  image: quay.io/wekan/wekan
  # ...
  environment:
    # ...
    - WRITABLE_PATH=/data
  volumes:
    - ./wekan_data:/data

But this directory does not have the correct permissions set. You can fix it using this command on the directory (wekan_data in this example):

sudo chown -R 999:999 wekan_data

After that, restart wekan and the issue should be fixed.

Posted by Uli Köhler in Container, Docker, Networking

How to fix docker MariaDB Incorrect definition of table mysql.column_stats: expected column ‘hist_type’ at position 9…

Problem:

In the log of your MariaDB docker server, you see messages like

mariadb_1    | 2022-06-07 20:24:00 283 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'hist_type' at position 9 to have type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB','JSON_HB'), found type enum('SINGLE_PREC_HB','DOUBLE_PREC_HB').
mariadb_1    | 2022-06-07 20:24:00 283 [ERROR] Incorrect definition of table mysql.column_stats: expected column 'histogram' at position 10 to have type longblob, found type varbinary(255).

Solution:

This typically happens after upgrading the MariaDB version.

In order to fix it, run mysql_upgrade in the container:

docker-compose exec mariadb mysql_upgrade -uroot -pchopahl0aib4eiphuk5bao3shiVoow

where chopahl0aib4eiphuk5bao3shiVoow is your MySQL root password.

If you have your password in .env as we recommend, you can use this command:

source .env && docker-compose exec mariadb mysql_upgrade -uroot -p${MARIADB_ROOT_PASSWORD}


Posted by Uli Köhler in Docker

LXC delete image

1. List your lxc images

lxc image list

2. Delete image

lxc image delete [image alias]
Posted by Joshua Simon in Container, LXC

LXC create container from snapshot

1. Create a snapshot

lxc snapshot [mycontainer] [snapshot name]

2. Create local image from snapshot

lxc publish [mycontainer]/[snapshot name] --alias [image alias]

3. List your images

lxc image list

4. Create container from image

lxc launch [image alias] [mynewcontainer]
Posted by Joshua Simon in Container, LXC

How to copy an LXC container

You can copy a running lxc container like this:

lxc copy [name of container to be copied] [new container]

for example

lxc copy mycontainer mycontainerCopy
Posted by Joshua Simon in LXC

How to fix WordPress docker image upload size 2M limit

Problem:

You are running your WordPress instance using the official WordPress Apache image.

However, the WordPress Media page has a maximum upload size of 2 Megabytes.

Solution:

This setting is configured in the php.ini used internally by the WordPress docker image. While it is possible to use a custom php.ini, it’s much easier to edit .htaccess. Just edit .htaccess in the WordPress directory where wp-config.php is located and append the following after # END WordPress to set the upload limit to 256 Megabytes:

php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 300
php_value max_input_time 300

The change should be effective immediately after reloading the page. Note that you might still need to configure your reverse proxy (if any) to allow larger uploads. My recommendation is to just try it out as-is; if large uploads still fail, it’s likely that your reverse proxy is at fault.
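
For example, if nginx is the reverse proxy in front of WordPress, the relevant directive is client_max_body_size; place it in the appropriate server or location block (a sketch):

client_max_body_size 256M;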

Full .htaccess example:

# BEGIN WordPress
# The directives (lines) between "BEGIN WordPress" and "END WordPress" are
# dynamically generated and should only be modified via WordPress filters.
# Any changes to the directives between these markers will be overwritten.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteRule ^en/wp-login.php /wp-login.php [QSA,L]
RewriteRule ^de/wp-login.php /wp-login.php [QSA,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

# END WordPress

php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 300
php_value max_input_time 300


Posted by Uli Köhler in Container, Docker, Wordpress