1. List your lxc images
lxc image list
2. Delete image
lxc image delete [image alias]
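For example, assuming you previously created an image with the (hypothetical) alias myimage, you would delete it like this:
lxc image delete myimage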
1. Create a snapshot
lxc snapshot [mycontainer] [snapshot name]
2. Create local image from snapshot
lxc publish [mycontainer]/[snapshot name] --alias [image alias]
3. List your images
lxc image list
4. Create container from image
lxc launch [image alias] [mynewcontainer]
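As a concrete worked example, assuming a container named mycontainer and the hypothetical names snap1, mycontainer-image and mynewcontainer, the full sequence looks like this:
lxc snapshot mycontainer snap1
lxc publish mycontainer/snap1 --alias mycontainer-image
lxc image list
lxc launch mycontainer-image mynewcontainer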
You can copy a running lxc container like this:
lxc copy [name of container to be copied] [new container]
For example:
lxc copy mycontainer mycontainerCopy
You are running your WordPress instance using the official WordPress Apache image. However, the WordPress Media page has a maximum upload size of 2 Megabytes.
This setting is configured in the php.ini used by the WordPress docker image internally. While it is possible to use a custom php.ini, it’s much easier to edit .htaccess. Just edit .htaccess in the wordpress directory where wp-config.php is located and append the following after # END WordPress to set the upload limit to 256 Megabytes:
php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 300
php_value max_input_time 300
The change should be effective immediately after reloading the page. Note that you still might need to configure your reverse proxy (if any) to allow larger uploads. My recommendation is to just try it out as is and if large uploads fail, it’s likely that your reverse proxy is at fault.
For reference, the complete .htaccess then looks like this:
# BEGIN WordPress
# The directives (lines) between "BEGIN WordPress" and "END WordPress" are
# dynamically generated and should only be modified via WordPress filters.
# Any changes to the directives between these markers will be overwritten.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteRule ^en/wp-login.php /wp-login.php [QSA,L]
RewriteRule ^de/wp-login.php /wp-login.php [QSA,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

php_value upload_max_filesize 256M
php_value post_max_size 256M
php_value max_execution_time 300
php_value max_input_time 300
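If large uploads still fail, the limit of the reverse proxy in front of WordPress usually needs to be raised as well. As a sketch, assuming an nginx reverse proxy, you would add the following to the relevant server or location block and reload nginx:
client_max_body_size 256M;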
In order to install tailscale on Fedora CoreOS (this post has been tested on Fedora CoreOS 35), you can use this sequence of commands:
sudo curl -o /etc/yum.repos.d/tailscale.repo https://pkgs.tailscale.com/stable/fedora/tailscale.repo
sudo rpm-ostree install tailscale
Now reboot using
sudo systemctl reboot
Once rebooted, you can enable the service using
sudo systemctl enable --now tailscaled
and then configure tailscale as usual:
sudo tailscale up --login-server .... --authkey ...
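To verify that the node has actually joined your network, you can then check the connection state and the assigned address (the exact output depends on your tailnet):
sudo tailscale status
sudo tailscale ip -4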
Also see our post on How to connect tailscale to headscale server on Linux
This post shows you a really quick method to create a systemd timer that runs cron.php on dockerized Nextcloud (using docker-compose). We created a script that automatically creates a systemd timer and related service to run cron.php hourly, using the command from our previous post How to run Nextcloud cron in a docker-compose based setup.
In order to run our autoinstall script, run:
wget -qO- https://techoverflow.net/scripts/install-nextcloud-cron.sh | sudo bash /dev/stdin
from the directory where docker-compose.yml is located. Note that the script will use the directory name as the name for the service and timer that are created. For example, running the script in /var/lib/nextcloud-mydomain will cause nextcloud-mydomain-cron to be used as the service name.
Example output from the script:
Creating systemd service... /etc/systemd/system/nextcloud-mydomain-cron.service
Creating systemd timer... /etc/systemd/system/nextcloud-mydomain-cron.timer
Enabling & starting nextcloud-mydomain-cron.timer
Created symlink /etc/systemd/system/timers.target.wants/nextcloud-mydomain-cron.timer → /etc/systemd/system/nextcloud-mydomain-cron.timer.
The script will create /etc/systemd/system/nextcloud-mydomain-cron.service containing the specification of what exactly to run:
[Unit]
Description=nextcloud-mydomain-cron

[Service]
Type=oneshot
ExecStart=/usr/bin/docker-compose exec -T -u www-data nextcloud php cron.php
WorkingDirectory=/var/lib/nextcloud-mydomain
and /etc/systemd/system/nextcloud-mydomain-cron.timer containing the logic for when the .service is started:
[Unit]
Description=nextcloud-mydomain-cron

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
and will automatically start and enable the timer. This means: no further steps are needed after running this script!
In order to show the current status of the timer, use e.g.
sudo systemctl status nextcloud-mydomain-cron.timer
● nextcloud-mydomain-cron.timer - nextcloud-mydomain-cron
     Loaded: loaded (/etc/systemd/system/nextcloud-mydomain-cron.timer; enabled; vendor preset: disabled)
     Active: active (waiting) since Fri 2022-04-01 00:33:48 UTC; 6min ago
    Trigger: Fri 2022-04-01 01:00:00 UTC; 19min left
   Triggers: ● nextcloud-mydomain-cron.service

Apr 01 00:33:48 CoreOS systemd[1]: Started nextcloud-mydomain-cron.
In the
Trigger: Fri 2022-04-01 01:00:00 UTC; 19min left
line you can see when the service will be run next. By default, the script generates timers that run OnCalendar=hourly, which means the service will be run at the start of every hour. Check out the systemd.time manpage for further information on the syntax you can use to specify other timeframes.
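For example, if you wanted the cron job to run every 15 minutes instead (a hypothetical variation of what the script generates), you could change the OnCalendar= line in the .timer file to:
OnCalendar=*:0/15
and apply the change with sudo systemctl daemon-reload.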
In order to run the cron job immediately (it will still run on its regular schedule after doing this), do
sudo systemctl start nextcloud-mydomain-cron.service
(note that you need to run systemctl start on the .service! Running systemctl start on the .timer will only enable the timer and not run the service immediately).
In order to view the logs, use
sudo journalctl -xfu nextcloud-mydomain-cron.service
(just like above, you need to run journalctl -xfu on the .service, not on the .timer).
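You can also check when the timer will fire next using:
sudo systemctl list-timers nextcloud-mydomain-cron.timer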
In order to disable the automatic cron runs, use e.g.
sudo systemctl disable nextcloud-mydomain-cron.timer
#!/bin/bash
# Create a systemd service & timer that runs cron.php on dockerized nextcloud
# by Uli Köhler - https://techoverflow.net
# Licensed as CC0 1.0 Universal
export SERVICENAME=$(basename $(pwd))-cron

export SERVICEFILE=/etc/systemd/system/${SERVICENAME}.service
export TIMERFILE=/etc/systemd/system/${SERVICENAME}.timer

echo "Creating systemd service... $SERVICEFILE"
sudo cat >$SERVICEFILE <<EOF
[Unit]
Description=$SERVICENAME

[Service]
Type=oneshot
ExecStart=$(which docker-compose) exec -T -u www-data nextcloud php cron.php
WorkingDirectory=$(pwd)
EOF

echo "Creating systemd timer... $TIMERFILE"
sudo cat >$TIMERFILE <<EOF
[Unit]
Description=$SERVICENAME

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
EOF

echo "Enabling & starting $SERVICENAME.timer"
sudo systemctl enable $SERVICENAME.timer
sudo systemctl start $SERVICENAME.timer
Run this command in the directory where docker-compose.yml is located in order to run the Nextcloud cron job:
docker-compose exec -u www-data nextcloud php cron.php
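Note that when calling this from a non-interactive context (e.g. from a script or a systemd service), docker-compose exec additionally needs the -T flag to disable pseudo-TTY allocation, as the generated service above already does:
docker-compose exec -T -u www-data nextcloud php cron.php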
The following config works by using two domains: minio.mydomain.com and console.minio.mydomain.com.
For the basic Traefik setup this is based on, see Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges. Regarding this setup, the important part is to enable the docker autodiscovery and to define the certificate resolver (we’re using the ALPN resolver).
Be sure to choose a random MINIO_ROOT_PASSWORD
!
version: '3.5'
services:
  minio:
    image: quay.io/minio/minio:latest
    command: server --console-address ":9001" /data
    volumes:
      - ./data:/data
      - ./config:/root/.minio
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=uikui5choRith0ZieV2zohN5aish5r
      - MINIO_DOMAIN=minio.mydomain.com
      - MINIO_SERVER_URL=https://minio.mydomain.com
      - MINIO_BROWSER_REDIRECT_URL=https://console.minio.mydomain.com
    labels:
      - "traefik.enable=true"
      # Console
      - "traefik.http.routers.minio-console.rule=Host(`console.minio.mydomain.com`)"
      - "traefik.http.routers.minio-console.entrypoints=websecure"
      - "traefik.http.routers.minio-console.tls.certresolver=alpn"
      - "traefik.http.routers.minio-console.service=minio-console"
      - "traefik.http.services.minio-console.loadbalancer.server.port=9001"
      # API
      - "traefik.http.routers.minio.rule=Host(`minio.mydomain.com`)"
      - "traefik.http.routers.minio.entrypoints=websecure"
      - "traefik.http.routers.minio.tls.certresolver=alpn"
      - "traefik.http.routers.minio.service=minio"
      - "traefik.http.services.minio.loadbalancer.server.port=9000"
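To quickly verify that the API endpoint is reachable through Traefik, you can, assuming you have the MinIO client mc installed on your machine, register the endpoint under a hypothetical alias myminio and list the buckets:
mc alias set myminio https://minio.mydomain.com minioadmin uikui5choRith0ZieV2zohN5aish5r
mc ls myminio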
Traefik does not load some of your services and you see an error message like the following one:
traefik_1 | time="2022-03-27T15:22:05Z" level=error msg="Could not define the service name for the router: too many services" routerName=myapp providerName=docker
with a docker label config with multiple routers like this:
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp-console.rule=Host(`console.myapp.mydomain.com`)"
  - "traefik.http.routers.myapp-console.entrypoints=websecure"
  - "traefik.http.routers.myapp-console.tls.certresolver=alpn"
  - "traefik.http.services.myapp-console.loadbalancer.server.port=9001"
  #
  - "traefik.http.routers.myapp.rule=Host(`myapp.mydomain.com`)"
  - "traefik.http.routers.myapp.entrypoints=websecure"
  - "traefik.http.routers.myapp.tls.certresolver=cloudflare-techoverflow"
  - "traefik.http.routers.myapp.tls.domains[0].main=mydomain.com"
  - "traefik.http.routers.myapp.tls.domains[0].sans=*.mydomain.com"
  - "traefik.http.services.myapp.loadbalancer.server.port=9000"
The basic issue here is that you have multiple routers defined for a single docker container and Traefik does not know which http.services entry belongs to which http.routers entry!
In order to fix this, explicitly tell Traefik for each router which service it should use, like this:
- "traefik.http.routers.myapp-console.service=myapp-console"
The full working label config then looks like this:
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.myapp-console.rule=Host(`console.myapp.mydomain.com`)"
  - "traefik.http.routers.myapp-console.entrypoints=websecure"
  - "traefik.http.routers.myapp-console.tls.certresolver=alpn"
  - "traefik.http.routers.myapp-console.service=myapp-console"
  - "traefik.http.services.myapp-console.loadbalancer.server.port=9001"
  #
  - "traefik.http.routers.myapp.rule=Host(`myapp.mydomain.com`)"
  - "traefik.http.routers.myapp.entrypoints=websecure"
  - "traefik.http.routers.myapp.tls.certresolver=cloudflare-techoverflow"
  - "traefik.http.routers.myapp.tls.domains[0].main=mydomain.com"
  - "traefik.http.routers.myapp.tls.domains[0].sans=*.mydomain.com"
  - "traefik.http.routers.myapp.service=myapp"
  - "traefik.http.services.myapp.loadbalancer.server.port=9000"
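After updating the labels, recreate the container and watch the Traefik logs to confirm the error is gone (assuming your Traefik service is named traefik in its own docker-compose.yml):
docker-compose up -d
docker-compose logs -f traefik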
I use the following docker-compose.yml service:
version: '3.5'
services:
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
With the following .env:
POSTGRES_DB=headscale
POSTGRES_USER=headscale
POSTGRES_PASSWORD=vah2phuen3shesahc6Jeenaechecee
Using .env has the huge advantage that other services like my backup script can access the configuration in a standardized manner using environment variables.
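For example, any shell script can source the .env file and reuse the exact same credentials; here is a minimal sketch that opens a psql shell inside the container:
source .env && docker-compose exec postgres psql -U "${POSTGRES_USER}" "${POSTGRES_DB}"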
I have the following docker-compose.yml service:
version: '3.5'
services:
  postgres:
    image: postgres
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
With the following .env:
POSTGRES_DB=headscale
POSTGRES_USER=headscale
POSTGRES_PASSWORD=vah2phuen3shesahc6Jeenaechecee
Given that setup, I run pg_dump like this:
source .env && docker-compose exec postgres pg_dump -U${POSTGRES_USER} > pgdump-$(date +%F_%H-%M-%S).sql
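To restore such a dump later, a sketch along the same lines (assuming a dump file named pgdump-example.sql) would be:
source .env && cat pgdump-example.sql | docker-compose exec -T postgres psql -U "${POSTGRES_USER}" "${POSTGRES_DB}"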
I use this in git repositories to ease the deployment process:
#!/bin/bash
# This script installs and enables/starts a systemd service
# It also installs the service file
export NAME=MyService

cat >/etc/systemd/system/${NAME}.service <<EOF
# TODO Copy & paste the systemd .service file here
EOF

# Enable and start service
systemctl enable --now ${NAME}.service
The following example automatically installs a docker-compose systemd .service:
#!/bin/bash
# This script installs and enables/starts a systemd service
# It also installs the service file
export NAME=UkraineBase

cat >/etc/systemd/system/${NAME}.service <<EOF
[Unit]
Description=${NAME}
Requires=docker.service
After=docker.service

[Service]
Restart=always
User=root
Group=docker
WorkingDirectory=$(pwd)
# Shutdown container (if running) before unit is being started
ExecStartPre=$(which docker-compose) -f docker-compose.yml down
# Start container when unit is started
ExecStart=$(which docker-compose) -f docker-compose.yml up
# Stop container when unit is stopped
ExecStop=$(which docker-compose) -f docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

# Enable and start service
systemctl enable --now ${NAME}.service
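After running the script, you can verify that the service is up and follow its logs using the standard systemd tooling, e.g.:
systemctl status UkraineBase.service
journalctl -fu UkraineBase.service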
Any time I tried to transfer a project in my docker-hosted GitLab instance, the transfer failed with error 500 and I was presented with the following error log:
==> /var/log/gitlab/gitlab-rails/production.log <==
URI::InvalidURIError (query conflicts with opaque):

lib/container_registry/client.rb:84:in `repository_tags'
app/models/container_repository.rb:94:in `manifest'
app/models/container_repository.rb:98:in `tags'
app/models/container_repository.rb:118:in `has_tags?'
app/models/project.rb:2890:in `has_root_container_repository_tags?'
app/models/project.rb:1037:in `has_container_registry_tags?'
app/services/projects/transfer_service.rb:61:in `transfer'
app/services/projects/transfer_service.rb:35:in `execute'
app/controllers/projects_controller.rb:120:in `transfer'
app/controllers/application_controller.rb:490:in `set_current_admin'
lib/gitlab/session.rb:11:in `with_session'
app/controllers/application_controller.rb:481:in `set_session_storage'
lib/gitlab/i18n.rb:105:in `with_locale'
lib/gitlab/i18n.rb:111:in `with_user_locale'
app/controllers/application_controller.rb:475:in `set_locale'
app/controllers/application_controller.rb:469:in `set_current_context'
lib/gitlab/metrics/elasticsearch_rack_middleware.rb:16:in `call'
lib/gitlab/middleware/rails_queue_duration.rb:33:in `call'
lib/gitlab/middleware/speedscope.rb:13:in `call'
lib/gitlab/request_profiler/middleware.rb:17:in `call'
lib/gitlab/database/load_balancing/rack_middleware.rb:23:in `call'
lib/gitlab/metrics/rack_middleware.rb:16:in `block in call'
lib/gitlab/metrics/web_transaction.rb:46:in `run'
lib/gitlab/metrics/rack_middleware.rb:16:in `call'
lib/gitlab/jira/middleware.rb:19:in `call'
lib/gitlab/middleware/go.rb:20:in `call'
lib/gitlab/etag_caching/middleware.rb:21:in `call'
lib/gitlab/middleware/multipart.rb:173:in `call'
lib/gitlab/middleware/read_only/controller.rb:50:in `call'
lib/gitlab/middleware/read_only.rb:18:in `call'
lib/gitlab/middleware/same_site_cookies.rb:27:in `call'
lib/gitlab/middleware/handle_malformed_strings.rb:21:in `call'
lib/gitlab/middleware/basic_health_check.rb:25:in `call'
lib/gitlab/middleware/handle_ip_spoof_attack_error.rb:25:in `call'
lib/gitlab/middleware/request_context.rb:21:in `call'
lib/gitlab/middleware/webhook_recursion_detection.rb:15:in `call'
config/initializers/fix_local_cache_middleware.rb:11:in `call'
lib/gitlab/middleware/compressed_json.rb:26:in `call'
lib/gitlab/middleware/rack_multipart_tempfile_factory.rb:19:in `call'
lib/gitlab/middleware/sidekiq_web_static.rb:20:in `call'
lib/gitlab/metrics/requests_rack_middleware.rb:75:in `call'
lib/gitlab/middleware/release_env.rb:13:in `call'
This error seems to occur if you had a Docker registry configured in previous GitLab versions (a legacy Docker registry); it doesn’t disappear even after deconfiguring the registry.
In order to fix it, I logged into the container using
docker-compose exec gitlab /bin/bash
and edited /opt/gitlab/embedded/service/gitlab-rails/app/models/project.rb using
vi /opt/gitlab/embedded/service/gitlab-rails/app/models/project.rb
where on line 2890 you can find the following function:
##
# This method is here because of support for legacy container repository
# which has exactly the same path like project does, but which might not be
# persisted in `container_repositories` table.
#
def has_root_container_repository_tags?
  return false unless Gitlab.config.registry.enabled

  ContainerRepository.build_root_repository(self).has_tags?
end
We just want Gitlab to ignore the repository stuff, so we insert return false after the return false unless Gitlab.config.registry.enabled line:
##
# This method is here because of support for legacy container repository
# which has exactly the same path like project does, but which might not be
# persisted in `container_repositories` table.
#
def has_root_container_repository_tags?
  return false unless Gitlab.config.registry.enabled
  return false

  ContainerRepository.build_root_repository(self).has_tags?
end
to trick Gitlab into thinking that the repository does not have any legacy registry tags. After that, save the file and run
sudo gitlab-ctl restart
after which you should be able to transfer your projects just fine.
When starting docker on a VM that had suddenly been turned off during a power outage, the docker daemon failed to start up with the following error log:
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.946815577Z" level=info msg="Starting up"
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.947842629Z" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf"
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.949500623Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.949617127Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.949705114Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.949776371Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.950747679Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.950788173Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.950806216Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Jan 05 15:15:14 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:14.950815090Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:15.012683899Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:15.027806434Z" level=warning msg="Docker could not enable SELinux on the host system"
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:15.176098505Z" level=info msg="Loading containers: start."
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: time="2022-01-05T15:15:15.728503609Z" level=info msg="Removing stale sandbox 22e03a9f65217fa0ce1603fa1d6326b7bf412777be94e930b02dbe6554549084 (154aa4bd403045e229b39cc4dda1d16a72b45d18671cab7c993bff4eaee9c2c9)"
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: panic: assertion failed: write: circular dependency occurred
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: goroutine 1 [running]:
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt._assert(...)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/db.go:1172
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*node).write(0xc0001e16c0, 0xc001381000)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/node.go:233 +0x3c5
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*node).spill(0xc0001e16c0, 0xc0012da078, 0x1)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/node.go:374 +0x225
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*node).spill(0xc0001e1650, 0xc000fefaa0, 0xc000d9dc50)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/node.go:346 +0xbc
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*Bucket).spill(0xc000d6f180, 0xc000fefa00, 0xc000d9de40)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/bucket.go:570 +0x49a
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*Bucket).spill(0xc000fca2b8, 0x310b759f, 0x55dfe7bf17e0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/bucket.go:537 +0x3f6
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*Tx).Commit(0xc000fca2a0, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/tx.go:160 +0xe8
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/go.etcd.io/bbolt.(*DB).Update(0xc000d6bc00, 0xc000d9e078, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/go.etcd.io/bbolt/db.go:701 +0x105
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libkv/store/boltdb.(*BoltDB).AtomicPut(0xc000a862d0, 0xc000fd32d0, 0x65, 0xc000d46300, 0x179, 0x180, 0xc000fef890, 0x0, 0x55dfe5407900, 0x0, ...)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libkv/store/boltdb/boltdb.go:371 +0x225
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork/datastore.(*datastore).PutObjectAtomic(0xc000c3bbc0, 0x55dfe6bd1398, 0xc000efad20, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/datastore/datastore.go:415 +0x3ca
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*driver).storeUpdate(0xc0000c3c00, 0x55dfe6bd1398, 0xc000efad20, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge_store.go:106 +0x6e
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge.(*driver).RevokeExternalConnectivity(0xc0000c3c00, 0xc000f19d80, 0x40, 0xc00111c300, 0x40, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/drivers/bridge/bridge.go:1405 +0x1f1
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*endpoint).sbLeave(0xc000d41e40, 0xc000f63200, 0x1, 0x0, 0x0, 0x0, 0x0, 0xc001b91ce0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/endpoint.go:751 +0x1326
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*endpoint).Delete(0xc000d41b80, 0xc000f19d01, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/endpoint.go:842 +0x374
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*sandbox).delete(0xc000f63200, 0x1, 0x55dfe61ee216, 0x1e)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/sandbox.go:229 +0x191
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.(*controller).sandboxCleanup(0xc000432a00, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/sandbox_store.go:278 +0xdae
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/docker/libnetwork.New(0xc0000c3b80, 0x9, 0x10, 0xc000878090, 0xc001e69e60, 0xc0000c3b80, 0x9)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/docker/libnetwork/controller.go:248 +0x726
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/daemon.(*Daemon).initNetworkController(0xc00000c1e0, 0xc00021a000, 0xc001e69e60, 0xc000151270, 0xc00000c1e0, 0xc000686a10, 0xc001e69e60)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/daemon/daemon_unix.go:855 +0xac
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/daemon.(*Daemon).restore(0xc00000c1e0, 0xc00045c4c0, 0xc00023e2a0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/daemon/daemon.go:490 +0x52c
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/daemon.NewDaemon(0x55dfe6bad710, 0xc00045c4c0, 0xc00021a000, 0xc000878090, 0x0, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/daemon/daemon.go:1150 +0x2c1d
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: main.(*DaemonCli).start(0xc000b62720, 0xc00009a8a0, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/cmd/dockerd/daemon.go:195 +0x785
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: main.runDaemon(...)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/cmd/dockerd/docker_unix.go:13
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: main.newDaemonCommand.func1(0xc0000bc2c0, 0xc0000b2000, 0x0, 0xc, 0x0, 0x0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/cmd/dockerd/docker.go:34 +0x7d
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).execute(0xc0000bc2c0, 0xc00011caa0, 0xc, 0xc, 0xc0000bc2c0, 0xc00011caa0)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:850 +0x472
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000bc2c0, 0x0, 0x0, 0x10)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:958 +0x375
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: github.com/docker/docker/vendor/github.com/spf13/cobra.(*Command).Execute(...)
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/vendor/github.com/spf13/cobra/command.go:895
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: main.main()
Jan 05 15:15:15 CoreOS-Haar dockerd[5333]: /builddir/build/BUILD/moby-20.10.11/src/github.com/docker/docker/cmd/dockerd/docker.go:97 +0x185
Jan 05 15:15:15 CoreOS-Haar systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
First, try moving /var/lib/docker/network/files/local-kv.db:
sudo mv /var/lib/docker/network/files/local-kv.db /opt/old-docker-local-kv.db
and then restarting docker using e.g. sudo systemctl restart docker, which worked for many users on GitHub.
Only if that does not work, try the following brute-force method!
Move the /var/lib/docker directory to a new location as a backup; after this, docker was able to start up again:
sudo mv /var/lib/docker /opt/old-var-lib-docker
This is a pretty brute-force method of doing that, but for me it worked totally fine since my setup did not use volumes but local directories. In case you are actively using volumes, you might need to restore the volumes from /opt/old-var-lib-docker manually!
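Either way, once the daemon starts again you can verify that Docker is healthy using, for example:
sudo systemctl status docker
sudo docker ps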
In order to install Uptime-Kuma, first create a directory for the service to reside in. In this example, we’ll use /opt/uptimekuma.
Note that at the moment Uptime-Kuma only supports one user, so if you need multiple users, you need to create multiple instances of Uptime-Kuma. Fortunately, this is extremely easy using docker-compose.
Now we will create docker-compose.yml:
version: '3'
services:
  kuma:
    image: 'louislam/uptime-kuma:1'
    ports:
      - '17958:3001'
    volumes:
      - './uptimekuma_data:/app/data'
This will listen on port 17958. You can choose any port you want here, just make sure to choose different ports when running different instances of uptime-kuma.
Now you can use our script from Create a systemd service for your docker-compose project in 10 seconds to automatically start Uptime-Kuma on boot (and start it immediately):
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
Now access http://<IP of the host>:17958 (or your custom HTTP port) to get started with the Uptime-Kuma setup.
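If the page does not load, check the container logs (kuma being the service name from the docker-compose.yml above):
docker-compose logs -f kuma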
Create a directory such as /opt/xenorchestra and create docker-compose.yml:
version: '3'
services:
  xen-orchestra:
    restart: unless-stopped
    image: ronivay/xen-orchestra:latest
    container_name: xen-orchestra
    network_mode: host
    stop_grace_period: 1m
    environment:
      - HTTP_PORT=1780
    cap_add:
      - SYS_ADMIN
    security_opt:
      - apparmor:unconfined
    volumes:
      - ./xo-data:/var/lib/xo-server
      - ./redis-data:/var/lib/redis
You can choose any HTTP port you want using HTTP_PORT=1780. In this case, we opted for using network_mode: host to bypass the docker networking, since XenOrchestra seems to work better with full network access instead of the container having its own IP.
Now you can use our script from Create a systemd service for your docker-compose project in 10 seconds to automatically start XenOrchestra on boot (and start it immediately):
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
Now access http://<IP of the host>:1780 (or your custom HTTP port) to get started with the XO setup.
This setup runs both MongoDB and the UniFi controller using network_mode: host, which is why we are running MongoDB on a nonstandard port (so it will not interfere with other MongoDB instances). This has the huge benefit of allowing direct Layer 2 network access, which enables L2 access point adoption.
Create a directory such as /opt/unifi and create docker-compose.yml:
version: '2.3'
services:
  mongo:
    image: mongo:3.6
    network_mode: host
    restart: always
    volumes:
      - ./mongo_db:/data/db
      - ./mongo/dbcfg:/data/configdb
    command: mongod --port 29718
  controller:
    image: "jacobalberty/unifi:latest"
    depends_on:
      - mongo
    init: true
    network_mode: host
    restart: always
    volumes:
      - ./unifi_dir:/unifi
      - ./unifi_data:/unifi/data
      - ./unifi_log:/unifi/log
      - ./unifi_cert:/unifi/cert
      - ./unifi_init:/unifi/init.d
      - ./unifi_run:/var/run/unifi
      - ./unifi_backup:/unifi/data/backup
    environment:
      - DB_URI=mongodb://localhost:29718/unifi
      - STATDB_URI=mongodb://localhost:29718/unifi_stat
      - DB_NAME=unifi
  logs:
    image: bash
    depends_on:
      - controller
    command: bash -c 'tail -F /unifi/log/*.log'
    restart: always
    volumes:
      - ./unifi_log:/unifi/log
Now create the directories with the correct permissions:
mkdir -p unifi_backup unifi_cert unifi_data unifi_dir unifi_init unifi_log unifi_run
chown -R 999:999 unifi_backup unifi_cert unifi_data unifi_dir unifi_init unifi_log unifi_run
Now you can use our script from Create a systemd service for your docker-compose project in 10 seconds to automatically start the controller on boot (and start it immediately):
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
Now access https://<IP of controller>:8443
to get started with the setup or import a backup.
Your Unifi controller running on docker or docker-compose shows log messages like
controller_1 | [2021-12-29 17:37:26,396] <docker-entrypoint> Starting unifi controller service.
controller_1 | /usr/local/bin/docker-entrypoint.sh: line 97: /unifi/data/system.properties: Permission denied
controller_1 | /usr/local/bin/docker-entrypoint.sh: line 97: /unifi/data/system.properties: Permission denied
controller_1 | /usr/local/bin/docker-entrypoint.sh: line 97: /unifi/data/system.properties: Permission denied
controller_1 | /usr/local/bin/docker-entrypoint.sh: line 97: /unifi/data/system.properties: Permission denied
on startup.
Fix the permissions of the mounted directory. I have listed
volumes:
  - ./unifi_data:/unifi/data
in my docker-compose.yml.
Fix the permissions by running:
sudo chown -R 999:999 unifi_data
First, create a directory where HomeAssistant will reside. I use /opt/homeassistant.
Create docker-compose.yml:
version: '3.5'
services:
  homeassistant:
    container_name: homeassistant
    restart: unless-stopped
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host
    privileged: true
    environment:
      - TZ=Europe/Berlin
    volumes:
      - ./homeassistant_config:/config
    depends_on:
      - mosquitto
  mosquitto:
    image: eclipse-mosquitto
    network_mode: host
    volumes:
      - ./mosquitto_conf:/mosquitto/config
      - ./mosquitto_data:/mosquitto/data
      - ./mosquitto_log:/mosquitto/log
Now start homeassistant so it creates the default config files:
docker-compose up
Once you see
homeassistant | [services.d] done.
press Ctrl+C to abort.
Now we’ll create the Mosquitto MQTT server config file in mosquitto_conf/mosquitto.conf:
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log

listener 1883

## Authentication ##
allow_anonymous false
password_file /mosquitto/config/mosquitto.passwd
Now create the mosquitto password file and fix the permissions using
touch mosquitto_conf/mosquitto.passwd
chown -R 1883:1883 mosquitto_conf
We can now create the homeassistant mosquitto user using
docker-compose run mosquitto mosquitto_passwd -c /mosquitto/config/mosquitto.passwd homeassistant
Enter a random password that will be used for the homeassistant user.
Now we can edit the homeassistant config homeassistant_config/configuration.yaml. This is my config – be sure to insert the random MQTT password we used before instead of ep2ooy8di3avohn1Ahm6eegheiResh:
# Configure a default setup of Home Assistant (frontend, api, etc)
default_config:

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 127.0.0.1
  ip_ban_enabled: true
  login_attempts_threshold: 5

mqtt:
  broker: "127.0.0.1"
  username: "homeassistant"
  password: "ep2ooy8di3avohn1Ahm6eegheiResh"

# Text to speech
tts:
  - platform: google_translate

group: !include groups.yaml
automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml
Now we can start the server using
docker-compose up
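Optionally, you can verify the MQTT credentials from a second shell by subscribing to a test topic. This is a quick sketch assuming the eclipse-mosquitto image ships the mosquitto_sub client; replace the password with the one you just set (since the service uses network_mode: host, the broker is reachable on 127.0.0.1):
docker-compose run mosquitto mosquitto_sub -h 127.0.0.1 -p 1883 -u homeassistant -P ep2ooy8di3avohn1Ahm6eegheiResh -t 'test/#' -v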
You can also use our script to generate a systemd service to autostart the docker-compose config on boot:
curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin
Now login to the web interface on port 8123 and configure your HomeAssistant!
When running rpm-ostree ex apply-live you will see the following error message in newer versions of CoreOS:
error: Unknown "ex" subcommand "apply-live"
The new equivalent of rpm-ostree ex apply-live is
rpm-ostree ex livefs --i-like-danger