This cloud-init example installs nginx on Debian- or Ubuntu-based systems:
packages:
  - nginx
If you want to enable upgrading packages, use:
package_upgrade: true
packages:
  - nginx
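Note that cloud-init only interprets these directives if the user-data file starts with the #cloud-config header, so a complete minimal file looks like this:
#cloud-config
package_upgrade: true
packages:
  - nginx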
You want to run a docker image build on Google Cloud Build, but the client is trying to upload a huge build context to Google Cloud even though you have added all your large directories to your .dockerignore, and the build works fine locally.
Google Cloud Build ignores .dockerignore by design – the equivalent is called .gcloudignore.
You can copy the .dockerignore behaviour for gcloud by running
cp .dockerignore .gcloudignore
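Alternatively, a .gcloudignore file can pull in your existing .dockerignore via an include directive, so you don’t have to keep two copies in sync (a sketch; see gcloud topic gcloudignore for the syntax):
# .gcloudignore
#!include:.dockerignore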
Use this command to set the default zone for project myproject-123456 to europe-west4-a and the default region to europe-west4:
gcloud compute project-info add-metadata \
    --metadata google-compute-default-region=europe-west4,google-compute-default-zone=europe-west4-a \
    --project myproject-123456
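To verify that the metadata was set correctly, you can inspect the project metadata afterwards (a quick check, assuming the same project ID):
gcloud compute project-info describe --project myproject-123456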
Also see the official reference for more detailed information.
Important note: By default, volumes will not be resized immediately but instead require a restart of the associated pod.
First, ensure that you have set allowVolumeExpansion: true for the storage class of your PVC. See our previous post on How to allow Persistent Volume Claim (PVC) resize for Kubernetes storage class for more details.
We can expand the volume (named myapp-myapp-pvc-myapp-myapp-1 in this example) by running
kubectl patch pvc/"myapp-myapp-pvc-myapp-myapp-1" \
    --namespace "default" \
    --patch '{"spec": {"resources": {"requests": {"storage": "40Gi"}}}}'
Ensure that you have replaced the name of the PVC (myapp-myapp-pvc-myapp-myapp-1 in this example) and the storage size. Note that you can only increase (expand) the size of a volume, not decrease (shrink) it. If your new size is less than the previous value, you’ll see this error message:
The PersistentVolumeClaim "myapp-myapp-pvc-myapp-myapp-1" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value
After running this command, the PVC will be in the FileSystemResizePending state.
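You can check whether the PVC has reached that state by inspecting its conditions (a quick check; replace the PVC name and namespace with your own):
kubectl describe pvc/"myapp-myapp-pvc-myapp-myapp-1" --namespace "default"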
In order for the update to have effect, you’ll need to force Kubernetes to re-create all the pods for your deployment. To find out how to do this, read our post on How to force restarting all Pods in a Kubernetes Deployment.
For reference, see the official documentation on expanding persistent volumes.
In contrast to classical deployment managers like systemd or pm2, Kubernetes does not provide a simple restart my application command.
However, there’s an easy workaround: if you change anything in your configuration, even innocuous things that don’t have any effect, Kubernetes will restart your pods.
Consider configuring a rolling update strategy before doing this if you are updating a production application that should have minimal downtime.
In this example we’ll assume you have a StatefulSet you want to update, named elasticsearch-elasticsearch. Be sure to fill in the actual name of your StatefulSet here.
kubectl patch statefulset/elasticsearch-elasticsearch -p \
    "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"dummy-date\":\"`date +'%s'`\"}}}}}"
This will just set a dummy-date annotation which does not have any effect on the application.
You can monitor the update by running
kubectl rollout status statefulset/elasticsearch-elasticsearch
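To confirm that the annotation was actually applied, you can query it back (a quick check using jsonpath):
kubectl get statefulset/elasticsearch-elasticsearch -o jsonpath='{.spec.template.metadata.annotations.dummy-date}'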
Credits for the original solution idea to pstadler on GitHub.
Google Cloud offers a convenient way of installing an ElasticSearch cluster on top of a Google Cloud Kubernetes cluster. However, the documentation tells you to expose the ElasticSearch instance using
kubectl patch service/"elasticsearch-elasticsearch-svc" \
    --namespace "default" \
    --patch '{"spec": {"type": "LoadBalancer"}}'
This command, however, will expose ElasticSearch on an external IP address, making it publicly accessible in the default configuration.
Here’s the equivalent command that will expose ElasticSearch on an internal load balancer with an internal IP address that is only reachable from within Google Cloud:
kubectl patch service/"elasticsearch-elasticsearch-svc" \
    --namespace "default" \
    --patch '{"spec": {"type": "LoadBalancer"}, "metadata": {"annotations": {"cloud.google.com/load-balancer-type": "Internal"}}}'
You might need to replace the name of your service (elasticsearch-elasticsearch-svc in this example) and possibly your namespace.
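Afterwards you can check which address the service received; with the annotation in place, the EXTERNAL-IP column should show an IP from your VPC’s internal range (a quick check):
kubectl get service/"elasticsearch-elasticsearch-svc" --namespace "default"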
This set of commands will install & start MicroK8s (micro Kubernetes) on Ubuntu and similar Linux distributions.
sudo snap install microk8s --classic
sudo snap install kubectl --classic
sudo microk8s.enable # Autostart on boot
sudo microk8s.start # Start right now
# Wait until microk8s has started
until microk8s.status ; do sleep 1 ; done
# Enable some standard modules
microk8s.enable dashboard registry istio
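Once it’s running, you can sanity-check the cluster using the bundled kubectl (a quick check; the node should report Ready):
microk8s.kubectl get nodes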
For reference see the official quickstart manual.
You want to run a Kubernetes kubectl command like
kubectl -f my-app-deployment.yaml
but you see this error message after kubectl prints its entire help page:
unknown shorthand flag: 'f' in -f
You are missing an actual command to kubectl. Most likely you want to create something on your Kubernetes instance, in which case you want to run this instead:
kubectl create -f my-app-deployment.yaml
You might also want to apply or replace your config instead, as shown below. Note that apply does not automatically restart your Kubernetes Pods. Read How to fix Kubernetes kubectl apply not restarting pods for more information.
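For completeness, those variants look like this (same manifest file as above):
kubectl apply -f my-app-deployment.yaml
kubectl replace -f my-app-deployment.yaml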
If you want to copy files to a Google Cloud VM instance (my-instance in this example) from your command line using rsync over SSH, use this command:
rsync -Pavz [local file] $(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)"):
The subcommand (enclosed in $(...)) finds the correct external IP address for your instance (see How to find IP address of Google Cloud VM instance on command line for more details), so this command boils down to, for example,
rsync -Pavz [local file] 35.207.77.101:
Using the -Pavz option is not strictly necessary, but these are the options I regularly use for rsync file transfers. You can use any rsync options; Google Cloud does not impose any specific restrictions here. For reference see the rsync manpage.
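The same pattern also works in the other direction, e.g. to fetch a file from the instance to your local machine (a sketch; [remote file] is a placeholder):
rsync -Pavz $(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)"):[remote file] .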
In case you want to use a different username for the SSH login, you can of course prefix the $(...) section like this:
rsync -Pavz [local file] [username]@$(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)"):
If you want to connect to a Google Cloud VM instance (my-instance in this example) from your command line using SSH, you have two options.
The first option always works if your instance has SSH enabled, even if it does not have an external IP:
gcloud compute ssh my-instance --zone $(gcloud compute instances list --filter="name=my-instance" --format "get(zone)" | awk -F/ '{print $NF}')
Note that you have to replace my-instance by your actual instance name twice in the command above. The subcommand (enclosed in $(...)) finds the correct zone for your instance, since at the time of writing this article gcloud compute ssh will not work unless you set the correct zone for that instance. See How to find zone of Google Cloud VM instance on command line for more details.
The second option is to use gcloud to get the external IP and connect to it using your standard SSH client:
ssh $(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)")
This has the added advantage that you will be able to use the same pattern in other SSH-like commands such as rsync.
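For example, the same substitution works with scp (a sketch; [local file] is a placeholder):
scp [local file] $(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)"):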
For reference, also see the official manual on Securely Connecting to Instances.
You have a VM instance (my-instance in our example) for which you want to find out the zone it’s residing in using the gcloud command line tool.
To get an overview of the instance, including its zone (remember to replace my-instance by your instance name!), use
gcloud compute instances list --filter="name=my-instance" --format "[box]"
This will format the output nicely and show you more information about your instance. Example output:
┌─────────────┬────────────────┬─────────────────────────────┬─────────────┬─────────────┬───────────────┬─────────┐
│    NAME     │      ZONE      │        MACHINE_TYPE         │ PREEMPTIBLE │ INTERNAL_IP │  EXTERNAL_IP  │ STATUS  │
├─────────────┼────────────────┼─────────────────────────────┼─────────────┼─────────────┼───────────────┼─────────┤
│ my-instance │ europe-west3-c │ custom (16 vCPU, 32.00 GiB) │             │ 10.156.0.1  │ 35.207.77.101 │ RUNNING │
└─────────────┴────────────────┴─────────────────────────────┴─────────────┴─────────────┴───────────────┴─────────┘
In this example, the zone is europe-west3-c.
In case you want to see only the zone, use this command instead:
gcloud compute instances list --filter="name=katc-main" --format "get(zone)" | awk -F/ '{print $NF}'
Example output:
europe-west3-c
Also see our other post How to find IP address of Google Cloud VM instance on command line.
In order to see what other information about instances is available in a similar fashion, use
gcloud compute instances list --filter="name=my-instance" --format "text"
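Any key shown in that text output can be used with get(...). For instance, this prints the machine type of the instance as a full resource URL (a sketch following the same pattern):
gcloud compute instances list --filter="name=my-instance" --format "get(machineType)"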
You have a VM instance (my-instance in our example) for which you want to get the external or internal IP using the gcloud command line tool.
To get an overview of the instance, including its IP addresses (remember to replace my-instance by your instance name!), use
gcloud compute instances list --filter="name=my-instance" --format "[box]"
This will format the output nicely and show you more information about your instance. Example output:
┌─────────────┬────────────────┬─────────────────────────────┬─────────────┬─────────────┬───────────────┬─────────┐
│    NAME     │      ZONE      │        MACHINE_TYPE         │ PREEMPTIBLE │ INTERNAL_IP │  EXTERNAL_IP  │ STATUS  │
├─────────────┼────────────────┼─────────────────────────────┼─────────────┼─────────────┼───────────────┼─────────┤
│ my-instance │ europe-west3-c │ custom (16 vCPU, 32.00 GiB) │             │ 10.156.0.1  │ 35.207.77.101 │ RUNNING │
└─────────────┴────────────────┴─────────────────────────────┴─────────────┴─────────────┴───────────────┴─────────┘
In this example, the external IP address is 35.207.77.101.
In case you want to see only the IP address, use this command instead:
gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)"
Example output:
35.207.77.101
In order to see only the internal IP address (accessible only from Google Cloud), use
gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].networkIP)"
In the Linux shell, the result of this command can easily be used as input to other commands. For example, to ping my-instance, use
ping $(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)")
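If you use the address more than once, it can be convenient to capture it in a shell variable first (a small sketch):
IP=$(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)")
ping "$IP"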
Also see our related post How to find zone of Google Cloud VM instance on command line.
In order to see what other information about instances is available in a similar fashion, use
gcloud compute instances list --filter="name=my-instance" --format "text"
You made an update to your Kubernetes YAML configuration which you applied with
kubectl apply -f [YAML filename]
but Kubernetes still keeps the old version of the software running.
Instead of kubectl apply -f ...
use
kubectl replace --force -f [YAML filename]
This will update the configuration on the server and also update the running pods.
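After the replace, you can watch the new pods come up (a quick check, assuming your manifest defines a Deployment named my-app):
kubectl rollout status deployment/my-app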
You want to create or edit a Kubernetes service but when running e.g.
kubectl create -f my-service.yml
you see an error message similar to this:
Unable to connect to the server: dial tcp 35.198.129.60:443: i/o timeout
There are several common reasons for this issue; the most common one is that kubectl is not configured to use your current cluster.
In case of Google Cloud Kubernetes, this can easily be fixed by configuring kubectl to use your current cluster:
gcloud container clusters get-credentials [cluster name] --zone [zone]
This will automatically update the default cluster for kubectl.
In case you don’t know the correct cluster name and zone, use
gcloud container clusters list
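You can verify that kubectl now points at the right cluster by printing the active context (a quick check):
kubectl config current-context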
This post provides an easy example of how to build & upload your application to the private Google Container Registry. We assume you have already set up your project and installed Docker. In this example, we’ll build & upload pseudo-perseus v1.0. Since this is a NodeJS-based application, we also assume that you have installed a recent version of NodeJS and NPM (see our previous article on how to do that using Ubuntu).
First we configure docker to be able to authenticate to Google:
gcloud auth configure-docker
Now we can checkout the repository and install the NPM packages:
git clone https://github.com/ulikoehler/pseudo-perseus.git
cd pseudo-perseus
git checkout v1.0
npm install
Now we can build the local docker image (we name it directly so that it can be uploaded to the Google Container Registry; be sure to use the correct Google Cloud project ID!):
docker build -t eu.gcr.io/myproject-123456/pseudo-perseus:v1.0 .
The next step is to upload the image:
docker push eu.gcr.io/myproject-123456/pseudo-perseus:v1.0
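Once the push has finished, you can verify that the image arrived in the registry (a quick check, assuming the same registry host and project ID):
gcloud container images list --repository=eu.gcr.io/myproject-123456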
For reference see the official Container Registry documentation.
You want to configure docker to be able to access Google Container Registry using
gcloud auth configure-docker
but you see this warning message:
WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
gcloud credential helpers already registered correctly.
Install docker-credential-gcloud
using
sudo gcloud components install docker-credential-gcr
In case you see this error message:
ERROR: (gcloud.components.install) You cannot perform this action because this Cloud SDK installation is managed by an external package manager. Please consider using a separate installation of the Cloud SDK created through the default mechanism described at: https://cloud.google.com/sdk/
use this alternate installation command instead (this command is for Linux, see the official documentation for other operating systems):
VERSION=1.5.0
OS=linux
ARCH=amd64
curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" \
    | tar xz --to-stdout ./docker-credential-gcr \
    | sudo tee /usr/bin/docker-credential-gcr > /dev/null && sudo chmod +x /usr/bin/docker-credential-gcr
After that, configure docker using
docker-credential-gcr configure-docker
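You can confirm that the helper is now on your PATH (a quick check):
which docker-credential-gcr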
Now you can retry running your original command.
For reference, see the official documentation.
You want to configure a Kubernetes service with kubectl using a command like
kubectl patch service/"my-elasticsearch-svc" --namespace "default" --patch '{"spec": {"type": "LoadBalancer"}}'
but you only see this error message:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
kubectl does not have the correct credentials to access the cluster.
Add the correct credentials to the kubectl config using
gcloud container clusters get-credentials [cluster name] --zone [cluster zone]
e.g.
gcloud container clusters get-credentials cluster-1 --zone europe-west3-c
After that, retry your original command.
In case you don’t know your cluster name or zone, use
gcloud container clusters list
to display the cluster metadata.
Credits to this StackOverflow answer for the original solution.
You want to run a Python script that is using some Google Cloud services. However you see an error message similar to this:
[...]
  File "/usr/local/lib/python3.6/dist-packages/google/api_core/gapic_v1/__init__.py", line 16, in <module>
    from google.api_core.gapic_v1 import config
  File "/usr/local/lib/python3.6/dist-packages/google/api_core/gapic_v1/config.py", line 23, in <module>
    import grpc
ModuleNotFoundError: No module named 'grpc'
Install the grpcio
Python module:
sudo pip3 install grpcio
or, for Python 2.x
sudo pip install grpcio
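You can verify that the module is now importable (a quick check that prints the installed grpcio version):
python3 -c "import grpc; print(grpc.__version__)"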
You want to run a Python script that uses one of the Google Cloud Python APIs but you get this error message:
ModuleNotFoundError: No module named 'google.cloud.iam'
Reinstall any Google Cloud package using pip:
sudo pip install --upgrade google-cloud-storage
or
sudo pip3 install --upgrade google-cloud-storage
That will also reinstall the relevant google.cloud.iam module.
After that, re-run your script. If that didn’t work, try running pip install --upgrade for some other google-cloud-* module, especially the modules you actually use in your script.
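As with the grpc fix above, you can check whether the module is importable before re-running your full script (a quick check):
python3 -c "import google.cloud.iam"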
The enormous number of IPv6 addresses available to most commercially hosted VPS / root servers with a public IPv6 prefix allows you to route a public IPv6 address to every container that is running on your server. This tutorial shows you how to do that, even if you have no prior experience with routing.
We assume you have already done this – just for reference, here’s how you can create a container:
lxc launch ubuntu:18.04 my-container
First you need to find out which prefix is routed to your host. Usually you can find this in your provider’s control panel. You’re looking for something like 2a01:4f9:c010:278::1/64. Another option is to run sudo ifconfig and look for an inet6 line in the section of your primary network interface (this only works if you have configured your server to have an IPv6 address). Note that addresses starting with fe80:: and addresses starting with fd, among others, are not public IPv6 addresses.
Then you can assign a new IPv6 address to your container. Which one you choose – as long as it’s within the prefix – is entirely your decision.
Often, <prefix>::1 is used for the host itself, therefore you could, for example, choose <prefix>::2. Note that some providers use some IP addresses for other purposes. Check your provider’s documentation for details.
If you don’t want to make it easy to find your container’s public IPv6, don’t choose <prefix>::1, <prefix>::2, <prefix>::3 etc. but something more random like <prefix>:af15:99b1:0b05:1, for example 2a01:4f9:c010:278:af15:99b1:0b05:0001. Ensure your IPv6 address has 8 groups of 4 hex digits each!
For this example, we choose the IPv6 address 2a01:4f9:c010:278::8.
We need to find the ULA (unique local address – similar to a private IPv4 address which is not routed on the internet) of the container. Using lxc, this is quite easy:
$ lxc list
+--------------+---------+-----------------------+-----------------------------------------------+
|     NAME     |  STATE  |         IPV4          |                     IPV6                      |
+--------------+---------+-----------------------+-----------------------------------------------+
| my-container | RUNNING | 10.144.118.232 (eth0) | fd42:830b:36dc:3691:216:3eff:fed1:9058 (eth0) |
+--------------+---------+-----------------------+-----------------------------------------------+
You need to look in the IPV6 column and copy the address listed there. In this example, the address is fd42:830b:36dc:3691:216:3eff:fed1:9058.
Now we can tell the host Linux to route your chosen public IPv6 to the container’s private IPv6. This is quite easy:
sudo ip6tables -t nat -A PREROUTING -d <public IPv6> -j DNAT --to-destination <container private IPv6>
In our example, this would be
sudo ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058
First, test the command by running it in a shell. If it works (i.e. it doesn’t print any error message), you can store it permanently, e.g. by adding it to /etc/rc.local (after #!/bin/bash, before exit 0). Advanced users may prefer to add it to /etc/network/interfaces instead.
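You can confirm that the rule is in place by listing the NAT PREROUTING chain (a quick check):
sudo ip6tables -t nat -L PREROUTING -n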
Note: This step requires that you have working IPv6 connectivity on your local computer. If you are unsure, check at ipv6-test.com.
First, open a shell on your container:
lxc exec my-container bash
After running this, you should see a root shell prompt inside your container:
root@my-container:~#
The following commands should be entered in the container shell, not on the host!
Now we can create a user to log in as (in this example, we create the uli user):
root@my-container:~# adduser uli
Adding user `uli' ...
Adding new group `uli' (1001) ...
Adding new user `uli' (1001) with group `uli' ...
Creating home directory `/home/uli' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for uli
Enter the new value, or press ENTER for the default
	Full Name []:
	Room Number []:
	Work Phone []:
	Home Phone []:
	Other []:
Is the information correct? [Y/n]
You only need to enter a password (you won’t see anything on screen while entering it) twice; for all other lines you can just press Enter.
The ubuntu:18.04 lxc image used in this example does not allow SSH password authentication in its default configuration. In order to fix this, change PasswordAuthentication no to PasswordAuthentication yes in /etc/ssh/sshd_config and restart the SSH server by running service sshd restart. Be sure you understand the security implications before you do that!
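If you prefer a one-liner, the same edit can be scripted inside the container (a sketch; it assumes the directive is present and uncommented in sshd_config, as it is in this image):
sed -i 's/^PasswordAuthentication no$/PasswordAuthentication yes/' /etc/ssh/sshd_config && service sshd restart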
Now, log out of your container shell by pressing Ctrl+D. The following commands can be entered on your desktop or any other server with IPv6 connectivity.
Now log in to your container:
ssh <username>@<public IPv6 address>
in this example:
ssh uli@2a01:4f9:c010:278::8
If you configured everything correctly, you’ll see the shell prompt for your container:
uli@my-container:~$
Note: Don’t forget to configure a firewall for your container, e.g. ufw! Your container’s IPv6 is exposed to the internet, and just assuming no one will guess it is not good security practice.
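A minimal ufw setup inside the container could look like this (a sketch; allow only the services you actually need, SSH in this example):
apt install -y ufw
ufw allow ssh
ufw enable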