Cloud

How to SSH to Google Cloud VM instance on command line

If you want to connect to a Google Cloud VM instance (my-instance in this example) from your command line using SSH, you have two options:

Directly connect using gcloud

This will always work if your instance has SSH enabled, even if it does not have an external IP:

gcloud compute ssh my-instance --zone $(gcloud compute instances list --filter="name=my-instance" --format "get(zone)" | awk -F/ '{print $NF}')

Note that you have to replace my-instance with your actual instance name twice in the command above. The command substitution (enclosed in $(...)) finds the correct zone for your instance, since at the time of writing gcloud compute ssh will not work unless you set the correct zone for that instance. See How to find zone of Google Cloud VM instance on command line for more details.
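
If you connect often, you can wrap both steps in a small shell function, e.g. in your ~/.bashrc. This is just a sketch; the name gssh is our own invention:

# Hypothetical convenience wrapper, usage: gssh <instance name>
gssh() {
    local zone
    # Find the zone of the instance (keep only the last path component)
    zone=$(gcloud compute instances list --filter="name=$1" --format "get(zone)" | awk -F/ '{print $NF}')
    gcloud compute ssh "$1" --zone "$zone"
}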

Connect using external IP

You can also use gcloud to get the external IP and connect to it using your standard SSH client.

ssh $(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)")

This has the added advantage that you will be able to use the same approach in other SSH-based commands like rsync.
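
For example, you could copy a local directory to the instance using rsync like this (a sketch; replace the username and the paths with your own):

rsync -avz ./my-local-dir/ uli@$(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)"):/home/uli/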

For reference, also see the official manual on Securely Connecting to Instances.

Posted by Uli Köhler in Cloud

How to find zone of Google Cloud VM instance on command line

Problem:

You have a VM instance (my-instance in our example) for which you want to find out the zone it’s residing in using the gcloud command line tool.

Solution:

If you just want to see the zone of the instance (remember to replace my-instance with your instance name!), use

gcloud compute instances list --filter="name=my-instance" --format "[box]"

This will format the output nicely and show you more information about your instance. Example output:

┌─────────────┬────────────────┬─────────────────────────────┬─────────────┬─────────────┬───────────────┬─────────┐
│    NAME     │      ZONE      │         MACHINE_TYPE        │ PREEMPTIBLE │ INTERNAL_IP │  EXTERNAL_IP  │  STATUS │
├─────────────┼────────────────┼─────────────────────────────┼─────────────┼─────────────┼───────────────┼─────────┤
│ my-instance │ europe-west3-c │ custom (16 vCPU, 32.00 GiB) │             │ 10.156.0.1  │ 35.207.77.101 │ RUNNING │
└─────────────┴────────────────┴─────────────────────────────┴─────────────┴─────────────┴───────────────┴─────────┘

In this example, the zone is europe-west3-c.

In case you want to see only the zone, use this command instead:

gcloud compute instances list --filter="name=katc-main" --format "get(zone)" | awk -F/ '{print $NF}'

Example output:

europe-west3-c
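
The extracted zone can easily be used in other commands. For example, this shell sketch stores it in a variable and then shows the full instance metadata:

ZONE=$(gcloud compute instances list --filter="name=my-instance" --format "get(zone)" | awk -F/ '{print $NF}')
gcloud compute instances describe my-instance --zone "$ZONE"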

Also see our other post How to find IP address of Google Cloud VM instance on command line.

To see what other information about your instances can be queried in a similar fashion, use

gcloud compute instances list --filter="name=my-instance" --format "text"
Posted by Uli Köhler in Cloud

How to find IP address of Google Cloud VM instance on command line

Problem:

You have a VM instance (my-instance in our example) for which you want to get the external or internal IP using the gcloud command line tool.

Solution:

If you just want to see the external IP of the instance (remember to replace my-instance with your instance name!), use

gcloud compute instances list --filter="name=my-instance" --format "[box]"

This will format the output nicely and show you more information about your instance. Example output:

┌─────────────┬────────────────┬─────────────────────────────┬─────────────┬─────────────┬───────────────┬─────────┐
│    NAME     │      ZONE      │         MACHINE_TYPE        │ PREEMPTIBLE │ INTERNAL_IP │  EXTERNAL_IP  │  STATUS │
├─────────────┼────────────────┼─────────────────────────────┼─────────────┼─────────────┼───────────────┼─────────┤
│ my-instance │ europe-west3-c │ custom (16 vCPU, 32.00 GiB) │             │ 10.156.0.1  │ 35.207.77.101 │ RUNNING │
└─────────────┴────────────────┴─────────────────────────────┴─────────────┴─────────────┴───────────────┴─────────┘

In this example, the external IP address is 35.207.77.101.

In case you want to see only the IP address, use this command instead:

gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)"

Example output:

35.207.77.101

In order to see only the internal IP address (accessible only from Google Cloud), use

gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].networkIP)"

In the Linux shell, the result of this command can easily be used as input to other commands. For example, to ping my-instance, use

ping $(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)")

Also see our related post How to find zone of Google Cloud VM instance on command line.

To see what other information about your instances can be queried in a similar fashion, use

gcloud compute instances list --filter="name=my-instance" --format "text"
Posted by Uli Köhler in Cloud

How to fix Kubernetes kubectl apply not restarting pods

Problem:

You made an update to your Kubernetes YAML configuration which you applied with

kubectl apply -f [YAML filename]

but Kubernetes still keeps the old version of the software running.

Solution:

Instead of kubectl apply -f ... use

kubectl replace --force -f [YAML filename]

This will update the configuration on the server and also update the running pods.
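
Note that kubectl replace --force briefly deletes and re-creates the resource. If your workload is managed by a Deployment, kubectl 1.15 or newer also offers a rolling restart that keeps the object in place. A sketch; my-deployment is a placeholder for your deployment name:

kubectl rollout restart deployment/my-deployment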

Original answer on StackOverflow

Posted by Uli Köhler in Cloud, Container, Kubernetes

How to fix kubectl Unable to connect to the server: dial tcp …:443: i/o timeout

Problem:

You want to create or edit a Kubernetes service but when running e.g.

kubectl create -f my-service.yml

you see an error message similar to this:

Unable to connect to the server: dial tcp 35.198.129.60:443: i/o timeout

Solution:

There are three common reasons for this issue:

  1. Your Kubernetes cluster is not running. Verify that your cluster has been started, e.g. by pinging its IP address (see the quick check below).
  2. There are networking issues that prevent you from accessing the cluster. Verify that you can ping the IP and try to track down whether a firewall is blocking the access.
  3. You have configured a cluster that no longer exists.
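
To quickly check cases (1) and (2), test basic reachability from your machine. A sketch; substitute the IP address from your error message:

ping -c 3 35.198.129.60
# Also shows which cluster kubectl is currently configured to talk to
kubectl cluster-info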

In the case of Google Cloud Kubernetes, case (3) can easily be fixed by configuring kubectl to use your current cluster:

gcloud container clusters get-credentials [cluster name] --zone [zone]

This will automatically update the default cluster for kubectl.

In case you don’t know the correct cluster name and zone, use

gcloud container clusters list
Posted by Uli Köhler in Cloud, Container, Kubernetes

How to build & upload a Dockerized application to Google Container Registry in 5 minutes

This post provides an easy example of how to build & upload your application to the private Google Container Registry. We assume you have already set up your project and installed Docker. In this example, we'll build & upload pseudo-perseus v1.0. Since this is a NodeJS-based application, we also assume that you have installed a recent version of NodeJS and NPM (see our previous article on how to do that using Ubuntu).

First we configure docker to be able to authenticate to Google:

gcloud auth configure-docker

Now we can checkout the repository and install the NPM packages:

git clone https://github.com/ulikoehler/pseudo-perseus.git
cd pseudo-perseus
git checkout v1.0
npm install
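
The build step below expects a Dockerfile in the repository root (pseudo-perseus already ships one). Just for illustration, a minimal Dockerfile for a NodeJS application might look like this sketch (assuming the application is started by npm start):

FROM node:10
WORKDIR /app
# Install dependencies first so they are cached between builds
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]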

Now we can build the local docker image (we name it directly so that it can be uploaded to the Google Container Registry; be sure to use the correct Google Cloud project ID!):

docker build -t eu.gcr.io/myproject-123456/pseudo-perseus:v1.0 .

The next step is to upload the image:

docker push eu.gcr.io/myproject-123456/pseudo-perseus:v1.0
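
To verify that the upload succeeded, you can pull the image again, e.g. on another machine that is authenticated in the same way:

docker pull eu.gcr.io/myproject-123456/pseudo-perseus:v1.0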

For reference, see the official Container Registry documentation.

Posted by Uli Köhler in Cloud, Container, Docker

Fixing gcloud WARNING: `docker-credential-gcloud` not in system PATH

Problem:

You want to configure docker to be able to access Google Container Registry using

gcloud auth configure-docker

but you see this warning message:

WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
gcloud credential helpers already registered correctly.

Solution:

Install docker-credential-gcloud using

sudo gcloud components install docker-credential-gcr

In case you see this error message:

ERROR: (gcloud.components.install) You cannot perform this action because this Cloud SDK installation is managed by an external package manager.
Please consider using a separate installation of the Cloud SDK created through the default mechanism described at: https://cloud.google.com/sdk/

use this alternative installation command instead (this command is for Linux; see the official documentation for other operating systems):

VERSION=1.5.0
OS=linux
ARCH=amd64

curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" \
  | tar xz --to-stdout ./docker-credential-gcr \
  | sudo tee /usr/bin/docker-credential-gcr > /dev/null && sudo chmod +x /usr/bin/docker-credential-gcr

After that, configure docker using

docker-credential-gcr configure-docker

Now you can retry running your original command.

For reference, see the official documentation.

Posted by Uli Köhler in Cloud, Container, Docker, Linux

How to fix kubectl ‘The connection to the server localhost:8080 was refused – did you specify the right host or port?’

Problem:

You want to configure a Kubernetes service with kubectl using a command like

kubectl patch service/"my-elasticsearch-svc" --namespace "default"   --patch '{"spec": {"type": "LoadBalancer"}}'

but you only see this error message:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Solution:

kubectl does not have the correct credentials to access the cluster.

Add the correct credentials to the kubectl config using

gcloud container clusters get-credentials [cluster name] --zone [cluster zone]

e.g.

gcloud container clusters get-credentials cluster-1 --zone europe-west3-c

After that, retry your original command.

In case you don’t know your cluster name or zone, use

gcloud container clusters list

to display the cluster metadata.

Credits to this StackOverflow answer for the original solution.

Posted by Uli Köhler in Allgemein, Cloud, Container, Kubernetes

How to fix ModuleNotFoundError: No module named ‘grpc’ in Python

Problem:

You want to run a Python script that uses some Google Cloud services, but you see an error message similar to this:

[...]
  File "/usr/local/lib/python3.6/dist-packages/google/api_core/gapic_v1/__init__.py", line 16, in <module>
    from google.api_core.gapic_v1 import config
  File "/usr/local/lib/python3.6/dist-packages/google/api_core/gapic_v1/config.py", line 23, in <module>
    import grpc
ModuleNotFoundError: No module named 'grpc'

Solution:

Install the grpcio Python module:

sudo pip3 install grpcio

or, for Python 2.x

sudo pip install grpcio
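
You can then verify that the module is importable using a quick one-liner (use python instead of python3 for Python 2.x):

python3 -c 'import grpc; print(grpc.__version__)'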
Posted by Uli Köhler in Cloud, Linux, Python

How to fix ModuleNotFoundError: No module named ‘google.cloud.iam’

Problem:

You want to run a Python script that uses one of the Google Cloud Python APIs but you get this error message:

ModuleNotFoundError: No module named 'google.cloud.iam'

Solution:

Reinstall any Google Cloud package using pip, for example google-cloud-storage:

sudo pip install --upgrade google-cloud-storage

or

sudo pip3 install --upgrade google-cloud-storage

This will also reinstall the relevant google.cloud.iam module.

After that, re-run your script. If that didn't work, try installing --upgrade for some other google-cloud-* modules, especially the modules you actually use in your script.
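
To test whether the failing import works now, you can re-run just that import as a one-liner; if it prints nothing, the module was found:

python3 -c 'import google.cloud.iam'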


Posted by Uli Köhler in Cloud, Python

Routing public IPv6 addresses to your lxc/lxd containers

The enormous number of IPv6 addresses available to most commercially hosted VPS / root servers with a public IPv6 prefix allows you to route a public IPv6 address to every container running on your server. This tutorial shows you how to do that, even if you have no prior experience with routing.

Step 0: Create your LXC container

We assume you have already done this – just for reference, here’s how you can create a container:

lxc launch ubuntu:18.04 my-container

Step 1: Which IP address do you want to assign to your container?

First you need to find out which prefix is routed to your host. Usually you can do that by checking your provider's control panel. You're looking for something like 2a01:4f9:c010:278::1/64. Another option is to run sudo ifconfig and look for an inet6 line in the section of your primary network interface (this only works if you have configured your server to have an IPv6 address). Note that addresses starting with fe80:: and addresses starting with fd, among others, are not public IPv6 addresses.

Then you can pick a new IPv6 address for your container. Which one you choose is entirely your decision, as long as it's within the prefix.

Often, <prefix>::1 is used for the host itself, therefore you could, for example, choose <prefix>::2. Note that some providers use some IP addresses for other purposes. Check your provider’s documentation for details.

If you don't want to make it easy to find your container's public IPv6, don't choose <prefix>::1, <prefix>::2, <prefix>::3 etc. but something more random like <prefix>:af15:99b1:0b05:1, for example 2a01:4f9:c010:278:af15:99b1:0b05:0001. Ensure your IPv6 address has 8 groups of 4 hex digits each!

For this example, we choose the IPv6 address 2a01:4f9:c010:278::8.

Step 2: Find out the ULA of your container

We need to find the ULA (unique local address – similar to a private IPv4 address which is not routed on the internet) of the container. Using lxc, this is quite easy:

uli@myserver:~$ lxc list
+--------------+---------+-----------------------+-----------------------------------------------+
|     NAME     |  STATE  |         IPV4          |                     IPV6                      |
+--------------+---------+-----------------------+-----------------------------------------------+
| my-container | RUNNING | 10.144.118.232 (eth0) | fd42:830b:36dc:3691:216:3eff:fed1:9058 (eth0) |
+--------------+---------+-----------------------+-----------------------------------------------+

You need to look in the IPv6 column and copy the address listed there. In this example, the address is fd42:830b:36dc:3691:216:3eff:fed1:9058.

Step 3: Setup IPv6 routing

Now we can tell the host Linux to route your chosen public IPv6 to the container’s private IPv6. This is quite easy:

sudo ip6tables -t nat -A PREROUTING -d <public IPv6> -j DNAT --to-destination <container private IPv6>

In our example, this would be

sudo ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058

First, test the command by running it in a shell. If it works (i.e. if it doesn't print any error message), you can store it permanently, e.g. by adding it to /etc/rc.local (after #!/bin/bash, before exit 0). Advanced users may prefer to add it to /etc/network/interfaces instead.
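
For reference, a minimal /etc/rc.local for our example could look like this sketch (remember that /etc/rc.local needs to be executable):

#!/bin/bash
# Route the public IPv6 address to the container's ULA on every boot
ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058
exit 0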

Step 4: Connect to your container using SSH on your public IPv6 (optional)

Note: This step requires that you have working IPv6 connectivity at your local computer. If you are unsure, check at ipv6-test.com

First, open a shell on your container:

lxc exec my-container bash

After running this, you should see a root shell prompt inside your container:

root@my-container:~#

The following commands should be entered in the container shell, not on the host!

Now we can create a user to login to (in this example, we create the uli user):

root@my-container:~# adduser uli
Adding user `uli' ...
Adding new group `uli' (1001) ...
Adding new user `uli' (1001) with group `uli' ...
Creating home directory `/home/uli' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for uli
Enter the new value, or press ENTER for the default
        Full Name []: 
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 
Is the information correct? [Y/n]

You only need to enter a password twice (you won't see anything on screen while typing it); for all other prompts you can just press Enter.

The ubuntu:18.04 lxc image used in this example does not allow SSH password authentication in its default configuration. In order to fix this, change PasswordAuthentication no to PasswordAuthentication yes in /etc/ssh/sshd_config and restart the SSH server by running service sshd restart. Be sure you understand the security implications before you do that!
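
Inside the container, you could apply that change like this (a sketch; double-check /etc/ssh/sshd_config afterwards):

sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
service sshd restart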

Now, logout of your container shell by pressing Ctrl+D. The following commands can be entered on your desktop or any other server with IPv6 connectivity.

Now login to your server:

ssh <username>@<public IPv6 address>

in this example:

ssh uli@2a01:4f9:c010:278::8

If you configured everything correctly, you’ll see the shell prompt for your container:

uli@my-container:~$

Note: Don't forget to configure a firewall for your container, e.g. ufw! Your container's IPv6 is exposed to the internet, and just assuming no one will guess it is not good security practice.

Posted by Uli Köhler in Cloud, Container, Linux, LXC, Networking

How to circumvent Google Cloud Datastore 1000 read / 400 write limit in Python

Google Cloud Datastore has a built-in limit of 1000 keys per get request and 500 entities per put request. If you hit it, you will see one of these error messages:

google.api_core.exceptions.InvalidArgument: 400 cannot get more than 1000 keys in a single call
google.api_core.exceptions.InvalidArgument: 400 cannot write more than 500 entities in a single call

You can fix it by chunking the requests, i.e. fetching at most 1000 keys per get call and writing at most 400 entities per put call (a safe margin below the 500 limit).

This code provides a ready-to-use example of a class that automates this process. As an added benefit, it performs the requests in chunks of 1000 (for get) or 400 (for put) in parallel using a concurrent.futures.Executor. As the performance is expected to be IO-bound, it is recommended to use a concurrent.futures.ThreadPoolExecutor. If you don't give the class an executor on construction, it will create one by itself.

import itertools
from concurrent.futures import ThreadPoolExecutor

def _chunks(l, n=1000):
    """
    Yield successive n-sized chunks from l.
    https://stackoverflow.com/a/312464/2597135
    """
    for i in range(0, len(l), n):
        yield l[i:i + n]

def _get_chunk(client, keys):
    """
    Get a single chunk
    """
    missing = []
    vals = client.get_multi(keys, missing=missing)
    return vals, missing

class DatastoreChunkClient(object):
    """
    Provides a thin wrapper around a Google Cloud Datastore client that reads
    and writes entities in chunks, working around the per-call limits.
    """
    def __init__(self, client, executor=None):
        self.client = client
        if executor is None:
            executor = ThreadPoolExecutor(16)
        self.executor = executor
    
    def get_multi(self, keys):
        """
        Thin wrapper around client.get_multi() that circumvents
        the 1000 keys per call read limit by doing 1000-sized chunked reads
        in parallel using self.executor.

        Returns (values, missing).
        """
        all_missing = []
        all_vals = []
        for vals, missing in self.executor.map(lambda chunk: _get_chunk(self.client, chunk), _chunks(keys, 1000)):
            all_vals += vals
            all_missing += missing
        return all_vals, all_missing

    def put_multi(self, entities):
        """
        Thin wrapper around client.put_multi() that circumvents
        the 500 entities per call write limit by doing 400-sized chunked writes
        in parallel using self.executor.

        Returns nothing.
        """
        for _ in self.executor.map(lambda chunk: self.client.put_multi(chunk), _chunks(entities, 400)):
            pass

Usage example:

# Create "raw" google datastore client
client = datastore.Client(project="myproject-123456")
chunkClient = DatastoreChunkClient(client)

# The size of the key list is only limited by memory
keys = [...]
values, missing = chunkClient.get_multi(keys)

# The size of the entity list is only limited by memory
entities = [...]
chunkClient.put_multi(entities)


Posted by Uli Köhler in Cloud, Python

Saving an entity in Google Cloud Datastore using Python: A minimal example

Here’s a minimal example for inserting an entity in the Google Cloud Datastore object database using the Python API:

#!/usr/bin/env python3
from google.cloud import datastore
# Create & store an entity
client = datastore.Client(project="myproject-12345")
entity = datastore.Entity(key=client.key('MyEntityKind', 'MyTestID'))
entity.update({
    'foo': u'bar',
    'baz': 1337,
    'qux': False,
})
# Actually save the entity
client.put(entity)

This assumes you have already created an entity kind with the name MyEntityKind in the project with ID myproject-12345.
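
To read the entity back later, you can use client.get() with the same key. A quick sketch:

# Fetch the entity we stored above
fetched = client.get(client.key('MyEntityKind', 'MyTestID'))
print(fetched)  # e.g. <Entity('MyEntityKind', 'MyTestID') {'foo': 'bar', 'baz': 1337, 'qux': False}>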

Posted by Uli Köhler in Cloud, Python

How to fix Google Cloud Datastore ValueError: A Key must have a project set.

Problem:

You are trying to connect to the Google Cloud Datastore object database:

#!/usr/bin/env python3
from google.cloud import datastore
# Create, populate and persist an entity
entity = datastore.Entity(key=datastore.Key('MyEntityKind')) # Line of error
# ...

but when running that code, you get this error message:

Traceback (most recent call last):
  File "./IndexIntoDB.py", line 4, in <module>
    entity = datastore.Entity(key=datastore.Key('MyEntityKind'))
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/datastore/key.py", line 109, in __init__
    self._project = _validate_project(project, parent)
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/datastore/key.py", line 512, in _validate_project
    raise ValueError("A Key must have a project set.")
ValueError: A Key must have a project set.

Solution:

Note: While the solution below fixes the error message listed above, you might be more interested in having a look at this minimal entity insertion example.

As the error message indicates, you need to add a project name. If you don’t know the project name, go to the Google Cloud Console, select the right project at the top and then look at the URL:

https://console.cloud.google.com/datastore/welcome?project=perceptive-tape-12345

In this example, the project ID (which you have to use in the Python code) is perceptive-tape-12345.
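
Using that project ID, the failing line can be fixed by creating the key through a client, which carries the project. A sketch using our example project ID:

#!/usr/bin/env python3
from google.cloud import datastore
# The client carries the project, so keys created through it are valid
client = datastore.Client(project="perceptive-tape-12345")
entity = datastore.Entity(key=client.key('MyEntityKind'))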

See also the Keys section of the google-cloud-datastore python documentation.

Posted by Uli Köhler in Cloud, Python