How to find zone of Google Cloud VM instance on command line

Problem:

You have a VM instance (my-instance in our example) and you want to find out which zone it resides in using the gcloud command line tool.

Solution:

If you just want to see the zone of the instance (remember to replace my-instance by your instance name!), use

gcloud compute instances list --filter="name=my-instance" --format "[box]"

This will format the output nicely and show you more information about your instance. Example output:

┌─────────────┬────────────────┬─────────────────────────────┬─────────────┬─────────────┬───────────────┬─────────┐
│    NAME     │      ZONE      │         MACHINE_TYPE        │ PREEMPTIBLE │ INTERNAL_IP │  EXTERNAL_IP  │  STATUS │
├─────────────┼────────────────┼─────────────────────────────┼─────────────┼─────────────┼───────────────┼─────────┤
│ my-instance │ europe-west3-c │ custom (16 vCPU, 32.00 GiB) │             │ 10.156.0.1  │ 35.207.77.101 │ RUNNING │
└─────────────┴────────────────┴─────────────────────────────┴─────────────┴─────────────┴───────────────┴─────────┘

In this example, the zone is europe-west3-c.

In case you want to see only the zone, use this command instead (gcloud prints the zone as a full resource URL, so awk extracts the last path component):

gcloud compute instances list --filter="name=my-instance" --format "get(zone)" | awk -F/ '{print $NF}'

Example output:

europe-west3-c
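
In case you prefer to avoid the awk pipeline, gcloud's format projections can strip the URL prefix themselves. A minimal sketch, assuming a gcloud version that supports the basename() transform:

gcloud compute instances list --filter="name=my-instance" --format "value(zone.basename())"

This should print just europe-west3-c, equivalent to the awk variant above.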

Also see our other post How to find IP address of Google Cloud VM instance on command line.

In order to see what other information about instances is available in a similar fashion, use

gcloud compute instances list --filter="name=my-instance" --format "text"
Posted by Uli Köhler in Cloud

How to find IP address of Google Cloud VM instance on command line

Problem:

You have a VM instance (my-instance in our example) for which you want to get the external or internal IP using the gcloud command line tool.

Solution:

If you just want to see the external IP of the instance (remember to replace my-instance by your instance name!), use

gcloud compute instances list --filter="name=my-instance" --format "[box]"

This will format the output nicely and show you more information about your instance. Example output:

┌─────────────┬────────────────┬─────────────────────────────┬─────────────┬─────────────┬───────────────┬─────────┐
│    NAME     │      ZONE      │         MACHINE_TYPE        │ PREEMPTIBLE │ INTERNAL_IP │  EXTERNAL_IP  │  STATUS │
├─────────────┼────────────────┼─────────────────────────────┼─────────────┼─────────────┼───────────────┼─────────┤
│ my-instance │ europe-west3-c │ custom (16 vCPU, 32.00 GiB) │             │ 10.156.0.1  │ 35.207.77.101 │ RUNNING │
└─────────────┴────────────────┴─────────────────────────────┴─────────────┴─────────────┴───────────────┴─────────┘

In this example, the external IP address is 35.207.77.101.

In case you want to see only the IP address, use this command instead:

gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)"

Example output:

35.207.77.101

In order to see only the internal IP address (accessible only from Google Cloud), use

gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].networkIP)"

In a Linux shell, the result of this command can easily be used as input to other commands. For example, to ping my-instance, use

ping $(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)")
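
The result can also be captured in a shell variable for use in scripts. A minimal sketch (MY_IP is just an illustrative variable name):

MY_IP=$(gcloud compute instances list --filter="name=my-instance" --format "get(networkInterfaces[0].accessConfigs[0].natIP)")
echo "External IP is ${MY_IP}"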

Also see our related post How to find zone of Google Cloud VM instance on command line

In order to see what other information about instances is available in a similar fashion, use

gcloud compute instances list --filter="name=my-instance" --format "text"
Posted by Uli Köhler in Cloud

How to fix Kubernetes kubectl apply not restarting pods

Problem:

You made an update to your Kubernetes YAML configuration which you applied with

kubectl apply -f [YAML filename]

but Kubernetes still keeps the old version of the software running.

Solution:

Instead of kubectl apply -f ... use

kubectl replace --force -f [YAML filename]

This will update the configuration on the server and also update the running pods.
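
Note that kubectl replace --force deletes and re-creates the resources, so expect a short downtime. If your workload is a Deployment and your kubectl is reasonably recent (1.15+), a gentler alternative is to apply the configuration and then trigger a rolling restart (my-deployment is a placeholder name):

kubectl apply -f [YAML filename]
kubectl rollout restart deployment/my-deployment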

Original answer on StackOverflow

Posted by Uli Köhler in Cloud, Container, Kubernetes

How to fix npm ERR! missing script: start

Problem:

You want to run a NodeJS app using

npm start

but you only see this error message:

npm ERR! missing script: start

Solution:

You need to explicitly tell npm what to do when you run npm start by editing package.json.

First, identify the main file of your application. Most often it is called index.js, server.js or app.js. When you open package.json in an editor, you can also often find a line like

"main": "index.js",

In this example, index.js is the main file of the application.

Now we can edit package.json to add a start script.

In package.json, find the "scripts" section. If you have a default package.json, it will look like this:

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1"
},

Now add a comma to the end of the "test" line and add this line after it:

"start": "node index.js"

and replace index.js with the main file of your application (e.g. app.js or server.js).

The "scripts" section should now look like this:

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "start": "node index.js"
},

Now save and close package.json and run

npm start

to start your application.
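
Tip: in case you are unsure which scripts are defined in the current package.json, npm can list them for you:

npm run

Running npm run without arguments prints all available scripts (e.g. test and start) together with the commands they execute.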

Posted by Uli Köhler in Javascript, NodeJS

Koa minimal example

Also see Minimal Koa.JS example with Router & Body parser

const Koa = require('koa');
const app = new Koa();

app.use(async ctx => {
  ctx.body = {status: 'success'};
});

app.listen(8080);

To install (see this previous post for a guide on installing NodeJS & NPM):

npm install --save koa
node index.js

Then go to http://localhost:8080/ to see the JSON response.
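
You can also test the endpoint from the command line, e.g. using curl:

curl http://localhost:8080/

This should print {"status":"success"}.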

Posted by Uli Köhler in Javascript

How to fix kubectl Unable to connect to the server: dial tcp …:443: i/o timeout

Problem:

You want to create or edit a Kubernetes service but when running e.g.

kubectl create -f my-service.yml

you see an error message similar to this:

Unable to connect to the server: dial tcp 35.198.129.60:443: i/o timeout

Solution:

There are three common reasons for this issue:

  1. Your Kubernetes cluster is not running. Verify that your cluster has been started, e.g. by pinging its IP address.
  2. There are networking issues that prevent you from accessing the cluster. Verify that you can ping the IP and try to track down whether a firewall is blocking the access.
  3. You have configured a cluster that does not exist any more.

In case of Google Cloud Kubernetes, case (3) can easily be fixed by configuring Kubernetes to use your current cluster:

gcloud container clusters get-credentials [cluster name] --zone [zone]

This will automatically update the default cluster for kubectl.
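
To verify that kubectl can now reach the cluster, run a simple read-only command such as

kubectl get nodes

which should list the nodes of your cluster instead of timing out.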

In case you don’t know the correct cluster name and zone, use

gcloud container clusters list
Posted by Uli Köhler in Cloud, Container, Kubernetes

How to build & upload a Dockerized application to Google Container Registry in 5 minutes

This post provides an easy example of how to build & upload your application to the private Google Container Registry. We assume you have already set up your project and installed Docker. In this example, we’ll build & upload pseudo-perseus v1.0. Since this is a NodeJS-based application, we also assume that you have installed a recent version of NodeJS and NPM (see our previous article on how to do that on Ubuntu).

First we configure docker to be able to authenticate to Google:

gcloud auth configure-docker

Now we can check out the repository and install the NPM packages:

git clone https://github.com/ulikoehler/pseudo-perseus.git
cd pseudo-perseus
git checkout v1.0
npm install

Now we can build the local Docker image. We tag it directly so that it can be uploaded to the Google Container Registry (be sure to use the correct Google Cloud project ID!):

docker build -t eu.gcr.io/myproject-123456/pseudo-perseus:v1.0 .

The next step is to upload the image:

docker push eu.gcr.io/myproject-123456/pseudo-perseus:v1.0
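
To check that the upload worked, you can list the tags that the registry stores for the image (adjust the project ID and image name to match yours):

gcloud container images list-tags eu.gcr.io/myproject-123456/pseudo-perseus

The v1.0 tag you just pushed should appear in the output.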

For reference see the official Container Registry documentation.

Posted by Uli Köhler in Cloud, Container, Docker

Fixing gcloud WARNING: `docker-credential-gcloud` not in system PATH

Problem:

You want to configure docker to be able to access Google Container Registry using

gcloud auth configure-docker

but you see this warning message:

WARNING: `docker-credential-gcloud` not in system PATH.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
gcloud credential helpers already registered correctly.

Solution:

Install the docker-credential-gcr credential helper using

sudo gcloud components install docker-credential-gcr

In case you see this error message:

ERROR: (gcloud.components.install) You cannot perform this action because this Cloud SDK installation is managed by an external package manager.
Please consider using a separate installation of the Cloud SDK created through the default mechanism described at: https://cloud.google.com/sdk/

use this alternate installation command instead (this command is for Linux, see the official documentation for other operating systems):

VERSION=1.5.0
OS=linux
ARCH=amd64

curl -fsSL "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" \
  | tar xz --to-stdout ./docker-credential-gcr \
  | sudo tee /usr/bin/docker-credential-gcr > /dev/null && sudo chmod +x /usr/bin/docker-credential-gcr

After that, configure docker using

docker-credential-gcr configure-docker
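
If you want to verify that the credential helper is registered, you can inspect Docker's client configuration (the exact content depends on your setup):

cat ~/.docker/config.json

The credHelpers section should map the gcr.io registries to gcr.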

Now you can retry running your original command.

For reference, see the official documentation.

Posted by Uli Köhler in Cloud, Container, Docker, Linux

How to fix kubectl ‘The connection to the server localhost:8080 was refused – did you specify the right host or port?’

Problem:

You want to configure a Kubernetes service with kubectl using a command like

kubectl patch service/"my-elasticsearch-svc" --namespace "default" --patch '{"spec": {"type": "LoadBalancer"}}'

but you only see this error message:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Solution:

kubectl does not have the correct credentials to access the cluster.

Add the correct credentials to the kubectl config using

gcloud container clusters get-credentials [cluster name] --zone [cluster zone]

e.g.

gcloud container clusters get-credentials cluster-1 --zone europe-west3-c

After that, retry your original command.

In case you don’t know your cluster name or zone, use

gcloud container clusters list

to display the cluster metadata.
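
You can also double-check which cluster kubectl is now talking to by printing the active context:

kubectl config current-context

For a Google Cloud cluster, this typically prints a name of the form gke_[project]_[zone]_[cluster name].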

Credits to this StackOverflow answer for the original solution.

Posted by Uli Köhler in General, Cloud, Container, Kubernetes

How to install kubectl on Ubuntu

Problem:

You want to run the Kubernetes kubectl command on Ubuntu but you see an error message like this:

command not found: kubectl

Solution:

Install kubectl using snap:

sudo snap install kubectl --classic

After this command has finished installing kubectl, in most cases you can use it immediately. In case you still get the command not found: kubectl error message, run $SHELL to reload your shell and check if /snap/bin is in your $PATH environment variable.
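
To verify the installation, print the client version:

kubectl version --client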

Posted by Uli Köhler in Container, Kubernetes, Linux

How to fix Linux/Windows dual boot clock shift

Problem:

You have a dual-boot system. Every time you reboot from Linux to Windows, the time is shifted by several hours.

Solution:

On Linux, run

sudo timedatectl set-local-rtc 1

This will configure Linux to store local time in the RTC.

See this StackOverflow post for alternate solutions

Background:

Both Linux and Windows use the hardware clock (RTC, real-time clock) integrated into the computer hardware. However, Windows by default assumes that the RTC stores local time, whereas Linux assumes it stores UTC.
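
If you prefer to fix this on the Windows side instead, Windows can be told to treat the RTC as UTC using the well-known RealTimeIsUniversal registry value. A sketch, to be run in an administrator command prompt (edit the registry at your own risk):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f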

Posted by Uli Köhler in Linux, Windows

ElasticSearch equivalent to MongoDB .distinct(…)

Let’s say we have an ElasticSearch index called strings with a field pattern of {"type": "keyword"}.

Now we want to do the equivalent of MongoDB db.getCollection('...').distinct('pattern'):

Solution:

In Python you can use the iterate_distinct_field() helper from this previous post on ElasticSearch distinct. Full example:

from elasticsearch import Elasticsearch

es = Elasticsearch()

def iterate_distinct_field(es, fieldname, pagesize=250, **kwargs):
    """
    Helper to get all distinct values from ElasticSearch
    (ordered by number of occurrences)
    """
    compositeQuery = {
        "size": pagesize,
        "sources": [{
                fieldname: {
                    "terms": {
                        "field": fieldname
                    }
                }
            }
        ]
    }
    # Iterate over pages
    while True:
        result = es.search(**kwargs, body={
            "aggs": {
                "values": {
                    "composite": compositeQuery
                }
            }
        })
        # Yield each bucket
        for aggregation in result["aggregations"]["values"]["buckets"]:
            yield aggregation
        # Set "after" field
        if "after_key" in result["aggregations"]["values"]:
            compositeQuery["after"] = \
                result["aggregations"]["values"]["after_key"]
        else: # Finished!
            break

# Usage example
for result in iterate_distinct_field(es, fieldname="pattern.keyword", index="strings"):
    print(result) # e.g. {'key': {'pattern.keyword': 'mypattern'}, 'doc_count': 315}
Posted by Uli Köhler in Databases, ElasticSearch, Python

How to query distinct field values in ElasticSearch

Let’s say we have an ElasticSearch index called strings with a field pattern of {"type": "keyword"}.

Get the top N values of the column

If we want to get the top N (12 in our example) entries, i.e. the patterns that are present in the most documents, we can use this query:

{
    "aggs" : {
        "patterns" : {
            "terms" : {
                "field" : "pattern.keyword",
                "size": 12
            }
        }
    }
}

Full example in Python:

from elasticsearch import Elasticsearch

es = Elasticsearch()

result = es.search(index="strings", body={
    "aggs" : {
        "patterns" : {
            "terms" : {
                "field" : "pattern.keyword",
                "size": 12
            }
        }
    }
})
for aggregation in result["aggregations"]["patterns"]["buckets"]:
    print(aggregation) # e.g. {'key': 'mypattern', 'doc_count': 2802}

See the terms aggregation documentation for more information.

Get all the distinct values of the column

Getting all the values is slightly more complicated since we need to use a composite aggregation that returns an after_key to paginate the query.

This Python helper function will automatically paginate the query with configurable page size:

from elasticsearch import Elasticsearch

es = Elasticsearch()

def iterate_distinct_field(es, fieldname, pagesize=250, **kwargs):
    """
    Helper to get all distinct values from ElasticSearch
    (ordered by number of occurrences)
    """
    compositeQuery = {
        "size": pagesize,
        "sources": [{
                fieldname: {
                    "terms": {
                        "field": fieldname
                    }
                }
            }
        ]
    }
    # Iterate over pages
    while True:
        result = es.search(**kwargs, body={
            "aggs": {
                "values": {
                    "composite": compositeQuery
                }
            }
        })
        # Yield each bucket
        for aggregation in result["aggregations"]["values"]["buckets"]:
            yield aggregation
        # Set "after" field
        if "after_key" in result["aggregations"]["values"]:
            compositeQuery["after"] = \
                result["aggregations"]["values"]["after_key"]
        else: # Finished!
            break

# Usage example
for result in iterate_distinct_field(es, fieldname="pattern.keyword", index="strings"):
    print(result) # e.g. {'key': {'pattern.keyword': 'mypattern'}, 'doc_count': 315}
Posted by Uli Köhler in Databases, ElasticSearch, Python

How to fix ElasticSearch ‘Fielddata is disabled on text fields by default’ for keyword field

Problem:

You have a field in ElasticSearch named e.g. pattern of type keyword. However, when you query for an aggregation of this field e.g.

es.search(index="strings", body={
    "size": 0,
    "aggs" : {
        "patterns" : {
            "terms" : { "field" : "pattern" }
        }
    }
})

you see this error message:

elasticsearch.exceptions.RequestError: RequestError(400, 'search_phase_execution_exception', 'Fielddata is disabled on text fields by default. Set fielddata=true on [pattern] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.'

Solution:

This error message is confusing since you already have a keyword field. However, the ElasticSearch fielddata documentation tells us that you need to use pattern.keyword in the query instead of just pattern.

Full example:

es.search(index="strings", body={
    "size": 0,
    "aggs" : {
        "patterns" : {
            "terms" : { "field" : "pattern.keyword" }
        }
    }
})
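
Background: with ElasticSearch's default dynamic mapping, a string field like pattern is typically indexed as a text field with an additional keyword sub-field named pattern.keyword, and only the keyword variant can be aggregated without fielddata. To inspect the actual mapping of your index, you can query it directly (assuming ElasticSearch listens on localhost:9200):

curl -s http://localhost:9200/strings/_mapping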
Posted by Uli Köhler in Databases, ElasticSearch

How to fix ModuleNotFoundError: No module named ‘grpc’ in Python

Problem:

You want to run a Python script that uses some Google Cloud services. However, you see an error message similar to this:

[...]
  File "/usr/local/lib/python3.6/dist-packages/google/api_core/gapic_v1/__init__.py", line 16, in <module>
    from google.api_core.gapic_v1 import config
  File "/usr/local/lib/python3.6/dist-packages/google/api_core/gapic_v1/config.py", line 23, in <module>
    import grpc
ModuleNotFoundError: No module named 'grpc'

Solution:

Install the grpcio Python module:

sudo pip3 install grpcio

or, for Python 2.x

sudo pip install grpcio
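
You can verify that the module is now available by importing it:

python3 -c "import grpc; print(grpc.__version__)"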
Posted by Uli Köhler in Cloud, Linux, Python

How to fix echo printing literal \n (backslash newline)

Problem:

You want to echo into a file like this:

echo "\ntest\n" > test.txt

but after doing that, test.txt contains the literal

\ntest\n

Solution:

Use echo -e (-e means: interpret backslash escapes):

echo -e "\ntest\n" > test.txt

After doing that, test.txt will contain test with a newline before and after.
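
As a more portable alternative, consider printf, which interprets backslash escapes by default in any POSIX shell:

printf '\ntest\n' > test.txt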

Posted by Uli Köhler in Linux

How to solve permission denied error when trying to echo into a file

Problem:

You are trying to echo a string into a file only accessible to root, e.g.:

sudo echo "foo" > /etc/my.cnf

but you only see this error message:

-bash: /etc/my.cnf: Permission denied

Solution:

Use tee instead (this will also echo the string to stdout):

echo "foo" | sudo tee /etc/my.cnf # Overwrite: equivalent of echo "foo" > /etc/my.cnf
echo "foo" | sudo tee -a /etc/my.cnf # Append: equivalent of echo "foo" >> /etc/my.cnf

The reason for the error message is that while echo is executed using sudo, the output redirection > /etc/my.cnf is performed by your shell as the normal user (not root).

An alternate approach is to run a sub-shell as sudo:

sudo bash -c 'echo "foo" > /etc/my.cnf'

but this has several caveats, e.g. related to escaping, so I usually don’t recommend this approach.

Posted by Uli Köhler in Linux

How to fix ERROR: Couldn’t connect to Docker daemon at http+docker://localhost – is it running?

Problem:

You want to run a Docker container or a docker-compose application, but when you try to start it, you see this error message:

ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.

Solution:

There are two possible reasons for this error message.

The most common reason is that the user you are running the command as does not have the permission to access Docker.

You can fix this either by running the command as root using sudo (since root has permission to access Docker) or by adding your user to the docker group:

sudo usermod -a -G docker $USER

and then logging out and logging back in completely (or restarting the system/server).
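
If you want to try it without logging out first, you can open a sub-shell with the new group applied for the current session (the change only becomes permanent after a re-login):

newgrp docker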

The other reason is that the Docker daemon has not been started. On Ubuntu, you can start it using

sudo systemctl enable docker # Auto-start on boot
sudo systemctl start docker # Start right now
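
In order to check whether the Docker daemon is actually running, query its status using

sudo systemctl status docker

Alternatively, run docker info, which connects to the daemon and reports an error if it is not reachable.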

TechOverflow’s Docker install instructions automatically take care of starting & enabling the service.

Posted by Uli Köhler in Container, Docker, Linux

ElasticSearch docker-compose.yml and systemd service generator

Just looking for a simple solution with a single ElasticSearch node? See our new post Simple Elasticsearch setup with docker-compose

New: Now with ElasticSearch 7.13.4

This generator allows you to generate a systemd service file for a docker-compose setup that is automatically restarted if it fails.


Posted by Uli Köhler in Container, Databases, Docker, ElasticSearch, Generators, Linux

docker-compose systemd .service generator

This generator allows you to generate a systemd service file for a docker-compose setup that is automatically restarted if it fails.


Posted by Uli Köhler in Docker, Generators, Linux