How to fix Python IMAP ‘command CLOSE illegal in state AUTH, only allowed in states SELECTED’

Problem:

You have IMAP code in Python similar to

server = imaplib.IMAP4_SSL('imap.mydomain.com')
server.login('[email protected]', 'password')
# ...
# Cleanup
server.close()

but when you run it, server.close() fails with an error message like

Traceback (most recent call last):
  File "./imaptest.py", line 13, in <module>
    server.close()
  File "/usr/lib/python3.6/imaplib.py", line 461, in close
    typ, dat = self._simple_command('CLOSE')
  File "/usr/lib/python3.6/imaplib.py", line 1196, in _simple_command
    return self._command_complete(name, self._command(name, *args))
  File "/usr/lib/python3.6/imaplib.py", line 944, in _command
    ', '.join(Commands[name])))
imaplib.error: command CLOSE illegal in state AUTH, only allowed in states SELECTED

Solution:

Prior to server.close(), you must run server.select() at least once. If in doubt, just use server.select("INBOX"), because this will always work.

Insert this line before server.close():

server.select("INBOX")

It should look like this:

server = imaplib.IMAP4_SSL('imap.mydomain.com')
server.login('[email protected]', 'password')
# ...
# Cleanup
server.select("INBOX")
server.close()

For a complete example, see Minimal Python IMAP over SSL example below.

Posted by Uli Köhler in Python

Minimal Python IMAP over SSL example

Note: Consider using IMAP with TLS instead of IMAP over SSL. See Minimal Python IMAP over TLS example.

This example code will log in to the server using port 993 (IMAP over SSL), list the mailboxes and log out immediately.

#!/usr/bin/env python3
import imaplib

server = imaplib.IMAP4_SSL('imap.mydomain.com')
server.login('[email protected]', 'password')
# Print list of mailboxes on server
code, mailboxes = server.list()
for mailbox in mailboxes:
    print(mailbox.decode("utf-8"))
# Select mailbox
server.select("INBOX")
# Cleanup
server.close()
server.logout()

Remember to replace:

  • imap.mydomain.com with the domain name or IP address of your IMAP server
  • [email protected] with the email address you want to log in with
  • password with the password you want to log in with

When running this script, a successful output might look like this:

(\HasChildren) "." INBOX
(\HasNoChildren) "." INBOX.Spam
(\HasNoChildren) "." INBOX.Drafts
(\HasNoChildren) "." INBOX.Sent
(\HasNoChildren) "." INBOX.Trash

If your credentials don’t work you’ll see an error message like this:

Traceback (most recent call last):
  File "./imaptest.py", line 5, in <module>
    server.login('[email protected]', 'mypassword')
  File "/usr/lib/python3.6/imaplib.py", line 598, in login
    raise self.error(dat[-1])
imaplib.error: b'[AUTHENTICATIONFAILED] Authentication failed.'

Note that in order to be able to server.close() the connection, you must server.select() a mailbox first; this is why we can’t just omit the server.select("INBOX") line even though we don’t actually do anything with the mailbox. See this post for a more concise example of this behaviour.

Posted by Uli Köhler in Python

How to set default zone for Google Cloud project using gcloud command-line tool

Use this command to set the default zone for project myproject-123456 to europe-west4-a and the default region to europe-west4:

gcloud compute project-info add-metadata \
    --metadata google-compute-default-region=europe-west4,google-compute-default-zone=europe-west4-a \
    --project myproject-123456
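
To verify that the defaults have been set, you can query the project metadata afterwards:

gcloud compute project-info describe --project myproject-123456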

Also see the official reference for more detailed information.

Posted by Uli Köhler in Cloud

How to fix mount: unknown filesystem type ‘smbfs’

Problem:

When you’re trying to mount a Windows network share using a command like

sudo mount -t smbfs //Asus/store_n_go /mnt/

you see this error message:

mount: unknown filesystem type 'smbfs'

Solution:

The smbfs filesystem type was removed from the Linux kernel a long time ago; its replacement is cifs. First, ensure that the CIFS mount helper is installed (on Ubuntu it is provided by the cifs-utils package):

sudo apt install cifs-utils

then try again using cifs as the filesystem type instead of smbfs:

sudo mount -t cifs //Asus/store_n_go /mnt/
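
If the share requires authentication, you can additionally pass a username (replace myuser with your actual username on the share); mount will then prompt you for the password:

sudo mount -t cifs //Asus/store_n_go /mnt/ -o username=myuser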


Posted by Uli Köhler in Linux, Networking

How to insert test data into ElasticSearch 6.x

If you just want to insert some test documents into ElasticSearch 6.x, you can use this simple command:

curl -X POST "localhost:9200/mydocuments/_doc/" -H 'Content-Type: application/json' -d"
{
    \"test\" : true,
    \"post_date\" : \"$(date -Ins)\"
}"

Run this command multiple times to insert multiple documents, or use the loop shown below!

In case of success, this will output a message like

{"_index":"mydocuments","_type":"_doc","_id":"vCxB82kBn2U9QxlET2aG","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}

Also see the official docs on the indexing API.

Posted by Uli Köhler in ElasticSearch

How to print ISO8601 date using ‘date’ on command line

Use one of these commands to print the current date or time in ISO8601 format using the date tool included in most Linux distributions:

date -I # 2019-04-06
date -Iseconds # 2019-04-06T17:22:49+02:00
date -Ins # 2019-04-06T17:23:08,505995625+02:00
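
These timestamps are handy for naming files; for example, to create a backup archive tagged with the current date (myfolder is a placeholder for the directory you want to archive):

tar czf "backup-$(date -I).tar.gz" myfolder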

For reference see the date manpage.

Posted by Uli Köhler in Linux

How to install curl on Ubuntu Linux

On most Ubuntu installations, curl is already installed. In order to check, type

curl

into your shell and press return.

If you see a message like

curl: try 'curl --help' or 'curl --manual' for more information

curl is already installed and you don’t need to do anything.

In case you see this message:

Command 'curl' not found, but can be installed with:

apt install curl
Please ask your administrator.

or (more rarely) this message:

-bash: /usr/bin/curl: No such file or directory

you can install curl by copying

sudo apt install curl

into your shell and pressing return.

Posted by Uli Köhler in Linux

How to view & interpret disk space usage of your ElasticSearch cluster

In order to find out how much disk space every node of your ElasticSearch cluster is using and how much disk space is remaining, use

curl -XGET "http://localhost:9200/_cat/allocation?v&pretty"

Example output for one node without any data:

shards disk.indices disk.used disk.avail disk.total disk.percent host       ip         node
     0           0b     2.4gb    200.9gb    203.3gb            1 172.18.0.2 172.18.0.2 TxYuHLF

This example output tells you:

  • shards: 0 The cluster currently has no shards, which means there is no data in the cluster.
  • disk.indices: 0b The cluster currently uses 0 bytes of disk space for indices.
  • disk.used: 2.4gb The disk ElasticSearch will store its data on has 2.4 Gigabytes of used space. This does not mean that ElasticSearch itself uses 2.4 Gigabytes; any other application (including the operating system) might also use (part of) that space.
  • disk.avail: 200.9gb The disk ElasticSearch will store its data on has 200.9 Gigabytes of free space. Remember that this value does not only shrink when ElasticSearch stores data on said disk; other applications might also consume disk space, depending on how you set up ElasticSearch.
  • disk.total: 203.3gb The disk ElasticSearch will store its data on has a total size of 203.3 Gigabytes (the total filesystem size, i.e. the space that would be available on an empty filesystem).
  • disk.percent: 1 Currently 1 % of the total disk space (disk.total) is used. This value is always rounded to whole percents.
  • host, ip, node: Which node this line is referring to.

Example with one node with some test data (see this TechOverflow post on how to generate test data):

shards disk.indices disk.used disk.avail disk.total disk.percent host       ip         node
     5        6.8kb     2.4gb     21.6gb       24gb           10 172.18.0.2 172.18.0.2 J3W5zqj
     5                                                                                 UNASSIGNED

As we can see, ElasticSearch now has 5 shards. Note that the second line tells us that 5 shards are UNASSIGNED. This is because ElasticSearch has been configured to make one replica for each shard and there is no second node where it can put the replica. For development configurations this is usually OK, but production configurations should usually have at least two nodes. See our docker-compose and systemd service generator for ElasticSearch for instructions on how to configure a local multi-node cluster using docker.
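
If you want to process this information programmatically rather than parsing the fixed-width table, the _cat API can also emit JSON:

curl -XGET "http://localhost:9200/_cat/allocation?format=json&pretty"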

Posted by Uli Köhler in ElasticSearch

How to upload your Python package to PyPI in 30 seconds

Prerequisite: Install twine:

sudo pip3 install twine

Before the next step, ensure you have no uncommitted files, because those will be deleted!

Also, ensure that your package is ready for release and that the correct version is listed in setup.py.

sudo chown -R $USER: .
git clean -xdf
python3 setup.py sdist
twine upload dist/*

Variant if you don’t have python3:

sudo chown -R $USER: .
git clean -xdf
python setup.py sdist
twine upload dist/*

For more detailed instructions see this post.

Detailed explanation of the commands:

  • sudo chown -R $USER: . Fix permission issues possibly introduced by a previous sudo python3 setup.py install.
  • git clean -xdf Remove uncommitted files and other fuzz.
  • python3 setup.py sdist Build the source package (see below for how to inspect it).
  • twine upload dist/* This will ask you for your PyPI username and password and then upload the package.
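
Before uploading, you can check which files were actually included in the source package:

tar tzf dist/*.tar.gz
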
Posted by Uli Köhler in Linux, Python

How to backup all indices from ElasticSearch

You can use elasticdump to back up all indices from your ElasticSearch cluster. Install it using

sudo npm install elasticdump -g

If you don’t have npm, see How to install NodeJS 10.x on Ubuntu in 1 minute.

This package installs two binaries: elasticdump (used to dump a single index) and multielasticdump (used to dump multiple indices in parallel).

We can use multielasticdump to dump all indices:

mkdir -p es_backup
multielasticdump --direction=dump --input=http://localhost:9200 --output=es_backup
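
For recurring backups, it can be convenient to write the dump into a timestamped directory instead (a minimal sketch based on the same command):

backup_dir="es_backup_$(date -I)"
mkdir -p "$backup_dir"
multielasticdump --direction=dump --input=http://localhost:9200 --output="$backup_dir"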

Restore using:

multielasticdump --direction=load --input=es_backup --output=http://localhost:9200


Posted by Uli Köhler in ElasticSearch, Linux

How to expand a Kubernetes Persistent Volume Claim (PVC)

Important note: By default, volumes will not be resized immediately but instead require a restart of the associated pod.

First, ensure that you have set allowVolumeExpansion: true for the storage class of your PVC. See our previous post on How to allow Persistent Volume Claim (PVC) resize for Kubernetes storage class for more details.

We can expand the volume (named myapp-myapp-pvc-myapp-myapp-1 in this example) by running

kubectl patch pvc/"myapp-myapp-pvc-myapp-myapp-1" \
  --namespace "default" \
  --patch '{"spec": {"resources": {"requests": {"storage": "40Gi"}}}}'

Ensure that you have replaced the name of the PVC (myapp-myapp-pvc-myapp-myapp-1 in this example) and the storage size. Note that it is only possible to expand (increase the size of) a volume, not to shrink it. If your new size is less than the previous value, you’ll see this error message:

The PersistentVolumeClaim "myapp-myapp-pvc-myapp-myapp-1" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value

After running this command, the PVC will be in the FileSystemResizePending state.
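
You can check the current status by describing the PVC; the resize state is listed under Conditions:

kubectl describe pvc/"myapp-myapp-pvc-myapp-myapp-1" --namespace "default"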

In order for the update to take effect, you’ll need to force Kubernetes to re-create all the pods for your deployment. To find out how to do this, read our post on How to force restarting all Pods in a Kubernetes Deployment.

For reference, see the official documentation on expanding persistent volumes.

Posted by Uli Köhler in Cloud, Kubernetes

How to force restarting all Pods in a Kubernetes Deployment

In contrast to classical deployment managers like systemd or pm2, Kubernetes does not provide a simple “restart my application” command.

However, there’s an easy workaround: if you change anything in your configuration, even innocuous things that don’t have any effect, Kubernetes will restart your pods.

Consider configuring a rolling update strategy before doing this if you are updating a production application that should have minimal downtime.

In this example we’ll assume you have a StatefulSet you want to update and that it’s named elasticsearch-elasticsearch. Be sure to fill in the actual name of your deployment here.

kubectl patch statefulset/elasticsearch-elasticsearch -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"dummy-date\":\"`date +'%s'`\"}}}}}"

This will just set a dummy-date annotation on the pod template. The annotation has no effect on the application itself, but the change to the template causes Kubernetes to re-create all pods.
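
You can verify that the annotation has been applied by inspecting the pod template:

kubectl get statefulset/elasticsearch-elasticsearch \
  -o jsonpath='{.spec.template.metadata.annotations}'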

You can monitor the rollout using

kubectl rollout status statefulset/elasticsearch-elasticsearch

Credits for the original solution idea to pstadler on GitHub.

Posted by Uli Köhler in Cloud, Kubernetes

How to allow Persistent Volume Claim (PVC) resize for Kubernetes storage class

The prerequisite for resizing a Kubernetes Persistent Volume Claim is that you allow volume expansion in the storage class the PVC belongs to (the standard storage class in this example).

We can allow this by setting allowVolumeExpansion: true for that storage class.

Patching the configuration on the fly can easily be done using

kubectl patch storageclass/"standard" \
  --patch '{"allowVolumeExpansion": true}'

Remember that you might need to adjust the storage class name depending on which one you use; for most standard configurations, however, the storage class standard will be the one you need. Note that storage classes are cluster-wide objects, so no namespace needs to be specified.
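
You can verify that the flag has been set by querying the storage class (allowVolumeExpansion is a top-level field on the StorageClass object):

kubectl get storageclass/"standard" -o jsonpath='{.allowVolumeExpansion}'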

Posted by Uli Köhler in Kubernetes

How to configure Google Cloud Kubernetes Elasticsearch Cluster with internal load balancer

Google Cloud offers a convenient way of installing an ElasticSearch cluster on top of a Google Cloud Kubernetes cluster. However, the documentation tells you to expose the ElasticSearch instance using

kubectl patch service/"elasticsearch-elasticsearch-svc" \
  --namespace "default" \
  --patch '{"spec": {"type": "LoadBalancer"}}'

However, this command will expose ElasticSearch on an external IP address, which makes it publicly accessible in the default configuration.

Here’s the equivalent command that will expose ElasticSearch on an internal load balancer with an internal IP address that will only be reachable from within Google Cloud.

kubectl patch service/"elasticsearch-elasticsearch-svc" \
  --namespace "default" \
  --patch '{"spec": {"type": "LoadBalancer"}, "metadata": {"annotations": {"cloud.google.com/load-balancer-type": "Internal"}}}'

You might need to replace the name of your service (elasticsearch-elasticsearch-svc in this example) and possibly your namespace.
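
Once the internal load balancer has been provisioned, you can look up its IP address in the service list:

kubectl get service "elasticsearch-elasticsearch-svc" --namespace "default"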


Posted by Uli Köhler in Cloud, ElasticSearch, Kubernetes

How to fix OpenVPN ‘failed to find GID for group openvpn’

Problem:

In your OpenVPN server logs you see this error message

failed to find GID for group openvpn

followed by a server restart (Exiting due to fatal error).

Solution:

Run this command to add the OpenVPN group:

sudo groupadd openvpn

In most cases, you’ll see this in your server log after doing that:

failed to find UID for user openvpn
Exiting due to fatal error

In that case, refer to our previous post on How to fix OpenVPN ‘failed to find UID for user openvpn’.

Posted by Uli Köhler in Cryptography, Linux

How to fix OpenVPN ‘failed to find UID for user openvpn’

Problem:

In your OpenVPN server logs you see this error message

failed to find UID for user openvpn

followed by a server restart (Exiting due to fatal error).

Solution:

Run this command to add the openvpn user and add that user to the openvpn group:

sudo useradd openvpn -g openvpn
Posted by Uli Köhler in Cryptography, Linux

How to generate Diffie-Hellman (DH) parameters using OpenSSL

Problem:

You want to use unique Diffie-Hellman parameters for your webserver or VPN server, but you don’t know how to generate the .pem file using OpenSSL.

Solution:

Use this command to generate the parameters and save them in dhparams.pem:

openssl dhparam -out dhparams.pem 4096

This command generates Diffie-Hellman parameters with 4096 bits. This provides good security while still offering very reasonable performance on modern devices. Depending on your preferred level of paranoia, you might want to increase the number of bits even further.

Note that even for “only” 4096 bits, generating the parameters will usually take a couple of minutes. Larger parameter sizes might take many hours or even days to generate. Ensure that you are generating the parameters on a fast computer and not on your Raspberry Pi or similar!
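
You can inspect the generated parameters afterwards using:

openssl dhparam -in dhparams.pem -text -noout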

Posted by Uli Köhler in Cryptography, Linux

How to enable SSH on Raspbian without a screen

You can open the boot partition on the SD card (the FAT32 partition) and create an empty file named ssh in the root directory of that partition. Ensure that the file is named ssh and not ssh.txt!

If you are already in the correct working directory on the command line, use

touch ssh

On recent Ubuntu versions, the following command will switch to the correct directory and create the file (but you need to mount the partition manually first, e.g. using your file explorer):

cd /media/$USER/boot && touch ssh

Don’t forget to unmount the boot partition before removing the SD card. Once you restart the Raspberry Pi with the modified SD card, SSH will be enabled without you having to attach a keyboard or screen to the Pi.
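
On most Ubuntu systems the boot partition will be mounted under /media/$USER/boot (the exact path may differ on your system), so you can unmount it using:

sudo umount /media/$USER/boot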

This approach was tested with the 2018-11-13 version of Raspbian and works with Raspberry Pi 1, Raspberry Pi 2 and Raspberry Pi 3.

Credits to Yahor for the original solution on StackOverflow!

Posted by Uli Köhler in Embedded, Linux

How to install MicroK8S (MicroKubernetes) on Ubuntu in 30 seconds

This set of commands will install & start MicroK8S (MicroKubernetes) on Ubuntu and similar Linux distributions.

sudo snap install microk8s --classic
sudo snap install kubectl --classic
sudo microk8s.enable # Autostart on boot
sudo microk8s.start # Start right now
# Wait until microk8s has started
until microk8s.status ; do sleep 1 ; done
# Enable some standard modules
microk8s.enable dashboard registry istio

For reference see the official quickstart manual.

Posted by Uli Köhler in General, Cloud, Container, Kubernetes

How to fix docker push ‘denied: requested access to the resource is denied’

Problem:

You want to push an image to Docker Hub using a command like

docker push myusernameondocker/my-image

but you see an error message like

denied: requested access to the resource is denied

Solution:

First, ensure that your local docker client is logged in to Docker Hub by using

docker login

This will ask you for your username and password. In case you have not registered on Docker Hub yet, register here!

Now you can retry your command. Note that you do not need to create the repository on Docker Hub explicitly; it will be created automatically!
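
Another common cause of this error is that the image has not been tagged with your Docker Hub username. In that case, tag the image accordingly before pushing (using the example names from above):

docker tag my-image myusernameondocker/my-image
docker push myusernameondocker/my-image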

Posted by Uli Köhler in Container, Docker