Databases

How to optimize MySQL/MariaDB tables in docker-compose

If your MariaDB / MySQL root password is stored in .env, use this command:

source .env && docker-compose exec mariadb mysqlcheck -uroot -p$MARIADB_ROOT_PASSWORD --auto-repair --optimize --all-databases

You can also directly use the root password in the command:

docker-compose exec mariadb mysqlcheck -uroot -phoox8AiFahuniPaivatoh2iexighee --auto-repair --optimize --all-databases


Posted by Uli Köhler in Container, Databases, Docker

How to fix Nextcloud 4047 InnoDB refuses to write tables with ROW_FORMAT=COMPRESSED or KEY_BLOCK_SIZE.

Problem:

When trying to run maintenance:repair on your Nextcloud instance, e.g. during a utf8mb4 upgrade, you see an error message like

ERROR: An exception occurred while executing a query: SQLSTATE[HY000]: General error: 4047 InnoDB refuses to write tables with ROW_FORMAT=COMPRESSED or KEY_BLOCK_SIZE.

Solution:

You need to turn off innodb-read-only-compressed. I do this by starting my MariaDB docker container with

--skip-innodb-read-only-compressed

Full command line which worked for me:

--transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --skip-innodb-read-only-compressed

Full docker-compose.yml section for the Nextcloud database:

nextcloud-db:
  image: mariadb
  command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --skip-innodb-read-only-compressed
  restart: always
  volumes:
    - ./nextcloud-db:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    - MYSQL_PASSWORD=${MYSQL_PASSWORD}


Posted by Uli Köhler in Cloud, Databases, Technologies

How to fix elasticsearch.exceptions.RequestError: RequestError(400, ‘resource_already_exists_exception’, ‘index […] already exists’) in Python

Problem:

You want to create an ElasticSearch index in Python using code like

es.indices.create("nodes") # Create an index named "nodes"

but you see the following error message:

Traceback (most recent call last):
  File "estest.py", line 22, in <module>
    es.indices.create("nodes")
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/client/utils.py", line 168, in _wrapped
    return func(*args, params=params, headers=headers, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/client/indices.py", line 123, in create
    return self.transport.perform_request(
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/transport.py", line 415, in perform_request
    raise e
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/transport.py", line 381, in perform_request
    status, headers_response, data = connection.perform_request(
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/connection/http_urllib3.py", line 277, in perform_request
    self._raise_error(response.status, raw_data)
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/connection/base.py", line 330, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
elasticsearch.exceptions.RequestError: RequestError(400, 'resource_already_exists_exception', 'index [nodes/mXAiBt0wTKK4Y31HpshVbw] already exists')

Solution:

The error message tells you that the index you are trying to create already exists!

The simplest solution is to use the code from our post on How to create ElasticSearch index if it doesn’t already exist in Python:

import elasticsearch

def es_create_index_if_not_exists(es, index):
    """Create the given ElasticSearch index and ignore error if it already exists"""
    try:
        es.indices.create(index)
    except elasticsearch.exceptions.RequestError as ex:
        if ex.error == 'resource_already_exists_exception':
            pass # Index already exists. Ignore.
        else: # Other exception - raise it
            raise ex

and use that function to create your index:

es_create_index_if_not_exists(es, "nodes") # Creates the "nodes" index; doesn't fail if it already exists


Posted by Uli Köhler in Databases, ElasticSearch, Python

How to create ElasticSearch index if it doesn’t already exist in Python

The following utility will create an index if it doesn’t already exist by ignoring any resource_already_exists_exception:

import elasticsearch

def es_create_index_if_not_exists(es, index):
    """Create the given ElasticSearch index and ignore error if it already exists"""
    try:
        es.indices.create(index)
    except elasticsearch.exceptions.RequestError as ex:
        if ex.error == 'resource_already_exists_exception':
            pass # Index already exists. Ignore.
        else: # Other exception - raise it
            raise ex

# Example usage: Create "nodes" index
es_create_index_if_not_exists(es, "nodes")


Posted by Uli Köhler in Databases, ElasticSearch, Python

What is the ElasticSearch equivalent to an SQL table?

In ElasticSearch, the concept which closely resembles an SQL table is called an index.

Compared to an SQL table, the index does not necessarily need to have a predefined structure – the ElasticSearch index is more similar to a MongoDB collection.

However, an ElasticSearch index has many features similar to SQL tables, such as indexes on fields (which you typically don’t need to create explicitly – ElasticSearch takes care of that for you).

Typically, indices contain lots of similar documents that have (mostly) the same properties.
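
For illustration, here’s a minimal Python sketch (the index name "nodes" and the documents are hypothetical examples) showing that documents in the same index may have different fields:

from elasticsearch import Elasticsearch

es = Elasticsearch()
# Two documents with partially different fields in the same index -
# no predefined schema is required
es.index(index="nodes", id=1, body={"name": "node1", "lat": 48.1})
es.index(index="nodes", id=2, body={"name": "node2", "color": "red"})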

Posted by Uli Köhler in Databases, ElasticSearch

How to fix Elasticsearch Python elasticsearch.exceptions.NotFoundError: NotFoundError(404, ‘index_not_found_exception’, ‘no such index [node]’, node, index_or_alias)

Problem:

You want to update ElasticSearch index settings in Python using code like

es.indices.put_settings(index="node", body={
    "index.mapping.total_fields.limit": 100000
})

but you see an error message like this one:

Traceback (most recent call last):
  File "estest.py", line 11, in <module>
    es.indices.put_settings(index="node", body={
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/client/utils.py", line 168, in _wrapped
    return func(*args, params=params, headers=headers, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/client/indices.py", line 786, in put_settings
    return self.transport.perform_request(
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/transport.py", line 415, in perform_request
    raise e
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/transport.py", line 381, in perform_request
    status, headers_response, data = connection.perform_request(
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/connection/http_urllib3.py", line 277, in perform_request
    self._raise_error(response.status, raw_data)
  File "/usr/local/lib/python3.8/dist-packages/elasticsearch/connection/base.py", line 330, in _raise_error
    raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
elasticsearch.exceptions.NotFoundError: NotFoundError(404, 'index_not_found_exception', 'no such index [node]', node, index_or_alias)

Solution:

The index needs to exist before you can put settings on it. First, double-check that you spelled the index name correctly! You can see the index name in the exception: no such index [node] means that the index is called node.

The direct way to create an index is

es.indices.create("node")

But note that this will fail if the index already exists. In order to work around this issue, I recommend using the code from our previous post How to create ElasticSearch index if it doesn’t already exist in Python:

import elasticsearch

def es_create_index_if_not_exists(es, index):
    """Create the given ElasticSearch index and ignore error if it already exists"""
    try:
        es.indices.create(index)
    except elasticsearch.exceptions.RequestError as ex:
        if ex.error == 'resource_already_exists_exception':
            pass # Index already exists. Ignore.
        else: # Other exception - raise it
            raise ex

and use the es_create_index_if_not_exists() function to create your index:

es_create_index_if_not_exists(es, "node") # Creates the "node" index; doesn't fail if it already exists


Posted by Uli Köhler in Databases, ElasticSearch, Python

How to set index setting using Python ElasticSearch client

You can set index settings using the official ElasticSearch Python client library as follows:

es.indices.put_settings(index="my-index", body={
    # Put your index settings here
    # Example: "index.mapping.total_fields.limit": 100000
})

Full example:

from elasticsearch import Elasticsearch

es = Elasticsearch()

es.indices.put_settings(index="ways", body={
    "index.mapping.total_fields.limit": 100000
})
Posted by Uli Köhler in Databases, ElasticSearch, Python

How to increase ElasticSearch total field limit using Python API

When using ElasticSearch, you will sometimes encounter a Limit of total fields [1000] has been exceeded error when you insert a large document.

One solution that often works for real-world scenarios is to just increase the default limit of 1000 to, for example, 100000 to account for even the largest documents.

How this can be done is, however, not well-documented for the ElasticSearch Python API. Here’s how you can do it:

from elasticsearch import Elasticsearch

es = Elasticsearch()

es.indices.put_settings(index="my-index", body={
    "index.mapping.total_fields.limit": 100000
})


Posted by Uli Köhler in Databases, ElasticSearch, Python

Faster Python Elasticsearch index() by using concurrent.futures ThreadPoolExecutor

In our previous post Elasticsearch Python minimal index() / insert example we showed how to insert a document into Elasticsearch.

When inserting a large number of documents into Elasticsearch, you will notice that it’s extremely slow to wait for each API call to finish before inserting the next document.

In this post we’ll show a simple way of running many requests in parallel, so that multiple index operations are in flight while your code is processing more documents. For this, we’ll use concurrent.futures.ThreadPoolExecutor and – after submitting all documents to the executor – use concurrent.futures.wait to wait for all requests to finish before exiting.

#!/usr/bin/env python3
from elasticsearch import Elasticsearch
from concurrent.futures import ThreadPoolExecutor
import concurrent.futures

# Thread pool with 64 workers performing index requests concurrently
index_executor = ThreadPoolExecutor(64)
futures = []

es = Elasticsearch()
for i in range(1000):
    # Submit the index operation to the pool instead of waiting for it to finish
    future = index_executor.submit(es.index, index="test-index", id=i, body={"test": 123})
    futures.append(future)

print("Waiting for requests to complete...")
concurrent.futures.wait(futures)
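
Note that an exception raised inside a worker thread is stored in its future instead of being raised immediately. If you want failed index operations to surface, call result() on each future after waiting:

# future.result() re-raises the exception stored in the future, if any
for future in futures:
    future.result()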


Posted by Uli Köhler in Databases, ElasticSearch, Python

Elasticsearch Python minimal index() / insert example

This minimal example inserts a single document into Elasticsearch running at http://localhost:9200:

#!/usr/bin/env python3
from elasticsearch import Elasticsearch

es = Elasticsearch()
es.index(index="test-index", id=1, body={"test": 123})


Posted by Uli Köhler in Databases, ElasticSearch, Python

Simple Elasticsearch setup with docker-compose

The following docker-compose.yml is a simple starting point for using ElasticSearch within a docker-based setup:

version: '2.2'
services:
    elasticsearch1:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
        container_name: elasticsearch1
        environment:
            - cluster.name=docker-cluster
            - node.name=elasticsearch1
            - cluster.initial_master_nodes=elasticsearch1
            - bootstrap.memory_lock=true
            - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
            - http.cors.enabled=true
            - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
            - http.cors.allow-credentials=true
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
        ulimits:
            memlock:
                soft: -1
                hard: -1
        volumes:
            - ./esdata1:/usr/share/elasticsearch/data
        ports:
            - 9200:9200
    dejavu:
        image: appbaseio/dejavu
        container_name: dejavu
        ports:
            - 1358:1358

Now create the esdata1 directory with the correct permissions:

sudo mkdir esdata1
sudo chown -R 1000:1000 esdata1

We also need to configure the vm.max_map_count sysctl parameter:

echo -e "\nvm.max_map_count=524288\n" | sudo tee -a /etc/sysctl.conf && sudo sysctl -w vm.max_map_count=524288


I recommend placing it in /opt/elasticsearch, but you can place it wherever you like.

If you want to autostart it on boot, see Create a systemd service for your docker-compose project in 10 seconds or just use this snippet from said post:

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

This will create a systemd service named elasticsearch (if your directory is named elasticsearch, e.g. /opt/elasticsearch) and enable and start it immediately. You can then restart it using

sudo systemctl restart elasticsearch

and view the logs using

sudo journalctl -xfu elasticsearch
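
Once the service is up, you can verify that ElasticSearch is reachable, for example with this minimal Python sketch (assuming the elasticsearch Python client library is installed):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
print(es.cluster.health())  # Prints cluster status, number of nodes, etc.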

For a more complex setup involving more than one node, see our previous post ElasticSearch docker-compose.yml and systemd service generator.

Posted by Uli Köhler in Container, Databases, Docker, ElasticSearch

Best-practice configuration for MongoDB with docker-compose

Create /var/lib/mongodb/docker-compose.yml:

version: '3.1'
services:
  mongo:
    image: mongo
    volumes:
        - ./data:/data/db
    ports:
        - 27017:27017
    command: --serviceExecutor adaptive

This will store the MongoDB data in /var/lib/mongodb/data. I prefer this variant to using docker volumes since this method keeps all MongoDB-related data in the same directory.

Then create a systemd service using

curl -fsSL https://techoverflow.net/scripts/create-docker-compose-service.sh | sudo bash /dev/stdin

See our post on how to Create a systemd service for your docker-compose project in 10 seconds for more details on this method.

You can now access MongoDB at localhost:27017. It will autostart after boot.
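
To quickly check the connection from Python, here’s a minimal sketch (assuming the pymongo library is installed):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
# The ping command raises an exception if the server is not reachable
print(client.admin.command("ping"))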

Restart by

sudo systemctl restart mongodb

Stop by

sudo systemctl stop mongodb

View logs:

sudo journalctl -xfu mongodb

View logs in less:

sudo journalctl -xu mongodb


Posted by Uli Köhler in Docker, MongoDB

How to fix ElasticSearch [1]: initial heap size […] not equal to maximum heap size […];

Problem:

Your ElasticSearch server fails to start with an error message like

ERROR: [1] bootstrap checks failed
[1]: initial heap size [536870912] not equal to maximum heap size [2147483648]; this can cause resize pauses and prevents memory locking from locking the entire heap
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log

Solution:

Set the initial heap size equal to the maximum heap size: The -Xms argument and the -Xmx argument must be equal, for example:

-Xms2048m -Xmx2048m

Typically (such as in a docker-based setup) you can set this in ES_JAVA_OPTS:

ES_JAVA_OPTS=-Xms2048m -Xmx2048m

For docker-compose based environments, this is an example configuration that works:

environment:
    - cluster.name=docker-cluster
    - node.name=elasticsearch1
    - cluster.initial_master_nodes=elasticsearch1
    - bootstrap.memory_lock=true
    - http.cors.allow-origin=http://localhost:1358,http://127.0.0.1:1358
    - http.cors.enabled=true
    - http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
    - http.cors.allow-credentials=true
    - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"

After that, restart your ElasticSearch instance.

Posted by Uli Köhler in ElasticSearch

How to run psql in Gitlab Docker image

When using the official GitLab Docker container, you can use this command to run psql:

docker exec -t -u gitlab-psql [container name] psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

In case you’re using a docker-compose based setup, use this command:

docker-compose exec -u gitlab-psql gitlab psql -h /var/opt/gitlab/postgresql/ -d gitlabhq_production

Note that gitlab in this command is the service name from docker-compose.yml.

Posted by Uli Köhler in Databases, Docker, Linux

The security risk of running docker mariadb/mysql with MYSQL_ALLOW_EMPTY_PASSWORD=yes

This is part of a common docker-compose.yml that is frequently seen on the internet:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
 [...]

Simple and secure, right? A no-root-password MariaDB instance that’s running in a separate container and does not have its port 3306 exposed – so only services from the same docker-compose.yml can reach it since docker-compose puts all those services in a separate network.

Wrong.

While the MariaDB instance is not reachable from the internet since no port is exposed, it can be reached by any process on the host via its internal IP address.

In order to comprehend what’s happening, let’s take a look at Docker’s networks. In this case, my docker-compose config is called redmine.

$ docker network ls | grep redmine
ea7ed38f469b        redmine_default           bridge              local

This is the network that docker-compose creates without any explicit network configuration. Let’s inspect the network (docker network inspect redmine_default) to show the containers:

[
    // [...]
        "Containers": {
            "2578fc65b4dab9f204d0a252e421dd4ddd9f41c35642d48350f4e59370581757": {
                "Name": "redmine_mariadb_1",
                "EndpointID": "1e6d81acc096a12fc740173f4e107090333c42e8a86680ac5c9886c148d578e7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "7867f71d2a36265c34c133b70aea487b90ea68fcf30ecb42d6e7e9a376cf8e07": {
                "Name": "redmine_redmine_1",
                "EndpointID": "f5ac7b3325aa9bde12f0c625c4881f9a6fc9957da4965767563ec9a3b76c19c3",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
    // [...]
]

We can see that the IP address of the redmine_mariadb_1 container is 172.18.0.2.

Using the internal IP 172.18.0.2, you can access the MySQL server.

Any process on the host (even from unprivileged users) can connect to the container without any password, e.g.

$ mysqldump -uroot -h172.18.0.2 --all-databases
# This will dump the entire MariaDB database
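
The same is possible from any programming language. For example, here’s a minimal Python sketch (assuming the pymysql library is installed and using the container IP from above):

import pymysql

# Connect as root to the container IP from above - no password required!
connection = pymysql.connect(host="172.18.0.2", user="root")
with connection.cursor() as cursor:
    cursor.execute("SHOW DATABASES")
    print(cursor.fetchall())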

How to mitigate this security risk?

Mitigation is quite easy since we only need to set a root password for the MariaDB instance.

My recommended best practice is to avoid duplicate passwords. In order to do this, create a .env file in the directory where docker-compose.yml is located.

MARIADB_ROOT_PASSWORD=aiPaipei6ookaemue4voo0NooC0AeH

Remember to replace the password with a random password, or use this shell command to generate it automatically:

echo MARIADB_ROOT_PASSWORD=$(pwgen 30) > .env

Now we can use ${MARIADB_ROOT_PASSWORD} in docker-compose.yml wherever the MariaDB root password is required, for example:

version: '3'
services:
  mariadb:
    image: 'mariadb:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
      - MYSQL_DATABASE=redmine
    volumes:
      - './mariadb_data:/var/lib/mysql'
  redmine:
    image: 'redmine:latest'
    environment:
      - REDMINE_USERNAME=admin
      - REDMINE_PASSWORD=redmineadmin
      - [email protected]
      - REDMINE_DB_MYSQL=mariadb
      - REDMINE_DB_USERNAME=root
      - REDMINE_DB_PASSWORD=${MARIADB_ROOT_PASSWORD}
    ports:
      - '3718:3000'
    volumes:
      - './redmine_data/conf:/usr/src/redmine/conf'
      - './redmine_data/files:/usr/src/redmine/files'
      - './redmine_themes:/usr/src/redmine/public/themes'
    depends_on:
      - mariadb

Note that the mariadb docker image will not change the root password if the database directory already exists (mariadb_data in this example).

My recommended best practice for changing the root password is to use mysqldump --all-databases to export the entire database to a SQL file, then back up and delete the data directory, then re-start the container so that the new root password will be set. After that, re-import the dump from the SQL file.

Posted by Uli Köhler in Databases, Docker, Linux

How to fix ElasticSearch docker AccessDeniedException[/usr/share/elasticsearch/data/nodes]

Problem:

You are trying to start a dockerized ElasticSearch instance but you see an error log like

elasticsearch1    | {"type": "server", "timestamp": "2020-04-18T01:17:27,564Z", "level": "ERROR", "component": "o.e.b.ElasticsearchUncaughtExceptionHandler", "cluster.name": "docker-cluster", "node.name": "elasticsearch1", "message": "uncaught exception in thread [main]", 
elasticsearch1    | "stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];",
elasticsearch1    | "at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:174) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "Caused by: org.elasticsearch.ElasticsearchException: failed to bind service",
elasticsearch1    | "at org.elasticsearch.node.Node.<init>(Node.java:615) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.node.Node.<init>(Node.java:257) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "... 6 more",
elasticsearch1    | "Caused by: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes",
elasticsearch1    | "at sun.nio.fs.UnixException.translateToIOException(UnixException.java:90) ~[?:?]",
elasticsearch1    | "at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) ~[?:?]",
elasticsearch1    | "at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116) ~[?:?]",
elasticsearch1    | "at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:389) ~[?:?]",
elasticsearch1    | "at java.nio.file.Files.createDirectory(Files.java:693) ~[?:?]",
elasticsearch1    | "at java.nio.file.Files.createAndCheckIsDirectory(Files.java:800) ~[?:?]",
elasticsearch1    | "at java.nio.file.Files.createDirectories(Files.java:786) ~[?:?]",
elasticsearch1    | uncaught exception in thread [main]
elasticsearch1    | "at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:274) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:211) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:271) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.node.Node.<init>(Node.java:277) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.node.Node.<init>(Node.java:257) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.6.2.jar:7.6.2]",
elasticsearch1    | "... 6 more"] }
elasticsearch1    | ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/usr/share/elasticsearch/data/nodes];
elasticsearch1    | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes
elasticsearch1    |     at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
elasticsearch1    |     at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
elasticsearch1    |     at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
elasticsearch1    |     at java.base/sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:389)
elasticsearch1    |     at java.base/java.nio.file.Files.createDirectory(Files.java:693)
elasticsearch1    |     at java.base/java.nio.file.Files.createAndCheckIsDirectory(Files.java:800)
elasticsearch1    |     at java.base/java.nio.file.Files.createDirectories(Files.java:786)
elasticsearch1    |     at org.elasticsearch.env.NodeEnvironment.lambda$new$0(NodeEnvironment.java:274)
elasticsearch1    |     at org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:211)
elasticsearch1    |     at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:271)
elasticsearch1    |     at org.elasticsearch.node.Node.<init>(Node.java:277)
elasticsearch1    |     at org.elasticsearch.node.Node.<init>(Node.java:257)
elasticsearch1    |     at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:221)
elasticsearch1    |     at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:221)
elasticsearch1    |     at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349)
elasticsearch1    |     at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170)
elasticsearch1    |     at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161)
elasticsearch1    |     at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
elasticsearch1    |     at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125)
elasticsearch1    |     at org.elasticsearch.cli.Command.main(Command.java:90)
elasticsearch1    |     at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126)
elasticsearch1    |     at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
elasticsearch1    | For complete error details, refer to the log at /usr/share/elasticsearch/logs/docker-cluster.log

Solution:

Fix the permissions of the host directory mapped to /usr/share/elasticsearch/data. On my instance that directory is /var/lib/elasticsearch/esdata1.

Run

sudo chown -R 1000:1000 [directory]

e.g.

sudo chown -R 1000:1000 /var/lib/elasticsearch/esdata1


Posted by Uli Köhler in ElasticSearch

How to repair docker-compose MariaDB instances (aria_chk -r)

Problem:

You are trying to run a MariaDB container using docker-compose. However, the database container doesn’t start up and you see error messages like these in the logs:

[ERROR] mysqld: Aria recovery failed. Please run aria_chk -r on all Aria tables and delete all aria_log.######## files
[ERROR] Plugin 'Aria' init function returned error.
[ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
....
[ERROR] Could not open mysql.plugin table. Some plugins may be not loaded
[ERROR] Failed to initialize plugins.
[ERROR] Aborting

Solution:

The log messages already tell you what to do – but they don’t tell you how to do it:

Aria recovery failed. Please run aria_chk -r on all Aria tables and delete all aria_log.######## files

First, back up the entire MariaDB data directory: Check which host directory the data directory (/var/lib/mysql) of the container is mapped to and copy the entire directory to a backup space. This is important in case the repair process fails.

Now let’s run aria_chk -r to check and repair the Aria table files:

docker-compose run my-db bash -c 'aria_chk -r /var/lib/mysql/**/*'

Replace my-db with the name of your database service. This will attempt to repair a lot of non-table files as well, but aria_chk will happily ignore those.

Now we can delete the log files:

docker-compose run my-db bash -c 'rm /var/lib/mysql/aria_log.*'

Again, replace my-db with the name of your database service.

Posted by Uli Köhler in Databases, Docker

RocksDB minimal example in C++

This minimal example shows how to open a RocksDB database, write a key, and read it back.

#include <cassert>
#include <string>
#include <rocksdb/db.h>

using namespace std;

int main(int argc, char** argv) {
    rocksdb::DB* db;
    rocksdb::Options options;
    options.create_if_missing = true;
    rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/testdb", &db);
    assert(status.ok());

    // Insert value
    status = db->Put(rocksdb::WriteOptions(), "Test key", "Test value");
    assert(status.ok());

    // Read back value
    std::string value;
    status = db->Get(rocksdb::ReadOptions(), "Test key", &value);
    assert(status.ok());
    assert(!status.IsNotFound());

    // Read key which does not exist
    status = db->Get(rocksdb::ReadOptions(), "This key does not exist", &value);
    assert(status.IsNotFound());

    // Close the database before exiting
    delete db;
}

Build using this CMakeLists.txt:

cmake_minimum_required(VERSION 3.5)
project(rocksdb-example)

add_executable(rocksdb-example rocksdb-example.cpp)
target_link_libraries(rocksdb-example rocksdb dl)

Compile and run using:

cmake .
make
./rocksdb-example


Posted by Uli Köhler in C/C++, Databases

How to list MongoDB databases on command line

Use this command to list the MongoDB databases on the command line:

echo 'show dbs' | mongo

Example output:

MongoDB shell version v4.0.13
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("5c1e505e-9b05-4270-ab20-c537c0760481") }
MongoDB server version: 4.0.13
admin       0.000GB
config      0.000GB
drawing     0.001GB
order       0.000GB
production  0.001GB
standards   0.001GB
user        0.000GB
bye
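
If you prefer Python, here’s a minimal equivalent sketch (assuming the pymongo library is installed):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
print(client.list_database_names())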


Posted by Uli Köhler in MongoDB