
How to fix ‘elasticsearch exited with code 78’

Problem:

You want to run Elasticsearch using Docker, but the container immediately stops again with this error message:

elasticsearch exited with code 78

or

elasticsearch2 exited with code 78

Solution:

If you look through the entire log output, you’ll find lines like

elasticsearch     | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Therefore we need to increase the vm.max_map_count limit:

sudo sysctl -w vm.max_map_count=524288

Now we need to edit /etc/sysctl.conf so the setting will also be in effect after a reboot.

Look for any vm.max_map_count line in /etc/sysctl.conf. If you find one, set its value to 524288. If there is no such line present, add the line

vm.max_map_count=524288

to the end of /etc/sysctl.conf.
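If you prefer the command line, you can append the line and reload the configuration like this (a sketch, assuming no vm.max_map_count line exists yet):

echo "vm.max_map_count=524288" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p # reload /etc/sysctl.conf
sysctl vm.max_map_count # verify the new value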

Original source: GitHub


Posted by Uli Köhler in Container, Databases, Docker, Linux

MongoDB: How to run db.adminCommand() in NodeJS

Problem:

You want to run a db.adminCommand() in NodeJS using the node-mongodb-native client, e.g. you want to run the NodeJS equivalent of

db.adminCommand({setParameter: 1, internalQueryExecMaxBlockingSortBytes: 100151432});

Solution:

Use db.executeDbAdminCommand(), where db is a MongoDB database object.

db.executeDbAdminCommand({setParameter: 1, internalQueryExecMaxBlockingSortBytes: 100151432});

Full example:

// To install, use npm i --save mongodb
const MongoClient = require('mongodb').MongoClient;

async function configureMongoDB() {
    // Connect to MongoDB
    const conn = await MongoClient.connect('mongodb://localhost:27017/', { useNewUrlParser: true });
    const db = await conn.db('mydb');
    // Configure MongoDB settings
    await db.executeDbAdminCommand({
        setParameter: 1,
        internalQueryExecMaxBlockingSortBytes: 100151432
    });
    // Cleanup
    return conn.close();
}

// Run configureMongoDB()
configureMongoDB().then(() => {}).catch(console.error)


Posted by Uli Köhler in Databases, NodeJS

How to fix NodeJS MongoDB ‘Cannot read property ‘high_’ of null’

When encountering an error message like

TypeError: Cannot read property 'high_' of null
    at Long.equals (/home/uli/dev/NMUN/node_modules/bson/lib/bson/long.js:236:31)
    at nextFunction (/home/uli/dev/NMUN/node_modules/mongodb-core/lib/cursor.js:473:16)
    at Cursor.next (/home/uli/dev/NMUN/node_modules/mongodb-core/lib/cursor.js:763:3)
    at Cursor._next (/home/uli/dev/NMUN/node_modules/mongodb/lib/cursor.js:211:36)
    at nextObject (/home/uli/dev/NMUN/node_modules/mongodb/lib/operations/cursor_ops.js:192:10)
    at hasNext (/home/uli/dev/NMUN/node_modules/mongodb/lib/operations/cursor_ops.js:135:3)
    (...)

you likely have code like this:

const cursor = db.getCollection('mycollection').find({})
while (cursor.hasNext()) {
    const doc = cursor.next();
    // ... handle doc ...
}

The solution is quite simple: since cursor.hasNext() and cursor.next() both return Promises, you can’t use their results directly.

This example shows you how to do it properly using async/await:

const cursor = await db.getCollection('mycollection').find({})
while (await cursor.hasNext()) {
    const doc = await cursor.next();
    // ... handle doc ...
}

Remember that the function containing this code needs to be an async function. See the Mozilla documentation on async functions or search for a JavaScript async tutorial to learn the details!
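For reference, a minimal sketch of such a wrapper (handleDocuments is a hypothetical name; db.collection() is the node driver’s equivalent of getCollection()):

// Sketch: the cursor loop must live inside an async function
async function handleDocuments(db) {
    const cursor = db.collection('mycollection').find({});
    while (await cursor.hasNext()) {
        const doc = await cursor.next();
        // ... handle doc ...
    }
}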

Posted by Uli Köhler in Databases, Javascript

How to install MongoDB CE on Ubuntu in 1 minute

Quick install using

wget -qO- https://techoverflow.net/scripts/install-mongodb.sh | bash

Run these shell commands on your Ubuntu computer to install the current MongoDB community edition and automatically start it (both immediately and on every boot):

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 4B7C549A058F8B6B
echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
sudo apt-get update
sudo apt-get install -y mongodb-org
sudo systemctl enable mongod
sudo systemctl start mongod
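To verify that MongoDB is running, you can check the service status and run a quick test command (a sketch; connectionStatus reports the state of the current connection):

sudo systemctl status mongod
mongo --eval 'db.runCommand({ connectionStatus: 1 })'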

Source: Official MongoDB documentation

Posted by Uli Köhler in Databases, Linux

MongoDB equivalent of SQL ‘SELECT * FROM …’

Whereas in SQL you might run

SELECT * FROM mytable;

to view all entries in a table, you can run

db.getCollection("mytable").find({})

in MongoDB. Also see the official SQL to MongoDB mapping chart.

In order to run that (assuming the default configuration of mongod), you can run

mongo [name of database]

in your preferred shell.
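For example, assuming the default mongod configuration and a database named mydb (both placeholder names):

$ mongo mydb
> db.getCollection("mytable").find({})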

Posted by Uli Köhler in Databases

How to download a file or directory from a LXC container

To download files, use

lxc file pull <container name>/<path>/<filename> <target directory>

To download directories, use

lxc file pull --recursive <container name>/<path>/<directory> <target directory>

Examples:

Download /root/myfile.txt from mycontainer to the current directory (.):

lxc file pull mycontainer/root/myfile.txt .

Download /root/mydirectory from mycontainer to the current directory (.):

lxc file pull -r mycontainer/root/mydirectory .


Posted by Uli Köhler in Container, Linux, LXC, Virtualization

Puppeteer: Get text content / inner HTML of an element

Problem:

You want to use puppeteer to automate testing a webpage. You need to get either the text or the inner HTML of some element, e.g. of

<div id="mydiv">
</div>

on the page.

Solution:

// Get inner text
const innerText = await page.evaluate(() => document.querySelector('#mydiv').innerText);

// Get inner HTML
const innerHTML = await page.evaluate(() => document.querySelector('#mydiv').innerHTML);

Note that .innerText includes the text of sub-elements. You can use the complete DOM API inside page.evaluate(...). You can use any CSS selector as an argument for document.querySelector(...).
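For context, here is a minimal self-contained sketch showing where the page object comes from (the URL is a placeholder):

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com'); // placeholder URL
    // Get the inner text of #mydiv (assumes the element exists on the page)
    const innerText = await page.evaluate(() => document.querySelector('#mydiv').innerText);
    console.log(innerText);
    await browser.close();
})();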

Posted by Uli Köhler in Javascript, Puppeteer

How to fix ModuleNotFoundError: No module named ‘google.cloud.iam’

Problem:

You want to run a Python script that uses one of the Google Cloud Python APIs but you get this error message:

ModuleNotFoundError: No module named 'google.cloud.iam'

Solution:

Reinstall any Google Cloud package using pip:

sudo pip install --upgrade google-cloud-storage

or

sudo pip3 install --upgrade google-cloud-storage

That will also reinstall the relevant google.cloud.iam module.

After that, re-run your script. If that didn’t work, try running pip install --upgrade for some other google-cloud-* package, especially the packages you actually use in your script.


Posted by Uli Köhler in Cloud, Python

How to set cv2.VideoCapture() image size in Python

Use cv2.CAP_PROP_FRAME_WIDTH and cv2.CAP_PROP_FRAME_HEIGHT in order to tell OpenCV which image size you would like.

import cv2

video_capture = cv2.VideoCapture(0)
# Check success
if not video_capture.isOpened():
    raise Exception("Could not open video device")
# Set properties. Each call returns True on success (i.e. the resolution is supported)
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 160)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 120)
# Read picture. ret is True on success
ret, frame = video_capture.read()
# Close device
video_capture.release()
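To check whether the driver actually accepted the requested resolution, you can read the properties back before releasing the device (a sketch using the same cv2 API):

# Read back the resolution actually reported by the driver
actual_width = video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)
actual_height = video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)
print("Resolution: {}x{}".format(actual_width, actual_height))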

Note that most video capture devices (like webcams) only support specific sets of widths & heights. Use uvcdynctrl -f to find out which resolutions are supported:

$ uvcdynctrl -f
Listing available frame formats for device video0:
Pixel format: YUYV (YUYV 4:2:2; MIME type: video/x-raw-yuv)
  Frame size: 640x480
    Frame rates: 30, 20, 10
  Frame size: 352x288
    Frame rates: 30, 20, 10
  Frame size: 320x240
    Frame rates: 30, 20, 10
  Frame size: 176x144
    Frame rates: 30, 20, 10
  Frame size: 160x120
    Frame rates: 30, 20, 10
Posted by Uli Köhler in OpenCV, Python, Video

How to take a webcam picture using OpenCV in Python

This code opens /dev/video0 and takes a single picture, closing the device afterwards:

import cv2

video_capture = cv2.VideoCapture(0)
# Check success
if not video_capture.isOpened():
    raise Exception("Could not open video device")
# Read picture. ret is True on success
ret, frame = video_capture.read()
# Close device
video_capture.release()

You can also use cv2.VideoCapture("/dev/video0"), but this approach is platform-dependent. cv2.VideoCapture(0) will also open the first video device on non-Linux platforms.

In Jupyter you can display the picture using

from matplotlib import pyplot as plt

frameRGB = frame[:,:,::-1] # Convert BGR (OpenCV) to RGB (matplotlib)
plt.imshow(frameRGB)


Posted by Uli Köhler in OpenCV, Python, Video

Launching Debian containers using LXC on Ubuntu

Problem:

You know you can launch an Ubuntu LXC container using

lxc launch ubuntu:18.04 myvm

Now you want to launch a Debian container using

lxc launch debian:jessie myvm

but you only get this error message:

Error: The remote "debian" doesn't exist

Solution:

The Debian images are (by default) available from the images remote, not from a remote named debian, so you need to use this:

lxc launch images:debian/jessie myvm
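To see which Debian images are available on the images remote, you can list them with a filter:

lxc image list images: debian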


Posted by Uli Köhler in Container, Linux, LXC, Virtualization

Routing public IPv6 addresses to your lxc/lxd containers

The enormous number of IPv6 addresses available to most commercially hosted VPS / root servers with a public IPv6 prefix allows you to route a public IPv6 address to every container that is running on your server. This tutorial shows you how to do that, even if you have no prior experience with routing.

Step 0: Create your LXC container

We assume you have already done this – just for reference, here’s how you can create a container:

lxc launch ubuntu:18.04 my-container

Step 1: Which IP address do you want to assign to your container?

First you need to find out which prefix is routed to your host. Usually you can do that by checking in your provider’s control panel. You’re looking for something like 2a01:4f9:c010:278::1/64. Another option is to run sudo ifconfig and look for an inet6 line in the section of your primary network interface (this only works if you have configured your server to have an IPv6 address). Note that addresses starting with fe80:: and addresses starting with fd, among others, are not public IPv6 addresses.

Then you can pick a new IPv6 address for your container. Which one you choose – as long as it’s within the prefix – is entirely up to you.

Often, <prefix>::1 is used for the host itself, therefore you could, for example, choose <prefix>::2. Note that some providers use some IP addresses for other purposes. Check your provider’s documentation for details.

If you don’t want to make it easy to find your container’s public IPv6, don’t choose <prefix>::1, <prefix>::2, <prefix>::3 etc., but something more random like <prefix>:af15:99b1:0b05:1, for example 2a01:4f9:c010:278:af15:99b1:0b05:0001. Ensure your IPv6 address has 8 groups of 4 hex digits each!

For this example, we choose the IPv6 address 2a01:4f9:c010:278::8.

Step 2: Find out the ULA of your container

We need to find the ULA (unique local address – similar to a private IPv4 address which is not routed on the internet) of the container. Using lxc, this is quite easy:

uli@myserver:~$ lxc list
+--------------+---------+-----------------------+-----------------------------------------------+
|     NAME     |  STATE  |         IPV4          |                     IPV6                      |
+--------------+---------+-----------------------+-----------------------------------------------+
| my-container | RUNNING | 10.144.118.232 (eth0) | fd42:830b:36dc:3691:216:3eff:fed1:9058 (eth0) |
+--------------+---------+-----------------------+-----------------------------------------------+

You need to look in the IPv6 column and copy the address listed there. In this example, the address is fd42:830b:36dc:3691:216:3eff:fed1:9058.

Step 3: Setup IPv6 routing

Now we can tell the host Linux to route your chosen public IPv6 to the container’s private IPv6. This is quite easy:

sudo ip6tables -t nat -A PREROUTING -d <public IPv6> -j DNAT --to-destination <container private IPv6>

In our example, this would be

sudo ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058

First, test the command by running it in a shell. If it works (i.e. it doesn’t print any error message), you can store it permanently, e.g. by adding it to /etc/rc.local (after #!/bin/bash, before exit 0). Advanced users may prefer to add it to /etc/network/interfaces.
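For example, a minimal /etc/rc.local could look like this (a sketch using the addresses from this example):

#!/bin/bash
# Route the public IPv6 address to the container's ULA on every boot
ip6tables -t nat -A PREROUTING -d 2a01:4f9:c010:278::8 -j DNAT --to-destination fd42:830b:36dc:3691:216:3eff:fed1:9058
exit 0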

Step 4: Connect to your container using SSH on your public IPv6 (optional)

Note: This step requires that you have working IPv6 connectivity at your local computer. If you are unsure, check at ipv6-test.com

First, open a shell on your container:

lxc exec my-container bash

After running this, you should see a root shell prompt inside your container:

root@my-container:~#

The following commands should be entered in the container shell, not on the host!

Now we can create a user to login to (in this example, we create the uli user):

root@my-container:~# adduser uli
Adding user `uli' ...
Adding new group `uli' (1001) ...
Adding new user `uli' (1001) with group `uli' ...
Creating home directory `/home/uli' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for uli
Enter the new value, or press ENTER for the default
        Full Name []: 
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 
Is the information correct? [Y/n]

You only need to enter a password twice (you won’t see anything on screen while typing it); for all other prompts you can just press Enter.

The ubuntu:18.04 lxc image used in this example does not allow SSH password authentication in its default configuration. In order to fix this, change PasswordAuthentication no to PasswordAuthentication yes in /etc/ssh/sshd_config and restart the SSH server by running service sshd restart. Be sure you understand the security implications before you do that!

Now, log out of your container shell by pressing Ctrl+D. The following commands can be entered on your desktop or any other machine with IPv6 connectivity.

Now log in to your container:

ssh <username>@<public IPv6 address>

in this example:

ssh uli@2a01:4f9:c010:278::8

If you configured everything correctly, you’ll see the shell prompt for your container:

uli@my-container:~$

Note: Don’t forget to configure a firewall for your container, e.g. ufw! Your container’s IPv6 is exposed to the internet, and just assuming no one will guess it is not good security practice.

Posted by Uli Köhler in Cloud, Container, Linux, LXC, Networking

How to fix Puppeteer error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory

Problem:

You are trying to run Puppeteer on Ubuntu, but when it starts to run Chrome, you encounter the following issue:

/home/user/erp/node_modules/puppeteer/.local-chromium/linux-555668/chrome-linux/chrome: error while loading shared libraries: libX11-xcb.so.1: cannot open shared object file: No such file or directory

Solution:

Install the missing packages using

sudo apt install -y gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget

Credits to @coldner on the Puppeteer issue tracker for assembling the required packages.

If you encounter E: Unable to locate package errors, run sudo apt-get update.

Background information

If you want to know more about why this issue occurs, continue reading here.

Puppeteer is essentially a minimal headless (see What is a headless program or application?) Chromium instance with an additional API for controlling and monitoring it from NodeJS.

Even though Puppeteer does not actually display a GUI, the Chromium instance it uses still requires some of the libraries needed to draw a GUI and to connect to an X11 server, even though these features are never used. One of these libraries is libX11-xcb, which provides the shared library libX11-xcb.so.1 and is packaged as libx11-xcb1 on most Debian-based systems.

However, as it is so often the case with missing shared libraries, once you install the one that is missing, there will be at least one other library missing after that. That’s why we need to install the large number of libraries listed above.

Posted by Uli Köhler in Linux, Puppeteer

How to make mongodump export JSON instead of BSON

mongodump exports your MongoDB database in BSON format; if you want JSON instead, you need to use the mongoexport tool.

Examples:

mongoexport -d mydatabase -c mycollection # Prints to stdout
mongoexport -d mydatabase -c mycollection -o mycollections.json # Write to mycollections.json
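By default, mongoexport writes one JSON document per line. If you need a single well-formed JSON array instead, use the --jsonArray flag:

mongoexport -d mydatabase -c mycollection --jsonArray -o mycollections.json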

Note that some datatypes that can be stored in MongoDB can only be represented in BSON – so in some cases, information may be lost by exporting to JSON.

Posted by Uli Köhler in Databases

How to circumvent the Google Cloud Datastore 1000 read / 400 write limit in Python

Google Cloud Datastore has a built-in limit of 1000 keys per get request and 500 entities per put request. If you hit one of these limits, you will see one of these error messages:

google.api_core.exceptions.InvalidArgument: 400 cannot get more than 1000 keys in a single call
google.api_core.exceptions.InvalidArgument: 400 cannot write more than 500 entities in a single call

You can fix this by chunking the requests, i.e. requesting at most 1000 keys in a single get call and writing at most 500 entities in a single put call.

This code provides a ready-to-use example of a class that automates this process. As an added benefit, it performs the requests in chunks of 1000 (for get) or 400 (for put, staying safely below the 500 limit) in parallel using a concurrent.futures.Executor. As the workload is expected to be IO-bound, it is recommended to use a concurrent.futures.ThreadPoolExecutor. If you don’t give the class an executor on construction, it will create one by itself.

import itertools
from concurrent.futures import ThreadPoolExecutor

def _chunks(l, n=1000):
    """
    Yield successive n-sized chunks from l.
    https://stackoverflow.com/a/312464/2597135
    """
    for i in range(0, len(l), n):
        yield l[i:i + n]

def _get_chunk(client, keys):
    """
    Get a single chunk
    """
    missing = []
    vals = client.get_multi(keys, missing=missing)
    return vals, missing

class DatastoreChunkClient(object):
    """
    Provides a thin wrapper around a Google Cloud Datastore client, providing means
    of reading and writing entities in chunks.
    """
    def __init__(self, client, executor=None):
        self.client = client
        if executor is None:
            executor = ThreadPoolExecutor(16)
        self.executor = executor
    
    def get_multi(self, keys):
        """
        Thin wrapper around client.get_multi() that circumvents
        the 1000 read requests limit by doing 1000-sized chunked reads
        in parallel using self.executor.

        Returns (values, missing).
        """
        all_missing = []
        all_vals = []
        for vals, missing in self.executor.map(lambda chunk: _get_chunk(self.client, chunk), _chunks(keys, 1000)):
            all_vals += vals
            all_missing += missing
        return all_vals, all_missing

    def put_multi(self, entities):
        """
        Thin wrapper around client.put_multi() that circumvents
        the 500 entities per call limit by doing 400-sized chunked writes
        in parallel using self.executor.

        Returns nothing.
        """
        for _ in self.executor.map(lambda chunk: self.client.put_multi(chunk), _chunks(entities, 400)):
            pass

Usage example:

# Create "raw" google datastore client
client = datastore.Client(project="myproject-123456")
chunkClient = DatastoreChunkClient(client)

# The size of the key list is only limited by memory
keys = [...]
values, missing = chunkClient.get_multi(keys)

# The size of the entity list is only limited by memory
entities = [...]
chunkClient.put_multi(entities)


Posted by Uli Köhler in Cloud, Python

Saving an entity in Google Cloud Datastore using Python: A minimal example

Here’s a minimal example for inserting an entity in the Google Cloud Datastore object database using the Python API:

#!/usr/bin/env python3
from google.cloud import datastore
# Create & store an entity
client = datastore.Client(project="myproject-12345")
entity = datastore.Entity(key=client.key('MyEntityKind', 'MyTestID'))
entity.update({
    'foo': u'bar',
    'baz': 1337,
    'qux': False,
})
# Actually save the entity
client.put(entity)

This assumes the project with ID myproject-12345 exists and has the Datastore API enabled; the entity kind MyEntityKind is created implicitly on first insert if it does not exist yet.

Posted by Uli Köhler in Cloud, Python

How to fix Google Cloud Datastore ValueError: A Key must have a project set.

Problem:

You are trying to connect to the Google Cloud Datastore object database:

#!/usr/bin/env python3
from google.cloud import datastore
# Create, populate and persist an entity
entity = datastore.Entity(key=datastore.Key('MyEntityKind')) # Line of error
# ...

but when running that code, you get this error message:

Traceback (most recent call last):
  File "./IndexIntoDB.py", line 4, in <module>
    entity = datastore.Entity(key=datastore.Key('MyEntityKind'))
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/datastore/key.py", line 109, in __init__
    self._project = _validate_project(project, parent)
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/datastore/key.py", line 512, in _validate_project
    raise ValueError("A Key must have a project set.")
ValueError: A Key must have a project set.

Solution:

Note: While the solution below fixes the error message listed above, you might be more interested in having a look at this minimal entity insertion example.

As the error message indicates, you need to add a project name. If you don’t know the project name, go to the Google Cloud Console, select the right project at the top and then look at the URL:

https://console.cloud.google.com/datastore/welcome?project=perceptive-tape-12345

In this example, the project ID (which you have to use in the Python code) is perceptive-tape-12345.
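A minimal sketch of the fixed code, assuming that project ID (alternatively, create the key via a client as shown in the minimal insertion example above):

#!/usr/bin/env python3
from google.cloud import datastore
# Pass the project ID explicitly when constructing the key
key = datastore.Key('MyEntityKind', project='perceptive-tape-12345')
entity = datastore.Entity(key=key)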

See also the Keys section of the google-cloud-datastore python documentation.

Posted by Uli Köhler in Cloud, Python

How to fix lxd ‘Failed container creation: No storage pool found. Please create a new storage pool.’

Problem:

You want to launch some lxd container using lxc launch […] but instead you get the following error message:

Failed container creation: No storage pool found. Please create a new storage pool.

Solution:

You need to initialize lxd before using it:

lxd init

When it asks you about the backend

Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:

choosing the default option (btrfs) means that you’ll have to use a dedicated block device (or a dedicated preallocated file image) for storage. While this is more efficient if you run many containers at a time, I recommend choosing the dir backend for the default storage pool, because that option is the easiest to configure and will not occupy as much space on your hard drive.
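If you prefer a non-interactive setup, recent lxd versions also support configuring the dir backend directly (a sketch; check lxd init --help for the flags your version supports):

lxd init --auto --storage-backend dir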

See Storage management in lxd for more details, including different options for storage pools in case you need a more advanced setup.

Posted by Uli Köhler in Linux, LXC, Virtualization

Which version of CuDNN should you install for TensorFlow GPU on Ubuntu?

Problem:

You’ve followed my previous blogpost

Fixing TensorFlow libcublas.so.8.0: cannot open shared object file on Ubuntu

on how to install CuBLAS etc. in order to get TensorFlow working.

Now you are getting an error message similar to this:

ImportError: libcudnn.so.6: cannot open shared object file: No such file or directory

You are wondering how you can install CuDNN, as it’s not available from your distribution’s standard package repositories.

Solution:

In order to install CuDNN, first go to the NVIDIA CuDNN page. At the time of writing this, downloading CuDNN is only possible if you have an NVIDIA account, so you need to register (click on Join) if you don’t have one, or Login if you already have one.

On the CuDNN download page you have several versions of CuDNN to choose from. Don’t just download the newest one as TensorFlow requires a specific one.

Look at your error message: it tells you that TensorFlow is missing libcudnn.so.6 – can you see the 6 in that string? That means you need CuDNN 6.x (TensorFlow 1.5.0, at the time of writing this, always requires CuDNN 6.x). Although you can install other CuDNN versions (e.g. 7.x) in parallel with 6.x, TensorFlow will only load the exact version it was built against.

Once you have selected the correct version, you need to select a package type.

The first important choice is whether you want a developer package or just the runtime package. You don’t need the developer package to run TensorFlow, even if you are developing applications using TensorFlow! Just select the runtime package.

Regarding the type of package: if you are on Linux, you of course need to select a Linux package. If you use Ubuntu 16.04+, the easiest option is to select cuDNN v6.0 Runtime Library for Ubuntu16.04 (Deb) – even though the name suggests it supports only 16.04, this package worked flawlessly for me on Ubuntu 17.04 and 17.10 as well.

I recommend downloading the Ubuntu 16.04 DEB package option unless you have a specific reason not to.
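After downloading, you can install the DEB package using dpkg (a sketch; the exact filename depends on the CuDNN version you downloaded):

sudo dpkg -i libcudnn6_*.deb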

Posted by Uli Köhler in GPU, Machine learning

Exporting campaign contact CSVs from VTiger using SQL

In Exporting campaign account CSVs from VTiger using SQL we investigated how to use direct SQL database access to export names and emails for organizations for a given campaign.

This post shows how to do the same thing for contacts instead of organizations. Please refer to the previous post for a detailed explanation.

SELECT CONCAT(cont.firstname, " ", cont.lastname) AS "Name", cont.email AS "EMail"
FROM vtiger_campaigncontrel AS ccr
JOIN vtiger_contactdetails cont ON (cont.contactid = ccr.contactid)
WHERE `campaignid` = (
    SELECT postvalue FROM `vtiger_modtracker_detail`
    WHERE fieldname = "record_id" AND `id` = (
        SELECT `id` FROM `vtiger_modtracker_detail`
        WHERE postvalue = (
            SELECT campaign_no FROM vtiger_campaign WHERE `campaignname` = "MyCampaign"
        ) AND fieldname = "campaign_no"
    )
) AND cont.email <> ''

Besides joining on vtiger_contactdetails and vtiger_campaigncontrel instead of their account-related equivalents, the only major difference is in assembling the name: for contacts, first name and last name are stored separately. Therefore we use CONCAT(cont.firstname, " ", cont.lastname) to get a single name string out of that information.

As shown in the previous post, you can use phpmyadmin to export the result of the query to CSV or to Excel directly. While this might seem like a tedious process, it is much less tedious than copy-and-pasting hundreds of contacts manually.

Posted by Uli Köhler in Databases