How to translate using your custom AutoML model in Python

If you’ve successfully trained your first custom AutoML neural translation model, the next step is to integrate it into your application.

Here’s a Python 3 utility class that allows you to easily translate using your custom model:

from google.cloud import automl_v1beta1

class GNTMAutoMLTranslationDriver(object):
    """
    Custom AutoML model translator.

    Usage example (be sure to use your own model here!):

    >>> translator = GNTMAutoMLTranslationDriver('myproject-101472', 'TRL455090968000816104449')
    >>> translator.translate("This is a translation test")
    """
    def __init__(self, project_id, model_id):
        self.client = automl_v1beta1.PredictionServiceClient()
        self._name = 'projects/{}/locations/us-central1/models/{}'.format(project_id, model_id)
    
    def translate(self, text):
        payload = {'text_snippet': {'content': text}}
        params = {}
        response = self.client.predict(self._name, payload, params)
        # Extract the translated string from the protobuf response
        return response.payload[0].translation.translated_content.content

See the class documentation for a usage example. Most of the code is also present in the official AutoML example, but I had to figure out some parts myself, e.g. how to extract the translated string from the protobuf response (response.payload[0].translation.translated_content.content).

Also note that AutoML is currently in Beta and therefore the API might change without prior notice.

How to circumvent the Google Cloud Datastore 1000 read / 500 write limit in Python

Google Cloud Datastore has a built-in limit of 1000 keys per get request and 500 entities per put request. If you exceed these limits, you will see one of the following error messages:

google.api_core.exceptions.InvalidArgument: 400 cannot get more than 1000 keys in a single call
google.api_core.exceptions.InvalidArgument: 400 cannot write more than 500 entities in a single call

You can fix this by chunking the requests, i.e. fetching at most 1000 keys per get call and writing at most 500 entities per put call.

The following code provides a ready-to-use class that automates this chunking. As an added benefit, it performs the chunked requests in parallel using a concurrent.futures.Executor, in chunks of 1000 (for get) and 400 (for put, staying safely below the 500-entity limit). As the workload is expected to be IO-bound, using a concurrent.futures.ThreadPoolExecutor is recommended.
If you don’t pass the class an executor on construction, it will create one by itself.

from concurrent.futures import ThreadPoolExecutor

def _chunks(l, n=1000):
    """
    Yield successive n-sized chunks from l.
    https://stackoverflow.com/a/312464/2597135
    """
    for i in range(0, len(l), n):
        yield l[i:i + n]

def _get_chunk(client, keys):
    """
    Get a single chunk
    """
    missing = []
    vals = client.get_multi(keys, missing=missing)
    return vals, missing

class DatastoreChunkClient(object):
    """
    Provides a thin wrapper around a Google Cloud Datastore client,
    providing means of reading and writing arbitrarily many
    keys/entities using chunked, parallel requests.
    """
    def __init__(self, client, executor=None):
        self.client = client
        if executor is None:
            executor = ThreadPoolExecutor(16)
        self.executor = executor
    
    def get_multi(self, keys):
        """
        Thin wrapper around client.get_multi() that circumvents
        the 1000 read requests limit by doing 1000-sized chunked reads
        in parallel using self.executor.

        Returns (values, missing).
        """
        all_missing = []
        all_vals = []
        for vals, missing in self.executor.map(lambda chunk: _get_chunk(self.client, chunk), _chunks(keys, 1000)):
            all_vals += vals
            all_missing += missing
        return all_vals, all_missing

    def put_multi(self, entities):
        """
        Thin wrapper around client.put_multi() that circumvents
        the 500 entities per request limit by doing 400-sized
        chunked writes in parallel using self.executor.
        """
        # Consume the iterator to ensure all chunked puts have completed
        for _ in self.executor.map(lambda chunk: self.client.put_multi(chunk), _chunks(entities, 400)):
            pass

Usage example:

# Create "raw" google datastore client
client = datastore.Client(project="myproject-123456")
chunkClient = DatastoreChunkClient(client)

# The size of the key list is only limited by memory
keys = [...]
values, missing = chunkClient.get_multi(keys)

# The size of the entity list is only limited by memory
entities = [...]
chunkClient.put_multi(entities)


Saving an entity in Google Cloud Datastore using Python: A minimal example

Here’s a minimal example for inserting an entity in the Google Cloud Datastore object database using the Python API:

#!/usr/bin/env python3
from google.cloud import datastore
# Create & store an entity
client = datastore.Client(project="myproject-12345")
entity = datastore.Entity(key=client.key('MyEntityKind', 'MyTestID'))
entity.update({
    'foo': u'bar',
    'baz': 1337,
    'qux': False,
})
# Actually save the entity
client.put(entity)

This assumes the project with ID myproject-12345 exists and the Cloud Datastore API is enabled for it. The kind MyEntityKind does not need to be created in advance; kinds are created implicitly when the first entity of that kind is saved.

How to fix Google Cloud Datastore ValueError: A Key must have a project set.

Problem:

You are trying to connect to the Google Cloud Datastore object database:

#!/usr/bin/env python3
from google.cloud import datastore
# Create, populate and persist an entity
entity = datastore.Entity(key=datastore.Key('MyEntityKind')) # Line of error
# ...

but when running that code, you get this error message:

Traceback (most recent call last):
  File "./IndexIntoDB.py", line 4, in <module>
    entity = datastore.Entity(key=datastore.Key('MyEntityKind'))
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/datastore/key.py", line 109, in __init__
    self._project = _validate_project(project, parent)
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/datastore/key.py", line 512, in _validate_project
    raise ValueError("A Key must have a project set.")
ValueError: A Key must have a project set.

Solution:

Note: While the solution below fixes the error message listed above, you might be more interested in having a look at this minimal entity insertion example

As the error message indicates, you need to set a project ID for the key. If you don’t know your project ID, go to the Google Cloud Console, select the right project at the top and then look at the URL:

https://console.cloud.google.com/datastore/welcome?project=perceptive-tape-12345

In this example, the project ID (which you have to use in the Python code) is perceptive-tape-12345.
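One way to fix the error is to create the key via a client, which carries the project ID. Here’s a minimal sketch, reusing the example project ID from above:

#!/usr/bin/env python3
from google.cloud import datastore

# The client carries the project ID, so keys created via client.key()
# automatically have the project set
client = datastore.Client(project="perceptive-tape-12345")
entity = datastore.Entity(key=client.key('MyEntityKind'))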

See also the Keys section of the google-cloud-datastore python documentation.

Converting namedtuples to XLSX in Python

This Python snippet allows you to convert an iterable of namedtuple instances to an XLSX file using xlsxwriter.

The header is automatically determined from the first element of the iterable. If the iterable is empty, the resulting XLSX file will also be empty.

import xlsxwriter
import itertools
from collections import namedtuple

def xlsx_write_rows(filename, rows):
    """
    Write XLSX rows from an iterable of rows.
    Each row must be an iterable of writeable values.

    Returns the number of rows written
    """
    workbook = xlsxwriter.Workbook(filename)
    worksheet = workbook.add_worksheet()
    # Write values
    nrows = 0
    for i, row in enumerate(rows):
        for j, val in enumerate(row):
            worksheet.write(i, j, val)
        nrows += 1
    # Cleanup
    workbook.close()
    return nrows


def namedtuples_to_xlsx(filename, values):
    """
    Convert a list or generator of namedtuples to an XLSX file.
    Returns the number of rows written.
    """
    try:
        # Ensure we have an iterator (next() is not defined for lists)
        values = iter(values)
        # Use first row to generate header
        peek = next(values)
        header = list(peek.__class__._fields)
        return xlsx_write_rows(filename, itertools.chain([header], [peek], values))
    except StopIteration:  # Empty generator
        # Write empty xlsx
        return xlsx_write_rows(filename, [])

Example Usage:

MyType = namedtuple("MyType", ["ID", "Name", "Value"])
namedtuples_to_xlsx("test.xlsx", [
    MyType(1, "a", "b"),
    MyType(2, "c", "d"),
    MyType(3, "e", "f"),
])

This example will generate this table:

ID	Name	Value
1	a	b
2	c	d
3	e	f


Fixing TensorFlow libcublas.so.8.0: cannot open shared object file on Ubuntu

Problem:

When you run import tensorflow in Python, you get one of the following errors:

ImportError: libcublas.so.8.0: cannot open shared object file: No such file or directory
ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory
ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory
ImportError: libcufft.so.8.0: cannot open shared object file: No such file or directory
ImportError: libcurand.so.8.0: cannot open shared object file: No such file or directory

Solution:

Install the required packages using:

apt-get install libcublas8.0 libcusolver8.0 libcudart8.0 libcufft8.0 libcurand8.0

Note that you also need to install cuDNN – see the follow-up post Which version of cuDNN should you install for TensorFlow GPU on Ubuntu? for details on how to do that.

If this method does not work, you can (as a quick workaround) uninstall tensorflow-gpu and install tensorflow – the version without GPU support:

pip3 uninstall tensorflow-gpu
pip3 install tensorflow

However, this will likely make your applications much slower.

For other solutions see the TensorFlow bugtracker on GitHub.

How to use concurrent.futures map with a tqdm progress bar

Problem:

You have a concurrent.futures executor, e.g.

import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(64)

Using this executor, you want to map a function over an iterable in parallel (e.g. parallel download of HTTP pages).

In order to aid interactive execution, you want to use tqdm to provide a progress bar showing the fraction of futures that have already completed.

Solution:

You can use this function:

from tqdm import tqdm
import concurrent.futures

def tqdm_parallel_map(executor, fn, *iterables, **kwargs):
    """
    Equivalent to executor.map(fn, *iterables),
    but displays a tqdm-based progress bar.
    
    Does not support timeout or chunksize as executor.submit is used internally
    
    **kwargs is passed to tqdm.
    """
    # Zip the iterables to match executor.map(fn, *iterables) semantics
    futures_list = [executor.submit(fn, *args) for args in zip(*iterables)]
    for f in tqdm(concurrent.futures.as_completed(futures_list), total=len(futures_list), **kwargs):
        yield f.result()

Note that internally executor.submit() is used rather than executor.map(), because there is no way of calling concurrent.futures.as_completed() on the iterator returned by executor.map().

Note: In contrast to executor.map(), this function does NOT yield the results in the same order as the input; they are yielded in order of completion.
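Here’s a hypothetical usage sketch (the example.com URLs are just placeholders for real work):

import requests

executor = concurrent.futures.ThreadPoolExecutor(64)
urls = ["https://example.com/page/{}".format(i) for i in range(100)]
# Download all pages in parallel while showing a progress bar;
# results arrive in order of completion, not in input order
for html in tqdm_parallel_map(executor, lambda url: requests.get(url).text, urls):
    pass  # ... process each page here ...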

requests: Download file if it doesn’t exist

Problem:

You want to download a URL to a file using the requests Python library, but you want to skip the download if the file already exists.

Solution:

Use the following functions:

import requests
import os.path

def download_file(filename, url):
    """
    Download a URL to a file.
    """
    response = requests.get(url, stream=True)
    # Check for HTTP errors before creating the file, so a failed
    # request does not leave an empty file behind
    response.raise_for_status()
    with open(filename, 'wb') as fout:
        # Write the response data to the file in 4 KiB blocks
        for block in response.iter_content(4096):
            fout.write(block)

def download_if_not_exists(filename, url):
    """
    Download a URL to a file if the file
    does not exist already.

    Returns
    -------
    True if the file was downloaded,
    False if it already existed
    """
    if not os.path.exists(filename):
        download_file(filename, url)
        return True
    return False
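
Hypothetical usage example (the filename and URL are placeholders):

if download_if_not_exists("example.html", "https://example.com/"):
    print("Downloaded example.html")
else:
    print("example.html already exists, skipping download")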


Removing spans/divs with style attributes from HTML

Occasionally I have to clean up some HTML code – mostly because parts of it were pasted into a CMS like WordPress from a rich text editor like Word.

I’ve noticed that the formatting I want to remove is mostly based on span and div elements with a style attribute. Therefore, I’ve written a simple Python script based on BeautifulSoup4 which replaces certain tags with their contents if they have a style attribute. While such a script might destroy some other formatting in rare cases, it is very useful for some recurring use cases; a minimal sketch of the idea is shown below.
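Here’s a minimal sketch of the approach (strip_styled_tags is a hypothetical name; the full script in the linked post differs in detail):

from bs4 import BeautifulSoup

def strip_styled_tags(html):
    """Replace span and div tags by their contents if they carry a style attribute."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(["span", "div"]):
        if tag.has_attr("style"):
            tag.unwrap()  # Replaces the tag with its children
    return str(soup)

For example, strip_styled_tags('<div style="color: red"><b>text</b></div>') returns '<b>text</b>'.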

Read more

Upload multiple files to the tornado webserver

The following HTML code can be used to create a form that allows uploading multiple files at once:

<form enctype="multipart/form-data" method="POST" action="upload.py">
  <table style="width: 100%">
    <tr>
      <td>Choose the files to upload:</td>
      <td style="text-align: right"><input type="file" multiple="" id="files" name="files"></td>
    </tr>
    <tr>
      <td><input id="fileUploadButton" type="submit" value="Upload &gt;&gt;"></td>
      <td></td>
    </tr>
  </table>
</form>
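
On the server side, a minimal tornado handler sketch could look like this (UploadHandler and the storage location are hypothetical; the linked post has the complete version):

import tornado.web

class UploadHandler(tornado.web.RequestHandler):
    def post(self):
        # self.request.files maps the form field name ("files")
        # to a list of uploaded files
        for fileinfo in self.request.files.get("files", []):
            # Each entry has "filename", "content_type" and "body" keys
            with open(fileinfo["filename"], "wb") as outfile:
                outfile.write(fileinfo["body"])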

Read more

Normalizing electronics engineering value notations using Python

In electronics engineering there is a wide variety of notations for values that need to be recognized by intuitive user interfaces. Examples include:

  • 1fA
  • 0.1A
  • 0.00001
  • 1e-6
  • 4,5nA
  • 4,500.123 A
  • 4A5
  • 4k0 A

The wide variety of options, including thousands separators, comma-as-decimal-separator and suffix-as-decimal-separator, optional whitespace and scientific notation, makes it difficult to normalize values without using specialized libraries; a rough sketch of the idea follows below. Read more
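As a very rough sketch of the idea, here is a hypothetical normalize_value() that handles only a few of the notations above (comma handling, among other things, is omitted):

import re

# SI prefix factors (subset; both u and µ mean micro)
_SI_FACTORS = {"f": 1e-15, "p": 1e-12, "n": 1e-9, "u": 1e-6, "µ": 1e-6,
               "m": 1e-3, "k": 1e3, "M": 1e6, "G": 1e9}

def normalize_value(s, unit="A"):
    s = s.strip().replace(" ", "")
    if s.endswith(unit):  # Strip the unit, e.g. "1fA" -> "1f"
        s = s[:-len(unit)]
    # Prefix or unit as decimal separator, e.g. "4k0" -> 4000.0, "4A5" -> 4.5
    m = re.fullmatch(r"(\d+)([fpnuµmkMG]|" + re.escape(unit) + r")(\d+)", s)
    if m:
        factor = _SI_FACTORS.get(m.group(2), 1.0)  # The unit itself means factor 1
        return float("{}.{}".format(m.group(1), m.group(3))) * factor
    # Plain SI prefix, e.g. "1f" -> 1e-15
    if s and s[-1] in _SI_FACTORS:
        return float(s[:-1]) * _SI_FACTORS[s[-1]]
    return float(s)  # Plain or scientific notation, e.g. "1e-6"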

Engineering for the super-lazy: Solving equations without activating your brain

Preface

In electronics engineering, from time to time you have to use standard formulas to characterize your circuits. To what extent you need to calculate all parameters usually depends on the requirements.

For example, consider the formula for the -3dB cutoff frequency of a 1st order RC lowpass filter:

f_c=\frac{1}{2\pi RC}

Although this equation is fairly simple and most people won’t have any problem solving it for any particular variable in a few seconds, it can serve as a basic example of how to solve an equation symbolically.

One of the easiest ways of performing this task is to use SymPy, a Python library for symbolic mathematics.
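As a taste of what this looks like, here’s a minimal sketch that solves the lowpass formula for C (the full post goes into more detail):

from sympy import symbols, Eq, solve, pi

f_c, R, C = symbols("f_c R C", positive=True)
# -3dB cutoff frequency of a 1st-order RC lowpass filter
lowpass = Eq(f_c, 1 / (2 * pi * R * C))
# Solve the equation for C
print(solve(lowpass, C))  # [1/(2*pi*R*f_c)]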

Read more