How to set cell value to string using js-xlsx

This snippet reads an XLSX file using js-xlsx, sets cell C2 to abc123 and writes the result to another file:

const XLSX = require('xlsx');

const table = XLSX.readFile('mytable.xlsx');
// Use first sheet
const sheet = table.Sheets[table.SheetNames[0]];

// Option 1: If you have numeric row and column indexes
sheet[XLSX.utils.encode_cell({r: 1 /* 2 */, c: 2 /* C */})] = {t: 's' /* type: string */, v: 'abc123' /* value */};
// Option 2: If you have a cell coordinate like 'C2' or 'D15'
sheet['C2'] = {t: 's' /* type: string */, v: 'abc123' /* value */};

XLSX.writeFile(table, 'result.xlsx');
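Note that js-xlsx only exports cells that lie within the sheet's range (stored in sheet['!ref']). If the cell you set lies outside the current range, you'll likely need to extend the range yourself; here's a minimal sketch for C2:

const range = XLSX.utils.decode_range(sheet['!ref']);
range.e.r = Math.max(range.e.r, 1); // row 2 has index 1
range.e.c = Math.max(range.e.c, 2); // column C has index 2
sheet['!ref'] = XLSX.utils.encode_range(range);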


How to iterate over XLSX rows using js-xlsx

This snippet allows you to easily iterate over the rows of any XLSX file using the js-xlsx library (in this example we don't iterate over all columns but only read column B):

const XLSX = require('xlsx');

const table = XLSX.readFile('mytable.xlsx');
const sheet = table.Sheets[table.SheetNames[0]];

const range = XLSX.utils.decode_range(sheet['!ref']);
for (let rowNum = range.s.r; rowNum <= range.e.r; rowNum++) {
    // Example: Get second cell in each row, i.e. Column "B"
    const secondCell = sheet[XLSX.utils.encode_cell({r: rowNum, c: 1})];
    // NOTE: secondCell is undefined if the cell does not exist (i.e. if it is empty)
    console.log(secondCell); // secondCell.v contains the value, i.e. string or number
}
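If you'd rather not deal with cell addresses at all, you can let js-xlsx convert the sheet into plain arrays first; a sketch using sheet_to_json with header: 1, which yields one array per row:

const rows = XLSX.utils.sheet_to_json(sheet, {header: 1});
for (const row of rows) {
    console.log(row[1]); // Column "B"; undefined if the cell is empty
}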


How to fix Angular4/5/6 Unexpected token ‘px’

If you encounter an error message like this:

Parser Error: Unexpected token 'px' at column 3 in [70px] in ng:///AppModule/MyComponent.html@5:34 ("="let string of strings">

look at the line the error is referring to. It will look similar to this:

<mat-expansion-panel-header [collapsedHeight]="70px">

You have two options of fixing this:

Option 1: Recommended if the value (70px in this case) is always constant.

Remove the brackets from the attribute: change [collapsedHeight] to collapsedHeight. The brackets mean that the value shall be interpreted as a JavaScript expression; removing them means the value is used as a plain attribute value. Your code should look like this:

<mat-expansion-panel-header collapsedHeight="70px">

Option 2: Force Angular to interpret the value (70px in this case) as a string:

Adding single quotes before and after the value makes it a valid JavaScript string literal:

<mat-expansion-panel-header [collapsedHeight]="'70px'">

I recommend using this option only if you expect the value to become a non-constant JavaScript expression in the future.
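For example, if the height should depend on component state (isCompact being a hypothetical component property):

<mat-expansion-panel-header [collapsedHeight]="isCompact ? '48px' : '70px'">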

How to circumvent the Google Cloud Datastore 1000 read / 500 write limit in Python

Google Cloud Datastore has a built-in limit of 1000 keys per get request and 500 entities per put request. If you hit it, you will see one of these error messages:

google.api_core.exceptions.InvalidArgument: 400 cannot get more than 1000 keys in a single call
google.api_core.exceptions.InvalidArgument: 400 cannot write more than 500 entities in a single call

You can fix this by chunking the requests, i.e. fetching at most 1000 keys per get call and writing at most 500 entities per put call.

This code provides a ready-to-use example of a class that automates this process. As an added benefit, it performs the requests in chunks of 1000 (for get) or 400 (for put, staying safely below the 500-entity limit) in parallel using a concurrent.futures.Executor. As the performance is expected to be IO-bound, it is recommended to use a concurrent.futures.ThreadPoolExecutor.
If you don't give the class an executor on construction, it will create one by itself.

import itertools
from concurrent.futures import ThreadPoolExecutor

def _chunks(l, n=1000):
    """
    Yield successive n-sized chunks from l.
    https://stackoverflow.com/a/312464/2597135
    """
    for i in range(0, len(l), n):
        yield l[i:i + n]

def _get_chunk(client, keys):
    """
    Get a single chunk
    """
    missing = []
    vals = client.get_multi(keys, missing=missing)
    return vals, missing

class DatastoreChunkClient(object):
    """
    Provides a thin wrapper around a Google Cloud Datastore client, providing
    means of reading and writing arbitrarily large lists of keys and entities
    in parallel chunks.
    """
    def __init__(self, client, executor=None):
        self.client = client
        if executor is None:
            executor = ThreadPoolExecutor(16)
        self.executor = executor
    
    def get_multi(self, keys):
        """
        Thin wrapper around client.get_multi() that circumvents
        the 1000 read requests limit by doing 1000-sized chunked reads
        in parallel using self.executor.

        Returns (values, missing).
        """
        all_missing = []
        all_vals = []
        for vals, missing in self.executor.map(lambda chunk: _get_chunk(self.client, chunk), _chunks(keys, 1000)):
            all_vals += vals
            all_missing += missing
        return all_vals, all_missing

    def put_multi(self, entities):
        """
        Thin wrapper around client.put_multi() that circumvents
        the 500 entities per call write limit by doing 400-sized chunked writes
        in parallel using self.executor.
        """
        # Iterating over the map() results forces all chunks to complete and
        # re-raises any exception that occurred in a worker thread.
        for _ in self.executor.map(lambda chunk: self.client.put_multi(chunk), _chunks(entities, 400)):
            pass

Usage example:

# Create "raw" google datastore client
client = datastore.Client(project="myproject-123456")
chunkClient = DatastoreChunkClient(client)

# The size of the key list is only limited by memory
keys = [...]
values, missing = chunkClient.get_multi(keys)

# The size of the entity list is only limited by memory
entities = [...]
chunkClient.put_multi(entities)

 

Saving an entity in Google Cloud Datastore using Python: A minimal example

Here’s a minimal example for inserting an entity in the Google Cloud Datastore object database using the Python API:

#!/usr/bin/env python3
from google.cloud import datastore
# Create & store an entity
client = datastore.Client(project="myproject-12345")
entity = datastore.Entity(key=client.key('MyEntityKind', 'MyTestID'))
entity.update({
    'foo': u'bar',
    'baz': 1337,
    'qux': False,
})
# Actually save the entity
client.put(entity)

This assumes that the project with ID myproject-12345 exists and has the Datastore API enabled; the kind MyEntityKind does not need to be created beforehand, it is created implicitly on the first insert.
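To verify that the insert worked, you can read the entity back using the same key; a quick sketch:

# Read the entity back (returns None if it does not exist)
fetched = client.get(client.key('MyEntityKind', 'MyTestID'))
print(fetched)  # <Entity('MyEntityKind', 'MyTestID') {'foo': 'bar', 'baz': 1337, 'qux': False}>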

How to fix Google Cloud Datastore ValueError: A Key must have a project set.

Problem:

You are trying to connect to the Google Cloud Datastore database:

#!/usr/bin/env python3
from google.cloud import datastore
# Create, populate and persist an entity
entity = datastore.Entity(key=datastore.Key('MyEntityKind')) # Line of error
# ...

but when running that code, you get this error message:

Traceback (most recent call last):
  File "./IndexIntoDB.py", line 4, in <module>
    entity = datastore.Entity(key=datastore.Key('MyEntityKind'))
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/datastore/key.py", line 109, in __init__
    self._project = _validate_project(project, parent)
  File "/usr/local/lib/python3.6/dist-packages/google/cloud/datastore/key.py", line 512, in _validate_project
    raise ValueError("A Key must have a project set.")
ValueError: A Key must have a project set.

Solution:

Note: While the solution below fixes the error message listed above, you might be more interested in having a look at this minimal entity insertion example.

As the error message indicates, you need to specify a project ID. If you don't know the project ID, go to the Google Cloud Console, select the right project at the top and then look at the URL:

https://console.cloud.google.com/datastore/welcome?project=perceptive-tape-12345

In this example, the project ID (which you have to use in the Python code) is perceptive-tape-12345.
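With the project ID in hand, the fix is to create a client for that project and derive keys from it (following the minimal example above; keys created via client.key() inherit the project automatically):

#!/usr/bin/env python3
from google.cloud import datastore
# The project ID from the URL above
client = datastore.Client(project="perceptive-tape-12345")
entity = datastore.Entity(key=client.key('MyEntityKind'))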

See also the Keys section of the google-cloud-datastore python documentation.

How to fix apt-get source You must put some ‘source’ URIs in your sources.list

Problem:

You want to download an apt source package using

apt-get source <package name>

but instead you see this error message:

E: You must put some 'source' URIs in your sources.list

Solution:

In most cases, you can fix this easily using

sudo apt-get update

If this does not fix the issue, edit /etc/apt/sources.list, e.g. using

sudo nano /etc/apt/sources.list

and ensure that the deb-src lines are not commented out.

Example: You need to change

deb http://archive.ubuntu.com/ubuntu artful main restricted
# deb-src http://archive.ubuntu.com/ubuntu artful main restricted

to

deb http://archive.ubuntu.com/ubuntu artful main restricted
deb-src http://archive.ubuntu.com/ubuntu artful main restricted
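Instead of editing the file by hand, you can also uncomment all deb-src lines in one go, assuming the commented-out lines start with '# deb-src' (check the result afterwards), and then re-run apt-get update:

sudo sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list
sudo apt-get update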

How to fix lxd ‘Failed container creation: No storage pool found. Please create a new storage pool.’

Problem:

You want to launch some lxd container using lxc launch […] but instead you get the following error message:

Failed container creation: No storage pool found. Please create a new storage pool.

Solution:

You need to initialize lxd before using it:

lxd init

When it asks you about the backend

Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:

choosing the default option (btrfs) means that you'll have to use a dedicated block device (or a dedicated preallocated file image) for storage. While this is more efficient if you run many containers at a time, I recommend choosing the dir backend for the default storage pool, because that option is easiest to configure and will not occupy as much space on your hard drive.

See Storage management in lxd for more details, including different options for storage pools in case you need a more advanced setup.
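If lxd is already initialized and you just lack a storage pool, you can also create a dir-backed pool non-interactively; a sketch assuming the pool and the profile are both named default:

lxc storage create default dir
lxc profile device add default root disk path=/ pool=default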

How to create a partition table using fdisk

Warning: If you run fdisk on the wrong drive here or there is some important data left, you might lose all your data and it will be very hard to restore. Before running these commands, triple-check that you’ve used the correct device (e.g. /dev/sdh)!
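A quick way to triple-check is to list all block devices together with their sizes and models:

lsblk -o NAME,SIZE,MODEL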

In order to create a partition table on a device (e.g. /dev/sdh; note that /dev/sdh1 is not a device but a partition, so using it here does not make any sense!), run this command:

sudo fdisk <device file>

If the device doesn’t have a valid partition table, fdisk will automatically create a partition table (but not write it to the disk yet). It will show this output if that is the case (the identifier is random and different every time):

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x81ee11ff.

Command (m for help):

If you are sure that you want to create the partition table, enter w and press return to write it to the disk & exit.

The partition table will be effective immediately, but will not contain any partitions yet. In order to create a partition (in this example we will create one partition spanning the entire device), run

sudo fdisk <device file>

again.

This time, enter the n command (new partition). When it asks you about the partition type and its size, just press return every time to select the defaults. It should look like this (<return> added to show you where you should press return).

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): <return>
Partition number (1-4, default 1): <return>
First sector (2048-31143935, default 2048): <return>
Last sector, +sectors or +size{K,M,G,T,P} (2048-31143935, default 31143935): <return>

After that, when fdisk prompts for a command again (i.e. when it says Command (m for help): ), enter w in order to write the changes (i.e. the new partition) to the disk & exit. After fdisk exits, you can see the partition in /dev, e.g. /dev/sdh1.

After that, you’ll likely need to create a filesystem on that partition, e.g. using sudo mkfs.ext4 /dev/sdh1 or sudo mkfs.vfat /dev/sdh1. Make sure to create the correct filesystem for the operating system and use case the device will be used with.

How to easily find errors in nginx config files

If you edited some nginx config file and nginx doesn’t want to reload or restart, e.g. with an error message like this:

# service nginx reload
Job for nginx.service failed because the control process exited with error code.
See "systemctl  status nginx.service" and "journalctl  -xe" for details.

you likely have some error in one of your config files.

There’s a simple command to check for errors (you need to run it as root): nginx -t

Example output:

nginx: [emerg] unknown directive "autoindex$" in /etc/nginx/sites-enabled/mysite:31
nginx: configuration file /etc/nginx/nginx.conf test failed

The last line tells you that there actually is some error in the config files.
The first line tells you exactly where it is: /etc/nginx/sites-enabled/mysite:31 means: look in the file /etc/nginx/sites-enabled/mysite, line 31.

In this particular case, the actual error message is unknown directive "autoindex$". By checking the aforementioned file I was able to find out that I accidentally entered autoindex $; instead of autoindex on;

After fixing this issue, nginx -t shows that the configuration file seems correct now:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
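You can also chain the check and the reload so that the reload only happens if the configuration is valid:

sudo nginx -t && sudo service nginx reload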

Note that while most cases of nginx failing to (re)start are caused by issues in the config files, there are some cases in which the config file seems correct and nginx will still not start up. In that case, have a look at the logfile, which is commonly located at /var/log/nginx/error.log. You need to be root in order to view it. I recommend this command:

sudo tail -n 1000 /var/log/nginx/error.log

How to exit the GNU nano editor?

Just press Ctrl+X. If you don’t have unsaved changes, this will exit nano immediately.

In case you have unsaved changes, it will ask you whether to save those changes after you press Ctrl+X.

  • Press Y to tell it to save the changes you’ve made. It will then ask you to check or enter the filename to save to. Once you are finished with the filename, press Enter.
  • Press N to discard all changes (you won’t be able to restore your changes later) and exit nano immediately.

PDFJS: Read PDF from memory Buffer in NodeJS

Note: This post uses async/await and therefore requires NodeJS 8+.

This is how to read a PDF from a file, e.g. mypdf.pdf:

pdfjs.getDocument('mypdf.pdf');

Full example:

const pdfjs = require('pdfjs-dist');

async function readPDF() {
    const pdf = await pdfjs.getDocument('mypdf.pdf');
    // ...
}

Here’s how you can read the PDF from a memory buffer:

pdfjs.getDocument({data: buffer});

Full example:

const fs = require('mz/fs');
const pdfjs = require('pdfjs-dist');

async function readPDF() {
    // Read file into buffer
    const buffer = await fs.readFile('mypdf.pdf');
    // Parse PDF from buffer
    const pdf = await pdfjs.getDocument({data: buffer});
    // ...
}

Using mz/fs is not required; it’s just used as a utility library that provides promise-returning file functions usable with await.
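If you’d rather avoid the extra dependency, Node 8’s built-in util.promisify provides the same convenience; a minimal sketch:

const { promisify } = require('util');
const readFile = promisify(require('fs').readFile);
const pdfjs = require('pdfjs-dist');

async function readPDF() {
    // Read file into buffer, then parse the PDF from it
    const buffer = await readFile('mypdf.pdf');
    const pdf = await pdfjs.getDocument({data: buffer});
    // ...
}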


Convert pt (postscript/PDF unit) to inch or mm in Javascript

Here are some simple utility functions to convert the typographic unit pt (point, defined as 1/72 inch) into inches or mm.

function convertPtToInch(pt) { return pt / 72; }
function convertInchToMM(inch) { return inch * 25.4; }
function convertPtToMM(pt) { return convertInchToMM(convertPtToInch(pt)); }

// Example usage
console.log(convertPtToMM(595)) // Prints 209.90277777777777

Note that while this conversion is exact, there is some tolerance required when comparing these units:
An ISO A4 paper is defined as 210x297 mm – or 595x842 pt.

However, converting 595x842 pt into mm results in 209.902777 mm and 297.038888 mm respectively. Watch out for those tolerances if you try to compare paper sizes. I recommend a tolerance of at least 0.25 mm.
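A comparison helper along those lines could look like this (a sketch; sizesMatch and the 0.25 mm default tolerance are my own choices):

function sizesMatch(actualMM, expectedMM, toleranceMM = 0.25) {
    return Math.abs(actualMM - expectedMM) <= toleranceMM;
}

// Example: is a 595x842 pt page ISO A4 (210x297 mm)?
console.log(sizesMatch(convertPtToMM(595), 210)
    && sizesMatch(convertPtToMM(842), 297)); // true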

Extract PDF page sizes using PDFJS & NodeJS

Although most PDFs only contain pages of a single size (e.g. DIN A4 or Letter in portrait orientation), PDFs sometimes also have pages of another size or orientation (which is treated just like another size) than other pages in the same document.

This post provides an easy-to-reuse example of how to use PDFJS in NodeJS (though it would be just as easy in the browser) to extract the size of every page in a PDF document.

It is based on this previous post on how to read all pages from a PDF document using PDFJS, so be sure to check that out first.

First install the required dependencies:

npm install bereich pdfjs-dist

then you can use this source code to read the page sizes of mypdf.pdf:

const pdfjs = require('pdfjs-dist');
const bereich = require('bereich');

class PageSize {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
}

function getPageSize (page) {
    const [x, y, w, h] = page.pageInfo.view;
    const width = w - x;
    const height = h - y;
    const rotate = page.pageInfo.rotate;
    // Consider rotation
    return (rotate === 90 || rotate === 270)
        ? new PageSize(height, width) : new PageSize(width, height);
}

async function readPDFPageSizes() {
  const pdf = await pdfjs.getDocument('mypdf.pdf');
  const numPages = pdf.numPages;

  const pageNumbers = Array.from(bereich(1, numPages));
  // Start reading all pages 1...numPages
  const promises = pageNumbers.map(pageNo => pdf.getPage(pageNo));
  // Wait until all pages have been read
  const pages = await Promise.all(promises);
  // You can do something with pages here.
  return pages.map(getPageSize);
}

readPDFPageSizes()
    .then(pageSizes => {console.log(pageSizes)})
    .catch(err => {console.error(`Error while reading PDF: ${err}`)})

Running this with a document having a single A4 page will result in

[ PageSize { width: 595, height: 842 } ]

Note that the width & height unit is pt (points). One pt is defined as 1/72 inch. A DIN A4 page (portrait) is 595x842 pt, therefore you see those values here.
See this TechOverflow post for code to convert pt to mm and inches.

PDFJS: Read all pages using async/await in NodeJS

PDFJS has an official example that, among other things, reads all pages from a PDF document.
However, their promise-based method is rather complex to understand and to write. Luckily, there is an easier way using async/await (which is supported starting from NodeJS 8.x).

I’m using the bereich library (bereich is German for range) in order to generate an array of page numbers (1..numPages).
Install the required libraries using

npm install pdfjs-dist bereich

Here’s the source code example:

const pdfjs = require('pdfjs-dist');
const bereich = require('bereich');

async function readPDFPages() {
  const pdf = await pdfjs.getDocument('mypdf.pdf');
  const numPages = pdf.numPages;

  const pageNumbers = Array.from(bereich(1, numPages));
  // Start reading all pages 1...numPages
  const promises = pageNumbers.map(pageNo => pdf.getPage(pageNo));
  // Wait until all pages have been read
  const pages = await Promise.all(promises);
  // You can do something with pages here.
  return pages;
}

readPDFPages().then(pages => {
    console.log(pages)
}).catch(err => {
    console.error(`Error while reading PDF: ${err}`)
})


How to read PDF creation & modification date in NodeJS

Problem:

You have a PDF file for which you want to know the creation and modification date: not the filesystem timestamps, but the dates stored in the PDF’s metadata.

Solution:

This solution assumes you use NodeJS version 8+ which supports async/await.
You can use pdfjs to read these dates. First install it using

npm install pdfjs-dist

Then use this code to extract the dates.

const pdfjs = require('pdfjs-dist');

async function readPDFDates() {
  const pdf = await pdfjs.getDocument('mypdf.pdf');
  const metadata = await pdf.getMetadata();

  const modDate = new Date(metadata.metadata._metadata['xmp:modifydate']);
  const createDate = new Date(metadata.metadata._metadata['xmp:createdate']);
  return [modDate, createDate]
}

readPDFDates().then(([modDate, createDate]) => {
    console.log(`Creation date: ${createDate}`)
    console.log(`Modification date: ${modDate}`)
}).catch(err => {
    console.error(`Error while reading PDF: ${err}`)
})


The PDF files I’ve seen use ISO8601-style formatting, but without a timezone specification. The code therefore assumes that the times are in the local timezone.

Note: metadata is e.g. the following object (not all attributes are present for all PDFs):

{ info: 
   { PDFFormatVersion: '1.5',
     IsAcroFormPresent: false,
     IsXFAPresent: false,
     Title: 'Microsoft Word - mypdf',
     Author: 'uli',
     Creator: 'PScript5.dll Version 5.2.2',
     Producer: 'Acrobat Distiller 9.3.0 (Windows)',
     CreationDate: 'D:20100209100924+01\'00\'',
     ModDate: 'D:20100209100924+01\'00\'' },
  metadata: 
   Metadata {
     _metadata: 
      { 'dc:format': 'application/pdf',
        'dc:creator': 'peter',
        'dc:title': 'Microsoft Word - mypdf',
        'xmp:createdate': '2010-02-09T10:09:24+01:00',
        'xmp:creatortool': 'PScript5.dll Version 5.2.2',
        'xmp:modifydate': '2010-02-09T10:09:24+01:00',
        'pdf:producer': 'Acrobat Distiller 9.3.0 (Windows)',
        'xmpmm:documentid': 'uuid:2fd66f45-5f2a-4dd6-8cb0-297ce85ee9e1',
        'xmpmm:instanceid': 'uuid:f6e62218-4b40-47c7-837b-6cb1e6e90995' } } }
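Some PDFs lack the XMP metadata block entirely; in that case you can fall back to info.CreationDate / info.ModDate, which use the PDF date format (D:YYYYMMDDHHmmSS plus an optional timezone suffix). A sketch that, like the code above, ignores the timezone and assumes local time:

function parsePDFDate(s) {
    // Matches e.g. D:20100209100924+01'00' (the timezone suffix is ignored)
    const m = /^D:(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})/.exec(s);
    if (m === null) { return null; }
    const [, year, month, day, hour, minute, second] = m;
    return new Date(`${year}-${month}-${day}T${hour}:${minute}:${second}`);
}

console.log(parsePDFDate("D:20100209100924+01'00'")); // 2010-02-09T10:09:24, local time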


AutoBenchmark: Automatic multi-interval benchmarking in C++ using std::chrono

Problem:

Some part of your C++ code is suffering from performance issues. You are looking for a lightweight solution that allows you to easily record different time points and adaptively print the results (i.e. you don’t want to know that something ran for 1102564643 nanoseconds, you just want to know that it took 1.102 seconds).

Solution:

I wrote AutoBenchmark so you can have the most hassle-free C++11 experience possible for your micro-benchmarking needs.

AutoBenchmark allows you to record different points in time, each with a label. The first time point is recorded when this instance is constructed. AutoBenchmark supports an arbitrary number of time points.
When an instance of this class is destructed, it will automatically print all the benchmark results, but only if a configurable amount of time has passed since its construction – this is extremely handy especially if you have multiple exit points in your function that would otherwise require calling Print() multiple times.
It allows you to ignore the benchmark when some performance goal is met (e.g. if you have a for loop that is slow only for some datapoints, you can configure AutoBenchmark to print info only for the slow runs).
The default behaviour (i.e. constructor with default parameters) is to disable automatic printing – in that case, you can call Print() yourself.

Header (AutoBenchmark.hpp):

/**
 * AutoBenchmark v1.0
 * Written by Uli Köhler
 * https://techoverflow.net
 * 
 * Published under CC0 1.0 Universal
 */
#pragma once

#include <chrono>
#include <string>
#include <limits>
#include <vector>

using namespace std;

/**
 * Automatic benchmark: Allows you to record different points in time,
 * each with a label. The first time point is recorded when this instance
 * is constructed. This class supports an arbitrary number of time points.
 * 
 * When an instance of this class is destructed, it will automatically
 * print all the benchmark results, but only if a configurable amount
 * of time has passed since its construction, allowing you to automatically
 * ignore runs that met their performance goal.
 */
class AutoBenchmark {
  public:
    /**
     * Initialize a benchmark that automatically prints its records
     * on destruction if the total time consumed is >= autoPrintThreshold
     * at the time of destruction. Only the time up until the last Record()ed
     * label is printed.
     * @param autoPrintThreshold: How many seconds will need to have passed
     * so that the destructor will automatically print. Default is to never print.
     * @param benchmarkLabel: A label that will be printed once, before all the results
     * @param lineLabel: A label (e.g. indent) that will be printed before each result line
     */
    AutoBenchmark(double autoPrintThreshold=std::numeric_limits<double>::max(), const std::string& benchmarkLabel = "", const std::string& lineLabel = "    ");
    ~AutoBenchmark();
    /**
     * Record a datapoint
     */
    void Record(const std::string& label = "");
    void Record(const char* label); // No default argument to avoid ambiguity with the overload above
    /**
     * Print all time deltas
     */
    void Print();
    
    /**
     * Reset the benchmark, as if it were a new instance.
     */
    void Reset();

    /**
     * Return now() - first timepoint in seconds.
     */
    double TotalSeconds();

  private:
    std::vector<std::chrono::time_point<std::chrono::system_clock>> times;
    std::vector<std::string> labels;
    double autoPrintThreshold;
    std::string benchmarkLabel;
    std::string lineLabel;
};

Source (AutoBenchmark.cpp):

#include "AutoBenchmark.hpp"

#include <iostream>

AutoBenchmark::AutoBenchmark(double autoPrintThreshold, const std::string& benchmarkLabel, const std::string& lineLabel)
    : autoPrintThreshold(autoPrintThreshold), benchmarkLabel(benchmarkLabel), lineLabel(lineLabel) {
    times.push_back(chrono::system_clock::now());
    labels.emplace_back("Begin"); // Just to keep indices the same
}

AutoBenchmark::~AutoBenchmark() {
    if(TotalSeconds() >= autoPrintThreshold) {
        Print();
    }
}

void AutoBenchmark::Record(const std::string &label) {
    times.push_back(chrono::system_clock::now());
    labels.emplace_back(label); // Just to keep indices the same
}

void AutoBenchmark::Record(const char *label) {
    Record(string(label));
}

void AutoBenchmark::Print() {
    if(benchmarkLabel.length()) {
        cout << benchmarkLabel << '\n';
    }
    for (size_t i = 1; i < times.size(); i++) {
        // Compute time interval for size comparison
        chrono::duration<double, std::nano> ns = times[i] - times[i - 1];
        chrono::duration<double, std::micro> us = times[i] - times[i - 1];
        chrono::duration<double, std::milli> ms = times[i] - times[i - 1];
        chrono::duration<double> s = times[i] - times[i - 1];
        chrono::duration<double, std::ratio<60>> min = times[i] - times[i - 1];
        chrono::duration<double, std::ratio<3600>> hrs = times[i] - times[i - 1];
        // Print
        if(ns.count() < 1000.0) {
            cout << lineLabel << labels[i] << " took " << ns.count() << " ns\n";
        } else if(us.count() < 1000.0) {
            cout << lineLabel << labels[i] << " took " << us.count() << " μs\n";
        } else if (ms.count() < 1000.0) {
            cout << lineLabel << labels[i] << " took " << ms.count() << " ms\n";
        } else if (s.count() < 60.0) {
            cout << lineLabel << labels[i] << " took " << s.count() << " seconds\n";
        } else if (min.count() < 60.0) {
            cout << lineLabel << labels[i] << " took " << min.count() << " minutes\n";
        } else {
            cout << lineLabel << labels[i] << " took " << hrs.count() << " hours\n";
        }
    }
    cout << flush;
}

void AutoBenchmark::Reset() {
    times.clear();
    labels.clear();
    times.push_back(chrono::system_clock::now());
    labels.emplace_back("Begin");
}

double AutoBenchmark::TotalSeconds() {
    chrono::duration<double> s = chrono::system_clock::now() - times[0];
    return s.count();
}

Usage example:

#include "AutoBenchmark.hpp"

void MySlowFunction() {
    // Every run that takes >= 0.3 seconds will auto-print
    AutoBenchmark myBenchmark(0.3, "Results of running MySlowFunction():");
    // .. do task 1 ...
    myBenchmark.Record("Running task 1"); // will print as: Running task 1 took 1.2ms
    // .. do task 2 ...
    myBenchmark.Record("Running task 2");

    // Loop example
    for(size_t i = 0; ....) {
        // ... do loop iteration task here ...
        myBenchmark.Record("Loop iteration " + std::to_string(i));
    }

    // myBenchmark will be destructed here, so if MySlowFunction() took
    // more than 0.3s to run until it returned, the result will be printed
    // to cout automatically.
}

If MySlowFunction() took more than 0.3s to run overall, AutoBenchmark will print the results when it is destructed – i.e. when MySlowFunction() returns:

Results of running MySlowFunction():
    Running task 1 took 260.826 ms
    Running task 2 took 36.148 μs
    Loop iteration 0 took 2.5522 seconds
    Loop iteration 1 took 664.059 ms
    Loop iteration 2 took 22.2772 ms
    Loop iteration 3 took 57.4024 ms
    Loop iteration 4 took 16.9928 ms
    Loop iteration 5 took 14.0497 ms
    Loop iteration 6 took 62.5218 ms