
mbed STM32F4DISCOVERY simple LED demo

This demo shows you how to control the STM32F4DISCOVERY LEDs using mbed.
I use mbed from inside PlatformIO.

#include <mbed.h>

DigitalOut greenLED(PD_12);
DigitalOut orangeLED(PD_13);
DigitalOut redLED(PD_14);
DigitalOut blueLED(PD_15);

int main() {
  while(1) {
    // Cycle LEDs in order
    // NOTE: You can toggle a LED using
    //  blueLED = !blueLED;
    blueLED = 0;
    greenLED = 1;
    wait(0.25);
    greenLED = 0;
    orangeLED = 1;
    wait(0.25);
    orangeLED = 0;
    redLED = 1;
    wait(0.25);
    redLED = 0;
    blueLED = 1;
    wait(0.25);
  }
}

 

Posted by Uli Köhler in mbed, PlatformIO

STM32F4DISCOVERY LED pin reference / pinout

The STM32F4 DISCOVERY board has these LEDs:

  • Green LED: PD12, active-high (LED emits light when PD12 is high)
  • Orange LED: PD13, active-high (LED emits light when PD13 is high)
  • Red LED: PD14, active-high (LED emits light when PD14 is high)
  • Blue LED: PD15, active-high (LED emits light when PD15 is high)

Posted by Uli Köhler in Electronics, Embedded

Is pypng 16-bit PNG encoding faster using pypy on the Raspberry Pi?

In our previous post How to save Raspberry Pi raw 10-bit image as 16-bit PNG using pypng we investigated how to use the pypng library to save 10-bit raw Raspberry Pi Camera images to 16-bit PNG files.

However, saving a single image took ~26 seconds using CPython 3.7.3. Since pypy can provide speedups to many Python workloads, we tried using pypy3 7.0.0 (see How to install pypy3 on the Raspberry Pi) to speed up the PNG encoding.

Results

pypng PNG export seems to be one of the workloads that run much slower under pypy3.

  • CPython 3.7.3: Encoding took 24.22 seconds
  • pypy3 7.0.0: Encoding took 266.60 seconds

Encoding is more than 10x slower when using pypy3!

Hence I don’t recommend using pypy3 to speed up pypng encoding workloads, at least not on the Raspberry Pi!

Full example

This example is derived from our full example previously posted on How to save Raspberry Pi raw 10-bit image as 16-bit PNG using pypng:

#!/usr/bin/env python3
import time
import picamera
import picamera.array
import numpy as np
import png

# Capture image
print("Capturing image...")
with picamera.PiCamera() as camera:
    with picamera.array.PiBayerArray(camera) as stream:
        camera.capture(stream, 'jpeg', bayer=True)
        # Demosaic data and write to rawimg
        # (stream.array contains the non-demosaiced data)
        rawimg = stream.demosaic()

# Write to PNG
print("Writing 16-bit PNG...")
t0 = time.time()
with open('16bit.png', 'wb') as outfile:
    writer = png.Writer(width=rawimg.shape[1], height=rawimg.shape[0], bitdepth=16, greyscale=False)
    # rawimg is a (height, width, 3) RGB uint16 array
    # but PyPNG needs a (height, width*3) array
    png_data = np.reshape(rawimg, (-1, rawimg.shape[1]*3))
    # Scale 10 bit data to 16 bit values (else it will appear black)
    # NOTE: Depending on your photo and the settings,
    #  it might still appear quite dark!
    png_data *= int(2**6)
    writer.write(outfile, png_data)
t1 = time.time()

print(f"Encoding took {(t1 - t0):.2f} seconds")

 

Posted by Uli Köhler in Python, Raspberry Pi

How to install pypy3 on the Raspberry Pi

This post shows you an easy way of getting pypy3 running on the Raspberry Pi. I used Raspbian Buster on a Raspberry Pi 3 for this example. On Raspbian Buster, this will install pypy3 7.x!

First install pypy3 and virtualenv:

sudo apt update && sudo apt -y install pypy3 pypy3-dev virtualenv

Now we can create a virtualenv to install pypy packages into:

virtualenv -p /usr/bin/pypy3 ~/pypy3-virtualenv

Now we can activate the virtualenv. You need to do this every time you want to use pypy, for each shell / SSH connection separately:

source ~/pypy3-virtualenv/bin/activate

If your shell prompt is now prefixed by (pypy3-virtualenv) you have successfully activated the virtualenv:

(pypy3-virtualenv) uli@raspberrypi:~ $

Now python points to pypy3 and pip will install packages locally to ~/pypy3-virtualenv.

Now you can use e.g.

python3 myscript.py

to run your script (both python and python3 will point to pypy3 if you activated the virtual environment!).
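
To double-check which interpreter the virtualenv actually uses, you can print the implementation name. A quick sanity check (works both inside and outside the virtualenv):

import sys
print(sys.implementation.name)  # 'pypy' inside the virtualenv, 'cpython' outside
print(sys.version)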

Note: Installing pypy3-dev is not strictly necessary to get pypy3 running, but you need it in order to compile native libraries like numpy.

Posted by Uli Köhler in Python, Raspberry Pi

How to save Raspberry Pi raw 10-bit image as 16-bit PNG using pypng

In our previous post How to capture RaspberryPi camera 10-bit raw image in Python we showed how you can use the picamera Python library to capture raw 10-bit image data.

The PNG image format supports storing 16-bit image data. This post shows you how to do that using the NumPy arrays we generated in our previous post. We are using the pypng library.

with open('16bit.png', 'wb') as outfile:
    writer = png.Writer(width=rawimg.shape[1], height=rawimg.shape[0], bitdepth=16, greyscale=False)
    # rawimg is a (height, width, 3) RGB uint16 array
    # but PyPNG needs a (height, width*3) array
    png_data = np.reshape(rawimg, (-1, rawimg.shape[1]*3))
    # Scale 10 bit data to 16 bit values (else it will appear black)
    # NOTE: Depending on your photo and the settings,
    #  it might still appear quite dark!
    png_data *= int(2**6)
    writer.write(outfile, png_data)
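
The factor 2**6 follows directly from the bit depths: 16 - 10 = 6, so every 10-bit value is shifted up by 6 bits. A quick check of the resulting range:

# 10-bit maximum scaled by 2**6 lands just below the 16-bit maximum
print(1023 * 2**6)  # 65472
print(2**16 - 1)    # 65535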

Note that the resulting PNGs are ~9.9 Megabytes in size and saving one using pypng takes about 27 seconds on my Raspberry Pi 3!

For comparison, the raw NumPy data is ~29 Megabytes, whereas the compressed NumPy data is 9.3 Megabytes:

  • Raw NumPy data (np.save): 29 Megabytes, takes 0.11 seconds to save.
  • Compressed NumPy data (np.savez_compressed): 9.3 Megabytes, takes 12 seconds to save.

So if your motivation for using PNG is to save space, you might be better off using NumPy compressed data, especially if you need to save many camera frames in quick succession and are therefore limited by how long each save takes.
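
If you want to reproduce the size/time comparison on your own Raspberry Pi, a minimal sketch along these lines should do (it assumes you already saved a camera frame as rawimg.npy, as shown in our previous post; the exact numbers will vary with the image content):

import time
import numpy as np

rawimg = np.load("rawimg.npy")  # a previously captured (height, width, 3) uint16 frame

t0 = time.time()
np.save("rawimg-copy.npy", rawimg)              # uncompressed: large but fast
t1 = time.time()
np.savez_compressed("rawimg-copy.npz", rawimg)  # compressed: smaller but slower
t2 = time.time()

print(f"np.save: {t1 - t0:.2f} s, np.savez_compressed: {t2 - t1:.2f} s")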

In case you need to use PNGs, you might want to check PyPy, since pypng is a pure Python library and hence might benefit from PyPy’s increased execution speed. However, in practice pypy3 turned out to be more than 10x slower; please read our detailed analysis at Is pypng 16-bit PNG encoding faster using pypy on the Raspberry Pi?

Full example:

#!/usr/bin/env python3
import picamera
import picamera.array
import numpy as np
import png

# Capture image
print("Capturing image...")
with picamera.PiCamera() as camera:
    with picamera.array.PiBayerArray(camera) as stream:
        camera.capture(stream, 'jpeg', bayer=True)
        # Demosaic data and write to rawimg
        # (stream.array contains the non-demosaiced data)
        rawimg = stream.demosaic()

# Write to PNG
print("Writing 16-bit PNG...")
with open('16bit.png', 'wb') as outfile:
    writer = png.Writer(width=rawimg.shape[1], height=rawimg.shape[0], bitdepth=16, greyscale=False)
    # rawimg is a (height, width, 3) RGB uint16 array
    # but PyPNG needs a (height, width*3) array
    png_data = np.reshape(rawimg, (-1, rawimg.shape[1]*3))
    # Scale 10 bit data to 16 bit values (else it will appear black)
    # NOTE: Depending on your photo and the settings,
    #  it might still appear quite dark!
    png_data *= int(2**6)
    writer.write(outfile, png_data)

 

Posted by Uli Köhler in Raspberry Pi

How to capture RaspberryPi camera 10-bit raw image in Python

You can use the picamera Python library to capture a raw sensor image of a camera attached to the Raspberry Pi via CSI:

#!/usr/bin/env python3
import picamera
import picamera.array
import numpy as np

# Capture image
print("Capturing image...")
with picamera.PiCamera() as camera:
    with picamera.array.PiBayerArray(camera) as stream:
        camera.capture(stream, 'jpeg', bayer=True)
        # Demosaic data and write to rawimg
        # (stream.array contains the non-demosaiced data)
        rawimg = stream.demosaic()

rawimg is a numpy uint16 array of dimensions (height, width, 3), e.g. (1944, 2592, 3), and contains integer values from 0 to 1023.
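
A quick sanity check of what you got back (the shape shown is for my camera module, yours may differ):

print(rawimg.dtype, rawimg.shape)  # uint16 (1944, 2592, 3)
print(rawimg.min(), rawimg.max())  # values stay within the 10-bit range 0..1023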

You can, for example, save it in a NumPy file using

np.save("rawimg.npy", rawimg) # Reload with np.load("rawimg.npy")

or save it in a compressed format using

np.savez_compressed("rawimg.npz", rawimg) # Reload with np.load("rawimg.npz")["arr_0"]
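
Note that reloading differs slightly between the two formats: .npy files load directly as an array, whereas .npz files load as an archive that you still need to index. A quick round-trip check (filenames from the snippets above):

import numpy as np

a = np.load("rawimg.npy")           # .npy loads directly as an array
b = np.load("rawimg.npz")["arr_0"]  # unnamed arrays in an .npz are stored as arr_0, arr_1, ...
print(np.array_equal(a, b))         # True if both round trips preserved the data
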
Posted by Uli Köhler in Python, Raspberry Pi

How to fix ModuleNotFoundError: No module named ‘picamera’

Problem:

You want to run a Python script using the Raspberry Pi camera but you see an error message like

Traceback (most recent call last):
  File "mycamera.py", line 2, in <module>
    import picamera
ModuleNotFoundError: No module named 'picamera'

Solution:

You need to install the picamera Python module using pip:

sudo pip3 install picamera

or, if you are still using Python 2.x:

sudo pip install picamera

In case you see

sudo: pip3: command not found

install pip3 using

sudo apt install -y python3-pip
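
After installing, you can quickly verify that the module is importable (the version string will vary):

import picamera
print(picamera.__version__)  # e.g. 1.13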

 

Posted by Uli Köhler in Python, Raspberry Pi

How to install picamraw using pip

First try installing it normally:

sudo pip3 install picamraw

In case that fails with this error message (like for me):

Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting picamraw
Could not install packages due to an EnvironmentError: 404 Client Error: Not Found for url: https://www.piwheels.org/simple/picamraw/

download it and install it manually: Copy the link of the most recent .whl file from https://pypi.org/project/picamraw/#files, download it using wget and install it using pip3, e.g.:

wget https://files.pythonhosted.org/packages/1e/47/4efb0d0ab5d40142424e7f3db545e276733a45bd7f7f9095919ef30c96b3/picamraw-1.2.64-py3-none-any.whl
sudo pip3 install picamraw-1.2.64-py3-none-any.whl

 

Posted by Uli Köhler in Python, Raspberry Pi

How to capture Raspi Camera image using OpenCV & Python

First, install OpenCV for Python 3:

sudo apt install python3-opencv

Here’s the code to acquire the image and store it in image.png:

#!/usr/bin/env python3
import cv2
video_capture = cv2.VideoCapture(0)
# Check success
if not video_capture.isOpened():
    raise Exception("Could not open video device")
# Read picture. ret is True on success
ret, frame = video_capture.read()

cv2.imwrite('image.png', frame)
# Close device
video_capture.release()

Run it using

python3 cv-raspicapture.py
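
If you need a specific capture resolution, you can request it before grabbing the frame; whether the camera honors the request depends on the driver. A minimal sketch (the property constants are standard OpenCV, the 1280x720 values are just examples):

#!/usr/bin/env python3
import cv2

video_capture = cv2.VideoCapture(0)
if not video_capture.isOpened():
    raise Exception("Could not open video device")
# Request a resolution; the driver may fall back to the nearest supported mode
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ret, frame = video_capture.read()
if ret:
    print(frame.shape)  # e.g. (720, 1280, 3)
    cv2.imwrite('image-720p.png', frame)
video_capture.release()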

 

Posted by Uli Köhler in OpenCV, Python, Raspberry Pi

How to fix Raspi camera ‘mmal_vc_component_create: failed to create component ‘vc.ril.camera’ (1:ENOMEM)’

Problem:

You are trying to access the Raspberry Pi camera using raspistill or raspivid, but you see an error message like this:

mmal: Cannot read camera info, keeping the defaults for OV5647
mmal: mmal_vc_component_create: failed to create component 'vc.ril.camera' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.camera' (1)
mmal: Failed to create camera component
mmal: main: Failed to create camera component
mmal: Camera is not enabled in this build. Try running "sudo raspi-config" and ensure that "camera" has been enabled

Solution:

The most common issue here is that you don’t have the camera interface enabled. Read How to enable Raspberry Pi camera using raspi-config for instructions on how to do that.

If the camera interface has already been enabled (remember that you need to reboot for the changes to take effect), the most likely reason for error messages like this is that the camera is not connected correctly to the Raspberry Pi:

  • Is the CSI cable inserted the right way? The silvery contacts need to face away from the Ethernet connector!
  • Is the CSI cable fully seated?
  • Did you insert the CSI cable into the Display connector? It needs to be inserted into the CSI connector, which is the one closer to the Ethernet connector.
  • Is the other end of the CSI cable correctly attached to the camera board?

Posted by Uli Köhler in Embedded, Raspberry Pi

How to enable Raspberry Pi camera using raspi-config

You can enable the camera interface by running

sudo raspi-config

Select 5 Interfacing Options and press Return.

Now select P1 Camera and press Return.

Select Yes and press Return.

Now that the camera interface has been enabled, press Return.

Press Tab twice to select Finish and press Return. You are now asked if you want to reboot:

Select Yes and wait for your Raspberry Pi to reboot. Your camera interface will be enabled after the reboot.

In case you are not asked to reboot, your camera interface was already enabled. In this case, select Finish on the main raspi-config screen by pressing Tab twice and pressing Return.

After pressing Return you will be back at the shell. Reboot (sudo reboot) so that any previous settings take effect and then try

sudo raspistill -o myimg.jpg

in order to test your camera.

Posted by Uli Köhler in Embedded, Raspberry Pi

How to fix Raspi camera ‘failed to open vchiq instance’

Problem:

You are trying to access the Raspberry Pi camera using raspistill or raspivid. Although you already enabled the camera interface using raspi-config, you see this error message:

* failed to open vchiq instance

Solution:

Your user does not have the permissions to access the camera interface. As a quick fix, you can run raspistill or raspivid as root using sudo, or add your user to the video group to acquire the required permissions:

sudo usermod -a -G video $USER

After doing this, log out and log back in again for the changes to take effect – or close your SSH session and connect again. If in doubt, try to reboot.

Posted by Uli Köhler in Embedded, Raspberry Pi

How to fix SDCC ‘at 1: warning 119: don’t know what to do with file ‘main.o’. file extension unsupported’

If you use SDCC to compile your C code for your microcontroller and you encounter an error message like this:

at 1: warning 119: don't know what to do with file 'main.o'. file extension unsupported

you have to configure your build system to use the .rel suffix for object files instead of the standard .o. SDCC expects built object files to have the .rel extension! See How to change CMake object file suffix from default “.o” for details on how to do that in CMake.

Posted by Uli Köhler in Build systems, Embedded

A working SDCC STM8 CMake configuration

If you have been looking desperately for a working CMake example for the SDCC compiler for STM8 microcontrollers here’s my take on it:

cmake_minimum_required(VERSION 3.2)

set(CMAKE_C_OUTPUT_EXTENSION ".rel")
set(CMAKE_C_COMPILER sdcc)
set(CMAKE_SYSTEM_NAME Generic) # No linux target etc

# Prevent default configuration
set(CMAKE_C_FLAGS_INIT "")
set(CMAKE_EXE_LINKER_FLAGS_INIT "")

project(STM8Blink C)
SET(CMAKE_C_FLAGS "-mstm8 --std-c99")
add_executable(main.ihx main.c)

# Flash targets
add_custom_target(flash ALL COMMAND stm8flash -c stlink -p stm8s105c6 -w main.ihx)

This will build main.ihx from main.c. main.ihx is an Intel Hex file which can be flashed directly using stm8flash.

The last lines set up make flash; you might need to use the correct microcontroller (stm8s105c6 in this example, run stm8flash -l to show supported devices) and the correct flash adapter (stlink, stlinkv2, stlinkv21, stlinkv3 or espstlink).

The setup example shown here is for the STM8S eval board.

I suppose it can be easily modified for other microcontrollers, but I haven’t tried that so far.

Posted by Uli Köhler in CMake, Embedded, Hardware

How to fix raspivid ‘* failed to open vchiq instance’

Problem:

You want to capture a video using raspivid, e.g. using a command like

raspivid -o vid.h264

but you only see this error message:

* failed to open vchiq instance

with no video being captured.

Quick solution:

Run raspivid using sudo:

sudo raspivid -o vid.h264

Now either the video will be captured or you will see another error message if there is another issue with your camera setup.

Better solution:

Add your user to the video group so you don’t have to run raspivid as root:

sudo usermod -a -G video $USER

You only need to do this once. After that, log out and log back in (or close your SSH session and connect again). If in doubt, restart the Raspberry Pi. See What does ‘sudo usermod -a -G group $USER’ do on Linux? for more details.

Now you will be able to capture video as normal user:

raspivid -o vid.h264

 

Posted by Uli Köhler in Embedded, Linux

How to enable SSH on Raspbian without a screen

You can open the boot partition on the SD card (the FAT32 partition) and create an empty file named ssh in the root directory of that partition. Ensure that the file is named ssh and not ssh.txt!

If you are in the correct working directory in the command line, use

touch ssh

On recent Ubuntu versions, the following command will switch to the correct directory and create the file (but you need to mount the partition manually first, e.g. using your file explorer):

cd /media/$USER/boot && touch ssh

Don’t forget to unmount the boot drive before removing the SD card. Once you restart the Raspberry Pi with the modified SD card, SSH will be enabled without you having to attach a keyboard or screen to the Pi.

This approach was tested with the 2018-11-13 version of Raspbian and works with Raspberry Pi 1, Raspberry Pi 2 and Raspberry Pi 3.

Credits to Yahor for the original solution on StackOverflow!

Posted by Uli Köhler in Embedded, Linux

How to configure OctoPrint/OctoPi with Ethernet using a static IP

In order to configure OctoPrint/OctoPi to use the Raspberry Pi Ethernet interface with a static IP, first open the rootfs partition on the SD card. After that, open etc/network/interfaces in your preferred text editor (you might need to open it as root, e.g. sudo nano etc/network/interfaces – ensure that you don’t edit your local computer’s /etc/network/interfaces but the one on the SD card).

Now copy the following text to the end of etc/network/interfaces:

auto eth0
allow-hotplug eth0
iface eth0 inet static
    address 192.168.1.234
    netmask 255.255.255.0
    gateway 192.168.1.1
    network 192.168.1.0
    broadcast 192.168.1.255
    dns-nameservers 8.8.8.8 8.8.4.4

You might need to adjust the IP addresses so they match your router.

Save the file and insert the SD card into your Raspberry Pi. You should be able to ping it – in our example, ping 192.168.1.234.

Tested with OctoPrint 0.16.0

Original source: OctoPrint forum

Posted by Uli Köhler in Embedded, Linux

How to fix MicroPython I2C no data

Problem:

You’ve configured MicroPython’s I2C something like this (in my case on the ESP8266, but this applies to many MCUs):

i2c = machine.I2C(-1, machine.Pin(5), machine.Pin(4))

but you can’t find any devices on the bus:

>>> i2c.scan()
[]

Solution:

Likely you forgot to configure the pins as pullups. I2C needs pullups to work, and many MCUs (like the ESP8266) provide support for integrated (weak) pull-ups.

p4 = machine.Pin(4, mode=machine.Pin.OUT, pull=machine.Pin.PULL_UP)
p5 = machine.Pin(5, mode=machine.Pin.OUT, pull=machine.Pin.PULL_UP)
i2c = machine.I2C(-1, p5, p4)

i2c.scan() # [47]

You can also verify this by checking with a multimeter or an oscilloscope: when no communication is taking place on the I2C bus, the voltage on SDA and SCL should be equal to the supply voltage of your MCU (usually 3.3V or 5V – 0V indicates a missing pullup or some other error).
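
Once the scan returns addresses, you can talk to a device directly. A small sketch building on the setup above (readfrom is the generic machine.I2C read call; what the returned bytes mean depends entirely on your chip):

devices = i2c.scan()
print([hex(addr) for addr in devices])  # e.g. ['0x2f'] for the 47 from above

if devices:
    # Read 2 raw bytes from the first device found
    data = i2c.readfrom(devices[0], 2)
    print(data)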

Posted by Uli Köhler in Electronics, MicroPython, Python