You can find 3D models as 3D-PDF for JST XH connectors on the JST website.
However, in order to obtain STEP or IGES models, you have to look on the jst-mfg.com site.
First, clone kicad-library-utils
using
git clone https://gitlab.com/kicad/libraries/kicad-library-utils.git
Then, run the check script against your footprint – Connector_RJ.pretty/RJ9_Evercom_5301-4P4C.kicad_mod
in this example – using
~/kicad-library-utils/klc-check/check_footprint.py -vv Connector_RJ.pretty/RJ9_Evercom_5301-4P4C.kicad_mod
You might need to adjust the path to kicad-library-utils
accordingly.
This will provide colored output on the command line such as
Checking footprint 'RJ9_Evercom_5301-4P4C':
Violating F5.2 - https://klc.kicad.org/footprint/f5/f5.2/
  Fabrication layer requirements
  Value Label Errors
Violating F7.2 - https://klc.kicad.org/footprint/f7/f7.2/
  For through-hole components, footprint anchor is set on pad 1
  Pad '1' not located at origin
Violating F9.1 - https://klc.kicad.org/footprint/f9/f9.1/
  Footprint meta-data is filled in as appropriate
  Value label '5301-4P4C' does not match filename 'RJ9_Evercom_5301-4P4C'
Violating F9.3 - https://klc.kicad.org/footprint/f9/f9.3/
  Footprint 3D model requirements
  3D model file path missing from the 3D model settings of the footprint
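If you want to post-process the checker output in a script (for example, to batch-check many footprints), you can extract the violated rule IDs with a small Python sketch. This parses the example output shown above; the exact output format of check_footprint.py is not a stable API and may change between versions:

```python
import re

# Example output of check_footprint.py (abridged from the example above)
output = """Checking footprint 'RJ9_Evercom_5301-4P4C':
Violating F5.2 - https://klc.kicad.org/footprint/f5/f5.2/
Violating F7.2 - https://klc.kicad.org/footprint/f7/f7.2/
Violating F9.1 - https://klc.kicad.org/footprint/f9/f9.1/
Violating F9.3 - https://klc.kicad.org/footprint/f9/f9.3/"""

# Collect all violated KLC rule IDs such as "F5.2"
violated_rules = re.findall(r"Violating (F\d+\.\d+)", output)
print(violated_rules)  # ['F5.2', 'F7.2', 'F9.1', 'F9.3']
```

In a real script you would capture `output` from the checker process, e.g. via `subprocess.run(..., capture_output=True)`.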
LCSC is typically the cheapest vendor for buying reels. Note, however, that the shipping fee is rather expensive (around 35€ to Germany), so the savings are only worthwhile if you order larger quantities of components.
10nF 0603 50V: 4000pcs of Yageo CC0603KRX7R9BB103 for 5,20€
100nF 0603 50V: 4000pcs of CCTC TCC0603X7R104K500CT for 4,80€
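When comparing reel offers like these, it helps to normalize to the price per piece. A quick sketch (prices and quantities taken from the offers above, excluding shipping):

```python
# Reel offers from above: name -> (total price in EUR, quantity per reel)
offers = {
    "Yageo CC0603KRX7R9BB103 (10nF)": (5.20, 4000),
    "CCTC TCC0603X7R104K500CT (100nF)": (4.80, 4000),
}

# Compute and print the unit price in cents per piece
for name, (total_eur, quantity) in offers.items():
    price_ct = total_eur / quantity * 100
    print(f"{name}: {price_ct:.3f} ct/pc")
```

This works out to 0.130 ct/pc for the 10nF and 0.120 ct/pc for the 100nF capacitors; remember that the fixed shipping fee dominates for small orders.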
This command will generate a PostgreSQL dump using pg_dump
and immediately feed it into bup split
(without creating an intermediate file) for backup.
It assumes that .env
contains a line
POSTGRES_USER=myuser
so that the script can determine the PostgreSQL user. In order to back up to a local .bup
repository, set BUP_DIR
and run:

export BUP_DIR=/var/bup/my-database.bup
source .env && docker-compose exec -u postgres -T postgres pg_dump -U${POSTGRES_USER} | bup -d $BUP_DIR split -n mydb-pgdump.sql
In order to additionally push the backup to a bup remote server, use:

export BUP_DIR=/var/bup/my-database.index.bup
export BUP_REMOTE=bup-server:/bup/my-database.bup
source .env && docker-compose exec -u postgres -T postgres pg_dump -U${POSTGRES_USER} | bup -d $BUP_DIR split -r $BUP_REMOTE -n mydb-pgdump.sql
This config is based on our previous post How to setup headscale server in 5 minutes using docker-compose and our Traefik configuration with Cloudflare wildcard certs (see Simple Traefik docker-compose setup with Lets Encrypt Cloudflare DNS-01 & TLS-ALPN-01 & HTTP-01 challenges)
version: '3.5'
services:
  headscale:
    image: headscale/headscale:latest
    volumes:
      - ./config:/etc/headscale/
      - ./data:/var/lib/headscale
    ports:
      # - 27896:8080
      - 9090:9090
      - 3478:3478/udp
    command: headscale serve
    restart: unless-stopped
    depends_on:
      - postgres
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.headscale.rule=Host(`headscale.mydomain.com`)"
      - "traefik.http.routers.headscale.entrypoints=websecure"
      - "traefik.http.routers.headscale.tls.certresolver=cloudflare"
      - "traefik.http.routers.headscale.tls.domains[0].main=mydomain.com"
      - "traefik.http.routers.headscale.tls.domains[0].sans=*.mydomain.com"
      - "traefik.http.services.headscale.loadbalancer.server.port=8080"
  postgres:
    image: postgres:14
    restart: unless-stopped
    volumes:
      - ./pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
When trying to connect to your Oracle Cloud instance via VNC from Linux, you’ll get a command like
ssh -o ProxyCommand='ssh -W %h:%p -p 443 ocid1.instanceconsoleconnection.oc1.eu-frankfurt-1.antheljtwxs32nycl7rgwekcj4t2pecwwcsm7mgzy5c3tt3iiovq564wubta@instance-console.eu-frankfurt-1.oci.oraclecloud.com' -N -L localhost:5900:ocid1.instance.oc1.eu-frankfurt-1.antheljtwxs32nycblplzbuamqsqbi4ipz377f3qhs6a4tdh74j673jfsjtq:5900 ocid1.instance.oc1.eu-frankfurt-1.antheljtwxs32nycblplzbuamqsqbi4ipz377f3qhs6a4tdh74j673jfsjtq
but you see error messages like
Unable to negotiate with 130.61.0.255 port 443: no matching host key type found. Their offer: ssh-rsa
Add the following text at the end of your ~/.ssh/config
and retry:
Host *
    HostkeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
This will make SSH accept RSA host keys.
LCSC has a really cheap all-metal push button for only 0.0129€/pc @50pc
It works with the KiCad 6.0
Button_Switch_SMD:SW_SPST_TL3342
footprint and the KiCad 6.0
Switch:SW_Push
symbol.
You can find the 3D models on the Taiyo Yuden website.
You can log in to your Raspi using ssh -CX pi@IPADDRESS
and then run
DISPLAY=:0 scrot screenshot.png
to take a screenshot of the display that is currently attached. After that, use
feh screenshot.png
(due to ssh -CX
this will display the image locally on your Linux desktop) or copy screenshot.png
to your local computer using scp
, rsync
, WinSCP
or any other tool.
This is useful for debugging what is happening on your display.
feh
is a modern image viewer which you can parameterize on the command line.
Use the -z
flag (or --randomize
) to randomize the order of the images.
This will get the IP address of a running docker-compose
container for the mongo
service.
docker inspect $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].ID') --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
When you use this in shell scripts, it’s often convenient to store the IP address in a variable:
export MONGO_IP=$(docker inspect $(docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].ID') --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}')
which you can then use as $MONGO_IP
.
For more details on how this works, see the following posts:
Let’s assume the directory where your docker-compose.yml
is located is called myservice.
If you have, for example, a docker-compose.yml
that declares a service mongo
running MongoDB, docker-compose
will call the container mongo
or mongo-1
.
However, docker
itself will call that container myservice-mongo-1
.
In order to find out the actual docker name of your container – assuming the container is running – use the following code:
docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name'
This uses docker-compose ps
to list running containers, exporting some information as JSON, for example:
[
  {
    "ID": "2d68b1c1625dbfb41e05f55af0a333b5700332112c6c7551f78afe27b1dfc7ad",
    "Name": "production-mongo-1",
    "Command": "docker-entrypoint.sh mongod",
    "Project": "production",
    "Service": "mongo",
    "State": "running",
    "Health": "",
    "ExitCode": 0,
    "Publishers": [
      {
        "URL": "",
        "TargetPort": 27017,
        "PublishedPort": 0,
        "Protocol": "tcp"
      }
    ]
  }
]
Then we use jq
(a command line JSON processor) to a) select only the entry in the list of running containers where the Service
attribute equals mongo
, b) take the first one using [0]
and get the Name
attribute which stores the name of the container.
$ docker-compose ps --format json | jq -r 'map(select(.Service=="mongo"))[0].Name'
myservice-mongo-1
This example selects, given a list of JSON entries, all entries where .service
equals "mongo"
(i.e. {"service": "mongo"})
:
jq 'map(select(.service=="mongo"))'
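The same select-by-attribute filtering can be sketched in Python, in case you need it inside a script instead of jq. The container list below is hypothetical sample data in the shape that docker-compose ps --format json emits:

```python
# Sample container entries, shaped like `docker-compose ps --format json` output
containers = [
    {"Service": "postgres", "Name": "myservice-postgres-1"},
    {"Service": "mongo", "Name": "myservice-mongo-1"},
]

# Equivalent of: jq -r 'map(select(.Service=="mongo"))[0].Name'
# -> keep only entries for the "mongo" service, take the first, read its name
mongo_name = [c for c in containers if c["Service"] == "mongo"][0]["Name"]
print(mongo_name)  # myservice-mongo-1
```

In practice you would load the JSON from the command output with `json.loads()` instead of hardcoding the list.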
This guide shows you how to create a bup
server. This is based on our previous post How to setup a “bup remote” server in 5 minutes using docker-compose but uses Synology’s built-in Docker GUI instead of docker-compose
.
First, create two shared directories bup-backups
(which will store the backups themselves) and bup-config
(which will store the dropbear
SSH server configuration, that is, SSH host keys and authorized client keys).
Alternatively, you can also use sub-directories of existing shared directories, but I’d like to keep them separate.
Then create a new Docker container by opening Docker
-> Container
, clicking Create
and following these steps:
As the image, use ulikoehler/bup-server:latest
Map local port 2022
to container port 2022
(the bup
server SSH port). You can choose any other port in Local Port
but keep the Container Port
the same.
As we said before, any directory will do. Create the sub-directories as needed.
On your local Linux computer, create an SSH key using
ssh-keygen -t ed25519 -f id_bup -N ""
Upload id_bup
and id_bup.pub
to the bup-config
shared folder.
Furthermore, copy id_bup.pub
to bup-config/dotssh/authorized_keys
.
After that, you can start up the container.
Use
ssh -i id_bup -p 2022 bup@[NAS IP address]
to try to connect to your NAS.
In case connecting via SSH does not work, most likely the issue is with your public/private key and/or your authorized_keys
file. Check if it is in the right directory (/home/bup/.ssh/authorized_keys
on the container). Also check the logs of the Docker container.
The following sequence allows you to enter the UEFI setup and set the screen size. It does not work for VMs running BIOS!
Press F2
repeatedly until you see the UEFI setup screen. Then open Device Manager
, open OVMF Platform Configuration
and select the screen resolution. Press ESC
and select Y
to save the changes. Continue pressing ESC
until you are at the start screen. Then select Reset
and wait for the OS to boot. You need to select Reset
because the change will only be effective after the next complete reboot.
On the Pi, run
libcamera-vid -t 0 --width 1920 --height 1080 --codec h264 -o out.h264
This will record Full-HD video (1920×1080) to out.h264
Use this command to list all available cameras:
libcamera-still --list-cameras
$ libcamera-still --list-cameras
Available cameras
-----------------
0 : imx477 [4056x3040] (/base/soc/i2c0mux/i2c@1/imx477@1a)
    Modes: 'SRGGB10_CSI2P' : 1332x990 [120.05 fps - (696, 528)/2664x1980 crop]
           'SRGGB12_CSI2P' : 2028x1080 [50.03 fps - (0, 440)/4056x2160 crop]
                             2028x1520 [40.01 fps - (0, 0)/4056x3040 crop]
                             4056x3040 [10.00 fps - (0, 0)/4056x3040 crop]
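If you need the camera list programmatically, one option is to parse the first line of each camera entry. A sketch based on the example output above; note that this textual output is not a stable API and may change between libcamera versions:

```python
import re

# Example camera line from `libcamera-still --list-cameras` (see above)
line = "0 : imx477 [4056x3040] (/base/soc/i2c0mux/i2c@1/imx477@1a)"

# Extract camera index, sensor name and maximum resolution
match = re.match(r"(\d+)\s*:\s*(\w+)\s*\[(\d+)x(\d+)\]", line)
index, sensor, width, height = match.groups()
print(sensor, width, height)  # imx477 4056 3040
```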
When trying to run raspivid
on Raspberry Pi OS Lite, you will see the following error message:
bash: raspivid: command not found
In recent versions of Raspberry Pi OS, raspivid
has been replaced by libcamera-vid
. Therefore, use libcamera-vid
instead of raspivid
.
First, install samba using
sudo apt -y install samba
then append the following to /etc/samba/smb.conf
[pi]
    comment = pi
    path = /home/pi
    writeable = yes
    browseable = yes
    public = yes
    create mask = 0644
    directory mask = 0755
    force user = pi
and finally restart samba
:
sudo systemctl restart smbd
Now your /home/pi
will be accessible via SMB (including write access).
First, install the WordPress REST API Authentication
WordPress plugin, which you can find by searching for WordPress REST API Authentication
:
Then you need to open the plugin configuration page. Open Plugins
in the WordPress admin panel, locate the WordPress REST API Authentication
plugin and click Configure
Select Basic Authentication
:
Then click Next
on the top right:
and click Finish
on the next page:
Assuming you have a WordPress user admin
with password abc123
we can modify our code from How to get WordPress posts as JSON using Python & the WordPress REST API in order to query a non-public endpoint:
import requests
import base64

# Compute basic authentication header
auth_header = b"Basic " + base64.b64encode(b"admin:abc123")

# posts is a list of JSON objects, each representing a post
posts = requests.get("https://mydomain.com/wp-json/wp/v2/posts",
                     params={"context": "edit"},
                     headers={"Authorization": auth_header}).json()
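Note that the Authorization header used for Basic Authentication is simply the Base64-encoded user:password string. You can verify what will be sent without making any request, using only the standard library:

```python
import base64

# Credentials from the example above
username = "admin"
password = "abc123"

# Build the HTTP Basic Authentication header value
auth_header = b"Basic " + base64.b64encode(f"{username}:{password}".encode())
print(auth_header)  # b'Basic YWRtaW46YWJjMTIz'
```

Alternatively, requests can construct this header for you if you pass auth=(username, password) to requests.get(), which avoids building it by hand.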