Archive 2019

Integrating wmBus devices into iobroker

After my quite expensive MH-Collector (identical to the easy.MUC from solvimus) died (it survived just a little longer than the warranty period), I decided to collect my wmBus devices’ data with some home-brewed solution. I also own a Ubiquiti US-24-250W, so going with PoE supply was an easy decision. So what lies closer than using a Raspberry Pi 3B+ with a PoE HAT and a USB wmBus stick?

No sooner said than done: I bought the parts and installed Raspbian and an iobroker slave. The iobroker master is now running on a Debian 10 (buster) VM on my tiny HP ProLiant server… How to install an iobroker slave can be found here.

wmBus Hardware

For me, the appropriate hardware was the IMST iM871A-USB (you can buy it directly from IMST or from tekmodul). This wmBus stick provides a serial interface (e.g. /dev/ttyUSB0), is quite cheap and is supported by most open-source wmBus software. But here comes the tricky part. There are quite a few paths you can take, but for me, using the Messhelden heat cost allocators, I found myself in a very frustrating situation. These devices stick to the OMS standard for most of the telegram, but unfortunately do some very shitty stuff in slots 2 and 3, so many decoders fall out of sync right after the first data slot.

wmBus Software

After trying different iobroker adapters (like iobroker.wm-bus) and also daemon solutions (like wmbusmeters) that send the data to some MQTT broker (a broker is easy to set up in iobroker), I ended up with iobroker.wmbus (beware of the dash, it is not the same as the one above). Somehow the author of this adapter managed to cope with the inconsistencies that I could not even decode manually while looking at every single bit and byte of the wmBus telegram.

After attaching the iM871A-USB stick to the Pi and placing it at a location where it can receive all the meters you are interested in, you need to install and configure the iobroker adapter iobroker.wmbus.

Adapter Configuration

The configuration is also quite easy and should look like the following:

It could also be that your meters need other modes to be received. One widespread mode for battery-driven devices is mode C. Unfortunately, a single stick cannot receive multiple modes, but usually you only run devices with a single mode anyway. Another important setting is the baud rate. For the IMST device, it needs to be 57600. The stick contains a serial converter that connects the IMST module via a real serial interface.

Add Encrypted Meters

After finishing the configuration and starting up the adapter, it is time to have a look into the log. There you will see whether the adapter started up correctly. If it did, you should soon see a line saying „Updated device state: <MANUF>-<ID>“ or an error saying that it could not decrypt a telegram due to a missing decryption key. If this occurs, go to the adapter configuration again. There you should see a new entry with the key „UNKNOWN“. Place the correct key there and push „Save“.

The following telegram of that device should then be decrypted correctly and a new state will be created within the object tree of iobroker.

If you see other unencrypted devices that pollute your object tree, or devices that fill your log with failed-decryption messages, simply put them under the „Blocked Devices“ tab in the adapter’s configuration. My Pi can see at least 20 unencrypted Techem water meters and heat cost allocators.

Let’s encrypt (also on wmBus)

I don’t know how they can survive in times of the GDPR (General Data Protection Regulation), but they are still in no hurry to encrypt their telegrams with a device-unique key. I think it is a security issue when burglars can easily find people who do not heat in wintertime or currently have no water demand. But at least Techem sticks closely to the OMS. If you rent a flat that still has unencrypted wmBus meters, I would definitely demand encrypted meters. Even if the encryption of wmBus has some weaknesses, it is by far better than plaintext.

STM32 BLDC Motor Control

Introduction

ST offers quite a broad BLDC controller portfolio, but the most interesting to me seems to be the STSPIN family of controllers. They include almost everything needed to drive a BLDC except the MOSFETs: an STM32 microcontroller, a DC/DC converter (with external passives), the MOSFET drivers, …

Steps of designing a BLDC control circuit with STM32

The first step is to find one or more BLDC motors for your specific needs. The available motors range from cheap no-name motors (~4000 rpm) to high-performance, high-speed types (>60000 rpm), and from a few watts up to kilowatts of power. For CNC applications, you can find HF spindles with 2200 W and 30000 rpm.

When you have your BLDC, it’s time to find the right controller for your application. If you go with STSPIN, you should think about getting the STEVAL-SPIN320x for prototyping your application, but you should additionally always get a NUCLEO board with an appropriate BLDC driver hat. Why? Because the ST Motor Profiler tool only runs with a few specific boards. To find the right board, install the ST Motor Control SDK and open the Motor Profiler tool. There you can browse through the kits supported for motor profiling.

For the following hands-on example, I will use a „Generic BLDC Motor“ with very poor documentation and quite low performance. It looks like a stepper motor and has the following (known) characteristics:

  • 8 pole pairs
  • 4000 rpm
  • 24V

We also measured some characteristics that are not given in the specs, most importantly the winding resistance. Set your power supply to current-limiting mode with approximately 5% of the nominal current of the motor, then connect it to two wires of the motor. In our example, the power supply showed 0.36 V / 0.3 A = 1.2 Ω. Since the three windings are connected in a star configuration, you always measure two windings in series between two wires, so a single winding has 1.2 Ω / 2 = 0.6 Ω. With the current still applied, you can also easily determine the pole-pair count by turning the shaft one full turn and counting the detents you feel while turning. It is easier if you use a pen to mark the positions. Apply the power only for a minute or so, otherwise you could damage the motor…
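Just to double-check the numbers, here is the arithmetic from above as a tiny C snippet (the measured voltage and current are the values my supply showed; adapt them to your motor):

#include <stdio.h>

int main(void)
{
    /* values measured with the supply in current-limiting mode */
    const double u_meas = 0.36;  /* V across two motor wires */
    const double i_meas = 0.30;  /* A, limited by the supply */

    /* two star-connected windings are in series between any two wires */
    double r_line    = u_meas / i_meas;  /* line-to-line resistance */
    double r_winding = r_line / 2.0;     /* single winding */

    printf("R_line    = %.2f Ohm\n", r_line);     /* 1.20 Ohm */
    printf("R_winding = %.2f Ohm\n", r_winding);  /* 0.60 Ohm */
    return 0;
}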

With the above data, you can select the appropriate BLDC controller board. For smaller motors, the X-NUCLEO-IHM16M1 should serve well. For larger types, the X-NUCLEO-IHM08M1 is a good choice. But always be aware that for every power hat board there is only a limited set of compatible Nucleo boards.

To be continued…

Merging the Contents of Two InfluxDBs

Ever had the problem that data runs into two different InfluxDB databases and you want to merge it into a single one? You wonder how this can happen? Then just think about migrating some data acquisition project from one server to another without downtime by spooling the acquired data into both DBs for some time, or simply setting up a fresh system because the old one is slowly dying of low performance. This happened to me with my smart home system, running iobroker and an InfluxDB. The old one ran on an ODROID-HC1 with only 2 GB RAM. The new one is a Debian 10.0 VM on my brand new HP ProLiant running XCP-ng. (OK, it’s not new, it’s used hardware, but for private use it is a monster 🙂 )

For sure, there exist great tools from the InfluxDB makers, but they are way too much to set up for a small project. So I decided to go the easy way: first do only what is absolutely necessary (bring up the new system) and pull in the historical data later.

Backing up and restoring iobroker is well documented, works smoothly and will not be part of this post. For influx, I just set up a new instance with a copy of the /etc/influx.conf and started the service. Then I started iobroker and everything worked like before, except for the availability of the historical data.

After endless searching throughout the web and many attempts to export the DB as CSV, JSON, …, I found a set of scripts from ETZ that helped me halfway through the backup-and-restore process.

The key to success in the end was to restore the DB on the new server not under its original name, but to rename it. In my case, I called the historical DB on the new server iobroker_old.

Now I had all the data I wanted to import at least on the new instance, but how could iobroker find it? It won’t! I needed to do an internal import within InfluxDB to migrate the iobroker_old data into the iobroker database.

To export the DB from the old system, just issue (on the old system):

$ backup-influxdb
$ scp /var/backups/influxdb/2019.... <myuser>@<newserver>:~

On the new server, then do:

$ restore-influxdb-database-online \
    ~/2019... \
    iobroker iobroker_old

The command that finally brought success was the following (with help from a GIST; before you run the command, taking a snapshot of the system may be good advice):

$ influx -database iobroker_old -execute \
   'SELECT * INTO "iobroker"."global".:MEASUREMENT FROM "iobroker_old"."global"./.*/ GROUP BY *'

To make this succeed, I needed to raise the RAM of the VM to 16 GB. With 4 GB it simply exhausted the swap and ran into a timeout. Afterwards, I turned it back to 4 GB and everything runs fine.

Headless Rescue System over SSH

Again, I stumbled over a problem… I needed to take a backup disk image of a headless bare-metal server to run a risky update of the Atlassian tools. And yes, it was one of the old servers in my network, not being virtualized.

I tried a lot of different solutions, including Debian (and derivatives) with the automated installer and its network-console, but none of them led to working SSH access. So I searched a lot and came across ploplinux. Just put it on a USB stick and run it. It will start up with an SSH server and a default password for root (root:ploplinux).

To find your server in your network, just do (on another Linux machine):

nmap -sn 192.168.1.0/24 | grep ^Nmap > network_snapshot.txt

and once ploplinux has booted, run the following (it’s a one-liner…):

nmap -sn 192.168.1.0/24 | grep ^Nmap > network_rescue.txt && \
      diff network_snapshot.txt network_rescue.txt | grep ^\>

to find the IP address of the running ploplinux. The output will look something like:

> Nmap scan report for XYZ.fritz.box (192.168.1.30)   # Some other machine
> Nmap scan report for 192.168.1.124                  # This is ploplinux
> Nmap done: 256 IP addresses (39 hosts up) scanned in 7.85 seconds 

Then I logged in over SSH to 192.168.1.124 as root using ploplinux as the password and was able to do what I needed.

Mounting an NTFS drive (some large USB drives use it) required the command mount.ntfs-3g <drive> <mountpoint> to work…

BTRFS-Backup (using squashfs)

Taking a raw image of a partition containing BTRFS is not as convenient as it seems. To get an image of the contained filesystem, it is easier to mount it directly and run mksquashfs on it. The result is then easily mountable on any Linux that supports squashfs. To create the backup, run the following commands, replacing sda1 with the device your filesystem to back up resides on. If you connected a large USB drive to put your backup on, cd there first…

cd <where_some_space_is_available>
mkdir /mnt/tmp
mount -o ro /dev/sda1 /mnt/tmp
mksquashfs /mnt/tmp backup.squashfs

To test your backup image:

mkdir /mnt/test
mount -o loop backup.squashfs /mnt/test

and examine the contents of your mounted filesystem tree. More information about squashing filesystems can be found here or here. If everything is fine, the backup is finished… Don’t forget to eject your USB drive cleanly so you don’t corrupt data with unwritten cache: unmount your USB drive or use eject <usb_device>.

Happy plopping 😉

STM32 UART Continuous Receive with Interrupt

My last post was quite some time ago, due to vacations and a high workload. But now I encountered a problem within an embedded project and want to share the solution with you. Continuously receiving data using interrupts on a UART is complicated (or even impossible) with HAL. Most approaches I found crawling the internet use the LL library to achieve this, and many discussions around HAL do not end in satisfaction. Some work around the problems with dirty approaches (e.g. changing the HAL code itself), others step back from interrupts and use a polling approach.

To be honest, the higher levels of HAL do not offer such a solution. Instead, HAL offers functions to receive a specific amount of data using a non-blocking interrupt approach, handling all the difficulties of tracking the state in the instance structure (huartX) and calling a callback for the various states of reception/transmission, e.g.
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart) or
void HAL_UART_RxHalfCpltCallback(UART_HandleTypeDef *huart)

Using HAL_UART_Receive_IT (not recommended)

An obvious approach that does not touch the HAL code itself is to call HAL_UART_Receive_IT(&huart3, &rxbuf, 1) once after initialization and again at the end of the RxCpltCallback to retrigger the reception. But this leads to an undesired lock-up (possibly a HAL bug) when transmitting data with HAL_StatusTypeDef HAL_UART_Transmit_IT(UART_HandleTypeDef *huart, uint8_t *pData, uint16_t Size), which is probably not the behaviour you want and the beginning of endless debug sessions and frustration.
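For completeness, here is a minimal sketch of this (not recommended) retrigger pattern, assuming a single-byte buffer rxbuf and USART3 as above; the helper name start_continuous_rx() is mine, and the snippet belongs in a source file that already includes the CubeMX-generated headers:

extern UART_HandleTypeDef huart3;  /* CubeMX-generated handle */
static uint8_t rxbuf;              /* single-byte receive buffer */

void start_continuous_rx(void)     /* call once after MX_USART3_UART_Init() */
{
    HAL_UART_Receive_IT(&huart3, &rxbuf, 1);  /* arm reception of exactly one byte */
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART3) {
        /* process rxbuf here ... */
        HAL_UART_Receive_IT(&huart3, &rxbuf, 1);  /* re-arm for the next byte */
    }
}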

Simply Enable the IRQ

The best solution, in my opinion, is instead really simple. Don’t use the high-level receive functions at all for the continuous RX behaviour, since you do not want to receive a specific amount of data but want to be called on every received byte. So configure the UART with interrupt in CubeMX and, after its initialization, enable the interrupt itself, never calling HAL_UART_Receive_IT or any other UART receive function (they disable the interrupt after finishing).

In the section of the appropriate instance in void HAL_UART_MspInit(UART_HandleTypeDef* uartHandle), add the following line of code:

__HAL_UART_ENABLE_IT(&huartX, UART_IT_RXNE);
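In context, it could look like the following sketch of the CubeMX-generated HAL_UART_MspInit() (assuming USART3, as in the interrupt handler below):

void HAL_UART_MspInit(UART_HandleTypeDef* uartHandle)
{
  if (uartHandle->Instance == USART3)
  {
    /* ... CubeMX-generated clock, GPIO and NVIC setup ... */

    /* USER CODE BEGIN USART3_MspInit 1 */
    __HAL_UART_ENABLE_IT(&huart3, UART_IT_RXNE);  /* fire the IRQ on every received byte */
    /* USER CODE END USART3_MspInit 1 */
  }
}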

In stm32xxx_it.c do:

void USART3_IRQHandler(void)
{
    /* USER CODE BEGIN USART3_IRQn 0 */
    CallMyCodeHere();  /* your code must read the data register (DR/RDR) to clear RXNE */
    return;  // To avoid calling the HAL handler at all
             // (in case you want to save the time)
    /* USER CODE END USART3_IRQn 0 */
    HAL_UART_IRQHandler(&huart3);
    /* USER CODE BEGIN USART3_IRQn 1 */
    /* USER CODE END USART3_IRQn 1 */
}

The return statement avoids calling the HAL IRQ handler. I did not try it during a transmit, but it does not seem to disturb anything. If you plan to use the HAL_UART_Receive_IT functions in parallel, you could try to put your code below the handler. I did not test it, but there is a good chance that it works.

Since this approach only touches the user code sections, none of your code will be destroyed by CubeMX code re-generation.

This is all you need… Happy UART processing 😉

If Timestamping is Needed

Simple Millisecond Timestamps

If you want to trigger on inactive time durations (some serial protocols use them as a synchronisation condition), save a timestamp (e.g. HAL_GetTick()) within the UART RX interrupt and look at the difference to the previous one (subtract the duration of one byte to get the real inactive time).
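A minimal sketch of that idea, reusing CallMyCodeHere() from the handler above (huart3 is the CubeMX handle; the 3 ms threshold and the variable names are placeholder assumptions):

#define FRAME_GAP_MS 3U              /* example threshold, match it to your protocol */

extern UART_HandleTypeDef huart3;
static uint32_t last_rx_tick;

void CallMyCodeHere(void)
{
    uint32_t now = HAL_GetTick();        /* millisecond tick counter */
    uint32_t gap = now - last_rx_tick;   /* unsigned subtraction handles wrap-around */
    last_rx_tick = now;

    if (gap >= FRAME_GAP_MS) {
        /* bus was idle long enough: treat this byte as the start of a new frame */
    }

    /* reading the data register also clears RXNE (DR or RDR, depending on the family) */
    uint8_t byte = (uint8_t)(huart3.Instance->DR & 0xFF);
    /* ... store byte into your frame buffer ... */
}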

High Resolution Timestamps

If sub-millisecond resolution is required, run a timer with a prescaler matching the desired resolution and take the counter value of that timer instead of the tick counter (you can get it with __HAL_TIM_GET_COUNTER(&htimX)).
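The same gap detection with a free-running timer could look like the sketch below (htim2, the 1 µs tick and the 1750 µs threshold are assumptions for illustration, not values from the original text):

/* Assumes TIM2 is configured as a free-running 32-bit timer with a prescaler
 * giving 1 tick per microsecond and started once with HAL_TIM_Base_Start(&htim2). */
extern TIM_HandleTypeDef htim2;
static uint32_t last_rx_us;

void CallMyCodeHere(void)
{
    uint32_t now    = __HAL_TIM_GET_COUNTER(&htim2);  /* microsecond timestamp */
    uint32_t gap_us = now - last_rx_us;               /* wrap-around safe for a 32-bit counter */
    last_rx_us = now;

    if (gap_us > 1750U) {  /* example: more than ~1.75 ms of idle time (minus one byte duration) */
        /* start of a new frame */
    }
    /* ... read the data register and process the byte as in the millisecond variant ... */
}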

Hope this helps in your next project using a UART 🙂

How to build a Smart Home

Since everybody complains that smart homes mean vendor lock-in, are too expensive, and hand your data for free to some suspicious cloud provider, … I need to present my smart home solution to you. It can run completely inside your own „four walls“ and does not need any third party if you do not want it to. Additionally, there is NO vendor lock-in. Everything can be open source and connects to almost any existing smart home appliance.

What am I talking about? It’s iobroker.

If you already own a Homematic system from eQ-3 (formerly ELV), iobroker is a must-have. BTW: If you already own a Charly, you can install iobroker on it without additional hardware, but be advised to install a USB drive with enough storage for the data that accumulates as the years pass by. I would not suggest a flash-based drive, since you have to trade off the amount of data lost on a power cut against the endurance of the drive; write cycles will happen quite often…

Welp, where to start? I would first get a low-power and reliable mini PC. The solution I selected is from Hardkernel and is called ODROID HC1 (in Germany best bought at Pollin for 60 €). This is a little ARM board running Ubuntu Linux and providing space for a single 2.5″ HDD. I selected a WD Red 1 TB drive for that purpose. The Linux OS itself needs to be put on a micro-SD card.

I would suggest to also buy the following:

  • Power supply (or a PoE+ adapter if you own a PoE+-capable switch)
  • Micro-SD card with at least 8GB (best with ubuntu preinstalled)
  • The serial cord from hardkernel
  • The Battery (for RTC)
  • A hard disk for all the data your smart home will collect and
  • A little case against the dust, which will definitely render your appliance very attractive 😉

Start installing your software. First you need to install the Ubuntu OS provided by Hardkernel (if not already installed).

When everything has arrived, connect it to your network, find out the IP address or use the serial cord, and start installing everything you need (assuming you do everything as root; if not, prepend sudo where needed):

Prepare your hard disk

dmesg  # Find out which one is your HDD, assuming /dev/sda
fdisk /dev/sda
# Create 2 partitions:
# 8 GB as swap
# Remainder as ext4 or something else (I prefer btrfs)
# write the partition table by entering 'w'
mkswap /dev/sda1
swapon /dev/sda1
mkfs.ext4 /dev/sda2
mkdir /var/data
blkid /dev/sda2   # Take the UUID part
echo "UUID=<the-above> /var/data   ext4 errors=remount-ro,noatime 0 1" \
  >> /etc/fstab
mount /var/data
mkdir /var/data/iobroker /var/data/influxdb
mkdir /opt/iobroker /var/lib/influxdb
echo "/var/data/iobroker/ /opt/iobroker  none bind 0 0" \
  >> /etc/fstab
echo "/var/data/influxdb/ /var/lib/influxdb none bind 0 0" \
  >> /etc/fstab
mount -a   # also mounts the two bind mounts

Installing node.js

sudo apt-get install python build-essential curl
mkdir src
cd src
wget https://nodejs.org/dist/v10.15.3/node-v10.15.3.tar.gz
tar xzvf node-v10.15.3.tar.gz
cd node-v10.15.3
./configure --without-snapshot
make
./node -v  # If a version is returned, then 'make' was OK
make install

Installing InfluxDB

curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | \
  sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt-get update && sudo apt-get install influxdb
sudo systemctl unmask influxdb.service
sudo systemctl start influxdb

Installing Go language

apt install git golang-go  # needed to build a newer go
cd /usr/lib
git clone https://go.googlesource.com/go
cd go
git checkout go1.12.5
cd src
./all.bash
cd $HOME
cd /usr/bin
rm go
ln -s ../lib/go/bin/go go
cd $HOME
cat >>.profile <<'HERE'  # quoted so $GOPATH is expanded at login, not now
export GOROOT=/usr/lib/go
export GOPATH=$HOME/go
[ -d $GOPATH ] || mkdir $GOPATH
[ -d $GOPATH/bin ] || mkdir $GOPATH/bin
export PATH=$GOPATH/bin:$PATH
HERE

Installing yarn

see: https://yarnpkg.com/en/docs/install#debian-stable

Installing Chronograf

For creating your own queries against InfluxDB (e.g. with Node-RED), it is easiest to have a good InfluxDB interface, best used in a browser…

With go and yarn, it is as easy as boiling water:

go get github.com/influxdata/chronograf
cd $GOPATH/src/github.com/influxdata/chronograf
make
go install github.com/influxdata/chronograf/cmd/chronograf

Now you can start Chronograf:

chronograf   # it is installed to $GOPATH/bin, which is in your PATH

Now visit the Chronograf web interface by browsing to http://<host>:8888. A wizard will welcome you. Enter the appropriate data to access your InfluxDB. You can skip the Kapacitor question. After finishing the wizard, you should see the following on Chronograf’s Config tab.

When this is done, issue your first query on the Explore tab (use the datapoint selector on the lower half of the page and select an appropriate date range from the date picker in the upper right corner):

Install iobroker

COMING SOON…

OrangePi 4G-IoT Complete Pack

Since Android 8.1 is quite slow on the OrangePi 4G-IoT, I decided to give Android 6 a try. OrangePi.org also provides mega.nz links for these packages, which is quite inconvenient… Here is a torrent that contains all the stuff for the 4G-IoT. In many torrent clients, you can choose which files should be downloaded…

https://downloads.the78mole.de/OrangePi-4G-IoT_Full.torrent

I’ve repackaged the tars, since xz compresses more densely than gz. It saves around 50% of space and traffic.

Enjoy the 4G-IoT 😉

How to Build A Private Storage Cluster (with Ceph)

Since my NAS (a QNAP TS-419P II) got more and more buggy, especially with non-working Windows shares and the painfully low processing power of the integrated ARM single core, I wished for something like a SAN for myself. But a SAN is quite expensive, peripheral hardware (switches, UPS, …) not included. So I decided to skip a few levels and build a NAS 2.0 storage cluster based on the open-source Ceph, using the low-budget ODROID HC2 (octa-core: 4 x Cortex-A15 + 4 x Cortex-A7) from Hardkernel as the workhorse for the storage nodes. To make it even more dense, you can use the ODROID HC1, which is just the same but for 2.5″ disks (be aware of the power supply: HC2 = 12 V, HC1 = 5 V!!!).

If you don’t need a SATA drive (e.g. for the controlling nodes of the cluster: mgr, metadata, nfs, cifs,…), you can use the MC1, MC1 solo, XU4 or XU4Q.

If you want to go with x86 instead of ARM, the ODROID H2 looks like a great alternative, but it will also be a bit more expensive (e.g. RAM is not included).

In fact, installing Ceph is much less painful on 64-bit x86 than on 32-bit ARM. I decided to go with 32-bit ARM because I want to build the most energy-efficient cluster, to maximize scale-out capabilities also in terms of my private budget.

To build up the cluster, I currently use 4 x ODROID HC2 with WD Red 4 TB drives (WD40EFRX), also distributing the non-OSD Ceph services across this little cluster. The BOM for my test cluster is as follows:

If you power up the cluster in sequence (not all nodes at once), you can reduce the power requirements of the supply a lot (currently 12 V / 2 A per node, 5 V / 4 A for the HC1). I will dive into this topic a bit deeper in the future. I think it can be done in software by delaying the spin-up through some boot argument. Nevertheless, an optimal solution would be a power distribution unit for switching and measuring the supply current that also provides some UPS capability on the low-voltage path. Additionally, current measurements could give you the ability to regulate the power, e.g. through cpufreq, to optimize the efficiency of the cluster and the power supply.

To generate the Debian packages for installing Ceph on the nodes, follow the instructions here. When you have built the Debian packages, move them over to some HTTP(S) server so they are easily accessible by your nodes.

Your Own Debian Package Repository

To be accessible, a Debian package repository needs to be placed in a webserver directory reachable at least within your own network. It is best practice to secure this repository with SSL, since Debian APT more or less expects this… So we first start with creating a self-signed CA. Later, if needed, you can easily replace the certificate with an official one or let an authority also sign your server certificate.

Generating a CA for SSL

This part is based on the tutorial here. First, we will simply use self-signed certificates, since this is much easier and faster than using officially signed certificates. We will then place the CA in the certificate store of our Linux OS to make it trust ourselves. 😉

mkdir ~/CA
cd ~/CA
# Generate the CA key
openssl genrsa -out ca.key 4096
# Generate the CA certificate, here, you can leave the CN empty
openssl req -new -x509 -key ca.key -days 366 -out ca.crt
# Make it inaccessible to other users
chmod 700 ca.key
# Generate a certificate configuration
wget https://raw.githubusercontent.com/the78mole/scripts/master/templates/configs/ssl/cert.conf -O example.org.conf
# Edit the configuration
vi example.org.conf
# Create a server certificate key and the signing request (not yet the cert)
openssl req -new -out example.org.csr -config example.org.conf
# Create the public key
openssl rsa -in example.org.key -pubout -out example.org.pubkey
# Sign the CSR with your CA and create the certificate
openssl x509 -req -in example.org.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extensions my_extensions -extfile example.org.conf -days 366 -out example.org.crt

To get the alternative DNS names and IPs added to the certificate, you need to pass the config file via -extfile and point -extensions to the config section where the extensions are defined. This is because openssl ignores the extensions in the CSR when signing, so you have to specify them explicitly.

After generating the certificate, you need to import it wherever you need it to be accepted (browser, APT). For testing, it is best to try it with a browser. Tutorials can be found here (it’s German, so use Google Translate to read it in English) and here. Use the shell of your desktop Debian system.

scp <CA_HOST>:/<PATH_TO_CA>/ca.crt example_ca.pem
sudo cp example_ca.pem /usr/local/share/ca-certificates/
sudo update-ca-certificates

To add the certificate to your browser, e.g. Chromium:

sudo apt install libnss3-tools
certutil -A -n "Example Company CA" -t "TCu,Cu,Tu" -i example_ca.pem -d ~/.pki/nssdb

Note: Maybe this does not work correctly… Then, in Chromium, use Settings -> Privacy and Security -> Manage Certificates -> Import -> select the CA -> check all boxes.

Now we need the CA’s and the server’s certificate along with the server key for securing webserver traffic.

Install and Configure the Webserver

Welp, we will use nginx as our webserver. Feel free to use any other; it does not really matter. In fact, every further step (e.g. the Let’s Encrypt tutorial) will be based on nginx.

sudo apt install nginx
cd /etc/nginx
cp snippets/snakeoil.conf snippets/ssl_example.org.conf
mkdir -p ssl/pub
mkdir -p ssl/priv
sudo chown -R root:www-data ssl
sudo chmod -R 0755 ssl/pub
sudo chmod -R 0750 ssl/priv
cp ~/CA/ca.crt ~/CA/example.org.crt ssl/pub/
cp ~/CA/example.org.key ssl/priv/
# Edit the ssl config file to your needs
vi snippets/ssl_example.org.conf
# Now adjust the nginx configuration to use SSL
vi sites-enabled/default
# Ensure the following lines are present and not commented out:
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
# include snippets/ssl_example.org.conf;
service nginx restart

When everything is OK, use your desktop web browser and point it to https://example.org. You should get the page without an error. This means you have set up a CA you can use to sign server certificates, and they get trusted.

If you plan to use the Debian package repository on many of your Linux hosts, you should add your CA certificate to the certificate store on all those machines.

Generating GnuPG Key-Pair

To sign a file, email, hash, Debian package, repository, … you often need GnuPG. To be able to sign something, you first need to generate your own key, which has to be trusted by at least the receiving party. All this again works with asymmetric encryption, just like the signing of certificates does. An in-depth tutorial with links to even deeper knowledge can be found here.

First we should install a tool to gather some entropy; otherwise gnupg may not be able to generate a key on a headless system (no real user input, … -> very few entropy sources).

apt install rng-tools

If it cannot find a hardware RNG, you can still try to get randomsound working (if you have a sound card…):

apt install randomsound

Run this in a separate window if gpg is collecting entropy for too long. gpg will abort after some time if it cannot generate the key.

arecord -l # Do you have any soundcard?
randomsound -v

If all else fails, you can still pipe some data into /dev/random to feed the entropy pool, e.g. with the following (also in a separate window while gpg --full-gen-key is running):

sudo dd if=/dev/sda of=/dev/random status=progress

You can watch the entropy-pool with:

watch -n 0.5 cat /proc/sys/kernel/random/entropy_avail

To finally generate a GPG-key, simply follow the instructions below:

apt install gnupg
# Create the .gnupg directory easily and add a secure configuration
gpg --list-keys --fingerprint
wget https://raw.githubusercontent.com/the78mole/scripts/master/templates/configs/gnupg/gpg.conf -O ~/.gnupg/gpg.conf
gpg --full-gen-key
# Select:
# Key type : RSA and RSA
# Keysize : 4096
# Expiration: 1y
# Then enter your name and email, but don't include a comment
# Skipping the password makes CI much easier, but less secure...
# It will take some time (maybe minutes) to generate the key

Creating the debian repository (reprepro)

make-debs already created a Debian repository, but we will create one that is more general and also serves well for other software packages.

…. to be continued …

Coming soon: To add some real NAS features, we could use just another embedded board with e.g. FreeNAS or Nextcloud installed, mounting the cluster file system and using the cluster as the storage backend. We already have nginx with SSL configured, so we can easily add reverse proxy targets… (for an HTTPS-to-HTTPS proxy, see here)

Compile Ceph (master) on ARM (32-Bit)

TODO: Test all of this on a virgin armhf system (Raspberry Pi, ODROID HC1/HC2/XU4, …) and complete the TODOs for OpenSSL and PhantomJS (and the sass dependency). Maybe with the new master tree it is no longer necessary to build them outside the Ceph repo.

First install prerequisites:

sudo apt install python-pip build-essential libgmp-dev \
libmpfr-dev libmpc-dev reprepro

Install nodejs from nodejs.org

curl -sL https://deb.nodesource.com/setup_11.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo npm install -g npm

Then prepare a swap file (you will need it 😉 ):

dd if=/dev/zero of=/<some-hdd-path>/swapfile \
   bs=1M count=8192 status=progress
mkswap /<some-hdd-path>/swapfile
swapon /<some-hdd-path>/swapfile

Then we should install some dependencies

sudo apt install libgmp-dev libmpfr-dev libmpc-dev ruby

Now install a new GCC that supports C++17.

wget https://ftp.gnu.org/gnu/gcc/gcc-8.2.0/gcc-8.2.0.tar.xz
tar xfJ gcc-8.2.0.tar.xz
cd gcc-8.2.0
./configure --prefix=/usr/local/gcc-8.2 --program-suffix=-8.2                      # for armhf
# ./configure --prefix=/usr/local/gcc-8.2 --program-suffix=-8.2 --disable-multilib # for x86_64/arm64
make
sudo make install   # installs to /usr/local/gcc-8.2, matching the update-alternatives paths below

Building Ceph with do_cmake, building a Debian package with make-debs.sh, or simply building packages with another compiler than the Debian default one (6.3.0), requires you to change the default compiler for the whole system, e.g. to gcc-8.2.0:

sudo update-alternatives --install /usr/bin/cc cc /usr/local/gcc-8.2/bin/gcc-8.2 50
sudo update-alternatives --install /usr/bin/c++ c++ /usr/local/gcc-8.2/bin/g++-8.2 50

Check out OpenSSL 1.0.2 stable (this also seems necessary for armhf) and PhantomJS, then compile and install them:

cd /opt/GIT
git clone git@github.com:openssl/openssl.git
cd openssl
git checkout OpenSSL-1_0_2-stable
...TODO...
# Following seems only necessary on arm
# (or all platforms without a precompiled binary)
cd /opt/GIT
git clone git@github.com:ariya/phantomjs.git
cd phantomjs
...TODO...
sudo LD_LIBRARY_PATH=/opt/openssl_build_stable/lib/ \
deploy/package.sh --bundle-libs

Add the following to build.py (at L:244, just after PlatformOptions.extend)

phantom_openssl = os.getenv("PHANTOM_OPENSSL_PATH", "")
if phantom_openssl != "":
    os.putenv("OPENSSL_LIBS", "-L" + phantom_openssl + "/lib -lssl -lcrypto")
    openssl_include = "-I" + phantom_openssl + "/include"
    openssl_lib = "-L" + phantom_openssl + "/lib"
    platformOptions.extend([openssl_include, openssl_lib])
    print("Using OpenSSL at %s" % phantom_openssl)

Then install it to /opt

Build and compile Ceph

git clone git@github.com:the78mole/ceph.git
cd ceph
git checkout wip-32-bit-arm-fixes
./install-deps.sh
./do_cmake_arm32.sh # for armhf
# ./do_cmake.sh # for x86_64/amd64 or arm64
cd build
make -j4
# if it gets really slow due to swapping, break and do make -j1
# or use the scheduler-script from link below

Here you can find a rudimentary (but working) script that suspends compiler processes based on the total memory consumption of all compilers. Running it through the ‚watch‘ tool, you can start e.g. 8 jobs and, when the memory limit is reached, it will suspend (kill -TSTP) the youngest tasks in terms of user-space runtime:
https://github.com/the78mole/scripts/blob/master/linux/bash/schedule_compile.sh

Now do…

cd ..   # Back to ceph base dir
./make-debs-arm32.sh # for armhf
# ./make-debs.sh # for x86_64/amd64 or arm64

If you encounter problems with setuptools (Exception -> TypeError: unsupported operand type(s) for -=: ‚Retry‘ and ‚int‘), try to get a more recent version of python pip with the following commands and rerun make-debs-arm32.sh.

apt-get remove python-pip python3-pip
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
python3 get-pip.py

If I forgot anything needed to make it work, feel free to write a comment…

Compiling Software on RAM-limited Multi-Core Systems

Since I often compile stuff on embedded ARM targets that are well equipped with processing power (Exynos octa-core) but neglected regarding RAM (2 GB), I often face the trade-off between running multiple compilation jobs or only a single/few (make -j8 vs. make -j1). If you start too many jobs and have large compilation units (e.g. with the Ceph project sources), the system will feel jammed as soon as it begins swapping.

I felt that deciding the job count at the very beginning is (was) a trade-off I was not willing to accept. Therefore, I decided to write a little script that suspends compile processes once they cross a certain memory limit. This way, the suspended processes get moved to swap and the still-running processes get a comfortable amount of RAM. The kernel is then not forced to move pages around on every scheduling round. Instead, it moves them to swap once for the suspended processes when it needs RAM for the running ones, and as soon as the processes with a large memory footprint finish, the suspended ones come back to life.

Welp, I decided to base the priority on the user-space processing time each process has eaten up, so the older ones (often the most memory-hungry) get processed first. This scheduling scheme proved to be a good decision and is also not hard to implement as a bash script.

Here you can find the little script; it needs to be run within a loop or simply with the watch tool (maybe with sudo):

watch scripts/linux/bash/schedule_compile.sh

Happy compiling!