Archive: January 22, 2021

Angular Web App on ESP32

In progress…

TL;DR (take me to the battlefield)

Introduction and Motivation

In a recent project, we added an ESP32 to replace wired communication with a BLE solution. Additionally, the customer wants an administrative web application to configure the device more easily and to get more detailed information during servicing.

Creating a simple web app is not a hard task for an embedded C developer. But creating a beautiful, or at least not shitty looking, web application is just another game. When you look more closely at the available space on an ESP32, you will realize that there is at most 16 MB of flash to live with.

If you create a simple WiFi AP on an ESP32 4 MB version with ESP-IDF, you can easily see that there is not much left for BLE and the application.

Build statistics for a simple WiFi AP with only a hello-world index.html on a Lolin32 board

Unfortunately, I only have the 4 MB version available, which complicates the task even more. And no, I cannot simply change to 16 MB, since a few hundred units are already in operation, waiting for a firmware update to provide WiFi service. You know how this works… deliver the hardware as early as possible and hand in features later 😛

But this means there is much headroom for optimization and not much space for resources and code. If you have a look at web application frameworks (e.g. React) and build some basic apps, they all come with megabytes of resources in the production build output folder. If you build a very basic hello-world Angular application, you will find a few hundred kilobytes of resources after building it for deployment on a static web space. That is just small enough to fit into the flash.

Now, let’s talk about the solution…

Prerequisites

First of all, you will need to create an Angular and an ESP-IDF project. This is easily done if you have already installed Visual Studio Code with PlatformIO.

To create an Angular project, just follow the official guide. If you installed npm correctly, you can issue the command in your Visual Studio Code terminal to fire up the project.

ng new esp-on-angular

After these steps, you have two separate projects. You can easily set up a script to copy the Angular production build over to the ESP project, do it manually, or link it in using git submodules. But this is not part of this tutorial; we simply copy the website content over manually from the Angular project output folder.
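If you want to automate the hand-over later, two commands are enough (a sketch only: dist/esp-on-angular is the default output folder for the project name chosen above, and ../esp-on-firmware/res stands in for your ESP project's resource folder):

ng build --prod
cp -r dist/esp-on-angular/* ../esp-on-firmware/res/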

In addition to npm, VSCode and PlatformIO, you might need a basic Linux environment when working on Windows, if you intend to use the automation script I'll provide later. On my machine, I use MSYS2/MinGW to get the CLI tools I need: bash, find, gzip, stat.

Getting Started

Firmware Web Content Integration

While it is quite easy to simply handle incoming requests and respond to them, like you would do for a RESTful web service (which is the usual case for IoT stuff like the ESP32), integrating plain files is usually a more complex task. Thanks to the ESP-IDF, this is not the case here.

In general, there are two options. One is to add the files to a SPIFFS partition and read them from there. The other is to embed the files in your firmware image. The latter is what we will use in this tutorial, since it is more dense (in terms of size), although it is less flexible and requires a bit more lumber in your config.

For it to work, simply add the files to the board_build.embed_txtfiles or board_build.embed_files entries in the platformio.ini file of your ESP project. I usually put these resource files into a separate folder called res.

board_build.embed_txtfiles = 
    res/index.html
board_build.embed_files =
    res/logo.png

Once you have done so, you can easily link in the content with the following lines in your code; PlatformIO will take care that these are valid symbols and get linked in:

extern const uint8_t index_html_start[] asm("_binary_index_html_start");
extern const uint8_t index_html_end[]   asm("_binary_index_html_end");
extern const uint8_t logo_start[] asm("_binary_logo_png_start");
extern const uint8_t logo_end[]   asm("_binary_logo_png_end");

When you want to use that data in your HTTP handler, just use the pointers declared above:

httpd_resp_set_type(httpRequest, "text/html");
httpd_resp_send(httpRequest, (const char*) 
    index_html_start, HTTPD_RESP_USE_STRLEN);

…or for your png image file…

httpd_resp_set_type(httpRequest, "image/png");
httpd_resp_send(httpRequest, (const char*) 
    logo_start, logo_start - logo_end);

I personally use only the binary version, since it can also handle text files and makes the code more universal. You will see later that there will not be much text left anyway 🙂

The symbol names for your file objects (the part in the asm specifier) are built by concatenating _binary_ with the name of your file, all special characters replaced by underscores _, plus the suffix _start or _end respectively. In short: _binary_<FILENAMEREPL>_start and _binary_<FILENAMEREPL>_end. For example, res/logo.png ends up as _binary_logo_png_start and _binary_logo_png_end.
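For context, a complete handler with its registration could look like the following (a minimal sketch: the function and variable names are mine, only the esp_http_server calls and types are ESP-IDF):

#include "esp_http_server.h"

extern const uint8_t index_html_start[] asm("_binary_index_html_start");

/* Serve the embedded index.html on GET / */
static esp_err_t index_get_handler(httpd_req_t *httpRequest) {
    httpd_resp_set_type(httpRequest, "text/html");
    return httpd_resp_send(httpRequest, (const char*) index_html_start, HTTPD_RESP_USE_STRLEN);
}

static const httpd_uri_t index_uri = {
    .uri     = "/",
    .method  = HTTP_GET,
    .handler = index_get_handler,
};

void start_webserver(void) {
    httpd_handle_t server = NULL;
    httpd_config_t config = HTTPD_DEFAULT_CONFIG();
    if (httpd_start(&server, &config) == ESP_OK) {
        httpd_register_uri_handler(server, &index_uri);
    }
}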

Welp, that is quite fine, but the files still take up quite some space, and we do not have a lot of it. When using the 4 MB version of the ESP32, we simply cannot allow that. So, what can we do to further reduce the size of our resources?

You could ask: why not push the images and fonts to a publicly accessible location on the world wide web? Because some of us want to make it self-contained. The reason for me was simply the security and safety requirements of the project: keep the system closed to the outside world, provide a WiFi access point and serve all needed files yourself. Do not depend on the internet in general. If you have other requirements, kick your ESP into your local WiFi and access it from there, or use the AP/client mode, where the ESP connects to a WiFi as a client, provides a WiFi AP and acts as a router between these two networks. But be warned: don't expect too much performance from the latter option.

Welp, back to the size problem… What to do to make it fit?

Some quirks to shrink it down

Optimizing the ESP-IDF libraries and build scripts to free up some flash is quite an intensive task and needs a deep understanding of the IDF internals. So we will first try to find some other quirks to reduce the size of our own big blobs.

Angular – Stripping the unneeded

While Angular itself is really tiny, it links in some fonts from Google (remember, we want to be self-contained). This is usually not needed, because devices nowadays have suitable replacement fonts installed, so we can simply delete the links from the generated index.html and rely on the local resources of the client.

<link href="https://fonts.googleapis.com/css?family=Roboto:300,400,500&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">

Zero Effort Zipping

While Angular already minifies most of its output files, it would be really awesome to compress the resources somehow. But integrating a compression library and using it stacks up again…

Then I had a flash of thought. Once upon a time, web speed analysis tools started to raise warnings when your website content was not delivered in a compressed format. Nowadays, web hosting providers enable brotli and/or gzip compression by default, and there are plenty of tutorials available about WordPress and how to enable compression. All modern web browsers support gzip compression by default and offer it as a supported content encoding scheme.

Welp, but what does this have to do with my web application content? Shall the ESP compress all content ahead of the transfer? Of course not! We simply compress all files ahead of linking the firmware and fake the content type to be the original one while setting the content encoding to gzip. This way, the ESP is completely liberated from compression and we can reduce the size of our firmware as much as possible.
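In code, the whole trick boils down to one extra response header (a sketch; the _gz symbol names assume you embedded a pre-compressed res/index.html.gz instead of the plain file):

httpd_resp_set_type(httpRequest, "text/html");               /* the original type, not gzip */
httpd_resp_set_hdr(httpRequest, "Content-Encoding", "gzip"); /* the browser inflates it */
httpd_resp_send(httpRequest, (const char*) index_html_gz_start,
    index_html_gz_end - index_html_gz_start);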

Do I need to compress all files manually and somehow save the original content type? Not really. I wrote a simple bash script that does most of the work for you and creates a header with a look-up table (LUT) for the code to access it easily. A tiny URI router will search the requested file within the LUT and pick all the information it needs from it.
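The script itself will follow later; to give you an idea, the generated header could define the LUT, and the router could use it, roughly like this (a sketch of the concept, not the actual generated code):

#include <string.h>
#include "esp_http_server.h"

typedef struct {
    const char    *uri;   /* request path, e.g. "/main.js" */
    const char    *type;  /* the original content type */
    const uint8_t *start; /* gzip-compressed, embedded data */
    const uint8_t *end;
} res_entry_t;

/* one entry per embedded file, emitted by the script */
extern const res_entry_t res_lut[];
extern const size_t res_lut_len;

/* tiny URI router: look the requested file up and serve it gzip-encoded */
static esp_err_t res_get_handler(httpd_req_t *req) {
    for (size_t i = 0; i < res_lut_len; ++i) {
        if (strcmp(req->uri, res_lut[i].uri) == 0) {
            httpd_resp_set_type(req, res_lut[i].type);
            httpd_resp_set_hdr(req, "Content-Encoding", "gzip");
            return httpd_resp_send(req, (const char*) res_lut[i].start,
                res_lut[i].end - res_lut[i].start);
        }
    }
    return httpd_resp_send_err(req, HTTPD_404_NOT_FOUND, "File not found");
}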

Putting the pieces together

The Angular Project

The ESP Project

After we have created the ESP project as described in my other post, which follows below, we will start implementing a simple web service.

ESP32-EVB, PlatformIO and ESP-IDF – Yet Another ESP32 Tutorial

I already posted a tutorial on using the ESP32-EVB from Olimex with Arduino. This time, I will provide the same with ESP-IDF, the original SDK from Espressif. Why did I decide to do this? Because I had a project using it 😛

To install PlatformIO, just follow the guide.

As shown in the screenshots below, create a new project (VSCode -> the PIO icon -> New Project), define a project name, select Olimex ESP32-EVB with the ESP-IDF framework and press Finish. After PIO has downloaded all dependencies and configured your project, we are ready to go!

Now just wait until it finishes…

After PIO finished setting up the project, you will find the platformio.ini with its configuration. Add the line monitor_speed = 115200 to the file.
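The file should then look roughly like this (the env name and values are what PIO generates for this board; only the monitor_speed line is added by hand):

[env:esp32-evb]
platform = espressif32
board = esp32-evb
framework = espidf
monitor_speed = 115200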

After this, run an update on pio (especially useful if you had PlatformIO already installed).

pio update
pio upgrade --dev

We can now build the empty application by entering pio run in the terminal.

If this succeeds, we will edit the main.c file and create a hello-world application.

Hello World – Relay Toggle

Open src/main.c and edit it so it looks like this:

#include <stdio.h>
#include "sdkconfig.h"
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_system.h"
#include "driver/gpio.h"

#define RELAY_GPIO      32    /* GPIO number of the relay, taken from the board schematics */

void app_main() {

    /* Set the relay GPIO as a push/pull output */
    gpio_pad_select_gpio(RELAY_GPIO);
    gpio_set_direction(RELAY_GPIO, GPIO_MODE_OUTPUT);

    /* Toggle the relay every 500 ms and print a marker for the serial monitor */
    while(1) {
        printf("Test\r\n");
        gpio_set_level(RELAY_GPIO, 0);
        vTaskDelay(pdMS_TO_TICKS(500));
        gpio_set_level(RELAY_GPIO, 1);
        vTaskDelay(pdMS_TO_TICKS(500));
    }
}

…connect your Olimex ESP32-EVB and enter pio run -t upload in the terminal.

Done! Your relay should toggle twice every second.

The pin numbers are simply the GPIO numbers you can find in the schematics of your board; in this case, it is 32.

printf-Debugging

Despite the fact that printf debugging is somewhat frowned upon, it is a very practical first step. And with PlatformIO and evaluation boards like this one, it is as easy as pressing a button. No additional wiring, no hassle with pins, ports or blown-up code. We already prepared the test in our code above. It prints "Test" every second on the serial console, we just cannot see it yet. For it to show up, you need to open the serial monitor:
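If you prefer the terminal over the serial-monitor button, you can start it with:

pio device monitor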

From now on, you should see the printf output in your terminal every time you upload your code again:

Do not forget to close the serial monitor when you upload your code the next time. The serial port is blocked by the monitor, so the uploader cannot access it.

After closing the serial monitor, you should drop back to your terminal, where you can start over with pio run -t upload.

OK, that’s all. There are many tutorials out there on how to set up a web server, control pins, use SPI or Bluetooth or whatever. You will find information on all peripherals here. Just remember to include the header(s) that are mentioned at the top of every peripheral page, like we did with #include "driver/gpio.h" above in main.c.

Now you have a great starting point for ESP-IDF 😛

Have fun coding!

Sorting Your Digital Mess – How to Easily Set Up a Private Search Engine

Motivation

During my vacation, I kicked off some new projects. One of them is an ARM64-based SBC with integrated SATA, much like the Odroid-HC1/2. The main difference is the ARM core (64 vs. 32 bit), to be able to run Ceph on it.

If you have read some of my previous posts, you will notice that I already put in some effort to make Ceph run on 32-bit controllers. But the effort was way too high to get and keep it running. The new SBC is another story, which I will tell when the time is ripe for it.

When I thought of putting all my data on a Ceph cluster, I stumbled over the problem of how to manage all these bits and bytes and how to keep an overview. There should be something like a Google search for your private data. When you dig through the net, you will find many sites, blogs and books about Elasticsearch. But there is always the problem of how to get your data in there without deep knowledge of data providers, ETL, graphs,… But you can also find OpenSemanticSearch (OSS), and I gave it a try.

A Private Search Engine – Open Semantic Search

To be honest, OpenSemanticSearch is not the most beautiful engine you could think of, but it gives you very deep insight into your data, assuming you configured it correctly. Unfortunately, the documentation is quite sparse and there have not been many bloggers writing on the topic, at least not for a current version of it.

After you finish the installation, first do some deeper configuration. If you choose to use a VM, I suggest a RAM-centric configuration. Better take 12+ GB of RAM and only 4 cores. Every core will lead to an ETL task that eats up a significant amount of RAM, depending on the data it is assigned to index.

To avoid links in the WebUI pointing to an unreachable destination, do some configuration ahead of adding local crawler paths. If you mount your data using mount to a path in your filesystem on the OSS server, the link will be provided unchanged (e.g. /mnt/myData) and the URL will just be the location on your server, without http, hostname, location suffix,… Not really helpful if you have mounted NFS shares and want to access them from a remote Windows machine.

OSS already provides a document server that proxies your requests to HTTP, but you need to configure it correctly. If you do not configure it in the beginning, your whole indexing run will create wrong links, and it is hard to stop the indexing and do a rerun (see the troubleshooting section below). The task will run until it has done its job; for terabytes of data, this will take at least a few days.

Installation and Configuration of OSS

The installation of OSS is quite straightforward, as described on their web page.

The configuration is a bit more complex due to the sparse documentation. Some of it can be done with the web UI, but other parts are a bit more hidden. But first things first.

Mounting Your Data

The first thing to do is to make your data accessible to the indexer.

[TODO]

Configuring Apache to Proxy the Documents

The quick hack is to add your path to proxy in /etc/apache2/sites-available/000-default.conf:

Insecure Proxy for documents

This allows the documents to be accessed through http://<IP_OF_YOUR_OSS>/documents/mnt but also through http://<IP_OF_YOUR_OSS>/mnt. We will secure this later on, but for debugging, this is quite helpful.
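As a rough idea of that quick hack, the addition could look like the following minimal sketch, assuming your data is mounted under /mnt (the directives and paths are assumptions, not the exact config of my setup):

# inside the <VirtualHost> of 000-default.conf
Alias /documents/mnt "/mnt"
Alias /mnt "/mnt"
<Directory "/mnt">
    Require all granted
</Directory>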

To get the links in OSS right, you also need to adjust /etc/opensemanticsearch/connector-files to contain a line with the following content:

config['mappings'] = { "/": "http://172.22.2.108/documents/" }

You can add more mappings, but this one helps to get the links in the OSS web UI right.

(Re-)Starting the OSS server

I would advise doing a simple reboot of your server, also to check whether the shares get mounted correctly. For my Debian Buster, this is not the case: it fails to mount the NFS shares for some reason I have not yet dug into, so I mount them manually after it has started. I assume the systemd start dependencies are a bit buggy.

[Much more to write on… Coming soon]

Troubleshooting

So, what to do if something goes wrong? E.g. if you decided to index a ton of data, it is time to purge the queue. But how?

Purging the Queue

OpenSemanticSearch uses RabbitMQ to organize the indexing tasks. If OSS decides to index a path, it will simply list all files in this path and put them into the open_semantic_etl_tasks queue of your RabbitMQ server. But the user interface of OSS does not provide a means of purging or deleting the content of the queue. For that, you need to activate the RabbitMQ management web UI.

sudo rabbitmq-plugins enable rabbitmq_management

After this, you can check whether your server listens on the appropriate interface (0.0.0.0 for access from another host) and port (15672):

netstat -nlpt output of a typical OSS host with enabled RabbitMQ Web UI

With this checked, we need to add a user with administrative rights to the RabbitMQ instance of the OSS host.

rabbitmqctl cluster_status   # Check if everything is fine
rabbitmqctl list_users       # Check that your webuser does not exist yet
rabbitmqctl add_user webuser <PASSWORD>
rabbitmqctl set_user_tags webuser administrator
rabbitmqctl set_permissions -p / webuser ".*" ".*" ".*"

Once this is done, you can simply access the user interface. Type http://<IP_OF_OSS_HOST>:15672/ into your browser and the login will be presented to you.

RabbitMQ Web Login

After logging in, head over to Queues and enter your open_semantic_etl_tasks queue.

Queues view of RabbitMQ web UI

You should be presented with the queue details:

Queue details of open_semantic_etl_tasks

In my example, you can see a very limited count of ready tasks. This can go up to a few thousand tasks when indexing a large directory with many files. Each file adds a task to this queue, and RabbitMQ hands them out to the ETL workers of OSS.

To delete all messages in the queue, you can simply hit the Purge Messages button.

Purging the queue
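If you prefer the command line, the queue can also be purged with rabbitmqctl, with the same effect as the button:

rabbitmqctl purge_queue open_semantic_etl_tasks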

After this, you can re-enqueue the files to be indexed on your OSS search site through http://<IP_OF_OSS_HOST>/search-apps/files/
