
FreeRTOS debugging on STM32 – CPU usage

Introduction

Since information about FreeRTOS debugging with STM32CubeIDE is sparse and ST does not yet provide the task list view (which was part of Atollic TrueStudio), here is how to get it by installing a plugin from Freescale and adding the appropriate code. I assume you already have a project with FreeRTOS set up and running…

Adding the plugins

First start STM32CubeIDE and go to Help -> Install New Software…

Then add an update site by clicking the "Manage" button. Here you need to add Freescale's update site. And yes, NXP/Freescale's plugin works with ST's CubeIDE 🙂

http://freescale.com/lgfiles/updates/Eclipse/KDS

Click "Apply and Close" and select the new site under "Work with".

Select the FreeRTOS Task Aware Debugger for GDB.

Click "Next…", follow the wizard to completion, and restart STM32CubeIDE after the installation.

Configuring the FreeRTOS project

Now add a timer and configure a reasonably high tick rate (e.g. I used TIM13 of my STM32F469, running with a 180 MHz HCLK, a 90 MHz APB1 timer clock and a timer counter period of 899, giving 100 kHz resolution).
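As a sanity check, the resulting stats tick rate can be computed from the clock tree values. A small sketch (the prescaler of 0 is my assumption; adjust to your actual CubeMX settings):

```c
#include <stdint.h>

/* Update rate = timer clock / ((PSC + 1) * (ARR + 1)).
   With a 90 MHz APB1 timer clock, PSC = 0 (assumed) and ARR = 899,
   this gives the 100 kHz mentioned above. */
uint32_t timerUpdateRateHz(uint32_t timerClkHz, uint32_t prescaler,
                           uint32_t period)
{
    return timerClkHz / ((prescaler + 1U) * (period + 1U));
}
```

For example, timerUpdateRateHz(90000000, 0, 899) yields 100000 Hz, i.e. a 10 µs resolution for the run-time stats counter.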


Enable the interrupt

And in Middleware -> FreeRTOS, enable the run-time stats

If you like, you can also enable RECORD_STACK_HIGH_ADDRESS. Sometimes this is quite useful and avoids the little warning symbol in the stack usage column of the task list view.

Now regenerate your project…

Adjusting the code

Now it's time to adjust your code to collect the stats. Start the timer in interrupt mode by adding the following functions in a user code section of main.c.

volatile unsigned long ulHighFrequencyTimerTicks;

void configureTimerForRunTimeStats(void) {
  ulHighFrequencyTimerTicks = 0;
  HAL_TIM_Base_Start_IT(&htim13);
}

unsigned long getRunTimeCounterValue(void) {
  return ulHighFrequencyTimerTicks;
}

In stm32f4xx_it.c, add the following lines to the appropriate user sections

/* USER CODE BEGIN EV */
extern volatile unsigned long ulHighFrequencyTimerTicks;
/* USER CODE END EV */

[...]

void TIM8_UP_TIM13_IRQHandler(void)
{
  /* USER CODE BEGIN TIM8_UP_TIM13_IRQn 0 */
  ulHighFrequencyTimerTicks++;
  /* USER CODE END TIM8_UP_TIM13_IRQn 0 */
  HAL_TIM_IRQHandler(&htim13); 
  /* USER CODE BEGIN TIM8_UP_TIM13_IRQn 1 */
  /* USER CODE END TIM8_UP_TIM13_IRQn 1 */
}

If you are compiling with optimization levels above -O0, you also need to fix a bug (in my opinion it is one) in the FreeRTOS tasks.c.

There are two possibilities:

  1. Switch off optimizations for tasks.c by right-clicking the file in the project browser and changing the compiler optimization to -O0
  2. Change the line in tasks.c, adding a volatile qualifier (see picture)

The problem with solution 2 is that you need to redo it after each STM32CubeMX code generation. But there is a third solution that makes solution 2 persistent (until you update the MCU package).

Go to `%HOMEPATH%\STM32Cube\Repository\STM32Cube_FW_F4_V1.25.0\Middlewares\Third_Party\FreeRTOS\Source\` and edit the file as in solution 2, adding the volatile qualifier.

When you regenerate your project from CubeMX, it will include the correct line.
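Since the screenshot is not reproduced here, here is a sketch of the pattern. Which exact tasks.c line is affected depends on your FreeRTOS version, and the variable name below is only an illustration, not necessarily the one from the picture:

```c
/* Before: above -O0, the optimizer may keep the value in a register
   (or fold accesses away), so the debugger plugin reads stale data. */
static uint32_t ulTotalRunTime = 0UL;

/* After: volatile forces every access through memory, keeping the
   value visible to the task-aware debugger. */
static volatile uint32_t ulTotalRunTime = 0UL;
```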

Profiling in action

Now that everything is in place, it is time to run your code. Start the project in debug mode, make the FreeRTOS Task List view visible and let it run for a few seconds. Then hit the pause button. The task list view will collect the information from your target (via GDB) and show it nicely:

If the Task List view complains about FreeRTOS not having been detected, restart STM32CubeIDE and it should show up again.

Edit: During the last weeks of using this Eclipse plugin, I had problems seeing all tasks in the Task Analyzer. In fact, the FreeRTOS functions for printing the run-time statistics show them, while the plugin doesn't. The data within the plugin also seems to be corrupted sometimes. So I would suggest using the FreeRTOS-internal facilities instead: https://www.freertos.org/rtos-run-time-stats.html
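If you go the FreeRTOS-internal route, a minimal sketch looks like this (requires configGENERATE_RUN_TIME_STATS and configUSE_STATS_FORMATTING_FUNCTIONS set to 1; the function name and buffer size are my choices, with roughly 40 bytes per task being a common rule of thumb):

```c
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

/* Print one line per task: name, absolute run time, percentage. */
void printRunTimeStats(void)
{
    static char statsBuffer[512];  /* sized for ~12 tasks */

    vTaskGetRunTimeStats(statsBuffer);
    printf("Task          Abs Time      %% Time\r\n%s", statsBuffer);
}
```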

Citations

The information was collected from these links:

Just Another Hobby – 3D Printing

When it comes to leisure time, there are not many activities more exciting than 3D printing. After you have printed diverse things from Thingiverse or any other 3D printing gallery, there is a great wish, deeply buried in your mind, to design stuff yourself to get the real excitement. And what is more obvious than upgrading your 3D printer with parts printed by itself/yourself 😛

If this was the right appetizer, look at how my Ender 3 got upgraded with a full metal frontend or take a look at my Thingiverse profile with some other designs. If you have any questions, especially about design decisions (e.g. why I designed the new hotend exactly as it is, a wide monster hotend), or suggestions on how to improve it, do not hesitate to comment!


Just dug up

Welcome to MolesBlog.

In my first post, I want to introduce myself: who I am and why I started this blog.

I'm a professional electronics engineer who never gets enough of hardware, software, IT, … everything that has to do with creativity and technology is a good candidate to attract me 😉

My journey started around 30 years ago, when I was a child. In 1987 I got a Commodore C64 and a few electronics construction kits from my uncle, including a soldering iron. This made me lock myself in the garage for weeks. I soon blew the CIA of the so-called User Port (max. output current 2 mA) of the C64 with my first clavilux LED experiments (drawing 20 mA), and soon, besides going to school, I spent most of my time in my garage, sometimes together with friends sharing the same interests. This was the starting point of my career, leading to a diploma degree in electronics engineering with a great emphasis on VHDL, software in general and open source in particular.

Over the last years, I have realised that engineering has a lot of best practices, but hardly anybody applies them with heart and soul. Many projects, independent of the company's industry, either rule their developers with strict processes or give them plenty of rope. Neither plenty of rope nor strictness leads to high-quality software. I have encountered this many times with brownfield projects I had to take as a base for my developments.

But even with a greenfield project (which has not yet happened in my business career, only in private projects), it is hard to find a guideline on how quality assurance works best and stays fun. There are too many pitfalls and time-wasting hurdles not to be forced into some shortcut, always guiding you to a messy product lacking real software quality. This always leads to dissatisfaction. Sometimes you just skip the testing; sometimes the toolchain effectively refuses to play well (or at all) in a continuous integration environment. In my experience, this mostly applies to very expensive commercial toolchains.

The most important question seems to be: who else should back the quality of software, hardware and embedded systems, if not professional developers, engineers or technology-addicted people like me and possibly you? Your boss, your project manager? Most probably some (so-called) quality engineer, if he/she has time or exists at all. According to SCRUM (which defines neither a project manager nor a quality engineer), the responsibility for quality belongs to the development team, nobody else. This means YOU, the developer, are totally responsible and always need to push back against commercial demands from product management.

Another question simply is: does it give you satisfaction to write bad code, hunt silly bugs or get an "enema" from your boss because of bad quality?

Welp, would you like to be satisfied with your work? Could quality lead to this satisfaction?

From my experience in various projects, a good starting point to get satisfaction (and quality) are the following rules:

  • Try to be the best in your class – You are a professional, so be one. If you hire a mason, you expect him to use a mason's level and deliver extraordinary quality. With software, customers usually don't see "your work", but your colleagues and yourself do. Apart from this, it is simply awkward if customers experience a lot of bugs.
  • Keep yourself up-to-date – Read blogs, magazines and books, and watch videos about your profession. Especially with technology, you cannot expect the world to stay as it is until you retire.
  • Review your work and yourself – One of the most important things is review and retrospection. Look back on your work and yourself and always ask what could have been done better.
  • Keep human 🙂 – Take advantage of not working alone. Your teammates are also professionals and have deeper knowledge than you in some areas. Trust them, talk to them and work together to get the most out of your project and life.

These are only a few thoughts to prepare the base for being a responsible and accountable development team. BTW, SCRUM already introduces some of these concepts as part of its process.

I’m happy to receive responses about my statements above and I’m also interested in topic suggestions for future posts.

In my following posts, I plan to write about the STM32 test-driven development (TDD) process I am currently trying to establish, using unit tests and a CI to ensure high quality, and about what it is like to (try to) be a Clean Code Developer (CCD).

Best Regards,
Daniel /themole

STM32 FreeRTOS and printf

After some more coding, I found more issues with FreeRTOS and printf that are not solved by my fix below. If you need to fix it completely, look at this forum post: ST Community
and the website of Dave Nadler: newlib and FreeRTOS
In my current project, I replaced the newlib-nano printf implementation by adding github:mpaland/printf as a git submodule to my project and including printf.h (it overrides the printf library functions with macro defines) in my topmost header file.

This will be a very short post. If you experience hard faults when using printf (this happens mostly when using floats) and you have already ticked the appropriate settings in the project's properties…

…don't waste your time digging through assembler instructions with instruction stepping (like I did) just to realize that memory management is broken when using FreeRTOS. It is simply a bug in the CubeMX-generated source files. Locate your _sbrk function (either in syscalls.c or in sysmem.c) and change it to the following:

caddr_t _sbrk(int incr)
{
  extern char end asm("end");
  static char *heap_end;
  char *prev_heap_end, *min_stack_ptr;

  if (heap_end == 0)
    heap_end = &end;

  prev_heap_end = heap_end;

  /* Use the NVIC offset register to locate the main stack pointer:
     the first vector table entry holds the initial MSP. */
  /* The STACK bottom lies MAX_STACK_SIZE below it. */
  min_stack_ptr = (char*)(*(unsigned int *)*(unsigned int *)0xE000ED08);
  min_stack_ptr -= MAX_STACK_SIZE;

  if (heap_end + incr > min_stack_ptr)
  {
    errno = ENOMEM;
    return (caddr_t) -1;
  }

  heap_end += incr;
  return (caddr_t) prev_heap_end;
}

For what _sbrk does, have a look here.

If you want to dig a bit deeper, here are some websites dealing with this problem:

STM32 USB-DFU

I'm not sure if I'm simply a problem magnet or why some stuff does not work as described… Here is another case. The tool I tend to use can be found at ST's site.

When I connected a piece of custom-designed hardware to my laptop's USB port with BOOT0 tied to VCC, a new USB device called "STM32 Bootloader" immediately showed up in the group "USB Devices". I was really happy about that and started the DfuSe demo software from ST. But hey, it could not find an appropriate device. What the heck???

After some digging through the web, I found different suggestions and solutions, but none of them worked. The simple solution was to uninstall the device's driver in the Device Manager, disconnect and reconnect again. A different device showed up just a group above the previous one: "USB Controller".

Welp, as soon as the device showed up, the DfuSe demo software also recognized it and sprang to life.

Creating a DFU file is well described in other resources on the web. Just use the DFU File Manager that was installed along with the DfuSe demo software to create a DFU file out of e.g. a hex file.

Then press the lower "Choose…" button to select the generated DFU file and press "Upgrade"… That's all. Your device has new firmware on it.

Have fun!

Doxygen – Tips and Tricks

LaTeX non-interactive

To make LaTeX skip some errors without user interaction, you can add the option --interaction=nonstopmode to the latex call. The easiest way to do so is changing LATEX_CMD_NAME in your Doxyfile.

LATEX_CMD_NAME = "latex --interaction=nonstopmode"

Do not forget the double quotation marks. Otherwise doxygen will remove the space and the command in your make.bat will fail.

If you now want to generate the document, step into your doxygen-generated latex folder (designated by the LATEX_OUTPUT option in the Doxyfile) and execute make.bat (on Windows) or make all (on *nix).

Adding a favicon to html output

To add a favicon to the HTML output, you need to specify it in a custom header and include the original image in the HTML, as described here. To extract the default header file:

doxygen -w html headerFile footerFile styleSheetFile

Add the following line to headerFile within the HTML header:

<link rel="shortcut icon" href="favicon.png" type="image/png">

Then set your headerFile as HTML_HEADER and add the image to HTML_EXTRA_FILES in your Doxyfile. Paths are relative to your Doxyfile.

HTML_HEADER = headerFile
HTML_EXTRA_FILES = some_rel_path/favicon.png

Now you can generate your html documentation with some favicon in place.

PDF output destination

Did you ever search for the PDF file doxygen (or rather the Makefile in the latex folder) generates? I just added an option to doxygen that copies refman.pdf to a location of your choice. (Hopefully it soon gets merged and released.)

Want to test it out? Compile doxygen from my doxygen fork and add the following option to the Doxyfile of your project.

PDF_DST_FILE = ../MyGenerated.pdf

The destination is relative to the Makefile in your doxygen latex folder. As soon as make has finished its job, the PDF sits in the same folder the latex folder resides in.

That's all. Enjoy generating software documentation with doxygen!

Integrating wmBus devices into iobroker

After my quite expensive MH-Collector (identical to the easy.MUC from solvimus) died (it survived just a little longer than the warranty lasted), I decided to collect my wmBus devices' data with a home-brewed solution. I'm also the owner of a Ubiquiti US-24-250W, so going with PoE supply was an easy decision. So what lies closer than using a Raspberry Pi 3B+ with a PoE hat and a USB wmBus stick?

No sooner said than done: I bought the parts, installed Raspbian and an iobroker slave. The iobroker master is now running on a Debian 10 Buster VM on my tiny HP ProLiant server… How to install the iobroker slave can be found here.

wmBus Hardware

For me, the appropriate hardware was the IMST iM871A-USB (you can buy it directly from IMST or from tekmodul). This wmBus stick provides a serial interface (e.g. /dev/ttyUSB0), is quite cheap and is supported by most open-source wmBus software. But here comes the tricky part. There are quite some paths you can take, but for me, using the Messhelden heat cost allocators, I found myself in a very frustrating situation. These devices stick to the OMS standard for most of the telegram, but unfortunately do some very nasty stuff at slots 2 and 3, so many decoders fall out of sync just after the first data slot.

wmBus Software

After trying different iobroker adapters (like iobroker.wm-bus) and also daemon solutions (like wmbusmeters) sending the data to some MQTT broker (a server is easy to spin up in iobroker), I ended up with iobroker.wmbus (beware of the dash; it is not the same as above). Somehow the author of this adapter managed to cope with the inconsistencies that I could not even decode manually, looking at every single bit and byte of the wmBus telegram.

After attaching the iM871A-USB stick to the Pi and placing it at some location where it can receive all meters you are interested in, you need to install and configure the iobroker adapter iobroker.wmbus.

Adapter Configuration

The configuration is also quite easy and should look like the following:

It could also be that your slaves need other modes to be received. One widespread mode for battery-driven devices is mode C. Unfortunately, a single stick cannot receive multiple modes. But usually you only run devices with a single mode. Another important setting is the baud rate. For the IMST device, it needs to be 57600. The stick contains a serial converter that attaches the IMST module via a real serial connection.

Add Encrypted Meters

After finishing the configuration and starting up the adapter, it is time to have a look into the log. There you will see if the adapter started up correctly. If it did, you should soon see a line saying "Updated device state: <MANUF>-<ID>", or an error saying that it could not decrypt a telegram due to a missing decryption key. If this occurs, go to the adapter configuration again. There you should see a new entry with the key "UNKNOWN". Place the correct key there and push "Save".

The following telegram of that device should be decrypted correctly, and a new state will be created within the object tree of iobroker.

If you see other unencrypted devices polluting your object tree, or your log filling with decryption-failure messages, simply put them under the "Blocked Devices" tab in the adapter configuration. My Pi can see at least 20 unencrypted Techem water meters and heat cost allocators.

Let’s encrypt (also on wmBus)

I don't know how they can survive in the time of the GDPR (General Data Protection Regulation), but they are still in no hurry to encrypt their telegrams with a device-unique key. I think it is a security issue when burglars can easily find people who do not heat in wintertime or currently have no water demand. But at least Techem sticks closely to the OMS. If you rent a flat that still has unencrypted wmBus meters, I would definitely demand encrypted meters. Even if the encryption of wmBus has some weaknesses, it is by far better than plaintext.

STM32 UART Continuous Receive with Interrupt

My last post was quite some time ago, due to vacations and a high workload. But now I encountered a problem within an embedded project, and I want to share the solution with you. Continuously receiving data using interrupts on a UART is complicated (or even impossible) with HAL. Most approaches I found crawling the internet use the LL library to achieve this, and many discussions around HAL do not end in satisfaction. Some work around the problems with dirty approaches (e.g. changing the HAL code itself), others step back from interrupts and use a polling approach.

To be honest, the high levels of HAL do not offer such a solution. Instead, HAL offers functions to receive a specific amount of data using a non-blocking interrupt approach, handling all the difficulties of tracking the state in the instance structure (huartX) and entering a callback for the diverse states of reception/transmission, e.g.
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart) or
void HAL_UART_RxHalfCpltCallback(UART_HandleTypeDef *huart)

Using HAL_UART_Receive_IT (not recommended)

An obvious approach without touching the HAL code itself is to call HAL_UART_Receive_IT(&huart3, &rxbuf, 1) once after initialization and again at the end of RxCpltCallback to retrigger the reception. But this leads to an undesired lock (possibly a HAL bug) when transmitting data using HAL_StatusTypeDef HAL_UART_Transmit_IT(UART_HandleTypeDef *huart, uint8_t *pData, uint16_t Size), which is probably not the behaviour you want and the beginning of endless debug sessions and frustration.

Simply Enable the IRQ

In my opinion, the best solution is really simple. Don't use the high-level receive functions at all for the continuous RX behaviour, since you do not want to receive a specific amount of data but to be called on each reception. So configure the UART with interrupt in CubeMX and, after its initialization, enable the interrupt itself, never calling HAL_UART_Receive_IT or any other UART receive function (they disable the interrupt after finishing).

In the section of the appropriate instance in void HAL_UART_MspInit(UART_HandleTypeDef* uartHandle), add the following line of code:

__HAL_UART_ENABLE_IT(&huartX, UART_IT_RXNE);

In stm32xxx_it.c do:

void USART3_IRQHandler(void)
{
    /* USER CODE BEGIN USART3_IRQn 0 */
    CallMyCodeHere();
    return;  // To avoid calling the HAL handler at all
             // (in case you want to save the time)
    /* USER CODE END USART3_IRQn 0 */
    HAL_UART_IRQHandler(&huart3);
    /* USER CODE BEGIN USART3_IRQn 1 */
    /* USER CODE END USART3_IRQn 1 */
}

The return statement avoids calling the HAL IRQ handler. I did not try it during transmit, but it does not seem to disturb anything. If you plan to use the HAL_UART_Receive_IT functions in parallel, you could try putting your code below the handler call. I did not test it, but there is a good chance that it works.
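What CallMyCodeHere() does is up to you. A minimal sketch for an STM32F4 could look like this (the function and variable names are mine, and on newer families the data register is RDR instead of DR):

```c
volatile uint8_t rxByte;

void CallMyCodeHere(void)
{
    /* On the F4, reading the data register also clears the RXNE flag. */
    if (__HAL_UART_GET_FLAG(&huart3, UART_FLAG_RXNE))
    {
        rxByte = (uint8_t)(huart3.Instance->DR & 0xFFU);
        /* e.g. push rxByte into a ring buffer here */
    }
}
```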

Since this approach only touches the user code sections, none of your code will be destroyed by code re-generation from CubeMX.

This is all you need… Happy UART processing 😉

If Timestamping is Needed

Simple Millisecond Timestamps

If you want to trigger on inactive time durations (some serial protocols use these as a synchronisation condition), save a timestamp (e.g. HAL_GetTick()) within the UART RX interrupt and look at the difference to the previous one (subtract the duration of a byte to get the real inactive time).
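A sketch of such a gap detector (the function and variable names are mine; feed it HAL_GetTick() from the RX interrupt):

```c
#include <stdint.h>

static uint32_t lastRxTick;

/* Call once per received byte with the current millisecond tick
   (e.g. HAL_GetTick()). Returns nonzero when the pause since the
   previous byte exceeded gapMs milliseconds. */
int uartGapDetected(uint32_t nowTick, uint32_t gapMs)
{
    /* Unsigned subtraction handles the tick counter wrap-around. */
    int gap = (uint32_t)(nowTick - lastRxTick) > gapMs;
    lastRxTick = nowTick;
    return gap;
}
```

Remember to subtract the byte duration from the measured difference if your protocol defines the gap as pure line-idle time.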

High Resolution Timestamps

If sub-millisecond resolution is required, run a timer with a prescaler matching the desired resolution and take the counter value of the timer instead of the tick counter (you can get it with __HAL_TIM_GET_COUNTER(&htimX)).

Hope this helps in your next project using UART 🙂

Installing BigBlueButton on Your Dedicated Server

Introduction

After struggling with a dedicated server from Strato Webhosting running Ubuntu 18.04 and playing around with schroot to get an Ubuntu 16.04 environment, I gave up on this solution. The systemd hurdle was much too high to be taken within the time I currently have available.

Finding the right server

The first task was to find a suitable dedicated server among various providers, from cheap to expensive. There were only a few, but hard, requirements: German location, 4+ cores, 8+ GB RAM, 100+ GB SSD, 400+ MBit internet connection, 1+ TB traffic. And the hardest requirement: Ubuntu 16.04.

With Hetzner I found a German provider. A bit expensive in general, but there is also a server auction section where you can find really valuable servers with great support and a nice price tag. They start at around 30 €/month and include almost everything you could dream of.

Long story short, I ordered a dedicated server from Hetzner, located in Germany (Falkenstein, FSN1), and installed Ubuntu 16.04 using the rescue system. The automated installer of the rescue console offers (possibly) all available Linux derivatives. From the Hetzner Robot you already get a handful of supported distributions, but through the rescue system there are many more supported distros to choose from, not to mention the available unsupported ones.

The whole procedure took around 20-30 minutes, from customer registration through ordering the server to getting it running. The Hetzner wiki is crystal clear, the installer is easy, the whole setup is rock stable.

TL;DR

Setting up the server

To set up the server, you usually need to wait a few minutes until it shows up in the so-called Hetzner Robot. Here you can choose to start the rescue system with your SSH pubkey deployed (details in the Hetzner wiki)…

…and then do an automated reset.

Then log in to the rescue system with SSH and enter installimage.

Once this command is issued, a menu will appear and guide you through the install procedure of the operating system of your choice (for BigBlueButton you need ubuntu-16-04-minimal). Amazingly, there is a ton of Hetzner-supported systems, but a megaton of unsupported but available derivatives.

Welp, this menuconfig-style tool is a bit old-school, but it serves its purpose so well… I'm really fascinated. Within 5 minutes, the system is poured onto your server's hard disk. When finished, type reboot and the server will boot the new operating system.

Setting up BigBlueButton…

…with Let's Encrypt SSL, Greenlight and (almost) the whole configuration.

Just do a system update/upgrade…

apt update
apt upgrade

…and then you can start installing BBB…

wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh |\
    bash -s -- -v xenial-220 -s bbb.example.com \
    -e info@example.com -g

That’s it… When you change your Greenlight config in ~/greenlight/.env (e.g. for enabling Google OAuth2), just follow this procedure:

cd ~/greenlight
vi .env   # do there what you want or need
docker-compose down
docker-compose up -d

If you see a 404 error when loading your page (https://bbb.example.com), just give it 30 seconds (or more, if you did not follow the system requirements 🙂 ) to come up, and enjoy your conferences.

My first real world experiences

After some days, I had a "real world" video conference with 20 attendees, all using audio and 14 of them video. It was the first virtual classroom meeting of my daughter, related to my School's out post. Most of the attendees were crystal clear without interruptions. Some of them had minor audio and/or video stuttering, and two of them I could hardly understand. OK, a few had problems with their own hardware (video and audio), but this was not related to BBB.

The stuttering connections originate mainly from WiFi connections losing and delaying data packets. So, if you don't have a chance to get wired, get as close as you can to your WiFi hotspot. There can also be a big difference between cheap consumer equipment and professional gear. I run a UniFi-based installation from Ubiquiti Networks that simply outperforms the FritzBox WiFi in every aspect (speed, reliability, configurability, VLAN capability, and so on).

Watching htop for a while also gives some interesting insights into what BBB expects from your server. With 20 attendees, many having the microphone turned on and talking, the load rises astonishingly. My quad-core Core i7 (8 virtual HT CPUs, see server details) was pushed up to 30% per CPU during this meeting. This mostly originated from freeswitch and kurento-media-server:

This means two things:

  1. It takes quite a lot of power to mix audio in real time.
  2. The freeswitch code seems to make very efficient use of multiple processors.

That's both good and bad news for large installations. Bigger hardware with more cores is a constructive solution. But honestly, if you are planning to put more than 100 concurrent users on a server in a production environment, you should think about high-availability solutions (keywords: AWS Elastic Load Balancer, AWS RDS with failover and availability zones, …). But then the bbb-install.sh approach gets to its limits. However, it serves well as a starting point to bring up a test system and to understand BBB's and Greenlight's architecture.

Some words about the client

If you know Zoom Meetings, which is a bit bullheaded about installing an executable (you need to trick it to get the browser version of the client), you will find BBB a real pleasure.

To use a BBB server, you need nothing besides a web browser. It supports audio, video and screen sharing (screen- or application-based), and it supports uploading a presentation or watching a web video (YouTube, Vimeo, …) together while the moderator controls the content. I already tested it with a few friends, and we found that it works best with Chrome and Firefox on Windows and Linux. But even on smartphones it runs fine, as long as you have a high-quality internet connection.

What next?

BigBlueButton advises not to install anything else besides BBB (and maybe Greenlight) on the server: because of the real-time audio processing in FreeSwitch, every little delay can destroy the user experience of your video conference.

In fact, if you have enough cores and RAM and are running on an NVMe RAID 1, you can certainly install other web applications on your server, as long as you don't do heavy number crunching. If you can live with the risk that the other applications potentially influence the conference quality, there is no technical reason not to do so.

The configuration of nginx looks very clear and straightforward. You should be able to add another site (in /etc/nginx/sites-available) and activate it (soft-link it to /etc/nginx/sites-enabled/). The only thing I would advise is to run it on another subdomain (otherapp.example.com). For more encapsulation, and with a bit more effort, you can go with a second IP or whatever you like…

Backup thoughts

With a dedicated server, you should always keep backups in mind. It is not as simple as creating a snapshot in virtualized environments. If you build on e.g. ext4, your backup might break. If you want to create an online snapshot backup of your system block devices, better stick with an LVM base for your block devices or, if you like the bleeding edge, with btrfs.

Doing it the safe way

The safest way for sure is to stop all services and take a snapshot then. Even more secure is to boot a rescue system, fetch the raw disk image snapshot and then boot back into the system. This is safe, but it could mean that you need to do it manually. Hetzner provides information on its Robot API (the one the Hetzner Robot is using), but it could mean quite some work to implement it… and you need a second server (or a Raspberry Pi at your home) that steers the backup process using the Robot API and SSH commands (interesting thought, BTW; maybe I'll try it and write a new post about it one day)…

Doing it the less safe way

But there is also a quite safe way of doing your backup with all services running. There are five different parts to be considered:

  1. The system environment (e.g. /boot –> block device based)
  2. The Home Directories (e.g. /root –> file based)
  3. The static system files (e.g. /usr/share –> file based)
  4. The variable system files (e.g. /var/lib –> file based)
  5. The database directories (dynamic –> dump based)

With this said, the process should be quite clear.

Firstly, you should back up your whole system (after finishing the installation of your services) on block device level using a rescue system, just to make sure you can easily revert to a running state or bind-mount the block device snapshot to compare with your file-based backups.

Secondly, dump your database files to a place like root's home directory, so the content becomes somewhat static. Usually, transaction-based databases dump a consistent state. It depends on the software how well the transactions are formed, but usually this should at least keep the database itself intact.

Thirdly, back up the files from all other directories (excluding temporary ones) using an efficient approach (e.g. rsync, rsnapshot or borg) to a safe place.

And last but not least, check your backup conscientiously, or better, try a restore. For a production environment, a regular and automated restore test is indispensable. Honestly, for a real production environment with hundreds of users, a simple dedicated server is not the right way to go.

Doing it the Chuck Norris way

A quite risky, but mostly working, way is to run a snapshot tool simply on the running system. As stated on the borg page, most directories are stable enough to be backed up using an rsync approach. If you do your first backup (which takes quite long) with all services stopped and then only do incremental backups (which are lightning fast), there is a low chance of e.g. a home directory changing during the backup process. To avoid getting inconsistent databases, just do a new dump of your databases beforehand (e.g. to root's home directory). You can even protect the riskier parts (using LVM, btrfs) to lower the chance of unexpected changes. And moreover, you can stop the relevant services.

For the Docker containers (postgres and greenlight), keep in mind that it is good practice for Docker services to store their data in a bind mount on the host system. This means the considerations about database dumps cannot be relaxed and may even get more complicated.
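A hedged sketch of such a dump (the container name and database user are examples; verify yours with docker ps):

```shell
# Dump the Greenlight database from inside the running container, so the
# dump is consistent even though the bind-mounted data directory keeps
# changing underneath the file-based backup.
docker exec -t postgres pg_dumpall -U postgres > /root/greenlight-pgsql.sql
```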

Installing Moodle on the BBB-Server

Next step is to install Moodle on this server and migrate all the users and courses from the old one (see my post about its installation) to the new one (stay tuned).

Have fun with BBB and Greenlight 🙂

School’s out – How to Tame Your Children

Introduction – Historic Reasons for this Post

Here in Germany, the government decided to lock all public life down to a minimum (only system-relevant shops are allowed to open, and each German state has its own exceptions), and it seems that other countries are doing the same. This means many people need to work from their home office, and if you have kids, they „join you at work“. And as if that were not enough, they refuse to learn anything for school. Parents are simply not made to guide their kids through school stuff.

After some days of lockdown, the teacher of my daughter sent around a voicemail, telling the kids she misses all of them and that they should do some tasks at least until the Easter holidays (from 2020-04-04 to 2020-04-19) start. I was completely amazed how my little girl changed. She stopped every activity and listened to the teacher as she would never listen to me telling her the tasks she needs to do. But as soon as the voicemail ended, the magic was over…

Welp, what can I do as an engineer to help myself and my girl out of this…? Right! Help the school to set up a digital classroom. Technically no problem, but in Germany, most problems are not technical. Most problems arise from privacy protection and simply the reluctance of public employees to deal with the work. But this shall not be the topic of this post. The solution is to be very bullheaded and try to explain the situation based on science. (Corona is just now starting, German video).

Short side note: In Bavaria we have a system called Mebis that was intended to complement daily school with digital information and courses. Teachers can even collect tasks and share courses. Unfortunately, there are no means of communication for whole classes, or at least a simple video and/or audio distribution. The highest level of group communication is a chat for six persons at most. So, for the little ones (grades one to four), the system needs to be steered by the parents.

This means that all information for our children was distributed by email or some WhatsApp group (unofficially, because of the privacy laws). Lately, the teacher has also set up a class in the Anton app. But even this is not a good solution for the little ones…

Then, WHAT Would be the Solution?

So, what would you do as a technician if you expect this lockdown to continue after the Easter holidays (as is a logical conclusion from the above video)? Right! Evaluate what exists out there and find a solution that fulfills the requirements for the low-grade scholars:

  • Audio/Video communication (virtual class room)
  • Distribution and collection of task sheets/images (not digital courses; little ones cannot operate a keyboard efficiently)
  • Easy to setup courses
  • Complies with GDPR (DSGVO) –> Runs on own server or in a german cloud
  • Easy to use
  • Cheap or free to use

TL;DR

I quickly found a digital classroom solution that fulfills the requirements: Moodle (BTW: there is also MoodleCloud, but this does not comply with the GDPR). I decided to run it on my home server (HP ProLiant Gen 7 + 100M DSL). So the architecture is quite simple: a reverse proxy VM as the exposed host and a Moodle VM connected to the internal proxy net (10.0.0.0/16).

My proxy is running Nginx in a very minimal configuration, and Moodle is protected and accessed from the outside world through the proxy. The proxy net is also only connected to the services that shall be reachable from the public internet.

The installation of Moodle is straightforward, and following the how-to (German how-to) leads to the expected results. As database I used MariaDB, and Nginx as the webserver.

Besides MariaDB and Nginx, you need to install php-fpm. Which additional PHP modules you will need is nicely evaluated during the Moodle web setup, so simply keep a console open to run sudo apt install <required-module>.
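On Debian 10 with PHP 7.3, the module list typically comes down to something like this (a sketch; the Moodle setup page will tell you exactly which ones are still missing):

```shell
apt install php7.3-fpm php7.3-mysql php7.3-xml php7.3-curl php7.3-zip \
  php7.3-gd php7.3-intl php7.3-mbstring php7.3-soap php7.3-xmlrpc
```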

If you want to run all this through a reverse proxy, don’t postpone its configuration to a point after the Moodle web setup. The setup will record the URL it was accessed through, and you would need to change this somewhere in the config afterwards…
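The place where that URL ends up is Moodle’s config.php; a fragment for a setup like the one described here could look as follows (a sketch, assuming TLS is terminated on the reverse proxy; the hostname is an example):

```php
<?php
// Fragment of /var/www/html/config.php
$CFG->wwwroot  = 'https://moodle.example.org'; // URL recorded during setup
$CFG->sslproxy = true; // the proxy speaks HTTPS, Moodle itself plain HTTP
```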

One of the problems was to get Moodle running smoothly through the reverse proxy. It had some minor problems resolving the CSS, JS and all indirectly served PHP files. Here is my proxy configuration to make it work (I assume you already set up your SSL config and certificates correctly; hint: wildcard certificates for your domain make everything much easier).

Nginx proxy config (for reverse proxying the Moodle host)

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name moodle.<mydomain>;
  include snippets/my_ssl.conf; 
  include snippets/ssl-params.conf; 
  root /var/www/html/; 
  location / { 
    proxy_set_header X-Forwarded-Host $host:$server_port; 
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
    proxy_pass http://10.0.2.20:80; 
    client_max_body_size 100M;
    proxy_set_header Host $host; 
  }
}

Nginx conf on Moodle host

server {
  listen 80 default_server;
  listen [::]:80 default_server; 
  root /var/www/html; 
  index index.html index.htm index.nginx-debian.html index.php;
  server_name _; 
  location / { 
    try_files $uri $uri/ =404; 
  }
  location ~ \.php/.*$ { 
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; 
  } 
  location ~ \.php$ { 
    include snippets/fastcgi-php.conf; 
    fastcgi_pass unix:/var/run/php/php7.3-fpm.sock; 
  }
}

You only need to unzip a Moodle distro package to /var/www/html, and as soon as you point your browser to https://moodle.<your-domain>, the setup process will show up. Now follow the guide and install the packages Moodle needs.
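Deploying the release package could look like this (version and URL are examples; check download.moodle.org for the current stable release):

```shell
cd /tmp
wget https://download.moodle.org/download.php/direct/stable38/moodle-latest-38.tgz
tar -xzf moodle-latest-38.tgz -C /var/www/html --strip-components=1
chown -R www-data:www-data /var/www/html
```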

If you keep everything at the default values, it will only be possible to upload files up to 2M in size. This is not sufficient if you expect parents to upload photos of the completed task sheets, and it is also sometimes too small for PDFs or images you want to share. Therefore, you need to raise the limit for Nginx and php-fpm. On Debian 10 buster, this is done in:

  • /etc/php/7.3/fpm/php.ini
    • upload_max_filesize = 50M
    • post_max_size = 50M
  • /etc/nginx/nginx.conf (section http)
    • client_max_body_size 50M;

On the proxy, this is already included in the config above.
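After editing both files, reload the services so the new limits take effect:

```shell
systemctl reload php7.3-fpm nginx
```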

Lessons Learned

Some of the dependencies of Moodle and its plugins are not hard dependencies during installation, but they will show up once you have used Moodle for a while. If you just don’t want to wonder why these dependencies are needed, simply run (as root or with sudo)

apt install unoconv ghostscript graphviz 

A bug in unoconv release 0.7 (which is shipped by Debian 10) prevents unoconv from creating its own listener and from creating some temp files. To circumvent this, we simply create a little wrapper script:

vi /usr/bin/unoconv_wrapper.sh

Put in the following content:

#!/bin/bash
HOME=/tmp /usr/bin/unoconv "$@"

And make it executable.

chmod 0755 /usr/bin/unoconv_wrapper.sh

Now enable unoconv and adjust the path setting in Moodle’s unoconv configuration to point to the wrapper /usr/bin/unoconv_wrapper.sh.

Afterwards, enable the unoconv document converter in the site administration (search for unoconv to find the settings).

If you want to use PDF annotation for all supported file formats, you need to select it in the Feedback types of the activity.

You can also select it in the default settings, so you don’t need to change it for every activity you add to a course. Unoconv then converts different types of files to PDF, and as a trainer, you can simply annotate these PDFs online. If you don’t use unoconv, you need to download e.g. each and every image or other document to your PC and send your comments back to the student/scholar in a different way, e.g. as a file upload or a textual comment.

Happy teaching

As soon as your Nginx runs, you will probably want to use your Moodle. One of the first things I did was installing the BigBlueButton add-on for video conferencing. Blindside Networks thankfully provides a free conferencing server for Moodle. This means that the audio and video traffic is not flooding your server (this is why I can run it at home). If you want an on-premise solution instead, I would give OpenMeetings a try.

Conclusion

The setup of Moodle is quite simple, even if there are little pitfalls with the reverse proxy configuration. It also takes some time to get used to working with the system. But creating simple courses and putting in some PDF files is really dead easy. The higher levels are concentrating the results (a trainer task) and adding little extra candy (e.g. badges) to motivate children to learn.

All this said, I wish you happy teaching and I would be happy to receive comments on it.

Here is a little hint of how a course could look with content (tasks in PDFs and a place to put the pictures of the completed task sheets).

Update

KW 17 – Three weeks after introducing Moodle

After three weeks and a (partly) official video conference using BigBlueButton (see my other post about installing BBB), more and more children tend to register for this platform. And it seems to be the only available platform where all that stuff is well organized, corrections can easily be done and even rating is possible. Additionally, with the Moodle Android and iOS apps, the camera can easily be used to submit the scholars’/students’ work within seconds. No scanning, no transportation of finished tasks,…
