Installing BigBlueButton on Your Dedicated Server

Introduction

After struggling with a dedicated server from Strato Webhosting running Ubuntu 18.04, and playing around with schroot to get an Ubuntu 16.04 environment, I gave up on this solution. The systemd hurdle was much too high to be taken within the time I currently have available.

Finding the right server

The first act was to find a suitable dedicated server among various providers, from cheap to expensive. There were only a few, but hard, requirements: German location, 4+ cores, 8+ GB RAM, 100+ GB SSD, 400+ MBit internet connection, 1+ TB traffic. And the hardest requirement: Ubuntu 16.04.

At Hetzner I found a German provider. A bit expensive in general, but there is also a server auction section where you can find really valuable servers with great support and a nice price tag. They start at around 30€/month and include almost everything you could dream of.

Long story short, I ordered a dedicated server from Hetzner, located in Germany (Falkenstein, FSN1), and installed Ubuntu 16.04 using the rescue system. The automated installer of the rescue console offers (possibly) all available Linux derivatives. From the Hetzner Robot you already get a handful of supported distributions, but through the rescue system there are many more supported distros to choose from, not to mention the available unsupported ones.

The whole procedure took around 20-30 minutes from customer registration through ordering the server to getting it running. The Hetzner wiki is crystal clear, the installer is easy, and the whole setup is rock solid.

TL;DR

Setting up the server

To set up the server, you usually need to wait a few minutes until the server shows up in the so-called Hetzner Robot. There you can choose to start the rescue system with your SSH pubkey deployed (details in the Hetzner wiki)…

…and then do an automated reset.

Then log in to the rescue system via SSH and enter installimage.

Once this command is issued, a menu appears and guides you through the install procedure of the operating system of your choice (for BigBlueButton you need ubuntu-16-04-minimal). Amazingly, there is a ton of Hetzner-supported systems, but a megaton of unsupported yet available derivatives.

Welp, this menuconfig-style tool is a bit old-school, but it serves its purpose so well… I’m really fascinated. Within 5 minutes, the system is poured onto your server’s hard disk. When finished, type reboot and the server will boot the new operating system.

Setting up BigBlueButton…

…with Let’s Encrypt SSL, Greenlight and (almost) the whole configuration.

Just do a system update/upgrade…

apt update
apt upgrade

… and then you can start installing BBB…

wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh |\
    bash -s -- -v xenial-220 -s bbb.example.com \
    -e info@example.com -g

That’s it… When you change your Greenlight config in ~/greenlight/.env (e.g. to enable Google OAuth2), just follow this procedure:

cd ~/greenlight
vi .env   # do there what you want or need
docker-compose down
docker-compose up -d

If you see a 404 error when loading your page (https://bbb.example.com), just give it 30 seconds (or more if you did not follow the system requirements 🙂 ) to come up, and enjoy your conferences.
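
To verify that all components came up correctly, you can also use the bbb-conf helper that ships with BigBlueButton:

bbb-conf --check
bbb-conf --status

--check validates the server configuration and warns about common problems, --status lists the state of all BBB services.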

My first real world experiences

After some days, I had a “real world” video conference with 20 attendees, all using audio and 14 of them video. It was the first virtual classroom meeting of my daughter, related to my School’s out post. Most of the attendees were crystal clear, without interruptions. Some of them had minor audio and/or video stuttering, and two of them I could hardly understand. OK, there were a few who had problems with their own hardware (video and audio), but this was not related to BBB.

The stuttering connections originate mainly from WiFi links losing and delaying data packets. So, if you don’t have a chance to get wired, get as close as you can to your WiFi hotspot. There can also be a big difference between cheap consumer equipment and professional gear. I run a UniFi-based installation from Ubiquiti Networks that simply outperforms the FritzBox WiFi in every aspect (speed, reliability, configurability, VLAN capability, and so on).

Watching htop for a while also gives some interesting insights into what BBB expects from your server. With 20 attendees, many of them with the microphone turned on and talking, the load rises astonishingly. My quad-core Core i7 (8 virtual HT CPUs, see server details) was pushed up to 30% per CPU during this meeting. This mostly originated from freeswitch and kurento-media-server.

This means two things:

  1. It takes quite a lot of power to mix audio in realtime.
  2. The FreeSWITCH code seems to make very efficient use of multiple processors.

That’s both good and bad news for large installations. Bigger hardware with more cores is a constructive solution. But honestly, if you are planning to put more than 100 concurrent users on a server in a production environment, you should think about high-availability solutions (keywords: AWS Elastic Load Balancer, AWS RDS with failover and availability zones, …). But then the bbb-install.sh approach gets to its limits. However, it serves well as a starting point to get a test system up and to understand BBB’s and Greenlight’s architecture.

Some words about the client

If you know Zoom Meetings, which is a bit bullheaded about installing an executable so that you need to trick it to get the browser version of the client, you will find BBB a real pleasure.

To use the BBB server, nothing is needed besides a web browser. It supports audio, video and screen sharing (screen or application based), uploading a presentation, and watching a web video (YouTube, Vimeo, …) together while the moderator controls the content. I already tested it with a few friends, and we found out that it works best with Chrome and Firefox on Windows and Linux. But even on smartphones it runs fine, as long as you have a high-quality internet connection.

What next?

BigBlueButton advises not to install any other stuff besides BBB (and maybe Greenlight) on the server: because of the realtime audio processing in FreeSWITCH, every little delay can destroy the user experience of your video conference.

In fact, if you have enough cores and RAM and are running on an NVMe RAID 1, you can surely install other web applications on your server, as long as you don’t do heavy number crunching. If you can live with the risk that the other applications potentially influence the conference quality, there is no technical reason not to do so.

The configuration of nginx looks very clear and straightforward. You should be able to add another site (in /etc/nginx/sites-available) and activate it (soft-link it to /etc/nginx/sites-enabled/), as sketched below. The only thing I would advise is to run it on another subdomain (otherapp.example.com). For more encapsulation, and with a bit more effort, you can go with a second IP or whatever you like…
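
As a minimal sketch of such an additional site (server name, SSL snippet names and the backend port are placeholders for whatever your other application uses):

server {
  listen 443 ssl http2;
  server_name otherapp.example.com;
  include snippets/my_ssl.conf;
  include snippets/ssl-params.conf;
  location / {
    proxy_set_header Host $host;
    proxy_pass http://127.0.0.1:8080;   # wherever your other app listens
  }
}

Then activate it and reload nginx:

ln -s /etc/nginx/sites-available/otherapp /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx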

Backup thoughts

With a dedicated server, you should always keep your backup in mind. It is not as simple as creating a snapshot like in virtualized environments. If you build on e.g. ext4, your backup might break. If you want to create an online snapshot backup of your system block devices, better stick with an LVM base for your block devices or, if you like the bleeding edge, with btrfs.

Doing it the safe way

The safest way for sure is to stop all services and take a snapshot then. The most secure is to boot some rescue system, fetch the raw disk image snapshot and then boot back in the system. This is safe, but it could mean, that you need to do it manually. Hetzner is providing information for its Robot-API (the one that Hetzner Robot is using), but it could mean quite some work to implement it… and you need a second server (or a RaspberryPi at your home) that steers the backup process using the Robot-API and SSH commands (intersting thought BTW, maybe I’ll try it and write some new post about it one day)…
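
To give an idea what such a steering setup could look like: the Robot webservice is a plain REST API, so activating the rescue system and triggering a reset should boil down to two authenticated HTTP calls, roughly like this (a sketch only; <server-number> and the credentials are placeholders, and you should check the current Robot-API documentation for the exact endpoints and parameters):

curl -u "$ROBOT_USER:$ROBOT_PASS" \
    https://robot-ws.your-server.de/boot/<server-number>/rescue \
    --data 'os=linux'
curl -u "$ROBOT_USER:$ROBOT_PASS" \
    https://robot-ws.your-server.de/reset/<server-number> \
    --data 'type=hw'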

Doing it the less safe way

But there is also a quite safe way of doing your backup with all services running. There are five different parts to consider:

  1. The system environment (e.g. /boot –> block device based)
  2. The home directories (e.g. /root –> file based)
  3. The static system files (e.g. /usr/share –> file based)
  4. The variable system files (e.g. /var/lib –> file based)
  5. The database directories (dynamic –> dump based)

With this said, the process should be quite clear.

Firstly, you should back up your whole system (once you have finished installing your services) on block device level using a rescue system, just to make sure you can easily revert to a running state or bind-mount the block device snapshot to compare with your file-based backups.

Secondly, dump your database files to a place like root’s home directory, so the content becomes effectively static. Transaction-based databases usually dump a consistent state. It depends on the software how well the transactions are formed, but usually this should at least keep the database itself intact.
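
A minimal sketch of such a dump step for a PostgreSQL instance (the user, target path and date suffix are my assumptions, adapt them to your setup):

pg_dumpall -U postgres > /root/backup/pg_dumpall_$(date +%F).sql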

Thirdly, back up the files from all other directories (excluding temporary ones) using an efficient approach (e.g. rsync, rsnapshot or borg) over to a safe place.
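
A minimal rsync sketch (the exclude list and the target host are assumptions; rsnapshot or borg would add deduplicated history on top):

rsync -aAXH \
    --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*"} \
    / backupuser@backuphost:/backups/bbb-server/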

And last but not least, check your backup conscientiously, or better, try a restore. For a production environment, a regular and automated restore test is indispensable. To be honest, for a real production environment with hundreds of users, a simple dedicated server is not the right way to go.

Doing it the Chuck Norris way

A quite risky, but mostly working, way is to run a snapshot tool simply on the running system. As stated on the borg page, most directories are stable enough to be backed up using an rsync approach. If you do your first backup (which takes quite long) with all services stopped and then only do incremental backups (which are lightning fast), there is a low chance of e.g. a home directory changing during the backup process. To avoid inconsistent databases, just do a fresh dump of your databases beforehand (e.g. to the backup user’s home). You can further defuse the risky parts with snapshots (LVM, btrfs) to lower the chance of unexpected changes. And moreover, you can stop the relevant services.

For the Docker containers (postgres and greenlight), keep in mind that it is good practice for Docker services to store their data in a bind mount that is kept on the host system. This means that the thoughts on the database dumps cannot be relaxed and may get even more complicated, as in the sketch below.
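
For the default Greenlight setup, such a dump could look like the following (the container name greenlight_db_1 is an assumption from the default docker-compose naming, check docker ps for yours):

docker exec greenlight_db_1 pg_dumpall -U postgres \
    > /root/backup/greenlight_$(date +%F).sql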

Installing Moodle on the BBB-Server

The next step is to install Moodle on this server and migrate all the users and courses from the old one (see my post about its installation) to the new one (stay tuned).

Have fun with BBB and Greenlight 🙂

How To Debug Hardware-Faults on Your Dedicated Server

While installing some stuff on a dedicated server at Strato, I ran into a problem with the server two times. While I had no clue what happened the first time, I was prepared the second time: I kept a connection to my provider’s serial port proxy open, to see the most low-level messages (kernel printk output).

And you won’t believe it: after 13295 seconds of uptime (3 hours 41 minutes), the kernel had a panic.

This is a great indication to run a hardware test and to contact the support of your hosting provider.

School’s out – How to Tame Your Children

Introduction – Historic Reasons for this Post

Here in Germany, the executive decided to lock all public life down to a minimum (only system-relevant shops are allowed to open, and each state in Germany has different exceptions), and it seems that all countries are doing the same. This means many people need to work from their home office, and if you have kids, they “join you at work”. And as if that’s not enough, they refuse to learn anything for school. Parents are simply not made to guide their kids through school stuff.

After some days of lockdown, my daughter’s teacher sent around a voicemail telling the kids she misses all of them and that they should do at least some tasks before the Easter holidays (from 2020-04-04 to 2020-04-19) start. I was amazed at how my little girl changed. She stopped every activity and listened to the teacher telling her the tasks she needs to do, in a way she would never listen to me. But as quickly as it came, the magic was over…

Welp, what can I do as an engineer to help myself and my girl out of this…? Right! Help the school raise a digital classroom. Technically no problem, but in Germany, most problems are not technical. Most problems arise from privacy protection and simply the reluctance of officials to deal with the work. But this shall not be the topic of this post. The solution is to be very bullheaded and to explain the situation based on science (“Corona is just now starting”, German video).

A short side note: in Bavaria we have a system called Mebis that was intended to complement daily school with digital information and courses. Teachers can even collect tasks and share courses. Unfortunately, there are no means of communication for whole classes, or at least a simple video and/or audio distribution. The highest level of group communication is a chat for 6 persons at most. So, for the little ones (grades one to four), the system needs to be steered by the parents.

This means that all information for our children was distributed by email or some WhatsApp group (unofficially, because of the privacy laws). Lately, the teacher has also set up a class in the Anton app. But even this is not a good solution for the little ones…

Then, WHAT Would be the Solution?

So, what would you do as a technician if you expect this lockdown to continue after the Easter holidays (as it is a logical conclusion from the above video)? Right! Evaluate what exists out there and find a solution that fulfills the requirements for the lower grades:

  • Audio/video communication (virtual classroom)
  • Distribution and collection of task sheets/images (not digital courses; little ones cannot operate the keyboard efficiently)
  • Easy-to-setup courses
  • Complies with GDPR (DSGVO) –> runs on your own server or in a German cloud
  • Easy to use
  • Cheap or free to use

TL;DR

I quickly found a digital classroom solution that fulfills the requirements: Moodle (BTW: there is also MoodleCloud, but it does not comply with the GDPR). I decided to run it on my home server (HP ProLiant Gen 7 + 100M DSL). So the architecture is quite simple: a reverse proxy VM as the exposed host and a Moodle VM connected to an internal proxy net (10.0.0.0/16).

My proxy is using Nginx in a very minimal configuration, and Moodle is protected and accessed through the proxy from the outside world. The proxy net is only connected to the services that shall connect to the public internet.

The installation of Moodle is straightforward, and following the how-to (German how-to) leads to the expected results. I used MariaDB as the database and Nginx as the web server.

Besides MariaDB and Nginx, you need to install php-fpm. Which additional PHP modules you will need is perfectly evaluated during the Moodle web setup, so you should simply keep a console open to run sudo apt install <required-module>.
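
Just as an orientation, a typical set for Moodle on Debian looks like the following (the exact list is an assumption; the web setup will tell you what is really missing):

sudo apt install php-fpm php-mysql php-xml php-mbstring \
    php-curl php-zip php-gd php-intl php-soap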

If you want to run all this through a reverse proxy, don’t delay its configuration to a point in time after the Moodle web setup. The setup will determine the URL it was accessed through, and you would need to change this in the config afterwards (the $CFG->wwwroot setting in config.php)…
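
In config.php this boils down to a few lines like these (a sketch; the hostname is a placeholder, and $CFG->sslproxy is Moodle’s documented switch for SSL being terminated at the proxy):

$CFG->wwwroot  = 'https://moodle.example.com';
$CFG->sslproxy = true;  // SSL is terminated at the reverse proxy
// depending on your setup, $CFG->reverseproxy = true; may also be needed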

One of the problems was getting Moodle to run smoothly through the reverse proxy. It had some minor problems resolving the CSS, JS and all indirectly served PHP files. Here is my proxy configuration to make it work (I assume you already did your SSL config and certificates correctly; hint: wildcard certificates for your domain make everything much easier).

Nginx proxy config (for reverse proxying the Moodle host)

server {
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name moodle.<mydomain>;
  include snippets/my_ssl.conf;
  include snippets/ssl-params.conf;
  root /var/www/html/;
  location / {
    # forward the original host/port so Moodle can build correct URLs
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
    # allow large uploads (photos of completed task sheets)
    client_max_body_size 100M;
    proxy_pass http://10.0.2.20:80;
  }
}

Nginx conf on Moodle host

server {
  listen 80 default_server;
  listen [::]:80 default_server;
  root /var/www/html;
  index index.html index.htm index.nginx-debian.html index.php;
  server_name _;
  location / {
    try_files $uri $uri/ =404;
  }
  # Moodle serves files through URLs like /pluginfile.php/extra/path,
  # so PHP must be matched with and without trailing path info
  location ~ \.php/.*$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
  }
  location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
  }
}

You only need to unzip a Moodle distro package to /var/www/html, and as soon as you point your browser to https://moodle.<your-domain>, the setup process will show up. Now follow the guide and install the packages Moodle needs.

If you keep everything at the default values, it will only be possible to upload files up to 2M in size. This is not sufficient if you expect parents to upload photos of the completed task sheets, and it is also sometimes too little for PDFs or images you want to share. Therefore, you need to raise the limit for Nginx and php-fpm. On Debian 10 buster, this is done in:

  • /etc/php/7.3/fpm/php.ini
    • upload_max_filesize = 50M
    • post_max_size = 50M
  • /etc/nginx/nginx.conf (section http)
    • client_max_body_size 50M;

On the proxy, this is already included in the config above.
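
After changing these values, reload both services so the new limits take effect (service names as on Debian 10):

sudo systemctl restart php7.3-fpm
sudo systemctl reload nginx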

Lessons Learned

Some of the dependencies of Moodle and its plugins are not hard requirements during installation, but they will show up when you use Moodle for a while. If you don’t want to bother figuring out why these dependencies are needed, simply run (as root or with sudo):

apt install unoconv ghostscript graphviz 

A bug in unoconv release 0.7 (the version shipped with Debian 10) prevents unoconv from creating its own listener and its temp files. To circumvent this, we simply create a little wrapper script:

vi /usr/bin/unoconv_wrapper.sh

Put in the following content:

#!/bin/bash
# run unoconv with a writable HOME so it can create its listener and temp files
HOME=/tmp /usr/bin/unoconv "$@"

And make it executable.

chmod 0755 /usr/bin/unoconv_wrapper.sh

Now enable unoconv and adjust the path setting in Moodle’s unoconv configuration to point to the wrapper /usr/bin/unoconv_wrapper.sh.

Afterwards, enable the unoconv document converter in the site administration (search for unoconv to find the settings).

If you want to use PDF annotation for all supported file formats, you need to select it in the feedback types of the activity.

You can also select it in the default settings, so you don’t need to change it for every activity you add to a course. Unoconv then converts different types of files to PDF, and as a trainer, you can simply annotate these PDFs online. If you don’t use unoconv, you need to download e.g. each and every image or other document to your PC and send your comments back to the student in a different way, e.g. as a file upload or as a textual comment.

Happy teaching

As soon as your Nginx runs, you possibly wish to use your Moodle. One of the first things I did was to install the BigBlueButton add-on for video conferencing. Blindside Networks thankfully provides a free conferencing server for Moodle. This means that all the audio and video traffic is not flooding your server (which is why I can run it at home). If you wish to use your own on-premises solution, I would give OpenMeetings a try.

Conclusion

The setup of Moodle is quite simple, even if there are little pitfalls with the reverse proxy configuration. It also takes some time to get used to working with the system. But creating simple courses and putting in some PDF files is really dead easy. The next levels are collecting the results (the trainer’s task) and adding little extras (e.g. badges) to motivate children to learn.

All this said, I wish you happy teaching, and I would be happy to receive comments on it.

Here is a little hint of how a course could look with content (tasks as PDFs and a place to put the pictures of the completed task sheets).

Update

Calendar week 17 – Three weeks after introducing Moodle

After three weeks and a (partly) official video conference using BigBlueButton (see my other post about installing BBB), more and more children tend to register on this platform. And it seems to be the only available platform where all that stuff is organized well, corrections can be done easily, and even grading is possible. Additionally, with the Moodle Android and iOS apps, the camera can easily be used to submit the pupils’ work within seconds. No scanning, no transportation of finished tasks, …

Integrating wmBus devices into iobroker

After my quite expensive MH-Collector (identical to the easy.MUC from solvimus) died (it survived just a little longer than the warranty protects), I decided to collect my wmBus devices‘ data with a home-brewed solution. I’m also the owner of a Ubiquiti US-24-250W, so going with PoE supply was an easy decision. So what could be more obvious than using a Raspberry Pi 3B+ with PoE HAT and a USB wmBus stick?

No sooner said than done: I bought the parts, installed Raspbian and an iobroker slave. The iobroker master is now running on a Debian 10 buster VM on my tiny HP ProLiant server… How to install the iobroker slave can be found here.

wmBus Hardware

For me, the appropriate hardware was the IMST iM871A-USB (you can buy it directly from IMST or from tekmodul). This wmBus stick provides a serial interface (e.g. /dev/ttyUSB0), is quite cheap and is supported by most open source wmBus software. But here comes the tricky part. There are quite a few paths you can go, but with my Messhelden heat cost allocators, I found myself in a very frustrating situation. These devices stick to the OMS standard for most of the telegram, but unfortunately do some very shitty stuff at slots 2 and 3, so many decoders fall out of sync just after the first data slot.

wmBus Software

After trying different iobroker adapters (like iobroker.wm-bus) and also daemon solutions (like wmbusmeters) sending the data to an MQTT broker (a server is easy to spin up in iobroker), I ended up with iobroker.wmbus (beware of the dash, it is not the same as above). Somehow the author of this adapter managed to cope with the inconsistencies that I could not even decode manually, looking at each single bit and byte of the wmBus telegram.

After attaching the iM871A-USB stick to the Pi and placing it at a location where it can receive all the meters you are interested in, you need to install and configure the iobroker adapter iobroker.wmbus.

Adapter Configuration

The adapter configuration is also quite easy.

It could also be that your slaves need other modes to be received. One widespread mode for battery-driven devices is mode C. Unfortunately, a single stick cannot receive multiple modes. But usually you only run devices with a single mode. Another important setting is the baud rate. For the IMST device, it needs to be 57600. The stick contains a serial converter that attaches the IMST module through a real serial connection.

Add Encrypted Meters

After finishing the configuration and starting up the adapter, it is time to have a look at the log. There you will see if the adapter started up correctly. If it did, you should soon see a line saying “Updated device state: <MANUF>-<ID>”, or an error saying that it could not decrypt a telegram due to a missing decryption key. If the latter occurs, go to the adapter configuration again. There you should see a new entry with the key “UNKNOWN”. Place the correct key there and push “Save”.

The following telegram of that device should be decrypted correctly, and a new state will be created within the object tree of iobroker.

If you see other unencrypted devices polluting your object tree, or devices filling your log with failed-decryption messages, simply put them under the “Blocked Devices” tab in the adapter configuration. My Pi can see at least 20 unencrypted Techem water meters and heat cost allocators.

Let’s encrypt (also on wmBus)

I don’t know how they can survive in the era of the GDPR (General Data Protection Regulation), but they are still in no hurry to encrypt their telegrams with a device-unique key. I think it is a security issue when burglars can easily find people who do not heat in wintertime or currently have no water demand. But at least Techem sticks closely to the OMS. If you rent a flat that still has unencrypted wmBus meters, I would definitely demand encrypted meters. Even if the encryption of wmBus has some weaknesses, it is by far better than plaintext.

Merging the Contents of Two InfluxDBs

Ever had the problem that data runs into two different InfluxDB databases and you want to merge them into a single one? You wonder why this can happen? Then just think about migrating a data acquisition project from one server to another without downtime, by spooling the acquired data into both DBs for some time, or simply setting up a fresh system because the old one is slowly dying of low performance. This happened to me with my smart home system, running iobroker and an InfluxDB on it. The old one ran on an ODROID-HC1 with only 2 GB RAM. The new one is a Debian 10 VM on my brand new HP ProLiant running XCP-ng. (OK, it’s not new, it’s used hardware, but for private use, it is a monster 🙂 )

For sure, there exist great tools from the Influx inventors, but setting those up is way too much for a small project. So I decided to go the easy way: first do the absolutely necessary (bring up the new system) and later pull in the historical data.

Backing up and restoring iobroker is well documented, works smoothly, and will not be part of this post. For Influx, I just set up a new instance with a copy of /etc/influx.conf and started the service. Then I started iobroker, and everything worked like before, except for the availability of the historical data.

After endless searching throughout the web and many tries to export the DB as CSV, JSON, …, I found a set of scripts from ETZ that helped me halfway through the backup and restore process.

The key to success in the end was to restore the DB on the new server not under its original name, but to rename it. In my case, I called the historical DB on the new server iobroker_old.

Now I had all the data I wanted to import on the new instance, but how could iobroker find it? It won’t! I needed to do an internal import within InfluxDB to migrate the iobroker_old data into the iobroker database.

To export the DB from the old system, just issue (on the old system):

$ backup-influxdb
$ scp /var/backups/influxdb/2019.... <myuser>@<newserver>:~

On the new server, then do:

$ restore-influxdb-database-online \
    ~/2019... \
    iobroker iobroker_old

The command to my success was the following (with help from a GIST; before you run the command, taking a snapshot of the system may be good advice):

$ influx -database iobroker_old -execute \
   'SELECT * INTO "iobroker"."global".:MEASUREMENT FROM "iobroker_old"."global"./.*/ GROUP BY *'

To make this succeed, I needed to raise the RAM of the VM to 16 GB. With 4 GB, it simply swallowed the swap and ran into a timeout. Afterwards, I turned it back to 4 GB and everything runs fine.
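
To sanity-check the merge, you can compare the measurement lists of both databases; after the import they should be identical (a quick sketch using the influx CLI):

$ influx -database iobroker -execute 'SHOW MEASUREMENTS' | wc -l
$ influx -database iobroker_old -execute 'SHOW MEASUREMENTS' | wc -l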

Headless Rescue System over SSH

Again, I stumbled over a problem… I needed to take a backup disk image of a headless bare-metal server before running a risky update of Atlassian tools. And yes, it was one of the old servers in my network, not being virtualized.

I tried a lot of different solutions, including Debian (and derivatives) with the automated installer and its network-console, but it did not lead to working SSH access. So I searched a lot and came across ploplinux. Just put it on a USB stick and run it. It will start up with an SSH server and a default password for root (root:ploplinux).

To find your server in your network, just run (on another Linux machine):

nmap -sn 192.168.1.0/24 | grep ^Nmap > network_snapshot.txt

and, when ploplinux has booted, run this one-liner…

nmap -sn 192.168.1.0/24 | grep ^Nmap > network_rescue.txt && \
      diff network_snapshot.txt network_rescue.txt | grep ^\>

…to find the IP address of the running ploplinux. It will show something like:

> Nmap scan report for XYZ.fritz.box (192.168.1.30)   # Some other machine
> Nmap scan report for 192.168.1.124                  # This is ploplinux
> Nmap done: 256 IP addresses (39 hosts up) scanned in 7.85 seconds 

Then I logged in via SSH on 192.168.1.124 as root, using ploplinux as the password, and was able to do what I needed.

Mounting an NTFS drive (some large USB drives use it) needed the command mount.ntfs-3g <drive> <mountpoint> to work…

BTRFS-Backup (using squashfs)

Taking a raw image of a partition with BTRFS is not as convenient as it seems. To get an image of the contained filesystem, it is easier to mount it directly and run mksquashfs on it. The result is easily mountable on any Linux that supports squashfs. To create the backup, run the following commands, replacing sda1 with the device your filesystem resides on. If you connected a large USB drive to put your backup on, cd there first…

cd <where_some_space_is_available>
mkdir /mnt/tmp
mount -o ro /dev/sda1 /mnt/tmp
mksquashfs /mnt/tmp backup.squashfs

To test your backup image:

mkdir /mnt/test
mount -o loop backup.squashfs /mnt/test

and examine the contents of your mounted filesystem tree. More information about squashing filesystems can be found here or here. If everything is fine, the backup is finished… Don’t forget to eject your USB drive cleanly, so unwritten cache does not corrupt your data: unmount your USB drive or use eject <usb_device>.

Happy plopping 😉

OrangePi 4G-IoT Android 8.1 SDK

As with the 2G-IOT, OrangePi provides only a quite inconvenient way (through mega.nz) to get the Android SDKs for the 4G-IOT… Here are the torrents (tar.gz and tar.xz have the same content, but xz is much smaller).

Have fun…

OrangePi 2G-IOT Android 4.4 SDK

Getting the Android SDK

Since I just struggled getting the Android SDK for my little OrangePi 2G-IOT, I felt responsible for sharing it via BitTorrent. The main problems were downloading all that stuff from mega.nz (it rate-limits users not paying a monthly fee) and concatenating all 7 files into a working tar.gz. I think the wildcard they use in OrangePi’s user manual does not expand the filenames of the parts in the right order, and then weird errors occur.
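
If you run into the same issue, forcing an explicit, naturally sorted order avoids it (the part file names are assumptions based on the manual’s naming scheme):

cat $(ls -v OrangePi_2G-IOT.tar.gz.*) > OrangePi_2G-IOT.tar.gz
tar xzf OrangePi_2G-IOT.tar.gz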

If you encounter problems with not being able to create symbolic links when unpacking, just use a real file system as the base for your unpacked archive (e.g. ext4, btrfs, …, but NOT NTFS or FAT).

Here you can find the torrent: 

 https://downloads.the78mole.de/OrangePi_2G-IOT.tar.gz.torrent

Getting the toolchain

Another weak point of OrangePi’s SDK is the lack of a toolchain. They have a large download, but it does not contain a toolchain. So just download the correct one from Linaro (take the appropriate one for your system; mine is x86_64), unpack it inside the folder where the empty toolchain folder is located (~/somewhere/OrangePi in the example), remove that toolchain folder (rmdir should succeed after a fresh unpack; when it is already a link, use rm) and set a symbolic link to the fresh Linaro one.

$ cd ~/somewhere/OrangePi/
$ wget https://releases.linaro.org/components/toolchain/binaries/latest-5/arm-linux-gnueabi/gcc-linaro-5.5.0-2017.10-x86_64_arm-linux-gnueabi.tar.xz
$ tar xf gcc-linaro-5.5.0-2017.10-x86_64_arm-linux-gnueabi.tar.xz
$ rmdir toolchain
$ ln -s gcc-linaro-5.5.0-2017.10-x86_64_arm-linux-gnueabi toolchain

Running the kernel build

Just stick to your user’s guide again and run:

$ ./build.sh

Windows (was) just a pain

Since I use Linux at home and love to develop embedded, backend and (web) frontend software within a real operating system, I sometimes go crazy at work when I search for an alternative to a simple command line… So, what’s the alternative in Windows? The Command Prompt, then PowerShell, or some specialized, magic, voodoo GUI application with the worst design ever seen in the universe and beyond?

OK, I see, you need an example 😉 Here is one of my favorites: synchronize a 100 GB folder from one machine to another when one is at the end of the world, connected by avian carriers (see also RFC 2549 – IP over Avian Carriers) with a perceived rate of 2 bits per hour. 22 years ago, rsync was invented, and it serves every (unidirectional or pseudo-bidirectional) synchronization desire with a nearly infinite amount of options… But it is not (directly) available for Windows…

Welp, some time ago there was Cygwin, which was driven by Red Hat. OK, it is still being developed, but somehow I feel it does not serve a developer’s needs very well. I also found MinGW and MSYS some years ago, but as I ran into trouble with wget and rsync when handling large files, I tested MSYS2 (it already includes MinGW-32 or MinGW-64, whichever you prefer). To my surprise, it is exactly what makes my heart pound faster. All previous POSIX/Linux/Universe/Multiverse/… compatibility layers for Windows had some GUI to select packages, run updates, … But not MSYS2! It uses a pacman port. OK, I didn’t use pacman before unless I was forced to (I prefer APT), but it is COMMAND LINE and it runs on Windows (64-bit).

And even better, it does the SSH configuration well (a problem in the old MSYS), so that you can generate and use an SSH key, which makes rsync even more powerful…

Hey guys of MSYS2, whenever you pass by in Germany (Erlangen), I will buy you some beer at Steinbach Bräu. This brewery is to beer what you are to Windows. Simply the greatest enrichment 😉

TL;DR

So, if you would like to use rsync and wget on Windows, just install MSYS2, do the obligatory update described on their page, and execute the following:

pacman -S wget openssh rsync
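
And as a small usage example of why this combo shines (host and paths are placeholders; under MSYS2, Windows drives appear as /c, /d, …):

ssh-keygen -t ed25519                      # generate a key pair once
ssh-copy-id user@backuphost                # deploy the public key
rsync -avz --partial --progress \
    /c/data/bigfolder/ user@backuphost:/backup/bigfolder/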

Have fun with Bash, SSH, rsync and all the other cool Linux tools on Windows!