Jun 26, 2022
Today I'm finally going to get around to describing the four-node Chromebook cluster I built last year. I had always been interested in clusters, especially clusters built out of single-board computers. Due to the price of amassing several machines to cluster, however, it never seemed viable to try.
The Minimum Viable Chromebook
Some time last year, I came across several Dell Chromebook 11s that were being tossed. These devices had broken screens, keyboards, etc., and were no longer supported by ChromeOS. Because of this they were not worth fixing to sell, and had no usefulness left for their intended purpose.
These Chromebooks (Dell CB1C13) have the following specs:
- Intel Celeron 2955U
- 4GB Ram
- 16GB SSD
My idea to build a cluster out of these started with an attempt to find what I called the 'Minimum Viable Chromebook.' I started stripping parts off of a Chromebook motherboard until it wouldn't boot anymore. With this model of Chromebook it turned out that, unlike some other models, the battery, keyboard, daughter-board, screen, and touchpad (basically everything except the motherboard) were not necessary for the device to boot. These devices are actively cooled by a fan/heat pipe, and once the daughter-board is removed all of their connections are on one side, making them even more enticing for service in a cluster.
With all extraneous parts removed from my first subject (read: victim), I started working on installing Linux on the device. There are really great instructions available for this, making it a pretty simple process. The full set of instructions can be found on the Arch Wiki, but it basically goes as follows:
- Enable developer mode
- Become root
- Patch the BIOS (see the sketch after this list)
- Boot installation media and install as normal; some hardware may not work... some hardware may not be present
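In practice, the 'patch the BIOS' step means flashing replacement firmware. A minimal sketch of how that goes, assuming MrChromebox's firmware utility script (check the Arch Wiki and mrchromebox.tech for the current procedure and any write-protect steps for your specific board):
#!bash
# From a root shell on the Chromebook while in developer mode:
cd ~
curl -LO https://mrchromebox.tech/firmware-util.sh
sudo bash firmware-util.sh
# Choose the UEFI (or RW_LEGACY) firmware option, reboot,
# then boot your USB install media and install as usual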
I opted to install Arch Linux on this first machine because I wanted a lightweight system, and I had dotfiles already created to make a comfy but minimal environment. I dubbed this system the Minimum Viable Chromebook, or Minnie for short.
When life gives you Chromebooks... Cluster them!
We ended up having a lot of these boards headed for the dumpster. Since I was having fun installing Linux on them, and I could do one in a lunch break, I decided this might be my opportunity to build and experiment with a cluster.
Going with the theme set by Minnie, soon I had Mickey, Goofy, and Pluto.
The boards were all stripped, and their cooling fans mounted directly to the motherboards, reusing screws and standoffs from the cases they came out of. The boards were mounted to each other using brass standoffs and screws.
Let's talk about the layout
Now that we're talking about four Linux machines, we should start talking about network layout.
Since I did this work on my lunch breaks at my desk, and I didn't want this to actually be on our work network, I needed a way for these machines to be networked together while still having internet access for updates and software downloads.
Minnie, being the first machine, became the controller node. Minnie had a comfy/lightweight desktop environment that made it a really nice entry point to the cluster. My solution to the networking issue is as follows:
- All nodes get a USB NIC
- Minnie gets a makeshift WiFi antenna (ham radio trick: a dipole made from the coax) and connects to the hotspot on my Android phone
- Minnie runs a DHCP server on its Ethernet interface with NAT masquerading
- Static leases and hostfile entries for each of the other nodes
Now all the nodes need is power, and a connection to a network switch in order to be networked with the others and also have access to the internet. This makes expansion of the cluster REALLY simple. Provision a node, plug it in, give it a static lease, done.
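For anyone replicating this, here's a minimal sketch of the controller node's network plumbing, assuming dnsmasq for DHCP and iptables for the masquerading (interface names and addresses are placeholders, not my exact config):
#!bash
# /etc/dnsmasq.conf on minnie -- eth0 is the USB NIC facing the switch
#   interface=eth0
#   dhcp-range=10.0.0.10,10.0.0.50,12h
#   dhcp-host=aa:bb:cc:dd:ee:01,mickey,10.0.0.11   # one static lease per node

# NAT: forward cluster traffic out the WiFi uplink (wlan0)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT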
Software/OS provisioning on these devices is just like any other Linux cluster. I went with what I think is a pretty standard Beowulf cluster using Open MPI. I also ran these nodes in a Docker swarm so workloads could be set up either way.
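As a sketch of the Open MPI side, a hostfile listing the four nodes plus a smoke test might look like this (hostnames match the static leases above; two slots per node since the Celeron 2955U is dual-core):
#!bash
# hostfile -- one line per node
minnie slots=2
mickey slots=2
goofy  slots=2
pluto  slots=2

# smoke test: should print one hostname per MPI rank
mpirun --hostfile hostfile -np 8 hostname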
Okay... but what does it do?
I have no need for a cluster.
The whole point of this project was: because they were there, and because I could. As such, I never really had any interesting workloads to run on this thing. The only workload that ever got deployed was a Monte Carlo pi estimation that served as a demonstration of "Look how bad the estimation is with one node; now it's not as bad with four nodes."
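For flavor, the gist of that kind of workload, sketched here with mpi4py (not my original code):
#!python
# pi_estimate.py -- each rank throws darts at the unit square, root tallies hits
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
samples = 1_000_000  # darts per rank

hits = sum(
    1 for _ in range(samples)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)

# sum everyone's hit counts on rank 0
total = comm.reduce(hits, op=MPI.SUM, root=0)
if comm.Get_rank() == 0:
    print(f"pi ~= {4 * total / (samples * comm.Get_size())}")

Run it across the cluster with mpirun --hostfile hostfile -np 8 python3 pi_estimate.py; more ranks means more samples and a tighter estimate.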
The cluster is now an 8-core, 16GB-of-RAM paperweight, but an interesting proof of concept nonetheless!
Oct 11, 2021
Introduction
Hamshack Hotline
Hamshack Hotline (HH) is a SIP phone system set up by a group of amateur radio operators (hams). There are different use cases for HH; for me it is an excuse to run a PBX at home and play with phones.
My PBX
I run an instance of FreePBX in a VM on my main server (Dell R610). FreePBX is CentOS + Asterisk + a nice web UI.
Some of what I did to build my Weather Alert extension is done from the UI and some is done by manipulating Asterisk's config files directly.
Goal
My goal was to create an extension button on my hard phones (desk sets) that would play a stream of the local NOAA Weather Radio broadcast. I wanted the LED controlled by the Busy Lamp Field (BLF) for that button to light up with a specific pattern if there is a weather alert for my area. This gives me a visual indication of potentially hazardous weather affecting the area, and one-press access to the weather broadcast.
How-To
For this how-to I will assume you are running FreePBX; I'm on version 15.0.17.55.
Set up Music On Hold (MoH), queue and extension
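This step happens in the FreePBX UI, so in broad strokes: create a custom Music on Hold category that plays the NOAA stream, a queue that uses that category as its hold music, and an extension that drops callers into the queue. For the streaming MoH category, a 'custom application' along these lines is a common approach (the stream URL is a placeholder; substitute your local stream):
#!bash
# MoH custom application -- mpg123 resamples the stream to 8kHz mono for Asterisk
/usr/bin/mpg123 -q -s --mono -r 8000 -f 8192 -b 0 http://EXAMPLE_STREAM_URL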
Edit Config Files, Create Custom Device State
- Open /etc/asterisk/extensions_custom.conf for editing
- Add the following lines:
#!bash
[ext-local-custom]
exten => UNUSED_EXTENSION,hint,Custom:DEVICE
same => 1,Goto(from-internal,ABOVE_EXTENSION,1)
Substitute:
- UNUSED_EXTENSION: An unused extension (different from the one created above)
- DEVICE: A name of your choosing for the custom device state
- ABOVE_EXTENSION: The extension you created in the above section
Example from my PBX:
#!bash
[ext-local-custom]
exten => 101,hint,Custom:WXAlert
same => 1,Goto(from-internal,6000,1)
Install/Set-up control script
The control script queries NOAA's API for active alerts in a selected 'zone'.
#!console
usage: wxalert [-h] -z ZONE -d DEVICE
optional arguments:
-h, --help show this help message and exit
-z ZONE, --zone ZONE NOAA Weather Zone
-d DEVICE, --device DEVICE
Custom device state to target
If there is an active alert, the script sets the device passed via the -d flag to a state appropriate for the highest severity among the active alerts.
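To sanity-check the hint and BLF wiring independent of the script, you can flip the custom device state by hand from the Asterisk CLI (this assumes the devstate command provided by Asterisk's func_devstate module; substitute your own device name):
#!console
asterisk -rx 'devstate change Custom:WXAlert INUSE'      # lamp on
asterisk -rx 'devstate change Custom:WXAlert NOT_INUSE'  # lamp off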
Source code for the control script can be found here
The fastest way to install the control script is to sudo pip install asterisk_wxalert
Once installed you can add an entry into the root user's crontab so the script runs regularly:
#!bash
*/5 * * * * /usr/bin/wxalert -z ZONE -d DEVICE
Example from my PBX:
#!bash
*/5 * * * * /usr/bin/wxalert -z WAZ039 -d WXAlert
Your NOAA forecast zone can be found on this site.
It will be in the form WAZ039 (two-letter state abbreviation, the letter Z, then the three digits from the map of your state linked above).
Phone Set-Up
Phone set-up will differ between phones, but for the most part, pointing one of your BLF buttons at the extension created in /etc/asterisk/extensions_custom.conf (in my case 101) should get you the desired behavior. The BLF LED will illuminate with the status set by the control script, and pressing the button will call the extension, which immediately forwards to the MoH queue.
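As one concrete example (a hypothetical config, not copied from my phones), Cisco SPA-series desk sets take a per-line-key extended function string of roughly this shape; other phones expose the same idea through their own BLF menus:
#!console
fnc=blf+sp+cp;sub=101@$PROXY;ext=101@$PROXY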
Conclusion
Hopefully this gets you at least part of the way toward a working implementation of this system. I will treat this as a living document, and will update it based on my own experiences and feedback from anyone who tries to follow the directions. Feel free to email me at the address listed on my QRZ page, or message me on the HH Discord server.
Thanks and 73, Tyler - AG7SU
Jun 21, 2021
I've been experimenting with self-hosting, and hosting at home, for several years now. My 'lab' has evolved significantly over the years. Starting with the cheapest VPS provider I could find, I learned things like remote access via SSH and basic security (often the hard way).
Eventually my lab moved into my home and onto an old roadkill laptop. I started learning about Docker, reverse proxies, firewall rules, and networking. Services in this era of my lab consisted of 'off the shelf' containers and some applications I wrote myself. Lots of Python, Flask, and Docker. This 'server' (Carr-Lab1) is still running, but hosts very little now.
In April of this year (2021) I convinced my wife to let me buy some old enterprise gear. This newest iteration of my lab consists of a second-hand Dell PowerEdge R610 [2x 6-core CPUs and 32GB of RAM] running Proxmox as a hypervisor. With this new (to me) machine, I'm loving the hands-on experience I'm getting with gear I don't normally get to play with at work. Services running on this machine are Home Assistant (the 'supervised' version in its own Debian VM), hastebin, HedgeDoc, Plex, and Baby Buddy (we have our second daughter, Evelyn, on the way in July). A 5-node K3s Kubernetes cluster is running with Rancher as its only current workload. I run a virtualized instance of pfSense on the server as well, which lets me better isolate lab VMs and reduces how much configuration I do on my main network in support of the lab.
My home network, which is also a semi-experimental component of my lab, runs behind a bare-metal installation of pfSense on Netgate hardware (recently replacing a janky install on an Atomic Pi; FreeBSD's support for Realtek NICs bit me). I run VLANs for my main LAN (trusted devices), a guest LAN (untrusted, throttled), and a LAN for my IoT devices (REALLY untrusted, severely throttled). Due to where I live (rural), bandwidth is limited (3Mbps/1Mbps), so throttling the IoT devices keeps an untimely update from interrupting YouTube or other browsing. An app I wrote re-rolls the guest LAN passphrase, displays it, and pushes the update to my Aruba AP.