Chatting DoOO, Open Source, and More with Ed Beck

Early last week Ed Beck asked if I would be willing to talk with him (and by extension his class) about some advantages (and limits) of open source applications in education. I think Ed has been following my recent championing of PeerTube, so I figured form and function make the best argument: I both streamed and recorded this session to PeerTube via a self-hosted instance of Jitsi Meet—talk about the power of open source. Reclaim Media!

One of the reasons I enjoyed this conversation was that it articulated some of the thinking we have been doing at Reclaim Hosting about how certain open source applications like PeerTube (video management and streaming), Azuracast (web radio), and Jitsi Meet (video conferencing), to name just a few, might be understood as a package of open source tools that can be hosted quickly and easily on Reclaim Cloud.

In fact, this was part of the conversation in today’s Reclaim EdTech meeting, wherein we were thinking through how something like this could help a student media organization that is trying to support students running their college radio station or even a TV/video streaming service. Indeed, Grant Potter’s work as an edtech back in 2010/2011 at UNBC helping students run a web radio station resulted in the mighty ds106radio!

And in this rich edtech tradition, it was cool to hear how Taylor Jadin worked with his campus radio station at St. Norbert College to get them up and running on Azuracast at a fraction of their current hosting costs, not to mention better software. What if Reclaim could reach out and support student media outfits on campus by pointing them to these tools and providing both hosting and support? Reclaim Student Media!

Anyway, it was a fun conversation with Ed, and by proxy his class, that led to more exciting conversations at Reclaim. I appreciate the invite to chat because reflecting on these open source explorations at Reclaim inspires me to keep imagining the possible. There is some real value in thinking through a suite of next-generation open source tools for the field of education, and it continues to be a very valuable way to spend our time.

Posted in open source, PeerTube, reclaim, Reclaim Cloud | Tagged , , , | Leave a comment

Re-visiting How we Work at Reclaim

Lauren Hanks has been working on a really cool space that will highlight the various channels through which we are building community at Reclaim Hosting. It’s still under construction, but I’m sure Lauren will share the magic, as well as the process, on her blog at some point soon given it’s coming along beautifully. While she was going through some old Reclaim posts/announcements she came across this post I wrote in July of 2015. For some reason 2015 still seems like yesterday, but it was 7 years ago now. Crazy.

Thinking about how we work at Reclaim

For context Reclaim had 3 employees when this was written: Tim, myself, and a freshly hired Lauren. I had just given my notice at UMW and was transitioning out of that role and jumping off the cliff of the self-employed. Antonella and I were also preparing to move the family to Italy for what we thought would be “just a year.” It was a moment of transition for sure.

Screenshot of Gusto's admin panel

Gusto’s admin panel

One of the interesting things about that post from 2015 is the discussion of the tools we were using at the time. For example, we were investigating the payroll tool ZenPayroll, which would soon after be re-branded as Gusto. We did decide to use Gusto rather than the more established ADP, and it was one of the best decisions we made as a young, distributed, and growing company. It’s like a payroll department that pays everyone, tracks vacation/sick time, files quarterly paperwork, and now even enables things like employee reviews, extensive benefits packaging, and much more. I’m not exaggerating when I say that Gusto makes much of what we do in the realm of human resources possible for less than $200 per month; it’s truly revolutionary software for a small business like ours.

Screenshot of Reclaim's Slack

Reclaim’s Slack

We were just starting to use Slack in earnest in 2015, which meant finally getting our daily management of Reclaim out of Twitter DMs 🙂  We had just signed our 10th Domains school and we needed to get organizized.

Slack has been the central pillar of growing and managing Reclaim, but at a certain point it starts to get too big for managing projects and makes it hard to find things. We have continued to tweak and prune, but it remains home base for us to check in and out, provide internal help on tickets, and shoot the breeze by the water cooler. The best thing we did on the Slack front over the last year or so was to stop pulling in all support tickets. Too many ticket notifications increasingly made Slack a space of dread, especially early on when we were much smaller. Changing it so that only escalated tickets show up in Slack shifted the tenor of the space significantly. Not only did it cut down on notification fatigue, it meant not every notification was a potential problem. And while we still use Slack channels to monitor our servers for downtime and malware, it’s far less mental overhead on the regular, which helps balance Slack as both a space for monitoring and a space to congregate, get help, and share as a team. That rather small tweak has made a gigantic difference. Another thing to mention here is that Slack huddles are absolutely solid gold for collaborating internally—Meredith Fierro uses them like a boss.

Screenshot of My Zendesk Admin interface

My Zendesk Tickets

The only tool listed in that post 7 years ago that we no longer use is Intercom, and there’s no looking back on that one. Chat-based tools for hosting support are a recipe for disaster. Nothing is worse than Tweet-length responses with next to no detail, which make the investigation process that much more laborious, not to mention the expectation that support is always immediate. Our time using Intercom ended in 2016 after signs that Tim was going into a chat-induced coma 🙂 We looked at a few tools and decided on Zendesk for support tickets, which are all done via email. That’s been a really good choice. While Zendesk is not cheap, it’s money well spent given support remains our bread and butter, and we can add additional tools like Zendesk Sell pretty seamlessly for managing customer relationships, which has become crucial to our growth at Reclaim. When you have 200+ large accounts you need to start tracking all the various details to make sure folks aren’t lost in the crowd. In fact, Zendesk Sell ranks amongst the software we added after 2015 that has become indispensable.*

Image of Asana admin area

Asana work requests and more

Asana, one of the applications discussed back in 2015, plays a much bigger role at Reclaim than I would’ve ever predicted seven years ago. I joked at the time that it was the “anti-Slack” because it’s more focused on lists and tasks than a stream of notifications and updates, and Lauren really has made this application work to help us both organize and streamline projects, work requests, and more. In fact, she’s written a brilliant blog post about Asana workflows that takes an in-depth look at how we use Asana to great effect. In short, it helps us manage long-term projects, work requests for infrastructure, edtech workshops, and much more.

Reclaim in Whereby circa Winter 2021

Thinking about other tools that have become crucial since 2015, and asking other folks at Reclaim, quickly made the list unruly. I’ll start with our OG video conferencing tool Whereby (it used to be appear.in), which we’ve stuck with despite the seemingly unavoidable need to use Zoom by the rest of the known world. We have since been split between Whereby and Jitsi Meet (a nice open source alternative Taylor has been championing to great effect), and I can see us going Jitsi Meet entirely at some point, but Whereby is so cheap and has been so solid it has been hard to leave them entirely; even as a backup they would be well worth the $18 per month.

Early testing with Jitsi Meet on Reclaim Cloud two years ago

It’s almost invisible at this point, but there is no way we could run Reclaim without Google Workspace. Google Docs are always useful, but running Gmail to manage employee email has been essential. Thankfully Tim knew when we started that self-hosted email, even for a hosting company, was a non-starter given the blacklist problem that only recognizes power and money. So it’s Google all the way ’cause email is still the killer app, and this is something we are trying to communicate to Reclaim Hosting customers given it is increasingly difficult to manage your own email without risking being blacklisted and effectively cut off from communicating via email.

Speaking of which, two other email apps have become crucial for our workflow when it comes to outreach and announcements, namely the transactional, API-driven email service Mailgun and the campaign-driven email service Mailchimp. Mailgun delivers our Reclaim Roundup newsletter as well as transactional emails, like new client welcome messages, from our client portal WHMCS. It delivers thousands of emails for us a month with few failures at a very affordable price point.

Mailchimp is a campaign-based email tool that allows us to do targeted outreach with admins quite well, but it can get expensive to use for thousands of clients, so there are some limits—but its various integrations, templates, and ease of use do make it a valuable tool in the Reclaim shed.

Image of 1Password icon

Another tool that is similar to Google Workspace in that we could not really function without it is 1Password. I am not sure why I didn’t mention it back in 2015, because I’m sure we were using it. This tool is absolutely ground zero for managing hundreds of logins that need unique, secure passwords shared across a group with various levels of access for different apps. I often joke that if I were doing edtech at a university I would be proselytizing this tool as ground zero for web literacy, namely organizing your web credentials securely.

Another tool that has become indispensable for how we work is Cloudflare. In particular, Cloudflare’s Zero Trust Network Access has been crucial in helping us secure our server fleet by locking down SSH. Chris Blankenship wrote about his work locking down SSH access to our servers, and I am sure we will be doing much more of this in the coming year or two. I have also been using Cloudflare’s load balancing to test fail-over in Reclaim Cloud, and I am increasingly using Cloudflare’s DNS service to manage various domains that point to public IPs on Reclaim Cloud. Cloudflare is a whole world that I think will play an even larger role seven years from now.

This post started small enough but has quickly become an unordered list of tools we use at Reclaim, and I’m not even going to dive into our core infrastructure running through WordPress/WHMCS/WHM for most of our cPanel-driven shared and Domain of One’s Own servers. And then there’s Virtuozzo (previously Jelastic, but re-branded after the acquisition), which we now use to run Reclaim Cloud.†

Image of Reclaim Roundup icon

Reclaim’s Roundup newsletter running on Ghost

The last set of tools that have become increasingly crucial to our daily work deal predominantly with our evolving community engagement. I have already mentioned Reclaim Roundup when talking about Mailgun, and while only in existence for 9 months, I feel like this newsletter driven by Ghost has become foundational for our broader outreach, all hail Pilot Irwin! In fact, our use of Ghost inspired Taylor Jadin to build a pretty awesome installer for the tool for Reclaim Cloud.

Image of the OERxDomains21 Schedule

MBS’s TV Guide-inspired design for the OERxDomains21 Schedule

On top of that we have worked with Tom Woodward and Michael Branson Smith (MBS) to design a conference platform for OERxDomains21 that integrates Streamyard, YouTube, and Discord using a headless WordPress site, along with some custom code from MBS, to build the foundations of what is now the interactive space for Reclaim’s EdTech group, which has been running workshops, monthly events, and more. The integration uses Streamyard as the web-based production space and pushes the stream out live via YouTube, while the live chat conversation can be integrated and managed through Discord. It’s a pretty slick integration, but another tool we have been experimenting with a lot, and one I hope becomes foundational, is the open source YouTube alternative PeerTube.

Image of Peertube homepage

Reclaim control of your videos with PeerTube

We still manage our forums using Discourse, but we no longer manage the support docs and all elements of our community through that channel. In fact, we’ve been intentional about becoming more faceted in the various channels we use for support, community engagement, and general announcements at Reclaim, but you’ll have to wait for Lauren’s post about the new Reclaim Community site I alluded to at the beginning of this post.

If nothing else, I learned a couple of things through writing this long, wandering post: 1) integrating various technologies is crucial to making Reclaim Hosting work, which reinforces why showing students how the web actually works through linking and integrating is an essential literacy in the 21st century; and 2) we are deeply dependent on various pieces of software, which reinforces the value of open source as the pricing for many proprietary applications continues to go through the roof—making open source alternatives not only interesting but almost essential.

*It’s also worth mentioning we moved all our Support documentation from Discourse to Zendesk’s Guides over a year ago as well, and that has helped our ticket system and support documents integrate cleanly.

†There’s also Digital Ocean and OVH for our data centers, and Bitninja to defend our servers from malware attacks, which has been crucial.

Posted in open source, reclaim | Tagged , , | 1 Comment

And then there were 29…


The container came in….


Everything arrived safe and sound, but I have spent much of the last week unpacking, playing with retro game consoles, and doing repairs like I still run an arcade 🙂


Elevator Action‘s working fine, but the red color in the monitor was out, which gave everything a strong blue tint. I tried swapping the chassis (a K4900 5-pot) with Moon Patrol‘s (K4900 4-pot), but red was still missing. I tried an extra PCB game board I have and got the same missing red, so I figured this was an issue with the red video signal somewhere on the line or at an edge connector—turns out I was right. There was no continuity on the red video signal on the filter board. After re-soldering that connection, red came back! I also got my Wells Gardner pattern generator working with this monitor, and that is cool.

elevator action filter board


Moon Patrol was working fine in the US, but the board is now throwing garbage. I tested the monitor chassis from Elevator Action and it works fine. I also tested all voltages from the power supply to the game board and all is good there, although the coin door voltages are off—but that should not affect the game board or game play. I tried changing the main CPU on the board (a Z80) but got the same issue. Pretty sure this is a board issue, so I might have to post to KLOV for a second opinion or send it out.

Joust was randomly resetting to a white screen with the installed FPGA board, similar to the Stargate issues I had in the Spring. I reverted to the original board and that worked—no resetting during game play—but there was an issue with settings not being saved when it is turned off: I got a “High Score Table Reset Bookkeeping Totals Cleared” error.


I found this forum post on KLOV that pointed me in the right direction. Essentially, there were no batteries installed to save high scores; I thought I had installed a new chip for this, but evidently not. I soldered on a 3.6v lithium battery, which should last a few years.


And then I hit the reset switch on the board, and it started to work.


Next up, I put the FPGA board from Joust that was having issues into Defender, which fixed the graphic issues at the top of that game’s screen. The original boards for Defender do work, but with the FPGA board there are no lines at the top of the screen—plus it doesn’t work with Joust at the moment anyway 🙂 One remaining issue is that the High Score Reset button inside the coin door doesn’t work, so I can’t change the number of ships, free play, etc. I will need to look into how to change that Williams switch on Defender.

There is a slight jitter of the image on the Dig Dug monitor. It’s a G07 chassis, which I am familiar with by now, so I re-soldered the vertical/horizontal adjustment pots (and actually swapped one out), and it is better after testing it for a while. There is still some very rare shifting of the screen, so I may swap out all four adjustment pots on this chassis if it continues.

I still have to look at the Pleiades cocktail, which won’t turn on. I am probably going to have to drill through the lock to get it open and investigate, given I am pretty sure something came loose during the move. The other cocktail, Rally X, is working but the sound is muffled, so I will be opening that up tomorrow. I also have to try out the new board for Cheyenne to see if that fixes my issue with that board, and then I should have everything but Moon Patrol and Venture working. I know the problem with Moon Patrol is board related, and Venture is most likely power supply, and I am pretty excited because I think I can get that one running.


Arrow pointing to burnt EMI line filter, I think

The other thing that happened this week while working on Joust: I mistakenly plugged the game into a European 220V socket rather than the 110V US plug, and it blew the power brick in Joust. This is in part down to the terrible design of the transformers I bought to step down the voltage from 220V to 110V, and in part my stupidity. Live and learn. I used the transformer in Stargate to make sure nothing was ruined in terms of the boards, monitor, etc., and all was good, so I am using the power brick from Stargate in Joust temporarily. While dismantling Moon Patrol yesterday, which I will be minting out entirely, I realized it seems to have the same power brick and connectors.


As you can see from the side-by-side image of the two power bricks above, they are seemingly identical, but the Moon Patrol power brick on the right seems to have been soaked in mud at some point over the last 40 years. It is literally caked on. I will be cleaning it up and then testing the voltages to see if I can use it in Stargate until I finish Moon Patrol and hopefully figure out how to rebuild the Joust power brick I blew.

Anyway, this is a very drawn out post simply to say the games have arrived and I am loving it!

Posted in bavacade, bavarcade | Tagged , , , , , , , | Leave a comment

Copying Files between Environments in Reclaim Cloud

I’ll be submitting this guide to Reclaim Hosting’s support documentation to illustrate how to move files between environments using the Reclaim Cloud GUI. I discovered this trick while migrating a Ghost instance in Reclaim Cloud from one environment to another as discussed in this post.

I am borrowing liberally from elastx’s documentation of this process given they do an excellent job of breaking it down. Moving files between environments uses the somewhat confusing terminology Export, and you can find this option by clicking the configuration wrench of the node within the environment you want to move files from. Below is a video tutorial documenting the process, followed by step-by-step directions.

Export directory

  1. Find the configuration wrench on the source environment’s Application Server node.
Images of Config of Node from which files will be exported

Config of Node from which files will be exported

  2. Go to the folder you want to export files from, in this case /var/lib/ghost/content/. Click the cogwheel of the content folder as shown in the image below and choose Export.
Image of Export option in Reclaim Cloud

Export Directory Contents

  3. Make sure to choose the correct target environment where your target Application Server lives.

Select proper target environment in Reclaim Cloud to export content

  4. Here we use the path /var/lib/ghost/content-source, which will create and mount this folder on the target environment’s Application Server.

Copy files

  1. Click the configuration wrench on the target environment’s Application Server.

Config wrench

  2. To the left, click the cogwheel and click Copy Path, which will copy the full path location to your clipboard.

  3. Click on the terminal icon to open the Web SSH terminal.

  4. In the terminal, type cd, paste the previously copied path /var/lib/ghost/content-source, and press Enter. After that, running ls -l should show the contents of content-source, as shown in the image below.

  5. Now you should be able to copy files anywhere within the target environment’s node. Using the --verbose flag gives you the output shown below.

Use cp --verbose to copy files between source and destination folders

  6. You can confirm that the files have been copied by browsing to that folder in the GUI.

Zip files copied cleanly between folders
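For anyone following along in the terminal, the copy steps above boil down to a few commands. This is a minimal sketch that uses /tmp stand-in directories in place of the real paths from this guide (/var/lib/ghost/content-source and /var/lib/ghost/content), so it can be tried anywhere:

```shell
# Stand-in directories; on Reclaim Cloud these would be the mounted
# export path and the target application's content folder.
mkdir -p /tmp/content-source /tmp/content
echo "demo" > /tmp/content-source/image.jpg

# Change into the mounted export directory (the path copied via Copy Path)
cd /tmp/content-source

# Copy everything into the target folder, printing each file as it goes
cp --verbose -r ./* /tmp/content/

# Confirm the files arrived (the GUI file browser works for this too)
ls -l /tmp/content
```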

Clean up

  1. Unmount the exported directory on the target application server as shown below.

Unmount networked directory from target environment

Posted in Reclaim Cloud | Tagged , , , , | Leave a comment

Learning from Reclaim Cloud’s Ghost Installer

As I already mentioned in my last post, I spent some time this week moving my bavaGhost install over to a new environment in Reclaim Cloud to take advantage of Taylor Jadin’s snazzy new Ghost one-click installer. Taylor and I did a full-blown stream talking about his groovy installer if you are interested in the deep dive:

Soon after this discussion I decided to port my Ghost install to an environment using Taylor’s installer, given it makes updating versions, setting up Mailgun, and mapping a domain with Let’s Encrypt dead simple. It’s the closest thing we have on Reclaim Cloud to a complete solution for running an application without feeling like you need to dig into SSH and server administration.

The tale of the tape for my explorations with Ghost over the last 6 months is pretty well documented on this blog, but for quick context: I was running Ghost using the Community Docker image straight from Jelastic (no Docker Engine) with an Nginx load balancer to manage the domain mapping and SSL certificate. I did move a similar Ghost instance for Reclaim Roundup from this setup to run within Docker Engine, given a bug with the SQLite database that prevented us from sending over 500 emails.

So this was my first go at playing with Taylor’s Ghost installer. The migration of posts and settings was simple: after using the installer to create a fresh instance, Ghost’s native export/import tools moved the 19 posts and various settings from one environment to the other beautifully. The only hitch was that I needed to copy over about 40 MB of featured images and headers from one instance to the other. It’s at this point that I started running into issues, while at the same time feeling my understanding of Docker containers in Reclaim Cloud painfully expanding. So, I’m going to take a bit to document at least two ways to approach the ostensibly simple task of moving a directory with 40 MB worth of images.

[On a separate but related note, Taylor and I have been struggling with the problem of moving data between containers in Reclaim Cloud during a series of streams this week. Turns out rsyncing files between different environments in different data centers or with different environmental settings (permissions, firewall, etc.) can result in some serious frustration. The project we’re working on, porting from an Ubuntu VPS to a Docker instance of Peertube, highlighted some of these issues, many of which Taylor has been working through and documenting in order for us to streamline things. It’s been fun streaming it for sure, but I do feel bad for subjecting Taylor to so much PostgreSQL 🙂 ]

The above is separate but related because it highlights my concern when trying to move 40 MB of files from one container to another within Reclaim Cloud. It should be easy, right? Just zip up the folder and download it via the command line? Well, yes and no. Yes, if you knew that the custom Ghost container runs Alpine Linux, and that to install zip on that container you need to run apk add zip. That would work, and the zipped folder would be available at /var/lib/ghost/, which you could access and download via the Config area for that container:

After that, you would upload the file to the new Ghost instance, which for Taylor’s installer keeps its data at /root/ghost/ghostdata/. You would also need to make sure zip is installed on the new server, unzip the images directory after uploading, and move the contents into the existing images folder. And while not terribly onerous, it is anything but seamless.*
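To make the zip route concrete, here is a rough sketch of the commands involved. The apk step and the container paths are specific to the Alpine-based Ghost image described above; this sketch uses /tmp stand-ins for the real /var/lib/ghost/content and /root/ghost/ghostdata paths, and `python3 -m zipfile` as a portable stand-in for the `zip`/`unzip` binaries, so the commands can be tried anywhere:

```shell
# On the Alpine-based source container you would first run `apk add zip`
# and then use `zip -r` / `unzip`; python3 -m zipfile does the same job
# here so this sketch runs on any machine with Python installed.

# Stand-ins for the old container's content path and the new
# installer's data path.
mkdir -p /tmp/old/content/images /tmp/new/ghostdata
echo "header" > /tmp/old/content/images/header.jpg

# On the source container: zip up the images directory
cd /tmp/old/content
python3 -m zipfile -c images.zip images      # i.e. zip -r images.zip images

# Download images.zip via the node's Config area and upload it to the
# new instance; the cp below stands in for that manual round trip.
cp images.zip /tmp/new/ghostdata/

# On the target container: unzip, then merge into the existing images folder
cd /tmp/new/ghostdata
python3 -m zipfile -e images.zip .           # i.e. unzip images.zip
ls images/
```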

While futzing with zipping directories to move all of 40 MB of files, I found an option in the Config area of a Reclaim Cloud node to export a directory from one container to another:

When I clicked on this option it allowed me to link a directory from this container to any directory of any other container I own. This effectively solves the issue of moving content between containers in Reclaim Cloud!

Yeah! The images directory was copied to the new container as images-move.

After that, I could easily move the contents of the exported directory images-move into the images directory on the new container. This saved me from rsyncing, zipping, FTP, or any other roundabout means of getting a simple 40 MB of files over to the new container. What’s more, it opens up a whole new way of moving data between containers in Reclaim Cloud that could save us a whole lot of headaches.


*I also tried rsyncing to my local desktop and then rsyncing back up to the new container, but I was getting a bit turned around so I abandoned that.

Posted in Ghost, reclaim, Reclaim Cloud | Tagged , , | 1 Comment

Ghosting WordPress

Image of cow grazing in Val di Rabbi

Picturesque Cow in Val di Rabbi

It’s been quiet on the bava for a bit after an awesome mountain getaway to Val di Rabbi in early August, followed by some digging-in on work and entertaining guests over the last week or so. What’s more, the blog silence will continue starting next week through the end of August given we’re heading to Sicilia for a proper beach vacation!

Bugs Bunny on vacation


Things have been intense this month, we’ve had good friends from the States visiting; Miles prepping for his semester abroad in Berlin (he leaves tomorrow!); as well as Reclaim Hosting starting to ramp up for Fall. In between life I have continued exploring Ghost, in particular Taylor Jadin‘s impressive work to make a seamless Ghost installer for Reclaim Cloud that does everything from updating versions to making Mailgun integration simple to automatically mapping a domain and issuing an SSL certificate. It’s some truly amazing work, and if you’re interested in learning more we did a stream about it last week that provides some insight into his process. And if you want to run Ghost on Reclaim Cloud, Taylor documented that brilliantly as well here.

This coincided with another stream last week, wherein Anne McCarthy from Automattic joined us for our August Community Chat to talk about Full Site Editing in WordPress:

It was pretty cool to have Anne join us given her own experience with WordPress started at the University of North Carolina at Chapel Hill, supporting WordPress on campus. There is definitely a strong tradition of WordPress in higher ed that this blog was grounded in for years, so seeing folks move into the ranks of Automattic is exciting! Anne demonstrated the features of WordPress’s new Full Site Editing adeptly, and there was quite a turnout of interested folks. One of the things that struck me during the session was that I’m old in WordPress years.

I’m still using the classic editor with the TwentyTen theme, and I admit to a certain amount of resistance to Gutenberg and now the Full Site Editor. Granted, part of that is stubbornness and intolerance, but there is another part of me that strongly believes the broad adoption of WordPress in higher ed had everything to do with a simple core that leveraged an active open source plugin and theme community. That simplicity made it possible to get an entire class and faculty up and running with a site on WordPress Multisite in minutes: spend a bit of time highlighting the simple WYSIWYG editor (links, images, embedding content), mention tags and categories, and then it was off to the races. I have to believe that’s why WordPress blew up not only in higher ed, but across the web more generally. Over the last 15 years it has become an immense ecosystem, and over the last 5-6 years the introduction of Gutenberg, block editing, and now Full Site Editing has certainly reflected the intention to compete with Squarespace, Wix, and other web builders. That said, in the transition things have gotten far more complex. I mean, WordPress was never particularly good at on-ramping new users—hence the birth of the SPLOTs to avoid the dreaded empty page—but with the advent of Full Site Editing, the vertiginous experience of entering the bifurcated editing world of WordPress is that much more labyrinthine. I really don’t think I would subject a group of non-developers who just wanted to post something online to Full Site Editing, and I’m not alone in this. Lauren Hanks just wrote a post documenting her first impressions of the Full Site Editor as a person with years of experience both using and designing for WordPress, and the experience is mixed at best, even for an expert user.

So after the Reclaim Community Chat on Full Site Editing, and with a deeper sense that I just wanted a simple publishing experience for my blog, I started to seriously consider what it would mean to migrate all my content in bavatuesdays to Ghost. I think I could move the almost 3700 posts pretty cleanly; it’s the 16,000 comments I was concerned about. What’s more, one of the real issues with Ghost was that there was no native commenting feature; it all depended on third-party tools like Disqus that I wanted no part of. So I was mulling all of this over for the last week or so when Ghost announced they are now supporting native comments as part of their membership feature, in addition to integrated search, which means two huge barriers to moving bavatuesdays to Ghost have now been removed. I have a feeling migrating to Ghost may be a bit painful, and there is a part of me that is WordPress 4life given I built my career on it for roughly 17 years. That said, after running the Reclaim Roundup in Ghost for the last six months I have to acknowledge the publishing experience is far superior. All the overhead around design and plugins and themes is removed, and the blog can just be a blog again. Saying this, I fully acknowledge my endless posts and presentations early on about WordPress being more than a blog—funny how things work.

Yesterday I spent some time migrating and updating the Ghost instance I have been hosting since 2014 using Taylor’s installer, which enabled me to seamlessly update to the latest version. So bavaGhost now has both integrated search and comments. At this point I think I’m going to take the plunge and migrate all my content on bavatuesdays to Ghost to see how that works and if it is viable to make the switch after 17+ years of blogging with WordPress. I imagine it will take quite some time given I plan on combing through all my posts since 2005 and cleaning-up broken links, bad images, broken embeds, etc. before even attempting the move, so at minimum it will be several months before I’m ready, but at the same time it is a project I’m excited about. Even if I ultimately stay with WordPress, exploring what’s out there and learning through tinkering is what makes me happy with the work I do, so why stop now?

Posted in Ghost, Reclaim Cloud, WordPress | Tagged , , , | 9 Comments

9 Years of Reclaim Hosting

On Saturday Lauren Hanks reminded me that Reclaim Hosting celebrated its 9th birthday. I get confused on the official formation date; I oscillate between the 28th of July (which Gusto—our payroll/HR service—notes as my anniversary) and the 23rd—the date I traditionally associated with it in celebratory posts over the years, like this one. So I guess I am going to make the 23rd the formation date and the 28th my first day of work 🙂

Nine years. Crazy to think we are approaching a decade of Reclaim. I mentioned in the 7 year post how Reclaim was 1) growing and starting to dial in a more definitive sense of culture, and 2) imagining a reality where Tim and I were not as central as we had been to start.

Image of TV with Reclaim EdTech on screen

To the first point, I think our hires in 2021 and 2022 have really solidified the questions around culture, which has meant intentionally building a team that is rooted in a support mindset informed and reinforced by educational technology. It helps that Goutam, Pilot, Taylor, and Amanda all came from Domain of One’s Own programs; their understanding of higher education and deep commitment to the vision of technology to both augment and transform education was foundational to being able to both dream up and roll out an Instructional Technology team in a few short months to start 2022. That has been a gigantic shift in Reclaim’s understanding of itself; that said, what it means more specifically is still yet to be determined—which makes it that much more fun! It’s a moment where we can explore, experiment, and figure it out, which I believe is a sandbox for all kinds of magical possibilities.

As to Reclaim operating without Tim and me as central, this has been sealed over the course of our ninth year. If you told me a year ago that Tim would be entirely removed from the day-to-day of Reclaim Hosting starting January 2022, I would’ve laughed … nervously. But that has been the case: between Lauren ruling the Director of Operations position like a boss; Chris taking over infrastructure and truly shining by not only adroitly managing a mighty fleet of servers but also making them that much more secure; and Meredith stepping up big time on the regular to train everyone on our team to become fluent in frontline support, we’ve all gotten better as a result. And while Tim’s creative innovation at Reclaim is legend, we now have nine well-rounded team members that truly do make Reclaim bigger than either of its founders, and that’s the dream.

As for me, I’m not going anywhere ’cause Tim now owns the amazing Reclaim Arcade, so Hosting is all I got! It does help that I love it, and I want to keep experimenting with what a marriage of hosting, support, and edtech looks like as we continue on this journey. In this regard, I have to say our ninth birthday marks a moment where we can not only sustain the laser-focused support our community has come to expect, but also provide a broader outreach thanks to Taylor’s community work and Pilot’s Roundup newsletters—we’re now able to think beyond the immediate. This means building on experiments like the OERxDomains21 conference delivery platform for ongoing professional development (thanks to Tom Woodward and Michael Branson Smith); more experimentation with container-based edtech; as well as thinking through how Domain of One’s Own, WordPress Multisite, and Reclaim Cloud represent a multi-level offering for schools to provide a wide range of options as part of our services—all of which remains undergirded by edtech-driven support.

So, as I reflect on our ninth year of Reclaim Hosting I believe we are entering a new phase wherein we have the headspace to experiment more, re-think how our ostensibly unrelated products can be understood as part of a greater whole, all while creating a culture of the possible at Reclaim Hosting that understands educational technology need not be a clarion call for the apocalypse, but an imaginative way to build cool, fun things that make a difference on a human scale. I’ll take nine more years of that!

Posted in reclaim | Tagged , | 4 Comments

Bavacade Update: G07 Capkit, Condor PCB, and Cheyenne Audio

I have been working on and off on the bavacade, and I am pretty close to having every game working perfectly; that said, actual perfection continues to elude me. But we need lofty goals and standards, right? So the hunt continues.

At the end of June I did my first capkit on a G07-CBO chassis that was removed from Robotron. This chassis is now a backup given I put a replacement G07 chassis the Arcade Buffett fixed into Robotron, and it works beautifully. After doing the capkit on the bad G07 I tested it on my Condor machine, because that also uses a G07 (I have 4 games that do: Astro Invader, Bagman, Condor, and Robotron) and its picture suffers from some waviness at the bottom, so I figured it would be a good candidate for replacement. I tested the chassis with the new capkit on and it was out of sync for about 5 minutes or so when turned on, but once it warmed up it was perfect. I read around and there might be an issue with my capkit—surprise, surprise—so I’m gonna have to try it again, and if that works I will do another on the original G07 from the Condor that was wavy.

Image of Condor play field

Condor with G07 from Robotron looking amazing

Also, I played a bit of musical chairs with the G07 chassis, so gonna document that here. During testing I moved the G07 Arcade Buffett repaired that was in Robotron into Condor, and that looks awesome.  I then took the G07 in Bagman and put that in Robotron, and that might be the best G07 chassis of the lot. So gorgeous.

image of Robotron screen

Robotron with G07 chassis from Bagman looking good!

So, right now Bagman is without a chassis, but I will rectify that once I re-do the capkit I flubbed. And even if that fails, I have the original G07 from Condor that I can do a capkit on—practice makes perfect—and then take the working G07 from Condor (the one that was previously in Robotron) and put it in Bagman. After that I should be all set until the next one goes down. Also, are you following all of this? 🙂

Another point worth mentioning is that while working on the G07 capkit and testing it on Condor, the tube was making a crackling noise, which is not good. This is what they call “arcing,” and it means the high voltage from the anode in the tube is somehow jumping. The solutions I have read about suggest taking out the tube and cleaning it as well as you can, and then cleaning it again and again and again. I did a bit of cleaning and it got much better, but there is still a slight crackle, so I may have to clean it again and then use some isolation lacquer around the anode. Here is a post on KLOV that describes a similar issue.* I also noticed the chassis I did the capkit on had issues with the suction cup that connects to the anode hole on the tube. It was not grabbing well and was in overall bad shape. So, I will also be replacing the flyback on my next attempt, which includes a new suction cup for the anode.

In terms of the Condor, I got the PCB issues with the audio fixed (it was really low and staticky); turns out it was a bad volume pot (or potentiometer), so that works perfectly now and the monitor looks good. The original board, which is now the backup, was having some graphical issues, so I’m getting it looked at locally before committing to sending it off to be fixed. Condor will be like new with a capped chassis and a refurbished board, and if the arcing gets out of control, a new tube might be next 🙂

Cheyenne Cabinet Playing Crossbow

The other game that has had some issues is Cheyenne. At first it was the curling of the monitor image that led to getting a backup Polo 20″ chassis (which was itself having issues and went back in for repair; I just got it back today but still need to test it). The original monitor chassis seems to be working well now, but in the interim issues with the audio started to occur. We tested the original speakers and they seemed to have some power-related issues, so they were replaced with new speakers. But despite that there was still no sound, and it seems like the issue might be related to some resistors on the sound board that might have gotten damaged. We tried routing around the board-based amplifier to test that theory this morning, but we still got nothing, so Roberto took the audio board with him for more testing. The board work continues to fascinate me, and watching Roberto read a schematic has given me some hope, but the fact he doesn’t speak English and my Italian sucks is definitely a challenge 🙂 The problem seems to be related to the arrow at P8 (closest to the margin in the image below), which is the audio connector from the board to the speakers. He tried to bypass that and tap into the audio signal before the connector to test where the issue was occurring, but the test didn’t work, so he will be digging in more on his bench and I will be cheering him on via email.

Cheyenne Audio PCB schematic

The good news is I have an extra Cheyenne/Crossbow board coming from the US that should arrive any day with the rest of my containerized belongings, so we should be able to test even further with a different audio board, and hopefully solve this issue.

The last thing I need to do is re-visit Asterock and add back the voltage regulator we removed to see if that issue is fixed, because right now we are using a switching power supply, and I think that when the voltage regulator goes back in the machine we can have everything run off the original power supply. I will need to look at that this week, as well as do a capkit or two on the G07 chassis I have in front of me. Unfortunately it has been so damn hot that the idea of doing any soldering-related work has been less than appealing, unless I take care of it at 5 or 6 AM. Nonetheless, this gets me caught up on recording the on-again, off-again work that has been happening the last month or two.


*Although I just turned this game on and I am not hearing any crackling, so perhaps this is sorted.

Posted in bavacade, bavarcade | Tagged , , , , , , | 2 Comments

WordPress Multi-Region: Is the Third Time a Charm?

Hope springs eternal in the bava breast. This is the third attempt since November of 2021 to try and get a WordPress Multi-Region setup working on Reclaim Cloud; I blogged attempts one and two in some detail. And I’m glad I did, because returning to it after four short months is like starting from scratch, so the blog posts help tremendously with the amnesia—they serve a similar purpose as the polaroids in Memento.

My return to Multi-Region was spurred on by a realization that the documentation provided by Virtuozzo (formerly known as Jelastic) noted the WordPress Multi-Region Cluster installer was a Beta version, whereas the one I’ve been playing with in Reclaim Cloud is still Alpha. This led me to look through the Jelastic marketplace installers (JPS files) on Github to see if there’s more than one installer for WordPress Multi-Region setups, and while I could not find the Beta version of the WordPress Multi-Region Cluster installer, I did find a beta installer for a WordPress Multi-Region Standalone setup. The difference between the two is that the standalone does not create multiple app servers, databases, etc., within a single environment that is then reproduced across as many as four environments in different regions. This significantly reduces the complexity, which gave me some hope that this just might work.

What’s more, I’m already running bavatuesdays in a standalone WordPress environment using Nginx (LEMP) on Reclaim Cloud, so the difference would be that this new instance uses LiteSpeed (LLSMP) and replicates the entire instance in one additional data center (there was no option for more than two regions). It is two standalone instances that are replicated across two different regional data centers. Here’s to hoping simpler is better.

The fact that I am deep into container learning this month helped my confidence a bit, particularly when grabbing the URL for the manifest.yml file that provides the directives for setting this up in Reclaim Cloud. We don’t have the WordPress Multi-Region Standalone installer available, but you can still install it by going to the Import option in Reclaim Cloud and copying the manifest.yml URL into the URL tab:

Import tool to grab a manifest.yml file to build out the WordPress Multi-Region Standalone installer

Once you click Import you will be given the options for setting up your WordPress Multi-Region Standalone setup:

You are limited to two regions with this installer, and the first region you list becomes your primary, but I’m not convinced that matters as much as it does with the Cluster Multi-Region setup. After that, you let the two environments set up, and each will get its own default environment URL.

Once the environments are created you will get an email with the details for logging into the WordPress admin area (the same for both environments) as well as LiteSpeed and MySQL database credentials. [Note to self: that is an important email, so be sure to save it.] Once everything is set up you need to do a few things to each environment:

  • Make sure you have added two A records for your custom domain. There should be one record for each of the environments’ public IP addresses. I use Cloudflare for this, and it looks something like:

  • Add a Let’s Encrypt Certificate for your custom domain in each environment:

  • Update the site URL in both environments using the WordPress Site Address addon:
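For the first item above, the pair of A records in zone-file terms would look something like this (the IPs and TTL here are placeholders, not my actual values):

```
bavatuesdays.com.  300  IN  A  203.0.113.10  ; UK environment public IP
bavatuesdays.com.  300  IN  A  203.0.113.20  ; Canadian environment public IP
```

With both records in place, resolvers get both IPs back, which is exactly what the dig tests later in this post show.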

If you are starting from scratch then you should be good to go at this point with the setup. I had a few extra steps given I’m importing my files and data from an existing WordPress install, and to do this I used rsync to copy files between environments and a command-line database import, given the web-based phpMyAdmin import was consistently timing out.

Rsyncing files between environments has been a bit of a struggle for me in previous attempts, but luckily I documented this process, and I finally feel like I have made a breakthrough in my understanding, although I still had to lean on Chris Blankenship for help this time around. Here are the steps:

  • Create a key pair on the environment node you are migrating from and copy the public key (the file ending in .pub) into the authorized_keys file on the node of the new environment you are moving to. Here is the command I used to create the key pair on the environment node I am migrating from:

ssh-keygen -t ed25519 -a 200 -C "" -f ~/.ssh/bava_ssh_key

  • Make sure to do everything as the root user on both the server you are moving from and the one you are moving to; there is a Root Access addon for this in the multi-region environment. Also, keep in mind you only need to rsync to one of the two multi-region environments you created; I chose one and copied the files and imported the database there.

  • For rsyncing I added the keys successfully, did everything as root, and still ran into an issue using the following command to rsync. Turns out the multi-region WordPress setup has an application firewall built in that was blocking access over the public IP address, so I needed to use the private LAN IP address instead, which worked!

rsync --dry-run -e "ssh -i ~/.ssh/bava_ssh_key" -avzh /var/www/webroot/ROOT root@multi.region.public.ip.address:/var/www/webroot/ROOT

If no luck, try the private LAN IP:

rsync --dry-run -e "ssh -i ~/.ssh/bava_ssh_key" -avzh /var/www/webroot/ROOT root@multi.region.private.ip.address:/var/www/webroot/ROOT

    • After that I needed to access phpMyAdmin on the old site, download the database, and then upload it to the new environment. I tried importing via the phpMyAdmin interface but it timed out; even when I compressed the file it was still taking too long. So I uploaded the sql file to the new multi-region environment and used the following mysql database import command:

mysql -u username -p new_database < data-dump.sql

And that worked perfectly: everything was imported and the site was immediately running cleanly in its new home. I checked whether the files and data had been replicated to the second, Canada-based environment, and I was happy to see that it happened almost instantly. So you only need to import to one environment and all files and data are immediately copied to the other, which is exactly how you want multi-region to work.

The next test was turning off one of the two environments to see if the site stays online, and that worked as well. So far it looks like a success. One of the issues I had with the Multi-Region cluster was getting new comments and posts to populate cleanly across all regions if one of the environments was down during the posting, so I will need to test that on the comments of this post, while also making sure one of the servers is down when I publish it.

I decided to re-visit some of my previous work in Cloudflare setting up a load balancer and monitoring the two environments to accomplish a few things:

  • Ensuring that if one of the two servers goes down I am notified
  • Steering traffic so that visitors closest to the Canadian server get directed there, and visitors in Europe get pushed to the UK server.
  • Testing load balancing to ensure if one of the two environments goes down the online server is the only available IP so that there are no errors for incoming traffic

All of these details are laid out beautifully in this post on the Jelastic blog about load balancing a multi-region setup, so much of what I share below is just my walking through their steps.

In Cloudflare, under Traffic –> Load Balancer for a specific domain, you can create a load balancer that allows you to define pools of servers that can be monitored, so that when downtime is detected you not only get an email, but Cloudflare also redirects all traffic from the failing server to a server that is online.
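Under the hood a pool is just a named list of origin IPs tied to a monitor. Via Cloudflare’s Load Balancing API, a pool object looks roughly like this (a sketch based on my reading of their API docs; the names, IPs, and monitor ID are placeholders):

```json
{
  "name": "bava-pool",
  "origins": [
    { "name": "uk",     "address": "203.0.113.10", "enabled": true },
    { "name": "canada", "address": "203.0.113.20", "enabled": true }
  ],
  "monitor": "your-monitor-id",
  "notification_email": "admin@example.com"
}
```

The dashboard walks you through building the same thing, so the API is only worth reaching for if you want to script the setup.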

Cloudflare load balancing

In the image below you can see that the UK environment is reported as critical, meaning it’s offline. In that scenario all traffic should be pointed to the healthy server in Canada.

View of a server with an issue in Cloudflare load balancer

I can also confirm the email monitoring works:

Example of a monitoring email from Cloudflare notifying me of issues with a server

And I did a test to ensure when both servers are online both IP addresses show up:

dig: notice that when all servers are healthy, all IP addresses are listed

And below is a test when one server goes down (in this case the Canadian server): only the UK server IP address shows as available, which means the load balancing and failover are working perfectly:

dig when one of two servers is down, notice just one IP address shows, the one that works!

That is awesome. I have to say that Cloudflare is quickly becoming my new favorite tool to play with all this stuff. And Cloudflare in conjunction with the standalone WordPress Multi-Region is a powerful demonstration of how much Cloudflare can do with DNS to help you abstract your server setup to manage failover and multi-region load balancing seamlessly.

The final thing I’m playing with on this setup is Traffic Steering in Cloudflare, which allows me to locate a server by a rough latitude and longitude so that Cloudflare can calculate how close a visitor is to each IP and steer them towards the closer server. In this way, it is essentially geo-locating visitor traffic to the closest server, which is pretty awesome—although I am not sure how to test it just yet.

So, by all indicators the third time may very well be a charm, and simple is better! But the question remains whether this post will populate across both servers when published with one down, and whether comments will also sync once the failed server comes back online. I’ll report more about this in the comments below….

Posted in Reclaim Cloud, WordPress | Tagged , , , | 4 Comments

Understanding Containers through Nextcloud

We are into week 3 of our Reclaim Edtech flex course “Understanding Containers,” and I have to say Taylor Jadin is doing a bang-up job taking folks through the basics. Yesterday was the premiere of week 3’s video that covers using load balancers, mapping domains, installing SSL certificates, and more. In week 2 we went through installing Nextcloud and it all started in week 1 with a broader framework for understanding containers as well as getting familiar with Reclaim Cloud. Taylor’s pacing has been excellent, and his weekly videos are accompanied by a weekly blog post with all necessary resources as well as a to-do list. The way he has set it up has a very ds106 weekly assignment vibe, and I am loving it.* I’m also loving Taylor’s Container Glossary, which provides a nice guide for understanding the basic terminology and concepts undergirding containers.

So, this week I sat down to catch up on my Nextcloud work, and what follows are mostly notes for me, but if they come in useful, all the better. I used Taylor’s basic docker-compose.yml file to spin up a very basic Nextcloud instance within a Docker Engine node on Reclaim Cloud.
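Taylor’s file isn’t reproduced here, but a minimal compose file along these lines looks roughly like the following sketch, adapted from the stock example in the official Nextcloud Docker docs (the volume name is an assumption, not necessarily what Taylor used):

```yaml
version: '3'

services:
  app:
    image: nextcloud
    restart: always
    ports:
      - 8080:80   # change this to 80:80 to serve on the environment's default domain
    volumes:
      - nextcloud:/var/www/html

volumes:
  nextcloud:
```

With no database service defined, Nextcloud falls back to SQLite, which is fine for kicking the tires.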

You would copy this docker-compose.yml file into a directory like ~/nextcloud/ and be sure to change the port from 8080:80 to 80:80. Then run the following command:

docker-compose up -d

At that point you’ll have a basic SQLite-driven instance of Nextcloud on the default Reclaim Cloud environment domain. I also wanted to get a separate instance running MariaDB spun up, given that works well for larger setups and syncs with various devices more seamlessly. To do this you can either spin up a new Docker Engine node in a separate environment (which is what I did for testing purposes), or just replace the contents of the existing docker-compose.yml with the directives for creating a Nextcloud instance that uses MariaDB.

To do this you need to completely remove the existing containers from the original instance using the following command:

docker-compose down -v

The -v is important in the above command because it not only spins down and removes the containers but also removes their volumes entirely, giving you a clean slate. From there I went back into ~/nextcloud and edited the docker-compose.yml file, replacing what’s there with these details (be sure to create your own passwords):

Image of docker-compose.yml file for a MariaDB Nextcloud setup

This is the docker-compose.yml file for a MariaDB Nextcloud setup
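Since the file only appears as a screenshot above, here is roughly what a MariaDB-backed compose file looks like, adapted from the stock example in the official Nextcloud Docker docs (a sketch, not the exact file in the image; swap in your own passwords):

```yaml
version: '3'

services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=change-this-root-password
      - MYSQL_PASSWORD=change-this-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    restart: always
    ports:
      - 80:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=change-this-password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db

volumes:
  db:
  nextcloud:
```

The app container finds the database via the MYSQL_HOST=db service name, so no IP addresses are needed between the two containers.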

Once you have updated the docker-compose.yml file with the new details and passwords (being sure, once again, to change the port from 8080:80 to 80:80), save the file and run the following command to spin it up:

docker-compose up -d

After that you should have Nextcloud running on a MariaDB instance. Go to the URL and set up the account.

Once you have done that and you want to map a custom domain, you will need to add an Nginx load balancer to the environment, ensuring it has a public IP address. After that, grab the public IP address and point an A record for your custom domain at it.

Once that is done you can remove the public IP address from the Nextcloud node (not the Load Balancer). From now on the load balancer will provide the public IP, so the IP originally associated with the Nextcloud node is no longer of use and there is no need to pay for it.

There are 3 more things to do: 1) add a Let’s Encrypt certificate using the Load Balancer addon and specifying the mapped domain; 2) redirect to SSL using Nginx, which Taylor blogged; and 3) ensure your mapped domain is recognized by Nextcloud by editing the /var/lib/docker/volumes/nextcloud_nextcloud/_data/config/config.php file to include the custom mapped domain, as shown on line 25 below:

Image of the config.php file that needs to be edited to include the domain name

config.php file that needs to be edited to include the domain name, this was for my SQLite instance
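For reference, the relevant bit of config.php is the trusted_domains array, which looks roughly like this (the domains below are examples, not my actual values):

```php
// /var/lib/docker/volumes/nextcloud_nextcloud/_data/config/config.php
'trusted_domains' =>
array (
  0 => 'env-1234567.uk.reclaim.cloud', // default environment domain (example)
  1 => 'nextcloud.example.com',        // the custom mapped domain
),
```

Nextcloud refuses requests for any hostname not in this array, which is why the custom domain throws an error until it is added.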

Once you make these edits, be sure to restart the respective nodes in your environments. I was able to get both the SQLite and MariaDB instances up and running, and it’s worth noting the MariaDB environment uses 8 Cloudlets (roughly $24 per month) versus the SQLite instance’s 4 Cloudlets (roughly $12 per month).

Ok, that’s my Nextcloud progress thus far and I understand there may be some gaps in the notes above, so feel free to ask any clarifying questions in the comments.


*Most of us at Reclaim are struggling with not sharing these immediately after they are produced given they are currently part of our subscription model for Reclaim Edtech, but whether or not that continues to make sense as a model remains a question.

Posted in docker, reclaim, Reclaim Edtech | Tagged , , , , | 1 Comment