Getting Ready for Reclaim Open

I’m sitting in the Zurich airport preparing to return to the motherland for Reclaim Open after celebrating 20 years of marriage to my special lady friend. Lisbon was amazing, and life continues to be a gift with Antonella; I’m a very lucky man. It was our first time visiting Lisbon, and that city’s attractions are like something out of an amazing Dungeons and Dragons map. I really loved it.

But that is just one of the many anniversaries—albeit by far the most important—that are happening this year. I celebrated 10 years off the sauce, WordPress turned 20 years old last month, the world wide web went open 30 years ago in April, and next month Reclaim Hosting gets its double-digit wings by turning 10 years old in July—which is insane. Next week we kick off the celebration with our biennial conference re-framed as Reclaim Open, which will find many of the original DTLT crew back at the University of Mary Washington.

Reclaim Open at UMW from June 5-7, 2023

I’m really excited for this event, especially given this conference is going to be fairly small and focused. We’ll be lucky if we get 50 people all told. And all those attending represent a community of folks that still believe in the possibilities of the open web for education, and I count myself a proud member of this shrinking club. In fact, that was the take-away for me from OER23 in April, that re-connecting and re-committing to the work open ed techs do is crucial. It’s work I love, work I believe in, and work that has never been more important. We’re a motley crew strewn across the globe here and there, but for a few days Reclaim Open will bring together a handful of those mutants for a celebratory meeting to help us remember what it is we do and why—especially in the wake of the pandemic.

What’s more, while we’ve tried to keep it on the down-low given Reclaim hates to over-promise and under-deliver, the official event (everything but the unconference) will be accessible remotely. We’re using our watch site setup discussed in my last post about karaoke to play with a hybrid conference delivery setup, and we plan on streaming all sessions and managing the online conversations through Reclaim Hosting’s Discord server. We can even stream out the Tuesday night karaoke from Reclaim Arcade! So in many ways the conference will be an exploration of how we can deliver an effective hybrid event, and it will be freely accessible during the conference as well as throughout the month of July, when we transition the sessions from the in-person conference to a fully online, month-long 10-year anniversary celebration.

So yeah, I am definitely getting fired up for Reclaim Open, and given it is accessible from anywhere you should be too!

Posted in reclaim, Reclaim Edtech, Reclaim Open | 2 Comments

Reclaim Karaoke at the Rockaway Club

It’s been a little while since I blogged, and that’s due to a combination of factors, including visitors, conference preparation, and a little bit of karaoke experimentation. The last of those will be the focus of this post, cause I’d rather just ….SING!

There have been a bunch of different iterations of karaoke over many years, dating back to the heady days of ds106 in 2011. It started with the idea of Karaoke Fridays on ds106radio in March 2011, and Tim soon figured out how to bring the magic to ds106.tv. The push for online karaoke re-surfaced during the pandemic, in particular as OER20 was forced online and Reclaim wanted to help figure out how to do online karaoke. Tim’s awesome because he’s never satisfied with good enough, and in the following video he talks through the KaraOERke setup for the great OER online conference pivot of 2020.

That KaraOERke session for OER20 was really a lot of fun, and in many ways was the impetus for more karaoke experimentation, not to mention joining forces with ALT for OERxDomains21 (a high watermark of online fun) and ultimately the birth of the “Watch” platform Chahira and I are using for Reclaim Karaoke at the Rockaway Club. It’s a good reminder that all these fun little, seemingly throwaway experiments can lead to amazing things when you have the right people working together for a given time. And it’s always about the people working together: whether it’s Tim, Lauren Hanks, Maren Deepwell, Michael Branson Smith (MBS), Tom Woodward, or Bryan Mathers, magic can happen when folks believe and time and focus allow.

Summer Summit 2020 KaraOERke Setup Update

Anyway, the experimenting with online karaoke continued in 2020 with ALT’s Summer Summit, where I tried to substitute Jitsi for Zoom, but at the time Zoom proved more capable when it came to audio. In the Fall of 2020 we did a session on Streamyard for Digital Ocean’s Deploy conference, and that started us thinking through a pretty slick, fully online setup using Streamyard, YouTube, and Discord for OERxDomains21. That’s when we teamed up with ALT and brought in MBS, Tom Woodward, and Bryan Mathers to make the design, code, and art happen.

OERxDomains TV Guide

In my mind the success of that conference was directly linked to all the small, half-baked experiments and experiences that led up to it. In fact, the night before the conference officially started we had an OERxDomains21 karaoke session that keynote speaker Rajiv Jhangiani, amongst others, joined, and it really set the tone for what would prove an amazing two days. It always goes back to the karaoke!

I got another shot at Friday Night Karaoke for the OEGlobal online event in December of 2021. For that setup I used a combination of Zoom and OBS to stream to Owncast, which was the first non-ds106.tv based karaoke event.† This setup worked quite well given the chat and stream were combined on an integrated homepage. I had them map the URL karaoke.oercamp.global (now dead) to the container, which in retrospect was overkill, but at the same time I was excited by the possibilities of Owncast. In this iteration the goal was to minimize the window complications of Zoom in OBS; to that end singers joined Zoom and everyone else could watch and chat on Owncast. Having everyone in Zoom can create issues with feedback and open mic problems, but the biggest issue for me on the streaming side was how confusing windows in Zoom prove to be, making it hard to focus on just the singer and their shared karaoke video.

Reclaim Edtech watch site with Discord channel integration

That pretty much gets us caught up to the latest (and greatest!) karaoke setup we’ve been playing with these days. Not surprisingly, the work we’ve done with the watch site for Reclaim EdTech, which was born of OERxDomains21, significantly informed this most recent setup. Chahira and I ran a final test using Zoom, but have since decided we are all-in on Streamyard given it handles the screen sharing with audio just as well. It also removes the need for OBS, which is huge. Another thing I love about Streamyard is when folks enter they’re immediately placed in a waiting room to prevent potential bad actors. And then there’s the way Streamyard uses video templates to make it easy to move between shots while still being able to participate in the fun.

Streamyard interface with shot templates and participants list below the video

Different template layout for Streamyard with two participants and video sharing

With the Zoom and OBS setup I often felt like I needed an additional person just to help manage the video production and streaming—I could never have enough screens. With Streamyard it’s all in one browser tab, which also makes the audio setup simpler than with Zoom.* In short, Streamyard is the bomb.com because it does just enough of what both Zoom and OBS do, making it a more cohesive, streamlined solution.

Another shot layout in Streamyard that allows all additional participants to be placed around the central screen share

For the current setup we have been pushing the karaoke stream directly to YouTube live, which I’ll talk about more shortly. From there, the live YouTube URL is added to a headless WordPress instance that Tom Woodward created as part of OERxDomains21. We have used that same setup for subsequent online workshops like the Domains 101/201 workshop a year ago, as well as all of Reclaim EdTech’s flex courses thereafter. So Reclaim karaoke is yet another page on reclaimed.tech that the domains reclaimkaraoke.com and rockaway.club re-direct to. This also allows me to pull in the chat from the Reclaim EdTech Discord channel that is designated for the Rockaway Club (I decided on Rockaway so I could repurpose Bryan Mathers’ amazing Rockaway Hosting art).

The club metaphor obviously jibes with this idea of going to an intentional space to experience something, whether music, film, some fun with friends, etc. This whole two-or-three-year exploration around this setup has focused on “integrating” with tools like Discord, Streamyard, YouTube, or their open source counterparts like Mattermost, OBS, or PeerTube as a mashed-up prototype. The model can be reproduced in all kinds of ways, but I like the idea of trying to tie it all together with a headless WordPress backend that folks can use to quickly generate and schedule sessions that can stream seamlessly to and from a variety of tools. What’s more, we can further design out the TV Guide metaphor MBS already created, which is pure magic. I feel like the pieces are all there to move to some next attempt at integration, but in the interim I’ll karaoke!

The karaoke setup at https://watch.reclaimed.tech/karaoke

As you can see in the Watch site above, the scheduled karaoke sessions sit above the video embed, with the Discord channel to the right and the stream from Streamyard to the left; it is quite tight. When I add a new date with the appropriate YouTube link pre-scheduled, it will automatically go live when the stream starts in Streamyard, which is pretty cool, and live stream comments come through Discord. But this extends beyond karaoke, because I was testing this setup out with Olia Lialina for her online keynote for the virtual conference in July, and we both agreed that this setup is good enough for the multimedia she’ll be sharing to discuss 30 years of the web. See, it’s never just playing; the fun matters.

Stream suspended for copyright violations notice from YouTube live

The last piece to discuss here—and it’s pretty important—is the live streaming to YouTube. I’ve been down the slippery slope of copyright controls with video in the past, so I know how that ends. In fact, during our first and second sessions using YouTube live they interrupted the stream based on copyright claims. That means the stream goes off-air until the song is over, and if we were being serious about this for academic sessions where we may need to study culture, an alternative to mid-stream copyright claims is crucial. While right now Streamyard can stream just about anywhere, including PeerTube, the Watch site only embeds YouTube videos, so we would have to re-visit that because PeerTube is a solid substitute. In fact, my whole streaming setup for Antonio Vantaggiato’s The Girl Who Knew Too Much class session was run seamlessly through that platform.

That session worked in large part because of some variation of this setup to share clips, talk over them, and then stream the film for all to watch from OBS and PeerTube. No copyright-driven interruptions from YouTube. The hybrid piece of the class was difficult given I could not see them without cameras in the room, but if it was all set up right—it could be magic. My only regret is using Zoom, but we can fix (or rise above) that Blackboard of the 2020s. In that spirit, the first piece of development would be to get PeerTube to work in the current Watch site, and even abstract that out so any link/embed could work when you schedule something, but that’s another post that Taylor Jadin could write better than me.

Anyway, I guess this post was supposed to be about Reclaim Karaoke, but it turned out to really be about the possibility of integrating the Reclaim Watch site with various streaming and discussion platforms to make it less custom. But I’m happy to report that karaoke (save some YouTube interruptions) works swimmingly with the current setup, though it would be even better with PeerTube for so many reasons.

______________________________

*Given Zoom is a separate application, you need to ensure its audio is pulled into OBS for streaming; suffice it to say it quickly gets complicated—but Streamyard simplifies so much of that.

†ds106.tv used the Ant Media setup which was part of the Reclaim Video stream experiments—so much playing.

Posted in digital storytelling, ds106radio, ds106tv, edupunk, fun, karaoke, OER21, OERxDomains, OERxDomains21, PeerTube, reclaim, Reclaim Edtech, Reclaim Open, Reclaim Video, Streaming, YouTube | 5 Comments

Metaphor’s in the Water, You Go in the Water

Sorry for the oblique Jaws reference in the title, but I really couldn’t resist given I’m talking about Martin Weller’s podcast about Metaphors in Edtech with Captain Quint himself 🙂 In fact, one of the many joys of OER23 was not only seeing Martin, but also having that re-connection result in an invitation to discuss metaphors of edtech with him.

Martin and I immediately connected over our love of 80s horror films when the edtech blogosphere was still a thing, and I think of him as a kindred blog spirit in so many ways. His crisp, concise writing (that might be where we diverge), his understated wit (again, not a similarity), and his deep love of a good pop culture metaphor (bingo!) all resonate deeply with me. In fact, I think so many of those elements are what make a great teacher, and  reading—and now listening to—Martin’s blog reinforces that connection in spades.

Shining GIF

So, I figured talking with him about metaphors would be a lot of fun, and that proved true. It’s my inclination to ruin the punchline by over-explaining the joke, so for the sake of Martin’s podcast stats I’m going to resist the urge to give a play-by-play and simply say that if you like good edtech metaphors and have a liking for 80s horror films, this might be right up your niche alley 🙂

Posted in 25yearsedtech, art, blogging, reclaim | Leave a comment

The Boy Who Streamed Too Much

So after digging in on the streaming for Reclaim Karaoke Tuesday night, I turned to preparing a discussion about Mario Bava’s seminal giallo film The Girl Who Knew Too Much (1963). That discussion happened last night, and it was streamed all the way from a basement in Trento, Italy to a classroom in San Juan, Puerto Rico—the internet still amazes me! Paul Bond and I collaborated on a couple of sessions for Antonio Vantaggiato‘s Italian Cinema and Culture class back in May of 2020 and 2021. In May of 2020 the pandemic was still relatively young, and I had just started experimenting with streaming in earnest, so doing that session with Paul for Antonio’s class was a bit of a trainwreck.

Discussion of Bava’s Evil Eye with Paul Bond back in May of 2020

Despite quite a few issues, I think we got our point across. In that session we discussed the U.S. cut of Mario Bava’s The Girl Who Knew Too Much titled Evil Eye, and you can read all about that session in both my and Paul’s follow-up posts.

Diabolik: a Cultural Revolution Comic on Film

In 2021 we changed it up for the course visit and discussed Mario Bava’s adaptation of the comic Diabolik in his 1968 film Danger: Diabolik. Again, there’s a blog re-cap of that session, but the real break-through for me was upping my streaming game from the previous year’s disaster.

In fact, that’s a good segue into last night’s session because there were a few things different from 2021, most noticeably Paul Bond’s absence. That was entirely on me: I didn’t allow enough time to prepare given the relatively short notice, but I’ll fix that next time as he was sorely missed. Another new element is that I used PeerTube to provide a livestream of the introduction, which became an instant archive of the talk immediately after the class ended. I was able to stream both my introduction and the entire film for the class in Puerto Rico from my Italian basement; it’s like they were really watching an Italian film from Italy 🙂 Probably the best news was that the stream proved quite solid throughout; there was a glitch on the Puerto Rico side a few minutes before the end of the film, but they were able to wait until the stream ended and was published to go back and finish it, which is pretty awesome.

And finally, I think my ability to produce the stream on-the-fly was light-years beyond my attempt in 2020. And while the 2021 iteration went pretty well overall—especially the addition of OBS Ninja to bring Paul in—I had yet to really play with OBS Studio’s features, like previewing scenes before switching them, which made a huge difference. I really just have to dial in my green screen and get the Streamdeck programmed in time, and I’ll be off to the races; I could have removed myself from the video clips sooner than I did with my current setup, but again it all worked remarkably well considering who was running it!

It still blows my mind that I was simultaneously producing and live streaming an hour-long intro to Mario Bava’s The Girl Who Knew Too Much to a classroom halfway around the world, and then was able to stream the entire film for them seconds later. It was almost quicker than inserting a DVD. All of this was made possible by a personal video platform, bava.tv (powered by PeerTube running on Reclaim Cloud). It’s pretty amazing how far all this tech has come, and I really love playing with it for something like this, so thank you Antonio for letting me experiment so wildly and being so cool year after year. You rule!

Film streaming on PeerTube with live chat!

And while we did use Zoom to connect, in the end it was somewhat unnecessary given there was no camera on the students and there was no easy way to read the room or interact. The other part of this experience that would make it even better would be ensuring the room had a camera on the audience with its own IP address that I could feed into something like OBS Ninja or an ATEM Mini or VLC in order to provide a real sense of interaction. In fact, from my point-of-view, having people remotely join Zoom in previous years’ sessions was better than me talking to a room full of people I couldn’t see. That’s the real trick of doing hybrid presentations/events like this: having a visual of the room and the attendees for remote presenters so there’s a mutual feeling of connection.

Some other streaming notes: I used VLC to create a playlist of clips to talk about during the introduction. I had to remember to pull the audio feed from VLC into OBS through Loopback. Once I resolved that it all worked a treat. I also had Zoom as a source in Loopback to pick up Antonio and the class for the stream. Having separate video and audio for Antonio and the students would have been the cherry on top.

I did finally get HandBrake to make the English subtitles available on the ripped DVD, but it was unnecessarily frustrating. So to avoid future struggles, I’m going to record my solution: I found that choosing the .mkv video format and selecting only “Default” on the subtitles track was the only combination that worked.

Subtitle interface in Handbrake
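
For the record, here is roughly what that combination looks like on the command line with HandBrakeCLI. Treat it as a hedged sketch: the file names are placeholders, and flag names can shift a bit between HandBrake versions (a quick HandBrakeCLI --help will confirm what your build supports).

# scan the source first to find the title and the English subtitle track number
HandBrakeCLI -i /path/to/dvd --scan

# then encode to an .mkv with that subtitle track included and flagged as default
HandBrakeCLI -i /path/to/dvd -t 1 -o girl-who-knew-too-much.mkv \
  --format av_mkv --subtitle 1 --subtitle-default=1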

All in all I’m happy with how the production came together technically. There is still more to do, but this signaled progress for me. As for my introduction to the film, well, that might be worth another blog post, but suffice it to say I’m pretty good at getting excited about Mario Bava and then parroting what others smarter than me have said to back some of those emotions up adequately 🙂 I mean, this is a “b” blog after all.

Posted in bava.tv, bavastock, film, films, Italy, movies, PeerTube, presentations | 1 Comment

Reclaim Karaoke: Testing 1,2,3

I’m starting to get back into the stream of things after a month of seemingly non-stop blogging. This week I returned to both Reclaim Karaoke and playing a docent on the internet. Doing karaoke on the internet is something I’ve been returning to on-and-off for over a decade now. When I make some time to do it, it’s really a joy. Luckily Chahira Nouira shares that passion, and we also share a time zone, so she’s been a go-to for karaoke. What’s more, we have finally decided to try and make these sessions happen a bit more regularly, on Wednesday nights at 9:00 PM CET.

Jim Carrey doing karaoke in the film The Cable Guy

We had our first test of things earlier this week, and as usual with me it was a bit of a mess. But hope springs eternal, and I believe we’re zeroing in on a workable solution. The issue this go-around was that we did not have a paid Zoom account, so just as we started getting going around 30 minutes in we had to change gears. The upside of Zoom for karaoke is they have audio options to turn off any sound cancellation or automatic adjustments, which is a requirement for doing karaoke online. Once we got kicked out of Zoom for over-extending our 40-minute welcome, we tested karaoke in Whereby—which we do have a subscription for. Unfortunately Whereby seems to automatically adjust for competing noises (the karaoke video shared from YouTube and the singer’s voice), which makes for a less than ideal experience. Given I am doing the streaming I can bypass that compression through the video conferencing app, but anyone who joins to sing will not have that luxury.

So, what did I figure out? That Zoom is still the best bet and I may have to bite the bullet there, but before I do I want to re-visit trying karaoke in both Streamyard and Jitsi one more time. We have a Streamyard subscription that we use extensively for Reclaim EdTech, and I love how that app manages pre-defined templates; offers a muted waiting area; integrates YouTube live streams; and provides behind-the-scenes chat. It really would be ideal if it could manage sound for musicians like Zoom. And after testing mid-blog post I can confirm it does work quite well! The YouTube videos come in strong, so you have to manage the audio there, but other than that there are no automatic level adjustments!

Now the other test will be Jitsi, which I can spin up on Reclaim Cloud to test the latest version. When Jitsi works, it works well, but we’ve seen issues with it being a bit demanding for folks with under-powered computers, given it can be a resource hog—we’ll see.

PeerTube Waiting for Live Screenshot

The other piece worth noting in the Karaoke saga is that streaming on PeerTube has been working seamlessly. I like this because I’m worried YouTube will ding us for streaming copyrighted music, even if the karaoke videos we use are from YouTube—they always are. So knowing we have a more than viable alternative for streaming with PeerTube that offers a constant live URL, integrated chat, an instant archive, etc. is pretty exciting.

Reclaim Edtech Watch Site for live streams

I’m currently working out how to set up the Reclaim Karaoke PeerTube instance in relationship to the main domain. I’m leaning towards making reclaimkaraoke.com a clone of the Reclaim EdTech watch site, which is itself a reprise of the OERxDomains21 Discord/YouTube integration. I just need to see how/if we can embed a PeerTube live stream instead of a YouTube live stream just as easily.

Image of me in Streamyard

Streamyard dashboard in action

The other piece will be creating a Reclaim Karaoke channel in Reclaim Hosting’s Discord server where live chat for these sessions can happen. While I like that PeerTube and YouTube have chat built-in, a pre-vetted chat for a streaming karaoke session seems more sensible. What’s more, we can share the video conferencing link in that channel for folks to jump in and sing without the same concern of sharing it on other networks.

ds106radio stream of Reclaim Karaoke testing

Oh yeah, and by the way, the entire Reclaim Karaoke testing session was also x-cast to ds106radio, and there were no interruptions by pesky apps that cut you off after 40 minutes. So the whole session was both streamed and archived in its entirety there.

Posted in bava.tv, ds106radio, karaoke, PeerTube, Reclaim Karaoke, Streaming | 2 Comments

Reclaim Karaoke Test on ds106radio

This was a test performed on 5/2/2023 to see if Chahira and I could figure out how to get Reclaim karaoke to work seamlessly through Zoom and/or Whereby streaming into PeerTube. The PeerTube side was seamless, but we learned a bit about the limits of both Zoom and Whereby, and more recent tests of Streamyard have proved illuminating in regards to that being a solid alternative.

There are also videos capturing the exploits that I will include here for posterity by obscurity?

First test of Reclaim Karaoke in Zoom cut short

Second test of Reclaim Karaoke in Whereby
Posted in ds106radio, karaoke, on air | Leave a comment

Presenting Reclaim Cloud at OER23

This is the last OER23 blog post on my to-do list; there may be more, but this is the last one I’ve planned. Long live OER23 blogging!

The Industrial Reclaim Cloud Logo from Bryan Mathers

Lauren Hanks and I presented about Reclaim Cloud on the final day of the conference. This was in the afternoon after my “Web 2.0 and Web3 Walk into a Bar…” talk, and I think the two work together quite nicely. Whereas in the morning I discussed the loose federation of blogging that characterized early Web 2.0 and how much of that spirit was present in Web3 tools like Mastodon, this talk focused on how Reclaim Cloud enables folks to more easily install and host many of these next generation tools.

Retro-future art of a container by Bryan Mathers

I’ll try and give a basic walk-through of the presentation in a bit, but it’s worth noting here that the break-through for us with this presentation was the following two sketches by Bryan Mathers: “Reclaim Containers” and “Reclaim Container Cranes.” These were images that were part of the original Reclaim Cloud concept artwork. We never did have these fully fleshed out and colored given we wanted to focus on the retro space-age vision inspired by the Jetsons. That said, when putting this presentation together Lauren and I realized that these visual/conceptual aids allowed us to describe clearly and concisely how containerization works.

Illustrated black and white image of container ship carrying VHS-shaped containers

“Reclaim Containers” by the ever amazing Bryan Mathers

You can think of the ship as analogous to the server that delivers a series of containers. In this example those containers are brilliantly visualized as VHS tapes, not only to remain on brand, but also to communicate that each is its own product, such as WordPress, PeerTube, Mastodon, Ghost, etc. And I think that provides a visual metaphor you can wrap your head around quickly and easily, something Martin Weller and I discuss at length with the Milton-esque extended nautical simile that is Docker in a forthcoming episode of his indispensable edtech podcast series on metaphors.

Image of a VHS-taped shaped container being moved off a ship

“Reclaim Container Crane” by the indubitably wonderful Bryan Mathers

The illustrative power of the “Container Crane” image was not immediately clear to me until talking with Lauren, who noted that this is Reclaim’s role in helping folks understand and manage this new environment. So the cranes in the shipyard help manage and orchestrate these containers, becoming the metaphor for how Reclaim can help with the “heavy lifting” of the next generation of server infrastructure. It all starts to work, and this is something I’ve been banging my head against since 2015 or so, never quite able to communicate containerization adequately. Lo and behold, it only took two images from Bryan Mathers for that breakthrough. If nothing else, folks listening to us might now have an apt visual metaphor to make sense of containerization and how it differs from more traditional web hosting. Long live Mathers art!

In terms of the presentation, we started with a Demo of the Reclaim Cloud interface and the one-click Marketplace installer.

Image of a bunch of applications icons with text "Reclaim Cloud demo"

Levine’s Law: Start with the Demo

What’s interesting about the above image is that it’s really similar to the image Martha Burtis made for our presentation in 2006 when talking about Fantastico and one-click apps for cPanel hosting in Bluehost:

Derivative slide of the Bluehost experiment

This is not the exact slide from the Bluehost experiment talk, but it’s some derivative version of it that I continued to change and use for years afterwards, and it’s funny how it makes the point, at least to me, that Reclaim Cloud is essentially the next iteration of what we started doing at UMW 18 years ago.

Text from NGDLE article highlighting the vision of "Cloud-like spaces"

NGDLE: “For users, it will be cloud-like spaces”

Lauren then discussed the idea floated by EDUCAUSE back in 2015 called the Next Generation Digital Learning Environment (NGDLE). It was the promise of a Lego-like structure of applications and components that would be built around the LMS to provide more “personalized” learning and broader access to data, resources, and, most importantly, tool integration. It was also predicated on a cloud-like space where users define the apps they use for learning. Obviously we felt Domain of One’s Own, just gaining some traction in 2015, was an excellent example of one instance of this vision—just without all the LMS nonsense 🙂

In this regard, the NGDLE white paper was appealing, but the follow-up essays in a special issue in 2017 were a bit more disconcerting. At first the calls for open standards and a general sense of the limitlessness of transparency seemed promising, but it was troubling to realize these words were written by the then Chief Digital Officer for McGraw-Hill Publishing, Stephen Laster. His call for radical openness when it comes to the “free flow of identity, rostering, and learning data” seems more like a free-for-all for student data collection than a thoughtful integration of open tools.

While integration might seem to be the concern of IT departments, in truth it has serious implications for teaching and learning. Technologies that live within closed systems create roadblocks for students and instructors as edtech is used to accelerate learner success and faculty efficiency. The free flow of identity, rostering, and learning data, harnessed in service of confident learners and caring faculty, is what allows technology to move us along Bloom’s journey toward mastery learning.

Quote from Stephen Laster’s “Tearing Down Walls to Deliver on the Promise of Edtech”

One of the travesties of the term open has been its seemingly uncontested goodness when it comes to edtech. I’ve certainly contributed to that problem, but when we’re talking about openness of identity and learning data in relationship to students within systems—this sounds more like a plan to make sure those various interested parties gain unbridled access to personal data within the NGDLE. That approach is very much the opposite of the vision we have for Reclaim Cloud. While the NGDLE proved an acronym only a mother could love, the idea of collecting student data en masse and ensuring a more invasive and surveillance-ready series of tool integrations did come to pass. And I would argue none of these free-flowing efficiencies have resulted in anything resembling better experiences for students, not to say anything about that laughable metric of “success” in this context. Beware the wolf in open sheep’s clothing.

But despite all the scheming and posturing of companies large and small vying for student data, an open standard around containerization for the web was congealing and the vision of a cloud-based space for students to explore various applications was on the horizon. Docker was a huge piece of this shift. I talked a bit about the shipping metaphor at more length, this time using examples from Marc Levinson’s book The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger.

Image of book The Box

Marc Levinson’s book The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger.

I also re-used part of the script I wrote for my small piece in the “Understanding Containers” flex-course Taylor Jadin and I ran back in July, and I do think it holds up:

From there we pulled out the container ship images Bryan Mathers created and we could finally hear the “AHA” moment everyone was having 🙂

Illustrated black and white image of container ship carrying VHS-shaped containers

“Reclaim Containers” by the ever amazing Bryan Mathers

Image of a VHS-taped shaped container being moved off a ship

“Reclaim Container Crane” by the indubitably wonderful Bryan Mathers

After that we returned to the Reclaim Cloud interface, and hit what I think is another huge point of this presentation. The difference, arguably, between what was happening with Web 2.0 software in 2006 and what is happening with containers in 2023 is that not only can this be a sandbox for edtechs to explore a wide range of tools that were heretofore unimaginable to run without major infrastructure investments from IT, but, even crazier, any of those sandbox products that hit and gain traction can be seamlessly scaled to enterprise for a fraction of the cost. We used our recent WordPress Multiregion work in Reclaim Cloud to demonstrate this, and I believe this is the other piece of the evolution of this infrastructure over the last 20 years. You not only get a next-generation sandbox, but one that can seamlessly transition to an enterprise solution that fails over to multiple data centers across multiple regions with automated backups, all with a few clicks in a GUI interface.

Slide with three interlocking WordPress icons with text "Brave new world of multiregion"

Brave New Worlds of WordPress MultiRegion

I have much more to say about our forthcoming cloud-based WordPress hosting interface, which is quite exciting, but I think this same logic extends more broadly to cloud-based apps well beyond Reclaim Cloud. We’re in a brave new world of infrastructure that allows us to run our own federated social networks akin to Twitter or Facebook with the click of a button; there is some degree of magic to that—at least there would be for 2006 me trying to figure out how to keep WordPress up and running on shared hosting. AVANTI!

Posted in docker, OER23, presentations, Reclaim Cloud | 2 Comments

The Allure of Mastodon

Since November I’ve been toying around with Mastodon on Reclaim Cloud. At first it was to take up the gauntlet thrown, and see if I could create a space for the ds106 community. I did a bunch of work for a month or so spinning up a few instances, and Taylor even got an installer built, which is awesome. We have partnered with ALT since January to run Mission Mastodon, a sandbox Mastodon server so folks could test out this tool and begin to wrap their heads around how it works in practice. We had three sessions, and the server will “explode” next month, but one of the things we came away with is that the tech is eminently manageable for a fledgling sysadmin like myself (I speak from my social.ds106.us experience) and the costs to run it are not crazy. The question that looms is whether organizations should feel compelled to stand up their own servers, or whether it makes more sense for folks to sign up wherever and then just use the magic of federation inherent to the system to glue together that community.

I think we leaned towards the latter in our last discussion, “The Final Countdown,” given the overhead of managing a community for most orgs, but I can still see tremendous value in creating intentional community servers—like our little ds106 Mastodon experiment—that provide a more localized space outside the maddening crowd. What’s more, I’m really excited about the much grander work Kathleen Fitzpatrick and company are spearheading with hcommons.social to build a full-blown scholarly community that is built around an open source, federated infrastructure that is only beholden to its own community rules and policies. The key here is it stands intentionally apart from the corporate overlords that everywhere dominate social media presently—which is why new services like Bluesky for me are a non-starter at this late stage of my metastasizing Web 2.0 growth-cycle.

And that for me, right now, is the allure of Mastodon and other federated tools I’ve been playing with, like PeerTube. The network effects may be less, but personally I’m fine with that. It’s a re-calibration to a more human scale I desperately needed. My tweets were getting lost in all the noise for many years, but on Mastodon I’m finding connection again: folks are actually responding, conversations are happening, and there’s a growing sense of community. That’s what I signed on to Twitter for 16+ years ago, and I’m realizing I missed it.

But there’s another piece that’s re-kindled something in me that I’m appreciating these days: it allows me to tinker in service of that community. Just like with WordPress Multiuser back in the day, I really enjoy trying to support a community using a tool to share and connect. I’m digging figuring out how to run Mastodon: what’s the best storage option, how to map domains, how to integrate Azuracast, some basic tweaks and maintenance, etc. It’s that old idea of narrating your work openly, with hopes it could benefit others. And look who else is blogging their exciting experiments with Mastodon. Lead with the work; it’s how you learn what you’re doing and give others a peek into how you’re doing it.

“I’m doing the work! I’m baby-stepping! I’m not a slacker!”

That might be what allures me the most, the idea of having a shared object of attention (other than the AI goldrush) to gather around and learn from one another. It helps me understand what we’re building with Reclaim Cloud, it provides a real-world example, and it’s the open infrastructure I want to see in this virtual world to help support healthy online communities.

Posted in Mastodon, Reclaim Cloud | 8 Comments

Creating a Domain Alias for Mastodon Files Served through DigitalOcean Spaces

As promised in my last missive, this post will take you through creating a custom domain alias for the files being served by a DigitalOcean Spaces bucket. What does that mean? Well, DigitalOcean Spaces is cloud-based file storage that serves media like images, video, etc. This media can take up a lot of storage, especially for an application like Mastodon, so it makes sense to offload it to a service that is relatively inexpensive and prevents you from running into storage caps. Another benefit is that you can move your application more easily between hosting services without worrying about moving all the files, which is often the biggest time suck for migrations.

So that’s why one might use an S3-compatible storage service like Spaces for an application like Mastodon, which I discussed in my last post, but why do you need a custom domain for serving those files? There are a few reasons:

  1. It makes moving between S3-compatible cloud storage providers easier. For example, say I eventually want to move between Spaces and my own self-hosted Minio S3 object storage server; that would be possible without breaking links by keeping the domain consistent regardless of storage provider.
  2. When using a custom domain with Spaces you can actually access their content delivery network (CDN), which means the files are cached around their global network and served faster.
  3. Finally, custom domains are awesome: how much better is a domain like https://ds106.social versus https://ds106social.fra1.digitaloceanspaces.com? No contest, right?

Ok, so now that you know the what and the why, let’s get into the how-to for DigitalOcean Spaces. The first piece is making sure you can point the domain you want to map on a Spaces bucket to DigitalOcean’s nameservers. DNS can be confusing, but the piece to keep in mind is that for this to work the entire domain needs to be pointed at DO’s nameservers, which means you would control the DNS zones and records from their networking interface. This is not necessarily bad; it’s essentially the same thing you do with Cloudflare, but an issue could arise if you already have this domain managed elsewhere, since changing the DNS records to a new service could mean overhead and downtime.

I just happened to have the ds106.social domain I was thinking about running Mastodon on, but having already set up the server on social.ds106.us, it was simply redirecting traffic. So, I decided to point ds106.social to DigitalOcean’s custom nameservers, which are ns1.digitalocean.com, ns2.digitalocean.com, and ns3.digitalocean.com. The nameservers are usually changed wherever you registered the domain. After that, you go to the Networking area of DigitalOcean and add the domain, which will then allow you to manage and add new DNS records:

DNS management panel on DigitalOcean for ds106.social
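
For what it’s worth, the same step can also be done from the command line with DigitalOcean’s doctl tool instead of the web interface. A quick sketch, assuming doctl is already installed and authenticated against the right account:

# add the domain to DigitalOcean's DNS management
doctl compute domain create ds106.social

# confirm the zone exists and list any records created so far
doctl compute domain records list ds106.social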

Once the domain is added, return to the Spaces bucket and enable the CDN option in settings:

Enable CDN option in Spaces bucket settings

Once the CDN option is enabled you will be asked to select a subdomain to load the files over, and now that you have pointed your domain’s nameservers to DigitalOcean you can do this. You can choose the domain ds106.social and define the subdomain.

Creating subdomain and issuing certificate for Spaces CDN

I used files.ds106.social, and this will automatically create a CNAME (or alias) that will make all the files served over ds106social.fra1.digitaloceanspaces.com also work over ds106social.fra1.cdn.digitaloceanspaces.com for the CDN, but most importantly served over files.ds106.social. Below are three examples of the same file resolving over all three domain names:

https://ds106social.fra1.digitaloceanspaces.com/accounts/avatars/109/309/621/412/056/867/original/f758838c34a58ad2.jpg
https://ds106social.fra1.cdn.digitaloceanspaces.com/accounts/avatars/109/309/621/412/056/867/original/f758838c34a58ad2.jpg
https://files.ds106.social/accounts/avatars/109/309/621/412/056/867/original/f758838c34a58ad2.jpg

And that is the magic of alias domain mapping! DigitalOcean provides the Let’s Encrypt certificate that works for all the above domains, and now the ds106 Mastodon instance defaults to the files.ds106.social domain for all media, which is quite slick. The subdomain, files.ds106.social in this example, is a CNAME hostname pointed to the value ds106social.fra1.cdn.digitaloceanspaces.com, which makes files.ds106.social an alias of ds106social.fra1.cdn.digitaloceanspaces.com.

CNAME for subdomain files.ds106.social in DigitalOcean DNS panel
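
A quick way to sanity-check the alias from the command line is to confirm the CNAME resolves and that a file actually serves over the custom domain (using the avatar URL from the examples above):

dig +short files.ds106.social CNAME
# should return ds106social.fra1.cdn.digitaloceanspaces.com.

curl -I https://files.ds106.social/accounts/avatars/109/309/621/412/056/867/original/f758838c34a58ad2.jpg
# should come back with a 200 and a valid certificate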

I learned this the hard way given I was initially confused about whether I could point a subdomain from another service like Cloudflare to the Spaces bucket (you can do this on AWS I’m pretty sure), and the short answer is you can’t. You have to point nameservers and then use DigitalOcean’s networking option to manage the domain, subdomain and SSL certificate.

The final step would be to add the environment variable for the new alias to your .env.production file in Mastodon:

S3_ALIAS_HOST=files.ds106.social

So my .env.production configuration for the object storage now looks like this, which is just the addition of the S3_ALIAS_HOST variable:

S3_ENABLED=true
S3_PROTOCOL=https
S3_BUCKET=ds106social
S3_REGION=fra1
S3_HOSTNAME=fra1.digitaloceanspaces.com
S3_ENDPOINT=https://fra1.digitaloceanspaces.com
S3_ALIAS_HOST=files.ds106.social
AWS_ACCESS_KEY_ID=yourspacesaccesskey
AWS_SECRET_ACCESS_KEY=yoursupersecretspacesaccesskey
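
One thing worth flagging from my own tinkering: the new variable doesn’t take effect until the Mastodon services are restarted, same drill as in the storage migration post below. This assumes the standard systemd unit names from a non-Docker install:

systemctl restart mastodon-web mastodon-sidekiq mastodon-streaming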

And that’s all folks! If everything is working as it should be, your domain alias is in effect, and you can test that by inspecting any image or avatar and checking the domain in the file URL.

NB: If I were a better guide writer I would re-do adding this subdomain to Spaces given I can’t remember the exact steps of how the subdomain was created (complicated by the fact I uploaded a certificate from Cloudflare while I was trying the old method) and how it creates the certificate, but I believe it is automatic and you can create a new subdomain such as files, but I really don’t want to delete this subdomain and do it again for the sake of this post given it will create a bit of havoc on the server 🙂

Posted in Mastodon | 1 Comment

Moving Mastodon Object Storage from AWS S3 to DigitalOcean Spaces

These days I’ve been in a Mastodon maintenance state-of-mind, as my last post highlights. Early this week I tried to shore up some security issues linked with listing files publicly in our AWS S3 bucket. This encouraged me to explore additional options for object storage (the fancy name for cloud-based file storage) and I decided to try and move all the files for social.ds106.us from the AWS bucket to DigitalOcean’s Spaces, which is their S3-compatible cloud storage offering.

“Why are you doing this?”

Why am I doing this? Firstly, to see if I could, and secondly because AWS is unnecessarily difficult. What annoys me about AWS is that the interface remains a nightmare and a simple permissions policy like preventing the listing of files in a bucket becomes overly complex, which then results in potential security risks. I eventually figured out how to restrict file listing on AWS, but seeing how easy it was with my other Mastodon instance using Spaces—literally a check box—I decided to jump.

DigitalOcean’s Spaces provides option for disabling File Listing

To be clear, I didn’t take this lightly. The ds106 Mastodon server has over 30 active users, and I’m loving that small, tight community. In fact, I’m loving the work of administering Mastodon more generally; it transports me back to the joyful process of discovery when figuring out how to run WordPress Multiuser.

“Back when I had hair”

Anyway, moving people’s stuff means you can also lose their stuff, and that is not fun. So I did take a snapshot of the server before embarking on this expedition, but the service does not stop with a snapshot. Folks keep posting and the federated feed keeps pulling in new posts, so that was something I had to keep in mind along the way.

So, in order to get started I had to set up a Spaces bucket on DigitalOcean where I would move all the files from the AWS S3 bucket. Doing this is really quite simple, and you can follow DigitalOcean’s guide for “How to Create Spaces Bucket.” Keep in mind you want to restrict file listing in the settings for this bucket.

After that, we need to move the files over, and after reading another DigitalOcean guide on “How to Migrate from Amazon S3 to DigitalOcean Spaces with rclone” the best tool for the job seems to be the rclone utility—but that guide is from 2017, so I would love to hear if anyone has something else they prefer. You download rclone locally and set up the configuration file with the details for both AWS and Spaces.

My config file looked something like what follows, but keep in mind it is highly recommended, if you’re doing this for Mastodon, that you make the acl (access control list) public-read, not private as the rclone guide linked previously suggests. Leaving this setting as private led to a huge headache for me given I was not reading the fine print, and with over 400,000 objects being moved to Spaces, all of which were private, I had to do another pass using the s3cmd utility to make them all public—which took many hours and added an unnecessary step given these files are designed to be public.

Anyway, here is a template for your rclone.conf file; keep in mind you’ll need your own access keys for each service.

[s3]
type = s3
env_auth = false
access_key_id = yourawsbucketaccesskey
secret_access_key = yourawsbucketsecretaccesskey
region = us-east-1
location_constraint = us-east-1
acl = public-read

[spaces]
type = s3
env_auth = false
access_key_id = yourspacesaccesskey
secret_access_key = yourspacessecretacccesskey
endpoint = fra1.digitaloceanspaces.com
acl = public-read

After that’s set up correctly, you can run the following command to clone all files from the specified AWS S3 bucket over to the specified DigitalOcean Spaces bucket:

rclone sync s3:reclaimsocialdev spaces:ds106social
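
For a transfer this size a couple of rclone flags are worth knowing about. This is just a sketch; the transfer count is a guess at a sane value rather than something I benchmarked:

# preview what would be copied without actually copying anything
rclone sync --dry-run s3:reclaimsocialdev spaces:ds106social

# the real run, with progress output and more parallel transfers
rclone sync --progress --transfers 16 s3:reclaimsocialdev spaces:ds106social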

Given the sheer number of files this took hours. And one of the things I learned when doing this is that most of those files are in the cache folder, given that is where all the federated posts are stored, and this is what can quickly get your Mastodon instance storage out of control. In fact, our local user files are all stored in media_attachments:

Mastodon Bucket file storage directories
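
If you want to see where the storage is actually going before (or during) a migration like this, rclone can report per-directory totals. A quick sketch using the same remote name as the config above:

# total size and object count for the federated cache versus local user media
rclone size s3:reclaimsocialdev/cache
rclone size s3:reclaimsocialdev/media_attachments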

The cache folder is by far the largest for the ds106 server, and knowing this allowed me to do a final clone of every other directory before switching to Spaces to make sure everything was up to date. Some of the federated media may be lost in the transition given the sheer volume, and that proved to be true for about an hour’s worth of media on this server.* Luckily it is still all on the original source, and this only affected images and thumbnails, but I do want to figure out if there is a way of re-pulling in specific federated feeds, given I tried to run the commands for accounts refresh:

RAILS_ENV=production /home/mastodon/live/bin/tootctl accounts refresh [email protected]

As well as rebuilding all feeds, but that didn’t help either:

RAILS_ENV=production /home/mastodon/live/bin/tootctl feeds build --all
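
If I’m reading the tootctl documentation right, the media-specific counterpart of those commands might be closer to what’s needed here, since it re-fetches remote attachments rather than account or feed data. I haven’t verified this on the ds106 server, so treat it as a sketch rather than a fix:

RAILS_ENV=production /home/mastodon/live/bin/tootctl media refresh --account [email protected]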

That said, it was not too big an issue given just a few images were lost in the transition from federated posts, so I was willing to swallow that loss and move on. After the cloning was finished and I made the switch I realized all the media on Spaces was private given that errant acl setting referenced above. I was chasing my tail for a bit until I figured that issue out. I had to use the s3cmd tool for the Spaces bucket to update the permissions for each directory. Rather than trying to change permissions for all 400,000 files at once, which would take too long, I just updated permissions for every directory but cache (which has the lion’s share of files on this server) and this was done in under an hour. Here are the s3cmd commands I used, and here is a useful guide for getting the s3cmd utility installed on your machine.

s3cmd setacl s3://ds106social/accounts --acl-public --recursive
s3cmd setacl s3://ds106social/media_attachments --acl-public --recursive
s3cmd setacl s3://ds106social/site_uploads --acl-public --recursive
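
One detail worth calling out for anyone following along: for these s3cmd commands to talk to Spaces rather than AWS, the ~/.s3cfg generated by s3cmd --configure needs to point at the Spaces endpoint. A sketch of the relevant lines, assuming the Frankfurt region used here and your own keys:

access_key = yourspacesaccesskey
secret_key = yourspacessecretaccesskey
host_base = fra1.digitaloceanspaces.com
host_bucket = %(bucket)s.fra1.digitaloceanspaces.com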

Once these were done, I then went into each sub-directory of the cache folder and did the same for them, which with 400,000 files took forever.

s3cmd setacl s3://ds106social/cache/preview_cards --acl-public --recursive
s3cmd setacl s3://ds106social/cache/custom_emojis --acl-public --recursive
s3cmd setacl s3://ds106social/cache/accounts/avatars --acl-public --recursive
s3cmd setacl s3://ds106social/cache/accounts/headers --acl-public --recursive

Once I updated all the file permissions for every directory except cache, I could switch to Spaces and let the permissions update for the cache directory run in the background for several hours. That did mean that media for older federated posts were not resolving on the ds106 instance in the interim which was not ideal, but at the same time nothing was lost and that issue would be resolved. Moreover, all newly federated post media coming in after the switch was resolving cleanly, so there was at least a patina of normalcy.

In terms of switching, you update the .env.production file in /home/mastodon/live with the new credentials, so after updating the file my Spaces details look like the following:

S3_ENABLED=true
S3_PROTOCOL=https
S3_BUCKET=ds106social
S3_REGION=fra1
S3_HOSTNAME=fra1.digitaloceanspaces.com
S3_ENDPOINT=https://fra1.digitaloceanspaces.com
AWS_ACCESS_KEY_ID=yourspacesacccesskey
AWS_SECRET_ACCESS_KEY=yourspacessecretacccesskey

After you make these changes and save the file you can restart the web service:

systemctl restart mastodon-web
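
In my case restarting the web service was enough to see the change, but since Sidekiq is the piece that actually processes and uploads media, it probably doesn’t hurt to restart it (and the streaming service) as well. This assumes the standard systemd unit names from a non-Docker install:

systemctl restart mastodon-sidekiq mastodon-streaming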

You’ll notice the region is now Frankfurt, Germany for the ds106 Spaces bucket versus US-East in AWS; this makes it a bit more GDPR compliant given the ds106 Mastodon server is in Canada. And with that, if you don’t make the same permissions mistakes as me, your media will now be on Spaces, which means you have simplified your Mastodon sysadmin life tremendously.

With that done and the files being successfully served from Spaces, I started to get cocky. So, I decided to load all the files over a custom domain like https://ds106.social versus https://ds106social.fra1.digitaloceanspaces.com, but I’ll save that story for my next post….

Update: For posterity, it might be good to record that this was the StackExchange post that helped me figure out the permissions issues with the rclone transfer making all files private.

____________________________________________________

*It seems, though I’m not certain, that when I tried to re-clone specific directories within the cache directory that had not synced overnight, the re-cloning removed the existing files in the destination directory. I was hoping it would act more like rsync in that regard, but I’m not sure it does, which may also be a limitation of my knowledge here.

Posted in AWS, Mastodon, s3, sysadmin | 7 Comments