The bava Media Center

Image of an entertainment center with a bunch of component parts listed in the post

The bava media center, featuring a Sony Trinitron 19” television to the far left with a Commodore 128 right below it. Beneath that is a used Panasonic DP-UB9000 DVD/Blu-ray/UHD multi-region player for the newfangled, higher-end media. The beige entertainment center (starting with the component pieces) holds the gold Pioneer DVL-909 CD/LaserDisc/DVD multi-region player; below that is an Onkyo DX-C390 6-disc changer for your ultimate compact disc fix. Moving right, you have a Panasonic AG-6200 NTSC/PAL multi-region VCR (a must since arriving in Italy) and a Marantz PM6006 stereo receiver (which is running out of open inputs). On top of the entertainment center is an Audio-Technica AT-LP120-USB turntable, and to the right of that the 100-pound 27” Sony Trinitron—an absolute gem.

Image of two drawers filled with vinyl and laserdiscs

Beneath all the component pieces are two drawers filled with vinyl records and laserdiscs. This makes for some handy storage and keeps things clean, but the vinyl and laserdiscs always find a way to get out of their holders and end up strewn around the office.

 

As Tim knows from when we were building the Console Living Room and then Reclaim Video, I love to wire together and play with component systems like this. I’m not particularly good at it, but it gives me endless hours of frustrating delight. I have my current setup in decent shape, and almost every receiver input has been accounted for. Recorder is the only free input, and once I figure it out I’m thinking that would be a good spot for the Commodore 128—we’ll see. The above image highlights the mapping of the various inputs; here they are as of today:

  • Phono -> Audio-Technica turntable
  • CD -> Onkyo 6-disc changer
  • Network -> Pioneer DVD/LD/CD
  • Tuner -> Panasonic VCR
  • Recorder -> N/A
  • Coaxial -> Panasonic Blu-ray/UHD/DVD
  • Optical 1 -> Onkyo CD/Computer
  • Optical 2 -> Pioneer DVD/LD/CD

You’ll notice from the mapping above that I have the Onkyo and Pioneer going in as both RCA and optical inputs. I have found recently that some DVDs do not pick up the optical feed on the Pioneer, so sometimes I have to switch the audio feed back to RCA.

But that raises a bigger question: how does this all look on the backside, and which audio/video feed is going where? How is it coming into the computer for ds106radio casts? That’s something I want to work through, given I’ll be trying to break it down on a stream this coming Friday. So let me see if I can make heads or tails of it here.

The Audio Hijack setup is a Shure MV7 microphone coming into the computer over USB-C, and the USB Audio Codec input is a USB-B feed from the Audio-Technica turntable. Those are pretty straightforward, but the Cam Link 4K is an Elgato 1080p HDMI capture card that is fed by a switcher handling two RCA devices: the Onkyo 6-disc changer and the Pioneer CD/LD/DVD player, for CD audio as well as video (if and when I need it). This switcher is primarily for converting RCA video inputs to HDMI, but it works just as well with a CD player, so it was the best option I had at hand to convert the CD players into an audio feed for the radio. The switcher has a button that allows me to select between the two RCA inputs for those devices, which is pretty slick. I just need a better labeling system.

 

This RCA switcher could also work for the VCR input, and I can pull the Blu-ray/UHD player feed in from another Elgato Cam Link capture card I have lying around (although I might need to get one that does 4K) straight over HDMI, no RCA conversion needed. The video streaming setup might be worth another post, but this does give you a pretty good sense of the audio inputs for the various ds106radio/Reclaim Radio sources, which can handle both vinyl and compact discs without issue….YEAH!


It Came from the bava Archive, Volume 1

Back in September I installed the On This Day plugin to start trying to review and clean up the bava archive on a more regular basis. With almost 4,000 posts, this blog has accumulated a lot of jettisoned media and broken links, and while I understand things will inevitably fall apart on this fragile web, I have an unhealthy obsession with trying to keep up appearances. My original plan of checking the archives page every day proved a bit too ambitious, but I’ve been able to check in now and again since September, and I must say it is a wild ride.

Animated GIF from 400 Blows

More than anything, I continue to be blown away by the fact that I don’t remember so much of it. Let me start with a telling example of the experience on a few levels. Going through the On This Day post links a few days ago I found a post I actually do remember writing about Spacesick’s “I Can Read Movies” book cover series during the height of the ds106 design assignment days. The post was riddled with dead links given the Flickr account originally housing the images of these amazing re-imagined movie book covers was no longer active. The account may have been a victim of the shift to charging folks for larger accounts, but whatever the reason, the images were long gone.

Screenshot of ds106 Needs Mo Better Design with holes left from broken image links

So to try and fill the broken image gaps, I searched for the “I Can Read Movies” posters I referenced and embedded in that post. I found a few, though not all, and I also noticed some other “I Can Read Movies” titles while I was searching. There was one I thought was particularly clever, the “I Can Read Alien” version:

After following the link I was brought to this post on bavatuesdays; it turns out I created this book cover as a ds106 assignment almost 12 years ago, and even after reading my own post I have only a vague memory of it. I said it already, but going through the blog archive is akin to experiencing some Memento-like creative amnesia—“I actually did that?” So that element of the experience is omnipresent, but it’s not all existential angst. Another element I found amazing was discovering, to my pleasant surprise, that the Dawn of the Dead cover I couldn’t find was readily available after searching my own blog on the Wayback Machine for the original post and hitting pay dirt thanks to a November 24, 2010 capture:

screenshot of the November 24, 2010 capture the Wayback Machine took of this blog post which captured all the now gone images

It’s so beautiful: a K2/Kubrick-themed bavatuesdays, no less!

More extensive screenshot of the post as archived on the Wayback Machine, with all the links intact

I was impressed by how well they archived the Flickr images; the only thing not working was a dynamic slideshow I included of Spacesick’s book covers from an album on Flickr, but besides that it was perfect. What’s more, the capture does a great job reproducing the K2/Kubrick theme style of the bava at that time, as well as the context-specific sidebar updates. I never really used the Wayback Machine for my own blog before, but I have to say the Internet Archive is #4life!
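If you want to do this kind of digging on your own blog, the Wayback Machine’s CDX API will list every capture it has of a given URL, which is handy for finding the snapshot closest to when the images were still alive. A minimal sketch (the post path is just a placeholder, swap in your own):

curl "https://web.archive.org/cdx/search/cdx?url=bavatuesdays.com/PATH-TO-POST&output=json&from=2010&to=2011"

Each row in the response includes a timestamp you can drop into a https://web.archive.org/web/TIMESTAMP/URL style link to pull up that specific capture.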

Screenshot of bava post with just captions but no images from Google Maps live photos that came from a tumblr that collected them

After that, I resurrected a post with a ton of broken image links from a Tumblr account highlighting all the weird images captured by the Google Maps photo cars. In fact, we had a ds106 class wherein we captioned the photos, which I shared in this post, and they are pretty wild. I cannot believe we spent an entire class session on this, but at the same time how awesome! 🙂 Well, it turns out the Wayback Machine captured almost every image in that post, and they were right there for me to browse, and ultimately copy over to my blog:

Screenshot of a blog post on bavatuesdays including Google Maps images captioned by students. This was when Google Maps started adding a live view to its interface

This was a pretty amazing discovery, and there was another post I had unearthed that was missing images, so I figured, “Why not?” I rolled the dice, but no luck. Upon further inspection I realized I was linking to a UMW Blogs-hosted course about Ethics in Literature by the great Mara Scanlon. But wait, UMW Blogs was recently archived by those fine folks, so why not give that Webrecorder instance a search? And wham, bam, thank you UMW, I found the Ethics and Lit blog in all its glory:

Screenshot of Mara Scanlon’s Ethics and Literature course blog from Fall 2009

Now to find the images about Faulkner’s As I Lay Dying wherein Vardaman compares his dead mother to a fish. Search…..Bingo!

Image of a fish-like corpse in a coffin as an interpretation of a scene from Faulkner’s As I Lay Dying

So I copied that image and the other one into my own blog, which had holes in its tapestry, and…

Voila, the bava is whole again!

The fact that Reclaim Hosting could partner with UMW on preserving that platform makes me ridiculously happy. I love it conceptually, ideologically, and even more in practice—which this post attests to.

These recent raids of the bava archives have me excited about what I might find next, even more so now given I might be able to actually future-proof quite a bit of the media on the bava thanks to organizations like the Internet Archive and UMW Blogs—I love those two flavors side-by-side in that statement 🙂

The things you discover and realize by reflecting on a few love letters sent to the web years and years ago.

Animated GIF of Burt Lancaster in a car from Criss Cross

Me driving back through my GIF posts in the bava archives

Anyway, I’ve also realized my blogging is pretty thematic: I write about WordPress and UMW Blogs a lot early on; then ds106 takes over; then Reclaim Hosting—not that surprising given it’s a document of my work over 18 years. But one theme I particularly enjoyed on my recent travels into the archive was all the writing about GIFs. In fact, a couple of my very favorite bava posts are just GIFs. The “Give it Up for the GIF” post written on November 17, 2012 may be my favorite post of all time given it highlights all the GIFs I made during a two-year, quite intensive run of ds106:

Give it Up for the GIF

I also documented the moment Zach Davis shared the “If We Don’t, Remember Me” tumblr that gave GIFs an entirely new world of possibilities with a deeply meditative aesthetic that highlights the creative and instructional power of the format.

Blade Runner GIF from the If We Don’t, Remember Me tumblr

My mother? Let me tell you about my mother….

And then there is the lamenting about not making GIFs as good as I used to, which still manages a decent GIF from Magnificent Obsession:

In 2011 and 2012 there were so many GIFs on the bava, and for me they were a direct link to the creative excitement that came out of ds106—quite possibly this blog at its creative peak, definitely when it comes to the sheer number of unique, exciting posts about almost anything I found culturally interesting.

Animated GIF of the eye from Blade Runner

There is much more I can say, and I have earmarked a whole bunch of posts that highlight some of the other themes as they get fleshed out with more research, but for now let volume 1 of the archive reflections highlight the magic of the GIF, something that since 2010/2011 has become a widely accepted way of communicating digitally, and one that I love, love, love.

What an amazing first issue 🙂


Today is the Day I Got My Mastodons Dockerizized

Low-hanging Taxi Driver Reference

To battle the immense feeling of frustration with yesterday’s West Coast outage at Reclaim, early this morning I decided to take Travis Bickle‘s advice and get my Mastodon instances dockerizized. I would blog the process of migrating from a source instance of Mastodon to a Docker instance, but once again Taylor already did as much on his blog a few weeks back: “Migrating Mastodon from a Source Install to a Dockerized Install.” Hell, we even did a stream of the process that takes you through it step-by-step, and I re-watched that this morning in order to get both social.ds106.us and reclaim.rocks containerized.

So this post is not so much about the details of containerizing as it is a love letter to Mastodon. It’s become like a neighborhood bar where I can follow and catch up with a few folks I actually want to see, rather than every jackass in high school I wanted to leave far, far behind. I think Mastodon epitomizes the idea of faith in a seed when it comes to community and open source tools, and it has become the most important one for me when it comes to connecting with folks. Mastodon reminds me why I got into this edtech racket: 1) I love to tinker with open source tools, and Mastodon is a remarkably powerful open tool for building community that has helped to curb the impact of algorithms, commerce-driven data extraction, and the usual crap that ruins these spaces–what’s not to love?* 2) ratcheting back scale and connections right now is probably the healthiest mental approach to a news/hype cycle that continues to go fully off the rails. Mastodon is akin to what people have told me about marijuana 🙂: it makes you feel happy and inclined to laugh without the sense of needing more, and any potential paranoia of being found out quickly dissipates when you realize no one is there to find you out—they’re all looking for their own home away from the maddening crowd.

The modest scale of the ds106 social server is a case in point: there are maybe 20 or 30 users that are “active,” and there is a general sense of folks dipping their toes in and trying to wrap their heads around it. As a result the conversations are slower and more direct, the drive-by attention culture is much less, and it provides a personalized, contextualized feed that can open up to a wide range of variegated topics. Twitter at its best, but with far fewer soapboxes and self-promoters who craved follower-count capital far more than actually connecting. I’m happy on Mastodon; it’s a kind of enclosure that offers protection from the onslaught of the market in those other social spaces, and I’m ready to settle in the murky territories and pioneer the socials for a while.

I say this knowing that just over a year ago I, too, was dipping my toe in the Mastodon pool, considering jumping. I finally did, installed three or four servers, and managed to figure out the what and how. But it has taken me much longer to figure out the why, which it turns out is pretty simple. I do want to be social in these spaces, but I want it on my terms, outside the templated, market-driven spaces that have proved time and time again that they do not respect privacy, civility, tolerance, compassion, or just about any of the qualities that need to define a space I want to call my local. Mastodon has become the rock we have built the ds106 Social Local on, and one year later I have no regrets. Do you, you corporate social media hippies? Be more than a burned out threat!

_________________________________

*And I have to say, what casuistry will folks who know better use to explain away their merry posting on Facebook, LinkedIn, and Instagram to keep those likes and subscribes coming? Staying on those sites, no matter the reason, is cynical at best.


Offloading Peertube Media to S3

My last post documented offloading Azuracast media to Digital Ocean’s Spaces, and this is a follow-up on the offloading theme. Only this time it was for my Peertube instance hosted at bava.tv. I’ll be honest, this could have been a very painful process if Taylor hadn’t blogged how he did it with Backblaze last December, so in many ways his work made the process a piece of cake. Thank you, Taylor!

I made all media public in the Digital Ocean Spaces bucket, and also created a CORS rule to give the bava.tv domain full control over the bucket. After that, I just got the keys, endpoint, region, and bucket name, and I was good to go:

object_storage:
  enabled: true
  endpoint: 'nyc3.digitaloceanspaces.com' 
  upload_acl: 
    public: 'public-read'
    private: 'private' 
  videos: 
    bucket_name: 'bavatv'
    prefix: 'videos/' 
  streaming_playlists: 
    bucket_name: 'bavatv' 
    prefix: 'streaming-playlists/' 
  credentials: 
    access_key_id: 'youraccesskey' 
    secret_access_key: 'yoursupersecretkeytothekingdom'
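For the record, applying a config change like this on the Docker install just means restarting the PeerTube container; on my setup that is roughly the following (a sketch, assuming the stock docker-compose layout under /home/peertube):

cd /home/peertube
docker-compose restart peertube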

Once I replaced the existing object_storage block at the bottom of the production.yaml file (found in /home/peertube/docker-volume/config/) with that and restarted the container, the site was offloading videos to my bucket in Spaces seamlessly. Now I just needed to offload all the existing videos to that bucket, which this command did perfectly:

docker-compose exec -u peertube peertube npm run create-move-video-storage-job -- --to-object-storage --all-videos

After it was done running it had removed close to 120 GB from the server, and my videos loaded from the Spaces bucket. The videos load a bit slower, but that is to be expected, and the next piece will be finding a way to speed them up using a proxy of some kind. But for now I am offloading media like it’s my job; well, actually, it kind of is.
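For that speed-up piece, one option I’m considering is putting a caching reverse proxy in front of the Spaces bucket so frequently watched videos get served from local disk. A rough nginx sketch of the idea (the media.bava.tv hostname is hypothetical, and the SSL certificate directives are omitted for brevity):

proxy_cache_path /var/cache/nginx/peertube levels=1:2 keys_zone=peertube_s3:10m max_size=20g inactive=7d use_temp_path=off;

server {
    listen 443 ssl;
    server_name media.bava.tv;  # hypothetical caching hostname
    # ssl_certificate and ssl_certificate_key would go here

    location / {
        # pull from the Spaces bucket and keep a local copy for a week
        proxy_pass https://bavatv.nyc3.digitaloceanspaces.com;
        proxy_set_header Host bavatv.nyc3.digitaloceanspaces.com;
        proxy_cache peertube_s3;
        proxy_cache_valid 200 7d;
        add_header X-Cache-Status $upstream_cache_status;
    }
}

If PeerTube supports serving object storage files from a custom URL (newer versions have a base_url style option), that would need to point at this hostname as well; treat this as the general shape rather than a drop-in config.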


Offloading Azuracast Media, Recordings, and Backups to S3

I’ve been on a bit of an offloading kick these last months, and I’m digging the idea of controlling server environment bloat by having media hosted in an S3 storage situation. It simplifies cloning, updating, and/or migrating a server environment, and oftentimes results in faster load times. I’ve been doing it with bigger WordPress sites for months now, and it’s high time to turn my attention to some other applications, in this case the web radio software Azuracast.

More than 3 years ago ds106radio was migrated from Airtime to Azuracast. It was the right move, and Azuracast has proven to be the best in the business; I love it. Buster Neece, the primary developer of Azuracast, has made a big push the last several years to containerize the software, making it much easier to host. Updating Azuracast is now a piece of cake, and new features just keep on coming. One feature that’s been around for a while that I’ve yet to take advantage of is offloading media, and given the instance has surpassed 100 GB of live broadcast recordings, it’s officially overdue. I realized the value of offloading media after hosting Mastodon for a year: social.ds106.us has 168 GB of offloaded media, and that’s for a very small server. Offloading at just $5 a month for up to 250 GB on Digital Ocean’s Spaces is a total deal, and it frees up significant space on Reclaim Cloud.

A few of our S3 buckets in Digital Ocean’s Spaces

So, below are details of how I offloaded all media, backups, and recordings on the mighty ds106rad.io to Digital Ocean’s S3-compatible object storage, Spaces. You can find a quick-start guide for getting a bucket up and running on DO’s Spaces. Apart from that, I installed the handy, dandy s3cmd tool on the Azuracast server to help with moving files to S3. I also recommend installing ncdu on your environment to see where all the larger files live (this could have saved me some hassle if I had installed it sooner).
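For reference, on a CentOS 7 environment like the one this Docker Engine node runs, both tools should be a quick install from EPEL (a sketch; these are the standard package names as far as I know):

yum install -y epel-release
yum install -y s3cmd ncdu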

When working in Reclaim Cloud, I do all my testing on a clone of the production site that I spin up to make sure no changes I make break the live site. If all works as planned I do the same thing on production. One could also just as easily destroy the production server and point DNS to the clone where all the changes were made successfully, but I am superstitious and inefficient so I do everything twice.

When configuring s3cmd (s3cmd --configure) you will need your bucket access key, secret key, region, endpoint, and bucket name. Something like this:

Access Key: <your S3 access key>
Secret Key: <your S3 secret key>
Region (my region for Spaces is Frankfurt): fra1
Bucket: filesds106radio
Endpoint: fra1.digitaloceanspaces.com
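Once that is saved, a quick sanity check that the credentials and endpoint are right is simply listing the bucket (using the bucket name above):

s3cmd ls s3://filesds106radio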

That format worked for me on DO’s Spaces (you will need your own access keys), and thereafter the server connects to the S3 bucket automatically, which is convenient. Beyond that, it is time to locate the files and start offloading them to S3. Here is where I should have used ncdu sooner, given it’s a handy command line tool that shows how much space each directory is using and lets you drill down directory by directory. So, if I’m trying to figure out where the 100 GB of recordings are stored on the server, ncdu could track that down quickly. But I did things the hard way: I used a command I had learned from my colleague Taylor to log into the container within the server environment and copy all the files out of the container into the broader server environment.

I know, it’s confusing, but that’s part of wrapping your head around working with containers. Each container can be logged into separately using the command docker exec -it containerid sh, which will allow you to move around in the Alpine Linux container and use a few, very limited commands. For example, s3cmd does not run within the specific containers. So I got the brilliant idea of copying the media, backups, and recordings directories from the container to the Docker environment so I could then use s3cmd to put them in the S3 bucket. You following? Probably not, but think of it like this: the server environment is a container running a Docker Engine environment that can harness all the tools and commands of CentOS 7, whereas the specific container running Azuracast is packaged as an Alpine Linux container with far fewer server bells and whistles and only limited packages and command line affordances. Containers within containers, and the stripped-down nature of the containers inside the Docker environment is what makes them run more efficiently.

Here is the command I used to copy files from the Azuracast container into the general server environment:

docker cp containerid:/var/azuracast/stations/ds106radio/recordings /var/azuracast/

I was stoked that this worked, and all the files from /var/azuracast/stations/ds106radio/recordings could be copied out of the container and into /var/azuracast so that I could then run the following command to put everything I just copied into the S3 bucket:

s3cmd put /var/azuracast/recordings/* s3://filesds106radio --acl-public --recursive

That worked, but when I had finished the process and was deleting the media I had offloaded from the server, I fired up ncdu to find any remaining large files. That’s when I realized all the media was in /var/lib/docker/volumes/azuracast_station_data/_data/ds106radio/recordings, so I did not need to exec into the container and copy everything out; I could much more easily have run s3cmd against that directory to offload the media. Nothing like learning that after the fact, but working through this conceptually has helped me understand how containers operate, which is very useful–despite all the extra work.
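In other words, the shortcut would have looked something like this (a sketch using the volume path above; the recordings/ prefix on the bucket side is just how I would organize it, so adjust to taste, and double-check the path on your own install before deleting anything):

ncdu /var/lib/docker/volumes
s3cmd put --recursive --acl-public /var/lib/docker/volumes/azuracast_station_data/_data/ds106radio/recordings/ s3://filesds106radio/recordings/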

S3 Bucket with directories for backups, media, podcasts, and recordings

Once all the data was moved to the Spaces bucket, I then needed to log into Azuracast and update the storage locations to point to S3 for media, recordings, podcasts, and backups.

In System Maintenance -> Storage Locations you can add the new S3 storage location using much of the same data referenced above for setting up s3cmd.

Once you add the bucket data for each type of media (having some way to carry the S3 details over for each storage type would make it easier/quicker) you’ll be asked to update your server configuration with a quick reboot, and the media will start offloading immediately. Given we’ve already copied the pre-existing media to those buckets, the broadcasts and media archive will already be populated with old recordings and previously uploaded media.

With that, you can remove all the media files on the server and save yourself well over 100 GB of space.


bava on the Edge

On the edge, I’ve been there
And it’s just as crowded as back home.

Dag Nasty, “La Peñita”

Yesterday I did a little experimenting on the good old bava.blog to test the notion of application delivery networks (ADNs). You have probably heard of content delivery networks (CDNs), wherein static content is delivered via caches all over a service’s global network (the most popular being Cloudflare). Well, with this new acronym, beyond the content the whole application itself is cached across the network, so when one (or in my case both) of the servers driving the bava go down, the site is unaffected; the network begins to deliver the application itself. That means not only high availability, but virtually guaranteed 100% uptime.* I found it hard to believe, and while I have been looking into edge computing thanks to Phil Windley’s recent post, this was my first exploration of the concept.

Our cloud hosting at Reclaim Cloud is driven by the software developed by Jelastic, which was bought by Virtuozzo. It has been something we’ve been pushing pretty hard on, with not only apps well beyond the LAMP stack but also containers and the wonderful work of Docker, which in turn led us to start building a dedicated WordPress service on top of performant, affordable containerized WordPress hosting: ReclaimPress. As I’ve been working through ReclaimPress, I was shown the tool/service Edgeport. Very much positioned as a simplified, easy-to-use Cloudflare competitor, Edgeport was designed as a security-first, cloud-native Web Application Firewall with a global network that delivers applications dynamically, even when the origin servers are off. Their DNS options are an affordable alternative to Cloudflare for similar plans, which has been a key factor for me. To get in the door for enterprise at Cloudflare is somewhere in the ballpark of $3,000 a month (which the condescending Cloudflare sales agent was sure to remind me), whereas all the features we need–many of which are Cloudflare enterprise only—are part of a $199 a month plan at Edgeport. What’s more, I have not seen anything like an ADN offering at Cloudflare, so we now have a viable, affordable alternative that can do even more. That makes me very happy.

I can harness a globally cached network, as well as load-balancing failover and the emergency backup of applications being cached and delivered in their entirety from the network (whether or not my servers load), and that is not even counting the vast security tools that I have to dig into with Noah in more detail. It seemed like magic, so I spent much of yesterday testing it on this old blog.

I turned off both servers in the failover setup at 10:59 UTC and then powered them back on at 19:48, so just under 9 hours of downtime that did not stop a single page or post from working cleanly on my site.

Log for when the servers were turned off and then back on

I had Antonella try to comment and that was not successful, and I never thought to try logging into the /wp-admin area, given it would seem impossible, but maybe not? I will return to that, but perhaps comments and posting do work in an ADN?†

Regardless, it was fun to occasionally search for blog posts that I hadn’t read in years, and see them load without issue, even though both servers were down.

This comes at an amazing time at Reclaim, when we’re going into our second year of stable, solid .edu hosting for a number of schools, and adding this possibility of not only guaranteed uptime but increased vigilance and next-level cloud-based security is pretty thrilling. I really want to get out on the presentation trail again and talk this through, because more and more these leaps in infrastructure are something we have only just been able to keep up with, but this one almost feels like we are not only well-positioned to offer it, but maybe even early to the party.

Reclaim4life, small and limber is beautiful!

_________________________________________________
*With the caveat that this is an imagined Shangri-La if you push hard enough on the idea.

†Turns out they cannot make the database writable in the ADN, so it is read-only. They mentioned it is technically possible, but not legally—which makes sense when you think about it in terms of security and spoofing, and then there is the whole issue of syncing back changes. It might make sense, if only for practical purposes, to keep everything read-only during any extended downtime.


The Perfect Gameroom

If you were a New York Yankees baseball fan in 1998, one of the highlights of an all-around amazing year was the unlikely feat of everyman David Wells pitching a perfect game one sunny May afternoon. The excitement was electric; the whole world seemed to be re-framed by this one person’s specific achievement. The idea that for that one day, those two hours and forty minutes, his pitching was perfect. Everything worked, everything in its place—magic.

David Wells being carried on his teammates’ shoulders after his perfect game in 1998.

While I may have David Wells’ body frame, I have none of his remarkable athletic abilities, so this is a tale of perseverance and commitment (rather than skill) over two and a half years to make sure every single game in my personal arcade collection (the bavacade) works perfectly. Well, the day has come, and I have evidence in the form of video below, but the real point is that for a few hours (days even? never weeks 🙂) this collection can be perfect: every arcade cabinet in the bava manse works as it did back in the early 80s.

Today is a good day, even I can be a hero, just for one day!


bavacade repair log 11-12-2023

Well, I think the time has come when all 28 games here in the bavacade are working 100%. There are still some small things to do, but if everything goes as planned in the walk-through today I’ll have 28 games running and working seamlessly in just a few hours. But before I mark that momentous occasion with its own post, I wanted to track what I have done since my last repair log update.

Elevator Action is stencilled, re-assembled, and working beautifully. With the new control panel overlay and slick paint job, this game is aces right now. I just have to do some touch-up paint on the rear door and get a few scrap pieces of wood to build something the AC input unit can hang on. I love this game, and I love that it is now mint and a pillar of the collection highlighted in the Foyer 8.*

Elevator Action

Stargate continues to elude me; I had the resetting problem return again, which still threatens the perfect game room. Last night I swapped the switching power supply for an original unit I had, and the game seems to be playing now. We’ll see if that is the one that spoils today’s fun.

I had Cheyenne back up and running, but when I turned it on last weekend it would not power on. I pulled the power supply out and tested everything, but it all seemed to work; it was just the interlock switch that was not working, and I should have known then. Anyway, after pulling the game apart I realized the kill switch in the coin door was open, causing no power to flow. A rookie mistake that cost me hours.

Yie Ar Kung fu custom side art

The Yie Ar Kung-Fu artwork is amazing. I cut it into slivers and applied it last week; in a few places the stickers were not perfect, but those spots are well hidden and overall it is pretty amazing. All credit goes to Bryan Mathers for creating the art, and to Ricky for making it part of his high school project, printing the art out, and working with me to make it all fit.

Yie Ar Kung fu custom side art

At this point I just need to get the glass bezel and glass marquee printed, and then figure out the best way to approach the control panel print-out.

Yie Ar Kung fu custom side art

Despite all my love and attention, the freaking Yie Ar Kung-Fu board started having some character pixelation, which has happened before. I swapped boards, so it’s now working well. But I wonder if something else is up, because Mike (who fixes these boards) assured me it is not a board issue, and the power supply checks out, so this might need a re-worked edge connector. But right now it works!!!

Apart from that, the original power supply for Scramble was fixed, so that is now installed alongside the switching power supply—both working. With that working, I decided to get the Super Cobra power supply fixed as well, so the next thing I might try and do is go through all the power supplies and fix those, and then try converting more games to 220V.

I think that is pretty much it: after two and a half years I have a fully operational arcade, at least for today, and that’s enough!

_____________________________

*The eight games I have in the foyer that all shine like new: Pac-man, Donkey Kong Jr, Galaxian, Dig Dug, Super Cobra, Elevator Action, Joust, and Defender.


Future of What or, Abstractions of Self in an Online Oblivion

Unwound’s studio album Future of What, a modernist affair with cubist illustrated building and stark black and white background

Just finished a vinylcast on #ds106radio of Unwound’s 1995 album Future of What while I came up with an idea for a possible talk at OER24 in Cork. I’m going regardless, and tomorrow is the deadline so I might be remiss, but I would love to be able to present something semi-intelligible at my very favorite conference. I’m not sure this is it, but it is the beginning of something for sure…

This presentation will be a meditation on the changing nature of one’s online self when corporate social media has all but colonized the web. The moment wherein social media companies rise to the level of geopolitical powerhouses with seemingly limitless resources vis-à-vis billions and billions of individualized data points has arrived. The question many are struggling with in the aftermath is whether resistance is futile and assimilation into the pod-cast-people inevitable.

All of this is further complicated by the rise of the machines in the form of artificial intelligence hype at the precise moment when social media diaspora and online existential crises are at peak levels not only for higher education, but for any soul interested in being social online in 2024.

The digital storytelling course ds106 is taking all these questions head-on during the Spring 2024 semester, across three universities and an open contingent. The attempt is to usurp the pervading sense of powerlessness, underlined by a dreadful seriousness, with a healthy, sustained dose of playful collegiality wherein we learn to interrogate the tools being used to access and exploit the immense pools of data collected over the previous twenty years of the social media free-for-all. The future of what is what this session will be.


Macaulay Migrations, Random Litespeed Issues, and S3 Media Offloads – Oh My!

[Lessons learned: never do a victory lap on your blog before the migration is in the bag.]

It’s been well over a month in the making, and in many ways it started my descent into the Mouth of Madness that I’m currently ascending from—so let’s revisit, shall we?

In the Mouth of Madness GIF

Reclaim Hosting is in migration mode: we’re working to migrate larger WordPress Multisite instances and cPanel servers to newer infrastructure as we stare down the barrel of the upcoming CentOS 7 end of life. I took the occasion of getting ReclaimPress up and running to take on one of our more complex WordPress Multisite instances, Macaulay’s venerable Eportfolios, and move it into a containerized WP instance.

What’s more, part of the complexity of this site was that the MySQL database was running in a separate Digital Ocean droplet, given how resource intensive it was when Tim migrated it 6 or 7 years ago. So, this migration would be moving it off a cPanel server into its own container, and also consolidating the database into the same container. We needed to make sure there would be no performance issues, which luckily there were not, and also ensure that we gave ourselves enough time for the migration, given it took 17+ hours to dump the database on the outdated Ubuntu server—making this migration particularly onerous.

The other piece we wanted to solve was getting the over 700 GB of media off into an S3 bucket to save space on the server. As we run more and more in the cloud, offloading sites with a large amount of media (200+ GB) becomes more and more important, given media eats up space on our dedicated, bare metal cloud servers (the cloud is just another server, it turns out). So, Eportfolios was going to be our first experiment with doing this not only on a larger WordPress Multisite besides ds106, but also running the media through AWS’s S3 and its content delivery network (CDN), CloudFront, given we’ve only ever done this previously with Cloudflare.

So, it took me some time, but I did get the consolidation of the database and the migration from cPanel to ReclaimPress figured out. What’s more, once I did, the site was running lightning fast in dev, so that piece seemed all set. A major issue I ran into was getting a PHP 7.4 LLSMP* container running easily on ReclaimPress; the other was timing the MySQL dump from the stand-alone MySQL Droplet. But once they were under control, the site ran cleanly and was quite fast, even with the consolidated database.†
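For what it’s worth, the general shape of a dump like this is something along these lines (a sketch with placeholder host, user, and database names, not the real ones):

mysqldump --single-transaction --quick -h droplet-host -u eportfolios_user -p eportfolios_db | gzip > eportfolios_db.sql.gz

The --single-transaction flag keeps a consistent snapshot of InnoDB tables without locking everything, and --quick streams rows instead of buffering whole tables in memory, both of which matter at this size.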

So once all was set and the switch to the new environment happened, the site was loading quickly; the only issue was that all the media was broken. WTF?! Oh, probably just the upload media path…no! Wait, it has to be permissions…no!! OK, then really it HAS to be a .htaccess problem…NO!!! You get the idea: for about 12 hours Macaulay was not resolving images and media files while we frantically dug to figure out what the hell was happening. Luckily, Chris came in for the assist and did some next-level forensic analysis on one of the images that was not loading, scanning for any differences between the original that did load and the migrated image. Turns out a single character was being added to the beginning of every media file as it was served, corrupting anything trying to load.

Here is the hex code of an image that was loading cleanly on the old server, and we confirmed it’s the same as the image on the new server as well, so no issue with the image itself:

Hex code of the uncorrupted image file that was from the old server before migration

But when the image loaded, the characters 0A (a stray newline byte) were added to the beginning, essentially corrupting this file and every other one.

Hex code showing the extra 0A characters added to the beginning of all media, which was corrupting files loading on eportfolios

Major kudos to Chris Blankenship for figuring this out using an image hex decoder, which ultimately allowed us to put in a ticket to LiteSpeed, given the corruption was happening at the level of the web server, not the image. Turns out the file wp-includes/ms-files.php was adding this extra character, and once we resolved that the entire site was working as expected. Damn, that was rough. I was so convinced I had nailed that migration; what did I say about premature victory laps?
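If you ever need to hunt for something like this yourself, comparing the first few bytes of the file as served against the file on disk makes a stray leading byte jump right out; a quick sketch (the URL and path are placeholders):

curl -s https://eportfolios.example.edu/files/2023/01/image.jpg | xxd -l 16
xxd -l 16 wp-content/blogs.dir/1/files/2023/01/image.jpg

A healthy JPEG starts with ff d8 ff, so a leading 0a in the first output but not the second points the finger at whatever is serving the file rather than the file itself.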

With that resolved and the site loading as expected, other priorities quickly took over, but we had pushed 700+ GB of media to an S3 bucket on AWS. We were not loading media from the S3 bucket yet, but that was the plan to free up almost a terabyte of space on the server. Just this week we returned to this part of the migration in earnest, and it was in many ways a first for our team: experimenting with mapping a domain to AWS’s CloudFront so that all media would run off a Macaulay-branded subdomain, files.eportfolios.macaulay.cuny.edu. This required copying everything in the wp-content/blogs.dir directory to a bucket named files.eportfolios.macaulay.cuny.edu. After that, we needed to create a distribution on CloudFront that used files.eportfolios.macaulay.cuny.edu as an alternate domain. We also needed to get an SSL certificate for the subdomain through CloudFront using CNAME validation—which was set to expire after 72 hours if not completed, so a bit tricky to coordinate.
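The bulk copy into the bucket is the straightforward part; with the AWS CLI it is roughly the following (a sketch, assuming credentials are already configured and using a placeholder local path):

aws s3 sync /path/to/wp-content/blogs.dir s3://files.eportfolios.macaulay.cuny.edu/wp-content/blogs.dir --only-show-errors

sync only uploads new or changed files, so it can be re-run right before the cutover to pick up anything added since the first pass.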

The WP Offload Media interface, with media being delivered through CloudFront

The other big piece here is using WP Offload Media to make this all work, and that has been clutch. I wrote about that plugin while using it to offload media for both this blog and ds106.us, but this is the first time we did it for this many files and using CloudFront, so definitely a brave new world for us. There may be a bit of clean-up this morning, but as of now—and as a bigger proof-of-concept for offloading media on much larger, media-intensive sites—this is a huge win for us.
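One handy trick with WP Offload Media on a multisite this size is pinning its settings in wp-config.php rather than in the dashboard; a rough sketch of what that can look like (the exact keys vary by plugin version, and the delivery-domain bits especially should be checked against the plugin’s own docs):

define( 'AS3CF_SETTINGS', serialize( array(
    'provider'               => 'aws',
    'access-key-id'          => 'YOUR-ACCESS-KEY',   // placeholder
    'secret-access-key'      => 'YOUR-SECRET-KEY',   // placeholder
    'bucket'                 => 'files.eportfolios.macaulay.cuny.edu',
    'region'                 => 'us-east-1',         // assumption, not confirmed in the post
    'copy-to-s3'             => true,
    'serve-from-s3'          => true,
    // newer plugin versions use delivery-domain settings for the CloudFront alias:
    'enable-delivery-domain' => true,
    'delivery-domain'        => 'files.eportfolios.macaulay.cuny.edu',
) ) );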
_________________________________________

*Getting a PHP 7.4 environment to run on ReclaimPress is an unnecessarily roundabout process right now, given you are forced to import from an older container manifest, namely https://raw.githubusercontent.com/jelastic-jps/wordpress/v2.2.0/manifest.yml

†Dumping the database in the new environment only took 15 minutes (as opposed to 17 hours)—that alone was a huge win.
