Zoomfloppy Driver Issues

This Christmas holiday I decided to return to playing with some peripheral hardware I bought for the Commodore 128D machine I discovered here in Italy 8 years ago. I have already written extensively about both the Zoomfloppy drive (the subject of this post) and the uIEC, which provides an SD-card enabled drive and is pretty slick. In fact, I went so far as to do a 5-minute video comparing what each does and does not do. I was good, and thorough.

But 7 and a half years have passed and things have changed on the computer front. I erased my portion of the MacBook Pro that I originally set up for the Zoomfloppy, which was acting as the bridge to the C128D. OpenCBM is the software/driver that the Zoomfloppy depends on to access the drive, and thanks to this brilliant post by Christian Vogelgsang I was able to get OpenCBM running on my Mac back in 2016. But, not surprisingly, a lot has changed since then, and when I returned to the process recently I hit some snags.

So, this post is a “message in a bottle” approach to a solution, and I’ll be throwing it out into the vast ocean of the internet (being sure to comment on Christian’s blog) with hopes someone might know a way I can get the drivers working on my Mac again so I can use the Zoomfloppy.

My Zoomfloppy board is running Firmware v7, so I tried to install version 2 of the code Christian provides for installing with MacPorts, but I get the following error message:

Can't map the URL 'file://.' to a port description file
("can't set "revision": revision must be an integer").
Please verify that the directory and portfile syntax are correct.
To use the current port, you must be in a port's directory.
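That first error suggests MacPorts either isn’t being run from the port’s directory or is choking on the Portfile itself (that “revision must be an integer” bit). I don’t have a fix yet, but here is a rough sketch of the kind of thing I’ve been checking, assuming the Portfile is sitting in the directory you run the command from (the path below is just a placeholder):

# Hypothetical location of the local opencbm port -- adjust to wherever
# Christian's Portfile actually lives on your machine.
cd ~/ports/opencbm

# The "revision" line in the Portfile must be a bare integer, e.g. "revision 2".
grep -n "revision" Portfile

# Sanity-check the Portfile syntax, then install the local port from here.
port lint
sudo port install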

The main log error code reads as follows:

:debug:build Environment:
:debug:build CC_PRINT_OPTIONS='YES'
:debug:build CC_PRINT_OPTIONS_FILE='/opt/local/var/macports/build/_Users_miles_opencbm/opencbm/work/.CC_PRINT_OPTIONS'
:debug:build CPATH='/opt/local/include'
:debug:build DEVELOPER_DIR='/Library/Developer/CommandLineTools'
:debug:build LIBRARY_PATH='/opt/local/lib'
:debug:build MACOSX_DEPLOYMENT_TARGET='10.14'
:debug:build SDKROOT='/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk'
:info:build Executing: cd "/opt/local/var/macports/build/_Users_miles_opencbm/opencbm/work/code" && cd opencbm && make -f LINUX/Makefile PREFIX=/opt/local MANDIR=/opt/local/share/man/man$
:debug:build system: cd "/opt/local/var/macports/build/_Users_miles_opencbm/opencbm/work/code" && cd opencbm && make -f LINUX/Makefile PREFIX=/opt/local MANDIR=/opt/local/share/man/man1 $
:info:build make: LINUX/Makefile: No such file or directory
:info:build make: *** No rule to make target `LINUX/Makefile'. Stop.
:info:build Command failed: cd "/opt/local/var/macports/build/_Users_miles_opencbm/opencbm/work/code" && cd opencbm && make -f LINUX/Makefile PREFIX=/opt/local MANDIR=/opt/local/share/ma$
:info:build Exit code: 2
:error:build Failed to build opencbm: command execution failed
:debug:build Error code: CHILDSTATUS 25255 2
:debug:build Backtrace: command execution failed
:debug:build while executing
:debug:build "system {*}$notty {*}$callback {*}$nice $fullcmdstring"
:debug:build invoked from within
:debug:build "command_exec -callback portprogress::target_progress_callback build"
:debug:build (procedure "portbuild::build_main" line 8)
:debug:build invoked from within
:debug:build "$procedure $targetname"
:error:build See /opt/local/var/macports/logs/_Users_miles_opencbm/opencbm/main.log for details.
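Digging into that log, the build is failing because make can’t find LINUX/Makefile inside the extracted source. If anyone wants to poke at this, a first step might be to look at what MacPorts actually unpacked into the work directory referenced in the log. Something like this (just a sketch; the OpenCBM source layout may have changed since the portfile was written, which would explain the missing Makefile):

# List the source tree MacPorts extracted (path taken from the log above)
ls /opt/local/var/macports/build/_Users_miles_opencbm/opencbm/work/code/opencbm/

# If LINUX/Makefile isn't in there, the port's build phase is pointing at a
# Makefile that no longer exists in this version of the OpenCBM source.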

When I try and install the bleeding edge version of the code I get the following errors:

Miless-Air:opencbm miles$ sudo port install
---> Computing dependencies for opencbm
---> Fetching distfiles for opencbm
---> Verifying checksums for opencbm
---> Extracting opencbm
---> Configuring opencbm
---> Building opencbm
Error: Failed to build opencbm: command execution failed
Error: See /opt/local/var/macports/logs/_Users_miles_opencbm/opencbm/main.log for details.
Error: Follow https://guide.macports.org/#project.tickets if you believe there is a bug.
Error: Processing of port opencbm failed

The main log error reads almost identically to the one I got for version 2, so the same issue seems to be happening.

All that said, I can get version 1 of the code that Christian provided up and running, but when I try to detect the xum1541 device (or the Zoomfloppy) I get a “no xum1541 device found” error:

Miless-Air:opencbm miles$ cbmctrl detect
error: no xum1541 device found
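Before blaming the OpenCBM build, it’s probably worth confirming that macOS sees the board at all. The Zoomfloppy should show up as an xum1541 USB device; a quick check (just a sketch of the kind of diagnostic I mean) would be:

# List USB devices and look for the xum1541/ZoomFloppy entry
system_profiler SPUSBDataType | grep -i -A 6 xum

# If nothing comes back, the problem is at the USB/cable/firmware level
# rather than in the OpenCBM drivers themselves.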

I’m running this on an old MacBook Air running macOS Mojave that rarely sees the light of day, but I figured I’d try an older machine with an outdated OS and an Intel chip to reproduce the situation I had when the Zoomfloppy drivers actually worked.

Anyway, I’m not sure if anyone out there is running into similar issues, but if so and you have any ideas and/or just want to commiserate, feel free to chime in in the comments. That said, solutions are always most welcome 🙂


bava.studio: Space is the Place

Yesterday was a big moment for this blog: the bava.blog finally moved into a physical storefront of its own: bava.studio (click that link and subscribe now!) 🙂

bava storefront

I’m not sure what exactly bava.studio is, or will be, but I do know it’s a direct, physical outgrowth of everything I’ve done on this blog for almost two decades. I can try to wax poetic about virtual versus physical space, or even theorize shifting notions of community as traditional brick-and-mortar commercial enterprises give way to virtual businesses that are beginning to change our understanding of online communities (Facebook, Instagram, anyone?).*

With the rise of vacant storefronts in Trento after the pandemic was in full swing, I started thinking about finding a space, given how much fun and how generative that was in Fredericksburg with the resulting Reclaim Arcade. But I also knew this time I wanted to do what Lloyd Dobler would do (WWLDD?) and not “sell anything bought or processed, or buy anything sold or processed, or process anything sold, bought, or processed…”

In other words, how do you create a space focused not on buying and selling, but on building a sense of community around media elements of our past, present, and future? In Italy you can create something akin to a cultural organization, which is what I am currently looking into, so more details to follow on that point.

Empty bava studio (view from front of office)

But as of now we do have a space, and as of yesterday it went from entirely empty to half-full:

Getting settled

Which was really exciting, and as a few people have already commented, the floor is absolutely amazing: it really ties the room together. In fact, I brought over the eight cabinets that were in my foyer, and while that cleaned out my house, it filled up half the space pretty quickly!

Morning's Work to Move

Empty Foyer

The first game in was Donkey Kong Jr., which made it seem like there would be plenty of room:

Donkey Kong Jr. First In

But once we got the remaining 7 cabinets in, as well as a work table, about one-third to half the space was already accounted for.

Settling in, view from back of space

I still have more space to play with, and today I’m going to see how many games I can get in comfortably, leaving room for moving things around and workshopping the exhibit window (more on the proscenium shortly). But given this is not necessarily going to be an arcade, I won’t need to worry about making room for all of them anyway.

Back of bava studio

As you can see from the image above, the front part of the space is somewhat cordoned off by the games, and I have the work table with the Windows 98 machine up against one wall. I am thinking the Windows 98 setup might be one of the coming months’ exhibits as I play around with it during a hopefully slower end of December.

Makeshift Windows 98 Working Desk

The front part of the studio has a corner of five games: two on the right wall (Joust and Defender) and three cutting across the center of the space (Dig Dug, Super Cobra, and Elevator Action).

Front room cordoned off by games

Finally, I have Pac-man, Donkey Kong Jr., and Galaxian as the first proto-exhibit in front of the proscenium window. This is kind of a happy marriage between bavacade and bava.studio, and the convenient thing is that putting three mint, authentic late-70s/early-80s arcade cabinets in front of the store window is enough of a display all by itself.

Window Exhibition Line-up

Exhibition Window #001: Pac-man, Donkey Kong Jr., and Galaxian

In fact, yesterday bava.studio had its first visitor, Giulia, who saw the space and couldn’t resist entering—which is a good sign. Giulia asked me a very difficult question yesterday, namely, “What is this space?” To which I retorted, “Well, it’s kind of a website, but physical….think of it as a laboratory, a studio, a creative space, a community space, blah blah blah.” Fact is, I’m not entirely sure yet, and I want to see it come into its own somewhat organically.

First Guest: Giulia

I had Giulia pose looking at the games in the window to get a sense of the scale of the proscenium, and it will work pretty well. I just need to create a kind of fake wall that stands behind it that I can decorate and re-decorate depending on what’s “showing.”

Giulia in the Exhibit Window

The rough idea is to try and change out the window every month or so with a different peek inside some piece of bavatuesdays’s cultural media brain. So, in the end, I guess we’ll see what comes of it, but so far it feels pretty right—I guess I just needed some space!

_______________________________________________

* The conflation of social networks and commerce has been happening for a long while, but it seems to have reached a tipping point wherein online social spaces are far more commercial than communal, leaving the social ever more dictated by buying and selling—so, like cities, fewer community centers and far more chain storefronts.


Reclaim Hosting: the Site on the Edgeport of Forever Uptime

This post was cross-posted to Reclaim Hosting’s new company blog “Reclaim the Blog,” so you can read this post there as well.

Screenshot from Star Trek Episode "The City on the Edge of Forever"

Are we ready for internet time travel with 100% uptime?

To be clear, forever uptime is a dangerous claim, and anyone that promises you 24/7, 100% uptime is flirting with disaster in the hosting world. That said, my experimentation with Edgeport—a new enterprise-grade DNS, CDN, and Load Balancing service much in the vein of Cloudflare—has moved beyond this blog and has extended to Reclaim Hosting’s main website: https://www.reclaimhosting.com.

As already noted, I was blown away by the fact that even with both containers that drive this blog completely offline, the site loaded without issue for the better part of nine hours. It could’ve gone on much longer, but I had to re-enable the servers to actually write about the amazingness of it all 🙂

What was driving the uptime, regardless of the servers’ health, was the application delivery network, or ADN, which reproduces and caches not only the static site, but also its dynamic elements (search, page loading, etc.) across a vast network of servers that ensure the content remains online even when the underlying infrastructure goes offline. It’s pretty amazing to me, and it makes one flirt with that elusive and seductive portal dream of 100% uptime, even though one must always account for the imminent entropy of any system.

Screenshot from Star Trek Episode "The City on the Edge of Forever"

www.reclaimhosting.com boldly going where no site has gone before!

But that being said, Reclaim Hosting has now gone where only the bava has boldly gone before it 🙂 The implications for our high-availability ReclaimEDU WordPress multi-region hosting are truly next generation. While we will refrain from promising 100% uptime, with fail-over between two servers (because Edgeport does that, just like Cloudflare), a robust content delivery network, and CNAME flattening, we are able to post a lot of .9999999999s. With Edgeport we can harness all the benefits of the Cloudflare setup we engineered a year ago, but with a simpler interface and a more approachable, affordable service.
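As an aside for the DNS-curious, CNAME flattening is what lets the bare domain behave as if it were a CNAME to the edge network even though the DNS spec forbids a CNAME at the zone apex: the provider resolves the target itself and answers with plain A records. A quick way to see the effect (a sketch; what actually comes back depends on the current setup) is:

# The www host can be a normal CNAME chain out to the edge network...
dig +short www.reclaimhosting.com CNAME

# ...while the apex, configured as a "flattened" CNAME, answers with A records
dig +short reclaimhosting.com A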

But beyond the load-balancing and sophisticated application caching going on, the real power of Edgeport lies in the manifold security improvements it provides. Over a year ago we hired Noah Dorsett, who has proved to be an amazing addition on the Reclaim security front, and I asked him to try and boil down some of the features Edgeport offers for a meeting on high-availability hosting I had last week. In true Noah fashion, he did an awesome job and provided a succinct, understandable rundown of the security affordances Edgeport provides. Here is what he sent me:

DDoS Protection: The application-layer distributed denial of service protection is great for hosting web applications, as these live in this ‘application layer’. Layer 7 DDoS attacks target this layer specifically, as this is where HTTP GET and POST requests occur, and they can eat up large amounts of server resources. These attacks are very effective compared to their network-layer alternatives, as they consume server resources as well as network resources. With application-layer DDoS protection, your site is much more secure.

WAF: A WAF, or web application firewall, helps protect web applications by filtering and monitoring HTTP traffic between a web application and the Internet. It typically protects web applications from attacks such as cross-site request forgery, cross-site scripting (XSS), file inclusion, and SQL injection, among others. This type of firewall exists in the application layer, acting as a ‘shield’ between your web application (aka website) and the internet. Edgeport uses a dual WAF, which can be a confusing term. What this means is that there is an audit WAF that logs traffic and updates rules, but does not perform blocking. This audit WAF passes information to a production WAF, which uses this information to actively protect against and block malicious requests/attacks on the website. A dual WAF is much faster than a regular WAF, and provides better security to boot. WAF rules are generated by Edgeport’s dedicated security team as well, which means your rules will always be up to date and performing efficiently.

Bot Management: Edgeport uses an agentless, server-side, machine-learning-fueled bot management system to detect and stop bot traffic that could be slowing down your site or maliciously scraping content. The benefit of an agentless, server-side system like this is that you don’t have to run any code or do anything on the client end, and the detection is nearly invisible from a user perspective (and to bots as well). This allows the detection rules to catch more and impact performance less, keeping the website secure from all sorts of automated malicious tools and scrapers.

Image of bavatuesdays traffic over the previous month

You can literally see the moment, in terms of bot traffic, when I turned on the bot management tool in Edgeport

That last bit on bot management is a big difference I immediately noticed between Edgeport and Cloudflare. Whereas my daily traffic on Cloudflare clocked anywhere from 5,000 to 6,000 hits per day, when I moved to Edgeport those statistics dropped dramatically, closer to 1,000 to 2,000 hits per day. That’s not only much more believable as actual traffic for this humble blog, but it highlights just how many bots had been regularly scraping and otherwise accessing my site, which is not only a security risk, but also eats up unnecessary resources. So with Edgeport my site is not only safer, but also less resource intensive, and as a result more performant.

Now, to be clear, running Edgeport on my blog might be a bit of overkill given it does not need to be up 24/7 and it does not have the sensitive data and security needs of an institutional .edu site, for example. But if you are running a mission-critical, high-availability site for your institution, then Edgeport opens up a whole new world of cloud-native security on top of industrial-grade DNS, CDN, and load balancing services that make for a truly powerful combination. It has provided Reclaim exactly what we needed for scaling our multi-region setups, and I couldn’t be more thrilled there’s a new player in this field that’s pushing the envelope and opening up possibilities for smaller companies like Reclaim Hosting with infinite heart but finite resources.


AI106: Long Live the New Flesh

I’ve been hinting a bit about Oblivion University, and part of being coy was that it wasn’t entirely clear what form it would take. I knew after Michael Branson Smith‘s (MBS Productions) brilliant example of training AI Levine for his Reclaim Open presentation that I wanted to be part of a ds106 course focused on AI, but what that looked like was vague at best this past summer. In early Fall the idea was still rattling around in my head, so the inimitable Paul Bond and I convinced UMW to host at least one section of the AI-focused ds106 course, now known as AI106, and today I try to finally seal the deal with Georgetown on a second section with the ever-amazing Martha Burtis. The talented Ryan Seslow is running a section at York College, so by January or February we may have a few courses running parallel explorations of Artificial Intelligence and digital storytelling. But all that is just prelude to the emergence of Dr. Brian Oblivion in his new AI flesh, truly a university of one who will be championing truth in the aftermath with 24/7 office hours, a veritable 7-11 of next generation Education. You don’t believe me? Well behold the new flesh!

Screenshot of Oblivion University

Click on the image to be taken to Oblivion University and ask the good Dr. a question, and then wait a few seconds for the audio response

Intrigued? Well then click on the above image and ask Dr. Brian Oblivion a question, and give him a few seconds to process the audio response, which will pop up once it is done. The video piece is a work in progress over the coming semester because we always build the plane while it is auto-piloting 🙂

Did you go? If not, why not? If so, insane, right!? I’m not sure exactly what to think, but all the praise and glory goes to MBS Productions, who has taken the work he demoed at Reclaim Open in June and converted it into a voice-driven chatbot trained on numerous hours of the 2011 ds106 videos featuring the web-flesh version of Dr. Oblivion.

How did he do this? Well, I asked Dr. Oblivion just that question, and here are the two fairly similar responses I got:

First answer to “How did Michael Branson Smith create you?”

Second answer to “How did Michael Branson Smith create you?” with the ever important (and recurring) caveat…

The answers are intentionally basic given the program is assuming an audience at a 4th grade level. This helps simplify some of the responses and cut down a bit on length, given the potential waste and cost of long, drawn-out 12th grade responses 🙂 I will not/cannot steal Michael Branson Smith’s thunder given he has done ALL the work here and I’ve just a vague idea of how he did it, but the 30,000 foot view is he used the generative voice AI tool from elevenlabs.io to replicate the nuance and voice of Dr. Oblivion by training it on hours of the original ds106 Summer of Oblivion video lectures. Once refined, that voice is used to read back the answers received from ChatGPT 3.5 (or such), making for near-immediate and quite elaborate responses. He used Python code and the ChatGPT API to control snark level, grade level, and other settings that allowed him to fine-tune the answers, and those adjustments can be quite entertaining.
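To be clear, I haven’t seen MBS’s actual code, so the following is just a back-of-the-napkin sketch of the general idea: the persona, snark, and grade level live in the system prompt sent to the ChatGPT API, and the returned text then gets handed off to the ElevenLabs voice. Something roughly like this (the prompt wording here is entirely made up):

# Hypothetical sketch: ask the ChatGPT API for a Dr. Oblivion-style answer,
# with snark and grade level controlled via the system prompt.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "system", "content": "You are Dr. Oblivion of Oblivion University. Answer at a 4th grade reading level, keep it brief, and add mild snark."},
      {"role": "user", "content": "How did Michael Branson Smith create you?"}
    ]
  }'
# The resulting text would then be passed to the ElevenLabs voice trained on
# the Summer of Oblivion lectures to generate the audio reply.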

The final piece MBS is working on is using a video-syncing tool such as SyncLabs to add lip syncing to video of the good Doctor (see the above video for multi-lingual AI lip-syncing of character voices). This will require a bit of playing with wav2lip code (anyone reading out there have any interest?) as well as some GPU power to make the video response sync instantaneously with the audio. But that is the Holy Grail: he already has the ChatGPT text voiced through ElevenLabs, and once we make the jump to lip-synced video of Dr. Oblivion the new flesh will have arrived and ds106 as we know it will never be the same!


AI Song


The bava Media Center

Image of an entertainment center with a bunch of component parts listed in the post

The bava media center, featuring a Sony Trinitron 19” television to the far left with a Commodore 128 right below it. Beneath that is a used Panasonic DP-U89000 DVD/Blu-ray/UHD multi-region player for the newfangled higher-end media. The beige entertainment center (starting with the component pieces) has the gold Pioneer DVL-909 CD/LaserDisc/DVD multi-region player; below that is an Onkyo DC-C390 6-disc changer for your ultimate compact disc fix. Moving right, you have a Panasonic AG-6200 NTSC and PAL multi-region VCR (a must since arriving in Italy) and a Marantz PM6006 stereo receiver (which is running out of spots); on top of the entertainment center is an Audio-Technica A-LP-120-USB turntable, and to the right of that the 100-pound 27” Sony Trinitron—an absolute gem.

Image of two drawers filled with vinyl and laserdiscs

Beneath all the component pieces are two drawers filled with vinyl records and laserdiscs. This makes for some handy storage and keeps things clean, but the vinyl and laserdiscs always find a way to get out of their holders and end up strewn around the office.

 

As Tim knows from when we were building the Console Living Room and then Reclaim Video, I love to wire together and play with component systems like this. I’m not particularly good at it, but it gives me endless hours of frustrating delight. I have my current setup in decent shape, and almost every receiver input has been accounted for. Recorder is the only free input, and once I figure it out I am thinking that is a good input for the Commodore 128—we’ll see. The above image highlights the mapping of the various inputs; here they are as of today:

  • Phono ->  Audio technica turntable
  • CD -> Onkyo 6 disc changer
  • Network -> Pioneer DVD/LD/CD
  • Tuner -> Panasonic VCR
  • Recorder -> N/A
  • Coaxial -> Panasonic Bluray/UHD/DVD
  • Optical 1 -> Onkyo CD/Computer
  • Optical 2 -> Pioneer DVD/LD/CD

You’ll notice from the mapping above that I have the Onkyo and Pioneer going in as both RCA and optical inputs. I have found recently that some DVDs do not pick up the optical feed on the Pioneer, so sometimes I have to switch the audio feed back to RCA.

But that raises a bigger question: how does this all look on the backside, and which audio/video feed is going where? How is it coming into the computer for ds106radio casts? That’s something I want to work through, given I’ll be trying to break it down on a stream this coming Friday. So let me see if I can make heads or tails of it here.

The Audio Hijack setup is a Shure MV7 microphone with a USB-C input into the computer; the USB Audio Codec input is a USB-B input from the Audio-Technica turntable. Those are pretty straightforward, but the Cam Link 4K is an Elgato 1080p HDMI capture card that is coming in from a switcher for two RCA devices: the Onkyo 6-disc changer and the Pioneer CD/LD/DVD player, for CD as well as video (if and when I need it). This switcher is primarily for converting RCA video inputs to HDMI, but it works just as well with a CD player, so it was the best option I had at hand to convert the CD players into an audio feed for the radio. The switcher has a button that allows me to select between the two RCA inputs for each of those devices, which is pretty slick. I just need a better labeling system.

 

This RCA switcher could also work for the VCR input, and I can pull the Blu-ray and UHD player feed in from another Elgato Cam Link capture card I have lying around (although I might need to get one that does 4K) straight to HDMI, no RCA conversion needed. The video streaming setup might be worth another post, but this does give you a pretty good sense of the audio inputs for the various ds106radio/Reclaim Radio sources, which can handle both vinyl and compact discs without issue….YEAH!


It Came from the bava Archive, Volume 1

Back in September I installed the On This Day plugin to start trying to review and clean up the bava archive on a more regular basis. With almost 4,000 posts, this blog has accumulated a lot of jettisoned media and broken links, and while I understand things will inevitably fall apart on this fragile web, I have an unhealthy obsession with trying to keep up appearances. My original plan of checking the archives page every day proved a bit too ambitious, but I’ve been able to check in now and again since September, and I must say it is a wild ride.

Animated GIF from 400 Blows

More than anything, I continue to be blown away by the fact that I don’t remember so much of it. Let me start with a telling example of the experience on a few levels. Going through the On This Day post links a few days ago I found a post I actually do remember writing about Spacesick’s “I Can Read Movies” book cover series during the height of the ds106 design assignment days. The post was riddled with dead links given the Flickr account originally housing the images of these amazing re-imagined movie book covers was no longer active. The account may have been a victim of the shift to charging folks for larger accounts, but whatever the reason, the images were long gone.

Screenshot of ds106 Needs Mo Better Design with holes left from broken image links


So to try and fill the broken image gaps, I searched for any of the “I Can Read Movies” posters I referenced and embedded in that post. I found a few, though not all, and I also noticed some other “I Can Read Movies” titles while I was searching; there was one I thought was clever, the “I Can Read Alien” version:

After following the link I was brought to this post on bavatuesdays; it turns out I created this book cover as a ds106 assignment almost 12 years ago, and even after reading my own post I have only a vague memory of it. I said it already, but going through the blog archive is akin to experiencing some Memento-like creative amnesia—“I actually did that?” So that element of the experience is omnipresent, but it’s not all existential angst. Another element I found amazing was discovering, to my pleasant surprise, that the Dawn of the Dead cover I couldn’t find was readily available after searching the Wayback Machine for the original post on my own blog and hitting pay dirt thanks to a November 24, 2010 capture:

screenshot of the November 24, 2010 capture the Wayback Machine took of this blog post which captured all the now gone images


It’s so beautiful, a K2/Kubrick themed bavatuesdays no less!

More extensive screenshot of the post that was archived on the Wayback Machine with all the links

I was impressed by how well they archived the Flickr images; the only thing not working was a dynamic slideshow of Spacesick’s book covers I had included from an album on Flickr, but besides that it was perfect. What’s more, the capture does a great job reproducing the K2/Kubrick theme style of the bava at that time, as well as context-specific sidebar updates. I never really used the Wayback Machine for my own blog, but I have to say the Internet Archive is #4life!
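If you want to do this kind of archaeology on your own blog, the Wayback Machine’s CDX API is a handy way to list every capture of a given URL before you start clicking around. A rough sketch (the post URL below is a placeholder):

# List captures the Wayback Machine has for a given post (placeholder URL)
curl "https://web.archive.org/cdx/search/cdx?url=bavatuesdays.com/example-post/&output=json&limit=10"

# Each row includes a timestamp you can plug into a replay URL, e.g.:
# https://web.archive.org/web/20101124000000/http://bavatuesdays.com/example-post/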

Screenshot of bava post with just captions but no images from Google Maps live photos that came from a tumblr that collected them


After that, I resurrected a post with a ton of broken image links from a Tumblr account highlighting all the weird images captured by the Google Maps photo cars. In fact, we had a ds106 class wherein we captioned the photos, which I shared in this post, and they are pretty wild. I cannot believe we spent an entire class session on this, but at the same time, how awesome! 🙂 Well, it turns out the Wayback Machine captured almost every image in that post, and they were right there for me to browse, and ultimately copy over to my blog:

Screenshot of a blog post on bavatuesdays including Google Maps images captioned by students. This was when Google Maps started adding a live view to its interface


This was a pretty amazing discovery, and there was another post I had unearthed that was missing images, so I figured, “Why not?” and rolled the dice, but no luck. Upon further inspection I realized I was linking to a UMW Blogs-hosted course about Ethics in Literature by the great Mara Scanlon. But wait, UMW Blogs was recently archived by those fine folks, so why not give that Webrecorder instance a search? And wham, dam, thank you UMW, I found the Ethics and Lit blog in all its glory:

Screenshot of Mara Scanlon's Ethics and Literature course blog from Fall 2009


Now to find the images about Faulkner’s As I Lay Dying wherein Vardaman compares his dead mother to a fish. Search…..Bingo!

Image of a fish-like corpse in a coffin as an interpretation of a scene from Faulkner's As I Lay Dying


So I copied that image and the other into my own blog, which had holes in the tapestry, and…

Voila, the bava is whole again!

The fact that Reclaim Hosting could partner with UMW on preserving that platform makes me ridiculously happy. I love it conceptually, ideologically, and even more in practice—which this post attests to.

These recent raids of the bava archives have me excited about what I might find next, and even more so now given I might be able to actually future-proof quite a bit of the media on the bava thanks to organizations like the Internet Archive and UMW Blogs—I love those two flavors side-by-side in that statement 🙂

The things you discover and realize by reflecting on a few love letters sent to the web years and years ago.

Animated GIF from Criss Cross Burt Lancaster Animated Car

Me driving back through my GIF posts in the bava archives

Anyway, I’ve also realized my blogging is pretty thematic: I write about WordPress and UMW Blogs a lot early on; then ds106 takes over; then Reclaim Hosting—not that surprising given it’s a document of my work over 18 years. But one theme I particularly enjoyed on my recent travels into the archive was all the writing about GIFs. In fact, a couple of my very favorite bava posts are just GIFs. The “Give it Up for the GIF” post written on November 17, 2012 may be my favorite post of all time given it highlights all the GIFs I made during a two-year, quite intensive run of ds106:

Give it Up for the GIF

I also documented the moment Zach Davis shared the “If We Don’t, Remember Me” tumblr that gave GIFs an entirely new world of possibilities with a deeply meditative aesthetic that highlights the creative and instructional power of the format.

blade runner gif from ifdrm

My mother? Let me tell you about my mother….

And then there is the lamenting about not making GIFs as good as I used to, a post which still manages a decent GIF from Magnificent Obsession:

In 2011 and 2012 there were so many GIFs on the bava, and for me it was a direct link to the creative excitement that came out of ds106, quite possibly this blog at its creative peak—definitely when it comes to the sheer number of unique, exciting posts about almost anything I found culturally interesting.

Animated GIF from Blade Runner eye

There is much more I can say, and I have earmarked a whole bunch of posts that highlight some of the other themes as they get fleshed out with more research, but for now let volume 1 of the archive reflections highlight the magic of the GIF, something that since 2010/2011 has become a widely accepted way of communicating digitally, and one that I love, love, love.

What an amazing first issue 🙂


Today is the Day I Got My Mastodons Dockerizized

Low-hanging Taxi Driver Reference

To battle the immense feeling of frustration with yesterday’s West Coast outage at Reclaim, early this morning I decided to take Travis Bickle‘s advice and get my Mastodon instances dockerizized. I would blog the process of migrating from a source instance of Mastodon to a Docker instance, but once again Taylor already did as much on his blog a few weeks back: “Migrating Mastodon from a Source Install to a Dockerized Install.” Hell, we even did a stream of the process that takes you through it step-by-step, and I re-watched that this morning in order to get both social.ds106.us and reclaim.rocks containerized.

So this post is not as such about the details of containerizing; it is a love letter to Mastodon. It’s become like a neighborhood bar where I can follow and catch up with a few folks I actually want to see, rather than every jackass in high school I wanted to leave far, far behind. I think Mastodon epitomizes the idea of faith in a seed when it comes to community and open source tools, and it has become the most important one for me when it comes to connecting with folks. Mastodon reminds me why I got into this edtech racket: 1) I love to tinker with open source tools, and Mastodon is a remarkably powerful open tool for building community and as a result has helped to curb the impact of algorithms, commerce-driven data extraction, and the usual crap that ruins these spaces–what’s not to love?* 2) ratcheting back scale and connections right now is probably the healthiest mental approach to a news/hype cycle that continues to go fully off the rails. Mastodon is akin to what people have told me about marijuana :), it makes you feel happy and inclined to laugh without the sense of needing more, and any potential paranoia of being found out quickly dissipates when you realize no one is there to find you out—they’re all looking for their own home away from the maddening crowd.

The modest scale of the ds106 social server is a case in point: there are maybe 20 or 30 users that are “active,” and there is a general sense of folks dipping their toes in and trying to wrap their heads around it. As a result the conversations are slower and more direct, the drive-by attention culture is much diminished, and it provides a personalized, contextualized feed that can open up to a wide range of variegated topics. Twitter at its best, but with far fewer soapboxes and self-promoters who craved follower-count capital far more than actually connecting. I’m happy on Mastodon; it’s a kind of enclosure that offers protection from the onslaught of the market in those other social spaces, and I’m ready to settle in the murky territories and pioneer the socials for a while.

I say this knowing that just over a year ago I, too, was dipping my toe in the Mastodon pool, considering jumping, and I finally did: I installed three or four servers and managed to figure out the what and how. But it has taken me much longer to figure out the why, which it turns out is pretty simple. I do want to be social in these spaces, but I want it on my terms, outside the templated, market-driven spaces that have proved time and time again that they do not respect privacy, civility, tolerance, compassion, or just about any of the qualities that need to define a space I want to call my local. Mastodon has become the rock we have built the ds106 Social Local on, and one year later I have no regrets. Do you, you corporate social media hippies? Be more than a burned out threat!

_________________________________

*And I have to say, what casuistry will folks who know better use to explain away their merry posting on Facebook, LinkedIn, and Instagram to keep those likes and subscribes coming? Staying on those sites, no matter the reason, is cynical at best.


Offloading Peertube Media to S3

My last post documented offloading Azuracast media to Digital Ocean’s Spaces, and this is a follow-up on the offloading theme. Only this time it was for my Peertube instance hosted at bava.tv. I’ll be honest, this could have been a very painful process if Taylor hadn’t blogged how he did it with Backblaze last December, so in many ways his work made the process a piece of cake. Thank you, Taylor!

I made all media public in the Digital Ocean Spaces bucket, and also created a CORS rule to give the bava.tv domain full control over the bucket. After that, I just got the keys, endpoint, region, and bucket name, and I was good to go:

# Object storage settings from PeerTube's production.yaml; the endpoint,
# bucket name, and prefixes below are the ones I used for bava.tv on
# Digital Ocean's Spaces -- swap in your own keys and bucket.
object_storage:
  enabled: true
  endpoint: 'nyc3.digitaloceanspaces.com'
  upload_acl:
    public: 'public-read'
    private: 'private'
  videos:
    bucket_name: 'bavatv'
    prefix: 'videos/'
  streaming_playlists:
    bucket_name: 'bavatv'
    prefix: 'streaming-playlists/'
  credentials:
    access_key_id: 'youraccesskey'
    secret_access_key: 'yoursupersecretkeytothekingdom'

Once I swapped that in for the existing object_storage code at the bottom of the production.yaml file (found in /home/peertube/docker-volume/config/) and restarted the container, the site was seamlessly offloading videos to my bucket in Spaces. Now I just needed to offload all existing videos to that bucket, which this command did perfectly:

docker-compose exec -u peertube peertube npm run create-move-video-storage-job -- --to-object-storage --all-videos

After it was done running it had removed close to 120GB from the server, and my videos loaded from the Spaces bucket. The videos load a bit slower, but that is to be expected; the next piece will be finding a way to speed them up using a proxy of some kind. But for now I am offloading media like it’s my job. Well, actually, it kind of is.
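Going back to the CORS rule mentioned at the top: I set mine up through the Spaces control panel, but for the record it could just as easily be applied from the command line. Here is a rough sketch (not my exact rule) using s3cmd and a standard S3 CORS configuration file against the same bucket:

# Sketch of a CORS rule letting the bava.tv front end fetch objects from
# the bucket; adjust origins and methods to taste.
cat > bavatv-cors.xml <<'EOF'
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://bava.tv</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
EOF
s3cmd setcors bavatv-cors.xml s3://bavatv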


Offloading Azuracast Media, Recordings, and Backups to S3

I’ve been on a bit of an offloading kick these last months, and I’m digging the idea of controlling server environment bloat by having media hosted in an S3 storage setup. It simplifies cloning, updating, and/or migrating a server environment, and oftentimes results in faster load times. I’ve been doing it with bigger WordPress sites for months now, and it’s high time to turn my attention to some other applications, in this case the web radio software Azuracast.

More than 3 years ago ds106radio was migrated from Airtime to Azuracast. It was the right move, and Azuracast has proven to be the best in the business; I love it. Buster Neece, the primary developer of Azuracast, has made a big push over the last several years to containerize the software, making it much easier to host. Updating Azuracast is now a piece of cake, and new features just keep on coming. One feature that’s been around for a while that I’ve yet to take advantage of is offloading media, and given the instance has surpassed 100GB of live broadcast recordings, it’s officially overdue. I realized the value of offloading media after hosting Mastodon for a year: social.ds106.us has 168GB of offloaded media, and that’s for a very small server. Offloading at just $5 a month for up to 250GB on Digital Ocean’s Spaces is a total deal, and it frees up significant space on Reclaim Cloud.

A few of our S3 buckets in Digital Ocean’s Spaces

So, below are details of how I offloaded all media, backups, and recordings on the mighty ds106rad.io to Digital Ocean’s S3-compatible object storage, Spaces. You can find a quick-start guide for getting a bucket up and running on DO’s Spaces. Apart from that, I installed the handy, dandy s3cmd tool on the Azuracast server to help with moving files to S3. I also recommend installing ncdu on your environment to see where all the larger files live (this could have saved me some hassle if I had installed it sooner).
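For anyone following along, getting those two tools onto the server is quick. A minimal sketch, assuming the CentOS 7 environment described below, where both packages live in the EPEL repository:

# Install s3cmd and ncdu from EPEL (CentOS 7)
yum install -y epel-release
yum install -y s3cmd ncdu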

When working in Reclaim Cloud, I do all my testing on a clone of the production site that I spin up to make sure no changes I make break the live site. If all works as planned I do the same thing on production. One could also just as easily destroy the production server and point DNS to the clone where all the changes were made successfully, but I am superstitious and inefficient so I do everything twice.

When configuring s3cmd (s3cmd --configure) you will need your bucket access key, secret key, region, endpoint, and bucket name. Something like this:

Access Key: <your S3 access key>
Secret key: <your S3 secret key>
Region (my region for Spaces is Frankfurt): fra1
bucket: filesds106radio
endpoint: fra1.digitaloceanspaces.com

The above format worked for me on DO’s Spaces (you will need your own access keys), and thereafter the server will automatically connect to the S3 bucket, which is convenient. Beyond that, it is time to locate the files and start offloading them to S3. Here is where I should have used the ncdu command sooner, given it’s a handy command line tool that lets you know how much space is being used in each directory, allowing you to drill down by directory. So, if I’m trying to figure out where the 100GB of recordings is stored on the server, I can track it down with that command. But I did things the hard way: I essentially used a command I had learned from my colleague Taylor to log in to the container within the server environment and copy all the files out of the container into the broader server environment.

I know, it’s confusing, but that’s part of wrapping your head around working with containers. Each container can be logged into separately using the command docker exec -it containerid sh. This will allow you to move around in the Alpine Linux container and use a few very limited commands; for example, s3cmd does not run within the specific containers. So I got the brilliant idea of copying the media, backups, and recordings directories from the container to the Docker environment so I could then use s3cmd to put them in the S3 bucket. You following? Probably not, but think of it like this: the server environment is a container running a Docker Engine environment that can harness all the tools and commands of CentOS 7, whereas the specific container running Azuracast is packaged as an Alpine Linux container with far fewer server bells and whistles and only limited packages and command line affordances. Containers within containers, and the paring down of the container environments within the Docker environment makes them run more efficiently.
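In practice that looks something like the sketch below: grab the container ID with docker ps, then open a shell inside it (containerid is whatever docker ps reports for the Azuracast container):

# Find the Azuracast container's ID
docker ps

# Open a shell inside the (Alpine-based) Azuracast container
docker exec -it containerid sh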

Here is the command I used to copy files from the Azuracast container into the general server environment:

docker cp containerid:/var/azuracast/stations/ds106radio/recordings /var/azuracast/

I was stoked that this worked, and all the files from /var/azuracast/stations/ds106radio/recordings could be copied out of the container and into /var/azuracast so that I could then run the following command to put everything I just copied into the S3 bucket:

s3cmd put /var/azuracast/recordings/* s3://filesds106radio --acl-public --recursive

That worked, but when I had finished the process and was deleting the media I had offloaded from the server, I fired up ncdu to find any remaining large files. It’s then I realized all the media was in /var/lib/docker/volumes/azuracast_station_data/_data/ds106radio/recordings and I did not need to shell into the container and copy everything; I could much more easily have run s3cmd from that directory to offload media. Nothing like learning that after the fact, but working through this conceptually has helped me understand how containers operate, which is very useful despite all the extra work.
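In other words, the shortcut would have been something like this: point ncdu at the Docker volumes to find the station data, then push it to the bucket straight from there. A sketch, assuming the volume path above:

# Find where the big directories live inside the Docker volumes
ncdu /var/lib/docker/volumes/

# Push the recordings to Spaces directly from the volume path,
# no docker cp required
s3cmd put /var/lib/docker/volumes/azuracast_station_data/_data/ds106radio/recordings/ \
    s3://filesds106radio --acl-public --recursive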

S3 Bucket with directories for backups, media, podcasts, and recordings

Once all the data was moved to the Spaces bucket, I then needed to log in to Azuracast and update the storage locations to point to S3 for media, recordings, podcasts, and backups.

In System Maintenance -> Storage Locations you can add the new S3 storage location using much of the same data referenced above for setting up s3cmd.

Once you add the bucket data for each type of media (having some way to carry the S3 details over for each storage type would make it easier/quicker), you’ll be asked to update your server configuration with a quick reboot, and the media will start offloading immediately. Given we’ve already copied the pre-existing media to the bucket, the broadcasts and media archive will already be populated with old recordings and previously uploaded media.

With that, you can remove all the media files on the server and save yourself well over 100GB of space.
