ds106.club to the Cloud!

As the tale of the bava can attest, I have been knee-deep in migrating projects to the Reclaim Cloud. It’s been equal parts exhilarating and frustrating given the possibilities and the necessary learning curve, but that’s often the cost of personal and professional growth. That said, after migrating ds106.us things are starting to feel downhill, and yesterday I got the silly side-project ds106.club moved over from Digital Ocean, which means the last piece I have to move from my personal Digital Ocean account is some files I’m hosting on a Spaces instance.

I moved ds106.club over to Digital Ocean from AWS back in 2016 when I realized I was never going to be a master of AWS’s infrastructure (part of what makes the Reclaim Cloud so welcome and exciting). The ds106.club is a straight-up UNIX Apache server that came out of exploring the tilde.club experiment back in 2015. Our version of that experiment was hosted on a 1 GB Ubuntu 16.04 VPS through Digital Ocean at $5 a month. Not much has happened there, though there are some really fun sites like the Prisoner 106 GIF story, my own little experiment, and many more. It’s a trailing edge corner of the web that is forgettable in many ways, and that’s why I love it. So, I spent Monday and Tuesday exploring how to get it migrated over.

I had everything moved over cleanly on Monday following the original tilde.club how-to, but I missed a couple of things specific to the Reclaim Cloud (which is built on Jelastic’s container-based virtualization software), such as the fact that our Cloud has its own firewall for VPS instances. You can now see why we are still kicking the tires on this before an open, public beta in early July. There were also some edits I needed to make to the Apache configuration file that I missed, but this is a good moment to reflect on why we are able to even think about moving forward with Reclaim Cloud, and Tim has documented our history with elastic computing and containers going as far back as 2011.

Whiteboard from a brainstorming session with Kin Lane back in December of 2014

If it wasn’t for our current team, namely Lauren Brumfield, Meredith Fierro, Chris Blankenship, Gordon Hawley, and Katie Hartraft, none of this would even be thinkable, let alone possible. We have gotten to a moment wherein Tim and I have both been relieved of a majority of the day-to-day operations of Reclaim, which has provided us the head space to actually push forward with a next-generation infrastructure that will allow us to go far beyond even our wildest expectations 7 years ago when we started this whole thing. So, thank you all. You rule, I drool!

In addition to that, we have set up an internal forum for our Reclaim Cloud project so that we can start to push hard on our current private beta before opening it up next month, and I tried to get things going with a post about my struggles migrating ds106.club, which I am documenting below:

I am setting up an Ubuntu 16.04 VPS in the Reclaim Cloud, and after spinning it up I can’t seem to get the public IP to resolve. To be specific, I’m migrating the ds106.club instance of an Apache/UNIX tilde space server over from Digital Ocean, which is also running on Ubuntu 16.04.

I am following the tilde.club setup guide and have updated the hostname:

$ sudo hostnamectl set-hostname ds106.club

When I run the above command and reboot, the ds106.club hostname is replaced with node366-env-7531836.us.reclaim.cloud, so it is not sticking. Although, from what I understand, that might not be an issue for Jelastic, and editing the /etc/hosts file may be enough?

In that vein, I updated /etc/hosts to the following (notice Jelastic keeps a record for the original hostname in this file underneath the commented line): localhost ds106.club ds106.club
# Auto-generated hostname. Please do not remove this comment. node366-env-7531836.us.reclaim.cloud node366-env-7531836
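For reference, a minimal version of that /etc/hosts file would look something like the following sketch (203.0.113.10 is a documentation-range placeholder standing in for the environment’s actual public IP):

```
# /etc/hosts sketch -- 203.0.113.10 is a placeholder for the real public IP
127.0.0.1    localhost
203.0.113.10 ds106.club ds106
# Auto-generated hostname. Please do not remove this comment.
127.0.0.1    node366-env-7531836.us.reclaim.cloud node366-env-7531836
```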

After that I was still getting nothing at the IP or domain, so I went ahead and tried installing Apache2, and I got the following error:

insserv: warning: current start runlevel(s) (empty) of script `apache2' overrides LSB defaults (2 3 4 5).
invoke-rc.d: policy-rc.d denied execution of start.
Setting up ssl-cert (1.0.37) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...

I looked this up and did see a Stack Exchange post on the issue, but when I ran the recommended command to fix it:

RUN printf '#!/bin/sh\nexit 0' > /usr/sbin/policy-rc.d

I got the following:

RUN: command not found

At this point I backed away slowly from the command line and decided to high-tail it to this part of the Reclaim Community forums to see if I could get a lifeline 🙂

In less than an hour, Chris Blankenship responded with the following:

I just ran through this successfully on an ubuntu vps, I did have to deviate from the steps outlined a bit.
End Result: http://hostnametest.chrisblankenship.cloud/~testuser/

I spun up the Ubuntu VPS, and edited /etc/hosts to add these lines to specify the IP and hostname I’d be using: hostnametest.chrisblankenship.cloud hostnametest hostnametest.chrisblankenship.cloud hostnametest

This doesn’t change the hostname for the VPS itself, I’ve been having trouble with that as it will re-set each reboot, but adding these lines should be sufficient so your server is recognized with the proper hostname.

Then I created the user testuser using the adduser command, switched into the user by running su - testuser, and created a public_html dir with all the permissions and a test index file by running: mkdir ~/public_html && chmod 755 ~/public_html && echo "<h1>TESTING</h1>" >> ~/public_html/index.html && chmod 644 ~/public_html/index.html && exit

Once I was back in the root shell, then I installed apache by running: apt install apache2

Before edits can be made, it has to be run to generate the config files, so I ran systemctl start apache2

Then I had to enable userdir support using a2enmod userdir and restart apache using systemctl restart apache2

And then in the default enabled site’s file /etc/apache2/sites-enabled/000-default.conf, I added a line at the top to specify the servername: ServerName hostnametest.chrisblankenship.cloud

I gave it one more restart using systemctl restart apache2 and then I had to open up the HTTP/HTTPS ports in the Jelastic Environment Firewall
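Pulling Chris’s steps together, the whole sequence looks roughly like this (a sketch assuming a root shell on a fresh Ubuntu VPS; hostnametest and testuser are his example names, and the final firewall step happens in the Jelastic dashboard rather than on the command line):

```shell
# Create the user and a world-readable public_html with a test page.
adduser testuser
su - testuser -c 'mkdir ~/public_html && chmod 755 ~/public_html && echo "<h1>TESTING</h1>" > ~/public_html/index.html && chmod 644 ~/public_html/index.html'

# Install Apache, then start it once so the default config files get generated.
apt install apache2
systemctl start apache2

# Enable per-user ~/public_html directories and restart.
a2enmod userdir
systemctl restart apache2

# Declare the ServerName at the top of the default enabled site, restart again.
sed -i '1i ServerName hostnametest.chrisblankenship.cloud' /etc/apache2/sites-enabled/000-default.conf
systemctl restart apache2

# Finally, open the HTTP/HTTPS ports in the Jelastic Environment Firewall (dashboard).
```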

If you hit add under Inbound Rules, you can specify HTTP/HTTPS as the name, and it will autoconfigure with the ports.

I can now get the tilde space for the test user. I’m having some issues enabling the service to run at startup (systemctl enable apache2), but I’ll update once I figure that out.

This worked for me, and the only other thing I needed to figure out was migrating content and user permissions, which this post on nixCraft was textbook for. So, thanks to Chris, I have ds106.club up and running on Reclaim Cloud, and this really cemented for me that we are ready for this. We are ready to start helping students, faculty, and institutions think through the cloud for their offerings, and that is pretty exciting. Reclaim has been quite a journey thus far, and I think this marks a new, exciting chapter. And while it is important to temper excitement in the current political situation, I have always believed strongly that part of what Reclaim has been doing has always been about a sense of reclaiming control and educating as many folks as possible that it is indeed possible, and here is one way at it.
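The nixCraft-style user migration mentioned above boils down to exporting the account database entries for regular users and carrying them across with the home directories. A rough sketch, assuming stock Ubuntu where regular accounts start at UID 1000 (the staging directory name is my own; the guide itself may differ in details):

```shell
# Run on the OLD server: export entries for regular users (UID >= 1000),
# skipping 65534, which is the "nobody" account.
UGIDLIMIT=1000
MIG="$HOME/move"    # staging directory for the exported entries
mkdir -p "$MIG"
awk -v LIMIT="$UGIDLIMIT" -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > "$MIG/passwd.mig"
awk -v LIMIT="$UGIDLIMIT" -F: '($3>=LIMIT) && ($3!=65534)' /etc/group  > "$MIG/group.mig"

# Shadow entries for those same users (needs root to read /etc/shadow).
awk -v LIMIT="$UGIDLIMIT" -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/passwd |
  while read -r u; do grep "^$u:" /etc/shadow; done > "$MIG/shadow.mig"

# On the NEW server, after rsyncing the staging dir and /home across:
# cat move/passwd.mig >> /etc/passwd
# cat move/group.mig  >> /etc/group
# cat move/shadow.mig >> /etc/shadow
```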

Posted in digital storytelling, Reclaim Cloud | Tagged , , | Leave a comment

Migrating ds106 to the Reclaim Cloud

If the migration of bavatuesdays was a relatively simple move to Reclaim Cloud, doing the same for ds106 was anything but. Five days after starting the move I was finally successful, but not before a visceral sense of anguish consumed my entire week. Obsession is not healthy, and at least half the pain was my own damn fault. Had I taken the time to read Mika Epstein’s meticulous 2012 post about moving a pre-3.5 version of WordPress Multisite from blogs.dir to uploads/sites in its entirety, none of this would have ever happened.

I started the migration on Tuesday of last week, and I got everything over pretty cleanly on the first try. At first glance everything was pretty much working so I was thrilled. I was even confident enough to point DNS away from the low-tenant shared hosting server it had been residing on.*

The question might be asked, why move the ds106 sites to Reclaim Cloud at all?  First off, I thought it would be a good test for seeing how the new environment handles a WordPress Cluster that is running multisite with subdomains. What’s more, I was interested in finding out during our Reclaim Cloud beta exactly how many resources are consumed and how often the site needs to scale to meet resource demands. Not only would that stress-test our one-click WordPress Cluster, it would also give us some insight into costs and pricing. All that said, Tim did warn me that I was diving into the deep end of the cloud given the number of moving parts ds106 has, but when have I ever listened to reason?

Like I said, everything seemed smooth at first. All pages and images on ds106.us were loading as expected; I was just having issues getting local images to load on subdomain sites like http://assignments.ds106.us or http://tdc.ds106.us. I figured this would be an easy fix, and started playing with the NGINX configuration given that, from experience, I knew this was most likely a WordPress Multisite re-direct issue. WordPress Multisite was merged into WordPress core in version 3.0. When that happened, older WordPress Multi-User instances (like ds106) were left running legacy code, and one of the biggest differences is where images were uploaded and how they were masked in the URL. In WPMU, images for sub-sites were uploaded to wp-content/blogs.dir/siteID/files, and .htaccess rules re-wrote the URL to display as http://ds106.us/files/image1.jpg. After WordPress 3.0 was released, all new WordPress Multisite instances (no longer called Multi-User) uploaded to wp-content/uploads/sites/siteID, and they no longer mask the path, effectively exposing the entire URL, namely http://ds106.us/wp-content/uploads/sites/siteID/image1.jpg.

So, that’s a little history to explain why I assumed it was an issue with the .htaccess rules masking the subdomain URLs. In fact, in the end I was right about that part at least. But given ds106.us was moving from an Apache-based stack to one running NGINX, I made another assumption that the issue was with the NGINX redirects—and that’s where I was wrong and lost a ton of time. On the bright side, I learned more than a little about the nginx.conf file, so let me take a moment to document some of that below for ds106 infrastructure posterity. The .htaccess file is what Apache uses to control re-directs, and those look something like this for a WordPress Multisite instance before 3.4.2:

# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

# uploaded files
RewriteRule ^files/(.+) wp-includes/ms-files.php?file=$1 [L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule . index.php [L]
# END WordPress

In WordPress 3.5 the ms-files.php function was deprecated, and this was my entire problem, or so I believe. Here is a copy of the .htaccess file for WordPress Multisite after version 3.5:

RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

# add a trailing slash to /wp-admin
RewriteRule ^wp-admin$ wp-admin/ [R=301,L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule ^(wp-(content|admin|includes).*) $1 [L]
RewriteRule ^(.*\.php)$ $1 [L]
RewriteRule . index.php [L]

No reference to ms-files.php at all. But (here is where I got confused, because I do not have the same comfort level with nginx.conf as I do with .htaccess) in the nginx.conf file on the Reclaim Cloud server there is a separate subdom.conf file that deals with these re-directs like so:

    #WPMU Files
        location ~ ^/files/(.*)$ {
                try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1 ;
                access_log off; log_not_found off;      expires max;
        }

    #WPMU x-sendfile to avoid php readfile()
    location ^~ /blogs.dir {
        alias /var/www/example.com/htdocs/wp-content/blogs.dir;
        access_log off;     log_not_found off;      expires max;
    }

    #add some rules for static content expiry-headers here

(See more on nginx.conf files for WordPress here.)

Notice the reference to WPMU in the comments, not WPMS. But I checked the ds106.us instance on the apache server it was being migrated from and this line existed:

RewriteRule ^files/(.+) wp-includes/ms-files.php?file=$1 [L]

So ds106 was still trying to use ms-files.php even though it was deprecated long ago. While this is very much a legacy issue that comes with having a relatively complex site online for over 10 years, I’m still stumped as to why the domain masking and redirects for images on the subdomain sites worked cleanly on the Apache server but broke on the NGINX server (any insight there would be greatly appreciated). Regardless, they broke, and everything I tried to do to fix it (and I tried pretty much everything) was to no avail.

I hit this post on Stack Exchange that described exactly my problem fairly early on in my searches, but I avoided acting on it right away given I figured moving all uploads for subdomain sites out of blogs.dir into uploads/sites would be a last resort. But alas, 3 days and 4 separate migrations of ds106 later, I finally capitulated and realized that Mika Epstein’s brilliant guide was the only solution I could find to get this site moved and working. On the bright side, this change should help future-proof ds106.us for the next 10 years 🙂

I really don’t have much to add to Mika’s post, but I will make note of some of the specific settings and commands I used along the way as a reminder when in another 10 years I forget I even did this.

I’ll use Martha Burtis’s May 2011 ds106 course (SiteID 3) as an example subdomain migration to capture the commands.

The following command moves the files for the site with ID 3 (may11.ds106.us) to their new location at uploads/sites/3:

mv ~/wp-content/blogs.dir/3 ~/wp-content/uploads/sites/

This command takes all the year and month-based files in 3/files/* and moves them up one level, effectively getting rid of the files directory level:

mv ~/wp-content/uploads/sites/3/files/* ~/wp-content/uploads/sites/3

At this point we use the WP-CLI tool to do a find-and-replace on the database for all URLs referring to may11.ds106.us/files, replacing them with may11.ds106.us/wp-content/uploads/sites/3:

wp --network --allow-root search-replace 'may11.ds106.us/files' 'may11.ds106.us/wp-content/uploads/sites/3'

Then you do this 8 or 9 more times, once for each subdomain. This would obviously be very, very painful and would need to be scripted for a much bigger site with tens, hundreds, or thousands of sub-sites.†
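That scripted version might look something like the following (an untested sketch of my own, assuming WP-CLI can list the network’s sites and that every sub-site still follows the blogs.dir layout described above):

```shell
# Hypothetical batch version of the three per-site steps above.
# Lists every sub-site's ID and domain as CSV, skipping the header row.
wp site list --fields=blog_id,domain --format=csv --allow-root | tail -n +2 |
while IFS=, read -r id domain; do
  [ "$id" = "1" ] && continue    # skip the main site, which has no blogs.dir folder
  mv ~/wp-content/blogs.dir/"$id" ~/wp-content/uploads/sites/
  mv ~/wp-content/uploads/sites/"$id"/files/* ~/wp-content/uploads/sites/"$id"/
  rmdir ~/wp-content/uploads/sites/"$id"/files
  wp --network --allow-root search-replace "$domain/files" "$domain/wp-content/uploads/sites/$id"
done
```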

To move over all the files and the database I had to run two commands. The first was to sync files with the new server:

rsync -avz [email protected]:/home/ds106/public_html/ /data/ROOT/

Rsync is the best command ever and moves GBs and GBs of data in minutes.

The second command was importing the database, which is 1.5 GBs! I exported the database locally, then zipped it up and uploaded it to the database cluster container and then unzipped it and ran the database import tool, which takes a bit of time:

mysql -u user_name -p database_name < SQL_file_to_import
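Pulled together, that export/transfer/import dance comes down to a few commands (a sketch; the user, database, and host names here are placeholders, just like in the import command above):

```shell
# On the old server: dump and compress the 1.5 GB database.
mysqldump -u user_name -p database_name | gzip > database_name.sql.gz

# Copy the dump over to the new database container (placeholder host).
scp database_name.sql.gz root@new-db-host:/root/

# On the new container: unzip and run the import, which takes a bit of time.
gunzip database_name.sql.gz
mysql -u user_name -p database_name < database_name.sql
```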

After that, I had to turn off ms_files_rewriting, the culprit behind all my issues. That command was provided in Mika’s post linked to above:

INSERT INTO `my_database`.`wp_sitemeta` (`meta_id`, `site_id`, `meta_key`, `meta_value`) VALUES (NULL, '1', 'ms_files_rewriting', '0');

You also need to add the following line to wp-config.php:

define( 'UPLOADBLOGSDIR', 'wp-content/uploads/sites' );

The only other thing I did for safe-keeping was create a quick plugin function based on Mika’s stupid_ms_files_rewriting to force the re-writing for any stragglers to the new URL:

function stupid_ms_files_rewriting() {
	$url = '/wp-content/uploads/sites/' . get_current_blog_id();
	define( 'BLOGUPLOADDIR', $url );
}
add_action( 'init', 'stupid_ms_files_rewriting' );

I put that in mu-plugins, and the migrated ds106.us multisite install worked! There was some elation and relief this past Saturday when it all finally came together. I was struggle-bussing all week as a result of this failing migration, but I am happy to say the Reclaim Cloud environment was not the issue; rather, legacy WordPress file re-writes were the root cause of my problems.

I did have to also update some hardcoded image URLs in the assignment bank theme, but that was easy. The only thing left to do now is fix the ds106 MediaWiki instance and write that out to HTML so I can preserve some of the early syllabi and other assorted resources. It was a bit of a beast, but I am very happy to report that ds106 is now on the Reclaim Cloud and receiving all the resources it deserves on-demand 🙂

*VIP1 was the most recent in a series of temporary homes, given how resource-intensive the site can be as the syndication hub it has become.

†I did all these changes on the Apache live site before moving them over (take a database back-up if you are living on the edge like me), and then used the following tool to link all the

Posted in digital storytelling, Reclaim Cloud, WordPress, wpmu | Tagged , , , , , | Leave a comment

bava in the cloud with clusters

Last weekend I took a small step for the bava, but potentially a huge step for Reclaim Hosting. This modest blog was migrated (once again!) into a containerized stack in the cloud in ways we could only dream about 7 years ago. There is more to say about this, but I’m not sure now is the right time given there is terror in the streets of the US of A and the fascist-in-charge is declaring warfare on the people. What fresh hell is this?!  But, that said, I’ve been hiding from America for years now, and quality lockdown time in Italy can make all the difference. Nonetheless, I find myself oscillating wildly between unfettered excitement about the possibilities of Reclaim and fear and loathing of our geo-political moment. As all the cool technologists say, I can’t go on, I’ll go on….

For anyone following along with my migrations since January, there have been 4 total. I migrated from one of Reclaim Hosting’s shared hosting servers in early January because the bava was becoming an increasingly unpredictable neighbor. The HOA stepped in, it wasn’t pretty. So, it meant new digs for the bava, and I blogged my move from cPanel to a Digital Ocean droplet that I spun up. I installed a LEMP environment, set up email, firewall, etc. I started with a fresh CentOS 7.6 server and set it up as a means to get more comfortable with my inner sysadmin. It went pretty well, and costs me about $30 per month with weekly backups. But while doing a migration I discovered a container-based WordPress hosting service called Kinsta, which piqued my interest, so I tried that out. But it started to get pricey, so I jumped back to Digital Ocean in April (that’s the third move) thinking that was my last.*


But a couple of weeks later I was strongly considering a fourth move to test out a new platform we’re working on, Reclaim Cloud, which would provide our community a virtualized container environment to fill a long-standing gap in our offerings: hosting a wide array of applications that run in environments other than LAMP. I started with a quick migration of my test Ghost instance using the one-click installer for Ghost (yep, that’s right, a one-click installer for Ghost). After that it was a single export/import of content and copying over of some image files. As you can see from the screenshot above, while this Ghost was a one-click install, the server stack it runs on is made visible. The site has a load balancer, an NGINX application server, and a database, which we can then scale or migrate to different data centers around the world.

In fact, geo-location at Reclaim for cloud-based apps will soon be a drop-down option. You can see the UK flag at the top of this one as hope springs eternal London will always be trEU. This was dead simple, especially given I was previously hosting my Ghost instance on a cPanel account that was non-trivial to set up. So, feeling confident after just a few minutes on a Saturday, I spent last Sunday taking on the fourth (and hopefully final) migration of this blog to the Reclaim Cloud! I’ve become an old hand at this by now, so grabbing a database dump was dead simple, but I did run into an issue with using the rsync command to move files to the new server, but I’ll get to that shortly.

First, I had to set up a WordPress cluster that has an NGINX load balancer, 2 NGINX application servers, a Galera cluster of 3 MariaDB databases, and an NFS file system. Each of these lives in its own container, pretty cool, no? But don’t be fooled, I didn’t set this up manually—though one could with some dragging and dropping—the Reclaim Cloud has a one-click WordPress Cluster install that allows me to spin up a high-performance WordPress instance, all of which are different layers of a containerized stack:

And like having my own VPS at Digital Ocean, I have SSH and SFTP access to each and every container (or node) in the stack.

In fact, the interface also allows access and the ability to edit files right from the web interface—a kind of cloud-based version of the File Manager in cPanel.

I needed SSH access to rsync files from Digital Ocean, but that is where I ran into my only real hiccup. My Digital Ocean server was refusing the connection because it was defaulting to an SSH key, and given the key on the Reclaim Cloud stack was not what it was looking for, I started to get confused. SSH keys can make my head spin, so Tim explained it like this:

I never liked that ssh keys were both called keys. Better analogy would be “private key and public door”. You put your door on both servers but your laptop has the private key to both. But the key on your laptop is not on either server, they both only have the public door uploaded. On your laptop at ~/.ssh you have two files id_rsa and id_rsa.pub. The first is the key. Any computer including a server that needs to communicate over ssh without a password would need the key. And your old server was refusing password authentication and requiring a key.

That’s why Timmy rules. After that, I enabled prompting for an SSH password when syncing between the Cloud and Digital Ocean using this guide. After that hiccup, I was in business. The last piece was mapping the domain bavatuesdays.com:
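For the record, re-enabling password prompts for SSH on the source server comes down to flipping one sshd setting; a rough sketch (the paths in the rsync line are placeholders, not my actual ones):

```shell
# On the Digital Ocean droplet: allow password authentication for SSH.
# (Edit /etc/ssh/sshd_config by hand, or flip the flag with sed as below.)
sed -i 's/^#\?PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart sshd

# Then, from the Reclaim Cloud container, rsync falls back to asking for the password:
rsync -avz root@old-server-ip:/var/www/html/ /data/ROOT/
```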

And issuing an SSL certificate through Let’s Encrypt:

It’s worth noting here that I am using Cloudflare for DNS, and once I pointed bavatuesdays.com to the new IP address and cleared the local hosts file on my laptop the site resolved cleanly with https and was showing secure. Mission accomplished. I was a cloud professional, I can do anything. THE BAVA REIGNS! I RULE!  Ya know, the usual crap from me.

But that was all before I was terribly humbled by trying to migrate ds106.us the following day. That was a 5-day ordeal that I will blog about directly, but until then—let me enjoy the triumph of a new, clustered day of seamless expansion of resources for my blog whenever resources run high.

I woke up to this email, which is what the clustering is all about: I have bavatuesdays set to add another NGINX application server to the mix when resources on the existing two go over 50%. That’s the elasticity of the Cloud that got lost when anything not on your local machine was referred to as the cloud. A seamlessly scaling environment that meets resource demands but only costs you what you use, like a utility, was always the promise that most “cloud” VPS providers could not live up to. Once the resource spike was over I got an email telling me the additional NGINX node was spun down. I am digging this feature of the bava’s new home; I can sleep tight knowing the server Gremlins will be held at bay by the elastic bands of virtualized hardware.

*I worked out the costs of Digital Ocean vs Kinsta, and that was a big reason to leave Kinsta, given the bava was running quite well in their environment.

N.B:  While writing this Tim was working on his own post and he found some dead image links on the bava as a result of my various moves, and with the following command I fixed a few of them 🙂
wp search-replace 'https://bavatuesdays.com/wp-content/uploads' 'https://bavatuesdays.com/wp-content/uploads'
Made 8865 replacements. Please remember to flush your persistent object cache with `wp cache flush`.

Posted in bavatuesdays, reclaim, Reclaim Cloud, sysadmin | Tagged , , , | 2 Comments

Reclaiming Vimeo

I’m hoping to catch up on some blogging about stuff I have been doing with ds106.tv over the last month or so, but before that I wanted to quickly share an awesome tool that Chris Lott pointed me to a couple of years back called youtube-dl. Youtube-dl is a script you install on your computer (using Homebrew on the Mac), and once you do, it allows you to effectively download all the videos associated with a Youtube account using a command such as:

youtube-dl https://vimeo.com/USERNAME -o "/Users/YOURUSER/Movies/%(title)s.%(ext)s"

As you can see from the command line above, this tool is not limited to Youtube; in particular, it works just as well with Vimeo. And special thanks to Andrew Gormley for this guide, which makes everything from installing youtube-dl to backing up all your videos dead simple. And just like that I was backing up all 266 videos to my hard drive.
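Since the point here is keeping the backup current, youtube-dl’s --download-archive flag is worth knowing about: it logs the ID of each video it fetches to a file, so re-running the same command only downloads whatever is new on the account. A sketch (USERNAME and the paths are placeholders, as in the command above):

```shell
# Re-runnable backup: IDs of already-downloaded videos are recorded in archive.txt,
# so subsequent runs only fetch videos added since the last run.
youtube-dl --download-archive "/Users/YOURUSER/Movies/archive.txt" \
  https://vimeo.com/USERNAME \
  -o "/Users/YOURUSER/Movies/%(title)s.%(ext)s"
```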

Having lost 240 videos when my Youtube account was deleted in 2012 (something that still pains me 8 years later), I’ve done my best to avoid inviting the copyright ghouls to my content. Although, back in 2014 I did upload several Wire episodes to my Vimeo account for the Wire106 course, and I got 2 of 3 allowed copyright strikes by Vimeo, so I stopped uploading to that platform for quite a while. I did use it here and there again over the last 5 years, and after the return of ds106.tv my needs for a video platform seem more pressing. I’ve already written about presenting on The Girl Who Knew Too Much with Paul Bond for Antonio Vantaggiato’s Italian Culture course, but that was the instance that returned me to the conundrum I had avoided during my years of not teaching: how do I share clips of a film I think are crucial to creating an argument as part of a course? Doing this led to issues on Youtube and then again in 2014 with Vimeo, so I was gun-shy to say the least. That said, I believe sharing these clips as embedded links in my blog or as part of a recorded course lecture should be fair use, but Youtube (and most likely Vimeo) will never let it get to the point of such a defense, given they’ll often kowtow to the entity claiming copyright and either take it down or delete your account.†

In this regard, video remains one of the hardest pieces of one’s digital life to truly Reclaim, given it is still relatively expensive to stream technically. But as the cost of storage and server CPU continues to fall significantly, it’s not hard to imagine that sometime soon it will be feasible to run your own video streaming service. I personally look forward to that day, because it will truly be a multi-headed hydra for bullshit DMCA copyright claims. So, in preparation for a liberated future for video, I can now upload my fair use clips of films I will be discussing in the coming months with the understanding that my account could go away at any point. And it’s not just the videos that would be deleted; I’d also lose any metadata like views, comments, etc., but luckily I have next to no metadata on that system because that platform is not the context for my discussion, it’s purely a means to an embedded end on my blog.

Back in 2013-2014, in the wake of my Youtube account getting deleted, I uploaded a decent number of videos to UMW’s media server that Andy Rush was playing around with at the time. It worked well for a while, but between the lack of institutional commitment and institutional knowledge moving on, those videos were relegated to a backup drive. A long-term solution for reclaiming video online remains an issue (although the Internet Archive still reigns supreme in this regard), and the Reclaim Media idea I had back in 2017 when the Reclaim Hosting team was in NYC is still something I’d love to help build.* This is effectively a tool where you can Reclaim your media from sites like Youtube, Vimeo, Instagram, Flickr, Twitter, etc. and bring it into your own ecosystem, whether as HTML for archival purposes or into a comparable open source tool. Anyway, this is a small thing, but this project of having all my Vimeo files regularly backed up makes me feel freed up to actually blog the way I want and figure out the where and how of video as I continue down the road to full digital self-actualization 🙂

*All of which really stems from the Reclaim Your Domain conversation that led to the direction and name of Reclaim Hosting.

†In this regard I read an interesting post on TorrentFreak the other day wherein copyright folks were trying to get the source code for the open source bittorrent streaming platform Popcorn taken off Github given it is used by pirates to share and watch pirated films. And while the code was initially taken down by Github, after an appeal it was re-instated given the actual source code neither links to nor automatically downloads copyrighted material.

Posted in Archiving, Domain of One's Own, reclaim, YouTube | Tagged , , , , | 3 Comments

DVD: It’s a movie on a disc the size of a CD

At last year’s OER19 conference the great Laura Ritchie brought me a suitcase full of VHS tapes. Within it were the first five episodes of The Sopranos on tape, which I decided to start watching when lockdown began a couple of weeks ago. That led me to buy the rest of season 1, season 2, and more recently season 3 on VHS from the UK, which is on the way. Who does that in 2020? I could sense the surprise from the Ebay sellers, but you could do a lot worse than a VHS fetish.

As is often the case, the things that intrigue me beyond the actual series (which holds up brilliantly) are the advertisements before the episodes. Before season 2, episode 1, the above advertisement for DVDs came on and really dated the series. Season 1 aired in 1999 and season 2 in 2000, which corresponds with the popularization of that format. Here is a bit from the DVD player Wikipedia article that is quite telling about how quickly the format became ubiquitous:

Players slowly trickled into other regions around the world. Prices for the first players in 1997 started at $600 and could top out at prices over $1000. By the end of 2000, players were available for under $100 at discount retailers. In 2003 players became available for under $50. Six years after the initial launch, close to one thousand models of DVD players were available from over a hundred consumer electronics manufacturers.

The explosion of DVDs meant the precipitous decline of the reigning video format king: the VHS. In fact, what struck me most about the advertisement for the DVD was that it was on a VHS tape. I understand it makes total sense, given those watching The Sopranos on VHS are the perfect audience to sell a DVD player. That said, there was also a strange sense that the VHS tape was put in a situation to sacrifice itself, as a medium, for the conquest of the DVD. A passing of the baton, if you will. I understand I’m anthropomorphizing a piece of dumb technology, but nonetheless The Sopranos marks a moment in technological time—recent history that is not only written on the physicality of the format (i.e., VHS, DVD, or consumed “live” weekly via cable on HBO) but is also written into the actual narrative of the show. Take, for example, episode 2 of season 1, “46 Long,” which is focused on an ill-advised hijacking of a truck carrying DVD players. In 1999 DVD players were still a relatively big-ticket item, and most households were still weighing the costs. In fact, when Christopher comes into the Bada Bing! announcing, “Technology has finally come to the Bing!” they all go out to the parking lot, where each of the crew members is given a player:

The conversation is awesome: Tony notes there aren’t as many titles as “Laser” (suggesting Tony has a laserdisc player and is a bit of a format snob). Then Paulie jumps in and suggests the image is as good as laserdisc, and Brendan notes the sound is the real difference. It’s like a commercial! But, not to be seduced, Tony comes back with the best line of the scene in response to Brendan’s appeal to audio fidelity, grabbing the DVD that seals his position as a film/format snob: “‘Cause nothing beats popping some Orville Redenbachers and listening to Men in Black, ya know.” So good, and at this point Chris is pissed and tries to take the DVD player away from Tony given he doesn’t seem to want it, but Tony will not let go, because he knows this is the format of the future. It’s an awesome scene, and it not only testifies to the brilliant writing and acting that made this series legendary, but its reflection on home entertainment formats and the changing nature of that industry mirrors exactly what HBO was doing within the cable TV game. The Sopranos was the crowning achievement in making network television passé and cable a must, and that would soon be combined with cable becoming synonymous with an ISP—sealing and fueling the shift of cable to utility/infrastructure (the AOL/Time Warner merger being demonstrative of this fact), all of which would set the stage for the next wave of the web. Part of what I love about The Sopranos is that it documents this in-between moment for technology and the web in interesting ways.

I have a bunch of other clips to share, but archiving VHS tapes is painstaking work, so I’ll have to wait to share more clips about DVD players, the internet, chat rooms, and much more. They even talk about compact discs in episode 1 of season 1 when Tony’s mom is introduced. He brings her a CD player in hopes she will break out of her funk, noting, “You love music and all the old stuff is being re-issued on CD…” After that he tries to dance with her, and thus begins the long, painful relationship in the series between Tony and his infanticidal mother.

I could go on forever about this, but I really do love it all: from the seemingly throw-away moments on a VHS tape before the main attraction, an advertisement for the new format known as DVD, to the moments in the actual show where the creators reflect on the changing landscape of media and how it is always met with equal parts confusion, excitement, and suspicion.

Posted in Reclaim Video, ReclaimVideo, TV, VHS, YouTube | 2 Comments

vinylcast #35: Joy Division’s Unknown Pleasures

May 18th, 2020 marked the 40th anniversary of Joy Division frontman Ian Curtis’s untimely death, so today’s #vinylcast was a tribute to the long shadow his work and death cast. I spent the first part of the broadcast playing songs from YouTube, iTunes, and vinyl, trying to move seamlessly between the three. I was also able to incorporate some stories about first encountering the lore of Ian Curtis’s death as a high school sophomore in 1984 or 1985. I’m getting live DJing through the web and beyond down, but without any real song list prep (which is the usual) I eventually fumble things sooner or later. I did play a few songs off the Vinyl Lovers version of “Love Will Tear Us Apart,” which is a gorgeous see-through vinyl, and eventually settled in for the entirety of Unknown Pleasures. I talked some when moving from the outside in, and it’s a general sprawl of a show, as they usually are 🙂
Ian Curtis Tribute
Posted in ds106radio, on air | Leave a comment

vinylcast #34: Brian Jonestown Massacre

Was in the mood for some Brian Jonestown Massacre, and realized that their self-titled album was released not in the mid-90s, as I assumed, but in 2019! It sounds so good and fresh! Not that they don’t always, but I was genuinely surprised this was not one of their earliest. Also, we followed up this fake vinylcast (not real vinyl, but a full album nonetheless via YouTube) with a reprisal of the Jim & Anto Show, given it was broadcast on the eve of Italy opening back up after more than two months of lockdown.
Brian Jonestown Massacre #finylcast followed by the Jim & Anto Show
Posted in Brian Jonestown Massacre, ds106radio, finylcast, on air | Leave a comment

Utopian Tendencies, Episode 2: Slipping Effortlessly into No Place

Map of Thomas More's Utopia

It wasn’t a given by any means, but Lauren Heywood and I were able to pull off a second episode of Utopian Tendencies on ds106radio. It’s still early, but I am enjoying listening to it take shape. The process is pretty simple: Lauren sends me some ideas that usually include an essay or two. This week she sent along a link to the Wikipedia article about Sir Thomas More‘s book Utopia (1516), which led to some reminiscing on my part about an English Literature survey course I took in undergrad.* There’s so much there, but something that stuck out for me between More’s Utopia and the other piece we read in preparation for this discussion, Ingrid Burrington’s essay “Effortless Slippage,” was the idea of the enclosure of land in England (something Utopia can be read as a criticism of), which brilliantly paralleled Burrington’s discussion of the enclosure of the public/national spaces of the web into private, extra-national, commercial web platforms.

In fact, the role of mapping is crucial, and from there the conversation ranged everywhere, reaching as far back as Friedrich Engels’s The Condition of the Working Class in England in 1844.

As it relates to Burrington’s ideas of platform disinformation bubbles and media literacy:

The tendency of users to remain in filter bubbles and propagation of myopic communities via platforms and recommendation engines has been well-documented, as has the tendency of these narrow environments to enable mass harassment, misinformation, and propaganda campaigns. In response, companies offer resources on media literacy and piecemeal hiring of content moderators or fact-checkers. But placing the onus on individuals to season their news feeds with opposing viewpoints (rather than, say, designing a platform optimized for bringing multiple viewpoints to users) or assuming the issue is the ability to critically discern sources (rather than recognizing that many users are entirely media-literate but happen to hold racist or fascist beliefs) suggests platforms consider power something that the invisible hand of the market has clumsily pushed them into against their will, and not something abused in the absence of a meaningful praxis. Furthermore, it assumes that greater legibility of a user and their nuanced perspectives is a desirable outcome. Users have agency to reshape the territory of their digital bubble, but they remain mapped subjects.

Burrington challenges the idea that media literacy alone is enough to deal with the information platform dystopia we are currently experiencing, and it made for a nice point of discussion. We also talked about Neil Smith’s idea of the Revanchist City in relationship to the pioneering spirit of John Perry Barlow‘s 1996 techno-utopian manifesto “A Declaration of the Independence of Cyberspace,” which I have not yet read. I was also struck by the start of Burrington’s essay, which discusses the initial impulse of top-level domains (TLDs) as state- and nation-specific, such as .ny.us, and the colonial impulse towards the commercialization of TLDs, which reminded me of this awesome site by Citizen-Ex that discusses the troubling recent history behind the .io domain extension, amongst other stories.

Our discussion of mapping networks made me think about Nicole Starosielski’s The Undersea Network, which is a book I still want to read. That said, I did listen to an interview with the author and blogged about it for the CUNY Academic Commons. As you can tell from these “notes,” the conversation was fairly unwieldy (which I love!), and Lauren’s choices to spark the discussion were spot-on, leading to a wide-ranging conversation that inspired all sorts of ideas I am still mulling over. Mission accomplished!

*In fact, returning to college was a theme for me in this show: I kept thinking about the time when I read literature and interesting essays, was pursuing a Ph.D., and even taught literature 🙂 I spent many a year doing all those things, and the last 7 years or so have been head down helping to build Reclaim, so this episode was a welcome return to some of the things I really love: reading literature and thoughtful criticism, and gabbing about them. Who knew?

Posted in ds106radio, Utopian Tendencies | Leave a comment

vinylcast #33: The Flaming Lips’ The Soft Bulletin

Continuing on a theme of 1990s albums that I loved, I chose to spin The Flaming Lips’ critical and commercial success The Soft Bulletin. Bought this one at Kim’s Mondo Video as well, and it is of a piece with Bombay the Hard Way and the K&D Sessions in my mind.

So many good songs on this album, and now whenever I hear it I am reminded of the time in 2015 when Adam Croom took Tim and me to see The Flaming Lips’ art gallery/performance space called The Womb in downtown OKC. This was another finylcast (or fake vinylcast), and part of that story is in the recording below. Enjoy!

https://bavatuesdays.com/wp-content/audio/20200512_1402_Recording.mp3

Posted in ds106radio, finylcast, on air, The Flaming Lips | Leave a comment

vinylcast #32: Blonde Redhead’s Fake Can Be Just as Good

I got this vinyl delivered to Italy recently; I had good memories of it from when I owned it on CD, and I have been in a 90s state of mind on ds106radio these days. Blonde Redhead was a favorite band of mine during the mid-90s. I first heard their second album La Mia Vita Violenta (on Smells Like Records, not Matador Records like I said) and was immediately blown away by their melodic duets set against their noise-driven music. Fake Can Be Just as Good is their third album; they recruited Vern Rumsey of Unwound fame to record with them, and his bass style is hard to miss.*
And while side 1 was good, side 2 of this album is amazing. The run from “Bipolar” to “Pier Paolo” to “Oh James” to “Futurism vs. Passéism” is all but perfect. Every song works together, and they all build up to an instrumental that is probably their best.
Blonde Redhead’s Fake Can be Just as Good
The ds106radio stream, on the other hand, was a mess. It was interrupted by an electrician who turned off power to the wifi, and I was offline for a bit. After that, it was on and off given I was tethering off the phone. Timing is everything, but luckily the recording is solid, and given nobody’s listening, I think it is safe to say people got their money’s worth 🙂
Posted in Blonde Redhead, ds106radio, on air | Leave a comment