This post provides a place for me to capture some of the maintenance work I’ve been doing on social.ds106.us and reclaim.rocks, the two Mastodon instances I’m currently administering. Keep in mind these two instances are running on a Debian VPS rather than as Docker images like our one-click Mastodon installs through Reclaim Cloud, so updates and management for those will be a bit different, and hopefully even easier.
Updating Mastodon to Latest Versions
I’ve been really pleased with how easy it’s been to update Mastodon to the latest version. In fact, it would have been seamless if I had actually read the directions. The Upgrading to the latest version doc will get you in the right directory and have you pull the most recent release. You then go to the notes for the release you’re upgrading to (for me it was version 4.1.2) and follow the update notes, which will tell you if there have been any core dependency changes you need to account for:
Non-Docker only:
- The recommended Ruby version has been bumped to 3.0.6. If you are using rbenv, you will be required to install it with RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.0.6. You may need to update rbenv itself with git -C /home/mastodon/.rbenv/plugins/ruby-build pull.
- Install dependencies: bundle install and yarn install

Both Docker and non-Docker:
- Restart all Mastodon processes
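For the record, the full upgrade run on these Debian VPS instances looks roughly like this. Consider it a sketch assuming the standard /home/mastodon/live layout and v4.1.2 as the target; always double-check the release notes for the steps your version actually needs:

# as the mastodon user, from the live checkout
su - mastodon
cd /home/mastodon/live
# fetch tags and check out the target release
git fetch --tags
git checkout v4.1.2
# install the bumped Ruby called out in the release notes
RUBY_CONFIGURE_OPTS=--with-jemalloc rbenv install 3.0.6
# install dependencies
bundle install
yarn install
# precompile assets, then restart the services as root
RAILS_ENV=production bundle exec rails assets:precompile
exit
systemctl restart mastodon-sidekiq mastodon-streaming mastodon-web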
Following these instructions ensured the upgrades worked cleanly. I did notice on both instances that the Administration panel showed a notice that “there are pending database migrations. please run them …”, which led me down a Google rabbit hole that ended at this post; it helped me push through the database migrations with the following command, run from the /home/mastodon/live directory:
RAILS_ENV=production bundle exec rails db:migrate
After that the warning went away and all was good.
Preventing Object Files from Being Listed Publicly
One of the things I realized was that the way I had set up the AWS S3 bucket for the ds106 server was not secure, given anyone could list all the files in the bucket. This is an issue because it could give access to someone’s takeout downloads should they want to migrate elsewhere. Luckily we did not have that scenario on the ds106 server, but I still wanted to fix it. I was not having any luck until I came across this Stack Exchange post that Chris Blankenship shared with a Reclaimer using S3. Lo and behold, it worked perfectly. Essentially it’s an S3 bucket policy that grants only s3:GetObject, so anyone can still fetch an individual file, but because s3:ListBucket is never granted no one can list everything at social.ds106.us/system/—which is exactly what I needed. Here it is, just replace Bucket-Name with your S3 bucket name:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::Bucket-Name/*"
            ]
        }
    ]
}
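If you’d rather not click through the S3 console, the same policy can be applied from the command line with the AWS CLI. A quick sketch, assuming you saved the block above as policy.json and swapped in your own bucket name:

aws s3api put-bucket-policy --bucket Bucket-Name --policy file://policy.json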
From AWS S3 to DigitalOcean Spaces
The other thing I’m currently working on is transferring all the media for the social.ds106.us Mastodon instance from AWS S3 to DigitalOcean’s Spaces, because Spaces is 1,000,000x easier than S3. The S3 bucket policy feature is, to be sure, powerful for those in the know, but the AWS interface remains a nightmare to navigate for the rest of us. The above permissions policy gives you a taste of that: compare that code block with doing the same thing in DigitalOcean’s Spaces, where it is a radio button selection in the setup screen that even provides some understandable guidance.
Anyway, I’m cloning the S3 bucket over to Spaces using the rclone tool, for which DigitalOcean has a good tutorial. Once everything is moved over I’ll need to update the environment variables for object storage and see if the move is really that simple. I’m not sure whether the database stores full URLs for media that would need to be rewritten, but I’m assuming not, given that would be a lot of overhead and somewhat counterintuitive.
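The move itself boils down to defining both providers as rclone remotes and syncing one to the other. Here’s a rough sketch along the lines of the DigitalOcean tutorial; the remote names, keys, region, endpoint, and bucket names are all placeholders you’d swap for your own:

# ~/.config/rclone/rclone.conf
[s3]
type = s3
provider = AWS
access_key_id = YOUR_AWS_KEY
secret_access_key = YOUR_AWS_SECRET
region = us-east-1

[spaces]
type = s3
provider = DigitalOcean
access_key_id = YOUR_SPACES_KEY
secret_access_key = YOUR_SPACES_SECRET
endpoint = nyc3.digitaloceanspaces.com

# copy everything from the S3 bucket into the Space
rclone sync s3:old-bucket spaces:new-bucket --progress

And assuming the database really doesn’t store full URLs, pointing Mastodon at Spaces should come down to swapping the object storage variables in .env.production for something like the following (again, placeholder values) and restarting the Mastodon services:

S3_ENABLED=true
S3_BUCKET=your-space-name
S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
S3_HOSTNAME=your-space-name.nyc3.digitaloceanspaces.com
AWS_ACCESS_KEY_ID=YOUR_SPACES_KEY
AWS_SECRET_ACCESS_KEY=YOUR_SPACES_SECRET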
Adding Certbot Renewal as a Cronjob
Another bit that Tim Owens tipped us off on, and that I wanted to get documented, is that when we installed Mastodon on Reclaim Cloud using the Debian VPS we set up the SSL certificate with certbot, but forgot to set up a cronjob to check daily whether it needs to renew. So I manually renewed the certificate with the following command:
certbot renew
After that, be sure to restart nginx:
systemctl reload nginx
That done, I updated the crontab on this server to check for certbot renewals daily, using crontab -e to edit the file, and then added the following line to the bottom of that file:
43 6 * * * certbot renew --renew-hook "systemctl reload nginx"
I found this Stack Exchange thread that suggested adding --renew-hook "systemctl reload nginx" to the daily certbot renewal check, so that nginx is reloaded whenever the certificate renews. We’ll see how that works out in a few months.
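If you want to confirm the renewal will actually go through before the certificate comes due, certbot has a dry-run mode that exercises the full renewal flow against the staging servers without touching your live certificate:

certbot renew --dry-run

Also worth knowing: newer certbot releases have renamed --renew-hook to --deploy-hook. Either way, the hook only fires when a certificate actually renews, so nginx isn’t reloaded on the daily runs where nothing changes.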