New Year, New Digs for the bava

I’ve been pretty good about trying to catch up and organize my photos on Flickr, more on that anon, but I got sucked into a New Year’s project that consumed me for the last two days. I decided to move my WordPress blog from shared hosting to a VPS running a LEMP stack (basically substituting Nginx for Apache). The reasons were several:

1) My site has been running into resource issues on our shared hosting servers for a little while now, and I need to be the change I want to see in owning my shit;

2) I’ve been itching to experiment with setting up my own performant WordPress server on Digital Ocean from scratch for a while now;

3) I wanted to learn more about Nginx, and the promise of increased speed was something the bava could use;

4) But more than anything, I’ve slowed down on the personal development and experimentation front the last couple of years, and one of my goals for 2020 was to get back to that because it gives me great pleasure.

Although this time around it was no pleasure but pain for the bava. One of the things I forget about these projects is that they consume me, and I’m just emerging from a two-day bender of some serious head banging. I wanted to do this with as few lifelines to the Reclaim Hosting braintrust as possible, and having accomplished that, I need to try and capture what I’ve learned here (with more links!) because I will surely need to follow my own bread crumbs in the future. So, here it goes….

It all started with the Digital Ocean Newsletter, in particular a link to this tutorial for creating a “DigitalOcean setup for WordPress website – secure, performant and budget-wise.” I saved the site for when I had some downtime, and then decided to dig in. But I almost immediately deviated from the guide given they decided to use Ubuntu, and since most of Reclaim’s servers run CentOS, I figured it would make more sense to go that route in terms of not only applicability, but also familiarity with file structure, commands, dependencies, etc. So, pretty much as soon as I got to the section in that tutorial on setting up Nginx (which was the next step after creating the Droplet and pointing DNS) I turned to the wilds of Google. One of the many things I love about Digital Ocean is their extensive guides/recipes for doing things with their infrastructure. As soon as I Googled setting up a LEMP environment, one of the top hits was this article from Digital Ocean, “How To Install Linux, Nginx, MySQL, PHP (LEMP) stack On CentOS 7.” I used it fairly extensively, although I should have paid attention to the expiration date, given it was published in 2014. A lot of updates to core code happen in five years, but that’s alright, because part of my learning curve this go-around was updating PHP from 5.5.x to 7.2.x and MariaDB from 5.5.x to 10.3.x.* The joys!

Two other general guides for setting up a LEMP environment were “How to Install PHP 7, NGINX, MySQL on CentOS/RHEL 7.6 & 6.10” (published in May 2019) from techadmin.net and “How to Install WordPress with LEMP Stack on Centos 7” by Linux4one. With both of those, though, I was mostly interested in comparing the Nginx configuration files for reference and nabbing various useful commands, which reminds me how cool it felt to install WordPress using the command line on my own server—the future is now!

Two utilities I needed to install almost immediately were nano, my preferred command line editor:

yum install nano

And wget for adding packages:

sudo yum install wget

Anyway, all this is prelude to where the issues arose, and how I navigated them. The first major, server-destroying issue came when I installed MySQL. Installing Nginx was surprisingly simple, and I ran into next to no issues (at least with the installation); it’s as easy as:

sudo yum install nginx

And then you turn it on:

sudo systemctl start nginx

It’s the configuring of Nginx that can be a royal pain, but we’ll return to that a bit later.

It’s worth noting that I initially installed the latest version of MySQL, namely 8-1, which was raising numerous compatibility issues, so I destroyed the droplet and started over, being sure to install 7-1, although I eventually abandoned that for MariaDB 5.5 (thanks to the 2014 article from Digital Ocean about creating a LEMP environment), which I then upgraded to MariaDB 10.3.21. A useful resource during this process was learning how to clean up a MySQL installation on CentOS. So, truth be told, there were two major resets in the process wherein destroying the droplet and starting over was easier than trying to clean up the various messes I had made. But luckily, with cloud infrastructure being what it is, that was relatively painless. This was also when, while sharing my woes with Tim, he asked me why I wasn’t using one of Digital Ocean’s one-click LEMP images, but I figured doing it from scratch might make things stick for me on a more fundamental level—hope springs eternal. I did notice, however, that there is still no one-click LEMP + WordPress app through Digital Ocean, so at least it is not all in vain just yet.

Turns out, in the end, using the 2014 tutorial by Digital Ocean was the most efficient way forward in terms of getting a working MariaDB and Nginx environment on PHP 5.5. And updating MariaDB and PHP was not as bad as expected, and the server is now running CentOS 7 with Nginx 1.16, MariaDB 10.3.21, and PHP 7.2, which puts me in good shape in terms of recent versions of both the operating system and the web server. So, let’s backtrack to configuring Nginx, which was where I spent a fair amount of the two days. In the end this is what my configuration file, found at /etc/nginx/conf.d/default.conf, looked like:

server {
    listen   80;
    server_name   www.;

    # note that these lines are originally from the "location /" block
    root   /usr/share/nginx/html;
    index index.php index.html index.htm;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live//fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live//privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot


}
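One thing the config above doesn’t do is any performance tuning. As a starting point, something like the following compression and browser-caching directives could eventually go inside the server block; this is a hypothetical sketch of common additions, not something running on the bava yet:

```nginx
# Hypothetical additions inside the server { } block -- not yet in my config.

# Compress text-based responses before sending them over the wire.
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;

# Let browsers cache static assets for 30 days instead of re-fetching them.
location ~* \.(css|js|png|jpe?g|gif|ico|svg|woff2?)$ {
    expires 30d;
    add_header Cache-Control "public";
}
```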

I invite comment from folks on what I might be missing and whether there are better ways to approach this configuration. One of my next tasks is to dig in on this and figure out the best way to configure things, but as of now this seems to be working. Previously I had included a path to php-fpm, but that got omitted during later iterations of manic trial and error. Speaking of which, I found the command below for checking the Nginx error log very useful:

tail /var/log/nginx/error.log

And after every change to the configuration file a reload is in order:

systemctl reload nginx.service

One of the errors that had me stumped, but which I found out was not an actual issue, was “centos 7: nginx Failed to read PID from file /run/nginx.pid: Invalid argument.” I confirmed the file was in the correct location, and read that this can happen but is not necessarily an issue if Nginx reloads cleanly. But, if you disagree, I would love to know more…
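For what it’s worth, the workaround I’ve seen suggested for that PID error (untested by me) is a systemd drop-in that gives Nginx a moment to write its PID file before systemd tries to read it:

```ini
; /etc/systemd/system/nginx.service.d/override.conf (hypothetical workaround)
[Service]
ExecStartPost=/bin/sleep 0.1
```

Followed by a `systemctl daemon-reload` so systemd picks up the override.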

The next piece after Nginx was MySQL, and once I got MariaDB updated to 10.3.21 this was pretty straightforward. I was relatively familiar with MySQL commands for creating users and databases, adding privileges, etc. So nothing crazy there, though always useful as a refresher:

Logging into MySQL on the server:

mysql -u username -p

After that you enter the MySQL password when prompted, and then the following commands can be run to create a database, create a user, grant privileges, etc.

CREATE DATABASE [databasename] DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
CREATE USER '[username]'@'localhost' IDENTIFIED BY '[password]';
GRANT ALL ON [databasename].* TO '[username]'@'localhost' IDENTIFIED BY '[password]';
FLUSH PRIVILEGES;
EXIT;
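Before that EXIT, a quick sanity check never hurts; these read-only statements confirm the database and the grants took (placeholders again):

```sql
-- Confirm the database exists and the user's privileges stuck.
SHOW DATABASES LIKE '[databasename]';
SHOW GRANTS FOR '[username]'@'localhost';
```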

The frustratingly stupid mistake I made that took up too much of my time was not logging out of the MySQL shell before trying to import the bavatuesdays SQL file. I’ve made this same mistake before, which makes it doubly annoying. Basically, you need to upload the SQL file (or database dump) that you want to import to the site directory (I’m not sure it has to be there, but that’s what I do, and then I delete it afterward). After that, you run the following command, being sure you are not logged into the MySQL shell:

mysql -u [username] -p [databasename] < [file_to_import.sql]

After that, MySQL was working and my data was imported to the database I created with privileges given to the appropriate user with a password—all details that need to be added to the wp-config.php file.
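For reference, those details map onto the following constants in wp-config.php (placeholder values, obviously, not my actual credentials):

```php
<?php
// Database settings in wp-config.php -- placeholders, not real values.
define( 'DB_NAME', 'databasename' );
define( 'DB_USER', 'username' );
define( 'DB_PASSWORD', 'password' );
define( 'DB_HOST', 'localhost' );
```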

But before I get to syncing files from my old server, which was the easiest bit, I should mention a bit about setting up PHP, which was not, at least until I found this excellent guide by NixCraft on “How to install PHP 7.2 on CentOS 7/RHEL 7,” as simple as I had hoped. The issue was that when I installed PHP 5.5.x and then needed to upgrade, I was unclear how the PHP FastCGI Process Manager (PHP-FPM) worked with Nginx. That guide took me through not only getting PHP 7.2 up and running, but also what I needed to do to make sure PHP-FPM was compatible with Nginx.
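The crux of that compatibility piece, as I understand it, is making sure PHP-FPM listens where Nginx expects and runs as a user Nginx can share files with. The relevant bits live in /etc/php-fpm.d/www.conf and look something like this (a sketch; your paths and values may differ depending on how PHP 7.2 was installed):

```ini
; /etc/php-fpm.d/www.conf -- the settings that have to line up with Nginx.
; The listen address must match fastcgi_pass in the Nginx config above.
listen = 127.0.0.1:9000

; Run the pool as the same user/group as the web server.
user = nginx
group = nginx
```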

With Nginx, PHP, and MariaDB all running smoothly, I was able to install WordPress, import the database, replace the wp-content folder, and update the wp-config.php file. The easiest way to get those files over was rsync, a utility for copying files between servers, and it rules. I consulted this post I wrote several years ago, and it worked a treat:

rsync -avz . [email protected]:/usr/share/nginx/html/

It is also ridiculously fast, moving over 10 MB per second, so 9 GB was moved in a matter of minutes.

sent 8,919,411,276 bytes received 813,984 bytes 10,689,305.28 bytes/sec
total size is 9,208,402,639 speedup is 1.03

So, the last bit on this project was issuing an SSL certificate. And luckily certbot made that simple and painless: 

sudo certbot --nginx -d  -d www.

They even provide the code to make sure the certificate is automatically renewed every few months:

echo "0 0,12 * * * root python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew" | sudo tee -a /etc/crontab > /dev/null
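Decoded, the line that command appends to /etc/crontab reads as follows; the random sleep of up to an hour just staggers renewal attempts so everyone’s servers don’t hammer Let’s Encrypt at the same moment:

```
# /etc/crontab fields: minute hour day-of-month month day-of-week user command
# At 00:00 and 12:00 every day, as root: sleep 0-3600 seconds, then renew.
0 0,12 * * * root python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew
```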

And with that, bavatuesdays is running on a Digital Ocean VPS with Nginx. The speed difference is already noticeable, and the next thing I have to work on is installing firewalls (thinking of going with an internal and an external firewall), installing phpMyAdmin, getting outgoing mail for WordPress working on the server, and optimizing Nginx, MariaDB, and PHP settings for best performance.

So, there you have it, a relatively useless guide in and of itself, but a wealth of links for me to return to over the coming days, weeks, months, and possibly even years as I continue to try and learn more about the various technologies that underpin the web as we know it. To infinity and beyond!


*The guides linked here are the best and easiest ones I found for updating PHP and MariaDB versions on CentOS 7.

This entry was posted in bavatuesdays, Domain of One's Own, sysadmin, WordPress.

12 Responses to New Year, New Digs for the bava

  1. That’s quite the journey – it’s definitely speedy now 🙂 It’s surprising how much work is needed to keep WP up and running, and to fine tune it. Static sites don’t require any of that. Just generate a bunch of files, rsync them up to the server, done. High performing, no-firewall-needing static files. They’ll even run in a ~/bava webspace, as god intended.

    • Reverend says:

Oh no, Hugo-boy is on my case 🙂 I’ve watched that move with interest, and a static page generator may be something I explore. I dabbled with Jekyll, but the GitHub hosting was a no-go for me. And besides the fact that I’ve been dyed-in-the-wool WordPress for so long, the allure of a static website is increasingly strong, so I’m listening. I’m not fully hearing you, but I’m listening 🙂

hey, man. you do you. I’m not here to sell anything. I just got tired of having to tweak performance etc., and worrying about how fragile my website was – it works great, as long as Apache, PHP, MySQL, WordPress, a long list of plugins, and a few javascript libraries all work perfectly.

        • Reverend says:

I can only do me, and all I have to say to you is that it is a slippery slope, Norman. Soon you’ll be doing digital detoxes with the rest of the online hippies. There is no joy in the critical, and less is never more, more is always more!

  2. Pingback: The Great Firewalls of bava | bavatuesdays

  3. Alan Levine says:

Look at the command-line jujitsu here (wax on, wax off)! Impressive.

I got lost in the acronyms, but I have to say, for a current project where I am having to do some Ubuntu server setup in the cloud, the Digital Ocean docs have been a gold mine too.

    Hee hee ‘Hugo Boy’. Gotta get you going on a static Twenty-Ten theme.

  4. Pingback: Managing Mail on the bavaserver | bavatuesdays

  5. Arek Panek says:

    Hey!

    Author of https://apollin.com/digitalocean-wordpress-setup/ here, thank you for mentioning me in your article! And I’m sorry you didn’t find there what you were looking for. However, it’s not that bad – it motivates me to update the article for other popular systems’ usage as well, so I’ll probably do that soon.

    All the best!
    Arek from apollin.com

  6. Pingback: Running the bava’s DNS through Cloudflare | bavatuesdays

  7. Pingback: The Ghost of bava | bavatuesdays

  8. Pingback: But what does it cost? | bavatuesdays

  9. Pingback: bava in the cloud with clusters | bavatuesdays
