Changing the Oil on a Multi-Database WPMu Install

To build on my last post about moving from Multi-DB to SharDB on WPMu, here is a rundown of some of the work done to further optimize the multiple databases for UMW Blogs.

First off, we have a plugin installed called WordPress MU Sitewide Tag Pages that basically republishes every public post from around UMW Blogs into one blog, the tags blog. This plugin has a ton of uses for us, not least of which are sitewide tags and categories, as well as feeds off those sitewide tags and categories for syndication and republishing into course sites. Another function of this blog could be a searchable archive of all the public work done. So, needless to say, this blog is important enough that we have let it run now for almost two years without pruning. And when I checked the wp_posts table of this blog in the database, there were 99,066 posts.

And that one table was weighing in at a hefty 425 MBs, which meant it was out of control, and I imagine some of our recent performance and downtime issues might be related, so I decided to cull this table. After backing it up, I tweeted looking for advice on how I might delete all posts from the tags blog before a certain day, say August 15, 2009, the week before the Fall semester started this year.

And in no time at all, Michigan's finest, Gowtham (sgowtham on Twitter), sent me the following bit of code:

DELETE FROM wp_posts WHERE post_date < "2009-08-15";

Along with the note, "please be sure to replace the right table name in place of wp_posts though." Which would look like this for the tags blog, with the ID 1291:

DELETE FROM wp_1291_posts WHERE post_date < "2009-08-15";
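One thing worth doing before a destructive DELETE like that is counting what you are about to remove, so there are no surprises. A quick sanity check, using the same table name as the example above:

```sql
-- Sanity check: how many rows will the DELETE actually remove?
SELECT COUNT(*) FROM wp_1291_posts WHERE post_date < '2009-08-15';
```

If the count matches what you expect, then run the DELETE.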

So I just copied that bit of SQL code into the phpMyAdmin editor that lets you run SQL queries for a specific database like so...

And within a minute 66,000 posts were removed.

Now, let's stop here for a second. How sick is that? Within minutes I got a solution to my issue, deleted 66,000 posts, and potentially saved UMW Blogs a couple more crashes this coming week, all from a friend on Twitter who just knew what to do. How much richer am I because of this relationship? How much richer is all of UMW? It's kinda wild to think about that. And more than that, it doesn't stop there....

When I removed those 66,000 posts the size of the table was still showing 425 MBs, but there was now a value in the overhead column of the table of 140 MBs. I had no idea how to get rid of this, so I asked again on Twitter, and D'Arcy Norman tweeted me back this:

He was right, optimizing the table brought the 425 MBs down to a slender 140 MBs, and the overhead value was flushed clean. So this got me thinking, hmmmm, what if I could run a script and optimize every table in every database? I mean this site has been running non-stop for almost three years, and has been a workhorse. How about giving UMW Blogs an "oil change," to use Gowtham's analogy, which works beautifully. So, when I tweeted about that optimization question, Gowtham once again sent me a message, and after I gave him the naming conventions of my 18 databases on UMW Blogs, wham, I had a PHP script to optimize every table in UMW Blogs.
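For the record, the optimization itself boils down to a single SQL statement. A sketch using the tags blog table from earlier, along with a way to check the table's size and leftover overhead (MySQL reports overhead as Data_free):

```sql
-- Reclaim the space left behind by the big DELETE
OPTIMIZE TABLE wp_1291_posts;

-- Check the table's size and remaining overhead (the Data_free column)
SHOW TABLE STATUS LIKE 'wp_1291_posts';
```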

People, are you getting the point here? Gowtham is an Engineering Physics grad, a hell of a photographer too, but he knows his shit. And he was willing to share that with me in all of about a few minutes the first time. But for the optimization script, he actually wrote it up and sent it to me via email. I don't know how long that took him, but it would have taken me hours and hours of research and learning to figure out how I could do it. And hours more to beg someone who knew at UMW to write it for me (if that person existed). Well, this process was a matter of minutes.

And, Gowtham not only sent me the SQL Optimization script, but was more than cool with me sharing it, so here is the code to the script as a txt file.

And here it is copied below sans opening and closing PHP calls.

# PHP script to optimize each table in a given set
# of MySQL databases and improve the overall performance.
# First written: Gowtham, Mon Feb 27 09:33:01 EST 2006
# Last modified: Gowtham, Sun, 24 Jan 2010 13:01:58 -0500

# Database credentials
# It is expected that you will be able to connect
# to all your databases [listed below] with this
# set of credentials
$host = 'localhost';
$dbuser = 'USERNAME';
$dbpasswd = 'PASSWORD';

# The act of connecting to the MySQL server
$connect = mysql_connect($host, $dbuser, $dbpasswd) or
  die('MySQL Connection Error: ' .
      mysql_errno() . ': ' . mysql_error());

# List of databases
# (replace these placeholders with your own database names)
$databases = array(
  'DATABASE_1',
  'DATABASE_2',
  'DATABASE_3'
);

# Loop through the array, 'databases'
# Each array element will be stored in the variable 'database'
foreach ($databases as $database) {

  # Connect to the database using the details specified
  # above - it expects that there is one username-password
  # combination that can connect to all your databases
  mysql_select_db($database, $connect) or
    die('Database Connection Error: ' .
        mysql_errno() . ': ' . mysql_error());

  # Select all tables from 'database'
  $sql_q = "SHOW TABLES FROM $database";
  $result = mysql_query($sql_q) or
    die('Invalid Query: ' .
        mysql_errno() . ': ' . mysql_error());

  # Loop through the list of tables from 'database'
  while ($table_details = mysql_fetch_row($result)) {
    $table_name = $table_details[0];

    # Print a message to the screen - helps in debugging
    echo "$database :: $table_name \n";

    # SQL query to optimize the table
    $sql_o = "OPTIMIZE TABLE `$database`.`$table_name`";
    $optimize = mysql_query($sql_o) or
      die('Invalid Query: ' .
          mysql_errno() . ': ' . mysql_error());
  }
}

# Close the connection
mysql_close($connect);
I put in the database sign-in information, uploaded it to UMW Blogs via FTP, and loaded the script in my browser, and a few minutes later all of the tables in all 18 of the databases were optimized.

Is this not awesome? UMW Blogs has gotten the oil change it has been in desperate need of, and I don't recommend waiting every 4,000 users and blogs; every 2,000 may be wiser. So, thanks to Gowtham, UMW Blogs is purring like a cat, and I really hope it stays that way.

And just to show you I have no shame, something else I saw while the databases were optimizing is that random blogs still have the Spam Karma tables in them; I would guess 500 or 600 blogs have tables filled with old spam. The tables are labeled wp_blogid#_sk2_logs and wp_blogid#_sk2_spams. For example: wp_21_sk2_logs and wp_21_sk2_spams. So riddle me this, Batman of Gowtham: how would you find and drop all tables with _sk2_ in them across 18 databases?
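For what it's worth, one way to at least find those tables would be a query against information_schema. A sketch, assuming MySQL 5.0+ and the _sk2_ naming above (note the backslashes, since a bare underscore is a wildcard in LIKE patterns):

```sql
-- List every Spam Karma table across all databases on the server
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_name LIKE '%\_sk2\_%';

-- Or generate the DROP statements themselves, to review and then run
SELECT CONCAT('DROP TABLE `', table_schema, '`.`', table_name, '`;')
FROM information_schema.tables
WHERE table_name LIKE '%\_sk2\_%';
```

Reviewing the generated DROP statements before running them beats dropping tables blind.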

I love the internet, and I can't thank you enough Gowtham for your selfless support, people like you make this space both fun and wonder-filled.

This entry was posted in UMW Blogs, wordpress multi-user, wpmu. Bookmark the permalink.

5 Responses to Changing the Oil on a Multi-Database WPMu Install

  1. Alan Levine says:

    Running MySQL queries like deletion or doing custom requests (e.g. getting post or comment counts for specific time ranges) is a WP powertool.

    The downside of having to delete, though, is that older stuff is then not as accessible via your tags, like your oldest tweets or hash-tagged content falling off the edge of the twitter table. It's a call you obviously have to make.

    Optimizing is stuff I do a lot, I hate the overhead. In phpMyAdmin for your batch of tables, there is a menu option at the bottom to “select all tables with overhead”, and then you can just optimize the ones selected.

    In your case, you might want to roll the optimization script in with something that prunes the site-wide post table (e.g. you can make it delete everything older than 6 months) into a cron script that can just do the oil changes dutifully in the background.
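    Something along these lines would do it as a crontab entry (paths and schedule here are hypothetical, just to show the shape):

    ```
    # Hypothetical crontab entry: run the optimization script
    # at 3 a.m. on the first of every month
    0 3 1 * * /usr/bin/php /path/to/optimize_tables.php >> /var/log/wpmu-oil-change.log 2>&1
    ```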

  2. Pingback: Deleting old MySQL tables across blogs and databases in WPMu at bavatuesdays

  3. Scott says:

    Some great techy posts Jim.

    Someone has developed an Optimize DB Plugin; I haven't tried it out yet
    but it seems like it would do the same thing.

  4. Reverend says:


    That's a great idea to run a cron job on the tags blog every 6 months to clear out the previous semesters' posts. What's nice about the archival issue is that we can still install those posts in another archival blog or do nothing at all, because they are simply copies of the originals. And while that may be useful for some period of time, it is in many ways like another back-up.

    I’ll have to figure out the cron job for this and an occasional optimization.

    Thanks for that, I have you and the UBC crew in mind when I finally get around to these essentials that you all are so good at. And I'm still looking forward to Novak's post on doing the WPMu setup over a couple of servers like you guys have it. It seems like it would make things run that much smoother by distributing the resource load that much more.

    And while these maintenance posts should speed things up a bit, I can't get too excited just yet until I'm sure that it has solved our issues altogether. I'll have to sit and wait and see.

    I wonder if that plugin works across multiple DBs? Seems like it is designed for a single WP install, and I don't know enough about how it works to know if it would find the array of databases and just work.

  5. scott says:

    This is another good way, from the command line. You should make sure there are no incoming queries (so stop WP); it needs to run on a running DB.

    mysqlcheck -u root -p --auto-repair --check --optimize --all-databases

    A good script to use to see what needs to be tuned:
