Date: Mon Feb 17, 2003 12:59 am
Topic: Article 5: MySQL hot-backups
Welcome back to my next article. This article is a bit of a spur-of-the-moment one, 
as I had another article about 70% complete but ran into some difficulty with 
general PHP/MySQL operation. While working out a solution, I decided to turn the 
learning experience and the new script into an article that people can use and 
learn from.
My basic problem was backing up MySQL data. In the past I had simply been 
tar-compressing the directory that contained all of the MySQL table files with 
a simple shell script. Now, one of the databases that I back up on a daily basis is very 
large, in excess of 2GB, and very busy.
What would happen is that while tar was copying and compressing the files into a backup 
archive, a MySQL thread (most likely the webserver connecting) would perform an insert 
or an update on one or more of the tables, leaving the files tar had backed up 
partially corrupt.
Now, the really nice thing about MySQL's MyISAM tables (the default type) is that 
they are very reliable. Even with this partially corrupted backup, I could issue 
the "CHECK TABLE" and "REPAIR TABLE" commands and the database repaired 
all the problems, which were typically mangled index files. So no major harm done, 
but nevertheless a true HOT backup with no errors like this would be nice.
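For reference, the repair sequence looks like this (run from the mysql client; the table name `mytable` is a placeholder):

```sql
-- Verify the table's data and index files for corruption
CHECK TABLE mytable;
-- Rebuild the table's indexes if CHECK TABLE reported problems
REPAIR TABLE mytable;
```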
I searched MySQL's website and found a few helpful utilities that did part
of the job, but decided to write my own backup script in PHP since it would
be highly configurable, and give me nice reporting.
Now, there are plenty of ways to back up files, some more obtrusive than others. What 
I mean by that is that if a script has to copy and compress massive amounts of file data, that's 
a big I/O hit on the server, and it can bring the frontend (what web users see) to a 
screeching halt if you're not careful. With my script I've decided to go with a technique 
similar to before, copying the RAW table files from the operating system, except 
this time PHP stays connected to the database for the entire operation and issues 
"LOCK TABLES ... READ" commands to lock each table in read-only mode. That is, if a thread 
(most likely a web/Apache client again) wants to work with the table, it will only be 
able to read data from it. Any delete or update operations will sit there waiting for the 
backup thread to release its lock on whatever table it is currently backing up.
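The per-table SQL sequence the script issues boils down to this (`mytable` is a placeholder):

```sql
-- Block writers; readers can still query the table
LOCK TABLES mytable READ;
-- Flush buffered changes so the files on disk are consistent
FLUSH TABLES mytable;
-- ... copy the table's .frm/.MYD/.MYI files at the OS level here ...
-- Release the lock so queued writes can proceed
UNLOCK TABLES;
```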
This can produce slight lag on the frontend. That's also why you'd want to run backups 
late at night, 2-5AM, when the database is as quiet as possible. In my case, the 
main problem is that even at those off hours of the day, the database is quite busy 
updating data in the system. But it has to get backed up somehow, so if I have to 
introduce 30 seconds of lag here and there, so be it.
Another way to do backups is to export all the data from the database, 
including the database structure, into files that, in case of a restore, can 
be fed through a program and re-inserted into the database to bring it back 
to its current running state. This idea interests me, and I'll probably investigate it in 
the future. I've thought that a very interesting backup script would work this way, but 
with a small millisecond delay in the backup loop, so as not to completely flood 
the server with requests for data to export. The goal would be to keep the server running 
smoothly, with enough CPU/disk time to respond to frontend queries in a timely manner, 
while also not dragging the backup out over a large period of time.
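As a rough sketch of that idea (untested; the table names, paths, and dump command are placeholders, not part of my script):

```php
<?php
// Hypothetical throttled export loop: dump one table at a time,
// then pause briefly so frontend queries get CPU/disk time in between.
$tables = array("users", "orders", "logs"); // placeholder table names
foreach ($tables as $t) {
    // Export a single table to its own SQL file
    exec("/usr/local/bin/mysqldump mydb $t > /backups/$t.sql");
    // Sleep 250 milliseconds before the next table (usleep takes microseconds)
    usleep(250000);
}
?>
```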
Let's start by taking a look at my backup script. It's pretty simple in all reality, but 
it gets the job done. Again, if you have any input to make it better, or have a better 
way of backing up, let me know. There are a ton of ways to solve the 
same problem, and not every method works properly in every instance.
But it's worth a look.
Keep in mind, this PHP script could be run as the apache user, if the disk permissions were 
correct. But I typically custom-compile a standalone PHP binary, so I can run local 
PHP scripts just like Perl or shell scripts from cron (a unix program that runs 
programs at desired times).
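For example, a crontab entry to run the script nightly at 3AM might look like this (the binary and script paths are assumptions for your system):

```shell
# Run the backup script every night at 3:00 AM
0 3 * * * /usr/local/bin/php /usr/local/scripts/db-backup.php
```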
Now, I thought the max_execution_time setting for PHP could be an issue, 
as it is set in php.ini to something rather low, say 30 seconds, since most webpages 
take much less than that to generate. According to PHP's website, as of PHP 4.3.x, 
when running command-line scripts, the execution time gets automatically set to 0 
(unlimited), which is good here, as the operation may take many minutes.
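If you're on an older PHP where that isn't the case, you can remove the limit at the top of the script yourself:

```php
<?php
// Remove the script execution time limit (0 = unlimited)
set_time_limit(0);
?>
```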
** Before running this script, be sure to read it over and understand it. I've cut 
and pasted from a few sources to get it together here, so there could be typos, etc. 
Keep in mind this code is not meant to be just copied and used, but rather read, 
understood, and implemented in your own unique way, perhaps similar, perhaps different.
	
Code/Quote:
#!/usr/local/bin/php
<?php
// Nightly Database Backup Script
// Who to send email reports to
$EMAIL = "report@domain.com";
// Mysql server IP
$SVR = "127.0.0.1";
// Mysql server login
$USR = "root";
// Mysql server password
$PWD = "12345";
// Mysql db
$BACKUP = "mydb";
// Source of mysql files
$SRC = "/usr/local/var/";
// Temp location to store hot backup files
$TMP = "/tmp/tmpbackup";
// Location of end archived backup
$DST = "/backups";
// Timestamping/log function
function goMsg($msg) {
  global $report;
  $line = date("Y-m-d h:i:s") . " - " . $msg;
  $report = $report . $line;
}
// Fire up DB
if (!$db = mysql_connect($SVR, $USR, $PWD)) {
  mail($EMAIL, "Backup Failure!", "Cannot connect to database");
  exit;
}
// Switch Databases
mysql_select_db($BACKUP, $db);
// Header
goMsg("Nightly Database Backup Starting\n\n");
$SQL = "show tables from " . $BACKUP;
$RES = mysql_query($SQL);
// Setup
mkdir($TMP);
// Main Loop
while ($OBJ = mysql_fetch_row($RES)) {
  // Start backup
  goMsg("Starting backup of table: " . $OBJ[0] . "\n");
  // Lock
  mysql_query("lock tables " . $OBJ[0] . " read");
  // Flush
  mysql_query("flush tables " . $OBJ[0]);
  // Copy
  $e1 = "/bin/cp " . $SRC . $BACKUP . "/" . $OBJ[0] . ".* " . $TMP;
  exec($e1);
  // Unlock
  mysql_query("unlock tables");
  // Finished
  goMsg("Finished hotcopy of table: " . $OBJ[0] . "\n");
}
// Tar up the goods
goMsg("Started tar archive compression of all tables");
$cmd = "/bin/tar cvzf $DST/database-full-" . date("Y-m-d_h-i-s") . ".tgz $TMP";
exec($cmd);
goMsg("Finished tar archive compression of all tables");
// Cleanup
exec("/bin/rm $TMP/*");
rmdir($TMP);
goMsg("Cleaned up temp work files");
// Footer
goMsg("Backup complete!\n\n");
// Send mail
mail($EMAIL, "Master DB Backup", $report);
// Close DB
mysql_close($db);
exit;
?>
	
*** UPDATED 2004-03-10
A user emailed me to point out that the method above is 
designed for MyISAM tables, which a lot of people (especially beginners to 
MySQL) use. The case above will not work for InnoDB databases. Now, there are 
arguments both ways as to which table type to use. Anyone using InnoDB has to use 
the mysqldump command (which also works with MyISAM), but it generates rather large 
files. I tried to dump a 700MB database, and when the mysqldump-created file 
hit 2GB I had to look at other solutions...
A quote from Jason (who emailed me):
	
Code/Quote:
Anyways, with mysqldump your best bet is to use:
--opt
Same as --quick --add-drop-table --add-locks --extended-insert --lock-tables. 
Should give you the fastest possible dump for reading into a MySQL server.
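Putting that together, a mysqldump invocation might look like the following, gzipping on the fly to keep the dump file small (the user, database name, and backup path are placeholders):

```shell
# Dump the whole database with --opt and compress it on the way to disk
mysqldump --opt -u root -p mydb | gzip > /backups/mydb-$(date +%Y-%m-%d).sql.gz
```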
	
Thanks for everyone giving feedback..
Good luck..
	
 