Truncating your SQL 2008 Database with a few lines of SQL...

Tue 05 February 2013

Here's a scenario you may be familiar with: you've got yourself a nice SharePoint setup that you've gotten running smoothly. Conscientious admin that you are, you've set up a good, solid maintenance plan that checks your database health and backs up your database and transaction log... But all of a sudden, your backup drive fills up. Since everything has been hunky-dory, you only notice during your next server check, and by then the transaction log has grown to monstrous proportions! You clear out your backup drive and free up space, but you realize to your horror that the transaction log isn't shrinking... Oh no!

If all of this is hitting home, you've probably already realized that the nifty little commands that used to work in SQL Server 2005 don't work on SQL Server 2008. So did I. Here's my new trick for truncating your SQL 2008 database's transaction log; I hope it helps. I would highly recommend you read the whole article before proceeding, as it contains information you need to know before you do what you're about to do.
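(In case you're wondering which commands I mean: the usual suspect is the old BACKUP LOG ... WITH TRUNCATE_ONLY one-liner. Something along these lines used to do the job on SQL Server 2005, but SQL Server 2008 removed the option entirely and will simply throw an error at you:)

-- worked on SQL Server 2005, rejected by SQL Server 2008
backup log WSS_Content with truncate_only
go
dbcc shrinkfile('WSS_Content_Log', 1)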

Open up SQL Server Management Studio, then open a query window to the database. For simplicity's sake, I'll assume your DB is called WSS_Content, but if you've got multiple content databases / site collections (as well you should), the same applies with a different database name / log file name.
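Before you touch anything, it's worth double-checking what your log file is actually called and how big it has really gotten. A quick sanity check along these lines (swap in your own database name) will tell you:

use WSS_Content
go
-- logical file names and current sizes (size is in 8 KB pages)
select name, type_desc, size from sys.database_files
go
-- or a quick overview of log usage across all databases
dbcc sqlperf(logspace)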

First, run this:

alter database WSS_Content set recovery simple with NO_WAIT
go
checkpoint
go
dbcc shrinkfile(WSS_Content, 1)
go
alter database WSS_Content set recovery full with NO_WAIT

And get yourself some coffee. Lots of coffee: the bigger your transaction log is, the longer this will take. Run it over a weekend, or as soon as you can at a time when there are as few people in your office as possible, and do NOT abort the process, or you'll regret it.

The above snippet switches your database from the full recovery model to the simple recovery model. The full recovery model makes thorough use of the transaction log; the simple model does not. Before SQL Server actually makes any changes to its database, it records them in the transaction log, so that if your server crashes it can finish (or undo) whatever it was doing at the time. This is what makes your SQL database so nice and robust: it is logging EVERYTHING it does, so that if something goes wrong it can retrace its steps.
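If you're ever unsure which model a given database is using, you can ask SQL Server directly; a query along these lines does the trick:

-- show the current recovery model for the content database
select name, recovery_model_desc
from sys.databases
where name = 'WSS_Content'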

I know what you're thinking, and no, it's not a good idea to keep your database in 'simple' mode, no matter how good your backups are. The rule of thumb is that if you have a production database storing data of any relevance to you at all, you should be using the full recovery model, period. You can get away with the simple recovery model if your database is a 'holding area', used only to perform computations before passing the results off to another database (you could even run it on a RAID-0 array so it's nice and fast), or if your database is written to only once a day, for instance when you retrieve data from another site or the web, cache it locally, and back it up immediately afterwards. Those are the only two examples I can think of where it makes sense to use a simple recovery model.
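Just to make that concrete, here's roughly what the second scenario could look like for a hypothetical holding-area database (I'm calling it Staging_Cache here, adjust to taste): switch it to the simple recovery model once, then take a full backup right after the daily load instead of relying on the transaction log.

alter database Staging_Cache set recovery simple with NO_WAIT
go
-- full backup straight after the daily import; overwrite yesterday's copy
backup database Staging_Cache to disk = N'{some backup location of your choice}' with init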

Now that you've executed the above code, the following code should be pretty fast:

backup log WSS_Content to disk = N'{some backup location of your choice}'
go
dbcc shrinkfile('WSS_Content_Log', 1)

This is what actually shrinks the file: it takes a backup of the transaction log the way SQL 2008 expects, then shrinks the file. Of course, if you have enough space on your backup drive, you may wish to just execute this second block on its own; it all depends on how big your transaction log has grown.
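Once it's done, it doesn't hurt to confirm that the shrink actually took and that the database is back in full recovery before you walk away. A quick check along these lines should do:

use WSS_Content
go
-- log file size in MB (size is stored in 8 KB pages)
select name, size * 8 / 1024 as size_mb from sys.database_files where type_desc = 'LOG'
go
select recovery_model_desc from sys.databases where name = 'WSS_Content'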
