Working primarily on support for Microsoft Dynamics ERP systems, I often see tickets logged where the SQL transaction log has grown uncontrollably and is filling (or almost filling) the drive where it's located. This post aims to explain the most common reason why this happens and some steps you can take to truncate and shrink the SQL transaction log and reclaim the space. Please note I only mention the FULL and SIMPLE recovery models as these are the recovery models I have the most experience with.
What is the SQL transaction log?
The transaction log is a fundamental part of the SQL database. All changes that are made to the database are described and written into the SQL transaction log before the changes are written to the main database file.
My basic, novice-level understanding of how this works, based on a typical UPDATE transaction, is as follows:
- An UPDATE statement is issued to update some data in the database.
- If the page containing the data being updated isn't already in memory, it is read into memory.
- The data is changed in memory and log records describing the changes are written to the transaction log.
- At some point later, when a CHECKPOINT runs, the updated page in memory is written to the SQL database file. (This can happen before the UPDATE statement commits.)
- The UPDATE statement COMMITS.
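The steps above can be sketched as a simple explicit transaction. This is a hypothetical illustration only; the table and column names (dbo.Customers, CreditLimit, CustomerId) are made up, and the manual CHECKPOINT just makes the flush step visible:

```sql
-- Log records describing this change are written to the transaction log
-- before the modified data page is written to the data file.
BEGIN TRANSACTION;

UPDATE dbo.Customers          -- hypothetical example table
SET CreditLimit = 5000
WHERE CustomerId = 42;

COMMIT TRANSACTION;           -- the COMMIT record itself is hardened to the log

-- A CHECKPOINT (manual here, normally automatic) flushes dirty pages from
-- memory to the data file; it can run before or after the COMMIT.
CHECKPOINT;
```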
It's also important to highlight that this process of logging to the transaction log happens whether the database is in the FULL or SIMPLE recovery model.
SQL Recovery Models and Log Truncation
If the database is in the SIMPLE recovery model the log records are automatically truncated when they are no longer needed. This means the space within the transaction log file can be reused and, in theory, the transaction log file won't grow. (There are caveats: for example, deleting a large number of rows in one SQL transaction will still cause the log file to grow. Generally, though, the log file will be much more manageable in the SIMPLE recovery model.)
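As a sketch of that caveat, deleting in smaller batches gives the log a chance to truncate between transactions in the SIMPLE model. The table name and filter here are hypothetical:

```sql
-- Delete in batches so each transaction is small, allowing the log
-- space to be reused between batches (SIMPLE recovery model).
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.StagingTable   -- hypothetical example table
    WHERE LoadDate < '2020-01-01';

    SET @rows = @@ROWCOUNT;                    -- 0 when nothing left to delete
END;
```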
When the database is in the FULL recovery model the transaction log records aren’t automatically truncated when they are no longer needed. Therefore new database changes will be written to the end of the log and the log file will continue to expand and grow.
The only way to truncate the log, so the space can be reused, is to perform a SQL transaction log backup. When you do this, log records are written to the SQL transaction log backup file and portions (VLFs) of the transaction log can be used again*. If a SQL transaction log backup isn't performed, the transaction log will continue growing and growing until it fills the disk (assuming auto-growth is switched on).
Therefore the most common reason I have seen for a transaction log file growing extremely large is that the database is in the FULL recovery model and LOG backups haven't been taken for a long time.
* It's important to note that the space within the transaction log is just marked as available again. The transaction log file itself isn't physically shrunk by the transaction log backup.
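If you want to see this at the VLF level, on SQL Server 2016 SP2 and later you can query sys.dm_db_log_info. This is a sketch; the exact columns available vary by version:

```sql
-- List the VLFs in the current database's log and whether each is still active.
-- Inactive VLFs represent the space a log backup has marked as reusable.
SELECT vlf_sequence_number,
       vlf_size_mb,
       vlf_active,          -- 1 = still needed, 0 = reusable
       vlf_status
FROM sys.dm_db_log_info(DB_ID());
```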
How to reclaim the space. Truncate, Shrink or both?
A transaction log backup doesn’t physically shrink the transaction log file on disk. The transaction log backup truncates the log records that are no longer needed making the space within the log file available for use again.
Therefore, if you have a very large transaction log file, the first thing I do is check the recovery model and the available space in the log using the script below (run this against the target database):
SELECT [name] AS [Database Name],
       [recovery_model_desc] AS [Recovery Model],
       log_reuse_wait_desc AS [Log Reuse Description],
       total_log_size_in_bytes*1.0/1024/1024 AS [Total Log Size Mb],
       used_log_space_in_bytes*1.0/1024/1024 AS [Log Used Space Mb],
       (total_log_size_in_bytes - used_log_space_in_bytes)*1.0/1024/1024 AS [Free Log Space Mb],
       ((total_log_size_in_bytes - used_log_space_in_bytes)*1.0/1024/1024)/(total_log_size_in_bytes*1.0/1024/1024)*100 AS [% Space Free in Log]
FROM sys.dm_db_log_space_usage loginfo
INNER JOIN sys.databases dbs ON dbs.database_id = loginfo.database_id;
If the "% Space Free in Log" is very small, the "Recovery Model" isn't SIMPLE, and the "Log Reuse Description" is LOG_BACKUP, you need to truncate and then shrink the log to reclaim the space.
If the "% Space Free in Log" is very large and the recovery model is FULL, then the log has already been truncated (for example by a recent log backup) and you just need to shrink the transaction log to reclaim the space.
Truncating the log
If after running the script above you have determined you need to truncate the log file before shrinking it, you can perform a log backup to do this using the command below:
BACKUP LOG [DBName]
TO DISK = N'C:\<Location>\<FileName>'
WITH NOFORMAT, NOINIT,
NAME = N'DBName-Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10;
If you re-run the script you should notice the "% Space Free in Log" has increased, as the log has been truncated, and you can now move on to shrinking the log to reclaim that free space.
However, what if the log has grown so large you don’t have enough disk space to perform a transaction log backup? How can you truncate the log? In this situation I try the following:
1. Back up the transaction log to NUL.
This discards the backup data as it is taken and therefore doesn't use up any disk space creating the backup (essentially backing up to nothing). The syntax for this is below:
BACKUP LOG <DBName> TO DISK = 'NUL';
After doing this you can shrink the log. You'll also need to perform a FULL backup immediately afterwards, as backing up to NUL breaks the log backup chain.
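A minimal sketch of that follow-up FULL backup; the path and backup name here are placeholders:

```sql
-- Take a FULL backup straight away to establish a new base
-- for future log backups after the chain has been broken.
BACKUP DATABASE [DBName]
TO DISK = N'C:\Backups\DBName_Full.bak'   -- placeholder path
WITH INIT, STATS = 10;
```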
2. Change the recovery model of the database to SIMPLE and back to FULL.
This is a little more drastic and should be a last resort. It breaks the log chain, therefore after shrinking the log file it's imperative to take a FULL backup to restart the chain. You also need to ensure no features are in use that rely on the log, such as replication or mirroring. You can change the recovery model using the statement below:
ALTER DATABASE <DBName> SET RECOVERY SIMPLE;
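Once you have shrunk the log, you would switch back to FULL and restart the log chain. Roughly (the backup path is a placeholder):

```sql
-- Switch back to the FULL recovery model...
ALTER DATABASE <DBName> SET RECOVERY FULL;

-- ...then take a FULL backup to restart the log backup chain.
BACKUP DATABASE <DBName>
TO DISK = N'C:\Backups\DBName_Full.bak'   -- placeholder path
WITH INIT, STATS = 10;
```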
IMPORTANT: Please note that after backing up the log file it's wise to keep the transaction log backup file handy in case you ever need to restore it.
Shrinking the log file
Once you have truncated the transaction log you can shrink the log file to reclaim the space on disk. One thing to note is not to shrink the log too much. If you do, it's inevitable the log file will have to grow again when a large operation occurs, e.g. rebuilding an index on a very large table, which has to be logged in the transaction log. I tend to shrink to about 10% of the size of the database.
To shrink the log you do the following:
DBCC SHRINKFILE ('<TransactionLogLogicalFileName>' , 1024);
This example will shrink the log file to 1 GB (the target size is specified in MB).
You can find out the “Logical File Name” of the transaction log file using the statement below against the database in question:
SELECT name AS [LogicalFileName] FROM sys.database_files WHERE type_desc = 'LOG';
As an example, if I run the following shrink operation against my example database it shrinks the log to approx 400 MB:
USE [LargeLogFileDB]
GO
DBCC SHRINKFILE (N'LargeLogFileDB_log', 400)
GO
If I now run the original script I can see the size of the transaction log has shrunk and therefore the space on disk has been reclaimed.
To prevent this happening again, ensure you have regular LOG backups for all databases in the FULL recovery model. This may also entail checking any third-party backup software you use that can back up the log file.
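As a sketch, this query against the msdb backup history lists databases in the FULL recovery model together with their most recent log backup, so you can spot any that are being missed:

```sql
-- Find FULL-recovery databases and when their log was last backed up.
-- A NULL LastLogBackup means no log backup has ever been recorded in msdb.
SELECT d.name AS [Database],
       d.recovery_model_desc AS [Recovery Model],
       MAX(b.backup_finish_date) AS [LastLogBackup]
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name
      AND b.type = 'L'              -- 'L' = transaction log backup
WHERE d.recovery_model_desc = 'FULL'
GROUP BY d.name, d.recovery_model_desc
ORDER BY [LastLogBackup];
```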
Also, if point-in-time recovery isn't needed, consider changing the database recovery model to SIMPLE so the transaction log auto-truncates and you'll never have to worry about a runaway transaction log again. (I use this for practice databases etc. I often get asked why the logs have grown for those databases when they are hardly ever used by end users; however, if you have regular index maintenance jobs on those databases, that work is all logged in the transaction log, and if the log isn't backed up it will grow and grow.)
I'm more of a SQL enthusiast than a SQL expert, however these are the general steps I take to resolve the issue of a very large transaction log. There are so many good blog posts online on this subject that I'd highly recommend anyone wanting to learn more about this topic to take a look. (Google is your friend here :)).
I hope this post helps other Dynamics ERP System Admins out there that come across large transaction log files.
Thanks for reading!