t-log too large

Happens all the time.  Twice this week.  A vendor database runs out of space for its logs, and we have a problem.  Since it is a vendor database, and since our best practice for IT support is to get on the phone, why should we worry?  One reason is that if the database is in the full recovery model and backups break (we use MS DPM), the log will grow until we have to do something about it.

If the transaction log is too big, it is not getting truncated.  In the full recovery model, truncation happens when a log backup runs, so if backups stop, the log just grows.  Even with working backups, a burst of activity can grow the log faster than the backups keep up.  In the simple recovery model the log is truncated at checkpoints, but a long-running transaction can outrun the checkpoint process and the log will still grow.
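
A quick way to see why a particular log is not being truncated is the documented sys.databases catalog view (nothing made up here; log_reuse_wait_desc is a real column):

-- log_reuse_wait_desc names the current blocker: LOG_BACKUP means a log
-- backup is needed, ACTIVE_TRANSACTION means an open transaction is pinning
-- the log, NOTHING means the log space is reusable.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;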

Looked at another way, we have a problem when we run out of space.  One runaway log can fill the entire volume.  Or the log file might not be set to autogrow, in which case we have a problem if it is not correctly sized.  Also, reindexing and bulk imports can add to the log faster than we anticipate, so a log that was once sized correctly is now too small.
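
To spot badly sized or capped logs before they bite, we can query the documented sys.master_files catalog view; a sketch:

-- Size, cap, and growth increment for every log file on the instance.
-- size and max_size are in 8 KB pages (max_size = -1 means unrestricted);
-- growth is in pages unless is_percent_growth = 1, when it is a percentage.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       size * 8 / 1024      AS size_MB,
       max_size,
       growth,
       is_percent_growth
FROM sys.master_files
WHERE type_desc = 'LOG';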

Just to make sure it is straight: we usually expect the backups and the database itself to grow as new records are added, but a right-sized log should be truncated often enough that we don’t see it grow.  After a log backup the log is emptier, although the file size stays the same (the allocated space acts like a high-water mark, since all of it will be needed again), and the percent used climbs until the log is truncated again.  If the log is shrunk and we DO need to grow it later, the growth can be a slow process and the db will be unresponsive while the file grows.

As Shawn McGehee shows in Simple Talk,

DBCC SQLPERF(logspace)

will report on the size of the log(s) and the percent used like this:

Database Name   Log Size (MB)   Log Space Used (%)   Status
master          1.992188        41.7402              0
tempdb          379.4297        23.68777             0
model           0.7421875       54.93421             0
msdb            417.3672        3.173259             0
MedicatDB       5809.742        0.3472498            0
BB_admin        19.99219        24.47489             0
BBLEARN         135.4297        2.817277             0
BB_stats        1872.992        12.3829              0
BB_cms          19.99219        20.68191             0

We can try to back up the log file with t-sql or run a manual DPM job.  We can also temporarily create a second log file on another drive to get the database responding again.  If that works, we can continue our usual maintenance.
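
Something like this, as a sketch (VendorDB, the logical file name, and the paths are placeholders for whatever the vendor system really uses):

-- Back up the log with T-SQL so it can be truncated.
BACKUP LOG VendorDB TO DISK = N'E:\Backups\VendorDB_log.trn';

-- Or, if the log drive itself is full, add a temporary second log file on
-- another drive so the database can keep responding.
ALTER DATABASE VendorDB
ADD LOG FILE (NAME = VendorDB_log2,
              FILENAME = N'F:\TempLog\VendorDB_log2.ldf',
              SIZE = 1024MB);

Once the emergency is over, the extra file can be dropped with ALTER DATABASE ... REMOVE FILE (SQL Server may want a log backup or two first, so the file holds no active VLFs).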

In this example, a new database was created, an insert was performed without a commit, and another transaction was started.  Since the first log entry (the MinLSN, or oldest record still needed) was at the start of the log, the attempt to back up the log, which should have worked, did not clear the unneeded log entries, since truncation can only free space up to the MinLSN and the open transaction pinned it at the very start of the file.
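
A sketch of that scenario (LogDemo, the table, and the paths are all made-up names for illustration):

CREATE DATABASE LogDemo;
ALTER DATABASE LogDemo SET RECOVERY FULL;
BACKUP DATABASE LogDemo TO DISK = N'E:\Backups\LogDemo.bak';  -- starts the log chain
GO
USE LogDemo;
CREATE TABLE t (c INT);
BEGIN TRANSACTION;            -- deliberately left open: this pins the MinLSN
INSERT INTO t VALUES (1);
GO
-- BACKUP is not allowed inside a user transaction, so from a second session:
--   BACKUP LOG LogDemo TO DISK = N'E:\Backups\LogDemo_log.trn';
-- The backup succeeds, but DBCC SQLPERF(LOGSPACE) shows the space still used,
-- because truncation cannot pass the open transaction's first log record.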

——

If a log file is filled and needs to grow (and autogrowth is turned on), it is enlarged by extending the file and adding new virtual log files (VLFs).  The new parts of the log need to be initialized.  If the autogrowth increments are small and autogrowth happens often, we end up with lots of little VLFs and a fragmented log.  Reading all of these VLFs makes a restart of the server slow, because recovery has to process each of them.

We can count the VLFs with the DBCC LOGINFO command (undocumented and unsupported); it returns one row per VLF:

USE DatabaseName;
DBCC LOGINFO;
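
On SQL Server 2016 SP2 and later there is a documented alternative, the sys.dm_db_log_info DMV, which also returns one row per VLF:

-- One row per VLF, so counting rows counts the VLFs.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('DatabaseName'));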

If autogrowth is the issue, we can fix it by manually resizing the log file in one step, which produces a reasonable number of VLFs – getting to the same size through a series of autogrowths creates unnecessary virtual log files.
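
For example, as a sketch (VendorDB and VendorDB_log are placeholders; sys.database_files shows the real logical name):

-- Grow the log to its working size in one operation instead of many small
-- autogrowths, and set a sane fixed growth increment while we are at it.
ALTER DATABASE VendorDB
MODIFY FILE (NAME = VendorDB_log, SIZE = 8192MB, FILEGROWTH = 512MB);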

Another approach is to change the recovery model from full to simple and shrink the log:

DBCC SHRINKFILE (file_name, target_size);

Following this, change back to full and immediately take a full (or differential) database backup to restart the log chain; log backups can resume from there.  If the log is too large, usually there are too many VLFs as well.
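
The whole sequence, as a sketch (again, VendorDB, VendorDB_log, and the paths are placeholders):

ALTER DATABASE VendorDB SET RECOVERY SIMPLE;
DBCC SHRINKFILE (VendorDB_log, 1024);        -- target size in MB
ALTER DATABASE VendorDB SET RECOVERY FULL;
-- A full (or differential) backup restarts the log chain...
BACKUP DATABASE VendorDB TO DISK = N'D:\Backups\VendorDB.bak';
-- ...and log backups can resume from there.
BACKUP LOG VendorDB TO DISK = N'D:\Backups\VendorDB_log.trn';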
