t-logs

Transaction logs

All changes to data are first made to data pages held in memory in the buffer cache.

As soon as possible, the corresponding log records are written sequentially to the transaction log, either as implicit (auto-commit) transactions or as a series of entries belonging to an explicit transaction.

The modified (dirty) data pages are then written to disk, possibly with random access, either following a checkpoint or via the lazy writer if there is memory pressure.
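
A minimal T-SQL sketch of that sequence (dbo.Orders, OrderID, and Status are hypothetical names): the change is made in the buffer cache, the log records are hardened when the transaction commits, and a checkpoint later flushes the dirty data pages.

    BEGIN TRANSACTION;          -- start an explicit transaction
    UPDATE dbo.Orders           -- hypothetical table; the page is modified in the buffer cache
    SET Status = 'Shipped'
    WHERE OrderID = 42;
    COMMIT TRANSACTION;         -- log records for this transaction are written to the log file

    CHECKPOINT;                 -- flushes the dirty data pages to the data file(s)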

The transaction log, which is typically a single file per database, is divided internally into a number of virtual log files (VLFs) to which the log records are written. The order is effectively round-robin, as long as the next VLF is marked inactive and is therefore available for reuse. The active log runs from the first to the last VLF containing at least one active log record, that is, a record for a transaction that has not yet committed and might still be rolled back.
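
To see the VLFs inside a log file, on SQL Server 2016 SP2 and later you can query the sys.dm_db_log_info DMV (older versions expose similar information through DBCC LOGINFO). A sketch for the current database:

    SELECT vlf_sequence_number,
           vlf_size_mb,
           vlf_active          -- 1 = part of the active log, 0 = reusable
    FROM sys.dm_db_log_info(DB_ID())
    ORDER BY vlf_sequence_number;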

In the simple recovery model, inactive VLFs are truncated (marked as reusable) immediately after a checkpoint.
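
For example, assuming the database is already using the simple recovery model, a manual checkpoint followed by DBCC SQLPERF(LOGSPACE) shows the percentage of log space in use drop once the inactive VLFs are marked reusable:

    CHECKPOINT;                 -- triggers log truncation under the simple recovery model
    DBCC SQLPERF(LOGSPACE);     -- reports log size and percent used for every database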

In the full or bulk-logged recovery model, a checkpoint writing data to disk does not permit VLFs to be truncated. The VLFs are preserved so that an unbroken chain of LSNs is maintained, allowing a point-in-time recovery from the last full database backup. When the transaction log is backed up, VLFs can be truncated if they contain no active log records.
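
A sketch of such a log backup (the database name and backup path are placeholders):

    -- after the backup, inactive VLFs become eligible for truncation
    BACKUP LOG [MyDatabase]
    TO DISK = N'X:\Backups\MyDatabase_log.trn';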

Once every VLF has been written to and none can be reused, the transaction log file is forced to grow. Best practice is to create the log file at the proper size up front, so that the frequency of log backups keeps the log from having to grow. Second best practice is to grow the log file by a constant size (one recommendation is to start testing with ½ GB (512 MB)) rather than by a percentage of the file. If the growth increment is small, a number of small VLFs are created; if the increment is larger, larger VLFs are created. There is an impact on the database during log growth, because the new log space must be zero-initialized, whereas data file growth can avoid this with instant file initialization.
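
A sketch of pre-sizing the log and setting a fixed growth increment (the database name and logical file name are placeholders, and the sizes are examples only):

    ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = N'MyDatabase_log',   -- logical name of the log file
                 SIZE = 8GB,                 -- size the log for the expected workload up front
                 FILEGROWTH = 512MB);        -- grow by a fixed amount, not a percentage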

The worst case is to cause or allow the transaction log to grow larger than necessary, then shrink the log to smaller than required and force it to grow again. One recommendation for full recovery is to create the log at 1.5x the size of the largest data file; this allows room for ordinary transactions as well as for index rebuilds and similar maintenance, which are heavily logged and would otherwise force the log to grow.
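
A rough way to sanity-check that recommendation is to compare the current log size against 1.5x the largest data file per database (sizes in sys.master_files are stored in 8 KB pages; this sketch ignores databases with multiple log files):

    SELECT DB_NAME(database_id) AS database_name,
           MAX(CASE WHEN type_desc = 'ROWS'
                    THEN size * 8 / 1024 END) * 1.5 AS suggested_log_mb,  -- 1.5x largest data file
           MAX(CASE WHEN type_desc = 'LOG'
                    THEN size * 8 / 1024 END)       AS current_log_mb
    FROM sys.master_files
    GROUP BY database_id;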

If the volume holding the transaction log fills up, or the log cannot grow quickly enough, a 9002 error is generated and the database effectively becomes read-only: modifications fail until log space is freed.
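
When that happens, sys.databases shows why the log cannot be truncated; a quick diagnostic sketch:

    -- log_reuse_wait_desc explains what is preventing log truncation
    -- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, CHECKPOINT, REPLICATION)
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;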
