large t-log and 9002 error

On one hand, you have your database (data) files (.mdf): all the data that has been written to disk. On the other hand, you have log files (.ldf), which record transactions and similar changes that haven't yet been hardened to the data file, or haven't yet been backed up. As the database takes checkpoints, the changes recorded in the log are written into the data file. At that point, the log space either becomes reusable (simple recovery model) or is kept so the log can provide point-in-time recovery (full recovery model). Under the full model, it is a log backup that marks the log space as empty, or available for reuse.
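For example, on SQL Server you can check which recovery model each database uses and how much of each log file is actually in use; both statements are read-only diagnostics:

    SELECT name, recovery_model_desc    -- SIMPLE, FULL or BULK_LOGGED
    FROM sys.databases;

    DBCC SQLPERF(LOGSPACE);             -- log size (MB) and percent of the log currently in use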

If the transaction log becomes large, it is either because log backups are failing (which should not really affect the simple model) or because there is so much activity between t-log backups that the transactions fill the log and it grows. This is entirely proper and what we want; the alternative is that the database returns a 9002 error and effectively becomes read-only, which happens when the log cannot grow because of its growth settings or because the disk is full. Of course, we would rather the log files were sized large enough that growth was not necessary, if only because log growth can be expensive, and while the log is growing the server cannot process transactions that need that log space.
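When a log keeps growing, or you have already hit 9002, a reasonable first step is to ask SQL Server why the log cannot be reused yet and, under the full model, to take a log backup so the space is freed. A minimal sketch, with the database name and backup path as placeholders:

    SELECT name, log_reuse_wait_desc    -- e.g. LOG_BACKUP, ACTIVE_TRANSACTION, NOTHING
    FROM sys.databases;

    BACKUP LOG MyDb
    TO DISK = N'D:\Backups\MyDb_log.trn';   -- lets committed log space be reused (full model)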

Growth of the log might be seen as the log finally becoming large enough, or as a response to an unusual occurrence that will not happen again, or will happen rarely, in which case we can try to shrink the log file afterwards. Unusual bulk imports or data loads can grow the log (consider changing the recovery model before expected unusual data ingestion). The same goes for large archiving/deleting jobs (we had that with MS SQL and Blackboard) or index rebuilds, which ought to be treated as expected activities.
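A sketch of that approach, assuming the bulk load really is a one-off (database and logical file names are placeholders, and shrinking is only worthwhile if the log will not need that space again):

    ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;   -- minimally log the bulk import
    -- ... run the bulk load here ...
    ALTER DATABASE MyDb SET RECOVERY FULL;          -- back to normal; take a backup afterwards

    USE MyDb;
    DBCC SHRINKFILE (N'MyDb_log', 1024);            -- shrink the log file back down to about 1 GB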

If these activities are not unusual, then either the log is simply too small (and should be allowed to grow to a suitable size) or the frequency of log backups should be increased.
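In that case the fix is sizing and scheduling rather than shrinking; a sketch, with the names and sizes as illustrative assumptions:

    -- Pre-size the log in one step instead of letting it grow in many small increments
    ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_log', SIZE = 8192MB, FILEGROWTH = 1024MB);

    -- And/or schedule BACKUP LOG more often (say every 15 minutes instead of hourly),
    -- so less activity accumulates between truncations.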
