For most activities in MS SQL (not including queries), information on the action is logged.  Depending on the database, the action is logged to that database's transaction log.  This potentially allows for rollbacks or point-in-time restores if something is damaged.

When SQL is installed, you can specify the location of the transaction logs; you can also move the logs after the database comes online.  Since transaction logs are very write-oriented and can prevent anything else from happening, these logs need to be on very fast storage (low-latency, high-I/O disks – RAID 1 or 10, or SSD).  Even tempdb has a transaction file.

There is an undocumented function [fn_dblog(null,null)] to look at a log file from one starting LSN to the ending LSN (null is the default, meaning the whole log).  You can also use DBCC LOG() and trace flag 2537 to look at logs.  There is an example in MS SQL Tips.

USE ReadingDBLog;
SELECT COUNT(*) FROM fn_dblog(null, null);

will count the number of entries in the specified log file.  To look at the data, use

USE ReadingDBLog;
SELECT [Current LSN],
       [Transaction Name],
       [Transaction ID],
       [Transaction SID],
       [Begin Time],
       [Page ID],
       [Slot ID],
       [End Time],
       [Number of Locks],
       [Lock Information]
FROM fn_dblog(null, null);

There are more than 100 columns available in the table. If you back up the log and then count rows again, you will see the number of rows decrease, assuming that transactions have been committed and the inactive entries removed.
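A minimal sketch of that before-and-after check (assuming the ReadingDBLog database is in the FULL recovery model; the backup path is illustrative, substitute your own):

```sql
USE ReadingDBLog;

-- Count log records before the backup
SELECT COUNT(*) AS LogRecordsBefore FROM fn_dblog(null, null);

-- Back up the log so the committed, inactive portion can be truncated
BACKUP LOG ReadingDBLog TO DISK = N'C:\Backups\ReadingDBLog_log.bak';

-- Count again; the number should drop if inactive entries were cleared
SELECT COUNT(*) AS LogRecordsAfter FROM fn_dblog(null, null);
```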

locks and latches

When a table or record needs to be queried or updated, the table (or record, or page) is locked while data is in flight so that data integrity (ACID) can be assured.

Use

SELECT * FROM sys.dm_tran_locks;

to examine the locks.  A lock might create a block on another process (the second can't finish until the first is done and the lock removed).  This might also turn into a deadlock between two processes – one will be terminated and the other will eventually complete.

You can create a block in testing by starting a transaction without a COMMIT TRAN or ROLLBACK TRAN.
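A quick sketch of doing this in a test environment (the table and column names are illustrative; run the batches in separate sessions):

```sql
-- Session 1: open a transaction and hold a lock by never committing
BEGIN TRAN;
UPDATE dbo.People SET LastName = 'Smith' WHERE PersonID = 1;
-- no COMMIT TRAN / ROLLBACK TRAN yet, so the exclusive lock is held

-- Session 2: this will block until session 1 commits or rolls back
SELECT * FROM dbo.People WHERE PersonID = 1;

-- Session 3: observe the lock and who holds it
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks;
```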

performance tuning, CONFIO webinar on EE, PASS presentation on PT.

SQL Server very slow:  either very busy and improperly sized, or bad queries not running properly.

Using RTA (response time analysis), focus on slow-responding queries.  Use wait types to define what is happening (7 wait time tables).  DMVs are used, but they don't do periodic determinations (they are cumulative).  Basic monitoring query:

Make sure you are working on the right query.

INSERT INTO SessionWaitInfo
SELECT r.session_id, r.sql_handle, r.statement_start_offset,
r.statement_end_offset, r.plan_handle, r.database_id,
r.blocking_session_id, r.wait_type, r.query_hash,
s.host_name, s.program_name, s.host_process_id,
s.login_name, CURRENT_TIMESTAMP cdt
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
WHERE r.status <> 'background'
AND s.session_id > 50
AND s.session_id <> @@SPID

Get baseline stats and wait_type info.  Review the execution plan; check table information and indexes.

Monitor before and after so we know the change made a difference.
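A minimal baseline sketch using the cumulative wait-stats DMV (sys.dm_os_wait_stats is standard; the excluded wait types below are just a common-noise example, not an exhaustive list):

```sql
-- Top waits since the last restart (or since the stats were cleared).
-- Because the DMV is cumulative, snapshot it periodically and diff.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'XE_TIMER_EVENT', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```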

OLTP-OLAP-CRUD-ACID and other acronyms.

On-line, on-line … processing

OLTP: On-Line Transaction Processing

— does typical database stuff, provides data to the warehouse for OLAP

OLAP – On-Line Analytic Processing

— does analysis

CRUD – create, read, update and delete
— SCRUD (Search…)
— BREAD (Browse, Read, Edit, Add, Delete)

ACID – Atomicity, Consistency, Isolation, Durability

RBAR –  row by agonizing row

from Don Jones (SQL Server Admin in AMOLunches)
– SQL – Structured Query Language
– DML – Data Manipulation Language
– DDL – Data Definition Language
– SSRS – SQL Server Reporting Services
– SSIS – SQL Server Integration Services
– SSAS – SQL Server Analysis Services
– DMO – Database Management Objects



largely copied from

  • Clustered index
  • Non-clustered index
  • Covering Index

    Create an index on a column, but include additional, non-indexed information.  Run a query that uses the covering index.  When the query finds the index entry, it also finds the additional information.  If this info is enough to satisfy the query, the engine does not have to go to the record.  Create covering indexes to match frequently run queries.

    CREATE INDEX IX_LastName_Includes ON dbo.People (LastName) INCLUDE (FirstName, PhoneNumber)
  • Filtered index

    If we have a table with lots of data in a column, mostly type A but some type B (and we care about type B), an index on that table that includes the types will be large.  If we want to query the table for type B rows, we can create a relatively small index (covering far less than 100% of the records) just for type B:

    CREATE INDEX IX_OrderStatus ON dbo.Orders (OrderNumber) WHERE OrderProcessed = 0
  • Full-text indexes
  • XML indexes



MAX Memory in an active cluster.

In a single-node instance, you want to reserve some computer memory for the OS or other activities (SSMS on the box, RDP into the box, backups, AV, et al.).  If you don't do this, MS SQL will have a max memory setting of a huge value, which means “take everything you can get” — which means that if/WHEN the OS needs memory to do some OS task, it will have to reclaim it from SQLOS, probably dumping some cached data/indexes/stored procedures, or disk thrashing.

OK, got it.  Now create a cluster with two nodes.  Set up the max memory, fail over to the other node with the same max memory – no sweat.  NOW, run a second instance on the other node.  We are now active-active (is this the correct term?), and when the failover occurs (as it eventually will, even if we are only doing maintenance), we have two instances both wanting most of the RAM on the same physical server:  total RAM configured for SQL greater than … total RAM.

An article in SQLServerCentral discusses how to deal with this.  The obvious solution is to waste half the configured RAM on both machines so that the failing-over instance immediately sees what it expects – nearly half the total server RAM.

A better approach is to reconfigure the max memory for each instance depending on whether it has a whole server to itself or is sharing.

The first step is to set up each instance with a data drive on the SAN – with different letters.  For each instance, put a blank .txt file on that drive:  instance one has E:\instanceOne.txt and instance two has F:\InstanceTwo.txt.

The second step is to write three SQL Agent jobs to reconfigure the max memory settings (there will be six jobs, three for each instance; with three instances we would need a different number per instance or per cluster).  Job one gives the instance all available memory (minus the system reserve).  Job two is what we want for each instance if we are sharing.  This COULD be half of the non-reserved memory, but if the two instances don't really need the same max memory, it could be different.  Job three is what we want for the OTHER (active?) instance onto whose machine we are failing.

Finally, we need one more SQL Agent job that runs on restart.  It is the same on both instances, and says:  on restart, check and run the appropriate local SQL job (i.e., reconfigure MAX or reconfigure SHARED).

If the SQL job starts and sees only E or F, the correct memory is MAX.  If the SQL job sees both E and F (i.e., we have two instances on the same machine), it configures its own instance to the shared max and the OTHER instance to its shared max.  When things are fixed and the machines restarted, we check, see only the nominal drive, and reconfigure for max again.
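A rough sketch of the drive-check logic (xp_fileexist is undocumented but long-standing; the marker-file paths and memory values are just the examples from above, and the shared value here is an assumed half-share):

```sql
-- Runs from the SQL Agent startup job: check which marker files are visible
DECLARE @e INT, @f INT;
EXEC master.dbo.xp_fileexist N'E:\instanceOne.txt', @e OUTPUT;
EXEC master.dbo.xp_fileexist N'F:\InstanceTwo.txt', @f OUTPUT;

IF @e = 1 AND @f = 1
BEGIN
    -- Both marker files visible: two instances on this machine, so share
    EXEC sys.sp_configure N'max server memory (MB)', N'36864';
    RECONFIGURE;
    -- ...and, via the linked server, run the job that throttles the
    -- other instance to its shared value as well
END
ELSE
BEGIN
    -- Only our own drive visible: alone on the machine, take the max
    EXEC sys.sp_configure N'max server memory (MB)', N'73728';
    RECONFIGURE;
END
```

(sp_configure for max server memory requires 'show advanced options' to be enabled.)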

That all makes sense, BUT the machine that stays up – why will that SQL instance think it is in “restart” mode?  The trick is that these are linked servers, so the SQL Agent job on the instance that restarts will be able to invoke sp_configure on the machine that stays up:

EXEC sys.sp_configure N'max server memory (MB)', N'73728'

So the instance that restarts will check to see it is sharing resources, configure itself, configure the initial instance, and both instances are running right.  When we fix machine two, we fail instance two back to machine two, SQL restarts, sees it is alone, configures max memory for itself, and removes the hold-back on machine one.

Or so I think….

count rows in a single user database

This is an interesting little script by James Rea that will count the number of rows in each table in a specified database.

It might be useful in the future.  It extends COUNT(*) to the entire database and not just a single table.  (Took me at least five minutes to realize that little fact…)  The output is sorted by count, but the script is fulsomely commented and there is a commented line that will sort by table name rather than count.

The next, or final, comment is on the description of the script as “records contained within any single user database” – I was scratching my head, trying to figure out why the database was forced into single-user mode, why databases would be set up this way, and … finally looked at the code.  I would have written this as “count records in any selected user database” – not 100% certain that this is better; however, the point is that the database is selected (USE database…) and it is processed one at a time, not all user tables in all databases.  A SINGLE database, not a SINGLE-USER database.
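The script itself isn't reproduced here, but a minimal sketch of the same idea (not James Rea's code) can be built on sys.partitions:

```sql
-- Approximate row counts for every table in the selected database.
USE ReadingDBLog;  -- illustrative database name; pick your own

SELECT s.name       AS SchemaName,
       t.name       AS TableName,
       SUM(p.rows)  AS RowCounts
FROM sys.tables t
JOIN sys.schemas s    ON s.schema_id = t.schema_id
JOIN sys.partitions p ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)   -- heap or clustered index only, no double counting
GROUP BY s.name, t.name
ORDER BY RowCounts DESC;     -- swap for ORDER BY t.name to sort by table name
```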