restore with copy

A SQL backup creates a file, and you can choose both its location and its name.


USE [master]
BACKUP DATABASE [AdventureWorks] TO DISK = 'c:\mssql\backup\yukon\AW2K5_Full.bak'

In this example, the database is changed to single-user mode and dropped. The .mdf and .ldf files remain in place, and the backup taken earlier survives the drop. The contents of the .bak file can be displayed with

RESTORE FILELISTONLY FROM DISK = 'c:\mssql\backup\yukon\AW2K5_Full.bak' -- WITH FILE = 1

which returns the logical name, physical name, type (D for data, L for log), filegroup name, size, and file ID.

Another approach is to verify that the backup set is complete and readable, without restoring it:

RESTORE VERIFYONLY FROM DISK = 'c:\mssql\backup\yukon\AW2K5_Full.bak'

Also review the current physical location of the .mdf and .ldf files with T-SQL against sys.master_files:

USE [master];
SELECT DB_NAME([database_id]) AS [database_name]
     , [file_id]
     , [type_desc] AS [file_type]
     , [name] AS [logical_name]
     , [physical_name]
FROM sys.[master_files]
WHERE [database_id] IN (DB_ID('AdventureWorks'))
ORDER BY [type], DB_NAME([database_id]);

If we restore now, even under a new name, the data and log file paths recorded in the .bak file still point to the original (default) locations, and we will get errors, including a suggestion to use MOVE.

The restore below changes the name from AdventureWorks to AdventureWorksCopy using the AdventureWorks full backup (.bak) file, and moves the data and log files from the default location to a new location.

USE [master]
RESTORE DATABASE [AdventureWorksCopy] FROM DISK = 'c:\mssql\backup\yukon\AW2K5_Full.bak'
WITH MOVE 'AdventureWorks_Data' TO 'c:\mssql\data\yukon\AdventureWorksCopy_Data.mdf',
     MOVE 'AdventureWorks_Log' TO 'c:\mssql\log\yukon\AdventureWorksCopy_Log.ldf';

t-log too large

Happens all the time. Twice this week. A vendor database, something happens, it runs out of space for the logs, and we have a problem. Since it is a vendor database, and since our best practice for IT support is to get on the phone, why should we worry? One reason is that if the database is in the full recovery model and backups break (we use MS DPM), the log will grow and we have to do something about it.

If the transaction log is too big, it is not getting truncated. In the full recovery model, truncation happens when the log is backed up, assuming the log doesn't grow so fast that it outruns the backups. The log can also grow in the simple model, if transactions outrun the checkpoint process that saves changes to disk.

Looked at another way, we have a problem when we run out of space. One runaway log can fill an entire volume. Or the log file might not be set to autogrow, in which case we have a problem if it is not correctly sized. Also, index maintenance and bulk imports can add to the log faster than we anticipate, so what was once a good size is now too small.

Just to keep it straight: we usually expect the backup and the size of the database to grow as new records are added, but a right-sized log should be truncated often enough that we don't see it grow. After the backup it is empty, or at least emptier, although the file size stays the same (because all that space will be needed again, like a high-water mark), and the percent used climbs until the log is truncated again. If the log is shrunk and we DO need to grow it, growth can be a slow process and the database will be unresponsive while it happens.

Per Shawn McGehee in Simple Talk,

DBCC SQLPERF(LOGSPACE);

will report on the size of the log(s) and the percent used like this:

DB Name      Log Size (MB)  Log % Used  Status
master       1.992188    41.7402      0
tempdb       379.4297    23.68777     0
model        0.7421875   54.93421     0
msdb         417.3672    3.173259     0
MedicatDB    5809.742    0.3472498    0
BB_admin     19.99219    24.47489     0
BBLEARN      135.4297    2.817277     0
BB_stats     1872.992    12.3829      0
BB_cms       19.99219    20.68191     0

We can try to back up the log with T-SQL or run a manual DPM job. We can also temporarily create a second log file on another drive to get the database responding again. If that works, we can continue our usual maintenance.
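As a rough sketch (the database and file names here are placeholders, not from any example above), the log backup and the temporary second log file might look like:

BACKUP LOG [VendorDB] TO DISK = 'c:\mssql\backup\VendorDB_Log.trn';

ALTER DATABASE [VendorDB]
ADD LOG FILE (NAME = 'VendorDB_Log2',
    FILENAME = 'd:\mssql\log\VendorDB_Log2.ldf',
    SIZE = 1024MB);

Once space is back under control, the extra file can be emptied with DBCC SHRINKFILE ('VendorDB_Log2', EMPTYFILE) and dropped with ALTER DATABASE [VendorDB] REMOVE FILE [VendorDB_Log2].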

In this example, a new database was created, an insert was performed without a commit, and another transaction was started. Since the first log entry (the MinLSN, or oldest record still needed) was at the start of the log, the log backup, which should have worked, did not clear the unnecessary log entries: the open transaction kept the whole log active.
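One way to see why log space is not being reused is the log_reuse_wait_desc column (the database name here is a placeholder):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'VendorDB';

ACTIVE_TRANSACTION means an open transaction is pinning the MinLSN; LOG_BACKUP means the log is waiting for a log backup.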


If a log file fills and needs to grow (and autogrowth is turned on), it is enlarged by increasing the size and adding new virtual log files (VLFs). The new parts of the log must be initialized. If the autogrowth settings specify small increases and autogrowth is common, we end up with lots of little VLFs and a fragmented log. Reading all of these VLFs makes a restart of the server slow, because each of them must be read.

We can count the VLFs with the DBCC LOGINFO command (undocumented and unsupported):

USE DatabaseName;
DBCC LOGINFO;

If autogrowth is the issue, we can create and manually resize the log file easily with a reasonable number of VLFs; getting there through a series of autogrowths creates unnecessary virtual log files.
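A sketch of a manual resize (the database name, logical file name, and size are placeholders); one large, deliberate growth creates far fewer VLFs than many small autogrowths:

USE [master];
ALTER DATABASE [VendorDB]
MODIFY FILE (NAME = 'VendorDB_Log', SIZE = 4096MB);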

Another approach is to change the recovery model from full to simple and shrink the log:

DBCC SHRINKFILE (file_name, target_size);

Following this, change back to full and immediately take a full (or differential) database backup to restart the log chain. If the log is too large, usually there are too many VLFs as well.
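The whole sequence might look like this (the database name, logical file name, target size, and backup path are placeholders):

ALTER DATABASE [VendorDB] SET RECOVERY SIMPLE;
USE [VendorDB];
DBCC SHRINKFILE ('VendorDB_Log', 1024); -- target size in MB
USE [master];
ALTER DATABASE [VendorDB] SET RECOVERY FULL;
BACKUP DATABASE [VendorDB] TO DISK = 'c:\mssql\backup\VendorDB_Full.bak'; -- restarts the log chain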

sa account

When MS SQL is installed, the sa account is created with full admin permissions on all databases (on install there are only four, so I am thinking it is in the model database as the default owner for new dbs, or is part of server admin?).

Windows authentication disables the sa account and changes the password. If the account is enabled in mixed mode, you could be compromised by a brute-force or dictionary attack. Windows accounts would presumably lock out after three to five attempts and make a brute force less forceful.

You can rename sa in the GUI or with T-SQL:
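For example (the new name is a placeholder):

ALTER LOGIN [sa] WITH NAME = [sa_renamed];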


You can disable it in the GUI (SSMS) or via T-SQL:
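In T-SQL:

ALTER LOGIN [sa] DISABLE;

(ALTER LOGIN [sa] ENABLE; turns it back on.)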




recovery models

The recovery mode/model of a database controls how restores can occur and whether the log file can be backed up.

The simple model means "recovery to the last database backup"; transaction logs are not backed up. This means you cannot combine frequent log backups with a database backup to extend recovery past the last database backup (point-in-time recovery, where we can recover to an intermediate time if we have the requisite logs). If your recovery point needs to be recent, you need a recent database backup.

The full recovery model means the database is backed up, and the logs are backed up more frequently. Recovery means taking the most recent full backup and the subsequent logs and combining them. The expensive database backups are not done as often; the cheaper, smaller log backups are done often (and each time the log is backed up, and therefore can be recovered, the log can be made smaller).

Simple recovery lets SQL Server manage the log itself; full recovery means we have to manage the logs ourselves. Obviously, if we don't back up the database (and, in full mode, the logs), we don't have any recovery options. Backup means recovery.

If we fail to back up the logs in full recovery mode, they keep getting larger and larger. In simple recovery mode, each time there is a checkpoint, updates are written to the .mdf file and log space is freed for reuse (the file itself does not shrink).

To change the mode, do a backup, then in SSMS select the database, Properties, Options, the Recovery model drop-down, choose the model, and OK. The change is immediate.

To view the mode in T-SQL:

SELECT name, recovery_model_desc
   FROM sys.databases
      WHERE name = 'model' ; -- database name

to change the mode

USE master ;
ALTER DATABASE model SET RECOVERY FULL ; -- simple, bulk-logged

After changing to simple, make sure there are no jobs still running that back up the logs. After changing to full, immediately run a full backup to start the log chain.


Database files (data or logs) can be set to a size, to a maximum size, and to autogrow when full; if autogrowing, how to grow (by percent or by a specific amount). If autogrowth is not enabled, the size of the file needs to be managed by the DBA, the application, or an Agent job so that the file is truncated or reduced periodically. The log size (or at least the percentage used) will be reduced following a log backup (full model) or a checkpoint (simple model).

SSMS: database, Properties, Files, the Autogrowth/Maxsize column.
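The same settings can be read and changed with T-SQL (names and sizes are placeholders); a fixed growth increment is usually preferred over a percentage:

SELECT name, growth, is_percent_growth, max_size
FROM sys.master_files
WHERE database_id = DB_ID('VendorDB');

ALTER DATABASE [VendorDB]
MODIFY FILE (NAME = 'VendorDB_Log', FILEGROWTH = 256MB, MAXSIZE = 8192MB);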





startup and cache

All activity (CRUD: create, read, update, delete) in MS SQL is done in memory – RAM.

When MS SQL is started or restarted, there is nothing (no data) in memory. When you submit a query, data is located on the disk, read into memory, and used for the query, perhaps with changes or perhaps with a delete. At some point the changed data in memory is pushed back to the disk, but the data in memory is kept there (cached) forever. Forever is defined as until the server is restarted, or until the engine determines that there is other data whose inclusion in the cache is more valuable (related to the cost of reading it in again and again) than our first query's. Then it (or some other cached and underused data) will be dumped.

Not only data records but also indexes and query plans are cached.

Data (records, indexes, logs) is stored in 8 KB pages. Nothing less than a single page is cached. Tables, logs, and indexes as such are not cached, only the pages containing their data. All data on a single page belongs to one object (there is no mixing of records/tables and indexes on a page, even if the page is not full).

On a page, data from a heap is not stored in any particular order. To find a record, we read or scan all the data. Finding a match does not mean there can't be more matches, so we need to read all the data in the heap (possibly many pages) before we are finished.

If data is stored with a clustered index (CI), it is stored in a particular order (which might or might not be the physical order on an individual page, but the order on the page IS constrained by the xxx). When we search using the clustered index, we know where the first record that matches the query is found, and we know when we have found everything that is in the index. The clustered index is on some number of columns but includes ALL the data: the index is the table.

If there is a non-clustered index (NCI) on a heap or a clustered-index table, a copy of part of the table is created, containing a reduced number of columns and physically ordered on the indexed columns. Other columns might be part of the index key, or included columns (making a covering index). Each index entry has a pointer to the location of the record in the heap or the clustered index. If a query can be answered by data from the NCI alone, there is no need to go to the actual data. That cannot happen with SELECT * ... because only the actual data has all the columns, unless the NCI includes all the data, which is wasteful.

Again, the heap, the NCI, and the CI objects are on separate pages. An NCI on a heap or on a CI has its own data pages.

Use the DMF sys.dm_db_index_physical_stats to check the number of pages and the fragmentation:

SELECT * FROM sys.dm_db_index_physical_stats
 (DB_ID(N'Equella'), OBJECT_ID(N'dbo.bookmark'), NULL, NULL , 'DETAILED');


When an index (not a table) is being created, rebuilt, or reorganized, data is moved around on data pages. If the fill factor (FF) is less than 100%, space is reserved on each leaf page for future changes. This means the index grows in size and fewer index pages can be in memory, and an index scan will take longer, both of which can hurt performance. BOL says an FF of 50% can cause read performance to decrease by 2x, but the MS default is 100% (stored as 0). A high FF is best for read-only tables or NCIs whose data is not going to change.

After creation or a rebuild, the FF is not maintained as the pages are written to. Only on another rebuild will the free space be reset.

FF can be set server-wide with sp_configure. You can view an individual index's FF in sys.indexes. Change it with ALTER INDEX ... REBUILD or at CREATE INDEX time.
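For example (the index name is a placeholder; dbo.bookmark is the table from the DMF query above):

SELECT name, fill_factor
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.bookmark');

ALTER INDEX [IX_bookmark_name] ON dbo.bookmark
REBUILD WITH (FILLFACTOR = 90);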

However, if the table is being updated with "random" insertions, many of the index pages will need to be split (a hard page split: create a new page, move half the data). This is fixed by a later rebuild.

FF does not apply to NCI on a heap table.

If a table with a sequentially increasing CI key is updated, all the writing is at the end of the table. The index will not incur page splits (should it need any?) and the FF is not relevant.

Performance is bad if there are index page splits (writes to the log and more I/O to create pages and move data), but also bad if we reserve too much free space (a low FF) because of the extra I/O. If an index is frequently fragmented, we should consider setting an FF on the next index rebuild.

Low-update tables (roughly 100 reads per write): FF 100%
High-update tables (writes > reads): FF 50-70%
Everything else: FF 80-90%
(from Brad McGehee). So he recommends something other than 100% (0) for many tables.