version control for this dummy

More often than i’d like to admit, i have started making changes in a stored procedure (or some other random act of programming) and only eventually realized that 1) it was a bigger deal than i thought, and 2) it no longer works.  And i REALLY HATE to walk away from code that isn’t working.

The obvious (ha!) solution is to clone my USP before making changes (open the script, change ALTER to CREATE, change the name, execute the code), but sometimes that is too simple to work for me.  (“harder, harder, must make it harder…”)
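That clone step, spelled out with a made-up procedure name (usp_DoSomething is just for illustration), is nothing more than this:

```sql
-- start from the ALTER script of the procedure you're about to edit,
-- then change ALTER to CREATE and tack a suffix onto the name:
CREATE PROCEDURE dbo.usp_DoSomething_bak   -- was: ALTER PROCEDURE dbo.usp_DoSomething
AS
BEGIN
    SELECT 1;  -- ...the original body, unchanged
END
```

Execute that once and you have a restorable copy sitting in the database next to the original.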

The general solution is to think about source control or version control.  Unfortunately, since the documents of interest – the USPs – are stored inside of the MS SQL database, it is difficult to use traditional version-control software like Subversion to back them up.  There are a couple of expensive solutions for this, and i GUESS i could go to my boss and say that all of us should use something and that he should buy it and reward me for being the first person to notice there was a problem.  Guess whose problem THAT would be?

Other workarounds would be to copy/paste from the console into a text document on my local machine and use traditional source control from there, or to script/save the stored procedures as .sql files on my local machine.  There is a script wizard that would do that, but i haven’t learned how to script or automate it, t-sql output seems limited to 4,000-odd characters for display, and i got the oddest stuff when i output the results of querying the description property (the text) to a file.

What i am leaning towards is a low-cost hybrid solution.  Anup Saund has a project on GitHub called .gitSQL.  The free version saves only four items of each object type; the $40/year version is unlimited.  What it does is interrogate your sql server, list the databases, let you pick one, and show you the stored procedures, tables, views and functions.  You can select all or some of them and export them to your local machine.  I had some issues with the initial import.  Silly me checked the “import new” button, and that was a mistake.  It also took a long time to initially read the database, but now it comes back up fast.

What you get on your local machine are .sql files that will, if imported back to the server, recreate the object.  This script is familiar, since you could generate it with “script to > create” in SSMS.  But this file also contains the entire code that you would be editing locally, so you can just cut/paste into SSMS if the current code is not running.

So right now i have four files that correspond to four USPs from Data_2.  Anytime i am going to work on a stored procedure, i can put an updated local copy on my machine, or back up ALL the USP scripts.  However – there is something else – the local script contains not only the code for the USP but also any code to create stored procedures that your stored procedure references, any output tables (at least the schema), and other assorted code.  Since i am going to want to just copy/paste a working copy of the code, this is not a real problem for me.

Secondly, i have also installed GIT on my machine, and the .gitSQL destination is a ‘git init-ed’ directory.  So i open .gitSQL and a GIT client (GIT BASH is going to be fine), and each time i get nervous, i can GIT ADD and GIT COMMIT to source control.
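The whole nervous-moment routine, sketched with a throwaway directory standing in for the real .gitSQL export folder (the path and procedure name here are made up):

```shell
# stand-in for the .gitSQL export directory (yours is wherever .gitSQL writes)
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "me@example.com"   # needed once on a clean machine
git config user.name "me"

# .gitSQL would write files like this one on export
echo "CREATE PROCEDURE dbo.usp_Demo AS SELECT 1;" > dbo.usp_Demo.sql

# the 30-second snapshot, every time i get nervous
git add -A
git commit -q -m "snapshot before editing usp_Demo"
```

After that, `git log` and `git diff` give you the history and the changes for free.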

This means 30 seconds or so of work at various stages of my editing.  i don’t see this as a major issue – every 10-20 minutes or so i SHOULD take a 30 second break anyway.

I can also back up SQL tables – create schema and import data in the .sql file or just the data into a flat file.  Not good for backups, but fine for utility tables, such as tables2process.

This just might work for me — and maybe i can claim it as a tax deduction.


NUMA nodes

What is NUMA?  The National Underwater and Marine Agency (Clive Cussler), and there actually is such a place, although they probably don’t have fleets of ships and a former President running the place.

Non-Uniform Memory Access (NUMA) is a condition where different regions of RAM have different access characteristics.  Because RAM is large and there are lots of really fast CPUs, addressing one region of memory can take longer than a “closer” region for a particular chip.  In that case, it makes sense for a CPU to use memory in its neighborhood.  Or more correctly, the data a CPU is storing and working with is best kept in one ‘node’ that is highly accessible to that CPU.

Jonathan Kehayias of SQLskills indicates that this query

select * from sys.dm_os_memory_nodes

will return data like this

memory_node_id  virtual_address_space_reserved_kb  ... more columns
0               99838476
64              0

on a non-NUMA machine.  If NUMA nodes are configured, there will be more than just two memory_node_id entries.  The second node (64) is the DAC (Dedicated Administrator Connection) node.  You can get a similar view from this

select * from sys.dm_os_nodes
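Putting the first DMV to work, a quick yes/no check for NUMA might look like this (my own sketch, not from the article):

```sql
-- count memory nodes, ignoring node 64 (the DAC node)
-- 1 means non-NUMA; more than 1 means NUMA is configured
SELECT COUNT(*) AS numa_node_count
FROM sys.dm_os_memory_nodes
WHERE memory_node_id <> 64;
```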

Glenn Berry has a script that gives information about cores and CPUs, et al

-- Hardware information from SQL Server 2012
-- (Cannot distinguish between HT and multi-core)
SELECT cpu_count AS [Logical CPU Count]
, hyperthread_ratio AS [Hyperthread Ratio]
, cpu_count/hyperthread_ratio AS [Physical CPU Count]
, physical_memory_kb/1024 AS [Physical Memory (MB)]
, committed_target_kb/1024 AS [Committed Target Memory (MB)]
, max_workers_count AS [Max Workers Count]
, affinity_type_desc AS [Affinity Type]
, sqlserver_start_time AS [SQL Server Start Time]
, virtual_machine_type_desc AS [Virtual Machine Type]
FROM sys.dm_os_sys_info WITH (NOLOCK) OPTION (RECOMPILE);

simple wait states

Klaus Aschenbrenner has a simple overview of wait states.

Unless a query gets the CPU and finishes within 4 ms, every query in MS SQL spends some time waiting.

A query that is being processed has to yield its place via cooperative scheduling, which is enforced for SQL processes by the SQLOS.  Every 4 milliseconds, the query is moved to the ‘suspended’ state to let other queries play.  Then, ideally, it moves to the runnable state and waits until it has another chance to play.

But at times, or even the first time around, it might not get onto the CPU.  If the storage manager cannot locate the data needed by the query, the query is moved immediately to suspended until the I/O system can find the data on disk, read it into the data cache, and tell the storage manager that a pointer to the data can be given to the query.

A query might need more data in the middle of its run.  This could happen if some of the data was loaded but there was not enough room for all of it, and now the additional pages need to be loaded.  Or a subquery runs.  Or the query optimizer’s plan needs to write something to tempdb (for example, a sort).  Anything that requires reading from the very slow physical media is going to make the query wait.

Now, suppose the data is in memory, but another process is using it.  OK, wait 4 ms and it will become free.  But it takes time to request and be granted a lock on the data for our query – it has to wait for the resource to become free, and then get the lock.

There are other, non-normal, causes for waits that indicate something is not ideal, but the point is that wait states are normal.  And all wait state information is stored by SQL Server and can be examined by the Dynamic Management View sys.dm_os_wait_stats.  If the server is running OK, the wait states you see with this DMV are normal.  If the server slows down, the reason for the slowdown will be seen as one or more wait states.  The wait state is the symptom, not the cause (slow storage, bad indexes, poor plans, locking and blocking, CPU pressure) but will start you looking, hopefully in the right places.
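A first look at that DMV, ordered by who has waited the longest, is a common starting query (my own sketch, not from Klaus’s overview):

```sql
-- top waits by accumulated wait time since the last restart
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms   -- portion spent runnable, waiting for CPU
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```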

simple foreach through a table

Select * into #temp from dbo.table

We know that one – now, how to do something for each row in the table?

consider this:

declare @column1 varchar(10)
select top 1 @column1 = column1 from #temp order by column1
-- if there are no rows, @@rowcount is zero
while (@@rowcount > 0)
begin
    -- do something with the selection
    print @column1
    -- delete the entire row we just selected
    delete from #temp where column1 = @column1
    select top 1 @column1 = column1 from #temp order by column1
end

What we are doing is selecting one row from #temp.  The order by keeps the selection order consistent each time, though for the purposes of this post it doesn’t matter.  We do something with the row data.  We delete that row from #temp.  We select another row from the diminished table.  Assuming @@rowcount is still positive, we do it all over again.  Because of the ‘while’, we have to do the first selection outside the loop, and we have to refresh the selection at the bottom of the loop.  Otherwise, after the last row was deleted, @@rowcount would still be 1 – from the deletion, not a successful selection – and we would go back to the top and enter the loop again with nothing left to process.

email report

I’m writing a report and i want to use SSMS to send it. OK, use ‘msdb.dbo.sp_send_dbmail’ to send it from inside SSMS.

But i want it to be formatted. OK, set @body_format = ‘HTML’ and you can format it with HTML to your little hearts content.

But it looks so ugly. OK, but that is NOT my problem.

Setting up the format for the HTML typically means creating a number of nvarchar(max) local variables, stuffing strings into them, and combining the strings into the value of the @body variable.  The same stuff you would do with PowerShell, or C#, or whatever.  The typical grunt work of

set @header = '<table border = ''1''><th>Column 1</th><th>...'

You need the doubled single quotes to get a single quote around the border value (like border = ‘1’).
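A quick way to convince yourself the quoting came out right is just to print it:

```sql
declare @header nvarchar(200)
set @header = '<table border = ''1''><th>Column 1</th>'
print @header
-- prints: <table border = '1'><th>Column 1</th>
```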

A couple things will help. One is to find a nicely formatted table, and steal it. It will probably have CSS in it, so be prepared to have something like this:

SET @tableHTML =
 N'<style type="text/css">
 #box-table
 {
 font-family: "Lucida Sans Unicode", "Lucida Grande", Sans-Serif;
 font-size: 12px;
 text-align: center;
 }
 #box-table th
 {
 font-size: 13px;
 font-weight: normal;
 background: #b9c9fe;
 color: #039;
 }
 #box-table td
 {
 color: #669;
 }
 tr:nth-child(odd) { background-color:#eee; }
 tr:nth-child(even) { background-color:#fff; }
 </style>' + 
 N'<H3><font color="red">All Rows From [db].[table].[column]</font></H3>' +
 N'<table id="box-table"> ' + ...

OK, so a standard style sheet at the top, you can fuss with these after it’s working. Next a <H3> Title outside the table, then the top of the table. Start building your columns. You COULD use sys.columns to get a list of all of the table columns, and you could put in a != condition if we want to exclude a column from the report.

Next, we want the data for each column, we want it tagged without having to manually put the </td><td> in a bazillion times, and we want it readable. Sounds like XML to me.

try something like this:

+ CAST ( (
 SELECT td = CAST([ID] AS VARCHAR(100)),'',
 td = [column1_name],'',
 td = [column2_name],'',
 td = [column3_name],'',
 td = CONVERT(VARCHAR(30),[StartDate],120),'',
 td = CONVERT(VARCHAR(30),[EndDate],120)
 FROM [table]
 ORDER BY [StartDate]
 FOR XML PATH('tr'))
 ) AS NVARCHAR(MAX) ) +
 N'</table>'

Cast the selections, each aliased to ‘td’ – but that’s ok, because we are selecting FOR XML.  Notice you close the table in the last line, or you can have an htmlTail variable as well and glue them together.

Another option would be to alias one or more of the td elements as something else, like tdc.  This outputs XML with an element that isn’t valid HTML, but you can then do something like this to put special formatting in that/those cell(s).

SET @body = REPLACE(@body, '<tdc>', '<td class="center">')
SET @body = REPLACE(@body, '</tdc>', '</td>')