use of exec sp_executesql

We have data, and then we have data. There are client tables, and then there are tables we make from their data. These are intermediate tables used to stage query results that will be reused, or sometimes actual output tables. The point is that our tables need to be changed, updated, or recreated when the client data changes.

Our best practice is to create the intermediate tables by running stored procedures. Then we can create a “master script” to run all the stored procedures and recreate all the tables. The script self-documents the tables we need to create and controls the order in which they run.
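
A stripped-down version of the idea might be nothing more than this (the procedure names are made up for illustration):

-- hypothetical master script: one exec per intermediate table, in dependency order
exec dbo.usp_build_stage_claims      -- made-up procedure names
exec dbo.usp_build_stage_members
exec dbo.usp_build_output_summary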

I have this, but I run the name of the stored procedure through another stored procedure (log_process). It does some time stamps and counts and logs success for each of the stored procedures. It sticks the output into another table so we can audit our processes.
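
Our real log_process does more than this, but a minimal sketch of the idea, assuming a made-up audit table called process_log, might look like:

-- hypothetical audit table; the real one has more columns
CREATE TABLE dbo.process_log (
    log_id     int IDENTITY(1,1) PRIMARY KEY,
    usp_name   sysname,
    start_time datetime,
    end_time   datetime,
    row_count  int NULL
)
GO

-- minimal sketch of a log_process-style wrapper
CREATE PROCEDURE dbo.log_process
    @usp_name sysname
AS
BEGIN
    DECLARE @start datetime = GETDATE()
    DECLARE @rows int

    EXEC (@usp_name)          -- run the stored procedure that (re)builds the table
    SET @rows = @@ROWCOUNT    -- row count from the last statement inside that procedure

    INSERT INTO dbo.process_log (usp_name, start_time, end_time, row_count)
    VALUES (@usp_name, @start, GETDATE(), @rows)
END
GO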

The ‘master script’ that I stole from production uses @@ROWCOUNT to determine the number of rows in the output – a quick measure of how much work was done. It also records the number of seconds the stored procedure took to run for each of the tables, the name of the usp, the start and end times, and so on.

This is fine, except that in the new project some valid usps returned row counts of 0 or 1. I didn’t write all the code, and some of these tables are tooled together in multiple steps. If the last step is a union with a blank table, no rows are added to the millions of existing rows, and @@ROWCOUNT is 0. Bummer.
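
To see the problem in isolation (table names invented, assuming big_table already holds data and empty_staging is empty):

-- the last step of the build proc looks something like this
INSERT INTO dbo.big_table (id, val)
SELECT id, val FROM dbo.empty_staging    -- no rows come back from the empty table

SELECT @@ROWCOUNT    -- 0, even though dbo.big_table still holds millions of rows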

In the “master script”, I was getting the name of each stored procedure and the name of the output table from a tables2process table, and doing a while loop over each row – drop this table, run the indicated procedure through log_process, loop. I initially was going to require the log_process stored procedure to do the drop, update, and logging, but I started getting out-of-scope errors – exec drop_table @table was not always resolving as it should. So the “master script” is responsible for the drop. Then I call the log_process script and pass it the name of the stored procedure to (re)create the dropped table. Finally, I capture @@ROWCOUNT and update the log, then pass control back to the master script.

Remember how we have an issue with @@ROWCOUNT? OK, then pass the table name to the log_process stored procedure and have it do a row count of the final table. Scope-of-logic issues. OK, pass the row count to log_process from the master script – but THAT row count would be based on the pre-drop table and not the post-creation count. And it wasn’t working right anyway.

Finally I did this: drop the table in the ‘master script’. Run the indicated usp through the log_process stored procedure to recreate the table. Come back to the master script, get the row count of the new table, update the log with that value (the row having been inserted with null for row_count), then loop.
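
Roughly, the final loop looks like the sketch below. The tables2process columns, the process_log table, and log_process are the made-up names from the sketches above; the real script has more error handling.

-- simplified sketch of the final 'master script' loop
DECLARE @id int, @table sysname, @usp sysname, @cmd nvarchar(max), @rows int

SELECT @id = MIN(id) FROM dbo.tables2process     -- assumes an id column for looping

WHILE @id IS NOT NULL
BEGIN
    SELECT @table = table_name, @usp = usp_name
    FROM dbo.tables2process
    WHERE id = @id

    -- 1. the master script owns the drop
    SET @cmd = N'IF OBJECT_ID(''' + @table + N''') IS NOT NULL DROP TABLE ' + @table
    EXEC sp_executesql @cmd

    -- 2. log_process runs the usp and writes its log row
    --    (in the final version, with null for row_count)
    EXEC dbo.log_process @usp

    -- 3. back in the master script, count the rebuilt table with dynamic SQL
    SET @cmd = N'SELECT @rows = count(*) FROM ' + @table
    EXEC sp_executesql @cmd, N'@rows int out', @rows out

    -- 4. fill in the most recent log row with the post-creation count
    UPDATE dbo.process_log
    SET row_count = @rows
    WHERE log_id = (SELECT MAX(log_id) FROM dbo.process_log)

    SELECT @id = MIN(id) FROM dbo.tables2process WHERE id > @id
END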

I still have scope issues, but the purpose of this post is the T-SQL dynamic SQL I used for the row-count data.

Dynamic SQL means creating a @cmd string with the proper syntax and adding variables by closing the string, concatenating the value, and reopening the string to complete it. This could be done as

set @cmd = 'select count(*) from '
set @cmd = @cmd + @table
set @cmd = @cmd + ' where test = '
set @cmd = @cmd + @number
set @cmd = @cmd + ' '

Metalogic – not guaranteed to run. Or you can do something like

set @cmd = 'select * from ' + @table 
+ ' where test = ' + @number

There are lots of issues involving the variables, requiring ', '', ''', or sometimes '''' to get the quoting right, but this is the idea.
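
For example, if the filter value is a string rather than a number, the embedded quotes have to be doubled (a contrived example with made-up names):

DECLARE @cmd nvarchar(max)
DECLARE @table sysname = N'some_table'    -- made-up names
DECLARE @value varchar(20) = 'abc'
set @cmd = 'select count(*) from ' + @table
         + ' where test = ''' + @value + ''''
-- @cmd is now:  select count(*) from some_table where test = 'abc'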

Typically, set @result = exec(@cmd) does not work to return the count.

What does work is sp_executesql, which you execute with various possible parameters. This works:

DECLARE @count int
DECLARE @cmd nvarchar(max)
DECLARE @TName sysname = N'some_table'    -- the table to count; name is illustrative
set @cmd = N'SELECT @count = count(*) from ' + @TName
exec sp_executesql @cmd, N'@count int out', @count out
select @count
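
sp_executesql will also take input parameters, so a value like @number from the earlier examples can be passed in properly instead of being concatenated into the string; only the table name has to be built in. A sketch with made-up names:

DECLARE @count int
DECLARE @cmd nvarchar(max)
DECLARE @TName sysname = N'some_table'    -- made-up table name
DECLARE @number int = 42                  -- made-up filter value
set @cmd = N'SELECT @count = count(*) from ' + @TName + N' where test = @number'
exec sp_executesql @cmd, N'@count int out, @number int', @count out, @number
select @count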