
Monday, March 26, 2012

fetch successful resultset after exception

Ok, here is the deal. In T-SQL, an error is handled depending on its severity. For example, a severity of 15 or less (or is it 10 or less? doesn't matter) will only raise a warning that can be caught through the message event. 16 or higher (or whatever) will cause an exception to be thrown on the .NET side, and 20 or higher causes the connection to be closed.

So far so good.
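For reference, here's a minimal T-SQL sketch that provokes each of the three behaviours with RAISERROR (the message text is made up):

-- Severity 10: informational; in ADO.NET this surfaces through the
-- connection's InfoMessage event and no exception is thrown.
RAISERROR('just a warning', 10, 1)

-- Severity 16: a normal runtime error; ADO.NET raises a SqlException.
RAISERROR('a real error', 16, 1)

-- Severity 20 (needs sysadmin rights and WITH LOG): fatal; the connection is closed.
RAISERROR('fatal error', 20, 1) WITH LOG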

Now here's my issue: I have a stored procedure that runs two queries (inside the same SP). Sometimes the first query will succeed while the second one causes an error of severity 16. Again, in .NET that throws an exception, leaving me unable to fetch the first resultset (which I need for logging purposes). If the error were, let's say, severity 9, I could simply subscribe to the message event to get my error, yet still be able to get the first result set. But unfortunately life isn't perfect, and the error is indeed severity 16.

Anyone know of a way to be able to get the first result set, even if the second query (within the same SP) completely fails?

Why don't you wrap the two SQL statements in a transaction, so that if one fails, all attempted changes will be rolled back? You certainly wouldn't want something to work half-way; it should be an all-or-nothing deal. If you don't mind it working half-way, then perhaps the SQL should be put into separate calls.

|||Well, I answered my own question: nothing is stopping me from accessing the data. If using a DataReader, the exception isn't thrown until you move to the resultset that errored out. If using a DataSet with multiple tables, the tables that succeeded are filled just fine as normal, and the ones that failed have 0 rows. Not the behavior I expected, but it did the trick.

As to why a non-atomic process is wrapped in a single stored procedure: it's because the SP was made for a scheduling package that's relatively limited in features and can only call one SP on its trigger. A bit silly, but I'm not the one who set that up originally; I'm just handling a migration. The requirement here was to log what succeeded so we could hand the log over to a supplier or some such.

Anyway, seems like ADO.NET handles that requirement just fine after all. I'm surprised, honestly.
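For completeness, a minimal sketch of the shape of procedure involved (table and procedure names are invented): the first SELECT returns a result set normally, the second fails at runtime with a severity-16 error, and the client can still read the first result set until it advances to the second.

-- Hypothetical procedure reproducing the scenario above.
CREATE PROCEDURE dbo.TwoStepJob
AS
BEGIN
    -- Result set 1: succeeds; this is the part needed for logging.
    SELECT JobId, StepName, ProcessedAt
    FROM dbo.JobLog

    -- Result set 2: fails with a severity 16 conversion error at runtime.
    -- With a SqlDataReader the exception only appears on NextResult();
    -- with a DataSet the first table is still filled.
    SELECT CONVERT(int, 'not a number') AS WillFail
END
GO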

feeding results from one SP to another

Hi,
I need an SP to get the resultset output of another SP.
ie
create Procedure A
as
begin
select * from Area
end
Now in procedure B, how can I get these results into a cursor for processing?
Also, to throw another slant on it: what if Procedure A is in a remote
database, does the same logic apply?
Thanks,
Steve
(similar to a previous post today - sorry)

One way to do this would be to use global temporary tables (the ## prefix). But this is more of a procedural programming approach, which is better suited to the client side than the database side.
Could you please elaborate on what you are trying to do?
Here is some sample code.
HTH...
use pubs
go
-- =============================================
-- Create procedure basic template
-- =============================================
-- creating the stored procedure
IF EXISTS (SELECT name
           FROM sysobjects
           WHERE name = N'mytestprocA'
             AND type = 'P')
    DROP PROCEDURE dbo.mytestprocA
GO
CREATE PROCEDURE dbo.mytestprocA
AS
    -- drop any leftover copy of the global temp table before recreating it
    IF OBJECT_ID('tempdb..##global_temp') IS NOT NULL
        DROP TABLE ##global_temp
    -- materialize the result set where other sessions/procedures can read it
    SELECT TOP 5 * INTO ##global_temp FROM authors
GO
-- =============================================
-- example to execute the stored procedure
-- =============================================
EXECUTE dbo.mytestprocA
GO

use pubs
go
-- =============================================
-- Create procedure basic template
-- =============================================
-- creating the stored procedure
IF EXISTS (SELECT name
           FROM sysobjects
           WHERE name = N'mytestprocB'
             AND type = 'P')
    DROP PROCEDURE dbo.mytestprocB
GO
CREATE PROCEDURE dbo.mytestprocB
AS
    -- walk the rows that mytestprocA left in the global temp table
    DECLARE temp_Cursor CURSOR FOR
        SELECT * FROM ##global_temp
    OPEN temp_Cursor
    FETCH NEXT FROM temp_Cursor
    WHILE @@FETCH_STATUS = 0
    BEGIN
        FETCH NEXT FROM temp_Cursor
    END
    CLOSE temp_Cursor
    DEALLOCATE temp_Cursor
GO
-- =============================================
-- example to execute the stored procedure
-- =============================================
EXECUTE dbo.mytestprocB
GO
http://zulfiqar.typepad.com
BSEE, MCP
|||Best way to do this is to change proc A to a function.
2nd best:
insert #table
Exec A
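Fleshed out a little, both suggestions might look something like this (the temp table's column list is assumed to match dbo.Area, and fnArea is a made-up name):

-- Option 1 (the "2nd best" above): capture proc A's result set with INSERT ... EXEC,
-- then work with the temp table however you like.
CREATE TABLE #AreaCopy (AreaId int, AreaName varchar(50))   -- columns assumed to match Area
INSERT INTO #AreaCopy
EXEC dbo.A

SELECT * FROM #AreaCopy
GO

-- Option 2 (preferred): rewrite A as an inline table-valued function,
-- which procedure B can select from or join to directly.
CREATE FUNCTION dbo.fnArea()
RETURNS TABLE
AS
RETURN (SELECT * FROM dbo.Area)
GO

SELECT * FROM dbo.fnArea()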
I would also ask what you are using the cursor for, but that is only because cursors are evil :) Seriously, almost every use for a cursor has a far easier-to-manage set-based solution, and that is what we are here to help you with.
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)

Friday, March 23, 2012

Feed stored procedure with SELECT resultset

I have two SQL Server stored procedures, PROC1 and PROC2. PROC1 has
about 50 input parameters. PROC2 is the main procedure that does some
data modifications and afterwards calls PROC1 using an EXECUTE
statement.

The input parameter values for PROC1 are stored in a table in my database. What I would like to do is pass those values to PROC1 using a SELECT statement. Currently, all 50 parameters are read and stored in variables, and afterwards they are passed to PROC1 using:

EXEC spPROC1 @var1, @var2, @var3, ... , @var50

Since it is a lot of code to declare and assign 50 variables, I was wondering whether it is possible to run a statement like:

EXEC spPROC1 (SELECT * FROM myTable WHERE id = 2)

Any help on this is greatly appreciated!

You could build up a dynamic SQL statement and execute it that way, but I think it would be vastly better if your spPROC1 read its inputs from a table, rather than taking them as parameters.

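A rough sketch of that table-driven approach (procedure, column and variable names here are made up; the real spPROC1 has 50 of them):

-- Hypothetical rework: the procedure receives only the row id and reads
-- its former parameters from the table itself.
CREATE PROCEDURE dbo.spPROC1_FromTable
    @id int
AS
BEGIN
    DECLARE @var1 int, @var2 varchar(50)    -- ...through @var50; types assumed

    SELECT @var1 = col1,
           @var2 = col2
           -- ...remaining columns...
    FROM dbo.myTable
    WHERE id = @id

    -- original body of spPROC1 continues here, using the local variables
END
GO

-- The caller (PROC2) now passes only the id:
EXEC dbo.spPROC1_FromTable @id = 2
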
Friday, March 9, 2012

Faster select count(*)

About 50% of the search time is spent just counting the number of rows in the resultset. We page the resultset and show only one page at a time. Is there any way we can count the number of rows in the resultset faster?
Casper

More information, please -- counting what result set? Are you doing this on the client side? Can you show the code you're currently using?
Adam Machanic
SQL Server MVP
http://www.datamanipulation.net
|||select rowcnt from sysindexes
where id = object_id(N'[table]') and indid = 1
(indid = 1) is the PK constraint, I believe, so this may not work for heaps; other indexes don't appear to track rowcnt. Also, I don't think the rowcnt field is kept 100% current, so it's probably not accurate enough for your needs. But it's there anyways ;)
- KH
|||http://www.aspfaq.com/2120
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.
|||> select rowcnt from sysindexes
> where id = object_id(N'[table]') and indid = 1
That will give you a rough count of what's in the entire table. However, the OP said they wanted the count of the resultset... which is likely a combination of a join between multiple tables and a filtered where clause.

|||Yeah, the original question is pretty ambiguous.
In the scenario you suggest, using @@ROWCOUNT after the select might work for this application.
It all depends, eh? ;)
- KH

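Expanding on the @@ROWCOUNT idea: if the filtered rows are materialized once, the total and the page can come from the same pass (all table and column names here are invented):

DECLARE @total int

-- Materialize the filtered/joined rows once.
SELECT o.OrderID, o.OrderDate, c.CompanyName
INTO #matches
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20040101'

SET @total = @@ROWCOUNT      -- row count of the SELECT ... INTO just executed

-- Return one page plus the total for the pager.
SELECT TOP 20 * FROM #matches ORDER BY OrderDate DESC
SELECT @total AS TotalRows

DROP TABLE #matches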
Faster Indexes using order by statement

Hello all,
I have a specific sql statement and I am looking to return the result
set a lot faster. Currently the setup is like so:
myTable schema:
Counter      decimal(9)      (pk)
Machine      varchar(60)
LogEntry     varchar(1000)
Active       varchar(50)
SysInfo      varchar(255)
Idle         varchar(50)
IP           varchar(15)
KioskDate    datetime
KioskTime    datetime
ServerDate   datetime
ServerTime   datetime
Application  varchar(15)
WebDomain    varchar(50)
NSCode       varchar(10)
There is a clustered index on Counter and two non-clustered:
1) Counter & ServerDate - unique values, fill factor 90% on PRIMARY
2) Counter, Machine, NSCode, KioskDate, KioskTime - unique values, fill factor 90% on PRIMARY

SQL statement:
select top 1000 machine, logentry,
convert(varchar(8),kioskdate,5) as Kiosk_date,
convert(varchar(8),kiosktime,8) as Kiosk_time,
convert(varchar(8),serverdate,5) as ServerDate,
convert(varchar(8),serverdate,8) as ServerTime,
application,
nscode,
webdomain
from myTable
where machine = 'machinename'
order by kioskdate desc, kiosktime desc
Currently the table holds over 18 million records and the above search returns in under 3 seconds; however, when I change the ORDER BY to ServerDate desc only (which is what I need), it takes over four minutes. I have tried altering/tweaking the indexes, but have had no success.
Any ideas greatly appreciated.
Thanks
Scott

The only index that would be useful for that query is
Counter, Machine, NSCode, KioskDate, KioskTime
and that only for a scan, as Counter is the first field.
How unique is machinename? If it's reasonably selective then put an index on that.
Not sure how the optimiser would handle it, but try an index on
machinename, kioskdate desc, kiosktime desc
This will be in the order of the required resultset, and the server could take the top thousand entries for the machinename without any other processing. Doubt it will do that, but it's worth a try.
You might be able to give it a hint by finding the range of dates required and using that.

|||Scott,
neither of your nonclustered indexes is ever going to be very useful. This is because Counter is a unique column and is already indexed by the clustered primary key.
As Nigel suggested, if "machine = 'machinename'" is highly selective
(returns just a few percent of all rows), then your query could benefit
from an index on (machinename, kioskdate, kiosktime). Specifying
ascending or descending in the index definition is not useful for your
query.
If the expression "machine = 'machinename'" returns a very high
percentage of all rows, then the query could benefit from an index on
(kioskdate, kiosktime, machinename).
You could also try the Index Tuning Wizard and see what that comes up
with.
Hope this helps,
Gert-Jan
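
For what it's worth, a sketch of the index described above, plus the analogous one for the ORDER BY serverdate query actually needed (index names are made up; fill factor matches the existing 90%):

-- Index suited to WHERE machine = '...' ORDER BY kioskdate, kiosktime
CREATE NONCLUSTERED INDEX IX_myTable_Machine_Kiosk
ON myTable (Machine, KioskDate, KioskTime)
WITH FILLFACTOR = 90

-- Analogous index for the ORDER BY serverdate desc variant; whether the
-- optimizer uses it still depends on how selective Machine is.
CREATE NONCLUSTERED INDEX IX_myTable_Machine_ServerDate
ON myTable (Machine, ServerDate)
WITH FILLFACTOR = 90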
|||Hi,
many thanks for all your replies; however, I'm still having no joy on this. Whatever non-clustered indexes I create, searching by kioskdate and time (desc) is still way faster than searching by serverdate (desc), which is what I want.
Using the Index Tuning Wizard gives me nothing whichever way I do it
(create new indexes, or scan with existing indexes).
Back to the drawing board methinks!
Cheers
Scott