Hello
I'm asking which approach gives better performance: putting the following
query in one stored procedure, or splitting it into two:
-- ONE
IF (@FindType = 1) /* find full word */
BEGIN
    SELECT f_English FROM t_Dictionary
    WHERE f_NonEnglish = @parWord
END
ELSE /* partial find */
BEGIN
    SELECT f_English FROM t_Dictionary
    WHERE f_NonEnglish LIKE (@parWord + '%')
END
or to split it into two independent stored procedures:
-- first:
SELECT f_English FROM t_Dictionary
WHERE f_NonEnglish = @parWord

-- and second:
SELECT f_English FROM t_Dictionary
WHERE f_NonEnglish LIKE (@parWord + '%')
I need speed!
Thanks!

Hi
Makes no difference. Make sure f_NonEnglish is indexed.
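Something along these lines should do (just a sketch with an invented index
name; INCLUDE needs SQL Server 2005, on SQL 2000 you would add f_English as
a second key column instead):

-- Both the equality test and the LIKE with only a trailing wildcard
-- can seek on this index.
CREATE NONCLUSTERED INDEX IX_Dictionary_NonEnglish
    ON t_Dictionary (f_NonEnglish)
    INCLUDE (f_English)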
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"MilanB" <MilanB@.discussions.microsoft.com> wrote in message
news:084BC180-BBE6-4AF6-B307-29966653D4BF@.microsoft.com...
> Hello
> I ask which approach gives better speed results. To put next query in one
> stored procedure, or to split in two:
> //ONE
> IF(@.FindType=1) /*Find Full word*/
> BEGIN
> SELECT f_English FROM t_Dictionary
> WHERE f_NonEnglish = @.parWord
> END
> ELSE /*Partial find*/
> BEGIN
> SELECT f_English FROM t_Dictionary
> WHERE f_NonEnglish LIKE (@.parWord+'%')
> END
> or split it in two independent stored procedures
> //first:
> SELECT f_English FROM t_Dictionary
> WHERE f_NonEnglish = @.parWord
> //and second
> SELECT f_English FROM t_Dictionary
> WHERE f_NonEnglish LIKE (@.parWord+'%')
>
> I need speed!

I have seen many times that a parameter in the WHERE clause makes a query
slow in stored procedures and functions.
So first check the speed of your query in Query Analyzer, and then in the
stored procedure.
If the query runs fast in Query Analyzer but not in the stored procedure,
execute it as dynamic SQL inside the procedure.
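For example, something like this inside the procedure (a sketch only; note
that concatenating @parWord into the string is open to SQL injection unless
the input is sanitized first):

DECLARE @sql nvarchar(4000)
-- Build and run the statement as an ad-hoc batch, so it is compiled
-- much like when it is pasted into Query Analyzer.
SET @sql = N'SELECT f_English FROM t_Dictionary ' +
           N'WHERE f_NonEnglish LIKE ''' + @parWord + N'%'''
EXEC (@sql)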
Make sure the proper indexes are there.
Regards
Amish

I think that dynamic SQL is best avoided at all costs. If you find that
your query is running slowly in a stored proc but quickly in Query Analyzer,
first check your indexes (as already suggested), then look at the code of
the query to see if it's really as optimal as it can be. Don't forget that
sometimes splitting a query in two can bring a dramatic performance
improvement. Finally, use WITH RECOMPILE on the procedure.
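That is, something like this (a sketch; the procedure name is invented):

CREATE PROCEDURE usp_FindWord
    @parWord nvarchar(50)
WITH RECOMPILE   -- compile a fresh plan on every execution
AS
SELECT f_English FROM t_Dictionary
WHERE f_NonEnglish LIKE @parWord + '%'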
Regards
Colin Dawson
www.cjdawson.com
"amish" <shahamishm@.gmail.com> wrote in message
news:1135428805.710381.226300@.g14g2000cwa.googlegroups.com...
>I have seen many times the parameter in where clause makes query slow
> in stroe procedure and functions.
> So first check speed of you query in query analyzer and then in store
> procedures.
> If query run fast in query anlyzer than in store proc then execute your
> query as dynamic sql in store proc.
> Make sure proper index are there.
> Regards
> Amish

Yes Colin,
one of my stored procs, which was about 1000 lines, was taking about 5
minutes to complete.
But when I split it into 4 different stored procs, the time they all
took to complete was only 1 minute.
Always try to make your procedures as short as possible.
Regards

amish (shahamishm@gmail.com) writes:
> one of my stored procs, which was about 1000 lines, was taking about 5
> minutes to complete.
> But when I split it into 4 different stored procs, the time they all
> took to complete was only 1 minute.
> Always try to make your procedures as short as possible.
Nah, being guilty of procedures that are even longer than 1000 lines,
I cannot agree. As with so many other things in the database world, the
answer is "It depends".
Most of our long procedures are updating procedures that encapsulate quite
a bit of business logic. (We are strong adherents of the idea that the
business logic should be where the data is.)
But if you have a procedure which goes like:
IF @cond1 IS NOT NULL AND @cond2 IS NULL
   SELECT ...
   FROM   tbl
   WHERE  col1 = @cond1
ELSE IF @cond2 IS NOT NULL
   SELECT ...
   FROM   tbl
   WHERE  col2 = @cond2
And, on top of that, has things like
IF @todate IS NULL
   SELECT @todate = getdate()
Then there is reason to split up the code into several procedures. The
reason for this is parameter sniffing. SQL Server builds the query
plan for a stored procedure each time there is no plan for it in cache.
It looks at the parameter values for that particular call, and uses
these as guidance. This means that if you have lots of branches
with various conditions, all will get their plans from those input values.
But the search on ProductID may not get an optimal plan if @productid
is NULL. And in the case where an input parameter is replaced with a
default value, as in the @todate example, the parameter sniffing is not
good at all.
So for these reasons, it may be a good idea to have a main procedure
that only looks at the parameter values and then calls various sub-
procedures that each have their specific queries. Or you just build
dynamic SQL instead, if your security policy permits that. As the
number of input conditions increases, this becomes about the only
manageable possibility.
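A minimal sketch of that dispatcher pattern (procedure names invented for
the example):

CREATE PROCEDURE search_main @cond1 int = NULL, @cond2 int = NULL AS
IF @cond1 IS NOT NULL AND @cond2 IS NULL
   EXEC search_by_col1 @cond1   -- each sub-procedure holds one query,
ELSE IF @cond2 IS NOT NULL      -- so each gets a plan sniffed from the
   EXEC search_by_col2 @cond2   -- parameters it actually uses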
Also see my article http://www.sommarskog.se/dyn-search.html.
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Thank you all for answering.
Regards
Milan

I also agree that there's no set rule on this. Keeping your procedures
short is a good rule of thumb, but there are always exceptions.
My personal view is to be more strict about keeping to a single execution
path within a stored proc. There's no reason why a stored proc cannot call
a separate stored procedure, so you can build some complicated business
logic from several much simpler stored procedures. With the CLR integration
in SQL 2005 this will become the standard way of building some pretty
complicated solutions.
The most important thing is to always try to write code which is recompiled
infrequently, whilst executing at the maximum potential of the database. It
is also important to make sure that any stored procedure uses as few
resources as possible, to obtain maximum scalability. Since developers are
always writing bespoke procedures, there cannot be any hard and fast
rules: the moment one is set, there will be a situation which defies it.
Regards
Colin Dawson
www.cjdawson.com
"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns9736AB4BE5DACYazorman@.127.0.0.1...
> amish (shahamishm@.gmail.com) writes:
> Nah, being guilty of procedures that are even longer than 1000 lines,
> I cannot agree. As with so many other things in the database world, the
> answer is "It depends".
> Most of our long procedures are updating procedures that encapsulates
> quite
> a bit of business logic. (We are strong adherents of the idea that the
> business logic should be where the data is.)
> But if you have a procedure which goes like:
> IF @.cond1 IS NOT NULL AND @.cond2 IS NULL
> SELECT ...
> FROM tbl
> WHERE col1 = @.cond1
> ELSE IF @.cond2 IS NOT NULL
> SELECT ...
> FROM tbl
> WHERE col2 = @.cond2
> And even better has things like
> IF @.todate IS NULL
> SELECT @.todate = getdate()
> Then there is reason to split up the code into several procedures. The
> reason for this is parameter sniffing. SQL Server builds the query
> plan for a stored procedure each time there is no plan for it in cache.
> It looks at the parameter values for that particular call, and uses
> these as guidance. This means if that if you have lots of branches
> with various conditions, all will get their plan from those input values.
> But the search on ProductID may not get an optimal plan, if @.productid
> is NULL. And in the case an input parameter is replaced with a default
> values, as in the @.todate example, the parameter sniffing is not good at
> all.
> So for these reasons, it may be a good idea to have a main procedure
> that only looks at parameter values and then calls various sub-
> procedures that all have their specific queries. Or you just build
> dynamic SQL instead, if you security policy permits that. As the
> number of input conditions increases this becomes about the only
> manageable
> possibility.
> Also see my article http://www.sommarskog.se/dyn-search.html.
>
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server 2005 at
> http://www.microsoft.com/technet/pr...oads/books.mspx
> Books Online for SQL Server 2000 at

Colin Dawson (newsgroups@cjdawson.com) writes:
> My personal view is to be more strict about keeping to a single
> execution path within a stored proc. There's no reason why a stored
> proc cannot call a separate stored procedure, so you can build some
> complicated business logic from several much simpler stored procedures.
There is, however, one problem: T-SQL does not lend itself extremely well
to this practice. If you split up logic between procedures, you need
parameters, and T-SQL procedures indeed have them. But those parameters
are scalar only, whereas in a database you rather work with tables.
There are ways to share data through tables between stored procedures
(see http://www.sommarskog.se/share_data.html for a discussion), but
no method is entirely satisfying. Particularly in SQL 2000, you easily
end up with recompilations that can be costly.
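One of the methods discussed there is sharing a temp table between caller
and callee; a minimal sketch with invented names:

CREATE PROCEDURE process_orders AS
   -- The callee can read a temp table created by its caller.
   SELECT order_id, amount FROM #orders
GO
CREATE PROCEDURE main_proc AS
   CREATE TABLE #orders (order_id int NOT NULL, amount money NOT NULL)
   INSERT #orders (order_id, amount) VALUES (1, 10)
   EXEC process_orders
GO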
> With the CLR integration in SQL 2005 this will become the standard way of
> building some pretty complicated solutions.
Depends on what your business logic does. For our part, I don't see that
we will make any massive move to the CLR. T-SQL is still the best for
handling data, which is mainly what our business logic is about.
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

>> Or you just build
dynamic SQL instead, if your security policy permits that. As the
number of input conditions increases, this becomes about the only
manageable possibility.
<<
Erland,
I almost completely agree with what you are saying, but sometimes dynamic
SQL is the best option even if there are only two parameters:

select sum(amount), count(*) from some_table
where some_date between @date_from and @date_to

If the index on some_date is non-clustered and the table is big
enough, it's better to let the optimizer choose between a table scan and
bookmark lookups every time the query runs.
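For example (a sketch, assuming some_date is a datetime; inlining the dates
as literals is what lets the optimizer see the actual range on each run, at
the cost of plan reuse):

DECLARE @sql nvarchar(4000)
-- Embed the boundary values as literals so that each execution is
-- optimized for the actual date range.
SET @sql = N'select sum(amount), count(*) from some_table' +
           N' where some_date between ''' +
           CONVERT(nvarchar(23), @date_from, 121) + N''' and ''' +
           CONVERT(nvarchar(23), @date_to, 121) + N''''
EXEC (@sql)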