
Monday, March 26, 2012

Fetch rows affected in a table

Hi,
I'm pretty new to this community and here's my question. Let's suppose I have executed a DML statement on a table (inserted a single row). The table has no identity, date, or time columns, just a primary key column. I wish to retrieve the record that I or anybody else inserted last into that table, or even to fetch all records inserted into that table in the last 10 seconds. Can anybody here help me out with this, please?

Quote:

Originally Posted by dev177

Hi,
I'm pretty new to this community and here's my question. Let's suppose I have executed a DML statement on a table (inserted a single row). The table has no identity, date, or time columns, just a primary key column. I wish to retrieve the record that I or anybody else inserted last into that table, or even to fetch all records inserted into that table in the last 10 seconds. Can anybody here help me out with this, please?


Yeah, I have the same question. Is there any way to find the time of insertion of a particular row?

Quote:

Originally Posted by srinit

Yeah, I have the same question. Is there any way to find the time of insertion of a particular row?


You could add UpdatedByID and LastUpdateDate columns, set UpdatedByID to the user's username, and set LastUpdateDate to GETDATE() (or simply make GETDATE() the default for LastUpdateDate).
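
A minimal sketch of those audit columns (the table name here is a placeholder, not something from the original post):

ALTER TABLE YourTable ADD
    UpdatedByID    sysname  NOT NULL DEFAULT SUSER_SNAME(),  -- who inserted the row
    LastUpdateDate datetime NOT NULL DEFAULT GETDATE()        -- when it was inserted

-- The last-inserted row is then
--   SELECT TOP 1 * FROM YourTable ORDER BY LastUpdateDate DESC
-- and "inserted in the last 10 seconds" becomes
--   SELECT * FROM YourTable WHERE LastUpdateDate >= DATEADD(second, -10, GETDATE())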

Fetch limited rows sequentially

Hello Friends,
I want to fetch a limited number of rows at a time from a single query. For example, the emp table has 1000 rows; I want to fetch the first 10 rows and store them in a file, then on the next iteration fetch the next 10 rows and store them in another file, and so on, creating a separate file for each batch of 10 rows.
Thanks in advance
Awaiting an early reply.
Regards,
Nitin

Hi Nitin,

SELECT *
FROM (SELECT TOP 10 *
      FROM (SELECT TOP 20 *
            FROM emp
            ORDER BY LName ASC) AS t1
      ORDER BY LName DESC) AS t2
ORDER BY LName

This SELECT returns the 10 rows after the first ten from the emp table. Just loop through, increasing TOP 20 to TOP 30 and so on, to get each interval of ten employees; a sketch of that loop follows below.
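
A rough sketch of that loop, driven from T-SQL with dynamic SQL (the batch count of 10 and the emp/LName names simply follow the example above; writing each result set to its own file would happen outside the query, e.g. with bcp):

DECLARE @batch int, @sql nvarchar(4000)
SET @batch = 1
WHILE @batch <= 10
BEGIN
    -- Widen the inner TOP by 10 on every pass and keep only the last 10 rows.
    SET @sql = N'SELECT * FROM
                   (SELECT TOP 10 * FROM
                      (SELECT TOP ' + CAST(@batch * 10 AS nvarchar(10)) + N' *
                       FROM emp ORDER BY LName ASC) AS t1
                    ORDER BY LName DESC) AS t2
                 ORDER BY LName'
    EXEC (@sql)
    SET @batch = @batch + 1
END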

Hope this helps! :)
Robert

Friday, March 23, 2012

Federated database VS single database server

Is there any expert here in designing federated databases? I need a reference on designing a federated database, and also a comparison of a federated database vs. a single database server. I need this information to decide which solution I should go for.
Best regards,
Tawana

Check out the Microsoft Patterns and Practices web site. There is a white
paper there on designing federated databases along with best practices and
comparisons.
--
HTH,
SriSamp
Please reply to the whole group only!
http://www32.brinkster.com/srisamp
"tawana" <anonymous@.discussions.microsoft.com> wrote in message
news:38841121-069F-487F-A2C5-082ACA97A93B@.microsoft.com...
> Is there any expert in designing federated database ? I need a reference
in designing a federated database. Also a comparison - Federated database
VS single database server. I need a information in decided what solution i
should go for.
>
> Best regards,
> Tawana

Why are you considering federated DBs? What are you
trying to achieve? Then we may be able to help you better.
>--Original Message--
>Is there any expert in designing federated database ? I
need a reference in designing a federated database. Also
a comparison - Federated database VS single database
server. I need a information in decided what solution i
should go for.
>
>Best regards,
>Tawana
>.
>

Using a federation of servers comes with its own set of problems... MS
recommends ONLY using a federation when you have already scaled up, meaning
you have bought the biggest, baddest SMP box, and that box isn't big
enough...
Make sure you have exhausted the alternatives before jumping into
federations...
--
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"tawana" <anonymous@.discussions.microsoft.com> wrote in message
news:38841121-069F-487F-A2C5-082ACA97A93B@.microsoft.com...
> Is there any expert in designing federated database ? I need a reference
in designing a federated database. Also a comparison - Federated database
VS single database server. I need a information in decided what solution i
should go for.
>
> Best regards,
> Tawana

Monday, March 12, 2012

Fastest way to delete hundreds of table triggers and hundreds of stored procedures?

How can I delete all user stored procedures and all table triggers very quickly
in a single database?

Thank you

You can run the following script in Query Analyzer. Be certain you are in
the correct database.

USE MyDatabase

-- Build a DROP statement for every stored procedure and trigger in the
-- database and execute each one through a cursor.
DECLARE @DropStatement nvarchar(4000)
DECLARE DropStatements CURSOR
    LOCAL FAST_FORWARD READ_ONLY FOR
    SELECT N'DROP ' +
           CASE xtype
               WHEN 'P' THEN N'PROCEDURE '
               WHEN 'TR' THEN N'TRIGGER '
           END +
           QUOTENAME(USER_NAME(uid)) +
           N'.' +
           QUOTENAME(name)
    FROM sysobjects
    WHERE xtype IN ('P', 'TR')
OPEN DropStatements
WHILE 1 = 1
BEGIN
    FETCH NEXT FROM DropStatements INTO @DropStatement
    IF @@FETCH_STATUS = -1 BREAK
    EXEC(@DropStatement)
END
CLOSE DropStatements
DEALLOCATE DropStatements

--
Hope this helps.

Dan Guzman
SQL Server MVP

"serge" <sergea@.nospam.ehmail.com> wrote in message
news:VRP%c.25102$lP4.1520160@.news20.bellglobal.com ...
> How can i delete all user stored procedures and all table triggers very
> fast
> in
> a single database?
> Thank you

I will try this today.

Thank you

> You can run the following script in Query Analyzer. Be certain you are in
> the correct database.
> USE MyDatabase
> DECLARE @.DropStatement nvarchar(4000)
> DECLARE DropStatements CURSOR
> LOCAL FAST_FORWARD READ_ONLY FOR
> SELECT N'DROP ' +
> CASE xtype
> WHEN 'P' THEN N'PROCEDURE '
> WHEN 'TR' THEN N'TRIGGER '
> END +
> QUOTENAME(USER_NAME(uid)) +
> N'.' +
> QUOTENAME(name)
> FROM sysobjects
> WHERE xtype IN('P', 'TR')
> OPEN DropStatements
> WHILE 1 = 1
> BEGIN
> FETCH NEXT FROM DropStatements INTO @.DropStatement
> IF @.@.FETCH_STATUS = -1 BREAK
> EXEC(@.DropStatement)
> END
> CLOSE DropStatements
> DEALLOCATE DropStatements

"serge" <sergea@.nospam.ehmail.com> wrote in message
news:oEg0d.30007$lP4.1961638@.news20.bellglobal.com ...
> I will try this today.

As we discover Bank of America goes offline since their databases somehow
lost their entire schema in a hacker attack. :-)

> Thank you
> > You can run the following script in Query Analyzer. Be certain you are
in
> > the correct database.
> > USE MyDatabase
> > DECLARE @.DropStatement nvarchar(4000)
> > DECLARE DropStatements CURSOR
> > LOCAL FAST_FORWARD READ_ONLY FOR
> > SELECT N'DROP ' +
> > CASE xtype
> > WHEN 'P' THEN N'PROCEDURE '
> > WHEN 'TR' THEN N'TRIGGER '
> > END +
> > QUOTENAME(USER_NAME(uid)) +
> > N'.' +
> > QUOTENAME(name)
> > FROM sysobjects
> > WHERE xtype IN('P', 'TR')
> > OPEN DropStatements
> > WHILE 1 = 1
> > BEGIN
> > FETCH NEXT FROM DropStatements INTO @.DropStatement
> > IF @.@.FETCH_STATUS = -1 BREAK
> > EXEC(@.DropStatement)
> > END
> > CLOSE DropStatements
> > DEALLOCATE DropStatements

Well, at least the tables are still there :-)

--
Hope this helps.

Dan Guzman
SQL Server MVP

"Greg D. Moore (Strider)" <mooregr_deleteth1s@.greenms.com> wrote in message
news:Xxh0d.21421$2s.7550@.twister.nyroc.rr.com...
> "serge" <sergea@.nospam.ehmail.com> wrote in message
> news:oEg0d.30007$lP4.1961638@.news20.bellglobal.com ...
>> I will try this today.
>
> As we discover Bank of America goes offline since their databases somehow
> lost their entire schema in a hacker attack. :-)
>
>>
>> Thank you
>>
>> > You can run the following script in Query Analyzer. Be certain you are
> in
>> > the correct database.
>>> > USE MyDatabase
>> > DECLARE @.DropStatement nvarchar(4000)
>> > DECLARE DropStatements CURSOR
>> > LOCAL FAST_FORWARD READ_ONLY FOR
>> > SELECT N'DROP ' +
>> > CASE xtype
>> > WHEN 'P' THEN N'PROCEDURE '
>> > WHEN 'TR' THEN N'TRIGGER '
>> > END +
>> > QUOTENAME(USER_NAME(uid)) +
>> > N'.' +
>> > QUOTENAME(name)
>> > FROM sysobjects
>> > WHERE xtype IN('P', 'TR')
>> > OPEN DropStatements
>> > WHILE 1 = 1
>> > BEGIN
>> > FETCH NEXT FROM DropStatements INTO @.DropStatement
>> > IF @.@.FETCH_STATUS = -1 BREAK
>> > EXEC(@.DropStatement)
>> > END
>> > CLOSE DropStatements
>> > DEALLOCATE DropStatements
>>
>>

What's the fastest way to delete tables? :)

> Well, at least the tables are still there :-)

I ran this and it was fast; it's what I was looking for.

I am trying to figure out how to delete ONLY the user objects and not the
system objects.

By changing the WHERE condition to become:

WHERE xtype IN('P', 'TR') AND OBJECTPROPERTY (Id, 'IsMSShipped') = 0

I tried running it and it seems it did not delete the Stored Procedures with
TYPE = System.

I just want to make sure this condition I am using :

AND OBJECTPROPERTY (Id, 'IsMSShipped') = 0

is the right one, that I am not screwing other things unknowingly?

Thank you

"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:%uQ%c.9387$yp2.8834@.newssvr30.news.prodigy.co m...
> You can run the following script in Query Analyzer. Be certain you are in
> the correct database.
> USE MyDatabase
> DECLARE @.DropStatement nvarchar(4000)
> DECLARE DropStatements CURSOR
> LOCAL FAST_FORWARD READ_ONLY FOR
> SELECT N'DROP ' +
> CASE xtype
> WHEN 'P' THEN N'PROCEDURE '
> WHEN 'TR' THEN N'TRIGGER '
> END +
> QUOTENAME(USER_NAME(uid)) +
> N'.' +
> QUOTENAME(name)
> FROM sysobjects
> WHERE xtype IN('P', 'TR')
> OPEN DropStatements
> WHILE 1 = 1
> BEGIN
> FETCH NEXT FROM DropStatements INTO @.DropStatement
> IF @.@.FETCH_STATUS = -1 BREAK
> EXEC(@.DropStatement)
> END
> CLOSE DropStatements
> DEALLOCATE DropStatements
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "serge" <sergea@.nospam.ehmail.com> wrote in message
> news:VRP%c.25102$lP4.1520160@.news20.bellglobal.com ...
> > How can i delete all user stored procedures and all table triggers very
> > fast
> > in
> > a single database?
> > Thank you

>> Well, at least the tables are still there :-)

"serge" <sergea@.nospam.ehmail.com> wrote in news:BRJ0d.35921$lP4.2446119
@.news20.bellglobal.com:

> What's the fastest way to delete tables? :)

USE master
DROP DATABASE CriticalFinancialInfo
GO

> AND OBJECTPROPERTY (Id, 'IsMSShipped') = 0

Yes, this is correct. I should have included this in the script I posted.
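
For a dry run, the same SELECT from the script above, with that filter added, lists what would be dropped without executing anything:

SELECT N'DROP ' +
       CASE xtype WHEN 'P' THEN N'PROCEDURE ' WHEN 'TR' THEN N'TRIGGER ' END +
       QUOTENAME(USER_NAME(uid)) + N'.' + QUOTENAME(name)
FROM sysobjects
WHERE xtype IN ('P', 'TR')
  AND OBJECTPROPERTY(id, 'IsMSShipped') = 0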

--
Hope this helps.

Dan Guzman
SQL Server MVP

"serge" <sergea@.nospam.ehmail.com> wrote in message
news:tdK0d.36158$lP4.2450872@.news20.bellglobal.com ...
>I ran this and it was fast, it's what i was looking for.
> I am trying to figure out how to delete ONLY the user objects and not the
> system objects.
> By changing the WHERE condition to become:
>
> WHERE xtype IN('P', 'TR') AND OBJECTPROPERTY (Id, 'IsMSShipped') = 0
> I tried running it and it seems it did not delete the Stored Procedures
> with
> TYPE = System.
> I just want to make sure this condition I am using :
> AND OBJECTPROPERTY (Id, 'IsMSShipped') = 0
> is the right one, that I am not screwing other things unknowingly?
> Thank you
>
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:%uQ%c.9387$yp2.8834@.newssvr30.news.prodigy.co m...
>> You can run the following script in Query Analyzer. Be certain you are
>> in
>> the correct database.
>>
>> USE MyDatabase
>> DECLARE @.DropStatement nvarchar(4000)
>> DECLARE DropStatements CURSOR
>> LOCAL FAST_FORWARD READ_ONLY FOR
>> SELECT N'DROP ' +
>> CASE xtype
>> WHEN 'P' THEN N'PROCEDURE '
>> WHEN 'TR' THEN N'TRIGGER '
>> END +
>> QUOTENAME(USER_NAME(uid)) +
>> N'.' +
>> QUOTENAME(name)
>> FROM sysobjects
>> WHERE xtype IN('P', 'TR')
>> OPEN DropStatements
>> WHILE 1 = 1
>> BEGIN
>> FETCH NEXT FROM DropStatements INTO @.DropStatement
>> IF @.@.FETCH_STATUS = -1 BREAK
>> EXEC(@.DropStatement)
>> END
>> CLOSE DropStatements
>> DEALLOCATE DropStatements
>>
>> --
>> Hope this helps.
>>
>> Dan Guzman
>> SQL Server MVP
>>
>> "serge" <sergea@.nospam.ehmail.com> wrote in message
>> news:VRP%c.25102$lP4.1520160@.news20.bellglobal.com ...
>> > How can i delete all user stored procedures and all table triggers very
>> > fast
>> > in
>> > a single database?
>> > Thank you

My quest continues; maybe I should explain what I am trying to achieve.

There is a database with 3000+ stored procedures. We give copies to other
people, and we continue making updates to the *development* database. When we
want to give the other people our latest stored procedures, we have code that
deletes all stored procedures one by one, taking maybe 30 minutes.
Then we recreate all the SPs.

Now I wanted to find out if there was a way to speed up this process. I
originally thought that deleting all SPs in one shot could do the trick, but
the further I analyze it, the more complications I see.

For example, the other people could very well have created their own SPs;
how can I NOT delete those SPs?

Do you or anyone else have any idea how I can accomplish this?

Thank you

> > AND OBJECTPROPERTY (Id, 'IsMSShipped') = 0
> Yes, this is correct. I should have included this in the script I posted.

"serge" <sergea@.nospam.ehmail.com> wrote in
news:zZM0d.39176$lP4.2491585@.news20.bellglobal.com :

> My quest continues, maybe i should explain what i am trying to
> achieve.
> There is a database with 3000+ Stored Procedures. We give copies to
> other people
> and we continue making updates to the *development* database. When we
> want to give the other people our latest stored procedures, we have
> code that deletes all
> stored procedures one by one, thus taking maybe 30 minutes to delete.
> Then we recreate all the SPs.
> Now i wanted to find out if there was a way to speed this process, i
> originally thought
> that deleting all SPs one shot could do the trick. But the further i
> analyze it, i see some complications.
> For example, the other people could very well have created their own
> SPs, how can
> i NOT delete those SPs?

You *need* version control.

I suppose the simplest way without spending any money would be to add a
table with the current version number of each stored procedure. Your
"update db with new stored procedures" would have to check that version
number and decide whether to drop and recreate the stored procedure.

Taking the idea a step further, the version table would also contain the
complete source of the stored procedure. Then another stored procedure
could cursor over the table, doing the drops and creates. Distributing
your new procedures would involve exporting this table to an external file
(say an MDB) and distributing it to the other recipients.
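
A minimal sketch of the kind of version table Ross describes (all names and column choices are illustrative, not an established schema):

CREATE TABLE ProcVersions (
    ProcName   sysname NOT NULL PRIMARY KEY,
    Version    int     NOT NULL,
    ProcSource ntext   NOT NULL   -- full CREATE PROCEDURE text for this version
)

-- The update script would compare Version here against what is installed and
-- drop/recreate only those procedures whose shipped version is newer; a cursor
-- over this table (much like the drop script earlier in the thread) can then
-- run each stored CREATE statement with EXEC.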

serge (sergea@.nospam.ehmail.com) writes:
> My quest continues, maybe i should explain what i am trying to achieve.
> There is a database with 3000+ Stored Procedures. We give copies to
> other people and we continue making updates to the *development*
> database. When we want to give the other people our latest stored
> procedures, we have code that deletes all stored procedures one by one,
> thus taking maybe 30 minutes to delete. Then we recreate all the SPs.
> Now i wanted to find out if there was a way to speed this process, i
> originally thought that deleting all SPs one shot could do the trick.
> But the further i analyze it, i see some complications.
> For example, the other people could very well have created their own
> SPs, how can i NOT delete those SPs?

First of all, does their license permit them to add their own stored
procedures?

As Ross said, you need version control.

In our shop we have all our stored procedures, triggers, tables, in short
all database objects under version control in SourceSafe, and SourceSafe
is the master for all building efforts.

To build and install we have a toolset called AbaPerls. One tool is
DBBUILD, which builds an entire database from scripts, that is tables,
stored procedures, everything. DBBUILD is also what we use when we add
a new component, or subsystem as we call it, to the database. Then we
have another tool DBUPDGEN which reads SourceSafe, and finds all changes
between two labels and that produces an update script. (For changed tables
you get a template to move over the data, but in most cases you have to
modify the generated code.) Furthermore, AbaPerls has its own set of
tables, so we know what we have loaded into a database. There is also
a stored procedure which lists mismatches between AbaPerls and SQL Server's
own system tables. Customer-added code would end up there.

I have made AbaPerls available as freeware on http://www.abaris.se/abaperls/
But as Ross outlined, you can achieve something a lot simpler with quite
easy means, and it may be enough for your organization.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

I understand and I do agree your idea is very good. But at the moment, I am
trying and hoping that there is a simpler way: to rely on the WHERE condition,
using some custom field or something in the sysobjects table, to decide whether
that SP is one of our own SPs and not delete it.

Thanks for your idea. I'll have to keep this in mind, I might have to use
this
approach in the future.

> You *need* version control.
> I suppose the simplest way without spending any money would be to add a
> table with the current version number of each stored procedure. Your
> "update db with new stored procedures" would have to check that version
> number and decide whether to drop and recreate the stored procedure.
> Taking the idea a step further, the version table would also contain the
> complete source of the stored procedure. Then another stored procedure
> could cursor over the table, doing the drops and creates. Distributing
> your new procedures would involve exporting this table to an external file
> (say an MDB) and distributing it to the other recipients.

We do have SourceSafe on our end but, not to go into details here,
that wouldn't be a viable solution at the time being. Maybe in the future.

Like I just replied to Ross, at the moment I am trying and hoping that
there is a simpler way: to rely on the WHERE condition, using
some custom field or something in the sysobjects table, to decide whether
that SP is one of our own SPs and not delete it.

It's very much appreciated that you've made your AbaPerls toolset freeware for
everyone to use. I have taken note of the link and I will have to look at this
at great length in the future. At this moment, it's very hard to switch to
using tools for what I require. If I can get away with a simple WHERE condition
to NOT delete SPs not created by us, that's what I'll be most interested in using.

Due to time constraints, I don't want to add a new project to work on.
This task of deleting SPs fast is already not part of my regular work, add
to that I have to figure out ways to improve some slow SPs we have (again
not part of my regular work).

Thank you

> First of all, does their license permit them to add their own stored
> procedures?
> As Ross said, you need version control.
> In our shop we have all our stored procedures, triggers, tables, in short
> all database objects under version control in SourceSafe, and SourceSafe
> is the master for all building efforts.
> To build and install we have a toolset, called AbaPerls. One tool is
> DBBUILD which builds an entire database from scripts, that is tables,
> stored procedures, el todo. DBBUILD is also what we add when we add
> a new component, or subsystem as we call it, to the database. Then we
> have another tool DBUPDGEN which reads SourceSafe, and finds all changes
> between two labels and that produces an update script. (For changed tables
> you get a template to move over the data, but in most cases you have to
> modify the generated code.) Furthermore, AbaPerls has its own set of
> tables, so we know what we have loaded into a database. There is also
> a stored procedure which lists mismatches between AbaPerls and SQL
Server's
> own system tables. Customer-added code would end up there.
> I have made AbaPerls available as freeware on
http://www.abaris.se/abaperls/
> But as Ross outlined, you can achieve something a lot simpler with quite
> easy means, and it may be enough for your organization.

serge (sergea@.nospam.ehmail.com) writes:
> Like i just replied to Ross, at the moment, I am trying and hoping that
> there is a simpler way to rely on the WHERE Condition and using
> some custom field or something in the sysobjects table to decide whether
> that SP is one of our own SP and not delete it.

There is no such custom field. Possibly you could add some condition which
looked in syscomments for things you recognize, but that would be completely
bizarre to do.

You would have to have a list of known procedures to delete. Then again, that
is not very difficult to make efficient:

CREATE PROCEDURE drop_till_you_bop @procs ntext AS
-- @procs is a comma-separated list of procedure names; only names that
-- actually exist as stored procedures are dropped.
DECLARE @proc sysname
DECLARE drop_cur INSENSITIVE CURSOR FOR
    SELECT nstr FROM iter_charlist_to_tbl(@procs, DEFAULT) i
    JOIN sysobjects o ON i.nstr = o.name
    WHERE o.xtype = 'P'
OPEN drop_cur
WHILE 1 = 1
BEGIN
    FETCH drop_cur INTO @proc
    IF @@fetch_status <> 0
        BREAK
    EXEC ('DROP PROCEDURE ' + @proc)
END
CLOSE drop_cur
DEALLOCATE drop_cur

iter_charlist_to_tbl is on
http://www.sommarskog.se/arrays-in-...list-of-strings
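
A hypothetical call, assuming iter_charlist_to_tbl's default delimiter is a comma and that these procedure names exist in the database:

EXEC drop_till_you_bop N'usp_GetOrders,usp_SaveOrder,usp_DeleteOrder'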

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

As already mentioned, a manifest and/or version control is probably the best
long-term solution. In the interim, you might consider adding extended
properties to the objects you own so that you can more easily identify
these. You'll need to include the extended properties in your DDL scripts.

CREATE PROC MyProcedure
AS
SELECT 'MyProcedure'
GO

EXEC sp_addextendedproperty
'OwnedBy',
'MyApplication',
'USER',
'dbo',
'PROCEDURE',
'MyProcedure'
GO

-- This query lists the procedures tagged as owned by MyApplication:
SELECT *
FROM sysobjects o
JOIN ::fn_listextendedproperty('OwnedBy', 'USER', 'dbo', 'PROCEDURE',
                               NULL, NULL, NULL) ep ON o.name = ep.objname
WHERE o.xtype = 'P' AND
      ep.value = 'MyApplication'
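
Putting the two suggestions together, here is a sketch of the earlier drop loop restricted to procedures carrying that property (same placeholder property name and value as above):

DECLARE @DropStatement nvarchar(4000)
DECLARE DropStatements CURSOR
    LOCAL FAST_FORWARD READ_ONLY FOR
    SELECT N'DROP PROCEDURE ' + QUOTENAME(USER_NAME(o.uid)) + N'.' + QUOTENAME(o.name)
    FROM sysobjects o
    JOIN ::fn_listextendedproperty('OwnedBy', 'USER', 'dbo', 'PROCEDURE',
                                   NULL, NULL, NULL) ep ON o.name = ep.objname
    WHERE o.xtype = 'P' AND ep.value = 'MyApplication'
OPEN DropStatements
WHILE 1 = 1
BEGIN
    FETCH NEXT FROM DropStatements INTO @DropStatement
    IF @@FETCH_STATUS = -1 BREAK
    EXEC(@DropStatement)
END
CLOSE DropStatements
DEALLOCATE DropStatements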

--
Hope this helps.

Dan Guzman
SQL Server MVP

"serge" <sergea@.nospam.ehmail.com> wrote in message
news:zZM0d.39176$lP4.2491585@.news20.bellglobal.com ...
> My quest continues, maybe i should explain what i am trying to achieve.
> There is a database with 3000+ Stored Procedures. We give copies to other
> people
> and we continue making updates to the *development* database. When we want
> to give the other people our latest stored procedures, we have code that
> deletes all
> stored procedures one by one, thus taking maybe 30 minutes to delete.
> Then we recreate all the SPs.
> Now i wanted to find out if there was a way to speed this process, i
> originally thought
> that deleting all SPs one shot could do the trick. But the further i
> analyze
> it, i see some complications.
> For example, the other people could very well have created their own SPs,
> how can
> i NOT delete those SPs?
> Do you or anyone else have any idea how i can accomplish this?
> Thank you
>
>> > AND OBJECTPROPERTY (Id, 'IsMSShipped') = 0
>>
>> Yes, this is correct. I should have included this in the script I
>> posted.

Thanks again for the post. I'll have to look into this more closely
in the next few days (hopefully).

"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns95636012ECE1Yazorman@.127.0.0.1...
> serge (sergea@.nospam.ehmail.com) writes:
> > Like i just replied to Ross, at the moment, I am trying and hoping that
> > there is a simpler way to rely on the WHERE Condition and using
> > some custom field or something in the sysobjects table to decide whether
> > that SP is one of our own SP and not delete it.
> There is no such custom field. Possibly you could add some condition which
> looked in syscomments for things you recognize, but that would be
completely
> bizarre to do.
> You would have to have list of known procedures to delete. Then again,
that
> is not very difficult to make effecient:
> CREATE PROCEDURE drop_till_you_bop @.procs ntext AS
> DECLARE @.proc sysname
> DECLARE drop_cur INSENSITIVE CURSOR FOR
> SELECT nstr FROM iter_charlist_to_tbl(@.procs, DEFAULT) i
> JOIN sysobjects o ON i.nstr = o.name
> WHERE o.xtype = 'P'
> OPEN drop_cur
> WHILE 1 = 1
> FETCH drop_cur INTO @.proc
> IF @.@.fetch_status <> 0
> BREAK
> EXEC ('DROP PROCEDURE ' + @.proc)
> END
> DEALLOCATE drop_cur
> iter_charlist_to_tbl is on
> http://www.sommarskog.se/arrays-in-...list-of-strings
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp

Looks interesting. Having extended properties on the SPs.
I'll have to investigate this further, hopefully soon.

Thank you again.

"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:rnh1d.10373$yp2.9988@.newssvr30.news.prodigy.c om...
> As already mentioned, a manifest and/or version control to probably the
best
> long term solution. In the interim. you might consider adding extended
> properties to the objects you own so that you can more easily identify
> these. You'll need to include the extended properties in your DDL
scripts.
> CREATE PROC MyProcedure
> AS
> SELECT 'MyProcedure'
> GO
> EXEC sp_addextendedproperty
> 'OwnedBy',
> 'MyApplication',
> 'USER',
> 'dbo',
> 'PROCEDURE',
> 'MyProcedure'
> GO
> --this
> SELECT *
> FROM sysobjects o
> JOIN ::fn_listextendedproperty(
> 'OwnedBy',
> 'USER',
> 'dbo',
> 'PROCEDURE',
> NULL,
> NULL,
> NULL
> ) ep ON o.name = ep.objname
> WHERE o.xtype = 'P' AND
> ep.value = 'MyApplication'

Friday, March 9, 2012

faster query - smaller tables?

My database is much faster now because I found that I only need around
150,000 records in a table for any single query. The date selection will
always be in a certain range.
So if I break my tables up into smaller tables based on the maximum date
range, it's much faster.
I don't understand why it's 2x slower if there are more than x number of
records in the table. I have tried non-clustered and then clustered indexes
on the date field, and the date + customer id as the key fields... but it's
still slower as more records are added to the table. Is this normal?
SELECT DISTINCT A.CustID, B.Discount
FROM Sales A, Discounts B
WHERE A.SaleDate = '20000117' AND B.DiscountDate = '20000118' AND B.Discount > 0.2
If I have 9.5 million records in Sales, the query takes 2 seconds to 4
seconds.
If I have about 60 tables of 150000 records each, and I select from the
appropriate table, the query is about a half second or less.
Is this a common practice, to break tables up based on date range or am I
still doing something wrong?
CREATE TABLE [Sales] (
    [SaleDate] [datetime] NOT NULL ,
    [CustID] [varchar] (10) NOT NULL ,
    [SaleAmt] [decimal] (8,3) NOT NULL
) ON [PRIMARY]

CREATE TABLE [Discounts] (
    [DiscountDate] [datetime] NOT NULL ,
    [Discount] [decimal] (8,3) NOT NULL ,
    [Code] [numeric] NOT NULL
) ON [PRIMARY]

Rich,
You may want to read the 'Creating a Partitioned View' article in the BOL.
Is the optimizer able to use the indexes defined on the table?
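
A minimal sketch of that partitioned-view approach, using made-up month tables for the Sales data (names and date ranges are illustrative): each member table gets a CHECK constraint on the date column and the view unions them, so the optimizer only has to touch the table whose range matches the WHERE clause.

CREATE TABLE Sales_200001 (
    SaleDate datetime     NOT NULL CHECK (SaleDate >= '20000101' AND SaleDate < '20000201'),
    CustID   varchar(10)  NOT NULL,
    SaleAmt  decimal(8,3) NOT NULL
)
CREATE TABLE Sales_200002 (
    SaleDate datetime     NOT NULL CHECK (SaleDate >= '20000201' AND SaleDate < '20000301'),
    CustID   varchar(10)  NOT NULL,
    SaleAmt  decimal(8,3) NOT NULL
)
GO
CREATE VIEW Sales_All AS
    SELECT SaleDate, CustID, SaleAmt FROM Sales_200001
    UNION ALL
    SELECT SaleDate, CustID, SaleAmt FROM Sales_200002
GO
-- Queries go against Sales_All; a predicate such as WHERE SaleDate = '20000117'
-- lets the optimizer skip every member table whose CHECK range excludes that date.
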
"Rich" <no@.spam.invalid> wrote in message
news:UekHe.54059$4o.18050@.fed1read06...
> My database is much faster now because I found that I only need around
> 150,000 records in a table for any single query. The date selection will
> always be in a certain range.
> So if I break my tables up into smaller tables based on the maximum date
> range, its much faster.
> I don't understand why its 2x's slower if there are more than x-number of
> records in the table.. I have tried non-clustered and then clustered
> indexes
> on the date field, and the date + customer id as the key fields... but its
> still slower as more records are added to the table. Is this normal?
> SELECT DISTINCT A.CustID, B.Discount
> FROM Sales A, Discounts B WHERE
> A.SaleDate = '20000117' AND B.DiscountDate = '20000118' AND B.Discount >
> 0.2
> If I have 9.5 million records in Sales, the query takes 2 seconds to 4
> seconds.
> If I have about 60 tables of 150000 records each, and I select from the
> appropriate table, the query is about a half second or less.
> Is this a common practice, to break tables up based on date range or am I
> still doing something wrong?
> CREATE TABLE [Sales] (
> [SaleDate] [datetime] NOT NULL ,
> [CustID] [varchar] (10) NOT NULL ,
> [SaleAmt] [DECIMAL (8,3)] NOT NULL
> ) ON [PRIMARY]
> CREATE TABLE [Discounts] (
> [DiscountDate] [datetime] NOT NULL ,
> [Discount] [DECIMAL (8,3)] NOT NULL
> [Code] [numeric] NOT NULL
> ) ON [PRIMARY]
>

Hi
Where are your indexes?
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Rich" wrote:

> My database is much faster now because I found that I only need around
> 150,000 records in a table for any single query. The date selection will
> always be in a certain range.
> So if I break my tables up into smaller tables based on the maximum date
> range, its much faster.
> I don't understand why its 2x's slower if there are more than x-number of
> records in the table.. I have tried non-clustered and then clustered indexes
> on the date field, and the date + customer id as the key fields... but its
> still slower as more records are added to the table. Is this normal?
> SELECT DISTINCT A.CustID, B.Discount
> FROM Sales A, Discounts B WHERE
> A.SaleDate = '20000117' AND B.DiscountDate = '20000118' AND B.Discount > 0.2
> If I have 9.5 million records in Sales, the query takes 2 seconds to 4
> seconds.
> If I have about 60 tables of 150000 records each, and I select from the
> appropriate table, the query is about a half second or less.
> Is this a common practice, to break tables up based on date range or am I
> still doing something wrong?
> CREATE TABLE [Sales] (
> [SaleDate] [datetime] NOT NULL ,
> [CustID] [varchar] (10) NOT NULL ,
> [SaleAmt] [DECIMAL (8,3)] NOT NULL
> ) ON [PRIMARY]
> CREATE TABLE [Discounts] (
> [DiscountDate] [datetime] NOT NULL ,
> [Discount] [DECIMAL (8,3)] NOT NULL
> [Code] [numeric] NOT NULL
> ) ON [PRIMARY]
>
>

Dear Rich,
When processing a join, the SQL optimizer evaluates all reasonable join
permutations and estimates the total I/O cost, in terms of I/O time. The plan
resulting in the lowest estimate of I/O time is the plan chosen.
Please note that as the number of tables increases, the number of
permutations that the optimizer must evaluate increases as a factorial of the
number of tables in the query:
i.e., with 2 tables the number of permutations is 2! = 2,
with 3 tables it is 3! = 6,
with 4 tables it is 4! = 24,
and so on...
In your case you have 2 tables, and as I can see they are not related (no join
in between). One way to solve your problem and optimize your query processing
time is to filter your tables Discounts and Sales and then build the
join; or you build a relation between both tables on primary indexes and then
make your criteria fields clustered indexes (i.e. SaleDate, DiscountDate,
Discount).
I prefer you combine both ways.
Good Luck
Mario Aoun
"Rich" wrote:

> My database is much faster now because I found that I only need around
> 150,000 records in a table for any single query. The date selection will
> always be in a certain range.
> So if I break my tables up into smaller tables based on the maximum date
> range, its much faster.
> I don't understand why its 2x's slower if there are more than x-number of
> records in the table.. I have tried non-clustered and then clustered indexes
> on the date field, and the date + customer id as the key fields... but its
> still slower as more records are added to the table. Is this normal?
> SELECT DISTINCT A.CustID, B.Discount
> FROM Sales A, Discounts B WHERE
> A.SaleDate = '20000117' AND B.DiscountDate = '20000118' AND B.Discount > 0.2
> If I have 9.5 million records in Sales, the query takes 2 seconds to 4
> seconds.
> If I have about 60 tables of 150000 records each, and I select from the
> appropriate table, the query is about a half second or less.
> Is this a common practice, to break tables up based on date range or am I
> still doing something wrong?
> CREATE TABLE [Sales] (
> [SaleDate] [datetime] NOT NULL ,
> [CustID] [varchar] (10) NOT NULL ,
> [SaleAmt] [DECIMAL (8,3)] NOT NULL
> ) ON [PRIMARY]
> CREATE TABLE [Discounts] (
> [DiscountDate] [datetime] NOT NULL ,
> [Discount] [DECIMAL (8,3)] NOT NULL
> [Code] [numeric] NOT NULL
> ) ON [PRIMARY]
>
>

I was just curious about what is the approximate criterion
which allows an optimizer to tell that
T3 joined T1 joined T2
is faster than
T1 joined T2 joined T3 ?
Is it based on the number of columns? It can't be based on
rows, because their number is unknown before the join
and would take too much time to compute...
Any information or pointer on this problem?
Thank you
- Pamela
.NET developer

Rich,
Why aren't the two tables joined in the query? Do all Customers get all
Discounts? That doesn't seem to make sense (from a data model point of
view).
Assuming you do have some common key (and you join on it in the query),
then this type of query would benefit from a clustered index on the date
range, or from a covering index (see BOL for more details).
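
A sketch of the covering indexes Gert-Jan mentions, for the query exactly as posted (index names are made up; on SQL Server 2000 a covering index is simply a composite index that contains every column the query touches in that table):

CREATE INDEX IX_Sales_SaleDate_CustID   ON Sales     (SaleDate, CustID)
CREATE INDEX IX_Discounts_Date_Discount ON Discounts (DiscountDate, Discount)

-- With these, each side of the query is answered from the index alone
-- (an index seek on the date plus the covered output column), with no
-- bookmark lookups into the base tables.
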
Hope this helps,
Gert-Jan
Rich wrote:
> My database is much faster now because I found that I only need around
> 150,000 records in a table for any single query. The date selection will
> always be in a certain range.
> So if I break my tables up into smaller tables based on the maximum date
> range, its much faster.
> I don't understand why its 2x's slower if there are more than x-number of
> records in the table.. I have tried non-clustered and then clustered indexes
> on the date field, and the date + customer id as the key fields... but its
> still slower as more records are added to the table. Is this normal?
> SELECT DISTINCT A.CustID, B.Discount
> FROM Sales A, Discounts B WHERE
> A.SaleDate = '20000117' AND B.DiscountDate = '20000118' AND B.Discount > 0.2
> If I have 9.5 million records in Sales, the query takes 2 seconds to 4
> seconds.
> If I have about 60 tables of 150000 records each, and I select from the
> appropriate table, the query is about a half second or less.
> Is this a common practice, to break tables up based on date range or am I
> still doing something wrong?
> CREATE TABLE [Sales] (
> [SaleDate] [datetime] NOT NULL ,
> [CustID] [varchar] (10) NOT NULL ,
> [SaleAmt] [DECIMAL (8,3)] NOT NULL
> ) ON [PRIMARY]
> CREATE TABLE [Discounts] (
> [DiscountDate] [datetime] NOT NULL ,
> [Discount] [DECIMAL (8,3)] NOT NULL
> [Code] [numeric] NOT NULL
> ) ON [PRIMARY]

"Gert-Jan Strik" <sorry@.toomuchspamalready.nl> wrote in message
news:42EE6C4F.DD1000E0@.toomuchspamalready.nl...
> Rich,
> Why aren't the two tables joined in the query? Do all Customers get all
> Discounts? That doesn't seem to make sense (from a data model point of
> view).
> Assuming you do have some common key (and you join on it in the query),
> then this type of query would benefit from a clustered index on the date
> range, or from a covering index (see BOL for more details).
> Hope this helps,
> Gert-Jan
That helps, thanks.
Richard
> Rich wrote:

Sunday, February 19, 2012

Failure sending email in subscriptions

I am running SQL 2005, I have the latest service pack on SQL 2005 and also on
the Windows 2003 server. This is on a single server. I am using my Exchange
2003 server as my SMTP server which is on a different server with the same
subnet.
-When a domain user schedules a subscription, it won't send an email.
-When a domain admin schedules a subscription, the email will get sent.
-When a domain user schedules a subscription with ONLY the link option, it
will send an email and the user can access the link.
- I tried setting my RS config to use Local System and also Network Service,
and that did not make a difference.
In the RS logs I get the error...
Success: False, Status: Failure sending mail: The report server has
encountered a configuration error. See the report server log files for more
information., DeliveryExtension: Report Server Email, Report: Purchase
Requests Ordered But Not Received, Attempt 0
I also get an error in the Windows event log when a subscription is created
or edited, but I'm not sure if it is related. The error is...
Report Server (MSSQLSERVER) cannot load the Report Server DocumentLibrary
extension.
Here is my RS config file..
<RSEmailDPConfiguration>
<SMTPServer>192.168.100.200</SMTPServer>
<SMTPServerPort></SMTPServerPort>
<SMTPAccountName></SMTPAccountName>
<SMTPConnectionTimeout></SMTPConnectionTimeout>
<SMTPServerPickupDirectory></SMTPServerPickupDirectory>
<SMTPUseSSL></SMTPUseSSL>
<SendUsing>2</SendUsing>
<SMTPAuthenticate></SMTPAuthenticate>
<From>hqjobs01Reports@.trinitystairs.com</From>
<EmbeddedRenderFormats>
<RenderingExtension>MHTML</RenderingExtension>
</EmbeddedRenderFormats>
<PrivilegedUserRenderFormats></PrivilegedUserRenderFormats>
<ExcludedRenderFormats>
<RenderingExtension>HTMLOWC</RenderingExtension>
<RenderingExtension>NULL</RenderingExtension>
<RenderingExtension>RGDI</RenderingExtension>
</ExcludedRenderFormats>
<SendEmailToUserAlias>False</SendEmailToUserAlias>
<DefaultHostName>trinitystairs.com</DefaultHostName>
<PermittedHosts></PermittedHosts>
</RSEmailDPConfiguration>
Please Help!

Having the same problem. Tried everything and nothing worked. Next week I
will try transferring all my services to run under a domain admin account to
see if that helps.
These are some of the links where I have explained my problem. Read them just to
make sure that we have the same problem.
http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=1769187&SiteID=17
http://www.microsoft.com/communities/newsgroups/en-us/default.aspx?dg=microsoft.public.opsmgr.reporting&mid=c1fdce30-5db0-487d-931e-1e17cb0c387b
"Pablo" wrote:
> I am running SQL 2005, I have the latest service pack on SQL 2005 and also on
> the Windows 2003 server. This is on a single server. I am using my Exchange
> 2003 server as my SMTP server which is on a different server with the same
> subnet.
> -When a domain user schedules a subscription it won't send an email.
> -When a domain admin schdules a subscription the email will get sent.
> -When a domain user schedules a subscription with ONLY the link option, it
> will send an email and the user can access the link.
> - I tried setting my RS config to use Local System and also Network Service,
> and that did not make a difference.
> In the RS logs I get the error...
> Success: False, Status: Failure sending mail: The report server has
> encountered a configuration error. See the report server log files for more
> information., DeliveryExtension: Report Server Email, Report: Purchase
> Requests Ordered But Not Received, Attempt 0
>
> I also get an error in the Windows event log when a subscription is created
> or editted, but I'm not sure if it is related. The error is..
> Report Server (MSSQLSERVER) cannot load the Report Server DocumentLibrary
> extension.
> Here is my RS config file..
> <RSEmailDPConfiguration>
> <SMTPServer>192.168.100.200</SMTPServer>
> <SMTPServerPort></SMTPServerPort>
> <SMTPAccountName></SMTPAccountName>
> <SMTPConnectionTimeout></SMTPConnectionTimeout>
> <SMTPServerPickupDirectory></SMTPServerPickupDirectory>
> <SMTPUseSSL></SMTPUseSSL>
> <SendUsing>2</SendUsing>
> <SMTPAuthenticate></SMTPAuthenticate>
> <From>hqjobs01Reports@.trinitystairs.com</From>
> <EmbeddedRenderFormats>
> <RenderingExtension>MHTML</RenderingExtension>
> </EmbeddedRenderFormats>
> <PrivilegedUserRenderFormats></PrivilegedUserRenderFormats>
> <ExcludedRenderFormats>
> <RenderingExtension>HTMLOWC</RenderingExtension>
> <RenderingExtension>NULL</RenderingExtension>
> <RenderingExtension>RGDI</RenderingExtension>
> </ExcludedRenderFormats>
> <SendEmailToUserAlias>False</SendEmailToUserAlias>
> <DefaultHostName>trinitystairs.com</DefaultHostName>
> <PermittedHosts></PermittedHosts>
> </RSEmailDPConfiguration>
>
> Please Help!