Hi, I'm trying to figure out whether doing an INSERT INTO a table with
indexes is faster (or slower) than doing a SELECT INTO and then creating
the indexes on the newly created table.
The table in question contains around a million records.
Thanks
*** Sent via Developersdex http://www.developersdex.com ***

The only way to know is to test it both ways under the exact conditions,
hardware, etc. that you will be using.
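A minimal way to run that test, assuming a hypothetical million-row source table dbo.src and a target dbo.dst that already carries the same indexes (all names here are made up):

```sql
-- Hypothetical names throughout; adjust to your schema.
DECLARE @t1 DATETIME

-- Method 1: INSERT INTO a table that already has its indexes.
SET @t1 = GETDATE()
INSERT INTO dbo.dst SELECT * FROM dbo.src
PRINT 'INSERT INTO: ' + CAST(DATEDIFF(ms, @t1, GETDATE()) AS VARCHAR(20)) + ' ms'

-- Method 2: SELECT INTO a new heap, then build the indexes afterwards.
SET @t1 = GETDATE()
SELECT * INTO dbo.dst2 FROM dbo.src
CREATE CLUSTERED INDEX ix_dst2_id ON dbo.dst2 (id)  -- repeat for each index on dst
PRINT 'SELECT INTO: ' + CAST(DATEDIFF(ms, @t1, GETDATE()) AS VARCHAR(20)) + ' ms'
```

Run each method against a freshly truncated target so the timings are comparable.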
--
Andrew J. Kelly SQL MVP
And how many rows are you inserting? How many indexes, and how large are
they? How much time does it take to create the indexes? Hardware specs?
Column specs?
It would probably be best to drop the indexes and recreate them if you can,
but I don't see a problem even if you don't drop them. Just check whether
you need to defragment them after that...
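Checking and fixing fragmentation after the load might look like this (SQL 2000-era commands; the table and index names are placeholders):

```sql
-- Report fragmentation for the table's indexes.
DBCC SHOWCONTIG ('dbo.MyTable')

-- If scan density is poor, defragment online (0 = current database) ...
DBCC INDEXDEFRAG (0, 'dbo.MyTable', 'PK_MyTable')

-- ... or rebuild all indexes on the table outright.
DBCC DBREINDEX ('dbo.MyTable')
```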
MC
Joe,
Most likely SELECT * INTO will win. BTW, 1 million rows is not so big
nowadays.
Hello,
If it is a production server, I would suggest creating the table and
indexes first and then inserting the data into the new table using BCP IN
or BULK INSERT in batch-commit mode under the BULK_LOGGED recovery model.
Creating the table with SELECT * INTO takes locks on the sysobjects table,
which is not a good idea.
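A sketch of that approach, with made-up database, table, and file names; BATCHSIZE is what gives the batch-commit behaviour:

```sql
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED

-- Table and indexes created beforehand; load commits in 50,000-row batches.
BULK INSERT dbo.NewTable
FROM 'C:\data\export.txt'
WITH (FIELDTERMINATOR = ',', BATCHSIZE = 50000, TABLOCK)

ALTER DATABASE MyDb SET RECOVERY FULL
```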
Thanks
Hari
Can anybody explain to me exactly what BULK_LOGGED recovery does?
I gather it has less overhead and fewer recovery options than FULL, but
the BOL description is pretty vague.
Thanks.
Josh
BOL has some good info, but here are some basics. First off, it is no
different from FULL recovery until you do an operation that can take
advantage of minimal logging. These would be things like:
CREATE INDEX
SELECT INTO
BULK INSERT
etc. (See BOL for more details)
When you execute one of these under the Bulk-Logged or Simple recovery
model, only the IDs of the extents modified during that operation are
logged in the tran log. So if you did a bulk insert of 1 million rows and
it filled up 1000 extents, you would only log the IDs for those 1000
extents, not the actual data that would normally be logged under the FULL
recovery model. There are several implications of this. One is that you
can no longer do point-in-time recovery until you issue another full
backup to start the proper logging again. You can still restore the entire
log file, though. The second is that when you back up the tran log, it
will go and get all the data for those 1000 extents and place it in the
log backup file. So the backup will take the hit that would normally have
occurred if you had not done a minimally logged load.
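Put together, a bulk-logged load and its backup implications might look like this (database, table, and path names are placeholders):

```sql
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED
SELECT * INTO dbo.NewTable FROM dbo.BigTable   -- minimally logged here
ALTER DATABASE MyDb SET RECOVERY FULL

-- The log backup pulls in the data for every bulk-loaded extent,
-- so it takes the hit the load itself avoided.
BACKUP LOG MyDb TO DISK = 'C:\backup\MyDb_log.bak'
-- A full backup restarts the point-in-time recovery chain.
BACKUP DATABASE MyDb TO DISK = 'C:\backup\MyDb_full.bak'
```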
--
Andrew J. Kelly SQL MVP
Andrew,
Thanks!
So, bulk_logged is not faster or cheaper than simple, even for those
operations.
We have some large tables that are recreated daily, and it has
occurred to us to move them out to a separate database we can run on
whatever lightweight logging we can find. I guess simple is the
simple answer! Also make sure it's on RAID10 space rather than RAID5.
Also, the point about backups taking a hit is good to know. I guess
another option might be to use bulk_logged for a little extra safety
and then, if nothing goes wrong, just truncate the log instead of
backing it up to tape.
Josh
If it is a minimally logged operation (it must meet all the conditions
listed below), then you get the same logging and performance from the
Bulk-Logged and Simple recovery models. The key difference with Simple is
that you can't do a log restore at all. Yes, RAID 5 is bad for heavy
writes. One more note: if you do use a separate db, you still need valid
FULL backups before you start the operation.
- The recovery model is simple or bulk-logged.
- The target table is not being replicated.
- The target table does not have any triggers.
- The target table has either 0 rows or no indexes.
- The TABLOCK hint is specified. (See BOL for more information.)
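A load that satisfies all of the conditions above (no replication, no triggers, empty target, TABLOCK hint; the names are placeholders):

```sql
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED

TRUNCATE TABLE dbo.Staging        -- target now has 0 rows

BULK INSERT dbo.Staging
FROM 'C:\data\daily.txt'
WITH (TABLOCK)                    -- required for minimal logging

ALTER DATABASE MyDb SET RECOVERY FULL
```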
Andrew J. Kelly SQL MVP
Monday, March 12, 2012
Fastest way to copy tables and their indexes between servers?
The DTS Task Copy Server Objects is PAINFULLY slow.
The Copy Table Wizard is fast, but it generates an unmanageable DTS
package and does not bring over the indexes.
Any tips or tricks to copy tables, data, and indexes at a reasonable speed?
Thanks,
Carl
Without fully understanding the specifics, but going by what you've done so far, one other option would be to script the database objects to file, run the generated scripts on the second server, and then use the BCP utility BCP.EXE (bulk copy program) to copy the data over.
See http://msdn2.microsoft.com/en-us/library/aa337544.aspx for more information on how to use the BCP utility.
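For example, a native-mode export/import pair might look like this (server, database, and table names are placeholders; -n keeps native format, -T uses a trusted connection, -b commits in batches):

```
bcp SourceDb.dbo.MyTable out MyTable.dat -n -S SourceServer -T
bcp TargetDb.dbo.MyTable in MyTable.dat -n -S TargetServer -T -b 50000
```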
Regards,
Uwa.
Friday, March 9, 2012
Faster Indexes using order by statement
Hello all,
I have a specific sql statement and I am looking to return the result
set a lot faster. Currently the setup is like so:
myTable schema:
Counter      decimal(9)     (pk)
Machine      varchar(60)
LogEntry     varchar(1000)
Active       varchar(50)
SysInfo      varchar(255)
Idle         varchar(50)
IP           varchar(15)
KioskDate    datetime
KioskTime    datetime
ServerDate   datetime
ServerTime   datetime
Application  varchar(15)
WebDomain    varchar(50)
NSCode       varchar(10)
There is a Clustered index on Counter and two Non-clustered:
1) Counter & ServerDate - unique values, fill factor 90% on PRIMARY
2) Counter, Machine, NSCode, KioskDate, KioskTime - - unique values,
fill factor 90% on PRIMARY
SQL statement:
select top 1000 machine, logentry,
convert(varchar(8),kioskdate,5) as Kiosk_date,
convert(varchar(8),kiosktime,8) as Kiosk_time,
convert(varchar(8),serverdate,5) as ServerDate,
convert(varchar(8),serverdate,8) as ServerTime,
application,
nscode,
webdomain
from myTable
where machine = 'machinename'
order by kioskdate desc, kiosktime desc
Currently the table holds over 18 million records and the above search
returns in under 3 seconds, however when I change the order by
statement to ServerDate desc only (which is what I need), it takes
over four minutes. I have tried altering/tweaking the indexes, but
have had no success.
Any ideas greatly appreciated.
Thanks
Scott
The only index that would be useful for that query is
Counter, Machine, NSCode, KioskDate, KioskTime
And that only for a scan, as Counter is the first field.
How unique is machinename? If it's selective, then put an index on that.
Not sure how the optimiser would handle it, but try an index on
machinename, kioskdate desc, kiosktime desc
This will be in the order of the required result set, and the server could
take the top thousand entries for the machinename without any other
processing. Doubt it will do that, but it's worth a try.
You might be able to give it a hint by finding the range of dates required
and using that.
Scott,
Both of your nonclustered indexes are not very useful (ever), because
Counter is a unique column and is already indexed.
As Nigel suggested, if "machine = 'machinename'" is highly selective
(returns just a few percent of all rows), then your query could benefit
from an index on (machinename, kioskdate, kiosktime). Specifying
ascending or descending in the index definition is not useful for your
query.
If the expression "machine = 'machinename'" returns a very high
percentage of all rows, then the query could benefit from an index on
(kioskdate, kiosktime, machinename).
You could also try the Index Tuning Wizard and see what that comes up
with.
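In DDL form, the two alternatives would be something like this (the index names are made up):

```sql
-- If machine = 'machinename' is highly selective:
CREATE INDEX ix_machine_kiosk ON myTable (Machine, KioskDate, KioskTime)

-- If it matches a large share of the rows instead:
CREATE INDEX ix_kiosk_machine ON myTable (KioskDate, KioskTime, Machine)

-- And for the ORDER BY serverdate DESC variant actually needed here:
CREATE INDEX ix_machine_server ON myTable (Machine, ServerDate)
```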
Hope this helps,
Gert-Jan
(Please reply only to the newsgroup)
Hi,
many thanks for all your replies; however, I'm still having no joy on
this. Whatever non-clustered indexes I create, searching by kioskdate
and time (desc) is still way faster than searching by serverdate (desc),
which is what I want.
Using the Index Tuning Wizard gives me nothing whichever way I do it
(create new indexes, or scan with existing indexes).
Back to the drawing board methinks!
Cheers
Scott