Hi, I'm trying to figure out whether doing an INSERT INTO a table that
already has indexes is faster (or slower) than doing a SELECT INTO and then
creating the indexes on the newly created table.
The table in question contains around a million records.
Thanks
The only way to know for sure is to test it both ways under the exact
conditions and on the hardware you will actually be using.
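For example, something along these lines would let you time each approach
with SET STATISTICS TIME. The table and column names here are only
placeholders for illustration, not from your schema:

-- Placeholder names (dbo.Source, dbo.Target1, dbo.Target2); substitute your own.
SET STATISTICS TIME ON;

-- Method 1: create the table and index first, then INSERT INTO.
CREATE TABLE dbo.Target1 (id INT NOT NULL, val VARCHAR(50) NULL);
CREATE CLUSTERED INDEX IX_Target1_id ON dbo.Target1 (id);

INSERT INTO dbo.Target1 (id, val)
SELECT id, val FROM dbo.Source;

-- Method 2: SELECT INTO first, then build the index afterwards.
SELECT id, val
INTO dbo.Target2
FROM dbo.Source;

CREATE CLUSTERED INDEX IX_Target2_id ON dbo.Target2 (id);

SET STATISTICS TIME OFF;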
Andrew J. Kelly SQL MVP
"Joe Grizzly" <grizzlyjoe@.campcool.com> wrote in message
news:%23OYYT%23zWHHA.4252@.TK2MSFTNGP06.phx.gbl...
> Hi, I'm trying to figure out if doing an INSERT INTO.. a table with
> indexes is faster (or slower) than doing a SELECT INTO and then creating
> the indexes on the table created.
> The table in question contains around a million records.
> Thanks
>
> *** Sent via Developersdex http://www.codecomments.com ***
|||Joe,
Most likely SELECT * INTO will win. BTW, 1 million rows is not that big
nowadays.
"Joe Grizzly" <grizzlyjoe@.campcool.com> wrote in message
news:%23OYYT%23zWHHA.4252@.TK2MSFTNGP06.phx.gbl...
> Hi, I'm trying to figure out if doing an INSERT INTO.. a table with
> indexes is faster (or slower) than doing a SELECT INTO and then creating
> the indexes on the table created.
> The table in question contains around a million records.
> Thanks
>
> *** Sent via Developersdex http://www.codecomments.com ***
|||Hello,
If it is a production server, I would suggest you create the table and the
indexes first and then insert the data into the new table using BCP IN or
BULK INSERT in batch-commit mode, with the database in the BULK_LOGGED
recovery model. If you create the table with SELECT * INTO, it will hold
locks on the sysobjects system table for the duration of the statement,
which is not a good idea.
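For example (the file path, terminators and batch size below are only
placeholders, not real values from your system):

-- Assumes the database is already in the BULK_LOGGED recovery model and
-- that dbo.NewTable and its indexes have been created up front.
-- File path, terminators and batch size are placeholders.
BULK INSERT dbo.NewTable
FROM 'C:\load\data.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 100000,  -- commit in batches of 100,000 rows
    TABLOCK                    -- take a bulk update lock on the table
);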
Thanks
Hari
"Joe Grizzly" <grizzlyjoe@.campcool.com> wrote in message
news:%23OYYT%23zWHHA.4252@.TK2MSFTNGP06.phx.gbl...
> Hi, I'm trying to figure out if doing an INSERT INTO.. a table with
> indexes is faster (or slower) than doing a SELECT INTO and then creating
> the indexes on the table created.
> The table in question contains around a million records.
> Thanks
>
> *** Sent via Developersdex http://www.codecomments.com ***
|||Thanks for the opinions. I'll definitely investigate.
|||Can anybody explain to me exactly what BULK_LOGGED recovery does?
I gather it has less overhead and fewer recovery options than FULL, but
the BOL description is pretty vague.
Thanks.
Josh
|||BOL has some good info, but here are some basics. First off, it is no
different from FULL recovery until you do an operation that can take
advantage of minimal logging. These would be things like:
CREATE INDEX
SELECT INTO
BULK INSERT
etc. (see BOL for more details)
When you execute one of these in Bulk-Logged or Simple recovery mode, only
the IDs of the extents modified during that operation are logged in the
tran log. So if you did a bulk insert of 1 million rows and it filled up
1000 extents, you would only log the IDs for those 1000 extents, not the
actual data that would normally be logged in FULL recovery mode. There are
several implications of this. One is that you can no longer do point-in-time
recovery until you issue another full backup to start the proper logging
again; you can still restore the entire log file, though. The second is that
when you back up the tran log, it will go and get all the data for those
1000 extents and place it in the log backup file. So the log backup takes
the hit that would normally have occurred during the load if it had not been
minimally logged.
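To make that concrete, here is a rough sketch of the usual sequence. The
database and table names are placeholders, not a recommendation for any
particular system:

-- Placeholder names; just illustrating the sequence described above.
ALTER DATABASE LoadDb SET RECOVERY BULK_LOGGED;

-- Minimally logged: only the changed extent IDs go into the tran log.
SELECT *
INTO dbo.BigCopy
FROM dbo.BigSource;

ALTER DATABASE LoadDb SET RECOVERY FULL;

-- The next log backup still has to copy out all of the bulk-changed extents,
-- so it will be much larger and slower than usual.
BACKUP LOG LoadDb TO DISK = 'D:\Backups\LoadDb_log.bak';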
Andrew J. Kelly SQL MVP
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:7skgu25thlm2vjher8dfrl29scbpj894bv@.4ax.com...
> Can anybody explain to me exactly what BULK_LOGGED recovery does?
> I gather it has less overhead and less recovery options than FULL, but
> the BOL description is pretty vague.
> Thanks.
> Josh
>
> On Wed, 28 Feb 2007 08:05:51 -0600, "Hari Prasad"
> <hari_prasad_k@.hotmail.com> wrote:
>
|||Andrew,
Thanks!
So, bulk_logged is not faster or cheaper than simple, even for those
operations.
We have some large tables that are recreated daily, and it has
occurred to us to move them out to a separate database that we can run
with whatever lightweight logging we can find. I guess simple is the
simple answer! Also, make sure it's on RAID10 space rather than RAID5.
Also, the point about the log backup taking a hit is good to know. I guess
another option might be to use bulk_logged for a little extra safety,
and then if nothing goes wrong, just truncate the log instead of
backing it up to tape.
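(Something like the following is what I have in mind; the database name is
just a placeholder, and I realize it breaks the log backup chain until the
next full backup.)

-- Placeholder database name. TRUNCATE_ONLY discards the log records instead
-- of backing them up anywhere (deprecated, and it breaks the log chain).
BACKUP LOG LoadDb WITH TRUNCATE_ONLY;

-- Take a fresh full backup before relying on log backups again.
BACKUP DATABASE LoadDb TO DISK = 'D:\Backups\LoadDb_full.bak';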
Josh
|||If it is a minimally logged operation (it must meet all of the conditions
listed below), then you get the same logging and performance from the
Bulk-Logged and Simple recovery models. The key difference with Simple is
that you can't do a log restore at all. Yes, RAID 5 is bad for heavy writes.
One more note: if you do use a separate db, you still need valid FULL
backups before you start the operation.
- The recovery model is simple or bulk-logged.
- The target table is not being replicated.
- The target table does not have any triggers.
- The target table has either 0 rows or no indexes.
- The TABLOCK hint is specified.
(See BOL for more information.)
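As a minimal sketch of a load that satisfies all of those conditions (the
table name and file path are placeholders):

-- Placeholder names and path. The target is an empty heap (no indexes,
-- no triggers, not replicated), the database is in bulk-logged or simple
-- recovery, and TABLOCK is specified, so the load can be minimally logged.
CREATE TABLE dbo.DailyStage (id INT NOT NULL, val VARCHAR(50) NULL);

BULK INSERT dbo.DailyStage
FROM 'C:\load\daily.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);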
Andrew J. Kelly SQL MVP
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:623ju250u00obhg2c3v4v3ell4guds7sm6@.4ax.com...
> Andrew,
> Thanks!
> So, bulk_logged is not faster or cheaper than simple, even for those
> operations.
> We have some large tables that are recreated daily, and it has
> occurred to us to move them out to a separate database we can run on
> whatever lightweight logging we can find. I guess simple is the
> simple answer! Also make sure it's on RAID10 space rather than RAID5.
> Also, the point about backup taking a hit, is good to know. I guess
> another option might be to use bulk_logged for a little extra safety
> and then if nothing goes wrong, just truncate the log instead of
> backing it up to tape.
> Josh
>
> On Fri, 2 Mar 2007 13:40:41 -0500, "Andrew J. Kelly"
> <sqlmvpnooospam@.shadhawk.com> wrote:
>