Sunday, February 26, 2012

Fast Hardware - Slow SQL

Hey All,
We've got a new 8-way SQL 2000 Server that pretty much
follows all of the Microsoft guidelines for performance
(i.e. the right kind of RAID arrays for both the
transaction and data volumes, etc). It was
configured/setup with Microsoft's sales support folks.
Although SQL runs okay, it doesn't run as fast as we'd
expect. One thing we notice is that processor/memory
utilization is almost always nothing.
As an example, if we're creating a blank database and
specify large file sizes, it could literally take
something like 6 hours to create a "blank" 100gb database
with a 20gb log file (on separate volumes).
We're also running a data conversion process as we're
upgrading our Microsoft Navision SQL server database and
although it runs faster than our older environment, the
system is still running overall at less than 1% CPU
utilization and maybe using 2gb of the available 8gb of
RAM.
I know my disk subsystem is fine as SQL can backup a 120gb
database to disk in less than 25 minutes. I've also
watched counters (like avg. disk. length queue) and they
are in the expected ranges.
How can I "force" SQL to run faster and utilize all the
hardware that it has available?
It's hard to believe that on such an expensive piece of
hardware it could still take 6 hours to create a blank
database!!!
Any help would be appreciated...
-dave
|||what aspect of performance are you interested in?
the blank db init is seriously off.
you should be able to init at a rate of ~50MB/sec per
file, on a big disk array with the db partitioned into
filegroups or files, you should be at a combined rate of
>200MB/sec, so for 120GB, 40min with single file and 10min
with 4 files.
8-way scalability is a tricky item: some SQL operations
scale very well (hash & merge joins), other ops scale very
poorly (RPC, etc.)
I will have an article detailing some of these issues on
www.sql-server-performance.com in a few weeks.
Also, are you doing large row count data modifications
(inserts, updates & deletes)? Check the above web site later
this week, there will be some very interesting items on
this matter.
>--Original Message--
>Hey All,
>We've got a new 8-way SQL 2000 Server that pretty much
>follows all of the Microsoft guidelines for performance
>(i.e. the right kind of RAID arrays for both the
>transaction and data volumes, etc). It was
>configured/setup with Microsoft's sales support folks.
>Although SQL runs okay, it doesn't run as fast as we'd
>expect. One thing we notice is that processor/memory
>utilization is almost always nothing.
>As an example, if we're creating a blank database and
>specify large file sizes, it could literally take
>something like 6 hours to create a "blank" 100gb database
>with a 20gb log file (on separate volumes).
>We're also running a data conversion process as we're
>upgrading our Microsoft Navision SQL server database and
>although it runs faster than our older environment, the
>system is still running overall at less than 1% CPU
>utilization and maybe using 2gb of the available 8gb of
>RAM.
>I know my disk subsystem is fine as SQL can backup a
>120gb
>database to disk in less than 25 minutes. I've also
>watched counters (like avg. disk. length queue) and they
>are in the expected ranges.
>How can I "force" SQL to run faster and utilize all the
>hardware that it has available?
>It's hard to believe that on such an expensive piece of
>hardware it could still take 6 hours to create a blank
>database!!!
>Any help would be appreciated...
>-dave
>.
>
|||That does seem like a long time. What OS are you running? What kind of
SAN? Are you sure the drivers are the latest for that SAN / HBA?
--
Andrew J. Kelly
SQL Server MVP
"Dave Rose" <anonymous@.discussions.microsoft.com> wrote in message
news:1171c01c3f4ec$553e47b0$a401280a@.phx.gbl...
> Hey All,
> We've got a new 8-way SQL 2000 Server that pretty much
> follows all of the Microsoft guidelines for performance
> (i.e. the right kind of RAID arrays for both the
> transaction and data volumes, etc). It was
> configured/setup with Microsoft's sales support folks.
> Although SQL runs okay, it doesn't run as fast as we'd
> expect. One thing we notice is that processor/memory
> utilization is almost always nothing.
> As an example, if we're creating a blank database and
> specify large file sizes, it could literally take
> something like 6 hours to create a "blank" 100gb database
> with a 20gb log file (on separate volumes).
> We're also running a data conversion process as we're
> upgrading our Microsoft Navision SQL server database and
> although it runs faster than our older environment, the
> system is still running overall at less than 1% CPU
> utilization and maybe using 2gb of the available 8gb of
> RAM.
> I know my disk subsystem is fine as SQL can backup a 120gb
> database to disk in less than 25 minutes. I've also
> watched counters (like avg. disk. length queue) and they
> are in the expected ranges.
> How can I "force" SQL to run faster and utilize all the
> hardware that it has available?
> It's hard to believe that on such an expensive piece of
> hardware it could still take 6 hours to create a blank
> database!!!
> Any help would be appreciated...
> -dave

Fast dev workstation!

Working a new gig, they gave me a pretty nice little workstation,
dual-core 3ghz 2.5gb RAM, don't know what the disk is except it's
80gb.
Anyway, I'm working a GL project where we have to produce about 30m
debits and credits from 2.5m transactions.
The SQL is something like:
select
tranid, x1, a2, gl
into #myfoo
from trantable -- 2.5m rows
cross join xtable -- 12 rows
inner join GLCOA on x1 = a1 -- 300k rows
inner join attribute table on trancode=attribcode -- ten rows
(selecting into the temp just so I don't have to wait for 30m rows to
scroll into query analyzer)
So, guess how long this takes to run, workstation running XP and
SQL2K?
.
.
.
.
Answer: two minutes
Two minutes!?!?
I'm impressed.
When I was a lad, it would have taken quite a while running at 4.77mhz
to generate and write 30m records onto diskettes, I'm just saying, but
even running SQL7 on a 300mhz two-CPU server seven years ago, would
have been easily 10x slower.
I gotta look into the disk config, it's just one drive, this is an HP
workstation.
J.
ps - for all that, the shop is a little chintzy on the monitors, just
a 17 inch 1280x flat panel, OK, but could be better. Not directly a
SQLServer issue. and the mouse has a cord and a ball.
On Thu, 16 Nov 2006 22:44:04 -0800, JXStern <JXSternChangeX2R@.gte.net>
wrote:
>So, guess how long this takes to run, workstation running XP and
>SQL2K?
>.
>.
>Answer: two minutes
OK, two and a half.
But even so, isn't anybody else amazed at this?
Perhaps too easily amused,
J.

Fast Date Ranges Without Loops In SQL Server 2000

The trick is to use a pivot table

Check out the code here

http://sqlservercode.blogspot.com/2...ops-in-sql.html
|||technically that does have a loop right?
|||To populate the table yes, but that's a one time thing

I was talking specifically about the select statement

http://sqlservercode.blogspot.com/
|||I have added the code to split a character string without tedious
looping but using a pivot table instead
go to my blog to look at the code

http://sqlservercode.blogspot.com/
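
The linked posts are not reproduced here, but the "pivot table" in question is an auxiliary numbers table. A minimal sketch of the idea, assuming a Numbers table you populate once up front (that one-time population is the only loop involved):

CREATE TABLE Numbers (n int NOT NULL PRIMARY KEY)
-- populate Numbers with 1..N once; after that a date range needs no loop:
SELECT DATEADD(dd, n - 1, '20040101') AS dt
FROM Numbers
WHERE n <= DATEDIFF(dd, '20040101', '20041231') + 1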

Fast counting of records

I seem to remember reading many moons ago about a function where you
can retrieve a count of the last recordset you opened.

For example:

I've got a stored procedure that returns a recordset using the TOP 10
so I only get the top 10 records. I need to know the recordcount but I
dont want to reuse the SELECT statement because its quite complex.

Any ideas?

What does @@Count do?

Thanks in advance
"Samuel Hon" <noreply@.samuelhon.co.uk> wrote in message
news:c8672b7d.0309110815.2fba8c32@.posting.google.c om...
> I seem to remember reading many moons ago about a function where you
> can retrieve a count of the last recordset you opened.
> For example:
> I've got a stored procedure that returns a recordset using the TOP 10
> so I only get the top 10 records. I need to know the recordcount but I
> dont want to reuse the SELECT statement because its quite complex.
> Any ideas?
> What does @@Count do?
> Thanks in advance

@@ROWCOUNT

Simon
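
For illustration, a minimal sketch of how @@ROWCOUNT behaves here (table and column names are hypothetical):

SELECT TOP 10 OrderId, OrderDate
FROM Orders
ORDER BY OrderDate DESC
-- read it immediately; almost any intervening statement resets the value
SELECT @@ROWCOUNT AS RecordCount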

Fast copy of data from one table to another

I have to copy the contents of a table (6million rows) into another table on
the same instance. The target table has an identity column(PK). What is the
fastest way to do this? The instance has to remain up for the web site, and
yes, the target table is read by the web site. I was thinking a bcp out/in.
Thanks,
Michael
Use SET IDENTITY_INSERT ON/OFF to keep the identity column. There's
also a switch in the bcp utility to keep identities (-E)
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_set-set_7zas.asp
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/coprompt/cp_bcp_61et.asp
Snake wrote:
> I have to copy the contents of a table (6million rows) into another table on
> the same instance. The target table has an identity column(PK). What is the
> fastest way to do this? The instance has to remain up for the web site, and
> yes, the target table is read by the web site. I was thinking a bcp out/in.
> Thanks,
> Michael
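
For what it's worth, a minimal sketch of the IDENTITY_INSERT route (table and column names are hypothetical; an explicit column list is required while IDENTITY_INSERT is ON):

SET IDENTITY_INSERT dbo.TargetTable ON
INSERT INTO dbo.TargetTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM dbo.SourceTable
SET IDENTITY_INSERT dbo.TargetTable OFF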

Fast copy big table content

Hi all,

need advice on the following task:
copy the content of a big table from DB_A to DB_B in the same server

the size of table:
~ 7 million rows, ~ 9G in size, 1 clustered pk index, 13 nonclustered
index

current practice:
use DTS to copy the data, takes over 20 hours as
-- first had to delete existing data of the table in DB_B
-- then copy
-- all these happen while all indexes are in place.

I am trying to check what is the best or most efficient way to copy
this kind of data and what would
be the expected time for such load.

my machine: SQL 2000 Enterprise, 8-way P4, 12G RAM on an EMC CLARiiON
600 SAN.
|||Most efficient way is:

drop table on DB_B (remember that this invalidates incremental backups).

Allow bulk inserts, set recovery to simple or bulk insert on DB_B

use DB_B

then do a

SELECT * INTO table_dest FROM DB_A.dbo.table_source WITH (NOLOCK)

Then

CREATE INDEXES
ALTER TABLE ADD PRIMARY KEY

There is no faster way;)

DM Unseen
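
A sketch of the sequence described above, with hypothetical object names (note that NOLOCK means the copy can include uncommitted data):

ALTER DATABASE DB_B SET RECOVERY BULK_LOGGED
USE DB_B
DROP TABLE dbo.table_dest          -- SELECT INTO recreates it
SELECT * INTO dbo.table_dest
FROM DB_A.dbo.table_source WITH (NOLOCK)
ALTER TABLE dbo.table_dest ADD CONSTRAINT PK_table_dest PRIMARY KEY CLUSTERED (pk_col)
CREATE INDEX IX_table_dest_1 ON dbo.table_dest (some_col)
-- ...and so on for the remaining nonclustered indexes
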
|||"New MSSQL DBA" <boscong88@.gmail.com> wrote in message
news:1119515833.641071.147380@.f14g2000cwb.googlegroups.com...
> Hi all,
> need advice on the following task:
> copy the content of a big table from DB_A to DB_B in the same server
> the size of table:
> ~ 7 million rows, ~ 9G in size, 1 clustered pk index, 13 nonclustered
> index
> current practice:
> use DTS to copy the data, takes over 20 hours as
> -- first had to delete existing data of the table in DB_B
> -- then copy
> -- all these happen while all indexes are in place.

Drop the indices and use bulk insert or BCP and then rebuild your indices.

I routinely load a table even bigger than that in under 6 hours this way.
(and I really haven't spent much time optimizing the process since we moved
to the new hardware, so I might be able to improve that further.)

> I am trying to check what is the best or most efficient way to copy
> this kind of data and what would
> be the expected time for such load.
> my machine: SQL 2000 Enterprise, 8-way P4, 12G RAM on an EMC CLARiiON
> 600 SAN.

Fast BCP

Can someone tell me the conditions that should be
fulfilled for a FAST BCP to happen
when i create a DTS package and that has Transform data
Task which inserts data into a table from select query is
that done via BCP
Sanjay
Look in BooksOnLine under BCP - Minimally logged load.
--
Andrew J. Kelly
SQL Server MVP
"Sanjay" <sanjayg@.hotmail.com> wrote in message
news:024401c3456b$c1e25a40$a101280a@.phx.gbl...
> Can someone tell me the conditions that should be
> fulfilled for a FAST BCP to happen
> when i create a DTS package and that has Transform data
> Task which inserts data into a table from select query is
> that done via BCP
> Sanjay
>
|||This is a bit confusing
It says that the target table should have 0 rows.
Is this true?
Also, the TABLOCK hint should be specified.
Is this true too?
Sanjay
>--Original Message--
>Look in BooksOnLine under BCP - Minimally logged load.
>--
>Andrew J. Kelly
>SQL Server MVP
>
>"Sanjay" <sanjayg@.hotmail.com> wrote in message
>news:024401c3456b$c1e25a40$a101280a@.phx.gbl...
>> Can someone tell me the conditions that should be
>> fulfilled for a FAST BCP to happen
>> when i create a DTS package and that has Transform data
>> Task which inserts data into a table from select query
>> is
>> that done via BCP
>> Sanjay
>
>.
>
|||It only needs 0 rows if there are existing indexes on the table. If the
table has indexes it must be empty to do a minimally logged load. You can
drop the indexes and recreate them afterwards though. A table lock is
needed as well. This shouldn't be an issue; if you're loading that many rows
in, you certainly don't want other users in there.
--
Andrew J. Kelly
SQL Server MVP
"Sanjay" <sanjayg@.hotmail.com> wrote in message
news:8be401c34577$ae5ab1f0$a401280a@.phx.gbl...
> This is a bit confusing
> It says that Target table should have 0 rows
> Is this true
> Also TABLOCK hint should be specified
> Is this true too
> Sanjay
> >--Original Message--
> >Look in BooksOnLine under BCP - Minimally logged load.
> >
> >--
> >
> >Andrew J. Kelly
> >SQL Server MVP
> >
> >
> >"Sanjay" <sanjayg@.hotmail.com> wrote in message
> >news:024401c3456b$c1e25a40$a101280a@.phx.gbl...
> >> Can someone tell me the conditions that should be
> >> fulfilled for a FAST BCP to happen
> >>
> >> when i create a DTS package and that has Transform data
> >> Task which inserts data into a table from select query
> >> is
> >> that done via BCP
> >>
> >> Sanjay
> >>
> >
> >
> >.
> >
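
Putting those conditions together, a sketch of a minimally logged load via the bcp utility (server, database, table, and file names are hypothetical; the target table is empty with no indexes, and the database uses the bulk-logged or simple recovery model):

bcp MyDb.dbo.Staging in D:\loads\staging.dat -n -T -S MYSERVER -h "TABLOCK"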

Fast "bulk" inserts...

I have an app that must insert batches of transaction records every 30 min.
Typically only a few hundred each time. We found that just appending a
hundred or so INSERT commands together in a single query sent to the server
was significantly faster than our first approach of preparing a parameterized
query and executing once for each record. So now we send only a few commands
to SQL instead of hundreds.
Someone suggested to me that we put all of the records into an XML
"document" and use OPENXML. We will test this approach.
Any suggestions, guesses on how the new approach and other possible
approaches to try?
Also, would a different approach be warranted if the number of records in a
batch was around 10K, instead of just 200-300?
Thanks,
LMcPhee
"lmcphee" <lmcphee@.discussions.microsoft.com> wrote in message
news:E2D39232-D7B3-450C-B76F-5536DC10FEF5@.microsoft.com...
>I have an app that must insert batches of transaction records every 30 min.
> Typically only a few hundred each time. We found that just appending a
> hundred or so INSERT commands together in a single query sent to the
> server
> was significantly faster than our first approach of preparing a
> parameterized
> query and executing once for each record. So now we send only a few
> commands
> to SQL instead of hundreds.
> Someone suggested to me that we put all of the records into an XML
> "document" and use OPENXML. We will test this approach.
If you can get by with your current approach, do. In SQL 2005/ADO.NET 2.0
you will have the SQLBulkInsert, which will be the preferred method.

> Any suggestions, guesses on how the new approach and other possible
> approaches to try?
> Also, would a different approach be warranted if the number of records in
> a
> batch was around 10K, instead of just 200-300?
>
The batching approach will still work, but you wouldn't want to batch 10k
rows. That's just too much SQL. Keep your batch size constant, and send
multiple batches if necessary.
Another critically important thing for running multiple inserts is to use an
explicit transaction. Without an explicit transaction SQL Server must flush
the log to disk (aka commit) after every insert. The commit takes much
longer than just doing the insert, so you want to do all the inserts, and
just force a single commit at the end.
David
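
A minimal sketch of the batch-plus-single-commit pattern David describes (table and column names are hypothetical); the whole string is sent as one command, and the log is flushed once at COMMIT instead of after every INSERT:

BEGIN TRAN
INSERT INTO dbo.Trans (AcctId, Amount) VALUES (101, 10.00)
INSERT INTO dbo.Trans (AcctId, Amount) VALUES (102, 12.50)
-- ...the remaining few hundred INSERTs appended to the same batch...
COMMIT TRAN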

Farsi Collation

I've installed SQL Server 2000 with Arabic collation during the
installation. Yet, I'm not able to search for Farsi strings. What
should I do?
topolou (topolou2000@.yahoo.com) writes:
> I've installed SQL Server 2000 with Arabic collation during the
> installation. Yet, I'm not able to search for Farsi strings. What
> should I do?

Unfortunately there is no Farsi collation for SQL Server, but the Arabic
collations are as close as you can come. I have learnt that these
collations do not sort Farsi correctly.

Why you cannot find Farsi data I don't know, but note that you need
to delimit Unicode literals with N'', for instance N'nisse'.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
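
To illustrate Erland's point about the N prefix (table and column names are hypothetical): without it, the literal is converted to the connection's default non-Unicode code page before the comparison, and Farsi characters typically degrade to '?' so the search never matches.

SELECT * FROM dbo.People WHERE FullName = N'nisse'  -- literal stays Unicode
SELECT * FROM dbo.People WHERE FullName = 'nisse'   -- converted via the code page first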

Farms

I'm new to the architect world and I'm trying to solve a performance problem with a SQL Server cluster. But, in talking to some folks they were recommending that I write a design based on a SQL Server "farm". However, when I ask them for more information about what a "farm" is, they start to shuffle their feet!

So, what is a farm? Is it better than a cluster? Where can I find information on creating a SQL Server farm?

Many thanks!
|||I haven't heard the term "farm" used too much in conjunction with SQL Server installations. But there is some info in this doc (found via Google): http://www.dell.com/downloads/global/power/4q03-mis.pdf ... looks like a pretty good article, either way.
-- Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--
<Laurence Hunter@.discussions.microsoft.com> wrote in message news:44b215d6-a25e-44ac-83d8-c18517f64798@.discussions.microsoft.com...
I'm new to the architect world and I'm trying to solve a performance problem with a SQL Server cluster. But, in talking to some folks they were recommending that I write a design based on a SQL Server "farm". However, when I ask them for more information about what a "farm" is, they start to shuffle their feet!
So, what is a farm? Is it better than a cluster? Where can I find information on creating a SQL Server farm?
Many thanks!

Format number into Date

I am reporting off an oracle database that has a field with a number that is
the date. ie 20041201 (yyyymmdd)
What would be the best way to convert this into a date with the format
dd/mm/yy.
I regularly use this field with a parameter to allow my users to generate
their own reports.
Thanks in advance
Look at the DateTime structure and its constructors:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemdatetimememberstopic.asp
The following expression would convert the field into a valid DateTime
object:
=new DateTime(Left(CStr(Fields!Date.Value), 4), Mid(CStr(Fields!Date.Value),
5, 2), Right(CStr(Fields!Date.Value), 2))
-- Robert
This posting is provided "AS IS" with no warranties, and confers no rights.
"Tango" <Tango@.discussions.microsoft.com> wrote in message
news:9EA0FDAC-20C4-459A-AD30-AE5F439A38B2@.microsoft.com...
>I am reporting off an oracle database that has a field with a number that
>is
> the date. ie 20041201 (yyyymmdd)
> What would be the best way to convert this into a date with the format
> dd/mm/yy.
> I regularly use this field with a parameter to allow my users to generate
> their own reports.
> Thanks in advance
|||Thank you Robert,
At what point would you recommend i use this expression.
Is it possible to use at the dataset stage and if so how.
Regards
Todd
"Robert Bruckner [MSFT]" wrote:
> Look at the DateTime structure and its constructors:
> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemdatetimememberstopic.asp
> The following expression would convert the field into a valid DateTime
> object:
> =new DateTime(Left(CStr(Fields!Date.Value), 4), Mid(CStr(Fields!Date.Value),
> 5, 2), Right(CStr(Fields!Date.Value), 2))
>
> -- Robert
> This posting is provided "AS IS" with no warranties, and confers no rights.
>
> "Tango" <Tango@.discussions.microsoft.com> wrote in message
> news:9EA0FDAC-20C4-459A-AD30-AE5F439A38B2@.microsoft.com...
> >I am reporting off an oracle database that has a field with a number that
> >is
> > the date. ie 20041201 (yyyymmdd)
> >
> > What would be the best way to convert this into a date with the format
> > dd/mm/yy.
> > I regularly use this field with a parameter to allow my users to generate
> > their own reports.
> >
> > Thanks in advance
>
>
|||If you want to perform the conversion in the query instead of inside the
report (or by adding a calculated field on the dataset), you have to find
function calls in the query language that allow you to convert the field
into a DateTime object. The data provider / database server of the data
source will determine the language that has to be used (T-SQL, PL/SQL, etc.)
-- Robert
This posting is provided "AS IS" with no warranties, and confers no rights.
"Tango" <Tango@.discussions.microsoft.com> wrote in message
news:63B337CF-3563-4DA4-AC77-18429117803F@.microsoft.com...
> Thank you Robert,
> At what point would you recommend i use this expression.
> Is it possible to use at the dataset stage and if so how.
> Regards
> Todd
> "Robert Bruckner [MSFT]" wrote:
>> Look at the DateTime structure and its constructors:
>> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemdatetimememberstopic.asp
>> The following expression would convert the field into a valid DateTime
>> object:
>> =new DateTime(Left(CStr(Fields!Date.Value), 4),
>> Mid(CStr(Fields!Date.Value),
>> 5, 2), Right(CStr(Fields!Date.Value), 2))
>>
>> -- Robert
>> This posting is provided "AS IS" with no warranties, and confers no
>> rights.
>>
>> "Tango" <Tango@.discussions.microsoft.com> wrote in message
>> news:9EA0FDAC-20C4-459A-AD30-AE5F439A38B2@.microsoft.com...
>> >I am reporting off an oracle database that has a field with a number
>> >that
>> >is
>> > the date. ie 20041201 (yyyymmdd)
>> >
>> > What would be the best way to convert this into a date with the format
>> > dd/mm/yy.
>> > I regularly use this field with a parameter to allow my users to
>> > generate
>> > their own reports.
>> >
>> > Thanks in advance
>>
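
If the conversion is pushed into a T-SQL query instead, CONVERT style 112 (yyyymmdd) parses such a number and style 3 formats it as dd/mm/yy; a sketch with hypothetical column and table names:

SELECT CONVERT(varchar(8),
       CONVERT(datetime, CONVERT(char(8), DateNum), 112),
       3) AS DateDDMMYY
FROM dbo.SomeTable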

FAQ: Are there any whitepapers about building a Disaster Recovery site at a remote location

Hi,

Sorry for the wide distribution.

I'm trying to find any useful whitepapers about how to effectively build and operate a disaster recovery site at a remote location for SQL Server 2000. Does anyone know where to find such information?

I also know that one good option for my customer is using the Mirroring feature of SQL Server 2005. What are the other options? Is Replication an effective one for a mission-critical database (online banking)?

Thanks in advance

here are some articles and white papers that may be helpful to you:

SQL Server 2005 Failover Clustering White paper
http://www.microsoft.com/downloads/details.aspx?FamilyID=818234dc-a17b-4f09-b282-c6830fead499&DisplayLang=en

Database Mirroring in SQL Server 2005
http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirror.mspx

SQL Server 2000 High Availability Series
https://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/harag01.mspx

SQL Server 2000 Backup and Restore
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlbackuprest.mspx

|||

I'm not trying to endorse products and I'm not claiming to know the most about this topic. There are many people that have a lot more insight into the specifics but these are intended to give you some quick and dirty information.

I haven't been able to find anything that specifically addresses the DR for SQL Server 2000 that is all-encompassing. In general there are some things that you could consider that vary in costs as your recovery techniques become more sophisticated.

Some options that are available within SQL 2000 are:

Log shipping: I believe that you send log backups and restore them at the far end without recovery. When a disaster occurs you restore the last log backup with the WITH RECOVERY option and you're off and running on your backup server.
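
For reference, a minimal sketch of that restore sequence on the standby server (file paths and database name are hypothetical):

RESTORE DATABASE MyDb FROM DISK = 'D:\dr\MyDb_full.bak' WITH NORECOVERY
RESTORE LOG MyDb FROM DISK = 'D:\dr\MyDb_log_0800.trn' WITH NORECOVERY
-- ...keep applying log backups WITH NORECOVERY as they arrive...
RESTORE LOG MyDb FROM DISK = 'D:\dr\MyDb_log_last.trn' WITH RECOVERY  -- at failover time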

Replication: The freshness of the data at the far end will depend on the replication model chosen. Real time can be very demanding if you're planning on replicating every table in every database on your SQL Server. Generally speaking there is less overhead the more specific you could be. There are some functionality constraints that you will have to investigate to see how well they will work for you. For example if you add a column to a table that is being replicated you will have to manually add that column to the subscription for the subscribers to receive it. This is a very simple example.

Some options that are available within SQL 2005

Log shipping: Works similarly to SQL 2000.

Replication: Much improved over SQL 2000. Subscriptions automatically add columns added to tables now. Very slick.

Database Mirroring: Just had this functionality enabled with SP1. There is a lot to consider with this model. There is a primary database server, a secondary or backup database server, and a witness database server. The witness's function is to control when the failover happens, amongst other things. One of the downfalls is incorrect failovers. There are instances of the witness server incorrectly identifying a failed primary server and forcing traffic to the backup. Also I'm not sure how smooth the failback works.

Warm database server just needing to have SQL databases restored. This is a very low-cost and not very real-time option. This is SQL version independent.

There are several different third party High Availability/ Disaster Recovery products that are out there. Depending on how sophisticated a failover you'll need to have.

Sonasoft - Sonasafe for SQL Server - www.sonasafe.com

Neverfail - Neverfail for SQL Server - www.neverfailgroup.com

This is a very broad topic to try to address; I'm sorry if this information hasn't been helpful. If you'd like to compare notes, I'd be glad to discuss some of my findings. I'm going through a very similar experience right now, except we are a SQL 2005-only shop.

The other thing that you didn't mention is whether or not you're considering clustering. This adds a whole additional level of complexity to the considerations.

Drew Flint

FAQ on SQL Server connect issues?

Seems like there are so many different variations and flavors of connect
issues, surely there is a FAQ or a KB that documents all (or most) of them?
There are KB articles that document specific areas and
common causes for certain errors. I can't think of anything
that documents all of these in one place as it would need to
cover a lot of territory - ODBC, OLE DB, JDBC, ADO, ADO.Net,
Linked Servers, Novell networks, Web applications, client
applications written in various languages, MDAC issues,
Server configurations, network issues, different SQL Server
versions, etc, etc.
There is a web cast on troubleshooting connectivity issues
if that's something along the lines of what you are looking
for:
Support WebCast: Microsoft SQL Server 2000: Troubleshooting
Connectivity
http://support.microsoft.com/?id=812817
-Sue
On Mon, 24 May 2004 22:06:31 -0500, "Earl"
<brikshoe<at>comcast<.>net> wrote:

>Seems like there are so many different variations and flavors of connect
>issues, surely there is a FAQ or a KB that documents all (or most) of them?
>
|||Earl,
'How to troubleshoot connectivity issues in SQL Server 2000'
http://support.microsoft.com/default.aspx?id=827422
Dinesh
SQL Server MVP
--
SQL Server FAQ at
http://www.tkdinesh.com
"Earl comcast net>" <brikshoe<at.> wrote in message
news:ucX0DVgQEHA.2452@.TK2MSFTNGP11.phx.gbl...
> Seems like there are so many different variations and flavors of connect
> issues, surely there is a FAQ or a KB that documents all (or most) of
them?
>

Faq

How does one post one? I have been plodding around this joint for almost 2 years. I think I may be able to help somebody (provided blindman (my fav) is asleep). I'm sorry if i've missed it. :cool: a trib
|||I actually plan to build a true FAQ thread, and "sticky" it in the forum. The FAQ will be closed to keep the thread from running amok, so that members will need to ask one of the Forum Moderators to add a new FAQ entry for them.

Please feel free to contribute anything you think belongs in a FAQ. I won't guarantee every offering will become part of the FAQ, but I will guarantee that every offering will be considered (at least as long as I'm a moderator anyway).

-PatP
|||How does one post one? I have been plodding around this joint for almost 2 years. I think I may be able to help somebody (provided blindman (my fav) is asleep). I'm sorry if i've missed it. :cool: a trib
Congratulations. You are now president of my fan club.
But the blindman never sleeps...

FAO Hilary Cotter

Hilary,
please can you send me an alternative email address - I seem to be marked as
a dodgy spammer on your current one:
"<smtp-out1.blueyonder.co.uk #5.5.0 smtp;550-195.188.213.4 blocked by
blacklist.mail.ops.worldnet.att.net.>"
Thanks,
Paul
hilarycotter1@.hotmail.com
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:%23uDacMWiEHA.1280@.TK2MSFTNGP09.phx.gbl...
> Hilary,
> please can you send me an alternative email address - I seem to be marked
as
> a dodgy spammer on your current one:
> "<smtp-out1.blueyonder.co.uk #5.5.0 smtp;550-195.188.213.4 blocked by
> blacklist.mail.ops.worldnet.att.net.>"
> Thanks,
> Paul
>

fancy select needed. Thanks for your help!

Hi,
I have a table like the following:
ID | Name
--
1 | Name 1
1 | Name 2
2 | Name 3
2 | Name 4
2 | Name 5
I now want to do a select which results in the following:
ID | Name
--
1 | Name 1, Name 2
2 | Name 3, Name 4, Name 5
Does anybody know how this could be done? Thank you a lot in advance!
Daniel
Better to do it in your client app / reporting tool / programming language.
Do not stress the server.
AMB
"Daniel Walzenbach" wrote:

> Hi,
> I have a table like the following:
>
> ID | Name
> --
> 1 | Name 1
> 1 | Name 2
> 2 | Name 3
> 2 | Name 4
> 2 | Name 5
>
> I now want to do a select which results in the following:
> ID | Name
> --
> 1 | Name 1, Name 2
> 2 | Name 3, Name 4, Name 5
> Does anybody know how this could be done? Thank you a lot in advance!
> Daniel

fancy select needed. Thanks for your help!

Here you go:
create table tt (
colA int,
colB varchar(10)
)
insert into tt values(1,'NAME 1')
insert into tt values(1,'NAME 2')
insert into tt values(2,'NAME 3')
insert into tt values(2,'NAME 4')
insert into tt values(2,'NAME 5')
select * from tt
colA colB
-- --
1 NAME 1
1 NAME 2
2 NAME 3
2 NAME 4
2 NAME 5
select cola, col1, col2, col3 = ( select min(colb) from
tt where tt2.cola = tt.cola and tt.colb > tt2.col2 )
from
(
select cola, col1, col2 = ( select min(colb) from tt
where tt1.cola = tt.cola and tt.colb > tt1.col1 )
from
(
select cola, min(colb) col1
from tt
group by cola
) tt1
) tt2
cola col1 col2 col3
-- -- -- --
1 NAME 1 NAME 2 NULL
2 NAME 3 NAME 4 NAME 5

>--Original Message--
>Hi,
>I have a table like the following:
>
>ID | Name
>--
>1 | Name 1
>1 | Name 2
>2 | Name 3
>2 | Name 4
>2 | Name 5
>
>I now want to do a select which results in the following:
>ID | Name
>--
>1 | Name 1, Name 2
>2 | Name 3, Name 4, Name 5
>Does anybody know how this could be done? Thank you a
lot in advance!
> Daniel
>.
>
Thank you!
"Lito Dominguez" <anonymous@.discussions.microsoft.com> schrieb im
Newsbeitrag news:197d01c504b3$217eeab0$a601280a@.phx.gbl...
> Here you go:
> create table tt (
> colA int,
> colB varchar(10)
> )
> insert into tt values(1,'NAME 1')
> insert into tt values(1,'NAME 2')
> insert into tt values(2,'NAME 3')
> insert into tt values(2,'NAME 4')
> insert into tt values(2,'NAME 5')
> select * from tt
> colA colB
> -- --
> 1 NAME 1
> 1 NAME 2
> 2 NAME 3
> 2 NAME 4
> 2 NAME 5
> select cola, col1, col2, col3 = ( select min(colb) from
> tt where tt2.cola = tt.cola and tt.colb > tt2.col2 )
> from
> (
> select cola, col1, col2 = ( select min(colb) from tt
> where tt1.cola = tt.cola and tt.colb > tt1.col1 )
> from
> (
> select cola, min(colb) col1
> from tt
> group by cola
> ) tt1
> ) tt2
> cola col1 col2 col3
> -- -- -- --
> 1 NAME 1 NAME 2 NULL
> 2 NAME 3 NAME 4 NAME 5
>
> lot in advance!
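
For what it's worth, on SQL Server 2005 and later the comma-separated list can be produced directly with FOR XML PATH; a sketch against the tt table used above:

SELECT t.colA,
       STUFF((SELECT ', ' + t2.colB
              FROM tt t2
              WHERE t2.colA = t.colA
              ORDER BY t2.colB
              FOR XML PATH('')), 1, 2, '') AS names
FROM tt t
GROUP BY t.colA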

Fan Trap or Multiple one-to-many joins

I'm having a problem creating a view on some data that involves two one-to-many joins like this:

tbl1 m--> tbl2 <--n tbl3

and I need to create a view on the data from all three tables without duplicates from tbl1 and tbl3.

The problem is that tbl1 and tbl3 are not related at all, except that they are linked by data in tbl2.

Think of it like this: you have a project, which can have multiple consultants, and multiple stakeholders, and the data must be returned in such a way that each consultant and each stakeholder appears once in the output (the project name must appear multiple times of course). The issue is that the following two datasets are logically distinct but semantically identical:

proj A, consultant A, stakeholder A
proj A, consultant B, stakeholder B
--
proj A, consultant B, stakeholder A
proj A, consultant A, stakeholder B

but what I'm aiming for is:

proj A, consultant A, stakeholder A
proj A, consultant B, stakeholder B

I've heard this described as a fan trap, but usual solutions involve reorganzing the data so that tbl3 joins tbl1 joins tbl2, but in my case there is no link there

Incidentally, the intended platform for this is DB2 and/or SQL Server. Any help appreciated, thanks.
|||your examples are not very clear

you start out by diagramming tbl1, tbl2, tbl3, and then immediately switch to projects, consultants, and shareholders, without showing the actual data in these tables, just some apparent cross join query results

you might wish to show a more comprehensive example, because so far, it's hard to understand what you're asking
|||I've heard this described as many things, most of which aren't polite to repeat. ;)

You're trying to figure out how to build a join to show the relationship between consultants and stakeholders, to produce a one-to-one join between two tables that explicitly have no relationship. If you figure out how to make this happen, please let me know... I'm sure that I'll be fascinated by the explanation!

You've got a clear relationship between project and stakeholder, and another relationship between project and consultant. You don't explicitly state that there must be a stakeholder or a consultant for any given project, and at some point in the project's life I can guarantee that there will not be one of either. You don't explicitly state that there must be one stakeholder for every consultant. If you think about these requirements, unless at least one of these requirements is false, you can't get the output you want... There ain't no way to git there from here.

You need to rethink either the specifications or the requirements. Something has got to give because using the definitions that you've given, the present problem can't be solved.

-PatP
|||ah, so that's what that is -- i had never heard that terminology before

this pdf is a pretty good explanation --
http://support.businessobjects.com/documentation/installation_resources/5i/tips_and_tricks/pdf/universe_design/ut001.pdf

i would solve this problem with a UNION query --

proj A, consultant A
proj A, consultant B
proj A, stakeholder A
proj A, stakeholder B
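
Spelled out as SQL, that UNION approach reports each relationship once instead of fanning out across the two unrelated sides (table and column names are hypothetical):

SELECT p.ProjName, c.ConsultantName AS Party, 'consultant' AS Role
FROM Project p
INNER JOIN ProjConsultant c ON c.ProjId = p.ProjId
UNION ALL
SELECT p.ProjName, s.StakeholderName, 'stakeholder'
FROM Project p
INNER JOIN ProjStakeholder s ON s.ProjId = p.ProjId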

Family Tree Branches

Okie, this one has me a little stumped so bear with me and I will explain as best I can...

I have a family tree database...

The two key tables (for this problem) are Person and Child. The person table is your generic person details table and contains, PersonId, PersonFirstName, PersonLastName, PersonDOB etc. The Child table is a linking table that links a PersonId to a ChildPersonId (so both Id's are valid Id's in the person table).

Now what I want to try and achieve is a breakdown of the different branches of the family tree. A branch is an independent, unattached section of the tree (if that makes sense). It's a grouping of people we know are related but we can't determine how they are related to another group of people in the tree.

If you look at this http://gw.geneanet.org/index.php3?b=bengos&lang=en;m=N;v=van+halewyn you will get an idea of what I mean.

I'm not sure if this is something that can be done with a query at all to be honest... I suspect that I will have to wrap some other code around it, but I'm really not sure on what approach I should be using. Any help people could offer would be great.
|||Are you just looking for all the roots? If that's the case, this would show you that:

SELECT c1.PersonId
FROM
Child c1
LEFT OUTER JOIN Child c2 ON c1.PersonId = c2.ChildPersonId
WHERE c2.ChildPersonId IS NULL

This will give you the root of all branches. You can then build your individual branches from there. Make sense?
|||linking table can be modified with an identity column for each parent-child
relation, so that any branch of the tree can be identified with the identity column.
|||Are you sure that is going to work...? It would find the root nodes but there could be multiple root nodes for a tree couldn't there? The family tree itself isn't really a tree structure... it expands in both directions from any given point.

It expands up (for example) due to the children nodes and up because of the parent nodes...
|||currently what I am doing is

1. Getting all PersonIds and creating an array.
2. Take the first Id and get all the children and parent and spouse ids
3. Get all the child and parent and spouse id's of those found in step 2.
- repeat until no more ids are found, this means the end of a branch
4. go back to array and select the first id that is not one of those that has been found so far
5. return to step 2 until all ideas have been found or searched on

this gives me all the branches.... not sure how to structure the display but it's a start.
|||Can u post the table structure?
|||CREATE TABLE [dbo].[Child] (
[PersonId] [int] NULL ,
[ChildPersonId] [int] NULL
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[Person] (
[PersonId] [int] NOT NULL ,
[Firstnames] [nvarchar] (60) NULL ,
[Surname] [nvarchar] (60) NULL ,
[Title] [nvarchar] (100) NULL ,
[Gender] [nvarchar] (1) NULL ,
[DOBDay] [int] NULL ,
[DOBMonth] [int] NULL ,
[DOBYear] [int] NULL ,
[POBLocale] [nvarchar] (50) NULL ,
[POBCountry] [nvarchar] (50) NULL ,
[DODDay] [int] NULL ,
[DODMonth] [int] NULL ,
[DODYear] [int] NULL ,
[PODLocale] [nvarchar] (50) NULL ,
[PODCountry] [nvarchar] (50) NULL ,
[POBurial] [nvarchar] (50) NULL
) ON [PRIMARY]
GO

Now for the Child table both ChildPersonId and PersonId link to a valid PersonId in the Person table|||I would like to suggest 3 more columns in Child Table. RealtionId and ParentRealtionId
And RelationHierarchy.

CREATE TABLE [dbo].[Child] (
[RelationId] [int] NOT NULL,
[ParentRealtionId] [int] NULL,
[PersonId] [int] NULL ,
[ChildPersonId] [int] NULL ,
[RelationHierarchy] varchar(4000)
)
GO

so the tables look like

Person Table:

[PersonId] [Firstnames] [Surname]
-------------------
1 Gauthier van Halewyn ..

2 Rogier van Halewyn ..

3 Jacques van Halewyn..

4 Roland van Halewyn ..


Child Table:

[RelationId] [ParentRealtionId] [PersonId] [ChildPersonId] [RelationHierarchy]
-----------------------------
10 null null 1 null
20 10 1 2 10~
30 20 2 3 10~20~
40 30 3 4 10~20~30~

assumptions: Every PersonId occurs only once as a Child PersonId in Child Table.

So to find the children and grand children of a person
All u have to do is select the RelationHierarchy and RelationId from the child table
Concat both and search.

To find the family tree of Gauthier van Halewyn (PersonId=1)



DECLARE @RelationHierarchy VARCHAR(4000)

SELECT @RelationHierarchy = ISNULL(RelationHierarchy, '') + '~' + CONVERT(varchar(20), RelationId) + '~%' FROM [Child] WHERE [ChildPersonId] = 1

SELECT * FROM [Child] WHERE RelationHierarchy LIKE @RelationHierarchy

I'm not sure this is going to answer all your questions but this will definitely prove helpful.
|||Hmm,... I see what you are saying... I have thought about that myself,.. adding a generation field or something somewhere... I don't want to complicate things too much though,.. but I think I might need to in order to achieve what I would like... thanks for the input

False number of rows in sp_spaceused and sysindexes

Hi,
On a server with WS2003 / SQL2000 SP3, I must run every day DBCC UpdateUsage
(0) With Count_Rows to correct the number of rows returned by sp_spaceused,
Enterprise Manager or in table sysindexes.
For example, I can have 318000 instead of 318256, 3456 instead of 3458
There was not this problem on a server with W2000 / SQL2000 SP1.
Anyone has an idea ?
Thanks,
Sebastien
I think you must have had statistics set to auto-update on the 2000 server,
and no longer have this set on the 2003 server.
I've never relied on sp_spaceused unless I also update stats before I run
it.
--
http://www.aspfaq.com/
(Reverse address to reply.)
"Sebastien" <nospam@.nospam.com> wrote in message
news:uXpFj4YoEHA.1272@.TK2MSFTNGP09.phx.gbl...
> Hi,
> On a server with WS2003 / SQL2000 SP3, I must run everyday DBCC
> UpdateUsage
> (0) With Count_Rows to correct the number of rows returned by
> sp_spaceused,
> Enterprise Manager or in table sysindexes.
> For example, I can have 318000 instead of 318256, 3456 instead of 3458
> There was not this problem on a server with W2000 / SQL2000 SP1.
> Anyone has an idea ?
> Thanks,
> Sebastien
>
>
|||Hi Sebastien,
Thanks for your post.
As MVP Aaron said, there are some known issues in sp_spaceused, however,
you'd better take the following steps to see whether they make any difference.
First of all, take the latest updates of SQL Server. Some known issues are
fixed. For example,
FIX: Statistics Maintenance Creates Significant Bottleneck on SQL Servers
that Use 16 or More CPUs
http://support.microsoft.com/?id=293849
Secondly, run the command sp_spaceused like below
sp_spaceused @updateusage = true
Note that when you include the "@updateusage=true" parameter for
sp_spaceused, the sp will run DBCC UPDATEUSAGE, which will update the
allocation information in sysindexes to ensure that it provides accurate
information. Running DBCC UPDATEUSAGE on the whole database is a
resource-intensive task and can have a significant impact on the
performance of a system. Note that sysindexes is also updated at
checkpoints (though not as thoroughly), so during peak load times you may
want to investigate omitting the @updateusage parameter and running
"CHECKPOINT" before sp_spaceused instead.
Thank you for your patience and cooperation. If you have any questions or
concerns, don't hesitate to let me know. We are here to be of assistance!
Sincerely yours,
Mingqing Cheng
Online Partner Support Specialist
Partner Support Group
Microsoft Global Technical Support Center
---
Introduction to Yukon! - http://www.microsoft.com/sql/yukon
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!
|||No, auto-update statistics option is ON on the 2 servers for the database,
and auto-create is OFF.
sp_spaceused mytable,@.updateusage=true or update statistics don't correct
that.
Only DBCC UpdateUsage (0,mytable) With Count_Rows can correct !!!
"Aaron [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> a écrit dans le message de
news:u0KceEZoEHA.1644@.tk2msftngp13.phx.gbl...
> I think you must have had statistics set to auto-update on the 2000
server,
> and no longer have this set on the 2003 server.
> I've never relied on sp_spaceused unless I also update stats before I run
> it.
> --
> http://www.aspfaq.com/
> (Reverse address to reply.)
>
>
> "Sebastien" <nospam@.nospam.com> wrote in message
> news:uXpFj4YoEHA.1272@.TK2MSFTNGP09.phx.gbl...
> > Hi,
> >
> > On a server with WS2003 / SQL2000 SP3, I must run everyday DBCC
> UpdateUsage
> > (0) With Count_Rows to correct the number of rows returned by
> sp_spaceused,
> > Enterprise Manager or in table sysindexes.
> >
> > For example, I can have 318000 instead of 318256, 3456 instead of 3458
> >
> > There was not this problem on a server with W2000 / SQL2000 SP1.
> >
> > Anyone has an idea ?
> >
> > Thanks,
> > Sebastien
> >
> >
> >
>|||Thanks for your reply.
The server runs the French version of SQL2000 with SP3a and has only 4 Xeon
CPUs with HT.
sp_spaceused @updateusage = 'true' and sp_spaceused mytable, @updateusage = 'true' don't solve the problem;
only the DBCC command with COUNT_ROWS can correct that!!!
Sebastien
""Mingqing Cheng [MSFT]"" <v-mingqc@.online.microsoft.com> a écrit dans le
message de news:NWreWReoEHA.3212@.cpmsftngxa06.phx.gbl...
> Hi Sebastien,
> Thanks for your post.
> As MVP Aaron said, there are some known issues in sp_spaceused, however,
> you'd better take the following steps to see whether it make more effect.
> First of all, take the latest updates of SQL Server. Some known issues are
> fixed. For example,
> FIX: Statistics Maintenance Creates Significant Bottleneck on SQL Servers
> that Use 16 or More CPUs
> http://support.microsoft.com/?id=293849
> Secondly, run the command sp_spaceused like below
> sp_spaceused @.updateusage = true
> Note that when you include the "@.updateusage=true" parameter for
> sp_spaceused, the sp will run DBCC UPDATEUSAGE, which will update the
> allocation information in sysindexes to insure that it provides accurate
> information. Running DBCC UPDATEUSAGE on the whole database is a
> resource-intensive task and can have a significant impact on the
> performance of a system. Note that sysindexes is also updated at
> checkpoints (though not as thoroughly), so during peak load times you may
> want to invesitgate omitting the @.updateusage parameter and running
> "CHECKPOINT" before sp_spaceused instead.
> Thank you for your patience and corperation. If you have any questions or
> concerns, don't hesitate to let me know. We are here to be of assistance!
>
> Sincerely yours,
> Mingqing Cheng
> Online Partner Support Specialist
> Partner Support Group
> Microsoft Global Technical Support Center
> ---
> Introduction to Yukon! - http://www.microsoft.com/sql/yukon
> This posting is provided "as is" with no warranties and confers no rights.
> Please reply to newsgroups only, many thanks!
>
>|||Hi Sebastien,
Thanks for your further explanation.
Executing sp_spaceused computes the amount of disk space used for data and
indexes, and the disk space used by a table in the current database. If no
table (objname) is given, sp_spaceused reports on the space used by the
entire current database.
After you drop an index, sp_spaceused reports inaccurate information. This
is a known limitation. However, sp_spaceused provides a quick, rough
estimate. If you notice incorrect values when you execute sp_spaceused, run
DBCC UPDATEUSAGE to correct the inaccuracies. Because this statement takes
some time to run on large tables or databases, use it only when you suspect
incorrect values are being returned or when other users are not active.
I am afraid that you will have to run that DBCC command if sp_spaceused
@updateusage = 'true' doesn't do the trick. :(
I am sorry for the inconvenience it may have caused. Thank you for your patience
and cooperation. If you have any questions or concerns, don't hesitate to
let me know.
let me know.
Sincerely yours,
Mingqing Cheng
Online Partner Support Specialist
Partner Support Group
Microsoft Global Technical Support Center
---
Introduction to Yukon! - http://www.microsoft.com/sql/yukon
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!
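
To summarize the thread, a minimal script of the variants discussed (mytable is a placeholder name):

-- quick estimate; may report stale counts from sysindexes
EXEC sp_spaceused 'mytable'

-- asks sp_spaceused to run DBCC UPDATEUSAGE first (fixes allocation info,
-- but, per this thread, not necessarily the row count)
EXEC sp_spaceused 'mytable', @updateusage = 'true'

-- the only variant that corrected the counts here: recount the rows themselves
DBCC UPDATEUSAGE (0, 'mytable') WITH COUNT_ROWS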

False hits with Contains query

I'll try this question here too, since I couldn't get an answer in the T-SQL forum:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=166991&SiteID=1

I can't figure out why a query returns false results.
It is basically:
select *
from sometable
where somecolumn=1
and contains(someothercolumn, 'arti-b')
This query takes forever to process and returns about 100 false hits for every row where someothercolumn actually contains the string 'arti-b'.
Note, I don't get the full set of rows where somecolumn=1, so there is some filtering from the contains clause.
If I use the same query, searching for just 'arti', it works fine. 'whatever-b' seems to work as well, as long as 'whatever' != 'arti'.
Does anyone know what causes this?
SQL server Enterprise edition
version 8 (SP4)
Language: US english
Collation: Finnish_swedish.

Full-text search is a word-based tool for natural language searches. It's not designed to handle punctuation.

As suggested in http://support.microsoft.com/kb/200043/EN-US/:

Where non-alphanumeric characters must be used in the search criteria (primarily the dash '-' character), use the Transact-SQL LIKE clause instead of the FULLTEXT or CONTAINS predicates.

In other words, use this query:

select * from sometable
where somecolumn=1
and someothercolumn like '%arti-b%'

Steve Kass
Drew University
|||

I see. I never found that KB article.

I don't suppose this behaviour of the full text search engine can be configured somehow?

A LIKE-query is not an option for me. Someothercolumn is text and there are over a million rows in the sometable.

How does the search engine handle hyphens? Is it better to filter out any such strings or is it possible to get something acceptable by querying for CONTAINS(someothertable, "arti b") or perhaps "arti*" ?

/Gustav
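
One avenue worth testing (an untested suggestion, not a confirmed fix): the word breaker normally splits 'arti-b' into the two words 'arti' and 'b', so a phrase search over those words, or a prefix term, may come closer than the raw hyphenated string:

SELECT *
FROM sometable
WHERE somecolumn = 1
  AND CONTAINS(someothercolumn, '"arti b"')

-- or, as Gustav suggests, a prefix term
SELECT *
FROM sometable
WHERE somecolumn = 1
  AND CONTAINS(someothercolumn, '"arti*"')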

False error when trying to return data in datagrid

VB.NET 2003 / SQLS2K

The Stored Procedure returns records within Query Analyzer.
But when the Stored Procedure is called by ADO.NET, it produces the
following error message.

--------
Exception Message: Failed to enable constraints. One or more rows
contain values violating non-null, unique, or foreign-key constraints.
--------

--------
Exception Source: System.Data
--------

If I click OK past the error messages I will get data filling the
datagrid. However not as I would like to see it.

Even though it returns the proper data rows and includes all the
columns I asked for, it also returns plenty of columns I didn't ask for
(all the columns of the main table) and all those columns are filled
with "null"

In addition each row header contains a red exclamation mark which, when
hovered over, reads:

"Column 'cmEditedBy' does not allow DBNull.Values."

An interesting thing about this column 'cmEditedBy' is that there is
nothing wrong with it and all rows for that column contain data.

I believe this error is a mistake! But it probably indicates some other
problem. How should I track its cause?

M O R E ...
Below is the code in the data layer, the stored procedure, and the data
returned within query analyzer.

\\
'DataAdapter
Friend daView041CmptCyln As New SqlDataAdapter

'SqlCommand
Private daView041CmptCyln_CmdSel As New SqlCommand

'Add the command
daView041CmptCyln.SelectCommand = daView041CmptCyln_CmdSel

'Select
With daView041CmptCyln_CmdSel
.CommandType = CommandType.StoredProcedure
.CommandText = "usp_View_041Cmpt_ByJobCyln"
.Connection = sqlConn
With daView041CmptCyln_CmdSel.Parameters
.Add(New SqlParameter("@RETURN_VALUE", SqlDbType.Int, _
4, ParameterDirection.ReturnValue, False, CType(0, Byte), _
CType(0, Byte), "", DataRowVersion.Current, Nothing))
'Criteria
.Add("@fkJob", SqlDbType.Text).Value = _
"48c64a55-874d-40d0-addc-7245f5d9c118"
'.Add("@fkJob", SqlDbType.Text).Value = f050View.jobID
End With
End With
//

\\
ALTER PROCEDURE usp_View_041Cmpt_ByJobCyln
(@fkJob char(36))
AS SET NOCOUNT ON;

SELECT
JobNumber,
DeviceName,
ComponentName,

Description,
Quan,
Bore,
Stroke,
Rod,
Seconds,
CylPSI,
PosA,
PosB,
PosC,
PosD,
PosE,
HomeIsRet,
RetIsRetrac,
POChecks,
Regulated,
FlowControl,
PortSize,
LoadMass

FROM tbl040cmpt
INNER JOIN tbl030Devi ON fkDevice = pkDeviceId
INNER JOIN tbl020Proc ON fkProcess = pkProcessId
INNER JOIN tbl010Job ON fkJob = pkjobId
INNER JOIN lkp202ComponentType ON fkComponenttype = pkComponentTypeId
INNER JOIN lkp201DeviceType ON fkDeviceType = pkDeviceTypeId
INNER JOIN lkp101PortSize on cmSmallint05 = pkPortSizeId

WHERE
(fkJob = @fkJob)
--fkJob = '48c64a55-874d-40d0-addc-7245f5d9c118'
AND fkComponentType = 2

GO
//

(note - columns are wrapped)
\\
[six sample result rows for JobNumber F1111 - Clip Driver, Punch Mech, and
Degate cylinders with their quantities, dimensions, flag bits, port sizes
such as 1/8 NPT, and trailing NULLs - omitted here; the line wrapping in the
original post garbled them beyond reliable reconstruction]
//
|||dbuchanan (dbuchanan52@.hotmail.com) writes:
> VB.NET 2003 / SQLS2K
> The Stored Procedure returns records within Query Analyzer.
> But when the Stored Procedure is called by ADO.NET ~ it produced the
> following error message.
> --------
> Exception Message: Failed to enable constraints. One or more rows
> contain values violating non-null, unique, or foreign-key constraints.
> --------

Nah, it sounds as if that message is produced by the .Net Framework. The
stored procedure pleads innocence.

> Even though it returns the proper data rows and includes all the
> columns I asked for, it also returns plenty of columns I didn't ask for
> (all the columns of the main table) and all those columns are filled
> with "null"
> In addition each row header contains a red exclaimation mark whch when
> hovered over reads;
> "Column 'cmEditedBy' does not allow DBNull.Values."

Well, that column is not in the result set, so obviously when you try
to populate the DataSet, NULL values are all you get. And apparently
they are not permitted.

I don't have that much experience of ADO .Net, but it sounds to me that
you have run some wizard that has constructed your dataset, and you then
have not been careful about which columns to include. (Personally, if I were
to work with data sets, I would probably construct them manually.) Or is
there some thought behind including columns that are not reported by the
query?

While not relevant to your problem, permit me to point out an issue of style
with your query: it joins six tables, yet no column is prefixed
with any alias (or the table name). This makes it very difficult for
anyone who looks at the query to tell which table the columns are coming
from. It also means that if the DBA adds, say, "Description" to one
more table, the procedure will no longer compile because that column name
is now ambiguous.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
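
To illustrate the aliasing Erland suggests, a partial rewrite of the query; which table owns each join column below is a guess from the original, so adjust to the real schema:

SELECT
j.JobNumber,
d.DeviceName,
c.ComponentName
-- ...and so on: every column qualified with its table's alias
FROM tbl040cmpt AS c
INNER JOIN tbl030Devi AS d ON c.fkDevice = d.pkDeviceId
INNER JOIN tbl020Proc AS p ON d.fkProcess = p.pkProcessId
INNER JOIN tbl010Job AS j ON p.fkJob = j.pkjobId
WHERE p.fkJob = @fkJob
AND c.fkComponentType = 2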

False "LIKE" hits

Hi All... We have an application where a table has a field of type text that contains readable text, but some of it may have an HTML tag in it - specifically an HTML <img> tag. For example, it may be something like:

BLAH BLAH BLAH <img src='RetrieveImage.aspx?ImageId=1234'> BLAH BLAH BLAH.

The problem we're seeing is that when searching that field using LIKE, it's returning records that do in fact satisfy the SQL query, but we'd like it not to. That is, we'd like to exclude those HTML tags from the search.

For example:

SELECT ... WHERE TextField LIKE '%retr%'

is returning records that have those HTML tags. Yes, I know SQL is only doing its job... Anyone have any ideas to exclude those tags from the LIKE search? Thanks! -- Curt

Curt, I'm not clear on if you want to filter out the rows that have ANY html tags? or you're saying that part of a single column has a mixture of data AND html and you'd want to search only the non-html part of the column?

So, if you search this string below for the word "White" you want the row, but if you search on the word "Image" you do NOT want the row?

White Horse <img src='RetrieveImage.aspx?ImageId=1234'>

so, you want to filter out everything between the "<" and ">" ?

You could write a function that parses each line, and searches for your string...

Bruce

|||

If I understand you correctly, you want the return to be:

BLAH BLAH BLAH ... BLAH BLAH BLAH

If that is a correct interpretation, it's not going to be either easy or pretty. By that I mean you will have to parse the data fields on the search which is really going to increase the time required to search. Indexes will not be used.

If this is the path you wish to take, you will need to create a User Defined Function that will take the entire field and strip out all characters between paired angle brackets.

|||

Thanks for the replies, Bruce and Arnie. I'm sorry for not making the issue very clear - believe it or not, it took me some time to figure out the wording I did manage to get down...

Using the statement SELECT * From TheTable WHERE TheField LIKE '%RETR%'

I would want the following record to be included:

BLAH RETR BLAH <img src='RetrieveImage.aspx?ImageId=1234'> BLAH BLAH BLAH

But I would NOT want the following record included:

BLAH BLAH BLAH <img src='RetrieveImage.aspx?ImageId=1234'> BLAH BLAH BLAH

In any records that are included, I would want the text returned as it appears - that is nothing filtered out.

You both pointed in the direction of writing a function to filter out that "<....>" data before applying a search. And yeah, that just adds to the overhead of the search... And quite frankly, I'm a bit nervous about that anyway - there are going to be a lot of these records and that LIKE just seems expensive. I've also been looking at indexing the text and using CONTAINS (does that sound right?). We're also considering a sort of application-specific index of the text before we put it in the table, as a lot of the searches are somewhat predictable.

|||

If there is only a single tag per row, you could have a WHERE clause that includes BOTH the substring that precedes "<" and the substring that follows ">". That should not require a function.

If there are multiple "<...>" entries per row, this method would not work as desired.

Dan
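
A sketch of Dan's single-tag idea (my wording of it, using PATINDEX since the column is of type text; the pattern literals are assumptions):

Code Snippet

SELECT *
FROM TheTable
WHERE PATINDEX('%RETR%', TheField) > 0
  AND ( PATINDEX('%RETR%', TheField) < PATINDEX('%<%', TheField)
     OR PATINDEX('%RETR%', TheField) > PATINDEX('%>%', TheField) )
-- only meaningful when at most one <...> tag appears per row, as noted above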

|||

IF, and that is a big IF in my opinion, the data is consistent and your sample correctly reflects the search value, this could work:

Code Snippet


DECLARE @MyTable table
( RowID int IDENTITY,
Comment varchar(max)
)


INSERT INTO @MyTable VALUES ( 'BLAH RETR BLAH <img src=''RetrieveImage.aspx?ImageId=1234''> BLAH BLAH BLAH' )
INSERT INTO @MyTable VALUES ( 'BLAH BLAH BLAH <img src=''RetrieveImage.aspx?ImageId=1234''> BLAH BLAH BLAH' )


SELECT Comment
FROM @MyTable
WHERE ( Comment LIKE '% RETR %'
AND Comment NOT LIKE '%=''RETR%'
)

And of course, you could 'build up' the search values using parameters and constants.

This feels so 'unclean' that now I have to go take a shower... ;-)

|||

slightly shorter version.

Code Snippet

SELECT Comment
FROM @MyTable
WHERE (Comment LIKE '%[ ]RETR[ ]%')

|||This problem was born for regular expressions:
http://msdn.microsoft.com/msdnmag/issues/07/02/SQLRegex/default.aspx

you could also use charindex:

Code Snippet

SELECT *
FROM TheTable
WHERE charindex('RETR', TheField)
      NOT BETWEEN charindex('<', TheField)
          AND charindex('>', TheField)

|||Spent far too long on this but here goes.

The following allows for any number of HTML Tags in the field.

Tested with the following:

Code Snippet

SELECT * into TheTable
From(
select 'BLAH RETR BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH' as TheField
union all
select 'BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH'
union all
select 'BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH RETR BLAH <img src="RetrieveImage.aspx?ImageId=1234"> '
union all
select 'BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> '
union all
select 'BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAHRETRBLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> '
union all
select 'BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> BLAH BLAH BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> RETR BLAH BLAH BLAH BLAH <img src="RetrieveImage.aspx?ImageId=1234"> '
) as t1

create a table Numbers with a single field Num populated 1 to x where x = a number larger than the longest field you expect to work with:

Code Snippet

select * into numbers from (
select 1 as Num union all
select 2 as Num union all
select 3 as Num union all
...
select 2999 as Num union all
select 3000 as Num) as T1
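
As an aside, a shorter way to populate such a Numbers table (a sketch; syscolumns is just a convenient row source):

Code Snippet

SELECT TOP 3000 IDENTITY(int, 1, 1) AS Num
INTO Numbers
FROM syscolumns a CROSS JOIN syscolumns b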

Then use this query:

Code Snippet

select distinct TheField
from (
    select
        TheField,
        case
            when substring(TheField, Num, 1) = '>'
                 or Num = 1
            then substring(TheField, Num + 1, charindex('<', TheField + '<', Num) - Num - 1)
            else ''
        end as c3
    from TheTable, Numbers
) as T1
where c3 like '%RETR%'

Fallback ? 2005 to 2000 !

Looks very hard to me.
Has anyone done a successful fallback? And survived the process? The doco
I have found shows it to be a total monster. I appear to have the choice of
no fallback at all;
export/import data (approx 100GB per prod server!);
use a third party utility (approx 100GB per prod server);
setup replication from 2005 back to 2000;
I have proved that you can't restore back from 2005 to 2000, nor can you
detach/attach back to 2000.
There just has to be a fast way to do this.
TIA.|||Jim,
One more option is to create an SSIS package. Unfortunately I have not played with it
personally.
"Jim Trowbridge" <JimTrowbridge@.discussions.microsoft.com> wrote in message
news:0C61339A-ABE5-4C6D-835C-6EAFFC76FF44@.microsoft.com...
> Looks very hard to me.
> Has anyone done a successfull fallback ? And survived the process. The
> doco
> I have found shows it to be a total monster. I appear to have the choice
> of
> no fallback at all;
> export/import data (approx 100GB per prod server!);
> use a third party utility (approx 100GB per prod server);
> setup replication from 2005 back to 2000;
> I have proved that you can't restore back from 2005 to 2000, nor can you
> detach/attach back to 2000.
> There just has to be a fast way to do this.
> TIA.|||Fallback to 2000 is not easy. Think about it, MSFT really does not want you
to turn back. With that said, the best approach is to test your upgrade
plan and test your fallback plan too. Go through the pre and post upgrade
steps just like you are doing it for real. Don't forget to make full use of
tools available at your disposal such as Upgrade Advisor and Upgrade
Assistant (2 different tools - similar names - both are free) so that the
process can be as smooth as possible.
Rick Heiges
SQL Server MVP
"Jim Trowbridge" <JimTrowbridge@.discussions.microsoft.com> wrote in message
news:0C61339A-ABE5-4C6D-835C-6EAFFC76FF44@.microsoft.com...
> Looks very hard to me.
> Has anyone done a successfull fallback ? And survived the process. The
> doco
> I have found shows it to be a total monster. I appear to have the choice
> of
> no fallback at all;
> export/import data (approx 100GB per prod server!);
> use a third party utility (approx 100GB per prod server);
> setup replication from 2005 back to 2000;
> I have proved that you can't restore back from 2005 to 2000, nor can you
> detach/attach back to 2000.
> There just has to be a fast way to do this.
> TIA.|||All true, but someone must have actually done a real fallback ?!
How did you do it, and how painful was it ?
I'm looking at having to fallback with 100GB of application data.
"Rick Heiges" wrote:
> Fallback to 2000 is not easy. Think about it, MSFT really does not want you
> to turn back. With that said, the best approach is to test your upgrade
> plan and test your fallback plan too. Go through the pre and post upgrade
> steps just like you are doing it for real. Don't forget to make full use of
> tools available at your disposal such as Upgrade Advsior and Upgrade
> Assistant (2 different tools - similar names - both are free) so that the
> process can be as smooth as possible.
> Rick Heiges
> SQL Server MVP
> "Jim Trowbridge" <JimTrowbridge@.discussions.microsoft.com> wrote in message
> news:0C61339A-ABE5-4C6D-835C-6EAFFC76FF44@.microsoft.com...
> > Looks very hard to me.
> >
> > Has anyone done a successfull fallback ? And survived the process. The
> > doco
> > I have found shows it to be a total monster. I appear to have the choice
> > of
> > no fallback at all;
> > export/import data (approx 100GB per prod server!);
> > use a third party utility (approx 100GB per prod server);
> > setup replication from 2005 back to 2000;
> >
> > I have proved that you can't restore back from 2005 to 2000, nor can you
> > detach/attach back to 2000.
> >
> > There just has to be a fast way to do this.
> >
> > TIA.
>
>|||Someone from MS posted the steps needed here awhile back.
But *no* guarantees for success, and it's a bit of work!
You can find them here:
Can I move my SQL Server 2005 database to SQL Server 2000?
http://sql.veranoest.net/sql_faq.htm#move_db_files_from_SQL2005_to_SQL2000
_________________________________________________________
Vera Noest
MCSE, CCEA, Microsoft MVP - Terminal Server
TS troubleshooting: http://ts.veranoest.net
___ please respond in newsgroup, NOT by private email ___
Jim Trowbridge <JimTrowbridge@.discussions.microsoft.com> wrote on 25 Oct 2007 in
microsoft.public.sqlserver.server:
> All true, but someone must have actually done a real fallback ?!
> How did you do it, and how painful was it ?
> I'm looking at having to fallback with 100GB of application
> data.
> "Rick Heiges" wrote:
>> Fallback to 2000 is not easy. Think about it, MSFT really does
>> not want you to turn back. With that said, the best approach
>> is to test your upgrade plan and test your fallback plan too.
>> Go through the pre and post upgrade steps just like you are
>> doing it for real. Don't forget to make full use of tools
>> available at your disposal such as Upgrade Advsior and Upgrade
>> Assistant (2 different tools - similar names - both are free)
>> so that the process can be as smooth as possible.
>> Rick Heiges
>> SQL Server MVP
>> "Jim Trowbridge" <JimTrowbridge@.discussions.microsoft.com>
>> wrote in message
>> news:0C61339A-ABE5-4C6D-835C-6EAFFC76FF44@.microsoft.com...
>> > Looks very hard to me.
>> >
>> > Has anyone done a successfull fallback ? And survived the
>> > process. The doco
>> > I have found shows it to be a total monster. I appear to have
>> > the choice of
>> > no fallback at all;
>> > export/import data (approx 100GB per prod server!);
>> > use a third party utility (approx 100GB per prod server);
>> > setup replication from 2005 back to 2000;
>> >
>> > I have proved that you can't restore back from 2005 to 2000,
>> > nor can you detach/attach back to 2000.
>> >
>> > There just has to be a fast way to do this.
>> >
>> > TIA.|||I did a rollback from Informix 9 to Informix 7 once. While it's a different
engine, the idea is the same.
Basically it was a dump & reload and a MAJOR pain-in-the-ass.
"Jim Trowbridge" <JimTrowbridge@.discussions.microsoft.com> wrote in message
news:0C61339A-ABE5-4C6D-835C-6EAFFC76FF44@.microsoft.com...
> Looks very hard to me.
> Has anyone done a successfull fallback ? And survived the process. The
> doco
> I have found shows it to be a total monster. I appear to have the choice
> of
> no fallback at all;
> export/import data (approx 100GB per prod server!);
> use a third party utility (approx 100GB per prod server);
> setup replication from 2005 back to 2000;
> I have proved that you can't restore back from 2005 to 2000, nor can you
> detach/attach back to 2000.
> There just has to be a fast way to do this.
> TIA.
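
For completeness, a hedged sketch of the export/import route: script the objects out to 2000-compatible syntax, then move the data with bcp. The server, database, and path names below are placeholders:

-- generate one bcp "out" command per user table; run the output from a command prompt
SELECT 'bcp "MyDb.dbo.[' + name + ']" out "C:\fallback\' + name + '.dat" -n -T -S MyServer2005'
FROM sysobjects
WHERE type = 'U'
ORDER BY name
-- then run the same list with "in" (plus -E to keep identity values) against the rebuilt 2000 database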

faking modules

We have a pretty large code base of stored procedures ( 700 ),
functions ( 300 ) , and views ( 200 ) at our medium-sized company. Why
someone would implement the majority of their business logic in
transact-sql , I don't know, but we are way beyond the point of being
able prevent that from happening. And have a few years before we are
out of it.
So, the major thing I find myself griping over as we continue to roll
out new sets of code into the database and maintain the large legacy
codebase, is the lack of a module / package / namespace system for
organizing our stored procedures, functions, and views.
(My other major gripes are: lack of strong array / list support outside
of table variables and temp tables, especially the ability to pass
around table variables as parameters; inability to define custom
aggregation functions (addressed in 2005); inability to create
"on-the-fly" views or table expressions(addressed in 2005); the
arbitrary limit of insert exec's; the inability to use scalar udf's
because of their ugly performance)
So, I want to know about anybody out there who has attempted to fake
modules in some way or another, by conventions or anything. Any
particular methods that were useful for meaningfully organizing hundreds
/ thousands of stored procedures?
Right now, i've been using the convention of putting a module name and
underscore before each function, stored proc, and view. I haven't
written the queries against the information_schema views or made any
handy utilities for browsing the database parsing these names, but
that's probably next on my list.
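
(For what it's worth, a sketch of such a query, assuming the ModuleName_ObjectName convention; 'Billing' is a hypothetical module name:

SELECT ROUTINE_TYPE, ROUTINE_NAME
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_NAME LIKE 'Billing[_]%'
UNION ALL
SELECT 'VIEW', TABLE_NAME
FROM INFORMATION_SCHEMA.VIEWS
WHERE TABLE_NAME LIKE 'Billing[_]%'
ORDER BY 1, 2
)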
Also, why does this not seem to even be addressed in 2005?
nposter|||nposter (code4breakfast@.yahoo.com) writes:
> We have a pretty large code base of stored procedures ( 700 ),
> functions ( 300 ) , and views ( 200 ) at our medium-sized company. Why
> someone would implement the majority of their business logic in
> transact-sql , I don't know,
I know. Because T-SQL is where the data is. In our shop, we are strong
believers in having business logic in stored procedures. Actually, one
guy in our shop thought we should move logic up to the middle layer,
and conducted a test where he rewrote a complex stored procedure. The
test was a total fiasco: performance was awful and it did not scale.
And before you ask: we have around 3700 stored procedures.

> So, the major thing I find myself griping over as we continue to roll
> out new sets of code into the database and maintain the large legacy
> codebase, is the lack of a module / package / namespace system for
> organzing our stored procedures, functions, and views.
There is in SQL 2005: schemas. Well, there are schemas in SQL 2000 as well,
but as they are tied to users in SQL 2000, they are useless.

> (My other major gripes are: lack of strong array / list support outside
> of table variables and temp tables, especially the ability to pass
> around table variables as parameters;
Have a look at http://www.sommarskog.se/share_data.html for some tips.

> inability to define custom
> aggregation functions (addressed in 2005); inability to create
> "on-the-fly" views or table expressions(addressed in 2005);
But beware that CTEs are basically macros, and recalculated each time
they occur in a query.

> the arbitrary limit of insert exec's;
A limit of 1 does not look very arbitrary to me.

> the inability to use scalar udf's because of their ugly performance)
This has improved in SQL 2005, where you also can write UDFs in the
CLR.
To be honest, it seems to me that your main problem is that you are
trying to use T-SQL as if it is a traditional programming language.
It isn't. Yes, T-SQL is a bit poor on general constructs for
modularisation, but just get used to it. Don't try to write in T-SQL
as if it was C++ or Visual Basic.

> So, I want to know about anybody out there who has attempted to fake
> modules in someway or another by conventions or anything. Any
> particular methods that were useful for meaningfully organzing hundreds
> / thousands of stored procedures?
Actually, we have a subsystem concept that is implemented through our
loadtool. The main reason we have done this, is that we don't ship
all parts of the system to all customers. So this is a concept that
has been added for configuration management rather than programming.
Our largest subsystem - which is large by legacy - has over 1000 stored
procedures.

> Also, why does this not seem to even be addressed in 2005'
As I said, the separation of user and schema is an important step in
this direction.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||>> Why someone would implement the majority of their business logic in transact-sql, I don't know ... <<
Because the database is where the data is and most of the business
logic probably has to do with rules about relationships in the data. I
also hope you have a lot of DRI, defaults and constraints, too.
Think about what a disaster it would be if you did this in the front
end. What if 699 of your procedures believe there is a rule that says
(age > 18) and one that believes the rule is (age >= 18)?
You do business rules in the schema so you have them in ONE place, ONE
way, ONE time.
T-SQL was never meant to be a development language. In fact, the old
rule of thumb was never write a proc over one page long. But you can
find some code control tools out there which will work with T-SQL.
>> inability to create "on-the-fly" views or table expressions (addressed in 2005); the
arbitrary limit of insert exec's; the inability to use scalar udf's
because of their ugly performance <<
In short, you are not a SQL programmers at all! Let's go down the list
of things you missed about SQL
1) We don't use arrays and lists -- ever hear of 1NF?
2) Table variables as parameters would mix data and meta-data
3) I have been able to write custom aggregation functions with the
math available to me in Standard SQL. Statistics and analytics
should be done in a stat package that can handle the floating point
problems.
4) We design schemas carefully and never use "on-the-fly" views or
tables. It means that you are probably converting 3GL programs into
T-SQL and need 1950s-style scratch files.
5) What does "the arbitrary limit of insert exec's" mean?
6) Why do you need scalar udf's? You are not doing computations in the
Database, are you? How would put such things into an optimizer --even
in theory?
So much for the ISO-11179 Standards!
Your problem is that you "don't get it" -- SQL is not for development;
it is for data. The joke we had in ANSI X3H2 was that SQL stood for
"Scarcely Qualifies as a Language" because all it does is maintain,
validate and persist data.|||Erland Sommarskog wrote:
> nposter (code4breakfast@.yahoo.com) writes:
> I know. Because T-SQL is where the data is. In our shop, we are strong
> believers in having business logic in stored procedures. Actually, one
> guy in our shop thought we should move logic up to the middle layer,
> and conducted a test where he rewrote a complex stored procedure. The
> test was a total fiasco: performance was awful and it did not scale.
> And before you ask: we have around 3700 stored procedures.
>
This is obviously a long discussion, and I don't have enough background
writing large, scalable data layers to really argue. The angle I was
coming from is that most people would agree that you can write more
cleanly organized and modularized code outside of transact-sql for
things beyond simple queries. On writing pure reports or processing
large data sets, sure I would vote for transact-sql, but for small
hits, single update or inserts with complex validation checks, small
data set queries on not a very heavy transactional db, I would vote for
taking the logic out.

> There is in SQL 2005: schemas. Well, there are schemas in SQL 2000 as well
,
> but as they are tied to users in SQL 2000, they are useless.
okay ! i might not invest as much energy into rolling my own module
system then. i've only begun to skim the 2005 docs.

>
> Have a look at http://www.sommarskog.se/share_data.html for some tips.
>
i haven't read this site before, but upon reading it now, i have used
each one of these techniques. they are still limiting and a pain to
implement for something as simple as just passing around a table /
list. its like i have to compile my code into this lower level language
of passing around temp tables.

> But beware that CTEs are basically macros, and recalculated each time
> they occur in a query.
>
> A limit of 1 does not look very arbitrary to me.
>
Why limit to 1? Because "we said so". This could have been used as a
general mechanism for calling stored procs from other stored procs if
it wasn't for the limitation... yeah , i know, make a table-valued udf,
but what if i want to pass tables into and out of the table-valued udf.
which leads you back to temp tables and stored procs.

> To be honest, it seems to me that your main problem is that you are
> trying to use T-SQL as if it is a traditional programming langauge.
> It isn't. Yes, T-SQL is a bit poor on general constructs for
> modularisation, but just get used to it. Don't try to write in T-SQL
> as if it was C++ or Visual Basic.
>
i have embraced t-sql. i hate the fact that i have, but i have. i use
views, table-valued udf's, stored procs in the style i think is
considered the "t-sql way". i gave up the expectation of being able to
write reusable , efficient scalar functions. but i really don't think
its too much to ask to simply ask for a way to group my code logically.
these weren't top secret ideas when t-sql was created.

> Actually, we have a subsystem concept that is implemented through our
> loadtool. The main reason we have done this, is that we don't ship
> all parts of the system to all customers. So this is a concept that
> has been added for configuration management rather than programming.
> Our largest subsystem - which is large by legacy - has over 1000 stored
> procedures.
>
custom written loadtool? i was thinking of using make/ant scripts with
isqlw commands to load sets of code in. then check that into source
control. can you give any more details about the loadtool and how you
use it? how do you browse the subsystems?|||> ... On writing pure reports or processing
> large data sets, sure I would vote for transact-sql, but for small
> hits, single update or inserts with complex validation checks, small
> data set queries on not a very heavy transactional db, I would vote for
> the taking the logic out.
I would vote for keeping the logic in, for a number of reasons: (1)
assert. (2) deadlocks. (3) flexibility. (4) performance. (5)
scalability. (6) security. The list continues.
"nposter" <code4breakfast@.yahoo.com> wrote in message
news:1124669275.064403.271190@.g44g2000cwa.googlegroups.com...
> Erland Sommarskog wrote:
> This is obviously a long discussion, and I don't have enough background
> writing large, scalable data layers to really argue. The angle I was
> coming from is that most people would agree that you can write more
> cleanly organized and modularized code outside of transact-sql for
> things beyond simple queries. On writing pure reports or processing
> large data sets, sure I would vote for transact-sql, but for small
> hits, single update or inserts with complex validation checks, small
> data set queries on not a very heavy transactional db, I would vote for
> the taking the logic out.
>
well,
> okay ! i might not invest as much energy into rolling my own module
> system then. i've only begun to skim the 2005 docs.
>
outside
> i haven't read this site before, but upon reading it now, i have used
> each one of these techniques. they are still limiting and a pain to
> implement for something as simple as just passing aroudn a table /
> list. its like i have to compile my code into this lower level language
> of passing around temp tables.
>
> Why limit to 1? Because "we said so". This coudl have been used as a
> general mechanism for calling stored procs from other stored procs if
> it wasn't for the limitation... yeah , i know, make a table-valued udf,
> but what if i want to pass tables into and out of the table-valued udf.
> which leads you back to temp tables and stored procs.
>
> i have embraced t-sql. i hate the fact that i have, but i have. i use
> views, table-valued udf's, stored procs in the style i think is
> considered the "t-sql way". i gave up the expectation of being able to
> write reusable , efficient scalar functions. but i really don't think
> its too much to ask to simply ask for a way to group my code logically.
> these weren't top secret ideas when t-sql was created.
>
hundreds
> custom written loadtool? i was thinking of using make/ant scripts with
> isqlw commands to load sets of code in. then check that into source
> control. can you give any more details about the loadtool and how you
> use it? how do you browse the subsystems?
>|||nposter (code4breakfast@.yahoo.com) writes:
> This is obviously a long discussion, and I don't have enough background
> writing large, scalable data layers to really argue. The angle I was
> coming from is that most people would agree that you can write more
> cleanly organized and modularized code outside of transact-sql for
> things beyond simple queries.
Admittedly, code in C#, Ada or whatever can be prettier than in T-SQL.
(Then again, you can write it in C++, and don't tell me that is
prettier! :-)
However, pretty code is not always efficient code. The cost of
moving the logic out of the database can be enormous, with all the data that
has to go back and forth.
You can do code reuse in T-SQL as well, but it is certainly a little
clumsier. But one should also keep in mind that code reuse is not really
the same virtue in T-SQL as it is client languages. For instance, accessing
a view because it pre-computes some result you need may not be a good idea
after all, because the view accesses four tables irrelevant to your
task.

> i haven't read this site before, but upon reading it now, i have used
> each one of these techniques. they are still limiting and a pain to
> implement for something as simple as just passing aroudn a table /
> list. its like i have to compile my code into this lower level language
> of passing around temp tables.
I certainly can't disagree that this is a point where T-SQL could stand
improvement. Being able to pass a table as a parameter would be a
great addition.
In SQL 2005 there is, by the way, a new workaround: since you now can
receive the result of FOR XML queries into a variable of the new xml
data type, you can pass a table in an xml parameter. (And you don't need
OPENXML to shred it anymore.) Yes, this is really, really ugly. A much
better novelty in SQL 2005 is statement recompilation, which makes
sharing temp tables less expensive.
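
A minimal sketch of that xml workaround (table and element names invented for illustration):

-- caller: pack the rows into an xml variable
DECLARE @x xml
SET @x = (SELECT SomeId FROM SomeTable FOR XML PATH('row'), TYPE)

-- callee: shred it back into rows without OPENXML
SELECT T.c.value('(SomeId)[1]', 'int') AS SomeId
FROM @x.nodes('/row') AS T(c)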

> Why limit to 1? Because "we said so".
More probably because permitting more levels would have made the
implementation more complicated. I have no idea, but I can imagine that
doing it in one layer was something that almost could be done for free
by using something already there, like placing the result set in the
output buffer, and then intercepting that. But that would not be possible to
do in several layers without considerable re-architecture.
And in any case, INSERT-EXEC have several other issues as well, as I
discuss in my article. From the point of view of modular programming,
I dislike INSERT-EXEC because you can change the callee and wreck a
caller you don't know about. It's simply not robust enough.

> custom written loadtool? i was thinking of using make/ant scripts with
> isqlw commands to load sets of code in. then check that into source
> control. can you give any more details about the loadtool and how you
> use it? how do you browse the subsystems?
Sure, the full documentation is on http://www.abaris.se/abaperls
One more feature that we have that may be of particular interest to
you, is our pre-processor.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Erland Sommarskog wrote:
> nposter (code4breakfast@.yahoo.com) writes:
> I know. Because T-SQL is where the data is. In our shop, we are strong
> believers in having business logic in stored procedures. Actually, one
> guy in our shop thought we should move logic up to the middle layer,
> and conducted a test where he rewrote a complex stored procedure. The
> test was a total fiasco: performance was awful and it did not scale.
> And before you ask: we have around 3700 stored procedures.
>
> There is in SQL 2005: schemas. Well, there are schemas in SQL 2000 as well
,
> but as they are tied to users in SQL 2000, they are useless.
>
Well, they're tied to users or roles. In fact, we've had a reasonable
amount of success here with a database split into (at last count) 9
different schemas (as new subsystems are added to the rest of the
project, 1 or 2 new schemas are created), all of which are owned by
roles.
I do, of course, look forward to being able to do away with these roles
and have them as pure schemas, but it works reasonably well now. We've
even managed to get Merge Replication working with it (after a few
false starts, we went live with this a couple of weeks ago)
Damien|||Can you explain this a little further? Do you have the tables in one
centralized schema, while each new schema encapsulates sets of stored
procedures, functions, and views? This is in 2000 still, I assume. So
you have something like:
server = PRODSERVER
database/schema1 = PROD_TABLES
database/schema2 = PROD_PROCEDURE_SCHEMA1
database/schema3 = PROD_PROCEDURE_SCHEMA2
or am i missing the definition of schemas. are schemas lower level than
the database level?|||Very interesting. this is exactly what i was envisioning somebody out
there had already created. I don't know if i will use it directly, but
I will probably write up my own simplified version inspired by your
system|||nposter wrote:
> Can you explain this a little further. Do you have the tables in one
> centralized schema. Then each new schema encapsulates sets of stored
> procedures, functions, and views? This is in 2000 still I assume. So
> you have something like:
> server = PRODSERVER
> database/schema1 = PROD_TABLES
> database/schema2 = PROD_PROCEDURE_SCHEMA1
> database/schema3 = PROD_PROCEDURE_SCHEMA2
> or am i missing the definition of schemas. are schemas lower level than
> the database level?
How I'm using it is pretty well how it'll work under 2005. I'm not
saying it's the right way, or the best way, but it's working here, and
avoiding some collisions.
So, we create a database, and a couple of schemas:
create database MyBigDatabase
go
use MyBigDatabase
go
sp_addrole 'ClientServices' --This will change to CREATE SCHEMA in SS 2005
go
sp_addrole 'CreditorServices'
go
--And so on, for each schema needed, then
create table ClientServices.Users (
UserID uniqueidentifier not null
--Columns specific to the concept of a User within Client Services
)
go
create table CreditorServices.Users (
UserID uniqueidentifier not null
--Columns specific to the concept of a User within Creditor Services
)
go
--Create Proc, Create View, etc, all within their own schemas
Some of the tables we use are in fact still owned by dbo - these are
tables which are globally required/accessable within the individual
services. Before anyone asks about my use of Uniqueidentifier columns
for what are obviously (with more DDL) going to be Primary Keys, I'd
like to point out that we always knew we wanted to support replication,
so it just saved time. And yes, there are unique constraints on the
natural keys.
Damien
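
For comparison, a sketch of how the same split can look once the roles become real schemas in SQL 2005 (names carried over from the example above):

CREATE SCHEMA ClientServices AUTHORIZATION dbo
go
create table ClientServices.Users (
UserID uniqueidentifier not null
--Columns specific to the concept of a User within Client Services
)
go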