
Friday, March 23, 2012

February Books Online Available

If you're using the local client of Books Online, you can get the version released in February 2007 here:

http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx

If you read Books Online on the Web, it is updated automatically. You can read it here:

http://msdn2.microsoft.com/en-us/library/ms130214.aspx

There is also a wealth of other information here:

http://www.microsoft.com/technet/prodtechnol/sql/2005/learning/default.mspx

Wednesday, March 21, 2012

Fault-tolerant distributor in merge replication and database mirroring (in SQL Server 2005/2008)

Cluster it. It should be a local distributor.
http://www.zetainteractive.com - Shift Happens!
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Josep" <jmartinez@.autec.es> wrote in message
news:OT0UAqhUIHA.5404@.TK2MSFTNGP03.phx.gbl...
> Hello!
> My customer has SQL Server 2000 with Merge Replication, implementing a
> star topology: one publisher (also the distributor) and 11 push
> subscribers. They want to move to SQL Server 2005/2008 and have a
> fault-tolerant environment.
> I've thought that probably the best option is to set up database mirroring
> for the publication, but BOL says that database mirroring is not possible
> for the distribution database. So, what's the best way to implement a
> fault-tolerant system for the distribution database? (It doesn't need to
> be automatic.)
> Maybe doing fairly frequent backups and restores of the distribution
> database from the principal to the mirror? In merge replication, changes
> occur there only when the replication schema is changed or a new identity
> range is given to a subscriber (when using automatic identity ranges).
>
> Thanks in advance!
> Josep
Your English is much better than my Spanish, and IMHO extremely good!
Again I think a restore of the publisher database or the distribution
database is your best bet. If the subscribers get ahead of the
distribution database due to a restore from an earlier version of the
distribution database, the subscribers will fill in any missing data
on the publisher.
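If the backup/restore route is taken, the mechanics are just ordinary
BACKUP and RESTORE statements run on a schedule. A minimal sketch (the
standby server name and file path here are illustrative, not from the
thread):

-- On the publisher/distributor: back up the distribution database
BACKUP DATABASE distribution
    TO DISK = N'\\standby\backups\distribution.bak'
    WITH INIT;

-- On the warm standby: restore over the previous copy
RESTORE DATABASE distribution
    FROM DISK = N'\\standby\backups\distribution.bak'
    WITH REPLACE;

After a failover the merge agents would still have to be repointed at the
standby distributor, which is the manual step discussed below.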
On Jan 9, 5:13 am, "Josep" <jmarti...@.autec.es> wrote:
> Ok. That's the best option, but it's quite expensive and my customer
> doesn't want it. When I said fault-tolerant I didn't mean "real time
> recovery"; I should rather have said that the downtime of the recovery
> process should be quite small: less than two hours.
> As it's merge replication, this small downtime is acceptable; that's why I
> was suggesting backup/restore of the distribution database... but I don't
> know exactly how the subscribers would then connect to the distributor, or
> what other issues there might be... or if, as you said, clustering is the
> only option.
> Thank you!
> Josep.
> PS: Hilary, sorry for my English :-(
> "Hilary Cotter" <hilary.cot...@.gmail.com> escribi en el mensajenews:uDoLtIiUIHA.5132@.TK2MSFTNGP02.phx.gbl. ..

Friday, March 9, 2012

Fastest SQL Server Library

I need to connect to SQL Server 2005 on the local machine using C/C++
and am trying to find the fastest way of doing so. I'm dealing with up to a
million records and usually lots of small selects. I also can't use
joins/stored procedures at this point due to accommodating older software.
Initial testing shows that OLE DB and ODBC are nearly equivalent
performance-wise and that I should stay as far away from ADO as possible. I
also took a cursory glance at DB-Library (dblib) and am disregarding it, as
it doesn't support string fields longer than 255 characters and doesn't
even ship with SQL Server anymore.
ODBC seems to be by far the winner, given what a pain OLE DB is to use and
its relative lack of documentation.
Anything else I'm missing?
Here are a few stats as well (I threw in some other databases I'm testing
out using their C libraries, just for fun). No stored procedures/prepared
statements were used in these tests. ADO is the non-.NET version. Times are
listed in milliseconds.
single 1200 record select (all fields):
SELECT * FROM customers;
ole: 15
odbc: 15
ado: 500
mysql: 30
postgresql: 30
1000 identical 1 record select (2 fields) statements:
// almost useless test given the caching taking place
// note that there's an index on customer
SELECT customer,name FROM customers WHERE customer=1025;
ole: 350
odbc: 340
ado: 2900
mysql: 390
postgresql: 1000
1000 different 1 record select (2 fields) statements:
// interesting note
// sqlserver takes twice as long if single quotes don't surround values
// mysql takes four times as long if single quotes don't surround values
// postgresql is unaffected by single quotes around values
// perhaps the databases are caching the statements apart from the quoted values
SELECT customer,name FROM customers WHERE customer='%d'; (%d = 1000 to 2000)
ole: 4250
odbc: 4250
ado: 10200
mysql: 400
postgresql: 1100
10,000 different multi-record selects (2 fields),
79,000 of 350,000 total records retrieved overall
// there is an index on orderno
SELECT orderno,product FROM shipped WHERE orderno='%07d'; (%d = 0 to 10000)
ole: 7766
odbc: 7550
ado: 51500
mysql: 23400 (subsequent times 15,400)
postgresql: 35600 (subsequent times 14,400)
-Gary

Personally, I would use ADO. Yes, it's a bit slower than OLE DB (which is
generally considered to be faster than the ODBC library) -- but it's
not -that much- slower, and there are huge benefits in terms of productivity
(ADO is easier to program against than either of the other options). Search
the web and you'll find lots of performance tips and tricks.
High-performance ADO access patterns may not be the same as those for OLE DB
or ODBC libraries.
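One option the posted tests didn't exercise is ODBC prepared statements.
Below is a minimal sketch of the "1000 different single-row selects" test
with the statement prepared once, so the server parses the SQL a single
time and only the parameter value changes per execute (the connection
string, database, and table names are illustrative, and error checking is
omitted for brevity):

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

int main()
{
    SQLHENV env; SQLHDBC dbc; SQLHSTMT stmt;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLCHAR conn[] = "Driver={SQL Native Client};Server=(local);"
                     "Database=testdb;Trusted_Connection=yes;";
    SQLDriverConnect(dbc, NULL, conn, SQL_NTS, NULL, 0, NULL,
                     SQL_DRIVER_NOPROMPT);
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    // prepare once; '?' is the ODBC positional parameter marker
    SQLPrepare(stmt, (SQLCHAR*)
        "SELECT customer,name FROM customers WHERE customer=?", SQL_NTS);
    SQLINTEGER id;
    SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                     0, 0, &id, 0, NULL);
    SQLINTEGER customer; SQLCHAR name[64];
    SQLLEN cbCust, cbName;
    SQLBindCol(stmt, 1, SQL_C_SLONG, &customer, 0, &cbCust);
    SQLBindCol(stmt, 2, SQL_C_CHAR, name, sizeof(name), &cbName);
    for (id = 1000; id < 2000; ++id) {   // re-execute the same plan 1000 times
        SQLExecute(stmt);
        while (SQLFetch(stmt) == SQL_SUCCESS)
            printf("%d %s\n", (int)customer, name);
        SQLCloseCursor(stmt);            // reset the cursor for the next execute
    }
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}

Because the parameter is bound rather than inlined into the SQL text, the
quoted-versus-unquoted literal effect noted in the test comments should
largely disappear.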
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457

From the SQL Server 2005 Books Online:
<Excerpt>
The SQL Native Client OLE DB provider is a low-level, COM API that is used
for accessing data. The SQL Native Client OLE DB provider is recommended for
developing tools, utilities, or low-level components that need high
performance. The SQL Native Client OLE DB provider is a native, high
performance provider that accesses the SQL Server Tabular Data Stream (TDS)
protocol directly.
The SQL Native Client OLE DB provider exposes interfaces to consumers
wanting access to data on one or more computers running an instance of SQL
Server 2005 or earlier.
The SQL Native Client OLE DB provider is an OLE DB version 2.0-compliant
provider.
</Excerpt>
Hope this helps.
Dan Guzman
SQL Server MVP
"garyh" <garyh@.spammenot.involved.com> wrote in message
news:e40lad02ca3@.enews4.newsguy.com...
>I need to connect to the Sql Server 2005 on the local machine using C/C++
>and am trying to find the fastest way of doing so. I'm dealing with up to a
>million records and usually lots of small selects. I also can't use
>joins/stored procedures at this point due to accomodating older software.
> Initial testing shows that ole/odbc are nearly equivalent performance wise
> and that I should stay as far away from ado as possible. I also took a
> cursory glance at dblib and am disregarding it as it doesn't support
> string fields longer than 255 characters and it doesn't even come with sql
> server anymore.
> ODBC seems to by far the winner given what a pain OLE is to use and the
> relative lack of documentation for it.
> Anything else I'm missing?
> Here's a few stats as well (I threw in some other databases I'm testing
> out using their c libraries just for fun). No stored procedures/prepared
> statements were used in these tests. ADO is using non .net version. Time
> is listed in milliseconds.
> single 1200 record select (all fields):
> SELECT * FROM customers;
> ole: 15
> odbc: 15
> ado: 500
> mysql: 30
> postgresql: 30
> 1000 identical 1 record select (2 fields) statements:
> // almost useless test given the caching taking place
> // note that there's an index on customer
> SELECT customer,name FROM customers WHERE customer=1025;
> ole: 350
> odbc: 340
> ado: 2900
> mysql: 390
> postgresql: 1000
> 1000 different 1 record select (2 fields) statements:
> // interesting note
> // sqlserver takes twice as long if single quotes don't surround values
> // mysql takes four times as long if single quotes don't surround values
> // postgresql is unaffected by single quotes around values
> // perhaps the databases are caching the statements besides the quoted
> values
> SELECT customer,name FROM customers WHERE customer='%d'; (%d = 1000 to
> 2000)
> ole: 4250
> odbc: 4250
> ado: 10200
> mysql: 400
> postgresql: 1100
> 10,000 different multi record selects (2 fields),
> 79,000 of 350,000 total records retrieved overall
> there is an index on orderno
> SELECT orderno,product FROM shipped WHERE orderno='%07d'; (%d = 0 to
> 10000)
> ole: 7766
> odbc: 7550
> ado: 51500
> mysql: 23400 (subsequent times 15,400)
> postgresql: 35600 (subsequent times 14,400)
> -Gary

Interesting numbers - thanks.
I know this doesn't answer your question directly, but here goes.
We have client (GUI) and server (console) apps. The server apps
are heavy duty; memory, DB and processor intensive. We
originally developed in Borland and used their ADO Express for
both of the app types. Once the ADO properties were set
properly, it worked pretty good. We support SQL Server, Oracle
and DB2, all with the same code, except for a few SQL language
quirks.
A while back we decided to port the server apps to VC++ 8 in
preparation for doing 64 bit (we have over a hundred of these
server apps). We decided to use ODBC at this point. I created a
set of classes to encapsulate ODBC that had an almost identical
API to the ADO Express stuff. It was a fair amount of work but
made the port go quickly and makes ongoing development fast and
easy. You might want to consider doing something like this. A
brief example follows (C++). Compare it to raw ODBC.
// Set up the ODBC environment, read the Registry, etc.
VConnection conn( "MyDb", "MyUserId", "MyPassword" );
// Create the DB object and open a connection to the DB
VDatabase db( conn );
db.Open();
// Create a query object, set the SQL and a couple of properties
VQuery q( &db );
q.SQL->Text = "SELECT anInt FROM aTable WHERE aCol = :colval";
q.SetLockMode( DirtyRead );
q.CacheSize = 100;
// Set the parameter, open the query, read the results, close the query
q.ParamByName( "colval" )->AsInteger = 1234;
q.Open();
if ( !q.Eof )
myVal = q.FieldByName( "anInt" )->AsInteger;
q.Close();
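For comparison, the same single query in raw ODBC looks roughly like this
(a sketch: hdbc stands for an already-connected SQLHDBC, error checking is
omitted, and the wrapper's DirtyRead/CacheSize conveniences would map onto
connection and statement attributes):

SQLHSTMT stmt;
SQLINTEGER colval = 1234, anInt, myVal = 0;
SQLLEN cbInt;
SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &stmt);
// raw ODBC uses positional '?' markers rather than named parameters
SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                 0, 0, &colval, 0, NULL);
SQLExecDirect(stmt, (SQLCHAR*)"SELECT anInt FROM aTable WHERE aCol = ?",
              SQL_NTS);
SQLBindCol(stmt, 1, SQL_C_SLONG, &anInt, 0, &cbInt);
if (SQLFetch(stmt) == SQL_SUCCESS)
    myVal = anInt;
SQLFreeHandle(SQL_HANDLE_STMT, stmt);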
- Arnie

Thanks, I had forgotten about the SQL Native Client. After doing a few
tests I do see a slight improvement of about 5% in both ODBC and OLE DB. Not
much of an improvement, but I'll take it.
-Gary
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:e2Y%23iFWdGHA.1456@.TK2MSFTNGP04.phx.gbl...
> From the SQL Server 2005 Books Online:
> <Excerpt>
> The SQL Native Client OLE DB provider is a low-level, COM API that is used
> for accessing data. The SQL Native Client OLE DB provider is recommended
> for developing tools, utilities, or low-level components that need high
> performance. The SQL Native Client OLE DB provider is a native, high
> performance provider that accesses the SQL Server Tabular Data Stream
> (TDS) protocol directly.
> The SQL Native Client OLE DB provider exposes interfaces to consumers
> wanting access to data on one or more computers running an instance of SQL
> Server 2005 or earlier.
> The SQL Native Client OLE DB provider is an OLE DB version 2.0-compliant
> provider.
> </Excerpt>
>
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP

Sunday, February 26, 2012

Failure writing properties running SSIS package from a Web Service

I am attempting to run an SSIS package from a web service. Right now both the service and the package are on my local machine, which is running XP. I have accessed the web service from a client application in debug mode. I am not sure whether it is actually running under aspnet_wp.exe, given that this is XP and a development environment (separate question). The package fails with a series of OnError messages similar to:

The result of the expression ""/c DEL /F /Q \"" + @DeployFolder + "\\catalog.diff.lz\""" on property "Arguments" cannot be written to the property. The expression was evaluated, but cannot be set on the property.

An initial supposition is that the permissions of the web service are inadequate for the package. I have the authentication set to "Windows" and <identity impersonate="true" /> in the Web.config file. When I break in the debugger, Environment.UserName and Environment.UserDomainName are mine, and I am an admin on the box. The authorization is deny users="?".

The article that describes basic implementation of this in a Web Service states:

With its default settings for authentication and authorization, a Web service generally does not have sufficient permissions to access SQL Server or the file system to load and execute packages. You may have to assign appropriate permissions to the Web service by configuring its authentication and authorization settings in the web.config file and assigning database and file system permissions as appropriate. A complete discussion of Web, database, and file system permissions is beyond the scope of this topic.

And how!

Note that the package loads fine and this is a run-time error; the package runs correctly when run manually via the 'Run Package' menu item in the Object Explorer tree of SQL Server Management Studio.

I need to know whether this is an ASP.NET issue per se, an XP issue, or even a security issue at all. And how to solve it! This is critical path, so an expeditious reply with a solution would be greatly appreciated.

Can't believe I left this out, but the web service is running under Integrated Windows authentication.
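For reference, the settings described above correspond to a web.config along these lines (a sketch of the configuration as the poster describes it, not a verified fix for the expression error):

<system.web>
    <!-- Integrated Windows authentication, as noted above -->
    <authentication mode="Windows" />
    <!-- run requests under the caller's Windows identity -->
    <identity impersonate="true" />
    <authorization>
        <!-- deny anonymous users -->
        <deny users="?" />
    </authorization>
</system.web>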