Thursday, March 29, 2012

Field Selection

I'm pretty new to Crystal and I'm stumped by the following issue:

I need to select a sub-set of a dataset. In this case I need to select records with a status of "Requested", from a set of statuses that includes "Accepted", "Requested", and "Declined". This part is easy, but it gets complicated (for me):

Then I have to use those records filtered by "Requested" to find whether, within the last 180 days, any of those clients have "Accepted" and/or "Requested" and/or "Declined" records (i.e. status = any).

This is a two-stage selection process, and I'm fine with the code for the 180-day selection. It almost seems a contradiction: I need to filter by "Requested" in the first instance, but in the second I need to show returns for all status types, and the initial filter means I cannot do this.

Another way of looking at it is:

1. I want to select X1 from X(1,2,3,4,5), then
2. I want to use X1 to determine if there have been cases of X(1,2,3,4,5) in the last 6 months (see the sketch below).
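In SQL terms that two-stage selection is a sub-select. A minimal sketch, assuming a hypothetical Requests table with Client, Status, and RequestDate columns (the post gives no schema), which a Crystal SQL command could mirror:
-- Stage 1 (inner query): clients that have a "Requested" record.
-- Stage 2 (outer query): all of those clients' records, any status,
-- from the last 180 days.
SELECT r.Client, r.Status, r.RequestDate
FROM Requests r
WHERE r.RequestDate >= DATEADD(day, -180, GETDATE())
  AND r.Client IN (SELECT Client FROM Requests WHERE Status = 'Requested')
ORDER BY r.Client, r.RequestDate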

Any help is much appreciated.

Cheers.

Mat.
|||Any ideas? I'm desperate.

Field overflow and log growth

Hi,
I am using SQL Server 2000.
One user database contains a table as follows:
CREATE TABLE Events..Audit_Sub (
[RecID] [bigint] NOT NULL ,
[Name] [varchar] (100) NULL ,
[Value] [varchar] (1024) NULL
) ON [PRIMARY]
Sometimes the application writes to the table a record that overflows the
size of the Value field (actually, because of an error in new code, the
application attempts to write about 5 KB to the Value field).
The fact is:
- No error is detected on SQL Server; data is written to the table, it is
visible by SELECT, it is just truncated to the field size (1 KB).
- In the meanwhile it appears that the DB log begins growing: the things are
not directly dependent, just somewhat later the log begins growing, but no
error is found in the SQL errorlog.
- Later on the transaction log cannot be backed up, data is no longer written
to the DB, but then it is too late to understand the reason why.
The question is:
- In what way might the two things (overflow and log growth) be related?
- What really happens on SQL Server when a data overflow occurs? How does it
handle it?
Thanks in advance,
MR
|||Marco
> - In the meanwhile it appears that the DB log begins growing: the things
> are not directly dependent, just somewhat later the log begins growing,
> but no error is found in the SQL errorlog.
> - In what way might the two things (overflow and log growth) be related?
It does not matter whether or not an overflow occurred. The log file grows
and is not truncated unless you have the database set to the SIMPLE recovery
model.
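To check this (a sketch; 'Events' stands in for the database name and the backup path is hypothetical):
-- Returns FULL, BULK_LOGGED, or SIMPLE (works on SQL Server 2000).
SELECT DATABASEPROPERTYEX('Events', 'Recovery')
-- Under FULL or BULK_LOGGED the log keeps growing until it is backed up:
BACKUP LOG Events TO DISK = 'C:\Backups\Events_log.bak'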
> - What really happens on SQL Server when a data overflow occurs? How does it
> handle it?
create table t (c1 tinyint, c2 varchar (5))
--overflow on c1 column
insert into t values(4545745454545,'a')
--Server: Msg 8115, Level 16, State 2, Line 1
--Arithmetic overflow error converting expression to data type tinyint.
--The statement has been terminated.
select * from t
--(0 row(s) affected)
--now insert many more characters than you defined for the c2 column
insert into t values(1,'asgtbvtybvtg')
--Server: Msg 8152, Level 16, State 9, Line 1
--String or binary data would be truncated.
--The statement has been terminated.
select * from t
--(0 row(s) affected)
"Marco Roda" <mrtest@.amdosoft.com> wrote in message
news:e8t656$le6$1@.ss408.t-com.hr...
> Hi,
> I am using SQL Server 2000.
> One user database contains a table as following:
> CREATE TABLE Events..Audit_Sub (
> [RecID] [bigint] NOT NULL ,
> [Name] [varchar] (100) NULL ,
> [Value] [varchar] (1024) NULL
> ) ON [PRIMARY]
> Sometimes the application writes to the table a record that overflows the
> size of the Value field (actually, because of an error in new code, the
> application attempts to write about 5 KB to the Value field).
> The fact is:
> - No error is detected on SQL Server; data is written to the table, it is
> visible by SELECT, it is just truncated to the field size (1 KB).
> - In the meanwhile it appears that the DB log begins growing: the things are
> not directly dependent, just somewhat later the log begins growing, but no
> error is found in the SQL errorlog.
> - Later on the transaction log cannot be backed up, data is no longer
> written to the DB, but then it is too late to understand the reason why.
> The question is:
> - In what way might the two things (overflow and log growth) be related?
> - What really happens on SQL Server when a data overflow occurs? How does it
> handle it?
> Thanks in advance,
> MR
>
>
>
|||Hi
At a guess you have the ANSI_WARNINGS setting off, as "When OFF, data is
truncated to the size of the column and the statement succeeds." E.g.:
SET ANSI_WARNINGS ON
DECLARE @error int
CREATE TABLE #tmp ( col1 char(1) NOT NULL )
BEGIN TRANSACTION
INSERT INTO #tmp ( col1 ) values ( 'AA' )
SET @error = @@ERROR
IF @error <> 0
BEGIN
SELECT 'Transaction Rolled Back Error Status: ' + CAST(@error as varchar(30))
ROLLBACK TRANSACTION
END
ELSE
BEGIN
PRINT 'Transaction Committed'
COMMIT TRANSACTION
END
GO
SELECT * from #tmp
GO
DROP TABLE #tmp
GO
/*
Msg 8152, Level 16, State 14, Line 5
String or binary data would be truncated.
The statement has been terminated.
----
Transaction Rolled Back Error Status: 8152
(1 row(s) affected)
col1
--
(0 row(s) affected)
*/
SET ANSI_WARNINGS OFF
DECLARE @error int
CREATE TABLE #tmp ( col1 char(1) NOT NULL )
BEGIN TRANSACTION
INSERT INTO #tmp ( col1 ) values ( 'AA' )
SET @error = @@ERROR
IF @error <> 0
BEGIN
SELECT 'Transaction Rolled Back Error Status: ' + CAST(@error as varchar(30))
ROLLBACK TRANSACTION
END
ELSE
BEGIN
PRINT 'Transaction Committed'
COMMIT TRANSACTION
END
GO
SELECT * from #tmp
GO
DROP TABLE #tmp
GO
/*
(1 row(s) affected)
Transaction Committed
col1
--
A
(1 row(s) affected)
*/
Although with your log file growing, it may be that you have detected an
error and not rolled back the transaction; use DBCC OPENTRAN to view open
transactions.
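For instance (a sketch; 'Events' is assumed as the database name from the original post):
-- Shows the oldest active transaction in the database, if any;
-- an open transaction prevents log truncation.
DBCC OPENTRAN ('Events')
-- Shows log file size and percent used for every database on the server.
DBCC SQLPERF(LOGSPACE)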
John
"Marco Roda" wrote:
> Hi,
> I am using SQL Server 2000.
> One user database contains a table as following:
> CREATE TABLE Events..Audit_Sub (
> [RecID] [bigint] NOT NULL ,
> [Name] [varchar] (100) NULL ,
> [Value] [varchar] (1024) NULL
> ) ON [PRIMARY]
> Sometimes the application writes to the table a record that overflows the
> size of the Value field (actually, because of an error in new code, the
> application attempts to write about 5 KB to the Value field).
> The fact is:
> - No error is detected on SQL Server; data is written to the table, it is
> visible by SELECT, it is just truncated to the field size (1 KB).
> - In the meanwhile it appears that the DB log begins growing: the things are
> not directly dependent, just somewhat later the log begins growing, but no
> error is found in the SQL errorlog.
> - Later on the transaction log cannot be backed up, data is no longer
> written to the DB, but then it is too late to understand the reason why.
> The question is:
> - In what way might the two things (overflow and log growth) be related?
> - What really happens on SQL Server when a data overflow occurs? How does it
> handle it?
> Thanks in advance,
> MR
>
>
>|||"Uri Dimant" <urid@.iscar.co.il> wrote in message
news:uozopRApGHA.4996@.TK2MSFTNGP05.phx.gbl...
> Marco
> > - In the meanwhile it appears that the DB log begins growing: the things
> > are not directly dependent, just somewhat later the log begins growing,
> > but no error is found in the SQL errorlog.
> > - In what way might the two things (overflow and log growth) be related?
>
> It does not matter whether or not an overflow occurred. The log file grows
> and is not truncated unless you have the database set to the SIMPLE
> recovery model.
>
> > - What really happens on SQL Server when a data overflow occurs? How does
> > it handle it?
>
> create table t (c1 tinyint, c2 varchar (5))
> --overflow on c1 column
> insert into t values(4545745454545,'a')
> --Server: Msg 8115, Level 16, State 2, Line 1
> --Arithmetic overflow error converting expression to data type tinyint.
> --The statement has been terminated.
> select * from t
> --(0 row(s) affected)
> --now insert many more characters than you defined for the c2 column
> insert into t values(1,'asgtbvtybvtg')
> --Server: Msg 8152, Level 16, State 9, Line 1
> --String or binary data would be truncated.
> --The statement has been terminated.
> select * from t
> --(0 row(s) affected)
>
The fact is: when the application attempts to write more data, the data is
REALLY WRITTEN (even if truncated), and NO ERROR is thrown.
- Why did I not get an error?
- Could the overflow be a reason why the log is growing?
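A plausible explanation (not confirmed in the thread): the application's connection may run with ANSI_WARNINGS OFF, which makes the truncation silent, as in John's second example above. The settings in effect for a session can be listed with:
-- Lists the SET options in effect for the current session;
-- run it over the application's own connection to compare.
DBCC USEROPTIONS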

Field Order

I am new to SQL so I keep asking questions. My apologies if they are dumb.
Does it make any difference to the performance or otherwise of SQL if I
arrange the fields in one particular order in my table, and then input them
in another?
For example, my table is Field 1, Field 2, Field 3 etc., but I might want to
input Field 3 first, then Field 1, then Field 2.
Thanks
On Mon, 5 Apr 2004 11:55:48 +0100, "Keith" <@..> wrote:

>I am new to SQL so keep asking questions. My apologies if they are dumb.
There are no dumb questions. Not asking - that is dumb.

>Does it make any difference to the performance or otherwise of SQL if I
>arrange the fields in one particular order in my table, and then input them
>in another?
No, not at all.
(Sole exception - if you write an insert statement supplying data for
ALL columns in the same order as they are defined, you are allowed to
leave out the column list in the insert statement - but that is
definitely not recommended practice, since it will produce errors as
soon as someone or something changes the columns' order)
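A sketch with hypothetical names makes the point: with an explicit column list the values may be supplied in any order, while omitting the list ties you to the table's defined order:
CREATE TABLE Demo (Field1 int, Field2 int, Field3 int)
-- Explicit column list: this order need not match the table definition.
INSERT INTO Demo (Field3, Field1, Field2) VALUES (3, 1, 2)
-- No column list: values must follow the defined order Field1, Field2, Field3.
INSERT INTO Demo VALUES (1, 2, 3)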
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)

|||Thank you
"Hugo Kornelis" <hugo@.pe_NO_rFact.in_SPAM_fo> wrote in message
news:78g270t8umrbpp1in6r6ihk04ebichk8ng@.4ax.com...
> On Mon, 5 Apr 2004 11:55:48 +0100, "Keith" <@..> wrote:
>
> There are no dumb questions. Not asking - that is dumb.
>
> >Does it make any difference to the performance or otherwise of SQL if I
> >arrange the fields in one particular order in my table, and then input them
> >in another?
> No, not at all.
> (Sole exception - if you write an insert statement supplying data for
> ALL columns in the same order as they are defined, you are allowed to
> leave out the column list in the insert statement - but that is
> definitely not recommended practice, since it will produce errors as
> soon as someone or something changes the columns' order)
> Best, Hugo
> --
> (Remove _NO_ and _SPAM_ to get my e-mail address)

Field ntext in the table only stores 256 characters

declare @mensagem varchar(8000)
CREATE TABLE #Mensagem (mensagem text)
set @mensagem = 'Aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa dsfsdfsdf sdfsdfsdf 111111111 00 '
insert into #Mensagem select @mensagem
select * from #Mensagem
|||Hi Frank.
I ran that under SQL2KEE and it produced a perfect result.
The Query Analyser truncates column output to 256 characters by default, so
try setting your "Maximum characters per column" option to something higher
(e.g. 8000) under Query Analyser's "Tools/Options" menu, "Results" tab.
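To confirm that the row really holds the full string and only the display is truncated, a quick check (not from the original thread) is:
-- DATALENGTH returns the number of bytes actually stored in the column,
-- regardless of how many characters the query tool displays.
select DATALENGTH(mensagem) from #Mensagem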
HTH
Regards,
Greg Linwood
SQL Server MVP
"Frank Dulk" <fdulk@.bol.com.br> wrote in message
news:#ZTWHAgjDHA.2592@.TK2MSFTNGP10.phx.gbl...
>
>
> declare @mensagem varchar(8000)
> CREATE TABLE #Mensagem (mensagem text)
> set @mensagem =
> 'Aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
> aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa dsfsdfsdf sdfsdfsdf 111111111 00 '
> insert into #Mensagem select @mensagem
> select * from #Mensagem
>

Field not sorting in ascending order

Hi, I've created a website using ASP.NET and all the data are stored in sql front. All the items are sorted in ascending order except one record. The correct order should be MP61, MP100, MP200, but this record is retrieved as MP100, MP200, MP61. If the coding were wrong, none of the items would display in ascending order. We have hundreds of items, but why does it happen to this particular record? Can anyone help? Thanks in advance.

That's because your field is not numeric; if it were numeric, you would expect 61, 100, 200.

But, with text, it compares character by character: it sees 'MP' followed by a '1' first, then 'MP' followed by a '2' next - naturally '6' sorts after '2' - and that's the reason.

|||

It is possible to get text to behave a bit like numbers.

If you had stored this instead, it would sort correctly:

MP061, MP100, MP200

This example presumes that the numerical component will always be no more than 3 characters and that all 3 positions are filled for each record (even if they are zero). In general, it's better not to try to sort alpha-numeric data in a numerical sort order.
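For instance, the zero-padded form can be produced in T-SQL like this (a sketch; the 'MP' prefix and 3-digit width are assumptions):
-- Left-pad the numeric part to 3 digits: 61 becomes '061', giving 'MP061'.
SELECT 'MP' + RIGHT('000' + CAST(61 AS varchar(3)), 3)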

|||If all of your records have a 2-character prefix in that column, and there are only numeric characters that follow, you can use this approach to solve your sorting problem:
SELECT
someColumns
FROM
someTable
ORDER BY
CAST(SUBSTRING(mpColumn,3,99) AS integer),
mpColumn