Monday, March 26, 2012

fetching unique pins...

Hi,

I have a table which contains a bunch of prepaid PINs. What is the
best way to fetch a unique pin from the table in a high-traffic
environment with lots of concurrent requests?

For example, my PINs table might look like this and contain thousands
of records:

ID    PIN          ACQUIRED_BY    DATE_ACQUIRED
...
100   1864678198
101   7862517189
102   6356178381
...

10 users request a pin at the same time. What is the easiest/best way
to ensure that the 10 users will get 10 different unacquired pins?

Thanks for any help...

Place a Primary Key or Unique constraint on the PIN column. When a
duplicate-key error occurs, generate a new PIN and try to save the new user
row again. Repeat until success.
--
MGFoster:::mgf00 <at> earthlink <decimal-point> net
Oakland, CA (USA)

Thanks, however, we do not generate the PINs ourselves. We simply
maintain the inventory of PINs, which are given to us by a 3rd party.

Is there a way in SQL Server to update a single row, à la the LIMIT clause in
MySQL? Something like:

update tablename set foo = bar limit 1

Unless you're only using the PIN for a one-time operation - somewhere
you are going to save that PIN (in a table). That table is where you'd
put the Primary Key/Unique constraint.

I don't know what the LIMIT clause does. If you want to update just
one row, you'd indicate which row in the WHERE clause:

UPDATE table_name SET foo = bar WHERE foo_id = 25

foo_id would be a unique value.
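For what it's worth, the usual SQL Server counterpart of MySQL's LIMIT in an
UPDATE is to pick the row with TOP 1 in a subquery. A sketch against the
PinsTable from the original post (@User is a placeholder):

UPDATE PinsTable
SET    Acquired_By   = @User,
       Date_Acquired = CURRENT_TIMESTAMP
WHERE  ID = (SELECT TOP 1 ID
             FROM   PinsTable
             WHERE  Acquired_By IS NULL)

On its own this still needs the locking hints discussed later in the thread to
be safe when many sessions grab PINs at once.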
--
MGFoster:::mgf00 <at> earthlink <decimal-point> net
Oakland, CA (USA)

Maybe this:

select top 1 ID, PIN from pin_table where acquired_by = <not-acquired value>
(NOTE: this could be expensive if you use NULL to signify Not Acquired;
perhaps a non-NULL value with an index would help).

update pin_table set acquired_by = <acquired value> where ID = <ID from select>
commit

-- or --

set up one table containing the unused pins and one containing the used
pins

then
select top 1 ID, PIN from unused_pin
insert into used_pin values (<ID>, <PIN>)
delete from unused_pin where ID = <ID from select>

commit

I successfully used a transactional message queue for a similar scenario.

Alternatively, try this:

create table #pins(id int identity, PIN decimal(10));
insert into #pins(PIN) values(1000000000);
insert into #pins(PIN) values(1000000001);
insert into #pins(PIN) values(1000000002);
go
create table #point_to_pins(id int identity)
go
--to get a PIN
insert into #point_to_pins default values
select @@identity

Use @@identity to get the PIN; you will not get any collisions, ever.
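A sketch of how the counter table might be used end to end (the variable is
illustrative, SCOPE_IDENTITY() is substituted for @@identity to avoid picking
up identity values generated by triggers, and in a real system the # temp
tables of the demo would presumably be permanent tables):

declare @id int

insert into #point_to_pins default values
set @id = SCOPE_IDENTITY()

-- every insert gets its own identity value, so every caller maps to a distinct PIN
select PIN from #pins where id = @id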

Hi Bobus,

To get just one row, you can use TOP 1. Add an ORDER BY if you want to
make it determinate; without ORDER BY, you'll get one row, but there's
no way to predict which one.

If you expect high concurrency, you'll have to use the UPDLOCK hint to make
sure that the row gets locked when you read it, because otherwise a
second transaction might read the same row before the first can update
it to mark it acquired.

If you also don't want to hamper concurrency, add the READPAST locking
hint to allow SQL Server to skip over locked rows instead of waiting
until the lock is lifted. This is great if you need one row but don't
care which row is returned. But if you need to return the "first" row in
the queue, you can't use this (after all, the transaction that has the
lock might fail and roll back; if you had skipped it, you'd be processing
the "second" available row instead of the first). In that case, you'll have
to live with waiting for the lock to be released - make sure that the
transaction is as short as possible!

So to sum it up: to get "one row, just one, don't care which", use:

BEGIN TRANSACTION
SELECT TOP 1
       @ID = ID,
       @Pin = Pin
FROM   PinsTable WITH (UPDLOCK, READPAST)
WHERE  Acquired_By IS NULL
-- Add error handling
UPDATE PinsTable
SET    Acquired_By = @User,
       Date_Acquired = CURRENT_TIMESTAMP
WHERE  ID = @ID
-- Add error handling
COMMIT TRANSACTION

And to get "first row in line", use:

BEGIN TRANSACTION
SELECT TOP 1
       @ID = ID,
       @Pin = Pin
FROM   PinsTable WITH (UPDLOCK)
WHERE  Acquired_By IS NULL
ORDER BY <fill in the blanks>
-- Add error handling
UPDATE PinsTable
SET    Acquired_By = @User,
       Date_Acquired = CURRENT_TIMESTAMP
WHERE  ID = @ID
-- Add error handling
COMMIT TRANSACTION

--
Hugo Kornelis, SQL Server MVP


Yet a variation is:

SET ROWCOUNT 1
-- assign the variables and claim the row in a single statement
UPDATE PinsTable
SET    @ID = ID,
       @Pin = Pin,
       Acquired_By = @User,
       Date_Acquired = CURRENT_TIMESTAMP
WHERE  Acquired_By IS NULL
SET ROWCOUNT 0

It is essential to have a (clustered) index on Acquired_By.
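For instance (the index name is made up; if ID is already the table's
clustered primary key, a nonclustered index on Acquired_By would have to do
instead):

CREATE CLUSTERED INDEX IX_PinsTable_AcquiredBy ON PinsTable (Acquired_By)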

Which solution gives the best performance is difficult to tell.
My solution looks shorter, but Hugo's may be more effective.

Note also that if there is a requirement that a PIN must actually
be used, the transaction scope may have to be longer, so that in
case of an error there can be a rollback. That will not be good
for concurrency, though.
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Thanks for the responses everyone!

MGFoster: I was asking about the "limit" clause so that I could
implement a solution similar to what Erland recommended. This
guarantees no collisions.

Randy: in your solution, I believe there is a chance that two
concurrent requests will end up grabbing the same pin.

Alexander: clever! It's like an Oracle sequence. But, in our
particular case, we could have a problem of unused pins for
transactions which rollback.

Hugo: that should definitely do the trick.

Erland: yours too! I will try them both out.

Thanks for the help!

Bobus,

When I was solving a similar problem, I did try out the approaches
suggested by Hugo and Erland. I hate to say that, but I was always
getting a bottleneck because of lock contention on PinsTable. Maybe I
was missing something at that time. I had a requirement to produce
hundreds of PINs per minute at peak times, so I decided to allocate a
batch of PINs at a time, instead of distributing them one at a time -
that took care of lock contention.
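A sketch of what such batch allocation could look like (none of this is
Alexander's actual code; the batch size of 50, the @Requester placeholder,
and the use of UPDATE TOP with the OUTPUT clause on SQL Server 2005 and
later are all assumptions):

DECLARE @batch TABLE (ID int, Pin decimal(10));

-- claim up to 50 unacquired PINs in one atomic statement,
-- skipping rows that other sessions currently have locked
UPDATE TOP (50) PinsTable WITH (READPAST)
SET    Acquired_By   = @Requester,
       Date_Acquired = CURRENT_TIMESTAMP
OUTPUT inserted.ID, inserted.Pin INTO @batch
WHERE  Acquired_By IS NULL;

-- hand the whole batch to the application, which doles the PINs out locally
SELECT ID, Pin FROM @batch;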

Bobus said "But, in our particular case, we could have a problem of unused
pins for transactions which rollback."

This would call for a design where you start a transaction, get a
pin, use it for whatever purpose, and then commit. But as you say, you
will get contention on the PINs here, although it's possible that the
READPAST hint could help. I did some quick tests, and it seems to work.

The other option, as you say, is just to grab a PIN or even a bunch of
them. A later job would then examine which PINs were taken and
which were never used, and then mark the latter as unused.
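A sketch of such a clean-up job (the Redeemed flag and the one-hour grace
period are assumptions, not columns or rules from the thread):

-- release PINs that were handed out but never actually consumed
UPDATE PinsTable
SET    Acquired_By   = NULL,
       Date_Acquired = NULL
WHERE  Acquired_By IS NOT NULL
  AND  Redeemed = 0   -- hypothetical flag set when the PIN is really used
  AND  Date_Acquired < DATEADD(hour, -1, CURRENT_TIMESTAMP)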

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Alexander/Erland: thanks for the tips. I will let you know what issues
we run into, if any.

Best wishes.

Erland,

I think you are right. I looked up that project, and in fact I did not
try READPAST at all. Having observed poor performance, I implemented
batches, which reduced the number of database calls and, as a side effect,
took care of lock contention. I immediately got good performance and
did not drill down any further.

Erland's solution seems to work great from our initial tests!

Unrelated question, and probably should be another thread, but have any
of you SQL Server geniuses tried PostgreSQL? Any comments, positive or
negative?

In 25 words or less: It is nice but has the feel of a college project
where grad students kept adding things to it based on the latest academic
fad or thesis topic. I would go with Ingres, which has a "commercial
feel" and a great optimizer.
