## Minutia
A table called `V`... really? Columns `Column1` and `Column2`... why are you using example tables? In your setup you called it `Table`, but in the code you called it `V`...
Normally we require that code is real code, not 'example' code. In this case, it seems your actual code is real; it is just the tables it works on that are not.
## IDENTITY columns
IDENTITY columns generally exist for one reason only: to create a unique field that is fast to use (insert/index/join/search).

These properties are guaranteed in all uses of the field, including concurrent modification of the data: two processes inserting data at the same time are still each guaranteed to generate unique IDs.

You want an added property from the column: to ensure there are no 'gaps' in the sequence. That is not what IDENTITY columns are for.

IDENTITY columns, for performance reasons, generally allocate a 'pool' of numbers to each process that is adding data, and as each number is used, the pool shrinks. If the pool is no longer needed, it is 'returned' to the system and another process can continue with it. If two processes are inserting data at the same time, you will often get gaps between the numbers because the pools are different. Also, if there is a "hard crash" on the system, the pool is often completely lost, and you can end up with large gaps in the ranges.

This is the compromise that IDENTITY columns make: they sacrifice sequentialness for performance and concurrency.
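Rolled-back transactions are the easiest way to see this in action. A minimal sketch (the table name `Demo` is hypothetical, not from your code) showing a gap appearing without any crash at all:

```sql
-- Hypothetical demo table with an IDENTITY key.
create table Demo (
    id   integer identity(1,1) primary key,
    name varchar(50) not null
)

insert into Demo (name) values ('first')    -- gets id 1

begin transaction
insert into Demo (name) values ('doomed')   -- gets id 2
rollback                                    -- id 2 is consumed, never reused

insert into Demo (name) values ('third')    -- gets id 3: a gap at 2
```

The rolled-back insert burns a number; IDENTITY never hands it out again.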
## Solution
There are two solutions, the easy one, and the hard one.
The easy one is to accept that there will be gaps, and move on. I strongly recommend that you revisit your requirements, and see if you can just use the IDENTITY column and move on.
The hard solution is to build your own system for number generation (although it will still not be perfect... if someone deletes a record, you will still have gaps).
Your solution is too simple, and also too slow. It is, essentially, broken.
If two processes run at the same time, they will allocate the same numbers for records, and you will get duplicate inserts into your table. This is because you have no transaction handling around your statements; they are all independent. You need a table lock so that no other process can add records while you are finding the max value, and the lock must cover the entire process of finding, deciding, and inserting the value. You will need to read up on HOLDLOCK for your SELECT statements, otherwise the lock will not be maintained: see the HOLDLOCK and other documentation on transactions.
Then, after the manipulation is done, you can commit the transaction.
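Put together, the MAX+1 approach looks something like the following sketch (table and column names assumed from your question; `TABLOCKX` with `HOLDLOCK` keeps the exclusive lock until the commit):

```sql
begin transaction

declare @next integer

-- Exclusive table lock, held to the end of the transaction,
-- so no other process can insert while we compute MAX+1.
select @next = coalesce(max(Column1), 0) + 1
from V with (tablockx, holdlock)

insert into V (Column1, Column2) values (@next, 'some value')

commit transaction
```

Every insert serializes behind that table lock, which is exactly why the next paragraph applies.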
Your performance will suck because you will be locking a lot of data, etc.
The better solution is to have a separate ID table for your key field. A simple table like:
```sql
create table VTableKey (
    nextid integer not null
)

insert into VTableKey values (1)
```
Now, with that table, you can:
- begin a transaction
- query-with-update-lock and update just that one record
- insert into the V table
- commit the transaction
In that way, you maintain the 'atomicity' of the transaction - the value only changes on a successful insert. I would consider using a stored procedure to encapsulate that logic.
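A sketch of such a stored procedure (the procedure name, parameter, and the `V`/`Column1`/`Column2` names are illustrative; `UPDLOCK` with `HOLDLOCK` keeps other callers off the key row until the commit):

```sql
create procedure InsertV @value varchar(50)
as
begin
    declare @id integer

    begin transaction

    -- Lock the key row for update; other callers block here
    -- until this transaction commits or rolls back.
    select @id = nextid
    from VTableKey with (updlock, holdlock)

    update VTableKey set nextid = nextid + 1

    insert into V (Column1, Column2) values (@id, @value)

    commit transaction   -- on error, roll back: no number is consumed
end
```

Because the key-row update and the insert commit together, a failed insert rolls the counter back too, which is what keeps the sequence gap-free.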
Note that you are still vulnerable to someone deleting records: you will still end up with gaps.