Auto Increment the Primary Key in Berkeley Database Java Edition - berkeley-db-je

I want to auto increment the primary key in Berkeley DB Java Edition. I use @PrimaryKey(sequence="Id"), and it worked fine, but when I enter another record the sequence goes wrong. For example, when I execute database.put the first primary key is "1", but the next time it is "101" and the time after that "201". This is my code. Is there anything I need to add? I didn't use SequenceConfig config = new SequenceConfig(); config.setAllowCreate(true);. Do I need to? Please help me.
import com.sleepycat.persist.model.Entity;
import com.sleepycat.persist.model.PrimaryKey;

@Entity
class Login_Audit {
    @PrimaryKey(sequence = "ID")
    long id;
    String name;

    Login_Audit(String name) {
        this.name = name;
    }
}

The sequence would only be wrong if it ever returned the same value twice. There's no requirement that values of a sequence should be consecutive, nor should you ever rely on them being so. The reason you're not getting consecutive numbers is probably the way BDB JE handles multi-threading efficiently: opening a handle to a sequence will "pre-allocate" a range of values to be used exclusively by that handle, so that it can give you new values without having to do an expensive database lock operation every time.
You can either just not care about the actual values of your IDs (this is the preferred option) or open the sequence manually using Database.openSequence() and manipulate it directly.
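For illustration, here is a minimal sketch of that second option (my example, not part of the original answer; it assumes db is the already-open com.sleepycat.je.Database that holds the sequence record, and the key name "ID" is an assumption since the DPL may store its sequences elsewhere):

import java.nio.charset.StandardCharsets;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Sequence;
import com.sleepycat.je.SequenceConfig;

class SequenceExample {
    // Assumes 'db' is the already-open Database that stores the sequence record.
    static long nextId(Database db) {
        SequenceConfig config = new SequenceConfig();
        config.setAllowCreate(true);
        // config.setCacheSize(n) is what controls how many values each open handle pre-allocates
        DatabaseEntry key = new DatabaseEntry("ID".getBytes(StandardCharsets.UTF_8));
        Sequence seq = db.openSequence(null, key, config);
        try {
            return seq.get(null, 1); // reserve and return the next value
        } finally {
            seq.close();
        }
    }
}

Keep in mind that opening and closing the sequence for every id defeats the caching that causes the gaps in the first place, so only do something like this if (mostly) consecutive values really matter to you.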

Related

Randomly generated public unique ids

Currently I'm generating unique ids for rows in my database using int and auto_increment. These ids are public facing, so in the URL you can see something like https://example.com/path/1 or https://example.com/path/2.
After talking with another engineer, they've advised me that I should use randomly generated ids so that they're not guessable.
How can I generate a unique ID without doing a for-loop over the database each time to make sure it's unique? Take Stripe for example: all of their ids look like price_sdfgsdfg or prod_iisdfgsdfg. What's the best way to generate unique ids like these for rows?
Without knowing which language or database you're using, the simplest way is using uuids.
To avoid downloading all the existing unique keys and then looping over them, simply try to INSERT INTO whichever table you are using.
If the insert fails (e.g. throws an exception), the value is already taken, so continue.
If it succeeds, break out of the loop.
This only works when the column is NOT NULL and UNIQUE.
That's how I "know" without looping over the whole table of IDs, or downloading them into local memory, etc.
Using auto_increment won't lead to duplicates, because while a SQL or NoSQL table is in use it is locked and the next available number in the queue is handed out, which is the beauty of databases.
SQL example (MySQL, SQLite, MariaDB):
CREATE TABLE `my_db`.`my_table` ( `unique_id` INT NOT NULL , UNIQUE (`unique_id`)) ENGINE = InnoDB;
Insert a unique_id:
INSERT INTO `my_table` (`unique_id`) VALUES ('999999999');
Great, we have a row. Now insert the same value again:
INSERT INTO `my_table` (`unique_id`) VALUES ('999999999');
Error:
#1062 - Duplicate entry '999999999' for key 'unique_id'
If that happens, retry with a different value.
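A rough Java/JDBC sketch of the same insert-and-retry idea (my illustration, not from the answer; the table and column names follow the SQL example above, and conn is assumed to be an open java.sql.Connection; depending on the driver you may need to catch SQLException and inspect the error code instead):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;
import java.util.concurrent.ThreadLocalRandom;

class InsertUntilUnique {
    // Keep generating random ids and inserting until the UNIQUE constraint lets one through.
    static long insertNewId(Connection conn) throws Exception {
        while (true) {
            long candidate = ThreadLocalRandom.current().nextLong(1, 1_000_000_000L);
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO my_table (unique_id) VALUES (?)")) {
                ps.setLong(1, candidate);
                ps.executeUpdate();
                return candidate; // insert succeeded, the id is ours
            } catch (SQLIntegrityConstraintViolationException duplicate) {
                // duplicate entry: loop and try another id
            }
        }
    }
}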
If these are public URLs and the content is sensitive, then I definitely do not recommend plain ints, since someone can trivially guess 1 through 99999999, etc.
In any language, have a look at /dev/urandom.
In shell/bash scripts, I might use uuidgen:
9dccd646-043e-4984-9126-3060b4ced180
In Python, I'll use pandas (assuming df is an existing DataFrame):
import pandas as pd
df.set_index(pd.util.hash_pandas_object(df, encoding='utf8'), drop=True, inplace=True)
df.index.rename('hash', inplace=True)
Lastly, UUIDs aren't perfect: they only use a-f and 0-9, all lowercase, but they are easy to generate: every language has them.
In JavaScript you may want to check out some secure open-source apps, for example Jitsi: https://github.com/jitsi/js-utils/blob/master/random/roomNameGenerator.js, where they concatenate random words:
E.g. Satisfied-Global-Architectural-Bitter
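If a ready-made example helps, here is a hedged Java sketch of generating either a UUID or a Stripe-style prefixed random id (the prefix, alphabet, and length are my own arbitrary choices, not anything Stripe documents):

import java.security.SecureRandom;
import java.util.UUID;

public class PublicIds {
    private static final SecureRandom RNG = new SecureRandom();
    private static final char[] ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789".toCharArray();

    // Builds ids like "prod_k3f9a0x2b7mq1z" from a cryptographically strong random source.
    static String prefixedId(String prefix, int length) {
        StringBuilder sb = new StringBuilder(prefix).append('_');
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET[RNG.nextInt(ALPHABET.length)]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(UUID.randomUUID());      // e.g. 9dccd646-043e-4984-9126-3060b4ced180
        System.out.println(prefixedId("prod", 14)); // e.g. prod_w3kq0zj8p2x51a
    }
}

Store the generated id in a NOT NULL, UNIQUE column as described above and retry on a duplicate-key error; with 36^14 possible suffixes, collisions are practically nonexistent.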

How to update distributed cache under high traffic and multiple applications?

I have N services that use M Redis instances as the remote distributed cache. Suppose multiple services want to retrieve the same key, and the following pseudocode is how the work is done:
redisClient = getRedisClientByConsistentHash(key)
value = redisClient.get(key)
if value not exist
    value = getValueFromSomewhereElse(key)  // line4
    redisClient set key value ex 1 nx       // line5
return value
So the problem is:
In "line4", if 2 applications retrieve different values, one is newer and the other is old(should be deprecated), it's possible that the call to store the old value will happen before the call to store the new value, thus the new value won't be stored in redis. If we introduce some distributed lock mechanism, the problem still remains.
If the key storage internally uses a per-key timestamp, such that updating KeyA from ValueA to ValueB is only allowed when ValueB is written with a timestamp greater than KeyA's last-updated timestamp, then it is guaranteed that only newer values are accepted for a given key: old values cannot overwrite new ones (a timestamp-based protocol). (I don't know whether Redis follows a timestamp-based protocol.)
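As a rough illustration of such a timestamp-based guard in Redis (my sketch, not from the answer; it uses the Jedis client and a small Lua script so the read-compare-write happens atomically on the server):

import java.util.Arrays;
import redis.clients.jedis.Jedis;

class TimestampGuardedWrite {
    // Stores value and timestamp in a hash; only overwrites when the new timestamp is greater.
    static boolean setIfNewer(Jedis jedis, String key, String value, long timestampMillis) {
        String script =
            "local ts = tonumber(redis.call('HGET', KEYS[1], 'ts')) " +
            "if ts == nil or tonumber(ARGV[2]) > ts then " +
            "  redis.call('HSET', KEYS[1], 'value', ARGV[1]) " +
            "  redis.call('HSET', KEYS[1], 'ts', ARGV[2]) " +
            "  return 1 " +
            "end " +
            "return 0";
        Object result = jedis.eval(script,
                Arrays.asList(key),
                Arrays.asList(value, String.valueOf(timestampMillis)));
        return Long.valueOf(1L).equals(result);
    }
}

This only shows the last-writer-by-timestamp rule; the EX/NX semantics from the original pseudocode would have to be folded into the script as well (for example with an extra PEXPIRE call).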
Both of your two applications (say, A and B) tried to fetch the key from their respective primary redisClient, did not find it, and so went to fetch it from SomewhereElse; A got the old value and B got the new one. In that case there are a few questions:
1. What if A's or B's primary redisClient itself gave you a value that is old?
2. How do you know whether the value you fetched is old or new?
Solutions:
1. Use the value that has a majority, i.e. the value returned by at least ceil((M+1)/2) redisClients (a rough sketch follows below). Of course this involves querying at least ceil((M+1)/2) redisClients, which can be expensive (quorum reads, as in Paxos).
2. Depending on the application logic, most of the time you don't need the latest value. For example, if the requirement is just to check for the presence of a value, it does not matter whether the value is old or new.
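A rough Java sketch of the majority-read idea from solution 1 (my illustration; the client handles and quorum calculation are assumptions, and real quorum systems involve much more than this):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import redis.clients.jedis.Jedis;

class MajorityRead {
    // Returns the value reported by a majority of the M clients, if any value reaches quorum.
    static Optional<String> read(List<Jedis> clients, String key) {
        Map<String, Integer> counts = new HashMap<>();
        for (Jedis client : clients) {
            String value = client.get(key);
            if (value != null) {
                counts.merge(value, 1, Integer::sum);
            }
        }
        int quorum = clients.size() / 2 + 1; // majority of M clients
        return counts.entrySet().stream()
                .filter(e -> e.getValue() >= quorum)
                .map(Map.Entry::getKey)
                .findFirst();
    }
}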

How to avoid inserting the wrong data type in SQLite tables?

SQLite has this "feature" whereby even when you declare a column as INTEGER or REAL, it lets you insert a string into it, even a string with no numbers in it at all, like "the quick fox jumped over the lazy dog".
How do you prevent this kind of insertions to happen in your projects?
I mean, when my code has a bug that leads to that kind of insert or update, I want the program to raise an error so I can debug it, not silently insert garbage into my database.
You can implement this using the CHECK constraint (see previous answer here). This would look like
CREATE TABLE T (
    N INTEGER CHECK(TYPEOF(N) = 'integer'),
    Str TEXT CHECK(TYPEOF(Str) = 'text'),
    Dt DATETIME CHECK(JULIANDAY(Dt) IS NOT NULL)
);
A better and safer way is to write functions (isNumeric, isString, etc.) that validate the user input before it reaches the database...
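On the application side, one hedged Java illustration (my example, not from the answers) is to bind values through typed PreparedStatement setters, so a type mistake shows up in your own code rather than silently landing in the database; combined with the CHECK constraints above, a wrong type also fails at insert time. This assumes the xerial sqlite-jdbc driver and the table T from the example above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TypedInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db");
             PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO T (N, Str) VALUES (?, ?)")) {
            ps.setInt(1, 42);          // N is always bound as an integer
            ps.setString(2, "hello");  // Str is always bound as text
            ps.executeUpdate();        // a wrong type fails to compile here or trips the CHECK
        }
    }
}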

Compare Two-Way Encription With Data in Database

I have a column named id_num in the database, and the column must hold a unique value.
Users have to enter their ID number to register in my system.
To protect the ID number, I encrypt it using $this->encript->encode().
The encrypted data comes out as a different string every time I encode the same input.
Example:
First registration:
I entered 12345, which was encrypted to PVfuF8GDzE4yton9tNabJwG
Second registration:
I entered the same number, 12345, which was encrypted to a different string, M0wYZsDAdR1u0WlsDAdR1
So I call checkExistIdNum() to check whether the ID number already exists, to keep the id_num column unique.
function checkExistIdNum($enc_id_num = null) {
    $this->db->select('COUNT(*) AS count');
    $this->db->where("(id_num = '$enc_id_num' AND user_id != '".user_id()."')");
    $query = $this->db->get('user_info');
    $num = $query->row()->count;
    if ($num > 0) return true;
    else return false;
}
Both have the same underlying value, but how can I compare id_num = '$enc_id_num' when the two encrypted strings are different?
I think you are confusing a cryptographic hash with two-way encryption.
A hash is one-way, and it always produces the same result given identical input.
MD5 and SHA1 are one-way hash algorithms commonly used to mask passwords in databases; the main reason is exactly that they are one-way: if the hash is obtained, it cannot be reverted to its original value.
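To make the determinism concrete, here is a hedged Java illustration (the question uses PHP/CodeIgniter; this is only to show the idea): a one-way hash like SHA-256 always maps the same ID number to the same string, so an ordinary equality check or a UNIQUE index on the stored hash works. Note that an unsalted hash of a short numeric ID is easy to brute-force, which the next answer also warns about.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class IdNumHash {
    static String hashIdNum(String idNum) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(idNum.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest); // HexFormat requires Java 17+
    }

    public static void main(String[] args) throws Exception {
        // Same input, same output: this is what makes a hash comparable,
        // unlike the randomized two-way encryption in the question.
        System.out.println(hashIdNum("12345").equals(hashIdNum("12345"))); // true
    }
}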
Use the database's built-in encryption functions so that the database indexes the values, and can quickly match against an arbitrary value you enter. Otherwise you're just reinventing the wheel, and you'll either have to keep a separate index that you compare against every time (very slow), or decrypt and compare every row individually (EXTREMELY slow).
Built-in encryption solves all of this without the possibility of leaking sensitive data through the indices.
And yes, a hash might be an option, but a hash of a short, predictable account string could easily be brute-forced if someone dumped the database.
Since you don't identify your database or your PHP version I can't be more specific.

What would be the best algorithm to find an ID that is not used from a table that has the capacity to hold a million rows

To elaborate ..
a) A table (BIGTABLE) has a capacity to hold a million rows, with the primary key as the ID (random and unique).
b) What algorithm can be used to arrive at an ID that has not been used so far? This number will be used to insert another row into table BIGTABLE.
Updated the question with more details:
c) This table already has about 100K rows, and the primary key is not set as an identity.
d) Currently, a random number is generated as the primary key and a row is inserted into this table; if the insert fails, another random number is generated. The problem is that it sometimes goes into a loop: the generated numbers are pretty random, but unfortunately they already exist in the table. If we retry the random number generation after some time, it works.
e) The Sybase rand() function is used to generate the random number.
Hope this addition to the question helps clarify some points.
The question is of course: why do you want a random ID?
One case where I encountered a similar requirement was for client IDs of a webapp: the client identifies himself with his client ID (stored in a cookie), so it has to be hard to guess another client's ID by brute force (because that would allow hijacking his data).
The solution I went with was to combine a sequential int32 with a random int32 to obtain an int64 that I used as the client ID. In PostgreSQL:
CREATE FUNCTION lift(integer, integer) returns bigint AS $$
SELECT ($1::bigint << 31) + $2
$$ LANGUAGE SQL;
CREATE FUNCTION random_pos_int() RETURNS integer AS $$
select floor((lift(1,0) - 1)*random())::integer
$$ LANGUAGE sql;
ALTER TABLE client ALTER COLUMN id SET DEFAULT
lift((nextval('client_id_seq'::regclass))::integer, random_pos_int());
The generated IDs are 'half' random, while the other 'half' guarantees you cannot obtain the same ID twice:
select lift(1, random_pos_int()); => 3108167398
select lift(2, random_pos_int()); => 4673906795
select lift(3, random_pos_int()); => 7414644984
...
Why is the unique ID random? Why not use IDENTITY?
How was the ID chosen for the existing rows?
The simplest thing to do is probably SELECT MAX(ID) FROM BIGTABLE and then make sure your new "random" ID is larger than that...
EDIT: Based on the added information I'd suggest that you're screwed.
If it's an option: Copy the table, then redefine it and use an Identity Column.
If, as another answer speculated, you do need a truly random identifier: make your PK two fields, an identity field and then a random number.
If you simply can't change the table's structure, checking whether the id exists before attempting the insert is probably your only recourse.
There isn't really a good algorithm for this. You can use this basic construct to find an unused id:
int id;
do {
    id = generateRandomId();
} while (doesIdAlreadyExist(id));
doSomethingWithNewId(id);
Your best bet is to make your key space big enough that the probability of collisions is extremely low, then don't worry about it. As mentioned, GUIDs will do this for you. Or, you can use a pure random number as long as it has enough bits.
This page has the formula for calculating the collision probability.
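For a rough sense of scale (my arithmetic, not from the linked page): by the birthday bound, the probability of any collision among n random IDs drawn uniformly from a space of size N is approximately n^2 / (2N). With n = 1,000,000 rows and 64-bit IDs (N = 2^64, about 1.8 x 10^19), that is roughly 10^12 / (3.7 x 10^19), or about 2.7 x 10^-8, i.e. well under one in ten million.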
A bit outside of the box.
Why not pre-generate your random numbers ahead of time? That way, when you insert a new row into bigtable, the check has already been made. That would make inserts into bigtable a constant time operation.
You will have to perform the checks eventually, but that could be offloaded to a second process that doesn’t involve the sensitive process of inserting into bigtable.
Or go generate a few billion random numbers, and delete the duplicates, then you won't have to worry for quite some time.
Make the key field UNIQUE and IDENTITY and you won't have to worry about it.
If this is something you'll need to do often, you will probably want to maintain a live (non-db) data structure to help you answer this question quickly. A 10-way tree would be good. When the app starts, it populates the tree by reading the keys from the db, and then keeps it in sync with the various inserts and deletes made in the db. So long as your app is the only one updating the db, the tree can be consulted very quickly when verifying that the next large random key is not already in use.
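A hedged Java sketch of that idea (my example; a concurrent sorted set stands in for the 10-way tree, and the class and method names are invented):

import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.ThreadLocalRandom;

public class IdRegistry {
    private final ConcurrentSkipListSet<Long> usedIds = new ConcurrentSkipListSet<>();

    // Call once at startup with every key read from the db.
    void load(Iterable<Long> idsFromDb) {
        idsFromDb.forEach(usedIds::add);
    }

    // Call whenever the app inserts or deletes a row, to keep the set in sync.
    void onInsert(long id) { usedIds.add(id); }
    void onDelete(long id) { usedIds.remove(id); }

    // Cheap in-memory check: add() is atomic and returns false if the id is already present.
    long nextFreeRandomId() {
        long candidate;
        do {
            candidate = ThreadLocalRandom.current().nextLong(1, Long.MAX_VALUE);
        } while (!usedIds.add(candidate));
        return candidate;
    }
}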
Pick a random number, check if it already exists, if so then keep trying until you hit one that doesn't.
Edit: Or
better yet, skip the check and just try to insert the row with different IDs until it works.
First question: is this a planned database or an already functional one? If it already has data inside, then the answer by bmdhacks is correct. If it is a planned database, here is the second question:
Does your primary key really need to be random? If the answer is yes, then use a function that creates a random id from a known seed plus a counter of how many ids have been created; each id created increments the counter.
If you keep the seed secret (i.e., declared private), then no one else should be able to predict the next ID.
If ID is purely random, there is no algorithm to find an unused ID in a similarly random fashion without brute forcing. However, as long as the bit-depth of your random unique id is reasonably large (say 64 bits), you're pretty safe from collisions with only a million rows. If it collides on insert, just try again.
Depending on your database, you might have the option of either using a sequence (Oracle) or an auto-increment column (MySQL, MS SQL, etc.). As a last resort, do a SELECT MAX(id) + 1 as the new id; just be careful of concurrent requests so you don't end up with the same max id twice: wrap it in a lock together with the upcoming insert statement.
I've seen this done so many times before by brute force, using random number generators, and it's always a bad idea. Generating a random number outside of the db and checking whether it exists will put a lot of strain on your app and database, and it can lead to two processes picking the same id.
Your best option is to use MySQL's autoincrement ability. Other databases have similar functionality. You are guaranteed a unique id and won't have issues with concurrency.
It is probably a bad idea to scan every value in that table every time, looking for a unique value. I think the way to do this would be to keep a value in another table: lock on that table, read the value, calculate the next id, write the next id back, and release the lock. You can then use the id you read with confidence that your current process is the only one holding that unique value. Not sure how well it scales.
Alternatively use a GUID for your ids, since each newly generated GUID is supposed to be unique.
Is it a requirement that the new ID also be random? If so, the best answer is just to loop over (randomize, test for existence) until you find one that doesn't exist.
If the data just happens to be random, but that isn't a strong constraint, you can just use SELECT MAX(idcolumn), increment in a way appropriate to the data, and use that as the primary key for your next record.
You need to do this atomically, so either lock the table or use some other concurrency control appropriate to your DB configuration and schema. Stored procs, table locks, row locks, SELECT...FOR UPDATE, whatever.
Note that in either approach you may need to handle failed transactions. You may theoretically get duplicate key issues in the first (though that's unlikely if your key space is sparsely populated), and you are likely to get deadlocks on some DBs with approaches like SELECT...FOR UPDATE. So be sure to check and restart the transaction on error.
First check whether Max(ID) + 1 is taken, and use it if it's free.
If Max(ID) + 1 exceeds the maximum, then select an ordered chunk from the top and loop backwards through it looking for a hole. Repeat with further chunks until you run out of numbers (in which case throw a big error).
If a hole is found, save that ID in another table so you can use it as the starting point next time and avoid looping again.
Skipping the reasoning behind the task itself: the only algorithm that (a) gives you an ID not already in the table, (b) can be used to insert a new row into the table, and (c) leaves the table with random, unique IDs, is generating a random number and then checking whether it's already used.
The best algorithm in that case is to generate a random number and do a SELECT to see whether it exists, or just try to insert it if your database errors out sanely. Depending on the range of your key versus how many records there are, this can take very little time; however, the time can spike and isn't consistent at all.
Would it be possible to run some queries on BIGTABLE and see if there are any ranges that could be exploited? I.e., between 100,000 and 234,000 there are no IDs yet, so we could add IDs there.
Why not append the current date in seconds to the output of your random number generator? That way, the only way to get an identical ID is if two users are created in the same second and are given the same random number by your generator.
