How does Oracle handle concurrency in a clustered environment? - oracle

I have to implement a database solution in which contention is handled in a clustered environment. Multiple users may try to access a bank account at the same time and deposit money into it if the balance is less than $100. How can I make sure that no extra money is deposited? Basically, this query is supposed to fire:
update acct set balance=balance+25 where acct_no=x ;
Since the database is clustered, the account ends up being credited multiple times.
I am looking for a purely Oracle-based solution.

Clustering doesn't matter here; the mechanism that prevents the scenario you're fearing/seeing is locking.
Consider the scenario where user A and then user B try to do an update, based on a check (less than 100 dollars in the account):
If both the check and the update are done in the same transaction, locking will prevent user B from performing the check UNTIL user A has done both the check and the actual update. In other words, user B will then find the check failing and will not perform the requested action.
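A minimal sketch of that pattern in Oracle, reusing the acct table from the question; the SELECT ... FOR UPDATE, the example account number and the under-$100 check are assumptions about how the check is written, not something stated in the question:

DECLARE
  l_acct_no CONSTANT acct.acct_no%TYPE := 12345;  -- example account number (invented)
  l_balance acct.balance%TYPE;
BEGIN
  SELECT balance
    INTO l_balance
    FROM acct
   WHERE acct_no = l_acct_no
     FOR UPDATE;                 -- a second session running the same block waits here

  IF l_balance < 100 THEN        -- the check and ...
    UPDATE acct
       SET balance = balance + 25
     WHERE acct_no = l_acct_no;  -- ... the update run under the same row lock
  END IF;

  COMMIT;                        -- releases the lock; the waiting session then re-reads the new balance
END;
/

Whether the cluster has one node or ten, the row lock is the same lock, so the second session always sees the committed balance before it performs its own check.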
When a user says "at the same time", you should know that the database has no such concept: transactions are serialized, no matter how identical the milliseconds are. Look at the ID kept in the redo logs; there is only one counter. Transactions X and Y happen before or after each other, never at the same time.

That doesn't sound right. When Oracle locks a row for update, the lock is held across all nodes of the cluster. What version of Oracle are you using, and can you provide a step-by-step example of what you're doing?
Oracle 11 doc here:
http://docs.oracle.com/cd/B28359_01/server.111/b28318/consist.htm#CNCPT020

Related

Event Sourcing and concurrent, contradictory events creation

I am having a hard time figuring this one out. Maybe you can help me.
Problem statement:
Imagine there is a system that records financial transactions of an account (like a wallet service). Transactions are stored in a database, and each Transaction denotes an increase or decrease of the balance by a given amount.
On the application side, when the User wants to make a purchase, all Transactions for his account are pulled from the DB and the current balance is calculated. Based on the result, the customer either has or does not have sufficient funds for the purchase (the balance can never go below zero).
Transactions example:
ID, userId, amount, currency, otherData
Transaction(12345, 54321, 180, USD, ...)
Transaction(12346, 54321, -50, USD, ...)
Transaction(12347, 54321, 20, USD, ...)
Those 3 from above would mean the User has 150 USD on his balance.
Concurrent access:
Now, imagine there are 2 or more instances of such an application. Imagine the User has a balance of 100 USD and buys two items worth 100 USD each at the same time. Each purchase request goes to a different instance; both read all Transactions from the DB and reduce them into currentBalance. In both replicas, at that moment, the balance equals 100 USD. Both services allow the purchase and add a new Transaction(12345, 54321, -100, USD, ...), which decreases the balance by 100.
With two such contradictory Transactions inserted into the DB, the balance becomes invalid: -100 USD.
Question:
How should I deal with such a situation?
I know that usually optimistic or pessimistic concurrency control is used. So here are my doubts about both:
Optimistic concurrency
It's about keeping a version of the resource and comparing it before the actual update, like a CAS operation. Since Transactions are a form of events - immutable entities - there is no resource whose version I could track. I do not update anything; I only insert new changes to the balance, which have to be consistent with all other existing Transactions.
Pessimistic concurrency
It's about locking the table/page/row for modification, for cases where conflicts happen more often in the system. Blocking a table/page for each insert is off the table, I think (scalability and high-load concerns). And locking rows - well, which rows do I lock? Again, I do not modify anything in the existing DB state.
Open ideas
My feeling is that this kind of problem has to be solved at the application-code level. Some still-vague ideas that come to my mind:
A distributed cache which holds a lock for a given User, so that only one Transaction can be processed at a time (purchase, deposit, withdrawal, refund, anything).
Each Transaction having a field such as previousTransactionId - a pointer to the last committed Transaction - with some kind of unique index on this field (exactly one Transaction can point to exactly one Transaction in the past, the first Transaction ever having a null value). This way I'd get a constraint-violation error when trying to insert a duplicate; see the sketch after this list.
Asynchronous processing with a queueing system and a topic per user: exactly one instance processing Transactions for a given User, one by one. Nice idea, but unfortunately I need to be synchronous with the purchase in order to reply to a 3rd-party system.
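A rough sketch of what idea 2 could look like as a schema; all names here are invented for illustration:

-- Hypothetical ledger table; each row points at its predecessor via prev_tx_id
CREATE TABLE wallet_transactions (
  id         NUMBER        PRIMARY KEY,
  user_id    NUMBER        NOT NULL,
  amount     NUMBER(12,2)  NOT NULL,
  currency   CHAR(3)       NOT NULL,
  prev_tx_id NUMBER                       -- NULL only for a user's very first transaction
);

-- At most one transaction may claim a given predecessor, so the second of two
-- concurrent inserts that both read the same "latest" transaction fails with a
-- constraint violation and can be retried against the refreshed balance.
CREATE UNIQUE INDEX uq_wallet_prev_tx ON wallet_transactions (prev_tx_id);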
One thing to note is that typically there's a per-entity offset (a monotonically increasing number, e.g. Account|12345|6789 could be the 6789th event for account #12345) associated with each event. Thus, assuming the DB in which you're storing events supports it, you can get optimistic concurrency control by remembering the highest offset seen when reconstructing the state of that entity and conditioning the insertion of events on there not being events for account #12345 with offsets greater than 6789.
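A hedged SQL sketch of that conditional insert; the account_events table, its columns and the unique key on (account_id, offset_no) are assumptions, not part of the original answer:

-- Assumed schema: account_events(account_id, offset_no, amount, ...) with a
-- UNIQUE constraint on (account_id, offset_no).
INSERT INTO account_events (account_id, offset_no, amount)
SELECT :account_id, :expected_offset + 1, :amount
  FROM dual
 WHERE NOT EXISTS (SELECT 1
                     FROM account_events
                    WHERE account_id = :account_id
                      AND offset_no  > :expected_offset);
-- 0 rows inserted (or a unique-constraint error) means another writer appended
-- an event first: reload the entity, recompute the balance, and retry or reject.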
There are datastores which support the idea of "fencing": only one instance is allowed to publish events to a particular stream, which is another way to achieve optimistic concurrency control.
There are approaches which move pessimistic concurrency control into the application/framework/toolkit code. Akka/Akka.Net (disclaimer: I am employed by Lightbend, which maintains and sells commercial support for one of those two projects) has cluster sharding, which allows multiple instances of an application to coordinate ownership of entities between themselves. For example, instance A might have account 12345 and instance B might have account 23456. If instance B receives a request for account 12345, it (massively simplifying) effectively forwards the request to instance A, which enforces that only one request for account 12345 is being processed at a time. This approach can in some ways be thought of as a combination of your ideas 1 (of note: this distributed cache is not only providing concurrency control, but is actually caching the application state too, e.g. the account balance and any other data useful for deciding if a transaction can be accepted) and 3 (even though it presents a synchronous API to the outside world).
Additionally, it is often possible to design the events such that they form a conflict-free replicated data type (CRDT) which effectively allows forks in the event log as long as there's a guarantee that they can be reconciled. One could squint and perhaps see bank accounts allowing overdrafts (where the reconciliation is allowing a negative balance and charging a substantial fee) as an example of a CRDT.
How should I deal with such a situation?
The general term for the problem you are describing is set validation. If there is some property that must hold for the set taken as a whole, then you need to have some form of lock to prevent conflicting writes.
Optimistic/pessimistic are just two different locking implementations.
In the event that you have concurrent writes, the usual general mechanism is that first writer wins. The losers of the race follow the "concurrent modification" branch, and either retry (recalculating again to ensure that the desired properties still hold) or abort.
In a case like you describe, if your insertion code is responsible for confirming that the user balance is not negative, then that code needs to be able to lock the entire transaction history for the user.
Now: notice that "if" in the previous paragraph, because it's really important. One of the things you need to understand in your domain is whether or not your system is the authority for transactions.
If your system is the authority, then maintaining the invariant is reasonable, because your system can say "no, that one isn't a permitted transaction", and everyone else has to go along with it.
If your system is NOT the authority - you are getting copies of transactions from "somewhere else" - then your system doesn't have veto power, and shouldn't be trying to skip transactions just because the balance doesn't work out.
So we might need a concept like "overdrawn" in our system, rather than trying to state absolutely that balance will always satisfy some invariant.
Fundamentally, collaborative/competitive domains with lots of authorities working in parallel require a different understanding of properties and constraints than the simpler models we can use with a single authority.
In terms of implementation, the usual approach is that the set has a data representation that can be locked as a whole. One common approach is to keep an append-only list of changes to the set (sometimes referred to as the set's history or "event stream").
In relational databases, one successful approach I've seen is to implement a stored procedure that takes the necessary arguments and then acquires the appropriate locks (i.e. applying "tell, don't ask" to the relational data store); that insulates the application code from the details of the data store.
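A hedged PL/SQL sketch of that "tell, don't ask" shape; every name below is invented, and the parent wallet_users row is simply used as the lock handle for the whole per-user history:

CREATE OR REPLACE PROCEDURE record_purchase (
  p_user_id IN NUMBER,
  p_amount  IN NUMBER           -- negative for a purchase or withdrawal
) AS
  l_dummy   NUMBER;
  l_balance NUMBER;
BEGIN
  -- Serialize writers per user by locking the user's parent row;
  -- readers elsewhere are unaffected.
  SELECT user_id INTO l_dummy
    FROM wallet_users
   WHERE user_id = p_user_id
     FOR UPDATE;

  SELECT NVL(SUM(amount), 0) INTO l_balance
    FROM wallet_transactions
   WHERE user_id = p_user_id;

  IF l_balance + p_amount < 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Insufficient funds');
  END IF;

  -- wallet_tx_seq is an assumed sequence for transaction ids
  INSERT INTO wallet_transactions (id, user_id, amount, currency)
  VALUES (wallet_tx_seq.NEXTVAL, p_user_id, p_amount, 'USD');
END record_purchase;
/

The caller only commits or rolls back; the decision about whether the transaction is allowed lives next to the data.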

How to keep design docs in sync across per-user databases

I am building an app, for example, todoapp.
Features:
Offline-first (couchdb + pouchdb)
Multi-user
CouchDB options: require_valid_user, couch_peruser.
Ok, each user has a private database. But how can I validate docs on post/put?
A design doc with validate_doc_update must be in every user db (userdb-{hex}).
How can I place it there and update it in all user databases at once when needed? Sync? A third db and replication? How can I replicate to all userdbs?
You have three basic options, which are all quite similar:
Set up continuous replications between a prototype database and each user's database. For large numbers of users, and low frequency of updates, this could amount to a lot of overhead for little gain.
Trigger a one-off replication between a prototype database and each user's database, every time an update occurs. This requires you to know when an update occurs in your application, and to manually handle any replication failures that may occur. This overhead may be pretty small in simple scenarios.
Have your application update the design docs every time they change, for each user database. This is sort of a manual sync option.
Which option you choose is really up to you, and is a matter of trade-offs.
If you would ever want the option to update only a subset of users (say, a beta-testing group), then options 2 or 3 are going to be your best bets.

Tables being affected by current transactions in Oracle

We are designing a web-based application with an Oracle backend for conference room reservation. We have decided to use optimistic locking because we expect the number of collisions to be on the lower side.
Now - in case of optimistic locking, there is always a possibility of "Data Already Modified by Somebody Else" scenario.
Our UI involves quite a few fields to be entered, and displaying a message such as "Data has already been modified" is not a pleasant experience for the end user, especially after he/she has entered, say, 15+ fields.
What I am contemplating is displaying a "Collision Probability" when the end user starts his/her UI session, based upon the first few entries he/she has made in the UI.
This CP (collision probability) would be dynamically calculated by taking into account the database sessions which are in progress against the given table/columns.
For example - if both Person A and Person B are viewing information for Conference Room X - then both will be shown a higher CP (collision probability).
In such a case, either of them can decide to wait a few seconds. This will be better than re-entering all the data.
Now my question: in Oracle, is there a way to determine which sessions are working against which tables and ROWIDs?
"our UI involves quite a few fields to be entered."
This seems like the sort of problem which could be solved by better flow. Minimise the number of fields a user needs to enter before they can get a list of available suitable rooms. When they choose a room, use pessimistic locking to ensure nobody else can snatch the room while they are completing the booking application. Stash a copy of the initial fields so they can re-run the original query if they change their mind.
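One possible shape of that pessimistic step, with made-up table and column names:

-- Claim the chosen room for the duration of the booking transaction; fail fast
-- (ORA-00054) if another session already holds it instead of queueing behind them.
SELECT room_id
  FROM conference_rooms
 WHERE room_id = :chosen_room
   FOR UPDATE NOWAIT;

If the statement errors out, the room is being booked by someone else and the user can be told so before filling in the remaining fields.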
Of course, this means maintaining a session and handling state, and we all know web applications suck at that. Which is a way of saying that we often use web technologies when they aren't suited to the application we're writing.

Username uniqueness validation - Design Approach

This is a general design problem - I want to validate a username field for uniqueness when the user enters the value and tabs out. I do an Ajax validation and get a response from the server. This is all very standard. Now, what if I have a HUGE user database? How do I handle this situation? I want to find out whether a username such as "foozbarz" is present among 150 million usernames.
Database queries are out of the question [EDIT] - read the username database once and populate the cache/hash for faster lookups (to clarify Emil Vikström's point)
In-memory databases won't help either
Keep an in-memory hash (or cache/memcache) to store all usernames - usernames can be easily hashed and lookup will be very fast. But there are some problems with this:
a. Size of the hash - can we optimize so that we can reduce the hash size?
b. Hash/cache refresh frequencies (users might get added while we are validating)
Shard the username table based on some criteria (e.g.: A-B in table username_1 and so on) - thanks piotrek for this suggestion
Or, any other better approach ?
Why don't you simply partition the data? If you have, or plan to have, 150M+ users, I assume you have (or will have) the budget for this. If you are just starting out (with 2k users), do it the traditional way, with a simple indexed search on the database. When you have so many users that you observe performance issues, and you have measured that the database is the cause (and not, e.g., the www server), then you simply add another database. On the first one you will have users with names from a to m, and the rest on the other one. You may choose another criterion, such as a hash, to keep the data balanced. When you need more, you add more databases. But if you don't have that many users right now, I advise you not to do any premature optimization. There are many things that may become a bottleneck with this amount of data.
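If the data stays inside a single Oracle database, a related (but not identical) option is table partitioning rather than splitting across databases; a hedged sketch with invented names:

-- Spread the username table over several segments by hash; each lookup still
-- touches only one partition via the primary-key index.
CREATE TABLE usernames (
  username VARCHAR2(64) PRIMARY KEY
)
PARTITION BY HASH (username) PARTITIONS 8;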
You are most likely right about doing some kind of hashing where you store the taken names; obviously, a name that isn't in the hash is free.
What you shouldn't do is rely on that validation. A lot of time can pass between the user checking whether the name is free and the user pressing Register.
To be fair, you only have one issue here, and that's whether you REALLY need to worry about reaching 150 million users. Scalability is often an issue, but unless it happens overnight, you can probably swap in a better solution before it does.
Secondly, there's your worry about two users both getting a THIS NAME IS FREE and then one taking it. First of all, the chances of that happening are pretty low. Secondly, the only ways I can think of ‘solving’ this so that a user will never click OK with a validated name and still get a USERNAME TAKEN are to either
a) Remember what the user validated last, store that, and if someone else registers it in the meantime, use AJAX to change the name field to taken and notify the user. Don't do this. A lot of wasted cycles and really too much effort to implement.
b) Lock usernames as a user validates one, for a short period of time. This results in a lot of free usernames coming up as taken when they actually aren't. You probably don't want this either.
The easiest solution is to simply insert the name into the table when the user actually clicks OK, but before doing that, check again whether the name exists. If it does, just send the user back with USERNAME TAKEN. The chances of someone racing someone else for a name are really, really slim, and I doubt anyone will make a big fuss over how your validator (which did its job - the name was free at the point of checking) ‘lied’ to the user.
Basically your only issue is how you want to store the nicknames.
Your criterion #1 is flawed, because this is exactly what you have a database system for: to store and manage data. Why do you even have a table with usernames if you're not going to read it?
The first thing to do is improve the database system by adding an index, preferably a HASH index if your database system supports it. You will have a hard time writing anything that comes near its performance yourself.
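For example (table and column names assumed; in Oracle the everyday choice is a unique B-tree index, while some other engines offer true HASH indexes):

-- A single indexed probe answers the availability check and doubles as the
-- uniqueness guarantee that closes the race between check and registration.
CREATE UNIQUE INDEX idx_users_username ON users (username);

SELECT COUNT(*) FROM users WHERE username = :candidate;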
If this is not enough, you must start scaling your database, for example by building a clustered database or by partitioning the table into multiple sub-tables.
What I think is a fair thing to do is implement caching in front of the database, but for single names. Not all usernames will have a collision attempt, so you may cache the small subset where the collisions typically happen. A simple algorithm for checking the collision status of USER:
Check if USER exists in your cache. If it does:
  Set a "last checked" timestamp for USER inside the cache.
  You are done, and USER is a collision.
Check the database for USER. If it does exist:
  Add USER to the cache.
  If the cache is full (all X slots are used), remove the least recently used username from the cache (or the Y least recently used usernames, if you want to minimize cache pruning).
  You are done, and USER is a collision.
If it matched neither the cache nor the db, you are done and USER is NOT a collision.
You will of course still need a UNIQUE constraint in your database to avoid race conditions.
If you're going the traditional route you could use an appropriate index to improve the database lookup.
You could also try using something like ElasticSearch which has very low latency lookups on large data sets.
If you have 150M+ users, you will have to have in place some function that:
Checks that the user exists, and signals if not found
Verifies the password is correct, and signals if it is not
Retrieves the user's data
This is a problem you will have anyway, and will have to solve, in all likelihood with something akin to a user lookup query. Even if you rely heavily on sessions, you will still have the problem of "finding session X among many in a 150M+ pool", which is structurally identical to "finding user X among many in a 150M+ pool".
Once you solve the bigger problem, the problem you now have is just its step #1.
So I'd check out a scalable database solution (possibly a NoSQL one), and implement the "availability check" using that.
You might end up with a
retrieveUserData(user, password = None)
which returns the user info if the user and password are valid and correct. For the availability check, you would send no password, and expect a UserNotFound exception if the username is available.

Oracle transaction read-consistency?

I have a problem understanding read consistency in databases (Oracle).
Suppose I am the manager of a bank. A customer has acquired a lock (which I don't know about) and is doing some updating. Now, after he has got the lock, I am viewing their account information and trying to do something with it. But because of read consistency I will see the data as it existed before the customer got the lock. So won't that affect the inputs I am getting and the decisions I am going to make during that period?
The point about read consistency is this: suppose the customer rolls back their changes? Or suppose those changes fail because of a constraint violation or some system failure?
Until the customer has successfully committed their changes those changes do not exist. Any decision you might make on the basis of a phantom read or a dirty read would have no more validity than the scenario you describe. Indeed they have less validity, because the changes are incomplete and hence inconsistent. Concrete example: if the customer's changes include making a deposit and making a withdrawal, how valid would your decision be if you had looked at the account when they had made the deposit but not yet made the withdrawal?
Another example: a long running batch process updates the salary of every employee in the organisation. If you run a query against employees' salaries do you really want a report which shows you half the employees with updated salaries and half with their old salaries?
edit
Read consistency is achieved by using the information in the UNDO tablespace (rollback segments in the older implementation). When a session reads data from a table which is being changed by another session, Oracle retrieves the UNDO information which has been generated by that second session and substitutes it for the changed data in the result set presented to the first session.
If the reading session is a long-running query, it might fail due to the notorious ORA-01555: snapshot too old. This means the UNDO extent which contained the information necessary to assemble a read-consistent view has been overwritten.
Locks have nothing to do with read consistency. In Oracle writes don't block reads. The purpose of locks is to prevent other processes from attempting to change rows we are interested in.
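A small illustration of that behaviour, reusing a bank-account table like the one in the first question; the numbers are made up:

-- Session 1 (the customer), not yet committed:
UPDATE acct SET balance = balance - 500 WHERE acct_no = 1;

-- Session 2 (the manager), at the same moment:
SELECT balance FROM acct WHERE acct_no = 1;
-- Returns the balance as it was before session 1's update, reconstructed from UNDO.
-- Only after session 1 issues COMMIT will a fresh query in session 2 see the change;
-- the SELECT is never blocked by session 1's row lock.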
For systems that have a large number of users, where users may "hold" the lock for a long time, the Optimistic Offline Lock pattern is usually used, i.e. include the version in the UPDATE ... WHERE statement.
You can use a date, a version id or something else as the row version. The pseudocolumn ORA_ROWSCN may also be used, but you need to read up on it first.
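A hedged sketch of that pattern; the row_version column is an assumption (ORA_ROWSCN or a last-updated date could play the same role):

UPDATE acct
   SET balance     = :new_balance,
       row_version = row_version + 1
 WHERE acct_no     = :x
   AND row_version = :version_read_earlier;
-- 0 rows updated means somebody changed the row after it was read:
-- re-read it and either retry or report the conflict to the user.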
When a record is locked due to changes or an explicit lock statement, an entry is made in the header of that block. This is called an ITL (interested transaction list) entry. When you come along to read that block, your session sees this and knows where to go to get the read-consistent copy from the rollback segment.
