Shopping cart application strategies - asp.net-mvc-3

I am developing a shopping cart application for my client and am trying to find a strategy to make sure there is no way a collision can occur during purchasing.
For example, if there are five items left in inventory and two clients happen to make a purchase at the same time, the inventory should end up at three items left, not four. It seems like I would have to know, right before the purchase, what the current inventory is. I also need a way to tell if someone has grabbed the last item even if they have not completed the purchase yet.
What strategies/patterns should I use to ensure these conditions are met? I am developing a .NET MVC application with SQL Server.

Ah concurrency.
You have multiple things to consider here:
If you read the inventory, subtract from it in application code, and write it back in a separate update, you are leaving your inventory count open to lost updates.
If your update is a single statement that blindly reduces the inventory, you could make your inventory go negative.
Your updates must:
1. Start a transaction.
2. Lock the row in question by performing a SELECT on it and reading the inventory. Handle the case where there isn't enough on hand.
3. If there is enough inventory, update the row.
3a. Sanity-check the inventory during testing.
4. Commit or roll back the transaction.
There are various ways to do this, but the above should work fine here.
You can start the transaction in your code via a new TransactionScope object, or server side in a proc via BEGIN TRANSACTION.
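A minimal T-SQL sketch of the proc approach, just to make the steps concrete; the table and column names (dbo.Inventory, ItemId, QuantityOnHand) and the proc name are assumptions for illustration, not your actual schema:

CREATE PROCEDURE dbo.ReserveStock
    @itemId int,
    @quantity int
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    DECLARE @onHand int;

    -- UPDLOCK holds the row until commit, so two buyers can't both read
    -- "5 on hand" and each subtract from it (step 2 above).
    SELECT @onHand = QuantityOnHand
    FROM dbo.Inventory WITH (UPDLOCK, ROWLOCK)
    WHERE ItemId = @itemId;

    IF @onHand >= @quantity
    BEGIN
        -- Step 3: enough stock, so reduce it inside the same transaction.
        UPDATE dbo.Inventory
        SET QuantityOnHand = QuantityOnHand - @quantity
        WHERE ItemId = @itemId;

        COMMIT TRANSACTION;
    END
    ELSE
    BEGIN
        -- Not enough on hand: release the lock and report failure.
        ROLLBACK TRANSACTION;
        RAISERROR('Insufficient inventory', 16, 1);
    END
END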

Related

Oracle database as a single synchronization point for two separate web applications

I am considering using an Oracle database to synchronize concurrent operations from two or more web applications on separate servers. The database is the single infrastructure element in common for those applications.
There is a good chance that two or more applications will attempt to perform the same operation at the exact same moment (cron invoked). I want to use the database to let one application decide that it will be the one which will do the work, and that the others will not do it at all.
The general idea is to perform a somehow-atomic select/insert of the node's ID that is visible to all connections. Only the node whose ID matches the first inserted node ID returned by the select would do the work.
It was suggested to me that a MERGE statement could be of use here. However, after doing some research, I found a discussion which states that the MERGE statement is not designed to be called concurrently from multiple sessions.
Another option is to lock a table. By definition, only one node will be able to take the lock and do the insert, then the select. After the lock is released, other instances will see the inserted value and will not perform the work.
What other solutions would you consider? I frown on workarounds with random delays, or even using oracle exceptions to notify a node that it should not do the work. I'd prefer a clean solution.
I ended up going with SELECT FOR UPDATE. It works as intended. It is important to remember to commit the transaction as soon as the needed update is made, so that other nodes don't hang waiting for the value.
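For concreteness, a minimal PL/SQL sketch of how that can look; the job_run table, its columns, and the hourly-slot convention are assumptions for illustration, and each slot's row is assumed to be pre-inserted before the nodes fire:

DECLARE
  v_claimed_by job_run.claimed_by%TYPE;
BEGIN
  -- All nodes block here; only one at a time holds the row lock.
  SELECT claimed_by INTO v_claimed_by
    FROM job_run
   WHERE run_time = TRUNC(SYSDATE, 'HH')
     FOR UPDATE;

  IF v_claimed_by IS NULL THEN
    -- First node in: claim the slot, then commit immediately so the
    -- other nodes don't hang waiting on the lock.
    UPDATE job_run
       SET claimed_by = SYS_CONTEXT('USERENV', 'HOST')
     WHERE run_time = TRUNC(SYSDATE, 'HH');
    COMMIT;
    -- ... this node performs the actual work ...
  ELSE
    COMMIT;  -- another node already claimed it; release the lock and exit
  END IF;
END;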

nhibernate doesn't get a chance to update?

I am building a small web application, where the user is granted the ability to rate items.
In my application I am using NHibernate and ASP.NET MVC.
All the rating requests are sent by jQuery (AJAX POST).
When the user votes on an item, I check whether the item has been voted on before. If so, I update the previous vote's value to the new one received. If not, I just add a new rating to my table.
I have noticed something very strange. This works well, but when I click several times really fast, something gets screwed up: I get multiple ratings. It seems as if NHibernate doesn't bother checking whether the user has previously voted, and just returns a false value.
Is this possible? How can I see what's going under the hood?
thank you
You probably have a concurrency problem. I assume that you get a thread and a transaction per click. Clicking very fast results in parallel transactions which can't see what the others are doing.
You have the typical problem that rows which aren't in the database yet (the new votes) can't be locked.
The solutions are:
Use a C# lock statement to avoid multiple votes from the same user being stored at the same time. This doesn't work when you have multiple servers (or AppDomains) on the same database, because the lock is restricted to a single AppDomain.
Use table locks in the database to lock the whole votes table, so that only one transaction can check and add votes at a time (see the sketch below).
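A hedged sketch of that second option, assuming SQL Server and a dbo.Votes table with (UserId, ItemId, Rating); the table shape and procedure name are illustrative, not taken from the question:

CREATE PROCEDURE dbo.SaveVote
    @userId int,
    @itemId int,
    @rating int
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- TABLOCKX + HOLDLOCK takes an exclusive table lock for the whole
    -- transaction, so only one transaction at a time can perform the
    -- check-then-insert, closing the duplicate-vote race.
    IF EXISTS (SELECT 1 FROM dbo.Votes WITH (TABLOCKX, HOLDLOCK)
               WHERE UserId = @userId AND ItemId = @itemId)
        UPDATE dbo.Votes SET Rating = @rating
        WHERE UserId = @userId AND ItemId = @itemId
    ELSE
        INSERT INTO dbo.Votes (UserId, ItemId, Rating)
        VALUES (@userId, @itemId, @rating)

    COMMIT TRANSACTION;
END

Note the table lock serializes all voting, which is the cost of this approach; it trades throughput for correctness.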
Have you turned on NHibernate logging?
Add the following to your hibernate.cfg.xml file:
<property name="show_sql">true</property>
The SQL generated can be seen in the console or test-runner output if you are running unit tests. You can also configure log4net to write NHibernate logging information to a file (see https://web.archive.org/web/20110514164829/http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/07/01/how-to-configure-log4net-for-use-with-nhibernate.aspx).
Lastly, how are you using NHibernate? Are you using a repository pattern? It's hard to determine what is wrong with your application without some idea of the code.

Is there any way to update the most recent version of row through SQL?

I read that Oracle maintains row versions to deal with concurrency. I want to run an update query on a very big real-time database but this update job must alter the most recent version of the row.
Is this possible via PL/SQL or simply SQL?
** Edit **
Let me clarify the scenario with the real-life issue we faced on a very large database. Our client is a well-known cell phone service provider.
Our database has a table that manages records of the current balance left on each customer's cell phone account. Among other columns, one stores the amount of the recharge and another holds the current active balance.
We have two independent PL/SQL scripts. One script is automatically fired when the customer recharges his phone and updates his balance.
The second script deducts certain charges from the customers' accounts. This is a batch job, as it applies to all customers, and it is scheduled to run at certain intervals during the day. When it runs, it loads 50,000 records into memory, updates certain columns, and performs a bulk update back to the table.
The issue happened like this:
A customer, whose ID is 101, contacted his local shop to get his phone recharged and paid the amount. But before his phone was recharged, the second script fired on schedule and loaded the records of 50,000 customers into memory, including this customer's record.
Before the second script's batch update finished, the first script successfully recharged the customer's account.
So in the actual table the column "CurrentAccountBalance" was updated to 150, but the in-memory record the second script was working on still had the customer's old balance, i.e. 100.
The second script had to deduct 10 from "CurrentAccountBalance". Where the correct result would have been 140, this issue made his balance 90.
Now, how do we deal with this issue?
I think what you want is what already happens on an UPDATE.
It is true that Oracle keeps old data for a while, but only to support consistent reads: read operations see the state as it was at the start of the transaction, even if the data was overwritten in the meantime. It's called Multi Version Concurrency Control and can be influenced by the transaction isolation level.
You can explicitly request the most recent version by selecting FOR UPDATE; that adds a lock on the record so that nobody else can update it in the meantime (until your transaction ends).
However, if you need to write anything (e.g., UPDATE), Oracle always works on the most recent version.
As @Markus suggested, you have a race condition. If you're loading records into memory, working on them, and then updating the rows in the table, while something else may update them in the meantime, you need to lock them while you work on them. (I'm assuming whatever you're doing is too complicated to do as a simple one-step update.) Something like this would work:
DECLARE
  new_value current_balance_table.CurrentAccountBalance%TYPE;
  CURSOR c IS SELECT * FROM current_balance_table FOR UPDATE;
BEGIN
  FOR r IN c LOOP
    /* Do whatever calculations you need */
    new_value := r.CurrentAccountBalance - 10;
    UPDATE current_balance_table SET CurrentAccountBalance = new_value
      WHERE CURRENT OF c;
  END LOOP;
END;
The problem now is that all the records are locked for the duration of the loop, so your customer in the shop will either not be able to update their balance, or will have a long wait before the update takes effect - though when it does, it will work on the updated value you stored. So you'd have to break the cursor up into small chunks, balancing the performance of your script against the impact on anyone else trying to update the same table.
One option would be to have an outer cursor selecting all the customers you're targeting with no locking, and then an inner one that locks the balance record for that customer while that row is calculated and updated. You'd have to commit after each inner loop to release the lock for that row. This involves a lot more locking/unlocking, and committing after every row update slows things down a lot, but it minimises the impact on the individual customer in the shop: only a single row is locked at a time, and the length of time it is locked is minimised. So you need to find the right balance.
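A hedged sketch of that outer/inner approach, reusing the assumed names from the block above (customer_id is also an assumption). Note the outer cursor is not FOR UPDATE, so fetching across the per-row commits is safe:

DECLARE
  new_value current_balance_table.CurrentAccountBalance%TYPE;
BEGIN
  -- Outer loop: pick the target customers without locking anything.
  FOR t IN (SELECT customer_id FROM current_balance_table) LOOP
    -- Inner loop: lock just this one customer's balance row.
    FOR r IN (SELECT CurrentAccountBalance
                FROM current_balance_table
               WHERE customer_id = t.customer_id
                 FOR UPDATE) LOOP
      new_value := r.CurrentAccountBalance - 10;  -- your real calculation here
      UPDATE current_balance_table
         SET CurrentAccountBalance = new_value
       WHERE customer_id = t.customer_id;
    END LOOP;
    COMMIT;  -- release the row lock immediately after each customer
  END LOOP;
END;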

Strategy for updating data in databases (Oracle)

We have a product using Oracle, with about 5000 objects in the database (tables and packages). The product is divided into two parts. The first is the hard part: client, packages and database schema. The second is composed basically of soft data representing processes (workflows) that can be configured to run on our product.
The basic processes (workflows) are delivered as part of the product, and our customers can change these processes and adapt them to their needs. The problem arises when they try to upgrade to a newer version of the product: when we then try to update the data records, there are problems with records our customers have deleted or modified.
Is there a strategy to handle this problem?
It is common for a software product to be comprised of not just client and schema objects, but data as well; typically it seems to be called "static data", i.e. it is data that should only be modified by the software developer, and is usually not modifiable by end users.
If the end users bypass your security controls and modify/delete the static data, then you need to either:
1. write code that detects, and compensates for, any modifications the end user may have done; e.g. wipe the tables and repopulate with "known good" data;
2. get samples of the modifications from your customers so you can hand-code customised update scripts for them, without affecting their customisations; or
3. don't allow modifications of static data (i.e. if they customise the product by changing data they shouldn't, you say "sorry, you modified the product, we don't support you").
From your description, however, it looks like your product is designed to allow customers to customise it by changing data in these tables; in which case, your code just needs to be able to adapt to whatever changes they may have made. That needs to be a fundamental consideration in the design of the upgrade. The strategy is to enumerate all the types of changes that users may have made (or are likely to have made), and cater for them. The only viable alternative is option 1 above, which removes all customisations.
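As one hedged illustration of option 1, a MERGE from a staging table holding the shipped "known good" data would restore shipped rows while leaving rows the customer added untouched; the workflow_step tables and columns here are invented for illustration, and note this still overwrites customer edits to shipped rows:

MERGE INTO workflow_step tgt
USING (SELECT step_id, step_name, step_order
         FROM workflow_step_staging) src
ON (tgt.step_id = src.step_id)
WHEN MATCHED THEN
  -- Shipped row exists: reset it to the known-good version.
  UPDATE SET tgt.step_name  = src.step_name,
             tgt.step_order = src.step_order
WHEN NOT MATCHED THEN
  -- Shipped row was deleted by the customer: re-create it.
  INSERT (step_id, step_name, step_order)
  VALUES (src.step_id, src.step_name, src.step_order);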

(ASP.NET) How would you go about creating a real-time counter which tracks database changes?

Here is the issue.
A site I've recently taken over tracks the "miles" you ran in a day. A user can log into the site and add that they ran 5 miles, and this is added to the database.
At the end of the day, around 1am, a service runs which totals the miles all the users ran that day and outputs a text file to App_Data. That text file is then displayed in Flash on the home page.
I think this is kind of ridiculous. I was told they had to do this due to massive performance issues. They won't tell me exactly how they were doing it before or what the major performance issue was.
So what approach would you guys take? The first thing that popped into my mind was a web service which gets the data via an AJAX call. Perhaps every time a new "mile" entry is added, a trigger is fired and updates the "GlobalMiles" table.
I'd appreciate any info or tips on this.
Thanks so much!
Answering this question is a bit difficult, since we don't know all of your requirements and something unspecified didn't work before. So here are some different ideas.
First, revisit your assumptions. Generating a static report once a day is a perfectly valid solution if all you need is daily reports. Why hit the database multiple times throughout the day if all that's needed is a snapshot? (For instance, lots of blog software used to write HTML files when a post was published rather than serving up the entry from the database each time -- many still do, as an optimization.) Is the "real-time" feature something you are adding?
I wouldn't jump to AJAX right away. Use the same input method; just move the report from static to dynamic. Doing too much at once is a good way to get yourself buried. When changing existing code I try to find areas that I can change in isolation with the least amount of impact on the rest of the application. Once you have the dynamic report, then you can add AJAX (and please use progressive enhancement).
As for the dynamic report itself you have a few options.
Of course you can just SELECT SUM(), but it sounds like that would cause the performance problems if each user has a large number of entries.
If your database supports it, I would look at using an indexed view (sometimes called a materialized view). It lets SQL Server maintain the sums as rows change, so reads of the real-time totals stay fast:
CREATE VIEW dbo.vw_Miles WITH SCHEMABINDING AS
SELECT SUM([Count]) AS TotalMiles,
       COUNT_BIG(*) AS EntryCount,
       UserId
FROM dbo.Miles   -- SCHEMABINDING requires the two-part table name
GROUP BY UserId
GO
CREATE UNIQUE CLUSTERED INDEX ix_Miles ON dbo.vw_Miles (UserId)
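Reading a total is then a single seek against the view's index. A small usage sketch, assuming SQL Server (on editions other than Enterprise the NOEXPAND hint is needed for the view's index to actually be used; @id stands in for your parameter):

SELECT TotalMiles
FROM dbo.vw_Miles WITH (NOEXPAND)
WHERE UserId = @id;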
If the overhead of that is too much, @jn29098's solution is a good one: roll it up using a scheduled task. If there are a lot of entries for each user, you could add only the delta since the last time the task ran:
UPDATE GlobalMiles SET [TotalMiles] = [TotalMiles] +
  (SELECT SUM([Count])
   FROM Miles
   WHERE UserId = @id
     AND EntryDate > @lastTaskRun
   GROUP BY UserId)
WHERE UserId = @id
If you don't care about storing the individual entries but only the total, you can update the count on the fly:
UPDATE Miles SET [Count] = [Count] + @newCount WHERE UserId = @id
You could use this method in conjunction with the sproc that adds the entry and get the best of both worlds.
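A hedged sketch of that combined stored procedure; the procedure name and the exact table shapes are assumptions built from the names already used above:

CREATE PROCEDURE dbo.AddMiles
    @id int,
    @newCount int
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Keep the individual entry for history and reporting...
    INSERT INTO dbo.Miles (UserId, [Count], EntryDate)
    VALUES (@id, @newCount, GETDATE());

    -- ...and maintain the running total in the same transaction,
    -- so the entries and the total can never drift apart.
    UPDATE dbo.GlobalMiles
    SET [TotalMiles] = [TotalMiles] + @newCount
    WHERE UserId = @id;

    COMMIT TRANSACTION;
END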
Finally, your trigger method would work as well. It's an alternative to the indexed view where you do the update yourself on a table instead of SQL Server doing it automatically. It's also similar to the previous option, in that you move the global update out of the sproc and into a trigger.
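A hedged sketch of that trigger, reusing the assumed table names from above; it covers inserts only, which ties into the caveat in the next paragraph:

CREATE TRIGGER trg_Miles_Insert ON dbo.Miles
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Roll up the newly inserted rows per user and add them to the totals.
    UPDATE g
       SET g.[TotalMiles] = g.[TotalMiles] + i.NewMiles
      FROM dbo.GlobalMiles AS g
      JOIN (SELECT UserId, SUM([Count]) AS NewMiles
              FROM inserted
             GROUP BY UserId) AS i
        ON i.UserId = g.UserId;
    -- Assumes a GlobalMiles row already exists for each user; an upsert
    -- and delete/update handling are omitted for brevity.
END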
The last three options make it more difficult to handle the situation when an entry is removed, although if that's not a feature of your application then you may not need to worry about that.
Now that you've got materialized, real-time data in your database, you can dynamically generate your report. Then you can add the fancy AJAX on top.
If they are truly having performance issues due to too many hits on the database, then I suggest you take all the input and push it into a message queue (MSMQ). You can then have a service on the other end that picks up the messages and does a bulk insert of the data. This way you have fewer database hits, and you can still output the text file on each update too.
I would create a summary table that's rolled up once an hour or nightly, which calculates total miles run. For individual requests you could pull from the summary table, plus any additional miles logged between the last rollup calculation and when the user views the page, to get that user's total.
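A hedged sketch of that read, where dbo.DailySummary and its columns are assumptions for illustration and dbo.Miles holds the individual entries:

-- CurrentTotal = last rollup + anything logged since the rollup ran.
SELECT s.TotalMiles
     + ISNULL((SELECT SUM(m.[Count])
                 FROM dbo.Miles AS m
                WHERE m.UserId = s.UserId
                  AND m.EntryDate > s.LastRollupTime), 0) AS CurrentTotal
  FROM dbo.DailySummary AS s
 WHERE s.UserId = @id;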
How many users are you talking about and how many log records per day?