3DSecure periodically timing out but taking payment - Opayo

I am experiencing a very frustrating issue with SagePay Direct when a card payment initiates a 3DSecure challenge.
Customers are reporting either a hanging iFrame or a "payment declined" response. What's worse, in some instances Sage takes the payment but the user is unaware of this and tries to buy again.
Looking at my logs, my code is working as expected and loads the iFrame with the returned ACSURL as its src.
After searching the web, it appears to be a known issue: a timeout occurs at the card issuer's secure page that I hand off to.
The trouble I have is that I have no control over the response (or lack of one) from the issuer, as it's in an iFrame.
Sage have not been very helpful with this problem, only going as far as to say "we have heard of customers who experience this issue".
Does anyone have any experience of this problem and know how to resolve it? I guess the bottom line is to turn off the 3DSecure checks, but this seems counterproductive given the new EU ruling coming into force at some point.
Worth pointing out that this only affects a small percentage of my customer base, and a lot of transactions process successfully (even with the password challenge), but the customers who do experience problems are rightly shouting loudly.
Anyone any ideas?
Thanks

We process up to 1,000-2,000 transactions daily via SagePay, using the Direct protocol. They are very cheap, but their service is in all honesty fairly terrible. We have a single-digit number of transactions every day that fail in this way. We've also got another provider and don't experience the same issues with them.
We have a routine job that asks the SagePay Reporting API about transactions that failed, to see what the current status is (did SagePay get the transaction? was it successfully authorised? etc). This API is utterly, utterly terrible and was a nightmare to integrate with, but it's useful as at least we can refund customers without having to log into the SagePay dashboard.
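For anyone building something similar, the shape of our reconciliation job is roughly this (a sketch only - querySagePayStatus() is a hypothetical stand-in for the Reporting API call, which is an XML-over-HTTPS protocol you'd implement against SagePay's docs, and the table/column names are assumptions):

<?php
// reconcile.php - periodically re-check transactions we recorded as failed.
// querySagePayStatus() is a hypothetical wrapper around the SagePay
// Reporting & Admin API; implement it against their documentation.
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$failed = $pdo->query(
    "SELECT vendor_tx_code FROM payments WHERE status = 'failed'"
)->fetchAll(PDO::FETCH_COLUMN);

foreach ($failed as $vendorTxCode) {
    $status = querySagePayStatus($vendorTxCode); // hypothetical helper
    if ($status === 'OK') {
        // SagePay took the money even though our side recorded a failure:
        // flag it for a refund (or complete the order) rather than losing it.
        $pdo->prepare(
            "UPDATE payments SET status = 'needs_refund' WHERE vendor_tx_code = ?"
        )->execute([$vendorTxCode]);
    }
}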
One thing that we discovered (that isn't documented anywhere on the SagePay site as far as I can tell) is that you're limited to one transaction at a time, or around 20-30 transactions per minute by default. If you go over this (a temporary peak or whatever), your transactions queue up and are delayed. If it gets really busy, it completely falls over and takes a while to recover. We had to switch SagePay off entirely for a few hours due to this (we've got backups in place).
Anyway, so it turns out our transactions were all being processed on one TID (short for Terminal ID). This is akin to a physical card terminal in a shop which can only process one transaction at a time. We asked SagePay support for more and we now have 10-15.
I hope this helps you. I'd recommend implementing a fallback payment supplier in case SagePay fails. A year or two ago they had a 3 day(!!!!) outage which was fairly devastating for us. We now take this seriously!

We've recently had an increase in what I believe may be the same thing. Basically, the customer would be sent off to the 3DS page, then returned to the callback page, but for reasons I can't explain the PHP session wouldn't re-establish. The POST response to the callback page was enough to identify the order and complete it (as we'd taken payment), but the customer would then be prompted to log in again - they'd see their basket as still having products in it and place a second order (which would go through successfully).
After many hours debugging and making changes I managed to replicate this on a development server whilst using mobile emulation...
Long story short, what I have done is to add:
session_regenerate_id();
when I perform the initial vsp register cURL request (this is the request where you get given the ACSURL). So far, this seems to be enough to ensure that the session gets re-established when the customer returns to the callback page.
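For context, here is a minimal sketch of where that call sits relative to the registration request. The endpoint URL and field names shown are assumptions to illustrate placement - check them against your own integration:

<?php
// Issue a fresh session ID *before* the vsp register POST, so the cookie the
// browser sends back to the callback page matches the current session.
session_start();
session_regenerate_id();

// ...then perform the existing vsp register cURL request as normal.
// (URL and fields are illustrative - use whatever your integration already has.)
$fields = [
    'VPSProtocol' => '3.00',
    'TxType'      => 'PAYMENT',
    'Vendor'      => 'yourvendorname',
    // ...VendorTxCode, amount, card details, etc.
];
$ch = curl_init('https://live.sagepay.com/gateway/service/vspdirect-register.vsp');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
// If the response includes an ACSURL, load it in the iFrame as before.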

Related

Why does Magento send order emails via cron job? Any issue if I switch to instant send?

Can anyone please explain why Magento sends the order email in a cron job?
I set up a cron job to send emails every 5 minutes.
Is there any issue if I switch to sending the customer an instant email confirmation?
My customer asks why he can't receive the order confirmation instantly.
Can anyone please explain why Magento sends the order email in a cron job?
Well their changelogs don't really explain why, but generally the reasons for moving processes to a cron job are:
It goes from synchronous to asynchronous
The processing time doesn't matter as much
The web server doesn't need to handle it (timeouts may not be relevant, memory limits may be larger, interference with the web server pool may be lessened)
I set up a cron job to send emails every 5 minutes. Is there any issue if I switch to sending the customer an instant email confirmation?
Not really, no - other than that it would be a regression in Magento capability. If you take the checkout process as an example, when you place your order a variety of things happen: save the quote, convert the quote to an order, prepare payment, capture payment, create the invoice, save everything, etc. In this case they've taken the time it takes to generate and send the order email out of this process to improve checkout speed.
Yes - you can put it back to being sent instantly if you'd like, but my suggestion would be to run your cron every minute instead of every five minutes, as sketched below.
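If you do bump the frequency, the change is only to the crontab, not to Magento itself. A minimal sketch, assuming a Magento 1-style install at /var/www/magento (the path is an assumption; Magento 2 uses bin/magento cron:run instead):

# Run the Magento cron every minute instead of every five
* * * * * /bin/sh /var/www/magento/cron.sh
# Magento 2 equivalent (path assumed):
# * * * * * php /var/www/magento/bin/magento cron:run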
Generally you should employ a rule of "try not to touch core Magento functionality unless you have to". Hope this helps!

How to handle UI actions on the front end responsively while waiting for back-end processing?

Take a StackOverflow Q&A thread as an example - when you vote up, vote down, or favorite a question, you see the UI respond to that action quickly, with changes in the number of up-votes on the side.
How can we achieve that effect? If you send every such action to the back end for processing and use the returned response to update the UI, you will see slow updates and feel the glitches. But if you put some of the logic on the front end, you also need to take care of fraud/abuse etc. before reflecting the action in the UI - i.e. before changing the number of up-votes, don't you need to make sure that it's a valid click by a valid user first?
You make sure that a valid user is using the app before a user clicks on anything. This is done through authentication, and it must include various protection mechanisms against malicious users.
When a user clicks, a call is made to a server. In a properly architected app this call is lightweight, and the server responds very quickly. I don't know why you believe that "you will see a slow update and feel the glitches". Adding an upvote to the database should take a few hundred milliseconds at most (including the roundtrip from the client), especially if the commit is asynchronous or a memcache is used.
If a database update results in a need to do some complex operations, typically these operations are not done right away. For example, a cron job may run periodically to compute new rankings, etc., precisely because you do not want every user to wait. Alternatively, a task is created and put in a task queue to be executed when resources are available - again to make sure that a user does not wait.
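To make that concrete, here is a minimal sketch of such a lightweight vote endpoint (PHP; the table, column, and session key names are assumptions):

<?php
// upvote.php - a deliberately lightweight vote endpoint.
// Assumes an authenticated session and a `votes` table; names are illustrative.
session_start();
if (empty($_SESSION['user_id'])) {
    http_response_code(401); // authentication happened before any click
    exit;
}
$postId = (int) ($_POST['post_id'] ?? 0);
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
// One indexed insert; a unique key on (user_id, post_id) rejects duplicate votes.
$stmt = $pdo->prepare('INSERT IGNORE INTO votes (user_id, post_id) VALUES (?, ?)');
$stmt->execute([$_SESSION['user_id'], $postId]);
// Respond immediately; heavy work (rankings, badges) is deferred to a cron/queue.
header('Content-Type: application/json');
echo json_encode(['ok' => true]);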
In some apps a UI is updated immediately after the call to the server is made, before any response from a server arrives. You can do it when the consequences of a failed call are negligible. For example, if an upvote fails to be saved in the database, it's not a disaster, especially if it happens once in a million tries. Again, in a properly architected app calls fail extremely rarely.
This is a decision that an app developer needs to make. I would not update a UI before a server response if such an update may lead a user to believe that some other action is now possible. For example, if a user uploads a new photo, I would not show icons to edit or share this photo until I know that the photo is safely saved.

User closes the browser without logging out

I am developing a social network in ASP.NET MVC 3. Every user must have the ability to see connected people.
What is the best way to do this?
I added a flag in the Contact table in my database, and I set it to true when the user logs in and to false when he logs out.
But the problem with this solution is when the user closes the browser without logging out, he will still remain connected.
The only way to truly know that a user is currently connected is to maintain some sort of connection between the user and the server. Two options immediately come to mind:
Use javascript to periodically call your server using ajax. You would have a special endpoint on your server that would be used to update a "last connected time" status, and you would have a second endpoint for users to poll to see who is online.
Use a websocket to maintain a persistent connection with your server
Option 1 should be fairly easy to implement. The main thing to keep in mind is that this will increase the number of requests coming into your server, and you will have to plan accordingly in order to handle the traffic this could generate. You will have some control over the load on your server by configuring how often the JavaScript timer calls back to your server.
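As a concrete sketch of option 1's server side (the question uses ASP.NET MVC 3, but the pattern is identical in any stack - shown here in PHP, with table and column names as assumptions):

<?php
// heartbeat.php - hit by a client-side timer (e.g. every 30 seconds).
// Assumes an authenticated session and a users table with a last_seen column.
session_start();
if (empty($_SESSION['user_id'])) {
    http_response_code(401);
    exit;
}
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->prepare('UPDATE users SET last_seen = NOW() WHERE id = ?')
    ->execute([$_SESSION['user_id']]);

// The second endpoint: anyone seen in the last 60 seconds counts as connected.
$stmt = $pdo->query(
    'SELECT id, name FROM users WHERE last_seen > NOW() - INTERVAL 60 SECOND'
);
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));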
Option 2 could be a little more involved if you did this without library support. Of course, there are libraries out there, such as SignalR, that make this really easy to do. This also has an impact on the performance of your site, since each user will be maintaining a persistent connection. The advantage with this approach is that it reduces the need for the polling option 1 requires. If you use this approach, it would also be very easy to push a message to user A that user B has gone offline.
I guess I should also mention a really easy third option. If you feel like your site is pretty interactive, you could just track the last time each user made a request to your site. This, of course, may not give you enough accuracy to determine whether a user is "connected".

Continuous AJAX requests - effect on web app?

I have a web app idea which requires AJAX requests to function. It's crucial that the AJAX requests keep running as long as the user is on the site.
My question is: if the user leaves the site open (with my AJAX requests running) for, say, 4-5 hours, will these AJAX requests still run? My concerns are screen dimming, screensavers, and computer sleep states. Will any of these affect the performance of my web app?
Unfortunately, it's very client dependent. For example, mobile devices may stop processing JS when going into a sleep state (e.g., to save battery life). However, on an image rotator application I wrote some time ago, which sent regular requests to the server to retrieve images (there was good reason not to cache them, I swear), accessed primarily by non-mobile clients, I observed that requests continued for hours, even days. While I can't know if the client machine ever entered a sleep state, I'm pretty confident it did.
Long story short - I think you can't be sure, but for some target audiences, you can be reasonably sure. I would recommend investigating your audience.
If your site requires users to log in, simply set a session timeout of, let's say, 15 minutes. That means that after 15 minutes of idle time the session will be destroyed and the AJAX requests automatically cut off.
If you do not have a login this becomes difficult, but it can still be achieved via IP tracking or similar mechanisms; these will never be as foolproof as the first option.
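A minimal sketch of that idle-timeout check in PHP (the 15-minute figure is just the example value from above; the session key is an assumption):

<?php
// Destroy idle sessions after 15 minutes, checked on every request.
// (Relying on session.gc_maxlifetime alone is chancy - PHP's garbage
// collector only runs probabilistically - so track activity explicitly.)
session_start();
$timeout = 15 * 60; // seconds
if (isset($_SESSION['last_activity'])
    && time() - $_SESSION['last_activity'] > $timeout) {
    session_unset();
    session_destroy();
    http_response_code(401); // the AJAX caller sees this and stops polling
    exit;
}
$_SESSION['last_activity'] = time();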
I must agree with Sean in that it's very client-side orientated. If the client stays active, be it through human interaction or not, then the AJAX should keep going.

How can I speed up the front end of a web application?

We have an application that pulls in about 50 records from a database with about 13 data points for each record. Each record needs to be reviewed for accuracy, sometimes edited, and then 'approved' or 'rejected'.
It seems as if the process of running an approval or rejection takes some time before another approval or rejection can occur (yes, this back end could be optimized).
I'm looking for techniques or suggestions to make the front end of the application much quicker while the back end continues to process the previous approval or rejection. This would help our team of record reviewers get through each record more quickly.
Would a messaging service like RabbitMQ apply here?
All help, links, and feedback are appreciated.
-=Vin
Tabbed browsing.
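On the RabbitMQ idea from the question: yes, a message queue fits this shape of problem - the front end publishes the approval and returns immediately, while a worker processes it in the background. A minimal sketch using php-amqplib (queue name and payload are assumptions):

<?php
// publish_approval.php - enqueue an approval/rejection and return immediately.
// Requires: composer require php-amqplib/php-amqplib
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
// Durable queue so pending reviews survive a broker restart.
$channel->queue_declare('record_reviews', false, true, false, false);

$payload = json_encode(['record_id' => 42, 'action' => 'approve']);
$channel->basic_publish(
    new AMQPMessage($payload, ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]),
    '',               // default exchange
    'record_reviews'  // routing key = queue name
);

$channel->close();
$connection->close();
// A separate worker consumes 'record_reviews' and does the slow back-end work,
// so the reviewer's UI is free to move on to the next record straight away.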
