We've been using the EWS SDK for a few years now, and after many mistakes, we've decided it's time to refactor our code base to reflect what we've learned. One issue we see every once in a while is that all EWS calls fail because they're pointing to a CAS that is malfunctioning. The solution seems as easy as firing off a background thread every n seconds, where n represents how often we'll autodiscover.
I've scoured the web and can't seem to find any information relating to the matter.
How often should I autodiscover?
From the "How To: Refresh configuration information by using Autodiscover" topic on MSDN:
We recommend that you refresh your user settings by sending a new Autodiscover request after 24 hours have passed since your last Autodiscover request. This time can be adjusted to meet the requirements of your application.
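Since the recommendation above amounts to "refresh on a timer", a minimal sketch of that background refresh might look like the following. It assumes the open-source ews-java-api port; ExchangeService.autodiscoverUrl and the redirect-validation callback are that library's names (the .NET Managed API has an equivalent AutodiscoverUrl method), so adapt to whichever SDK you use.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import microsoft.exchange.webservices.data.core.ExchangeService;

// Sketch: re-run Autodiscover every 24 hours so a malfunctioning CAS
// is replaced at the next refresh instead of failing calls all day.
public class AutodiscoverRefresher {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final ExchangeService service;
    private final String mailboxSmtpAddress;

    public AutodiscoverRefresher(ExchangeService service, String mailboxSmtpAddress) {
        this.service = service;
        this.mailboxSmtpAddress = mailboxSmtpAddress;
    }

    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            try {
                // Re-resolves the EWS endpoint; accept only HTTPS redirects.
                service.autodiscoverUrl(mailboxSmtpAddress,
                        redirectUrl -> redirectUrl.toLowerCase().startsWith("https://"));
            } catch (Exception e) {
                // Keep the previous URL and try again at the next interval.
            }
        }, 24, 24, TimeUnit.HOURS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}

Wiring the 24-hour interval to a config value keeps it adjustable, as the MSDN note suggests.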
I am experiencing a very frustrating issue with SagePay Direct when a card payment initiates a 3DSecure challenge.
Customers are reporting either a hanging iFrame or a "payment declined" response. What's worse is that in some instances Sage takes the payment, but the user is unaware of this and tries to buy again.
Looking at my logs, my code is working as expected and is loading the iFrame with the returned ACSURL as the src.
After searching the web, it appears to be a known issue with a timeout occurring on the secure merchant issuer that I hand off to.
The trouble I have is that I have no control over the response (or lack of one) from the issuer, as it's in an iFrame.
Sage have not been very helpful with this problem, only going as far as to say "we have heard of customers who experience this issue".
Does anyone have any experience of this problem and know how to resolve it? I guess the bottom line is to turn off the 3DSecure checks, but this seems counterproductive given the new EU ruling coming into force at some point.
Worth pointing out that this only affects a small percentage of my customer base, and a lot of transactions are processing successfully (even with the password challenge), but the customers who experience problems are rightly shouting loudly.
Anyone have any ideas?
Thanks
We process 1000-2000 transactions daily via SagePay, using the Direct protocol. They are very cheap, but their service is, in all honesty, fairly terrible. Every day we have a single-digit number of transactions that fail in this way. We've also got another provider and don't experience the same issues there.
We have a routine job that asks the SagePay Reporting API about failed transactions to see what their current status is (did SagePay get the transaction? Was it successfully authorised? etc.). This API is utterly, utterly terrible and was a nightmare to integrate with, but it's useful: at least we can refund customers without having to log into the SagePay dashboard.
One thing we discovered (and that isn't documented anywhere on the SagePay site, as far as I can tell) is that by default you're limited to one transaction at a time, or around 20-30 transactions per minute. If you go over this (a temporary peak or whatever), your transactions queue up and are delayed. If it gets really busy, it falls over completely and takes a while to recover. We had to switch SagePay off entirely for a few hours because of this (we've got backups in place).
Anyway, it turns out our transactions were all being processed on one TID (short for Terminal ID). This is akin to a physical card terminal in a shop, which can only process one transaction at a time. We asked SagePay support for more, and we now have 10-15.
I hope this helps you. I'd recommend implementing a fallback payment supplier in case SagePay fails. A year or two ago they had a 3-day(!!!!) outage, which was fairly devastating for us. We now take this seriously!
We've recently had an increase in what I believe may be the same thing. Basically, the customer would be sent off to the 3DS page, then returned to the callback page, but for reasons I can't explain the PHP session wouldn't re-establish. The POST response to the callback page was enough to identify the order and complete it (as we'd taken payment), but the customer would then be prompted to log in again; they'd see their basket as still having products in it and place a second order (which would go through successfully).
After many hours of debugging and making changes, I managed to replicate this on a development server while using mobile emulation...
Long story short, what I have done is to add:
session_regenerate_id();
when I perform the initial vsp register cURL request (the cURL call where you get given the ACSURL). So far, this seems to be enough to ensure that the session gets re-established when the customer returns to the callback page.
New Relic newbie here. I have an API service hosted on Heroku and being monitored with New Relic.
While studying how to use New Relic, I found out my 2 workers were being underutilised, with very low RPM and low transaction time, so I decided to cut down to one worker, which saves me $36 a month. =]
Shortly after that, I received tonnes of Logentries emails reporting request timeouts on one of my web dynos. Looking into New Relic, I found that one of my actions was being called a suspiciously high number of times over a 2-3 minute period.
The action is V1::CarsController#Index, which basically shows a collection of cars.
While I'm not sure whether deleting the worker dyno caused memcached to do something, I also suspect that maybe someone is trying to scrape the data off the database. I'm not too sure how to investigate the issue further. I wonder if I can track down the request IPs and see whether they are the same? Or how else can I investigate?
If further information is needed, I'm happy to provide it in edits!
Thanks
I want to increase the session timeout, which is currently set to 20 minutes. How can I increase or decrease it, for example to one hour, i.e. 60 minutes?
There are a few ways to accomplish what you need, as we ran into the same issue when doing our NetSuite integration.
You can make a dummy search request every couple of minutes (see the sketch below). We searched for a bogus transaction that we knew would never be created, limited to a single date in the distant past. That way the search returns very quickly with zero results.
Implement SingleSignOn. This is the preferred method. Once you initiate single sign-on, if the session has previously timed out you can quickly create a new session using tokens and don't need to ask the user for their username/password again.
We had a service that needed to be consumed at two different points in the application that didn't know about each other. The way we got around this while still using one service was to save the cookies from the service in a shared location. When one of the applications needed the service, it would recreate the service from the cookies. If the service had timed out, we would recreate the service and update the cookies. This method became outdated once we implemented SingleSignOn, as we could then just create the service from the tokens as needed, with the tokens stored in a shared location.
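For the dummy-search option, here is a hedged sketch of the keep-alive loop. NetSuiteClient and searchTransactions are hypothetical placeholders for whatever wrapper you generate from the SuiteTalk WSDL, since the real stub names depend on your WSDL version.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: reset NetSuite's 20-minute idle timer with a cheap search.
public class NetSuiteKeepAlive {

    // Hypothetical wrapper around your generated SuiteTalk stubs.
    public interface NetSuiteClient {
        // Search constrained to a bogus tran ID on one far-past date,
        // so it returns quickly with zero results.
        int searchTransactions(String tranId, String onDate) throws Exception;
    }

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(NetSuiteClient client) {
        // Fire every 15 minutes, well inside the 20-minute idle window.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                client.searchTransactions("BOGUS-NEVER-EXISTS", "1/1/1980");
            } catch (Exception e) {
                // Session already expired: re-authenticate here.
            }
        }, 15, 15, TimeUnit.MINUTES);
    }
}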
Hope this helped.
There is no standard way that I know of in NetSuite, though you could use a browser plugin to refresh the page or click the home button every 19 minutes. That would work if, for example, the person is AFK.
There is no way to change the web service request timeout period (for sync operations it lasts approximately 15 minutes, then the operation gets terminated on the server side). The general practice for long-running operations that take more than 15 minutes is to use async requests.
I'm using WNetEnumResource to enumerate all network share connections and WNetCancelConnection2 to close them. Then I use WNetUseConnection to connect to a share using discrete credentials. This process happens multiple times throughout the day.
The problem that I'm running into is that after the first flow through the process I'm getting:
System Error 1219 has occurred.
Multiple connections to a server or shared resource by the same user,
using more than one user name, are not allowed. Disconnect all
previous connections to the server or shared resource and try again.
This happens even when the enumeration says there are no current connections.
My question is: why? Why am I getting this error? Is the authenticated connection to the server still cached? Can I enumerate these authentication tokens? Kerberos? LSA?
I haven't been able to find the smallest foothold of information to progress forward on this project. Any help is appreciated!
I'm trying to remember the solution we used when we came across this problem for a network backup program a few years ago.
I'm certain the solution involves using either WNetAddConnection2 or WNetAddConnection3 instead of WNetUseConnection. I think that passing the CONNECT_CRED_RESET flag should take care of this, but I'm not absolutely certain.
Note that CONNECT_CRED_RESET is only documented for WNetAddConnection2 and not WNetAddConnection3, though MSDN says the only difference between the two is the hWnd parameter for the owner of dialog windows. I'd try WNetAddConnection2 first and, only once that works, experiment with WNetAddConnection3. You may even get it to work with WNetUseConnection!
Make sure to note the dependencies CONNECT_CRED_RESET has on other flags.
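To make the suggestion concrete, here is an untested sketch using JNA from Java. JNA does not ship an Mpr binding, so the interface and NETRESOURCE mapping below are hand-rolled; the CONNECT_CRED_RESET value comes from winnetwk.h, but verify everything against your SDK headers before relying on it.

import com.sun.jna.Native;
import com.sun.jna.Structure;
import com.sun.jna.win32.StdCallLibrary;
import com.sun.jna.win32.W32APIOptions;
import com.sun.jna.win32.W32APITypeMapper;

// Untested sketch: reconnect with CONNECT_CRED_RESET so the cached
// credential entry for the share is discarded before authenticating.
public class CredResetSketch {

    @Structure.FieldOrder({"dwScope", "dwType", "dwDisplayType", "dwUsage",
            "lpLocalName", "lpRemoteName", "lpComment", "lpProvider"})
    public static class NETRESOURCE extends Structure {
        public int dwScope, dwType, dwDisplayType, dwUsage;
        public String lpLocalName, lpRemoteName, lpComment, lpProvider;

        public NETRESOURCE() {
            super(W32APITypeMapper.UNICODE); // wide-string fields
        }
    }

    public interface Mpr extends StdCallLibrary {
        Mpr INSTANCE = Native.load("Mpr", Mpr.class, W32APIOptions.DEFAULT_OPTIONS);

        int RESOURCETYPE_DISK = 0x00000001;
        int CONNECT_CRED_RESET = 0x00002000; // from winnetwk.h

        int WNetAddConnection2(NETRESOURCE netResource, String password,
                               String userName, int flags);
    }

    public static void main(String[] args) {
        NETRESOURCE nr = new NETRESOURCE();
        nr.dwType = Mpr.RESOURCETYPE_DISK;
        nr.lpRemoteName = "\\\\server\\share"; // hypothetical share

        // Returns 0 (NO_ERROR) on success; 1219 is the credential-conflict
        // error from the question.
        int rc = Mpr.INSTANCE.WNetAddConnection2(
                nr, "password", "DOMAIN\\user", Mpr.CONNECT_CRED_RESET);
        System.out.println("WNetAddConnection2 returned " + rc);
    }
}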
I am using a modified version of the TaskCloud example to try and read/write my own data.
While testing a deployed version, I've noticed that the round-trip response time is slow.
From my Android device, I have a 100ms ping response to appspot.com.
I have changed the AppEngine application to do nothing (the Google Dashboard shows insignificant Average Latency).
The problem is that the time it takes for HttpClient's client.execute(post) is about 3 seconds.
(This is the time when an instance is already loaded)
Any suggestions would be greatly appreciated.
EDIT: I've watched the Google I/O video showing the CloudTasks Android-AppEngine app, and you can see that refreshing the list (a single call to AppEngine) takes about 3 seconds as well. The presenter says something about performance which I didn't fully get (debuggers running at both ends?).
The video: http://www.youtube.com/watch?v=M7SxNNC429U&feature=related
Time location: 0:46:45
I'll keep investigating...
Thanks for your help so far.
EDIT 2: Back to this issue...
I've used the Shark packet sniffer to find out what is happening. Some of the time is spent negotiating an SSL connection for each server call. Using http (and ACSID) is faster than https (and SACSID).
A new DefaultHttpClient() and a new HttpPost() are created for each server call.
EDIT 3:
Looking at the sniffer logs again, there is an almost 2-second delay before the actual POST.
I have also found that the issue exists on Android 2.2 (all versions) but is resolved on Android 2.3.
EDIT 4: It's been resolved. Please see my answer below.
It's difficult to answer your question since no details about your app are provided. Anyway, you can try the Appstats tool provided by Google to analyze the bottleneck.
After using the Shark sniffer, I was able to understand the exact issue, and I found the answer in this question.
I used Liudvikas Bukys's comment and solved the problem with the suggested line:
post.getParams().setBooleanParameter(CoreProtocolPNames.USE_EXPECT_CONTINUE, false);
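For context, this is roughly where that line sits in a request (Apache HttpClient 4.x, matching the DefaultHttpClient/HttpPost usage mentioned above; the URL is a placeholder):

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.params.CoreProtocolPNames;

public class PostWithoutExpectContinue {
    public static void main(String[] args) throws Exception {
        DefaultHttpClient client = new DefaultHttpClient();
        HttpPost post = new HttpPost("https://your-app.appspot.com/endpoint");

        // Disable "Expect: 100-continue": without this, the client sends
        // the headers and then waits for the server's interim response
        // before sending the body, which is what showed up as the
        // ~2-second pause before the POST in the packet capture.
        post.getParams().setBooleanParameter(
                CoreProtocolPNames.USE_EXPECT_CONTINUE, false);

        HttpResponse response = client.execute(post);
        System.out.println(response.getStatusLine());
    }
}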
Often the first call to your GAE app will take longer than subsequent calls. You should familiarize yourself with loading and warm-up requests and how GAE handles instances of your app: http://code.google.com/intl/de-DE/appengine/docs/adminconsole/instances.html
Some things you could also try:
make your app handle more than one request per instance (make sure your app is threadsafe!) - see the appengine-web.xml sketch below: http://code.google.com/intl/de-DE/appengine/docs/java/config/appconfig.html#Using_Concurrent_Requests
enable the Always On feature in the app admin console (this will cost you)
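For the concurrent-requests point, this is a minimal appengine-web.xml sketch (Java runtime; the application id and version are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <application>your-app-id</application>
  <version>1</version>
  <!-- Allow one instance to serve several requests at once;
       your code must be thread-safe for this to be enabled. -->
  <threadsafe>true</threadsafe>
</appengine-web-app>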