How to increase the timeout period of a web service request in NetSuite - session

I want to increase the 'session timeout', which is currently set to 20 minutes. How can I change it to, say, one hour (60 minutes)?

There are a few ways to accomplish what you need, as we ran into the same issue when doing our NetSuite integration.
You can make a dummy search request every couple of minutes. We searched for a bogus transaction that we knew would never be created, limited to a single date in the distant past, so the search returned very quickly with zero results.
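A minimal sketch of that keep-alive idea, with `NetSuiteClient` and `search()` as hypothetical stand-ins for whatever SuiteTalk wrapper you actually use (the actual call shape depends on your binding):

```typescript
// Keep-alive sketch: issue a cheap dummy search every few minutes so the
// NetSuite session never sits idle long enough to hit the 20-minute timeout.
// `NetSuiteClient` and `search()` are hypothetical stand-ins for your own
// client; the important part is that the search is valid, fast, and
// guaranteed to return zero rows.

interface NetSuiteClient {
  search(criteria: {
    type: string;
    dateOnOrAfter: string;
    dateOnOrBefore: string;
  }): Promise<unknown[]>;
}

const KEEP_ALIVE_INTERVAL_MS = 10 * 60 * 1000; // comfortably under 20 minutes

function startKeepAlive(client: NetSuiteClient): ReturnType<typeof setInterval> {
  return setInterval(async () => {
    try {
      // A bogus transaction search restricted to a single date far in the
      // past: it returns immediately with no results but still counts as
      // session activity on the NetSuite side.
      await client.search({
        type: 'transaction',
        dateOnOrAfter: '1980-01-01',
        dateOnOrBefore: '1980-01-01',
      });
    } catch (err) {
      // If the session already expired, this is the place to re-authenticate.
      console.warn('Keep-alive search failed; session may have expired', err);
    }
  }, KEEP_ALIVE_INTERVAL_MS);
}
```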
Implement SingleSignOn. This is the preferred method. Once single sign-on is set up, if the session has timed out you can quickly create a new session using tokens without asking the user for their username/password again.
We had a service that needed to be consumed at two different points in the application that did not know about each other. The way we got around this while still using one service was to save the cookies from the service in a shared location. When the service was needed by one of those points, it would recreate the service from the cookies. If the service had timed out, we would recreate it and update the cookies. This method became outdated once we implemented SingleSignOn, as then we could just create the service from the tokens as needed, with the tokens stored in a shared location.
Hope this helped.

There is no standard way that I know of in NetSuite, though you could use a browser plugin to refresh the page or click the home button every 19 minutes. That would work if, for example, the person is AFK.

There is no way to change the web service request timeout period (for sync operations it lasts approximately 15 minutes, then the operation gets terminated on the server side). The general practice for long-running operations that take more than 15 minutes is to use async requests.
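The async approach usually takes a submit-then-poll shape. A rough sketch, with `submitAsyncJob`, `getJobStatus`, and `getJobResult` as hypothetical placeholders for whatever async operations your web-service client actually exposes:

```typescript
// Submit-then-poll sketch for long-running operations: kick the work off,
// get a job id back, and poll for the result instead of holding one
// synchronous request open past the server-side timeout. The client
// interface below is a hypothetical placeholder, not a real SDK.

type JobStatus = 'pending' | 'processing' | 'complete' | 'failed';

interface AsyncClient {
  submitAsyncJob(payload: unknown): Promise<string>; // returns a job id
  getJobStatus(jobId: string): Promise<JobStatus>;
  getJobResult(jobId: string): Promise<unknown>;
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function runLongOperation(client: AsyncClient, payload: unknown): Promise<unknown> {
  const jobId = await client.submitAsyncJob(payload);

  // Each poll is a short request, so nothing ever runs up against the
  // ~15-minute synchronous limit.
  for (;;) {
    const status = await client.getJobStatus(jobId);
    if (status === 'complete') return client.getJobResult(jobId);
    if (status === 'failed') throw new Error(`Async job ${jobId} failed`);
    await sleep(30_000);
  }
}
```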

Related

Laravel - Efficiently consuming large external API into database

I'm attempting to consume the Paypal API transaction endpoint.
I want to grab ALL transactions for a given account. This could potentially be tens of millions of transactions. Each transaction needs to be stored in the database for processing by a queued job. I've been trying to figure out the best way to pull this many records with Laravel. PayPal has a maximum of 20 items per page per request.
I initially started with the idea of creating a job when a user gives me their API credentials: it fetches the first 20 items, processes them, then dispatches another job from within the first one, passing along the next starting index. This would loop until it errored out. That doesn't seem to be working well, though, as it causes a gateway timeout when saving the API credentials, and the request to the API eventually times out before getting all the transactions. I should also mention that the total number of transactions is unknown, so job chaining doesn't seem to be the answer, as there is no way to know how many jobs to dispatch...
Thoughts? Is getting API data best suited for a job?
Yes, a job is the way to go. I'm not familiar with the PayPal API, but it seems requests are rate limited (see PayPal's rate limiting documentation), so you might want to delay your API requests a bit. You can also make a class to monitor your API request consumption by tracking the latest requests you made; in the job you can then determine when to fire the next request and record it in the database, roughly as sketched below.
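A rough sketch of that "track your own consumption" idea (the window and request limit are made-up numbers, and in a queued-job setup the timestamps would live in a database table rather than in memory):

```typescript
// Minimal request-budget tracker: remember when recent requests were made and
// tell the caller how long to wait before the next one. The limits below are
// illustrative, not PayPal's actual numbers.

class RequestBudget {
  private timestamps: number[] = [];

  constructor(
    private readonly maxRequests: number, // allowed requests per window
    private readonly windowMs: number,    // e.g. 60_000 for one minute
  ) {}

  /** Milliseconds to wait before the next request is allowed (0 = go now). */
  delayUntilNextRequest(now: number = Date.now()): number {
    // Drop timestamps that have fallen out of the rate-limit window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length < this.maxRequests) return 0;
    return this.timestamps[0] + this.windowMs - now;
  }

  recordRequest(now: number = Date.now()): void {
    this.timestamps.push(now);
  }
}

// Usage inside a job: wait out the budget, then record the call you make.
const budget = new RequestBudget(30, 60_000);
const waitMs = budget.delayUntilNextRequest();
// ...sleep for waitMs, make the API call, then:
budget.recordRequest();
```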
My humble advice:
Please don't pull all the data. Your database will get bloated quickly, and you'll need to scale every time you add a new account, which is not an easy task.
You could dispatch the same job again at the end of the first job, having it query your current database to find the starting index of the transactions for that run.
That way, even if the job errors out, you can dispatch it again and it will resume from where it previously ended.
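In outline, the pattern looks like this (a language-agnostic TypeScript sketch rather than Laravel's actual PHP job API; the store, API, and queue interfaces are placeholders):

```typescript
// Self-dispatching, resumable fetch job. In Laravel this would be a queued
// Job class that re-dispatches itself; the key idea is that the starting
// index is derived from what is already stored, so a re-dispatched job
// resumes where the last one stopped.

const PAGE_SIZE = 20; // the API's per-page maximum

interface TransactionStore {
  count(accountId: string): Promise<number>;            // how many are saved so far
  saveMany(accountId: string, rows: unknown[]): Promise<void>;
}

interface TransactionApi {
  fetchPage(accountId: string, startIndex: number, pageSize: number): Promise<unknown[]>;
}

interface Queue {
  dispatch(job: () => Promise<void>): void;              // stand-in for a real queue
}

async function fetchTransactionsJob(
  accountId: string,
  api: TransactionApi,
  store: TransactionStore,
  queue: Queue,
): Promise<void> {
  // Resume point comes from the database, not from job-to-job state,
  // so a failed job can simply be dispatched again.
  const startIndex = await store.count(accountId);
  const page = await api.fetchPage(accountId, startIndex, PAGE_SIZE);

  if (page.length === 0) return; // nothing left: the chain ends here

  await store.saveMany(accountId, page);

  // Dispatch the next page as a new job instead of looping inside this one,
  // so no single job runs long enough to time out.
  queue.dispatch(() => fetchTransactionsJob(accountId, api, store, queue));
}
```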
Maybe you will need to link your app to another data engine like AWS. Anyway, I think the best approach is to create an API, pull only the most important data, properly indexed, and keep the big data at another endpoint where you can reach it if you need to.

WCF - WebHttpBinding - RESTful - Performance Issue

first time poster so go easy on me.
I am currently trying to address a performance issue when hitting my web service after a one-minute period of inactivity. Literally, after one minute of THAT user not hitting the web service, the next call takes 15 seconds before actually reaching the service operation. If you keep making random service operation calls (not the same operation, just so you don't think the call is being cached), the service returns immediately (in less than a second).
Here are some "timings" I decided to take so you can see how I came to the one minute of inactivity:
2:04PM
2:16PM --15 seconds
2:21PM --15 seconds
2:24PM --15 seconds
2:25PM --15 seconds
Again, if you hit the web service continuously without a one minute period of inactivity then ALL methods will return in less than a second.
Here are some details regarding my web service:
WCF, WebHttpBinding, RESTful, using HTTPs.
Basic Authentication + Custom Authentication using IDispatchMessageInspector. Authentication happens with EVERY call (except to the Initializer.aspx page).
Custom Initialization.aspx page has been created which is called every night after the Application Pool is recycled. This page caches a bunch of global data used by all users along with starting that compile.
Application Pool ONLY recycles every night at 2AM. Worker threads are never killed off because timeout is disabled.
I heard about ReliableSession but as the setting implies that sounds like it would only work for PerSession, not PerCall.
Is there any way to resolve this or am I stuck to resorting to "pinging" the server every 45 seconds using a dummy service operation?
Found out the issue. We have multiple domain controllers. When the user was getting authenticated, the lookup would start at the forest level and work its way down to the domain controller the server resided on. The firewalls that were in place were blocking all domain controllers except the one the server resided on.
So basically, it would fail to communicate with each of the other domain controllers until it finally reached the only one it could.
You can fix this a number of ways, but we just created firewall rules to allow the web server to communicate with the domain controller the users needed to be authenticated against.

What exactly happens when I change number of Azure role instances?

I observe the following weird behavior. I have an Azure web role which is deployed to the live Azure cloud. I click "Configure" in the Azure Management Portal and change the number of instances - the portal shows some "activity". I then open the browser, navigate to the URL assigned to my deployment, and start refreshing the page roughly once every two seconds. The page reloads fine many times, and then for some time it stops reloading - the requests are rejected - and after something like half a minute the requests are handled normally again.
What is happening? Is the web server temporarily stopped? How do I change number of instances so that HTTP requests to the role are handled at all times?
When you change the configuration, your current instances might be restarted. This might be the reason for what you saw, where your website didn't respond for about 30 seconds.
Please have a look at http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx and check whether it's because of the role restarting.
What you are doing is manual. Have you looked at the SDK for autoscaling Azure?
http://channel9.msdn.com/posts/Autoscaling-Windows-Azure-applications
Check out the demo at the 18-minute mark. It doesn't answer your question directly, but it's a much more configurable/dynamic way of scaling Azure.
Azure updates your roles one update domain at a time, so in theory you should see no downtime when updating the config (provided you have at least two instances). However, if you refresh the browser every couple of seconds, it's possible that your requests go always to the same instance due to keep-alive.
It would be interesting to know what the behavior is if you disable keep-alives for your webrole. Note that this will have a performance impact, so you'll probably want to re-enable keep-alives after the exercise.

How to handle session timeout when a device was suspended in an Ajax app?

It's easy enough to build an Ajax app which checks all responses to make sure they aren't indicative of a session expiry, and if the session has expired automatically log the user out with a friendly "Your session timed out due to inactivity" error message.
But a common occurrence in Ajax applications is that:
User is logged in, happily using app, retrieving data over http with an established http session
User closes laptop
Host times out http session after N minutes
User reopens laptop later on. Ajax app appears alive and well. They click around which is just fine since the app lets them see things they've already loaded.
Then, they click on something that requires data to be loaded, and the data comes back indicating session expiration
The Ajax app kicks them out and says "Your session timed out due to inactivity".
This is really weird to the user because they were not inactive from their point of view.
Now, one possibility is to have JavaScript code in the client which uses setTimeout() to periodically (say, every 15 minutes if the session timeout is 30 minutes) trigger a request to the host to ask how much time is left in the session. This periodic check is great because it lets you show them a warning when they are close to timing out, e.g. "Your session will time out in 1 minute unless you do something".
But that doesn't help when the user's machine is suspended. That's because, according to all my testing in many different browsers, setTimeout time applies to elapsed running time instead of elapsed real time. That is, if you call setTimeout("alert('hi')",2*60*1000); and then suspend your machine 10 seconds later, wait 5 minutes, and reactivate your machine, you'll have to wait 110 more seconds until you get that alert (I have not been able to find definitive documentation of this behavior, but it is a demonstrable fact). So that means your periodic check may not happen until quite a while after the user's machine resumes.
My solution is, instead of basing my periodic check on a long setTimeout, to use a short setTimeout (say, every 5 seconds) and check the elapsed time since the last check using new Date().getTime() to get the actual clock time. This way I am always checking against the real clock, and instead of the client waiting anywhere from zero to fifteen minutes before realizing it has timed out after a suspension, at most it will wait about five seconds (plus HTTP response time) to find out.
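Roughly, that short-interval check against the real clock looks like this (the 15-minute ask interval, the suspend-gap threshold, and the `/session/remaining` endpoint are placeholders for your own values):

```typescript
// The periodic check runs on a short 5-second tick but decides what to do
// based on real clock time, so a suspend/resume gap is detected within a few
// seconds of waking up instead of after a frozen long setTimeout.

const CHECK_INTERVAL_MS = 5 * 1000;          // short timer tick
const ASK_SERVER_EVERY_MS = 15 * 60 * 1000;  // how often to ask for time left
const SUSPEND_GAP_MS = 30 * 1000;            // a tick gap this large means we slept

let lastTick = Date.now();
let lastServerAsk = 0;

async function askServerForTimeLeft(): Promise<number> {
  const res = await fetch('/session/remaining'); // placeholder endpoint
  const { remainingMs } = await res.json();
  return remainingMs;
}

setInterval(async () => {
  const now = Date.now();
  const gap = now - lastTick;
  lastTick = now;

  const suspectedSuspend = gap > SUSPEND_GAP_MS;
  const dueForCheck = now - lastServerAsk > ASK_SERVER_EVERY_MS;

  if (suspectedSuspend || dueForCheck) {
    lastServerAsk = now;
    const remainingMs = await askServerForTimeLeft();
    if (remainingMs <= 0) {
      console.warn('Session expired; log the user out with the friendly message here');
    } else if (remainingMs < 60 * 1000) {
      console.warn('Session about to expire; show the warning dialog here');
    }
  }
}, CHECK_INTERVAL_MS);
```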
But I dislike this solution because it relies on a frequent timer based interruption. Is there a smarter way to handle this?
With big sites like Facebook, which are rich with interactive updates, you'll find a combination of all sorts of different mechanisms. I'd guess they're doing validation both on API requests and on push requests (since someone once told me they use push in addition to Ajax).
Timeouts: One thing to consider is that if you store session data in a cookie, having that cookie expire is the same as no longer being logged in. Since the cookie is a hashed value of a few things like a user ID or a timestamp, it is really easy to tell that a session is no longer valid on the very first call to the API.
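For illustration, a cookie that carries its own timestamp plus an HMAC over it can be checked statelessly on the very first call (a Node sketch; the field layout and secret handling here are made up, not any particular framework's format):

```typescript
// Stateless check of a session cookie of the form "<userId>.<issuedAtMs>.<hmac>".
// If the signature matches and the timestamp is recent enough, the session is
// still valid; otherwise the very first API call can reject it.

import { createHmac } from 'crypto';

const SECRET = 'replace-with-a-real-secret'; // illustrative only
const SESSION_TTL_MS = 30 * 60 * 1000;

function sign(userId: string, issuedAtMs: number): string {
  return createHmac('sha256', SECRET).update(`${userId}.${issuedAtMs}`).digest('hex');
}

export function makeSessionCookie(userId: string): string {
  const issuedAtMs = Date.now();
  return `${userId}.${issuedAtMs}.${sign(userId, issuedAtMs)}`;
}

export function isSessionValid(cookie: string, now: number = Date.now()): boolean {
  const [userId, issuedAtStr, mac] = cookie.split('.');
  if (!userId || !issuedAtStr || !mac) return false;

  const issuedAtMs = Number(issuedAtStr);
  const signatureOk = mac === sign(userId, issuedAtMs); // use a constant-time compare in real code
  const notExpired = now - issuedAtMs < SESSION_TTL_MS;
  return signatureOk && notExpired;
}
```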
Long polling: if a site uses long polling in which a connection is opened indefinitely to await a response from the web server, then closing your computer would kill that connection.
However, if they're just doing regular Ajax polling with a recurring function call via setInterval, then the web server would automatically know whether the user should get data in return based on the timestamp in their hashed cookie, assuming there is one to check. Those are the types of things that get sent in the header.
Some services actually update a database field that stores your timestamp of last activity and expire the session once a certain amount of time has elapsed. This is a less efficient way to do it since it keeps track of state.
There are really quite a few ways sites do these things.

How to find the cause of RESTful service bad performance?

I am creating a service which receives some data from mobile phones and saves it to the database.
The phone sends the data every 250 ms. As I noticed that the delay before the data was stored kept increasing, I ran Wireshark and wrote a log as well.
I noticed that the web requests from the mobile phone are being made without any delay (checked with Wireshark), but in the service log I see that requests are received only every second and a half to two seconds.
Does anyone know where the problem could be, or a way to test and determine the cause of such a delay?
I am creating a service with WCF (webHttpBinding) and the database is MS SQL.
By the way, the log stores the time of the HTTP request and also the time of writing the data to the database. As mentioned above, a request is received every 1.5-2 seconds, and after that it takes 50 ms to store the data in the database.
Thanks!
My first guess after reading the question was that maybe you are submitting data so fast that the database server is hitting a write-contention lock (e.g. on AutoNumber fields?).
If your database platform is SQL Server, take a look at http://www.sql-server-performance.com/articles/per/lock_contention_nolock_rowlock_p1.aspx
Anyway, please post more information about the overall architecture of the system... what software/platforms are used in which parts, etc.
Maybe there is some limitation in the connection imposed by the service provider?
What happens if you (for testing) don't write to the database and just log the page hits in the server log with a timestamp?
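That isolation test is easy to instrument; here is a sketch of logging receive time separately from write time (TypeScript purely for illustration, since the original service is WCF):

```typescript
// Instrumentation sketch: log when a request is received and, separately, how
// long the database write takes (or skip the write entirely for the test).
// If receive timestamps are 1.5-2 s apart while writes take ~50 ms, the delay
// is upstream of the database.

interface Persister {
  save(payload: unknown): Promise<void>;
}

async function handleReading(
  payload: unknown,
  persister: Persister,
  skipDatabase = false, // set true for the "log only" test
): Promise<void> {
  console.log(`received ${new Date().toISOString()}`);

  if (skipDatabase) return;

  const start = Date.now();
  await persister.save(payload);
  console.log(`db write took ${Date.now() - start} ms`);
}
```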
Check that you do not have any tracing running on the web services; this can really kill performance.
