I've been struggling for two days to find the reason, but with no luck.
I have Dynamics 365 on-premise, and we have some customizations on the case form, as well as workflows and plugins on create.
When I open a fresh new session, creating a request takes almost 8 seconds; however, after saving the first request and submitting a new one, it takes almost 2 seconds.
I need to understand the server-side caching behavior that makes the first request take so much longer than subsequent requests, because this makes it hard to judge whether performance is good or not.
Thank you,
Related
I want to increase the 'session timeout', which is currently set to 20 minutes. How can I increase or decrease it, for example to one hour, in other words 60 minutes?
There are a few ways to accomplish what you need, as we ran into the same issue when doing our NetSuite integration.
You can make a dummy search call every couple of minutes (see the sketch after this list). We searched for a bogus transaction that we knew would never be created, limited to a single date in the distant past, so the search would return very quickly with zero results.
Implement single sign-on. This is the preferred method. Once you have initiated single sign-on, if the session has previously timed out you can quickly create a new session using tokens and do not need to ask the user for their username/password again.
We had a service that needed to be consumed at two different points in the application that did not know about each other. The way we got around this while still using one service was to save the service's cookies in a shared location. When the service was needed by one part of the application, it would recreate the service from those cookies. If the service had timed out, we would recreate the service and update the cookies. This method became outdated once we implemented single sign-on, as we could then just create the service from the tokens as needed, and the tokens were stored in a shared location.
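For the first option, the mechanics are just a timer that fires a cheap, known-empty query often enough that the session never reaches its idle timeout. A minimal sketch in TypeScript, assuming a hypothetical /api/search endpoint and cookie-based sessions rather than NetSuite's actual API:

```typescript
// Minimal keep-alive sketch. The endpoint and interval are assumptions, not
// NetSuite's real API: the idea is just to fire a cheap, known-empty query
// often enough that the session never reaches its idle timeout.
const KEEP_ALIVE_INTERVAL_MS = 2 * 60 * 1000; // every couple of minutes

async function keepSessionAlive(): Promise<void> {
  try {
    // Hypothetical search endpoint; query a value that can never match,
    // so the server answers quickly with zero results.
    await fetch("/api/search?type=transaction&id=DOES_NOT_EXIST&date=1970-01-01", {
      credentials: "include", // send the session cookie
    });
  } catch {
    // Ignore failures; the next tick will try again.
  }
}

const keepAliveTimer = setInterval(keepSessionAlive, KEEP_ALIVE_INTERVAL_MS);
// Call clearInterval(keepAliveTimer) when the user logs out.
```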
Hope this helped.
There is no standard way that I know of in NetSuite, though you could use a browser plugin to refresh the page or click the home button every 19 minutes. That would work if, for example, the person is AFK.
There is no way to change the web service request timeout period (for sync operations it lasts approximately 15 minutes, then the operation gets terminated on the server side). The general practice for long-running operations that take more than 15 minutes is to use async requests.
We've been using the EWS SDK for a few years now and, after many mistakes, we've decided it was time to refactor our code base to reflect what we've learned. One issue we see happen every once in a while is that all EWS calls fail because they're pointing to a CAS that is malfunctioning. The solution seems as easy as firing off a background thread every n seconds, where n represents how often we'll run Autodiscover.
I've scoured the web and can't seem to find any information relating to the matter.
How often should I autodiscover?
From the "How To: Refresh configuration information by using Autodiscover" topic on MSDN:
We recommend that you refresh your user settings by sending a new Autodiscover request after 24 hours have passed since your last Autodiscover request. This time can be adjusted to meet the requirements of your application.
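In practice that can be a lazy check rather than a dedicated background thread: cache the settings with a timestamp and re-run Autodiscover when they are older than 24 hours, or when a call against the cached CAS fails. A rough sketch of the policy in TypeScript, where runAutodiscover() is a hypothetical stand-in for the actual EWS SDK call:

```typescript
// Sketch of a 24-hour Autodiscover refresh policy. runAutodiscover() is a
// hypothetical stand-in for the real EWS / Autodiscover call.
interface UserSettings {
  externalEwsUrl: string;
  fetchedAt: number;
}

const REFRESH_AFTER_MS = 24 * 60 * 60 * 1000; // the recommended 24 hours
let cached: UserSettings | null = null;

async function runAutodiscover(email: string): Promise<{ externalEwsUrl: string }> {
  // ...call the real Autodiscover service here; placeholder result only...
  return { externalEwsUrl: `https://cas.example.com/ews/${email}` };
}

async function getUserSettings(email: string, forceRefresh = false): Promise<UserSettings> {
  const stale = cached === null || Date.now() - cached.fetchedAt > REFRESH_AFTER_MS;
  if (forceRefresh || stale) {
    const settings = await runAutodiscover(email);
    cached = { ...settings, fetchedAt: Date.now() };
  }
  return cached!;
}

// When an EWS call fails (e.g. the cached CAS is misbehaving), call
// getUserSettings(email, true) to force a fresh Autodiscover before retrying.
```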
We would like to check every 3 seconds whether there are any updates in our database, using jQuery $.ajax. The technology is clear, but are there any reasons not to fire so many AJAX calls (browser, cache, performance, etc.)? The web application runs for roughly 10 hours per day on every client.
We are using Firefox.
AJAX calls have implications not so much on the client side (browser, etc.) but on the server side. For example, every AJAX call is a hit on the server: more bandwidth consumption, and the number of server requests increases, which in turn increases server load. AJAX is really meant to improve client friendliness at the cost of server-side implications.
Regards,
Ravi
You should think carefully before implementing infinite repeating AJAX calls with an arbitrary delay between them. How did you come up with 3 seconds? If you're going to be polling your server in this way, you need to reduce the frequency of requests to as low a number as possible. Here are some things to think about:
Is the data you're fetching really going to change that often?
Can your server handle a request every 3 seconds? How long does the operation take for a single request?
Could you increase the delay after inactivity, or guess from previous server responses how long the next delay should be? (See the sketch after this list.)
Can you stop the polling completely when the window loses focus, and restart it when it's in the foreground again?
If a user opens the same page in a website 10 times, your server should recognise this and throttle its responses, either using a cookie with a unique value in it (recommended) or based on the client IP address.
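To make a couple of those points concrete, here is a rough polling sketch in TypeScript that backs off when nothing has changed and stops completely while the tab is hidden. The /api/updates endpoint, the changed flag in the response, and the delay bounds are all placeholders, not something from the question:

```typescript
// Adaptive polling sketch: slow down while the data is unchanged and
// stop entirely while the tab is hidden. Endpoint and delays are placeholders.
const MIN_DELAY_MS = 3000;
const MAX_DELAY_MS = 60000;
let delay = MIN_DELAY_MS;
let timer: number | undefined;

async function poll(): Promise<void> {
  try {
    const res = await fetch("/api/updates"); // placeholder endpoint
    const data = await res.json();
    // Back off while nothing changes; reset to the minimum when it does.
    delay = data.changed ? MIN_DELAY_MS : Math.min(delay * 2, MAX_DELAY_MS);
  } catch {
    delay = MAX_DELAY_MS; // slow right down on errors
  }
  timer = window.setTimeout(poll, delay);
}

document.addEventListener("visibilitychange", () => {
  if (document.hidden) {
    window.clearTimeout(timer); // stop polling in the background
  } else {
    delay = MIN_DELAY_MS;
    poll();                     // resume immediately when visible again
  }
});

poll();
```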
Above all, instead of polling, consider using HTML 5 web sockets to "push" data to the client - most modern browsers support this. Several frameworks are available that will fall back to polling if web sockets are not available - one excellent .NET example is SignalR.
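If you do go the push route, the browser side of a raw web socket is only a few lines; the URL and message shape below are made up for illustration, and frameworks like SignalR mainly add the reconnection and polling fallback for you:

```typescript
// Minimal web socket "push" sketch (browser side). The URL and message shape
// are assumptions; the server decides when to send updates, so the client
// never polls.
const socket = new WebSocket("wss://example.com/updates"); // placeholder URL

socket.addEventListener("message", (event) => {
  const update = JSON.parse(event.data); // assume the server sends JSON
  console.log("server pushed an update:", update);
});

socket.addEventListener("close", () => {
  // A real client would reconnect (or fall back to polling) here,
  // which is essentially what frameworks like SignalR handle for you.
  console.warn("connection lost");
});
```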
I've seen a lot of applications making a request every 5 seconds or so, for instance a remote control (web player) or a chat. So that shouldn't be a problem for the browser.
A good practice is to wait for the answer before making a new request, which means not firing the requests with setInterval, for instance (see the sketch below).
(In case the user loses their connection, that would prevent opening too many connections.)
Also verify that all the calculations associated with one answer are done before the next answer arrives.
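As a small illustration of that point, assuming a placeholder /api/updates endpoint and plain fetch instead of $.ajax: the next request is only scheduled once the previous response has been fully processed, so slow responses can never pile up.

```typescript
// Chained polling sketch: schedule the next request only after the previous
// answer has been received and handled. Endpoint and interval are placeholders.
function handleUpdates(data: unknown): void {
  console.log("updates:", data);
}

function pollOnce(): void {
  fetch("/api/updates")
    .then((res) => res.json())
    .then(handleUpdates)                        // finish this answer's work first
    .catch(() => { /* ignore; retry on the next round */ })
    .finally(() => setTimeout(pollOnce, 3000)); // then wait before asking again
}

pollOnce();

// By contrast, setInterval(() => fetch("/api/updates"), 3000) would keep firing
// even while earlier requests are still outstanding.
```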
And if you have access to the server side, configure your server to set the HTTP header Connection: Keep-Alive, so you don't add too much TCP overhead to each of your requests. That can speed up small requests a lot.
The last point, of course, is verifying that your server is able to handle that many requests.
You are looking for changes every 3 seconds. This way the traffic increases, as you are fetching data continuously at short intervals, and it may also steadily increase memory usage on the browser side. Since you need to check for updates in the database, you can look at alternatives like Sheepjax, Comet, or SignalR (SignalR generally broadcasts the data to all users, and Comet needs a license). Hope this may help you.
I have a web application that relies on very "live" data - so it needs an update every 1 second if something has changed.
I was wondering what the pros and cons of the following solutions are.
Solution 1 - Poll A Lot
So every 1 second, I send a request to the server and get back some data. Once I have the data, I wait for 1 second before doing it all again. I would detect client-side if the state had changed and take action appropriately.
Solution 2 - Block A Lot
So I start a request to the server that will time-out after 30 seconds. The server keeps an eye on the data on the server by checking it once per second. If the server notices the data has changed it sends the data back to the client, which takes action appropriately.
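For Solution 2, the client side is typically just a loop that re-issues the request as soon as the previous one returns, whether it came back with data or timed out empty. A rough sketch, with a placeholder /api/wait-for-change endpoint standing in for the 30-second blocking call:

```typescript
// Long-polling client sketch. /api/wait-for-change is a placeholder: the server
// holds the request open for up to ~30 seconds and answers early only if the
// data actually changed.
function applyChange(data: unknown): void {
  console.log("data changed:", data); // hypothetical client-side handler
}

async function longPoll(): Promise<void> {
  while (true) {
    try {
      const res = await fetch("/api/wait-for-change");
      if (res.status === 200) {
        applyChange(await res.json());
      }
      // A 204 (or an empty timeout response) just means "nothing changed"; loop again.
    } catch {
      // Network error: back off briefly before retrying.
      await new Promise((resolve) => setTimeout(resolve, 5000));
    }
  }
}

longPoll();
```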
Scenario
Essentially, the data is reasonably small in size, but changes at random intervals based on live events. The thing is, the web UI will be running something in the region of 2,000 instances, so do I have 2,000 requests per second coming from the UI or do I have 2,000 long-running requests that take up to 30 seconds?
Help and advice would be much appreciated, especially if you have worked with AJAX requests under similar volumes.
One common solution for such cases is to use static JSON files. Server-side scripts update them when the data changes, and they are served by a fast, lightweight web server (like nginx). Since the files are static and small, the web server will serve them straight from cache, very quickly.
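The writer side of that approach can be as simple as rewriting the file atomically whenever the data changes (write to a temp file, then rename), so nginx never serves a half-written document. A short sketch in TypeScript on Node, with placeholder paths:

```typescript
// Sketch of the "static JSON file" approach (Node). Paths are placeholders.
// nginx serves /var/www/data/status.json directly; this script keeps it fresh.
import { promises as fs } from "fs";

const TARGET = "/var/www/data/status.json";
const TMP = TARGET + ".tmp";

export async function publish(data: unknown): Promise<void> {
  // Write to a temp file first, then rename: the rename is atomic, so clients
  // never see a partially written JSON document.
  await fs.writeFile(TMP, JSON.stringify(data));
  await fs.rename(TMP, TARGET);
}
```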
Consider a better architecture. Implementing this kind of messaging system is trivial to do right in something like nodeJS. Message dispatch will be instantaneous, and you won't need to poll for your data on either side.
You don't need to rewrite your whole system: The data producer could simply POST the updates to the nodeJS server instead of writing them to a file, and as a bonus, you don't even need to waste time on disk IO.
If you started without knowing any nodeJS, you could still be done in a couple hours, because you can just hack up the chat example.
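A minimal sketch of that idea in Node/TypeScript, using Server-Sent Events instead of the chat example so it needs no extra libraries; the route names and port are assumptions. The data producer POSTs to /update and every connected browser gets the change pushed immediately:

```typescript
// Minimal push-dispatch sketch (Node, no external dependencies).
// Producers POST JSON to /update; browsers listen on /events via EventSource.
import * as http from "http";

const clients = new Set<http.ServerResponse>();

const server = http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/update") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // Push the update to every connected client immediately.
      for (const client of clients) {
        client.write(`data: ${body}\n\n`);
      }
      res.writeHead(204).end();
    });
  } else if (req.method === "GET" && req.url === "/events") {
    // Server-Sent Events stream: keep the response open and push as data arrives.
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    });
    clients.add(res);
    req.on("close", () => clients.delete(res));
  } else {
    res.writeHead(404).end();
  }
});

server.listen(8080); // placeholder port
```

In the browser, new EventSource("/events") is all that is needed to receive those pushes without any polling.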
I can't comment yet, but I would agree with geocar. Running live or almost-live web services with just polling will be a solution stuck between a rock and a hard place.
You could also look into web sockets to allow push, as that sounds like a better solution than just polling every 1 to 30 seconds.
Good luck!
I am creating a service which receives some data from mobile phones and saves it to the database.
The phone sends the data every 250 ms. As I noticed that the delay for storing data was increasing, I ran Wireshark and wrote a log as well.
I noticed that the web requests from the mobile phone are being made without delay (checked with Wireshark), but in the service log the requests are received only every one and a half to two seconds.
Does anyone know where the problem could be, or a way to test and determine the cause of such a delay?
I am creating a service with WCF (webHttpBinding) and the database is MS SQL.
By the way, the log stores the time of the HTTP request and also the time of writing the data to the database. As mentioned above, the request is received every 1.5 - 2 seconds, and after that it takes 50 ms to store the data in the database.
Thanks!
My first guess after reading the question was that maybe you are submitting data so fast that the database server is hitting a write-contention lock (e.g. on AutoNumber fields?).
If your database platform is SQL Server, take a look at http://www.sql-server-performance.com/articles/per/lock_contention_nolock_rowlock_p1.aspx
Anyway, please post more information about the overall architecture of the system: what software/platforms are used for which parts, etc.
Maybe there is some limitation in the connection imposed by the service provider?
What happens if you (for testing) don't write to the database and just log the page hits in the server log with a timestamp?
Check that you do not have any tracing running on the web services; this can really kill performance.