Jersey/Jetty API handler/container resends the GET request every minute if it does not receive the response within 1 minute

<jersey.version>2.23.2</jersey.version>
<jetty.version>9.4.11.v20180605</jetty.version>
There is a GET API resource registered with Jersey, and I am hitting this API from Postman. If the number of records is small and can be fetched within 1 minute, the API works fine. But if the request takes more than 1 minute to process, the Jetty container closes the previous request's socket and retries by sending the same GET request to the same API again (no external requests are sent; Jetty is retrying them itself).
FYI: there is no SSL session being used here.
These are non-async requests.
Q 1. Any idea why Jetty does that?
Also, I tried to change the timeouts in the ServerFactory like so, but nothing changed the behaviour above:
((DefaultServerFactory)startupConfig.getServerFactory()).setIdleThreadTimeout(Duration.minutes(4));
startupConfig.getMetricsFactory().setFrequency(Duration.minutes(4));
Q 2. Any idea what settings need to be changed?
On Production, SSL sessions are being used and I don't see this problem there.
Q 3. How would an SSL session change this behaviour of Jetty resending new API requests?

Related

setInterval() AJAX call works on the local server but gives an error on the online server

I have a website with a chatroom where I send an AJAX request to check whether a person has received a new message. If a new message is received, it gets appended to the DOM without refreshing the page (like Facebook).
I am using:
setInterval(check_if_new_message, 1000);
i.e. one AJAX request every second to check for new messages.
This was working fine on the local server. But then I bought Starter Shared Linux Hosting on GoDaddy, and now my AJAX requests are not working properly. The first 100-150 requests work fine, but after that it starts giving an error like net::ERR_CONNECTION_CLOSED in the browser console.
You can see that you are using:
setInterval(check_if_new_message, 1000);
That means you are calling check_if_new_message every second. This works well on localhost because the server is your own machine and you are its only user. But when you try this on a live shared server, you will get:
net::ERR_CONNECTION_CLOSED
This is because a shared hosting plan cannot handle that many requests: such plans cap resources like memory and concurrent connections, so the server starts closing them.
Constant polling like this is not good practice for a real-time chat application.
If you want to build a real-time chat application, use WebSockets instead.
Useful resources for WebSocket:
What is WebSocket?
WS library
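A minimal sketch of the push-based alternative the answer recommends (the URL and function names here are hypothetical, not from the posts above; the WebSocket implementation is injected so the same code works with the browser's built-in WebSocket or the ws package in Node):

```javascript
// Minimal chat-client sketch. Instead of polling every second, the
// server pushes each new message over a single long-lived connection.
function createChatClient(url, onMessage, WebSocketImpl) {
  const socket = new WebSocketImpl(url);
  // Server push: called whenever a message arrives, no polling needed.
  socket.onmessage = (event) => onMessage(event.data);
  return {
    send(text) {
      socket.send(text);
    },
  };
}

// Hypothetical browser usage:
// const client = createChatClient("wss://example.com/chat",
//   (msg) => appendMessageToDom(msg), WebSocket);
// client.send("hello");
```

The point of the design is that the connection is opened once, so the per-request overhead that overwhelmed the shared host disappears.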

Firefox not sending request after response 500

As you can see below, I am uploading an XLSX file to my API from Firefox. If the XLSX does not have all the columns I need to map it to a class, I throw an exception and the API returns status 500. If I do not change the file, I get error 500 every time, like below.
After some requests Firefox stops sending them altogether, and it only happens when I change the file: the 4th request below should return status 200. When I debug my Spring application, it is not even entering the endpoint.
In Chrome I do not have this problem; I can send a lot of requests and the behavior is different. Do you know if Firefox has some security mechanism that prevents repeated requests like this? If I reload the page and send the 4th request alone, it returns status 200.
I am using Angular 5 + Spring Boot

Keep-alive problems in Internet Explorer 11 in an AJAX application

I have a web application that communicates with the server via AJAX. I am using HTTPS/SSL, and keep-alive is set to 10 seconds.
Sometimes the user sends a request to the server at exactly the moment the keep-alive time expires. When this happens, a new SSL session is created. This is all fine.
After this happens, the browser (Internet Explorer 11) resends the AJAX request.
But now strange things happen. The browser sends only the headers of the request, no body. The server waits for a body that never arrives, finally aborts the request, and the client gets an exception with HTTP status 0 and the message: Network error.
Some say this is normal behavior when using SSL and keep-alive and that it must be handled in the web application. Others say this is incorrect behavior in Internet Explorer 11.
All I can see is that the server cannot reuse the body of a request sent on a previous SSL session. The browser needs to resend the entire request, but this is not happening.
If I catch the exception in the application and resend the request to the server, everything works again. But it feels very strange to catch every status-0 error and resend the request; it could also be dangerous from an application point of view.
The application only works in IE, so I can't compare with Chrome and Firefox.
My question is: is this normal behavior, or do I perhaps have some incorrect configuration in the browser or on the web server?
When the server closes a connection at exactly the same time as a new request is sent on a keep-alive connection, it is not clear whether the server received the request and acted on it. Therefore only requests which are idempotent (i.e. do not change anything on the server) should be automatically retried by the client. GET and HEAD are considered idempotent, while POST is not.
Thus, if the client automatically retries a GET or HEAD, this is correct behavior. Retrying a POST would be bad. Resubmitting a request without a body when the original had a body (as you claim but don't show) would be a bug.
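The rule in this answer can be sketched as a small client-side helper (requestWithRetry and isRetryable are hypothetical names for illustration; doRequest stands in for any fetch/XHR call):

```javascript
// Methods that HTTP (RFC 7231) defines as idempotent, and which are
// therefore safe to retry automatically when a keep-alive connection
// is closed mid-request.
const IDEMPOTENT_METHODS = new Set([
  "GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE",
]);

function isRetryable(method) {
  return IDEMPOTENT_METHODS.has(method.toUpperCase());
}

// Hypothetical wrapper: retry a failed request once, but only if the
// method is idempotent.
async function requestWithRetry(method, doRequest) {
  try {
    return await doRequest();
  } catch (err) {
    if (isRetryable(method)) {
      return await doRequest(); // safe to resend a GET/HEAD/PUT/DELETE
    }
    throw err; // never auto-retry POST: the first attempt may have been applied
  }
}
```

This mirrors what a well-behaved browser does on its own: silently resend idempotent requests, and surface the error for anything else.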

HTTP request not hitting controller

I currently have a problem where I send an asynchronous ajax request to a .NET controller in order to start a database search. This request makes it to the server which kicks off the search and immediately (less than a second) replies to the callback with a search ID, at which point I begin sending ajax requests every 10 seconds to check if the search has finished. This method works fine, and has been tested successfully with multiple users sending simultaneous requests.
If I send a second search request from the same user before the first search is finished, this call will not make it to the controller endpoint until after the first search has completed, which can take up to a minute. I can see the request leave chrome (or FF/IE) in the dev tools, and using Fiddler as a proxy I can see the request hit the machine that the application is running on, however it will not hit the breakpoint on the first line of the endpoint until after the first call returns.
At the point this call is blocking, there are typically up to 3 pending requests from the browser. Does IIS or the .NET architecture have some mechanism that is queuing my request? Or if not, what else would be between the request leaving the proxy and entering the controller? I'm at a bit of a loss for how to debug this.
I was able to find the issue. It turns out that despite my endpoint being defined asynchronously, ASP.NET controllers by default synchronize by session. So while my endpoints were able to be executed simultaneously across sessions, within the same session it would only allow one call at a time. I was able to fix the issue by setting the controller SessionState attribute to Read Only, allowing my calls to come through without blocking.
[SessionState(System.Web.SessionState.SessionStateBehavior.ReadOnly)]

Why does my Ajax request go directly from state 1 to 4?

I am making a request to a CGI program using AJAX. The response sends me a Content-Length header, and my purpose is to display the response progress dynamically. For that I need to trigger a function when the XHR object's readyState becomes 3. But the request never seems to reach that state; instead it goes directly from state 1 to state 4.
What am I missing?
The response could be arriving so quickly that you just don't notice state 3. Especially if you are running on localhost, the response can be transmitted very quickly. You could set an alert when the request reaches state 3 to test whether it actually gets there. Also, I believe Internet Explorer treats accessing the response in state 3 as an error, so there could be compatibility issues.
If you're running on localhost, then probably the browser is never getting a chance to run between the time it sends the request and the time it gets the response...
browser opens connection, sets readyState to 1
browser sends packet to server process
server process receives packet, gets priority from scheduler
server returns data to browser, and yields control of the CPU. Browser continues execution.
browser sees all data has been received, sets readyState to 4.
Long story short: don't count on going into the "receiving" state.
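For reference, the readyState values the answers refer to, with a small logging helper (logReadyStates is a hypothetical name; the state names themselves are from the XMLHttpRequest specification):

```javascript
// XMLHttpRequest readyState values, per the XHR specification.
const READY_STATES = {
  0: "UNSENT",
  1: "OPENED",
  2: "HEADERS_RECEIVED",
  3: "LOADING", // body is streaming in -- the state the question never observes
  4: "DONE",
};

// Attach a logger to an XHR-like object and record every state seen.
// On localhost the transition 2 -> 3 -> 4 can collapse into a single
// tick, so "LOADING" may never be logged at all.
function logReadyStates(xhr, log = console.log) {
  const seen = [];
  xhr.onreadystatechange = () => {
    seen.push(xhr.readyState);
    log(READY_STATES[xhr.readyState]);
  };
  return seen;
}
```

In a real page you would create the XHR with new XMLHttpRequest(), call open() and send(), and watch the log; against a fast local server you will typically see only OPENED and DONE, exactly as the question describes.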
