NSURLSession didReceiveChallenge called for every request

I have gone through this link:
Why is a HTTPS NSURLSession connection only challenged once per domain?
With NSURLConnection, didReceiveChallenge was not called for every request, but with NSURLSession it is called for every request. How can they behave differently if the behavior depends only on the TLS session cache?

Related

No persistent session when connecting to an API on a different host

I am opening a WebSocket connection to an API server on a different host:
new WebSocket("ws://localhost:3000")
whereas my front end is hosted on localhost:8080.
Inside my API's WebSocket connection handler I'm able to set a key on the session (with Sinatra's enable :sessions), but every time I refresh the HTML page, the data is lost.
Is there some requirement that the front end share the same host as the server for sessions to work? Or is there some way I can get around this? By the way, the front end is running on a Webpack server (Node).
I also tried allowing cross_origin on the API's root route http://localhost:3000 and then doing this in the client (this example is in CoffeeScript):
$.get "http://localhost:3000", ->
  new WebSocket("ws://localhost:3000")
My thinking was that maybe the session needed to be "initialized" over http:// instead of ws://, but that didn't work either. The session didn't work for the $.get "http://localhost:3000" request either; refreshing the page shows that the session clears each time.
As we've discussed in the comments, you probably have a problem with third-party session cookies in the browser.
Here's a scheme that you could use to work around it.
1. The client makes a webSocket connection for the first time.
2. The server sends a webSocket message back with a sessionID in it.
3. The client stores the sessionID in a first-party cookie (i.e. a cookie in the host web page).
4. The user hits refresh.
5. The web page checks whether there is a webSocket session cookie among the cookies for the host page. If so, it constructs a URL for the webSocket connection that includes that session ID: new WebSocket("ws://localhost:3000?session=xyslkfas").
6. When the server accepts the webSocket connection, it checks the query parameters to see if a session is already specified. If so, and that session is still valid, it hooks the connection up to that session. If not, it creates a new session and goes back to step 2.
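Putting the client side of that scheme together, a minimal sketch in plain JavaScript might look like this (the cookie name wsSession and the { type: "session", id: ... } message format are assumptions for the example, not something your server already does):

function openSocket() {
  // Reuse the saved session id if the host page already has one.
  const match = document.cookie.match(/(?:^|; )wsSession=([^;]+)/);
  const url = match
    ? "ws://localhost:3000?session=" + encodeURIComponent(match[1])
    : "ws://localhost:3000";
  const socket = new WebSocket(url);

  socket.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data); // assumes JSON messages
    if (msg.type === "session") {
      // Store the session id as a first-party cookie on the host page,
      // so it survives a refresh even when third-party cookies are blocked.
      // (Assumes the id itself is cookie- and URL-safe, e.g. alphanumeric.)
      document.cookie = "wsSession=" + msg.id + "; path=/";
    }
  });

  return socket;
}

The server side then only needs to inspect the session query parameter during the handshake, as in step 6.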

What is the difference between cookie and cookiejar?

Today I came across the term "cookiejar" (package net/http/cookiejar). I tried to gather some information about it, but nothing intelligible came out. I know that a cookie is a key/value pair that the server sends to a client, e.g. Set-Cookie: foo=10; the browser stores it locally and then sends it back to the server on each subsequent request, e.g. Cookie: foo=10.
OK, but what about the cookiejar? What is it and what does it look like?
As you described in your question, cookies are managed by browsers (HTTP clients): they allow storing information on the client's computer, which the browser then sends back automatically on subsequent requests.
If your application acts as a client (you connect to remote HTTP servers using the net/http package), there is no browser to handle / manage the cookies for you. By this I mean storing/remembering cookies that arrive in Set-Cookie: response headers and attaching them to subsequent outgoing requests made to the same host/domain. Cookies also have an expiration date, which you would have to check before deciding to include them in outgoing requests.
The http.Client type, however, allows you to set a value of type http.CookieJar, and if you do so, you get automatic cookie management that would otherwise not exist or that you would have to implement yourself. This lets you make multiple requests with the net/http package that the server will see as part of the same session, just as if they were made by a real browser, since HTTP sessions (the session IDs) are often maintained using cookies.
The package net/http/cookiejar is a CookieJar implementation which you can use out of the box. Note that this implementation is in-memory only, which means that if you restart your application, the cookies will be lost.
So basically an HTTP cookie is a small piece of data sent from a website and stored in a user's web browser while the user is browsing that website.
CookieJar is a Go interface for a simple cookie manager (one that manages cookies from HTTP request and response headers), together with an implementation of that interface.
In general it is a datastore where an application (browser or not) puts the cookies it uses during requests and responses. So it is really a jar for cookies.
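To make the concept concrete, here is a deliberately naive jar written by hand in JavaScript for a non-browser client (it assumes Node 19.7+, where a global fetch and Headers#getSetCookie() are available, and it only tracks cookies by name for a single host; Go's net/http/cookiejar additionally handles domains, paths and expiry for you):

// A deliberately minimal cookie jar: cookie name -> value, single host only.
const jar = new Map();

async function fetchWithJar(url, options = {}) {
  const headers = new Headers(options.headers);
  if (jar.size > 0) {
    // Attach stored cookies to the outgoing request, as a browser would.
    headers.set("Cookie",
      [...jar].map(([name, value]) => name + "=" + value).join("; "));
  }
  const response = await fetch(url, { ...options, headers });
  // Remember anything the server sets, so later requests share the session.
  for (const line of response.headers.getSetCookie()) {
    const pair = line.split(";")[0];   // drop attributes like Path and Expires
    const eq = pair.indexOf("=");
    jar.set(pair.slice(0, eq).trim(), pair.slice(eq + 1));
  }
  return response;
}

// Two requests that the server will see as part of the same session:
// await fetchWithJar("http://localhost:3000/login");
// await fetchWithJar("http://localhost:3000/profile");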

Aborting an HTTP request. Server-side advantage?

In JavaScript AJAX libraries, for example, it is possible to abort an AJAX request. Is there any server-side advantage to this, or is it just for client-side cleanliness? Is it part of TCP?
If, for example, my JavaScript web app requests a resource-intensive Python-based service via AJAX and I abort that request, is it possible that aborting will ease the load on the server, or will my AJAX library just ignore the response from the server?
It does not affect the server side if you use your framework's abort feature. The server will still process the request regardless.
Once you have made an HTTP request to a resource URL on your server (asynchronous or not, i.e. AJAX or "regular"), you can't abort it from your client or with another HTTP request (unless your service has some unusual listener that waits for subsequent HTTP requests and stops itself upon receiving one). My suggestion: if you have a single resource- and time-consuming operation, either split it into simpler operations, parallelize it, or at least send periodic responses to inform the user that it is still working and hasn't died.
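For completeness, here is what "abort" looks like on the client side with a plain XMLHttpRequest (the /expensive-report endpoint is made up for the example). The browser stops waiting for the response and discards it, but, as the answers above note, the server typically keeps processing the request:

const xhr = new XMLHttpRequest();
xhr.open("GET", "/expensive-report");
xhr.onload = () => console.log("finished:", xhr.responseText);
xhr.onabort = () => console.log("aborted on the client side");
xhr.send();

// Later, e.g. when the user navigates away or cancels:
xhr.abort(); // onload never fires; the server typically keeps working anyway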

Realtime via AJAX: how to create an open connection to a non-blocking server like Tornado, etc.?

When people create real-time web apps, they leave an AJAX request open / long-running.
How do they do this in JavaScript?
There is really no difference from a normal ajax request. A callback is associated with the XMLHttpRequest. Once the request is complete the callback is invoked. The difference is on the server-side where the request is held open until data is ready for the client, or a timeout occurs. On the browser side, the callback is invoked as each successive request is answered. The callback must process the data from the server and initiate another request. The request is handled asynchronously, so the browser is not blocked.
A really good example of the whole thing is the chat demo included in Tornado.
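A minimal browser-side long-polling loop along those lines might look like this (the /updates endpoint and its JSON payload are assumptions for the example; the server is expected to hold each request open until it has data or times out):

function handleUpdate(data) {
  console.log("update from server:", data); // your app-specific callback
}

function poll() {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/updates");
  xhr.onload = () => {
    if (xhr.status === 200) {
      handleUpdate(JSON.parse(xhr.responseText)); // process the data
    }
    poll(); // immediately issue the next request to keep a connection open
  };
  xhr.onerror = () => setTimeout(poll, 5000); // back off briefly on network errors
  xhr.send();
}

poll();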

How does an XMLHttpRequest response get routed to the right browser-callback?

I have made a webpage that uses Ajax to update some values without reloading the page. I am using an XMLHttpRequest object to send a POST request, and I assign a callback function that gets called when the response arrives, and it works just fine.
But... how in the world does the browser know that some data coming from some ip:port should be sent to this particular callback function? I mean, in the worst case, if I have Firefox and IE making POST requests to the same server at roughly the same time, and even making subsequent POST requests before the responses to the previous ones arrive, how does the incoming data get routed to the right callback functions?
Each HTTP request made is on a separate TCP connection. The browser simply waits for data to come back on that connection and then invokes your callback function.
At a lower level, the TCP implementation on your OS will keep track of which packets belong to each socket (i.e. connection) by using a different "source port" for each one. There will be some lookup table mapping source ports to open sockets.
It is worth noting that the number of simultaneous connections a browser makes to any one server is limited (typically to 2). This was sensible back in the old days when pages reloaded to send and receive data, but in these enlightened days of AJAX it is a real nuisance. See the page for an interesting discussion of the problem.
Each request has its own connection. That means that if you have a single connection, you will of course have a single response, and that response will be delivered to your callback.
The general idea is that your browser opens a new connection entirely, makes a request to the server and waits for a response. This all happens over one connection, which the browser manages via a JavaScript API. The connection is not severed and then picked up again later when the server pushes something down, so the browser, having originated the request, knows what to do when the request finishes.
What truly makes things asynchronous is that these connections can happen separately in the background, which allows multiple requests to go out and return while others are still waiting for responses. This gives you the nice AJAX effect of the server appearing to return something at a later time.
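A small illustration of why responses never get mixed up: each XMLHttpRequest object owns its own connection and its own callback, so each response is delivered only to the handler attached to the object that issued it (the /prices and /news paths are made up):

function get(url, onDone) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.onload = () => onDone(xhr.responseText); // tied to this xhr object only
  xhr.send();
  return xhr;
}

get("/prices", (body) => console.log("prices arrived:", body));
get("/news", (body) => console.log("news arrived:", body));
// The browser matches each incoming response to the socket it sent the
// request on, and that socket belongs to exactly one XMLHttpRequest.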
