I've written a SIP UAC, and I've tried a few ways to detect and ignore repeated incoming messages from the UAS, but with every approach I tried, something went wrong. My problem is that all the messages belonging to the same call have the same signature, and comparing the entire message text is too expensive. So I was wondering: which parameters of a message should I be looking at when trying to detect these repeated messages?
UPDATE:
I had a problem with an incoming OPTIONS request, which I handled by sending the server an empty OK response. (Update: after a while of testing I noticed that I still got another OPTIONS request every now and then, a few every few seconds, so I tried responding with a Bad Request instead, and now I only get the OPTIONS request once or twice per registration/re-registration.)
Currently I have repeating SessionInProgress messages, and various error responses such as Busy Here and Unavailable. I get so many of these that they mess up my log, and I would like to filter them.
Any idea how to achieve that?
UPDATE:
I'll try your techniques before posting back; perhaps this will solve my problems.
Here is what I used, it works nicely:
private boolean compare(SIPMessage message1, SIPMessage message2) {
    // Requests and responses can never be duplicates of one another.
    if (message1.getClass() != message2.getClass())
        return false;
    // The CSeq number and method identify the transaction.
    if (message1.getCSeq().getSeqNumber() != message2.getCSeq().getSeqNumber())
        return false;
    if (!message1.getCSeq().getMethod().equals(message2.getCSeq().getMethod()))
        return false;
    // The Call-ID ties the message to a particular call.
    if (!message1.getCallId().equals(message2.getCallId()))
        return false;
    // For responses, also require the same status code, so that successive
    // provisional/final responses (183, 180, 200, ...) aren't treated as duplicates.
    if (message1.getClass() == SIPResponse.class
            && ((SIPResponse) message1).getStatusCode() != ((SIPResponse) message2).getStatusCode())
        return false;
    return true;
}
Thanks,
Adam.
It's a bit more complicated than ChrisW's answer.
First, the transaction layer filters out most retransmissions. For most messages, it does this by comparing the received message against a list of current transactions. If a matching transaction is found, that transaction will mostly swallow retransmissions, as per the diagrams in RFC 3261, section 17. For instance, a UAC INVITE transaction in the Proceeding state will drop a delayed, retransmitted INVITE.
Matching takes place in one of two ways, depending on the remote stack. If it's an RFC 3261 stack (the branch parameter on the topmost Via starts with "z9hG4bK") then things are fairly straightforward. Section 17.2.3 covers the full details.
Matching like this will filter out duplicate/retransmitted OPTIONS (which you mention as a particular problem). OPTIONS messages don't form dialogs, so looking at CSeq won't work. In particular, if the UAS sends out five OPTIONS requests which aren't just retransmissions, you'll get five OPTIONS requests (and five non-INVITE server transactions).
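For illustration, here is a minimal sketch of that RFC 3261 section 17.2.3 matching rule. It's written in TypeScript for neutrality, against a hypothetical parsed-message shape; it is not any particular stack's API, just the shape of the comparison:

interface ParsedSipMessage {
  method: string;        // CSeq method (for responses) or request method
  topViaBranch: string;  // branch parameter of the topmost Via
  topViaSentBy: string;  // sent-by (host[:port]) of the topmost Via
}

// A request matches an existing server transaction if the topmost Via's
// branch and sent-by are the same and the method matches (with ACK
// matching the INVITE that created the transaction).
function matchesTransaction(req: ParsedSipMessage, txn: ParsedSipMessage): boolean {
  if (!req.topViaBranch.startsWith("z9hG4bK")) {
    return false; // pre-RFC 3261 stack: fall back to the older matching rules
  }
  const methodMatches =
    req.method === txn.method ||
    (req.method === "ACK" && txn.method === "INVITE");
  return (
    req.topViaBranch === txn.topViaBranch &&
    req.topViaSentBy === txn.topViaSentBy &&
    methodMatches
  );
}

A retransmission matches an existing transaction and gets swallowed by its FSM; a genuinely new request (a fresh branch) creates a new transaction and is handed up.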
Retransmitted provisional responses to a non-INVITE transaction are passed up to the Transaction-User layer, or core as it's sometimes called, but other than the first one, final responses are not. (Again, you get this simply by implementing the FSM for that transaction - a final response puts a UAC non-INVITE transaction in the Completed state, which drops any further responses.)
After that, the Transaction-User layer will typically receive multiple responses for INVITE transactions.
It's perfectly normal for a UAS to send multiple 183s, at least for an INVITE. For instance it might immediately send a 100 to quench your retransmissions (over unreliable transports at least), then a few 183s, a 180, maybe some more 183s, and finally a 200 (or more, for unreliable transports).
It's important that the transaction layer hands up all these responses because proxies and user agents handle the responses differently.
At this level the responses aren't, in a way, retransmitted. I should say: a UAS doesn't use retransmission logic to send loads of provisional responses (unless it implements RFC 3262). 200 OKs to INVITEs are resent because a 2xx destroys the UAC transaction, leaving their handling to the core. You can stop their retransmission by sending your ACKs promptly.
I think that a message is a duplicate/identical if its ...
CSeq
Call-ID
and method name (e.g. "INVITE")
... values match those of another message.
Note that a response message has the same CSeq as the request to which it's responding; and that, for a single request, you may get several provisional but non-duplicate responses (e.g. RINGING followed by OK).
Many libraries include Expect: 100-continue on all HTTP 1.1 POST and PUT requests by default.
I intend to reduce perceived latency by removing the 100-continue mechanism on the client side for those requests where I know the expense of sending the data right away is less than waiting a round trip for 100-continue, namely on short requests.
Of course I still want all the other great features of HTTP 1.1, so I only want to kill the Expect: 100-continue header. I have two options:
remove the Expect header entirely, or
send an empty Expect header, i.e. Expect:\r\n
Is there ever any difference between the two?
Any software that might break for one or the other?
Nothing should break if you remove the Expect header, but I know that Microsoft IIS has had issues with 100 Continue in the past. For example, IIS5 always sends 100 continue responses. So, I wonder if at least some of the uses of it in libraries might be to work around similarly broken behaviour in servers.
Many libraries seem to set this header and then not actually handle 100 Continue properly - e.g. they begin to send the request body immediately without waiting for a 100 Continue and then don't handle the fact that the server might send back any HTTP error code before they've finished sending the request body (the first part's OK, it's the second part which is broken - see later in my answer). This leads me to believe that some authors have just copied it from elsewhere without fully understanding the subtleties.
I can't see any reason to include a blank Expect header - if you're not going to include 100-continue (or some other Expect clause) then omit the header entirely. The only reason to include it would be to work around broken webservers, but I'm not aware of any which behave in this way.
Finally, if you're just looking to reduce round-trip latencies, it seems to me that it wouldn't actually be inconsistent with the RFC to simply begin transmitting the request body immediately. You're not supposed to wait indefinitely to send the request body (as per the RFC), so you're behaving to the spec - it's just that your timeout before sending anyway is zero.
You must be aware that servers are at liberty to not send the 100 Continue response if they've already received some of the request body, so you have to handle servers which send 100 Continue, those which send nothing and wait for the full request and those which immediately send any HTTP error code (which may be 417, but more likely a generic 4xx code). In this way, your short requests shouldn't have any overhead (aside from the Expect header) but you won't have to wait for the 100 Continue. Of course, for this approach to work you'll need to be doing things in a way which lets you interrupt the request as soon as the server returns an error code (e.g. non-blocking IO with poll() or select()).
Doing things this way might help keep your code more consistent between small and large requests while reducing the latency. The downside is that it's perhaps not what the RFC authors had in mind, even if it doesn't explicitly violate any of the requirements. Also, it might make your later code more complicated if you're not already doing non-blocking IO or similar.
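As a rough sketch of that approach (Node-style TypeScript using the built-in net module; the host, path, and payload are placeholders): write the headers and body immediately with no Expect header, but keep watching the socket so an early 4xx can interrupt the upload.

import * as net from "net";

const body = "name=value"; // placeholder payload

const socket = net.connect(80, "example.com", () => {
  socket.write(
    "POST /upload HTTP/1.1\r\n" +
    "Host: example.com\r\n" +
    "Content-Type: application/x-www-form-urlencoded\r\n" +
    `Content-Length: ${Buffer.byteLength(body)}\r\n` +
    "Connection: close\r\n" +
    "\r\n"
  );
  // No Expect header: start sending the body right away (a zero timeout).
  socket.write(body);
});

// If the server answers before we've finished writing (e.g. 413 or 417),
// stop the upload instead of blindly pushing the rest of the body.
socket.on("data", (chunk) => {
  const status = chunk.toString().split(" ")[1];
  if (status && status.startsWith("4")) {
    socket.destroy(); // abort the request early
  }
});

For large bodies you'd write in chunks and check between writes; the point is only that the send loop and the response listener run concurrently, so an early error code can cut the request short.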
I just read that some browsers would prevent HTTP polling (I guess by limiting the rate of requests)...
From https://github.com/sstrigler/JSJaC:
Note: As security restrictions of most modern browsers prevent HTTP
Polling from being usable anymore this module is disabled by default
now. If you want to compile it in use 'make polling'.
This could explain some misbehavior of some of my JavaScripts (sometimes requests are just not sent or are retried, even though they were actually successful). But I couldn't find further information on the details.
Questions
If it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
Is there any good resource for this?
Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
Thanks for your help...
Stefan
Yes, as far as I am aware there is a default pool limit of 10 and a default request timeout of 30 seconds per request; however, the timeout and poll limits can be controlled, and different browsers implement different limitations!
Check out this Google implementation.
And this is an awesome implementation of catching a timeout error!
You can find the Firefox specifics HERE!
Internet Explorer specifics are controlled from inside the Windows registry.
Also have a look at this question.
Basically, the way you control this is not by changing the browser limitations, but by abiding by them. So you apply a technique called throttling.
Think of it as creating a FIFO/priority queue of functions. A queue struct that takes Xhr requests as members and enforces a delay between them is an Xhr poll. For instance, I am using JSONP to get data from a node.js server located on another domain, and I am polling, of course, due to browser limitations; otherwise I get zero response back from the server, and that is only because of browser limitations.
I am actually doing a console log for every request that's supposed to be sent, but not all of them are being logged. So the browser limits them.
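A minimal version of such a throttling queue might look like this (TypeScript; sendRequest is a placeholder for whatever actually fires the Xhr/JSONP call):

class ThrottleQueue {
  private queue: Array<() => void> = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private delayMs: number) {}

  push(task: () => void): void {
    this.queue.push(task);
    if (this.timer === null) this.drain(); // idle: start working immediately
  }

  private drain(): void {
    const task = this.queue.shift();
    if (!task) {
      this.timer = null; // queue empty: go idle
      return;
    }
    task(); // fire one request...
    this.timer = setTimeout(() => this.drain(), this.delayMs); // ...then wait
  }
}

// Usage (sendRequest is hypothetical):
// const q = new ThrottleQueue(250);
// q.push(() => sendRequest(pageNumber));

Requests are spaced delayMs apart no matter how quickly callers push them, which keeps you under the browser's simultaneous-connection ceiling.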
I'll be even more specific in helping you out. I have a page on my website which is supposed to render a view for tens or even hundreds of articles. You go through them using a cool horizontal slider.
The current value of the slider matches the current 'page'. Since I am only displaying 5 articles per page, and I can't exactly load thousands of articles 'onload' without severe performance implications, I load only the articles for the current page. I get them from MongoDB by sending a cross-domain request to a Python script.
The script is supposed to return an array of five objects with all the details I need to build the DOM elements for a 'page'. However, there are a couple of issues.
First, the slider works extremely fast, as it's more or less a value change. Even with drag-and-drop functionality, key-down events etc., the actual change takes milliseconds. However, the code of the slider looks something like this:
goog.events.listen(slider, goog.events.EventType.CHANGE, function() {
    myProject.Articles.page(slider.getValue());
});
The slider.getValue() method returns an int with the current page number, so basically I have to load from:
currentPage * articlesPerPage to (currentPage + 1) * articlesPerPage - 1
But in order to load, I do something like this:
I have a storage engine (think of it as an array):
I check whether the content is already there.
If it is, there is no point in making another request, so I go forward with getting the DOM elements from the array, with the already-created DOM elements in place.
If it isn't, then I need to fetch it, so I send the request I was mentioning, which would look something like this (without accounting for browser limitations):
JSONP.send({'action': 'getMeSomeArticles', 'start': start, 'length': itemsPerPage}, function(response) {
    // now I just parse the response quickly to make sure it is consistent,
    // create the DOM elements, populate the client-side storage,
    // and update the view for the user.
});
The problem comes from the speed with which you can change that slider. Since every change supposedly triggers a request (the same would happen for normal Xhr requests), you quickly cross the limitations of all browsers, so without throttling there would be no 'callback' for most of the requests. ('callback' is the JS code returned by the JSONP request, which is more of a remote script inclusion than anything else.)
So what I do is push a request onto a priority queue rather than poll, as now I don't need to send multiple simultaneous requests. If the queue is empty, the recently added member is executed and everyone is happy. If it's not, then all non-completed requests in progress are cancelled and only the last one is executed.
Now in my particular case, I do a binary search (O(log n)) to see whether the storage engine has data for the previous requests yet, which tells me whether the previous request has been completed. If it has, it's removed from the queue and the current one is processed; otherwise the new one fires. And so on and so forth. A simplified sketch of this latest-wins idea follows.
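Here is a simplified latest-wins variant of that scheduler (TypeScript; load is a placeholder for the actual JSONP call, and intermediate pages are skipped rather than cancelled, which has the same user-visible effect):

let pending: number | null = null; // the page whose request is in flight
let latest: number | null = null;  // the most recently requested page

function requestPage(page: number, load: (p: number, done: () => void) => void): void {
  latest = page;
  if (pending !== null) return; // a request is in flight; remember only the last
  pending = page;
  load(page, () => {
    pending = null;
    if (latest !== null && latest !== page) {
      requestPage(latest, load); // older intermediate pages are simply skipped
    }
  });
}

However fast the slider moves, at most one request is in flight and only the page the user actually landed on gets fetched.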
Again, for speed considerations and shit browser wanna-bes such as Internet Explorer, I do the above-described procedure about 3-4 steps ahead. So I pre-load 20 pages ahead until everything is in the client-side storage engine. This way, every limitation is successfully dealt with.
The cooldown time is covered by the minimum time it would take to slide through 20 pages, and the throttling makes sure there is no more than one active request at any given time (with backwards compatibility going as far back as Internet Explorer 5).
The reason I wrote all this is to give you an example, and to say that you cannot always enforce the delay directly from the FIFO structure, as your calls may need to turn into something the user sees, and you don't exactly want to make a user wait 10-15 seconds for a single page to render.
Also, always minimize the polling and the need to poll (simultaneously fired Ajax events, as not all browsers actually do good things with them). For instance, instead of sending one request to get content and another for that content to be tracked as viewed in your app metrics, do as many tasks at the server level as you possibly can!
Of course, you probably want to track your errors properly, so the Xhr objects in your library of choice implement error handling for Ajax, and because you are an awesome developer you want to make use of it.
So say you have a try-catch block in place.
The scenario is this:
An Ajax call has finished and is supposed to return JSON, but the call somehow failed. You nevertheless try to parse the JSON and do whatever you need to do with it.
so
function onAjaxSuccess(ajaxResponse) {
    try {
        var yourObj = JSON.parse(ajaxResponse);
    } catch (err) {
        // Now I've actually seen this on a number of occasions: to log that an
        // error occurred, a lot of developers will attempt to send yet another
        // Ajax request to log the failure of the previous one.
        // For these reasons, workers exist.
        myProject.worker.message('preferably a pre-determined error code should go here');
        // Then only the worker should again throttle and poll the Ajax requests
        // that log the specific error.
    }
}
While I have seen various implementations that try to fire as many Xhr requests at the same time as they possibly can until they encounter browser limitations, and then do quite a good job of stalling the ones that haven't fired while waiting for the browser 'cooldown', what I can advise you is to think about the following:
How important is speed for your app?
Just how scalable, and how I/O-intensive, will it be?
If the answer to the first one is 'very' and to the latter 'OMFG modern technology', then try to optimize your code and architecture as much as you can so that you never need to send 10 simultaneous Xhr requests. Also, for large-scale apps, multi-thread your processes. The JavaScript way to accomplish that is by using workers. Or you could call the ECMA board, tell them to make this a default, and then post it here so that the rest of us JS devs can enjoy native multi-threading in JS :) (how dafuq did they not think about this?!?!)
Stefan, quick answers below:
-if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
This sounds more like a server restriction. The browser ones usually sound like:
-"the maximum requests for the same hostname is x"
-"the maximum connections for ANY hostname is y"
-Is there any good resource for this?
http://www.browserscope.org/?category=network (also hover over table headers to see what is measured)
http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections
-Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
You could look at the HTTP headers for "Connection: close" to detect server restrictions, but I am not aware of any way in JavaScript to read such settings across browsers in a consistent, browser-independent way. (For Firefox, you could read this: http://support.mozilla.org/en-US/questions/746848)
Hope this quick answer helps?
No, the browser does not in any way restrict polling itself. I think what was meant on that page is the same-origin policy - you can only access the same host and port as your original page.
The only known limitation on connections themselves is that you can usually have only two to four simultaneous connections to the same host.
I've written some apps with long polling, some with a C++ backend with my own webserver, and one with a PHP backend with Apache2.
My long-poll timeout is 4-10 s. When something occurs, or 4-10 s pass, my server returns an empty response, and the client immediately starts another AJAX request. I found that some browsers hang when I start the next AJAX call from the previous AJAX handler, so I use setTimeout() with a small value to start the next AJAX request instead.
When something happens on the client side which should be sent to the server, I use another AJAX request for it, but it's a one-way thing: the server does not send any response, and the client does not process anything. The result of the operation (if any) will be received on the long poll. This requires at most 2 connections to the server, which all browsers support.
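The client side of that loop might be sketched like this (TypeScript with fetch; the URL and the event handler are placeholders), with setTimeout breaking the call chain as described:

async function longPoll(url: string): Promise<void> {
  try {
    const res = await fetch(url);  // the server holds this open for 4-10 s
    const text = await res.text(); // empty body means "nothing happened"
    if (text.length > 0) {
      handleServerEvent(text);     // placeholder for your update handler
    }
  } catch {
    // network hiccup: fall through and reconnect below
  }
  // Don't start the next request directly from the response handler; some
  // browsers misbehave, so schedule it with a small timeout instead.
  setTimeout(() => longPoll(url), 10);
}

function handleServerEvent(payload: string): void {
  // placeholder: apply the update to the UI
}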
Keep in mind that if there are 500 clients, that means 500 server-side webserver threads, which will all move together, causing load peaks: when something happens, the server has to report it to every client at the same time, the clients will take roughly the same time to process it, they will start their next long request at nearly the same time, and from then on the timeouts will also expire at the same time, and so on. You can play tricks with a random timeout, say 4 + rnd(0..4), but it's worthless: if anything reportable happens, they will "sync" up again, since all the requests have to be served at the same time.
I've tested it through a router, and it works. I assume routers respect the 4-10 s lag; it's around the speed of a slow webpage (far, far away), which no router thinks should be cancelled.
My PHP work is a collaborative spreadsheet; it looks amazing when you hit Enter and the content updates simultaneously in several browsers. Have fun!
There is no limit on the number of Ajax requests; they are, however, restricted to the same host and port.
A server can limit the number of requests from a machine based on its settings.
For example, a server can be configured so that if there are more than a few requests from the same machine within a specified time, it rejects them.
After a small mistake in JavaScript code, a never-ending loop was created in which each step made 2 Ajax requests. In Firebug I could see more and more requests, until Firefox started to slow down, stopped responding, and finally crashed.
So, yes, there is a "limit" ;)
Is there a RESTful way to determine whether a POST (or any other non-idempotent verb) will succeed? This would seem to be useful in cases where you essentially need to do multiple idempotent requests against different services, any of which might fail. It would be nice if these requests could be done in a "transaction" (i.e. with support for rollback), but since this is impossible, an alternative is to check whether each of the requests will succeed before actually performing them.
For example suppose I'm building an ecommerce system that allows people to buy t-shirts with custom text printed on them, and this system requires integrating with two different services: a t-shirt printing service, and a payment service. Each of these has a RESTful API, and either might fail. (e.g. the printing company might refuse to print certain words on a t-shirt, say, and the bank might complain if the credit card has expired.) Is there any way to speculatively perform these two requests, so my system will only proceed with them if both requests appear valid?
If not, can this problem be solved in a different way? Creating a resource via a POST with status = pending, and changing this to status = complete if all requests succeed? (DELETE is more tricky...)
HTTP defines the 202 status code for exactly your scenario:
202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
Source: HTTP 1.1 Status Code Definition
This is similar to 201 Created, except that you are indicating that the request has not been completed and the entity has not yet been created. Your response would contain a URL to the resource representing the "order request", so clients can check the status of the order through this URL.
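A client consuming such an API might look roughly like this (TypeScript with fetch; the URL and the JSON shape of the status resource are assumptions for illustration, not a defined API):

async function placeOrder(order: unknown): Promise<unknown> {
  const res = await fetch("https://api.example.com/orders", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(order),
  });
  if (res.status !== 202) throw new Error(`unexpected status ${res.status}`);

  // 202 Accepted: the Location header points at the order-request resource.
  const statusUrl = res.headers.get("Location")!;
  for (;;) {
    const poll = await fetch(statusUrl);
    const state = await poll.json(); // assumed shape: { status: "pending" | "complete" | "rejected" }
    if (state.status === "complete") return state;
    if (state.status === "rejected") throw new Error("order rejected");
    await new Promise((r) => setTimeout(r, 2000)); // back off before re-polling
  }
}

The "rejected" state is where a downstream failure (the printer refusing the text, the card expiring) surfaces, after the fact, rather than being foreseen up front.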
To answer your question more directly: There is no way to "test" whether a request will succeed before you make it, because you're asking for clairvoyance.
It's not possible to foresee the range of technical problems that could occur when you attempt to make a request in the future. The network may be unavailable, the server may not be able to access its database or external systems it depends on for functioning, there may be a power-cut and the server is offline, a stray neutrino could wander into your memory and bump a 0 to a 1 causing a catastrophic kernel fault.
In order to consume a remote service you need to account for possible failures of any request in isolation of any other processes.
For your specific problem, if the services have no transactional safety, you can't bake any in there and you have to deal with this in a more real-world way. A few options off the top of my head:
Get the T-Shirt company to give you a "test" mechanism, so you can see whether they'll process any given order without actually placing it. It could be that placing an order with them is a two-phase operation, where you construct the order in the first phase (at which time they validate its creation) and then you subsequently ask the order to be processed (after you have taken payment successfully).
Take the credit-card payment first and move your order into a "paid" state. Then attempt to fulfil the order with the T-Shirt service as an asynchronous process. If fulfilment fails and you can identify that the customer tried to get something printed the company is not prepared to produce, you will have to contact them to change their order or produce a refund.
Most organizations will adopt the second approach, due to its technical simplicity and reduced risk to the business. It also has the benefit of being able to cope with the T-Shirt service not being available; the asynchronous process simply waits until the service is available and completes the order at that time.
Exactly. That can be done as you suggest in your last sentence. The idea would be to decouple resource creation (which will always work, barring network failures) from the "order acceptance" it represents: the resource stands for an "ongoing request" that can be decided later. Since the POST returns a "Location" header, you can then retrieve the "status" of your request at any moment.
At some point it may become either accepted or rejected. This may be instantaneous or it may take some time, so you have to design your service around these restrictions (i.e. allowing the client to check whether his/her order is accepted, or running some kind of hourly/daily service that collects accepted requests).
I'm developing a chat application and I've come across the following thought: should I use multiple long-polling requests to my server, each one handling a different thing - for example, one checking for messages, one for 'is typing', one for managing the contacts list's 'is online/offline' status, etc. - or would it be better to handle it all through one channel?
My opinion is that you’d be better off with one connection, and sending JSON messages back and forth, e.g.:
User joined:
{"user_add": "st3"}
User left:
{"user_left": "sneeu"}
Message received
{"message": "Good morning!", "from": "st3"}
And these could be sent together in an array, so that users could receive everything since their last response.
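For example, a single poll response carrying everything since the client's last request:

[{"user_add": "st3"}, {"message": "Good morning!", "from": "st3"}, {"user_left": "sneeu"}]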
Polling is going to be your biggest bandwidth/resource hog, so keep it to a minimum. For example, issue HEAD requests with appropriate Date / If-Modified-Since headers to allow caching to work sensibly, with the server returning just headers containing the date and time of the last change to any of the properties you're interested in (or perhaps something even more minimal than that), and only issue a full GET if the returned headers imply there is new information.
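A sketch of that conditional-polling idea (TypeScript with fetch; the URL is a placeholder, and this assumes the server answers conditional requests with 304):

let lastModified: string | null = null;

async function checkForChanges(url: string): Promise<void> {
  // Cheap probe: headers only, and only ask about changes since the last poll.
  const head = await fetch(url, {
    method: "HEAD",
    headers: lastModified ? { "If-Modified-Since": lastModified } : {},
  });
  if (head.status === 304) return; // nothing new; skip the full GET entirely
  lastModified = head.headers.get("Last-Modified");
  const full = await fetch(url);   // only now fetch the real payload
  // ... handle await full.json() here
}

Each idle poll then costs one header-only round trip instead of a full response body.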
I have a set of resources whose representations are lazily created. The computation to construct these representations can take anywhere from a few milliseconds to a few hours, depending on server load, the specific resource, and the phase of the moon.
The first GET request received for the resource starts the computation on the server. If the computation completes within a few seconds, the computed representation is returned. Otherwise, a 202 "Accepted" status code is returned, and the client must poll the resource until the final representation is available.
The reason for this behavior is the following: If a result is available within a few seconds, it needs to be retrieved as soon as possible; otherwise, when it becomes available is not important.
Due to limited memory and the sheer volume of requests, neither NIO nor long polling is an option (i.e. I can't keep nearly enough connections open, nor can I even fit all of the requests in memory; once "a few seconds" have passed, I persist the excess requests). Likewise, client limitations are such that they cannot handle a completion callback instead. Finally, note that I'm not interested in creating a "factory" resource that one POSTs to, as the extra round trips mean we fail the piecewise-realtime constraint more than is desired (moreover, it's extra complexity; also, this is a resource that would benefit from caching).
I imagine there is some controversy over returning a 202 "Accepted" status code in response to a GET request, seeing as I've never seen it in practice, and its most intuitive use is in response to unsafe methods, but I've never found anything specifically discouraging it. Moreover, am I not preserving both safety and idempotency?
So, what do folks think about this approach?
EDIT: I should mention this is for a so-called business web API--not for browsers.
If it's for a well-defined and -documented API, 202 sounds exactly right for what's happening.
If it's for the public Internet, I would be too worried about client compatibility. I've seen so many if (status == 200) hard-coded.... In that case, I would return a 200.
Also, the RFC makes no indication that using 202 for a GET request is wrong, while it makes clear distinctions in other code descriptions (e.g. 200).
The request has been accepted for processing, but the processing has not been completed.
We did this for a recent application: a client (a custom application, not a browser) POSTed a query and the server would return 202 with a URI to the "job" being posted; the client would use that URI to poll for the result. This seems to fit nicely with what was being done.
The most important thing here is anyway to document how your service/API works, and what a response of 202 means.
From what I can recall - GET is supposed to return a resource without modifying the server. Maybe activity will be logged or what have you, but the request should be rerunnable with the same result.
POST on the other hand is a request to change the state of something on the server. Insert a record, delete a record, run a job, something like that. 202 would be appropriate for a POST that returned but isn't finished, but not really a GET request.
It's all very puritan and not well practiced in the wild, so you're probably safe by returning 202. GET should return 200. POST can return 200 if it finished or 202 if it's not done.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
In case of a resource that is supposed to have a representation of an entity that is clearly specified by an ID (as opposed to a "factory" resource, as described in the question), I recommend staying with the GET method and, in a situation when the entity/representation is not available because of lazy-creation or any other temporary situation, use the 503 Service Unavailable response code that is more appropriate and was actually designed for situations like this one.
Reasoning for this can be found in the RFCs for HTTP itself (please verify the description of the 503 response code), as well as on numerous other resources.
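A sketch of what that would look like server-side (Node-style TypeScript; representationReady, loadRepresentation, and the 10-second hint are placeholders standing in for the real lazy-creation machinery):

import * as http from "http";

http.createServer((req, res) => {
  if (!representationReady(req.url ?? "")) {
    // Not computed yet: signal a temporary condition and hint when to retry.
    res.writeHead(503, { "Retry-After": "10" });
    res.end("Representation is being generated; retry later.");
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(loadRepresentation(req.url ?? ""));
}).listen(8080);

// Placeholders for the real lazy-creation logic:
function representationReady(path: string): boolean { return false; }
function loadRepresentation(path: string): string { return "{}"; }

The Retry-After header is the advantage of 503 over 202 here: it gives well-behaved clients an explicit polling interval instead of leaving them to guess.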
Please compare with HTTP status code for temporarily unavailable pages. Although that question is about a different use case, it actually relates to the exact same feature of HTTP.