I am trying to check the POP and SMTP values entered by the user. I wish to validate whether the POP and SMTP hosts the user entered (for example pop.gmail.com and smtp.gmail.com) are correct or wrong.
For that I am sending only one request to the server, with both the POP and SMTP values the user entered, which does two tasks:
1. Checks the user-entered POP host by making a connection to that server.
2. Checks the user-entered SMTP host by sending one mail to a dummy mail id.
I have finished all these tasks.
But now my requirement is that I have to show the user the result of each check as it completes. I mean, in the UI I have to show something like:
POP connection checked.. OK
SMTP connection checked.. OK
But I send only one request to the server to do both tasks, so I need to get an intermediate status from the server after each task finishes; only then can I update the client-side UI. I don't know whether it is possible to get intermediate responses from the server for a single request. Any ideas? If so, could you come up with a little bit of code?
Looking forward to your suggestions.
You should take a look at the long polling technique; it is possible to retrieve a partial response, but it doesn't work in all browsers.
You can use a HEAD request instead of GET or POST, which returns only the HTTP headers.
Slightly off topic - but sending a dummy mail can be "dangerous".
Many servers "note" if you try and send to a local address, which does not exist. For example - if the server's domain is "whatever.com" and you send to a random address, say aaa#whatever.com, and "aaa" is not a valid user, then the server notices this.
The server may then take an action like blocking you, as a sender, for a period of time. (This helps to reduce spam from dictionary attacks.) So your "test" ends up effectively blocking the real mail from being delivered.
The reverse is also true. Let's say you try to send to an external address, which you know is valid (your own email address for example) as the test. In this case the from address must be a valid internal address. If you use an invalid internal address, or worse an address which is not internal, it's likely the server will refuse to deliver the mail (at best) and at worst again institute a temporary block.
The key factor in both these situations is that although the SMTP protocol is very "loose", SMTP servers watch very closely for "bad behavior", because this is one way of distinguishing a spamming program. So any hint of "incorrect" behavior can lead to the server arbitrarily refusing to accept your mails (usually for a limited period of time).
Incidentally, back to your original question.
Both of your tests are pretty much instantaneous. Even if the email server is on the other side of the world you can do both checks within a couple of seconds. So chances are that even if you send back two packets, to the user they will appear to arrive together. And since one request from the browser can only receive one response from the server, you would need to send that single response in two parts.
i.e. do the first test - send the first part of the response - do the second test - send the second part of the response.
For a normal HTTP packet this is no big deal. Do some sort of flush / send after the first response is ready, and then again after the second response. The browser is used to displaying partial pages as they arrive.
However for an AJAX request you'll need to get into your framework at quite a low level. Most frameworks that I'm aware of require the incoming async response to be "complete" before they start to parse it. This is especially true if the response is formatted as, say, XML, where partial parsing is useless in pretty much all cases.
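For what it's worth, here is a minimal client-side sketch of reading such a two-part response. It assumes the server writes a plain-text line such as "POP ok" after the first check and "SMTP ok" after the second, flushing each time; the checkMailSettings URL, the updateStatusInUi function and the pop/smtp variables holding the user's input are made-up names. readyState 3 delivers partial responseText in most browsers, though older IE only exposes it at readyState 4.

var xhr = new XMLHttpRequest();
var seen = 0;   // number of status lines already shown in the UI

xhr.onreadystatechange = function() {
    // readyState 3 = receiving: responseText holds whatever has arrived so far
    if (xhr.readyState >= 3) {
        var lines = xhr.responseText.split("\n");
        // only lines ending in "\n" are complete until readyState 4
        var complete = (xhr.readyState == 4) ? lines.length : lines.length - 1;
        for (; seen < complete; seen++) {
            if (lines[seen]) {
                updateStatusInUi(lines[seen]);   // e.g. "POP connection checked.. OK"
            }
        }
    }
};
xhr.open("GET", "checkMailSettings?pop=" + encodeURIComponent(pop) +
                "&smtp=" + encodeURIComponent(smtp), true);
xhr.send();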
Related
Consider a service running on a server for customer c1. The customer times out after 'S' seconds, for whatever reason, and fires the same request again, so the server runs a duplicate query and gets overloaded. How do I resolve this glitch? Please help!
I assume you are on the server side and hence cannot control multiple requests coming in from the same client.
Every client should have an IP address associated with it. In your load balancer (if you have one) or in your server you need to keep an in-memory cache which keeps track of all requests, their IP addresses, the timestamp when the request originated and the timestamp when request processing finished. Next you define an appropriate time measure, which should be roughly the 70th-80th percentile of processing time for all your requests. Let's say X seconds.
Now, before you accept any request at your load balancer/server, you need to check in this in-memory cache whether the same IP has sent the same request and the time elapsed since the last request is less than X. If so, do not accept this request and instead send a custom error stating something like "previous request still under processing. Please try after some time".
In case the IP address is not enough to identify a client, as the same client may be sending requests to different endpoints on your server for different services, you need to store another identifier, which may be a kind of token/session identifier - such as c1 or the customer id in your case. Ideally, a customer can send only one request from one IP address to an endpoint at any one point in time. If you also have mobile and web interfaces, you can add the channel type (web/mobile/tablet) to the list of identifying parameters.
So now, a combination of customer id (c1), IP address, request URL, request time and channel type will always be unique for an incoming request. Using a key built from all these parameters in your cache to look up the information for a request, and validating whether to start processing it or to send a custom error message instead, should prevent the server from being overloaded with re-requests and solve the problem defined above.
Note - 'S' seconds, i.e. the client-side timeout, is not in our control; it should not concern the server side and has no bearing on the design detailed above.
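A rough sketch of the in-memory check described above, illustrated in plain JavaScript with hypothetical names (in practice this would sit in the load balancer or a server-side filter, and the map would also need periodic cleanup):

// In-memory map of requests currently being processed, keyed by the
// identifying parameters described above.
var inFlight = {};                 // key -> timestamp when processing started
var X_MILLIS = 30 * 1000;          // ~70th-80th percentile of processing time

function shouldAccept(customerId, ip, url, channel) {
    var key = customerId + "|" + ip + "|" + url + "|" + channel;
    var startedAt = inFlight[key];
    if (startedAt !== undefined && (Date.now() - startedAt) < X_MILLIS) {
        // Same request is still being processed: reject with a custom error.
        return false;
    }
    inFlight[key] = Date.now();    // record the new request
    return true;
}

function markFinished(customerId, ip, url, channel) {
    delete inFlight[customerId + "|" + ip + "|" + url + "|" + channel];
}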
I'm developing a chat application and I've come across the following thought: should I use multiple long polling requests to my server, each one handling a different thing - for example one checking for messages, one for 'is typing', one for managing the contacts list 'is online/offline', etc. - or would it be better to handle it all through one channel?
My opinion is that you’d be better off with one connection, and sending JSON messages back and forth, e.g.:
User joined:
{"user_add": "st3"}
User left:
{"user_left": "sneeu"}
Message received:
{"message": "Good morning!", "from": "st3"}
And these could be sent together in an array, so that users could receive everything since their last response.
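For example, a single response to a poll might bundle several events in one array (using the same field names as above):

[
  {"user_add": "st3"},
  {"message": "Good morning!", "from": "st3"},
  {"user_left": "sneeu"}
]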
Polling is going to be your biggest bandwidth/resource hog, so keep it to a minimum. For example, issue HEAD requests with appropriate Date / If-Modified-Since headers to allow caching to work sensibly, with the server returning just headers containing the date and time of the last change to any of the properties you're interested in (or perhaps something even more minimal than that), and only issue a full GET if the returned headers imply there is new information.
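As a rough sketch of that HEAD-then-GET approach (the /chat/state URL and fetchFullState function are made-up names, and it assumes the server sets a Last-Modified header and answers 304 when nothing has changed):

// Poll with HEAD and only fetch the full payload when something changed.
var lastModified = null;

function poll() {
    var head = new XMLHttpRequest();
    head.open("HEAD", "/chat/state", true);
    if (lastModified) {
        head.setRequestHeader("If-Modified-Since", lastModified);
    }
    head.onreadystatechange = function() {
        if (head.readyState == 4 && head.status == 200) {
            lastModified = head.getResponseHeader("Last-Modified");
            fetchFullState();   // only now issue the full GET
        }
        // status 304 means nothing changed; do nothing until the next poll
    };
    head.send();
}
setInterval(poll, 5000);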
I've written a SIP UAC, and I've tried a few ways to detect and ignore repeated incoming messages from the UAS, but with every approach I tried something went wrong. My problem is that all the messages that have to do with the same call have the same signature, and comparing the whole message text is too much, so I was wondering: which parameters of a message should I look at when trying to detect these repeated messages?
UPDATE:
I had a problem with incoming OPTIONS requests, which I handled by sending the server an empty OK response. (Update: after a while of testing I noticed that I still get another OPTIONS request every now and then, a few every few seconds, so I tried responding with a Bad Request, and now I only get the OPTIONS request once or twice per registration/re-registration.)
Currently I have repeated Session Progress messages, and different error responses such as Busy Here and Unavailable. I get so many of these and they mess up my log; I would like to filter them.
Any idea how to achieve that?
UPDATE:
I'll try your techniques before posting back; perhaps this will solve my problems.
Here is what I used, it works nicely:
private boolean compare(SIPMessage message1, SIPMessage message2) {
    // Only compare messages of the same kind (request vs. response).
    if (message1.getClass() != message2.getClass())
        return false;
    // Same CSeq number and method...
    if (message1.getCSeq().getSeqNumber() != message2.getCSeq().getSeqNumber())
        return false;
    if (!message1.getCSeq().getMethod().equals(message2.getCSeq().getMethod()))
        return false;
    // ...within the same call.
    if (!message1.getCallId().equals(message2.getCallId()))
        return false;
    // For responses, the status code must match as well.
    if (message1.getClass() == SIPResponse.class)
        if (((SIPResponse) message1).getStatusCode() != ((SIPResponse) message2).getStatusCode())
            return false;
    return true;
}
Thanks,
Adam.
It's a bit more complicated than ChrisW's answer.
First, the transaction layer filters out most retransmissions. It does this by, for most things, comparing the received message against a list of current transactions. If a transaction's found, that transaction will mostly swallow retransmissions as per the diagrams in RFC 3261, section 17. For instance, a UAC INVITE transaction in the Proceeding state will drop a delayed retransmitted INVITE.
Matching takes place in one of two ways, depending on the remote stack. If it's an RFC 3261 stack (the branch parameter on the topmost Via starts with "z9hG4bK") then things are fairly straightforward. Section 17.2.3 covers the full details.
Matching like this will filter out duplicate/retransmitted OPTIONS (which you mention as a particular problem). OPTIONS messages don't form dialogs, so looking at CSeq won't work. In particular, if the UAS sends out five OPTIONS requests which aren't just retransmissions, you'll get five OPTIONS requests (and five non-INVITE server transactions).
Retransmitted provisional responses to a non-INVITE transaction are passed up to the Transaction-User layer (or core, as it's sometimes called), but other than the first one, final responses are not. (Again, you get this simply by implementing the FSM for that transaction - a final response puts a UAC non-INVITE transaction in the Completed state, which drops any further responses.)
After that, the Transaction-User layer will typically receive multiple responses for INVITE transactions.
It's perfectly normal for a UAS to send multiple 183s, at least for an INVITE. For instance it might immediately send a 100 to quench your retransmissions (over unreliable transports at least), then a few 183s, a 180, maybe some more 183s, and finally a 200 (or more, for unreliable transports).
It's important that the transaction layer hands up all these responses because proxies and user agents handle the responses differently.
At this level the responses aren't, in a way, retransmitted. I should say: a UAS doesn't use retransmission logic to send loads of provisional responses (unless it implements RFC 3262). 200 OKs to INVITEs are resent because they destroy the UAC transaction. You can avoid their retransmission by sending your ACKs timeously.
I think that a message is duplicate/identical, if its ...
Cseq
Call-ID
and method name (e.g. "INVITE")
... values match that of another message.
Note that a response message has the same CSeq as the request to which it's responding; and that for a single request you may get several provisional but non-duplicate responses (e.g. RINGING followed by OK).
I am using Ajax to retrieve data from the server as shown below, based on some ID, to perform an auto-suggest function. However, when I submit the form and update the database, the auto-suggest field should no longer contain anything for this ID, but it still retrieves data from its cache. Does anyone know how to clear the cache and make Ajax send the request for the latest data from the server every time I press the button? Please help; I have really been stuck on this for weeks and couldn't find the solution.
For example: when the ID field is 00001, the auto-suggest field will be 1,2,3. After I submit the form and update the database, when I search for 00001 again it should not contain anything, but it does; it still shows the cached data 1,2,3 in the suggest field...
if (window.XMLHttpRequest) {
    // code for IE7+, Firefox, Chrome, Opera, Safari
    xmlhttp = new XMLHttpRequest();
} else {
    // code for IE6, IE5
    xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        var data = xmlhttp.responseText;
        alert(data);
    }
};
xmlhttp.open("GET", "gethint.php?q=" + str, true);
xmlhttp.send();
I had this problem once before. It is probably something you can fix in your server settings. What the server does is receive a request, build the answer, and when the same request comes in again it sends the same response it built before.
To easily avoid this problem, I added an extra request parameter (a UID).
so:
xmlhttp.open("GET","gethint.php?q="+str+**"?something"=RANDOMGUID**,true);
this way you always ha a unique request.
Works with IE8
xmlHttp.open("GET", URL, true);
xmlHttp.setRequestHeader("Cache-Control", "no-cache");
xmlHttp.setRequestHeader("Pragma", "no-cache");
xmlHttp.setRequestHeader("If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT");
You could use HTTP headers to prevent the response from being cached:
Cache-Control: no-cache
Expires: Mon, 24 Oct 2005 16:13:22 GMT
Another option is to add an extra parameter to the URL that varies every time (for example the current time in milliseconds); that way, as far as the browser is concerned you are asking for a different URL, so the cache won't be used.
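For example, adapting the request from the question (the parameter name _ts is arbitrary):

// Append a value that changes on every call so the browser sees a new URL
// and cannot serve the response from its cache.
xmlhttp.open("GET", "gethint.php?q=" + str + "&_ts=" + new Date().getTime(), true);
xmlhttp.send();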
The easiest thing to do is to use jQuery.ajax and disable caching.
jQuery will suffix a parameter ?somenumber to your Ajax call, which is sufficient to persuade the browser that it cannot use cached data.
I came across this once. Here's the answer I got: Why does jQuery.ajax() add a parameter to the url? .
You could do the same thing manually too, but you would have to check if the addition of the parameter is all there is to it.
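A minimal sketch with jQuery, reusing the gethint.php endpoint from the question; cache: false makes jQuery append the varying parameter for you:

// cache: false tells jQuery to add a "_=<timestamp>" parameter to the URL,
// so the browser never reuses a cached response.
$.ajax({
    url: "gethint.php",
    data: { q: str },
    cache: false,
    success: function(data) {
        alert(data);
    }
});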
No code provided but some guidance that can help you manage account state consistently across many potential tabs, history, devices, etc:
First, the more condensed version in one long paragraph:
If you want a consistent view for an account, regardless of history back/forward buttons, extra tabs or extra windows (even with different IPs/devices), you can use an increment counter and a heartbeat (heartbeat can be implemented as an XMLHttpRequest send() during a setInterval call configured to say 2 seconds). The increment is managed at the server. The client provides the counter value on each call to server. On each request, the server checks the counter value provided by the client with its own saved value. The server produces the next counter value, persists it, and returns that counter value in the reply so client can use it on its next call. If the client provided the expected counter value, it was a "good" request. If the value provided was different than what the server had stored, the client's call was a "bad" request. Honor good requests. Server may only partly honor bad requests. The client's "next" call could be the next heartbeat or any other request on that account. Multiple client views into that account can overlap but essentially one client only would get the next good call. All other clients would get bad next calls because their counter values will no longer match what the server has stored. If you use one account view, every call to server should be a good call once the session is initiated. [A session can last when browser javascript maintains the counter value, but unless you use cookies or the like, you cannot extend a session if the page is refreshed since javascript would be reinitialized. This means every first call to page would be a "bad" call.] If you use history back, some other tab, or some other device, you should be able to use it, but you will get a bad call at a minimum whenever you switch from one to the other. To limit these bad call cases, turn off heartbeat when that browser view is inactive. Note, don't even implement a heartbeat if you don't mind showing the user a possibly stale page for a prolonged time or if the particular page is unlikely to be stale (but this question assumes you can get stale data on user's browser view).
Let's add more detail:
Every request to the server from an existing open browser page provides the counter value. This can be, for example, during a form submit or during a JavaScript XMLHttpRequest .send().
Requests typed from url bar by the user may not have a counter value sent. This and logon can just be treated as having an incorrect count value. These would be examples of "bad" calls, and should be handled as gracefully as possible but should generally not be allowed to update the account if you want a consistent view.
Every request seeking to modify the account (a "writer") must have provided the anticipated counter value (which can be updated at the server other than as +1 if you have more elaborate needs but must be anticipated/unique for a next request). At the server end, if the counter value is the expected one, then process the request variables normally and allow write access. Then include in the reply to client the next legit value the server will expect for that variable (eg, cnt++) and persist that value on the server end (eg, update counter value in database or in some other server file) so that the server will know the next legit counter value to expect whenever the next request comes in for that account.
A request that is a simple "read" is processed the same way as a write request except that if it is a bad request (if the counter doesn't match), a read is more likely to be able to be safely processed.
All requests that provide a different counter value than expected ("bad" requests) still result in the updating of the counter at the server and still result in the client's reply getting the good next expected counter value; however, those bad requests should be ignored to the extent they ask to update the account. Bad requests could even result in more drastic action (such as logging user out).
Client JavaScript will update the counter on every server reply to the value the server returns, so that this updated counter value is sent back on the next call (e.g. on a heartbeat or any other request to the server). Every client request always gets a legitimate next value back, but only the client that uses it first will be treated as OK by the server.
The other clients (ie, any client request that doesn't provide the expected counter value) will instead be given a safe state, eg, the current state as per the database while ignoring any write/update requests. The server can treat the "bad" client calls in other more drastic ways, eg, by logging the user out or whatever, but primarily just make sure to honor at most the bad client's safe read requests, not updating the account in any way.
The heartbeat is necessary only if you want a clean view in short order. To keep it light on the server, the heartbeat can be a simple ping (sending the counter value along). If the client is acknowledged as the good client, you are done for that heartbeat. If it was a bad client, the server can return fresh state which the heartbeat code can use to update the GUI. The heartbeat can go to a different server page or the main one, but if it is different, make sure that page gets a consistent view of the server-side counter variable (e.g. use a database).
Another feature you may want to implement for an account is "active/inactive" status. The client would be inactive if the mouse position has not changed for a number of seconds or minutes (and no keys were typed or other user input happened during that time). The heartbeat can deactivate itself (clearInterval) when the client is inactive. On every user input, check if the heartbeat is stopped and if so restart it. A heartbeat restart also means the user is changing from inactive to active. Stopping the heartbeat conserves client/server resources when the user is browsing another tab or is afk. When the client becomes active again, you can do things like log the user out if they were inactive for a long time, etc., or just do nothing new except restart the heartbeat. [Remember, the reply to a heartbeat could indicate the heartbeat request was "bad", which might be a "drastic" reason to log the user out, as mentioned above.]
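To make the heartbeat part concrete, here is a minimal client-side sketch; the /heartbeat URL, the JSON fields, initialCounterFromServer and refreshUiFrom are assumptions for illustration, not part of the guidance above:

// Heartbeat that submits the current counter and adopts the server's next value.
var counter = initialCounterFromServer;   // delivered when the page was rendered
var heartbeatId = null;

function heartbeat() {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/heartbeat", true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status == 200) {
            var reply = JSON.parse(xhr.responseText);
            counter = reply.nextCounter;           // always adopt the new value
            if (reply.stale) {
                refreshUiFrom(reply.accountState); // this call was a "bad" one
            }
        }
    };
    xhr.send("counter=" + encodeURIComponent(counter));
}

function startHeartbeat() {
    if (heartbeatId === null) {
        heartbeatId = setInterval(heartbeat, 2000);   // every 2 seconds
    }
}

function stopHeartbeat() {   // call when the user goes inactive
    clearInterval(heartbeatId);
    heartbeatId = null;
}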
I know that an answer has been accepted, but it didn't work in my case. I already had the no-cache header added. In my case this was the solution that really worked, because if you put code after the request, it might not get executed in time for that second piece of code to run successfully:
x = new XMLHttpRequest();
x.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
        // execute the other code or function only when the page is really loaded!
    }
};
x.open("GET", "add your url here", true);
x.send();
I have two phones connected to a WiFi access point; both have IPs in the private range.
One of the phones has an HTTP server running on it and the other phone acts as a client. The client sends data to the server in GET requests, as name/value pairs in the URL query string. At the moment the server only sends an HTTP OK on receiving the query string.
The client may not be stationary and may be moving around, so it may not always be in range of the WiFi access point; because of that I am not receiving all the data sent from the client at the server end.
I want to ensure that all data sent is actually received by the server.
What kind of error correction should I implement? Can I check for some relevant HTTP error codes or the like?
If the HTTP server doesn't receive the entire query string in a GET request, then the HTTP request cannot possibly be valid as these parameters are on the first line of the request.
The server will be unable to handle the request and in this case will likely return status code 400 (Bad Request).
If your client receives this (though it seems unlikely that it would fail to transmit the request yet receive the response), then you'll know to retransmit. In general, the properties of TCP connections, such as automatic retransmissions, checksums and timeouts, should be all you need for successful delivery, or to determine failure.
You need to check for timeouts on the client. That depends on the process/language used.
EDIT: http://wiki.forum.nokia.com/index.php/Using_Http_and_Https_in_Java_ME
Looks like you simply set a timeout and catch IO errors.
Premature optimization.
Connection integrity is already dealt with in the lower parts of the network stack. So if there were any dropouts in the middle of the request (assuming it spanned more than a single packet) the TCP stack would attempt to recover them before passing the data on to the server.
If you need to prove this to yourself, then just add a checksum as the last part of the query.
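If you do want that extra check, here is a rough sketch in JavaScript for brevity (the endpoint, variable names and the toy checksum are illustrative only; the same idea carries over to Java ME on the phone):

// Append a simple checksum of the name/value pairs so the server can verify
// it received the query string intact (a toy checksum, not a real CRC).
function checksum(s) {
    var sum = 0;
    for (var i = 0; i < s.length; i++) {
        sum = (sum + s.charCodeAt(i)) % 65536;
    }
    return sum;
}

var query = "name1=" + encodeURIComponent(v1) + "&name2=" + encodeURIComponent(v2);
var url = "http://server/collect?" + query + "&sum=" + checksum(query);
// The server recomputes checksum(query) and rejects the request if it differs.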
C.