IcmpSendEcho2 documentation says:
The ReplyBuffer contains the ICMP echo responses, if any.
For ICMP, if only one request is sent, wouldn't only zero or one response be expected?
If that's the case, does that mean IcmpSendEcho2 can send multiple requests since it can receive multiple responses? And if so, is there any way to find out how many requests were sent?
The goal of all this is to compute a packet loss percentage, which requires knowing how many requests were sent.
IcmpSendEcho2() sends one request, but that single request can generate multiple responses. The output is an array of responses, so you have to make sure the reply buffer is large enough to receive all of them.
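For reference, the docs also give a sizing rule: the reply buffer should hold at least one ICMP_ECHO_REPLY structure, plus the request payload, plus 8 bytes for a possible ICMP error message, and the call's return value is the number of replies actually stored. A rough Windows-only sketch of this in Go, using the simpler synchronous IcmpSendEcho (the destination address and payload are placeholders, and the structs mirror ipexport.h):

```go
package main

import (
	"fmt"
	"unsafe"

	"golang.org/x/sys/windows"
)

// Mirrors IP_OPTION_INFORMATION from ipexport.h.
type ipOptionInformation struct {
	TTL         uint8
	TOS         uint8
	Flags       uint8
	OptionsSize uint8
	OptionsData *uint8
}

// Mirrors ICMP_ECHO_REPLY from ipexport.h.
type icmpEchoReply struct {
	Address       uint32
	Status        uint32
	RoundTripTime uint32
	DataSize      uint16
	Reserved      uint16
	Data          unsafe.Pointer
	Options       ipOptionInformation
}

func main() {
	iphlp := windows.NewLazySystemDLL("iphlpapi.dll")
	icmpCreateFile := iphlp.NewProc("IcmpCreateFile")
	icmpSendEcho := iphlp.NewProc("IcmpSendEcho")
	icmpCloseHandle := iphlp.NewProc("IcmpCloseHandle")

	handle, _, _ := icmpCreateFile.Call()
	if handle == uintptr(windows.InvalidHandle) {
		panic("IcmpCreateFile failed")
	}
	defer icmpCloseHandle.Call(handle)

	payload := []byte("ping payload")
	dest := uint32(0x08080808) // 8.8.8.8, a placeholder destination

	// Sizing rule from the docs: one ICMP_ECHO_REPLY, plus the echoed
	// payload, plus 8 bytes for a possible ICMP error message.
	replySize := uint32(unsafe.Sizeof(icmpEchoReply{})) + uint32(len(payload)) + 8
	reply := make([]byte, replySize)

	// The return value is the number of ICMP_ECHO_REPLY structures
	// stored in the reply buffer.
	n, _, _ := icmpSendEcho.Call(
		handle,
		uintptr(dest),
		uintptr(unsafe.Pointer(&payload[0])),
		uintptr(len(payload)),
		0, // no IP options
		uintptr(unsafe.Pointer(&reply[0])),
		uintptr(replySize),
		1000, // timeout in ms
	)
	fmt.Printf("replies received: %d\n", n)
}
```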
I'm looking for a proper way to have one goroutine send out request packets to specific servers while a second goroutine receives the responses and handles them, perhaps even spawning a new goroutine to handle each response.
The architecture of the game is that there are multiple masterservers, which can be asked for IP lists of registered servers.
After getting the IPs and ports from the masterservers, each of those addresses is sent a request for its data, such as server name, map, players, etc.
Also, are there better ways to handle this?
Currently I am creating a goroutine per request that also waits for the response afterwards.
The wait for a response times out after 35 ms, after which 1.2 times the previous number of request packets is sent as a small burst, and the timeout is doubled on every retry.
I'd like to know if there are better strategies that have proven to be more robust and have a lower latency, that are not too complex.
Edit:
I only create the client-side sockets. If there is no better approach, I would have the client send UDP request packets that carry a different socket's address as the sender, so that the answers arrive on that other socket, which acts somewhat like a server where all the response packets are collected. The point is to separate the sending socket from the receiving socket.
This question is tagged as client-server because one of the sockets is supposed to act like a server, even though all it does is receive the expected answers to request packets sent by the client socket.
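For what it's worth, the spoofed-sender workaround shouldn't be necessary: a single unconnected UDP socket can be shared by a sending goroutine and a receiving goroutine, since net.UDPConn is safe for concurrent use, and replies naturally arrive on the socket the requests left from. A minimal sketch (server addresses and payload are placeholders; the retry/backoff logic described above is omitted):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func handleResponse(addr *net.UDPAddr, data []byte) {
	fmt.Printf("%v replied with %d bytes\n", addr, len(data))
}

func main() {
	// One socket for both directions; the OS assigns the local port.
	conn, err := net.ListenUDP("udp", nil)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Receiver goroutine: collects every response arriving on the socket
	// the requests were sent from, and hands each off to its own goroutine.
	go func() {
		buf := make([]byte, 1500)
		for {
			n, addr, err := conn.ReadFromUDP(buf)
			if err != nil {
				return // socket closed
			}
			resp := make([]byte, n)
			copy(resp, buf[:n])
			go handleResponse(addr, resp)
		}
	}()

	// Sender: one request per server (placeholder addresses and payload).
	for _, s := range []string{"192.0.2.1:27015", "192.0.2.2:27015"} {
		addr, err := net.ResolveUDPAddr("udp", s)
		if err != nil {
			continue
		}
		conn.WriteToUDP([]byte("getinfo"), addr)
	}

	time.Sleep(2 * time.Second) // crude: give replies time to arrive
}
```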
I need to be able to validate TOS/DSCP marks on response data from a set of HTTP servers. Given a list of target URLs to test, is there a way in Go to generate the HTTP request and then examine the response packets' IP headers in order to obtain the TOS value?
My assumption at this point is that it may require creating a raw socket and then dynamically generating a TCP packet that contains the HTTP request payload. I've been searching around for libraries that would aid in this task, but haven't found anything specific yet.
Note: a simple TCP connection will not provide enough data - the target servers in question will alter TOS/DSCP marks dynamically based on the HTTP server name (so essentially, a single physical server will respond with different TOS marks depending on the vHost requested), so it is important to be able to verify the TOS on actual HTTP response packets, and not something simple like a ping. The TOS values in the TCP 3-way handshake cannot be trusted either - it must be a packet containing the HTTP data.
I did end up solving this problem using gopacket/pcap and net/http.
In a nutshell, what I ended up doing is writing a function that creates a channel and then starts a goroutine that does the actual packet capture and parsing. The goroutine passes the captured TOS value back over the channel; the original function performs the HTTP request and then reads the channel to get the TOS result. Still a bit of a work in progress, but so far this solution seems to be working fairly well.
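A rough sketch of that shape (not the exact code; the device name, port and URL are placeholders):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
	"github.com/google/gopacket/pcap"
)

// tosForURL captures the first data-bearing response packet from the
// server while an HTTP GET is in flight, and returns its IP TOS byte.
func tosForURL(device, url string, serverPort int) (uint8, error) {
	handle, err := pcap.OpenLive(device, 1600, false, pcap.BlockForever)
	if err != nil {
		return 0, err
	}
	defer handle.Close()

	// Only look at packets coming back from the server's port.
	if err := handle.SetBPFFilter(fmt.Sprintf("tcp and src port %d", serverPort)); err != nil {
		return 0, err
	}

	tosCh := make(chan uint8, 1)
	go func() {
		src := gopacket.NewPacketSource(handle, handle.LinkType())
		for pkt := range src.Packets() {
			ipLayer := pkt.Layer(layers.LayerTypeIPv4)
			tcpLayer := pkt.Layer(layers.LayerTypeTCP)
			if ipLayer == nil || tcpLayer == nil {
				continue
			}
			// Skip the handshake: we only trust packets carrying HTTP data.
			if len(tcpLayer.(*layers.TCP).Payload) == 0 {
				continue
			}
			tosCh <- ipLayer.(*layers.IPv4).TOS
			return
		}
	}()

	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()

	return <-tosCh, nil
}

func main() {
	tos, err := tosForURL("eth0", "http://example.com/", 80)
	if err != nil {
		panic(err)
	}
	fmt.Printf("TOS/DSCP on HTTP response data: 0x%02x\n", tos)
}
```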
Many libraries include Expect: 100-continue on all HTTP 1.1 POST and PUT requests by default.
I intend to reduce perceived latency by removing the 100-continue mechanism on the client side for requests where I know that sending the data right away costs less than waiting a round trip for the 100-continue, namely short requests.
Of course I still want all the other great features of HTTP 1.1; I only want to kill the Expect: 100-continue header. I have two options:
remove the Expect header entirely, or
send an empty Expect header, Expect:\r\n
Is there ever any difference between the two?
Any software that might break for one or the other?
Nothing should break if you remove the Expect header, but I know that Microsoft IIS has had issues with 100 Continue in the past; for example, IIS 5 always sends 100 Continue responses. So I wonder whether at least some of the uses of it in libraries might be there to work around similarly broken behaviour in servers.
Many libraries seem to set this header and then not actually handle 100 Continue properly - e.g. they begin to send the request body immediately without waiting for a 100 Continue, and then don't handle the fact that the server might send back an HTTP error code before they've finished sending the request body (the first part is OK; it's the second part that's broken - see later in my answer). This leads me to believe that some authors have just copied it from elsewhere without fully understanding the subtleties.
I can't see any reason to include a blank Expect header - if you're not going to include 100-continue (or some other Expect clause) then omit the header entirely. The only reason to include it would be to work around broken webservers, but I'm not aware of any which behave in this way.
Finally, if you're just looking to reduce round-trip latencies, it seems to me that it wouldn't actually be inconsistent with the RFC to simply begin transmitting the request body immediately. You're not supposed to wait indefinitely to send the request body (as per the RFC), so you're behaving to the spec; it's just that your timeout before sending anyway is zero.
You must be aware that servers are at liberty not to send the 100 Continue response if they've already received some of the request body, so you have to handle servers which send 100 Continue, those which send nothing and wait for the full request, and those which immediately send any HTTP error code (which may be 417, but is more likely a generic 4xx code). In this way, your short requests shouldn't have any overhead (aside from the Expect header), but you won't have to wait for the 100 Continue. Of course, for this approach to work you'll need to be doing things in a way which lets you interrupt the request as soon as the server returns an error code (e.g. non-blocking IO with poll() or select()).
Doing things this way might help keep your code more consistent between small and large requests while reducing the latency. The downside is that it's perhaps not what the RFC authors had in mind, even if it doesn't explicitly violate any of the requirements. Also, it might make your later code more complicated if you're not already doing non-blocking IO or similar.
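For what it's worth, Go's net/http exposes exactly this trade-off: the client only sends Expect: 100-continue if you set the header yourself, and Transport.ExpectContinueTimeout bounds how long it waits for the interim response before sending the body anyway. A small illustration (the URL is a placeholder):

```go
package main

import (
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Wait at most 500ms for "100 Continue" before sending the
			// body anyway; zero would send the body immediately.
			ExpectContinueTimeout: 500 * time.Millisecond,
		},
	}

	req, err := http.NewRequest("POST", "http://example.com/upload", strings.NewReader("short body"))
	if err != nil {
		panic(err)
	}
	// Opt in only for requests where the round trip is worth it;
	// for short requests, simply don't set the header at all:
	// req.Header.Set("Expect", "100-continue")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```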
I have two phones connected to a WiFi access point; both have IPs in the private range.
One of the phones runs an HTTP server and the other phone acts as a client. The client sends data to the server as name/value pairs in the URL query string of GET requests. At the moment the server only sends an HTTP OK on receiving the query string.
The client may not be stationary and may be moving around, so it may not always be in range of the WiFi access point; because of that, I am not receiving all the data sent from the client at the server end.
I want to ensure that all data sent is actually received by the server.
What kind of error correction should I implement? Can I check for some relevant HTTP error codes or the like?
If the HTTP server doesn't receive the entire query string in a GET request, then the HTTP request cannot possibly be valid as these parameters are on the first line of the request.
The server will be unable to handle the request and in this case will likely return status code 400 (Bad Request).
If your client receives this (though it seems unlikely that it would fail to transmit the request yet still receive the response), then you'll know to retransmit. In general, the properties of TCP connections, such as automatic retransmissions, checksums and timeouts, should be all you need for successful delivery, or to determine failure.
You need to check for timeouts on the client. That depends on the process/language used.
EDIT: http://wiki.forum.nokia.com/index.php/Using_Http_and_Https_in_Java_ME
Looks like you simply set a timeout and catch IO errors.
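For illustration, the same pattern in Go (the server address is a placeholder): set a client timeout, catch the error, and retry the idempotent GET:

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	url := "http://192.168.1.2:8080/?name=value" // placeholder server address

	for attempt := 0; attempt < 3; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			continue // timeout or IO error: retry the idempotent GET
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			break // the server received the full query string
		}
	}
}
```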
Premature optimization.
Connection integrity is already dealt with in the lower parts of the network stack, so if there were any dropouts in the middle of the request (assuming it spanned more than a single packet), the TCP stack would attempt to recover them before passing the data on to the server.
If you need to prove this to yourself, then just add a checksum as the last part of the query.
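If you do want that sanity check, a hypothetical sketch in Go: append a CRC32 of the encoded parameters as the final query parameter, and have the server recompute and compare it:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("name", "value")
	encoded := q.Encode()

	// Append a checksum of the parameters as the last part of the query.
	sum := crc32.ChecksumIEEE([]byte(encoded))
	full := fmt.Sprintf("http://192.168.1.2:8080/?%s&crc=%08x", encoded, sum)
	fmt.Println(full) // the server recomputes the CRC over "name=value"
}
```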
What are the strengths of GET over POST and vice versa when creating an ajax request? How do I know which I should use at any given time? Is it a security-minded decision?
Also, what is the difference in how they are actually sent?
GETs should be used for idempotent operations, that is, operations that can safely be repeated more than once without changing anything. Browsers will cache GET requests (for both normal and AJAX requests).
POSTs should generally be used for non-idempotent operations, like saving something, although you can use them for other operations if you want.
Data for GETs is sent in the URL query string; data for POSTs is sent separately, in the request body. Some browsers have a maximum URL length (Internet Explorer's is 2083 characters), and if the query string becomes too long you'll get an error.
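To make the difference concrete, here is the same name/value data sent both ways in Go (the URL is a placeholder): the GET carries it in the query string, while http.PostForm puts it in the request body with Content-Type application/x-www-form-urlencoded:

```go
package main

import (
	"net/http"
	"net/url"
)

func main() {
	data := url.Values{"name": {"value"}}

	// GET: the data rides in the URL's query string.
	http.Get("http://example.com/api?" + data.Encode())

	// POST: the data goes in the request body; http.PostForm sets
	// Content-Type: application/x-www-form-urlencoded automatically.
	http.PostForm("http://example.com/api", data)
}
```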
You should use GET and POST requests in AJAX calls just as you would use GET and POST requests in normal calls. Basic rule of thumb:
Will the request modify anything in your Model?

YES: The request will modify (add/update/delete) data in your data store, or in some other way change the state of the server (cause creation of a file, for example). Use POST.

NO: The request will not affect the state of anything on the server (database, file system, sessions, ...), but merely retrieve information. Use GET.
POST requests are requests that you do not want to accidentally happen. GET requests are requests you are OK with happening by a user pointing a browser to via a URL.
GET requests can be repeated quite simply, since their data is encoded in the URL itself.
You should think about AJAX requests like you think about regular form requests (and their GET and POST)
The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in browsers as a two-step process: sending the headers first, then sending the data. So it's best to use GET, which takes only one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K of data you might not be able to use GET.
http://developer.yahoo.com/performance/rules.html#ajax_get