POST call with same headers and same request body gives different response? - bash

I was using a website (mysite1.com), and the screen I want to reach with bash and curl is three screens deep from the login screen. I am simulating the exact request the browser would have sent; by simulating I mean sending exactly the same headers (including Referer and Origin).
Here is what is happening:
1. I was able to get past the login screen and screen 2 by simulating the browser's behaviour.
2. Now I am stuck at screen 3. The POST call that goes to the mysite1 server is the same as the one the browser would send, every bit of it.
3. To make the POST call for the 3rd screen, I create a form on localhost with action="URLOf3rdScreenOnMysite1" and method=post, and before submitting I change the Referer, Origin and other headers using a browser extension. This generates the request that I mentioned in point 2. The first two calls, for screens 1 and 2, however, were made in bash.
4. There are no cookies used by mysite1. The session_id is passed as a GET query-string parameter. I assumed the server might be keeping track of the flow of URLs requested, but I got the error response even when I followed the same flow from bash.
5. The POST call for the 3rd screen returns a different response (an error response) when I try to simulate it, even though the flow of URLs requested is the same in both cases.
How can this be possible? How does the server know that these requests are different, the one from the browser and the one from bash plus last-screen-from-browser? Are there any other parameters involved besides headers + POST data + URLs requested? Maybe a different connection is established when calling the 3rd screen from the browser?

Related

View Result tree is not showing a correct request as in script

In the script we are using a PUT request, as shown in the image below.
But when we run it, it shows up as a GET request, as shown in the second image below.
Because of this, the request is failing with a 400 Bad Request response code.
I am using JMeter 5.5.
If you expand this sample result to see its sub-results, you will see your initial PUT request followed by at least one redirect, so the View Results Tree listener is "showing the correct result". The fact that your PUT request fails is instead connected with a problem in your script; most probably you're simply not authorized due to missing or improperly implemented correlation, or something like that.
So go back to the previous results and inspect their response data to ensure that your script is doing what it is supposed to be doing. The fact that a request is "green" indicates only that the HTTP status code is below 400; it doesn't necessarily mean that it was successful.

How to simulate a delayed web server response using Ruby?

To test iPhone and iPad fetching and caching images from an external web server, I'd like to make my own server delay for 0.5, 1, or 3 seconds before an image is returned, using a URL that looks like:
http://www.mysite.com/getImage.cgi?pic=pic001.png&delayWanted=3
Is there a simple way to do this?
Using Ruby, the two ways I was thinking of were: use CGI, set the HTTP headers to return the image/png content type, a no-cache header, an expiry time of one year ago, and the content size, then open the image file and output its data (though this would need to closely match how a standard web server returns the HTTP headers); or sleep first and then simply send an HTTP redirect to the real picture's URL, letting the web server handle the rest. Or is there a simpler way?
I don't know about Ruby, but if you can insert a Linux box that you are root on into the network path (or if the server you run Ruby on qualifies), there's "netem" for emulating lifelike network conditions: delay, packet loss, jitter...
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
I think the simplest way to do that is, as you say, using sleep, for example:
sleep(2.minutes)
Note that 2.minutes is Rails/ActiveSupport syntax; in plain Ruby, sleep takes a number of seconds, e.g. sleep(120). It takes only one line of code and delays your response.

how to clear the cache data when using ajax?

I am using Ajax to retrieve data from the server, as shown below, based on some ID, in order to perform an auto-suggest function. However, when I submit the form and update the database, the auto-suggest field should no longer contain anything for this ID, but it still retrieves the old data from its cache. Does anyone know how to clear the cache and make Ajax send a request to get the latest data from the server every time I press the button? Please help, I have been stuck on this for whole weeks and couldn't find a solution.
For example: when the ID field is 00001, the auto-suggest field will show 1,2,3. After I submit the form and update the database, when I search for 00001 again it should not contain anything, but it does; it still shows the cached data 1,2,3 in the suggest field...
if (window.XMLHttpRequest) {
    // code for IE7+, Firefox, Chrome, Opera, Safari
    xmlhttp = new XMLHttpRequest();
} else {
    // code for IE6, IE5
    xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        var data = xmlhttp.responseText;
        alert(data);
    }
};
xmlhttp.open("GET", "gethint.php?q=" + str, true);
xmlhttp.send();
I had this problem once before. This is probably something you can fix in your server settings. What the server does is get a server request, build the answer, and when the same request is done again it sends the same response it built before.
To easily avoid this problem, I added an extra request parameter (a UID).
so:
xmlhttp.open("GET","gethint.php?q="+str+**"?something"=RANDOMGUID**,true);
this way you always ha a unique request.
Works with IE8
xmlHttp.open("GET", URL, true);
xmlHttp.setRequestHeader("Cache-Control", "no-cache");
xmlHttp.setRequestHeader("Pragma", "no-cache");
xmlHttp.setRequestHeader("If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT");
You could use HTTP headers to prevent the response from being cached:
Cache-Control: no-cache
Expires: Mon, 24 Oct 2005 16:13:22 GMT
Another option is to add another parameter to the URL that varies every time (for example the current time in milliseconds); that way, as far as the browser is concerned you are requesting a different URL, and the cache won't be used.
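A minimal sketch of that cache-busting idea, reusing the question's gethint.php URL and str variable (the nocache parameter name is just an illustrative choice):
var xmlhttp = new XMLHttpRequest();
// Append a value that changes on every call so the browser sees a new URL
// each time and cannot serve the response from its cache.
var url = "gethint.php?q=" + encodeURIComponent(str) + "&nocache=" + new Date().getTime();
xmlhttp.onreadystatechange = function() {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        alert(xmlhttp.responseText);
    }
};
xmlhttp.open("GET", url, true);
xmlhttp.send();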
The easiest thing to do is to use jQuery.ajax and disable caching.
jQuery will append a parameter (_=<timestamp>) to your ajax call, which is sufficient to persuade the browser that it cannot use cached data.
I came across this once. Here's the answer I got: Why does jQuery.ajax() add a parameter to the url? .
You could do the same thing manually too, but you would have to check if the addition of the parameter is all there is to it.
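As a rough sketch of that approach (assuming jQuery is loaded on the page and str holds the text to look up, as in the question's code):
$.ajax({
    url: "gethint.php",
    data: { q: str },
    cache: false, // jQuery appends _=<timestamp>, so the browser never reuses a cached response
    success: function(data) {
        alert(data);
    }
});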
No code provided but some guidance that can help you manage account state consistently across many potential tabs, history, devices, etc:
First, the more condensed version in one long paragraph:
If you want a consistent view for an account, regardless of history back/forward buttons, extra tabs or extra windows (even with different IPs/devices), you can use an increment counter and a heartbeat (heartbeat can be implemented as an XMLHttpRequest send() during a setInterval call configured to say 2 seconds). The increment is managed at the server. The client provides the counter value on each call to server. On each request, the server checks the counter value provided by the client with its own saved value. The server produces the next counter value, persists it, and returns that counter value in the reply so client can use it on its next call. If the client provided the expected counter value, it was a "good" request. If the value provided was different than what the server had stored, the client's call was a "bad" request. Honor good requests. Server may only partly honor bad requests. The client's "next" call could be the next heartbeat or any other request on that account. Multiple client views into that account can overlap but essentially one client only would get the next good call. All other clients would get bad next calls because their counter values will no longer match what the server has stored. If you use one account view, every call to server should be a good call once the session is initiated. [A session can last when browser javascript maintains the counter value, but unless you use cookies or the like, you cannot extend a session if the page is refreshed since javascript would be reinitialized. This means every first call to page would be a "bad" call.] If you use history back, some other tab, or some other device, you should be able to use it, but you will get a bad call at a minimum whenever you switch from one to the other. To limit these bad call cases, turn off heartbeat when that browser view is inactive. Note, don't even implement a heartbeat if you don't mind showing the user a possibly stale page for a prolonged time or if the particular page is unlikely to be stale (but this question assumes you can get stale data on user's browser view).
Let's add more detail:
Every request to a server from an existing opened browser page provides the counter value. This can be, for example, during a form submit or during javascript XMLHttpRequest object .send().
Requests typed from url bar by the user may not have a counter value sent. This and logon can just be treated as having an incorrect count value. These would be examples of "bad" calls, and should be handled as gracefully as possible but should generally not be allowed to update the account if you want a consistent view.
Every request seeking to modify the account (a "writer") must have provided the anticipated counter value (which can be updated at the server other than as +1 if you have more elaborate needs but must be anticipated/unique for a next request). At the server end, if the counter value is the expected one, then process the request variables normally and allow write access. Then include in the reply to client the next legit value the server will expect for that variable (eg, cnt++) and persist that value on the server end (eg, update counter value in database or in some other server file) so that the server will know the next legit counter value to expect whenever the next request comes in for that account.
A request that is a simple "read" is processed the same way as a write request except that if it is a bad request (if the counter doesn't match), a read is more likely to be able to be safely processed.
All requests that provide a different counter value than expected ("bad" requests) still result in the updating of the counter at the server and still result in the client's reply getting the good next expected counter value; however, those bad requests should be ignored to the extent they ask to update the account. Bad requests could even result in more drastic action (such as logging user out).
Client javascript will update the value of counter upon every server reply to what the server returns so that this updated counter value is sent back on any next call (eg, on heartbeat or any talk to server). Every client request will always get a legit next value sent back but only the client that uses that first will be treated as ok by server.
The other clients (ie, any client request that doesn't provide the expected counter value) will instead be given a safe state, eg, the current state as per the database while ignoring any write/update requests. The server can treat the "bad" client calls in other more drastic ways, eg, by logging the user out or whatever, but primarily just make sure to honor at most the bad client's safe read requests, not updating the account in any way.
The heartbeat is necessary only if you want a clean view in short order. To make it light on server, you can have the heartbeat be a simple ping (sending the counter value along). If acknowledged as the good client, you can be done for that heartbeat. If you were a bad client however, then server can return say good fresh info which can be used by javascript in heartbeat code to update the GUI. The heartbeat can be to a different php server page or main one but if different make sure that page gets consistent view of server saved counter variable (eg, use a database).
Another feature you may want to implement for an account is "active/inactive" status. The client would be inactive if the mouse position has not changed for a number of seconds or minutes (and no keys have been typed or other user input received during that time). The heartbeat can deactivate itself (clearInterval) when the client is inactive. On every user input, check whether the heartbeat has stopped and, if so, restart it. A heartbeat restart also means the user is changing from inactive to active. Stopping the heartbeat conserves client/server resources when the user is browsing in another tab or is afk. When becoming active again, you can do things like log the user out if they were inactive for a long time, etc... or just do nothing new except restart the heartbeat. [Remember, the reply to a heartbeat could indicate the heartbeat request was "bad".. which might possibly be a "drastic" reason to log the user out, as mentioned above.]
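The answer above deliberately provides no code, so here is only a rough client-side sketch of the counter-plus-heartbeat idea. The /heartbeat endpoint, the JSON reply shape, and the initialCounterFromServer and refreshView names are assumptions made up for illustration:
// Counter value last issued by the server; it is sent back on every request.
var counter = window.initialCounterFromServer; // hypothetical value embedded in the page on load

function heartbeat() {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status == 200) {
            var reply = JSON.parse(xhr.responseText);
            // Always adopt the next expected counter value the server returns.
            counter = reply.nextCounter;
            if (!reply.good) {
                // This view made a "bad" call: repaint the GUI from the fresh state the server sent.
                refreshView(reply.freshState); // hypothetical UI update function
            }
        }
    };
    xhr.open("GET", "/heartbeat?counter=" + encodeURIComponent(counter), true);
    xhr.send();
}

// Ping the server every 2 seconds while this view is active;
// call clearInterval(heartbeatTimer) when the user goes inactive.
var heartbeatTimer = setInterval(heartbeat, 2000);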
I know that an answer has been accepted, but it didn't work in my case. I already had the no-cache header added. In my case, this was the solution that really worked, because if you simply put the follow-up code after the request, it might not get executed in time; the second piece of code can only run successfully once the response has really loaded:
x = new XMLHttpRequest();
x.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
        // execute the other code or function only when the page is really loaded!
    }
};
x.open("GET", "add your url here", true);
x.send();

GET vs. POST ajax requests: When and how to use either?

What are the strengths of GET over POST and vice versa when creating an ajax request? How do I know which I should use at any given time? Is it a security-minded decision?
Also, what is the difference in how they are actually sent?
GETs should be used for idempotent operations, that is, operations that can be safely repeated more than once without changing anything. Browsers will cache GET requests (for both normal and AJAX requests).
POSTs should generally be used for non-idempotent operations, like saving something, although you can use them for other operations if you want.
Data for GETs is sent over the URL query string. Data for POSTs is sent separately. Some browsers have a maximum URL length (I think Internet Explorer is 2048 characters), and if the query string becomes too long you'll get an error.
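To make the transport difference concrete, a small sketch with XMLHttpRequest (the URLs and the term parameter are made up for illustration):
var term = "example";

// GET: the data travels in the URL query string.
var get = new XMLHttpRequest();
get.open("GET", "/search?term=" + encodeURIComponent(term), true);
get.send();

// POST: the data travels separately, in the request body.
var post = new XMLHttpRequest();
post.open("POST", "/save", true);
post.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
post.send("term=" + encodeURIComponent(term));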
You should use GET and POST requests in AJAX calls just as you would use GET and POST requests in normal calls. Basic rule of thumb:
Will the request modify anything in your Model?
YES: The request will modify (add/update/delete) data in your data store, or in some other way change the state of the server (cause creation of a file, for example). Use POST.
NO: The request will not affect the state of anything (database, file system, sessions, ...) on the server, but merely retrieve information. Use GET.
POST requests are requests that you do not want to happen accidentally. GET requests are requests you are OK with happening when a user points a browser at a URL.
GET requests can be repeated quite simply since their data is based in the URL itself.
You should think about AJAX requests like you think about regular form requests (and their GET and POST)
The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process: sending the headers first, then sending data. So it's best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K data you might not be able to use GET.
http://developer.yahoo.com/performance/rules.html#ajax_get

What are the advantages of using a GET request over a POST request?

Several of my ajax applications in the past have used GET requests, but now I'm starting to use POST requests instead. POST requests seem to be slightly more secure and definitely more URL friendly/pretty. Thus, I'm wondering if there is any reason why I should use GET requests at all.
I generally set up the question as thus: Does anything important change after the request? (Logging and the like notwithstanding). If it does, it should be a POST request, if it doesn't, it should be a GET request.
I'm glad that you call POST requests "slightly" more secure, because that's pretty much what they are; it's trivial to fake a POST request by a user to a page. Making it a POST request, however, prevents web accelerators or reloads from re-triggering the action accidentally.
As AJAX, there is one more consideration: if you are returning JSON with callback support, be very careful not to put any sensitive data that you don't want other websites to be able to see in there. Wikipedia had a vulnerability along these lines where the user anti-CSRF token was revealed via their JSON API.
All good points; however, in answer to the question, GET requests are more useful than POST requests in certain scenarios:
They can be bookmarked
They can be cached
They're faster
They have known consequences (assuming they don't change data), so visiting them multiple times is not a problem.
For the sake of posterity, updating this comment with the blog notes re: point #3 here, all credit to Omar AL Zabir (the author of the referenced blog post):
"Atlas by default makes HTTP POST for all AJAX calls. Http POST is
more expensive than Http GET. It transmits more bytes over the wire,
thus taking precious network time and it also makes ASP.NET do extra
processing on the server end. So, you should use Http Get as much as
possible. However, Http Get does not allow you to pass objects as
parameters. You can pass numeric, string and date only. When you make
a Http Get call, Atlas builds an encoded url and makes a hit to that
url. So, you must not pass too much content which makes the url become
larger than 2048 chars. As far as I know, that’s what is the max
length of any url.
Another evil thing about http post is, it’s actually 2 calls. First
browser sends the http post headers and server replies with “HTTP 100
Continue”. When browser receives this, it sends the actual body."
You should use GET where you're doing a request which has no side effects, e.g. just fetching some info. This request can:
Be repeated without any problem - if the browser detects an error it can silently retry
Have its result cached by the browser
Be cached by a proxy
These things are all good. Anything which is only retrieving data (particularly public data) should really be a GET. The server should send sensible Last-Modified: and Expires: headers to allow caching if required.
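As one possible server-side illustration of sending those headers (this sketch assumes Node's built-in http module; the answer itself doesn't name a server stack):
var http = require("http");

http.createServer(function(req, res) {
    res.writeHead(200, {
        "Content-Type": "application/json",
        // Tell browsers and proxies that this response may be reused from cache.
        "Last-Modified": new Date().toUTCString(),
        "Expires": new Date(Date.now() + 60 * 60 * 1000).toUTCString() // one hour from now
    });
    res.end(JSON.stringify({ ok: true }));
}).listen(8080);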
There is one other difference not mentioned by anyone.
GET requests are passed in the URL string and are therefore subject to a length limit usually dependent on the browser. It seems that most are around 2000 chars.
POST requests can be much much larger - in fact not limited really. So if you're needing to request data from a web server and you're passing in lots of parameter information then a POST request might be the only option.
So, as mentioned before really a GET request is for requesting data (no side effects) while a POST request is generally used for transmitting data back to the server to be stored (with side effects). e.g. Use POST to upload a file. GET to retrieve a file.
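For example, hypothetically (the /upload and /files/report.pdf endpoints and the fileInput element are placeholders, not from the answer):
// POST: send a (possibly large) file to the server to be stored, which is a side effect.
var upload = new XMLHttpRequest();
var body = new FormData();
body.append("file", document.getElementById("fileInput").files[0]);
upload.open("POST", "/upload", true);
upload.send(body);

// GET: merely retrieve a file; no server state changes, so the response can be cached.
var download = new XMLHttpRequest();
download.open("GET", "/files/report.pdf", true);
download.responseType = "blob";
download.send();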
There was a time when, I believe, IE had a very short limit on GET URL strings. Some applications like Lotus Notes use large numbers of random characters to represent document IDs. I had the displeasure of using another product that generated random strings so the page URL was unique each time. The random string was HUGE... and from memory it didn't always work with IE6.
This might help you to decide where to use GET and where to use POST:
URIs, Addressability, and the use of HTTP GET and POST.
POST requests are just as insecure as GETs. The main difference is that POST is used to modify the state of the server application, while GET only requests data from it.
The difference matters when you use clean, "restful" URLs, where the URL itself specifies the resource, and the different methods trigger different actions on the server side.
Perhaps most importantly, GET is bookmarkable / viewable in URL history, and searchable with Google.
POST is important where you don't want the event to be bookmarkable or able to be typed in as a URL; otherwise you (or Google crawling your URLs) could end up accidentally doing things like deleting users from your system, for example.
GET vs. POST:
In the GET method, values are visible in the URL; in the POST method, values are not visible in the URL.
GET has a limitation on the length of the values, generally 255 characters; POST has no limitation on the length of the values, since they are submitted via the body of the HTTP request.
GET performs better than POST because of the simple nature of appending the values to the URL; POST has lower performance because of the time spent including the values in the HTTP body.
GET supports only string data types; POST supports different data types, such as string, numeric, binary, etc.
GET results can be bookmarked; POST results cannot be bookmarked.
A GET request is often cacheable; a POST request is hardly ever cacheable.
GET parameters remain in the web browser history; POST parameters are not saved in the web browser history.
Source and more in-depth analysis: https://www.guru99.com/difference-get-post-http.html
