Do we need the "Expect: 100-continue" header in the XFire request? - performance

I found that Apache XFire adds one extra header to its POST request:
POST /testservice/services/TestService1.1 HTTP/1.1
SOAPAction: "testAPI"
Content-Type: text/xml; charset=UTF-8
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; XFire Client +http://xfire.codehaus.org)
Host: 192.168.10.111:9082
Expect: 100-continue
Will this Expect: 100-continue make the round trip between the XFire client and its endpoint server slightly wasteful, because it requires one extra exchange in which the origin server returns a "willing to accept the request" response before the body is sent?
This is just my guess.
Vance

I know this is an old question, but as I was just researching the subject, here is my answer. You don't really need to use "Expect: 100-continue", and it does indeed introduce an extra round trip. The purpose of this header is to tell the server that you want your request to be validated before the data is posted. It also means that if it is set, you are committed to waiting (within your own timeout period, not indefinitely!) for the server's response (either a 100 or an HTTP failure) before sending your form or data. Although it looks like extra expense, it is meant to improve performance in failure cases, by letting the server tell you not to send the data at all (since the request has already failed).
If the client does not set the header, it is not waiting for a 100 code from the server and should simply send the data in the request body. Here is the relevant standard: http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html (jump to section 8.2.3).
Hint for .NET 4 users: this header can be disabled via the static property ServicePointManager.Expect100Continue.
Hint for libcurl users: there was a bug in the old version 7.15 where disabling this header didn't work; it is fixed in newer versions (more here: http://curl.haxx.se/mail/lib-2006-08/0061.html).
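For illustration, here is a minimal Ruby sketch (the endpoint is the one from the question, the SOAP body is a placeholder) of what opting into the handshake looks like; recent versions of Ruby's Net::HTTP wait for the interim 100 response before sending the body when both the Expect header and continue_timeout are set:

require 'net/http'
require 'uri'

uri = URI('http://192.168.10.111:9082/testservice/services/TestService1.1') # endpoint from the question

http = Net::HTTP.new(uri.host, uri.port)
http.continue_timeout = 2 # wait up to 2 s for "HTTP/1.1 100 Continue" before sending the body

req = Net::HTTP::Post.new(uri.request_uri)
req['SOAPAction']   = '"testAPI"'
req['Content-Type'] = 'text/xml; charset=UTF-8'
req['Expect']       = '100-continue' # ask the server to validate the headers first
req.body = '<soapenv:Envelope>...</soapenv:Envelope>' # placeholder SOAP body

res = http.request(req)
puts res.code

Leaving the Expect header out (the default) skips the extra round trip and sends the body immediately.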


Ruby HTTP request always returns 403 response (too many requests). Works in Postman/browser

I am trying to write a simple function which would easily extract the contact information from a classified listing.
Background
The URL I'm looking at is
https://www.idealista.pt/imovel/27542922/
Looking through the developer tools in Chrome, I see that it makes a GET request to this URL: https://www.idealista.pt/pt/ajax/listingController/adContactInfoForListing.ajax?adId=27542922
If I make a GET request in Postman or just copy the second URL into Chrome I get a JSON containing various details.
My code
(Ruby)
require 'net/http'
require 'uri'
require 'json'
uri = URI('https://www.idealista.pt/pt/ajax/listingController/adContactInfoForListing.ajax?adId=27542922')
foo = Net::HTTP.get(uri) # returns the response body as a String
JSON.parse(foo)
The problem
The response is a 403 with a body saying that the system has detected that many requests have been made in a short period of time.
I can replicate this in Postman by doing seven or eight consecutive requests, but then if I wait a minute or two before trying again I get back to seeing the JSON.
Through Ruby it happens straight away.
What I've tried
I've tried copying some or all of the temporary headers created by Postman into my request in Ruby, but I still get the same error or a 404:
User-Agent - PostmanRuntime/7.22.0
Accept - */*
Cache-Control - no-cache
Postman-Token - 6c68a9eb-83d5-4724-9f41-3fc51971db9f
Host - www.idealista.pt
Accept-Encoding - gzip, deflate, br
Cookie - userUUID=c017919a-6115-4905-95b3-5d949c6fb447; _pxhd=34ed938caca242bf6050147e1514cda07b704cc7681245a4beec5a64e0a5cf66:d4f21381-522a-11ea-a954-6f59910ff05b; SESSION=887b6dbc-78a4-4abd-9600-7ce401507331; WID=15a353ca7aab3446|XlEN6|XlEN4
Connection - keep-alive
You have to use a proxy and change the IP.
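A rough Net::HTTP sketch of that approach, assuming a proxy at proxy.example.com:8080 (a placeholder; substitute a proxy you actually control) and reusing a couple of the headers Postman sends:

require 'net/http'
require 'uri'
require 'json'

uri = URI('https://www.idealista.pt/pt/ajax/listingController/adContactInfoForListing.ajax?adId=27542922')

# proxy.example.com:8080 is a placeholder; route the request through a proxy with a different IP
http = Net::HTTP.new(uri.host, uri.port, 'proxy.example.com', 8080)
http.use_ssl = true

req = Net::HTTP::Get.new(uri.request_uri)
req['User-Agent']      = 'PostmanRuntime/7.22.0'
req['Accept']          = '*/*'
req['Accept-Encoding'] = 'identity' # avoid having to decode gzip/brotli by hand

res = http.request(req)
puts res.code
puts JSON.parse(res.body) if res.code == '200'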

Firefox SPNEGO Negotiate protocol - multiple connections?

I'm using gssapi/Kerberos authentication in my web application, and I want single sign on via the browser.
The problem is, Firefox sends an initial request to the server with no authentication and receives a 401. But the request includes a keep-alive header:
Connection: keep-alive
If the server respects this keep-alive request, and returns a WWW-Authenticate header, then Firefox behaves correctly and sends the local user's Kerberos credentials, and all is well.
But, if the server doesn't keep the connection alive, Firefox will not send another request with the credentials, even though the response has the WWW-Authenticate header.
This is a problem because I'm using Django, and Django doesn't support the keep-alive protocol.
Is there a way to make Firefox negotiate without the keep-alive? In the RFC that defines the Negotiate extension, there's nothing about requiring that the same connection be re-used.
Alternatively, is there a way to make Firefox preemptively send the credentials on the first request? This is explicitly allowed in the RFC.
That header is an HTTP 1.0 relic; with HTTP 1.1 connections are persistent by default. Wake up, fast-forward 15 years, and your problems will go away. Firefox works very well with SPNEGO.
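For reference, the exchange Firefox expects looks roughly like this (hostname and token are made up), ideally with both requests travelling over the same persistent connection:

GET /app/ HTTP/1.1
Host: intranet.example.com

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Negotiate

GET /app/ HTTP/1.1
Host: intranet.example.com
Authorization: Negotiate YIIFxQYGKwYBBQUCoIIF... (base64 SPNEGO token)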

JMeter different results during replay of test

I have a strange problem with JMeter.
I've made a recording of a web application without any problems. The problem appears during playback of the test.
For some reason I receive different results during playback than during recording.
When I compare the HTTP requests made during recording and playback I don't see a single difference (except for a security token, which I'm extracting from earlier requests and passing as a parameter).
To be more exact: during recording I receive a response with a big body (>5 kB), while during playback the body of the response is empty. The response code is 200 (OK).
This body contains crucial data from the database, so I'm afraid that measurements made by this JMeter script will not reflect the actual behaviour of the application; simply put, I will not measure what I really need.
Now my questions:
Is there some tool or JMeter plug-in that lets me inspect the contents of HTTP requests and their responses more effectively? It would be great if I could compare the requests made during recording with those made during playback. So far I have used two "View Results Tree" listeners and switched between them to compare a request from the recording with the same request from the playback.
Is there some known bug in JMeter that could explain the difference, for example something related to the recording process?
Here is an example of a request:
POST http://10.133.27.81:8080/c/portal/render_portlet
POST data:
p_l_id=69210&p_p_id=blank_WAR_Blank_INSTANCE_iNM3&p_p_action=0&p_p_state=normal&p_p_mode=view&p_p_col_id=column-2&p_p_col_pos=1&p_p_col_count=2
[no cookies]
Request Headers:
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Accept-Language: pl
Accept: */*
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
csrf_token: 1GXK-0QD7-GFPJ-JLDG-JP2G-J390-BFLG-7LL7
Pragma: no-cache
Method: POST /c/portal/render_portlet HTTP/1.1
X-Requested-With: OWASP CSRFGuard Project
Referer: http://10.133.27.81:8080/group/bou
Accept-Encoding: gzip, deflate
Content-Length: 143
Host: 10.133.27.81:8080
Update: to figure out which headers and parameters are constant, I made 4 recordings of the same test case in different sessions and compared them, so I'm quite sure that only csrf_token has to be filled with a value fetched from an earlier request. I've added a Debug Sampler to verify that this value is fetched properly.
Update 2: Problem found.
There were two problems:
There is a bug in JMeter: when you do a search (Ctrl-F) it searches the whole project except for HTTP Header Managers, and my request contained csrf_token inside a header (I detected that before posting this question). Searching the .jmx XML in a text editor was a good workaround for that.
While trying to find the source of the problems, before I found problem number one, I introduced a new problem by removing the HTTP Cookie Manager (I'm blaming myself and IE for this).
Generally, switching from Internet Explorer to Firefox with the HttpFox add-on helped to spot the problem.
Thanks everyone for the support.
Marek
Response code 200 doesn't have to mean that everything went well at the application level.
To find out more details you can use the Debug Sampler and the Debug PostProcessor.
Example here.
Your issue almost certainly comes from a dynamic request parameter that you didn't correlate.
Look for example at the csrf_token header: did you make it a variable, or do you transmit its initially recorded value? Also look at any parameter that contains hash data or numeric data referencing content that does not exist in your page or request.
For example, I see p_p_col_id, p_p_col_pos and related parameters; are you sure they reference something valid in your replay?
There is really very little chance of a JMeter bug in this case.
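As an illustration of that correlation, a Regular Expression Extractor plus an HTTP Header Manager in JMeter might be set up roughly like this (the regular expression is only an assumption about how the token appears in the recorded response):

Regular Expression Extractor (attached to the sampler whose response returns the token)
  Field to check:     Body (or Response Headers, depending on where the token is returned)
  Reference Name:     csrf_token
  Regular Expression: csrf_token["':\s]+([A-Z0-9-]+)
  Template:           $1$
  Match No.:          1

HTTP Header Manager (on the replayed request)
  csrf_token: ${csrf_token}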

Method/program for sending a given HTTP request (with headers)

I am debugging my website. When it has an error, the full text form of the HTTP request that caused the error is logged. I want to be able to replay these HTTP requests to help debugging the error.
For instance, I have this in my log now:
POST /ipn/handler.ashx?inst=272&msgType=result HTTP/1.0
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Host: mysite.com
Content-Length: 28
User-Agent: AGENT/1.0 (UserAgent)
region=website&compName=ACTL
I want a way to make this exact request again on my local test machine (with the Host header changed). What is the best way to do this?
You could use telnet to talk to your web server and type the exact requests.
You could also use libcurl (and the curl command line tool) to write a program that acts as an HTTP client.
And many scripting languages (Python, Ruby, Perl, OCaml, ...) also have HTTP client libraries (sometimes built on top of curl).
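For instance, a quick Ruby sketch that replays the logged request against a local test machine (localhost:8080 is an assumption; the path, headers, and body are taken from the log above):

require 'net/http'
require 'uri'

uri = URI('http://localhost:8080/ipn/handler.ashx?inst=272&msgType=result') # local replay target

req = Net::HTTP::Post.new(uri.request_uri)
req['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'
req['User-Agent']   = 'AGENT/1.0 (UserAgent)'
req['Host']         = 'mysite.com' # keep the original Host header if your app routes on it
req.body = 'region=website&compName=ACTL'

res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
puts "#{res.code} #{res.message}"
puts res.body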

Is the anchor part of a URL being sent to a web server?

Say, there's a URL, http://www.example.com/#hello.
Will the #hello thing be sent to the web server or not, according to standards?
How do modern browsers act?
The answer to this question is similar to the answers for Retrieving anchor link in URL for ASP.NET.
Basically, the standard, RFC 1808 - Relative Uniform Resource Locators (see Section 2.4.1), says:
"Note that the fragment identifier is not considered part of the URL."
As stephbu pointed out, "the anchor tag is never sent as part of the HTTP request by any browser. It is only interpreted locally within the browser".
The hash variables aren't sent to the web server at all.
For instance, a request to http://www.whatismyip.org/#test from Firefox sends the following HTTP request:
GET / HTTP/1.1
Host: www.whatismyip.org
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Cache-Control: max-age=0
You'll notice the # is nowhere to be found.
Pages you see using # as a form of navigation are doing so through JavaScript.
This value is accessible through the window.location.hash property.
The anchor part (after the #) is not available in any $_SERVER variable in PHP. I don't know of a server-side way of retrieving that piece of information from the URL (as far as I know, it's not possible). It's supposed to be used only by the browser to find a location in the page, which is why the page does not reload if you click on an in-page anchor such as <a href="#hello">hello</a>.
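A quick Ruby illustration of the same point: the fragment survives URI parsing on the client, but the request line sent to the server is built without it:

require 'uri'

uri = URI('http://www.example.com/#hello')
puts uri.fragment    # => "hello" (known only to the client)
puts uri.request_uri # => "/" (what an HTTP client actually puts on the request line)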
