Empty Request Body rejects request - asp.net-web-api

I have an ASP.NET Web API method (POST) that works on one IIS server, but when deployed to another IIS server (both servers are IIS 8), if the request body is kept empty I get an error returned as below:
Request Rejected. The requested URL was rejected. Please consult with your administrator.
If I put any character in the request body, I get the expected result! Any idea what I can check? I believe it is some setting on the server, as the service works on the former server; it is only the latter server that doesn't accept an empty request body.
Unsuccessful request
Successful request/response when some garbage character is entered in the request body
Update
The response headers for the unsuccessful request:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Connection: close
Content-Length: 188

Related

\api\data\v9.0 is failing with error 401 unauthorized in Jmeter

Thread Name:Thread Group 1-1
Sample Start:2022-11-14 15:38:43 GMT
Load time:56
Connect Time:42
Latency:56
Size in bytes:1472
Sent bytes:2669
Headers size in bytes:1472
Body size in bytes:0
Sample Count:1
Error Count:1
Data type ("text"|"bin"|""):
Response code:401
Response message:Unauthorized
HTTPSampleResult fields:
ContentType:
DataEncoding: null
Response Header
Connection: keep-alive
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-origin
Accept-Language: en-GB,en;q=0.5
prefer: odata.include-annotations="*"
x-ms-sw-tenantid:
x-ms-user-agent:
clienthost: Browser
x-ms-client-session-id:
Accept: application/json
x-ms-source-id:
x-ms-sw-objectid:
x-ms-correlation-id:
content-type: application/json
x-ms-app-id:
I tried applying the HTTP Cookie Manager and HTTP Authorization Manager, but the request is failing every time.
As per the HTTP status code 401 description:
The HyperText Transfer Protocol (HTTP) 401 Unauthorized response status code indicates that the client request has not been completed because it lacks valid authentication credentials for the requested resource.
This status code is sent with an HTTP WWW-Authenticate response header that contains information on how the client can request for the resource again after prompting the user for authentication credentials.
So you either need to provide a proper Authorization header via the HTTP Header Manager, or properly configure the HTTP Authorization Manager to generate the header for you.
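For example, if the API expects an OAuth 2.0 bearer token (the exact scheme depends on how this particular server is secured), the entry added in the HTTP Header Manager would look something like the following, with <access-token> as a placeholder for a real token:
Authorization: Bearer <access-token>
whereas for Basic authentication the HTTP Authorization Manager can build the equivalent Authorization: Basic ... header from a username and password for you.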

Prevent Open URL Redirect from gorilla/mux

I am working on a RESTful web application using Go + the gorilla/mux v1.4 framework. Some basic security testing after a release revealed an Open URL Redirection vulnerability in the app that allows a user to submit a specially crafted request with an external URL, which causes the server to respond with a 301 redirect.
I tested this using Burp Suite and found that any such request with an external URL gets a 301 Moved Permanently response from the app. I've been looking at all possible ways to intercept these requests before the 301 is sent, but this behavior seems to be baked into the net/http server implementation.
Here is the raw request sent to the server (myapp.mycompany.com:8000):
GET http://evilwebsite.com HTTP/1.1
Accept: */*
Cache-Control: no-cache
Host: myapp.mycompany.com:8000
Content-Length: 0
And the response every time is:
HTTP/1.1 301 Moved Permanently
Location: http://evilwebsite.com/
Date: Fri, 13 Mar 2020 08:55:24 GMT
Content-Length: 0
Despite putting in checks on request.URL to prevent this type of redirect in the http.Handler, I haven't had any luck getting the request to reach the handler. It appears that the base HTTP web server is performing the redirect without allowing the request to reach my custom handler code as defined via PathPrefix("/").Handler.
My goal is to ensure the application returns a 404 Not Found or 400 Bad Request for such requests. Has anybody else faced this scenario with gorilla/mux? I tried the same with a Jetty web app and found it returned a perfectly valid 404. I've been at this for a couple of days now and could really use some ideas.
This is not the claimed open URL redirect security issue. The request is invalid in that the path contains an absolute URL with a different domain than the Host header. No sane client (i.e. a browser) can be lured into issuing such an invalid request in the first place, and thus there is no actual attack vector.
Sure, a custom client could be created to submit such a request. But a custom client could also be made to interpret the server's response in a non-standard way, or to visit a malicious URL directly without even contacting your server. In that case the client itself would be the problem, not the server's response.
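That said, if the goal is simply to answer such requests with a 400 instead of the automatic 301, one option is to reject absolute-form request targets before they reach the router. A minimal sketch, assuming the service is a plain origin server (not a forward proxy) and standard net/http + gorilla/mux; the port and handler body are made-up examples:
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

// rejectAbsoluteForm answers 400 Bad Request whenever the request line used an
// absolute URI ("GET http://evilwebsite.com HTTP/1.1") instead of the usual
// origin-form ("GET /path HTTP/1.1"), before the router's path cleaning can
// issue its 301 redirect.
func rejectAbsoluteForm(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.IsAbs() {
			http.Error(w, "Bad Request", http.StatusBadRequest)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	r := mux.NewRouter()
	r.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.Write([]byte("ok"))
	})
	// Wrap the router itself so the check runs before mux.Router.ServeHTTP.
	log.Fatal(http.ListenAndServe(":8000", rejectAbsoluteForm(r)))
}
gorilla/mux also exposes Router.SkipClean(true), which disables the path-cleaning redirect altogether; whether that is preferable depends on whether you rely on the cleanup behaviour elsewhere.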

Does it make any sense for an `HTTP/1.1` response to return HTTP status code `421 Misdirected Request`?

I am currently debugging a surprising "Bad Request" response from an API.
Request:
POST /path HTTP/1.1
...
Response:
HTTP/1.1 421 Misdirected Request
Date: Fri, 30 Nov 2018 21:59:12 GMT
...
Via: https/1.1 subdomain.example.org (ApacheTrafficServer/7.1.4)
...
Per my research, HTTP status code 421 was only added with the HTTP/2 specification. As you can see, my client is sending an HTTP/1.1 request.
Does it make any sense to use it in the response to an HTTPS/1.1 request? What could it mean there?
Update: Further research indicates that this 421 response is triggered by an invalid CSRF token and Cookie value in the headers; retrying the request with a verifiably valid combination returns the expected result with 200 OK. Unfortunately this doesn't really explain anything.
421 was added for HTTP/2, which allows connection reuse. If a client reuses a connection incorrectly (like Firefox used to), then the server should respond with this.
However, that doesn't mean it's an HTTP/2-only status code. For example, if a load balancer takes HTTP/2 requests in and passes them to backend servers over HTTP/1.1, then one of those backend servers can reject a request over HTTP/1.1 if it believes it was incorrectly sent that request. As you can see, your request was sent via an Apache Traffic Server, so I suspect that is what happened here.
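To make that concrete, a backend can answer 421 over plain HTTP/1.1 whenever it decides it is not authoritative for the host the request was addressed to. A minimal sketch in Go; the host name and port are made-up examples, not anything from the question's setup:
package main

import "net/http"

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical check: this backend only serves api.example.org.
		if r.Host != "api.example.org" {
			w.WriteHeader(http.StatusMisdirectedRequest) // 421
			return
		}
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", nil)
}
In the question's setup, the same kind of decision would be made somewhere behind the Apache Traffic Server, regardless of the protocol version the client used.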

What's wrong with this http request? Is there a validator somewhere?

I'm trying to send an HTTP request but get 400 Bad Request from the server. How do I know what's wrong? Is there an HTTP request validator somewhere on the web? If not, can somebody explain why this request fails:
GET http://www.example.com/index.htm HTTP/1.1
connection: close
content-length: 0
The request does end in \r\n\r\n.
I solved it: with HTTP/1.1 the Host header is required. When I added that header the request succeeded and I received the expected response.
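For reference, the same request with the missing Host header added (everything else unchanged) looks like:
GET http://www.example.com/index.htm HTTP/1.1
Host: www.example.com
connection: close
content-length: 0
With an absolute request URI the server takes the host from the URI itself, but HTTP/1.1 still requires the Host header field to be present, so a server may answer 400 Bad Request when it is missing.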

Does If-None-Match need to be set programmatically in an ajax request, if the server sends an ETag

My question is pretty simple, although while searching I have not found a simple, satisfying answer.
I am using a jQuery AJAX request to get data from a server. The server hosts a REST API that sets the ETag and Cache-Control headers on GET responses. The server also sets CORS headers to allow the ETag.
The client of the API is a browser web app, and I am using AJAX requests to call the API. Here are the response headers from the server after a simple GET request:
Status Code: 200 OK
Access-Control-Allow-Origin: *
Cache-Control: no-transform, max-age=86400
Connection: Keep-Alive
Content-Encoding: gzip
Content-Type: application/json
Date: Sun, 30 Aug 2015 13:23:41 GMT
Etag: "-783704964"
Keep-Alive: timeout=15, max=99
Server: Apache-Coyote/1.1
Transfer-Encoding: chunked
Vary: Accept-Encoding
access-control-allow-headers: X-Requested-With, Content-Type, Etag,Authorization
access-control-allow-methods: GET, POST, DELETE, PUT
All I want to know is:
Do I need to manually collect the ETag from the response headers sent by the server and attach an If-None-Match header to the AJAX request, or does the browser send it by default in a conditional GET request when it has an ETag?
I have done some debugging in the browser's network console, and it seems the browser is doing the conditional GET automatically and setting the If-None-Match header.
If that is right: suppose I create a new resource and then call the GET request. It gives me the previously cached data the first time, but when I reload the page it gives the updated data. So I am confused: if the dataset on the server side has changed and the server sends a different ETag, why doesn't the browser get the updated data set from the server unless I reload?
Also, in the case of pagination: suppose I have a URL /users?next=0, where next is a query param whose value changes for every new request. Since each response will get its own ETag, will the browser store the ETag per request URL, or does it just store the latest ETag of the previous GET request, irrespective of the URL?
Well, I have somehow figured out the solution myself:
The browser sends the If-None-Match header itself when it sees that the URL had an ETag header on a previous response. The browser saves the ETag per URL, so it does not matter how many requests to different URLs happen.
Also, a trick to force the browser to issue a conditional GET to check the ETag:
Set the Cache-Control max-age to a low value (for me 60s works great).
Once the cache expires, the browser will send a conditional GET to check whether the expired cached resource is still valid. If the If-None-Match header matches the ETag, the server sends back a 304 Not Modified response. This means the expired cached resource is still valid and can be used.
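To make the exchange concrete, here is a minimal sketch of the server side of this behaviour (a hypothetical Go handler, not the asker's actual Apache/Coyote API; the dataset, path, and port are made up): it sends an ETag and a short max-age, and answers 304 Not Modified when the browser's If-None-Match matches the current ETag.
package main

import (
	"fmt"
	"hash/fnv"
	"net/http"
)

// Stand-in for the real dataset; the ETag changes whenever this changes.
var users = []byte(`[{"id":1,"name":"alice"}]`)

func usersHandler(w http.ResponseWriter, r *http.Request) {
	// Derive a validator from the current representation.
	h := fnv.New32a()
	h.Write(users)
	etag := fmt.Sprintf(`"%d"`, h.Sum32())

	w.Header().Set("ETag", etag)
	w.Header().Set("Cache-Control", "no-transform, max-age=60")

	// The browser stores the ETag per URL and resends it in If-None-Match.
	// (A full implementation would also handle lists and weak validators.)
	if r.Header.Get("If-None-Match") == etag {
		w.WriteHeader(http.StatusNotModified) // 304: cached copy is still valid
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write(users)
}

func main() {
	http.HandleFunc("/users", usersHandler)
	http.ListenAndServe(":8080", nil)
}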
