Why does the default Windows WebDAV client ignore 403 Forbidden responses?

I'm using the sabre/dav library for my project and I'm having trouble preventing the default Windows WebDAV client from "deleting" a file that shouldn't be deleted.
The server-side implementation is fine: Forbidden statuses are thrown and acknowledged by other clients (Finder, Cyberduck), which abort the deletion and show the user a forbidden error. The same statuses are returned to the Windows client, but it simply "deletes" the files even though they are still present on the server; refreshing the folder makes them visible again. The Windows client seems to simply ignore the Forbidden responses and virtually deletes the folder so that it is no longer shown.
The bigger problem is that if for some reason you decide to delete a folder that contains both protected and unprotected files/folders, the client deletes all of the unprotected ones, because it fails to acknowledge the first (or any) Forbidden response. Other WebDAV clients detect this and stop the deletion, so the designated folder and its child folders/files are left untouched.
Example of a forbidden response, when trying to delete a folder or file:
HTTP/1.1 403 Forbidden
Server: nginx/1.8.0
Date: Wed, 29 Jul 2015 13:55:11 GMT
Content-Type: application/xml; charset=utf-8
Connection: keep-alive
X-Frame-Options: SAMEORIGIN
X-Powered-By: PHP/5.4.41-1~dotdeb+7.1
X-Sabre-Version: 2.1.3
Vary: Accept-Encoding,User-Agent
Content-Length: 320
<?xml version="1.0" encoding="utf-8"?>
<d:error xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns">
<s:sabredav-version>2.1.3</s:sabredav-version>
<s:exception>Sabre\DAV\Exception\Forbidden</s:exception>
<s:message>Permission denied to delete node</s:message>
</d:error>
Tested default Win client on Win8.1 x86.
Any idea how to force the Windows WebDAV client to detect Forbidden responses and terminate the deletion process?
Thanks.

As mentioned by Evert, I think you could work around the problem with properties: https://learn.microsoft.com/en-us/openspecs/sharepoint_protocols/ms-wdvme/f83d826b-7fad-4f80-838c-5c7cc98cb59f
This is described here: How to make file READ ONLY when exposed through WebDAV
Edit: It works in the sense that you can set the read-only flag on the file, but it does not prevent Windows Explorer from sending the DELETE request and ignoring the 403 status code. Windows Explorer just seems to be a bad WebDAV client...
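For reference, a minimal sketch of what that read-only flag looks like on the wire, assuming the MS-WDVME Win32FileAttributes property in the urn:schemas-microsoft-com: namespace (the FILE_ATTRIBUTE_READONLY bit is 0x00000001; how you expose it from sabre/dav depends on your property plugin). A PROPFIND response for the node would include:
<D:prop xmlns:D="DAV:" xmlns:Z="urn:schemas-microsoft-com:">
  <Z:Win32FileAttributes>00000001</Z:Win32FileAttributes>
</D:prop>
Windows Explorer reads (and tries to PROPPATCH) this property, but as noted above it still issues the DELETE and ignores the 403.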

Related

Prevent Open URL Redirect from gorilla/mux

I am working on a RESTful web application using Go + the gorilla/mux v1.4 framework. Some basic security testing after a release revealed an Open URL Redirection vulnerability in the app that allows a user to submit a specially crafted request with an external URL, causing the server to respond with a 301 redirect.
I tested this using Burp Suite and found that any request targeting an external URL gets a 301 Moved Permanently response from the app. I've been looking at all possible ways to intercept these requests before the 301 is sent, but this behavior seems to be baked into the net/http server implementation.
Here is the raw request sent to the server (myapp.mycompany.com:8000):
GET http://evilwebsite.com HTTP/1.1
Accept: */*
Cache-Control: no-cache
Host: myapp.mycompany.com:8000
Content-Length: 0
And the response every time is:
HTTP/1.1 301 Moved Permanently
Location: http://evilwebsite.com/
Date: Fri, 13 Mar 2020 08:55:24 GMT
Content-Length: 0
Despite adding checks on request.URL to prevent this type of redirect in the http.Handler, I haven't had any luck getting the request to reach the handler. It appears that the base HTTP web server performs the redirect before the request ever reaches my custom handler code as registered with PathPrefix("/").Handler.
My goal is to ensure the application returns a 404 Not Found or 400 Bad Request for such requests. Has anybody else faced this scenario with gorilla/mux? I tried the same with a Jetty web app and found it returned a perfectly valid 404. I've been at this for a couple of days now and could really use some ideas.
This is not the claimed Open URL Redirect security issue. The request is invalid in that the request path contains an absolute URL with a different domain than the Host header. No sane client (e.g. a browser) can be lured into issuing such an invalid request in the first place, so there is no actual attack vector.
Sure, a custom client could be created to submit such a request. But a custom client could also be made to interpret the server's response in a non-standard way, or visit a malicious URL directly without even contacting your server. In that case the client itself would be the problem, not the server's response.
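That said, if you still want the application to answer such malformed requests with a 400 instead of a redirect, one option is to wrap the gorilla/mux router itself (mux's path-cleaning redirect can fire before route handlers or mux middleware run) and reject absolute-form request targets outright. A minimal sketch, assuming the app never acts as a forward proxy so any absolute-form target can be refused; the handler wiring is illustrative:
package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

// rejectAbsoluteTargets refuses requests whose request line carries an absolute
// URL (e.g. "GET http://evilwebsite.com HTTP/1.1"). Ordinary origin-form
// requests have no scheme in r.URL, so they pass straight through.
func rejectAbsoluteTargets(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.IsAbs() {
			http.Error(w, "Bad Request", http.StatusBadRequest)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	r := mux.NewRouter()
	r.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.Write([]byte("ok"))
	})
	// Wrapping the router means the check runs before any of mux's own
	// path-cleaning/redirect logic.
	http.ListenAndServe(":8000", rejectAbsoluteTargets(r))
}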

OCI ObjectStorage -- How to enable CORS for a bucket?

I need help figuring out how to enable CORS (https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) on an Object Storage bucket, please.
At the moment, Cross-Origin Resource Sharing is enabled for the OCI Object Storage native & Swift APIs.
$ curl -I https://objectstorage.us-phoenix-1.oraclecloud.com
HTTP/1.1 404 Not Found
Date: Wed, 12 Dec 2018 01:52:00 GMT
Connection: keep-alive
opc-request-id: 286901e2-4180-812c-2779-18b415009904
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST,PUT,GET,HEAD,DELETE,OPTIONS
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: Access-Control-Allow-Credentials,Access-Control-Allow-Methods,Access-Control-Allow-Origin,Connection,opc-client-info,opc-request-id
Sadly, it seems like Oracle's Object Storage does not support CORS headers (nor ACLs, a shame).
If you are generating pre-signed URLs to files in a private bucket, you can use the "Sec-Fetch-Mode" request header to tell whether the browser will perform a CORS check on the file.
See details: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Sec-Fetch-Mode
So you can do the following (see the sketch after the link below):
If Sec-Fetch-Mode is cors, download the file to your server and then send it to the browser (that way the file's origin is the same as your application's).
If it isn't, generate a pre-signed URL and redirect the browser to it.
One last thing: IE and Safari will not send the "Sec-Fetch-Mode" header even though they still enforce CORS, so for those browsers you always have to send the file through your server.
More about pre-signed URLs: https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
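A minimal sketch of that branching in Go (the pre-signed URL value, route, and handler names are illustrative assumptions, not OCI SDK calls):
package main

import (
	"io"
	"net/http"
)

// presignedURL would normally come from an OCI pre-authenticated request;
// this constant is a placeholder for illustration only.
const presignedURL = "https://objectstorage.us-phoenix-1.oraclecloud.com/p/EXAMPLE/n/ns/b/bucket/o/file.pdf"

func serveObject(w http.ResponseWriter, r *http.Request) {
	mode := r.Header.Get("Sec-Fetch-Mode")
	if mode == "" || mode == "cors" {
		// Header absent (IE/Safari) or a CORS-checked fetch: proxy the object
		// through our own origin so no CORS headers are needed on the bucket.
		resp, err := http.Get(presignedURL)
		if err != nil {
			http.Error(w, "upstream fetch failed", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		w.Header().Set("Content-Type", resp.Header.Get("Content-Type"))
		io.Copy(w, resp.Body)
		return
	}
	// Navigation or no-cors request: just redirect to the pre-signed URL.
	http.Redirect(w, r, presignedURL, http.StatusFound)
}

func main() {
	http.HandleFunc("/files/report", serveObject)
	http.ListenAndServe(":8080", nil)
}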

Browser serving an obsolete Authorization header from cache

I'm experiencing my client getting logged out after an innocent request to my server. I control both ends and after a lot of debugging, I've found out that the following happens:
The client sends the request with a correct Authorization header.
The server responds with 304 Not Modified without any Authorization header.
The browser serves the full response including an obsolete Authorization header as found in its cache.
From now on, the client uses the obsolete Authorization and gets kicked out.
From what I know, the browser must not cache any response to a request containing Authorization. Nonetheless,
chrome://view-http-cache/http://localhost:10080/api/SearchHost
shows
HTTP/1.1 200 OK
Date: Thu, 23 Nov 2017 23:50:16 GMT
Vary: origin, accept-encoding, authorization, x-role
Cache-Control: must-revalidate
Server: 171123_073418-d8d7cb0 =
x-delay-seconds: 3
Authorization: Wl6pPirDLQqWqYv
Expires: Thu, 01 Jan 1970 00:00:00 GMT
ETag: "zUxy1pv3CQ3IYTFlBg3Z3vYovg3zSw2L"
Content-Encoding: gzip
Content-Type: application/json;charset=utf-8
Content-Length: 255
The funny Server header replaces the Jetty server header (which shouldn't be sent, for security reasons) with some internal information - ignore that. This is what curl says:
< HTTP/1.1 304 Not Modified
< Date: Thu, 23 Nov 2017 23:58:18 GMT
< Vary: origin, accept-encoding, authorization, x-role
< Cache-Control: must-revalidate
< Server: 171123_073418-d8d7cb0 =
< ETag: "zUxy1pv3CQ3IYTFlBg3Z3vYovg3zSw2L"
< x-delay-seconds: 3
< Content-Encoding: gzip
This happens in Firefox, too, although I can't reproduce it at the moment.
The RFC continues, and it looks like the answer linked above is not exact:
unless a cache directive that allows such responses to be stored is present in the response
It looks like the response is cacheable. That's fine, I do want the content to be cached, but I don't want the Authorization header to be served from cache. Is this possible?
Explanation of my problem
My server used to send the Authorization header only when responding to a login request. This used to work fine, problems come with new requirements.
Our site allows users to stay logged in arbitrarily long (we do no sensitive business). We're changing the format of the authorization token, and we don't want to force all users to log in again because of it. Therefore, I made the server send the updated authorization token whenever it sees an obsolete but valid one. So now any response may contain an authorization token, but most of them do not.
The browser cache combining the still valid response with an obsolete authorization token comes in the way.
As a workaround, I made the server send no etag when an authorization token is present. It works, but I'd prefer some cleaner solution.
The quote in the linked answer is misleading because it omitted an important part: "if the cache is shared".
Here's the correct quote (RFC7234 Section 3):
A cache MUST NOT store a response to any request, unless: ... the Authorization header field (see Section 4.2 of [RFC7235]) does not appear in the request, if the cache is shared,
That part of the RFC is basically a summary.
This is the complete rule (RFC7234 Section 3.2) that says essentially the same thing:
A shared cache MUST NOT use a cached response to a request with an Authorization header field (Section 4.2 of [RFC7235]) to satisfy any subsequent request unless a cache directive that allows such responses to be stored is present in the response.
Is a browser cache a shared cache?
This is explained in Introduction section of the RFC:
A private cache, in contrast, is dedicated to a single user; often, they are deployed as a component of a user agent.
That means a browser cache is a private cache.
It is not a shared cache, so the above rule does not apply, which means both Chrome and Firefox do their jobs correctly.
Now the solution.
The specification mentions the possibility of a cached response containing Authorization being reused without the Authorization header.
Unfortunately, it also says that the feature is not widely implemented.
So, the easiest and also the most future-proof solution I can think of is to make sure that any response containing an Authorization token isn't cached.
For instance, whenever the server sees an obsolete but valid Authorization token, send a new valid one along with Cache-Control: no-store to disallow caching.
Also, you must never send Cache-Control: must-revalidate together with an Authorization header, because the must-revalidate directive actually allows the response to be stored, including by shared caches, which can cause even more problems in the future.
... unless a cache directive that allows such responses to be stored is present in the response.
In this specification, the following Cache-Control response directives (Section 5.2.2) have such an effect: must-revalidate, public, and s-maxage.
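For illustration, a response carrying a refreshed token might then look like this (the token and the other header values are placeholders):
HTTP/1.1 200 OK
Content-Type: application/json;charset=utf-8
Authorization: <new token>
Cache-Control: no-store
The no-store directive keeps both private and shared caches from storing the response, so a stale Authorization header can never be replayed from cache.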
My current solution is to send an authorization header in every response; using a placeholder value of - when no authorization is wanted.
The placeholder value is obviously meaningless and the client knows it and happily ignores it.
This solution is ugly as it adds maybe 20 bytes to every response, but that's still better than occasionally having to resend the whole response content, as with the approach mentioned in my question. Moreover, with HTTP/2 it'll be free.

Using Active Directory roles while accessing a website from JMeter

In our company, the web app that we are testing uses the Active Directory roles assigned to the user to control access to the website.
Edit:
Important information that I forgot to mention: while accessing the website, I am not prompted for a username and password. The website is only displayed if I have the correct Active Directory role assigned to my user profile.
For example:
Opening IE as myself - able to access the website.
Opening IE as a service account (with required Active Directory roles) - able to access the website.
Opening IE as a different user outside my project - not able to access the website.
I have tried (skeptically, desperate to get it working) Basic and Kerberos authorization in the HTTP Authorization Manager, and even running JMeter as that service account; still no luck. I keep getting the result below:
Thread Name: Users 1-1
Sample Start: 2017-04-26 17:08:18 CDT
Load time: 83
Connect Time: 13
Latency: 83
Size in bytes: 438
Sent bytes:136
Headers size in bytes: 243
Body size in bytes: 195
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""): text
Response code: 401
Response message: Unauthorized
Response headers:
HTTP/1.1 401 Unauthorized
Server: nginx/1.10.1
Date: Wed, 26 Apr 2017 22:08:18 GMT
Content-Type: text/html
Content-Length: 195
Connection: keep-alive
WWW-Authenticate: Negotiate
X-Frame-Options: deny
X-Content-Type-Options: nosniff
HTTPSampleResult fields:
ContentType: text/html
DataEncoding: null
I am just trying to find out if anyone here has got JMeter working in a similar scenario, or if anyone can point me in the right direction to overcome this hurdle.
Thanks all for your help in advance.
You need to identify the exact implementation of the authentication in your application.
Given you receive WWW-Authenticate: Negotiate - this is definitely not Basic HTTP Auth.
Negotiate may stand either for NTLM or for Kerberos (or in some cases for both, i.e. if Kerberos is not successful it will fall back to NTLM) and JMeter needs to be configured differently for these schemes.
For example, for NTLM you only need to provide the credentials and domain in the HTTP Authorization Manager, while for Kerberos you also need to populate the Realm and set your Kerberos settings (KDC and login config) in the jaas.conf and krb5.conf files.
See Windows Authentication with Apache JMeter article for more information and example configurations.
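For illustration, the Kerberos side might look like the snippets below (the realm, KDC host, and JAAS entry name are assumptions for this example; the JAAS entry name must match JMeter's kerberos_jaas_application property):
krb5.conf:
[libdefaults]
    default_realm = MYCOMPANY.COM

[realms]
    MYCOMPANY.COM = {
        kdc = kdc.mycompany.com
    }
jaas.conf:
JMeter {
    com.sun.security.auth.module.Krb5LoginModule required doNotPrompt=false useTicketCache=false;
};
Point JMeter at these files (for example via -Djava.security.krb5.conf=krb5.conf and -Djava.security.auth.login.config=jaas.conf) and fill in the Realm plus credentials in the HTTP Authorization Manager.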

How can I force browsers to use Expires (rather than ETags/modification time)?

I have a server serving static files with an Expires of one year, but my browsers still request the file and receive a 304 Not Modified. I want to prevent the browser from even attempting the connection. This happens in several different setups (Ubuntu Linux) with both Chrome and Firefox.
My test is as follows:
$ wget -S -O /dev/null http://trepalchi.it/static/img/logo-trepalchi-black.svg
--2016-03-14 19:56:14-- http://trepalchi.it/static/img/logo-trepalchi-black.svg
Resolving trepalchi.it (trepalchi.it)... 213.136.85.40
Connecting to trepalchi.it (trepalchi.it)|213.136.85.40|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: nginx/1.2.1
Date: Mon, 14 Mar 2016 18:55:29 GMT
Content-Type: image/svg+xml
Content-Length: 25081
Last-Modified: Sun, 13 Mar 2016 23:03:53 GMT
Connection: keep-alive
Expires: Tue, 14 Mar 2017 18:55:29 GMT
Cache-Control: max-age=31536000
Cache-Control: public
Accept-Ranges: bytes
Length: 25081 (24K) [image/svg+xml]
Saving to: "/dev/null"
100%[==================================================================================================================================================================>] 25.081 --.-K/s in 0,07s
2016-03-14 19:56:14 (328 KB/s) - "/dev/null" saved [25081/25081]
That shows Expires and Cache-Control being provided correctly (via nginx).
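For reference, an nginx configuration along these lines (an assumption about the actual setup, not the real config) would produce exactly the headers shown above:
location /static/ {
    expires 1y;                         # emits Expires plus Cache-Control: max-age=31536000
    add_header Cache-Control "public";  # accounts for the second Cache-Control header
}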
If I go to the browser with the cache enabled and open the developer tools, on the first hit I see a 200 return code; then I refresh the page (Ctrl-R) and see a request that returns 304 Not Modified.
Inspecting the Firefox cache (about:cache), I found the entry with the correct expiry, and clicking the link on that page let me view it without hitting the remote server.
I also tested pages where the images are loaded from image tags (as opposed to being requested directly, as in the example above).
All the literature I have read states that with such an Expires the browser should not even attempt a connection. What's wrong? RFC 2616 states:
HTTP caching works best when caches can entirely avoid making requests
to the origin server. The primary mechanism for avoiding requests is
for an origin server to provide an explicit expiration time in the
future, indicating that a response MAY be used to satisfy subsequent
requests. In other words, a cache can return a fresh response without
first contacting the server.
Note: another question addresses how the 304 is generated; I just want to prevent the connection from being made.
Sandro
Thanks
