I'm using GSSAPI/Kerberos authentication in my web application, and I want single sign-on via the browser.
The problem is that Firefox sends an initial request to the server with no authentication and receives a 401. But that request includes a keep-alive header:
Connection: keep-alive
If the server respects this keep-alive request, and returns a WWW-Authenticate header, then Firefox behaves correctly and sends the local user's Kerberos credentials, and all is well.
But, if the server doesn't keep the connection alive, Firefox will not send another request with the credentials, even though the response has the WWW-Authenticate header.
This is a problem because I'm using Django, and Django doesn't support the keep-alive protocol.
Is there a way to make Firefox negotiate without the keep-alive? In the RFC that defines the Negotiate extension, there's nothing about requiring that the same connection be reused.
Alternatively, is there a way to make Firefox preemptively send the credentials on the first request? This is explicitly allowed in the RFC.
That header is HTTP/1.0; connections are persistent by default in HTTP/1.1. Wake up, fast-forward 15 years, and your problems will go away. Firefox works very well with SPNEGO.
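One thing worth checking either way: out of the box, Firefox will not answer a Negotiate challenge at all unless the site is listed in its trusted-URIs preference, set via about:config (the hostname below is a placeholder):

    network.negotiate-auth.trusted-uris = https://myapp.example.com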
I am working on a RESTful web application using Go + the gorilla/mux v1.4 framework. Some basic security testing after a release revealed an open URL redirection vulnerability in the app: a user can submit a specially crafted request with an external URL that causes the server to respond with a 301 redirect.
I tested this using Burp Suite and found that any request that redirects to an external URL in the app seems to get a 301 Moved Permanently response. I've been looking at all possible ways to intercept these requests before the 301 is sent, but this behavior seems to be baked into the net/http server implementation.
Here is the raw request sent to the server (myapp.mycompany.com:8000):
GET http://evilwebsite.com HTTP/1.1
Accept: */*
Cache-Control: no-cache
Host: myapp.mycompany.com:8000
Content-Length: 0
And the response every time is:
HTTP/1.1 301 Moved Permanently
Location: http://evilwebsite.com/
Date: Fri, 13 Mar 2020 08:55:24 GMT
Content-Length: 0
Despite putting checks on request.URL in the http.Handler to prevent this type of redirect, I haven't had any luck getting the request to reach the handler. It appears that the base net/http server performs the redirect without ever letting the request reach my custom handler code as defined in the PathPrefix("/").Handler code.
My goal is to ensure the application returns a 404 Not Found or 400 Bad Request for such requests. Has anybody else faced this scenario with gorilla/mux? I tried the same with a Jetty web app and found it returned a perfectly valid 404. I've been at this for a couple of days now and could really use some ideas.
This is not the claimed open URL redirect security issue. The request is invalid in that the path contains an absolute URL with a different domain than the Host header. No sane client (i.e. a browser) can be lured into issuing such an invalid request in the first place, and thus there is no actual attack vector.
Sure, a custom client could be created to submit such a request. But a custom client could also be made to interpret the server's response in a non-standard way, or to visit a malicious URL directly without even contacting your server. In that case the client itself would be the problem, not the server's response.
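That said, if you'd still rather return a 400 for such requests than let the 301 happen, one option is to reject absolute-form request targets before the router ever sees them. A minimal sketch, assuming a gorilla/mux setup like the one described (the route, handler body, and port are placeholders):

    package main

    import (
    	"net/http"

    	"github.com/gorilla/mux"
    )

    // rejectAbsoluteTargets returns 400 for requests whose request-target is
    // an absolute URL pointing at a different host. Wrapping the router means
    // this runs before mux's path-cleaning 301 redirect can fire.
    func rejectAbsoluteTargets(next http.Handler) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		if r.URL.IsAbs() && r.URL.Host != r.Host {
    			http.Error(w, "Bad Request", http.StatusBadRequest)
    			return
    		}
    		next.ServeHTTP(w, r)
    	})
    }

    func main() {
    	r := mux.NewRouter()
    	r.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
    		w.Write([]byte("ok")) // placeholder handler
    	})
    	// Wrap the router itself, not individual routes, so the check runs first.
    	http.ListenAndServe(":8000", rejectAbsoluteTargets(r))
    }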
I have a site with two HTTPS servers. One (frontend) serves up a UI made of static pages. The other (backend) serves up a microservice. Both happen to use the same (test) X.509 certificate to identify themselves. Individually, I can connect to both over HTTPS requiring the client certificate "tester".
We were hiding CORS issues until now by going through an nginx setup that makes the frontend and backend appear to be the same origin. I have implemented the Access-Control-Allow-Origin and Access-Control-Allow-Credentials headers for all requests, with methods and headers for preflight (OPTIONS) requests.
In Chrome, cross-site requests like this work just fine. I can see that the front-end URLs and backend URLs are different sites, and I see the OPTIONS requests being made before the backend requests.
Even though Chrome doesn't seem to need it, I found the XMLHttpRequest object that will be used to perform the request and set xhr.withCredentials = true on it, because that seems to be what fetch.js does under the hood when it gets "credentials":"include". I noticed that there is an xhr.setRequestHeader function available that I might need to use to make Firefox happy.
Firefox behaves identically for the UI calls, but for all backend calls I get a 405. When this happens, no network connection is made to the server; the browser decides on the 405 without issuing any HTTPS request at all. Even though this differs from Chrome, it kind of makes sense: both the front-end UI and the backend service need a client certificate to be chosen. I chose the certificate "tester" when I connected to the UI. When Firefox goes to make a backend request, it could assume the same client certificate should be used to reach the backend, but maybe it assumes the certificate could be different, and there is something else I need to tell it.
Is anybody here using CORS in combination with two-way SSL certificates like this, and has fixed this Firefox problem somehow? I suspect it's not a server-side fix, but something the client needs to do.
Edit: see the answer here: https://stackoverflow.com/a/74744206/537554
I haven't actually tested this using client certificates, but I seem to recall that Firefox will not send credentials if Access-Control-Allow-Origin is set to the * wildcard instead of an actual domain. See this page on MDN.
Also there's an issue with Firefox sending a CORS request to a server that expects the client certificate to be presented in the TLS handshake. Basically, Firefox will not send the certificate during the preflight, creating a chicken-and-egg problem. See this bug on Bugzilla.
When using CORS with credentials (basic auth, cookies, client certificate, etc.):
Access-Control-Allow-Credentials must be true
Access-Control-Allow-Origin must not be *
Access-Control-Allow-Origin must not be multi-value (neither duplicated nor comma-delimited)
Access-Control-Allow-Origin must be set to exactly the value from the request's Origin header in order for the request to work (either hard-coded that way or if it passes a whitelist of allowed values)
The preflight OPTIONS request must not require credentials (including the client certificate). Part of the purpose of the preflight is to ask what is allowed in a CORS request, and therefore sending credentials before knowing if they are allowed is incorrect.
The preflight OPTIONS request must return a 200-level response, generally 204
Note: for Access-Control-Allow-Origin, you may want to consider allowing the value null, since redirect chains (like the ones typically used for OAuth) can cause a null Origin value in a request from a browser.
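Putting those rules together: a minimal sketch of a credentials-compatible CORS middleware in Go, assuming a whitelist of origins (the origin, methods, and headers below are placeholders):

    package main

    import "net/http"

    // allowedOrigins is the whitelist; the entry is a placeholder.
    var allowedOrigins = map[string]bool{
    	"https://frontend.example.com": true,
    }

    func cors(next http.Handler) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		origin := r.Header.Get("Origin")
    		if allowedOrigins[origin] {
    			// Echo the exact Origin value back; never "*" with credentials.
    			w.Header().Set("Access-Control-Allow-Origin", origin)
    			w.Header().Set("Access-Control-Allow-Credentials", "true")
    			w.Header().Add("Vary", "Origin")
    		}
    		if r.Method == http.MethodOptions {
    			// Answer the preflight with 204, without requiring credentials.
    			w.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE")
    			w.Header().Set("Access-Control-Allow-Headers", "Content-Type, Authorization")
    			w.WriteHeader(http.StatusNoContent)
    			return
    		}
    		next.ServeHTTP(w, r)
    	})
    }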
Today I came across the term "cookiejar" (package net/http/cookiejar). I tried to gather some information about it, but nothing intelligible came out. I know that a cookie is a key/value pair that the server sends to a client, e.g. Set-Cookie: foo=10; the browser stores it locally and then sends it back to the server on each subsequent request, e.g. Cookie: foo=10.
Ok, but what about cookiejar? What is it and what does it look like?
As you described in your question, cookies are managed by browsers (HTTP clients); they let servers store information on the client's computer, which the browser then sends back automatically on subsequent requests.
If your application acts as a client (you connect to remote HTTP servers using the net/http package), there is no browser to handle and manage cookies for you. By this I mean storing and remembering cookies that arrive in Set-Cookie: response headers, and attaching them to subsequent outgoing requests made to the same host/domain. Cookies also have an expiration date, which you would have to check before deciding to include them in outgoing requests.
The http.Client type, however, allows you to set a value of type http.CookieJar; if you do so, you get automatic cookie management that you would otherwise have to implement yourself. This enables you to make multiple requests with the net/http package that the server will see as part of the same session, just as if they were made by a real browser, since HTTP sessions (session IDs) are often maintained using cookies.
The package net/http/cookiejar is a CookieJar implementation which you can use out of the box. Note that this implementation is in-memory only which means if you restart your application, the cookies will be lost.
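A minimal sketch of the out-of-the-box jar in use (the URLs are placeholders):

    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/cookiejar"
    )

    func main() {
    	// An in-memory jar; cookies are lost when the process exits.
    	jar, err := cookiejar.New(nil)
    	if err != nil {
    		panic(err)
    	}
    	client := &http.Client{Jar: jar}

    	// Any Set-Cookie headers in this response are stored in the jar.
    	resp, err := client.Get("https://example.com/login") // placeholder URL
    	if err != nil {
    		panic(err)
    	}
    	resp.Body.Close()

    	// The jar automatically attaches matching, unexpired cookies here,
    	// so the server sees both requests as part of the same session.
    	resp, err = client.Get("https://example.com/profile") // placeholder URL
    	if err != nil {
    		panic(err)
    	}
    	resp.Body.Close()
    	fmt.Println(resp.Status)
    }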
So basically an HTTP cookie is a small piece of data sent from a website and stored in a user's web browser while the user is browsing that website.
CookieJar is a Go interface for a simple cookie manager (managing cookies from HTTP request and response headers), and the net/http/cookiejar package provides an implementation of that interface.
In general it is a datastore where an application (browser or not) puts the cookies it uses during requests and responses. So it is really a jar for cookies.
How can I make sure only a script hosted on a specific list of domains is allowed to connect to my WebSocket application?
Or, to prevent opinion-based close votes: is there a state-of-the-art or native way?
I do not intend to implement user authentication.
The mechanism for this with WebSocket is the Origin header.
This HTTP header is set by browsers to the domain of the host that served the HTML containing the JavaScript which opened the WebSocket connection.
A WebSocket server can inspect the Origin header during the initial opening handshake of the WebSocket protocol, and allow the connection to proceed only if the origin matches a known whitelist.
The header cannot be modified from JavaScript, and all browsers are required by the RFC 6455 specification to include it.
Caution: a non-browser WebSocket client can of course fake the origin header to any value it likes.
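For example, a sketch of the whitelist check using the gorilla/websocket package (my assumption; the allowed origin and port are placeholders):

    package main

    import (
    	"net/http"

    	"github.com/gorilla/websocket"
    )

    var allowedOrigins = map[string]bool{
    	"https://app.example.com": true, // placeholder origin
    }

    var upgrader = websocket.Upgrader{
    	// CheckOrigin runs during the opening handshake; returning false
    	// makes Upgrade reject the connection with a 403.
    	CheckOrigin: func(r *http.Request) bool {
    		return allowedOrigins[r.Header.Get("Origin")]
    	},
    }

    func wsHandler(w http.ResponseWriter, r *http.Request) {
    	conn, err := upgrader.Upgrade(w, r, nil)
    	if err != nil {
    		return // Upgrade has already written the error response
    	}
    	defer conn.Close()
    	// ... read and write messages ...
    }

    func main() {
    	http.HandleFunc("/ws", wsHandler)
    	http.ListenAndServe(":8080", nil)
    }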
#oberstet gave you the right answer.
If you are worried about bots or programmatic HTTP agents, then you are going to have a bad time: everything in an HTTP request can be spoofed. Your only option is to use cookies to attach a token with limited-time validity that certifies the user went through an allowed website to get that script. Read that cookie during the WebSocket handshake and decide whether to allow the connection.
E.g.: when a user visits your site, or one of your sites, return a cookie with a symmetrically encrypted token based on the user's IP address, User-Agent header, and Origin header. When the user initiates a WebSocket connection within the same second-level domain, the browser will send the cookie; if the data adds up, allow the connection, otherwise reject it. If the WebSocket is in another domain, you will have to forget about cookies and instead rely on a WebSocket message, once the connection is established, to check the validity of the connection.
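A minimal sketch of that idea, using an HMAC-signed token rather than the symmetric encryption described above (the key, cookie name, and lifetime are all assumptions; a production version should also embed an expiry timestamp in the signed message):

    package main

    import (
    	"crypto/hmac"
    	"crypto/sha256"
    	"encoding/hex"
    	"net"
    	"net/http"
    )

    var secret = []byte("replace-with-a-real-key") // assumption: demo key only

    // fingerprint binds the token to the client's IP and User-Agent.
    func fingerprint(r *http.Request) string {
    	host, _, _ := net.SplitHostPort(r.RemoteAddr) // drop the ephemeral port
    	return host + "|" + r.UserAgent()
    }

    func sign(msg string) string {
    	mac := hmac.New(sha256.New, secret)
    	mac.Write([]byte(msg))
    	return hex.EncodeToString(mac.Sum(nil))
    }

    // issueToken is called from a normal page handler on your site.
    func issueToken(w http.ResponseWriter, r *http.Request) {
    	http.SetCookie(w, &http.Cookie{
    		Name:     "ws_token", // hypothetical cookie name
    		Value:    sign(fingerprint(r)),
    		MaxAge:   300, // limited-time validity: 5 minutes
    		HttpOnly: true,
    		Secure:   true,
    	})
    }

    // verifyToken is called during the WebSocket handshake.
    func verifyToken(r *http.Request) bool {
    	c, err := r.Cookie("ws_token")
    	if err != nil {
    		return false
    	}
    	return hmac.Equal([]byte(c.Value), []byte(sign(fingerprint(r))))
    }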
I'm using an Angular service to GET a resource via a REST API. The server sets the ETag header to some value and also sets Cache-Control: no-cache in its response.
This works as expected in Firefox, but when I access the same app using Chrome, it does not send the If-None-Match header. I've tried the current Chrome dev and stable channels on both a Mac and an Ubuntu box, and it was the same on both, while Firefox was adding If-None-Match correctly.
Now, there are other non-XHR/static resources that are fetched conditionally, and all those requests correctly get a 304 Not Modified response.
Is there anything I can do to get more information about why Chrome is not sending the If-None-Match header only for XHR requests?
If you're issuing an Ajax query in Chrome over HTTPS, any certificate errors, such as using a self-signed cert on your API server, prevent the response from being cached. This seems to be by design.
Evidently a Chrome defect existed, but it was fixed in WebKit and made it into Chromium/Chrome around 2010.
Another question recommends setting the If-Modified-Since and If-None-Match headers manually using jQuery's ifModified: true and cache: true options. Unfortunately this won't override Chrome's intended behavior of not caching HTTPS responses from a server with a self-signed certificate.
Testing on a server with a valid signed SSL certificate solved the issue for me; Chrome received 304s for text/html content as expected, using the default jQuery AJAX methods.