I have a 100% HTTPS web server that does TLS renegotiation. This is very useful: users can come to the site and get some nice pages before clicking the login button and being asked for their client certificate. Below is the part of the code that does the renegotiation (lines 213-236 of the X509Cert class):
import org.jboss.netty.handler.ssl.SslHandler

val sslh = r.underlying.context.getPipeline.get(classOf[SslHandler])
trySome(sslh.getEngine.getSession.getPeerCertificates.toIndexedSeq) orElse {
  if (!fetch) None
  else {
    sslh.setEnableRenegotiation(true) // todo: does this have to be done on every request?
    r match {
      case UserAgent(agent) if needAuth(agent) => sslh.getEngine.setNeedClientAuth(true)
      case _                                   => sslh.getEngine.setWantClientAuth(true)
    }
    val future = sslh.handshake()
    future.await(30000) // that's certainly way too long
    if (future.isDone && future.isSuccess)
      trySome(sslh.getEngine.getSession.getPeerCertificates.toIndexedSeq)
    else
      None
  }
}
Now I was expecting that once someone authenticates with an X509 client certificate, the TLS session would last a little while and remember that certificate chain - say 10 minutes or more, and in any case at the very least 1 minute. Indeed, that is why I have the option of calling the above method with the variable "fetch" set to false: I am hoping that once someone has authenticated, the connection does not need to be renegotiated.
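As I understand it, that lifetime is governed by the JSSE server-side session cache. A minimal sketch of how it can be tuned when the server's SSLContext is built (the keyManagers/trustManagers setup is assumed, not my actual code):

import javax.net.ssl.SSLContext

// Built once at server startup; keyManagers/trustManagers come from the keystore.
val ctx = SSLContext.getInstance("TLS")
ctx.init(keyManagers, trustManagers, null)

// The server session cache controls how long an authenticated TLS session
// (and with it the peer's certificate chain) is remembered across connections.
val sessions = ctx.getServerSessionContext
sessions.setSessionTimeout(10 * 60) // in seconds: keep sessions for 10 minutes
sessions.setSessionCacheSize(1000)  // bound the number of cached sessions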
But I noticed that with most browsers, it looks like I need to call sslh.handshake() every time if I want to get the session and return the X509 certificates. If "fetch" is set to false, then I mostly get None back.
Is this normal, or is there something I am doing wrong?
PS.
The code above is part of an implementation of the WebID protocol.
This is using Netty 3.2.5.Final. I also tried with 3.2.7.Final, without any more luck.
So I had to change the code of the service currently running the above so that it always forces a handshake (see the commit). But this does not give me as much flexibility as I had hoped for.
It turns out I was having a problem with unfiltered's use of Netty. I found this when implementing the same thing on Play 2.0.
See bug report 215 on Play 2.0, and unfiltered bug 111: secure netty creates new TLS contexts on each connection.
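The gist of that bug: a fresh SSLContext (and with it an empty TLS session cache) was being created for every connection, so no session could ever be resumed. A sketch of the shape the fix takes - one shared context, one new SSLEngine per connection (the keyManagers setup is assumed):

import javax.net.ssl.SSLContext
import org.jboss.netty.handler.ssl.SslHandler

// Build the SSLContext ONCE, so all connections share one session cache.
lazy val sharedContext: SSLContext = {
  val ctx = SSLContext.getInstance("TLS")
  ctx.init(keyManagers, null, null)
  ctx
}

// Per connection, only a new SSLEngine is created from the shared context.
def newSslHandler(): SslHandler = {
  val engine = sharedContext.createSSLEngine()
  engine.setUseClientMode(false)
  new SslHandler(engine)
}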
I have a WebSocket client that uses the Netty WebSocketClientCompressionHandler to support the compression extension. For this extension to work properly, I need to set allowExtensions to true when creating a handshaker via WebSocketClientHandshakerFactory.newHandshaker.
At times, when the server does not support these extensions, it responds without a Sec-WebSocket-Extensions header. In that case, if reserved (RSV) bits are used, the client should terminate the connection immediately.
Since I am creating the WebSocketClientHandshaker before I get any response from the server, I am unable to set allowExtensions to false afterwards, when I come to know that the server does not support extensions.
Is it in any way possible to set allowExtensions to false after I receive the response from the server (or otherwise inform Netty), so that Netty will close the connection if an RSV bit is set, due to protocol violation?
(For the server implementation, I check the client request headers for Sec-WebSocket-Extensions before creating the handshaker, which is fine.)
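That check is possible on the server because the request headers arrive before the handshaker is built; roughly like this (the URL and the surrounding handler wiring are illustrative):

import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.websocketx.WebSocketServerHandshaker;
import io.netty.handler.codec.http.websocketx.WebSocketServerHandshakerFactory;

// Decide allowExtensions from what the client actually offered,
// before the handshaker is created.
WebSocketServerHandshaker newHandshaker(FullHttpRequest req) {
    boolean clientOffersExtensions =
            req.headers().contains(HttpHeaderNames.SEC_WEBSOCKET_EXTENSIONS);

    WebSocketServerHandshakerFactory factory = new WebSocketServerHandshakerFactory(
            "wss://example.com/ws",   // websocketURL (illustrative)
            null,                     // subprotocols
            clientOffersExtensions);  // allowExtensions

    return factory.newHandshaker(req);
}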
The only solution I had was to replace the WebSocketFrameDecoder after finishing the handshake, setting allowExtensions to false if the handshake response does not have the extensions header:
handshaker.finishHandshake(ctx.channel(), handshakeResponse);
Channel channel = ctx.channel();
String extensionsHeader =
        handshakeResponse.headers().getAsString(HttpHeaderNames.SEC_WEBSOCKET_EXTENSIONS);
if (extensionsHeader == null) {
    // The server negotiated no extension: replace the frame decoder with one
    // that treats set RSV bits as a protocol violation and closes the connection.
    channel.pipeline().replace(WebSocketFrameDecoder.class, "ws-decoder",
            new WebSocket13FrameDecoder(
                    false,                               // expectMaskedFrames (server-to-client frames are unmasked)
                    false,                               // allowExtensions: reject frames with RSV bits set
                    handshaker.maxFramePayloadLength(),
                    false));                             // allowMaskMismatch
}
I'm trying to make a web server in Rust for a simple browser game. I want the server to deliver pages over HTTPS, but also to communicate through WebSockets. I'm planning to put this server on Heroku, but since they only allow one port per application, the WebSocket server has to operate on the same port as the rest of the HTTPS code.
It seems like this is possible with crates like rust-websocket, but that crate uses an outdated version of hyper and seems to be no longer maintained. The tokio-tungstenite crate is much more up to date.
The problem is that hyper and tungstenite each have their own implementation of the HTTP protocol that WebSockets operates over, with no way to convert between the two. This means that once an HTTPS request has been parsed by either hyper or tungstenite, there is no way to continue the processing with the other one: you can't try to connect the WebSocket with tungstenite, match on an error, and hand the request to hyper, nor can you parse the request with hyper, check whether it's a WebSocket request, and send it over to tungstenite. Is there any way to resolve this problem?
I think it should be possible to do that. Both tungstenite and tokio-tungstenite allow you to specify custom headers (there are helper functions for that, prefixed with hdr), so depending on the hyper version you use, if you can convert a request into some form where the headers can be extracted, you can pass them on to tungstenite.
You might also want to try the warp crate: it's built on top of hyper, and it uses tungstenite under the hood for WebSocket support. If you want to write your own version of warp, you can take a look at its source code, which contains hints on how to use hyper and tungstenite together.
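For a sense of how this looks from the outside, here is a minimal warp sketch (the route name and the logging body are purely illustrative; it assumes a recent warp with async/await support):

use futures::StreamExt;
use warp::Filter;

#[tokio::main]
async fn main() {
    // HTTPS pages and the websocket route can be served from the same port.
    let ws_route = warp::path("ws")
        .and(warp::ws())
        .map(|ws: warp::ws::Ws| {
            // warp performs the upgrade handshake, then hands over a
            // tungstenite-backed stream of websocket messages.
            ws.on_upgrade(|mut socket| async move {
                while let Some(Ok(msg)) = socket.next().await {
                    println!("received: {:?}", msg);
                }
            })
        });

    warp::serve(ws_route).run(([127, 0, 0, 1], 3030)).await;
}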
You can do it, but it's quite fiddly. You'll have to use tokio-tungstenite, do the handshake yourself (check the headers, set the response headers), and spawn a new future on the runtime that will handle the WebSocket connection. With the latest version of hyper, the new future can be created by calling on_upgrade() on the request body, and the connection can then be passed to tokio_tungstenite::WebSocketStream::from_raw_socket to turn it into a WebSocket connection.
Example handler (note that this doesn't fully check the request headers and assumes we want an upgrade):
fn websocket(req: Request<Body>) -> Result<Response<Body>, &'static str> {
    // TODO: check the other headers (Connection, Upgrade, Sec-WebSocket-Version)
    let key = match req.headers().typed_get::<headers::SecWebsocketKey>() {
        Some(key) => key,
        None => return Err("failed to read ws key from headers"),
    };
    let websocket_future = req
        .into_body()
        .on_upgrade()
        .map_err(|err| eprintln!("Error on upgrade: {}", err))
        .and_then(|upgraded| {
            // Wrap the raw upgraded connection in a tungstenite websocket stream.
            let ws_stream = tokio_tungstenite::WebSocketStream::from_raw_socket(
                upgraded,
                tokio_tungstenite::tungstenite::protocol::Role::Server,
                None,
            );
            // Echo: forward every incoming frame straight back to the client.
            let (sink, stream) = ws_stream.split();
            sink.send_all(stream)
                .map(|_| ())
                .map_err(|err| eprintln!("{}", err))
        });
    hyper::rt::spawn(websocket_future);

    let mut upgrade_rsp = Response::builder()
        .status(StatusCode::SWITCHING_PROTOCOLS)
        .body(Body::empty())
        .unwrap();
    upgrade_rsp
        .headers_mut()
        .typed_insert(headers::Upgrade::websocket());
    upgrade_rsp
        .headers_mut()
        .typed_insert(headers::Connection::upgrade());
    upgrade_rsp
        .headers_mut()
        .typed_insert(headers::SecWebsocketAccept::from(key));
    Ok(upgrade_rsp)
}
I am using Nancy.Authentication.Forms. The users are vociferously complaining that their sessions are timing out too quickly (in less than 5 minutes). I've searched high and low but cannot find how to set the session timeout to one hour. Any help is greatly appreciated.
Thanks,
Ross
I'm not 100% sure what could be causing it, given the little information provided, but I think your problem is probably due to the encryption/decryption of the cookie.
Cookie encryption differs on application startup
By default, a new key is created every time you start the application. That makes decryption of any existing cookie fail, which makes authentication fail and requires you to re-authenticate to create a new cookie.
By setting your own keys, decryption works the same after every restart. This is VERY important if you have a load balancer.
https://github.com/NancyFx/Nancy/wiki/Forms-Authentication#a-word-on-cryptography
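A sketch of what fixed keys look like, following that wiki page (the passphrases and salt below are placeholders; generate your own):

using Nancy.Cryptography;

// Derive the keys from configured passphrases instead of random ones,
// so cookies survive application restarts and work behind a load balancer.
var cryptographyConfiguration = new CryptographyConfiguration(
    new RijndaelEncryptionProvider(
        new PassphraseKeyGenerator("SuperSecretPass", new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 })),
    new DefaultHmacProvider(
        new PassphraseKeyGenerator("UberSuperSecure", new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 })));

var formsAuthConfiguration = new FormsAuthenticationConfiguration
{
    CryptographyConfiguration = cryptographyConfiguration,
    RedirectUrl = "~/login",                      // illustrative
    UserMapper = container.Resolve<IUserMapper>() // illustrative
};

FormsAuthentication.Enable(pipelines, formsAuthConfiguration);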
Since I also did not find an answer, I would like to post my research results:
WithCookie accepts an expiration DateTime.
https://github.com/NancyFx/Nancy/blob/a9989a68db1c822d897932b2af5689284fe2ceb8/src/Nancy/ResponseExtensions.cs
public static Response WithCookie(this Response response, string name, string value, DateTime? expires);
In my case it would be:
return Response.AsRedirect(redirectUrl)
    .WithCookie(SecurityExtensions.CookieName, newSession.Id.ToString(), DateTime.Now.AddHours(1));
For the session duration itself, I haven't found anything yet.
I have a question involving a "timeout" when sending an HTTPS GET request using the ServerXMLHTTP object.
In order to fool the object into sending the request with the logged-in user's id and password, I set it up to use a dummy proxy and then excluded the domain of the URL (which is on the intranet). So the variable url_to_get contains .mydomain.com, while the proxy address is actually "not.used.com".
// JScript source code
HTTP_RequestObject = new ActiveXObject("Msxml2.ServerXMLHTTP.6.0");

// Using logged-in username authentication
HTTP_RequestObject.open("GET", url_to_get, false);
HTTP_RequestObject.setProxy(2, "not.used.com", "*.mydomain.com");
try
{
    HTTP_RequestObject.send();
}
catch (e)
{
    // exceptions are logged here
}
In the catch block, I log an exception of "(0x80072EE2) The operation timed out". This is timestamped 1 to 2 seconds after a log message written right before the open.
A retry will work as expected, and this can be repeated over and over again. Is this something on the server side? Or is it a result of the proxy?
Well, this was painful and embarrassing. I figured out the root cause of the timeout issue I was seeing: I had set the timeouts to "defaults" that were three orders of magnitude smaller than the real defaults. So even when I increased them to what I thought were really large values, I was still shorter than the defaults.
I had skimmed through the wording on the Microsoft page describing the setTimeouts() method and misinterpreted the time base of the parameters. I assumed seconds, while it is actually milliseconds.
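For anyone else who hits this, here is a sketch of the call with all four parameters, which are in milliseconds (the values are just examples, not recommendations):

// JScript: all four setTimeouts() parameters are in MILLISECONDS.
// Order: resolveTimeout, connectTimeout, sendTimeout, receiveTimeout
var HTTP_RequestObject = new ActiveXObject("Msxml2.ServerXMLHTTP.6.0");
HTTP_RequestObject.setTimeouts(5000,    // resolve: 5 seconds, not "5000 seconds"
                               60000,   // connect: 60 seconds
                               30000,   // send: 30 seconds
                               30000);  // receive: 30 seconds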
During the process of debugging this issue, I duplicated the code using the alternative COM object, "WinHttp.WinHttpRequest.5.1", and in verifying the equivalent API, SetTimeouts(), discovered the faux pas.
I did learn a few things in the process, so all is not lost: "WinHttp.WinHttpRequest.5.1" has a SetAutoLogonPolicy() [3] method that allows me to skip the hokey "proxy" foolishness required with "Msxml2.ServerXMLHTTP.6.0" to force it to send the user's credentials to an intranet server. I also fiddled with Fiddler [4] and learned enough to be dangerous!
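For example (reusing the same url_to_get; policy 0 means "always send credentials"):

// JScript: WinHttp can send the logged-in user's credentials directly,
// so no dummy-proxy trick is needed.
var WinHttp_RequestObject = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
WinHttp_RequestObject.SetAutoLogonPolicy(0); // 0 = AutoLogonPolicy_Always
WinHttp_RequestObject.open("GET", url_to_get, false);
WinHttp_RequestObject.send();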
Hopefully someone else can learn from my mistake and find this useful in debugging their own issue in the future.
Here are some inline links since I don't have enough rep to post more than two:
[3] : msdn.microsoft.com/en-us/library/windows/desktop/aa384050%28v=vs.85%29.aspx
[4] : fiddler2.com
Msxml2.ServerXMLHTTP.6.0 can be used via a proxy. It took me a while, but persistence paid off. I moved from WinHttp.WinHttpRequest.5.1 to Msxml2.ServerXMLHTTP.6.0 on the advice of Microsoft.
Where varHTTP is your object reference to Msxml2.ServerXMLHTTP.6.0, you can use
varHTTP.setproxy 2, ProxyServerName, "BypassList"
Hope this helps, and that you pursue onwards with Msxml2.ServerXMLHTTP.6.0.
Recently I downloaded Phil Sturgeon's REST server for CodeIgniter.
I reviewed the source code, and when I came to the Digest authentication part I saw the following code:
if ($this->input->server('PHP_AUTH_DIGEST'))
{
    $digest_string = $this->input->server('PHP_AUTH_DIGEST');
}
elseif ($this->input->server('HTTP_AUTHORIZATION'))
{
    $digest_string = $this->input->server('HTTP_AUTHORIZATION');
}
else
{
    $digest_string = "";
}
And a bit later, after some checks for the absence of $digest_string and the presence of a username:
// This is the valid response expected
$A1 = md5($digest['username'].':'.$this->config->item('rest_realm').':'.$valid_pass);
$A2 = md5(strtoupper($this->request->method).':'.$digest['uri']);
$valid_response = md5($A1.':'.$digest['nonce'].':'.$digest['nc'].':'.$digest['cnonce'].':'.$digest['qop'].':'.$A2);

if ($digest['response'] != $valid_response)
{
    header('HTTP/1.0 401 Unauthorized');
    header('HTTP/1.1 401 Unauthorized');
    exit;
}
On Wikipedia I see the following text about HTTP Digest Auth:
For subsequent requests, the hexadecimal request counter (nc) must be greater than the last value it used – otherwise an attacker could simply "replay" an old request with the same credentials. It is up to the server to ensure that the counter increases for each of the nonce values that it has issued, rejecting any bad requests appropriately.
The server should remember nonce values that it has recently generated. It may also remember when each nonce value was issued, expiring them after a certain amount of time. If an expired value is used, the server should respond with the "401" status code and add stale=TRUE to the authentication header, indicating that the client should re-send with the new nonce provided, without prompting the user for another username and password.
However, I can't see anything in the source code that checks cnonce, nc, or nonce.
Does this mean that somebody who recorded a request from client to server that passed authentication may just "replay" it in the future and receive a fresh value of the resource?
Is this really a vulnerability? Or did I misunderstand something?
I noticed this too when looking at codeigniter-restserver. It IS vulnerable to replay attacks because, as you said, it does not enforce the nonce.
Digest authentication requires a handshake:
1. The client makes a request with an Authorization header. It will fail because the client does not yet know the nonce.
2. The server responds with a WWW-Authenticate header that contains the correct nonce to use.
3. The client makes the same request using the nonce provided in the server response.
4. The server checks that the nonces match and serves the requested URL.
To accomplish this, you'll need to start a session on your REST server to remember the nonce. An easy scheme for ensuring the nonce is always unique is to base it on the current time, using a function such as uniqid(); see the sketch below.
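A rough sketch of that scheme (the structure and names here are mine, not from codeigniter-restserver):

<?php
session_start();

// Issue a fresh, time-based nonce and remember it (plus the last nc seen).
function issue_nonce()
{
    $_SESSION['nonce'] = uniqid('', true);
    $_SESSION['nc'] = 0;
    header('WWW-Authenticate: Digest realm="REST API", qop="auth", nonce="'.$_SESSION['nonce'].'"');
    header('HTTP/1.1 401 Unauthorized');
    exit;
}

// Reject unknown nonces and any nc that does not strictly increase;
// this is what blocks straight replays of a recorded request.
function check_nonce($digest)
{
    if (!isset($_SESSION['nonce']) || $digest['nonce'] !== $_SESSION['nonce']) {
        return false;
    }
    $nc = hexdec($digest['nc']);
    if ($nc <= $_SESSION['nc']) {
        return false;
    }
    $_SESSION['nc'] = $nc;
    return true;
}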