I have a question concerning Sonos' client certificate. I didn't find any mention of it in the official documentation pages.
Do the speakers automatically send the client certificate on getMediaUri requests, or does the server need to require it in the SSL negotiation?
It would be neat if the speakers sent the client certificate all the time, because if the server needs to explicitly require the client certificate on the secure endpoint, other APIs are impacted as well (createItem, for example), whereas the only thing that really needs to be secured is the streaming URL.
The server does not need to require the cert on every request, but if you DO require it, it is something that can be sent each time.
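For what it's worth, here is a minimal sketch of what "requiring it" means at the TLS layer, using Python's ssl module (not Sonos-specific; the file names, CA bundle and port are placeholders). One option is to put only the secure streaming endpoint behind a listener configured like this and leave createItem and the other endpoints on a listener that never asks for the certificate:

```python
import socket
import ssl

# Placeholders: your server cert/key and the CA that issues the client
# certificates you want to verify.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
ctx.load_verify_locations(cafile="client_ca.pem")

# CERT_REQUIRED: the TLS layer asks for and verifies the client certificate
# during the handshake on this listener. CERT_OPTIONAL asks for it but lets
# the handshake succeed without one.
ctx.verify_mode = ssl.CERT_REQUIRED

with socket.create_server(("0.0.0.0", 8443)) as listener:
    conn, _ = listener.accept()
    with ctx.wrap_socket(conn, server_side=True) as tls_conn:
        # The verified client certificate (a dict of its fields), available
        # to whatever serves the secure streaming URL.
        print(tls_conn.getpeercert())
```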
I've read a lot of related topics on the net, but I still don't have an answer to my question.
Is it possible to implement the flow described below?
The proxy receives a request.
If the request is encrypted and the proxy cert is trusted, then intercept it.
If the request is not encrypted, then intercept it.
If the request is encrypted and the proxy cert is NOT trusted, then pass it through without interception.
This behaviour should be the default for all traffic going through the proxy.
It'd also be really nice to be able to get all possible info about the encrypted requests that are passed through (source and destination IP addresses, etc.), basically the same info I can get with Fiddler.
Not really. The main problem is that mitmproxy cannot know whether the proxy cert is trusted by the client or not.
In the SSL/TLS protocol, the client starts with the CLIENT_HELLO message, and in response the server (in this case mitmproxy) sends back the SERVER_HELLO message containing the generated server certificate.
The client now checks whether the received server certificate is trusted. If not, the connection is terminated. As far as I know, the SSL/TLS spec does not define how to do so. Some clients send back an SSL alert message, others simply drop the connection, and a third group continues the SSL/TLS handshake but with certain internal values set in a way that always lets the handshake fail.
There is a mitmproxy example script that tries to identify connections that were not successful and then, if the client asks for the same domain a second time, bypasses interception.
Of course this requires that the client resend requests, which is not always the case.
https://github.com/sociam/x-ray/blob/master/mitmproxy/examples/tls_passthrough.py
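For reference, roughly the same idea written against the newer mitmproxy addon API (hook names as of mitmproxy 8+, so double-check them against the docs for your version; this is a sketch, not a drop-in replacement for the linked script):

```python
"""tls_maybe_passthrough.py -- rough sketch only.

The first failed handshake for a host is recorded, and the next connection
to that host is passed through unmodified.
"""
import logging

from mitmproxy.proxy.layers import tls


class MaybeTlsPassthrough:
    def __init__(self):
        # Server names (SNI) whose clients rejected our interception cert.
        self.failed_hosts: set[str] = set()

    def tls_clienthello(self, data: tls.ClientHelloData):
        sni = data.client_hello.sni
        if sni and sni in self.failed_hosts:
            # Do not generate an interception certificate for this host;
            # just relay the encrypted bytes as-is.
            data.ignore_connection = True
            logging.info("TLS passthrough for %s", sni)

    def tls_failed_client(self, data: tls.TlsData):
        # The client aborted our handshake (alert, dropped socket, ...),
        # which usually means it does not trust the mitmproxy CA.
        sni = data.conn.sni
        if sni:
            self.failed_hosts.add(sni)
            logging.info("client rejected our cert for %s, passing through next time", sni)


addons = [MaybeTlsPassthrough()]
```

Run it with mitmdump -s tls_maybe_passthrough.py. As noted above, this can only help from the second connection to a host onwards.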
I'd like to know if it's possible for web services to detect HTTPS connections with "faked" root certificates created by Fiddler4 (Web debugging proxy) to prevent reverse engineering.
Is there any method to check whether the encryption is done with the original certificate or with one made by Fiddler?
A server has no way to know what certificate the client received unless the client sends the server that information.
From client JavaScript, you cannot detect such interception today; JavaScript does not expose the capabilities to introspect the certificate. It is possible to use Java or Flash inside a webpage to inspect the certificate received upon connecting to a server, but a sufficiently devious interceptor could just avoid MITM'ing the Java/Flash connection.
In contrast, a native code client application can detect what certificate was presented by the server and reject any certificate that doesn't match the expected certificate; this is called certificate pinning and it's a technique used by some applications. Note that this will block more than Fiddler; it'll also block connections through corporate inspection proxies (e.g. BlueCoat, ISA TMG, etc) and through some popular consumer antivirus programs' proxies (e.g. BitDefender). More importantly, users can circumvent your certificate pinning checks if they like; your code is running on their device, and they have the ability to modify your code in memory to strip out your certificate pinning checks. On some mobile devices, this code modification requires "jail-breaking" the device, but this isn't an insurmountable barrier.
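To illustrate the pinning idea, here is a rough Python sketch that pins the SHA-256 fingerprint of the leaf certificate (the pin value is a placeholder you would capture out of band); native clients do the equivalent with their platform's TLS API:

```python
import hashlib
import socket
import ssl

# Placeholder: the hex SHA-256 fingerprint of the certificate you expect,
# captured out of band and shipped with the application.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"


def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it unless the server presents the pinned cert."""
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)), server_hostname=host)
    leaf = sock.getpeercert(binary_form=True)  # DER bytes of the presented certificate
    if hashlib.sha256(leaf).hexdigest() != EXPECTED_SHA256:
        sock.close()
        # A Fiddler or corporate-proxy certificate lands here even though it
        # chained to a root the OS trusts.
        raise ssl.SSLError("pinned certificate mismatch, possible interception")
    return sock
```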
Scenario: Sensitive information is exchanged (1) from client to server AND (2) from server to client.
Problem: Data exchanged is not encrypted, so sniffing is easy (it's always theoretically possible, right?)
Solution: Encrypt all data transmitted in either direction (server-to-client and client-to-server).
Implementation:
(1) Client to server - Generate a certificate, install the private key on the server and configure Tomcat to work over HTTPS (many tutorials for this online).
(2) Server to client - The private key goes to (or is generated by) the clients; however, some tutorials strongly emphasize that every client should have their own certificate for the sake of authentication.
Question: If I am already authenticating my users through a database username/password (hashed with salt) combo, but I still need to encrypt server-to-client data transmissions to reduce chance of sniffing, can I just generate one private key for all clients? Are there other ways of achieving what I need with Tomcat/Spring?
It seems you're mixing something up:
Regular HTTPS includes encryption in both directions and needs only a private key + certificate on the server side. Once a client requests resources through HTTPS, they get the answer encrypted. So you'll just need to enforce the HTTPS connection (e.g. by redirecting certain requests to HTTPS and not delivering any data over plain HTTP).
If you want client certificates, these are purely used for client authentication, so sharing a common client key/certificate with all possible clients would defeat that purpose. Having client keys/certs does not add any more encryption to your data transfer.
Answering to your follow-up question in the comment:
For HTTPS, the server keeps its private key; the public key is what is shared with the client. With typical HTTPS, the client can be reasonably sure who the server is (authentication, done through the trustworthy signature on the server's public key; this is what you pay trust centers for). However, the server has no clue who the client is (here client certificates would come into play, but purely for authentication, not for encryption).
Server and client negotiate a common session key; there are many different implementations of the key-exchange protocol for this purpose. This forum is probably not the right place to describe session negotiation and the SSL handshake again, but you can be sure that you only need a server-side key for the purpose you describe above. Take any website as an example: if you go to Google Mail, their HTTPS encryption works through them having a private key and a certified (signed) public key. You have no client-side certificate, but you provide your username and password to them through the encrypted connection. Otherwise you'd have to install a client-side key/certificate for a lot of services, and that would be too much of a burden for the average internet user.
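To make that last point concrete, here is a small Python sketch (the login URL and credentials are made up): the client never loads a private key or certificate of its own; it only verifies the server, and the username/password travel inside the already-encrypted channel:

```python
import json
import ssl
from urllib import request

# The default context holds CA certificates for verifying the *server*;
# no client-side private key or certificate is involved anywhere.
ctx = ssl.create_default_context()

body = json.dumps({"username": "alice", "password": "s3cret"}).encode()
req = request.Request(
    "https://example.com/api/login",  # hypothetical login endpoint
    data=body,
    headers={"Content-Type": "application/json"},
)
with request.urlopen(req, context=ctx) as resp:  # POST, because data= is set
    print(resp.status)
```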
Hope that helps.
Is the data secure if posted programmatically (not through a browser) to an https endpoint?
My understanding is browser encrypts data and sends it to https endpoint. How can a Ruby or Node.js or any other program do the same?
Yes. If you connect to an https endpoint with curl, wget, or whatever library, the transfer is secure from the source of the connection to the destination. That source could be a server (your webserver) or the client browser.
However, if it's done in client-side JS or another browser scripting language, you also have to make sure the initial request from the client to your site is secure, if you first pass sensitive data to the client for it to pass on to the destination https server.
I checked the Node.js request library as well as Ruby's HTTParty library. Both support SSL encryption given the proper options (port: 443, etc.). In general, if we use any well supported library that does HTTP GETs and POSTs, we should be covered in terms of transmitting data securely to the https endpoint.
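For illustration, here is the same idea sketched in Python with the requests library (the endpoint is made up); Ruby's HTTParty and Node's request/https modules behave analogously as long as certificate verification is left at its default:

```python
import requests  # any well-supported HTTP library will do

# One call: the library opens the TLS connection, verifies the server's
# certificate chain (verify=True is the default), and only then sends the
# request, so the payload is encrypted in transit.
resp = requests.post(
    "https://api.example.com/v1/messages",  # hypothetical endpoint
    json={"body": "sensitive payload"},
    timeout=10,
)
print(resp.status_code)
```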
I think I understand what you mean, and that question has been answered. However, I would just point out that HTTPS does not make your data secure, only the connection, and even then it only protects against eavesdropping in transit, which on its own is not full security.
There is, of course, lots more to think about and do to make your data secure end-to-end.
I'm trying to develop an XMPP "proxy" which will sit in the middle of a standard Jabber communication.
The schema will be something like this:
Pidgin ---> Proxy <--- eJabberD
              |
              v
           Console
The purpose of this proxy is to log all the stanzas that go over the wire. IMHO, this is very convenient when you're developing XMPP-based solutions.
I'm doing this with EventMachine and Ruby, and the main problem is knowing how to decipher the traffic after the TLS/SASL handshake.
Before the starttls everything works perfectly; the server and client can talk to each other. But when the TLS handshake begins, although it works, it is impossible to dump the clear content because all of the traffic is encrypted.
I'm not an expert in the TLS/SASL area, so I don't know what the best approach is. I think one way to achieve this might be to grab the certificate during the handshake and use it to decipher the content as it goes through the proxy.
Thanks!
If you could do what you say (grab the certificate on the wire and use it to decrypt), then TLS would be pretty worthless. This is one of the primary attacks TLS exists to prevent.
If the server will allow it, just don't send starttls; it is not required by the spec. If starttls is required by your server, you can configure it to use a null cipher, which will leave the traffic unencrypted. Not all servers will support that, of course.
You can man-in-the-middle the starttls (sketched below): respond with your own tunnel to the client, and run a separate starttls negotiation with the server. This should generate certificate warnings on the client, but since you control the client you can tell it to accept the certificate anyway.
If you control the server, you can use its private key to decrypt the traffic. I'm not aware of any off-the-shelf code to do that easily, but it's writable.
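To make the man-in-the-middle option more concrete, here is a rough Python sketch (host names, ports and proxy.pem are placeholders; it assumes each XML element arrives in a single read, which is usually good enough for a local logging tool, and it skips upstream certificate validation because it is purely a debugging aid):

```python
import select
import socket
import ssl
import threading

LISTEN_ADDR = ("127.0.0.1", 5222)            # point Pidgin here
UPSTREAM_ADDR = ("xmpp.example.org", 5222)   # the real eJabberD (placeholder)
PROXY_CERT = "proxy.pem"                     # our own cert+key; accept it manually in Pidgin

STARTTLS = b"<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>"
PROCEED = b"<proceed xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>"


def pump(src, dst, label):
    """Relay bytes one way and dump the decrypted stanzas to the console."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(f"[{label}] {data.decode(errors='replace')}")
        dst.sendall(data)


def handle(client):
    server = socket.create_connection(UPSTREAM_ADDR)

    # Phase 1: relay the plaintext stream setup until the client asks for starttls.
    negotiating = True
    while negotiating:
        readable, _, _ = select.select([client, server], [], [])
        for sock in readable:
            data = sock.recv(4096)
            if not data:
                return                       # peer went away before starttls
            if sock is client:
                print(f"[C->S] {data.decode(errors='replace')}")
                if b"<starttls" in data:
                    negotiating = False      # don't forward it; we handle TLS ourselves
                    break
                server.sendall(data)
            else:
                print(f"[S->C] {data.decode(errors='replace')}")
                client.sendall(data)

    # Phase 2a: answer the client ourselves and present *our* certificate.
    client.sendall(PROCEED)
    srv_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    srv_ctx.load_cert_chain(PROXY_CERT)
    tls_client = srv_ctx.wrap_socket(client, server_side=True)

    # Phase 2b: run a separate starttls negotiation with the real server.
    server.sendall(STARTTLS)
    server.recv(4096)                        # expect <proceed/>
    cli_ctx = ssl.create_default_context()
    cli_ctx.check_hostname = False           # debugging tool only
    cli_ctx.verify_mode = ssl.CERT_NONE
    tls_server = cli_ctx.wrap_socket(server, server_hostname=UPSTREAM_ADDR[0])

    # Phase 3: relay decrypted traffic in both directions, logging everything.
    threading.Thread(target=pump, args=(tls_client, tls_server, "C->S"), daemon=True).start()
    pump(tls_server, tls_client, "S->C")


with socket.create_server(LISTEN_ADDR) as listener:
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```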