We're implementing Consul with TLS enabled, but it doesn't look like the Consul agent performs any revocation lookup on incoming (or local) certificates. Is this expected behavior? We'd like to be able to lock rogue/expired agents out.
Does anything reliably implement CRL/OCSP checking? As far as I know the answer is basically no.
From what I understand, the current best practice is simply to use very short-lived certificates and rotate them all the time. Let's Encrypt is good for external services, but for internal services (which is what you likely use Consul for), Vault (made by the same people who make Consul) has a PKI backend that does exactly this. It publishes a CRL if you have any tools that bother to check it, but as far as I can tell basically nothing does, because the mechanism is sort of broken (denial-of-service risk, huge CRLs, slower handshakes, etc.). More info on Vault here: https://www.vaultproject.io/docs/secrets/pki/index.html
Also, there are other internal CA tools, and for larger infrastructure you could even use the Let's Encrypt code (it is open source).
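To illustrate the short-lived-certificate workflow, here's a hedged sketch of issuing a 24-hour certificate from Vault's PKI backend over its HTTP API (the Vault address, token, mount path pki, role name consul-agent, and common name are all assumptions; see the docs linked above for the real setup):

import requests

VAULT_ADDR = "http://127.0.0.1:8200"  # assumed local Vault address
VAULT_TOKEN = "s.XXXX"                # placeholder token

# Ask the PKI backend (mounted at "pki", role "consul-agent") for a
# certificate that expires in 24 hours.
resp = requests.post(
    f"{VAULT_ADDR}/v1/pki/issue/consul-agent",
    headers={"X-Vault-Token": VAULT_TOKEN},
    json={"common_name": "agent1.node.consul", "ttl": "24h"},
)
data = resp.json()["data"]
# data["certificate"], data["private_key"], and data["issuing_ca"] get written
# to disk for the agent; "rotation" is simply re-issuing before the TTL expires.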
By default, Consul does not verify incoming certificates. You can enable this behavior by setting verify_incoming in your configuration:
{
  "verify_incoming": true,
  "verify_incoming_rpc": true,
  "verify_incoming_https": true
}
You can also tell Consul to verify outgoing connections via TLS:
{
  "verify_outgoing": true
}
In these situations, it may be necessary to set the ca_file and ca_path options as well.
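Putting it together, a minimal mutually-verified TLS configuration might look like this (the file paths are placeholders):

{
  "verify_incoming": true,
  "verify_outgoing": true,
  "ca_file": "/etc/consul/ca.pem",
  "cert_file": "/etc/consul/agent.pem",
  "key_file": "/etc/consul/agent-key.pem"
}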
I think the answer to this is no, but I don't understand why this feature is not available. I'd like to configure a list of ciphers on a per-backend basis, i.e. to be able to use ssl-default-server-ciphers in each backend section rather than having to use ciphers on each server line. I don't want to use ssl-default-server-ciphers in the global section, as each backend can have a different set of ciphers.
I can't seem to add formatted text in a comment reply, so I'll address the response below by clarifying the question here. Backends DO have an option to specify ciphers; here is an edited example from one of my configs:
backend https_be
    server 10.255.2.5 10.255.2.5:443 ssl ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256
    server 10.255.2.6 10.255.2.6:443 ssl ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256
What I'd like to be able to do, and what would be much cleaner, is:
backend https_be
    option ssl-default-server-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256
    server 10.255.2.5 10.255.2.5:443 ssl
    server 10.255.2.6 10.255.2.6:443 ssl
But haproxy does not seem to support this, and I don't know why; it would be a useful thing to do AND would make config files easier to read. As for the comment about frontends: I did not mention frontends, so I'm a little confused by that comment. It is true the same issue applies to bind in a frontend, but I tried to keep it simple and illustrate the point with a backend example.
I’d like to configure a list of ciphers on a per backend basis
A backend has no ciphers option. Do you mean the bind option ciphers?
I don’t want to use ssl-default-server-ciphers in the global section as each backend can have a different set of ciphers.
Well, then don't set ssl-default-server-ciphers, and define the ciphers on the server line instead.
A backend has servers, which take ciphers as an option. Please don't mix up the keywords with what you want to do.
frontend and listen => have the bind option
backend => has no bind option and therefore no bind-level ciphers parameter!
This is explained in the Proxies section of the documentation.
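To illustrate the distinction, here's a sketch showing the two places a cipher list can currently go (the certificate path and addresses are placeholders):

frontend https_fe
    # bind-level ciphers: for TLS terminated by this frontend
    bind :443 ssl crt /etc/haproxy/site.pem ciphers ECDHE-RSA-AES128-GCM-SHA256
    default_backend https_be

backend https_be
    # server-level ciphers: for TLS toward each backend server
    server srv1 10.255.2.5:443 ssl ciphers ECDHE-RSA-AES128-GCM-SHA256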
Is there any upper limit on how many domains Traefik can secure via Let's Encrypt?
(I know Let's Encrypt has rate limits; that's not what this is about.)
If Traefik places all domains/hostnames in a single certificate, it seems there's an upper limit of 100 (see: https://community.letsencrypt.org/t/maximum-number-of-sites-on-one-certificate/10634/3). Does Traefik work this way?
However, if Traefik generates one new cert per domain/hostname, then I suppose there is no upper limit. Is this the case?
Is the behaviour different if acme.onDemand = true is set versus if acme.onHostRule = true is set? Maybe in one case Traefik stores all domains/hostnames in the same cert, and in the other, in different certs?
(Background: I'm building a SaaS, and organizations that start using it provide their own custom domains. I really don't think the following is the case, but I'm still slightly worried that I might accidentally be adding a max-100-organizations restriction by integrating with Traefik.)
There's no upper limit. Traefik generates one cert per hostname.
From Traefik's Slack chat:
basically Traefik creates one certificate per host if you are using onHostRule or onDemand.
You can create one certificate for multiple domains by using the domains option: https://docs.traefik.io/configuration/acme/#domains.
(This chat message will probably disappear soon, though, because of Slack's 10k-message limit: https://traefik.slack.com/archives/C0CDT22PJ/p1546183883145900?thread_ts=1546183554.145800&cid=C0CDT22PJ)
(Note, though, that onDemand is deprecated — see: https://github.com/containous/traefik/issues/2212)
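For reference, here's a sketch of what that domains option looks like in a Traefik 1.x TOML configuration (the email, storage path, entry point, and domain names are placeholders):

[acme]
email = "admin@example.com"
storage = "acme.json"
entryPoint = "https"
onHostRule = true

# One extra certificate covering a main domain plus SANs:
[[acme.domains]]
  main = "example.com"
  sans = ["a.example.com", "b.example.com"]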
I would like to block all connections to my server that use a VPN or proxy. Is there any way to detect that a VPN or proxy connection is being used? If not, is there any way I can check the likelihood that a VPN or proxy is being used? Lastly, is there anything I can query or prompt the user with to check if they are using a VPN or proxy, so that if anyone does get through, I can try to perform additional verification? I do not need any information from the user such as location or true IP. I just want to entirely bar connections from VPNs or proxies.
Edit: I've been thinking that I could potentially run a test to see if there are consistent discrepancies between the ping to the VPN IP and the detectable latency of the client, but that sounds pretty unreliable.
Edit 2: A proxy or VPN server would likely have many more ports open than a standard home connection, so I could use the number of open ports to help gauge the likelihood of a connection coming from a VPN by running a port scan on the person connecting.
Unfortunately, there is no proper technical way to get the information you want. You might invent some tests, but they will have a very low correlation with reality. So either you won't catch those you want to catch, or you'll have a large number of false positives. Neither outcome makes sense.
Generating any kind of traffic backwards from an Internet server in response to an incoming client (a port scan, or even a simple ping) is generally frowned upon. Or, in the case of a port scan, it may be even worse for you, e.g. when the client lives behind a central corporate firewall; worst of all is when the client comes from behind a central government network firewall pool...
Frankly, IP-based bans (or, actually, any kind of limiting that focuses on people who do not exclusively possess their public IP address: proxy servers, VPNs, NAT devices, etc.) have been unrealistic for a long time. As the IPv4 pools have become depleted in many parts of the world, ISPs are putting more and more clients behind large NAT pools (it's this week's news in my country that the largest ISP, a subsidiary of Deutsche Telekom, has started handing out private IPv4 addresses as standard business practice, and customers have to ask explicitly to get a public IP address), so there's less and less point in doing so. If you want to ban clients, ban them based on identity (account), not on IP address.
At IPinfo we offer a privacy detection API, which will tell you whether a connection is coming from a VPN, an anonymous proxy, a Tor exit node, or a hosting provider (which could be used to tunnel traffic). Here's an example:
$ curl ipinfo.io/43.241.71.120/privacy?token=$TOKEN
{
"vpn": true,
"proxy": false,
"tor": false,
"hosting": true
}
If you wanted to block connections to your site from VPNs, you could make an API request to get this information and reply with an error if the IP is detected as a VPN. In PHP that would look something like this:
$ip = $_SERVER['REMOTE_ADDR'];
$url = "https://ipinfo.io/{$ip}/privacy?token={$IPINFO_API_TOKEN}";
$details = json_decode(file_get_contents($url));

// Just block VPNs
if ($details->vpn) {
    exit("VPN Access Blocked!");
}

// Or we could block all the other types of private / anonymous connections...
if ($details->vpn || $details->proxy || $details->tor || $details->hosting) {
    exit("Access Blocked!");
}
The simplest way to do this is to use an external service like an API to block VPN or proxy users.
MaxMind and GetIPIntel both offer this via an API, so you might want to give them a try. GetIPIntel provides a free API service, so I suggest you try that first.
For OpenVPN, someone used unique MSS values to identify VPN connections, but the setup is complicated and it might be "patched" by now.
The strategies you've mentioned in your edits don't seem like a very good idea, because you'll run into many false positives. Sending out port scans whenever someone connects to your service will take a lot of time and resources before you get any results.
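As a rough sketch, a GetIPIntel lookup is a single HTTP request (endpoint and parameters per their public docs; the contact address is a placeholder and the blocking threshold is a judgment call):

import requests

def vpn_probability(ip):
    # GetIPIntel returns a probability between 0 and 1 that the IP is a
    # proxy/VPN; the free tier requires a contact email address.
    r = requests.get(
        "http://check.getipintel.net/check.php",
        params={"ip": ip, "contact": "you@example.com"},  # placeholder email
    )
    return float(r.text)

if vpn_probability("203.0.113.7") > 0.95:  # documentation IP, example threshold
    print("Likely VPN/proxy - blocking")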
A list of Tor exit nodes is publicly available. You only want the "exit nodes" list, and it's available as CSV. This should be 100% complete and accurate, as it's generated directly from the Tor directory.
A free list of open proxies is available from iblocklist.com, and a free list that incorporates open proxies, Tor nodes, and VPN endpoints is available from ip2location.com.
The last two most likely have limited coverage and accuracy, especially when it comes to VPN exit nodes - there are just too many of them. Some providers take another approach and consider all "hosted subnets" (subnets from which ISPs assign IPs for hosted servers) to be some kind of VPN or proxy, since end users should be connecting from "consumer" subnets.
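As a minimal sketch, checking a visitor against the public Tor exit list can look like this (this assumes the plain-text list published at check.torproject.org; in production you'd cache it locally and refresh it periodically rather than fetch it per request):

import requests

def is_tor_exit(ip):
    # The Tor Project publishes a plain-text list of exit node IPs.
    exit_list = requests.get("https://check.torproject.org/torbulkexitlist").text
    return ip in set(exit_list.splitlines())

if is_tor_exit("203.0.113.7"):  # documentation IP
    print("Visitor appears to be a Tor exit node")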
Yes, you can detect whether an IP belongs to a VPN/proxy using Shodan. The following Python code shows how to do it:
import shodan

# Set up the API wrapper
api = shodan.Shodan('YOUR API KEY')  # Free API key from https://account.shodan.io

VISITOR_IP = '1.2.3.4'  # In production this would be the IP of your visitor

# Look up the list of services the IP runs
ipinfo = api.host(VISITOR_IP)

# Check whether the IP runs a VPN service by looking for the "vpn" tag
if 'tags' in ipinfo and 'vpn' in ipinfo['tags']:
    print('{} is connecting from a VPN'.format(VISITOR_IP))
You can also look at the list of ports to determine the likelihood that the visitor is connecting from an HTTP proxy:
if 8080 in ipinfo['ports']:
    print('{} is running a web server on a common proxy port'.format(VISITOR_IP))
By the way, you can now do this using our new, free InternetDB API. For example:
import requests

VISITOR_IP = "5.45.38.184"  # In production this would be the IP of your visitor

info = requests.get(f"https://internetdb.shodan.io/{VISITOR_IP}").json()

if "vpn" in info.get("tags", []):  # "tags" may be absent for unknown IPs
    print(f"{VISITOR_IP} is connecting from a VPN")
You can download a list of known proxy IP addresses and look them up locally to see whether an address is a VPN, an open proxy, etc.
There are several commercial products on the market. IP2Proxy LITE is a free one you can try immediately.
Get (somehow) a list of proxy server IPs.
Measure the round-trip ping time to the user. This helps in online WebSocket games: games are playable with a ping under 50 ms, so you can disconnect users whose ping is around 100 ms or more with a message like "Sorry, your ping is too high". A sketch of the measurement follows.
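Here is what that latency check might look like over an existing connection (send_ping and wait_for_pong are hypothetical stand-ins for your protocol's ping/pong frames, e.g. WebSocket ping/pong; as other answers note, this is an unreliable heuristic):

import time

MAX_RTT_MS = 100  # example threshold from the suggestion above

def rtt_ms(conn):
    # Round-trip time of one application-level ping over the live connection.
    start = time.monotonic()
    conn.send_ping()      # hypothetical helper
    conn.wait_for_pong()  # hypothetical helper
    return (time.monotonic() - start) * 1000

# if rtt_ms(conn) > MAX_RTT_MS:
#     disconnect("Sorry, your ping is too high")  # hypothetical helper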
I'm using SSL to form a trusted connection between two peers. Each peer knows who it expects to be connecting to (or accepting a connection from) at a given time. It should only accept valid certificates, and further, it should only accept certificates with certain attributes (probably by checking the canonical name).
So far, I can get the two sides to talk, based on the example in this question, and its answer. Each side can print out the certificate presented by the other peer.
I'm not sure what the correct way to verify these certificates is, though. The obvious way would be to just look at the certificates after the connection is made and drop the connection if it doesn't meet our expectations.
Is there a more correct way to do this? Is there a callback which is given the peer's presented certificate and can give it a thumbs-up or thumbs-down? Or is the right thing to handle it after SSL is done with its work?
In this case, I am the CA, so trusting the CA isn't an issue. I'm signing these certificates on behalf of my users. The canonical names aren't even domain names. Users connect peer-to-peer. I want the client software I distribute to verify that the connecting user has a certificate I signed and is the right user.
It sounds like you are running a private PKI. Just load the root of the trust chain into OpenSSL with SSL_CTX_load_verify_locations or SSL_load_verify_locations.
Be sure to use SSL_VERIFY_PEER to ensure OpenSSL performs the verification. The call would probably look like SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);. If peer validation fails, then the connect will fail.
There are ways to let the connect succeed and then catch the error later. The trick is to set the verify callback, have the verify callback always return 1, and then call SSL_get_verify_result after the connection is set up. See SSL/TLS Client for an example.
Note: in all cases, you still have to perform name checking manually. OpenSSL currently does not do it (it's in HEAD for OpenSSL 1.1.0). See libcurl or PostgreSQL for some code you can rip.
An example of an SSL/TLS client is provided by OpenSSL on its wiki. See SSL/TLS Client. There's no server code or example at the moment.
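For comparison, here is a minimal sketch of the same flow (private root, enforced chain verification, manual name check) using Python's ssl module; the CA file, peer address, and expected identity are placeholders:

import socket
import ssl

# Trust only our private root; require the peer to present a valid chain.
ctx = ssl.create_default_context(cafile="my-root-ca.pem")  # placeholder path
ctx.check_hostname = False           # the names here aren't DNS names...
ctx.verify_mode = ssl.CERT_REQUIRED  # ...but chain validation is still enforced

with socket.create_connection(("peer.internal", 8443)) as sock:  # placeholder peer
    with ctx.wrap_socket(sock) as tls:
        cert = tls.getpeercert()
        # Manual "name check": compare the certificate's CN to the expected user.
        cn = dict(rdn[0] for rdn in cert["subject"])["commonName"]
        if cn != "expected-user":  # placeholder identity
            raise ssl.SSLError("unexpected peer identity: " + cn)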
I'm not sure what the correct way to verify these certificates is, though. The obvious way would be to just look at the certificates after the connection is made and drop the connection if it doesn't meet our expectations.
There's a lot to this, and some of it is not obvious. I'm going to break the answer up into parts, but all the parts try to answer your question.
First, you can verify that the certificates are well formed. The group responsible in the context of the Web is the CA/Browser Forum. They have baseline and extended requirements for creating certificates:
Baseline Certificate Requirements, https://www.cabforum.org/Baseline_Requirements_V1_1_6.pdf
Extended Validation Certificate Requirements, https://www.cabforum.org/Guidelines_v1_4_3.pdf
In the baseline docs you will find, for example, that an IP listed as the Common Name (CN) must also be listed in the Subject Alternative Names (SAN). In the extended docs you will find that private IPs (reserved per RFC 1918) cannot be present in an extended validation (EV) certificate, and that EV certificates cannot contain wildcards.
Second, you can perform customary validation according to RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile, http://www.ietf.org/rfc/rfc5280.txt.
The customary checks are things like hostname matching, validity-period checks, and verifying that an end-entity or leaf certificate (client or server certificate) chains back to a root. In a browser that uses CAs, that can be any of hundreds of trusted roots or intermediates.
If you choose to perform revocation checking, then you will probably DoS your own application (how is that for obvious!). A mobile client on a 3G network cannot download and process a 30 MB CRL - it will surely hang the application. And an application cannot perform an OCSP query when the URL is wrong - that will surely fail.
Also, if you are performing hostname matching that includes wildcards, then care must be taken to handle ccTLDs properly. ccTLDs are things like *.eu, *.us, or இலங்கை (nic.lk). There are some 5,000 or so of them, and Mozilla offers a list at http://publicsuffix.org/ (alternately, https://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1). A small sketch of using that list follows.
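For example, using the third-party tldextract library (which bundles the Mozilla public suffix list), you can refuse wildcard patterns that would span an entire public suffix; the function name here is my own:

import tldextract  # pip install tldextract; bundles the public suffix list

def wildcard_is_sane(pattern):
    # "*.example.co.uk" is fine; "*.co.uk" would match every .co.uk site.
    ext = tldextract.extract(pattern.lstrip("*."))
    return bool(ext.domain)  # empty domain means the pattern is pure suffix

print(wildcard_is_sane("*.example.co.uk"))  # True
print(wildcard_is_sane("*.co.uk"))          # False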
Third, CAs don't warrant anything, so the answers you get from a CA are worthless. If you don't believe me, check their Certification Practice Statement (CPS). For example, here is an excerpt from Apple's Certification Authority Certification Practice Statement (18 Sept 2013, page 6):
2.4.1. Warranties to Subscribers
The AAI Sub-CA does not warrant the use of any Certificate to any Subscriber.
2.4.2. CA disclaimers of warranties
To the extent permitted by applicable law, Subscriber agreements, if applicable, disclaim warranties from Apple, including any warranty of merchantability or fitness for a particular purpose.
That means they don't warrant the binding of the public key to the organization through the issuer's signature. And that's the whole purpose of X.509!
Fourth, DNS does not provide authentic answers. So you might get a bad answer from DNS and happily march over to a server controlled by your adversary. Or the 10 of the 13 DNS root servers that are under US control may collude to give you a wrong answer in the name of US national security.
Trying to get an authentic response from a non-US server is near impossible. The "secure DNS" pieces (DNSSEC aside) are still evolving, and I'm not aware of any mainstream implementations.
In the case of colluding US servers, a quorum won't work because the US holds an overwhelming majority.
The problem here is that you are making security decisions based on input from external services (CA and DNS). Essentially, you are conferring too much trust on untrustworthy actors.
A great treatment of the problems with PKI and PKIX is Dr. Peter Gutmann's Engineering Security at www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf. Be sure to read Chapters 1 and 6. Dr. Gutmann has a witty sense of humor, so it's not dry reading. Another great book is Ross Anderson's Security Engineering at http://www.cl.cam.ac.uk/~rja14/book.html.
You have a couple of defenses against all the problems caused by PKI, PKIX, and CAs. First, you can run a private PKI where you are your own certificate authority. In this case, you are not trusting an outsider. Bad DNS answers and rogue servers should be caught because the server's certificate will not form a valid chain.
Second, you can employ a security diversification strategy. Gutmann writes about it in his Engineering Security book, and you should visit "Security through Diversity" starting on page 292 and the "Risk Diversification for Internet Applications" section on page 296.
Third, you can employ a Trust-On-First-Use (TOFU) or key continuity strategy. This is similar to Wendlandt, Andersen and Perrig's Perspectives: Improving SSH-style Host Authentication with Multi-Path Probing, or SSH's StrictHostKeyChecking option. In this strategy, you do the customary checks and then pin the certificate or public key. You can also ask for others' view of the certificate or public key. Unexpected certificate or key changes should set off alarm bells.
OWASP has a treatment of Certificate and Public Key Pinning at https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning. Note: some sites rotate their certificates every 30 days or so, so you should probably pin public keys if possible. The list of frequent rotators includes Google, and it's one of the reasons tools like Certificate Patrol make so much noise.
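As a minimal illustration of the TOFU idea (this sketch pins the whole certificate for brevity; per the note above, pinning the public key holds up better when sites rotate certificates; the pin store is in-memory here but would be persisted in practice):

import hashlib
import ssl

PINS = {}  # host -> certificate fingerprint; persist this between runs

def check_pin(host, port=443):
    # Fetch the peer's certificate (PEM) and fingerprint it.
    pem = ssl.get_server_certificate((host, port))
    fingerprint = hashlib.sha256(pem.encode()).hexdigest()
    if host not in PINS:
        PINS[host] = fingerprint  # first use: trust and remember
    elif PINS[host] != fingerprint:
        raise RuntimeError("certificate for %s changed - possible MITM" % host)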
I have one quick question. Is there an add-on for Firefox, or a tool, that can get the session key generated from the master secret during the SSL handshake, i.e. the key with which the whole client/server communication is symmetrically encrypted? I need it to decode the communication (POST/GET/etc.) via Wireshark or the PCAP library. Since Firebug shows the decrypted communication, I hope there is some proper way to reach this session key :)
Thank you all for your help.
I have good news for you. You can actually get the Master-Key data that you need from both Firefox and Chrome, and you can use the output file in Wireshark to decrypt the SSL/TLS traffic without needing the private key from the SSL/TLS server. Check out "Method 2" here: http://www.root9.net/2012/11/ssl-decryption-with-wireshark-private.html
As a tip, if you don't want to reboot your machine just open a command prompt and run:
set SSLKEYLOGFILE=c:\sslKeyLogFile.txt
"C:\Program Files (x86)\Mozilla Firefox\firefox.exe"
Since Firefox is being launched from the same session in which you set the environment variable, it will start with that variable set; otherwise a restart of Windows is required after setting it in the System settings dialogs. Once the log file is populated, point Wireshark at it under Edit > Preferences > Protocols > SSL (TLS in newer versions) > (Pre)-Master-Secret log filename.
I also want to point out that the answer from Chris wasn't necessarily wrong; this is a fairly new feature. It didn't make it into a release until Wireshark 1.6.
If you want to use Wireshark, then the pre-master secret will be of no use to you (you refer to it as the "cipher key" in your question).
Wireshark can only decrypt traffic if you specify the RSA private key of the server, which, unlike the pre-master secret, doesn't change on every connection. However, you can't get that through your browser or anything else, for obvious reasons.
If you want to decrypt SSL traffic, I suggest using an intermediate proxy instead, like Fiddler. It does not passively capture traffic but proxies it, which enables it to actually decrypt the data sent and received.