How many domains can Traefik secure with HTTPS, any upper limit? - lets-encrypt

Is there any upper limit for how many domains Traefik can secure via Let's Encrypt?
(I know Let's Encrypt has rate limits; that's not what this is about.)
If Traefik places all domains / hostnames in a single certificate, it seems there's an upper limit of 100 (see: https://community.letsencrypt.org/t/maximum-number-of-sites-on-one-certificate/10634/3). Does Traefik work this way?
However, if Traefik generates one new cert per domain / hostname, then I suppose there is no upper limit. Is this the case?
Is the behaviour different if acme.onDemand = true is set,
versus if acme.onHostRule = true is set? Maybe in one case Traefik stores all domains / hostnames in the same cert, and in the other case in different certs?
(Background: I build a SaaS, and organizations that start using it provide their own custom domains. I really don't think the following is the case, but I'm still slightly worried that maybe I'm accidentally adding a max-100-organizations restriction when integrating with Traefik.)

There's no upper limit. Traefik generates one cert per hostname.
From Traefik's Slack chat:
basically Traefik creates one certificate per host if you are using onHostRule or onDemand.
You can create one certificate for multiple domains by using domains https://docs.traefik.io/configuration/acme/#domains.
(This chat message will probably disappear soon, though, because of Slack's 10k message limit: https://traefik.slack.com/archives/C0CDT22PJ/p1546183883145900?thread_ts=1546183554.145800&cid=C0CDT22PJ )
(Note, though, that onDemand is deprecated — see: https://github.com/containous/traefik/issues/2212)
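To make that concrete, here is a minimal Traefik 1.x configuration sketch, assuming entry points named http and https and placeholder email, storage path and domain names. With onHostRule = true each hostname gets its own certificate, so the 100-name limit only comes into play for the optional [[acme.domains]] block, which requests a single SAN certificate for a fixed list of names:

defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

[acme]
email = "admin@example.com"   # placeholder
storage = "acme.json"
entryPoint = "https"
onHostRule = true             # request one certificate per hostname as host rules appear

  [acme.httpChallenge]
  entryPoint = "http"

# Optional: one SAN certificate covering several names (this is where the 100-name limit applies)
[[acme.domains]]
main = "example.com"
sans = ["alpha.example.com", "beta.example.com"]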

Related

What data can be monitored of an HTTPS connection?

Suppose that my computer is not compromised. If somebody is listening somewhere between my computer and the server (my ISP for example), what can they see of my HTTPS connection?
I assume they can see the domain (e.g. google.com).
But what about the specific site I'm browsing (e.g. /wiki/Privacy in https://en.wikipedia.org/wiki/Privacy)?
What about the subdomain (e.g. en in https://en.wikipedia.org/wiki/Privacy)?
What about GET parameters, i.e. everything after the '?' (e.g. https://www.google.com/search?q=privacy)? Can they see what I search on Google?
Please feel free to add more info in case I've missed something relevant.
Example: https://www.google.com/search?q=privacy
They can see:
The full hostname (domain and subdomain, here "www.google.com")
The IP address of the contacted server
The approximate size of the exchanged data
The duration of the exchange(s)
They cannot see:
The path (the part of the URL after the domain, here "/search")*
The GET or POST parameters (here "?q=privacy")
The content of the answer
The cookies
*Due to a bug in proxy discovery, the path and GET parameters may be transmitted in plain text (http://www.securitynewspaper.com/2016/08/01/proxy-pac-hack-allows-intercept-https-urls/).
Also, from the approximate size of the exchanged data, it may be possible to infer which pages were visited.
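As a small TypeScript illustration of that split, using the example URL above (the hostname is visible via the DNS lookup and the TLS SNI field, while the path and query travel inside the encrypted tunnel):

// Which parts of the example URL an on-path observer can and cannot see.
const url = new URL("https://www.google.com/search?q=privacy");

console.log("visible to an observer:", url.hostname);          // "www.google.com"
console.log("hidden inside TLS:", url.pathname + url.search);  // "/search?q=privacy"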

Consul TLS CRL checking

We're implementing consul with TLS security enabled, but it doesn't look like the consul agent performs any revocation lookup on the incoming (or local) certificates. Is this expected behavior? We'd like to be able to lock rogue/expired agents out.
Does anything reliably implement CRL/OCSP checking? As far as I know the answer is basically no.
From what I understand, the current best practice is just to have very short-lived certs and change them all the time. Let's Encrypt is good for external services, but for internal services (which you likely use Consul for), Vault (made by the same people who make Consul) has a PKI backend that does exactly this. It publishes a CRL if you have any tools that bother, but as far as I can tell basically nothing does, because the mechanism is sort of broken (denial of service, huge CRL lists, slower, etc.). More info on Vault here: https://www.vaultproject.io/docs/secrets/pki/index.html
Also, there are other internal CA tools, and for larger infrastructure you could even use the Let's Encrypt code (it is open source).
By default, Consul does not verify incoming certificates. You can enable this behavior by setting verify_incoming in your configuration:
{
  "verify_incoming": true,
  "verify_incoming_rpc": true,
  "verify_incoming_https": true
}
You can also tell Consul to verify outgoing connections via TLS:
{
  "verify_outgoing": true
}
In these situations, it may be necessary to set the ca_file and ca_path arguments as well.
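For reference, a hedged sketch of what an agent configuration with mutual TLS verification might look like; the file paths are placeholders, and the certificates would come from whatever internal CA you use (e.g. Vault's PKI backend):

{
  "verify_incoming": true,
  "verify_outgoing": true,
  "ca_file": "/etc/consul.d/tls/ca.pem",
  "cert_file": "/etc/consul.d/tls/agent.pem",
  "key_file": "/etc/consul.d/tls/agent-key.pem"
}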

When is a secondary DNS server used?

On our router we have the primary DNS set to a local IP, which is running Windows Server 2008 and the built-in DNS server. We use this to resolve domains to local servers; if the domain is not found locally, we have forwarders set up to query external name servers.
The secondary DNS on the router is set to our ISP's primary DNS, in case the local DNS server is down.
The Mac clients in our office pick up the DNS servers correctly from the router, but it seems very random which DNS server they use. For example, a local site would load up but some of the images would not. If I hard-coded my DNS address to be the local DNS server, everything would work fine.
So my question is, when would a Mac client use the secondary DNS server? I thought it'd only use it if the primary DNS was unavailable?
Thanks!
The general idea of a secondary DNS server was that in case the primary DNS server doesn't reply (e.g. it is offline, unreachable, restarting, etc.), the system can fall back to a secondary one, so it won't be unable to resolve DNS names during that time. "Doesn't reply" means no reply at all; it will not ask the secondary when the primary one said that a name is unknown. Answering that a name is unknown is a reply.
The problem here is that DNS uses UDP, and UDP is connectionless. So if a DNS server is offline, the system won't notice that other than by not receiving a reply from it. As a UDP packet may as well just get lost and the round-trip time (RTT) is unknown, it has to resend the request a couple of times, each time waiting several seconds, before it finally comes to the conclusion that the server is dead. This means it can take a minute or more to resolve a DNS name if the first DNS server dies.
As that seems unacceptable, different operating systems developed different strategies to handle this in a better way. As both DNS servers are supposed to deliver the same result for the same domain (if not, your setup is actually flawed, as the secondary should be a 1-to-1 replacement for the primary one), it shouldn't matter which one is being used. Some systems send a request to the primary one, but if no reply comes back within a few seconds, they don't resend to it but first try the secondary one (then they resend to the primary one, and so on). Some may query both at once, let the faster one win and then keep using that one for a while (until they start another race to see if it is still the faster one). Some may prefer the primary one but do some kind of load balancing and switch to the secondary one if more than a certain number of queries are currently pending on the primary one. Some will just alternate between them as a poor man's load balancing. All of this is actually allowed.
In your case, though, I'm afraid something is wrong with your primary server, as by default macOS will only use the primary one. If it constantly falls back to the secondary one, it may consider the primary one to be too slow. Every time that happens, the secondary server becomes the primary one; see this older knowledge base article. This CNET article explains how this can be disabled, but I'm not sure that's still possible in current systems. I wasn't able to find any reference on this, but IIRC Apple once mentioned at a WWDC that they are now more aggressive at DNS querying and may even contact multiple DNS servers at once with the fastest one winning in some cases, but I might be wrong on this (maybe this was iOS only).
I googled this article, which explains the newer macOS DNS search order, and this one, which explains how to tweak it to obtain the results you desire.
The general idea, though, is that it was never intended (in any OS) that the first server is the one used and the second one is a backup. (Even on Windows, if the first server for some reason doesn't answer very quickly, the second one will be queried.) It's wiser to regard server query order as unspecified.
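To make the "try the primary, fall back on timeout" strategy concrete, here is a small Node.js/TypeScript sketch (Node 12.18+ for the resolver timeout option). The server IPs are placeholders, and real stub resolvers implement this inside the OS rather than in application code:

import { promises as dns } from "node:dns";

async function resolveWithFallback(name: string): Promise<string[]> {
  const servers = ["192.168.0.10", "203.0.113.53"]; // placeholder: local DNS first, then ISP DNS
  for (const server of servers) {
    const resolver = new dns.Resolver({ timeout: 2000 }); // give up on this server after ~2s
    resolver.setServers([server]);
    try {
      return await resolver.resolve4(name);
    } catch (err: any) {
      if (err.code === "ENOTFOUND" || err.code === "ENODATA") {
        throw err; // the server answered "no such name": that is a reply, so don't fall back
      }
      // timeout or unreachable: try the next server
    }
  }
  throw new Error(`no DNS server answered for ${name}`);
}

resolveWithFallback("intranet.example.com").then(console.log, console.error);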

Alternative Host (by DNS?) for Web Server Failure Protection

I'm interested in having a second web host run a copy of my website, such that if my first host goes down, the traffic routes to the second host. Is this possible?
My guess would be to add additional nameservers beyond the first two.
I also suspect it's doable with no-ip.com, but I'm not clear on how that works, and if they would require me to leave my first host entirely?
See if your DNS provider will let you do round-robin DNS.
Basically, DNS queries will return more than one IP for your site. Try nslookup google.com to see how it might look.
There are loads of other ways to do geographical load balancing and failover (most are expensive though).
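For comparison, here is a small Node.js/TypeScript sketch of the same lookup; with round-robin DNS the name simply resolves to several A records, and clients pick one of them:

import { promises as dns } from "node:dns";

// Round-robin DNS is just several A records behind one name; clients usually
// take the first, and the server rotates the order between responses.
dns.resolve4("google.com").then((addresses) => {
  console.log(addresses); // the list and its order vary from query to query
});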
DNS Made Easy provides this service, which is called DNS Failover. For others looking:
http://www.dnsmadeeasy.com/s0306/price/dns.html

How exactly is the same-domain policy enforced?

Let's say I have a domain, js.mydomain.com, and it points to some IP address, and some other domain, requests.mydomain.com, which points to a different IP address. Can a .js file downloaded from js.mydomain.com make Ajax requests to requests.mydomain.com?
How exactly do modern browsers enforce the same-domain policy?
The short answer to your question is no: for AJAX calls, you can only access the same hostname (and port / scheme) as your page was loaded from.
There are a couple of work-arounds: one is to create a URL in foo.example.com that acts as a reverse proxy for bar.example.com. The browser doesn't care where the request is actually fulfilled, as long as the hostname matches. If you already have a front-end Apache webserver, this won't be too difficult.
Another alternative is AJAST, which works by inserting script tags into your document. I believe that this is how Google APIs work.
You'll find a good description of the same origin policy here: http://code.google.com/p/browsersec/wiki/Part2
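As a rough sketch of that script-tag technique (JSONP-style): the endpoint and callback name below are hypothetical, and the server has to cooperate by wrapping its response in a call to the named function.

function jsonpRequest(url: string, callbackName: string): void {
  // Expose a global function the injected script will call with the data.
  (window as any)[callbackName] = (data: unknown) => {
    console.log("cross-domain data received:", data);
  };
  const script = document.createElement("script");
  // <script src> loads are not restricted by the same-origin policy.
  script.src = `${url}?callback=${encodeURIComponent(callbackName)}`;
  document.head.appendChild(script);
}

jsonpRequest("https://requests.mydomain.com/api/data", "handleData");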
This won't work because the host name is different. Two pages are considered to be from the same origin if they have the same host, protocol and port.
From Wikipedia on the same origin policy:
The term "origin" is defined using the
domain name, application layer
protocol, and (in most browsers) TCP
port of the HTML document running the
script. Two resources are considered
to be of the same origin if and only
if all these values are exactly the
same.
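A small TypeScript sketch of that rule; two URLs share an origin only if scheme, host and port are all identical:

function sameOrigin(a: string, b: string): boolean {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol && ua.hostname === ub.hostname && ua.port === ub.port;
}

sameOrigin("https://js.mydomain.com/app.js", "https://requests.mydomain.com/api"); // false: different hosts
sameOrigin("http://example.com/a", "http://example.com:8080/b");                   // false: different ports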
