Traefik Redirect Domain to Subdomain - lets-encrypt

I want to permanently redirect all requests to example.com and www.example.com to blog.example.com in a TLS environment.
My current config:
traefik.toml:
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.web.http.redirections.entryPoint]
to = "websecure"
scheme = "https"
[entryPoints.websecure]
address = ":443"
[providers.docker]
exposedbydefault = false
watch = true
network = "web"
[providers.file]
filename = "traefik_dynamic.toml"
[certificatesResolvers.lets-encrypt.acme]
email = "mymail#example.com"
storage = "/letsencrypt/acme.json"
[certificatesResolvers.lets-encrypt.acme.dnsChallenge]
provider = "myprovider"
traefik_dynamic.toml:
[http.middlewares]
[http.middlewares.goToBlog.redirectregex]
regex = "^https://(.*)example.com/(.*)"
replacement = "https://blog.example.com/$${2}"
permanent = true
[http.routers]
[http.routers.gotoblog]
rule = "Host(`example.com`) || Host(`www.example.com`)"
entrypoints = ["websecure"]
middlewares = ["goToBlog"]
service = "noop#internal"
[http.routers.gotoblog.tls]
certResolver = "lets-encrypt"
When I try to access example.com it gives me an SSL Protocol Error. All my other endpoints, including blog.example.com, are working. What am I doing wrong?

Okay, obviously it had nothing to do with my redirect configuration. It seems to have been a hiccup in Traefik / Docker, similar to ACME certificates timeout with traefik. I just waited one day and everything worked as expected. Just two minor updates to correct the redirect configuration. Maybe there's a more elegant solution.
traefik_dynamic.toml:
[http.middlewares]
[http.middlewares.goToBlog.redirectregex]
regex = "^https://(.*)example.com/(.*)"
replacement = "https://blog.example.com/${2}" # no double $$
permanent = true
[http.routers]
[http.routers.gotoblog]
rule = "Host(`example.com`, `www.example.com`)" # just an array of domains is fine, too
entrypoints = ["websecure"]
middlewares = ["goToBlog"]
service = "noop#internal"
[http.routers.gotoblog.tls]
certResolver = "lets-encrypt"

Related

Fetch emails through IMAP with proxy of form user:password:host:port

I have code to login to my email account to fetch recent emails:
def fetchRecentEmail(emailAddr, emailPassword, timeout=120):
    host = fetch_imap_server(emailAddr)  # e.g. 'outlook.office365.com'
    with IMAP4_SSL(host) as session:
        status, _ = session.login(emailAddr, emailPassword)
        if status == 'OK':
            # fetch most recent message
            status, messageData = session.select("Inbox")
            :
I'm trying to tweak it to go through a proxy.
ref: How can I fetch emails via POP or IMAP through a proxy?
ref: https://gist.github.com/sstevan/efccf3d5d3e73039c21aa848353ff52f
In each of the above resources, the proxy is of clean form IP:PORT.
However my proxy is of the form USER:PASS:HOST:PORT.
The proxy works:
USER = 'Pp7fwti5n-res-any-sid-' + random8Digits()
PASS = 'abEDxts7v'
HOST = 'gw.proxy.rainproxy.io'
PORT = 5959
proxy = f'{USER}:{PASS}@{HOST}:{PORT}'
proxies = {
    'http': 'http://' + proxy,
    'https': 'http://' + proxy
}
response = requests.get(
    'https://ip.nf/me.json',
    proxies=proxies, timeout=15
)
The following code looks like it should work, but errors:
HOST = 'outlook.office365.com'
IMAP_PORT = 963
PROXY_TYPE = 'http' # rainproxies are HTTP
mailbox = SocksIMAP4SSL(
    host=HOST,
    port=IMAP_PORT,
    proxy_type=PROXY_TYPE,
    proxy_addr=URL,
    proxy_port=PORT,
    username=USER,
    password=PASS
)
emailAddress, emailPassword = EMAIL.split(',')
mailbox.login(emailAddress, emailPassword)
typ, data = mailbox.list()
print(typ)
print(data)
I needed to add a timeout arg/param in 2 places to get the code to run:
def _create_socket(self, timeout=None):
    sock = SocksIMAP4._create_socket(self, timeout)
    server_hostname = self.host if ssl.HAS_SNI else None
    return self.ssl_context.wrap_socket(
        sock, server_hostname=server_hostname
    )

def open(self, host='', port=IMAP4_PORT, timeout=None):
    SocksIMAP4.open(self, host, port, timeout)
Rather confusing that nobody else seems to have flagged that in the gist.
But it still won't work.
If I use any number other than 443 for IMAP_PORT I get this error:
GeneralProxyError: Socket error: 403: Forbidden
[*] Note: The HTTP proxy server may not be supported by PySocks (must be a CONNECT tunnel proxy)
And if I use 443, while I now get no error, the mailbox = SocksIMAP4SSL(...) call never completes.
So I am still far from a working solution.
I am hoping to run this code simultaneously on 2 CPU cores, and I don't understand the implications of using port 443. Is that going to mean that no other process on my system can use that port? And if this code uses this port simultaneously in two processes, does this mean there will be a conflict?
Maybe you can try monkeypatching socket.socket with PySocks.
import socket
import socks
socks.set_default_proxy(socks.SOCKS5, HOST, PORT, True, USER, PASS)
socket.socket = socks.socksocket
Then check if your IMAP traffic is going through a given proxy.
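Building on that, a minimal end-to-end sketch (untested): it assumes PySocks is installed, reuses USER, PASS, emailAddress and emailPassword from the question, uses socks.HTTP because the question says the rainproxies are HTTP (the proxy must allow CONNECT tunnels for this to work), and uses 993, the standard IMAPS port:
import socket
from imaplib import IMAP4_SSL

import socks  # PySocks

# Every socket created from here on is tunnelled through the proxy.
# socks.HTTP matches the question's HTTP proxy; use socks.SOCKS5 for a SOCKS endpoint.
socks.set_default_proxy(socks.HTTP, 'gw.proxy.rainproxy.io', 5959,
                        rdns=True, username=USER, password=PASS)
socket.socket = socks.socksocket

# Plain IMAP4_SSL now connects via the proxy; no SocksIMAP4SSL subclass is needed.
with IMAP4_SSL('outlook.office365.com', 993) as session:
    status, _ = session.login(emailAddress, emailPassword)
    print(status)
    print(session.select("Inbox"))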

Traefik not getting SSL certificates for new domains

I've got Traefik/Docker Swarm/Let's Encrypt/Consul set up, and it's been working fine. It managed to successfully get certificates for the domains admin.domain.tld, registry.domain.tld and staging.domain.tld, but now that I've tried adding containers that are serving domain.tld and matomo.domain.tld those aren't getting any certificates (browser warns of self signed certificate because it's the default Traefik certificate).
My Traefik configuration (that's being uploaded to Consul):
debug = false
logLevel = "DEBUG"
insecureSkipVerify = true
defaultEntryPoints = ["https", "http"]
[entryPoints]
[entryPoints.ping]
address = ":8082"
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[traefikLog]
filePath = '/var/log/traefik/traefik.log'
format = 'json'
[accessLog]
filePath = '/var/log/traefik/access.log'
format = 'json'
[accessLog.fields]
defaultMode = 'keep'
[accessLog.fields.headers]
defaultMode = 'keep'
[accessLog.fields.headers.names]
"Authorization" = "drop"
[retry]
[api]
entryPoint = "traefik"
dashboard = true
debug = false
[ping]
entryPoint = "ping"
[metrics]
[metrics.influxdb]
address = "http://influxdb:8086"
protocol = "http"
pushinterval = "10s"
database = "metrics"
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "domain.tld"
watch = true
exposedByDefault = false
network = "net_web"
swarmMode = true
[acme]
email = "my#mail.tld"
storage = "traefik/acme/account"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
Possibly related: in traefik.log I repeatedly (as in almost once per second) get the following, but only for the registry subdomain. It sounds like an issue with persisting the data to Consul, but there are no errors indicating such a problem.
{"level":"debug","msg":"Looking for an existing ACME challenge for registry.domain.tld...","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"Looking for provided certificate to validate registry.domain.tld...","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"No provided certificate found for domains registry.domain.tld, get ACME certificate.","time":"2019-07-07T11:37:23Z"}
{"level":"debug","msg":"ACME got domain cert registry.domain.tld","time":"2019-07-07T11:37:23Z"}
Update: I managed to find this line in the log:
{"level":"error","msg":"Error getting ACME certificates [matomo.domain.tld] : cannot obtain certificates: acme: Error -\u003e One or more domains had a problem:\n[matomo.domain.tld] acme: error: 400 :: urn:ietf:paramsacme:error:connection :: Fetching http://matomo.domain.tld/.well-known/acme-challenge/WJZOZ9UC1aJl9ishmL2ACKFbKoGOe_xQoSbD34v8mSk: Timeout after connect (your server may be slow or overloaded), url: \n","time":"2019-07-09T16:27:43Z"}
So it seems the issue is the challenge failing because of a timeout. Why the timeout though?
Update 2: More log entries:
{"level":"debug","msg":"Looking for an existing ACME challenge for staging.domain.tld...","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"Looking for provided certificate to validate staging.domain.tld...","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"No provided certificate found for domains staging.domain.tld, get ACME certificate.","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"No certificate found or generated for staging.domain.tld","time":"2019-07-10T19:38:34Z"}
{"level":"debug","msg":"http: TLS handshake error from 10.255.0.2:51981: remote error: tls: unknown certificate","time":"2019-07-10T19:38:34Z"}
But then, after a couple minutes to an hour, it works (for two domains so far).
Not sure if it's a feature or a bug, but removing the following HTTP to HTTPS redirect solved it for me:
[entryPoints.http.redirect]
entryPoint = "https"

Traefik forward auth per frontend

I am trying to divide microservices and their auth.
The demo config looks like:
[frontends]
[frontends.frontend1]
entryPoints = ["http"]
backend = "rancher1"
passHostHeader = true
forwardAuth = "http://127.0.0.1:8090"
[frontends.frontend1.routes.test_1]
rule = "PathPrefixStrip:/order"
[frontends.rancher2]
backend = "rancher2"
passHostHeader = true
[frontends.rancher2.routes.test_1]
rule = "PathPrefixStrip:/test"
How do I apply forwardAuth to frontends.frontend1?
Thanks to Daniel, who helped me.
So, it's really easy to do:
Check your Traefik version; it should be at least 1.7 (I am not sure in which version this feature was added, but it works in 1.7 and 1.7.1).
Make your config like this:
[frontends.service]
backend = "service"
passHostHeader = true
[frontends.service.auth.forward]
address = "http://127.0.0.1:8090"

Traefik ACME HTTP SNI 01 .well-known not available (v1.5.0)

I am using a complex Traefik - Dropcart setup with automatic SSL certification via Let's Encrypt. Because TLS-SNI validation was terminated, I switched to the rc5 Docker version, which supports the HTTP challenge; DNS isn't an option for me.
Unfortunately it gives a 400 timeout error (see logs).
Config
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
compress = true
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[...]
[acme]
email = "email#address.com"
caServer = "https://acme-staging.api.letsencrypt.org/directory"
storage = "/etc/traefik/acme/acme.json"
entryPoint = "https"
onHostRule = true
acmeLogging = true
#dnsProvider = "manual"
[acme.httpChallenge]
entryPoint = "http"
Logs
domain.example.com:acme: Error 400 - urn:acme:error:connection -
Fetching http://domain.example.com/.well-known/acme-challenge/5uyEKpgr[...]c4CfMOZjc: Timeout
Error Detail:
Validation for domain.example.com:80
Resolved to:
*IPv4*
*IPv6*
Used: *IPv6*
]"
Does anyone know how I can get HTTP validation fixed?
Thanks!
EDIT:
Same config seemed to work on a consul backend. So maybe something to do with Docker or acme.json?

Using varnish to cache heroku app

I set up Varnish a long time ago, and I have one of my backends pointed at a host.herokuapp.com; it works great. For a while I was able to change settings and reload the Varnish config with the basic service varnish reload command.
Now when I try reloading it I get:
* Reloading HTTP accelerator varnishd Command failed with error code 106
Message from VCC-compiler:
Backend host "myapp.herokuapp.com": resolves to multiple IPv4 addresses.
Only one address is allowed.
Please specify which exact address you want to use, we found these:
154.129.225.36
13.21.108.188
50.10.185.176
50.13.98.193
54.125.177.29
54.213.81.135
107.25.192.112
174.139.35.141
('/etc/varnish/backends.vcl' Line 39 Pos 27)
backend mobile { .host = "myapp.herokuapp.com"; .port = "80"; }
--------------------------#####################-----------------
In backend specification starting at:
('/etc/varnish/backends.vcl' Line 39 Pos 1)
backend mobile { .host = "myapp.herokuapp.com"; .port = "80"; }
#######---------------------------------------------------------------
Running VCC-compiler failed, exit 1
VCL compilation failed
Error: vcl.load 7ba71b44-c6b9-40e9-b0be-18f02bb5e9be /etc/varnish/default.vcl failed
As Heroku uses dynamic IPs for its dynos, the IP list changes constantly, so it makes no sense to hard-code the IPs as backends. Any clue on a way to fix this?
I had nearly the same problem with servers hosted by Acquia.
The way I solved it was to:
- put the backend IPs resolved from the Acquia hosting servers in a separate VCL
- build a cron'd script that regularly regenerates that VCL when the backends change
- restart Varnish to put the new backends in production
#!/usr/bin/python2.7
import socket
import subprocess
import re

#
# Do an nslookup and return the list of IPs
#
def _nslookup(host):
    ips = socket.getaddrinfo(host, 0, 0, 0, 0)
    ip_list = []
    for result in ips:
        ip_list.append(result[-1][0])
    ip_list = list(set(ip_list))
    return ip_list

#
# Compare current backends with the list returned by nslookup
#
def _compare_backends_vcl(host_name, group_name):
    current_ips = _nslookup(host_name)
    # Get current backends from varnishadm
    current_backends = []
    list = subprocess.Popen("/usr/bin/varnishadm backend.list | grep " + group_name + " | awk '{print $1}'", shell=True, stdout=subprocess.PIPE)
    for backend in list.stdout:
        current_backends.append(re.sub(r'^.*\((.*),.*,.*$\n', r'\1', backend))
    # Due to a varnish bug old backends are not removed (they are still declared in backend.list),
    # so we are forced to only add backends.
    # The nslookup result should therefore be a subset of the current set of backends.
    # if set(current_ips).symmetric_difference(current_backends):
    if set(current_ips).difference(current_backends):
        # A difference exists, so the vcl has to be regenerated
        print "_compare: We have to update " + group_name
        return True
    else:
        return False

#
# Write the corresponding file
#
def _write_backends_vcl(host_name, group_name):
    TEMPLATE_NODE = '''backend %s {
\t.host = "%s";
\t.port = "80";
\t.probe = %s;
}'''
    vcl_file = open("/etc/varnish/" + group_name + "_backends.vcl", 'w')
    host_num = 1
    hosts = _nslookup(host_name)
    for host in hosts:
        vcl_file.write(TEMPLATE_NODE % (group_name + "_" + str(host_num), host, group_name + "_probe"))
        vcl_file.write("\n\n")
        host_num += 1
    vcl_file.write("director " + group_name + "_default round-robin {\n")
    for i in range(len(hosts)):
        node = group_name + "_" + str(i + 1)
        vcl_file.write("\t{ .backend = %s; }\n" % node)
    vcl_file.write("}\n")
    vcl_file.close()

# Main
do_reload = ""
if _compare_backends_vcl("myhost.prod.acquia-sites.com", "MYHOST_CONFIG"):
    do_reload = True
    _write_backends_vcl("myhost.prod.acquia-sites.com", "MYHOST_CONFIG")
if do_reload:
    print "Reloading varnish"
    subprocess.Popen(['sudo', '/etc/init.d/varnish', 'reload'])
    exit(1)
else:
    # print "Everything is ok"
    exit(0)
Then the corresponding VCL looks like:
backend MYHOST_CONFIG_1 {
.host = "XX.XX.XX.XX";
.port = "80";
.probe = MYHOST_CONFIG_probe;
}
backend MYHOST_CONFIG_2 {
.host = "XX.XX.XX.XX";
.port = "80";
.probe = MYHOST_CONFIG_probe;
}
director MYHOST_CONFIG_default round-robin {
{ .backend = MYHOST_CONFIG_1; }
{ .backend = MYHOST_CONFIG_2; }
}
You have to set up the MYHOST_CONFIG_probe probe and set MYHOST_CONFIG_default as the director for your config.
Beware that Varnish keeps every backend it has ever seen, so you have to restart it regularly to purge the defective servers.
I had the same problem today.
So I installed an nginx server on port 3000 and set up a proxy_pass to myapp.herokuapp.com.
Then, in the Varnish backend, change host = "myapp.herokuapp.com" and port = "80" to host = "127.0.0.1" and port = "3000".
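A rough sketch of that workaround (the exact nginx directives are assumptions, not from the answer): nginx listens locally on port 3000 and forwards to Heroku with the Host header Heroku's router expects, and the Varnish backend then points at the fixed local address:
# /etc/nginx/conf.d/heroku-upstream.conf (assumed path)
server {
    listen 127.0.0.1:3000;
    location / {
        proxy_pass http://myapp.herokuapp.com;
        # Heroku routes by Host header, so keep the app's hostname
        proxy_set_header Host myapp.herokuapp.com;
    }
}

# /etc/varnish/backends.vcl
backend mobile { .host = "127.0.0.1"; .port = "3000"; }
Unlike the Varnish VCL compiler, nginx accepts a hostname that resolves to multiple addresses, so Varnish only ever sees one stable local backend (nginx still resolves the name when its config is loaded, so a periodic reload may be needed if Heroku's IPs rotate completely).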
