I could not find a question similar to this; there were others about HTTPS redirects, but not about minimizing the number of redirects. I've been looking for a solution but have not sorted it out yet.
We use Docker with Traefik for WordPress, with www as the preferred version for WordPress. There are multiple WP instances, and domains are added dynamically.
However, with this config I get two redirects, from http to https to https www:
http://example.com/
https://example.com/
https://www.example.com/
Is there any way to minimize the redirects? Ideally a single 301 redirect from
http://example.com directly to https://www.example.com
My Traefik config file is as follows:
defaultEntryPoints = ["http", "https"]
[web]
address = ":8080"
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
compress = true
[entryPoints.https.tls]
[acme]
email = "email#domain.com"
storage = "acme.json"
entryPoint = "https"
onDemand = false
OnHostRule = true
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "traefik.example.com"
watch = true
exposedbydefault = false
Try replacing your [entryPoints.http.redirect] entry with this:
[entryPoints.http.redirect]
#entryPoint = "https"
regex = "^http:\/\/(www\.)*(example\.com)(.*)"
replacement = "https://www.$2$3"
permanent = true
Regex101
It will not handle the https://example.com/ case, so you also need to add:
[entryPoints.https.redirect]
regex = "^https:\/\/(example\.com)(.*)"
replacement = "https://www.$1/$2"
permanent = true
If you have multiple frontends, the regex can get hard to handle, so instead you can consider putting labels on the container, like this:
traefik.frontend.headers.SSLRedirect=true
traefik.frontend.headers.SSLHost=www.example.com
As of 1.7 there is a new option, SSLForceHost, that forces even an existing SSL connection to be redirected:
traefik.frontend.headers.SSLForceHost=true
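For context, a minimal sketch of how those labels could sit on a WordPress container with Traefik 1.7 (the image name, domains, and rule are illustrative, not from the original setup):
docker run -d \
-l traefik.enable=true \
-l "traefik.frontend.rule=Host:example.com,www.example.com" \
-l traefik.frontend.headers.SSLRedirect=true \
-l traefik.frontend.headers.SSLHost=www.example.com \
-l traefik.frontend.headers.SSLForceHost=true \
wordpress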
Here's what I had to do. The above answer was helpful, but Traefik wouldn't start, because you actually need a double \ to escape in the .toml.
Also, you still need to make sure you have the normal entry points and ports in place.
Here's my complete entryPoints section:
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.http.redirect]
regex = "^http:\\/\\/(www.)*(example\\.com)(.*)"
replacement = "https://www.$2/$3"
permanent = true
[entryPoints.https.redirect]
regex = "^https:\\/\\/(example.com)(.*)"
replacement = "https://www.$1/$2"
permanent = true
[entryPoints.https.tls]
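To confirm that each starting URL now triggers only one hop, a quick check with curl (on the example domain) should show a single 301 whose Location header already points at https://www.example.com/:
curl -sI http://example.com/ | grep -iE '^(HTTP|location)'
curl -sI https://example.com/ | grep -iE '^(HTTP|location)'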
This is how I got it to work with the Docker provider behind an AWS ELB.
traefik container
/usr/bin/docker run --rm \
--name traefik \
-p 5080:80 \
-p 5443:443 \
-v /etc/traefik/traefik.toml:/etc/traefik/traefik.toml \
-v /var/run/docker.sock:/var/run/docker.sock \
traefik
traefik.toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
docker labels
-l traefik.enable=true \
-l traefik.http.middlewares.redirect.redirectregex.regex="^http://(.*)" \
-l traefik.http.middlewares.redirect.redirectregex.replacement="https://\$1" \
-l traefik.http.routers.web-redirect.rule="Host(\`domain.com\`)" \
-l traefik.http.routers.web-redirect.entrypoints="http" \
-l traefik.http.routers.web-redirect.middlewares="redirect" \
-l traefik.http.routers.web-secure.rule="Host(\`domain.com\`)" \
-l traefik.http.routers.web-secure.entrypoints="https" \
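These labels collapse http to https only. As a hedged sketch (not from the original setup), the same redirectregex middleware style could also fold the apex-to-www hop into a single 301; the to-www middleware name is made up, both router rules would additionally need to match www.domain.com, and the backslashes follow the shell-escaping style of the labels above:
-l traefik.http.middlewares.redirect.redirectregex.regex="^http://(?:www\\.)?(.*)" \
-l traefik.http.middlewares.redirect.redirectregex.replacement="https://www.\$1" \
-l traefik.http.middlewares.to-www.redirectregex.regex="^https://domain\\.com/(.*)" \
-l traefik.http.middlewares.to-www.redirectregex.replacement="https://www.domain.com/\$1" \
-l traefik.http.middlewares.to-www.redirectregex.permanent=true \
-l traefik.http.routers.web-secure.middlewares="to-www" \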
ELB listeners
I have set up a WireGuard server and client.
Server - Ubuntu 18.04
Client - Windows 11 (x64)
WireGuard Interface Subnet - 10.200.1.0/24
WireGuard Server IP - 10.200.1.27/24
server.conf
[Interface]
Address = 10.200.1.27/24
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE;
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE;
ListenPort = 51820
PrivateKey = WHnFUoljugAbX3XkmHg2TmZH4k2ptbX0N1xB7cruN2g=
[Peer]
PublicKey = *********
AllowedIPs = 10.200.1.72/32, 192.168.1.0/24
Endpoint = ********:63587
[Peer]
PublicKey = ********
AllowedIPs = 10.200.1.71/32
Endpoint = ********:10295
The WireGuard client is in the home network 192.168.1.0/24, and the Windows 11 machine has the IP address 192.168.1.25.
Windows 11 WireGuard client config:
[Interface]
Address = 10.200.1.72/24
ListenPort = 63587
PrivateKey = *******
[Peer]
PublicKey = ********
AllowedIPs = 10.200.1.0/24
Endpoint = *******:51820
PersistentKeepalive = 30
Problem
From another peer I am able to ping 192.168.1.25, but I also have a macOS machine on IP 192.168.1.6, and I am unable to ping that MacBook.
I even tried adding a route on Windows:
route add -p 192.168.1.0 mask 255.255.255.0 10.200.1.72
After adding the above route, pinging from the other peer gives this ICMP response:
FROM 10.200.1.72: icmp_seq=1 Redirect Network(New nexthop: 192.168.1.6)
Can someone please guide me on what to do, as I have run out of ideas.
I'm trying to use the Cloudflare API to dynamically update a single specific firewall rule.
I'm using a bash script to:
Grab the latest IP addresses within my Cloudflare firewall rule.
Pass in a new IP address to be added to the firewall using the $1 variable.
Use the Filters API to update the firewall rule with the new IP address.
Here's the full bash script I'm using to try to achieve this. (I might not be doing things in the most efficient way, but I'm new to bash overall.)
#!/bin/bash
# Cloudflare Email
EMAIL='someone@example.com'
# API Key
TOKEN='Token'
# Zone ID
ZONE='Zone'
# Firewall ID
ID='ID'
# Rule Filter
FILTER='Filter'
# Grab Cloudflare firewall rule we want to update:
RULE=$(
  curl -X GET "https://api.cloudflare.com/client/v4/zones/$ZONE/firewall/rules/$ID?id=$ID" \
    -H "X-Auth-Email: $EMAIL" \
    -H "X-Auth-Key: $TOKEN" \
    -H "Content-Type: application/json"
)
# Filter the response to just show IPv4 and IPv6 addresses:
OLD=$(
  jq -r '.result.filter.expression | capture(".*{(?<ips>[^}]*)}").ips' <<<"$RULE"
)
# Debug
echo $OLD
# Use the filters API to update the expression
curl -X PUT \
  -H "X-Auth-Email: $EMAIL" \
  -H "X-Auth-Key: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "id": "ID",
    "paused": false,
    "expression": "(ip.src in {'$OLD' '$1'})"
  }' "https://api.cloudflare.com/client/v4/zones/$ZONE/filters/$FILTER"
Running this script when there are two IP addresses in the firewall rule (shown here) works perfectly.
The response is also all good:
{
  "result": {
    "id": "ID",
    "paused": false,
    "expression": "(ip.src in {192.168.1.1 192.168.1.2})"
  },
  "success": true,
  "errors": [],
  "messages": []
}
But when I run the script a third time, with a different IP address, I get this curl error:
$ bash test.sh 192.168.1.3
192.168.1.1 192.168.1.2 <--- Just Debug
curl: (3) unmatched close brace/bracket in URL position 24:
192.168.1.2 192.168.1.3})"
}
^
I don't understand why it works for two IPs but not for three. Can anyone shed some light on this?
Thank you so much, let me know if anyone needs additional information!
I'm trying to run Vault with docker-compose on an Ubuntu 20.04 virtual machine (IP: 192.168.56.9). Without HTTPS it already works fine, but when I try to put Vault behind HTTPS with a self-signed certificate from OpenSSL, it doesn't work.
Here are my configurations:
docker-compose.yml:
version: '3.6'
services:
  vault:
    build:
      context: ./vault
      dockerfile: Dockerfile
    ports:
      - 8200:8200
    volumes:
      - ./vault/config:/vault/config
      - ./vault/policies:/vault/policies
      - ./vault/data:/vault/data
      - ./vault/logs:/vault/logs
      - ./vault/volume_test/:/vault/volume_test
    environment:
      - VAULT_ADDR=http://192.168.56.9:8200
    command: server -config=/vault/config/vault-config.conf
    cap_add:
      - IPC_LOCK
Dockerfile:
# base image
FROM alpine:3.7
# set vault version
ENV VAULT_VERSION 0.10.3
# create a new directory
RUN mkdir /vault
# download dependencies
RUN apk --no-cache add \
bash \
ca-certificates \
wget
# download and set up vault
RUN wget --quiet --output-document=/tmp/vault.zip https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip && \
unzip /tmp/vault.zip -d /vault && \
rm -f /tmp/vault.zip && \
chmod +x /vault
# update PATH
ENV PATH="PATH=$PATH:$PWD/vault"
# add the config file
COPY ./config/vault-config.conf /vault/config/vault-config.conf
# expose port 8200
EXPOSE 8200
# run vault
ENTRYPOINT ["vault"]
My vault-config.conf:
backend "file" {
  path = "vault/data"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = false
  tls_cert_file = "/home/xxx/Vault-Docker/domain.crt"
  tls_key_file = "/home/xxx/Vault-Docker/domain.key"
}
#api_addr = "http://192.168.56.9:8200"
disable_mlock = true
ui = true
How I create my .crt and .key:
Create a cert.conf file in /home/xxx/Vault-Docker/:
[req]
default_bits = 4096
default_md = sha256
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no
[req_distinguished_name]
C = FR
ST = VA
L = SomeCity
O = MyCompany
OU = MyDivision
CN = 192.168.56.9
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.56.9
And execute in /home/xxx/Vault-Docker/:
openssl req -nodes -x509 -days 365 -keyout domain.key -out domain.crt -config cert.conf
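To double-check that the SAN made it into the generated certificate, it can be inspected with openssl (the grep is just a convenience; expect a line containing IP Address:192.168.56.9):
openssl x509 -in domain.crt -noout -text | grep -A1 'Subject Alternative Name'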
But when I run :
docker-compose up -d --build
Then :
docker logs vault-docker_vault_1
The output is:
Error initializing listener of type tcp: error loading TLS cert: open /home/xxx/Vault-Docker/domain.crt: no such file or directory
Can someone tell me where my error is?
Thanks a lot!
That's because your cert configuration is not mounted inside the container. In order to fix it, you need to:
create a new directory ./vault/cert
move both domain.crt and domain.key to ./vault/cert
add a new volume to docker-compose.yml:
volumes:
  ...
  - ./vault/cert/:/vault/cert
  ...
change /home/.../domain* to /vault/cert/domain* in vault-config.conf, for both the tls_cert_file and tls_key_file directives
Then Vault will be able to find the certificates.
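Put together, the relevant excerpts would look roughly like this (paths follow the steps above):
# docker-compose.yml (volumes excerpt)
    volumes:
      - ./vault/config:/vault/config
      - ./vault/cert/:/vault/cert
# vault-config.conf (listener excerpt)
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = false
  tls_cert_file = "/vault/cert/domain.crt"
  tls_key_file = "/vault/cert/domain.key"
}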
You are giving a certificate path that is resolved inside the container, but you have kept the certificate on the host machine.
I am using the Jaeger UI to display traces from my application. It works fine when both the application and Jaeger run on the same server, but I need to run my Jaeger collector on a different server. I tried JAEGER_ENDPOINT, JAEGER_AGENT_HOST, and JAEGER_AGENT_PORT, but it failed.
I don't know whether the values I set for these variables are wrong, or whether any configuration is required inside the application code.
Can you point me to any documentation for this problem?
On server 2, install Jaeger:
$ docker run -d --name jaeger \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
On server 1, set these environment variables:
JAEGER_SAMPLER_TYPE=probabilistic
JAEGER_SAMPLER_PARAM=1
JAEGER_SAMPLER_MANAGER_HOST_PORT=(EnterServer2HostName):5778
JAEGER_REPORTER_LOG_SPANS=false
JAEGER_AGENT_HOST=(EnterServer2HostName)
JAEGER_AGENT_PORT=6831
JAEGER_REPORTER_FLUSH_INTERVAL=1000
JAEGER_REPORTER_MAX_QUEUE_SIZE=100
application-server-id=server-x
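For reference, on server 1 these would typically be exported in the shell (or service unit) that launches the application; server2.example.com and app.jar are placeholders:
export JAEGER_SAMPLER_TYPE=probabilistic
export JAEGER_SAMPLER_PARAM=1
export JAEGER_SAMPLER_MANAGER_HOST_PORT=server2.example.com:5778
export JAEGER_AGENT_HOST=server2.example.com
export JAEGER_AGENT_PORT=6831
# a dashed name is not a valid shell variable, so pass the id as a JVM system property
java -Dapplication-server-id=server-x -jar app.jar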
Change the tracer registration code on server 1 as below, so that it picks up the configuration from the environment variables:
@Produces
@Singleton
public static io.opentracing.Tracer jaegerTracer() {
    String serverInstanceId = System.getProperty("application-server-id");
    if (serverInstanceId == null) {
        serverInstanceId = System.getenv("application-server-id");
    }
    return new Configuration("ApplicationName" + (serverInstanceId != null && !serverInstanceId.isEmpty() ? "-" + serverInstanceId : ""),
            Configuration.SamplerConfiguration.fromEnv(),
            Configuration.ReporterConfiguration.fromEnv())
            .getTracer();
}
Hope this works!
Check this link for integrating Elasticsearch as the persistent storage backend, so that traces are not removed once the Jaeger instance is stopped:
How to configure Jaeger with elasticsearch?
Specify "JAEGER_AGENT_HOST" and ensure "local_agent" is not specified in tracer config file.
Below is the working solution for Python
import os

os.environ['JAEGER_AGENT_HOST'] = "123.XXX.YYY.ZZZ"  # specify the remote Jaeger agent here
# os.environ['JAEGER_AGENT_PORT'] = "6831"  # optional, default: "6831"

from jaeger_client import Config

config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        # ENSURE 'local_agent' is not specified
        # 'local_agent': {
        #     'reporting_host': "127.0.0.1",
        #     'reporting_port': 6831,
        # },
        'logging': True,
    },
    service_name="your-service-name-here",
)

# create the tracer object here and voila!
tracer = config.initialize_tracer()
Jaeger getting-started guide: https://www.jaegertracing.io/docs/1.33/getting-started/
Jaeger-Client features: https://www.jaegertracing.io/docs/1.33/client-features/
Flask-OpenTracing: https://github.com/opentracing-contrib/python-flask
OpenTelemetry-Python: https://opentelemetry.io/docs/instrumentation/python/getting-started/
I'm tasked with creating about a hundred files for use with Puppet. I'm creating .yaml files with unique filenames that will contain site-specific IP and hostname information; these must all have the same format (ideally from a template).
I want to create a file generator that fills in variables for IP, subnet, network, and hostname from an input file (a .csv?). What's the best way to approach this?
sample format:
---
network::interfaces::interfaces:
  eth0:
    method: 'static'
    address: '10.20.30.1'
    netmask: '255.255.240.0'
    broadcast: '10.20.30.255'
    network: '10.20.30.0'
    gateway: '10.20.30.1'
network::interfaces::auto:
  - 'eth0'
hosts::host_entries:
  HOSTNAME:
    ip: '10.20.30.2'
hosts::purge_hosts: true
dhcpd::range_start: '10.20.30.11'
dhcpd::range_stop: '10.20.30.240'
dhcpd::gateway: '10.20.30.1'
hornetq::site: 'test'
Write a skeleton like this:
network::interfaces::interfaces:
  eth0:
    method: 'static'
    address: '__IP__'
    netmask: '__MASK__'
    broadcast: '__BC__'
    network: '__NET__'
    gateway: '__GW__'
etc.
Generate the files with a loop like this:
while read -r OUTPUT IP MASK BC NET GW ; do
  sed -e "s/__IP__/$IP/" \
      -e "s/__MASK__/$MASK/" \
      -e "s/__BC__/$BC/" \
      -e "s/__NET__/$NET/" \
      -e "s/__GW__/$GW/" \
      <"$SKELETON" >"$OUTPUT"
done < input-file
This assumes that the fields in the input file are separated by whitespace, with the name of the respective output file in the first column, and that $SKELETON holds the path to the skeleton file.
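For illustration, an input line matching that read order could look like this (the output file name is hypothetical; the other columns reuse the sample values from the question):
site01.yaml 10.20.30.1 255.255.240.0 10.20.30.255 10.20.30.0 10.20.30.1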