I set up minikube on macOS, and as a result a virtual interface is created on the host machine, as shown below:
bridge100: flags=8a63<UP,BROADCAST,SMART,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
options=3<RXCSUM,TXCSUM>
ether f2:18:98:52:ec:64
inet 192.168.64.1 netmask 0xffffff00 broadcast 192.168.64.255
inet6 fe80::f018:98ff:fe52:ec64%bridge100 prefixlen 64 scopeid 0x13
inet6 fdd5:e29:6049:e016:475:5258:18a3:3700 prefixlen 64 autoconf secured
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x0
member: vmenet0 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 18 priority 0 path cost 0
member: vmenet1 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 20 priority 0 path cost 0
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
On a minikube VM, I got an error when trying to pull an image while a VPN is running on the host machine:
$ docker run -it --net=container:$ID --pid=container:$ID --volumes-from=$ID alpine sh
Unable to find image 'alpine:latest' locally
docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.19:59651->192.168.64.1:53: i/o timeout.
If I run dig on the host while the VPN is running, I get the following output, showing that DNS via 192.168.64.1 fails.
(base) /etc $ dig registry-1.docker.io
; <<>> DiG 9.10.6 <<>> registry-1.docker.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45428
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;registry-1.docker.io. IN A
;; ANSWER SECTION:
registry-1.docker.io. 3591 IN A 52.205.127.201
registry-1.docker.io. 3591 IN A 34.237.244.67
registry-1.docker.io. 3591 IN A 52.55.124.246
registry-1.docker.io. 3591 IN A 52.72.252.48
registry-1.docker.io. 3591 IN A 34.203.135.183
registry-1.docker.io. 3591 IN A 52.202.132.224
registry-1.docker.io. 3591 IN A 54.86.228.181
registry-1.docker.io. 3591 IN A 54.197.112.205
;; Query time: 347 msec
;; SERVER: 10.44.0.1#53(10.44.0.1)
;; WHEN: Wed Mar 02 17:25:26 CST 2022
;; MSG SIZE rcvd: 177
(base) /etc $ dig registry-1.docker.io @192.168.64.1
; <<>> DiG 9.10.6 <<>> registry-1.docker.io @192.168.64.1
;; global options: +cmd
;; connection timed out; no servers could be reached
(base) /etc $
If I stop the VPN and run dig on the host, I get the following output, showing that DNS via 192.168.64.1 succeeds.
(base) /etc $ dig registry-1.docker.io
; <<>> DiG 9.10.6 <<>> registry-1.docker.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39523
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 4, ADDITIONAL: 7
;; QUESTION SECTION:
;registry-1.docker.io. IN A
;; ANSWER SECTION:
registry-1.docker.io. 600 IN A 54.86.228.181
registry-1.docker.io. 600 IN A 52.72.252.48
registry-1.docker.io. 600 IN A 174.129.220.74
registry-1.docker.io. 600 IN A 34.237.244.67
registry-1.docker.io. 600 IN A 52.205.127.201
registry-1.docker.io. 600 IN A 52.202.132.224
registry-1.docker.io. 600 IN A 52.200.37.142
registry-1.docker.io. 600 IN A 52.203.238.92
;; AUTHORITY SECTION:
docker.io. 2920 IN NS ns-1168.awsdns-18.org.
docker.io. 2920 IN NS ns-513.awsdns-00.net.
docker.io. 2920 IN NS ns-1827.awsdns-36.co.uk.
docker.io. 2920 IN NS ns-421.awsdns-52.com.
;; ADDITIONAL SECTION:
ns-1168.awsdns-18.org. 143919 IN A 205.251.196.144
ns-421.awsdns-52.com. 170410 IN A 205.251.193.165
ns-513.awsdns-00.net. 132154 IN A 205.251.194.1
ns-1168.awsdns-18.org. 143919 IN AAAA 2600:9000:5304:9000::1
ns-1827.awsdns-36.co.uk. 171777 IN AAAA 2600:9000:5307:2300::1
ns-421.awsdns-52.com. 172051 IN AAAA 2600:9000:5301:a500::1
ns-513.awsdns-00.net. 132154 IN AAAA 2600:9000:5302:100::1
;; Query time: 6 msec
;; SERVER: 202.96.134.133#53(202.96.134.133)
;; WHEN: Wed Mar 02 17:25:56 CST 2022
;; MSG SIZE rcvd: 466
(base) /etc $ dig registry-1.docker.io @192.168.64.1
; <<>> DiG 9.10.6 <<>> registry-1.docker.io @192.168.64.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21844
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;registry-1.docker.io. IN A
;; ANSWER SECTION:
registry-1.docker.io. 752 IN A 52.72.252.48
registry-1.docker.io. 752 IN A 174.129.220.74
registry-1.docker.io. 752 IN A 34.237.244.67
registry-1.docker.io. 752 IN A 52.205.127.201
registry-1.docker.io. 752 IN A 52.202.132.224
registry-1.docker.io. 752 IN A 52.200.37.142
registry-1.docker.io. 752 IN A 52.203.238.92
registry-1.docker.io. 752 IN A 54.86.228.181
;; Query time: 3 msec
;; SERVER: 192.168.64.1#53(192.168.64.1)
;; WHEN: Wed Mar 02 17:25:59 CST 2022
;; MSG SIZE rcvd: 177
Why does DNS resolution behave this way with respect to the VPN? How can I make DNS work while the VPN is running?
When you connect to a VPN, all your traffic is routed via the VPN tunnel and can't reach 192.168.64.1, since the router at the other end of the VPN doesn't know where this address is:
; <<>> DiG 9.10.6 <<>> registry-1.docker.io @192.168.64.1
;; connection timed out; no servers could be reached
This is expected behavior, so you need to set up a route to the 192.168.64.0 network so that this traffic doesn't end up in the VPN tunnel.
You can read how to do this here and here.
The simplest one will look like route add -host 192.168.64.1 my.local.gateway.ip, which adds a route to 192.168.64.1 via the specific gateway my.local.gateway.ip.
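A minimal sketch of what that can look like on macOS, assuming bridge100 (from the ifconfig output above) is the minikube bridge and that you run it while the VPN is up; the exact interface and gateway depend on your setup:

# route the whole minikube host-only subnet out the local bridge interface,
# so queries to 192.168.64.1 bypass the VPN tunnel
sudo route -n add -net 192.168.64.0/24 -interface bridge100

# check which route now matches the minikube DNS address
route -n get 192.168.64.1

Routes added this way are not persistent: they disappear after a reboot, and some VPN clients re-install their own routes on reconnect, so you may have to add the route again.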
I wrote a readiness probe for my pod using a bash script. The readiness probe failed with Reason: Unhealthy, but when I manually get into the pod and run the command /bin/bash -c 'health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping); if [[ $health -ne 401 ]]; then exit 1; fi', the script exits with code 0.
What could be the reason? I am attaching the code and the error below.
Edit: I found out that the health variable is set to 000, which means the curl request in the bash script timed out.
readinessProbe:
  exec:
    command:
      - /bin/bash
      - '-c'
      - |-
        health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
        if [[ $health -ne 401 ]]; then exit 1; fi
"kubectl describe pod {pod_name}" result:
Name: rustici-engine-54cbc97c88-5tg8s
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Tue, 12 Jul 2022 18:39:08 +0200
Labels: app.kubernetes.io/name=rustici-engine
pod-template-hash=54cbc97c88
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/rustici-engine-54cbc97c88
Containers:
rustici-engine:
Container ID: docker://f7efffe6fc167e52f913ec117a4d78e62b326d8f5b24bfabc1916b5f20ed887c
Image: batupaksoy/rustici-engine:singletenant
Image ID: docker-pullable://batupaksoy/rustici-engine@sha256:d3cf985c400c0351f5b5b10c4d294d48fedfd2bb2ddc7c06a20c1a85d5d1ae11
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 12 Jul 2022 18:39:12 +0200
Ready: False
Restart Count: 0
Limits:
memory: 350Mi
Requests:
memory: 350Mi
Liveness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=20
Readiness: exec [/bin/bash -c health=$(curl -s -o /dev/null --write-out "%{http_code}" http://localhost:8080/api/v2/ping);
if [[ $health -ne 401 ]]; then exit 1; else exit 0; echo $health; fi] delay=10s timeout=5s period=10s #success=1 #failure=10
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-whb8d (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-whb8d:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 24s default-scheduler Successfully assigned default/rustici-engine-54cbc97c88-5tg8s to minikube
Normal Pulling 23s kubelet Pulling image "batupaksoy/rustici-engine:singletenant"
Normal Pulled 21s kubelet Successfully pulled image "batupaksoy/rustici-engine:singletenant" in 1.775919851s
Normal Created 21s kubelet Created container rustici-engine
Normal Started 20s kubelet Started container rustici-engine
Warning Unhealthy 4s kubelet Readiness probe failed:
Warning Unhealthy 4s kubelet Liveness probe failed:
The probe could be failing because of performance issues or slow startup. To troubleshoot this, make sure the probe doesn't start until the app is up and running in your pod (initialDelaySeconds), and consider increasing the timeout of the readiness probe, as well as the timeout of the liveness probe, as in the following example:
readinessProbe:
  initialDelaySeconds: 10
  periodSeconds: 2
  timeoutSeconds: 10
You can find more details about how to configure the readiness probe and liveness probe in this link.
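To see whether the endpoint itself is the slow part, you can also reproduce the probe by hand with the same 5-second budget the kubelet gives it; a sketch using the pod name from the describe output above:

# run the probe command inside the pod, but give up after 5s like the probe timeout does
kubectl exec -it rustici-engine-54cbc97c88-5tg8s -- \
  /bin/bash -c 'curl -s -o /dev/null --max-time 5 --write-out "%{http_code}\n" http://localhost:8080/api/v2/ping'

If this prints 000, curl never got an HTTP response within the timeout, which matches the behaviour described in the edit and points at slow startup rather than a wrong status-code check.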
I'm trying to configure a 3proxy server using this guide (I've already used it on OVH hosting and it works just fine!). Now I'm trying to start 3proxy behind NAT and get 3proxy error 12, which means 12 - failed to bind().
Where is the mistake, and what am I doing wrong?
Internal IP:
172.16.20.50
External IP:
82.118.227.155
NAT Ports:
5001-5020
Here is my entire config:
######################
##3Proxy.cfg Content##
######################
##Main##
#Starting 3proxy as a service/daemon
daemon
#DNS Servers to resolve domains and for the local DNS cache
#that provides faster resolution for cached entries
nserver 8.8.8.8
nserver 1.1.1.1
nscache 65536
#Authentication
#CL = Clear Text, CR = Encrypted Passwords (MD5)
#Add MD5 users with MD5 passwords with "" (see below)
#users "user:CR:$1$lFDGlder$pLRb4cU2D7GAT58YQvY49."
users 3proxy:CL:hidden
#Logging
log /var/log/3proxy/3proxy.log D
logformat "- +_L%t.%. %N.%p %E %U %C:%c %R:%r %O %I %h %T"
#logformat "-""+_L%C - %U [%d/%o/%Y:%H:%M:%S %z] ""%T"" %E %I"
rotate 30
#Auth type
#auth strong = username & password
auth strong
#Binding address
external 82.118.227.155
internal 172.16.20.50
#SOCKS5
auth strong
flush
allow 3proxy
maxconn 1000
socks -p5011
User 3proxy created, access to 3proxy granted.
Logs, which show that the connection is established but no traffic is transferred (0/0):
[root@bgvpn113 ~]# tail -f /var/log/3proxy/3proxy.log.2018.05.14
1526329023.448 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21151 88.212.201.205:443 0 0 0 CONNECT_88.212.201.205:443
1526329023.458 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21154 88.212.201.205:443 0 0 0 CONNECT_88.212.201.205:443
1526329023.698 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21158 88.212.201.205:443 0 0 0 CONNECT_88.212.201.205:443
1526329037.419 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21162 195.201.201.32:443 0 0 0 CONNECT_195.201.201.32:443
1526329037.669 SOCK5.5011 00012 3proxy MY_LOCAL_IP:21164 195.201.201.32:443 0 0 0 CONNECT_195.201.201.32:443
The mistake was in the outside IP: since the machine is behind NAT, the public address 82.118.227.155 is not configured on any local interface, so 3proxy cannot bind() to it.
I set both IPs to 172.16.20.50 and it started to work!
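In other words, behind NAT only the private address exists on the box, so the binding section from the config above ends up as (sketch of the working version):

#Binding address: behind NAT, both must be the machine's own private address;
#the NAT gateway maps 82.118.227.155 ports 5001-5020 to this host
external 172.16.20.50
internal 172.16.20.50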
I have a problem when trying to resolve MX records using Resolv::DNS. When I execute the following lines directly on my Mac in irb, everything works:
> require "resolv"
> Resolv::DNS.new.getresource("stackoverflow.com", Resolv::DNS::Resource::IN::MX)
=> #<Resolv::DNS::Resource::IN::MX:0x00007fba42812ff0 @preference=10, @exchange=#<Resolv::DNS::Name: alt4.aspmx.l.google.com.>, @ttl=243>
The same line executed inside a docker container returns an error:
> require "resolv"
> Resolv::DNS.new.getresource("stackoverflow.com", Resolv::DNS::Resource::IN::MX)
Resolv::ResolvError: DNS result has no information for stackoverflow.com
from /usr/local/lib/ruby/2.4.0/resolv.rb:492:in `getresource'
I think the problem is docker-machine. I'm running docker-machine configured by dinghy 4.6.3 (see https://github.com/codekitchen/dinghy) with the following configuration:
Boot2Docker version 18.01.0-ce, build HEAD : 0bb7bbd - Thu Jan 11 16:32:39 UTC 2018
Docker version 18.01.0-ce, build 03596f5
docker@dinghy:~$ busybox | head -1
BusyBox v1.27.2 (2017-10-30 14:58:40 UTC) multi-call binary.
And my docker container is based on ruby:2.4.3-stretch.
I'm not sure if it is simply an issue with the resolv.conf:
docker@dinghy:~$ cat /etc/resolv.conf
nameserver 10.0.2.3
Is this config enough for MX lookups?
Update:
This is the dig response from within a container (not from the docker-machine itself; unfortunately the dig package doesn't ship with BusyBox):
root@3ef2090b7864:/usr/src/app# dig @10.0.2.3 MX stackoverflow.com
; <<>> DiG 9.10.3-P4-Debian <<>> @10.0.2.3 MX stackoverflow.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOTIMP, id: 32375
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;stackoverflow.com. IN MX
;; Query time: 0 msec
;; SERVER: 10.0.2.3#53(10.0.2.3)
;; WHEN: Tue Apr 03 14:29:30 CEST 2018
;; MSG SIZE rcvd: 46
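One more check I can sketch here (assuming dig stays available in the container as above) is to query a public resolver directly, to see whether only the 10.0.2.3 forwarder rejects MX queries:

# bypass the VM's 10.0.2.3 forwarder and ask a public resolver for the same record
dig @8.8.8.8 MX stackoverflow.com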
I've installed dnscrypt-proxy from the repos on Ubuntu 16.10, then I tested it with the command:
dig txt debug.opendns.com
And got what I needed:
dig txt debug.opendns.com
; <<>> DiG 9.10.3-P4-Ubuntu <<>> txt debug.opendns.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48435
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;debug.opendns.com. IN TXT
;; ANSWER SECTION:
debug.opendns.com. 0 IN TXT "server m1.hkg"
debug.opendns.com. 0 IN TXT "flags 20 0 70 7950800000000000000"
debug.opendns.com. 0 IN TXT "originid 0"
debug.opendns.com. 0 IN TXT "actype 0"
debug.opendns.com. 0 IN TXT "source 31.192.111.175:43228"
debug.opendns.com. 0 IN TXT "**dnscrypt enabled** (717473654A614970)"
;; Query time: 279 msec
;; SERVER: 127.0.2.1#53(127.0.2.1)
;; WHEN: Mon Feb 20 18:18:24 CET 2017
;; MSG SIZE rcvd: 250
"dnscrypt enabled" so it's working.
Then I wanted to change the OpenDNS server to a different one.
So in /etc/default/dnscrypt-proxy
I set:
DNSCRYPT_PROXY_RESOLVER_NAME=ns0.dnscrypt.is
And now I see no "dnscrypt enabled":
dig txt debug.opendns.com
;; Truncated, retrying in TCP mode.
; <<>> DiG 9.10.3-P4-Ubuntu <<>> txt debug.opendns.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 44963
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;debug.opendns.com. IN TXT
;; AUTHORITY SECTION:
opendns.com. 2077 IN SOA auth1.opendns.com. noc.opendns.com. 1487092083 16384 2048 1048576 2560
;; Query time: 442 msec
;; SERVER: 127.0.2.1#53(127.0.2.1)
;; WHEN: Mon Feb 20 18:23:51 CET 2017
;; MSG SIZE rcvd: 92
The website https://dnsleaktest.com/ confirms that I'm using the 93-95-228-87.1984.is server.
Why is there no "dnscrypt enabled"?
Is my DNS encrypted?
What am I doing wrong?
It looks like it's working as it should, and this is normal behaviour: the debug.opendns.com TXT record is an OpenDNS-specific check, so it only reports "dnscrypt enabled" when your queries actually go to OpenDNS resolvers, not to a third-party resolver like ns0.dnscrypt.is. Andrew gave me an answer:
http://www.webupd8.org/2014/08/encrypt-dns-traffic-in-ubuntu-with.html#comment-3165943154
I have installed consul on AWS EC2, with 3 servers and 1 client.
server IPs = 11.XX.XX.1,11.XX.XX.2,11.XX.XX.3.
client IP = 11.XX.XX.4
consul config: /etc/consul.d/server/config.json
{
  "bootstrap": false,
  "server": true,
  "datacenter": "abc",
  "advertise_addr": "11.XX.XX.1",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "enable_syslog": true,
  "addresses": {
    "http": "0.0.0.0"
  },
  "start_join": ["11.XX.XX.2", "11.XX.XX.3"]
}
netstat output on server:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8400 0.0.0.0:* LISTEN 29720/consul
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1006/sshd
tcp 0 0 127.0.0.1:8600 0.0.0.0:* LISTEN 29720/consul
tcp6 0 0 :::8301 :::* LISTEN 29720/consul
tcp6 0 0 :::8302 :::* LISTEN 29720/consul
tcp6 0 0 :::8500 :::* LISTEN 29720/consul
tcp6 0 0 :::22 :::* LISTEN 1006/sshd
tcp6 0 0 :::8300 :::* LISTEN 29720/consul
curl is working fine from a remote machine, but dig is only working on the local machine:
; <<>> DiG 9.9.5-3ubuntu0.6-Ubuntu <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40873
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul. IN A
;; ANSWER SECTION:
web.service.consul. 0 IN A 11.XX.XX.4
;; Query time: 0 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Fri Dec 30 08:21:41 UTC 2016
;; MSG SIZE rcvd: 52
but dig is not working from a remote machine:
dig @11.XX.XX.1 -p 8600 web.service.consul
; <<>> DiG 9.9.5-3ubuntu0.6-Ubuntu <<>> @11.XX.XX.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
-----------------------------
How can I make it work?
By default, Consul only listens for DNS connections on the instance's loopback device. Best practice is to install a Consul client on any remote machine that wants to consume Consul DNS, but this is not always practical.
I have seen people expose DNS (Consul port 8600) on all interfaces via the Consul configuration JSON, like so:
{
  "server": true,
  "addresses": {
    "dns": "0.0.0.0"
  }
}
You can also expose all ports currently listening on loopback with the client_addr field in the JSON config, or pass it on the command line with:
consul agent -client 0.0.0.0
There are more controls and knobs available to tweak (see docs):
https://www.consul.io/docs/agent/options.html
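A quick way to confirm the change took effect, assuming the agent has been restarted with the new configuration; the commands just reuse the addresses and port from the question:

# the DNS port should now be bound on 0.0.0.0 (or :::) instead of 127.0.0.1
sudo netstat -tulpn | grep 8600

# and the remote query from the question should now get an answer
dig @11.XX.XX.1 -p 8600 web.service.consul

Remember that the AWS security group also has to allow TCP and UDP on port 8600 from the remote machine, or the query will still time out.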