"Insert into a table" operation fails in Hawq - hawq

I have a 5-node Hortonworks cluster (version 2.4.2) on which I have installed HAWQ 2.0.0.
These 5 nodes are:
edge
master (Name Node)
node1 (Data Node 1)
node2 (Data Node 2)
node3 (Data Node 3)
I followed this link to install Hawq in HDP - http://hdb.docs.pivotal.io/hdb/install/install-ambari.html
HAWQ components are installed on these nodes:
HAWQ master - node1
HAWQ standby master - node2
HAWQ segments - node1, node2, node3
At the time of installation, the HAWQ master, HAWQ standby master, and HAWQ segments were installed successfully, but the basic HAWQ test that the Ambari installer runs failed.
Below is the operation performed by the installer:
2016-06-30 00:24:22,513 - --- Check state of HAWQ cluster ---
2016-06-30 00:24:22,513 - Executing hawq status check...
2016-06-30 00:24:22,514 - Command executed: su - gpadmin -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null node1.localdomain \"source /usr/local/hawq/greenplum_path.sh && hawq state -d /data/hawq/master \" "
2016-06-30 00:24:23,343 - Output of command:
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:--HAWQ instance status summary
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:------------------------------------------------------
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Master instance = Active
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Master standby = node2.localdomain
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Standby master state = Standby host passive
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Total segment instance count from config file = 3
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:------------------------------------------------------
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Segment Status
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:------------------------------------------------------
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Total segments count from catalog = 1
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Total segment valid (at master) = 0
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Total segment failures (at master) = 3
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Total number of postmaster.pid files missing = 0
20160630:00:24:23:032731 hawq_state:node1:gpadmin-[INFO]:-- Total number of postmaster.pid files found = 3
2016-06-30 00:24:23,344 - --- Check if HAWQ can write and query from a table ---
2016-06-30 00:24:23,344 - Dropping ambari_hawq_test table if exists
2016-06-30 00:24:23,344 - Command executed: su - gpadmin -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null node1.localdomain \"export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \\\"DROP TABLE IF EXISTS ambari_hawq_test;\\\" \" "
2016-06-30 00:24:23,436 - Output:
DROP TABLE
2016-06-30 00:24:23,436 - Creating table ambari_hawq_test
2016-06-30 00:24:23,436 - Command executed: su - gpadmin -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null node1.localdomain \"export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \\\"CREATE TABLE ambari_hawq_test (col1 int) DISTRIBUTED RANDOMLY;\\\" \" "
2016-06-30 00:24:23,693 - Output:
CREATE TABLE
2016-06-30 00:24:23,693 - Inserting data to table ambari_hawq_test
2016-06-30 00:24:23,693 - Command executed: su - gpadmin -c "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null node1.localdomain \"export PGPORT=5432 && source /usr/local/hawq/greenplum_path.sh && psql -d template1 -c \\\"INSERT INTO ambari_hawq_test SELECT * FROM generate_series(1,10);\\\" \"
"
--- Above we can see that the DROP and CREATE TABLE statements were executed, but the INSERT operation did not succeed.
So I executed the INSERT command manually on the HAWQ master node, i.e. node1.
These are the steps executed manually:
[root@node1 ~]# su - gpadmin
[gpadmin@node1 ~]$ psql
psql (8.4.20, server 8.2.15)
WARNING: psql version 8.4, server version 8.2.
Some psql features might not work.
Type "help" for help.
gpadmin=#
gpadmin=# \c gpadmin
psql (8.4.20, server 8.2.15)
WARNING: psql version 8.4, server version 8.2.
Some psql features might not work.
You are now connected to database "gpadmin".
gpadmin=# create table test(name varchar);
gpadmin=# insert into test values('vikash');
-- The above insert operation threw an error after a long time:
ERROR: failed to acquire resource from resource manager, resource
request is timed out due to no available cluster (pquery.c:804)
Also, the HAWQ segment log on node1 shows:
[root@node1 ambari-agent]# tail -f /data/hawq/segment/pg_log/hawq-2016-06-30_045853.csv
2016-06-30 05:10:24.522688 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 192.168.122.1"
,,,,,,,0,,"network_utils.c",210,
2016-06-30 05:10:54.603726 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 127.0.0.1",,,,
,,,0,,"network_utils.c",210,
2016-06-30 05:10:54.603769 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 2.10.1.71",,,,
,,,0,,"network_utils.c",210,
2016-06-30 05:10:54.603778 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 192.168.122.1"
,,,,,,,0,,"network_utils.c",210,
2016-06-30 05:11:24.625919 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 127.0.0.1",,,,
,,,0,,"network_utils.c",210,
2016-06-30 05:11:24.626088 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 2.10.1.71",,,,
,,,0,,"network_utils.c",210,
2016-06-30 05:11:24.626129 EDT,,,p248618,th-1357371264,,,,0,,,seg-10000,,,,,"LOG","00000","Resource manager discovered local host IPv4 address 192.168.122.1"
,,,,,,,0,,"network_utils.c",210,
I also checked "gp_segment_configuration":
gpadmin=# select * from gp_segment_configuration
gpadmin-# ;
registration_order | role | status | port | hostname | address | description
--------------------+------+--------+-------+-------------------+-----------+------------------------------------
-1 | s | u | 5432 | node2.localdomain | 2.10.1.72 |
0 | m | u | 5432 | node1 | node1 |
1 | p | d | 40000 | node1.localdomain | 2.10.1.71 | resource manager process was reset
(3 rows)
NOTE: In hawq-site.xml, the resource management type is set to "STANDALONE" (not "YARN") in the dropdown.
Does anyone have a clue what the issue is here?
Thanks in advance!

I have met this problem before. In such an environment, every segment can end up with a common IP address, so please check whether the segment nodes share the same IP address.
HAWQ 2.0.0 treats segments with the same IP address as one node; that is why you have 3 segment nodes but only one segment registered in gp_segment_configuration. You could remove the duplicate IP address and try again.
This issue has been fixed in the latest HAWQ code.
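A quick way to confirm the duplicate-address theory is to list the IPv4 addresses each segment host reports and look for repeats. This is only a diagnostic sketch, not part of the HAWQ tooling; it assumes passwordless ssh as gpadmin and that the host names node1-node3 resolve:
for h in node1 node2 node3; do
    # print every IPv4 address the host reports, prefixed with the host name
    ssh gpadmin@$h "ip -4 -o addr show | awk '{print \$4}'" | sed "s/^/$h: /"
done
Any non-loopback address that appears under more than one host is a candidate for the duplicate that collapses the segments into a single registered node.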

Thanks to you all for your replies.
The underlying OS is CentOS and it runs on vCloud. As suggested, I went through the IP configuration of all 3 data nodes holding the 3 segments. The nodes were not using the same NICs (IPs), but on investigating further I found through ifconfig that, along with "eth1" and "lo", another interface was configured: "virbr0".
This "virbr0" had the same address on all the segment nodes, and this was causing the issue. I removed it from all nodes and then the INSERT query worked.
Below is the output of ifconfig; to resolve the issue I removed "virbr0" from all the segment nodes (a sketch of how this can be done follows the output).
eth1 Link encap:Ethernet HWaddr 00:50:56:01:31:26
inet addr:2.10.1.74 Bcast:2.10.3.255 Mask:255.255.252.0
inet6 addr: fe80::250:56ff:fe01:3126/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:426157 errors:0 dropped:0 overruns:0 frame:0
TX packets:259592 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:361465764 (344.7 MiB) TX bytes:216951933 (206.9 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:6 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:416 (416.0 b) TX bytes:416 (416.0 b)
virbr0 Link encap:Ethernet HWaddr 52:54:00:DC:EE:00
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
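The virbr0 bridge at 192.168.122.1 is created by libvirt's default NAT network, which explains why every node reported the identical address. The poster does not show the exact removal commands; the following is one common way to do it on CentOS as root, assuming libvirt is installed and the default network is not needed:
# stop the libvirt default network (this removes the virbr0 bridge)
virsh net-destroy default
# keep it from being recreated on the next reboot
virsh net-autostart default --disable
# if the bridge is still up for some reason, bring it down explicitly
ip link set virbr0 down
After that, restarting HAWQ so the segments re-register in gp_segment_configuration would presumably be needed.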

Related

Why can't I configure a static ip on raspberry pi?

I am trying to add a static IP address on a Raspberry Pi and can't get it working.
ifconfig on pi
wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.68.104 netmask 255.255.255.0 broadcast 192.168.68.255
inet6 fe80::1e8e:49a0:5bf:ad41 prefixlen 64 scopeid 0x20<link>
ether b8:27:eb:c4:41:05 txqueuelen 1000 (Ethernet)
RX packets 210 bytes 49138 (47.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 189 bytes 28376 (27.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
gateway
192.168.xx.x
/etc/resolv.conf:
nameserver 62.179.104.xxx
nameserver 213.46.228.xxx
dhcpcd.conf settings:
interface wlan0
static ip_address=192.168.68.68/20
static routers=192.168.xx.x
static domain_name_servers=62.179.104.xxx 213.46.228.xxx
I have also tried static ip_address=192.168.68.68/24
After rebooting the Pi, hostname -I still gives the original IP: 192.168.68.104
What am I doing wrong here? Or is there another way to set a static IP on a Raspberry Pi?
First of all make sure the dhcpcd service is enabled and running:
sudo service dhcpcd status
If that is not the case:
sudo service dhcpcd start
sudo systemctl enable dhcpcd
Now you can edit the dhcpcd config (like you already did)
sudo nano /etc/dhcpcd.conf
If you are on a network cable use eth0; on WiFi use wlan0 (as shown in your ifconfig output):
interface eth0
static ip_address=192.168.0.4/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
Configure this as you need, then reboot.
Good luck!
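Once dhcpcd has picked up the new settings, a quick sanity check after the reboot is to confirm both the address and the route table (this is just a verification sketch, not part of the original answer):
ip -4 addr show wlan0    # should list the configured static address, e.g. 192.168.68.68/24
ip route                 # the default route should point at the configured router
hostname -I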

Multiple pings from a client to a server

I have 2 virtual machines. One is Windows (call it the client). The other is CentOS (call it the server).
My Windows (Client) IP is 1.1.1.1
My CentOS (Server) has an app running and is listening on port 12345. IP address of the CentOS VM is 2.2.2.2.
I want to generate multiple pings from the windows VM from specific IP/ports to CentOS VM specific IP/ports.
i.e generate multiple pings from a certain port + source IP of 1.1.1.1 to destination IP of 2.2.2.2 + destination port number 12345.
I am looking for something like the foll:
**** ping DIP D.Port SIP S.Port -count 1000 ****
Please note: I need to run this ping from my windows CMD.
Is there a way I can do this from my windows CMD line?
Check your system docs; on mine it is:
Mac_3.2.57$ping -S 10.0.0.148 -c 100000 1.1.1.1
PING 1.1.1.1 (1.1.1.1) from 10.0.0.148: 56 data bytes
64 bytes from 1.1.1.1: icmp_seq=0 ttl=59 time=25.521 ms
64 bytes from 1.1.1.1: icmp_seq=1 ttl=59 time=18.837 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=59 time=16.605 ms
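For the Windows CMD side the options differ, and note that ping uses ICMP, which has no notion of ports at all, so source/destination ports cannot be specified with ping. A rough Windows equivalent covering only the count and source address would be something like:
ping -S 1.1.1.1 -n 1000 2.2.2.2
If the goal is really to exercise TCP port 12345 on the server, a port-aware tool would be needed instead, for example PowerShell's Test-NetConnection 2.2.2.2 -Port 12345 or a simple TCP client.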

Kubernetes - Connection tracking does not mangle packages back to the original destination IP (DNAT)

We have a Kubernetes cluster setup using AWS EC2 instances which we created using KOPS. We are experiencing problems with internal pod communication through kubernetes services (which will load balance traffic between destination pods). The problem emerges when the source and destination pod are on the same EC2 instance (node). Kubernetes is setup with flannel for internode communication using vxlan, and kubernetes services are managed by kube-proxy using iptables.
In a scenario where:
PodA running on EC2 instance 1 (ip-172-20-121-84, us-east-1c): 100.96.54.240
PodB running on EC2 instance 1 (ip-172-20-121-84, us-east-1c): 100.96.54.247
ServiceB (service where PodB is a possible destination endpoint): 100.67.30.133
If we go inside PodA and execute "curl -v http://ServiceB/", no answer is received and eventually the request times out.
When we inspect the traffic (cni0 interface in instance 1), we observe:
PodA sends a SYN packet to the ServiceB IP
The packet is mangled and the destination IP is changed from the ServiceB IP to the PodB IP
Conntrack registers that change:
root@ip-172-20-121-84:/home/admin# conntrack -L|grep 100.67.30.133
tcp 6 118 SYN_SENT src=100.96.54.240 dst=100.67.30.133 sport=53084 dport=80 [UNREPLIED] src=100.96.54.247 dst=100.96.54.240 sport=80 dport=43534 mark=0 use=1
PodB sends a SYN+ACK packet to PodA
The source IP of the SYN+ACK packet is not reverted from the PodB IP back to the ServiceB IP
PodA receives a SYN+ACK packet from PodB, which it did not expect, and sends back a RESET packet
PodA sends a SYN packet to ServiceB again after a timeout, and the whole process repeats
Here are the annotated tcpdump details:
root@ip-172-20-121-84:/home/admin# tcpdump -vv -i cni0 -n "src host 100.96.54.240 or dst host 100.96.54.240"
TCP SYN:
15:26:01.221833 IP (tos 0x0, ttl 64, id 2160, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.240.43534 > 100.67.30.133.80: Flags [S], cksum 0x1e47 (incorrect -> 0x3e31), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372198 ecr 0,nop,wscale 9], length 0
15:26:01.221866 IP (tos 0x0, ttl 63, id 2160, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.240.43534 > 100.96.54.247.80: Flags [S], cksum 0x36d6 (incorrect -> 0x25a2), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372198 ecr 0,nop,wscale 9], length 0
Level 2:
15:26:01.221898 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 100.96.54.240 tell 100.96.54.247, length 28
15:26:01.222050 ARP, Ethernet (len 6), IPv4 (len 4), Reply 100.96.54.240 is-at 0a:58:64:60:36:f0, length 28
TCP SYN+ACK:
15:26:01.222151 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.247.80 > 100.96.54.240.43534: Flags [S.], cksum 0x36d6 (incorrect -> 0xc318), seq 2871879716, ack 506285655, win 26697, options [mss 8911,sackOK,TS val 153372198 ecr 153372198,nop,wscale 9], length 0
TCP RESET:
15:26:01.222166 IP (tos 0x0, ttl 64, id 32433, offset 0, flags [DF], proto TCP (6), length 40)
100.96.54.240.43534 > 100.96.54.247.80: Flags [R], cksum 0x6256 (correct), seq 506285655, win 0, length 0
TCP SYN (2nd time):
15:26:02.220815 IP (tos 0x0, ttl 64, id 2161, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.240.43534 > 100.67.30.133.80: Flags [S], cksum 0x1e47 (incorrect -> 0x3d37), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372448 ecr 0,nop,wscale 9], length 0
15:26:02.220855 IP (tos 0x0, ttl 63, id 2161, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.240.43534 > 100.96.54.247.80: Flags [S], cksum 0x36d6 (incorrect -> 0x24a8), seq 506285654, win 26733, options [mss 8911,sackOK,TS val 153372448 ecr 0,nop,wscale 9], length 0
15:26:02.220897 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60)
100.96.54.247.80 > 100.96.54.240.43534: Flags [S.], cksum 0x36d6 (incorrect -> 0x91f0), seq 2887489130, ack 506285655, win 26697, options [mss 8911,sackOK,TS val 153372448 ecr 153372448,nop,wscale 9], length 0
15:26:02.220915 IP (tos 0x0, ttl 64, id 32492, offset 0, flags [DF], proto TCP (6), length 40)
100.96.54.240.43534 > 100.96.54.247.80: Flags [R], cksum 0x6256 (correct), seq 506285655, win 0, length 0
The relevant iptable rules (automatically managed by kube-proxy) on instance 1 (ip-172-20-121-84, us-east-1c):
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-SERVICES ! -s 100.96.0.0/11 -d 100.67.30.133/32 -p tcp -m comment --comment "prod/export: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 100.67.30.133/32 -p tcp -m comment --comment "prod/export: cluster IP" -m tcp --dport 80 -j KUBE-SVC-3IL52ANAN3BQ2L74
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.10000000009 -j KUBE-SEP-4XYJJELQ3E7C4ILJ
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.11110999994 -j KUBE-SEP-2ARYYMMMNDJELHE4
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.12500000000 -j KUBE-SEP-OAQPXBQCZ2RBB4R7
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.14286000002 -j KUBE-SEP-SCYIBWIJAXIRXS6R
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.16667000018 -j KUBE-SEP-G4DTLZEMDSEVF3G4
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.20000000019 -j KUBE-SEP-NXPFCT6ZBXHAOXQN
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-7DUMGWOXA5S7CFHJ
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-LNIY4F5PIJA3CQPM
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SLBETXT7UIBTZCPK
-A KUBE-SVC-3IL52ANAN3BQ2L74 -m comment --comment "prod/export:" -j KUBE-SEP-FMCOTKNLEICO2V37
-A KUBE-SEP-OAQPXBQCZ2RBB4R7 -s 100.96.54.247/32 -m comment --comment "prod/export:" -j KUBE-MARK-MASQ
-A KUBE-SEP-OAQPXBQCZ2RBB4R7 -p tcp -m comment --comment "prod/export:" -m tcp -j DNAT --to-destination 100.96.54.247:80
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
This is the service definition:
root@adsvm010:/yamls# kubectl describe service export
Name: export
Namespace: prod
Labels: <none>
Annotations: <none>
Selector: run=export
Type: ClusterIP
IP: 100.67.30.133
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 100.96.5.44:80,100.96.54.235:80,100.96.54.247:80 + 7 more...
Session Affinity: None
Events: <none>
If instead of the service we use the PodB IP directly (so there is no need to mangle packets), the connection works.
If we use the service but the randomly selected destination pod is running on a different instance, then the connection tracking mechanism works properly and it mangles the packet back, so PodA sees the SYN+ACK packet as expected (coming from the ServiceB IP). In this case, traffic goes through the cni0 and flannel.0 interfaces.
This behavior started some weeks ago; before that we had not observed any problems (for over a year), and we do not recall any major change to the cluster setup or to the pods we are running. Does anybody have any idea that would explain why the SYN+ACK packet is not mangled back to the expected src/dst IPs?
I finally found the answer. The cni0 interface is in bridge mode with all the pod virtual interfaces (one veth0 per pod running on that node):
root@ip-172-20-121-84:/home/admin# brctl show
bridge name bridge id STP enabled interfaces
cni0 8000.0a5864603601 no veth05420679
veth078b53a1
veth0a60985d
...
root@ip-172-20-121-84:/home/admin# ip addr
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8951 qdisc noqueue state UP group default qlen 1000
link/ether 0a:58:64:60:36:01 brd ff:ff:ff:ff:ff:ff
inet 100.96.54.1/24 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::1c66:76ff:feb6:2122/64 scope link
valid_lft forever preferred_lft forever
Traffic that goes from the bridged interface to some other interface (or vice versa) is processed by netfilter/iptables, but traffic that does not leave the bridge (e.g. from one veth0 to another, both belonging to the same bridge) is NOT processed by netfilter/iptables.
In the example I gave in the question, PodA (100.96.54.240) sends a SYN packet to ServiceB (100.67.30.133), which is not in the cni0 subnet (100.96.54.1/24), so the packet does not stay on the bridged cni0 interface and iptables processes it. That is why we see that the DNAT happened and got registered in conntrack. But if the selected destination pod is on the same node, for instance PodB (100.96.54.247), then PodB sees the SYN packet and responds with a SYN+ACK whose source is 100.96.54.247 and destination is 100.96.54.240. These IPs are inside the cni0 subnet and the packet never needs to leave it, hence netfilter/iptables does not process it and does not mangle the packet back based on the conntrack information (i.e., the real source 100.96.54.247 is not replaced by the expected source 100.67.30.133).
Fortunately, there is the bridge-netfilter kernel module that can enable netfilter/iptables to process traffic that happens in the bridged interfaces:
root@ip-172-20-121-84:/home/admin# modprobe br_netfilter
root@ip-172-20-121-84:/home/admin# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
To fix this in a Kubernetes cluster setup with KOPS (credits), edit the cluster manifest with kops edit cluster and under spec: include:
hooks:
- name: fix-bridge.service
  roles:
  - Node
  - Master
  before:
  - network-pre.target
  - kubelet.service
  manifest: |
    Type=oneshot
    ExecStart=/sbin/modprobe br_netfilter
    [Unit]
    Wants=network-pre.target
    [Install]
    WantedBy=multi-user.target
This will create a systemd service at /lib/systemd/system/fix-bridge.service on your nodes that runs at startup and makes sure the br_netfilter module is loaded before Kubernetes (i.e., kubelet) starts. If we do not do this, what we experienced with AWS EC2 instances (Debian Jessie images) is that sometimes the module is loaded during startup and sometimes it is not (I do not know why there is such variability), so depending on that the problem may or may not manifest itself.
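Outside of kops, the same effect can presumably be achieved on an individual node by loading the module at boot and pinning the sysctl; a minimal sketch, assuming a systemd-based distribution and run as root:
# load br_netfilter at every boot via systemd-modules-load
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# make sure bridged traffic is passed to iptables
echo 'net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/99-bridge-nf.conf
# apply immediately without a reboot
modprobe br_netfilter && sysctl --system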

Add route in VPN connection Mac OS X

I have the following routing table:
➜ ~ netstat -nr
Routing tables
Internet:
Destination Gateway Flags Refs Use Netif Expire
default 192.168.0.1 UGSc 63 1 en0
default 10.255.254.1 UGScI 1 0 ppp0
10 ppp0 USc 2 4 ppp0
10.255.254.1 10.255.254.2 UHr 1 0 ppp0
92.46.122.12 192.168.0.1 UGHS 0 0 en0
127 127.0.0.1 UCS 0 0 lo0
127.0.0.1 127.0.0.1 UH 2 62144 lo0
169.254 link#4 UCS 0 0 en0
192.168.0 link#4 UCS 8 0 en0
192.168.0.1 c0:4a:0:2d:18:48 UHLWIir 60 370 en0 974
192.168.0.100 a0:f3:c1:22:1d:6e UHLWIi 1 228 en0 1174
How can I add a gateway (10.25.1.252) for a specific IP (10.12.254.9) inside the VPN?
I tried this command, but with no luck:
sudo route -n add 10.12.0.0/16 10.25.1.252
But traceroute shows that it uses the default gateway:
~ traceroute 10.12.254.9
traceroute to 10.12.254.9 (10.12.254.9), 64 hops max, 52 byte packets
1 10.255.254.1 (10.255.254.1) 41.104 ms 203.766 ms 203.221 ms
Are you using Cisco AnyConnect? Here's a tidbit from https://supportforums.cisco.com/document/7651/anyconnect-vpn-client-faq
Q. How does the AnyConnect client enforce/monitor the
tunnel/split-tunnel policy?
A. AnyConnect enforces the tunnel policy in 2 ways:
1)Route monitoring and repair (e.g. if you change the route table),
AnyConnect will restore it to what was provisioned.
2)Filtering (on platforms that support filter engines). Filtering ensures that even if you could perform some sort of route injection, the filters would block the packets.
Which I interpret as: whenever you change the route table, the Cisco client resets the routes to what your VPN administrator configured.
Your best bet is to talk to your VPN administrator and ask them to add your route.
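For completeness, if the VPN client did not keep rewriting the routing table, adding and verifying a host route on macOS would normally look something like this (a sketch only, not a workaround for AnyConnect's route monitoring):
sudo route -n add -host 10.12.254.9 10.25.1.252
route -n get 10.12.254.9     # confirm which gateway and interface will be used
traceroute 10.12.254.9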

How to get the paragraph contain the key words in shell scripts?

I want to get the whole paragraph that contains the key word.
For example, the following is the output of "ifconfig -a":
bond0 Link encap:Ethernet HWaddr 00:11:3F:C1:47:98
inet6 addr: fe80::211:3fff:fec1:4798/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:1881856 errors:0 dropped:0 overruns:0 frame:0
TX packets:1059020 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2618747813 (2.4 GiB) TX bytes:182058226 (173.6 MiB)
bond0:oam Link encap:Ethernet HWaddr 00:11:3F:C1:47:98
inet addr:135.2.156.97 Bcast:135.2.156.111 Mask:255.255.255.240
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
bond0:oamA Link encap:Ethernet HWaddr 00:11:3F:C1:47:98
inet addr:135.2.156.103 Bcast:135.2.156.111 Mask:255.255.255.240
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
And I want to extract the paragraph that contains the key word "bond0:oamA".
I know that if I use grep, only the line
bond0:oamA Link encap:Ethernet HWaddr 00:11:3F:C1:47:98
will be returned.
But I want to extract the whole paragraph containing the key word.
Is there a way to get this paragraph?
Thanks a lot!
Try this command:
ifconfig -a | awk -vRS='' '$1~/bond0:oamA/'
When RS is set to the null string, awk treats the input as multi-line records separated by blank lines (paragraph mode), so each interface block is one record.
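The same paragraph-mode trick generalizes to any key word; a small variant that takes the pattern as a variable (the key= name here is just illustrative) might look like:
ifconfig -a | awk -v RS='' -v key='bond0:oamA' '$0 ~ key'
Note that matching on $0 returns a block if the key appears anywhere in it, whereas the original $1 match only looks at the first field of each paragraph.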
