Amazon EC2, cannot ping internal host

In Amazon EC2, I have two instances in a placement group. The first node is 172.31.12.76/20, the second 172.31.12.77/20. I can SSH into both nodes from my PC. They share the same security group, which has these two rules:
Inbound rules:
Type      Protocol  Port Range  Source
SSH       TCP       22          0.0.0.0/0
All ICMP  All       N/A         0.0.0.0/0
(no outbound rules)
Both nodes can see each other at L2:
root@ip-172-31-12-76:~# arp
[...]
ip-172-31-12-77.eu-west ether 0a:ad:5e:e4:12:de C eth0
[...]
root@ip-172-31-12-77:~# arp
[...]
ip-172-31-12-76.eu-west ether 0a:34:a1:17:57:28 C eth0
[...]
iptables is empty on both nodes.
But ping does not work between them.
I have already checked a previous post:
EC2 instances not responding to internal ping
but it does not address the issue. It looks like there are no other similar posts.
Any ideas? Thank you very much!

I found the answer: I also need to allow outbound ICMP on each host in order to be able to ping both external and internal IPs.
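For reference, a minimal sketch of opening outbound ICMP with the AWS CLI, assuming a hypothetical security group ID sg-0123456789abcdef0 (substitute your own group):
$ aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol icmp --port -1 --cidr 0.0.0.0/0
The same rule can also be added in the console under the security group's Outbound rules tab.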

Fluentbit creates TCP connections

Fluentbit creates TCP connections to itself?
What are these used for?
fluent.conf file:
[SERVICE]
    Flush     5
    Daemon    Off
    Log_Level debug
[INPUT]
    Name tail
    Tag  format.logging
    Path C:\Logs\*main.log
    DB   C:\Logs\main_logs.db
[OUTPUT]
    Name  stdout
    Match *
This seems to boil down to an implementation detail: it uses Unix sockets on Linux for its event loop, but on Windows it has opted to use localhost connections.
Comment from https://github.com/fluent/fluent-bit/blob/37aa680d32384c1179f02ee08a5bef4cd278513e/lib/monkey/mk_core/deps/libevent/include/event2/util.h#L380
/** Create two new sockets that are connected to each other.
On Unix, this simply calls socketpair(). On Windows, it uses the
loopback network interface on 127.0.0.1, and only
AF_INET,SOCK_STREAM are supported.
(This may fail on some Windows hosts where firewall software has cleverly
decided to keep 127.0.0.1 from talking to itself.)
Parameters and return values are as for socketpair()
*/
The actual implementation is here: https://github.com/fluent/fluent-bit/blob/37aa680d32384c1179f02ee08a5bef4cd278513e/lib/monkey/mk_core/deps/libevent/evutil.c#L207
It matches the pattern when looking at the netstat output:
netstat -anop tcp | findstr <fluentbit pid>
TCP 127.0.0.1:54645 127.0.0.1:54646 ESTABLISHED 12012 \
TCP 127.0.0.1:54646 127.0.0.1:54645 ESTABLISHED 12012 / Pair
TCP 127.0.0.1:54647 127.0.0.1:54648 ESTABLISHED 12012 \
TCP 127.0.0.1:54648 127.0.0.1:54647 ESTABLISHED 12012 / Pair

Pinging local host doesn't function

elasticsearch==7.10.0
I want to check whether Kibana is running on localhost port 5601, but apparently I am unable to ping it.
Note: I am aware that Elasticsearch has a built-in function to ping, but I still wish to ping from the command line for a specific reason in my project.
C:\User>ping 5601
Pinging f00:b00:f00:b00 with 32 bytes of data:
PING: transmit failed. General failure.
PING: transmit failed. General failure.
PING: transmit failed. General failure.
PING: transmit failed. General failure.
Ping statistics for f00:b00:f00:b00:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)
C:\User>ping http://localhost:5601
Ping request could not find host http://localhost:5601. Please check the name and try again.
Could someone help me?
You can use netstat to check whether port 5601, the port exposed by the Kibana UI, is in LISTEN mode:
$ netstat -tlpn | grep 5601
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6 0 0 :::5601 :::* LISTEN -
Or, if you want to establish a connection to destination port 5601, you can use nc:
$ nc -vz localhost 5601
Connection to localhost 5601 port [tcp/*] succeeded!
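Since the prompt in the question is Windows (C:\User>), roughly equivalent checks there would be (a sketch; curl ships with recent Windows 10/11 builds):
C:\User>netstat -ano | findstr :5601
C:\User>curl -I http://localhost:5601
The underlying point is that ping targets a host, not a port (ICMP has no notion of ports), so to test whether TCP port 5601 is reachable you need something like netstat, nc, curl, or Test-NetConnection rather than ping.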

Adding a multicast route to an interface in OSX

I have a VM running in Fusion that I want to reach by routing a specific endpoint address through the virtual ethernet interface (multicast DNS, in particular). First I was sending packets and inspecting with Wireshark, and noticed that nothing was getting through. Then I thought to check the routing table:
$ netstat -rn | grep vmnet8
Destination Gateway Flags Refs Use Netif Expire
172.16.12/24 link#29 UC 2 0 vmnet8 !
172.16.12.255 ff:ff:ff:ff:ff:ff UHLWbI 0 35 vmnet8 !
But unlike other interfaces,
Destination Gateway Flags Refs Use Netif Expire
224.0.0.251 a1:10:5e:50:0:fb UHmLWI 0 732 en0
224.0.0.251 a1:10:5e:50:0:fb UHmLWI 0 0 en8
There was no multicast route. So I added it:
$ sudo route add -host 224.0.0.251 -interface vmnet8
add host 224.0.0.251: gateway vmnet8
And so it was true
$ netstat -rn | grep vmnet8
Destination Gateway Flags Refs Use Netif Expire
172.16.12/24 link#29 UC 2 0 vmnet8 !
172.16.12.255 ff:ff:ff:ff:ff:ff UHLWbI 0 35 vmnet8 !
224.0.0.251 a1:10:5e:50:0:fb UHmLS 0 13 vmnet8
I was also sure to check the interface flags to ensure it had been configured to support multicast
$ ifconfig vmnet8
vmnet8: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
ether 00:70:61:c0:11:08
inet 172.16.12.1 netmask 0xffffff00 broadcast 172.16.12.255
Still, no multicast packets I send are getting through. I noted that the other interfaces' multicast routes have different flags than the ones given to my added route, namely UHmLWI vs UHmLS. The differences I can see seem insignificant. From man netstat:
I RTF_IFSCOPE Route is associated with an interface scope
S RTF_STATIC Manually added
W RTF_WASCLONED Route was generated as a result of cloning
Then again, I'm not claiming to be a routing expert. Perhaps a multicast route entry must be made somehow differently?
You'll note that the Use column is non-zero, despite no packets showing in a sniffer.

Restrict access to router VPN client to a single IP address

I have set up an OpenVPN client on an ASUS router running Padavan firmware, which is similar to Tomato and others.
The VPN client works, but I would like to limit its use to one or two IPs on my LAN (e.g. an Apple TV) and have all other clients bypass the VPN connection.
The Padavan VPN client has a custom script that is executed when the tunnel interface (tun0) goes up or down.
I have attempted to route the IP address of the client that I want to use, but this does not prevent access via all of the other clients:
#!/bin/sh
### Custom user script
### Called after internal VPN client connected/disconnected to remote VPN server
### $1 - action (up/down)
### $IFNAME - tunnel interface name (e.g. ppp5 or tun0)
### $IPLOCAL - tunnel local IP address
### $IPREMOTE - tunnel remote IP address
### $DNS1 - peer DNS1
### $DNS2 - peer DNS2
# private LAN subnet behind a remote server (example)
peer_lan="192.168.0.130"
peer_msk="255.255.255.253"
### example: add static route to private LAN subnet behind a remote server
func_ipup()
{
# route add -net $peer_lan netmask $peer_msk gw $IPREMOTE dev $IFNAME
# route add -net $peer_lan gw $IPREMOTE dev $IFNAME
ip route add default dev tun0 table 200
ip rule add from 192.168.0.130 table 200
return 0
}
func_ipdown()
{
# route del -net $peer_lan netmask $peer_msk gw $IPREMOTE dev $IFNAME
return 0
}
logger -t vpnc-script "$IFNAME $1"
case "$1" in
up)
func_ipup
;;
down)
func_ipdown
;;
esac
I realise that this is very specific to the Padavan firmware, but I think that the commands executed when the interface goes up should be universal, and my routing skills are very limited!
Maybe I need to block / allow using iptables instead?
Any suggestions or help gratefully appreciated!
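A minimal sketch of what the up/down handlers could look like with iproute2 policy routing, assuming (as in the script above) that 192.168.0.130 is the only LAN client that should use the tunnel and that routing table 200 is otherwise unused:
func_ipup()
{
# send only 192.168.0.130 through the tunnel via a dedicated routing table
ip route add default dev $IFNAME table 200
ip rule add from 192.168.0.130 table 200
ip route flush cache
return 0
}
func_ipdown()
{
# clean up the policy routing when the tunnel drops
ip rule del from 192.168.0.130 table 200
ip route flush table 200
ip route flush cache
return 0
}
Whether the remaining LAN clients also end up in the tunnel still depends on whether the OpenVPN client installs its own default route (e.g. via redirect-gateway); if it does, that route has to be suppressed so that only the rule above steers traffic into tun0.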

Oracle RAC VIP and SCAN IPs

I've read the Oracle RAC documentation a couple of times, but SCAN and VIPs are still confusing me. Can someone help me understand how this needs to be configured technically, so that I can explain it to my network admin?
For the VIPs in Oracle RAC, should each VIP be bound to a node, or does it just require a DNS A record (without allocating it to node1 or node2) and an entry in the hosts file?
I know that while performing the Grid cluster installation Oracle will bind the VIPs automatically, but should they be part of DNS, assigned to one of the nodes, or should they be free and unassigned?
The Oracle SCAN IPs need to be created as DNS records; is this one A record resolving to 3 IPs with reverse lookup, served round robin, and should it be kept out of the hosts file?
I need to explain this to my network admin so the records can be added on the DNS server.
First, VIPs:
A VIP is a Virtual IP address, and should be defined in DNS and not assigned to any host or interface. When you install GRID/ASM home, you'll specify the VIP names that were assigned in DNS. When Oracle Clusterware starts up, it will assign a VIP to each node in the cluster. The idea is, if a node goes down (crashes), clusterware can immediately re-assign that VIP to a new (surviving) node. This way, you avoid TCP timeout issues.
Next, SCAN:
A SCAN (Single Client Access Name) is a special case of VIP. The SCAN should also be defined in DNS, and not assigned to any host or interface. There should be three IPs associated with the SCAN name in DNS, and the DNS entry should be defined so that one of the three IPs is returned each time DNS is queried, in a round robin fashion.
At clusterware startup time, each of the three VIPs that make up the SCAN will be assigned to a different node in the cluster. (Except in the special case of a two node cluster, where one of the nodes will have 2 SCAN VIPs assigned to it.) The point of the SCAN is that no matter how many nodes are added to or removed from the cluster, the Net Service Name definitions in your tnsnames.ora (or LDAP equivalent) never need to change, because they all refer to the SCAN, which stays the same regardless of how many node additions or drops are made to the cluster.
For example, in a three node cluster, you may have:
Physical and virtual hostnames/IPs assigned as follows:
Hostname  Physical IP  Virtual hostname  Virtual IP
rac1      10.1.1.1     rac1-vip          10.1.1.4
rac2      10.1.1.2     rac2-vip          10.1.1.5
rac3      10.1.1.3     rac3-vip          10.1.1.6
Additionally, you may have the SCAN defined as:
rac-scan, with three IPs: 10.1.1.7, 10.1.1.8, 10.1.1.9. Again, the DNS entry would be configured so those IPs are served up in round-robin order.
Note that the SCAN VIPs, Host VIPs, and the Physical IPs are all in the same subnet.
Finally, though you didn't ask about it, to complete the picture, you'd also need one private, non-routable IP assigned per host, and that IP would be associated with the private interconnect. So, you may have something like:
rac1-priv 172.16.1.1
rac2-priv 172.16.1.2
rac3-priv 172.16.1.3
Note that the '-priv' addresses should not be in DNS, only in the /etc/hosts file of each host in the RAC cluster. (They are private, non-routable, and only clusterware will ever know about or use those addresses, so adding to DNS doesn't make sense.)
Note also, that '-priv' and physical IP/hostname definitions should go in /etc/hosts, and the physical IPs and VIPs should be in DNS. So, physical IPs in both DNS and /etc/hosts, VIPs only in DNS, '-priv' addresses only in /etc/hosts.
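To make the request to the network admin concrete, the DNS side of the example above could look roughly like these BIND-style zone entries (a sketch reusing the hypothetical rac* names and 10.1.1.x addresses from this answer):
rac1     IN A 10.1.1.1
rac2     IN A 10.1.1.2
rac3     IN A 10.1.1.3
rac1-vip IN A 10.1.1.4
rac2-vip IN A 10.1.1.5
rac3-vip IN A 10.1.1.6
rac-scan IN A 10.1.1.7
rac-scan IN A 10.1.1.8
rac-scan IN A 10.1.1.9
Three A records under the same name (rac-scan) is what gives the round-robin behaviour; a quick nslookup rac-scan or dig rac-scan should return all three addresses.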
Not entirely sure what you mean for the first question; I have each VIP address created in DNS as an A record assigned to the host, and I also record them in the hosts file as well.
In answer to 2, you are correct, the SCAN IPs should not be in the hosts file. And yes, 3 "A" records with reverse lookup will be enough (at least that's what has worked for me).
These are my iptables entries:
# Oracle ports
# Allow access from other Oracle RAC hosts
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 172.28.97.91-172.28.97.93 -j ACCEPT
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 192.168.28.91-192.168.28.93 -j ACCEPT
# Allow Multicast
-A INPUT -m state --state NEW -p udp -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m iprange --src-range 172.28.97.91-172.28.97.93 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m iprange --src-range 192.168.28.91-192.168.28.93 -j ACCEPT
# Allow multicast
-A INPUT -m pkttype --pkt-type multicast -j ACCEPT
-A INPUT -s 224.0.0.0/24 -j ACCEPT
-A INPUT -s 230.0.1.0/24 -j ACCEPT
I also needed to get our systems admin to grant permissions at the firewall level to allow my nodes, their VIPs and the SCAN IPs to connect via port 1521.
Hope this helps.
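If that firewall is also iptables, the listener rule could be sketched along these lines (hypothetical; it reuses the 10.1.1.x example addressing from the first answer, so adjust the range to your own node, VIP and SCAN addresses):
-A INPUT -m state --state NEW -p tcp --dport 1521 -m iprange --src-range 10.1.1.1-10.1.1.9 -j ACCEPT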
