I'm setting up an Elasticsearch cluster on GCE. Eventually it will be used from within the application, which is on the same network, but for now, while developing, I want to access it from my dev environment. Also, even in the long term I will have to access Kibana from an external network, so I need to know how to allow that. For now I'm not taking care of any security considerations.
The cluster (currently one node) runs on a GCE instance with CentOS 7.
It has an external IP enabled (the ephemeral option).
I can access 9200 from within the instance:
es-instance-1 ~]$ curl -XGET 'localhost:9200/?pretty'
But not via the external IP, which shows 9200 as closed when I test it:
es-instance-1 ~]$ nmap -v -A my-external-ip | grep 9200
9200/tcp closed wap-wsp
localhost, as expected, is fine:
es-instance-1 ~]$ nmap -v -A localhost | grep 9200
Discovered open port 9200/tcp on 127.0.0.1
I saw a similar question here, and following it I went to create a firewall rule. First I added the tag 'elastic-cluster' to the instance, and then the rule:
$ gcloud compute instances describe es-instance-1
tags:
items:
- elastic-cluster
$ gcloud compute firewall-rules create allow-elasticsearch --allow TCP:9200 --target-tags elastic-cluster
Here it's listed as created:
gcloud compute firewall-rules list
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
allow-elasticsearch default 0.0.0.0/0 tcp:9200 elastic-cluster
So now there is a rule which is supposed to allow 9200, but it is still not reachable:
es-instance-1 ~]$ nmap -v -A my-external-ip | grep 9200
9200/tcp closed wap-wsp
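In case it's relevant, this is how I'd check which address the service is actually bound to from inside the instance (a quick check; ss comes from iproute2, and 'my-internal-ip' is a placeholder like the external one above):
es-instance-1 ~]$ sudo ss -tlnp | grep 9200
es-instance-1 ~]$ curl -XGET 'my-internal-ip:9200/?pretty'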
What am i missing?
Thanks
I want to be able to determine which LB is primary and which is secondary from a bash script running on both load balancers.
Background: for the renewal of a Let's Encrypt certificate on an HAProxy load balancer pair, where the service IP is normally bound to the master, it is necessary to determine which server is the master (has the service IP bound) and which is only a secondary backup (without web access via ports 80 and 443).
If you follow this guide by Sebastian Schrader (https://serverfault.com/a/871783), the following procedure helps determine which node is the master and which is the backup:
IFS="/"
# /org/keepalived/Vrrp1/Instance/ens192/151/IPv4
vrrpInstance=$(/usr/bin/busctl tree | grep keepalived | grep IPv4)
set $vrrpInstance
#151
vrrpRouterID=$7
# (us) 2 "Master" or "Backup"
vrrpProp=$(/usr/bin/busctl get-property org.keepalived.Vrrp1 /org/keepalived/Vrrp1/Instance/ens192/"${vrrpRouterID}"/IPv4 org.keepalived.Vrrp1.Instance State)
# Master or Backup
vrrpStatus=$(echo ${vrrpProp} | cut -c 9-14)
unset IFS
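With vrrpStatus set, the renewal can then be limited to whichever node currently holds the service IP. A minimal sketch, assuming certbot and an HAProxy reload as the deploy step (adjust to your renewal tooling):
# run the renewal only on the node that currently holds the service IP
if [ "${vrrpStatus}" = "Master" ]; then
    certbot renew --deploy-hook "systemctl reload haproxy"
fi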
In order to check that all the servers across a fleet aren't supporting deprecated algorithms, I'm (programmatically) doing this:
telnet localhost 22
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SSH-2.0-OpenSSH_8.0p1 Ubuntu-6build1
SSH-2.0-Censor-SSH2
4&m����&F �V��curve25519-sha256,curve25519-sha256#libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1Arsa-sha2-512,rsa-sha2-256,ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519lchacha20-poly1305#openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm#openssh.com,aes256-gcm#openssh.comlchacha20-poly1305#openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm#openssh.com,aes256-gcm#openssh.com�umac-64-etm#openssh.com,umac-128-etm#openssh.com,hmac-sha2-256-etm#openssh.com,hmac-sha2-512-etm#openssh.com,hmac-sha1-etm#openssh.com,umac-64#openssh.com,umac-128#openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1�umac-64-etm#openssh.com,umac-128-etm#openssh.com,hmac-sha2-256-etm#openssh.com,hmac-sha2-512-etm#openssh.com,hmac-sha1-etm#openssh.com,umac-64#openssh.com,umac-128#openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-sha1none,zlib#openssh.comnone,zlib#openssh.comSSH-2.0-Censor-SSH2
Connection closed by foreign host.
This is supposed to be a list of the algorithms supported for the various phases of setting up a connection (kex, host key, etc.). Every time I run it, I get a different piece of odd data at the start, always of a different length.
There's an nmap plugin, ssh2-enum-algos, which returns the data in its complete form, but I don't want to run nmap; I have a Go program which opens the port and sends the query, but it gets the same as telnet. What am I missing, and how do I fix it?
For comparison, here's the top few lines from the output of nmap script:
$ nmap --script ssh2-enum-algos super
Starting Nmap 7.80 ( https://nmap.org ) at 2019-12-27 22:15 GMT
Nmap scan report for super (192.168.50.1)
Host is up (0.0051s latency).
rDNS record for 192.168.50.1: supermaster
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
| ssh2-enum-algos:
| kex_algorithms: (12)
| curve25519-sha256
| curve25519-sha256#libssh.org
| ecdh-sha2-nistp256
| ecdh-sha2-nistp384
| ecdh-sha2-nistp521
Opening a TCP connection to port 22 (in Go, with net.Dial), then accepting and sending the version strings, leaves us able to Read() from the connection. From there, the data is in a standard format described by the RFC, and from it I can list the algorithms supported in each phase of an SSH connection. This is very useful for measuring what is actually being offered, rather than what appears to be configured (it's easy to configure sshd to use a different config file).
It's a useful thing to be able to do from a security POV.
Tested on every version of ssh I can find, from 1.x on a very old Solaris or AIX box to RHEL 8.1.
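Incidentally, the "odd data" at the start is expected: per RFC 4253 the SSH_MSG_KEXINIT payload sits inside the binary packet framing (packet length, padding length) and begins with the message type byte and a 16-byte random cookie, which is why it differs on every run; the readable name-lists follow it. If you only want a rough look from a shell without parsing the framing, something like this sketch pulls out the printable lists (it assumes a netcat whose -w option also times out idle reads, and strings from binutils; parsing the packet properly, as above, is still the right way):
printf 'SSH-2.0-Probe\r\n' | nc -w 2 localhost 22 | strings | grep , | tr ',' '\n'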
In some cases you can specify an algorithm to use, and if you specify one that is not supported the server will reply with a list of supported algorithms.
For example, to check for supported key exchange algorithms you can use:
ssh 127.0.0.1 -oKexAlgorithms=diffie-hellman-group1-sha1
diffie-hellman-group1-sha1 is insecure and should be missing from most modern servers. The server will probably respond with something like:
Unable to negotiate with 127.0.0.1 port 22: no matching key exchange method found. Their offer: curve25519-sha256,curve25519-sha256#libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256
Exit 255
Typing: "ssh -Q cipher | cipher-auth | mac | kex | key"
will give you a list of the algorithms supported by your client
Typing: "man ssh"
will let you see what options you can specify with the -o argument, including Cipher, MACs, and KexAlgorithms
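Putting the two together, here is a small sketch that asks the local client for every kex algorithm it knows and probes a server with each one (user@host is a placeholder, and a connection failure for an unrelated reason will show up as "offered", so treat it as a rough check):
for kex in $(ssh -Q kex); do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 -o KexAlgorithms="$kex" user@host true 2>&1 | grep -q 'no matching key exchange method'; then
        echo "not offered: $kex"
    else
        echo "offered: $kex"
    fi
done
The same pattern works for Ciphers with ssh -Q cipher and for MACs with ssh -Q mac.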
I'm using WFPSampler to redirect all traffic to a specific interface with the command:
WFPSampler.exe -s PROXY -l FWPM_LAYER_ALE_BIND_REDIRECT_V4 -pla 10.0.2.15 -v -in
This works just fine; traffic from all processes is redirected as expected. The only problem is that it binds 127.0.0.1 to 10.0.2.15 as well, and then some applications fail to connect.
For example, I've created a simple Python HTTP server on 127.0.0.1:8000 and I cannot access it from the browser using that address.
I know that at FWPM_LAYER_ALE_BIND_REDIRECT_V4 it is only possible to filter by local address, but I somehow have to filter by remote address at this point, to avoid binding localhost to 10.0.2.15.
You could redirect the outgoing traffic from 10.0.2.15 to 127.0.0.1 at the same time, with a command like:
WFPSampler.exe -s PROXY -l FWPM_LAYER_ALE_CONNECT_REDIRECT_V4 -ipra 10.0.2.15 -pra 127.0.0.1 -v -in
The comments on this answer mention it.
I've read the Oracle RAC documentation a couple of times, but SCAN and VIP still confuse me. Can someone help me understand how this needs to be configured technically, so that I can explain it to my network admin?
1. For the VIPs in Oracle RAC: should each VIP be bound to a node, or does it just require a DNS A record (without allocating it to node1 or node2) and an entry in the hosts file? I know that during the Grid cluster installation Oracle will bind the VIP automatically, but should it be assigned in DNS to one of the nodes, or should it be free and unassigned?
2. The Oracle SCAN IPs need to be created in DNS: is this an A record resolving to 3 IPs, with reverse lookup, served round robin, and should it be kept out of the hosts file?
I need to explain this to my network admin so he can add it on the DNS server.
First, VIPs:
A VIP is a Virtual IP address, and should be defined in DNS and not assigned to any host or interface. When you install GRID/ASM home, you'll specify the VIP names that were assigned in DNS. When Oracle Clusterware starts up, it will assign a VIP to each node in the cluster. The idea is, if a node goes down (crashes), clusterware can immediately re-assign that VIP to a new (surviving) node. This way, you avoid TCP timeout issues.
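Once clusterware is up, you can see which node each VIP currently runs on from the cluster resource status. A minimal check (run from the Grid Infrastructure home; exact resource names vary by version and configuration):
crsctl stat res -t | grep -i vip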
Next, SCAN:
A SCAN (Single Client Access Name) is a special case of VIP. The SCAN should also be defined in DNS, and not assigned to any host or interface. There should be three IPs associated with the SCAN name in DNS, and the DNS entry should be defined so that one of the three IPs is returned each time DNS is queried, in a round robin fashion.
At clusterware startup time, each of the three VIPs that make up the SCAN will be assigned to a different node in the cluster. (Except in the special case of a two node cluster, where one of the nodes will have 2 SCAN VIPs assigned to it.) The point of the SCAN is that no matter how many nodes are added to or removed from the cluster, the Net Service Name definitions in your tnsnames.ora (or LDAP equivalent) never need to change, because they all refer to the SCAN, which doesn't change, regardless of how many node additions or drops are made to the cluster.
For example, in a three node cluster, you may have:
Physical and virtual hostnames/IPs assigned as follows:
Hostname Physical IP Virtual hostname Virtual IP
rac1 10.1.1.1 rac1-vip 10.1.1.4
rac2 10.1.1.2 rac2-vip 10.1.1.5
rac3 10.1.1.3 rac3-vip 10.1.1.6
Additionally, you may have the SCAN defined as:
rac-scan with three IPs, 10.1.1.7, 10.1.1.8, 10.1.1.9. Again, the DNS definition would be set up so those IPs are served in round robin order.
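Once the records are in place, a quick way to sanity-check the round robin behaviour (a sketch using the example name above; dig comes from bind-utils, and whether the order rotates depends on the DNS server's rrset ordering):
for i in 1 2 3; do dig +short rac-scan; done
Each query should return all three SCAN addresses, typically with the order rotating between queries.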
Note that the SCAN VIPs, Host VIPs, and the Physical IPs are all in the same subnet.
Finally, though you didn't ask about it, to complete the picture, you'd also need one private, non-routable IP assigned per host, and that IP would be associated with the private interconnect. So, you may have something like:
rac1-priv 172.16.1.1
rac2-priv 172.16.1.2
rac3-priv 172.16.1.3
Note that the '-priv' addresses should not be in DNS, only in the /etc/hosts file of each host in the RAC cluster. (They are private, non-routable, and only clusterware will ever know about or use those addresses, so adding to DNS doesn't make sense.)
Note also, that '-priv' and physical IP/hostname definitions should go in /etc/hosts, and the physical IPs and VIPs should be in DNS. So, physical IPs in both DNS and /etc/hosts, VIPs only in DNS, '-priv' addresses only in /etc/hosts.
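For reference, a sketch of the /etc/hosts entries each node might carry, using the example addresses above (adjust names to your environment; the VIP and SCAN names stay in DNS only):
cat >> /etc/hosts <<'EOF'
10.1.1.1    rac1
10.1.1.2    rac2
10.1.1.3    rac3
172.16.1.1  rac1-priv
172.16.1.2  rac2-priv
172.16.1.3  rac3-priv
EOF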
In answer to 1: I'm not entirely sure what you mean by this; I have each VIP address created in DNS as an A record assigned to its host, and I also record them in the hosts file as well.
In answer to 2: you are correct, the SCAN IPs should not be in the hosts file. And yes, 3 A records with reverse lookup will be enough (at least that's what has worked for me).
These are my iptables entries:
# Oracle ports
# Allow access from other Oracle RAC hosts
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 172.28.97.91-172.28.97.93 -j ACCEPT
-A INPUT -m state --state NEW -p tcp -m iprange --src-range 192.168.28.91-192.168.28.93 -j ACCEPT
# Allow UDP from the same ranges
-A INPUT -m state --state NEW -p udp -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m iprange --src-range 172.28.97.91-172.28.97.93 -j ACCEPT
-A INPUT -m state --state NEW -p udp -m iprange --src-range 192.168.28.91-192.168.28.93 -j ACCEPT
# Allow multicast
-A INPUT -m pkttype --pkt-type multicast -j ACCEPT
-A INPUT -s 224.0.0.0/24 -j ACCEPT
-A INPUT -s 230.0.1.0/24 -j ACCEPT
I also needed to get our systems admin to grant permissions at the firewall level to allow my nodes, their VIPs and the SCAN IPs to connect via port 1521.
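The blanket TCP rules above already accept everything from those source ranges, but if you narrow them down to specific ports, the listener rule would look something like this (a sketch reusing the first range; repeat for the other ranges as needed):
-A INPUT -m state --state NEW -p tcp --dport 1521 -m iprange --src-range 172.28.1.90-172.28.1.97 -j ACCEPT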
Hope this helps.
In Amazon EC2, I have 2 instances in a placement group. The first node is 172.31.12.76/20, the second 172.31.12.77/20. I can SSH to both nodes from my PC. They share the same security group, which has these 2 rules:
Inbound rules:
Type Protocol Port Range Source
SSH TCP 22 0.0.0.0/0
All ICMP All N/A 0.0.0.0/0
(no outbound rules)
Both nodes can see each other at L2:
root@ip-172-31-12-76:~# arp
[...]
ip-172-31-12-77.eu-west ether 0a:ad:5e:e4:12:de C eth0
[...]
root@ip-172-31-12-77:~# arp
[...]
ip-172-31-12-76.eu-west ether 0a:34:a1:17:57:28 C eth0
[...]
iptables are empty on both nodes.
But ping does not work between them.
I have already checked a previous post:
EC2 instances not responding to internal ping
but it does not address the issue. It looks like there are no other similar posts.
Any idea? Thank you very much!
I got the answer: I needed to also allow outbound ICMP on each host in order to be able to ping both external and internal IPs.
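For reference, the equivalent outbound rule from the AWS CLI would look something like this (a sketch: the security group ID is a placeholder and the CIDR is assumed to be the VPC range; type/code -1 means all ICMP):
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=icmp,FromPort=-1,ToPort=-1,IpRanges=[{CidrIp=172.31.0.0/20}]'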