I have installed ICP within a private network, and each VM also has a public IP address. I am able to access the ICP dashboard using the master node's private IP address (https://master-node-private-ip:8443), but not using the master node's public IP address (https://master-node-public-ip:8443). I tried setting cluster_lb_address: in the config.yaml file, but it doesn't work.
In the config.yaml file, uncomment both cluster_lb_address and proxy_lb_address and set them to the master node's public IP address. In the cluster/hosts file, all the IPs must be in the same subnet.
Example:
# cat cluster/hosts
[master]
172.16.151.126
[worker]
172.16.151.182
172.16.155.135
[proxy]
172.16.151.126
#[management]
#4.4.4.4
#[va]
#5.5.5.5
config.yaml
------------------------------
cluster_lb_address: 9.x.x.x
proxy_lb_address: 9.x.x.x
Related
I want to run Elasticsearch remotely on a gcloud VM; it is configured to listen on 127.0.0.1 on port 9200. How can I access it from a website outside this VM? If I change the network host to 0.0.0.0 in the yml file, even port 9200 becomes inaccessible. How do I overcome this problem?
I changed network.host to [_site_, _local_, _global_], where
_site_ = the internal IP given by the Google Cloud VM,
_local_ = 127.0.0.1,
_global_ = the address found using curl ifconfig.me.
I opened a specific port (9200) and tried to connect with the global IP address.
curl to the global IP gives:
> Output: Failed to connect to (_global_ ip) port 9200: connection refused.
So set network.host: 0.0.0.0, then allow ports 9200 and 9201 and restart the Elasticsearch service. If you are using Ubuntu, run sudo service elasticsearch restart, then check with curl -XGET 'http://localhost:9200?pretty'. Let me know if you are still facing any issues.
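Roughly, the sequence looks like this (a sketch; on GCP the ports usually also have to be opened with a firewall rule, and the rule name allow-es below is just an example):
# elasticsearch.yml
network.host: 0.0.0.0
# open the ports in the GCP firewall (rule name is an example)
gcloud compute firewall-rules create allow-es --allow=tcp:9200,tcp:9201
# restart and verify locally
sudo service elasticsearch restart
curl -XGET 'http://localhost:9200?pretty'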
Use the following configuration for elasticsearch.yml:
network.host: 0.0.0.0
action.auto_create_index: false
index.mapper.dynamic: false
Solved this problem by going through the logs and finding out that the public IP address is re-mapped to the internal IP address, hence network.host can't be set to the external IP directly. The Elasticsearch yml config is as follows:
network.host: xx.xx.xxx.xx is set to the internal IP (given by Google),
http.cors.enabled: true,
http.cors.allow-origin: "*" (do not use * in production, it's a security issue),
discovery.type: single-node, in my case to make it work independently and not in a cluster.
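Put together, the relevant part of elasticsearch.yml looks like this (the IP is kept as a placeholder):
network.host: xx.xx.xxx.xx      # internal IP assigned by Google
http.cors.enabled: true
http.cors.allow-origin: "*"     # do not use * in production
discovery.type: single-node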
Now this sandboxed version can be accessed from outside the VM using the external IP address given by Google.
I have 3 remote VMs. All have both a public and an internal IP, both on eth0.
I have installed Hadoop 3.1.1 on them.
For the master node, I wrote its internal IP as the master host in its config files (core-site, yarn-site), and the public IPs of the other VMs in its workers file.
For the workers, I wrote public IPs in all config files.
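For reference, this is roughly what my files contain right now (all values are placeholders):
# core-site.xml and yarn-site.xml on the master
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://<master-internal-ip>:9000</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value><master-internal-ip></value>
</property>
# workers file on the master
<worker1-public-ip>
<worker2-public-ip>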
But it does not work, and jobs get stuck in the ACCEPTED state waiting for a container to be launched!
It seems that the workers want to connect via the public IP but the master is listening on the internal one!
If I set the public IP for the master VM, obviously it does not work!
If I set the internal IP of the master on the worker machines, the resourcemanager shows them as CORRUPT datanodes, so there is no active datanode!
Any idea how to set the IPs?
Thanks in advance
Initially, with the default iptables rules, EC2 Server A is able to access EC2 Server B via its private IP.
Running $ curl "http://<Server_B_private_IP>:80" on Server A is successful.
Now on Server B, I set rules to allow only Server A (using its public IP) and block the rest of the traffic.
Running $ curl "http://<Server_B_public_IP>:80" on Server A is successful, but $ curl "http://<Server_B_private_IP>:80" fails.
Is this normal? Why is Server B able to recognise Server A via its public IP, but not via its private IP?
I think the order of your rules is wrong. Change the order: put your "allow" rule first and the restricting rule at the end.
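A sketch of that ordering (the address is a placeholder; adjust to your setup):
# allow Server A first, then drop everything else on port 80
iptables -A INPUT -p tcp --dport 80 -s <Server_A_IP> -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP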
I am trying to configure a Hadoop cluster on CentOS 6.x virtual machines. I have configured a single-node Hadoop cluster first, planning to clone it and form the cluster from it later, but I am confused about configuring a static IP address for my virtual Hadoop cluster. My ifcfg-eth0 currently looks like this:
DEVICE=eth0
TYPE=Ethernet
UUID=892c57f5-17db-486d-b1b9-97efa8799bf0
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=dhcp
HWADDR=00:0C:29:5C:04:D0
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
Can anyone help me configure a static address for my virtual Hadoop cluster? Also, I am not able to ping any hostname other than localhost, but I am able to ping host addresses. Please help me resolve these ping and static-address issues.
A bridged network will help. In VirtualBox, select the machine, then Settings > Network.
Also, the following is an example of the file you are editing:
/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.1.200
NETMASK=255.255.255.0
There are other requirements. The link below might help:
http://blog.cloudera.com/blog/2014/01/how-to-create-a-simple-hadoop-cluster-with-virtualbox/
For internet access, the files listed below need to be modified.
/etc/resolv.conf
#Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
Also, the file you modified for the static IP, /etc/sysconfig/network-scripts/ifcfg-eth0, needs the following additions:
DNS1=8.8.8.8
DNS2=8.8.4.4
ONBOOT=yes
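To apply the changes, restart the network service and verify (a sketch, assuming the standard CentOS 6 service name):
service network restart
ip addr show eth0     # confirm the static address is assigned
ping -c 3 8.8.8.8     # confirm outbound connectivity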
I have set the following:
rpc_address: external public IP
listen_address: internal IP address (not localhost)
broadcast_rpc_address: internal IP address
In DevCenter, I am using the external IP and port 9042.
Let me know if I am doing anything wrong.
Thank you snakecharmerb for trying to help me out on this.
I was able to find a solution for this myself. The actual problem was that I was using DevCenter 1.4 to connect to Cassandra 3. Once I upgraded to DevCenter 1.5, it worked like a charm with SSH local port forwarding enabled.
These are the settings:
listen_address: internal IP address (not localhost)
rpc_address: internal IP address (same as above)
Steps after setting the above:
On my terminal, I enabled local port forwarding:
ssh -L 9042:<internal-ip>:9042 <user>@<public-ip>
Start Dev Center 1.5
It worked like a charm
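(With local forwarding like this, DevCenter is pointed at 127.0.0.1:9042 on the workstation and SSH carries the traffic to the Cassandra node's internal address.)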
It finally worked!
Steps:
1. set listen_address to private IP of EC2 instance.
2. do not set any broadcast_address
3. set rpc_address to 0.0.0.0
4. set broadcast_rpc_address to public ip of EC2 instance.
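Put together, the relevant cassandra.yaml lines look like this (IPs are placeholders):
listen_address: <ec2-private-ip>
# broadcast_address: (left unset)
rpc_address: 0.0.0.0
broadcast_rpc_address: <ec2-public-ip>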