I want Mesos to start my services on a different interface.
Right now it can only bind to 0.0.0.0, and my host is accessible from the outside.
I have tried playing with LIBPROCESS_IP as recommended in the docs, but I couldn't make it work.
Edit:
Mesos is already starting its own services on a private interface. I am asking about the services I start via Marathon; they all get bound to 0.0.0.0.
You could start Mesos with --ip to bind it to 127.0.0.1. For example, bind mesos-master to 127.0.0.1:
mesos-master.sh --ip=127.0.0.1 --work_dir=/tmp/mesos
Or bind mesos-slave to 127.0.0.1:
mesos-slave.sh --ip=127.0.0.1 --master=127.0.0.1:5050 --work_dir=/tmp/mesos
I have a small GCP instance running Elasticsearch (on port 9200) and Cerebro (on port 9000). I have created firewall rules allowing connections to those two ports.
From my workstation I can access http://instance_external_ip:9000 and get into the Cerebro UI.
But if I try to curl http://instance_external_ip:9200 from my workstation's command line, I get 'Connection refused'.
Elasticsearch is clearly running, since I can curl it from within the instance at localhost:9200, and I can access it via Cerebro from my workstation.
Thanks for your help.
You can check whether the Elasticsearch application is listening on port 9200 with the netstat command, adding the atup options to show TCP and UDP connections, all sockets, and the PID of each process:
netstat -atup
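For illustration, a loopback-only binding would show up along these lines (the PID and program name here are invented):
tcp        0      0 localhost:9200          *:*                     LISTEN      1234/java
If the Local Address column shows localhost or 127.0.0.1 rather than a wildcard or the instance's address, Elasticsearch is only reachable from within the instance itself.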
Check that the firewall rules are correctly configured for ingress connections to the relevant port, and that they are applied to the network your instance is running on.
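If you have the Cloud SDK installed on your workstation, a quick way to review the rules is (a sketch; you may want to filter or format the output):
gcloud compute firewall-rules list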
I want to connect to my Elasticsearch cluster from another machine. I went through some documentation that mentioned I had to change network.bind_host: 0, but I didn't find network.bind_host in my elasticsearch.yml; I only found network.host. I tried setting
network.host: 0
but I can't connect from the other machine. I also tried removing the ## before network.host: 0, which throws an error when starting the Elasticsearch cluster.
When connecting from another machine I have to use http://clustermachingip:9200, right?
Can anyone please help with this problem?
Thanks.
When you want to connect to an Elasticsearch instance on another machine, yes, the address is http://clustermachingip:9200. Can you try setting network.bind_host: clustermachingip?
If this doesn't work, you might want to check connectivity to the machine you are trying to reach, using something like the ping command:
ping clustermachingip
EDIT:
You can just start Elasticsearch on one machine and try one of the following curl commands from the other machine.
curl 'clustermachingip:9200/_cat/nodes?v'
curl 'clustermachingip:9200/_cat/health?v'
EDIT2: Clearing up the confusion between network.host and network.bind_host:
https://www.elastic.co/guide/en/elasticsearch/reference/2.4/modules-network.html#advanced-network-settings
The network.host setting explained in Commonly used network settings is a shortcut which sets the bind host and the publish host at the same time. In advanced use cases, such as when running behind a proxy server, you may need to set these settings to different values:
network.bind_host
This specifies which network interface(s) a node should bind to in order to listen for incoming requests. A node can bind to multiple interfaces, e.g. two network cards, or a site-local address and a local address. Defaults to network.host.
network.publish_host
The publish host is the single interface that the node advertises to other nodes in the cluster, so that those nodes can connect to it. Currently an elasticsearch node may be bound to multiple addresses, but only publishes one. If not specified, this defaults to the "best" address from network.host, sorted by IPv4/IPv6 stack preference, then by reachability.
Set network.host in your elasticsearch.yml to 0.0.0.0, i.e. it will listen on all available bound addresses:
network.host: 0.0.0.0
Check your connectivity to the host machine on the port (in case you haven't changed it, it will be 9200).
If you still cannot connect to the host machine, I suggest checking your iptables rules and allowing connections to port 9200.
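For example, on a host managed with plain iptables, a rule along these lines would accept traffic on that port (a sketch; chain names and rule order depend on your setup):
sudo iptables -I INPUT -p tcp --dport 9200 -j ACCEPT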
I am running a browser on a single-node Hortonworks Hadoop cluster (HDP 2.3.4) on CentOS 6.7.
With localhost:8000 and <hostname>:8000 I can access Hue. The same works for Ambari on 8080.
However, several other ports I can only access via the hostname. For example, with <hostname>:50070 I can access the NameNode service; if I use localhost:50070, I cannot set up a connection. So I assume localhost is blocked, but the NameNode is not.
How can I set things up so that localhost and <hostname> have the same port configuration?
This likely indicates that the NameNode HTTP server socket is bound to a single network interface, but not the loopback interface. The NameNode HTTP server address is controlled by configuration property dfs.namenode.http-address in hdfs-site.xml. Typically this specifies a host name or IP address, and this maps to a single network interface. You can tell it to bind to all network interfaces by setting property dfs.namenode.http-bind-host to 0.0.0.0 (the wildcard address, matching all network interfaces). The NameNode must be restarted for this change to take effect.
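A minimal sketch of that change in hdfs-site.xml:
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>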
There are similar properties for other Hadoop daemons. For example, YARN has a property named yarn.resourcemanager.bind-host for controlling how the ResourceManager binds to a network interface for its RPC server.
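The corresponding sketch for yarn-site.xml:
<property>
  <name>yarn.resourcemanager.bind-host</name>
  <value>0.0.0.0</value>
</property>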
More details are in the Apache Hadoop documentation for hdfs-default.xml and yarn-default.xml. There is also full coverage of multi-homed deployments in HDFS Support for Multihomed Networks.
I am running Kafka on an EC2 instance. The EC2 instance has two IPs: one internal, and a second one for external use.
I created a producer from my local machine, but it gets redirected to the internal IP and gives me a connection-unsuccessful error. Can anybody help me configure Kafka on the EC2 instance so that I can run the producer from my local machine? I have tried many combinations, but nothing worked.
In the Kafka FAQ (updated for new properties) you can read:
When a broker starts up, it registers its ip/port in ZK. You need to make sure the registered ip is consistent with what's listed in bootstrap.servers in the producer config. By default, the registered ip is given by InetAddress.getLocalHost.getHostAddress(). Typically, this should return the real ip of the host. However, sometimes (e.g., in EC2), the returned ip is an internal one and can't be connected to from outside. The solution is to explicitly set the host ip and port to be registered in ZK by setting the advertised.listeners property in server.properties.
I solved this problem by setting advertised.host.name in server.properties and metadata.broker.list in producer.properties to the public IP address, and host.name to 0.0.0.0.
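A sketch of the relevant lines with those (older, since-deprecated) property names, where your.public.ip stands in for the instance's public address:
# server.properties
host.name=0.0.0.0
advertised.host.name=your.public.ip
# producer.properties
metadata.broker.list=your.public.ip:9092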
The easiest way to reach your Kafka server (version kafka_2.11-1.0.0) on EC2 from a consumer in an external network is to change the properties file
kafka_2.11-1.0.0/config/server.properties
and modify the following line to use your public address:
listeners=PLAINTEXT://ec2-XXX-XXX-XXX-XXX.eu-central-1.compute.amazonaws.com:9092
Verified on 2.11-2.0.0
I just did this in AWS. First, get the Kafka server to listen on the correct interface/IP using host.name. In your case this would be the internal IP, not localhost, since your intent is for outside Kafka clients to connect. Any local clients will need to use that same address, not localhost.
Then set advertised.host.name to a host name, not an IP address. The trick is to get that host name to always resolve to the correct IP for both internal and external machines. I use /etc/hosts inside and DNS outside. See my full answer about Kafka and name resolution here.
If you want to access Kafka from the LAN, change the following two files:
In config/server.properties:
advertised.listeners=PLAINTEXT://server.ip.in.lan:9092
In config/producer.properties:
bootstrap.servers=server.ip.in.lan:9092
In my case, the server.ip.in.lan value was 192.168.15.150
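To verify from another machine on the LAN, a quick smoke test with the console producer that ships with Kafka (the topic name test is a placeholder):
bin/kafka-console-producer.sh --broker-list 192.168.15.150:9092 --topic test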
Below are the steps to connect to Kafka from outside the EC2 instance.
1. Open the Kafka server properties file on EC2:
/kafka_2.11-2.0.0/config/server.properties
2. Set the value of advertised.listeners to
advertised.listeners=PLAINTEXT://ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com:9092
This should be the Public DNS (IPv4) of your EC2 instance.
3. Stop the Kafka server.
4. Start the Kafka server again so the configuration change takes effect.
Now you can connect to Kafka on your EC2 instance from outside, or from your localhost.
Tried and tested on kafka_2.11-2.0.0
SSH to your EC2 instance, or wherever you're hosting Kafka.
sudo nano /etc/hosts
Add:
127.0.0.1 <your-host-name> localhost
In my case it's:
127.0.0.1 ec2-12-34-56-78.ap-southeast-1.compute.amazonaws.com
Save and exit.
For EC2 you should edit the /etc/hosts file to add:
XXX.XXX.XXX.XXX ip-YYY-YYY-YYY-YYY
where XXX.XXX.XXX.XXX is your external IP and ip-YYY-YYY-YYY-YYY is the string returned by the hostname command. You can use 127.0.0.1 instead of your external IP to communicate within the server.
host.name is deprecated, as are advertised.host.name and advertised.port; newer brokers use listeners and advertised.listeners instead.
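With the newer properties, a rough equivalent in server.properties looks like this (your.public.hostname is a placeholder for a name that resolves correctly from both inside and outside):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://your.public.hostname:9092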
My HBase region server is listening on 127.0.0.1. How do I make it listen on 0.0.0.0? I tried changing the value of hbase.regionserver.info.bindAddress, but that doesn't seem to be working.
In order to expose port 60020 on an external interface in (pseudo-)distributed mode, HBase wants your /etc/hosts to look a certain way. If you run Ubuntu, you're likely to find something like this in your /etc/hosts (I'm assuming your hostname is regionserver):
127.0.0.1 localhost
127.0.1.1 regionserver
Select a network interface with an IP address, e.g. eth0 with 192.168.1.2, and replace 127.0.1.1 with that address.
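Using that example address, the edited /etc/hosts would read:
127.0.0.1 localhost
192.168.1.2 regionserver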
Edit hbase/conf/regionservers and enter your hostname there:
regionserver
Restart HBase and try to connect to port 60020 from a remote machine.
Hope that helps!
Instead of hbase.regionserver.info.bindAddress, you should use the hbase.regionserver.ipc.address property and set it to the desired IP address or the 0.0.0.0 wildcard. For example:
<property>
  <name>hbase.regionserver.ipc.address</name>
  <value>0.0.0.0</value>
</property>
Remember:
- this should be applied on each machine running a Region Server if you have a cluster rather than a single machine.
- you have to restart the Region Server component (not the Master component) to apply the settings; a verification command follows below.
- all *.info.* properties relate to the web UI, not core functionality.
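To confirm the new binding after the restart, a check along these lines on the region server host should show the IPC port bound to 0.0.0.0 (port 60020 assumes the default):
netstat -tlnp | grep 60020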