I'm trying to set up 3 server nodes on my local machine. Each node starts, but they are unable to join the cluster.
I can see the following message logged in the log file of each server node.
Topology snapshot [ver=1, servers=1, clients=0, CPUs=4, heap=0.1GB]
Here is my code to start the server.
IgniteConfiguration config = new IgniteConfiguration();
TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
ipFinder.setAddresses(Arrays.asList("192.168.0.3","192.168.0.3:47100..47120"));
spi.setIpFinder(ipFinder);
config.setDiscoverySpi(spi);
// Start Ignite node.
ignite = Ignition.start(config);
Can anyone please suggest what I'm missing here?
Try removing the address without the port and leave only the one that specifies the range:
ipFinder.setAddresses(Arrays.asList("192.168.0.3:47100..47120"));
Struggled with this for hours and could only solve it by doing the following.
1. Ensure that you've set the local interface and port range on which your servers are listening:
TcpDiscoverySpi spi = new TcpDiscoverySpi();
//This address should be accessible to other nodes
spi.setLocalAddress("192.168.0.1");
spi.setLocalPort(48500);
spi.setLocalPortRange(20);
2. Configure your IP finder accordingly. Assuming the node is to find peers on the same machine, configured similarly (i.e., as per step 1 above):
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("192.168.0.1:48500..48520"));
spi.setIpFinder(ipFinder);
As instances start up in turn, they'll use ports within the configured range, and within that range, they'll discover one another using TCP discovery.
This is the only way I managed to connect 2+ server nodes on the same machine, without using multicast discovery.
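For reference, here is the whole thing put together as one runnable server node. This is only a sketch of what worked for me: the ServerNode class name is mine, and the 192.168.0.1 address and 48500..48520 range are just the example values from above; adjust them to an interface your other nodes can reach.

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class ServerNode {
    public static void main(String[] args) {
        IgniteConfiguration config = new IgniteConfiguration();

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        // Interface and port range this node listens on; must be reachable by the other nodes.
        spi.setLocalAddress("192.168.0.1");
        spi.setLocalPort(48500);
        spi.setLocalPortRange(20);

        // Point the IP finder at the same address and port range so nodes on this machine find each other.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("192.168.0.1:48500..48520"));
        spi.setIpFinder(ipFinder);

        config.setDiscoverySpi(spi);

        // Start this server node; run the same program once per server instance.
        Ignite ignite = Ignition.start(config);
    }
}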
Related
For example, I have 3 NiFi nodes in a NiFi cluster. Example hostnames of these nodes:
192.168.12.50:8080(primary)
192.168.54.60:8080
192.168.95.70:8080
I know that I can access the NiFi REST API from all NiFi nodes. I have a GetHTTP processor that gets the cluster summary from the REST API, and this processor runs only on the primary node. I set the "URL" property of this processor to 192.168.12.50:8080/nifi-api/controller/cluster.
But if the primary node goes down, a new primary node will be elected. Then I will not be able to reach the 192.168.12.50:8080 address from the new primary node, because that node is down, so I will not be able to get the cluster summary from the REST API.
In this case, can I use "localhost:8080/nifi-api/controller/cluster" instead of "192.168.12.50:8080/nifi-api/controller/cluster" on each node in the NiFi cluster?
It depends on a few things. If you are running securely, then you have certificates generated for each node, specific to its hostname, and the host in the web request needs to match the host in the certificate, so you can't use localhost in that case.
It also depends on how NiFi's web server is configured. If nifi.web.http.host or nifi.web.https.host has a specific hostname specified, then the web server is bound only to that hostname and may not accept connections with a different hostname. In a default unsecured setup, if you leave nifi.web.http.host blank, it binds to all interfaces.
You may be able to use the expression language hostname() function to obtain the hostname of the current node, so you could make the URL something like "http://${hostname()}:8080/nifi-api/controller/cluster".
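As a concrete illustration (the port 8080 and the blank host are assumptions based on the default unsecured setup described above, not values read from your environment), the relevant nifi.properties entries and the GetHTTP URL might look like:

# nifi.properties on each node (unsecured; a blank host binds the web server to all interfaces)
nifi.web.http.host=
nifi.web.http.port=8080

# GetHTTP "URL" property, resolved per node via expression language
http://${hostname()}:8080/nifi-api/controller/cluster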
We have an 8-node cluster. Our applications point to one node in this cluster using the Transport Client. The issue is that if that node goes down, the applications won't work. We've resolved this by adding the other 7 nodes' IPs to the Transport Client object.
My question is: is there any concept like a global node that internally connects to the cluster, to which I can point our applications, so that we don't have to restart all our applications whenever we add a new node to the cluster?
The Transport Client is itself a participant in the ES cluster. You can consider setting "client.transport.sniff" to true in the Transport Client, which will detect new nodes in the cluster.
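A minimal sketch of that, using the same pre-2.0 client API that appears elsewhere in this thread (the cluster name and node address are placeholders, not your actual values):

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// Placeholder cluster name and node address.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "my-cluster")
        // Ask the client to sniff the cluster state and add the other data nodes automatically.
        .put("client.transport.sniff", true)
        .build();

TransportClient client = new TransportClient(settings)
        // One or two known nodes are enough as a starting point; sniffing discovers the rest.
        .addTransportAddress(new InetSocketTransportAddress("192.168.0.10", 9300));

With sniffing enabled, nodes added to the cluster later are picked up by the client without a restart.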
My cluster is not on the same network, and there is a tunnel between my PC and the server.
I got this error:
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available
If you are using Elasticsearch's default multicast mechanism to discover nodes in the cluster, all cluster nodes must be located on the same subnet (this is true up to version 2.0).
In order to have your node discover the other nodes in the cluster, you may configure the discovery.zen.ping.unicast.hosts setting in [elasticsearch home]/config/elasticsearch.yml, as described here: https://www.elastic.co/guide/en/elasticsearch/reference/2.x/modules-network.html
discovery.zen.ping.unicast.hosts
In order to join a cluster, a node needs to know the hostname or IP address of at least some of the other nodes in the cluster. This setting provides the initial list of other nodes that this node will try to contact. Accepts IP addresses or hostnames.
Defaults to ["127.0.0.1", "[::1]"].
Hope it helps.
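For example, a minimal sketch of the relevant lines in elasticsearch.yml (the addresses are placeholders; list a few of your actual nodes, and the multicast line only applies before 2.0):

# elasticsearch.yml
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.10:9300", "192.168.1.11:9300"]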
I have tried to recreate your configuration in my environment and managed to get Elasticsearch working (I created an index). Here is how it goes:
Configure PuTTY tunneling for the Elasticsearch ports 9300 and 9200.
After configuring, you'll need to open the SSH connection and make sure it is connected.
You may look at the PuTTY SSH event log to confirm the tunnel is up.
The code looks like this:
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class App
{
    public static void main( String[] args ) throws Exception
    {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "my-cluster").build();

        // Connect through the local end of the SSH tunnel
        // (local port 9093 is forwarded to the remote node's transport port).
        TransportClient client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9093));

        // Create an index to verify the connection works end to end.
        CreateIndexResponse rs = client.admin().indices()
                .create(new CreateIndexRequest("tunnelingindex")).actionGet();
        System.out.println(rs.isAcknowledged());

        client.close();
    }
}
The code creates an index named tunnelingindex.
If it still does not work for you, I think you may have an issue that is not related to tunneling or Elasticsearch.
Hope I have managed to help.
You must set:
transport.publish_host: localhost
Details here:
Elasticsearch basic client connection
There might be an existing post covering what I am looking for. I have very limited time and got this requirement at the last moment. I need to push the code to QA and set up Elasticsearch with the admin team. Please respond as soon as possible, or share a link to a similar post.
I have a scenario with multiple Elasticsearch servers: one hosted in the USA, another in the UK, and one more in India, all within the same network (the company's network) and sharing the same cluster name. I can set multicast to false and use unicast to provide host and IP address information to form the topology.
Now, in my application I know that I have to use the Transport Client as follows:
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "myClusterName").build();
Client client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("host1", 9300))
.addTransportAddress(new InetSocketTransportAddress("host2", 9300));
Following are my concerns:
1) As per the above information, the admin team will just provide a single IP address, that of the load balancer, and the load balancer will manage request and response handling. I mean the load balancer is responsible for redirecting to the respective Elasticsearch server. My question is: is it okay to use the Transport Client to connect to that host and port number as follows?
new TransportClient(settings)
    .addTransportAddress(new InetSocketTransportAddress("loadbalancer-ip-address", "loadbalancer-port-number"));
If the load balancer redirects the request to an Elasticsearch server, what should the load balancer's configuration look like? Do we need to provide all the Elasticsearch host or IP address details to it, so that at any given point in time, if the master Elasticsearch server fails, it will pick another master?
2) What is the best configuration for 4 nodes or Elasticsearch servers in terms of shards, replicas, etc.?
Will each node have one primary shard and 1 replica, which can be configured in elasticsearch.yml?
Please reply as soon as possible.
Thanks in advance.
We are trying to set up a multi-region Cassandra cluster on EC2. Our configuration looks like this:
5 nodes each in us-east-1a, us-east-1b, us-east-1c, and us-west-1a. For this we have modified the cassandra-rackdc.properties file.
We are using GossipingPropertyFileSnitch and have modified the cassandra.yaml file accordingly.
We are using all 20 public IPs for the seeds configuration in the cassandra.yaml file.
We have commented out the listen_address and rpc_address properties so that Cassandra defaults to using InetAddress.getLocalHost().
We have uncommented the broadcast_address to use the public IP.
We have modified the agent's address.yaml file to use the public IP address for the stomp_interface and local_interface properties.
We are starting the nodes one by one with a 3-minute pause in between.
Issue:
When using OpsCenter, it shows only one node in the cluster.
The 'nodetool status' command also shows only one node.
When using a CQL statement, it does show all of its peers.
What mistake are we making?
I am doing something similar as a proof-of-concept. I have a working 2-region cluster. Here are the things that I did differently, from reading your question:
I used the Ec2MultiRegionSnitch, which is designed to handle the public and private IPs in EC2. In AWS, the Elastic IP is not bound to the interface on the instance, and this causes problems with cluster communications.
In cassandra.yaml, I left listen_address as the private IP.
Also, set rpc_address to 0.0.0.0.
Uncomment broadcast_address and set it to the public IP (like you did).
I set up dc_suffix in the cassandra-rackdc.properties file and uncommented prefer_local=true (inside the region, Cassandra will prefer to use private IPs).
I opened the security groups for Cassandra so that TCP ports 7000 and 7001 could communicate between the nodes in the 2 different regions. OpsCenter uses ports 61620 and 61621.
All nodes have the same cluster name.
Seed IPs are set to the public IPs. I didn't use all the nodes as seeds; that's not recommended.
Start the seeds first, followed by the other nodes.
This provided a working cluster. Now I am working on SSL node-to-node communication between regions.
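For comparison, here is a sketch of how the settings above end up on one node. All IPs, the dc_suffix value, and the seed list are placeholders; use your instances' actual private and public addresses.

# cassandra.yaml (per node)
endpoint_snitch: Ec2MultiRegionSnitch
listen_address: 10.0.1.15          # this instance's private IP
broadcast_address: 54.12.34.56     # this instance's public (Elastic) IP
rpc_address: 0.0.0.0
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "54.12.34.56,52.98.76.54"   # a few public seed IPs, not every node

# cassandra-rackdc.properties (per node)
dc_suffix=_us_east
prefer_local=true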