I have updated my cluster name. I made the change in the elasticsearch.yml file, updating cluster.name from 'x' to 'y', and restarted the server. I also renamed the data folder from 'x' to 'y', and I can now see the data in the elasticsearch-head plugin.
I made the same change to the cluster name in my code, but I am getting a NoNodeAvailable error.
For reference: NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{...}{127.0.0.1}{127.0.0.1:9300}]]
I have followed everything and restarted my servers, but I still get NoNodeAvailable.
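For reference, this is roughly where the cluster name has to be set on the client side; a minimal sketch assuming the Java TransportClient implied by port 9300 in the error (6.x-style API; the version, class names, and values are assumptions, not the actual code):

```java
import java.net.InetAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class EsClientFactory {
    public static TransportClient build() throws Exception {
        // cluster.name must match the value now in elasticsearch.yml ('y');
        // a mismatch here is a classic cause of NoNodeAvailableException.
        Settings settings = Settings.builder()
                .put("cluster.name", "y")
                .build();

        return new PreBuiltTransportClient(settings)
                .addTransportAddress(
                        new TransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
    }
}
```

As far as I know, the transport client can also be told to skip the check with client.transport.ignore_cluster_name set to true, though matching the real cluster name is usually cleaner.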
Any help will be appreciated.
Thanks in advance.
I have an issue where I have two host dashboards for the same Elasticsearch server. Each dashboard has its own name and its own way of collecting data: one is connected to the installed datadog-agent, and the other is somehow connected to the Elasticsearch service directly.
The weird thing is that I cannot seem to find a way to turn off the agent connected directly to the ES service, other than turning off the Elasticsearch service completely.
I have tried deleting the datadog-agent completely. This stops the dashboard connected to it from receiving data (of course), but the other dashboard keeps receiving data somehow. I cannot find what is sending this data and therefore cannot stop it. We have multiple master and data nodes, and this is an issue for all of them. The ES version is 7.17.
Another of our clusters is running ES 6.8; we have not made the final monitoring configuration for that cluster yet, but for now it does not have this issue.
Just as extra information:
The dashboard connected to the agent has the same name as the host server, while the other only has the internal IP as its host name.
Does anyone have any idea what it is that is running and how to stop it? I have tried almost everything I could think of.
I finally found the reason: the datadog-agents on all master and data nodes were configured not to use the node name as the host name, and cluster stats were turned on in the Elasticsearch integration for Datadog. As a result, as long as even one datadog-agent in the cluster was running, data kept coming in to the dashboard that was not named correctly. Leaving the answer here in case anyone hits the same situation in the future.
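For anyone who finds this later, this is roughly the combination of settings involved in the Datadog Elasticsearch check (conf.d/elastic.d/conf.yaml on each node; the URL is illustrative, and the exact option names may vary between integration versions):

```yaml
init_config:

instances:
  - url: http://localhost:9200
    # Any single agent with this enabled reports stats for the whole cluster,
    # which is why data kept arriving as long as one agent was still running.
    cluster_stats: true
    # With this disabled, the reported host name is not taken from the ES node
    # name, so the metrics ended up under the internal-IP host instead.
    node_name_as_host: false
```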
I need guidance to reinstate my Elastic cluster.
I had bootstrapped the Elastic cluster and had created one superuser and two other system users.
Ingest, data, and gateway nodes had also joined the cluster.
Later, I wanted to rename the data nodes, but Google Cloud does not allow renaming, so I created new data nodes with proper names and then deleted the old data nodes.
I had not ingested any data so far; no index had been created.
Now, when I try to see any of the cluster details (say, license information), it does not authenticate any system user.
I tried re-creating the bootstrap password and setting it again, but that did not work either.
I'm seeing the exception below in the Elastic logs.
failed to retrieve password hash for reserved user [username]
org.elasticsearch.action.UnavailableShardsException: at least one primary shard for the index [.security-5] is unavailable
Please suggest whether there is a way to reinstate the existing configuration, or how I can bootstrap it again.
I had not ingested any data so far
If you haven't added any actual data yet, the simplest approach is probably to delete all the current data directories and start the cluster from scratch again.
Also, is this still Elasticsearch 5 (judging by .security-5)? That's a really old version, and for a proper reset some things work differently there than in current versions.
I had sudo access, so I created a system user using file-based auth, then re-created the other system users with the same passwords, and then reverted the access type back to the normal login.
That worked for me.
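A rough sketch of that sequence, in case it helps someone (user names, passwords, and paths are illustrative; the 7.x+ _security API is shown, and older releases use the _xpack/security path instead):

```sh
# 1. Create a temporary superuser in the file realm, which works even while
#    the .security index is unavailable.
sudo /usr/share/elasticsearch/bin/elasticsearch-users useradd tmp_admin -p 'TempPassw0rd!' -r superuser

# 2. Use that user to re-create / reset the passwords of the system users.
curl -u tmp_admin:'TempPassw0rd!' -X POST "http://localhost:9200/_security/user/elastic/_password" \
     -H 'Content-Type: application/json' -d '{"password": "NewElasticPassw0rd!"}'

# 3. Remove the temporary file-realm user once normal logins work again.
sudo /usr/share/elasticsearch/bin/elasticsearch-users userdel tmp_admin
```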
I am trying to set up Ganglia in order to monitor Spark on our cluster.
So far I have installed gmond and gmetad on my master server, and gmond on one of my slaves.
My problem is that I can only see one node in the Ganglia web frontend.
I have checked the /var/lib/ganglia/rrds folder, where the RRD files are being created, and I see that the data for both servers is written under the same folder name - ip-10-0-0-58.ec2.internal.
How can I instruct Ganglia to write its data to different folders, in order to differentiate between the nodes?
If any info is missing, I will gladly supply it.
Thanks for the help,
Yaron.
In the end, the problem was solved by removing the bind value from the udp_recv_channel section of gmond.conf on the master.
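For reference, the relevant block ends up looking roughly like this after the change (port 8649 is the Ganglia default; the address is illustrative):

```
udp_recv_channel {
  port = 8649
  /* bind = 10.0.0.58   <- this line was removed on the master */
}
```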
I have just spent the best part of 12 hours indexing 70 million documents into Elasticsearch (1.4) on a single-node, single-server setup on an EC2 Ubuntu 14.04 box. This completed successfully; however, before taking a snapshot of my server I thought it would be wise to rename the cluster to prevent it from accidentally joining production boxes in the future - what a mistake that was! After renaming it in the elasticsearch.yml file and restarting the ES service, my indexes have disappeared.
I saw the data was still present in the data dir under the old cluster name, so I tried stopping ES, moving the data manually in the filesystem, and then starting the ES service again, but still no luck. I then tried renaming back to the old cluster name and putting everything back in place, and still nothing. The data is still there, all 44 GB of it, but I have no idea how to get it back. I have spent the past 2 hours searching and all I can seem to find is advice on how to restore from a snapshot, which I don't have. Any advice would be hugely appreciated - I really hope I haven't lost a day's work. I will never rename a cluster again!
Thanks in advance.
I finally fixed this on my own: stopped the cluster, deleted the nodes directory that had been created under the new cluster name, copied my old nodes directory over, being careful to respect the old structure exactly, chowned the folder to elasticsearch just in case, started up the cluster, and breathed a huge sigh of relief to see 72 million documents!
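A rough sketch of those steps for anyone in the same spot (this assumes the default Debian/Ubuntu data path /var/lib/elasticsearch and illustrative cluster names old_cluster / new_cluster; in ES 1.x the data lives under <path.data>/<cluster.name>/nodes):

```sh
sudo service elasticsearch stop

# Remove the empty nodes directory created under the new cluster name,
# then copy the old one across, preserving the structure exactly.
sudo rm -rf /var/lib/elasticsearch/new_cluster/nodes
sudo cp -a /var/lib/elasticsearch/old_cluster/nodes /var/lib/elasticsearch/new_cluster/

# Make sure the elasticsearch user owns the copied files.
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch/new_cluster

sudo service elasticsearch start
```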
I installed the DataStax Community version on an EC2 server and it worked fine. After that I tried to add one more server; I see two nodes in the Nodes menu, but in the main dashboard I see the following error:
Error: Call to /Test_Cluster__No_AMI_Parameters/rc/dashboard_presets/ timed out.
One potential root cause I can see is the name of the cluster: I specified something else in cassandra.yaml, but it looks like OpsCenter is still using the original name. Any help would be greatly appreciated.
It was because the cluster name change wasn't made properly. I found it easier to change the cluster name before starting the Cassandra cluster. On top of this, only one instance of opscenterd needs to run in a single cluster. The datastax-agent needs to be running on all nodes in the cluster, but they all need to point to the same opscenterd (the change needs to be made in /var/lib/datastax-agent/conf/address.yaml).
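For reference, the agent-side change is roughly this (the OpsCenter host address is illustrative):

```yaml
# /var/lib/datastax-agent/conf/address.yaml on every node in the cluster:
# each datastax-agent should point at the one opscenterd instance.
stomp_interface: 10.0.0.10
```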