I tried running Elasticsearch on Ubuntu 12.04. The installation went fine and it seemed to run OK, but when I check sudo /etc/init.d/elasticsearch status I see the message that elasticsearch is not running. I also tried opening localhost:9200 in the browser, which failed too.
Help me please.
It will not start automatically after installation, for good reason: you don't want it to accidentally join a cluster configured to use multicast discovery. See my post here for the basics of configuring Elasticsearch.
In addition to that post, also make sure you set the following two options in /etc/elasticsearch/elasticsearch.yml:
cluster.name: some-other-name
discovery.zen.ping.multicast.enabled: false
After you have done that, start it by running:
sudo service elasticsearch start
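To confirm it came up, a quick check is to hit the REST root endpoint:

curl http://localhost:9200

A short JSON response with the node and cluster name means the service is up.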
You should almost always disable multicast. In a local testing environment you only have one node, so you don't need it, and in production it's just bad practice, since nodes accidentally joining the cluster can break things (trust me, I've had this happen, and it's a headache).
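If you later do want nodes to find each other without multicast, a minimal unicast setup in elasticsearch.yml looks like this (the hostnames here are hypothetical, not from the original post):

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-node-1", "es-node-2"]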
Did you try sudo /etc/init.d/elasticsearch start?
I followed the first steps to install Flink.
I can start the cluster without any problem:
$ start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host DESKTOP-....
Starting taskexecutor daemon on host DESKTOP-....
But I don't get any output from
$ ps aux | grep flink
I also cannot access the dashboard via localhost:8081.
There is an older post about these issues, but the solution didn't work for me, since the conf files it describes apparently no longer exist.
My JAVA_HOME is set as C:\Progra~1\Java\jdk1.8.0_311 to avoid issues with the space in Program Files.
Can you check the logs in the /logs folder? I suspect that C:\Program Files\ could still cause issues because of the space there.
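In a standard Flink distribution the daemon logs usually land in the log/ directory; the exact file names depend on your user and host, but something like this should show the most recent errors:

$ tail -n 50 $FLINK_HOME/log/flink-*-standalonesession-*.log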
Go to the downloaded Flink folder and try:
$ ./bin/start-cluster.sh
Then run the example job:
$ ./bin/flink run examples/streaming/WordCount.jar
If that runs without issues, go to localhost:8081.
This still seems to be problematic. I tried running it from Windows Subsystem for Linux (WSL).
I have the following versions: Java 11.0.16 and Flink 1.15.2.
sudo apt-get update
sudo apt install openjdk-11-jre-headless
export FLINK_HOME=/mnt/c/Projects/Apache/flink-1.15.2
I set the following in flink-conf.yaml:
rest.port: 8081
rest.address: localhost
rest.bind-address: 0.0.0.0
Changing the bind address from localhost to 0.0.0.0 seems to have fixed the problem.
$FLINK_HOME/bin/start-cluster.sh
Now I can access the Flink Web Dashboard.
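As a quick sanity check from the WSL side, you can query the /overview endpoint of Flink's standard REST API:

$ curl http://localhost:8081/overview

If that returns JSON, the dashboard should be reachable from the Windows browser too.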
I tried two hosters: Massivegrid and Unispace.
On both I created a new environment.
Then I chose the Docker install and selected Elasticsearch.
After it finished, I took the IPv4 address from the URL and pasted it into the browser with port 9200, and nothing came up.
So I went into the terminal, and netstat told me there was no Elasticsearch service running. In fact, nothing related to Elasticsearch was installed. No /etc/elasticsearch.
I used this tutorial https://docs.jelastic.com/elasticsearch
I failed at the "Connection via Public IP" part.
I'm wondering what I am missing.
Edit: I created another new environment and installed the Elasticsearch 6.8.1 image, and this works as per the tutorial. The newer range from 7.0 onwards is all blank inside: no Java, no Elasticsearch, etc.
To fix the problem with version 7.2, the following steps can be taken:
Add the following line to the /usr/local/bin/docker-entrypoint.sh file:
ulimit -n 65536
And in the elasticsearch.yml file (/usr/share/elasticsearch/config/elasticsearch.yml) please add:
cluster.initial master_nodes: node-1
After that, restart the container.
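For example, from the host (the container name here is hypothetical; use docker ps to find yours):

docker restart elasticsearch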
The correct setting in elasticsearch.yml should look like the following (note the underscore):
cluster.initial_master_nodes: node-1
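As a side note, on a single-node 7.x container you can achieve the same by using single-node discovery mode instead of listing initial master nodes (this is a general 7.x option, not part of the original answer):

discovery.type: single-node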
ElasticSearch 6.2.2 on a Linux Ubuntu 16.04.3 VM in Azure. It had been up and running fine, and then after I rebooted the machine a few days ago I could not get the ElasticSearch service to start at all. That issue was shared and solved here (ElasticSearch Fails to Start on Ubuntu 16.04.3 - status=1 Failure) by increasing the heap size in the jvm.options file.
Now I have the ElasticSearch service running, but I cannot ping it at all. I have tried to ping it from both inside the VM (as localhost:9200) and from outside (similar to how I make calls to our other ES boxes, and do so successfully), but I'm told "Could Not Get Any Response" (Postman's wording).
The part that makes this impossible to diagnose is that nothing is getting written to the ElasticSearch logs! The last time anything was written to any log at /var/log/elasticsearch was before I rebooted the machine a couple of days ago.
I have checked the settings in elasticsearch.yml, and all seems to be in line with the elasticsearch.yml on a different box of ours in a different location, which runs another ElasticSearch instance without any issue.
EDIT: per request - the elasticsearch.yml file from the box that is NOT working correctly is here: http://s000.tinyupload.com/index.php?file_id=72318548245343478927 For comparison purposes, the elasticsearch.yml file from the box that IS working correctly is here: http://s000.tinyupload.com/index.php?file_id=20127693354114612595 Please note that the one that IS working correctly has 3 nodes whereas the one that is not working has only one node, so there will be some slight differences between the yml files because of this.
Check if path.logs: /var/log/elasticsearch is defined in elasticsearch.yml. Add this line if not present.
Check whether the user has permission to write into /var/log/elasticsearch. Change the permission of the files. sudo chmod 777 /var/log/elasticsearch/* and sudo chmod 777 /var/log/elasticsearch
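A less permissive alternative to chmod 777, assuming the service runs as the elasticsearch user (the default for the Debian/Ubuntu package), is to hand the directory back to that user:

sudo chown -R elasticsearch:elasticsearch /var/log/elasticsearch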
Open /etc/init.d/elasticsearch and check whether ES_PATH_CONF is defined as ES_PATH_CONF="/etc/elasticsearch"
You may try commenting out the following lines in log4j2.properties under /etc/elasticsearch:
logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level = info
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref = audit_rolling
logger.xpack_security_audit_logfile.additivity = false
Use netstat -nultp | grep 9200 to check whether anything is listening on the port.
The issue was with a line in the elasticsearch.yml file, which read:
"10.5.11.6""
That extra quotation mark at the end is what was causing the entire problem.
For anyone this can benefit: the elasticsearch.yml file is extremely sensitive when it comes to spacing, punctuation, and case; even an extra space somewhere can cause the entire service to crash. Be very diligent with your edits to elasticsearch.yml.
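One cheap safeguard is to run the file through a YAML parser before restarting the service, for example (assuming Python with PyYAML is available):

python -c "import yaml; yaml.safe_load(open('/etc/elasticsearch/elasticsearch.yml'))"

A traceback points you at the offending line; silence means the YAML at least parses.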
There are a few ways to debug:
1. Check if you have the ES service running on that particular host via `ps -ef | grep elastic`
2. Look at which port ES is listening on (or not) via netstat
3. It might be the case that ES is running but binding not to localhost but to the instance IP; you should get a hint from elasticsearch.yml
4. Make sure your /usr/share/elasticsearch/elasticsearch.yml is the file that is being picked up, and not the default at /etc/elasticsearch.yml
5. Configure the log location in elasticsearch.yml so you know where to look
Hope this helps.
I keep trying again and again with all required steps completed, but during cluster installation, when the selected parcels are being installed, every host always shows bad health. The setup never completes fully.
I am installing CM 5.5 on CentOS 6.7 using VirtualBox.
The Error
Host is in bad health cm.feuni.edu
Host is in bad health dn1.feuni.edu
Host is in bad health dn2.feuni.edu
Host is in bad health nn1.feuni.edu
Host is in bad health nn2.feuni.edu
Host is in bad health rm.feuni.edu
The above errors are shown at step 6, where the setup says
The selected parcels are being downloaded and installed on all the hosts in the cluster
In the previous step 5, all hosts had completed their heartbeat checks at the end.
Memory distribution:
cm: 8 GB
all others: 1 GB
I could not find a proper answer anywhere else. What could be the reason for the bad health?
I don't know if it will help you...
For me, after a few days of struggling with it,
I found the log files (at ),
which mentioned a mismatch of the GUID,
so I uninstalled everything from both machines (using the script they provide, /usr/share/cmf/uninstall-cloudera-manager.sh, then yum remove 'cloudera-manager-*', and deletion of every directory related to Cloudera I found...)
and then removed the guid file:
rm /var/lib/cloudera-scm-agent/cm_guid
Afterwards I re-installed everything, and that fixed that issue for me...
I read online that there can be issues with the hostname and things like that, but I guess that if you get to this part of the installation, you have already fixed all the domain/FQDN/hostname/hosts issues.
It saddens me that there is no real manual/FAQ for this product... :(
Good luck!
I faced the same problem. This is my solution:
First I edited config.ini:
$ nano /etc/cloudera-scm-agent/config.ini
so that the hostname was the same as what the command $ hostname returned.
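For illustration, the relevant part of config.ini might look like this (the hostname is taken from the question above and will differ on your machines):

[General]
server_host=cm.feuni.edu
listening_hostname=cm.feuni.edu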
Then I restarted the Cloudera agent and server:
$ service cloudera-scm-agent restart
$ service cloudera-scm-server restart
Then in Cloudera Manager I deleted the cluster and added it again. The wizard then ran normally.
I have been following the official tutorial to try to set up a single-node Mesosphere cluster:
http://mesosphere.com/docs/getting-started/developer/single-node-install/
I followed all the commands without any issues, and I also added ports 5050 and 8080 to my security group. When I try to access the console for Mesos/Marathon, I get an "Internet Explorer cannot display the webpage" message.
They also recommend checking it the following way:
MASTER=$(mesos-resolve `cat /etc/mesos/zk`)
mesos-execute --master=$MASTER --name="cluster-test" --command="sleep 5"
But that comes up with an error:
WARNING: Logging before InitGoogleLogging() is written to STDERR
F0106 17:03:08.126703 20993 process.cpp:1561] Failed to initialize, gethostbyname2: Unknown host
*** Check failure stack trace: ***
I am not really sure how to troubleshoot this either, and I could not find many tutorials on installing Mesos on Ubuntu.
I checked the contents of the zk file, seems to be the default value.
$ cat /etc/mesos/zk
zk://localhost:2181/mesos
I would really appreciate any clues on how to go about this one.
Edit: the process is definitely running too, just an FYI:
root 31545 8.5 5.9 187464 35604 ? Ssl 17:28 0:00 /usr/local/sbin/mesos-slave --master=zk://localhost:2181/mesos --log_dir=/var/log/mesos
root 31563 28.5 2.1 116304 12856 ? Rs 17:28 0:00 /usr/local/sbin/mesos-master --zk=zk://localhost:2181/mesos --port=5050 --log_dir=/var/log/mesos --quorum=1 --wo
Mesos uses gethostbyname2 to resolve hostnames to IPs. The first thing I would recommend is to try "ping localhost" and "ping hostname", and verify that there are no strange settings in /etc/hosts. If you're building a multi-node cluster, I'd recommend that the hostname map to the public IP address (not 127.0.x.1).
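For instance, Ubuntu's default /etc/hosts often maps the hostname to 127.0.1.1, which is exactly the kind of entry that trips this up; for a multi-node setup you'd want something like the following (addresses and names are hypothetical):

127.0.0.1  localhost
10.0.0.5   mesos-master-1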
If that doesn't help, you can try setting the --ip and --hostname flags when starting mesos-master and mesos-slave, to bypass the gethostbyname2 resolution. These can also be set by writing to the file-based parameters, e.g. /etc/mesos/mesos-master/ip
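A sketch of the file-based variant, using the path from above (the IP is hypothetical):

echo 10.0.0.5 | sudo tee /etc/mesos/mesos-master/ip
sudo service mesos-master restart

On these packages, the init wrapper reads such files and passes their contents as the corresponding command-line flags (here --ip) on the next start.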
For additional troubleshooting, try running wget http://localhost:5050 (or curl -L) from the mesos master, to verify that it is locally visible. Also try wget http://<public_ip>:5050 to verify that the web server is up and serving to the public IP. Depending on how your (EC2?) node is setup, you may need to expose/forward the port, or connect to a VPN.
Thanks Adam. I ran the wget and curl commands, and nothing was actually listening on port 8080 or 5050. I did open those ports in EC2. A simple reboot did the trick, however: once I SSH'ed into the EC2 instance after the reboot, both Mesos and Marathon were running, and both ports now show up when I run netstat -ntln.