elasticsearch service fails to start after installation

I installed Elasticsearch using the deb package from here. However, the service fails to start and throws the error below. How can I fix this?
$ sudo service elasticsearch restart
* Stopping Elasticsearch Server [ OK ]
* Starting Elasticsearch Server
chown: invalid group: `elasticsearch:elasticsearch'
chown: invalid group: `elasticsearch:elasticsearch' [fail]

Maybe you don't have this user/group set up in your system:
For security reasons, running the server as an unprivileged user and group is strongly encouraged. Create a user and group for Elasticsearch:
groupadd elasticsearch
useradd -s /sbin/nologin -d /usr/local/elasticsearch -c "Elasticsearch User" -g elasticsearch elasticsearch
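Since the deb package normally creates this account itself, it can also help to check what already exists and re-own the package's directories before restarting. A minimal sketch, where the paths are assumptions based on the usual deb layout:
getent group elasticsearch || sudo groupadd elasticsearch
id elasticsearch 2>/dev/null || sudo useradd -s /sbin/nologin -g elasticsearch -d /usr/share/elasticsearch -c "Elasticsearch User" elasticsearch
# re-own the directories the init script chowns at startup
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch /var/lib/elasticsearch /var/log/elasticsearch
sudo service elasticsearch restart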

Related

Getting "Kibana server is not ready yet" when running from docker

I'm trying to run Elasticsearch and Kibana via Docker, and I'm getting errors with Kibana.
I'm using elasticsearch and kibana version 7.6.2
and Ubuntu 18.04.6 LTS
I run elasticsearch with the following command:
docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
And it seems that Elasticsearch is up (I can bulk documents and get information about the index from Python code).
I'm running kibana with the following commands:
docker network create elastic
docker run --net elastic -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://127.0.0.1:9200" docker.elastic.co/kibana/kibana:7.6.2
I see the following message in the web browser: Kibana server is not ready yet
And I see the following logs in the console:
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["info","savedobjects-service"],"pid":7,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","data"],"pid":7,"message":"Request error, retrying\nHEAD http://127.0.0.1:9200/.apm-agent-configuration => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","data"],"pid":7,"message":"Request error, retrying\nGET http://127.0.0.1:9200/_xpack => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","admin"],"pid":7,"message":"Request error, retrying\nGET http://127.0.0.1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"Unable to revive connection: http://127.0.0.1:9200/"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"No living connections"}
Could not create APM Agent configuration: No Living connections
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"Unable to revive connection: http://127.0.0.1:9200/"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"No living connections"}
How can I run Kibana via Docker?
Did you try enrolling Kibana to your Elasticsearch cluster?
The enrollment token is valid for 30 minutes. If you need to generate a new enrollment token, run the elasticsearch-create-enrollment-token tool on your existing node. This tool is available in the Elasticsearch bin directory of the Docker container.
For example, run the following command on the existing es01 node to generate an enrollment token for new nodes to be added:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
When you start Kibana, a unique link is output to your terminal.
To access Kibana, click the generated link in your terminal.
Then in your browser, paste the enrollment token that you copied when starting Elasticsearch and click the button to connect your Kibana instance with Elasticsearch.
Log in to Kibana as the elastic user with the password that was generated when you started Elasticsearch.
More details here
You've created a docker network for the Kibana container, but the Elastic container is not joined to it. Since you can access Elastic from your localhost:9200, there is no need to use the elastic network for the Kibana container.
Update the Kibana docker run command to docker run -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://host.docker.internal:9200" docker.elastic.co/kibana/kibana:7.6.2
This removes the join to the elastic network and updates the ELASTICSEARCH_HOSTS environment variable so that it points at the host machine's localhost rather than the container's own localhost.
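If host.docker.internal does not resolve in your environment (on Linux it typically needs a recent Docker and an explicit host-gateway mapping), an alternative sketch is to attach both containers to the elastic network and point Kibana at the Elasticsearch container by name; the container name es01 below is an assumption:
docker network create elastic
docker run --name es01 --net elastic -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run --net elastic -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://es01:9200" docker.elastic.co/kibana/kibana:7.6.2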

Correct steps to setup Ambari on a centos VM

I am using CentOS 7 with Ambari 2.1.1 to try to set up a single-node cluster on a VM. I want to do this to install vanilla Hadoop etc. instead of using a prepackaged VM with some modified version of Hadoop.
I am logged in as root. I have created a ssh key pair. I also ran:
"cat id_rsa.pub > authorized_keys"
"chmod 700 .ssh/"
"chmod 640 ./ssh/authorized_keys"
I have edited /etc/ssh/sshd_config to: permit empty passwords, allow root login and also to state where the authorized_keys file is.
Without a password I can run "ssh root@localhost" and log in fine.
I have ran "ambari-server setup" successfully and logged in at localhost:8080 with user: admin pass: admin.
In "Install Options" FQDN I typed "localhost.test" and have selected a copy of my private key for the Host Registration Information.
But no matter what I do, I am unable to get the components to install under the Confirmed Hosts step, and thus can't get any further.
Can someone please point out what I am missing here?
Thanks to Yusaku on HortonWorks forum for the help.
Ok I ran:
hostname -f
and got localhost
python -c 'import socket; print socket.getfqdn()'
and got localhost.localdomain
By entering localhost.localdomain into the FQDN I was able to get the install working.
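If you would rather register the host under a real FQDN than localhost.localdomain, one approach on CentOS 7 (the hostname ambari01.example.com is only an example) is:
# pick a fully qualified name and map it in /etc/hosts
sudo hostnamectl set-hostname ambari01.example.com
echo "127.0.0.1 ambari01.example.com ambari01" | sudo tee -a /etc/hosts
# confirm both commands now agree on the FQDN Ambari will see
hostname -f
python -c 'import socket; print socket.getfqdn()'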

logstash Authentication error with shield

I'm getting the following error while trying to output data to elasticsearch from logstash:
Failed to install template: [401]
{"error":"AuthenticationException[unable to authenticate user [es_admin] for REST request [/_template/logstash]]","status":401} {:level=>:error}
I have the configuration like this in logstash:
if [type]=="signup"{
elasticsearch {
protocol => "http"
user =>"*****"
password =>"*******"
document_type => "signup"
host => "localhost"
index => "signups"
}
}
I have tried adding a user with the following command:
esusers useradd <username> -p <password> -r logstash
I also tried giving the admin role, but Logstash does not work with the admin user either.
localhost:9200 asks for a password, and after I enter the password it works, but Logstash still gives the error.
I also had a similar issue. There is a known issue with Elasticsearch when the password contains a "#" symbol; when it does, this error can happen. See the link below:
https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/232
Also, some Elasticsearch documentation has instructions to include a "shield" configuration in elasticsearch.yml, but if you have only one Shield realm, this is not needed. I don't have a shield configuration in elasticsearch.yml.
I see that you tried with both the logstash and admin users but failed.
To try with an admin-privileged user:
Please make sure your /etc/elasticsearch/shield/roles.yml has below content for admin role:
# All cluster rights
# All operations on all indices
admin:
  cluster: all
  indices:
    '*':
      privileges: all
Then test something like below:
curl -u es_admin:es_admin_password localhost:9200/_cat/health
To make a user with the logstash role work, the logstash role needs to be tweaked in roles.yml. I configured Logstash to use an admin-privileged user to write to Elasticsearch. I hope this helps.
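For reference, a sketch of what such a tweaked role could look like; the role name, index pattern, and privileges below are assumptions chosen to cover the signups index and the failing template request, not an exact entry from the Shield documentation:
# append a role to Shield's roles.yml, then restart Elasticsearch
sudo tee -a /etc/elasticsearch/shield/roles.yml <<'EOF'
logstash_writer:
  cluster: indices:admin/template/get, indices:admin/template/put
  indices:
    'signups*':
      privileges: write, create_index
EOF
sudo service elasticsearch restart
A user could then be assigned that role with esusers useradd <username> -p <password> -r logstash_writer.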

install java6 and tomcat7 on Amazon EC2

Ubuntu 10.10 is running on Amazon EC2.
I installed Java using
sudo apt-get install openjdk-6-jdk
(more about openjdk6 https://launchpad.net/ubuntu/maverick/+package/openjdk-6-jdk)
did the following to install tomcat7
wget -c http://apache.petsads.us/tomcat/tomcat-7/v7.0.27/bin/apache-tomcat-7.0.27.tar.gz
sudo tar xvfz apache-tomcat-7.0.27.tar.gz -C /var
Then I see a folder called apache-tomcat-7.0.27 under /var
go to /var/apache-tomcat-7.0.27/bin and run:
sudo bash startup.sh
It looks like tomcat starts successfully:
ubuntu@ip-XX-XXX-XX-XXX:/var/apache-tomcat-7.0.27/bin$ sudo bash startup.sh
Using CATALINA_BASE: /var/apache-tomcat-7.0.27
Using CATALINA_HOME: /var/apache-tomcat-7.0.27
Using CATALINA_TMPDIR: /var/apache-tomcat-7.0.27/temp
Using JRE_HOME: /usr
Using CLASSPATH: /var/apache-tomcat-7.0.27/bin/bootstrap.jar:/var/apache-tomcat-7.0.27/bin/tomcat-juli.jar
I did a test by doing:
sudo fuser -v -n tcp 8080
then I got this result (looks like Tomcat is up and running):
0 USER PID ACCESS COMMAND
8080/tcp: root 1234 F.... java
But if I type the address of my server into a browser, I can't see the default Tomcat page...
Am I missing anything? I am open to any advice.
I followed some of the steps (not all of them) in http://www.excelsior-usa.com/articles/tomcat-amazon-ec2-java-stack.html#tomcat
The solution to this problem:
The instance is not owned by me.
I asked my friend to change the rule for port 8080 in the firewall configuration via his AWS Management Console.
Then it worked.
Without knowing exactly what your setup is, my first guess is that you need to open port 8080 on the security group for that instance. Go to Security Groups and either open it to 0.0.0.0/0 or your specific IP (this depends on your security requirements for the server).
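If you prefer the command line to the console, the same rule can be added with the AWS CLI; a sketch where the security group ID is a placeholder you would replace with your instance's group:
# allow inbound TCP 8080 from anywhere (tighten the CIDR to your IP if needed)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 0.0.0.0/0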

Greenplum gpseginstall asking for "cluster password access"

I'm installing greenplum database on my desktop computer following the official installation guide. When I'm executing
# gpseginstall -f hostfile_exkeys -u gpadmin -p P@$$word
it asks me to provide cluster password access:
[root@sm403-08 greenplum-db-4.2.1.0]# gpseginstall -f hostfile_exkeys -uyang -par0306
20120506:05:59:33:012887 gpseginstall:sm403-08:root-[INFO]:-Installation Info:
link_name None
binary_path /usr/local/greenplum-db-4.2.1.0
binary_dir_location /usr/local
binary_dir_name greenplum-db-4.2.1.0
20120506:05:59:33:012887 gpseginstall:sm403-08:root-[INFO]:-check cluster password access
*** Enter password for localhost-2:
*** Enter password for localhost-2:
*** Enter password for localhost-2:
*** Enter password for localhost-2:
*** Enter password for localhost-2:
This is what my hostfile_exkeys file looks like:
localhost
localhost-1
localhost-2
since I only have one machine.
A similar post on the web (http://www.topix.com/forum/com/greenplum/TSDQHMJ6M7I9D0A44) says:
"I had the same error and I discovered that it was because I had set sshd to refuse root login. You must edit your sshd configuration and permit root login for gpseginstall to work. Hope that helps!"
But I have tried to modify my /etc/ssh/sshd_config file to let it permit root login:
# Authentication:
#LoginGraceTime 2m
PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
and restarted sshd:
Stopping sshd: [FAILED]
Starting sshd: [ OK ]
but nothing works; the gpseginstall program is still asking for a password.
I have tried all the passwords I can think of (root, gpadmin, my own user's password), but none of them work. What am I expected to do to get it to work?
Update: It seems that the problem lies in installing the Greenplum community edition on a single node. Is there anyone who has some experience with this?
Thanks in advance!
It seems that since I'm installing the Greenplum database on a single node, I don't have to do the gpseginstall step at all. That step is used to install Greenplum on all segment hosts from the master host.
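For a single-node setup, the remaining steps are roughly to exchange SSH keys with the local hostnames and then initialize the system as gpadmin. A sketch, assuming the default /usr/local install prefix and a gpinitsystem_config file you have already prepared:
su - gpadmin
source /usr/local/greenplum-db/greenplum_path.sh
gpssh-exkeys -f hostfile_exkeys
gpinitsystem -c gpinitsystem_config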
You need to enable password auth.
sudo nano /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes
Then run service sshd restart.
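To confirm the change took effect before rerunning gpseginstall, something like the following should work (sshd -T prints the effective configuration):
sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication'
ssh root@localhost    # should now prompt for and accept the root password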
I will be glad if it helps someone who is trying to install greenplum in cluster mode.
# installing greenplum cluster steps
# first add entries for all servers and interfaces in your /etc/hosts
# gpdb01- master
# gpdb02 - secondary master
# gpdb03 , gpdb04 - data nodes
#setup ssh between all machines
ssh-keygen
ssh-copy-id gpdb02
ssh-copy-id gpdb03
ssh-copy-id gpdb04
# also add entries for the interfaces
vi /etc/hosts
172.12.13.14 gpdb01
172.12.13.14 gpdb01-1
172.12.13.14 gpdb01-2
172.12.13.15 gpdb02
172.12.13.15 gpdb02-1
172.12.13.15 gpdb02-2
172.12.13.16 gpdb03
172.12.13.16 gpdb03-1
172.12.13.16 gpdb03-2
172.12.13.17 gpdb04
172.12.13.17 gpdb04-1
172.12.13.17 gpdb04-2
# enable RootLogin and PasswordAuthentication on all servers
vi /etc/ssh/sshd_config
service sshd restart
# create your hostfile (hostfile_exkeys) listing every host and interface
gpdb01
gpdb01-1
gpdb01-2
gpdb02
gpdb02-1
gpdb02-2
gpdb03
gpdb03-1
gpdb03-2
gpdb04
gpdb04-1
gpdb04-2
# run the gpseg installer
gpseginstall -f hostfile_exkeys -u gpadmin -p P@$$word
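After gpseginstall completes, it may be worth verifying the install and passwordless SSH from the master as gpadmin; a sketch assuming the default /usr/local prefix:
su - gpadmin
source /usr/local/greenplum-db/greenplum_path.sh
gpssh -f hostfile_exkeys -e 'ls -ld /usr/local/greenplum-db*'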
