How can I access elasticsearch from a static ip? - elasticsearch

I created a virtual machine with Ubuntu using Hyper-V and set up Elasticsearch in it. How can I run Elasticsearch on a static IP?
sudo apt-get install openjdk-8-jdk
sudo apt-get install nginx
sudo systemctl enable nginx
sudo apt-get install apt-transport-https
sudo apt-get update
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-amd64.deb
Then I installed Elasticsearch from the downloaded .deb package.
sudo nano /etc/elasticsearch/elasticsearch.yml
I am able to run Elasticsearch on the local machine:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP
network.host: localhost
http.port: 9200
When I go to the browser and connect to localhost:9200, it successfully finds elasticsearch
I want to access elasticsearch through a static ip. For example: 192.168.1.150:9200
I tried
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP
network.host: 192.168.1.150
http.port: 9200
When I run 'sudo systemctl start elasticsearch.service', I get this error:
Job for elasticsearch.service failed because the control process exited with error code
See "systemctl status elasticsearch.service" and "journalctl -xe" for details
Is there a way to accomplish this?
I want to be able to run elasticsearch, logstash, and kibana on the static ip
journalctl -xe
- An ExecStart= process belonging to unit elasticsearch.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Nov 20 14:22:52 ateet-Virtual-Machine systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit elasticsearch.service has entered the 'failed' state with result 'exit-code'.
Nov 20 14:22:52 ateet-Virtual-Machine systemd[1]: Failed to start Elasticsearch.
-- Subject: A start job for unit elasticsearch.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit elasticsearch.service has finished with a failure.
--
-- The job identifier is 9061 and the job result is failed.
Nov 20 14:22:52 ateet-Virtual-Machine sudo[45417]: pam_unix(sudo:session): session closed for user root
elasticsearch.log
org.elasticsearch.transport.BindTransportException: Failed to bind to 192.168.1.150:[9300-9400]; nested: BindException[Cannot assign requested address];
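This BindException usually means the address in network.host is not actually assigned to any interface inside the VM (check with ip addr). A minimal sketch of an elasticsearch.yml that binds all interfaces instead, so the node is reachable on whatever static IP the VM holds; the single-node discovery setting is an assumption for a standalone VM:

```yaml
# Sketch for /etc/elasticsearch/elasticsearch.yml (ES 7.x).
# 0.0.0.0 binds every interface, including the VM's static IP.
network.host: 0.0.0.0
http.port: 9200
# Binding a non-loopback address enables Elasticsearch's production
# bootstrap checks; single-node discovery satisfies them on a lone VM.
discovery.type: single-node
```

With this, 192.168.1.150:9200 should work as long as that address really belongs to the VM's network adapter (an external or bridged switch in Hyper-V, not an internal-only one).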
Thanks

Related

Elasticsearch will not auto start on reboot on my ELK instance

I have an ELK setup on a single instance running Ubuntu 18.04. Every service (logstash, kibana, metricbeat) auto-starts upon reboot except elasticsearch. I have to issue the 'sudo service elasticsearch start' command after rebooting the instance.
I tried this command sudo update-rc.d elasticsearch enable but it did not help.
What needs to be done so that Elasticsearch restarts automatically?
On Ubuntu 18.04 (as on 16.04 and later), systemctl is the control command for systemd.
To enable a program as a service, use the command below:
systemctl enable elasticsearch.service
You can check whether a service is enabled with:
systemctl is-enabled elasticsearch.service

Can’t access Kibana after reload, and enabled & started kibana service. HTTP 302 Error

I installed Elasticsearch (7.4.1) on an Ubuntu 18.04.1 server in VirtualBox, and it is working fine. I can create mappings and indexes and use all the commands (XPUT, XPOST, XGET ...).
After installing Kibana (7.5.2) on the same server as ES, I updated kibana.yml with ONLY the following line:
server.host: "0.0.0.0"
The Kibana server is up and running, but it returns HTTP 302. Here are the commands I used to install and test this issue.
sudo apt-get install kibana
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service
sudo /bin/systemctl start kibana.service
After waiting a few minutes, I checked the Kibana log:
journalctl -u kibana.service
I could see that Kibana was up and running:
"tags":["info","http","server","Kibana"],"pid":3388,"message":"http
server running at http://0.0.0.0:5601"}
I called the URL with curl and it returned HTTP 302:
curl -I localhost:5601
curl -I 127.0.0.1:5601
curl -I 0.0.0.0:5601
If I use -IL, it returns HTTP 503:
curl -IL localhost:5601
Here is the Kibana log:
"head","statusCode":302,"req":{"url":"/","method":"head","headers":{"host":"localhost:5601","user-agent":"curl/7.58.0","
........
How can I fix this 302 error?
I tried restarting the services and the Ubuntu server, and reinstalling Kibana; same error.

Kibana stopped working and now server not getting ready although kibana.service starts up nicely

Without any major system update of my Ubuntu (4.4.0-142-generic #168-Ubuntu SMP), Kibana 7.2.0 stopped working. I am still able to start the service with sudo systemctl start kibana.service, and the corresponding status looks fine. There is only a warning, no error, so this does not seem to be the issue:
# sudo systemctl status kibana.service
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-07-10 09:43:49 CEST; 22min ago
Main PID: 14856 (node)
Tasks: 21
Memory: 583.2M
CPU: 1min 30.067s
CGroup: /system.slice/kibana.service
└─14856 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
Jul 10 09:56:36 srv003 kibana[14856]: {"type":"log","@timestamp":"2019-07-10T07:56:36Z","tags":["warning","task_manager"],"pid":14856,"message":"The task maps_telemetry \"Maps-maps_telemetry\" is not cancellable."}
Nevertheless, when I visit http://srv003:5601/ on my client machine, I keep seeing only (even after waiting 20 minutes):
Kibana server is not ready yet
On the server srv003 itself, I see
me@srv003:# curl -XGET http://localhost:5601/status -I
curl: (7) Failed to connect to localhost port 5601: Connection refused
This is strange since Kibana seems to be really listening on that port and the firewall is disabled for testing purposes:
root@srv003# sudo lsof -nP -i | grep 5601
node 14856 kibana 18u IPv4 115911041 0t0 TCP 10.0.0.72:5601 (LISTEN)
root@srv003# sudo ufw status verbose
Status: inactive
There is nothing suspicious in the log of kibana.service either:
root@srv003:/var/log# journalctl -u kibana.service | grep -A 99 "Jul 10 10:09:14"
Jul 10 10:09:14 srv003 systemd[1]: Started Kibana.
Jul 10 10:09:38 srv003 kibana[14856]: {"type":"log","@timestamp":"2019-07-10T08:09:38Z","tags":["warning","task_manager"],"pid":14856,"message":"The task maps_telemetry \"Maps-maps_telemetry\" is not cancellable."}
My Elasticsearch is still up and running. There is nothing interesting in the corresponding log files about Kibana:
root@srv003:/var/log# cat elasticsearch/elasticsearch.log | grep kibana
[2019-07-10T09:46:25,158][INFO ][o.e.c.m.MetaDataIndexTemplateService] [srv003] adding template [.kibana_task_manager] for index patterns [.kibana_task_manager]
[2019-07-10T09:47:32,955][INFO ][o.e.c.m.MetaDataCreateIndexService] [srv003] [.monitoring-kibana-7-2019.07.10] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0], mappings [_doc]
Now I am running a bit out of options, and I hope somebody can give me another hint.
Edit: I do not have any Kibana plugins installed.
Consulted sources:
How to fix "Kibana server is not ready yet" error when using AKS
Kibana service is running but can not access via browser to console
Why won't Kibana Node server start up?
https://discuss.elastic.co/t/failed-to-start-kibana-7-0-1/180259/3 - most promising thread, but nobody ever answered
https://discuss.elastic.co/t/kibana-server-is-not-ready-yet-issue-after-upgrade-to-6-5-0/157021
https://discuss.elastic.co/t/kibana-server-not-ready/162075
It looks like once Kibana enters the described undefined state, a simple reboot of the machine is necessary. That is of course not acceptable for a (virtual or physical) machine where other services are running.

Unable to setup external etcd cluster in Kubernetes v1.15 using kubeadm

I'm trying to set up a Kubernetes cluster with multiple masters and an external etcd cluster. I followed the steps described on kubernetes.io. I was able to create static pod manifest files on all 3 hosts in the /etc/kubernetes/manifests folder after executing Step 7.
After that, when I executed 'sudo kubeadm init', the initialization failed because of kubelet errors. I also verified the journalctl logs; the error indicates a misconfiguration of the cgroup driver, similar to this SO link.
I tried what was suggested in the SO link above but was not able to resolve it.
Please help me in resolving this issue.
For the installation of docker, kubeadm, kubectl, and kubelet, I followed the kubernetes.io site only.
Environment:
Cloud: AWS
EC2 instance OS: Ubuntu 18.04
Docker version: 18.09.7
Thanks
After searching a few links and doing a few trials, I was able to resolve this issue.
As given in the Container runtime setup, the Docker cgroup driver is systemd, but the default cgroup driver of the kubelet is cgroupfs. Since the kubelet cannot identify the cgroup driver automatically (as stated in the kubernetes.io docs), we have to provide the cgroup driver explicitly when running the kubelet, like below:
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --cgroup-driver=systemd --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet
Moreover, there is no need to run sudo kubeadm init: since we are providing --pod-manifest-path to the kubelet, it runs etcd as a static pod.
For debugging, the kubelet logs can be checked with the command below:
journalctl -u kubelet -r
Hope it helps. Thanks.
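For the opposite situation, where Docker rather than the kubelet is on the mismatched driver, the usual fix is to switch Docker to the systemd cgroup driver instead of changing the kubelet. A sketch, assuming Docker reads /etc/docker/daemon.json:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Restart Docker and the kubelet afterwards so both sides agree on the driver.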

NFS Vagrant on Fedora 22

I'm trying to run Vagrant using libvirt as my provider. Using rsync is unbearable since I'm working with a huge shared directory, but Vagrant does succeed when the NFS setting is commented out and the standard rsync config is set.
config.vm.synced_folder ".", "/vagrant", mount_options: ['dmode=777','fmode=777']
Vagrant hangs forever on this step after running vagrant up:
==> default: Mounting NFS shared folders...
In my Vagrantfile I have this uncommented and the rsync config commented out, which turns NFS on.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
When Vagrant is running, it echoes this to the terminal:
Redirecting to /bin/systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Redirecting to /bin/systemctl start nfs-server.service
Job for nfs-server.service failed. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
Results of systemctl status nfs-server.service:
dillon@localhost ~ $ systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2015-05-29 22:24:47 PDT; 22s ago
Process: 3044 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=1/FAILURE)
Process: 3040 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 3044 (code=exited, status=1/FAILURE)
May 29 22:24:47 localhost.sulfur systemd[1]: Starting NFS server and services...
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: unable to set any sockets for nfsd
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service: main process exited, code=exited, status=1/FAILURE
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS server and services.
May 29 22:24:47 localhost.sulfur systemd[1]: Unit nfs-server.service entered failed state.
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service failed.
The journalctl -xe log has a ton of stuff in it, so I won't post all of it here, but there are some things in bold red:
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.statd[3028]: failed to create RPC listeners, exiting
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Before I ran vagrant up, I looked to see whether any process was binding to port 98 with netstat -tulpn and did not see anything; in fact, while Vagrant was hanging, I ran netstat -tulpn again to see what was binding to port 98 and still saw nothing. (I checked for both the current user and root.)
UPDATE: Haven't gotten any responses.
I wasn't able to figure out the current issue. I tried using lxc instead, but it gets stuck on booting. I'd also prefer not to use VirtualBox, but the issue seems to lie within NFS, not the hypervisor. I'm going to try the rsync-auto feature Vagrant provides, but I'd prefer to get NFS working.
It looks like when using libvirt, the user is given control over nfs and rpcbind, and Vagrant doesn't even try to touch those things as I had assumed it did. Running these solved my issue:
service rpcbind start
service nfs stop
service nfs start
The systemd unit dependencies of nfs-server.service contain rpcbind.target but not rpcbind.service.
One simple solution is to create a file /etc/systemd/system/nfs-server.service containing:
.include /usr/lib/systemd/system/nfs-server.service
[Unit]
Requires=rpcbind.service
After=rpcbind.service
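On newer systemd versions the .include directive has been removed, so the same dependency can be expressed as a drop-in instead; a sketch using the standard drop-in path:

```ini
# /etc/systemd/system/nfs-server.service.d/rpcbind.conf
[Unit]
Requires=rpcbind.service
After=rpcbind.service
```

Run systemctl daemon-reload after creating the file so systemd picks up the drop-in.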
On CentOS 7, all I needed to do was install the missing rpcbind, like this:
yum -y install rpcbind
systemctl enable rpcbind
systemctl start rpcbind
systemctl restart nfs-server
Took me over an hour to find out and try this though :)
Michel
I've had issues with NFS mounts using both the libvirt and the VirtualBox provider on Fedora 22. After a lot of gnashing of teeth, I managed to figure out that it was a firewall issue. Fedora seems to ship with a firewalld service by default. Stopping that service - sudo systemctl stop firewalld - did the trick for me.
Of course, ideally you would configure this firewall rather than disable it entirely, but I don't know how to do that.
