I installed Elasticsearch on AWS Ubuntu 14.04 following the install documentation, and I changed a setting in elasticsearch.yml from
#network.bind_host: 192.168.0.1
to
network.bind_host: localhost
The documentation said that binding to localhost is good for security.
When I restart Elasticsearch:
sudo service elasticsearch restart
* Stopping Elasticsearch Server [ OK ]
* Starting Elasticsearch Server
and then send a curl request:
sudo curl -X GET 'http://localhost:9200'
curl: (7) Failed to connect to localhost port 9200: Connection refused
So I changed network.bind_host:
network.bind_host: 0.0.0.0
and now I get a response:
sudo curl -X GET 'http://localhost:9200'
{
"status" : 200,
"name" : "****",
"cluster_name" : "elasticsearch_***",
"version" : {
"number" : "1.7.2",
"build_hash" : "e12r1r33r3fs593f7202acbd816e8ec",
"build_timestamp" : "2015-09-14T09:49:53Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
But I think 0.0.0.0 is dangerous once my web site is in production.
Can somebody help me?
My guess is that you should set network.host to localhost or 127.0.0.1 in your elasticsearch.yml:
network.host: localhost <-- try 127.0.0.1 as well
Normally network.bind_host takes the value of network.host by default. You could have a look at the networking module documentation for more details.
And just in case, try adding http.port as well, so you can be sure which port ES is listening on. It could look something like this:
http.port: 9200
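If it still refuses connections after that, a quick sanity check might be (a sketch; it assumes the sysvinit service wrapper from your output and the netstat tool, nothing Elasticsearch-specific):

sudo service elasticsearch restart
sudo netstat -ntlp | grep 9200    # should show 127.0.0.1:9200 LISTEN when bound to localhost
curl http://127.0.0.1:9200        # should return the JSON banner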
Hope it helps!
First I created a single-node ELK stack, using this config in my elasticsearch.yml:
# sed '/^#/d' /etc/elasticsearch/elasticsearch.yml
node.name: "elk01"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
xpack.security.enabled: true
discovery.type: single-node
Then I ran this command to auto-generate passwords for the built-in users:
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
and it was fine, everything worked. But I want an ELK cluster, so I created a new server and changed the configs:
elk01
# sed '/^#/d' /etc/elasticsearch/elasticsearch.yml
cluster.name: "elk-testcluster"
node.name: "elk01"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["10.60.201.31", "10.60.201.32"]
cluster.initial_master_nodes: ["10.60.201.31"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
elk02
# sed '/^#/d' /etc/elasticsearch/elasticsearch.yml
cluster.name: "elk-testcluster"
node.name: "elk02"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["10.60.201.31", "10.60.201.32"]
cluster.initial_master_nodes: ["10.60.201.31"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
Right now, when I curl with the username/password, I can reach elk01 but not elk02:
# curl -XGET "10.60.201.31:9200" -u elastic:passcreatedonelk01
{
"name" : "elk01",
"cluster_name" : "elk-testcluster",
"cluster_uuid" : "7513Zor7S3SHqVFzs0hEMQ",
"version" : {
"number" : "7.17.4",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "79878662c54c886ae89206c685d9f1051a9d6411",
"build_date" : "2022-05-18T18:04:20.964345128Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
# curl -XGET "10.60.201.32:9200" -u elastic:passcreatedonelk01
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [elastic] for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"unable to authenticate user [elastic] for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
When I run elasticsearch-setup-passwords again on elk02, I get an error:
sudo /usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto
Failed to determine the health of the cluster running at http://10.60.201.32:9200
Unexpected response code [503] from calling GET http://10.60.201.32:9200/_cluster/health?pretty
Cause: master_not_discovered_exception
It is recommended that you resolve the issues with your cluster before running elasticsearch-setup-passwords.
It is very likely that the password changes will fail when run against an unhealthy cluster.
Do you want to continue with the password setup process [y/N]y
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Unexpected response code [503] from calling PUT http://10.60.201.32:9200/_security/user/apm_system/_password?pretty
Cause: Cluster state has not been recovered yet, cannot write to the [null] index
Possible next steps:
* Try running this tool again.
* Try running with the --verbose parameter for additional messages.
* Check the elasticsearch logs for additional error details.
* Use the change password API manually.
ERROR: Failed to set password for user [apm_system].
When forming a cluster, aren't the built-in passwords shared across the nodes? Or is the problem that I ran elasticsearch-setup-passwords before forming the cluster?
Once you enable SSL on the transport layer, you need to add a certificate and key to each node. You can follow these instructions:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html
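The linked guide generates a CA and a shared certificate with elasticsearch-certutil. As a rough sketch of those steps (file names are the tool's documented defaults; paths assume the deb package layout from your output):

sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# copy the resulting elastic-certificates.p12 into /etc/elasticsearch/ on both elk01 and elk02,
# then add to each node's elasticsearch.yml:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

Once both nodes restart and elk02 actually joins the cluster, the passwords created on elk01 should work on elk02 too, since the security index is replicated cluster-wide; there should be no need to run elasticsearch-setup-passwords a second time.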
I have installed Elasticsearch and Kibana on the same CentOS server. When I run netstat -nlp | grep :5601 I get the result below:
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 27244/node
But I still can't access Kibana remotely. When I open "http://my_server_ip:5601" in a browser on my Windows client, I get something like this:
This page cannot be accessed
...(omitted)
ERR_CONNECTION_RESET
However, I can access Elasticsearch from the Windows client's browser using "http://my_server_ip:9200":
{
"name" : "VM-251-156-centos",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "FsL8YI1mQAGqx3R0kffxbw",
"version" : {
"number" : "7.10.2",
"build_flavor" : "oss",
"build_type" : "rpm",
"build_hash" : "747e1cc71def077253878a59143c1f785afa92b9",
"build_date" : "2021-01-13T00:42:12.435326Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
I have searched all day long; almost all the answers suggest editing kibana.yml and changing server.host to 0.0.0.0, but that doesn't work in my case.
My kibana.yml looks like this (only uncommented lines shown):
server.port: 5601
server.host: "0.0.0.0"
server.name: "http://kibana.example.com"
elasticsearch.hosts: ["http://my_server_ip:9200"]
And I have checked the firewall on the CentOS server using "firewall-cmd --state":
not running
And I have also confirmed that Kibana is really running on the CentOS server using "sudo systemctl status kibana":
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-01-26 13:57:44 CST; 19h ago
Main PID: 27244 (node)
Tasks: 11
Memory: 80.6M
CGroup: /system.slice/kibana.service
└─27244 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist
Any suggestions are appreciated.
Solved by changing "server.port: 5601" to "server.port: 5602" in the kibana.yml.
It turns out the problem was that port 5601 on the CentOS server was blocked by the server provider for security reasons.
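For reference, the fix plus a quick check (the curl verification is my assumption of how you'd confirm it, not from the original post):

# in /etc/kibana/kibana.yml:
server.port: 5602

sudo systemctl restart kibana
curl -I http://my_server_ip:5602    # should now return HTTP response headers instead of a connection reset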
I am running Elasticsearch version 2.3.1 on Ubuntu Server 16.04.
I can access the Elasticsearch API locally on the default host, as shown below:
curl -X GET 'http://localhost:9200'
{
"name" : "oxo-cluster-node",
"cluster_name" : "oxo-elastic-cluster",
"version" : {
"number" : "2.3.1",
"build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
"build_timestamp" : "2016-04-04T12:25:05Z",
"build_snapshot" : false,
"lucene_version" : "5.5.0"
},
"tagline" : "You Know, for Search"
}
I need to be able to access Elasticsearch via my domain name or IP address.
I've tried adding the setting http.publish_host: my.domain to the config file, but the server refuses client HTTP connections. I am running the service on the default port 9200.
When I run
curl -X GET 'http://my.domain:9200'
the result is
curl: (7) Failed to connect to my.domain port 9200: Connection refused
My domain (my.domain) is publicly accessible on the internet, and port 9200 is configured to accept connections from anywhere.
What am I missing?
First off, exposing an Elasticsearch node directly to the internet without protections in front of it is usually bad, bad news. Don't do it - especially older versions. You're going to end up with security problems in a hurry. I recommend using something like nginx to do basic authentication + HTTPS, and then to proxy_pass it to your locally-bound Elasticsearch instance. This gives you an encrypted and authenticated public connection to your server.
That said, see the networking config documentation. You want either network.host or network.bind_host. network.publish_host is the name that the node advertises to other nodes so that they can connect for clustering. You will also want to make sure that your firewall (iptables or similar) is set up to allow traffic on 9200, and that you don't have any upstream networking security preventing access to the machine (such as AWS security groups or DigitalOcean's networking firewalls).
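Something like this nginx server block is a minimal sketch of that setup (the certificate paths and the htpasswd file are assumptions; create the latter with the htpasswd tool):

server {
    listen 443 ssl;
    server_name my.domain;

    ssl_certificate     /etc/ssl/certs/my.domain.crt;      # hypothetical paths
    ssl_certificate_key /etc/ssl/private/my.domain.key;

    auth_basic           "Elasticsearch";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://127.0.0.1:9200;   # ES stays bound to localhost
    }
}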
I'm currently running Elasticsearch (ES) 5.5 inside a Docker container (see below):
curl -XGET 'localhost:9200'
{
"name" : "THbbezM",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "CtYdgNUzQrS5YRTRT7xNJw",
"version" : {
"number" : "5.5.0",
"build_hash" : "260387d",
"build_date" : "2017-06-30T23:16:05.735Z",
"build_snapshot" : false,
"lucene_version" : "6.6.0"
},
"tagline" : "You Know, for Search"
}
I've changed the elasticsearch.yml file to look like this:
http.host: 0.0.0.0
# Uncomment the following lines for a production cluster deployment
#transport.host: 0.0.0.0
#discovery.zen.minimum_master_nodes: 1
network.host: 0.0.0.0
http.port: 9200
I can currently get my indexes through curl -XGET commands. The thing is, I want to make HTTP requests to this ES instance from my machine (Mac OS X) using its IP address instead of 'localhost:9200'.
So, what I've tried already:
1) I tried it in Postman and got the following response:
Could not get any response
There was an error connecting to X.X.X.X:9200/.
Why this might have happened:
The server couldn't send a response:
Ensure that the backend is working properly
Self-signed SSL certificates are being blocked:
Fix this by turning off 'SSL certificate verification' in Settings > General
Client certificates are required for this server:
Fix this by adding client certificates in Settings > Certificates
Request timeout:
Change request timeout in Settings > General
2) I also tried the Sense plugin for Chrome:
Request failed to get to the server (status code: 0):
3) Running curl from my machine's terminal doesn't work either.
What am I missing here?
Docker for Mac provides a DNS name you can use:
docker.for.mac.localhost
You should use the value specified under container_name in the YML file to connect to your cluster. Example:
services:
  elasticsearch:
    container_name: 'example_elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.6.1'
In this case, Elasticsearch is located at http://example_elasticsearch:9200. Note that example_elasticsearch is the name of the container and may be used the same way as a machine name or hostname.
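One caveat (my assumption about the setup, not from the compose file above): the container name only resolves from other containers on the same Docker network. To reach ES from the Mac itself, you would publish the port:

services:
  elasticsearch:
    container_name: 'example_elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.6.1'
    ports:
      - '9200:9200'    # published, so the host can use http://localhost:9200

# from another container on the same network:
#   curl http://example_elasticsearch:9200
# from the Mac host, via the published port:
#   curl http://localhost:9200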
Context
I have just started using Elasticsearch. I installed it on a server; I can curl and telnet to port 9200 on the local machine (the server), but I cannot connect to it from another machine.
I disabled the firewall on both the server and the client, as solutions found on the internet suggested, and I also tried the suggestions in the link below, but I couldn't get it working.
https://discuss.elastic.co/t/accessing-port-9200-remotely/21840
Question
Can someone help me get this working? Thanks in advance.
Since you just installed Elasticsearch, I suppose you're using ES 2.0 or 2.1. You need to know that since the 2.0 release, Elasticsearch binds to localhost by default (as a security measure to prevent your node from connecting to other nodes on the network without you knowing it).
So what you need to do is simply to edit your elasticsearch.yml configuration file and change the network.bind_host setting like this:
network.bind_host: 0
Then, you need to restart your node and it will be accessible from a remote host.
Let's recreate your scenario. I started a freshly installed Elasticsearch on my machine. Now I am able to curl port 9200:
[root@kali ~]# hostname -i
192.168.109.128
[root@kali ~]# curl http://localhost:9200
{
"status" : 200,
"name" : "Kali Node",
"cluster_name" : "kali",
"version" : {
"number" : "1.7.1",
"build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
"build_timestamp" : "2015-07-29T09:54:16Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
Check the listening TCP ports that the Java process has opened on your server:
[root@kali ~]# netstat -ntlp | awk '/[j]ava/'
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 3422/java
tcp6 0 0 127.0.0.1:9300 :::* LISTEN 3422/java
You can see Elasticsearch is listening on 127.0.0.1, so it is obvious that you can't reach port 9200 from the network. Let's verify it using wget from a remote machine:
$ wget.exe 192.168.109.128:9200
--2015-12-25 13:30:18-- http://192.168.109.128:9200/
Connecting to 192.168.109.128:9200... failed: Connection refused.
Let's change the Elasticsearch configuration to fix the issue using the command below:
[root@kali ~]# sed -i '/^network.bind_host:/s/network.bind_host: .*/network.bind_host: 0.0.0.0/' /etc/elasticsearch/elasticsearch.yml
Or just open the Elasticsearch configuration file, find "network.bind_host", and make the following change:
network.bind_host: 0.0.0.0
Then restart your Elasticsearch service:
[root@kali ~]# service elasticsearch restart
Restarting elasticsearch (via systemctl): [ OK ]
Now let's check the listening TCP ports of the Java process again:
[root@kali ~]# netstat -ntlp | awk '/[j]ava/'
tcp6 0 0 :::9200 :::* LISTEN 3759/java
tcp6 0 0 :::9300 :::* LISTEN 3759/java
Now you can see it is listening on all interfaces.
Let's try the wget command from the remote machine again:
$ wget.exe 192.168.109.128:9200
--2015-12-25 13:39:12-- http://192.168.109.128:9200/
Connecting to 192.168.109.128:9200... connected.
HTTP request sent, awaiting response... 200 OK
Length: 328 [application/json]
Saving to: ‘index.html.1’
index.html.1 100%[====================================================>] 328 --.-KB/s in 0.009s
2015-12-25 13:39:12 (37.1 KB/s) - ‘index.html.1’ saved [328/328]
Try the curl command:
$ curl.exe http://192.168.109.128:9200
{
"status" : 200,
"name" : "Kali Node",
"cluster_name" : "kali",
"version" : {
"number" : "1.7.1",
"build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
"build_timestamp" : "2015-07-29T09:54:16Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
For Elasticsearch version 5, you can edit the configuration file /etc/elasticsearch/elasticsearch.yml and add the following lines:
network.bind_host: 0
http.cors.allow-origin: "*"
http.cors.enabled: true
The CORS settings are needed for plugins like Head or HQ running on a remote machine, because they make Ajax XMLHttpRequest calls.
You can also just set network.host: 0, since it is a shortcut that sets both bind_host and publish_host.
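A quick way to check that the CORS settings took effect is to send a request with an Origin header and look for the CORS response headers (a sketch; the origin value is made up):

curl -i -H "Origin: http://head.example.com" http://localhost:9200/
# with http.cors.enabled: true and allow-origin "*", the response should
# include a header like: Access-Control-Allow-Origin: *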
Sources:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html
I had the same issue and this worked for me:
In /etc/elasticsearch/elasticsearch.yml:
Remove network.host (I believe this should only be used if you are accessing locally)
http.host: 192.168.1.50   # the IP of the server
http.port: 9200
In /etc/kibana/kibana.yml:
server.host: "192.168.1.50"
elasticsearch.hosts: ["http://192.168.1.50:9200"]
In your nginx file /etc/nginx/sites-available/yoursite.com:
server {
    listen 80;
    server_name 192.168.1.50;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://192.168.1.50:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Then restart all services and WAIT for a few minutes - I wasn't patient enough the first few attempts and wondered why it kept failing:
systemctl restart elasticsearch
systemctl restart kibana
systemctl restart nginx
After waiting for a few minutes, check that the ports are now on the correct IPs.
netstat -nltp
It should now look something like:
192.168.1.50:5601
192.168.1.50:9200
Test by trying to telnet from the remote machine:
telnet 192.168.1.50 9200
Now you are all set to access it remotely, or to set up Auditbeat, etc.