Accessing Elasticsearch from a public domain name or IP

I'm running Elasticsearch 2.3.1 on Ubuntu Server 16.04.
I can access the Elasticsearch API locally on the default host, as shown below:
curl -X GET 'http://localhost:9200'
{
  "name" : "oxo-cluster-node",
  "cluster_name" : "oxo-elastic-cluster",
  "version" : {
    "number" : "2.3.1",
    "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
    "build_timestamp" : "2016-04-04T12:25:05Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
I need to be able to access Elasticsearch via my domain name or IP address.
I've tried adding the setting http.publish_host: my.domain to the config file, but the server refuses client HTTP connections. I'm running the service on the default port 9200.
When I run
curl -X GET 'http://my.domain:9200'
the result is
curl: (7) Failed to connect to my.domain port 9200: Connection refused
My domain (my.domain) is publicly accessible on the internet, and port 9200 is configured to accept connections from anywhere.
What am I missing?

First off, exposing an Elasticsearch node directly to the internet without protections in front of it is usually bad, bad news. Don't do it, especially with older versions; you're going to end up with security problems in a hurry. I recommend using something like nginx to do basic authentication and HTTPS, and then proxy_pass to your locally-bound Elasticsearch instance. This gives you an encrypted and authenticated public connection to your server.
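A minimal sketch of that nginx front end, assuming a hypothetical domain, certificate paths, and password file (none of these are from the original post):
server {
    listen 443 ssl;
    server_name my.domain;                               # hypothetical domain
    ssl_certificate     /etc/nginx/ssl/my.domain.crt;    # assumed cert paths
    ssl_certificate_key /etc/nginx/ssl/my.domain.key;

    auth_basic           "Elasticsearch";
    auth_basic_user_file /etc/nginx/htpasswd.users;      # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:9200;                # ES stays bound to localhost
    }
}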
That said, see the networking config documentation. You want either network.host or network.bind_host. network.publish_host is the name that the node advertises to other nodes so that they can connect for clustering. You will also want to make sure that your firewall (iptables or similar) is set up to allow traffic on 9200, and that you don't have any upstream networking security preventing access to the machine (such as AWS security groups or DigitalOcean's networking firewalls).
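If you do decide to expose the node directly, a sketch of the pieces involved (the 0.0.0.0 bind and ufw usage are assumptions; adapt to your network setup):
# /etc/elasticsearch/elasticsearch.yml
network.host: 0.0.0.0          # bind on all interfaces, or a specific public IP

# allow the port through the host firewall (Ubuntu), then restart:
sudo ufw allow 9200/tcp
sudo systemctl restart elasticsearch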

Related

elasticsearch setup on Gcloud VM fails

I want to run Elasticsearch remotely on a GCloud VM; it is configured to run at 127.0.0.1 on port 9200. How do I access this from a website outside this VM? If I change the network host to 0.0.0.0 in the yml file, even port 9200 becomes inaccessible. How do I overcome this problem?
I changed network.host: [_site_, _local_, _global_], where
_site_ = the internal IP given by the Google Cloud VM,
_local_ = 127.0.0.1,
_global_ = the external IP found using curl ifconfig.me.
I opened the specific port (9200) and tried to connect with the global IP address.
curl to the global IP gives:
Output: Failed to connect to (_global_ ip) port 9200: connection refused.
So set network.host: 0.0.0.0, allow ports 9200 and 9201, and restart the Elasticsearch service. If you are using Ubuntu, run sudo service elasticsearch restart, then check with curl -XGET 'http://localhost:9200?pretty'. Let me know if you are still facing any issues.
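On Google Cloud specifically, "allowing the ports" means adding a VPC firewall rule; a sketch (the rule name is an assumption, and this opens the ports to the whole internet):
gcloud compute firewall-rules create allow-elasticsearch --description "Allow Elasticsearch HTTP." --allow tcp:9200,tcp:9201 --source-ranges 0.0.0.0/0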
Use the following configuration in elasticsearch.yml:
network.host: 0.0.0.0
action.auto_create_index: false
index.mapper.dynamic: false
I solved this problem by going through the logs and found that the public IP address is re-mapped to the internal IP address, so network.host can't be set to the external IP directly. The elasticsearch.yml config is as follows:
network.host: xx.xx.xxx.xx   # set to the internal IP given by Google
http.cors.enabled: true
http.cors.allow-origin: "*"  # do not use * in production, it's a security issue
discovery.type: single-node  # in my case, to make it work independently and not in a cluster
This sandboxed version can now be accessed from outside the VM using the external IP address given by Google.

How to access Elasticsearch running in a Docker container from outside?

I'm currently running Elasticsearch (ES) 5.5 inside a Docker container (see below):
curl -XGET 'localhost:9200'
{
  "name" : "THbbezM",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "CtYdgNUzQrS5YRTRT7xNJw",
  "version" : {
    "number" : "5.5.0",
    "build_hash" : "260387d",
    "build_date" : "2017-06-30T23:16:05.735Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}
I've changed the elasticsearch.yml file to look like this:
http.host: 0.0.0.0
# Uncomment the following lines for a production cluster deployment
#transport.host: 0.0.0.0
#discovery.zen.minimum_master_nodes: 1
network.host: 0.0.0.0
http.port: 9200
I can currently get my indexes through curl -XGET commands. The thing is, I want to be able to make HTTP requests to this ES instance from my machine (Mac OS X) using its IP address instead of 'localhost:9200'.
So, what I've tried already:
1) I tried doing it in Postman, getting the following response:
Could not get any response
There was an error connecting to X.X.X.X:9200/.
Why this might have happened:
The server couldn't send a response:
Ensure that the backend is working properly
Self-signed SSL certificates are being blocked:
Fix this by turning off 'SSL certificate verification' in Settings > General
Client certificates are required for this server:
Fix this by adding client certificates in Settings > Certificates
Request timeout:
Change request timeout in Settings > General
2) I also tried in Sense (Plugin for Chrome):
Request failed to get to the server (status code: 0):
3) Running curl from my machine's terminal doesn't work either.
What am I missing here?
Docker for Mac provides a DNS name you can use:
docker.for.mac.localhost
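For example, from inside another container you could reach a service on the Mac host like this (a sketch; note this name resolves to the host machine, not to a specific container, so it assumes the ES port is published on the host):
curl http://docker.for.mac.localhost:9200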
You should use the value specified under container_name in the YML file to connect to your cluster. Example:
services:
  elasticsearch:
    container_name: 'example_elasticsearch'
    image: 'docker.elastic.co/elasticsearch/elasticsearch:6.6.1'
In this case, Elasticsearch is located at http://example_elasticsearch:9200. Note that example_elasticsearch is the name of the container and may be used the same way as a machine name or host name.
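One caveat: the container name only resolves from other containers on the same Docker network, not from the macOS host itself. A sketch of both cases (the image tag is the one from the example above; the -p flag is an assumption about how the container is started):
# from another container on the same Docker network:
curl http://example_elasticsearch:9200

# from the Mac host, publish the port when starting the container, then use localhost:
docker run -d -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:6.6.1
curl http://localhost:9200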

Elasticsearch access to local computer

I have an Elasticsearch instance running on my server. I have to configure it in such a way that it's only accessible via my local computer's public IP. I tried changing network.host: to my local IP but it's not working. Can anyone tell me what I am doing wrong?
I can suggest two things here.
1) Put an nginx reverse proxy in front of your Elasticsearch server and filter the IP addresses that are allowed to connect to Elasticsearch.
In the nginx.conf file (in /usr/local/nginx/conf/), add something like:
location / {
    # block one workstation
    deny 192.168.1.1;
    # allow anyone in 192.168.1.0/24
    allow 192.168.1.0/24;
    # drop rest of the world
    deny all;
}
2) Or you can use the Shield plugin (which became part of X-Pack) and use its IP filtering feature to restrict access to your Elasticsearch cluster.
In the elasticsearch.yml file:
shield.transport.filter.allow: "192.168.0.1"
shield.transport.filter.deny: "192.168.0.0/24"
You can also edit these settings using the REST API:
curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "shield.transport.filter.allow" : "172.16.0.0/24"
  }
}'
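The shield.transport.filter.* settings above filter transport (port 9300) traffic; for REST traffic on port 9200, Shield also ships analogous HTTP filter settings. Treat the exact keys below as an assumption to verify against your Shield version:
shield.http.filter.enabled: true
shield.http.filter.allow: "192.168.0.1"
shield.http.filter.deny: "192.168.0.0/24"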
You can read more in the Shield documentation.
Thanks

Installed Elasticsearch on a server but cannot connect to it from another machine

Context
I have just started using Elasticsearch and installed it on a server. I can curl and telnet to port 9200 on the local machine (the server) but cannot connect to it from another machine.
I disabled the firewall on both the server and the client, as solutions I found on the internet suggested, and also tried the suggestions found at the link below, but couldn't get it working.
https://discuss.elastic.co/t/accessing-port-9200-remotely/21840
Question
Can someone help me get this working? Thanks in advance.
Since you just installed Elasticsearch, I suppose you're using ES 2.0 or 2.1. You need to know that since the 2.0 release, Elasticsearch binds to localhost by default (as a security measure to prevent your node from connecting to other nodes on the network without you knowing it).
So what you need to do is simply to edit your elasticsearch.yml configuration file and change the network.bind_host setting like this:
network.bind_host: 0
Then, you need to restart your node and it will be accessible from a remote host.
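A quick sanity check once the node is back up, run from the remote machine (the address is a placeholder for your server's IP or hostname):
curl http://your-server-ip:9200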
Let's recreate your scenario. I started a freshly installed Elasticsearch on my machine. Now I am able to perform curl on port 9200:
[root@kali ~]# hostname -i
192.168.109.128
[root@kali ~]# curl http://localhost:9200
{
  "status" : 200,
  "name" : "Kali Node",
  "cluster_name" : "kali",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
Check the listening TCP ports that the java service has opened on your server:
[root@kali ~]# netstat -ntlp | awk '/[j]ava/'
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 3422/java
tcp6 0 0 127.0.0.1:9300 :::* LISTEN 3422/java
You can see Elasticsearch is listening on 127.0.0.1, so it is obvious that you can't access port 9200 from the network. Let's verify it using wget from a remote server.
$ wget.exe 192.168.109.128:9200
--2015-12-25 13:30:18-- http://192.168.109.128:9200/
Connecting to 192.168.109.128:9200... failed: Connection refused.
Let's change the Elasticsearch configuration to fix the issue, using the command below:
[root@kali ~]# sed -i '/^network.bind_host:/s/network.bind_host: .*/network.bind_host: 0.0.0.0/' /etc/elasticsearch/elasticsearch.yml
or
just open the Elasticsearch configuration file, find "network.bind_host", and make the change below:
network.bind_host: 0.0.0.0
then restart your Elasticsearch service:
[root@kali ~]# service elasticsearch restart
Restarting elasticsearch (via systemctl): [ OK ]
Now let's check the listening TCP ports of java again:
[root@kali ~]# netstat -ntlp | awk '/[j]ava/'
tcp6 0 0 :::9200 :::* LISTEN 3759/java
tcp6 0 0 :::9300 :::* LISTEN 3759/java
Now you can see it is listening on all interfaces.
Let's try the wget command from the remote machine:
$ wget.exe 192.168.109.128:9200
--2015-12-25 13:39:12-- http://192.168.109.128:9200/
Connecting to 192.168.109.128:9200... connected.
HTTP request sent, awaiting response... 200 OK
Length: 328 [application/json]
Saving to: ‘index.html.1’
index.html.1 100%[====================================================>] 328 --.-KB/s in 0.009s
2015-12-25 13:39:12 (37.1 KB/s) - ‘index.html.1’ saved [328/328]
Try the curl command:
$ curl.exe http://192.168.109.128:9200
{
  "status" : 200,
  "name" : "Kali Node",
  "cluster_name" : "kali",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
For Elasticsearch version 5, you can edit the configuration file /etc/elasticsearch/elasticsearch.yml and add the following lines:
network.bind_host: 0
http.cors.allow-origin: "*"
http.cors.enabled: true
The CORS settings are needed for plugins like Head or HQ on a remote machine, because they make Ajax XMLHttpRequest requests.
You can also define network.host: 0, since it is a shortcut that sets both bind_host and publish_host.
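So an equivalent minimal config using that shortcut would be (a sketch):
network.host: 0
http.cors.enabled: true
http.cors.allow-origin: "*"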
Sources:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html
I had the same issue and this worked for me:
In /etc/elasticsearch/elasticsearch.yml:
Remove network.host (I believe this should only be used if you are accessing locally)
http.host: 192.168.1.50 (the IP of the server)
http.port: 9200
In /etc/kibana/kibana.yml:
server.host: "192.168.1.50"
elasticsearch.hosts: ["http://192.168.1.50:9200"]
In your nginx file /etc/nginx/sites-available/yoursite.com:
server {
    listen 80;
    server_name 192.168.1.50;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://192.168.1.50:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
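The config above references /etc/nginx/htpasswd.users; one way to create that file is with the htpasswd tool from apache2-utils (the username is a placeholder):
sudo apt-get install apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.users kibanauser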
Then restart all services and WAIT for a few minutes - I wasn't patient enough the first few attempts and wondered why it kept failing:
systemctl restart elasticsearch
systemctl restart kibana
systemctl restart nginx
After waiting for a few minutes, check that the ports are now on the correct IPs.
netstat -nltp
It should now look something like:
192.168.1.50:5601
192.168.1.50:9200
Test by trying to telnet from the remote machine:
telnet 192.168.1.50 9200
Now you are all set to access remotely or set up auditbeat etc.

google compute engine add firewall rule for hadoop dashboard

I installed a Hadoop cluster using bdutil (instead of Click to Deploy). I am not able to access the job tracker page at localhost:50030/jobtracker.jsp (https://cloud.google.com/hadoop/running-a-mapreduce-job).
I am checking it locally using lynx instead of from my client browser (so localhost instead of the external IP).
My setting in my config file for bdutil is
MASTER_UI_PORTS=('8088' '50070' '50030')
but after deploying the Hadoop cluster, when I list the firewall rules I get the following:
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
default-allow-http default 0.0.0.0/0 tcp:80,tcp:8080 http-server
default-allow-https default 0.0.0.0/0 tcp:443 https-server
default-allow-icmp default 0.0.0.0/0 icmp
default-allow-internal default 10.240.0.0/16 tcp:1-65535,udp:1-65535,icmp
default-allow-rdp default 0.0.0.0/0 tcp:3389
default-allow-ssh default 0.0.0.0/0 tcp:22
Now I don't see port 50030 in the list of rules. Why is that?
So I ran a command to add it manually:
gcloud compute firewall-rules create allow-http --description "Incoming http allowed." --allow tcp:50030 --format json
Now it gets added and I can see it in the output of the firewall-rules list command.
But still, when I do lynx localhost:50030/jobtracker.jsp I get "unable to connect". Then I ran a Hadoop job so that there would be some output to view and ran the lynx command again, but I still see "unable to connect".
Can someone tell me where I am going wrong in this complete process?
An ephemeral IP is an external IP. The difference between an ephemeral IP and a static IP is that a static IP can be reassigned to another virtual machine instance, while an ephemeral IP is released when the instance is destroyed. An ephemeral IP can be promoted to a static IP through the web UI or the gcloud command-line tool.
You can obtain the external IP of your host by querying the metadata API at http://169.254.169.254/0.1/meta-data/network. The response will be a JSON document that looks like this (pretty-printed for clarity):
{
  "networkInterface" : [
    {
      "network" : "projects/852299914697/networks/rabbit",
      "ip" : "10.129.14.59",
      "accessConfiguration" : [
        {
          "externalIp" : "107.178.223.11",
          "type" : "ONE_TO_ONE_NAT"
        }
      ]
    }
  ]
}
The firewall rule command seems reasonable, but you may want to choose a more descriptive name. If I saw a rule that said allow-http, I would assume it meant port 80. You may also want to restrict it to a target tag placed on your Hadoop dashboard instance; as written, your rule will allow access on that port to all instances in the current project.
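For example, a more descriptive, tag-scoped version of that rule might look like the following (the rule name and target tag are assumptions; the tag must also be applied to the dashboard instance):
gcloud compute firewall-rules create allow-hadoop-jobtracker --description "Incoming JobTracker UI traffic allowed." --allow tcp:50030 --target-tags hadoop-master --format json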