ElasticSearch: Allow only local requests

How can I allow only local requests to Elasticsearch?
So that a command like:
curl -XGET 'http://localhost:9200/twitter/_settings'
can only be run on localhost, and a request like:
curl -XGET 'http://mydomain.com:9200/twitter/_settings'
would get rejected?
Because, from what I see, Elasticsearch allows it by default.
EDIT:
According to http://www.elasticsearch.org/guide/reference/modules/network.html
you can set the bind_host parameter to control which hosts Elasticsearch binds to. By default, it is set to anyLocalAddress.

For Elasticsearch prior to v2.0.0, if you want both the HTTP transport and the internal Elasticsearch transport to listen only on localhost, simply add the following line to the elasticsearch.yml file.
network.host: "127.0.0.1"
If you want only the HTTP transport to listen on localhost, add the following line instead.
http.host: "127.0.0.1"
Starting from v2.0, Elasticsearch listens only on localhost by default, so no additional configuration is needed.
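A quick sanity check, assuming the default HTTP port 9200 and that mydomain.com resolves to this host:
# From the machine itself; should return the node info JSON
curl -XGET 'http://127.0.0.1:9200/'
# From any other machine; should fail with "connection refused"
curl -XGET 'http://mydomain.com:9200/'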

If your final goal is to deny any requests from outside the host machine, the most reliable way would be to modify the host's iptables so that it denies any incoming requests to the service ports used by ElasticSearch (9200-9300).
If the end goal is to make sure that everyone refers to the service using an exclusive DNS name, you're better off achieving this with an HTTP server that can proxy requests, such as Apache HTTPd or nginx.
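A minimal iptables sketch for the first approach, assuming the default 9200-9300 port range and that local clients connect over the loopback interface:
# Allow loopback traffic to the Elasticsearch ports, drop everything else
iptables -A INPUT -i lo -p tcp --dport 9200:9300 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200:9300 -j DROP
And a minimal nginx sketch for the proxying approach, where es.mydomain.com is a hypothetical name and Elasticsearch is assumed to be bound to localhost:
server {
    listen 80;
    server_name es.mydomain.com;
    location / {
        # Forward all requests to the locally bound Elasticsearch instance
        proxy_pass http://127.0.0.1:9200;
    }
}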

I use this parameter:
http.host: "127.0.0.1"
With this parameter set, Elasticsearch does not accept HTTP requests from external hosts.

Related

Not able to configure Sentry Relay setup with proxy

I want to connect Relay to sentry.io via a proxy service/application.
Please help me with this; I am not able to find any way to put a proxy between Relay and Sentry.
Config.yml
relay:
  mode: managed
  upstream: "https://sentry.io/"
  host: 0.0.0.0
  port: 3000
  tls_port: ~
  tls_identity_path: ~
  tls_identity_password: ~
Where do I have to set the proxy in Relay?
You can replace the upstream location with your proxy service/application, and there you need to have another Relay which can upload the data to sentry.io.
Warning: this will just forward the messages, so configure your first Relay in proxy mode.
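A minimal sketch of the first Relay's config.yml under that setup, where relay-proxy.internal:3000 is a hypothetical address for the second Relay:
relay:
  mode: proxy
  upstream: "http://relay-proxy.internal:3000/"
  host: 0.0.0.0
  port: 3000
The second Relay at that address then points its own upstream at "https://sentry.io/" and uploads the data there.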

Use TLS with the Elasticsearch output in Fluent Bit

I'm trying to send logs to my Elasticsearch pod from a Fluent Bit service on a different VM.
I configured an ingress for Elasticsearch.
I configured Fluent Bit this way:
[OUTPUT]
    Name        es
    Match       *
    Host        <host_ip>
    Port        443
    #Retry_Limit 1
    URI         /elastic
    tls         On
    tls.verify  Off
but I keep getting the following error:
[2020/10/25 07:34:09] [debug] [out_es] HTTP Status=413 URI=/_bulk
Is it possible to use TLS in the elastic output? If yes, can you suggest what I configured wrong?
HTTP 413 is the code for Payload Too Large. Try increasing http.max_content_length in elasticsearch.yml.
Also note that you are using tls.verify Off, which does not make sense long-term. If you have an ingress with a certificate (Let's Encrypt?), it should be OK to set tls.verify On. Otherwise, everything looks correct.
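A sketch of the elasticsearch.yml change; the default limit is 100mb, and the right value depends on the size of your bulk requests:
# Raise the maximum HTTP request body size Elasticsearch accepts
http.max_content_length: 200mb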

Elasticsearch setup on a gcloud VM fails

I wish to run my Elasticsearch remotely on a gcloud VM; it is configured to run at 127.0.0.1 on port 9200. How can I access it from a website outside this VM? If I change the network host to 0.0.0.0 in the yml file, even port 9200 becomes inaccessible. How do I overcome this problem?
What I tried: changed network.host: [_site_, _local_, _global_], where
_site_ = the internal IP given by the Google Cloud VM,
_local_ = 127.0.0.1,
_global_ = the address found using curl ifconfig.me.
Then I opened a specific port (9200) and tried to connect with the global IP address.
curl to the global IP gives:
Failed to connect to (_global_ ip) port 9200: Connection refused
So put network.host: 0.0.0.0, then allow ports 9200 and 9201 and restart the Elasticsearch service. If you are using Ubuntu, run sudo service elasticsearch restart, then check by doing curl -XGET 'http://localhost:9200?pretty'. Let me know if you are still facing any issues.
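On Google Cloud, "allowing the port" means adding a VPC firewall rule; a sketch with the gcloud CLI, where the rule name and target tag are hypothetical:
# Allow inbound TCP 9200-9201 to instances tagged "elasticsearch"
gcloud compute firewall-rules create allow-elasticsearch \
    --allow=tcp:9200-9201 --target-tags=elasticsearch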
Use the following configuration in elasticsearch.yml:
network.host: 0.0.0.0
action.auto_create_index: false
index.mapper.dynamic: false
Solved this problem by going through the logs and finding out that the public IP address is re-mapped to the internal IP address, hence network.host can't be set to the external IP directly. The elasticsearch.yml config is as follows:
network.host: xx.xx.xxx.xx (set to the internal IP given by Google)
http.cors.enabled: true
http.cors.allow-origin: "*" (do not use * in production, it's a security issue)
discovery.type: single-node (in my case, to make it work independently and not in a cluster)
Now this sandboxed version can be accessed from outside the VM using the external IP address given by Google.

On an iMac, how do I access the Elasticsearch and Neo4j ports on the local IP address?

ifconfig shows
inet 192.168.10.1
I can access
http://localhost/
http://127.0.0.1/
http://192.168.10.1
They are all the same.
I can also access the Neo4j and Elasticsearch ports at the following URLs:
Elasticsearch
http://127.0.0.1:9200/
http://localhost:9200/
Neo4j
http://127.0.0.1:7474/browser/
http://localhost:7474/browser/
But ports 9200 and 7474 are not working for 192.168.10.1:
http://192.168.10.1:9200
http://192.168.10.1:7474
I need to do something to make ports 7474 (Neo4j) and 9200 (Elasticsearch) work for 192.168.10.1, but I don't know what.
Please advise, thanks!
I figured it out.
Neo4j
Set up Neo4j to listen on the IP (not just localhost); in my case:
http://192.168.10.1:7474
In the neo4j.conf file, uncomment the following line:
# With default configuration Neo4j only accepts local connections.
# To accept non-local connections, uncomment this line:
dbms.connectors.default_listen_address=0.0.0.0
Elasticsearch
Modify elasticsearch.yml and add the following line:
network.host: 0.0.0.0
Then start elasticsearch.
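A quick check after restarting both services, using the LAN address from ifconfig:
curl http://192.168.10.1:9200/
curl http://192.168.10.1:7474/
Keep in mind that 0.0.0.0 makes both services listen on every interface, so any machine on the 192.168.10.x network can now reach them.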

Kibana deployment issue on server: client not able to access GUI

I have configured Logstash + ES + Kibana on the 100.100.0.158 VM, and Kibana is running under an Apache server on port 8080.
Now what I need is: I just have to give the URL "100.100.0.158:8080/kibana" to the client so the client can see his data on the web.
But when I put this URL in the client's browser, I get this error:
"can't contact elasticsearch at http://"127.0.0.1":9200 please ensure that elastic search is reachable from your system"
Do I need to configure ES with IP 100.100.0.158:9200, or is 127.0.0.1:9200 OK?
If your Kibana and ES are installed on the same box, you can have it auto-detect the ES URL/IP by using this line in your Kibana's config.js file:
/** #scratch /configuration/config.js/5
* ==== elasticsearch
*
* The URL to your elasticsearch server. You almost certainly don't
* want +http://localhost:9200+ here. Even if Kibana and Elasticsearch are on
* the same host. By default this will attempt to reach ES at the same host you have
* elasticsearch installed on. You probably want to set it to the FQDN of your
* elasticsearch host
*/
elasticsearch: "http://"+window.location.hostname+":9200",
This is because the interface between Kibana and ES is via JavaScript, and so using 127.0.0.1 or localhost actually points to the client machine (that the browser is running on) rather than the server.
Modify the Elasticsearch configuration file elasticsearch.yml.
Append or modify the following configuration:
# Enable or disable cross-origin resource sharing.
http.cors.enabled: true
# Which origins to allow.
http.cors.allow-origin: /https?:\/\/<*your\.kibana\.host*>(:[0-9]+)?/
This is caused by the Kibana page trying to load JSON data from Elasticsearch, which the browser blocks for security reasons unless cross-origin requests are allowed.
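For example, if Kibana were served from http://kibana.example.com:8080 (a hypothetical host), the setting would look like:
http.cors.allow-origin: /https?:\/\/kibana\.example\.com(:[0-9]+)?/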
It is about iptables rules. Kibana uses port 9292 for the web interface, but for Elasticsearch queries it uses 9200, so you must add lines to iptables for these ports.
netstat -napt | grep -i LISTEN
will show you these ports: 9200 9300 9301 9302 9292
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
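Following the same pattern, a rule for Kibana's 9292 web port:
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 9292 -j ACCEPT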
see detail: http://logstash.net/docs/1.3.3/tutorials/getting-started-simple
