I'm trying to send logs to my Elasticsearch pod from a FluentBit service on a different VM.
I configured an ingress for Elasticsearch.
I configured FluentBit this way:
[OUTPUT]
    Name        es
    Match       *
    Host        <host_ip>
    Port        443
    #Retry_Limit 1
    URI         /elastic
    tls         On
    tls.verify  Off
but I keep getting the following error:
[2020/10/25 07:34:09] [debug] [out_es] HTTP Status=413 URI=/_bulk
Is it possible to use TLS in the Elasticsearch output? If yes, can you suggest what I configured wrong?
HTTP 413 is the status code for Payload Too Large. Try increasing http.max_content_length in elasticsearch.yml.
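For example, in elasticsearch.yml (a sketch; the default limit is 100mb, and the 200mb below is only an illustrative value to size against your bulk payloads):

# default is 100mb; raise it only as far as your bulk requests need
http.max_content_length: 200mb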
Also note that you are using tls.verify Off, which does not make sense long term. If you have an ingress with a certificate (Let's Encrypt?), it should be OK to set tls.verify On. Otherwise all looks correct.
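With a trusted certificate on the ingress, the output section from the question would change only in its last line:

[OUTPUT]
    Name        es
    Match       *
    Host        <host_ip>
    Port        443
    URI         /elastic
    tls         On
    tls.verify  On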
Hi, I would like to expose my Elasticsearch cluster in Kubernetes, created using ECK (https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html), so it can be accessed externally.
I have a requirement to set up Functionbeat to ship AWS Lambda CloudWatch logs to Elasticsearch.
Please see Step 2: Connect to the Elastic Stack https://www.elastic.co/guide/en/beats/functionbeat/current/functionbeat-installation-configuration.html
Attempt:
I have an Elastic Load Balancer with haproxy running on it, which I use to expose other k8s services externally, such as frontends. I've attempted to modify this to also expose Elasticsearch.
haproxy:
frontend elasticsearch
    bind *:9200
    acl host_data_elasticsearch hdr(host) -i elasticsearch.acme.com
    use_backend elasticsearchApp if host_data_elasticsearch

backend elasticsearchApp
    server data-es data-es-es-http:9200 check rise 1 ssl verify none
I'm attempting to see if I can connect using the following curl command:
curl -u "elastic:$ELASTIC_PASSWORD" -k "https://elasticsearch.acme.com:9200"
However, I get the following error:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
In the browser, if I navigate to the URL, I get:
This site can’t provide a secure connection
elasticsearch.acme.com sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
Posting the answer as community wiki, based on @Joao Morais' comment:
you added ssl to the server line, which instructs haproxy to perform an ssl offload, and you didn't add the ssl stuff in the frontend. It seems you should either remove the ssl+verify from the server, add ssl to the frontend, or query a plain http request.
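A sketch of the "add ssl to the frontend" option, assuming a PEM bundle (certificate plus key) for elasticsearch.acme.com at a hypothetical path /etc/haproxy/certs/elasticsearch.pem; the backend keeps ssl because the ECK service serves HTTPS:

frontend elasticsearch
    bind *:9200 ssl crt /etc/haproxy/certs/elasticsearch.pem
    acl host_data_elasticsearch hdr(host) -i elasticsearch.acme.com
    use_backend elasticsearchApp if host_data_elasticsearch

backend elasticsearchApp
    server data-es data-es-es-http:9200 check rise 1 ssl verify none

With TLS terminated at the frontend, the original https curl command from the question should work unchanged.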
Additional information:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number indicates that curl is attempting a TLS handshake against an endpoint that answers in plain HTTP.
To access it you should replace https:// with http:// in your curl command, so it will look like this:
curl -u "elastic:$ELASTIC_PASSWORD" -k "http://elasticsearch.acme.com:9200"
I'm completely new to ELK and trying to install the stack with some beats for our servers.
Elasticsearch, Kibana and Logstash are all installed (on server A). I followed this guide here https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html.
The Filebeat template was installed as well.
I also installed Filebeat on another server (server B) and tried to test the connection:
$ /usr/share/filebeat/bin/filebeat test output -c /etc/filebeat/filebeat.yml \
    -path.home /usr/share/filebeat -path.config /etc/filebeat \
    -path.data /var/lib/filebeat -path.logs /var/log/filebeat
logstash: my-own-domain:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 163.172.167.147
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
Things seem to be OK, yet Filebeat on server B doesn't seem to be sending data to Logstash.
Accessing Kibana keeps redirecting me back to the Create index pattern page, with the message:
Couldn't find any Elasticsearch data
Any pointers would be really appreciated.
Can you check your filebeat.yml file and see if the configuration for logs is activated:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
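If the input was disabled, after enabling it you can validate the configuration and restart the service so the change takes effect (paths as in the question; assuming a systemd-based install):

/usr/share/filebeat/bin/filebeat test config -c /etc/filebeat/filebeat.yml
sudo systemctl restart filebeat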
While trying to execute
./sgadmin.sh -ts truststore.jks -tspass 90f3cbdb3eabe04f815b -ks CN=sgadmin-keystore.jks -kspass a65d2a4fa62d7ed7a4d5 -h host -p 9200 -nhnv -cn eslcl1 -cd ../sgconfig/
I am getting the following error:
Cannot retrieve cluster state due to: None of the configured nodes are available: [{#transport#-1}{6PPXnCNqTt-W5g-0fmeZuQ}{host}{host:9200}]. This is not an error, will keep on trying
The full output is:
WARNING: JAVA_HOME not set, will use /usr/bin/java
Search Guard Admin v5
WARNING: Seems you want connect to the a HTTP port.
sgadmin connect through the transport port which is normally 9300.
Will connect to host:9200 ... done
### LICENSE NOTICE Search Guard ###
If you use one or more of the following features in production
make sure you have a valid Search Guard license
(See https://floragunn.com/searchguard-validate-license)
* Kibana Multitenancy
* LDAP authentication/authorization
* Active Directory authentication/authorization
* REST Management API
* JSON Web Token (JWT) authentication/authorization
* Kerberos authentication/authorization
* Document- and Fieldlevel Security (DLS/FLS)
* Auditlogging
In case of any doubt mail to <sales@floragunn.com>
###################################
Contacting elasticsearch cluster 'eslcl1' and wait for YELLOW clusterstate ...
Cannot retrieve cluster state due to: None of the configured nodes are available: [{#transport#-1}{6PPXnCNqTt-W5g-0fmeZuQ}{host}{host:9200}]. This is not an error, will keep on trying ...
Root cause: NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{6PPXnCNqTt-W5g-0fmeZuQ}{host}{host:9200}]] (org.elasticsearch.client.transport.NoNodeAvailableException/org.elasticsearch.client.transport.NoNodeAvailableException)
* Try running sgadmin.sh with -icl (but no -cl) and -nhnv (If thats works you need to check your clustername as well as hostnames in your SSL certificates)
* Make also sure that your keystore or cert is a client certificate (not a node certificate) and configured properly in elasticsearch.yml
* If this is not working, try running sgadmin.sh with --diagnose and see diagnose trace log file)
* Add --accept-red-cluster to allow sgadmin to operate on a red cluster.
My configuration in elasticsearch.yml is:
######## Start Search Guard Demo Configuration ########
searchguard.ssl.transport.keystore_filepath: CN=x.x.x.x-keystore.jks
searchguard.ssl.transport.keystore_password: 8a17368ff585a2c3afdc
searchguard.ssl.transport.truststore_filepath: truststore.jks
searchguard.ssl.transport.truststore_password: 90f3cbdb3eabe04f815b
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.http.enabled: false
searchguard.ssl.http.keystore_filepath: CN=x.x.x.x-keystore.jks
searchguard.ssl.http.keystore_password: 8a17368ff585a2c3afdc
searchguard.ssl.http.truststore_filepath: truststore.jks
searchguard.ssl.http.truststore_password: 90f3cbdb3eabe04f815b
searchguard.authcz.admin_dn:
- CN=sgadmin
cluster.name: eslcl1
network.host: x.x.x.x
Is there any configuration that I might need to look into?
You need to connect to port 9300 (the transport protocol), not 9200, which is typically the HTTP/S port. On port 9300 the Elasticsearch nodes talk to each other through a binary TCP protocol; on port 9200 the REST API is accessible via HTTP/S.
sgadmin connects to Elasticsearch through that binary TCP protocol, so you need to use port 9300.
That is why you get this warning:
WARNING: Seems you want connect to the a HTTP port.
sgadmin connect through the transport port which is normally 9300.
So your command should look like:
./sgadmin.sh -ts truststore.jks -tspass 90f3cbdb3eabe04f815b -ks CN=sgadmin-keystore.jks -kspass a65d2a4fa62d7ed7a4d5 -h host -p 9300 -nhnv -cn eslcl1 -cd ../sgconfig/
(The -nhnv flag disables hostname verification, for the case where your certificates do not match your hostnames.)
I have configured Logstash + ES + Kibana on a VM at 100.100.0.158, and Kibana is running under an Apache server on port 8080.
Now what I need is to give the URL "100.100.0.158:8080/kibana" to the client, so the client can see his data on the web.
But when I put this URL in the client's browser, I get this error:
"can't contact elasticsearch at http://"127.0.0.1":9200 please ensure that elastic search is reachable from your system"
Do I need to configure ES with IP 100.100.0.158:9200, or is 127.0.0.1:9200 OK?
Thanks,
Tushar
If your Kibana and ES are installed on the same box, you can have it auto-detect the ES URL/IP by using this line in your Kibana's config.js file:
/** @scratch /configuration/config.js/5
 * ==== elasticsearch
 *
 * The URL to your elasticsearch server. You almost certainly don't
 * want +http://localhost:9200+ here. Even if Kibana and Elasticsearch are on
 * the same host. By default this will attempt to reach ES at the same host you have
 * elasticsearch installed on. You probably want to set it to the FQDN of your
 * elasticsearch host
 */
elasticsearch: "http://"+window.location.hostname+":9200",
This is because the interface between Kibana and ES is JavaScript running in the browser, so using 127.0.0.1 or localhost actually points to the client machine (the one the browser runs on) rather than the server.
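Alternatively, following the comment block's advice to use the FQDN of the Elasticsearch host, you could hard-code the VM address from the question (a sketch; adjust if clients reach ES through a different address):

elasticsearch: "http://100.100.0.158:9200",

Note that Elasticsearch itself must then listen on that address (network.host) rather than only on 127.0.0.1.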
Modify the Elasticsearch configuration file elasticsearch.yml.
Append or modify the following configuration:
# Enable or disable cross-origin resource sharing.
http.cors.enabled: true
# Which origins to allow; your.kibana.host is a placeholder for your Kibana host.
http.cors.allow-origin: /https?:\/\/your\.kibana\.host(:[0-9]+)?/
This error is caused by the Kibana page trying to load JSON data from Elasticsearch, which the browser blocks for security reasons unless CORS allows it.
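After restarting Elasticsearch, you can check that CORS is in effect by sending a request with an Origin header and looking for an Access-Control-Allow-Origin response header (your.kibana.host is again a placeholder):

curl -i -H 'Origin: http://your.kibana.host' 'http://localhost:9200/'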
It is about iptables rules. Kibana uses port 9292 for the web interface, but Elasticsearch queries use port 9200, so you must add iptables rules for these ports.
netstat -napt | grep -i LISTEN
you will see these ports: 9200 9300 9301 9302 9292
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
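The same rule pattern applies to the Kibana web port (a sketch mirroring the 9200 rule above):

iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 9292 -j ACCEPT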
See details: http://logstash.net/docs/1.3.3/tutorials/getting-started-simple
How can I allow only local requests to Elasticsearch?
So a command like:
curl -XGET 'http://localhost:9200/twitter/_settings'
can only be run on localhost, and a request like:
curl -XGET 'http://mydomain.com:9200/twitter/_settings'
would get rejected?
Because, from what I see, Elasticsearch allows it by default.
EDIT:
According to http://www.elasticsearch.org/guide/reference/modules/network.html
you can manage the bind_host parameter to control which hosts are allowed. By default, it is set to anyLocalAddress.
For Elasticsearch prior to v2.0.0, if you want both the HTTP transport and the internal Elasticsearch transport to listen only on localhost, simply add the following line to the elasticsearch.yml file:
network.host: "127.0.0.1"
If you want only the HTTP transport to listen on localhost, add the following line instead:
http.host: "127.0.0.1"
Starting from v2.0, Elasticsearch listens only on localhost by default, so no additional configuration is needed.
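To check the effect, you can reuse the curl commands from the question: the local one should return the settings, while the remote one should now fail with a connection error:

curl -XGET 'http://localhost:9200/twitter/_settings'
curl -XGET 'http://mydomain.com:9200/twitter/_settings'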
If your final goal is to deny any requests from outside the host machine, the most reliable way would be to modify the host's iptables so that they deny any incoming requests to the service ports used by Elasticsearch (9200-9300).
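A minimal iptables sketch of that approach (assuming loopback traffic should stay allowed; adapt it to your existing ruleset):

iptables -A INPUT -i lo -p tcp --dport 9200:9300 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200:9300 -j DROP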
If the end goal is to make sure that everyone refers to the service using an exclusive DNS name, you're better off achieving this with an HTTP server that can proxy requests, such as Apache httpd or nginx.
I use this parameter:
http.host: "127.0.0.1"
With this parameter set, Elasticsearch does not accept HTTP requests from external hosts.