Filebeat sending logs to Logstash through nginx proxy - elasticsearch

I am trying to make Filebeat send logs to Logstash using docker containers.
The problem is that I have an nginx proxy in between, and the Filebeat-Logstash communication is not based on HTTPS.
What is the solution to make it work?
I was trying to make nginx able to process TCP streams by configuring it this way:
stream {
    upstream logs {
        server logstash:5044;
    }
    server {
        listen 5088;
        proxy_pass logs;
    }
}
And this is my filebeat output config:
output.logstash:
  hosts: ["IP_OF_NGINX:5088"]
  ssl.verification_mode: none
But it does not seem to work.
Filebeat shows this error in its logs:
pipeline/output.go:100 Failed to connect to backoff(async(tcp://IP_OF_NGINX:5088)): dial tcp IP_OF_NGINX:5088: connect: connection refused
Any help?
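A connection refused at the proxy usually means nothing is listening on port 5088 at all. Two things worth checking: the stream block must sit at the top level of nginx.conf (alongside the http block, not inside it), and when nginx runs as a docker container, port 5088 must be published. A minimal nginx.conf layout under those assumptions (the official nginx image includes the stream module):

events {}

http {
    # ... existing http config ...
}

stream {
    upstream logs {
        server logstash:5044;
    }
    server {
        listen 5088;    # must also be published on the container, e.g. -p 5088:5088
        proxy_pass logs;
    }
}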

Related

fail to connect to remote clickhouse-server with clickhouse-client

I hosted a ClickHouse server on an Azure VM (I'm able to run clickhouse-client inside the VM) with an nginx proxy; below is the nginx setting:
server {
    listen 5000;
    server_name myhost.cloudapp.azure.com;
    location / {
        proxy_pass http://localhost:8123;
    }
}

server {
    listen 6000;
    server_name myhost.cloudapp.azure.com;
    location / {
        proxy_pass http://localhost:9000;
    }
}
I'm able to curl both endpoints with a proper response, e.g.
$ curl http://myhost.cloudapp.azure.com:6000
Port 9000 is for clickhouse-client program.
You must use port 8123 for HTTP.

$ curl http://myhost.cloudapp.azure.com:5000
Ok.
However, when I try to run clickhouse-client -h myhost.cloudapp.azure.com --port 6000, I get the following:
clickhouse-client -h myhost.cloudapp.azure.com --port 6000
ClickHouse client version 21.1.2.15 (official build).
Connecting to myhost.cloudapp.azure.com:6000 as user default.
Code: 102. DB::NetException: Unexpected packet from server bs-
clickhouse.westeurope.cloudapp.azure.com:6000 (expected Hello or Exception, got Unknown packet)
The connection setting for clickhouse-server is as follows:
<listen_host>::</listen_host>
I don't know what I'm doing wrong, any hints are appreciated.
Port 9000 speaks the native TCP protocol, not HTTP. You need to configure nginx as a TCP reverse proxy, along the lines of this transparent NGINX TCP proxy example:
stream {
    upstream syslog {
        server sapvmlogstash01.sa.projectplace.com:514;
        server sapvmlogstash02.sa.projectplace.com:514;
    }
    server {
        listen 514;
        proxy_pass syslog;
    }
}
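Applied to the setup above, a minimal sketch (the upstream name clickhouse_native is illustrative, and the stream block goes at the top level of nginx.conf, not inside the http block):

stream {
    upstream clickhouse_native {
        server localhost:9000;    # ClickHouse native TCP port
    }
    server {
        listen 6000;
        proxy_pass clickhouse_native;
    }
}

Note that port 6000 then has to be removed from the http server blocks, since only one of the two can listen on it.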

use tls and elastic in fluentbit

I'm trying to send logs to my Elasticsearch pod from a Fluent Bit service on a different VM.
I configured an ingress for Elasticsearch.
I configured Fluent Bit this way:
[OUTPUT]
    Name        es
    Match       *
    Host        <host_ip>
    Port        443
    # Retry_Limit 1
    URI         /elastic
    tls         On
    tls.verify  Off
but I keep getting the following error:
[2020/10/25 07:34:09] [debug] [out_es] HTTP Status=413 URI=/_bulk
Is it possible to use TLS in the elastic output? If yes, can you suggest what I configured wrong?
HTTP 413 is the code for Payload Too Large. Try increasing http.max_content_length in elasticsearch.yml.
Also note that you are using tls.verify Off, which does not make sense long-term. If you have an ingress with a certificate (LetsEncrypt?) it should be OK to set tls.verify On. Otherwise all looks correct.
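A minimal sketch of that change in elasticsearch.yml (200mb is an illustrative value; the default limit is 100mb):

# elasticsearch.yml
http.max_content_length: 200mb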

Setting up ELK stack

I'm completely new to ELK and trying to install the stack with some beats for our servers.
Elasticsearch, Kibana and Logstash are all installed (on server A). I followed this guide here https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html.
Filebeat template was installed as well.
I also installed filebeat on another server (server B), and was trying to test the connection
$ /usr/share/filebeat/bin/filebeat test output -c /etc/filebeat/filebeat.yml \
    -path.home /usr/share/filebeat -path.config /etc/filebeat \
    -path.data /var/lib/filebeat -path.logs /var/log/filebeat
logstash: my-own-domain:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 163.172.167.147
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK
Things seem to be OK, yet filebeat on server B doesn't seem to be sending data to Logstash.
Accessing Kibana keeps redirecting me back to Create Index pattern, with the message
Couldn't find any Elasticsearch data
Any direction pointing would be really appreciated.
Can you check your filebeat.yml file and see if the configuration for logs is activated:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
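After enabling the input, a quick way to validate and apply the change (a sketch, assuming a systemd-based install and the default config paths):

$ filebeat test config -c /etc/filebeat/filebeat.yml
$ sudo systemctl restart filebeat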

Connection refused from filebeat to logstash

I have an issue when I try to connect to my Logstash from Filebeat.
Logstash version 2.0.0
Filebeat 1.0.1
Here is the error:
INFO Connecting error publishing events (retrying): dial tcp 192.168.50.5:14560: getsockopt: connection refused
This is my logstash configuration
input {
  beats {
    codec => json
    port => 14560
  }
}
output {
  elasticsearch { hosts => ["localhost"] }
  stdout { codec => rubydebug }
}
Here is my filebeat configuration:
logstash:
  # The Logstash hosts
  hosts: ["192.168.50.5:14560","192.168.50.15:14560"]
I installed the beats input plugin for Logstash, as I read I should:
./plugin install logstash-input-beats
I have completely run out of ideas. I would love to use this framework, but it does not seem to be responding at all.
Any ideas would be great.
This happens when your Logstash is not up, or the Logstash host is not reachable (maybe due to a firewall) from the host running filebeat. Try doing a telnet to 192.168.50.5 14560 from the host where you are running filebeat.
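A minimal connectivity check (any TCP client, such as nc, works as well):

# run from the host where filebeat is installed; "connection refused" here
# confirms the problem is the network path or Logstash itself, not filebeat
$ telnet 192.168.50.5 14560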

Logstash won't talk to Elasticsearch

I have Elasticsearch 1.3.2 via ELMA. The ELMA setup places the ES REST API behind an Apache reverse proxy with SSL and basic auth.
On a separate host, I am trying to set up Logstash 1.4.2 to forward some information over to ES. The output part of my Logstash config is as follows:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "192.168.248.4"
  }
}
This produces the following error:
log4j, [2014-09-25T01:40:02.082] WARN: org.elasticsearch.discovery: [logstash-ubuntu-jboss-39160-4018] waited for 30s and no initial state was set by the discovery
I then tried setting the protocol to HTTP as follows:
elasticsearch {
  host => "192.168.248.4"
  protocol => "http"
}
This produces a connection refused error:
Faraday::ConnectionFailed: Connection refused - Connection refused
I then tried setting the port to 9200 (which gives a connection refused error) and 9300, which gives:
Faraday::ConnectionFailed: End of file reached
Any ideas on how I can get logstash talking to my ES?
The way to configure the Logstash elasticsearch output is:
elasticsearch {
  protocol => "http"
  host => "EShostname:EsportNo"
}
In your case, it should be:
elasticsearch {
  protocol => "http"
  host => "192.168.248.4:9200"
}
If it's not working, then the problem is with the network address configuration. To make sure you have provided the correct configuration (a sketch of these settings follows the list):
Check the http.port property of ES
Check the network.bind_host property of ES
Check the network.publish_host property of ES
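A sketch of where those properties live in elasticsearch.yml, with illustrative values (network.publish_host should be an address reachable from the Logstash host):

# elasticsearch.yml
http.port: 9200
network.bind_host: 0.0.0.0
network.publish_host: 192.168.248.4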
