I want to feed a log file to logstash. But the file is on a remote host. Is there a way to make logstash consume this file? Then, I will forward the events to an elasticsearch instance running on the same machine as logstash.
Also, is it possible to run logstash on one machine but send output to elasticsearch running on another machine?
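For context, the elasticsearch output in Logstash accepts a list of remote hosts, so pointing it at an ES instance on another machine only takes an output block. A minimal sketch, with a placeholder hostname:

    output {
      elasticsearch {
        hosts => ["http://es-host.example.com:9200"]   # placeholder remote ES address
      }
    }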
Related
Is it possible to send data to an external elasticsearch deployment with fleet server?
I have tried the Kibana Fleet UI settings, but there are no username/password fields for the connection, and if I specify them in the Advanced YAML configuration I get this error: cannot set both api_key and username/password accessing 'elasticsearch'
Fleet > Settings > Outputs | Specify where agents will send data
I can see the Kibana Fleet setting xpack.fleet.outputs > config, described as "Extra config for that output", which should let me set this manually, but there is no example of how to set this config variable.
Kibana version: kibana:8.5.3
Elasticsearch version: elasticsearch:8.5.3
Install method: Elastic ECK 2.6
Agents don't support sending logs to a remote cluster, so you won't be able to send data to an external Elasticsearch per se.
However, you can either opt for Beats and provide the list of ES hosts where you want to send the logs, or use Logstash to receive input from the Agent and configure your output with the list of ES hosts (a minimal sketch of the Logstash route follows).
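A minimal sketch of that Logstash pipeline, assuming the conventional Beats/Agent port 5044 and placeholder external ES hosts and credentials:

    input {
      beats {
        port => 5044                          # Elastic Agent / Beats deliver here
      }
    }
    output {
      elasticsearch {
        hosts => ["https://es-node1.example.com:9200"]   # placeholder external ES host
        user => "elastic"                                # placeholder credentials
        password => "${ES_PASSWORD}"
      }
    }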
When using logstash to retrieve airflow logs from a folder you have access to, would I still need to make any changes in the airflow.cfg file?
For instance, I have airflow and ELK deployed on the same EC2 instance. The logstash .conf file has access to the airflow logs path since they are on the same instance. Do I need to turn on remote logging in the airflow config?
In fact, you have two options to push Airflow logs to Elasticsearch:
Using a log collector (Logstash, Fluentd, ...) to collect the Airflow logs and send them to the Elasticsearch server. In this case you don't need to change any Airflow config; you can just read the logs from the files or stdout and send them to ES.
Using the Airflow remote logging feature. In this case Airflow will log directly to your remote logging server (ES in your case) and will keep a local copy of the log to show when the remote server is unavailable.
So the answer to your question is no: if you have Logstash, you don't need the Airflow remote logging config. A minimal file-input sketch for the first option follows.
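A minimal sketch of the first option, assuming a hypothetical Airflow log directory and a local ES instance:

    input {
      file {
        path => "/home/airflow/airflow/logs/**/*.log"   # hypothetical Airflow log directory
        start_position => "beginning"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "airflow-logs-%{+YYYY.MM.dd}"
      }
    }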
I have logstash installed on a server which will process logs and publish them to elasticsearch. But is it possible for logstash to pull logs from remote (linux) servers without installing filebeat on those servers?
Or, if filebeat is installed on the same server as logstash, can it fetch the logs from the remote machines? Please help me if there is any other option as well.
Thanks in advance
Neither Logstash nor Filebeat can pull/fetch log files from remote servers; you need to have some tool installed on the remote servers that will ship the logs elsewhere.
Logstash can consume logs from message queue systems like Kafka, Redis, or RabbitMQ, for example, but your remote servers would still need to send the logs to those systems, so you would need a log shipper on your remote servers anyway (see the sketch below).
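A minimal sketch of the message-queue route, assuming a placeholder Kafka broker and topic:

    input {
      kafka {
        bootstrap_servers => "kafka.example.com:9092"   # placeholder broker
        topics => ["app-logs"]                          # placeholder topic
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
      }
    }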
I started working with rsyslog just yesterday, so I am very new to this, and I am facing a problem. In my rsyslog.conf file I set the file format like this:
$ActionFileDefaultTemplate RSYSLOG_FileFormat
This shows logs in the changed format on my own machine, but when I check my remote server machine, the logs are being forwarded yet they are in a different format. How do I show the remote machine's logs in a certain format? Is it even possible to configure that from my client machine?
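For context, the file-writing template is applied by whichever rsyslog instance actually writes the file, so a sketch like the following would live in the receiving server's rsyslog.conf rather than on the client (assuming the receiver accepts forwarded messages over UDP 514; adjust to your setup):

    # receiving server's /etc/rsyslog.conf (assumed setup)
    module(load="imudp")
    input(type="imudp" port="514")
    $ActionFileDefaultTemplate RSYSLOG_FileFormat   # formats files written on this host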
Can packetbeat be used to monitor tomcat server logs and windows logs, or will it only monitor the database, i.e., network monitoring?
Packetbeat only does network monitoring. But you can use it together with Logstash or Logstash-Forwarder to get visibility also into your logs.
It will only do network monitoring. You can use ELK for tomcat server logs.
#tsg is correct, but now with the Beats 1.x release they are deprecating Logstash Forwarder in favor of another Beat called Filebeat. They also added Topbeat, which allows you to monitor server load and processes in your cluster.
See:
* https://www.elastic.co/blog/beats-1-0-0
You will likely want to install the package repo for your OS, then install each with:
{package manager cmd} install packetbeat
{package manager cmd} install topbeat
{package manager cmd} install filebeat
They are each installed in common directories. For example, on Ubuntu (Linux) the config files are in /etc/<beat name>/<beat name>.yml, where beat name is one of the 3 above. Each file is similar, and you can disable the direct ES export and instead export to Logstash (comment out ES and uncomment Logstash), then add a beats input in your Logstash config. From then on, Logstash listens for any Beat on that port and can redistribute (or queue) using the [@metadata][beat] field to tell where it came from (see the sketch below).
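A minimal sketch of that Logstash side, assuming the conventional Beats port 5044 and placeholder index names:

    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        # route each Beat to its own index using the metadata it sends
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      }
    }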
Libbeat also provides a framework to build your own Beat, so you can send any data you want to Logstash and it can queue and/or index it. ;-)
Packetbeat is used mainly for network analysis. It currently supports the following protocols:
ICMP (v4 and v6)
DNS
HTTP
MySQL
PostgreSQL
Redis
Thrift-RPC
MongoDB
Memcache
However, for visualizing tomcat logs you can configure tomcat to use log4j, configure logstash to take input from log4j, and then use elasticsearch and kibana to visualise the logs (a sketch follows).
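A minimal sketch of that older log4j route, assuming the log4j input plugin (deprecated in later Logstash releases in favor of Filebeat) and a placeholder socket port:

    input {
      log4j {
        mode => "server"
        port => 4560          # port the tomcat log4j SocketAppender sends to in this sketch
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
      }
    }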
To monitor windows logs you can use another Beat, Winlogbeat (a minimal config sketch follows).
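A minimal winlogbeat.yml sketch, with placeholder event log names and ES host:

    winlogbeat.event_logs:
      - name: Application
      - name: System

    output.elasticsearch:
      hosts: ["http://localhost:9200"]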