How to use Packetbeat stand-alone without the Elastic stack

I just want to run Packetbeat to sniff MySQL packets and output them to a file or the console, so that I don't need the Elastic stack.
I tried to run it, but nothing was output:
root@localhost:~# packetbeat -c packetbeat.yml
root@localhost:~#
The following is my config file:
procs:
  enabled: true
  monitored:
    - process: mysqld
      cmdline_grep: mysqld
output:
  ### Console output
  console:
    # Pretty print json event
    pretty: false
How can I do that?

Packetbeat works by capturing the network traffic that MySQL creates, so you also need to configure which device to capture the traffic from and on which TCP ports MySQL is running. For example:
interfaces:
  device: any
protocols:
  mysql:
    ports: [3306]
procs:
  enabled: true
  monitored:
    - process: mysqld
      cmdline_grep: mysqld
output:
  ### Console output
  console:
    # Pretty print json event
    pretty: false
Your console output configuration looks good to me. You can also output to rotating files, if you prefer.
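For reference, a rotating-file output in the same old-style config syntax would look roughly like this; the path, file size and file count below are just placeholder values:
output:
  ### File output, rotates files once they reach rotate_every_kb
  file:
    path: "/tmp/packetbeat"
    filename: packetbeat
    rotate_every_kb: 10000
    number_of_files: 7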

Related

How to collect logs from different servers to a central server (Elasticsearch and Kibana)

I have been assigned the task of creating a central logging server. In my case there are many web app servers spread across different locations. My task is to get logs from these different servers and manage them on a central server running Elasticsearch and Kibana.
Questions
Is it possible to get logs from servers that have different public IPs? If so, how?
How much resource (CPU, memory, storage) is required on the central server?
Things seen
I have only seen example setups where all the logs and applications are on the same machine.
I am looking for a way to send logs over public IP to Elasticsearch.
I would like to differ from Ishara's answer. You can ship logs directly from Filebeat to Elasticsearch without using Logstash, if your logs are of generic types (system logs, nginx logs, apache logs). With this approach you don't incur the extra cost and maintenance of Logstash, since Filebeat provides built-in parsing processors.
If you have a Debian-based OS on your servers, I have prepared a shell script to install and configure Filebeat. You need to change the Elasticsearch server URL and modify the second-to-last line based on the modules that you want to configure.
Regarding your first question: yes, you can run a Filebeat agent on each server and send the data to a centralized Elasticsearch.
For your second question: it depends on the amount of logs the Elasticsearch server is going to process and store. It also depends on where Kibana is hosted.
sudo wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install -y filebeat
sudo systemctl enable filebeat
sudo bash -c "cat >/etc/filebeat/filebeat.yml" <<'FBEOL'
filebeat.inputs:
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.name: "filebeat-system"
setup.template.pattern: "filebeat-system-*"
setup.template.settings:
  index.number_of_shards: 1
setup.ilm.enabled: false
setup.kibana:
output.elasticsearch:
  hosts: ["10.32.66.55:9200", "10.32.67.152:9200", "10.32.66.243:9200"]
  indices:
    - index: "filebeat-system-%{+yyyy.MM.dd}"
      when.equals:
        event.module: system
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
logging.level: warning
FBEOL
sudo filebeat modules enable system
sudo systemctl restart filebeat
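Once Filebeat is up, a quick way to verify that documents are arriving is to list the daily indices on one of the Elasticsearch hosts (using the first host from the script above as an example; adjust the address to your environment):
curl 'http://10.32.66.55:9200/_cat/indices/filebeat-system-*?v'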
Yes, it is possible to get logs from servers that have different public IPs. You need to set up an agent like Filebeat (provided by Elastic) on each server that produces logs.
You need to set up a Filebeat instance on each machine.
It will watch the log files on each machine and forward them to the Logstash instance you specify in the filebeat.yml configuration file, like below:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /path_to_your_log_1/ELK/your_log1.log
    - /path_to_your_log_2/ELK/your_log2.log
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["private_ip_of_logstash_server:5044"]
The Logstash server listens on port 5044 and streams all logs through the Logstash configuration files:
input {
  beats { port => 5044 }
}
filter {
  # your log filtering logic is here
}
output {
  elasticsearch {
    hosts => [ "elasticsearch_server_private_ip:9200" ]
    index => "your_index_name"
  }
}
In Logstash you can filter and split your logs into fields and send them to Elasticsearch.
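For instance, a filter block that splits standard nginx/Apache access-log lines into fields might look roughly like this (a sketch using the stock COMBINEDAPACHELOG grok pattern; adapt it to your actual log format):
filter {
  grok {
    # parse the raw line into fields such as clientip, verb, request, response
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # use the timestamp from the log line as the event timestamp
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}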
Resources depend on how much data you produce, your data retention plan, TPS, and your custom requirements. If you can provide some more details, I would be able to give a rough idea of the resource requirements.

Setting up ELK stack

I'm completely new to ELK and trying to install the stack with some beats for our servers.
Elasticsearch, Kibana and Logstash are all installed (on server A). I followed this guide here https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html.
Filebeat template was installed as well.
I also installed Filebeat on another server (server B), and was trying to test the connection:
$ /usr/share/filebeat/bin/filebeat test output -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat
logstash: my-own-domain:5044...
connection...
parse host... OK
dns lookup... OK
addresses: 163.172.167.147
dial up... OK
TLS...
security: server's certificate chain verification is enabled
handshake... OK
TLS version: TLSv1.2
dial up... OK
talk to server... OK
Things seem to be OK, yet Filebeat on server B doesn't appear to be sending data to Logstash.
Accessing Kibana keeps redirecting me back to the Create Index Pattern page, with the message
Couldn't find any Elasticsearch data
Any pointers would be really appreciated.
Can you check your filebeat.yml file and see if the configuration for logs is activated:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
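If that input is enabled and the paths match existing files, it can also help to watch Filebeat's own log on server B and to check whether any filebeat-* indices have been created on server A (a rough check, assuming a systemd-based install and that Elasticsearch on server A is reachable on port 9200 from where you run this):
sudo journalctl -u filebeat -f
curl 'http://my-own-domain:9200/_cat/indices/filebeat-*?v'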

Logs not being flushed to Elasticsearch container through Fluentd

I have a local setup running 2 containers:
One for Elasticsearch (set up for development as detailed here - https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html). This I run as directed in the article using: docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" docker.elastic.co/elasticsearch/elasticsearch:5.4.1
Another as a Fluentd aggregator (using this base image - https://hub.docker.com/r/fluent/fluentd/). My fluent.conf for testing purposes is as follows:
<source>
  @type forward
  port 24224
</source>
<match **>
  @type elasticsearch
  host 172.17.0.2 # Verified internal IP address of the ES container
  port 9200
  user elastic
  password changeme
  index_name fluentd
  buffer_type memory
  flush_interval 60
  retry_limit 17
  retry_wait 1.0
  include_tag_key true
  tag_key docker.test
  reconnect_on_error true
</match>
This I start with the command - docker run -p 24224:24224 -v /data:/fluentd/log vg/fluentd:latest
When I run my processes (that generate logs), and run these 2 containers, I see the following towards the end of stdout for the Fluentd container -
2017-06-15 12:16:33 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"172.17.0.2", :port=>9200, :scheme=>"http", :user=>"elastic", :password=>"obfuscated"}
However, beyond this, I see no logs. When I open http://localhost:9200 I only see the Elasticsearch welcome message.
I know the logs are reaching the Fluentd container, because when I change fluent.conf to redirect to a file, I see all the logs as expected. What am I doing wrong in my setup of Elasticsearch? How can I get to see all the indices laid out correctly in my browser / through Kibana?
It seems that you are on the right track. Just check the indices that were created in Elasticsearch as follows:
curl 'localhost:9200/_cat/indices?v'
Docs:
https://www.elastic.co/guide/en/elasticsearch/reference/1.4/_list_all_indexes.html
There you can see each index name. So pick one and search within it:
curl 'localhost:9200/INDEXNAME/_search'
Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html
However, I recommend using Kibana in order to have a better experience. Just start it; by default it looks for Elasticsearch on localhost. In the interface's configuration put the index name that you now know, and start to play with it.
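One thing to keep in mind: the 5.4 development Docker image ships with X-Pack security enabled (hence the elastic/changeme credentials in fluent.conf), so the curl calls above may need authentication, and with flush_interval 60 documents can take up to a minute to show up. Roughly:
curl -u elastic:changeme 'localhost:9200/_cat/indices?v'
curl -u elastic:changeme 'localhost:9200/fluentd/_search?pretty'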

problems with registry file in filebeat

I am using Filebeat to send data to Elasticsearch:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/kibana_access.log
  document_type: nginx
- input_type: log
  paths:
    - /var/log/redis/redis-server.log
  document_type: redis
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: '%{[type]}-log'
  versions.2x.enabled: false
The configuration is correct, and it is writing to Elasticsearch perfectly. But the issue is that it also keeps sending the old lines to Elasticsearch, whereas it should not do so.
No new logs are being written, yet in Kibana I can see the same log count as before every time Filebeat sends the data again.
I tried checking the registry file, /var/lib/filebeat/registry, and it had information about files which I had used earlier but am not using now.
{"source":"/var/log/filebeat/filebeat","offset":2514,"FileStateOS":{"inode":4591858,"device":2058},"timestamp":"2017-04-21T17:33:11.913352399+05:30","ttl":-2},{"source":"/var/log/postgresql/postgresql-2017-04-21_120121.log","offset":4485506,"FileStateOS":{"inode":3932558,"device":2058},"timestamp":"2017-04-21T18:11:56.65579033+05:30","ttl":-2}
This is the registry file.
I have set a cron job which restarts Filebeat every minute and sends data to Elasticsearch. I am using Ubuntu 16.04 and installed Filebeat as a deb package.
This is the registry file path in filebeat.full.yml: ${path.data}/registry.
Please explain this behaviour, and also how to solve it.
I just deleted this folder:
rm -rf /var/lib/filebeat/
and it was solved.
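If you go that route, it is safer to stop Filebeat first and remove only the registry file (for the Filebeat version in the question the registry is a single file under /var/lib/filebeat), keeping in mind that Filebeat will then re-read all configured files from the beginning. A sketch, assuming the deb package with systemd:
sudo systemctl stop filebeat
sudo rm /var/lib/filebeat/registry
sudo systemctl start filebeat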

syslog messages not displayed - dropwizard

I created a Dropwizard application that uses syslog for logging. I am using Mac OS version 10.10.3. My appenders in the Configuration.yml file are as follows:
logging:
  level: INFO
  appenders:
    - type: file
      currentLogFilename: /var/log/myapp.log
      threshold: INFO
      archive: true
      archivedLogFilenamePattern: /var/log/myapp-%d.log
      archivedFileCount: 5
    - type: syslog
      host: localhost
      port: 514
      threshold: INFO
The file myapp.log is populated correctly. But when I do
sudo tail -f /var/log/system.log
I am not able to see the messages. I followed the answer from "How to start Syslogd server on Mac to accept remote logging messages?" but I was still not able to see the messages. However, if I do
sudo tcpdump -i lo0 host 127.0.0.1 and udp port 514
I am able to see the packets. My syslog.conf looks like this:
install.* #127.0.0.1:32376
What am I missing here?
