What is a good practice for using Graylog with Tarantool?

Currently our application uses Tarantool's provided log module, but we were looking for a way to collect the logs into Graylog. What is the best way to do so?

According to the Graylog documentation, it can be used as a syslog server: https://www.graylog.org/post/how-to-use-graylog-as-a-syslog-server
And Tarantool supports writing to syslog:
https://www.tarantool.io/en/doc/latest/reference/configuration/#confval-log
All you need is to configure the two to match:
box.cfg({log = 'syslog:server=127.0.0.1:1514'})
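A slightly fuller sketch of the Tarantool side; the host, port, identity and facility values here are placeholders to adapt to your deployment:

box.cfg({
    -- ship log lines to the Graylog syslog input instead of a local file
    log = 'syslog:server=127.0.0.1:1514,identity=tarantool,facility=local0',
    -- 5 = INFO; pick the level you actually want shipped
    log_level = 5,
})

On the Graylog side all that is needed is a matching Syslog input on that port; the article linked above walks through creating one.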

Related

Cloudfoundry logs to Elastic SAAS

The Cloud Foundry documentation does not mention the Elastic SaaS service:
https://docs.cloudfoundry.org/devguide/services/log-management-thirdparty-svc.html
So I was wondering whether anyone has done this, and how.
I know one way is to run a Logstash instance in CF, feed the syslog to it and then ship the logs to Elastic. But I'm wondering whether there is a direct way that skips deploying Logstash on CF?
PS: We also log using the ECS format.
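For what it's worth, the linked Cloud Foundry page routes app logs to third-party services through a user-provided syslog drain; a sketch of that flow with the cf CLI (the drain name, host and port are placeholders, and whether your Elastic SaaS endpoint accepts a raw syslog drain is exactly the open question here):

cf create-user-provided-service elastic-drain -l syslog-tls://logs.example.com:6514
cf bind-service my-app elastic-drain
cf restage my-app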

Is there an application client for Elasticsearch 6.4.3 (similar to DBeaver)?

I tried to view my node data from an application client (like DBeaver), but I didn't find any information about that. Has someone found a way to connect DBeaver to this version, or to see the data with a similar application?
I believe what you are looking for is a GUI for Elasticsearch.
The industry typically calls the Elasticsearch stack the ELK stack, and what you are looking for is the K part of it, which is Kibana.
I'm not sure whether you are asking about SQL, but if you are thinking of making use of the SQL feature you can check the Elasticsearch SQL plugin.
Another widely used client application for Elasticsearch is Grafana. There are others available too (I think Splunk, Graylog, Loggly), but I believe Kibana and Grafana are the best bet.
Hope this helps!
Actually no, I am using Elasticsearch as a database in different deployments and I don't want to maintain a Kibana instance (I prefer to see all the data in a tool like DBeaver).

How can I get statistics about what clients search for when querying Elasticsearch?

I'm using Elasticsearch to drive a "search website" feature. I'd like to collect statistics about what people search for (and which search queries are popular).
Elasticsearch is currently running behind Nginx, so I could extract this information from the Nginx access logs - but maybe Elasticsearch can be made to track this information itself?
I found the Index stats API, but that seems to operate at a higher level: it can be used to determine the average time needed to answer a query and similar figures, but it does not keep track of individual queries.
I am using a similar configuration (ES behind Nginx), and up to now I have always just checked Nginx's log files directly. However, thinking about your question, it makes a lot of sense to route the Nginx log files through the Elastic stack into Elasticsearch using Logstash; this seems to be the cleanest way.
Apparently older, now-deprecated versions had some security-auditing options through a plugin called Shield (later Security), but as I said, configuring Logstash to ingest the Nginx log files directly seems the most sustainable way for your purposes.
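Before (or instead of) wiring up that pipeline, a quick way to see which terms are popular is to tally them straight from the access log. A minimal sketch, assuming the default combined log format and that searches arrive as GET /search?q=<term> (both the path and the parameter name are assumptions to adapt to your site):

local counts = {}
for line in io.lines('/var/log/nginx/access.log') do
    -- the request ("GET /search?q=foo HTTP/1.1") is the first quoted field
    local path = line:match('"%u+ ([^ ]+) HTTP/')
    -- pull the q= parameter out of the query string, if present
    local term = path and path:match('[?&]q=([^&]+)')
    if term then
        -- note: terms are still URL-encoded at this point
        counts[term] = (counts[term] or 0) + 1
    end
end
for term, n in pairs(counts) do
    print(n, term)
end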
Further reading and detailed instructions
discuss.elastic.co: How to get elaticsearch access logs
https://sysadmins.co.za/how-to-ingest-nginx-access-logs-to-elasticsearch-using-filebeat-and-logstash/
Elasticsearch Access Log
how to enable ElasticSearch http access log

Remote data store processing with ElasticSearch 7.1 and log4j2.11.1

I am using Elasticsearch 7.1. It comes with log4j2.11.1.jar. The problem comes when I try to set up a remote data store with log4j2 running as a TcpSocketServer. I would then use the log4j logging API in different Java applications to transmit logs over to the remote data store for analysis. However, from the log4j2 Java documentation, I found out that the TcpSocketServer has been taken out.
How did you manage to configure a remote data store with the latest log4j2 library? Is there any working architecture layout that still fits my use case?
Elasticsearch itself is not a great log shipper; also, what happens if the network is down? We're generally moving towards having the Beats take over that part, so Filebeat with the Elasticsearch module here: https://www.elastic.co/guide/en/beats/filebeat/7.1/filebeat-module-elasticsearch.html
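A minimal filebeat.yml sketch of that idea, assuming the applications write to local files under /var/log/myapp (a hypothetical path) and that the remote data store is an Elasticsearch cluster; the module linked above additionally covers shipping Elasticsearch's own logs:

filebeat.inputs:
  - type: log
    # hypothetical location of the application log files
    paths:
      - /var/log/myapp/*.log
output.elasticsearch:
  # the central Elasticsearch the logs should end up in
  hosts: ["http://remote-data-store:9200"]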

Splunk replacement with Flume or Kafka

I need your help with a suggestion. In the current scenario we have one application in the cloud, and we can view its logs via Splunk. I am thinking of implementing this with our big data tools like Flume/Kafka, so that I can take the real-time log data from the cloud (currently collected by Splunk) and make it available in our HDFS. A few concerns here:
Is this feasible, and does it make sense?
For log search (the same capability as Splunk), which tool can we use?
If you just want to move logs into HDFS, you can use Flume with an HDFS sink.
There are also a few other options available, like:
Logstash
You can also use frameworks like Elasticsearch and Kibana to get more search functionality out of the logs.
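A minimal Flume agent sketch for the HDFS-sink route, assuming the logs land in a local file that can be tailed; the agent name, file paths and namenode host are placeholders:

agent.sources = applog
agent.channels = mem
agent.sinks = hdfsout

# tail the application log file (exec source is the simplest option)
agent.sources.applog.type = exec
agent.sources.applog.command = tail -F /var/log/app/app.log
agent.sources.applog.channels = mem

agent.channels.mem.type = memory

# write events into HDFS as plain text, bucketed by day
agent.sinks.hdfsout.type = hdfs
agent.sinks.hdfsout.channel = mem
agent.sinks.hdfsout.hdfs.path = hdfs://namenode:8020/logs/%Y-%m-%d
agent.sinks.hdfsout.hdfs.fileType = DataStream
agent.sinks.hdfsout.hdfs.useLocalTimeStamp = true

Run it with flume-ng agent --name agent --conf-file <this file>.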
