Kibana, how can I ignore query strings?

I have set up Kibana on AWS infrastructure. I'm using AWS CloudWatch Logs and the AWS Elasticsearch Service, which bundles Kibana. As you can imagine, I'm shipping all my logs there.
I'm trying to obtain a list of the most-requested URLs, omitting query strings... but I don't know if this is possible.
Can you help me? I've searched Google and the Elastic documentation, but I didn't find anything.
Here is an example:
Suppose that I have the following URLs:
abc.com/helloWorld.html?param=1
abc.com/helloWorld.html?param=2
abc.com/helloWorld.html?param=3
abc.com/bye.html?anotherParam=1
I want to see the following URLs in order to compute the sum of requests per file. Is it possible?
abc.com/helloWorld.html
abc.com/helloWorld.html
abc.com/helloWorld.html
abc.com/bye.html
Thanks,
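One possible approach (a sketch only; whether your AWS Elasticsearch version exposes ingest pipelines needs checking): strip the query string at ingest time with a gsub processor, then count requests per file with a terms aggregation. The host, pipeline, index, and field names below are illustrative assumptions, not from the question.

```python
# Hedged sketch: remove '?' and everything after it from a 'url' field at
# ingest time, then count requests per cleaned URL.
import requests

ES_URL = "http://localhost:9200"  # assumed cluster address

# 1. Pipeline with a gsub processor that drops the query string.
requests.put(
    f"{ES_URL}/_ingest/pipeline/strip-query-string",
    json={
        "description": "Drop query strings from URLs",
        "processors": [
            {"gsub": {"field": "url", "pattern": "\\?.*$", "replacement": ""}}
        ],
    },
)

# 2. Index documents through the pipeline ('weblogs' is a hypothetical index).
requests.post(
    f"{ES_URL}/weblogs/_doc?pipeline=strip-query-string",
    json={"url": "abc.com/helloWorld.html?param=1"},
)

# 3. Count requests per file with a terms aggregation ('url.keyword'
# assumes the default dynamic mapping created a keyword subfield).
resp = requests.post(
    f"{ES_URL}/weblogs/_search",
    json={"size": 0, "aggs": {"per_file": {"terms": {"field": "url.keyword"}}}},
)
print(resp.json()["aggregations"]["per_file"]["buckets"])
```

Alternatively, the same stripping could be done upstream, in whatever process forwards the CloudWatch Logs, before the documents ever reach Elasticsearch.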

Related

WSO2: No matching indices found

I'm using Elasticsearch to analyze my logs in WSO2 API Manager. I'm using basic authentication mode. After setting up Elasticsearch and Kibana and configuring their settings, these errors appear when I try to view the Kibana dashboards. How can I solve these problems?
It looks like there is no index in your Elasticsearch cluster whose name starts with apim_event_faulty or matches apim_event*. You can check all the indices in your cluster by hitting Elasticsearch's _cat/indices?v API, as sketched below.
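For example, a minimal version of that check (host and basic-auth credentials are placeholders):

```python
# List every index in the cluster and look for names starting with 'apim_event'.
import requests

resp = requests.get(
    "http://localhost:9200/_cat/indices?v",   # assumed cluster address
    auth=("elastic", "changeme"),             # placeholder credentials
)
print(resp.text)  # scan the output for apim_event* index names
```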
Check whether there is /repository/logs/apim_metrics.log inside your WSO2 API Manager home directory.
If you don't have the apim_metrics.log file, most likely there is an issue with the configuration you have done in API Manager. Refer to this documentation: https://apim.docs.wso2.com/en/latest/api-analytics/on-prem/elk-installation-guide/
If you have the apim_metrics.log file, check its content. If it does not have any logs, most likely API Manager hasn't gone through any event that would trigger apim_event_faulty or apim_event_response logs. Try invoking an API and observe the logs.

How to know where the Elasticsearch hits are coming from

I have an Elasticsearch cluster.
I'm currently designing a Python service for clients to run read and write queries against my Elasticsearch. The Python service will not be maintained by me; only the internal Python service will call our Elasticsearch for fetching and writing.
Is there any way to configure Elasticsearch so that we know which requests come from the Python service? Or is there a way to pass some extra field while querying, so that we can find those requests in the logs based on that field?
There is no built-in feature in Elasticsearch that does exactly what you ask (identify the source of a request and attach extra fields to a query).
But there is a solution using audit logs:
https://www.elastic.co/guide/en/elasticsearch/reference/current/enable-audit-logging.html
What you can do is place a proxy in front of it and do the logging there; we have Apache in front of our Elastic clusters to enable SSL offloading and to add logging and ACL possibilities.
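One concrete way to combine these suggestions (a sketch under assumptions, not a verified recipe): have the Python service send an X-Opaque-Id header with every request. On clusters where audit logging is enabled, Elasticsearch records that header in audit entries, and it also shows up in the tasks API. The host and index name below are placeholders.

```python
# Tag every request from the Python service with an X-Opaque-Id header so
# it can be identified in audit logs and the tasks API.
import requests

ES_URL = "http://localhost:9200"               # assumed cluster address
HEADERS = {"X-Opaque-Id": "python-service"}    # any string naming the caller

resp = requests.post(
    f"{ES_URL}/logs/_search",                  # 'logs' is a hypothetical index
    headers=HEADERS,
    json={"query": {"match_all": {}}},
)
print(resp.json()["hits"]["total"])            # shape varies by ES version
```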

How can I get statistics about what clients search for when querying Elasticsearch?

I'm using Elasticsearch to drive a "search website" feature. I'd like to collect statistics about what people search for (and which search queries are popular).
Elasticsearch is currently running behind Nginx, so I could extract this information from the Nginx access logs - but maybe Elasticsearch can be made to track this information itself?
I found the Index stats API, but that seems to be more abstract. It can be used to determine the average time needed to answer a query and such things, but it does not keep track of individual queries.
I am using a similar configuration (Elasticsearch behind nginx), and up to now I have always just checked nginx's log files directly. However, thinking about your question, it makes a lot of sense to route the nginx log files through the Elastic Stack into Elasticsearch using Logstash; this seems to be the cleanest way.
Apparently, deprecated versions had some security-auditing options via a plugin termed Shield or Security but, as I said, configuring Logstash to ingest the nginx log files directly seems the most durable way for your purposes.
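If you would rather have Elasticsearch itself record every query without a proxy, one workaround worth testing (a sketch; the host and index name are placeholders) is to drop the search slow-log threshold to zero, so every search is written to the slow log:

```python
# Make Elasticsearch log every search by setting the slow-log threshold to
# zero. Queries then appear in the index search slow-log file.
import requests

ES_URL = "http://localhost:9200"    # assumed cluster address

resp = requests.put(
    f"{ES_URL}/website/_settings",  # 'website' is a hypothetical index
    json={"index.search.slowlog.threshold.query.info": "0s"},
)
print(resp.json())                  # expect {'acknowledged': True}
```

Be aware that logging every query this way adds I/O, so it fits temporary analysis better than permanent statistics collection.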
Further reading and detailed instructions
discuss.elastic.co: How to get elasticsearch access logs
https://sysadmins.co.za/how-to-ingest-nginx-access-logs-to-elasticsearch-using-filebeat-and-logstash/
Elasticsearch Access Log
how to enable ElasticSearch http access log

How to Analyze logs from multiple sources in ELK

I have started working on ELK recently and have a question about handling multiple types of logs.
I have two sets of logs on my server that I want to analyse: one from my Android application and the other from my website. I have successfully transferred the logs from this server to the ELK server via Filebeat.
I have created two filters, one for each type of log, and have successfully imported these logs into Logstash and then Kibana.
This link helped me do the above:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
The above link says to use the logs in the filebeat index in Kibana and start analysing (which I successfully did for one type of log). But the problem I am facing is that since these two kinds of logs are very different, they need to be analysed differently. How do I do this in Kibana? Should I create multiple Filebeat indices there and import them, should it be just one single index, or is there some other way? I am not very clear on this (I could not find much documentation), so I would appreciate some help and guidance here.
Elasticsearch organizes by index and type. Elastic used to compare these to SQL concepts, but now offers a new explanation.
Since you say that the logs are very different, Elastic is saying that you should use different indices.
In Kibana, a visualization is tied to an index. If you have one panel from each index, you can show them both on the same dashboard.
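As a minimal illustration of the one-index-per-log-type idea (the host and index names are invented for this sketch), each log type gets its own index; in Kibana you then create one index pattern per index, e.g. android-logs-* and website-logs-*, and build visualizations against each:

```python
# Each log type goes to its own index so it can have its own mapping and
# its own Kibana index pattern. Host and index names are placeholders;
# the _doc endpoint assumes a reasonably recent Elasticsearch.
import requests

ES_URL = "http://localhost:9200"  # assumed cluster address

requests.post(f"{ES_URL}/android-logs-2016.06.01/_doc",
              json={"level": "ERROR", "tag": "MainActivity"})
requests.post(f"{ES_URL}/website-logs-2016.06.01/_doc",
              json={"status": 200, "path": "/index.html"})
```

With Filebeat and Logstash, the same split is usually achieved by setting the output index conditionally per log type rather than indexing documents by hand.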

Read Zabbix events into Elasticsearch

I am trying to integrate Zabbix with Elasticsearch through Logstash, and then build a dashboard in Kibana. There are many links suggesting it is possible to monitor Elasticsearch with Zabbix, but not the other way around.
http://logstash.net/docs/1.4.2/outputs/zabbix
I did find one link which suggests this is possible. I followed it, but without success:
http://philippe.lewin.me/2014/10/06/send-zabbix-events-to-logstash/
I need some help understanding the possibilities and probably some workarounds.
OP, are you still having the same problem? I am also looking for a way to send my Zabbix SNMP event data to Elasticsearch.
True, though the plugins out there help in the other direction.
I will try Philippe's way later; if all else fails, I will probably try to migrate the specific table(s) directly with another tool, for example Logstash.
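In case it helps, here is a rough sketch of that pull-based workaround, assuming the pyzabbix client library; the Zabbix URL, credentials, and index name are placeholders:

```python
# Pull recent events from the Zabbix API and index them into Elasticsearch.
import requests
from pyzabbix import ZabbixAPI

zapi = ZabbixAPI("http://zabbix.example.com")  # placeholder Zabbix frontend URL
zapi.login("api_user", "api_password")         # placeholder credentials

ES_URL = "http://localhost:9200"               # assumed cluster address

# Fetch recent events and push each one into a 'zabbix-events' index.
for event in zapi.event.get(output="extend", limit=100, sortfield="clock"):
    requests.post(f"{ES_URL}/zabbix-events/_doc", json=event)
```

Running something like this on a schedule would approximate the push integration the plugins do not provide.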
