Elasticsearch 5.1.1 + Grafana 4.1.0: time seems offset by the UTC difference

I'm running Grafana 4.1.0_beta1 and Elasticsearch 5.1.1.
All my servers are set up for Mountain Time, but I seem to be running into an issue where Grafana attempts to "account" for UTC and offsets search parameters by 7 hours.
As an example:
date result from the server: Wed Jan 4 20:10:54 MST 2017
But when I try to add and test a data source in Grafana, I get this error:
{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"metricbeat-2017.01.05","index_uuid":"_na_","index":"metricbeat-2017.01.05"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"metricbeat-2017.01.05","index_uuid":"_na_","index":"metricbeat-2017.01.05"}
While metricbeat-2017.01.05 does not exist, metricbeat-2017.01.04 does, as it should.
When on a dashboard, I don't see any data until I set the time range to anything more than 7 hours in the past.
I didn't see anything regarding timezones in the Elasticsearch or Grafana config files.
Am I missing something?

Classic case of over-analyzing.
It looked like everything was correct because it was, except for my client timezone.
Correct the timezone on the local client
Ensure the time is updated correctly
Restart the web browser

If nothing else works, you should be able to time-shift the data back into range by 7h by going into the added panel widget -> Time range -> Add time shift.
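As a sanity check (assuming you can reach the Elasticsearch REST API, for example via Kibana Dev Tools or curl), you can list which daily metricbeat indices actually exist; Beats normally name these indices after the UTC date, so the index Grafana asks for should appear once UTC rolls over to the next day:
GET /_cat/indices/metricbeat-*?v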

Related

Oracle ORDS: GET request returns old data, then after a period of time the changed data

I am having a problem with Oracle REST Data Services (ORDS for short) and I can't find a solution.
The problem is as follows:
We are using ORDS via a Tomcat web server and I have two endpoints defined: one to update a dataset and one to get all datasets from the table.
If I update a value via my endpoint, the change is written to the table, but if I then try to get the table with this change, ORDS only responds with the old, unchanged data. After a certain period of time while constantly retrying the GET, it responds with the expected values (this happens after at most 1 minute, sometimes earlier).
Because of this behaviour I suspected some kind of caching, but I can't find any such configuration in the Oracle database or on Tomcat.
Another point for this theory: I logged what happens in my GET procedure and found that only the one request returning the correct values gets logged, as if the others didn't even happen.
The requests returning the old value come back in the 4-8 ms range, while the request with the correct data takes 100-200 ms.
Thanks for your help :)
I tried logging what happens, but found that only the request with the fresh values was logged.
I tried restarting the Tomcat web server to make sure the cache was cleared, but this didn't fix the problem.
I searched for a configuration in ORDS or Oracle where a cache would be defined, but none was set.
I tried setting the value via a SQL UPDATE instead of the endpoint, but even then the change only shows up after a delay.
Do you have a full overview of the communication path? Maybe there is a proxy in between?
If Tomcat has no caching configuration and you restarted the web server during your tests and still see the same issue, then there is maybe more to it...
Kind regards
M-Achilles

How to apply timezone to Log4j2 RollingFile?

There seems to have been a bug around this in the past; hasn't it been fixed yet?
Even though a separate timezone is applied to my console log and to the file name, the actual rollover time always follows the server time.
Is there any way to make this rollover time follow my timezone?

Filebeat not reading logs from nested directories

I am relatively new to the ELK stack and I am trying to send logs from Linux servers to Elasticsearch. The path I am choosing is:
I have installed Filebeat on the Linux server where my application logs are generated -> parsing them via Logstash -> then sending them to Elasticsearch.
The questions I have are:
The Linux server generates and stores the application logs in directories created dynamically based on the day/month/hour it is running. For example, my directory structure for the logs on 06/10/2022 at 11:45 am looks like:
/var/log/2022/06/10/11/abc.txt
I want the Filebeat input path in filebeat.yml to match these paths dynamically so that I do not have to keep changing the paths and restarting the Filebeat service, so I tried something like /var/log/2022/*/*/*/*.txt.
But when I specify the file path with wildcards like /var/log/2022/*/*/*/*.txt, no logs get shipped; the Filebeat service runs fine but the harvester always shows 0 files. However, when I change it to point to a specific file without wildcards, like /var/log/2022/06/10/11/abc.txt, the logs get shipped and I can see them in Elasticsearch. So I want to know what I should do to make this dynamic path work. The Filebeat version I am using is 7.17.0.
Please let me know if you guys have any ideas.
(Note: there are 12 folders inside 2022 for the months, like 01, 02, 03, etc.
Inside those month folders there are subfolders for the dates, depending on how many days are in the month, like 01, 02, ... 29, 30, etc.
Inside those there are subfolders for the hours of the day, like 00, 01, 02, ... 23.)
Another question I have: whenever the logs do get shipped I see latency. Ideally I want logs to appear in Elasticsearch as soon as they appear on the Linux server where the application is running and generating them, but I always see a delay of at least 5-15 minutes. How can I make them show up in near real time?
Have you tried using a double asterisk? https://github.com/elastic/beats/pull/3980 implemented a change (Filebeat 6.0, so it should work in 7.x) to expand ** up to sixteen levels.
# Paths that should be crawled and fetched. Glob based paths.
paths:
- /path/to/logs/**/myfiles.log
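Applied to the directory layout from the question, a minimal filebeat.yml input could look like the sketch below; the type: log input and the recursive ** glob are standard Filebeat options, while the exact paths are just assumptions based on the structure described above.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    # ** is expanded by Filebeat into several directory levels,
    # so this should cover the year/month/day/hour subfolders
    - /var/log/2022/**/*.txt
With a pattern like this you would only need to restart or reload Filebeat after changing the glob itself, not for every new day or hour directory, since those are already covered by the pattern.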

How to see all the queries run from the Kibana Dev Tools

If I want to see all the queries run from the Kibana Dev Tools, how do I see them? Is there any query which shows all the queries run, say for the last 30 days?
I guess there is no API for showing the history of executed queries. However, you can find the related log files under the /var/log/elasticsearch path. Moreover, if you want to always keep a record of all executed queries and events from your Dev Tools, you can create an index for it and use Logstash to insert your operation logs (which, as mentioned above, are stored in /var/log/elasticsearch) into that index.
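That path usually only contains what Elasticsearch itself decides to log. As an illustration of one way to actually get queries written there, you can lower the search slow log thresholds on an index so that every search against it is logged; my-index below is just a placeholder for your own index name:
PUT /my-index/_settings
{
  "index.search.slowlog.threshold.query.warn": "0s",
  "index.search.slowlog.threshold.fetch.warn": "0s"
}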

ELK - Removing old logs viewable in Kibana

I have managed to process log files using the ELK stack and I can now see my logs in Kibana.
I have scoured the internet and can't seem to find a way to remove all the old logs from months ago that are viewable in Kibana (well, an explanation that I understand). I just want to clear my Kibana and start afresh by loading new logs and having them be the only ones displayed. Does anyone know how I would do that?
Note: even if I remove all the index patterns (in the Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs in my work. For that reason, I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to download a newer version due to company restrictions.
Any help would be much appreciated!
Kibana screenshot displaying logs
Update:
I've typed "GET /_cat/indices/*?v&s=index" into Dev Tools > Console and got a list of indices.
I initially used the DELETE function and it didn't appear to be working. However, after restarting everything, it worked the second time and I was able to remove all the existing indices, which subsequently removed all the logs being displayed in Kibana.
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, and to get rid of it you need to delete your index.
Version 5.4 is very old and has already passed its EOL date; it does not have any UI to delete an index, so you will need to use the Elasticsearch REST API to delete it.
You can do it from Kibana: just click on Dev Tools. First you will need to list your indices using the cat indices endpoint.
GET /_cat/indices?v&s=index&pretty
After that you will need to use the delete index API endpoint to delete your index.
DELETE /name-of-your-index
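If you have many daily indices, a wildcard can remove them in one call; the logstash-2017.03.* name below is only a placeholder for whatever your indices are called, and on 5.x wildcard deletes are allowed unless action.destructive_requires_name has been set to true.
DELETE /logstash-2017.03.*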
On newer versions you can do this using the Index Management UI; you should try to talk to your company about getting a newer version.
