I have managed to process log files using the ELK stack and I can now see my logs in Kibana.
I have scoured the internet and can't seem to find a way to remove all the old logs, viewable in Kibana, from months ago (well, not an explanation that I understand). I just want to clear Kibana and start afresh, loading new logs so that they are the only ones displayed. Does anyone know how I would do that?
Note: Even if I remove all the Index Patterns (in Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs at work. For that reason I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to install a newer version due to company restrictions.
Any help would be much appreciated!
[Screenshot: Kibana displaying logs]
Update:
I typed GET /_cat/indices/*?v&s=index into the Dev Tools > Console and got a list of indices.
I initially used the DELETE function and it didn't appear to be working. However, after restarting everything it worked the second time, and I was able to remove all the existing indices, which subsequently removed all the logs displayed in Kibana.
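For anyone else doing this, the sequence I ran in the Dev Tools console was roughly the following (the index name in the second command is just an example; substitute the names returned by the first command):

GET /_cat/indices/*?v&s=index
DELETE /logstash-2017.06.01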
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, so to get rid of it you need to delete your index.
Version 5.4 is very old and has already passed its EOL date. It does not have any UI to delete an index, so you will need to use the Elasticsearch REST API to delete it.
You can do it from Kibana: just click on Dev Tools. First you will need to list your indices using the cat indices endpoint.
GET "/_cat/indices?v&s=index&pretty"
After that, you will need to use the delete index API to delete your index.
DELETE /name-of-your-index
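If you have many old time-based indices, you can also delete them in a single request with a wildcard; this is allowed by default unless destructive wildcard actions have been disabled on the cluster. The index pattern below is just an example:

DELETE /logstash-2017.*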
On newer versions you can do it using the Index Management UI; you should try to talk with your company about getting a newer version.
Related
What makes Kibana not show docker container logs in the APM "Transactions" page under the "Logs" tab?
I verified that the logs are successfully being generated with the "trace.id" associated for proper linking.
I have the exact same environment and configs (7.16.2) up via docker-compose, and there it works perfectly.
I could not figure out why this feature works locally but does not show up in the Elastic Cloud deployment.
UPDATE with Solution:
I just solved the problem.
It's related to the Filebeat version.
From 7.16.0 onwards, the transaction/log linking stops working.
Reverting Filebeat to version 7.15.2 made it start working again.
If you are not using Filebeat, here is an example: we rolled our own logging implementation that sends logs from a queue in batches using the Bulk API.
We have our own "ElasticLog" class and then use Attributes to match the logs-* schema for the Log Stream.
In particular, we had to make sure that trace.id was the same as the actual trace's trace.id property. Then the logs started to show up there (it can take a few minutes).
Some more info on how to get the IDs:
We use the OpenTelemetry exporter for traces and an ILoggerProvider for logs. They fire off batches independently of each other.
We populate the trace IDs at the time of instantiation of the class, as a default value. This way you are still in the context of the Activity, and it also helps set the timestamp to exactly when the log was created.
This LogEntry then gets passed into the ElasticLogger processor and mapped, as described above, to the ElasticLog entry with the attributes needed for Elasticsearch.
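To make that concrete, here is a rough sketch of what one of those bulk requests ends up looking like; the index name and field values below are placeholders, but trace.id must carry the exact ID of the corresponding APM trace, and service.name should match the instrumented service:

POST /_bulk
{ "create": { "_index": "logs-myapp-default" } }
{ "@timestamp": "2022-01-10T12:00:00.000Z", "message": "request handled", "service.name": "my-service", "trace.id": "0af7651916cd43dd8448eb211c80319c" }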
I am currently using Elastic Cloud to store my AWS CloudWatch logs. Everything seems to work fine, as I'm already able to display charts and query Elasticsearch correctly. Yet I've got a strange behavior I can't explain.
I am logging some events from my app, let's say request_start and request_end. They are both visible in Kibana. I'm also logging another event, let's say request_middle, which I can see in CloudWatch.
When checking in the Discover tab of Kibana, however, I don't see this event. I tried the event:"request_middle" query, in vain. And if I display a list of all events under this same tab, I get a full list, except request_middle.
I also tried querying Elasticsearch directly, just in case, but got no results either.
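For reference, this is the kind of request I tried directly against Elasticsearch (the index pattern and field name here are placeholders for my real ones):

GET /cloudwatch-*/_search
{
  "query": {
    "match": { "event": "request_middle" }
  }
}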
Have some of you already encountered such a case? If so, how did you fix it?
I'd like to use Kibana + Elasticsearch as a tool for application failure investigation.
This is how I see it: the application logs all failures, some responsible engineer (me) sees that new, unknown failures have appeared on Kibana's dashboard, investigates why the failures happened, and then marks the related documents with a specific attribute.
Unfortunately, the only way to update documents from the Kibana UI that I have found so far is raw queries from "Dev tools".
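For example, this is the kind of query I mean, with made-up index, field, and attribute names for illustration:

POST /app-failures-*/_update_by_query
{
  "query": {
    "match": { "failure.kind": "unknown_timeout" }
  },
  "script": {
    "lang": "painless",
    "source": "ctx._source.investigated = true"
  }
}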
Maybe there are some plugins that allow document editing via a fancy UI?
I'm new to Elasticsearch and am still trying to set it up. I have installed Elasticsearch 5.5.1 using default values, and I have also installed Kibana 5.5.1 using the default values. I've also installed the ingest-attachment plugin along with the latest X-Pack plugin. I have Elasticsearch running as a service and I have Kibana open in my browser.

On the Kibana dashboard I have an error stating that it is unable to fetch mappings. I guess this is because I haven't set up any indices or pipelines yet. This is where I need some steer; all the documentation I've found so far online isn't particularly clear.

I have a directory with a mixture of document types, such as pdf and doc files. My ultimate goal is to be able to search these documents with values that a user will enter via an app. I'm guessing I need to use the Dev Tools/Console window in Kibana with the PUT command to create a pipeline next, but I'm unsure of how I should do this so that it points to my directory of documents. Can anybody provide me an example of this for this version, please?
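For reference, from the docs I gather the setup would look roughly like this (the pipeline, index, type and field names are my guesses, and as far as I can tell the attachment processor can't point at a directory itself; each file's contents have to be base64-encoded and sent in the indexing request by your app):

PUT _ingest/pipeline/attachments
{
  "description": "Extract text from base64-encoded documents",
  "processors": [
    {
      "attachment": {
        "field": "data"
      }
    }
  ]
}

PUT /documents/document/1?pipeline=attachments
{
  "data": "<base64-encoded contents of one pdf/doc file>"
}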
If I understand you correctly, let's first establish some basic understanding about Elasticsearch:
Elasticsearch, in its simplest definition, is a "search engine". So you store some data, and Elasticsearch helps you search it using a search criteria, retrieving the relevant data back.
You need a "Container" to save your data to, and elastic has this thing like any database engine to store your data, but the terms are somehow different. for example a "Database" in sql-like systems is called "Index", and what you know as "table" is called "Type" in elastic.
From my understanding, you will need to create your index (with or without mappings) to have a starting point, and I recommend you start without mappings just to get things working. Later on, though, it's highly recommended to work with mappings if applicable, because Elasticsearch is smart, but it cannot know more about your data than you do.
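For instance, a bare-bones index creation could look like this ("my-documents" is just a stand-in for whatever name fits your data):

PUT /my-documents
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}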
Because Kibana has failed to find a proper index to start with, it has complained and asked you to provide either a pattern for index names or a specific index name, so it can infer the mappings and give you the nice features of querying, displaying charts, etc. of your data. So once you create your index, provide it on the starting page of Kibana and you will be ready to go.
Let me know if you need something more specific to your needs :)
So I first attempted to shut down Elasticsearch on the original server, bring up the replacement, and copy everything to the same location, but that didn't work, so I just started fresh. I now need to be able to search those old indexes to show that we have a year's worth of log retention. I don't think snapshot or reindex will work, because the old indexes are not currently attached to a live server... but I may be wrong.
Any help is appreciated
The old server's indexes begin with graylog2_2xx and graylog2_1xx.
The new server's begin with graylog_1x.