How to set Elasticsearch to push data to Sentry

I have Elasticsearch 5.6, currently configured with log4j2, and I save my data in Elasticsearch. Now I want to push data to Sentry 8.22: whenever Elasticsearch receives a document, it should push that document to Sentry automatically.
Can someone tell me how to do this?
PS:
I found some links like this one: Using sentry logging with elasticsearch.
But the solution there is too old.

IMO that's not what Sentry is for: you want to find errors in your application, but it isn't a general log collector. You're also not trying to get your operating system, webserver, database,... hooked into Sentry, right?
If anything in Elasticsearch is going wrong, Sentry should collect the error in your application and you can dig deeper from there. There is no need to connect Elasticsearch directly.
PS: Adding logging libraries to Elasticsearch itself is definitely untested and you might run into various issues (at the very least, every upgrade will be more complicated). I'd be pretty careful with this.
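To make the suggested pattern concrete, here is a minimal sketch assuming a recent sentry Java SDK (io.sentry:sentry); the DSN, the index name, and the indexDocument helper are all hypothetical placeholders. The point is to catch failures around your own Elasticsearch calls inside your application and report those to Sentry, rather than wiring Elasticsearch itself to Sentry:

import io.sentry.Sentry;

public class EsWriter {
    public static void main(String[] args) {
        // Initialize Sentry once at application startup with your project's DSN (placeholder here).
        Sentry.init(options -> options.setDsn("https://examplePublicKey@sentry.example.com/1"));

        try {
            // Hypothetical helper that writes a document to Elasticsearch
            // using whatever client your application already has.
            indexDocument("logs-2018.01", "{\"message\":\"hello\"}");
        } catch (Exception e) {
            // This is where Sentry fits: errors in *your* application,
            // not a feed of every document Elasticsearch receives.
            Sentry.captureException(e);
        }
    }

    private static void indexDocument(String index, String json) throws Exception {
        throw new UnsupportedOperationException("replace with your ES client call");
    }
}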

Related

Datadog data validation between on-prem and cloud database

Very new to Datadog and need some help. I have crafted two SQL queries (one for the on-prem database and one for the cloud database), and I would like to run those queries through Datadog, display the query results, and validate that the daily results fall within an expected variance between the two systems.
I have already set up Datadog on the cloud environment and believe I should use DogStatsD to create a custom metric, but I am pretty lost as to how I can incorporate my SQL queries in the code to create the metric for eventual display on a dashboard. Any help will be greatly appreciated!
You probably want to be using the MySQL integration and configure the 'custom queries' option: https://docs.datadoghq.com/integrations/faq/how-to-collect-metrics-from-custom-mysql-queries
You can follow those instructions after you configure the base integration: https://docs.datadoghq.com/integrations/mysql/#pagetitle (this will give you a lot of useful metrics in addition to the custom queries you want to run).
As you mentioned, DogStatsD is a library you can import into whatever script or application in order to submit metrics. But it really isn't common practice to modify the underlying code of your database, so instead it makes more sense to externally run a query on the database, take those results, and send them to Datadog. You could totally write a Python script or something to do this (see the sketch below for the idea). However, the Datadog agent already has this capability built in, so it's probably easier to just use that.
I am also just assuming SQL refers to MySQL; there are other integrations for SQL Server, PostgreSQL, and pretty much every implementation of SQL. The same pattern applies: you configure the integration, then add an extra line to the config file where you have the check run your queries.
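To make the external-script route concrete, here is a rough sketch, in Java rather than Python (same idea), assuming the java-dogstatsd-client library, a local Datadog agent listening on the default DogStatsD port 8125, and a made-up query, metric name, and connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import com.timgroup.statsd.NonBlockingStatsDClient;
import com.timgroup.statsd.StatsDClient;

public class DailyCountCheck {
    public static void main(String[] args) throws Exception {
        // DogStatsD client talking to the local Datadog agent.
        StatsDClient statsd = new NonBlockingStatsDClient("validation", "localhost", 8125);
        // Hypothetical validation query; replace with your own SQL.
        String sql = "SELECT COUNT(*) FROM orders WHERE created_at >= CURDATE()";
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://db.example.com:3306/mydb", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            if (rs.next()) {
                // Submit the result as a gauge; it appears as a custom metric
                // you can graph on a dashboard and compare across environments.
                statsd.gauge("daily.order_count", rs.getLong(1));
            }
        } finally {
            statsd.stop();
        }
    }
}

Run the same script against the on-prem and the cloud database (with different metric names or tags) and the variance check becomes a simple dashboard or monitor over the two series. Again, the agent's custom-queries option above gets you this without maintaining any code.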

ELK - Removing old logs viewable in Kibana

I have managed to process log files using the ELK stack and I can now see my logs in Kibana.
I have scoured the internet and can't seem to find a way to remove all the old logs, viewable in Kibana, from months ago (well, not an explanation that I understand). I just want to clear my Kibana and start afresh, loading new logs and having them be the only ones displayed. Does anyone know how I would do that?
Note: even if I remove all the index patterns (in the Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs in my work. For that reason, I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to download a newer version due to company restrictions.
Any help would be much appreciated!
Kibana screenshot displaying logs
Update:
I typed "GET /_cat/indices/*?v&s=index" into Dev Tools > Console and got a list of indices.
I initially used the DELETE function and it didn't appear to be working. However, after restarting everything, it worked the second time and I was able to remove all the existing indices, which subsequently removed all logs being displayed in Kibana.
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, and to get rid of it you need to delete your index.
Version 5.4 is very old and already past its EOL date; it does not have any UI to delete an index, so you will need to use the Elasticsearch REST API to delete it.
You can do it from Kibana: click on Dev Tools, then first list your indices using the cat indices endpoint.
GET /_cat/indices?v&s=index&pretty
After that you will need to use the delete index API endpoint to delete your index.
DELETE /name-of-your-index
On newer versions you can do this using the Index Management UI; you should try to talk with your company about getting a newer version.
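If your indices were created by Logstash with its default settings, they are usually named per day (e.g. logstash-2017.06.01; check the output of the cat indices call above, as this naming is an assumption about your setup), so you can also delete a whole range at once with a wildcard:
DELETE /logstash-2017.*
Wildcard deletes work on 5.x as long as action.destructive_requires_name has not been enabled on the cluster, so double-check the pattern before running it.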

Spring Data Couchbase - Search without having admin rights on the cluster

I'm currently working on a POC with Couchbase, using Spring Data to put and get documents on/off a bucket on a cluster.
As I'm working in a big company, I'm lucky they gave me a bucket, but I still don't have admin rights on the cluster, so I only have access to the bucket.
But as I dig into the Spring Data documentation, I'm not able to find a way to retrieve documents without creating views on the server (I'm getting errors like "Unknown query param"). With the Couchbase Java SDK I am able to, through N1QL queries, but using the Spring Data layer is mandatory.
The answers I found always point me in the server-side function direction,
e.g.: https://stackoverflow.com/a/30928169/3744307
What I would like to find is a way to add a repository method like
List<Receipt> findReceiptByAccount(String account)
without having to specifically declare the function server-side.
Is this possible, or do I have to send a request to the administrators to create functions for me every time I add a findByX method?
Thanks for your time,
What version of Couchbase is it?
I think that prior to 4.5, N1QL access (which you seem to have) is enough to build your index yourself!
With Spring Data Couchbase 2.x that would use an N1QL index in the background, and it would work with a single primary index (although having one index per repository entity class would be best for performance). Maybe you can ask your admin to create that index once?
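As a rough sketch of how that looks with Spring Data Couchbase 2.x (the Receipt entity and its fields are assumptions): once a primary index exists (CREATE PRIMARY INDEX ON `your-bucket`, which your N1QL access may let you run yourself), a derived finder is translated to N1QL for you, and no server-side view has to be declared:

import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;
import org.springframework.data.repository.CrudRepository;

// Assumed entity; Spring Data Couchbase 2.x derives an N1QL query
// from the repository method name, so no view is needed on the server.
@Document
public class Receipt {
    @Id
    private String id;
    private String account;
    // getters and setters omitted for brevity
}

interface ReceiptRepository extends CrudRepository<Receipt, String> {
    List<Receipt> findByAccount(String account);
}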

Document Management/Content Management with Search

I have a requirement for a document management system to handle PDF, Word, XLS and PPT files, with semantic search.
I started looking into Elasticsearch for this and stumbled on Apache Jackrabbit, and subsequently on OpenKM and Hippo. Even though core features like versioning exist in Jackrabbit, I need some pointers on how to go about this.
I need help navigating the following concerns:
Should I just use Elasticsearch with the attachment plugin, or use Jackrabbit with a MySQL backend and use Elasticsearch to index the documents?
Or should I use OpenKM?
Any pointers would be greatly appreciated. This would finally require app integration.
Update: Logically, using Elasticsearch for search makes sense, but I figure that I cannot use it as the primary data source. What are the best options for (primary) storage: Apache Jackrabbit with MySQL? As all the features are prebuilt in OpenKM, would that be a better option?
What is it you want to achieve? Are you looking to manage making the documents available, or is it about managing the content in the documents? ES, or any search engine, is generally not a primary data source.
I can't give you any advice wrt OpenKM (neither for nor against). Whether Hippo is a match depends on your case, which I would need to know more about.

Native application to query ELK?

I'm using Logstash, Elasticsearch and Kibana to process, store and visualize my logs.
My setup works fine, but now I'm looking for a new tool: before ELK I used to read my logs in Notepad++ or glogg (I'm on Windows), and now I only use Kibana's Discover tab.
Do you think I can find a native application that looks like a read-only Notepad++, queries Elasticsearch, and displays my logs like before?
The three features I actually need are:
querying logs from multiple sources,
for a specified date range,
and displaying them quickly in a concise and fast viewer.
I don't think it's very complicated to implement, so that's why I'm wondering if it already exists :)
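For scale, here is what the Elasticsearch side of such a viewer would look like: a hedged sketch assuming the Elasticsearch 5.x low-level Java REST client, a logstash-* index pattern and an @timestamp field (the last two are assumptions about your setup). All three features boil down to a single search request that a desktop viewer would simply render:

import java.util.Collections;

import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class LogFetcher {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build()) {
            // Date-range query over all matching indices, newest first.
            String query = "{"
                + "\"query\": {\"range\": {\"@timestamp\": {"
                + "\"gte\": \"2018-01-01\", \"lte\": \"2018-01-31\"}}},"
                + "\"sort\": [{\"@timestamp\": \"desc\"}],"
                + "\"size\": 100}";
            Response response = client.performRequest(
                "GET", "/logstash-*/_search",
                Collections.emptyMap(),
                new NStringEntity(query, ContentType.APPLICATION_JSON));
            // A viewer would parse the hits and render them like a log file;
            // here we just print the raw JSON response.
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}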
