Is it a good idea to use Serilog to write logs directly to Elasticsearch?

I'm evaluating different options for a distributed log server.
In the Java world, as far as I can see, the most popular solution is filebeat + kafka + logstash + elasticsearch + kibana.
In the .NET world, however, there is Serilog, which can send structured logs directly to Elasticsearch, so the only required components are elasticsearch + kibana.
I have searched a lot, but there is not much information about this solution in production, and I have no idea whether it can handle large volumes of logs.
Can anyone give me some suggestions? Thanks.

I had exactly the same issue. Our system used the "classic" ELK-stack architecture, i.e. FileBeat -> LogStash -> Elasticsearch (-> Kibana),
but as we found out, in big projects with a lot of logs Serilog is the much better solution, for the following reasons:
CI/CD - when you have different types of logs with different structures that you want mapped to different types, Serilog's power comes in handy. In LogStash you need to create a separate filter to break down each message according to its pattern, which tightly couples the log structure to the LogStash configuration - very bug-prone.
Maintenance - thanks to the easy CI/CD and the single point of change, it is easier to maintain a large volume of logs.
Scalability - FileBeat has trouble handling big chunks of data because of its registry file, which tends to "explode" (from personal experience; see this Stack Overflow question and this elastic-forum question).
Fewer points of failure - with Serilog the logs are sent directly to Elasticsearch, whereas with FileBeat they have to pass through LogStash: one more place to fail.
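For reference, the direct Serilog setup is only a few lines. A minimal sketch using the Serilog.Sinks.Elasticsearch package (the URL, index format and buffer path are placeholders to adapt):

using System;
using Serilog;
using Serilog.Sinks.Elasticsearch;

class Program
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .Enrich.FromLogContext()
            .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
            {
                // Register an index template so the mappings fit the structured events.
                AutoRegisterTemplate = true,
                IndexFormat = "myapp-logs-{0:yyyy.MM.dd}",
                // Buffer events to disk first, so they survive an Elasticsearch outage.
                BufferBaseFilename = "./logs/es-buffer"
            })
            .CreateLogger();

        // Structured properties (OrderId, Elapsed) become searchable fields in Elasticsearch.
        Log.Information("Order {OrderId} processed in {Elapsed} ms", 42, 17);
        Log.CloseAndFlush();
    }
}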
Hope this helps with your evaluation.

Update (Dec 2021):
The Elasticsearch logger provider has been moved to the Elastic ECS DotNet project.
Find the latest version here: https://github.com/elastic/ecs-dotnet/blob/master/src/Elasticsearch.Extensions.Logging/ReadMe.md
The nuget package is here: https://www.nuget.org/packages/Elasticsearch.Extensions.Logging/1.6.0-alpha1
It is still labelled an alpha release (although it has more functionality than the Essential version), so currently (Dec 2021) you need to specify the version when adding the package:
dotnet add package Elasticsearch.Extensions.Logging --version 1.6.0-alpha1
Disclaimer: I am the author
ORIGINAL ANSWER
There is now also a stand-alone logger provider that will write .NET Core logging directly to Elasticsearch, following the Elasticsearch Common Schema (ECS) field specifications: https://github.com/sgryphon/essential-logging/tree/master/src/Essential.LoggerProvider.Elasticsearch
To use this from your .NET Core application, add a reference to the Essential.LoggerProvider.Elasticsearch package:
dotnet add package Essential.LoggerProvider.Elasticsearch
Then, add the provider to the loggingBuilder during host construction, using the provided extension method.
using Essential.LoggerProvider;
using Microsoft.Extensions.Hosting;

Host.CreateDefaultBuilder(args)
    .ConfigureLogging((hostContext, loggingBuilder) =>
    {
        loggingBuilder.AddElasticsearch();
    })
    .Build()
    .Run();
The default configuration will write to a local Elasticsearch running at http://localhost:9200/.
Once you have sent some log events, open Kibana (e.g. http://localhost:5601/) and define an index pattern for "dotnet-*" with the time filter "@timestamp".
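Application code then logs through the standard Microsoft ILogger abstraction as usual; for example (OrderService and OrderId are made-up names for illustration):

using Microsoft.Extensions.Logging;

public class OrderService
{
    private readonly ILogger<OrderService> _logger;

    public OrderService(ILogger<OrderService> logger)
    {
        _logger = logger;
    }

    public void Process(int orderId)
    {
        // Structured properties become searchable fields in Elasticsearch.
        _logger.LogInformation("Processing order {OrderId}", orderId);
    }
}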
This reduces the dependencies even more: rather than pulling in the entire Serilog infrastructure (App -> Microsoft ILogger -> Serilog provider/adapter -> Elasticsearch sink -> Elasticsearch), you now have only App -> Microsoft ILogger -> Elasticsearch provider -> Elasticsearch.
The ElasticsearchLoggerProvider also writes events following the Elasticsearch Common Schema (ECS) conventions, so it is compatible with events logged from other sources, e.g. Beats.

Related

How to be notified when an Elasticsearch index has changed [duplicate]

I am using Elasticsearch, and I am building a client (using the Java Client API) to export logs indexed via Logstash.
I would like to be notified (by adding a listener somewhere) when a new document is indexed (i.e. a new log line has been added), instead of querying the last X documents.
Is this possible?
This is what you're looking for: https://github.com/ForgeRock/es-change-feed-plugin
Using this plugin, you can subscribe to a websocket channel to receive indexation/deletion events as they happen. It has some limitations, though.
Back in the day, it was possible to install river plugins to stream documents to ES. The river feature has been removed, but the plugin above is like a "reverse river", where outside clients are notified by ES as documents get indexed.
Very useful, and seemingly up-to-date with ES 6.x.
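Any websocket client can consume such a channel. As a rough sketch (in C#, for consistency with the rest of this page), assuming the plugin exposes an endpoint like the hypothetical URL below (check the plugin's README for the actual host, port and path):

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class ChangeFeedListener
{
    static async Task Main()
    {
        using var ws = new ClientWebSocket();
        // Hypothetical endpoint; the real one comes from the plugin's configuration.
        await ws.ConnectAsync(new Uri("ws://localhost:9400/ws/_changes"), CancellationToken.None);

        var buffer = new byte[8192];
        while (ws.State == WebSocketState.Open)
        {
            var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            // Each message is a JSON document describing an index/delete event.
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
        }
    }
}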
UPDATE (April 14th, 2019):
According to what was said at Elastic{ON} Zurich 2019, at some point in the 7.x series, there will be a Changes API that will provide index changes notifications (document creation, update, deletion and more).
UPDATE (July 22nd, 2022):
ES 8.x is out and the Changes API is still nowhere in sight... Good to know, though, that it's still open at least.

Spring Boot with spring-data-elastic connecting to Elasticsearch 7.4.0 on an AWS server

I have 2 questions:
Can I run spring-data-elastic v4.0.1.RELEASE (with org.elasticsearch:elasticsearch 7.6.2) against an ES client running 7.4.0? If not, what combination can I use for a 7.4.0 client? We are migrating to AWS and I need to use the 7.4.0 version of the client.
I have a parent/child relationship (configured as a join datatype field). Could somebody please provide documentation or explain how to use either ElasticsearchRestTemplate or ElasticsearchOperations to correctly insert/update both parent and child records?
Thank you.
Best regards,
Robert
ad 1): From the Elasticsearch documentation I can't at the moment find anything in the breaking-changes sections that would prevent using a 7.4.0 client library, but that does not mean there aren't any. Recently there was a breaking change in the Java classes (from 7.7 to 7.8) and I got this information:
"our compatibility focus is on the HTTP APIs and we don't offer any guarantees on the code itself. There's more background here: https://github.com/elastic/elasticsearch/issues/22707#issuecomment-274163711"
So I'd say: write a small test app with the corresponding libraries, start a local ES 7.4, and test it.
ad 2): Adding the join-type mapping and implementing the corresponding inserts etc. is currently being worked on and will hopefully be available in version 4.1.

How to set up Elasticsearch to push data to Sentry

I have Elasticsearch 5.6 and am using log4j2 to configure it. I save data in Elasticsearch, and now I want to push that data to Sentry 8.22: if Elasticsearch receives a piece of data, it should push it to Sentry automatically.
Can someone tell me how to do this?
PS: I found some links like this: Using sentry logging with elasticsearch
But the solution there is too old.
IMO that's not what Sentry is for: You want to find errors in your application, but it isn't a general log collector. You're also not trying to get your operating system, webserver, database,... hooked into Sentry, right?
If anything in Elasticsearch is going wrong, Sentry should collect the error in your application and you can dig deeper from there. No need to connect Elasticsearch directly.
PS: Adding logging libraries is definitely untested and you might run into various issues (at the very least every upgrade will be more complicated) — I'd be pretty careful with this.

Native application to query ELK?

I'm using Logstash, Elasticsearch and Kibana to process, store and visualize my logs.
My setup works fine, but now I'm looking for a new tool: before ELK I used to read my logs in Notepad++ or glogg (I'm on Windows), and now I use only Kibana's Discover tab.
Do you think I can find a native application, like a read-only Notepad++, that queries Elasticsearch and displays my logs like before?
The three features I actually need are:
querying multiple log sources,
for a specified date range,
and displaying the results quickly in a concise and fast viewer.
I don't think this is very complicated to implement, so I'm wondering if it already exists :)
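For what it's worth, the core of such a viewer is a single Elasticsearch search request combining a source filter with a date range. A rough C# sketch (the index pattern and field names are placeholders to adapt):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class LogViewer
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Placeholder index pattern and field names; adjust to your Logstash setup.
        var query = @"{
            ""query"": {
                ""bool"": {
                    ""filter"": [
                        { ""terms"": { ""source"": [""app1"", ""app2""] } },
                        { ""range"": { ""@timestamp"": { ""gte"": ""now-1h"", ""lte"": ""now"" } } }
                    ]
                }
            },
            ""sort"": [{ ""@timestamp"": ""desc"" }],
            ""size"": 500
        }";
        var response = await http.PostAsync(
            "http://localhost:9200/logstash-*/_search",
            new StringContent(query, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}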

Spring XD stream failure handling

I have a stream as follows:
source(jms-ibmmq) -> process -> process -> sink(jdbc-oracle)
Data ingestion works fine, but as part of my stream there is a possibility that my sink (jdbc-oracle) will be down, or that some network problem prevents persistence to the Oracle DB.
What I am asking is how to handle this failure, and what options Spring XD provides out of the box. Is there a pattern that is commonly used to handle such failures in the processing/sink modules of a stream?
Please see the comments on this JIRA issue; they explain the documentation changes we are adding to describe how to configure dead-lettering in the message bus.
In addition, we have provided mechanisms such that, if all four modules are deployed to the same container (and to all containers that match the deployment criteria), we will directly connect the modules, so that an error in the sink is thrown back to the source (causing the JMS message to be rolled back in your case).
This is achieved by setting the module count property to 0 (meaning deploy on all containers that match the criteria, if any, or on all containers if there are no criteria).
This feature is available on master (it was added after M7).
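For illustration, such a deployment would look roughly like this in the Spring XD shell (the stream and processor names are made up; check your release's documentation for the exact property syntax):

xd:> stream create --name ingest --definition "jms-ibmmq | processor1 | processor2 | jdbc-oracle"
xd:> stream deploy --name ingest --properties "module.*.count=0"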
