How to configure Serilog Elasticsearch error handling in appsettings in .NET Core 3.1?

I am trying to configure Serilog in a .NET Core 3.1 (C#) project, and I want to do it entirely in appsettings.json. For file sinks I did all the configuration without any problem, but for Elasticsearch I can't figure out how to translate the lines below into appsettings.json so that they work:
.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
{
    FailureCallback = e => Console.WriteLine("Unable to submit event " + e.MessageTemplate),
    EmitEventFailure = EmitEventFailureHandling.WriteToSelfLog |
                       EmitEventFailureHandling.WriteToFailureSink |
                       EmitEventFailureHandling.RaiseCallback,
    FailureSink = new FileSink("./failures.txt", new JsonFormatter(), null)
})
The official documentation shows just a basic example for EmitEventFailure, as follows:
"emitEventFailure": "WriteToSelfLog"
It doesn't show how a combination of multiple EmitEventFailure flags should be written. The same goes for FailureSink:
"failureSink": "My.Namespace.MyFailureSink, My.Assembly.Name"
I don't know what exactly this means, and I can't figure out how it applies to the code sample listed above.
Finally, for FailureCallback, the documentation doesn't mention any option to configure it through appsettings.json. But this option is not a big deal for me; at worst I can omit it.
Thanks for any hints!

After long hours of research on this, I came up with the following answer.
For the "emitEventFailure" property, we only need to put commas between the values, like this:
"emitEventFailure": "WriteToFailureSink, WriteToSelfLog, RaiseCallback",
For the failureSink property, you can refer to this link for complex parameter value binding. We need to set the "type" of the object before specifying each of its parameters, including nested parameter objects.
My example sets Azure Blob Storage as the failure sink:
"failureSink": {
"type": "Serilog.Sinks.AzureBlobStorage.AzureBatchingBlobStorageSink, Serilog.Sinks.AzureBlobStorage",
"blobServiceClient": {
"type": "Azure.Storage.Blobs.BlobServiceClient, Azure.Storage.Blobs",
"connectionString": "DefaultEndpointsProtocol=https;AccountName=xyz;AccountKey=xyzkey;EndpointSuffix=xyz.net"
},
"textFormatter": "Serilog.Formatting.Elasticsearch.ElasticsearchJsonFormatter, Serilog.Formatting.Elasticsearch",
"storageContainerName": "test-api",
"storageFileName": "test/{yyyy}-{MM}-{dd}.log",
"period": "0.00:00:30",
"batchSizeLimit": "1000"
}
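Note that this "failureSink" object sits alongside "emitEventFailure" inside the "Args" of the Elasticsearch sink entry (see the sketch above); that placement follows the standard Serilog.Settings.Configuration binding rules.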

Related

Project New Array Field with Spring Data

As part of an aggregate operation, I need to unwind an array. I am wondering how I can put the object back into an array as part of the $project stage. Here is the MongoDB aggregate operation that works:
db.users.aggregate([
  { "$match": { ... } },
  { "$unwind": "$profiles" },
  { "$project": { "profiles": ["$profiles"] } },
  ...
])
And more specifically, how can I implement this using Spring Data MongoDB's ProjectionOperation:
{$project: {'profiles': ['$profiles']}}
This feature has been available since MongoDB 3.2.
Edit 1: I looked through some related posts, including an answer by Christoph Strobl, and based on it I came up with something that works, as follows:
AggregationOperation project = aggregationOperationContext -> {
    Document projection = new Document();
    projection.put("profiles", Arrays.<Object>asList("$profiles"));
    projection.put("_id", "$id");
    return new Document("$project", projection);
};
I am wondering if there is a better way of doing it though.
Any help/suggestion is very much appreciated. Thanks.
Unfortunately, there is not.
You can replace the raw $project with project() plus an AggregationExpression to shorten it a bit:
// ...
unwind("profiles"),
project().and(ctx -> new Document("profiles", asList("$profiles"))).as("profiles")
I created DATAMONGO-2312 to provide support for new array field projections in one of the next versions.
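For context, here is a sketch of how that might slot into a full pipeline; the match criteria, the mongoTemplate field, and the collection name are assumptions for illustration, not part of the original answer.
import static java.util.Arrays.asList;
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;

import org.bson.Document;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.query.Criteria;

Aggregation agg = newAggregation(
        match(Criteria.where("active").is(true)), // hypothetical filter
        unwind("profiles"),
        // wrap the unwound value back into a single-element array
        project().and(ctx -> new Document("profiles", asList("$profiles"))).as("profiles"));

// mongoTemplate is assumed to be an injected MongoTemplate
AggregationResults<Document> results =
        mongoTemplate.aggregate(agg, "users", Document.class);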

Azure Function Parameter from Settings

Referring to the following example:
public static void Run([CosmosDBTrigger(
    databaseName: "ToDoItems",
    collectionName: "Items",
    ConnectionStringSetting = "CosmosDBConnection",
    LeaseCollectionName = "leases",
    CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
    ILogger log)
I understand that ConnectionStringSetting isn't the connection string to use; rather, it's the name of the app setting to look up, which contains the connection string.
Will this also work for collectionName and databaseName as well? I understand I can experiment and figure it out, but I am confused as to how this is even resolved at build time/deployment time.
I see several properties being assigned literal values while others take theirs from configuration. Is it the underlying constructor for CosmosDBTrigger that takes care of using the appropriate value?
Binding to a function is a way of declaratively connecting another resource to the function; bindings may be connected as input bindings, output bindings, or both. Data from bindings is provided to the function as parameters.
Here is a small sample of an Azure Function using a Cosmos DB trigger that is invoked when there are inserts or updates in the specified database and collection.
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;

namespace CosmosDBSamplesV2
{
    public static class CosmosTrigger
    {
        [FunctionName("CosmosTrigger")]
        public static void Run([CosmosDBTrigger(
            databaseName: "ToDoItems",
            collectionName: "Items",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
            ILogger log)
        {
            if (documents != null && documents.Count > 0)
            {
                log.LogInformation($"Documents modified: {documents.Count}");
                log.LogInformation($"First document Id: {documents[0].Id}");
            }
        }
    }
}
And here is the binding information for the same Azure Function, which is used to pass the parameter values to the function. This is the Cosmos DB trigger binding in a function.json file:
{
  "type": "cosmosDBTrigger",
  "name": "documents",
  "direction": "in",
  "leaseCollectionName": "leases",
  "connectionStringSetting": "<connection-app-setting>",
  "databaseName": "Tasks",
  "collectionName": "Items",
  "createLeaseCollectionIfNotExists": true
}
To answer your question about how this is resolved at build time/deployment time: to run the function locally, we pass the same binding information in the host.json and local.settings.json files.
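For illustration, a minimal local.settings.json sketch that supplies the "CosmosDBConnection" setting named above (all values are placeholders):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "CosmosDBConnection": "AccountEndpoint=https://<account>.documents.azure.com:443/;AccountKey=<key>;"
  }
}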
That's how the runtime binds the information internally, by matching on the parameter name.
Hope it helps.

Enabling nested any queries on OData V4 endpoints with WebAPI

I'm trying to build a nested any query like this ...
~/Api/Calendar?$filter=Roles/any(r:r/User/any(u:u/Name eq 'Joe Bloggs'))
If I remove the inner any clause, leaving me with ...
~/Api/Calendar?$filter=Roles/any(r:r/User/any())
... then the endpoint returns ...
{
  "error": {
    "code": "",
    "message": "The query specified in the URI is not valid. The Any/All nesting limit of '1' has been exceeded. 'MaxAnyAllExpressionDepth' can be configured on ODataQuerySettings or EnableQueryAttribute."
  }
}
... which I think sheds some light on the problem I actually have here.
So far I have tried to raise this limit with the following during my context initialisation, but it doesn't appear to be working ...
config.AddODataQueryFilter(new EnableQueryAttribute { MaxAnyAllExpressionDepth = 3 });
Does anyone have any ideas how I can do this globally? (I don't want to have to go to every GET action on every controller and set the depth.)
UPDATE:
So it turns out that where I inherit from my own baseEntityController, the actions carried the EnableQuery attribute, which superseded my global config change, hence my changes were not respected.
Simply removing the attribute from the actions themselves has all controllers that inherit from my base working with this new nested any/all limit, but I now seem to have the side effect that expands don't work any more ...
var query = new EnableQueryAttribute
{
    MaxExpansionDepth = 8,
    PageSize = 100,
    MaxAnyAllExpressionDepth = 3,
    AllowedFunctions = System.Web.OData.Query.AllowedFunctions.All,
    AllowedLogicalOperators = System.Web.OData.Query.AllowedLogicalOperators.All,
    AllowedQueryOptions = System.Web.OData.Query.AllowedQueryOptions.All,
    AllowedArithmeticOperators = System.Web.OData.Query.AllowedArithmeticOperators.All,
    MaxTop = 1000
};
config.AddODataQueryFilter(query);
... as you can see, I tried throwing lots of extras in there, but it's not having any of it!
The simplest way I found to do this and have everything work is to apply the attribute on the base controller actions, as sketched below; it then applies everything correctly to the actions on that controller and any of its derived types.
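A rough sketch of that approach (the controller and entity names here are placeholders, not from the original post):
using System.Linq;
using System.Web.Http;
using System.Web.OData;

public abstract class BaseEntityController<T> : ODataController where T : class
{
    [EnableQuery(MaxAnyAllExpressionDepth = 3, MaxExpansionDepth = 8, PageSize = 100)]
    public virtual IHttpActionResult Get()
    {
        // Derived controllers supply the IQueryable; the attribute settings
        // apply to every inheriting controller's Get action.
        return Ok(GetEntities());
    }

    protected abstract IQueryable<T> GetEntities();
}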
It wasn't my ideal solution, but I couldn't find a way to get a global fix for this to work as part of initialising the OData context.
Hopefully this will help someone out there.

EventFlow - Creating a Custom Filter to attach Source Server Information

I'm assuming this is a pretty common question: how do we easily add server info to an EventFlow event?
My scenario is that I'm deploying an application that will have its own environment-specific EventFlowConfig.json, but each server in the farm will get that same json file. So... how can I tell which server in the farm sent the event to Elasticsearch?
One option is to use .NET to get the server name and send it as a column, which would require adding the server name to every event. That seems a little excessive, but it would do the job. I was hoping there was an easier way besides having to actually code this into every event.
Thanks for your time,
Greg
Edit 4 - Karol has been great helping me get this working example up and running, THANK YOU KAROL!!! Here is how to create a custom filter as an extension:
1. Create a new class for the custom filter factory.
2. Create a second new class that implements the IFilter interface. To pass the health reporter from the factory, we used a constructor.
3. Use the Evaluate method as the place to add the data (eventData.AddPayloadProperty).
4. Refer to the custom filter in the extensions section of EventFlowConfig.json:
   a. The category is filterFactory.
   b. The type is the name of your class.
   c. The qualified type name is in the form "type-name, assembly-name". For example (assuming you name your filter factory 'MyCustomFilterFactory'): "My.Application.Logging.MyCustomFilterFactory, My.Application.Assembly.WhereCustomFilterAndItsFactoryLive"
5. Add a reference to Microsoft.Extensions.Configuration where the C# code lives.
6. Then you can reference your custom filter anywhere you need to; here we use it as a global filter.
Working example:
using Microsoft.Diagnostics.EventFlow;
using Microsoft.Extensions.Configuration;

class CustomGlobalFilter : IFilter
{
    private readonly IHealthReporter HealthReporter;
    private readonly string MachineName;

    public CustomGlobalFilter(string serverName, IHealthReporter healthReporter)
    {
        MachineName = serverName;
        HealthReporter = healthReporter;
    }

    FilterResult IFilter.Evaluate(EventData eventData)
    {
        // Attach the server name to every event passing through the pipeline.
        eventData.AddPayloadProperty("ServerName", MachineName, HealthReporter, "CustomGlobalFilter");
        return FilterResult.KeepEvent;
    }
}

class CustomGlobalFilterFactory : IPipelineItemFactory<CustomGlobalFilter>
{
    public CustomGlobalFilter CreateItem(IConfiguration configuration, IHealthReporter healthReporter)
    {
        return new CustomGlobalFilter(System.Environment.MachineName, healthReporter);
    }
}
Then in the EventFlow Config:
"filters": [
{
"type": "drop",
"include": "Level == Verbose"
},
{
"type": "CustomGlobalFilter"
}
],
...
"extensions": [
{
"category": "filterFactory",
"type": "CustomGlobalFilter",
"qualifiedTypeName": "My.Company.Presentation.App.CustomGlobalFilter, My.Company.Presentation.App"
}
It is not something that is built into EventFlow today, but there are at least a couple of options:
Use EventFlow extensibility to add a custom filter that adds these properties to every event it “sees”.
In many logging libraries there is a concept of "initializers" or "enrichment" that can be used to automatically add contextual properties; Serilog (which is natively supported by EventFlow) has enrichers, for example.
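To illustrate the Serilog route, a minimal sketch using the Serilog.Enrichers.Environment package (the package choice and setup here are illustrative, not part of the original answer):
using Serilog;

// Requires the Serilog, Serilog.Sinks.Console, and
// Serilog.Enrichers.Environment packages.
var log = new LoggerConfiguration()
    .Enrich.WithMachineName() // adds a MachineName property to every event
    .WriteTo.Console()
    .CreateLogger();

log.Information("Event enriched with the machine name");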

Is paging broken with spring data solr when using group fields?

I currently use the Spring Data Solr library and implement its repository interfaces. I'm trying to add functionality to one of my custom queries that uses a SolrTemplate with a SimpleQuery. It currently uses paging, which appears to be working well. However, I want to use a group field so that sibling products are only counted once, at their first occurrence. I have set the group field on the query and it works well; however, it still seems to use the un-grouped number of documents when constructing the page attributes.
Is there a known workaround for this?
The query syntax provides the following parameter for this purpose, but it would seem that Spring Data Solr isn't taking advantage of it: &group.ngroups=true should return the number of groups in the result and thus give correct page numbering.
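For reference, the raw Solr request being aimed for looks something like this (the core and field names are placeholders):
/solr/products/select?q=*:*&group=true&group.field=siblingGroup&group.ngroups=true&start=0&rows=10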
Any other info would be appreciated.
There are actually two ways to add this parameter.
Queries are converted to the Solr format using QueryParsers, so it would be possible to register a modified one:
QueryParser modifiedParser = new DefaultQueryParser() {
    @Override
    protected void appendGroupByFields(SolrQuery solrQuery, List<Field> fields) {
        super.appendGroupByFields(solrQuery, fields);
        // also request the total number of groups so paging is computed correctly
        solrQuery.set(GroupParams.GROUP_TOTAL_COUNT, true);
    }
};
solrTemplate.registerQueryParser(Query.class, modifiedParser);
Using a SolrCallback would be a less intrusive option:
final Query query = // ...whatever query you have.
List<DomainType> result = solrTemplate.execute(new SolrCallback<List<DomainType>>() {
    @Override
    public List<DomainType> doInSolr(SolrServer solrServer) throws SolrServerException, IOException {
        SolrQuery solrQuery = new QueryParsers().getForClass(query.getClass()).constructSolrQuery(query);
        // add missing params
        solrQuery.set(GroupParams.GROUP_TOTAL_COUNT, true);
        return solrTemplate.convertQueryResponseToBeans(solrServer.query(solrQuery), DomainType.class);
    }
});
Please feel free to open an issue.
