Serilog best approach for outputting to file and Elasticsearch - elasticsearch

I used to ship my data to Elasticsearch through a Filebeat-Logstash pipeline: it processed the logs created via log4net, mutated them, and sent the required fields on to Elasticsearch.
Now I would like to replace that setup by removing Filebeat and Logstash and using Serilog and its Elasticsearch sink instead.
To broaden the picture, I have an API endpoint that receives requests which I need to log to a text file as they are, so I need a File sink. Further down the code, my business logic uses the received data and, among other things, creates an object which I then need to ingest into an Elasticsearch index.
What's the best approach for this: have one Serilog instance and use some kind of filtering, or have two Serilog instances? I lean towards enriching my log events and then filtering per sink (one Serilog instance), but because I'm a novice with Serilog I don't know how to set the whole thing up.
The abbreviated code would be something like this.
My controller class:
public class RequestController : ControllerBase
{
    private readonly BLService _service = new BLService(Log.Logger);

    [Route("Test")]
    [HttpPost]
    public IActionResult Test([FromBody] SampleRequest request)
    {
        var logId = Guid.NewGuid().ToString();
        using (LogContext.PushProperty("LogId", logId))
        {
            Log.Information("{@request}", request);
        }

        var tran = new SampleTran
        {
            SampleTranType = "Test",
            SampleTranId = request.Id,
            EventTime = DateTime.Now
        };

        _service.ProcessTransaction(tran);
        return new OkResult();
    }
}
And my service, where I'm adding a "Type" property with the constant value "ElkData", which I can then filter on:
public class BLService
{
    private readonly ILogger _log;

    public BLService(ILogger logger)
    {
        _log = logger.ForContext("Type", "ElkData");
    }

    public void ProcessTransaction(SampleTran transaction)
    {
        var elkData = DoSomeStuffAndReturnElkTransactionToStore(transaction);
        _log.Information("{@ElkData}", elkData);
    }
}
One note: my text file should only contain the raw requests (without the Elasticsearch data). So far I'm writing everything to the file, and my appsettings.json looks like this:
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Warning",
        "Microsoft.Hosting.Lifetime": "Warning",
        "System": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "path": "C:\\DEV\\Logs\\mylog-.txt",
          "rollingInterval": "Day",
          "outputTemplate": "{Timestamp:yyyy-MM-ddTHH:mm:ss.fff zzz} [{Level:u3}] {Message:j}{NewLine}{Exception}"
        }
      }
    ],
    "Enrich": [ "FromLogContext" ]
  },
  "AllowedHosts": "*"
}
I need to add the elastic part using filtering, am I right? Any help would be appreciated.

Here's how I managed to do what I needed:
I used ForContext to enrich my log events. So in the controller I used:
var requestLog = Log.ForContext("Type", "Request");
requestLog.Information("Request: {@request}", request); // this needs to go to the log file
The code in BLService stays the same, and the filtering is configured in appsettings.json as:
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Warning",
        "Microsoft.Hosting.Lifetime": "Warning",
        "System": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "Logger",
        "Args": {
          "configureLogger": {
            "Filter": [
              {
                "Name": "ByExcluding",
                "Args": {
                  "expression": "Type = 'ElkData'"
                }
              }
            ],
            "WriteTo": [
              {
                "Name": "File",
                "Args": {
                  "path": "C:\\DEV\\Logs\\mylog-.txt",
                  "rollingInterval": "Day",
                  "outputTemplate": "{Timestamp:yyyy-MM-ddTHH:mm:ss.fff zzz} [{Level:u3}] {Message:j}{NewLine}{Exception}",
                  "shared": true
                }
              }
            ]
          }
        }
      },
      {
        "Name": "Logger",
        "Args": {
          "configureLogger": {
            "Filter": [
              {
                "Name": "ByIncludingOnly",
                "Args": {
                  "expression": "Type = 'ElkData'"
                }
              }
            ],
            "WriteTo": [
              {
                "Name": "Elasticsearch",
                "Args": {
                  "nodeUris": "<your elastic url>",
                  "TypeName": "_doc",
                  "IndexFormat": "serilog_data",
                  "InlineFields": true,
                  "BufferBaseFilename": "C:\\DEV\\Logs\\elk_buffer"
                }
              }
            ]
          }
        }
      }
    ]
  }
}
So the file will contain everything that is logged except events that carry the "Type = 'ElkData'" enrichment; those will end up in the Elasticsearch index.
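For completeness, here is a minimal sketch of wiring this configuration up at startup, assuming the Serilog.Settings.Configuration, Serilog.Sinks.File, Serilog.Sinks.Elasticsearch and Serilog.Filters.Expressions (or Serilog.Expressions) packages are installed; CreateHostBuilder is the standard ASP.NET Core template method:
using Microsoft.Extensions.Configuration;
using Serilog;

public class Program
{
    public static void Main(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json")
            .Build();

        // One shared logger built from the "Serilog" section above;
        // the two sub-logger filters decide which sink each event reaches.
        Log.Logger = new LoggerConfiguration()
            .ReadFrom.Configuration(configuration)
            .CreateLogger();

        try
        {
            // CreateHostBuilder comes from the default ASP.NET Core template (omitted here).
            CreateHostBuilder(args).Build().Run();
        }
        finally
        {
            Log.CloseAndFlush();
        }
    }
}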
Hope this simple approach will help some Serilog novice out there someday.

Related

DynamoDB streams filter with nested fields not working

I have a Lambda hooked up to my DynamoDB stream. It is configured to trigger if both criteria are met:
eventName = "MODIFY"
status > 10
My filter looks as follows:
{"eventName": ["MODIFY"], "dynamodb": {"NewImage": {"status": [{"numeric": [">", 10]}]}}}
If the filter is configured to trigger only on the event name MODIFY it works; however, anything more complicated than that does not trigger my Lambda. The event looks as follows:
{
  "eventID": "ba1cff0bb53fbd7605b7773fdb4320a8",
  "eventName": "MODIFY",
  "eventVersion": "1.1",
  "eventSource": "aws:dynamodb",
  "awsRegion": "us-east-1",
  "dynamodb": {
    "ApproximateCreationDateTime": 1643637766,
    "Keys": {
      "org": {
        "S": "test"
      },
      "id": {
        "S": "61f7ebff17afad170f98e046"
      }
    },
    "NewImage": {
      "status": {
        "N": "20"
      }
    }
  }
}
When using the test_event_pattern endpoint it confirms the filter is valid:
filter = {
    "eventName": ["MODIFY"],
    "dynamodb": {
        "NewImage": {
            "status": [{"numeric": [">", 10]}]
        }
    }
}

response = client.test_event_pattern(
    EventPattern=json.dumps(filter),
    Event="{\"id\": \"e00c66cb-fe7a-4fcc-81ad-58eb60f5d96b\", \"eventName\": \"MODIFY\", \"dynamodb\": {\"NewImage\":{\"status\": 20}}, \"detail-type\": \"myDetailType\", \"source\": \"com.mycompany.myapp\", \"account\": \"123456789012\", \"time\": \"2016-01-10T01:29:23Z\", \"region\": \"us-east-1\"}"
)

print(response)  # >> {'Result': True, 'ResponseMetadata': {'RequestId':...}
Is there something that I'm overlooking? Do DynamoDB filters not work on the actual new image?
You probably already found this out yourself, but for anyone else: the filter is missing the DynamoDB-JSON-specific numeric field leaf ("N"):
{
    "eventName": ["MODIFY"],
    "dynamodb": {
        "NewImage": {
            "status": { "N": [{ "numeric": [">", 10] }] }
        }
    }
}

Slow ElasticSearch Serilog

My C# .NET Core web application uses Serilog with Elasticsearch, configured as shown below. Whenever the Elasticsearch server has a problem, my server's HTTP queries slow down to the point of being unusable: if Elasticsearch is down, each query takes 15 seconds or so. I assume each query tries to log a message and waits for the Elasticsearch server to respond. I understand that I can play with the Elasticsearch timeout, but that is not a solution. Is there a better way to keep the logging method from limiting application performance?
"Serilog": {
"IncludeScopes": true,
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "....\\log.txt",
"rollingInterval": "Day",
"outputTemplate": "{Timestamp:o} [{Level:u3}] {Message}{NewLine}{Exception}"
}
},
{
"Name": "Elasticsearch",
"Args": {
"nodeUris": "http://localhost:9200",
"typeName": "SomeApp",
"batchPostingLimit": 50,
"restrictedToMinimumLevel": "Information",
"bufferFileSizeLimitBytes": 5242880,
"bufferLogShippingInterval": 5000,
"bufferRetainedInvalidPayloadsLimitBytes": 5000,
"bufferFileCountLimit": 31,
"connectionTimeout": 5,
"queueSizeLimit": "100000",
"autoRegisterTemplate": true,
"overwriteTemplate": false
}
}
],
"Enrich": [
"FromLogContext"
],
"Properties": {
"Application": "SomeApp",
"Environment": "SomeApp.Production"
}
}
The middleware part looks like this:
public async Task Invoke(HttpContext context, ILogger logger, ILogContext logContext)
{
    context.Request.EnableBuffering();
    var stopWatch = new Stopwatch();
    stopWatch.Start();
    try
    {
        await _next.Invoke(context);
    }
    finally
    {
        stopWatch.Stop();
        await LogRequest(context, logger, stopWatch.ElapsedMilliseconds, logContext);
    }
}

private async Task LogRequest(HttpContext context, ILogger logger, long elapsedMs, ILogContext logContext)
{
    // Abbreviated: 'message' is built from the request/context details elsewhere.
    logger.Information(LogConstants.RequestMessageTemplate, message);
}
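One option sometimes suggested for this kind of problem (a sketch under assumptions, not a verified fix for this exact setup) is to wrap the Elasticsearch sink with Serilog.Sinks.Async, so that emitting to Elasticsearch happens on a background worker rather than on the request thread, while the durable buffer file ships events once the node is reachable again. The index format and buffer path below are placeholders:
using System;
using Serilog;
using Serilog.Sinks.Elasticsearch;

// In Program.Main or wherever the logger is configured.
// Assumes the Serilog.Sinks.Async, Serilog.Sinks.File and Serilog.Sinks.Elasticsearch packages.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .Enrich.FromLogContext()
    .WriteTo.File("....\\log.txt", rollingInterval: RollingInterval.Day)
    .WriteTo.Async(a => a.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        IndexFormat = "someapp-{0:yyyy.MM.dd}", // placeholder index format
        AutoRegisterTemplate = true,
        // Durable mode: events are buffered to disk and shipped in the background.
        BufferBaseFilename = "....\\elastic-buffer"
    }))
    .CreateLogger();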

Swagger use a custom swagger.json file aspnet core

Pretty sure I am missing something clearly obvious but not seeing it.
How can I use my updated swagger.json file?
I took my boilerplate swagger/v1/swagger.json code and pasted it into the editor.swagger.io system. I then updated the descriptions etc, added examples to my models and then saved the contents as swagger.json.
Moved the file into the root of my API application and set the file to Copy always.
public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddSwaggerGen(c => { c.SwaggerDoc("V1", new Info { Title = "Decrypto", Version = "0.0" }); });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    app.UseSwagger();
    //--the default works fine
    // app.UseSwaggerUI(c => { c.SwaggerEndpoint("/swagger/V1/swagger.json", "Decrypto v1"); });
    app.UseSwaggerUI(c => { c.SwaggerEndpoint("swagger.json", "Decrypto v1"); });
    app.UseMvc();
}
I have tried a few different variations, but none seem to do the trick. I don't really want to redo the work in SwaggerDoc, as it seems dirty to me to put documentation in the runtime.
The custom swagger.json file I want to use looks like this:
{
  "swagger": "2.0",
  "info": {
    "version": "0.0",
    "title": "My Title"
  },
  "paths": {
    "/api/Decryption": {
      "post": {
        "tags": [
          "API for taking encrypted values and getting the decrypted values back"
        ],
        "summary": "",
        "description": "",
        "operationId": "Post",
        "consumes": [
          "application/json-patch+json",
          "application/json",
          "text/json",
          "application/*+json"
        ],
        "produces": [
          "text/plain",
          "application/json",
          "text/json"
        ],
        "parameters": [
          {
            "name": "units",
            "in": "body",
            "required": true,
            "schema": {
              "uniqueItems": false,
              "type": "array",
              "items": {
                "$ref": "#/definitions/EncryptedUnit"
              }
            }
          }
        ],
        "responses": {
          "200": {
            "description": "Success",
            "schema": {
              "uniqueItems": false,
              "type": "array",
              "items": {
                "$ref": "#/definitions/DecryptedUnit"
              }
            }
          }
        }
      }
    }
  },
  "definitions": {
    "EncryptedUnit": {
      "type": "object",
      "properties": {
        "value": {
          "type": "string",
          "example": "7OjLFw=="
        },
        "initializeVector": {
          "type": "string",
          "example": "5YVg="
        },
        "cipherText": {
          "type": "string",
          "example": "596F5AA48A882"
        }
      }
    },
    "DecryptedUnit": {
      "type": "object",
      "properties": {
        "encrypted": {
          "type": "string",
          "example": "7OjLV="
        },
        "decrypted": {
          "type": "string",
          "example": "555-55-5555"
        }
      }
    }
  }
}
You need to configure a PhysicalFileProvider and put your swagger.json into wwwroot or anywhere else accessible by the PhysicalFileProvider. After that you can access it using IFileProvider.
Reference: https://www.c-sharpcorner.com/article/file-providers-in-asp-net-core/
Edit: if you just add app.UseStaticFiles(); to your Startup, you can access wwwroot without hassle.
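A minimal sketch of that static-files route, assuming the hand-edited file has been copied to wwwroot/swagger.json (paths and the document title are taken from the question):
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Serves wwwroot, so wwwroot/swagger.json becomes available at /swagger.json.
    app.UseStaticFiles();

    // Point Swagger UI at the static, hand-edited document instead of the generated one.
    app.UseSwaggerUI(c => { c.SwaggerEndpoint("/swagger.json", "Decrypto v1"); });

    app.UseMvc();
}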
Completely Different Approach
You may also consider serving your file using a Controller/Action:
public IActionResult GetSwaggerDoc()
{
    var file = Path.Combine(Directory.GetCurrentDirectory(),
        "MyStaticFiles", "swagger.json");
    return PhysicalFile(file, "application/json");
}
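For Swagger UI to consume that action, the action needs a route and the UI endpoint has to point at it; a hypothetical sketch (the /custom-swagger/swagger.json route is an assumption, not part of the original answer):
// Hypothetical route on the action above (attribute routing assumed):
// [HttpGet("/custom-swagger/swagger.json")]
// public IActionResult GetSwaggerDoc() { ... }

// In Startup.Configure, point the UI at the controller-served document:
app.UseSwaggerUI(c => { c.SwaggerEndpoint("/custom-swagger/swagger.json", "Decrypto v1"); });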
.NET Core 2.2 can also serve a physical file as a URL resource, as shown below.
But if you use a custom swagger.json, the documented API stays fixed unless you change the file every time.
public void Configure(IApplicationBuilder app, IHostingEnvironment env,
    ILoggerFactory loggerFactory)
{
    ...
    app.UseStaticFiles(new StaticFileOptions
    {
        // PhysicalFileProvider takes a directory, not a file path;
        // this serves the files in swagger/v1 under the /swagger/v1 request path.
        FileProvider = new PhysicalFileProvider(
            Path.Combine(Directory.GetCurrentDirectory(), "swagger/v1")),
        RequestPath = "/swagger/v1"
    });
}
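With that mapping in place, the Swagger UI endpoint from the question would point at the served path (a sketch; assumes the file sits in a swagger/v1 folder under the content root):
app.UseSwaggerUI(c => { c.SwaggerEndpoint("/swagger/v1/swagger.json", "Decrypto v1"); });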

Is it possible to get rid of the 'data', 'nodes', ... fields?

I have the following GraphQL query:
{
  allForums {
    nodes {
      name,
      topics: topicsByForumId(orderBy: [TITLE_ASC]) {
        nodes {
          title
        }
      }
    }
  }
}
This returns something like the following:
{
  "data": {
    "allForums": {
      "nodes": [
        {
          "name": "1",
          "topics": {
            "nodes": [
              {
                "title": "a"
              },
              {
                "title": "b"
              }
            ]
          }
        }
      ]
    }
  }
}
I would like to get the result below:
[
  {
    "name": "1",
    "topics": [
      {
        "title": "a"
      },
      {
        "title": "b"
      }
    ]
  }
]
Is it possible to get rid of the data, nodes, ... fields? Is that something that can be done within GraphQL, or should I do that in my service implementation?
I am using PostGraphile v4.2.0 as a GraphQL implementation, on top of PostgreSQL v11.
As indicated in the docs, you can expose a simpler interface for connections, or eliminate the default Relay-based connection interface altogether:
If you prefer a simpler list interface over GraphQL connections then you can enable that either along-side our connections (both) or exclusively (only) using our --simple-collections [omit|both|only] option.

DynamoDB DocumentClient returns Set of strings (SS) attribute as an object

I'm new to DynamoDB.
When I read data from the table with AWS.DynamoDB.DocumentClient class, the query works but I get the result in the wrong format.
Query:
{
  TableName: "users",
  ExpressionAttributeValues: {
    ":param": event.pathParameters.cityId,
    ":date": moment().tz("Europe/London").format()
  },
  FilterExpression: ":date <= endDate",
  KeyConditionExpression: "cityId = :param"
}
Expected:
{
  "user": "boris",
  "phones": ["+23xxxxx999", "+23xxxxx777"]
}
Actual:
{
  "user": "boris",
  "phones": {
    "type": "String",
    "values": ["+23xxxxx999", "+23xxxxx777"],
    "wrapperName": "Set"
  }
}
Thanks!
The unmarshall function from AWS.DynamoDB.Converter is one solution if your data comes back as, e.g.:
{
  "Attributes": {
    "last_names": {
      "S": "UPDATED last name"
    },
    "names": {
      "S": "I am the name"
    },
    "vehicles": {
      "NS": [
        "877",
        "9801",
        "104"
      ]
    },
    "updatedAt": {
      "S": "2018-10-19T01:55:15.240Z"
    },
    "createdAt": {
      "S": "2018-10-17T11:49:34.822Z"
    }
  }
}
Please notice the object/map spec per attribute, holding the attribute type. That means you are using the dynamodb class and not the DynamoDB.DocumentClient.
unmarshall will convert a DynamoDB record into a JavaScript object. Stated and backed by AWS. Ref. https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/Converter.html#unmarshall-property
Nonetheless, I faced the exact same use case as yours: having only one Set attribute (type NS in my case), I had to convert it manually. Here is a snippet:
// Please notice the <setName>, which represents your set attribute name
ddbTransHandler.update(params).promise().then((value) => {
    value.Attributes[<setName>] = value.Attributes[<setName>].values;
    return value; // or value.Attributes
});
Cheers,
Hamlet
