ASP.NET Core using Serilog, filters and rolling files - appsettings

I need to configure appsettings so that Serilog separates log events by source (IdentityServer4, SQL queries, the application itself) and redirects each output to a different file, so I can analyze the program flow.
I've installed serilog, serilog.settings.configuration, serilog.sinks.rollingfile, serilog.filters.extensions and serilog.sinks.console, but I haven't found any documentation on how to do this.
This is my Serilog section in appsettings:
"Serilog": {
"MinimumLevel": {
"Default": "Debug",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"WriteTo": [
{ "Name": "LiterateConsole" },
{
"Name": "RollingFile",
"Args": { "pathFormat": "Logs/log-{Date}.txt" }
},
{
"Name": "RollingFile",
"pathFormat": "Logs/DBCommands-{Date}.log",
"Filter": [
{
"Name": "ByIncludingOnly",
"Args": {
"expression": "SourceContext = 'IdentityServer4'"
}
}
]
}
],
"Enrich": [ "FromLogContext", "WithMachineName", "WithThreadId" ],
"Properties": {
"Application": "dsm.security"
}
}
Where am I wrong?
UPDATE
I would like to get the same result that I can obtain with the following code:
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .ReadFrom.Configuration(Configuration)
    .MinimumLevel.Override("Microsoft", LogEventLevel.Information)
    .MinimumLevel.Override("System", LogEventLevel.Warning)
    .MinimumLevel.Override("Default", LogEventLevel.Warning)
    .Enrich.FromLogContext()
    .WriteTo.Console(theme: AnsiConsoleTheme.Code)
    .WriteTo.Logger(l => l.Filter.ByIncludingOnly(e => e.Level == LogEventLevel.Error).WriteTo.RollingFile(@"Logs/Error-{Date}.log"))
    .WriteTo.Logger(l => l.Filter.ByIncludingOnly(e => e.Level == LogEventLevel.Fatal).WriteTo.RollingFile(@"Logs/Fatal-{Date}.log"))
    .WriteTo.Logger(l => l.Filter.ByIncludingOnly(Matching.FromSource("dsm.security")).WriteTo.RollingFile(@"Logs/dsm.security-{Date}.log"))
    .WriteTo.Logger(l => l.Filter.ByIncludingOnly(Matching.FromSource("IdentityServer4")).WriteTo.RollingFile(@"Logs/IdentityServer-{Date}.log"))
    .WriteTo.Logger(l => l.Filter.ByIncludingOnly(Matching.FromSource("Microsoft.EntityFrameworkCore")).WriteTo.RollingFile(@"Logs/EF-{Date}.log"))
    // .WriteTo.RollingFile(@"Logs/Verbose-{Date}.log")
    .CreateLogger();
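For comparison, here is a minimal appsettings sketch of the same idea (a sketch only, not verified against these exact package versions). The key point, which also shows up in the answers further down, is that a Filter section belongs inside a "Logger" sink's configureLogger, not alongside a top-level RollingFile entry. The StartsWith expression assumes the expression-based filter package (Serilog.Filters.Expressions) provides ByIncludingOnly; paths and source names mirror the fluent version above:

"Serilog": {
  "MinimumLevel": {
    "Default": "Debug",
    "Override": { "Microsoft": "Warning", "System": "Warning" }
  },
  "WriteTo": [
    { "Name": "LiterateConsole" },
    {
      "Name": "RollingFile",
      "Args": { "pathFormat": "Logs/log-{Date}.txt" }
    },
    {
      "Name": "Logger",
      "Args": {
        "configureLogger": {
          "Filter": [
            {
              "Name": "ByIncludingOnly",
              "Args": { "expression": "StartsWith(SourceContext, 'IdentityServer4')" }
            }
          ],
          "WriteTo": [
            {
              "Name": "RollingFile",
              "Args": { "pathFormat": "Logs/IdentityServer-{Date}.log" }
            }
          ]
        }
      }
    },
    {
      "Name": "Logger",
      "Args": {
        "configureLogger": {
          "Filter": [
            {
              "Name": "ByIncludingOnly",
              "Args": { "expression": "StartsWith(SourceContext, 'Microsoft.EntityFrameworkCore')" }
            }
          ],
          "WriteTo": [
            {
              "Name": "RollingFile",
              "Args": { "pathFormat": "Logs/EF-{Date}.log" }
            }
          ]
        }
      }
    }
  ],
  "Enrich": [ "FromLogContext", "WithMachineName", "WithThreadId" ]
}

Note that the RollingFile sink has since been superseded by the File sink with a rollingInterval argument (as used in the answers below); RollingFile is kept here only to match the packages listed in the question.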

Related

Elastic / OpenSearch lifecycle management - what is the difference between read_write & open actions

I want to use lifecycle management; the goal is to delete messages after 14 days.
What should be the action in the first state: open or read_write?
What is the difference between the two actions?
{
  "policy": {
    "policy_id": "delete_after14_days",
    "description": "index delete",
    "schema_version": 1,
    "error_notification": null,
    "default_state": "open",
    "states": [
      {
        "name": "hot",
        "actions": [
          {
            "open": {} or "read_write": {}
          }
        ],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "14d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          {
            "delete": {}
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": [
      {
        "index_patterns": [
          "audit-*"
        ],
        "priority": 0
      }
    ]
  }
}
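As far as I understand the ISM actions (worth verifying against the OpenSearch docs): open (re)opens a managed index that was previously closed, while read_write sets the managed index to read/write, clearing any read-only block. A freshly created index is already open and writable, so for a hot state whose only job is to wait out the 14-day transition, read_write is the usual choice and is effectively a no-op.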

Is there a way to send metadata in krakend endpoint configuration?

I'm using KrakenD as an API gateway, and my configuration looks like this:
{
  "plugin": {
    "folder": "/etc/krakend/plugins/authenticator/",
    "pattern": ".so"
  },
  "port": 8080,
  "extra_config": {
    "github_com/devopsfaith/krakend/transport/http/server/handler": {
      "name": "authenticator"
    }
  },
  "endpoints": [
    {
      "output_encoding": "no-op",
      "backend": [
        {
          "encoding": "no-op",
          "host": [
            "127.0.0.1:8080"
          ],
          "url_pattern": "/api/v1/address/{id}",
          "method": "GET"
        }
      ],
      "endpoint": "/api/v1/addresses/{id}",
      "method": "GET"
    }
  ],
  "name": "gateway",
  "timeout": "30s",
  "version": 2
}
I want to pass some metadata per endpoint and access it in my predefined plugin, in this case the authenticator plugin.
What you are trying to achieve is perfectly possible, and is the way all components work in KrakenD. Your plugin can access the KrakenD configuration using the namespace you define. For instance, you could set your metadata like this (I am assuming you have in your Go code a pluginName = "slifer2015-authenticator" ):
{
  "endpoints": [
    {
      "output_encoding": "no-op",
      "backend": [
        {
          "encoding": "no-op",
          "host": [
            "127.0.0.1:8080"
          ],
          "url_pattern": "/api/v1/address/{id}"
        }
      ],
      "endpoint": "/api/v1/addresses/{id}",
      "extra_config": {
        "github_com/devopsfaith/krakend/transport/http/server/handler": {
          "name": [
            "slifer2015-authenticator",
            "some-other-plugin-here"
          ],
          "slifer2015-authenticator": {
            "Metadata1": "value1",
            "Metadata2": {
              "Some": 10,
              "Thing": 100,
              "Here": "60s"
            }
          }
        }
      }
    }
  ]
}
Then your metadata is available in the extra parameter when the registerer kicks in, inside the key you have chosen.
func (r registerer) registerHandlers(ctx context.Context, extra map[string]interface{}, h http.Handler) (http.Handler, error) {
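    // Sketch of the body: per the note above, the metadata configured under the
    // "slifer2015-authenticator" key arrives in the extra map, so
    // extra["slifer2015-authenticator"] holds Metadata1, Metadata2, etc.
    // (assert it to a map[string]interface{} before reading the values).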

Serilog does not write files after installing the service with sc.exe

I have developed a .NET Core 3.1 Windows service; the service works fine, but it fails to write the log file.
During debugging Serilog writes the file correctly, but once installed with sc.exe it writes nothing.
Program.cs
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService()
            .UseSerilog((hostingContext, loggerConfig) =>
                loggerConfig.ReadFrom.Configuration(hostingContext.Configuration)) // custom log event
            .ConfigureServices((hostContext, services) =>
            {
                IConfiguration configuration = hostContext.Configuration; // get the configuration
                ServiceInfo siOption = configuration.GetSection("ServiceInfo").Get<ServiceInfo>();
                services.AddSingleton(siOption);
                services.AddHostedService<Worker>();
            });
}
appsettings.json
"Serilog": {
"Using": [ "Serilog.Sinks.Console", "Serilog.Sinks.RollingFile" ],
"MinimumLevel": {
"Default": "Debug",
"Override": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
}
},
"WriteTo": [
{
"Name": "Logger",
"Args": {
"configureLogger": {
"Filter": [
{
"Name": "ByIncludingOnly",
"Args": {
"expression": "(#Level = 'Error' or #Level = 'Fatal' or #Level = 'Warning')"
}
}
],
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "Logs/ex_.log",
"outputTemplate": "{Timestamp} [{Level:u3}] {Message}{NewLine}{Exception}",
//"outputTemplate": "{Timestamp:o} [{Level:u3}] ({SourceContext}) {Message}{NewLine}{Exception}",
"rollingInterval": "Day",
"retainedFileCountLimit": 7
}
}
]
}
}
}
],
"Enrich": [
"FromLogContext",
"WithMachineName"
],
"Properties": {
"Application": "ORAMS-II Service Status Telegram"
}
}
}
I don't know what the problem could be; installed on a Linux machine it writes the file correctly.
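A hedged guess, since the post doesn't say where the service is installed: a service started by sc.exe has C:\Windows\System32 as its current directory, so the relative "Logs/ex_.log" resolves there, where the service account typically cannot write. A minimal sketch of the usual fix, using an illustrative absolute path (adjust to the real install folder):

"WriteTo": [
  {
    "Name": "File",
    "Args": {
      // hypothetical install folder - any absolute path the service account can write to
      "path": "C:\\MyService\\Logs\\ex_.log",
      "rollingInterval": "Day",
      "retainedFileCountLimit": 7
    }
  }
]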

Serilog best approach for outputting to file and Elasticsearch

I used to ship my data to Elasticsearch via a FileBeat-Logstash pipeline: it processed the logs created via log4net, mutated them, and sent the required fields to Elastic.
Now I would like to replace that by removing FileBeat and Logstash and making use of Serilog and its Elasticsearch sink.
To broaden the picture: I have an API endpoint that receives requests which I need to log to a text file as they are, so I need a File sink. Further down the code, my business logic makes use of the received data and, among other things, creates an object which I then need to ingest into an index at Elastic.
What's the best approach: one Serilog instance with some kind of filtering, or two Serilog instances? I lean towards enriching my events and then routing them to sinks by filtering (one Serilog instance), but because I'm a novice with Serilog I don't know how to set the whole thing up.
The abbreviated code would be something like this.
My controller class:
public class RequestController : ControllerBase
{
    private readonly BLService _service = new BLService(Log.Logger);

    [Route("Test")]
    [HttpPost]
    public IActionResult Test([FromBody] SampleRequest request)
    {
        var logId = Guid.NewGuid().ToString();
        using (LogContext.PushProperty("LogId", logId))
            Log.Information("{@request}", request);

        var tran = new SampleTran
        {
            SampleTranType = "Test",
            SampleTranId = request.Id,
            EventTime = DateTime.Now
        };
        _service.ProcessTransaction(tran);
        return new OkResult();
    }
}
And my service where I'm adding property "Type" with constant value "ElkData" which I could then filter on:
public class BLService
{
    private readonly ILogger _log;

    public BLService(ILogger logger)
    {
        _log = logger.ForContext("Type", "ElkData");
    }

    public void ProcessTransaction(SampleTran transaction)
    {
        var elkData = DoSomeStuffAndReturnElkTransactionToStore(transaction);
        _log.Information("{@ElkData}", elkData);
    }
}
One note: my text file should only contain the raw requests (without the Elasticsearch data). So far I'm writing everything to the file, and my appsettings.json looks like this:
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Warning",
        "Microsoft.Hosting.Lifetime": "Warning",
        "System": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "File",
        "Args": {
          "path": "C:\\DEV\\Logs\\mylog-.txt",
          "rollingInterval": "Day",
          "outputTemplate": "{Timestamp:yyyy-MM-ddTHH:mm:ss.fff zzz} [{Level:u3}] {Message:j}{NewLine}{Exception}"
        }
      }
    ],
    "Enrich": [ "FromLogContext" ]
  },
  "AllowedHosts": "*"
}
I need to add the elastic part using filtering, am I right? Any help would be appreciated.
Here's how I managed to do what I need:
I used ForContext to enrich my log items. So in the controller I used:
var requestLog = Log.ForContext("Type", "Request");
requestLog.Information("Request: {@request}", request); // this needs to go to the log file
The code in BLService stays the same, and the filtering is described in appsettings.json as:
{
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Warning",
        "Microsoft.Hosting.Lifetime": "Warning",
        "System": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "Logger",
        "Args": {
          "configureLogger": {
            "Filter": [
              {
                "Name": "ByExcluding",
                "Args": {
                  "expression": "Type = 'ElkData'"
                }
              }
            ],
            "WriteTo": [
              {
                "Name": "File",
                "Args": {
                  "path": "C:\\DEV\\Logs\\mylog-.txt",
                  "rollingInterval": "Day",
                  "outputTemplate": "{Timestamp:yyyy-MM-ddTHH:mm:ss.fff zzz} [{Level:u3}] {Message:j}{NewLine}{Exception}",
                  "shared": true
                }
              }
            ]
          }
        }
      },
      {
        "Name": "Logger",
        "Args": {
          "configureLogger": {
            "Filter": [
              {
                "Name": "ByIncludingOnly",
                "Args": {
                  "expression": "Type = 'ElkData'"
                }
              }
            ],
            "WriteTo": [
              {
                "Name": "Elasticsearch",
                "Args": {
                  "nodeUris": "<your elastic url>",
                  "TypeName": "_doc",
                  "IndexFormat": "serilog_data",
                  "InlineFields": true,
                  "BufferBaseFilename": "C:\\DEV\\Logs\\elk_buffer"
                }
              }
            ]
          }
        }
      }
    ]
  }
}
So the file will contain everything that is logged except events carrying the Type = 'ElkData' enrichment; those end up in the Elasticsearch index.
Hope this simple approach helps some Serilog novice out there someday.

Using multiple config files for logstash

I am just learning Elasticsearch and I need to know how to correctly split a configuration file into multiple files. I'm using the official Logstash Docker image with ports bound on 9600 and 5044. Originally I had a working single Logstash config file without conditionals, like so:
input {
  beats {
    port => '5044'
  }
}

filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} \[(?<event_source>[\w\s]+)\]:\[(?<log_type>[\w\s]+)\]:\[(?<id>\d+)\] %{GREEDYDATA:details}"
      "source" => "%{GREEDYDATA}\\%{GREEDYDATA:app}.log"
    }
  }
  mutate {
    convert => { "id" => "integer" }
  }
  date {
    match => [ "timestamp", "ISO8601" ]
    locale => en
    remove_field => "timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["http://elastic:9200"]
    index => "logstash-supportworks"
  }
}
When I wanted to add metricbeat I decided to split that configuration into a new file. So I ended up with 3 files:
__input.conf
input {
  beats {
    port => '5044'
  }
}
metric.conf
# for testing I'm adding no filters just to see what the data looks like
output {
  if ['@metadata']['beat'] == 'metricbeat' {
    elasticsearch {
      hosts => ["http://elastic:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}
supportworks.conf
filter {
  if ["source"] =~ /Supportwork Server/ {
    grok {
      match => {
        "message" => "%{TIMESTAMP_ISO8601:timestamp} \[(?<event_source>[\w\s]+)\]:\[(?<log_type>[\w\s]+)\]:\[(?<id>\d+)\] %{GREEDYDATA:details}"
        "source" => "%{GREEDYDATA}\\%{GREEDYDATA:app}.log"
      }
    }
    mutate {
      convert => { "id" => "integer" }
    }
    date {
      match => [ "timestamp", "ISO8601" ]
      locale => en
      remove_field => "timestamp"
    }
  }
}

output {
  if ["source"] =~ /Supportwork Server/ {
    elasticsearch {
      hosts => ["http://elastic:9200"]
      index => "logstash-supportworks"
    }
  }
}
Now no data is being sent to the ES instance. I have verified that Filebeat, at least, is running and publishing messages, so I'd expect at least those to reach ES. Here's a published message from my server running Filebeat:
2019-03-06T09:16:44.634-0800 DEBUG [publish] pipeline/processor.go:308 Publish event: {
  "@timestamp": "2019-03-06T17:16:44.634Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.6.1"
  },
  "source": "C:\\Program Files (x86)\\Hornbill\\Supportworks Server\\log\\swserver.log",
  "offset": 4773212,
  "log": {
    "file": {
      "path": "C:\\Program Files (x86)\\Hornbill\\Supportworks Server\\log\\swserver.log"
    }
  },
  "message": "2019-03-06 09:16:42 [COMMS]:[INFO ]:[4924] Helpdesk API (5005) Socket error while idle - 10053",
  "prospector": {
    "type": "log"
  },
  "input": {
    "type": "log"
  },
  "beat": {
    "name": "WIN-22VRRIEO8LM",
    "hostname": "WIN-22VRRIEO8LM",
    "version": "6.6.1"
  },
  "host": {
    "name": "WIN-22VRRIEO8LM",
    "architecture": "x86_64",
    "os": {
      "platform": "windows",
      "version": "6.3",
      "family": "windows",
      "name": "Windows Server 2012 R2 Standard",
      "build": "9600.0"
    },
    "id": "e5887ac2-6fbf-45ef-998d-e40437066f56"
  }
}
I got this working by adding a mutate filter to __input.conf to replace backslashes with forward slashes in the source field:
filter {
  mutate {
    gsub => [ "source", "[\\]", "/" ]
  }
}
And removing the " from the field accessors in my conditionals So
if ["source"] =~ /Supportwork Server/
Became
if [source] =~ /Supportwork Server/
Both changes seemed to be necessary to get this configuration working.
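Put together, the corrected output section of supportworks.conf looks like this (same host and index as the original):

output {
  if [source] =~ /Supportwork Server/ {
    elasticsearch {
      hosts => ["http://elastic:9200"]
      index => "logstash-supportworks"
    }
  }
}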
