Fields missing from an Elasticsearch index from one moment to the next

A strange situation has happened with the Elasticsearch cluster I use for logging.
I'm using NLog with an Elasticsearch target that conforms to ECS (Elastic Common Schema):
https://github.com/elastic/ecs-dotnet/tree/master/src/Elastic.CommonSchema.NLog
For some reason, since today some fields have gone missing; one of them is the all-important message field.
Looking at the log stream in Kibana, under the Observability menu, I see the following:
The message field has simply disappeared. Can something provoke this? If so, what steps can I take to fix it?

It turned out I had done something wrong with the logs myself.
In the advanced options for the index template, I had added an include field.
This meant only that specific field was being logged.
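For anyone who hits the same symptom, here is a minimal sketch of the kind of template that causes it (hedged: it assumes Elasticsearch on localhost:9200, Node 18+ for the global fetch, and a hypothetical logs-app-* index pattern). A _source includes list in the template mappings keeps only the listed fields in the stored documents, so everything else, message included, disappears from what Kibana shows:

// bad-template.ts - a sketch of the misconfiguration, not a recipe to follow
const template = {
  index_patterns: ["logs-app-*"], // hypothetical pattern
  template: {
    mappings: {
      // The "include field" from the advanced options ends up here:
      // only "some.specific.field" survives in _source, message does not.
      _source: { includes: ["some.specific.field"] },
    },
  },
};

const res = await fetch("http://localhost:9200/_index_template/logs-app", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(template),
});
console.log(res.status);

Removing the _source block from the template (and rolling over or reindexing so new indices pick up the clean mapping) brings the missing fields back.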

Related

How to create custom messages for alerting monitors in OpenSearch, Kibana?

I am struggling to create a customized message for my OpenSearch alerting system. I am building an alerting system that triggers an alert for a particular event (e.g. CPU utilization when a particular service reaches a specific threshold). I would like this customized message to include any fields/metrics I am tracking, or even to attach a dashboard link or a custom name to the message. Currently, OpenSearch only lets me include messages related to the alerting page. I attached a picture for clarity.
While browsing the web I noticed that I can reference fields in the JSON payload using Mustache templates. Any suggestions or walkthroughs on how this can be done? My overall goal is to include a link and a custom message in the alert message that is sent out to the required destination.
The type of message that's included on the OpenSearch alerting page.
#Viewsp Although this is not related to Elasticsearch (wrong tag included), you can refer to the fields in the payload or the aggregation result set of the alert using an approach similar to the one shown in your snapshot; it's just the path of the field that will be different.
So, if you have a field called cpu in your log, it can be accessed from the payload as {{ctx.payload.hits.hits.0._source.cpu}}, depending on the field hierarchy under _source. Basically, hits.0 refers to the first result in your search payload, and it is followed by the path of the field you want to include.
I would suggest creating a watcher for your use case to get a feel for how fields can be accessed and what ctx.payload looks like (including aggregations):
Watcher from Kibana: https://www.elastic.co/guide/en/kibana/current/watcher-ui.html
Watcher from DevTools: https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-alerting.html
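To make that concrete, here is a minimal watcher sketch (hedged: it assumes Elasticsearch with Watcher enabled on localhost:9200, Node 18+ for the global fetch, and a hypothetical metrics-* index with a cpu field). The logging action's text is a Mustache template that pulls the cpu field out of the first search hit:

// create-watch.ts - a sketch, not a drop-in solution
const watch = {
  trigger: { schedule: { interval: "5m" } },
  input: {
    search: {
      request: {
        indices: ["metrics-*"], // hypothetical index pattern
        body: { size: 1, sort: [{ "@timestamp": "desc" }] },
      },
    },
  },
  // Fire whenever the search returns at least one hit.
  condition: { compare: { "ctx.payload.hits.total": { gt: 0 } } },
  actions: {
    notify: {
      logging: {
        // Mustache: the first hit's cpu field, plus a dashboard link.
        text: "CPU at {{ctx.payload.hits.hits.0._source.cpu}} - https://kibana.example.com/app/dashboards",
      },
    },
  },
};

const res = await fetch("http://localhost:9200/_watcher/watch/cpu_watch", {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(watch),
});
console.log(await res.json());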

Using the NLog logger with the ECS layout, the JSON object appears in Kibana as a string instead of multiple properties

I'm working on a .NET Framework application and I've been asked to send the logs to Elasticsearch, using Kibana as the UI.
To have something standardized, I have to implement ECS (Elastic Common Schema).
Looking at the example on the ECS GitHub, we only have to implement it in the following way:
Instead of sending to the console, as in the example, I send it to Elasticsearch.
The output from it would be a nice JSON object...
Maybe that is expected, but in Kibana we see something like the following (Kibana - Discover):
Looking at that, the JSON object is apparently being treated as a string and everything goes inside the message property, but that is not what I'm looking for; I want that JSON divided into separate properties.
Since I'm new to the Elastic Stack world, I've tried to create a template on the Index Management page and to perform manual mappings there, like message._metadata.url, so that some properties are not treated as part of the string, but without success.
I'm having trouble finding useful information to solve this problem; can anyone give me a hint?
UPDATE:
I found the property enableJsonLayout="true" that we can put on the NLog target, and it does turn what's in the JSON layout into properties in Elasticsearch, which is good.
Is this the right way to use ECS?
How can I add additional properties?
When you enable enableJsonLayout="true", it means the configured layout has to handle everything. For EcsLayout, you can find the documentation here:
https://github.com/elastic/ecs-dotnet/tree/master/src/Elastic.CommonSchema.NLog
By default, EcsLayout will include all LogEvent properties as metadata. See also https://github.com/NLog/NLog/wiki/How-to-use-structured-logging
But you can explicitly add extra metadata items:
<layout xsi:type="EcsLayout">
  <metadata name="MyProperty" layout="MyPropertyValue" />
</layout>

How do I debug 2sxc when the visual query works perfectly during debugging, but the cshtml code can't access the data?

I've edited some existing visual queries of the Blog 4.0 application, and while I was debugging they worked perfectly. But then, on the page, they stopped working. Any attempt to use a key with Data, like Data["Posts"], raises System.Collections.Generic.KeyNotFoundException. App.Query["Blog Posts List"]["Posts"] returns something, but any attempt to access its fields raises another exception (I don't remember the name, but it said there is no such member inside that object).
I didn't rename queries and I didn't change application settings. I just edited the logic of two queries and renamed one wiring endpoint across the three queries in the whole chain.
How do I debug this? How can I see what the cshtml receives from the database, so I can stop guessing and put my crystal ball away?
In general, App.Query["Name of Query"] will get you the streams of data. You usually need to convert them using AsDynamic() or AsList() into something you can work with.
This should help some: What is Data?
If you're just running into problems with field names, you probably forgot the AsList or AsDynamic, just as #accuraty-jeremy mentioned.
For real debugging, go to Insights and you'll see what happens. In your case it probably won't help, though, because you're working with an object that isn't dynamic (so it doesn't support .FirstName) until you AsList/AsDynamic it.
My bad: I confused two different files, _List.cshtml and _List Paging.cshtml, so I was searching for the error in the wrong file.

gatsby-source-ghost is failing to create the Ghost schema in Gatsby

I currently have a Gatsby project based on the Gatsby-Ghost-Starter project. The Ghost schema is no longer being generated in Gatsby for some reason, and I can't seem to figure out why. As a result, I get the following error:
There was an error in your GraphQL query:
Cannot query field "icon" on type "GhostSettings".
If you don't expect "icon" to exist on the type "GhostSettings" it is most likely a typo. However, if you expect "icon" to exist there are a couple of solutions to common problems:
- If you added a new data source and/or changed something inside gatsby-node.js/gatsby-config.js, please try a restart of your development server.
- The field might be accessible in another subfield; please try your query in GraphiQL and use the GraphiQL explorer to see which fields you can query and what shape they have.
- You want to optionally use your field "icon" and right now it is not used anywhere. Therefore Gatsby can't infer the type and add it to the GraphQL schema. A quick fix is to add at least one entry with that field ("dummy content").
- It is recommended to explicitly type your GraphQL schema if you want to use optional fields. This way you don't have to add the mentioned "dummy content". Visit our docs to learn how you can define the schema for "GhostSettings": https://www.gatsbyjs.org/docs/schema-customization/#creating-type-definitions
This error message is basically repeated for every page and query field that uses anything related to the Ghost schema. I pulled up the GraphiQL explorer to see if I could access the fields there, and it indicated there is no schema available (as seen in the screenshot here). I tried rolling back to previous commits, deleting .cache and node_modules, updating all packages, and re-cloning the repo to no avail.
My latest commit compiled with no problem in the production environment, so I think it's some sort of misconfiguration on my machine... The environment variables are all identical, and I tried using the production Ghost server as my "ghost-source" on my local machine (since I know it works if the production build compiled successfully), but nothing changed. I've been stuck on this for days and have no idea how even to debug what's happening right now. Any advice/insights would be greatly appreciated.
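For what it's worth, the error's last suggestion (explicitly typing the schema) would look roughly like the sketch below. This is hedged: it assumes the fix lives in a gatsby-node.ts, and it only declares the optional icon field so Gatsby stops trying to infer it from data; it does not repair a schema that fails to generate at all:

// gatsby-node.ts - explicit type definition for the field the error names
import type { GatsbyNode } from "gatsby"

export const createSchemaCustomization: GatsbyNode["createSchemaCustomization"] = ({ actions }) => {
  actions.createTypes(`
    type GhostSettings implements Node {
      icon: String
    }
  `)
}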

Dynamic Validation on K8s Configuration Files (YAML) Using Custom Rules

I am looking for a static validator that validates Kubernetes deployment or service YAML files based on custom rules. For example, I might have a rule to disallow some fields in the YAML files (although they are valid fields in K8s), or to specify a range for the values of a field. The validation is triggered independently of kubectl.
The closest solution I have found is kube-lint: https://github.com/viglesiasce/kube-lint. However, it does not seem to be maintained, since the last commit was in March 2017.
Can anyone let me know if there is anything else that does this kind of validation on K8s YAML files based on custom rules?
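To make the requirement concrete, here is a minimal sketch of the kind of rule-driven, kubectl-independent check meant here (hedged: it assumes Node.js with js-yaml installed, and the two rules are hypothetical):

// validate.ts - static rule checker for K8s manifests; a sketch, not a tool
import * as fs from "fs";
import * as yaml from "js-yaml";

const DISALLOWED = ["spec.template.spec.hostNetwork"]; // valid in K8s, banned here
const RANGES: Record<string, [number, number]> = { "spec.replicas": [1, 10] };

// Walk a dotted path into a parsed manifest; undefined if any step is missing.
function lookup(doc: any, dotted: string): any {
  return dotted.split(".").reduce((cur, part) => (cur == null ? undefined : cur[part]), doc);
}

function validate(doc: any): string[] {
  const errors: string[] = [];
  for (const path of DISALLOWED) {
    if (lookup(doc, path) !== undefined) errors.push(`field ${path} is not allowed`);
  }
  for (const [path, [lo, hi]] of Object.entries(RANGES)) {
    const val = lookup(doc, path);
    if (typeof val === "number" && (val < lo || val > hi)) {
      errors.push(`${path}=${val} is outside [${lo}, ${hi}]`);
    }
  }
  return errors;
}

// Usage: ts-node validate.ts deployment.yaml service.yaml
for (const file of process.argv.slice(2)) {
  for (const doc of yaml.loadAll(fs.readFileSync(file, "utf8"))) {
    for (const err of validate(doc ?? {})) console.error(`${file}: ${err}`);
  }
}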
I believe the thing you are looking for is an Admission Controller, with its two baked-in kinds, "validating" and "mutating". However, as the docs say, if that's not powerful enough for your needs there are also Dynamic Admission Controllers.
Be sure to watch Pod Security Policies as they mature out of beta (or, I guess, try them even now).
I haven't used them myself, so I don't know what the user experience is like (for example: does kubectl offer a friendly message, or just a "401: Nope" kind of thing?), but as for the "disallow some fields" part, I am pretty confident they will do exactly what you wish.
