GCP Audit Logs Filter - google-cloud-logging

I am trying to create a log sink at the folder level which captures the audit logs, and I would like to add a filter based on project labels. Is there any way we can achieve that?
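For reference, a folder-level aggregated sink can also be created programmatically. Below is a minimal sketch, assuming the google-cloud-logging v3 Python client; the sink name, folder ID, destination, and filter are placeholders. Note that a sink's filter matches fields of the log entries themselves (logName, resource and entry labels, and so on), so the example filters on the audit log name rather than on project labels.

```python
# A minimal sketch, assuming the google-cloud-logging v3 Python client. It creates an
# aggregated sink at the folder level that captures audit logs from all child projects.
# The folder ID, sink name, destination bucket, and filter are placeholders.
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogSink

config_client = ConfigServiceV2Client()

sink = LogSink(
    name="folder-audit-sink",                                  # hypothetical sink name
    destination="storage.googleapis.com/my-audit-log-bucket",  # hypothetical destination
    filter='logName:"cloudaudit.googleapis.com"',              # match audit logs
    include_children=True,                                     # aggregate all child projects
)

created = config_client.create_sink(
    parent="folders/123456789",  # hypothetical folder ID
    sink=sink,
)

# Grant this service account access to the destination before logs will flow.
print(created.writer_identity)
```

The equivalent gcloud command would be along the lines of gcloud logging sinks create with the --folder, --include-children, and --log-filter flags.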

Related

Elastic Cloud APM not showing logs in Transactions Page

What makes Kibana not show Docker container logs on the APM "Transactions" page under the "Logs" tab?
I verified the logs are successfully being generated with the "trace.id" associated for proper linking.
I have the exact same environment and configs (7.16.2) up via docker-compose and it works perfectly.
Could not figure out why this feature works locally but does not show up in the Elastic Cloud deployment.
UPDATE with Solution:
I just solved the problem.
It's related to the Filebeat version.
From 7.16.0 onward, the transaction/log linking stops working.
I reverted Filebeat to version 7.15.2 and it started working again.
If you are not using Filebeat: for example, we rolled our own logging implementation to send logs from a queue in batches using the Bulk API.
We have our own "ElasticLog" class and then use Attributes to match the logs-* Schema for the Log Stream.
In particular, we had to make sure that trace.id was the same as the actual trace's trace.id property. Then the logs started to show up there (it does take a few minutes sometimes).
Some more info on how to get the IDs:
We use the OpenTelemetry exporter for traces and ILoggerProvider for logs. They fire off batches independently of each other.
We populate the trace IDs at the time of instantiation of the class as a default value. This way you are in the context of the Activity. It also helps set the timestamp to exactly when the log was created.
This LogEntry then gets passed into the ElasticLogger processor and mapped, as described above, to the ElasticLog entry with the attributes needed for Elasticsearch.
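The stack described above is .NET (ILoggerProvider and Activity), so purely as a rough illustration, here is the same idea sketched in Python with the opentelemetry-api package: stamp each log document with the currently active trace id so it matches the traces' trace.id. The field names follow the logs-* convention mentioned above; everything else is an assumption.

```python
# Sketch (not the answer's .NET code): stamp a log document with the active trace id
# so it lines up with the APM traces, using opentelemetry-api in Python. The field
# names ("trace.id", "@timestamp", "message") follow the logs-* convention described
# above; everything else here is illustrative.
from datetime import datetime, timezone

from opentelemetry import trace


def build_log_entry(message: str, level: str = "info") -> dict:
    """Build a log document whose trace.id matches the currently active span."""
    ctx = trace.get_current_span().get_span_context()
    entry = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),  # set when the log is created
        "log.level": level,
        "message": message,
    }
    if ctx.is_valid:
        # Same 32-hex-character formatting that the traces use for trace.id.
        entry["trace.id"] = format(ctx.trace_id, "032x")
        entry["span.id"] = format(ctx.span_id, "016x")
    return entry
```

Documents built this way can then be batched and sent with the Bulk API, independently of the trace exporter, as described above.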

How to view and interpret Vertex AI logs

We have deployed models to a Vertex AI endpoint.
Now we want to view and interpret the logs regarding events
such as node creation, pod creation, user API call metrics, etc.
Is there any way or key by which we can filter the logs for analysis?
As you did not make your question specific, I will provide quite a general answer which might also help other members.
There is documentation which explains Vertex AI logging: Vertex AI audit logging information.
Google Cloud services write audit logs to help you answer the questions, "Who did what, where, and when?" within your Google Cloud resources.
Currently Vertex AI supports 2 types of Audit Logs:
Admin Activity audit logs
Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions.
Data Access audit logs
Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
The two other types, System Event audit logs and Policy Denied audit logs, are currently not supported in Vertex AI. You can find more information in the guide Google services with audit logs.
If you want to view audit logs, you can use the Console, the gcloud command, or the API. Depending on how you want to get them, you should follow the steps mentioned in Viewing audit logs. For example, if you use the Console, you will use the Logs Explorer.
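As a programmatic alternative, here is a small sketch assuming the google-cloud-logging Python client: it reads recent audit log entries with a Logs Explorer style filter. The project ID and the exact filter string are placeholders to adapt (the filter assumes Vertex AI audit entries are written under the aiplatform.googleapis.com service).

```python
# A small sketch, assuming the google-cloud-logging Python client: list recent
# audit log entries for Vertex AI. Project ID and filter are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project-id")  # hypothetical project

audit_filter = (
    'logName:"cloudaudit.googleapis.com" '
    'AND protoPayload.serviceName="aiplatform.googleapis.com"'
)

for entry in client.list_entries(
    filter_=audit_filter,
    order_by=logging.DESCENDING,
    max_results=20,
):
    print(entry.timestamp, entry.log_name)
```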
Additional threads which might be helpful:
How do we capture all container logs on google Vertex AI?
How to structure container logs in Vertex AI?
For container logs (logs that are created by your model) you currently can't:
the entire log entry is captured by the Vertex AI platform and assigned as a string to the "message" field within the parent "jsonPayload" field.
The answer above from @PjoterS suggests a workaround to that limitation, which isn't easy in my opinion.
It would have been better if Vertex AI had offered some mechanism by which you could log directly to the endpoint resource from the container using their Cloud Logging library, or better, unpacked the captured log fields as subfields of the "jsonPayload" parent field, or into "message".
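One way to cope with that limitation, sketched below as an assumption rather than an official mechanism: have the container emit one JSON object per line, then unpack the stringified "message" field again when reading the entries back with the Python client. The project ID, filter, and resource type are illustrative.

```python
# Hypothetical post-processing sketch for the limitation described above: if the
# container emits one JSON object per line, Vertex AI still stores it as a plain
# string in jsonPayload.message, but it can be unpacked again when the entries are
# read back. Names, filter, and resource type are illustrative assumptions.
import json

from google.cloud import logging

client = logging.Client(project="my-project-id")  # hypothetical project

# Assumed resource type for endpoint container logs; adjust to what your entries show.
prediction_filter = 'resource.type="aiplatform.googleapis.com/Endpoint"'

for entry in client.list_entries(filter_=prediction_filter, max_results=50):
    payload = entry.payload if isinstance(entry.payload, dict) else {}
    message = payload.get("message", "")
    try:
        fields = json.loads(message)   # recover the structure the container logged
    except (TypeError, ValueError):
        fields = {"raw": message}      # keep non-JSON lines as-is
    print(entry.timestamp, fields)
```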

How to get the validation error message into an attribute from the ValidateRecord processor of NiFi

I am trying to validate JSON using the ValidateRecord processor via an AvroSchemaRegistry. I need to store the validation error message in a SQL table, so I tried to capture the error message in an attribute, but I am unable to capture it. Any idea how to do it?
After your ValidateRecord processor, you can choose to route flow files which are 'invalid' to a separate log and route them to your SQL table; you can do the same if they 'fail'. I am assuming that by 'error message' you mean the 'bulletin' which would occur when the processor can neither validate nor invalidate the flow file based on your schema.
A potential solution to this is to use the SiteToSiteBulletinReportingTask
You can build a dataflow to receive these bulletin events, manipulate them as you want and store them in a location of your choice for your auditing needs.
From the sounds of it, the SiteToSiteBulletinReportingTask should be able to achieve what you want. To implement this, add a SiteToSiteBulletinReportingTask to the 'Reporting Tasks' in the NiFi Settings.
You can name your input port and have it flow towards your SQL store and you should have what you're after.
You need to allow NiFi nodes to receive data via site-to-site on the input port and you also need to grant the correct permissions on the root process group so the nodes are able to see the component, view and modify the data.
Side note: I would usually log everything and route all failures and invalid records to log files, which I then put into a store, e.g. HBase or SQL. One suggestion I've seen is to configure the logging subsystem to additionally send specific error categories to your destination of choice (i.e. active notification vs. passive parsing of logs). NiFi leverages a very flexible logback system (an evolution of log4j). The best part: changes to the $NIFI_HOME/conf/logback.xml configuration file do not require an instance restart and are picked up within 30 seconds or less.

How to see console.log in AWS Lambda functions

Where do you see the console.log() calls made inside of AWS Lambda functions? I looked at the AWS CloudWatch event log and didn't see them there. Is there a CLI way to see them?
console.log() should definitely end up in the CloudWatch logs for your function. You should be able to find the correct log group in the web console interface for your function under the Monitoring tab - Jump to Logs. Note that you will have a different log stream for each invocation of your function, and there may be a delay between logs being written and logs showing up in a stream, so be patient.
It's possible you do not have the IAM permissions to create log groups or write to log streams. Ashan has provided links on how to fix that.
Additionally, you can use the awslogs tool to list groups/streams, as well as to download or tail groups/streams:
To list available groups: awslogs groups
To list available streams in group app/foo: awslogs streams app/foo
To "tail -f" all streams from a log group app/foo: awslogs get app/foo ALL --watch
Make sure the IAM role assigned to the AWS Lambda function has permission to write to CloudWatch Logs. For more information regarding the policy, refer to Using Identity-Based Policies (IAM Policies) for CloudWatch Logs.
In addition, you should be able to view the CloudWatch log group by clicking on the CloudWatch Logs under Add Triggers in Lambda Console.

How to get details of logs from dashboards of Kibana

Hi, I am new to Kibana (ELK stack). I have created an error dashboard in Kibana, but if I need more information about the errors, how can I get it? For example, if I want to know what caused an error, how can I find that?
Any help will be appreciated.
For any information that you want to display in Kibana, you need to have a source where that particular information is available. In this case, if we assume that the logs being parsed by Logstash contain the information on the errors, then you can parse/filter the logs via the Logstash pipeline to extract that information and store it in a field (see the sketch below).
The documents saved in Elasticsearch will then contain that information, which can be used to perform various aggregations, or to apply different mathematical functions, and eventually to visualize it.
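That extraction step would normally live in the Logstash pipeline itself (for example a grok or dissect filter). Purely to illustrate the idea, here is a generic Python sketch with a hypothetical log format, showing the error details being promoted into dedicated fields that Kibana can later aggregate.

```python
# Generic sketch of the extraction step described above. In practice this would live
# in the Logstash pipeline (e.g. a grok or dissect filter); the log format and field
# names here are hypothetical, just to show promoting the error details into
# dedicated fields that Kibana can then aggregate and visualize.
import re

LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\S+) (?P<level>ERROR|WARN|INFO) "
    r"\[(?P<service>[^\]]+)\] (?P<error_message>.*)$"
)

def to_document(line: str) -> dict:
    """Turn one raw log line into a structured document for Elasticsearch."""
    match = LOG_PATTERN.match(line)
    if not match:
        return {"message": line}   # keep unparsed lines searchable as-is
    doc = match.groupdict()
    doc["message"] = line          # retain the original line as well
    return doc

print(to_document("2024-05-01T10:15:30Z ERROR [checkout] Payment gateway timeout"))
```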
