What events are triggered for PV/PVC and from where? - events

kubectl get events lists the events for K8s objects.
Where are the events for PV/PVC actually triggered from?
There is a list of volume events
https://docs.openshift.com/container-platform/4.5/nodes/clusters/nodes-containers-events.html
but it does not identify which events belong to which resource.

Let's start with what exactly a Kubernetes event is. Events are objects that provide insight into what is happening inside a cluster, such as what decisions were made by the scheduler or why some pods were evicted from a node. These API objects are persisted in etcd.
You can read more about them here and here.
There is also an excellent tutorial about Kubernetes events which you may find here.
There are a couple of ways to view/fetch more detailed events from Kubernetes:
Use kubectl get events -o wide. This will give you information about the object, subobject and source of the event. Here's an example:
LAST SEEN TYPE REASON OBJECT SUBOBJECT SOURCE MESSAGE
<unknown> Warning FailedScheduling pod/web-1 default-scheduler running "VolumeBinding" filter plugin for pod "web-1": pod has unbound immediate PersistentVolumeClaims
6m2s Normal ProvisioningSucceeded persistentvolumeclaim/www-web-1 k8s.io/minikube-hostpath 2481b4d6-0d2c-11eb-899d-02423db39261 Successfully provisioned volume pvc-a56b3f35-e7ac-4370-8fda-27342894908d
Use kubectl get events --output json. This will give you a list of events in JSON format containing other details such as selfLink.
---
"apiVersion": "v1",
"count": 1,
"eventTime": null,
"firstTimestamp": "2020-10-13T12:07:17Z",
"involvedObject": {
---
"kind": "Event",
"lastTimestamp": "2020-10-13T12:07:17Z",
"message": "Created container nginx",
"metadata": {
---
The selfLink can be used to determine the API location from which the data is being fetched.
We can take /api/v1/namespaces/default/events/ as an example and fetch the data from the API server using kubectl proxy:
kubectl proxy --port=8080 & curl http://localhost:8080/api/v1/namespaces/default/events/
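If you have jq installed you can, for example, pick out the reason and the involved object from that API response (a minimal sketch; it assumes the proxy from the previous command is still running):
curl -s http://localhost:8080/api/v1/namespaces/default/events/ | jq '.items[] | {reason: .reason, kind: .involvedObject.kind, name: .involvedObject.name}'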
Using all this information you can narrow down to specific details of the underlying object using a field selector:
kubectl get events --field-selector type!=Normal
or
kubectl get events --field-selector involvedObject.kind=PersistentVolumeClaim
LAST SEEN TYPE REASON OBJECT MESSAGE
44m Normal ExternalProvisioning persistentvolumeclaim/www-web-0 waiting for a volume to be created, either by external provisioner "k8s.io/minikube-hostpath" or manually created by system administrator
44m Normal Provisioning persistentvolumeclaim/www-web-0 External provisioner is provisioning volume for claim "default/www-web-0"
44m Normal ProvisioningSucceeded persistentvolumeclaim/www-web-0 Successfully provisioned volume pvc-815beb0a-b5f9-4b27-94ce-d21f2be728d5
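Field selectors can also be combined, so a sketch of narrowing down to a single claim (using the claim name www-web-0 from the output above) could look like this:
kubectl get events --field-selector involvedObject.kind=PersistentVolumeClaim,involvedObject.name=www-web-0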
Please also remember that the information provided by kubectl get events is the same as what you see in kubectl describe <object>.
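For example, to see the same events for a single claim directly (claim name taken from the output above), you could run:
kubectl describe pvc www-web-0
and look at the Events section at the bottom of the output.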
Lastly, if you look carefully into the event.go code you can see all the event references for volumes. If you compare those with Table 13. Volumes you can see that they are almost the same (except for WaitForPodScheduled and ExternalExpanding).
This means that OpenShift provides an aggregated set of information about possible events from different Kubernetes components that may occur in the cluster.

Related

Elastic Cloud APM not showing logs in Transactions Page

What makes Kibana not show Docker container logs in the APM "Transactions" page under the "Logs" tab?
I verified the logs are successfully being generated with the "trace.id" associated for proper linking.
I have the exact same environment and configs (7.16.2) up via docker-compose and it works perfectly.
I could not figure out why this feature works locally but does not show up in the Elastic Cloud deployment.
UPDATE with Solution:
I just solved the problem.
It's related to the Filebeat version.
From 7.16.0 onward, the transaction/logs linking stops working.
Reverted Filebeat back to version 7.15.2 and it started working again.
If you are not using Filebeat, for example: we rolled our own logging implementation to send logs from a queue in batches using the Bulk API.
We have our own "ElasticLog" class and then use Attributes to match the logs-* schema for the Log Stream.
In particular, we had to make sure that trace.id was the same as the actual trace's trace.id property. Then the logs started to show up there (it does take a few minutes sometimes).
Some more info on how to get the IDs:
We use the OpenTelemetry exporter for Traces and ILoggerProvider for Logs. They fire off batches independently of each other.
We populate the Trace IDs at the time of instantiation of the class as a default value. This way you are in the context of the Activity. It also helps to set the timestamp exactly when the log was created.
This LogEntry then gets passed into the ElasticLogger processor and mapped, as described above, to the ElasticLog entry with the Attributes needed for ES.

Save pod metadata in external log analysis tool

We currently save all Kubernetes logs on a central log analysis tool. We use fluentbit to ship the logs.
Although we are able to retrieve and analyze the logs sent to stdout by the containers, we are unable to find the information shown in pods when we describe them (kubectl describe pod somepod). We want to save this information too, as it shows the Exit Code and Reason for a Terminated pod. Like exit codes 137 and OOMKilled.
Having this information in Prometheus would also work. In Prometheus we can see that some of the information is present, like kube_pod_container_status_terminated_reason, but the exit code is missing.
How could this be performed?

How to view and Interprete Vertex AI Logs

We have deployed Models in the Vertex AI endpoint.
Now we want to view and interpret the logs regarding events
such as node creation, pod creation, user API call metrics, etc.
Is there any way or key by which we can filter the logs for analysis?
As you did not specify your question in detail, I will provide a fairly general answer which might help other members.
There is documentation which explains Vertex AI logging - Vertex AI audit logging information.
Google Cloud services write audit logs to help you answer the questions, "Who did what, where, and when?" within your Google Cloud resources.
Currently Vertex AI supports 2 types of Audit Logs:
Admin Activity audit logs
Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions.
Data Access audit logs
Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
The two other types, System Event audit logs and Policy Denied audit logs, are currently not supported in Vertex AI. You can find more information in the guide Google services with audit logs.
If you want to view audit logs, you can use the Console, the gcloud command or the API. Depending on how you want to get them, you should follow the steps mentioned in Viewing audit logs. For example, if you use the Console, you will use the Logs Explorer.
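As a rough sketch (PROJECT_ID is a placeholder and the filter assumes the standard Admin Activity audit log name), reading Vertex AI audit logs with gcloud could look like this:
gcloud logging read 'logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="aiplatform.googleapis.com"' --limit=10 --format=json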
Additional threads which might be helpful:
How do we capture all container logs on google Vertex AI?
How to structure container logs in Vertex AI?
For container logs (logs that are created by your model) you can't currently:
the entire log entry is captured by the Vertex AI platform and assigned as a string to the "message" field within the parent "jsonPayload" field.
The answer above by @PjoterS suggests a workaround to that limitation, which isn't easy in my opinion.
It would have been better if Vertex had offered some mechanism by which you could log directly to the endpoint resource from the container using their gcloud logging lib, or better, unpack the captured log fields as subfields of the "jsonPayload" parent field, or into "message".

GCP - creating a VM instance and extracting logs

I have a Java application in which I am using GCP to create VM instances from images.
In this application, I would like to allow the user to view the VM creation logs in order to stay updated on the status of the creation, and to be able to see failure points in detail.
I am sure such logs exist in GCP, but I have been unable to find specific APIs which let me see a specific action, for example the creation of instance "X".
Thanks for the help.
When you create a VM, the response that you get is a JobID (because the creation takes time and the Compute Engine API answers immediately). To know the status of the VM creation (and start) you have to poll this JobID regularly.
In the logs, you can also filter with this JobID to select and view only the logs that you want on the Compute API side (create/start errors).
If you want to see the logs of the VM itself, filter the logs not with the JobID but with the name of the VM and its zone.
In Java, there are client libraries that help you achieve this.
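As a sketch of the log filtering described above (the instance name and the v1 methodName are assumptions you would adjust to your case), the audit entries for the creation of a specific instance could be read from the command line like this:
gcloud logging read 'resource.type="gce_instance" AND protoPayload.methodName="v1.compute.instances.insert" AND protoPayload.resourceName:"my-instance"' --limit=10 --format=json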

how to see console.log in AWS lambda functions

Where do you see the console.log() calls made inside of AWS Lambda functions? I looked at the AWS CloudWatch event log and didn't see them there. Is there a CLI way to see them?
console.log() should definitely end up in the CloudWatch logs for your function. You should be able to find the correct log group in the web console interface for your function under the Monitoring tab - Jump to Logs. Note that you will have a different log stream for each invocation of your function, and there may be a delay between logs being written and logs showing up in a stream, so be patient.
It's possible you do not have the IAM permissions to create log groups or write to log streams. Ashan has provided links on how to fix that.
Additionally, you can use the awslogs tool to list groups/streams, as well as to download or tail groups/streams:
To list available groups: awslogs groups
To list available streams in group app/foo: awslogs streams app/foo
To "tail -f" all streams from a log group app/foo: awslogs get app/foo ALL --watch
Make sure the IAM role assigned to the AWS Lambda function has permission to write to CloudWatch Logs. For more information regarding the policy, refer to Using Identity-Based Policies (IAM Policies) for CloudWatch Logs.
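As a sketch of such a policy (role name and policy name are placeholders), an inline policy granting the usual CloudWatch Logs actions could be attached like this:
aws iam put-role-policy \
  --role-name my-lambda-role \
  --policy-name lambda-cloudwatch-logs \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:*:*:*"
    }]
  }'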
In addition, you should be able to view the CloudWatch log group by clicking on the CloudWatch Logs under Add Triggers in Lambda Console.
