GCP - creating a VM instance and extracting logs - spring-boot

I have a Java application in which I am using GCP to create VM instances from images.
In this application, I would like to let the user view the VM creation logs, in order to stay updated on the status of the creation and to see failure points in detail.
I am sure such logs exist in GCP, but I have been unable to find specific APIs that let me see a specific action, for example the creation of instance "X".
Thanks for the help

When you create a VM, the response that you get is a JobID (because the creation takes time, the Compute Engine API answers immediately). To know the status of the VM creation (and start), you have to poll this JobID regularly.
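That JobID is the name of a zone Operation, which you can poll until it reaches DONE. A minimal sketch with the google-cloud-compute v1 Java client; the project, zone, and operation name below are placeholders:

    import com.google.cloud.compute.v1.Operation;
    import com.google.cloud.compute.v1.ZoneOperationsClient;

    public class WaitForVmCreation {
        public static void main(String[] args) throws Exception {
            String project = "my-project";   // placeholder
            String zone = "europe-west1-b";  // placeholder
            String jobId = "operation-1234"; // name returned by the instances.insert call

            try (ZoneOperationsClient ops = ZoneOperationsClient.create()) {
                Operation op = ops.get(project, zone, jobId);
                while (op.getStatus() != Operation.Status.DONE) {
                    Thread.sleep(2000); // poll every 2 seconds
                    op = ops.get(project, zone, jobId);
                }
                if (op.hasError()) {
                    // The error detail lists the failure points of the creation.
                    System.out.println("Creation failed: " + op.getError());
                } else {
                    System.out.println("Instance created successfully");
                }
            }
        }
    }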
In the logs, you can also filter by this JobID to select and view only the logs that you want on the Compute API side (create/start errors).
If you want to see the logs of the VM itself, filter the logs not by the JobID but by the name of the VM and its zone.
In Java, there are client libraries that help you achieve this.
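For the log side, a hedged sketch with the google-cloud-logging Java client; the filter is an assumption (a VM named my-instance in europe-west1-b) and should be adjusted to the log type you are after:

    import com.google.cloud.logging.LogEntry;
    import com.google.cloud.logging.Logging;
    import com.google.cloud.logging.Logging.EntryListOption;
    import com.google.cloud.logging.LoggingOptions;

    public class ReadVmLogs {
        public static void main(String[] args) throws Exception {
            // Filter on the VM's zone and name (placeholders).
            String filter = "resource.type=\"gce_instance\""
                    + " AND resource.labels.zone=\"europe-west1-b\""
                    + " AND protoPayload.resourceName:\"instances/my-instance\"";

            try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
                for (LogEntry entry :
                        logging.listLogEntries(EntryListOption.filter(filter)).iterateAll()) {
                    System.out.println(entry);
                }
            }
        }
    }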

Related

Elastic Cloud APM not showing logs in Transactions Page

What makes Kibana not show Docker container logs in the APM "Transactions" page under the "Logs" tab?
I verified the logs are successfully being generated with the "trace.id" associated for proper linking.
I have the exact same environment and configs (7.16.2) up via docker-compose and it works perfectly.
I could not figure out why this feature works locally but does not show in the Elastic Cloud deployment.
UPDATE with Solution:
I just solved the problem.
It's related to the Filebeat version.
From 7.16.0 onward, the transaction/logs linking stops working.
Reverted Filebeat back to version 7.15.2 and it started working again.
If you are not using Filebeat: we, for example, rolled our own logging implementation to send logs from a queue in batches using the Bulk API.
We have our own "ElasticLog" class and then use attributes to match the logs-* schema for the Log Stream.
In particular, we had to make sure that trace.id was the same as the actual trace's trace.id property. Then the logs started to show up here (it does take a few minutes sometimes).
Some more info on how to get the IDs:
We use the OpenTelemetry exporter for traces and ILoggerProvider for logs. They fire off batches independently of each other.
We populate the trace IDs at the time of instantiation of the class, as a default value. This way you are still in the context of the Activity. It also helps to set the timestamp exactly when the log was created.
This LogEntry then gets passed into the ElasticLogger processor and mapped, as described above, to the ElasticLog entry with the attributes needed for ES.
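The answer above is .NET (ILoggerProvider/Activity); the same idea sketched in Java with the OpenTelemetry API, where ElasticLog and its fields are hypothetical names:

    import io.opentelemetry.api.trace.Span;
    import io.opentelemetry.api.trace.SpanContext;

    import java.time.Instant;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical ElasticLog: capture trace.id from the current span at the
    // moment the log entry is created, so it matches the exported trace exactly.
    public class ElasticLog {
        final Map<String, Object> fields = new HashMap<>();

        public ElasticLog(String message) {
            SpanContext ctx = Span.current().getSpanContext();
            fields.put("@timestamp", Instant.now().toString()); // timestamp at creation time
            fields.put("message", message);
            fields.put("trace.id", ctx.getTraceId()); // must equal the trace's trace.id
            fields.put("span.id", ctx.getSpanId());
        }
    }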

How to view and interpret Vertex AI logs

We have deployed models to a Vertex AI endpoint.
Now we want to find and interpret logs regarding events
of node creation, pod creation, user API call metrics, etc.
Is there any way or key by which we can filter the logs for analysis?
As your question is quite general, I will provide a general answer which might also help other community members.
There is documentation which explains Vertex AI logging information: Vertex AI audit logging information.
Google Cloud services write audit logs to help you answer the questions, "Who did what, where, and when?" within your Google Cloud resources.
Currently, Vertex AI supports two types of audit logs:
Admin Activity audit logs
Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions.
Data Access audit logs
Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
The two other types, System Event audit logs and Policy Denied audit logs, are currently not supported in Vertex AI. You can find more information in the guide Google services with audit logs.
If you want to view audit logs, you can use the Console, the gcloud command, or the API. Depending on how you want to get them, you should follow the steps mentioned in Viewing audit logs. For example, if you use the Console, you will use the Logs Explorer.
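As an illustration of the API route, a sketch with the google-cloud-logging Java client; the filter is an assumption based on the audit log naming (Admin Activity logs for the aiplatform.googleapis.com service):

    import com.google.cloud.logging.Logging;
    import com.google.cloud.logging.Logging.EntryListOption;
    import com.google.cloud.logging.LoggingOptions;

    public class ReadVertexAuditLogs {
        public static void main(String[] args) throws Exception {
            // Admin Activity audit logs written by Vertex AI.
            String filter = "logName:\"cloudaudit.googleapis.com%2Factivity\""
                    + " AND protoPayload.serviceName=\"aiplatform.googleapis.com\"";

            try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
                logging.listLogEntries(EntryListOption.filter(filter))
                        .iterateAll()
                        .forEach(System.out::println);
            }
        }
    }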
Additional threads which might be helpful:
How do we capture all container logs on google Vertex AI?
How to structure container logs in Vertex AI?
For container logs (logs that are created by your model) you currently can't:
the entire log entry is captured by the Vertex AI platform and assigned as a string to the "message" field within the parent "jsonPayload" fields.
The answer above from @PjoterS suggests a workaround to that limitation, which isn't easy in my opinion.
It would have been better if Vertex offered some mechanism to log directly to the endpoint resource from the container using their Cloud Logging library, or better, unpacked the captured log fields as subfields of the "jsonPayload" parent field, or into "message".

How to show table-like information in Nagios?

My setup is something like this.
I have multiple servers. Each server has multiple instances of the same service (a multi-tenant-like architecture). Now I want to get the status of all services running on all servers using SNMP.
The problem I am facing is: how can someone show table-like information in Nagios?
I.e., when I click on any particular server, it shows me the list of services. Then when I click on any service, it should again give me the list of instances of that particular service.
There's no such feature in Nagios. You will need to set up a check for each of the services running on the monitored host.
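For example, each tenant instance can get its own service definition. A hypothetical sketch (host name, descriptions, and the SNMP OIDs are placeholders, and the check_snmp command is assumed to pass its argument through to the check_snmp plugin):

    # One service object per tenant instance of the service.
    define service {
        use                  generic-service
        host_name            server1
        service_description  MyService - instance 1
        check_command        check_snmp!-o .1.3.6.1.4.1.2021.8.1.101.1
    }

    define service {
        use                  generic-service
        host_name            server1
        service_description  MyService - instance 2
        check_command        check_snmp!-o .1.3.6.1.4.1.2021.8.1.101.2
    }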

Open a JDBC connection in a specific AS400 subsystem

I have a web service that calls some stored procedures on an AS400 via JTOpen.
What I would like is for the connections used to call the stored procedures to be opened in a specific subsystem, with a specific user, instead of QUSRWRK/QUSER as now (the default).
I think I could clone the QUSRWRK subsystem to make it start with a specific user, but what I cannot figure out is the mechanism to open the connection in that specific subsystem.
I guess there should be a property at the connection level saying subsystem=MySubsystem.
But unfortunately I haven't found that property.
Any hint would be appreciated.
Flavio
Let the system take care of the subsystem the database server job is started in.
You should just focus on the application (which is what IBM i excels at).
If need be, you can tweak subsystem parameters for QUSRWRK to improve performance by allocating memory, etc.
The system uses a pool of prestarted jobs as described in the FAQ: When I do WRKACTJOB, why is the host server job running under QUSER instead of the profile specified on the AS400 object?
To improve performance, the host server jobs are prestarted jobs running under QUSER. When the Toolbox connects to a host server job in order to perform an API call, run a command, etc., a request is sent from the Toolbox to an available prestarted job. This request includes the user profile specified on the AS400 object that represents the connection. The host server job receives the request and swaps to the specified user profile before it runs the request. The host server itself originally runs under the QUSER profile, so output from the WRKACTJOB command will show the job as being owned by QUSER. However, the job is in fact running under the profile specified on the request. To determine what profile is being used for any given host server job, you can do one of three things:
1. Display the job log for that job and find the message indicating which user profile is used as a result of the swap.
2. Work with the job and display job status attributes to view the current user profile.
3. Use Navigator for i to view all of the server jobs, which will list the current user of each job. You can also use Navigator for i to look at the server jobs being used by a particular user.
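A minimal JTOpen JDBC sketch of that behaviour; the host, user, and password are placeholders, and the query just echoes the profile the request actually runs under:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class As400ProfileCheck {
        public static void main(String[] args) throws Exception {
            // The profile given here is the one the prestarted QUSRWRK job
            // swaps to, even though WRKACTJOB shows the job under QUSER.
            Class.forName("com.ibm.as400.access.AS400JDBCDriver");
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:as400://myas400;prompt=false", "MYUSER", "mypassword");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT USER FROM SYSIBM.SYSDUMMY1")) {
                while (rs.next()) {
                    System.out.println("Running under profile: " + rs.getString(1));
                }
            }
        }
    }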

AWS console not showing all instances during volume attach

I do the following using the AWS web console:
1. Attach EBS volume-A to instance-A. Make some changes to the data on volume-A and detach it.
2. Launch a new instance-B (in the same zone as instance-A).
3. Try to attach volume-A to the new instance-B. But the new instance does not appear in the instance list during the attach-volume process (dialog box).
If I try the same attach using the command-line EC2 API (volume-A and instance-B), it works fine!
Do you know if this is a bug in the AWS web console, or am I doing something wrong? I tried a page refresh in step #3 but it still would not list the new instance.
In order to attach, the volume and the instance have to be in the same zone. So if you are going to attach a volume to an instance, check the zone of the instance's attached volume. If they do not match, create a new instance in the same zone as the volume that you need to attach.
The volume and the instance have to be in the same region AND the same zone.
If you have a volume in us-east-1a and the instance in us-east-1b, you would need to move the volume to us-east-1b to make it work.
I faced this problem yesterday and the day before as well. It looks like a problem with Amazon's cache; not sure why.
To get things back to normal, I had to sign out and back in. But it's always good to work with the CLI; it works better.
Although the user interface may not list the instance ID, you can attempt to add the volume anyway. If it's genuinely impossible (rather than a cache issue) you will get an error message.
Paste in the instance ID (i-xxxxxxx) manually then type your mount point (e.g. /dev/sdf) and click Attach.
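In the same spirit as the CLI route, a hedged sketch of the attach call with the AWS SDK for Java v1, reusing the IDs and device name from the examples above:

    import com.amazonaws.services.ec2.AmazonEC2;
    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
    import com.amazonaws.services.ec2.model.AttachVolumeRequest;
    import com.amazonaws.services.ec2.model.AttachVolumeResult;

    public class AttachVolume {
        public static void main(String[] args) {
            AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

            // Volume ID, instance ID, and device name as in the console example.
            AttachVolumeRequest request = new AttachVolumeRequest()
                    .withVolumeId("vol-12341234")
                    .withInstanceId("i-xxxxxxx")
                    .withDevice("/dev/sdf");

            AttachVolumeResult result = ec2.attachVolume(request);
            System.out.println(result.getAttachment().getState());
        }
    }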
For the benefit of others: some instance types do not support encrypted volumes, which may be why the instance doesn't appear in the list. I get the following error:
Error attaching volume: 'vol-12341234' is encrypted and 't2.medium' does not support encrypted volumes.
