Stackdriver API to update a policy - google-cloud-stackdriver

I would like to update a Stackdriver alert policy using the Monitoring API via gcloud. I'm getting an error for the condition. I'm basically trying to add a filter to exclude /dev/loop.* devices. Any help would be much appreciated.
gcloud alpha monitoring policies update projects/projectnameID/alertPolicies/policynumberxxx/conditions/conditionnumberxxx \
--condition-filter='metric.type="agent.googleapis.com/disk/percent_used" AND resource.type="gce_instance"' metric.label."device"!=monitoring.regex.full_match "/dev/loop.*"
I can do this via the GUI but some legacy policies will only update via API calls.
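The error above is most likely a shell-quoting issue: the device-exclusion clause sits outside the quoted --condition-filter argument, so the shell passes it as separate words. A sketch of one way to get the quoting right, assembling the filter in a variable first (the project/policy/condition IDs are the question's own placeholders, and `metric.labels.device` is my assumption based on the standard Monitoring filter syntax):

```shell
# The whole Monitoring filter must reach gcloud as ONE argument; building it
# in a variable first makes the nested quotes easier to manage.
FILTER='metric.type="agent.googleapis.com/disk/percent_used" AND resource.type="gce_instance" AND metric.labels.device != monitoring.regex.full_match("/dev/loop.*")'
echo "$FILTER"
# Then pass it as a single argument (placeholder IDs from the question):
#   gcloud alpha monitoring policies update \
#     projects/projectnameID/alertPolicies/policynumberxxx/conditions/conditionnumberxxx \
#     --condition-filter="$FILTER"
```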

Related

How to view and interpret Vertex AI logs

We have deployed models to a Vertex AI endpoint.
Now we want to view and interpret logs for events
such as node creation, pod creation, user API call metrics, etc.
Is there any way or key by which we can filter the logs for analysis?
As you did not specify your question exactly, I will provide a fairly general answer which might also help other members.
There is documentation which explains Vertex AI logging: Vertex AI audit logging information.
Google Cloud services write audit logs to help you answer the questions, "Who did what, where, and when?" within your Google Cloud resources.
Currently, Vertex AI supports two types of audit logs:
Admin Activity audit logs
Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources. For example, these logs record when users create VM instances or change Identity and Access Management permissions.
Data Access audit logs
Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.
The other two types, System Event audit logs and Policy Denied audit logs, are currently not supported in Vertex AI. You can find more information in the guide Google services with audit logs.
If you want to view audit logs, you can use the Console, the gcloud command-line tool, or the API. Depending on how you want to get them, follow the steps in Viewing audit logs. For example, if you use the Console, you will use the Logs Explorer.
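As an example of the gcloud route, a sketch of reading recent Vertex AI Admin Activity audit log entries (this assumes audit logging is enabled for your project; `aiplatform.googleapis.com` is the service name Vertex AI writes under):

```shell
# Read the 10 most recent Admin Activity audit log entries produced by
# Vertex AI (service name aiplatform.googleapis.com) in the current project.
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="aiplatform.googleapis.com"' \
  --limit=10 \
  --format=json
```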
Additional threads which might be helpful:
How do we capture all container logs on google Vertex AI?
How to structure container logs in Vertex AI?
For container logs (logs that are created by your model), you currently can't:
the entire log entry is captured by the Vertex AI platform and assigned as a string to the "message" field within the parent "jsonPayload" field.
The answer above by @PjoterS suggests a workaround to that limitation, which isn't easy in my opinion.
It would have been better if Vertex AI offered some mechanism to log directly to the endpoint resource from the container using their Cloud Logging library, or better, unpacked the captured log fields as subfields of the parent "jsonPayload" field, or into "message".
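To illustrate the limitation described above, a hedged sketch of pulling endpoint container logs and extracting the stringified "message" field (the resource type `aiplatform.googleapis.com/Endpoint` is my assumption for prediction-container logs; adjust it to whatever your log entries actually show, and `jq` must be installed):

```shell
# Fetch recent endpoint container log entries and print only the raw
# "message" string nested under jsonPayload.
gcloud logging read 'resource.type="aiplatform.googleapis.com/Endpoint"' \
  --limit=20 --format=json \
  | jq -r '.[].jsonPayload.message // empty'
```

Any structure inside `message` (e.g. JSON your container printed) still has to be re-parsed from that string, which is exactly the workaround the answer above refers to.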

how to see console.log in AWS lambda functions

Where do you see the console.log() calls made inside AWS Lambda functions? I looked at the AWS CloudWatch event log and didn't see them there. Is there a CLI way to see them?
console.log() output should definitely end up in the CloudWatch logs for your function. You should be able to find the correct log group in the web console for your function under the Monitoring tab, via "Jump to Logs". Note that you will have a different log stream for each invocation of your function, and there may be a delay between logs being written and logs showing up in a stream, so be patient.
It's possible you do not have the IAM permissions to create log groups or write to log streams. Ashan has provided links on how to fix that.
Additionally, you can use the awslogs tool to list groups/streams, as well as to download or tail groups/streams:
To list available groups: awslogs groups
To list available streams in group app/foo: awslogs streams app/foo
To "tail -f" all streams from a log group app/foo: awslogs get app/foo ALL --watch
Make sure the IAM role assigned to the AWS Lambda function has permission to write to CloudWatch Logs. For more information regarding the policy, refer to Using Identity-Based Policies (IAM Policies) for CloudWatch Logs.
In addition, you should be able to view the CloudWatch log group by clicking on the CloudWatch Logs under Add Triggers in Lambda Console.
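If you have AWS CLI v2 installed, you can also tail the function's log group directly, without installing awslogs (the function name `my-function` is a placeholder):

```shell
# Lambda log groups follow the /aws/lambda/<function-name> convention.
# --follow behaves like "tail -f"; --since limits how far back to start.
aws logs tail /aws/lambda/my-function --follow --since 1h
```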

configure slack with watcher on found

I have a webhook in Slack and I have to configure it with Watcher. The Elastic docs say this:
To configure a Slack account in Watcher, you set the watcher.actions.slack.service property in elasticsearch.yml. You must set the url to your incoming webhook integration URL.
But Found doesn't give access to elasticsearch.yml.
For example, on the local server, the following snippet configures an account called notify-monitoring and sets the default sender name to Watcher.
watcher.actions.slack.service:
  account:
    monitoring:
      url: https://hooks.slack.com/services/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
      message_defaults:
        from: Watcher
How do I configure it on Found?
It's been a while, but for anyone else who comes across this: Elastic Cloud (previously branded as Found) now does support setting some (and only some!) elasticsearch.yml configuration through its web interface.
It's accessible under:
Clusters => Cluster Name => Configuration => User Settings
I was able to insert the Slack notification settings there successfully!
IMPORTANT: this change does not propagate immediately. It took some time for me, so you might have to go get a cup of coffee while it's applied.
Also, another tip: You might find it easier to add a default_account for Slack, as follows:
watcher.actions.slack.service:
  default_account: monitoring
  account:
    monitoring:
      url: <your_slack_hook_url>
      message_defaults:
        from: watcher
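Independently of the Watcher configuration, it can help to first confirm the incoming webhook itself works; a minimal check (substitute your own webhook URL for the placeholder):

```shell
# Slack incoming webhooks accept a simple JSON payload; an HTTP 200 response
# with body "ok" means the hook is live.
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Watcher webhook test"}' \
  'https://hooks.slack.com/services/YOUR/WEBHOOK/URL'
```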

Is it possible to disable the source/destination check from the Amazon SDK?

I am creating a VPC in Amazon's cloud, but I can't figure out how to disable the source/dest check on my NAT instance from the AWS SDK. Specifically, I am using ruby and the docs show a call that will return a boolean indicating if it is on or not: http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/EC2/Instance.html#source_dest_check-instance_method
I don't see anywhere that I can actually set it from the AWS SDK. I can do it through the console or through the command line tools, but it looks like they might have left this out of the API?
No, it hasn't been left out of the API. You can use this:
http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/EC2/Client.html#modify_instance_attribute-instance_method
or this:
http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/EC2/Client.html#modify_network_interface_attribute-instance_method

Queued build is not connecting to the DB because it uses domainName\computerName instead of domainName\username

I am trying to queue a build in my own build definition, but the SQL connection in my code throws an exception that login failed for user 'domainName\computerName$', which is natural since it should have used domainName\userAlias.
My question is: why is it using domainName\computerName, and how do I make it use Windows authentication instead? Can someone please help me with this?
You need to set the service account that the build service uses on the server(s) running your Build Agent(s). It sounds like it's currently set to run as Network Service.
You can change it by firing up the TFS Administration Console, going to Build Configuration, and changing the properties on the service.
