Google Cloud Monitoring Ruby client permission issue - metrics

I am following the Ruby code sample to add a custom metric to Stackdriver; however, I keep getting a permission denied error.
client = Google::Cloud::Monitoring::Metric.new
project_name = Google::Cloud::Monitoring::V3::MetricServiceClient.project_path project_id

descriptor = Google::Api::MetricDescriptor.new(
  type: "custom.googleapis.com/my_metric#{random_suffix}",
  metric_kind: Google::Api::MetricDescriptor::MetricKind::GAUGE,
  value_type: Google::Api::MetricDescriptor::ValueType::DOUBLE,
  description: "This is a simple example of a custom metric."
)

result = client.create_metric_descriptor project_name, descriptor
The error I get is: "Google::Gax::PermissionDeniedError (GaxError RPC failed, caused by 7:Permission monitoring.metricDescriptors.create denied (or the resource may not exist).)"
The environment variable GOOGLE_APPLICATION_CREDENTIALS is set, and it works fine for the Google Cloud Storage code below:
storage = Google::Cloud::Storage.new project: project_id

# Make an authenticated API request
storage.buckets.each do |bucket|
  puts bucket.name
end
At this point, I don't know what the problem is. Do I need to set up a different credential for Cloud Monitoring?

Related

Specifying MSK credentials in an AWS CDK stack

I have code that seems to "almost" deploy. It will fail with the following error:
10:55:25 AM | CREATE_FAILED | AWS::Lambda::EventSourceMapping | QFDSKafkaEventSour...iltynotifyEFE73996
Resource handler returned message: "Invalid request provided: The secret provided in 'sourceAccessConfigurations' is not associated with cluster some-valid-an. Please provide a secret associated with the cluster. (Service: Lambda, Status Code: 400, Request ID: some-uuid )" (RequestToken: some-uuid, HandlerErrorCode: InvalidRequest)
I've cobbled together the CDK stack from multiple tutorials while trying to learn CDK. I've gotten it to the point that I can deploy a Lambda, specify one (or more) layers for the Lambda, and even specify any of several different sources for triggers. But our production Kafka requires credentials, and I can't figure out for the life of me how to supply those so that this will deploy correctly.
Obviously, those credentials shouldn't be included in the git repo of my codebase. I assume I will have to set up a Secrets Manager secret with part or all of the values. We're using SCRAM-SHA-512, and it includes a username/password pair. The 'secret_name' value to Secret() is probably the name/path of the Secrets Manager secret. I have no idea what the second, unnamed param is for, and I'm having trouble figuring that out. Can anyone point me in the right direction?
Stack code follows:
#!/usr/bin/env python3
from aws_cdk import (
    aws_lambda as lambda_,
    App, Duration, Stack
)
from aws_cdk.aws_lambda_event_sources import ManagedKafkaEventSource
from aws_cdk.aws_secretsmanager import Secret


class ExternalRestEndpoint(Stack):
    def __init__(self, app: App, id: str) -> None:
        super().__init__(app, id)

        secret = Secret(self, "Secret", secret_name="integrations/msk/creds")
        msk_arn = "some valid and confirmed arn"

        # Lambda layer.
        lambdaLayer = lambda_.LayerVersion(self, 'lambda-layer',
            code=lambda_.AssetCode('utils/lambda-deployment-packages/lambda-layer.zip'),
            compatible_runtimes=[lambda_.Runtime.PYTHON_3_7],
        )

        # Source for the lambda.
        with open("src/path/to/sourcefile.py", encoding="utf8") as fp:
            mysource_code = fp.read()

        # Config for it.
        lambdaFn = lambda_.Function(
            self, "QFDS",
            code=lambda_.InlineCode(mysource_code),
            handler="lambda_handler",
            timeout=Duration.seconds(300),
            runtime=lambda_.Runtime.PYTHON_3_7,
            layers=[lambdaLayer],
        )

        # Set up the event source (managed Kafka).
        lambdaFn.add_event_source(ManagedKafkaEventSource(
            cluster_arn=msk_arn,
            topic="foreign.endpoint.availabilty.notify",
            secret=secret,
            batch_size=100,  # default
            starting_position=lambda_.StartingPosition.TRIM_HORIZON
        ))
Looking at your code sample, I understand that you are working with Amazon MSK as an event source, and not just self-managed (cross-account) Kafka.
I assume I will have to set up a Secrets Manager secret with part or all of the values
You don't need to set up credentials. If you use MSK with SASL/SCRAM, you already have credentials, which must be associated with the MSK cluster.
As you can see from the doc, your secret name should start with AmazonMSK_, for example AmazonMSK_LambdaSecret.
So, in the code above, you will need to fix this line:
secret = Secret(self, "Secret", secret_name="AmazonMSK_LambdaSecret")
I assume you are already aware of the CDK Python docs, but I will just add them here for reference.
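As a side note: since that SASL/SCRAM secret is already associated with the cluster, it already exists in Secrets Manager, so you may prefer to import it into the stack rather than define a new Secret resource. A minimal sketch for the stack above, assuming a CDK version where Secret.from_secret_name_v2 is available; the construct id and secret name are placeholders:
from aws_cdk.aws_secretsmanager import Secret

# Inside ExternalRestEndpoint.__init__: reference the existing secret by name
# ("AmazonMSK_LambdaSecret" is a placeholder) instead of creating a new one,
# then pass it to ManagedKafkaEventSource exactly as in the stack above.
secret = Secret.from_secret_name_v2(self, "MskSecret", "AmazonMSK_LambdaSecret")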

GlueContext write_dynamic_frame fails after adding BucketOwnerFullControl configuration

I am trying to write a DataFrame into an S3 bucket in a different account as JSON output.
The following code fails with an S3 Access Denied error in a Glue Spark streaming job. But if I run the code without the first line below, it works and the output is written to the S3 bucket:
glueContext._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
glueContext.write_dynamic_frame.from_options(
    frame=dynamic_df, connection_type="s3",
    connection_options={"path": output_path},
    format=file_format, transformation_ctx="datasink")
Here is the error log:
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception:
Access Denied (Service: Amazon S3; Status Code: 403; Error Code:
AccessDenied; Request ID: W52125NY7G3EF7WH; S3 Extended Request ID:
4t9JOJedv2qNRy6W8ySxdQQ7r+TMN1MWpZCFOK1IKO6W4gx4a2oKuK5vwXUPnh4HkkPAG+LnEIc=;
Proxy: null), S3 Extended Request ID: 4t9JOJedv2qUPnh4HkkPAG+LnEIc= at
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819)
at
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403)
at
This looks strange to me, as the bucket has full permission and the write works perfectly when the second line alone is executed, but the object owner is still the Glue account, which is what I am trying to change using fs.s3.canned.acl.
The destination bucket is also set up with the 'Bucket owner preferred' option.
Please suggest what I am doing wrong.
Thanks
It's not enough to have the permission in the bucket policy only.
The Glue role was also missing the s3:PutObjectAcl permission in IAM.
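For reference, a minimal sketch of how that permission might be attached to the Glue job role with boto3; the role name, policy name, and destination bucket are placeholders:
import json
import boto3

iam = boto3.client("iam")

# Attach an inline policy granting PutObject/PutObjectAcl on the cross-account
# bucket to the Glue job role (all names below are placeholders).
iam.put_role_policy(
    RoleName="my-glue-job-role",
    PolicyName="AllowPutObjectAcl",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::destination-bucket/*"
        }]
    })
)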

How to load an AzureML model in an Azure Databricks compute?

I am trying to run a DatabricksStep. I have used ServicePrincipalAuthentication to authenticate the run:
appId = dbutils.secrets.get(<secret-scope-name>, <client-id>)
tenant = dbutils.secrets.get(<secret-scope-name>, <directory-id>)
clientSecret = dbutils.secrets.get(<secret-scope-name>, <client-secret>)
subscription_id = dbutils.secrets.get(<secret-scope-name>, <subscription-id>)
resource_group = <aml-rgp-name>
workspace_name = <aml-ws-name>
svc_pr = ServicePrincipalAuthentication(
    tenant_id=tenant,
    service_principal_id=appId,
    service_principal_password=clientSecret)

ws = Workspace(
    subscription_id=subscription_id,
    resource_group=resource_group,
    workspace_name=workspace_name,
    auth=svc_pr
)
The authentication is successful since running the following block of code gives the desired output:
subscription_id = ws.subscription_id
resource_group = ws.resource_group
workspace_name = ws.name
workspace_region = ws.location
print(subscription_id, resource_group, workspace_name, workspace_region, sep='\n')
However, the following block of code gives an error:
model_name=<registered-model-name>
model_path = Model.get_model_path(model_name=model_name, _workspace=ws)
loaded_model = joblib.load(model_path)
print('model loaded!')
This is giving an error:
UserErrorException:
Message:
Operation returned an invalid status code 'Forbidden'. The possible reason could be:
1. You are not authorized to access this resource, or directory listing denied.
2. you may not login your azure service, or use other subscription, you can check your
default account by running azure cli commend:
'az account list -o table'.
3. You have multiple objects/login session opened, please close all session and try again.
InnerException None
ErrorResponse
{
  "error": {
    "message": "\nOperation returned an invalid status code 'Forbidden'. The possible reason could be:\n1. You are not authorized to access this resource, or directory listing denied.\n2. you may not login your azure service, or use other subscription, you can check your\ndefault account by running azure cli commend:\n'az account list -o table'.\n3. You have multiple objects/login session opened, please close all session and try again.\n ",
    "code": "UserError"
  }
}
The error is a Forbidden error even though I have authenticated using ServicePrincipalAuthentication.
How do I resolve this error so that I can run inference using an AML-registered model in ADB?
The Databricks workspace needs to be in the same subscription as your AML workspace.
This notebook demonstrates the use of DatabricksStep in an Azure Machine Learning pipeline.
Here is the reference for the Model class register method.
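For reference, a minimal sketch of fetching and loading the registered model through the Model class once the workspace is in the right subscription, assuming the model was registered as a single joblib-serialized file (the model name is a placeholder):
import joblib
from azureml.core.model import Model

# Look up the registered model in the AML workspace and download it locally
# ("my-model" is a placeholder for the registered model name).
model = Model(workspace=ws, name="my-model")
model_path = model.download(target_dir=".", exist_ok=True)

loaded_model = joblib.load(model_path)
print("model loaded!")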

CDK deploy cannot read the temporary S3 bucket that holds the Lambda code

When I deploy Lambda code using CDK, the deploy process (CloudFormation, presumably running under my user) does not seem to have access to the bucket that holds the Lambda code.
I followed this tutorial: https://intro-to-cdk.workshop.aws/what-is-cdk.html and see this error when I run cdk deploy:
Lambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-19kn1ypcmzq2q/assets/5327df
Lambda Code:
const handler = new lambda.Function(this, "TimestreamLambda", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.fromAsset(path.join(__dirname, '../resources')),
  handler: "index.hello_world",
  ...
The cdk and @aws-cdk version is 1.73.0, but I also tried with 1.71.0.
Notes:
I see the bucket under my account (in my region).
When logged into this account I can see and download the asset file.
The downloaded zip file has the correct contents.
More error details:
12/24 | 9:15:19 PM | CREATE_FAILED | AWS::Lambda::Function | TimestreamLambda (TimestreamLambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-28hiljazvaim/assets/5327df740bdc9c380ff567xxxxxxxxxxx7a68a.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: AWSLambdaInternal; Status Code: 403; Error Code: AccessDeniedException; Request ID: 1b813776-7647-4767-89bc-XXXXXXXXX; Proxy: null)
new Function (/Users/<user>/dev/cdk/cdk-workshop/node_modules/@aws-cdk/aws-lambda/lib/function.ts:593:35)
\_ new CdkWorkshopStack (/Users/<user>/dev/cdk/cdk-workshop/lib/cdk-workshop-stack.ts:33:21)
I also see this (using the -v option) during deploy:
env: {
  CDK_DEFAULT_REGION: 'us-west-2',
  CDK_DEFAULT_ACCOUNT: '94646XXXXX',
  CDK_CONTEXT_JSON: '{"@aws-cdk/core:enableStackNameDuplicates":"true","aws-cdk:enableDiffNoFail":"true","@aws-cdk/core:stackRelativeExports":"true","aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
  CDK_OUTDIR: 'cdk.out',
  CDK_CLI_ASM_VERSION: '7.0.0',
  CDK_CLI_VERSION: '1.73.0'
}
As it turns out, this was an issue with the internal authentication system my company uses to access AWS. Instead of using my regular AWS account for access, I had to create a temporary account (which also sets a temporary token).

Databricks - Reading a BLOB from mounted file storage

I am using Azure Databricks and I ran the following Python code:
sas_token = "<my sas key>"

dbutils.fs.mount(
    source = "wasbs://<container>@<storageaccount>.blob.core.windows.net",
    mount_point = "/mnt/gl",
    extra_configs = {"fs.azure.sas.<container>.<storageaccount>.blob.core.windows.net": sas_token})
This seemed to run fine. So I then ran:
df = spark.read.text("/mnt/gl/glAgg_LE.csv")
Which gave me the error:
shaded.databricks.org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Not sure what I'm doing wrong, though. I'm pretty sure my SAS key is correct.
OK, if you are getting this error, double-check both the SAS key and the container name.
Turned out I had pointed it to the wrong container!
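If you hit the same thing, a quick way to sanity-check the mount is to list the existing mounts and then remount against the corrected container; a minimal sketch with placeholder names:
# Show each mount point and the source it resolves to.
display(dbutils.fs.mounts())

# Remount with the correct container name (placeholders below).
dbutils.fs.unmount("/mnt/gl")
dbutils.fs.mount(
    source="wasbs://<correct-container>@<storageaccount>.blob.core.windows.net",
    mount_point="/mnt/gl",
    extra_configs={"fs.azure.sas.<correct-container>.<storageaccount>.blob.core.windows.net": sas_token})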
