Google Play Developer API: "The caller does not have permission" for a service account with full Admin access

I'm trying to set up automation for publishing my app via the Google Play Developer API. I have set up a service account as per the user guide, with Service Account User + Owner permissions. The service account shows up in the Google Play Console, and I've given it full Admin access plus full access to the app.
I've created a .json key for the service account and, as a small test, tried the google-api-python-client library in Python to see if it works. Note that I have only created a dummy app with a package name and haven't uploaded any APKs yet.
from google.oauth2 import service_account
import googleapiclient.discovery

if __name__ == '__main__':
    scopes = ['https://www.googleapis.com/auth/androidpublisher']
    package_name = 'xxxxxxxxxxxx'
    service_account_file = 'creds.json'

    credentials = service_account.Credentials.from_service_account_file(
        service_account_file, scopes=scopes)
    print(type(credentials))
    # delegated_credentials = credentials.with_subject('rohella.anshuman@gmail.com')

    android = googleapiclient.discovery.build('androidpublisher', 'v3', credentials=credentials)
    res = android.edits().insert(body={}, packageName=package_name).execute()
    print(res)
However, I keep getting a 403 permission-denied error. I've checked all the access settings in the Cloud console and the Play Console, and it should work per the official docs. Here is the log output:
(env) ➜ android-publish python main.py
<class 'google.oauth2.service_account.Credentials'>
Traceback (most recent call last):
  File "/Users/xxxxxx/Workspace/xxxxxxxx/android-publish/main.py", line 14, in <module>
    res = android.edits().insert(body={}, packageName=package_name).execute()
  File "/Users/xxxxxxx/Workspace/xxxxxxxx/android-publish/env/lib/python3.9/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/Users/xxxxxx/Workspace/xxxxx/android-publish/env/lib/python3.9/site-packages/googleapiclient/http.py", line 938, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://androidpublisher.googleapis.com/androidpublisher/v3/applications/com.odecloud.odesocial.beta/edits?alt=json returned "The caller does not have permission". Details: "The caller does not have permission">
Additional info: permissions in the Google Cloud console for the service account user (screenshot).
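As a debugging aid, here is a minimal sketch of the same call with explicit error handling; the scope, key file, and package name are the placeholders from the question. Printing the full HttpError body can reveal which permission or account linkage the API is actually rejecting, which is often more specific than the top-level status message.

from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

scopes = ['https://www.googleapis.com/auth/androidpublisher']
credentials = service_account.Credentials.from_service_account_file(
    'creds.json', scopes=scopes)
android = build('androidpublisher', 'v3', credentials=credentials)

try:
    # edits().insert opens a new edit transaction for the app
    edit = android.edits().insert(body={}, packageName='xxxxxxxxxxxx').execute()
    print('Edit id:', edit['id'])
except HttpError as err:
    # err.content carries the raw JSON error body returned by the API
    print(err.resp.status, err.content)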

Related

How to load an AzureML model in an Azure Databricks compute?

I am trying to run a DatabricksStep. I have used ServicePrincipalAuthentication to authenticate the run:
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

appId = dbutils.secrets.get(<secret-scope-name>, <client-id>)
tenant = dbutils.secrets.get(<secret-scope-name>, <directory-id>)
clientSecret = dbutils.secrets.get(<secret-scope-name>, <client-secret>)
subscription_id = dbutils.secrets.get(<secret-scope-name>, <subscription-id>)
resource_group = <aml-rgp-name>
workspace_name = <aml-ws-name>

svc_pr = ServicePrincipalAuthentication(
    tenant_id=tenant,
    service_principal_id=appId,
    service_principal_password=clientSecret)

ws = Workspace(
    subscription_id=subscription_id,
    resource_group=resource_group,
    workspace_name=workspace_name,
    auth=svc_pr
)
The authentication is successful since running the following block of code gives the desired output:
subscription_id = ws.subscription_id
resource_group = ws.resource_group
workspace_name = ws.name
workspace_region = ws.location
print(subscription_id, resource_group, workspace_name, workspace_region, sep='\n')
However, the following block of code gives an error:
from azureml.core.model import Model
import joblib

model_name = <registered-model-name>
model_path = Model.get_model_path(model_name=model_name, _workspace=ws)
loaded_model = joblib.load(model_path)
print('model loaded!')
This is giving an error:
UserErrorException:
Message:
Operation returned an invalid status code 'Forbidden'. The possible reason could be:
1. You are not authorized to access this resource, or directory listing denied.
2. you may not login your azure service, or use other subscription, you can check your
default account by running azure cli commend:
'az account list -o table'.
3. You have multiple objects/login session opened, please close all session and try again.
InnerException None
ErrorResponse
{
  "error": {
    "message": "\nOperation returned an invalid status code 'Forbidden'. The possible reason could be:\n1. You are not authorized to access this resource, or directory listing denied.\n2. you may not login your azure service, or use other subscription, you can check your\ndefault account by running azure cli commend:\n'az account list -o table'.\n3. You have multiple objects/login session opened, please close all session and try again.\n ",
    "code": "UserError"
  }
}
The error is a Forbidden error even though I have authenticated using ServicePrincipalAuthentication.
How do I resolve this error so I can run inference using an AML-registered model in ADB?
The Databricks workspace needs to be in the same subscription as your AML workspace.
This notebook demonstrates the use of DatabricksStep in an Azure Machine Learning pipeline.
Here is the reference for registering a model with the Model class.
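For completeness, a minimal sketch of registering a model so that Model.get_model_path can later resolve it by name; the model path and model name below are hypothetical placeholders.

from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()  # or the service-principal Workspace built above

# Register a serialized model file with the AML workspace so it can be
# looked up by name later (path and name are placeholders).
model = Model.register(workspace=ws,
                       model_path="outputs/model.pkl",
                       model_name="my-registered-model")
print(model.name, model.version)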

Google Cloud Monitoring Ruby client permission issue

I am following the Ruby code sample to add a custom metric to Stackdriver; however, I keep getting a permission-denied error.
client = Google::Cloud::Monitoring::Metric.new
project_name = Google::Cloud::Monitoring::V3::MetricServiceClient.project_path project_id

descriptor = Google::Api::MetricDescriptor.new(
  type: "custom.googleapis.com/my_metric#{random_suffix}",
  metric_kind: Google::Api::MetricDescriptor::MetricKind::GAUGE,
  value_type: Google::Api::MetricDescriptor::ValueType::DOUBLE,
  description: "This is a simple example of a custom metric."
)

result = client.create_metric_descriptor project_name, descriptor
The error I get is "Google::Gax::PermissionDeniedError (GaxError RPC failed, caused by 7: Permission monitoring.metricDescriptors.create denied (or the resource may not exist).)"
The environment variable GOOGLE_APPLICATION_CREDENTIALS is set, and it works fine for the Google Cloud Storage code below:
storage = Google::Cloud::Storage.new project: project_id

# Make an authenticated API request
storage.buckets.each do |bucket|
  puts bucket.name
end
At this point, I don't know what the problem is. Do I need to set up a different credential for Cloud Monitoring?
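One way to rule out ambient-credential issues is to pass the service-account key to the client explicitly instead of relying on GOOGLE_APPLICATION_CREDENTIALS. Below is a minimal sketch of the equivalent call using the Python client (google-cloud-monitoring); the key path and project ID are hypothetical, and the call fails with PermissionDenied unless the account has a role granting monitoring.metricDescriptors.create (e.g. Monitoring Editor).

from google.api import metric_pb2 as ga_metric
from google.cloud import monitoring_v3
from google.oauth2 import service_account

# Hypothetical key file; loading it explicitly bypasses the
# GOOGLE_APPLICATION_CREDENTIALS lookup entirely.
credentials = service_account.Credentials.from_service_account_file('key.json')
client = monitoring_v3.MetricServiceClient(credentials=credentials)

descriptor = ga_metric.MetricDescriptor()
descriptor.type = "custom.googleapis.com/my_metric"
descriptor.metric_kind = ga_metric.MetricDescriptor.MetricKind.GAUGE
descriptor.value_type = ga_metric.MetricDescriptor.ValueType.DOUBLE
descriptor.description = "This is a simple example of a custom metric."

project_name = "projects/my-project-id"  # hypothetical project ID
created = client.create_metric_descriptor(
    name=project_name, metric_descriptor=descriptor)
print(created.name)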

boto3 attach_volume throwing volume not available

I am trying to attach a volume to an instance using boto3, but it fails to attach with the error below:
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (IncorrectState) when calling the AttachVolume operation: vol-xxxxxxxxxxxxxxx is not 'available'.
I can see the volume exists in the AWS console, but somehow boto3 is not able to attach it.
import os
import boto3

os.environ['AWS_DEFAULT_REGION'] = "us-west-1"

client = boto3.client('ec2', aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key,
                      region_name='us-west-1')

response1 = client.attach_volume(
    VolumeId=volume_id,
    InstanceId=instance_id,
    Device='/dev/sdg',
)
I tried attaching the same volume with the AWS CLI, and it works fine after exporting AWS_DEFAULT_REGION="us-west-1".
I also tried setting the same variable in the Python script via os.environ['AWS_DEFAULT_REGION'] = "us-west-1", but the script fails with the same error as above.
I figured it out: I was not giving the EBS volume enough time to become available after creating it. I can attach it now after adding a sleep.
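Rather than a fixed sleep, a boto3 waiter can poll until the volume actually reaches the right state; a minimal sketch, assuming volume_id and instance_id are already defined:

import boto3

client = boto3.client('ec2', region_name='us-west-1')

# A newly created EBS volume starts in the 'creating' state; wait until
# it reaches 'available' before calling attach_volume.
waiter = client.get_waiter('volume_available')
waiter.wait(VolumeIds=[volume_id])

response = client.attach_volume(
    VolumeId=volume_id,
    InstanceId=instance_id,
    Device='/dev/sdg',
)

The waiter polls DescribeVolumes with backoff under the hood, so it returns as soon as the volume is ready instead of guessing at a sleep duration.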

Google Cloud Storage can't find project name

I am using the Python client library for the Google Cloud Storage API, and I have a file, pythonScript.py, with the following contents:
# Imports the Google Cloud client library
from google.cloud import storage
# Instantiates a client
storage_client = storage.Client()
# The name for the new bucket
bucket_name = 'my-new-bucket'
# Creates the new bucket
bucket = storage_client.create_bucket(bucket_name)
print('Bucket {} created.'.format(bucket.name))
When I try to run it I get this in the terminal:
Traceback (most recent call last):
  File "pythonScript.py", line 11, in <module>
    bucket = storage_client.create_bucket(bucket_name)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/storage/client.py", line 218, in create_bucket
    bucket.create(client=self)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/storage/bucket.py", line 199, in create
    data=properties, _target_object=self)
  File "/home/joel/MATI/env/lib/python3.5/site-packages/google/cloud/_http.py", line 293, in api_request
    raise exceptions.from_http_response(response)
google.cloud.exceptions.Conflict: 409 POST https://www.googleapis.com/storage/v1/b?project=avid-folder-180918: Sorry, that name is not available. Please try a different one.
I am not sure why, since I do have the GCS API enabled for my project, and the default configuration seems to be correct. The output of gcloud config list is:
[compute]
region = us-east1
zone = us-east1-d
[core]
account = joel@southbendcodeschool.com
disable_usage_reporting = True
project = avid-folder-180918
Your active configuration is: [default]
Bucket names are globally unique. Someone else must already own the bucket named "my-new-bucket."
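One common fix is to generate a name that is unlikely to collide; a minimal sketch, appending a random suffix to the name from the question:

import uuid
from google.cloud import storage

storage_client = storage.Client()

# Bucket names are globally unique across all of GCS, so append a
# random suffix to avoid colliding with someone else's bucket.
bucket_name = 'my-new-bucket-{}'.format(uuid.uuid4().hex[:8])
bucket = storage_client.create_bucket(bucket_name)
print('Bucket {} created.'.format(bucket.name))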

Forbidden Error on get_thing_shadow with boto3, aws iot and alexa

I am running a custom Alexa skill with flask-ask that connects to AWS IoT.
The same credentials work when running the script on my local machine and using ngrok as the Alexa skill endpoint. But when I use zappa to upload it as a Lambda, I get the following:
File "/var/task/main.py", line 48, in get_shadow
res=client.get_thing_shadow(thingName="test_light")
File "/var/runtime/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 543, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (ForbiddenException) when calling the GetThingShadow operation: Forbidden
When using ngrok, the skill works completely fine. What am I missing here? Help!
The problem was VPC access. I had to attach the VPC access policy to the Lambda's execution role, and it worked.
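For reference, a minimal sketch of the shadow call itself; the region here is an assumption, and the thing name is the one from the question. The device shadow lives behind the 'iot-data' data-plane client, and a wrong regional endpoint (or, as above, a Lambda in a VPC without a route to the endpoint) also surfaces as ForbiddenException.

import boto3

# Region is an assumption; it must match where the thing's shadow lives.
client = boto3.client('iot-data', region_name='us-east-1')

res = client.get_thing_shadow(thingName='test_light')
# The shadow document comes back as a StreamingBody of JSON bytes.
shadow = res['payload'].read()
print(shadow)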
