Error in Google Sheets with Golang

When I follow the steps from https://developers.google.com/sheets/api/quickstart/go and run
go run quickstart.go
I get the error below:
2017/08/03 12:29:22 Unable to retrieve data from sheet. Get https://sheets.googleapis.com/v4/spreadsheets/14FXalPXVUHZ2SyNBUWJpfSzUSSimYYIR5mUU36r6_BQ/values/A%3AC?alt=json: oauth2: cannot fetch token: 401 Unauthorized
Response: {
"error" : "unauthorized_client"
}
exit status 1

You have to delete the credential file on your system because the OAuth2 access token expires after a day (or some other fixed lifetime). To get a new access token, delete the credential file and run the program again.


GlueContext write_dynamic_frame fails after adding BucketOwnerFullControl configuration

I am trying to write a DataFrame to an S3 bucket in a different account as JSON output.
The following code fails with an S3 Access Denied error in a Glue Spark streaming job. But if I run the code without the first line below, it works and the output is written to the S3 bucket:
glueContext._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
glueContext.write_dynamic_frame.from_options(frame=dynamic_df, connection_type="s3",
                                              connection_options={"path": output_path},
                                              format=file_format, transformation_ctx="datasink")
Here is the error log:
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception:
Access Denied (Service: Amazon S3; Status Code: 403; Error Code:
AccessDenied; Request ID: W52125NY7G3EF7WH; S3 Extended Request ID:
4t9JOJedv2qNRy6W8ySxdQQ7r+TMN1MWpZCFOK1IKO6W4gx4a2oKuK5vwXUPnh4HkkPAG+LnEIc=;
Proxy: null), S3 Extended Request ID: 4t9JOJedv2qUPnh4HkkPAG+LnEIc= at
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819)
at
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403)
at
This looks strange to me because the bucket has full permissions, and it works perfectly when the second line is executed on its own, but the objects are still owned by the Glue account, which I am trying to change using fs.s3.canned.acl.
The destination bucket is also set up with the 'Bucket owner preferred' option.
Please suggest what I am doing wrong.
Thanks
It's not enough to have permissions in the bucket policy alone.
The Glue role was also missing the s3:PutObjectAcl permission in IAM.
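As an illustration only, here is a minimal boto3 sketch of adding that permission as an inline policy on the role; the role name, policy name and bucket below are placeholders, not values from the question:

import json
import boto3

# Placeholder names -- substitute your actual Glue job role and destination bucket.
GLUE_ROLE_NAME = "my-glue-job-role"
DEST_BUCKET = "other-account-output-bucket"

# Allow the Glue role to both write objects and set their ACLs in the destination bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": [f"arn:aws:s3:::{DEST_BUCKET}/*"],
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=GLUE_ROLE_NAME,
    PolicyName="allow-put-object-acl",
    PolicyDocument=json.dumps(policy),
)

Once the role can call s3:PutObjectAcl on the destination objects, the fs.s3.canned.acl = BucketOwnerFullControl setting can take effect and the cross-account write should succeed.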

How to load an AzureML model in an Azure Databricks compute?

I am trying to run a DatabricksStep. I have used ServicePrincipalAuthentication to authenticate the run:
appId = dbutils.secrets.get(<secret-scope-name>, <client-id>)
tenant = dbutils.secrets.get(<secret-scope-name>, <directory-id>)
clientSecret = dbutils.secrets.get(<secret-scope-name>, <client-secret>)
subscription_id = dbutils.secrets.get(<secret-scope-name>, <subscription-id>)
resource_group = <aml-rgp-name>
workspace_name = <aml-ws-name>
svc_pr = ServicePrincipalAuthentication(
    tenant_id=tenant,
    service_principal_id=appId,
    service_principal_password=clientSecret)

ws = Workspace(
    subscription_id=subscription_id,
    resource_group=resource_group,
    workspace_name=workspace_name,
    auth=svc_pr
)
The authentication is successful since running the following block of code gives the desired output:
subscription_id = ws.subscription_id
resource_group = ws.resource_group
workspace_name = ws.name
workspace_region = ws.location
print(subscription_id, resource_group, workspace_name, workspace_region, sep='\n')
However, the following block of code gives an error:
model_name=<registered-model-name>
model_path = Model.get_model_path(model_name=model_name, _workspace=ws)
loaded_model = joblib.load(model_path)
print('model loaded!')
This is giving an error:
UserErrorException:
Message:
Operation returned an invalid status code 'Forbidden'. The possible reason could be:
1. You are not authorized to access this resource, or directory listing denied.
2. you may not login your azure service, or use other subscription, you can check your
default account by running azure cli commend:
'az account list -o table'.
3. You have multiple objects/login session opened, please close all session and try again.
InnerException None
ErrorResponse
{
"error": {
"message": "\nOperation returned an invalid status code 'Forbidden'. The possible reason could be:\n1. You are not authorized to access this resource, or directory listing denied.\n2. you may not login your azure service, or use other subscription, you can check your\ndefault account by running azure cli commend:\n'az account list -o table'.\n3. You have multiple objects/login session opened, please close all session and try again.\n ",
"code": "UserError"
}
}
The error is a Forbidden error even though I have authenticated using ServicePrincipalAuthentication.
How do I resolve this error and run inference using an AML registered model in ADB?
The Databricks workspace needs to be in the same subscription as your AML workspace.
This notebook demonstrates the use of DatabricksStep in an Azure Machine Learning pipeline.
Here is the Model class register reference.
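As a rough sketch (assuming the registered model is a single joblib-serialized file; the model name is a placeholder in the question's style), once both workspaces are in the same subscription you can fetch the model through the Model class from the Databricks notebook and load it:

from azureml.core.model import Model
import joblib

# <registered-model-name> is a placeholder for the model registered in the AML workspace `ws` above.
model = Model(ws, name=<registered-model-name>)

# Download the model file to local storage on the Databricks cluster and load it.
model_path = model.download(target_dir=".", exist_ok=True)
loaded_model = joblib.load(model_path)
print("model loaded!")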

Google Cloud Monitoring Ruby client permission issue

I am following the Ruby code sample to add a custom metric to Stackdriver; however, I keep getting a permission denied error.
client = Google::Cloud::Monitoring::Metric.new
project_name = Google::Cloud::Monitoring::V3::MetricServiceClient.project_path project_id
descriptor = Google::Api::MetricDescriptor.new(
  type: "custom.googleapis.com/my_metric#{random_suffix}",
  metric_kind: Google::Api::MetricDescriptor::MetricKind::GAUGE,
  value_type: Google::Api::MetricDescriptor::ValueType::DOUBLE,
  description: "This is a simple example of a custom metric."
)
result = client.create_metric_descriptor project_name, descriptor
The error I get is "Google::Gax::PermissionDeniedError (GaxError RPC failed, caused by 7:Permission monitoring.metricDescriptors.create denied (or the resource may not exist).)"
The environment variable GOOGLE_APPLICATION_CREDENTIALS is set, and it works fine for the Google Cloud Storage code below:
storage = Google::Cloud::Storage.new project: project_id
# Make an authenticated API request
storage.buckets.each do |bucket|
  puts bucket.name
end
At this point, I don't know what the problem is. Do I need to set up different credentials for Cloud Monitoring?

Hyperledger Composer multi-user identity

I am following the tutorial below:
https://hyperledger.github.io/composer/integrating/enabling-rest-authentication.html
I am able to complete the steps up to setting the default wallet identity. After this, when I try the system ping method, I get the error:
{
"error": {
"statusCode": 500,
"name": "Error",
"message": "Error trying to ping. Error: Error trying to query chaincode. Error: chaincode error (status: 500, message: Error: The current identity has not been registered:maeid1)",
"stack": "Error: Error trying to ping. Error: Error trying to query chaincode. Error: chaincode error (status: 500, message: Error: The current identity has not been registered:maeid1)\n at _checkRuntimeVersions.then.catch (/home/praval/.nvm/versions/node/v6.11.1/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:696:34)"
}
}
I get the same error when testing with an access token:
curl -v http://localhost:3000/api/system/ping?access_token=xxxxx
However, I can run a network ping successfully:
composer network ping -p hlfv1 -n 'digitalproperty-network' -i maeid1 -s NfUhmXtiaSUH
Thanks for the help.
The problem you are seeing is described by this issue
https://github.com/hyperledger/composer/issues/1761
Both the CLI and the REST server have enrolled the user, but this results in both environments storing certificates for the same identity that differ (for example, in issue and expiry dates). Whichever environment used its certificate first for that identity, and thereby activated that identity/participant in the runtime, has its certificate registered. When the other environment presents its certificate, it isn't found (because it is different from the first environment's), so the runtime reports that the identity is not registered.
The way to address this: if you plan to use the identity in the REST server, don't ping it from the CLI first.

Getting an error while calling GET /system/ping

I am getting an error while calling GET /system/ping:
{
"error": {
"statusCode": 500,
"name": "Error",
"message": "error trying login and get user Context. Error: error trying to enroll user. Error: Enrollment failed with errors [[{\"code\":400,\"message\":\"Authorization failure\"}]]",
}
}
I have created the participant:
Blockchain Participant
{
  '$class': 'org.optum.blockchainv5.Participant',
  ParticipantId: 'ParticipantId:2',
  Name: 'Vipul Bajaj'
}
Then I issued an identity to the participant:
System Identity
{
  userID: 'ParticipantId:2',
  userSecret: 'dPJbJBsaOLaf'
}
And then added that identity to the default wallet:
Wallet Identity
{
  enrollmentID: 'ParticipantId:2',
  enrollmentSecret: 'dPJbJBsaOLaf',
  id: 3
}
I then set this wallet identity as the default by calling POST /wallets/1/identities/3/setDefault and got response code 204.
After that, calling GET /system/ping gave me the error.
Just following up: if you're still getting this error, could you attach a trace log by setting export DEBUG=composer:* and then re-running the REST server? The log file is in a 'logs' directory (relative to where you start the composer-rest-server). Then we can see what's going on with the POST.
I had a similar issue.
I was trying to deploy a Composer hlfv1 network instance locally and was running the ./createComposerProfile.sh script. This script has the line cp "${DIR}"/hlfv1/composer/creds/* ~/.hfc-key-store
This copies all the credentials in your creds folder and overwrites the ones created by composer identity import in your ~/.hfc-key-store.
You could copy the credentials from ~/.hfc-key-store to the creds folder, or comment out this line.
