How to load an AzureML model in an Azure Databricks compute?

I am trying to run a DatabricksStep. I have used ServicePrincipalAuthentication to authenticate the run:
from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

appId = dbutils.secrets.get(<secret-scope-name>, <client-id>)
tenant = dbutils.secrets.get(<secret-scope-name>, <directory-id>)
clientSecret = dbutils.secrets.get(<secret-scope-name>, <client-secret>)
subscription_id = dbutils.secrets.get(<secret-scope-name>, <subscription-id>)
resource_group = <aml-rgp-name>
workspace_name = <aml-ws-name>

svc_pr = ServicePrincipalAuthentication(
    tenant_id=tenant,
    service_principal_id=appId,
    service_principal_password=clientSecret)

ws = Workspace(
    subscription_id=subscription_id,
    resource_group=resource_group,
    workspace_name=workspace_name,
    auth=svc_pr
)
The authentication is successful since running the following block of code gives the desired output:
subscription_id = ws.subscription_id
resource_group = ws.resource_group
workspace_name = ws.name
workspace_region = ws.location
print(subscription_id, resource_group, workspace_name, workspace_region, sep='\n')
However, the following block of code gives an error:
import joblib
from azureml.core.model import Model

model_name = <registered-model-name>
model_path = Model.get_model_path(model_name=model_name, _workspace=ws)
loaded_model = joblib.load(model_path)
print('model loaded!')
The error is:
UserErrorException:
Message:
Operation returned an invalid status code 'Forbidden'. The possible reason could be:
1. You are not authorized to access this resource, or directory listing denied.
2. you may not login your azure service, or use other subscription, you can check your
default account by running azure cli commend:
'az account list -o table'.
3. You have multiple objects/login session opened, please close all session and try again.
InnerException None
ErrorResponse
{
"error": {
"message": "\nOperation returned an invalid status code 'Forbidden'. The possible reason could be:\n1. You are not authorized to access this resource, or directory listing denied.\n2. you may not login your azure service, or use other subscription, you can check your\ndefault account by running azure cli commend:\n'az account list -o table'.\n3. You have multiple objects/login session opened, please close all session and try again.\n ",
"code": "UserError"
}
}
The error is 'Forbidden' even though I have authenticated using ServicePrincipalAuthentication. How can I resolve this error and run inference using an AML-registered model in ADB?

The Databricks workspace needs to be in the same subscription as your AML workspace. There is a sample notebook that demonstrates the use of DatabricksStep in an Azure Machine Learning pipeline. See the Model class register method for how models are registered.
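If both workspaces are in the same subscription and get_model_path still fails, one workaround is to fetch the model explicitly instead of relying on the private _workspace argument. A minimal sketch, assuming the service-principal-authenticated ws from the question and a model registered as a single joblib file:

import joblib
from azureml.core.model import Model

# Look up the registered model in the AML workspace (latest version).
model = Model(ws, name='<registered-model-name>')

# Download the model file(s) to the Databricks driver's local disk.
model_path = model.download(target_dir='/tmp/aml_model', exist_ok=True)

# Assumes the registered model is a single joblib-serialized file.
loaded_model = joblib.load(model_path)
print('model loaded!')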

Related

Using CDK deploy cannot read temporary S3 bucket that holds Lambda Code

When I deploy a Lambda "code" using CDK, the deploy process (CloudFormation running under, presumably, my user) does not seem to have access to the bucket that holds the Lambda code.
I followed this tutorial: https://intro-to-cdk.workshop.aws/what-is-cdk.html and see this error when I run cdk deploy:
Lambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-19kn1ypcmzq2q/assets/5327df
Lambda Code:
const handler = new lambda.Function(this, "TimestreamLambda", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.fromAsset(path.join(__dirname, '../resources')),
  handler: "index.hello_world",
  ...
cdk and @aws-cdk version is 1.73.0, but I also tried with 1.71.0.
Notes:
I see the bucket under my account (in my region).
When logged into this account I can see and download the asset file, and the downloaded zip file has the correct contents.
More error details:
12/24 | 9:15:19 PM | CREATE_FAILED | AWS::Lambda::Function | TimestreamLambda (TimestreamLambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-28hiljazvaim/assets/5327df740bdc9c380ff567xxxxxxxxxxx7a68a.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: AWSLambdaInternal; Status Code: 403; Error Code: AccessDeniedException; Request ID: 1b813776-7647-4767-89bc-XXXXXXXXX; Proxy: null)
new Function (/Users/<user>/dev/cdk/cdk-workshop/node_modules/@aws-cdk/aws-lambda/lib/function.ts:593:35)
new CdkWorkshopStack (/Users/<user>/dev/cdk/cdk-workshop/lib/cdk-workshop-stack.ts:33:21)
I also see this (using the -v option) during deploy:
env: {
  CDK_DEFAULT_REGION: 'us-west-2',
  CDK_DEFAULT_ACCOUNT: '94646XXXXX',
  CDK_CONTEXT_JSON: '{"@aws-cdk/core:enableStackNameDuplicates":"true","aws-cdk:enableDiffNoFail":"true","@aws-cdk/core:stackRelativeExports":"true","aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
  CDK_OUTDIR: 'cdk.out',
  CDK_CLI_ASM_VERSION: '7.0.0',
  CDK_CLI_VERSION: '1.73.0'
}
As it turns out, this was an issue with the internal authentication system my company uses to access AWS. Instead of using my regular AWS account, I had to create a temporary account (which also sets a temporary token).
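For anyone debugging a similar AccessDenied, it can help to confirm which identity the deploy actually runs under; a quick check, assuming the AWS CLI shares the same credentials as the CDK:

# Prints the account ID and caller ARN; it should match the account
# that owns the cdktoolkit staging bucket.
aws sts get-caller-identity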

.NET Core 3.1 Docker in Visual Studio accessing Azure Key Vault

I am trying to run a .NET Core 3.1 application in Docker locally in Visual Studio. The application needs to access an Azure Key Vault.
When I run the application I get the following error:
One or more errors occurred. (Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/53d4d1e1-3360-4735-8aad-21c6155f528a. Exception Message: Tried the following 3 methods to get an access token, but none of them worked.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/53d4d1e1-3360-4735-8aad-21c6155f528a. Exception Message: Tried to get token using Managed Service Identity. Access token could not be acquired. Connection refused
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/53d4d1e1-3360-4735-8aad-21c6155f528a. Exception Message: Tried to get token using Visual Studio. Access token could not be acquired. Environment variable LOCALAPPDATA not set.
Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/53d4d1e1-3360-4735-8aad-21c6155f528a. Exception Message: Tried to get token using Azure CLI. Access token could not be acquired. /bin/bash: az: No such file or directory
Note: it works fine using IIS Express! Please help! :D
Set the required environment variables when using DefaultAzureCredential to authenticate to Azure Key Vault. In this scenario, that means setting them in the Dockerfile:
ENV AZURE_CLIENT_ID=<Your AZURE CLIENT ID>
ENV AZURE_CLIENT_SECRET=<Your CLIENT SECRET>
ENV AZURE_TENANT_ID=<Your TENANT ID>
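With those variables in place, DefaultAzureCredential can pick up the service principal inside the container. A minimal sketch using the Azure.Identity and Azure.Security.KeyVault.Secrets packages (the vault and secret names are placeholders):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class Program
{
    static void Main()
    {
        // DefaultAzureCredential reads AZURE_CLIENT_ID, AZURE_CLIENT_SECRET
        // and AZURE_TENANT_ID from the environment set in the Dockerfile.
        var client = new SecretClient(
            new Uri("https://<your-vault-name>.vault.azure.net/"),
            new DefaultAzureCredential());

        // Fetch a secret to verify the credential works.
        KeyVaultSecret secret = client.GetSecret("<secret-name>");
        Console.WriteLine(secret.Value);
    }
}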
In an attempt to avoid the accepted answer (because of obvious security issues), and to simplify and automate E. Staal's answer (on a duplicate question), I came up with this:
Update your .gitignore file, by adding the following line to the bottom of it:
appsettings.local.json
Right click on the project in Solution Explorer, and click on Properties; in the Build Events tab, find the Pre-build event command line text box and add the following code:
cd /d "$(ProjectDir)"
if exist "appsettings.local.json" del "appsettings.local.json"
if "$(ConfigurationName)" == "Debug" (
    az account get-access-token --resource=https://vault.azure.net > appsettings.local.json
)
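For reference, az account get-access-token emits JSON shaped roughly like this (values are placeholders), which is why the configuration code below can pick the token up via builtConfig["accessToken"]:

{
  "accessToken": "eyJ0eXAi...",
  "expiresOn": "2021-01-01 12:00:00.000000",
  "subscription": "<subscription-id>",
  "tenant": "<tenant-id>",
  "tokenType": "Bearer"
}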
In your launchSettings.json (or using the Visual Editor under project settings) configure the following values:
{
  "profiles": {
    // ...
    "Docker": {
      "commandName": "Docker",
      "environmentVariables": {
        "DOTNET_ENVIRONMENT": "Development",
        "AZURE_TENANT_ID": "<YOUR-AZURE-TENANT-ID-HERE>"
      }
    }
  }
}
In your Program.cs file, find the CreateHostBuilder method and update the ConfigureAppConfiguration block accordingly; here is mine as an example:
Host.CreateDefaultBuilder(args).ConfigureAppConfiguration
(
    (ctx, cfg) =>
    {
        if (ctx.HostingEnvironment.IsDevelopment())
        {
            cfg.AddJsonFile("appsettings.local.json", true);
        }

        var builtConfig = cfg.Build();
        var keyVault = builtConfig["KeyVault"];

        if (!string.IsNullOrWhiteSpace(keyVault))
        {
            var accessToken = builtConfig["accessToken"];

            cfg.AddAzureKeyVault
            (
                $"https://{keyVault}.vault.azure.net/",
                new KeyVaultClient
                (
                    string.IsNullOrWhiteSpace(accessToken)
                        ? new KeyVaultClient.AuthenticationCallback
                          (
                              new AzureServiceTokenProvider().KeyVaultTokenCallback
                          )
                        : (x, y, z) => Task.FromResult(accessToken)
                ),
                new DefaultKeyVaultSecretManager()
            );
        }
    }
)
If this still doesn't work, verify that az login has been performed and that az account get-access-token --resource=https://vault.azure.net works correctly for you.

Google Cloud Monitoring Ruby client permission issue

I am following the Ruby code sample to add a custom metric to Stackdriver; however, I keep getting a permission-denied error.
client = Google::Cloud::Monitoring::Metric.new
project_name = Google::Cloud::Monitoring::V3::MetricServiceClient.project_path project_id

descriptor = Google::Api::MetricDescriptor.new(
  type: "custom.googleapis.com/my_metric#{random_suffix}",
  metric_kind: Google::Api::MetricDescriptor::MetricKind::GAUGE,
  value_type: Google::Api::MetricDescriptor::ValueType::DOUBLE,
  description: "This is a simple example of a custom metric."
)

result = client.create_metric_descriptor project_name, descriptor
The error I get is "Google::Gax::PermissionDeniedError (GaxError RPC failed, caused by 7:Permission monitoring.metricDescriptors.create denied (or the resource may not exist).)"
The environment variable GOOGLE_APPLICATION_CREDENTIALS is set, and it works fine for the Google Cloud Storage code below:
storage = Google::Cloud::Storage.new project: project_id
# Make an authenticated API request
storage.buckets.each do |bucket|
  puts bucket.name
end
At this point, I don't know what the problem is. Do I need to set up a different credential for Cloud Monitoring?
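One thing worth checking: the service account behind GOOGLE_APPLICATION_CREDENTIALS may have enough permissions for Storage but not for Monitoring. A hedged example of granting it a Monitoring role with gcloud (the project and service-account names are placeholders):

# roles/monitoring.editor includes monitoring.metricDescriptors.create,
# the permission named in the error.
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:<sa-name>@<project-id>.iam.gserviceaccount.com" \
  --role="roles/monitoring.editor"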

Getting an error while calling GET /system/ping

I am getting an error while calling GET /system/ping
{
  "error": {
    "statusCode": 500,
    "name": "Error",
    "message": "error trying login and get user Context. Error: error trying to enroll user. Error: Enrollment failed with errors [[{\"code\":400,\"message\":\"Authorization failure\"}]]"
  }
}
I have created the participant:
Blockchain Participant
{
  '$class': 'org.optum.blockchainv5.Participant',
  ParticipantId: 'ParticipantId:2',
  Name: 'Vipul Bajaj'
}
Then I issued an identity to the participant:
System Identity
{
  userID: 'ParticipantId:2',
  userSecret: 'dPJbJBsaOLaf'
}
And then added that identity to the default wallet:
Wallet Identity
{
  enrollmentID: 'ParticipantId:2',
  enrollmentSecret: 'dPJbJBsaOLaf',
  id: 3
}
I then set this wallet identity as the default by calling POST /wallets/1/identities/3/setDefault and got response code 204.
Calling GET /system/ping after that gave me the error above.
Just following up: if you're still getting this error, could you attach a trace log? Set export DEBUG=composer:* and then re-run the REST server; the log file is in a 'logs' directory (from where you start the composer-rest-server). Then we can see what's going on with the POST.
I had a similar issue.
I was trying to deploy a Composer hlfv1 network instance locally and was running the ./createComposerProfile.sh script. This script has the line cp "${DIR}"/hlfv1/composer/creds/* ~/.hfc-key-store
This copies all the credentials in your creds folder and overwrites the ones created by composer identity import in your ~/.hfc-key-store.
You could copy the credentials from ~/.hfc-key-store to the creds folder, or comment out that line; see the sketch below.
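A rough sketch of the first option, assuming the same ${DIR} variable used by the script:

# Preserve the identities created by `composer identity import` by
# copying them into the creds folder before the script copies it back.
cp ~/.hfc-key-store/* "${DIR}"/hlfv1/composer/creds/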

No rows in result set when trying to deploy a business network

I get the following error when trying to deploy a Fabric Composer business network archive to a local Hyperledger Fabric 0.6 setup.
BusinessNetworkDefinition:fromArchive() < [object Object]
HFCConnection :deploy() Deploying business network org.acme.biznet#0.0.1
FSConnectionProfileStore :load() Loaded connection profile hlfabric {
"type": "hlf",
"membershipServicesURL": "grpc://localhost:7054",
"peerURL": "grpc://localhost:7051",
"eventHubURL": "grpc://localhost:7053",
"keyValStore": "/tmp/keyValStore",
"deployWaitTime": 300,
"invokeWaitTime": 30,
"certificate": null,
"certificatePath": null
}
HFCUtil :deployChainCode() function init force true concerto
HFCUtil :deployChainCode() onError {"error":{"code":2,"metadata":{"_internal_repr":{}}},"msg":"Error: sql: no rows in result set"}
ConnectorServer :Error: Error: sql: no rows in result set() undefined
ConnectorServer :connectionDeploy() <
When you enroll with a Hyperledger Fabric instance, the credentials are stored in the keyValStore directory defined in your connection profile. If you then try to interact with a different Hyperledger Fabric instance, or you stop and restart the Docker containers of a locally running instance (effectively creating a new instance), while using the same connection profile and hence the credentials already stored in the keyValStore, you get this error, because those credentials are not valid for the new instance.
To fix the problem, either change your connection profile to use a different directory for keyValStore, or remove that directory and its contents and try again.
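For the connection profile shown above, that amounts to something like:

# Remove the cached enrollment credentials so the next connection
# re-enrolls against the current Fabric instance.
rm -rf /tmp/keyValStore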
