I am not able to create secret scope on Azure Databricks from Databricks CLI. I run a command like this:
databricks secrets create-scope --scope "edap-dev-kv" \
  --scope-backend-type AZURE_KEYVAULT \
  --resource-id "/subscriptions/ba426b6f-65cb-xxxx-xxxx-9a1e1656xxxx/resourceGroups/edap-dev-rg/providers/Microsoft.KeyVault/vaults/edap-dev-kv" \
  --dns-name "https://edap-dev-kv.vault.azure.net/" \
  --profile profile_edap_dev2_dbx
I get this error message (the HTML error page trimmed to the relevant part):
HTTP ERROR 400
Problem accessing /api/2.0/secrets/scopes/create.
Reason:
io.jsonwebtoken.IncorrectClaimException:
Expected aud claim to be: 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d,
but was: https://management.core.windows.net/.
I have tried it with both my user (personal) AAD token and a service principal's AAD token. (I've read somewhere that it should be an AAD token of a user account.)
I am able to do it via the GUI using the same parameters.
In your case, the access token was issued for the wrong service: its aud (audience) claim is https://management.core.windows.net/, but it must be issued for the resource ID of Azure Databricks, which is 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d.
The simplest way to get such a token is to use the Azure CLI with the following command:
az account get-access-token -o tsv --query accessToken \
--resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d
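You can then feed that token to the CLI. A minimal sketch, assuming the legacy databricks-cli, which reads the DATABRICKS_AAD_TOKEN environment variable when you run configure with --aad-token (the workspace URL is a placeholder):
# Fetch an AAD token whose audience is the Azure Databricks resource ID...
export DATABRICKS_AAD_TOKEN=$(az account get-access-token -o tsv --query accessToken \
  --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d)
# ...and configure the CLI with it before re-running create-scope.
databricks configure --aad-token --host https://adb-XXXX.azuredatabricks.net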
When I deploy Lambda code using CDK, the deploy process (CloudFormation, presumably running under my user) does not seem to have access to the bucket that holds the Lambda code.
I followed this tutorial: https://intro-to-cdk.workshop.aws/what-is-cdk.html and see this error when I run cdk deploy:
Lambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-19kn1ypcmzq2q/assets/5327df
Lambda Code:
const handler = new lambda.Function(this, "TimestreamLambda", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.fromAsset(path.join(__dirname, '../resources')),
  handler: "index.hello_world",
  ...
});
cdk and @aws-cdk version is 1.73.0, but I also tried with 1.71.0.
Notes:
I see the bucket under my account (in my region).
When logged into this account I can see and download the asset file
the downloaded zip file has the correct contents.
More error details:
12/24 | 9:15:19 PM | CREATE_FAILED | AWS::Lambda::Function | TimestreamLambda (TimestreamLambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-28hiljazvaim/assets/5327df740bdc9c380ff567xxxxxxxxxxx7a68a.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: AWSLambdaInternal; Status Code: 403; Error Code: AccessDeniedException; Request ID: 1b813776-7647-4767-89bc-XXXXXXXXX; Proxy: null)
new Function (/Users/<user>/dev/cdk/cdk-workshop/node_modules/@aws-cdk/aws-lambda/lib/function.ts:593:35)
\_ new CdkWorkshopStack (/Users/<user>/dev/cdk/cdk-workshop/lib/cdk-workshop-stack.ts:33:21)
I also see this (using the -v option) during deploy:
env: {
CDK_DEFAULT_REGION: 'us-west-2',
CDK_DEFAULT_ACCOUNT: '94646XXXXX',
CDK_CONTEXT_JSON: '{"@aws-cdk/core:enableStackNameDuplicates":"true","aws-cdk:enableDiffNoFail":"true","@aws-cdk/core:stackRelativeExports":"true","aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
CDK_OUTDIR: 'cdk.out',
CDK_CLI_ASM_VERSION: '7.0.0',
CDK_CLI_VERSION: '1.73.0'
}
As it turns out, this was an issue with the internal authentication system my company uses to access AWS. Instead of using my regular AWS account, I had to create a temporary account (which also issues a temporary token).
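If you hit something similar, it helps to confirm which principal your CLI and CDK calls are actually signed with; this is a generic check, not something from the original post:
# Show the account and identity behind the current AWS credentials.
aws sts get-caller-identity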
Using the AWS CLI is it possible to download a Lambda Layer?
I have seen this command documented here:
https://docs.aws.amazon.com/lambda/latest/dg/API_GetLayerVersion.html
But when I run it with something like the following:
aws lambda get-layer-version --layer-name arn:aws:lambda:us-east-1:209497400698:layer:php-73 --version-number 7
I get this error.
An error occurred (InvalidParameterValueException) when calling the
GetLayerVersion operation: Invalid Layer name:
arn:aws:lambda:us-east-1:209497400698:layer:php-73
Is downloading a layer possible via the CLI?
As an extra note, I am trying to download any of these layers:
https://runtimes.bref.sh/
It should be possible to download a layer programmatically using the AWS CLI. For example:
# https://docs.aws.amazon.com/cli/latest/reference/lambda/get-layer-version.html
URL=$(aws lambda get-layer-version --layer-name YOUR_LAYER_NAME_HERE --version-number YOUR_LAYERS_VERSION --query Content.Location --output text)
curl "$URL" -o layer.zip
For the ARNs on that web page, I had to use the other API, which takes an ARN value. For example:
# https://docs.aws.amazon.com/cli/latest/reference/lambda/get-layer-version-by-arn.html
URL=$(aws lambda get-layer-version-by-arn --arn arn:aws:lambda:us-east-1:209497400698:layer:php-73:7 --query Content.Location --output text)
curl "$URL" -o php.zip
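A downloaded layer is an ordinary zip archive (at runtime its contents are extracted under /opt), so you can inspect it locally:
# List the layer contents without extracting them.
unzip -l php.zip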
I am using this terraform manifest to deploy AKS on Azure. I can do this fine from the command line, as I have the Azure CLI configured on my machine to generate the client ID and secret:
https://github.com/anubhavmishra/terraform-azurerm-aks
However, I am now building this in an Azure DevOps pipeline.
So far I have managed to run terraform init and plan with backend storage on Azure, using this Azure DevOps extension:
https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform
Question: how do I get the client ID and secret in the Azure DevOps pipeline and set them as environment variables for terraform? I tried running an az command in a bash task in the pipeline:
az ad sp create-for-rbac --role="Contributor" \
  --scopes="/subscriptions/YOUR_SUBSCRIPTION_ID"
but it failed with this error:
2019-03-27T10:41:58.1042923Z
2019-03-27T10:41:58.1055624Z Setting AZURE_CONFIG_DIR env variable to: /home/vsts/work/_temp/.azclitask
2019-03-27T10:41:58.1060006Z Setting active cloud to: AzureCloud
2019-03-27T10:41:58.1069887Z [command]/usr/bin/az cloud set -n AzureCloud
2019-03-27T10:41:58.9004429Z [command]/usr/bin/az login --service-principal -u *** -p *** --tenant ***
2019-03-27T10:42:00.0695154Z [
2019-03-27T10:42:00.0696915Z {
2019-03-27T10:42:00.0697522Z "cloudName": "AzureCloud",
2019-03-27T10:42:00.0698958Z "id": "88bfee03-551c-4ed3-98b0-be68aee330bb",
2019-03-27T10:42:00.0704752Z "isDefault": true,
2019-03-27T10:42:00.0705381Z "name": "Visual Studio Enterprise",
2019-03-27T10:42:00.0706362Z "state": "Enabled",
2019-03-27T10:42:00.0707434Z "tenantId": "***",
2019-03-27T10:42:00.0716107Z "user": {
2019-03-27T10:42:00.0717485Z "name": "***",
2019-03-27T10:42:00.0718161Z "type": "servicePrincipal"
2019-03-27T10:42:00.0718675Z }
2019-03-27T10:42:00.0719185Z }
2019-03-27T10:42:00.0719831Z ]
2019-03-27T10:42:00.0728173Z [command]/usr/bin/az account set --subscription 88bfee03-551c-4ed3-98b0-be68aee330bb
2019-03-27T10:42:00.8569816Z [command]/bin/bash /home/vsts/work/_temp/azureclitaskscript1553683312219.sh
2019-03-27T10:42:02.4431342Z ERROR: Directory permission is needed for the current user to register the application. For how to configure, please refer 'https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal'. Original error: Insufficient privileges to complete the operation.
2019-03-27T10:42:02.5271752Z [command]/usr/bin/az account clear
2019-03-27T10:42:03.3092558Z ##[error]Script failed with error: Error: /bin/bash failed with return code: 1
2019-03-27T10:42:03.3108490Z ##[section]Finishing: Azure CLI
Here is how I do it with Azure Pipelines.
Create a Service Principal for Terraform.
Create the following variables in your pipeline:
ARM_CLIENT_ID
ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID
ARM_TENANT_ID
If you choose to store ARM_CLIENT_SECRET as a secret variable in Azure DevOps, you will need to map it explicitly in the Environment Variables section of the task so that it is decrypted and terraform can read it; secret variables are not exposed to scripts automatically.
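One way to sketch that mapping in a bash step (an illustration, not the only way; the $(...) macros are the pipeline variables defined above and are substituted before the script runs):
# Export the pipeline variables so terraform's azurerm provider picks them up.
# ARM_CLIENT_SECRET must be passed explicitly because secret variables are
# not mapped into the environment automatically.
export ARM_CLIENT_ID='$(ARM_CLIENT_ID)'
export ARM_CLIENT_SECRET='$(ARM_CLIENT_SECRET)'
export ARM_SUBSCRIPTION_ID='$(ARM_SUBSCRIPTION_ID)'
export ARM_TENANT_ID='$(ARM_TENANT_ID)'
terraform init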
You just need to grant your service connection the rights to create service principals, but I'd generally advise against that: pre-create a service principal and use it in your pipeline, since creating a new service principal on each run seems excessive.
You can use build/release variables and populate those with the client ID and secret.
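Pre-creating the principal is then a one-off command run outside the pipeline; a sketch (the name tf-pipeline-sp is just an example):
# Create the service principal once and note the appId/password it prints;
# store those as pipeline variables instead of recreating the SP per run.
az ad sp create-for-rbac --name tf-pipeline-sp \
  --role Contributor \
  --scopes /subscriptions/YOUR_SUBSCRIPTION_ID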
The approach described in the post https://medium.com/@maninder.bindra/creating-a-single-azure-devops-yaml-pipeline-to-provision-multiple-environments-using-terraform-e6d05343cae2 can be considered as well. There the Key Vault task is used to fetch the secrets from Azure Key Vault (these include the terraform backend access secrets as well as the AKS SP secrets):
#KEY VAULT TASK
- task: AzureKeyVault@1
  inputs:
    azureSubscription: '$(environment)-sp'
    KeyVaultName: '$(environment)-pipeline-secrets-kv'
    SecretsFilter: 'tf-sp-id,tf-sp-secret,tf-tenant-id,tf-subscription-id,tf-backend-sa-access-key,aks-sp-id,aks-sp-secret'
  displayName: 'Get key vault secrets as pipeline variables'
And then you can use the secrets as variables in the rest of the pipeline. For instance, aks-sp-id can be referred to as $(aks-sp-id). So the bash/azure-cli task can be something like:
# AZ LOGIN USING TERRAFORM SERVICE PRINCIPAL
- script: |
    az login --service-principal -u $(tf-sp-id) -p $(tf-sp-secret) --tenant $(tf-tenant-id)
    cd $(System.DefaultWorkingDirectory)/tf-infra-provision
This is followed by terraform init and plan (the plan step is shown below; see the post for complete pipeline details):
# TERRAFORM PLAN
echo '#######Terraform Plan########'
terraform plan -var-file=./tf-vars/$(tfvarsFile) -var="client_id=$(tf-sp-id)" -var="client_secret=$(tf-sp-secret)" -var="tenant_id=$(tf-tenant-id)" -var="subscription_id=$(tf-subscription-id)" -var="aks_sp_id=$(aks-sp-id)" -var="aks_sp_secret=$(aks-sp-secret)" -out="out.plan"
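Because the plan is written to a file with -out, a later step can apply exactly what was reviewed:
# Apply the saved plan produced by the step above.
terraform apply "out.plan"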
Hope this helps.
Has anyone come across Hyperledger Composer's chaincode error: Error: The current identity must be activated (ACTIVATION_REQUIRED)? The identity I am using shows as ISSUED in composer-playground, but once I call System/ping through the REST server, the chaincode log shows this error. My understanding is that when a participant submits a transaction using an enrollment certificate, the Composer chaincode extracts the enrollment ID from the certificate and uses it to look up the participant instance that the identity was issued to. I issued the identity through the CLI and am then using it in the REST server without doing anything else in the CLI. I am not sure how to overcome this error. Any help appreciated!
I updated all Composer components to 0.12.2. I used the following CLI commands to add the participant and issue the identity:
composer participant add -p jiyababa -n 'digitalproperty-network' -i PeerAdmin -s adminpw -d '{"$class":"net.biz.digitalPropertyNetwork.Person","personId":"dcsen@abc.com","firstName":"Dul","lastName":"Sen"}'
composer identity issue -p jiyababa -n 'digitalproperty-network' -i admin -s adminpw -u dcsen1 -a "resource:net.biz.digitalPropertyNetwork.Person#dcsen@abc.com"
I am still getting the same error from the Composer chaincode:
2017-09-17 14:56:12.599 UTC [Composer] Error -> ERRO 01e @JS : IdentityManager :getIdentity() Error: The current identity has not been registered:admin
2017-09-17 14:56:12.682 UTC [Composer] Error -> ERRO 01f @JS : IdentityManager :getIdentity() Error: The current identity has not been registered:admin
2017-09-17 15:09:58.641 UTC [Composer] Error -> ERRO 020 @JS : IdentityManager :validateIdentity() Error: The current identity must be activated (ACTIVATION_REQUIRED)
I also tried using the "admin" user to add the participant and issue the identity, but no luck; I get this Composer chaincode error:
Error: Unhandled promise rejection {activationRequired:true} at [anon] (/chaincode/input/src/composer/vendor/gopkg.in/olebedev/go-duktape.v3/duk_console.c:55) internal
@JS : IdentityManager :validateIdentity() Error: The current identity must be activated (ACTIVATION_REQUIRED)
But I can ACTIVATE the identity through the Composer CLI using the following command:
composer network ping -n digitalproperty-network -p jiyababa -i dcsen1 -s BEkeKFlLVnBL
Once I ACTIVATED it through the CLI, I could then use the identity in the REST server. That means the first transaction request from the REST server is not activating the identity in the identity registry.
This can happen if you are using an old version of the CLI/client application/REST server to connect to a much newer version of the Composer runtime that is deployed when you deploy the business network.
It sounds like you deployed the business network and issued identities using a newer version of the CLI, but haven't updated the REST server to the same version.
Information about updating can be found at
https://hyperledger.github.io/composer/managing/updating-composer.html
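A quick way to spot a mismatch is to compare versions and bring the client-side packages up to the same release; a sketch, using the 0.12.2 release number from the question:
# Check the locally installed Composer client version.
composer --version
# Bring the CLI and REST server to the same release as the deployed runtime.
npm install -g composer-cli@0.12.2 composer-rest-server@0.12.2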
A temporary workaround is to ACTIVATE the card yourself:
$ composer identity list -c admin@basic-sample-network
✔ List all identities in the business network
-
$class: org.hyperledger.composer.system.Identity
identityId: 8dc315997a5ad0ade3b4343c6b81ae37a3c2c7f22eddab90dd09717e7459772e
name: admin
issuer: ac3dbcbe135ba48b29f97665bb103f8260c38d3872473e584314392797c595f3
certificate:
"""
-----BEGIN CERTIFICATE-----
MIICAjCCAaigAwIBAgIUOA7RAw1TbKo2UjwkeS9YRCSFupowCgYIKoZIzj0EAwIw
czELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNh
biBGcmFuY2lzY28xGTAXBgNVBAoTEG9yZzEuZXhhbXBsZS5jb20xHDAaBgNVBAMT
E2NhLm9yZzEuZXhhbXBsZS5jb20wHhcNMTgwODA4MDYzODAwWhcNMTkwODA4MDY0
MzAwWjAhMQ8wDQYDVQQLEwZjbGllbnQxDjAMBgNVBAMTBWFkbWluMFkwEwYHKoZI
zj0CAQYIKoZIzj0DAQcDQgAEeBeSqbzishSi0Q0+f0HavwPsN1240zIxuL12iWUR
U9aEO/cLusEr9fg44UUh3xzp4VQGChJ5TNRu4s/uBbuFxqNsMGowDgYDVR0PAQH/
BAQDAgeAMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFF1ZYXNpBsGXEomhlTBT9NeJ
CUqIMCsGA1UdIwQkMCKAIBmrZau7BIB9rRLkwKmqpmSecIaOOr0CF6Mi2J5H4aau
MAoGCCqGSM49BAMCA0gAMEUCIQCMuttwm6sSCjtwl8xk4FZM4PHH0F5YGxJvNUjn
SeeCCQIgAmmD9aabcY7jHttdfAZ2zNepihdRKjN1xsxy4i7KaQ4=
-----END CERTIFICATE-----
"""
state: ACTIVATED
participant: resource:org.hyperledger.composer.system.NetworkAdmin#admin
Command succeeded
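If the identity still shows ISSUED rather than ACTIVATED, submitting any transaction with the card triggers activation; the ping used in the question is the simplest choice (the card name here comes from the listing above):
# Any transaction submitted with the card activates an ISSUED identity.
composer network ping -c admin@basic-sample-network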
I have created an AWS keypair.
I am following the instructions here word for word: https://aws.amazon.com/articles/4926593393724923
When I type in:
aws emr create-cluster --name SparkCluster --ami-version 3.2 --instance-type m3.xlarge --instance-count 3 --ec2-attributes KeyName=MYKEY --applications Name=Hive --bootstrap-actions Path=s3://support.elasticmapreduce/spark/install-spark
replacing MYKEY with both the full path and just the name of my key pair (I've tried everything), I get the following error:
A client error (InvalidSignatureException) occurred when calling the RunJobFlow operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
The Canonical String for this request should have been
'POST
/
content-type:application/x-amz-json-1.1
host:elasticmapreduce.us-east-1.amazonaws.com
user-agent:aws-cli/1.7.5 Python/2.7.8 Darwin/14.1.0
x-amz-date:20150210T180927Z
x-amz-target:ElasticMapReduce.RunJobFlow
content-type;host;user-agent;x-amz-date;x-amz-target
dbb58908194fa8deb722fdf65ccd713807257deac18087025cec9a5e0d73c572'
The String-to-Sign should have been
'AWS4-HMAC-SHA256
20150210T180927Z
20150210/us-east-1/elasticmapreduce/aws4_request
c83894ad3b43c0657dac2c3ab7f53d384b956087bd18a3113873fceeabc4ae26'
What am I doing wrong?
GOT IT. Sadly, the page above mentions nothing about having to set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. You must do that first; I learned it from a totally different setup guide: http://spark.apache.org/docs/1.2.0/ec2-scripts.html.
After I set them, the Amazon instructions worked.
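For reference, a sketch of that first step; the key values are placeholders:
# Export credentials so the AWS CLI can sign requests (placeholder values).
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Then re-run the create-cluster command from the question.
aws emr create-cluster --name SparkCluster ...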