How to include environment variables for IBM Cloud Code Engine create/update tasks via Secrets Manager (key-value secret) - ibm-cloud-code-engine

After successfully creating a Secrets Manager instance in IBM Cloud, I created a key-value secret in the default secret group. This secret contains the default set of environment variables to be used for the Code Engine deployment.
However, I am now continuously facing issues when trying to deploy the app. I use the command below to deploy; let me know if there is anything wrong here:
ibmcloud ce application create --name ce-sample-app --image IMAGE_NAME --cpu 1 --env-from-secret sample-portal-ce-app-env-variables --registry-secret xyxyxyxyxyxy

The secrets that you reference in Code Engine commands are secrets managed directly in Code Engine, rather than in a separate Secrets Manager service instance.
Here is the documentation about secrets in Code Engine: https://cloud.ibm.com/docs/codeengine?topic=codeengine-configmap-secret
Basically you would need to create your secret like this:
ibmcloud ce secret create --name sample-portal-ce-app-env-variables --from-literal ENVVAR1=value1 --from-literal "ENVVAR2=value with space"
There are also options to import all environment variables from a file into a secret. To see all options, run:
ibmcloud ce secret create --help
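For example, assuming your CLI version supports the --from-env-file option (check the --help output above to confirm the exact flag name), importing a whole env file could look like this sketch:
ibmcloud ce secret create --name sample-portal-ce-app-env-variables --from-env-file .env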

Related

Azure Machine Learning Compute Instance Not Creating using Azure CLI and Azure DevOps Pipeline

I create my Azure Machine Learning Workspace using Azure CLI:
$env="prd"
$instance="001"
$location="uksouth"
$suffix="predict-$env-$location-$instance"
$rg="rg-$suffix"
$ws="mlw-$suffix"
$computeinstance="vm-$suffix".Replace('-','')
$computeinstance
az group create --name $rg --location $location
az configure --defaults group=$rg
az ml workspace create --name $ws
az configure --defaults workspace=$ws
az ml compute create --name $computeinstance --size Standard_DS11_v2 --type ComputeInstance
I run the above code manually in Visual Studio Code, and everything works properly.
However, when I integrate the above into an Azure DevOps pipeline via the YAML:
steps:
- bash: az extension add -n ml
  displayName: 'Install Azure ml extension'
- task: AzureCLI@2
  inputs:
    azureSubscription: "$(AZURE_RM_SVC_CONNECTION)"
    scriptType: 'ps'
    scriptLocation: 'scriptPath'
    scriptPath: './environment_setup/aml-cli.ps1'
The pipeline creates the Azure Machine Learning workspace as expected.
The pipeline creates the compute instance, which shows the "Running" state with a green status indicator.
However, the compute instance has all applications greyed out. This means I cannot connect to the compute instance using a terminal, notebook, or otherwise, essentially making it useless. The application links are not clickable (screenshot omitted).
I attempted:
Specifying brand new resource names.
Creating the workspace and compute in separate pipelines in case of a timing issue.
Deleting the resource group first using:
az group delete -n rg-predict-prd-uksouth-001 --force-deletion-types Microsoft.Compute/virtualMachines --yes
All to no avail.
How do I create a useable Azure Machine Learning compute instance using Azure CLI and Azure DevOps pipelines?
Earlier, only the creator of the instance was allowed to run Jupyter, JupyterLab, etc. (check the comments on this issue), but there is now a preview feature that allows creating a compute instance "on behalf of" someone else.
So please try passing the object ID of the AAD user who will access the compute instance's development tools. The same can also be done with ARM templates using the 'personalComputeInstanceSettings' property.
az ml compute create --name $computeinstance --size Standard_DS11_v2 --type ComputeInstance --user-object-id <aaduserobjectid> --user-tenant-id <aaduserobjecttenantid>
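If you need to look up that object ID first, the Azure CLI can return it. A minimal sketch, assuming a recent CLI where the Graph property is named id (older versions expose objectId instead); the user principal name here is hypothetical:
az ad user show --id someuser@yourtenant.onmicrosoft.com --query id -o tsv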

Container access to gcloud credentials denied

I'm trying to implement the container that converts data from HL7 to FHIR (https://github.com/GoogleCloudPlatform/healthcare/tree/master/ehr/hl7/message_converter/java) on Google Cloud. However, I can't build the container locally on my machine to later deploy to the cloud.
The error always occurs in the credential authentication part when I try to run the image locally using Docker:
docker run --network=host -v ~/.config:/root/.config hl7v2_to_fhir_converter \
  /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> --fhirLocationId=<LOCATION_ID> \
  --fhirDatasetId=<DATASET_ID> --fhirStoreId=<STORE_ID> --pubsubProjectId=<PUBSUB_PROJECT_ID> \
  --pubsubSubscription=<PUBSUB_SUBSCRIPTION_ID> --apiAddrPrefix=<API_ADDR_PREFIX>
I am using Windows and have already performed the command below to create the credentials:
gcloud auth application-default login
The credential, after executing the above command, is saved in:
C:\Users\XXXXXX\AppData\Roaming\gcloud\application_default_credentials.json
The -v ~/.config:/root/.config option is supposed to let Docker find the credential when running the image, but it does not. The error that occurs is:
The Application Default Credentials are not available. They are available if running in Google
Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined
pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials for more information.
What am I doing wrong?
Thanks,
A container runs isolated from the rest of the system; that isolation is its strength and is why this packaging method is so popular.
Thus, all the configuration in your environment is void if you don't pass it on to the container runtime environment, such as the GOOGLE_APPLICATION_CREDENTIALS env var.
I wrote an article on this. Let me know if it helps, and, if not, we will discuss the blocking point!
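In this case, that means mounting the gcloud config directory from its actual Windows location and pointing the env var at the mounted file. A minimal sketch using the credential path from the question (adjust the Windows path for your user; ^ is the cmd.exe line continuation):
docker run --network=host ^
  -v C:\Users\XXXXXX\AppData\Roaming\gcloud:/root/.config/gcloud ^
  -e GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/application_default_credentials.json ^
  hl7v2_to_fhir_converter /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> --fhirLocationId=<LOCATION_ID> --fhirDatasetId=<DATASET_ID> --fhirStoreId=<STORE_ID> --pubsubProjectId=<PUBSUB_PROJECT_ID> --pubsubSubscription=<PUBSUB_SUBSCRIPTION_ID> --apiAddrPrefix=<API_ADDR_PREFIX>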

Azure docker registry - bash script to check if a docker tag already exists

What I need is to build an image (as a CI product) and push it only if the tag version is not already in our private Azure-hosted Docker registry.
Following this Stack Overflow answer, I tried to replicate the bash script there with the Azure registry login server, but it does not seem to support the exact same API (I get a 404). How can I achieve this "check if version/tag exists in registry" via the HTTP/REST API with Azure Container Registry? (Without using the built-in az tool.)
How can I achieve this "check if version/tag exists in registry" via the http/REST api with azure container registry?
In Azure Container Registry, you should use Authorization: Basic to authenticate.
You can use the ACR username and password to build the credentials, then use this script to list all tags:
export registry="jasonacrr.azurecr.io"
export user="jasonacrr"
export password="t4AH+K86xxxxxxx2SMxxxxxzjNAMVOFb3c"
export operation="/v2/aci-helloworld/tags/list"
export credentials=$(echo -n "$user:$password" | base64 -w 0)
export catalog=$(curl -s -H "Authorization: Basic $credentials" https://$registry$operation)
echo "Catalog"
echo $catalog
The output looks like this:
[root@jasoncli jason]# echo $catalog
{"name":"aci-helloworld","tags":["v1","v2"]}
Then you can use the shell to check whether the tag exists, as in the sketch below.
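A minimal sketch of that check, assuming jq is available for JSON parsing and that the tag list fits in a single response page:
export tag="v2"
if echo "$catalog" | jq -e --arg t "$tag" '.tags | index($t)' > /dev/null; then
  echo "Tag $tag already exists, skipping push"
else
  docker push "$registry/aci-helloworld:$tag"
fi
jq -e exits non-zero when the expression evaluates to null or false, so index($t) doubles as an existence test.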
Hope this helps.
Update:
More information about Azure container registry integration with Azure AD, please refer to this article.

InvalidIamUserArnException when registering on prem instance

No matter what instance name I choose, whenever I perform the following on an on prem instance:
aws deploy register --instance-name test --tags "Key=Name,Value=test" --region us-west-2 --debug
The following exception is thrown (always):
2016-04-12 11:02:52,625 - MainThread - awscli.errorhandler - DEBUG - HTTP Response Code: 400
ERROR
A client error (InvalidIamUserArnException) occurred when calling the RegisterOnPremisesInstance operation: Iam User ARN
arn:aws:iam::xxx:user/AWS/CodeDeploy/test is not in a valid format
Register the on-premises instance by following the instructions in "Configure Existing On-Premises Instances by Using AWS CodeDeploy" in the AWS CodeDeploy User Guide.
Despite this error, the user gets created in AWS, and I can continue to register the on-premises instance with the following:
aws deploy register-on-premises-instance --instance-name test --iam-user-arn arn:aws:iam::xxx:user/test
aws deploy install --override-config --config-file codedeploy.onpremises.yml --region us-west-2 --agent-installer s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.msi
The instance is registered and the user is created, but when deploying to it, I always get "No hosts succeeded". The logs for the codedeploy agent show no errors.
I am not sure what's happening here either, since there are no logs on either end, in the CodeDeploy console or from the CodeDeploy agent on the on-premises machine. Any ideas?
Please note I am using Windows Embedded Standard 2010 (which is not in the supported list) with the latest version of the AWS CLI, but I have successfully deployed to it in the past (with a previous version of the AWS CLI).
Figured it out; it seems to be broken* if you try to let 'aws deploy register' create the IAM user for you. However, if you create the user first (via the console or the AWS CLI), it will work.
You can then pass in the option '--iam-user-arn arn:aws:iam::xxx:user/OnPremCodeDeploy' with the 'aws deploy register' command.
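Combining that with the original command from the question, the call would look something like this sketch (the account ID and user name are taken from the question):
aws deploy register --instance-name test --iam-user-arn arn:aws:iam::xxx:user/OnPremCodeDeploy --tags "Key=Name,Value=test" --region us-west-2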
I created the on-premises YAML manually with the correct access keys from the manually created user and then finally ran:
aws deploy install --override-config --config-file conf.onpremises.yml --region us-west-2 --agent-installer s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.msi
* at least with codedeployagent OFFICIAL_1.0.1.950_msi and Windows Embedded
Could you check whether the IAM user you used to register the on-premises instance with CodeDeploy has the proper permissions? They include the following:
"iam:CreateAccessKey",
"iam:CreateUser",
"iam:DeleteAccessKey",
"iam:DeleteUser",
"iam:DeleteUserPolicy",
"iam:ListAccessKeys",
"iam:ListUserPolicies",
"iam:PutUserPolicy",
"iam:GetUser"
This is also covered here: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-configure-on-premises-host.html#how-to-configure-on-premises-host-prerequisites
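If any of these are missing, they can be granted to the IAM identity that runs the register command as an inline policy. A hedged sketch, with hypothetical user and policy names:
aws iam put-user-policy --user-name RegistrationUser --policy-name CodeDeployOnPremRegistration --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["iam:CreateAccessKey","iam:CreateUser","iam:DeleteAccessKey","iam:DeleteUser","iam:DeleteUserPolicy","iam:ListAccessKeys","iam:ListUserPolicies","iam:PutUserPolicy","iam:GetUser"],"Resource":"*"}]}'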

Cloudera CDH on EC2

I am an AWS newbie, and I'm trying to run Hadoop on EC2 via Cloudera's AMI. I installed the AMI, downloaded the cloudera-for-hadoop-on-ec2 tools, and now I'm trying to configure
hadoop-ec2-env.sh
It is asking for the following:
AWS_ACCOUNT_ID
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
EC2_KEYDIR
PRIVATE_KEY_PATH
when running:
./hadoop-ec2 launch-cluster my-cluster 10
I'm getting
AWS was not able to validate the provided access credentials
Firstly, I have the first three attributes for my own account. This is a corporate account, and I received an email with the access key ID and secret access key for my user. Is it possible that my account doesn't have the proper permissions to do what is needed here? Exactly why does this script need my credentials? What does it need to do?
Secondly, where is the EC2 key dir? I've uploaded the key.pem file that Amazon created for me, hard-coded its path into PRIVATE_KEY_PATH, and ran chmod 400 on the .pem file. Is that the correct key that this script needs?
Any help is appreciated.
Sam
The Cloudera EC2 tools rely heavily on the Amazon EC2 API tools. Therefore, you must do the following:
1) Download the Amazon EC2 API tools from http://aws.amazon.com/developertools/351
2) Download the Cloudera EC2 tools from http://cloudera-packages.s3.amazonaws.com/cloudera-for-hadoop-on-ec2-0.3.0.tar.gz
3) Set the following env variables (I am only giving Unix-based examples):
export EC2_HOME=<path-to-tools-from-step-1>
export PATH=$PATH:$EC2_HOME/bin
export PATH=$PATH:<path-to-cloudera-ec2-tools>/bin
export EC2_PRIVATE_KEY=<path-to-private-key.pem>
export EC2_CERT=<path-to-cert.pem>
4) In cloudera-ec2-tools/bin, set the following variables:
AWS_ACCOUNT_ID=<amazon-acct-id>
AWS_ACCESS_KEY_ID=<amazon-access-key>
AWS_SECRET_ACCESS_KEY=<amazon-secret-key>
EC2_KEYDIR=<dir-where-the-ec2-private-key-and-ec2-cert-are>
KEY_NAME=<name-of-ec2-private-key>
And then run
$ hadoop-ec2 launch-cluster my-hadoop-cluster 10
This will create a Hadoop cluster called "my-hadoop-cluster" with 10 nodes spread across multiple EC2 machines.
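Once it is up, the same script should let you log in to the master node; this assumes the Cloudera tools follow the standard Hadoop EC2 script commands, so verify against the script's usage output:
$ hadoop-ec2 login my-hadoop-cluster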
