Azure docker registry - bash script to check if a docker tag already exists

What I need is to build an image (as a CI product) and push it only if that tag/version is not already in our private Azure-hosted Docker registry.
Following this Stack Overflow answer, I tried to replicate the bash script there against the Azure registry login server, but it does not seem to support the exact same API (I get a 404). How can I achieve this "check if version/tag exists in registry" via the HTTP/REST API with Azure Container Registry? (Without using the built-in az tool.)

How can I achieve this "check if version/tag exists in registry" via
the http/REST api with azure container registry?
With Azure Container Registry, you should use Authorization: Basic to authenticate. You can use the ACR username and password as the credentials, then use this script to list all tags:
export registry="jasonacrr.azurecr.io"
export user="jasonacrr"
export password="t4AH+K86xxxxxxx2SMxxxxxzjNAMVOFb3c"
export operation="/v2/aci-helloworld/tags/list"
export credentials=$(echo -n "$user:$password" | base64 -w 0)
export catalog=$(curl -s -H "Authorization: Basic $credentials" https://$registry$operation)
echo "Catalog"
echo $catalog
Output like this:
[root@jasoncli jason]# echo $catalog
{"name":"aci-helloworld","tags":["v1","v2"]}
Then you can use the shell to check whether the tag exists, as in the sketch below.
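For instance, a minimal sketch building on the variables above (assuming jq is available; "v3" is just an example tag):
# Check whether an example tag is present in the tags list fetched above.
tag_to_check="v3"   # example value, replace with the tag your CI is about to push
if echo "$catalog" | jq -e --arg t "$tag_to_check" '.tags | index($t)' > /dev/null; then
  echo "Tag $tag_to_check already exists in the registry, skipping build/push"
else
  echo "Tag $tag_to_check not found, building and pushing"
fi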
Hope this helps.
Update:
For more information about Azure Container Registry integration with Azure AD, please refer to this article.

Related

Container access to gcloud credentials denied

I'm trying to implement the container that converts data from HL7 to FHIR (https://github.com/GoogleCloudPlatform/healthcare/tree/master/ehr/hl7/message_converter/java) on Google Cloud. However, I can't build the container, locally, on my machine, to later deploy to the cloud.
The error always occurs in the credentials authentication part when I try to run the image locally using Docker:
docker run --network=host -v ~/.config:/root/.config hl7v2_to_fhir_converter \
  /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> --fhirLocationId=<LOCATION_ID> \
  --fhirDatasetId=<DATASET_ID> --fhirStoreId=<STORE_ID> --pubsubProjectId=<PUBSUB_PROJECT_ID> \
  --pubsubSubscription=<PUBSUB_SUBSCRIPTION_ID> --apiAddrPrefix=<API_ADDR_PREFIX>
I am using Windows and have already performed the command below to create the credentials:
gcloud auth application-default login
The credential, after executing the above command, is saved in:
C:\Users\XXXXXX\AppData\Roaming\gcloud\application_default_credentials.json
The -v ~/.config:/root/.config option is supposed to let Docker find the credential when running the image, but it does not. The error that occurs is:
The Application Default Credentials are not available. They are available if running in Google
Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined
pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials for more information.
What am I doing wrong?
Thanks,
A container runs isolated from the rest of the system; that isolation is its strength and is why this packaging method is so popular.
Thus, none of the configuration in your environment reaches the container unless you pass it to the container runtime environment, like the GOOGLE_APPLICATION_CREDENTIALS env var.
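A minimal sketch of what that can look like for this image (not your exact setup; the host path below is the Linux/macOS location of the ADC file, on Windows it would be the AppData path you mentioned, and the trailing ... stands for the remaining flags from your original command):
# Mount the credentials file into the container and point
# GOOGLE_APPLICATION_CREDENTIALS at the mounted path.
docker run --network=host \
  -v "$HOME/.config/gcloud/application_default_credentials.json:/root/adc.json:ro" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/root/adc.json \
  hl7v2_to_fhir_converter /healthcare/bin/healthcare --fhirProjectId=<PROJECT_ID> ...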
I wrote an article on this. Let me know if it helps, and, if not, we will discuss the blocking point!

Trigger a shell script in Azure

I'm using a Kubernetes cluster in Azure running an ingress controller. The ingress controller routes to different services via a given context root.
To add another service and connect it to my ingress I built a simple shell script that looks like this:
kubectl apply -f $1'-svc.yaml'
# some script magic here to add a new route in the hello-world-ingress.json
kubectl apply -f 'hello-world-ingress.json'
I tested the script on my local machine and everything works as expected. Now I want to trigger the script with an HTTP rest call on Azure.
Does anyone have an idea how to do that? So far I know:
I need the Azure cli with Kubernetes to run the kubectl command
I need something to build the HTTP trigger. I tried using Azure Functions, but I wasn't able to install the Azure cli in Azure Functions on the Azure Portal, and I wasn't able to install the Azure cli + Azure Functions in a Docker container.
Does anyone have an idea how to trigger my shell script via HTTP in Azure in an environment where the Azure cli exists?
The easiest way, in my opinion, is to set up an Azure instance with kubectl and the Azure cli configured to talk to your cluster, and on that same server set up something like shell2http. For example:
shell2http -export-all-vars /mybash "yourbash.sh"
shell2http -form /apply "kubectl apply -f $v'-svc.yaml'"
shell2http -export-all-vars /domore "domore.sh"
Where $v above is the name of your deployment.
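Once shell2http is running, the endpoints can be triggered over HTTP, for example with curl (a sketch: 8080 is shell2http's default port and <vm-ip> is a placeholder for the VM's address):
# Trigger the /mybash handler, which runs yourbash.sh on the VM.
curl "http://<vm-ip>:8080/mybash"
Since this exposes kubectl over plain HTTP, you would want to lock the endpoint down (firewall rules or an auth layer) before calling it from anywhere public.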

Packer and AWS credentials: CryptProtectData failed

I am provisioning a Windows machine using Packer. I use a Powershell Script to do most of the provisioning.
An important provisioning step is to download some software from a private S3 bucket. In an attempt to first set the AWS credentials, I run this snippet:
echo "Configure AWS"
echo "AWS_ACCESS_KEY_ID: ${env:AWS_ACCESS_KEY_ID}"
echo "AWS_SECRET_ACCESS_KEY: ${env:AWS_SECRET_ACCESS_KEY}"
echo "AWS_DEFAULT_REGION: ${env:AWS_DEFAULT_REGION}"
Set-AWSCredentials -AccessKey ${env:AWS_ACCESS_KEY_ID} -SecretKey ${env:AWS_SECRET_ACCESS_KEY} -StoreAs default
And invariably get an error when Packer runs it on the machine:
amazon-ebs: Set-AWSCredentials : CryptProtectData failed.
amazon-ebs: At C:\Windows\Temp\script.ps1:15 char:1
amazon-ebs: + Set-AWSCredentials -AccessKey ${env:AWS_ACCESS_KEY_ID} -SecretKey
amazon-ebs: ${env:AWS_SECR ...
If I run this command directly on the Windows instance it works fine.
Thanks,
Jevon
From the PowerShell doc:
The PowerShell Tools can use either of two credentials stores.
The AWS SDK store, which encrypts your credentials and stores them in your home folder. The AWS SDK for .NET and AWS Toolkit for Visual
Studio can also use the AWS SDK store.
The credentials file, which is also located in your home folder, but stores credentials as plain text. By default, the credentials file is stored here: C:\Users\username\.aws. The AWS SDKs and the AWS Command Line Interface can also use the credentials file. If you are running a script outside of your AWS user context, be sure that the file that contains your credentials is copied to a location where all user accounts (local system and user) can access your credentials.
From a Google search, it seems people turn to using BasicAWSCredentials instead.
I am not sure this is something you can do (it depends on whether you use an SDK or not); if not, you can use the second approach described in the doc: store the credentials in C:\Users\username\.aws and use the S3 commands with the credentials stored in this file.
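For reference, a sketch of that plain-text credentials file, stored at C:\Users\username\.aws\credentials (the values are placeholders):
[default]
aws_access_key_id = <AWS_ACCESS_KEY_ID>
aws_secret_access_key = <AWS_SECRET_ACCESS_KEY>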

Azure xplat to run a CustomScriptExtension in a Windows VM

I am creating Windows VMs from the azure xplat cli, using the following command:
azure network vnet create --location "East US" testnet
azure vm create --vm-name xplattest3 --location "East US" --virtual-network-name testnet --rdp 3389 xplattest3 ad072bd3082149369c449ba5832401ae__Windows-Server-Remote-Desktop-Session-Host-on-Windows-Server-2012-R2-20150828-0350 username SAFEpassword!
After the Windows VM is created I would like to execute a PowerShell script to configure the server. As far as I understand, this is done by executing a CustomScriptExtension.
I found several examples for PowerShell but no examples for Xplat cli.
I would like, for example, to run the following HelloWorld PowerShell script:
New-Item -ItemType directory -Path C:\HelloWorld
After reading the documentation, I should be able to run a CustomScriptExtension by executing something like this (the following command does not work):
azure vm extension set xplattest3 CustomScriptExtension Microsoft.Compute 1.4 -i '{"URI":["https://gist.githubusercontent.com/tk421/8b7dd37145eaa8f82e2f/raw/36c11aafd3f5d6b4af97aab9ef5303d80e8ab29b/azureCustomScriptExtensionTest"] }'
I think that the problem is the -i parameter. I have not been able to find an example on the Internet. There are some references and documentation, such as MSDN and GitHub, but no examples.
Therefore, my question: How to execute a PowerShell script after creating a Windows VM in Azure using the xplat cli ?
Please note that my current approach is a CustomScriptExtension, but anything that allows bootstrapping a configuration script will be considered!
EDIT: How do I know it is failing?
After I run the command azure vm extension ...:
xplat cli confirms that the command has been executed properly.
As per MSDN documentation, the folder C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\ is created, but there is no script downloaded to C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\{version-number}\Downloads\{iteration}
The folder C:\HelloWorld is not created, which means that the contents of the script has not been executed.
I cannot find any logs or a trace to know what happened. Does anyone know where I can find this information?
The parameters (the JSON) that I used after reading the MSDN documentation were not correct. However, you can get clues about the correct parameters by reading the C# code.
And the final command is:
azure vm extension set xplattest3 CustomScriptExtension Microsoft.Compute 1.4 -i '{"fileUris":["https://macstoragetest.blob.core.windows.net/testcontainername/createFolder.ps1"], "commandToExecute": "powershell -ExecutionPolicy Unrestricted -file createFolder.ps1" }'
This command successfully creates the C:\HelloWorld directory.
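For readability, the inline JSON passed to -i in that command expands to the following (same blob URL and script name as above, shown only to make the two keys visible):
{
  "fileUris": [
    "https://macstoragetest.blob.core.windows.net/testcontainername/createFolder.ps1"
  ],
  "commandToExecute": "powershell -ExecutionPolicy Unrestricted -file createFolder.ps1"
}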
NOTE: I decided to upload the script to Azure because I read in a post and in the documentation that this is mandatory. However, I just made a test downloading the original script from GitHub and it works fine, so I guess the documentation is a bit outdated.
EDIT: I created a detailed article that explains how to provision Windows servers with xplat-cli in Azure.

How can I setup "bq" (bigquery) shell command without gcloud or gsutil?

I want to pass service account information to the "bq" command as parameters, in order to change its authentication flexibly. I passed some options to the "bq" command like this:
$ bq load --service_account=<my_service_account> \
    --service_account_credential_file=<credential_output> \
    --service_account_private_key_file=<path_to_pem> \
    --project_id=<my_project> \
    <dataset>.<table> <localfile> <schema>
It works well on my local PC after configuring gsutil with my account, but it seems that the "bq" command doesn't accept any commands before being configured with "gcloud" or "gsutil", even if the service account information is given:
> You do not currently have an active account selected.
> Please run:
>
> $ gcloud auth login
>
> to obtain new credentials, or if you have already logged in with a different account:
>
> $ gcloud config set account ACCOUNT
>
> to select an already authenticated account to use.
Can I achieve this purpose with "bq" or not? I've tried exporting the GOOGLE_APPLICATION_CREDENTIALS environment variable, but it doesn't seem to work.
Thank you in advance.
For authentication in "bq", there is the "init" command. But even if you run:
bq init
The system returns the following warning:
The "init" command is no longer needed with the Cloud SDK.
To authenticate, run gcloud auth.
So it seems there may be no recommended way to authenticate solely from "bq" without using "gcloud".
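As a workaround (a sketch, not from the original setup): you can activate the service account for the whole Cloud SDK with gcloud, and bq then reuses those credentials; the key path, project and table names below are placeholders:
# Authenticate the Cloud SDK with a service account key; bq shares this credential store.
gcloud auth activate-service-account --key-file=/path/to/service_account.json
gcloud config set project <my_project>
# Then run the load as usual.
bq load <dataset>.<table> <localfile> <schema>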
