I'm writing a simple bash script to pull an image from Azure Container Registry. If I type the commands in the shell, I get authenticated and the images are pulled without any issue. However, when I run the same commands from the bash script, I get an unauthorized error.
Script
#!/bin/sh
sudo service docker start
docker logout
az logout
docker login myregistry.azurecr.io
sudo docker pull myregistry.azurecr.io/rstudio-server:0.1
Error
Error response from daemon: Get "https://myregistry.azurecr.io/v2/": unauthorized: aad access token with sp failed client id must be guid
Error response from daemon: Head "https://myregistry.azurecr.io/v2/rstudio-server/manifests/0.1": unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
I don't understand why it's happening even when I'm logged in.
Tested in my environment; it is working fine for me.
Make sure docker login succeeds (you should see the message "Your password will be stored unencrypted in /root/.docker/config.json"); if it does not, try authenticating manually by providing the username and password in the bash script.
sudo service docker start
docker logout
az logout
docker login myregistry.azurecr.io --username $SP_APP_ID --password $SP_PASSWD
sudo docker pull myregistry.azurecr.io/rstudio-server:0.1
You can also use the username and password of the ACR admin account (shown under the registry's Access keys in the Azure portal) in place of SP_APP_ID and SP_PASSWD.
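If you would rather not hard-code those values, here is a minimal sketch that pulls the admin credentials with the az CLI (this assumes the registry's admin account is enabled and you are already logged in with az; ACR_NAME is a placeholder for your registry name):
ACR_NAME=myregistry
SP_APP_ID=$(az acr credential show --name $ACR_NAME --query username --output tsv)
SP_PASSWD=$(az acr credential show --name $ACR_NAME --query 'passwords[0].value' --output tsv)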
I would also suggest following the Microsoft documentation on authenticating to ACR from Docker for more information.
I have configured a specific Lambda alias (please note this) as a Cognito trigger using the CLI, as there is no provision in the web console to do so. Now I am getting "PreSignUp invocation failed due to the error AccessDeniedException" while signing up. I am not sure, but this might be happening because I configured the trigger using the CLI. I tried to locate a Cognito-specific role in IAM, but I didn't find such a role.
So how can I update missing permissions in IAM?
I used the following CLI command to update the permissions.
aws lambda add-permission --function-name <ARN of the lambda alias> --statement-id AllowCognitoInvoke --source-arn <user-pool-arn> --action lambda:InvokeFunction --principal cognito-idp.amazonaws.com
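Note that --statement-id is required by the CLI; it is just an arbitrary unique label for the policy statement (AllowCognitoInvoke above is my own choice). To verify the permission landed, you can dump the function's resource policy:
aws lambda get-policy --function-name <ARN of the lambda alias>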
This question is very similar to Google Cloud Tasks cannot authenticate to Cloud Run, but I am not using my own domain; I am using the Cloud Run domain itself.
I have followed both of the tutorials below (they are quite similar):
https://cloud.google.com/run/docs/triggering/using-tasks
https://cloud.google.com/tasks/docs/creating-http-target-tasks
I am not sure if there is more to the tutorial that I am missing, but below is the complete scenario.
I have some Django REST based APIs running on Google Cloud Run, and they are public. Furthermore, I can use them without any issue (though they need authentication).
I have created a Google Cloud Tasks queue, and I am sending tasks to it using the following code:
import json

from django.conf import settings
from google.cloud import tasks_v2


class QueueTaskMixin:
    def send_task(self, payload=None):
        url = 'https://my-public-cloud-run-app.a.run.app/api/conversation_types/'

        # `credentials` is created elsewhere and belongs to a service account
        # which has all sorts of accesses:
        # Cloud SQL Admin, Cloud SQL Client, Cloud SQL Editor,
        # Cloud Tasks Admin, Cloud Tasks Task Runner,
        # Service Account Token Creator, Service Account User,
        # Cloud Run Admin, Cloud Run Invoker, Cloud Run Service Agent,
        # Storage Admin
        client = tasks_v2.CloudTasksClient(credentials=credentials)

        parent = client.queue_path(
            project=settings.GS_PROJECT_ID,
            location=settings.GS_Q_REGION,
            queue=settings.GS_Q_NAME)

        task = {
            'http_request': {
                'headers': {
                    'Content-type': 'application/json',
                },
                'http_method': tasks_v2.HttpMethod.POST,
                'url': url,
                'oidc_token': {
                    # GS_Q_IAM_EMAIL is another service account that has
                    # Cloud Tasks Enqueuer, Service Account User, Cloud Run Invoker
                    'service_account_email': settings.GS_Q_IAM_EMAIL,
                },
            }
        }

        if payload:
            if isinstance(payload, dict):
                payload = json.dumps(payload)
            converted_payload = payload.encode()
            task['http_request']['body'] = converted_payload

        response = client.create_task(
            request={'parent': parent, 'task': task}
        )
        print('Created task {}'.format(response.name))
Now I am getting PERMISSION_DENIED(7): HTTP status code 403 error.
My API logs show the same:
"POST /api/conversation_types/ HTTP/1.1" 403 58 "-" "Google-Cloud-Tasks"
Forbidden: /api/conversation_types/
Now, what I am not sure about is whether this 403 error is thrown by:
two Google services trying to authorize each other, or
my API, because my API requires authentication/authorization: a user needs to log in using their username and password, they will get a JWT token, and then they can call this API.
Referring to the documentation, I am not sure where I have to provide my API's username/password/JWT token. The documentation says:
To authenticate between Cloud Tasks and an HTTP Target handler, Cloud Tasks creates a header token. This token is based on the credentials in the Cloud Tasks Enqueuer service account, identified by its email address.
Do I need to add this service account email address into my API as a user? Do I use OIDC or OAuth?
Any comments or answers much appreciated.
Update 1 - After removing auth from my API, Cloud Tasks is able to call the API successfully. So now, how do I authenticate Cloud Tasks so that it can call my API?
Update 2 - Tried using OAuthToken and got the error:
400 Invalid url for HttpRequest.oauth_token. The hostname in the url must end with ".googleapis.com"
Looks like I will have to go with the OIDC token only.
Update 3 - Google Docs says:
OIDC tokens are signed JSON Web Tokens (JWT) and are used primarily to assert identity and not to provide any implicit authorization against a resource, unlike OAuth tokens, which do provide access.
Does that mean OIDC tokens cannot be used for authorization? Because I am getting an authorization error here. Can't use OIDC, can't use OAuth; what to use then?
Update 4
As per comments from @johnhanley, I have updated my API to accept a Bearer token for authentication. But even though my API is receiving the token correctly, it is not able to authorize it and gives an invalid token error (I verified this using curl; in both cases, whether the token format is incorrect or the token itself is incorrect, the API simply returns a 403 Forbidden error).
Can anyone tell me how to give a password (I can generate one in my API for the service account's email as the username) to the service account, so that OIDC can use that email and password to generate a token and authenticate? Or am I going in the wrong direction?
Check the presence of the Cloud Tasks service agent (service-[project-number]@gcp-sa-cloudtasks.iam.gserviceaccount.com) in your IAM Admin Console page and grant it the roles/iam.serviceAccountTokenCreator role.
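For example, a sketch with placeholder project values:
gcloud projects add-iam-policy-binding <project-id> \
    --member serviceAccount:service-<project-number>@gcp-sa-cloudtasks.iam.gserviceaccount.com \
    --role roles/iam.serviceAccountTokenCreator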
Don't turn off authentication; it does work. Here are some steps to follow, in the form of bash shell commands with comments explaining each step.
Set some variables:
PROJECT_ID='my-gcp-project-123'
TASK_ENQUEUER_SVC_ACCOUNT='tasks-creator-svc'
TASKS_SRV_ACCOUNT='cloud-tasks-svc'
CLOUD_RUN_SRV_ACCOUNT='cloud-run-svc'
GCP_REGION='australia-southeast1'
Create a cloud task queue
gcloud tasks queues create my-task-queue
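A quick check that the queue was created:
gcloud tasks queues describe my-task-queue --location=$GCP_REGION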
Create a service account for sending the tasks - you need cloudtasks.enqueuer and serviceAccountUser
gcloud iam service-accounts create $TASK_ENQUEUER_SVC_ACCOUNT \
    --display-name "Service account for sending cloud tasks"

gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
    serviceAccount:$TASK_ENQUEUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
    --role "roles/logging.logWriter"

# cloudtasks.enqueuer role is needed to create tasks
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
    serviceAccount:$TASK_ENQUEUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
    --role "roles/cloudtasks.enqueuer"

# the service account user role is needed for the iam.serviceAccounts.actAs permission,
# so a task can be generated as service account $TASKS_SRV_ACCOUNT
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
    serviceAccount:$TASK_ENQUEUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
    --role "roles/iam.serviceAccountUser"
Create the service account for running the Cloud Run service
gcloud iam service-accounts create $CLOUD_RUN_SRV_ACCOUNT \
    --display-name "Service account for running cloud run service"

gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
    serviceAccount:$CLOUD_RUN_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
    --role "roles/logging.logWriter"
Create the service account that cloud-tasks will use - no special permissions needed
gcloud iam service-accounts create $TASKS_SRV_ACCOUNT \
    --display-name "Service account for the cloud-tasks"

gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
    serviceAccount:$TASKS_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
    --role "roles/logging.logWriter"
Deploy your Cloud Run service with --ingress=all and --no-allow-unauthenticated.
Cloud Tasks does not use serverless VPC connectors, so ingress needs to be all.
gcloud run deploy my-cloud-run-service \
    --image=us.gcr.io/${PROJECT_ID}/my-docker-image \
    --concurrency=1 \
    --cpu=2 \
    --memory=250Mi \
    --max-instances=10 \
    --platform=managed \
    --region=$GCP_REGION \
    --no-allow-unauthenticated \
    --port=8080 \
    --service-account=$CLOUD_RUN_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
    --ingress=all \
    --project=${PROJECT_ID}
You must now give TASKS_SRV_ACCOUNT permission to call the Cloud Run target service:
gcloud run services add-iam-policy-binding my-cloud-run-service \
    --member=serviceAccount:$TASKS_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
    --role=roles/run.invoker \
    --region=$GCP_REGION
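You can confirm the invoker binding took effect by dumping the service's IAM policy:
gcloud run services get-iam-policy my-cloud-run-service \
    --region=$GCP_REGION \
    --project=${PROJECT_ID}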
You can then send your cloud tasks with authentication tokens, as per the instructions at this step:
https://cloud.google.com/tasks/docs/creating-http-target-tasks#token
The service_account_email parameter is the TASKS_SRV_ACCOUNT created above (its full email, $TASKS_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com).
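As a final sanity check, you can call the locked-down service directly with an identity token for your own gcloud account (assuming your account has the run.invoker role; the URL below is a placeholder for your service's URL):
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
    https://my-cloud-run-service-<hash>.a.run.app/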
I am following the virtual assistant getting-started sample:
Virtual assistant
I am stuck on the step "Skill Authentication".
I tried to use the following command with all the arguments, and a generated botsecret for the --secret argument.
msbot connect generic --name "Authentication" --keys "{\"YOUR_AUTH_CONNECTION_NAME\":\"Azure Active Directory v2\"}" --bot YOURBOTFILE.bot --secret "YOUR_BOT_SECRET" --url "portal.azure.net"
I still get the following error:
Error: You are attempting to perform an operation which needs access to the secret and --secret is missing
Can someone tell me what I am missing?
Trying to configure Portworx volume backups (pxctl cloudsnap) to a localhost minio server (emulating S3).
The first step is to create cloud credentials using pxctl credentials create,
e.g.
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000
This results in:
Error configuring cloud provider.Make sure the credentials are correct: RequestError: send request failed caused by: Get https://10.0.0.1:9000/: EOF
Disabling SSL (which is not configured, as this is just a localhost test) gives me:
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
Which returns:
Not authenticated with the secrets endpoint
I've tried this with both minio gateway (nas) and minio server - same result.
The Portworx container is running within Rancher.
Any thoughts appreciated
Resolved via the instructions at https://docs.portworx.com/secrets/portworx-with-kvdb.html
i.e. set the secret type to kvdb in /etc/pwx/config.json:
"secret": {
"cluster_secret_key": "",
"secret_type": "kvdb"
},
Then log in using ./pxctl secrets kvdb login
After this, credentials create was successful, as was a subsequent cloudsnap backup. The test was run with the --s3-disable-ssl switch.
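For reference, the full working sequence (same throwaway endpoint and keys as in the question; the final credentials list is just to confirm the entry was created):
./pxctl secrets kvdb login
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey \
    --s3-secret-key mybadsecretkey --s3-region local \
    --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
./pxctl credentials list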
Note - kvdb stores secrets in plain text, so it is obviously not suitable for production.