I've installed kubectl (version 1.16.0) on Windows 10, and the command works fine.
However, when trying to run kubectl config set-credentials <some_param> --auth-provider=oidc, I get the following error: Error: unknown flag: --auth-provider.
This happens even though when I run kubectl config set-credentials -h I can see --auth-provider listed as a possible option.
How can it be fixed?
You can use the kubectl oidc authenticator during the authentication process; it sets the id_token as a bearer token for all requests and refreshes the token once it expires. After you've logged into your provider, use kubectl to add your id_token, refresh_token, client_id, and client_secret to configure the plugin.
The proper configuration of the kubectl config set-credentials command is as follows:
First you have to define the user name for whom the credentials will be created. Then you can pass additional parameters (enable oidc as the auth-provider and add arguments to it). This is how the proper syntax of the kubectl config set-credentials command should look:
$ kubectl config set-credentials USER_NAME \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=( issuer url ) \
--auth-provider-arg=client-id=( your client id ) \
--auth-provider-arg=client-secret=( your client secret ) \
--auth-provider-arg=refresh-token=( your refresh token ) \
--auth-provider-arg=idp-certificate-authority=( path to your ca certificate ) \
--auth-provider-arg=id-token=( your id_token )
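After running the command, you can do a quick sanity check that the user entry was written as expected (USER_NAME is the placeholder from above):

kubectl config view -o jsonpath='{.users[?(@.name=="USER_NAME")].user}'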
You can find more information about authentication here: kubernetes-authentication.
I'm writing a simple bash script to pull an image from Azure Container Registry. If I type the commands in the shell, I get authenticated and the images are pulled without any issue. However, when I run the same commands using the bash script, I get an unauthorized error.
Script
#!/bin/sh
sudo service docker start
docker logout
az logout
docker login myregistry.azurecr.io
sudo docker pull myregistry.azurecr.io/rstudio-server:0.1
Error
Error response from daemon: Get "https://myregistry.azurecr.io/v2/": unauthorized: aad access token with sp failed client id must be guid
Error response from daemon: Head "https://myregistry.azurecr.io/v2/rstudio-server/manifests/0.1": unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
I don't understand why it's happening even when I'm logged in.
I tested this in my environment and it is working fine for me.
Note that your password will be stored unencrypted in /root/.docker/config.json. If the interactive login does not work, try to authenticate manually by providing the username and password in the bash script:
sudo service docker start
docker logout
az logout
docker login myregistry.azurecr.io --username $SP_APP_ID --password $SP_PASSWD
sudo docker pull myregistry.azurecr.io/rstudio-server:0.1
You can also use the ACR admin username and password (shown under the registry's Access keys in the Azure portal) in place of SP_APP_ID and SP_PASSWD.
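If you prefer not to hard-code those values, here is a minimal sketch of fetching them at runtime with the Azure CLI (this assumes the admin account is enabled on the registry; myregistry is a placeholder for your registry name):

# Fetch the ACR admin credentials and log in without pasting them into the script
SP_APP_ID=$(az acr credential show --name myregistry --query username --output tsv)
SP_PASSWD=$(az acr credential show --name myregistry --query 'passwords[0].value' --output tsv)
docker login myregistry.azurecr.io --username "$SP_APP_ID" --password "$SP_PASSWD"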
I would suggest following this Microsoft document for more information about authenticating to ACR from Docker.
metaboss update uri -a 4KmoJffVvFHmNdnHbbjDaWaBTHSfNMqetZVjAVCGyJve -u https://arweave.net/XfP4jW_sF8msGJ5CWQ_ptmjlcYgumlpgKq__QrugU0c -k keypair.json
keypair address is Em4dctbgQ2nkwRdbj7pdsL5hBVmMe6CoCnCucZvX5J9E
I got the same error and solved it by using the Solana config command:
solana config set --url https://api.devnet.solana.com
This will work well.
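You can confirm which cluster the CLI is now pointing at before re-running the metaboss command (this just prints the current settings):

solana config get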
The question is very similar to Google Cloud Tasks cannot authenticate to Cloud Run, but I am not using my own domain. I am using the Cloud Run domain itself.
I have followed both of the tutorials below (they are quite similar):
https://cloud.google.com/run/docs/triggering/using-tasks
https://cloud.google.com/tasks/docs/creating-http-target-tasks
I am not sure if there is more to the tutorial that I am missing, but below is the complete scenario.
I have some Django REST based APIs running on Google Cloud Run, and they are public. Furthermore, I can use them without any issue (though they need authentication).
I have created a Google Tasks queue, and I am sending tasks to it using the following code:
import json

from django.conf import settings
from google.cloud import tasks_v2


class QueueTaskMixin:
    def send_task(self, payload=None):
        url = 'https://my-public-cloud-run-app.a.run.app/api/conversation_types/'
        client = tasks_v2.CloudTasksClient(credentials=credentials)
        # `credentials` above belongs to a service account which has all sorts of access:
        # Cloud SQL Admin, Cloud SQL Client, Cloud SQL Editor,
        # Cloud Tasks Admin, Cloud Tasks Task Runner,
        # Service Account Token Creator, Service Account User,
        # Cloud Run Admin, Cloud Run Invoker, Cloud Run Service Agent,
        # Storage Admin
        parent = client.queue_path(
            project=settings.GS_PROJECT_ID,
            location=settings.GS_Q_REGION,
            queue=settings.GS_Q_NAME)
        task = {
            'http_request': {
                'headers': {
                    'Content-type': 'application/json',
                },
                'http_method': tasks_v2.HttpMethod.POST,
                'url': url,
                'oidc_token': {
                    'service_account_email': settings.GS_Q_IAM_EMAIL,
                    # GS_Q_IAM_EMAIL is another service account that has:
                    # Cloud Tasks Enqueuer, Service Account User, Cloud Run Invoker
                },
            }
        }
        if payload:
            if isinstance(payload, dict):
                payload = json.dumps(payload)
            converted_payload = payload.encode()
            task['http_request']['body'] = converted_payload
        response = client.create_task(
            request={'parent': parent, 'task': task}
        )
        print('Created task {}'.format(response.name))
Now I am getting a PERMISSION_DENIED(7): HTTP status code 403 error.
My API logs show the same:
"POST /api/conversation_types/ HTTP/1.1" 403 58 "-" "Google-Cloud-Tasks"
Forbidden: /api/conversation_types/
Now, what I am not sure about is whether this 403 error is thrown by:
the two Google services trying to authorize each other, or
my API, because my API requires authentication/authorization. As in, a user needs to log in using their username and password, they will get a JWT token, and then they can call this API.
Referring to the documentation, I am not sure where I have to provide my API's username/password/JWT token. The documentation says:
To authenticate between Cloud Tasks and an HTTP Target handler, Cloud Tasks creates a header token. This token is based on the credentials in the Cloud Tasks Enqueuer service account, identified by its email address.
Do I need to add this service account email address to my API as a user? Do I use oidc or oauth?
Any comments or answers are much appreciated.
Update 1 - After removing auth from my API, Cloud Tasks is able to call the API successfully. So now, how do I authorize Cloud Tasks to be able to call my API with auth enabled?
Update 2 - Tried using OAuthToken and got error
400 Invalid url for HttpRequest.oauth_token. The hostname in the url must end with ".googleapis.com"
Looks like I will have to go with OIDC tokens only.
Update 3 - Google Docs says:
OIDC tokens are signed JSON Web Tokens (JWT) and are used primarily to assert identity and not to provide any implicit authorization against a resource, unlike OAuth tokens, which do provide access.
Does that mean OIDC tokens cannot be used for authorization? Because I am getting an authorization error here. Can't use OIDC, can't use OAuth, so what should I use then?
Update 4
As per the comments from @johnhanley, I have updated my API to accept a Bearer token for authentication. But even though my API is getting the token correctly, it is not able to authorize the token and gives an invalid token error (I verified this using a curl command; in both cases, whether the token format is incorrect or the token itself is incorrect, the API simply returns a 403 Forbidden error).
Can anyone tell me how to give the password (I can generate that in my API for the service account user email) to the service account, so that using that password and email ID as the username, OIDC can generate a token and use it to authenticate? Or am I going in the wrong direction?
Check the presence of the Cloud Tasks service agent (service-[project-number]@gcp-sa-cloudtasks.iam.gserviceaccount.com) in your IAM Admin Console page and grant it the roles/iam.serviceAccountTokenCreator role.
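A minimal sketch of granting that role from the command line (PROJECT_ID and PROJECT_NUMBER are placeholders for your own values):

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member serviceAccount:service-PROJECT_NUMBER@gcp-sa-cloudtasks.iam.gserviceaccount.com \
  --role roles/iam.serviceAccountTokenCreator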
Don't turn off authentication; it does work. Here are some steps to follow, in the form of bash shell commands with comments to explain the steps.
Set some variables
PROJECT_ID='my-gcp-project-123'
TASK_ENQUER_SVC_ACCOUNT='tasks-creator-svc'
TASKS_SRV_ACCOUNT='cloud-tasks-svc'
CLOUD_RUN_SRV_ACCOUNT='cloud-run-svc'
GPC_REGION='australia-southeast1'
Create a cloud task queue
gcloud tasks queues create my-task-queue
Create a service account for sending the tasks - you need cloudtasks.enqueuer and serviceAccountUser
gcloud iam service-accounts create $TASK_ENQUER_SVC_ACCOUNT \
--display-name "Service account for sending cloud tasks"
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$TASK_ENQUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/logging.logWriter"
# cloudtasks.enqueuer role is needed to create tasks
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$TASK_ENQUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/cloudtasks.enqueuer"
# service account user role is needed for permission iam.serviceAccounts.actAs
# so a task can be generated as service account $TASKS_SRV_ACCOUNT
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$TASK_ENQUER_SVC_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/iam.serviceAccountUser"
Create the service account for running the Cloud Run service
gcloud iam service-accounts create $CLOUD_RUN_SRV_ACCOUNT \
--display-name "Service account for running cloud run service"
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$CLOUD_RUN_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/logging.logWriter"
Create the service account that cloud-tasks will use - no special permissions needed
gcloud iam service-accounts create $TASKS_SRV_ACCOUNT \
--display-name "Service account for the cloud-tasks"
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member \
serviceAccount:$TASKS_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role "roles/logging.logWriter"
Deploy your Cloud Run service with ingress=all and no-allow-unauthenticated.
Cloud Tasks does not use serverless VPC connectors, so ingress needs to be all:
gcloud run deploy my-cloud-run-service \
--image=us.gcr.io/${PROJECT_ID}/my-docker-image \
--concurrency=1 \
--cpu=2 \
--memory=250Mi \
--max-instances=10 \
--platform=managed \
--region=$GPC_REGION \
--no-allow-unauthenticated \
--port=8080 \
--service-account=$CLOUD_RUN_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--ingress=all \
--project=${PROJECT_ID}
You must now give permission for TASKS_SRV_ACCOUNT to call the Cloud Run target service
gcloud run services add-iam-policy-binding my-cloud-run-service \
--member=serviceAccount:$TASKS_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
--role=roles/run.invoker \
--region=$GPC_REGION
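You can verify that the binding is in place (this just prints the service's IAM policy):

gcloud run services get-iam-policy my-cloud-run-service --region=$GPC_REGION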
You can then send your cloud tasks with authentication tokens as per the instructions at this step:
https://cloud.google.com/tasks/docs/creating-http-target-tasks#token
The service_account_email parameter is the TASKS_SRV_ACCOUNT created above.
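For a quick end-to-end check without writing any client code, here is a sketch of enqueueing an authenticated HTTP task with gcloud (the task name, queue name, and URL are placeholders; check gcloud tasks create-http-task --help for the exact flags in your CLI version):

gcloud tasks create-http-task my-test-task \
  --queue=my-task-queue \
  --url=https://my-cloud-run-service-xxxxx.a.run.app/api/endpoint \
  --oidc-service-account-email=$TASKS_SRV_ACCOUNT@${PROJECT_ID}.iam.gserviceaccount.com \
  --location=$GPC_REGION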
I am trying to use HashiCorp Vault for storing secrets for service account usernames and passwords. I am following this link https://www.vaultproject.io/docs/auth/userpass.html to create the user name and password.
My question here is that, as per this example, I am specifying the password "foo" when I curl this from an EC2 instance. We want to automate this, and the code will come from git:
curl \
--request POST \
--data '{"password": "foo"}' \
http://10.10.218.10:8200/v1/auth/userpass/login/mitchellh
Our policy is that we should NOT store any password in git... How do I run this curl and get authenticated to Vault without specifying the password for the user? Is this possible?
Why don't you want to use the aws-auth-method?
Also, if you are sure you want to use password authentication, I think you can do something like this:
Generate the user/password in Vault, store the user passwords in Vault, and set a policy allowing a specific ec2-instance to read a specific user's password (EC2 auth method);
In the ec2-instance, run consul-template, which will authenticate to Vault with an ec2-instance role;
This consul-template will generate the curl command with the specific user name and password;
Use this command.
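A minimal sketch of that flow, skipping consul-template and using the Vault CLI directly (the role name and secret path are hypothetical placeholders; the Vault address is taken from the question):

# Authenticate to Vault using the instance's AWS identity instead of a stored password
vault login -method=aws role=my-ec2-role
# Read the userpass password that was stored in Vault under a policy this role can access
PASSWORD=$(vault kv get -field=password secret/service-users/mitchellh)
# Log in via userpass without the password ever living in git
curl \
  --request POST \
  --data "{\"password\": \"$PASSWORD\"}" \
  http://10.10.218.10:8200/v1/auth/userpass/login/mitchellh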
I have a spring boot application deployed on pivotal cloud foundry.
I'm trying to tunnel (cf ssh) to that application in PCF from my Spring Boot application, but I am not able to find any API or client libraries to achieve it.
The actual CLI command to tunnel to PCF:
cf ssh -N -T -L 10001:localhost:10001 ms name
Any suggestions are welcome.
If you're trying to write Java code that would do the same thing as the cf ssh command, that should be possible. It's standard SSH, but with short-lived credentials so the trick will be generating credentials that you can use from your app.
Here's an example of using a standard SSH/SCP/SFTP client; note that ssh.bosh-lite.com will be your SSH domain, which you can see from cf curl /v2/info:
$ ssh -p 2222 cf:$(cf app app-name --guid)/0@ssh.bosh-lite.com
$ scp -P 2222 -oUser=cf:$(cf app app-name --guid)/0 my-local-file.json ssh.bosh-lite.com:my-remote-file.json
$ sftp -P 2222 cf:$(cf app app-name --guid)/0@ssh.bosh-lite.com
https://github.com/cloudfoundry/diego-ssh#cloud-foundry-via-cloud-controller-and-uaa
That said, you should be able to do something similar with any standard SSH Java library.
As mentioned above, the trick is in getting credentials. The username will be the format cf:application-guid/app-instance-number, which is easy, but the password needs to be generated with cf ssh-code, or the comparable call to the UAA API.
Ex: curl -vv -H 'Accept: application/json' -H "Authorization: $(cf oauth-token)" "https://uaa.run.pivotal.io/oauth/authorize?client_id=ssh-proxy&response_type=code"
This example uses curl to send the request and cf oauth-token to get a valid OAuth2 bearer token for the logged-in user. You could get a valid bearer token in a number of ways, including making direct API calls or using the cf-java-client. It just needs to be a valid token for the user that should perform the SSH action (i.e. the user that would be running cf ssh).
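Putting the pieces together, here is a shell sketch of the whole flow (assuming your app is named my-app, sshpass is installed, and ssh.bosh-lite.com is your SSH domain; cf ssh-code prints the one-time password the SSH proxy expects):

APP_GUID=$(cf app my-app --guid)
SSH_CODE=$(cf ssh-code)
# Forward local port 10001 to port 10001 inside app instance 0, like cf ssh -N -T -L
sshpass -p "$SSH_CODE" ssh -p 2222 -N -T -L 10001:localhost:10001 "cf:${APP_GUID}/0@ssh.bosh-lite.com"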
Hope that helps!