I am using the Azure CLI to perform a health check on some Azure VMs. The health checks are deployed through a Jenkins stage, using bash. The stage itself may take several hours to complete, during which several 'az vm run-command' invocations are executed, all of which require valid credentials.
I also have several Jenkins pipelines that deploy different products and that are supposed to be able to run in parallel. All of them have the same health checks stage.
When I execute 'az login' to generate an auth token and 'az account set' to set the subscription, as far as I understand, this data is written to a profile file (~/.azure/azureProfile.json). That is all well and good, but whenever I trigger a parallel pipeline on this Jenkins container that uses a different Azure subscription, the profile file naturally gets overwritten with the other credentials. This causes the first health check to fail at its next vm run-command execution, since it is now looking for a Resource Group that only exists in the other subscription.
I was thinking of creating a new unique Linux user as part of each stage run and removing it once the stage is done, so that each pipeline gets its own profile file. This is a bit tricky though, since this is a Jenkins docker container using an alpine image, and I would need to create the users from each pipeline rather than in the dockerfile, which brings me to a whole other drama: giving the Jenkins user sufficient privileges to create and delete users, and so on.
Also, since the session credentials are stored by default in the ~/.azure/accessTokens.json and azureProfile.json files, I could theoretically generate a different directory for each execution, but I couldn't find a way in the Azure docs to change those default file locations.
What do you think is the best/easiest approach to work around this?
Setting the AZURE_CONFIG_DIR environment variable does the trick as described here.
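For illustration, a rough sketch of how that could look inside the Jenkins stage; the service-principal variables and the BUILD_TAG-based directory name are assumptions, not something from the question:
export AZURE_CONFIG_DIR="${WORKSPACE}/.azure-${BUILD_TAG}"   # per-pipeline config dir (assumed naming)
mkdir -p "$AZURE_CONFIG_DIR"
az login --service-principal -u "$AZ_CLIENT_ID" -p "$AZ_CLIENT_SECRET" --tenant "$AZ_TENANT_ID"
az account set --subscription "$AZ_SUBSCRIPTION_ID"
az vm run-command invoke --resource-group "$RG" --name "$VM_NAME" --command-id RunShellScript --scripts "uptime"
rm -rf "$AZURE_CONFIG_DIR"   # drop the isolated profile when the stage finishes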
I would try to keep az login as it is, remove az account set, and use the --subscription argument for each command instead.
You can see that ~/.azure/azureProfile.json contains tenantId and user information for each subscription and ~/.azure/accessTokens.json contains all tokens.
So, if you specify your subscription explicitly each time, you will not depend on the common user context.
I have my Account 1 for subscription xxxx-xxxx-xxxxx-xxxx, and Account 2 for subscription yyyy-yyyy-yyyy-yyyy and I do:
az login # Account 1
az login # Account 2
az group list --subscription "xxxx-xxxx-xxxxx-xxxx"
az group list --subscription "yyyy-yyyy-yyyy-yyyy"
and it works well under the same Unix user.
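Applied to the health-check stage, the same idea would look roughly like this (the resource group and VM names below are placeholders, not from the question):
az vm run-command invoke --subscription "xxxx-xxxx-xxxxx-xxxx" --resource-group myHealthRG --name myVM --command-id RunShellScript --scripts "uptime"
az vm run-command invoke --subscription "yyyy-yyyy-yyyy-yyyy" --resource-group otherRG --name otherVM --command-id RunShellScript --scripts "uptime"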
What is the best way to authenticate to Google Cloud Storage Bucket from a shell script (To be scheduled to run daily/hourly) using a service account?
I have gone through the below link, but I still have some doubts regarding the login process.
How to use Service Accounts with gsutil, for uploading to CS + BigQuery
Are the below mentioned login steps a one-time process? If yes how does the login work for subsequent executions?
My understanding is that the below commands write content to the .boto file, which is then used in subsequent executions?
But according to the below link, it writes to a separate json file inside .config/gcloud?
Does gsutil support creating boto files with service account info?
In such a case, what is the use of a .boto file? And why/when do we need to pass it via BOTO_PATH/BOTO_CONFIG?
In gsutil (standalone), log in using the below steps:
gsutil config -e
Optionally -o to output to a file other than ~/.boto
gsutil as part of gcloud
gcloud auth activate-service-account SERVICE_ACCOUNT@DOMAIN.COM --key-file=/path/key.json --project=PROJECT_ID
What is the best way to prevent intervention from other scripts?
For example, let us assume we have shell script S1 connecting to project P1 to upload data to Bucket B1. If another shell script, say S2, is triggered at exactly the same time connecting to Project P2 and uploading to Bucket B2, will it cause an issue?
What is the best practice to avoid such issues?
Is it possible to limit the login to only the time of script execution?
Say, the script is scheduled using cron to run at 10:00 AM UTC and the script completes its execution by 10:30 AM UTC.
Is it possible to prevent any actions in the time between 10:30 and the next run?
In other words, is it possible to log out and then log in programmatically without intervention?
Environment: CentOS
The principle of the BOTO file is exactly the answer to your question 2. You can have 2 credentials that have access to 2 different buckets. Create 2 boto files and use the correct one for each script.
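A rough sketch of what that separation could look like; the boto file paths, key files, and bucket names are made up for the example:
gsutil config -e -o /etc/boto/p1.boto   # boto file for the P1 service account
gsutil config -e -o /etc/boto/p2.boto   # boto file for the P2 service account
BOTO_CONFIG=/etc/boto/p1.boto gsutil cp data1.csv gs://bucket-b1/   # script S1
BOTO_CONFIG=/etc/boto/p2.boto gsutil cp data2.csv gs://bucket-b2/   # script S2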
For the 3rd question, it's possible to set a condition on the bucket access.
Select a bucket, go to the right-hand side info panel, and click on add credential.
Then add your credential and your role, and click on add condition (you must enable uniform bucket-level access on the bucket for that feature to be available).
Then define a condition that allows the permission after 10 AM and before 11 AM in your timezone (you don't have minute granularity).
Mac here, in case it makes a difference. I am on 2 separate GCP/gcloud/GKE/Kubernetes projects and have two different gmails, one for each:
Project 1: flim-flam, where my email is myuser1@gmail.example.com (pretend it's a gmail)
Project 2: foo-bar, where my email is myuser2@gmail.example.com
I log into my myuser1@gmail.example.com account via gcloud auth login and confirm I am logged in as that account. For instance, I go to the GCP console and verify (in the UI) that I am in fact logged in as myuser1@gmail.example.com. Furthermore, when I run gcloud config configurations list I get:
NAME       IS_ACTIVE  ACCOUNT                    PROJECT    COMPUTE_DEFAULT_ZONE  COMPUTE_DEFAULT_REGION
flim-flam  True       myuser1@gmail.example.com  flim-flam
foo-bar    False      myuser2@gmail.example.com  foo-bar
From my flim-flam project, when I run kubectl delete ns flimflam-app I get permission errors:
Error from server (Forbidden): namespace "flimflam-app" is forbidden: User "myuser2@gmail.example.com" cannot delete resource "namespaces" in API group "" in the namespace "flimflam-app": requires one of ["container.namespaces.delete"] permission(s).
So gcloud thinks I'm logged in as myuser1 but kubectl thinks I'm logged in as myuser2. How do I fix this?
gcloud and kubectl share user identities, but their configurations are stored in different files.
Using gcloud auth login does not update (!) existing (!) kubectl configurations. The former (on Linux) are stored in ${HOME}/.config/gcloud and the latter in ${HOME}/.kube/config.
I don't have a copy on hand but, if you check ${HOME}/.kube/config, it likely references the other Google account. You can either duplicate the users entry and reference it from the context, or edit the existing users entry.
Better yet, use gcloud container clusters get-credentials to update kubectl's configuration with the currently active gcloud user. This command updates ${HOME}/.kube/config for you.
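For example, something along these lines (the cluster name and zone are assumptions):
gcloud config configurations activate flim-flam
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project flim-flam
kubectl config current-context   # should now point at a context that uses the flim-flam credentials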
I'm trying to setup terraform to handle creation of fine-grained user permissions, and have been able to create:
Cognito User Pools, Identity Pools
IAM Roles, Permissions
What I'm struggling with is how to link them together. I have two types of user:
Standard User
Manager
As such, I have found two ways that I could use to correctly hook up the correct IAM policy upon login:
Method 1 - Create a custom attribute, and use "Choose Role With Rules" to set a rule that assigns an IAM policy based on the attribute
Method 2 - Create Cognito Groups, and link users and the required IAM policy to each group.
The problem, as far as I can see, is that Terraform doesn't currently support either of those cases, so I need to find a workaround. So, my question is essentially: how do I get around Terraform's lack of support in some areas?
I've seen some projects that use [Ruby, Go, etc.] to make up for some of the limitations, but I don't quite understand where to start or what the best option for my needs is. I haven't been able to find much on Google yet (possibly https://github.com/infrablocks/ruby_terraform). Does anyone have a good guide or resource I could use to get started?
If Terraform does not support something, you can use the local-exec provisioner to execute commands after resource creation. For example, you could use the AWS CLI to add a custom attribute:
resource "aws_cognito_identity_pool" "main" {
# ...
provisioner "local-exec" {
command = "aws cognito-idp add-custom-attributes --user-pool-id ${aws_cognito_identity_pool.main.id} --custom-attributes <your attributes>"
}
}
local-exec docs
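For reference, a hypothetical shape for that CLI call; the attribute name and user pool ID below are made up:
aws cognito-idp add-custom-attributes --user-pool-id us-east-1_EXAMPLE --custom-attributes Name=role,AttributeDataType=String,Mutable=true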
We are using IAM permissions for groups and users with great success for S3, SQS, Redshift, etc. The IAM for S3 in particular gives a lovely level of detail by path and bucket.
I am bumping into some head scratching when it comes to EC2 permissions.
How do I create a permission that allows an IAM user to:
create up to n instances
do whatever he/she wants on those instances only (terminate / stop / describe)
...and makes it impossible for him/her to affect our other instances (change termination / terminate / etc.) ?
I've been trying Conditions on tag ("Condition": {"StringEquals": {"ec2:ResourceTag/purpose": "test"}}), but that means that all of our tools need to be modified to add that tag at creation time.
Is there a simpler way?
Limiting the number of instances an IAM user can create is not possible (unfortunately). All you have is a limit on the number of instances in the entire account.
Limiting permissions to specific instances is possible, but you have to specify the permissions for each instance-ID, using this format:
arn:aws:ec2:region:account:instance/instance-id
More information is available here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-iam-actions-resources.html
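As an illustration, a policy scoped to specific instance IDs could be attached with the AWS CLI roughly like this; the user name, policy name, region, account ID, and instance IDs are placeholders:
aws iam put-user-policy --user-name test-user --policy-name own-instances-only --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances", "ec2:TerminateInstances"],
    "Resource": [
      "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0",
      "arn:aws:ec2:us-east-1:123456789012:instance/i-0fedcba9876543210"
    ]
  }]
}'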
I'm trying to create a scheduled task in a Group Policy that runs a script that lives on the domain periodically.
I understand that storing the password in GP is a no-no, so I am avoiding that. However, it seems there is no way to deploy a scheduled task that can run with access to the network.
I tried the "System" account, that failed with access denied. I also tried using the "Do not store password" setting with a named account, which also prevents network access.
The scripts live in \\domain\netlogon land and have full read access granted to authenticated users.
Is there any way to accomplish this without having to manually install the task on every server and provide a named service account?
This is a Windows 2012 server domain with about 20 servers.
I ended up getting the "System" account to work correctly.