Mac here, in case it makes a difference. I am on 2 separate GCP/gcloud/GKE/Kubernetes projects and have two different gmails for each of them:
Project 1: flim-flam, where my email is myuser1#gmail.example.com (pretend it's a Gmail address)
Project 2: foo-bar, where my email is myuser2#gmail.example.com
I log into my myuser1#gmail.example.com account via gcloud auth login and confirm I am logged in as that account. For instance, I go to the GCP console and verify (in the UI) that I am in fact logged in as myuser1#gmail.example.com. Furthermore, when I run gcloud config configurations list I get:
NAME       IS_ACTIVE  ACCOUNT                    PROJECT    COMPUTE_DEFAULT_ZONE  COMPUTE_DEFAULT_REGION
flim-flam  True       myuser1#gmail.example.com  flim-flam
foo-bar    False      myuser2#gmail.example.com  foo-bar
From my flim-flam project, when I run kubectl delete ns flimflam-app I get permission errors:
Error from server (Forbidden): namespace "flimflam-app" is forbidden: User "myuser2#gmail.example.com" cannot delete resource "namespaces" in API group "" in the namespace "flimflam-app": requires one of ["container.namespaces.delete"] permission(s).
So gcloud thinks I'm logged in as myuser1 but kubectl thinks I'm logged in as myuser2. How do I fix this?
gcloud and kubectl share user identities, but their configuration lives in different files.
Running gcloud auth login does not update existing kubectl configurations. The former (on Linux and macOS) is stored in ${HOME}/.config/gcloud and the latter in ${HOME}/.kube/config.
I don't have a copy on hand, but if you check ${HOME}/.kube/config, it likely references the other Google account. You can either duplicate the users entry and reference the copy from your context, or edit the existing users entry.
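A quick way to compare the two identities (a sketch; the jsonpath output depends on your kubectl version):
gcloud config list account --format 'value(core.account)'
kubectl config view --minify -o jsonpath='{.contexts[0].context.user}'
The first command prints the account gcloud will use; the second prints the user entry referenced by the active kubectl context.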
Better yet, use gcloud container clusters get-credentials to update kubectl's configuration with the currently active gcloud user. This command updates ${HOME}/.kube/config for you.
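A minimal sketch (the cluster name and zone are placeholders for your flim-flam cluster):
gcloud config configurations activate flim-flam
gcloud container clusters get-credentials my-cluster --zone us-central1-a --project flim-flam
kubectl config current-context
After this, kubectl should act as myuser1#gmail.example.com.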
I have a simple etcd server running, and I am using the GitHub project etcd-keeper to visualize the data in etcd.
You can find the etcd-keeper project here: https://github.com/evildecay/etcdkeeper
I created the root user using etcdctl and everything works fine.
I also needed another user with limited view access, so I created a test-user account and added a read-only role with the relevant permissions.
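The user and role were created with commands along these lines (etcd v3 auth; the key prefix /myapp/ is a placeholder):
etcdctl user add test-user
etcdctl role add read-only
etcdctl role grant-permission read-only read /myapp/ --prefix=true
etcdctl user grant-role test-user read-only
etcdctl auth enable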
Everything is good, but when I try to access the etcd server using etcd-keeper, it doesn't allow me to log in with the test-user credentials unless I sign in with the root user first.
I don't want to share the root user credentials with the person who logs in as test-user; otherwise there is no point in creating a new user.
I get this warning as below:
Can someone please help me fix this problem? Is this error from the etcd server side? Has anyone used etcd-keeper before?
Thank you.
I am using the Azure CLI to perform a health check on some Azure VMs. The health checks are deployed through a Jenkins stage, using bash. The stage itself may take several hours to complete, during which several az vm run-command invocations are executed that all require valid credentials.
I also have several Jenkins pipelines that deploy different products and that are supposed to be able to run in parallel. All of them have the same health checks stage.
When I execute 'az login' to generate an auth token and 'az account set' to set the subscription, as far as I understand, this data is written to a profile file (~/.azure/azureProfile.json). That is all well and good, but whenever a parallel pipeline is triggered on this Jenkins container with a different Azure subscription, the profile file naturally gets overwritten with the other credentials. The first health check then fails at its next vm run-command execution, since it looks for a resource group that exists in a different subscription.
I was thinking of creating a new unique Linux user for each stage run and removing it once the run is done, so all pipelines would have separate profile files. This is a bit tricky, though, since this is a Jenkins Docker container using an Alpine image: I would need to create the users with each pipeline rather than in the Dockerfile, which brings me to a whole other drama of giving the Jenkins user sufficient privileges to create and delete users, and so on.
Also, since the session credentials are stored in the ~/.azure/accessTokens.json and azureProfile.json files by default, I could theoretically generate a different directory for each execution, but I couldn't find a way to alter those default file locations in the Azure docs.
What do you think is the best/easiest approach to work around this?
Setting the AZURE_CONFIG_DIR environment variable does the trick as described here.
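A sketch of how that can look in each pipeline stage (this assumes a service-principal login; the variable names are placeholders):
export AZURE_CONFIG_DIR="$(mktemp -d)"   # per-run CLI state instead of ~/.azure
az login --service-principal --username "$APP_ID" --password "$APP_SECRET" --tenant "$TENANT_ID"
az account set --subscription "$SUBSCRIPTION_ID"
# ... run the vm run-command health checks ...
rm -rf "$AZURE_CONFIG_DIR"               # clean up the per-run profile and token files
Each parallel pipeline then reads and writes its own profile and token files, so runs no longer clobber each other.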
I would try to keep az login as it is, remove az account set, and use the --subscription argument for each command instead.
You can see that ~/.azure/azureProfile.json contains tenantId and user information for each subscription and ~/.azure/accessTokens.json contains all tokens.
So, if you specify your subscription explicitly each time, you will not depend on a common user context.
I have my Account 1 for subscription xxxx-xxxx-xxxxx-xxxx, and Account 2 for subscription yyyy-yyyy-yyyy-yyyy and I do:
az login # Account 1
az login # Account 2
az group list --subscription "xxxx-xxxx-xxxxx-xxxx"
az group list --subscription "yyyy-yyyy-yyyy-yyyy"
and it works well under the same Unix user.
The scenario is as follows:
I have TeamCity set up to use AWS EC2 hosts running Windows Server 2012 R2 as build agents. In this configuration, the TeamCity agent service is running as SYSTEM. I am trying to implement FastBuild as our new compilation process. In order to use the distributed compilation functionality of FastBuild, the build agent host needs to have access to a shared network folder. Unfortunately, I cannot seem to give this kind of access from one machine to another.
To help further the explanation, I'll use named examples. The networked folder, C:\Shared-Folder, lives on a host named Central-Host. The build agent lives on Builder-Host. Everything is running Windows Server 2012 R2 on EC2 hosts that are fully network permissive to each other via AWS security groups. What I need is to share a directory from Central-Host so that Builder-Host can fully access it via a directory structure like this:
\\Central-Host\Shared-Folder
By RDPing into both hosts using the default Administrator account, I can very easily set up the network sharing and browse (while on Builder-Host) to the \\Central-Host\Shared-Folder location. I can also open up the command line and run:
type NUL > \\Central-Host\Shared-Folder\Empty.txt
with the result of an empty text file being created at that networked location.
The problem arises from the SYSTEM account. When I grab PsTools and use the command:
PSEXEC -i -s cmd.exe
I can test the commands that TeamCity will issue. Again, the service runs as SYSTEM, which, I need to emphasize, cannot be changed to a normal user due to other issues we have when running TeamCity agents under the User account type.
After much searching, I have discovered how to set up Active Directory services so that I can add Users and Computers from the domain, but after doing so I still face access-denied errors. I am probably missing something important and I hope someone here can help. I believe this problem will be considered "solved" when I can successfully run the "type NUL" command shown above.
This is not an answer to the permissions issue, but rather a way to avoid it. (I wanted to add this as a comment, but Stack Overflow won't let me.)
The shared network drive is used only for remote worker discovery. If you have a fixed list of workers, then instead of using worker discovery you can specify them explicitly in your config file as follows:
Settings
{
    .Workers =
    {
        'hostname1'    // specify hostname
        'hostname2'
        '192.168.0.10' // or IP
    }
    ...                // the other stuff that goes here
}
This functionality is not documented, as to date all users have wanted the automatic worker discovery. It is fine to use, however, and if it proves useful it can be elevated to a supported feature with just a documentation update.
I am getting the error "CWWIM4538E Multiple principals were found" at server startup. I know the cause: the local WAS admin account has a duplicate in the LDAP repository. I simply want to remove the local WAS user gracefully, offline, as the server won't come up. I tried playing around with changing the user ID info in fileregistry.xml and making the corresponding change in security.xml, but to no avail.
It seems that you've added LDAP into "federated repositories" and forgot to remove the "internalFileRepository", which contains wasadmin as well. You can fix this in profiles/dmgr/config/cells/myCell/wim/config/wimconfig.xml by removing the file repository from the realm.
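As a rough sketch (entry names vary by cell; the LDAP base entry below is a placeholder), the realm section of wimconfig.xml lists the participating base entries, and the file-registry entry is the one to remove:
<config:realms delimiter="/" name="defaultWIMFileBasedRealm" securityUse="active">
    <!-- remove this entry to take the internal file repository out of the realm -->
    <config:participatingBaseEntries name="o=defaultWIMFileBasedRealm"/>
    <!-- keep the LDAP base entry (placeholder) -->
    <config:participatingBaseEntries name="dc=example,dc=com"/>
</config:realms>
Back up the file before editing; this is the cell's security configuration.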
I'm trying to publish locally against a SQL Express instance to test the publish capability of a VS database project, and I'm running into an error where it's trying to create a user that already exists within the database. The user creation isn't wrapped in an IF EXISTS check, and I'm not seeing any setting to control or enforce this.
Specifically, it's throwing:
Creating [xyz\abc46518]...
(208,1): SQL72014: .Net SqlClient Data Provider: Msg 15063, Level 16, State 1, Line 1 The login already has an account under a different user name.
(208,0): SQL72045: Script execution error. The executed script:
CREATE USER [xyz\abc46518] FOR LOGIN [xyz\abc46518];
While other parts of the script have 'IF EXISTS' and 'IF NOT EXISTS' checks, this part of the script does not.
I'd like to have this as part of the script so we can control the users within the database: should someone grant access that isn't in source, it goes away when we deploy.
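For reference, the kind of guard I'm after would look something like this (a sketch; since the error says the login is already mapped to a differently named user, it checks by the login's SID rather than by name):
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE sid = SUSER_SID(N'xyz\abc46518'))
BEGIN
    CREATE USER [xyz\abc46518] FOR LOGIN [xyz\abc46518];
END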
I started using database projects; they are great, except for the user permissions.
I only have dba permissions to our databases, whereas handling logins is at the server level.
So when creating the database projects, I'd get the following code generated:
CREATE USER [UserName] FOR LOGIN [UserName]
which would error when I went to build the project.
Well, I wanted the users, but I didn't want the hassle of keeping track of post-deployment scripts, largely because they ruined my lovely TFS structure.
My solution, which is a bit of a hack: instead of creating a user, I just created a role with the same name:
CREATE ROLE [UserName]
AUTHORIZATION [dbo];
Now I can assign permissions to the "user" for my objects. (I know all access should be through roles, but it's not my database, so I'm happy to hack a fix.)
We never deploy roles ourselves, so it doesn't matter to us devs that it's a role rather than a user.
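For example, grants then read exactly as they would against a real user (the table name is a placeholder):
GRANT SELECT, INSERT ON dbo.SomeTable TO [UserName];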