I need to run some GAM scripts across two domains as a client is migrating; is this possible? I've been trying to find any documentation on setup but there doesn't seem to be any :/
Any help is appreciated!
Note: This is not a question about primary or secondary domains - if you need more information on primary/secondary switching I've found GAM3DirectoryCommands to be very helpful and descriptive!
It's not directly possible; however you can achieve this in the following manner:
create a CONFIGS folder (you can actually call this whatever makes sense to you), and a sub-folder for each domain
move your client_secrets.json, oauth2.txt and oauth2service.json files from your GAM install folder into your newly-created domain folder
run gam info domain and gam info user to see how this affected your gam install
to create a new domain setup, run gam create project
once complete, run gam info domain and gam info user again to see the difference
remember to save your new config to your CONFIGS folder for this new domain too!
when you've replaced the config files, run gam user <your_admin_account_for_the_current_domain> check serviceaccount before running any commands, otherwise you're likely to get some errors ;)
Now you can switch between domains by replacing client_secrets.json, oauth2.txt and oauth2service.json in your GAM install folder (and remember to check serviceaccount).
Obviously there are many more elegant ways to do this - but this will form the basis of your powershell script in any case ;P
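To make that file swap less manual, here's a minimal bash sketch of the idea above. The CONFIGS layout and GAM install path are assumptions; adjust them to your own folders (the extra arguments exist mainly to make the function easy to test):

```shell
#!/bin/bash
# Copy one domain's credential files into the GAM install folder.
# Assumes a layout like $HOME/CONFIGS/<domain>/ holding the three files.
switch_domain() {
  local domain="$1"
  local configs="${2:-$HOME/CONFIGS}"
  local gamdir="${3:-$HOME/gam}"
  cp "$configs/$domain/client_secrets.json" \
     "$configs/$domain/oauth2.txt" \
     "$configs/$domain/oauth2service.json" \
     "$gamdir/"
}

# Example: switch_domain example.com
# ...then remember: gam user <your_admin_account> check serviceaccount
```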
Hope this helps!
It is fairly easy, easier than the current answer in fact.
gam reads an OAUTHFILE environment variable which points to the access credentials (oauth2.txt by default). This is all that matters, as you can use the same GCP project for multiple domains.
So, all you need is an easy way to change the OAUTHFILE variable. I personally go with this gam alias (instead of the default alias in .bash_profile):
gam() { export OAUTHFILE=~/.local/share/gam/auth-$1.txt; shift; ~/.local/share/gam/gam "$@"; unset OAUTHFILE; }
So my gam syntax now is
gam <something> info domain
Note that above I use ~/.local/share/gam/... your paths might differ. Also you might want to create a $HOME/.gam-secrets directory and put them there.
Here <something> is a simple string to uniquely identify each domain (it will be used for constructing the OAUTHFILE in the alias).
DO NOT forget to give your GCP service account access to all the domains (https://admin.google.com/AdminHome?#OGX:ManageOauthClients).
I can't think of a shorter path to getting a multi-domain gam.
I've spent 3 days beating my head against this before coming here in desperation.
So long story short I thought I'd fire up a simple PHP site to allow moderators of a gaming group I'm in the ability to start GCP servers on demand. I'm no developer so I'm looking at this from a Systems perspective to find the simplest solution to do the job.
I fired up an Ubuntu 18.04 machine on GCP and set it up with the Google SDK, authorised it for access to the project, and was able to simply run gcloud commands, which worked fine. I had some issues with the PHP file calling the shell script to run the same commands, but with some testing I can see it's now calling the shell script no worries (it broadcasts wall "test" to console every time I click the button on the PHP page).
However, what does not happen is the execution of the gcloud command. If I manually run this shell script it starts up the instance no worries and broadcasts wall; if I click the button it broadcasts but that's it. I've set the files to have execution rights and I've even given the user nginx runs as sudo rights; putting sudo sh in front of the command in the PHP file also made no difference. Please find the bash script below:
#!/bin/bash
/usr/lib/google-cloud-sdk/bin/gcloud compute instances start arma3s1-prod --zone=australia-southeast1-b
wall "test"
Any help would be greatly appreciated, this coupled with an automated shut down would allow our gaming group to save money by only running the servers people want to play on.
Any more detail you want about the underlying system please let me know.
So I asked a PHP dev at work about this and in two seconds flat she pointed out the issue, and now I feel stupid. In /etc/passwd the www-data user had /usr/sbin/nologin, and after I fixed that and ran the script, gcloud wanted permission to write a log file to /var/www. Fixed those and it works fine. I'm not terribly worried about the page or even server being hacked and destroyed, I can recreate them pretty easily.
Thanks for the help though! Sometimes I think I just need to take a step back and get a fresh set of eyes on the problem.
When you launch a command while logged in, you have your account's access rights to the Google Cloud API, but the PHP account doesn't have those.
Even if you add the www-data user to root, that won't fix the problem; it may create some security issues but nothing more.
If you really want to do this, you should create a service account which only has rights on the compute instances inside your project, and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at its JSON key; this way your PHP script should have enough rights to do what you are asking of it.
Note that the issue with this method is that if you are hacked there is a chance the instance hosting your PHP could be deleted too.
You could also try to make a call to a prepared Cloud Function which will start the instance; this way, even if your instance is deleted, the Cloud Function would still be there.
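As a rough sketch of the service-account approach (the key path and filename here are hypothetical, not from your setup), the script the PHP page calls could authenticate as the limited service account before starting the instance:

```shell
#!/bin/bash
# Hypothetical location of the service account's JSON key - adjust to taste.
export GOOGLE_APPLICATION_CREDENTIALS="/var/www/keys/start-vm-sa.json"

# Guarded so the sketch degrades gracefully on machines without gcloud.
if command -v gcloud >/dev/null 2>&1; then
  gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"
  gcloud compute instances start arma3s1-prod --zone=australia-southeast1-b
fi
```

The key point is that the credentials belong to a service account with only the Compute permissions it needs, not to your own logged-in user.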
Being fairly new to the Linux environment, and not having local resources to inquire of, I would like to ask what the preferred method is of starting a process at startup as a specific user on an Ubuntu 12.04 system. The reasoning for such a setup is that these machines will be hosting an Input/Output Controller (IOC) in an industrial setting. If a machine fails or restarts, this process must start automatically..... every time.
My internet searches have provided two such areas to perform this task:
/etc/rc.local
/etc/init.d/
I ask for the specific advantages and disadvantages of each approach. I'll add that some of these machines are clients and some are servers, but all need to run an IOC, and preferably in the same manner.
Within whatever method above is deemed most appropriate, a bash shell script must be run as my specified user. It is my understanding that all startup processes are owned by root. So I question whether this is the best practice:
sudo -u <user> start_ioc.sh
If this is the case, then I believe it is required to create a file under:
/etc/sudoers.d/
Using:
sudo visudo -f <filename>
Where within this file you assign the appropriate rights and paths to the user. Most of my searches have shown this as the proper format:
<user or group> <host or IP>=(<user or group to run as>)NOPASSWD:<list of comma separated applications>
root ALL=(user)NOPASSWD:/usr/bin/start_ioc.sh
As final additional information, the ultimate reason for this approach (which may also be flawed logic) is that the IOC process needs access to network-attached storage (NAS). Allowing root access to the NAS is, I believe, a no-no, whereas the user can have the appropriate permissions assigned.
This may not be the best answer, but it is how I decided to complete this task:
Exactly as this post here:
how to run script as another user without password
I did use rc.local to initiate the process at startup. It seems to be working quite well.
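For reference, the relevant rc.local addition ended up looking roughly like this (the username and script path are placeholders, not my actual ones):

```
# In /etc/rc.local, before the final "exit 0" line:
su - iocuser -c '/usr/bin/start_ioc.sh &'
```

Since rc.local runs as root, su can switch to the unprivileged user without a password, and backgrounding the script keeps the rest of the boot sequence from blocking on it.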
I am currently making a CLI tool gem that downloads files from some service and saves them to a specified folder.
I was wondering, what would be the best way to store user settings for it?
For example, the folder to download the files to, the api access token and secret, that kind of thing.
I wouldn't want to ask a user for that input on every run.
I would read the API stuff from environment variables. Let the users decide if they want to enter it every time or set the variable in a .bashrc or .bash_profile file.
And I would ask for the download folder every time.
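For example, users could set hypothetical variables like these once in their shell profile, and the gem would then read them via ENV on each run (the names are made up for illustration):

```shell
# Set once in ~/.bashrc or ~/.bash_profile; the CLI reads them at startup.
export MYTOOL_API_TOKEN="abc123"
export MYTOOL_API_SECRET="s3cret"
```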
A PHP script of mine wants to write into a log folder; the resulting error is:
Unable to open the log file "E:\approot\framework\log/dev.log" for writing.
When I set the writing permissions for the WebRole User RD001... manually it works fine.
Now I want to set the folder permissions automatically. Is there an easy way to get it done?
Please note that I'm very new to IIS and the stuff around, I would appreciate precise answers, thx.
Short/Technical Response:
You could probably set permissions on a particular folder using full trust and a startup task. However, you'd need to account for a stateless OS and changing drive letters (possible, if not likely) in that script, which would make it difficult. Also, local storage is not persisted, so you'd have no way to ensure this data survives a reboot.
Recommendation: Don't write local, read below ...
EDIT: Got to thinking about this, and while I still recommend against it, there is a third option: you can allocate local storage in the service config, then access it from PHP using a DLL reference, and then you will have access to that folder. Please remember local storage is not persisted, so it's gone after a reboot.
Service Config for local:
http://blogs.mscommunity.net/blogs/dadamec/archive/2008/12/11/azure-reading-and-writing-with-localstorage.aspx
Accessing config from php:
http://phpazure.codeplex.com/discussions/64334?ProjectName=phpazure
Long / Detailed Response:
In Azure, you really are encouraged to approach things as a platform and not as "software on a server". What I mean there is that ideas such as "write something to a local log file" are somewhat incompatible with the cloud "idea". Depending on your usage, you could (and should) convert this script to output this data to some cloud-based or external storage, vs just placing it on the disk.
I would suggest modifying this script to leverage the PHP Azure SDK and write these log entries out to table or blob storage in Azure. If this sounds good, please provide the PHP and I can give an exact example.
The main reason for that (besides pushing the cloud idea) is that in Azure, you cannot assume the host machine ("role instance") will maintain an OS state, so while you can set some things such as folder permissions, you can't rely on them sticking that way. You have no real way to guarantee those permissions won't be reset when the fabric has to update your role and react to some lower level problem. For example, a hard-drive cage on the rack where your current instance lives could fail. If the failure were bad enough, the Fabric controller would need to rebuild your instance. When that happens, your code is moved to an entirely different server, so the need would arise to re-set those permissions. Also, depending on the changes, the E:\ could all of a sudden need to be the F:\ or X:\ drive and you wouldn't know.
It's much better to pretend (at some level) that your application is running "in Azure" and not "on a server in Azure", so you make no assumptions about the hosting environment. Anything you need outside of your code (data, logs, audits, etc.) should be stored somewhere you can control (Azure Storage, an external call-out, etc.).
When setting up a new Hudson/Jenkins instance I run into the problem that I have to manually provide all the email addresses for the SCM users.
We are using Subversion and I can't generate the mail addresses from the usernames. I've got a mapping, but I found no way to copy/edit it without using the GUI. With 20+ users that gets boring, and I'd like to just edit a file or something.
Maybe I'm missing some trivial thing like a scmusers.xml (which would totally do the job)?
I've got 2 solutions so far:
The users are stored in users/USERNAME/config.xml, which could be versioned / updated / etc.
Making use of the RegEx+Email Plugin, create one rule per user and version that file.
With 20+ users, setting up a list for the scm users is the way to go. Then when folks add/leave the group, you only have to edit the mailing list instead of the Hudson jobs. Also depending on your mailing list software, folks might be able to add and drop themselves from the list which would save you the time of maintaining it yourself in Hudson.
You might also want to look into the alias support of whatever email server your Hudson server is using. Let Hudson send out the emails it wants to using the SVN usernames, but then define aliases in your /etc/aliases file (or equivalent for your email server) that map the SVN usernames to the actual email addresses.
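As a sketch of that alias approach (the usernames and addresses here are made up), an /etc/aliases mapping might look like this; remember to run newaliases afterwards so the change takes effect:

```
# /etc/aliases - map SVN usernames to real mailboxes
jdoe: john.doe@example.com
asmith: alice.smith@example.com
```

That way Hudson keeps sending to the bare SVN usernames, and the mail server rewrites them, so nothing needs to change in the Hudson configuration when an address changes.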