Is it possible to identify the user (perhaps by Heroku email) who is running a one-off dyno (e.g. heroku run rails console)? The use case is attributing changes automatically to that user.
I think it is not possible, since a Heroku dyno is
(...) a lightweight Linux container that runs a single user-specified command.
If you check the currently logged-in user with $ id -u -n, you will get a different result on every execution, and there is no specific id or user reference in the environment variables (check with $ env) that would let you infer it.
You can instead pass the user name/id to the app as a command argument or an environment variable.
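For example, here is a minimal sketch of that workaround, assuming your app reads a hypothetical AUDIT_USER variable to attribute changes (heroku auth:whoami prints the email of the locally logged-in Heroku user):
# The command substitution runs locally, so the dyno receives the
# operator's email even though it keeps no record of who started it.
heroku run "AUDIT_USER=$(heroku auth:whoami) rails console"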
Related
I'm writing a bash script that will be run as part of a cron job by a specific user created for the purpose. I have an Azure account name and account key, and I would rather not have the account key in the script. How is this normally handled?
Right now I'm leaning towards storing it as an environment variable for the user.
In the end I went with a dedicated user for the job and stored the key as an environment variable for that user.
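A minimal sketch of that setup, with illustrative names (AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY are environment variables the Azure CLI recognizes for storage commands):
# ~/.azure_env of the dedicated job user, readable only by that user (chmod 600)
export AZURE_STORAGE_ACCOUNT="mystorageaccount"
export AZURE_STORAGE_KEY="<key>"

# crontab entry: cron does not load login profiles, so source the file first
0 2 * * * . "$HOME/.azure_env" && "$HOME/bin/nightly_job.sh"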
I am using the Azure CLI to perform a health check on some Azure VMs. The health checks are deployed through a Jenkins stage, using bash. The stage itself may take several hours to complete, during which several az vm run-command invocations are executed that all require the proper credentials.
I also have several Jenkins pipelines that deploy different products and that are supposed to be able to run in parallel. All of them have the same health checks stage.
When I execute az login to generate an auth token and az account set to set the subscription, as far as I understand, this data is written to a profile file (~/.azure/azureProfile.json). This is all well and good, but whenever I trigger a parallel pipeline on this Jenkins container with a different Azure subscription, the profile file naturally gets overwritten with the different credentials. That causes the other health check to fail at its next vm run-command execution, since it is looking for a Resource Group that exists in a different subscription.
I was thinking of potentially creating a new unique Linux user as part of each stage run and then removing it once it's done, so all pipelines would have separate profile files. This is a bit tricky though, since this is a Jenkins Docker container using an Alpine image, and I would need to create the users with each pipeline rather than in the Dockerfile, which brings me to a whole other drama: giving the Jenkins user sufficient privileges to create and delete users and so on.
Also, since the session credentials are stored in the ~/.azure/accessTokens.json and azureProfile.json files by default, I could theoretically generate a different directory for each execution, but I couldn't find a way to change those default file locations in the Azure docs.
What do you think is the best/easiest approach to work around this?
Setting the AZURE_CONFIG_DIR environment variable does the trick as described here.
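A minimal sketch of how that might look inside a Jenkins stage (WORKSPACE and BUILD_ID are standard Jenkins variables; the service-principal credentials are illustrative placeholders):
# Give each pipeline run its own isolated Azure CLI state
export AZURE_CONFIG_DIR="$WORKSPACE/.azure-$BUILD_ID"
mkdir -p "$AZURE_CONFIG_DIR"
az login --service-principal -u "$APP_ID" -p "$APP_SECRET" --tenant "$TENANT_ID"
az account set --subscription "$SUBSCRIPTION_ID"
# ... run the health checks ...
rm -rf "$AZURE_CONFIG_DIR"   # discard the per-run credentials afterwards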
I would try to keep az login as it is, remove az account set, and use the --subscription argument for each command instead.
You can see that ~/.azure/azureProfile.json contains tenantId and user information for each subscription and ~/.azure/accessTokens.json contains all tokens.
So, if you specify your subscription explicitly each time, you will not depend on the common user context.
I have my Account 1 for subscription xxxx-xxxx-xxxxx-xxxx, and Account 2 for subscription yyyy-yyyy-yyyy-yyyy and I do:
az login # Account 1
az login # Account 2
az group list --subscription "xxxx-xxxx-xxxxx-xxxx"
az group list --subscription "yyyy-yyyy-yyyy-yyyy"
and it works well under the same Unix user.
I have a .sh file that is stored in GCS, and I am trying to schedule it through Google Cloud Shell.
I can run the file with gsutil cat gs://miptestauto/baby.sh | sh, but I am not able to schedule it.
Following is my code for scheduling the file:
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh
The editor displays the message "auto saving...done", but the scheduled job is not displayed when I use crontab -l.
# contents of .sh file
#!/bin/bash
bq load --source_format=CSV babynames.baby_destination13 gs://testauto/yob2010.txt name:string,gender:string,count:integer
Can anyone please tell me how to schedule it using Google Cloud Shell?
I am not using Compute Engine/App Engine; I just want to schedule it using Cloud Shell.
thank you in advance :)
As per the documentation, Cloud Shell is intended for interactive use only. The Cloud Shell instances are provisioned on a per-user, per-session basis and sessions are terminated after an hour of inactivity.
In order for a daily cron job to run, the instance needs to be up and running at all times, but this doesn't happen with Cloud Shell, and I believe your jobs are not executing because of this.
When you start Cloud Shell, it provisions a f1-micro instance which is the same machine type you can get for free if you are eligible for “Always Free”. Therefore you can create a f1-micro instance, configure the cron job on it and leave it running so it can execute the daily job.
You can check free usage limits at https://cloud.google.com/compute/pricing#freeusage
You can also use the Cloud Scheduler product (https://cloud.google.com/scheduler), which is a serverless managed cron-like scheduler.
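As a rough illustration (the job name and target URL are hypothetical; Cloud Scheduler triggers HTTP endpoints, Pub/Sub topics, or App Engine handlers rather than running shell scripts directly):
# Create a job that hits an HTTP endpoint on the question's schedule
gcloud scheduler jobs create http baby-job \
    --schedule="16 17 * * *" \
    --uri="https://example.com/run-baby-load" \
    --http-method=GET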
To schedule a script you first have to create a project if you don’t have one. I assume you already have a project so if that’s the case just create the instance that you want for scheduling this script.
To create the new instance:
At the Google Cloud Platform Console, click on Products & Services, which is the icon with the four bars at the top left-hand corner.
On the menu, go to the Compute section, hover over Compute Engine, and then click on VM Instances.
Go to the menu bar above the instance section and there you will see a Create Instance button. Click it and fill in the configuration values that you want your new instance to have. The values that you select will determine your VM instance features. You can choose, among other values, the name, zone and machine type for your new instance.
In the Machine type section click the drop-down menu tab to select an “f1-micro instance”.
In the Identity and API access section, give access scope to the Storage API so that you can read and write to your bucket in case you need to do so; the default access scope only allows reading. Also enable the BigQuery API, which the bq load command in your script requires.
Once you have created the instance and have access to the bucket, just create your cron job inside your new instance: in the user account under which the cron job will execute, run crontab -e and edit the file to add the entry that will execute your baby.sh script. The crontab documentation should help you with this.
Please note that if you want to view output from your script, you may need to redirect it to your current terminal or to a log file.
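For instance, a minimal crontab entry on the new VM, reusing the schedule from the question and appending output to a log file so it can be inspected later (the log path is illustrative):
# m h dom mon dow command
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh >> "$HOME/baby.log" 2>&1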
Being fairly new to the Linux environment, and not having local resources to inquire of, I would like to ask what the preferred method is of starting a process at startup as a specific user on an Ubuntu 12.04 system. The reasoning for such a setup is that this machine (or machines) will be hosting an Input/Output Controller (IOC) in an industrial setting. If the machine fails or restarts, this process must start automatically... every time.
My internet searches have turned up two areas for performing this task:
/etc/rc.local
/etc/init.d/
I ask for the specific advantages and disadvantages of each approach. I'll add that some of these machines are clients and some are servers, but all need to run an IOC, and preferably in the same manner.
Whichever method above is deemed most appropriate, a bash shell script must be run as my specified user. It is my understanding that all startup processes are owned by root, so I question whether this is the best practice:
sudo -u <user> start_ioc.sh
If this is the case, then I believe it is required to create a file under:
/etc/sudoers.d/
Using:
sudo visudo -f <filename>
Where within this file you assign the appropriate rights and paths to the user. Most of my searches have shown this as the proper format:
<user or group> <host or IP>=(<user or group to run as>) NOPASSWD: <list of comma-separated applications>
root ALL=(user) NOPASSWD: /usr/bin/start_ioc.sh
As a final piece of information, the ultimate reason for this approach (which may also be flawed logic) is that the IOC process needs access to a network-attached storage (NAS) server. Allowing root access to the NAS is, I believe, a no-no, whereas the user can have the appropriate permissions assigned.
This may not be the best answer, but it is how I decided to complete this task:
Exactly as this post here:
how to run script as another user without password
I did use rc.local to initiate the process at startup. It seems to be working quite well.
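For reference, a minimal sketch of what that can look like (the user name and script path are illustrative; rc.local runs as root at the end of boot on Ubuntu 12.04, so no NOPASSWD rule is needed for this particular invocation):
#!/bin/sh -e
# /etc/rc.local -- launch the IOC as the dedicated user, backgrounded
# so that boot is not blocked
sudo -u iocuser /usr/local/bin/start_ioc.sh &
exit 0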
From a Windows Service running on a Terminal Server (in global space), we would like to be able to start a process running a Windows application in a specific user's Terminal Server session.
How does one go about doing this?
The scenario: the Windows service starts at boot time. After the user has logged into a Terminal Server user session, based on some criteria known only to the Windows service, the service wants to start a process in the user's session running a Windows application.
An example: we would like to display a 'Shutdown in 5 minutes' warning to the users. The Windows service would detect this condition and start up a process in each user session that runs the Windows app displaying the warning. And, yes, I know there are other ways of displaying a warning dialog; this is just the example, and what we want to do is much more invasive.
You can use CreateProcessAsUser to do this, but it requires a bit of effort. I believe the following steps are the basic required procedure (a rough sketch follows the list):
Get the user's session (WTSQuerySessionInformation).
Get a token for that user (WTSQueryUserToken).
Create a duplicate token for your use (DuplicateTokenEx).
Use the token to create an environment block (CreateEnvironmentBlock).
Launch the application with CreateProcessAsUser, using the block above.
You'll also want to make sure to clean up all of the appropriate handles, tokens, etc., after you've launched the process.
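For illustration, here is a rough C++ sketch of those steps, assuming the service runs as LocalSystem and already knows the target session ID and command line (names are illustrative and error handling is abbreviated):

#include <windows.h>
#include <wtsapi32.h>
#include <userenv.h>
#pragma comment(lib, "wtsapi32.lib")
#pragma comment(lib, "userenv.lib")

// Launch commandLine in the given Terminal Server session.
// WTSQueryUserToken requires the caller to run as LocalSystem.
bool LaunchInSession(DWORD sessionId, LPWSTR commandLine)
{
    HANDLE userToken = NULL;
    if (!WTSQueryUserToken(sessionId, &userToken))      // token of the logged-on user
        return false;

    // Duplicate into a primary token for CreateProcessAsUser.
    // (As a later reply notes, WTSQueryUserToken already returns a
    // primary token, so this step may be redundant.)
    HANDLE primaryToken = NULL;
    if (!DuplicateTokenEx(userToken, TOKEN_ALL_ACCESS, NULL,
                          SecurityImpersonation, TokenPrimary, &primaryToken)) {
        CloseHandle(userToken);
        return false;
    }

    LPVOID env = NULL;
    CreateEnvironmentBlock(&env, primaryToken, FALSE);  // the user's environment

    STARTUPINFOW si = { sizeof(si) };
    si.lpDesktop = (LPWSTR)L"winsta0\\default";         // the interactive desktop
    PROCESS_INFORMATION pi = { 0 };

    BOOL ok = CreateProcessAsUserW(primaryToken, NULL, commandLine,
                                   NULL, NULL, FALSE,
                                   CREATE_UNICODE_ENVIRONMENT, env, NULL,
                                   &si, &pi);
    if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }

    if (env) DestroyEnvironmentBlock(env);
    CloseHandle(primaryToken);
    CloseHandle(userToken);
    return ok != FALSE;
}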
Really late reply but maybe somebody will find this helpful.
You can use PsExec to launch an application on a remote (or local) server inside a specified session by using the following command:
psexec \\COMPUTER_NAME -i SESSION_ID APPLICATION_NAME
Where SESSION_ID indicates the session id in which to launch the application.
You will need to know what sessions are active on the server and which session id maps to which user login. The following thread provides a nice code sample for this exact problem: How do you retrieve a list of logged-in/connected users in .NET?
Late reply, but in the answer above the DuplicateTokenEx step is not necessary, since WTSQueryUserToken already returns a primary token.