How to schedule a shell script using Google Cloud Shell? - shell

I have a .sh file stored in GCS. I am trying to schedule the .sh file through Google Cloud Shell.
I can run the file using the gsutil cat gs://miptestauto/baby.sh | sh command, but I am not able to schedule it.
Following is my code for scheduling the file:
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh
It displays the message "auto saving...done", but the scheduled job is not displayed when I run crontab -l.
# contents of .sh file
#!/bin/bash
bq load --source_format=CSV babynames.baby_destination13 gs://testauto/yob2010.txt name:string,gender:string,count:integer
Can anyone please tell me how to schedule it using Google Cloud Shell?
I am not using Compute Engine/App Engine. I just want to schedule it using Cloud Shell.
Thank you in advance :)

As per the documentation, Cloud Shell is intended for interactive use only. The Cloud Shell instances are provisioned on a per-user, per-session basis and sessions are terminated after an hour of inactivity.
In order to schedule a daily cron job, the instance needs to be up and running all the time, but this doesn't happen with Cloud Shell, and I believe your jobs are not running because of this.
When you start Cloud Shell, it provisions an f1-micro instance, which is the same machine type you can get for free if you are eligible for “Always Free”. Therefore you can create an f1-micro instance, configure the cron job on it, and leave it running so it can execute the daily job.
You can check free usage limits at https://cloud.google.com/compute/pricing#freeusage
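If you prefer the command line over the console, creating such an instance could look roughly like the sketch below; the instance name and zone are placeholders, not values from the question.

# Illustrative only: create an always-on f1-micro instance to host the cron job,
# with scopes that allow reading/writing Cloud Storage and using BigQuery.
gcloud compute instances create cron-runner \
    --machine-type=f1-micro \
    --zone=us-central1-a \
    --scopes=storage-rw,bigquery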

You can also use the Cloud Scheduler product (https://cloud.google.com/scheduler), which is a serverless, managed, cron-like scheduler.
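Note that Cloud Scheduler does not run a shell script by itself; it triggers an HTTP endpoint, a Pub/Sub topic, or an App Engine handler on a cron schedule. A minimal sketch, assuming you already have an endpoint that performs the load (the job name and URI below are placeholders):

# Hypothetical example: call an HTTP endpoint every day at 17:16.
gcloud scheduler jobs create http run-baby-load \
    --schedule="16 17 * * *" \
    --uri="https://example.com/run-baby-load"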

To schedule a script you first have to create a project if you don’t have one. I assume you already have a project so if that’s the case just create the instance that you want for scheduling this script.
To create the new instance:
In the Google Cloud Platform Console, click on Products & Services, which is the icon with the four bars at the top left-hand corner.
In the menu, go to the Compute section, hover over Compute Engine, and then click on VM Instances.
Go to the menu bar above the instance section and there you will see a Create Instance button. Click it and fill in the configuration values that you want your new instance to have. The values that you select will determine your VM instance features. You can choose, among other values, the name, zone, and machine type for your new instance.
In the Machine type section, click the drop-down menu to select an “f1-micro” instance.
In the Identity and API access section, give access scope to the Storage API so that you can read and write to your bucket in case you need to do so; the default access scope only allows you to read. Also enable the BigQuery API.
Once you have the instance created and access to the bucket, just create your cron job inside your new instance: in the user account under which the cron job will execute, run crontab -e and edit this file to add the entry that will execute your baby.sh script. The following documentation link should help you with this.
Please note, if you want to view output from your script you may need to redirect it to your current terminal.
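For reference, a crontab entry inside that VM could look something like the line below, reusing the schedule and bucket path from the question; the log path is only an illustration.

# Run daily at 17:16 and keep the output in a log file for later inspection.
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh >> /tmp/baby.log 2>&1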

Related

How to Authenticate to gsutil in a shell script using service account

What is the best way to authenticate to a Google Cloud Storage bucket from a shell script (to be scheduled to run daily/hourly) using a service account?
I have gone through the below link, but I still have some doubts regarding the login process.
How to use Service Accounts with gsutil, for uploading to CS + BigQuery
Are the below-mentioned login steps a one-time process? If yes, how does the login work for subsequent executions?
My understanding is that the below commands write content to the .boto file, which is used in subsequent executions?
But according to the below link, it writes to a separate JSON file inside .config/gcloud?
Does gsutil support creating boto files with service account info?
In such a case, what is the use of a .boto file, and why/when do we need to pass it via BOTO_PATH/BOTO_CONFIG?
In standalone gsutil, log in using the below steps:
gsutil config -e
(Optionally, use -o to output to a file other than ~/.boto.)
In gsutil as part of gcloud:
gcloud auth activate-service-account SERVICE_ACCOUNT@DOMAIN.COM --key-file=/path/key.json --project=PROJECT_ID
What is the best way to prevent interference from other scripts?
For example, let us assume we have a shell script S1 connecting to project P1 to upload data to bucket B1. If another shell script, say S2, is triggered at exactly the same time, connecting to project P2 and uploading to bucket B2, will it cause an issue?
What is the best practice to avoid such issues?
Is it possible to limit the login to only the time of script execution?
Say, the script is scheduled using cron to run at 10:00 AM UTC and the script completes its execution by 10:30 AM UTC.
Is it possible to prevent any actions in the time between 10:30 and the next run?
In other words, is it possible to log out and then log in programmatically without intervention?
Environment: CentOS
The principle of the BOTO file is exactly the answer to your question 2. You can have 2 credentials that have access to 2 different buckets. Create 2 BOTO files and use the correct one for each script.
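As an illustration of that idea (the file names and bucket names below are made up), each script can point gsutil at its own BOTO file through the BOTO_CONFIG environment variable:

# In script S1: use the credentials that can access bucket B1.
export BOTO_CONFIG=/home/user/.boto_p1
gsutil cp data.csv gs://bucket-b1/

# In script S2: use the credentials that can access bucket B2.
export BOTO_CONFIG=/home/user/.boto_p2
gsutil cp data.csv gs://bucket-b2/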
For the 3rd question, it's possible to set a condition on the bucket access.
Select a bucket, go to the info panel on the right-hand side, and click on add credential.
Then add your credential and your role, and click on add condition (you must set the uniform permission definition on the bucket to have that feature available).
Then define a condition to allow the permission after 10 AM in your timezone and before 11 AM in your timezone (you don't have minute granularity).
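As a rough sketch, such a condition is written as a CEL expression on the request time; the timezone is yours to choose, and the hours below just mirror the 10-11 AM window mentioned above:

request.time.getHours("Europe/Paris") >= 10 && request.time.getHours("Europe/Paris") < 11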

How can I run a .sh script on Google Cloud Shell on schedule?

I have a .sh script in Google Cloud Shell that automates my instance shutdown, backup, restart sequence.
How can I run a .sh script on a schedule (e.g. daily) in the simplest possible way?
I am not a professional and I've read all the documentation about cron jobs, Cloud Scheduler, Cloud Tasks... but none of the examples in the documentation appear to detail the simple task that I need, and I do not have enough knowledge yet to understand these multiple services in detail. I just need a simple pointer to understand how to connect my Google Cloud Shell .sh script with any form of scheduler, as in:
Run a .sh script that I have in my virtual 5 GB Cloud Shell storage on a schedule (daily at a specific time), instead of manually opening the Google Cloud Console and using a terminal to run the same script with the "bash" command?
I just need to know what I need to learn/do to make this happen.
Thank you for your input.
That's not going to be possible. Cloud Shell will turn off shortly after you close the tab. For this you'll need to use an actual VM. You can run one for free using the e2-micro instance.
https://cloud.google.com/free/docs/gcp-free-tier/#compute
Once you have this set up, you can learn crontab to run your script on a schedule.
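Once the script is copied onto that VM, a crontab entry along these lines would run it daily; the time, script path, and log file below are placeholders.

# Run the shutdown/backup/restart script every day at 03:00 and keep a log.
0 3 * * * /bin/bash /home/user/backup.sh >> /home/user/backup.log 2>&1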

If I'm creating 10 compute instances through a script on GCP, will those instances get created sequentially or parallely?

I'm trying to create 10 instances on the GCP console through a shell script. Is the allocation of resources to these instances done in parallel (all start getting created at once) or sequentially (creation of instance #2 starts only once resources have been allocated to instance #1)?
The Google Cloud Console does not support shell scripts.
If you mean that you are using the SDK CLI gcloud, you have the option --async to not wait for the API command to complete. Otherwise, the commands run one at a time.
The shell itself waits for the program to complete, which does not mean that the instance has been created. There is some overhead in launching gcloud. I do not recommend using the & to launch multiple gcloud commands at the same time.
One last item, check your quota to make sure that you can launch 10 instances in the zone(s) that you desire.
Compute Engine Quotas
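As a sketch of the difference, where the instance names and zone are placeholders: gcloud accepts several instance names in a single call, and --async returns as soon as each API request is sent instead of waiting for the operation to finish.

# Create several instances with one gcloud invocation.
gcloud compute instances create instance-1 instance-2 instance-3 \
    --zone=us-central1-a --machine-type=e2-micro

# Or loop, returning immediately after each create request with --async.
for i in $(seq 1 10); do
  gcloud compute instances create "instance-$i" --zone=us-central1-a --async
done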

send argument/command to already running Powershell script

Until we can implement our new HEAT SM system, I need to create some workflows to ease our currently manual user administration processes.
I intend to use PowerShell to execute the actual tasks, but I need to use VBS to send an argument to PS from an app.
My main question on this project is: can an argument be sent to an already running PowerShell process?
Example:
We have a PS menu app that we will launch in the AM and leave running all day.
I would love for there to be a way to allow PS to listen for commands/args and take action on them as they come in.
The reason I want to do it this way is that one of the tasks needs to disable Exchange features, and the script will need to establish a connection to a remote PSSession, which, in our environment, can take between 10-45 seconds. If I were to invoke the command directly from HEAT (call-logging software), it would lock up while also preventing the tech from moving on to another case until the script terminates.
I have searched all over for similar functionality, but I fear that this is not possible with PS.
Any suggestions?
I had already set up a script to follow this recommendation, but I was curious to see if there was a more seamless approach.
As suggested in one of the comments by @Tony Hinkle:
I would have the PS script watch for a file, and then have the VBScript create a file with the arguments. You would either need to start it on another thread (since the menu is waiting for user input), or just use a separate script that in turn starts another instance of the existing PS script with a param used to specify the needed action.

Amazon AMI Windows instance + "user-data"?

Is it possible to send "user-data" to a Windows instance at launch? I know that Amazon allows sending it to *nix-based instances, but I can't find any information for Windows.
Thanks for the help,
Cyril
Amazon updated EC2Config on Windows AMIs on April 11, 2012 to support scripting through user-data for batch scripts, and in May 2012 to support PowerShell scripts.
<script></script> tags will create and execute a batch file.
<powershell></powershell> tags will create and execute a powershell script.
Note that by default it only runs at instance initialization, so if you want it to execute each time you boot, you have to run the EC2ConfigServiceSettings and tell it to allow this always.
I am not aware of a direct way to do it. But you can create a start-up script inside your instance that will allow you to read user-data each time you reboot your system. Inside your user-data, you can configure what's going to run only once or every single time your instance loads.
