We have tons of services hosted in Pivotal Cloud Foundry (PCF). What is the best way to restart all microservices in a given space via scripts? Another challenge is that we want to start the services in order and introduce a delay between each service start-up. We are doing this manually right now, but it is tedious and time-consuming. Please suggest how we can automate it. Thanks.
I would suggest you write a wrapper script, either in PowerShell or Bash, that first executes cf apps in your space.
That command lists the app names. Read that output and have your wrapper script execute cf restart <APP_NAME> in a loop.
This will restart all the apps in your space.
Regarding introducing a delay between service start-ups: I would suggest setting up a CI/CD process to deploy your apps (a Jenkins pipeline, for example), which gives you complete control over your deployments.
To realize what @Arun suggested:
for i in $(cf apps | grep '[0-9]/[0-9]' | cut -d" " -f1); do cf restart "$i"; done
To have a certain order, you could maintain a text file containing the app names in the correct order:
first_app
second_app
(...)
If the file is called app_order.txt, then to restart in order with a delay of, say, 30 seconds between apps, do:
while read -r i; do cf restart "$i"; sleep 30; done < app_order.txt
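Putting the two together, a minimal wrapper script might look like this (a sketch; it assumes app_order.txt as above and that you are already logged in and targeting the right org/space with cf target):

#!/bin/bash
# Restart the apps listed in app_order.txt (one name per line), in order,
# waiting DELAY seconds between restarts.
DELAY=30

while read -r app; do
    [ -z "$app" ] && continue          # skip blank lines
    echo "Restarting $app ..."
    if ! cf restart "$app"; then
        echo "Restart of $app failed, aborting." >&2
        exit 1
    fi
    sleep "$DELAY"
done < app_order.txt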
I have a .sh script in Google Cloud Shell that automates my instance shutdown, backup, and restart sequence.
How can I run a .sh script on a schedule (i.e. daily) in the simplest possible way?
I am not a professional, and I've read all the documentation about cron jobs, Cloud Scheduler, Cloud Tasks... but none of the examples in the documentation seem to cover the simple task I need, and I don't yet have enough knowledge to understand these services in detail. I just need a simple pointer to understand how to connect my Google Cloud Shell .sh script with some form of scheduler, as in:
Run a .sh script that I have in my virtual 5 GB Cloud Shell storage on a schedule (daily at a specific time), instead of manually opening the Google Cloud Console and using a terminal to run the same script with the "bash" command?
I just need to know what I need to learn/do to make this happen.
Thank you for your input.
That's not going to be possible. Cloud Shell will shut down shortly after you close the tab. For this you'll need an actual VM; you can run one for free using the e2-micro instance.
https://cloud.google.com/free/docs/gcp-free-tier/#compute
Once you have that set up, you can learn crontab to run your script on a schedule.
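As a sketch, once your script is on the VM (say at /home/you/backup.sh; the path, log file, and time are illustrative), a crontab entry to run it daily at 02:30 could look like this:

# open the current user's crontab for editing
crontab -e

# then add a line of the form: minute hour day-of-month month day-of-week command
30 2 * * * /bin/bash /home/you/backup.sh >> /home/you/backup.log 2>&1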
I have to restart hundreds of scripts for at least 20+ users whenever the server restarts for any reason. I want to come up with a single script that triggers all scripts/programs under all users (without root privileges).
Is it possible to do so on Linux? If not, what is the best approach?
Thanks,
You can make a boot script in /etc/init.d/rc3.d that uses su - someuser -c somescript for the different users.
If you want the users to control which scripts they run, you can give them control over somescript (for example $HOME/bin/startme.sh).
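A minimal sketch of such a boot script (the user names and the startme.sh path are illustrative):

#!/bin/bash
# Boot script: start each user's own startup script at boot.
# Runs as root, so su can switch users without a password.
for user in alice bob carol; do
    if [ -x "/home/$user/bin/startme.sh" ]; then
        su - "$user" -c '$HOME/bin/startme.sh' &
    fi
done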
If you are worried about the scripts always staying up, consider another approach: do not start them at server restart, but put a monitoring script in each user's crontab. Every minute (or every 5 minutes, or every hour) this monitor script can check the running scripts and restart them when needed.
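As a sketch of that approach, each user could add a line like this to their own crontab (crontab -e):

# check every 5 minutes that my worker is still running
*/5 * * * * $HOME/bin/monitor.sh

with monitor.sh along these lines (my_worker.sh is a placeholder for the user's actual script):

#!/bin/bash
# Restart my_worker.sh if it is no longer running.
if ! pgrep -u "$(whoami)" -f my_worker.sh > /dev/null; then
    "$HOME/bin/my_worker.sh" &
fi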
I have a .sh file that is stored in GCS. I am trying to schedule the .sh file through Google Cloud Shell.
I can run the same file using the gsutil cat gs://miptestauto/baby.sh | sh command, but I am not able to schedule it.
Following is my crontab entry for scheduling the file:
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh
It displays the message "auto saving...done", but the scheduled job is not displayed when I run crontab -l.
# contents of baby.sh
#!/bin/bash
bq load --source_format=CSV babynames.baby_destination13 gs://testauto/yob2010.txt name:string,gender:string,count:integer
Can anyone please tell me how to schedule it using Google Cloud Shell?
I am not using Compute Engine/App Engine; I just want to schedule it using Cloud Shell.
thank you in advance :)
As per the documentation, Cloud Shell is intended for interactive use only. The Cloud Shell instances are provisioned on a per-user, per-session basis and sessions are terminated after an hour of inactivity.
In order to schedule a daily cron job, the instance needs to be up and running all the time, but this doesn't happen with Cloud Shell, and I believe your jobs are not running because of this.
When you start Cloud Shell, it provisions an f1-micro instance, which is the same machine type you can get for free if you are eligible for "Always Free". Therefore you can create an f1-micro instance, configure the cron job on it, and leave it running so it can execute the daily job.
You can check free usage limits at https://cloud.google.com/compute/pricing#freeusage
You can also use the Cloud Scheduler product https://cloud.google.com/scheduler which is a serverless managed Cron like scheduler.
To schedule a script you first have to create a project if you don't have one. I assume you already have a project, so if that's the case, just create the instance that you want to use for scheduling this script.
To create the new instance:
In the Google Cloud Platform Console, click on Products & Services, which is the icon with the four bars in the top left-hand corner.
In the menu, go to the Compute section, hover over Compute Engine, and then click on VM Instances.
Go to the menu bar above the instance section, where you will see a Create Instance button. Click it and fill in the configuration values that you want your new instance to have. The values that you select will determine your VM instance's features. You can choose, among other values, the name, zone, and machine type for your new instance.
In the Machine type section click the drop-down menu tab to select an “f1-micro instance”.
In the Identity and API access section, give access scope to the Storage API so that you can read and write to your bucket in case you need to do so; the default access scope only allows you to read. Also enable BigQuery API.
Once you have the instance created and access to the bucket, just create your cron job inside your new instance: in the user account under which the cron job will execute, run crontab -e and edit the file to run the cron job that will execute your baby.sh script. The crontab documentation should help you with this.
Please note that if you want to view output from your script, you may need to redirect it to your current terminal.
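For example, a crontab entry that runs the script daily at 17:16 (the time from your original entry) and captures its output in a log file might look like this (the log path is illustrative):

# run daily at 17:16; append stdout and stderr to a log file
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh >> /home/you/baby.log 2>&1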
I'm having issues keeping the queue:work command running on my server. I tried nohup, but as soon as I close the terminal (which times out every 5 minutes or so no matter what I've tried), the process goes away.
I thought about running a cron script to kick off the nohup command, but that runs in a jailshell too, so I have no way of seeing whether the process is still running from a previous cron, and I don't want potentially 20k copies of this running because it tries to kick off every minute.
I also don't have access to install software to install Supervisord.
So, what other solutions can I use to ensure this stays running?
EDIT: I contacted my host's support, and it pretty much looks like there are no real alternatives for me. I think I'm going to have to set this project up on Linode, or rework things to avoid queuing tasks.
It seems that the problem resides in the shell configuration, because the ps command is restricted to show only child processes.
The solution is to ask your hosting provider (or change it yourself if allowed) to set this variable:
SHELL="/bin/bash"
This simple fix allowed me to have the function working properly.
Now my Kernel.php looks as follows:
$command = "ps faux | grep queue:work";
exec($command, $task_list);
// Each worker is duplicated in the ps output, and the grep pipeline itself
// appears as two more lines, so (count / 2) - 1 yields the worker count.
$running_process = (count($task_list) / 2) - 1;
if ($running_process < 1) {
    $schedule->command('queue:work --queue=high,low --tries=3')
             ->everyMinute();
} else if ($running_process > 5) {
    // If too many workers are active, restart everything to avoid overload.
    $schedule->call(function () {
        Artisan::call('queue:restart');
    })->everyMinute();
}
This code makes sure that at least one worker is always running, and at the same time forces a restart if more than 5 workers are active.
I have a bot machine (controlled via a mobile device) which connects to the server and fetches information from it via ssh, shell scripts, OS commands, SQL queries, etc., then feeds that information over the (private) internet.
I want to disallow multiple simultaneous connections to the server from the bot machine ONLY; there are other machines that connect to the server which must not be affected.
Suppose
Client A, from his mobile, accesses the bot machine (via a webpage), and the bot machine connects to the server (1st session). If this connection takes 5 minutes, then during that period the bot machine will be creating, querying, deleting, appending, updating, etc.
If in the meantime (say 2 minutes after the 1st session started) Client B, from his mobile, accesses the bot machine (via the webpage), the bot machine connects to the server again (2nd session), which will conflict with the 1st session and create havoc...
Limitations
First of all, I do not want to edit any settings on the SERVER WHATSOEVER.
I do not want to edit the webpage/mobile side, etc.
I already know about the lock-file method for preventing parallel shell scripts, and it is implemented at the script level, but what about OS commands and other things that are not in a bash script?
My thoughts
What I thought was: whenever we create a connection to the server, it creates a process (e.g. SSH) which is viewable in ps -fu OSUSER, so by applying a unique id/tag/name to our connection we could identify whether a session is active or not. This would be checked as soon as the bot connects to the server. But I do not know how to do that... Please also suggest any further information about this.
Also, is there a way to identify whether an existing process is hung, or when a process started and how much time has elapsed?
Maybe try using limits.conf to enforce a hard limit of 1 login for the user/group.
You might need a periodic cron job to check for and remove any stale logins.
Locks/mutexes are hard to get right and add complexity. limits.conf is a standard feature of most Unix/Linux systems and should be more reliable, emphasis on should...
A similar question was raised here:
https://unix.stackexchange.com/questions/127077/number-of-ssh-connections-on-a-single-linux-machine
Details here:
http://linux.die.net/man/5/limits.conf
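For example, assuming the bot machine logs in as a dedicated account called botuser (an illustrative name), a line like this in /etc/security/limits.conf would cap that account at one concurrent login without affecting the other machines:

# /etc/security/limits.conf
# <domain>   <type>   <item>       <value>
botuser      hard     maxlogins    1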
I assume you have a single login for the ssh account and that it runs a script on login.
Add something like this to that login script:
#!/bin/bash
LOCK_FILE="/tmp/sshlock"

# Remove the lock if the session is interrupted.
trap 'rm -f "$LOCK_FILE"; exit' SIGHUP SIGINT SIGTERM

# If the lock file exists and is younger than 30 minutes, another
# session is still active, so refuse this login.
if [ -e "$LOCK_FILE" ] && \
   [ $(( $(date +%s) - $(stat -L --format %Y "$LOCK_FILE") )) -lt $((30*60)) ]; then
    exit 0
fi

touch "$LOCK_FILE"
When the processes that the ssh login calls end, delete the $LOCK_FILE.
The trap statement is an important part of this way of locking; please do use it.
The "30*60" is a 30-minute timeout, thanks to the answer to this question: How can I tell if a file is older than 30 minutes from /bin/sh?