I have a system service, foo, that is started and stopped via /usr/sbin/service foo restart. It in turn appears to be controlled by a shell script, /etc/init.d/foo.
How can I create a "pre-start" hook, so that I can run an extra shell script before this service starts? In this case, the pre-start hook fetches extra configuration from a cloud provider's metadata catalog and injects it into a configuration file that foo needs in order to start properly.
I have considered modifying /etc/init.d/foo directly, which would work, but that would complicate the frequent patch-level upgrades I expect to pick up via apt-get upgrade. I want to avoid a solution that requires re-establishing the hook after every upgrade.
A second option is to create a fooWrapper service, remove foo from all runlevels, and then just start/stop fooWrapper. The implementation of that script would just be my secret sauce plus invoking /etc/init.d/foo. The trouble with that is again package upgrades: foo might re-insert itself into the various runlevels, and I would then end up running two conflicting copies.
Your setup suggests that you use sysv init and not yet systemd. If this is the case, read on. Otherwise ignore this answer.
In general, you will have a link like S20foo in /etc/rc.d/rc3.d. The 20 and the 3 may be different for you. Normally, you would create a script /etc/init.d/pre_foo that fetches your extra config and link it to /etc/rc.d/rc3.d/S19pre_foo. This will start pre_foo before foo.
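For illustration, here is a minimal sketch of such a pre_foo script, with a hypothetical metadata URL and target config file (adjust both for your provider and for what foo actually reads):

#!/bin/sh
### BEGIN INIT INFO
# Provides:          pre_foo
# Required-Start:    $network
# Required-Stop:
# Default-Start:     3
# Default-Stop:
# Short-Description: Fetch extra config for foo before it starts
### END INIT INFO

case "$1" in
  start)
    # Hypothetical metadata endpoint and config path.
    curl -s http://169.254.169.254/latest/meta-data/extra-config > /etc/foo/extra.conf
    ;;
  stop|restart|force-reload|status)
    # Nothing to do here; this script only prepares config for foo.
    ;;
esac
exit 0

And the link that orders it before foo in runlevel 3:

ln -s /etc/init.d/pre_foo /etc/rc.d/rc3.d/S19pre_foo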
I have an Ansible script to update and maintain my WildFly installation. One of the tasks in this setup is managing the MySQL driver, and in order to update that driver I first have to disable the application that uses it before I can replace it and set up all my datasources anew.
My CLI script starts with the following lines:
if (outcome == success) of /deployment=my-app-1.1.ear:read-resource
deployment disable my-app-1.1.ear
end-if
My problem is that I am very dependent here on the actual name of the application, and that name can change over time since it contains version information.
I tried the following:
set foo=`ls /deployment`
deployment disable $foo
It did not work: when I look at foo, I see that it is not my-app-1.1.ear but ["my-app-1.1.ear"] -- so I feel I might be going in the right direction, even though I have not got it right yet.
I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Go SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not provide functionality for running a bash script on a specific instance (unlike the Azure SDK, for example).
Options I've found:
The Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
Add an instance-level public SSH key, establish an SSH connection, and run the script using a Go SSH client.
Problems:
Obviously, the startup script would require an instance reboot, and that is not possible in my use case.
SSH might also be problematic if the instance is not running an SSH daemon or the SSH port is not open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be set to no), so the script might run as a non-privileged user, making this option unsuitable as well.
I should probably note that I am not authorised to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login); I can only use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instances to additional risk.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from outside. The best solution is to deploy a web server (a REST API) on the VM that calls the script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM, started with a startup script, that watches a custom metadata key, say by checking it every second.
When the metadata is updated, the daemon performs the action. You could even have the metadata contain the script to run, along with its parameters. At the end of the run, the daemon clears the metadata.
So now, to run your bash script, you call the setMetadata API. It's not out of the box, but it gives you something close to what you expected.
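A minimal sketch of such a daemon in bash, assuming a custom metadata key named run-command (the key name and the log path are my own choices, not part of the Compute Engine API):

#!/bin/bash
# Runs on the VM, e.g. launched from a startup script.
# wait_for_change=true makes the metadata server block until the value changes.
URL="http://metadata.google.internal/computeMetadata/v1/instance/attributes/run-command"

while true; do
  # -f makes curl return nothing on HTTP errors such as a not-yet-existing key.
  CMD=$(curl -sf -H "Metadata-Flavor: Google" "$URL?wait_for_change=true")
  if [ -n "$CMD" ]; then
    # Execute whatever was pushed via setMetadata and keep a log of the output.
    bash -c "$CMD" >> /var/log/metadata-runner.log 2>&1
  fi
  sleep 1
done

From outside, you trigger a run with the setMetadata call from the Go SDK, or for a quick test with gcloud compute instances add-metadata my-vm --metadata run-command='uptime > /tmp/last-run'.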
Think of GCP as providing the virtual machine infrastructure: compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I hear you saying that you want to run a bash script within the VM. That is outside the governance of GCP; GCP only affects the operation and existence of the environment. If what you want is to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was some GCP API that, when called, would run a script within the Compute Engine. I'm going to say that there is no such API.
Being fairly new to the Linux environment, and not having local resources to ask, I would like to know the preferred method of starting a process at startup as a specific user on an Ubuntu 12.04 system. The reason for such a setup is that these machines will be hosting an Input/Output Controller (IOC) in an industrial setting. If a machine fails or restarts, this process must start automatically, every time.
My internet searches have turned up two areas for performing this task:
/etc/rc.local
/etc/init.d/
I ask for the specific advantages and disadvantages of each approach. I'll add that some of these machines are clients and some are servers, but all need to run an IOC, and preferably in the same manner.
Within whichever method above is deemed most appropriate, a bash shell script must be run as my specified user. It is my understanding that all startup processes are owned by root, so I question whether this is the best practice:
sudo -u <user> start_ioc.sh
If this is the case, then I believe it is required to create a file under:
/etc/sudoers.d/
Using:
sudo visudo -f <filename>
Where within this file you assign the appropriate rights and paths to the user. Most of my searches have shown this as the proper format:
<user or group> <host or IP> = (<user or group to run as>) NOPASSWD: <comma-separated list of applications>
root ALL=(user) NOPASSWD: /usr/bin/start_ioc.sh
For final additional information, the ultimate reason for this approach (which may be flawed logic) is that the IOC process needs access to network attached storage (NAS). Allowing root access to the NAS is, I believe, a no-no, whereas the user can have the appropriate permissions assigned.
This may not be the best answer, but it is how I decided to complete this task:
Exactly as this post here:
how to run script as another user without password
I did use rc.local to initiate the process at startup. It seems to be working quite well.
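For reference, a minimal sketch of what that looks like in /etc/rc.local; the user name and script path are placeholders:

#!/bin/sh -e
# rc.local runs as root at the end of each multi-user runlevel,
# so sudo -u needs no password here; start the IOC script as the service user.
sudo -u iocuser /usr/bin/start_ioc.sh &

exit 0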
I have a script that pulls some data from a web service and populates a mysql database. The idea is that this runs every minute, so I added a cron job to execute the script.
However, I would like the ability to occasionally suspend and re-start the job without modifying my crontab.
What is the best practice for achieving this? Or should I not really be using crontab to schedule something that I want to occasionally suspend?
I am considering an implementation where a global variable is set, and checked inside the script. But I thought I would canvas for more apt solutions first. The simpler the better - I am new to both scripting and ruby.
If I were you, I would have the script look at a static switch, as you suggested with your global variable, but test for the existence of a file instead of a global variable. That seems cleaner to me.
Another solution is to have a service that does not use crontab but calls your script every minute. This service would live in /etc/init.d (or /etc/rc.d, depending on your distribution) like the other services and have start, stop and restart commands like any other service.
These two solutions can be mixed:
1. The service only creates or deletes the switch file, and the crontab line stays active.
2. The service edits the crontab directly, like this, but I prefer not editing the crontab from a script, and the technique described in that article is not atomic (if you change your crontab between the script's read and its write, your change is lost).
So in your place I would go for 1; a minimal sketch of the switch-file check follows.
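A sketch of that switch-file check, with hypothetical file names (the pause file and the Ruby script path are placeholders):

#!/bin/bash
# Wrapper that cron runs every minute; it skips the real job while the pause file exists.
PAUSE_FILE=/var/tmp/populate_db.paused

if [ -e "$PAUSE_FILE" ]; then
  exit 0
fi

# Run the real job (the Ruby script from the question).
exec /usr/bin/ruby /home/me/populate_db.rb

Suspend with touch /var/tmp/populate_db.paused and resume with rm /var/tmp/populate_db.paused, without ever touching the crontab.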
Is there a way to invoke a CakePHP console shell on a server without shell access? I have written a shell for performing some one-off (and hence not a cron task) post-DB-upgrade tasks.
I could always just copy the logic into a temporary controller, call its actions via HTTP and then delete it, but I was wondering if there was a better way to go about it.
It seems that this is a one-off script that you would typically want to run after DB updates, right?
If that's the case, you can make it part of your "DB update script".
If you use anything like Capistrano, you can include it there too.
In all cases, if you don't want to touch the shell, I agree that having a controller call the console code (or any PHP file running exec(), as mentioned previously) would do the trick.
Also, if you want to run it just once and have it scheduled, don't forget that you have the "at" command (instead of cron), which will run it at the scheduled time (see http://linux.about.com/library/cmd/blcmdl1_at.htm).
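For example, a sketch of queuing the shell once with at; the application path and shell name are placeholders, and on CakePHP 1.x the console lives under cake/console/cake instead:

# Queue a single run of the CakePHP shell five minutes from now.
echo "/var/www/app/Console/cake post_upgrade" | at now + 5 minutes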
Hope it helps,
Cheers,
p.s.: if it's a console shell and you don't want to run it from the console, then just don't make it a console shell.
I have to agree with elvy. Since this is something that you need to do once in a while after other events have happened, why not just create an 'admin' area for your application and stick code for that update in there?
You may be able to use PHP's exec() function to call it from any old PHP script.
http://www.php.net/exec