Declare an init script to load after its dependencies, or declare run levels for it - init.d

I created an init script for WildFly 10, but after a reboot it loads before the applications it depends on.
How can I change this so that it loads as the last process?
I know that I can change run levels with update-rc.d, but what is the correct way to do so?

If the dependencies also run as init.d scripts, you can change the ordering by modifying the number before the script name in the symlinks in /etc/rcX.d/.
E.g. on one of my servers, /etc/rc5.d/ contains:
K01apache-htcacheclean S01dns-clean S01lxcfs S01rsyslog
To ensure your script starts last, prefix it with a number higher than its dependencies' (or S99 to ensure it is the very last script to run).
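For example, assuming the WildFly init script is installed as /etc/init.d/wildfly and the machine boots into runlevel 5 (both are assumptions, adjust to your setup), the start symlink could simply be renumbered so it sorts after everything else:

cd /etc/rc5.d
sudo mv S01wildfly S99wildfly   # S01wildfly stands for whatever the existing link is called

On older sysvinit setups, update-rc.d can also create the links with an explicit sequence number (e.g. sudo update-rc.d wildfly defaults 99); newer dependency-based boot systems may instead derive the ordering from the Required-Start LSB headers inside the script.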

Related

Append a parameter to a command in a file and run the appended command

I have the following command in a file called $stat_val_result_command.
I want to add the -Xms1g parameter at the end of the file so that it looks like this:
<my command in the file> -Xms1g
However, I want to run this command after appending to it. I am running this in a workflow system called Nextflow. I tried many things, including the following, but it does not work. Check the script section, which runs in Bash by default:
process statisticalValidation {
    input:
    file stat_val_result_command from validation_results_command.flatten()
    output:
    file "*_${params.ticket}_statistical_validation.txt" into validation_results
    script:
    """
    echo " -Xms1g" >> $stat_val_result_command && ```cat $stat_val_result_command```
    """
}
Best to avoid appending to or manipulating input files localized in the workdir as these can be, and are by default, symbolic links to the original files.
In your case, consider instead exporting the JAVA_TOOL_OPTIONS environment variable. This might or might not work for you, but might give you some ideas if you have control over how the scripts are being generated:
export JAVA_TOOL_OPTIONS="-Xms1g"
bash "${stat_val_result_command}"
Also, it's generally better to avoid localizing and running scripts like this. It might be unavoidable, but usually there are better options. For example, third-party scripts, like your Bash script, could be handled more simply:
Grant the execute permission to these files and copy them into a
folder named bin/ in the root directory of your project repository.
Nextflow will automatically add this folder to the PATH environment
variable, and the scripts will automatically be accessible in your
pipeline without the need to specify an absolute path to invoke them.
This of course assumes you can control and parameterize the process that creates your Bash scripts.
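As a rough sketch of that convention (the script name below is hypothetical), the Bash script would be committed to the pipeline repository, made executable, and placed under bin/, after which any process script block can call it by name:

mkdir -p bin
cp statistical_validation.sh bin/          # hypothetical script name
chmod +x bin/statistical_validation.sh
# a process script block can now invoke statistical_validation.sh without a path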

Run each command individually but distribute them in multiple Makefiles

I wanted every environment to have its own Makefile (local, dev, production).
So I created 3 directories and a Makefile for each directory.
Then I created a common Makefile which includes all the child Makefiles.
I was able to include the child commands in the parent file, but the issue is:
if I run make local, it executes all the commands inside Makefile.local.
Instead, I want each command to be run individually:
when I call make local local_command (or even just make local_command), only local_command should be executed.
You likely want something like:
TOP_LEVEL_TARGS := dev local prod

$(TOP_LEVEL_TARGS):
	$(MAKE) -f config/local/Makefile.$@ $(filter-out $(TOP_LEVEL_TARGS), $(MAKECMDGOALS))
This will invoke a sub-make with all the command goals of the original make invocation (minus the top-level targets).
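With that rule in place, the invocation from the question should behave roughly as follows (assuming Makefile.local defines a local_command target; depending on your make version you may also need a do-nothing catch-all rule so the top-level make does not complain about the unknown local_command goal):

make local local_command    # the sub-make runs only local_command from Makefile.local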

Running a command before every execution of a script/program

I'm using Bash on Windows and what I'm missing is a good IDE. I've tried running a GUI app following a tutorial, but it doesn't work well every time and it's frustrating.
What I want is to run a script that copies the files from a folder on Windows into a folder on the Unix subsystem, but only the files that are different, and the same in the other direction (if I change something from the terminal, it should be updated in the Windows folder). I want that script to run every time I call ./SOME_EXECUTABLE in that folder. To check whether a file was changed or not I can use hg status, because I'm mostly working with Mercurial.
Is there a way to do this without making a separate shell script that would combine those calls? Something like a macro.
You could use a function in .bashrc to achieve this, and have it run the script that copies the files across as needed. Assuming you have the script in place, say copyScript.sh, you can add a function like:
function copyOnExecute() {
    ./copyScript.sh
    ./EXECUTABLE
}
This way you can call the function copyOnExecute every time you want to run your executable.
You could add an alias to .bash_aliases such as:
alias execute="./copyScript.sh && ./SOME_EXECUTABLE"
You can replace ./ with your script's path.
This runs the executable after your script has finished, and only if it finished successfully.
It would be easier for you than writing a function in one of the .rc files.
If you wrote a function called execute, you might later forget where it was defined; using an alias helps you avoid this.
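For reference, a minimal sketch of what copyScript.sh itself might look like, using rsync rather than hg status to sync only files that differ (both folder paths are placeholders; /mnt/c is the default WSL mount point for the C: drive):

#!/bin/bash
WIN_DIR="/mnt/c/Users/me/project"    # hypothetical Windows-side folder
UNIX_DIR="$HOME/project"             # hypothetical Unix-side folder

# rsync -au copies only files that are missing or newer on the source side,
# once in each direction, so both folders end up with the latest versions
rsync -au "$WIN_DIR/" "$UNIX_DIR/"
rsync -au "$UNIX_DIR/" "$WIN_DIR/"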

How can I execute a script on AIX after my own .profile/.kshrc takes effect?

background:
My colleagues and I all log in to an AIX server as the user "root". After login, everyone loads their own .profile/.kshrc/.netrc etc., then starts their work and executes their own shell scripts.
problem:
When I run a script from crontab, it fails because some commands in it are only defined in my own environment.
The failure remains even if I add lines to source the .profile/.kshrc/.netrc in the script. It seems the script just cannot pick up my usual environment settings.
question:
How can I edit the script so that the task runs with my own environment?
A script run by cron should set its own PATH to ensure it's starting from a known situation.
Make an inventory of all external commands used by the script, list the directories where they live, then add a line to the top of the script:
PATH=/first/dir:/second/dir
Etc...
In most cases you want to include /usr/bin and/or /bin; for scripts run as root, /usr/sbin is another favourite.
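A minimal sketch of such a header at the top of the cron script (the last directory and the command are hypothetical placeholders for whatever your script actually uses):

#!/bin/sh
# explicit PATH so the script does not depend on anyone's interactive login environment
PATH=/usr/bin:/bin:/usr/sbin:/opt/mytools/bin
export PATH

my_command --report    # hypothetical command living in one of the directories above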

Should you change the current directory in a shell script?

I've always mentally regarded the current directory as something for users, not scripts, since it is dependent on the user's location and can be different each time the script is executed.
So when I came across the Java jar utility's -C option I was a little puzzled.
For those who don't know, the -C option is used before specifying a file/folder to include in a jar. Since the path to the file/folder is replicated in the jar, the -C option changes directories before including the file:
in other words:
jar -C flower lily.class
will make a jar containing the lily.class file, whereas:
jar flower/lily.class
will make a flower folder in the jar which contains lily.class
For a jar-ing script I'm making, I want to use Bourne wildcards (folder/*), but that would make using -C impossible since it only applies to the next immediate argument.
So the only way to use wildcards is to run from the current directory; but I still feel uneasy about changing and using the current directory in a script.
Is there any downside to using the current directory in scripts? Is it frowned upon for some reason perhaps?
I don't think there's anything inherently wrong with changing the current directory from a shell script. Certainly it won't cause anything bad to happen, if taken by itself.
In fact, I have a standard script that I use for starting up a Java-based server, and the very first line is:
cd `dirname $0`
This ensures that the rest of the commands in the script are executed in the directory that contains the script file itself (useful when a single machine is hosting multiple server instances), regardless of where the shell script was actually invoked from. Without changing the current directory in the script, it would only work correctly if the user remembered to manually cd into the corresponding directory before running the script.
In this case, performing the cd operation from within the script removes a manual step from the server startup/shutdown process, and makes things slightly less error-prone as a result.
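A slightly more defensive form of that first line, quoting against paths that contain spaces and bailing out if the cd fails (otherwise the behaviour is the same):

cd "$(dirname "$0")" || exit 1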
So as with most things, there are legitimate uses for this sort of thing, and I'm sure there are some questionable ones as well. It really depends upon what's most appropriate for your specific use case. Which is something I can't really comment on... I always just let Maven build my JARs for me.
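As for the wildcard case in the question, one common way to keep the directory change contained is to scope it to a subshell, so the rest of the script keeps its original working directory (a sketch; flower.jar and the layout are illustrative):

(cd flower && jar cf ../flower.jar *)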
