run second ksh script after first ksh is done - bash

I have 3 scripts for example: first.ksh, second.ksh, third.ksh.
I run all of those scripts one by one, manually: when the first is done I run the second, and then the third. The scripts take a long time to run, and doing this manually is time-consuming because it requires me to sit in front of the computer.
How can I write a script which runs these scripts one after another, starting each one automatically as soon as the previous one finishes?

Assuming the scripts are in the current working directory,
./first.ksh ; ./second.ksh ; ./third.ksh
where ; is the separator for a "sequential list".
The topic you're looking for is "shell programming". There are many Unix shells, and they share the basic features defined by POSIX and the Single UNIX Specification. Most of them have additional features as well as online documentation.
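If you also want the chain to stop when one of them fails, a small wrapper (a sketch, assuming the three .ksh files from the question are executable and in the current directory) can be run once or dropped into cron:
#!/bin/ksh
# Hypothetical wrapper: run the scripts in order, stop at the first failure.
for s in ./first.ksh ./second.ksh ./third.ksh
do
    "$s" || { echo "$s failed, stopping." >&2; exit 1; }
done
The || short-circuit means third.ksh never starts if second.ksh exits non-zero; use plain ; (as above) if every script should run regardless of the previous one's exit status.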

Related

Call UniVerse Command from Shell

I have a UniVerse (Rocket U2) system, and want to be able to call certain UniVerse/TCL commands from a shell script. However, whenever I run the uv binary, it seems to stop execution of the rest of the shell script.
For example, if I run:
/u2/uv/bin/uv
It starts a UniVerse session. The next line of the script (RUNPY run_tests.py) is meant to be executed in the TCL environment, but it is never passed to TCL. I have tried passing string parameters to the uv binary to be executed, but that doesn't appear to do anything.
Is there a way to call UniVerse/TCL commands from a UNIX/Shell environment?
You can type this manually or put it into a shell script. I have not run into any issues with this paradigm, but your choice of shell could theoretically affect it. You certainly want either to be in the directory of the account you want to execute it in, or to cd to it in the script.
/u2/uv/bin/uv <<start
RUNPY run_tests.py
start
Good Luck.
One thing to watch out for is if you have a LOGIN paragraph or something else that runs automatically to start your application (which is really common), then you need to find a way to bypass this for non-interactive users.
https://groups.google.com/forum/#!topic/comp.databases.pick/B2hzuXq3X9A mentions
IF OCONV(#TTY,'MCU')='PHANTOM' THEN ABORT
In UD, I kick off scripts from unix as a phantom to a) capture the log output in PH and b) end the process if extra input is requested, rather than hanging around. In UD that's
$echo "PHANTOM COUNT VOC" | udt
UniData Release 8.1 Build: (2008)
Current UniData home is /unidata/ud81/.
Current working directory is /usr/ud81/demo
:PHANTOM COUNT VOC
PHANTOM process 18743448 started.
COMO file is '_PH_/dsiroot45172_18743448'.
:
Critical abort condition found.
$cat _PH_/dsiroot45172_18743448
COUNT VOC
14670 record(s) counted.
PHANTOM process 18743448 has completed.
Van Amburg's answer is the most correct for handling multiple lines of input. The variant I used: instead of the << here-document for multi-line input, I just put quotes around a single command (single and double quotes both work):
/u2/uv/bin/uv "RUNPY run_tests.py"

bash script rsync itself from remote host - how to?

I have multiple remote sites which run a bash script, initiated by cron (running VERY frequently -- every 10 minutes or less), one of whose jobs is to sync a "scripts" directory. The idea is that I can edit the scripts in one location (a server in a data center) rather than having to log into each remote site and make any edits manually. The question is: what are the best options for syncing the script that is currently running the sync? (I hope that's clear.)
I would imagine syncing a script that is currently running would be very bad. Does the following look feasible if I run it as the last statement of my script? Pros? Cons? Other options?
if [ -e ${newScriptPath} ]; then
echo "mv ${newScriptPath} ${permanentPath}" | at "now + 1 minute"
fi
One problem I see: if I use "1 minute" (which is at's smallest increment), and the script ends, and cron starts the next job before at has replaced the script, the replacement could happen during the next run of the script...
Changing the script file during execution is indeed dangerous (see this previous answer), but there's a trick that (at least with the versions of bash I've tested with) forces bash to read the entire script into memory, so if it changes during execution there won't be any effect. Just wrap the script in {}, and use an explicit exit (inside the {}) so if anything gets added to the end of the file it won't be executed:
#!/bin/bash
{
# Actual script contents go here
exit
}
Warning: as I said, this works on the versions of bash I have tested it with. I make no promises about other versions, or other shells. Test it with the shell(s) you'll be using before putting it into production use.
Also, is there any risk that any of the other scripts will be running during the sync process? If so, you either need to use this trick with all of them, or else find some general way to detect which scripts are in use and defer updates on them until later.
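One general way to do that deferral (a sketch, assuming your systems have flock(1); the lock file path is made up here) is to have every backup script hold a shared lock while it runs and have the updater take an exclusive lock before replacing anything:
# In each backup script: hold a shared lock for the life of the script.
exec 9>/var/lock/backup-scripts.lock
flock -s 9
# ... actual backup work here ...

# In the updater: blocks until no backup script holds the shared lock.
exec 9>/var/lock/backup-scripts.lock
flock -x 9
# ... replace the scripts here ...
The exclusive flock call blocks until all shared holders have exited, so the updater simply waits out any script that is still running.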
So I ended up using the "at" utility, but only if the file changed. I have a ".cur" and ".new" version of the script on the local machine. If the MD5 is the same on both, I do nothing. If they are different, I wait until after the main script completes, then force copy the ".new" to the ".cur" in a different script.
I create the same lock file (name) in update_script.sh so another instance of the first script won't run while I'm changing it.
The relevant part of the main script:
file1=$(md5sum script_cur.sh | awk '{print $1}')
file2=$(md5sum script_new.sh | awk '{print $1}')
if [ "$file1" == "$file2" ] ; then
echo "Files have the same content"
else
echo "Files are different, scheduling update_script.sh at command"
at -f update_script.sh now + 1 minute
fi
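For completeness, update_script.sh itself can stay tiny; a sketch under the same scheme (file names from the answer above, the lock file name is invented):
#!/bin/bash
# update_script.sh (hypothetical): swap in the new copy after the main script has finished.
lock=/tmp/main_backup.lock       # same lock file name the main script checks
touch "$lock"                    # keep the main script from starting mid-update
cp -f script_new.sh script_cur.sh
chmod +x script_cur.sh
rm -f "$lock"
Since at fires it roughly a minute after the main script scheduled it, the window in which the lock matters is short, but it keeps a cron-started run from picking up a half-copied file.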

Are shell scripts read in their entirety when invoked?

I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed incrementally, a line or a command at a time, with the exception of compound constructs such as if ... fi blocks, which are read as a unit before being executed:
A shell script is a text file containing shell commands. When such a file is used as the first non-option argument when invoking Bash, and neither the -c nor -s option is supplied (see Invoking Bash), Bash reads and executes commands from the file, then exits. This mode of operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
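You can probe the behaviour on your own shell with a small experiment (a sketch; the result genuinely varies between shells and versions, which is exactly why editing running scripts is risky). The script appends a command to itself while running; if your shell reads the file incrementally, the appended command executes:
#!/bin/sh
# self_append.sh (hypothetical): grow this file while it is still being read.
echo 'echo "the appended line was executed"' >> "$0"
echo "end of the original script"
Delete the extra line before re-running it, or it will keep accumulating copies.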
It's funny that most OSes I know of do NOT read the entire content of a script into memory, but run it from disk. Doing otherwise would make it safe to change a script while it is running. I don't understand why it is done this way, given that:
scripts are usually very small (and don't take much memory anyway);
at some point, as shown in this thread, people will make changes to a script that is already running anyway.
But, acknowledging this, here's something to think about: if you have decided that a script is not running OK (because you are writing/changing/debugging it), do you really care about the rest of that run? You can go ahead and make the changes, save them, and ignore all the output and actions of the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, because the current/previous run is behaving abnormally. It will typically skip some parts, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave things in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports the feature, it's best to let the current run finish and THEN save the updated script. You can make the changes already; just don't save them.
It's not like the old days of DOS, where you had only one screen in front of you (one DOS session), so you couldn't avoid waiting for a run to complete before you could open the file again.
No, they are not, and there are good reasons for that.
One thing to keep in mind is that a shell is not an interpreter, even if there are some similarities. Shells are designed to work with a stream of commands, whether it comes from the TTY, a pipe, a FIFO, or even a socket.
The shell reads from its input line by line until an EOF is returned by the kernel.
Most shells have no extra support for interpreting files; they work with a file the same way they would work with a terminal.
In fact this is considered a nice feature, because it lets you do interesting things like this: How do Linux binary installers (.bin, .sh) work?
You can prepend a shell script to a binary file. You can't do that with an interpreter, because it would parse (or try to parse) the whole file and fail. A shell just interprets it line by line and doesn't care about the binary garbage at the end of the file; you only have to make sure the script's execution terminates before it reaches the binary part.

How can I standardize shell script header #! on different hosts?

I manage a large number of shell (ksh) scripts on server A. Each script begins with the line...
#!/usr/bin/ksh
When I deploy to machine B, C, and D I frequently need to use a different shell such as /bin/ksh, /usr/local/bin/ksh or even /usr/dt/bin/ksh. Assume I am unable to install a new version of ksh and I am unable to create links in any protected directories such as /usr/local/bin. At the moment I have a sed script which modifies all the scripts but I would prefer not to do this. I would like to standardize the header so that it no longer needs to be changed from server to server. I don't mind using something like
#!~/ksh
And creating a link which exists on every server; but I have had problems in the past with finding home via "~" when using rsh (maybe it was ssh) to call a script (on AIX specifically, I think). Another option might be to create a link in my home directory, ensure that it is first in my PATH, and simply use
#!ksh
Looking for a good solution. Thanks.
Update 8/26/11 - Here is the solution I came up with. The installation script looks for the various versions of ksh installed on the server and then copies one of the ksh93 binaries to /tmp/ksh93. The scripts in the framework all refer to #!/tmp/ksh93, so they don't need to be changed from one server to another. The installation script also sets things up so that if the file is ever removed from /tmp, it is immediately put back the next time a scheduled task runs, which is at a minimum every minute.
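A minimal sketch of that installation step (the candidate paths and the version probe are assumptions; adjust them to whatever actually exists on your servers):
#!/bin/sh
# Hypothetical installer: locate a ksh93 binary and copy it to the fixed path.
for k in /usr/bin/ksh93 /usr/local/bin/ksh93 /usr/dt/bin/ksh /usr/bin/ksh /bin/ksh
do
    # ${.sh.version} only expands in ksh93, so this filters out ksh88/pdksh.
    if [ -x "$k" ] && "$k" -c 'echo ${.sh.version}' >/dev/null 2>&1
    then
        cp "$k" /tmp/ksh93 && chmod 755 /tmp/ksh93
        break
    fi
done
Re-running this from a frequent scheduled task gives the self-healing behaviour described above: if /tmp/ksh93 disappears, it is recreated within a minute.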
As rettops noted, you can use:
#!/usr/bin/env ksh
This will likely work for you. However, there can be some drawbacks. See Wikipedia on Shebang for a fairly thorough discussion.
#! /usr/bin/env ksh
will use whatever ksh is in the user's path.

Run a list of bash scripts consecutively

I have a load of bash scripts that back up different directories to different locations. I want each one to run every day, but I want to make sure they don't run simultaneously.
I've written a script that basically just calls each script in succession and sits in cron.daily, but I want it to keep working when I add or remove backup scripts, without having to edit it manually.
So what I need to do is generate the list of scripts (e.g. "dir -1 /usr/bin/backup*.sh") and then run each script it finds in turn.
Thanks.
#!/bin/sh
for script in /usr/bin/backup*.sh
do
"$script"
done
#!/bin/bash
for SCRIPT in /usr/bin/backup*.sh
do
[ -x "$SCRIPT" ] && [ -f "$SCRIPT" ] && $SCRIPT
done
If your system has run-parts then that will take care of it for you. You can name your scripts like "10script", "20anotherscript" and they will be run in order in a manner similar to the rc*.d hierarchy (which is run via init or Upstart, however). On some systems it's a script. On mine it's a binary executable.
It is likely that your system is using it to run hourly, daily, etc., cron jobs just by dropping scripts into directories such as /etc/cron.hourly/
Pay particular attention, though, to how you name your scripts. (Don't use dots, for example.) Check the man page specific to your system, since file naming restrictions may vary.
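For example (a sketch; run-parts option names and file-naming rules differ between implementations, so check your local man page first), you could rename the backup scripts to dot-free names and do a dry run:
# Hypothetical names; on Debian-style systems files containing dots are skipped.
mv /usr/bin/backup_home.sh /etc/cron.daily/10backup-home
mv /usr/bin/backup_db.sh   /etc/cron.daily/20backup-db
run-parts --test /etc/cron.daily    # --test lists what would run (Debian run-parts)
If your run-parts lacks --test, simply listing the directory in lexical order shows the execution order.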
