Multiple jobs on server using a script - bash

I'm trying to automate sending many jobs to a server using the qsub command. I have made a shell script which creates multiple batch scripts from some input files, using printf. The problem is that these jobs don't run. When I open the generated batch scripts in gedit and save them without modifying them, they then work, which makes me think this is some kind of formatting issue.
Could you give me a solution to this issue?
Here's the shell script that creates the scripts to be submitted:
#!/bin/bash
# Note: declare -i is a bashism, so this must run under bash, not sh
cd /home/PATH/
FILES=$(ls inpt/ | grep "centers")
declare -i i=1
for f in $FILES
do
printf '#!/bin/bash\ncd /home/PATH/\n./nvt inpt/%b\n' "$f" > "run-script$i.sh"
i=$i+1
done

You must set the executable bit on your scripts:
printf '#!/bin/bash\ncd /home/PATH/\n./nvt inpt/%b\n' "$f" > "run-script$i.sh"
chmod +x "run-script$i.sh"
To be sure that it is not a formatting problem (or any problem with printf), you can try using echo instead:
echo '#!/bin/bash' > run-script$i.sh
echo cd /home/PATH/ >> run-script$i.sh
echo ./nvt "inpt/$f" >> run-script$i.sh
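Since saving the files in gedit makes them work, the most likely culprit is stray carriage returns (CRLF line endings) in the generated scripts: the kernel then looks for an interpreter literally named "/bin/bash\r" and the job fails. A minimal sketch of detecting and stripping them, using a simulated CRLF file (the filename is just an example):

```shell
# Simulate a batch script that was generated with CRLF line endings.
printf '#!/bin/bash\r\necho hello\r\n' > run-script1.sh

# Strip carriage returns (tr is available everywhere; dos2unix also works).
tr -d '\r' < run-script1.sh > run-script1.sh.tmp && mv run-script1.sh.tmp run-script1.sh
chmod +x run-script1.sh

./run-script1.sh    # now runs and prints "hello"
```

If qsub jobs start working after this, the printf format string (or the input files feeding it) is introducing the carriage returns.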

Related

Calling one file in which there are multiple file names and has to be run in parallel

Imagine a sample.txt file containing multiple file names (a.sh, b.sh, c.sh, ...). I have another file, test.sh, through which I want to run all the files listed in sample.txt in parallel and get the exit status of each file.
Can you please help me with this?
Thanks in advance.
Ok, I think you want this:
#!/usr/bin/bash
export PATH=.:$PATH
parallel -a sample.txt '{} ; if [ $? -eq 0 ]; then echo "PASS:" {}; else echo "FAIL:" {};fi'
First test that you can run the commands in serial:
bash sample.txt
If this fails, fix that first; GNU Parallel will not magically make something work that did not work before.
Then define the testing function:
tester() {
  if (eval "$@") >&/dev/null; then
    perl -e 'printf "\033[30;102m[ OK ]\033[0m @ARGV\n"' "$@"
  else
    perl -e 'printf "\033[30;101m[FAIL]\033[0m @ARGV\n"' "$@"
  fi
}
export -f tester
Then call the testing function:
parallel tester :::: sample.txt
If you are not using bash (and export -f fails above) then try (use version 20161022 or later):
env_parallel tester :::: sample.txt
If your fix above was to prepend each line with bash you can make GNU Parallel do that (in which case you do not need to fix sample.txt):
parallel tester bash :::: sample.txt
To run more jobs in parallel use -j0. To get the actual exit code use --joblog mylog.txt and look in mylog.txt.
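If GNU Parallel is not available, the same pass/fail pattern can be sketched in plain bash with background jobs and wait. This is an alternative, not the answer's method, and the sample.txt contents here are stand-ins for illustration:

```shell
#!/bin/bash
# Create a throwaway sample.txt; "true" and "false" stand in for a.sh, b.sh, ...
printf 'true\nfalse\n' > sample.txt

# Launch every line of sample.txt in the background, remembering each PID.
pids=(); cmds=()
while IFS= read -r cmd; do
    bash -c "$cmd" &
    pids+=($!)
    cmds+=("$cmd")
done < sample.txt

# wait <pid> returns that job's exit status, so we can report per command.
for i in "${!pids[@]}"; do
    if wait "${pids[$i]}"; then
        echo "PASS: ${cmds[$i]}"
    else
        echo "FAIL: ${cmds[$i]}"
    fi
done
```

Unlike GNU Parallel this gives no load limiting or job log, but it needs nothing beyond bash itself.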

With Bash or ZSH is there a way to use a wildcard to execute the same command for each script?

I have a directory with script files, say:
scripts/
foo.sh
script1.sh
test.sh
... etc
and would like to execute each script like:
$ ./scripts/foo.sh start
$ ./scripts/script1.sh start
etc
without needing to know all the script filenames.
Is there a way to append start to them and execute? I've tried tab-completion, as it's pretty good in zsh, using ./scripts/*[TAB] start, with no luck, but I imagine there's another way to do it, so that it outputs:
$ ./scripts/foo.sh start ./scripts/script1.sh start
Or perhaps some other way to make it easier? I'd like to do this in the terminal without an alias or function if possible, as these scripts are on a box I SSH into, and I shouldn't be modifying .*profile or .*rc files.
Use a simple loop:
for script in scripts/*.sh; do
"$script" start
done
There's just one caveat: if there are no *.sh files, the pattern expands to itself and you will get an error. A simple workaround is to check that $script is actually a file (and executable):
for script in scripts/*.sh; do
[ -x "$script" ] && "$script" start
done
Note that this can be written on a single line, if that's what you're after:
for script in scripts/*.sh; do [ -x "$script" ] && "$script" start; done
Zsh has some shorthand loops that bash doesn't:
for f (scripts/*.sh) "$f" start
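In bash, the no-match caveat above can also be handled with the nullglob option, which makes an unmatched pattern expand to nothing instead of to itself. A small sketch using throwaway demo scripts (the names are just examples):

```shell
#!/bin/bash
# Set up a throwaway scripts/ directory to demonstrate.
mkdir -p scripts
printf '#!/bin/bash\necho "foo got: $1"\n' > scripts/foo.sh
printf '#!/bin/bash\necho "script1 got: $1"\n' > scripts/script1.sh
chmod +x scripts/*.sh

# With nullglob set, the loop body simply never runs when no *.sh
# exists, rather than iterating once over the literal 'scripts/*.sh'.
shopt -s nullglob
for script in scripts/*.sh; do
    "$script" start
done
```

Each script receives "start" as its first argument, matching the ./scripts/foo.sh start invocation from the question.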

nohup for loop output naming

I use for loop to use specific tool with the set of files:
nohup sh -c 'for i in ~/files/*txt; do ID=`echo ${i} | sed "s/^.*\///"`; ./tool $i && mv output ${ID}.out; done' &
This tool writes its output to a fixed filename, so I want to rename the output after each run; it would otherwise be overwritten, and renaming is simpler for me.
However, this mv doesn't work under nohup: files are not renamed individually and get overwritten.
How can I solve this problem?
Why the complicated nohup dance, and not just
for i in ~/files/*.txt; do
    ./tool "$i" && mv output "$(basename "$i").out"
done
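The basename call can also be replaced by the ${i##*/} parameter expansion, which avoids a subshell. A runnable sketch with a hypothetical stand-in for ./tool (the tool function and file names below are assumptions for illustration):

```shell
#!/bin/bash
# Set up demo inputs.
mkdir -p files
printf 'a\n' > files/one.txt
printf 'b\n' > files/two.txt

# Hypothetical stand-in for ./tool: always writes a file named "output".
tool() { cp "$1" output; }

for i in files/*.txt; do
    # ${i##*/} strips the longest */ prefix: files/one.txt -> one.txt
    tool "$i" && mv output "${i##*/}.out"
done
```

After the loop, one.txt.out and two.txt.out exist, each holding the corresponding input, and nothing is overwritten.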

Bash script to run over ssh cannot see remote file

The script uses scp to upload a file. That works.
Now I want to log in with ssh, cd to the directory that holds the uploaded file, do an md5sum on the file. The script keeps telling me that md5sum cannot find $LOCAL_FILE. I tried escaping: \$LOCAL_FILE. Tried quoting the EOI: <<'EOI'. I'm partially understanding this, that no escaping means everything happens locally. echo pwd unescaped gives the local path. But why can I do "echo $MD5SUM > $LOCAL_FILE.md5sum", and it creates the file on the remote machine, yet "echo md5sum $LOCAL_FILE > md5sum2" does not work? And if it the local md5sum, how do I tell it to work on the remote?
scp "files/$LOCAL_FILE" "$i@$i.567.net":"$REMOTE_FILE_PATH"
ssh -T "$i@$i.567.net" <<EOI
touch I_just_logged_in
cd $REMOTE_DIRECTORY_PATH
echo `date` > I_just_changed_directories
echo `whoami` >> I_just_changed_directories
echo `pwd` >> I_just_changed_directories
echo "$MD5SUM" >> I_just_changed_directories
echo $MD5SUM > $LOCAL_FILE.md5sum
echo `md5sum $LOCAL_FILE` > md5sum2
EOI
You have to think about when $LOCAL_FILE is being interpreted. In this case, since you've used double-quotes, it's being interpreted on the sending machine. You need instead to quote the string in such a way that $LOCAL_FILE is in the command line on the receiving machine. You also need to get your "here document" correct. What you show just sends the output to touch to the ssh.
What you need will look something like
ssh -T address <<'EOF'
cd $REMOTE_DIRECTORY_PATH
...
EOF
The quoting rules in bash are somewhat arcane. You might want to read up on them in Mendel Cooper's Advanced Bash-Scripting Guide.
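The difference between a quoted and an unquoted heredoc delimiter can be seen locally, without any ssh at all. A minimal sketch, using a child bash process to play the role of the remote shell:

```shell
#!/bin/bash
WHO="local"   # not exported, so the child shell won't see it

# Unquoted delimiter: $WHO is expanded *before* the text is sent.
unquoted=$(bash <<EOF
echo "expanded by: $WHO"
EOF
)

# Quoted delimiter: the text is sent verbatim, so the variable is
# expanded by the receiving shell, where WHO is unset.
quoted=$(bash <<'EOF'
echo "expanded by: ${WHO:-receiver}"
EOF
)

echo "$unquoted"   # expanded by: local
echo "$quoted"     # expanded by: receiver
```

This is exactly why the unescaped variables in the question were expanded on the sending machine: the heredoc delimiter was unquoted.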

OSX bash script works but fails in crontab on SFTP

This topic has been discussed at length; however, I have a variant on the theme that I just cannot crack. Two days into this now, I decided to ping the community. Thanks in advance for reading.
Executive summary: I have a script on OS X that runs fine and executes without issue or error when run manually. When I put the script in the crontab to run daily, it still runs, but it doesn't run all of the commands (specifically sftp).
I have read enough posts to go down the path of environment issues, so as you will see below, I hard-coded the location of sftp in case of a PATH issue.
The only thing I can think of is the IdentityFile. Note: I am putting this in the crontab for my user, not root, so I understand it should pick up the id_dsa.pub that I have created (and that has already been shared with the server).
I am not trying to do any funky expect commands to bypass the password, etc. I don't know why, when run from cron, it skips the sftp line.
Please see the code below; any help is greatly appreciated. Thanks.
#!/bin/bash
export DATE=`date +%y%m%d%H%M%S`
export YYMMDD=`date +%y%m%d`
PDATE=$DATE
YDATE=$YYMMDD
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED="~/Dropbox/"
USER="user"
HOST="host.domain.tld"
A="/tmp/5nPR45bH"
>${A}.file1${PDATE}
>${A}.file2${PDATE}
BYEbye ()
{
rm ${A}.file1${PDATE}
rm ${A}.file2${PDATE}
echo "Finished cleaning internal logs"
exit 0
}
echo "get -r *" >> ${A}.file1${PDATE}
echo "quit" >> ${A}.file1${PDATE}
eval mkdir ${FEED}${YDATE}
eval cd ${FEED}${YDATE}
eval /usr/bin/sftp -b ${A}.file1${PDATE} ${USER}@${HOST}
BYEbye
exit 0
Not an answer, just comments about your code.
The way to handle filenames with spaces is to quote the variable: "$var" -- eval is not the way to go. Get into the habit of quoting all variables unless you specifically want to use the side effects of not quoting.
you don't need to export your variables unless there's a command you call that expects to see them in the environment.
you don't need to call date twice because the YYMMDD value is a substring of the DATE: YYMMDD="${DATE:0:6}"
just a preference: I use $HOME over ~ in a script.
you never use the "file2" temp file -- why do you create it?
since your sftp batch file is pretty simple, you don't really need a file for it:
printf "%s\n" "get -r *" "quit" | sftp -b - "$USER@$HOST"
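The printf trick above works because printf reapplies its format string to each remaining argument, emitting one batch command per line. A quick sketch of just that piece:

```shell
#!/bin/bash
# printf repeats "%s\n" for every argument, producing one line per
# command -- exactly the shape sftp -b - expects on stdin.
batch=$(printf "%s\n" "get -r *" "quit")
echo "$batch"
# get -r *
# quit
```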
Here's a rewrite, shortened considerably:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
FEED_DIR="$HOME/Dropbox/$(date +%Y%m%d)"
USER="user"
HOST="host.domain.tld"
mkdir "$FEED_DIR" || { echo "could not mkdir $FEED_DIR"; exit 1; }
cd "$FEED_DIR"
{
echo "get -r *"
echo quit
} |
sftp -b - "${USER}@${HOST}"
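Since cron runs jobs with a minimal environment, it can also help to set PATH in the crontab itself and capture the script's output, so a skipped sftp leaves a trace in a log. A hypothetical crontab entry; the script and log paths are placeholders, not from the original post:

```
# m  h  dom mon dow  command
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
30 2 * * * /Users/user/bin/fetch-feed.sh >> /Users/user/logs/fetch-feed.log 2>&1
```

With 2>&1 in place, any error sftp prints when run from cron (for example, a failure to find the identity file) ends up in the log instead of vanishing.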
