output redirection inside bsub command - bash

Is it possible to use output redirection inside bsub command such as:
bsub -q short "cat <(head -2 myfile.txt) > outputfile.txt"
Currently this bsub execution fails. My attempts to escape the redirection sign and the parentheses all failed as well, for example:
bsub -q short "cat \<\(head -2 myfile.txt\) > outputfile.txt"
bsub -q short "cat <\(head -2 myfile.txt\) > outputfile.txt"
*Note, I'm well aware that the redirection in this simple command is not necessary as the command could easily be written as:
bsub -q short "head -2 myfile.txt > outputfile.txt"
and then it would indeed execute properly (without errors). I am, however, interested in using the '<(...)' redirection within the context of a more complex command, and bring this simple command here only as an example.

<(...) is process substitution -- a bash extension not available on baseline POSIX shells. system(), subprocess.Popen(..., shell=True) and similar calls use /bin/sh, which is not guaranteed to have such extensions.
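For a quick local illustration of the difference (assuming /bin/sh is dash or another strictly POSIX shell; the exact error message may vary):
sh -c 'cat <(echo hi)'    # fails, e.g.: Syntax error: "(" unexpected
bash -c 'cat <(echo hi)'  # prints: hi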
As a mechanism that works with any possible command, without needing to worry about how to correctly escape it into a string, you can wrap the command in a shell function and export that function, along with any variables it uses, through the environment:
# for the sake of example, moving filenames out-of-band
in_file=myfile.txt
out_file=outputfile.txt
mycmd() { cat <(head -2 <"$in_file") >"$out_file"; }
export -f mycmd # export the function into the environment
export in_file out_file # and also any variables it uses
bsub -q short 'bash -c mycmd' # ...before telling bsub to invoke bash to run the function
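A variant of the same idea, sketched here with the filenames passed as arguments instead of exported variables (mycmd remains an arbitrary name):
mycmd() { cat <(head -2 <"$1") >"$2"; }
export -f mycmd
bsub -q short "bash -c 'mycmd myfile.txt outputfile.txt'"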

<(...) is a bash feature while your command runs with sh.
Invoke bash explicitly to handle your bash-only features:
bsub -q short "bash -c 'cat <(head -2 myfile.txt) > outputfile.txt'"

Related

How to hide output error messages from terminal? [duplicate]

I have a Bash script that runs a program with parameters. That program outputs some status (doing this, doing that...). There isn't any option for this program to be quiet. How can I prevent the script from displaying anything?
I am looking for something like Windows' "echo off".
The following sends standard output to the null device (bit bucket).
scriptname >/dev/null
And if you also want error messages to be sent there, use one of (the first may not work in all shells):
scriptname &>/dev/null
scriptname >/dev/null 2>&1
scriptname >/dev/null 2>/dev/null
And, if you want to record the messages, but not see them, replace /dev/null with an actual file, such as:
scriptname &>scriptname.out
For completeness, under Windows cmd.exe (where "nul" is the equivalent of "/dev/null"), it is:
scriptname >nul 2>nul
Something like
script > /dev/null 2>&1
This suppresses both standard output and error output by redirecting them to /dev/null.
An alternative that may fit in some situations is to assign the result of a command to a variable:
$ DUMMY=$( grep root /etc/passwd 2>&1 )
$ echo $?
0
$ DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
1
Since Bash and other POSIX command-line interpreters do not consider variable assignments to be commands, the return code of the command substitution is respected.
Note: an assignment with the typeset or declare keyword is considered a command, so in that case the evaluated return code is that of the assignment itself and not of the command executed in the sub-shell:
$ declare DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
0
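If you want to keep declare and still see the command's status, a simple workaround (sketch) is to separate the declaration from the assignment:
$ declare DUMMY
$ DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
1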
Try
: $(yourcommand)
: is short for "do nothing".
$() is just your command.
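Note that the command substitution captures only standard output; a variant that also hides standard error (yourcommand is a placeholder) would be:
: $(yourcommand 2>&1)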
Like andynormancx' post, use this (if you're working in a Unix environment):
scriptname > /dev/null
Or you can use this (if you're working in a Windows environment):
scriptname > nul
This is another option (|& is bash shorthand for 2>&1 |, so both standard output and standard error are piped to :, which ignores its input):
scriptname |& :
Take a look at this example from The Linux Documentation Project:
3.6 Sample: stderr and stdout 2 file
This will send all output of a program to a file. This is sometimes suitable for cron entries, if you want a command to run in absolute silence.
rm -f $(find / -name core) &> /dev/null
That said, you can use this simple redirection:
/path/to/command &>/dev/null
In your script you can add the following to the lines that you know are going to produce output:
some_code 2>>/dev/null
This silences only standard error. Alternatively, to silence only standard output:
some_code >>/dev/null

(bash) flock: when to use the -c option?

Can anyone explain to me why the -c option exists in flock?
I can't find a good description of how it differs from simply specifying the command(s) to execute after flock (apart from its limitation of no arguments to the command).
-c invokes a shell with the command.
Consider this:
flock .lock somecommand > myfile
Since > is interpreted by the current shell and not by flock, myfile will be truncated before the lock is acquired.
You can work around this with -c:
flock .lock -c 'somecommand > myfile'
Now the redirection is performed after the lock is acquired. However, -c is arguably redundant, since you could just as well invoke a shell yourself:
flock .lock sh -c 'somecommand > myfile'
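Another common pattern, described in the flock(1) man page, takes the lock on a file descriptor inside a subshell, which also makes the ordering of lock and redirection explicit:
(
  flock 9 || exit 1
  somecommand > myfile
) 9> .lock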

Pass args for script when going thru pipe

I have this example shell script:
echo /$1/
So I may call
$ . ./script 5
# output: /5/
I want to pipe the script into sh(ell), but can I pass the arg too?
cat script | sh
# output: //
You can pass arguments to the shell using the -s option:
cat script | bash -s 5
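With the example script above (echo /$1/), this prints:
$ cat script | bash -s 5
/5/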
Use bash -s -- <args>
e.g, install google cloud sdk
~ curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
cat script | sh -s -- 5
The -s argument tells sh to take commands from standard input and not to require a filename as a positional argument. (Otherwise, without -s, the next non-flag argument would be treated as a filename.)
The -- tells sh to stop processing further arguments so that they are only picked up by the script (rather than applying to sh itself). This is useful in situations where you need to pass a flag to your script that begins with - or -- (e.g.: --dry-run) that must be ignored by sh. Also note that a single - is equivalent to --.
cat script | sh -s - --dry-run

How to invoke bash, run commands inside the new shell, and then give control back to user?

This must either be really simple or really complex, but I couldn't find anything about it... I am trying to open a new bash instance, then run a few commands inside it, and give the control back to the user inside that same instance.
I tried:
$ bash -lic "some_command"
but this executes some_command inside the new instance, then closes it. I want it to stay open.
One more detail which might affect answers: if I can get this to work I will use it in my .bashrc as alias(es), so bonus points for an alias implementation!
bash --rcfile <(echo '. ~/.bashrc; some_command')
dispenses with the creation of temporary files. Related questions on other sites:
https://serverfault.com/questions/368054/run-an-interactive-bash-subshell-with-initial-commands-without-returning-to-the
https://unix.stackexchange.com/questions/123103/how-to-keep-bash-running-after-command-execution
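Since the question asks for an alias, a minimal alias wrapper along the same lines (mysub and some_command are placeholders) could be:
alias mysub="bash --rcfile <(echo '. ~/.bashrc; some_command')"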
This is a late answer, but I had the exact same problem and Google sent me to this page, so for completeness here is how I got around the problem.
As far as I can tell, bash does not have an option to do what the original poster wanted to do. The -c option will always return after the commands have been executed.
Broken solution: The simplest and obvious attempt around this is:
bash -c 'XXXX ; bash'
This partly works (albeit with an extra sub-shell layer). However, the problem is that while a sub-shell will inherit the exported environment variables, aliases and functions are not inherited. So this might work for some things but isn't a general solution.
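A quick illustration (hypothetical names): a plain function or alias defined in the parent shell is not visible in a child bash unless the function is explicitly exported with export -f:
myfn() { echo from function; }
alias myalias='echo from alias'
bash -c 'myfn'       # fails: myfn: command not found
bash -c 'myalias'    # fails: myalias: command not found
export -f myfn
bash -c 'myfn'       # now prints: from function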
Better: The way around this is to dynamically create a startup file and call bash with this new initialization file, making sure that your new init file calls your regular ~/.bashrc if necessary.
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
echo "source ~/.bashrc" > $TMPFILE
echo "<other commands>" >> $TMPFILE
echo "rm -f $TMPFILE" >> $TMPFILE
# Start the new bash shell
bash --rcfile $TMPFILE
The nice thing is that the temporary init file will delete itself as soon as it is used, reducing the risk that it is not cleaned up correctly.
Note: I'm not sure if /etc/bashrc is usually called as part of a normal non-login shell. If so you might want to source /etc/bashrc as well as your ~/.bashrc.
You can pass --rcfile to Bash to cause it to read a file of your choice. This file will be read instead of your .bashrc. (If that's a problem, source ~/.bashrc from the other script.)
Edit: So a function to start a new shell with the stuff from ~/.more.sh would look something like:
more() { bash --rcfile ~/.more.sh ; }
... and in .more.sh you would have the commands you want to execute when the shell starts. (I suppose it would be elegant to avoid a separate startup file -- you cannot use standard input because then the shell will not be interactive, but you could create a startup file from a here document in a temporary location, then read it.)
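A sketch of that here-document idea, with a temporary rcfile that deletes itself (some_command is a placeholder):
rcfile=$(mktemp)
cat > "$rcfile" <<EOF
. ~/.bashrc
some_command
rm -f "$rcfile"
EOF
bash --rcfile "$rcfile"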
bash -c '<some command> ; exec /bin/bash'
This avoids an additional shell sublayer, because exec replaces the bash -c wrapper with the final interactive shell.
You can get the functionality you want by sourcing the script instead of running it. eg:
$cat script
cmd1
cmd2
$ . script
$ at this point cmd1 and cmd2 have been run inside this shell
Append to ~/.bashrc a section like this:
if [ "$subshell" = 'true' ]
then
# commands to execute only on a subshell
date
fi
alias sub='subshell=true bash'
Then you can start the subshell with sub.
The accepted answer is really helpful! Just to add that process substitution (i.e., <(COMMAND)) is not supported in some shells (e.g., dash).
In my case, I was trying to create a custom action (basically a one-line shell script) in Thunar file manager to start a shell and activate the selected Python virtual environment. My first attempt was:
urxvt -e bash --rcfile <(echo ". $HOME/.bashrc; . %f/bin/activate;")
where %f is the path to the virtual environment handled by Thunar.
I got an error (by running Thunar from command line):
/bin/sh: 1: Syntax error: "(" unexpected
Then I realized that my sh (essentially dash) does not support process substitution.
My solution was to invoke bash at the top level to interpret the process substitution, at the expense of an extra level of shell:
bash -c 'urxvt -e bash --rcfile <(echo "source $HOME/.bashrc; source %f/bin/activate;")'
Alternatively, I tried to use a here-document for dash, but with no success. Something like:
echo -e " <<EOF\n. $HOME/.bashrc; . %f/bin/activate;\nEOF\n" | xargs -0 urxvt -e bash --rcfile
In accordance with the answer by daveraja, here is a bash script which will solve the purpose.
Consider a situation where you are using C shell and you want to execute a command
without leaving the C-shell context/window, as follows:
Command to be executed: Search exact word 'Testing' in current directory recursively only in *.h, *.c files
grep -nrs --color -w --include="*.{h,c}" Testing ./
Solution 1: Enter into bash from C-shell and execute the command
bash
grep -nrs --color -w --include="*.{h,c}" Testing ./
exit
Solution 2: Write the intended command into a text file and execute it using bash
echo 'grep -nrs --color -w --include="*.{h,c}" Testing ./' > tmp_file.txt
bash tmp_file.txt
Solution 3: Run command on the same line using bash
bash -c 'grep -nrs --color -w --include="*.{h,c}" Testing ./'
Solution 4: Create a script (one-time) and use it for all future commands
alias ebash './execute_command_on_bash.sh'
ebash grep -nrs --color -w --include="*.{h,c}" Testing ./
The script is as follows,
#!/bin/bash
# =========================================================================
# References:
# https://stackoverflow.com/a/13343457/5409274
# https://stackoverflow.com/a/26733366/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://stackoverflow.com/a/2853811/5409274
# https://www.linuxquestions.org/questions/other-%2Anix-55/how-can-i-run-a-command-on-another-shell-without-changing-the-current-shell-794580/
# https://www.tldp.org/LDP/abs/html/internalvariables.html
# https://stackoverflow.com/a/4277753/5409274
# =========================================================================
# Enable following line to see the script commands
# getting printing along with their execution. This will help for debugging.
#set -o verbose
E_BADARGS=85
if [ ! -n "$1" ]
then
echo "Usage: `basename $0` grep -nrs --color -w --include=\"*.{h,c}\" Testing ."
echo "Usage: `basename $0` find . -name \"*.txt\""
exit $E_BADARGS
fi
# Create a temporary file
TMPFILE=$(mktemp)
# Add stuff to the temporary file
#echo "echo Hello World...." >> $TMPFILE
#initialize the variable that will contain the whole argument string
argList=""
#iterate on each argument
for arg in "$#"
do
#if an argument contains a white space, enclose it in double quotes and append to the list
#otherwise simply append the argument to the list
if echo $arg | grep -q " "; then
argList="$argList \"$arg\""
else
argList="$argList $arg"
fi
done
#remove a possible leading space at the beginning of the list
argList=$(echo $argList | sed 's/^ *//')
# Echoing the command to be executed to tmp file
echo "$argList" >> $TMPFILE
# Note: This should be your last command
# Important last command which deletes the tmp file
last_command="rm -f $TMPFILE"
echo "$last_command" >> $TMPFILE
#echo "---------------------------------------------"
#echo "TMPFILE is $TMPFILE as follows"
#cat $TMPFILE
#echo "---------------------------------------------"
check_for_last_line=$(tail -n 1 $TMPFILE | grep -o "$last_command")
#echo $check_for_last_line
#if tail -n 1 $TMPFILE | grep -o "$last_command"
if [ "$check_for_last_line" == "$last_command" ]
then
#echo "Okay..."
bash $TMPFILE
exit 0
else
echo "Something is wrong"
echo "Last command in your tmp file should be removing itself"
echo "Aborting the process"
exit 1
fi
$ bash --init-file <(echo 'some_command')
$ bash --rcfile <(echo 'some_command')
In case you can't or don't want to use process substitution:
$ cat script
some_command
$ bash --init-file script
Another way:
$ bash -c 'some_command; exec bash'
$ sh -c 'some_command; exec sh'
sh-only way (dash, busybox); an interactive POSIX shell reads the file named by the ENV variable at startup:
$ ENV=script sh
Here is yet another (working) variant:
This opens a new gnome terminal, then in the new terminal it runs bash. The user's rc file is read first, then a command ls -la is sent for execution to the new shell before it turns interactive.
The last echo adds an extra newline that is needed to finish execution.
gnome-terminal -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'
I also sometimes find it useful to decorate the terminal, e.g. with color, for better orientation.
gnome-terminal --profile green -- bash -c 'bash --rcfile <( cat ~/.bashrc; echo ls -la ; echo)'

How to remove inherited functions in sh (posix)

How do I ensure that there are no unexpected functions inherited from the parent when my script is run? If using bash,
#!/bin/bash -p
will do the trick, as will invoking the script through env -i. But I cannot rely on the user to invoke env, I don't want to rely on bash, I don't want to do an exec-hack and re-exec the script, and
#!/usr/bin/env -i sh
does not work.
So I'm looking for a portable way (portable == posix) to ensure that the user hasn't defined functions that will unexpectedly modify the behavior of the script. My current solution is:
eval $( env | sed -n '/\([^=]*\)=(.*/s//\1/p' |
while read -r name; do echo unset -f $name\;; done )
but that's pretty ugly and of dubious robustness. Is there a good way to get the functionality that 'unset -f -a' should provide?
edit
Slightly less ugly, but no better (I don't like parsing the output of env):
unset -f $( env | sed -n '/\([^=]*\)=(.*/s//\1/p' | tr \\012 \ )
#!/bin/bash --posix
results in:
SHELLOPTS=braceexpand:hashall:interactive-comments:posix
same as:
#!/bin/sh
SHELLOPTS=braceexpand:hashall:interactive-comments:posix
and "sh" is posix...
EDIT:
tested a few functions - unset was not required in my case...
EDIT2:
compare output of "set", not just "env"
EDIT3:
in the following example, the output of both "set|wc" calls gives the same results:
#!/bin/sh
set
set|wc
unset -f $( env | sed -n '/\([^=]*\)=(.*/s//\1/p' | tr \\012 \ )
set
set|wc
How about using the following env shebang line that sets a reasonable PATH variable to invoke the sh interpreter:
#!/usr/bin/env -i PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/xpg4/bin sh
