I would like to write a bash function that suppresses a command's output to the terminal. I have included the following in my $HOME/.bashrc:
# suspend output to the terminal
noout(){
$* &>/dev/null &
}
And as an example I have created alias for evince:
alias evince='noout evince'
This works just fine for files without spaces in their names. However, if I launch something like:
evince Jack\ London\ -\ The\ Star\ Rover.pdf
bash splits the file name into several pieces and evince opens several empty windows.
Thanks for any help in making this work.
Try this:
noout() {
"$#" >/dev/null 2>&1 &
}
I'm not sure why you want to do it in the background, but that's your choice. The relevant aspect of my answer is the quotes and the use of $@ instead of $*. See also what does "$@" mean in a shell script.
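To see why the quoting matters, here is a minimal demonstration (the function name is made up for illustration):
args_demo() {
printf 'got: <%s>\n' "$@"   # one line per argument
}
args_demo Jack\ London\ -\ The\ Star\ Rover.pdf
# got: <Jack London - The Star Rover.pdf>
With an unquoted $* instead of "$@", the expansion is word-split again, so the same call prints got: <Jack>, got: <London>, and so on, and each piece reaches evince as a separate file name.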
I am writing a shell script, practice.sh. I want to pass its first command-line argument, $1, to the ls command in the script. E.g., if I run my script in the terminal as
$ bash practice.sh *.mp3
I want to use the argument *.mp3 for the ls command:
#!/bin/bash
output=$ls $1
It doesn't work. Any help?
The obvious answer for what you say you want is just
#!/bin/bash
ls "$1"
which will run ls, passing it (just) the first argument to the script.
However, you also say you want to run this like practice.sh *.mp3, which runs the script with many arguments (not just one) -- the *.mp3 will be expanded to all of the .mp3 files in the current directory. For that, you likely want something more like
#!/bin/bash
ls "$#"
which will pass all of the arguments to your script (however many there are) to the ls command.
These scripts will just run ls with its stdout connected to whatever your script's stdout is connected to, so the output will (likely) just appear on your terminal. If you instead want to capture the output of the ls command (so you can do something else with it), you need something like
#!/bin/bash
output=$(ls "$#")
which will run ls with all the arguments, and capture the output in the variable $output. You can then do things with that variable.
Use command substitution to record the output of the command in the variable output:
output=$(ls "$1")
This will record the output of the command ls "$1" in the variable output.
You can then use echo "$output" to print it (the quotes preserve the line breaks in the output).
You can read more about shell expansion in the GNU Bash reference manual.
Here is a simple test-case script which behaves differently in zsh vs. bash when I run it with source test_script.sh from the command line. I don't know why there is a difference, given that my shebang clearly states that I want bash to run my script, other than the fact that the which command is a built-in in zsh and a program in bash. (FYI: the shebang directory is where my bash program lives, which may not be the same as yours; I installed a new version using Homebrew.)
#!/usr/local/bin/bash
if [ "$(which ls)" ]; then
echo "ls command found"
else
echo "ls command not found"
fi
if [ "$(which foo)" ]; then
echo "foo command found"
else
echo "foo command not found"
I run this script with source ./test_script.sh from both zsh and bash.
Output in zsh:
ls command found
foo command found
Output in bash:
ls command found
foo command not found
My understanding is that, by default, test or [ ] (which are the same thing) evaluates a string as true if it's not empty/null. To illustrate:
zsh:
$ which foo
foo not found
bash:
$ which foo
$
Moreover if I redirect standard error in zsh like:
$ which foo 2> /dev/null
foo not found
zsh still seems to send foo not found to standard output, which is why (I am guessing) my test case passed for both under zsh: the expansion of "$(which xxx)" returned a string in both cases (e.g. /some/directory and foo not found). Does zsh ALWAYS return a string?
Lastly, if I remove the double quotes (e.g. $(which xxx)), zsh gives me an error. Here is the output:
ls command found
test_script.sh:27: condition expected: not
I am guessing zsh wanted me to use [ ! "$(which xxx)" ], but I don't understand why. It never gave that error when running in bash (and isn't this supposed to run in bash anyway?!).
Why isn't my script using bash? Why is something as trivial as this not working? I understand how to make it work in both using the -e option, but I simply want to understand why this is all happening. It's driving me bonkers.
There are two separate problems here.
First, the proper command to use is type, not which. Like you note, the command which is a zsh built-in, whereas in Bash, it will execute whatever which command happens to be on your system. There are many variants with different behaviors, which is why POSIX opted to introduce a replacement instead of trying to prescribe a particular behavior for which -- then there would be yet one more possible behavior, and no way to easily root out all the other legacy behaviors. (One early common problem was with a which command which would examine the csh environment, even if you actually used a different shell.)
Secondly, examining a command's string output is a serious antipattern, because strings differ between locales ("not found" vs. "nicht gefunden" vs. "ei löytynyt" vs. etc etc) and program versions -- the proper solution is to examine the command's exit code.
if type ls >/dev/null 2>&1; then
echo "ls command found"
else
echo "ls command not found"
fi
if type foo >/dev/null 2>&1; then
echo "foo command found"
else
echo "foo command not found"
fi
(A related antipattern is to examine $? explicitly. There is very rarely any need to do this, as it is done naturally and transparently by the shell's flow control statements, like if and while.)
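For example, instead of inspecting $? by hand:
# antipattern: checking $? explicitly
type foo >/dev/null 2>&1
if [ $? -eq 0 ]; then
echo "foo command found"
fi
# better: let if test the command's exit status directly
if type foo >/dev/null 2>&1; then
echo "foo command found"
fi
Both behave the same; the second form is shorter, and nothing can sneak in between the command and the status check.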
Regarding quoting, the shell performs whitespace tokenization and wildcard expansion on unquoted values, so if $string is command not found, the expression
[ $string ]
without quotes around the value evaluates to
[ command not found ]
which looks to the shell like the string "command" followed by some cruft which isn't syntactically valid.
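The same failure is easy to reproduce in bash with any multi-word string (a throwaway example):
string="command not found"
[ $string ] && echo yes     # bash: [: too many arguments
[ "$string" ] && echo yes   # prints yes: one non-empty string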
Lastly, as we uncovered in the chat session (linked from comments) the OP was confused about the precise meaning of source, and ended up running a Bash script in a separate process instead. (./test-script instead of source ./test-script). For the record, when you source a file, you cause your current shell to read and execute it; in this setting, the script's shebang line is simply a comment, and is completely ignored by the shell.
I often perform configuration changes using single-line commands on Mac OS, Linux, or even Windows, and I want to log them easily in a file so I can replay them if I have to reconfigure the machine again.
Please note that I want to do this only for some commands, so the shell history is of no use.
Ideally I would like to be able to use some kind of shell extension that logs some of the commands.
As you know, if you start your bash command with a space, the command is not logged in the history.
What if I could have another prefix that does the opposite? Is there something out there that can be used for this? A solution for bash would be more than enough, and an already existing solution would be much better than writing a new one.
You could do your logging in PROMPT_COMMAND, extracting the specific commands from shell history and writing them to a file.
Something like:
log () {
last_command="$(history -p \!\!)"
if [[ $last_command == "  "* ]] # save commands starting with *two* spaces
then
printf "%s\n" "$last_command" >> ~/special.log
fi
}
PROMPT_COMMAND="log; $PROMPT_COMMAND"
This has problems:
PROMPT_COMMAND is run each time the prompt is printed. Just pressing Enter multiple times could cause a command to be logged multiple times.
Marking with two spaces would, of course, require you to remove ignorespace or ignoreboth from HISTCONTROL so that commands starting with spaces are logged at all.
AFAICT, history is updated when the next command is read, so a command is logged only after the next command returns to the prompt, since that's when the correct history is available in PROMPT_COMMAND.
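One possible hedge against the duplicate logging (an untested sketch; _log_last is a made-up variable name) is to remember the history number of the last entry you processed and skip it on repeated prompts:
log () {
local num last_command
num=$(history 1 | awk '{print $1}')        # history number of the latest entry
[[ $num == "${_log_last:-}" ]] && return   # same entry: just a repeated prompt
_log_last=$num
last_command="$(history -p \!\!)"
if [[ $last_command == "  "* ]]
then
printf "%s\n" "$last_command" >> ~/special.log
fi
}
The timing quirk from the last point still applies; this only stops the same history entry from being logged twice.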
All this would be easier in zsh, with a preexec hook:
preexec () {
if [[ $1 == "  "* ]]
then
printf "%s\n" "$1" >> ~/special.log
fi
}
The preexec function automatically gets the command as the first argument if history is enabled, saving us a great deal of trouble. It runs after the command has been read but before it begins execution, so the timing is perfect. From the documentation:
preexec
Executed just after a command has been read and is about to be
executed. If the history mechanism is active (regardless of whether
the line was discarded from the history buffer), the string that the
user typed is passed as the first argument, otherwise it is an empty
string. The actual command that will be executed (including expanded
aliases) is passed in two different forms: the second argument is a
single-line, size-limited version of the command (with things like
function bodies elided); the third argument contains the full text
that is being executed.
$   ls
$   echo foo | echo bar
bar
$ cat ~/special.log
  ls
  echo foo | echo bar
A function in .bashrc can be used like a prefix:
log_this_command () {
echo "$#" >> ~/a_log_file # log the command to file
"$#" # and run the command itself
}
Caveat: this only logs expanded arguments, rather than the raw input.
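For example, with the function above (a hypothetical session):
$ log_this_command touch "file with spaces"
$ cat ~/a_log_file
touch file with spaces
Replaying that logged line would create three files instead of one, because the quoting was lost in the expansion.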
Define a function with the same name as the command you want to wrap:
function screencapture { echo "used params: $@"; command screencapture "$@"; }
Or, appending to a log file:
function screencapture { echo "$(date) screencapture $@" >> ~/log.txt; command screencapture "$@"; }
As one runs the screencapture command, a log entry is created and the command executes uninterfered.
You could automate creating these functions, if the list of commands to wrap is... all of them.
Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously, updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not in the second. Thus, I cannot do the update by a simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
echo "$#" >> /tmp/cmd.log
"$#"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
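If you do want a single log entry for a whole && chain, one hedged workaround (logeval is a made-up name, and the usual eval caveats apply) is to pass the entire command as one string:
logeval() {
echo "$1" >> /tmp/cmd.log   # log the string exactly as given
eval "$1"                   # then evaluate it, && and all
}
logeval 'cd "$NEW_PLACE" && command.py --flag "$FOO"'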
Are you looking for set -x (or bash -x)? This writes every command to standard error before it is executed.
1. Use script and everything will be archived.
2. Use -x for tracing your script, e.g. run it as bash -x script_name args...
3. Use set -x in your current bash (your commands will be echoed with substituted globs and variables).
4. Combine 2 and 3 with 1 (see the sketch below).
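For instance, items 1 and 2 can be combined like this (assuming the util-linux script, which accepts -c; BSD/macOS flags differ):
script -c 'bash -x ./myscript.sh arg1 arg2' session.log
# session.log now holds the trace (every command, with globs and
# variables substituted) along with the script's normal output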
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
# write a function definition into saved.txt:
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
# saved.txt now contains:
#   fun123 () {
#   echo something important
#   }
. saved.txt  # source the file to define fun123
fun123       # run it; prints "something important"
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
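For instance, a common pattern (the file names here are only examples):
# build one rm invocation (or a few batches) out of however many
# files find produces, instead of looping and quoting by hand
find . -name '*.tmp' -print0 | xargs -0 rm --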
I want to inject a transparent wrapper command around each shell command in a makefile, something like the time shell command. (However, not the time command; this is a completely different command.)
Is there a way to specify some sort of wrapper or decorator for each shell command that gmake will issue?
Kind of. You can tell make to use a different shell.
SHELL = myshell
where myshell is a wrapper like
#!/bin/sh
time /bin/sh "$@"
However, the usual way to do that is to prefix a variable to all command calls. While I can't see any show-stopper for the SHELL approach, the prefix approach has the advantage that it's more flexible (you can specify different prefixes for different commands, and override prefix values on the command line), and could be visibly faster.
# Set Q=@ to not display command names
TIME = time
foo:
$(Q)$(TIME) foo_compiler
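Because TIME is an ordinary make variable, the wrapper can also be changed per invocation without editing the makefile, e.g.:
make TIME='strace -f -o trace.out'   # wrap every command in strace instead
make TIME=                           # run with no wrapper at all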
And here's a complete, working example of a shell wrapper:
#!/bin/bash
RESULTZ=/home/rbroger1/repos/knl/results
if [ "$1" == "-c" ] ; then
shift
fi
strace -f -o `mktemp $RESULTZ/result_XXXXXXX` -e trace=open,stat64,execve,exit_group,chdir /bin/sh -c "$@" | awk '{if (match($0, "Process PID=[0-9]+ runs in (64|32) bit") == 0) {print $0}}'
# EOF
I don't think there is a way to do what you want within GNUMake itself.
I have done things like modify the PATH env variable in the Makefile so that a directory containing my script, symlinked under the names of all the binaries I wanted wrapped, was found before the actual binaries. The script would then look at how it was called and exec the actual binary with the wrapping command, i.e.
exec time "$0" "$@"
These days I usually just update the targets in the Makefile itself. Keeping all your modifications to one file is usually better IMO than managing a directory of links.
Update
I defer to Gilles' answer. It's a better answer than mine.
The program that GNU make(1) uses to run commands is specified by the SHELL make variable. It will run each command as
$SHELL -c <command>
You cannot get make to omit the -c, since that is required for most shells. -c is passed as the first argument ($1), and <command> is passed as a single argument string in the second argument ($2).
You can write your own shell wrapper that prepends the command that you want, taking into account the -c:
#!/bin/sh
eval time "$2"
That will cause time to be run in front of each command. You need eval since $2 will often not be a single command and can contain all sorts of shell metacharacters that need to be expanded or processed.
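Hooked up to make, that might look like this (the wrapper path is an example):
SHELL = /usr/local/bin/timeshell
foo:
foo_compiler
With that setting, make runs the recipe line as /usr/local/bin/timeshell -c 'foo_compiler', and the wrapper above prepends time before handing the command string to /bin/sh.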