I know there are some Bash variables that are set after a command executes: $? holds the exit status of the process, $BASH_COMMAND holds the command line being executed, $1, $2, etc. hold the call arguments, and so on.
A simple trick with trap (taken from this question) allows me to store the last executed command:
alariva@trinsic:~/test$ trap 'previous_command=$this_command; this_command=$BASH_COMMAND' DEBUG
alariva@trinsic:~/test$ ls -l #I want to read this comment
total 0
-rw-rw-r-- 1 alariva alariva 0 Aug 23 01:30 readme.md
alariva@trinsic:~/test$ echo $previous_command
ls -l
alariva@trinsic:~/test$ echo $?
0
I need to get the comment that may come after the last command, but I'm not aware of any variable that would store it. Is there any way to read it?
I would like to get a similar behavior to this:
alariva@trinsic:~/test$ ls -l #I want this comment
readme.md
alariva@trinsic:~/test$ echo $BASH_COMMENT
I want this comment
alariva@trinsic:~/test$
Of course, the current situation is that I cannot retrieve any info from this:
alariva@trinsic:~/test$ echo $BASH_COMMENT
alariva@trinsic:~/test$
I'm also aware that comments may be stripped out entirely once Bash interprets the call, so in that case I wonder whether there is a workaround (a hook or something) to read them before they actually reach Bash.
So far, this is what I achieved:
alariva@trinsic:~/test$ ls -l #tosto
total 0
alariva@trinsic:~/test$ LAST=`fc -l | cut -c 6- | tail -n2 | head -n1`
alariva@trinsic:~/test$ echo "${LAST##*\#}"
tosto
alariva@trinsic:~/test$
I'm not sure whether this is the best possible solution or whether it would work in all scenarios, but it looks like the behavior I want to achieve. Is there any built-in or alternative way to get this?
The closest solution I have come up with so far is the following.
alariva@trinsic:~/test$ ls -l #tosto
total 0
alariva@trinsic:~/test$ LAST=`fc -l | cut -c 6- | tail -n2 | head -n1`
alariva@trinsic:~/test$ echo "${LAST##*\#}"
tosto
alariva@trinsic:~/test$
While that works for most of the scenarios I use, it will still fail to capture the full comment in scenarios where more than one # is found:
alariva@trinsic:~/test$ ls -l #tosto #only partial
total 0
alariva@trinsic:~/test$ LAST=`fc -l | cut -c 6- | tail -n2 | head -n1`
alariva@trinsic:~/test$ echo "${LAST##*\#}"
only partial
alariva@trinsic:~/test$
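A small tweak may capture the full comment (a sketch; note the single # in the expansion): ${LAST#*\#} removes the shortest matching prefix, i.e. everything up to the first #, so the rest of the line survives intact:
alariva@trinsic:~/test$ ls -l #tosto #only partial
total 0
alariva@trinsic:~/test$ LAST=`fc -l | cut -c 6- | tail -n2 | head -n1`
alariva@trinsic:~/test$ echo "${LAST#*\#}"
tosto #only partial
alariva@trinsic:~/test$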
Improvements on this answer are welcome.
Related
I have a bash/zsh command with multiple pipes (|) that fails when using set -o pipefail. For simplicity, assume the command is
set -o pipefail; echo "123456" | head -c2 | grep 5 | cat
How do I quickly find out which command is the first to fail and why? I know I can check the exit code, but that doesn't show which part of the pipeline failed first.
Is there something simpler than the rather verbose check of building up the pipeline one by one and checking for the first failing exit code?
Edit: I removed the contrived code example I made up, as it confused people about my purpose in asking. The actual command that prompted this question was:
zstdcat metadata.tsv.zst | \
tsv-summarize -H --group-by Nextclade_pango --count | \
tsv-filter -H --ge 'count:2' | \
tsv-select -H -f1 >open_lineages.txt
In bash, use echo "${PIPESTATUS[@]}" right after the command to get the exit status of each component as a space-separated list:
#!/bin/bash
$ set -o pipefail; echo "123456" | head -c2 | grep 5 | cat
$ echo ${PIPESTATUS[@]}
0 0 1 0
Beware, zsh users: you need to use the lowercase pipestatus instead:
#!/bin/zsh
$ set -o pipefail; echo "123456" | head -c2 | grep 5 | cat
$ echo $pipestatus
0 0 1 0
In fish you can also simply use echo $pipestatus for the same output.
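To have the first failing stage reported automatically in bash, a minimal sketch (the loop is my own helper, not a built-in; PIPESTATUS is overwritten by the very next command, so copy it immediately):
#!/bin/bash
set -o pipefail
echo "123456" | head -c2 | grep 5 | cat
status=("${PIPESTATUS[@]}")    # copy right away; any later command clobbers PIPESTATUS
for i in "${!status[@]}"; do
    if [ "${status[$i]}" -ne 0 ]; then
        echo "stage $((i + 1)) of the pipeline failed with exit status ${status[$i]}"
        break
    fi
done
For the example above, this should print that stage 3 (the grep) failed with exit status 1.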
${PIPESTATUS[@]} right after the pipeline is the answer you were looking for. However, I want to offer some advice on the first example. It's a good habit to anticipate errors, so instead of testing after the fact you should check the path before anything else.
if [ -d "/nonexistent_directory" ]; then
# here pipe shouldn't fail to grep
# ...unless there's something wrong with "foo"
# ...but "grep" may be a failure if the pattern isn't found
if ! ls -1 "/nonexistent_directory" | grep 'foo' ; then
echo "The command 'grep foo' failed."
# else
# echo "The pipeline succeeded."
fi
else
echo "The command 'ls /nonexistent_directory' failed."
fi
Whenever possible, avoid grepping ls output in a script; that's fragile...
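For instance, a glob avoids parsing ls output entirely (a sketch reusing the directory and pattern from the example above):
shopt -s nullglob                        # a non-matching glob expands to nothing
matches=(/nonexistent_directory/*foo*)
if [ "${#matches[@]}" -gt 0 ]; then
    printf '%s\n' "${matches[@]}"        # one match per line, no ls involved
else
    echo "no file matching 'foo' in /nonexistent_directory"
fi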
I have a small bash script (check_status) with which I am trying to determine whether a process is running or not.
#!/bin/bash
# check argument
if ["$1" == ""];
then
    echo "Invalid argument"
    exit 3
fi
PN=$(ps -ef | grep $1 | wc -l)
echo "process is $1: executing $PN"
if [ $PN -gt 1 ];
then
    status=OK
    message=UP
    exit=0
else
    status=CRITICAL
    message=DOWN
    exit=2
fi
echo "$status - $1 is $message"
exit $exit
However, when I run this from the shell with sh check_status xyz I get this:
check_status: 3: check_status: [xyz: not found
process is xyz: executing 3
OK - xyz is UP
Now, my first problem is the check_status: 3: check_status: [xyz: not found error. I don't know why it's showing up.
Next, there is no process xyz running on my server. So, as per my understanding, I am running ps -ef | grep xyz | wc -l in the shell, which should echo 1 if no process is running. But I am getting 3.
How do I fix this?
Update
I changed if ["$1" == ""]; to if [ "$1" = "" ] Now I am not getting the error. But still my PN=$(ps -ef | grep $1 | wc -l) is returning 3.
I then updated PN=$(ps -ef | grep $1 | wc -l) to PN=$(ps -ef | grep $1), which gave me the following response:
admin 14674 4570 0 12:03 pts/2 00:00:00 sh check_status xyz
admin 14675 14674 0 12:03 pts/2 00:00:00 sh check_status xyz
admin 14677 14675 0 12:03 pts/2 00:00:00 grep xyz
One sh check_status xyz and one grep xyz make sense to me. But any idea why I see two of them?
(1) As mentioned elsewhere, you'll need spaces around "[" and "]".
(2) If your ps supports the -c option, you should consider using it. Otherwise, if you use ps, you will need to parse the output in some way. (You might want to insert "| tee /dev/tty" to see what your ps command is producing.) But is grep (or pgrep) really what you want here? The messages your script is producing suggest otherwise.
(3) If, for example, you want an exact match of the basename, consider the following (which is broken down into separate steps so you can more easily adapt it to your purpose):
ps -c | awk '{print $4}' | grep "^$x\$"
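If pgrep is available, it does the exact-name match in one step (a sketch; the -x flag requires the whole process name to match, avoiding partial-match false positives):
if pgrep -x "$1" > /dev/null; then
    echo "$1 is running"
fi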
Other than the spaces around [ that others have mentioned, you should change this:
PN=$(ps -ef | grep $1 | wc -l)
to this:
PN=$(pidof $1 | wc -w)
That will get you a count of running processes that match the name you specified.
The reason you're getting a greater count than expected from your original code is that the grep command itself also shows up in the ps output and adds one to the count, and grep will also match any other process whose command line happens to contain the same characters, which is why the script's own sh check_status xyz lines are counted too. (The second sh check_status xyz entry is the subshell that the shell forks to run the $( ... ) command substitution; it carries the same command line as its parent.) Using pidof eliminates both of these factors.
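For illustration, a minimal sketch of the whole check rebuilt around pidof (note the threshold drops from -gt 1 to -gt 0, since pidof counts neither a grep nor the script itself):
#!/bin/bash
if [ -z "$1" ]; then
    echo "Invalid argument"
    exit 3
fi
PN=$(pidof "$1" | wc -w)    # pidof prints the matching PIDs; wc -w counts them
if [ "$PN" -gt 0 ]; then
    echo "OK - $1 is UP"
    exit 0
else
    echo "CRITICAL - $1 is DOWN"
    exit 2
fi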
I couldn't find this on SO. I ran the following command in the terminal:
>> grep -Rl "curl" ./
and this displays the list of files where the keyword curl occurs. I want to count the number of files. The first way I can think of is to count the number of lines in the terminal output. How can I do that?
Pipe the result to wc using the -l (line count) switch:
grep -Rl "curl" ./ | wc -l
Putting the comment of EaterOfCode here as an answer.
grep itself also has the -c flag which just returns the count
So the command and output could look like this.
$ grep -Rl "curl" ./ -c
24
EDIT:
Although this answer might be shorter and thus might seem better than the accepted answer (the one using wc), I no longer agree with it. I feel that remembering you can count lines by piping to wc -l is much more useful, since you can use it with programs other than grep as well.
Piping to wc could be better IF the last line ends with a newline (I know that in this case it will).
However, if the last line does not end with a newline, wc -l gives back a false result.
For example:
$ echo "asd" | wc -l
Will return 1 and
$ echo -n "asd" | wc -l
Will return 0
So what I often use is grep <anything> -c
$ echo "asd" | grep "^.*$" -c
1
$ echo -n "asd" | grep "^.*$" -c
1
This is closer to reality than what wc -l will return.
"abcd4yyyy" | grep 4 -c gives the count as 1
When I write
ls | head -1
the output is
file.txt
When I write
ls | head -1 > output.txt
or
echo `ls | head -1` > output.txt
the file output.txt contains
^[[H^[[2Jfile.txt
This causes me trouble because I need to use the output of head -1 as an argument to another command.
How can I achieve this?
Possibly your ls is aliased to something like ls --color=always. Try /bin/ls | head -1 > output.txt
These are probably terminal escape codes for coloring. Your ls setup seems to be broken; normally, coloring should only be done when connected directly to a terminal.
ls --color=never | head -1
should fix the issue.
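To see what your shell actually runs for ls, and to bypass an alias without hard-coding a path, a sketch (the alias shown in the output is hypothetical):
$ type ls                       # reports whether ls is an alias, function, or binary
ls is aliased to `clear; ls --color=always'
$ command ls | head -1 > output.txt    # 'command' skips aliases and shell functions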
I want to count the number of lines output from a command in a bash script. i.e.
COUNT=$(ls | wc -l)
But I also want the script to show the original output from ls. How can this be done? (My actual command is not ls and it has side effects, so I can't run it twice.)
The tee(1) utility may be helpful:
$ ls | tee /dev/tty | wc -l
CHANGES
qpi.doc
qpi.lib
qpi.s
4
info coreutils "tee invocation" includes this following example, which might be more instructive of tee(1)'s power:
wget -O - http://example.com/dvd.iso \
  | tee >(sha1sum > dvd.sha1) \
        >(md5sum > dvd.md5) \
  > dvd.iso
That downloads the file once, sends the output through two child processes (started via bash(1) process substitution) as well as to tee(1)'s stdout, which is redirected to a file.
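If the goal is to store the count in a variable while still showing the listing on the terminal (as in the question), a sketch combining tee /dev/tty with command substitution:
COUNT=$(ls | tee /dev/tty | wc -l)    # listing goes to the terminal, count into COUNT
echo "count: $COUNT"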
ls | tee tmpfile | first command
cat tmpfile | second command
Tee is a good way to do that, but you can do something simpler:
ls > __tmpfile
cat __tmpfile | wc -l
cat __tmpfile
rm __tmpfile
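A slightly safer variant of the same idea uses mktemp, so the temporary file gets a unique name and doesn't collide with an existing __tmpfile (a sketch):
tmpfile=$(mktemp)
ls > "$tmpfile"
wc -l < "$tmpfile"    # reading from stdin keeps the filename out of wc's output
cat "$tmpfile"
rm -f "$tmpfile"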