Watch and wc not yielding any results - bash

I am trying to continuously display the number of entries in the current folder using watch, but the command below does not work. What am I doing wrong? I use the zsh shell.
$ watch ls -a | wc -l

The shell parses | as a pipe. So when the shell sees:
watch ls -a | wc -l
it parses it as two commands, with the first command's output piped to the second:
( watch ls -a ) | ( wc -l )
It runs the command watch with two arguments, ls and -a, and the command wc with the single argument -l. Because watch ls -a never ends and wc -l only prints once its input ends, you don't see anything printed: wc -l waits until all input lines have arrived, which never happens.
Because watch internally runs its command through a shell, you can quote the whole pipeline:
watch 'ls -a | wc -l'
This runs a single command watch with one argument ls -a | wc -l. watch internally spawns a shell and passes the string ls -a | wc -l to it. Then this internal shell spawns two new processes ls -a and wc -l with input/output connected.
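Roughly, the effect is something like the loop below. This is only a sketch of the behavior, not watch's actual source; the sh -c call and the 2-second interval are assumptions about typical defaults:
while true; do
    clear
    sh -c 'ls -a | wc -l'   # the whole quoted string goes to a shell, which sets up the pipe itself
    sleep 2                 # watch refreshes every 2 seconds by default
done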

Related

How to find the number of instances of current script running in bash?

I have the code below to find the number of instances of the current script running with the same arg1. But it looks like the script creates a subshell to execute this command, and that subshell also shows up in the output. What would be a better approach to find the number of running instances of the script?
$cat test.sh
#!/bin/bash
num_inst=`ps -ef | grep $0 | grep $1 | wc -l`
echo $num_inst
$ps aux | grep test.sh | grep arg1 | grep -v grep | wc -l
0
$./test.sh arg1 arg2
3
$
I am looking for a solution that matches all running instances of ./test.sh arg1 arg2, not ones like ./test.sh arg10 arg20.
The reason this creates a subshell is that there's a pipeline inside the command substitution. If you run ps -ef alone in a command substitution, and then separately process the output from that, you can avoid this problem:
#!/bin/bash
all_processes=$(ps -ef)
num_inst=$(echo "$all_processes" | grep "$0" | grep -c "$1")
echo "$num_inst"
I also did a bit of cleanup on the script: double-quoted all variable references to avoid weird parsing, used $() instead of backticks, and replaced grep ... | wc -l with grep -c.
You might also replace the echo "$all_processes" | ... with ... <<<"$all_processes" and maybe the two greps with a single grep -c "$0 $1":
...
num_inst=$(grep -c "$0 $1" <<<"$all_processes")
...
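Note that grep -c "$0 $1" still counts ./test.sh arg10, since arg1 is a prefix of arg10. A sketch of one way to require the argument to end there (this treats $0 and $1 as regular-expression text, and assumes the argument is either followed by a space or ends the line):
num_inst=$(grep -cE "$0 $1( |\$)" <<<"$all_processes")   # arg1 must be followed by a space or end of line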
Modify your script like this:
#!/bin/bash
ps -ef | grep $0 | wc -l
There is no need to store the value in a variable; the result is printed to standard output anyway.
Now why do you get 3?
When you run a command within backticks (by the way, you should use the syntax num_inst=$( COMMAND ) rather than backticks), it creates a new subshell to run COMMAND, then assigns the stdout text to the variable. So if you remove the use of $(), you will get your expected value of 2.
To convince yourself of that, remove the | wc -l and you will see that num_inst lists 3 processes, not 2. The third one exists only for the duration of COMMAND.
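A small sketch to see the difference side by side; in my understanding the first count should come out one higher than the second, because the pipeline inside the substitution leaves a forked copy of the script alive while ps runs:
#!/bin/bash
with_pipeline=$(ps -ef | grep "$0" | grep -v grep | wc -l)      # pipeline inside the substitution: the forked copy is counted too
snapshot=$(ps -ef)                                              # single command: no lingering copy of the script
without_pipeline=$(grep "$0" <<<"$snapshot" | grep -v grep | wc -l)
echo "with pipeline: $with_pipeline  without: $without_pipeline"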

Using pidof piped to wc -l in Bash does not return the expected 0 value

Scenario
I'm trying to add the following check before the main process in my shell script executes, to make sure that the script runs only once at a time, by counting the PIDs of scripts with the same name:
this_bash=$(basename $0)
this_pid=${$}
is_running="$(pidof -x $this_bash -o $this_pid | wc -l )"
I found it always returns 1, even when there is no other script with the same name running.
Investigation
To investigate further, I tried this:
z=$(pidof -x $this_bash -o $this_pid)
echo "[$z]"
echo "[$(pidof -x $this_bash -o $this_pid)]"
echo "[$($z | wc -l )]"
echo "[$(pidof -x $this_bash -o $this_pid | wc -l )]"
The square brackets are there to make sure there are no hidden whitespace characters.
The result was:
[]
[]
[0]
[1]
Question
I don't understand why storing the pidof output in a variable returns the expected result, while piping the commands directly does not.
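This looks like the same effect described in the questions above: the pipeline inside the command substitution forks an extra copy of the script, and that copy has a PID different from $this_pid, so -o does not exclude it and pidof -x counts it. A sketch of the workaround used earlier, running pidof on its own in the substitution (as in the bracketed tests above) and counting afterwards:
this_bash=$(basename "$0")
this_pid=$$
other_pids=$(pidof -x "$this_bash" -o "$this_pid")   # pidof alone in the substitution, like the [] test above
is_running=$(wc -w <<<"$other_pids")                 # word count of the PID list: 0 when nothing else is running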

Different output when running command in bash script vs terminal

When I run the following code in a bash script I receive an output of 2
#!/bin/bash
HIPPO=$(ps -a | grep hippo | wc -l)
echo "$HIPPO"
However, when I run the command ps -a | grep hippo | wc -l straight from a command prompt, I get an output of 0.
Reading the documentation on ps, particularly the -a flag, I don't understand why the output is different.
What is your script called? If its name contains hippo, it will show up in your ps output.
https://superuser.com/questions/935374/difference-between-and-in-shell-script
When you do the command substitution, the command gets run once, as described in the link above. So I am assuming the echo is picking up a zombie process that ran that command.
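If the goal is simply a reliable count, a common trick is to keep the grep itself (and similar artifacts of the pipeline) out of the match. A sketch; note that pgrep matches process names rather than full command lines by default:
HIPPO=$(ps -a | grep -c '[h]ippo')   # '[h]ippo' matches "hippo" but not grep's own "[h]ippo" command line
echo "$HIPPO"
# or count matching process names directly:
HIPPO=$(pgrep -c hippo)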

Bash - Two processes for one script

I have a shell script, named test.sh :
#!/bin/bash
echo "start"
ps xc | grep test.sh | grep -v grep | wc -l
vartest=`ps xc | grep test.sh | grep -v grep | wc -l `
echo $vartest
echo "end"
The output result is :
start
1
2
end
So my question is: why are there two test.sh processes running when I call ps using backticks (the same happens with $()), but not when I call ps directly?
How can I get the desired result (1)?
When you start a subshell, as with the backticks, bash forks itself and then executes the command you wanted to run. You then also run a pipeline, which causes all of those commands to run in their own subshells, so you end up with the "extra" copy of the script: it waits for the pipeline to finish so it can gather up the output and return it to the original script.
We'll do a little experiment using (...) to run processes explicitly in subshells, and using the pgrep command, which does ps | grep "name" | grep -v grep for us, showing just the processes that match our string:
echo "Start"
(pgrep test.sh)
(pgrep test.sh) | wc -l
(pgrep test.sh | wc -l)
echo "end"
which on a run for me produces the output:
Start
30885
1
2
end
So we can see that running pgrep test.sh in a subshell only finds the single instance of test.sh, even when that subshell is part of a pipeline itself. However, if the subshell contains a pipeline, then we get the extra forked copy of the script waiting for the pipeline to finish.
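Following the same reasoning, one way to get the desired count of 1 in the original script is to keep the pipeline out of the command substitution, as in the ps -ef answer earlier. A sketch; the ps xc output format varies between systems, but per the question's own output it lists test.sh:
#!/bin/bash
echo "start"
snapshot=$(ps xc)                         # single command in the substitution: no lingering copy of test.sh
vartest=$(grep -c test.sh <<<"$snapshot")
echo "$vartest"                           # now prints 1
echo "end"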

bash output redirect problem

I want to count the number of lines output by a command in a bash script, i.e.
COUNT=$(ls | wc -l)
But I also want the script to print the original output from ls. How can I do this? (My actual command is not ls and it has side effects, so I can't run it twice.)
The tee(1) utility may be helpful:
$ ls | tee /dev/tty | wc -l
CHANGES
qpi.doc
qpi.lib
qpi.s
4
info coreutils "tee invocation" includes the following example, which might be more instructive of tee(1)'s power:
wget -O - http://example.com/dvd.iso \
| tee >(sha1sum > dvd.sha1) \
>(md5sum > dvd.md5) \
> dvd.iso
That downloads the file once, sends the output through two child processes (started via bash(1) process substitution), and also to tee(1)'s stdout, which is redirected to a file.
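Applied to the original question, a sketch that both shows the command's output and captures the line count (assuming the script runs attached to a terminal, so /dev/tty is available):
COUNT=$(ls | tee /dev/tty | wc -l)   # the listing goes to the terminal, only the count is captured
echo "line count: $COUNT"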
ls | tee tmpfile | first command
cat tmpfile | second command
tee is a good way to do that, but you can do something simpler:
ls > __tmpfile
cat __tmpfile | wc -l
cat __tmpfile
rm __tmpfile
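A slightly safer variant of the same idea, using mktemp so the temporary file gets a unique name and is removed afterwards (a sketch):
tmpfile=$(mktemp)      # unique temporary file instead of a fixed name
ls > "$tmpfile"
wc -l < "$tmpfile"     # the line count
cat "$tmpfile"         # the original output
rm -f "$tmpfile"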
