Using output of head -1 - bash

When I write
ls | head -1
the output is
file.txt
When I write
ls | head -1 > output.txt or
echo `ls | head -1` > output.txt
the file output.txt contains
^[[H^[[2Jfile.txt
This causes me trouble because I need to use the output of head -1 as an argument to another command.
How can I achieve this?

Possibly your ls is aliased to something like ls --color=always. Try /bin/ls | head -1 > output.txt
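You can check with type ls; if ls is aliased, the output will show the definition (something like: ls is aliased to `ls --color=always'), and a leading backslash also bypasses the alias:
$ \ls | head -1 > output.txt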

These are probably terminal escape codes for coloring. Your ls setup seems to be broken; normally coloring should only happen when the output goes directly to a terminal.
ls --color=never | head -1
should fix the issue.
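Either way, since the goal is to use the result as an argument to another command, capturing it in a variable avoids the intermediate file entirely. A minimal sketch (some_command is a placeholder for whatever consumes the argument):
first=$(command ls --color=never | head -n 1)   # `command` skips any alias
some_command "$first"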

Related

how to open a file after writing to it using Bash (in one line)

Is there a way to open or cat a file that has just been created, e.g.
cat $(cat 1210.hc.vcf | head -n1 > ok.txt )
open $(cat 1210.hc.vcf | head -n1 > ok.txt )
You can use tee to write to stdout and the file.
head -n 1 1210.hc.vcf | tee ok.txt
What comes to my mind:
cat 1210.hc.vcf | head -n1 > ok.txt; cat !$
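Note that !$ is history expansion for the last word of the previous history entry, so it only works in an interactive shell, and it refers to the previous line, not to an earlier command on the same line. Entered as two commands it behaves as intended:
$ head -n1 1210.hc.vcf > ok.txt
$ cat !$
cat ok.txt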

Bash Script - tail to file

I have the following in a bash script file watcher.sh.
grep ERROR $ExampleLogFile > $ErrorLogFile
When I run this, it successfully copies the lines containing ERROR from ExampleLogFile to ErrorLogFile.
I need to make it so it continually monitors the ExampleLogFile for changes and writes those to the ErrorLogFile.
I was thinking of doing the following, but this doesn't work:
tail -f grep ERROR $ExampleLogFile > $ErrorLogFile
It does write some of the lines, but they're not the ones containing ERROR:
tail: grep: No such file or directory
tail: ERROR: No such file or directory
Any advice, please.
You can use tee command here.
tail -f $ExampleLogFile | grep --line-buffered ERROR | tee $ErrorLogFile
It will write to the file and print to stdout at the same time.
You need:
while :; do grep ERROR $ExampleLogFile > $ErrorLogFile; sleep 2; done
This should achieve what you want without needing the tail command.
If the log file is ever cleared, though, this will not work as you might expect, because each grep pass rewrites $ErrorLogFile with only the entries currently present in $ExampleLogFile.
You can arrange the tail/grep in a pipe
tail -f $ExampleLogFile | grep ERROR > $ErrorLogFile
Remember that this command will never exit by itself (tail will continue to look for additional data). You will have to arrange for some other exit condition (e.g., timeout, explicit kill, etc).
tail -f $ExampleLogFile | grep --line-buffered ERROR > $ErrorLogFile
or, to be paranoid about buffering:
stdbuf -oL tail -f $ExampleLogFile | stdbuf -oL grep --line-buffered ERROR > $ErrorLogFile
But most probably you want to include existing lines too. In that case:
tail -n +1 -f $ExampleLogFile | grep --line-buffered ERROR > $ErrorLogFile
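Put together, a minimal watcher.sh based on this approach could look like the following sketch (the log paths are placeholders; adjust them to your setup):
#!/bin/bash
# watcher.sh - continuously mirror ERROR lines into a second log file.
ExampleLogFile=/var/log/example.log         # placeholder path
ErrorLogFile=/var/log/example-errors.log    # placeholder path
# -n +1 replays the existing contents first, -f then follows new data;
# --line-buffered makes grep flush each matching line immediately.
tail -n +1 -f "$ExampleLogFile" | grep --line-buffered ERROR > "$ErrorLogFile"
As noted above, the script runs until it is killed.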

How to read the last executed command comment (after #) in bash?

I know there are some Bash variables that are assigned after executing a command. Some of them are $? to get the exit status of the last command, $BASH_COMMAND to get the command line being executed, $1, $2, etc. to retrieve the call arguments, and so on.
A simple trick with trap (taken from this question) would allow me to store the last executed command:
alariva@trinsic:~/test$ trap 'previous_command=$this_command; this_command=$BASH_COMMAND' DEBUG
alariva@trinsic:~/test$ ls -l #I want to read this comment
total 0
-rw-rw-r-- 1 alariva alariva 0 Aug 23 01:30 readme.md
alariva@trinsic:~/test$ echo $previous_command
ls -l
alariva@trinsic:~/test$ echo $?
0
I need to get the comment that may come after the last command, but I'm not aware of any variable that would store it. Is there any way to read it?
I would like to get a similar behavior to this:
alariva@trinsic:~/test$ ls -l #I want this comment
readme.md
alariva@trinsic:~/test$ echo $BASH_COMMENT
I want this comment
alariva@trinsic:~/test$
Of course, the current situation is that I cannot retrieve any info from this:
alariva@trinsic:~/test$ echo $BASH_COMMENT
alariva@trinsic:~/test$
I'm also aware that comments may be completely stripped out after Bash interprets the call, so in that case I wonder if there exists a workaround (like a hook or something) to read it before it actually reaches bash.
So far, this is what I achieved:
alariva@trinsic:~/test$ ls -l #tosto
total 0
alariva@trinsic:~/test$ LAST=`fc -l | cut -c 6- | tail -n2 | head -n1`
alariva@trinsic:~/test$ echo "${LAST##*\#}"
tosto
alariva@trinsic:~/test$
Not sure if this is the best possible solution or whether it'd work in all scenarios, but it looks like the behavior I want to achieve. Is there any built-in/alternative way to get this?
The closest solution I came up so far is the following.
alariva@trinsic:~/test$ ls -l #tosto
total 0
alariva@trinsic:~/test$ LAST=`fc -l | cut -c 6- | tail -n2 | head -n1`
alariva@trinsic:~/test$ echo "${LAST##*\#}"
tosto
alariva@trinsic:~/test$
While that will work for most of the scenarios I use, it will still fail to get the full comment in scenarios where more than one # is found:
alariva@trinsic:~/test$ ls -l #tosto #only partial
total 0
alariva@trinsic:~/test$ LAST=`fc -l | cut -c 6- | tail -n2 | head -n1`
alariva@trinsic:~/test$ echo "${LAST##*\#}"
only partial
alariva@trinsic:~/test$
Improvements on this answer are welcome.
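One possible refinement: use a single # in the parameter expansion, which removes the shortest leading match instead of the longest and therefore keeps everything after the first #:
alariva@trinsic:~/test$ ls -l #tosto #only partial
total 0
alariva@trinsic:~/test$ LAST=`fc -l | cut -c 6- | tail -n2 | head -n1`
alariva@trinsic:~/test$ echo "${LAST#*\#}"
tosto #only partial
This is still a heuristic, not a parser; it will misfire if the command itself contains a # inside a quoted string.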

count number of lines in terminal output

Couldn't find this on SO. I ran the following command in the terminal:
>> grep -Rl "curl" ./
and this displays the list of files in which the keyword curl occurs. I want to count the number of files. The first way I can think of is to count the number of lines in the terminal output. How can I do that?
Pipe the result to wc using the -l (line count) switch:
grep -Rl "curl" ./ | wc -l
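If you need the number in a script, capture it with command substitution:
count=$(grep -Rl "curl" ./ | wc -l)
echo "$count files contain curl"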
Putting the comment of EaterOfCode here as an answer.
grep itself also has the -c flag, which just returns the count.
So the command and output could look like this.
$ grep -Rl "curl" ./ -c
24
EDIT:
Although this answer might be shorter and thus might seem better than the accepted answer (the one using wc), I do not agree with it anymore. I feel that remembering you can count lines by piping to wc -l is much more useful, since you can use it with programs other than grep as well.
Piping to wc could be better IF the last line of the output ends with a newline (I know that in this case it will). However, if the last line does not end with a newline, wc -l gives back a false result.
For example:
$ echo "asd" | wc -l
Will return 1 and
$ echo -n "asd" | wc -l
Will return 0
So what I often use is grep <anything> -c
$ echo "asd" | grep "^.*$" -c
1
$ echo -n "asd" | grep "^.*$" -c
1
This is closer to reality than what wc -l will return.
"abcd4yyyy" | grep 4 -c gives the count as 1

bash output redirect prob

I want to count the number of lines output from a command in a bash script. i.e.
COUNT=ls | wc -l
But I also want the script to output the original output from ls. How can I do that? (My actual command is not ls and it has side effects, so I can't run it twice.)
The tee(1) utility may be helpful:
$ ls | tee /dev/tty | wc -l
CHANGES
qpi.doc
qpi.lib
qpi.s
4
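In a script you can capture the count at the same time, because tee writes the listing straight to the terminal while wc reads the pipe (this assumes a terminal is attached, since /dev/tty must exist):
COUNT=$(ls | tee /dev/tty | wc -l)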
info coreutils "tee invocation" includes the following example, which may better illustrate tee(1)'s power:
wget -O - http://example.com/dvd.iso \
  | tee >(sha1sum > dvd.sha1) \
        >(md5sum > dvd.md5) \
  > dvd.iso
That downloads the file once and sends the output through two child processes (started via bash(1) process substitution) as well as to tee(1)'s stdout, which is redirected to a file.
ls | tee tmpfile | first command
cat tmpfile | second command
Tee is a good way to do that, but you can do something simpler:
ls > __tmpfile
cat __tmpfile | wc -l
cat __tmpfile
rm __tmpfile
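If a temporary file is undesirable, capturing the output once in a variable also works, at the cost of holding it in memory (a sketch; assumes the output is ordinary text):
output=$(ls)
printf '%s\n' "$output"                     # reproduce the original output
COUNT=$(printf '%s\n' "$output" | wc -l)    # count its lines
Command substitution strips trailing newlines, which is why printf adds one back before counting.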
