Bash script sometimes not working [duplicate]

This question already has answers here:
How can I use a file in a command and redirect output to the same file without truncating it?
(14 answers)
Closed 5 years ago.
Okay, so I have a strange problem with the following piece of code:
who > tmp
cat tmp | awk '{print $1}' | sort | uniq > tmp
ps aux | grep -Fvf tmp
It is supposed to list the processes of all users who are not logged in at the moment. The problem is that it sometimes works and sometimes doesn't, and I have no idea what causes it. I can enter exactly the same commands and get two different results. I've narrowed the problem down to the > tmp redirect on the second line: it either saves the proper user list or wipes the file completely, and I have no idea why.
PS. I know it may not be the proper solution for the task, but it's what I came up with in the limited time I was given.

It's a timing issue: you're reading from and truncating the same file in the same pipeline. The shell truncates tmp for the final > tmp redirect at the same time as cat starts reading it, so what survives depends on which process wins the race.
The simple solution is to not use temp files at all:
ps aux | grep -Fvf <(who | awk '{print $1}' | sort -u)
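If your shell lacks process substitution (plain POSIX sh, for example), a sketch of the same idea that writes the temp file completely before anything reads it:
who | awk '{print $1}' | sort -u > tmp   # this redirect finishes before the next command starts
ps aux | grep -Fvf tmp
rm tmp
The two commands run sequentially, so tmp is complete and closed by the time grep opens it.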

Related

How can I delete empty lines from my output with grep? [duplicate]

This question already has answers here:
Remove empty lines in a text file via grep
(11 answers)
Closed 4 years ago.
Is there a way to remove empty lines with cat myfile | grep -w #something?
I'm looking for a simple way to remove empty lines from my output, like in the way presented above.
This really belongs on the Code Golf Stack Exchange, because it's not related to how anyone would ever write a script. However, you can do it like this:
cat myfile | grep -w '.*..*'
It's equivalent to the more canonical grep ., but adds explicit .*s on either side so that it always matches the complete line, thereby satisfying the word-boundary conditions imposed by -w.
You can pipe your output to awk to easily remove empty lines:
cat myfile | grep -w #something | awk NF
EDIT: so... you just want cat myfile | awk NF?
If you have to use grep, you can do grep -v '^[[:blank:]]*$' myfile (note that the pattern has to come before the file name).
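For instance, on a small inline sample (illustrative only):
printf 'a\n\n  \nb\n' | grep -v '^[[:blank:]]*$'
a
b
This drops both empty lines and lines containing only whitespace.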

How to create dynamic substring with awk [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
Let's say I have a file like the one below.
ABC_DEF_G-1_P-249_8.CSV
I want to cut it down to this:
ABC_DEF_G-1_P-249_
I use this awk command to do that:
ls -lrt | grep -i .CSV | tail -1 | awk -F ' ' '{print $8}' | cut -c 1-18
The question is: if the number 1 grows, how do I make the substring dynamic? For example:
ABC_DEF_G-1_P-249_
....
ABC_DEF_G-10_P-249_
ABC_DEF_G-11_P-249_
...
ABC_DEF_G-1000_P-249_
To display the file names of all .CSV files without everything after the last underscore, you can do this:
for fname in *.CSV; do echo "${fname%_*}_"; done
This removes the last underscore and everything that follows it (${fname%_*}), and then appends an underscore again. You can assign the result to another variable, for example.
For an example file list of
ABC_DEF_G-1_P-249_9.CSV
ABC_DEF_G-10_P-249_8.CSV
ABC_DEF_G-1000_P-249_4.CSV
ABC_DEF_G-11_P-249_7.CSV
ABC_DEF_G-11_P-249_7.txt
this results in
$ for fname in *.CSV; do echo "${fname%_*}_"; done
ABC_DEF_G-1_P-249_
ABC_DEF_G-10_P-249_
ABC_DEF_G-1000_P-249_
ABC_DEF_G-11_P-249_
You can do this with just ls and grep:
ls -1rt | grep -oP ".*(?=_\d{1,}\.CSV)"
If you are concerned about parsing the output of ls -1, as mentioned in the comments, you can use find as well:
find -type f -printf "%f\n" | grep -oP ".*(?=_\d{1,}\.CSV)"
Outputs:
ABC_DEF_G-1_P-249
ABC_DEF_G-1000_P-249
This assumes you want everything except the _number.CSV part; if it needs to be case insensitive, you can add the -i flag to grep. The \d{1,} allows the number between _ and .CSV to grow from one digit to many. Doing it this way, you also don't have to worry about the number 1 in your example increasing:
ABC_DEF_G-1_P-249
You should not be parsing ls. Perhaps you are looking for something like this:
base=$(printf "%s\n" * | grep -i .CSV | tail -1 | awk -F ' ' '{print $8}' | cut -c 1-18)
However, that's a useless use of grep you want to get rid of right there -- Awk does everything grep does, and everything tail does, too, and actually, everything cut does as well. The grep can also be avoided by using a better wildcard, though:
base=$(printf "%s\n" *.[Cc][Ss][Vv] | awk 'END { print substr($8, 1, 18) }')
In the shell itself, you can do much the same thing with no external processes at all. Proposing a suitable workaround would perhaps require a better understanding of what you are trying to accomplish, though.
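A minimal sketch of that pure-shell approach, assuming alphabetical glob order is good enough to pick the file you want (globs sort by name, not by modification time like ls -rt):
for f in *.[Cc][Ss][Vv]; do last=$f; done   # keep the glob's last match
base=${last%_*}_                            # strip the final _number.CSV, put the underscore back
echo "$base"
Because nothing is cut at a fixed column, this keeps working as the number grows.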

No Output When Using tail -f With Multiple Grep Commands In Mac OS X [duplicate]

This question already has an answer here:
Read from a endless pipe bash [duplicate]
(1 answer)
Closed 7 years ago.
I'm trying to execute the following command from the Mac OS X terminal:
$ tail -f FILE_PATH | grep "DESIRED_STRING" | grep -v "EXCLUDED_STRING"
Unfortunately I do not get any results in return.
But, when using cat instead of tail -f:
$ cat FILE_PATH | grep "DESIRED_STRING" | grep -v "EXCLUDED_STRING"
I get the expected result. Unfortunately, this workaround is no good for me, as I need to tail the file in real time.
grep block-buffers its output when it is writing to a pipe rather than a terminal. Since tail -f never completes, the first grep never exits, so its output sits in its buffer until enough matching lines have accumulated to fill it. With cat, the command eventually completes, allowing both greps to complete and flush whatever output they have accumulated (whether or not the buffer was full).
Adding --line-buffered to the grep commands changes how grep buffers its output, so that you see each line as soon as it is complete.
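A sketch of the fixed pipeline; strictly speaking, only a grep writing into a pipe needs the flag, since the last one writes to the terminal and is already line-buffered:
$ tail -f FILE_PATH | grep --line-buffered "DESIRED_STRING" | grep -v "EXCLUDED_STRING"
Both GNU grep and the BSD grep that ships with macOS support --line-buffered.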

How to ensure file written with sed w command is closed

I'm using the sed 'w' command to get the labels from a TeX document using:
/\\label{[a-zA-Z0-9]*}/w labels.list
This script is part of a pipeline in which, later on, awk reads the file that sed has just written, e.g.
cat bob | sed -f sedScript | awk -f awkScript labels.list -
Sometimes the pipeline produces the correct output, sometimes it doesn't (for exactly the same input file 'bob'). It's random.
I can only conclude that sometimes awk tries to read the file before sed has closed it properly. Is there any way I can force sed to close the file at the end of the script, or does anyone have other suggestions as to what the problem may be?
All stages in a pipeline run in parallel. This is an extremely important and defining feature of pipes, and there is nothing you can or should attempt to do in order to prevent or circumvent that.
Instead, you should rewrite your script so that all data dependencies are executed and finished in the order you need them to be. In the general case, you'd do
cat bob | sed -f sedScript > tempfile
cat tempfile | awk -f awkScript labels.list -
or equivalently in your case:
grep '\\label{[a-zA-Z0-9]*}' bob > labels.list
awk -f awkScript labels.list bob
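And if you want to keep the sed script itself, the same sequencing works there too (a sketch, also dropping the useless cat):
sed -f sedScript bob > tempfile         # sed exits here, so labels.list is fully written and closed
awk -f awkScript labels.list tempfile   # only starts after the previous line has finished
rm tempfile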

How to get rid of duplicates? [duplicate]

This question already has answers here:
Remove duplicate entries in a Bash script [duplicate]
(4 answers)
Closed 8 years ago.
Hi, I am writing a script in bash which reads the contents of the files in the current directory that have the word "contact" in their names, sorts all the data in those files in alphabetical order, and writes it to a file called "out.txt". I was wondering if there is any way to get rid of duplicate content. Any help would be appreciated.
The code I have written so far.
#!/bin/bash
cat $(ls | grep contact) > out.txt
sort out.txt -o out.txt
sort has the option -u (long option: --unique) to output only unique lines:
sort -u out.txt -o out.txt
EDIT (thanks to tripleee): your script, as it stands, has the usual problems of parsing ls output. This is a better substitute for what you are trying to do:
sort -u *contact* >out.txt
Use the uniq command (easier to remember than flags):
#!/bin/bash
cat $(ls | grep contact) | sort | uniq > out.txt
or the -u flag for sort like this
#!/bin/bash
cat $(ls | grep contact) | sort -u > out.txt
uniq may do what you need. It copies lines from input to output, omitting a line if it is identical to the line it just output (so duplicates must be adjacent, which is why you sort first).
Take a look at the uniq command, and pipe your output through it after sorting.
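To see why sorting first matters (illustrative):
printf 'b\na\nb\n' | uniq          # prints b, a, b -- the duplicates are not adjacent
printf 'b\na\nb\n' | sort | uniq   # prints a, b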
