This question already has answers here:
How do I set a variable to the output of a command in Bash?
(15 answers)
Closed 3 years ago.
I'm looking to store the hash of my most recently downloaded file in my downloads folder as a variable.
So far, this is what I have:
md5sum $(ls -t | head -n1) | awk '{print $1}'
Output:
user#ci-lux-soryan:~/Downloads$ md5sum $(ls -t | head -n1) | awk '{print $1}'
c1924742187128cc9cb2ec04ecbd1ca6
I have tried storing it as a variable like so, but it doesn't work:
VTHash=$(md5sum $(ls -t | head -n1) | awk '{print $1}')
Any ideas where I'm going wrong?
As @Cyrus outlined, parsing ls has its own pitfalls, so it's better to avoid it altogether rather than risk unexpected corner cases. The following should do what you need:
VTHash="$(find -type f -mtime 0 | tail -n 1 | xargs md5sum | awk '{ print $1 }')"
This question already has answers here:
Reading output of a command into an array in Bash
(4 answers)
Closed 1 year ago.
I have this command which gives me a list of directories that have had changes in them when comparing two different git branches:
git diff test production --name-only | awk -F'/' 'NF!=1{print $1}' | sort -u
k8s
postgres
scripts
I want to iterate through the values it returns (in this case k8s, postgres, and scripts).
I can't figure out how to convert these values to an array though. I've tried a couple things:
changedServices=$(git diff test production --name-only | awk -F'/' 'NF!=1{print $1}' | sort -u)
Which just treats it as a multiline string.
And the following with the error message...
declare -a changedServices=$(git diff test production --name-only | awk -F'/' 'NF!=1{print $1}' | sort -u)
declare: changedServices: inconsistent type for assignment
How would I go about parsing this list as an array?
var=$(...) is a string assignment. For an array assignment you drop the $ and use var=( ... ), but mapfile is generally a better option here:
mapfile -t changedServices < <(git diff test production --name-only | awk -F'/' 'NF!=1{print $1}' | sort -u)
The -t option removes the trailing delimiter (here, the newline) from each line read.
If you don't have mapfile, another thing you can do is
changedServices=()
while IFS= read -r line; do
changedServices+=("${line}")
done < <(git diff test production --name-only | awk -F'/' 'NF!=1{print $1}' | sort -u)
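Either way, you can then iterate over the array with a quoted expansion, for example (the echo is just a placeholder for your real per-directory logic):
for service in "${changedServices[@]}"; do
    echo "Processing ${service}"
done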
How to feed xargs output to a piped grep for a piped cat command?
Command 1:
(Generates grep patterns with unique PIDs for a particular date/time, read from runtime.log)
cat runtime.log | grep -e '2018/09/13 14:50' | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='%
The output of the above command is (these are the custom grep patterns):
2018/09/13 14:50.*PID=13109
2018/09/13 14:50.*PID=14575
2018/09/13 14:50.*PID=15741
Command 2:
(Reads runtime.log and fetches the appropriate lines based on the grep pattern; ideally the pattern should come from Command 1)
cat runtime.log | grep '2018/09/13 14:50.*PID=13109'
The question is: how do I combine Command 1 and Command 2?
The combined version below doesn't give the expected output (it produced lines with dates other than '2018/09/13 14:50'):
cat runtime.log | grep -e '2018/09/13 14:50' | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='% | cat runtime.log xargs grep
grep has an option -f. From man grep:
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing. (-f is specified by POSIX.)
So you could use
cat runtime.log | grep -e '2018/09/13 14:50' | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='% > a_temp_file
cat runtime.log | grep -f a_temp_file
The shell has a syntax, <(...), that avoids having to create the temporary file: process substitution. From man bash:
Process Substitution
Process substitution is supported on systems that support named pipes
(FIFOs) or the /dev/fd method of naming open files. It takes the form
of <(list) or >(list). The process list is run with its input or
output connected to a FIFO or some file in /dev/fd. The name of this
file is passed as an argument to the current command as the result of
the expansion. If the >(list) form is used, writing to the file will
provide input for list. If the <(list) form is used, the file passed
as an argument should be read to obtain the output of list.
So you can combine the two commands into:
cat runtime.log | grep -f <(cat runtime.log | grep -e '2018/09/13 14:50' | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='%)
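As an aside, neither cat is needed; grep can read runtime.log directly on both sides. A sketch of the same pipeline without them:
grep -f <(grep -e '2018/09/13 14:50' runtime.log | awk -F'[ ]' '{print $4}' | awk -F'PID=' '{print $2}' | sort -u | xargs -I % echo '2018/09/13 14:50.*PID='%) runtime.log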
This question already has answers here:
How to get "wc -l" to print just the number of lines without file name?
(10 answers)
Closed 7 years ago.
I'm counting the number of lines in a big file using
wc -l myFile.txt
Result is
110 myFile.txt
But I want only the number
110
How can I do that?
(I want the number of lines as an input argument in a bash script)
There are lots of ways to do this. Here are two:
wc -l myFile.txt | cut -f1 -d' '
wc -l < myFile.txt
(The second works because wc only prints a filename when it is given one; reading from stdin, it prints just the count.)
cut is an old Unix tool that prints selected parts of lines from each FILE to standard output.
You can cat the file and pipe it to wc -l:
cat myFile.txt | wc -l
Or, if you insist that wc -l come first, you can use awk:
wc -l myFile.txt | awk '{print $1}'
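Since you want to use the count in a bash script, a minimal sketch of capturing it in a variable (myFile.txt stands in for your file):
# GNU wc prints only the number when reading stdin
lines=$(wc -l < myFile.txt)
echo "The file has $lines lines"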
I have a list of files starting with the word "output", and I want to sum up the total number of rows in all the files.
Here's my strategy:
for f in `find outpu*`;do wc -l $f | awk '{x+=$1}END{print $1}' ; done
If there were a way to do something like >> into a temporary variable before piping, and then run the awk command afterwards, I could accomplish this goal.
Any tips?
Use this to see the per-file details and the sum:
wc -l output*
And this to see only the sum:
wc -l output* | tail -n1 | awk '{print $1}'
(awk is used here because the total line may have leading spaces, which would break cut -d' '.)
Here is some stuff for fun, check it out:
grep -c . out* | cut -d':' -f2- | paste -sd+ | bc
all lines, including empty ones:
grep -c '' out* | cut -d':' -f2- | paste -sd+ | bc
You can play with the grep pattern to put conditions on which lines in the files get counted, as in the example below.
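For instance, to sum only the lines containing ERROR (a placeholder pattern):
grep -c 'ERROR' out* | cut -d':' -f2- | paste -sd+ | bc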
Watch out: this find command will only find things in your current directory, and only if at least one file matches outpu* (the shell expands the glob before find runs).
One way of doing it:
awk 'END{print NR}' $(find outpu*)
Provided there isn't such an insane number of matching filenames that it overflows your shell's maximum command-line length; see the sketch below for that case.
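If the list of files might be that large, one sketch that sidesteps the limit is to let find batch the arguments itself (assuming the files sit in the current directory):
find . -maxdepth 1 -type f -name 'output*' -exec cat {} + | wc -l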
This question already has answers here:
How can I pipe stderr, and not stdout?
(11 answers)
Closed 9 years ago.
In a simple case, let say we have some standard error:
$ ls /fake/file
ls: /fake/file: No such file or directory
QUESTION: Is it possible to parse out "/fake/file" from the standard error without having to write it out to a file first? For example:
$ ls /fake/file 2> tmp.file; sed 's/.* \(.*\):.*/\1/' tmp.file
/fake/file
Something like this?
ls /fake/file 2>&1 | awk -F: '{print $2}'
Either of the following should fetch you the filename. (Note: these assume the longer GNU ls message, ls: cannot access /fake/file: No such file or directory, so adjust the field numbers to match your ls output.)
ls /fake/file 2>&1 | awk -F: '{print $2}' | awk '{print $3}'
or
ls /fake/file 2>&1 | awk '{print $4}' | awk -F: '{print $1}'
or
ls /fake/file 2>&1 | sed 's/.* \(.*\):.*/\1/'
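If you'd rather end up with the name in a variable than in a pipeline, one sketch is to capture only stderr via command substitution, discarding stdout (the sed pattern is the one from the question):
err=$(ls /fake/file 2>&1 >/dev/null)   # stderr into $err, stdout discarded
file=$(printf '%s\n' "$err" | sed 's/.* \(.*\):.*/\1/')
echo "$file"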