I have code in my bash scripts that works unreliably:
# check that every line of the check_list file is present in my_prog output
MY_LIST=`./my_prog`
for l in $(cat check_list); do
if ! echo -n "$MY_LIST" | grep -q -x "$l"; then
die "Bad line: '$l'"
fi
done
This piece of code, which is part of a much larger set of scripts, reports "Bad line: 'smthng'" with a probability of around 1/5000. I was not able to reproduce the failure with this snippet in isolation, only within the larger script suite.
However this code seems to work very fine:
# check that every line of the check_list file is present in my_prog output
./my_prog > my_list
for l in $(cat check_list); do
if ! grep -q -x "$l" "my_list"; then
die "Bad line: '$l'"
fi
done
The reason I don't like the second version is that it uses an intermediate file, "my_list".
What could cause the first version to work unreliably?
Instead of calling grep for every line in your check_list, you can run one awk program:
awk '
FILENAME == ARGV[1] { check_list[$0]; next }
$0 in check_list {
    print "bad line: " $0
    exit 1
}
' check_list <(./my_prog)
Or, see if there are any common lines between your program's output and your check_list:
common=$( comm -12 <(sort -u check_list) <(./my_prog | sort -u) )
if [ -n "$common" ]; then
echo "bad lines: "
echo "$common"
die
fi
I don't know what's wrong with the first version but you can easily eliminate the creation of a temporary file.
Note that you'll have to correct the logic; I did not really understand it. Probably you'll want to update a variable in the inner loop and decide whether to die after the inner loop (see the sketch after the code below).
./my_prog | while read -r i; do
  for l in $(cat check_list); do
    if ! echo "$i" | grep -q -x "$l"; then
      die "Bad line: '$i', '$l'"
    fi
  done
done
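One possible way to apply that correction, as a rough sketch (not the code above): it assumes the same die helper used in the question, captures the output in a variable as the question did, sets a flag while checking, and decides whether to die only after the loop.
ok=1
output=$(./my_prog)
while read -r l; do
  if ! printf '%s\n' "$output" | grep -q -x -- "$l"; then
    echo "Bad line: '$l'" >&2
    ok=0
  fi
done < check_list
[ "$ok" -eq 1 ] || die "some check_list lines are missing from my_prog output"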
I would like to use this code snippet to update a file's date and time stamp using its file name:
Example file names:
2009.07.04-03.42.01.mov
2019.06.08-01.12.08.mov
I get the following error "The action “Run Shell Script” encountered an error: “touch: out of range or illegal time specification: [[CC]YY]MMDDhhmm[.SS]”
How would I modify this code snippet?
for if in "$#"
do
date_Time=$(echo "$if" | awk '{ print substr( $0, 1, length($0)-7 ) }' | sed 's/\.//g' | sed 's/-//')
touch -t "$date_Time" "$if"
done
UPDATE (01/05/2022):......
I would also like the code to work for the following filename formats...
And file names with no time info (time would default to 12pm):
2009.07.04.mov
2019.06.08.mov
And file names with description info:
2009.07.04-file-description.mov
2019.06.08-video-file info.mp4
2019.06.08-video-old-codec.avi
The error message suggests that you passed in file names which do not match your examples. Perhaps modify your code to display an error message if it is called with no files at all, and remove the path if it is passed files with directory names.
As an aside, if is a keyword, so you probably don't want to use it as a variable name, even though it is possible.
#!/bin/sh
if [ $# -eq 0 ]; then
  echo "Syntax: $0 files ..." >&2
  exit 1
fi
for f in "$@"
do
  # strip any directory part, keep only the digits, and take the first 12 (CCYYMMDDhhmm)
  date_Time=$(echo "$f" | awk '{ sub(/.*\//, ""); gsub(/[^0-9]+/, ""); print substr($0, 1, 12) }')
  touch -t "$date_Time" "$f"
done
Notice also how I eliminated the sed scripts; Awk can do everything sed can do, so I folded the final transformation into the main Awk script. (As an aside, sed 's/[-.]//g' would do both substitutions in one go; or you could do sed -e 's/\.//g' -e 's/-//' with a single sed invocation.)
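For comparison, if you wanted to keep the original pipeline shape but use a single sed call, it might look roughly like this (a sketch; it keeps the asker's trick of first chopping off the ".SS.mov" suffix, then stripping the separators):
date_Time=$(echo "$f" | awk '{ print substr($0, 1, length($0)-7) }' | sed 's/[-.]//g')
# e.g. 2009.07.04-03.42.01.mov -> 2009.07.04-03.42 -> 200907040342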
If you use Bash, you could simplify this further:
#!/bin/bash
if [ $# == 0 ]; then
  echo "Syntax: $0 files ..." >&2
  exit 1
fi
for f in "$@"
do
  base=${f##*/}
  dt=${base//[!0-9]/}
  dt=${dt:0:12}
  case $dt in
    [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9])
      touch -t "$dt" "$f";;
    *) echo "$0: $f did not seem to contain a valid date ($dt)" >&2;;
  esac
done
Notice also how the code now warns if it cannot extract exactly 12 digits from the file name. The parameter expansions are somewhat clumsy, but a lot more efficient than calling Awk on each file name separately (and the Awk code wasn't particularly elegant or robust either).
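To make the parameter expansions concrete, here is what each step produces for one of the sample names (a quick sketch you can paste into an interactive bash session):
f='2009.07.04-03.42.01.mov'
base=${f##*/}         # 2009.07.04-03.42.01.mov  (removes any leading directory part; nothing to strip here)
dt=${base//[!0-9]/}   # 20090704034201           (digits only)
dt=${dt:0:12}         # 200907040342             (CCYYMMDDhhmm, the format touch -t expects)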
Quick and small bash function
setDateFileFromName() {
  local _file _dtime
  for _file; do
    _dtime="${_file%.*.mov}"
    _dtime="${_dtime##*/}"
    touch -t ${_dtime//[!0-9]/} "$_file"
  done
}
Then
setDateFileFromName /path/to store dir/????.??.??-??.??.??.mov
Remark: this works with filenames formatted like your samples. Any change in the filename format will break it!
I am learning bash. I would like to get the return value and the matched line from grep at once.
if cat 'file' | grep 'match_word'; then
match_by_grep="$(cat 'file' | grep 'match_word')"
read a b <<< "${match_by_grep}"
fi
In the code above, I used grep twice. I cannot think of how to do it with only one grep. I am also not sure match_by_grep is always empty when there are no matched words, because cat may output an error message.
match_by_grep="$(cat 'file' | grep 'match_word')"
if [[ -n ${match_by_grep} ]]; then
# match_by_grep may contain an error message from cat,
# so the following a and b may get wrong values.
read a b <<< "${match_by_grep}"
fi
Please tell me how to do it. Thank you very much.
You can avoid the double use of grep by storing the search output in a variable and seeing if it is not empty.
Your version of the script without double grep.
#!/bin/bash
grepOutput="$(grep 'match_word' file)"
if [ ! -z "$grepOutput" ]; then
read a b <<< "${grepOutput}"
fi
An optimization over the above script (you can remove the temporary variable too):
#!/bin/bash
grepOutput="$(grep 'match_word' file)"
[[ ! -z "$grepOutput" ]] && read a b <<< "${grepOutput}"  # no subshell around read, so a and b stay set in the current shell
Using grep twice, once to check the if-condition and once to parse the search result, would be something like:
#!/bin/bash
if grep -q 'match_word' file; then
grepOutput="$(grep 'match_word' file)"
read a b <<< "${grepOutput}"
fi
When assigning a variable from a command substitution, the exit status of the assignment is that of the (rightmost) command being substituted.
In other words, you can just use the assignment as the condition:
if grepOutput="$(cat 'file' | grep 'match_word')"
then
echo "There was a match"
read -r a b <<< "${grepOutput}"
(etc)
else
echo "No match"
fi
Is this what you want to achieve?
grep 'match_word' file ; echo $?
$? holds the return value of the command run immediately before.
If you would like to keep track of the return value, it can also be useful to set up PS1 to include $?.
Ref: Bash Prompt with Last Exit Code
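For instance, a minimal prompt that shows the previous command's exit status could look like this (a sketch; adjust the rest of the prompt to taste):
# Prefix the bash prompt with the exit status of the last command
PS1='[$?] \u@\h:\w\$ '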
This is just a simple problem, but I don't understand why I get an error here. It is just a for loop inside an if statement.
This is my code:
#!/bin/bash
if (!( -f $argv[1])) then
echo "Argv must be text file";
else if ($#argv != 1) then
echo "Max argument is 1";
else if (-f $argv[1]) then
for i in `cut -d ',' -f2 $argv[1]`
do
ping -c 3 $i;
echo "finish pinging host $i"
done
fi
The error is on line 16, which is the line after fi; it is a blank line.
Can someone please explain why I get this error?
There are many, many errors.
If I try to stay close to your example code:
#!/bin/sh
if [ ! -f "${1}" ]
then
echo "Argv must be text file";
else if [ "${#}" -ne 1 ]
then
echo "Max argument is 1";
else if [ -f "${1}" ]
then
for i in $(cat "${1}" | cut -d',' -f2 )
do
ping -c 3 "${i}";
echo "finish pinging host ${i}"
done
fi
fi
fi
Another way, exiting each time a condition is not met:
#!/bin/sh
[ "${#}" -ne 1 ] && { echo "There should be 1 (and only 1) argument" ; exit 1 ; }
[ ! -f "${1}" ] && { echo "Argv must be a file." ; exit 1 ; }
[ -f "${1}" ] && {
for i in $(cat "${1}" | cut -d',' -f2 )
do
ping -c 3 "${i}";
echo "finish pinging host ${i}"
done
}
#!/usr/local/bin/bash -x
if [ ! -f "${1}" ]
then
echo "Argument must be a text file."
else
while-loop-script "${1}"
fi
I have broken this up, because I personally consider it extremely bad form to nest one function inside another; or truthfully to even have more than one function in the same file. I don't care about file size, either; I've got several scripts which are 300-500 bytes long. I'm learning FORTH; fractalism in that sense is a virtue.
# while-loop-script
while read line
do
IFS="#"
ping -c 3 "${line}"
IFS=" "
done < "${1}"
Don't use cat in order to feed individual file lines to a script; it will always fail, and bash will try and execute the output as a literal command. I thought that sed printing would work, and it often does, but for some reason it very often substitutes spaces for newlines, which is extremely annoying as well.
The only absolutely bulletproof method I know of for feeding lines to a script, one which preserves all spacing and formatting, is to use while-read loops rather than for loops over the output of cat or sed, as mentioned.
Something else you will need to do, in order to be sure about preserving whitespace, is to set the internal field separator (IFS) to something that you know your file will not contain, and then reset it back to whitespace at the end of the loop.
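A related idiom you will often see (a sketch, not taken from the answer above) clears IFS just for the read itself, which preserves leading and trailing whitespace without having to restore IFS afterwards:
# Read a file line by line, keeping all whitespace intact
while IFS= read -r line
do
    printf '%s\n' "$line"
done < "${1}"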
For every opening if, you must have a corresponding closing fi. This is also true for else if. Better to use elif instead:
if test ! -f "$1"; then
echo "Argv must be text file";
elif test $# != 1; then
echo "Max argument is 1";
elif test -f "$1"; then
for i in `cut -d ',' -f2 "$1"`
do
ping -c 3 $i;
echo "finish pinging host $i"
done
fi
There's also no argv variable. If you want to access the command line arguments, you must use $1, $2, ...
The next point is $#argv: this evaluates to $# (the number of command-line arguments) followed by the literal text argv. This looks a lot like Perl.
Furthermore, testing is done with either test ... or [ ... ], not ( ... )
And finally, you should enclose at least your command-line arguments in double quotes, as in "$1". If you don't and there is no command-line argument, you get, for example,
test ! -f
instead of
test ! -f ""
This lets the test fail and go on to the second if, instead of echoing the proper message.
I have this scenario:
File Content:
10.1.1.1
10.1.1.2
10.1.1.3
10.1.1.4
I want a sed or awk command so that every time I cat the file, the next line is returned, like this:
First iteration:
cat ip | some magic
10.1.1.1
Second iteration returns
10.1.1.2
Third iteration returns
10.1.1.3
Fourth iteration returns
10.1.1.4
and after n iterations, it wraps around to line 1.
Fifth iteration returns:
10.1.1.1
Can we do this using sed or awk?
You will need to store the line number in a file and increment it with modulus at each invocation.
get_line () {
    if [[ ! -e /var/local/get_line.next ]]
    then
        if [[ ! -e /var/local ]]
        then
            mkdir -p /var/local
        fi
        line_no=1
    else
        line_no=$(< /var/local/get_line.next)
    fi
    file_length=$(wc -l < ip_file)
    if ((file_length == 0))
    then
        echo "Error: Data file is empty" >&2
        return 1
    fi
    if ((line_no > file_length))
    then
        line_no=1
    fi
    sed -n "$line_no{p;q}" ip_file
    echo "$((++line_no))" > /var/local/get_line.next
}
This is in the form of a function which you can incorporate in a script. Feel free to change the location of the get_line.next file. Note that permissions will need to be correct to read or write the files or to create the directory, if necessary.
You will not need to use cat.
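For example, a hypothetical usage in a script that has sourced the function above (assuming the data file is called ip_file in the current directory, as in the function):
ip=$(get_line) || exit 1   # next address, wrapping around after the last line
ping -c 1 "$ip"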
You can't do this with cat. You also can't seek on a pipe, so you can't use a pipe.
You can do this with a nested while loop:
while ((1))
do
while read line
do
echo "$line"
done <somefile
done
My bash script doesn't work the way I want it to:
#!/bin/bash
total="0"
count="0"
#FILE="$1" This is the easier way
for FILE in $*
do
# Start processing all processable files
while read line
do
if [[ "$line" =~ ^Total ]];
then
tmp=$(echo $line | cut -d':' -f2)
count=$(expr $count + 1)
total=$(expr $total + $tmp)
fi
done < $FILE
done
echo "The Total Is: $total"
echo "$FILE"
Is there another way to modify this script so that it reads arguments into $1 instead of $FILE? I've tried using a while loop:
while [ $1 != "" ]
do ....
done
Also, when I implement that, the code repeats itself. Is there a way to fix that as well?
Another problem I'm having is that when I have multiple files matching hi*.txt, it gives me duplicates. Why? I have files like hi1.txt and hi1.txt~, but the tilde file is 0 bytes, so my script shouldn't be finding anything in it.
What I have is fine, but it could be improved. I appreciate your awk suggestions, but they're currently beyond my level as a Unix programmer.
Strager: The files that my text editor generates automatically contain nothing; they are 0 bytes. But yeah, I went ahead and deleted them just to be sure. But no, my script is in fact reading everything twice. I suppose it's looping again when it really shouldn't. I've tried to silence that with exit commands, but wasn't successful.
while [ "$1" != "" ]; do
# Code here
# Next argument
shift
done
This code is pretty sweet, but I'm specifying all the possible commands at one time. Example: hi[145].txt
If supplied, it would read all three files at once.
Suppose the user enters hi*.txt;
I then get all my hi files read twice and then added again.
How can I code it so that it reads my files (just once) upon specification of hi*.txt?
I really think that this is because of not having $1.
It looks like you are trying to add up the totals from the lines labelled 'Total:' in the files provided. It is always a good idea to state what you're trying to do - as well as how you're trying to do it (see How to Ask Questions the Smart Way).
If so, then you're doing it in about as complicated a way as I can imagine. What was wrong with:
grep '^Total:' "$#" |
cut -d: -f2 |
awk '{sum += $1}
END { print sum }'
This doesn't print out "The total is" etc; and it is not clear why you echo $FILE at the end of your version.
You can use Perl or any other suitable program in place of awk; you could do the whole job in Perl or Python - indeed, the cut work could be done by awk:
grep "^Total:" "$#" |
awk -F: '{sum += $2}
END { print sum }'
Taken still further, the whole job could be done by awk:
awk -F: '$1 ~ /^Total/ { sum += $2 }
END { print sum }' "$@"
The code in Perl wouldn't be much harder and the result might be quicker:
perl -na -F: -e '$sum += $F[1] if m/^Total:/; END { print $sum; }' "$@"
When iterating over the file name arguments provided to a shell script, you should use "$@" (with the double quotes) in place of $*, as the latter notation does not preserve spaces in file names.
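To illustrate the difference, here is a small sketch (the file names are made up):
# Suppose the script is invoked as: ./sum.sh "total jan.txt" "total feb.txt"
for f in $*; do
    echo "<$f>"      # four iterations: <total> <jan.txt> <total> <feb.txt>
done
for f in "$@"; do
    echo "<$f>"      # two iterations: <total jan.txt> <total feb.txt>
done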
Your comment about '$1' is confusing to me. You could be asking to read from the file whose name is in $1 on each iteration; that is done using:
while [ $# -gt 0 ]
do
...process $1...
shift
done
HTH!
If you define a function, it'll receive the argument as $1. Why is $1 more valuable to you than $FILE, though?
#!/bin/sh
process() {
    echo "doing something with $1"
}
for i in "$@" # Note use of "$@" to not break on filenames with whitespace
do
    process "$i"
done
while [ "$1" != "" ]; do
# Code here
# Next argument
shift
done
On your problem with tilde files ... those are temporary files created by your text editor. Delete them if you don't want them to be matched by your glob expression (wildcard). Otherwise, filter them in your script (not recommended).
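If you do decide to filter them in the script anyway (again, not the recommended route), a minimal sketch could look like this:
for FILE in "$@"; do
    case $FILE in
        *~) continue ;;   # skip editor backup files
    esac
    # ... process "$FILE" here ...
done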