I have files with the following name format.
test_1_452161_987654321.ARC
test_1_452162_987654321.ARC
test_1_452163_987654321.ARC
.
.
.
test_1_452190_987654321.ARC
I.e., I need to delete 30 files in the above case, where the user gives these inputs:
Start_File = 452161
End_File = 452190
How can I delete this series of files? (I am new to bash/shell programming; I tried a while loop with rm, which didn't work, so I am seeking expert help.)
My Code
echo "Enter Start File"
read start
echo "Enter End File"
read end
i=$start
j=$end
while [$i -le $j]; do
rm *$i*.ARC
((i++))
done
Your while loop fails because [ is a command and needs spaces around it and its operands (while [ "$i" -le "$j" ]). Simpler, though: replace everything after the fourth line with
for ((i=$start; i<=$end; i++)); do
echo rm *_${i}_*.ARC
done
If the output looks right, remove the echo.
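One caveat: if some number in the range has no matching file, the unmatched glob is passed to rm literally and rm complains about a nonexistent file. rm -f silences that, or you can enable nullglob (a sketch):
shopt -s nullglob   # unmatched globs expand to nothing instead of themselves
for ((i=start; i<=end; i++)); do
    rm -f *_"${i}"_*.ARC   # -f: stay quiet when nothing matched
done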
You can use awk + xargs:
printf "%s\n" test*ARC | awk -v s=452161 -v e=452190 -F_ '$3 >= s && $3 <= e' | xargs rm
awk splits the file names on _ and checks that the 3rd field falls within the start/end range; xargs then hands the surviving names to rm.
You may be interested in brace expansion of a sequence (see your shell's documentation on brace expansion). The code below will echo the file names you are interested in, based on the provided start and end input.
echo -n "Enter Start File => "; read start
echo -n "Enter End File => "; read end
for file in *{$start..$end}*
do
echo "$file"
done
EDIT: Brace expansion with variables works in zsh but not in bash.
Just to add to your options, you can use seq:
echo "Enter Start File"
read start
echo "Enter End File"
read end
for i in $(seq "$start" "$end")
do
rm -f *_${i}_*.ARC
done
To me this seems clean and clear.
Related
I have a bash script which checks the presence of certain files and that their content has a valid format. It uses variable prefixes so I can easily add/remove files without further adjustments.
The problem is that I need to run this on AIX servers where bash is not present. I've adjusted the script except for the part with variable prefixes ($(echo ${!ifile_@})). After some attempts I am lost and have no idea how to properly migrate that piece of code so it runs under sh. Alternatively, I have ksh or csh if plain sh is not an option.
Thank you in advance for any help/hints.
#!/bin/sh
# Source files
ifile_one="/path/to/file/one.csv"
ifile_two="/path/to/file/two.csv"
ifile_three="/path/to/file/three.csv"
ifile_five="/path/to/file/four.csv"
min_columns='10'
existing_files=""
nonexisting_files=""
valid_files=""
invalid_files=""
# Check that defined input-files exists and can be read.
for input_file in $(echo ${!ifile_@})
do
if [ -r ${!input_file} ]; then
existing_files+="${!input_file} "
else
nonexisting_files+="${!input_file} "
fi
done
echo "$existing_files"
echo "$nonexisting_files"
# Check that defined input files have proper number of columns.
for input_file_a in $(echo "$existing_files")
do
check=$(grep -v "^$" $input_file_a | sed 's/[^;]//g' | awk -v min_columns="$min_columns" '{ if (length == min_columns) {print "OK"} else {print "KO"} }' | grep -i KO)
if [ ! -z "$check" ]; then
invalid_files+="${input_file_a} "
else
valid_files+="${input_file_a} "
fi
done
echo "$invalid_files"
echo "$valid_files"
Bash returns the expected output (of the four echoes):
/path/to/file/one.csv /path/to/file/two.csv /path/to/file/three.csv
/path/to/file/four.csv
/path/to/file/three.csv
/path/to/file/one.csv /path/to/file/two.csv
ksh/sh throws:
./report.sh[14]: "${!ifile_@}": 0403-011 The specified substitution is not valid for this command.
Thanks @Benjamin W. and @user1934428, ksh93 arrays are the answer.
So the code below works for me as desired.
#!/bin/ksh93
typeset -A ifile
ifile[one]="/path/to/file/one.csv"
ifile[two]="/path/to/file/two.csv"
ifile[three]="/path/to/file/three.csv"
ifile[whatever]="/path/to/file/something.csv"
existing_files=""
nonexisting_files=""
for input_file in "${!ifile[@]}"
do
if [ -r ${ifile[$input_file]} ]; then
existing_files+="${ifile[$input_file]} "
else
nonexisting_files+="${ifile[$input_file]} "
fi
done
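If plain POSIX sh is ever required (it has neither associative arrays nor +=), a portable fallback is to keep the list in a single variable instead of prefixed names; a minimal sketch of the existence check:
#!/bin/sh
# Plain-sh variant: one list variable instead of variable prefixes
ifiles="/path/to/file/one.csv /path/to/file/two.csv /path/to/file/three.csv"

existing_files=""
nonexisting_files=""
for input_file in $ifiles; do   # relies on word splitting; assumes no spaces in paths
    if [ -r "$input_file" ]; then
        existing_files="$existing_files$input_file "
    else
        nonexisting_files="$nonexisting_files$input_file "
    fi
done
echo "$existing_files"
echo "$nonexisting_files"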
I need to make a shell script that splits every CSV file (records separated by \n); the limit per output file is a number of words, and I can't cut a line in half.
Finished script with the help of a wizard!
Example:
sh SliceByWords.sh 1000 .
Slices every file into 1000-word parts and puts every part into a subfolder.
function has_number_number_of_words {
    re='^[0-9]+$'
    if ! [[ $1 =~ $re ]]; then
        echo "error: not a number, please run the command with the number of words per file" >&2
        exit 1
    fi
}

# MAIN
has_number_number_of_words "$1"
declare -i WORDLIMIT=$1            # word limit per output part
subdir="Result"
mkdir -p "$subdir"
format=*.csv
# Work around spaces in file names: temporarily replace them with ___
for name in $format; do mv "$name" "${name// /___}"; done
for i in $format; do
    if [[ "$i" == "$format" ]]; then   # glob did not match: pattern is still literal
        echo "No Files"
    else
        (
            FILENAMEWITHOUTEXTENSION="${i%.*}"
            subnoext="$subdir/$FILENAMEWITHOUTEXTENSION"
            echo "Processing file $FILENAMEWITHOUTEXTENSION"
            # Count words (NF) per line; once the running count passes the
            # limit, close the current part and start the next numbered one.
            awk -v NOEXT="$subnoext" -v wl="$WORDLIMIT" -F" " '
                BEGIN{fn=1}
                {c+=NF; sv=NOEXT"_snd_"fn".csv"; print $0 > sv}
                c>wl{c=0; ++fn; close(sv)}' "$i"
        ) &
    fi
done
wait   # let all background jobs finish
# Restore the original file names
for name in $format; do mv "$name" "${name//___/ }"; done
echo "All files done."
Since I couldn't figure out how to hand awk file names containing spaces, I'm renaming them around the processing:
for name in $format; do mv "$name" "${name//___/ }"; done
I think this would be a lot easier to handle with awk:
awk -F" " 'BEGIN{filenumber=1}{counter+=NF}{print $0 > FILENAME"_part_"filenumber} counter>1000{counter=0;++filenumber}' yourinputfile
Here awk is:
Splitting each line on spaces (-F" ")
Setting the filenumber variable to 1 before processing the file (BEGIN{filenumber=1})
Bumping the counter variable by the number of fields in the line ({counter+=NF})
Printing the line to a numbered output file, using the FILENAME built-in variable to pull through yourinputfile ({print $0 > FILENAME"_part_"filenumber})
Once the counter has popped over 1000, resetting it to 0 and bumping filenumber by 1 (counter>1000{counter=0;++filenumber})
Minimized a bit:
awk -F" " 'BEGIN{fn=1}{c+=NF}{print $0>FILENAME"_part_"fn}c>1000{c=0;++fn}' yourinputfile
I'm using bash on cygwin.
I have to take a .csv file that is a subset of a much larger set of settings and shuffle the new csv settings (same keys, different values) into the 1000-plus-line original, making a new .json file.
I have put together a script to automate this. The first step in the process is to "clean up" the csv file by extracting lines that start with "mme " and "sms ". Everything else is to pass through cleanly to the "clean" .csv file.
This routine is as follows:
# clean up the settings, throwing out mme and sms entries
cat extract.csv | while read -r LINE; do
if [[ $LINE == "mme "* ]]
then
printf "$LINE\n" >> mme_settings.csv
elif [[ $LINE == "sms "* ]]
then
printf "$LINE\n" >> sms_settings.csv
else
printf "$LINE\n" >> extract_clean.csv
fi
done
My problem is that this thing stubs its toe on the following string at the end of one entry: 100%." When it's done with the line, it simply elides the %." and the new-line marker following it, and smears the two lines together:
... 100next.entry.keyname...
I would love to reach in and simply manually delimit the % sign, but it's not a realistic option for my use case. Clearly I'm missing something. My suspicion is that I am in some wise abusing cat or read in the first line.
If there is some place I should have looked to find the answer before bugging you all, by all means point me in that direction and I'll sod off.
The syntax for printf is:
printf format [argument]...
In the printf format string, % followed by a character is a format specifier, as described in the link above. What you would like to do is:
while read -r line; do   # replaced LINE with line: all-uppercase names are conventionally reserved for the system
if [[ "$line" = "mme "* ]]   # the * globs anything that comes next
then
printf "%s\n" "$line" >> mme_settings.csv
elif [[ "$line" = "sms "* ]]
then
printf "%s\n" "$line" >> sms_settings.csv
else
printf "%s\n" "$line" >> extract_clean.csv
fi
done < extract.csv   # avoids the useless use of cat
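To see the failure mode in isolation, compare the two calls on a sample line ending in 100%.:
printf "100%.\n"          # %. begins an invalid specifier: 100 is printed, printf complains, the newline is lost
printf "%s\n" "100%."     # the data is an argument, not a format: prints 100%. intact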
As pointed out, your problem is expanding a parameter containing a formatting instruction in the formatting argument of printf, which can be solved by using echo instead or moving the parameter to be expanded out of the formatting string, as demonstrated in other answers.
I recommend not looping over your whole file with Bash in the first place, as it's notoriously slow; you're extracting lines starting with certain patterns, which is a job at which grep excels:
grep '^mme ' extract.csv > mme_settings.csv
grep '^sms ' extract.csv > sms_settings.csv
grep -v '^mme \|^sms ' extract.csv > extract_clean.csv
The third command uses the -v option (print lines that don't match) and alternation to exclude lines starting with either mme or sms.
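If you prefer a single pass over the input, awk can route each line to the right output in one go (a sketch, using the same file names as above):
awk '/^mme / {print > "mme_settings.csv"; next}
     /^sms / {print > "sms_settings.csv"; next}
     {print > "extract_clean.csv"}' extract.csv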
I'm trying to write a small script that will count entries in a log file, and I'm incrementing a variable (USCOUNTER) which I'm trying to use after the loop is done.
But at that moment USCOUNTER looks to be 0 instead of the actual value. Any idea what I'm doing wrong? Thanks!
FILE=$1
tail -n10 mylog > $FILE
USCOUNTER=0
cat $FILE | while read line; do
country=$(echo "$line" | cut -d' ' -f1)
if [ "US" = "$country" ]; then
USCOUNTER=`expr $USCOUNTER + 1`
echo "US counter $USCOUNTER"
fi
done
echo "final $USCOUNTER"
It outputs:
US counter 1
US counter 2
US counter 3
..
final 0
You are using USCOUNTER in a subshell; that's why the variable change is not visible in the main shell.
Instead of cat $FILE | while ..., do just while ... done < "$FILE". This way, you avoid the common problem of I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?:
while read country _; do
if [ "US" = "$country" ]; then
USCOUNTER=$(expr $USCOUNTER + 1)
echo "US counter $USCOUNTER"
fi
done < "$FILE"
Note I also replaced the backtick expression with $().
I also replaced while read line; do country=$(echo "$line" | cut -d' ' -f1) with while read country _. This lets you write while read var1 var2 ... varN, where $var1 gets the first word of the line, $var2 the second, and so on, with the last variable catching all the remaining content.
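If you want to keep a pipeline feeding the loop, bash 4.2+ can run the last pipeline stage in the current shell via the lastpipe option (it only takes effect when job control is off, which is the default in scripts); a sketch:
#!/bin/bash
shopt -s lastpipe   # run the last command of a pipeline in the current shell
USCOUNTER=0
cat "$FILE" | while read -r country _; do
    [ "US" = "$country" ] && USCOUNTER=$((USCOUNTER + 1))
done
echo "final $USCOUNTER"   # now prints the real total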
Always use -r with read.
There is no need for cut; you can stick with pure bash solutions:
in this case, pass read a 2nd variable (_) to catch the additional fields.
Prefer [[ ]] over [ ].
Use arithmetic expressions for the counting.
Do not forget to quote variables!
while read -r country _; do
if [[ $country = 'US' ]]; then
((USCOUNTER++))
echo "US counter $USCOUNTER"
fi
done < "$FILE"
A minimalist version:
counter=0
((counter++))
echo $counter
You're getting final 0 because your while loop executes in a subshell process, and any changes made there are not reflected in the current (parent) shell.
Correct script:
while read -r country _; do
if [ "US" = "$country" ]; then
((USCOUNTER++))
echo "US counter $USCOUNTER"
fi
done < "$FILE"
I had the same issue of a $count variable modified in a while loop getting lost.
@fedorqui's answer (and a few others) accurately answer the actual question: the subshell is indeed the problem.
But it led me to another issue: I wasn't piping a file's content, but the output of a series of pipes & greps.
My erroring sample code:
count=0
cat /etc/hosts | head | while read line; do
((count++))
echo $count $line
done
echo $count
and my fix, thanks to the help of this thread, using process substitution:
count=0
while IFS= read -r line; do
((count++))
echo "$count $line"
done < <(cat /etc/hosts | head)
echo "$count"
If you only need the final count, you can skip the loop entirely and let grep count the matching lines:
USCOUNTER=$(grep -c "^US " "$FILE")
Incrementing a variable can be done with arithmetic expansion (the older $[...] syntax is deprecated):
_my_counter=$((_my_counter + 1))
Counting the number of occurrences of a pattern in a given column can be done with grep:
grep -cE "^([^ ]* ){2}US"
-c counts the matching lines
([^ ]* ) matches one column: a run of non-space characters followed by a space
{2} is the number of columns to skip
US is your pattern
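For instance, on a hypothetical log line with US in the third space-separated column:
echo "1.2.3.4 2014-01-01 US /index.html" | grep -cE "^([^ ]* ){2}US"   # prints 1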
The following one-line command renames many files on Linux based on a matching phrase:
find -type f -name '*.jpg' | rename 's/holiday/honeymoon/'
For all files with the extension ".jpg", if they contain the string "holiday", replace it with "honeymoon". For instance, this command would rename the file "ourholiday001.jpg" to "ourhoneymoon001.jpg".
This example also illustrates how to use the find command to send a list of files (-type f) with the extension .jpg (-name '*.jpg') to rename via a pipe (|). rename then reads its file list from standard input; this relies on the Perl flavor of rename, which falls back to standard input when no file names are given as arguments.
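If the Perl rename is not available (util-linux ships an incompatible rename), a pure-bash loop with parameter expansion does the same job; a minimal sketch, covering only the current directory rather than recursing like find:
for f in *holiday*.jpg; do
    mv -- "$f" "${f/holiday/honeymoon}"   # replace the first occurrence of holiday
done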
The following program reads a file, intending to store all the values (one per line) in a variable, but it doesn't store the last line. Why?
file.txt :
1
2
.
.
.
n
Code :
FileName=file.txt
if test -f $FileName # Check if the file exists
then
while read -r line
do
fileNamesListStr="$fileNamesListStr $line"
done < $FileName
fi
echo "$fileNamesListStr" // 1 2 3 ..... n-1 (but it should print up to n.)
Instead of reading line-by-line, why not read the whole file at once?
[ -f $FileName ] && fileNameListStr=$( tr '\n' ' ' < $FileName )
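Note that tr turns every newline into a space, so the result carries a trailing space; trim it with ${fileNameListStr% } if that matters.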
One probable cause is a missing newline after the last line n.
Use the following command to check it:
tail -1 file.txt
And the following fixes it:
echo >> file.txt
If you really need to keep the last line without a trailing newline, here is the while loop reorganized:
#!/bin/bash
FileName=file.txt
if test -f "$FileName" ; then
    while true ; do
        read -r line
        if [ -z "$line" ] ; then   # stops at EOF; note it also stops at the first empty line
            break
        fi
        fileNamesListStr="$fileNamesListStr $line"
    done < "$FileName"
fi
echo "$fileNamesListStr"
The issue is that when the file does not end in a newline, read returns a non-zero status, so the loop body does not run for that final piece of data. read still stores the data in the variable, though, which means you can process it after the loop. You also probably want an array instead of a space-separated string.
FileName=file.txt
if test -f $FileName # Check if the file exists
then
while read -r line
do
fileNamesListArr+=("$line")
done < $FileName
[[ -n $line ]] && fileNamesListArr+=("$line")
fi
echo "${fileNameListArr[#]}"
See the "My text files are broken! They lack their final newlines!" section of this article:
http://mywiki.wooledge.org/BashFAQ/001
As a workaround, before reading from the text file, a newline can be appended to it:
echo >> "$file_path"
This ensures that all the lines previously in the file are read; the file can then be read line by line.