logback-spring.xml: how can I archive to gz if <prudent>true</prudent>? - spring

How can I archive to gz when <prudent>true</prudent> is set? My logging configuration is in logback-spring.xml and I can't switch off prudent. How do I need to change the configuration? Maybe I need to make a new appender?
<appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <prudent>true</prudent>
    <append>true</append>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${LOG_FILE}-%d{yyyyMMdd}.log.gz</fileNamePattern>
        <maxHistory>${loggingMaxHistory:2}</maxHistory>
    </rollingPolicy>
. . .

In logback-spring.xml you can't do this the correct way, because a new appender would write its log lines into the general log file as well, so you would get duplicate lines in the log. I chose another way: write a bash script (two variants) and place it in the folder /etc/cron.daily/.
Variant 1 - a script that archives files n days after they were created/changed:
mkdir -p /var/log/archive/
find /var/log/*.log -maxdepth 1 -mtime +3 -exec gzip "{}" \;
mv /var/log/*.gz /var/log/archive/
Variant 2 - a script that archives files n days after creation, using the date taken from the filename (xxx-yyyymmdd.log):
#!/bin/bash
logFolder='/logs/'
archFolder="${logFolder}archive/"
mkdir -p "$archFolder"
todayDate="$(date +%Y%m%d)"
todayMon=${todayDate:4:2}
todayYear=${todayDate:0:4}
# force base 10, otherwise months such as "08" and "09" are rejected as octal
lastMon=$((10#$todayMon - 1))
# days after present not archived
backlog=3
if (( lastMon > 0 )); then
    # last day of the previous month, taken from the cal output
    last=$(echo $(cal $lastMon $todayYear) | awk '{print $NF}')
else
    last=31
fi
for i in "$logFolder"*.log; do
    # concatenate all digits in the name to recover yyyymmdd
    logDate=$(basename "$i" ".log" | grep -o '[0-9]' | tr -d '\n')
    logDiffDate=$((todayDate - logDate))
    logmin=$((100 - last))
    logmax=$((100 + backlog - last))
    logone=$((backlog + 1))
    if (( logDiffDate > logone )); then
        if (( logmin > logDiffDate )); then
            gzip "$i"
        fi
        if (( logmax < logDiffDate )); then
            gzip "$i"
        fi
    fi
done
mv "$logFolder"*.gz "$archFolder"
The first variant is shorter; the second is smarter.
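As a usage sketch (the script name archive-logs is hypothetical): copy either variant into /etc/cron.daily/ and make it executable. Note that on Debian-style systems run-parts skips file names containing a dot, so drop any .sh extension:
cp archive-logs.sh /etc/cron.daily/archive-logs
chmod +x /etc/cron.daily/archive-logs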

Related

bash iterate over a directory sorted by file size

As a webmaster, I generate a lot of junk files of code. Periodically I have to purge the unneeded files, filtered by extension. Example: "cleaner txt". Easy enough. But I want to sort the files by size and process them in the "for" loop. How can I do that?
cleaner:
#!/bin/bash
if [ -z "$1" ]; then
    echo "Please supply the filename suffixes to delete."
    exit
fi
filter=$1
for FILE in *."$filter"; do
    clear
    cat "$FILE"
    printf '\n\n'
    rm -i "$FILE"
done
You can use a mix of find (to print file sizes and names), sort (to sort the output of find) and cut (to remove the sizes). In case you have very unusual file names containing any possible character including newlines, it is safer to separate the files by a character that cannot be part of a name: NUL.
#!/bin/bash
if [ -z "$1" ]; then
    echo "Please supply the filename suffixes to delete."
    exit
fi
filter=$1
while IFS= read -r -d '' -u 3 FILE; do
    clear
    cat "$FILE"
    printf '\n\n'
    rm -i "$FILE"
done 3< <(find . -mindepth 1 -maxdepth 1 -type f -name "*.$filter" \
    -printf '%s\t%p\0' | sort -zn | cut -zf 2-)
Note that we must use a different file descriptor than stdin (3 in this example) to pass the file names to the loop. Otherwise, if we used stdin, it would also be consumed to provide the answers to rm -i.
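As a toy sketch of that trick in isolation (the file names here are hypothetical): the loop reads names on descriptor 3, so rm -i can still prompt on stdin.
while IFS= read -r -u 3 name; do
    rm -i "$name"    # reads its y/n answer from the terminal, not from fd 3
done 3< <(printf '%s\n' old1.txt old2.txt)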
Inspired by this answer, you could use the find command as follows:
find ./ -type f -name "*.yaml" -printf "%s %p\n" | sort -n
The find command prints the size and the path of each file, so that the sort command can order the results from the smallest to the largest.
In case you want to iterate through (let's say) the 5 biggest files, you can do something like this using the tail command:
for f in $(find ./ -type f -name "*.yaml" -printf "%s %p\n" |
           sort -n |
           tail -n 5 |
           cut -d ' ' -f 2)
do
    echo "### $f"
done
If the file names don't contain newlines and spaces:
while read filesize filename; do
    printf "%-25s has size %10d\n" "$filename" "$filesize"
done < <(du -bs *."$filter" | sort -n)
while read filename; do
    echo "$filename"
done < <(du -bs *."$filter" | sort -n | awk '{$0=$2}1')

generate list of files and directories in macOS terminal like find / -ls but adding the columns: creation time date, modified time date

I need to generate a list of files and directories in the macOS terminal, like find / -ls > list.txt, but adding columns for creation time/date and modified time/date at the beginning. Is it possible to separate every field with a tab character in the output?
Finally I got it!
echo -e "inode birth\tmodified\tcreated\ttype and permissions\tsize in bytes\tuser id\tgroup id\tmd5 hash\tfile name with full path" | tee "$(date "+%Y-%m-%d_%H-%M") list of files and dirs.txt";find ~ | while read file; do (stat -f %SB "$file";stat -f %Sm "$file"; stat -f %Sc "$file"; stat -f %Sp "$file"; stat -f %Dz "$file"; stat -f %Su "$file"; stat -f %Sg "$file"; md5 -q -s "$file"; echo $file;)|tr '\n' '\t';echo ; done | tee -a "$(date "+%Y-%m-%d_%H-%M") list of files and dirs.txt"

Bash script to String concat two variables and do File compare

What I am trying to achieve is to delete files with the same name (filename + modified timestamp) existing in Src_Dir1 and Src_Dir2.
So first I have tried to dump all the filenames to temp_a (Src_Dir1) and temp_b (Src_Dir2) respectively.
Below is a screenshot of the source directory.
Files inside the archive look like this, and there are a few files outside too.
So, initially I want to deal with the files inside Archive (Src_Dir1) and later with those outside Archive (Src_Dir2). What I am trying to do is use a while loop to read each filename, concatenate it with its modified timestamp (mtime), and write the result to temp_c. For example, AirTimeActs_2018-12-03.csv + 2019-01-24 14:41:53.000000000 -0500 should produce AirTimeActs_2018-12-03.csv_2019-01-24 14:41:53.000000000 -0500; this is what should be generated in the temp_c file for every filename inside Archive (Src_Dir1). This is where I am stuck, in the string-concat variable section, on how to proceed. Please help me with the code; I hope I am comprehensible.
IMPORTANT
(I'd really appreciate it if you could also help me with the extension of the code which I haven't mentioned here and have yet to achieve, which is:
implement the same code (which I am trying to do for temp_a) for temp_b too and name the result temp_d, then do a file data compare between temp_c and temp_d. If there is any matching filename, delete the file existing in Src_Dir2; if there is no matching filename, do nothing.)
#!/bin/bash
Src_Dir1=path/Airtime_Activation/Archive
Src_Dir2=path/Airtime_Activation/
find "$Src_Dir1" -maxdepth 1 \( -name "*.xlsx" -o -name "*.csv" \) | sed "s/.*\///" > path/Airtime_Activation/temp_a
find "$Src_Dir2" -maxdepth 1 \( -name "*.xlsx" -o -name "*.csv" \) | sed "s/.*\///" > path/Airtime_Activation/temp_b
echo 'phase1'
cat path/Airtime_Activation/temp_a | while read file
do
    echo 'phase1.5'
    echo "$file"
    echo 'phase2'
    mtime=$(stat -c '%y' "$Src_Dir1/$file")
    Full_name=${file}_${mtime}
    echo "$Full_name" >> path/Airtime_Activation/temp_c
    echo 'phase3'
done
#!/bin/bash
Src_Dir1=path/Airtime_Activation/Archive
Src_Dir2=path/Airtime_Activation/
find "$Src_Dir1" -maxdepth 1 \( -name "*.xlsx" -o -name "*.csv" \) | sed "s/.*\///" > path/Airtime_Activation/temp_a
find "$Src_Dir2" -maxdepth 1 \( -name "*.xlsx" -o -name "*.csv" \) | sed "s/.*\///" > path/Airtime_Activation/temp_b
echo 'phase1'
cat path/Airtime_Activation/temp_a | while read file
do
    echo 'phase1.5'
    echo "$file"
    echo 'phase2'
    mtime=$(stat -c '%y' "$Src_Dir1/$file")
    Full_name=${file}_${mtime}
    echo "$Full_name" >> path/Airtime_Activation/temp_c
    echo 'phase3'
done
cat path/Airtime_Activation/temp_b | while read file
#while IFS="" read -r -d $'\0' file;
do
    #echo "$file"
    echo 'phase2'
    mtime=$(stat -c '%y' "$Src_Dir2/$file")
    Full_name=${file}_${mtime}
    echo "$Full_name" >> path/Airtime_Activation/temp_d
    echo 'phase3'
done
# file compare and delete old files from outside the archive
grep -Ff path/Airtime_Activation/temp_d path/Airtime_Activation/temp_c > path/Airtime_Activation/temp_e
cat path/Airtime_Activation/temp_e | while read file
do
    echo 'phase2'
    echo "${file%_*}"
    rm "$Src_Dir2/${file%_*}"
    echo 'phase3'
done
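As an alternative sketch (assuming GNU find; the path/Airtime_Activation prefix is the same placeholder as above): the name+mtime lists can be built directly with -printf and compared with comm, avoiding the per-file stat calls:
find "$Src_Dir1" -maxdepth 1 \( -name '*.xlsx' -o -name '*.csv' \) -printf '%f_%TY-%Tm-%Td %TT\n' | sort > path/Airtime_Activation/temp_c
find "$Src_Dir2" -maxdepth 1 \( -name '*.xlsx' -o -name '*.csv' \) -printf '%f_%TY-%Tm-%Td %TT\n' | sort > path/Airtime_Activation/temp_d
# lines common to both sorted lists = same name and same mtime in both dirs
comm -12 path/Airtime_Activation/temp_c path/Airtime_Activation/temp_d | while IFS= read -r line; do
    rm -- "$Src_Dir2/${line%_*}"
done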

Is there a way to optimize this code and make it faster, or any other better solution?

I am looking to collect YANG models from my project's .jar files. I came up with an approach, but it takes time and my colleagues are not happy.
#!/bin/sh
set -e
# FIXME: make this tuneable
OUTPUT="yang models"
INPUT="."
JARS=`find $INPUT/system/org/linters -type f -name '*.jar' | sort -u`
# FIXME: also wipe output?
[ -d "$OUTPUT" ] || mkdir "$OUTPUT"
for jar in $JARS; do
    artifact=`basename $jar | sed 's/\.jar$//'`
    echo "Extracting modules from $artifact"
    # FIXME: better control over unzip errors
    unzip -q "$jar" 'META-INF/yang/*' -d "$artifact" \
        2>/dev/null || true
    dir="$artifact/META-INF/yang"
    if [ -d "$dir" ]; then
        for file in `find $dir -type f -name '*.yang'`; do
            module=`basename "$file"`
            echo -e "\t$module"
            # FIXME: better duplicate detection
            mv -n "$file" "$OUTPUT"
        done
    fi
    rm -rf "$artifact"
done
If the .jar files don't all change between invocations of your script then you could make the script significantly faster by caching the .jar files and only operating on the ones that changed, e.g.:
#!/usr/bin/env bash
set -e
# FIXME: make this tuneable
output='yang models'
input='.'
cache='/some/where'
mkdir -p "$cache" || exit 1
readarray -d '' jars < <(find "$input/system/org/linters" -type f -name '*.jar' -print0 | sort -zu)
# FIXME: also wipe output?
mkdir -p "$output" || exit 1
for jarpath in "${jars[@]}"; do
    # skip jars that are identical to the cached copy from the last run
    diff -q "$jarpath" "$cache/${jarpath##*/}" >/dev/null 2>&1 && continue
    cp "$jarpath" "$cache"
    jarfile="${jarpath##*/}"
    artifact="${jarfile%.*}"
    printf 'Extracting modules from %s\n' "$artifact"
    # FIXME: better control over unzip errors
    unzip -q "$jarpath" 'META-INF/yang/*' -d "$artifact" 2>/dev/null || true
    dir="$artifact/META-INF/yang"
    if [ -d "$dir" ]; then
        readarray -d '' yangs < <(find "$dir" -type f -name '*.yang' -print0)
        for yangpath in "${yangs[@]}"; do
            yangfile="${yangpath##*/}"
            printf '\t%s\n' "$yangfile"
            # FIXME: better duplicate detection
            mv -n "$yangpath" "$output"
        done
    fi
    rm -rf "$artifact"
done
See Correct Bash and shell script variable capitalization, http://mywiki.wooledge.org/BashFAQ/082, https://mywiki.wooledge.org/Quotes, and How can I store the "find" command results as an array in Bash for some of the other changes I made above.
I assume you have some reason for looping on the .yang files and not moving them if a file by the same name already exists rather than unzipping the .jar file into the final output directory.
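For reference, a minimal standalone sketch of the readarray pattern linked above (the *.jar glob is just an example):
# NUL-delimited find output stored safely in a bash array
readarray -d '' files < <(find . -type f -name '*.jar' -print0)
printf 'found %d jars\n' "${#files[@]}"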

Why is while not working?

AIM: To find files with a word count less than 1000 and move them another folder. Loop until all under 1k files are moved.
STATUS: It will only move one file, then error with "Unable to move file as it doesn't exist." For some reason $INPUT_SMALL doesn't seem to update with the new file name.
What am I doing wrong?
Current Script:
Check for input files already under 1k and move to Split folder
INPUT_SMALL=$( ls -S /folder1/ | grep -i reply | tail -1 )
INPUT_COUNT=$( cat /folder1/$INPUT_SMALL 2>/dev/null | wc -l )
function moveSmallInput() {
    while [[ $INPUT_SMALL != "" ]] && [[ $INPUT_COUNT -le 1003 ]]
    do
        echo "Files smaller than 1k have been found in input folder, these will be moved to the split folder to be processed."
        mv /folder1/$INPUT_SMALL /folder2/
    done
}
I assume you are looking for files that have the word reply somewhere in the path. My solution is:
wc -w $(find /folder1 -type f -path '*reply*') | \
while read wordcount filename
do
    if [[ $wordcount -lt 1003 ]]
    then
        printf "%4d %s\n" $wordcount $filename
        #mv "$filename" /folder2
    fi
done
Run the script once; if the output looks correct, then uncomment the mv command and run it for real this time.
Update
The above solution has trouble with files with embedded spaces. The problem occurs when the find command hands its output to the wc command. After a little bit of thinking, here is my revised solution:
find /folder1 -type f -path '*reply*' | \
while read filename
do
    set $(wc -w "$filename") # $1 = word count, $2 = filename
    wordcount=$1
    if [[ $wordcount -lt 1003 ]]
    then
        printf "%4d %s\n" $wordcount "$filename"
        #mv "$filename" /folder2
    fi
done
A somewhat shorter version
#!/bin/bash
find ./folder1 -type f | while read f
do
    (( $(wc -w "$f" | awk '{print $1}' ) < 1000 )) && cp "$f" folder2
done
I left cp instead of mv for safety reasons; change it to mv after validating.
If you also want to filter with reply, use @Hai's version of the find command:
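A minimal combined sketch, assuming the same folder layout as above:
# shorter loop restricted to paths containing "reply"
find ./folder1 -type f -path '*reply*' | while read -r f
do
    (( $(wc -w < "$f") < 1000 )) && cp "$f" folder2
done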
Your variables INPUT_SMALL and INPUT_COUNT are not functions; they're just values you assigned once. You either need to move them inside your while loop or turn them into functions and evaluate them each time (rather than just expanding the variable values, as you are now).
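For example, a minimal sketch of the first option, re-evaluating both values on every pass (paths and the 1003 threshold taken from the question):
function moveSmallInput() {
    while true; do
        # recompute the smallest matching file on every iteration
        INPUT_SMALL=$( ls -S /folder1/ | grep -i reply | tail -1 )
        [[ -n $INPUT_SMALL ]] || break
        INPUT_COUNT=$( wc -l < "/folder1/$INPUT_SMALL" 2>/dev/null )
        [[ $INPUT_COUNT -le 1003 ]] || break
        echo "Files smaller than 1k have been found in input folder, these will be moved to the split folder to be processed."
        mv "/folder1/$INPUT_SMALL" /folder2/
    done
}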
