Check whether a new file has arrived in a folder with "comm" - shell

I'm using comm in an infinite loop to see whether a new file has arrived in a folder, but I don't get just the difference between the two listings. For example, if a file "a" arrives, I see this output:
a a.out a.txt b.txt test.cpp testshell.sh
a.out a.txt b.txt test.cpp testshell.sh
My code is this:
#! /bin/ksh
ls1=$(ls);
echo $ls1 > a.txt;
while [[ 1 > 0 ]] ; do
ls2=$(ls);
echo $ls2 > b.txt;
#cat b.txt;
#sort b.txt > b.txt;
#diff -u a.txt b.txt;
#diff -a --suppress-common-lines -y a.txt b.txt
comm -3 a.txt b.txt;
printf "\n";
ls1=$ls2;
echo $ls1 > a.txt;
#cat a.txt;
#sleep 2;
#sort a.txt > a.txt;
done
THANKS

#! /bin/ksh
set -vx
PreCycle="$( ls -1 )"
while true
do
ThisCycle="$( ls -1 )"
# print both listings one name per line; unchanged files appear twice and
# are dropped by "uniq -u", so only added or removed names remain
printf '%s\n%s\n' "${PreCycle}" "${ThisCycle}" | sort | uniq -u
PreCycle="${ThisCycle}"
sleep 10
done
This gives the added and removed differences without using files. It could report only the new files the same way, but uniq -f 1 failed (I don't understand why) when used on a list prefixed with + or - depending on the source.
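As an aside, the reason the original script prints whole lines is that echo $ls1 > a.txt flattens the entire listing onto a single line, so comm ends up comparing one long line against another. Keeping one filename per line (and sorted, as comm requires) lets comm -13 report only names added since the last pass. A minimal sketch (temp file names are arbitrary, and the temp files themselves will show up once on the first pass):

#! /bin/ksh
ls -1 | sort > old.txt
while true ; do
    sleep 2
    ls -1 | sort > new.txt
    # column 2 of comm = lines only in new.txt, i.e. files added since last pass
    comm -13 old.txt new.txt
    mv new.txt old.txt
done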

Related

Unix: How to replace a pattern with every string from another file

I have a file named fileA, containing
comA comB comC
and another file named fileB, containing
for bp in `pgrep REPLACE_IT`;
do
echo 1 > /proc/REPLACE_IT/oom_adj
echo 1 > /proc/$bp/oom_score_adj
done 2>/dev/null
How can I substitute the word REPLACE_IT in fileB with each word from fileA, then print the result to fileC?
Desired output in fileC:
for bp in `pgrep comA`;
do
echo 1 > /proc/comA/oom_adj
echo 1 > /proc/$bp/oom_score_adj
done 2>/dev/null
for bp in `pgrep comB`;
do
echo 1 > /proc/comB/oom_adj
echo 1 > /proc/$bp/oom_score_adj
done 2>/dev/null
for bp in `pgrep comC`;
do
echo 1 > /proc/comC/oom_adj
echo 1 > /proc/$bp/oom_score_adj
done 2>/dev/null
Thanks for any advice
Would something like this do the trick?
for i in $(cat fileA); do
    sed "s|REPLACE_IT|$i|g" fileB >> fileC
done
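Not from the answer above, but if spawning sed once per word is a concern, a single awk pass can do the same job (a sketch; assumes the words in fileA are whitespace-separated):

awk 'NR == FNR { for (i = 1; i <= NF; i++) words[++n] = $i; next }  # collect words from fileA
     { lines[++m] = $0 }                                            # buffer fileB
     END {
         for (w = 1; w <= n; w++)
             for (l = 1; l <= m; l++) {
                 line = lines[l]
                 gsub(/REPLACE_IT/, words[w], line)
                 print line
             }
     }' fileA fileB > fileC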

How to copy to a file only the data that is not already present in it, in shell script / bash?

I have to put some data into a file, and the entries should be unique.
Suppose in file1 I have the following data:
ABC
XYZ
PQR
Now I want to add MNO DES ABC; it should only copy "MNO" and "DES", as "ABC" is already present.
file1 should look like
ABC
XYZ
PQR
MNO
DES
(ABC should be there only once.)
Easiest way: this should add the non-matching lines to f1
diff -c f1 f2|grep ^+|awk -F '+ ' '{print $NF}' >> f1
or, if '+ ' might be part of the actual text:
diff -c f1 f2|grep ^+|awk -F '+ ' '{ for(i=2;i<=NF;i++)print $i}' >> f1
Shell script way:
I have a compare script that compares line counts/lengths etc., but for your requirement I think the part below should do the job:
input:
$ cat f1
ABC
XYZ
PQR
$ cat f2
MNO
DES
ABC
Output after running the script:
$ ./compareCopy f1 f2
-----------------------------------------------------
comparing f1 f2
-----------------------------------------------------
Lines check - DONE
$ cat f1
ABC
XYZ
PQR
DES
MNO
#!/bin/sh
if [ "$#" -ne 2 ]; then
echo
echo "Requires two file arguments"
echo "Usage: compareCopy <file1> <file2>"
echo
exit 1
fi
proc="compareCopy"
#sort files for line by line compare
sort "$1" > file1.tmp
sort "$2" > file2.tmp
echo "-----------------------------------------------------"
echo " comparing $1 $2" |tee ${proc}_compare.result
echo "-----------------------------------------------------"
file1_lines=`wc -l $1|cut -d " " -f1`
file2_lines=`wc -l $2|cut -d " " -f1`
#Check each line
x=1
while [ "${x}" -le "${file1_lines}" ]
do
f1_line=`sed -n ${x}p file1.tmp`
f2_line=`sed -n ${x}p file2.tmp`
if [ "${f1_line}" != "${f2_line}" ]; then
echo "line number ${x} don't match in both $1 and $2 files" >> ${proc}_compare.result
echo "$1 line: "${f1_line}"" >> ${proc}_compare.result
echo "$2 line: "${f2_line}"" >> ${proc}_compare.result
# so add this line in file 1
echo $f2_line >> $1
fi
x=$((x + 1))
done
rm -f file1.tmp file2.tmp
echo "Lines check - DONE" |tee -a ${proc}_compare.result
Use fgrep:
fgrep -vxf file1 file2 > file2.tmp && cat file2.tmp >> file1 && rm file2.tmp
which fetches all lines of file2 that do not appear as whole lines in file1 and appends them to file1 (-x restricts matching to whole lines rather than substrings).
You may want to take a look at this post: grep -f maximum number of patterns?
Perl one-liner
file one:
1
2
3
file two:
1
4
3
Print Only Unique Line
perl -lne 'print if ++$n{ $_ } == 1 ' file_one.txt file_two.txt
Or
perl -lne 'print unless ++$n{ $_ } ' file_one.txt file_two.txt
Output:
1
2
3
4
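For comparison (not part of the original answer), the same first-occurrence dedup can be written in awk:

# prints each line the first time it is seen across both files
awk '!seen[$0]++' file_one.txt file_two.txt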
The natural way:
sort -u File1 File2 >Temp && mv Temp File1
The tricky way if the files are already sorted:
comm File1 File2 | awk '{$1=$1};1' >Temp && mv Temp File1
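Note that comm expects sorted input; if File1 and File2 are not already sorted, bash process substitution can sort them on the fly, for example:

comm <(sort File1) <(sort File2) | awk '{$1=$1};1' > Temp && mv Temp File1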

How to concatenate two files and write between them?

I am trying to achieve something like this with a bash script:
c.txt:
contents of a.txt
###
contents of b.txt
Basically I want to write a constant string between the contents of two files and save to a new one without modifying the originals.
This was the closest I could get:
echo "###" >> a.txt|cat b.txt >> out.txt
Using - as a filename usually means to use standard input. Thus:
echo 'something' | cat a.txt - b.txt > new.txt
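Applied to the question's "###" separator and output file, that would be:

echo '###' | cat a.txt - b.txt > c.txt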
You could do it with three commands:
cat a.txt > out.txt
echo "###" >> out.txt
cat b.txt >> out.txt
And perhaps make a function out of it:
append_hash() { cat "$1" > "$3"; echo "###" >> "$3"; cat "$2" >> "$3"; }
Usage:
append_hash a.txt b.txt out.txt
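A command group is another option, redirecting the combined output of the three commands only once (a small sketch):

{ cat a.txt; echo "###"; cat b.txt; } > c.txt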

Taking line intersection of several files

I see that comm can handle 2 files and diff3 can handle 3 files. I want to do this for more files (around 5).
One way:
comm -12 file1 file2 >tmp1
comm -12 tmp1 file3 >tmp2
comm -12 tmp2 file4 >tmp3
comm -12 tmp3 file5
This process could be turned into a script
comm -12 $1 $2 > tmp1
for i in $(seq 3 1 $# 2>/dev/null); do
comm -12 tmp`expr $i - 2` $(eval echo '$'$i) >tmp`expr $i - 1`
done
if [ $# -eq 2 ]; then
cat tmp1
else
cat tmp`expr $i - 1`
fi
rm tmp*
This seems like poorly written code, even to a newbie like me. Is there a better way?
It's quite a bit more convoluted than it has to be. Here's another way of doing it.
#!/bin/bash
# Create some temp files to avoid trashing and deleting tmp* in the directory
tmp=$(mktemp)
result=$(mktemp)
# The intersection of one file is itself
cp "$1" "$result"
shift
# For each additional file, intersect with the intermediate result
for file
do
comm -12 "$file" "$result" > "$tmp" && mv "$tmp" "$result"
done
cat "$result" && rm "$result"

Merge files with sort -m and give error if files not pre-sorted?

I need some help here.
I have two files,
file1.txt >
5555555555
1111111111
7777777777
file2.txt >
0000000000
8888888888
2222222222
4444444444
3333333333
when I run,
$ sort -m file1.txt file2.txt > file-c.txt
the output file-c.txt gets the merged contents of file1 and file2, but it is not sorted.
file-c.txt >
0000000000
5555555555
1111111111
7777777777
8888888888
2222222222
4444444444
3333333333
When this happens I need an error saying that the files (file1 and file2) are not sorted and cannot be merged until they are. So when I run $ sort -m file1.txt file2.txt > file-c.txt, I want an error saying that it cannot merge file1 and file2 into file-c because they are not yet sorted.
Hope you guys understand me :D
If I understand what you're asking, you could do this:
DIFF1=$(diff <(cat file1.txt) <(sort file1.txt))
DIFF2=$(diff <(cat file2.txt) <(sort file2.txt))
if [ "$DIFF1" != "" ]; then
echo 'file1 is not sorted'
elif [ "$DIFF2" != "" ]; then
echo 'file2 is not sorted'
else
sort -m file1.txt file2.txt
fi
This works in Bash (and other shells that support process substitution) and does the following:
Set the DIFF1 variable to the output of a diff of a cat and a sort of file1 (this will be empty if the original and sorted versions are identical, meaning the file is already sorted)
Set the DIFF2 variable in the same manner as DIFF1 but for file2
Do a simple if .. elif .. else to check whether file1 AND file2 are sorted, and if both are, merge the two with sort -m
Is this what you were looking for?
EDIT: Alternatively, per @twalberg, if your version of sort supports it, you can do this:
if ! sort -c file1.txt
then echo 'file1 is not sorted'
elif ! sort -c file2.txt
then echo 'file2 is not sorted'
else
sort -m file1.txt file2.txt
fi
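If this check is needed often, it can be wrapped in a small helper (a sketch; the function name merge_sorted is made up):

# Merge any number of files, refusing if any of them is not sorted.
merge_sorted() {
    for f in "$@"; do
        sort -c "$f" 2>/dev/null || { echo "error: $f is not sorted" >&2; return 1; }
    done
    sort -m "$@"
}

merge_sorted file1.txt file2.txt > file-c.txt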
