For example, I run
sh mycode Manu gg44
and I need to get a file named Manu
with the content:
gg44
198.162.1.2.(second line; I explain this number below)
(In the directory DIR=/h/Manu/HOME/hosts there is already a file Alex:
cat Alex
ff55
198.162.1.1.(second line))
So mycode creates a file named Manu with gg44 on the first line and generates an IP on the second line.
BUT to generate the IP it has to compare against the IP in the Alex file, so the second line of Manu has to be 198.162.1.2. If there is more than one file in the directory, then we have to check the second lines of all of them and generate the new IP accordingly.
[CODE]
DIR=/h/Manu/HOME/hosts # the directory where I have my files (structure of the files above)
for j in $1 $2 # $1 is Manu; $2 is gg44
do
if [ -d $DIR ] # check whether the directory exists (it already does)
then # if it exists
for i in $* # for every file in this directory, do the operation
do
sort /h/ManuHOME/hosts/* | tail -2 | head -1 # get the second line of every file
IFS="." read A B C D # split the number on the second line into 4 parts (e.g. 192.168.1.1.)
if [ "$D" != 255 ] # compare D (which is 1 in our example): if it is less than 255
then
D=`expr $D + 1` # then increment it by 1
else
C=`expr $C + 1` # otherwise increment C and make D=0
D=0
fi
echo "$2 "\n" $A.$B.$C.$D." >/h/Manu/HOME/hosts/$1
done
fi
done # $2 (gg44 in the example) becomes the first line and A.B.C.D the second line[/CODE]
As a result it creates the file named Manu with the first line, but the second line is totally wrong: it gives me ...1.
I also get this error message:
sort: open failed: /h/u15/c2/00/c2rsaldi/HOME/hosts/yu: No such file or directory
yu n ...1.
#!/bin/bash
dir=/h/Manu/HOME/hosts
filename=$dir/$1
firstline=$2
# find the max IP address from all current files:
maxIP=$(awk 'FNR==2' "$dir"/* | cut -d. -f4 | sort -nr | head -1)
ip=198.162.1.$(( maxIP + 1 ))
cat > "$filename" <<END
$firstline
$ip
END
I'll leave it up to you to decide what to do when you get more than 255 files...
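A quick way to sanity-check the logic above is to run it in a throwaway directory (the paths here are illustrative, not the real /h/Manu/HOME/hosts):

```shell
# Demo of the answer's logic in a temporary directory (illustrative paths).
dir=$(mktemp -d)
printf 'ff55\n198.162.1.1.\n' > "$dir/Alex"   # the existing file from the question

firstline=gg44
# second line of every file -> 4th dotted field -> numerically largest
maxIP=$(awk 'FNR==2' "$dir"/* | cut -d. -f4 | sort -nr | head -1)
ip=198.162.1.$(( maxIP + 1 ))
printf '%s\n%s\n' "$firstline" "$ip" > "$dir/Manu"
cat "$dir/Manu"
```

Note that the new file (Manu here) must not already exist when maxIP is computed, or its own second line would be included in the comparison.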
I want to grab the rows containing Subject01, Subject02, ..., Subject50 in the text file and separate them into one file each. Below is the code I tried; the output files come out empty. Can anyone tell me what I am doing wrong?
Subject01/path/here 4
Subject01/path/here 1
Subject02/path/here 3
Subject03/path/here 5
Subject03/path/here 6
...
so one of the output files could look like this:
Subject03/path/here 5
Subject03/path/here 6
Here is the code I tried, and it failed.
#!/bin/sh
subject=Subject
for i in {01..50}
do
awk '{ if ($1 == "${subject}${i}") { print } }' output-0 > output-0-sub-$i
done
You can simply use grep for that:
for f in {01..10}; do
grep "Subject$f" inputFile.txt >> output-0-sub-$f
if [[ ! -s output-0-sub-${f} ]] ; then
rm output-0-sub-$f
fi
done
The if condition checks whether the file is empty and, if so, deletes it.
You could also add the -f test to check whether the file exists, but that depends on how your script works.
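For completeness: the original awk attempt produced empty files because shell variables are not expanded inside single quotes, so awk compared $1 against the literal string ${subject}${i}. Also, since the subject is only a prefix of the first field (Subject01/path/here), an equality test would never match anyway. A sketch of a fixed awk version, using -v and a prefix test on sample data:

```shell
cd "$(mktemp -d)"
# Sample input resembling the question's data (paths are illustrative)
cat > output-0 <<'EOF'
Subject01/path/here 4
Subject01/path/here 1
Subject02/path/here 3
EOF

subject=Subject
for i in {01..02}; do     # the real script would use {01..50} (bash brace range)
  # -v passes the shell value into awk; index(...) == 1 tests a prefix match
  awk -v pat="${subject}${i}" 'index($1, pat) == 1' output-0 > "output-0-sub-$i"
done
cat output-0-sub-02
```

Note the zero-padded brace range {01..50} needs bash (version 4 or later); with #!/bin/sh it may not expand at all, which is a second reason the original loop misbehaved.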
Hello, I need help with a script on a Solaris system.
I will explain the script step by step:
I have these files:
i)
cat /tmp/BadTransactions/TRANSACTIONS_DAILY_20180730.txt
201807300000000004
201807300000000005
201807300000000006
201807300000000007
201807300000000008
201807200002056422
201807230003099849
201807230003958306
201806290003097219
201806080001062012
201806110001633519
201806110001675603
ii)
cat /tmp/BadTransactions/test_data_for_validation_script.txt
20180720|201807200002056422||57413620344272|030341-213T |580463|WIRE||EUR|EUR|20180720|20180720|||||||00000000000019.90|00000000000019.90|Debit||||||||||MPA|||574000|129|||||||||||||||||||||||||31313001103712|BFNJKL|K| I P P BONNIER PUBLICATIO|||FI|PERS7
20180723|201807230003099849||57100440165173|140197-216U|593619|WIRE||EUR|EUR|20180723|20180723|||||||00000000000060.00|00000000000060.00|Debit||||||||||MPA|||571004|106|||||||||||||||||||||||||57108320141339|Ura Basket / UraNaiset|||-div|||FI|PERS
20180723|201807230003958306||57206820079775|210489-0788|593619|WIRE||EUR|EUR|20180721|20180723|||||||00000000000046.00|00000000000046.00|Debit||||||||||MPA|||578800|106|||||||||||||||||||||||||18053000009026|IC Kodit||| c/o Newsec Asset Manag|||FI|PERS
20180629|201806290003097219||57206820079775|210489-0788|593619|WIRE||EUR|EUR|20180628|20180629|||||||00000000000856.00|00000000000856.00|Debit||||||||||MPA|||578800|106|||||||||||||||||||||||||18053000009018|IC Kodit||| c/o Newsec Asset Manag|||FI|PERS
20180608|201806080001062012||57206820079441|140197-216S|580463|WIRE||EUR|EUR|20180608|20180608|||||||00000000000019.90|00000000000019.90|Debit||||||||||MPA|||541002|129|||||||||||||||||||||||||57108320141339|N FN|K| IKI I P BONNIER PUBLICATION|||FI|PERS7
20180611|201806110001633519||57206820079525|140197-216B|593619|WIRE||EUR|EUR|20180611|20180611|||||||00000000000242.10|00000000000242.10|Debit||||||||||MPA|||535806|106|||||||||||||||||||||||||57108320141339|As Oy Haikkoonsilta|| mannerheimin|||FI|PERS9
20180611|201806110001675603||57206820079092|140197-216Z|580463|WIRE||EUR|EUR|20180611|20180611|||||||00000000000019.90|00000000000019.90|Debit||||||||||MPA|||536501|129|||||||||||||||||||||||||57108320141339|N ^NLKL|K| I P NJ BONNIER PUBLICAT|||FI|PERS7
The script has to check each line of
/tmp/BadTransactions/TRANSACTIONS_DAILY_20180730.txt, and if those strings appear in
/tmp/BadTransactions/test_data_for_validation_script.txt, it will create a
new file /tmp/BadTransactions/TRANSACTIONS_DAILY_NEW_20180730.txt.
In this new file it will count all the "|" in each line, and if there are more than 64 it will delete the "|" in the 61st position of the line. This is repeated until the line has 64 pipes.
For example, if one line has 67 "|", it will delete the 61st; then it will check again, and now the line has 66 "|", so it will delete the 61st again, and so on until it reaches 64 pipes. So every line has to end up with 64 "|".
Here is my code. With it I have managed to delete only the 61st pipe in each line; I cannot build the loop so that it keeps checking each line until it reaches 64 pipes.
I would appreciate any help.
#!/bin/bash
PATH=/usr/xpg4/bin:/bin:/usr/bin
while read line
do
grep "$line" /tmp/BadTransactions/test_data_for_validation_script.txt
awk 'NR==FNR { K[$1]; next } ($2 in K)' /tmp/BadTransactions/TRANSACTIONS_DAILY_20180730.txt FS="|" /opt/NorkomConfigS2/inbox/TRANSACTIONS_DAILY_20180730.txt > /tmp/BadTransactions/TRANSACTIONS_DAILY_NEW_20180730.txt
sed '/\([^|]*[|]\)\{65\}/ s/|//61' /tmp/BadTransactions/TRANSACTIONS_DAILY_NEW_20180730.txt
done < /tmp/BadTransactions/TRANSACTIONS_DAILY_20180730.txt > /tmp/BadTransactions/TRANSACTIONS_DAILY_NEW_20180730.txt
OK, in this problem you have several steps:
You need to read a file line by line
Check each line against another file
Count the occurrences of "|" in the matching line
Repeatedly delete the 61st "|" until the string is left with 64 of them
You could do something like this:
#!/bin/bash
count() { ### We will use this to count how many pipes are there
string="${1}"; shift
char="${1}"
printf "%s" "${string}" | grep -o -e "${char}" | grep -c .
}
file1="/tmp/BadTransactions/TRANSACTIONS_DAILY_20180730.txt" ### File to read
file2="/tmp/BadTransactions/test_data_for_validation_script.txt" ### File to check for duplicates
file3="/tmp/BadTransactions/TRANSACTIONS_DAILY_NEW_20180730.txt" ### File where to save our final work
printf "" > "${file3}" ### Delete (eventual) history
exec 3<"${file1}" ### Put our data in file descriptor 3
while read -r line <&3; do ### read each line and put it in var "$line"
string="$(grep -e "${line}" "${file2}")" ### Check the line against second file
while [ "$(count "${string}" "|")" -gt 64 ]; do ### While we have more than 64 "|"
string="$(printf "%s" "${string}" | sed -e "s/|//61")" ### Delete the 61st occurrence
done
printf "%s" "${string}" >> "${file3}" ### Save the correct line in the third file
done
exec 3>&- ### Clean file descriptor 3
This is not tested, but should work.
N.B. Please note that I am taking it for granted that grep will return only one matching line from the second file...
If it is not your case you have to manually check each value with something like:
while IFS= read -r value; do
...
done < <(grep -e "${line}" "${file2}")
EDIT:
For systems like Solaris, or others that don't have GNU grep installed, you can replace the count function as follows:
count() {
string="${1}"; shift
char="${1}"
printf "%s" "${string}" | awk -F"${char}" '{print NF-1}'
}
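A quick usage check of the count helper (either variant), just to show what it returns:

```shell
# The awk-based count helper from the answer, plus a sample call
count() {
  string="${1}"; shift
  char="${1}"
  # split on the character and report the number of separators seen
  printf "%s" "${string}" | awk -F"${char}" '{print NF-1}'
}
count "a|b|c|d" "|"
```

One caveat: on empty input awk prints nothing (where the grep -c variant prints 0), so if grep finds no matching line the -gt 64 test would see an empty operand; guard for that case if unmatched lines are possible.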
I have a reference file with device names in it, for example WABEL8499IPM101. I'm using this script to take the base name (without the last 3 digits), look it up in the reference file, and see what is already used. If 101 is used, it will create a file for me with 102 and 103 if I request 2 in total. I'm looking to use an input file to run it multiple times. I'm also trying to figure out how to start at 101 if no name is found when searching the reference file.
I would like to loop this using an input file instead of manually entering bash test.sh WABEL8499IPM 2 each time. I would like to be able to build an input file of all the names that need to be compared, and then get the output. It would also be nice if, when there isn't a match, it started creating names at WABEL8499IPM101 instead of just WABEL8499IPM1.
Input file example:
ColumnA (BASE NAME) ColumnB (QUANTITY)
WABEL8499IPM 2
Script:
SRCFILE="~/Desktop/deviceinfo.csv"
LOGDIR="~/Desktop/"
LOGFILE="$LOGDIR/DeviceNames.csv"
# base name, such as "WABEL8499IPM"
device_name=$1
# quantity, such as "2"
quantityNum=$2
# the largest in sequence, such as "WABEL8499IPM108"
max_sequence_name=$(cat $SRCFILE | grep -o -e "$device_name[0-9]*" | sort --reverse | head -n 1)
# extract the last 3digit number (such as "108") from max_sequence_name
max_sequence_num=$(echo $max_sequence_name | rev | cut -c 1-3 | rev)
# create new sequence_name
# such as ["WABEL8499IPM109", "WABEL8499IPM110"]
array_new_sequence_name=()
for i in $(seq 1 $quantityNum);
do
cnum=$((max_sequence_num + i))
array_new_sequence_name+=($(echo $device_name$cnum))
done
#CODE FOR CREATING OUTPUT FILE HERE
#for fn in ${array_new_sequence_name[@]}; do touch $fn; done;
# write log
for sqn in ${array_new_sequence_name[@]};
do
echo $sqn >> $LOGFILE
done
Usage:
bash test.sh WABEL8499IPM 2
Result in the log file:
WABEL8499IPM109
WABEL8499IPM110
Just wrap a loop around your code instead of assuming the args come in on the command line.
SRCFILE=~/Desktop/deviceinfo.csv
LOGDIR=~/Desktop
LOGFILE="$LOGDIR/DeviceNames.csv"
while read device_name quantityNum
do max_sequence_name=$( grep -o -e "$device_name[0-9]*" $SRCFILE |
sort --reverse | head -n 1)
max_sequence_num=${max_sequence_name: -3}
array_new_sequence_name=()
for i in $(seq 1 $quantityNum)
do cnum=$((max_sequence_num + i))
array_new_sequence_name+=("$device_name$cnum")
done
for sqn in ${array_new_sequence_name[@]};
do echo $sqn >> $LOGFILE
done
done < input.file
I'd maybe pass the input file as the parameter now.
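One piece the loop above still doesn't cover is the questioner's request to start at 101 when the base name isn't in the reference file yet: in that case max_sequence_name comes back empty and the arithmetic treats it as 0. Defaulting the extracted number to 100 fixes that; a sketch with illustrative values:

```shell
device_name=WABEL8499IPM     # base name from the input file
max_sequence_name=""         # what the grep pipeline yields when nothing matches
max_sequence_num=${max_sequence_name: -3}    # empty string when there was no match
max_sequence_num=${max_sequence_num:-100}    # default so the first name ends in 101
cnum=$((max_sequence_num + 1))
echo "$device_name$cnum"
```

The space in ${max_sequence_name: -3} is required, otherwise bash reads it as the :- default-value operator.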
I have files in a directory stored as
abc123.0000.pdb
abc123.0001.pdb
abc123.0002.pdb
.
.
.
abc123.0456.pdb
Note "abc123" is arbitrary and so is the number "0456". I figured that I can get the largest filename using
\ls | tail -1
But how do I obtain the digits "456" only without the padded zeros and store it as a variable in a bash script?
awk is a good tool for this:
ls | awk '
{
if(match($0, /^.*\.0*([0-9]+)\.pdb$/, a)) {
if(max <= a[1]) {
max = a[1]
}
}
}END{
print max
}'
Each line of the input (from ls in this case) is run through the regex /^.*\.0*([0-9]+)\.pdb$/, which matches any digits (without leading zeros) directly after a . and before a .pdb extension. If the match is successful, the number is captured into a[1] and compared with max. At the end, the largest number is printed, or nothing if no matches were found.
This can also be run in a single line:
ls | awk '{if(match($0,/^.*\.0*([0-9]+)\.pdb$/,a)){if(max<=a[1]){max=a[1]}}}END{print max}'
This is more robust than your solution of ls | tail -1 | egrep -o [1-9]+ | tail -1, which will fail if:
A file such as z.txt is added to the directory.
The last file has a 0 in the middle or end of the number, such as abc123.0101.pdb or abc123.0010.pdb.
The numbers go above 9999. For example if abc123.9999.pdb and abc123.10000.pdb exist, abc123.9999.pdb may be sorted last by ls.
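One portability note: the three-argument match($0, regex, a) used above is a GNU awk (gawk) extension; on a plain POSIX awk it is a syntax error. The same extraction works with two-argument match() plus substr(), relying on adding 0 to strip the leading zeros. A sketch, run against sample file names in a scratch directory:

```shell
cd "$(mktemp -d)"
touch abc123.0000.pdb abc123.0001.pdb abc123.0456.pdb
max=$(ls | awk '
  match($0, /\.[0-9]+\.pdb$/) {
    # RSTART/RLENGTH cover e.g. ".0456.pdb"; trim the leading dot and
    # the 4-character ".pdb", then add 0 to force a number (drops zeros)
    n = substr($0, RSTART + 1, RLENGTH - 5) + 0
    if (n >= max) max = n
  }
  END { print max }')
echo "$max"
```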
Try this:
regex='\.0*([0-9]+)\.pdb$'
for f in ./*.pdb; do
    [[ $f =~ $regex ]] && echo "for file $f: ${BASH_REMATCH[1]}"
done
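To get the single largest number into a variable with that bash-regex approach, you can track a maximum across the loop; the 10# prefix forces base 10 in case the captured digits ever carry leading zeros (a sketch against scratch files):

```shell
cd "$(mktemp -d)"
touch abc123.0000.pdb abc123.0042.pdb abc123.0456.pdb
regex='\.0*([0-9]+)\.pdb$'
max=0
for f in ./*.pdb; do
  if [[ $f =~ $regex ]]; then
    n=$(( 10#${BASH_REMATCH[1]} ))   # 10# guards against octal interpretation
    if (( n > max )); then max=$n; fi
  fi
done
echo "$max"
```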
I have a large directory of data files which I am in the process of manipulating to get them in a desired format. They each begin and end 15 lines too soon, meaning I need to strip the first 15 lines off one file and paste them to the end of the previous file in the sequence.
To begin, I have written the following code to separate the relevant data into easy chunks:
#!/bin/bash
destination='media/user/directory/'
for file1 in `ls $destination*.ascii`
do
echo $file1
file2="${file1}.end"
file3="${file1}.snip"
sed -e '16,$d' $file1 > $file2
sed -e '1,15d' $file1 > $file3
done
This worked perfectly, so the next step is the worlds simplest cat command:
cat $file3 $file2 > outfile
However, what I need to do is to stitch file2 to the previous file3.
The files are all sequential over time:
*_20090412T235945_20090413T235944_* ### April 13
*_20090413T235945_20090414T235944_* ### April 14
So I need to take the 15 lines snipped off the April 14 example above and paste it to the end of the April 13 example.
This doesn't have to be part of the original code, in fact it would be probably best if it weren't. I was just hoping someone would be able to help me get this going.
Thanks in advance! If there is anything I have been unclear about and needs further explanation please let me know.
"I need to strip the first 15 lines off one file and paste them to the end of the previous file in the sequence."
If I understand what you want correctly, it can be done with one line of code:
awk 'NR==1 || FNR==16{close(f); f=FILENAME ".new"} {print>f}' file1 file2 file3
When this has run, the files file1.new, file2.new, and file3.new will be in the new form with the lines transferred. Of course, you are not limited to three files: you may specify as many as you like on the command line.
Example
To keep our example short, let's just strip the first 2 lines instead of 15. Consider these test files:
$ cat file1
1
2
3
$ cat file2
4
5
6
7
8
$ cat file3
9
10
11
12
13
14
15
Here is the result of running our command:
$ awk 'NR==1 || FNR==3{close(f); f=FILENAME ".new"} {print>f}' file1 file2 file3
$ cat file1.new
1
2
3
4
5
$ cat file2.new
6
7
8
9
10
$ cat file3.new
11
12
13
14
15
As you can see, the first two lines of each file have been transferred to the preceding file.
How it works
awk implicitly reads each file line-by-line. The job of our code is to choose which new file a line should be written to based on its line number. The variable f will contain the name of the file that we are writing to.
NR==1 || FNR==16{f=FILENAME ".new"}
When we are reading the first line of the first file, NR==1, or when we are reading the 16th line of whatever file we are on, FNR==16, we update f to be the name of the current file with .new added to the end.
For the short example, which transferred 2 lines instead of 15, we used the same code but with FNR==16 replaced with FNR==3.
print>f
This prints the current line to file f.
(If this was a shell script, we would use >>. This is not a shell script. This is awk.)
Using a glob to specify the file names
destination='media/user/directory/'
awk 'NR==1 || FNR==16{close(f); f=FILENAME ".new"} {print>f}' "$destination"*.ascii
Your task is not that difficult at all. You want to gather a list of all _end files in the directory (using a for loop and globbing, NOT looping over the results of ls). Once you have all the _end files, you simply parse the dates out of each name using parameter expansion with substring removal, say into d1 and d2 for date1 and date2 in:
stuff_20090413T235945_20090414T235944_end
| d1 | | d2 |
then you simply subtract 1 from d1 into, say, date0 or d0, and construct the previous filename out of d0 and d1, using _snip instead of _end. Then just test for the existence of that previous _snip filename, and if it exists, paste your info from the current _end file onto the previous _snip file, e.g.
#!/bin/bash
for i in *end; do ## find all _end files
d1="${i#*stuff_}" ## isolate first date in filename
d1="${d1%%T*}"
d2="${i%T*}" ## isolate second date
d2="${d2##*_}"
d0=$((d1 - 1)) ## subtract 1 from first, get snip d1
prev="${i/$d1/$d0}" ## create previous 'snip' filename
prev="${prev/$d2/$d1}"
prev="${prev%end}snip"
if [ -f "$prev" ] ## test that prev snip file exists
then
printf "paste to : %s\n" "$prev"
printf " from : %s\n\n" "$i"
fi
done
Test Input Files
$ ls -1
stuff_20090413T235945_20090414T235944_end
stuff_20090413T235945_20090414T235944_snip
stuff_20090414T235945_20090415T235944_end
stuff_20090414T235945_20090415T235944_snip
stuff_20090415T235945_20090416T235944_end
stuff_20090415T235945_20090416T235944_snip
stuff_20090416T235945_20090417T235944_end
stuff_20090416T235945_20090417T235944_snip
stuff_20090417T235945_20090418T235944_end
stuff_20090417T235945_20090418T235944_snip
stuff_20090418T235945_20090419T235944_end
stuff_20090418T235945_20090419T235944_snip
Example Use/Output
$ bash endsnip.sh
paste to : stuff_20090413T235945_20090414T235944_snip
from : stuff_20090414T235945_20090415T235944_end
paste to : stuff_20090414T235945_20090415T235944_snip
from : stuff_20090415T235945_20090416T235944_end
paste to : stuff_20090415T235945_20090416T235944_snip
from : stuff_20090416T235945_20090417T235944_end
paste to : stuff_20090416T235945_20090417T235944_snip
from : stuff_20090417T235945_20090418T235944_end
paste to : stuff_20090417T235945_20090418T235944_snip
from : stuff_20090418T235945_20090419T235944_end
(of course replace stuff_ with your actual prefix)
Let me know if you have questions.
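The printf calls in the loop are placeholders for the real transfer. Assuming the pairing is right, the actual paste step is just an append; here is a minimal sketch of that step on two illustrative files. (Note also that plain $((d1 - 1)) arithmetic on a YYYYMMDD value breaks across month boundaries, e.g. 20090801 - 1 = 20090800, so a date-aware tool would be safer in the general case.)

```shell
cd "$(mktemp -d)"
printf 'tail of day 1\n' > stuff_20090413T235945_20090414T235944_snip
printf 'head of day 2\n' > stuff_20090414T235945_20090415T235944_end
i=stuff_20090414T235945_20090415T235944_end      # current _end file
prev=stuff_20090413T235945_20090414T235944_snip  # previous _snip file
if [ -f "$prev" ]; then
  cat "$i" >> "$prev"   # append the stripped lines to the previous file
  rm "$i"               # the _end piece has been merged; remove it
fi
cat "$prev"
```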
You could store the previous $file3 value in a variable (and check that it is not the first run before concatenating):
#!/bin/bash
destination='media/user/directory/'
prev=""
for file1 in $destination*.ascii
do
echo $file1
file2="${file1}.end"
file3="${file1}.snip"
sed -e '16,$d' $file1 > $file2
sed -e '1,15d' $file1 > $file3
if [ -n "$prev" ]; then
cat "$prev" "$file2" > "${prev}.joined" # one merged file per pair (name is illustrative)
fi
prev=$file3
done