Basically, how do I make a string substitution in which the substituted string is transformed by an external command?
For example, given the line 5aaecdab287c90c50da70455de03fd1e ./2015/01/26/GOPR0083.MP4, how do I pipe the second part of the line (./2015/01/26/GOPR0083.MP4) to the command xargs stat -c %.6Y and then replace it with the result, so that we end up with 5aaecdab287c90c50da70455de03fd1e 1422296624.010000?
This can be done with a script; however, a one-liner would be nice.
#!/bin/bash
hashtime()
{
    while read -r longhex fname; do
        echo "$longhex $(stat -c %.6Y "$fname")"
    done
}
if [ $# -ne 1 ]; then
    echo "Usage: ${0##*/} infile" 1>&2
    exit 1
fi
hashtime < "$1"
exit 0
# one liner
awk 'BEGIN { args="stat -c %.6Y " } { printf "%s ", $1; cmd=args $2; system(cmd); }' infile
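A variant of the same idea captures the command's output with getline instead of letting system() print it, and quotes the filename against shell metacharacters (a sketch; it still assumes the path contains no double quotes):
awk '{ cmd = "stat -c %.6Y \"" $2 "\""; cmd | getline t; close(cmd); print $1, t }' infile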
A one-liner using GNU sed, which will process the whole file:
sed -E "s/([[:xdigit:]]+) +(.*)/stat -c '\1 %.6Y' '\2'/e" file
The e flag is a GNU extension: it executes the pattern space as a shell command and replaces it with the command's output.
or, using plain bash
while read -r hash pathname; do stat -c "$hash %.6Y" "$pathname"; done < file
It's typical to use awk, sed, or cut to reformat input. For example:
line="5aaecdab287c90c50da70455de03fd1e ./2015/01/26/GOPR0083.MP4"
echo "$line" |
cut -d' ' -f2- |
xargs stat -c %.6Y
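That prints only the timestamp for a single line. To process a whole file while keeping the hash column, one possible sketch (assumes GNU xargs and stat, and a single space between the two columns as in the example) pairs the columns back up with paste:
paste -d' ' <(cut -d' ' -f1 file) <(cut -d' ' -f2- file | xargs -d '\n' stat -c %.6Y)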
I need to add spaces at the end of each line except the header lines. Below is an example of my file:
13120000005000002100000000000000000000081D000
231200000000000 000 00XY018710V000000000
231200000000000 000 00XY018710V000000000
13120000012000007000000000000000000000081D000
231200000000000 000 00XY057119V000000000
So the 1st and 4th lines (starting with 131200) are my header lines. Except for the header lines, I want 7-8 spaces at the end of each line.
Please find the code that I am currently using:
find_list=$(find *.dat -type f)
Filename='*.dat'
filename='xyz'
for file in $find_list
do
    sed -i -e 's/\r$/ /' "$file"
    n=1
    loopcounterpre=""
    newfile=$(echo "$filename" | sed -e 's/\.[^.]*$//')".dat"
    while read -r line
    do
        if [[ $line != *[[:space:]]* ]]
        then
            rowdetail=$line
            loopcounter=$(echo "$rowdetail" | cut -b 1-6)
            if [[ "$loopcounterpre" == "$loopcounter" ]]
            then
                loopcounterpre=$loopcounter
                # Increases the counter in the order 001, 002 and so on until the pay entity changes
                n=$((n+1))
            else
                # Resets the counter to 1 when the pay entity changes
                loopcounterpre=$loopcounter
                n=1
            fi
            printf -v m "%03d" "$n"
            llen=${#rowdetail}
            rowdetailT=$(echo "$rowdetail" | cut -b 1-$((llen-3)))
            ip=$rowdetailT$m
            echo "$ip" >> "$newfile"
        else
            rowdetail=$line
            echo "$rowdetail" >> "$newfile"
        fi
    done < "$file"
done
The entire script can be replaced with one line of GNU sed:
sed -s -i '/^131200\|^1351000/!s/$/ /' $(find *.dat -type f)
Here -s treats each input file as a separate stream, -i edits the files in place, and every line not starting with a header prefix gets a trailing space appended (widen the replacement string if you need 7-8 spaces).
Using awk:
$ awk '{print $0 ($0~/^(131200|1351000)/?"":" ")}' file
This prints the current record $0 and appends the result of the ternary: if the record matches $0~/^(131200|1351000)/ (a header line), it appends "" (nothing); otherwise (:) it appends " " (use a longer string of spaces if you need 7-8).
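To verify the result, pipe the output through cat -A, which marks each line end with $, so the appended trailing space becomes visible:
$ awk '{print $0 ($0~/^(131200|1351000)/?"":" ")}' file | cat -A | head -2
13120000005000002100000000000000000000081D000$
231200000000000 000 00XY018710V000000000 $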
I'm trying to put together a bash script that will search a bunch of files and if it finds a particular string in a file, it will add a new line on the line after that string and then move on to the next file.
#!/bin/bash
echo "Creating variables"
SEARCHDIR=testfile
LINENUM=1
find $SEARCHDIR* -type f -name *.xml | while read i; do
    echo "Checking $i"
    ISBE=`cat $i | grep STRING_TO_SEARCH_FOR`
    if [[ $ISBE =~ "STRING_TO_SEARCH_FOR" ]] ; then
        echo "found $i"
        cat $i | while read LINE; do
            ((LINENUM=LINENUM+1))
            if [[ $LINE == "<STRING_TO_SEARCH_FOR>" ]] ; then
                echo "editing $i"
                awk -v "n=$LINENUM" -v "s=new line to insert" '(NR==n) { print s } 1' $i
            fi
        done
    fi
    LINENUM=1
done
the bit I'm having trouble with is
awk -v "n=$LINENUM" -v "s=new line to insert" '(NR==n) { print s } 1' $i
If I just use $i at the end, it outputs the content to the screen; if I use $i > $i, it just erases the file; and if I use $i >> $i, it gets stuck in a loop until the disk fills up.
Any suggestions?
Unfortunately, awk doesn't have an in-place editing option like sed's -i, so you have to write to a temp file and then move it over the original:
awk '{commands}' file > tmpfile && mv tmpfile file
or, if you have GNU awk 4.1.0 or newer, the -i inplace option is available, so you can do:
awk -i inplace '{commands}' file
to modify the original
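Applied to this question, a sketch of the in-place approach (assumes GNU awk ≥ 4.1; the search string and inserted text are the question's placeholders):
gawk -i inplace -v s="new line to insert" '{ print } /STRING_TO_SEARCH_FOR/ { print s }' file.xml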
#cat $i | while read LINE; do
# ((LINENUM=LINENUM+1))
# if [[ $LINE == "<STRING_TO_SEARCH_FOR>" ]] ; then
# echo "editing $i"
# awk -v "n=$LINENUM" -v "s=new line to insert" '(NR==n) { print s } 1' $i
# fi
# done
# replaced by
sed -i 's/STRING_TO_SEARCH_FOR/&\nnew line to insert/g' ${i}
(& stands for the matched string, and \n starts the new line of text after it)
or use awk in place of sed
also
# ISBE=`cat $i | grep STRING_TO_SEARCH_FOR`
# if [[ $ISBE =~ "STRING_TO_SEARCH_FOR" ]] ; then
#by
if grep -q 'STRING_TO_SEARCH_FOR' ${i}; then
# if the files are huge, running sed on them directly (without the grep test) will be faster, but you lose the echo reporting which file matched
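Putting the two ideas together, a possible sketch (assumes GNU grep and sed; the search string and inserted text are the question's placeholders):
grep -rl --include='*.xml' 'STRING_TO_SEARCH_FOR' "$SEARCHDIR" | while read -r f; do
    echo "editing $f"
    sed -i '/STRING_TO_SEARCH_FOR/a new line to insert' "$f"
done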
If you can, maybe use a temporary file?
~$ awk ... $i > tmpfile
~$ mv tmpfile $i
Or simply awk ... $i > tmpfile && mv tmpfile $i
Note that you can use mktemp to create this temporary file.
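For example, with the question's awk command (a sketch using the question's variables):
tmpfile=$(mktemp)
awk -v "n=$LINENUM" -v "s=new line to insert" '(NR==n) { print s } 1' "$i" > "$tmpfile" && mv "$tmpfile" "$i"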
Otherwise, with sed you can insert a line right after a match:
~$ cat f
auie
nrst
abcd
efgh
1234
~$ sed '/abcd/{a\
new_line
}' f
auie
nrst
abcd
new_line
efgh
1234
The command checks whether the line matches /abcd/; if so, it appends (a\) the line new_line.
And since sed has the -i option to replace inline, you can do:
if [[ $ISBE =~ "STRING_TO_SEARCH_FOR" ]] ; then
echo "found $i"
echo "editing $i"
sed -i "/STRING_TO_SEARCH_FOR/{a
\new line to insert
}" $i
fi
I want to convert a column of data in a txt file into a row of a csv file using unix commands.
example:
ApplChk1,
ApplChk2,
v_baseLoanAmountTI,
v_plannedClosingDateField,
downPaymentTI,
This is a column present in a txt file.
I want the output in a csv file as follows:
ApplChk1,ApplChk2,v_baseLoanAmountTI,v_plannedClosingDateField,downPaymentTI,
Please let me know how to do it.
Thanks in advance
If that's a single column that you want to convert to a row, there are many possibilities:
tr -d '\n' < filename ; echo # option 1 OR
xargs echo -n < filename ; echo # option 2 (This option however, will shrink spaces & eat quotes) OR
while read x; do echo -n "$x" ; done < filename; echo # option 3
Please let us know what the input would look like in the multi-line case.
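Another possibility, if awk is acceptable (a sketch; it prints each line without its newline and adds a single final newline):
awk '{ printf "%s", $0 } END { print "" }' filename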
A funny pure bash solution (bash ≥ 4.1):
mapfile -t < file.txt; printf '%s' "${MAPFILE[@]}" $'\n'
mapfile -t reads the lines into the MAPFILE array with their newlines trimmed; printf then concatenates all the elements and ends with a single newline.
Done!
for i in `< file.txt` ; do echo -n $i; done; echo ""
gives the output
ApplChk1,ApplChk2,v_baseLoanAmountTI,v_plannedClosingDateField,downPaymentTI,
To send output to a file:
{ for i in `< file.txt` ; do echo -n $i ; done; echo; } > out.csv
When I run it, this is what happens:
[jenny@jennys:tmp]$ more file.txt
ApplChk1,
ApplChk2,
v_baseLoanAmountTI,
v_plannedClosingDateField,
downPaymentTI,
[jenny@jennys:tmp]$ { for i in `< file.txt` ; do echo -n $i ; done; echo; } > out.csv
[jenny@jennys:tmp]$ more out.csv
ApplChk1,ApplChk2,v_baseLoanAmountTI,v_plannedClosingDateField,downPaymentTI,
perl -pe 's/\n//g' your_file
The above prints to stdout.
If you want to edit the file in place:
perl -pi -e 's/\n//g' your_file
You could use sed to replace the line breaks (\n) with commas or spaces. The -z option (a GNU extension) makes sed read NUL-delimited records, so the whole file arrives as one block and every \n is visible to the substitution:
sed -z 's/\n/,/g' test.txt > test.csv
You could also add the -i option if you want to change the file in place:
sed -i -z 's/\n/,/g' test.txt
After writing some unix scripts I managed to get data from different xml files into csv format, and now I am stuck with the following problem.
file1.csv : contains
1,5,6,7,8
2,3,4,5,9
1,6,10,11,12
1,5,11,12
file2.csv : contains
1,Mango,Tuna,Webby,Through,Franky,Sam,Sumo
2,Franky
3,Sam
4,Sumo
5,Mango,Tuna,Webby
6,Tuna,Webby,Through
7,Through,Sam,Sumo
8,Nothing
9,Sam,Sumo
10,Sumo,Mango,Tuna
11,Mango,Tuna,Webby,Through
12,Mango,Tuna,Webby,Through,Franky
The output I want is:
1,5,6,7,8
Mango,Tuna,Webby,Through,Franky,Sam,Sumo
Mango,Tuna,Webby
Tuna,Webby,Through
Through,Sam,Sumo
Nothing
Common word:None
2,3,4,5,9
Franky
Sam
Sumo
Mango,Tuna,Webby
Sam, Sumo
Common Word:None
1,6,10,11,12
Mango,Tuna,Webby,Through,Franky,Sam,Sumo
Tuna,Webby,Through
Sumo,Mango,Tuna
Mango,Tuna,Webby,Through
Mango,Tuna,Webby,Through,Franky
Common word: Tuna
1,5,11,12
Mango,Tuna,Webby,Through,Franky,Sam,Sumo
Mango,Tuna,Webby
Mango,Tuna,Webby,Through
Mango,Tuna,Webby,Through,Franky
Common word: Mango,Tuna,Webby
I appreciate any help.
Thanks
I have a solution, but it's not complete:
#!/bin/bash
count=1
count_2=1
for i in `cat file1.csv`
do
    echo $i > $count.txt
    cat $count.txt | tr "," "\n" > $count_2.txt
    count=`expr $count + 1`
    count_2=`expr $count_2 + 1`
done
# this code will create separate files for each line in file1.csv
bash file3_search.sh
bash file3_search.sh
##########################
file3_search.sh
================
#!/bin/bash
cat file2.csv | sed '/^$/d' | sed 's/[ ]*$//' > trim.txt
dos2unix -q 1.txt 1.txt
dos2unix 2.txt 2.txt
dos2unix 3.txt 3.txt
echo "1st Combination results"
for i in `cat 1.txt`
do
    cat trim.txt | egrep -w $i
done > Combination1.txt
echo "2nd Combination results"
for i in `cat 2.txt`
do
    cat trim.txt | egrep -w $i
done > Combination2.txt
echo "3rd Combination results"
for i in `cat 3.txt`
do
    cat trim.txt | egrep -w $i
done > Combination3.txt
I am not good at programming (I am a software tester). Could someone please refactor my code, and also tell me how to find the common word in each Combination.txt file?
IMHO it works:
for line in $(cat 1.csv) ; do
    echo $line
    # turn a line like 1,5,6,7,8 into the regex ^(1,|5,|6,|7,|8,)
    grepline=`echo $line | sed 's/ \+//g;s/,/,|/g;s/^\(.*\)$/^(\1,)/'`
    egrep "$grepline" 2.csv
    egrep "$grepline" 2.csv | \
    awk -F "," '
        { for (i=2;i<=NF;i++)
            {s[$i]+=1}
        }
        END { for (key in s)
                {if (s[key]==NR) { tp=tp key "," }
              }
              if (tp!="") {print "Common word(s): " gensub(/,$/,"","g",tp)}
              else {print "Common word: None"}}'
    echo
done
Note that gensub is specific to GNU awk.
HTH
Here's an answer for you. It depends on the associative array capabilities of bash version 4:
IFS=,
declare -a words
# read and store the words in file2
while read line; do
    set -- $line
    n=$1
    shift
    words[$n]="$*"
done < file2.csv
# read file1 and process
while read line; do
    echo "$line"
    set -- $line
    indexes=( "$@" )
    NF=${#indexes[@]}
    declare -A common
    for (( i=0; i<$NF; i++ )); do
        echo "${words[${indexes[$i]}]}"
        set -- ${words[${indexes[$i]}]}
        for word; do
            common[$word]=$(( ${common[$word]} + 1 ))
        done
    done
    printf "Common words: "
    n=0
    for word in "${!common[@]}"; do
        if [[ ${common[$word]} -eq $NF ]]; then
            printf "%s " $word
            (( n++ ))
        fi
    done
    [[ $n -eq 0 ]] && printf "None"
    unset common
    printf "\n\n"
done < file1.csv
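To try it, save the script (here called common.sh, a name chosen only for illustration) next to file1.csv and file2.csv and run it. For the first group the output looks like this:
$ bash common.sh
1,5,6,7,8
Mango,Tuna,Webby,Through,Franky,Sam,Sumo
Mango,Tuna,Webby
Tuna,Webby,Through
Through,Sam,Sumo
Nothing
Common words: None
...
(When common words do exist, the order they are printed in depends on bash's associative-array iteration order.)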