Let's say that I have:
$ cat FILENAME1.txt
Definition john
$ cat FILENAME2.txt
Definition mary
$ cat FILENAME3.txt
Definition gary
$ cat textfile.edited
text
text
text
I want to obtain an output like:
1 john text
2 mary text
3 gary text
I tried to use "stored" values from the FILENAMEs, "generated" by a loop. I wrote this:
for file in $(ls *.txt); do
name=$(cat $file| grep -i Definition|awk '{$1="";print $0}')
#echo $name --> this command works as it gives the names
done
cat textfile.edited| awk '{printf "%s\t%s\n",NR,$0}'
which is very close to what I want to get:
1 text
2 text
3 text
My issue came when I tried to add the "stored" value. I tried the following with no success:
cat textfile.edited| awk '{printf "%s\t%s\n",$name,NR,$0}'
cat textfile.edited| awk '{printf "%s\t%s\n",name,NR,$0}'
cat textfile.edited| awk -v name=$name '{printf "%s\t%s\n",NR,$0}'
Sorry if the terminology used is not the best, but I started scripting recently.
Thank you in advance!!!
One solution using paste and awk ...
We'll append a count to the lines in textfile.edited (so we can see which lines are matched by paste):
$ cat textfile.edited
text1
text2
text3
First we'll look at the paste component:
$ paste <(egrep -hi Definition FILENAME*.txt) textfile.edited
Definition john text1
Definition mary text2
Definition gary text3
From here awk can do the final slicing-n-dicing-n-numbering:
$ paste <(egrep -hi Definition FILENAME*.txt) textfile.edited | awk 'BEGIN {OFS="\t"} {print NR,$2,$3}'
1 john text1
2 mary text2
3 gary text3
NOTE: It's not clear (to me) whether the requirement is for a space or a tab between the 2nd and 3rd columns; the solution above assumes a tab, while a space would be doable via an awk printf call.
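For instance, a minimal sketch of that printf variant, keeping a tab after the line number and a space between the name and the text:
$ paste <(egrep -hi Definition FILENAME*.txt) textfile.edited | awk '{printf "%d\t%s %s\n", NR, $2, $3}'
1	john text1
2	mary text2
3	gary text3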
You can do it all with one awk command.
The first file is textfile.edited; the other files come last.
awk 'NR==FNR {text[NR]=$0;next}
/^Definition/ {namenr++; names[namenr]=$2}
END { for (i=1;i<=namenr;i++) printf("%s %s %s\n", i, names[i], text[i]);}
' textfile.edited FILENAME*.txt
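With the sample files above, this prints:
1 john text
2 mary text
3 gary text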
You can avoid awk with
paste -d' ' <(seq $(wc -l <textfile.edited)) \
<(sed -n 's/^Definition //p' FILE*) \
textfile.edited
Another version of the paste solution with a slightly careless grep -
$: paste -d\ <( grep -ho '[^ ]*$' FILENAME?.txt ) textfile.edited
john text
mary text
gary text
Or, one more way to look at it...
$: a=( $(sed '/^Definition /s/.* //;' FILENAME[123].txt) )
$: echo "${a[@]}"
john mary gary
$: b=( $(<textfile.edited) )
$: echo "${b[@]}"
text text text
$: c=-1 # initialize so that the first pre-increment returns 0
$: while [[ -n "${a[++c]}" ]]; do echo "${a[c]} ${b[c]}"; done
john text
mary text
gary text
This will put all the values in memory before printing anything, so if the lists are really large it might not be your best bet. If they are fairly small, it's pretty efficient, and a single parallel index will keep them in order.
If the number of lines doesn't match the number of files, what did you want to do? As long as there aren't more files than lines, and any extra lines are OK to ignore, this still works. If there are more files than lines, then we need to know how you'd prefer to handle that.
A one-liner using GNU utilities:
paste -d ' ' <(cat -n FILENAME*.txt | sed 's/\sDefinition//') textfile.edited
Or,
paste -d ' ' <(cat -n FILENAME*.txt | sed 's/^\s*//;s/\sDefinition//') textfile.edited
if the leading whitespace is not desired.
Alternatively:
paste -d ' ' <(sed 's/^Definition\s//' FILENAME*.txt | cat -n) textfile.edited
Does anyone know how to align the columns properly? I've spent so much time on it, but when I make changes, the rest of the columns fall out of alignment. It needs to be neatly aligned, like the little table I drew below:
___________________________________________________________
| Username | UID | GID  | Home                | Shell     |
___________________________________________________________
| root     | 0   | 0    | /root               | /bin/bash |
| postgres | 120 | 125  | /var/lib/postgresql | /bin/bash |
| student  | 100 | 1000 | /home/student       | /bin/bash |
___________________________________________________________
#!/bin/bash
echo "_________________________________________________________"
echo -e "| Username\t| UID\t| GID\t| Home\t| Shell\t\t|"
echo "---------------------------------------------------------"
awk 'BEGIN { FS=":" }
{ printf("\033[34m| %s\033[0m\t\t| %s\t| %s\t| %s\t| %s\t|\n", $1, $3, $4, $6, $7)
}' /etc/passwd | sed -n '/\/bin\/bash/p'
Does anyone know how to align the columns properly?
It's what our lecturer wanted.
Since it could be an assignment, I will share some hints instead of posting working code.
First of all, awk can do the job of your | sed, so you can save one process. For example: awk '/pattern/{print $1}'
To build the table, you need to find the longest value in each column before you actually print the output, because you need that length to decide the column width in your printf call. Read the documentation of the printf function; you can pad the %s.
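For instance, a minimal sketch of %s padding (the widths 10 and 6 here are arbitrary):
awk 'BEGIN { printf("|%-10s|%6s|\n", "root", 0) }'
|root      |     0|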
You can read the lines you need and store each column in an array, e.g. col1[1]=foo; col2[1]=bar; col1[2]=foo2; col2[2]=bar2, where [1] would be the line number NR. You can also use a multi-dimensional array or an array of arrays to do that. Do a Google search and you will find some tutorials.
When you've got everything you need, you can start printing.
Good luck.
With bash, awk, ed and column: it's not pretty, but it does the job.
#!/usr/bin/env bash
mapfile -t passwd < <(
printf '0a\nUsername:2:UID:GID:5:Home:Shell:/bin/bash\n.\n,p\nQ\n' |
ed -s /etc/passwd
)
ed -s <(
printf '%s\n' "${passwd[@]}" |
awk 'BEGIN { FS=":" ; OFS=" | "
} /\/bin\/bash$/ {
print "| " $1, $3, $4, $6, $7 " |"
}' | column -t) < <(
echo '$t$
$s/|/+/g
$s/[^+]/-/g
1t0
1s/|/+/g
1s/[^+]/-/g
3t3
3s/|/+/g
3s/[^+]/-/g
,p
Q
'
)
As far as I remember, just awk and column, or GNU datamash, can do what the OP is asking, but I can't remember where the link is now.
This works with GNU ed but should also work with the BSD variant of ed.
Also, mapfile is a bash 4+ feature, just FYI.
I have a file with two columns:
apple apple
ball cat
cat hat
dog delta
I need to extract the values that are common to the two columns (i.e. occur in both columns), like:
apple apple
cat cat
There is no ordering of the items in each column.
Could you please try the following and let me know if it helps you.
awk '
{
  col1[$1]++
  col2[$2]++
}
END{
  for(i in col1){
    if(col2[i]){
      while(++count<=(col1[i]+col2[i])){
        printf("%s%s",i,count==(col1[i]+col2[i])?ORS:OFS)
      }
      count=""
    }
  }
}' Input_file
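With the sample input above, this prints (array traversal order may vary):
apple apple
cat cat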
NOTE: It will print each value found in both columns as many times as it occurs across the two columns combined.
$ awk '{a[$1];b[$2]} END{for(k in a) if(k in b) print k}' file
apple
cat
To print the values twice, change print k to print k,k.
With sort/join:
$ join <(cut -d' ' -f1 file | sort) <(cut -d' ' -f2 file | sort)
apple
cat
perhaps,
$ function f() { cut -d' ' -f"$1" file | sort; }; join <(f 1) <(f 2)
Assuming you can use Unix commands:
cut -d' ' -f2 file | egrep `cut -d' ' -f1 < file | paste -sd'|'` -
Basically what this does is this:
The cut command inside the backticks collects all the words in the first column. The paste command joins them with a pipe (e.g. apple|ball|cat|dog).
The first cut command takes the second column of words and pipes them into the regexp-enabled egrep command.
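With the sample file above, the backtick expression expands like this:
$ cut -d' ' -f1 file | paste -sd'|'
apple|ball|cat|dog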
Here is the closest I could get. Maybe you could loop through the whole file and print when it reaches another occurrence.
Code
cat file.txt | gawk '$1==$2 {print $1,"=",$2}'
or
gawk '$1==$2 {print $1,"=",$2}' file.txt
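With the sample file above, this only catches the pair that happens to sit on the same line:
$ gawk '$1==$2 {print $1,"=",$2}' file.txt
apple = apple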
I have a CSV file with increasing integers in the first column and some text behind.
1,text1a,text1b
2,text2a,text2b
3,text3a,text3b
4,text4a,text4b
...
I would like to join lines 1+2, 3+4, etc. and write the outcome to a new CSV file.
The desired output would be
1,text1a,text1b,2,text2a,text2b
3,text3a,text3b,4,text4a,text4b
...
A second option without the numbers would be great as well. The actual input would be:
1,text,text,,,text#text.com,2,text.text,text
2,text,text,,,text#text.com,3,text.text,text
3,text,text,,,text#text.com,2,text.text,text
4,text,text,,,text#text.com,3,text.text,text
Desired outcome
text,text,,,text#text.com,2,text.text,text,text,text,,,text#text.com,3,text.text,text
text,text,,,text#text.com,2,text.text,text,text,text,,,text#text.com,3,text.text,text
$ pr -2ats, file
gives you
1,text1a,text1b,2,text2a,text2b
3,text3a,text3b,4,text4a,text4b
UPDATE
For the second part:
$ cut -d, -f2- file | pr -2ats,
will give you
text,text,,,text#text.com,2,text.text,text,text,text,,,text#text.com,3,text.text,text
text,text,,,text#text.com,2,text.text,text,text,text,,,text#text.com,3,text.text,text
awk solution:
awk '{ printf "%s%s",$0,(!(NR%2)? ORS:",") }' input.csv > output.csv
The output.csv content:
1,text1a,text1b,2,text2a,text2b
3,text3a,text3b,4,text4a,text4b
----------
Additional approach (to skip numbers):
awk -F',' '{ printf "%s%s",$2 FS $3,(!(NR%2)? ORS:FS) }' input.csv > output.csv
The output.csv content:
text1a,text1b,text2a,text2b
text3a,text3b,text4a,text4b
3rd approach (for your extended input):
awk -F',' '{ sub(/^[0-9]+,/,"",$0); printf "%s%s",$0,(!(NR%2)? ORS:FS) }' input.csv > output.csv
With bash, cut, sed and paste:
paste -d, <(cut -d, -f 2- file | sed '2~2d') <(cut -d, -f 2- file | sed '1~2d')
Output:
text1a,text1b,text2a,text2b
text3a,text3b,text4a,text4b
I hoped to get started with something as simple as
printf '%s,%s\n' $(<inputfile)
This turns out wrong when you have spaces inside your text fields.
The improvement is rather a mess:
source <(echo "printf '%s,%s\n' $(sed 's/.*/"&"/' inputfile|tr '\n' ' ')")
Skipping the first field can be done in the same sed command:
source <(echo "printf '%s,%s\n' $(sed -r 's/([^,]*),(.*)/"\2"/' inputfile|tr '\n' ' ')")
EDIT:
This solution will fail when the input contains special characters, so you should use a simple solution such as:
cut -d, -f2- file | paste -d, - -
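Here paste -d, - - reads two lines at a time from its standard input; with the first sample input this gives:
text1a,text1b,text2a,text2b
text3a,text3b,text4a,text4b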
I am working on the following bash script:
# contents of dbfake file
1 100% file 1
2 99% file name 2
3 100% file name 3
#!/bin/bash
# cat out data
cat dbfake |
# select lines containing 100%
grep 100% |
# print the first and third columns
awk '{print $1, $3}' |
# echo out id and file name and log
xargs -rI % sh -c '{ echo %; echo "%" >> "fake.log"; }'
exit 0
This script works OK, but how do I print everything in column $3 and all the columns after it?
You can use cut instead of awk in this case:
cut -f1,3- -d ' '
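For example, on one of the matching dbfake lines:
$ echo '1 100% file 1' | cut -f1,3- -d ' '
1 file 1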
awk '{ $2 = ""; print }' # remove col 2
If you don't mind a little whitespace:
awk '{ $2="" }1'
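With one of the sample lines, the leftover field separator shows up as a doubled space:
$ echo '1 100% file 1' | awk '{ $2="" }1'
1  file 1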
But to avoid the UUOC and grep:
< dbfake awk '/100%/ { $2="" }1' | ...
If you'd like to trim that whitespace:
< dbfake awk '/100%/ { $2=""; sub(FS "+", FS) }1' | ...
For fun, here's another way using GNU sed:
< dbfake sed -r '/100%/s/^(\S+)\s+\S+(.*)/\1\2/' | ...
All you need is:
awk 'sub(/.*100% /,"")' dbfake | tee "fake.log"
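With the sample dbfake above, this prints (and logs):
file 1
file name 3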
Others responded in various ways, but I want to point out that using xargs to multiplex output is a rather bad idea.
Instead, why don't you:
awk '$2=="100%" { sub("100%[[:space:]]*",""); print; print >>"fake.log"}' dbfake
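With the sample dbfake above, this prints (and appends to fake.log):
1 file 1
3 file name 3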
That's all. You don't need grep, you don't need multiple pipes, and you definitely don't need to fork a shell for every line you're outputting.
You could do awk '...; print }' | tee fake.log, but there is not much point in forking tee if awk can handle it as well.