merge output in separate columns (comma separated) - shell

I have two outputs of a file (fetched using the cut command).
One is:
700316307503
700315522410
700317709443
and the second is:
ab
bc
cd
Both outputs have the same number of rows. I need to merge them into a new comma-separated file, as follows:
700316307503,ab
700315522410,bc
700317709443,cd

One way using paste:
paste -d"," file1 file2 >file3
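A runnable sketch of the same command, using the sample data from the question (the file names are just placeholders):

```shell
# Recreate the two columns from the question
printf '%s\n' 700316307503 700315522410 700317709443 > file1
printf '%s\n' ab bc cd > file2

# Join corresponding lines with a comma delimiter
paste -d"," file1 file2 > file3
cat file3
# 700316307503,ab
# 700315522410,bc
# 700317709443,cd
```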

Related

Making a list from data in a few variable bash script [duplicate]

I want to merge two lists with the delimiter "-".
The first list has 2 words:
$ cat first
one
who
The second list has 10000 words:
$ cat second
languages
more
simple
advanced
home
expert
......
......
test
nope
I want to merge the two lists like this:
$ cat merge-list
one-languages
one-more
....
....
who-more
....
who-test
who-nope
....
Paste should do the trick.
paste is a Unix command line utility which is used to join files horizontally (parallel merging) by outputting lines consisting of the sequentially corresponding lines of each file specified, separated by tabs, to the standard output.
Example
paste -d - file1 file2
EDIT:
I just saw that your two files have different lengths. Unfortunately paste does not help here, because you want every pairwise combination (a cross product) rather than a line-by-line merge. But you could of course use something like this:
for i in $(cat file1); do
  for j in $(cat file2); do
    echo "$i-$j"
  done
done
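A more robust variant of that loop uses while read, which avoids word splitting and globbing on the unquoted cat output. A small sketch with stand-in data (file names and contents are placeholders):

```shell
printf '%s\n' one who > first
printf '%s\n' languages more test > second

# Pair every line of the first list with every line of the second
while IFS= read -r i; do
  while IFS= read -r j; do
    echo "$i-$j"
  done < second
done < first
# one-languages
# one-more
# one-test
# who-languages
# who-more
# who-test
```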

How to append a column from another file to an existing file using awk?

My problem is the following: I have multiple tab separated files (A, B, C and D) each containing 40 columns of which the first 10 are always the same (all files also have the same number of rows). In order to have one file instead of four separate ones, I want to create a new file which contains the first 10 columns once (which are the same in all files) followed by column 25 of each file A, B, C and D since I'm not interested in the other columns.
So my output file should look like this:
column_1 column_2 column_3 .... column_9 column_10 column_25_A column_25_B column_25_C column_25_D
So far I was able to create a new file containing column_1 to column_10 using the following command:
awk -v FS='\t' -v OFS='\t' '{print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10}' file_A.txt > output_file.txt
However, I cannot manage to now append the desired columns from the other files. I've tried the paste command as well as this one:
awk -v FS='\t' -v OFS='\t' '{print $25}' file_A.txt >> output_file
The above command, however, only gives me the correct column if I omit the redirection; with >> the column is appended below the existing rows instead of beside them.
What do I have to do in order to append the desired columns from one file to another using awk? Or is this not possible?
untested
$ paste <(cut -f1-10,25 fileA) <(cut -f25 fileB) <(cut -f25 fileC) <(cut -f25 fileD)
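To see the idea work end to end, here is a tiny sketch with stand-in tab-separated files of only 3 columns each (instead of 40); the process substitution requires bash:

```shell
# Two tiny stand-ins for fileA and fileB
printf 'a1\ta2\ta3\n' > fileA
printf 'b1\tb2\tb3\n' > fileB

# Keep columns 1-2 of fileA and append column 3 of fileB
paste <(cut -f1-2 fileA) <(cut -f3 fileB)
# a1	a2	b3   (tab-separated)
```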

Delimit output created using a grep command

I want to run a line of code that performs a text-to-columns operation on a comma-delimited file.
I have created a file whose output is comma delimited, so all the data is in column A, separated by a ,.
The file is apilist.xls
I want to add a line of code that will take the data here and separate out the data in to the columns, similar to the excel command text to columns.
I have tried:
$ cat apilist.xls | tr "\\t" "," > apilist.csv
This gives the message:
bash: $: command not found
It creates the apilist.csv file but there is no data in this.
Any suggestions on how to progress this?
The $ is the shell prompt, not part of the command. Bash tries to run $ as a command name and fails with "command not found", but the redirection still creates the (empty) apilist.csv first. Drop the $ and run:
cat apilist.xls | tr '\t' ',' > apilist.csv
You would need to convert your .xls file to .csv before you're able to work on it. Here are a number of methods to convert the file: https://linoxide.com/linux-how-to/methods-convert-xlsx-format-files-csv-linux-cli/
You could then use awk to create columns between the data where it is separated by a comma. This is assuming that you have three columns separated by a comma - you would need to edit the below command to meet your needs:
awk -F ',' '{printf("%s %s %s\n", $1, $2, $3)}' input_file
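For the tab-to-comma step itself, here is a small sketch. Note it assumes the input is really tab-separated text (as the answer above says, a genuine binary .xls file would have to be converted first); the file names are placeholders:

```shell
# Stand-in for a tab-separated export; a real binary .xls would NOT work here
printf 'name\tid\nfoo\t1\n' > apilist.txt

# Replace every tab with a comma
tr '\t' ',' < apilist.txt > apilist.csv
cat apilist.csv
# name,id
# foo,1
```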

Merge two .txt files together and reorder

I have two .txt files, each 42 lines long (the 42nd line is just a blank space).
One file is called date.txt, and each line has a format like:
2017-03-16 10:45:32.175 UTC
The second file is called version, and each line has a format like:
1.2.3.10
Is there a way to merge the two files together so that the date is appended to the version number (separated by a space)? So the 1st line of each file is merged together, then the second line, third line, etc.
So it would look like:
1.2.3.10 2017-03-16 10:45:32.175 UTC
After that, is it possible to reorder the new file by the date and time? (Going from the oldest date to the latest/current one).
The end file should still be 42 lines long.
Thanks!
Use paste:
paste -d' ' file1.txt file2.txt > result.txt
-d is used to set the delimiter.
You can then attempt to sort by second and third column using sort:
sort -k2,3 result.txt > sorted.txt
-k is used to select columns to sort by.
But note that this doesn't parse the date and time; it sorts them as strings, which happens to be chronological here because YYYY-MM-DD dates and 24-hour times in a fixed format sort the same way lexicographically.
In general:
paste file1.txt file2.txt | sort -k2,3 > result.txt
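A quick sketch of the whole pipeline with two-line stand-ins for the 42-line files (file names and data are placeholders based on the question's formats):

```shell
printf '%s\n' '1.2.3.10' '1.2.3.9' > version.txt
printf '%s\n' '2017-03-16 10:45:32.175 UTC' '2017-03-15 09:00:00.000 UTC' > date.txt

# Merge version + date on each line, then sort on the date and time fields
paste -d' ' version.txt date.txt | sort -k2,3
# 1.2.3.9 2017-03-15 09:00:00.000 UTC
# 1.2.3.10 2017-03-16 10:45:32.175 UTC
```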

Bash grep in file which is in another file

I have 2 files, one contains this :
file1.txt
632121S0 126.78.202.250 1
131145S0 126.178.20.250 1
the other contain this : file2.txt
632121S0 126.78.202.250 OBS
131145S0 126.178.20.250 OBS
313359S2 126.137.37.250 OBS
I want to end up with a third file which contains :
632121S0 126.78.202.250 OBS
131145S0 126.178.20.250 OBS
Only the lines which start with the same string in both files. I can't remember how to do it. I tried several grep, egrep and find commands, but I still cannot get it to work properly.
Can you help please ?
You can use this awk:
$ awk 'FNR==NR {a[$1]; next} $1 in a' f1 f2
632121S0 126.78.202.250 OBS
131145S0 126.178.20.250 OBS
It is based on the idea of two file processing, by looping through files as this:
First, loop through the first file, storing each first field as a key in the array a.
Then loop through the second file, checking whether its first field is in the array a; if it is, the line is printed.
To do this with grep, you need to use a process substitution:
grep -f <(cut -d' ' -f1 file1.txt) file2.txt
grep -f uses a file as a list of patterns to search for within file2. In this case, instead of passing file1 unaltered, process substitution is used to output only the first column of the file. Note that grep treats each pattern as a regular expression and matches it anywhere in the line; for exact fixed-string matching of the first field, adding -F (and possibly -w) is safer.
If you have a lot of these lines, then the utility join would likely be useful.
join - join lines of two files on a common field
Here's a set of examples.
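A minimal join sketch using the question's data; note that join requires both inputs to be sorted on the join field (field 1 by default), and the file names here are placeholders:

```shell
# join needs both inputs sorted on the join field
sort > f1 <<'EOF'
632121S0 126.78.202.250 1
131145S0 126.178.20.250 1
EOF
sort > f2 <<'EOF'
632121S0 126.78.202.250 OBS
131145S0 126.178.20.250 OBS
313359S2 126.137.37.250 OBS
EOF

# Print f2's fields for every key that appears in both files
join -o 2.1,2.2,2.3 f1 f2
# 131145S0 126.178.20.250 OBS
# 632121S0 126.78.202.250 OBS
```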
