UNIX - Simple merging of two files as in the input - shell

Input File1:
HELLO
HOW
Input File2:
ARE
YOU
The output file should be:
HELLO
HOW
ARE
YOU
My input files will be in one folder; my script has to fetch the input files from that folder and merge them in the order shown above.
Thanks

You can simply use cat as shown below:
cat file1 file2
or, to concatenate all files in a folder (assuming there are not too many):
cat folder/*

You can also use sed:
sed '' file1 file2
Hope this works fine.

cat:
cat file1 file2 >output
perl:
perl -plne '' file1 file2 >output
awk:
awk '1' file1 file2 >output
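A quick sanity check that these one-liners behave identically, using the sample files from the question (run in a scratch directory so nothing is overwritten):

```shell
# Recreate the question's sample files in a temporary directory.
dir=$(mktemp -d)
printf 'HELLO\nHOW\n' > "$dir/file1"
printf 'ARE\nYOU\n'   > "$dir/file2"

# All three commands simply pass the files through in argument order.
cat "$dir/file1" "$dir/file2"     > "$dir/out_cat"
awk '1' "$dir/file1" "$dir/file2" > "$dir/out_awk"
sed ''  "$dir/file1" "$dir/file2" > "$dir/out_sed"

cmp -s "$dir/out_cat" "$dir/out_awk" \
  && cmp -s "$dir/out_cat" "$dir/out_sed" \
  && cat "$dir/out_cat"
# HELLO
# HOW
# ARE
# YOU
```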

Related

How to compare entries in one file to two files?

I have a file (named file1) which consists of names and their IPs. It looks something like this:
VM1 2.33.4.22
VM2 88.43.21.34
VM3 120.3.45.66
VM4 99.100.34.5
VM5 111.3.4.66
and I have two files (file2 and file3) which consist solely of IPs.
File2 consists of:
120.3.45.66
88.43.21.34
File3 consists of:
99.100.34.5
I want to compare file1 to file2 and file3 and get the names and IPs that are not present in file2 or file3. So the output would be:
VM1 2.33.4.22
VM5 111.3.4.66
How can I get the desired output?
sed 's/\./\\./g; s/.*/ &$/' file2 file3 | grep -vf - file1
Use sed to turn the entries in file2 and file3 into appropriate regexes.
Pipe this regex list to grep, with -f - to read the pattern list from standard input and -v to print the lines of file1 that do not match.
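To make the two stages concrete, here is the pipeline run on the sample data from the question (file names and IPs are the question's own):

```shell
# Recreate the sample files in a scratch directory.
dir=$(mktemp -d)
printf 'VM1 2.33.4.22\nVM2 88.43.21.34\nVM3 120.3.45.66\nVM4 99.100.34.5\nVM5 111.3.4.66\n' > "$dir/file1"
printf '120.3.45.66\n88.43.21.34\n' > "$dir/file2"
printf '99.100.34.5\n' > "$dir/file3"

# Stage 1: the pattern list sed hands to grep -- dots escaped, each IP
# anchored by a leading space and end-of-line:
sed 's/\./\\./g; s/.*/ &$/' "$dir/file2" "$dir/file3"
#  120\.3\.45\.66$
#  88\.43\.21\.34$
#  99\.100\.34\.5$

# Stage 2: grep -vf - drops every file1 line matching one of those patterns:
sed 's/\./\\./g; s/.*/ &$/' "$dir/file2" "$dir/file3" | grep -vf - "$dir/file1"
# VM1 2.33.4.22
# VM5 111.3.4.66
```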
You can write a shell script that will do it for you:
#!/bin/sh
cat "$1" "$2" > mergedFile.txt
grep -v -f mergedFile.txt "$3"
You can run the script with:
sh check.sh file2 file3 file1
awk 'NR==FNR { out[$1]=1; next} !out[$2]' <(/bin/cat file2 file3) file1
This uses basically the same idea as the sed solution, but in awk.
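Note that process substitution (`<(...)`) is a bash/ksh/zsh feature. A sketch of a plain-POSIX-sh variant pipes the combined IP list into awk on standard input instead (same sample data as the question):

```shell
# Recreate the sample files in a scratch directory.
dir=$(mktemp -d)
printf 'VM1 2.33.4.22\nVM2 88.43.21.34\nVM3 120.3.45.66\nVM4 99.100.34.5\nVM5 111.3.4.66\n' > "$dir/file1"
printf '120.3.45.66\n88.43.21.34\n' > "$dir/file2"
printf '99.100.34.5\n' > "$dir/file3"

# "-" tells awk to read the piped file2+file3 stream as its first input;
# NR==FNR is true only while reading that stream.
cat "$dir/file2" "$dir/file3" |
  awk 'NR==FNR { out[$1]=1; next } !($2 in out)' - "$dir/file1"
# VM1 2.33.4.22
# VM5 111.3.4.66
```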

Shell script for merging dotenv files with duplicate keys

Given two dotenv files,
# file1
FOO="X"
BAR="B"
and
# file2
FOO="A"
BAZ="C"
I want to run
$ ./merge.sh file1.env file2.env > file3.env
to get the following output:
# file3
FOO="A"
BAR="B"
BAZ="C"
So far, I used the python-dotenv module to parse the files into dictionaries, merge them, and write them back. However, I feel like there should be a simple shell solution that rids me of a third-party module for such a basic task.
Answer
Alright, so I ended up using
$ sort -u -t '=' -k 1,1 file1 file2 | grep -v '^$\|^\s*\#' > file3
which omits blank lines and comments. Nevertheless, the proposed awk solution works just as well.
Another quite simple approach is to use sort:
sort -u -t '=' -k 1,1 file1 file2 > file3
results in a file where the keys from file1 take precedence over the keys from file2.
Using a simple awk script:
awk -F= '{a[$1]=$2}END{for(i in a) print i "=" a[i]}' file1 file2
This stores all key values in the array a and prints the array content when both files are parsed.
The keys that are in file2 override the ones in file1.
To add new values only from file2 without overwriting the initial values from file1, skipping blank lines in file2:
grep "\S" file2 >> file1
awk -F "=" '!a[$1]++' file1 > file3
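For illustration, the two commands run end to end on the question's sample files. Note that the grep step appends to file1 in place (and `\S` is a GNU grep extension), so a scratch copy is used here:

```shell
# Recreate the sample dotenv files in a temporary directory.
dir=$(mktemp -d)
printf 'FOO="X"\nBAR="B"\n' > "$dir/file1"
printf 'FOO="A"\nBAZ="C"\n' > "$dir/file2"

grep "\S" "$dir/file2" >> "$dir/file1"            # append non-blank lines of file2
awk -F "=" '!a[$1]++' "$dir/file1" > "$dir/file3" # keep the first occurrence of each key

cat "$dir/file3"
# FOO="X"
# BAR="B"
# BAZ="C"
```

Because `!a[$1]++` keeps only the first line seen for each key, the original file1 values win and only genuinely new keys from file2 are added.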

Why does `cat file1 file2 > file2` fall into an endless loop?

When executing the command cat file1 file2 > file2 in a terminal on Mac 10.11, it falls into an endless loop. I expected file1's content to be added to the head of file2 and the result written to file2.
Why is that happening?
Update:
As Benjamin W. mentioned, I think the reason is that the input file and the output file are the same file. But why does cat file1 file2 > file1 not hang in an endless loop?
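The shell truncates file2 to zero length before cat even starts; cat then writes file1's lines into file2 and, when it gets around to reading file2, keeps finding the bytes it has just written, so it never reaches end-of-file. With `> file1`, file1 is already empty by the time cat opens it for reading, so cat hits EOF immediately and simply copies file2 into file1. A safe way to prepend is to go through a temporary file; a minimal sketch:

```shell
# Sample files in a scratch directory (names are just examples).
dir=$(mktemp -d)
printf 'HEAD\n' > "$dir/file1"
printf 'BODY\n' > "$dir/file2"

# Write the concatenation to a temp file, then replace file2 atomically.
tmp=$(mktemp)
cat "$dir/file1" "$dir/file2" > "$tmp" && mv "$tmp" "$dir/file2"

cat "$dir/file2"
# HEAD
# BODY
```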

Joining two files with sed/awk, separated by a comma

I have two files.
file1.txt
example1
example2
example3
file2.txt
testing1
testing2
testing3
I am trying to join the values from these two files into a new comma-separated file, with the desired output:
example1,testing1
example2,testing2
example3,testing3
Could anyone help me do this in awk/sed?
Thank you.
You can just use paste:
paste -d, file1 file2
example1,testing1
example2,testing2
example3,testing3
Or, you can use awk:
awk -v OFS=, 'FNR==NR{a[++i]=$0; next} {print a[FNR], $0}' file1 file2
example1,testing1
example2,testing2
example3,testing3
This might work for you (GNU sed):
sed 'Rfile2' file1 | sed 'N;y/\n/,/'
The first sed invocation reads a line from file1, then the R command appends the corresponding line from file2 after it. The second invocation reads two lines at a time, replacing the newline between them with a comma.
N.B. This expects file1 and file2 to be the same length.
You can also use pr instead of the paste command:
[akshay#localhost tmp]$ cat file1
example1
example2
example3
[akshay#localhost tmp]$ cat file2
testing1
testing2
testing3
[akshay#localhost tmp]$ pr -mtJS',' file1 file2
example1,testing1
example2,testing2
example3,testing3

Unix: one-line bash command to merge 3 files together, extracting only the first line of each

I am having trouble with my syntax here:
I have 3 files with various content: file1, file2, and file3 (100+ lines each). I am trying to merge them together, but only the first line of each file should be merged. The point is to do it using one line of bash code:
sed -n 1p file1 file2 file3 returns only the first line of file1
You might want to try
head -n1 -q file1 file2 file3.
It's not clear if by merge you mean concatenate or join?
In awk by joining (each first line in the files printed side by side):
$ awk 'FNR==1{printf "%s ",$0}' file1 file2 file3
1 2 3
In awk by concatenating (each first line in the files printed one after another):
$ awk 'FNR==1' file1 file2 file3
1
2
3
I suggest you use head as explained in themel's answer. However, if you insist on using sed, you cannot simply pass all the files to it, since they are implicitly concatenated and you lose track of where each file's first line is. So, if you really want to do it in sed, you need the shell to help you out:
for f in file1 file2 file3; do sed -n 1p "$f"; done
You can avoid calling external processes by using the read built-in command:
for f in file1 file2 file3; do read -r l < "$f"; echo "$l"; done > merged.txt
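A quick check of the read loop with three small sample files (names and contents are just examples):

```shell
# Each file has a first line plus extra lines that should be ignored.
dir=$(mktemp -d)
printf '1\nrest1\n' > "$dir/file1"
printf '2\nrest2\n' > "$dir/file2"
printf '3\nrest3\n' > "$dir/file3"

# read grabs only the first line of each file; no external process is spawned.
for f in "$dir/file1" "$dir/file2" "$dir/file3"; do
  read -r l < "$f"; echo "$l"
done > "$dir/merged.txt"

cat "$dir/merged.txt"
# 1
# 2
# 3
```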
