I am trying to diff 2 files in bash in a Jenkins job.
If I do this in MinGW (e.g. Git Bash), everything works just fine.
But if I run the same command in Jenkins, I get every line from file 2.
For example, I have tried 3 different methods:
comm -1 -3 --nocheck-order file1.txt file2.txt
grep -vxF -f file1.txt file2.txt
diff --changed-group-format='%>' --unchanged-group-format='' file1.txt file2.txt
Each file is the output of a sqlplus command, and the output is already sorted like this:
First file:
STANDARD
CONSTANT
PL_SQL
CREATE_OUT
RECALL
And the second file:
STANDARD
CONSTANT
PL_SQL
CREATE_OUT
RECALL
CONFIRM
I'm using Git Bash as the shell, and everything works on Windows: if I run any of the above commands in Git Bash I get only the changes, but if I run the same command in Jenkins I get the entire contents of file2.txt.
This is driving me crazy.
UPD: I also tried the Windows command findstr /bevg:file1.txt file2.txt.
Same result.
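One thing worth checking (an assumption on my part, since the question doesn't show the raw bytes): sqlplus on Windows often emits CRLF line endings, and if one file has them while the other doesn't, every line of file2.txt looks different to line-based tools even when the text matches. A quick sketch for diagnosing and normalizing this:

# Show line endings explicitly: CRLF lines end in "^M$", plain LF lines in "$"
cat -A file1.txt | head -n 3
cat -A file2.txt | head -n 3

# If the endings differ, strip the carriage returns and compare the normalized copies
tr -d '\r' < file1.txt > file1.lf.txt
tr -d '\r' < file2.txt > file2.lf.txt
comm -1 -3 --nocheck-order file1.lf.txt file2.lf.txt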
I have downloaded some 90 fasta files from NCBI for bacterial genomes. The downloaded files have default names given by NCBI, and I need to change them to my desired file names. Thus I have created two .txt files:
file1.txt - containing the default file names provided by NCBI
file2.txt - containing the names that should replace the defaults
Both files are made in order, so that the 1st entry of file1.txt corresponds to the 1st entry of file2.txt.
All the downloaded files are in one folder, and I need a script which reads file1.txt, matches each name with a file in the folder, and renames it to the corresponding name in file2.txt.
I am not a bioinformatician; I am new to this genre. I look forward to your help. Can this process be made simpler?
This can be done with a very small awk one-liner. For convenience, let's first combine your file1 and file2 to make processing easier. This can be done with paste file1.txt file2.txt > names.txt.
names.txt will be a text file with the old names in the first column and the new names in the second. Awk lets us conveniently run through a file line by line (or record by record, in its terminology) and access each column/field.
Assuming you are in the directory with all these files, as well as names.txt, you can simply run awk '{system("mv " $1 " " $2)}' names.txt to rename them all. This will run through all the lines in names.txt, take the filename given in the first column, and move it to the name given in the second column. The system command lets you access basic file system operations through the shell, like moving (mv), copying (cp), or removing (rm) files.
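One caveat: the one-liner above builds the shell command by plain string concatenation, so it breaks if any file name contains spaces or shell metacharacters. A minimal sketch of a safer alternative, assuming names.txt is TAB-separated as paste produces it:

# Read the TAB-separated old/new name pairs and quote them properly
while IFS=$'\t' read -r old new; do
    mv -- "$old" "$new"
done < names.txt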
Use paste and xargs like so:
paste file1.txt file2.txt | xargs --verbose -n2 mv
The command is using paste to write lines from 2 files side by side, separated by TABs, to STDOUT. The STDOUT is read by xargs using a pipe (|). Option --verbose prints the command, and option -n2 specifies the max number of arguments for xargs to be 2, so that the resulting commands that are executed are something like mv old_file new_file.
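To make the mechanics concrete, this is what the intermediate paste output looks like (the file names here are hypothetical, since the question doesn't list the real ones):

$ paste file1.txt file2.txt
GCF_000005845.2.fasta	e_coli_k12.fasta
GCF_000009045.1.fasta	b_subtilis.fasta

xargs then takes two whitespace-separated fields at a time and runs mv GCF_000005845.2.fasta e_coli_k12.fasta, and so on. As with the awk approach, this assumes the names contain no whitespace.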
Alternatively, use the Perl one-liners below.
Print the commands to rename the files, without executing the commands ("dry run"):
paste file1.txt file2.txt | perl -lane '$cmd = "mv $F[0] $F[1]"; print $cmd;'
Print the commands to rename the files, then actually execute them:
paste file1.txt file2.txt | perl -lane '$cmd = "mv $F[0] $F[1]"; print $cmd; system $cmd;'
The command uses paste to write lines from the 2 files side by side, separated by TABs, to STDOUT, which is read through a pipe (|) and passed to the Perl one-liner's STDIN.
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in -F option.
$F[0], $F[1] : first and second elements of the array @F into which the line is split. They are the old and new file names, respectively.
system executes the command $cmd, which actually moves the files.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
I am looking for a way to run a command like smartctl on a file containing device names like /dev/sda (one per line). The Ansible playbook should be able to read each line and pass it as an argument to the command.
Are you looking for something like this?
<file_with_smartctl_args xargs -n1 smartctl
Replace file_with_smartctl_args with the file (complete path!) that contains the names of the files (arguments) you want to pass to smartctl. This will run "smartctl" one time for EACH of the lines (arguments) in the file.
Example:
If the file /usr/me/smartctl_args contains the following text:
file1
file2
file3
The command:
</usr/me/smartctl_args xargs -n1 smartctl
Will run smartctl 3 times (since the file has 3 lines in it), like this:
smartctl file1
smartctl file2
smartctl file3
The initial < tells the Unix shell that your "standard input" is going to come from the filename that follows (/usr/me/smartctl_args). Then, xargs will convert the "standard input" to command arguments, the -n1 option causes xargs to execute the command (smartctl) once for each argument.
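One practical note beyond the original answer: smartctl normally requires an option telling it what to report, so a real invocation would likely include a flag; -H (print the overall health status) below is just one example:

</usr/me/smartctl_args xargs -n1 smartctl -H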
I have 2 outputs from the commands below (using shell).
Output 1
bash-4.2$ cat job4.txt | cut -f2 -d ":" | cut -f1 -d "-"
Gathering Facts
Upload the zipped binaries to repository
Find the file applicatons in remote node
Upload to repository
include_tasks
Check and create on path if release directory exists
change dir
include_tasks
New release is unzipped
Starting release to unzip package delivered
Get the file contents
Completed
Playbook run took 0 days, 0 hours, 5 minutes, 51 seconds
Output 2
bash-4.2$ awk '{print $NF}' job4.txt
4.78s
2.48s
1.87s
0.92s
0.71s
0.66s
0.55s
0.44s
0.24s
0.24s
0.24s
0.03s
seconds
My actual output should go into Excel: Output 1 should go to column 1 and Output 2 should go to column 2.
Kindly suggest.
Assuming your Output 1 and Output 2 are in the files file1.txt and file2.txt, and the last line of Output 1 can be ignored:
paste -d"," file1.txt file2.txt > mergedfile.csv
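If you would rather skip the intermediate files, the same result can be produced in one pipeline with process substitution; a sketch, assuming job4.txt is the input used for both outputs above:

paste -d"," <(cut -f2 -d":" job4.txt | cut -f1 -d"-") <(awk '{print $NF}' job4.txt) > mergedfile.csv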
Write the first output to a file, and do the same for the second one:
cmd goes here > file1.txt
2ndcmd goes here > file2.txt
Then, to merge the files line by line, you can use the paste command. Its default delimiter is a TAB ("\t"), and you can redirect the result to a CSV file:
paste file1.txt file2.txt > mergedfile.csv
Ref: https://geek-university.com/linux/merge-files-line-by-line/
I'm trying to copy a file but skip a specific line that starts with 'An', using bash in the macOS Terminal.
The file only has 4 lines:
Kalle Andersson 036-134571
Bengt Pettersson 031-111111
Anders Johansson 08-806712
Per Eriksson 0140-12321
I know how to copy the file using the command cp and to grab a specific line in the file using the grep command.
I do not know how I can delete a specific line in the file.
I have used the cp command:
cp file1.txt file2.txt
to copy the file.
And I used:
grep 'An' file2.txt
I expect a result where the new file has the three lines:
Kalle Andersson 036-134571
Bengt Pettersson 031-111111
Per Eriksson 0140-12321
Is there an way I can do this in a single command?
As Aaron said:
grep -vE '^An' file1.txt > file2.txt
What you do here is use grep with the -v option, which means: print every line except the ones that match. Furthermore, you instruct the shell to redirect the output of grep to file2.txt. That is the meaning of the >.
There are a lot of commands in Unix/Linux that can be used for this. sed is an obvious candidate, and awk can do it too, as in
awk '{if (!/^An/) print}' file1.txt > file2.txt
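For completeness, the sed version alluded to above would look like this (a standard use of sed's d command to delete matching lines):

sed '/^An/d' file1.txt > file2.txt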
Another option is ed:
ed file1.txt <<EOF
1
/^An
d
w file2.txt
q
EOF
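Note that the ed script above deletes only the first line matching /^An/. If several lines could start with 'An', ed's global command removes them all; a sketch:

ed -s file1.txt <<EOF
g/^An/d
w file2.txt
q
EOF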
There are several unix commands that are designed to operate on two files. Commonly such commands allow the contents for one of the "files" to be read from standard input by using a single dash in place of the file name.
I just came across a technique that seems to allow both files to be read from standard input:
comm -12 <(sort file1) <(sort file2)
My initial disbelieving reaction was, "That shouldn't work. Standard input will just have the concatenation of both files. The command won't be able to tell the files apart or even realize that it has been given the contents of two files."
Of course, this construction does work. I've tested it with both comm and diff using bash 3.2.51 on cygwin 1.7.7. I'm curious how and why it works:
Why does this work?
Is this a Bash extension, or is this straight Bourne shell functionality?
This works on my system, but will this technique work on other platforms? (In other words, will scripts written using this technique be portable?)
Bash, Korn shell (ksh93, anyway) and Z shell all support process substitution. These appear as files to the utility. Try this:
$ bash -c 'echo <(echo)'
/dev/fd/63
$ ksh -c 'echo <(echo)'
/dev/fd/4
$ zsh -c 'echo <(echo)'
/proc/self/fd/12
You'll see file descriptors similar to the ones shown.
This is a standard Bash extension. <(sort file1) opens a pipe with the output of the sort file1 command, gives the pipe a temporary file name, and passes that temporary file name on the comm command line.
You can see how it works by getting echo to tell you what's being passed to the program:
echo <(sort file1) <(sort file2)
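On a Linux system running bash, that prints something like the following (the exact descriptor numbers vary):

/dev/fd/63 /dev/fd/62

Both names refer to pipes carrying the output of the respective sort commands, which is why comm can open and read them like ordinary files.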