Adding data to Excel from a text file using shell

I have two outputs from the commands below (using shell).
Output 1
bash-4.2$ cat job4.txt | cut -f2 -d ":" | cut -f1 -d "-"
Gathering Facts
Upload the zipped binaries to repository
Find the file applicatons in remote node
Upload to repository
include_tasks
Check and create on path if release directory exists
change dir
include_tasks
New release is unzipped
Starting release to unzip package delivered
Get the file contents
Completed
Playbook run took 0 days, 0 hours, 5 minutes, 51 seconds
Output 2
bash-4.2$ awk '{print $NF}' job4.txt
4.78s
2.48s
1.87s
0.92s
0.71s
0.66s
0.55s
0.44s
0.24s
0.24s
0.24s
0.03s
seconds
My actual output should be in Excel: Output 1 should go into column 1 and Output 2 into column 2.
Kindly suggest.

Assuming output 1 and output 2 are in the files file1.txt and file2.txt, and the last line of output 1 can be ignored:
paste -d"," file1.txt file2.txt > mergedfile.csv

Write the first output to a file. Similarly, write the second output to another file.
cmd goes here > file1.txt
2ndcmd goes here > file2.txt
Then, to merge the files line by line, you can use the paste command. You can also choose a different delimiter such as "\t" with the -d option and write the result to a CSV file.
paste file1.txt file2.txt > mergedfile.csv
Ref: https://geek-university.com/linux/merge-files-line-by-line/

Related

diff 2 files and print only difference bash in Jenkins job

I am trying to diff two files in bash in a Jenkins job.
If I do this in MinGW (e.g. Git Bash), everything works just fine.
But if I run the same command in Jenkins, I get everything from file 2.
For example, I tried 3 different methods:
comm -1 -3 --nocheck-order file1.txt file2.txt
grep -vxF -f file1.txt file2.txt
diff --changed-group-format='%>' --unchanged-group-format='' file1.txt file2.txt
Each file is the output of a sqlplus command, and the output is already sorted, like this:
First file:
STANDARD
CONSTANT
PL_SQL
CREATE_OUT
RECALL
And the second file:
STANDARD
CONSTANT
PL_SQL
CREATE_OUT
RECALL
CONFIRM
I'm using Git Bash as the shell and it works on Windows. In Git Bash, if I run any of the above commands I get only the changes, but if I run the same command in Jenkins I get all the output from file2.txt.
This is driving me crazy.
UPD: I also tried the Windows command findstr /bevg:file1.txt file2.txt
Same result.
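For reference, a minimal sketch that recreates the example files from the question and runs the first command; on a GNU/Linux shell it should print only CONFIRM, which is the behaviour described for Git Bash:
# recreate the two example files from the question
printf '%s\n' STANDARD CONSTANT PL_SQL CREATE_OUT RECALL > file1.txt
printf '%s\n' STANDARD CONSTANT PL_SQL CREATE_OUT RECALL CONFIRM > file2.txt
# lines present only in file2.txt; expected output here: CONFIRM
comm -1 -3 --nocheck-order file1.txt file2.txt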

How to delete the rows from one file having the particular column text in other file through shell scripting?

Situation: We have two files in different paths on a server. One file, FILE A, is in the reject folder and the other, FILE B, is in the archive folder. We read FILE A from the reject folder, pick up the word after the text "ITEM:", and then search for it in FILE B. If it is found, the entire row in FILE B containing that word has to be deleted.
INPUT:
File A:
hi my name is himansh agarwal.
My employee id is x56723
I live in Banaglore
I have an ITEM: WORDPRESS
FILE B:
Hi My name is joseph.
i live in miami.
I dont go to office
I dont have an WORDPRESS.
i am very hungry.
I love to go out.
OUTPUT:
FILE B should be renamed to FILE B_1, and its contents should no longer include the line containing the text WORDPRESS.
FILE B:
Hi My name is joseph.
i live in miami.
I dont go to office
i am very hungry.
I love to go out.
You can pick out the text with awk:
awk '$0 ~ /ITEM: / { sub(/^.*ITEM: /,"",$0); print $0; }' file_a
You can exclude a line from a file using grep -v
grep -v WORDPRESS file_b
You can use command substitution to feed the result of one command as a parameter to a second command:
grep -v "$(echo WORDPRESS)" file_b
You can output the result of a command to a file:
echo Test > file_b2
This gives:
grep -v "$(awk '$0 ~ /ITEM: / { sub(/^.*ITEM: /,"",$0); print $0; }' file_a)" file_b > file_b_1
(This assumes that the output is file B_1, not file B. The latter can be achieved with a few mv commands.)
(You can do it with spaces in the filenames too; just quote the filenames everywhere then.)
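As a minimal sketch of the mv step mentioned above (file_b.orig is just an illustrative backup name, not from the question):
# after creating file_b_1 as above, put the filtered result back under the original name
mv file_b file_b.orig   # optional backup of the unfiltered file
mv file_b_1 file_b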

How to quickly check a .gz file without unzip? [duplicate]

How do I get the first few lines from a gzipped file?
I tried zcat, but it's throwing an error:
zcat CONN.20111109.0057.gz|head
CONN.20111109.0057.gz.Z: A file or directory in the path name does not exist.
zcat(1) can be supplied by either compress(1) or by gzip(1). On your system, it appears to be compress(1) -- it is looking for a file with a .Z extension.
Switch to gzip -cd in place of zcat and your command should work fine:
gzip -cd CONN.20111109.0057.gz | head
Explanation
-c --stdout --to-stdout
Write output on standard output; keep original files unchanged. If there are several input files, the output consists of a sequence of independently compressed members. To obtain better compression, concatenate all input files before compressing them.
-d --decompress --uncompress
Decompress.
On some systems (e.g., Mac), you need to use gzcat.
On a Mac you need to use < with zcat:
zcat < CONN.20111109.0057.gz|head
If a continuous range of lines is needed, one option might be:
gunzip -c file.gz | sed -n '5,10p;11q' > subFile
where lines 5 through 10 (both inclusive) of file.gz are extracted into a new file, subFile. For sed options, refer to the manual.
If every, say, 5th line is required:
gunzip -c file.gz | sed -n '1~5p;6q' > subFile
which prints the 1st line, skips the next four, prints the 6th, and then stops because of the 6q; drop the 6q to keep picking every 5th line through the whole file.
If you want to use zcat, this will show the first 10 rows
zcat your_filename.gz | head
Let's say you want the first 16 rows:
zcat your_filename.gz | head -n 16
This awk snippet will let you show not just the first few lines but any range you specify. It will also add line numbers, which I needed for debugging an error message pointing to a certain line way down in a gzipped file.
gunzip -c file.gz | awk -v from=10 -v to=20 'NR>=from { print NR,$0; if (NR>=to) exit 1}'
Here is the awk snippet used in the one-liner above. In awk, NR is a built-in variable (the number of records read so far), which is usually equivalent to the line number. The from and to variables are picked up from the command line via the -v options.
NR>=from {
print NR,$0;
if (NR>=to)
exit 1
}

unix delete rows from multiple files using input from another file

I have multiple (1086) files (.dat) and in each file I have 5 columns and 6384 lines.
I have a single file named "info.txt" which contains 2 columns and 6883 lines. The first column gives the line numbers (to delete in the .dat files) and the second column gives a number.
1 600
2 100
3 210
4 1200
etc...
I need to read info.txt and find every line number whose value in the 2nd column is less than 300 (so lines 2 and 3 in the above example). Then I need to feed these values to sed, awk, or grep and delete those lines from each .dat file. (So I will delete the 2nd and 3rd rows of the .dat files in the above example.)
A more general form of the question would be (I suppose):
How to read numbers from an input file and use them as the row numbers to delete from multiple files.
I am using bash but ksh help is also fine.
sed -i "$(awk '$2 < 300 { print $1 "d" }' info.txt)" *.dat
The awk script creates a simple sed script to delete the selected lines; that script is then run on all the *.dat files.
(If your sed lacks the -i option, you will need to write to a temporary file in a loop. On OSX and some *BSD you need -i "" with an empty argument.)
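To see what is actually fed to sed, the inner awk command can be run on its own; with the sample info.txt from the question it generates a two-line script deleting lines 2 and 3:
awk '$2 < 300 { print $1 "d" }' info.txt
# with the sample info.txt above this prints:
# 2d
# 3d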
This might work for you (GNU sed):
sed -rn 's/^(\S+)\s*([1-9]|[1-9][0-9]|[12][0-9][0-9])$/\1d/p' info.txt |
sed -i -f - *.dat
This builds a script of the lines to delete from the info.txt file and then applies it to the .dat files.
N.B. the regexp is for numbers ranging from 1 to 299 as per OP request.
# create action list (read from the file directly so ActionReq survives
# the loop; a "cat | while" pipeline would run in a subshell)
ActionReq=""
while read -r LineRef Index
do
    if [ "${Index}" -lt 300 ]
    then
        # branch past the final "p" for this line number, i.e. do not print it
        ActionReq="${ActionReq}${LineRef} b
"
    fi
done < info.txt
# apply action on files
for EachFile in YourListSelectionOf.dat
do
    sed -i -n -e "${ActionReq}
p" "${EachFile}"
done
(Not tested, no Linux here.) A limitation of sed here is that it cannot itself test whether the second value is below 300, hence the shell loop; awk is more efficient for this kind of operation.
I use sed in the second loop to avoid reading and rewriting each file once per line to delete. The second loop could probably be avoided by giving sed the list of files directly instead of going file by file.
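A minimal sketch of that idea, assuming GNU sed and the ActionReq script built in the first loop:
# one sed invocation edits every .dat file in place with the same script
sed -i -n -e "${ActionReq}
p" *.dat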
This should create new files named after the old ones with "_new.dat" appended, but I haven't tested it:
awk 'FNR==NR{if($2<300)a[$1]=$1;next}
!(FNR in a){print > (FILENAME "_new.dat")}' info.txt *.dat

Copy/Paste part of a file into another file using Terminal (or Shell)

I am trying to copy part of a .txt file, from line number n to line number n+y (let's say 1000 to 1000000).
I tried with operators and sed, and it failed. Here's the command I tried:
sed -n "1000, 1000000p" path/first/file > path/second/file
If you know how many lines are in your source file (wc -l), you can do this. Assume 12000 lines and you want lines 2001-7000 in your new file (a total of 5000 lines):
cat myfile | tail -10000 | head -5000 > newfile
Read the last 10k lines, then read the 1st 5k lines from that.
The sed command should work fine; replace the double quotes with single quotes:
sed -n '1000, 1000000p' path/first/file > path/second/file
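As an additional sketch (not from either answer), awk can also print the same range and stop reading once past it:
# print lines 1000 through 1000000, then stop reading the input
awk 'NR > 1000000 { exit } NR >= 1000' path/first/file > path/second/file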
