I have a file that contains data in the following format:
2 Mar 1 1234 141.98.80.59
1 Mar 1 1234 171.239.249.233
5 Mar 1 admin 116.110.119.156
4 Mar 1 admin1 177.154.8.15
2 Mar 1 admin 141.98.80.63
2 Mar 1 Admin 141.98.80.63
I tried this command to convert it into CSV format, but the output has an extra comma (,) at the front:
cat data.sql | tr -s '[:blank:]' ',' > data1.csv
,2,Mar,1,1234,141.98.80.59
,1,Mar,1,1234,171.239.249.233
,5,Mar,1,admin,116.110.119.156
,4,Mar,1,admin1,177.154.8.15
,2,Mar,1,admin,141.98.80.63
,2,Mar,1,Admin,141.98.80.63
In my file there are six spaces in front of every record.
How can I remove the extra comma from the front?
How to remove the extra comma from the front using awk:
$ awk -v OFS=, '{$1=$1}1' file
The assignment $1=$1 forces awk to rebuild the record, which discards the leading blanks and rejoins the fields with the output field separator (the comma set via OFS); the trailing 1 is an always-true condition that prints every line.
Output:
2,Mar,1,1234,141.98.80.59
1,Mar,1,1234,171.239.249.233
5,Mar,1,admin,116.110.119.156
...
Output with @EdMorton's version, proposed in the comments:
2,Mar,1,1234,141.98.80.59
1,Mar,1,1234,171.239.249.233
5,Mar,1,admin,116.110.119.156
...
The improved version of your current method is:
sed -E -e 's/^[[:blank:]]+//' -e 's/[[:blank:]]+/,/g' data.sql > data1.csv
The first expression strips the leading blanks (so no leading comma is produced) and the second squeezes each remaining run of blanks into a single comma.
But do be aware that replacing blanks with commas isn't a real way of converting this format into CSV. If any commas or spaces were present in the actual data, this approach would fail.
The fact that your example source file has the .sql extension suggests that you may have obtained this file by exporting a database, and have already stripped parts of it away with other tr statements? If that is the case, a better approach would be to export to CSV (or another format) directly.
Edit: made the sed statement more portable, as recommended by Quasímodo in the comments.
Using Miller (mlr), where --n2c converts the space-separated input to CSV, -N suppresses the header, and remove-empty-columns drops the empty columns left by the leading blanks:
mlr --n2c -N remove-empty-columns ./input.txt >./output.txt
The output will be
2,Mar,1,1234,141.98.80.59
1,Mar,1,1234,171.239.249.233
5,Mar,1,admin,116.110.119.156
4,Mar,1,admin1,177.154.8.15
2,Mar,1,admin,141.98.80.63
2,Mar,1,Admin,141.98.80.63
I have two outputs from the commands below (using shell).
Output 1
bash-4.2$ cat job4.txt | cut -f2 -d ":" | cut -f1 -d "-"
Gathering Facts
Upload the zipped binaries to repository
Find the file applicatons in remote node
Upload to repository
include_tasks
Check and create on path if release directory exists
change dir
include_tasks
New release is unzipped
Starting release to unzip package delivered
Get the file contents
Completed
Playbook run took 0 days, 0 hours, 5 minutes, 51 seconds
Output 2
bash-4.2$ awk '{print $NF}' job4.txt
4.78s
2.48s
1.87s
0.92s
0.71s
0.66s
0.55s
0.44s
0.24s
0.24s
0.24s
0.03s
seconds
My final output should be in Excel, with Output 1 in column 1 and Output 2 in column 2.
Kindly suggest.
Assuming your Output 1 and Output 2 are in the files file1.txt and file2.txt, and the last line of Output 1 can be ignored:
paste -d"," file1.txt file2.txt > mergedfile.csv
Write the first output to a file, and do the same for the second one as well:
cmd goes here > file1.txt
2ndcmd goes here > file2.txt
Then, to merge the files line by line, you can use the paste command. The default delimiter is a tab (\t); you can pick a different one with -d and write the result to a CSV:
paste file1.txt file2.txt > mergedfile.csv
Ref: https://geek-university.com/linux/merge-files-line-by-line/
I have a log file which contains several repeats of the pattern Fre --. I need to remove only the first occurrence of this pattern and the next 20 lines after it, keeping the other matches intact. I need to do this in a bash terminal, preferably using sed, or else awk or perl. I would highly appreciate your help.
I tried
sed -e '/Fre --/,+20d' log.log
but it deletes every occurrence of the pattern and the next 20 lines after each one. I want only the first occurrence to be removed.
There is a more or less similar question with some answers here: How to remove only the first occurrence of a line in a file using sed, but I don't know how to change it to also remove the 20 lines after the first match.
Pretty sure that someone will find a nice sed command but I know awk better.
You can try:
awk '/Fre --/ && !found++{counter=21}--counter<0' log.log
Explanation:
/Fre --/ -> if the line matches the pattern Fre --
&& !found++ -> and the pattern has not been found before
{counter=21} -> set counter to 21 (because you want to remove the matching line plus the next 20)
--counter<0 -> decrement the counter and print the line only if counter < 0
As mentioned by @Sundeep, @EdMorton's solution is safer on very big files, because --counter keeps decrementing on every single line, whereas counter&&counter-- stops decrementing at zero and therefore cannot run away no matter how long the file is:
awk '/Fre --/ && !found++{counter=21}!(counter&&counter--)' log.log
NOTE
If you want the deletions to be saved into the original file, you will have to write the output of the awk command to a temp file, and then move the temp file over the original file. Always be careful before editing the original file, since you may lose precious information.
Run the first command first:
awk '/Fre --/ && !found++{counter=21}!(counter&&counter--)' log.log > log.log.tmp
Then check the .tmp file, and if it looks OK, run the second command to apply the changes:
mv log.log.tmp log.log
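As a side note, if you happen to have GNU awk 4.1 or later, its inplace extension can handle the temp-file dance for you (GNU-specific, so check your version first):
gawk -i inplace '/Fre --/ && !found++{counter=21}!(counter&&counter--)' log.log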
For example, deleting the first line matching 3 plus the 3 lines after it (c=4 counts the matching line itself plus the 3 that follow):
$ seq 20 | awk '!f && /3/{c=4; f=1} !(c&&c--)'
1
2
7
8
9
10
11
12
13
14
15
16
17
18
19
20
See Printing with sed or awk a line following a matching pattern
I have a csv file with multiple values on each line, like this:
0,1,2,3,4,5,6
I would like to convert it to
0
1
2
3
4
5
6
Is there any quick, easy way to do this in the Linux terminal?
tr ',' '\n' < mycsvfile.txt > aaa.txt
Just found this (http://www.askmeaboutlinux.com/?p=2742):
In only one command, you can use sed -i 's/,/\n/g' file.txt
This replaces, throughout file.txt, the character , with the character \n.
You can find an explanation of how this command works in this answer.
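One portability note of mine, not from the linked page: \n in the replacement is a GNU sed feature; BSD/macOS sed would insert a literal n instead. There you can escape an actual newline (and -i requires an explicit backup suffix, here empty):
sed -i '' 's/,/\
/g' file.txt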
I am trying to copy part of a .txt file, from line number n to line number n+y (let's say 1000 to 1000000).
I tried with operators and sed, and it failed. Here's the command I tried:
sed -n "1000, 1000000p" path/first/file > path/second/file
If you know how many lines are in your source file (wc -l), you can do this. Assume the file has 12000 lines and you want lines 2001-7000 in your new file (5000 lines in total):
tail -n 10000 myfile | head -n 5000 > newfile
Read the last 10000 lines, then take the first 5000 of those.
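A variant that doesn't require knowing the total line count: tail's -n +K form (POSIX) starts printing at line K, so the 1000 to 1000000 range from the question becomes:
tail -n +1000 path/first/file | head -n 999001 > path/second/file
where 999001 = 1000000 - 1000 + 1, the number of lines in the requested range.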
The sed command should work fine; replace the double quotes with single quotes:
sed -n '1000, 1000000p' path/first/file > path/second/file
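On a large file, you can also tell sed to quit right after the last wanted line instead of reading the rest of the file (a small efficiency tweak on top of the answer above):
sed -n '1000,1000000p; 1000000q' path/first/file > path/second/file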