Inside a text file, I want to find any line containing 4294967295
<DVAMarker>{"DVAMarker":{"mCuePointList":[{"mKey":"marker_guid","mValue":"4e469eea-d7a9-49e8-b034-a4001272ddfb"},{"mKey":"keywordExtDVAv1_87b5cf3c-5b50-4ceb-8ae4-0978c58d3775","mValue":"{\"color\":4294967295}"}],"mDuration":{"ticks":3911846400000},"mMarkerID":"9c6d9e19-3790-4e25-bd3f-8808f1ce73ea","mName":"Montenegro","mStartTime":{"ticks":88062266880000},"mType":"Comment"}}</DVAMarker>
Then insert "mComment":"BLUE", before "mCuePointList", so the result would look like:
<DVAMarker>{"DVAMarker":{mComment":"BLUE", "mCuePointList":[{"mKey":"marker_guid","mValue":"4e469eea-d7a9-49e8-b034-a4001272ddfb"},{"mKey":"keywordExtDVAv1_87b5cf3c-5b50-4ceb-8ae4-0978c58d3775","mValue":"{\"color\":4294967295}"}],"mDuration":{"ticks":3911846400000},"mMarkerID":"9c6d9e19-3790-4e25-bd3f-8808f1ce73ea","mName":"Montenegro","mStartTime":{"ticks":88062266880000},"mType":"Comment"}}</DVAMarker>
I am using bash and gawk in Terminal on a Mac.
I am not sure if this is JSON or XML (it doesn't look like either to me, though I am not an expert), but with awk you could try the following:
awk '/4294967295/{sub(/"mCuePointList"/,"\"mComment\":\"BLUE\",&")} 1' Input_file
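If you want the file updated in place rather than printed to stdout, GNU awk 4.1+ ships an "inplace" extension; a minimal sketch, assuming your gawk is new enough:
gawk -i inplace '/4294967295/{sub(/"mCuePointList"/,"\"mComment\":\"BLUE\",&")} 1' Input_file
Otherwise, redirect to a temporary file and move it over the original: awk '...' Input_file > tmp && mv tmp Input_file.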
I have a file example.txt and I want to delete and replace fields in it.
The following commands work, but in a very messy way; unfortunately I'm a rookie with the sed command.
The commands I used:
sed 's/\-I\.\.\/\.\.\/\.\.\//\n/g' example.txt > example.txt1
sed 's/\-I/\n/g' example.txt1 > example.txt2
sed '/^[[:space:]]*$/d' example.txt2 > example.txt3
sed 's/\.\.\/\.\.\/\.\.//g' example.txt3 > example.txt4
and then I delete all the unnecessary temporary files.
I'm trying to get the following result:
Common/Components/Component
Common/Components/Component1
Common/Components/Component2
Common/Components/Component3
Common/Components/Component4
Common/Components/Component5
Common/Components/Component6
Comp
App
The file looks like this:
-I../../../Common/Component -I../../../Common/Component1 -I../../../Common/Component2 -I../../../Common/Component3 -I../../../Common/Component4 -I../../../Common/Component5 -I../../../Common/Component6 -IComp -IApp ../../../
I want to know the best way to transform the input format into the output format with a single call to a standard text-processing tool such as sed or awk.
With your shown samples, please try the following awk code, written and tested with GNU awk.
awk -v RS='-I\\S+' 'RT{sub(/^-I.*Common\//,"Common/Components/",RT);sub(/^-I/,"",RT);print RT}' Input_file
Output with the shown samples will be as follows:
Common/Components/Component
Common/Components/Component1
Common/Components/Component2
Common/Components/Component3
Common/Components/Component4
Common/Components/Component5
Common/Components/Component6
Comp
App
Explanation: In GNU awk, RS (the record separator) is set to the regex -I\\S+, i.e. -I followed by non-space characters. In the main program, check that RT (the text that matched RS) is not null; if so, substitute the leading -I up through Common/ with Common/Components/ in RT, then substitute any remaining leading -I with the empty string, and print RT.
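A minimal sketch of RT in isolation (GNU awk only; the input string here is made up):
$ printf '%s' '-Ifoo one -Ibar two' | gawk -v RS='-I\\S+' 'RT{print "RT =", RT}'
RT = -Ifoo
RT = -Ibar
Each time gawk splits off a record, RT holds the text that matched RS, which is why the code above works on the -I tokens themselves rather than on the text between them.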
If you don't REALLY want the string /Components to be added in the middle of some output lines, then this may be what you want, using any awk in any shell on every Unix box:
$ awk -v RS=' ' 'sub("^-I[./]*","")' file
Common/Component
Common/Component1
Common/Component2
Common/Component3
Common/Component4
Common/Component5
Common/Component6
Comp
App
That would fail if any of the paths in your input contained blanks, but you don't show that as a possibility in your question, so I assume it can't happen.
What about:
sed -i 's/\-I\.\.\/\.\.\/\.\.\//\n/g
s/\-I/\n/g
/^[[:space:]]*$/d
s/\.\.\/\.\.\/\.\.//g' example.txt
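Note that \n in the replacement is a GNU sed extension; BSD/macOS sed wants a literal, backslash-escaped newline instead (and -i '' rather than -i). A portable sketch of the first substitution:
sed 's/\-I\.\.\/\.\.\/\.\.\//\
/g' example.txt
The backslash at the end of the script line continues the replacement onto the next line, which inserts a real newline.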
I have a CSV file that contains data like the following, and I need to get all fields as they are except the last one.
"one","two","this has comment section1"
"one","two","this has comment section2 and ( anything ) can come here ( ok!!!"
gawk 'BEGIN {FS=",";OFS=","}{sub(FS $NF, x)}1'
gives this error:
fatal: Unmatched ( or \(:
I know that removing the '(' from the second line solves the problem, but I cannot remove anything from the comment section.
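The failure is easy to reproduce, because sub() treats a string first argument as a dynamic regular expression; a minimal sketch:
echo 'x' | gawk '{sub("( ok!!!", "")}1'    # fatal: the "(" opens a regex group that is never closed
echo 'x' | gawk '{sub(/\( ok!!!/, "")}1'   # fine: the parenthesis is escaped in a regex literal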
With any awk you could try:
awk 'BEGIN{FS=",";OFS=","}{$NF="";sub(/,$/,"")}1' Input_file
Or with GNU awk try:
awk 'BEGIN{FS=",";OFS=","}NF{--NF};1' Input_file
Since you mention that anything can appear in the comment section, you might also have a line that looks like:
"one","two","comment with a , comma"
So it is a bit hard to just use the <comma>-character as a field separator.
The following two posts are very handy here:
What's the most robust way to efficiently parse CSV using awk?
[U&L] How to delete the last column of a file in Linux (Note: this is only for GNU awk)
Since you work with GNU awk, you can thus do any of the following:
$ awk -v FPAT='[^,]*|"[^"]+"' -v OFS="," 'NF{NF--}1'
$ awk 'BEGIN{FPAT="[^,]*|\"[^\"]+\"";OFS=","}NF{NF--}1'
$ awk 'BEGIN{FPAT="[^,]*|\042[^\042]+\042";OFS=","}NF{NF--}1'
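For example, with the tricky comma-in-quotes line from above:
$ echo '"one","two","comment with a , comma"' | awk -v FPAT='[^,]*|"[^"]+"' -v OFS="," 'NF{NF--}1'
"one","two"
FPAT describes what a field looks like rather than what separates fields, so the quoted field stays in one piece and can be dropped cleanly.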
Why is your command failing: the sub(ere, repl, in) function of awk assumes that its first argument ere is an extended regular expression, so the parenthesis has a special meaning there. If you want to replace fields which are known and unique, you should not use sub, but just redefine the field:
$ awk '{$NF=""}1'
If you want to replace a string matching field n, you should do this:
s=$n; while (i=index($0,s)) { $0 = substr($0,1,i-1) "repl" substr($0,i+length(s)) }
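A self-contained sketch of that idea on the problematic sample line (REPL is just a placeholder replacement string):
$ echo '"one","two","this has comment section2 and ( anything ) can come here ( ok!!!"' | awk -F, '{s=$NF; while (i=index($0,s)) $0=substr($0,1,i-1) "REPL" substr($0,i+length(s)); print}'
"one","two",REPL
Because index() does plain string matching, the unbalanced parenthesis never reaches the regex engine.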
I have a file with an argument
testArgument=
It could have something assigned to it or nothing, but I want to comment that line out and add a new line with the supplied info.
Before:
testArgument=Something
Results:
#testArgument=Something
#Comments to let the user know of why the change
testArgument=NewSomething
Should I loop over the file, or should I use something like sed? It needs to be compatible with Ubuntu, Debian, and bash.
You could use sed like this:
sed 's/^\(testArgument\)=.*/#&\n\n#Comment here\n\1=NewSomething/' file
& prints the full match in the replacement and \1 refers to the first capture group "testArgument".
To perform the substitution on the file in-place (i.e. replace the contents of the original file), add the -i switch. Otherwise, if you want to output the command to a new file, do sed '...' file > newfile.
If you are using a different version of sed that doesn't support \n newlines in the replacement, see this answer for some ways to deal with it.
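For reference, a portable sketch that embeds literal, backslash-escaped newlines instead of \n, which should also work with BSD/macOS sed:
sed 's/^\(testArgument\)=.*/#&\
\
#Comment here\
\1=NewSomething/' file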
Alternatively, using GNU awk:
gawk '/^testArgument/ {$0 = gensub(/^(testArgument)=.*/, "#\\0\n\n#Comment here\n\\1=NewSomething", 1)}1' file
You can use awk:
awk '/^testArgument/ {$0="#"$0"\n\n#Comments to let the user know of why the change\ntestArgument=NewSomething"}1' file
cat file
some data
testArgument=Something
more data
awk '/^testArgument/ {$0="#"$0"\n\n#Comments to let the user know of why the change\ntestArgument=NewSomething"}1' file
some data
#testArgument=Something

#Comments to let the user know of why the change
testArgument=NewSomething
more data
To change the original file
awk 'code....' file > tmp && mv tmp file
I have a document with over a million of the following strings, and I would like to create some new structures by extracting some parts and creating a CSV file from them. What's the quickest way to do this?
document/0006-291X(85)91157-X
I would like a file with, on each line, the original string and the extracted parts:
document/0006-291X(85)91157-X;0006-291X;85
You can try this awk one-liner:
awk -F "[/()]" -v OFS=';' '{print $0,$(NF-2),$(NF-1)}' your-file
It splits each line into fields, taking /, ( and ) as delimiters. Then it prints the whole line, the third field from the end ($(NF-2)) and the second field from the end ($(NF-1)). The option -v OFS=';' uses a semicolon as the output field separator.
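If some of the million lines might not follow this exact layout, a guarded variant (assuming that well-formed lines always split into at least four fields) only prints lines with enough fields:
awk -F "[/()]" -v OFS=';' 'NF>=4 {print $0, $(NF-2), $(NF-1)}' your-file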
I'm looking for a way to remove lines within multiple CSV files, in bash using sed, awk, or anything appropriate, where the line ends in 0.
So there are multiple csv files, their format is:
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLElong,60,0
EXAMPLEcon,120,6
EXAMPLEdev,60,0
EXAMPLErandom,30,6
So the file will be amended to:
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
A problem I can see arising is distinguishing between multi-digit numbers that end in zero (like 60 or 120) and 0 itself.
So any ideas?
Using your file, something like this?
$ sed '/,0$/d' test.txt
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
For this particular problem, sed is perfect, as the others have pointed out. However, awk is more flexible, i.e. you can filter on an arbitrary column:
awk -F, '$3!=0' test.csv
This will print the entire line if column 3 is not 0.
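For example, with the same sample data, keeping only rows whose second column is not 60 is just a matter of changing the field number:
awk -F, '$2!=60' test.csv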
Use sed to remove only the lines ending with ",0":
sed '/,0$/d'
You can also use awk:
$ awk -F"," '$NF!=0' file
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
This just checks the last field for 0 and doesn't print the line if it's found.
sed '/,[ \t]*0$/d' file
I would tend to use sed, but there is an egrep (or grep -E) solution too:
egrep -v ",0$" example.csv
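Since the question mentions multiple CSV files, any of the sed filters above can be applied in place across all of them; a sketch using GNU sed's -i (BSD/macOS sed wants -i '' instead):
sed -i '/,0$/d' *.csv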