I have a list of data in a text file that looks like this:
2116571574
2116571583
2116572143
and I want to add ".ok" to the end of each line so it looks like this:
2116571574.ok
2116571583.ok
2116572143.ok
Is there a way to do this via the Terminal? I'm using macOS.
awk -v'RS=\n+' '{print $0 ".ok" }' myfile
All you need for this is awk in a terminal.
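If you'd rather use sed, a minimal sketch that does the same thing (macOS ships BSD sed, which needs the empty '' argument for in-place editing):
sed 's/$/.ok/' myfile > myfile.new   # append .ok to every line, writing to a new file
sed -i '' 's/$/.ok/' myfile          # or edit myfile in place (BSD/macOS sed)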
Inside a text file, I want to find any line containing 4294967295
<DVAMarker>{"DVAMarker":{"mCuePointList":[{"mKey":"marker_guid","mValue":"4e469eea-d7a9-49e8-b034-a4001272ddfb"},{"mKey":"keywordExtDVAv1_87b5cf3c-5b50-4ceb-8ae4-0978c58d3775","mValue":"{\"color\":4294967295}"}],"mDuration":{"ticks":3911846400000},"mMarkerID":"9c6d9e19-3790-4e25-bd3f-8808f1ce73ea","mName":"Montenegro","mStartTime":{"ticks":88062266880000},"mType":"Comment"}}</DVAMarker>
Then insert mComment":"BLUE", before "mCuePointList", so the result would look like this:
<DVAMarker>{"DVAMarker":{mComment":"BLUE", "mCuePointList":[{"mKey":"marker_guid","mValue":"4e469eea-d7a9-49e8-b034-a4001272ddfb"},{"mKey":"keywordExtDVAv1_87b5cf3c-5b50-4ceb-8ae4-0978c58d3775","mValue":"{\"color\":4294967295}"}],"mDuration":{"ticks":3911846400000},"mMarkerID":"9c6d9e19-3790-4e25-bd3f-8808f1ce73ea","mName":"Montenegro","mStartTime":{"ticks":88062266880000},"mType":"Comment"}}</DVAMarker>
I am using bash and gawk in Terminal on a Mac.
I am not sure if this is JSON or XML (it doesn't look like either to me, though I am not an expert), but with awk you could try the following.
awk '/4294967295/{sub(/"mCuePointList"/,"mComment\":\"BLUE\",&")} 1' Input_file
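If the result looks right and you want it written back, one option is to redirect to a new file; alternatively, a sketch assuming your gawk is 4.1 or newer (which provides the inplace extension):
awk '/4294967295/{sub(/"mCuePointList"/,"mComment\":\"BLUE\",&")} 1' Input_file > Output_file    # write to a new file
gawk -i inplace '/4294967295/{sub(/"mCuePointList"/,"mComment\":\"BLUE\",&")} 1' Input_file      # or edit Input_file in place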
I have a list like this:
6.53143.S
6.47643.S
6.53161.S
(the dots are just for presentation) and I want to number each line with some bash scripting, so it looks like this:
1 6.53143.S
2 6.47643.S
3 6.53161.S
Try this:
awk '{print NR, $0}' file
If your data actually looks like this:
- 6.53143.S
- 6.47643.S
- 6.53161.S
use:
$ awk '$1=NR' file
1 6.53143.S
2 6.47643.S
3 6.53161.S
If you only want to print line numbers along with the lines, plain cat will do:
cat -n Input_file
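Purely as an alternative sketch, nl does the same job; by default it numbers only non-blank lines, and -ba numbers every line:
nl Input_file        # numbers non-blank lines
nl -ba Input_file    # numbers every line, including blank ones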
I'm attempting to parse the output of make -n to make sure only the programs I want to call are being called. However, awk tries to interpret the contents of the output and run (?) it; the errors look something like awk: fatal: Cannot find file 'make'. I have gotten around this by saving the output to a temporary file and then reading that into awk, but I'm sure there's a better way; any suggestions?
EDIT: I'm using the output later in my script and would like to avoid saving a file to increase speed if possible.
Here's what isn't working:
my_input=$(make -n file)
my_lines=$(echo $my_input | awk '/bin/ { print $1 }') #also tried printf and cat
Here's what works but obviously takes longer than it has to because of writing the file:
make -n file > temp
my_lines=$(awk '/bin/ { print $1 }' temp)
Many thanks for your help!
You can parse the output directly as it is generated with the following command, and save the result to a file.
make -n file | grep bin > result.out
If you really want to go for an overkill awk solution, change your second line in the following way:
my_lines="$(awk '/bin/ { print }' temp)"
I have split a file into multiple text files using the command below:
awk '{print $2 > $1"_npsc.txt"}' complete.txt
I want to store all the generated text files in another directory. How can I achieve this? Please help.
You could do something like:
awk '{print $2 > ("path/to/directory/" $1 "_npsc.txt")}' complete.txt
Just make sure you create the directory first (and replace path/to/directory/ with the path you like, keeping the trailing slash).
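If the files have already been generated in the current directory, a plain shell move works just as well (a sketch, assuming the _npsc.txt naming from above):
mkdir -p path/to/directory            # create the target directory if it does not exist yet
mv ./*_npsc.txt path/to/directory/    # move the generated files there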
I have a CSV file that I need to split by date. I've tried using the AWK code listed below (found elsewhere).
awk -F"," 'NR>1 {print $0 >> ($1 ".csv"); close($1 ".csv")}' file.csv
I've tried running this within terminal in both OS X and Debian. In both cases there's no error message (so the code seems to run properly), but there's also no output. No output files, and no response at the command line.
My input file has ~6k rows of data that looks like this:
date,source,count,cost
2013-01-01,by,36,0
2013-01-01,by,42,1.37
2013-01-02,by,7,0.12
2013-01-03,by,11,4.62
What I'd like is for a new CSV file to be created containing all of the rows for a particular date. What am I overlooking?
I've resolved this. Following the logic of this thread, I checked my line endings with the file command and learned that the file had the old-style Mac line terminators. I opened my input CSV file with Text Wrangler and saved it again with Unix style line endings. Once I did that, the awk command listed above worked as expected. It took ~5 seconds to create 63 new CSV files broken out by date.
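For the record, the same fix can be done from Terminal instead of a GUI editor; a sketch that converts old-style Mac (CR-only) line endings to Unix (LF) endings before splitting:
tr '\r' '\n' < file.csv > file_unix.csv
awk -F"," 'NR>1 {print $0 >> ($1 ".csv"); close($1 ".csv")}' file_unix.csv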
To retrieve information from a log file that uses ";" as the separator, I use:
grep "END SESSION" filename.log | cut -d";" -f2
where
-d, --delimiter=DELIM use DELIM instead of TAB for field delimiter
-f, --fields=LIST select only these fields; also print any line
that contains no delimiter character, unless
the -s option is specified
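In the spirit of the awk answers above, the same extraction can be done with awk alone (a sketch, assuming the second ;-separated field is the one you want):
awk -F';' '/END SESSION/ { print $2 }' filename.log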