How to remove multi-line text pattern from all text files (recursively)? - windows

I want to remove the pattern below from all text files, walking through the directories recursively, with a Windows batch script (.bat).
How can I do that?
Here's the multi-line text pattern:
/* this is
a multi-line
pattern
*/

I'd say cmd isn't really suited to that. Try Python or some other somewhat higher-level language.

Related

Bash: replace specific newline with space

I have numerous files with extension .awesome containing lines like the following:
something =
[51,42,12]
Where "something =" appears in all the files, as does "[" (the numbers vary.)
I would like to get rid of the newline, but don't know how. I came across tr, but worry it would replace all newlines; my files contain multiple newlines that I would like to retain (only this one should change). I've been able to find and replace successfully with sed in the past, but am having trouble specifically with the special characters (\n and =). In addition, I'm reading that sed works line by line and cannot handle something like this.
Any guidance would be appreciated.
GNU sed solution:
Sample test.awesome file contents:
some text
another text
something =
[51,42,12]
text
text
The job:
sed '/something =/{N; s/\n/ /;}' test.awesome
The output:
some text
another text
something = [51,42,12]
text
text
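The command above writes to stdout for a single file; since the question mentions numerous .awesome files, an in-place loop over the glob is one way to cover them all. A sketch assuming GNU sed (the sample file contents and the .bak suffix are illustrative):

```shell
# demo file (made up for illustration); real .awesome files would already exist
printf 'some text\nsomething =\n[51,42,12]\ntext\n' > test.awesome

# edit every .awesome file in place, keeping a .bak backup of each
for f in *.awesome; do
  sed -i.bak '/something =/{N; s/\n/ /;}' "$f"
done

cat test.awesome    # "something =" and "[51,42,12]" are now on one line
```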

Programmatically delete all text between 2 characters in osx terminal

I have a thousand .txt files:
1.txt
2.txt
3.txt
In each file, tags appear several times among my text:
{somethinghere...blablabla} then the text I want to keep, then again {somethinghere...blablabla}
I'm not very practical with the Mac OS X command line; can someone help me write a command that opens each file, parses it, and deletes all the text enclosed between "{" and "}"?
To be clear:
First of all I need to open each file, then parse the text. When the loop finds a "{" it starts deleting until it finds a "}". When done parsing, it saves and closes the file. That's what I need to do.
$ sed -i.bak -e 's#{[^}]*}##g' *.txt
-i.bak makes a backup copy of each modified file. If you don't want backups, use -i '' on OS X (on Linux, a bare -i is enough)
in substitutions, the delimiter can be a character other than /; here I chose #, so: s#<REGEX>#<REPLACEMENT># (the basic form for substitutions is s///)
In the regex, we search for a literal {, then [^}]* matches anything that is not a } (* means 0 or more occurrences). Last, we match the closing } and replace the whole match with nothing, which deletes it
the g modifier at the end means: not only the first match, but all of them
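The effect of that substitution can be checked on a one-off sample line (the text here is made up):

```shell
# both {...} groups disappear; everything outside the braces is kept
echo 'keep {drop this} and {this too} end' | sed -e 's#{[^}]*}##g'
```

This prints `keep  and  end`, with doubled spaces where the brace groups used to be.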

How to loop through a series of files with Bash

I am trying to loop through a series of files and modify them. The files follow a pattern, but I can't use pattern matching alone because I don't need all the files that match; just those between a certain sequence of numbers.
Example:
for files in D70_3113.NEF...D70_3330.NEF;do exiftool -GPS...; done
If you want to loop through the list of numbers, you can use a brace expansion:
for files in D70_{3113..3330}.NEF; do exiftool -GPS...; done
It depends on what you can expect from your naming scheme. I can't tell if your files can range from
D70_3113.NEF to D79_9999.NEF
or
D70_3113.NEF to D70_3999.NEF
or what have you. Assuming the latter, you could do:
for files in D70_3[0-9][0-9][0-9].NEF; do exiftool -GPS...; done
...just let the shell's pattern matching do the job for you.
Caveat: if you have too many files, the "for" command line may become too long. In that case you'd need to run find and pipe its output into a "while" loop. But today's command lines can run quite long: over 100,000 characters. See Bash command line and input limit
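The find-plus-while fallback mentioned in the caveat can be sketched as follows (bash; the directory, the file names, and the echo standing in for the exiftool call are all illustrative):

```shell
# demo files (an assumption for illustration)
mkdir -p nefdemo
touch nefdemo/D70_3113.NEF nefdemo/D70_3330.NEF

# -print0 with read -d '' keeps filenames with spaces or newlines intact;
# replace the echo with the real exiftool invocation
find nefdemo -name 'D70_3[0-9][0-9][0-9].NEF' -print0 |
while IFS= read -r -d '' f; do
  echo "processing $f"
done
```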

How to search for text in file and display lines

I'm currently trying to search several .sql files that I have for certain text. All of the .sql files are in the same folder. Here's the command I'm using:
grep 'text to search for'
However, that isn't displaying the lines that contain the text I'm searching for. Is there a way to display those lines and print them to a new text file?
You have to pass a file or file-pattern argument, like:
grep 'text to search for' *.sql
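To also write the matching lines to a new text file, as the question asks, redirect stdout (the sample file and its contents here are made up):

```shell
# demo .sql file for illustration
printf 'SELECT 1;\n-- text to search for\nSELECT 2;\n' > demo.sql

# the matching lines go to results.txt instead of the terminal
grep 'text to search for' *.sql > results.txt
cat results.txt
```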

sed/awk/bash to replace text between two strings with external file contents

I'm looking to write a script/command, that'll take inputFile1, look for a specific start and end string in it, and replace all the text in between them
with the full contents of inputFile2.
Ideally, but not mandatory, this should work without a need to escape special characters, so I can put the strings in variables that get called by the script (that way I could easily reuse it multiple times).
As an example, I have file inputYes.txt with contents:
DummyOne
Start
That
What
Yes
End
DummyTwo
And inputNo.txt with contents:
This
Why
Not
And I want the script to search inputYes.txt for the strings Start and End, and replace all the text in between with the contents of inputNo.txt, and write to the file.
So after running it, inputYes.txt should read
DummyOne
Start
This
Why
Not
End
DummyTwo
sed '/start_string/,/end_string/{/start_string/{p; r inputFile2
}; /end_string/!d}' inputFile1
Inside the start/end range this prints the start line, queues inputFile2 with the r command, and deletes every remaining line except the end line, so both marker lines survive (a plain /start_string/,/end_string/d would delete them too). The newline after the r command's filename is required, because the filename runs to the end of the line. Add -i (GNU sed) to modify inputFile1 in place.
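Since the question title also allows awk, here is a sketch of an equivalent approach: print everything outside the Start/End block and inject the replacement file right after the start line. The file names and contents follow the example above; writing the result back to inputYes.txt would need a temporary file or GNU awk's -i inplace:

```shell
# sample files from the example
printf 'DummyOne\nStart\nThat\nWhat\nYes\nEnd\nDummyTwo\n' > inputYes.txt
printf 'This\nWhy\nNot\n' > inputNo.txt

# on Start: print it, dump inputNo.txt, start skipping; on End: stop skipping
awk '/Start/ {print; while ((getline line < "inputNo.txt") > 0) print line; skip=1; next}
     /End/   {skip=0}
     !skip' inputYes.txt
```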
