In cscope, how to search for a string with a period in it? - ctags

In cscope, how do I search for a string containing a period? For example, I have the line
.mount = testmount
in my code, but if I search for the C symbol or even the text string .mount, I get every line that contains the word mount instead. Using a backslash as an escape character doesn't seem to work either.

Search for an egrep pattern instead of a symbol or text string: a period is valid input when searching egrep patterns and files, but not when searching symbols or text.
When cscope starts, it presents the following options. Choose "Find this egrep pattern:".
Find this C symbol:
Find this global definition:
Find functions called by this function:
Find functions calling this function:
Find this text string:
Change this text string:
Find this egrep pattern:\.mount
Find this file:
Find files:

Related

Ultimate find command in bash to trace all illegal and unknown characters and symbols in filenames

Many similar solutions can be found to detect or change "illegal" characters in filenames.
Most solutions require you to know the illegal characters.
A solution to find those filenames often ends with something like:
find . -name "*[\+\;\"\\\=\?\~\<\>\&\*\|\$\'\,\{\}\%\^\#\:\(\)]*"
This is already quite good, but sometimes there are cryptic characters (e.g. h͔͉̝e̻̦l̝͍͓l̢͚̻o͇̝̘w̙͇͜o͔̼͚r̝͇̞l̘̘d̪͔̝.̟͔̠t͉͖̼x̟̞t̢̝̦ or ʇxʇ.pʅɹoʍoʅʅǝɥ or © or €), symbols, or characters from other character sets in my file names. I can not trace these files this way. Inverse lookarounds or the find command with regex is probably the right approach, but I don't get anywhere.
In other words: finding all filenames which do NOT match the pattern [^a-zA-Z0-9 ._-] would be perfect. Mostly [:alnum:], but with a regex the command would be more flexible.
find . ! -name '[^a-zA-Z0-9 ._-]' does not do the job. Any idea?
I use bash or zsh on OSX.
You can try
find . -name '*[!a-zA-Z0-9 ._-]*'
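As a quick self-contained check (the directory and file names below are made up; LC_ALL=C keeps the character ranges byte-based, so results don't vary with locale collation):

```shell
# Names containing only a-zA-Z0-9, space, dot, _ or - are not reported;
# any other character (here the copyright sign) makes the name match.
mkdir -p scratch
touch 'scratch/plain_name-1.txt' 'scratch/bad©name.txt'
LC_ALL=C find scratch -name '*[!a-zA-Z0-9 ._-]*'
```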

Braces in shell parameter expansion don't work right

I have a program that parses two files and compares them, looking for conflicts between the two and allowing the user to decide what action to take. As a result, I need to be able to parse the lines below. If a string contains { or } when using pattern-replacement parameter expansion, the result comes out wrong.
I was looking for a potential work around for the following lines
F=TSM_CLASS="Test text {class}"
newstring=${F//{class}/\\{class\\}}
Results:
echo $newstring
TSM_CLASS="Test text }/\{class\}}"
${F//{class} is a complete parameter expansion which replaces every instance of {class in F's value with the empty string. To embed braces in the pattern and/or the replacement string, you need to escape them.
$ F=TSM_CLASS="Test text {class}"
$
$ echo "${F//{class\}/\\{class\\\}}"
TSM_CLASS=Test text \{class\}
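The same expansion can be stored in the question's newstring variable (a minimal sketch reusing the variable names from the question):

```shell
# Escape "}" in the pattern so it doesn't terminate the expansion early,
# and escape the literal braces wanted in the replacement.
F=TSM_CLASS="Test text {class}"
newstring="${F//{class\}/\\{class\\\}}"
echo "$newstring"   # prints: TSM_CLASS=Test text \{class\}
```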

Slice keywords from log text files

I have a big log file with lines such as
[2016-06-03T10:03:12] No data: TW.WA2
and
[2016-06-03T11:03:02] wrong overlaps: XW.W12.HHZ.2007.289
and
[2016-06-03T14:05:26] failed to correct YP.CT02.HHZ.2012.334 because No matching response.
Each line consists of a timestamp, a reason for the logging and a keyword composed of some substrings connected by dots (TW.WA2, XW.W12.HHZ.2007.289 and YP.CT02.HHZ.2012.334 in above examples).
The format of the keywords of a specific type is fixed (the substrings are joined by a fixed number of dots).
The substrings are composed of letters and digits (0-5 characters each; not all substrings can be empty, generally at most one is, e.g., XW.WTA12..2007.289).
I want to
extract the keywords
save different types of keywords uniqued to separated files
So far I have tried grep, but it only does the classification:
grep "wrong overlaps" logfile > wrong_overlaps
grep "failed to correct" logfile > no_resp
grep "No data" logfile > no_data
In no_data, the expected contents are like
AW.AA1
TW.WA2
TW.WA3
...
In no_resp, the expected contents are like
XP..HHZ.2002.334
YP.CT01.HHZ.2012.330
YP.CT02.HHZ.2012.334
...
However, the simple grep commands above save the full lines. I guess I need regex to extract the keywords?
Assuming a keyword is defined as groups of letters and digits joined by periods, the following regex will match all keywords:
% grep -oE '\w+(\.\w+)+' data
TW.WA2
XW.W12.HHZ.2007.289
YP.CT02.HHZ.2012.334
-o prints only the matches, and -E enables extended regular expressions.
This will, however, not split the matches into multiple files, e.g. creating a file wrong_overlaps that contains all lines with wrong overlaps.
You can use -P to enable Perl Compatible Regular Expressions which support lookbehinds:
% grep -oP '(?<=wrong overlaps: )\w+(\.\w+)+' data
XW.W12.HHZ.2007.289
But note that PCRE doesn't support variable-length lookbehinds, so you will need to type out the full fixed-length text preceding the keyword. For example, given
something test string: ABC.DEF
ABC.DEF can be extracted with:
(?<=test string: )\w+(\.\w+)+
but not with:
(?<=test string)\w+(\.\w+)+
since the unmatched ": " then sits between the lookbehind and the keyword.
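Putting the pieces together, each of the question's three classifications can extract and de-duplicate its keywords in one pass (a sketch assuming GNU grep's -P; the sample log lines are the ones quoted above; note that this regex, like the one above, requires a non-empty substring between dots, so a keyword like XP..HHZ.2002.334 would be truncated):

```shell
# Build a small sample log from the lines quoted in the question.
cat > logfile <<'EOF'
[2016-06-03T10:03:12] No data: TW.WA2
[2016-06-03T11:03:02] wrong overlaps: XW.W12.HHZ.2007.289
[2016-06-03T14:05:26] failed to correct YP.CT02.HHZ.2012.334 because No matching response.
EOF
# One grep per log type: the lookbehind selects the type, -o keeps only
# the keyword, and sort -u de-duplicates.
grep -oP '(?<=No data: )\w+(\.\w+)+'          logfile | sort -u > no_data
grep -oP '(?<=wrong overlaps: )\w+(\.\w+)+'   logfile | sort -u > wrong_overlaps
grep -oP '(?<=failed to correct )\w+(\.\w+)+' logfile | sort -u > no_resp
```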

How can I get a long listing of text files containing "foo" followed by two digits?

Using metacharacters, I need to perform a long listing of all files whose name contains the string foo followed by two digits, then followed by .txt. foo**.txt will not work, obviously. I can't figure out how to do it.
Use Valid Shell Globbing with Character Class
To find your substring anywhere in a filename like bar-foo12-baz.txt, you need a wildcard before and after the match. You can also use a character class in your pattern to match a limited range of characters. For example, in Bash:
# Explicit character classes.
ls -l *foo[0-9][0-9]*.txt
# POSIX character classes.
ls -l *foo[[:digit:]][[:digit:]]*.txt
See Also
Filename Expansion
Pattern Matching
Something like ls foo[0-9][0-9]*.txt, or whatever exactly fits your pattern.
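A quick self-contained check of the character-class glob (the file names below are made up):

```shell
# Only names containing foo followed by two digits and ending in .txt
# match: foo1.txt has one digit and foo123.csv the wrong extension.
mkdir -p demo
touch demo/foo12.txt demo/foo1.txt demo/bar-foo34-baz.txt demo/foo123.csv
(cd demo && echo *foo[0-9][0-9]*.txt)
```

echo is used here so the expansion itself is visible; with ls -l each match would be listed with its details instead.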

Shortcut in nano editor for adding quotation marks to every word beginning with $ in a bash script?

I am new to writing bash and I just finished this long script, but I made the mistake of not adding quotation marks around all the variables beginning with $. Adding all the quotation marks by hand is going to take a while. Is there a shortcut so that all the words in the text file beginning with $ get quotation marks around them? For example, if a line in the file looks like:
python myProgram.py $car1 $car2 $speed1 $speed2
Then after the shortcut it will appear as
python myProgram.py "$car1" "$car2" "$speed1" "$speed2"
I am writing the script using nano.
Use global search and replace with the expression (\$\w+).
Switch to search and replace mode with C-\.
Switch to regex mode with Alt-R.
Type the expression (\$\w+). Hit Enter.
Type the replacement expression "\1", which replaces the captured expression with a quoted version. Hit Enter.
On the match, hit A for All.
Given your need, it doesn't seem mandatory to stick to that editor. If you have access to a shell, you might try this simple sed command:
sed -i.bak -r 's/\$\w+/"&"/g' my-script.sh
This is far from perfect but should do the job in your particular case. In the above command:
-i.bak performs the replacement "in place", that is, it modifies the original file, keeping a backup with the .bak extension
s/..../..../g is the usual sed command to search and replace using a pattern. The search pattern is between the first two /; the replacement is between the last two /
\$\w+ this pattern corresponds to a $ followed by one or more word characters (\w+). The backslash before $ is needed because that character otherwise has a special meaning in a search pattern
"&" is the replacement string. In it, & is replaced by the string matched by the search pattern. Broadly speaking, this puts quotes around any string matching the search pattern.
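Applied to the example line from the question (a sketch assuming GNU sed; on BSD/macOS sed, use -E instead of -r):

```shell
# Write the sample line, run the substitution in place, show the result.
printf 'python myProgram.py $car1 $car2 $speed1 $speed2\n' > my-script.sh
sed -i.bak -r 's/\$\w+/"&"/g' my-script.sh
cat my-script.sh
# prints: python myProgram.py "$car1" "$car2" "$speed1" "$speed2"
```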
