I need to process a stack trace similar to the one below, which contains the string "oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver", but I need the exact method call that failed; e.g. in this case it is "oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver.attachFile". I have multiple stacks embedded in HTML pages to process, and multiple methods will be failing. How can I do this in a shell script? Each entire stack sits on a single HTML line, which causes grep to return the whole stack. I tried multiple things, but nothing worked cleanly. I guess awk or a similar regex tool could be the way to go, but I'm not sure.
at oracle.adf.view.rich.automation.test.selenium.RichWebDriverTest.getElement(RichWebDriverTest.java:1414)
at oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver.attachFile(ApplcoreWebdriver.java:1460)
at oracle.apps.fnd.applcore.attachments.ui.util.accessor.FndManageAttachmentsPopupAccessor.attachFile(FndManageAttachmentsPopupAccessor.java:1475)
at oracle.apps.fnd.applcore.attachments.ui.util.accessor.FndManageAttachmentsPopupAccessor.updateRowFileAttachment(FndManageAttachmentsPopupAccessor.java:550)
at oracle.apps.fnd.applcore.attachments.ui.AttachmentsBaseSelenium.testLMultiFileAdd_22108390(AttachmentsBaseSelenium.java:1782)
at oracle.javatools.test.WebDriverRunner.run(WebDriverRunner.java:122)
I cannot guarantee this is the most efficient solution, but it suits the need. I am using cat, tr, grep and sed combined.
Split the single big HTML line into multiple lines using cat and tr:
cat file | tr ' ' '\n' | grep -w "$search_string" | sed 's/(.*)//'
For testing purposes I am using the stack snippet you shared in the OP.
$ cat file
at oracle.adf.view.rich.automation.test.selenium.RichWebDriverTest.getElement(RichWebDriverTest.java:1414)
at oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver.attachFile(ApplcoreWebdriver.java:1460)
at oracle.apps.fnd.applcore.attachments.ui.util.accessor.FndManageAttachmentsPopupAccessor.attachFile(FndManageAttachmentsPopupAccessor.java:1475)
at oracle.apps.fnd.applcore.attachments.ui.util.accessor.FndManageAttachmentsPopupAccessor.updateRowFileAttachment(FndManageAttachmentsPopupAccessor.java:550)
at oracle.apps.fnd.applcore.attachments.ui.AttachmentsBaseSelenium.testLMultiFileAdd_22108390(AttachmentsBaseSelenium.java:1782)
at oracle.javatools.test.WebDriverRunner.run(WebDriverRunner.java:122)
Running the above command for the search string oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver
$ cat file | tr ' ' '\n' | grep -w "oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver" | sed 's/(.*)//'
oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver.attachFile
Any suggestions for simplifying it are welcome.
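One possible simplification, as a sketch: grep -o prints only the matched part of a line, one match per line, so it copes with the single-line HTML and the (File:line) suffix never appears, which drops the sed step (this assumes the class name is fixed and the method names are plain identifiers):
$ grep -o 'oracle\.apps\.fnd\.applcore\.test\.selenium\.ApplcoreWebdriver\.[A-Za-z0-9_]*' file | sort -u
oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver.attachFile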
Well, I think I found a working solution:
for stack in $all_stacks
do
    words=`echo $stack | tr ' ' '\n'`
    for word in $words
    do
        search_string="oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver"
        if [[ $word == *${search_string}* ]]
        then
            echo $word
        fi
    done
done
The idea was to split each line (stack) into space-separated words and check each word against the required string. The matching word is then printed, which gives the required value (including the API call).
Update based on the answer from @Inian:
Below is a single updated command that achieves this by processing multiple HTML files containing the error stacks to be processed.
find . -name '*-errors.html' | xargs cat | tr ' ' '\n' | grep -w "oracle.apps.fnd.applcore.test.selenium.ApplcoreWebdriver" | sed 's/(.*)//' | cut -d '<' -f1 | sort -u
I am using zsh on macOS
I have a shell script that produces a text file with speedtest results in the following layout:
Download: 63.57 Mbps (data used: 69.3 MB)
Upload: 16.11 Mbps (data used: 23.0 MB)
I can manipulate the layout and produce this:
↓ 63.57 Mbps |
↑ 16.11 Mbps
Note the line break before the first line of text and the one after the pipe. In the Terminal only the final line is printed out: ↑ 16.11 Mbps
The script to transform the input is this:
DOWNLOAD=$(cat ~/Terminal_Projects/temp_speedtest_result.txt | grep Download | sed 's/ Download: /↓ /g' | sed 's/ (data used: //g' | sed -E 's/[0-9]{1,4}\.[0-9] MB)//g' | sed 's/\n\r\t//')
UPLOAD=$(cat ~/Terminal_Projects/temp_speedtest_result.txt | grep Upload | sed 's/ Upload: /↑ /g' | sed 's/ (data used: //g' | sed -E 's/[0-9]{1,4}\.[0-9] MB)//g' | tr '\n' ' ')
RESULT=$DOWNLOAD" | "$UPLOAD
echo $RESULT
I used multiple instances of sed because I couldn't get it to work in just one instance. You may know how to get it to work.
What I want to do is output the DOWNLOAD and UPLOAD variables on a single line. I have another very similar script that achieves that with exactly the same manipulation of variables.
What I have tried:
Using RESULT="$DOWNLOAD | $UPLOAD"
Using RESULT="${DOWNLOAD} | ${UPLOAD}"
Using tr '\n' ' ' instead of the sed command to remove \n
I tried removing the up and down arrows in case those symbols aren't supported - same behaviour.
I have tried using sed on the RESULT variable to try removing new lines. I also tried writing the contents of the RESULT variable to a new temp txt file and then retrieving the contents of the file and using grep to extract the results one by one in the hope the new lines would not be copied. Didn't work for me.
It looks like there are line breaks that I have been unable to remove but I could be wrong.
I am new to command line and shell scripts. Trying to apply my very limited knowledge to a new scenario. Any help would be appreciated.
tr seems to do the job and echo -n "$UPLOAD" shows on a single line, so I think you're on the right track and only need to fix the DOWNLOAD part.
I suggest you simplify the script a bit using something along these lines:
INPUT_FILE="$HOME/Terminal_Projects/temp_speedtest_result.txt"
DOWNLOAD="$(grep Download "$INPUT_FILE" | awk '{print "↓ " $2 " " $3}' | tr '\n' ' ')"
UPLOAD="$(grep Upload "$INPUT_FILE" | awk '{print "↑ " $2 " " $3}' | tr '\n' ' ')"
echo "$DOWNLOAD | $UPLOAD"
How about
RESULT="$(grep -Ew '(Down|Up)load' <~/Terminal_Projects/temp_speedtest_result.txt | tr '\n' ' ')"
? This is more efficient (only one grep and one tr process is needed) and it sidesteps the line-break problem entirely, because tr strips the newlines before anything is stored in a variable, rather than stitching two separately captured variables together afterwards as in RESULT=$DOWNLOAD" | "$UPLOAD.
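If the arrow formatting is still wanted, a single awk pass can build the whole line. A sketch (it also deletes any stray carriage returns, which are a common cause of the "only the last line shows" symptom in a terminal):
RESULT="$(awk '{gsub(/\r/,"")} /Download/{printf "↓ %s %s | ", $2, $3} /Upload/{printf "↑ %s %s", $2, $3}' ~/Terminal_Projects/temp_speedtest_result.txt)"
echo "$RESULT"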
I have a file that contains info that I'm retrieving this way:
Command
cat 2018_02_15_09_01_08_result.tsv | grep -o [A-Z]\\*[0-9]*:[0-9]* | sort | uniq | sed -e 's/^/HLA-/' |tr '\n' ',' | sed '$ s/.$//'
Output
HLA-A*30:02,HLA-B*18:01,HLA-C*05:01
But when I try to save this in a variable, the asterisk and a letter disappear. I've tried several ways, adding/removing commas etc., and I'm still not able to print it properly.
hla=`cat 2018_02_15_09_01_08_result.tsv | grep -o [A-Z]\\*[0-9]*:[0-9]* | sort | uniq | sed -e 's/^/HLA-/' |tr '\n' ',' | sed '$ s/.$//'`
echo $hla
HLA-05:01,HLA-18:01,HLA-30:02
echo "$hla"
HLA-05:01,HLA-18:01,HLA-30:02
There are multiple errors here, most of which will be aptly diagnosed by http://shellcheck.net/ without any human intervention.
You really should single-quote your regular expressions unless you specifically require the shell to perform wildcard expansion and whitespace tokenization on the regex before executing the command.
The obsolescent `command` in backticks introduces some unfortunate additional shell handling on the string inside the backticks. The solution since the 1990s is to prefer the $(command) syntax for command substitution, which does not exhibit this problem.
The cat is useless; grep knows full well how to read a file.
Try this refactored code:
hla=$(grep -o '[A-Z]\*[0-9]*:[0-9]*' 2018_02_15_09_01_08_result.tsv |
sort -u | sed -e 's/^/HLA-/' | tr '\n' ',' | sed '$ s/.$//')
echo "$hla"
The double quotes around the variable interpolation in the echo are necessary and useful. Notice also the line wraps for legibility and the use of sort -u in preference over sort | uniq (and generally try to reduce the number of processes; once I understand what the sed | tr | sed does, I can probably propose a simplification for that, too). Perhaps the simplest fix would be to refactor all of this into a single Awk script, but without access to the input, it's hard to tell you in more detail what that might look like.
(Also, are you really sure you need to capture the value to a variable? Often variable=value; echo "$variable" is just an obscure and inefficient way to say echo "value". And variable=$(command); echo "$variable" is better written simply command and capturing the command's standard output just so you can print it to standard output is a pure waste of cycles, unless you are planning to do something more with that variable's value.)
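As for the sed | tr | sed mentioned above: the tr '\n' ',' | sed '$ s/.$//' part just joins the lines with commas and drops the final comma, which paste can do in one step. A possible simplification, as a sketch (the asterisk stays escaped inside the single quotes so grep matches it literally):
hla=$(grep -o '[A-Z]\*[0-9]*:[0-9]*' 2018_02_15_09_01_08_result.tsv | sort -u | sed 's/^/HLA-/' | paste -sd, -)
echo "$hla"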
I've solved it by saving the output of the command with a redirection:
cat 2018_02_15_09_01_08_result.tsv |
grep -o [A-Z]\\*[0-9]*:[0-9]* |
sort | uniq |
sed -e 's/^/HLA-/' |tr '\n' ',' | sed '$ s/.$//' > out_file
hla=`cat out_file`
echo $hla
which gets me the expected HLA-A*30:02,HLA-B*18:01,HLA-C*05:01. Not the ideal solution, but it works.
Okay, so I have a text file containing multiple strings; for example:
Hello123
Halo123
Gracias
Thank you
...
I want grep to use these strings to find lines with matching strings/keywords from other files within a directory
Example of the text files being grepped:
123-example-Halo123
321-example-Gracias-com-no
321-example-match
So in this instance the output should be:
123-example-Halo123
321-example-Gracias-com-no
With GNU grep:
grep -f file1 file2
-f FILE: Obtain patterns from FILE, one per line.
Output:
123-example-Halo123
321-example-Gracias-com-no
You should probably look at the man page for grep to get a better understanding of what options are supported by the grep utility. However, there are a number of ways to achieve what you're trying to accomplish. Here's one approach:
grep -e "Hello123" -e "Halo123" -e "Gracias" -e "Thank you" list_of_files_to_search
However, since your search strings are already in a separate file, you would probably want to use this approach:
grep -f patternFile list_of_files_to_search
I can think of two possible solutions for your question:
Use multiple regular expressions - a regular expression for each word you want to find, for example:
grep -e Hello123 -e Halo123 file_to_search.txt
Use a single regular expression with an "or" operator. Using Perl regular expressions, it will look like the following:
grep -P "Hello123|Halo123" file_to_search.txt
EDIT:
As you mentioned in your comment, you want to use a list of words to find from a file and search in a full directory.
You can manipulate the words-to-find file so that it looks like a concatenation of -e flags:
cat words_to_find.txt | sed 's/^/-e "/;s/$/"/' | tr '\n' ' '
This will return something like -e "Hello123" -e "Halo123" -e "Gracias" -e "Thank you", which you can then pass to grep using xargs:
cat words_to_find.txt | sed 's/^/-e "/;s/$/"/' | tr '\n' ' ' | xargs grep dir_to_search/*
As you can see, the last command also searches in all of the files in the directory.
SECOND EDIT: as PesaThe mentioned, the following command would do this in a much simpler and more elegant way:
grep -f words_to_find.txt dir_to_search/*
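One extra note, assuming the entries in words_to_find.txt are literal strings rather than regular expressions: adding -F makes grep treat them as fixed strings, so characters such as . or * in the words are not interpreted as regex metacharacters:
grep -Ff words_to_find.txt dir_to_search/*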
I am trying to cat a file to create a copy of itself, but at the same time replace some values.
My command is:
cat ${FILE} | sed "s|${key}|${value}|g" > ${TEMP_FILE}
However, when I open the temp file, none of the keys have been replaced; it's just a straight copy. I have echoed the values of key and value and they are correct; they come from an array element.
Yet if I use a plain string rather than a variable, it works fine for one type of key, i.e.:
cat ${FILE} | sed "s|example_key|${value}|g" > ${TEMP_FILE}
The example_key instances within the file are replaced which is what I want.
However, when I try to use my array $key parameter, it does nothing. No idea why :-(
Command usage:
declare -a props
...
....
for x in "${props[#]}"
do
key=`echo "${x}" | cut -d '=' -f 1`
value=`echo "${x}" | cut -d '=' -f 2`
# global replace on the $FILE
cat ${FILE} | sed "s|${key}|${value}|g" > ${TEMP_FILE}
#cat ${FILE} | sed "s|example_key|${value}|g" > ${TEMP_FILE}
done
array elements are stored in the following format: $key=$value
key='echo "${x}" | cut -d '=' -f 1
value='echo "${x}" | cut -d '=' -f 2
Use back-ticks, not single-quotes, if you want to do command substitution.
key=`echo "${x}" | cut -d '=' -f 1`
value=`echo "${x}" | cut -d '=' -f 2`
Also note that as you loop over the series of key=value pairs, you're overwriting your temp file each time, using only one substitution applied to the original file. So after the loop is finished, the best you can hope for is that only the last substitution will be applied.
I'd also suggest not doing this in multiple passes -- do it by passing multiple expressions to sed:
for x in "${props[#]}" ; do
subst="$subst -e 's=$x=g'"
done
sed $subst "${FILE}" > "${TEMP_FILE}"
I'm using a trick: by using = as the delimiter for the sed substitution expression, we don't have to separate the key from the value. The command simply becomes:
sed -e 's=foo=1=g' -e 's=bar=2=g' "${FILE}" > "${TEMP_FILE}"
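With hypothetical pairs foo=1 and bar=2 in props, the loop builds up exactly those two expressions:
props=(foo=1 bar=2)   # hypothetical sample pairs
subst=()
for x in "${props[@]}"; do subst+=(-e "s=$x=g"); done
printf '%s\n' "${subst[@]}"   # prints -e, s=foo=1=g, -e, s=bar=2=g on separate lines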
Thanks to @BillKarwin for spotting the crux of the problem: each iteration of the loop wipes out the previous iterations' replacements, because the result of a single key-value pair replacement replaces the entire output file every time.
Try the following:
declare -a props
# ...
cp "$FILE" "$TEMP_FILE"
for x in "${props[#]}"; do
IFS='=' read -r key value <<<"$x"
sed -i '' "s|${key}|${value}|g" "${TEMP_FILE}"
done
Copies the input file to the output file first, then replaces the output file in-place in every iteration of the loop, using sed's -i option (the -i '' form is the BSD/macOS syntax; with GNU sed it would be just -i).
I also streamlined the code to parse each line into key and value, using read.
Also note that I consistently double-quoted all variable references.
@anubhava makes a good general point: depending on the variable values, a different regex delimiter may be needed (in your case: if the keys or values contained '|', you couldn't use '|' to delimit the regexes).
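For example, switching to # as the delimiter, as a sketch (any character that never appears in the keys or values will do):
sed -i '' "s#${key}#${value}#g" "${TEMP_FILE}"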
Update: @BillKarwin makes a good point: performing the replacements one by one, in a loop, is inefficient.
Here's a one-liner that avoids loops altogether:
sed -f <(printf '%s\n' "${props[@]}" |
awk -F'=' '{ if ($0) print "s/" $1 "/" substr($0, 2 + length($1)) "/g" }') "$FILE" > "$TEMP_FILE"
Uses awk to build up the entire set of substitution commands for sed (one per line).
Then feeds the result via process substitution as a command file to sed with -f.
Correctly handles the case where values have embedded = characters.
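For example, for a hypothetical pair like db_pass=p=ss, the awk step would emit the sed command
s/db_pass/p=ss/g
so the value keeps its embedded = intact.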
COMPANY_NAME=`cat file.txt | grep "company_name" | cut -d '=' -f 2`
outputs something like this
"Abc Inc";
What I want to do is remove the trailing ";" as well. How can I do that? I am a beginner at bash. Any thoughts or suggestions would be helpful.
This will remove the last character contained in your COMPANY_NAME var, regardless of whether it is a semicolon or not:
echo "$COMPANY_NAME" | rev | cut -c 2- | rev
I'd use sed 's/;$//'. eg:
COMPANY_NAME=`cat file.txt | grep "company_name" | cut -d '=' -f 2 | sed 's/;$//'`
foo="hello world"
echo ${foo%?}
hello worl
I'd use head --bytes -2, or head -c-2 for short (-2 rather than -1, because the pipeline's output still ends with a newline, so the last two bytes are the semicolon and the newline).
COMPANY_NAME=`cat file.txt | grep "company_name" | cut -d '=' -f 2 | head --bytes -2`
head outputs only the beginning of a stream or file. Typically it counts lines, but it can be made to count characters/bytes instead. head --bytes 10 will output the first ten characters, but head --bytes -10 will output everything except the last ten.
NB: you may have issues if the final character is multi-byte, but a semi-colon isn't
I'd recommend this solution over sed or cut because
It's exactly what head was designed to do, thus fewer command-line options and an easier-to-read command
It saves you having to think about regular expressions, which are cool/powerful but often overkill
It saves your machine having to think about regular expressions, so will be imperceptibly faster
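A quick check of that behaviour with the sample value from the question (this assumes GNU head; the stock BSD/macOS head does not accept negative byte counts):
$ printf '"Abc Inc";\n' | head --bytes -2
"Abc Inc"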
I believe the cleanest way to strip a single character from a string with bash is:
echo ${COMPANY_NAME:: -1}
but I haven't been able to embed the grep piece within the curly braces, so your particular task becomes a two-liner:
COMPANY_NAME=$(grep "company_name" file.txt); COMPANY_NAME=${COMPANY_NAME:: -1}
This will strip any character, semicolon or not, but can get rid of the semicolon specifically, too.
To remove ALL semicolons, wherever they may fall:
echo ${COMPANY_NAME//;/}
To remove only a semicolon at the end:
echo ${COMPANY_NAME%;}
Or, to remove a run of trailing semicolons (this needs extglob enabled via shopt -s extglob):
echo ${COMPANY_NAME%%+(;)}
For great detail and more on this approach, The Linux Documentation Project covers a lot of ground at http://tldp.org/LDP/abs/html/string-manipulation.html
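A quick check with the sample value from the question:
$ COMPANY_NAME='"Abc Inc";'
$ echo "${COMPANY_NAME%;}"
"Abc Inc"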
Using sed, if you don't know what the last character actually is:
$ grep company_name file.txt | cut -d '=' -f2 | sed 's/.$//'
"Abc Inc"
Don't abuse cats. Did you know that grep can read files, too?
The canonical approach would be this:
grep "company_name" file.txt | cut -d '=' -f 2 | sed -e 's/;$//'
The smarter approach would use a single perl or awk statement, which can do the filtering and the different transformations at once. For example, something like this:
COMPANY_NAME=$( perl -ne '/company_name=(.*);/ && print $1' file.txt )
You don't have to chain so many tools. Just one awk command does the job:
COMPANY_NAME=$(awk -F"=" '/company_name/{gsub(/;$/,"",$2) ;print $2}' file.txt)
In Bash using only one external utility:
IFS='= ' read -r discard COMPANY_NAME <<< $(grep "company_name" file.txt)
COMPANY_NAME=${COMPANY_NAME/%?}
Assuming the quotation marks are actually part of the output, couldn't you just use the -o switch to return everything between the quote marks?
COMPANY_NAME="\"ABC Inc\";" | echo $COMPANY_NAME | grep -o "\"*.*\""
You can strip N characters from the beginning and end of a string using this bash construct, as someone said already:
$ fred=abcdefg.rpm
$ echo ${fred:1:-4}
bcdefg
HOWEVER, this is not supported in older versions of bash, as I discovered just now while writing a script for a Red Hat EL6 install process. This is the sole reason for posting here.
A hacky way to achieve this is to use sed with extended regex like this:
$ fred=abcdefg.rpm
$ echo $fred | sed -re 's/^.(.*)....$/\1/g'
bcdefg
Some refinements to the answer above. To remove more than one character, add multiple question marks. For example, to remove the last two characters from the variable $SRC_IP_MSG, you can use:
SRC_IP_MSG=${SRC_IP_MSG%??}
cat file.txt | grep "company_name" | cut -d '=' -f 2 | cut -d ';' -f 1
I am not finding that sed 's/;$//' works. It doesn't trim anything, though I'm wondering whether it's because the character I'm trying to trim off happens to be a "$". What does work for me is sed 's/.\{1\}$//'.
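If the character being trimmed really is a literal $, it has to be escaped in the regex, since an unescaped $ anchors the end of the line; and if the line ends in a Windows-style carriage return, ;$ will never match until the \r is removed. A sketch of both fixes (assuming GNU sed, which understands \r; with BSD sed you would strip the CR first with tr -d '\r'):
sed 's/\$$//' file            # strip a trailing literal $
sed 's/\r$//; s/;$//' file    # strip a trailing CR, then a trailing ;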