WebEdit and List Combo field Automation using UFT - hp-uft

Hi, I am trying to automate the input field for searching an equity on the website "https://www.nseindia.com/" using UFT. I am able to set a value in the WebEdit field, but I am not able to submit it using UFT.
Below is the descriptive programming code:
Set Obrowser = Browser("name:=NSE - National Stock Exchange of India Ltd\.")
Set oPage = Obrowser.Page("title:=NSE - National Stock Exchange of India Ltd\.")
oPage.WebList("html id:=QuoteSearch").Select "Equity"
oPage.WebEdit("name:=companyED","index:=0").Set "SBIN"
oPage.WebEdit("name:=companyED","index:=0").Submit
Image of the field which is highlighted:
Could you please help me handle this type of input box, shown in the screenshot?

I see that when you set a value in the search field, a list of matching results appears; clicking the appropriate result performs the search.
Instead of Submit, try the following:
oPage.WebElement("html id:=ajax_response").Link("text:=.*SBIN.*").Click
This assumes there is only one match (you can fine-tune it if there are more).
Explanation:
We first look for the list of results that match the search term (this is in a SPAN with id=ajax_response). Then, under that, we look for the Link that we want to click. In this case there is only one match, so the description doesn't really matter.
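Putting it together with the question's own descriptions, a minimal end-to-end sketch could look like this; the WaitProperty call and its 5-second timeout are an assumption added for synchronisation, not something confirmed on the page:
' Sketch only: reuses the test-object descriptions from the question above
Set oBrowser = Browser("name:=NSE - National Stock Exchange of India Ltd\.")
Set oPage = oBrowser.Page("title:=NSE - National Stock Exchange of India Ltd\.")
oPage.WebList("html id:=QuoteSearch").Select "Equity"
oPage.WebEdit("name:=companyED", "index:=0").Set "SBIN"
' Assumed synchronisation: wait up to 5 seconds for the suggestion list to render
oPage.WebElement("html id:=ajax_response").WaitProperty "visible", True, 5000
oPage.WebElement("html id:=ajax_response").Link("text:=.*SBIN.*").Click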

Related

Google Sheets: getting "Invalid" error in drop-down but the item is on the list

In Google Sheets, I created a data validation cell to select different time slots. I am getting an error when choosing 6:00, 7:00, 8:00, or 9:00, even though all of them are in the list. The error says: "Invalid: Input must be an item on the specified list"
I added the data validation like this:
Thanks a lot in advance!
The file is here: https://docs.google.com/spreadsheets/d/1NSBx87sWScwe2Vtp9FOcH3BupvPJ_jzISUeAk6gSBOM/edit#gid=612415402
Make sure to actually pick a choice from the dropdown, OR make sure your input is exactly the same as the choice.
By default, when you manually enter 6:00-9:00, Sheets automatically adds a leading 0 because it detects your input as a time.
So you should update your data validation list to use 06:00-09:00 instead, so that it matches what Sheets turns your input into when the times are entered manually.

How to properly scrape filtered content into Google Sheets using an XPath query?

So, this is about content from a website which I want to get and put into my Google Sheets, but I'm having difficulty understanding the class of the content.
target link: https://www.cnbc.com/quotes/?symbol=XAU=
This number is what I want to get. Picture 1: The part which I want to scrape
And this is what the code looks like in inspector. Picture 2: The code shown in inspector
The target is inside a span element, but the span looks very complicated to me, so I tried to simplify it using this formula: =IMPORTXML("https://www.cnbc.com/quotes/?symbol=XAU=","//table[@class='quote-horizontal regular']//tr/td/span")
Picture 3: List is shown when putting the code
After some tries, I am able to get the right target, but it confuses me. I'm using this formula: =IMPORTXML("https://www.cnbc.com/quotes/?symbol=XAU=","//table[@class='quote-horizontal regular']//tr/td/span[@class='last original'][1]")
Picture 4: The right target is shown when the xpath query is more specified
As you can see in the 2nd picture, 'last original' is not really the full name of the class; when I put 'last original ng-binding' instead, it gave me an error saying the imported content is empty.
So, correct me if my code is wrong or just accidentally worked out somehow, because maybe there's another correct way?
How about this answer?
Modified formula 1:
When the class name is last original or last original ng-binding, how about the following XPath and formula?
=IMPORTXML(A1,"//span[contains(@class,'last original')][1]")
In this case, the URL of https://www.cnbc.com/quotes/?symbol=XAU= is put in the cell "A1".
In this case, //span[contains(@class,'last original')][1] is used as the XPath. The value of the span whose class name includes last original is retrieved, so both last original and last original ng-binding can be matched.
Modified formula 2:
As another XPath, how about the following XPath and formula?
=IMPORTXML(A1,"//meta[@itemprop='price']/@content")
It seems that the value is included in the metadata. So this sample retrieves the value from the metadata.
Reference:
IMPORTXML
To complete @Tanaike's answer, two alternatives:
=IMPORTXML(B2;"//span[@class='year high']")
"Year high" seems always equal to the current stock index value.
Or, with value retrieved from the script element :
=IMPORTXML(B2;"substring-before(substring-after(//script[contains(.,'modApi')],'""last\"":\""'),'\')")
Note: since I'm based in Europe, the formulas use ; as the argument separator; you may need to replace ; with , depending on your locale.

UFT/QTP - Extract Values From List Within WebEdit

I am attempting to capture all the list items in the WebList elements throughout the entire application; however, while the code below works on the WebLists, it does not work on this WebEdit.
When you click on the WebEdit, a long list of values appears (similar to a WebList), and as you type your value, the list becomes shorter. That is how the WebEdit was set up.
But now, how do I get the values in this list?
Here is the code I have for the WebLists:
Code
Set WebLink = Browser("browser").Page("page")
listval = WebLink.WebElement("xpath:= ((//*[contains(text(), 'Name')]))[1]/following::SELECT[1]").GetROProperty("all items")
listvalues = split(listval,";")
For j = LBound(listvalues,1) To UBound(listvalues,1)
    'Print listvalues(j)
    writeToTextFile(listvalues(j))
Next
ExitTest
The short answer is: it depends on the implementation.
The long one:
There is no universal widget for comboboxes (like there is for edit fields, lists/selects, radio buttons, etc.), so there is no universal solution, only guidelines.
You need to spy on the objects that appear in the combobox, see their XPath and/or other properties (the CSS class name they belong to, for example), and then execute a second query that selects all such items. Afterwards you have to extract the value of the selected elements, which might be as simple as getting the innertext property, or you may need to dig even deeper into the HTML hierarchy.
You need to pay careful attention to synchronisation (waiting until all search-result elements appear), filtering (using XPath, Description objects and the ChildObjects method on your Page object), and extraction (getting the property/element that contains the actual value of each WebElement). A sketch following these steps is shown below.
So again: these combobox implementations are not universal, so without seeing their code the best one can offer is general guidelines which should work in most situations. (You would need some familiarity with web programming and the UFT framework.)
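As a rough illustration of those guidelines, here is a minimal sketch. The WebEdit name companyED and the container id ajax_response are simply reused from the first question above as placeholders, and the suggestion items are assumed to be rendered as links; replace all of these with whatever the Object Spy shows for your application:
' Sketch only: the description values below are assumptions, not taken from your application
Set oPage = Browser("browser").Page("page")
oPage.WebEdit("name:=companyED").Set "SB"  ' typing narrows the suggestion list
oPage.Sync  ' crude synchronisation; a WaitProperty on the suggestion container is more robust
' Describe the suggestion items (assumed here to be Link objects under a known container)
Set oDesc = Description.Create
oDesc("micclass").Value = "Link"
Set oItems = oPage.WebElement("html id:=ajax_response").ChildObjects(oDesc)
For i = 0 To oItems.Count - 1
    writeToTextFile(oItems(i).GetROProperty("innertext"))
Next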

Scraping all data from Reddit searches

I am using PRAW to scrape data off of Reddit. I am using the .search method to search for very specific people. I can easily print the title of a submission if the keyword is in the title, but if the keyword is in the text of the submission, nothing pops up. Here is the code I have so far.
import praw
reddit = praw.Reddit(----------)
alls = reddit.subreddit("all")
for submission in alls.search("Yoa ming",sort = comment, limit = 5):
    print(submission.title)
When I run this code I get:
Yoa Ming next to Elephant!
Obama's Yoa Ming impression
i used to yoa ming... until i took an arrow to the knee
Could someone make a rage face out of our dearest Yoa Ming? I think it would compliment his first one so well!!!
If you search Yoa Ming on Reddit, there are posts that don't contain "Yoa Ming" in the title but do contain it in the text, and those are the posts I want.
Thanks.
You might need to update the version of PRAW you are using. Using v6.3.1 yields the expected outcome and includes submissions that have the keyword in the body and not the title.
Also, the sort=comment parameter should be sort='comments'. Using an invalid value for sort will not throw an error but it will fall back to the default value, which may be why you are seeing different search results between your script and the website.

Plone: generic search form for portal_catalog

I have a few Plone sites with Archetypes-based contents.
I noticed that the vanilla portal_catalog search form (manage_catalogView) allows filtering by language, portal_type (one!) and path only, since these are always available.
Thus, whenever I need a quick search by any other criteria, this involves programming, e.g. writing a throw-away Script (Python).
Is there some extension which provides a generic search form, offering all configured search indexes? E.g.:
Search for IDs
Search for Creator
Search for creation time (two fields, for min and max; one of them or both could be used)
review state (use the distinct values for selectable choices)
...
Perhaps I misunderstood your question, but you don't need external methods or a custom extension for a catalog search.
You can use a Python Script object and call it right from the URL.
Go to the ZMI root and add a Script (Python) using the selection field in the upper right corner. Give it an id and delete the example content.
Use the examples for queryCatalog or searchCatalog that you will find at http://docs.plone.org/develop/plone/searching_and_indexing/query.html
and call your script using the Test tab or from the URL.
Example:
In the ZMI root folder, I created a Python script called my_test:
catalog = context.portal_catalog
from DateTime import DateTime
# DateTime deltas are days as floating points
end = DateTime() + 0.1
start = DateTime() - 1
date_range_query = {'query': (start, end), 'range': 'min:max'}
query = {'id': ['id_1', 'id_2'],
         'Creator': ['creator1', 'creator2'],
         'created': date_range_query,
         'review_state': ['published', 'pending']}
results = catalog.queryCatalog(query)
for brain in results:
    print brain.pretty_title_or_id()
return printed
Finally, if you define parameters on your Python script, you can also pass them through the URL or the Test tab.
Hope it helps you.
