DCT: Read a list of DCRs - teamsite

I encountered an impediment while attempting to create a DCT that allows users to select from a list of existing DCRs. In this case, the DCT lets users create an "Editor's Pick" list from all available articles.
To accomplish this, I need to create a DCT that can parse all available DCRs for a certain node ("articleHeadline", for example) and return the DCR name as the option value and the headline as the option text.
I first thought to use FormsAPI, but unless I make all of the DCRs available via HTTP requests, I couldn't find a useful method.
The second attempt was to create a datasource as described in the manual "Teamsite 7.2: Data Capture Development" (pp. 153, 224-225) but I was stymied by my inability to find useful documentation on the subject.
Can anyone point me to documentation that will help me create a DCR-reading datasource, or to some other method that will help me accomplish this task?

You can read a list of DCR names in a particular folder and display them in a select box by using the browser tag in your Data Capture Template (DCT).
<browser initial-dir="templatedata/Student/data" ceiling-dir="templatedata/Student/data" required="f" readonly="f">
The browser tag above reads all the DCR names in the templatedata/Student/data folder and presents them in a drop-down box.

Related

How do I access h2o xgb model input features after saving a model to disk and reloading it?

I'm using h2o's xgboost implementation in Python. I've saved a model to disk and I'm trying to load it later for analysis and prediction. I want to access the list of input features, or, even better, the list of features the model actually used (excluding the features it decided not to use). The usual advice is to call the varimp function to get the variable importances; while this does remove features that aren't used by the model, it actually gives you the importance of the intermediate features created by one-hot encoding the categorical features, not the original categorical feature names.
I've searched for how to do this, and so far I've found the following, but no concrete way to do it:
Someone asking something very similar to this and being told the feature has been requested in Jira
Said Jira ticket, which has been marked resolved, but which I believe says this was implemented without being customer-visible.
A similar ticket requesting this feature (original categorical feature importance) for variable importance heatmaps but it is still open.
Someone else who found an unofficial way to access the columns with model._model_json['output']['names'], but that doesn't give the features that weren't used by the model. They were told to use a different method that doesn't work if you have saved the model to disk and reloaded it (which I am doing).
The only option I see is to take the varimp features, split on the period character to break up the OHE feature names, keep the first part of each split, and then deduplicate to get the unique original column names. But I'm hoping there's a better way to do this.
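For what it's worth, the workaround described above can be sketched in a few lines. This is a hedged sketch, not an official h2o API: it assumes OHE feature names follow the usual `column.level` pattern, and the example varimp names are made up for illustration.

```python
# Workaround sketch: recover original (pre-one-hot-encoding) column names
# from varimp feature names by splitting on the first '.' and deduplicating.
def original_columns(varimp_names):
    """Map OHE feature names like 'color.red' back to 'color',
    preserving first-seen order and dropping duplicates."""
    seen = []
    for name in varimp_names:
        base = name.split(".", 1)[0]  # 'color.red' -> 'color'; 'age' -> 'age'
        if base not in seen:
            seen.append(base)
    return seen

# Illustrative input; with a real reloaded model you might feed in
# something like: names = [row[0] for row in model.varimp()]
print(original_columns(["color.red", "color.blue", "age", "size.L"]))
```

Using a list rather than a set preserves the importance ordering from varimp, which a plain `set()` would lose.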

How to extract a list of URLs from specific domain?

I'm using Firefox 53 with Scrapbook X, and I want to save a lot of pages using the Save Multiple URLs feature. Before I do that, I want to extract a specific list of URLs without having to do so manually.
The site I'm looking at extracting data from is www.address-data.co.uk - namely this page.
What I want to do is extract only the URLs of the sub-pages within that page (all the sub-pages with EH postcodes), but not the privacy policy or contact-us pages.
Is there a way to do this online, or any tool for Mac OS X that can find all related URLs before I copy them into Scrapbook's Save Multiple URLs (where I save them in a subfolder of Scrapbook)?
I assume that EH45 is typical of those you want to extract from the page you mentioned. Like its siblings it's of the form https://address-data.co.uk/postcode-district-EH<postcode number>.
This means that you can make a complete list of the urls if you have a list of the numbers, or of the postcodes.
My main difficulty in answering is that I don't know what tools (especially programming tools) you might have at your disposal. I will assume only that you have, or can obtain, access to an editor that can do macros or edit columns. On Windows I would use Emerald (formerly known as Crimson).
Then copy the contents of the table in the EH page (not the table headings) and remove everything except the first column. Finally, prepend every item in the column with 'https://address-data.co.uk/postcode-district-'.
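If a scripting language is available, the same prepending step can be done without an editor at all. This is a minimal sketch assuming you have pasted the first table column (the district codes) into a Python list; the three codes shown are placeholders, and the URL pattern is the one observed above.

```python
# Build the full URL list by prepending the common prefix to each
# postcode district code copied from the page's table.
BASE = "https://address-data.co.uk/postcode-district-"

def urls_from_postcodes(postcodes):
    """Turn district codes (e.g. 'EH45') into full page URLs."""
    return [BASE + code for code in postcodes]

districts = ["EH1", "EH2", "EH45"]  # paste the real first column here
for url in urls_from_postcodes(districts):
    print(url)
```

The printed list can then be pasted straight into Scrapbook's Save Multiple URLs dialog.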
PS: This might also be a good question to put on SuperUser.

The system cannot find the file specified in uft 12.01

I was trying to use the Insight feature of UFT to avoid needing the build configuration of libraries from the development side for a Flex-based application. When I tried using the method "GetVisibleText", UFT 12.01 returned "The system cannot find the file specified". However, I was able to click different buttons on the same page (button X, button Y) at will, so UFT is distinguishing the objects. My purpose is to check the dynamic text objects on the page. Note: "GetROProperty" returned nothing; there is only one property, called "similarity", and it returns a constant value at all times regardless of the page.
UFT's Insight technology uses images to identify objects; the fact that it identifies button X does not mean that it has any intrinsic understanding that the button contains the text "X".
In Insight, the similarity property is used to decide how dissimilar a control may be from the captured image before it no longer constitutes a match. Similarity isn't a regular identification property like the ones we are used to, which is why you get the same value for each test object (it doesn't mean that the specific object supports this property).
Regarding GetVisibleText, UFT uses OCR in order to extract the text. You can specify which language you're expecting in the last parameter.
In any case, none of these things should fail due to not being able to find a file. I have two thoughts on the matter:
Are you using descriptive programming to identify the InsightObject (see the link further on)? If so, perhaps the image file you specified isn't found.
What OCR mechanism are you using? (Tools ⇒ Options ⇒ GUI Testing ⇒ Text Recognition.) Perhaps the mechanism you're using isn't installed correctly and this is causing the failure; try using a different OCR mechanism.
You can read a bit more about Insight here.

REXX /CLIST PANEL- finding code location

Is there a quick way to find the program behind a REXX/CLIST panel?
I know that I can check all the panel libraries one by one to find the panel,
but that takes a lot of time.
Thanks
The first step is to turn PANELID on with the ISPF command
panelid on
This will display the name of the panel on every ISPF panel shown.
Actually, you do not need to search each panel library; you can use an ISPF REXX program
to allocate a DataId to ISPPLIB and edit using the DataId, i.e.
/* rexx */
address ispexec
'LMINIT DATAID(didVar) DDNAME(ISPPLIB)'
'EDIT DATAID('didVar') MEMBER(panelname)'
'LMFREE DATAID('didVar')'
Note: If you make changes while editing, the changes are saved in the first library in the list. So if ISPPLIB is set up as
my.panels
test.panels
prod.panels
any changes will always be saved in my.panels.
Note: if you edit without specifying a member, the member list will include a dataset number indicating the concatenation level from which each panel will be picked up.
Note: There is almost certainly a limit on the number of datasets that can be accessed this way, so if there are a lot of datasets allocated to ISPPLIB, there could be issues.
Hopefully there will be a relationship between where the panel is stored and where the REXX/CLIST is stored, or a relationship between the panel name and the REXX/CLIST name; often they are nearly the same. Sometimes the panel might have a P at a certain character position where the REXX has an R.
If there is no relationship between the panel and the REXX/CLIST, you will have to search for it. You could set up a batch job to search for the panel name in all the REXX/CLIST libraries. It is a bit of a pain to set up, but it only has to be done once, and then you have it for future use.
If you want to get really smart, you could use the LM services to extract the list of REXX/CLIST libraries.
Building on some of what @Bruce Martin said, type TSO ISRDDN on any COMMAND ==> line in ISPF. Use the MEMBER command to search your SYSPROC and SYSEXEC concatenations. You can also use SRCHFOR when in a member list, looking for the panel name.

How do I take each line of a text file and insert them into a web form? Specifically, for testing domain name availability

I wrote a Ruby script that prepended "data" to every word in the English dictionary and then filtered out various strings using different parameters. Now I want to use a site like Namecheap or gandi.net to take each of these strings and insert them into the domain name availability checker, in order to determine which ones are available.
It is my understanding that this will involve making an HTTP POST request of some kind, as well as grabbing the element in question, but I don't really understand what I should read about in order to do this kind of thing.
I imagine that after a few requests I will be limited, but as a learning exercise I am still curious as to how I would go about doing this.
I inspected the element (on namecheap) to see what the tag looked like, to find any uniquely identifiable class/id names that I could use to grab that specific part of the source, and found that inside a fieldset tag, there was a line of HTML that I can't seem to paste here, so here is a picture:
Thanks in advance for any guidance in helping me learn about web scripting!
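One hedged starting point, rather than scraping a registrar's web form at all, is to query WHOIS directly over TCP port 43 (the WHOIS protocol). The sketch below is in Python rather than Ruby for illustration; the file name, the "No match for" marker, and the word list are assumptions, though whois.verisign-grs.com is the real WHOIS server for .com.

```python
# Hypothetical sketch: read one candidate name per line from a text file
# and check availability via a raw WHOIS query instead of a web form.
import socket

def whois_query(domain, server="whois.verisign-grs.com", timeout=10):
    """Send a raw WHOIS query (port 43) and return the text response."""
    with socket.create_connection((server, 43), timeout=timeout) as s:
        s.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def candidates(path):
    """Yield one candidate .com domain per non-empty line of the file."""
    with open(path) as f:
        for line in f:
            word = line.strip()
            if word:
                yield word + ".com"

# Example usage (requires network access, and WHOIS servers rate-limit):
# for domain in candidates("datawords.txt"):
#     if "No match for" in whois_query(domain):
#         print(domain, "appears to be available")
```

WHOIS servers also throttle repeated queries, so the rate-limiting concern above applies here too; adding a sleep between queries is wise.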
