I've written an AppleScript for checking a UI element (table) of a specific application (Avid Pro Tools). The table consists of a given number of rows. Each row has an attribute for selected (boolean) and index (integer). The script returns a list of the index numbers of every row whose "selected" attribute is true. The script works, but it is incredibly slow: it takes a few seconds to return the values. Is there any way to speed this up?
tell application "System Events"
return value of attribute "AXIndex" of (rows whose value of attribute "AXSelected" is true) of table "Track List" of (windows whose name contains "Mix: ") of application process "Pro Tools" of application "System Events"
end tell
This is more an extended comment than a bona fide answer, but here is what I would consider in a similar situation:
What are you doing with the AXIndex values later on in the script, i.e. do you need the index numbers of the rows, or can you just store the references to the row objects? Retrieving a list of index values suggests you'll be iterating through those values at some point, so I wonder if you're slowing your script down by accessing an attribute you may not need, then iterating through a list you could incorporate into the filter.
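For example, here's a minimal sketch of that idea (untested against Pro Tools; window 1 stands in for whichever Mix window you're targeting): keep the row references themselves and work with them directly, rather than extracting AXIndex up front.
tell application "System Events"
	tell application process "Pro Tools"
		-- keep references to the selected row objects instead of their AXIndex values
		set selectedRows to (rows of table "Track List" of window 1) ¬
			whose value of attribute "AXSelected" is true
		repeat with aRow in selectedRows
			-- work with aRow directly; no separate index list to walk through later
		end repeat
	end tell
end tell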
If there's an attribute named "AXSelected", there's a good chance there's a property named selected, which usually contains the same value and would be faster to retrieve:
... of (rows whose selected = true) of ...
Is the script actually being slow, or is it just having to perform a demanding set of complex operations? There are two whose filters, and one of these is performing a contains comparison (you can think of this as asking it to iterate through the characters in a string to find a matching subset, versus an equality test that would only require a glance to know when something doesn't match). See what happens when you don't filter the windows list. If the purpose of the filter is to isolate windows that have a table "Track List" element, you might not need to: if you're unlucky, removing the filter will create an error for the first window where System Events can't find that particular table (and hence the attributes you're retrieving); but quite often, it just inserts some missing value items into your final result, which would be a small trade-off for a big increase in speed.
Finally, your actual construction of the compound filters is syntactically flawed, and I'm surprised it actually runs and returns meaningful results. The sub-clause that reads:
(rows whose value of attribute "AXSelected" is true) of table "Track List"
doesn't actually make sense, because there's no indication, at the point where you define the filter, of what rows refers to or what other element those rows belong to. It seems obvious that you've stated they belong to the table object, but the statement actually references some objects of class row that could exist anywhere, and the ones that are selected are the ones that belong to table "Track List". As an analogy, it's a bit like a split infinitive: English has somehow acclimatised to accepting these forms because we infer the underlying meaning without much difficulty, but they are logically corrupt, and in some other languages the equivalent would leave a listener unable to understand what is being said, or assuming an incorrect meaning.
So I wonder if AppleScript might be doing that too, and if so, is AppleScript making incorrect assumptions and returning inaccurate results, or is it making correct assumptions but being slowed down by having to untangle the syntax?
Here's the correct form of the expression, including removal of the superfluous double-referencing of application "System Events":
tell application "System Events" to return ¬
	the value of attribute "AXIndex" of ¬
		(rows of table "Track List" of ¬
			(windows of application process "Pro Tools" whose name contains "Mix:") ¬
			whose value of attribute "AXSelected" is true)
Hopefully, the way I've split the clauses over multiple lines helps to illustrate more clearly why this makes syntactical sense in a way the original does not. It was also while splitting it up that I noticed the same ambiguous referencing occurring in the windows filter clause.
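If the selected property does exist and dropping the window-name filter turns out to be acceptable for your set of windows, a combined, hedged sketch (untested) might look like this:
tell application "System Events"
	-- filter on the plain `selected` property instead of attribute "AXSelected",
	-- and skip the window-name filter entirely
	return value of attribute "AXIndex" of ¬
		((rows of table "Track List" of windows of application process "Pro Tools") ¬
			whose selected is true)
end tell
If a window without a "Track List" table makes this error out rather than return missing value, reinstate the window filter from the version above.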
Conclusion
I can't promise any of these suggestions will result in quicker execution times. This is more of a walk-through of the thought process that helps me improve my scripts: considering all the "what ifs?" and asking myself the seemingly pointless questions every time.
Feel free to provide a bit more context and insight into what the rest of your script is doing overall, and perhaps it'll reveal a different way to get the same result at the end but in less time.
Related
I'm trying to get text from a text field with Get Text, but in some cases this field is optional and the robot crashes because there is nothing in the field.
You have multiple options. It's hard to say which one fits you best, so here is a pool of possible solutions:
- When not using the Modern Design, you can easily use the Element exists activity, which is self-explanatory.
- If you use the Modern Design and miss old activities like Element exists, go to the filter dropdown and select Show Classic; this way you are now also able to choose Element exists.
- You could also wrap such failing activities in a Try Catch, so your process won't fail, but a Try Catch should always be the last way out.
- When using the Modern Design, you can try Find Element; if the returned object is empty, you know the element was not found. Make sure to set a proper Timeout here, otherwise you will wait for 30 seconds.
- In your case it could be better to use an Image exists or Find Image Matches: since you are looking for text in a text field, just invert the check and look for an empty text field, and if you get no matches, all is fine.
But to be honest, I would go for the Element exists. Give this a try, but be aware that in the future this activity might be replaced by something else and your process will need a little bit of rework.
I have an ML database with a few tens of thousands of documents in it, and a query that returns some simple calculated values for either all or a subset of those documents. The document count has grown to the point that the "all documents" option no longer reliably runs without timing out, and this is only going to get worse as the document count grows. The obvious solution is for the client application to use the other form and paginate the results. It's an offline batch process, so overall speed isn't an issue; we'd just like to keep individual requests sane.
The paged version of the query is very simple:
declare namespace ns = "http://some.namespace/here";
declare variable $fromCount external;
declare variable $toCount external;
<response> {
  for $doc in fn:doc()/ns:entity[$fromCount to $toCount]
  return
    <doc> omitted for brevity </doc>
} </response>
The problem is that the query gets slower the further through the document set the requested page is; presumably because it's having to load every document in order, check whether it's the right type, and iterate until it's found $fromCount ns:entity elements before it even begins building the response.
One wrinkle is that there are other types of document in the database, so just using fn:doc isn't a realistic option (although they are in different directories, so xdmp:directory() might be an option; something I'll look into).
There also isn't currently an index on the ns:entity element; would that help? It's always the root node of a document, and the documents are quite large, so I'm concerned about the size of the index. Also, (the slow part of) this query isn't interested in the value of the element, just that it exists.
I thought about using the search: API for its built-in paging, but it seems overkill for a query that is intended to match all documents; surely it's possible to manually construct the query that search:search() would build internally.
It seems like what I really need is an efficient list of all root nodes of a certain type in the database. Does MarkLogic maintain such a thing? If not, would an index solve the problem?
Edit: It turns out that the answer in my case is use the xdmp:directory() option, since ML apparently stores a fast, in-memory list of all documents. Still, if there is a more general solution to the problem, it's bound to be of interest, so I'll leave the question here.
Your analysis is correct:
presumably because it's having to load every document in order, check whether it's the right type, and iterate until it's found $fromCount ns:entity elements before it even begins building the response
The usual answer is cts:search plus the unfiltered option. You found that xdmp:directory was faster, but you should still be able to measure pagination times as O(n) even if the scale is smaller. See http://docs.marklogic.com/guide/performance/unfiltered#chapter - basically the database is guarding against returning false positives, unless you tell it not to.
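As a rough sketch (untested against your data; it reuses $fromCount and $toCount from the question), the paged query might become something like this:
xquery version "1.0-ml";
declare namespace ns = "http://some.namespace/here";
declare variable $fromCount external;
declare variable $toCount external;

<response> {
  (: match-everything query, run unfiltered so the server skips the false-positive check :)
  for $doc in cts:search(/ns:entity, cts:and-query(()), "unfiltered")[$fromCount to $toCount]
  return
    <doc> omitted for brevity </doc>
} </response>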
Another approach might be to use cts:uris and its limit option, but this might require managing pagination state in terms of start values rather than page counts. For example, if the last item on page 1 was "cat", you would use "cat" as the start value when calling cts:uris for the next page. You could still use pagination start-stop values, too. That would still be O(n), but at a much smaller scale.
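A hedged sketch of that cts:uris approach (it assumes the URI lexicon is enabled; $lastUri and $pageSize are hypothetical external variables):
xquery version "1.0-ml";
declare namespace ns = "http://some.namespace/here";
declare variable $lastUri external;  (: empty string for the first page :)
declare variable $pageSize external;

<response> {
  for $uri in cts:uris($lastUri, fn:concat("limit=", $pageSize), cts:and-query(()))
  return
    (: build each <doc> from fn:doc($uri)/ns:entity as before :)
    <doc> omitted for brevity </doc>
} </response>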
I've read somewhere (can't remember/find where) an article about web usability describing when to use drop downs and when to use autocomplete fields.
Basically, the article says that the human brain cannot store more than the last five options presented to choose from.
For example, in a profile form that asks for your current occupation, the system gives you a bunch of options; by the time you read the sixth option, your brain can't remember the first one anymore. This is a great place to use an autocomplete field, where the user types something he thinks might be his occupation and then selects the best fit from the few options filtered.
I would like to hear your opinion about this subject.
When should I use a drop-down and when should I use an autocomplete field?
For a limited list, don't use an autocomplete edit box or combobox, but use a listbox where all values are visible all at once. For limited lists, especially with static content of up to about 8 items, this takes up real estate, but presents the user with a better immediate overview.
For fewer than 5 items, a radio group or checkbox group (for multiple selections) may also be better.
For lists whose content is dynamic, like a list of contacts, a (scrolling) listbox or combobox is appropriate, because you never know how many items will be in the list. To keep it manageable, you will need to allow for some kind of filtering and/or autocomplete.
Autocomplete usually suffers from the fact that what the user types needs to match a string from the beginning. I hate those, except for when they are used to complete a value based on what I typed in that (type of) field before, e.g. what browsers nowadays offer when filling out online forms.
Allowing a user to start typing in a combobox usually suffers the same drawback. But admittedly it doesn't need to, if the filtering is based on "like %abc%" instead of "starts with abc".
When dealing with lists that can have many similar items, I really like the way GMail's "To" field handles it. You start typing any part of someone's name or e-mail address and GMail will drop down a list presenting all the contacts whose name or e-mail address contains the characters you have typed so far anywhere within them. Using the up and down keys changes the selection in the dropped down list (without affecting what you have typed) and pressing enter adds the currently selected item to the "To" field. By far the best user experience I have had so far when having to select something from a list.
Haven't found any components yet that can do this, but it's not too hard to "fake" it by combining an edit box with a listbox that drops down when you start typing and has its contents filtered based on what has been typed so far.
I would use 2 criteria:
1) How long the list is: if the list contains 5 elements, you'd better use a combobox, as it will be easier for the user (better UX).
2) If the list is long, how easy it is for the user to remember the prefix of what he/she is looking for; if it's not easy, using autocomplete is irrelevant.
I'd say it depends on the diversity in the list, and the familiarity with the list items.
If for example the list contained over 5 car brands (list items I'm familiar with), no problem.
If on the other hand the list has over 5 last names, it could take me some more time before I'd make a selection.
You should probably just try out both options and trust your gut on which you find easier to use.
Here's the opposite approach:
The worst time to use an auto-complete box is when you have a finite and relatively small set of options, and the user doesn't know the range of valid options. For example, if you're selling used cars and you have a mixed bag of brands, simply listing the brands in a combobox is more efficient and easier to browse than an auto-complete method.
Being able to remember the last 5 options is most likely irrelevant unless you have a giant list of options and are requiring the user to select the most relevant item.
An alternative approach is to use both. I believe Dojo has a widget that acts as both a combobox and an auto-complete field. You can either choose to start typing and it will narrow down the possible options, or you can interact with it with your mouse and browse it like a combobox.
I usually look at how big the list is going to be. If there are going to be more than 15 options then it just seems easier to find if they can just start typing.
The other circumstance for me is when there is an "other" option and the user can free-type it. This essentially eliminates the need for two controls, since you can combine them into one.
The main difference has nothing to do with usability but more to do with what defines the acceptable inputs.
You normally use a ComboBox when you have a predefined list of acceptable inputs (e.g. an Enum or list of occupations).
An auto-complete field is best used when there are many known inputs BUT custom input is accepted as well. The user will become frustrated if they type "Programmer" in as their occupation but it wasn't one of the pre-defined, acceptable inputs, and they are given a message that their input is not valid.
Keep in mind that ComboBoxes do allow you to type in them to select the first matching option. Some types of ComboBoxes (depending on the UI framework you are using) even allow free-form text fields at the top or side of the field to search or add to the list.
Of course, the best way to determine what YOUR user will prefer is testing: A/B, field, user, etc.
Hope this helps you solve your dilemma!
I've long thought (but never practiced for some reason) that a dropdown menu that is dynamically generated and only contains one item should automatically select that item. This would be as opposed to the typical approach I've observed, where a blank entry is added at the top and you still have to interact with the menu to make the single available selection.
An example is when I login to my online banking and select "View Paper Statements". I've only got one account so the next step in the process is to present me with a dropdown where I have to select that single account to proceed. In this case, by implementing the solution above, it would take one less click to select the account and proceed to viewing it. Even better in this case would be to eliminate the dropdown menu step altogether and go right to the statement.
Can you think of a case where auto-select of a single item would produce undesirable results?
Can you think of a case where auto-select of a single item would produce undesirable results?
Yes - any case where the user has the option not to select any option.
In your bank account example, pre-selecting the only value makes sense. But if you have, e.g., some kind of form where users can provide voluntary information, they will need the possibility to leave that field blank; otherwise they might be forced to give an incorrect answer.
So it really should depend on the nature of the data in that dropdown whether pre-selection is a good idea or not.
I completely agree, in the case you describe. But there are times where you want to force the user to make a selection actively -- e.g., when the value of the field is somehow optional or additive.
In your case, without selecting an account, there's probably no useful way to proceed, so automatic selection does make sense. But for example, an application I'm working on allows the user to specify a number of descriptive fields (movie metadata, basically -- title, release year, genre, etc.), many of which are optional, and some of them are represented by drop-down menus. Allowing the user to leave the default selection blank lets him tell us, effectively, "I don't want to use this field," so we leave it blank, and the data remains clean.
Just one example, although you're right -- in your case, I can see how that'd be annoying. :)
If there is truly only one possibility you shouldn't ask a user to decide between Option A. (Bad grammar to illustrate the point)
If the field can be left blank, it's not an option with a single answer. Instead you have a choice between Option A "meaningful data" and Option B "".
If the list has a blank option but the form does not allow that field to be blank, it's a choice between Option A. (Bad grammar to illustrate the point)
Sometimes you want the user to explicitly select an option, even when it's the only option. If the option is selected automatically, the user may never even realise it, even though they may not be happy with the result.
For example, I'm reviewing my savings account on my internet bank site. Then I go to set up a payment. As it happens, I'm not allowed to make payments from my savings account, so the payment form automatically selects my only other account. If I don't notice this then I will end up making the payment from my other account when I was expecting to use my savings account. If I had known, I wouldn't have made the payment at all.
Perhaps this is slightly contrived. But unless you can be certain that the user will be happy with the (only) choice, you should ensure that they choose it explicitly.
I agree. If there's only one item in the dropdown and it's required that the user select something, then just default to the single item. I can't think of any negative effects of this (but I'm certainly not a UI expert)
What I like to do in this instance depends on a few factors.
If the dropdown is a required field and ends up with only one item due to dynamic generation, I try to avoid displaying it as a dropdown entirely. I end up showing it as an uneditable text field instead (or not displaying it at all if it isn't necessary). Why make it even look like it's a choice when it isn't?
If the dropdown is not required, then it makes total sense to display a blank choice in addition to the single value.
If it's required and there's a single record/value, I'd try to change the control to a display field rather than a drop-down.
I have an AppleScript program which creates XML tags and elements within an Adobe InDesign document. The data is in tables, and tagging each cell takes 0.5 seconds. The entire script takes several hours to complete.
I can post the inner loop code, but I'm not sure if SO is supposed to be generic or specific. I'll let the mob decide.
[edit]
The code builds a list (prior to this loop) which contains one item per row in the table. There is also a list containing one string for each column in the table. For each cell, the program creates an XML element and an XML tag by concatenating the items in the [row]/[column] positions of the two lists. It also associates the text in that cell to the newly-created element.
I'm completely new to AppleScript so some of this code is crudely modified from Adobe's samples. If the code is atrocious I won't be offended.
Here's the code:
repeat with columnNumber from COL_START to COL_END
	-- select the text of the current cell (cells are addressed here as "column:row")
	select text of cell ((columnNumber as string) & ":" & (rowNumber as string)) of ThisTable
	tell activeDocument
		-- build the tag name from the row and column label lists
		set thisXmlTag to make XML tag with properties {name:item rowNumber of symbolList & "_" & item columnNumber of my histLabelList}
		tell rootXmlElement
			set thisXmlElement to make XML element with properties {markup tag:thisXmlTag}
		end tell
		-- associate the selected cell text with the new element
		set contents of thisXmlElement to (selection as string)
	end tell
end repeat
EDIT: I've rephrased the question to better reflect the correct answer.
The problem is almost certainly the select. Is there any way you could extract all the text at once and then iterate over internal variables?
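For example, a hedged sketch of that idea (untested; it assumes the cell text is available through the cells' contents property, and that your existing tag-building code moves inside the loop):
-- pull all of the cell text out of InDesign in one call...
tell application "Adobe InDesign" -- use the name of your InDesign version here
	set cellTexts to contents of every cell of ThisTable
end tell
-- ...then iterate over the in-memory list instead of selecting each cell
repeat with i from 1 to count of cellTexts
	set thisText to item i of cellTexts
	-- create the XML tag and element for this cell as before,
	-- using thisText instead of (selection as string)
end repeat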
I figured this one out.
The document contains a bunch of data tables. In all, there are about 7,000 data points that need to be exported. I was creating one root element with 7,000 children.
Don't do that. Adding each child to the root element got slower and slower until at about 5,000 children AppleScript timed out and the program aborted.
The solution was to make my code more brittle by creating ~480 children off the root, with each child having about 16 grandchildren. Same number of nodes, but the code now runs fast enough. (It still takes about 40 minutes to process the document, but that's infinitely less time than infinity.)
Incidentally, the original 7,000 children plan wasn't as stupid or as lazy as it appears. The new solution is forcing me to link the two tables together using data in the tables that I don't control. The program will now break if there's so much as a space where there shouldn't be one. (But it works.)
I can post the inner loop code, but I'm not sure if SO is supposed to be generic or specific. I'll let the mob decide.
The code you post as an example can be as specific as you (or your boss) is comfortable with - more often than not, it's easier to help you with more specific details.
If the inner loop code is a reasonable length, I don't see any reason you can't post it. I think Stack Overflow is intended to encompass both general and specific questions.
Are you using InDesign or InDesign Server? How many pages is your document (or what other information can you tell us about your document/ID setup)?
I do a lot of InDesign Server development. You could be seeing slow-downs for a couple of reasons that aren't necessarily code related.
Right now, I'm generating 100-300 page documents almost completely from script/xml in about 100 seconds (you may be doing something much larger).