I want to copy the current selection, even if it's in another application like Mail, when the user hits a specified hotkey, the way Cultured Code does it in Things when you create a new task. I have the hotkey working, and I know how to put data on the pasteboard and read it back. But I have no idea how to get the current selection.
Anyone? Thanks!
You do this with a Service Provider. See the Services Implementation Guide. For what you're talking about, it should work very well. You don't need to do your own hotkey code; it'll do that for you. You don't even have to be running; it'll launch you.
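For reference, the Services route has two pieces: an NSServices entry in Info.plist (NSMessage naming the method, NSSendTypes listing the pasteboard types you accept, an NSMenuItem title, and optionally NSKeyEquivalent for the shortcut) and a provider object registered at launch. Here's a minimal sketch in Swift; the method name `newTaskFromSelection` and the provider class are made up for the example and must match whatever you declare in the plist:

```swift
import Cocoa

// Hypothetical provider object; "newTaskFromSelection" must match the
// NSMessage value declared under NSServices in Info.plist.
final class ServiceProvider: NSObject {
    // Invoked by the Services system with the other app's selection
    // already placed on `pboard` for us.
    @objc func newTaskFromSelection(_ pboard: NSPasteboard,
                                    userData: String,
                                    error: AutoreleasingUnsafeMutablePointer<NSString>) {
        guard let selection = pboard.string(forType: .string) else {
            error.pointee = "No text selection was supplied." as NSString
            return
        }
        // Do whatever "create a new task" means in your app.
        print("Captured selection: \(selection)")
    }
}

// In applicationDidFinishLaunching (or similar):
//   NSApp.servicesProvider = ServiceProvider()
//   NSUpdateDynamicServices()   // handy while developing, so the Services menu refreshes
```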
To @Josh Caswell's point about OmniFocus: they're doing something fancier than just grabbing the currently selected text. They also copy the message itself into the inbox item as an attachment. That's what the plugin is assisting with.
This is a job for AppleScript, which is why applications that do clipping like this only support clipping from certain other applications -- those other applications have to support AppleScript.
You'll have to take a look at the Mail AS dictionary and figure out how to get the selected text, and I believe that unfortunately you'll have to do the same with each application from which you want to clip.
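If you end up driving Mail with AppleScript from your Cocoa app, NSAppleScript is one way to run it. A rough sketch follows -- note that Mail's `selection` property returns the selected messages rather than the highlighted text, so this just grabs the content of the first selected message, and you'd adapt the embedded script for each application you support:

```swift
import Foundation

// Rough sketch: fetch the content of the first message selected in Mail.
// Mail's `selection` returns messages, not the highlighted text, so you
// may need a different script for each application you clip from.
let source = """
tell application "Mail"
    set theSelection to selection
    if theSelection is {} then return ""
    return content of item 1 of theSelection
end tell
"""

var errorInfo: NSDictionary?
if let script = NSAppleScript(source: source) {
    let result = script.executeAndReturnError(&errorInfo)
    if let text = result.stringValue {
        print("Clipped: \(text)")
    } else if let errorInfo = errorInfo {
        print("AppleScript error: \(errorInfo)")
    }
}
```

On current macOS you'd also need the Apple Events entitlement (com.apple.security.automation.apple-events, if sandboxed) and an NSAppleEventsUsageDescription string before the system will let you drive Mail this way.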
Another possibility: it sounds like OmniFocus uses a Mail plugin for this functionality -- from http://forums.omnigroup.com/showthread.php?t=13906:
Starting in 10.6, Mail.app will refuse to use plugins... install the Clip-o-tron from that updated release... "OmniMailMessageEnabler...".
Related
I have an App Store app in which the free version is not scriptable, but the premium version is. AppleScript support is one of the key differences. I know the App Store reviewers are pushing more and more towards free + in-app-purchase, which will help declutter the App Store. Fine, I'll play ball.
Now I need to do something programmatically that I've always just worked into the build.
Is there a way to disable AppleScript if OSAScriptingDefinition and NSAppleScriptEnabled are set in my Info.plist? This would still allow people to open the dictionary, and maybe they'd like what they see and consider activating the upgrade.
Or is there a way to enable AppleScript after the fact? Obviously, with code signing, I can't modify Info.plist or add my SDEF to the bundle later. But maybe if the SDEF were in a non-standard location, I could load it from the bundle and tell the system about it manually.
Does the SDEF have to live in my bundle at all? If not, I'm not sure how to point to the user's Application Support directory from inside the sandbox. I've also considered using xinclude to pull in an SDEF I could install after the fact, but again, the SDEF and plist want literal paths, not something I can compute at runtime.
I've tried a couple of things such as attempting to set NSScriptSuiteRegistry's singleton to nil, to no effect.
Because OSAScriptingDefinition and NSAppleScriptEnabled turn on the "automatic" support, surely there must be a manual way to get the same effect without the plist entries, hopefully with a public API.
Any ideas here? Thanks!
A few points, for orientation:
All AppleScript commands are subclasses of NSScriptCommand
All AppleScript objects are represented by subclasses of NSScriptObjectSpecifier
The scriptability of an app is controlled by its shared instance of NSScriptSuiteRegistry
This gives you a few options. You could try, for instance, overriding NSScriptSuiteRegistry's +setSharedScriptSuiteRegistry: and setting the registry to nil for the free version. You could also write a category on NSScriptCommand and/or NSScriptObjectSpecifier that does a version check. That would give you fine-grained control: you could call it from any method that handles a script command or returns a script object, and decide on the fly which you want to allow and which you want to block; maybe even pop up a "Pay for Full AppleScript Access" dialog.
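A subclass variant of that fine-grained idea, as a sketch (the `hasPurchasedScripting` flag is made up for the example; in practice you'd check your receipt or in-app-purchase state): point the commands in your sdef at an NSScriptCommand subclass that only forwards to Cocoa Scripting once the upgrade has been bought.

```swift
import Foundation

// Hypothetical flag; in a real app this would come from your receipt /
// in-app-purchase state.
let hasPurchasedScripting = false

// Commands declared in your .sdef can name this class; it only forwards to
// the normal Cocoa Scripting machinery when the upgrade has been bought.
class GatedScriptCommand: NSScriptCommand {
    override func performDefaultImplementation() -> Any? {
        guard hasPurchasedScripting else {
            scriptErrorNumber = -1708 // errAEEventNotHandled
            scriptErrorString = "AppleScript support requires the full version."
            return nil
        }
        return super.performDefaultImplementation()
    }
}
```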
CocoaScripting is a black box and not very adaptable. The simplest (if kludgy) solution would be to wait until CS has installed its Apple event handlers, then call -[NSAppleEventManager setEventHandler:andSelector:forEventClass:andEventID:] to replace those with a dummy handler that always sends back a "requires in-app purchase" error. (Don't replace the standard open, quit, etc. handlers, obviously.)
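A sketch of that kludge in Swift. The 'MyAp'/'DoIt' four-char codes below are placeholders for whatever suite and events your sdef actually defines; 'errn'/'errs' are the standard keywords for returning an error in the reply event.

```swift
import Foundation

// Standard keywords for returning an error in the reply event ('errn'/'errs').
let keyErrNumber: AEKeyword = 0x6572726E // 'errn'
let keyErrString: AEKeyword = 0x65727273 // 'errs'

final class ScriptingBlocker: NSObject {
    // Replace the handler Cocoa Scripting installed for one of *your* suite's
    // events; leave the standard open/quit/etc. handlers alone.
    func block(eventClass: AEEventClass, eventID: AEEventID) {
        NSAppleEventManager.shared().setEventHandler(
            self,
            andSelector: #selector(refuse(_:withReplyEvent:)),
            forEventClass: eventClass,
            andEventID: eventID)
    }

    @objc func refuse(_ event: NSAppleEventDescriptor,
                      withReplyEvent reply: NSAppleEventDescriptor) {
        reply.setParam(NSAppleEventDescriptor(int32: -1708), forKeyword: keyErrNumber)
        reply.setParam(NSAppleEventDescriptor(string: "AppleScript support requires an in-app purchase."),
                       forKeyword: keyErrString)
    }
}

// Usage, after Cocoa Scripting has registered its own handlers. Keep a strong
// reference to the blocker somewhere long-lived (e.g. your app delegate), with
// made-up codes 'MyAp' / 'DoIt':
//   let blocker = ScriptingBlocker()
//   blocker.block(eventClass: 0x4D794170, eventID: 0x446F4974)
```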
I'm doing an automation script for installation wizards using AutoIt. I'm trying to handle window changes in some way.
Can someone explain how these GUIs work?
When I click the Next button, it looks as though the components in the GUI are being changed. Is this the case? Or is a new window created and the old one destroyed?
I've noticed that the process ID is the same for all windows.
Surely there is some way to tell which "state" or step the GUI is in?
By the way, all the windows have the same title.
Thanks
/Anders
This will be dependent on the program you are automating.
The easiest approach would be to look at what changes in the GUI between stages; likely candidates are a label that gives instructions for that step, or a button whose text changes (e.g. if the button says "Finish", you know you're at the end).
Most installer programs have child windows for grouping the controls of each stage. These are typically implemented as dialog resources (as you can see by opening the installer with something like ResHacker). So although the window remains the same, the panels are being created/destroyed as appropriate. This is a very neat method of doing it, for the obvious reason that you don't have to write code to create/destroy a lot of controls. Resource-created dialogs don't have nice class names the way windows sometimes do, though, so this may not be a reliable way to check the state.
I'm trying to build a terminal application that, when started, will take the user away from their prompt and present them with a screen containing an interactive menu. I would like the user to be able to interact with it in the following way:
They will start the application by running my_app from the terminal. This will start the application and present them with the root menu.
They will use the cursor keys to navigate around the menu and use the [ENTER] key to make a selection.
When they make a selection, they will be presented with another screen/menu where they will do some work. When they have finished this work, they will press a key that takes them back to the root menu.
The key thing I'm after is for it to not be a scrolling view that just adds more information to the end. I'd like it to have distinct, encapsulated views with a navigation hierarchy. My problem is that I don't know how to produce such a view and present it to the user, and then dismiss it again once they're done. If someone could give me some kind of design pattern for this kind of application, I'll be able to take it from there.
FWIW, I'm using Ruby and would like the app to be cross-platform. If that's too much to ask, then Windows will suffice.
I'm a Linux guy, so I'll suggest the ncurses library. There's an ongoing effort to port it to Windows as well.
AFAIK it's going pretty well; please check this question.
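Whichever library you pick, the navigation-hierarchy part is usually just a stack of "screens": push one when the user makes a selection, pop it when they're done, and redraw the whole terminal on every transition. Here's a minimal, library-free sketch of that pattern (in Swift, using number keys and ANSI clear-screen codes purely to keep it short; curses/ncurses would give you the cursor-key handling, and the same structure ports straight to Ruby):

```swift
import Foundation

// One "view" in the hierarchy: it draws itself and turns user input
// into a navigation action.
protocol Screen {
    var title: String { get }
    func draw()
    func handle(_ input: String) -> Navigation
}

enum Navigation {
    case stay
    case push(Screen)
    case pop
    case quit
}

struct WorkScreen: Screen {
    let title = "Do some work"
    func draw() { print("(work happens here)\n\n[b] back to root menu") }
    func handle(_ input: String) -> Navigation {
        return input == "b" ? .pop : .stay
    }
}

struct RootMenu: Screen {
    let title = "Root menu"
    func draw() { print("[1] Do some work\n[q] Quit") }
    func handle(_ input: String) -> Navigation {
        switch input {
        case "1": return .push(WorkScreen())
        case "q": return .quit
        default:  return .stay
        }
    }
}

// The navigation stack: clear the terminal and redraw the top screen on every
// transition, so each view appears as its own distinct page.
var stack: [Screen] = [RootMenu()]
while let current = stack.last {
    print("\u{1B}[2J\u{1B}[H", terminator: "")   // ANSI: clear screen, home cursor
    print("== \(current.title) ==\n")
    current.draw()
    print("\n> ", terminator: "")
    guard let input = readLine() else { break }
    switch current.handle(input) {
    case .stay: continue
    case .push(let next): stack.append(next)
    case .pop: stack.removeLast()
    case .quit: stack.removeAll()
    }
}
```

The only important part is the `stack` loop: each screen is self-contained, and "back" is just a pop, which gives you the distinct, encapsulated views the question asks for.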
I wanted a GUI front end for a script that accepts numerous command-line options, most of which are UNIX paths. So I thought that rather than typing them in (even with auto-completion) every time, I'd create a GUI front end containing text boxes with buttons beside them, which, when clicked, invoke the file browser dialog. Later, I thought I'd extend this to other scripts, which would surely require a different set of GUI elements. This made me wonder whether there's an existing app that would let me create a GUI dialog by parsing some kind of description of the items the window should contain.
I know of programs like Zenity, but I don't think it gives me what I want. For example, if I were to use it for the first script, it would end up flashing a sequence of windows in succession rather than getting everything done in a single window.
So, basically, I'm looking for a cross-platform program that lets me create a window from a text description, probably XML or the like. Please suggest something.
Thanks
Jeenu
Mozilla's XUL is a cross-platform application framework. You could write the app as a Firefox plugin or as a standalone XUL application.
Mono and MonoDevelop could work for this. Or even something super simple like Shoes.
In PowerBuilder's IDE, the code autocomplete feature uses the clipboard to communicate the completed text to the code window. By doing so, it overwrites whatever was stored on the clipboard before. So, if you had the winning numbers of the next lottery stored on your clipboard, and you used autocomplete to turn m_goodfor into m_goodfornothing, you've just lost your only chance of ever getting rich, and you're left with nothing on your clipboard.
Features like that are the reason I hate software. It looks like it was implemented by some intern that no one was looking after. However, there's also a chance I got all worked up for nothing, and making such use of the clipboard is absolutely legit. So, can an app use the clipboard for its own purposes? Who is considered the owner of the clipboard?
(Bonus votes to whoever puts himself in the place of the feature's programmer and provides some reasoning for this being done on purpose, assuming the users would actually benefit from it.)
You are probably right about the intern reasoning. There is absolutely no reason, other than pure laziness, why an application would use the clipboard to communicate information. Even between processes, there are other, better ways of communicating.
Other than letting the user paste information into another application, there is no reason to use the clipboard.
The programmer did it because it was easy, and put his needs above those of the end user. There are many programs that do this, particularly add-ins for Outlook, VB, etc., which copy/paste their buttons onto the toolbar. Any user who runs a clipboard extender (like my own ClipMate) will absolutely hate this behavior (and you'll be "busted" right away).
Here is my favorite quote on the subject:
“Programs should not transfer data into or out of the clipboard without an explicit instruction from the user.”
— Charles Petzold, Programming Windows 3.1, Microsoft Press, 1992
An app should never change anything on the clipboard without the user initiating that action. My .02 anyway.
Bonus votes to whoever puts himself in place of the feature's programmer, and provides some reasoning for this being done on purpose
Using the clipboard for application communication?
There is always a better way to do it. The programmer might have done it this way because it was faster to implement, or because he really wanted that value on the clipboard after the action. At the very least, if he didn't want it on the clipboard, he could have read the clipboard's existing contents, stored them, and put them back when he was done; everything would have been more "transparent" and less frustrating for the end user.
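For what it's worth, that save-and-restore idea is only a few lines. A sketch with macOS's NSPasteboard (the equivalent Win32 clipboard calls would apply in the PowerBuilder case), assuming plain text only:

```swift
import AppKit

// Sketch: stash whatever the user had on the clipboard, use the clipboard
// for the tool's own purposes, then put the original contents back.
let pasteboard = NSPasteboard.general
let saved = pasteboard.string(forType: .string)             // 1. remember the user's data

pasteboard.clearContents()
pasteboard.setString("m_goodfornothing", forType: .string)  // 2. the tool's own use
// ... paste into the code window here ...

pasteboard.clearContents()                                   // 3. restore what was there
if let saved = saved {
    pasteboard.setString(saved, forType: .string)
}
```

Even this only round-trips plain text; a real implementation would have to preserve every format on the clipboard, which is exactly why not touching it at all is the better answer.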
I have built a piece of functionality into an app that uses the clipboard. The business was requesting a way for users to seamlessly capture a screenshot and upload it.
I worked with the business to develop it, and what we came up with was that the user simply hits the Print Screen key and clicks "Upload" in my app.
The Java applet running in the background pulled the image off the clipboard and displayed a formatted preview to the user. The user then added a file name and description and clicked Save.
Using the clipboard this way saved the user the time of capturing the screenshot, saving it somewhere, and then finding it again through an upload interface. And even if we had gone that route, the user hitting Print Screen to capture the image would already be overwriting whatever was on the clipboard in the first place.
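The clipboard-reading half of that workflow is tiny. A sketch of the same idea with AppKit rather than a Java applet (the function name here is made up):

```swift
import AppKit

// Sketch: pull a screenshot the user has just copied to the clipboard so it
// can be previewed and uploaded without a save-then-browse round trip.
func imageFromClipboard() -> NSImage? {
    let pasteboard = NSPasteboard.general
    guard let objects = pasteboard.readObjects(forClasses: [NSImage.self], options: nil),
          let image = objects.first as? NSImage else {
        return nil
    }
    return image
}

if let screenshot = imageFromClipboard() {
    print("Got a \(Int(screenshot.size.width))x\(Int(screenshot.size.height)) image to preview and upload")
} else {
    print("Nothing image-like on the clipboard")
}
```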
Using the clipboard isn't all bad, but I certainly agree that using it in an IDE is a definite no-no.