I have implemented InAppSettings to provide a preferences view inside my application, so that Settings.bundle values can be edited directly in the app.
However, now I want to read from Settings.bundle, and the iOS programming guide says that Settings.bundle should be read at application startup.
So is it not possible to access these preferences at any time in my code? InAppSettings would not make much sense if the user could not update preferences at any time while the app is running.
InAppSettings offers the method [InAppSettings registerDefaults]:
+ (void)initialize
{
    // +initialize runs once per class; only seed the defaults when the
    // app delegate class itself is being initialized, not a subclass.
    if ([self class] == [AppDelegate class]) {
        [InAppSettings registerDefaults];
    }
}
But I am not sure whether that makes it possible to read preferences at any time. Any suggestions?
Edit: my app has three views; one is the dashboard. The others, an options view and a mail view, are shown modally.
In the preferences the user can set up some basic things that I need in order to send the message. So when the user has just started typing and wants to change, e.g., the transmission gateway, he opens the options view, which is the InAppSettings view, and changes some things. I would like to read these changes without restarting the app.
Clearly Apple wants to impose its external-preferences UI paradigm on us; recall how Mail.app works. Sometimes that paradigm makes sense, sometimes it doesn't. If you have an immutable preference set, you don't have to think about handling preference-change events, concurrency, synchronization, OS backup and replication issues, etc. However, the facilities Apple provides are very limited in what they can do. You can't even enter an arbitrary string there (Mail.app clearly uses some private API for that). So you have to choose whether to use what Apple offers or to implement your own preference system (perhaps built on top of NSUserDefaults or something similar). I implemented my own once and have used it ever since. I prefer to have a real in-app preference system; the main advantage is being able to change a value without leaving the app.
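For the reading part specifically: since InAppSettings (like the Settings app itself) stores the Settings.bundle values in the standard user defaults, you can read them at any point while the app is running. A minimal sketch, where the "transmissionGateway" key is only an example and not something from your actual bundle:

// Values edited through the InAppSettings view end up in NSUserDefaults,
// so reading them later does not require an app restart.
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
NSString *gateway = [defaults stringForKey:@"transmissionGateway"]; // example key
if (gateway == nil) {
    // nil means the key is neither set by the user nor part of the
    // defaults seeded by [InAppSettings registerDefaults].
    gateway = @"fallback.gateway.example"; // placeholder fallback
}
NSLog(@"Current transmission gateway: %@", gateway);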
Soon I will have to work with OS X, and tools like Hammerspoon are missing some capabilities that are important to me. I need to be able to intercept keyboard and mouse events completely, before the focused application sees them. Say I ctrl+alt+apple+left-click on an application: I don't want the application to know about that left click. So far the only thing I have come up with is building a transparent fullscreen application, though I'm not sure how feasible that is yet.
Any better ideas or hints on how to go about this, in a language of your choice?
Thanks!
You will need to create an event tap. However, the application will have to run as the root user, or the user will have to grant the application access to accessibility features.
Apple's documentation can be found here.
Interestingly enough, I am in the process of writing a blog post about how to use event taps (including an Objective-C API that I wrote for my own use), but the post won't be available for another week or so.
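To make the event-tap suggestion concrete, here is a rough sketch of a tap that swallows ctrl+alt+cmd+left-clicks before the focused application sees them. It assumes the process runs as root or has been granted access for assistive devices; the modifier combination is just the one from the question.

#import <ApplicationServices/ApplicationServices.h>

// Tap callback: drop left-clicks carrying ctrl+alt+cmd, pass everything else.
static CGEventRef tapCallback(CGEventTapProxy proxy, CGEventType type,
                              CGEventRef event, void *refcon)
{
    if (type == kCGEventLeftMouseDown || type == kCGEventLeftMouseUp) {
        CGEventFlags flags = CGEventGetFlags(event);
        CGEventFlags wanted = kCGEventFlagMaskControl |
                              kCGEventFlagMaskAlternate |
                              kCGEventFlagMaskCommand;
        if ((flags & wanted) == wanted) {
            return NULL; // returning NULL swallows the event entirely
        }
    }
    return event; // untouched events continue to the focused application
}

int main(void)
{
    // Only listen for the mouse events we care about.
    CGEventMask mask = CGEventMaskBit(kCGEventLeftMouseDown) |
                       CGEventMaskBit(kCGEventLeftMouseUp);

    CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap,
                                         kCGHeadInsertEventTap,
                                         kCGEventTapOptionDefault,
                                         mask, tapCallback, NULL);
    if (!tap) return 1; // no permission, or event taps are disabled

    CFRunLoopSourceRef source =
        CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun(); // keep the tap alive
    return 0;
}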
How can I read a global UI selection within MacRuby? For instance, the selected text in Preview.
Having no experience in Ruby or Cocoa, I've decided to take the plunge and write a small dictionary app to aid myself with translation. All the pieces are ready; I just need to know how to read the selected text on a hotkey.
You can't, because there isn't one.
There is not one global selection. There is one text selection per text view (or other selectable-text-containing view). A window may have any number of such views, an application may have any number of such windows open, and the user may have any number of such applications running.
A further problem is that not all applications are Cocoa. Of those that are, most are accessible, but not all; custom views may trip you up (think of the Text tool in a graphics editor, for example). If the user selects text in a non-Cocoa application, chances are you won't be able to read it.
If you want to access the selected text in the focused view in the focused window in the focused application, the best way to do that is to make your application provide a Service, which the user can invoke from nearly any Cocoa application and some of the more enlightened Carbon apps. That's the best you can do.
Apple's own Dictionary gets special treatment in AppKit (including the availability of a floating Dictionary panel in Cocoa and Carbon apps), but otherwise works the same way: It provides a service that shows up in every Services menu (if the user hasn't turned it off).
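To illustrate the Service approach in Objective-C terms (a MacRuby provider would mirror the same selector), a minimal sketch might look like the following. The class name and the translateSelection message are made up for the example, and the app's Info.plist would need a matching NSServices entry (NSMessage, NSPortName, NSSendTypes).

#import <Cocoa/Cocoa.h>

// Hypothetical service provider; the frontmost app hands over its selection
// on the pasteboard when the user invokes the service.
@interface TranslationServiceProvider : NSObject
- (void)translateSelection:(NSPasteboard *)pboard
                  userData:(NSString *)userData
                     error:(NSString **)error;
@end

@implementation TranslationServiceProvider
- (void)translateSelection:(NSPasteboard *)pboard
                  userData:(NSString *)userData
                     error:(NSString **)error
{
    NSString *selection = [pboard stringForType:NSStringPboardType];
    if (selection == nil) {
        *error = @"No text selection was passed to the service.";
        return;
    }
    NSLog(@"Selected text handed to us by the frontmost app: %@", selection);
}
@end

// Registered once at startup, e.g. in applicationDidFinishLaunching::
// [NSApp setServicesProvider:[[TranslationServiceProvider alloc] init]];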
I'm in the situation where I love Terminal.app on the Mac, but I would love to add some further enhancements, like split views, terminal sets, etc.
Basically I tried to rebuild Terminal.app with an NSTask/pseudo-TTY approach, which basically works but just doesn't feel and behave like the beloved Terminal.app itself. There's also no need to reinvent the wheel, I think.
So is there any approach to starting a Cocoa application (b) from another Cocoa application (a) and managing the window or view of b from a? Like having a ManageTerminals.app that starts six Terminals and puts their views fullscreen in a grid, every instance being a fully working Terminal.app?
I found SIMBL, which apparently allows something like that (at least the website says so), but there are no manuals or documentation available.
Does anybody have an idea how to accomplish this? I don't want to change the app itself; I just want to manage the size and appearance of its window/view on the screen.
Thanks for any ideas or concepts!
-- EDIT
I tried Apple's Scripting Bridge now, which almost does the job. There's just one last little step missing that might be a show-stopper. Right now I have the following:
terminal = [SBApplication applicationWithBundleIdentifier:@"com.apple.Terminal"];
[terminal activate];
if ([terminal isRunning]) {
    // Fetch the frontmost Terminal window via Scripting Bridge...
    TerminalWindow *terminalWindow = [[[terminal windows] get] objectAtIndex:0];
    // ...and try to pull its content view out (this is the part that fails).
    view = (NSView *)[terminalWindow contentView];
}
Of course it gives me an unrecognized selector, because there is no method to retrieve the view from the TerminalWindow in the generated Terminal header. But if that were possible, I could create x instances of my application and replug the views of the terminals into a window of my own that manages only the views.
Does someone know how to accomplish this, or do you think it's completely encapsulated away?
You can launch an application with [[NSWorkspace sharedWorkspace] launchApplication:@"iChat"]. However, you can't manage its views; you're only allowed to change the window's frame. AppleScript might help you out here. I've never used SIMBL before, but there's a wiki page: code.google.com/p/simbl/w/list
You should probably take a look at iTerm which is an open-source terminal emulator for Mac OS X. You might be able to modify it to your needs, or at least see how a terminal emulator works via Cocoa.
Otherwise, you can use the Accessibility framework to control the position of other apps' windows. The user has to specifically allow this via the "Allow access for assistive devices" preference in the Accessibility pane of System Preferences.
Doing much more than that becomes more complex. Apple Events/AppleScript may give you the tools you need; I know Terminal has an AppleScript interface, but I'm not sure how complete it is. I really don't recommend using SIMBL. It does allow you to inject your code into another app's memory space, but since you would need to reverse-engineer the other app, you cannot guarantee stability.
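As a concrete sketch of the Accessibility approach: the snippet below moves the first window of a running Terminal.app, assuming "Allow access for assistive devices" is turned on. The target coordinates are arbitrary, and the (old) launchedApplications API is used only to find the process ID.

#import <Cocoa/Cocoa.h>
#import <ApplicationServices/ApplicationServices.h>

// Sketch: reposition the first window of Terminal.app via the Accessibility API.
void moveFirstTerminalWindow(void)
{
    // Find Terminal's PID among the running applications.
    pid_t pid = 0;
    for (NSDictionary *app in [[NSWorkspace sharedWorkspace] launchedApplications]) {
        if ([[app objectForKey:@"NSApplicationBundleIdentifier"]
                isEqualToString:@"com.apple.Terminal"]) {
            pid = [[app objectForKey:@"NSApplicationProcessIdentifier"] intValue];
            break;
        }
    }
    if (pid == 0) return; // Terminal is not running

    AXUIElementRef appElement = AXUIElementCreateApplication(pid);
    CFArrayRef windows = NULL;
    if (AXUIElementCopyAttributeValue(appElement, kAXWindowsAttribute,
                                      (CFTypeRef *)&windows) == kAXErrorSuccess
        && CFArrayGetCount(windows) > 0) {
        AXUIElementRef window = (AXUIElementRef)CFArrayGetValueAtIndex(windows, 0);
        CGPoint origin = CGPointMake(0, 22); // arbitrary: just below the menu bar
        AXValueRef position = AXValueCreate(kAXValueCGPointType, &origin);
        AXUIElementSetAttributeValue(window, kAXPositionAttribute, position);
        CFRelease(position);
    }
    if (windows) CFRelease(windows);
    CFRelease(appElement);
}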
I'm facing a problem with an application I'm writing (http://code.google.com/p/blazingstars/issues/detail?id=25). My program is a menulet (menu bar) application that uses the Accessibility API to interact with and control another program. I do the usual things like registering for the API notifications and getting the window list through API calls, etc., but I realized a while ago that if my program is started in a second Space (virtual desktop) after the program I'm interacting with was started in the first, my program will crash and burn because it can't access any information about its target. (Is there a way around that problem that I'm missing?)
A simple solution would be to pop up a dialog asking the user to restart the program in the correct Space, but for the life of me I can't figure out how to tell which Space my target is in, either through NSWorkspace or the Accessibility API, so that I can compare it to the Space I'm in. Any ideas?
Note that setting the collection behaviour to NSWindowCollectionBehaviorCanJoinAllSpaces isn't going to do me any good because I have to do a bunch of work upon launch, so I have to be in the same space as my target right from the start.
I think you can do this with the APIs in CGWindow.h.
Specifically see CGWindowListCopyWindowInfo() and kCGWindowWorkspace.
I've used these APIs to do all types of things like getting window contents, window frames, etc...
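As a sketch of that route, the following hypothetical helper walks the window list and pulls kCGWindowWorkspace out of the first window owned by the target process (identified here by a PID you presumably already have from the Accessibility side):

#import <Foundation/Foundation.h>
#import <ApplicationServices/ApplicationServices.h>

// Returns the workspace (Space) number of the first window owned by the
// given process, or -1 if no such window/value is found.
static NSInteger workspaceForPID(pid_t targetPID)
{
    NSArray *windows = (NSArray *)CGWindowListCopyWindowInfo(
        kCGWindowListOptionAll | kCGWindowListExcludeDesktopElements,
        kCGNullWindowID);

    NSInteger workspace = -1;
    for (NSDictionary *info in windows) {
        pid_t pid = [[info objectForKey:(id)kCGWindowOwnerPID] intValue];
        NSNumber *space = [info objectForKey:(id)kCGWindowWorkspace];
        if (pid == targetPID && space != nil) {
            workspace = [space integerValue];
            break;
        }
    }

    [windows release]; // we own the array per the CF "Copy" rule
    return workspace;
}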
If that doesn't work then you might want to try this private API:
extern CGSError CGSGetWindowWorkspace(const CGSConnectionID cid,
                                      CGSWindowID wid,
                                      CGSWorkspaceID *workspace);
The trick would be getting the connection ID of the target process.
You should probably redesign your app so that it delays its initialization until the app you want to control is in the current space.
There is no easy way to do this under Leopard because there are no official "space change" notifications, but the blog post and comments on this page may help.
Using the Apple OS X Cocoa framework, how can I post a sheet (slide-down modal dialog) on the window of another process?
Edit: Clarified a bit:
My application is a Finder extension for Subversion version control (http://scplugin.tigris.org/). Part of my application is a plug-in (a contextual menu item for Finder); the bulk of it, however, is in a separate daemon process. For several reasons, we've chosen to put virtually all the code into the daemon; the plug-in only defines the menu itself and sends Apple Events over to the daemon.
Sometimes the daemon needs to prompt the user for further information. It can toss a window on-screen for this, but that's disruptive (it appears at a random position), and it seems to me the workflow here is legitimately modal, for example: "select a file, pick 'commit' from the menu, provide commit comments, do the operation."
Interprocess cooperation (such as passing a reference of some kind) is acceptable: both processes are mine, but I want to avoid binding the sheet's code into the primary process.
Really, it sounds like you're trying to have your inter-process communication happen at the view level, which isn't really how Cocoa generally works. Things will be much easier if you separate your layers a bit more than that.
Why don't you want to put the sheet code into the other process? It's view code, and view code is inherently process-specific. The right thing to do here is probably to add somewhat generic modal-sheet support to your plugin code, and an IPC call that your daemon can make to summon that code. Trying to ship view objects over to the remote process is going to be nightmarish if you can make it work at all.
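For the in-plugin sheet support, the AppKit side is just the ordinary sheet API. The sketch below assumes a hypothetical runCommitSheetForWindow: entry point that the daemon would trigger over IPC; commitSheet is a placeholder for a sheet window the plugin owns.

// Hypothetical helper living in the plugin, invoked via IPC from the daemon.
- (void)runCommitSheetForWindow:(NSWindow *)parentWindow
{
    [NSApp beginSheet:commitSheet            // placeholder sheet window outlet
       modalForWindow:parentWindow
        modalDelegate:self
       didEndSelector:@selector(commitSheetDidEnd:returnCode:contextInfo:)
          contextInfo:NULL];
}

- (void)commitSheetDidEnd:(NSWindow *)sheet
               returnCode:(NSInteger)returnCode
              contextInfo:(void *)contextInfo
{
    [sheet orderOut:self];
    // Report returnCode and the entered commit comment back to the daemon here.
}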
You're fighting the frameworks with this approach.
You can't add a sheet to a window in another process, because you have at most only the most restricted access to the windows in the other process.
Please don't do this. Make the interaction nonmodal if at all possible. Especially in something like a commit, it's much nicer to be able to browse around your files while you're writing commit comments.
OS X does have window groups, but I don't think they can (easily) span applications.
Another thing to consider is that in OS X it's possible to have many Finder windows open on the same folder (unlike in OS 9). Even if you did have sufficient privileges/APIs to add a sheet to a Finder window, it's not like the modality of that window would prevent the user from being able to continue working with the files.
(My personal opinion as a long-time Mac user is that this kind of interaction would drive me right up the wall.)