I'm looking for a way to embed another application into my own view.
The business reason is that the company has many small Electron apps (essentially small, portable web programs, each with a self-contained browser) that it wants to embed inside an OS X program. Ideally these Electron apps would integrate and display seamlessly inside a subview, so they look like little web frames inside our larger program.
I think programmatically it would be easiest to open another program as a subview, but I'll take whatever I can get. Maybe even capturing its NSWindow somehow. (The Electron source is available, so it is easily discoverable.) Maybe a way to dock the other program inside mine, or (getting more desperate) finding its view and sending commands to constrain its size and location on top of mine.
So far, everything I've found says it is not really possible. I can take the more desperate course: launch a process, find its window, and position it over a spot in my display; when my window is moved or the content is scrolled, send messages to move the other window. But that isn't really integrated: the menu bar stays separate, and so on. I cannot truly incorporate the other app.
Any ideas or helpful implementation details?
EDIT 1: Thanks for those responses. How about if we could have the Electron apps expose their NSWindow somehow? Could that be leveraged? I'm thinking the application could send messages and (somehow, I'm not sure exactly how) set its parent window to be inside this one. In the Windows API this is much easier, since you can call SetParent on anything, even windows in different processes. But Cocoa seems more difficult.
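For reference, the closest in-process analogue I've found in Cocoa is NSWindow's addChildWindow(_:ordered:), which, as far as I can tell, only works for windows in the same process. A minimal sketch:

    import Cocoa

    // Minimal sketch: Cocoa's nearest analogue to SetParent is attaching one
    // window to another with addChildWindow(_:ordered:). This only works for
    // windows created in the SAME process; there is no public cross-process
    // equivalent.
    let parentWindow = NSApplication.shared.mainWindow!  // assumed: a window our app already owns

    let child = NSWindow(contentRect: NSRect(x: 0, y: 0, width: 200, height: 150),
                         styleMask: [.titled],
                         backing: .buffered,
                         defer: false)
    parentWindow.addChildWindow(child, ordered: .above)

    // The child now moves with the parent; position it relative to the parent's frame.
    child.setFrameOrigin(NSPoint(x: parentWindow.frame.origin.x + 20,
                                 y: parentWindow.frame.origin.y + 20))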
This isn't really a thing you can do in Mac OS X. Applications are not "composable" in the way you're hoping for. While it is possible to share a view with a subprocess under certain very specific circumstances (e.g., Safari or Chrome tab renderers), this requires the subapplication to be written in a very specific way to permit it. It's not something that would be feasible in the situation you're describing.
If you have access to the source of these Electron apps, consider combining them into a single overarching Electron application. Alternatively, if it's not possible for these applications to coexist within a single Electron app, you may want to consider using something like Chromium Embedded Framework to build your wrapper application; note, however, that this may require you to implement parts of the Electron framework yourself.
You cannot do that. Cocoa allows only one NSApplication instance per UI app, so you will need to fork/exec a new process and launch your applications that way.
If you can recompile the source code, you can create a custom subclass of NSApplication and use that custom class in all of the applications, or you can create an NSThread for the other applications without an NSApplication instance and go from there.
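As a minimal sketch of the separate-process route, using the modern NSWorkspace API (macOS 10.15+); the app path here is a placeholder:

    import Cocoa

    // Minimal sketch: each app keeps its own NSApplication instance by
    // running as a separate process. The path below is a placeholder.
    let url = URL(fileURLWithPath: "/Applications/MyElectronHelper.app")
    NSWorkspace.shared.openApplication(at: url,
                                       configuration: NSWorkspace.OpenConfiguration()) { app, error in
        if let error = error {
            print("Launch failed: \(error)")
        } else if let app = app {
            print("Launched pid \(app.processIdentifier)")
        }
    }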
If I am NOT using NIB/XIB files for my UI then where is it best to place/launch the code that initialises my GUI?
The logical place seems to be the App Delegate's applicationDidFinishLaunching: method...or perhaps there is a window controller or delegate that should be used? Or perhaps in the main.swift file?
It all depends on what your program does and the preferred order of operations.
If you want to present a view controller right away, you can put it in the applicationDidFinishLaunching: method like you mentioned, but you could also put it in methods of other classes if you want some operations to run in the background first.
The main thing to remember is that you want the user to feel like your application isn't frozen, so add a progress indicator or the like if the loading time will be longer than average. Apple requires a launch screen for all published iOS apps, so that is a perfect place to put a loading image.
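For the Mac case specifically, a minimal sketch of a fully programmatic launch, assuming no NIB and a hand-written main.swift:

    import Cocoa

    class AppDelegate: NSObject, NSApplicationDelegate {
        var window: NSWindow!

        func applicationDidFinishLaunching(_ notification: Notification) {
            // Build the UI entirely in code: no NIB/XIB involved.
            window = NSWindow(contentRect: NSRect(x: 0, y: 0, width: 480, height: 300),
                              styleMask: [.titled, .closable, .resizable],
                              backing: .buffered,
                              defer: false)
            window.title = "Programmatic UI"
            window.center()
            window.makeKeyAndOrderFront(nil)
        }
    }

    // main.swift: stands in for @NSApplicationMain when there is no NIB.
    let app = NSApplication.shared
    let delegate = AppDelegate()
    app.delegate = delegate
    app.run()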
I want to create a system wide minimize-in-place feature that occurs when double-clicking the title bar of any visible window in layer 0.
It seems that this would be a really simple feature to re-implement... When a title bar is double-clicked, just draw the title bar only. That's it. The problem is implementing it in all applications. I think it requires writing a custom framework to override the behavior in AppKit? Maybe NSApplication, NSWindow, or NSView?
How can I recreate minimize-in-place?
Is a framework my only choice? If I create a framework, can I replace the behavior of minimize in 3rd party apps?
Which framework do I need to override in order to intercept and recreate the default behavior of the minimize button?
More about minimize-in-place:
I am familiar with WindowShade by Unsanity; this is exactly what I want to create. Supposedly Unsanity is working on a Lion version, but their track record is abysmal. Minimize-in-place was a system feature way back in the days of OS 7 or 8. I have tried other utilities that try to replace this feature, and there aren't any that do minimize-in-place at a core system level like it needs to be done. Please don't offer utility suggestions; I am going to build my own.
I have built an Application that recreates minimize-in-place, but it's not good enough.
My Application semi-successfully recreates minimize-in-place by putting "placeholder" windows (belonging to my app) in place of the 3rd party windows when they get minimized to dock. When my window (title bar only) gets double-clicked, I close my window and restore the real window from the dock.
My custom app works perfectly, but there is a lot of application switching going on. I have optimized the switching between apps to be nearly seamless, but the fact remains that applications switch every time a window title bar is double-clicked. The result is that menu bars switch back and forth, palettes of 3rd party apps hide themselves when my app takes focus, and the list goes on.
So, although I've built a concept app, this method isn't going to work as I'd like it to. Minimize-in-place needs to be implemented using some other method than building an Application, and I need help understanding how to do it.
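(For reference, the window-driving part of my concept app is just the Accessibility API. A rough sketch, where the pid is a placeholder for the target app's process ID and the user has already granted this app accessibility access:)

    import ApplicationServices

    // Rough sketch: minimize or restore another app's focused window through
    // the Accessibility API. `pid` is a placeholder for the target app's
    // process ID; accessibility access must have been granted.
    func setMinimized(_ minimized: Bool, forAppWithPID pid: pid_t) {
        let app = AXUIElementCreateApplication(pid)

        var focused: CFTypeRef?
        guard AXUIElementCopyAttributeValue(app, kAXFocusedWindowAttribute as CFString, &focused) == .success else {
            return
        }
        let window = focused as! AXUIElement

        let value: CFBoolean = minimized ? kCFBooleanTrue : kCFBooleanFalse
        _ = AXUIElementSetAttributeValue(window, kAXMinimizedAttribute as CFString, value)
    }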
What I now think I need to do. Suggestions and assistance welcome.
I think I need to write a custom framework that overrides part of AppKit? This seems overwhelming, even though I only need a super-tiny portion of the code to be overridden, i.e. the core _minimize function, whatever that may be.
When the title bar of a 3rd party window is double-clicked, just clip to the title bar and let the rest of the system function as normal. On un-minimize (second double-click), set the clip back to the full window.
Simple right?
Thanks for any assistance/suggestions,
Chris
As the title suggests...
Is it possible to add custom Data Detectors to Cocoa apps?
If so, a gentle nudge in the right direction would be great.
Note: to be clear, I want to add new detectors to current apps. I am not writing a new app.
Thank you
W
It's not currently possible to build a custom data detector at all: NSDataDetector, the public class, is only available on iOS 4 and above, and it only matches a fixed set of built-in types.
If they existed on OS X and were a plug-in class like Spotlight importers, that'd be a nice feature. Perhaps filing a request at bugreport.apple.com would help it along?
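For completeness, here is roughly what the built-in class offers on the platforms where it exists (iOS 4 and later; it also reached OS X in 10.7): a fixed menu of types, with no way to register your own. A minimal sketch:

    import Foundation

    // Minimal sketch: NSDataDetector only matches its fixed, built-in types
    // (links, dates, addresses, phone numbers). There is no API for adding
    // a new type.
    let detector = try! NSDataDetector(types: NSTextCheckingResult.CheckingType.link.rawValue)
    let text = "See https://example.com for details"
    let range = NSRange(text.startIndex..., in: text)
    detector.enumerateMatches(in: text, options: [], range: range) { match, _, _ in
        if let url = match?.url {
            print("Found link: \(url)")
        }
    }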
Later update
I think the reason this hasn't been opened up with an API is that data detectors are only meant to find common data (contact info, dates, URLs) for which there are only one or a few uses. That is, contact info can be stored or used in "the" system-designated app. URLs can be auto-highlighted so they're linkable (clicks invoke the system-designated handler: Safari, an app registered to a protocol, etc.). But there's only one direction to funnel those actions, and the endpoint is always a major "convenience app" meant to manage this common information (contacts, calendar, browser, email app, phone app...).
On the other hand, consider app-specific information. Data formatted a certain way for use with one app or platform might mean something else entirely to another application. In fact, this is rather common. So what happens when a string like %%SOMESTRING%% is detected? To one app, it might be a placeholder token. To another, it might be a user name. To another still, it might be interpreted as %%USERNAME followed by %%. Suddenly the simple system-wide UI for handling basic data types has to account for multiple actions and/or multiple "data detector plugins" claiming all or part of a format.
I'm not sure we'll ever see custom data detector APIs on iOS or Mac for this reason alone.
While custom data detectors aren't available at the OS level, there is a mechanism that will get you almost there. One possibility is to create a Workflow in Automator and save it in the Services menu.
It can be configured to be active when text is highlighted. You'd either go to the current app's main menu and select the workflow under "Services", or else right-click on the text and go to the "Services" menu from there. Not as easy as clicking on the text as you would a URL, but pretty close.
See Apple's guide "Create a workflow in Automator on Mac".
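If you'd rather provide the same hook from your own app instead of Automator, the underlying mechanism is an NSServices provider. A rough sketch (the method name and the matching Info.plist NSServices entry are placeholders you'd define):

    import Cocoa

    // Rough sketch of a Services provider: receives the selected text from
    // any app via the Services menu. The Info.plist must declare a matching
    // NSServices entry; "handleSelectedText" is a placeholder method name.
    class TextServiceProvider: NSObject {
        @objc func handleSelectedText(_ pboard: NSPasteboard,
                                      userData: String,
                                      error: AutoreleasingUnsafeMutablePointer<NSString>) {
            guard let text = pboard.string(forType: .string) else { return }
            print("Service received: \(text)")
        }
    }

    // Registered once at launch, e.g. in applicationDidFinishLaunching:
    NSApplication.shared.servicesProvider = TextServiceProvider()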
I would like to write some tests for the GUI of my Cocoa program.
Is there any good GUI testing framework for Cocoa apps? The only thing I've found is Squish, which, at €2,400, is well beyond my budget…
Any ideas? How do you test your Cocoa GUIs?
It depends on what you mean by "testing Cocoa GUIs."
If you want tools like the old Virtual User tool included with MPW, then those are few & far between; you'll be looking at tools like Squish and Eggplant.
If you want to write unit tests for your application's human interface, I suggest you follow a "trust, but verify" approach, where you trust that, as long as you're making the right connections (according to your framework), your user can interact properly with your application. That means you can do the majority of your testing by verifying that your model and controller code are hooked up to your views correctly.
On my weblog, I've written a couple of examples of how to do this specifically with Cocoa, one for testing user interfaces built with target-action, and one for testing user interfaces built with Cocoa bindings. (Remember, of course, that the two technologies aren't exclusive: If you want to do drag & drop in a table view managed via Cocoa bindings, you'd also have a data source and probably a delegate hooked up via target-action.)
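For example, a target-action check can be as small as this sketch (XCTest; MyViewController, saveButton, and save(_:) are stand-in names for your real classes and connections):

    import XCTest
    import Cocoa

    // A tiny stand-in controller; in practice this would be your real class.
    class MyViewController: NSViewController {
        let saveButton = NSButton(title: "Save", target: nil, action: nil)

        override func loadView() {
            view = NSView()
            saveButton.target = self
            saveButton.action = #selector(save(_:))
            view.addSubview(saveButton)
        }

        @objc func save(_ sender: Any?) { /* ... */ }
    }

    final class WiringTests: XCTestCase {
        func testSaveButtonTargetAction() {
            let controller = MyViewController()
            _ = controller.view  // triggers loadView() and the wiring

            // Verify the connection rather than simulating a click.
            XCTAssertTrue(controller.saveButton.target === controller)
            XCTAssertEqual(controller.saveButton.action,
                           #selector(MyViewController.save(_:)))
        }
    }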
The thing I don't write unit tests for — generally — is the positioning or type of controls in their superview. Sometimes that is important to get and keep correct, however; in that case, I can just query the appropriate properties of the controls and verify them using the standard assertions.
What I virtually never do is write code to "simulate events." The closest I've ever come to that is constructing a fake drag info object and passing that to an outline view data source to ensure it will deal with drags correctly.
I would suggest you take a look at Google's Toolbox for Macintosh. It has, among some other nice goodies, a very nice set of state and rendering test additions for NSView and CALayer. In your unit tests you assert that the view/layer state or rendered image matches a saved (named) template. If the template doesn't exist in the test bundle or doesn't match the saved version, a new encoded state or rendered TIFF is produced for review. GTM provides categories on NSView and CALayer to do the state encoding and rendering. You can, of course, override these categories on your own NSView or CALayer subclasses to encode relevant state (using the NSCoder protocol) or rendering.
It also allows you to (easily) programmatically send key events and run the run loop from within unit tests, and it supports unit testing on both OS X and iPhone.
I created an open source Python package that uses the Apple Accessibility API among others to create a classic GUI automation library, giving you visibility into and interaction with Cocoa GUIs. PyATOM home page
You might check out and consider Eggplant by TestPlant (formerly Redstone Software) at http://www.testplant.com/.
Here is an article that Apple featured on them last year.
The latest CocoaCast podcast has an interview with Ian Dees the author of "Scripted GUI Testing with Ruby". You can find out more at CocoaCast
I would like to extend some existing applications' drag-and-drop behavior, and I'm wondering if there is any way to hack in drag-and-drop support, or change drag-and-drop behavior, by monitoring the app's message loop and injecting my own messages.
It would also work to monitor for when a paste operation is executed, basically to create a custom behavior when a control only supports pasting text and an image is pasted.
I'm thinking Detours might be my best bet, but one problem is that I would have to write custom code for each app I wanted to extend. If only Windows were designed with extensibility in mind!
On another note, is there any OS that supports extensibility of this nature?
If you're willing to do in-memory diddling while the application is loaded, you could probably finagle that.
But if you're looking for an easy way to just inject code into another window's message pump, you're not going to find it. The skills required to accomplish something like this are formidable (unless someone has wrapped all of this up in an application or library that I'm unaware of, but I doubt it). It's like clipboard hooking, writ large: it's frowned upon, there are tons of gotchas, and you're extremely likely to introduce significant instability into your system if you don't really know what you're doing.
Well, think of this from the point of view of the app designer. If you wrote an application, do you want users to be able to inject things into your application (more importantly, would you want to incur the support/revenue headache of clueless users doing this and then blaming you)? Each application's drag and drop infrastructure is written specifically for the application, not to allow you to drop anything you want onto it (potentially causing crashes and all sorts of other nasty behaviour when you drag something onto an app that simply can't handle it). Stuff like this is hard to do for a reason.
It is possible to do, but it's a lot of work: you need to acquire the window handle of the thing you want to drop something onto, and then replace that window's message handler with your own. That's fraught with danger, of course, since you either have to replicate all of the existing functionality of that window yourself, or risk the app not working correctly.
Hm, that's really too bad. I suppose there are sometimes reasons why apps don't exist yet. Basically, what I'm trying to do is simplify the process of sending image links to people using various apps (mainly web browser text forms, but also any time I'm editing in a terminal window) by hooking the process of pasting an image in a text context, uploading the image in the background, and pasting a URL to where the image was uploaded, all with a single action.
Edit: I suppose the easier solution to this is to just create a new keyboard combo that is hooked by my app before it gets to any other app. There's no reason in particular that I need to tie it to copy/paste functionality.