Right-clicking with a WebKit view - ruby

I'm working on a project in Ruby right now that is essentially a web-app. We like the format of web-apps and some of the natural agile advantages we have building for the web. However, we want to be able to package our application and distribute it in a standalone format.
Ideally, we would like to essentially make a .app package for Mac and a .exe for Windows that just opens a Webkit view, connects to our server and renders the HTML we serve it.
Not so hard so far; this is a little beyond our current expertise (especially the Windows development), but it's all surmountable.
The issue is that we'd like to enable right-clicking, as you can in the iTunes Store (which is a WebKit view with custom events for right-clicks). We want to give right-clicks special meaning in our application too, and have them behave in a context-sensitive way.
What do we do? Where can we even start?

Do you want to do this from your webapp or from your native app side?
If you're doing this from a Cocoa app, you can just implement the webView:contextMenuItemsForElement:defaultMenuItems: WebUIDelegate method and return an array of custom NSMenuItems corresponding to your custom actions.
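A minimal sketch of that delegate approach, assuming ARC and a made-up action selector (doSomethingSpiffy:), might look like this:

#import <Cocoa/Cocoa.h>
#import <WebKit/WebKit.h>

// Set an instance of this class as the WebView's UIDelegate.
@interface MyWebUIDelegate : NSObject
@end

@implementation MyWebUIDelegate

- (NSArray *)webView:(WebView *)sender
    contextMenuItemsForElement:(NSDictionary *)element
              defaultMenuItems:(NSArray *)defaultMenuItems
{
    // Return only our own items; return defaultMenuItems (or a combined
    // array) if you also want to keep the standard entries.
    NSMenuItem *item = [[NSMenuItem alloc] initWithTitle:@"Do Something Spiffy"
                                                  action:@selector(doSomethingSpiffy:)
                                           keyEquivalent:@""];
    [item setTarget:self];
    return @[item];
}

// Hypothetical custom action invoked by the menu item above.
- (void)doSomethingSpiffy:(id)sender {
    NSLog(@"Custom right-click action triggered");
}

@end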
If you want to do this from the web app itself, you can add an event listener for the "contextmenu" event like so:
document.addEventListener("contextmenu", function (event) {
    event.preventDefault();
    console.log("My spiffy custom right click menu here!");
}, false);
Be aware, though, that with the above code in your web app you can't modify the browser's native right-click menu; you can only suppress it and replace it with your own custom creation.

Related

Nesting an application inside an OS X subview

I'm looking for a way to embed another application into my own view.
The business reason is that the company has many small Electron apps (basically small, portable web programs with a self-contained browser) that it wants to embed inside an OS X program. These Electron apps would ideally integrate and display inside a subview seamlessly, so they look like little web frames inside our larger program.
Programmatically, I think it would be easiest to open another program as a subview, but I'll take whatever I can get. Maybe even capturing its NSWindow somehow. (The Electron source is available, so it is easily discoverable.) Maybe a way to dock the other program inside mine, or (getting more desperate) finding its view and sending commands to constrain its size and location on top of mine.
So far everything I've found says it is not really possible. I've found I can take the more desperate course: launch a process, find its view, and position it over a spot on my display; when my window is moved or the content is scrolled, send messages to move the other window along with it. But that isn't really integrated - the menu stays separate, and so on - so I cannot truly incorporate it.
Any ideas or helpful implementation details?
EDIT 1: Thanks for those responses. What if we could have the Electron apps expose their NSWindow somehow? Could that be leveraged? I'm thinking the application could send messages to (somehow, I'm not sure exactly how) set its parent window to one inside this program. In the Windows API this is much easier, since you can call SetParent on anything, even items in different processes. But Cocoa seems more difficult.
This isn't really a thing you can do in Mac OS X. Applications are not "composable" in the way you're hoping for - while it is possible to share a view with a subprocess under certain very specific circumstances (e.g., Safari or Chrome tab renderers), this requires the subapplication to be written in a very specific way to permit that. It's not something that would be feasible in the situation you're describing.
If you have access to the source of these Electron apps, consider combining them into a single overarching Electron application. Alternatively, if it's not possible for these applications to coexist within a single Electron app, you may want to consider using something like Chromium Embedded Framework to build your wrapper application; note, however, that this may require you to implement parts of the Electron framework yourself.
You cannot do that. Cocoa allows only one NSApplication instance per UI app, so you will have to fork/exec a new process and launch your applications from there.
If you can recompile the source code, you can create a custom subclass of NSApplication and use that custom class in all the applications, or you can run the other applications on an NSThread without an NSApplication instance and go from there.
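As a rough Objective-C sketch of that separate-process route (the bundle path, delay, and target position are all placeholders, and the host app must be granted Accessibility permission to move another app's windows):

#import <Cocoa/Cocoa.h>
#import <ApplicationServices/ApplicationServices.h>

// Move the first window of the given process via the Accessibility API.
static void MoveFirstWindowOfPid(pid_t pid, CGPoint origin) {
    AXUIElementRef app = AXUIElementCreateApplication(pid);
    CFArrayRef windows = NULL;
    if (AXUIElementCopyAttributeValue(app, kAXWindowsAttribute,
                                      (CFTypeRef *)&windows) == kAXErrorSuccess
        && CFArrayGetCount(windows) > 0) {
        AXUIElementRef window = (AXUIElementRef)CFArrayGetValueAtIndex(windows, 0);
        AXValueRef position = AXValueCreate(kAXValueCGPointType, &origin);
        AXUIElementSetAttributeValue(window, kAXPositionAttribute, position);
        CFRelease(position);
    }
    if (windows) CFRelease(windows);
    CFRelease(app);
}

int main(void) {
    @autoreleasepool {
        // Hypothetical path to one of the small Electron apps.
        NSURL *appURL = [NSURL fileURLWithPath:@"/Applications/SmallElectronApp.app"];
        NSRunningApplication *running =
            [[NSWorkspace sharedWorkspace] launchApplicationAtURL:appURL
                                                          options:NSWorkspaceLaunchDefault
                                                    configuration:@{}
                                                            error:NULL];
        if (running) {
            // Crude: give the app a moment to create its window, then move it.
            [NSThread sleepForTimeInterval:2.0];
            MoveFirstWindowOfPid(running.processIdentifier, CGPointMake(200, 200));
        }
    }
    return 0;
}

This only repositions the other app's window on top of yours; as noted above, its menu bar, focus behavior, and event handling remain entirely separate.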

Incorporate document features into shoebox app

I'm working on a shoebox (library-style) app for Mac OS X which will incorporate a page view like the one in Apple's TextEdit sample code.
Since this is a shoebox app, I cannot simply generate a document-based Xcode project, because I want the user to create a new "canvas" to work on from within the app. Rather than opening a new document through an Open command, the user creates a new object in the navigation sidebar, or simply opens an already created document, also from the sidebar. I'm looking at a UI similar to Albums in iPhoto.
Is there an easy way to incorporate the features of document-based applications (e.g. page setup, print setup, etc.) into a non-document-based app without having to implement each method individually? Would anyone have any suggestions on how to implement this otherwise?
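One hedged note: the Page Setup / Print plumbing itself doesn't strictly require NSDocument. A minimal sketch, assuming a hypothetical window controller with a canvasView outlet for the current canvas:

#import <Cocoa/Cocoa.h>

@interface CanvasWindowController : NSWindowController
@property (strong) IBOutlet NSView *canvasView; // the "canvas" being edited
@end

@implementation CanvasWindowController

// Wire the "Page Setup…" menu item to this action.
- (IBAction)runPageLayout:(id)sender {
    [[NSPageLayout pageLayout] beginSheetWithPrintInfo:[NSPrintInfo sharedPrintInfo]
                                        modalForWindow:self.window
                                              delegate:nil
                                        didEndSelector:NULL
                                           contextInfo:NULL];
}

// Wire the "Print…" menu item to this action.
- (IBAction)print:(id)sender {
    NSPrintOperation *op =
        [NSPrintOperation printOperationWithView:self.canvasView
                                       printInfo:[NSPrintInfo sharedPrintInfo]];
    [op runOperationModalForWindow:self.window
                          delegate:nil
                    didRunSelector:NULL
                       contextInfo:NULL];
}

@end

The parts NSDocument would otherwise give you (per-document persistence, dirty-state tracking, and so on) would still need your own model layer behind the sidebar.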

Custom Navigation with Xamarin.Forms

I'm working on an application for Android and iOS which requires a certain flexibility for one or two views. That's why we created and implemented a service that translated a basic list of objects into a user interface for both iOS and Android. But now that Xamarin.Forms has been released, we decided to replace our service with the one Xamarin provides. I did succeed in creating the views with Xamarin.Forms, resulting in better-looking and smoother-running pages. But my problem lies in the navigation. Here is a little drawing of what I would like to achieve:
I would like my app to start an activity that begins with a custom fragment. After clicking a button on this fragment, I would like the page I created with the Xamarin.Forms API to be added to my current navigation stack! Once the user is finished with the Xamarin.Forms page, it navigates to a second custom fragment, all without breaking the navigation cycle. Does anybody have an idea of how I can achieve this?
For the iOS developers: replace Activity with NavigationController & Fragment with ViewController.
Take a look at CarouselPage for Xamarin.Forms' own approach. It doesn't look like that's what you need, but you can also look at its source code and maybe write a custom renderer yourself.
You may also want to take a look at MVVM.
As for the easier/hackier way: you'd put a button on each page and, when the button is tapped, execute Navigation.PushModalAsync(nextPage). There won't be a "< Back" button any more, so you may need to implement that yourself if you need it.
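A rough C# sketch of that hackier route (the page classes here are made up for illustration):

using Xamarin.Forms;

// First Forms page: pushes the next page modally when the button is tapped.
public class StepOnePage : ContentPage
{
    public StepOnePage()
    {
        var next = new Button { Text = "Continue" };
        next.Clicked += async (sender, args) =>
        {
            // No automatic "< Back" button with modal navigation,
            // so add your own if you need one.
            await Navigation.PushModalAsync(new StepTwoPage());
        };
        Content = new StackLayout { Children = { next } };
    }
}

// Placeholder second page.
public class StepTwoPage : ContentPage
{
    public StepTwoPage()
    {
        Content = new Label { Text = "Second step goes here" };
    }
}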
If by 'current navigation stack' you mean the native navigation of each platform, then remember that you don't have to use Xamarin.Forms' navigation model and functions such as PushAsync.
If you prefer to do navigation with code specific to each platform, then you can do this the same as normal. Just create your stub pages in each platform-specific project and set the Xamarin.Forms content for each page from the shared project.
From each platform-specific stub page (Activity / UIView / PhoneApplicationPage) you could then execute an Action<> set on the shared Xamarin.Forms page to help with the navigation, or alternatively hook into a custom event raised from the Xamarin.Forms page back to the platform-specific stub page, allowing you to do the navigation from there.
As Sten mentioned, there won't be any 'Back' button, so you will most likely have to handle that yourself.

Minimize-in-place - Do I need a custom framework

I want to create a system-wide minimize-in-place feature that occurs when double-clicking the title bar of any visible window in layer 0.
It seems that this would be a really simple feature to re-implement... When a title bar is double-clicked, just draw the title bar only. That's it. The problem is implementing it in all applications. I think it requires writing a custom framework to override the behavior in AppKit? Maybe NSApplication, NSWindow or NSView?
How can I recreate minimize-in-place?
Is a framework my only choice? If I create a framework, can I replace the behavior of minimize in 3rd party apps?
Which framework do I need to override in order to intercept and recreate the default behavior of the minimize button?
More about minimize-in-place:
I am familiar with WindowShade by Unsanity; this is exactly what I want to create. Supposedly Unsanity is working on a Lion version, but their track record is abysmal. Minimize-in-place was a system feature way back in the days of OS 7 or 8. I have tried other utilities that try to replace this feature, and there aren't any that do minimize-in-place at a core system level like it needs to be done. Please don't offer utility suggestions; I am going to build my own.
I have built an Application that recreates minimize-in-place, but it's not good enough.
My application semi-successfully recreates minimize-in-place by putting "placeholder" windows (belonging to my app) in place of the 3rd-party windows when they get minimized to the Dock. When my window (title bar only) gets double-clicked, I close my window and restore the real window from the Dock.
My custom app works perfectly, but there is a lot of application switching going on. I have optimized the switching between apps to be nearly seamless, but the fact remains that application switching happens every time a window title bar is double-clicked. The result is that menu bars switch back and forth, palettes of 3rd-party apps hide themselves when my app takes focus, and the list goes on.
So, although I've built a concept app, this method isn't going to work as I'd like it to. Minimize-in-place needs to be implemented using some other method than building an Application, and I need help understanding how to do it.
What I now think I need to do (suggestions and assistance welcome):
I think I need to write a custom framework that replaces AppKit? This seems overwhelming, even though I only need a super-tiny portion of the code to be overridden - i.e. the core _minimize function, whatever that may be.
When the title bar of a 3rd-party window is double-clicked, just clip to the title bar and let the rest of the system function as normal. On un-minimize (a second double-click), set the clip back to the full window.
Simple right?
Thanks for any assistance/suggestions,
Chris

Is it possible to add custom Data Detectors to OSX Cocoa applications? (such as Mail.app / Safari)

As the title suggests...
Is it possible to add custom Data Detectors to Cocoa apps?
If so, a gentle nudge in the right direction would be great.
Note: To be clear, I want to add new detectors to current apps. I am not writing a new app.
Thank you
W
It's not even possible to build a custom data detector on anything but iOS 4. NSDataDetector is only available on iOS 4 and above.
If they existed on OS X and were a plug-in class like Spotlight importers, that'd be a nice feature. Perhaps filing a request at bugreport.apple.com would help it along?
Later update
I think the reason this hasn't been opened up with an API is that they're only meant to find common data (contact info, dates, URLs) for which there are only one or a few uses. That is, contact info can be stored or used in "the" system-designated app. URLs can be auto-highlighted so they're linkable (clicks invoke the system-designated handler - Safari, an app registered to a protocol, etc.). But there's only one direction to funnel those actions, and the endpoint is always a major "convenience app" meant to manage this common information (contacts, calendar, browser, email app, phone app...)
On the other hand, consider app-specific information. Data formatted a certain way for use with one app or platform might mean something else entirely to another application. In fact, this is rather common. So what happens when a string like %%SOMESTRING%% is detected? To one app, it might be a placeholder token. To another, it might be a user name. To another still, it might be interpreted as %%USERNAME followed by %%. Suddenly the simple system-wide UI for handling basic data types has to account for multiple actions and/or multiple "data detector plugins" claiming all or part of a format.
I'm not sure we'll ever see custom data detector APIs on iOS or Mac for this reason alone.
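For reference, a minimal sketch of what the built-in NSDataDetector offers; note that it can only be configured with the predefined NSTextCheckingType values, which is exactly why there is no hook for plugging in custom detectors:

#import <Foundation/Foundation.h>

// Find the built-in link and date data types in a string.
void FindLinksAndDates(NSString *text) {
    NSError *error = nil;
    NSDataDetector *detector =
        [NSDataDetector dataDetectorWithTypes:(NSTextCheckingTypeLink |
                                               NSTextCheckingTypeDate)
                                        error:&error];
    [detector enumerateMatchesInString:text
                               options:0
                                 range:NSMakeRange(0, text.length)
                            usingBlock:^(NSTextCheckingResult *match,
                                         NSMatchingFlags flags, BOOL *stop) {
        if (match.resultType == NSTextCheckingTypeLink) {
            NSLog(@"Found URL: %@", match.URL);
        } else if (match.resultType == NSTextCheckingTypeDate) {
            NSLog(@"Found date: %@", match.date);
        }
    }];
}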
While custom data detectors aren't available at the OS level, there is a mechanism that will get you almost there. One possibility is to create a Workflow in Automator and save it in the Services menu.
It can be configured to be active when text is highlighted. You'd either go to the current app's main menu and select the workflow under "Services", or else right-click on the text and go to the "Services" menu from there. Not as easy as clicking on the text as you would a URL, but pretty close.
Create a workflow in Automator on Mac
