Is it possible to override the default folder browser dialog in Windows?

For a while now I've disliked the default folder browser dialog in Windows.
Granted, at least it has the text box with autocomplete; but if you go strictly by the tree view, it can take a lot of clicks and scrolling to get where you want!
It'd be nice if I could develop a superior (to my taste) UI and have this override my system's default. That is, whenever an application requests a native folder browser from Windows on my system, I'd like to be able to define my own such control so that it will be displayed instead of the built-in one. Naturally I could/would then also offer this to others to install on their systems if they like.
Does Windows provide an API to override this particular feature? Maybe via a shell extension or something like that? (I've never done anything that interacts directly with the OS like that; so I don't even know where to start looking.)
Basically I am asking if this OS-level functionality is configurable within Windows.

An app called FlashFolder seems to have done exactly that, and it has a lot of good reviews (meaning it at least works for someone), but it doesn't work for me at all on Windows 8. If you have an earlier version of Windows, perhaps you'll have more luck.
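For context, this is roughly the call a typical application makes to get the stock folder picker (the modern IFileDialog in folder-picking mode; older apps use SHBrowseForFolder instead). Windows documents no supported way to swap in a replacement UI for this dialog, so a tool like FlashFolder presumably works by hooking these calls or the dialog window itself; that is my assumption, not something its documentation spells out. A minimal sketch of the standard call:

```cpp
#include <windows.h>
#include <shobjidl.h>   // IFileDialog
#include <stdio.h>

// Shows the built-in folder picker, i.e. the dialog a replacement would have to intercept.
int main()
{
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
    IFileDialog *dlg = nullptr;
    if (SUCCEEDED(CoCreateInstance(CLSID_FileOpenDialog, nullptr,
                                   CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&dlg)))) {
        DWORD opts = 0;
        dlg->GetOptions(&opts);
        dlg->SetOptions(opts | FOS_PICKFOLDERS);   // pick folders, not files
        if (SUCCEEDED(dlg->Show(nullptr))) {
            IShellItem *item = nullptr;
            if (SUCCEEDED(dlg->GetResult(&item))) {
                PWSTR path = nullptr;
                if (SUCCEEDED(item->GetDisplayName(SIGDN_FILESYSPATH, &path))) {
                    wprintf(L"Selected: %s\n", path);
                    CoTaskMemFree(path);
                }
                item->Release();
            }
        }
        dlg->Release();
    }
    CoUninitialize();
    return 0;
}
```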

Related

Why is moving a GUI window from code discouraged?

Well, the title almost says it all: why should I not move a GUI (e.g. Gtk) window on screen from code? In Gtk 3 there was an API for moving windows on screen, but it was removed in Gtk 4 because it is considered bad practice to move a window from code; only the user should do so (don't ask me for sources on that; I read it somewhere but have forgotten where and cannot find it). I cannot think of any reason why it would be bad, but I can think of several reasons why it could be useful, for example restoring the position of a window between application restarts. Could you please shed some light on this?
The major reason is that it can't possibly work cross-platform, so it is a broken API by definition. That's also why it was removed in GTK 4. For example, it is impossible to implement when running on top of a Wayland session, since the protocol doesn't allow getting or setting global coordinates. If you still want something similar to work, you'll have to call the specific platform API (for example, X11) for each platform you want to support; a sketch of that follows below.
On why it's not supported by some display protocols: it's bad for UX and for security. In terms of UX: some compositors have special behaviour because they need to work on a small device, because they have a kiosk mode in which everything should always run fullscreen, or because they provide a tiling experience. Applications that position their windows themselves then tend to behave unexpectedly. In terms of security: if you allow this, it is technically possible for an application to reposition and resize itself so that it covers your screens while making itself transparent, without that being noticeable, which gives it the possibility of scraping all your input.
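To illustrate the platform-specific fallback mentioned above, here is a minimal sketch of moving a GTK 4 window via the X11 API. It assumes the X11 GDK backend is available and silently does nothing on other backends (such as Wayland); the function name is mine.

```cpp
#include <gtk/gtk.h>
#include <gdk/x11/gdkx.h>   // GTK 4 X11 backend header
#include <X11/Xlib.h>

// Move a GtkWindow to (x, y) in global coordinates, X11 only.
static void move_window_x11(GtkWindow *window, int x, int y)
{
    GdkSurface *surface = gtk_native_get_surface(GTK_NATIVE(window));
    if (surface == NULL)
        return;
#ifdef GDK_WINDOWING_X11
    GdkDisplay *display = gdk_surface_get_display(surface);
    if (GDK_IS_X11_DISPLAY(display)) {
        Display *xdisplay = gdk_x11_display_get_xdisplay(display);
        Window xid = gdk_x11_surface_get_xid(surface);
        XMoveWindow(xdisplay, xid, x, y);
        XFlush(xdisplay);
    }
#endif
}
```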

Nesting an application inside an OS X subview

I'm looking for a way to embed another application into my own view.
The business reason is that the company has many small Electron apps (basically small portable web programs with a self-contained browser) that it wants to embed inside an OS X program. These Electron apps would ideally integrate and display inside a subview seamlessly, so they look like little web frames inside our larger program.
I think programmatically it would be easiest to open another program as a subview, but I'll take whatever I can get. Maybe even capturing its NSWindow somehow. (The Electron source is available, so it is easily discoverable.) Maybe there is a way to dock the other program inside mine, or (getting more desperate) to find its view and send commands to constrain its size and location on top of mine.
So far everything I've found says it is not really possible. I've found I can take the more desperate course: launch a process, find its window, and position it over a spot in my display, then send messages to move the other window whenever my window is moved or its content is scrolled. But that isn't really integrated: the menu stays separate, and so on.
Any ideas or helpful implementation details?
EDIT 1: Thanks for those responses. How about if we could have the Electron apps expose their NSWindow somehow? Could that be leveraged? I'm thinking the application could send messages and (somehow, I'm not sure exactly how) set the parent window to be inside this one. In the Windows API this is much easier, since you can call SetParent on anything, even windows belonging to different processes (a sketch of that Windows technique follows below). But Cocoa seems more difficult.
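For reference, this is roughly the Win32 technique referred to above; the window titles here are hypothetical, and there is no Cocoa equivalent. A minimal sketch:

```cpp
#include <windows.h>

int main()
{
    // Hypothetical window titles: find the foreign (Electron) window and our own.
    HWND child  = FindWindowW(nullptr, L"Electron App Window");
    HWND parent = FindWindowW(nullptr, L"Host Application");
    if (child && parent) {
        // Re-parent across process boundaries and turn the top-level window
        // into a child window positioned inside the host.
        SetParent(child, parent);
        LONG_PTR style = GetWindowLongPtrW(child, GWL_STYLE);
        SetWindowLongPtrW(child, GWL_STYLE, (style & ~WS_POPUP) | WS_CHILD);
        MoveWindow(child, 0, 0, 800, 600, TRUE);
    }
    return 0;
}
```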
This isn't really a thing you can do in Mac OS X. Applications are not "composable" in the way you're hoping for: while it is possible to share a view with a subprocess under certain very specific circumstances (e.g., Safari or Chrome tab renderers), this requires the subapplication to be written in a very specific way to permit it. It's not something that would be feasible in the situation you're describing.
If you have access to the source of these Electron apps, consider combining them into a single overarching Electron application. Alternatively, if it's not possible for these applications to coexist within a single Electron app, you may want to consider using something like Chromium Embedded Framework to build your wrapper application; note, however, that this may require you to implement parts of the Electron framework yourself.
You cannot do that. Cocoa requires you to have only one NSApplication instance per UI app, so you will have to fork/exec a new process and launch your applications from there.
If you can recompile the source code, you can create a custom subclass of NSApplication and use that custom class in all the applications, or you can run the other applications on an NSThread without an NSApplication instance and go from there.

Should I use TMainMenu in FireMonkey to support both Windows and OS X?

I'm reading the documentation for menus in FireMonkey desktop applications. It explains that there are two completely different menu components: one to be used for Windows (TMenuBar) and the other to be used for OS X (TMainMenu).
Further, it also explains that a TMenuBar does not display on OS X (it's nonstandard for OS X), and that a TMainMenu is placed in the non-client area of the Windows form (nonstandard for Windows).
It's my understanding that FireMonkey is supposed to be one code base for multiple platforms, but it appears they want me to separate the two. I can understand that menus work differently across the two platforms, but it seems like an unnecessary pain to implement two different main menus (and conditionally show/hide them depending on the platform). I have no intention of using the special menu capabilities specific to either platform. Not to mention the TMenuBar is completely ugly.
Since the TMainMenu also shows on Windows, yet the documentation claims it's "nonstandard for Windows", can I assume that the TMainMenu is sufficient for both? Or do I really need to implement a separate TMenuBar just for Windows? What are the implications if I don't separate them?
I saw this video, but it's for Delphi XE2, and I can't find such an option in the Delphi XE8 TMenuBar control. And again, the TMenuBar is very ugly and doesn't work like typical menus, the way the TMainMenu does. I'm confused as to why they would advise using this TMenuBar at all.
The help page linked to is wrong if being 'FireMonkey-native' (so to speak) is not a concern (for what I mean by that, see below). TMainMenu is not 'non-standard' on Windows: it wraps the native Windows menu bar API, like its VCL equivalent. TMenuBar, in contrast, is completely custom.
That said, the fashion has generally been to use custom menu bars on Windows ever since Office 97 did so nearly twenty years ago; however, the original menu bar API is still fully supported and is used by (for example) Notepad in Windows 10. Further, writing a decent custom menu bar that fakes a real one properly, as well as providing the additional functionality that led to not using a real one in the first place, takes a fair bit of effort and detailed API knowledge. Unfortunately, it is doubtful whether the FMX offering has had that level of attention, which isn't to say it won't get better in the future.
One caveat: one reason to use TMenuBar might be if you are using FMX's custom styling options and want your menu bars to fully participate.

Creating native GtkMenu from Firefox extension

Is it possible to create a native GtkMenu (I mean not XUL, but a real GtkMenu) from a Firefox extension, and add it to the Firefox window? I would like to make GlobalMenu work with Firefox, which currently doesn't work due to the lack of a native GUI.
If you want to have your menu inside the Firefox window, then... I don't know the details, but the usual way in X is to make an X window (for example a GtkWindow) containing your GtkMenu, and then have Firefox "swallow" that window, much like systray utilities do (the keywords "reparenting X window" should be enough to search on). I would guess there should be a special XUL element for that, but I don't see any.
If you don't want it inside the Firefox window, you can simply make your own GtkWindow with a GtkMenu and show it (a sketch follows below); you should probably do this from another thread. Regarding your other question here: it is entirely possible to write an extension in C++, as long as you use the Gecko SDK and the Gecko API. You can then link with whatever library you want, including GTK.
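As an illustration of that second suggestion, here is a minimal standalone GtkWindow with a menu bar, written against the GTK 3 C API (callable from C++). The labels are made up, and reparenting ("swallowing") the window into Firefox via XReparentWindow is a separate step that isn't shown here.

```cpp
#include <gtk/gtk.h>

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);

    // Top-level window holding a vertical box with a menu bar at the top.
    GtkWidget *window   = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *vbox     = gtk_box_new(GTK_ORIENTATION_VERTICAL, 0);
    GtkWidget *menubar  = gtk_menu_bar_new();
    GtkWidget *filemenu = gtk_menu_new();
    GtkWidget *fileitem = gtk_menu_item_new_with_label("File");
    GtkWidget *quititem = gtk_menu_item_new_with_label("Quit");

    gtk_menu_item_set_submenu(GTK_MENU_ITEM(fileitem), filemenu);
    gtk_menu_shell_append(GTK_MENU_SHELL(filemenu), quititem);
    gtk_menu_shell_append(GTK_MENU_SHELL(menubar), fileitem);

    g_signal_connect(quititem, "activate", G_CALLBACK(gtk_main_quit), NULL);
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_box_pack_start(GTK_BOX(vbox), menubar, FALSE, FALSE, 0);
    gtk_container_add(GTK_CONTAINER(window), vbox);
    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}
```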
I think you should ask this question on one of Mozilla's mailing lists, for example mozilla.dev.extensions (they are listed on http://www.mozilla.org/community/developer-forums.html).

Is there any way to override the drag/drop or copy/paste behavior of an existing app in Windows?

I would like to extend some existing applications' drag-and-drop behaviour, and I'm wondering if there is any way to hack in drag-and-drop support, or change the existing behaviour, by monitoring the app's message loop and injecting my own messages.
It would also work to monitor for when a paste operation is executed, basically to provide custom behaviour when a control only supports pasting text and an image is pasted.
I'm thinking Detours might be my best bet, but one problem is that I would have to write custom code for each app I wanted to extend. If only Windows were designed with extensibility in mind!
On another note, is there any OS that supports extensibility of this nature?
If you're willing to do in-memory diddling while the application is loaded, you could probably finagle that.
But if you're looking for an easy way to just inject the code you want into another window's message pump, you're not going to find it. The skills required to accomplish something like this are formidable (unless someone has wrapped all of this up in an application/library that I'm unaware of, but I doubt it). It's like clipboard hooking, writ large: it's frowned upon, there are tons of gotchas, and you're extremely likely to introduce significant instability into your system if you don't really know what you're doing.
Well, think of this from the point of view of the app designer. If you wrote an application, do you want users to be able to inject things into your application (more importantly, would you want to incur the support/revenue headache of clueless users doing this and then blaming you)? Each application's drag and drop infrastructure is written specifically for the application, not to allow you to drop anything you want onto it (potentially causing crashes and all sorts of other nasty behaviour when you drag something onto an app that simply can't handle it). Stuff like this is hard to do for a reason.
It is possible to do, but it's a lot of work: you need to acquire the window handle of the thing you want to drop something onto, and then replace that window's message handler with your own (a sketch of the basic subclassing technique is below). That's fraught with danger, of course, since you either have to replicate all of the existing functionality of that window yourself or risk the app not working correctly.
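To make that concrete, here is a minimal sketch of classic Win32 subclassing: swapping in your own window procedure and forwarding everything else to the original. For a window owned by another process, this code would have to run inside that process (typically via an injected DLL), which is where the danger and the per-application custom work come in; the message handled here is just an example.

```cpp
#include <windows.h>

static WNDPROC g_origProc = nullptr;

// Replacement window procedure: peek at interesting messages, then
// forward everything to the window's original handler.
LRESULT CALLBACK HookProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_DROPFILES) {
        // Custom drop handling could go here, before (or instead of)
        // letting the original handler see the message.
    }
    return CallWindowProcW(g_origProc, hwnd, msg, wParam, lParam);
}

// Install the hook on a window we have a handle to (same process only).
void InstallHook(HWND target)
{
    g_origProc = reinterpret_cast<WNDPROC>(
        SetWindowLongPtrW(target, GWLP_WNDPROC,
                          reinterpret_cast<LONG_PTR>(HookProc)));
}
```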
Hm, that's really too bad. I suppose there are sometimes reasons why apps don't exist yet. Basically, what I'm trying to do is simplify the process of sending image links to people from various apps (mainly web browser text forms, but also any time I'm editing in a terminal window) by hooking the act of pasting an image into a text context, uploading the image in the background, and pasting a URL to where the image was uploaded, all with a single action.
Edit: I suppose the easier solution is to just create a new keyboard combo that is hooked by my app before it gets to any other app (see the sketch below). There's no reason in particular that I need to tie it to copy/paste functionality.
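That route is much simpler, since Windows has a documented API for it. A minimal sketch using RegisterHotKey, with a hypothetical Ctrl+Shift+V combination; the upload-and-paste logic is only indicated by a comment:

```cpp
#include <windows.h>

int main()
{
    // Register a system-wide hotkey; WM_HOTKEY is posted to this thread's
    // message queue no matter which application has the focus.
    if (!RegisterHotKey(nullptr, 1, MOD_CONTROL | MOD_SHIFT, 'V'))
        return 1;

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0)) {
        if (msg.message == WM_HOTKEY) {
            // Read the image from the clipboard, upload it in the background,
            // put the resulting URL on the clipboard, then synthesize a paste.
        }
    }
    UnregisterHotKey(nullptr, 1);
    return 0;
}
```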
