I need to display a full screen DirectX window from a Qt app.
Although DirectX isn't directly supported by Qt anymore, this should be easy enough - just subclass QWidget, provide your own paintEvent() and set the WA_PaintOnScreen attribute.
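A minimal sketch of that approach (class and member names are illustrative, not actual project code):

```cpp
// Sketch only: the widget hands its native window handle to Direct3D and
// suppresses Qt's own painting so the two don't fight over the surface.
#include <QWidget>

class DxWidget : public QWidget
{
public:
    explicit DxWidget(QWidget *parent = 0) : QWidget(parent)
    {
        setAttribute(Qt::WA_PaintOnScreen);      // Qt won't double-buffer or erase for us
        setAttribute(Qt::WA_NoSystemBackground); // avoid background fills fighting D3D
        // Direct3D device creation would use (HWND)winId() here.
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        // rendering is done by Direct3D; nothing is drawn through QPainter
    }

    QPaintEngine *paintEngine() const { return 0; } // tell Qt not to paint at all
};
```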
But when the app is full screen, DirectX grabs all mouse and keyboard input, so the only way out of the app is Ctrl-Alt-Del.
P.S. Even if I wrote DirectX keyboard handlers, I would still have to find a way of creating the correct QKeyEvent to pass to Qt.
Has anyone done this? Or is there a simple way to tell DirectX not to grab the keyboard?
To my knowledge Direct3D does not grab the keyboard. Your problem more likely arises from the fact that Direct3D in exclusive full-screen mode is quite a different beast. Things like GDI (which Qt may well use to do its rendering) do not work by default, and the runtime hooks a lot of the windowing machinery, so that information presumably never makes it to Qt. The options you have are to re-implement Qt's rendering on top of Direct3D (the Lighthouse project?) or to use a pseudo full screen. This is usually done by creating a window that has a client area the same size as the screen and then positioning it correctly.
The latter would probably be the simplest solution ...
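For the pseudo full-screen variant, something along these lines should be enough (an untested sketch; the function name is made up):

```cpp
// Sketch of a "pseudo full-screen" window: a frameless top-level widget sized
// and positioned to cover the whole screen, without exclusive-mode Direct3D.
#include <QApplication>
#include <QDesktopWidget>
#include <QWidget>

void makePseudoFullScreen(QWidget *w)
{
    const QRect screen = QApplication::desktop()->screenGeometry(w);
    w->setWindowFlags(w->windowFlags() | Qt::FramelessWindowHint);
    w->setGeometry(screen);   // client area == screen size
    w->show();
    w->raise();
    w->activateWindow();      // normal keyboard focus handling stays intact
}
```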
There was an attempt to get a D3DWidget kind of thing into Qt around 4.3-4.5, but it was never stabilized or approved, and it was later removed.
Perhaps Lighthouse is indeed an option (a medium-sized amount of work; it basically links the OS/DirectX plumbing to the Qt side), or you can take a look at the old Direct3D code in the older Qt branches. I never used it, and it probably isn't intended for use with recent versions of Qt, but it's better than nothing.
Related
Can anyone provide some insight on how to "duplicate" an iTunes style window in Windows? Specifically I am looking for the following features:
1) rounded window
2) top and bottom toolbars
3) rounded text fields
I'm currently attempting a bit of cross-platform development with Real Studio. While I've discovered the mechanism for the rounded windows on OS X (a declared call to HIWindowSetContentBorderThickness / SetContentBorderThickness), I cannot find anything in MSDN about how to do similar things on Windows. Obviously Apple accomplished it when actually writing iTunes for Windows; perhaps they wrote custom controls from the ground up.
SIDENOTE: I found this article from a few years back that briefly discusses it (http://discuss.joelonsoftware.com/default.asp?joel.3.454369.12), but this is pretty much all I could find.
Even if I can't duplicate it exactly, some direction on which Windows libraries might contain the functionality I need to do it "manually" would be nice. Any further assistance would be greatly appreciated.
There's no API for doing Apple-style rounded corners, but there are lower-level APIs for creating windows (both frame windows and controls) of any shape you want.
I don't use RealStudio, but I believe it allows you to access both .NET and native Win32 APIs, so:
If you're using .NET Windows.Forms, read Shaped Windows Forms and Controls in Visual Studio .NET. It's written for VB7, but should be easy to translate to your favorite language.
If you're using the raw Win32 API, there are at least two ways to do this. The simplest, but most limited, is to call the SetWindowRgn API, which sets the shape of your window to anything you can create as an HRGN. But that probably won't cut it for you. You don't want jagged edges; you want smooth curves, with alpha-blended borders, and maybe shadows. (At least that's what Apple does.) The Layered Windows API is the way to do this. It allows arbitrary shapes (even changing on the fly, if you use UpdateLayeredWindow—although you don't need that feature to emulate iTunes), alpha transparency, and complicated hit testing. Since the original article is very out of date, and doesn't cover all of the functionality, also see Layered Windows for the current documentation, which has links to the references.
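To make the SetWindowRgn route concrete, the basic call sequence is roughly this (a Win32 sketch; the radius value is a placeholder, and as noted above the edges will be jagged rather than anti-aliased):

```cpp
// Win32 sketch: give a top-level window rounded corners with a region.
// Call once the window exists (e.g. right after CreateWindowEx).
#include <windows.h>

void ApplyRoundedCorners(HWND hwnd, int radius)
{
    RECT rc;
    GetWindowRect(hwnd, &rc);
    int w = rc.right - rc.left;
    int h = rc.bottom - rc.top;

    // Region coordinates are relative to the window, not the screen.
    HRGN rgn = CreateRoundRectRgn(0, 0, w + 1, h + 1, radius, radius);

    // The system takes ownership of the region; don't DeleteObject it afterwards.
    SetWindowRgn(hwnd, rgn, TRUE);
}
```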
There is a third-party control set that does what you want. It works on Mac & Windows.
http://www.madebyfiga.com/fgsourcelist/
It works well.
sb
Through experience I have found that the native Windows forms/components don't like to be changed. I know that in Delphi or Visual Studio you are given native Windows components to populate a form or window with, and then you attach code to the events those components fire (onClick, for example).
However, how do programs like Word or Google's Chrome browser alter the standard Windows window? I thought it was somehow protected?
Chrome seems to have tabs actually on the window’s frame?
I know you can also get toolkits like Swing and Qt that have their own controls/components to populate a form. How do these work? (How does the operating system/computer know how a non-native button should act? For example, Chrome's back and forward buttons - they're not native components, are they?)
I can understand how an OpenGL/DirectX window would work, because you're telling the computer exactly what to draw with polygons/quads.
I hope this question is clear!
Windows does not protect GUI elements. Windows and controls can be subclassed to handle various drawing operations in a custom way. For example, windows may override and reimplement the handling of the WM_NCPAINT message to draw a custom titlebar and frame:
http://msdn.microsoft.com/en-us/library/dd145212(VS.85).aspx
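Very roughly, taking over WM_NCPAINT looks like this (a sketch only, with an arbitrary brush colour; production code also has to handle WM_NCCALCSIZE, WM_NCHITTEST, WM_NCACTIVATE and friends):

```cpp
// Sketch of a window procedure that takes over non-client painting.
#include <windows.h>

LRESULT CALLBACK CustomFrameProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_NCPAINT:
    {
        HDC hdc = GetWindowDC(hwnd);          // DC for the whole window, frame included
        RECT rc;
        GetWindowRect(hwnd, &rc);
        OffsetRect(&rc, -rc.left, -rc.top);   // make the rect window-relative

        HBRUSH brush = CreateSolidBrush(RGB(40, 40, 40));
        FillRect(hdc, &rc, brush);            // custom titlebar/frame drawing goes here
        DeleteObject(brush);
        ReleaseDC(hwnd, hdc);
        return 0;                             // handled: suppress the default frame
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```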
Some Windows controls have an "owner-draw" mode. If you use this, you get to draw the control (or at least vital parts of the control), while Windows takes care of responding to user input in the standard way.
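A sketch of the owner-draw idea (control ID, text and colours are arbitrary):

```cpp
// Sketch: an owner-drawn button. It is created with BS_OWNERDRAW and the parent
// paints it when WM_DRAWITEM arrives; Windows still handles clicks, focus and
// keyboard behaviour in the standard way.
#include <windows.h>

void CreateOwnerDrawButton(HWND parent, HINSTANCE inst)
{
    CreateWindowEx(0, TEXT("BUTTON"), TEXT("OK"),
                   WS_CHILD | WS_VISIBLE | BS_OWNERDRAW,
                   10, 10, 90, 28, parent, (HMENU)(INT_PTR)1001, inst, NULL);
}

// Called from the parent's window procedure on WM_DRAWITEM,
// with (const DRAWITEMSTRUCT *)lParam.
LRESULT OnDrawItem(const DRAWITEMSTRUCT *dis)
{
    RECT rc = dis->rcItem;
    FillRect(dis->hDC, &rc,
             (HBRUSH)GetStockObject((dis->itemState & ODS_SELECTED)
                                    ? DKGRAY_BRUSH : LTGRAY_BRUSH));
    DrawText(dis->hDC, TEXT("OK"), -1, &rc, DT_CENTER | DT_VCENTER | DT_SINGLELINE);
    return TRUE;
}
```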
Swing and Qt draw their own widgets at a low level using basic primitives, but they also have theme engines which can mimic the native controls.
Qt moved to native controls a while back. As for how Swing does it, it gets a basic window from the OS. Then, much like OpenGL/DirectX, it does all of the drawing within that window. As for where to position things, that is what the layout managers do. Each manager has a layout style (horizontal, vertical, grid), the components it has to draw, and a section of the window it is expected to fill. From there it does some pretty easy math to allocate its space to its controls.
There's no magic: non-native controls are simply drawn on a blank window. Or, instead of being drawn, they may be represented as one of several bitmaps based on state (i.e. a button may be represented by one .png for the normal state, another .png for the pressed state, etc.).
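As an illustration of the bitmap-per-state idea, here is a rough Qt/C++ sketch (the .png file names are placeholders, not from any real toolkit's source):

```cpp
// Sketch: a "non-native" button that is nothing more than a widget painting
// one of two bitmaps depending on its state.
#include <QWidget>
#include <QPainter>
#include <QPixmap>
#include <QMouseEvent>

class BitmapButton : public QWidget
{
public:
    explicit BitmapButton(QWidget *parent = 0)
        : QWidget(parent), normal("normal.png"), pressed("pressed.png"), down(false)
    {
        setFixedSize(normal.size());
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter p(this);
        p.drawPixmap(0, 0, down ? pressed : normal); // pick the bitmap for the state
    }
    void mousePressEvent(QMouseEvent *)   { down = true;  update(); }
    void mouseReleaseEvent(QMouseEvent *) { down = false; update(); } // notify "clicked" here

private:
    QPixmap normal, pressed;
    bool down;
};
```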
I'm from a Windows programming background when writing tools, but have been programming using Carbon and Cocoa for the past year. I have introduced myself to the Mac by, I admit it, hiding from UI programming. I've basically been wrapping my OpenGL code in a view, then staying in my comfort zone using my platform-agnostic OpenGL C++ code as usual.
However, now I want to start porting one of my more sophisticated applications to Mac OS.
Typically I use the standard Visual Studio dockable MDI approach, which is excellent, but very Windows-like. From using a Mac primarily now for a while, I don't tend to see this sort of method used for Mac UIs. Even Xcode doesn't support the idea of drag and drop/dockable views, unfortunately. I see docked views with splitter panels, but that's about it.
The closest thing I've seen to the Visual Studio approach is Photoshop CS4, which is pretty nice.
So what is the general consensus on this? Is there a more Mac-like way of achieving the same thing that I haven't seen? If not, I'm happy to write a window manager in Cocoa myself, so that I can finally delve in and learn what looks like an excellent API.
Note, I don't want to use Qt or any other cross-platform libraries. The whole point is that I want to make the Mac app look like a Mac app and leave the Windows app looking like a Windows app. I always find the cross-platform libraries tend to lose this effect, and when I see a native Mac UI, with fancy Cocoa transitions and animations, I always smile. It's also a good excuse for me to learn Cocoa.
That being said, if there is an Open Source Cocoa library to do this, I'd love to know about it! I'd love to see how someone else achieves this, and would help smooth the Cocoa learning curve.
Cheers,
Shane
UPDATE: I forgot to mention a critical point. I support plugins, which can have their own UI to display various plugin-specific information. I don't know which plugins will be loaded, and I don't know where their UI will live if I don't support docking. I'd love to hear people's thoughts on this, specifically: how do I support a plugin view architecture if the UI can't change? Where do I put the plugin views?
Coming from a Windows background, you feel the need to have docking windows, but is it really essential to the app? Apple's philosophy (in my opinion) is that the designer knows better than the user how things should look and work. For example, iTunes is a pretty sophisticated app, but it doesn't let you change the UI around, change the skin, etc., because Apple wants to keep it consistent. They offer the full view, the mini player, and a handful of different viewing options, but they don't let you pull the source list off into a separate window, or dock it in other positions. They think it should be on the left, so there it stays...
You said you "want to make a Mac app look like a Mac app", and as you pointed out, Mac apps don't tend to have docking windows. Therefore, implementing your own docking windows is probably a step in the wrong direction ;)
+1 to Ken's answer.
From a user perspective, unless it's integral to the app, like it is in Adobe CS or Eclipse, I want everything as concise as possible and all the different options and displays out of my way so I can focus on the document.
I think you will find with Mac users that those who have the "user skill" to make use of rearranging panels will in most cases opt for hotkey bindings instead, and those who don't have that level of "skill" you're just going to confuse.
I would recommend keeping it as simple as possible.
One thing that's common among many Mac apps is the ability to hide all the chrome and focus on your content. That's the point behind the "tic tac" toolbar control in the top right corner of many windows. A serious weakness of many docking UIs is that they expect you to have the window take up most of the screen, because the docked panels can obscure content. Even if docked panels are collapsable, the space left by them is often just wasted and filled with white space. So, if you build a docking panel into your interface, you should expect it to be visible most of the time. For example, iTunes' source list is clearly designed to be visible all the time, but you can double-click a playlist to open it in a new window.
To get used to the range of Mac controls, I'd suggest you try doing some serious work with some apps that don't have a cross-platform UI; for example, the iWork apps, Interface Builder or Preview. Take note of where controls appear and why—in toolbars, in bottom bars, in inspectors, in source lists/sidebars, in panels such as IB's Library or the Font and Color panels, in contextual HUDs. Don't forget the menu bar either. Get an idea of the feel of controls—their responsiveness, modality, sizing, grouping and consistency. Try to develop some taste—not everything is perfect; just try iCal if you want to have something to make fun of.
Note that there's no "one size fits all" for controls, which can be an issue with docking UIs. It's important to think about workflow: how commonly used the control would be, whether you can replace it with direct manipulation, whether a visible indication of its state is necessary, whether it's operable from the keyboard and mouse where appropriate, and so forth. Figure out how the control's placement and behavior lets the user work more efficiently.
As a simple example of good versus bad control placement and behavior in otherwise-decent applications, compare image masking in OmniGraffle and Keynote. In OmniGraffle, this uses the Image inspector, where you have to first click on an unlabeled button ("Natural size") in order to enable the appropriate controls, then adjust size and position in a low-fidelity fashion with an image thumbnail or by typing percentages into fields. Trying to resize the frame directly behaves in a bizarre and counterintuitive fashion.
In Keynote, masking starts with a sensibly named menu item or toolbar item, uses a HUD which pops up the instant you click on a masked image, and allows for direct manipulation, including a sensible display of the extent of the image you're masking. While you're dragging a masked image around, it even follows the guides. Advanced users can ignore the HUD entirely, just double-clicking the image to toggle mask editing and using the handles for sizing. It should be easy to see that, with a few caveats (e.g. the state of "Edit Mask" mode should be visible in the HUD rather than just from the image; the outer border of the image you're masking should be used more effectively), Keynote is substantially better at this, in part because it doesn't use an inspector.
That said, if you do have a huge number of options and the standard tabbed inspector layout doesn't work for you, check out the Omni Group's OmniInspector framework. Try to use it for good, and hopefully you'll figure out how to obsess over UI as much as you do over graphics now :-)
(running in slow motion, reaching out in panic) Nnnnnoooooooo!!!!!
:-) Seriously, as I mentioned in reply to Ken's excellent answer, trying to force a "Windowsism" on an OS X UI is definitely a bad idea. In my opinion, the biggest problem with Windows UI is third-party developers inventing new and inconsistent ways of presenting UI, rather than being consistent and following established conventions. To a Mac user, that's the sign of a terrible application. It's that way for a reason.
I encourage you to rethink your UI app's implementation from the ground up with the Mac OS in mind. If you've done your job well, the architecture and model (sans platform-specific implementation) should clearly translate to any platform.
In terms of UI, you've been using a Mac for a year, so you should have a pretty good idea of "the norm". If you have doubts, it's best to post a question specifically detailing what you need to present and your thoughts on how you might do it (or asking how if you have no idea).
Just don't whack your app with the ugly stick by forcing it to behave as if it were running in Windows when it's clearly not. That's the kiss of death for an app to Mac users.
I am evaluating embedded Linux GUI toolkits for an upcoming project and have put together a must-have feature list to help me with the decision:
1) Color gradients in graphics (for menu headers, buttons, icons, etc.)
2) The ability to draw complex wave graphics with, say, a background grid and notations.
3) The ability to swap between landscape and portrait orientation.
Qt Extended seems to be a popular toolkit with a wide user base. Can anyone tell me if the above features are available in Qt Extended? Any links to tutorials or documentation would be great!
1) Yes, either through a custom style, a style sheet, or by implementing custom widgets (depending on how exotic you want to get).
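For example, a gradient can be done entirely in a style sheet; a small sketch (selectors and colours are just placeholders):

```cpp
// Sketch: colour gradients via a Qt style sheet.
#include <QApplication>
#include <QPushButton>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QPushButton button("Menu");
    button.setStyleSheet(
        "QPushButton {"
        "  border: 1px solid #555;"
        "  border-radius: 4px;"
        "  background-color: qlineargradient(x1:0, y1:0, x2:0, y2:1,"
        "                                    stop:0 #7a9cc6, stop:1 #3b5e8c);"
        "}");
    button.show();

    return app.exec();
}
```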
2) Sure, you can draw anything. For what you're after, check out Qwt.
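A minimal Qwt sketch of the wave-plus-grid idea (assumes Qwt 6, where the call is setSamples(); Qwt 5 used setData(); the sample data is made up):

```cpp
// Sketch: a waveform curve on a background grid with Qwt.
#include <QApplication>
#include <QVector>
#include <qwt_plot.h>
#include <qwt_plot_curve.h>
#include <qwt_plot_grid.h>
#include <cmath>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QwtPlot plot;
    plot.setTitle("Waveform");

    QwtPlotGrid *grid = new QwtPlotGrid;   // the background grid
    grid->attach(&plot);

    QVector<double> x, y;
    for (int i = 0; i < 500; ++i) {        // made-up sample data
        x.append(i * 0.02);
        y.append(std::sin(i * 0.02) * std::exp(-i * 0.004));
    }

    QwtPlotCurve *curve = new QwtPlotCurve("signal");
    curve->setSamples(x, y);               // Qwt 6 API; Qwt 5 uses setData()
    curve->attach(&plot);

    plot.show();
    return app.exec();
}
```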
3) Yes, but you might have to apply some magic of your own here. The screen driver does support this, so it can be done. Also, the Qt for Symbian port does this on the fly, so looking at their solution and then applying it to your scenario might work.
I want to build a desktop app where the size of both the window and the content is resized automatically according to the resolution of the monitor. I know it can be done easily with the docking features of .NET Forms, but my customer insists on going with Linux, so I can't use it.
I tried Flex & AIR, but the content is not resized automatically when I put the app in full screen or at another resolution (the app goes full screen but I still have tiny buttons). Now I am looking at Qt and GTK...
Is there a GUI framework that can do that? I don't care about the programming language.
Also, since the app will go in a bar, it would be nice to be able to easily customize the skin (like in Flex, WPF, etc.).
Regards,
Pascal
An excellent place to start is understanding how the Screen class works (see MSDN). Even though that is .NET, it will give you an idea of how the screen size, DPI, etc. can be obtained, and that information should translate to the Mono platform. Since your client is insisting on Linux, you should look at MonoDevelop and then possibly the GTK# framework. My understanding is that GTK# is not a very friendly (that is, pretty) development system (yet).
See:
MonoDevelop
GTK#
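Since the question mentions looking at Qt: any toolkit whose widgets live in layout managers (Qt, GTK) will reflow the content to fill the window automatically; scaling fonts and skins with the resolution/DPI is a separate step (style sheets can help there). A rough Qt sketch:

```cpp
// Sketch: in Qt, widgets placed in layouts grow/shrink with the window,
// so going full screen fills the screen with the content automatically.
#include <QApplication>
#include <QPushButton>
#include <QTextEdit>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    QVBoxLayout *layout = new QVBoxLayout(&window);
    layout->addWidget(new QTextEdit);            // takes the spare space
    layout->addWidget(new QPushButton("Play"));  // sized by the layout

    window.showFullScreen();
    return app.exec();
}
```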