I am looking for a way to reproduce this behaviour from Windows file-system drag & drop:
The cursor isn't visible with the default Windows screen capture, but of course it is over the file on the right side.
I'm looking to reproduce this right-side image following the cursor in my own program, setting the image myself. Having it removed automatically on MOUSE_UP would be a big plus.
I'd like to use the Windows API directly so the process works the same way across languages, but if anyone has a better way I'm perfectly willing to hear it, of course.
Thanks.
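For reference, here is a minimal sketch (my own, with assumed names) of the common-controls image-list drag API, which produces exactly this ghost-image-follows-cursor effect and removes it on mouse-up. g_imageList is assumed to be created elsewhere with ImageList_Create/ImageList_Add, and the three helpers are meant to be called from the corresponding window messages:

    #include <windows.h>
    #include <commctrl.h>
    #pragma comment(lib, "comctl32.lib")

    HIMAGELIST g_imageList; // assumed: built elsewhere with ImageList_Create/ImageList_Add

    // WM_LBUTTONDOWN: start showing the drag image.
    void BeginDragImage(HWND hwnd, int x, int y)
    {
        ImageList_BeginDrag(g_imageList, 0, 0, 0); // image index 0, hotspot (0,0)
        ImageList_DragEnter(hwnd, x, y);           // coordinates are relative to hwnd
        SetCapture(hwnd);
    }

    // WM_MOUSEMOVE: the image follows the cursor.
    void MoveDragImage(int x, int y)
    {
        ImageList_DragMove(x, y);
    }

    // WM_LBUTTONUP: the image is taken down here, on mouse-up.
    void EndDragImage(HWND hwnd)
    {
        ImageList_DragLeave(hwnd);
        ImageList_EndDrag();
        ReleaseCapture();
    }

For full OLE drag & drop, the shell's IDragSourceHelper interface provides the same drag-image visuals.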
I am trying to set up individual window sharing for a project in Unity for Windows. The way I'm currently going about this is by using EnumWindows(), IsWindowVisible(), and GetWindowText() to create a dictionary of window titles and handles, then calling StartScreenCaptureByWindowId() to share the selected window.
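For illustration, this is roughly what that enumeration looks like against the raw Windows API (shown in C++ rather than Unity C#; the function names are mine):

    #include <windows.h>
    #include <map>
    #include <string>

    // Collects visible, titled top-level windows into a title -> handle map.
    static BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lParam)
    {
        auto* windows = reinterpret_cast<std::map<std::wstring, HWND>*>(lParam);
        wchar_t title[256] = {};
        if (IsWindowVisible(hwnd) && GetWindowTextW(hwnd, title, 256) > 0)
            (*windows)[title] = hwnd;
        return TRUE; // keep enumerating
    }

    std::map<std::wstring, HWND> ListCapturableWindows()
    {
        std::map<std::wstring, HWND> windows;
        EnumWindows(EnumProc, reinterpret_cast<LPARAM>(&windows));
        return windows;
    }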
This works relatively well for most processes; the window of the process, and only the window of the process, is streamed. However, for certain programs (like Google Chrome, Discord, and Windows Photos) the captured area is set correctly, but overlapping windows are not culled out.
Does anyone know what could be causing this problem? Is there something wrong with the way I'm grabbing the handles for these windows? Or is there something about starting a screen capture that I am missing?
You certainly did the correct things. However, you have also hit a limitation of the Windows part of the SDK. To understand this better: that set of programs are UWP applications, and they expose their visible pixels differently. Previous versions of the Agora SDK could not even show the window. Starting from 3.0.1, the SDK uses a rectangle-cutting method to get the window display. You may read further in the online documentation about that API here.
There isn't much Agora can do in the near term, so you will just need to deal with the user experience (e.g. by warning users) or look at solutions like using the Web SDK instead.
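If you go the warn-the-user route, one possible heuristic (my suggestion, not an Agora API) is to check the window class before starting the capture, since UWP apps are hosted inside "ApplicationFrameWindow" frames:

    #include <windows.h>
    #include <cwchar>

    // Rough check: UWP windows are hosted in ApplicationFrameWindow frames,
    // so a matching class name suggests overlap culling may not work.
    bool IsLikelyUwpWindow(HWND hwnd)
    {
        wchar_t className[256] = {};
        GetClassNameW(hwnd, className, 256);
        return wcscmp(className, L"ApplicationFrameWindow") == 0;
    }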
I am developing a program that needs two full-screen Direct3D displays. According to the documentation, I should create the swap chains in windowed mode and then switch to full-screen mode. While this works fine on Windows 8 (currently I'm just using Alt-Enter to do the switch), it does not work on Windows 7. On Windows 7 I get a problem similar to this issue, where the screen that has most recently been switched to full screen works fine, but the other screen, which was previously working just fine in full screen, goes black (and stays at the same full-screen resolution) until I take it out of full screen.
You can find my mess of a rough prototype at this tag.
Apparently there have been other bugs relating specifically to Windows 7 in the past... But I appear to be getting an issue that is slightly different.
Also, I have tried disabling DWM composition as suggested in the linked question, but that did not do anything to resolve the problem.
Please let me know if there's any more information I can provide about the issue. I guess the worst-case scenario is that I simply fall back on DX9, which apparently works fine for a multi-monitor full-screen setup.
I think I may have figured out the cause of the problem: it may be because I was creating multiple ID3D11Device objects for the same adapter.
My code was overly complex for what I needed, as I was following techniques introduced in this article when I really didn't need anything more than a single thread for all windows and rendering. After sharing the same ID3D10Device across render targets (one render target per output, one ID3D10Device per adapter), I successfully have DXGI with DX10 rendering two full-screen displays, as shown in my rough, memory-leaking proof of concept.
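To make that concrete, here is a minimal sketch (my reconstruction, not the prototype code) of the one-device-per-adapter, one-swap-chain-per-output layout; error handling is omitted and CreateWindowForOutput is a hypothetical helper:

    #include <d3d10.h>
    #include <dxgi.h>
    #pragma comment(lib, "d3d10.lib")
    #pragma comment(lib, "dxgi.lib")

    HWND CreateWindowForOutput(IDXGIOutput* output); // hypothetical: a window on that output's monitor

    void CreateDeviceAndSwapChains()
    {
        // One device for the whole adapter...
        ID3D10Device* device = nullptr;
        D3D10CreateDevice(nullptr, D3D10_DRIVER_TYPE_HARDWARE, nullptr, 0,
                          D3D10_SDK_VERSION, &device);

        // ...then recover the adapter and factory it was created on.
        IDXGIDevice* dxgiDevice = nullptr;
        device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
        IDXGIAdapter* adapter = nullptr;
        dxgiDevice->GetAdapter(&adapter);
        IDXGIFactory* factory = nullptr;
        adapter->GetParent(__uuidof(IDXGIFactory), (void**)&factory);

        // One swap chain per output, all sharing the single device.
        IDXGIOutput* output = nullptr;
        for (UINT i = 0; adapter->EnumOutputs(i, &output) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_SWAP_CHAIN_DESC desc = {};
            desc.BufferCount       = 1;
            desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            desc.OutputWindow      = CreateWindowForOutput(output);
            desc.SampleDesc.Count  = 1;
            desc.Windowed          = TRUE; // created windowed; switch later with
                                           // swapChain->SetFullscreenState(TRUE, output)
            IDXGISwapChain* swapChain = nullptr;
            factory->CreateSwapChain(device, &desc, &swapChain);
            output->Release();
        }
    }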
Since this was my first time working with any of this tech, I used this article to help me along with this process: Display Different images per monitor directX 10
I'm from a Windows programming background when writing tools, but have been programming using Carbon and Cocoa for the past year. I have introduced myself to the Mac by, I admit it, hiding from UI programming. I've basically been wrapping my OpenGL code in a view, then staying in my comfort zone using my platform-agnostic OpenGL C++ code as usual.
However, now I want to start porting one of my more sophisticated applications to Mac OS.
Typically I use the standard Visual Studio dockable MDI approach, which is excellent, but very Windows-like. From using a Mac primarily now for a while, I don't tend to see this sort of method used for Mac UIs. Even Xcode doesn't support the idea of drag and drop/dockable views, unfortunately. I see docked views with splitter panels, but that's about it.
The closest thing I've seen to the Visual Studio approach is Photoshop CS4, which is pretty nice.
So what is the general consensus on this? Is there a more Mac-like way of achieving the same thing that I haven't seen? If not, I'm happy to write a window manager in Cocoa myself, so that I can finally delve in and learn what looks like an excellent API.
Note, I don't want to use Qt or any other cross-platform libraries. The whole point is that I want to make a Mac app look like a Mac app and leave the Windows app looking like a Windows app. I always find that cross-platform libraries tend to lose this effect, and when I see a native Mac UI, with fancy Cocoa transitions and animations, I always smile. It's also a good excuse for me to learn Cocoa.
That being said, if there is an open-source Cocoa library to do this, I'd love to know about it! I'd love to see how someone else achieves this, and it would help smooth the Cocoa learning curve.
Cheers,
Shane
UPDATE: I forgot to mention a critical point. I support plugins, which can have their own UI to display various plugin-specific information. I don't know which plugins will be loaded, and I don't know where their UI will live if I don't support docking. I'd love to hear people's thoughts on this, specifically: how do I support a plugin view architecture if the UI can't change? Where do I put the plugin views?
Coming from a Windows background, you feel the need to have docking windows, but is it really essential to the app? Apple's philosophy (in my opinion) is that the designer knows better than the user how things should look and work. For example, iTunes is a pretty sophisticated app, but it doesn't let you change the UI around, change the skin, etc., because Apple wants to keep it consistent. They offer the full view, the mini player, and a handful of different viewing options, but they don't let you pull the source list off into a separate window, or dock it in other positions. They think it should be on the left, so there it stays...
You said you "want to make a Mac app look like a Mac app", and as you pointed out, Mac apps don't tend to have docking windows. Therefore, implementing your own docking windows is probably a step in the wrong direction ;)
+1 to Ken's answer.
From a user perspective, unless it's integral to the app like it is in Adobe CS or Eclipse, I want everything as concise as possible, with all the different options and displays out of my way so I can focus on the document.
I think you will find with Mac users that those who have the "user skill" to make use of rearranging panels will in most cases opt for hotkey bindings instead, and those who don't have that level of "skill" you're just going to confuse.
I would recommend keeping it as simple as possible.
One thing that's common among many Mac apps is the ability to hide all the chrome and focus on your content. That's the point behind the "tic tac" toolbar control in the top right corner of many windows. A serious weakness of many docking UIs is that they expect you to have the window take up most of the screen, because the docked panels can obscure content. Even if docked panels are collapsable, the space left by them is often just wasted and filled with white space. So, if you build a docking panel into your interface, you should expect it to be visible most of the time. For example, iTunes' source list is clearly designed to be visible all the time, but you can double-click a playlist to open it in a new window.
To get used to the range of Mac controls, I'd suggest you try doing some serious work with some apps that don't have a cross-platform UI; for example, the iWork apps, Interface Builder or Preview. Take note of where controls appear and why—in toolbars, in bottom bars, in inspectors, in source lists/sidebars, in panels such as IB's Library or the Font and Color panels, in contextual HUDs. Don't forget the menu bar either. Get an idea of the feel of controls—their responsiveness, modality, sizing, grouping and consistency. Try to develop some taste—not everything is perfect; just try iCal if you want to have something to make fun of.
Note that there's no "one size fits all" for controls, which can be an issue with docking UIs. It's important to think about workflow: how commonly used the control would be, whether you can replace it with direct manipulation, whether a visible indication of its state is necessary, whether it's operable from the keyboard and mouse where appropriate, and so forth. Figure out how the control's placement and behavior lets the user work more efficiently.
As a simple example of good versus bad control placement and behavior in otherwise-decent applications, compare image masking in OmniGraffle and Keynote. In OmniGraffle, this uses the Image inspector, where you have to first click on an unlabeled button ("Natural size") in order to enable the appropriate controls, then adjust size and position in a low-fidelity fashion with an image thumbnail or by typing percentages into fields. Trying to resize the frame directly behaves in a bizarre and counterintuitive fashion.
In Keynote, masking starts with a sensibly named menu item or toolbar item, and uses a HUD which pops up the instant you click on a masked image and allows for direct manipulation, including a sensible display of the extent of the image you're masking. While you're dragging a masked image around, it even follows the guides. Advanced users can ignore the HUD entirely, just double-clicking the image to toggle mask editing and using the handles for sizing. It should be easy to see that, with a few caveats (e.g. the state of "Edit Mask" mode should be visible in the HUD rather than just from the image; the outer border of the image you're masking should be more effectively used), Keynote is substantially better at this, in part because it doesn't use an inspector.
That said, if you do have a huge number of options and the standard tabbed inspector layout doesn't work for you, check out the Omni Group's OmniInspector framework. Try to use it for good, and hopefully you'll figure out how to obsess over UI as much as you do over graphics now :-)
(running in slow motion, reaching out in panic) Nnnnnoooooooo!!!!!
:-) Seriously, as I mentioned in reply to Ken's excellent answer, trying to force a "Windowsism" on an OS X UI is definitely a bad idea. In my opinion, the biggest problem with Windows UI is third-party developers inventing new and inconsistent ways of presenting UI, rather than being consistent and following established conventions. To a Mac user, that's the sign of a terrible application. It's that way for a reason.
I encourage you to rethink your app's UI implementation from the ground up with the Mac OS in mind. If you've done your job well, the architecture and model (sans platform-specific implementation) should translate cleanly to any platform.
In terms of UI, you've been using a Mac for a year, so you should have a pretty good idea of "the norm". If you have doubts, it's best to post a question specifically detailing what you need to present and your thoughts on how you might do it (or asking how if you have no idea).
Just don't whack your app with the ugly stick by forcing it to behave as if it were running in Windows when it's clearly not. That's the kiss of death for an app to Mac users.
Up to now I have been using the PDA emulator in Visual Studio 2008 (I am using the Windows Mobile 6.1 Professional SDK).
So I just dragged and dropped most of my GUI components onto the form. In one instance I made a panel, then dynamically generated labels in it at certain positions.
I then put it on my HP iPAQ 110 Classic PDA and it looked fine. Then I was looking through the emulators, and one of them was called Professional Square. So I decided to run it, and when it ran my program it looked like crap.
I had missing labels, missing controls and it just looked horrible.
I thought maybe it would do some resizing for me, but it seems it either did a shitty job or does not do it at all.
So how do you make a GUI that will work well on all mobile phones (or at least the vast majority of them)?
Is there, like, X number of types of mobile phones? The emulator emulates a PDA and it works on my HP one, so I am assuming that all Windows Mobile PDAs have the same screen size.
Then the next question is how you make the controls position properly from one device to another. I have heard of people using XML files that hold all the locations, sizes, etc., which they read in and, I guess, use to generate the GUI dynamically.
But I could not find any examples of what the XML file would look like (a hypothetical sketch follows this question), or of how to detect which phone type it is so that I could call up the right node of the file for that phone.
I am not sure if there are any other ways, but this seems better than a separate set of GUI forms for each one.
Also, would it be recommended to have most things in a panel, so that even if the content is bigger you can at least turn auto-scrolling on?
thanks
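Since the question asks what such an XML file could look like: below is a purely hypothetical example (the schema and names are invented), embedded as a C++ string for illustration. The idea is to key each layout by screen size, query the actual size at startup, and build the controls from the matching node:

    #include <string>
    #include <windows.h>

    // Invented layout schema: one <device> node per screen size.
    const char* kLayoutXml = R"(
    <layouts>
      <device width="240" height="320">   <!-- QVGA portrait PDA -->
        <label name="status" x="4" y="8" w="232" h="20"/>
      </device>
      <device width="240" height="240">   <!-- square devices -->
        <label name="status" x="4" y="4" w="232" h="16"/>
      </device>
    </layouts>
    )";

    // Builds the key (e.g. "240x320") used to find the matching <device>
    // node; parse the XML with whatever library you prefer.
    std::string ScreenKey()
    {
        return std::to_string(GetSystemMetrics(SM_CXSCREEN)) + "x" +
               std::to_string(GetSystemMetrics(SM_CYSCREEN));
    }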
I spent a good amount of time looking at different solutions for this problem (see my question here as well) and ended up with a pragmatic approach - consistent use of docking. You have to restrict yourself to the least common denominator, i.e. the lowest resolution you want to support, in terms of how much you can fit on the screen. The good news was that grids always use the entire available real estate, and my forms flow correctly on all devices and the screens don't look like they are broken.
This is far from being an easy task. You can follow some guidelines, but the only thing that will actually work is to always test the User Interface in all possible screen resolutions. Emulators are a good way to start; however, it is better to have an actual device. Some things like font sizes and text readability can only be tested on a real device. So, this is my advice:
Try to use docking for positioning your controls.
You need to be able to handle orientation changes correctly. Using docking helps, but again you always need to test in different screen resolutions.
At some point you will find out that it is inevitable to detect the screen size and adapt the User Interface dynamically. I don't agree that you should restrict yourself to only display what can fit in the smallest screen. A professional application should adapt itself to the available screen size and take full advantage of it.
Structure your application so that it is easy to support new screen resolutions. Make the main User Interface code screen-size agnostic: have it get all information about dynamic resizing and positioning from a configuration class (see the sketch after this list). This way you only need to change a single item in your code in order to support a new screen resolution.
And of course:
Test in all possible screen resolutions. After even a minor change to the User Interface, retest.
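As a language-agnostic illustration of that configuration-class idea (C++ here; the resolutions and values are invented), supporting a new screen size becomes a one-entry change:

    #include <map>
    #include <string>

    // All resolution-dependent values live in one place, so the main UI
    // code stays screen-size agnostic.
    struct LayoutConfig {
        int margin;
        int labelHeight;
        int fontSize;
    };

    static const std::map<std::string, LayoutConfig> kLayouts = {
        { "240x320", { 4, 20,  9 } },  // QVGA portrait
        { "480x640", { 8, 40, 18 } },  // VGA portrait
        { "240x240", { 4, 20,  9 } },  // square devices
    };

    const LayoutConfig& LayoutFor(int width, int height)
    {
        auto it = kLayouts.find(std::to_string(width) + "x" + std::to_string(height));
        if (it == kLayouts.end())
            return kLayouts.begin()->second; // fall back to a default layout
        return it->second;
    }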
Even though the above posts were helpful, this video I found solves all my problems, and you don't have to develop for the lowest screen.
http://www.microsoft.com/events/series/detail/webcastdetails.aspx?seriesid=86&webcastid=5112