I have been following the Vulkan tutorial for a while now and just got to the part where I was able to create a triangle.
I wanted to get transparency working because it's a major part of my project.
I made the clear color alpha value 0.0, and started experimenting.
I spent some time bashing my head against the wall, because the validation layers kept telling me that these alpha compositing(?) flags were not supported, so I went as far as creating the windows myself (instead of with GLFW, which is what I had been using).
Some time later, I noticed that the support for the flags is there, but only for my integrated graphics card:
So when using the 1050 I get the black window:
But when using integrated graphics:
boom, it works.
This was even a bit weird to me, because it was working even though I was still setting compositeAlpha to VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR. It looks like Windows does the alpha blending without the need for the flags.
I then experimented with changing it to the PRE_MULTIPLIED and POST_MULTIPLIED flags, but saw no difference at all.
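In case it helps anyone reading this: the usual way to avoid the "not supported" validation errors is to query the surface before filling in the swapchain create info and pick a composite-alpha bit the surface actually reports. A minimal sketch, assuming the tutorial's usual physicalDevice, surface, and swapchainCreateInfo variables:

// Query what the presentation engine supports for this (device, surface)
// pair; per-pixel window transparency needs one of the PRE/POST bits.
VkSurfaceCapabilitiesKHR caps{};
vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &caps);

// Prefer the transparency-capable modes, fall back to opaque.
const VkCompositeAlphaFlagBitsKHR preferred[] = {
    VK_COMPOSITE_ALPHA_PRE_MULTIPLIED_BIT_KHR,
    VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR,
    VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR,
    VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
};
for (VkCompositeAlphaFlagBitsKHR mode : preferred) {
    if (caps.supportedCompositeAlpha & mode) {
        swapchainCreateInfo.compositeAlpha = mode;
        break;
    }
}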
I'm new to this so I'm at a loss as to how things may work.
One more thing: my Windows advanced display settings show this:
And so I thought that maybe, since it's the integrated graphics that's rendering what seems to be the Windows desktop (I'm assuming that's what "Display 1" entails), it is the only adapter that has access to those framebuffers and is thus the only one capable of doing the blending.
Am I right? Because if I am, then I must find a way to present with the adapter that's doing the Windows rendering, and render with the most powerful one, since it seems that Windows often picks integrated graphics for its display duties.
And if I'm not, I'd be really glad for you to explain to me why I'm dumb :D
Thank you.
I am trying to write a simple console application for Windows 10 that changes the screen brightness. Ultimately, I want to use this application with AutoHotKey, but this is secondary.
In researching this, almost everything I found referred to Android, which doesn't help. I did find this Q&A about changing the screen brightness with C, but unfortunately that is for Linux.
This archived thread contains a script that (while appearing quite hacky) makes a good impression, but it

"is deprecated, and on many [systems] it will not return the full brightness settings array. So, where you should have 8 levels, IOCTL_VIDEO_QUERY_DISPLAY_BRIGHTNESS will only return 6, or none at all." (by jkiel)
So I would prefer to use the WmiMonitorBrightness class over IOCTL_VIDEO_QUERY_SUPPORTED_BRIGHTNESS. It also provides much finer granularity, but I lack the skills to implement it correctly.
So how can I change the screen brightness on Windows 10? Possibly using the mentioned WmiMonitorBrightness class? I don't mind if it is a C application, an AutoHotKey script, or something else that I can control from the console.
Looking for a solution, I found this software from 2008 which works like a charm for me on Windows 10. The developer provided the C# source and compiled application here. "You will need the Microsoft .NET Framework 2.0 or greater for the app to work."
Plus it supports the full granularity of setting brightness on your screen.
To use the console app, the following parameters are allowed:
DisplayBrightnessConsole.exe
This will return the current brightness level.
DisplayBrightnessConsole.exe -getlevels
This will return all possible brightness levels accepted by the display, separated by a new line.
DisplayBrightnessConsole.exe 20 (or some other brightness level number)
This will set the brightness level of the display to the parameter given, in this case, 20.
The code currently only works on single-display systems. If your system has more than one display, it
will only work on the first (generally primary) display. It should be fairly easy to modify it to support more.
Possibly helpful for people using Python - but it doesn't help in my specific case:
How can I detect brightness changes using Python and WMI on Windows 10?
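For completeness, here is a rough C++ sketch of driving the WmiMonitorBrightnessMethods class directly over COM, in case you would rather not depend on the C# app. This is an untested outline of the standard IWbemServices::ExecMethod pattern (error handling and Release() calls omitted), not the linked application's code:

// Rough sketch: set panel brightness through WmiMonitorBrightnessMethods.
// Level is 0-100; error handling and cleanup omitted for brevity.
#define _WIN32_DCOM
#include <Wbemidl.h>
#include <comdef.h>
#include <cstdlib>
#pragma comment(lib, "wbemuuid.lib")

int main(int argc, char** argv)
{
    BYTE level = (argc > 1) ? (BYTE)atoi(argv[1]) : 50;

    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator* locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, (void**)&locator);

    // The brightness classes live in root\WMI, not the default root\CIMV2.
    IWbemServices* svc = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\WMI"), nullptr, nullptr, nullptr,
                           0, nullptr, nullptr, &svc);

    // Build the input parameters for WmiSetBrightness(uint32 Timeout, uint8 Brightness).
    IWbemClassObject* cls = nullptr;
    svc->GetObject(_bstr_t(L"WmiMonitorBrightnessMethods"), 0, nullptr, &cls, nullptr);
    IWbemClassObject* inSig = nullptr;
    cls->GetMethod(L"WmiSetBrightness", 0, &inSig, nullptr);
    IWbemClassObject* inParams = nullptr;
    inSig->SpawnInstance(0, &inParams);

    VARIANT v;
    v.vt = VT_I4;  v.lVal = 0;      // Timeout: apply immediately
    inParams->Put(L"Timeout", 0, &v, 0);
    v.vt = VT_UI1; v.bVal = level;  // Brightness: target percentage
    inParams->Put(L"Brightness", 0, &v, 0);

    // Call the method on the first monitor instance found.
    IEnumWbemClassObject* en = nullptr;
    svc->CreateInstanceEnum(_bstr_t(L"WmiMonitorBrightnessMethods"),
                            WBEM_FLAG_FORWARD_ONLY, nullptr, &en);
    IWbemClassObject* mon = nullptr;
    ULONG n = 0;
    if (en->Next(WBEM_INFINITE, 1, &mon, &n) == S_OK && n) {
        VARIANT path;
        mon->Get(L"__PATH", 0, &path, nullptr, nullptr);
        svc->ExecMethod(path.bstrVal, _bstr_t(L"WmiSetBrightness"),
                        0, nullptr, inParams, nullptr, nullptr);
    }

    CoUninitialize();
    return 0;
}

As far as I know, WmiSetBrightness only works on displays whose driver exposes the WMI brightness interface (typically laptop panels); external monitors usually need DDC/CI instead.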
I'm going to have the task of making sure that an animation created in Unity3D can run on a Microsoft HoloLens. I don't have any further information about the animation yet, but I wanted to ask in advance if there are any big things I should keep in mind.
In the animation you're playing a "character" in first-person mode, controlled with WASD or the arrow keys, and you can look up, down, left and right with the mouse. There are (as far as I know) no special interactions besides colliders.
And another question: is it easier to test the animation on the actual HoloLens or to use a HoloLens emulator on my laptop?
I know it's a lot to ask right now without any code or other material, but I still hope that some of you can give me a little advice :)
In my experience it is difficult to say. The HoloLens, while an awesome device with nice specs for its size, has quite limited graphical power. Try to reduce your models' vertices to a reasonably low count (e.g. using Blender's decimate feature), and lower the quality in Unity's quality settings as proposed in the Dev-Guide.
For your emulation question: the emulator does not emulate the HoloLens' specs (processor, memory, ...), but emulates input concepts etc. while running a Hyper-V virtual machine. So the performance in the emulator depends on your computer's hardware and tells you nothing about the actual performance on a HoloLens.
Also take a look at the performance guidelines from Microsoft
I worked on HoloLens for a couple of projects. A few points that can be useful for you:
The first big thing I would keep in mind is understanding whether the character has to move through a VR environment. In that case the HoloLens is almost useless, because its lenses will let you see the surroundings [the real ones], distracting you from the virtual world. This is exactly what happens with the pre-installed HoloTour: a nice attempt, but you never totally feel like you're in Rome or Machu Picchu.
The second big thing I would consider is the fact that - at least in the first release - the HoloLens has a very limited field of view, which "amounts to the size of a monitor in front of you – equivalent to 15 inches" [source]. It is likely that - in a situation where the character can look in every direction - the objects you place in the AR space will end up cut off or invisible.
About testing: the emulator is really exceptional; I didn't find great differences between it and the real device. Of course, if you already have the real HoloLens, I would use that. But if not, I would first develop and test on the emulator to understand whether the project is worth the purchase.
All I want is to make a simple application that you can resize while rendering.
(i.e. resizing without ever once seeing the stale buffer or the background showing at the edge of the window)
Most commercial, professional, and major open-source programs seem to be capable of this, while almost all personal or hobbyist programs never seem to be (I have no idea why).
I want to make a professional looking program like that.
A few examples of what I'm talking about:
https://gamedev.stackexchange.com/questions/127691/how-to-stop-sdl-from-freezing-the-rendering-while-resizing-the-window
https://www.gamedev.net/forums/topic/488074-win32-message-pump-and-opengl---rendering-pauses-while-draggingresizing/
https://en.sfml-dev.org/forums/index.php?topic=19388.5
What I have used in the past for windowing are:
SDL (Currently)
SFML
GLFW + OpenGL
And this problem applies to all 3 from what I can recall.
I would like to know the following:
Whether this is a solvable problem at all - please tell me why or why not.
I've never once looked at anything that low-level (OS APIs or graphics back-ends), so I just want to know why.
What's the way to solve this? Is it within my means?
Is the solution really a perfect solution? I've seen many people suggest solutions that have various problems
(e.g. shrinking the artifact without getting rid of it entirely, or getting rid of it but with a ton of flickering (I forgot why, but it doesn't matter))
My current understanding is that this is a Win32 API/Windows API problem related to blocking.
I don't have any deeper understanding or knowledge on how to create my own solution easily, but if I must learn then I will.
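For what it's worth, the blocking is the modal sizing loop that Windows runs inside DefWindowProc between WM_ENTERSIZEMOVE and WM_EXITSIZEMOVE: while the user drags the border, your GetMessage loop does not return, so a renderer driven from the main loop simply stops. One common workaround is to keep a timer alive during the drag so you still get WM_TIMER callbacks to draw from. A minimal Win32 sketch, assuming the renderer lives on the message thread; RenderFrame is a hypothetical stand-in for your existing draw call:

#include <windows.h>

static const UINT_PTR kRenderTimer = 1;
void RenderFrame(HWND hwnd); // hypothetical: your existing draw + present

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ENTERSIZEMOVE:
        // The modal loop is about to take over the thread; a ~60 Hz timer
        // still fires inside it, giving us a place to render from.
        SetTimer(hwnd, kRenderTimer, 16, nullptr);
        return 0;
    case WM_EXITSIZEMOVE:
        KillTimer(hwnd, kRenderTimer);
        return 0;
    case WM_TIMER:
        if (wParam == kRenderTimer)
            RenderFrame(hwnd);
        return 0;
    case WM_SIZE:
        // Resize the swapchain/backbuffer to the new client size here and
        // redraw immediately so newly exposed pixels are never stale.
        RenderFrame(hwnd);
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

The more thorough fix you see in big applications is to move rendering to its own thread, so the message pump is free to block; that removes the stutter entirely, at the cost of the threading complexity.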
We are developing a skinned application, and under Vista/Windows 7, on some machines, skinned applications sometimes lose their skin. Here's an example of the problem, and here's how the application looks when it's good.
This happens to us whether we develop with the native Win32 API or in Qt. It happens spontaneously, with no event that might explain it. By the way, we see it happen sometimes to some other applications, too.
We work around it by repainting everything every 2-3 seconds, but this is an ugly hack...
Any ideas why this could happen?
Thanks _very_much_ for any lead,
Lior
Shot in the dark, but it sounds like a graphics driver problem. I'd check whether the problematic machines all have the same graphics card or the same version of the graphics driver, and how the driver collection on those machines compares with the OK ones.
Shot in the dark #2: You're running out of GDI resources because your app (or another app running on the same machine) is leaking GDI handles.
It's been a while since I've had to use any tools for detecting "GDI handle leaks" (Google or Bing the term).
Here are some links to read up on:
http://msdn.microsoft.com/en-us/magazine/cc301756.aspx
http://www.nirsoft.net/utils/gdi_handles.html
http://msdn.microsoft.com/en-us/magazine/cc188782.aspx
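If you want to test the leak theory directly, GetGuiResources reports a process's live GDI and USER handle counts; log them periodically (or watch the "GDI Objects" column in Task Manager) and see whether they climb toward the default per-process limit of 10,000. A small sketch:

#include <windows.h>
#include <stdio.h>

// Print our own process's handle counts; call this on a timer and watch
// for steady growth, which indicates a leak.
void LogHandleCounts(void)
{
    DWORD gdi  = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
    DWORD user = GetGuiResources(GetCurrentProcess(), GR_USEROBJECTS);
    printf("GDI handles: %lu, USER handles: %lu\n", gdi, user);
}

Once a process hits the quota, drawing calls start failing, which would match the un-skinned look you describe.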
We had an arcade/redemption game running on Win98, but the hardware that can run it has finally gone obsolete. The game used a number of scaling effects, some through the 3D path, and played some tricks moving things in and out of video memory. If I were to undertake porting it to run on Windows 7, how much trouble would it likely be? Would it mostly be recompilation, or have the APIs undergone such transformation that I might as well rewrite the device interfaces?
Don't think of it as porting to Win7. Just port to DX9 and let DX handle the Win7 parts. In fact, you could probably leave it as is and it should run -- but you mention you do crazy things with video memory, which I assume has nothing to do with DX (i.e. either through GDI or some other hack?). Anyway, the DX7, 8 and 9 APIs all have quite drastic differences, but the nice thing is they're all backwards compatible. If the code you have is pure DX7, try compiling against the latest SDK and see if it works on Win7 straight off.
It's been a while since I've written any DirectX 7 code (or DirectX code at all), but if I recall correctly there were some significant API changes even between 7 and 8 - let alone 9 or 10 - that would make such a change a bit more difficult. Specifically, I think the major change was that after 7 they refactored the system to merge DirectDraw into Direct3D, so the two were no longer completely separate. I haven't looked at it since then, but I suspect that, given the number of new coding methods and the like, the API has changed quite a bit, so it is probably going to be a bit of a project rather than mostly recompilation like you might have hoped.
You actually moved things in and out of video memory? Shudder.
Still... it's quicker to do that now than when DX7 was around. What exactly were you doing? From your description it's impossible to say how easy it would be. A DX7 app should still run on Windows 7; I can't think of what odd features you may have used that would cause it to break.
Also, converting an application from DX7 to DX9 is not actually all that hard (converting to DX10+ would be a nightmare). They are still relatively similar... the main thing that has changed since those days is the shortening of names like D3DTRANSFORMSTATE_* or D3DRENDERSTATE_* to D3DTS_* or D3DRS_*.
Edit: The biggest change I can think of that has happened since DX7 is that graphics card manufacturers have dropped support for palettised textures which "could" break some old apps on modern machines. That really is a very simple fix though ...
Edit 2: Decompressing things from disk into a texture can be a bit of a pain. Your main issue is that you suffer a performance hit when you create the texture. However, if you have a set of textures already created and open, you can load into the relevant texture as and when you please; you only suffer a lock/unlock hit. That can be mitigated by loading a resource a few frames in advance. If you do this, though, it will no doubt require multi-threading and calling D3D from multiple threads. In that case, set the multi-thread flag on the device, as in the sketch below.
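To make those last two points concrete, here is a small DX9-flavoured sketch (hypothetical window handle, present parameters trimmed to the essentials) showing the renamed state constants next to their DX7 spellings, and the multithread flag to set if you end up calling D3D from more than one thread:

#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

void CreateDeviceExample(HWND hwnd)
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed      = TRUE;
    pp.SwapEffect    = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow = hwnd;

    // D3DCREATE_MULTITHREADED is the "multi-thread flag" mentioned above;
    // it makes the runtime serialize access to the device.
    IDirect3DDevice9* dev = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING |
                          D3DCREATE_MULTITHREADED,
                      &pp, &dev);

    // DX7: dev->SetRenderState(D3DRENDERSTATE_ZENABLE, TRUE);
    dev->SetRenderState(D3DRS_ZENABLE, TRUE);   // DX9 spelling

    // DX7: dev->SetTransform(D3DTRANSFORMSTATE_WORLD, &m);
    // DX9: the same call now takes D3DTS_WORLD.
}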