Are there DirectX guidelines for binding and unbinding resources between draw calls? - directx-11

All DirectX books and tutorials strongly recommend reducing state changes between draw calls to a minimum, yet I can't find any guidelines that go into more detail. Reviewing a lot of sample code found on the web, I have concluded that programmers have completely different coding principles regarding this subject. Some even set and unset
VS/PS
VS/PS ResourceViews
RasterizerStage
DepthStencilState
PrimitiveTopology
...
before and after every draw call (although the setup remains unchanged), and others don’t.
I guess that's a bit overdone...
From my own experiments I have found that the only resources I have to bind on every draw call are the ShaderResourceViews (to VS and PS in my case). This requirement may be caused by the use of compute shaders since I bind/unbind UAVs to buffers that are bound to VS / PS later on.
I have lost many hours of work before I detected that this rebinding was necessary. And I guess that many coders aren’t sure either and prefer to unbind and rebind a “little too much” instead of running into a similar trap.
Question 1: Are there at least some rules of thumb regarding this problem?
Question 2: Is it possible that my ShaderResourceViews bound to VS/PS are unbound by the driver/DirectX core because I bind UAVs to the same buffers before the CS dispatch call (I don’t unbind the SRVs myself)?
Question 3: I don't even set VS/PS to null before I use the compute shaders. This works without problems so far, yet I constantly feel unsure whether I'm digging my next trap with such a "lazy" approach.

You want as little overhead as possible while also avoiding invalid pipeline state. That's why some people unbind everything (to prevent as many surprises as possible); it depends on your use cases, and of course you can balance this a bit.
To balance this you can pre-allocate specific resources to specific slots. Since each resource type has a different number of slots, different rules can apply:
1/Samplers and States
You have 16 slots, and generally 4-5 samplers that you use 90% of the time (linear/point/anisotropic/shadow).
So on application startup, create those states and bind them to each shader stage you need (try not to start at slot zero, since those slots are easily overridden by mistake).
Create a shader header file with the SamplerState -> slot mapping and use it in your shaders, so any slot update is reflected automatically.
Reuse these as much as possible, and only bind custom samplers where needed.
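A minimal sketch of that startup setup, assuming D3D11 in C++ (the slot numbers, the matching shader header registers and the function names are my own choice, not a fixed convention):
#include <d3d11.h>

// Assumed shared shader header:
//   SamplerState LinearSampler : register(s14);
//   SamplerState PointSampler  : register(s15);
static const UINT kLinearSlot = 14;

void BindCommonSamplers(ID3D11Device* device, ID3D11DeviceContext* ctx)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.AddressU = desc.AddressV = desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxLOD = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* linear = nullptr;
    desc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    device->CreateSamplerState(&desc, &linear);

    ID3D11SamplerState* point = nullptr;
    desc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;
    device->CreateSamplerState(&desc, &point);

    // Bind once, high in the slot range, to every stage that samples textures.
    ID3D11SamplerState* samplers[] = { linear, point };
    ctx->VSSetSamplers(kLinearSlot, 2, samplers);
    ctx->PSSetSamplers(kLinearSlot, 2, samplers);
    ctx->CSSetSamplers(kLinearSlot, 2, samplers);
}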
For standard states (Blend/Depth/Rasterizer), building a small collection of common states on application startup and binding them as needed is common practice.
An easy way to minimize render state binding at low cost is to build a stack: set a default state, and if a shader needs a more specific state, it can push a new state onto the stack; once it's done, pop the last state and apply it to the pipeline again.
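As an illustration of that stack idea for the rasterizer state (class and method names are my own; the same pattern applies to blend and depth-stencil states):
#include <d3d11.h>
#include <vector>

class RasterizerStateStack
{
public:
    RasterizerStateStack(ID3D11DeviceContext* ctx, ID3D11RasterizerState* defaultState)
        : m_ctx(ctx)
    {
        m_stack.push_back(defaultState);
        m_ctx->RSSetState(defaultState);
    }

    void Push(ID3D11RasterizerState* state)
    {
        m_stack.push_back(state);
        m_ctx->RSSetState(state);
    }

    void Pop()
    {
        m_stack.pop_back();
        m_ctx->RSSetState(m_stack.back()); // re-apply whatever was active before
    }

private:
    ID3D11DeviceContext* m_ctx;
    std::vector<ID3D11RasterizerState*> m_stack;
};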
2/Constant Buffers
You have 14 slots, which is quite a lot; it's pretty rare (at least in my use cases) to use all of them, especially now that you can also use buffers/structured buffers.
One simple, common case is reserving a slot for the camera (with all the data you need: view/projection/view-projection, plus their inverses, since you might need those too).
Bind it to the slots of all the shader stages that need it; then the only thing you have to do is update that cbuffer once per frame, and it's ready to use anywhere.
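A sketch of that reserved camera cbuffer (the struct layout, the slot number 13 and the HLSL register are assumptions on my part):
#include <cstring>
#include <d3d11.h>
#include <DirectXMath.h>

// Matching HLSL (assumed): cbuffer Camera : register(b13) { ... };
struct CameraCB
{
    DirectX::XMFLOAT4X4 view;
    DirectX::XMFLOAT4X4 projection;
    DirectX::XMFLOAT4X4 viewProjection;
    DirectX::XMFLOAT4X4 inverseViewProjection;
};

ID3D11Buffer* CreateCameraBuffer(ID3D11Device* device)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = sizeof(CameraCB);
    desc.Usage          = D3D11_USAGE_DYNAMIC;
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);
    return buffer;
}

void UpdateAndBindCamera(ID3D11DeviceContext* ctx, ID3D11Buffer* buffer, const CameraCB& data)
{
    // Update once per frame...
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    ctx->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    std::memcpy(mapped.pData, &data, sizeof(data));
    ctx->Unmap(buffer, 0);

    // ...and keep it in a reserved slot on every stage that reads it.
    const UINT slot = 13; // assumed reserved slot
    ctx->VSSetConstantBuffers(slot, 1, &buffer);
    ctx->PSSetConstantBuffers(slot, 1, &buffer);
    ctx->CSSetConstantBuffers(slot, 1, &buffer);
}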
3/Shader stages
You pretty much never need to unbind the Compute Shader, since it's fully separate from the graphics pipeline.
For the graphics pipeline stages, on the other hand, instead of unbinding afterwards, a reasonably good practice is to set all the stages you need and set the ones you don't need to null.
If you don't follow this and, for example, render a shadow map (depth buffer only), a previously used pixel shader might still be bound.
If you forget to unset a Geometry Shader that you previously used, you might end up with an invalid layout combination and your object will not render (the error only shows up in the debug runtime).
So setting the full set of shader stages adds a little overhead, but the safety trade-off is very far from negligible.
In your use case (using only VS/PS, plus CS to build data), you can safely ignore that.
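For example, a depth-only shadow pass might set its stages like this (a sketch; the function and shader names are made up):
#include <d3d11.h>

void BeginShadowPass(ID3D11DeviceContext* ctx, ID3D11VertexShader* shadowVS)
{
    ctx->VSSetShader(shadowVS, nullptr, 0);
    ctx->PSSetShader(nullptr, nullptr, 0); // depth-only: explicitly no pixel shader
    ctx->GSSetShader(nullptr, nullptr, 0); // make sure no stray GS/HS/DS survives
    ctx->HSSetShader(nullptr, nullptr, 0);
    ctx->DSSetShader(nullptr, nullptr, 0);
}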
4/UAVs-RenderTargets-DepthStencil
For write resources, always unset them once you are done with a unit of work. Within the same routine you can optimize, but at the end of your render/compute function set your outputs back to null, since the pipeline will not allow a resource to be bound as a ShaderResource while it is still bound as an output.
Not unsetting a write resource at the end of your function is a recipe for disaster.
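A sketch of what that looks like for a compute pass and a render-to-texture pass (function names assumed):
#include <d3d11.h>

void RunComputePass(ID3D11DeviceContext* ctx, ID3D11ComputeShader* cs,
                    ID3D11UnorderedAccessView* outputUAV, UINT groupsX)
{
    ctx->CSSetShader(cs, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);
    ctx->Dispatch(groupsX, 1, 1);

    // Done with this unit of work: release the output so the underlying buffer
    // can be bound as an SRV to the VS/PS afterwards.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
}

void EndRenderToTexture(ID3D11DeviceContext* ctx)
{
    // Unbind the color target and depth so those textures can be sampled later.
    ID3D11RenderTargetView* nullRTV = nullptr;
    ctx->OMSetRenderTargets(1, &nullRTV, nullptr);
}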
5/ShaderResourceView
This is very situational, but the idea is to minimize binding calls while also avoiding runtime warnings (which can be harmless, but then hide important messages).
One option is to reset all shader resource inputs to null at the beginning of the frame, to avoid, for example, a buffer still bound to the VS being set as a UAV in the CS. This costs you 6 pipeline calls per frame (one per stage), but it's generally worth it.
If you have enough spare registers and some resources that never change, you can of course also assign those to reserved slots and bind them once and for all.
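The once-per-frame reset could look like this (shown only for the stages used in this question; a full reset would do the same for GS/HS/DS):
#include <d3d11.h>

void ClearShaderResourceInputs(ID3D11DeviceContext* ctx)
{
    ID3D11ShaderResourceView* nullSRVs[D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT] = {};
    ctx->VSSetShaderResources(0, D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT, nullSRVs);
    ctx->PSSetShaderResources(0, D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT, nullSRVs);
    ctx->CSSetShaderResources(0, D3D11_COMMONSHADER_INPUT_RESOURCE_SLOT_COUNT, nullSRVs);
}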
6/IA related resources
For these, you need to set the right data to draw your geometry, so it's pretty reasonable to set the InputLayout/Topology every time you draw. You can of course organize your draw calls to minimize switches.
I find it rather critical to set the Topology properly, since an invalid topology (for example, using a triangle list with a pipeline that includes tessellation) will draw nothing and give you a runtime warning; on AMD cards it's also very common for it to simply crash your driver, so better to avoid that, as it becomes rather hard to debug.
Generally there is no real need to unbind vertex/index buffers (you just overwrite them, and the input layout tells the pipeline how to fetch them anyway).
The only exception to this rule is the case where those buffers are generated in compute/stream out, to avoid the above-mentioned runtime warning.
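As an illustration, a typical per-draw IA setup might look like this (the buffers, layout and formats are assumed to be created elsewhere):
#include <d3d11.h>

void DrawMesh(ID3D11DeviceContext* ctx, ID3D11InputLayout* layout,
              ID3D11Buffer* vertexBuffer, ID3D11Buffer* indexBuffer,
              UINT stride, UINT indexCount)
{
    const UINT offset = 0;
    ctx->IASetInputLayout(layout);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    ctx->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);
    ctx->DrawIndexed(indexCount, 0, 0);
}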

Answer 1: less is better.
Answer 2: it's the opposite; you have to unbind a view before you bind the same resource with a different kind of view. You should enable the debug layer to catch errors like this.
Answer 3 : that's fine.

Related

Does the guarantee of non-divergence when dispatching single work item exist?

As we know, work items running on GPUs can diverge when there are conditional branches. One such mention appears in Apple's OpenCL Programming Guide for Mac.
As such, some portions of an algorithm may run "single-threaded", having only 1 work item running. And when such a portion is especially serial and long-running, some applications move that work back to the CPU.
However, this question concerns only GPU and assume those portions are short-lived. Do these "single-threaded" portions also diverge (as in execute both true and false code paths) when they have conditional branches? Or will the compute units (or processing elements, whichever your terminology prefers) skip those false branches?
Update
In reply to comment, I'd remove the OpenCL tag and leave the Vulkan tag there.
I included OpenCL as I wanted to know if there's any difference at all between clEnqueueTask and clEnqueueNDRangeKernel with dim=1:x=1. The document says they're equivalent but I was skeptical.
I believe Vulkan removed the special function to enqueue a single-threaded task for good reasons, and if I'm wrong, please correct me.
Do these "single-threaded" portions also diverge (as in execute both true and false code paths) when they have conditional branches?
From an API point of view it has to appear to the program that only the active branch paths were taken. As to what actually happens, I suspect you'll never know for sure. GPU hardware architectures are nearly all confidential so it's impossible to be certain.
There are really two cases here:
Cases where a branch in the program turns into a real branch instruction.
Cases where a branch in the program turns into a conditional select between two computed values.
In the case of a real branch I would expect most cases to only execute the active path because it's a horrible waste of power to do both, and GPUs are all about energy efficiency. That said, YMMV and this isn't guaranteed at all.
For simple branches the compiler might choose to use a conditional select (compute both results, and then select the right answer). In this case you will compute both results. The compiler heuristics will generally aim to choose this where computing both results is less expensive than actually having a full branch.
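To illustrate the two cases in plain C++-style code (the same distinction applies to shader code; the compiler decides which form to emit):
// Case 1: a real branch - only the taken side is executed (per active lane).
float shade_branch(bool lit, float albedo, float ambient)
{
    if (lit)
        return albedo + ambient; // only evaluated when the condition holds
    return ambient;
}

// Case 2: a conditional select - both values are computed, then one is chosen,
// typically with a single select instruction and no branch at all.
float shade_select(bool lit, float albedo, float ambient)
{
    float litResult   = albedo + ambient;
    float unlitResult = ambient;
    return lit ? litResult : unlitResult;
}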
I included OpenCL as I wanted to know if there's any difference at all between clEnqueueTask and clEnqueueNDRangeKernel with dim=1:x=1. The document says they're equivalent but I was skeptical.
Why would they be different? They are doing the same thing conceptually ...
I believe Vulkan removed the special function to enqueue a single-threaded task for good reasons, and if I'm wrong, please correct me.
Vulkan compute dispatch is in general a whole load simpler than OpenCL (and also perfectly adequate for most use cases), so many of the host-side functions from OpenCL have no equivalent in Vulkan. The GPU side behavior is pretty much the same. It's also worth noting that most of the holes where Vulkan shaders are missing features compared to OpenCL are being patched up with extensions - e.g. VK_KHR_shader_float16_int8 and VK_KHR_variable_pointers.
Q : Or will the compute units skip those false branches?
The ecosystem of CPU / GPU code-execution is rather complex.
The hardware layer is where the code paths (translated into "machine" code) actually execute. On this layer, the SIMD compute units cannot and will not skip anything they are ordered to SIMD-process by the hardware scheduler (the next layer).
The hardware-specific scheduler layer (GPUs typically have two modes: warp-mode scheduling, for coherent, non-diverging code paths efficiently scheduled in SIMD blocks, and greedy-mode scheduling) is where the SIMD compute units are loaded with SIMD blocks of work. Any divergence detected on the layer below breaks that execution, flags the SIMD hardware scheduler about blocks deferred for later execution, and the SIMD-optimised scheduling is well known to grow less and less efficient with each such run-time divergence.
The { OpenCL | Vulkan } API layer of mediated, device-specific programming decides a lot about the ease and comfort of programming a wide range of target devices, all without exposing each device's internal constraints, the compiler's preferred reformulation of the problem into "machine" code, or device-specific tricks and scheduling. In a somewhat oversimplified picture, for years programmers have simply stayed "in front" of the mediated, asynchronous host-to-device scheduling queues for work units (kernels) and waited until the device-to-host results were delivered back, doing the prior H2D and posterior D2H memory transfers where allowed and needed.
The host-side "scheduling" directives for device kernel code are rather imperative and help the mediated, device-specific programming reflect user-side preferences, yet they leave the user blind to all the internal decisions (assembly-level reviews are really only for hard-core, device-specific GPU engineering aces, and are hard to act on even if you are willing to).
All that said, "adaptive" run-time decisions, based on data values, to move a particular work unit back to the host CPU rather than finishing it all on the GPU do not, to the best of my knowledge, take place at the bottom of this complex computing-ecosystem hierarchy (AFAIK it would be prohibitively expensive to try to do so).

Does Xcode Memory graph offer any smart visual indicators for strong references that aren't memory cycles?

As a follow up to my previous How can I create a reference cycle using dispatchQueues?:
For the strong references (that create leaks, but aren't reference cycles) e.g. Timer, DispatchSourceTimer, DispatchWorkItem, the memory graph doesn't create a purple icon, I suspect it's simply because it doesn't find two objects pointing back to each other strongly.
I know I can go back and forth and observe that a specific class is just not leaving the memory, but wondering if Xcode is providing anything more.
Is there any other indicator?
I know Xcode visually shows the number of instances of a type in memory. But is there a way to filter objects that have more than 3 instances in memory?
You ask:
For the strong references (that create leaks, but aren't reference cycles) e.g. Timer, DispatchSourceTimer, DispatchWorkItem, the memory graph doesn't create a purple icon, I suspect it's simply because it doesn't find two objects pointing back to each other strongly.
Yes. Or, more accurately, the strong reference cycle warning is produced when there are two (or more) objects whose only strong references are between each other.
But in the case of repeating timers, notification center observers, GCD sources, etc., these are not, strictly speaking, strong reference cycles. The issue is that the owner (the object that is keeping a strong reference to our app’s object) is just some persistent object that won’t get released while our app is running. Sure, our object might still be “abandoned memory” from our perspective, but it’s not a cycle.
By way of example, consider a repeating timer that keeps a strong reference to our object. The main run loop keeps a strong reference to that timer and won't release it until the timer is invalidated. There's no strong reference cycle, in the narrow sense of the term, as our app doesn't have a strong reference back to the run loop or the timer. But nonetheless, the repeating timer will keep a strong reference to our object (unless we used the [weak self] pattern or what have you).
It would be lovely if the “Debug Memory Graph” knew about these well-known persistent objects (like main runloop, default notification center, libDispatch, etc.), and perhaps drew our attention to those cases where one of these persistent objects were the last remaining strong reference to one of our objects. But it doesn’t, at least at this point.
This is why we employ the technique of "return to the point where most of my custom objects should have been deallocated" and then "use 'debug memory graph' to identify what wasn't released and see what strong references are persisting". Sure, it would be nice if Xcode could draw our attention to these automatically, but it doesn't.
But if our app has some quiescent state, where we know the limited types of objects that should still be around, this “debug memory graph” feature is still extremely useful, even in the absence of some indicator like the strong reference cycle warning.
I know I can go back and forth and observe that a specific class is just not leaving the memory, but wondering if Xcode is providing anything more.
Is there any other indicator?
No, not that I know of.
I know Xcode visually shows the number of instances of a type in memory. But is there a way to filter objects that have more than 3 instances in memory?
Again, no, not that I know of.

A C++11 based signals/slots with ordering

I may be a bit in over my head here, but if you never try new things you'll never learn, I suppose. I'm working with some multi-touch stuff and have built myself a small but functional GUI library. Up until now I've been using Boost's Signals2 library to have my detected gestures distributed to all active GUI elements (whether on screen or not). I'm a big fan of avoiding premature optimization, so things have been hunky-dory until now.
I've used vs2013's profiler to find out that when the user goes touch crazy (the device supports up to 41 simultaneous touches), then my system grinds to a halt, and Signals2 is the culprit. Keep in mind that each touch can trigger a number of Gestures which are all communicated to every GUI element that have registered to interact with this type of Gesture.
Now there are a number of ways to deal with this bottleneck:
Have GUI elements work more cleverly and disconnect them when they're off-screen.
Optimize the signals/slots system so the calls are resolved faster.
Prioritization of events.
I'm not a big fan of ever having to deal with 3 - if avoidable - as it'll directly impact the responsiveness of my application. Nr. 1 should probably be implemented, but I'm more interested in getting to Nr. 2 first.
I don't really need any big fancy stuff. The signals/slots system I need really only has to do the core emission work along with these 2 features:
Slots must be able to return a value ending the emission - effectively cancelling any subsequent handling of a signal.
Slots must be order-able - and fairly efficient at this. GUI elements that are interacted with will pop-up above others, so this type of change in order is bound to happen quite often.
I stumbled across this really interesting implementation
https://testbit.eu/2013/cpp11-signal-system-performance/
which seems to have everything except the 'ordering' I need. I've only looked over the code once, and it does seem a little intimidating. If I were to try and add ordering capabilities, I'd prefer not to change too much stuff around if I can avoid it. Does anyone have experience with this stuff? I'm fairly certain that a linked list is not optimal for constant removal and insertion, but then again, it probably needs to be optimized the most for constant emission calls.
Any thoughts are most welcome!
--- Update ---
I've spent a little time adding the features I needed to the code put into the public domain above and pasted the complete (and somewhat hacky) version here:
SimpleSignal with added features
In short, I've added:
Blockable connections (implemented via a simple if statement)
Depth/Order parameter (Linear-search insertion)
With those additions, keep in mind it has these current issues:
Blocked connections are simply skipped, not actively removed from the data-structure, so having a lot of blocked connections will affect run-time performance.
Depth is only maintained during insertion. So if you'd like to change the depth you'll have to disconnect and reconnect your slot.
Since the SignalLink interface has become exposed (as a result of my block implementation), it's less safe from a user perspective. It's way easier for you to shoot yourself in the foot with this version by messing with existing references and pointers.
This implementation hasn't been as thoroughly tested as the original I'm sure. I did try out the new functionality a bit. User beware.
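For anyone who just wants the gist of those two features, here's a stripped-down sketch (my own illustration, not the SimpleSignal code itself): slots carry a depth, connections are kept sorted by linear-search insertion, and emission stops as soon as a slot returns true.
#include <algorithm>
#include <functional>
#include <vector>

// Minimal ordered signal: slots return true to consume the event and cancel
// any subsequent handling.
template <typename... Args>
class OrderedSignal
{
public:
    using Slot = std::function<bool(Args...)>;

    // Linear-search insertion keeps slots sorted by depth (lower depth runs first).
    void connect(int depth, Slot slot)
    {
        auto pos = std::find_if(m_slots.begin(), m_slots.end(),
                                [depth](const Entry& e) { return e.depth > depth; });
        m_slots.insert(pos, Entry{ depth, std::move(slot) });
    }

    // Returns true if some slot consumed the event.
    bool emit(Args... args) const
    {
        for (const Entry& e : m_slots)
            if (e.slot(args...))
                return true;
        return false;
    }

private:
    struct Entry { int depth; Slot slot; };
    std::vector<Entry> m_slots;
};
Blocking a connection would then just be one more flag checked in the emission loop.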

Display list vs. VAO performance

I recently implemented functionality in my rendering engine to make it able to compile models into either display lists or VAOs based on a runtime setting, so that I can compare the two to each other.
I'd generally prefer to use VAOs, since I can make multiple VAOs sharing actual vertex data buffers (and also since they aren't deprecated), but I find them to actually perform worse than display lists on my nVidia (GTX 560) hardware. (I want to keep supporting display lists anyway to support older hardware/drivers, however, so there's no real loss in keeping the code for handling them.)
The difference is not huge, but it is certainly measurable. As an example, at a point in the engine state where I can consistently measure my drawing loop using VAOs to take, on a rather consistent average, about 10.0 ms to complete a cycle, I can switch to display lists and observe that cycle time decrease to about 9.1 ms on a similarly consistent average. Consistent, here, means that a cycle normally deviates less than ±0.2 ms, far less than the difference.
The only thing that changes between these settings is the drawing code of a normal mesh. It changes from the VAO code, whose OpenGL calls simply look like this...
glBindVertexArray(id);
glDrawElements(GL_TRIANGLES, num, GL_UNSIGNED_SHORT, NULL); // Using an index array in the VAO
... to the display-list code which looks as follows:
glCallList(id);
Both code paths apply other states as well for various models, of course, but that happens in the exact same manner, so those should be the only differences. I've made explicitly sure to not unbind the VAO unnecessarily between draw calls, as that, too, turned out to perform measurably worse.
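For reference, the two paths are set up roughly like this (a simplified sketch, not my exact engine code; it assumes a GL 3.x context with a loader such as GLEW already initialized):
// VAO path: record the buffer bindings and attribute layout once, then replay
// with glBindVertexArray + glDrawElements as shown above.
GLuint BuildVAO(GLuint vbo, GLuint ibo)
{
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo); // the index binding is stored in the VAO
    glBindVertexArray(0);
    return vao;
}

// Display-list path: record the draw commands once, then replay with glCallList.
GLuint BuildDisplayList(const float* verts, int vertexCount)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < vertexCount; ++i)
        glVertex3fv(&verts[i * 3]);
    glEnd();
    glEndList();
    return list;
}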
Is this behavior to be expected? I had expected VAOs to perform better or at least equally to display lists, since they are more modern and not deprecated. On the other hand, I've been reading on the webs that nVidia's implementation has particularly well optimized display lists and all, so I'm thinking perhaps their VAO implementation might still be lagging behind. Has anyone else got findings that match (or contradict) mine?
Otherwise, could I be doing something wrong? Are there any known circumstances that make VAOs perform worse than they should, on nVidia hardware or in general?
For reference, I've tried the same comparison on Intel HD Graphics (Ironlake) as well, and there it turned out that using VAOs performed just as well as simply rendering directly from memory, while display lists were much worse than either. I wish I had AMD hardware to try on, but I don't.

Game loop performance and component approach

I have an idea for organising a game loop. I have some doubts about performance. Maybe there are better ways of doing things.
Consider you have an array of game components. They all are called to do some stuff at every game loop iteration. For example:
GameData data; // shared
app.registerComponent("AI", ComponentAI(data) );
app.registerComponent("Logic", ComponentGameLogic(data) );
app.registerComponent("2d", Component2d(data) );
app.registerComponent("Menu", ComponentMenu(data) )->setActive(false);
//...
while (ok)
{
//...
app.runAllComponents();
//...
}
Benefits:
good component-based application, no dependencies, good modularity
we can activate/deactivate, register/unregister components dynamically
some components can be transparently removed or replaced and the system will still work as if nothing had happened (change 2d to 3d) (team work: every programmer creates his/her own components and does not need the other components to compile the code)
Doubts:
inner loop in the game loop with virtual calls to Component::run()
I would like Component::run() to return a bool value and check that value: if it returns false, the component must be deactivated. So the inner loop becomes more expensive.
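To make the doubt concrete, here is a minimal sketch of what I mean (class and member names are just illustrative):
#include <deque>
#include <memory>
#include <string>

class Component
{
public:
    virtual ~Component() = default;
    virtual bool run() = 0; // returning false means "deactivate me"
};

class App
{
public:
    struct Entry
    {
        std::string name;
        std::unique_ptr<Component> component;
        bool active;
        void setActive(bool a) { active = a; }
    };

    Entry* registerComponent(std::string name, std::unique_ptr<Component> component)
    {
        m_components.push_back(Entry{ std::move(name), std::move(component), true });
        return &m_components.back(); // deque keeps element addresses stable
    }

    void runAllComponents()
    {
        for (Entry& e : m_components)
            if (e.active)
                e.active = e.component->run(); // one virtual call per active component
    }

private:
    std::deque<Entry> m_components;
};

// Usage (assumed): app.registerComponent("Menu",
//     std::unique_ptr<Component>(new ComponentMenu(data)))->setActive(false);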
Well, how good is this solution? Have you used it in real projects?
Some C++ programmers have way too many fears about the overhead of virtual functions. The overhead of the virtual call is usually negligible compared to whatever the function does. A boolean check is not very expensive either.
Do whatever results in the easiest-to-maintain code. Optimize later only if you need to do so. If you do find you need to optimize, eliminating virtual calls will probably not be the optimization you need.
In most "real" games, there are pretty strict requirements for interdependencies between components, and ordering does matter.
This may or may not affect you, but it's often important to have physics take effect before (or after) user interaction processing, depending on your scenario, etc. In this situation, you may need some extra processing involved for ordering correctly.
Also, since you're most likely going to have some form of scene graph or spatial partitioning, you'll want to make sure your "components" can take advantage of that, as well. This probably means that, given your current description, you'd be walking your tree too many times. Again, though, this could be worked around via design decisions. That being said, some components may only be interested in certain portions of the spatial partition, and again, you'd want to design appropriately.
I used a similar approach in a modular synthesized audio file generator.
I seem to recall noticing that after programming 100 different modules, there was an impact upon performance when coding new modules in.
On the whole, though, I felt it was a good approach.
Maybe I'm oldschool, but I really don't see the value in generic components because I don't see them being swapped out at runtime.
struct GameObject
{
Ai* ai;
Transform* transform;
Renderable* renderable;
Collision* collision;
Health* health;
};
This works for everything from the player to enemies to skyboxes and triggers; just leave the "components" that you don't need in your given object NULL. You want to put all of the AIs into a list? Then just do that at construction time. With polymorphism you can bolt all sorts of different behaviors in there (e.g. the player's "AI" is translating the controller input), and beyond this there's no need for a generic base class for everything. What would it do, anyway?
Your "update everything" would have to explicitly call out each of the lists, but that doesn't change the amount of typing you have to do, it just moves it. Instead of obfuscatorily setting up the set of sets that need global operations, you're explicitly enumerating the sets that need the operations at the time the operations are done.
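A sketch of that explicit enumeration, building on the struct above (the per-type lists and the member functions like think()/resolve()/integrate()/draw() are assumptions for illustration):
#include <vector>

struct World
{
    std::vector<Ai*>         ais;
    std::vector<Collision*>  collisions;
    std::vector<Transform*>  transforms;
    std::vector<Renderable*> renderables;

    void update(float dt)
    {
        // The ordering is explicit and can be read straight off the code.
        for (Ai* a : ais)                  a->think(dt);
        for (Collision* c : collisions)    c->resolve(dt);
        for (Transform* t : transforms)    t->integrate(dt);
        for (Renderable* r : renderables)  r->draw();
    }
};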
IMHO, it's not that virtual calls are slow. It's that a game entity's "components" are not homogenous. They all do vastly different things, so it makes sense to treat them differently. Indeed, there is no overlap between them, so again I ask, what's the point of a base class if you can't use a pointer to that base class in any meaningful way without casting it to something else?
