Rendering multiple models in DirectX11?

How can I render different geometries before presenting? I have read a lot about the rendering pipeline but I haven't seen any mention of the possibility to configure the pipeline differently for different objects.
My guess is that either 1) the rendering pipeline remembers the depths of the output pixels from the last pass, so I can simply reconfigure the bindings and state so that the parts of the other models that are nearer stay visible, or 2) the pipeline is memoryless, so one giant buffer containing all the objects in the scene is needed.
Please give pointers to the keywords I need to look for so I can read more about this. I have read pretty much everything on MSDN but I am not sure I have understood the parts that relate to this.
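For what it's worth, guess 1 matches how this actually works: the depth (z) buffer keeps a per-pixel depth across all draw calls within a frame, and each model is typically drawn with its own draw call (with pipeline state rebound in between) before Present. The sketch below is a toy software version of the depth test in plain JavaScript, not real Direct3D 11 code; all names are made up for illustration:

```javascript
// Toy model of a depth (z) buffer: the GPU keeps per-pixel depth across
// draw calls within a frame, so separately drawn models occlude each
// other correctly regardless of draw order. (Plain JavaScript sketch,
// not real Direct3D 11 code; names are made up for illustration.)
const W = 4, H = 4;
const depth = new Float32Array(W * H).fill(Infinity); // cleared once per frame
const color = new Array(W * H).fill(null);

function drawFragment(x, y, z, c) {
  const i = y * W + x;
  if (z < depth[i]) {      // depth test: keep only the nearest fragment
    depth[i] = z;
    color[i] = c;
  }
}

// "Draw call" for model A: covers pixel (1,1) at depth 0.8
drawFragment(1, 1, 0.8, "A");
// A later "draw call" for model B hits the same pixel, but nearer (0.3)
drawFragment(1, 1, 0.3, "B");
// An even later, farther fragment of A is rejected by the depth test
drawFragment(1, 1, 0.9, "A");

console.log(color[1 * W + 1]); // → "B": the nearer model wins
```

Keywords to search for: "depth buffer" / "z-buffer", "depth-stencil view", and "draw call" (one or more per object, all before a single Present).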

Related

LabVIEW - How to accumulate data in array?

I made a program that simulates the intensity of light when many light bulbs are put together. I have intensity data for one bulb in .xls files. I want the program to work as follows.
Open the .xls file and get the data.
Put the data into different positions. I put one data set (one bulb) in each Excel sheet. This simulates placing the bulb in different places.
Sum the data in the same cell across the different sheets.
My LabVIEW front panel and block diagram are:
My problem is this program runs too slowly. How should I improve this? I have an idea to make a big array and accumulate data in that array. However, I do not know how to do it. The Insert Into Array and Replace Array Subset functions are not suitable for my purposes.
The most probable reason for the slow performance is that you perform a lot of operations on the Excel file. You should instead read the data into memory and operate on it in your VI. At the end, if you need to, you can update the Excel file with the final results.
It would be difficult to tell you exactly how to do it. As you said, you're a beginner, and I think the best way is to simply do some LabVIEW exercises and gain more experience in working with arrays :) I recommend taking a look at the examples (Help->Find Examples), reading some user guides from ni.com, or finding other "getting started" materials on the Internet.
Check these, you may find them useful:
https://zone.ni.com/reference/en-XX/help/371361R-01/lvhowto/lv_getting_started/
https://www.ni.com/getting-started/labview-basics/data-structures
https://www.ni.com/pl-pl/support/documentation/supplemental/08/labview-arrays-and-clusters-explained.html
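The read-once-then-accumulate idea from the first paragraph can be sketched in ordinary code (JavaScript rather than LabVIEW, with made-up in-memory arrays standing in for sheets read once from the .xls file):

```javascript
// Accumulate intensity data from many "sheets" into one array in memory,
// instead of repeatedly reading from and writing to the Excel file.
// (Illustrative sketch: the sheets here are made-up in-memory arrays
// standing in for data read once from the .xls file.)
const sheets = [
  [1, 2, 3, 4],   // bulb 1, one intensity value per cell
  [0, 1, 0, 1],   // bulb 2
  [2, 0, 2, 0],   // bulb 3
];

// One accumulator array, sized like a sheet; each sheet is added cell-wise.
const total = new Array(sheets[0].length).fill(0);
for (const sheet of sheets) {
  for (let i = 0; i < sheet.length; i++) {
    total[i] += sheet[i];
  }
}

console.log(total); // → [3, 3, 5, 5]
```

In LabVIEW terms this corresponds to initializing an array once, carrying it through a loop in a shift register while adding each sheet's array to it, and writing only the final total back to Excel.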

Reaction-diffusion parallel growing method

I've created many types of reaction-diffusion patterns using different parameters for death and feed rates, etc., working with them in Ready by GollyGang (a simple C++ program that grows the patterns based on parameters and code). However, they all end up as curly, combined, maze-like forms, dots, etc., like this:
What I want to achieve, though, is more like parallel, straight lines that occasionally combine, looking like veins or growing branches; see the image below:
I've searched for any formula for this but couldn't find any. What parameters should I play with?
For the Gray-Scott reaction-diffusion system, have a look near k=0.0625, F=0.045:
On a sphere it looks like this:
I don't know how they've done the nice spiral though. Perhaps painting into the image is enough to nudge it along. Or you might need to draw an initial pattern. Or perhaps you have to apply a constant bias to the direction of the lines.
Ready: https://github.com/GollyGang/ready
Gray-Scott parameter map: http://mrob.com/pub/comp/xmorphia/
Link to the Nervous System lampshade shown in the question: http://n-e-r-v-o-u-s.com/shop/product.php?code=66&search=lighting
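For reference, one update step of the Gray-Scott system at the suggested parameters looks like the sketch below (plain JavaScript; the diffusion rates Du = 0.16 and Dv = 0.08, the time step, and the 5-point Laplacian are common demo choices, not values taken from the answer):

```javascript
// One explicit Euler step of the Gray-Scott reaction-diffusion system
// on a small periodic grid, at the parameters suggested in the answer.
// (Sketch only: Du, Dv, dt, and the 5-point Laplacian are common demo
// choices, not prescribed by the answer.)
const F = 0.045, k = 0.0625, Du = 0.16, Dv = 0.08, dt = 1.0;
const N = 8;
let u = new Float64Array(N * N).fill(1.0); // chemical U everywhere
let v = new Float64Array(N * N).fill(0.0); // seed V somewhere to start patterns

function lap(a, x, y) {
  const at = (x, y) => a[((y + N) % N) * N + ((x + N) % N)]; // wrap edges
  return at(x - 1, y) + at(x + 1, y) + at(x, y - 1) + at(x, y + 1) - 4 * at(x, y);
}

function step() {
  const nu = new Float64Array(N * N), nv = new Float64Array(N * N);
  for (let y = 0; y < N; y++) {
    for (let x = 0; x < N; x++) {
      const i = y * N + x, uvv = u[i] * v[i] * v[i];
      nu[i] = u[i] + dt * (Du * lap(u, x, y) - uvv + F * (1 - u[i]));
      nv[i] = v[i] + dt * (Dv * lap(v, x, y) + uvv - (F + k) * v[i]);
    }
  }
  u = nu; v = nv;
}

step();
// With u = 1, v = 0 everywhere the state is a fixed point: nothing changes
// until V is seeded, e.g. by setting v to 1 in a small patch.
```

Seeding V along a line rather than in a blob is one way to bias the initial pattern, which connects to the answer's suggestion of drawing an initial pattern into the image.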

Is it okay to use a single MeshFaceMaterial across several objects in Three.JS?

I am parsing and loading a 3d object file (similar to ColladaLoader in operation). This contains several objects, and many of the individual objects have several materials across different faces. So, I use MeshFaceMaterial. No problems so far.
However, a handful of the objects reuse materials across them. Would it be appropriate to create a single MeshFaceMaterial and use it across all objects? There are about 120 objects per file. My concern is that going down this route may impair performance (e.g. excessive draw calls, or memory allocated for each material per object?), since the majority of the objects use their own unique materials. Or is this an undue concern, and is the renderer sufficiently mature for this not to be a problem? The online documentation only mentions shared geometry, not entire shared THREE.Mesh objects.
Using renderer.info as suggested, I was able to look at the draw calls, and I'm happy to report they were the same regardless of whether I used a single shared MeshFaceMaterial or individual ones per object. That's not to say there aren't other possible performance penalties, and it does seem logical for clearly separate objects to keep things separate, but for this use case, where there is some crossover, it was not a problem.
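The intuition behind that result can be shown with a toy model (plain JavaScript, not the actual three.js renderer): the renderer issues roughly one draw call per mesh-and-material group, so only how faces are grouped by material index matters, not whether two meshes point at the same Material instance or at clones.

```javascript
// Toy model of why sharing material objects across meshes doesn't change
// the draw-call count: (roughly) one call is issued per mesh-and-material
// group, so face grouping matters, object identity of materials doesn't.
// (Illustrative JavaScript, not the actual three.js WebGLRenderer.)
function countDrawCalls(meshes) {
  let calls = 0;
  for (const mesh of meshes) {
    calls += new Set(mesh.faceMaterialIndices).size; // one call per material group
  }
  return calls;
}

const red = { color: "red" }, blue = { color: "blue" };

// Shared: both meshes reference the same two material objects.
const shared = [
  { faceMaterialIndices: [0, 0, 1], materials: [red, blue] },
  { faceMaterialIndices: [1, 1, 0], materials: [red, blue] },
];

// Unshared: each mesh gets its own clones of the same materials.
const unshared = [
  { faceMaterialIndices: [0, 0, 1], materials: [{ ...red }, { ...blue }] },
  { faceMaterialIndices: [1, 1, 0], materials: [{ ...red }, { ...blue }] },
];

console.log(countDrawCalls(shared), countDrawCalls(unshared)); // → 4 4
```

Sharing still saves the per-clone memory and any per-material GPU program setup, which is the main reason to prefer it when objects genuinely use the same materials.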

performance of layered canvases vs manual drawImage()

I've written a small graphics engine for my game that has multiple canvases in a tree (these basically represent layers). Whenever something in a layer changes, the engine marks the affected layers as "soiled", and in the render code the lowest affected layer is copied to its parent via drawImage(), which is then copied to its parent, and so on up to the root layer (the onscreen canvas). This can result in multiple drawImage() calls per frame, but it also avoids re-rendering anything below the affected layer. In frames where nothing changes, no rendering or drawImage() calls take place, and in frames where only foreground objects move, rendering and drawImage() calls are minimal.
I'd like to compare this to using multiple onscreen canvases as layers, as described in this article:
http://www.ibm.com/developerworks/library/wa-canvashtml5layering/
In the onscreen canvas approach, we handle rendering on a per-layer basis and let the browser handle displaying the layers on screen properly. From the research I've done and everything I've read, this seems to be generally accepted as likely more efficient than handling it manually with drawImage(). So my question is, can the browser determine what needs to be re-rendered more efficiently than I can, despite my insider knowledge of exactly what has changed each frame?
I already know the answer to this question is "Do it both ways and benchmark." But in order to get accurate data I need real-world application, and that is months away. By then if I have an acceptable approach I will have bigger fish to fry. So I'm hoping someone has been down this road and can provide some insight into this.
The browser cannot determine anything when it comes to the canvas element and its rendering, as it is a passive element: everything in it is user-rendered by means of JavaScript. The only thing the browser does is pipe what's on the canvas to the display (and, more annoyingly, clear it from time to time when its bitmap needs to be re-allocated).
There is unfortunately no golden rule for what the best optimization is, as this varies from case to case. There are many techniques that could be mentioned, but they are merely tools; you still have to figure out the right tool, or the right combination of tools, for your specific case. Layering may help in one case and bring nothing to another.
Optimization in general is an in-depth analysis and breakdown of the patterns specific to the scenario, which are then isolated and optimized. The process is often experiment, benchmark, re-adjust; experiment, benchmark, re-adjust... Experience reduces this cycle to a minimum, but even with experience the specifics come in enough combinations that some fine-tuning from case to case is still required (given they are not identical).
Even if you find a good recipe for your current project, there is no guarantee it will work optimally for your next one. This is one reason no one can give an exact answer to this question.
However, when it comes to canvas, what you want is a minimum of clear operations and minimal areas to redraw (drawImage or shapes). The point of layers is to group elements together in a way that serves this goal.
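The layer-tree scheme from the question can be modeled in a few lines to show where its cost goes (plain JavaScript; real code would copy actual canvases with drawImage, and the names here are made up):

```javascript
// Toy model of the layer-tree scheme from the question: when a layer is
// "soiled", only it and its ancestors are recomposited, one drawImage-
// equivalent per child-into-parent copy; untouched frames cost nothing.
// (Plain JavaScript sketch; real code would copy canvases with drawImage.)
function makeLayer(name, parent = null) {
  return { name, parent, dirty: false };
}

const root = makeLayer("root");              // the onscreen canvas
const bg = makeLayer("background", root);
const fg = makeLayer("foreground", root);
const sprite = makeLayer("sprite", fg);

function soil(layer) {                       // mark layer and its ancestors dirty
  for (let l = layer; l; l = l.parent) l.dirty = true;
}

function render(layers) {                    // returns # of drawImage calls
  let drawImageCalls = 0;
  for (const l of layers) {
    if (l.dirty && l.parent) drawImageCalls++; // copy layer into its parent
    l.dirty = false;
  }
  return drawImageCalls;
}

const all = [sprite, fg, bg, root];          // children before parents
console.log(render(all));                    // → 0: nothing changed, no work
soil(sprite);
console.log(render(all));                    // → 2: sprite→fg and fg→root only
```

The onscreen-layers approach from the article trades these copies for the browser's compositor doing the same job, which is why benchmarking on the real workload is the only way to settle which wins.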

D3: What are the most expensive operations?

I was rewriting my code just now, and it feels orders of magnitude slower. Previously it was pretty much instant; now my animations take 4 seconds to react to mouse hovers.
I tried removing transitions and not having opacity changes but it's still really slow.
Though it is more readable. - -;
The only thing I did was split large functions into smaller more logical ones and reordered the grouping and used new selections. What could cause such a huge difference in speed? My dataset isn't large either...16kb.
edit: I also split up my monolithic huge chain.
edit2: I fudged around with my code a bit, and it seems that switching to nodeGroup.append("path") made it much slower than svg.append("path"). The inelegant thing is that with svg I have to transform the drawn paths to the middle, while the entire group is already transformed. Can anyone shed some insight on group.append vs svg.append?
edit3: Additionally, I was using opacity:0 to hide all my path lines before redrawing, which caused things to become slower and slower because those lines were never removed. I switched to remove().
Without the data it is hard to reproduce this or suggest a solution. You don't need to share private data, but it helps to generate some fake data with the same structure. It's also not clear where your performance hit comes from if we can't see how many DOM elements you are trying to create and interact with.
As for obvious things that stand out, you are not drawing your segments in a data-driven way. Any time you see a for loop, it is a hint that you are not using d3's selections when you could.
You should bind listEdges to your paths and draw them from within the selection; it's OK to transform them to the center from there. Also, you shouldn't use d3.select when you can use nodeGroup.select; this way you don't need to traverse the entire page when searching for your circles.
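The scoping point can be made concrete with a toy model (plain JavaScript stand-in for the DOM, not d3 itself): a selection rooted at a group only walks that subtree, while a document-wide selection walks every node on the page, including everything opacity:0 kept around.

```javascript
// Toy model of why nodeGroup.select beats d3.select: a scoped search only
// walks the subtree it starts from, while a document-wide search walks
// every node on the page. (Plain JavaScript stand-in, not d3 itself.)
function makeNode(tag, children = []) {
  return { tag, children };
}

let visited = 0;
function select(root, tag) {                 // depth-first search, counting work
  visited++;
  if (root.tag === tag) return root;
  for (const child of root.children) {
    const hit = select(child, tag);
    if (hit) return hit;
  }
  return null;
}

const nodeGroup = makeNode("g", [makeNode("circle")]);
const page = makeNode("html", [
  makeNode("header", [makeNode("nav"), makeNode("ul")]),
  makeNode("main", [makeNode("svg", [nodeGroup])]),
]);

visited = 0;
select(page, "circle");                      // like d3.select("circle")
const wholePage = visited;

visited = 0;
select(nodeGroup, "circle");                 // like nodeGroup.select("circle")
const scoped = visited;

console.log(wholePage, scoped);              // scoped visits far fewer nodes
```

The same logic explains the remove() fix in edit3: nodes hidden with opacity:0 are still in the tree and still get walked and styled on every pass.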

Resources