I'm using Direct3D 11 and Visual Studio 2012.
I've read that VS2012 can automatically compile FBX files during the build phase, similar to HLSL files. I haven't been able to find any documentation on how to do this, though. What build action do I set for the FBX files?
Also, what function or functions should I look up on MSDN / Google that relate to loading the compiled FBX file? A tutorial or Stack Overflow link, etc., will work. I just have nothing to go on at the moment, so I don't even know what to Google for; my searches haven't turned up anything.
I'm trying to transition from rendering my manually defined cube to rendering a cube, or any model, loaded from an external model file.
You can add build customizations to read FBX files into the asset pipeline, but if you want to load them at runtime, you need to build the vertex and index buffers yourself. Here is the relevant MSDN page on loading and using 3-D models, which reads:
Direct3D 11 does not provide functions for creating resources from 3-D
models. Instead, you have to write code that reads the 3-D model file
and creates vertex and index buffers that represent the 3-D model and
any resources that the model requires—for example, textures or
shaders.
Related
I have a use case in which I need to convert .vox files to .glb and vice versa. Are there any packages available that I could integrate into my backend application?
I found libraries like three.js and Assimp. Although they support various formats, .vox isn't among them.
I do not want to use ITK for segmentation. Are there any filters in VTK to perform precise vessel segmentation on MIP (maximum intensity projection) DICOM images?
I tried installing ITK, but my code was written in Visual Studio 8, and the latest version of ITK is not supported in VS 8. So I imported my VS 8 project into Visual Studio 2015, but there were so many errors that I spent many days trying to fix them, with no success.
That's why I am asking whether there are any filters in VTK to segment blood vessels.
VTK is mostly used for visualization and less for data processing.
What you are looking for is called a "vesselness filter". There are a few implementations of such a filter on github:
https://github.com/search?q=vesselness+filter
The first one, https://github.com/ntnu-bioopt/libfrangi, seems to use only OpenCV, so I guess that could be a good choice for you if you want to limit your dependencies.
Edit:
There is a library called VMTK for vessel analysis. It's based on VTK and includes a vesselness filter. You can directly input VTK image data, so I guess that's the closest framework you can find! Some other features of the library might also be of interest to you.
Look here: http://www.vmtk.org/doc/html/classvtkvmtkVesselnessMeasureImageFilter.html
Goal: I'm trying to develop my first simple Oculus rift application using visual studio.
Background: computer engineer/programmer of various languages; rusty at C++, very rusty at Visual Studio, and inexperienced with all-out 3D programming.
DirectX Progress: I found this excellent tutorial (http://3dgep.com/introduction-to-directx-11/) and rebuilt it while walking through the code; this taught me a lot. My code never actually ran, though, likely due to an issue with linkers or precompiled headers, so I reverted to the original demo file.
Oculus Progress: I've learned a lot about using LibOVR and successfully compiled my first program, which gathers sensor data. I never ran it, though.
Visual Studio: I currently have one solution setup, with two projects (DirectXTemplate and LibOVR). I'm thinking I should merge the two projects and turn the DirectXTemplate into a library so I can access all the functions defined in these files (though I will likely need to modify them as development progresses). How do I go about doing this? Is this the right thing to do?
I also have some general questions:
1. Projects/Solutions: what is the difference, and how should I lay things out to achieve my goal?
2. My WinAPI main function is in my own cpp file, and it calls functions from DirectXTemplate. Most of these work, except the LoadContent function fails about halfway through, I think due to the shaders. I'm really confused about the shaders in the tutorial, particularly compile-time vs. run-time shaders, and I suspect it's an issue with the linker, precompiled headers, include directories, or something like that. There are so many views of the properties tab in VS that it causes more confusion and errors. So my real question here is: how do I control this better? The properties window changes depending on which project/solution/file I select, and it also changes based on the mode selected within the window itself. Getting the properties right for all these objects has proven to be a highly error-prone process requiring iterative trial and error; it really sucks and wastes tons of time. How can it be avoided?
3. How do I turn the DirectX template into a library like LibOVR, and should I? Keep in mind the DirectX template/library will be updated massively as the project progresses, but LibOVR will not be. When all is done, I'll be using the LibOVR functions to deal with the Oculus (this is static but is updated by the vendor) and the DirectXTemplate/library functions to deal with DirectX (this will be a custom build, using the template as the starting point).
I'm building off a project in three.js, and one of the ideas I'm fiddling with would allow users to write their own shader code. Code from the user would be dynamically loaded onto the GPU, much like in this example. In such a setup, the user would benefit greatly from having some way to display the compile-time errors generated by their code. I've looked into the code from the above example, but that instance works directly with WebGL.
Are there any alternatives I might consider that leverage the three.js library to detect compile time shader errors?
I'm just guessing, but it looks like what you'd want to do is use WebGL to compile and link the shaders. If there are errors, display them. If compiling and linking succeeded, then make a three.js ShaderMaterial and pass in the shader source that just worked.
If you view the source of glsl.heroku.com/e, you can see that the createShader code checks for errors and attempts to highlight the specific lines in the source.
I'm fairly new to OpenCV and to Visual Studio as well. My question is not so much technical as conceptual:
I'm working on a bigger project but do not have access to all of its subcomponents. I wrote a few classes and functions that other members want to use. However, I'm using some OpenCV-specific things (because I'm lazy and don't want to implement everything all by myself), but the other members don't use OpenCV, and they want to keep the project size relatively small.
My question is: How can I provide my code as a library or something similar that includes all my opencv dependencies? Can I create a dll of my code and just ship the opencv dlls with it? Is there a way to bundle everything into one file with only one header?
How would you solve this problem?
Summarizing: I want my functions in a library, shipped as small as possible (with the OpenCV dependencies included).
Put all your code in a DLL, and then ship OpenCV DLLs along with yours.
Or: put all your code in a DLL, and perform static linking with OpenCV.