DirectX & Oculus setup inside Visual Studio, main, precompiled headers, linker and files - visual-studio-2013

Goal: I'm trying to develop my first simple Oculus Rift application using Visual Studio.
Background: Computer engineer/programmer of arbitrary languages; rusty at C++, very rusty at Visual Studio, inexperienced with balls-to-the-wall 3D programming.
DirectX Progress: I found this excellent tutorial (http://3dgep.com/introduction-to-directx-11/) and rebuilt it by walking through the code, which taught me a lot. My version never actually ran, though, likely due to an issue with linkers or precompiled headers, so I reverted to the original demo file.
Oculus Progress: I've learned a lot about using LibOVR and successfully compiled my first program, which gathers sensor data, though I never ran it.
Visual Studio: I currently have one solution set up with two projects (DirectXTemplate and LibOVR). I'm thinking I should merge the two projects and turn DirectXTemplate into a library so I can access all the functions defined in those files (though I will likely need to modify them as development progresses). How do I go about doing this? Is it the right thing to do?
I also have some general questions:
- Projects/Solutions: what is the difference, and how should I lay things out to achieve my goal?
- My WinAPI main function is in my own cpp file, and it calls functions from DirectXTemplate. Most of these work, except that the LoadContent function fails about halfway through, I think because of the shaders. I'm really confused about the shaders in the tutorial, particularly telling apart shaders compiled at build time vs. at run time (see the run-time compile sketch after this list), and I suspect it's an issue with the linker, precompiled headers, include directories, or something like that. There are so many views of the Properties dialog in VS that it causes more confusion and errors. So my real question here is: how do I control this better? The Properties window changes depending on which project/solution/file I select, and it also changes based on the configuration selected in the window itself. Getting the properties right for all these objects has proven to be a highly error-prone, iterative trial-and-error process that wastes tons of time. How can it be avoided?
- How do I turn the DirectX template into a library like LibOVR, and should I? Keep in mind the DirectX template/library will be updated massively as the project progresses, but LibOVR will not be. When all is done, I'll be using the LibOVR functions to deal with the Oculus (static, but updated by the vendor) and the DirectXTemplate/Library functions to deal with DirectX (custom built, using the template as the starting point).
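For context, the run-time path I'm referring to is roughly this (a sketch only; the file name, entry point, and shader profile are placeholders, not necessarily what the tutorial uses):

#include <windows.h>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// Compile an HLSL file at run time; on failure, the error blob explains why.
ID3DBlob* CompileShaderFromFile()
{
    ID3DBlob* shaderBlob = nullptr;
    ID3DBlob* errorBlob = nullptr;
    HRESULT hr = D3DCompileFromFile(
        L"SimpleVertexShader.hlsl",        // placeholder file name
        nullptr,                           // no preprocessor defines
        D3D_COMPILE_STANDARD_FILE_INCLUDE, // allow #include in the HLSL
        "main",                            // placeholder entry point
        "vs_5_0",                          // placeholder target profile
        D3DCOMPILE_ENABLE_STRICTNESS, 0,
        &shaderBlob, &errorBlob);
    if (FAILED(hr))
    {
        if (errorBlob)
        {
            // Typical causes: wrong working directory, wrong entry point,
            // or an HLSL syntax error.
            OutputDebugStringA((const char*)errorBlob->GetBufferPointer());
            errorBlob->Release();
        }
        return nullptr;  // LoadContent should bail out here
    }
    if (errorBlob) errorBlob->Release();
    return shaderBlob;
}

The alternative is letting Visual Studio's HLSL compiler build .cso files at build time (configured per .hlsl file in the project properties) and loading those bytes directly, which may be where the per-file property confusion comes from.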

C++ <random> not working in debug "standalone" app for SWIG-Python library using VS configurations

I don't have formal VS training, and I usually use it to write simple tools for my research (I'm a faculty member).
I'm currently working on a C++ library for Python using SWIG, so I followed the steps suggested in How to create a DLL with SWIG from Visual Studio 2010?
Step no. 25 says "You can't build the Debug version unless you build a debug version of Python itself", but I thought one should be able to build a debug version of the C++ part by writing a main that uses the library from C++ itself, without involving Python at all. (Please let me know if I'm wrong.)
A while ago I tried creating two projects in one solution (one for the library, one for a testing app), but I wasn't quite convinced by the result, so I thought it was time to try configurations. I modified the Debug config of my SWIG project following the suggestions in Redefining C/C++ entry point in Microsoft Visual Studio 2015 and its comments (changed the configuration type, extension, and entry point; added the additional dependencies vcruntimed.lib and ucrtd.lib; and excluded the .i and _wrap.cxx files from the build).
The project compiles and runs, but the functions using the standard C++ <random> library return non-random numbers. Update/clarification: in the following code,
#include <random>
#include <ctime>

// Definitions of the static members declared in the rand class/namespace (header not shown).
std::normal_distribution<double> rand::distn(0, 1);
std::uniform_real_distribution<double> rand::distu(0, 1);
std::mt19937_64 rand::generator;
void rand::init() {
    generator.seed((unsigned long)time(NULL));
}
double rand::u01()
{
    return distu(generator);
}
the function u01() always returns 0.0, while calling it from Python works as expected.
I checked the code and the generator is being seeded correctly. Also, the library still works fine from Python, so I tend to think this is a configuration issue rather than a coding one.
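The kind of C++-only driver I have in mind is roughly this (a sketch; the header name rand.h is just a placeholder for wherever the declarations actually live):

#include <cstdio>
#include "rand.h"   // placeholder: header declaring the rand class above

int main()
{
    rand::init();   // seeds the mt19937_64 engine from time()
    for (int i = 0; i < 5; ++i)
        std::printf("%f\n", rand::u01());   // should print varying values in [0, 1)
    return 0;
}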
I know this would make a better question if I posted a minimal working example, but before investing time (which I don't think I have) I was wondering whether there is something obvious I'm missing that a more knowledgeable VS user could easily spot. Please don't get me wrong: if I'm mistaken and the answer is not so apparent, I'll really try to make the time.
Thanks in advance.

Can I use FSI to debug my code?

Is there a way to run my .fs file with either a breakpoint or a command like System.Diagnostics.Debugger.Launch() in it so that I can use FSI to examine values, apply functions to them, plot them, etc.?
I know this question has been asked before but I tried the answers and could not make them work. Clear instructions or a link to a write-up explaining the process would be of great help not only to myself, but also, I believe, to all beginners.
Unfortunately, you cannot hit a breakpoint and jump into FSI. The context of a running .NET program is quite different to that of an interactive FSI session and they are not compatible enough to just switch between one or the other. I can understand an expectation of this kind of debugging when coming from other dynamic/interpreted languages such as JavaScript, Python etc. It is a really powerful feature.
As you have already noted, you can use the VS immediate window but you need to use its C#-like syntax and respect its limitations. Also, since it's not F#, you need to understand the F# to .NET conversion in order to make full use of it.
If you use Paket for dependency management in your project you have another option: Add generate_load_scripts: true to your paket.dependencies. This will give you a file like .paket\load\net452\main.group.fsx, which you can load into FSI to get all of the dependencies for your project. However, you are still responsible for loading in your own source files and building up some state similar to what is found at your desired breakpoint.
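For example, the top of paket.dependencies might look something like this (a sketch; the source URL and the package listed are just illustrative):

generate_load_scripts: true
source https://api.nuget.org/v3/index.json

nuget FSharp.Core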
To hit a breakpoint in Visual Studio or Visual Studio Code, just click to the left of the line number where you want the breakpoint. This is very much a supported feature in .fs files.

Recommended Cross-Platform (Windows and Android) Project Set Up using OpenTK

I am starting to develop scientific software that I hope will run on multiple platforms. My plan is to use OpenTK for rendering the scientific models and plots. At the moment I have a prototype that runs on Windows using the OpenTK 1.1 libraries from http://www.opentk.com/ (a simpler version with just OpenTK and a more complicated one with OpenTK + Windows Forms). I am trying to port that prototype to Android.
It seems that the syntax used by the Xamarin.Android OpenTK library is nearly identical to the one I am currently using for Windows (the only differences being OpenGL -> OpenGL ES and GameWindow -> AndroidGameView), so the porting shouldn't be an issue. However, I was hoping I could avoid a copy-paste approach and get a more permanent solution with shared OpenTK code between the Windows and Android versions.
I have read through the Xamarin documentation about the shared vs. PCL methods for cross-platform development. However, I still struggle to figure out how to set up a Visual Studio solution with an Android and a Windows project plus shared code that includes OpenTK. Is that even possible, and can someone give me an example of how to do it? I did explore an example I found for rendering a rotating cube using OpenTK in a shared Android/iOS project (http://developer.xamarin.com/content/TexturedCubeES30/), but in my case I need to use a different OpenTK library for the Windows and the Android project.
I also found the discussion Do the Android and iOS versions of OpenTK have the same API? It is very similar to what I would like to do, but in my case I am trying to set up a project for Windows and Android (for now).
Can I use only one OpenTK library (which one?) that is called from both the Android and the Windows project, and what would be the right way to set up both projects so they share the same OpenTK code? This is the first time I am dealing with writing cross-platform code, so I am a bit lost.
Edit: I was able to get a prototype running using a Shared Xamarin project and compiler flags as proposed below. The code was indeed not very pretty in places, but I got over 70% code reuse between the two platforms, so it was worth the effort. This is how I used the compiler flags, in case someone is looking for the same thing (credit to SKall from the Xamarin forums):
#if __ANDROID__
using OpenTK.Graphics.ES11;
#else
using OpenTK.Graphics.OpenGL;
#endif
I used the #if syntax similarly wherever there were small differences between the syntax of the routines.
It does not seem like OpenTK has its logic inside a PCL in the first place, so your plan of putting it there will be hard to achieve.
However, if you split out your code so that most of it is contained in classes that are not highly dependent on the underlying platform, you can create a Class Library project for each platform and link your files between the platform-specific projects. Inside those classes you use #if definitions to choose between AndroidGameView and GameWindow, and the same goes for other platform-specific types. It makes the code ugly, but this is the alternative to a PCL.
You could try to see how much of the OpenTK code compiles inside a PCL and inject the platform-specific pieces at runtime, but that will require considerably more work from you. However, it will make the code a lot cleaner.
To ease the file linking, you could create one of those Shared Projects and put all of the shared logic in there.
Some more info about code sharing here: http://developer.xamarin.com/guides/cross-platform/application_fundamentals/building_cross_platform_applications/sharing_code_options/
Dependency injection: https://en.wikipedia.org/wiki/Dependency_injection

Quickly testing a function that is a part of a big DLL project

I use VS2010 for C++ development, and I often end up doing work in some DLL project; after everything compiles nicely I would like to run dummy data through some classes, but of course the fact that it is a DLL and not an exe with a main makes that a no-go. So is there a simple way to do what I want, or am I cursed for eternity to copy/paste parts of a big project into a small testing one?
Of course changing the type of the project also works, but I would like something almost like an interactive-shell way of testing functions.
I know this isn't a library or anything, but if you want to run the DLL on Windows simply, without framing it into anything or writing a script, you can use rundll32.exe, which ships with Windows. It allows you to run any of the exported functions in the DLL. The syntax is similar to:
rundll32.exe PathAndNameofDll,exportedFunctionName [ArgsToTheExportedFunction]
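For instance, a commonly cited invocation that works out of the box (LockWorkStation is a parameterless export of user32.dll) is:
rundll32.exe user32.dll,LockWorkStation
Keep in mind that rundll32 formally expects exports with a specific CALLBACK signature (HWND, HINSTANCE, LPSTR, int), so calling arbitrary C++ functions this way, especially name-mangled ones, can misbehave.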
http://best-windows.vlaurie.com/rundll32.html is a good, simple, still-relevant tutorial on how to use this binary. It has some cool tricks that may surprise you.
If you are wondering about a 64-bit version, it has the same name (seriously, Microsoft?); check it out here:
rundll32.exe equivalent for 64-bit DLLs
Furthermore, if you want to go low level, you could in theory use OllyDbg, which comes with a DLL loader for running DLLs you want to debug (at the assembly level); you can do the same kind of thing there (call exported functions and pass arguments), but that debugger is geared more toward reverse engineering than code debugging.
I think you have basically two options.
The first is to use some sort of unit test on the function. For C++ you can find a variety of frameworks; for one, take a look at CppUnit.
The second option is to open the DLL, get the function via the Win32 API and call it that way (this would still qualify as unit testing on some level). You could generalize this approach by creating an executable that does the above, parametrized with the required information (e.g. DLL path, function name), to achieve the "interactive shell" you mentioned -- if you decide to take this path, you can check out this CodeProject article on loading DLLs from C++.
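A minimal sketch of that second option (the DLL name and the exported function Compute are placeholders, and the typedef must match the real export's signature):

#include <windows.h>
#include <cstdio>

typedef int (__cdecl *ComputeFn)(int);   // must match the real export's signature

int main()
{
    HMODULE dll = LoadLibraryA("MyBigProject.dll");
    if (!dll) { std::printf("LoadLibrary failed: %lu\n", GetLastError()); return 1; }

    ComputeFn compute = (ComputeFn)GetProcAddress(dll, "Compute");
    if (!compute) { std::printf("Export not found\n"); FreeLibrary(dll); return 1; }

    std::printf("Compute(42) = %d\n", compute(42));   // feed it your dummy data here
    FreeLibrary(dll);
    return 0;
}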
Besides using unit tests as provided by CppUnit, you can still write your own small testing framework. That way you can set up your DLL projects as needed, load them, link them, whatever you want, and probe them with some simple data of your choosing.
This is valuable if you have many DLLs that depend on each other to do a certain job (legacy DLL projects in C++ tend to be hard to test, in my experience).
Once you have such a frame application, you can also look at what CppUnit offers and combine it with your test frame.
That way you will end up with a good set of automated tests, which are still valuable unit tests. It is somewhat hard to start writing unit tests once a project already has a certain size; having your own framework lets you write a test whenever you make a change to a DLL. Just insert it into your framework, test what you expect the DLL to do, and grow the frame as you go.
The basic idea is to separate the test, the test runner, the test data, and the asserts to be made.
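A bare-bones frame along those lines might look like this (a sketch; the test shown is a stand-in for a call into one of your DLLs):

#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct TestCase { std::string name; std::function<bool()> run; };

static std::vector<TestCase>& registry() { static std::vector<TestCase> r; return r; }

#define CHECK(cond) do { if (!(cond)) { std::printf("  FAILED: %s\n", #cond); return false; } } while (0)

int main()
{
    // Register tests against whatever DLL functionality you want to probe.
    registry().push_back({ "dummy_data_roundtrip", []() {
        int result = 2 + 2;        // call into your DLL here instead
        CHECK(result == 4);
        return true;
    }});

    int failures = 0;
    for (auto& t : registry()) {
        std::printf("%s\n", t.name.c_str());
        if (!t.run()) ++failures;
    }
    return failures;   // non-zero exit code if anything failed
}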
I'm using Python + ctypes to build quick testing routines for my DLL applications.
If you are using the extended attribute syntax, it will be easy for you.
Google for Python + ctypes + unit test and you will find several examples.
I would recommend Windows PowerShell cmdlets.
If you look at the article here - http://msdn.microsoft.com/en-us/magazine/cc163430.aspx - you can see how easy it is to set up. Of course this article is mostly about testing C# code, but it also talks about being able to load any COM-enabled DLL in the same way.
Here you can see how to load a COM assembly - http://blogs.technet.com/b/heyscriptingguy/archive/2009/01/26/how-do-i-use-windows-powershell-to-work-with-junk-e-mail-in-office-outlook.aspx
EDIT: I know a very successful storage virtualization software company that uses PowerShell extensively to test both its managed and unmanaged (driver) code.

Best practices in Visual Studio C++

Visual Studio seems to want to put class constructor code and event-handling code in the .h file. I have only been involved in small one-man projects and was wondering what the general industry standard is.
For Visual C++ Application projects, what code would one put in the .h file? I am used to the more classical C++ way of declaring the class in the .h file and coding in the .cpp file. Does this still apply to Visual Studio applications?
I have a strong C background which would explain my preference for this. The VSC++ compiler doesn't seem to mind.
In short: What is one supposed to put in which type of file?
TIA
Ends
There is no widely accepted industry standard. By putting (short) function definitions in the header, you give the compiler a better chance to inline the code. The benefit is that it can make the code run faster (keep those functions short, though). However, this comes at the cost of exposing more code to the clients who include that header, making you (or your colleagues) recompile more files when you change the implementation.
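A generic illustration of the trade-off (not taken from the question itself):

// widget.h -- a short accessor defined here can be inlined by the compiler,
// but every client that includes this header recompiles when it changes.
#include <string>

class Widget
{
public:
    int Id() const { return m_id; }           // defined in the header
    void Rename(const std::string& name);     // only declared here
private:
    int m_id = 0;
    std::string m_name;
};

// widget.cpp -- heavier logic stays out of the header, so edits here only
// recompile this one translation unit.
#include "widget.h"

void Widget::Rename(const std::string& name)
{
    m_name = name;
}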
You also have to take into account the cost of going against your tools. Since VC++'s wizards insist on putting the functions in the headers, you have to move them every time if you disagree.
It's really project-specific, I would say.
If you're using MFC and you're talking about the generated code, it's best to leave it alone.
If you're trying to do 'normal' C++ development, put as little as you can get away with in the header, as it means client code doesn't depend on too many implementation details. What you can get away with depends a little on use of templates, and how much indirection your performance budget can support.
For Visual C++ Application projects, what code would one put in the .h file? I am used to the more classical C++ way of declaring the class in the .h file and coding in the .cpp file. Does this still apply to Visual Studio applications?
Short: Yes.
Long: It depends on the person and the language. In C++ the header is for declarations and the .cpp file for the definitions. In C# you have one file (or two, if you use interfaces).
This might seem minor, but just remember: headers are #included in several places (and headers including headers complicates things further). Any time you change a header, a lot of files are going to be compiled again. Keeping as little frequently changing code in the header as possible reduces recompilation of dependent files.
Another thing: an uncluttered header file gives you a quick overview of what a class/form has to offer.
