I've read this DirectX 11 tutorial on VS2015 (http://www.rastertek.com/dx11s2tut04.html), and found out that the author compiles the vertex and pixel shader separately, using the .vs file and .ps file respectively.
I also found that in the book "3D Game Programming with DirectX 11" the author uses .fx files to organize the shaders throughout the book.
Which method should I use to develop my Direct3D program with the latest version of the Windows SDK? I ask because I've heard that the Effects11 framework might be deprecated in the future.
You should avoid using fx targets for new projects and opt for per-stage compilation instead. Note that this is independent of whether you actually put your shader code in separate files, though having one .vs or .ps file per shader is a common convention. Full D3D11 support for effects profiles (i.e. fx_5_0) is already deprecated in the latest (Windows 10) compiler, and there is no fx_5_1 at all (some DirectX 12 features require Shader Model 5.1).
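For illustration, here is a minimal sketch of per-stage compilation using D3DCompileFromFile from the Windows SDK; the file names and entry points (Color.vs, Color.ps, VSMain, PSMain) are placeholders of my own, not anything from the tutorial or the book:

// Compile each stage against its own profile (vs_5_0 / ps_5_0) instead of fx_5_0.
// File names and entry points below are placeholders for your own shaders.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <wrl/client.h>
#pragma comment(lib, "d3dcompiler.lib")

using Microsoft::WRL::ComPtr;

HRESULT CreateShaders(ID3D11Device* device,
                      ID3D11VertexShader** vs, ID3D11PixelShader** ps)
{
    ComPtr<ID3DBlob> vsBlob, psBlob, errors;

    // Vertex shader: entry point "VSMain", target vs_5_0
    HRESULT hr = D3DCompileFromFile(L"Color.vs", nullptr, nullptr,
                                    "VSMain", "vs_5_0", 0, 0, &vsBlob, &errors);
    if (FAILED(hr)) return hr;
    hr = device->CreateVertexShader(vsBlob->GetBufferPointer(),
                                    vsBlob->GetBufferSize(), nullptr, vs);
    if (FAILED(hr)) return hr;

    // Pixel shader: entry point "PSMain", target ps_5_0
    hr = D3DCompileFromFile(L"Color.ps", nullptr, nullptr,
                            "PSMain", "ps_5_0", 0, 0, &psBlob, &errors);
    if (FAILED(hr)) return hr;
    return device->CreatePixelShader(psBlob->GetBufferPointer(),
                                     psBlob->GetBufferSize(), nullptr, ps);
}

You can also pre-compile the same .vs/.ps files at build time with fxc.exe (or as HLSL items in the Visual Studio project) and just load the bytecode at run time; the device-side Create*Shader calls stay the same.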
How does one read or write files (png, txt, jpg...) in OpenGL ES? Target is Android through Visual Studio.
Unfortunately it's not as simple as placing the assets in the same directory as the main program and then referencing them using fstream or stdio.h like with the OpenGL equivalent. I've tried creating folders like res/raw and using android/asset_manager.h and similar libraries. Is it even possible through this IDE? I'll be done in Unity by the time this gets resolved...
You don't. OpenGL is an API concerned with transforming vertices, and drawing pixels on a screen. File formats are outside the definition of OpenGL. In other words, if you want to use a *.png as an input/output format, you'll need to find a 3rd party library that supports that file format (e.g. libPNG), and use that to transfer the pixel data to OpenGL.
The raw file stream classes (e.g. ifstream) have zero concept of a file format. Again, another reason why you use a 3rd party library.
Unity is a full-fledged game engine, and as such has spent time building support for various file formats (e.g. PNG, OBJ, etc.). OpenGL is far lower level than that. A good place to start for image data is a lib such as DevIL (which itself includes other 3rd party libraries such as libPNG, libJPEG, etc.).
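To make the division of labor concrete, here is a minimal sketch; it assumes the single-header stb_image library as the 3rd party decoder (DevIL or libPNG would be used the same way): the library decodes the file on the CPU, and OpenGL only ever sees raw pixels.

// Decode an image file with a 3rd-party library (stb_image assumed here),
// then hand the raw pixel data to OpenGL. OpenGL never touches the file itself.
#include <GLES2/gl2.h>    // OpenGL ES 2.0 header; adjust for your GL flavor
#include "stb_image.h"    // any image-decoding library works here

GLuint LoadTexture(const char* path)
{
    int width = 0, height = 0, channels = 0;
    unsigned char* pixels = stbi_load(path, &width, &height, &channels, 4); // force RGBA
    if (!pixels) return 0;

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // raw pixels in, no file format involved
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    stbi_image_free(pixels);   // the decoded CPU copy is no longer needed
    return tex;
}

On Android the bytes would typically come out of AAssetManager rather than a plain file path, but the decode-then-upload split is the same.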
As I understand it, the D3DX11_IMAGE_LOAD_INFO structure is deprecated in DirectX 11 for Windows 8.1 and up. What kind of structure can I use as a replacement for it?
D3DX11_IMAGE_LOAD_INFO is part of the D3DX11 utility library from the DirectX SDK.
D3DX9, D3DX10, and D3DX11 are all deprecated along with the legacy DirectX SDK. See MSDN for the full details here.
Depending on what exactly you were wanting to do with D3DX11 here, there are a number of different options (all of which are open source under the MIT license).
The DirectXTex library provides the functionality in D3DX for loading bitmaps, resizing and converting them, generating mipmaps, compressing, and then writing them out as .DDS files. This is usually overkill for most applications to do at run-time, and not a particularly good use of end-user's time anyhow, but it's great for writing custom content tool pipelines for texture processing. The DirectXTex package includes a 'sample' which is the venerable texconv command-line tool written to use DirectXTex instead of D3DX.
The DDSTextureLoader module is intended to handle efficient loading of .DDS files and creating Direct3D 11 resources from them. It does not perform any runtime conversions, so some legacy files with pixel formats that do not directly map to a DXGI format will fail to load, as will files whose DXGI format is not supported by the device. For these cases, you will want to use DirectXTex to convert them offline to something that you can rely on being able to load on your target machine. This code supports the full range of Direct3D 11 resources including 1D, 2D, 3D, cubemaps, and texture arrays with mipmaps. The DDSTextureLoader module is included in both the DirectXTK library and in the DirectXTex package.
For very simple cases, there is also a WICTextureLoader module which can load standard bitmap files, does some runtime conversions and resizing, and then creates a Direct3D 11 texture 2D from it. It can optionally enable the 'auto-gen mipmaps' feature of Direct3D 11 to provide some basic mipmap support as well (standard bitmap files can't store mipmaps with the base image the way a .DDS file can). This makes use of the Windows Imaging Component (WIC), but is much more 'heavyweight' than DDSTextureLoader. This gives you less control over the quality of the filtering (particularly mipmaps), and does not support complex textures like volume maps, cubemaps, or texture arrays. The WICTextureLoader module is also included in both the DirectXTK library and in the DirectXTex package.
The ScreenGrab module is intended as a light-weight texture saver for creating 'screen shot' bitmap files from render target textures. The ScreenGrab module is included in the DirectXTK library and DirectXTex package.
-- excerpt from this post
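As a rough sketch of how the two loader modules described above are typically called (the device pointer and file names here are placeholders, not from the original post), both produce a shader resource view ready for binding:

// Minimal texture loading with the DirectXTK / DirectXTex loader modules.
// "device" is assumed to be an existing ID3D11Device*; file names are placeholders.
#include "DDSTextureLoader.h"
#include "WICTextureLoader.h"
#include <wrl/client.h>

Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> srv;

// .DDS path: no runtime conversion; supports mipmaps, cubemaps, arrays, volumes
HRESULT hr = DirectX::CreateDDSTextureFromFile(device, L"stone.dds", nullptr, &srv);

// Standard bitmap path: WIC decodes/converts at runtime; 2D textures only
hr = DirectX::CreateWICTextureFromFile(device, L"stone.png", nullptr, &srv);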
For a complete catalog of replacements for legacy D3DX, see this post. There are similar posts for samples, tools, and the DirectX components.
Since you've marked this question with the VS 2013 tag, I'm assuming you are using Visual Studio 2013. You should read about the Windows 8.1 SDK that comes with it. There's a NuGet package for DirectX Tool Kit that works with VS 2013 Update 5, as well as a "Direct3D Game" template package for VS 2013 that you might want to check out.
Note: I just want to say up front that I tried literally everything I could find about the subject (MSDN, Stack Overflow, D3DCoder, etc.) without any success (after solving one error, another was waiting for me). So I am posting here out of pure demotivation (there are similar posts already, yes, but none of them actually helped me out).
Here is how it goes:
A few months ago, I decided to start learning modern OpenGL out of pure curiosity, and finally decided to switch to DirectX after reading about the downsides of OpenGL (I was also only targeting the Windows platform). I think it was one of the worst moves I ever made: I had heard that OpenGL was lacking in documentation and that everything was a mess with third party libraries, but I realized that DirectX was far worse than it seemed compared to OpenGL. In fact, trying to code in modern DirectX 11 using Windows 7 and VS2013 is just a pain (especially shaders), and the time it takes to figure it all out is a time waster. After reading a lot of porting articles on MSDN about alternative libraries, like DirectXTK, DirectXTex, DirectXMesh, Effects11 and DXUT, I still don't know what to do and how to set up a fully working modern project on Windows 7. Specifically, the shader model 5 (/5_0) being deprecated combined with the new Effects11 library (not deprecated) is the thing that confuses me the most.
By the way, I am currently reading the latest Frank Luna book on the subject ('Introduction to 3D Game Programming with Direct3D 11') and I still can't get his samples working at all (even with the DirectX SDK). Also, I don't know if what I am learning is relevant or not, since he wrote it before the Windows SDK switch. For your information, the latest error I am trying to solve with his samples (for those who know the book) is an E_NOINTERFACE from the D3DX11CreateEffectFromMemory function (at runtime):
HR(D3DX11CreateEffectFromMemory(compiledShader->GetBufferPointer(), compiledShader->GetBufferSize(),
    0, md3dDevice, &mFX));
// Done with compiled shader.
ReleaseCOM(compiledShader); // <- crash here
All that being said, here is what I wanna know:
Are there any clear step-by-step tutorials on how to set up a modern DirectX 11 project on Windows 7 using VS2013, or is it still in pre-alpha stage (just kidding)?
What is actually going on with the shader model, the HLSL compiler, and the .fx files, and what should be used (I hear everywhere that it is deprecated, but no replacement seems to exist yet)?
For those who know the book, any idea on how to build the old DirectX SDK samples without getting this silly runtime error?
Thanks a lot!
E_NOINTERFACE is an unusual error in that context, so there is likely something wrong with the code around it that you are not showing in your question.
You can still use the legacy DirectX SDK with VS 2013, but it takes a slightly different procedure than was used with VS 2010. In VC++ Directories, set Executable to $(ExecutablePath);$(DXSDK_DIR)Utilities\bin\x86 or $(ExecutablePath);$(DXSDK_DIR)Utilities\bin\x64;$(DXSDK_DIR)Utilities\bin\x86, Include to $(IncludePath);$(DXSDK_DIR)Include, and Library to $(LibraryPath);$(DXSDK_DIR)Lib\x86 or $(LibraryPath);$(DXSDK_DIR)Lib\x64. Read MSDN for some other details of doing this. I've also made some notes w.r.t. that book here.
You actually don't need the legacy DirectX SDK, but you may find it easier to do that for now using that book. VS 2013 comes with the Windows 8.1 SDK that has all the OS headers for DirectX 11 along with D3DCompile #47.
You can use the Direct3D tutorial for a simple example of setting up a Win32 desktop app (i.e. one that works on Windows 7) with a device, swapchain, and window. This makes no use of legacy DirectX SDK.
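The heart of that tutorial boils down to something like the following sketch (the window handle and back-buffer size are assumed to come from your own Win32 setup); it uses only the Windows 8.1 SDK headers, no legacy DirectX SDK:

// Minimal device + swap chain creation for a Win32 desktop app (works on Windows 7).
// hwnd, width, and height are assumed to come from your window creation code.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

DXGI_SWAP_CHAIN_DESC sd = {};
sd.BufferCount = 1;
sd.BufferDesc.Width = width;
sd.BufferDesc.Height = height;
sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
sd.OutputWindow = hwnd;
sd.SampleDesc.Count = 1;
sd.Windowed = TRUE;

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
IDXGISwapChain* swapChain = nullptr;
D3D_FEATURE_LEVEL featureLevel;

HRESULT hr = D3D11CreateDeviceAndSwapChain(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,   // default adapter, no flags
    nullptr, 0, D3D11_SDK_VERSION,                   // default feature level list
    &sd, &swapChain, &device, &featureLevel, &context);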
There is an additional Effects Tutorial Win32 Sample you can use as well. Instructions for adding the Effects 11 library are on the CodePlex wiki under Documentation / Effects 11 / Adding to a VS solution.
Your questions about FX vs. not are also covered on the CodePlex: Is Effects 11 deprecated? and How do I avoid using fx_5_0?. Also on StackOverflow.
Many of the older DirectX SDK samples have been reposted to MSDN Code Gallery and do not require the legacy DirectX SDK to build. You should read these posts for the fate of various DirectX SDK things:
DirectX SDK Samples Catalog
DirectX SDK Tools Catalog
Living without D3DX
DirectX SDKs of a certain age
The story for learning DirectX 11 with Windows Store apps / Windows Phone 8.x apps is a lot cleaner, and is well supported by VS templates and MSDN documentation. Win32 desktop apps are of course a completely reasonable option, but you have to distinguish between legacy and modern with a bit of research. You still start with the standard Win32 desktop app project template in VS.
Note: Windows by default only supports the OpenGL v1.5 software renderer. You have to install a 3rd party ICD to get anything else, and there are no OpenGL VS templates.
I have found plenty of articles and how-tos online about making plugins for Photoshop on a Mac. Trouble is, many are old, apply only to CS1/2/3/4, or refer to tools or APIs that (it appears) are obsolete. Some articles say you must use CodeWarrior, but it seems this no longer even exists in the Mac programming realm.
Today, in 2011, making plugins only for CS5 and only on a Mac running Snow Leopard, what is the proper toolchain and what libraries/APIs/frameworks should I be using?
I've gotten the impression that Carbon (whatever exactly that is) is old and to be avoided, but it's not clear if that's true for plugins. I am not clear as to whether I should use Cocoa (whatever that is) or not. I do think I will need Core Foundation (whatever...) Is there a choice about 64 vs. 32 bit or is CS5 purely 64 bit and that's that? (I prefer 64 bit, of course.) I do have the Photoshop CS5 SDK, and Photoshop CS5 itself installed.
Can Xcode be used as an IDE? I'll hand-code a makefile and compile at the command line if that's easier or the only way possible. If Xcode can be used, which project template should I use? What is this "Mach-O" I read about, and how does that apply to PS plugins?
It's especially confusing since I'm a total noob at Mac programming of any kind, though many years experienced on Linux and other platforms.
Mission accomplished! (Months ago.. I just realized I had this question sitting here.)
Cocoa is useful for GUI settings windows and other things - it's a huge gob of stuff - but I ended up using it only for the "About" popup window for my plugin.
Completely forget about Carbon for the combination of CS5 or later, OSX 10.6 or later, and 64 bit. Apparently parts of Carbon had been made 64 bit in the past, but should be ignored now.
Xcode is a fine editor. Start with a "dylib" project using C; C++ and Obj-C source files can be added without any fuss. There's no way around just needing to use Xcode for some simple toy projects to gain familiarity with how it organizes things and builds apps and libraries. This is the only real "tool" needed; the rest is APIs - header files and libraries (or "frameworks" in the Apple world). While toying with Xcode, get to know what a "bundle" is - a folder containing the executable and other files needed by the app.
Paths need to be set up to the Photoshop CS5 API, there being two or three specific directories to be listed. You may need to copy certain common source files out of the Photoshop example plugins directory, and there was a bit of trouble with a file named MachOMacrezXcode.h, about which see "What is the meaning of exit code 3 from Rez?"
Unfortunately there were no truly useful examples of well-written plugins for CS5 on 64-bit. I got by with a combination of the Dissolve example, the SimpleFormat file read/write plugin, browsing the plugin source at http://www.telegraphics.com.au/sw/product/FilterFoundry, and asking questions on the Adobe Photoshop SDK forum.
Pay no attention to the clumsy process of using some "Plugin Suite" for obtaining memory. It's like Microsoft's old 16-bit Windows API, where you needed "memory handles" and thick malarkey that is now several times obsolete. These days, good ol' malloc/free or new/delete are fine.
With all the arrowhead wounds I now have in my back, maybe I should write a book or something...
I am new to OpenGL and am using C#/OpenTK for development. My application is very lightweight (just 2D graphics), and I am planning to use software rendering when hardware rendering is not available.
How do I make sure software rendering works on all computers (when hardware rendering is not available)?
Should I distribute software rendering libraries like Mesa myself, or will they already be available on all (Windows) OSes?
In other words, is opengl32.dll always available on all modern Windows OSes (> XP SP2), or should I distribute that as well?
(My application is very simple (just 2D graphics) as of now. I selected OpenGL instead of GDI+/WPF because I may extend it to 3D in the future.)
OpenGL is a system library. You should not distribute it with your application. Especially on Unix/Linux systems, where it should be installed using the distribution's package manager.
Since opengl32.dll is included in Windows, it falls back to Software Rendering automatically if the pixel format you chose in your application isn't hardware accelerated by the graphics driver.
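If you want to check at runtime whether you ended up on the software renderer, the underlying Win32 mechanism looks roughly like this; this is a C/C++ sketch of what happens beneath OpenTK's context creation, not OpenTK code itself:

// After ChoosePixelFormat/SetPixelFormat, inspect the descriptor that was actually chosen.
// A format flagged PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED is Microsoft's
// software implementation inside opengl32.dll; anything else comes from the vendor's ICD.
#include <windows.h>
#pragma comment(lib, "gdi32.lib")

bool IsHardwareAccelerated(HDC hdc, int pixelFormat)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    DescribePixelFormat(hdc, pixelFormat, sizeof(pfd), &pfd);
    bool generic     = (pfd.dwFlags & PFD_GENERIC_FORMAT) != 0;
    bool accelerated = (pfd.dwFlags & PFD_GENERIC_ACCELERATED) != 0;  // MCD driver case
    return !generic || accelerated;
}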
I tried leveraging OpenTK as well, but it in itself creates a dependency and - particularly for a newbie - doesn't really do anything but blur the line between learning OpenGL and learning someone else's interpretation of the framework.
OpenGL is - as the other answerer suggested - a system library. That is, it's a set of functions contained in a C DLL which you import through the API.
OpenTK imports these functions for you - that's the only real benefit it adds - but in doing so, many of the types and function calls are reinterpreted as per the author of OpenTK.
This creates an additional learning curve - as most of the internet references you're going to find are going to be OpenGL - so not only will you be struggling with understanding OpenGL - which isn't easy - but you're also going to be dealing with OpenTK interpretations of the OpenGL standards.
Now keep in mind that MANY open source projects such as OpenTK start as open source until they get a sufficient user base, then convert over to a for-profit model. So let's say you learn and become dependent on OpenTK: if/when they switch to a for-profit model and you're tapped for cash, you are SOL (shit outta luck), or you have to pay their price.
What I did was take the source for OpenTK's OpenGL API mapping and rename everything to my tastes. It's a bit of work, but it's worth the labor, and it helped me get to understand OpenGL.
As for distribution: I have absolutely no external dependencies I rely on other than the OpenGL DLL, which should already be on the system.
ALL the DLLs you need for OpenGL will ALREADY be preinstalled on any Windows OS you're dealing with. I can't speak for other OS flavors, but I suspect this may be the case there too.
On a final note: OpenGL handles 'toggling between software and hardware rendering' innately. So libraries like MESA and OpenTK add VERY little value at HIGH potential costs.
What are those costs?
1) Redistributable packaging and licensing. They still come with a license, and most licenses are subject to change at any time.
2) Conversion from open source or free distribution to a for-profit model.
Invest in yourself. OpenGL documentation is vast and at times confusing, and my advice is to avoid the knee-jerk temptation to 'take the easy path' and leverage others' models - for one simple reason.
I know you're using this for a 2D application. And even if you're using an orthographic view, the fact of the matter is you're learning OpenGL, which is patterned with 3D in mind. So give yourself the gift up front of teaching yourself, because there really is no 'easy path' to understanding 3D - and thus no real value added by the external dependencies you're leaning towards using.
One thing to keep in mind: modeling. IF/when you switch to 3D modeling, doing vertex creation by hand coding in OpenGL is a bitch. I use Blender to create my .obj models and read those into my own C# application, which lets me manipulate them from there.
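For what it's worth, pulling the vertex positions out of a Wavefront .obj file exported from Blender only takes a few lines; here's a rough sketch (shown in C++ for brevity, with faces/normals/UVs omitted; the same idea carries straight over to C#):

// Parses only the "v x y z" position records of a Wavefront .obj file.
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> LoadObjPositions(const std::string& path)
{
    std::vector<Vec3> positions;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v")               // vertex position record
        {
            Vec3 v;
            ss >> v.x >> v.y >> v.z;
            positions.push_back(v);
        }
    }
    return positions;
}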
I HAD been using C++, which sure is faster, but once I converted the APIs to C# code and started managing my own memory leveraging the garbage collection model, it became SO much easier than having to learn someone else's library.
Don't use redistributables. Leverage the code from OpenTK, with modification, but don't include OpenTK as a redistributable.
That's my advice.