I have some models (geometries) which carry vertex information, for example position, normal, color and texcoord, and each of these attributes has its own vertex buffer. But some of these models have texture coordinates and some do not...
To manage these differences I wrote a vertex shader which checks whether the flag "hasTextureCoordinates" inside the constant buffer is == 1, and only uses the texcoord parameter if it is.
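Roughly, the C++ side of that constant buffer looks like the sketch below; everything except hasTextureCoordinates is just a placeholder:
// Sketch of the per-object constant buffer on the CPU side. Only
// hasTextureCoordinates is the actual flag; the struct name and padding are
// placeholders. D3D11 requires constant buffers to be a multiple of 16 bytes.
struct PerObjectConstants
{
    UINT hasTextureCoordinates; // 1 if a texcoord vertex buffer is bound, 0 otherwise
    UINT padding[3];
};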
But DirectX does not really like this workaround and prints out:
D3D11 INFO: ID3D11DeviceContext::Draw: Element [3] in the current Input Layout's declaration references input slot 3, but there is no Buffer bound to this slot. This is OK, as reads from an empty slot are defined to return 0. It is also possible the developer knows the data will not be used anyway. This is only a problem if the developer actually intended to bind an input Buffer here. [ EXECUTION INFO #348: DEVICE_DRAW_VERTEX_BUFFER_NOT_SET]
I'm not sure whether every piece of hardware handles this correctly, and it's also not nice to see these "warnings" in the output every frame...
I know I could write two shaders, one with and one without texcoords, but the problem is that this is not the only parameter that may be missing: some models have color, others don't, some have both color and texture coordinates, and so on. Writing a shader for every combination of vertex buffer inputs is extremely redundant, which is bad, because if we change one shader we have to change all the others too. There is also the possibility of splitting parts of the shader into different files and including them, but that gets confusing.
Is there a way to tell DirectX that a specific vertex buffer is optional?
Or does someone know a better solution to this problem?
You can suppress this specific message programmatically. As it's an INFO rather than an ERROR or CORRUPTION message, it's safe to ignore.
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Debug> d3dDebug;
if ( SUCCEEDED( device.As(&d3dDebug) ) )
{
    ComPtr<ID3D11InfoQueue> d3dInfoQueue;
    if ( SUCCEEDED( d3dDebug.As(&d3dInfoQueue) ) )
    {
#ifdef _DEBUG
        d3dInfoQueue->SetBreakOnSeverity( D3D11_MESSAGE_SEVERITY_CORRUPTION, true );
        d3dInfoQueue->SetBreakOnSeverity( D3D11_MESSAGE_SEVERITY_ERROR, true );
#endif
        D3D11_MESSAGE_ID hide[] =
        {
            D3D11_MESSAGE_ID_SETPRIVATEDATA_CHANGINGPARAMS,
            D3D11_MESSAGE_ID_DEVICE_DRAW_VERTEX_BUFFER_NOT_SET, // <--- Your message here!
            // Add more message IDs here as needed
        };
        D3D11_INFO_QUEUE_FILTER filter = {};
        filter.DenyList.NumIDs = _countof(hide);
        filter.DenyList.pIDList = hide;
        d3dInfoQueue->AddStorageFilterEntries( &filter );
    }
}
In addition to suppressing 'noise' messages, in debug builds this also causes the debug layer to trigger a breakpoint if you do hit an ERROR or CORRUPTION message, as those really need to be fixed.
See Direct3D SDK Debug Layer Tricks
Note that I'm using ComPtr here to simplify the QueryInterface chain, and I assume you are keeping your device as a ComPtr<ID3D11Device> device as I do in Anatomy of Direct3D 11 Create Device.
I also assume you are using VS 2013 or later so that D3D11_INFO_QUEUE_FILTER filter = {}; is sufficient to zero-fill the structure.
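If you are on an older toolset where that brace-initialization isn't available, a hedged equivalent is to zero the structure explicitly before filling in the deny list:
// Equivalent explicit zero-fill for older compilers.
D3D11_INFO_QUEUE_FILTER filter;
ZeroMemory( &filter, sizeof(filter) );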
I am currently trying to run some TensorFlow inference (C backend) using Boost::GIL (challenging). I need a few things. I have been able to load my PNG image (rgb8_image_t)
and convert it to rgb32f_image_t.
I still need three things: the raw pointer to the data, the amount of memory allocated, and the dimensions.
For the memory allocated, unfortunately the function total_allocated_size_in_bytes() is private, so I did this:
boost::gil::view(dest).size() * boost::gil::view(dest).num_channels() * sizeof(value_type);
This is valid as long as there is no extra padding for alignment. But is there any nicer alternative?
For the dimensions, I need to match what numpy (via PILLOW) produces, and I hope both libraries use the same memory layout. From my understanding the data is interleaved and contiguous by default, so it should be fine.
Finally, the raw pointer: _memory is a private data member of the image class with no dedicated accessor. boost::gil::view(dest).row_begin(0) returns an iterator to the first pixel, but I'm not sure how I could get the pointer to the _memory data. Any suggestions?
Thank you very much,
++t
PS: TensorFlow offers a C++ backend; however, it isn't installable from any package manager, and wrangling Bazel is beyond my strength.
The GIL documentation describes the various memory layouts pretty accurately.
The point of the library, though, is to abstract away the memory layouts. If you require a specific representation (planar/interleaved, packed or unpacked), you are doing things "the hard way" as far as the library interface is concerned.
So, I think you can read and convert in one go, e.g. for a jpeg:
gil::rgb32f_image_t img;
gil::image_read_settings<gil::jpeg_tag> settings;
read_and_convert_image("input.jpg", img, settings);
Now getting the raw data is possible:
auto* raw_data = gil::interleaved_view_get_raw_data(view(img));
It happens to be the case that the preferred implementation storage is interleaved, which is likely what you're expecting. If your particular image storage is planar, the call will not compile (and you'd probably want planar_view_get_raw_data(vw, plane_index) instead).
Note that you'll have to reinterpret_cast to float [const]* if you need that, because there is no public interface to get a reference to scoped_channel_value<>::value_, but the BaseChannelValue type is indeed float and you can assert that the wrapper doesn't add any extra weight:
static_assert(sizeof(float) == sizeof(raw_data[0]));
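Spelled out, a minimal sketch of that cast (assuming the interleaved view from above) would be:
// raw_data points at scoped_channel_value<float,...> storage; reinterpret it as a
// flat float array, which the static_assert above shows is layout-compatible.
auto const* channels = reinterpret_cast<float const*>(raw_data);
float first_channel = channels[0]; // first channel of the first pixel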
Alternative Approach:
Conversely, you can set up your own raw pixel buffer, mount a mutable view onto it and use that as the target of the initial read/convert:
// get dimension
gil::image_read_settings<gil::jpeg_tag> settings;
auto info = gil::read_image_info("input.jpg", settings).get_info();
// setup raw pixel buffer & view
using pixel = gil::rgb32f_pixel_t;
auto data = std::make_unique<pixel[]>(info._width * info._height);
auto vw = gil::interleaved_view(info._width, info._height, data.get(),
                                info._width * sizeof(pixel));
// load into buffer
read_and_convert_view("input.jpg", vw, settings);
I've actually checked that it works correctly by writing out the resulting view:
//// just for test - doesn't work for 32f, so choose another pixel format
//gil::write_view("output.png", vw, gil::png_tag());
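Since the end goal is TensorFlow inference through the C API, here is a rough sketch (not something GIL dictates) of how the raw buffer from the alternative approach could be wrapped in a tensor; the NHWC dimension order and the no-op deallocator are assumptions about your setup:
#include <tensorflow/c/c_api.h>

// Assumed NHWC layout: {batch, height, width, channels}.
const int64_t dims[4] = { 1, (int64_t)info._height, (int64_t)info._width, 3 };
const size_t len = info._width * info._height * sizeof(pixel); // tightly packed, no padding
TF_Tensor* tensor = TF_NewTensor(
    TF_FLOAT, dims, 4,
    data.get(), len,
    [](void*, size_t, void*) {}, // no-op deallocator: the unique_ptr still owns the buffer
    nullptr);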
While trying to create an object-oriented OpenACC implementation I stumbled upon this question.
From there I took the code provided by #mat-colgrove at GTC15 (code available at http://www.pgroup.com/lit/samples/gtc15_S5233.tar).
Since I am interested in how to use objects to manage data with OpenACC, I posted another question.
I was quite impressed by the ease of the OpenACCArray::swap function, so I created a small example to test it (see gist).
First I tried to just swap, hoping it would be sufficient to swap the pointers on the host, but this ends in a fatal memory error (presumably because the size and capacity members are not updated on the device).
A safer approach, which I assumed would work, is to update the host, swap the arrays and then update the device. This runs, but produces wrong results.
I am compiling for NVIDIA accelerators.
Looks like this is my fault since I didn't test the swap routine.
The problem here is that while the code swaps the data on the host, the device copies of the objects still point to the old arrays. The fix is to re-attach the lists, i.e. set the objects' device pointers to the correct arrays.
void swap(OpenACCArray<type>& x)
{
    // swap the host-side members
    type* tmp_list = list;
    int tmp_size = _size;
    int tmp_capacity = _capacity;
    list = x.list;
    _size = x._size;
    _capacity = x._capacity;
    x.list = tmp_list;
    x._size = tmp_size;
    x._capacity = tmp_capacity;
#ifdef _OPENACC
    // push the scalar members to the device, then re-attach the swapped
    // pointers so the device copies of the objects reference the right arrays
    #pragma acc update device(_size,_capacity,x._size,x._capacity)
    acc_attach((void**)&list);
    acc_attach((void**)&x.list);
#endif
}
"acc_attach" is a PGI extension that hopefully will be adopted in the OpenACC 3.0 standard.
Thanks for trying things out and let me know if you encounter other issues.
- Mat
Now that we've reached Swift 2.0, I've decided to convert my, as yet unfinished, OS X app to Swift. I'm making progress, but I've run into some issues using termios and could use some clarification and advice.
The termios struct is treated as a struct in Swift, no surprise there, but what is surprising is that the array of control characters in the struct is now a tuple. I was expecting it to just be an array. As you might imagine it took me a while to figure this out. Working in a Playground if I do:
var settings:termios = termios()
print(settings)
then I get the correct details printed for the struct.
In Obj-C to set the control characters you would use, say,
cfmakeraw(&settings);
settings.c_cc[VMIN] = 1;
where VMIN is a #define equal to 16 in termios.h. In Swift I have to do
cfmakeraw(&settings)
settings.c_cc.16 = 1
which works, but is a bit more opaque. I would prefer to use something along the lines of
settings.c_cc.vmin = 1
instead, but can't seem to find any documentation describing the Swift "version" of termios. Does anyone know if the tuple has pre-assigned names for its elements, or if not, is there a way to assign names after the fact? Should I just create my own tuple with named elements and then assign it to settings.c_cc?
Interestingly, despite the fact that pre-processor directives are not supposed to work in Swift, if I do
print(VMIN)
print(VTIME)
then the correct values are printed and no compiler errors are produced. I'd be interested in any clarification or comments on that. Is it a bug?
The remaining issues have to do with further configuration of the termios.
The definition of cfsetspeed is given as
func cfsetspeed(_: UnsafeMutablePointer<termios>, _: speed_t) -> Int32
and speed_t is typedef'ed as an unsigned long. In Obj-C we'd do
cfsetspeed(&settings, B38400);
but since B38400 is a #define in termios.h we can no longer do that. Has Apple set up replacement global constants for things like this in Swift, and if so, can anyone tell me where they are documented? The alternative seems to be to just plug in the raw values and lose readability, or to create my own versions of the constants previously defined in termios.h. I'm happy to go that route if there isn't a better choice.
Let's start with your second problem, which is easier to solve.
B38400 is available in Swift, it just has the wrong type.
So you have to convert it explicitly:
var settings = termios()
cfsetspeed(&settings, speed_t(B38400))
Your first problem has no "nice" solution that I know of.
Fixed-size arrays are imported to Swift as tuples, and – as far as I know – you cannot address a tuple element with a variable.
However, Swift preserves the memory layout of structures imported from C, as confirmed by Apple engineer Joe Groff. Therefore you can take the address of the tuple and "rebind" it to a pointer to the element type:
var settings = termios()
withUnsafeMutablePointer(to: &settings.c_cc) { (tuplePtr) -> Void in
    tuplePtr.withMemoryRebound(to: cc_t.self, capacity: MemoryLayout.size(ofValue: settings.c_cc)) {
        $0[Int(VMIN)] = 1
    }
}
(Code updated for Swift 4+.)
I am trying to get a set of binary images' eccentricity and solidity values using the regionprops function. I obtain the label matrix using the vision.ConnectedComponentLabeler function.
This is the code I have so far:
files = getFiles('images');
ecc = zeros(length(files)); %eccentricity values
sol = zeros(length(files)); %solidity values
ccl = vision.ConnectedComponentLabeler;
for i=1:length(files)
    I = imread(files{i});
    [L NUM] = step(ccl, I);
    for j=1:NUM
        L = changem(L==j, 1, j); %*
    end
    stats = regionprops(L, 'all');
    ecc(i) = stats.Eccentricity;
    sol(i) = stats.Solidity;
end
However, when I run this, I get an error indicating the line marked with *:
Error using ConnectedComponentLabeler/step
Variable-size input signals are not supported when the OutputDataType property is set to 'Automatic'.
I do not understand what MATLAB is talking about and I have no idea how to get rid of it.
Edit
I have gone back to the bwlabel function and have no problems now.
The error is a bit hard to understand, but I can explain exactly what it means. When you use the CVST Connected Component Labeller, it assumes that all of the images you're going to use with the function are the same size. That error happens because it looks like they aren't... hence the note about "Variable-size input signals".
The "Automatic" setting means that the output data type of the label matrices is chosen automatically, so you don't have to worry about whether the output is uint8, uint16, etc. If you want to remove this error, you need to set the output data type produced by the labeller manually, i.e. make the OutputDataType property static. Hopefully the images in the directory you're reading are all the same data type, so override this field with a data type that the function accepts. The available types are uint8, uint16 and uint32. Therefore, assuming your images are uint8 for example, do this before you run your loop:
ccl = vision.ConnectedComponentLabeler;
ccl.OutputDataType = 'uint8';
Now run your code, and it should work. Bear in mind that the input needs to be logical for this to have any meaningful output.
Minor comment
Why are you using the CVST Connected Component Labeller when the Image Processing Toolbox bwlabel function works exactly the same way? As you are using regionprops, you have access to the Image Processing Toolbox, so this should be available to you. It's much simpler to use and requires no setup: http://www.mathworks.com/help/images/ref/bwlabel.html
I am unable to figure out how to properly use the EM_SETHANDLE mechanism to set the text of an edit control. Getting and setting the window text is too slow for my application.
From the documentation I understand that the allocated buffer will be used by the control, and this partially works for me.
When text is typed into the control it shows up in the buffer, but when the buffer is updated using memcpy etc. (no bug in the code), the updated text doesn't display properly. I even tried EM_SETHANDLE (SetHandle()) after every update, but it fails after a couple of attempts with some kind of heap allocation failure. RedrawWindow() doesn't help either.
I can't find any proper information on the usage on the net. Help!
My code, leaving out the app-specific details, looks like this.
// init
HANDLE m_hMem = HeapAlloc(...)
m_edit.SetHandle(m_hMem)
// on some event
char *pbuf = (char*)m_hMem;
memcpy(...)
thanks in advance
The docs for EM_GETHANDLE tell you that this memory has to be movable memory allocated by LocalAlloc.
I'm guessing you can get away with something like this:
int cbCh = sizeof(TCHAR) > 1 ? sizeof(TCHAR) : IsUsingComCtlV6() ? sizeof(WCHAR) : sizeof(char);
HLOCAL hOrgMem = (HLOCAL) SendMessage(hEdit, EM_GETHANDLE, 0, 0);
HLOCAL hNewMem = LocalReAlloc(hOrgMem, cbCh * cchYourTextLength, LMEM_MOVEABLE);
if (hNewMem)
{
    // LocalLock, assign string, LocalUnlock
    SendMessage(hEdit, EM_SETHANDLE, (WPARAM)hNewMem, 0);
}
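The lock/assign/unlock step could look roughly like this (szNewText is a hypothetical source buffer; the sketch assumes a TCHAR build and that cchYourTextLength includes the terminating null):
// Sketch only: copy the new text into the moveable block before EM_SETHANDLE.
TCHAR* p = (TCHAR*)LocalLock(hNewMem);
memcpy(p, szNewText, cchYourTextLength * sizeof(TCHAR));
LocalUnlock(hNewMem);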
Looks like you need to allocate the memory with LocalAlloc. See the companion message EM_GETHANDLE: http://msdn.microsoft.com/en-us/library/bb761576(v=vs.85).aspx