DX11: Indexed drawing doesn't produce any visual output

For our student project I've been tinkering with an OBJ-loader in order to import models into our application.
It loads without issues, and drawing it kind of works without indexing (the model is obviously not represented correctly because I'm not using an index buffer).
However, drawing with DeviceContext->DrawIndexed shows nothing on screen.
Without indexed drawing
With indexed drawing
Buffer creation method:
void ObjectLoader::CreateBuffers()
{
//Index buffer
D3D11_BUFFER_DESC iBufferDesc;
memset(&iBufferDesc, 0, sizeof(iBufferDesc));
iBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
iBufferDesc.Usage = D3D11_USAGE_DEFAULT;
iBufferDesc.ByteWidth = sizeof(DWORD);
D3D11_SUBRESOURCE_DATA indexData;
indexData.pSysMem = &ind;
pDevice->CreateBuffer(&iBufferDesc, &indexData, &pIndexBuffer);
//Vertex buffer
D3D11_BUFFER_DESC bufferDesc;
memset(&bufferDesc, 0, sizeof(bufferDesc));
bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bufferDesc.Usage = D3D11_USAGE_DEFAULT;
bufferDesc.ByteWidth = sizeof(TriangleVertex) * this->NumberOfVerts();
D3D11_SUBRESOURCE_DATA data;
data.pSysMem = tva;
pDevice->CreateBuffer(&bufferDesc, &data, &pVertexBuffer);
}
Draw method:
void ObjectLoader::Draw()
{
if (pDevice == nullptr)
return;
UINT32 vertexSize = sizeof(float) * 5;
UINT32 offset = 0;
pDeviceContext->IASetVertexBuffers(0, 1, &pVertexBuffer, &vertexSize, &offset);
pDeviceContext->IASetIndexBuffer(this->pIndexBuffer, DXGI_FORMAT_R32_UINT, 0);
pDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pDeviceContext->DrawIndexed(vIndex.size(),0 , 0);
//pDeviceContext->Draw(this->NumberOfVerts(), 0);
}
What the hell am I missing? I've looked at several books on indexed drawing and it seems pretty straightforward. At first I thought the winding order was reversed, but I checked this by simply reversing the index array; same result.
If you need more code let me know, but I feel this should suffice.
Thanks in advance!
Edit: OT: I never figured out how to get my code to be properly formatted so I apologize for that, feel free to share how that's done.
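One thing that stands out in CreateBuffers above: the index buffer's ByteWidth is sizeof(DWORD), i.e. room for a single 32-bit index, while DrawIndexed is asked to draw vIndex.size() indices. A rough sketch of a description sized for the whole array (assuming vIndex is a std::vector<DWORD> holding the loaded indices, as the DrawIndexed call suggests; ind isn't shown, so the sketch reads straight from vIndex):
D3D11_BUFFER_DESC iBufferDesc;
memset(&iBufferDesc, 0, sizeof(iBufferDesc));
iBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
iBufferDesc.Usage = D3D11_USAGE_DEFAULT;
iBufferDesc.ByteWidth = sizeof(DWORD) * (UINT)vIndex.size(); //whole index array, not one DWORD
D3D11_SUBRESOURCE_DATA indexData;
memset(&indexData, 0, sizeof(indexData));
indexData.pSysMem = vIndex.data(); //pointer to the first index
pDevice->CreateBuffer(&iBufferDesc, &indexData, &pIndexBuffer);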

Related

Optimizations causing errors

std::vector<VkWriteDescriptorSet> writeDescriptorSets;
for (int index = 0; index < descriptorBindings.size(); index++)
{
VkWriteDescriptorSet writeDescriptorSet = {};
// Binding 0 : Uniform buffer
writeDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
writeDescriptorSet.dstSet = descriptorSet;
// Binds this uniform buffer to binding point 0
writeDescriptorSet.dstBinding = index;
writeDescriptorSet.descriptorCount = descriptorBindings[index].Count;
writeDescriptorSet.pNext = nullptr;
writeDescriptorSet.pTexelBufferView = nullptr;
if (descriptorBindings[index].Type == DescriptorType::UniformBuffer)
{
VkDescriptorBufferInfo uniformBufferDescriptor = {};
uniformBufferDescriptor.buffer = descriptorBindings[index].UniformBuffer->buffer;
uniformBufferDescriptor.offset = 0;
uniformBufferDescriptor.range = descriptorBindings[index].UniformBuffer->size;
writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
writeDescriptorSet.pBufferInfo = &uniformBufferDescriptor;
}
else if (descriptorBindings[index].Type == DescriptorType::TextureSampler)
{
VkDescriptorImageInfo textureDescriptor = {};
textureDescriptor.imageView = descriptorBindings[index].Texture->imageView->imageView; // The image's view (images are never directly accessed by the shader, but rather through views defining subresources)
textureDescriptor.sampler = descriptorBindings[index].Texture->sampler; // The sampler (Telling the pipeline how to sample the texture, including repeat, border, etc.)
textureDescriptor.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL; // The current layout of the image (Note: Should always fit the actual use, e.g. shader read)
//printf("%d\n", textureDescriptor.imageLayout);
writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
writeDescriptorSet.pImageInfo = &textureDescriptor;
}
writeDescriptorSets.push_back(writeDescriptorSet);
}
vkUpdateDescriptorSets(logicalDevice, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, nullptr);
I am really scratching my head over this. If I enable optimizations in Visual Studio, then the textureDescriptor.imageLayout line (and probably the rest of textureDescriptor) gets optimized out, and that causes errors in Vulkan. If I put the printf below it back in, there's no problem. I suspect that with the printf present the compiler detects that imageLayout is being used and doesn't get rid of it.
Do I even need optimizations? If so, how can I prevent the compiler from removing that code?
textureDescriptor is not being "optimized out". It's a stack variable whose lifetime ends before you ever give it to Vulkan.
You're going to have to create those objects in a way that outlives the block in which they were created. They need to remain valid until the call to vkUpdateDescriptorSets.
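A minimal sketch of one way to do that, reusing the names from the question: keep the VkDescriptorBufferInfo/VkDescriptorImageInfo structs in vectors that live until after vkUpdateDescriptorSets, and reserve() them up front so push_back never reallocates and invalidates the pointers already stored in the writes.
std::vector<VkDescriptorBufferInfo> bufferInfos;
std::vector<VkDescriptorImageInfo> imageInfos;
bufferInfos.reserve(descriptorBindings.size()); // no reallocation => stable addresses
imageInfos.reserve(descriptorBindings.size());
std::vector<VkWriteDescriptorSet> writeDescriptorSets;
for (size_t index = 0; index < descriptorBindings.size(); index++)
{
    VkWriteDescriptorSet writeDescriptorSet = {}; // zero-init covers pNext, pTexelBufferView, etc.
    writeDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writeDescriptorSet.dstSet = descriptorSet;
    writeDescriptorSet.dstBinding = (uint32_t)index;
    writeDescriptorSet.descriptorCount = descriptorBindings[index].Count;
    if (descriptorBindings[index].Type == DescriptorType::UniformBuffer)
    {
        VkDescriptorBufferInfo info = {};
        info.buffer = descriptorBindings[index].UniformBuffer->buffer;
        info.offset = 0;
        info.range = descriptorBindings[index].UniformBuffer->size;
        bufferInfos.push_back(info); // lives past the loop
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
        writeDescriptorSet.pBufferInfo = &bufferInfos.back();
    }
    else if (descriptorBindings[index].Type == DescriptorType::TextureSampler)
    {
        VkDescriptorImageInfo info = {};
        info.imageView = descriptorBindings[index].Texture->imageView->imageView;
        info.sampler = descriptorBindings[index].Texture->sampler;
        info.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
        imageInfos.push_back(info); // lives past the loop
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        writeDescriptorSet.pImageInfo = &imageInfos.back();
    }
    writeDescriptorSets.push_back(writeDescriptorSet);
}
// Only now are the info structs read, and they are still alive here.
vkUpdateDescriptorSets(logicalDevice, (uint32_t)writeDescriptorSets.size(), writeDescriptorSets.data(), 0, nullptr);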

show random image on Processing

I'm really a noob at Processing and programming and I can't figure out how to show my images at random.
I'm loading the images in setup() as PImages named img0, img1, img2, and then:
image("img" + random(3), 0, 0);
But it doesn't work, because Processing expects a PImage argument, and a string plus a number isn't one.
And I know for sure there must be some better way than:
int randomNumber = random(3);
if(randomNumber == 0 ){
image(img0,0,0);
}
if(randomNumber == 1 ){
image(img1,0,0);
}
if(randomNumber == 2 ){
image(img2,0,0);
}
But I haven't found it.
Any thoughts?
thanks!
In addition to Kevin's great answer, you can also use an array to store the loaded PImages.
Here's a rough example (you'll need to adjust the path to the images, of course):
// total number of images
int numImages = 3;
// an array of images
PImage[] images = new PImage[numImages];
int randomNumber;
void setup(){
//TODO correct sketch size
size(300,300);
// initialize images array (loading each one)
for(int i = 0 ; i < numImages; i++){
// TODO correct path to images
images[i] = loadImage("img"+(i)+".png");
}
}
void draw(){
background(0);
//render the most recently selected random index image
image(images[randomNumber], 0, 0);
//instructions
text("click to randomize",10,15);
}
// change the random number on click (draw() would look chaotic/hard to debug)
void mousePressed(){
// pick a random number and cast the floating point value returned to the int needed as an index into the images array
randomNumber = (int)random(numImages);
}
You could use a HashMap to create a map from String keys to PImage values. Something like this:
HashMap<String, PImage> imageMap = new HashMap<String, PImage>();
imageMap.put("image1", image1);
imageMap.put("image2", image2);
Then to get a PImage from a String key, you'd call the get() function:
PImage image1 = imageMap.get("image1");
You can find more info in the reference.
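To actually pick an image at random with this approach, you could choose a random key first and then look it up; a rough sketch using the two keys above:
// choose one of the keys at random, then draw the PImage it maps to
String[] keys = { "image1", "image2" };
String randomKey = keys[int(random(keys.length))];
image(imageMap.get(randomKey), 0, 0);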
By the way, this line won't compile:
int randomNumber = random(3);
The random() function returns a float value. You can't store a float value in an int variable. You have to convert it using the int() function:
int randomNumber = int(random(3));
If you still can't get it working, please post a MCVE that demonstrates the problem. Good luck.

Cannot get OpenAL to play sound

I've searched the net, I've searched here. I've found code that I could compile and it works fine, but for some reason my code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated wave forms. I've pretty much copied and pasted the working code (only adding in multiple voices), and it still won't work (even though the exact same code for a single voice works fine). I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First some notes... I was looking for something that would allow me to use the original methodology. The original system used paired bytes for music (sound effects, only 2, were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until time reached zero. This was done by patching into the interrupt vector; Windows doesn't allow that, so I set up a timer routine that accomplished the same thing. The timer kicks in, updates the display, and then runs the music sequence. I set this up with a defined time so that I only have one place to adjust the timing (to get it as close as possible to the original sequence). The music is a generated wave form (I've double-checked the math, and even examined the generated data in debug mode), and it looks good. The sequence looks good, but doesn't actually produce sound.
I tried SDL2 first, and its method of only playing one sound doesn't work for me; also, unless I make the sample duration extremely short (and the sound produced this way is awful), I can't match the timing (it plays the entire sample through its own interrupt without letting me make adjustments). Also, blending the 3 voices together (when they all run with different timings) is a mess. Most of the other engines I examined work in much the same way: they want to use their own callback interrupt and won't allow me to tweak it appropriately.
This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
Anyway, here's the code.
int main(int argc, char* argv[])
{
FreeConsole(); //Get rid of the DOS console, don't need it
if (InitLog() < 0) return -1; //Start logging
UINT_PTR tim = NULL;
SDL_Event event;
InitVideo(false); //Set to window for now, will put options in later
curmusic = 5;
InitAudio();
SetTimer(NULL,tim,_FREQ_,TimerProc);
SDL_PollEvent(&event);
while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
SDL_Quit();
return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
RenderOutput();
PlayMusic();
//UpdateTimer();
//RotateGate();
return;
}
void InitAudio(void)
{
ALCdevice *dev;
ALCcontext *cxt;
Log("Initializing OpenAL Audio\r\n");
dev = alcOpenDevice(NULL);
if (!dev) {
Log("Failed to open an audio device\r\n");
exit(-1);
}
cxt = alcCreateContext(dev, NULL);
alcMakeContextCurrent(cxt);
if(!cxt) {
Log("Failed to create audio context\r\n");
exit(-1);
}
alGenBuffers(4,Buffer);
if (alGetError() != AL_NO_ERROR) {
Log("Error during buffer creation\r\n");
exit(-1);
}
alGenSources(4, Source);
if (alGetError() != AL_NO_ERROR) {
Log("Error during source creation\r\n");
exit(-1);
}
return;
}
void PlayMusic()
{
static int oldsong, ofset, mtime[4];
double freq;
ALuint srate = 44100;
ALuint voice, i, note, len, hold;
short buf[4][_BUFFSIZE_];
bool test[4] = {false, false, false, false};
if (curmusic != oldsong) {
oldsong = (int)curmusic;
if (curmusic > 0)
ofset = moffset[(curmusic - 1)];
for (voice = 1; voice < 4; voice++)
alSourceStop(Source[voice]);
mtime[voice] = 0;
return;
}
if (curmusic == 0) return;
//Only 3 voices for music, but have
for (voice = 0; voice < 3; voice ++) { // 4 set asside for eventual sound effects
if (mtime[voice] == 0) { //is note finished
alSourceStop(Source[voice]); //It is, so stop the channel (source)
mtime[voice] = music[ofset++]; //Get the next duration
if (mtime[voice] == 0) {oldsong = 0; return;} //zero marks end, so restart
note = music[ofset++]; //Get the next note
if (note > 127) { //Old HW data was designed for HW that could only
if (note == 255) note = 127; //use values 128 - 255 (255 = 127)
freq = (15980 / (voice + (int)(voice / 3))) / (256 - note); //freq of note
len = (ALuint)(srate / freq); //A single cycle of that freq.
hold = len;
while (len < (srate / (1000 / _FREQ_))) len += hold; //Multiply till 1 interrupt cycle
while (len > _BUFFSIZE_) len -= hold; //Don't overload buffer
if (len == 0) len = _BUFFSIZE_; //Just to be safe
for (i = 0; i < len; i++) //calculate sine wave and put in buffer
buf[voice][i] = (short)((32760 * sin((2 * M_PI * i * freq) / srate)));
alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
alSourcei(openAL.Source[i], AL_LOOPING, AL_TRUE);
alSourcei(Source[i], AL_BUFFER, Buffer[i]);
alSourcePlay(Source[voice]);
}
} else --mtime[voice];
}
}
Well, it turns out there were 3 problems with my code. First, you have to link the built wave buffer to the AL generated buffer "before" you link the buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source,AL_BUFFER,buffer);
Also, in the above example, I multiplied sample_length by the number of bytes in each sample (in this case sizeof(short)).
The final problem was that you need to un-link a buffer from the source before you change the buffer data:
alSourcei(source,AL_BUFFER,NULL);
The music would play, but not correctly until I added that line to the note change code.
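Putting the three fixes together, the note-change path would look roughly like this (only a sketch, using the Source, Buffer, buf, len and srate names from the question's PlayMusic):
alSourceStop(Source[voice]);
alSourcei(Source[voice], AL_BUFFER, 0); //detach the old buffer before touching its data
alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len * sizeof(short), srate); //size in bytes, not samples
alSourcei(Source[voice], AL_BUFFER, Buffer[voice]); //attach only after the data is in place
alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
alSourcePlay(Source[voice]);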

scoped_ptr for double pointers

Is there a halfway elegant way to upgrade the following code snippet by using boost's scoped_ptr or scoped_array?
MyClass** dataPtr = NULL;
dataPtr = new MyClass*[num];
memset(dataPtr, 0, sizeof(MyClass*));
allocateData(dataPtr); // allocates objects under all the pointers
// have fun with the data objects
// now I'm bored and want to get rid of them
for(uint i = 0; i < num; ++i)
delete dataPtr[i];
delete[] dataPtr;
I did it the following way now:
boost::scoped_array<MyClass*> dataPtr(new MyClass*[num]);
memset(dataPtr.get(), 0, num * sizeof(MyClass*));
allocateData(dataPtr.get());
Seems to work fine.
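One caveat with this version (a sketch based on the snippet above): scoped_array only frees the outer array of MyClass* pointers; the objects that allocateData creates still have to be deleted by hand (or held by their own smart pointers):
boost::scoped_array<MyClass*> dataPtr(new MyClass*[num]);
memset(dataPtr.get(), 0, num * sizeof(MyClass*));
allocateData(dataPtr.get());
// ... have fun with the data objects ...
for (uint i = 0; i < num; ++i)
    delete dataPtr[i]; //still required; scoped_array won't delete the elements
// the pointer array itself is freed when dataPtr goes out of scope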

How to get current display mode (resolution, refresh rate) of a monitor/output in DXGI?

I am creating a multi-monitor full screen DXGI/D3D application. I am enumerating through the available outputs and adapters in preparation of creating their swap chains.
When creating my swap chain using DXGI's IDXGIFactory::CreateSwapChain method, I need to provide a swap chain description which includes a buffer description of type DXGI_MODE_DESC that details the width, height, refresh rate, etc. How can I find out what the output is currently set to (or how can I find out what the display mode of the output currently is)? I don't want to change the user's resolution or refresh rate when I go to full screen with this swap chain.
After looking around some more I stumbled upon the EnumDisplaySettings legacy GDI function, which allows me to access the current resolution and refresh rate. Combining this with the IDXGIOutput::FindClosestMatchingMode function I can get pretty close to the current display mode:
void getClosestDisplayModeToCurrent(IDXGIOutput* output, DXGI_MODE_DESC* outCurrentDisplayMode)
{
DXGI_OUTPUT_DESC outputDesc;
output->GetDesc(&outputDesc);
HMONITOR hMonitor = outputDesc.Monitor;
MONITORINFOEX monitorInfo;
monitorInfo.cbSize = sizeof(MONITORINFOEX);
GetMonitorInfo(hMonitor, &monitorInfo);
DEVMODE devMode;
devMode.dmSize = sizeof(DEVMODE);
devMode.dmDriverExtra = 0;
EnumDisplaySettings(monitorInfo.szDevice, ENUM_CURRENT_SETTINGS, &devMode);
DXGI_MODE_DESC current;
current.Width = devMode.dmPelsWidth;
current.Height = devMode.dmPelsHeight;
bool useDefaultRefreshRate = 1 == devMode.dmDisplayFrequency || 0 == devMode.dmDisplayFrequency;
current.RefreshRate.Numerator = useDefaultRefreshRate ? 0 : devMode.dmDisplayFrequency;
current.RefreshRate.Denominator = useDefaultRefreshRate ? 0 : 1;
current.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
current.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
current.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
output->FindClosestMatchingMode(&current, outCurrentDisplayMode, NULL);
}
...But I don't think this is really the correct answer, because I need to use legacy functions. Is there any way to do this with DXGI to get the exact current display mode rather than using this method?
I saw a solution here:
http://www.rastertek.com/dx11tut03.html
In the following part:
// Now go through all the display modes and find the one that matches the screen width and height.
// When a match is found store the numerator and denominator of the refresh rate for that monitor.
for(i=0; i<numModes; i++)
{
if(displayModeList[i].Width == (unsigned int)screenWidth)
{
if(displayModeList[i].Height == (unsigned int)screenHeight)
{
numerator = displayModeList[i].RefreshRate.Numerator;
denominator = displayModeList[i].RefreshRate.Denominator;
}
}
}
Is my understanding correct that the available resolutions are in displayModeList?
This might be what you are looking for:
// Get display mode list
std::vector<DXGI_MODE_DESC*> modeList = GetDisplayModeList(*outputItor);
for(std::vector<DXGI_MODE_DESC*>::iterator modeItor = modeList.begin(); modeItor != modeList.end(); ++modeItor)
{
// PrintDisplayModeInfo(*modeItor);
}
std::vector<DXGI_MODE_DESC*> GetDisplayModeList(IDXGIOutput* output)
{
UINT num = 0;
DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_TYPELESS;
UINT flags = DXGI_ENUM_MODES_INTERLACED | DXGI_ENUM_MODES_SCALING;
// Get number of display modes
output->GetDisplayModeList(format, flags, &num, 0);
// Get display mode list
DXGI_MODE_DESC * pDescs = new DXGI_MODE_DESC[num];
output->GetDisplayModeList(format, flags, &num, pDescs);
std::vector<DXGI_MODE_DESC*> displayList;
for(UINT i = 0; i < num; ++i)
{
displayList.push_back(&pDescs[i]);
}
return displayList;
}
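For what it's worth, a variation of that helper which avoids the raw new[] (never deleted in the snippet above) is to return the mode descriptions by value; this is only a sketch, with the format swapped to a common display format:
std::vector<DXGI_MODE_DESC> GetDisplayModes(IDXGIOutput* output)
{
    UINT num = 0;
    const DXGI_FORMAT format = DXGI_FORMAT_R8G8B8A8_UNORM;
    const UINT flags = DXGI_ENUM_MODES_INTERLACED | DXGI_ENUM_MODES_SCALING;
    // First call: query how many modes there are
    output->GetDisplayModeList(format, flags, &num, nullptr);
    std::vector<DXGI_MODE_DESC> modes(num);
    if (num > 0)
    {
        // Second call: fill the vector
        output->GetDisplayModeList(format, flags, &num, modes.data());
    }
    return modes;
}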
