I have spent a long time away from DirectX 11, so I'm unfamiliar with the error I'm getting.
I am creating a layout description and then moving on to create an input layout, but it crashes at that point.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
{"POSITION", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"DIMENSIONS", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"INACTIVE", 0, DXGI_FORMAT_R32_UINT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0},
{"ACTIVE", 0, DXGI_FORMAT_R32_UINT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0},
};
This is how I create the input layout:
Assert(_pDevice->CreateInputLayout(layout, _uiNumElements, CompileData.Data, CompileData.uiDataSize, &m_pVertexLayout))
This is the line it crashes on.
If anybody could shed some light on the possible reasons, it would be great. Cheers.
I figured out the answer using the D3D11 debug layer, as suggested by Adam Miles.
I simply chucked this into the device creation:
#if defined(DEBUG) || defined(_DEBUG)
CreateDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
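For context, that flag just gets OR'd into the flags parameter of the device creation call. A minimal sketch of what that looks like (the surrounding variable names are illustrative, not my actual setup):
UINT CreateDeviceFlags = 0;
#if defined(DEBUG) || defined(_DEBUG)
CreateDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
ID3D11Device* pDevice = nullptr;
ID3D11DeviceContext* pContext = nullptr;
D3D_FEATURE_LEVEL featureLevel;
HRESULT hr = D3D11CreateDevice(
    nullptr,                    // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                    // no software rasterizer module
    CreateDeviceFlags,          // debug layer enabled in debug builds
    nullptr, 0,                 // default feature levels
    D3D11_SDK_VERSION,
    &pDevice, &featureLevel, &pContext );
// With the debug layer active, a failing CreateInputLayout prints a detailed
// reason to the Output window instead of just returning a failure HRESULT.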
It turned out I was referencing an old .cso file and not the most recent one.
Sometimes the only way to pass precious data from CPU to GPU is by hiding it in textures.
I tried to trick SCNTechnique and simply pass [NSData dataWithBytes:length:] or a CGDataProviderRef containing my neatly prepared raw pixel data bytes, but SceneKit is smart enough to detect my sinister attempts.
But I did not give up, and found a loophole:
[_sceneView.technique setValue: UIImagePNGRepresentation(encodeInSinglePixelUIImage(pos.x, pos.y)) forKey:@"blob_pos_"];
Encoding and decoding single-pixel PNGs at 60 fps on a mobile device is something you can afford; on an iPhone X it costs just 2 ms and keeps your palm a little bit warmer.
However, I do not need any heat-generating features till November, so I was wondering if there's a cool alternative to this method.
The most efficient way I found is constructing floating point RGB TIFFs.
It's still not super fast, consuming 0.7ms on the iPhone X, but a lot faster than the PNG method.
A float texture also has the benefit of direct float transfer, that is, no encoding into multiple uint8 RGBA values on the CPU and no reconstructing of floats on the GPU.
Here's how:
#import <Foundation/Foundation.h>

NSData * tiffencode(float x, float y)
{
// use an enum so these are integer constant expressions and the static buffer below compiles as plain C/Objective-C
enum {
tags = 9,
headerlen = 8+2+tags*12+4,
width = 1,
height = 1,
datalen = width*height*3*4
};
static uint8_t tiff[headerlen+datalen] = {
'I', 'I', 0x2a, 0, //little endian/'I'ntel
8, 0, 0, 0, //index of metadata
tags, 0,
0x00, 1, 4, 0, 1, 0, 0, 0, width, 0, 0, 0, //width
0x01, 1, 4, 0, 1, 0, 0, 0, height, 0, 0, 0, //height
0x02, 1, 3, 0, 1, 0, 0, 0, 32, 0, 0, 0, //bits per sample(s)
0x06, 1, 3, 0, 1, 0, 0, 0, 2, 0, 0, 0, //photometric interpretation: RGB
0x11, 1, 4, 0, 1, 0, 0, 0, headerlen, 0, 0, 0,//strip offset
0x15, 1, 3, 0, 1, 0, 0, 0, 3, 0, 0, 0, //samples per pixel: 3
0x16, 1, 4, 0, 1, 0, 0, 0, height, 0, 0, 0, //rows per strip: height
0x17, 1, 4, 0, 1, 0, 0, 0, datalen, 0, 0, 0, //strip byte length
0x53, 1, 3, 0, 1, 0, 0, 0, 3, 0, 0, 0, //sampleformat: float
0, 0, 0, 0, //end of metadata
//RGBRGB.. pixeldata here
};
float *rawData = (float *)(tiff + headerlen);
rawData[0] = x;
rawData[1] = y;
NSData *data = [NSData dataWithBytes:&tiff length:sizeof(tiff)];
return data;
}
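The resulting NSData can then be handed to the technique the same way as the PNG above (the key name here just mirrors the earlier snippet):
[_sceneView.technique setValue:tiffencode(pos.x, pos.y) forKey:@"blob_pos_"];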
Useful TIFF links I used:
http://www.fileformat.info/format/tiff/corion.htm
http://paulbourke.net/dataformats/tiff/
https://www.fileformat.info/format/tiff/egff.htm
https://www.awaresystems.be/imaging/tiff/tifftags/sampleformat.html
So, I'm almost finished with my little program. The problem is that the game should look like this:
...but it sometimes looks like this:
This never happens in Debug configuration, only in Release. I'm using VS 2015.
I set up my lights like this:
GLfloat lightPos[] = { 0, 20, 0 };
glEnable(GL_NORMALIZE);
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
The ball and the playing field are located at (0, 0, 0) and (0, -1, 0), respectively. Does anyone know what's causing this? Does the game retain something from the last run that messes up the settings?
The whole project is pretty large by now, so I didn't include all of the code, but I can provide more information if you need it.
GL_POSITION takes homogeneous coordinates (four floats), so glLightfv reads one float past the end of your three-element array; whatever happens to sit in that memory differs between Debug and Release builds, which would explain why only Release misbehaves. It should be:
GLfloat lightPos[] = { 0, 20, 0, 1 };
or
GLfloat lightPos[] = { 0, 20, 0, 0 };
depending on whether you want a point light or a directional light.
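For completeness, a sketch of the setup from the question with the fourth component added (here w = 1, i.e. a point light at (0, 20, 0)):
GLfloat lightPos[] = { 0.0f, 20.0f, 0.0f, 1.0f }; // x, y, z, w
glEnable(GL_NORMALIZE);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
// GL_POSITION is transformed by the modelview matrix that is current at this call
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);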
I've been tasked with re-purposing the core of this Three.js example for a project demo. http://threejs.org/examples/#webgl_gpgpu_birds The problem is that I've also been asked to change the elements from birds formed of 3 triangles to a different shape. To form the new shape I'll need 3-4x that many triangles/vertices.
Because of how the example is set up, with the number of birds and their vertices created, organized and animated through the buffer geometry and the shaders, doing this is difficult (at least for me so far). I've gone through the demo and tried changing everything I could find that affects what the shader sees as a single bird.
Does anyone smarter than me have any insight (or experience) tweaking this demo to modify the shapes? I have successfully updated the triangles and vertices to what the new shape should be by adding more triangles/vertices to the shape I need.
EXAMPLE: adding 3 more of these to form the new triangles.
verts_push(
2, 0, 0,
1, 1, 0,
0, 0, 0
);
verts_push(
-2, 0, 0,
-1, 1, 0,
0, 0, 0
);
verts_push(
0, 0, -15,
20, 0, 0,
0, 0, 0
);
EXAMPLE: Doubling the number of triangles/vertices.
var triangles = BIRDS * 6;
var points = triangles * 3;
var vertices = new THREE.BufferAttribute( new Float32Array( points * 3 ), 3 );
But somewhere in the code I'm not able to find what I need to modify/multiply so that the shader correctly counts the vertices and knows that a bird is no longer 3 triangles with 9 vertices but now 6 triangles with 18 vertices (or more). I either get errors or odd shapes with vertices moving that I don't want to move, because whatever I've added doesn't match the vertex count the shader is looking for. So far I have had no luck getting it to work properly. Any help would be appreciated!
After a lot of trial and error I figured out what I needed to modify to both add more triangles and edit what the example considers a single bird. For my purposes I tripled the number of triangles per "bird" by making the following changes.
Edit 1: Multiply the number of triangles by an additional 3:
var triangles = BIRDS * 3 * 3;
Edit 2: Increase the number of verts_push calls from 3 to 9:
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
verts_push(0, 0, 0, 0, 0, 0, 0, 0, 0 );
Edit 3: In the for loop, make the following change to the birdVertex attribute:
birdVertex.array[ v ] = v % (9 * 3);
This successfully gave me enough vertices to play with in order to start creating new shapes as well as controlling the points I wanted to within the birdVS shader.
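In other words, the shader's idea of "one bird" is just a fixed run of vertices, so every place that assumes 3 triangles / 9 vertices per bird has to grow together. A rough sketch of how the counts relate (the PER_BIRD variables are just for illustration; BIRDS and birdVertex come from the example):
var TRIANGLES_PER_BIRD = 3 * 3;                   // Edit 1: was 3 in the original example
var VERTICES_PER_BIRD = TRIANGLES_PER_BIRD * 3;   // 27 vertices, i.e. 9 verts_push() calls (Edit 2)
var triangles = BIRDS * TRIANGLES_PER_BIRD;
var points = triangles * 3;
var vertices = new THREE.BufferAttribute( new Float32Array( points * 3 ), 3 );
// Edit 3: each vertex stores its index within its own bird, which is what the birdVS shader uses
birdVertex.array[ v ] = v % VERTICES_PER_BIRD;    // same as v % (9 * 3)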
This worked even with older versions of Three.js (I tested with r.76)
I am creating a web page to illustrate 3D transformations and I am using Three.js. I have detected a problem when I try to apply a negative scale on the Y axis: the object is not affected (a face inversion should happen, but it doesn't). However, negative scales on the X or Z axis work fine. Any help? This is my code:
var m = new THREE.Matrix4(
scaleX, 0, 0, 0,
0, scaleY, 0, 0,
0, 0, scaleZ, 0,
0, 0, 0, 1
);
cube.applyMatrix(m);
If I use cube.scale.set(scaleX, scaleY, scaleZ) the first transformation is performed correctly, but I can't chain it with other transformations. My application needs the user to be able to apply several transformations in the same scene.
Thanks in advance
Your matrix is not correct.
Try with:
var m = new THREE.Matrix4(
1, 0, 0, scaleX,
0, 1, 0, scaleY,
0, 0, 1, scaleZ,
0, 0, 0, 1
);
cube.applyMatrix(m);
I'm trying to port a simple OpenGL ES 2.0 renderer I made for iOS to desktop OS X, and I'm running into a 'nothing rendering' problem with no errors reported, so I'm at a loss. So far I've narrowed the problem down to a call I make to glVertexAttrib4f, which only misbehaves on OS X. Can anyone take a look at the following code and see why the call to glVertexAttrib4f doesn't work on the desktop?
void Renderer::drawTest()
{
gl::clear( mBackgroundColor );
static Vec2f vertices[3] = { Vec2f( 0, 0 ), Vec2f( 0.5, 1 ), Vec2f( 1, 0 ) };
static int indices[3] = { 0, 1, 2 };
mShader.bind();
glEnableVertexAttribArray( mAttributes.position );
glVertexAttribPointer( mAttributes.position, 2, GL_FLOAT, GL_FALSE, 0, &vertices[0] );
#ifdef USING_GENERIC_ARRAY_POINTER
// works in iOS /w ES2 and OSX:
static Color colors[3] = { Color( 0, 0, 1 ), Color( 0, 0, 1 ), Color( 0, 0, 1 ) };
glEnableVertexAttribArray( mAttributes.color );
glVertexAttribPointer( mAttributes.color, 3, GL_FLOAT, GL_FALSE, 0, &colors[0] );
#else // using generic attribute
// works in iOS, but doesn't work in OSX ?:
glVertexAttrib4f( mAttributes.color, 0, 1, 1, 1 );
#endif
glDrawElements( GL_TRIANGLES, 3, GL_UNSIGNED_INT, &indices[0] );
errorCheck(); // calls glGetError, which always returns GL_NO_ERROR
}
Note: this is example code that I stripped out of something much more complex; please forgive me for not making it more complete.
versions:
desktop OS X is 2.1 ATI-7.18.18
iPhone simulator is OpenGL ES 2.0 APPLE
The reason drawing wasn't happening when calling glVertexAttrib4f was that mAttributes.color was automatically bound to location 0; I did not call glBindAttribLocation, but instead relied on the locations assigned when the program was originally linked. Apparently, in a GL compatibility context (2.1 on OS X), glDrawElements may not draw anything if the attribute bound at location 0 is not enabled, i.e. you haven't called glEnableVertexAttribArray on it.
So, when I explicitly bind position to location 0 and color to location 1, I can call glVertexAttrib*() on the color attribute and it draws just fine on both OS X (legacy) and ES 2.
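For reference, the explicit binding looks roughly like this, applied to the program object before it is linked (the attribute names are placeholders for whatever the shaders declare):
glBindAttribLocation( program, 0, "position" );  // the array-backed attribute lives at location 0
glBindAttribLocation( program, 1, "color" );     // safe to set with glVertexAttrib4f
glLinkProgram( program );                        // bindings only take effect at link time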
This issue persists with Apple's GLUT_3_2_CORE_PROFILE on my iMac running OS X 10.1 with an NVIDIA GeForce GTX 680MX.
The easiest solution was to do as rich.e suggested and explicitly bind position to location 0. In my shaders the position attribute is called vPosition, and I use a function from a textbook, InitShader, to load, compile and link the shaders. Since attribute bindings only take effect when the program is linked, modifying this code:
GLuint program = InitShader( "vshader.glsl", "fshader.glsl" );
glUseProgram( program );
to this:
GLuint program = InitShader( "vshader.glsl", "fshader.glsl" );
glBindAttribLocation(program, 0, "vPosition");
glLinkProgram(program);
glUseProgram( program );
solved my problem.