DirectX11 Sampling 1D Texture Gives Incorrect Color - directx-11

For background I am using SharpDX version 4.2.0
I am having a hard time getting the correct color in a pixel shader by sampling a 1D texture.
Here's how I am making the 1D texture; I pad the width because textures can be finicky when their size is not a multiple of DataBox.RowPitch:
public static ShaderResourceView CreateUpdateable1DTexture(SharpDX.Direct3D11.Device device, int width)
{
    width = ((width / 128) * 128) + 128;
    // Describe and create a Texture1D.
    Texture1DDescription textureDesc = new Texture1DDescription()
    {
        MipLevels = 1,
        Format = Format.R8G8B8A8_UNorm,
        Width = width,
        ArraySize = 1,
        BindFlags = BindFlags.ShaderResource,
        Usage = ResourceUsage.Dynamic,
        CpuAccessFlags = CpuAccessFlags.Write,
    };
    var tex1D = new Texture1D(device, textureDesc);
    var resourceView = new ShaderResourceView(device, tex1D);
    tex1D.Dispose();
    return resourceView;
}
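The width padding above can be sketched in JavaScript (a minimal sketch; 128 stands in for the row-pitch alignment the question mentions). Note that the formula always adds a full 128, so a width that is already a multiple of 128 still grows by another 128 texels:

```javascript
// Reproduce the C# integer arithmetic: (width / 128) * 128 + 128
function padWidthTo128(width) {
  // Math.floor mirrors C# integer division for positive widths
  return Math.floor(width / 128) * 128 + 128;
}

console.log(padWidthTo128(1));   // 128
console.log(padWidthTo128(200)); // 256
console.log(padWidthTo128(128)); // 256 (already aligned, still padded)
```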
Here's how I am updating the texture:
public static unsafe void Update1DTexture(SharpDX.Direct3D11.Device device, ShaderResourceView textureResource, SharpDX.Color[] colors)
{
    var modifiedColorLength = ((colors.Length / 128) * 128) + 128;
    // double check the bounds are fine for this texture
    var texture = textureResource.ResourceAs<Texture1D>();
    if (texture.Description.Width != modifiedColorLength)
        throw new InvalidOperationException("Can't update this texture with this color array, texture size mismatch");
    byte[] textureStreamBytes = new byte[modifiedColorLength * 4];
    fixed (SharpDX.Color* colorPtr = &colors[0])
    {
        fixed (byte* bytePtr = &textureStreamBytes[0])
        {
            System.Buffer.MemoryCopy(colorPtr, bytePtr, textureStreamBytes.Length, colors.Length * 4);
        }
    }
    DataBox databox = device.ImmediateContext.MapSubresource(texture, 0, 0, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out DataStream stream);
    if (!databox.IsEmpty)
        stream.Write(textureStreamBytes, 0, textureStreamBytes.Length);
    device.ImmediateContext.UnmapSubresource(textureResource.Resource, 0);
    texture.Dispose();
}
The color array only has one color in it, which should be R:0.75, G:0.5, B:0, A:1.
I verified in the Visual Studio Graphics Debugger that my texture reaches the pixel shader with that color as the first texel. The problem is that in my shader the reported color is R:0.874509800, G:0.749019600, B:0, A:0.
Here's my sampler state description:
SamplerStateDescription samplerDesc = new SamplerStateDescription()
{
    Filter = Filter.MinMagMipLinear,
    AddressU = TextureAddressMode.Border,
    AddressV = TextureAddressMode.Border,
    AddressW = TextureAddressMode.Border,
    MipLodBias = 0,
    MaximumAnisotropy = 1,
    ComparisonFunction = Comparison.Always,
    BorderColor = new Color4(255, 255, 0, 0),
    MinimumLod = 0,
    MaximumLod = float.MaxValue
};
and all I am doing to sample this texture is this:
Texture2DArray shaderTextures : register(t0);
Texture1D upperLeftCoordsTexture : register(t1);
SamplerState SampleType;
float4 upperLeftCoords = upperLeftCoordsTexture.Sample(SampleType, 0);
again, upperLeftCoords reports R:0.874509800, G:0.749019600, B:0, A:0
but I am expecting R:0.75, G:0.5, B:0, A:1
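For reference, 0.75 cannot survive a round trip through an 8-bit UNORM channel exactly, and the reported R and G happen to be exactly a 50/50 blend of the quantized texel with 1.0, which is what a linear filter sampling right on the texture edge with Border addressing and a white-ish border could produce. A quick arithmetic check (the truncating float-to-byte conversion is an assumption about how the color was stored):

```javascript
// Quantize a float channel to an 8-bit UNORM byte (truncating) and back.
function toByte(f) { return Math.floor(f * 255); }
function toFloat(b) { return b / 255; }

const storedR = toFloat(toByte(0.75)); // 191/255 ≈ 0.7490196
const storedG = toFloat(toByte(0.5));  // 127/255 ≈ 0.4980392

// 50/50 blend with a border channel of 1.0 (assumption: linear filter
// halfway between the first texel and the border at u = 0):
const blendedR = 0.5 * storedR + 0.5; // ≈ 0.8745098, the reported R
const blendedG = 0.5 * storedG + 0.5; // ≈ 0.7490196, the reported G
```

The reported alpha of 0 does not fit this simple model, so this is only a hint at where the numbers may come from, not a definitive explanation.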
Also I am sure I am passing the textures in the right order:
deviceContext.PixelShader.SetShaderResource(0, textureArray);
deviceContext.PixelShader.SetShaderResource(1, upperLeftCords);
So I don't know what I am doing wrong that gives me the wrong color in the pixel shader. Do you happen to know what I am doing wrong here?

Related

Skia / SkiaSharp - Rotating shapes around their own vertical axis

I generate the scene above using the OnDraw method below:
protected override void OnDraw(SKCanvas canvas, int width, int height)
{
    int i = 0;
    int step = 0;
    List<SKRect> rects = new List<SKRect>();
    // get the 2D equivalent of the 3D matrix
    var rotationMatrix = rotationView.Matrix;
    // get the properties of the rectangle
    var length = Math.Min(width / 6, height / 6);
    canvas.Clear(EffectMedia.Colors.XamarinLightBlue);
    foreach (var n in numbers)
    {
        var rect = new SKRect(0 + step, 0, 100 + step, 100);
        rects.Add(rect);
        step += 120;
    }
    //var sideHoriz = rotationMatrix.MapPoint(new SKPoint(0, 1)).Y > 0;
    var sideVert = rotationMatrix.MapPoint(new SKPoint(1, 0)).X > 0;
    var paint = new SKPaint
    {
        Color = sideVert ? EffectMedia.Colors.XamarinPurple : EffectMedia.Colors.XamarinGreen,
        Style = SKPaintStyle.Fill,
        IsAntialias = true
    };
    // first do 2D translation to the center of the screen
    canvas.Translate((width - (120 * numbers.Count)) / 2, height / 2);
    // The following line is disabled because it makes the whole canvas rotate!
    // canvas.Concat(ref rotationMatrix);
    foreach (var n in numbers)
    {
        canvas.RotateDegrees((float)-3);
        canvas.DrawRoundRect(rects[i], 30, 30, paint);
        var shadow = SKShader.CreateLinearGradient(
            new SKPoint(0, 0), new SKPoint(0, length * 2),
            new[] { paint.Color.WithAlpha(127), paint.Color.WithAlpha(0) },
            null,
            SKShaderTileMode.Clamp);
        var paintShadow = new SKPaint
        {
            Shader = shadow,
            Style = SKPaintStyle.Fill,
            IsAntialias = true,
            BlendMode = SKBlendMode.SoftLight
        };
        foreach (var r in rects)
        {
            r.Offset(0, 105);
            canvas.DrawRoundRect(r, 30, 30, paintShadow);
        }
        i++;
    }
}
The idea is to make all those rounded boxes rotate (vertically) around their own axis.
I tried using SKPath + Transform and saving and restoring the rotationMatrix and/or the canvas, but I can't find a way to get 6 independently rotating boxes (canvas.Concat(ref rotationMatrix); makes the whole canvas rotate [*]).
Do you have any hint on how that can be achieved?
Note [*]: there's a call to rotationView.RotateYDegrees(5) every X milliseconds to update the rotationMatrix used by OnDraw.
This is what I'd like to achieve; any hints / directions would be really appreciated... :-)
The following piece of code rotates those shapes around their Z-axis:
canvas.Save();
canvas.RotateDegrees(degrees, rects[i].MidX, rects[i].MidY);
canvas.DrawRoundRect(rects[i], 30, 30, paint);
canvas.Restore();
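One common 2D trick for a rotation around a shape's own vertical (Y) axis is to scale horizontally by cos(angle) around the shape's center, e.g. canvas.Scale((float)Math.Cos(rad), 1, rects[i].MidX, rects[i].MidY) before drawing. The projection math behind that trick, sketched in JavaScript (projectX is a hypothetical helper, not a SkiaSharp API; orthographic projection, no perspective):

```javascript
// Project an x coordinate as if the shape rotated `angleDeg` around a
// vertical axis through `midX`.
function projectX(x, midX, angleDeg) {
  const cos = Math.cos(angleDeg * Math.PI / 180);
  return midX + (x - midX) * cos;
}

console.log(projectX(0, 50, 0));   // 0: no rotation
console.log(projectX(0, 50, 90));  // ≈ 50: edge-on, collapses to the axis
console.log(projectX(0, 50, 180)); // 100: mirrored
```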
Thanks

HTML5 fillStyle does not work; it shows black regardless of whatever value I pass into it. What did I do wrong?

Why doesn't fillStyle work in this code? I console.log the variable that I pass in and it shows the correct value, yet the canvas still shows a black square instead of the color I passed in. What did I do wrong?
const canvas = document.getElementById('tetris');
const draw2D = canvas.getContext("2d");
const ROW = 20;
const COL = 10;
//draw2D.fillStyle = '#000';
const strColor = "#FFFFFF";
const color = "#000000";
draw2D.scale(20, 20);
function drawSquare(x, y, bgColor, lineColor) {
    console.log('bg color is: ' + bgColor);
    draw2D.fillStyle = bgColor;
    draw2D.fillRect(x, y, 1, 1);
    console.log('line color is: ' + lineColor);
    draw2D.strokeColor = lineColor;
    draw2D.strokeRect(x, y, 1, 1);
};
drawSquare(0, 0, color, strColor);
<canvas id="tetris"></canvas>
The issue is that you're filling a rect 1 unit large and then stroking it with the default line width of 1 unit, so the stroke completely covers the fill. Also, you wrote strokeColor instead of strokeStyle.
const canvas = document.getElementById('tetris');
const draw2D = canvas.getContext("2d");
const ROW = 20;
const COL = 10;
//draw2D.fillStyle = '#000';
const strColor = "red";
const color = "green";
draw2D.scale(20, 20);
function drawSquare(x, y, bgColor, lineColor) {
    console.log('bg color is: ' + bgColor);
    draw2D.fillStyle = bgColor;
    draw2D.fillRect(x, y, 1, 1);
    console.log('line color is: ' + lineColor);
    draw2D.lineWidth = 0.1;
    draw2D.strokeStyle = lineColor;
    draw2D.strokeRect(x, y, 1, 1);
};
drawSquare(0, 0, color, strColor);
<canvas id="tetris"></canvas>
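The geometry behind the fix can be checked with a little arithmetic: a stroke is centered on the path and extends lineWidth / 2 to each side, so it completely hides the fill of a square whenever the line width is at least the square's size (a sketch of the reasoning, not canvas API):

```javascript
// The stroke band reaches the square's center (the interior point
// farthest from any edge) when lineWidth / 2 >= squareSize / 2.
function strokeHidesFill(lineWidth, squareSize) {
  return lineWidth / 2 >= squareSize / 2;
}

console.log(strokeHidesFill(1, 1));   // true: the original bug
console.log(strokeHidesFill(0.1, 1)); // false: the fixed lineWidth
```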

Three.js and applying texture

I'm using a neat script found online (Teal 3d dice roller).
http://a.teall.info/dice/
The dice numbers are hardcoded as standard fonts in the script (no image textures applied).
I would like to get rid of those numbers and apply pictogram textures instead to customize the dice to fit my needs.
So, right now, I'm just trying to apply one unique texture to all faces (even though I plan to obviously have 6 different textures eventually but first thing first).
Here is the original script function:
this.create_dice_materials = function(face_labels, size, margin) {
    function create_text_texture(text, color, back_color) {
        /* --- start of the part I planned to modify --- */
        if (text == undefined) return null;
        var canvas = document.createElement("canvas");
        var context = canvas.getContext("2d");
        var ts = calc_texture_size(size + size * 2 * margin) * 2;
        canvas.width = canvas.height = ts;
        context.font = ts / (1 + 2 * margin) + "pt Arial";
        context.fillStyle = back_color;
        context.fillRect(0, 0, canvas.width, canvas.height);
        context.textAlign = "center";
        context.textBaseline = "middle";
        context.fillStyle = color;
        context.fillText(text, canvas.width / 2, canvas.height / 2);
        /* --- End of the part I planned to modify --- */
        var texture = new THREE.Texture(canvas);
        texture.needsUpdate = true;
        return texture;
    }
    var materials = [];
    for (var i = 0; i < face_labels.length; ++i)
        materials.push(new THREE.MeshPhongMaterial($t.copyto(this.material_options,
            { map: create_text_texture(face_labels[i], this.label_color, this.dice_color) })));
    return materials;
}
And here is my attempt to apply a texture instead:
this.create_dice_materials = function(face_labels, size, margin) {
    function create_text_texture(text, color, back_color) {
        /* --- start of the modified part --- */
        var img = document.getElementById("image_name");
        var canvas = document.createElement("canvas");
        var cs = img.height;
        canvas.width = img.height;
        canvas.height = img.height;
        var context = canvas.getContext("2d");
        context.drawImage(img, 0, 0, cs, cs, 0, 0, cs, cs);
        /* --- End of the modified part --- */
        var texture = new THREE.Texture(canvas);
        texture.needsUpdate = true;
        return texture;
    }
    var materials = [];
    for (var i = 0; i < face_labels.length; ++i)
        materials.push(new THREE.MeshPhongMaterial($t.copyto(this.material_options,
            { map: create_text_texture(face_labels[i], this.label_color, this.dice_color) })));
    return materials;
}
Note: the texture picture is embedded within the html file as an img tag. It shows up alright as a flat html picture and it has the proper id which is "image_name". So, this shouldn't be part of the problem.
Anyway, those changes don't break the script (no exceptions appear in the console while executing it), but nothing shows up on the dice either: no numbers, no texture.
Any idea what is wrong and how I should proceed to make it work?
So far I suspect two things:
1) the "drawImage" parameters
2) the "map" parameter within the materials array
Thanks.
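On suspicion 1): the nine-argument drawImage call crops a source rectangle first, and using img.height for both source dimensions only works for square images (it also sizes the canvas to 0 if the script runs before the image has finished loading). A hypothetical helper computing a centered square crop for the first four source arguments, sketched in JavaScript:

```javascript
// Compute a centered square source crop for a (possibly non-square)
// image, usable as drawImage's sx, sy, sWidth, sHeight arguments.
function centeredSquareCrop(width, height) {
  const size = Math.min(width, height);
  return {
    sx: (width - size) / 2,
    sy: (height - size) / 2,
    size: size,
  };
}

// e.g. const c = centeredSquareCrop(img.width, img.height);
//      context.drawImage(img, c.sx, c.sy, c.size, c.size, 0, 0, cs, cs);
```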
For whatever reason, it worked on a distant server, not locally.
So, I guess it is solved.

Xamarin iOS: How to change the color of a UIImage pixel by pixel

Sorry if this has already been answered somewhere but I could not find it.
Basically, I am receiving a QR code where the code itself is black and the background is white (this is a UIImage). I would like to change the white background to transparent or a custom color, and change the QR code color from black to white (in Xamarin.iOS).
I already know how to get the color of a specific Pixel using the following code:
static UIColor GetPixelColor(CGBitmapContext context, byte[] rawData,
    UIImage barcode, CGPoint pt)
{
    var handle = GCHandle.Alloc(rawData);
    UIColor resultColor = null;
    using (context)
    {
        context.DrawImage(new CGRect(-pt.X, pt.Y - barcode.Size.Height,
            barcode.Size.Width, barcode.Size.Height), barcode.CGImage);
        float red = (rawData[0]) / 255.0f;
        float green = (rawData[1]) / 255.0f;
        float blue = (rawData[2]) / 255.0f;
        float alpha = (rawData[3]) / 255.0f;
        resultColor = UIColor.FromRGBA(red, green, blue, alpha);
    }
    return resultColor;
}
This is currently my function:
static UIImage GetRealQRCode(UIImage barcode)
{
    int width = (int)barcode.Size.Width;
    int height = (int)barcode.Size.Height;
    var bytesPerPixel = 4;
    var bytesPerRow = bytesPerPixel * width;
    var bitsPerComponent = 8;
    var colorSpace = CGColorSpace.CreateDeviceRGB();
    var rawData = new byte[bytesPerRow * height];
    CGBitmapContext context = new CGBitmapContext(rawData, width,
        height, bitsPerComponent, bytesPerRow, colorSpace,
        CGImageAlphaInfo.PremultipliedLast);
    for (int i = 0; i < rawData.Length; i++)
    {
        for (int j = 0; j < bytesPerRow; j++)
        {
            CGPoint pt = new CGPoint(i, j);
            UIColor currentColor = GetPixelColor(context, rawData,
                barcode, pt);
        }
    }
}
Does anyone know how to do this? Thanks in advance!
Assuming your UIImage is backed by a CGImage (and not a CIImage):
var cgImage = ImageView1.Image.CGImage; // Your UIImage with a CGImage backing image
var bytesPerPixel = 4;
var bitsPerComponent = 8;
var bytesPerUInt32 = sizeof(UInt32) / sizeof(byte);
var width = cgImage.Width;
var height = cgImage.Height;
var bytesPerRow = bytesPerPixel * cgImage.Width;
var numOfBytes = cgImage.Height * cgImage.Width * bytesPerUInt32;
IntPtr pixelPtr = IntPtr.Zero;
try
{
    pixelPtr = Marshal.AllocHGlobal((int)numOfBytes);
    using (var colorSpace = CGColorSpace.CreateDeviceRGB())
    {
        CGImage newCGImage;
        using (var context = new CGBitmapContext(pixelPtr, width, height, bitsPerComponent, bytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast))
        {
            context.DrawImage(new CGRect(0, 0, width, height), cgImage);
            unsafe
            {
                var currentPixel = (byte*)pixelPtr.ToPointer();
                for (int i = 0; i < height; i++)
                {
                    for (int j = 0; j < width; j++)
                    {
                        // RGBA8888 pixel format
                        if (*currentPixel == byte.MinValue)
                        {
                            *currentPixel = byte.MaxValue;
                            *(currentPixel + 1) = byte.MaxValue;
                            *(currentPixel + 2) = byte.MaxValue;
                        }
                        else
                        {
                            *currentPixel = byte.MinValue;
                            *(currentPixel + 1) = byte.MinValue;
                            *(currentPixel + 2) = byte.MinValue;
                            *(currentPixel + 3) = byte.MinValue;
                        }
                        currentPixel += 4;
                    }
                }
            }
            newCGImage = context.ToImage();
        }
        var uiimage = new UIImage(newCGImage);
        imageView2.Image = uiimage; // Do something with your new UIImage
    }
}
finally
{
    if (pixelPtr != IntPtr.Zero)
        Marshal.FreeHGlobal(pixelPtr);
}
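The per-pixel branch above (black becomes opaque white, everything else becomes fully transparent) can be tried out away from iOS on a plain RGBA8888 byte array; this JavaScript version mirrors the same logic, keying off the red channel just like the pointer loop:

```javascript
// Walk an RGBA8888 byte array 4 bytes at a time: pixels with red == 0
// become white (alpha untouched), all others become transparent black.
function recolorQr(bytes) {
  for (let i = 0; i < bytes.length; i += 4) {
    if (bytes[i] === 0) {
      bytes[i] = 255; bytes[i + 1] = 255; bytes[i + 2] = 255;
    } else {
      bytes[i] = 0; bytes[i + 1] = 0; bytes[i + 2] = 0; bytes[i + 3] = 0;
    }
  }
  return bytes;
}

// One black pixel and one white pixel:
console.log(recolorQr([0, 0, 0, 255, 255, 255, 255, 255]));
// → [255, 255, 255, 255, 0, 0, 0, 0]
```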
If you do not actually need access to the individual pixels but only the end result, you can use Core Image's pre-existing filters: first invert the colors, then use the black pixels as an alpha mask. Otherwise see my other answer using Marshal.AllocHGlobal and pointers.
using (var coreImage = new CIImage(ImageView1.Image))
using (var invertFilter = CIFilter.FromName("CIColorInvert"))
{
    invertFilter.Image = coreImage;
    using (var alphaMaskFiter = CIFilter.FromName("CIMaskToAlpha"))
    {
        alphaMaskFiter.Image = invertFilter.OutputImage;
        var newCoreImage = alphaMaskFiter.OutputImage;
        var uiimage = new UIImage(newCoreImage);
        imageView2.Image = uiimage; // Do something with your new UIImage
    }
}
The plus side is this is blazing fast ;-) and the results are the same:
If you need even faster processing assuming you are batch converting a number of these images, you can write a custom CIKernel that incorporates these two filters into one kernel and thus only process the image once.
Xamarin.iOS: with this method you can convert all white to transparent. For me it only works with ".jpg" files; with .png it doesn't work, but you can convert the files to jpg and then call this method.
public static UIImage ProcessImage(UIImage image)
{
    CGImage rawImageRef = image.CGImage;
    nfloat[] colorMasking = new nfloat[6] { 222, 255, 222, 255, 222, 255 };
    CGImage imageRef = rawImageRef.WithMaskingColors(colorMasking);
    UIImage imageB = UIImage.FromImage(imageRef);
    return imageB;
}
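WithMaskingColors takes min/max pairs per channel, so the array above masks out pixels whose R, G and B each fall in [222, 255], i.e. near-white. The predicate can be checked quickly in JavaScript (a sketch of the range logic, not the CoreGraphics API):

```javascript
// The masking array {222,255, 222,255, 222,255} means: a pixel is made
// transparent when each of R, G and B lies within its [min, max] range.
function isMasked(r, g, b) {
  const inRange = (v) => v >= 222 && v <= 255;
  return inRange(r) && inRange(g) && inRange(b);
}

console.log(isMasked(255, 255, 255)); // true: pure white is removed
console.log(isMasked(230, 240, 250)); // true: near-white is removed too
console.log(isMasked(0, 0, 0));       // false: the black code stays opaque
```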
Regards

WebGL: Adding textures causes DrawElements error (attribute buffers insufficient space)

I want to draw a 3d model (e.g. a house) from a model.json file. I have no problem drawing the house in a single color like blue. However, when I try to use textures instead of a color, I receive an error:
WebGL: DrawElements: bound vertex attribute buffers do not have
sufficient size for given indices from the bound element array
I've searched the web, and tried hundreds of different alterations, and I simply can't get past this error - I'm not good enough at WebGL to see what's wrong. There is an images folder with multiple texture images, but at this point, if I can just draw one of the textures for the entire house, I'd be ecstatic.
The problem lies in renderable.js (attached) but you can access all files at http://tinyurl.com/mk9vbta. Any help would be greatly appreciated, don't know where else to go.
renderable.js
"use strict";
function RenderableModel(gl,model){
function Drawable(attribLocations, vArrays, nVertices, indexArray, drawMode){
// Create a buffer object
var vertexBuffers=[];
var nElements=[];
var nAttributes = attribLocations.length;
for (var i=0; i<nAttributes; i++){
if (vArrays[i]){
vertexBuffers[i] = gl.createBuffer();
if (!vertexBuffers[i]) {
console.log('Failed to create the buffer object');
return null;
}
// Bind the buffer object to an ARRAY_BUFFER target
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffers[i]);
// Write date into the buffer object
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vArrays[i]), gl.STATIC_DRAW);
// Texture coords must always be passed as last attribute location (a_Attribute)
nElements[i] = (i == (nAttributes - 1))? 2: vArrays[i].length/nVertices;
}
else{
vertexBuffers[i]=null;
}
}
var indexBuffer=null;
if (indexArray){
indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indexArray), gl.STATIC_DRAW);
}
var a_texture = createTexture("texture0.jpg");
// Set the texture unit 0 to the sampler
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, a_texture);
this.draw = function (){
nElements[1] = 2;
for (var i=0; i<nAttributes; i++){
if (vertexBuffers[i]){
gl.enableVertexAttribArray(attribLocations[i]);
// Bind the buffer object to target
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffers[i]);
// Assign the buffer object to a_Position variable
gl.vertexAttribPointer(attribLocations[i], nElements[i], gl.FLOAT, false, 24, 0);
}
else{
gl.disableVertexAttribArray(attribLocations[i]);
gl.vertexAttrib3f(attribLocations[i],1,1,1);
//console.log("Missing "+attribLocations[i])
}
}
if (indexBuffer){
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.drawElements(drawMode, indexArray.length, gl.UNSIGNED_SHORT, 0);
}
else{
gl.drawArrays(drawMode, 0, nVertices);
}
}
}
// Vertex shader program
var VSHADER_SOURCE =
'attribute vec2 textureCoord;\n' +
'attribute vec3 position;\n' +
'uniform mat4 modelT, viewT, projT;\n'+
//'varying vec4 v_Color;\n' +
'varying highp vec2 vTextureCoord;\n' +
'void main() {\n' +
' gl_Position = projT*viewT*modelT*vec4(position,1.0);\n' +
//' v_Color = vec4(0, 1.0, 0.0, 1.0);\n' + // use instead of textures for now
' vTextureCoord = textureCoord;\n' +
'}\n';
// Fragment shader program
var FSHADER_SOURCE =
'#ifdef GL_ES\n' +
'precision highp float;\n' +
'#endif\n' +
'uniform sampler2D uSampler;\n' +
//'varying vec4 v_Color;\n' + // use instead of texture
'varying highp vec2 vTextureCoord;\n' +
'void main() {\n' +
// 'vec4 v_Color = vec4(texture2D(uSampler, vTextureCoord).rgb, 1.0);\n'+
' gl_FragColor = texture2D(uSampler, vTextureCoord);\n' +
'}\n';
// create program
var program = createProgram(gl, VSHADER_SOURCE, FSHADER_SOURCE);
if (!program) {
console.log('Failed to create program');
return false;
}
var a_Position = gl.getAttribLocation(program, 'position');
var a_TextureCoord = gl.getAttribLocation(program, 'textureCoord'); // for texture
var a_Locations = [a_Position,a_TextureCoord];
// Get the location/address of the uniform variable inside the shader program.
var mmLoc = gl.getUniformLocation(program,"modelT");
var vmLoc = gl.getUniformLocation(program,"viewT");
var pmLoc = gl.getUniformLocation(program,"projT");
// textures
var textureLoc = gl.getUniformLocation(program,'uSampler');
var drawables=[];
var modelTransformations=[];
var nDrawables=0;
var nNodes = (model.nodes)? model.nodes.length:1;
var drawMode=(model.drawMode)?gl[model.drawMode]:gl.TRIANGLES;
for (var i= 0; i<nNodes; i++){
var nMeshes = (model.nodes)?(model.nodes[i].meshIndices.length):(model.meshes.length);
for (var j=0; j<nMeshes;j++){
var index = (model.nodes)?model.nodes[i].meshIndices[j]:j;
var mesh = model.meshes[index];
drawables[nDrawables] = new Drawable(
a_Locations,[mesh.vertexPositions, mesh.vertexTexCoordinates],
mesh.vertexPositions.length/3,
mesh.indices, drawMode
);
var m = new Matrix4();
if (model.nodes)
m.elements=new Float32Array(model.nodes[i].modelMatrix);
modelTransformations[nDrawables] = m;
nDrawables++;
}
}
// Get the location/address of the vertex attribute inside the shader program.
this.draw = function (cameraPosition,pMatrix,vMatrix,mMatrix)
{
gl.useProgram(program);
gl.uniformMatrix4fv(pmLoc, false, pMatrix.elements);
gl.uniformMatrix4fv(vmLoc, false, vMatrix.elements);
gl.uniform1i(textureLoc, 0);
// pass variables determined at runtime
for (var i= 0; i<nDrawables; i++){
// pass model matrix
var mMatrix=modelTransformations[i];
gl.uniformMatrix4fv(mmLoc, false, mMatrix.elements);
drawables[i].draw();
}
gl.useProgram(null);
}
this.getBounds=function() // Computes Model bounding box
{
var xmin, xmax, ymin, ymax, zmin, zmax;
var firstvertex = true;
var nNodes = (model.nodes)?model.nodes.length:1;
for (var k=0; k<nNodes; k++){
var m = new Matrix4();
if (model.nodes)m.elements=new Float32Array(model.nodes[k].modelMatrix);
//console.log(model.nodes[k].modelMatrix);
var nMeshes = (model.nodes)?model.nodes[k].meshIndices.length:model.meshes.length;
for (var n = 0; n < nMeshes; n++){
var index = (model.nodes)?model.nodes[k].meshIndices[n]:n;
var mesh = model.meshes[index];
for(var i=0;i<mesh.vertexPositions.length; i+=3){
var vertex = m.multiplyVector4(new Vector4([mesh.vertexPositions[i],mesh.vertexPositions[i+1],mesh.vertexPositions[i+2],1])).elements;
//if (i==0){
// console.log([mesh.vertexPositions[i],mesh.vertexPositions[i+1],mesh.vertexPositions[i+2]]);
// console.log([vertex[0], vertex[1], vertex[2]]);
//}
if (firstvertex){
xmin = xmax = vertex[0];
ymin = ymax = vertex[1];
zmin = zmax = vertex[2];
firstvertex = false;
}
else{
if (vertex[0] < xmin) xmin = vertex[0];
else if (vertex[0] > xmax) xmax = vertex[0];
if (vertex[1] < ymin) ymin = vertex[1];
else if (vertex[1] > ymax) ymax = vertex[1];
if (vertex[2] < zmin) zmin = vertex[2];
else if (vertex[2] > zmax) zmax = vertex[2];
}
}
}
}
var dim= {};
dim.min = [xmin,ymin,zmin];
dim.max = [xmax,ymax,zmax];
//console.log(dim);
return dim;
}
// Load texture image and create/return texture object
function createTexture(imageFileName)
{
var tex = gl.createTexture();
var img = new Image();
img.onload = function(){
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL,true);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
gl.bindTexture(gl.TEXTURE_2D, null);
}
img.src = imageFileName;
return tex;
}
}
The error means one of your indices is too large for the buffers you currently have attached.
Example: Imagine you have a position buffer with 3 positions in it
[0.123, 0.010, 0.233,
0.423, 0.312, 0.344,
0.933, 1.332, 0.101]
Now imagine you make an index buffer
[0, 1, 3]
You've only got 3 positions, so the only valid indices are 0, 1, and 2. 3 is out of range. That's the error you're getting.
Some possibilities:
Your data could just be bad. Check your indices
You drew a model with fewer vertices but more attributes, then drew a different model with more vertices but fewer attributes, and left the attributes for the previous model enabled while drawing the 2nd model.
In other words
// setup first model with only 3 vertices, both positions and colors.
gl.enableVertexAttribArray(0);
gl.bindBuffer(gl.ARRAY_BUFFER, bufferWith3Positions);
gl.vertexAttribPointer(0, ....);
gl.enableVertexAttribArray(1);
gl.bindBuffer(gl.ARRAY_BUFFER, bufferWith3Colors);
gl.vertexAttribPointer(1, ....);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indicesForFirstModel);

// setup second model with 6 vertices but no colors
gl.bindBuffer(gl.ARRAY_BUFFER, bufferWith6Positions);
gl.vertexAttribPointer(0, ....);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indicesForSecondModel);
You'll get an error because attrib #1 is still referencing bufferWith3Colors. You need to turn that attribute off.
gl.disableVertexAttribArray(1);
Note: That assumes the shader is still using attribute #1. If it's not you shouldn't get an error even if bufferWith3Colors is still attached to attribute #1.
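A cheap way to catch this class of error before calling drawElements is to validate the index array against the vertex count of every enabled attribute buffer. A hypothetical helper (indicesInRange is not a WebGL API, just a debugging sketch):

```javascript
// Every index must address a vertex that exists in ALL enabled attribute
// buffers, so the largest index must be smaller than the smallest
// vertex count among them.
function indicesInRange(indices, vertexCounts) {
  const maxIndex = Math.max(...indices);
  const minVertices = Math.min(...vertexCounts);
  return maxIndex < minVertices;
}

console.log(indicesInRange([0, 1, 3], [3]));    // false: index 3 is out of range
console.log(indicesInRange([0, 1, 2], [6, 3])); // true: all indices fit both buffers
```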
