How to load TexturePacker spritesheets in ThreeJS? - three.js

I'm trying to load spritesheets from TexturePacker in ThreeJS. Each sheet comprises an image and a JSON file: the image packs a number of small sprites together, and the JSON defines the location and size of each sprite within the image.
I have tried three methods for loading:
1. using ThreeJS loaders for the JSON and image, and assigning new textures with different repeat and offset values
2. using WebGLRenderTarget buffers to crop the source image into
3. using Canvas buffers to crop the source image into
The method using multiple texture instances with different offsets should work, since I'm not copying the source image. But when I run an animation by switching a material's texture, it uses a huge amount of RAM, as if the entire source spritesheet were being copied into memory for each texture. If I instead animate by changing a single texture's offsets, memory usage is fine, but the offset change is applied to every object that uses the same source spritesheet.
The WebGLRenderTarget method needs a camera and a scene for cropping the textures, plus a sprite added to the scene. Its output is unusable: it doesn't produce a 1:1 crop of the original texture, and it's very slow to load. Is there a way to render textures 1:1 into smaller buffers in ThreeJS?
The Canvas method worked best: I create a canvas element for each sprite and crop the spritesheet into it. The result is 1:1 and good quality, but the whole point of a spritesheet is that the GPU has only a single image to address, and this approach needs an HTML loader step. Ideally I don't want to crop the spritesheet into smaller texture buffers at all.
Why does using the same large source image with multiple THREE.Texture objects use so much memory? I expected it to keep a single texture in memory, with the Texture objects simply displaying that one texture at different offsets.

I found a way that works.
First, I load the spritesheet image via a ThreeJS ImageLoader and store it in _spritesheets[textureID].texture, then create a single WebGLTexture from it by hand:
// The spritesheet image, previously loaded via THREE.ImageLoader
let texture = this._spritesheets[textureID].texture;
// Upload it to the GPU once, through the raw WebGL context
let gl = this._renderer.getContext();
let webGLTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, webGLTexture);
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
Then, for each frame, I point the Texture object's __webglTexture property at this shared WebGLTexture and set __webglInit to true so ThreeJS doesn't allocate a new buffer for it:
let frames = textureJSON.frames;
for (let frameID of Object.keys(frames)) {
    let frame = frames[frameID];
    // Each frame gets its own THREE.Texture referencing the same image...
    let t = new THREE.Texture(texture);
    let data = frame.frame;
    // ...with repeat/offset mapping it to the frame's sub-rectangle
    t.repeat.set(data.w / texture.width, data.h / texture.height);
    t.offset.x = data.x / texture.width;
    t.offset.y = 1 - data.h / texture.height - data.y / texture.height;
    // Reuse the shared GPU texture and mark it as initialised so
    // ThreeJS won't upload the image again for this Texture
    let textureProperties = this._renderer.properties.get(t);
    textureProperties.__webglTexture = webGLTexture;
    textureProperties.__webglInit = true;
    this._textures[frameID] = {};
    this._textures[frameID].texture = t;
    this._textures[frameID].settings = { wrapS: 1, wrapT: 1, magFilter: THREE.LinearFilter, minFilter: THREE.NearestFilter };
}
The spritesheet JSON is loaded via a ThreeJS FileLoader. I store the per-frame textures by frame ID in a _textures object and can then assign them to a material's map property, as sketched below.
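For completeness, here is a rough sketch of that last step. The file name, frame ID, and material setup are illustrative assumptions, not part of the original code:
const fileLoader = new THREE.FileLoader();
fileLoader.load('spritesheet.json', (text) => {
    const textureJSON = JSON.parse(text);
    // ... run the frame loop shown above to fill this._textures ...

    // Hypothetical usage: show one sprite frame on a mesh
    const material = new THREE.MeshBasicMaterial({
        map: this._textures['walk_01.png'].texture, // frame IDs come from the TexturePacker JSON
        transparent: true
    });
});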

Related

Reading Pixels in WebGL 2 as Float values

I need to read the pixels of my framebuffer as float values.
My goal is a fast transfer of lots of particles between CPU and GPU so I can process them in real time. To that end I store the particle properties in a floating-point texture.
Whenever a new particle is added, I want to read the current particle array back from the texture, add the new particle's properties, and write it back into the texture (this is the only way I could think of to add particles dynamically while processing them on the GPU).
I am using WebGL 2, since it supports reading pixels back into a PIXEL_PACK_BUFFER target. I'm testing the code in Firefox Nightly. The code in question looks like this:
// Initialize the WebGLBuffer
this.m_particlePosBuffer = gl.createBuffer();
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, this.m_particlePosBuffer);
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
...
// In the renderloop, bind the buffer and read back the pixels
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, this.m_particlePosBuffer);
gl.readBuffer(gl.COLOR_ATTACHMENT0); // Framebuffer texture is bound to this attachment
gl.readPixels(0, 0, _texSize, _texSize, gl.RGBA, gl.FLOAT, 0);
I get this error in my console:
TypeError: Argument 7 of WebGLRenderingContext.readPixels could not be converted to any of: ArrayBufferView, SharedArrayBufferView.
But looking at the current WebGL 2 Specification, this function call should be possible. Using the type gl.UNSIGNED_BYTE also returns this error.
When I instead read the pixels into an ArrayBufferView (which I want to avoid, since it seems much slower), it works with the format/type combination gl.RGBA and gl.UNSIGNED_BYTE into a Uint8Array, but not with gl.RGBA and gl.FLOAT into a Float32Array - this is as expected, since it's documented in the WebGL specification.
I am thankful for any suggestions on how to get my float pixel values from my framebuffer or on how to otherwise get this particle pipeline going.
Did you try using this extension?
var ext = gl.getExtension('EXT_color_buffer_float');
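A minimal sketch of where the extension fits, assuming your particle data is rendered into an RGBA32F color attachment (texSize stands in for the question's _texSize, and the draw calls are omitted):
// Without this extension, float textures are not color-renderable in WebGL 2,
// so reading them back with gl.FLOAT will fail
const ext = gl.getExtension('EXT_color_buffer_float');
if (!ext) throw new Error('EXT_color_buffer_float not supported');

// Float texture used as the framebuffer's color attachment
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA32F, texSize, texSize, 0, gl.RGBA, gl.FLOAT, null);

const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);

// gl.readPixels(..., gl.RGBA, gl.FLOAT, ...) on this framebuffer is now legal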
The gl you have is a WebGL 1 context, not WebGL 2. Try:
var gl = document.getElementById("canvas").getContext('webgl2');
In WebGL 2 the signature of readPixels is
void gl.readPixels(x, y, width, height, format, type, ArrayBufferView pixels, GLuint dstOffset);
so
let data = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight, gl.RGBA, gl.UNSIGNED_BYTE, data, 0);
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/readPixels
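Putting the pieces together, a sketch of the full PIXEL_PACK_BUFFER round trip the question is after might look like this (texSize stands in for the question's _texSize; a float-renderable framebuffer is assumed to be bound):
// Allocate a pixel-pack buffer large enough for texSize^2 RGBA float pixels
const pbo = gl.createBuffer();
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pbo);
gl.bufferData(gl.PIXEL_PACK_BUFFER, texSize * texSize * 4 * 4, gl.DYNAMIC_READ);

// While a buffer is bound to PIXEL_PACK_BUFFER, the last readPixels argument
// is a byte offset into that buffer rather than an ArrayBufferView
gl.readBuffer(gl.COLOR_ATTACHMENT0);
gl.readPixels(0, 0, texSize, texSize, gl.RGBA, gl.FLOAT, 0);

// Copy the pixels from the buffer into client memory
const out = new Float32Array(texSize * texSize * 4);
gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, out);
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);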

three.js - Objects farther away from the camera get jagged textures

I'm struggling with textures on objects that are a bit farther back in the scene. The textures become very jagged and create a distracting effect as the camera moves. I've tried changing the anisotropy, and I've tried changing the min and mag filters, but nothing seems to help at all.
Code I'm using to load textures (all textures are 1024px by 1024px):
var texture = new THREE.Texture();
var texloader = new THREE.ImageLoader(manager);
texloader.load('static/3d/' + name + '.jpg', function (image) {
    texture.image = image;
    texture.needsUpdate = true;
    texture.anisotropy = 1;
    texture.minFilter = THREE.LinearFilter;
    texture.magFilter = THREE.LinearMipmapLinearFilter;
});
You can see it in action here: http://www.90595.websys.sysedata.no/
gaitat is wrong, you do want the mipmaps.
The problem with your code is that they are not generated.
Using the console, I found that while "generateMipmaps" in your textures is set to "true", mipmaps are not generated, as seen in this screenshot: http://imgur.com/hAUEaur.
I looked at your textures, and I believe the mipmaps weren't generated because your textures are not a power of 2 in size (e.g. 128x128, 256x256, 512x512). Try making your textures' width and height powers of 2; the mipmaps should then be generated, and the textures won't look jagged anymore.
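If regenerating the source images isn't an option, a common workaround (my sketch, not part of the original answer) is to rescale each image to the next power of two on a canvas before assigning it to the texture:
// Hypothetical helper: rescale an image to power-of-two dimensions
function makePowerOfTwo(image) {
    var nextPow2 = function (n) { return Math.pow(2, Math.ceil(Math.log2(n))); };
    var canvas = document.createElement('canvas');
    canvas.width = nextPow2(image.width);
    canvas.height = nextPow2(image.height);
    canvas.getContext('2d').drawImage(image, 0, 0, canvas.width, canvas.height);
    return canvas;
}

texloader.load('static/3d/' + name + '.jpg', function (image) {
    texture.image = makePowerOfTwo(image); // a canvas is a valid texture source
    texture.needsUpdate = true;            // mipmaps can now be generated
});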
As objects move farther from the camera, WebGL samples from automatically generated lower-resolution copies of the texture called mipmaps. If you don't like them, disable them with:
texture.generateMipmaps = false;
Okay, so I thought I'd tried all the different mipmap filters, but apparently not. This is what ended up doing the trick:
texture.minFilter = THREE.NearestMipMapNearestFilter;
texture.magFilter = THREE.LinearMipMapLinearFilter;
Didn't need the anisotropy at all.

Dynamic text or texture on curved object

I'm a newcomer to three.js and am looking for the possible approaches to achieve the following effect:
For a cola-can-like object (minus condensation), I want to change independent bits of text on the surface of the can based on user interaction. The variants of text are fairly arbitrary - too many for pre-baked full-can textures. For instance I might want to:
change "Euro 2012" to arbitrary text
change the nutritional stats on the back of the can
show or hide one of the individual music notes
I'm sure it's possible; I'm just looking for the concepts I need to employ. Is it difficult to have multiple textures on the same object? Or to generate arbitrary text, position it on an object, and wrap it to the object's shape?
Any pointers helpful!
You can use an image created in a separate canvas as a Three.js texture. Instead of trying to mix and blend multiple textures in Three.js (possible, but tricky, and with limited control), I think the best solution is to create the dynamic texture in 2D, entirely outside Three.js, and then apply the full texture to the can.
You can create your canvas image manually or with a canvas image-manipulation library of your choice (some possibilities: https://docs.google.com/spreadsheet/ccc?key=0Aqj_mVmuz3Y8dHNhUVFDYlRaaXlyX0xYSTVnalV5ZlE#gid=0 ). Or you can keep your template as SVG and modify that (should be quite simple), render it to a canvas, and then use it as the texture.
Using canvas as a texture is very simple:
var canvas = document.createElement('canvas');
canvas.width = 512;
canvas.height = 512;
var context = canvas.getContext('2d');
// drawing something here....
context.font = "Bold 20px Helvetica";
context.lineWidth = 4;
context.strokeStyle = 'rgba(255,255,255,.8)';
context.fillStyle = "rgba(0,0,0,1)";
context.strokeText("Testing", 4, 22);
context.fillText("Testing", 4, 22);
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
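To wrap the result around a can-shaped mesh, the finished texture can then be mapped onto a cylinder. A rough sketch, with arbitrary dimensions assumed:
// Open-ended cylinder standing in for the can body
var canGeometry = new THREE.CylinderGeometry(1, 1, 2.5, 64, 1, true);
var canMaterial = new THREE.MeshBasicMaterial({ map: texture });
var can = new THREE.Mesh(canGeometry, canMaterial);
scene.add(can);

// After redrawing the canvas (e.g. new text), flag the texture for re-upload:
// texture.needsUpdate = true;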

Drawing UI elements directly to the WebGL area with Three.js

In Three.js, is it possible to draw directly to the WebGL area (for a heads-up display or UI elements, for example) the way you could with a regular HTML5 canvas element?
If so, how can you get the context and what drawing commands are available?
If not, is there another way to accomplish this, through other Three.js or WebGL-specific drawing commands that would cooperate with Three.js?
My backup plan is to use HTML divs as overlays, but I think there should be a better solution.
Thanks!
You can't draw directly to the WebGL canvas the way you do with a regular canvas. However, there are other methods, e.g.
Draw to a hidden 2D canvas as usual and transfer the result to WebGL by using it as a texture on a quad
Draw images using texture-mapped quads (e.g. the frames of your health box)
Draw paths (and shapes) by putting their vertices into a VBO and drawing it with the appropriate polygon type
Draw text using a bitmap font (basically textured quads) or real geometry (three.js has examples and helpers for this)
Using these usually means setting up an orthographic camera.
However, all of this is quite a bit of work, and drawing text with real geometry, for example, can be expensive. If you can make do with HTML divs and CSS styling, you should use them, as they're very quick to set up. Also, drawing over the WebGL canvas, perhaps using transparency, should be a strong hint to the browser to GPU-accelerate its div drawing if it doesn't already accelerate everything.
Also remember that you can achieve quite a lot with CSS3, e.g. rounded corners, alpha transparency, and even 3D perspective transformations, as demonstrated by Anton's link in the question's comments.
I had exactly the same issue. I was trying to create a HUD (head-up display) without the DOM, and I ended up with this solution:
I created a separate scene with an orthographic camera.
I created a canvas element and used 2D drawing primitives to render my graphics.
Then I created a plane fitting the whole screen and used the 2D canvas element as its texture.
I rendered that secondary scene on top of the original scene.
This is what the HUD code looks like:
// We will use 2D canvas element to render our HUD.
var hudCanvas = document.createElement('canvas');
// Again, set dimensions to fit the screen.
hudCanvas.width = width;
hudCanvas.height = height;
// Get 2D context and draw something supercool.
var hudBitmap = hudCanvas.getContext('2d');
hudBitmap.font = "Normal 40px Arial";
hudBitmap.textAlign = 'center';
hudBitmap.fillStyle = "rgba(245,245,245,0.75)";
hudBitmap.fillText('Initializing...', width / 2, height / 2);
// Create the camera and set the viewport to match the screen dimensions.
var cameraHUD = new THREE.OrthographicCamera(-width/2, width/2, height/2, -height/2, 0, 30 );
// Create also a custom scene for HUD.
sceneHUD = new THREE.Scene();
// Create texture from rendered graphics.
var hudTexture = new THREE.Texture(hudCanvas);
hudTexture.needsUpdate = true;
// Create HUD material.
var material = new THREE.MeshBasicMaterial( {map: hudTexture} );
material.transparent = true;
// Create plane to render the HUD. This plane fill the whole screen.
var planeGeometry = new THREE.PlaneGeometry( width, height );
var plane = new THREE.Mesh( planeGeometry, material );
sceneHUD.add( plane );
And that's what I added to my render loop:
// Render HUD on top of the scene.
renderer.render(sceneHUD, cameraHUD);
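One practical detail the snippet above assumes: by default the renderer clears the canvas on every render call, so the HUD pass would wipe out the main scene. The usual fix is to disable auto-clearing and clear only the depth buffer between the two passes:
renderer.autoClear = false;

function render() {
    renderer.clear();               // clear color and depth once per frame
    renderer.render(scene, camera); // main scene first
    renderer.clearDepth();          // keep the colors, reset depth for the HUD
    renderer.render(sceneHUD, cameraHUD);
}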
You can play with the full source code here:
http://codepen.io/jaamo/pen/MaOGZV
And read more about the implementation on my blog:
http://www.evermade.fi/pure-three-js-hud/

How to display YUV data without converting to RGB in OpenGL ES?

I have been studying OpenGL ES for iOS.
I wonder whether data in a YUV format can be displayed without first converting it to RGB.
In most cases, YUV data has to be converted to RGB for display, but that conversion step is very slow, and then the display isn't smooth.
So I would like to try displaying the YUV data without converting it to RGB.
Is this possible? If so, what do I need to do?
Please give me some advice.
I think it is not possible in OpenGL ES to display YUV data without converting it to RGB data.
You can do this very easily using OpenGL ES 2.0 shaders. I use this technique in my super-fast iOS camera app SnappyCam. The fragment shader performs the matrix multiplication that takes you from YCbCr ("YUV") to RGB. You can put each {Y, Cb, Cr} channel in a separate GL_LUMINANCE texture, or combine the {Cb, Cr} channels in a single GL_LUMINANCE_ALPHA texture if your chrominance data is already interleaved (Apple calls this a bi-planar format).
See my related answer to the question YUV to RGBA on Apple A4, should I use shaders or NEON? here on StackOverflow.
You could also do this using the fixed-function pipeline of ES 1.1, but I haven't tried it. I would look toward the texture blending functions, e.g. as given in this OpenGL Texture Combiners Wiki Page.
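For reference, a minimal GLSL ES fragment-shader sketch of the conversion described above, assuming the bi-planar layout (one GL_LUMINANCE texture for Y, one GL_LUMINANCE_ALPHA texture for interleaved CbCr) and full-range BT.601 coefficients; the exact matrix depends on your video's color range:
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D u_textureY;    // GL_LUMINANCE: Y in the r channel
uniform sampler2D u_textureCbCr; // GL_LUMINANCE_ALPHA: Cb in r, Cr in a

void main() {
    float y  = texture2D(u_textureY, v_texCoord).r;
    float cb = texture2D(u_textureCbCr, v_texCoord).r - 0.5;
    float cr = texture2D(u_textureCbCr, v_texCoord).a - 0.5;

    // Full-range BT.601 YCbCr -> RGB
    vec3 rgb = vec3(y + 1.402 * cr,
                    y - 0.344 * cb - 0.714 * cr,
                    y + 1.772 * cb);
    gl_FragColor = vec4(rgb, 1.0);
}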
If you are looking for a solution for iOS or an iPhone application, there is a solution for that.
This is a way to convert a CMSampleBufferRef to a UIImage when the video pixel type is set to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. Note that it reads only the first (luminance) plane, so the resulting image is grayscale:
- (UIImage *)imageFromSamplePlanerPixelBuffer:(CMSampleBufferRef)sampleBuffer {
    @autoreleasepool {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // Get the base address of the Y (luminance) plane
        void *baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        // Get the number of bytes per row for that plane
        size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
        // Get the pixel buffer width and height
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        // Create a device-dependent gray color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
        // Create a bitmap graphics context with the sample buffer data
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                     bytesPerRow, colorSpace, kCGImageAlphaNone);
        // Create a Quartz image from the pixel data in the bitmap graphics context
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        // Free up the context and color space
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        // Create an image object from the Quartz image
        UIImage *image = [UIImage imageWithCGImage:quartzImage];
        // Release the Quartz image
        CGImageRelease(quartzImage);
        return image;
    }
}
If you are looking at other mobile devices, I can provide other solutions too.
