Has anybody experienced a problem rendering 3D images in Processing where smaller canvases (e.g. 2000x2000) render fine, but larger canvases (4000x4000) don't render at all?
I'm using WebGL to do something very similar to image processing. I draw a single quad in orthographic projection and do some processing in the fragment shaders. There are two steps: in the first, the original texture from the quad is processed in the fragment shader and written to a framebuffer; a second step processes that data into the final canvas.
Users can zoom and translate the image. For this to feel smooth I need to hit 60 fps, otherwise it gets noticeably sluggish. That is no issue on desktop GPUs, but on mobile devices, with much weaker hardware and higher resolutions, it gets problematic.
The translation case is the most noticeable and problematic: the user drags the mouse pointer or their finger across the screen, and the image lags behind. But translation is also a case where I could, in theory, reuse a lot of data from the previous frame.
Ideally I'd copy the canvas from the last frame, translate it by (x, y) pixels, and then run the full fragment-shader processing only on the parts of the canvas that aren't covered by the translated previous frame.
Is there a way to do this in WebGL?
If you want access to the previous frame, you need to render to a texture attached to a framebuffer, then draw that texture into the canvas, translated.
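In raw WebGL, that setup looks roughly like the sketch below. This is a minimal outline, assuming a plain WebGL 1 context; the helper name and the per-frame steps are mine, not a fixed API. Note that you can't sample a texture while rendering into it, so you need two render targets and ping-pong between them:

```ts
const canvas = document.querySelector('canvas')!;
const gl = canvas.getContext('webgl')!;

// One color texture + framebuffer pair that can hold a full frame.
function createFrameTarget(gl: WebGLRenderingContext, width: number, height: number) {
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  const fbo = gl.createFramebuffer()!;
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { tex, fbo };
}

let prev = createFrameTarget(gl, canvas.width, canvas.height);
let curr = createFrameTarget(gl, canvas.width, canvas.height);

// Per frame, roughly:
// 1. Bind curr.fbo and draw prev.tex shifted by (dx, dy).
// 2. Enable gl.SCISSOR_TEST and run the expensive processing pass only
//    over the uncovered strips along the edges.
// 3. Bind the default framebuffer (null), draw curr.tex to the canvas,
//    then swap: [prev, curr] = [curr, prev];
```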
I need to blur the render, but not all of it, only certain fragments. Frosted-"glass" shapes will flow over it (animated transparent SVG shapes over a WebGL animation). The problem is the local frosted effect. Does some effect composer make sense, or context.readPixels + FastBlur.js, or maybe CSS + masks? Thank you for the help.
I did it: WebGL shader blur (Three.js render passes) + a mask texture (the mask image is an additional, invisible canvas element where the shapes are drawn). The SVG is an independent element, but it supplies the kinds of shapes and their positions for the mask texture, and of course it displays the shapes. A bit crazy, but it works and is very fast.
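For anyone trying the same trick, here is a rough sketch of the idea, assuming Three.js with its example post-processing classes. The mask canvas, the `declare`d scene objects, and the cheap 9-tap box blur are illustrative stand-ins, not the exact passes used above:

```ts
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';

declare const renderer: THREE.WebGLRenderer;   // your existing renderer
declare const scene: THREE.Scene;              // your existing scene
declare const camera: THREE.PerspectiveCamera; // your existing camera

// Invisible canvas used as the mask: draw the SVG shapes into it,
// white on black, mirroring their animated positions.
const maskCanvas = document.createElement('canvas');
maskCanvas.width = innerWidth;
maskCanvas.height = innerHeight;
const maskTexture = new THREE.CanvasTexture(maskCanvas);

const MaskedBlurShader = {
  uniforms: {
    tDiffuse: { value: null as THREE.Texture | null }, // filled in by ShaderPass
    tMask: { value: maskTexture },
    resolution: { value: new THREE.Vector2(innerWidth, innerHeight) },
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    uniform sampler2D tMask;
    uniform vec2 resolution;
    varying vec2 vUv;
    void main() {
      vec4 sharp = texture2D(tDiffuse, vUv);
      vec4 blurred = vec4(0.0);
      for (int x = -1; x <= 1; x++)      // cheap 9-tap box blur
        for (int y = -1; y <= 1; y++)
          blurred += texture2D(tDiffuse,
              vUv + vec2(float(x), float(y)) * 2.0 / resolution);
      blurred /= 9.0;
      float m = texture2D(tMask, vUv).r; // 1.0 inside a frosted shape
      gl_FragColor = mix(sharp, blurred, m);
    }`,
};

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
composer.addPass(new ShaderPass(MaskedBlurShader));

function animate() {
  requestAnimationFrame(animate);
  maskTexture.needsUpdate = true; // the shapes move, so re-upload each frame
  composer.render();
}
animate();
```

The nice property of the mask-texture approach is that the blur stays entirely on the GPU; no readPixels round trip is needed.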
If I render a big 1024x1024 texture but most of the texture is transparent, with only about 40% of it holding data (not transparent), is that slower than rendering a texture with a smaller transparent area?
I ask because, when rendering an animation, it is easier to bake the sprite's pivot into the image itself; then, at render time, I only need to draw each sprite centered on my object's position.
The cropped version is more performant, because the image is smaller and the GPU spends less fill rate on fully transparent pixels. But I doubt it will make a noticeable difference. So the way you are doing it right now is fine; that's how I do it too.
I have a question about Three.js with Canvas rendering:
I use Canvas rendering to be fully compatible; speed is not important to me. But I have two viewports, each with the same scene, and a textured object renders in only one view, depending on the rendering order :( I have been blocked on this for a week, so, is this a normal "feature"?
You need to set up two separate renderers, attach them to separate HTML elements, and use CSS z-index to layer them on top of each other. As Mr. Doob commented, it won't save you any computation or memory.
It is cool because you can use the same scene (meshes, materials, lights) with different cameras.
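A sketch of that layout, using WebGLRenderer for brevity (the CanvasRenderer version has the same shape, just a different renderer class); the 'viewA'/'viewB' container ids are placeholders:

```ts
import * as THREE from 'three';

// One shared scene; each view gets its own renderer and camera.
const scene = new THREE.Scene();
scene.add(new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1),
                         new THREE.MeshNormalMaterial()));

function makeView(container: HTMLElement, zIndex: string) {
  const camera = new THREE.PerspectiveCamera(
      50, container.clientWidth / container.clientHeight, 0.1, 100);
  camera.position.z = 5;
  const renderer = new THREE.WebGLRenderer();
  renderer.setSize(container.clientWidth, container.clientHeight);
  renderer.domElement.style.position = 'absolute'; // layer via CSS
  renderer.domElement.style.zIndex = zIndex;
  container.appendChild(renderer.domElement);
  return { renderer, camera };
}

const a = makeView(document.getElementById('viewA')!, '1');
const b = makeView(document.getElementById('viewB')!, '2');

function animate() {
  requestAnimationFrame(animate);
  a.renderer.render(scene, a.camera); // same scene,
  b.renderer.render(scene, b.camera); // different camera per renderer
}
animate();
```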
I am using LibGDX for a small app project, and I need to somehow take a series of sprites and place them (or rather, their pixels) into a Pixmap. The basic idea is to take random sprites that are generated through various means while the app is running and, only at specific times, merge some of them onto a single background sprite.
I believe most of this can be done easily, but the step of getting the sprite images into the Pixmap isn't so obvious to me. The sprites also have various transparent and semi-transparent pixels, so simply grabbing the color at each pixel while everything is on the same screen isn't really applicable either, since that would obviously pick up the background colors with it.
If there is a suitable alternative that would accomplish what I am looking for, I would also love to hear it. Any help is highly appreciated.
I think you want to render your sprites to an off-screen buffer (called an FBO, or FrameBuffer in libgdx), blending them as they're added, and then render that off-screen buffer to the screen as a single draw call? If so, this question should help: libgdx SpriteBatch render to texture
This requires OpenGL ES 2.0, which will eliminate support for some older devices.
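For reference, the underlying GL idea that libgdx's FrameBuffer wraps looks roughly like the WebGL sketch below (the libgdx-specific API is in the question linked above). `drawSprite`, the sprite list, and the 512x512 size are placeholders:

```ts
declare const gl: WebGLRenderingContext; // existing GL context
declare const backgroundTex: WebGLTexture;
declare const sprites: { tex: WebGLTexture; x: number; y: number }[];
// Placeholder for your textured-quad draw call.
declare function drawSprite(tex: WebGLTexture, x: number, y: number): void;

// Offscreen color texture + framebuffer (what libgdx calls a FrameBuffer).
const target = gl.createTexture()!;
gl.bindTexture(gl.TEXTURE_2D, target);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

const fbo = gl.createFramebuffer()!;
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, target, 0);
gl.viewport(0, 0, 512, 512);

// Composite: background first, then sprites with normal alpha blending,
// so transparent and semi-transparent pixels merge correctly instead of
// copying raw per-pixel colors off the screen.
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
drawSprite(backgroundTex, 0, 0);
for (const s of sprites) drawSprite(s.tex, s.x, s.y);

// Back to the screen; `target` now holds the single merged image.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
drawSprite(target, 0, 0);
```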